Designing a Network (Part 2)

If you’ve considered the problem I wrote about back in December, no doubt you have some questions about requirements. Actually, if you’re remotely interested in pursuing the CCIE Architecture path, this could be one of the design problems you would need to solve. This case study has its origins in a company I worked for. There were numerous issues, not the least of which were static routes EVERYWHERE and a single OSPF area 0 for all regions. I came up with the design and headed out to the Cisco Proof of Concept Lab (CPOC) in Research Triangle Park, NC to test the topology on physical hardware. While I was mocking up the solution at CPOC, I was fortunate enough to meet one of the authors of RFC 7868, Steven Moore. We talked about what I was trying to do, and he said, “You’re actually moving FROM OSPF to EIGRP? Why?”

That made me pause.

Was I doing the right thing? Was there another way to accomplish the goal? What if I was eliminating important technologies because I was using EIGRP? After some significant soul searching, I remembered my criteria and knew EIGRP was the best way to accomplish the goal: meeting both the IT staff’s needs and the needs of the business. Sometimes, rethinking your options is a good thing. The art is to avoid analysis paralysis. Eventually you have to pull the trigger (read Plan Before You Execute).

The design criteria for this case study are straightforward. Here is the excerpt from my project plan/business analysis submitted to management.

  • Improve network convergence time.
  • Improve Mean-Time-To-Repair (MTTR).
  • Limit use of static routes.
  • Additional funding for hardware is not available.
  • Reply traffic will follow the same path as the source request. Asymmetric routing will be avoided.
  • LAN traffic will follow local high-speed links before failing over to low-speed MPLS.
  • MPLS entry points will prefer local destination addresses (e.g. Austin, TX MPLS router will be primary for Austin subnets, London, UK MPLS router will be primary for London subnets, etc.)
  • Redundant MPLS entry points will be preferred by order of in-country networks followed by closest out of country, and furthest out of country (e.g. Los Angeles is secondary, Dublin is tertiary, and London is last resort for Austin subnets).

Your mission: redesign this network topology.
Without the ability to purchase additional hardware, the design will utilize EIGRP so that circuit characteristics and route tagging may be used for both LAN and WAN connectivity. The current network was a flat OSPF backbone area with static routes for transatlantic connectivity. I’d suggest checking out the diagram and the configuration scripts as you read through this discussion.

Core Switching

This network design leverages route-maps and route tagging. The ACME Co. is a medium-sized, point-to-point campus environment with a few dozen MPLS-connected remote offices. I’ve assigned route tag numbers of 10, 20, 30, and 40 for the Austin, Los Angeles, Dublin, and London campuses, respectively. These values are added to a route update, so a route originating from the Austin data center will be tagged as “10”, and any route coming out of the Dublin data center will show a route tag of “30”. These tags may be observed when you check a given route. For example, viewing a Dublin route from ATX-CS2 shows the tag value:

ATX-CS2# show ip route 192.168.80.0
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%' in via output denotes VRF

192.168.80.0/24, ubest/mbest: 1/0
*via 192.168.1.2, Po2, [90/3328], 2d01h, eigrp-100, internal, tag 30
ATX-CS2#

For Austin core switch one (ATX-CS1), you’ll see two links with an applied route-map. One link heads east to Dublin and one heads south to Los Angeles. It is important to remember that the switch processes route-map entries in sequential order and stops at the first match; the rule of thumb is to go from specific to general when creating the policy.

The route-map matches the routing protocol or an inbound tag and sets a tag value. The redistribution looks at the route-map, checks the tag, and writes the same tag value on the outbound advertisement. If you fail to use the outbound route-map, the switch overwrites the inbound tag with its own value and advertises the route with its own tag. If this is a tough concept, or you want to see it in action, the Cisco Virtual Internet Routing Lab (VIRL) topology is included with the configurations.

To aid troubleshooting efforts, I tend to be somewhat verbose with interface descriptions, route-map names, and prefix-list names. I use names like EIGRP-METRIC-IN to tell me that a route-map is being applied inbound for a routing protocol and is modifying the default values. If I have a prefix-list that the route-map matches, I use the same title so it’s easy to search for. I also use capital letters for configuration variables, which helps bits of information stand out in the middle of a long configuration. A simple show running-config | include EIGRP-METRIC gives me 100% of the proper values.

ATX-CS2#show run | include EIGRP-METRIC
router eigrp 100
  router-id 10.1.1.2
  passive-interface default
  default-metric 10000000 100 250 25 9216
!
route-map EIGRP-METRIC-IN permit 10
  match tag 40
  set metric 10000000 500 250 25 9216
route-map EIGRP-METRIC-IN permit 15
  match source-protocol eigrp
route-map EIGRP-METRIC-OUT permit 15
  match tag 20
  set tag 20
route-map EIGRP-METRIC-OUT permit 20
  match tag 30
  set tag 30
route-map EIGRP-METRIC-OUT permit 25
  match tag 40
  set tag 40
route-map EIGRP-METRIC-OUT permit 30
  match source-protocol direct
  set tag 10
route-map EIGRP-METRIC-OUT permit 35
  match ip address prefix-list STATIC
  set tag 10
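
Sequence 35 of EIGRP-METRIC-OUT references a prefix-list named STATIC that isn’t shown in the output above. As a hypothetical sketch only (the prefixes below are assumptions, not the actual statics in this design), it would follow the same naming convention:

! Hypothetical prefixes for illustration
ip prefix-list STATIC seq 5 permit 192.168.48.0/24
ip prefix-list STATIC seq 10 permit 192.168.49.0/24

Any static route you still have to carry is matched by the prefix-list, stamped with the local campus tag, and advertised to the rest of the network like everything else.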

These route-maps are applied, where appropriate, in the inbound or outbound direction on the core/distribution layer switch interfaces. This keeps all the tagging correct and applies the proper metrics for traffic engineering. For the transatlantic circuits, the EIGRP metrics are modified to prevent asymmetric routing. In some instances, an asymmetric route isn’t a big deal, but performance problems will appear with time-sensitive traffic like voice and video. Also, if stateful firewalls are in place along these paths, data transmission will break: the TCP SYN is added to one firewall’s state table, but the return traffic may enter a firewall on a different path. Since the original SYN is absent from that firewall’s state table, the packet is dropped.

In this snippet from ATX-CS1, the configuration and route-maps applied to the transatlantic DCI facing DUB-CS1 accomplish a few critical items:

  • Sets the bandwidth to 1Gbps for monitoring and routing protocol interface calculation.
  • Advertises the interface into EIGRP process 100.
  • Matches the route-map EIGRP-METRIC-OUT for outbound advertisements.
  • Matches the route-map EIGRP-METRIC-IN for inbound advertisements.
  • Allows EIGRP adjacencies to be formed on the interface. Best practice is to set passive-interface default at the process level and then disable it only on the physical or SVI interfaces where an adjacency should form, so you control exactly where neighbors are built.
interface Ethernet1/5
  description DUB-CS1
  no switchport
  bandwidth 1000000
  ip address 192.168.1.17/30
  ip router eigrp 100
  ip distribute-list eigrp 100 route-map EIGRP-METRIC-OUT out
  ip distribute-list eigrp 100 route-map EIGRP-METRIC-IN in
  no ip passive-interface eigrp 100
  no shutdown

I’m a big fan of standardized VLAN IDs, interface numbers and their functions, and hostnames. Scripting is so much easier when there’s a standard. With this design, any new office added to the network gets a basic routing script (a sketch follows below). The tagging is handled at the core/distribution layer rather than as one-off functions. If a new point-to-point link connecting a new office in Dallas to Austin is installed, exchanging EIGRP routes is the only requirement for the remote end. The tagging and route metrics happen automatically within the core based on the route-map on the interface.
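
As a rough illustration, here is what that remote-end routing script might look like for the hypothetical Dallas office. The hostname, addressing, and LAN subnets are assumptions for the sketch, not part of the actual design:

! Hypothetical remote-office script; addresses and subnets are placeholders
hostname DAL-RTR1
!
interface GigabitEthernet0/1
  description P2P-TO-ATX
  bandwidth 1000000
  ip address 192.168.1.22 255.255.255.252
  no shutdown
!
router eigrp 100
  network 192.168.1.20 0.0.0.3
  network 10.45.0.0 0.0.255.255
  passive-interface default
  no passive-interface GigabitEthernet0/1
  eigrp router-id 10.45.1.1

Notice there is no tagging or metric manipulation on the remote end; that all happens on the Austin core switch interface that terminates the link.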

With the sample metro-E connections taken care of, let’s look at the MPLS side of the equation. In the diagram you’ll notice that the MPLS routers are connected to core switch “2”, while the DCIs are on core switch “1”. With Nexus, there is a design consideration for VPC that you cannot miss: passing routing protocols across the VPC peer link is not supported. Virtual Port Channels are a layer 2 virtualization technology, so layer 3 adjacencies and multicast are not supported across the link (remember the OSI model?). That was the case prior to NX-OS version 7.0.3.I5.2; in later code, routing protocol peering across the VPC peer link became supported. I’ve not tested it, but technically the recent code supports unicast route updates. If you can force a routing protocol to use unicast, you’re in good shape for adding your redundant DCI circuits into VPC. It is strongly encouraged that you test full functionality prior to using this feature in production environments.

That’s all side note/hint; just be aware of the issue. If you see OSPF neighbors stuck in EXSTART or EIGRP adjacencies flapping during neighbor negotiation, you’re probably sending route updates across the VPC peer link.

To achieve routing protocol reachability across Nexus switches (if your routers exchange routes across a VPC), you have two architectural choices. First, you can configure a routed interface between the switch and the directly connected router interface (e.g., a /30 for the uplink), then exchange dynamic routes across that uplink. This means adjacencies form between the router and the directly connected Nexus switch, bypassing the VPC peer link entirely; the configuration mirrors the Ethernet1/5 example shown earlier.

Your other option is to create a new trunk between your two Nexus cores, assuming you have a VLAN dedicated to routing protocols. Once you create the new dot1q port-channel, prune the routing-protocol VLAN from the VPC peer link and allow it on the new trunk. If you have a router connected to each switch and addressed in the same SVI subnet, these devices will form an adjacency. A rough sketch of this option follows below.
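
Here is a minimal NX-OS sketch of that second option. The VLAN ID, port-channel numbers, and addressing are assumptions for illustration, and feature interface-vlan is assumed to be enabled already:

! Hypothetical VLAN, port-channels, and addressing
vlan 999
  name ROUTING-PEERING
!
interface port-channel30
  description NON-VPC-TRUNK-TO-PEER-CORE
  switchport mode trunk
  switchport trunk allowed vlan 999
!
interface port-channel10
  description VPC-PEER-LINK
  switchport trunk allowed vlan remove 999
!
interface Vlan999
  no shutdown
  ip address 192.168.3.1/29
  ip router eigrp 100
  no ip passive-interface eigrp 100

With the routing VLAN pruned from the peer link, hellos and updates ride the dedicated trunk instead of the VPC peer link.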

Each of these options has some drawbacks, so I’d suggest investigating how they should be introduced into your environment.

Distribution Routing

Now we get to MPLS route distribution. The tricky part is getting remote offices to enter the proper ingress points, and data center traffic to egress properly. Remember how painful it is to translate IGP metrics into BGP? The old way to advertise engineered routes from an IGP into BGP was to use a route-map and AS-path prepending. That approach has fallen out of favor: it fights the BGP best-path decision tree, prepending isn’t supported in every scenario, and it is administratively onerous. We’ve not even talked about automation within ACI-enabled devices.
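
For context, the legacy approach looked something like the hypothetical snippet below. In this lab the London MPLS router advertises from AS 65301, so you would prepend the Austin prefixes on the non-preferred ingress routers to steer remote offices toward the Austin entry point; the route-map name, prefix-list, and PE neighbor address are placeholders:

ip prefix-list AUSTIN-SUBNETS seq 5 permit 192.168.32.0/24
!
route-map LEGACY-PREPEND-OUT permit 10
  match ip address prefix-list AUSTIN-SUBNETS
  set as-path prepend 65301 65301
route-map LEGACY-PREPEND-OUT permit 20
  ! pass all other routes unchanged
!
router bgp 65301
  ! PE neighbor address is a placeholder for illustration
  neighbor 192.168.250.5 route-map LEGACY-PREPEND-OUT out

Every time a subnet moved or a preference changed, someone had to touch route-maps on multiple edge routers.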

Thankfully, Cisco included some IOS code to help. The Accumulated IGP (AIGP) metric is used to ease the metric redistribution pain: it carries the IGP metric into BGP so path selection reflects the interior cost. The command is applied through a route-map on the outbound redistribution of the IGP into BGP. If you review the diagram, the MPLS routers have to redistribute EIGRP into BGP (and vice versa) for full network reachability, which makes AIGP a useful tool.

route-map EIGRP-BGP-METRIC permit 10
  match source-protocol eigrp 100
  set aigp-metric igp-metric
!
route-map BGP-METRIC permit 10
  set metric 520 20 255 50 1500
  set tag 15
!
router eigrp 100
  default-metric 300000 1000 200 75 1500
  network 10.2.2.1 0.0.0.0
  network 192.168.1.12 0.0.0.3
  redistribute bgp 65001 route-map BGP-METRIC
  eigrp router-id 10.2.2.1
!
router bgp 65001
  bgp router-id 10.2.2.1
  bgp log-neighbor-changes
  bgp deterministic-med
  redistribute eigrp 100 route-map EIGRP-BGP-METRIC
  neighbor 192.168.250.5 remote-as 65400
  neighbor 192.168.250.5 soft-reconfiguration inbound
  distance bgp 190 190 190

This configuration tells MPLS-connected remote sites to use the ingress point closest to the destination subnet. If the source address is in REMOTE-A, it is not desirable for data to travel through London to reach a server in the Austin data center. Also, if the Austin circuit goes down for “maintenance”, the best secondary option is to enter through Los Angeles to connect to the same host. AIGP easily enables this functionality.

Failure Is Not an Option, but It Is Recoverable

Now that all the devices are configured, let’s examine some failure scenarios. Keep in mind the objectives: use the closest entry point and follow the ingress path for egress traffic. Convergence times may be improved with technologies like Bidirectional Forwarding Detection (BFD). In production and testing I’ve found that the data centers and campuses recover very quickly. However, redistributed routes to MPLS offices may take upwards of 90 seconds to converge. Again, BFD helps in these situations, as the sketch below illustrates.
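
As a hypothetical IOS sketch only: the timers are assumptions, while the interface and neighbor address follow the lab examples above. BFD is enabled on the MPLS-facing interface and tied to both EIGRP and BGP:

interface GigabitEthernet0/2
  ! timer values are illustrative assumptions
  bfd interval 300 min_rx 300 multiplier 3
!
router eigrp 100
  bfd all-interfaces
!
router bgp 65001
  neighbor 192.168.250.5 fall-over bfd

With BFD watching the forwarding path, the protocols are notified of a failure in milliseconds instead of waiting out their own hold timers.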

Let’s begin with the network behavior during an MPLS failure. The output below shows the routes received for the Austin data center at REMOTE-A. This example illustrates how tagging and AIGP work in conjunction to provide the best ingress/egress decisions.

Routes in REMOTE-A for 192.168.32.0/24

REMOTE-A#sho ip route 192.168.32.0
Routing entry for 192.168.32.0/24
Known via "bgp 65400", distance 20, metric 3072
Tag 65001, type external
Last update from 192.168.250.1 02:18:30 ago
Routing Descriptor Blocks:
* 192.168.250.1, from 192.168.250.1, 02:18:30 ago
Route metric is 3072, traffic share count is 1
AS Hops 1
Route tag 65001
MPLS label: none
REMOTE-A#
REMOTE-A#sho ip bgp 192.168.32.0/24 subnets

BGP table version is 194, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
t secondary path,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

Network          Next Hop            Metric LocPrf Weight Path
*    192.168.32.0/24  192.168.250.2         3328             0 65101 ?
*>                    192.168.250.1         3072             0 65001 ?
*                     192.168.250.4         3840             0 65301 ?
*                     192.168.250.3         3584             0 65201 ?

REMOTE-A#

Notice the route preference based on the metric value: in order, Austin, Los Angeles, Dublin, and London. As we shut down the MPLS links in each geographic location, you’ll see the best path change accordingly.

1. Shut down the MPLS link in Austin

ATX-DR1# conf t
Enter configuration commands, one per line. End with CNTL/Z.
ATX-DR1(config)# inter gi0/2
ATX-DR1(config-if)# shut
ATX-DR1(config-if)# end
ATX-DR1#

2. Check REMOTE-A for the route to 192.168.32.0/24 and the BGP preference

REMOTE-A#sho ip bgp 192.168.32.0/24 subnets
BGP table version is 219, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
t secondary path,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

Network          Next Hop            Metric LocPrf Weight Path
*    192.168.32.0/24  192.168.250.3         3584             0 65201 ?
*                     192.168.250.4         3840             0 65301 ?
*>                    192.168.250.2         3328             0 65101 ?
REMOTE-A#

Notice how the ingress route through Austin has been removed. The preferred order is now Los Angeles, Dublin, then London.

3. Shut down the Los Angeles MPLS circuit

LAX-DR1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
LAX-DR1(config)#inter gi0/2
LAX-DR1(config-if)#shut
LAX-DR1(config-if)#
LAX-DR1(config-if)#end
LAX-DR1#

4. Check REMOTE-A for the route to 192.168.32.0/24 and the BGP preference

REMOTE-A#sho ip route 192.168.32.0
Routing entry for 192.168.32.0/24
Known via "bgp 65400", distance 20, metric 3584
Tag 65201, type external
Last update from 192.168.250.3 00:00:21 ago
Routing Descriptor Blocks:
* 192.168.250.3, from 192.168.250.3, 00:00:21 ago
Route metric is 3584, traffic share count is 1
AS Hops 1
Route tag 65201
MPLS label: none
REMOTE-A#sho ip bgp 192.168.32.0/24 subnets
BGP table version is 232, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
t secondary path,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

Network          Next Hop            Metric LocPrf Weight Path
*>   192.168.32.0/24  192.168.250.3         3584             0 65201 ?
*                     192.168.250.4         3840             0 65301 ?
REMOTE-A#

At this point we see the U.S. ingress routes get flushed from REMOTE-A. The preferred path to the Austin core is now Dublin, then London. MPLS-originated traffic is then backhauled across the northern and southern DCI links, respectively.

5. Shut down the Dublin MPLS circuit

DUB-DR1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
DUB-DR1(config)#inter gi0/2
DUB-DR1(config-if)#shut
DUB-DR1(config-if)#end
DUB-DR1#

6. Check REMOTE-A for the route to 192.168.32.0/24 and the BGP preference

REMOTE-A#sho ip route 192.168.32.0
Routing entry for 192.168.32.0/24
Known via "bgp 65400", distance 20, metric 3840
Tag 65301, type external
Last update from 192.168.250.4 00:00:51 ago
Routing Descriptor Blocks:
* 192.168.250.4, from 192.168.250.4, 00:00:51 ago
Route metric is 3840, traffic share count is 1
AS Hops 1
Route tag 65301
MPLS label: none
REMOTE-A#
REMOTE-A#sho ip bgp 192.168.32.0/24 subnets
BGP table version is 249, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
t secondary path,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

Network          Next Hop            Metric LocPrf Weight Path
*>   192.168.32.0/24  192.168.250.4         3840             0 65301 ?
REMOTE-A#

The likelihood of three of four geographically diverse MPLS circuits dropping simultaneously is pretty slim. Losing the regional north-south links is a different story altogether. In the previous five years of my career I’ve had seagoing vessels drop anchors on circuits in Singapore and London, causing multi-day outages on those transport paths. How’s that for luck?!

Two years ago, the company I worked for actually had both north-south U.S. links drop, isolating the Austin and Los Angeles data centers from each other. Early one morning an Austin city worker came out of a bar, jumped into his garbage truck, put the “arms” upright like a football referee signaling a touchdown, and left for home. As he headed down Pleasant Valley Ave. he clipped the telephone pole which just happened to hold our two, “diversely” routed data center connections. Everything went dark!

Wouldn’t it be great if you could use your MPLS connections to reach your data centers with no involvement on your part beyond opening a ticket with the carrier? Here’s how that scenario shakes out: the tags, EIGRP, and route-maps take over. Be sure to look for the “tag” option whenever you issue the show ip route x.x.x.x command in your lab or production environment.

7. Verify the traffic path between the ATX-CS and LAX-CS switch pairs

ATX-CS1# traceroute 192.168.136.2 source 192.168.32.3
traceroute to 192.168.136.2 (192.168.136.2) from 192.168.32.3 (192.168.32.3), 30 hops max, 40 byte packets
1  192.168.1.10 (192.168.1.10)  6.083 ms  2.719 ms  2.729 ms
2  192.168.136.2 (192.168.136.2)  28.921 ms  9.968 ms  8.599 ms
ATX-CS1#
LAX-CS1# traceroute 192.168.32.2 source 192.168.136.3
traceroute to 192.168.32.2 (192.168.32.2) from 192.168.136.3 (192.168.136.3), 30 hops max, 40 byte packets
1  192.168.1.9 (192.168.1.9)  23.576 ms  3.008 ms  2.601 ms
2  192.168.32.2 (192.168.32.2)  46.741 ms  13.897 ms  16.825 ms
LAX-CS1#

Note: This traces the path from an interface on the CS1 switches to an interface on the CS2 switches.

ATX-CS2# traceroute 192.168.136.3 source 192.168.32.2
traceroute to 192.168.136.3 (192.168.136.3) from 192.168.32.2 (192.168.32.2), 30 hops max, 40 byte packets
1  192.168.1.6 (192.168.1.6)  8.766 ms  2.384 ms  6.538 ms
2  192.168.136.3 (192.168.136.3)  11.657 ms  17.43 ms  8.782 ms
ATX-CS2#
LAX-CS2# traceroute 192.168.32.3 source 192.168.136.2
traceroute to 192.168.32.3 (192.168.32.3) from 192.168.136.2 (192.168.136.2), 30 hops max, 40 byte packets
1  192.168.1.5 (192.168.1.5)  4.601 ms  3.405 ms  2.641 ms
2  192.168.32.3 (192.168.32.3)  10.03 ms  12.762 ms  9.426 ms
LAX-CS2#

Note: This traces the path from an interface on the CS2 switches to an interface on the CS1 switches.

8. Disable the Metro-E circuit on LAX-CS2

LAX-CS2# conf t
Enter configuration commands, one per line. End with CNTL/Z.
LAX-CS2(config)# inter e1/1
LAX-CS2(config-if)# shut
LAX-CS2(config-if)# end
LAX-CS2#

9. Verify the path change between the CS2 switches. The traces will now pass through Port-Channel2 on both ends.

ATX-CS2# traceroute 192.168.136.2 source 192.168.32.2
traceroute to 192.168.136.2 (192.168.136.2) from 192.168.32.2 (192.168.32.2), 30 hops max, 40 byte packets
1  192.168.1.2 (192.168.1.2)  9.2 ms  6.539 ms  16.229 ms
2  192.168.1.10 (192.168.1.10)  12.028 ms  15.48 ms  9.528 ms
3  192.168.136.2 (192.168.136.2)  25.748 ms  17.715 ms  24.574 ms
ATX-CS2#
LAX-CS2# traceroute 192.168.32.2 source 192.168.136.2
traceroute to 192.168.32.2 (192.168.32.2) from 192.168.136.2 (192.168.136.2), 30 hops max, 40 byte packets
1  192.168.2.2 (192.168.2.2)  5.44 ms  6.105 ms  4.12 ms
2  192.168.1.9 (192.168.1.9)  7.863 ms  10.453 ms  8.645 ms
3  192.168.32.2 (192.168.32.2)  18.347 ms  18.916 ms  19.525 ms
LAX-CS2#

The final test in this architecture will fail one of the transatlantic DCI circuits. EIGRP will converge quickly, and traffic should flow to the remaining active link. The test below shows reachability between the Austin and London CS2 switches.

10. Verify the path from ATX-CS2 to LON-CS2

ATX-CS2# traceroute 192.168.224.3 source 192.168.32.2
traceroute to 192.168.224.3 (192.168.224.3) from 192.168.32.2 (192.168.32.2), 30 hops max, 40 byte packets
1  192.168.1.2 (192.168.1.2)  5.848 ms  5.386 ms  4.544 ms
2  192.168.1.10 (192.168.1.10)  9.046 ms  10.062 ms  8.499 ms
3  192.168.2.18 (192.168.2.18)  12.09 ms  13.447 ms  8.867 ms
4  192.168.224.3 (192.168.224.3)  14.263 ms  13.388 ms  16.351 ms
ATX-CS2#
LON-CS2# traceroute 192.168.32.2 source 192.168.224.3
traceroute to 192.168.32.2 (192.168.32.2) from 192.168.224.3 (192.168.224.3), 30 hops max, 40 byte packets
1  172.16.1.9 (172.16.1.9)  4.655 ms  2.517 ms  10.994 ms
2  172.16.1.1 (172.16.1.1)  7.823 ms  8.435 ms  10.224 ms
3  192.168.1.17 (192.168.1.17)  11.662 ms  12.986 ms  11.887 ms
4  192.168.32.2 (192.168.32.2)  14.757 ms  14.276 ms  14.342 ms
LON-CS2#

Note: The reply traffic follows the same path as the initiating traffic, and there are no loops to contend with.

11. Shut down the ATX-CS1 interface facing DUB-CS1

ATX-CS1# conf t
Enter configuration commands, one per line. End with CNTL/Z.
ATX-CS1(config)# inter e1/5
ATX-CS1(config-if)# shut
ATX-CS1(config-if)# end
ATX-CS1#

12. Verify the path between ATX-CS2 and LON-CS2

ATX-CS2# traceroute 192.168.224.3 source 192.168.32.2
traceroute to 192.168.224.3 (192.168.224.3) from 192.168.32.2 (192.168.32.2), 30 hops max, 40 byte packets
1  192.168.1.2 (192.168.1.2)  9.175 ms  4.734 ms  6.873 ms
2  192.168.1.10 (192.168.1.10)  8.705 ms  8.259 ms  10.426 ms
3  192.168.2.18 (192.168.2.18)  10.882 ms  14.101 ms  18.057 ms
4  192.168.224.3 (192.168.224.3)  18.043 ms  24.186 ms  18.808 ms
ATX-CS2#
LON-CS2# traceroute 192.168.32.2 source 192.168.224.3
traceroute to 192.168.32.2 (192.168.32.2) from 192.168.224.3 (192.168.224.3), 30 hops max, 40 byte packets
1  172.16.2.1 (172.16.2.1)  11.011 ms  9.514 ms  5.567 ms
2  192.168.2.17 (192.168.2.17)  9.145 ms  6.54 ms  13.105 ms
3  192.168.1.9 (192.168.1.9)  9.625 ms  8.62 ms  8.862 ms
4  192.168.32.2 (192.168.32.2)  13.796 ms  13.721 ms  26.402 ms
LON-CS2#

Conclusion

This design took many hours of prep work, testing, documenting the changes, and then documenting the final state of the network. Early design iterations tried to leverage OSPF and BGP rather than convert from OSPF to EIGRP. Unfortunately, OSPF convergence times weren’t fast enough, and there were inevitable loops given the circuit topology. The new topology continues to use OSPF on certain VLANs for devices that do not support EIGRP, but the primary routes are all carried in EIGRP and BGP.

While this architecture may not fit your needs, hopefully you can see how the use of tags gives great flexibility for traffic engineering. Each design criterion is met because the defined needs gave us a target for successful network changes. Honestly, that is just as important as the configuration snippets.
