EVPN-VXLAN Data Centers: ERB Deployment

After talking through EVPN-VXLAN topologies in the data center, it seems appropriate to go through some of the common configurations. In this article, I’m going to walk through a Juniper Junos configuration for an edge-routed bridging (ERB) deployment in the data center.

Data Center: ERB Requirements

In this example we’ll use EVPN-VXLAN to connect all VLANs across a 3-stage Clos fabric with a mix of multihomed and single-homed systems. We’re using Juniper’s vQFX images to deploy the spines and leaves within an EVE-NG environment. The example is limited to a single tenant and a single data center. It is possible to do multi-tenancy and extend connectivity to multiple data centers, but that is for a future article.

When deploying your EVPN-VXLAN enabled data center it’s recommended you plan out all the details. IPv4/IPv6 address pools should be thought out for your loopbacks, point-to-point links, and host subnets. VXLAN Network Identifiers (VNIs), route targets (RTs), and route distinguishers (RDs) are critical data points you’ll need to document, especially if you’re planning for data center growth, multi-tenancy, and VXLAN stitching.

The Underlay

EVPN-VXLAN ERB Topology

The underlay is your data center’s IP fabric. It connects spines and leaves with routed links rather than 802.1Q trunks, which eliminates the need for Spanning Tree Protocol and proprietary MC-LAG configurations. In an IP fabric, the data plane is driven by routing protocols in the control plane. EVPN itself is built on MP-BGP and the extensible address families and communities within the protocol, but there is no hard requirement on the underlay protocol, so OSPF or IS-IS is still perfectly fine. The underlay’s job is simply to provide loopback Network Layer Reachability Information (NLRI) to every neighboring device so the overlay can function.

Spine Deployment

Workflows for EVPN-VXLAN are important. I begin with the data center fabric and the underlay NLRI. We only need to import and export the loopback addresses so the VTEPs have a place to terminate. There’s no requirement for BGP in the underlay; you can use OSPF, IS-IS, or BGP. Most vendors recommend eBGP for the underlay and iBGP for the overlay, though even that is shifting toward eBGP for both.

Spine 1:

Spine 2:

The above configuration snippets show the point-to-point IP address assignments. The policy-options policy-statement settings define how routes are used by the routing and forwarding engines. The UNDERLAY terms restrict advertisements to the lo0 loopback addresses. The LOAD-BALANCE policy is applied as an export policy to the Junos forwarding table so all equal-cost learned routes can be used for per-flow load balancing. BGP installs only one active route by default; we want all links in the data center to carry traffic, and this policy makes that possible.
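To make that concrete, here is a minimal sketch of what a spine’s underlay stanza could look like in set format. The interface names, addressing, and AS numbers are placeholders of my own choosing rather than the lab’s actual values, and the lines beginning with # are annotations, not configuration.

  # Point-to-point fabric links and the loopback
  set interfaces xe-0/0/0 unit 0 family inet address 10.0.1.0/31
  set interfaces xe-0/0/1 unit 0 family inet address 10.0.1.2/31
  set interfaces lo0 unit 0 family inet address 192.168.255.1/32
  set routing-options router-id 192.168.255.1
  # Advertise only the loopback; enable ECMP in the forwarding table
  set policy-options policy-statement UNDERLAY term LOOPBACK from interface lo0.0
  set policy-options policy-statement UNDERLAY term LOOPBACK then accept
  set policy-options policy-statement UNDERLAY term REJECT then reject
  set policy-options policy-statement LOAD-BALANCE term ECMP then load-balance per-packet
  set routing-options forwarding-table export LOAD-BALANCE
  # eBGP underlay peering toward the leaves
  set protocols bgp group UNDERLAY type external
  set protocols bgp group UNDERLAY export UNDERLAY
  set protocols bgp group UNDERLAY local-as 65000
  set protocols bgp group UNDERLAY multipath multiple-as
  set protocols bgp group UNDERLAY neighbor 10.0.1.1 peer-as 65001
  set protocols bgp group UNDERLAY neighbor 10.0.1.3 peer-as 65002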

Leaf Configuration

The underlay on the leaves uses similar parameters to the spines: loopback advertisement, NLRI exchange, and ECMP are still required. The leaves will also carry the VLAN, IRB, and EVPN-VXLAN sections needed for ERB connectivity. It’s recommended to segment leaves into border and host-facing roles; border leaves connect edge services like WAN CPE, firewalls, and load balancers.
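Before the per-leaf snippets, here is a rough illustration of how a leaf’s underlay mirrors the spine’s; only the uplink addressing, loopback, and local AS change. As with the spine sketch above, every value shown is a placeholder.

  # UNDERLAY and LOAD-BALANCE policies are the same as on the spines
  set interfaces xe-0/0/0 unit 0 family inet address 10.0.1.1/31
  set interfaces xe-0/0/1 unit 0 family inet address 10.0.2.1/31
  set interfaces lo0 unit 0 family inet address 192.168.255.11/32
  set routing-options router-id 192.168.255.11
  set routing-options forwarding-table export LOAD-BALANCE
  set protocols bgp group UNDERLAY type external
  set protocols bgp group UNDERLAY export UNDERLAY
  set protocols bgp group UNDERLAY local-as 65001
  set protocols bgp group UNDERLAY multipath multiple-as
  set protocols bgp group UNDERLAY neighbor 10.0.1.0 peer-as 65000
  set protocols bgp group UNDERLAY neighbor 10.0.2.0 peer-as 65000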

Leaf 1 (Border):

Leaf 2 (Border):

Leaf 3:

Leaf 4:

Now that the data center fabric is deployed, it’s good practice to verify functionality. If you don’t have the underlay working, you’re not going to get the overlay. Check that the loopbacks are installed in the route tables of all devices by checking BGP session status, routes, and route advertisements. I address verification later in this write-up.

The Overlay

After configuring the underlay, it’s time to move to the overlay. This is the VLAN transport portion of our deployment, and we’ll use an anycast gateway for each VLAN across the leaf edge. You will need BGP in the overlay to enable EVPN signaling, which provides the control plane for VXLAN-encapsulated traffic.
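One common pattern, and the one assumed in the sketch below, is iBGP between loopbacks with the spines acting as route reflectors and the EVPN address family enabled. The loopback addresses and the overlay AS are placeholders.

  # Spine (route reflector) overlay group
  set protocols bgp group OVERLAY type internal
  set protocols bgp group OVERLAY local-address 192.168.255.1
  set protocols bgp group OVERLAY family evpn signaling
  set protocols bgp group OVERLAY cluster 192.168.255.1
  set protocols bgp group OVERLAY local-as 65100
  set protocols bgp group OVERLAY neighbor 192.168.255.11
  set protocols bgp group OVERLAY neighbor 192.168.255.12
  # Leaf overlay group
  set protocols bgp group OVERLAY type internal
  set protocols bgp group OVERLAY local-address 192.168.255.11
  set protocols bgp group OVERLAY family evpn signaling
  set protocols bgp group OVERLAY local-as 65100
  set protocols bgp group OVERLAY neighbor 192.168.255.1
  set protocols bgp group OVERLAY neighbor 192.168.255.2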

Spine 1:

Spine 2:

Leaf 1 (Border):

Leaf 2 (Border):

Leaf 3:

Leaf 4:

EVPN-VXLAN

In an ERB topology, VXLAN encapsulation happens at the leaf; the spines provide IP transit only rather than VTEP termination. The switch-options and protocols evpn stanzas are where the magic happens for sending L2 frames across L3 links; a configuration sketch follows the list of concepts below.

Host reachability is advertised to all participating VTEPs in BGP EVPN Type-2 (MAC/IP) routes. Critical concepts in all EVPN-VXLAN topologies are:

    • A vrf-target, or route target (RT), is a BGP extended community that tells the switch which routes to import/export for a VRF instance or instances. It’s different from a route-distinguisher (RD), which keeps prefixes unique. For instance, if two different customers connect to the same PE with the same IP prefix, the RD keeps the routes distinct and the RT tells the receiving router(s) which VRF the prefix belongs to.
    • VXLAN Network Identifiers (VNIs) uniquely label a VXLAN-enabled VLAN. In the default routing instance we’re using a unique VNI for each VLAN. If this were a MAC-VRF instance using the vlan-bundle service type, we could map a group of VLANs to a single VNI.
    • Adding the command set protocols evpn default-gateway do-not-advertise is recommended when the virtual gateway IP/MAC addresses are identical across all VTEP devices.

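As referenced above, here is a minimal sketch of those stanzas on a leaf, using the default switching instance. The RD, RT, VNI, VLAN ID, and IRB addressing are illustrative placeholders only.

  # VTEP source and EVPN identifiers
  set switch-options vtep-source-interface lo0.0
  set switch-options route-distinguisher 192.168.255.11:1
  set switch-options vrf-target target:65100:1
  # EVPN with VXLAN encapsulation
  set protocols evpn encapsulation vxlan
  set protocols evpn extended-vni-list all
  set protocols evpn default-gateway do-not-advertise
  # VLAN-to-VNI mapping and the anycast IRB gateway
  set vlans VLAN100 vlan-id 100
  set vlans VLAN100 vxlan vni 10100
  set vlans VLAN100 l3-interface irb.100
  set interfaces irb unit 100 family inet address 10.1.100.2/24 virtual-gateway-address 10.1.100.1
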
Leaf 1 (Border):

Leaf 2 (Border):

Leaf 3:

Leaf 4:

Host Connectivity

Once the basic reachability configuration is complete, we jump into the layer-2 and VXLAN section. VXLAN encapsulates layer-2 traffic as the data plane protocol while EVPN announces the information in the control plane. In our example deployment there is one host that is multihomed, requiring Ethernet segment identifier (ESI) configuration on Leaf 1 and Leaf 2. The remaining leaves have single-homed hosts, which require nothing beyond standard access-port configuration.
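A rough sketch of that multihoming piece is below: Leaf 1 and Leaf 2 share the same ESI and LACP system ID on the aggregated interface facing the host, while the single-homed leaves use a plain access port. The interface names, ESI value, LACP system ID, and VLAN name are placeholders.

  # Leaf 1 and Leaf 2: ESI-LAG toward the multihomed host
  set chassis aggregated-devices ethernet device-count 1
  set interfaces xe-0/0/2 ether-options 802.3ad ae0
  set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
  set interfaces ae0 esi all-active
  set interfaces ae0 aggregated-ether-options lacp active
  set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:01:01:01
  set interfaces ae0 unit 0 family ethernet-switching interface-mode access
  set interfaces ae0 unit 0 family ethernet-switching vlan members VLAN100
  # Leaf 3 and Leaf 4: single-homed access port
  set interfaces xe-0/0/2 unit 0 family ethernet-switching interface-mode access
  set interfaces xe-0/0/2 unit 0 family ethernet-switching vlan members VLAN100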

Leaf 1 (Border):

Leaf 2 (Border):

Leaf 3:

Leaf 4:

ESI attachments are announced as Type-4 (Ethernet Segment) routes in EVPN. Since it’s standards-based, there’s no need for proprietary MC-LAG configurations. Just let EVPN do its thing.

Verification

How do you support this new topology? Data center network infrastructure should be extremely stable and all databases must be in sync. That means:

  • BGP state should be stable, with the proper routes exported to and imported by all BGP participants.
  • The EVPN database contains proper endpoints, hosts, and VNIs.
  • EVPN routes for all required route types should be in the RIB.

Let’s start with the BGP state. Within ERB deployments, checking the end switch for overlay and underlay connectivity is a top priority. For example, on R11 you’ll want to make certain you have established sessions with both your underlay and overlay peers.
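The checks themselves are simple; commands along these lines show overall session state and per-peer detail (the neighbor address is a placeholder):

  show bgp summary
  show bgp neighbor 192.168.255.1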

With all BGP neighbors established, it’s time to check that the proper routes are being advertised and received, in both the underlay and the overlay. If this shows something other than the expected result, it’s time to dig into the configuration. The output below is from Leaf 1 and Spine 1.
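Commands along these lines cover both directions for a given peer (the neighbor address is a placeholder), plus the EVPN table itself:

  show route advertising-protocol bgp 10.0.1.0
  show route receive-protocol bgp 10.0.1.0
  show route table bgp.evpn.0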

Leaf 1:

Spine 1:

When the data center fabric is confirmed, move to the VTEP and EVPN database. This example is from Leaf 2. The EVPN database shows MAC/IP addresses and their associated VNIs, while the VXLAN tunnel endpoint command reports the active remote VTEPs on the switch. All of this information is needed to forward frames from one device to another. If you fail to see your configured switches, it’s time for further troubleshooting.
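The commands behind those checks look roughly like this:

  show evpn database
  show evpn instance extensive
  show ethernet-switching vxlan-tunnel-end-point remote
  show ethernet-switching table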

Leaf 2:

Closing The Deployment

While there is a nearly infinite number of ways to deploy ERB, this one will get you up and running rather quickly. There are nerd knobs you can twist depending on your connectivity, business applications, and growth objectives. EVPN-VXLAN is a stable, standards-based option for your data center, and vendors have written interoperability “HOWTOs” for it, which was unheard of with MC-LAG.

A standards-based data center decouples you from vendor-specific designs while meeting the same technical requirements. That’s what makes a data center that can grow and stay stable.