I have used VXLANs with EVPN using Cisco ACI and the APIC controller. While the APIC controller is great, it abstracts much of the configuration applied to the spine and leaf devices.
I currently do not use VXLANs at all, so I decided to circle back and reacquaint myself with the technology. However, since I was doing this in my home office lab environment, I said to myself: let's use CLI commands.
I got the lab working, and it was great up to a point; then I ran into an issue that drove me crazy for quite a while.
The topology is shown below:
As you can see, this is a standard topology, one of many you find on the web.
Here I have one spine and three leaf devices connected to a standard switch, and also to a 9K switch using vPC.
I was ready to start configuring the devices when I found, in an old folder, several configurations for a similar topology. It seemed to be a lab example I had downloaded a while back but never used.
I said to myself this will save me time, or so I thought.
The original lab used multicast for BUM traffic. I decided to modify it to use ingress replication instead.
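For reference, the change boils down to the replication mode set per VNI under the NVE interface. Here is a minimal sketch of the leaf-side change, assuming iBGP host reachability; the VNI number is illustrative, not from my lab:

```
interface nve1
  host-reachability protocol bgp
  member vni 10100
    ! BUM traffic replicated head-end by the VTEP over BGP-learned peers,
    ! instead of being flooded via a multicast underlay group
    ingress-replication protocol bgp
```

With multicast, this last line would instead be something like `mcast-group 239.1.1.1`, and the underlay would need PIM configured.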
So I changed the configurations a bit and had the lab up and running.
Just a reminder: I am using EVE-NG, and you need a server with enough memory. Each 9K I am using takes 8 GB of memory.
As stated, I already had the configurations for the lab, so other than changing the interface naming conventions and dropping multicast, most of the configuration was already in place, including the spine's.
I started testing and ran into an interesting issue.
From server 2 I could ping server 3 and vice versa. On the other hand, I could not ping across the standard switch.
So I rolled up my sleeves and started troubleshooting.
I will not go into the details here (perhaps in another blog post), but I looked at the NVE interfaces, the VNI instances, and the L2VPN EVPN BGP configuration.
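For anyone following along, these are the kinds of checks I mean; all of these are standard NX-OS commands (output omitted, and what you see will of course depend on your own lab):

```
show nve peers             ! state of remote VTEP peers
show nve vni               ! VNI status and replication mode per VNI
show bgp l2vpn evpn        ! EVPN routes: type-2 MAC/IP, type-3 IMET
show l2route evpn mac all  ! MACs learned locally vs. received via BGP
```

In my case everything here looked healthy, which is what made the problem so frustrating.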
Nada. I could not find why I could not ping across.
I could see that labels were propagated and route targets were exported and imported.
It was driving me crazy!
So, I went looking. I found a Cisco article on configuring and verifying the MP-BGP EVPN control plane.
Their topology was very similar; they had an additional leaf to mimic Internet access, with an external BGP peer on that leaf device.
I went through the configurations and still could not see what I was doing wrong.
Then, voila I noticed something.
The original lab had the spine configured with the following line under the L2VPN EVPN address family:
retain route-target all
I took that line out and, voila, I could ping across!
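For context, this is roughly what the spine's BGP block looked like after the fix: a plain iBGP route reflector for the leaves. The AS number and neighbor address are made up for illustration, not from my lab:

```
router bgp 65001
  address-family l2vpn evpn
    ! 'retain route-target all' removed here
  neighbor 10.0.0.1
    remote-as 65001
    address-family l2vpn evpn
      send-community extended
      route-reflector-client
```

As a route reflector, the spine reflects the EVPN routes between the leaves without needing to import them itself.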
The question is what does that do?
Looking at the configuration guide for the Nexus 9000, I found the following:
“Required for eBGP. Allows the spine to retain and advertise all EVPN routes when there are no local VNI configured with matching import route targets.”
And that was the issue: my spine is not an ASBR device.
If the spine was in a different AS than the leaf devices, I would need it.
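By contrast, in a multi-AS design where the spine is an eBGP peer with no local VNIs, the command is needed so the spine keeps EVPN routes whose route targets it has no local reason to import. A sketch of that scenario, again with illustrative AS numbers and addresses:

```
router bgp 65000
  address-family l2vpn evpn
    ! spine has no VNIs configured, so without this line it would
    ! drop EVPN routes with no matching import route target
    retain route-target all
  neighbor 10.0.0.1
    remote-as 65101
    address-family l2vpn evpn
      send-community extended
```

In my single-AS iBGP lab, the spine never needed this, and having it there was part of what led me astray.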
I am not sure why the original lab had it, or whether that lab ever worked.
There you have it.