Hello,
This is a follow-up to my previous blog regarding PVE SDN and NXOS.
Astute readers, which all of you are, will have realized that in order to route across VXLAN segments, the default gateway for each VNET is located on the PVE nodes.
This is not really an efficient way of routing across an EVPN SDN layout. The default gateway should be local to the VTEPs on which your VNIs are located.
A way of accomplishing this is to use the “anycast-gateway” feature. To do this we will create an SVI on the 9K, assign it to the VRF (the L3 VRF we configured), and give it an IP address.
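Depending on how your 9K is already set up, the relevant features may need to be enabled first. A minimal sketch, assuming neither is already turned on:
! interface-vlan is needed to create SVIs, fabric forwarding for anycast gateways
feature interface-vlan
feature fabric forwarding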
Looking at the configuration we have:
vlan 1,100,200,1000,2000
vlan 100
  vn-segment 31000
vlan 200
  vn-segment 41000
vlan 1000
  vn-segment 30000
vlan 2000
  vn-segment 40000
vrf context management
vrf context vrfvx_v100
  vni 30000
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
vrf context vrfvx_v200
  vni 40000
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
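To sanity-check that the VRFs and their L3 VNIs are in place before adding the gateways, something like the following should do (the exact output varies by NXOS release):
! verify the VRFs exist and the VNIs are known to the NVE interface
show vrf
show nve vni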
Thus we need to create SVIs for VLAN 100 and VLAN 200 and assign them to vrfvx_v100 and vrfvx_v200 respectively.
Before we do this, we need to make note of the MAC addresses PVE assigned to the VNETs we configured. These are in the “sdn” file in the “/etc/network/interfaces.d” folder on the PVE nodes. Reusing them ensures that the gateways we create will have the same MAC addresses, so if you were to migrate a VM there will be no conflict between ARP entries and MAC addresses on the different leaf where it now resides.
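For illustration, a VNET stanza in that file looks roughly like this (a hypothetical excerpt; the VNET name is made up, but the MAC and address match the ones used further down):
auto vnet100
iface vnet100
  address 10.100.1.1/24
  # MAC assigned by PVE to this VNET; reuse it on the matching SVI
  hwaddress BC:24:11:E7:F4:32
  bridge_ports vxlan_vnet100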
In our case we will use the static method, i.e. manual MAC configuration. Another way is to use MAC aliasing.
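For reference, NXOS also supports a single fabric-wide virtual MAC shared by every anycast-gateway SVI. A sketch (the MAC here is made up, and note it would not match the per-VNET MACs PVE generates):
! one shared virtual MAC for all anycast gateway SVIs in the fabric
fabric forwarding anycast-gateway-mac 0000.2222.3333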
Now we can configure both VLANs:
interface Vlan100
  no shutdown
  mac-address bc24.11e7.f432
  vrf member vrfvx_v100
  ip address 10.100.1.1/24
  fabric forwarding mode anycast-gateway
interface Vlan200
  no shutdown
  mac-address bc24.1158.a1f9
  vrf member vrfvx_v200
  ip address 10.200.1.1/24
  fabric forwarding mode anycast-gateway
Notice that we defined the MAC address for each VNET and then used the anycast-gateway command.
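A quick sanity check from the 9K side (a sketch, using the interface and VRF names configured above):
! confirm the SVIs are up with the right addresses in each VRF
show ip interface brief vrf vrfvx_v100
show ip interface brief vrf vrfvx_v200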
The interfaces should now be up, and if PVE allowed it you would be able to ping across VNIs. How to allow traffic across VNIs will be a topic for another blog.
Cheers,
Ciao.
