PVE, VLANs, Trunks, Oh My!

Hello there,

I thought I was done with PVE, but then I noticed that some users had issues with VLANs, trunks, etc.

In principle it should be fairly straightforward to configure VLANs (not VXLANs); however, using the native Linux way can be a bit complicated at times.
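For reference, here is a minimal sketch of the native approach on a PVE node, a VLAN-aware Linux bridge defined in /etc/network/interfaces (the interface names are illustrative and assume ifupdown2):

# VLAN-aware bridge: eno1 carries the tagged traffic, VLAN IDs 2-4094 allowed
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

A VM NIC attached to vmbr0 with a VLAN tag set on it then gets its traffic tagged by the bridge.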

Open vSwitch (OVS), on the other hand, was created with virtualization in mind and SDN in particular. A key point is that it supports OpenFlow and allows for separation of the data plane and control plane.
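For contrast, the OVS equivalent of a trunked uplink plus an access port can be sketched with a couple of ovs-vsctl commands (the bridge and port names here are just examples):

# Create an OVS bridge, trunk VLANs 100 and 200 on the uplink,
# and attach a VM tap port as an access port on VLAN 100
ovs-vsctl add-br vmbr1
ovs-vsctl add-port vmbr1 eno2 trunks=100,200
ovs-vsctl add-port vmbr1 tap100i0 tag=100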

Understanding and implementing it is easier said than done, so keep on reading.

Continue reading “PVE, VLANs, Trunks, Oh My!”

PVE SDN NXOS Follow Up!

Hello,

This is a follow-up to my previous blog regarding PVE SDN and NXOS.

Astute readers, which all of you are, will have realized that in order to route across VXLAN segments, the default gateway for each VNET is located on the PVE nodes.

This is not really an efficient way of routing across an EVPN SDN layout. The default gateway should be local to the VTEPs where your VNIs are located.

A way of accomplishing this is to use the “anycast-gateway” feature. To do this we will create an SVI on the 9k, assign it to the VRF (the L3 VRF we configured), and give it an IP address.

Looking at the configuration we have:

vlan 1,100,200,1000,2000
vlan 100
  vn-segment 31000
vlan 200
  vn-segment 41000
vlan 1000
  vn-segment 30000
vlan 2000
  vn-segment 40000

vrf context management
vrf context vrfvx_v100
  vni 30000
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
vrf context vrfvx_v200
  vni 40000
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn

Thus we need to create SVIs for VLAN 100 and VLAN 200 and assign them to vrf vrfvx_v100 and vrfvx_v200, respectively.

Before we do this we need to make note of the MAC addresses PVE assigned to the VNETs we configured. These are listed in the “sdn” file in the “interfaces.d” folder. Using the same MAC addresses on the gateways we create ensures that if you migrate a VM there will be no conflict between ARP entries and MAC addresses on the different leaf where it now resides.
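On a PVE node the assigned MACs can be checked along these lines (the vnet names below are just examples; the MACs typically show up as “hwaddress” lines):

# Inspect the SDN-generated interface definitions
cat /etc/network/interfaces.d/sdn
# Or look at the live VNET interfaces directly
ip -br link show dev vnet100
ip -br link show dev vnet200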

In our case we will use the static method, i.e. manual MAC configuration. Another way is to use MAC aliasing.

Now we can configure both VLANs:

interface Vlan100
 no shutdown
 mac-address bc24.11e7.f432 
 vrf member vrfvx_v100
 ip address 10.100.1.1/24 
 fabric forwarding mode anycast-gateway

interface Vlan200
 no shutdown
 mac-address bc24.1158.a1f9
 vrf member vrfvx_v200
 ip address 10.200.1.1/24
 fabric forwarding mode anycast-gateway

Notice that we defined the MAC address for each VNET and then used the anycast-gateway command.
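To sanity-check the result on the 9k, a few standard NXOS show commands help (the VRF and VLAN values follow the configuration above):

show ip interface brief vrf vrfvx_v100
show ip interface brief vrf vrfvx_v200
show interface vlan 100
show nve vni

Each SVI should be up/up in its VRF, and the VNIs should still show as up in the NVE output.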

The interfaces should be up, and if PVE allowed it you would be able to ping across VNIs. How to allow access across VNIs will be a topic for another blog.

Cheers,

Ciao.

 

PVE VXLAN NXOS Integration Oh My!

Introduction

The purpose of the following proof of concept is to demonstrate integration between Proxmox PVE and a Cisco Nexus 9000v.

PVE is a fantastic open-source virtualization environment. Recently it has gained a lot of traction, in particular because of the issues with vSphere licensing.

However, while I have seen several blogs and videos about it, and in particular about its SDN capabilities, I have not seen detailed setups involving integration with other SDN devices.

This is unfortunate, since the Cisco Nexus 9000, for example, is very prevalent, and at some point anyone who decides to move to PVE will need to integrate a Cisco or other vendor's device with it.

In any case, from a learning perspective I think the following will be very instructive.

Continue reading “PVE VXLAN NXOS Integration Oh My!”

Certbot, Oh My!

A few weeks ago, my Internet router died suddenly.

Not that much of an issue unless you use a custom box as a modem and you have your website hosted on it.

The server I was using for this was a Fedora Core 9 machine. I know, an ancient OS.

The machine itself was not that old; about 4 years ago I found a cheap Intel 386 with ISA support, so I replaced the really old server I had. I was able to migrate to the new box, where I was running a hardware RAID setup using an ISA card. Talk about making things complicated.

I initially tried using a virtual server to get things running quickly; I had a backup of the server running on VMware Workstation. However, I could not make a go of it, for reasons that may merit another blog.

However, I had my old media server lying around: 4 GB of RAM and 500 GB of disk space. Too much space, but that was the only thing I had. It only had one NIC, but I found a USB-to-Ethernet dongle.

This time I decided to use HAProxy to present my website and blog, with proper SSL certs for each.
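As a rough idea of the direction, an SNI-based HAProxy frontend for the two sites could look something like this (hostnames, cert path, and backend ports are placeholders, not my actual setup):

# haproxy.cfg sketch: terminate TLS and route by SNI to two backends.
# Certbot-issued cert and key need to be combined into one PEM per site.
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/
    use_backend blog    if { ssl_fc_sni -i blog.example.com }
    use_backend website if { ssl_fc_sni -i www.example.com }
    default_backend website

backend website
    server web1 127.0.0.1:8080 check

backend blog
    server blog1 127.0.0.1:8081 check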

Continue reading “Certbot, Oh My!”