VXLAN is now upstreamed into the Open vSwitch master build. It is worth mentioning that this is the VXLAN framing only, not the multicast control plane functionality. So there is basically no need to pull from the fork anymore; pull directly from the Open vSwitch master branch or a 1.10+ tarball.
I have done a couple of GRE tunnel how-tos using Open vSwitch (OVS). I had been itching to give VXLAN a spin in OVS, so why not ferret out someone's tree on GitHub. I believe VXLAN is still scheduled to officially release soon in Open vSwitch. So here are the steps for installing Open vSwitch and configuring tunnels with both VXLAN and GRE encapsulations. At the end we will compare the protocols at different MTU sizes. The results were interesting, I think (for a nerd).
I like seeing some collaboration of really smart people from different companies, as displayed in the GPL header below.
/*
 * Copyright (c) 2011 Nicira Networks.
 * Copyright (c) 2012 Cisco Systems Inc.
 * Distributed under the terms of the GNU GPL version 2.
 *
 * Significant portions of this file may be copied from parts of the Linux
 * kernel, by Linus Torvalds and others.
 */
Figure 1. Example of how tunnels can be leveraged as overlays.
By the way, I should probably disclaim now that huge Layer 2 networks do not scale and huge Layer 3 networks do. Host count in a broadcast domain/VLAN/network should be kept to a reasonable three-digit number. Cisco is quick to point to OTV as its solution for extending Layer 2 VLANs across WANs. That said, overlays are flat out required to overcome VLAN number limitations (a 12-bit VLAN ID tops out around 4,000 segments, while a 24-bit VXLAN VNI allows roughly 16 million) and they have lots of potential with programmatic orchestration.
Figure 2. OVS punts the first packet of a flow to userspace for the forwarding decision, then installs the flow in the kernel datapath so subsequent packets in the flow are switched there: slow path for the first packet, fast path for the rest.
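If you want to watch this happen, the flow tables can be dumped while traffic is running. The snippet below is just a quick sketch and assumes a bridge named br1 like the lab later in this post.

#Kernel datapath flows installed after the first packet was punted to userspace.
ovs-dpctl dump-flows

#The OpenFlow table that userspace consulted for that first packet.
ovs-ofctl dump-flows br1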
I did some simple iperf tests at the end of the post using MTU values of 1500 and 9000 bytes. The numbers are kind of fun to look at. For a really nice analysis, take a look at Martin's post at Network Heresy comparing STT, Linux bridging, and GRE. I am going to spin up some VMs on the VXLAN tunnel later this week and measure the speeds a little closer to see how GRE and VXLAN stack up to one another from hosts using overlays; for now I just ran some iperfs from the hypervisor itself rather than from VMs. OVS also supports CAPWAP encapsulation, which does MAC-in-UDP and is rather slick. Wonder why we do not hear much about that. I am going to dig in when I get back home from the road later next week.
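For anyone who wants to repeat the quick tests, they boil down to something like the commands below; the 9000-byte value and the 172.16.1.12 target are examples, and the MTU has to be raised on every hop in the path for jumbo frames to do anything.

#Raise the MTU on the physical NIC before the jumbo frame run.
ip link set dev eth0 mtu 9000

#On one hypervisor, start the iperf server.
iperf -s

#On the other hypervisor, run a 30 second test against it.
iperf -c 172.16.1.12 -t 30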
Figure 3. The lab setup basically has a fake interface up on br1; in the real world br1 would have VMs tapped onto it. The video uses br1 and br2, but that got to be confusing for people, so I changed it to br0 and br1 to match most people's eth0 = br0 NIC naming in ifconfig.
For those familiar with the build, you can just paste the following into your bash shell as root. To walk through the install step by step instead, skip the following snippet. The installation is extensively documented in the INSTALL file in the root of the tarball. The current and LTS releases are located here.
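The one-shot paste below is a rough sketch of the tarball build described in INSTALL, assuming a Debian/Ubuntu style host; the 1.10.0 URL, package list, and /usr/local paths are examples, so adjust them to the release you actually pull down.

#Build dependencies (Debian/Ubuntu example).
apt-get install -y build-essential libssl-dev linux-headers-$(uname -r)

#Fetch, unpack, and build the userspace tools plus the kernel module.
wget http://openvswitch.org/releases/openvswitch-1.10.0.tar.gz
tar -xzf openvswitch-1.10.0.tar.gz && cd openvswitch-1.10.0
./configure --with-linux=/lib/modules/$(uname -r)/build
make && make install

#Load the kernel module and create the OVSDB database.
insmod datapath/linux/openvswitch.ko
mkdir -p /usr/local/etc/openvswitch /usr/local/var/run/openvswitch
ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema

#Start the database server, initialize it, and start the switch daemon.
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
ovs-vsctl --no-wait init
ovs-vswitchd --pidfile --detach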
#Verify the kernel module(s) in case you didn't earlier and get errors.
lsmod | grep openvswitch
openvswitch            62327  0
*note* "brcompat" is deprecated since the OVS upstreaming. The output should just show "openvswitch" as a loaded kernel module. If it is not there, try loading it again and check your path to the kernel module. You shouldn't see brcompat loaded unless you are running a very old version (*cough* 1.4 on Citrix). Get to know the functionality of network control with an SDN/OpenFlow controller, setting up overlays, and one of the most interesting parts of OVS, the configuration database (OVSDB).
insmod datapath/linux/openvswitch.ko
At this point you have a functioning vanilla OVS install. The output should look something like this.
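On a fresh install with no bridges defined yet, the output looks roughly like the sketch below (the database UUID and version string are placeholders and will differ on your box):

#ovs-vsctl --version
ovs-vsctl (Open vSwitch) 1.10.0
#ovs-vsctl show
<database-uuid>
    ovs_version: "1.10.0"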
I have one NIC (eth0) on each host, both on the same LAN segment/network/VLAN. We are attaching eth0 to br0 and applying an IP to that bridge interface. We are also attaching an IP to br1; br1 is the island that we are building a tunnel for hosts to connect on. Without the VXLAN tunnel, the two br1 interfaces should not be able to ping one another. Note: this is being set up on the same subnet, but it is important to keep in mind that the VXLAN framing allows the tunnel to be established over disparate networks, e.g. over the Internet.
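Here is a sketch of the bridge and tunnel plumbing on the first host. The addresses are just examples: 172.16.1.11 and 172.16.1.12 for the physical eth0/br0 side, and 10.1.1.0/24 for the isolated br1 island. Mirror the commands on the second host with the addresses swapped, and swap type=vxlan for type=gre to build the GRE flavor instead.

#Create the transport bridge (br0) and the isolated island bridge (br1).
ovs-vsctl add-br br0
ovs-vsctl add-br br1

#Move eth0 and its address onto br0.
ovs-vsctl add-port br0 eth0
ifconfig eth0 0 up
ifconfig br0 172.16.1.11 netmask 255.255.255.0 up

#Give the br1 island an address of its own.
ifconfig br1 10.1.1.11 netmask 255.255.255.0 up

#Point a VXLAN tunnel port at the far-end hypervisor's br0 address.
ovs-vsctl add-port br1 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=172.16.1.12

#Once the second host is configured (10.1.1.12 on its br1), the islands should ping.
ping 10.1.1.12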