Using VXLAN to Speed & Secure Your Clouds


The immense growth of IaaS cloud computing has given rise to a need for highly scalable and secure virtual networks, without requiring significant investment in replacing or adding to the existing infrastructure. One virtual overlay technology that has emerged to address this is VXLAN, writes Ariel Shuper of Mellanox Technologies. While VXLAN seems an ideal solution, there are performance challenges that must be addressed.

Ariel Shuper is senior director of product management at Mellanox Technologies


The immense growth of IaaS cloud computing has given rise to a need for highly scalable and secure virtual networks, without requiring significant investment in replacing or adding to the existing infrastructure. One virtual overlay technology that has emerged to address this is VXLAN, which uses MAC-in-UDP tunneling to capitalize on the best aspects of existing layer 2 infrastructure and the advantages of layer 3 transport.

However, while VXLAN seems an ideal solution, this article points out performance challenges that arise with its usage, which must be addressed for its benefits to be fully realized.

Background: The Need for Virtualized Overlay Networks

Over the past few years, cloud computing has grown at a tremendous rate, with worldwide spending on IT cloud services tripling since 2008 and expected to reach over $100 billion in 2014.1 More specifically, Infrastructure as a Service (IaaS) is the fastest-growing segment of the public cloud services market, having grown over 45% in 2012 alone.2

IaaS allows multiple tenants to share system resources and infrastructure, which improves hardware utilization, thereby reducing the cost of the IT infrastructure, both at implementation and ongoing. Cloud computing also provides a measure of agility that simplifies the IT management process, provides additional control over proprietary data, and improves the end-user experience.

The IaaS segment of cloud computing is based on the concept of multiple tenants sharing the cloud infrastructure, enabled primarily by server virtualization. IaaS offers the following additional benefits to its consumers:

  • Allows a company’s IT department to focus on core competencies instead of assembling and maintaining network infrastructure
  • Enables dynamic scaling of infrastructure services based on usage demands
  • Reduces upfront investment costs, and changes ongoing costs from CAPEX to OPEX
  • Provides access to the infrastructure from any location on any device

IaaS providers need to support multiple tenancies on their data center infrastructure, creating the need to isolate each tenant within the network to provide the security and traffic isolation levels that independent infrastructure provides. This has typically been achieved through the use of VLANs, which segment the network into virtual network entities to provide security and network traffic control.

As the demand for cloud services continues to grow, many large consumers have outgrown the most basic solutions. For example, VLAN usage is limited to 4,096 entities (VLAN IDs), which, given the tremendous growth in the size of cloud-based networks, is far from sufficient segmentation. A more scalable solution has become a necessity.
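The arithmetic behind this limit is simple: the VLAN ID field in an 802.1Q tag is only 12 bits wide, while VXLAN (introduced below) widens the identifier to 24 bits. A quick sketch of the two address spaces:

```python
# The 802.1Q VLAN ID field is 12 bits, yielding 4,096 values
# (two of which, 0 and 4095, are reserved), while VXLAN's
# 24-bit identifier yields roughly 16.7 million segments.
vlan_ids = 2 ** 12           # 4096
usable_vlans = vlan_ids - 2  # 4094 (IDs 0 and 4095 are reserved)
vxlan_ids = 2 ** 24          # 16777216

print(vlan_ids, usable_vlans, vxlan_ids)
```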

A similar concern exists with regard to security. VLAN grouping has its limitations in terms of network isolation, so there is renewed interest in a more comprehensive security solution.

A number of solutions exist to address these issues, each with its own advantages and disadvantages. Technologies such as stacked VLANs (known as Q-in-Q) or MAC-in-MAC try to address the scalability concern by multiplying the number of possible VLAN IDs or MAC addresses that can be used. Such technologies, however, remain in the Layer 2 (L2) domain, which means there are inherent challenges in scaling to a high number of entities. For example, Spanning Tree (STP), Rapid Spanning Tree (RSTP), or Multiple Spanning Tree (MSTP) is typically used to prevent loops, but half the links in the resulting “tree” are blocked and held in standby for link failures.

An additional challenge in using new VLAN or MAC addresses stems from the need for configuration changes to the existing Layer-2 infrastructure, which complicates Virtual Machine (VM) and workload migration from server to server in the virtual domain.

As demonstrated in Figure 1, an approach that is now being introduced successfully for addressing these challenges is to create a virtual network that transports data across existing Layer-3 (IP) infrastructure. The virtual network is overlaid on the physical Layer-3 (L3) network, which hosts the virtual machines and hypervisors. By adding this virtual overlay network in L3, administrators will be able to logically isolate and scale their cloud-based services without having to significantly reconfigure or add to existing infrastructure.

Figure 1: Network Virtualization Before and After Overlay Network

VXLAN Technology

In order for an overlay network to be of any use, a technology is required to encapsulate data such that it can tunnel into Layer 2 and be carried across Layer 3. One leading technology that provides this encapsulation and aims to resolve both the security and scalability issues is Virtual Extensible LAN (VXLAN). VXLAN technology provides a solution for stretching the L2 network over the virtual L3 IP network.

Figure 2: VXLAN Encapsulation

The VXLAN concept is based on a new encapsulation for VM traffic in which the new encapsulation creates a MAC-in-UDP tunnel for the VM traffic. As Figure 2 shows, it encapsulates the VM’s Layer 2 (Ethernet) traffic with new MAC, IP, and UDP headers. It also adds a VXLAN ID, a 24-bit identifier that radically extends the address space of the VLANs from 4,094 segments up to 16.7 million available IDs, solving the scalability issue.

The encapsulation consists of:

  • An outer MAC address, which provides the physical destination and source addresses of the hypervisors or of intermediate L3 routers
  • An optional outer 802.1Q VLAN tag to further delineate VXLAN traffic on the LAN
  • Outer IP addresses, which are the IP addresses assigned to the hypervisors that are communicating over L3
  • An outer UDP port
  • A VXLAN ID that designates the specific VXLAN on which the communicating VMs reside
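The VXLAN header that carries this ID (as later standardized in RFC 7348) is itself only eight bytes: a flags byte, reserved fields, and the 24-bit VXLAN ID. A minimal sketch of packing and unpacking it, for illustration only:

```python
import struct

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: a flags byte (0x08 marks a
    valid VNI), 3 reserved bytes, the 24-bit VXLAN ID, and one
    final reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VXLAN ID must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Recover the 24-bit VXLAN ID from a VXLAN header."""
    _, second_word = struct.unpack("!II", header)
    return second_word >> 8

hdr = pack_vxlan_header(5000)
print(len(hdr), unpack_vni(hdr))  # 8 5000
```

This eight-byte header sits between the outer UDP header and the original (inner) Ethernet frame, which is why every outer header in the list above is needed to steer the tunneled traffic.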

The encapsulation and decapsulation process is handled within the hypervisor, which connects the virtual switch with the IP network. The virtual machines themselves therefore remain completely unaware of the VXLAN implemented beneath them.

The hypervisor is assigned an IP address and acts as the host for the IP network. The virtual switch is assigned the VXLAN segment ID, which is then assigned to an IP multicast group.

IP multicasting is yet another benefit of transporting messages via L3, in contrast to the flooding behavior of an L2 broadcast domain. The hypervisor determines whether the communicating virtual machines reside in the same multicast group, and thus whether unicast or IP multicast delivery is required. It is also able to differentiate between individual logical networks and to associate newly created virtual machines with the appropriate multicast groups.
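The forwarding decision described above can be sketched in a few lines. The data structures and names here are purely illustrative, not taken from any real hypervisor or vSwitch API: each VXLAN ID maps to a multicast group, learned destination MACs are delivered by unicast to the owning hypervisor, and unknown destinations are flooded to the segment's group:

```python
# Illustrative sketch only: maps each VXLAN ID to its IP multicast
# group and decides between unicast and multicast delivery. All
# tables and addresses here are hypothetical.
vni_to_mcast_group = {5000: "239.1.1.1", 5001: "239.1.1.2"}
mac_to_vtep = {"02:00:00:00:00:01": "10.0.0.11"}  # learned inner MAC -> hypervisor IP

def pick_destination(vni: int, dst_mac: str) -> str:
    """Unicast to the learned hypervisor endpoint if the inner MAC
    is known; otherwise flood via the segment's multicast group."""
    if dst_mac in mac_to_vtep:
        return mac_to_vtep[dst_mac]    # unicast outer IP
    return vni_to_mcast_group[vni]     # multicast flood

print(pick_destination(5000, "02:00:00:00:00:01"))  # 10.0.0.11
print(pick_destination(5000, "ff:ff:ff:ff:ff:ff"))  # 239.1.1.1
```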



Figure 3: VXLAN Solution

Since Layer-2 traffic is being carried over IP, it is possible to use L3 switches and routers with access control lists (ACLs) to protect VXLAN traffic such that only virtual machines within the same logical network can communicate with each other. This provides the necessary isolation to address the security concern.

VXLAN is truly a hybrid solution, combining the benefits of L2, such as the ability to shift the location of VMs to maximize the efficiency of the datacenter, with the scalability and security realized by transporting via L3.

Because it addresses both shortcomings of VLAN grouping with a virtualization-based approach, a VXLAN solution, such as the one displayed in Figure 3, is considered an ideal technology for network administrators working within a cloud computing environment.

VXLAN’s Hidden Challenge

Although VXLAN solves scalability and security concerns, and although it does so at very little monetary expense, there are two concerns that can impact IaaS performance:

1. VXLAN requires that each packet on the VXLAN network must be processed by the hypervisor to add protocol headers on the sender side (encapsulation) and remove those headers on the receiver side (decapsulation).

2. In a traditional Layer-2 network, sizeable processing savings are realized by using CPU offloads. In a VXLAN setup, however, the classical offloading capabilities of the network interface controller (NIC), such as checksum offloading and large segmentation offloading (LSO), cannot be used, because the added layer of encapsulation hides the inner packet from the NIC. Additional CPU resources are therefore required to perform tasks that would previously have been handled more efficiently by the NIC.

Recent testing by VMware3 has shown that these performance considerations can add nearly 35% to the existing CPU overhead.

Figure 4: VMware Test Results for CPU Overhead Using VXLAN

VMware used Receive Side Scaling (RSS) to distribute the overhead across multiple cores, a capability offered by most capable network controllers. Nonetheless, overhead remains high when running VXLAN. Moreover, VMware achieved its results using 1 Gbps Ethernet; at 10 Gbps, much higher overhead would be expected, let alone at 40 Gbps.

The test results show that for small packets, the added headers impose considerable per-message overhead relative to the payload. For large packets, there is extensive CPU overhead, as the CPUs are consumed by encapsulation and decapsulation instead of their intended workloads. This unanticipated degradation in performance significantly offsets the many benefits of using VXLAN.

Solving the Performance Challenge

In order for VXLAN to be of real value, the extra CPU overhead it creates must be eliminated.

This can be achieved by supporting all existing hardware offloads in the network controllers. This includes:

  • Allowing checksum to be performed on both the outer and inner headers
  • Performing large segmentation offloading
  • Handling NetQueue to ensure that virtual machine traffic is distributed across hypervisor queues in the most efficient manner
  • Enabling RSS to steer and distribute traffic based on the inner packet, and not only the outer packet
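The last point can be sketched briefly. With VXLAN, hashing only the outer headers would land all tunneled traffic between the same pair of hypervisors on a single queue, since the outer IPs are identical; hashing the inner flow spreads it out. This is an illustrative stand-in (real NICs use a Toeplitz hash over the configured flow tuple; CRC32 substitutes here):

```python
# Illustrative sketch: selecting a receive queue from a hash of the
# *inner* packet's flow tuple, rather than the outer tunnel headers,
# which are identical for every flow between the same two hypervisors.
import zlib

NUM_QUEUES = 8

def rss_queue(inner_src_ip: str, inner_dst_ip: str,
              inner_src_port: int, inner_dst_port: int) -> int:
    """Pick a receive queue by hashing the inner flow tuple.
    CRC32 stands in for the Toeplitz hash real NICs use."""
    key = f"{inner_src_ip}:{inner_dst_ip}:{inner_src_port}:{inner_dst_port}"
    return zlib.crc32(key.encode()) % NUM_QUEUES

# Two different inner flows between the same hypervisor pair can
# now land on different queues:
q1 = rss_queue("192.168.1.10", "192.168.1.20", 49152, 80)
q2 = rss_queue("192.168.1.11", "192.168.1.20", 49153, 80)
print(q1, q2)
```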

With these innovations to network controllers, the additional overheads implicit in VXLAN will be significantly reduced, if not eliminated entirely.

Editor's note: Brian Klaff, Gadi Singer, Leor Talmor and Gilad Shainer of Mellanox Technologies contributed to this article.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.


1 Wall Street Journal, January 31, 2013

2 Gartner, Inc., “Forecast Overview: Public Cloud Services, Worldwide, 2011-2016, 2Q12 Update”

3 VMware Performance Study, “VXLAN Performance Evaluation on VMware vSphere® 5.1”
