Using VXLAN to Speed & Secure Your Clouds

Figure 3: VXLAN Solution

Because Layer-2 traffic is carried over IP, Layer-3 switches and routers with access control lists (ACLs) can be used to protect VXLAN traffic so that only virtual machines within the same logical network can communicate with one another. This isolation addresses the security concern.
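Conceptually, that isolation hinges on the 24-bit VXLAN Network Identifier (VNI) carried in every encapsulated packet: a decapsulating endpoint hands the inner frame only to virtual machines on the matching logical network. The sketch below illustrates the idea in Python; the tenant-to-VNI mapping and function names are illustrative only, not taken from any particular product.

```python
# Illustrative mapping of tenant logical networks to 24-bit VXLAN
# Network Identifiers (VNIs). In practice this state lives in the
# hypervisor's virtual switch or the SDN controller.
ALLOWED_VNI = {
    "tenant-a": 5001,
    "tenant-b": 5002,
}

def deliver_if_isolated(tenant: str, vxlan_payload: bytes):
    """Return the inner Ethernet frame only if the packet's VNI matches
    the tenant's logical network; otherwise drop it (return None)."""
    # The VXLAN header (RFC 7348) is 8 bytes:
    #   flags (1) + reserved (3) + VNI (3) + reserved (1)
    flags = vxlan_payload[0]
    vni = int.from_bytes(vxlan_payload[4:7], "big")
    if not (flags & 0x08):                # the I flag must be set for a valid VNI
        return None
    if vni != ALLOWED_VNI.get(tenant):    # cross-network traffic is dropped
        return None
    return vxlan_payload[8:]              # the inner Layer-2 frame
```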

VXLAN is a true hybrid solution, combining the benefits of L2, such as the ability to relocate VMs to maximize datacenter efficiency, with the scalability and security gained by transporting traffic over L3.

Because it addresses both challenges of VLAN grouping through a virtualization solution, a VXLAN deployment such as the one shown in Figure 3 is considered an ideal technology for network administrators working within a cloud computing environment.

VXLAN’s Hidden Challenge

Although VXLAN solves the scalability and security concerns, and does so at very little monetary expense, there are two issues that can impact IaaS performance:

1. VXLAN requires that the hypervisor process each packet on the VXLAN network, adding protocol headers on the sender side (encapsulation) and removing those headers on the receiver side (decapsulation); a minimal sketch of this step appears after this list.

2. In a traditional Layer-2 network, sizeable processing savings are realized by using CPU offloads. In a VXLAN setup, however, the classical offloading capabilities of the network interface controller (NIC), such as checksum offloading and large segmentation offloading (LSO), cannot be used, because the added layer of encapsulation leaves the inner packet inaccessible to the NIC. As a result, additional CPU resources are required to perform tasks that the existing NIC would previously have handled more efficiently.
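For concreteness, here is a minimal sketch of the per-packet encapsulation and decapsulation work described in item 1, using plain byte packing of the 8-byte VXLAN header defined in RFC 7348. A real hypervisor virtual switch performs this work, plus building the outer Ethernet/IP/UDP headers (omitted here), in kernel or driver code for every single packet, which is where the CPU cost arises.

```python
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.
    The result becomes the payload of an outer UDP datagram (conventionally
    destination port 4789), inside an outer IP packet and Ethernet frame."""
    flags = 0x08 << 24                            # I flag set: the VNI field is valid
    header = struct.pack("!II", flags, vni << 8)  # 24-bit VNI, low byte reserved
    return header + inner_frame

def vxlan_decapsulate(vxlan_payload: bytes):
    """Strip the VXLAN header on the receive side, returning (vni, inner_frame)."""
    _flags_word, vni_word = struct.unpack("!II", vxlan_payload[:8])
    return vni_word >> 8, vxlan_payload[8:]
```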

Recent testing by VMware3 has shown that these performance considerations can add nearly 35% to the existing CPU overhead.

Figure 4: VMware Test Results for CPU Overhead Using VXLAN

VMware used Receive Side Scaling (RSS) to distribute the overhead across multiple cores, something most capable network controllers can accomplish. Nonetheless, the overhead remains high when running VXLAN. Moreover, VMware achieved its results using 1 Gbps Ethernet; at 10 Gbps, much higher overhead would be expected, let alone at 40 Gbps.

The test results show that for small packets there is considerable per-message overhead, because the added headers are large relative to the payload. For large packets there is extensive CPU overhead, as the CPUs are busy with encapsulation and decapsulation instead of their intended workloads. This unanticipated degradation in performance significantly offsets the many benefits of using VXLAN.
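A back-of-the-envelope calculation makes the small-packet point concrete. Assuming the commonly cited 50 bytes of encapsulation overhead (outer Ethernet, IPv4, UDP and VXLAN headers, with no outer VLAN tag), the relative cost shrinks as the inner frame grows:

```python
# Rough illustration of why small packets suffer most from VXLAN's added headers.
ENCAP_OVERHEAD = 14 + 20 + 8 + 8   # outer Ethernet + IPv4 + UDP + VXLAN = 50 bytes

for inner in (64, 512, 1500):      # inner Ethernet frame sizes in bytes
    extra = ENCAP_OVERHEAD / inner * 100
    print(f"{inner:>5}-byte frame -> {inner + ENCAP_OVERHEAD} bytes on the wire "
          f"({extra:.0f}% added)")

# Output:
#    64-byte frame -> 114 bytes on the wire (78% added)
#   512-byte frame -> 562 bytes on the wire (10% added)
#  1500-byte frame -> 1550 bytes on the wire (3% added)
```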

Solving the Performance Challenge

In order for VXLAN to be of real value, the extra CPU overhead it creates must be eliminated.

This can be achieved by supporting all of the existing hardware offloads in the network controller, including the following (a quick way to check which tunnel-aware offloads a Linux NIC advertises is sketched after the list):

  • Allowing checksum offload to be performed on both the outer and inner headers
  • Performing large segmentation offloading (LSO)
  • Handling NetQueue so that virtual machine traffic is distributed across hypervisor queues in the most efficient manner
  • Enabling RSS to steer and distribute traffic based on the inner packet, not only the outer packet
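As a practical aside, it is possible on Linux hosts to see which tunnel-aware offloads a NIC driver currently advertises. The snippet below is a small sketch that simply filters the output of `ethtool -k`; the interface name and the exact feature names (for example, tx-udp_tnl-segmentation on VXLAN-capable NICs) vary by driver and kernel version, so treat them as assumptions to verify on your own hardware.

```python
import subprocess

def tunnel_offloads(interface: str = "eth0"):
    """Return the ethtool offload feature lines that mention tunneling.
    Assumes a Linux host with ethtool installed; raises CalledProcessError
    if the interface does not exist."""
    output = subprocess.run(
        ["ethtool", "-k", interface],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in output.splitlines()
            if "tnl" in line or "tunnel" in line]

if __name__ == "__main__":
    # On a VXLAN-capable NIC, expect features such as
    # tx-udp_tnl-segmentation to be reported as "on".
    for feature in tunnel_offloads("eth0"):
        print(feature)
```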

With these innovations to network controllers, the additional overhead inherent in VXLAN will be significantly reduced, if not eliminated entirely.

Editor’s note: Brian Klaff, Gadi Singer, Leor Talmor and Gilad Shainer of Mellanox Technologies contributed to this article.


Endnotes

1 Wall Street Journal, January 31, 2013: http://blogs.wsj.com/digits/2011/04/21/more-predictions-on-the-huge-growth-of-cloud-computing/

2 Gartner, Inc., “Forecast Overview: Public Cloud Services, Worldwide, 2011-2016, 2Q12 Update”

3 VMware Performance Study, “VXLAN Performance Evaluation on VMware vSphere® 5.1”
