Raghu Kondapalli is director of technology focused on Strategic Planning and Solution Architecture for the Networking Components Division of LSI Corporation. He brings rich experience and deep knowledge of the cloud, service provider and enterprise networking business, specifically in packet processing, switching and SoC architectures.
The Data Deluge occurring in today’s content-rich Internet, cloud and enterprise applications is increasing the volume, velocity and variety of information that data centers must process. In response, organizations have begun virtualizing their data centers to become more cost-effective, power-efficient, scalable and agile.
The migration began with server virtualization using technologies like multi-core CPUs and multi-thread operating systems. Next was the virtualization of storage area networks (SANs) and network attached storage (NAS) to cope with the Data Deluge more efficiently and cost-effectively. The final target for virtualization is the data center network itself, which will necessitate changes in both the control and data planes to manage traffic flows more intelligently and improve overall performance.
This Industry Perspectives article is the first in a series of three that analyzes the network-related challenges in virtualized data centers, and how they are affecting network infrastructures, from the SAN to the core. The focus here is on the effect server virtualization is having on storage virtualization and traffic flows in the data center network.
Server Virtualization’s Effect on Storage and the Network
The need for instantaneous and reliable access to data across all segments of today’s connected world is pushing the boundaries of data center virtualization. Cloud computing, with its superior scalability and lower total cost of ownership (TCO), is at the leading edge of this trend by requiring virtualization of the entire data center in a multi-tenancy environment.
Servers were initially virtualized by implementing virtual machines (VMs) in software, with the hypervisor creating a layer of abstraction between physical and virtual machines, thereby absorbing many of the connectivity, manageability and scalability issues. Software-based hypervisors, however, are unable to keep pace with the increased performance demands of the Data Deluge. Processor extensions to support x86 virtualization, such as Intel VT-x and AMD-V, made their debut in the mid-2000s, providing the hardware acceleration needed to improve performance.
Virtualization of storage is typically done in a SAN, which houses both the VM images and some or all of the data needed by the applications. VM support requires extra storage in the SAN to back up and replicate the images dynamically, and during the initial phase of storage virtualization, storage hypervisors helped administrators perform these tasks more easily by disguising the actual complexity of the SAN. These techniques by themselves, however, proved insufficient for the relentless growth in storage demands. Once again, advances in hardware, particularly the use of flash memory in solid-state drives (SSDs), became critical to boosting SAN performance. Such tiered and/or application-aware storage solutions deliver hardware acceleration to both the SAN and direct-attached storage (DAS), providing both improved I/O throughput and real-time analytics.
Until recently, most of the efforts in data center virtualization addressed the server and storage segments. Network virtualization has been ad hoc at best, normally implemented as an add-on module to traditional compute-centric hypervisors. Network-specific extensions to hypervisors handle basic connectivity and fault management, and are able to meet the performance needs of small data centers. The current generation of large-scale server farms, however, comprises thousands of servers with potentially dozens of VMs per server. Application workloads, which are generally distributed across several VMs, increase VM-to-VM communications (east-west traffic), while other factors, such as VM migration and storage applications like data replication, have also increased east-west traffic flows. These changes are occurring even as client-to-server communications (north-south traffic) continue to grow exponentially.
Reaping Benefits of Virtualization
IT departments are now exploring new options for data center networks to better reap the benefits of virtualization, and several solutions have been proposed to improve network utilization and performance. At the network architectural level, isolating the control plane functions from the data plane, and virtualizing both, is a growing trend that involves improving the efficiency of existing network infrastructures with simple upgrades. Scale-out and scale-up are two such techniques now in use, and these will be covered in more detail in the third article in this series.
A related trend involves Software-Defined Networking (SDN), another abstraction in which network application stacks are presented with a virtual view of the network that shields its physical topology. SDN also enables control plane tasks to be virtualized and distributed across the network. OpenFlow is one example of an SDN approach: it separates control plane functions, such as routing, from data plane functions, such as forwarding, enabling them to execute independently on different devices, potentially from different vendors.
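The control/data plane split described above can be pictured as a match-action flow table: a controller installs forwarding rules, and the switching element merely applies them packet by packet, punting unmatched packets back to the controller. The sketch below is purely illustrative; the class and field names are hypothetical and do not represent the actual OpenFlow wire protocol.

```python
# Illustrative sketch (not the OpenFlow protocol): a controller programs
# rules into a flow table, and the data plane only matches and acts.

class FlowTable:
    """Data plane: holds rules installed by a controller and forwards packets."""

    def __init__(self):
        self.rules = []  # list of (match_fields, action), in install order

    def install_rule(self, match, action):
        # Invoked by the (possibly remote, separately located) control plane.
        self.rules.append((match, action))

    def forward(self, packet):
        # Pure data-plane work: compare packet headers to installed rules.
        for match, action in self.rules:
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        # Table miss: hand the packet to the control plane for a decision.
        return "send_to_controller"


# The control plane decides policy; the data plane only executes it.
table = FlowTable()
table.install_rule({"dst_ip": "10.0.0.2"}, "output:port2")
table.install_rule({"dst_ip": "10.0.0.3"}, "drop")

print(table.forward({"dst_ip": "10.0.0.2", "src_ip": "10.0.0.9"}))  # output:port2
print(table.forward({"dst_ip": "10.0.0.9"}))                        # send_to_controller
```

Because the rule-installation interface is all the two halves share, the controller and the forwarding hardware can come from different vendors, which is the interoperability point made above.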
But before exploring these proposed network virtualization options, it is useful to dive a bit deeper into the networking issues in a virtualized data center, which is the subject of the second article in this series.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.