Steve Knodl is Director of Product Management for NextIO, provider of I/O consolidation and virtualization solutions.
The trends and cycles of computing technology used to deploy cloud infrastructure are frequently misaligned with each other and with end-user needs. Each generation of processing power and hard disk drive capacity doubles in about 18 months. Ethernet speeds, however, increase tenfold over a much longer period. Fibre Channel doubles in performance roughly every 18 months, while hard disk drive performance has never really caught up. Flash SSD trends track silicon growth trends, but the industry is too young for predictions reliable enough to guide long-term architecture planning. Scaling each component has become a unique challenge in the data center and cloud computing space.
In addition, intense vendor competition has made even standards-based products difficult to integrate across technology cycles, because vendors lock customers into unique tools and features that may become irrelevant in subsequent cycles. End users and providers want to move to next-generation networks and technologies now to simplify their overall architecture, but the performance those networks offer is overkill even for highly utilized hypervisor-based systems.
New Technologies, New Hope
Despite these difficult conditions, every data center manager and cloud provider, whether public or private, will have to embrace new technology as it is released in order to stay competitive. It is simply not possible to apply cloud platform software to a one-size-fits-all infrastructure and perform "forklift" upgrades periodically. What if data center architects could scale their compute, networking and storage connectivity over multiple technology cycles without vendor lock-in or major upgrades? I/O virtualization presents a solution that can meet these needs.
What is I/O Virtualization?
I/O virtualization can be thought of as a hardware middleware layer that sits between the server component of a system and the various I/O resources available to its processing units. The I/O component could be any technology that gets data to the processor, including Ethernet, Fibre Channel SAN, InfiniBand, SAS and, most importantly, any new technology that may not even be on the market at the time of installation. The ‘middleware’ should be high performance, so it can support multiple generations of I/O, old or new. It should be standards-based, to avoid lock-in to any single vendor in the existing marketplace. And lastly, it should be sharable and scalable, to help moderate the technology cycles of both the compute component and the I/O devices being virtualized.
I/O virtualization is attractive because a middleware layer that already meets all these requirements is delivered with every server: PCI Express. PCI Express virtualization is an industry standard maintained by the PCI Special Interest Group (PCI-SIG), and it can meet the needs of data center architects developing strategies that span long-term technology cycles.
A typical use case is an IT manager who must support multiple CPU processor families and networking capacities and technologies while providing enough network and storage I/O for each new generation of faster, data-hungry processors. They would deploy the first-generation servers, an I/O virtualization layer and specific I/O capabilities matched to those servers, as shown below.
When the next generation of servers is delivered, the IT manager could scale the I/O to support the higher data rates by adding a second controller to the IOV layer.
When a next-generation network is delivered, they could deploy it without changing anything in the servers, using the IOV layer to isolate the changes. This avoids having to roll out new networks and servers together in a single-step “forklift upgrade.”
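The three-step use case above can be sketched as a toy model. This is only an illustration of the decoupling the article describes, not NextIO's implementation; the `IOVLayer` class and all device names here are hypothetical.

```python
# Conceptual sketch: an I/O virtualization (IOV) layer decouples server
# refresh cycles from I/O refresh cycles. Hypothetical names throughout.

class IOVLayer:
    """A shared PCI Express switch layer between servers and I/O controllers."""

    def __init__(self):
        self.servers = []
        self.controllers = []  # e.g. Ethernet NICs, Fibre Channel HBAs

    def attach_server(self, name):
        self.servers.append(name)

    def add_controller(self, kind, bandwidth_gbps):
        # Step 2 in the text: scale I/O by adding capacity
        # without touching any attached server.
        self.controllers.append({"kind": kind, "gbps": bandwidth_gbps})

    def replace_controllers(self, old_kind, new_kind, bandwidth_gbps):
        # Step 3: migrate to a next-generation network. The change is
        # isolated inside the IOV layer; the server list is untouched.
        self.controllers = [c for c in self.controllers if c["kind"] != old_kind]
        self.add_controller(new_kind, bandwidth_gbps)

    def total_bandwidth(self):
        return sum(c["gbps"] for c in self.controllers)


# Step 1: first-generation servers plus a matching 10 GbE controller.
iov = IOVLayer()
for i in range(4):
    iov.attach_server(f"server-{i}")
iov.add_controller("10GbE", 10)

# Step 2: faster servers arrive; add a second controller for more I/O.
iov.add_controller("10GbE", 10)
assert iov.total_bandwidth() == 20

# Step 3: next-generation network arrives; swap controllers,
# servers and their drivers remain unchanged.
iov.replace_controllers("10GbE", "40GbE", 40)
assert iov.total_bandwidth() == 40
assert len(iov.servers) == 4
```

The point of the model is the asymmetry: `add_controller` and `replace_controllers` never touch `self.servers`, mirroring how the IOV layer isolates I/O changes from the compute side.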
Beyond managing the transitions between compute and I/O technologies, I/O virtualization helps avoid vendor lock-in by insulating servers from storage and other networking technology. PCI Express IOV has the further benefit of being essentially transparent to drivers and management software, because the I/O devices attached to the virtualized PCI Express switch use their standard drivers. All of these factors ease technology transitions across a data center over multiple refresh cycles.
There are solutions on the market today that offer the benefits of PCI Express-based I/O virtualization. Some can support up to 30 servers and eight industry-standard PCI Express cards, flexibly connecting the servers to existing I/O infrastructure.
As new technologies continue to proliferate throughout the enterprise, IT leaders must cope with constant change and feel increasing pressure to deliver new services, improve performance and manage complex environments. Mobile data access, virtualization and cloud computing are three of the most prominent factors driving this change. By implementing I/O virtualization solutions, companies can effectively scale their data center infrastructures to consistently meet the dynamic requirements of today’s technology landscape.
Please note: Graphics above supplied by NextIO.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.