Hypervisor 201: The 2014 Market Update
The hypervisor market has undergone some changes over the last year, including a shift in focus to the cloud.

Just over a year ago we took a close look at the hypervisor market. We examined the top three players, reviewed their features and offered some ideas regarding direction and technological innovation.

A lot can change in a year.

Over the course of 2013, we saw a huge increase in data center adoption, hybrid cloud models, and an even greater push toward hypervisor and infrastructure agnosticism. That's where we find our biggest changes. The hypervisor market is no longer defined by the paravirtualization drivers it has to optimize, nor does it care as much about the hardware it sits on. Of course, those elements are still crucial to the virtualization experience.

But now the big connection point revolves around the cloud. How well can you integrate with a hypervisor sitting thousands of miles away? How well can your platform extend an existing data center into a hybrid cloud model? Can your hypervisor integrate with critical APIs to increase efficiency and optimize the end-user computing experience?

In our previous discussion, we took a look at the definitions behind what comprises a hypervisor. Let’s revisit those key definitions and add some more:

  • Type I Hypervisor. This type of hypervisor is deployed as a bare-metal installation: the hypervisor is the first thing installed on the server, acting as its operating system. The benefit of this approach is that the hypervisor communicates directly with the underlying physical server hardware. Those resources are then paravirtualized and delivered to the running VMs. This is the preferred method for many production systems.
  • Type II Hypervisor. This model is also known as a hosted hypervisor. The software is not installed on bare metal, but instead is loaded on top of an already running operating system. For example, a server running Windows Server 2008 R2 can have VMware Workstation installed on top of that OS. Although there is an extra hop for the resources to take when they pass through to the VM, the latency is minimal, and with today's modern software enhancements the hypervisor can still perform optimally.
  • Guest Machine. A guest machine, also known as a virtual machine (VM), is the workload installed on top of the hypervisor. This can be a virtual appliance, an operating system, or another type of virtualization-ready workload. The guest machine will, for all intents and purposes, believe that it is its own unit with its own dedicated resources. So, instead of using a physical server for just one purpose, virtualization allows multiple VMs to run on top of that physical host, with resources intelligently shared among them.
  • Host Machine. This is the physical host. Within virtualization there may be several components – SAN, LAN, cabling, and so on. In this case, we are focusing on the resources located on the physical server, such as RAM and CPU. These are divided between VMs and distributed as the administrator sees fit. So, a machine needing more RAM (a domain controller, for example) would receive that allocation, while a less critical VM (a licensing server, for example) would have fewer resources. With today's hypervisor technologies, many of these resources can be dynamically allocated.
  • Paravirtualization Tools. After the guest VM is installed on top of the hypervisor, there usually is a set of tools which are installed into the guest VM. These tools provide a set of operations and drivers for the guest VM to run more optimally. For example, although natively installed drivers for a NIC will work, paravirtualized NIC drivers will communicate with the underlying physical layer much more efficiently. Furthermore, advanced networking configurations become a reality when paravirtualized NIC drivers are deployed.
  • APIs. Application programming interfaces (APIs) dictate how some infrastructure components interact with other resources within a data center. Until recently, these software-based components were largely confined to specific areas of IT. Now there is quite a bit of interaction between APIs and the hypervisor specifically. There are new ways to tie in resources or integrate directly with a hypervisor to reduce the number of hops that resources have to take. Client-less security, application interdependence, and integration with key hardware components are all things that APIs can help with.
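The host/guest relationship described above – a fixed pool of physical resources carved up among VMs according to their importance – can be sketched in a few lines. This is an illustrative model only, not a real hypervisor API; the class name, method names, and VM names below are all hypothetical.

```python
class Host:
    """Models a physical host's pool of RAM (in GB) and virtual CPUs."""

    def __init__(self, ram_gb, vcpus):
        self.ram_gb = ram_gb   # unallocated RAM remaining on the host
        self.vcpus = vcpus     # unallocated vCPUs remaining on the host
        self.guests = {}       # guest VM name -> its resource slice

    def allocate(self, name, ram_gb, vcpus):
        """Carve a slice of the host's resources out for a guest VM."""
        if ram_gb > self.ram_gb or vcpus > self.vcpus:
            raise ValueError(f"host cannot satisfy request for {name}")
        self.ram_gb -= ram_gb
        self.vcpus -= vcpus
        self.guests[name] = {"ram_gb": ram_gb, "vcpus": vcpus}

# A more important VM (a domain controller) receives a larger allocation
# than a less critical one (a licensing server), just as an administrator
# might decide.
host = Host(ram_gb=64, vcpus=16)
host.allocate("domain-controller", ram_gb=16, vcpus=4)
host.allocate("licensing-server", ram_gb=4, vcpus=1)

print(host.ram_gb, host.vcpus)  # remaining pool after both allocations
```

Real hypervisors go further, of course – dynamically reclaiming and reallocating these resources while guests run – but the basic partitioning idea is the same.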

With that in mind, let’s examine how the hypervisor market has changed.



About the Author

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. His architecture work includes virtualization and cloud deployments as well as business network design and implementation. Currently, Bill works as the Vice President of Strategy and Innovation at MTM Technologies, a Stamford, CT based consulting firm.
