Alan Conboy, Office of the CTO, Scale Computing.
Many companies that could benefit tremendously from switching to hyperconverged infrastructure haven’t done so. One big reason they haven’t is confusion that results from common misuse and misunderstanding of the term “hyperconvergence.”
Hyperconverged infrastructure is sometimes referred to as a “data center in a box” because, after the initial cabling and minimal networking configuration, it delivers all of the features and functionality of the traditional 3-2-1 virtualization architecture. That 3-2-1 model got the job done in a few key areas, but it is the opposite of what businesses need today.
The 3-2-1 model consists of virtual machines (VMs) running on three or more clustered host servers, connected by two network switches, and backed by a single storage device – most commonly, a storage area network (SAN). The problem is that the virtualization hosts depend completely on the network, which, in turn, depends completely on the single SAN. Everything rests upon a single point of failure: the SAN.
When hyperconvergence was first introduced, it meant a converged infrastructure solution that natively included the hypervisor for virtualization. The “hyper” wasn’t just hype, as it often is today. This is a critical distinction, as it has specific implications for how the architecture can be designed for greater storage simplicity and efficiency.
Anyone can provide a native hypervisor. Hypervisors have become a market commodity with little feature difference between them. With free, open-source options like KVM, any vendor can build a hypervisor that is unique and specialized to the hardware in its hyperconverged appliances. Even so, many vendors choose to stay with converged infrastructure models, perhaps banking on the market dominance of VMware, even as many customers flee the high prices of VMware licensing.
Saving money is only one of the benefits of hyperconverged infrastructure. By utilizing a native hypervisor, storage can be architected and embedded directly with the hypervisor, eliminating inefficient storage protocols, file systems, and virtual storage appliances (VSAs). The most efficient data paths allow direct access between the VM and storage. This is only achievable when the hypervisor vendor is the same as the storage vendor. When one vendor owns both components, it can design the hypervisor and storage to interact directly, resulting in a huge increase in efficiency and performance.
In addition to storage efficiency, having the hypervisor included natively in the solution eliminates another vendor, increasing management efficiency. A single vendor that provides the servers, storage, and hypervisor makes the overall solution easier to support, update, patch, and manage, without the traditional compatibility issues and vendor finger-pointing. Ease of management represents significant savings in both time and training.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating.