Understanding the Different Kinds of Infrastructure Convergence

One of the hottest trends in today's data center market is the conversation around converged infrastructure. But what does that actually mean? What is the difference between converged and hyper-converged systems?

As company computing demands change, what will the architecture that supports modern businesses and their cloud initiatives look like?

One of the hottest concepts to emerge is infrastructure convergence. We have unified architecture, converged storage, converged infrastructure, and now also hyper-convergence. But what does it all mean? How can convergence apply to your business and your use cases? Let's take a look at each type of converged infrastructure separately.

Unified Infrastructure

This is where the conversation begins. Traditionally, rack-mounted servers supported a one-application-per-server scenario. Virtualization changed all that. Unified infrastructure commonly describes a chassis and blade server environment. Here’s the big point to consider: the modern blade and chassis backplane has come a long way. In fact, you can now integrate directly into fabric interconnects to provide massive blade throughput. Furthermore, you can create hardware and service profiles that set hardware-based policies around things like UUID, WWN, MAC addresses, and more. Using this kind of architecture, you could build a follow-the-sun data center capable of onboarding new sets of users on the same hardware components simply by dynamically re-provisioning chassis resources through those hardware and service profiles. Although these kinds of systems are powerful and extremely agile, they can be pricey: high-end blade architectures cost considerably more than the alternatives. The most critical aspect to understand, however, is your use case and how blades might apply.
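
To make the service-profile idea a bit more concrete, here is a minimal, hypothetical Python sketch. The ServiceProfile and ChassisSlot names are purely illustrative and do not correspond to any vendor’s management API; the point is simply that identity (UUID, WWN, MAC, boot target) lives in a policy object that can be re-bound to a physical slot at will.

# Minimal, hypothetical sketch of a hardware/service profile.
# Names are illustrative only; real blade platforms expose this
# through their own management tools, not this code.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    uuid: str          # identity presented to the OS
    wwn: str           # Fibre Channel world-wide name
    mac: str           # NIC MAC address
    boot_target: str   # SAN LUN or PXE target to boot from

class ChassisSlot:
    """A physical blade slot that can take on any profile."""
    def __init__(self, slot_id: int):
        self.slot_id = slot_id
        self.profile = None

    def apply(self, profile: ServiceProfile):
        # Re-provisioning a blade is just re-binding identity: the same
        # hardware boots with a new UUID/WWN/MAC and boot target.
        self.profile = profile
        print(f"Slot {self.slot_id} now runs '{profile.name}' "
              f"(MAC {profile.mac}, WWN {profile.wwn})")

# Follow-the-sun example: the same slot serves two regions at
# different times of day simply by swapping profiles.
slot = ChassisSlot(slot_id=3)
slot.apply(ServiceProfile("emea-vdi", "uuid-01",
                          "20:00:00:25:b5:00:00:01",
                          "00:25:b5:00:00:01", "san-lun-emea"))
slot.apply(ServiceProfile("apac-vdi", "uuid-02",
                          "20:00:00:25:b5:00:00:02",
                          "00:25:b5:00:00:02", "san-lun-apac"))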

  • Use cases: A chassis and blade environment is great for a big scale-out architecture, such as a large telecom or service provider. This kind of environment utilizes a vast number of resources and might need to deploy hundreds if not thousands of racks of gear. Blades can isolate workloads, create powerful orchestration rules, and provide dynamic support for business needs.

Converged Node-Based Architecture

The evolution of compute and storage took a turn when converged infrastructure was introduced. Basically, these are smaller node-based units that combine storage and compute in one box, sometimes referred to as an appliance. Need to grow? Simply add another node or a full appliance and go. This has become a fantastic way to improve data center resource utilization: instead of purchasing pricier gear and storage, organizations can offload big workloads to smaller converged infrastructure nodes.
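
As a rough illustration of that scale-out model, here is a small, assumed Python sketch (not any vendor’s sizing tool): each appliance node bundles compute and storage, and the cluster’s total capacity grows in lock-step as nodes are added.

# Toy model of node-based scale-out: each appliance node bundles
# compute and storage, so the cluster grows linearly with node count.
from dataclasses import dataclass

@dataclass
class ApplianceNode:
    cores: int
    ram_gb: int
    storage_tb: int

class ConvergedCluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node: ApplianceNode):
        # "Need to grow? Add another node and go."
        self.nodes.append(node)

    def capacity(self):
        return {
            "cores": sum(n.cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }

# Start with a four-node appliance, then expand with a fifth node later.
cluster = ConvergedCluster()
for _ in range(4):
    cluster.add_node(ApplianceNode(cores=24, ram_gb=256, storage_tb=10))
print(cluster.capacity())  # {'cores': 96, 'ram_gb': 1024, 'storage_tb': 40}
cluster.add_node(ApplianceNode(cores=24, ram_gb=256, storage_tb=10))
print(cluster.capacity())  # capacity grows in lock-step with node count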

By pushing a lot of resources directly into the workloads sitting on top, converged infrastructure makes a great scale-out solution. There are some cautions, though. Many converged infrastructure solutions support only one hypervisor model or another; if you’re a XenServer shop, for example, be aware of what you can integrate with your environment. Also, many converged infrastructure technologies won’t integrate with things like FC/FCoE. Still, if you’ve got a solid use case for a converged infrastructure technology, you’ll be happy with great performance and a solid price.

  • Use cases: A medium-sized organization wanting to offload VDI architecture from traditional rack-mount servers and a standard SAN to a more efficient, price-conscious platform may choose a four-node appliance that can later be upgraded. A compact converged appliance helps eliminate several servers in a rack, frees up a lot of disk, and improves performance. Furthermore, desktop and application delivery architectures all sit under one hypervisor, making it even easier to manage resources between the converged infrastructure unit and the VMs.

Hyper-Converged Infrastructure

This is where it gets a bit more interesting. First, let’s differentiate between converged and hyper-converged infrastructures. The key differentiating point – and the whole premise behind hyper-convergence – is that this model doesn’t actually rely on the underlying hardware. Not entirely, at least. This approach truly converges all aspects of data processing at a single compute layer, dramatically simplifying storage and networking through software-defined approaches. The same compute system now works as a distributed storage system, taking away chunks of the complexity in storage provisioning and bringing storage technology in tune with server technology refreshes.
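
Here is a deliberately simplified, hypothetical Python sketch of that idea. The Node and SoftwareStorageController classes are invented for illustration; in real hyper-converged platforms a controller VM or service runs on every node, but the core concept is the same: local disks are pooled into one distributed datastore and writes are replicated across nodes.

# Simplified, illustrative model of a software storage controller that
# pools the local disks of every compute node into one distributed
# datastore and replicates each write across nodes.
import itertools

class Node:
    def __init__(self, name, local_tb):
        self.name = name
        self.local_tb = local_tb
        self.blocks = {}   # block_id -> data held on this node

class SoftwareStorageController:
    """Runs as a VM/service on each node; modelled centrally here."""
    def __init__(self, nodes, replicas=2):
        self.nodes = nodes
        self.replicas = replicas
        self._placement = itertools.cycle(nodes)

    def write(self, block_id, data):
        # Place each block on `replicas` different nodes so losing one
        # server does not lose data.
        targets = [next(self._placement) for _ in range(self.replicas)]
        for node in targets:
            node.blocks[block_id] = data
        return [n.name for n in targets]

    def pool_capacity_tb(self):
        # Storage capacity scales with the servers themselves.
        return sum(n.local_tb for n in self.nodes)

nodes = [Node(f"hci-{i}", local_tb=24) for i in range(1, 4)]
ctrl = SoftwareStorageController(nodes)
print(ctrl.pool_capacity_tb())                   # 72, seen as one datastore
print(ctrl.write("vm-disk-001/blk-0", b"data"))  # e.g. ['hci-1', 'hci-2']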

Here’s the big piece to remember: since the key aspect of hyper-convergence is software handling the storage controller functionality, the model is completely hardware-agnostic. This means hyper-convergence completely abstracts the management process and lets you custom-build the underlying hardware stack, which in turn could lead to some serious cost savings. What if you prefer one type of vendor because your entire data center is built around them? Fine. Maybe you like white-box or commodity servers? Those work too.

As long as a hyper-converged virtual appliance is running in the hypervisor, you can control the underlying set of resources. Furthermore, this level of convergence opens up a new level of API integration. With an open API architecture and a lot of intelligence in the software, new kinds of hyper-convergence technologies can integrate with OpenStack, CloudStack, IBM, vCenter, vCAC, VVOLs, VAAI, S3, and more. This takes the conversation around convergence to a whole new level: compute, storage, and networking functionality converged on a single device through intelligent software and basic hardware components.

Let’s assume one set of hardware runs one kind of hypervisor in a primary data center, while another (different vendor’s) set of hardware runs a different type of hypervisor in a secondary data center. As long as the same hyper-convergence virtual appliance controls the underlying resources – while connected to both data centers – entire data sets and VMs can be migrated between the heterogeneous infrastructures.
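
Here is a hypothetical sketch of why that works (the site names, hypervisors, and ControlPlane class are illustrative, not a vendor workflow): as long as one software control plane fronts both sites, a VM’s disks can be detached from one and re-registered at the other, regardless of the hardware or hypervisor underneath.

# Hypothetical sketch of migration under a single control plane; the
# sites, hypervisors, and classes are illustrative, not a vendor workflow.
class Site:
    def __init__(self, name, hypervisor):
        self.name = name
        self.hypervisor = hypervisor
        self.vms = {}   # vm_name -> disk image (opaque bytes)

class ControlPlane:
    """One software control plane connected to both data centers."""
    def __init__(self, sites):
        self.sites = {s.name: s for s in sites}

    def migrate(self, vm_name, src, dst):
        source, target = self.sites[src], self.sites[dst]
        disk = source.vms.pop(vm_name)   # detach from the source site
        target.vms[vm_name] = disk       # re-register at the target site
        print(f"{vm_name}: {src} ({source.hypervisor}) -> "
              f"{dst} ({target.hypervisor})")

primary = Site("dc-primary", hypervisor="vSphere")
secondary = Site("dc-secondary", hypervisor="XenServer")
primary.vms["erp-app-01"] = b"<disk image>"

plane = ControlPlane([primary, secondary])
plane.migrate("erp-app-01", "dc-primary", "dc-secondary")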

  • Use cases: Your organization is growing very quickly, both organically and through acquisitions. This means a constant rotation of different hardware sets, new data center additions, and support for an ever-growing number of users. This is where hyper-convergence really shines. To absorb such a large number of new users, you deploy two 24TB appliances built around your vendor of choice. From there, you deploy software-defined storage policies and work to create a central storage control infrastructure. Now you have complete visibility into all storage resources while still processing workloads on the hyper-converged platform. As another initiative, you plan to migrate the workloads controlled by the hyper-converged VM appliance into the cloud. The cool aspect of working with this kind of VM-based appliance is its ability to integrate with OpenStack, vCAC, and other cloud orchestration platforms. Now this organization can control resources located both on-premises and in the cloud.

The reality here is that we’re creating a much more fluid data center architecture. Soon, an entire hardware stack will be abstracted and managed from the virtual layer. The ultimate goal is to allow data, VMs, and applications to flow from on-premises data centers to the cloud and everywhere in between. This agility allows organizations to respond quickly to new kinds of business demands by applying resources precisely where they’re needed. The future of the data center revolves around supporting an ever-evolving user. Hyper-convergence lets you utilize heterogeneous hardware systems, coupled with different hypervisors, to deliver dynamic resources to a variety of points. Moving forward, businesses will continue to depend more and more on the underlying data center. Keeping your infrastructure agile will help you retain your competitive edge.
