(Photo by Michael Bocchieri/Getty Images)

Why Hyperconverged Infrastructure is so Hot

LAS VEGAS – Hyperconverged infrastructure did not exist as a concept two or three years ago. Today, it is one of the fastest-growing methods for deploying IT in the data center, as IT departments look for ways to adjust to their new role in business and new demands that are placed on them.

Gartner expects the market to grow from zero in 2012 to $5 billion by 2019, making hyperconvergence the revenue leader among pre-integrated full-stack infrastructure products, a category that also includes reference architectures, integrated infrastructure, and integrated stacks.

“Hyperconvergence simply didn’t exist two years ago,” Gartner analyst Andrew Butler said. “Near the end of this year, it’s an industry in its own right.” But, he added, the industry has a lot of maturation ahead of it, which means many of the vendors in the space today will no longer be in it a few years from now.

In a session at this week’s Gartner data center management summit here, Butler and his colleague George Weiss shared their view of what hyperconverged infrastructure is, why it’s so hot, and what it all might mean for data center managers.

They also addressed some of the most pervasive myths about hyperconvergence. Check those out in a separate post here.

What is Hyperconverged Infrastructure?

Given that the concept is only about two years old, it’s worth explaining what hyperconverged infrastructure is and how it’s different from its cousin converged infrastructure.

Hyperconvergence is the latest step in the now multiyear pursuit of infrastructure that is flexible and simpler to manage, or as Butler put it, a centralized approach to “tidying up” data center infrastructure. Earlier attempts included integrated systems and fabric infrastructure, which usually involved SANs, blade servers, and a lot of money upfront.

Converged infrastructure has similar aims but in most cases seeks to collapse compute, storage, and networking into a single SKU and provide a unified management layer.

Hyperconverged infrastructure seeks to do the same, but adds more value by throwing in software-defined storage and doesn’t place much emphasis on networking. The focus is on data control and management.

Hyperconverged systems are also built using low-cost commodity x86 hardware. Some vendors, especially early entrants, contract with manufacturers like Supermicro, Quanta, or Dell for the hardware and add value with software. More recently, we have seen the emergence of software-only hyperconverged plays, as well as hybrid plays, where a vendor may sell software on its own but will also provide hardware if needed.

Today, hyperconverged infrastructure can come as an appliance, a reference architecture, or as software that’s flexible in terms of the platform it runs on. The last case is where it’s sometimes hard to tell the difference between a hyperconverged solution and software-defined storage, Butler said.

Why is Hyperconvergence So Hot?

To understand why hyperconvergence has gotten so popular so quickly, it’s necessary to keep in mind other trends taking place at the same time.

There’s pressure on IT departments to be able to provision resources instantly; more and more applications are best-suited for scale-out systems built using commodity components; software-defined storage promises great efficiency gains; data volume growth is unpredictable; and so on.

More and more enterprises look to software products and services as a way to grow revenue and therefore want to adopt agile software development methodologies, which demand a high degree of flexibility from IT. In other words, they want to create and deploy software much more often than they used to, so IT has to be ready to get new applications up and running quickly.

How Companies Use it

But at this point, companies seldom use hyperconverged infrastructure for those purposes. Today, it’s used primarily to deploy general-purpose workloads, virtual desktop infrastructure, analytics (Hadoop clusters for example), and for remote or branch office workloads.

In fewer cases, companies use it to run mission-critical applications, server virtualization, or high-performance storage. In still fewer instances, hyperconverged infrastructure underlies private or hybrid clouds, or the agile environments that support rapid software-release cycles.

Gartner expects this to change, as the market evolves and users become more familiar with the architecture.

It Will Not Solve World Hunger

It’s important to keep in mind that hyperconvergence is just one approach to infrastructure, not the ultimate answer to the IT department’s problems. Vendors still have to prove themselves and show that their solutions have staying power, and that they can beat competition from SAN and blade solutions, which are very much alive and kicking.

Hyperconverged infrastructure’s promise is simplicity and flexibility, but those two words mean different things to different people. When thinking about hyperconvergence, Gartner’s advice is to figure out what those words mean to you and then see which vendor’s message resonates the most with that.

“It’s not going to solve world hunger,” Butler said. “It is an interesting solution [when used] in the right place.”


About the Author

San Francisco-based business and technology journalist. Editor in chief at Data Center Knowledge, covering the global data center industry.
