The Case for Composable Infrastructure

Automation still hasn’t pushed very far into today’s enterprise IT, but the wait may be over with the arrival of a new category of infrastructure – composable infrastructure.

Ric Lewis is the SVP & GM for Data Center Infrastructure, HPE.

After all that we’ve been hearing about software-defined everything and infrastructure as code for the past couple of years, CIOs could be forgiven for looking around and saying “Hey, it’s 2016, where’s my programmable data center?” The fact is, automation still hasn’t pushed very far into today’s enterprise IT. But the wait may be over with the arrival of a new category of infrastructure – composable infrastructure – that delivers many of the long-awaited benefits of programmability.

“Composable” is actually not a bad word to describe it. Composable infrastructure is a single package comprising three things: a new kind of hardware, a software intelligence to control it, and a single API to interact with it. What’s new about the hardware is that it turns the core elements of the data center – compute, storage and networking fabric – into pools of resources that can easily be assembled or “composed” to fit the needs of any application. All three elements are designed from the ground up to work together as one. They can be deployed in any kind of operating environment, be it bare-metal, virtualized, or containerized.

The native software intelligence provides a single management interface and handles complexity behind the scenes, making the whole system software-defined. And the API lets people and tools communicate with the infrastructure simply, turning tasks like provisioning infrastructure for a new application into the equivalent of one click: a single line of code. The API also plugs into DevOps tools like Ansible, Chef, Docker and Puppet, so that developers can make the infrastructure dance using the tools they’re already familiar with.
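
To make that concrete, here’s a minimal sketch of what a template-driven provisioning request to a unified infrastructure API might look like. The endpoint, field names, and template name are assumptions for illustration only, not any particular vendor’s API:

```python
import requests

API = "https://composable.example.com/rest"   # hypothetical unified API endpoint
HEADERS = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}

# One request describes the whole workload; the software intelligence composes
# compute, storage, and fabric from the shared pools behind the scenes.
payload = {
    "name": "mobile-backend-prod",
    "template": "docker-container-host",            # illustrative template name
    "compute": {"cores": 16, "memoryGiB": 64},
    "storage": {"capacityGiB": 500, "tier": "flash"},
    "network": {"fabric": "prod", "bandwidthGb": 10},
}

resp = requests.post(f"{API}/workloads", json=payload, headers=HEADERS, timeout=30)
resp.raise_for_status()
print("Provisioned:", resp.json().get("uri"))      # response fields are hypothetical too
```

Because DevOps tools ultimately drive that same single API, a call like this can live just as comfortably inside an Ansible playbook or Chef cookbook as in a standalone script.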

That’s it. Sounds simple, and in essence it is, however dazzling the underlying technology may be. But that simplicity enables CIOs to address one of the biggest challenges they face today: how to find an efficient way to meet the infrastructure demands of the new breed of applications built on mobile, cloud, and big data technologies.

Give Me Next-Gen Apps – But Don’t Touch Those Dials

The proliferation of the new generation of apps is only going to accelerate. When a few touches in an airline app on your smartphone let you check flight availability, seat positions, standby lists … you know you’re looking at the new face of IT. Indeed for many of us, the new face of IT is the new face of business – any business. Consumers love it. CIOs love it too, and they love that, as a result, they’re getting called more often into the C-suite discussions that matter: revenue, profit, growth.

At the same time, the rise of next-gen apps puts CIOs in a bind. It calls for super-flexible, development-friendly infrastructure that you can set up quickly and change easily and often. But how to provide that while continuing to ensure total reliability for the mission-critical, don’t-mess-with-the-dials applications – enterprise resource planning, databases, email – where constant change is the last thing you want?

Until now the answer, often enough, has been to keep the usual on-premises setup for the traditional applications and turn to the big public cloud providers for the new breed. But IT leaders have a long list of reasons to keep data on-premises: security, compliance, performance, ease of data-sharing across applications. Cost is a factor too; it’s easy to run up big bills when a large volume of traffic runs through the public cloud.

Not Either/Or, But Both

Composable infrastructure neatly resolves the dilemma by supporting both the traditional and the new environments. Here’s how it works. Take a mobile banking application, for example. The bank’s system has two components. A mobile back-end receives requests from the app on your phone and figures out what to do – transfer some money, show a balance. Behind that sits a traditional database that keeps track of the accounts and does all the computation behind the scenes.

With composable infrastructure, the bank can easily assign resources for both types of application. When you plug in compute, storage, or networking capacity, the infrastructure automatically discovers it and makes it available for any workload. To provision the mobile back-end, the bank selects a software template from a built-in library and assigns it via a single line of code. Let’s say it’s a containerized application that uses the open source tool Docker. The workload simply drops into the infrastructure with the right ratio of compute, storage, and networking resources, and those resources scale independently and automatically.
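
As a purely hypothetical sketch of that workflow (the helper function, template names, and settings below are invented for illustration, not a real product API), composing both of the bank’s workloads from the same pools might look like this:

```python
import requests

API = "https://composable.example.com/rest"   # hypothetical unified API endpoint
HEADERS = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}

def compose(name, template, **settings):
    """Apply a template from the built-in library to pooled resources."""
    body = {"name": name, "template": template, **settings}
    r = requests.post(f"{API}/workloads", json=body, headers=HEADERS, timeout=30)
    r.raise_for_status()
    return r.json()

# The containerized mobile back-end: the "single line of code" described above.
compose("mobile-backend", "docker-container-host", replicas=3, autoscale=True)

# The traditional database draws on the same pools, but runs bare metal.
compose("accounts-db", "bare-metal-database", memoryGiB=512, storageTier="flash")
```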

Deploying the traditional database application works the same way. The bank can run it in a different environment than the containerized mobile back-end – bare metal, virtualized, it doesn’t matter. No manual configuration is needed; the infrastructure configures itself.

If this sounds reminiscent of a cloud services portal, it should. Composability has the same infrastructure-as-code attractions for developers: they can just pull whatever storage and compute they need and get apps into production quickly without getting tied up in the details of infrastructure configuration.

On the traditional IT side, composability offers benefits beyond the stability needed for the bet-the-business legacy apps. It’s not unusual for companies to overprovision their traditional infrastructure by 70 percent or more just to be ready for peak loads. Composable infrastructure spreads spare capacity across all of the applications running in the data center and makes it instantly available, so it reduces cost immediately by reducing the need to overprovision.
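
A rough back-of-the-envelope calculation shows why pooling the headroom pays off. Every number below except the 70 percent figure is an assumption chosen purely for illustration:

```python
# Illustrative arithmetic only: the app count, loads, and pooled-headroom figure
# are assumptions; only the ~70% overprovisioning figure comes from the text.
apps = 10
avg_load = 10                    # capacity units each app needs on average
per_app_headroom = 0.70          # each silo sized 70% above its average load

siloed = apps * avg_load * (1 + per_app_headroom)    # 170 units of capacity

# A shared pool only has to cover the combined peak, and ten applications
# rarely peak at the same moment; assume 25% pooled headroom is enough.
pooled = apps * avg_load * 1.25                      # 125 units of capacity

print(f"siloed: {siloed:.0f} units, pooled: {pooled:.0f} units, "
      f"avoided: {siloed - pooled:.0f} units")
```

The exact numbers don’t matter; the point is that siloed headroom is paid for once per application, while pooled headroom only has to cover the combined peak.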

Standing up new hardware can be painfully slow in the traditional world. It can take close to a month from when a new box arrives in the shop until it’s actually usable, because of all the provisioning and configuring involved. With composability, it takes just that one line of code – three minutes.

Composable infrastructure can be deployed incrementally, side-by-side with existing infrastructure, in a way that makes sense for the business. It can scale across multiple chassis or multiple racks. You can start with a pilot program, perhaps as part of the standard refresh cycle, to become familiar with the technology. As the concept gains traction in the market, vendors will be climbing on board in increasing numbers, and it’s important to know whether what they’re claiming as composability is the real thing – see the infographic for some pointers.

As a description of a new way to arrange and orchestrate data center resources, the “composable” metaphor is pretty apt. At any rate, it’s one that’s about to become very familiar to IT leaders.
