Whatever Happened to High Availability?


Kai Dupke is Sr. Product Manager, SUSE LLC, a pioneer in open source software and enterprise Linux.

KAI DUPKE
SUSE

You don’t hear a lot about high availability (HA) these days, what with all the media attention focused on cloud computing. Five years ago, high availability and clustering were a big part of the IT conversation. These days, not so much. But high availability is still a key part of the IT narrative, whether you hear about it or not.

High availability has been lost in the din about cloud computing because it has never been part of the cloud story’s expectations. IT shops looking at cloud computing are seeking the benefits of agility and lower cost instead.

Application development on the UNIX and Linux platforms has traditionally taken the stance that the infrastructure would shoulder most, if not all, of the high availability (HA) responsibilities. The storage layer would include RAID arrays, the networking layer would offer redundant network configurations, and the operating system would include HA features to ensure maximum uptime for the application.

There is some HA workload at the application layer, of course: support for clustering is one way application developers have been able to incorporate HA features.
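To make that concrete, here is a minimal sketch of the heartbeat-and-failover pattern that underlies application-level clustering. Everything in it is an illustrative assumption, not taken from any real cluster stack: the peer address, port, and timing values are invented, and take_over_service() stands in for whatever real failover action (claiming a virtual IP, promoting a replica) an application would need.

```python
import socket
import time

# Hypothetical values -- the peer address, port, and timings are
# illustrative assumptions, not taken from any real cluster stack.
PEER = ("10.0.0.2", 7000)        # the other node's heartbeat port
CHECK_INTERVAL = 2.0             # seconds between health checks
FAILURES_BEFORE_TAKEOVER = 3     # tolerate brief network blips

def peer_is_alive(timeout=1.0):
    """Return True if the peer accepts a TCP connection on its heartbeat port."""
    try:
        with socket.create_connection(PEER, timeout=timeout):
            return True
    except OSError:
        return False

def take_over_service():
    """Stand-in for the real failover action: claim a virtual IP,
    start the service, promote a replica, and so on."""
    print("peer is down; taking over the service")

def monitor():
    failures = 0
    while True:
        failures = 0 if peer_is_alive() else failures + 1
        if failures >= FAILURES_BEFORE_TAKEOVER:
            take_over_service()
            break
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    monitor()
```

Real cluster stacks layer quorum and fencing on top of a loop like this to avoid split-brain, and that is exactly the kind of complexity application teams inherit once the infrastructure stops providing it.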

High Availability Still in Infrastructure Layers

Even as enterprise customers move to a more virtualized infrastructure, such as private clouds or virtual data centers, HA is still very much centered at the infrastructure layers, not at the application layer. There may be some HA support at the virtual layers, naturally, but that’s still part of the infrastructure narrative.

Listening to the public cloud story, however, you get a much different tale. In public clouds, the expectations placed on the infrastructure layer are not as high as they were for legacy systems. It’s more of a commodity, get-what-you-pay-for mentality when it comes to the infrastructure, so application developers have to take the only path open to them: build HA functionality into their applications.
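In practice, that often means the application itself retries across redundant instances. The sketch below shows one common form of the pattern, client-side failover; the endpoint URLs are hypothetical, and this is just one way such logic tends to look, not a method prescribed here.

```python
import urllib.error
import urllib.request

# Hypothetical redundant endpoints -- illustrative assumptions only.
ENDPOINTS = [
    "http://app-1.example.com/api/status",
    "http://app-2.example.com/api/status",
]

def fetch_with_failover(endpoints, timeout=2.0):
    """Try each redundant endpoint in turn and return the first
    successful response body; raise only if every endpoint fails."""
    last_error = None
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as exc:   # URLError is a subclass of OSError
            last_error = exc     # this endpoint is down; try the next
    raise RuntimeError(f"all endpoints failed: {last_error}")

if __name__ == "__main__":
    print(fetch_with_failover(ENDPOINTS))
```

Note what has happened here: retry logic, endpoint lists, and failure thresholds have all migrated out of the infrastructure and into application code.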

This is not to tear down the public cloud; the flexibility and cost structure of the public cloud are part of why it can work for so many organizations. Plus, there is the very real logistical challenge of trying to apply HA principles to a public cloud. As Japan learned to its dismay in 2011, supporting public clouds en masse is beyond current technology.

Cloud Doesn’t Work For Everything

But HA is still a necessary part of IT, because not every IT department needs all of its services out in the cloud.

First, there are the very real costs of migrating to the cloud. Because clouds today do not provide HA, customers are asked to rewrite their applications. Because the cloud is missing a crucial feature, customers have to take action and spend money to do something the infrastructure should do anyway.

It’s been cool to watch marketing departments turn this additional workload into a benefit. It’s like selling a car without a steering wheel. “Bring your own steering wheel; that way, no one else can drive your car,” is how some companies are selling the cloud.

The fact is, the biggest inhibitor for cloud computing is the lack of the infrastructure support that many business applications need. This is complicated by the fact that most of these are third-party applications, not built by the companies that use them.

To obtain the benefits of HA in the cloud, you could argue that these third-party vendors should open up the setup of, and access to, their applications. That sounds good on paper, but it means every third-party vendor would end up creating its own way of doing this, multiplying the effort of getting HA at the application level.

What’s the answer?

