
Clustering in the Cloud: Has The Holy Grail Arrived?


Noam Shendar is VP of Business Development at Zadara Storage, a provider of enterprise Storage as a Service (STaaS) solutions. He has more than 15 years of experience with enterprise technologies, with roles at LSI Corporation, MIPS Technologies, entertainment technology startup iBlast, and Intel.

NOAM SHENDAR
Zadara Storage

Cloud economics are so compelling that more data center managers are evaluating which additional applications make sense in the cloud, whether in their own private cloud, a hybrid option, or a public cloud from Amazon Web Services or a service provider. Yet there are reasons why proven enterprise-class features are considered as such: they deliver, reliably, against agreed-upon SLAs.

After growing accustomed to them, data center managers are loath to give them up.

Enterprise Storage

This is particularly true of traditional enterprise storage system features. One such feature is clustering, the standard enterprise method for achieving high availability for mission-critical applications. Clustering works by having multiple servers run the same application, so that the failure of any one server does not cause downtime; the surviving servers "pick up the slack" for the failed one. It is de rigueur for databases, and because so much of enterprise computing runs on a database, clustering is effectively on the punch list of features without which most enterprise applications simply cannot move to the cloud.
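At its core, the pattern is a monitor-and-take-over loop. The following minimal Python sketch illustrates the idea only; the hostname, port, and thresholds are hypothetical, and real cluster managers such as Windows Failover Cluster add quorum, fencing, and shared-storage arbitration on top of this:

```python
import socket
import time

# Hypothetical values: a standby node probes the active node's health-check
# port and assumes the active role if enough consecutive probes fail.
ACTIVE_NODE = ("app-node-1.example.com", 7000)   # assumed hostname and port
PROBE_INTERVAL_S = 2
FAILURES_BEFORE_TAKEOVER = 3

def node_is_alive(addr, timeout=1.0):
    """Return True if a TCP connection to the active node's health port succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def take_over():
    """Placeholder for promotion: mount the shared volume, start the
    application, claim the service address, and so on."""
    print("Active node unreachable; standby is taking over the workload")

def standby_loop():
    misses = 0
    while True:
        misses = 0 if node_is_alive(ACTIVE_NODE) else misses + 1
        if misses >= FAILURES_BEFORE_TAKEOVER:
            take_over()
            return
        time.sleep(PROBE_INTERVAL_S)

if __name__ == "__main__":
    standby_loop()
```

The key point for this discussion is that the take-over step only works if the standby can reach the same storage the failed node was using, which is exactly where cloud storage has historically fallen short.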

To date, leveraging clustering in the cloud has required that IT teams rewrite legacy applications specifically for cloud deployment because the storage system constrains the database for one or both of the following reasons:

(1) The storage is too slow, requiring the application to be broken up into parallel processes, each running against slow storage but in aggregate producing sufficient performance (a sketch of this kind of restructuring follows the list).

(2) The storage lacks certain capabilities, such as volume sharing or protocol support for NFS or CIFS, which legacy applications commonly require.
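To make reason (1) concrete, the rework typically looks something like the sketch below: a single slow, sequential read path is broken into many parallel workers so that aggregate throughput becomes acceptable. This is an illustrative Python sketch only; the mount point, object layout, and worker count are assumptions, not anything prescribed by a particular cloud:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical layout: the dataset has been split into parts so that many
# slow reads can run in parallel against cloud-backed storage.
OBJECT_KEYS = [f"dataset/part-{i:04d}" for i in range(64)]   # assumed layout
MOUNT_POINT = "/mnt/cloud-volume"                            # assumed mount

def read_part(key):
    """One slow read against cloud-backed storage."""
    with open(f"{MOUNT_POINT}/{key}", "rb") as f:
        return f.read()

def read_all_parallel(keys, workers=16):
    # Each individual read is slow, but running many at once hides the
    # per-request latency and sums the available throughput.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_part, keys))

if __name__ == "__main__":
    parts = read_all_parallel(OBJECT_KEYS)
    print(f"read {sum(len(p) for p in parts)} bytes across {len(parts)} parts")
```

Threads are used here because the work is I/O-bound; the same split could just as well be spread across separate processes or separate cloud instances.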

Certainly, very few IT groups have time for this added work, and so applications requiring clustering have been forced either to stay out of the cloud or to use "managed hosting" options, where the service provider creates a private setup for the customer using dedicated equipment. This approach is expensive and rigid, requiring long lead times to modify and multi-year commitments.

Enter Software-Defined Storage

The software-defined storage movement is changing all of the above, and rapidly. Select third-party solutions can create clusters in the cloud despite cloud networking limitations, such as the lack of IP multicast and iSCSI persistent reservations, and despite the inability to present a single volume to multiple cloud servers or instances.

Depending on the extent of these limitations, even basic file sharing for collaborative work such as CAD, CAM and other shared workloads may not be possible. For most legacy applications (SQL Server, Exchange, Oracle), clustering is standard, and moving these applications to the cloud while leaving the clustering behind is not an option.

There are also novel options made possible by new software-defined approaches that intelligently share screaming-fast storage hardware among multiple customers without compromising privacy or performance.

To support clustering, cloud-based storage approaches need a punch list of features:

  • Application and/or OS with clustering support (e.g., Red Hat Failover Cluster or Windows Failover Cluster).
  • Volume sharing, so that the same volume can be mounted to all the servers in the cluster, instead of allowing just one-volume-to-one-server attachment, as is typically found in cloud storage solutions.
  • Support for SCSI Persistent Reservations, to allow the servers to avoid modifying the same data simultaneously. Essentially, each server can temporarily lock other servers out of a data region on which it is working; this capability is common in on-premises storage systems but only emerging in the cloud.
  • IP multicast or Layer 2 communication support among the servers in the cluster, in order for the servers to ascertain each other's health.
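
The last item lends itself to a concrete illustration. The sketch below shows, in minimal Python, the sort of heartbeat traffic cluster nodes exchange over IP multicast to judge each other's health; the multicast group, port, and node name are hypothetical, and real cluster stacks layer membership and quorum protocols on top of this:

```python
import socket
import struct
import sys
import time

# Hypothetical values for illustration only.
MCAST_GROUP = "239.1.1.1"   # assumed cluster heartbeat group
MCAST_PORT = 5405           # assumed heartbeat port
NODE_ID = "node-1"          # assumed identity of this node

def send_heartbeats():
    """Periodically announce this node's liveness to the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
    while True:
        sock.sendto(NODE_ID.encode(), (MCAST_GROUP, MCAST_PORT))
        time.sleep(1)

def listen_for_heartbeats():
    """Join the multicast group and print heartbeats received from peers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    membership = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    while True:
        data, addr = sock.recvfrom(1024)
        print(f"heartbeat from {data.decode()} at {addr[0]}")

if __name__ == "__main__":
    listen_for_heartbeats() if "--listen" in sys.argv else send_heartbeats()
```

When a cloud network blocks multicast and offers no Layer 2 alternative, this traffic simply never arrives, and the cluster software cannot establish which nodes are alive.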

Moreover, the storage needs to scale up in performance as the number of servers in the cluster grows and as each server produces more I/O, which may be a challenge in itself, since these applications typically demand high IOPS to begin with.

Since mid-2013, Deloitte Consulting’s Infrastructure and Operations hosting business unit has used Dimension Data’s servers and their IP multicast feature, along with a Storage as a Service (STaaS) solution, provided by Dimension Data from Zadara Storage, that supports native SQL clustering. In addition to using Zadara to provide more than 10 TB of storage to numerous clients, Deloitte’s Managed Analytics SaaS offering uses Zadara to provide information and insight into the cloud environments Deloitte hosts for clients, helping those companies run their IT infrastructures more efficiently.

Several of Deloitte’s clients have SLAs at the four-nines or five-nines level, in demanding sectors such as retail, inventory management and patient claims, where there is little room for cloud-induced system hiccups. Typically, Deloitte would have taken a hybrid approach: placing the web and application tiers in the cloud while keeping physical servers on site, connected to clustered storage resources. But doing so meant maintenance time and sunk storage costs. Deloitte would have been forced to buy allotments of storage, usually more than it needed, if and when growth occurred, and to physically connect new resources to existing ones as its network scaled.

By clustering in the cloud rather than on physical resources, Michael Hayes, master specialist in Deloitte Consulting’s Infrastructure & Operations group, said the firm gained a nimbleness in meeting client needs that was impossible when physical resources had to be purchased and installed on site as customers’ needs grew.

As formerly impossible enterprise capabilities such as clustering become possible in the cloud, the Holy Grail of the cloud, where all IT resources are flexible, performant and economical, even at high availability and at scale, is becoming a reality. Data center managers should take note: they no longer have to compromise between the must-have features that once dictated staying on-premises and the management requirements of running an optimal operation.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
