The Hyperconverged Approach to Increasing Efficiency

A focus on simplification and efficiency is the best way to attack a vast and sprawling problem, and in the case of storage management this means consolidation.

Mohit Aron is CEO and Founder of Cohesity.

As the volume and complexity of data continue to increase, CIOs and system administrators must consider a structurally new approach to managing secondary storage. A focus on simplification and efficiency is the best way to attack a vast and sprawling problem, and in the case of storage management this means consolidation. Companies today have an enormous opportunity to cut storage costs (or at least halt their growth) and eliminate management headaches by consolidating secondary storage use cases, such as data protection, test and development, and file services, on a single platform.

Secondary Storage Solutions Have Multiplied Beyond Control

Two major forces are driving today’s need for consolidation: the proliferation of point solutions to handle different secondary storage use cases and the exponential growth in the amount of data organizations store. Although the term “secondary storage” is relatively new, it’s been widely adopted to succinctly wrap up the array of storage workflows that aren’t dedicated to mission-critical operations. IT administrators have used “primary storage” to describe high-performance workloads for years, but only recently have use cases like disaster recovery, archive, test and development, and analytics been grouped together under the secondary storage umbrella.

The term “secondary storage” recognizes that these distinct use cases share two traits: they don’t demand the high-performance SLAs of mission-critical workloads, and they benefit from a unified approach. Managing different point solutions for archiving, backup, test/dev and analytics (just to name a handful of the secondary storage examples you’ll find at a single company) creates serious administrative headaches. In a recent survey by IDC, IT decision makers ranked data complexity across different departments and locations within the organization as a top concern. Despite the greater attention paid to primary storage, the volume of data held in secondary storage is actually much larger at most companies, averaging 80 percent of total data. In this way, primary storage is really the tip of the iceberg, with data in secondary storage representing the much greater portion hidden below the surface. By bringing together various data use cases on a single platform, IT administrators can gain a much clearer view of their data than the fragmented landscape of point solutions has allowed.

Applying the Proven Benefits of Hyperconvergence

Consolidating secondary storage can also reduce the strain on IT resources caused by growing data. Allocating a separate storage system to each workflow translates into excess capacity for each use case, so unnecessary or unused capacity multiplies with every additional storage solution, compounding inefficiency over time. By consolidating secondary storage on a single hyperconverged platform that integrates with public clouds, administrators get a holistic view of data utilization that enables more cost-effective usage and ongoing capacity planning. A single copy of data can, for example, be captured for backup, repurposed for test/dev, and archived to the public cloud, increasing efficiency and curbing sprawl.
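To make that “one copy, many uses” idea concrete, here is a minimal sketch in Python. All class, method, and bucket names here are hypothetical illustrations of the pattern, not any vendor’s actual API; it assumes a copy-on-write snapshot layer of the kind hyperconverged platforms typically provide.

```python
# Hypothetical sketch: one backup copy serving test/dev and cloud archive.
# Names and APIs are illustrative assumptions, not a real product's API.
from datetime import datetime, timezone

class Snapshot:
    """An immutable point-in-time copy captured by a backup job."""
    def __init__(self, source_vm: str, purpose: str = "backup"):
        self.source_vm = source_vm
        self.purpose = purpose
        self.created_at = datetime.now(timezone.utc)
        self.clones: list["Snapshot"] = []

    def clone(self, purpose: str) -> "Snapshot":
        # A writable, space-efficient clone: no blocks are copied up
        # front; only changes consume new capacity (copy-on-write).
        child = Snapshot(self.source_vm, purpose)
        self.clones.append(child)
        return child

class SecondaryStoragePlatform:
    """One platform serving backup, test/dev, and archive from one copy."""
    def __init__(self):
        self.snapshots: list[Snapshot] = []

    def backup(self, vm: str) -> Snapshot:
        snap = Snapshot(vm)
        self.snapshots.append(snap)
        return snap

    def archive_to_cloud(self, snap: Snapshot, bucket: str) -> None:
        # Tier the same snapshot's blocks to object storage rather than
        # keeping a separate archive copy on another point solution.
        print(f"Tiering {snap.source_vm} snapshot to {bucket}")

platform = SecondaryStoragePlatform()
snap = platform.backup("erp-db-01")                   # captured once
dev_clone = snap.clone("test/dev")                    # zero-copy clone
platform.archive_to_cloud(snap, "s3://corp-archive")  # same copy, archived
```

The capacity savings come from test/dev and archive consuming references to existing blocks rather than new full copies, which is what lets one platform replace several point solutions.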

In fact, the same principle has already proven effective for primary storage through the success of the hyperconvergence movement. Hyperconvergence made it easier for various virtualized workflows to run across a single scale-out architecture, eliminating hardware compatibility and management issues. Today, system administrators don’t segregate primary storage workflows based on how the data is being used; instead, they group those workloads on a single tier defined by the performance and resiliency that mission-critical operations require (which often means using all-flash storage arrays and retiring spinning-disk hardware).

There’s no reason companies can’t achieve the same efficiency with secondary storage. Of course, these workflows often have more diverse performance requirements, so designing an effective platform is not a trivial engineering problem. For example, backup is traditionally considered a passive data workflow, but it still requires specific ingest speeds and recovery time objectives; test/dev, on the other hand, demands higher performance but can tolerate lower resiliency. But the upside of consolidating secondary storage (and the recent rise of affordable flash and web-scale storage architectures that enable much more flexible platforms) makes this a challenge worth tackling head-on.
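One way to reconcile those diverse requirements on a single platform is to express each workload’s SLA as an explicit policy the platform uses for placement. The sketch below illustrates that idea; the policy fields and the throughput, RTO, and replication figures are assumptions chosen for the example, not published specifications.

```python
# Hedged sketch: per-workload SLA policies on one consolidated platform.
# All field names and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class WorkloadPolicy:
    name: str
    min_ingest_mbps: int      # required sustained write throughput
    rto_minutes: int          # recovery time objective
    replication_factor: int   # copies kept for resiliency
    latency_sensitive: bool   # steer toward flash vs. spinning disk

POLICIES = [
    WorkloadPolicy("backup",   min_ingest_mbps=800, rto_minutes=60,
                   replication_factor=3, latency_sensitive=False),
    WorkloadPolicy("test/dev", min_ingest_mbps=200, rto_minutes=480,
                   replication_factor=2, latency_sensitive=True),
    WorkloadPolicy("archive",  min_ingest_mbps=50,  rto_minutes=1440,
                   replication_factor=2, latency_sensitive=False),
]

def place(policy: WorkloadPolicy) -> str:
    """Pick a storage tier for a workload from its policy."""
    return "flash" if policy.latency_sensitive else "hdd"

for p in POLICIES:
    print(f"{p.name}: tier={place(p)}, RF={p.replication_factor}, "
          f"RTO={p.rto_minutes}m")
```

The point of the design is that performance and resiliency become per-policy settings on shared infrastructure, rather than reasons to buy another point solution.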

Organizations now grapple with an enormous volume of data that is simultaneously being applied to increasingly complex use cases. Most of this data growth – and fragmentation – is happening in the realm of secondary storage. The answer is a simpler, more efficient approach to managing data that also incorporates the public cloud. We’ve seen it work with hyperconvergence for primary storage, but that’s just the tip of the iceberg. The value of converging secondary storage will be enormous.

