
The Cost of Complexity

New storage solutions are helping enterprises simultaneously manage rapid data growth, inputs from new data sources, and new ways to use data.

David Flynn is CTO of Primary Data.

It’s no secret that we’re in an era of unprecedented change – and much of that change is being driven by data. Fortunately, new storage technologies are helping enterprises manage rapid data growth, inputs from new data sources, and new ways to use data all at once. These technologies serve a wide variety of application needs: cloud storage delivers agility and savings, SSDs and NVMe flash address the need for fast response times, web-scale architectures let enterprises scale performance and capacity quickly, and analytics platforms give businesses actionable insight.

While each technology provides unique benefits, collectively, they can also introduce significant complexity to the enterprise. Let’s take a closer look at how complexity is increasing enterprise costs and how both costs and complexity can be eliminated by automatically aligning data with storage that meets changing business objectives.

Solving the Paradox of Storage Choice

Faced with a diverse storage ecosystem, IT often finds itself choosing between purchasing all storage from a single vendor, shopping among multiple vendors, or even building its own storage with customizable software on commodity servers. IT must carefully weigh these options to ensure it makes the best choice for both the top and bottom line.

Sourcing storage from a single vendor is convenient. Procurement is simple, all support calls go to one place, and management interfaces are frequently consistent, making storage easier to configure and maintain. But vendors with wide product portfolios typically charge premium prices, and single sourcing weakens IT’s ability to negotiate. Further, since it’s unlikely that all of a vendor’s products offer best-in-class capabilities, enterprises may need to compromise on certain features, potentially to the detriment of the business.

Conversely, sourcing storage from multiple vendors or building storage in-house can reduce upfront costs, but it increases labor costs. IT must spend significant time evaluating different products to ensure it purchases the right product for the business, and then negotiate pricing to ensure it isn’t paying too much. Since each vendor’s software and interfaces differ, staff must also be trained to properly configure and maintain the different systems.

Given all this complexity, it’s no wonder enterprises are eager to push as much data as they can into the cloud. The problem is that many enterprise workloads aren’t cost-effective to run in the cloud, it’s costly to retrieve data back to on-premises storage, and many enterprise applications need to be modified to use cloud data, which may be impractical. This makes the cloud yet another silo for IT to manage.

A metadata engine resolves these problems by using virtualization to separate the metadata path from the data path. This makes it possible to connect different types of storage within a single namespace, including integrating the cloud as just another storage tier. IT can then assign objectives to data that define its performance and protection requirements, analyze whether those objectives are being met, and automatically move data to maintain compliance, tiering it across different storage devices to meet performance, cost, or reliability requirements, all transparently to applications. With these capabilities, IT can transition from a storage-centric architecture to a data-centric architecture. Instead of maintaining separate storage silos, IT can deploy storage with specific capabilities, from its vendor of choice, into a global namespace. The metadata engine automatically places and moves data to meet objectives while maximizing aggregate storage utilization and efficiency.
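
To make objective-based placement concrete, here is a minimal sketch in Python. The tier characteristics, objective fields, and function names are hypothetical illustrations, not Primary Data’s product or any vendor’s actual API; the point is simply that once data carries declared objectives, software can pick the cheapest storage that satisfies them and re-evaluate that choice as objectives or tiers change.

    # Illustrative sketch only: objective-based placement across tiers in one namespace.
    from dataclasses import dataclass

    @dataclass
    class Tier:
        name: str
        latency_ms: float      # typical read latency
        durability: int        # "nines" of durability
        cost_per_gb: float     # $/GB-month

    @dataclass
    class Objective:
        max_latency_ms: float
        min_durability: int

    TIERS = [
        Tier("nvme-flash", latency_ms=0.1,  durability=4,  cost_per_gb=0.50),
        Tier("nas",        latency_ms=5.0,  durability=5,  cost_per_gb=0.10),
        Tier("cloud",      latency_ms=50.0, durability=11, cost_per_gb=0.02),
    ]

    def meets(tier: Tier, obj: Objective) -> bool:
        return tier.latency_ms <= obj.max_latency_ms and tier.durability >= obj.min_durability

    def place(obj: Objective) -> Tier:
        """Pick the cheapest tier that satisfies the objective."""
        candidates = [t for t in TIERS if meets(t, obj)]
        if not candidates:
            raise ValueError("no tier satisfies the objective")
        return min(candidates, key=lambda t: t.cost_per_gb)

    # A hot database file needs sub-millisecond reads; an archive does not.
    print(place(Objective(max_latency_ms=1.0,   min_durability=3)).name)   # nvme-flash
    print(place(Objective(max_latency_ms=100.0, min_durability=10)).name)  # cloud

Re-running the same placement check on a schedule against data’s current location is what allows a policy change (say, tightening the latency objective) to trigger an automatic, transparent move to a faster tier.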

The Cost of Avoiding Data Migrations and Upgrades

Many vendors upsell new storage devices to customers every few years. These upgrades usually deliver new features, but few IT teams look forward to a migration. Typically, migrations take months of planning and consume a large portion of IT’s budget and resources. Because it’s so hard to move data without disrupting applications, IT commonly overspends, purchasing capacity well in excess of expected future demand.

A metadata engine solves common migration issues by making the process of moving data completely transparent to applications. IT no longer has to halt applications, manually copy data to the new storage, reconfigure, and then restart the applications. Available performance and capacity can also be seen from a single interface, while alerts and notifications tell admins when they need to deploy additional performance or capacity. Since performance and capacity can be added in minutes or hours instead of days or weeks, IT no longer needs to perform painful sizing exercises or overpurchase storage years in advance of actual need.
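
As a rough illustration of how such capacity alerting might work (again with hypothetical names and thresholds, not any product’s actual interface), projecting when each pool will hit a utilization ceiling at its current growth rate is enough to warn admins weeks before applications are affected:

    # Illustrative sketch only: alert when a pool is projected to cross a capacity threshold.
    from dataclasses import dataclass

    @dataclass
    class Pool:
        name: str
        capacity_tb: float
        used_tb: float
        growth_tb_per_week: float

    def weeks_until_full(pool: Pool, headroom: float = 0.9) -> float:
        """Weeks until the pool reaches the headroom threshold at its current growth rate."""
        remaining = pool.capacity_tb * headroom - pool.used_tb
        if pool.growth_tb_per_week <= 0:
            return float("inf")
        return max(remaining, 0) / pool.growth_tb_per_week

    pools = [
        Pool("flash-pool", capacity_tb=100, used_tb=85,  growth_tb_per_week=2.5),
        Pool("archive",    capacity_tb=500, used_tb=200, growth_tb_per_week=1.0),
    ]

    for p in pools:
        eta = weeks_until_full(p)
        if eta < 8:  # warn roughly two months out, well before users feel it
            print(f"ALERT: {p.name} reaches 90% capacity in ~{eta:.1f} weeks")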

The Cost of Downtime

IT is always looking to reduce complexity and costs and make life easier, but the cost that matters most when it comes to complexity is the increased risk of downtime. The more systems IT manages, the more human involvement is required, and the more human involvement there is, the greater the risk of unplanned downtime. That downtime can be disastrous for the business.

Stephen Elliot of IDC released a report that examines the true cost of downtime and infrastructure failure. Some key data points are:

  • For the Fortune 1000, the average total cost of unplanned application downtime per year is $1.25 billion to $2.5 billion
  • The average hourly cost of an infrastructure failure is $100,000 per hour
  • The average cost of a critical application failure per hour is $500,000 to $1 million

A metadata engine places and moves data across all storage, non-disruptively to applications, to meet business objectives. This ensures applications can always access data at the service levels they require, while greatly reducing or eliminating unplanned downtime.

Storage diversity introduces complexity, but managing a wide range of systems with diverse capabilities no longer has to be a challenge. A metadata engine enables enterprises to transition from a storage-centric architecture, where IT manages each system separately, to a data-centric architecture, where IT deploys the storage features it wants and automates the placement and movement of data with software. This enables IT to slash complexity and costs, while freeing staff to focus on projects that deliver more direct value to the business.
