
Four Keys to Unlocking Cloud Data Management

As we continue to delve into this hybrid world, ensuring swift, easy access to data is no walk in the park.

Rebecca Fitzhugh is Principal Technologist for Rubrik.

Eighty-three percent of companies will adopt the public cloud by 2020. Ugh. How many times have you heard some variation of that statistic? It’s tiresome but true. Many companies across all verticals are embarking upon a cloud journey and will dip their toes into the proverbial cloud in the next few years. The goal for many is to take advantage of the latest software tools and development methodologies. 

There will always be some in tech who continue to treat the public cloud as if “there is no there there” (Gertrude Stein). Any tectonic shift such as this will create new hurdles to overcome.

I tend to use the word "hybridity" often when helping customers navigate a multi-cloud world in which data could reside in the public cloud, on-premises, or somewhere in-between. Hybridity refers to the mixture of two or more things—and that is exactly what I am seeing across the industry. Data continues to be retained for long periods of time but is no longer stored solely on a traditional storage array or tape but also in the cloud.

Much of the intrinsic value of a business resides in its data. As we continue to delve into this hybrid world, ensuring swift, easy access to data is no walk in the park. Here are a few recommendations for achieving successful cloud data management:

Take a Declarative Approach to Reduce Hands-on Management Time

Hybridity creates two phenomena: the fragmentation of applications, whose components may now reside on-premises or in the cloud, and an explosion of data. The only way to survive with existing IT staffing budgets is to govern data holistically, regardless of platform or location, using declarative, policy-driven frameworks.

If such a framework is not already in place, I recommend using your organization’s journey to the cloud as the driver to put one in motion. Consider it a critical success factor: cloud data management is all about consistency and enterprise-level control across all platforms and locations, regardless of dataset type. Aim to declare the desired end state of data management rather than manually scripting the path to get there, and adopt tooling that takes the same approach.
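To make this concrete, here is a minimal sketch of what a declarative, policy-driven framework might look like. The SlaPolicy class, its fields, and the apply_policy function are hypothetical illustrations, not any specific product's API; the point is that you state the desired outcome (capture frequency, retention, archive location) and let the platform work out how to get there.

```python
from dataclasses import dataclass

# Hypothetical illustration of a declarative, policy-driven framework.
# You declare the desired outcome; the platform decides how to achieve it.

@dataclass
class SlaPolicy:
    name: str
    snapshot_every_hours: int   # how often to capture data
    retain_days: int            # how long to keep it
    archive_to: str             # e.g. "on-prem-array" or "cloud-object-storage"

def apply_policy(policy: SlaPolicy, workloads: list[str]) -> None:
    """Bind one policy to many workloads; the platform enforces it
    everywhere, on-premises or in the cloud, with no per-workload scripts."""
    for workload in workloads:
        print(f"{workload}: enforcing '{policy.name}' "
              f"(every {policy.snapshot_every_hours}h, "
              f"keep {policy.retain_days}d, archive to {policy.archive_to})")

gold = SlaPolicy("gold", snapshot_every_hours=4, retain_days=365,
                 archive_to="cloud-object-storage")
apply_policy(gold, ["erp-db", "web-frontend", "analytics-cluster"])
```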

Use Automation to Enforce Consistency

Consistency is paramount when scaling an infrastructure. Scale adds more than just additional workloads—it adds complexity. Factor in a hybrid infrastructure model and you will quickly have a mess on your hands if the architecture wasn’t originally designed for this approach. Unlike humans, computers execute tasks the same way each time, so automation makes it easier for IT to establish standard methods and procedures, reducing the management overhead required.

This practice complements the declarative approach, guaranteeing the consistency needed for true data mobility. By leveraging APIs, automation tooling can even integrate with external, SLA-driven business systems.
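As a sketch of what that API-driven automation might look like, the snippet below assigns the policy from the previous example to a set of workloads through a REST call. The endpoint, payload shape, and token handling are assumptions for illustration; substitute the actual API of whatever data management platform you use.

```python
import os
import requests  # pip install requests

# Hypothetical REST endpoint and payload; real platforms differ.
API_BASE = "https://datamgmt.example.com/api/v1"
TOKEN = os.environ["DATAMGMT_API_TOKEN"]  # never hard-code credentials

def assign_policy(workload_id: str, policy_name: str) -> None:
    """Bind an SLA policy to a workload via the platform API.
    Running this in a pipeline enforces the same configuration every time."""
    resp = requests.post(
        f"{API_BASE}/workloads/{workload_id}/policy",
        json={"policy": policy_name},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()  # fail loudly so configuration drift is caught

for workload in ["erp-db", "web-frontend", "analytics-cluster"]:
    assign_policy(workload, "gold")
```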

Design for Data Mobility and Agility

Managing enterprise-scale data is not an easy task, regardless of whether the data exists on-premises, in the cloud, or both. Consistency has been an ongoing theme in this article, and it applies here as well. Data now travels in and out of the public cloud, on its way to an on-premises data center or possibly another public cloud. Once you get past the initial operational obstacles, you will inevitably be hit in the face with requests for data mobility and agility.

As you design a greenfield or brownfield hybrid, public, or on-premises environment, spend time carefully planning where specific data processing should occur versus where data should initially land, be staged, or be retained long-term. A successful design allows data to migrate easily from one location to another; I often see customers accomplish this by creating a common software fabric with robust APIs and data integration toolsets, as in the sketch below. Better to plan for it upfront than to realize it is time to ditch the systems you just automated and built declarative policies for.
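The following sketch shows one way a common software fabric can untether data from its location: a single interface with interchangeable backends, so moving data between an on-premises array and a cloud bucket becomes a configuration change rather than a redesign. The DataTarget protocol and the two backends are illustrative assumptions, not any specific product.

```python
from typing import Protocol

# Hypothetical sketch: one interface, interchangeable storage backends.

class DataTarget(Protocol):
    def write(self, dataset: str, payload: bytes) -> None: ...

class OnPremArray:
    def write(self, dataset: str, payload: bytes) -> None:
        print(f"writing {len(payload)} bytes of '{dataset}' to local array")

class CloudObjectStore:
    def __init__(self, bucket: str) -> None:
        self.bucket = bucket

    def write(self, dataset: str, payload: bytes) -> None:
        print(f"writing {len(payload)} bytes of '{dataset}' to s3://{self.bucket}")

def replicate(dataset: str, payload: bytes, targets: list[DataTarget]) -> None:
    """Application code never cares where the data lands; swapping or adding
    a target is a one-line change, which is what makes migration painless."""
    for target in targets:
        target.write(dataset, payload)

replicate("sales-q3", b"...", [OnPremArray(), CloudObjectStore("dr-archive")])
```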

Ensure Defensible Backup and Recovery When Things Go Sideways

Cyberattacks are becoming more sophisticated and intelligent every day; your tools for protection and recovery should evolve just as quickly. Hybridity increases the attack surface, so it is beneficial to use a tool designed to protect workloads on-premises as well as in the cloud and to recover data quickly. Modern solutions are purpose-built to bring data back online fast by not requiring rehydration of the workload. Leveraging such solutions can lower your recovery time objective (RTO), driving positive business impact.
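As a rough illustration of why this matters, the sketch below checks whether the estimated recovery time for each workload actually fits within its RTO. The workload names, numbers, and the recovery_estimate_minutes field are made-up assumptions; the point is that RTO is something you can measure and audit, not just declare.

```python
from dataclasses import dataclass

# Hypothetical illustration: verify that estimated recovery times
# actually fit within each workload's recovery time objective (RTO).

@dataclass
class RecoveryPoint:
    workload: str
    rto_minutes: int                # the objective the business agreed to
    recovery_estimate_minutes: int  # near-zero when no rehydration is needed

def audit_rto(points: list[RecoveryPoint]) -> None:
    for p in points:
        status = "OK" if p.recovery_estimate_minutes <= p.rto_minutes else "AT RISK"
        print(f"{p.workload}: estimate {p.recovery_estimate_minutes}m "
              f"vs RTO {p.rto_minutes}m -> {status}")

audit_rto([
    RecoveryPoint("erp-db", rto_minutes=60, recovery_estimate_minutes=5),
    RecoveryPoint("file-share", rto_minutes=240, recovery_estimate_minutes=480),
])
```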

Additionally, a traditional backup and recovery tool tends to be “metadata-poor,” which turns it into a one-trick pony. As use of the public cloud increases, consider looking for tools that can access, analyze, and diagnose metadata across many platforms and applications. This type of insight adds value to already necessary tools.

Use these four keys to devise a cloud data management strategy that returns the most value and puts your data and metadata to work. Regardless of where your data is located, govern in a policy-driven manner using a holistic approach, automate tasks to ensure consistency, create a common software fabric to untether workloads for data mobility, and ensure protection of the data.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating.
