Eran Farajun is Executive Vice President of Asigra.
The growing complexity of today’s enterprise computing environment means critical corporate data is stored in increasingly fragmented and heterogeneous infrastructures. Ensuring all this decentralized data is backed up in case of breach or disaster is a major cause of anxiety for both business executives and senior IT professionals.
That’s because comprehensive data protection is really not core to most people’s jobs – most of you have other things to worry about, and you just hope and pray that the systems you’ve implemented have backed up your data and will recover it in case of a disaster. But you’ve got your fingers crossed because you’re really not that confident that they will.
According to Jason Buffington, principal analyst for data protection at ESG, improving data backup and recovery systems has been a top five IT priority and area of investment for the past several years. That’s because continually evolving computing infrastructures and production platforms are forcing companies to reexamine their data protection strategies. “When an organization goes from 30 percent virtualized to 70 percent, or from on-premises email servers to Office 365 in the cloud, these evolutions to your infrastructure drive the need to redefine your data protection strategy,” says Buffington. “Legacy approaches for data protection can’t protect all of the data in these more complex environments.”
How concerned should you be about your existing data protection solution? Let’s explore the complexity of today’s average computing environment to find out.
Chances are good you have multiple virtualization technologies operating within your infrastructure, including VMware, Hyper-V, and KVM. You may have a data protection solution for one of your hypervisors, or maybe two. But in a data loss event, you’ll lose the data in VMs that you haven’t protected. The same is true of Docker containers. It’s certainly possible, but not trivial, to protect the data in containers. However, if you haven’t deployed a data protection system specifically for them, your data in containers isn’t backed up and won’t be recoverable.
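One way to see whether this gap applies to you is to compare an inventory of workloads against the platforms your backup tools actually cover. Here is a minimal sketch of that check; the workload names, platform tags, and tool coverage below are hypothetical examples, not data from any real product:

```python
# Sketch: find workloads whose platform no deployed backup tool covers.
# All inventory and coverage records below are hypothetical.

# Workloads running in the environment, tagged by platform.
inventory = [
    {"name": "erp-db",    "platform": "vmware"},
    {"name": "build-ci",  "platform": "kvm"},
    {"name": "web-cache", "platform": "docker"},
    {"name": "mail-arch", "platform": "hyperv"},
]

# Platforms each deployed backup tool can protect.
backup_coverage = {
    "tool-a": {"vmware", "hyperv"},
}

def unprotected(inventory, coverage):
    """Return names of workloads whose platform no tool covers."""
    covered = set().union(*coverage.values()) if coverage else set()
    return [w["name"] for w in inventory if w["platform"] not in covered]

print(unprotected(inventory, backup_coverage))  # -> ['build-ci', 'web-cache']
```

In this example the KVM guest and the Docker container would be silently lost in a data loss event, exactly the blind spot described above.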
If you’re using convenient, cloud-based apps like Google Docs, Salesforce.com, and Office 365, then you probably know that you can pay the vendor to back up your data. But if the system goes down, as it did for Office 365 in late January, the backup that you’re paying for and counting on could be in the same data center, and maybe on the same servers, that suffered the outage. Then you’re stuck trying to perform data recovery from dead equipment.
Up to 70 percent of enterprise employees use endpoint devices such as laptops, smartphones, and tablets to access corporate data. While these devices may contain some of an organization’s most critical data, implementing a comprehensive data protection plan for multiple endpoints running multiple OSs isn’t child’s play. What happens at your company when a laptop is lost or stolen? Do you have a way to retrieve that data if the device hasn’t been backed up recently? Does your current data protection system geo-locate the missing device and have the ability to perform a remote wipe? If not, no wonder you don’t feel confident about your data protection strategy.
“Today’s data center is anything but simple,” says Marc Staimer, president of Dragon Slayer Consulting. “There are so many different kinds of data being created every day, on myriad platforms and operating systems, in containers, and in the cloud, and they each have their own means of data protection. It makes data protection seem like the Wild Wild West, a chaotic free-for-all. And most people don’t realize how bad it is until they have a small data loss event. And then the worrying begins: ‘If I wasn’t able to recover that data, what else isn’t protected?’”
Protecting Data in Complex Environments
The traditional approach to protecting data in multiple applications and on disparate platforms is to deploy multiple point solutions, one per platform or application. This can quickly lead to significant business impacts.
First, there’s cost: the more data protection systems you run, the more you pay in licensing. In a complex environment, it’s also inevitable that you’ll have overlapping stores of the same data, so you end up paying to protect it multiple times. That may not seem like a significant pain point until a recovery pulls in multiple copies of the same data, with later restores overwriting earlier ones and destroying any new data added in between. Management of multiple systems is another consideration: staying current with multiple trainings, methodologies, fixes, patches, updates, and upgrades adds further cost and complexity for IT. Finally, it’s inherently difficult to recover data from a patchwork of data protection solutions, which lengthens Recovery Time Objectives (RTOs), challenges Recovery Point Objectives (RPOs), and slows business continuity and disaster recovery efforts after outages.
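The RPO cost of a patchwork is easy to quantify: the effective recovery point across the whole environment is only as good as the stalest successful backup among all the point solutions. A minimal sketch, using hypothetical per-system backup timestamps:

```python
from datetime import datetime, timedelta

# Sketch: across a patchwork of backup tools, the effective recovery
# point equals the stalest successful backup. Timestamps are
# hypothetical examples.
now = datetime(2017, 3, 1, 12, 0)

last_backup = {
    "file-server": now - timedelta(hours=1),
    "vm-cluster":  now - timedelta(hours=6),
    "saas-export": now - timedelta(days=3),   # e.g. a manual weekly export
}

def worst_rpo(last_backup, now):
    """Return (system, data-loss window) for the stalest backup."""
    system = min(last_backup, key=last_backup.get)  # earliest timestamp
    return system, now - last_backup[system]

system, window = worst_rpo(last_backup, now)
print(f"{system}: up to {window} of data at risk")
# prints "saas-export: up to 3 days, 0:00:00 of data at risk"
```

Even though two of the three systems are backed up within hours, the environment as a whole risks losing three days of data from its weakest link.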
“Organizations now must have their critical production workloads and critical data available immediately after a data loss event. In today’s ‘no downtime’ world, organizations large and small need to explore a comprehensive data protection solution,” says Staimer. “Businesses need instant-on access to their data, and that requires not just backup, but also replication.”
The complexity of today’s environments requires a more comprehensive approach to data protection, one that converges both backup and replication technologies into a single, easily managed solution. A comprehensive solution backs up data from any source – whether in the data center, the cloud, a virtual environment, or on an endpoint device – and stores this data not only locally but also remotely in the cloud to ensure full data replicability in case of a natural disaster, hardware failure, data breach, or malicious attack.
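The backup-plus-replication pattern described above can be sketched in a few lines: write a local backup copy, replicate it to a second location, and verify both copies with a checksum before trusting them. In this illustrative sketch the "remote" replica is just a second directory standing in for cloud storage, and all paths are hypothetical:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to verify a copy is byte-identical to the source."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_replicate(source: Path, local_dir: Path, remote_dir: Path) -> bool:
    """Copy source into a local backup store and a replica store,
    then verify both copies against the source checksum."""
    local_dir.mkdir(parents=True, exist_ok=True)
    remote_dir.mkdir(parents=True, exist_ok=True)
    local_copy = local_dir / source.name
    remote_copy = remote_dir / source.name
    shutil.copy2(source, local_copy)    # local backup for fast restores
    shutil.copy2(source, remote_copy)   # off-site replica (stand-in directory)
    want = sha256(source)
    return sha256(local_copy) == want and sha256(remote_copy) == want

# Demo in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    src = root / "payroll.csv"
    src.write_text("id,amount\n1,100\n")
    ok = backup_and_replicate(src, root / "local", root / "replica")
    print("verified:", ok)  # prints "verified: True"
```

The checksum step matters: a replica you cannot verify is exactly the kind of "fingers crossed" backup the article warns against. A real solution would also encrypt the replica and ship it to a genuinely separate site.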
“Searching for a single technology that offers both a rigorous on-premises data protection solution and the ability to easily replicate data stored in the cloud can force a reconsideration of vendors,” says Buffington. “Couple that with the fact that primary data is growing over 25 percent year over year, that data protection storage is growing 40 percent annually, and that data protection budgets are only growing four to six percent yearly, and it means that you can’t keep doing what you’re doing because it doesn’t work anymore.”
A Vendor Checklist
For confidence that your data is fully protected, you may need to update your existing data protection solution to a comprehensive backup and replication technology. Don’t be embarrassed to ask for help: if it were simple to deploy such a solution, you’d have already done it. Bringing in a data protection specialist can give you the confidence to uncross your fingers.
Here are some questions to ask vendors as you narrow your search:
- Does the proposed data protection solution provide protection of data residing in both physical and virtual servers (including containers)?
- Can I protect all forms of data from multiple sources across the enterprise in a centralized data repository?
- In the event of a disaster, how quickly can I access critical applications and resume business operations?
- Can I use the proposed solution to back up and replicate data off-site to a secure third-party location for disaster recovery purposes?
- Does the solution support recognized security standards, such as NIST FIPS 140-2 certification, which is mandatory for regulatory compliance in some industries?
What are the top data protection requirements for your business? What other questions would you ask?
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.