The Human Cost of Data Backup Sprawl
August 5th, 2013 By: Industry Perspectives
Eric Silva is Director of Product Marketing at Sepaton, Inc. He has more than thirteen years of experience in the storage industry, including more than five years of hands-on experience as an IT director and IT solutions architect.
As discussed in the recent article, The Hidden Costs of Sprawl – Total Cost of Ownership, many large enterprise data centers have taken an “add another system” approach to dealing with rapid backup data growth. With this approach, they simply add more single-node, disk-based backup targets every time they run out of capacity or miss their backup windows.
In an enterprise data center, where data is measured in tens of terabytes and growing at a rate of 20 to 40 percent compounded annually, the “add another system” approach often takes a toll on the people who manage the IT data center. Large organizations can mitigate this human toll by consolidating data backups onto enterprise-class systems designed to enable a single administrator to manage tens of petabytes without stress. These systems not only scale to meet growing performance and capacity needs, but also automate a variety of disk management, monitoring, and reporting tasks.
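To make the growth rates above concrete, here is a minimal sketch of the compound-growth arithmetic. The 20–40 percent rates come from the article; the starting size and five-year horizon are hypothetical example values.

```python
# Illustrative sketch: projecting backup capacity needs under the
# 20-40% compound annual growth rates cited in the article.
# The 50 TB starting point and 5-year horizon are hypothetical.

def projected_capacity(start_tb: float, annual_growth: float, years: int) -> float:
    """Capacity after `years` of compound growth at `annual_growth` (e.g. 0.30)."""
    return start_tb * (1 + annual_growth) ** years

start = 50.0  # hypothetical: 50 TB of backup data today
for rate in (0.20, 0.40):
    print(f"{rate:.0%} growth: {projected_capacity(start, rate, 5):.0f} TB after 5 years")
```

Even at the low end of the cited range, capacity needs more than double in five years, which is why an “add another system” approach compounds so quickly into sprawl.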
Disruption to Ongoing Backup Processes
Every time a new backup target is added in a non-scalable environment, IT staff face the time-consuming tasks of re-allocating backup volumes, load balancing all of the backup systems in the environment, and tuning the entire environment to restore optimal efficiency. They also need to make difficult decisions about how to go about dividing their backup volumes among existing and new systems.
For example, should they divide the backup onto multiple systems? Move backups to the new system and let the older data expire off the older systems? Or, should they disrupt ongoing operations for a significant change in backup strategy or use a less disruptive, more costly “Band-Aid” approach? As a result, adding a backup target to a non-scalable backup environment means increasing workloads, adding complexity, and placing more stress on an IT organization.
A better alternative is to use a grid scalable system that enables IT staff to increase performance by adding processing nodes or to increase capacity by adding disk shelves. These systems integrate the added performance or capacity with existing resources and perform all load balancing, tuning and management tasks automatically and seamlessly, without disruption to ongoing operations or the need to make difficult decisions.
More Systems Mean More Maintenance
Every new machine added to the backup environment requires added IT time to maintain software licenses, upgrades, updates, and patches, as well as hardware maintenance. In short, it means more tedious, time-consuming tasks for an IT staff that is probably already stretched thin.
With fully loaded full-time IT employees costing the company $150,000 per year, companies should look for ways to make better use of their time with systems that increase the terabytes (TB) of data a single system administrator can safely manage. Consolidating backups onto a single, scalable system dramatically reduces total cost of ownership (TCO).
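The staffing side of that TCO argument can be sketched in a few lines. The $150,000 fully loaded cost is from the article; the terabytes-per-administrator figures are hypothetical illustrations, not vendor data.

```python
# Hedged sketch of the staffing-cost arithmetic behind the TCO argument.
# ADMIN_COST comes from the article; the TB-per-admin figures below
# are hypothetical examples of sprawled vs. consolidated environments.

ADMIN_COST = 150_000  # fully loaded annual cost per IT admin

def admin_cost_per_tb(tb_per_admin: float) -> float:
    """Annual staffing cost per terabyte managed."""
    return ADMIN_COST / tb_per_admin

# Hypothetical: sprawl limits one admin to 100 TB; a consolidated,
# scalable system lets the same admin manage 1,000 TB.
print(f"Sprawled:     ${admin_cost_per_tb(100):,.0f} per TB per year")
print(f"Consolidated: ${admin_cost_per_tb(1000):,.0f} per TB per year")
```

Under these assumed figures, raising the data-per-administrator ratio tenfold cuts the staffing cost per terabyte by the same factor.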
The Stress of Uncertainty
With key information about backups divided on multiple, individual systems, IT staff have more uncertainty to deal with and a harder time making holistic, informed decisions about their company-wide backup environments.
Enterprises should choose a solution that enables IT data managers to get fast, accurate information on the status of their entire backup environment. Robust dashboard functionality can not only enable a single administrator to manage more backup data, it can also enable them to reduce inefficiencies in their backup environment, plan for future capacity needs, and ensure restore Service Level Agreements (SLAs) are achievable. For example, a single dashboard on a grid scalable solution can put the following information at an administrator’s fingertips:
- Which backup volumes have been backed up and to which backup target?
- How efficient was the deduplication process in saving capacity requirements?
- What’s the status and efficiency of data replication? (Has replication completed? How much bandwidth was required to complete it?)
- Are backup targets operating efficiently? Are any systems in danger of failing?
- Can I consistently meet backup windows and restore SLAs?
- What is the cost of data loss to the business unit or organization I’m serving?
- Am I adding risk to the business by continuing with this strategy?
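Two of the metrics in the list above reduce to simple formulas a dashboard would compute. The function names and sample values here are hypothetical; only the arithmetic is standard.

```python
# Sketch of two dashboard metrics named above: deduplication efficiency
# and replication bandwidth. Sample values are hypothetical.

def dedup_ratio(logical_tb: float, physical_tb: float) -> float:
    """Deduplication ratio: logical data backed up vs. physical disk consumed."""
    return logical_tb / physical_tb

def replication_bandwidth_mbps(gb_transferred: float, hours: float) -> float:
    """Average bandwidth (megabits/s) a completed replication job required."""
    return gb_transferred * 8 * 1000 / (hours * 3600)

# Hypothetical example: 200 TB of logical backups on 10 TB of disk,
# and 900 GB replicated over a 2-hour window.
print(f"{dedup_ratio(200.0, 10.0):.0f}:1 deduplication")
print(f"{replication_bandwidth_mbps(900, 2):.0f} Mb/s average replication bandwidth")
```

Surfacing these numbers per backup target is what lets a single administrator spot an under-performing system before it threatens a backup window or restore SLA.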
When Less is More…
While the impact on morale is harder to quantify, the human costs of sprawl can quickly affect the bottom line through increased staff turnover, low productivity, and increased overtime. By consolidating backups and automating backup tasks, companies can free key IT staff for more productive and more gratifying work and, most importantly, help them achieve their mission: meeting their SLAs and reducing the business risk of data loss and downtime.
Scalable, enterprise-class systems also provide tighter management control and reporting, ensuring IT departments get the most value from their backup investment and can plan more accurately for future needs.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
What’s the difference between cloud hosting and off-site backups? After reading your article, I got to wondering whether off-site backups are necessary any more. If I were to use a cloud drive to store all my data, and that cloud drive is properly backed up, then do I also need to continue with my own off-site backups? I still do a local backup because it is the quickest way to restore should the worst happen.
I think you answered your own question as you typed your comment. You still do a local backup for fastest recovery; that’s one very good reason. Another is to reduce your risk. You allude to that when you say “IF…that cloud drive is properly backed up.” That’s exactly it. If you are willing to trust your cloud data provider (with all that it entails) as a primary source of your data when you need to restore, then you’re good to go. If you have any doubts and/or have enough resources (as most enterprises do today), then you are going to continue to keep a local and an off-site copy under your control. It just depends who you trust with your off-site data. If you have the time, the funds, and the skills, the safest bet is to guard your own assets. This still seems to be the thinking for cautious and risk-averse enterprises; your needs may certainly vary.