
Huge Data Growth and Effective Disaster Recovery

Results from a recent industry survey show data protection of large amounts of data is a concern among large enterprises. With data volumes continuing to climb, enterprise IT managers may need to rethink their approach to data protection.

Industry Perspectives

August 9, 2011


Joe Forgione is senior vice president of product operations and business development at SEPATON, Inc. He previously served as CEO of mValent, a data center applications management software company acquired by Oracle in 2009.




The top data protection concern for large enterprises is the speed at which their already massive data volumes are growing, according to our third annual survey of sizable companies. This isn't surprising.

Curbing Data Growth and Data Center Sprawl

For most enterprises, data volumes are growing at a rate of 30 to 60 percent, compounded annually. This growth has a “knock-on” effect of costly data center sprawl as enterprises add more and more limited-scale backup targets to stay within their backup windows. More than seventy percent of respondents had added at least one system in the previous 12 months.
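To see how quickly those rates compound, a short sketch (the 1 PB starting volume is a hypothetical figure for illustration; the 30 and 60 percent rates are from the survey range above):

```python
# Illustrative only: project backup data volume under compound annual growth.
def projected_volume(initial_tb: float, annual_growth: float, years: int) -> float:
    """Compound the data volume annually at the given growth rate."""
    return initial_tb * (1 + annual_growth) ** years

start = 1000.0  # hypothetical 1 PB, expressed in TB
for rate in (0.30, 0.60):
    final = projected_volume(start, rate, 5)
    print(f"{rate:.0%} annual growth: {start:.0f} TB -> {final:.0f} TB in 5 years")
```

At the low end of the survey range, data nearly quadruples in five years; at the high end, it grows more than tenfold, which is why a backup target sized for today rarely survives its depreciation cycle.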

Many enterprises are discovering that this type of “band-aid” approach to data protection is fraught with hidden costs and risks. They are overspending on hardware, buying full systems when they need only more performance or more capacity. They are also taxing IT staff with the cumbersome tasks of load balancing and tuning the entire environment every time a new system is added.

They are also dividing their backup volumes among disparate “islands” of storage that must be individually managed, upgraded, and licensed. De-duplicating each island in isolation achieves less capacity reduction than global de-duplication across all data, making the multi-system approach more expensive than a single scalable platform.
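A toy model makes the capacity argument concrete. Treating each backup as a set of block hashes (the block names here are invented for illustration, not any vendor's actual algorithm), blocks shared between islands are stored once per island, while a global pool stores them once in total:

```python
# Toy de-duplication model: count unique blocks stored.
# Two backup "islands" hold overlapping data; per-island de-dup stores
# shared blocks twice, a global pool stores them once.
island_a = {"b1", "b2", "b3", "b4"}
island_b = {"b3", "b4", "b5", "b6"}

per_island_blocks = len(island_a) + len(island_b)  # each island de-dups alone
global_blocks = len(island_a | island_b)           # one global de-dup pool

print(per_island_blocks)  # blocks stored across the two islands
print(global_blocks)      # blocks stored in a single global pool
```

With only two islands and 50 percent overlap, the global pool already stores 25 percent fewer blocks; the gap widens as more islands are added, since every boundary between systems is a boundary de-duplication cannot cross.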

A more effective strategy is to start with a system that is sized for current needs, but enables the user to modularly add capacity and performance as it is needed within the same system image. By consolidating data onto a single system, companies not only reduce data center footprint, cooling, and administration costs, they also save capacity with a better de-duplication ratio. This “single pane of glass” strategy enables a single administrator to protect petabytes of data with ease.

Improving Branch Office Data Protection

A second critical concern for enterprises is improving data protection in their branch office locations. Our survey also revealed that most branch office locations have inadequate or outdated data protection. A significant number of respondents are still using physical tape or not protecting branch office data at all. Given the limited IT resources in most enterprise data centers, IT managers should consider a consolidated hub-and-spoke topology.

IDC summarizes the disadvantages of backing up to tape as follows: “The challenges with this traditional approach are many. This process is subject to human error when nontechnical office staff manages backups, rotate tapes, or initiate recovery. The risk of data compromise due to the removable nature of tape cartridges presents risk of loss or theft of sensitive data.”

With improvements in de-duplication and replication technology, enterprises can use a “hub and spoke” topology to protect branch office data at a cost comparable to tape. With this strategy, disk-based systems are used in the remote offices that automatically replicate data to a main data center for disaster protection. These automated systems can be managed remotely from a headquarters location.
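The interaction between de-duplication and the hub-and-spoke topology is what keeps WAN costs down: each spoke only transfers blocks the hub does not already hold. A minimal sketch of that idea, under my own simplified model (block hashes and function names are hypothetical, not a vendor API):

```python
# Sketch of de-dup-aware hub-and-spoke replication: a branch office
# ("spoke") sends only the blocks missing from the central hub, so
# data already seeded by other branches never crosses the WAN again.
hub_store: set[str] = set()

def replicate_to_hub(spoke_blocks: set[str]) -> int:
    """Transfer only blocks the hub lacks; return how many were sent."""
    missing = spoke_blocks - hub_store
    hub_store.update(missing)
    return len(missing)

sent_first = replicate_to_hub({"b1", "b2", "b3"})   # branch 1 seeds the hub
sent_second = replicate_to_hub({"b2", "b3", "b4"})  # branch 2 sends only b4
```

In the sketch, the second branch transfers one block instead of three, which is the mechanism that lets replication over ordinary WAN links approach the cost of shipping tape.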

Low-bandwidth Replication is Essential

“Data will be unrecoverable in the event of a disaster” was rated as the greatest backup/data protection fear. This fear was also evident in responses to “How well does your current disaster recovery (DR) testing prepare you for a real disaster?” as fifty-seven percent rated their preparedness as moderate or poor. “File Replication” and “Reporting on Application Recovery/Replication Integrity” also scored as high spending priorities.

To provide efficient disaster protection, enterprises need to move massive data volumes to the safety of a disaster recovery site and restore data at wire speed. It is essential that enterprises use a backup system that can handle massive data volumes without letting inline de-duplication choke backup performance. Enterprises should use data protection platforms capable of de-duplicating concurrently with replication (and with restore, if necessary), regardless of backup volume. The system should also support newer, more powerful replication management options that can back up over both Fibre Channel and 10 Gigabit Ethernet and can replicate both traditional tape cartridge formats and open storage formats such as NetBackup OST Auto Image Replication.

With data volumes continuing to climb, enterprise IT managers need to rethink their approach to data protection. The pervasiveness of WAN connectivity, combined with disk-based backup systems specifically designed to extend enterprise data protection from the data center to branch locations, makes it more affordable than ever to implement a "set and forget" backup and recovery strategy that is fully automated and centrally controlled. The result is better data protection across the entire enterprise without the "hidden expenses" associated with tape.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
