The Hidden Costs of System Sprawl
May 23rd, 2013 By: Industry Perspectives
Florin Dejeu, director of product management, SEPATON, Inc., has more than 20 years of product management experience, overseeing the development of products that address the information management needs of large enterprises, with emphasis on storage, archiving, classification, HSM and data protection solutions.
While data center managers have grown accustomed to rapid data growth, few could have anticipated the unprecedented data growth and increased complexity that has overwhelmed many data center backup environments in the past few years. According to industry analysts, data in large enterprises is growing at 40-60 percent compounded annually.
Data growth is fueled by the proliferation of new business applications, the introduction of Big Data analytics, the increased use of mobile devices and tablets in the work place, and the increased use of large databases to run core company functions (ERP, payroll, HR, production management). Companies are not only creating massive volumes of data, they are also under pressure to meet increasingly stringent and complex requirements for protecting and managing that data. For example, they have to back it up in shorter times, retain it for longer periods of time, encrypt it without slowing backup performance, replicate it efficiently, and restore it quickly.
Until recently, many enterprise data managers responded to data growth by simply adding disk-based backup targets. The most common type of disk-based backup target provided inline data de-duplication and a reasonable level of performance and capacity to accommodate the increased data volume. However, these systems are simply not designed for today’s massive data volumes or fast data growth because they lack two critical capabilities: they do not scale and they do not de-duplicate enterprise backup data efficiently. As a result, for many large enterprise data centers, the “add another system” approach has reached its breaking point.
The Hidden Costs of Sprawl – Total Cost of Ownership
For many organizations, the breaking point for non-scalable systems is the point at which they can no longer meet their backup windows. While adding a single system may not seem overly cumbersome, for large enterprise data centers that require several of these systems, it can add unplanned cost, complexity, risk, and administrative time. The hidden costs and total cost of ownership (TCO) impact are significant:
- Overbuying systems. Companies are forced to add an entire system when they have sufficient capacity but need more performance, or conversely, have sufficient performance but need more capacity.
- Wasting money on capacity. By separating data onto multiple non-scalable systems, these systems cannot de-duplicate globally, reducing the efficiency of their capacity optimization.
- Wasted IT admin time. To add a new non-scalable backup system, IT admins have to divide the existing backups onto multiple systems and load balance for optimal utilization, a process that becomes more time-consuming and complex with every new system added.
- Added maintenance cost. Each new system adds to the cost of system maintenance: every software or hardware update, upgrade, or routine maintenance task must be repeated on every system.
- Slow backups. Non-scalable systems typically use hash-based, inline de-duplication that slows backup performance over time. They are highly inefficient in the database backup environments common in enterprise data centers for two reasons. First, databases often store data in sub-8KB segments that are too small for inline, hash-based de-duplication to process efficiently without becoming a bottleneck to backup. Second, they do not support fast multiplexed, multi-streamed database backups, forcing IT staff to choose between fast backups and capacity optimization.
- Rising data center costs. In simple terms, more systems with less-efficient de-duplication means more rack space, power, cooling, and data center floor space.
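The de-duplication argument above can be made concrete with a toy model. The sketch below uses fixed 8 KB chunks and SHA-256 fingerprints, which are illustrative assumptions rather than a description of any particular appliance; it shows why splitting the same backup data across siloed systems, each with its own chunk index, stores more chunks than a single global index.

```python
import hashlib

def dedup_store(streams):
    """Store backup streams against a shared chunk index.

    Returns the number of unique chunks actually written to disk.
    Fixed 8 KB chunks and SHA-256 fingerprints are illustrative
    choices, not a description of any vendor's implementation.
    """
    CHUNK = 8 * 1024
    index = set()          # fingerprints of chunks already stored
    written = 0
    for data in streams:
        for i in range(0, len(data), CHUNK):
            fp = hashlib.sha256(data[i:i + CHUNK]).digest()
            if fp not in index:
                index.add(fp)
                written += 1
    return written

# Two nightly full backups that share most of their data.
night1 = b"".join(bytes([i]) * 8192 for i in range(100))  # 100 distinct chunks
night2 = night1 + bytes([200]) * 8192                     # same data + 1 new chunk

# Global index: the second night adds only the one new chunk.
global_chunks = dedup_store([night1, night2])             # 101 chunks

# Siloed systems: each keeps its own index, so the 100 shared
# chunks are stored on both systems.
siloed_chunks = dedup_store([night1]) + dedup_store([night2])  # 201 chunks

assert global_chunks < siloed_chunks
```

The gap grows with every silo added, which is the "wasting money on capacity" cost listed above.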
Less is More for Low TCO
In today’s fast-growing enterprise backup environments, consolidating backups onto a single, enterprise-class disk-based backup appliance is proving to be both more cost-efficient and less prone to human error and data loss than the “siloed” approach described above.
Backup and recovery appliances are designed specifically to handle the massive data volumes and complex backup requirements of today's data centers. These purpose-built backup appliances (PBBAs) are designed to back up, de-duplicate, replicate, encrypt, and restore large data volumes quickly and cost-efficiently. To choose an enterprise-class backup and recovery appliance, follow these best practices:
Opt for Guaranteed High Performance
Understand the performance impact that processing-intensive functions, such as de-duplication, replication, and encryption, have on the system. Enterprise-class systems are designed to perform these functions without slowing performance; some even offload them from the CPU. Ensure that any published performance rates are guaranteed, continuous rates, and not simply the highest rates achievable within a widely varying ingest rate.
Grid Scalability is Essential
As described above, adding, managing, and using multiple backup systems is not practical or cost-efficient in today’s fast-growing, complex data centers. Enterprise-class backup and recovery systems offer grid scalability, that is, the ability to add performance and/or capacity independently as you need it. This pay-as-you-grow model eliminates over-buying, reduces IT management time, and enables you to store tens of petabytes of data in a single, consolidated backup appliance.
Storing data in a single, optimized system has the additional benefits of enabling highly efficient, global de-duplication, and eliminating the need for load balancing and ongoing system-tuning.
Ensure Deduplication is Designed for Enterprise Data Centers
One of the most effective ways to reduce the cost of backup and recovery is to implement enterprise-class de-duplication. Unlike de-duplication optimized for small-to-medium businesses, enterprise de-duplication is designed to deliver faster backup performance and lower overall capacity requirements. It can also tune de-duplication to specific data types for optimal use of CPU, disk, and replication resources. For example, it can de-duplicate database data at the byte level for optimal capacity savings, or recognize data that will not de-duplicate efficiently (e.g., image data) and back it up without de-duplication. This "tunability" can save enterprises thousands of dollars in capacity and processing costs.
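One simple way an appliance could recognize data that will not de-duplicate well is an entropy check: already-compressed or encrypted payloads (such as image files) look nearly random, while database and file data show strong byte-level patterns. The sketch below is a minimal illustration of that idea, not SEPATON's actual detection logic; the 7.5-bits-per-byte threshold is an illustrative assumption.

```python
import math
import os
from collections import Counter

def bytes_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (max 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def should_dedup(sample: bytes, threshold: float = 7.5) -> bool:
    """Skip de-duplication for high-entropy (compressed/encrypted) data.

    The threshold is a hypothetical tuning knob: near-random data
    yields few duplicate chunks, so hashing it only burns CPU.
    """
    return bytes_entropy(sample) < threshold

# Database pages full of repeated records de-duplicate well...
db_page = b"ROW:customer,balance;" * 400

# ...while compressed image-like payloads look nearly random.
compressed = os.urandom(8192)

assert should_dedup(db_page)
assert not should_dedup(compressed)
```

In practice a system would sample only the first portion of each stream, but the policy decision is the same: spend de-duplication effort where duplicates are likely.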
Reporting and Dashboards Enable Savings
Detailed reporting and dashboards are key to enabling IT administrators to manage more data per person. They automate disk subsystem management processes and put detailed status information at the administrator's fingertips. They also provide predictive warnings of potential issues, enabling administrators to take action before those issues become urgent.
Lowest Total Cost of Ownership
For today’s large enterprise backup and recovery environments, the days of adding more and more backup systems are over. The speed of data growth, massive volume of data, and complexity of backup and recovery policies necessitate the use of enterprise-class purpose-built backup appliances. These appliances enable organizations to maintain backup windows by moving massive data volumes to the safety of the backup environment at predictable, fast ingest rates. They also streamline and simplify operations by consolidating tens of petabytes of stored data onto a single, cost-efficient, easy-to-manage system.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.