The Data Center of the Future and Cloud Disaster Recovery

Disaster recovery and business continuity -- no longer a cumbersome duo.

The data center of the future is a constantly evolving concept. If you go back to World War II, the ideal was to have a massive mainframe in a large room fed by punched cards. A few decades later, distributed computing promoted an Indiana Jones-like warehouse with endless racks of servers, each hosting one application. Virtualization upset that apple cart by enabling massive consolidation and greatly reducing the number of physical servers inside the data center.

Now it appears that we are entering a minimalist period: data center spaces still exist, but they have been stripped down until little remains beyond a few desktops in the middle of an otherwise empty room. Like a magic trick by David Copperfield, the Lamborghini under the curtain has disappeared in a puff of smoke. But instead of reappearing at the back of the room, the compute hardware has been transported to the cloud. And just as in a magic trick, IT operations managers are applauding loudly.

“We moved backup and disaster recovery (DR) to the cloud and now intend to move even more functions to the cloud,” said Erick Panger, director of information systems at TruePosition, a company that provides location intelligence solutions. “It looks like we are heading to a place where few real data centers will exist in most companies with everything being hosted in the cloud.”

If that’s the overall direction, what does this mean in terms of disaster recovery? How will future file restoration function? How should data best be looked after? And how should the data center manager be preparing for these events?

Hardware Be Gone

Disaster recovery and its big cousin, business continuity (BC), used to be a cumbersome duo. The data center manager was tasked with erecting a duplicate IT site containing all the storage, servers, networking and software of the core data center. This behemoth stood idle as a standby site in case the primary site went down. Alternatively, the company purchased space at a colocation facility to host the standby gear until it was needed.

Over time, the inefficiency of this setup became apparent. The concept of mirrored data centers came into being, in which two or more active data centers acted as failover sites for each other.

An “active-passive” DR model meant having a disaster recovery site that could be used for test and development when not in disaster mode. An “active-active” model, on the other hand, called for splitting and load balancing a workload across both sites. This trimmed down the amount of hardware, but it remained costly in terms of Capex and management.
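To make the distinction concrete, here is a minimal, purely illustrative Python sketch, not drawn from any vendor quoted in this article; the site names and health check are hypothetical. Active-passive sends all traffic to the primary until it fails, while active-active spreads load across both sites.

```python
# Purely illustrative sketch: routing logic for "active-passive" versus
# "active-active" disaster recovery across two hypothetical sites.
from itertools import cycle

SITES = ["dc-primary", "dc-recovery"]  # hypothetical site names


def is_healthy(site: str) -> bool:
    """Placeholder health check; a real deployment would probe the site."""
    return True


def route_active_passive(sites=SITES) -> str:
    """All traffic goes to the first healthy site; the DR site idles until failover."""
    for site in sites:
        if is_healthy(site):
            return site
    raise RuntimeError("no healthy site available")


_round_robin = cycle(SITES)

def route_active_active(sites=SITES) -> str:
    """Load is balanced across every healthy site, so no hardware sits idle."""
    for _ in range(len(sites)):
        site = next(_round_robin)
        if is_healthy(site):
            return site
    raise RuntimeError("no healthy site available")


if __name__ == "__main__":
    print("active-passive always routes to:", route_active_passive())
    print("active-active alternates:", [route_active_active() for _ in range(4)])
```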

What we see happening now is a focus on consolidating resources and lowering Capex, said Robert Amatruda, product marketing manager for data protection at Dell Software. That is why so many companies are linking up with an outsourced data center and leveraging the efficiencies that cloud models offer, not the least of which is a true cloud disaster recovery and business continuity framework that is beyond the resources of many in-house data centers.

“The data center of the future is all about efficiency and scale, having pointers to content indexes so you can resurrect your data, and having a myriad of options to failover to both inside and outside the data center,” said Amatruda. “Especially as cloud becomes more prevalent, the notion of companies having infrastructure that they own and are financially responsible for is becoming increasingly obsolete.”

Some, of course, take it to extremes. Media giant Condé Nast, for example, pulled the plug on its 67,000-square-foot data center a few years ago and sold it, preferring to use only cloud services. The rationale: to focus on its core function of content creation and the IT resources needed for that. The company handed the IT load to Amazon Web Services (AWS). Over a three-month period, it migrated over 500 servers, one petabyte of storage, more than 100 networking components and over 100 databases to AWS. Because the company was already well advanced in virtualization, the transition went quickly. The result: performance of core content-related IT functions rose by almost 40 percent, while operating costs fell by about 40 percent.

But not everyone is ready to push everything to the cloud – yet. For one thing, there is a lot of investment already sunk into on-premises gear. Amatruda said we are seeing many solutions that are specifically designed and built to act as a bridge between legacy architecture and a hybrid architecture that is part cloud, part data center. That means focusing on being able to manage data both on-premises and off, and being able to deliver functionality like content indexing to provide resiliency.
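As a rough illustration of what a content index involves, the hypothetical Python sketch below catalogs each object's fingerprint along with every place a copy lives, on-premises or in the cloud, so a restore can pull from whichever copy is reachable. The class names, paths and location strings are invented for illustration and are not taken from any product mentioned here.

```python
# Hypothetical sketch of a content index: a catalog that records where each
# backed-up object lives (on-premises or in the cloud) so it can be
# restored from whichever copy is reachable.
import hashlib
from dataclasses import dataclass, field


@dataclass
class IndexEntry:
    content_hash: str                              # fingerprint of the object's contents
    locations: list = field(default_factory=list)  # e.g. "onprem://rack4/vol2", "cloud://dr-bucket/..."


class ContentIndex:
    def __init__(self):
        self._entries = {}  # path -> IndexEntry

    def add(self, path: str, data: bytes, location: str) -> None:
        """Record that a copy of this object exists at the given location."""
        digest = hashlib.sha256(data).hexdigest()
        entry = self._entries.setdefault(path, IndexEntry(content_hash=digest))
        if location not in entry.locations:
            entry.locations.append(location)

    def locate(self, path: str) -> list:
        """Return every known copy of the object, wherever it lives."""
        entry = self._entries.get(path)
        return entry.locations if entry else []


# Usage: the same file is indexed in both places, so a restore can pull from
# either the local rack or the cloud copy.
index = ContentIndex()
index.add("/finance/q3.xlsx", b"...report bytes...", "onprem://rack4/vol2")
index.add("/finance/q3.xlsx", b"...report bytes...", "cloud://dr-bucket/finance/q3.xlsx")
print(index.locate("/finance/q3.xlsx"))
```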

“Instead of ensuring that data is recoverable, more organizations are concerned with having an always-on architecture, whereby resiliency is built directly into the architecture itself,” he said. “You’re seeing more products deal with cloud connection capabilities so that users can manage data outside the walls of their physical data center.”

Blurred Lines

A blurring of the lines, then, appears to be happening between physical and virtual. Tools now exist that make it somewhat irrelevant whether data sits in a rack in the next room or in some nebulous cloud-based data center. The big point to grasp is that the future of BC/DR is moving away from the traditional concepts of a primary site and a recovery site. Instead, it is shifting toward the ability to seamlessly migrate or burst workloads from site to site, not only for resiliency but also for peak demand, cost or customer proximity, said Rachel Dines, senior product marketing manager for SteelStore at NetApp.

“These sites could be customer owned, at a private cloud, hosted or a colo, but the key is that data must be able to dynamically shift between them, on demand, while maintaining always-on availability — another term for this is a data fabric,” she said.

This means incorporating more cloud infrastructure into the architecture, as such services make a lot of sense for backup and disaster recovery workloads. They tend to be inexpensive, provide greater data protection, and give organizations a chance to get comfortable with the cloud before taking the next step in maturing their resiliency practices. Moving these workloads also opens the door to data reduction techniques such as deduplication, compression, differential snapshots and efficient replication.
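The toy Python sketch below, assuming fixed 4 KB blocks and zlib compression, shows how deduplication plus compression shrink a backup footprint when nightly backups are nearly identical. The data here is artificially repetitive, so the reduction it reports will be far larger than real workloads would see.

```python
# Toy illustration (assumptions: fixed 4 KB blocks, zlib compression) of how
# deduplication and compression shrink a backup footprint when successive
# backups share mostly identical data.
import hashlib
import zlib

BLOCK_SIZE = 4096
store = {}  # block hash -> compressed block (each unique block stored once)


def ingest(data: bytes) -> int:
    """Dedupe and compress one backup image; return bytes actually added to the store."""
    added = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:          # only new, unseen blocks consume space
            store[digest] = zlib.compress(block)
            added += len(store[digest])
        # repeated blocks cost only an index entry, not another stored copy
    return added


# Ten nightly "backups" that are almost entirely identical to one another.
base = (b"customer-record:" + b"0" * 4080) * 1000
total_raw, total_stored = 0, 0
for night in range(10):
    backup = base + f"delta-{night}".encode()   # small nightly change
    total_raw += len(backup)
    total_stored += ingest(backup)

print(f"raw: {total_raw} bytes, stored: {total_stored} bytes, "
      f"reduction: {total_raw / max(total_stored, 1):.0f}x")
```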

“These technologies can reduce storage footprints by up to 30 times when used in backup and DR environments,” said Dines.

Consequently, the data center of the future will hold significantly less redundant data, which makes it more scalable, whether it is in-house or in the cloud.

“The cloud is creating a level of scalability that didn’t exist in the data centers of the past,” said Amatruda.

Step Back

With all these new DR/BC concepts being thrown at the data center manager, what is the best course forward? Greg Schulz, an analyst with Server and StorageIO Group, said it is time to start using both new and old things in new ways, stepping back from the tools and technologies themselves to focus on how they can protect and preserve information.

“Revisit why things are being protected, when, where, with what, and for how long and review if those are meeting the actual needs of the business,” said Schulz. “Align the right tool, technology and technique to the problem instead of racing to a technology or trend then looking for a problem to solve.”

Drew Robb is a freelance writer based in Florida.
