Disaster Recovery in the Cloud Age
October 23rd, 2012 By: Industry Perspectives
Robert Offley is President & CEO at CentriLogic Inc. He has more than 20 years of management experience in the IT industry and is now focused on establishing CentriLogic as a leading player in the global data center services market.
Disaster recovery procedures are nothing new, but the evolution of cloud hosting allows organizations to leverage aspects of physical and virtual technologies to ensure their information systems and internal business practices remain operational in the event of any type of disaster.
Traditionally, an organization would map its applications against their respective Recovery Time Objectives (“RTO”) and build a disaster recovery plan that would enable it to recover operations within the desired timeframe. Often, this involved rebuilding infrastructure and applications from scratch over a 72-hour period and dealing with the logistics of procuring and deploying physical assets; in reality, the process could take much longer. For critical applications where downtime isn’t an option, a fully redundant, active-active solution, in which two production systems run in parallel across two geographically diverse facilities, is always recommended. In the past, that would cost six to eight times as much as a non-redundant solution.
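The RTO-mapping exercise described above can be sketched in a few lines. This is a minimal illustration; the application names, RTO thresholds, and strategy tiers are hypothetical examples chosen for this sketch, not figures from the article.

```python
# Illustrative sketch: map applications to Recovery Time Objectives (RTOs)
# and pick a recovery approach per tier. All names and thresholds are
# hypothetical examples.

def recovery_strategy(rto_hours: float) -> str:
    """Choose a disaster recovery approach based on the application's RTO."""
    if rto_hours <= 1:
        return "active-active (parallel production at two sites)"
    if rto_hours <= 24:
        return "warm standby (cloud replicas, scaled up on failover)"
    return "rebuild from backups (restore within ~72 hours)"

applications = {
    "order-database": 1,   # downtime isn't an option
    "web-frontend": 8,     # a few hours of downtime is tolerable
    "reporting": 72,       # can wait for a full rebuild
}

for app, rto in applications.items():
    print(f"{app}: RTO {rto}h -> {recovery_strategy(rto)}")
```

The point of the exercise is that not every application justifies the expense of the top tier; the plan pairs each RTO with the cheapest approach that still meets it.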
Provisioning With Speed
This has changed with the elasticity of cloud and virtualization technologies. An organization can now deploy computing resources extremely quickly; tens or hundreds of servers can be available in a matter of hours. This means that IT can replicate its production infrastructure without the need to purchase additional servers and deal with the complex logistics of setting them up. Servers are only the first step in the process, since IT also needs to recover and install the operating systems, databases and applications.
Previously, loading the software onto the servers was also a process that could take days or even weeks. Another benefit of the cloud and virtualization is the ability to take snapshots of the computing environment by making a copy of the data and configurations at a specific moment in time. This makes it easier to restore the data in another location or backup data center. Once the redundant hardware is in place, the snapshot is restored, reducing the time required to load the core software: namely, the operating systems, database and application layers.
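The snapshot-and-restore workflow can be modeled in miniature: a snapshot is a point-in-time copy of data and configuration, and restoring it at another site avoids reinstalling the OS, database, and application layers by hand. This is a toy simulation under assumed names; real snapshots operate on disk images or virtual machines, not Python dictionaries.

```python
# Toy model of snapshot and restore: the snapshot freezes data and
# configuration at one moment; restoring it at a backup site reproduces
# that state. All names here are illustrative.
import copy

def take_snapshot(environment: dict) -> dict:
    """Copy the environment's data and configuration at a point in time."""
    return copy.deepcopy(environment)

def restore(snapshot: dict, site: str) -> dict:
    """Bring the snapshot up at another location or backup data center."""
    restored = copy.deepcopy(snapshot)
    restored["site"] = site
    return restored

production = {
    "site": "primary-dc",
    "os": "linux",
    "database": {"orders": [101, 102]},
    "app_config": {"workers": 8},
}

snap = take_snapshot(production)
production["database"]["orders"].append(103)  # a write after the snapshot...

recovered = restore(snap, "backup-dc")
print(recovered["site"])                # backup-dc
print(recovered["database"]["orders"])  # [101, 102] -- post-snapshot writes are lost
```

Note the last line: anything written after the snapshot is not in the restored copy, which previews the "orphan data" problem discussed below for databases.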
Although these virtualization tools and technologies speed up the process and allow compute environments to be provisioned quickly, the devil is in the details. Each system differs in application and infrastructure architecture; for instance, some systems have multiple application servers with load balancing and separate databases, all residing on different physical or virtual servers.
Hybrid Approach Preferred
It is for this reason that a hybrid cloud computing approach is ideal, because it combines the strengths of dedicated physical hardware with cloud technologies, whether located in-house or at an outsourced data center. Organizations still require physical hardware, whether to leverage existing investments, improve security by isolating data on one physical device, or boost performance with dedicated processing power.
The addition of physical hardware to a disaster recovery strategy is particularly helpful in the case of databases, which can be vexing to restore. As well, there is the persistent issue of “orphan data”: data written between the last backup and the moment the infrastructure failed or a catastrophic event occurred. Databases are also the applications where organizations have the highest security concerns, since they may contain sensitive data, and there is usually resistance to hosting them in the cloud even in a disaster scenario. For many organizations it makes sense for the database and other sensitive applications to reside on dedicated hardware and to be replicated onto other physical hardware at another site.
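The orphan-data window is easy to picture with a timeline: any write that lands after the last backup but before the failure is lost on restore. The timestamps and record names below are hypothetical, purely for illustration.

```python
# Illustrative sketch of "orphan data": writes between the last backup and
# the failure are absent from the restored copy. Times and names are made up.
last_backup_time = 100
failure_time = 130

writes = [
    ("order-1", 90),    # before the backup: safe
    ("order-2", 110),   # after the backup: orphaned
    ("order-3", 125),   # after the backup: orphaned
]

orphaned = [name for name, t in writes if last_backup_time < t <= failure_time]
print(orphaned)  # ['order-2', 'order-3']
```

Continuous replication onto dedicated hardware at a second site, as suggested above, shrinks this window far below what periodic backups allow.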
Using the hybrid approach, the cloud works well for application and Web servers that can be restored from a snapshot of the production environment and then attached to the database in a disaster scenario. Alternatively, application and Web servers can be up and running in the cloud and already attached to the database, but with only minimal compute and memory allocated to them. When disaster strikes, more memory and compute power can be allocated almost seamlessly. This approach not only reduces the time to recovery but means that a completely redundant system can be in place for less than twice the cost of the production system, instead of six to eight times the cost.
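The warm-standby pattern just described can be sketched as a scale-up on failover, with a back-of-the-envelope cost check. The instance sizes and per-CPU cost figures are illustrative assumptions, not vendor pricing.

```python
# Sketch of the warm-standby approach: cloud app/web servers stay attached
# to the database with a small footprint, then scale up to production size
# when disaster strikes. Sizes and costs are illustrative assumptions.

def failover(standby: dict, production: dict) -> dict:
    """Scale the standby up to match production capacity."""
    scaled = dict(standby)
    scaled["cpus"] = production["cpus"]
    scaled["memory_gb"] = production["memory_gb"]
    scaled["active"] = True
    return scaled

production = {"cpus": 16, "memory_gb": 64, "active": True}
standby = {"cpus": 2, "memory_gb": 4, "active": False}  # small, cheap footprint

# Rough cost model: a standby at 1/8 of production capacity keeps total
# spend well under 2x, versus fully duplicating the production system.
unit_cost_per_cpu = 1.0
total_cost = (production["cpus"] + standby["cpus"]) * unit_cost_per_cpu
print(total_cost / (production["cpus"] * unit_cost_per_cpu))  # 1.125 < 2

recovered = failover(standby, production)
print(recovered)  # {'cpus': 16, 'memory_gb': 64, 'active': True}
```

Because the standby is already attached to the database, failover is mostly a resource reallocation rather than a rebuild, which is where the time savings come from.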
Disaster Recovery Strategy
The strategy of hosting some data sets and applications internally and others in the cloud or with a third-party data center is sensible and even mandatory. After all, the same disaster that strikes a company can, ironically, inhibit the organization’s ability to execute on its disaster recovery strategy. Using a third party eliminates that dependency and immediately adds resilience. The other advantage of outsourcing the infrastructure is that companies can leverage the provider’s investment in cloud and virtual technologies and skill sets.
There’s really no excuse for an organization not to have a disaster recovery plan in place for its core applications and infrastructure. By embracing the cloud and technologies such as virtualization, replication, and advanced storage and hardware platforms, along with third-party services when needed, IT organizations can quickly and cost-effectively build a disaster recovery platform, if not a completely redundant, active-active environment.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.