
Resuscitating Legacy Data Centers: A Darwinian Approach

Recent studies and articles have shown that IT environments are in a constant state of change. The data center needs to evolve or simply become extinct; think along the lines of Betamax and the Walkman, writes Morris O’Riordan of Rubicon.

Morris O’Riordan is Vice President of National Construction for Rubicon Professional Services, a mission-critical construction management firm that takes an owner’s approach to the design and building of data centers.

MORRIS O'RIORDAN
Rubicon

Let's face it: information (a.k.a. data) is the global currency driving the world’s economic engine, and there is no singular "Fort Knox" to store it all. Exacerbating the situation are cloud computing services, ubiquitous connectivity for mobile devices, government compliance and regulations, and backup, storage and disaster recovery needs. This new data-driven economy has contributed to the massive data center construction era we are now living in. Hello, Google, Amazon and Facebook!

Costly Downtime

Given this situation, it behooves facility managers, IT personnel and engineers to keep these data vaults running at optimal capacity. “Downtime is not an option” sounds cliché by now, but the reality behind this overused phrase can cost organizations millions of dollars and put data center managers on the unemployment line.

In his widely cited report (covered in a ZDNet article), industry expert Roger Sessions attempted to put a price tag on data center failure. According to Sessions, the yearly worldwide cost of IT infrastructure interruption could be as high as $6.2 trillion. Sessions even puts the monthly cost of global IT failure at $500 billion!

And these numbers prove true time and again. A recent article in InformationWeek reports that “While it is no secret data center outages are costly, they are also common. Of those surveyed, 95% experienced one or more unplanned data center outages in the past two years. Total outages occurred once a year on average and device-level outages occurred every two months. . .”

In addition, the article states that the average cost of downtime can run as high as $5,600 per minute; for a 90-minute outage, that means a hit of roughly half a million dollars. This price tag is enough to cause even the best data center managers to lose a good night’s sleep. Adding to the insomnia is the fact that most data centers in existence today simply were not built to handle the volume of information that needs to be properly stored and rapidly retrieved.
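
To see how quickly that per-minute figure compounds, here is a minimal back-of-the-envelope sketch in Python. The $5,600-per-minute rate is the figure cited above; the outage durations are hypothetical examples, not survey data.

```python
# Back-of-the-envelope downtime cost, using the $5,600-per-minute figure
# cited above. The outage durations below are hypothetical examples.

COST_PER_MINUTE = 5_600  # USD per minute of downtime

def downtime_cost(minutes: float) -> float:
    """Estimated cost of an outage of the given length, in USD."""
    return minutes * COST_PER_MINUTE

for minutes in (15, 90, 240):
    print(f"{minutes:>4}-minute outage -> ${downtime_cost(minutes):,.0f}")

# A 90-minute outage works out to $504,000 -- roughly the half-million-dollar
# hit per incident that the surveys describe.
```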

Case in point: a recent report by industry group Canalys states that worldwide investments in the data center infrastructure marketplace reached $26.2 billion in Q3 2011. Campos Research and Analysis also surveyed 300 IT decision-makers at large corporations in North America to track plans for growing or shrinking data center footprints over a multi-year period. The firm noted that one-third of all respondents planned to build out their data centers in 2011, and 83 percent expect to undertake data center expansion over the next one to two years.

Also consider this excerpt from the same InformationWeek article: “Part of the problem facing enterprises is that most data centers were built 10 to 15 years ago to support mainframe technology [and] don’t have the capabilities needed to support current technologies.”

Shout out to Charles Darwin: all of this confirms that IT environments are in a constant state of change. The data center needs to evolve or simply become extinct; think along the lines of Betamax, the Walkman and . . . you get the idea.

Data Center or Dodo Bird?

You can't rush data center evolution or you get failures; after all, the same man who built the Ford Mustang also built the Ford Pinto. Data center reconstruction poses a host of challenges. For most organizations, there’s an active data center in place handling day-to-day business. The trick is to upgrade it while limiting business disruptions. In many respects, it's like attempting to tune a car's engine while the car is traveling down the highway at 75 miles per hour.

Before taking the leap into fine-tuning an operating data center, organizations need to consider many factors, such as the power feed. A legacy data center built in the 1990s doesn’t have the power required to fuel an updated system, and once the power is doubled, you must also account for new cooling capacity. What’s necessary is a well-thought-out plan involving engineers, contractors, architects and equipment vendors, all working toward a common goal. A smart plan incorporates a best-practices design that meets end requirements for increased capacity and density. And all of it needs to be done without taking the whole system down; zero downtime is a must.
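
As a rough illustration of why doubling power forces a cooling re-plan, here is a minimal sketch, assuming the standard conversions of 1 kW ≈ 3,412 BTU/hr of heat and 12,000 BTU/hr per ton of cooling; the load figures are hypothetical, not drawn from any specific facility.

```python
# Why doubling power forces a cooling re-plan: nearly every watt delivered
# to IT gear becomes heat the cooling plant must remove. The conversion
# constants are standard; the load figures below are hypothetical.

BTU_PER_HR_PER_KW = 3_412    # 1 kW of electrical load ~= 3,412 BTU/hr of heat
BTU_PER_HR_PER_TON = 12_000  # 1 ton of cooling removes 12,000 BTU/hr

def cooling_tons(it_load_kw: float, headroom: float = 1.2) -> float:
    """Cooling tonnage for a given IT load, padded by a headroom factor."""
    heat_btu_hr = it_load_kw * BTU_PER_HR_PER_KW
    return heat_btu_hr * headroom / BTU_PER_HR_PER_TON

legacy_kw = 300              # hypothetical 1990s-era IT load
upgraded_kw = legacy_kw * 2  # the "doubled power" case described above

for label, kw in (("legacy", legacy_kw), ("upgraded", upgraded_kw)):
    print(f"{label:>8}: {kw} kW IT load -> ~{cooling_tons(kw):.0f} tons of cooling")
```

The takeaway is that cooling demand scales linearly with IT load, so a power upgrade that isn't matched by a cooling upgrade leaves the facility thermally underprovisioned.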

Common Scenario

Let’s imagine that a cloud hosting provider needs to expand its facility by an additional 3,000 square feet of data center space. Obviously, in this particular instance, zero downtime is critical, as the company’s data center drives a host of customer business functions.

The first mistake upon initiating such a project would be to simply hire an engineer and a general contractor. Under this scenario, proper consulting and planning fail to happen up front, often leading to several “restarts” in an attempt to find the correct design. This creates costly delays in time-to-market and major budget overruns.

This type of project is too specialized for a general contractor and shouldn’t be handled through a standard RFP sent out for bids. Rather, specialized mission-critical facility construction organizations (a trusted adviser) with a proven track record in designing data centers are the best fit for such projects. In addition, the chief concern for many facing this type of project is downtime, and general contractors won’t be sensitive to the cost of lost revenue.

These trusted advisers chart a step-by-step map and do not deviate from the plan. At some point, the system will need to be bypassed, cutting the existing conduit and cable, and it takes an educated team with documented experience to get it done right. Devising the best road map takes weeks and many trial runs to ensure all designations are correct; backup procedures are critical and must be put in place.

The selected adviser performing a data center retrofit must work closely with the customer to build an innovative approach, coordinating all parties, from engineers to equipment makers. This sounds like another cliché, but it's often overlooked or not performed accurately. Acting as the central point of contact, the adviser must also represent the interests of both owners and contractors to all associated parties.

Then comes the initial feasibility study, which must be conducted to help with site selection, planning, equipment integration, construction and commissioning. The selected adviser must also act as a neutral party in determining facility location, size, scope and engineering requirements. The end result should be an expanded data center facility built on time and on budget, but most importantly with zero downtime.

Prodding Evolution

The old saying goes, “Getting there is half the fun.” But when it comes to a data center redesign, nothing could be further from the truth. It’s a long and arduous process that involves storage, power, cooling, and density, with one change dramatically impacting all the rest. Most importantly, it requires a roadmap. The key is to expand a data center to handle next-generation requirements without disrupting the current business. This takes planning and a set script involving everyone from vendors to operations.

But the payoff is huge. If planned properly, a customer can evolve a legacy data center into a modern, high-density, maintainable and reliable system for the next 10 years. Yes, future-proof. By doing it right, customers can realize anywhere from a 30 to 60 percent return on investment.
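
How that 30 to 60 percent is computed depends entirely on what counts as the return; a minimal sketch of the arithmetic, with purely hypothetical inputs, might look like this:

```python
# Illustrative ROI arithmetic for a retrofit. Every figure here is a
# hypothetical placeholder, not data from the article.

project_cost = 2_000_000   # hypothetical retrofit budget (USD)
annual_savings = 900_000   # hypothetical: avoided outages + efficiency gains
years = 3                  # evaluation window

gain = annual_savings * years
roi = (gain - project_cost) / project_cost
print(f"ROI over {years} years: {roi:.0%}")  # -> 35%, inside the 30-60% band
```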

And while there’s no “easy button” for data center reconstruction, the right planning and processes can make it happen. Biggest takeaway: it’s important to partner with a data center-specific adviser who has the proper knowledge to make the project a success. Otherwise, your competitors will evolve to meet the demands of the next data generation.

