How Sandy Has Altered Data Center Disaster Planning

The Empire State Building stands out as a beacon of light in a darkened Manhattan landscape during the widespread power outages following Superstorm Sandy. (Photo by David Shankbone via Wikimedia Commons).

NEW YORK – Keep your diesel supplier close, and your employees closer. These were among the “lessons learned” from Superstorm Sandy, according to data center and emergency readiness experts at yesterday’s DatacenterDynamics Converged conference at the Marriott Marquis, which examined the epic storm’s impact on the industry and the city.

The scope of Sandy has altered disaster planning for many data centers, which now must consider how to manage regional events in which fallen trees and gasoline shortages restrict the movement of staff and supplies across large areas. Yesterday’s panel also raised tough questions about New York’s ability to improve its power infrastructure, as well as the role of city policies governing the placement of diesel fuel storage tanks and electrical switchgear.

A clear theme emerged: Data center operators must expand the scope of their disaster plans to adapt to larger and more intense storms, weighing contingencies that previously seemed unlikely. Sandy’s power, size and unusual track proved a deadly combination, bringing death and destruction on an unparalleled scale.

Superstorm Sandy caused $19 billion in damage in New York City, leaving more than 900,000 employees out of work at least temporarily, according to Tokumbo Shobowale, the Chief Business Operations Officer for New York City. Shobowale said the storm has led FEMA to redraw the storm surge maps and flood zones for the city.

“We have 200 million square feet of commercial space in the flood plain now,” said Shobowale, noting that the city struggled to adapt to unprecedented flooding that damaged critical infrastructure for transit, telecommunications and power. “A lot of our response was figured out on the fly. Now that experience allows you to create standard operating procedures for next time.”

Focus on Fuel and Personnel

The data center industry has begun that process in earnest. New York area facilities experienced both direct and indirect impacts from Sandy. A handful of data centers in the financial district were knocked offline as the storm surge flooded basements housing critical equipment. Nearly all of lower Manhattan was left without power when ConEd was forced to shut down key parts of the power grid, forcing major carrier hotels and data centers to operate on backup generators for three to seven days. Facilities in New Jersey also faced local power outages and road closures, as trees fell across streets and power lines.

Planning ahead is more important than ever, as data centers will need to consider padding their inventories to ride out longer periods in which they must operate independently.

“If you didn’t have your service providers and employees on-site at the time of the storm, they weren’t going to get there,” said Paul Hines, VP of Operations and Engineering at Sentinel Data Centers, which has a data center in central New Jersey. “That’s affected our planning.” That includes keeping more spare parts at the facility, bringing more staff on-site, and doing more advance planning around maintenance contracts and fuel suppliers.

Several questions for the panel focused on the availability of diesel fuel for emergency backup generators, which was a key concern in the storm’s aftermath. Data center providers typically arrange priority contracts with fuel suppliers. But what happens when a regional disaster tests supply and creates dueling priorities?

Providers in New Jersey reported no problems finding fuel, although some had to go outside the region to ensure a continuous supply. “We had 10 days of fuel, and contracts with two fuel suppliers,” said Hines. “You also have to make sure your fuel suppliers can operate with no power, and have gravity-fed systems. We’ve now found an out-of-region supplier as well. But that doesn’t solve the problem of access to facilities.”

Access was also a pressing problem in Manhattan, where flooding made some roads impassable. Building owners worked with city officials to ensure the availability of telecom services, for example. One of the city’s largest data hubs, 111 8th Avenue, was given high priority because the building also houses a hospital.

The Role of the City

Audience members at DatacenterDynamics Converged also pressed Shobowale about the city’s response to Sandy, especially the vulnerability of the utility grid. One questioner noted the failure of a major ConEd substation built alongside the East River.

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

3 Comments

  1. A better solution: get your servers and DR infrastructure out of Manhattan!! Like, Chicago?

  2. Kazuhiko Y

    Natural disasters can exceed all imagination, so the location by itself is a secondary matter; every place could suffer from a natural disaster. In my opinion, the best solution is to establish procedures for shutting down a data center’s facilities during an unimaginable disaster in a way that allows a quick restart, while everyone understands that natural disasters tend to exceed expectations. Of course, it comes near to stating the obvious that dispersion of risk is important.

  3. P Bulteel

    Fukushima should have been an eye-opener two years ago that any critical system in a basement is subject to potential flooding. At that point everyone should have evaluated where these systems sit and prepared by not housing critical systems in a basement, or by having secondary backup systems that are not in the basement. Hindsight is 20/20, but two years ago we saw it with Japan. Shouldn't something have been learnt from that?