
Surviving Sandy: Two Views of the Superstorm


A look at some of the damage wrought by Superstorm Sandy on a property adjacent to the IFF data center in Union Beach, New Jersey. (Photo: IFF)

LAS VEGAS - For Alex Delgado, things were going from bad to worse as Superstorm Sandy slammed the Jersey Shore. It was high tide, during a full moon. There was a 13-foot storm surge, and the data center was less than a mile from the beach. Six hours into the storm, the company's operations team in India had to be evacuated due to a cyclone.

The staff at the International Flavors & Fragrances (IFF) data center in Union Beach, N.J., used to joke about a single telephone pole that carried "half of the Internet and half of its power." As Sandy came ashore, that was the pole that fell. In short, had Delgado won a raffle that week, it would have been for the Hunger Games. Everything was going wrong.

The campus was swamped with six to seven feet of water. Both of its power substations were under water, as were the diesel fuel pumps. UPS batteries were nearing their end of life. Street power was out, and access to the facility was hindered by a partially collapsed road.

Different Scenarios, Different Considerations

Delgado, the Global Operations and Data Center Manager for IFF, shared his experience this week as part of a keynote panel at Data Center World in Las Vegas. The panel showcased two stories of Sandy’s impact: one from the Jersey Shore at the heart of the damage, another from Philadelphia.

The data center in Union Beach supports more than 50 manufacturing facilities around the world for IFF, a chemical manufacturing company that did over $2.8 billion in revenue last year. While Delgado and his team struggled with the storm, the event had no major impact on customers: the company didn't lose a single order.

The 4,500-square-foot facility is a single-tenant building with 30 minutes of UPS backup. Its disaster recovery site is two hours away at an IBM facility in Sterling Forest, New York. As the storm intensified, IFF was able to shift its critical operations to the backup facility.

The damage in Union Beach was severe. The data hall stayed dry, as it was on the second floor of the building. But the storm surge took out power and mechanical infrastructure, and flooded the machine shop, ruining most of the facility's power tools and spare parts. With the power out and roads closed or blocked, staff stayed in place for days. Provisions were exhausted after the first 48 hours, and IFF staff subsisted on vending machine food as they began the recovery effort, Delgado said.

The data center was returned to service on Dec. 8 with new generators and infrastructure. Delgado wound up procuring 300 batteries and three generators.

Delgado's key "lessons learned" included vendor support. "If you don't have a good relationship with your vendors, start shaking some hands today," he said. He also noted that the company had moved to cloud email, which saved a ton of headaches in terms of communication.

The View From Philly

Donna Manley, IT Senior Director at the University of Pennsylvania, showed a different side of the storm. While Philadelphia wasn't hit nearly as hard, the operational impact of the storm was significant.

The university's data center is in a multi-tenant building, with a main data center of 4,850 square feet in the University City section of Philadelphia. Manley's story is important because it revealed a larger concern than just the data center: the city of Philadelphia's aged infrastructure.

A week prior to the storm, Manley and her team started the planning process. They identified teams, began tarping the windows, and put disaster recovery provider SunGard on alert. "We started our crisis command center on the 29th, setting up a separate box.net instance just in case we lost power and were in an emergency situation," said Manley.

Understanding the geographic diversity of the staff was important, as some employees lived in areas where the storm hit hard. "We had very few individuals that could have been on site," said Manley. "We needed to make sure there was technical and management staffing."

Cloud Services Play a Role

Manley leveraged online storage provider Box.net to get her team through the storm. "Resourcing doesn't just mean people," said Manley. "One of the big things we have going on is our documentation. Up until recently, we had it in SharePoint. We made it available on box.net, and we didn't have to worry about servers going down and documentation not being available to us."

Manley's advice is to have a data center crash kit checklist. "Because we're an urban campus, we have a couple of unique items on there – respirator masks, subway tokens to get to the disaster recovery site at SunGard," she said.

She said it's also important to read the fine print on disaster recovery agreements to see whether a fee is required to put your provider on standby. There's also food to consider, as there's a chance workers will have to stay put at the data center for extended periods of time, and the local restaurants aren't as committed to staying online as a data center.

Both organizations said the prospect of managed services and hosting appeals to them a bit more now than it did before the storm. Cloud services played an important role in both disaster plans, even if only to keep communications open through email.

About the Author

Jason Verge is an Editor/Industry Analyst on the Data Center Knowledge team with a strong background in the data center and Web hosting industries. In the past he’s covered all things Internet Infrastructure, including cloud (IaaS, PaaS and SaaS), mass market hosting, managed hosting, enterprise IT spending trends and M&A. He writes about a range of topics at DCK, with an emphasis on cloud hosting.


2 Comments

  1. While we can agree this was a natural disaster, to me the real disaster was the failure to plan, or, to use a term a colleague of mine prefers, arrogance. No company has the budget for proper disaster mitigation and defense; they leave all of that to the risk management people who purchase insurance. How many times will it take before companies actually protect, instead of repair, their facilities against these types of events? It's not playing "Chicken Little" to predict that catastrophic natural disasters will happen and to anticipate the problems they can cause. It's simply good business. I am hopeful we can survive the next event with our after-action report containing fewer Lessons Learned pages and more Problems Avoided examples. Get busy creating plans to prevent, mitigate and avoid disasters, and that may mean fewer hours re-writing your resume and going out on interviews. Just because you don't live in Tornado Alley or on the Gulf doesn't mean bad weather can't find you.

  2. John Goodman

    While I agree with you on the need to plan, I do not agree with the arrogance comment. It's got to be a case-by-case basis. This is a business. Companies weigh risk and assign a value to mitigate accordingly, which would probably include the cost of doing nothing and the loss of any potential business. In some cases a company might find that it's more cost effective to have a solid plan for disaster recovery and recover at a DR site than to spend millions upon millions on meteor-proofing your data center for that one-in-a-million chance of the 100-year storm. Let's also remember that some data centers may not be as critical to one company as they are to another. That having been said, as disastrous as this event was, in both cases presented there was no real loss of business. If anything it seemed more of just a major inconvenience. When it came down to it, real business was not impacted, which is more than any business or data center manager could hope for. I am sure others would agree that it's about more than just keeping your data center up. It is also about having a solid plan for disaster recovery. Whatever their plans were in these cases...they worked!