LAS VEGAS – For Alex Delgado, things were going from bad to worse as Superstorm Sandy slammed the Jersey Shore. It was high tide, during a full moon. There was a 13-foot storm surge, and the data center was less than a mile from the beach. Six hours into the storm, the company’s operations team in India had to be evacuated due to a cyclone.
The staff at the International Flavors & Fragrances (IFF) data center in Union Beach, N.J. used to joke about a single telephone pole that carried “half of the Internet and half of its power.” As Sandy came ashore, that was the pole that fell. In short, had Delgado won a raffle that week, it would have been for the Hunger Games. Everything was going wrong.
The campus was swamped with six to seven feet of water. Both its power substations were under water, as were the diesel fuel pumps. UPS batteries were nearing their end of life. Street power was out, and access to the facility was hindered by a partially collapsed road.
Different Scenarios, Different Considerations
Delgado, the Global Operations and Data Center Manager for IFF, shared his experience this week as part of a keynote panel at Data Center World in Las Vegas. The panel showcased two stories of Sandy’s impact: one from the Jersey Shore at the heart of the damage, another from Philadelphia.
The data center in Union Beach supports more than 50 manufacturing facilities around the world for IFF, a chemical manufacturing company that did over $2.8 billion in revenue last year. While Delgado and his team struggled with the storm, the event had no major impact on customers, as the company didn’t lose a single order.
The 4,500-square-foot facility is a single-tenant building with 30 minutes of UPS backup. Its disaster recovery site is two hours away at an IBM facility in Sterling Forest, New York. As the storm intensified, IFF was able to shift its critical operations to the backup facility.
The damage in Union Beach was severe. The data hall stayed dry, as it was on the second floor of the building. But the storm surge took out power and mechanical infrastructure, and flooded the machine shop, ruining most of the facility’s power tools and spare parts. With the power out and roads closed or blocked, staff stayed in place for days. With provisions exhausted after the first 48 hours, IFF staff subsisted on vending machine food as they began the recovery effort, Delgado said.
The data center was returned to service on Dec. 8 with new generators and infrastructure. Delgado wound up procuring 300 batteries and 3 generators.
Delgado’s key “lessons learned” included vendor support. “If you don’t have a good relationship with your vendors, start shaking some hands today,” he said. He also noted that the company had moved to cloud email, which saved a ton of headaches in terms of communication.
The View From Philly
Donna Manley, IT Senior Director at the University of Pennsylvania, showed a different side of the storm. While Philadelphia wasn’t hit nearly as hard, the operational impact of the storm was significant.
The university’s data center is in a multi-tenant building, with a main data center of 4,850 square feet in the University City section of Philadelphia. Manley’s story is important because it revealed a larger concern than just the data center: the city of Philadelphia’s aging infrastructure.
A week prior to the storm, Manley and her team started the planning process. They identified teams, began tarping the windows, and put disaster recovery provider SunGard on alert. “We started our crisis command center on the 29th, setting up a separate box.net instance just in case we lost power and were in an emergency situation,” said Manley.
Understanding the geographic diversity of the staff was important, as some employees lived in areas where the storm hit hard. “We had very few individuals that could have been on site,” said Manley. “We needed to make sure there was technical and management staffing.”
Cloud Services Play a Role
Manley leveraged online storage provider Box.net to get them through the storm. “Resourcing doesn’t just mean people,” said Manley. “One of the big things we have going on is our documentation. Up until recently, we had it in SharePoint. We made it available on box.net, and we didn’t have to worry about servers going down and documentation not being available to us.”
Manley’s advice is to have a data center crash kit checklist. “Because we’re an urban campus, we have a couple of unique items on there – respirator masks, subway tokens to get to the disaster recovery site at SunGard,” she said.
She said it’s also important to read the fine print on disaster recovery agreements to see whether a fee is required to put your provider on standby. There’s also food, as there’s a chance workers will have to stay put at the data center for extended periods of time, and the local restaurants aren’t as committed to staying online as a data center.
Both organizations said the prospect of managed services and hosting now appealed to them a little bit more than prior to the storm. Cloud services played an important role in both disaster plans, even if only to keep communications open through email.