Storms KO NaviSite San Jose Data Center


A NaviSite data center in Silicon Valley was without power for an hour this morning after severe storms knocked out the facility’s utility power from PG&E. NaviSite’s San Jose data center lost utility power from PG&E at 4:45 a.m. Pacific time, and backup power systems failed to operate as designed.   

“Generator power has been restored to the data center in San Jose, but the site was without power for approximately 45 to 60 minutes,” NaviSite reported on the company blog. “The data center has been and continues to run on generator power.  We are still waiting for street power to become available, but will not switch back over until we have an understanding of what caused the original issue.”

NaviSite (NAVI) did not indicate the precise cause of the outage, but one of its customers supplied more information. “Our backup power systems initially functioned correctly shifting to battery power as a bridge to generators which then failed to turn on,” ProStores said on its Twitter feed.

ProStores provides turnkey web stores for retailers who sell their goods on eBay. AuctionBytes has more coverage of the ProStores outage.

About the Author

Rich Miller is the founder and editor-in-chief of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


6 Comments

  1. nate

    Issues like this send shivers down my spine when people talk about using flywheel UPS systems, which leave only a matter of seconds for backup generators to respond. Even assuming on-site personnel, I'd want at least 15-20 minutes of backup power in the event a generator fails to start — that gives some time to troubleshoot at least.

  2. Alfredo

    As a business, it's important to have redundant data center site(s) that can fail over with little to no impact, and to ensure there are heavy financial penalties in the contract when a colo doesn't meet its obligations. From a distance (speculation) there are a number of reasons why this load didn't fail over to generator. Perhaps they didn't perform a full commissioning of the building before going online, or perhaps they reduced their preventative maintenance spending. To Nate's point, I don't think anyone should fear the flywheel. I believe quarterly maintenance that requires a full load transfer under a controlled environment (electricians, mechanical engineers, generator tech, facility staff on site, etc.) and following a MOP (Method of Procedure) would have caught the problem, and it could have been repaired so that when a real utility power loss happens, you don't drop power to your customers.

  3. Losing backup generators is pretty bad for a company in the "high-availability" hosting business -- and it has a huge trickle-down impact for clients like ProStores: "The generators are now up & we are working to get all servers online. We anticipate having service fully restored w/in two to three hours." (http://twitter.com/prostores) NaviSite has had this facility for several years, so they might be over-subscribing their power capacity. If not, I doubt they would have had this outage if they had been doing full-load generator testing once a week -- which is the standard for Tier 4 data centers.