Codero Addresses Lengthy Power Outage

Dedicated hosting company Codero suffered a major power outage at its Phoenix data center early Monday that disrupted operations for several hours and caused lengthier downtime for about 10 percent of customers, whose servers failed to restart properly.

The incident began at about 8 a.m. Central time, when the facility lost utility power. The backup generators started properly, but an automatic transfer switch (ATS) failed to shift the load to generator power, leaving the data center running on the battery banks of its uninterruptible power supply (UPS) units. “Unfortunately, time ran out and our facility went dark,” said Codero chief operating officer Ryan Elledge.

The outage also damaged a power distribution unit (PDU) supporting the core network router, which delayed the restoration of service after power was returned to the data center. A small number of servers remained offline late Monday evening due to hardware problems stemming from the power issue. Codero staff provided updates and customer service throughout the day via the company’s Twitter channel, and Elledge posted a video update.
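The failure chain Codero describes (utility loss, generator start, ATS transfer, UPS ride-through) can be sketched as a simple decision check. This is purely an illustrative model, not Codero's actual control logic; the function and parameter names are hypothetical:

```python
def final_power_source(generator_starts: bool,
                       ats_transfers: bool,
                       batteries_depleted: bool) -> str:
    """Where the critical load ends up after a utility outage.

    In a healthy failover, the generator starts AND the automatic
    transfer switch (ATS) shifts the load to it. If either step fails,
    the UPS batteries carry the load only until they run out.
    """
    if generator_starts and ats_transfers:
        return "generator"          # normal failover path
    # Generator may be running, but without an ATS transfer the load
    # stays on UPS batteries -- exactly the failure mode reported here.
    return "dark" if batteries_depleted else "ups-battery"

# The March 15 scenario: generators started, the ATS never transferred,
# and the UPS batteries eventually ran out.
print(final_power_source(generator_starts=True,
                         ats_transfers=False,
                         batteries_depleted=True))  # "dark"
```

The sketch makes the key point visible: a running generator is useless to the load unless the transfer switch actually completes the handoff.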


About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

Comments


  1. Inbred Texan

    Howdy, I'm glad this was explained to the folks it affected most. There were many folks stranded in the real world who couldn't log into their favorite places. Second Life, for instance, was down for many hours and its millions of users plummeted into the darkness. I'm glad that you seem to have discovered the root of the problem. I will assume this means we won't have to relive such a traumatizing series of events again. :)

  2. Inbred Texan

    Wait a minute here!!! This says it happened on the 15th. Well, I can tell you this: it happened again around the 28th for several hours. At least that's what residents and members of the virtual social/business network Second Life have been told. I am curious to see if these are lessons learned, or repeated mistakes that will continue to affect millions of people.

  3. Circe Broom

    March 15th? Whoa! We were told that the huge blackout of Second Life was caused by the Phoenix data center's power outage on April 28th-29th! Did it happen again? How is that possible? I am amazed.

  4. unPC

    The COO, in explaining the March 15 failure, says that his people had to "replace some breakers" to fix the "automatic transfer switch." Most UL 1008-listed transfer switches do not utilize breakers, although some do. It almost sounds like Codero just had a couple of incoming breakers that were supposed to open and close between utility and generator power. Does anyone know this? The transfer switch for a 24/7 facility should not only carry a UL 1008 label, it should have a bypass-isolation feature as well. And it should be tested monthly.