Posted By Rich Miller on December 25, 2012 @ 1:53 pm in Amazon, Downtime
Amazon Web Services says it has recovered from the latest major outage for its cloud computing service, which affected large customers including Netflix and Heroku. The problems with Amazon’s Elastic Load Balancing (ELB) service began on Christmas Eve at 1:45 p.m. Pacific time and weren’t fully resolved until 9:41 a.m. on Christmas Day, an outage of about 20 hours.
The incident was the latest in a series of outages for Amazon’s US-East-1 region, the oldest and most crowded portion of its cloud computing infrastructure. The downtime raised new questions about Amazon’s management of the region, and the prospect that load balancing problems in a single zone can undermine the benefits of hosting assets in multiple availability zones – a scenario that first showed up in an extended outage last summer.
This was the second AWS-related outage in six months for Netflix, one of Amazon’s most sophisticated customers, which noted on its Twitter feed that it was “terrible timing.” The streaming video service gradually restored service to different devices throughout the night, but it wasn’t until 9 a.m. Pacific on Christmas morning – more than 19 hours after the incident began – that Netflix reported full recovery:
Special thanks to our awesome members for being patient. We’re back to normal streaming levels. We hope everyone has a great holiday.
— Netflix US (@netflix) December 25, 2012 
The ELB service is important because it is widely used to manage reliability, allowing customers to shift capacity between different availability zones, an important strategy in preserving uptime when a single data center experiences problems.
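The mechanics of that strategy can be sketched in a few lines. This is an illustrative model only, not Amazon’s implementation: when health checks flag a zone as impaired, its share of traffic is zeroed out and the remaining zones’ weights are renormalized so they absorb the load. The zone names are real US-East-1 labels; the weights are invented.

```python
# Illustrative sketch only -- not Amazon's actual ELB implementation.
# Models shifting traffic weight away from an unhealthy availability
# zone so the remaining zones absorb its share of the load.

def rebalance(zone_weights, unhealthy):
    """Drop unhealthy zones and renormalize the remaining weights to sum to 1."""
    healthy = {z: w for z, w in zone_weights.items() if z not in unhealthy}
    total = sum(healthy.values())
    if total == 0:
        raise RuntimeError("no healthy zones remain")
    return {z: w / total for z, w in healthy.items()}

# Hypothetical starting distribution across three US-East-1 zones.
weights = {"us-east-1a": 0.4, "us-east-1b": 0.3, "us-east-1c": 0.3}
print(rebalance(weights, unhealthy={"us-east-1b"}))
```

The point of the renormalization step is that failover only helps if the surviving zones actually receive the displaced traffic – which is exactly what broke down when the ELB control plane itself was impaired.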
During a June 29 outage, Amazon said a bug in its Elastic Load Balancing system prevented customers from quickly shifting workloads to other availability zones. This had the effect of magnifying the impact of the outage, as customers that normally use more than one availability zone to improve their reliability (such as Netflix) were unable to shift capacity.
In a July 2 incident report from that event, Amazon outlined steps it would pursue to avoid a repeat of these issues: “As a result of these impacts and our learning from them, we are breaking ELB processing into multiple queues to improve overall throughput and to allow more rapid processing of time-sensitive actions such as traffic shifts. We are also going to immediately develop a backup DNS re-weighting that can very quickly shift all ELB traffic away from an impacted Availability Zone without contacting the control plane.”
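The “backup DNS re-weighting” Amazon describes can be pictured as follows. This is a hypothetical sketch of the idea, not AWS code: drop an impacted zone’s load-balancer addresses from the DNS answer set directly, so no call to the ELB control plane is needed. The zone names and IP addresses are invented for illustration.

```python
# Hypothetical sketch of a backup DNS re-weighting: exclude an impacted
# zone's load-balancer IPs from the DNS answer set without touching the
# control plane. Zone names and addresses are invented for illustration.

elb_nodes = {
    "us-east-1a": ["10.0.1.10", "10.0.1.11"],
    "us-east-1b": ["10.0.2.10", "10.0.2.11"],
    "us-east-1c": ["10.0.3.10"],
}

def dns_answers(nodes, impacted_zone=None):
    """Return the A-record set, omitting any impacted zone's nodes."""
    return sorted(ip for zone, ips in nodes.items()
                  if zone != impacted_zone for ip in ips)

print(dns_answers(elb_nodes, impacted_zone="us-east-1b"))
# -> ['10.0.1.10', '10.0.1.11', '10.0.3.10']
```

Because the shift happens at the DNS tier, traffic drains from the impacted zone as clients’ cached records expire (per the record TTL), which is what makes the fallback fast and independent of a possibly overloaded control plane.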
It will be interesting to see whether Amazon’s load balancing problems were related to any of the issues identified in July, and what new solutions are devised to address them. We’ll likely see information on that front soon, as the Amazon team has been scrupulous about publishing detailed incident reports.
Article printed from Data Center Knowledge: http://www.datacenterknowledge.com
URL to article: http://www.datacenterknowledge.com/archives/2012/12/25/major-christmas-outage-for-amazons-cloud/
URLs in this post:
 December 25, 2012: https://twitter.com/netflix/status/283614473397342208
 June 29 outage: http://www.datacenterknowledge.com/archives/2012/07/03/multiple-generator-failures-caused-amazon-outage/
 July 2 incident report: http://aws.amazon.com/message/67457/
 Rich Miller: http://www.datacenterknowledge.com/archives/author/richm/
Copyright © 2012 Data Center Knowledge. All rights reserved.