Amazon’s Dynamo and Massive Scalability


Amazon CTO Werner Vogels yesterday released a paper on Dynamo, a technology Amazon created to engineer its infrastructure for reliability at massive scale. “Dynamo is internal technology developed at Amazon to address the need for an incrementally scalable, highly-available key-value storage system,” Vogels says. “The technology is designed to give its users the ability to trade-off cost, consistency, durability and performance, while maintaining high-availability.” He emphasizes that Amazon has no plans to offer Dynamo as part of its utility computing platform, which currently includes S3 and EC2. The paper on Dynamo is pretty technical, but is generating interest around the web. Here’s a sampling of the reaction:
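The "trade-off" Vogels mentions shows up in the paper as tunable quorum parameters: N replicas per key, with W acknowledgements required for a write and R responses for a read, chosen so that R + W > N. A minimal sketch of that idea (a toy illustration, not Amazon's code — the class and parameter names here are hypothetical):

```python
# Toy key-value store with Dynamo-style tunable quorums.
# N = replicas per key, W = write quorum, R = read quorum.
class TinyQuorumStore:
    def __init__(self, n=3, r=2, w=2):
        # The paper requires R + W > N so read and write quorums overlap.
        assert r + w > n, "quorums must overlap"
        self.n, self.r, self.w = n, r, w
        # One dict per replica: key -> (version, value)
        self.replicas = [{} for _ in range(n)]

    def put(self, key, value):
        # Bump the version past anything any replica has seen.
        version = 1 + max(
            (rep[key][0] for rep in self.replicas if key in rep), default=0
        )
        # A write succeeds once W replicas acknowledge (here: the first W).
        for rep in self.replicas[: self.w]:
            rep[key] = (version, value)

    def get(self, key):
        # Read from R replicas and return the highest-versioned value.
        answers = [rep[key] for rep in self.replicas[: self.r] if key in rep]
        return max(answers)[1] if answers else None
```

Raising W buys durability at the cost of write latency; lowering R speeds up reads at the cost of consistency — the knobs Vogels is referring to. (Dynamo itself also layers on consistent hashing, vector clocks, and hinted handoff, which this sketch omits.)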

  • Nicholas Carr says Dynamo “will be of great interest to other engineers engaged in building the massive and massively reliable data-processing systems that will define the future of computing.”
  • Jesse Robbins at O’Reilly Radar says the paper is an “excellent read for anyone thinking about scalable web sites … The operational challenges and solutions presented in the paper are particularly interesting.”
  • Larry Dignan at ZDNet: “Amazon’s paper details how storage technology is critical to managing SOA.”

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.