Amazon Web Services launched its Simple Storage Service (S3) back in 2006. The company hit the 1 trillion object mark in June 2012. Just 10 months later, that figure has doubled: Amazon cloud evangelist Jeff Barr announced today on his blog that S3 now holds 2 trillion objects.
Earlier this month, Iain Gavin, director of Amazon Web Services for the UK and Ireland, said the service had hit 1.7 trillion objects and was peaking at 835,000 requests per second.
What’s driving this remarkable growth? AWS is an engine for startups and innovators across the web, and often serves (at least partially) as the backend for big-time applications like Dropbox. The world has embraced storing data in the cloud, and S3 is the biggest cloud storage service.
Barr tried to put this growth in perspective, as 2 trillion is a hard number to wrap your head around. Our galaxy is estimated to contain about 400 billion stars, Barr writes, which works out to five S3 objects for every star in the galaxy. The field of paleodemography estimates that 100 billion people have been born on planet Earth; that gives each of them 20 S3 objects. Our universe is about 13.6 billion years old. If you had added one S3 object every 60 hours starting at the Big Bang, you’d have accumulated almost two trillion of them by now.
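Barr's comparisons check out as back-of-envelope arithmetic. A quick sketch (using the rounded estimates quoted in the post, not precise astronomical or demographic figures):

```python
# Back-of-envelope check of Barr's comparisons for 2 trillion S3 objects.
# All inputs are the rounded estimates quoted in the post.

objects = 2_000_000_000_000          # 2 trillion S3 objects

stars_in_galaxy = 400_000_000_000    # ~400 billion stars in the Milky Way
print(objects / stars_in_galaxy)     # 5.0 objects per star

people_ever_born = 100_000_000_000   # paleodemography estimate: ~100 billion
print(objects / people_ever_born)    # 20.0 objects per person

# One object every 60 hours since the Big Bang (~13.6 billion years ago)
hours_since_big_bang = 13.6e9 * 365.25 * 24
print(hours_since_big_bang / 60)     # ~1.99e12, just under 2 trillion
```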
Amazon’s announcement serves as a revealing data point in documenting the demand for data center space. All that data needs a place to live. If Amazon’s storage cloud is doubling in 10 months, what impact will cloud applications have on data center requirements as other providers scale up their storage clouds? The numbers may appear to reside in the clouds, but the bits live in servers within physical data centers.