Internet giants Google, Microsoft, Amazon, and Facebook use Machine Learning to enhance their services for end users, powering features such as real-time search suggestions, face recognition in photos, voice commands, and cloud services for software developers. But they also apply Artificial Intelligence to optimize their internal operations: Google revealed in 2014 that it uses Machine Learning to improve the energy efficiency of its data centers, and Amazon’s use of AI to manage the warehouses behind its e-commerce business has been public knowledge since at least 2015.
So, it comes as no surprise that Amazon Web Services, the company’s cloud services arm, also applies Machine Learning to one of the toughest puzzles in data center management: capacity planning. AWS uses Machine Learning to forecast demand for cloud data center capacity and to decide where in the world to pre-position additional data center components, so it can expand capacity quickly when and where it’s needed.
AWS CEO Andy Jassy revealed the practice in front of an audience at this week’s Foundations of Science Breakfast, hosted by the Pacific Science Center, GeekWire reported. The company buys an enormous number of servers on a regular basis. GeekWire quotes Jassy:
“One of the least understood aspects of AWS is that it’s a giant logistics challenge, it’s a really hard business to operate.”
“Every single day we add enough new servers to have handled all of Amazon as a $7 billion global business.”
The report doesn’t provide much detail about what kinds of input data the company’s Machine Learning algorithm uses to forecast demand, but one of the primary data sources appears to be its cloud sales team. From the GeekWire report:
For example, it can pick up signals from the process its sales teams follow (enterprise sales cycles are notoriously long) to forecast demand. A lot of new customers like to start slow on AWS and then accelerate their usage as they see more benefits, Jassy said, which can lead to spikes in demand if they move faster than anticipated.
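The article doesn’t describe AWS’s actual model, but the idea it hints at, using a lagging sales-pipeline signal to forecast later capacity demand, can be sketched with a toy example. Everything below is hypothetical: the function names, the lag, and the data are invented for illustration, and the model is just a one-variable least-squares fit rather than anything AWS has disclosed.

```python
# Hypothetical sketch: forecast capacity demand from a leading sales-pipeline
# signal. Deals entering the pipeline today become usage some weeks later
# (enterprise sales cycles are long), so we regress demand on a lagged signal.
# All names and numbers are invented for illustration.

def ols_fit(xs, ys):
    """Ordinary least squares for y = a + b*x, in closed form."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def forecast_demand(pipeline, demand, lag=4):
    """Fit demand[t] ~ pipeline[t - lag] and forecast the next period.

    `pipeline` and `demand` are equal-length historical series; the lag
    models the delay between a signed deal and actual cloud usage.
    """
    xs = pipeline[:-lag]          # signals old enough to have "matured"
    ys = demand[lag:]             # the demand those signals led to
    a, b = ols_fit(xs, ys)
    return a + b * pipeline[-lag]  # most recent signal that is `lag` periods old

# Toy usage: weekly pipeline signal and observed demand (arbitrary units).
pipeline = [10, 12, 15, 14, 18, 20, 22, 25]
demand = [100, 104, 108, 110, 120, 130, 136, 150]
next_week = forecast_demand(pipeline, demand, lag=2)
```

The spikes Jassy mentions, where customers ramp up faster than anticipated, are exactly where a simple lagged model like this breaks down, which is presumably why a real system would combine many signals rather than one.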