People stand in the lobby of Google's Washington, DC, headquarters in January 2015. (Photo by Mark Wilson/Getty Images)

Google Launches Cloud GPUs for Machine Learning

Google has rolled out the beta of a new cloud service that lets users rent Nvidia GPUs running in Google data centers for machine learning and other compute-heavy workloads.

While relatively few companies outside of a small group of web giants like Google itself use machine learning in production, there’s a lot of development work going on in the field, with computer scientists building and training machine learning algorithms and companies mulling various business cases for the technology. Training these systems requires a lot of computational horsepower, and for now companies have found that harnessing many GPUs working in parallel is the best way to get it.

The problem is that building, powering, and cooling a GPU cluster is far from trivial – not to mention expensive – which makes renting one an attractive option, especially at the experimental stage, where most companies’ machine learning efforts currently sit. It’s a business opportunity for cloud providers, who already have experience with this kind of infrastructure and the resources to offer it as a service.

Google’s biggest rivals in cloud infrastructure services, Amazon and Microsoft, launched cloud GPU services of their own earlier. Amazon Web Services has been offering its P2 cloud VM instances with Tesla K80 GPUs attached since last September, and Microsoft Azure launched its N-Series service, also powered by Tesla K80 chips, in December.

The same GPUs are now available from Google at 70 cents per GPU per hour in the US and 77 cents in Asia and Europe. Google’s pricing beats Amazon’s, whose most basic single-GPU P2 instance, hosted in the US, costs 90 cents per hour. Microsoft doesn’t offer per-hour pricing for its GPU-enabled VMs, charging instead $700 per month for the most basic N-Series configuration.
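For readers weighing the three offers, a back-of-the-envelope calculation using the hourly rates quoted above makes the gap concrete. This is only a sketch: the 730-hour month is our own assumption for a fully utilized instance, not a figure from any provider, and it ignores sustained-use discounts, storage, and networking charges.

```python
# Rough monthly cost of one continuously running GPU, using the hourly
# rates quoted in the article. HOURS_PER_MONTH is an assumption
# (365 * 24 / 12 = 730), not a provider-published figure.
HOURS_PER_MONTH = 730

hourly_rates = {
    "Google (US)": 0.70,
    "Google (Asia/Europe)": 0.77,
    "AWS P2 (US, single GPU)": 0.90,
}

monthly = {name: rate * HOURS_PER_MONTH for name, rate in hourly_rates.items()}

for name, cost in monthly.items():
    print(f"{name}: ${cost:,.2f}/month")
```

At full utilization this works out to roughly $511 per month for Google in the US and $657 for the basic AWS P2 instance, with Microsoft’s flat $700-per-month N-Series configuration the most expensive of the three; at lower utilization, the per-hour billing models pull further ahead of the flat rate.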

What type of infrastructure will dominate once the machine learning space matures is unclear at the moment. Dave Driggers, whose company Cirrascale also provides bare-metal GPU-powered servers for machine learning as a cloud service, told us in an earlier interview that he believes hybrid infrastructure, where companies use a mix of on-premise computing and cloud services, is most likely to become common.

But, as one of Cirrascale’s customers also told us, even GPUs themselves may at some point be replaced by a more elegant solution that requires less power.

Read more: This Data Center is Designed for Deep Learning


About the Author

San Francisco-based business and technology journalist. Editor in chief at Data Center Knowledge, covering the global data center industry.
