Google Cloud Platform Introduces 96-CPU Machines

The company claims the new VMs make available the most Skylake vCPUs of any cloud provider.

Christine Hall

October 6, 2017

Google data center in Council Bluffs, Iowa
An overhead view of the server infrastructure in Google’s data center in Council Bluffs, Iowa. (Photo: Connie Zhou for Google)

If you're using, or even thinking about using, the public cloud, Google Cloud Platform definitely wants your business. It's been doing everything from beefing up its already extensive network of data centers to introducing tiered pricing to get you on board. And this week it introduced Compute Engine machine types with up to 96 vCPUs and 624GB of memory for compute- and memory-hungry applications.

You don't need me to tell you that's a lot of power.

The machines are powered by Intel's Skylake Xeon Scalable processors, and GCP claims they make available the most vCPUs of any cloud provider on that chipset. Skylake, of course, is no slouch. It provides up to 20 percent faster compute performance, 82 percent faster HPC performance, and almost twice the memory bandwidth compared with the previous generation of Xeons.

Why is GCP doing this? Again, it wants your business, and being first to market with the best specs is a good way of getting it. And since Google designs its own products, both hardware and software, it can do it.

These virtual machines are available in three predefined machine types:

  • Standard: 96 vCPUs and 360 GB of memory.

  • High-CPU: 96 vCPUs and 86.4 GB of memory.

  • High-Memory: 96 vCPUs and 624 GB of memory.

GCP says users can also use custom machine types and extended memory, with up to 96 vCPUs and 624 GB of memory, to create exactly the machine shape they need, avoiding wasted resources and paying only for what's being used.
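As a rough sketch of what a custom machine shape looks like in practice, the gcloud tool lets you specify vCPU count and memory directly instead of picking a predefined type. The instance name and zone below are placeholders, not from the announcement:

```shell
# Hypothetical example: request a custom shape of 96 vCPUs and 624 GB,
# using extended memory to exceed the standard memory-per-vCPU ratio.
gcloud compute instances create custom-vm-example \
    --zone us-central1-b \
    --custom-cpu 96 \
    --custom-memory 624GB \
    --custom-extensions
```

Without `--custom-extensions`, gcloud rejects memory amounts above the standard ratio for the requested vCPU count.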


"The new 624GB Skylake instances are certified for SAP HANA scale-up deployments," Hanan Youssef and Scott Van Woudenberg, product managers with Google Compute Engine, said in a blog announcement. "And if you want to run even larger HANA analytical workloads, scale-out configurations of up to 9.75TB of memory with 16 n1-highmem-96 nodes are also now certified for data warehouses running BW/4HANA."

The 96-core machines are currently available in beta in four GCP regions: Central US, West US, West Europe, and East Asia. They can be created either from the GCP Console or by using the gcloud command-line tool.
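For those who prefer the command line, launching one of the predefined 96-vCPU types is a one-liner. This is an illustrative sketch, not from the announcement; the instance name and zone are placeholders, and `n1-highmem-96` is the high-memory type quoted above:

```shell
# Hypothetical example: create a beta 96-vCPU, 624 GB high-memory instance,
# pinning the CPU platform to Skylake.
gcloud compute instances create highmem-vm-example \
    --zone us-central1-b \
    --machine-type n1-highmem-96 \
    --min-cpu-platform "Intel Skylake"
```

The `--min-cpu-platform` flag matters here because only Skylake hosts can serve the 96-vCPU shapes.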

If you think 96 cores is a lot, it's not the be-all and end-all. Youssef and Van Woudenberg said the company is working on even larger VMs with up to 4TB of memory.

Amazon, it's your move.


About the Author(s)

Christine Hall

Freelance author

Christine Hall has been a journalist since 1971. In 2001 she began writing a weekly consumer computer column and began covering IT full time in 2002, focusing on Linux and open source software. Since 2010 she's published and edited the website FOSS Force. Follow her on Twitter: @BrideOfLinux.
