Inside a Google data center (Photo: Google)

Google Brings Containers to its Cloud With Hosted Kubernetes

Google made several big announcements and cut prices once again for its cloud during its Google Cloud Platform Live event.

Containers and Docker were heralded by several Google execs as the current revolution in cloud. The company announced Google Container Engine, a fully hosted version of Kubernetes, its container management system. Kubernetes is an open source technology for orchestrating fleets of containers.

A slew of announcements were made including:

  • Google Cloud Interconnect, which will allow customers to hook into Google cloud via VPN. Carrier Interconnect is the direct-link option for carriers and data centers, which can then provide a dedicated, secure connection into Google cloud for their customers.
  • Canonical’s Ubuntu is now available on Google’s Cloud Platform for the first time. Ubuntu was the last major Linux distro not available on GCP.
  • The company also rolled out Compute Engine auto-scaling into wide release. It lets customers grow or shrink a fleet of virtual machines in response to demand, based on metrics they set.
  • The company also added local SSDs to Compute Engine, joining the range of network storage options currently available. These target classes of applications with large I/O requirements, such as a Cassandra or SQL cluster. Local SSD is available with any machine type, in one to four 375 GB partitions. With four disks, it gives customers 680,000 read and 660,000 write operations. The cost is 28 cents per gigabyte per month.
  • Cloud Debugger is now publicly available in beta. This modern take on debugging in the cloud was first previewed in June. It allows debugging on instances serving in production, across any number of instances.
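The metric-driven scaling model in the auto-scaling item above can be sketched in a few lines. This is a hedged illustration of proportional target-utilization scaling, not Google's published algorithm; the function name, limits, and rule are invented for the example:

```python
import math

def recommend_size(current_size, current_utilization, target_utilization,
                   min_size=1, max_size=100):
    """Illustrative proportional autoscaler: pick the fleet size that
    would bring average utilization back to the target level."""
    if current_utilization <= 0:
        return min_size
    desired = math.ceil(current_size * current_utilization / target_utilization)
    return max(min_size, min(max_size, desired))

# A fleet of 4 VMs running at 90% CPU against a 60% target grows to 6 VMs.
print(recommend_size(4, 0.90, 0.60))   # 6
# A fleet of 10 VMs at 15% CPU shrinks toward the minimum.
print(recommend_size(10, 0.15, 0.60))  # 3
```

Rounding up with `ceil` biases the decision toward slightly over-provisioning, which is generally safer than undershooting capacity during a spike.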

Container revolution

Containers make developing and deploying applications on cloud easy. They package an application and its dependencies together in a single unit, which means not having to worry about the configurations and nuances of individual platforms. They also make for a quick development cycle, as they’re fast to spin up and tear down.

Managing containers at scale can be complicated, which is where Kubernetes comes in. Kubernetes provides an API that makes it easy to deploy a fleet of containerized applications across the cloud. It was originally developed by Google and open sourced in June. Google Container Engine is a hosted, formalized version of it for Google’s cloud.
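The declarative model is the key idea: you tell Kubernetes how many replicas of a containerized application you want, and it keeps that many running. A rough sketch of such a declaration, built as a plain Python dict ready to serialize to JSON; the controller name, labels, and image are hypothetical, and the field layout is an illustration of the early 2014-era API rather than an authoritative schema:

```python
# Sketch of a Kubernetes replication controller declaration, expressed
# as the Python dict you would serialize to JSON/YAML. All names and
# the container image are hypothetical placeholders.
replication_controller = {
    "kind": "ReplicationController",
    "id": "web-controller",
    "desiredState": {
        "replicas": 3,                       # keep three pods running
        "replicaSelector": {"name": "web"},  # which pods this controller owns
        "podTemplate": {
            "desiredState": {
                "manifest": {
                    "id": "web",
                    "containers": [{
                        "name": "web",
                        "image": "example/web:latest",   # hypothetical image
                        "ports": [{"containerPort": 80}],
                    }],
                }
            },
            "labels": {"name": "web"},
        },
    },
}

print(replication_controller["desiredState"]["replicas"])  # 3
```

If a pod dies, the controller notices the replica count has dropped below the declared target and starts a replacement; the operator never has to place containers on machines by hand.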

“We want to change what our users are able to do, not just where they’re doing it,” said Vice President Brian Stevens. “Just as we’re getting our head around public cloud comes the next disruption: containers. The reason why it’s gotten so popular is that even in its early stages it’s delivered great benefits.”

“A data center is not a collection of computers. A data center is a computer,” said Greg DeMichillie, director of product management at Google Cloud. “And we think containers are the technology that will make this possible.”

The new hosted Kubernetes option is an alternative for those who don’t want to work in the open source project. It makes Google cloud and containers easier to manage. Google Container Engine is now in alpha, but is open to everyone immediately. “We want your help in guiding and shaping this product,” he said.

Containers have been used in the Platform-as-a-Service (PaaS) App Engine since day one, but their versatility was limited by the nature of PaaS. “There’s a drawback with PaaS: you have to color inside the lines,” said DeMichillie. “We knew that was a problem and set out to solve it with Managed VM.”

Managed VM lets customers use whatever libraries and open source frameworks they want on App Engine. Because any library can run, one of the larger limitations of PaaS, being confined to certain setups, goes away. Customers get the complete range of virtual compute with Managed VM.

More price drops for GCP

Google has also cut cloud prices once again. Following a 10 percent cut for Compute Engine last month, the company announced another 10 percent cut today.

Sizable cuts were made to several other services:

  • 23 percent drop for BigQuery
  • 79 percent drop for Persistent Disk Snapshots
  • 48 percent drop for Persistent SSD
  • 25 percent drop for large Cloud SQL instances.

Other cloud progress, customer announcements

Google has made three acquisitions since May in support of its cloud:

  • Stackdriver became the backbone of its monitoring system
  • Zync is a new rendering service for the movie and entertainment vertical
  • Firebase is going to be the centerpiece of the mobile developer offering

Google noted strong partner and customer momentum. Amazon Web Services’ ecosystem of third-party cloud management and enhancement platforms was considered a competitive advantage when Google first launched, but Google’s own ecosystem of partners and consultants is growing dramatically, closing the gap.

Several customers were named, with Office Depot, Wix, and Atomic Fiction highlighted, presenting a wide swath of use cases.

  • Office Depot runs its “My Print Center” offering on Google cloud. The service lets customers order print jobs from their computer for pick up at any store. It uses App Engine and Google storage, cutting the time to execute an order by 40 percent.
  • Wix hosts its Wix editor on App Engine and uses Google Cloud storage to store static media files. It now serves production media traffic from Compute Engine. Wix sees 11 million files uploaded per day, manages 600TB and its users resize 8.6 million images per day.
  • Atomic Fiction handles the visual effects for big blockbusters. The work includes rendering images, a normally cost- and time-intensive process.

Atomic Fiction presented its use of Google’s cloud. The company couldn’t afford to build its own giant data center and instead focused on building cloud tools, said founder Kevin Baillie.

One of those tools is an effects rendering tool called Conductor, which the company is making available in 2015.

Rendering is a compute-intensive job. To demonstrate how much easier cloud makes the process, the company rendered a single frame in real time by splitting it into 700 chunks spread across a fleet of instances. Seven hundred and fifty instances and 12,000 cores rendered the frame in minutes, versus the hours and hardware expense it would take in a company-owned data center. The cost was the same to render for minutes on many machines as for one machine processing over hours. Cross-site interoperability means the rendered frame is available to several dispersed teams, said Baillie.
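The cost equivalence described above follows directly from per-minute pricing: total spend depends only on the machine-minutes consumed, not on how they are spread across machines. A minimal sketch, using a made-up rate:

```python
# Illustrative per-minute rendering cost. The rate is a hypothetical
# number, not Google's actual pricing; the point is that cost scales
# with machine-minutes, so parallelism buys speed for free.
RATE_PER_MACHINE_MINUTE = 0.01  # hypothetical $/machine-minute

def render_cost(machines, minutes_each):
    """Total cost = machines x minutes x rate."""
    return machines * minutes_each * RATE_PER_MACHINE_MINUTE

serial = render_cost(machines=1, minutes_each=700)
parallel = render_cost(machines=700, minutes_each=1)
assert serial == parallel  # same spend, answer arrives ~700x sooner
print(serial)
```

Under hourly billing this symmetry breaks, because each of the 700 machines would be rounded up to a full billable hour.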

Google and its customers both touted per-minute pricing versus the industry standard of rounding up to the hour. Atomic Fiction said the difference between hourly and per-minute billing equated to 10 percent savings for long time frames and nearly 40 percent savings for short time frames.
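The savings from per-minute billing versus rounding up to whole hours can be checked with simple arithmetic. The rate and runtimes below are hypothetical, chosen only to illustrate why short jobs benefit most:

```python
import math

HOURLY_RATE = 1.00  # hypothetical $/machine-hour

def hourly_billed(minutes):
    """Hourly billing rounds each machine's runtime up to a whole hour."""
    return math.ceil(minutes / 60) * HOURLY_RATE

def per_minute_billed(minutes):
    """Per-minute billing charges for exactly the time used."""
    return (minutes / 60) * HOURLY_RATE

for minutes in (75, 615):
    hourly = hourly_billed(minutes)
    per_min = per_minute_billed(minutes)
    savings = 1 - per_min / hourly
    print(f"{minutes} min: hourly ${hourly:.2f}, "
          f"per-minute ${per_min:.2f}, savings {savings:.0%}")
```

A 75-minute job is billed as two full hours under hourly pricing but only 1.25 hours per-minute, a saving of roughly 38 percent; a 615-minute job saves only about 7 percent, because the rounded-up remainder is a small fraction of the total. That matches the pattern in the figures quoted above.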



About the Author

Jason Verge is an Editor/Industry Analyst on the Data Center Knowledge team with a strong background in the data center and Web hosting industries. In the past he’s covered all things Internet Infrastructure, including cloud (IaaS, PaaS and SaaS), mass market hosting, managed hosting, enterprise IT spending trends and M&A. He writes about a range of topics at DCK, with an emphasis on cloud hosting.
