What do Google’s open source cluster container management software Kubernetes and Pivotal’s open source Platform-as-a-Service software Cloud Foundry have in common? The answer is etcd, the open source distributed key-value store started and maintained by CoreOS, a San Francisco startup that earlier this month announced an $8 million Series A funding round from a group of Silicon Valley venture capital heavyweights.
As Blake Mizerany, head of the etcd project at CoreOS, explained in a blog post, cluster management across distributed systems is a complicated business. etcd makes it easier by creating a hub that keeps track of the state of each node in a cluster and manages those states. It replicates the state data across all nodes in the cluster, preventing a single node failure from bringing down the whole group.
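The replication idea can be sketched as a toy in-memory version: every write is copied to all nodes, so reads still succeed after any single node fails. This is a simplified illustration with invented class and method names, not etcd's actual design, which replicates writes through the Raft consensus protocol.

```python
# Toy illustration of replicated cluster state: every write is copied
# to all live nodes, so losing any single node does not lose the data.
# (Invented names; real etcd replicates entries via the Raft protocol.)

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}     # this node's copy of the cluster state
        self.alive = True

class Cluster:
    def __init__(self, names):
        self.nodes = [Node(n) for n in names]

    def put(self, key, value):
        for node in self.nodes:
            if node.alive:
                node.store[key] = value

    def get(self, key):
        for node in self.nodes:
            if node.alive:         # any surviving node can answer
                return node.store.get(key)
        raise RuntimeError("no nodes available")

cluster = Cluster(["n1", "n2", "n3"])
cluster.put("/services/web", "10.0.0.5:8080")
cluster.nodes[0].alive = False           # one node fails...
print(cluster.get("/services/web"))      # -> 10.0.0.5:8080
```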
Getting clustered servers to agree
In an interview, CoreOS CEO Alex Polvi said etcd was an implementation of Chubby, a software tool Google designed to manage a key property of every distributed system: consistency. For five servers to make a decision as a cluster, they need to agree about the current state of whatever they are deciding on. In the world of distributed computing, this is called consensus, and Chubby uses a consensus algorithm called Paxos to manage it in a cluster of servers. This consensus is key to the resiliency of distributed systems.
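A toy example shows why consensus protocols lean on majorities: with five servers, a value accepted by at least three of them survives the failure of any two, because any new majority must overlap the old one. The names below are invented for illustration, and real Paxos and Raft handle far more (proposals, terms, competing leaders) than this sketch does.

```python
# Toy majority-quorum illustration for a five-server cluster.
# A value "chosen" by a majority (3 of 5) survives any two failures,
# because every majority of servers overlaps every other majority.
# (Invented names; real Paxos/Raft are considerably more involved.)

SERVERS = 5
MAJORITY = SERVERS // 2 + 1   # 3 of 5

def chosen(accepted_by):
    """A value is decided once a majority of servers accept it."""
    return len(accepted_by) >= MAJORITY

accepted = {0, 1, 2}          # servers 0, 1 and 2 accept the value
print(chosen(accepted))       # -> True

# Even if servers 1 and 2 then fail, any majority of the survivors,
# e.g. {0, 3, 4}, still contains a server that remembers the value.
survivors_majority = {0, 3, 4}
print(bool(accepted & survivors_majority))   # -> True
```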
Google published a paper describing Chubby in 2006, which inspired Doozer, a highly available data store Mizerany wrote together with his former colleague Keith Rarick in 2011, when both were working at Heroku, the Platform-as-a-Service company that by then was already owned by Salesforce. Doozer became the inspiration for etcd, Mizerany wrote. Both are written in Go, but one big difference is that Doozer uses Paxos, while etcd’s consensus protocol is Raft, which keeps an identical log of state-changing commands on every node in a cluster.
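The log-replication idea behind Raft can be sketched roughly like this: a leader appends each state-changing command to its log and ships it to the followers, and replaying identical logs in identical order leaves every node with identical state. This is a simplified sketch with invented names; real Raft also handles leader election, terms, and commit indices.

```python
# Rough sketch of Raft-style log replication: the leader appends every
# state-changing command to its log and replicates it to followers.
# Replaying the same log in the same order yields the same state.
# (Invented names; real Raft adds elections, terms and commit indices.)

class RaftishNode:
    def __init__(self):
        self.log = []      # ordered list of state-changing commands
        self.state = {}    # key-value state machine built from the log

    def append(self, command):
        self.log.append(command)
        key, value = command
        self.state[key] = value

leader = RaftishNode()
followers = [RaftishNode(), RaftishNode()]

for command in [("x", 1), ("y", 2), ("x", 3)]:
    leader.append(command)
    for f in followers:            # leader replicates each entry
        f.append(command)

# Every node holds the same log, hence the same state.
print(all(f.log == leader.log for f in followers))   # -> True
print(leader.state)                                  # -> {'x': 3, 'y': 2}
```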
Kubernetes, the Docker container manager Google open sourced in June, is a lighter version of its in-house system called Omega and relies on etcd for cluster management. “To run Kubernetes, you have to run etcd,” Polvi said. Everything CoreOS is building has been inspired by the way Google runs its data center infrastructure, so “we’re excited to see them build on top of one of our tools,” he said.
CoreOS, the company’s main product, is a server operating system designed for companies that want to run their data centers the way web giants, such as Google, Amazon or Facebook, run theirs. Its target customers are companies that operate data centers at Google’s scale but, unlike the web giants, don’t design and build everything inside those data centers themselves. The only customer whose name CoreOS has disclosed so far is Atlassian, the Australian company best known for creating JIRA, one of the top software tools used by project managers.
As Polvi puts it, Kubernetes is a step toward the “operational utopia we’ve all been dreaming of for a long time.” That utopia is being able to treat a massive data center as a single operating system. It is too early to say whether Kubernetes will become the de facto standard management tool for doing that, but the style of infrastructure operations it represents is where things are going, he said. “It could be it. I think the market wants one. I don’t think it wants 20.”
Others in the industry want to be involved in Kubernetes
A group of IT infrastructure heavyweights joined Google’s open source project exactly one month after it was announced, a sign that some level of standardization around Kubernetes is coming. IBM, Red Hat and Microsoft all pledged to contribute to the project, as did a group of startups, including CoreOS, Docker, Mesosphere and SaltStack.
Microsoft wants to make sure Kubernetes works on Linux VMs spun up in its Azure cloud. IBM, looking out for its primary customer base, wants to make sure Docker containers are digestible by enterprises.
Matt Hicks, director of OpenShift engineering at Red Hat, said the software company was interested in Kubernetes because it was interested in having a common model for describing how applications packaged in Linux containers are built and interconnected. “How you orchestrate and how you combine multiple containers to create a useful application is useful technology for us,” he said.
Besides the company’s obvious interest in Kubernetes because of its Enterprise Linux and Fedora Linux operating systems, application containers have long been a core technology underlying OpenShift, Red Hat’s popular PaaS product.
From common framework to full automation
When it announced the arrival of new members to the open source Kubernetes community, Google said the goal was to make sure Kubernetes becomes an open container management framework for any application in any environment. This means the community will have to reach consensus on what attributes that common management framework should have.
Hicks said the framework would have to address the way multi-container applications and dependencies between the underlying containers are described. Another component would be defining how the application’s containers are placed across what could be thousands of servers, so they come together cohesively. Since containers can run on shared resources, the framework would also have to address how security is handled.
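The placement problem Hicks describes can be illustrated with a naive first-fit sketch: map each container of a multi-container app onto the first server with enough free capacity. The function and data names here are invented, and real schedulers also weigh affinity between containers, security boundaries, and spreading across failure domains.

```python
# Naive first-fit placement sketch: assign each container of a
# multi-container application to the first server with enough free CPU.
# (Invented names; real schedulers also consider affinity, security
# isolation and spreading containers across failure domains.)

def place(containers, servers):
    """containers: {name: cpu_needed}; servers: {name: cpu_free}."""
    placement = {}
    free = dict(servers)                  # don't mutate the caller's dict
    for name, cpu in containers.items():
        for server, capacity in free.items():
            if capacity >= cpu:
                placement[name] = server
                free[server] -= cpu
                break
        else:
            raise RuntimeError(f"no room for {name}")
    return placement

app = {"web": 2, "cache": 1, "db": 3}     # CPU units each container needs
fleet = {"server-a": 4, "server-b": 4}    # free CPU units per server
print(place(app, fleet))
# -> {'web': 'server-a', 'cache': 'server-a', 'db': 'server-b'}
```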
Polvi said that, essentially, you’ll want to be able to describe your goals to the system and let it figure out the best way to achieve them. With systems like Amazon Web Services or OpenStack-based clouds, you have to specify which server or which database to spin up or down, and when. With Kubernetes, you will ideally be able to tell the system that your app needs a database, three servers, a certain amount of storage and so on, and “go make it so and guarantee that it’s so,” he said.
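The “describe your goals, let the system make it so” idea can be sketched as a reconciliation loop: the operator states the desired outcome, and the system repeatedly compares desired state to actual state and closes the gap. The names below are invented for illustration; Kubernetes controllers follow this general pattern, though with far more machinery.

```python
# Sketch of a declarative reconciliation loop: the operator declares the
# desired outcome ("three servers for my app"); the system repeatedly
# compares desired vs. actual state and corrects the difference itself.
# (Invented names; Kubernetes controllers follow this general pattern.)

desired = {"app-servers": 3}
actual = {"app-servers": 1}

def reconcile(desired, actual):
    for resource, want in desired.items():
        have = actual.get(resource, 0)
        if have < want:
            actual[resource] = have + 1   # start one more instance
        elif have > want:
            actual[resource] = have - 1   # stop one instance

while actual != desired:    # "go make it so and guarantee that it's so"
    reconcile(desired, actual)

print(actual)   # -> {'app-servers': 3}
```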