SAN FRANCISCO – Developers may not be the only crowd that will benefit from the application container standardization effort announced at this week’s DockerCon here. It is bound to make life easier for data center managers and other IT staff who oversee the infrastructure where much of the code developers write will ultimately run.
That’s according to Alex Polvi, CEO of CoreOS, a startup with a version of Linux optimized for containers and massive-scale server clusters.
The promise of application containers is providing a fast and easy way to deploy code written on a developer’s laptop in production in a company’s data center or in the cloud. Another promise of containers, sometimes also referred to as “OS-level virtualization,” is much higher server utilization rates than even virtual machines can provide.
Progress toward widespread use of containers, however, had been in danger of slowing down because of a dispute over the attributes a standard container should have. That dispute now appears to be over, since all major players in the container ecosystem have gotten behind a vendor-neutral project to create a single container standard.
Given how fast Docker containers have grown in popularity over the two-plus years the company has been in existence, it is clear that in the near future many enterprise IT shops will see their developers start pushing containerized applications into production.
Because the container vendor ecosystem has now agreed to create a common set of basic building blocks, operations staff will not have to worry about whether their IT stack supports Docker, CoreOS, or another platform. They will simply have to make sure their stack supports the standard created by the Open Container Project, said Polvi, whose company started the standards dispute last year.
“We have unanimous industry alignment now,” he said. “Pretty much every single vendor at the table [is] saying this is the way that we see the future of infrastructure going.”
Besides CoreOS and Docker, that list of vendors includes names like Red Hat, Microsoft, VMware, Google, Amazon, HP, IBM, EMC, Cisco, and Mesosphere, among others. In other words, the IT establishment is fully behind OCP (not to be confused with Facebook’s Open Compute Project).
Because Docker has been the undisputed leader in the space, such a standardization effort may not immediately seem like a good business move for the company. If most customers use its technology, and that technology is based on a proprietary standard, it is a lot easier to compete.
But the container ecosystem still has a long way to go before it matures, and having an open standard at the core will spur that ecosystem to grow faster, which will only benefit Docker and others in it.
“Microsoft held on to IE (Internet Explorer) for as long as they could as a proprietary standard, because it’s in the best interest of their business. But it’s not in the best interest of the user,” Polvi said, using the software giant as an analogy.
“We just nipped that one in the bud as an ecosystem right now. We’re going to do this thing right upfront [and] not let anyone grab a hold of the whole market right away.”
The pieces of intellectual property Docker is donating to OCP, the base container format and runtime, represent only five percent of the company’s code base, Docker CEO Ben Golub wrote in a blog post, referring to the IP as low-level “plumbing.”
Vendors will “focus on innovation at the layers that matter, rather than wasting time fighting a low-level standards war,” he wrote. The project, under the auspices of the Linux Foundation, will define the container format and the runtime, and not the entire stack.
It is the tools in the layers above that basic plumbing that will make a real difference for users, who will not be forced to choose Docker or CoreOS and be stuck with it. “Instead, their choices can be guided by choosing the best damn tools to build the best damn applications they can,” Golub wrote.