Docker CEO Ben Golub had one of the prime speaking slots at the inaugural Rackspace Solve conference in San Francisco in July.

Docker CEO: Docker’s Impact on Data Center Industry Will Be Huge

Docker came on fast, unusually fast for the IT infrastructure world.

Launched as an open source project only about 18 months ago, the technology, which packages an application so that it is portable across different data center and cloud environments, now enjoys support from the likes of Google, IBM, Microsoft and Red Hat, among many others.

Docker the company officially announced itself in June, launching the first production-ready release of its software. Earlier this month, a report citing anonymous sources said the company was close to completing a funding round of $40 million to $75 million.

If its current momentum doesn’t lose steam, Docker and a handful of other like-minded startups are poised to overhaul the way developers build applications and the way IT infrastructure admins serve them.

We caught up with Docker CEO Ben Golub recently to hear his thoughts on the effect companies like his may have on the data center industry and on the role he sees Docker playing in the world of enterprise IT.

Data Center Knowledge: What impact do you expect Docker to have on the data center industry?

Ben Golub: I think there are going to be huge impacts across the data center industry. It’ll change how people think about virtualization; how they think about networking; how they think about storage. [It will] certainly drive significantly greater efficiencies. You can get 20x to 80x greater density using Docker than you could by making every application a full VM.

DCK: What should users consider hardware-wise when using Docker?

BG: Docker has been approved and tested on 64-bit architectures, so pretty much all you need is a 64-bit server running some sufficiently modern Linux kernel, and Docker will run. The hardware isn’t a restriction. There are people who are doing things with ARM chips and Docker, and there are people that are doing it with Power systems. We haven’t tested those, but we suspect that Docker architectures will expand just the same way that Docker operating systems will expand.
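Golub’s description of hardware support, with x86-64 tested and approved while ARM and Power remain community experiments, can be sketched as a small shell helper. The function name and the exact architecture strings are illustrative assumptions, not part of Docker’s own tooling:

```shell
#!/bin/sh
# Classify a machine architecture (as reported by `uname -m`) according to
# the support tiers Golub describes: x86-64 is tested and approved, while
# ARM and Power are community experiments. All names here are illustrative.
classify_arch() {
  case "$1" in
    x86_64|amd64)   echo "supported"    ;;  # tested and approved
    arm*|aarch64)   echo "experimental" ;;  # community ARM ports
    ppc64|ppc64le)  echo "experimental" ;;  # community Power ports
    *)              echo "untested"     ;;
  esac
}

# Check the machine this script runs on:
classify_arch "$(uname -m)"
```

A 64-bit server with a sufficiently modern Linux kernel is the only hard requirement; anything else falls into the experimental or untested tiers.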

DCK: What does support from big companies, such as Google, Microsoft and IBM, mean for Docker?

BG: Docker has, in a very short amount of time, 17 months since we launched the open source project, become mainstream. And I think everybody is recognizing that Docker will be a fundamental and disruptive force in how applications are built, shipped and run.

DCK: How is Docker different from other Linux container technologies, such as Red Hat’s?

BG: Linux containers are a low-level component. Until Docker came around, some people used containers, but their use was very much restricted to large organizations, like Google, that had specialized teams and training. Even then, the containers weren’t portable between different environments. With Docker we’ve made containers easy for everybody to use; we’ve made them portable between environments; we’ve made them exceptionally lightweight; and we’ve built a huge ecosystem around that.

DCK: Why did you go the open source route?

BG: We went open source because we thought that for Docker to succeed we wanted a huge ecosystem to grow up around us. We wanted Docker to work well with all the products that are above us in the stack, which includes a lot of open source tools like Chef and Puppet and Salt and Ansible, as well as everything below us. The Linux stack, OpenStack, every major cloud provider, etc. So being open was really the only option for us.

DCK: What makes Docker attractive for traditional enterprises?

BG: There are really two main use cases. One is improving the software development lifecycle. The other is making it much easier to scale and move across clouds in production.

It used to take weeks or even longer to go from the time a developer developed an application to the time it went through QA test, staging and production, and generally it would break multiple times along the way because of incompatibilities between different environments. With Docker now, you go to places like eBay or Gilt, and they’ll tell you that it takes minutes rather than weeks. The developer commits a change to source; the application is Dockerized automatically; that Docker container goes through whatever automated test system they want, and 90 percent of the time it goes directly to production. The 10 percent of the time it fails, it’s clear whether the problem is inside the container and the developer needs to fix something, or it’s outside the container and ops needs to fix something.
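The “Dockerized automatically” step in that pipeline typically revolves around a Dockerfile checked in alongside the source, so the same image moves unchanged from test to staging to production. A minimal sketch for a hypothetical Node.js service follows; the base image, file names and port are illustrative assumptions, not details from the interview:

```dockerfile
# Hypothetical Dockerfile for a small Node.js web service.
# Every name below is illustrative.
FROM ubuntu:14.04                        # base image providing the runtime environment
RUN apt-get update && apt-get install -y nodejs npm
COPY . /app                              # bundle the application source into the image
WORKDIR /app
RUN npm install                          # install dependencies at build time
EXPOSE 8080                              # port the service listens on
CMD ["nodejs", "server.js"]              # the process the container runs
```

A CI system would build and tag an image from this file on every commit, run the automated test suite in a container started from that image, and, on success, promote the very same image toward production.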

DCK: Are there applications for which using Docker doesn’t make sense?

BG: Right now we only support Linux applications. There’s a huge world of non-Linux applications for which it won’t make sense. We are [eventually] going to have non-Linux support. Not this year, but next year. It’s on the roadmap.

Often people look at Docker as a replacement for VMs, and Docker doesn’t do certain things that VMs do well, like let you take a Windows application and run it on a Linux box or vice versa. That’s totally not something you’d want to use Docker for today.

And Docker does not really support things where you need to freeze the state and live migrate. That’s coming over time, but what people are often finding is that with Docker it’s so fast and cheap to create and destroy containers that it can really change the way they think about state and about the way they build applications.

About the Author

San Francisco-based business and technology journalist. Editor in chief at Data Center Knowledge, covering the global data center industry.
