Chris Crosby is CEO of Compass Datacenters.
Have you ever been to a social gathering—a cocktail party, let’s say—where you didn’t know anyone and it seemed like everyone was talking about something you couldn’t understand? While you stood by that fern hoping that no one would come up and ask you about the evening’s main topic of chit-chat, didn’t you wish that you knew something about the subject so that you could at least nod knowingly and laugh at the right times? Of course you did.
Sometimes the world of data centers and the cloud can be like that. Just when you think you’ve got it all down, along comes something new that everyone but you is comfortable talking about, and suddenly you’re thinking that maybe a few ferns on the raised floor might liven up the place. In the world of data centers, the trendy cocktail topic is how Docker (and other containers) has assumed the role of Homo erectus to virtualization’s Neanderthal man. For the less paleontologically inclined among you, this means that containers are the next step in the server efficiency evolutionary process, and that virtualization will soon be coming to a museum near you. Like most discussions conducted over a few martinis, the truth lies somewhere in between.
For our purposes, let’s use Docker as our container representative. Docker is a container technology in which an application is housed in a file system containing everything it needs to run: code, runtime, system tools and system libraries; in short, anything that can be installed on a Linux or Windows server. When we compare containers like Docker to virtualization, it is important to note that both technologies have their own strengths and weaknesses, but in terms of implementation they are not necessarily mutually exclusive.
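To make the “everything it needs to run” idea concrete, a container image is typically described in a short build manifest. The sketch below is a minimal, hypothetical Dockerfile; the base image and file names are illustrative assumptions, not drawn from any particular deployment.

```dockerfile
# Illustrative, minimal Dockerfile (all names are hypothetical).
# The resulting image bundles the application's code, runtime and
# libraries, but no kernel: at run time the container shares the
# host's operating system kernel.
FROM python:3-slim
WORKDIR /app
# Copy the hypothetical application code into the image.
COPY app.py .
# Command executed when the container starts.
CMD ["python", "app.py"]
```

An image built from such a file (`docker build -t myapp .`) can then be started with `docker run myapp` on any host with a compatible kernel, which is what gives containers their portability.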
When we compare Docker to virtualization, the major points of differentiation are found in two areas: structure and purpose.
- Structure – In a virtual environment, each virtual machine requires a full operating system, and operations are controlled via the hypervisor layer. Due to these requirements, each virtual machine is burdened with processor-intensive overhead – the hypervisor tax. Docker eliminates the need for the hypervisor layer because all containers on a single machine share the same operating system kernel, reducing overhead and making more efficient use of RAM.
- Purpose – Whereas virtualization was developed primarily to increase the efficiency of hardware utilization and to provide server-to-OS neutrality, Docker, and containers in general, give data center or cloud operators a simplified way to create highly distributed systems, allowing multiple applications, worker tasks and other processes to run autonomously on a single physical server or across multiple virtual machines. This design enhances the portability of Docker in that it can run on multiple platforms, including within the data center, in public or private clouds, and on bare metal offerings.
Due to its lower overhead, Docker is less resource-intensive than a virtual machine, enabling its applications to “spin up” quickly – in milliseconds, versus minutes for a virtual machine. This speed differential and more efficient resource usage also manifest themselves in faster propagation of an application than its virtual machine counterpart, and in the ability of Docker (and other container-based systems) to support four to six times the number of application instances on a server. Deployed effectively, a container system provides the ability to run more applications on less hardware, resulting in substantial savings in the areas of servers and power.
So, will Docker ultimately lead to the extinction of virtualization within data centers? At the present time, the answer is probably not. Container solutions like Docker are not like-for-like replacements for virtual machines. Unlike virtual machines, all containers on a host must share the same underlying operating system kernel, a substantial limitation in a mixed environment; for example, at the present time you could not run both Windows and Linux applications on the same server. Virtual machines also offer a higher degree of isolation, and therefore security, than container alternatives, making containers potentially unsuitable for an organization’s more sensitive applications. It is therefore likely that we will see some manner of coexistence of containers and virtualization within data centers, with the implementation of each based on the specific organizational requirements for the application(s) to be supported.
Docker, and containers in general, have been deployed into various infrastructure platforms such as Google Cloud, AWS and Azure. This makes sense, as the need to quickly spin up applications is characteristic of public cloud users. These same capabilities are now beginning to enter the enterprise. However, the prevalence of virtualization schemes within existing data centers, coupled with their broad base of end user familiarity, ensures virtualization’s viability in the coming years.
Based on the strengths and weaknesses of both technologies, it is not unreasonable to assume that the use of virtualization moving forward will become more limited to specific requirements – security, for example – while the use of containers within the data center will continue to expand. End users will have to factor these considerations into their future data center planning, and the educated party guest, when asked for an opinion on the evolutionary future of both, will be able to take a sip of scotch and water and confidently answer, “It depends…”
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.