
Why You Still Need Virtualization with Kubernetes

The seemingly binary decision between less infrastructure and more infrastructure is a false choice.

Kit Colbert

June 3, 2021

6 Min Read


Kit Colbert is CTO of VMware Cloud

When you deploy Kubernetes and containers, it may seem as though you can get by without virtualization. After all, the stripped-down route, often called “bare metal” — which is really Linux without virtualization — appears to promise less complexity and overhead.

The issue may seem binary: Less infrastructure or more infrastructure—which do you want? But that’s a false choice: It’s about whether you get better infrastructure out of the box or do it yourself.

Bare-metal Linux has the allure of engineering from a clean slate. Besides, Kubernetes and containerization already include the basic properties of virtualization. With process isolation, application packaging, and abstraction all built in, what’s there to worry about? Why not take a clean-slate approach?

It’s an almost philosophical point, but the very simplicity invites bigger complications. The bare-metal Linux option means wading through difficulties that the virtualization layer has already solved.

Most IT organizations’ goal is to deploy and run critical applications, not to deploy containers and Kubernetes. Yet organizations that opt for bare-metal Linux may stall at this first step: they end up deploying, managing, scaling, and operating something that is a step or two removed from their end goal.

Meanwhile, the pressure to deliver new features faster, safely, and more securely is on. The goal is to differentiate products faster and better than the competition. The better infrastructure route is the one that supports speed to market. Kubernetes isn’t a differentiator. It is a means, not an end.

Organizations with on-prem bare-metal deployments (if they can get over that first hurdle of deploying Kubernetes) still face low utilization rates and management difficulties. Availability issues and other unexpected problems often crop up. Their container deployments may suffer “noisy neighbor” problems or operational immaturity, and they end up putting tremendous effort into something that won’t make them more competitive.

Few of us live in a tech wonderland with all-new environments. Most of us have a variety of technologies in our stack, and not all of them are new. While some of the tech may be fresh and impressive, a fair amount likely needs to be replaced. We are part of an ongoing improvement process, which means some aspects of the stack will be old, earmarked for replacement. Virtualization is a reality-based toolset that lets you best manage through these changes.

Cloud-based businesses also have reasons to appreciate virtualization. They usually get a Kubernetes service from their cloud provider, which takes a great deal of complexity out of the mix. But they still face challenges, such as managing across diverse Kubernetes environments. For them, the questions become: How can we make sure our applications are running reliably? How can we interpret the metrics we’re receiving?

Resiliency, Security, Performance, and Cost

In every case virtualization simplifies the process of getting Kubernetes underway. It keeps your attention on what matters: application modernization and speed to market.

With the better infrastructure model you can deliver everything you need to run enterprise-grade Kubernetes in production. You don’t have to worry about the undifferentiated heavy lifting. You can focus on creating business value—which is why we are all at our desks in the first place.

Virtualization offers four distinct advantages over other options: resiliency, security, performance, and lower overall cost. Here’s how:


Kubernetes control plane issues can be operationally challenging in a bare-metal Linux environment. But virtualization can:

  • Restart a failed or problematic Kubernetes node before Kubernetes itself even detects a problem.

  • Provide Kubernetes control plane availability by using mature heartbeat and partition-detection mechanisms that monitor servers, Kubernetes VMs, and network connectivity, enabling quick recovery.

  • Prevent service disruption and performance impacts through proactive failure detection, live migration, automatic load balancing, automatic restarts after infrastructure failures, and highly available storage.
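The heartbeat mechanism described above can be illustrated with a small, vendor-neutral sketch. This is a toy model in Python, not any hypervisor's actual implementation; the `HeartbeatMonitor` class, its timeout, and the node names are assumptions made purely for illustration:

```python
class HeartbeatMonitor:
    """Toy heartbeat tracker: flags Kubernetes node VMs whose heartbeats
    have gone stale so the infrastructure layer can restart them, ideally
    before Kubernetes itself marks them NotReady (illustrative sketch)."""

    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_seen = {}  # node name -> timestamp of last heartbeat

    def heartbeat(self, node, now):
        """Record a heartbeat from `node` at time `now` (seconds)."""
        self.last_seen[node] = now

    def failed_nodes(self, now):
        """Nodes presumed failed or partitioned: heartbeat older than timeout."""
        return sorted(n for n, t in self.last_seen.items()
                      if now - t > self.timeout_s)

# Example: node-b stops heartbeating and is flagged for restart.
mon = HeartbeatMonitor(timeout_s=5.0)
mon.heartbeat("node-a", now=0.0)
mon.heartbeat("node-b", now=0.0)
mon.heartbeat("node-a", now=6.0)   # node-a stays fresh
print(mon.failed_nodes(now=6.1))   # ['node-b']
```

Real implementations also distinguish a dead host from a mere network partition (for example, via secondary heartbeat channels such as shared storage), which this sketch deliberately omits.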


While some vendors assert containers are just as secure as virtualization, it’s wise to look at the major cloud providers, who isolate tenant workloads using separate VMs or physical hosts, never by container alone. Google has consistently stated that containers are not security boundaries and that only highly trusted code should run in containers on the same VM or physical host. Virtualization is security-minded. It provides:

  • True multi-tenant isolation with a reduced fault domain and hardware-level, customer-determined isolation at the Kubernetes cluster, namespace, and even pod level.

  • A smaller blast radius, so a problem in one cluster affects only the pods in that small cluster, not the broader environment. These smaller clusters also let each developer or environment have their own cluster and install their own CRDs or operators without adversely affecting other teams.
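The blast-radius point is simple arithmetic. The sketch below uses illustrative numbers (not measurements from any real deployment) to show why splitting a workload across many small clusters shrinks the share of pods taken out by a single cluster-wide failure:

```python
def blast_radius(total_pods, num_clusters):
    """Fraction of pods lost when one cluster fails, assuming pods
    are spread evenly across clusters (illustrative model only)."""
    pods_per_cluster = total_pods / num_clusters
    return pods_per_cluster / total_pods  # simplifies to 1 / num_clusters

# One big shared cluster: a cluster-wide fault takes down everything.
print(blast_radius(1200, 1))    # 1.0

# Twelve small clusters: the same fault affects about 8% of pods.
print(blast_radius(1200, 12))   # roughly 0.083
```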


Given that virtualization is an additional layer, it may be surprising that it delivers excellent performance, at times exceeding bare-metal Linux, through resource management and NUMA optimizations. Various technologies can:

  • Enable direct access to underlying direct-attached storage hardware

  • Balance efficiency and performance for any cluster workload, thus reducing resource waste and contention
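The NUMA point can be made concrete with a toy placement function: keeping a workload's CPUs and memory on one NUMA node avoids slow cross-node memory access. This is an illustrative sketch with made-up capacities, not how any real hypervisor scheduler works:

```python
def place(workload, numa_nodes):
    """Return the index of the first NUMA node with room for the whole
    workload, or None if it must span nodes and pay the remote-memory
    penalty (toy first-fit policy; real schedulers weigh far more)."""
    for i, node in enumerate(numa_nodes):
        if (node["free_cpus"] >= workload["cpus"]
                and node["free_mem_gb"] >= workload["mem_gb"]):
            return i
    return None

# Hypothetical host with two NUMA nodes of different remaining capacity.
nodes = [{"free_cpus": 2, "free_mem_gb": 8},
         {"free_cpus": 8, "free_mem_gb": 64}]

# A 4-vCPU, 32 GB workload fits entirely on node 1.
print(place({"cpus": 4, "mem_gb": 32}, nodes))  # 1
```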


One of the principal arguments for “less” infrastructure is the lower overall cost. But virtualization infrastructure can deliver:

  • CapEx savings from higher resource utilization

  • OpEx savings due to simpler management and reduced power and cooling costs

  • Out-of-the-box savings: Virtualization handles much of the functionality that can drive up the cost of implementing a custom solution, such as deployment, patching, and monitoring.
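The CapEx claim above comes down to utilization arithmetic. With assumed numbers (chosen for illustration, not vendor data), the same total demand needs far fewer servers when each one can safely be driven harder:

```python
import math

def servers_needed(total_demand_cores, cores_per_server, utilization):
    """Servers required when each can be safely run at `utilization`
    (a 0-1 fraction of its capacity). Illustrative arithmetic only."""
    usable_cores = cores_per_server * utilization
    return math.ceil(total_demand_cores / usable_cores)

demand, per_server = 1000, 64
low = servers_needed(demand, per_server, 0.25)   # e.g. bare metal at 25%
high = servers_needed(demand, per_server, 0.70)  # e.g. virtualized at 70%
print(low, high)  # 63 23
```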

In essence, you have two choices: running Kubernetes with these issues handled for you, or doing it all yourself. Virtualization offers deep integration with Kubernetes. It can support traditional and modern apps side by side with security built in. It reduces your total cost of ownership. It offers a better developer experience.

Virtualization has worked through the kinks. Thousands of engineers have perfected it. Development cost is spread across thousands of customers, so each customer pays far less. Moreover, because you are tapping into an extensive partner ecosystem, you can bring in partners to solve your problems. Bare-metal Linux without virtualization means you bear the development cost yourself (minus a bit, because you can customize it to your solution). You’re on your own, fighting a battle that doesn’t make you more competitive in the marketplace.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating.
