Containerized orchestration is no longer the “alternative” software infrastructure. In what may have been an accelerated product deployment, VMware today announced vSphere 7, the first version of its virtualized applications and services platform to embed Kubernetes in two separate, critical layers of its architecture: workload staging and execution. Kubernetes is the open source workload orchestrator that Google released in 2014 and that rose to prominence alongside the Docker container movement.
“With vSphere 7, what we’re doing is fundamentally modernizing vSphere itself,” Kit Colbert, VMware’s VP and CTO for cloud platforms, said during a recent press conference. “We’ve re-architected it to integrate with Kubernetes, so this is a pretty foundational change.”
The company expects to make vSphere 7 available starting in May.
Quest for Seamlessness
It was just last August when VMware announced Project Pacific at its VMworld conference, an effort to build a platform that would succeed vSphere’s present architecture. Its purpose would be to enable classic hypervisor-driven virtual machines and modern Docker-style containers to co-exist on the same infrastructure platform. This time, no changes or adaptations would have to be made to the containers, unlike the former vSphere Integrated Containers, which were “protective-coated” in a shell so that vSphere could treat them as VMs. Kubernetes also served as the container orchestrator in another previous VMware project, Photon, but in an environment that would run alongside vSphere rather than be merged with it.
The objective of vSphere 7 is to run all classes of virtualized workloads in an environment that understands their different resource management profiles. This will be done by running the vSphere management platform itself on a Kubernetes layer, but then running enterprise data center workloads on a separate Kubernetes layer managed by vSphere.
VMware CEO Pat Gelsinger said the idea of building an “industry consensus interface for applications” (Kubernetes) directly into vSphere while addressing the need to support existing environments was “powerful.” It’s a combination of the mature VM environment, which has “proven ecosystem support,” with “tomorrow, with Kubernetes, in a seamless way.” The combination makes the question of whether to rebuild an existing application to run in containers less urgent. “You do it when you have business value to doing it, and you don’t have to build new infrastructure to run some of those cool new microservices, containerized applications.”
Tomorrow Is Looking Awfully Close
Up until just three years ago, that “tomorrow” seemed far away. That’s when senior VMware executives suggested that while enterprises’ transition to Kubernetes was indeed happening, the complete migration of their applications to the new infrastructure was 10 to 12 years away.
What they may have noticed in the meantime, however, was that even if the final destination was well over the horizon, enterprises were already seeing the big fork in the road and taking it now. That “industry consensus” arrived very suddenly and very strongly.
“Most organizations don’t want to look at a world where they’re building a future that doesn’t have a bridge back into the existing environments that they’re living in,” said Craig McLuckie, one of Kubernetes’ original architects at Google who is now R&D VP for VMware's Modern Applications Platform business unit.
Another major milestone in Kubernetes’ development that VMware’s executives may not have foreseen at the time (even though their own employees now include two of the platform’s creators and many more of its contributors) is the advent of the Custom Resource Definition (CRD). When it was introduced, it was described as a means of integrating other elements of control, such as management of big data streams, into the orchestrator.
The implication wasn’t immediately obvious, but when it emerged, it was stark: Kubernetes could be adapted to orchestrate workloads other than Docker-style containers as “custom resources.” Once that became clear, VMware was among the first to exploit its many implications, chief among them that first-generation VMs could become orchestrated workloads.
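As a sketch of the mechanism, the CRD below registers a hypothetical `VirtualMachine` resource type with the Kubernetes API server. The `example.com` group and the `cpus`/`memoryMiB` fields are illustrative, not VMware’s actual definitions; the point is that once such a definition is applied, Kubernetes stores, serves, and reconciles objects of that kind through the same API machinery it uses for containers.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Name must be <plural>.<group>
  name: virtualmachines.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: virtualmachines
    singular: virtualmachine
    kind: VirtualMachine
    shortNames: [vm]
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cpus:
                  type: integer
                memoryMiB:
                  type: integer
```

A controller watching `VirtualMachine` objects then does the actual provisioning, which is how a VM becomes an orchestrated workload without being wrapped to look like a container.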
“We really need to think through what it means to blend these worlds together,” McLuckie said at the VMworld 2019 conference. “One of the nice things about Kubernetes first and foremost is it has a very progressive and robust API surface area. Why would we not want to bring a lot of those capabilities to vSphere in a way that enables organizations to start to think about mixing those containers and virtual machines together and scheduling them through a single, common interface into their vSphere infrastructure?”
This mixing of what VMware had historically treated as two distinct ecosystems, what Gelsinger has often called “magic,” is being perceived in the enterprise as something more pragmatic, like a shift from one operating system version to the next. Workloads, it turns out, have a new format. Attached to this new format is a system of built-in development and deployment wrapped in the halfway-appropriate moniker “cloud-native.” The idea is that new applications and services can be nurtured from the conceptual stage into the production environment on a kind of virtual conveyor belt whose deployment processes are all built in. This is what Red Hat OpenShift accomplished, and a huge reason IBM announced its acquisition of the company in October 2018.
Not to be outdone, VMware quickly took its own former spinoff company, Pivotal, under its wing — first by essentially claiming its PKS workload deployment platform for itself and finally acquiring Pivotal outright in a deal finalized in January. This deal has given rise to a new project within the company, led by McLuckie: Tanzu, which is now the official umbrella brand for the development and deployment mechanisms and performance monitoring services Pivotal built around Kubernetes. Those components will also appear as part of the suite that is today being re-dubbed VMware Cloud Foundation 4 with Tanzu.
McLuckie introduced new monitoring and oversight components, such as Tanzu Mission Control, as presenting a workload-centered view of the application lifecycle in progress, as applications are being built for the mixed infrastructure models that will eventually host them. Each workload has a kind of ROI metric, he explained, enabling users to compare the cost of maintaining older workloads to the cost of building and deploying new ones. “A consistent view across these two worlds,” he believes, may enable an operator to project those costs relative to one another well in advance.
Cloud Foundation 4 will be based on what VMware now calls Tanzu Kubernetes Grid, at the core of which is the company’s own Kubernetes distribution. It also contains the APIs necessary to provision additional clusters on AWS, merging Amazon’s infrastructure with enterprises’ own. The Kubernetes capabilities in vSphere 7 will only be available to users of Cloud Foundation 4 with Tanzu.
Separate But Equal?
As Tanzu becomes more prevalent, will it become feasible for enterprises to deploy workloads under development, workloads being tested, and production-ready workloads in the same clusters using tagging features already enabled in Kubernetes and the microsegmentation architecture VMware has already pioneered?
“The straight answer is, absolutely, yes,” responded Colbert. “We do see some customers, for instance, running Kubernetes on bare metal in their data centers. And as you might imagine, we see a lot of the same problems that we saw with traditional workloads running on bare metal, before the advent of computing virtualization: low utilization rates, a lot of ‘siloing,’ this sort of stuff.
“Our vision is that we want to have a single underlying infrastructure platform that can support all these different workloads in different environments: maybe one for dev/test, one for production, one for staging.”
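The tagging features the question refers to are Kubernetes labels, which can mark workloads by environment within a shared cluster. A minimal illustration, with all names hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
  namespace: dev-test          # one namespace per environment
  labels:
    env: dev-test              # environment tag, queryable cluster-wide
spec:
  replicas: 1
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
        env: dev-test
    spec:
      containers:
        - name: payments
          image: registry.example.com/payments:latest
```

An operator could then list every dev/test workload across the cluster with `kubectl get pods --all-namespaces -l env=dev-test`, which is the kind of single-pane view across environments that the vision describes.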
“As we work with enterprise organizations,” added McLuckie, “there’s an ideal state where they look at Kubernetes as being this data center operating system where you can stitch together a broad pool of resources and rely on Kubernetes to schedule across them. The challenge that most enterprise organizations encounter is when that ideal meets the security team, or when it meets the constraints of the network team.”
A bank’s production environment, for example, may impose some of the strictest restrictions on how workloads run, he said. Underdeveloped apps simply cannot share space with mature ones, for reasons deeply embedded in banks’ security controls.
“We could theoretically evolve Kubernetes to a point where it has sufficient microsegmentation at the networking layer to meet the rigorous needs of enterprise organizations,” McLuckie told Data Center Knowledge. “But frankly, as a community we’re just not there yet.”
Maybe this ideal will become an inspirational organizing principle at some future date, he said. But Kubernetes’ current flexibility is such that “siloing” and isolation, for security purposes, end up being actively encouraged.
“It’s unfortunate, because it does mean that you don’t necessarily get as much resource efficiency on physical infrastructure,” McLuckie said. “But the good news is that virtualization still solves those problems. Virtualization does give you the security and the resource isolation capabilities that Kubernetes can’t offer just yet. We’re working towards a vision where that is possible.”
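What Kubernetes does offer today at the network layer is the standard NetworkPolicy resource, which is coarser than per-workload microsegmentation but can wall off an environment. A minimal sketch, with illustrative names and labels:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-production
  namespace: production
spec:
  podSelector: {}              # applies to every pod in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              env: production  # only same-environment namespaces may connect
```

Enforcement depends on the cluster’s network plugin; a cluster without a policy-aware plugin silently ignores such rules, which illustrates the gap between Kubernetes’ current capabilities and the rigor enterprises require.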
Expected availability, according to VMware: VMware Tanzu Application Catalog, VMware Tanzu Kubernetes Grid, and VMware Tanzu Mission Control are all available today. VMware Cloud Foundation 4 and VMware vSphere 7 are both expected to become available by May 1.