The VMware area on the expo floor at VMworld 2019 (photo: Yevgeniy Sverdlik)

VMware Rethinks Load Balancing, Takes Dead Aim at NGINX

Integrates recently acquired Avi Networks’ microservices-appropriate delivery controller into NSX

If the microservices model is how applications are delivered in a distributed data center network going forward, the load balancer as we know it may be at risk of obsolescence.

Three and a half years ago, a little-known company called Avi Networks made the case that load balancers in typical enterprise networks were not in the right position to respond to the needs of microservices.

This new class of dynamic, containerized program would need to request infrastructure resources and receive them on demand. Avi introduced the term “delivery controller” as a substitute for “load balancer,” implying an infusion of intelligence, or at least logic, that the leading load balancer, NGINX, may lack.

Perhaps the highest form of flattery for a startup in today’s networking market is to be acquired by either Cisco or VMware. Avi’s turn came last June, when VMware acquired it and said the delivery controller would be integrated into the NSX network virtualization platform.

The integration didn’t take long. This week at VMworld in San Francisco, VMware introduced network engineers to a potential alternative to traditional network configuration management: a dynamic system that responds to network traffic patterns at a much deeper level than before.

VMware slide

Typical load balancing is facilitated by distributing pairs of appliances, virtual or physical — one in an active state, the other on standby — throughout the data center, Chandra Sekar, Avi’s former VP of marketing, said.

“If any one appliance reaches capacity, you are not able to do anything beyond upgrading that appliance, adding more capacity with more load balancing pairs, etc.,” Sekar said. “Because while you have all this capacity that is sitting out there, none of that is fungible — that you can actually easily use across the infrastructure.”

Each pair of load balancers is typically managed as a unit unto itself, he continued, making the maintenance chore both painful and underappreciated.  What’s more, when public cloud-based resources are tacked onto the enterprise data center, even though they exhibit a respectable degree of elasticity, whatever load management their service providers use is inconsistent with most any on-premises strategy the enterprise may have chosen.

VMware slide

For what’s now called VMware Advanced Load Balancer, the balancing (or delivery control) strategy is delegated to a centralized controller.  Borrowing the guiding principle from SDN, the balancing devices are reduced to “service engines” — minions that take their cue from the controller and reside on a data plane that has been separated from the control plane. These same engines may be distributed throughout the data center and in the public cloud, or anyplace where resources may potentially be claimed by active workloads.
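To make that division of labor concrete, here is a minimal conceptual sketch in Python. Every class and method name is hypothetical — nothing below comes from Avi’s or VMware’s actual software — but it illustrates the principle: the service engines on the data plane only enforce policy, while the central controller alone decides it.

```python
# Hypothetical sketch of the control-plane/data-plane split described above.
# Class and method names are invented for illustration only.

class ServiceEngine:
    """A data-plane minion: forwards traffic per the policy it was handed."""

    def __init__(self, name, location):
        self.name = name          # e.g. "se-dc1-01"
        self.location = location  # data center rack, public-cloud region, etc.
        self.policy = {}          # routing weights pushed down by the controller

    def apply_policy(self, policy):
        # The engine never decides placement itself; it only enforces
        # whatever the controller most recently pushed.
        self.policy = policy

    def route(self, request):
        # Pick the backend with the highest weight in the current policy.
        backend = max(self.policy, key=self.policy.get)
        return f"{self.name} -> {backend}: {request}"


class Controller:
    """The central control plane: owns the policy, pushes it to every engine."""

    def __init__(self):
        self.engines = []

    def register(self, engine):
        self.engines.append(engine)

    def push_policy(self, policy):
        for engine in self.engines:
            engine.apply_policy(policy)


controller = Controller()
for name, loc in [("se-dc1-01", "on-prem"), ("se-aws-01", "aws-us-east-1")]:
    controller.register(ServiceEngine(name, loc))

# One decision, made centrally, lands on every engine at once.
controller.push_policy({"app-v1": 0.2, "app-v2": 0.8})
print(controller.engines[0].route("GET /cart"))  # se-dc1-01 -> app-v2: GET /cart
```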

Each engine collects traffic telemetry from all of the active workloads it services, making it possible for the controller to generate a kind of performance profile for each workload.  Through these profiles, the controller can make broad decisions about workload distribution at a more holistic, oversight level.
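Continuing that sketch, the snippet below shows how telemetry streamed from the engines might roll up into per-workload profiles that drive a fleet-wide decision. The metric names and the latency threshold are illustrative assumptions, not Avi’s actual schema.

```python
# Hedged continuation of the sketch above: per-workload telemetry reported by
# the engines rolls up into coarse "performance profiles". Field names
# (latency_ms, rps) and the SLO value are invented for illustration.

from collections import defaultdict
from statistics import mean

class ProfilingController:
    def __init__(self):
        # workload -> list of telemetry samples from every service engine
        self.samples = defaultdict(list)

    def ingest(self, engine_name, workload, latency_ms, rps):
        # Every engine streams samples for the workloads it fronts.
        self.samples[workload].append({"engine": engine_name,
                                       "latency_ms": latency_ms,
                                       "rps": rps})

    def profile(self, workload):
        # Collapse raw samples into a coarse performance profile.
        s = self.samples[workload]
        return {"avg_latency_ms": mean(x["latency_ms"] for x in s),
                "total_rps": sum(x["rps"] for x in s)}

    def needs_rebalance(self, workload, latency_slo_ms=50):
        # A holistic decision the individual engines could not make alone:
        # if the fleet-wide profile breaches the SLO, shift or add capacity.
        return self.profile(workload)["avg_latency_ms"] > latency_slo_ms


ctl = ProfilingController()
ctl.ingest("se-dc1-01", "checkout", latency_ms=72, rps=400)
ctl.ingest("se-aws-01", "checkout", latency_ms=35, rps=900)
print(ctl.profile("checkout"))          # {'avg_latency_ms': 53.5, 'total_rps': 1300}
print(ctl.needs_rebalance("checkout"))  # True
```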

Though Avi may have pioneered this approach, it’s no longer fair to say the design is entirely unique. In September 2017, NGINX released its first Kubernetes Ingress Controller. This adds to the traditional load balancing architecture an external controller, deployed as a Kubernetes pod, that monitors traffic along virtual server routes. As issues arise, it can direct several NGINX agents throughout a Kubernetes cluster to make routing adjustments. Now a part of F5 Networks, NGINX released version 1.5.0 of the Ingress Controller last May.
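For readers unfamiliar with the Kubernetes side of this, the snippet below — assembled for illustration, not taken from NGINX documentation — shows the kind of Ingress routing rule such a controller watches and translates into NGINX configuration. It uses the official kubernetes Python client and the current networking.k8s.io/v1 form of the resource; the host and service names are made up.

```python
# Illustrative only: declare an Ingress routing rule for an NGINX Ingress
# Controller to pick up. Names ("demo-ingress", "shop-svc", the hostname)
# are hypothetical; a reachable cluster and valid kubeconfig are assumed.

from kubernetes import client, config

config.load_kube_config()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="demo-ingress"),
    spec=client.V1IngressSpec(
        ingress_class_name="nginx",  # hand this route to the NGINX controller
        rules=[client.V1IngressRule(
            host="shop.example.com",
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/", path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="shop-svc",
                            port=client.V1ServiceBackendPort(number=80))))]))]))

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```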

But Avi’s architecture — perhaps intentionally — enabled VMware to integrate its delivery controllers into the NSX product line without having to make alterations to the NSX-T architecture. NSX-T is VMware’s product SKU for NSX that incorporates support for containerization. As VMware announced Monday at VMworld 2019, its plans for the evolution of vSphere — its mainline enterprise virtualization platform — now include the incorporation of Kubernetes not only as the host for the vSphere environment itself, but as a nested, orchestrated environment co-existing alongside virtual machines.

The goal, VMware officials have said, is to bring the company and its mainline platform all the way around to a workload focus, enabling all classes of workloads across all infrastructures. If that’s so, it would probably mean NSX-T becomes, for all intents and purposes, NSX. And it would also mean the active agent formerly known as the Avi Networks Software Load Balancer would play a critical role in the evolved platform. Conceivably, the Avi controller could become the gathering point for all workload telemetry in vSphere, facilitating a dynamic infrastructure orchestration that would require much less pre-configuration and testing by IT operators.

In addition to the Advanced Load Balancer news Tuesday, VMware announced the inclusion of an analytics engine in the newly arriving 2.5 release of NSX-T. Called NSX Intelligence, it’s being described as a distributed analytics engine that captures telemetry for the complete traffic flow throughout a network without relying on intermittent samples.

It accomplishes this by integrating the analytics collection process directly into the hypervisor layer where NSX operates, Chris Wolf, one of VMware’s many emerging CTOs, said in a company blog post. From there, profile data on the current traffic flow may be dispatched to all hosts in a network. What VMware hopes to accomplish by this, wrote Wolf, is the implementation of microsegmentation — a more granular form of firewalling introduced four years ago by VMware — on a massive scale. This could conceivably give NSX hosts in the public cloud vital telemetry pertinent to the profiles of specific workloads running outside the cloud, so that they can adjust their routing and security policy enforcement accordingly.
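Stripped of product specifics, microsegmentation amounts to making allow/deny decisions per workload pair rather than per network perimeter. The toy sketch below illustrates that idea with an invented rule model; it does not reflect NSX’s actual policy schema or API.

```python
# Hedged sketch of microsegmentation as a concept: default-deny enforcement
# keyed on workload tags rather than a network perimeter. The rule model is
# invented for illustration and is not NSX's actual policy schema.

ALLOW_RULES = {
    # (source workload tag, destination workload tag, destination port)
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def flow_permitted(src_tag, dst_tag, dst_port):
    """Default-deny: only explicitly whitelisted workload-to-workload flows pass."""
    return (src_tag, dst_tag, dst_port) in ALLOW_RULES

# With fleet-wide telemetry (the role the article assigns to NSX Intelligence),
# every host can enforce the same rules locally, even for workloads it has
# never hosted, because it knows their tags and profiles.
print(flow_permitted("web-tier", "app-tier", 8443))  # True
print(flow_permitted("web-tier", "db-tier", 5432))   # False
```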

TAGS: VMware DevOps