
A Cloud-Native World Pushed Service Meshes Forward in 2020

The service mesh debuted in 2017, but it really broke big in 2020 due to the increased complexity of the cloud-native world.

In 2020, one of the most talked- and written-about emerging technologies was the service mesh. Interest in the technology is on the rise because it promises to ease workloads for DevOps teams working with large hybrid cloud infrastructures. These infrastructures are becoming increasingly complex as the "cloud-native" concept expands beyond containers running monolithic applications to include technologies such as microservices that spread workloads across a multi-cloud infrastructure.

This sprawling complexity introduces multiple new issues that traditional networks aren't designed to handle. With the addition of microservices, for example, services now have to find and connect to other services that were previously part of a single application, and when a service fails to connect on the first attempt, there needs to be a way to handle retry attempts without bringing the whole system down.
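In practice, meshes usually handle retries declaratively rather than in application code. As a rough sketch, this is what a retry policy might look like using Istio's VirtualService API (the `reviews` service name here is hypothetical, and the exact fields may vary by mesh and version):

```yaml
# Hypothetical Istio VirtualService: retry failed calls to the
# "reviews" service up to 3 times, 2 seconds per attempt, but only
# on connection failures and 5xx responses -- so a transient error
# is absorbed by the mesh instead of cascading through the system.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: connect-failure,5xx
```

Because the policy lives in the mesh configuration, it can be tuned or removed without redeploying the services themselves.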

Service meshes solve these issues by acting as something of a traffic cop that sits between the established network and the application to control the additional traffic. In addition, a service mesh can provide load balancing capabilities, metrics for discovering performance bottlenecks or the cause of latency issues, security to help prevent man-in-the-middle attacks, access control, and the like.
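Those capabilities are also typically expressed as configuration rather than code. As a hedged sketch, assuming Istio's DestinationRule API (again with a hypothetical `reviews` service), load balancing and transport security might be declared together like this:

```yaml
# Hypothetical Istio DestinationRule: route to the least-loaded
# backend and require mesh-managed mutual TLS, which encrypts and
# authenticates service-to-service traffic to help block
# man-in-the-middle attacks.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
    tls:
      mode: ISTIO_MUTUAL
```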

If this sounds great, you still might want to wait a bit before ordering the IT folks to go all in on a service mesh. Even though service meshes are already being used in production, they're still very much a work in progress.

"If you look at the progress of service mesh right now in the ecosystem, there's not a lot of maturity there," said Idit Levine, CEO and founder of cloud-native software developer Solo.io, in an online panel discussion on service mesh at this year's KubeCon event. "Mainly what I see is a lot of marketing around it [and] less execution."

Although each of the other three members on the panel expressed similar sentiments, this assertion might be a little misleading about the overall state of service meshes.

That's because the discussion focused almost entirely on Istio, an open source project started by Google and IBM. Istio has extended the service mesh's usefulness beyond containerized workloads to include virtual machines and applications running on bare metal, which, in turn, has made it more complex than some other service meshes. This is ironic, because Istio was originally intended to reduce the complexity of working with Envoy, a Cloud Native Computing Foundation project started by Lyft that serves as the foundation for many service mesh projects.

"Two years ago, people were talking about multicloud but they weren't doing it," said Dan Berg of IBM, who works on IBM Cloud Kubernetes Service and Istio, during the same panel discussion. "Now they're trying to do it and struggling quite a lot. I believe service meshes actually fits that space, but they're complicated too because they're dealing with complicated problems."

"In some cases, some of our projects maybe matured or moved quickly to add features but, at some level, I think we need to simplify," he added. "The problems are complex. We don't need complex solutions for those, we need simple solutions to complex problems."

Even though Istio has been garnering the most attention, its underlying Envoy technology is the elephant in the room. Many, if not most, service meshes have adopted it as their base, in much the same way that Kubernetes was adopted as the container orchestration standard shortly after the container revolution took hold. Service meshes based on Envoy include CNCF's Kuma, AWS App Mesh, Red Hat's OpenShift Service Mesh (which is based on Istio) and HashiCorp's Consul.

The developers at Istio have been working on these ease-of-use issues, with the latest version streamlining both the deployment of extensions and the installation process. 

There are also service meshes, generally built for speed and ease of use, that aren't tethered to Envoy. One is Maesh, which not only doesn't require Envoy but has replaced sidecars (a key ingredient in most service meshes) with simpler proxy endpoints. Another is Linkerd, a CNCF project that has eschewed both Envoy and the ability to incorporate deployment platforms other than containers into the mesh.

In a way, Linkerd can be seen as the mother of all service meshes.

"Linkerd was actually the very first service mesh project, the one that coined the term," William Morgan, Linkerd's co-founder, told ITPro Today. "It is known today for being the lightest, fastest, and, most importantly, the simplest service mesh. [It was built] with the design principle that a service mesh doesn't have to be complex, it actually can be something that is operationally simple."

Morgan pointed to two things that set Linkerd apart from the majority of service meshes.

"Because it doesn't use Envoy, we have this dedicated micro proxy written in Rust, and that is the key to us being much faster, lighter and simpler than all the other service meshes," he said. "The other thing that we focus on is operational simplicity. We're actually trying to do the bare minimum for you. We want to give you the ability to build a secure, reliable, Kubernetes-based application with the bare minimum to get there, rather than give you every possible feature in every possible combination."

Looking forward to 2021, the use of service meshes in a cloud-native world will continue to grow, with Istio likely remaining the one to beat, not only because it has Google and IBM's backing, but because it contains the full feature set that the enterprise requires.
