Istio v Linkerd: The Former May Be More Service Mesh Than You Need

Linkerd developer points to hidden overhead costs in Istio, suggesting that the most popular service mesh isn't right for everyone.

Christine Hall

June 2, 2021


If you followed service mesh content at this year's KubeCon and expected the service mesh panel to focus on Linkerd, only to find out that you were wrong, you could be forgiven. After all, KubeCon is a Cloud Native Computing Foundation event, and Linkerd is a CNCF project.

Linkerd was also the first service mesh, created by Oliver Gould and William Morgan and based on work they did at Twitter in the first half of last decade. The term "service mesh" originated from the project. (Gould and Morgan are now co-founders of Buoyant, a San Francisco-based Linkerd-focused startup.)

Morgan was the only Linkerd representative on the panel, which was dominated by Istio -- by far the most well-known service mesh -- and about twenty minutes in, it looked like he had had enough. The discussion turned to the complexity of service meshes, seen as a stumbling block to the technology's adoption.

"I think that's a very Istio-specific view," he said. "I'm sorry, but the reality is, the pushback that we see about the service mesh is that it's super complex, and I think that's largely due to Istio, and I'm sorry to say that in a room full of Istio people. It doesn't have to be this way."

Istio v Linkerd

Istio's complexity is common knowledge. That's largely because it's built to run on top of CNCF's Envoy, a proxy server that originated at Lyft and that itself has a reputation for being difficult to use.


Envoy's complexity, however, gives Istio capabilities that Linkerd lacks, most notably the ability to bring a variety of deployment types, such as virtual machines, containers, and even legacy software, under a single umbrella and control their traffic from a single control plane.

Linkerd, on the other hand, is currently confined to Kubernetes and container environments. Support for VMs and other deployment types is on the roadmap, but not with the tight integration Istio offers.

In an interview, Morgan told DCK that Istio is a good fit for enterprises with large IT staffs running highly complex infrastructure but overkill for smaller deployments based almost entirely on Kubernetes. That's both because of the extra resources it uses and the extra staff necessary to keep it running.

"There are companies that have very complex environments, and they're trying to do these very complex things where they have all these VMs and Kubernetes, and they're trying to tie it all together into this one thing. In that case, Linkerd is actually not going to be that helpful because we're so Kubernetes focused," he said.


[Figure: Linkerd vs. Istio latency benchmark. Source: Linkerd]

"However, if you are Kubernetes-centric and you have decided the Kubernetes lifestyle is the one that you're going to lead, and that's the operational model you want, then Linkerd will be a much better fit for you and will do it all in a way that introduces much less user-facing latency and consumes significantly fewer resources," he added.

"Most importantly, it's much simpler to operate, so the human beings who are involved in this process are not waking up at three am and having to debug some complicated thing."

Benchmarking Service Meshes

The Linkerd team now has some numbers to back up their claims about latency and resource use. They recently ran benchmarks comparing their software with Istio on the same tasks and found that Istio introduces 40 to 400 percent more latency than Linkerd. Istio's CPU use is also considerably higher: in the control plane (where the heavy lifting is done), Istio's memory use was eight times higher than Linkerd's, and Istio's maximum proxy CPU time was 88ms, compared to Linkerd's 10ms.

[Figure: Linkerd vs. Istio CPU usage benchmark. Source: Linkerd]

For organizations with large infrastructures handling heavy traffic, this overhead could be quite expensive, requiring more hardware than would otherwise be needed.

Envoy is an excellent general-purpose proxy that he often recommends as an ingress solution, Morgan said, but Linkerd didn't use it in part because of the overhead it creates.

"I think what happened with Istio is you have all the complexity of Envoy that is not abstracted over and a lot of that leaks up," he said. "If you look through the Istio open GitHub issues, a ton of them are talking about Envoy. What we're observing is that to become an expert in operating Istio you also have to be an expert in operating Envoy, and that's a lot to ask of people. That was kind of what we suspected was going to happen and why we didn't want to start with Envoy."

About the Author

Christine Hall

Freelance author

Christine Hall has been a journalist since 1971. In 2001 she began writing a weekly consumer computer column and began covering IT full time in 2002, focusing on Linux and open source software. Since 2010 she's published and edited the website FOSS Force. Follow her on Twitter: @BrideOfLinux.
