
Will CNCF's 'Service Mesh Interface' Help Consolidate the Service Mesh Market?

Service meshes have proliferated, and they aren't compatible. Will a Kubernetes equivalent emerge in this space?

As more and more applications are written (or refactored) and deployed in a cloud-native fashion, using containers and microservices, the concept of the service mesh has emerged to help admins and developers deal with the new complexity that accompanies the transition. Various service meshes have proliferated, and a big question now is whether the industry will standardize on a single service mesh platform the way it standardized on Kubernetes as the go-to container orchestration platform.

There are a dozen or so service meshes available today, all largely incompatible with each other. Choosing one is a daunting task, and it's neither easy nor cheap to switch from one to another. But many organizations are under pressure to adopt a service mesh at the beginning of any major deployment of a hybrid-cloud architecture. The technology is increasingly seen as necessary for large cloud-native deployments, since it helps control the traffic added by containers, VMs, and microservices, and it's easier to design into an infrastructure from the start than to add later.

The differences between them range from slight to vast. Istio, Consul, and Kuma, for example, are all based on Envoy, the proxy server originally developed at Lyft. They are feature-rich and can handle mixed workloads, but they are difficult to deploy and manage. Others, such as Linkerd and Maesh, are relatively easy to use but have more limited capabilities.

"I'm sure there's going to be some consolidation from the service meshes that are out there today, but I don't think it's going to be a thing like Kubernetes," William Morgan, a co-founder of Linkerd and CEO of the Linkerd-focused startup Buoyant, said during a service mesh-themed panel discussion for the press earlier this month, during KubeCon Europe. "I think the reality is Kubernetes was kind of an outlier. In the open source ecosystem, if you look more broadly, it's often the case that there's two, or sometimes three, projects that are kind of developing side by side. I think we're too far gone down the path to really think there's a serious world in which there is only one."

Lin Sun, director of open source at solo.io and a member of the Istio Technical Oversight Committee, disagreed. There can still be a dominant player in a field even when there are multiple players, she said, pointing out that Mesosphere and Docker Swarm are still being used in some data centers to perform essentially the same job as Kubernetes.

She also pointed to work underway on the Service Mesh Interface (SMI) and within Kubernetes to create standardized APIs, which could have the effect of thinning the field, or at least of making service meshes somewhat more interchangeable.

"What's coming in the industry that's really interesting is that there is the Service Mesh Interface, which attempts to standardize for service mesh," she said. "There is also the new gateway API emerging from Kubernetes. If Kubernetes is going to adopt gateway routes and API to allow you to configure some of the Layer 7 [application layer] service-mesh capabilities, and if that's going to be the dominant thing, I can see vendors come together on that API. If the SMI API is the winning API, maybe the vendors will come together on that route, so it's really interesting to see how these two APIs play out as far as winning the market goes."

"There's definitely got to be some consolidation," John Joyce, a principal engineer at Cisco, said. "We haven't seen it yet. There's actually more proliferation than a consolidation over the last year or so. That's a bit of a problem, but I agree with William, I don't see a convergence down to a single one.

"What Kubernetes did was really consolidate around an API in a lot of ways," he added. "They have the container runtime interface to CNI [Container Network Interface], so they've had a lot of implementation flexibility underneath the API. SMI was maybe an attempt to get there with service mesh, have a common API and let the implementations underneath be what they may be. I haven't quite seen that happening that way. As Lin said, it seems that there isn't enough consolidation around a SMI, but to me, it's almost first you have to consolidate around an API to then allow the implementations underneath to be flexible."

Chris Campbell, a cloud project architect at HP, mentioned Envoy, which became a "graduated" project in the Cloud Native Computing Foundation in November 2018, while also weighing in on SMI.

"I think the other reason why we see a lot of service meshes is because of Envoy's popularity and configurability," he said. "The effort needed is greatly reduced. I think that's why you saw maybe two years back a lot of implementations of service mesh come out all around the same time.

"I'm really excited about SMI. One of the things I constantly am thinking about, especially when adopting open source projects, is what is the cost if for some reason this project went away? The idea of adopting an API or a spec over an implementation is pretty powerful for that concern."

"Linkerd was one of the first in the SMI project," Morgan chimed in. "I think if you look at the GitHub repo, Linkerd people are still the number one contributors to SMI, but we're not seeing a lot of demand for it. I was excited about it personally early on, because it enabled things like Flagger. I was like, OK, there's going to be this giant tooling ecosystem, but I haven't seen a lot beyond that. Certainly from the end user perspective there hasn't been a lot of demand. It's like a checkbox on a feature matrix. In the absence of pressure from either direction, I don't know. It sounds like a good idea, but I'm not personally seeing a lot of pressure for us on Linkerd."

"It's interesting, because it feels like the right solution," Campbell answered. "Maybe we're not matured enough in the space to really drive those features, I don't know. I remember there being a big kerfuffle around Docker continually changing the image spec back when it was the de facto standard before OCI [Open Container Initiative] was eventually created. The feedback that Docker gave was that 'We're still iterating. We're still innovating here. We don't know what the right spec should be.' Specs are maybe more appropriate once you have a maturity and you start to cement stuff.

"It's certainly interesting," he added. "I guess we'll kind of see how that evolves."
