
While developers have clearly thrived with containers and the Docker format over the past 10 years, it’s been a decade of DIY and trial and error for platform engineering teams tasked with building and operating Kubernetes infrastructure.

In the earliest days of containers, there was a three-way cage match between Docker Swarm, CoreOS and Apache Mesos (famous for killing the “Fail Whale” at Twitter) to see which would claim the throne for orchestrating containerized workloads across cloud and on-premises clusters. Then the secrets of Google’s home-grown Borg system were revealed, soon followed by the release of Kubernetes (Borg for the rest of us!), which quickly attracted the community interest and industry support it needed to pull away as the de facto container orchestration technology.

So much so, in fact, that I’ve argued that Kubernetes is like a “cloud native operating system” — the new “enterprise Linux,” as it were.

But is it really? For all the power that Kubernetes provides in cluster resource management, platform engineering teams remain mired in the hardest challenges of how cloud-native applications communicate with each other and share common networking, security and resilience features. In short, there’s a lot more to enterprise Kubernetes than container orchestration.

Namespaces, sidecars and service mesh

As platform teams evolve their cloud-native application infrastructure, they are constantly layering on capabilities like emitting new metrics, tracing requests and adding security checks. Kubernetes namespaces keep application development teams from treading on each other’s toes, which is incredibly useful. But over time, platform teams found they were writing the same code for every application, leading them to put that code in a shared library.


Then a new model called sidecars emerged. Rather than having to build these libraries into every application, platform teams could run that functionality in a separate container alongside the application. Service mesh implementations like Istio and Linkerd use the sidecar model so that they can access the network namespace for each instance of an application container in a pod. This allows the service mesh to modify network traffic on the application’s behalf — for example, to add mTLS to a connection — or to direct packets to specific instances of a service.
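To make the pattern concrete, here is a minimal sketch of what a pod with a sidecar looks like, expressed in Go using the Kubernetes API types. The image names and ports are hypothetical, and in practice meshes like Istio inject the proxy automatically rather than asking you to declare it by hand. The key point is that the two containers share the pod’s network namespace, which is what lets the proxy intercept and modify the application’s traffic.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A pod with two containers: the application and a proxy sidecar.
	// Both containers share the pod's network namespace, which is what
	// lets the proxy modify the app's traffic (for example, adding mTLS).
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "orders", Labels: map[string]string{"app": "orders"}},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:  "app",
					Image: "example.com/orders:1.0", // hypothetical application image
					Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
				},
				{
					Name:  "proxy-sidecar",
					Image: "example.com/mesh-proxy:1.0", // hypothetical mesh proxy image
					Ports: []corev1.ContainerPort{{ContainerPort: 15001}},
				},
			},
		},
	}

	// Print the manifest that would be applied to the cluster.
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Multiply that second container by every pod in the cluster and the cost of the model becomes clear.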

But deploying sidecars into every pod uses additional resources, and platform operators complain about the operational complexity. It also lengthens the path for every network packet, adding latency and slowing down application responsiveness, leading Google’s Kelsey Hightower to bemoan our “service mess.”

Nearly 10 years into this cloud-native, containers-plus-Kubernetes journey, we find ourselves at a bit of a crossroads over where the abstractions should live and what the right architecture is for the networking, security and resilience features that cloud-native applications share across the network. Containers themselves were born out of cgroups and namespaces in the Linux kernel, and the sidecar model allows networking, security and observability tooling to share the same cgroups and namespaces as the application containers in a Kubernetes pod.

To date, it’s been a prescriptive approach. Platform teams had to adopt the sidecar model, because there weren’t any other good options for tooling to get access to or modify the behavior of application workloads.

An evolution back to the kernel

But what if the kernel itself could run the service mesh natively, just as it already runs the TCP/IP stack? What if the data path could be freed of sidecar latency in cases where low latency really matters, like financial services and trading platforms carrying millions of concurrent transactions, and other common enterprise use cases? What if Kubernetes platform engineers could get the benefits of service mesh features without having to learn about new abstractions?

These were the inspirations that led Isovalent CTO and co-founder Thomas Graf to create Cilium Service Mesh, a major new open source entrant into the service mesh category. Isovalent announced Cilium Service Mesh’s general availability today. Where webscalers Google and Lyft were the driving forces behind the sidecar-based Istio service mesh and the de facto standard Envoy proxy, respectively, Cilium Service Mesh hails from Linux kernel maintainers and contributors in the enterprise networking world. It turns out this may matter quite a bit.

The Cilium Service Mesh launch has its origins in eBPF, a framework that has been taking the Linux kernel world by storm by allowing users to load and run sandboxed programs within the kernel of the operating system. Created by kernel maintainers who recognized eBPF’s potential for cloud-native networking, Cilium — a CNCF project — is now the default data plane for Google Kubernetes Engine, Amazon EKS Anywhere and Alibaba Cloud.
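To get a feel for how low-level this is, here is a minimal sketch that loads a trivial eBPF program into the kernel using the open source cilium/ebpf Go library. The program simply returns 0 and is purely illustrative; Cilium’s real programs attach to kernel networking hooks to do load balancing, policy enforcement and observability.

package main

import (
	"fmt"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/asm"
	"github.com/cilium/ebpf/rlimit"
)

func main() {
	// On older kernels, lift the locked-memory limit so BPF objects can load.
	if err := rlimit.RemoveMemlock(); err != nil {
		panic(err)
	}

	// A trivial eBPF program: return 0 for every packet it sees.
	// Real-world programs, like Cilium's, implement load balancing,
	// policy enforcement and observability at this layer.
	spec := &ebpf.ProgramSpec{
		Type: ebpf.SocketFilter,
		Instructions: asm.Instructions{
			asm.Mov.Imm(asm.R0, 0), // set the program's return value to 0
			asm.Return(),
		},
		License: "GPL",
	}

	// Loading runs the kernel verifier and returns a handle to the program;
	// this typically requires root or CAP_BPF.
	prog, err := ebpf.NewProgram(spec)
	if err != nil {
		panic(err)
	}
	defer prog.Close()

	fmt.Println("loaded eBPF program into the kernel:", prog)
}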

Cilium uses eBPF to extend the kernel’s networking capabilities to be cloud native, with awareness of Kubernetes identities and a much more efficient data path. For years, Cilium, acting as a Kubernetes Container Network Interface (CNI) plugin, has had many of the components of a service mesh, such as load balancing, observability and encryption. If Kubernetes is the distributed operating system, Cilium is the distributed networking layer of that operating system. It is not a huge leap to extend Cilium to support a full range of service mesh features.
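That identity awareness shows up in how policies are written. The sketch below builds a Cilium network policy as a generic Kubernetes object in Go, following the documented CiliumNetworkPolicy format; the frontend and backend labels are hypothetical. The policy selects workloads by label rather than by IP address, and Cilium’s eBPF data path enforces it in the kernel.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

func main() {
	// A network policy expressed in terms of Kubernetes identities (pod labels)
	// rather than IP addresses. The app labels are hypothetical.
	policy := unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "cilium.io/v2",
		"kind":       "CiliumNetworkPolicy",
		"metadata":   map[string]interface{}{"name": "allow-frontend-to-backend"},
		"spec": map[string]interface{}{
			// Which workloads the policy applies to.
			"endpointSelector": map[string]interface{}{
				"matchLabels": map[string]interface{}{"app": "backend"},
			},
			// Only pods labeled app=frontend may connect in.
			"ingress": []interface{}{
				map[string]interface{}{
					"fromEndpoints": []interface{}{
						map[string]interface{}{
							"matchLabels": map[string]interface{}{"app": "frontend"},
						},
					},
				},
			},
		},
	}}

	// Print the manifest that would be applied to the cluster.
	out, err := yaml.Marshal(policy.Object)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}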

Graf put it this way in a blog post:

With this first stable release of Cilium Service Mesh, users now have the choice to run a service mesh with sidecars or without them. When to best use which model depends on various factors including overhead, resource management, failure domain and security considerations. In fact, the trade-offs are quite similar to virtual machines and containers. VMs provide stricter isolation. Containers are lighter, able to share resources and offer fair distribution of the available resources. Because of this, containers typically increase deployment density, with the trade-off of additional security and resource management challenges. With Cilium Service Mesh, you have both options available in your platform and can even run a mix of the two.

The future of cloud-native infrastructure is eBPF

As one of the maintainers of the Cilium project — whose contributors include Datadog, F5, Form3, Google, Isovalent, Microsoft, Seznam.cz and The New York Times — Isovalent’s chief open source officer, Liz Rice, sees this shift of putting cloud instrumentation directly in the kernel as a game-changer for platform engineers.

“When we put instrumentation within the kernel using eBPF, we can see and control everything that is happening on that virtual machine, so we don’t have to make any changes to application workloads or how they are configured,” said Rice. “From a cloud-native perspective that makes things so much easier to secure and manage and so much more resource efficient. In the old world, you’d have to instrument every application individually, either with common libraries or with sidecar containers.”

The wave of virtualization innovation that redefined the data center in the 2000s was largely guided by a single vendor platform, VMware.

Cloud-native infrastructure is a much more fragmented vendor landscape. But Isovalent’s bona fides in eBPF make it a hugely interesting company to watch as key networking and security abstractions make their way back into the kernel. As the original creators of Cilium, Isovalent’s team includes Linux kernel maintainers, and its lead investor is Andreessen Horowitz’s Martin Casado, who is well known as the creator of Nicira, the defining network virtualization platform.

After a decade of virtualization ruling enterprise infrastructure, then a decade of containers and Kubernetes, we seem to be on the cusp of another big wave of innovation. Interestingly, this next wave might be taking us right back into the power of the Linux kernel.

Disclosure: I work for MongoDB but the views expressed herein are mine.
