What is a service mesh?
A service mesh is a network layer that handles communication between microservices — routing, load balancing, encryption, and observability — by deploying sidecar proxies alongside each service instance.
How does it work?
In a service mesh, every microservice gets a sidecar proxy — a small process that intercepts all incoming and outgoing network traffic. Instead of Service A connecting directly to Service B, Service A connects to its local sidecar, which handles TLS encryption, retries, timeouts, load balancing, and metrics collection, then forwards the request to Service B's sidecar.
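The sidecar's role can be modeled in a few lines. This is a simplified sketch, not any real mesh's API: the `Sidecar` class and its `forward` method are invented names, and the upstream is a plain callable standing in for Service B's sidecar. It shows the core idea that retries, timeouts, and metrics live in the proxy rather than in application code.

```python
import time

class Sidecar:
    """Toy model of a sidecar proxy: the app calls localhost, and the
    sidecar adds retries, a timeout check, and metrics before handing
    the request to the upstream. Illustrative names, not a real API."""

    def __init__(self, upstream, retries=3, timeout_s=1.0):
        self.upstream = upstream          # stands in for Service B's sidecar
        self.retries = retries
        self.timeout_s = timeout_s
        self.metrics = {"requests": 0, "retries": 0, "failures": 0}

    def forward(self, request):
        self.metrics["requests"] += 1
        for _ in range(self.retries):
            start = time.monotonic()
            try:
                response = self.upstream(request)
                if time.monotonic() - start > self.timeout_s:
                    raise TimeoutError("upstream too slow")
                return response
            except Exception:
                self.metrics["retries"] += 1  # counted even on the final failed try
        self.metrics["failures"] += 1
        raise RuntimeError(f"giving up after {self.retries} attempts")
```

A transient upstream failure is absorbed by the sidecar: the caller sees one successful request, while the retry shows up only in the sidecar's metrics.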
A control plane manages all the sidecars, pushing configuration updates like routing rules, access policies, and certificate rotations. The application code doesn't need to know about any of this — it just makes plain HTTP or gRPC calls to localhost.
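The push model can be sketched as a control plane holding a versioned config and fanning updates out to registered sidecars. The class and method names here (`ControlPlane`, `SidecarAgent`, `apply`) are invented for illustration; real implementations use a streaming protocol such as Envoy's xDS rather than in-process calls.

```python
class ControlPlane:
    """Toy control plane: owns the desired config (routing rules,
    policies) and pushes versioned snapshots to every sidecar."""

    def __init__(self):
        self.version = 0
        self.config = {}
        self.sidecars = []

    def register(self, sidecar):
        self.sidecars.append(sidecar)
        sidecar.apply(self.version, dict(self.config))  # initial sync

    def update(self, **changes):
        self.version += 1
        self.config.update(changes)
        for sc in self.sidecars:          # push to all, no polling
            sc.apply(self.version, dict(self.config))

class SidecarAgent:
    """Config-receiving side of a sidecar; ignores stale pushes."""

    def __init__(self):
        self.version = -1
        self.config = {}

    def apply(self, version, config):
        if version > self.version:
            self.version, self.config = version, config
```

A single `update(route="v2")` call reaches every registered sidecar with the same version number, which is what lets the mesh change routing or rotate certificates without touching application code.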
What are the main implementations?
- Istio — the most widely deployed service mesh, using Envoy as the sidecar proxy.
- Linkerd — a lighter-weight alternative whose data-plane proxy is written in Rust, focused on simplicity.
- Consul Connect — HashiCorp's mesh with built-in service discovery.
Why is it declining?
Service meshes add a sidecar proxy to every pod, which means:
- Memory overhead — each sidecar consumes 50-100 MB of RAM, multiplied by hundreds of pods.
- Latency — every request passes through two extra proxy hops (one on each side).
- Operational complexity — the control plane is another system to manage, upgrade, and debug.
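The memory figure above compounds quickly. A back-of-envelope calculation, assuming a hypothetical 500-pod cluster and the 50-100 MB per-sidecar range:

```python
# Sidecar memory cost at cluster scale; the pod count is an assumed
# example, the per-sidecar range comes from the figures above.
pods = 500
low_mb, high_mb = 50, 100

total_low_gib = pods * low_mb / 1024
total_high_gib = pods * high_mb / 1024
print(f"{total_low_gib:.1f} - {total_high_gib:.1f} GiB")  # roughly 24 - 49 GiB
```

That is tens of gigabytes of RAM spent purely on proxies, before any application workload runs.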
eBPF-based alternatives like Cilium aim to replace sidecar proxies by implementing mesh features directly in the kernel. This eliminates the per-pod proxy hops and memory overhead while preserving encryption, observability, and policy enforcement, though some layer-7 features still rely on a shared proxy.
Why it matters
Service meshes solved a real problem — securing and observing microservice communication without changing application code. But the sidecar model is being superseded by kernel-level approaches. Understanding service meshes is important because they shaped how the industry thinks about infrastructure networking, even as the implementation shifts to eBPF.