Open Source
The high-performance edge and service proxy that powers Istio, AWS App Mesh, and modern service meshes
Envoy is a modern, high-performance L4/L7 proxy designed for microservices architectures. Originally built at Lyft, it provides advanced load balancing, observability, and traffic management without requiring application changes. Envoy runs as a sidecar alongside every service (the service mesh pattern) or as an edge proxy. Its extensible filter architecture, first-class observability (metrics, tracing, logging), and dynamic configuration via xDS APIs make it the foundation for service mesh platforms like Istio and AWS App Mesh.
Envoy runs alongside each service instance, intercepting all network traffic. This moves networking concerns (retries, timeouts, circuit breaking) out of application code into infrastructure. Applications just talk to localhost.
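A minimal sketch of what that sidecar wiring can look like in Envoy's v3 static config (the listener port, cluster name, and backend hostname here are illustrative, not canonical): the application sends traffic to a localhost port, and Envoy forwards it to the real upstream.

```yaml
# Sidecar sketch: the app connects to 127.0.0.1:15001; Envoy's TCP proxy
# filter forwards the connection to the upstream cluster.
static_resources:
  listeners:
  - name: outbound
    address:
      socket_address: { address: 127.0.0.1, port_value: 15001 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: outbound
          cluster: upstream_service
  clusters:
  - name: upstream_service
    connect_timeout: 1s
    type: STRICT_DNS
    load_assignment:
      cluster_name: upstream_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: backend.internal, port_value: 8080 }
```

In a real mesh this configuration is not written by hand; a control plane pushes it to every sidecar over the xDS APIs.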
Envoy handles TCP (L4) and HTTP/1.1, HTTP/2, and gRPC (L7) natively. It can route based on headers, paths, and metadata. Envoy can also be configured to use HTTP/2 for upstream connections even when the downstream connection is HTTP/1.1, improving connection efficiency through multiplexing.
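A hedged sketch of L7 routing in the v3 API (cluster names and the `x-canary` header are illustrative): requests matching both the path prefix and the header go to a canary cluster, everything else to the default. The second fragment shows how a cluster can be told to speak HTTP/2 upstream regardless of the downstream protocol.

```yaml
# Route sketch: match on path prefix plus a header, fall through to default.
route_config:
  virtual_hosts:
  - name: api
    domains: ["*"]
    routes:
    - match:
        prefix: "/v2/"
        headers:
        - name: x-canary
          string_match: { exact: "true" }
      route: { cluster: service_v2_canary }
    - match: { prefix: "/" }
      route: { cluster: service_v1 }

# Cluster sketch: opt the upstream into HTTP/2.
clusters:
- name: service_v2_canary
  connect_timeout: 1s
  type: STRICT_DNS
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http2_protocol_options: {}
  load_assignment:
    cluster_name: service_v2_canary
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: canary.internal, port_value: 8080 }
```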
Request processing is a pipeline of filters: each filter can inspect, modify, or reject requests. Network filters operate at L4 (TCP proxy, rate limiting); HTTP filters operate at L7 (routing, authentication, compression). Custom filters extend this pipeline.
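The HTTP filter pipeline is configured as an ordered list inside the HTTP connection manager. A minimal sketch (filter choice is illustrative; the router filter is real and must come last, since it performs the actual upstream routing):

```yaml
# HTTP filters run in declaration order; the router filter terminates
# the chain by forwarding the request to the selected cluster.
http_filters:
- name: envoy.filters.http.cors
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

Each filter sees the request before the next one in the list, so ordering matters: authentication or rate limiting placed before the router can reject a request without it ever reaching an upstream.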
Envoy was created at Lyft in 2015 to solve a common problem: as services proliferate, every team implements networking concerns (timeouts, retries, circuit breaking, observability) differently, often incorrectly.
The insight: Move networking logic from application libraries into infrastructure. Run a proxy (Envoy) alongside every service. Applications connect to localhost; Envoy handles everything else.
Key problems Envoy solves: inconsistent retry, timeout, and circuit-breaking behavior across services; uneven or missing observability; and networking logic entangled with application code in every language a team uses.
Envoy deployment modes: as a sidecar running alongside every service instance (the service mesh pattern), or as an edge proxy at the boundary of the network.
Most service meshes (Istio, Linkerd, AWS App Mesh) use Envoy as the data plane, with a control plane managing configuration.
Every Envoy instance emits detailed stats (latency histograms, error rates, connection counts), access logs, and distributed traces. This visibility is automatic: no application instrumentation is required.
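A sketch of how that telemetry is typically exposed (ports are illustrative): the admin interface serves endpoints such as `/stats` and `/config_dump`, and a stats sink can push metrics to an external collector like statsd.

```yaml
# Admin interface sketch: /stats, /clusters, /config_dump become available
# on the admin port. The statsd sink pushes the same metrics outward.
admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }
stats_sinks:
- name: envoy.stat_sinks.statsd
  typed_config:
    "@type": type.googleapis.com/envoy.config.metrics.v3.StatsdSink
    address:
      socket_address: { address: 127.0.0.1, port_value: 8125 }
```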
Beyond round-robin, Envoy supports least-request, ring hash (consistent hashing), Maglev, and random. Zone-aware routing prefers local endpoints. Outlier detection ejects unhealthy hosts automatically.
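The load-balancing policy and outlier detection are both set per cluster. A hedged sketch (thresholds and names are illustrative, not recommendations): least-request balancing, with hosts ejected after a run of consecutive 5xx responses.

```yaml
# Cluster sketch: least-request balancing plus outlier detection that
# temporarily ejects hosts returning consecutive 5xx errors.
clusters:
- name: backend
  connect_timeout: 1s
  type: STRICT_DNS
  lb_policy: LEAST_REQUEST
  outlier_detection:
    consecutive_5xx: 5
    interval: 10s
    base_ejection_time: 30s
    max_ejection_percent: 50
  load_assignment:
    cluster_name: backend
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: backend.internal, port_value: 8080 }
```

`max_ejection_percent` caps how much of the cluster can be ejected at once, so outlier detection degrades capacity gracefully rather than emptying the pool during a widespread incident.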
| Aspect | Advantage | Disadvantage |
|---|---|---|
| Sidecar deployment | Transparent networking, no application changes, consistent behavior across languages | Resource overhead per pod (CPU, memory), adds latency hop, operational complexity |
| Dynamic configuration (xDS) | No restarts for config changes, control plane manages thousands of proxies | Requires control plane infrastructure, eventual consistency in config propagation |
| Feature richness | Advanced load balancing, observability, security in one package | Complex configuration, steep learning curve, many knobs to tune |
| C++ implementation | High performance, low latency, efficient memory usage | Harder to extend than Go/Java, longer build times, memory safety concerns |
| WASM extensibility | Safe custom logic, hot reload, language flexibility | Performance overhead vs native, limited debugging, ecosystem still maturing |
| Thread-local architecture | No lock contention, predictable performance, linear scaling | Memory duplication across workers, connection affinity to workers |
| Protocol support | HTTP/1.1, HTTP/2, gRPC, TCP, MongoDB, MySQL - many protocols native | Custom protocols require filter development, not all protocols supported |