System Design Pattern: Coordination

Tags: leader-election, consensus, coordination, zookeeper, single-leader, advanced

Leader Election Pattern

Selecting a single coordinator among distributed nodes

Used in: ZooKeeper, etcd, Consul | 25 min read


Summary

Leader election is a coordination pattern where distributed nodes agree on a single leader to coordinate activities. Only the leader performs certain operations (writes, scheduling, coordination) while followers remain on standby. If the leader fails, remaining nodes elect a new leader. This pattern prevents split-brain scenarios, ensures single-writer semantics, and simplifies distributed coordination. Implementations include Raft consensus, ZooKeeper recipes, and etcd. Leader election is fundamental to databases (primary/replica), message brokers (partition leaders), and orchestration systems (scheduler).

Key Takeaways

Single Coordinator Simplifies Logic

With one leader, there is no need for distributed consensus on every operation: the leader makes decisions and followers accept them. This dramatically simplifies system design.

Failure Detection is Critical

Leader failure must be detected quickly to minimize downtime, but false positives trigger unnecessary elections. Balance sensitivity against stability.
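A minimal sketch of the follower-side detection logic described above, using a heartbeat timeout (class and method names are illustrative, not from any particular library):

```python
import time

class FailureDetector:
    """Follower-side heartbeat monitor (illustrative sketch).

    The leader is suspected only after `timeout` seconds of silence;
    a larger timeout means fewer false positives but slower failover.
    """

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        # Called whenever a heartbeat arrives from the current leader.
        self.last_heartbeat = time.monotonic()

    def leader_suspected(self):
        # True once the leader has been silent longer than the timeout.
        return time.monotonic() - self.last_heartbeat > self.timeout
```

Tuning `timeout` is exactly the sensitivity-vs-stability trade-off: production systems typically set it to several heartbeat intervals so one dropped packet does not trigger an election.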

Elections Must Be Safe

Can never have two leaders simultaneously (split-brain). Use fencing tokens, epoch numbers, or lease expiration to prevent stale leaders.

Lease-Based vs Consensus-Based

Leases: the leader holds a time-limited lock and must renew it; simple but clock-dependent. Consensus: nodes vote and the majority wins; more complex but clock-independent.
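The lease-based approach can be sketched as a single shared lock record with an expiry. This is a simplified model (the class and its API are hypothetical); `now` is passed in explicitly to make the clock dependency visible:

```python
class LeaseLeadership:
    """Lease-based leader election sketch (hypothetical API).

    One shared record grants leadership for `lease_ttl` time units.
    The holder must renew before expiry; after expiry any node may
    take over. Correctness depends on reasonably synchronized clocks.
    """

    def __init__(self, lease_ttl):
        self.lease_ttl = lease_ttl
        self.holder = None
        self.expires_at = 0.0

    def try_acquire(self, node_id, now):
        # A node becomes leader if the lease is free or has expired.
        if self.holder is None or now >= self.expires_at:
            self.holder = node_id
            self.expires_at = now + self.lease_ttl
            return True
        return self.holder == node_id  # already the leader

    def renew(self, node_id, now):
        # Only the current holder may renew, and only before expiry.
        if self.holder == node_id and now < self.expires_at:
            self.expires_at = now + self.lease_ttl
            return True
        return False
```

Note that a deposed leader's `renew` call simply fails once another node has acquired the lease; pairing this with fencing tokens (below) keeps its stale writes out as well.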

Followers Must Reject Stale Leaders

After an election, the old leader may not know it has been replaced. Include the term/epoch in every request so followers can reject requests from stale terms.
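The term check above amounts to a few lines on the follower side. A sketch (names are illustrative; real systems such as Raft attach the term to every RPC):

```python
class Follower:
    """Term-based request filtering (illustrative sketch).

    Each leader request carries the term in which that leader was
    elected. The follower tracks the highest term it has seen and
    rejects anything older, so a deposed leader's writes are ignored.
    """

    def __init__(self):
        self.current_term = 0

    def handle_request(self, term, payload):
        if term < self.current_term:
            return False          # stale leader: reject
        self.current_term = term  # adopt any newer term we observe
        return True               # accept the request
```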

Leader Election is a Building Block

Leader election underpins Kafka (partition leaders), Elasticsearch (master node), Redis Sentinel, and the Kubernetes scheduler. Understanding it unlocks many of these systems.

Pattern Details

Problems without leader election:

  • Duplicate work: Multiple nodes do same task
  • Conflicting decisions: Nodes make different choices
  • Split-brain: Two primaries accept writes, data diverges

With leader election:

  • One node coordinates, others follow
  • Clear authority for decisions
  • Automatic failover when the leader fails

Leader Election Flow
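The failover flow can be illustrated with a deliberately simplified election rule, bully-style (highest surviving ID wins). This is not Raft or ZooKeeper's algorithm; real systems add majority voting and randomized timeouts to avoid split votes:

```python
def elect_leader(alive_nodes):
    """Bully-style election sketch: the highest-ID node that is
    still reachable becomes leader. Returns None if no nodes remain.
    """
    return max(alive_nodes) if alive_nodes else None

# Failover sequence: the leader crashes and the survivors re-elect.
nodes = {1, 2, 3}
leader = elect_leader(nodes)   # node 3 wins the first election
nodes.discard(leader)          # node 3 fails
leader = elect_leader(nodes)   # node 2 takes over
```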

Trade-offs

Aspect | Advantage | Disadvantage
