SystemExperts
System Design Pattern
Scalability · caching · cache-aside · write-through · write-behind · cache-invalidation · intermediate

Caching Strategies Pattern

Multi-tier caching for performance optimization

Used in: CDNs, Redis, Memcached | 25 min read

Summary

Caching stores frequently accessed data in fast storage (typically memory) to reduce latency and database load. The main strategies are: Cache-Aside (application manages cache), Read-Through (cache manages reads), Write-Through (cache manages writes synchronously), Write-Behind (cache manages writes asynchronously), and Refresh-Ahead (proactive cache refresh). Choosing the right strategy depends on the read/write ratio, consistency requirements, and failure tolerance. Caching is the single most effective optimization for read-heavy workloads: it can cut response times from 100ms to 1ms and reduce database load by 99%. Nearly every production system uses caching at some layer.

Key Takeaways

Cache-Aside is Most Common

Application checks cache first, falls back to database on miss, then populates cache. Simple, flexible, and handles cache failures gracefully. Default choice for most use cases.

Write Strategies Affect Consistency

Write-Through: consistent but slower (write to both). Write-Behind: fast but risk data loss (write to cache, async to DB). Write-Around: avoids cache pollution but misses on first read.
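The write-path trade-off can be sketched in Python. This is a minimal illustration, not a real cache client API: the dict-backed `db` stands in for the database, and the class names are my own.

```python
import queue
import threading

class WriteThroughCache:
    """Write-through: every write goes to cache and database synchronously,
    so the two always agree, at the cost of write latency."""
    def __init__(self, db):
        self.db = db
        self.cache = {}

    def put(self, key, value):
        self.cache[key] = value
        self.db[key] = value            # synchronous write to the store

class WriteBehindCache:
    """Write-behind: writes land in the cache immediately and are flushed
    to the database asynchronously. Fast, but queued writes are lost if
    the process dies before the flush."""
    def __init__(self, db):
        self.db = db
        self.cache = {}
        self._pending = queue.Queue()
        self._worker = threading.Thread(target=self._flush, daemon=True)
        self._worker.start()

    def put(self, key, value):
        self.cache[key] = value
        self._pending.put((key, value))  # database write happens later

    def _flush(self):
        while True:
            key, value = self._pending.get()
            self.db[key] = value
            self._pending.task_done()
```

A real write-behind implementation would also batch flushes and handle database errors; this sketch only shows the synchronous-vs-asynchronous shape of the two strategies.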

Cache Invalidation is Hard

"There are only two hard things in CS: cache invalidation and naming things." When source data changes, cache must be invalidated. TTL, explicit invalidation, and event-driven updates each have trade-offs.

Cache Hit Ratio is Key Metric

High hit ratio (>90%) means cache is effective. Low hit ratio means cache size too small, TTL too short, or access pattern not cache-friendly. Monitor and tune.
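The metric itself is just hits / (hits + misses); a tiny counter like the following (illustrative, not tied to any particular cache library) is enough to start monitoring:

```python
class CacheStats:
    """Track hits and misses to compute the hit ratio, the primary
    signal for whether a cache is sized and configured well."""
    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```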

Cold Cache Stampede

When cache is empty (restart, invalidation), all requests hit database simultaneously. Protect with request coalescing, background refresh, or gradual warming.
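Request coalescing can be sketched with a per-key lock: the first thread through loads from the backing store, and every other thread waiting on the same key reuses that result. The `loader` function here is a stand-in for a database read; the class is illustrative, not a library API.

```python
import threading

class CoalescingCache:
    """Stampede protection via request coalescing: on a cold miss,
    only one thread per key hits the database."""
    def __init__(self, loader):
        self.loader = loader             # function key -> value (hits the DB)
        self._cache = {}
        self._locks = {}
        self._guard = threading.Lock()

    def get(self, key):
        if key in self._cache:
            return self._cache[key]
        with self._guard:                # one lock object per key
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                       # only one thread loads per key
            if key not in self._cache:   # re-check after acquiring the lock
                self._cache[key] = self.loader(key)
        return self._cache[key]
```

Background refresh and gradual warming address the same problem from the other direction, by keeping the cache from going cold in the first place.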

Multi-Level Caching

L1 cache (in-process) → L2 cache (Redis) → Database. Each level trades off latency vs freshness. Browser → CDN → API cache → Database for web apps.

Pattern Details

Cache-Aside Pattern

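The cache-aside flow might look like the following sketch. The dict-like `db`, the TTL, and delete-on-write invalidation are illustrative choices, not the only valid ones.

```python
import time

class CacheAside:
    """Cache-aside: the application checks the cache first, falls back
    to the data store on a miss, then populates the cache itself."""
    def __init__(self, db, ttl_seconds=300):
        self.db = db                    # any dict-like backing store
        self.ttl = ttl_seconds
        self._cache = {}                # key -> (value, expires_at)

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.time() < expires_at:
                return value            # cache hit
            del self._cache[key]        # expired entry
        value = self.db.get(key)        # cache miss: read from the database
        if value is not None:
            self._cache[key] = (value, time.time() + self.ttl)
        return value

    def update(self, key, value):
        self.db[key] = value
        self._cache.pop(key, None)      # invalidate rather than update
```

Invalidating on write (rather than updating the cached value) avoids a race where a concurrent reader repopulates the cache with stale data between the two writes.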

Trade-offs

Aspect | Advantage | Disadvantage