The Swiss Army knife of caching, messaging, and real-time data that powers Twitter, GitHub, and Stack Overflow
Redis is an in-memory data structure server that supports strings, hashes, lists, sets, sorted sets, streams, and more. Its single-threaded architecture eliminates locking overhead, achieving millions of operations per second. Beyond caching, Redis serves as a message broker, real-time leaderboard, session store, and rate limiter. The key insight: by keeping data in memory and using efficient data structures, Redis achieves microsecond latency that disk-based databases cannot match.
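As a quick illustration of the caching use case, here is a minimal sketch using the redis-py client; the host, port, and key names are assumptions for the example, not part of any particular deployment:

```python
import redis

# Connect to a local Redis instance (host/port here are assumptions for the sketch).
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Cache-aside: write a value with a 60-second TTL, then read it back.
# EX sets the expiry atomically with the write, so there is no window
# where the key exists without a TTL.
r.set("cache:home:fragment", "<div>hello</div>", ex=60)
print(r.get("cache:home:fragment"))  # the value until the TTL expires, then None
```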
Redis processes commands in a single thread, eliminating lock contention and context switching. This makes reasoning about atomicity trivial: each command executes completely before the next begins. The bottleneck is usually network I/O, not CPU.
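Because each command runs to completion, a counter built on INCR needs no application-side locking. A small sketch (the key name is an assumption for illustration):

```python
import redis

r = redis.Redis(decode_responses=True)

def record_page_view(page_id: str) -> int:
    # INCR is a single command, so it runs to completion inside Redis's
    # command loop; concurrent callers can never produce a lost update,
    # and no application-side lock is required.
    return r.incr(f"views:{page_id}")

print(record_page_view("home"))
```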
Unlike simple caches, Redis provides rich data structures (lists, sets, sorted sets, streams) with O(1) or O(log n) operations. You can LPUSH/RPOP for queues, ZADD/ZRANGE for leaderboards, and XADD/XREAD for event streams, all atomically.
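A short sketch of those three structures through redis-py; the key names and values are illustrative assumptions:

```python
import redis

r = redis.Redis(decode_responses=True)

# Queue: LPUSH from producers, RPOP from consumers (FIFO across the two ends).
r.lpush("queue:jobs", "job-1", "job-2")
job = r.rpop("queue:jobs")  # -> "job-1"

# Leaderboard: ZADD scores, ZRANGE with desc=True for the top entries.
r.zadd("leaderboard", {"alice": 120, "bob": 95})
top = r.zrange("leaderboard", 0, 9, desc=True, withscores=True)

# Event stream: XADD appends an entry, XRANGE reads entries back.
r.xadd("stream:events", {"type": "signup", "user": "alice"})
events = r.xrange("stream:events", "-", "+")

print(job, top, events)
```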
Redis automatically switches between memory-efficient encodings based on data size. Small hashes use ziplists (contiguous memory); large ones use hash tables. This optimization happens transparently and can save roughly 10x memory for small objects.
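You can observe the switch with the OBJECT ENCODING command. A sketch (key and field names are assumptions; the compact encoding is reported as "ziplist" or "listpack" depending on the Redis version):

```python
import redis

r = redis.Redis(decode_responses=True)
r.delete("user:1")

# A small hash stays in the compact, contiguous-memory encoding.
r.hset("user:1", mapping={"name": "alice", "plan": "pro"})
print(r.object("encoding", "user:1"))  # e.g. "listpack" or "ziplist"

# Push the hash past the configured size threshold and the encoding
# silently converts to a real hash table.
for i in range(200):
    r.hset("user:1", f"field:{i}", i)
print(r.object("encoding", "user:1"))  # "hashtable"
```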
Redis (Remote Dictionary Server) was created by Salvatore Sanfilippo in 2009 to solve a specific problem: his real-time web analytics startup needed to handle high-velocity writes that MySQL couldn't keep up with.
The core insight was simple: memory is fast, disk is slow. By keeping all data in RAM and using efficient data structures, Redis achieves latency measured in microseconds, 1000x faster than disk-based databases.
But Redis isn't just a cache. It's a data structure server. Where Memcached only stores strings, Redis provides strings, hashes, lists, sets, sorted sets, streams, and more.
Common use cases include caching, session storage, rate limiting, real-time leaderboards, message queues, and event streaming.
Redis supports master-replica replication for read scaling and failover. Redis Cluster provides automatic sharding across multiple nodes using hash slots (16384 slots distributed across masters).
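Cluster assigns each key to one of those slots with CRC16(key) mod 16384. The sketch below reimplements that mapping for illustration, including the {hash tag} rule that lets related keys share a slot; it is a teaching aid, not the library's code:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # If the key contains a non-empty {hash tag}, only the tag is hashed,
    # so keys like user:{42}:cart and user:{42}:orders land in the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(hash_slot("user:1000"))                                          # a slot in 0..16383
print(hash_slot("user:{42}:cart") == hash_slot("user:{42}:orders"))   # True
```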
Complex operations spanning multiple keys can be made atomic using Lua scripts. The script executes in the same single thread, so no other command can interleave. This replaces the need for distributed locks in many cases.
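One common pattern is an atomic check-and-delete, the usual safe lock release. A sketch via EVAL in redis-py; the lock key and token are assumptions for the example:

```python
import redis

r = redis.Redis(decode_responses=True)

# Delete the lock key only if it still holds our token, in one atomic step.
# No other command can run between the GET and the DEL inside the script.
RELEASE_LOCK = """
if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('DEL', KEYS[1])
else
    return 0
end
"""

token = "owner-123"
r.set("lock:report", token, nx=True, ex=30)            # acquire with a TTL
released = r.eval(RELEASE_LOCK, 1, "lock:report", token)
print(released)                                          # 1 if we released it, else 0
```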
| Aspect | Advantage | Disadvantage |
|---|---|---|
| Single-threaded execution | No locks, no race conditions, predictable latency, simpler code | Cannot utilize multiple CPU cores for single operations; one slow command blocks everything |
| In-memory storage | Microsecond latency, millions of operations per second | Data limited by RAM; more expensive than disk storage; requires persistence strategy |
| Asynchronous replication | Low latency writes, simple master-replica setup | Potential data loss if master fails before replication; replicas can serve stale reads |
| Hash slot clustering | Linear horizontal scaling, automatic failover, no central coordinator | Multi-key operations limited to same slot; resharding requires data migration |
| Lua scripting | Atomic complex operations, reduce round trips, custom commands | Scripts block all other operations; debugging is difficult; must be deterministic |
| Pub/Sub simplicity | Easy real-time messaging, pattern subscriptions | No persistence: offline subscribers miss messages; no replay capability |
| RDB persistence | Compact snapshots, fast restart, minimal runtime overhead | Data loss between snapshots; fork() can cause latency spikes with large datasets |
| AOF persistence | Minimal data loss (1 second with everysec), human-readable log | Larger files than RDB; slower restart; write amplification |
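The RDB and AOF trade-offs in the last two rows can be inspected and adjusted at runtime with CONFIG GET/SET. A minimal sketch; the values shown are common settings, not a recommendation:

```python
import redis

r = redis.Redis(decode_responses=True)

# Turn on the append-only file and fsync once per second ("everysec"),
# bounding data loss to roughly one second of writes.
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")

# RDB snapshot schedule: "save" holds space-separated <seconds> <changes> pairs,
# e.g. "300 10" means snapshot if at least 10 keys changed in 300 seconds.
print(r.config_get("save"))
print(r.config_get("appendonly"))
```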