
Patterns (35 items)

Horizontal Scaling Pattern · 15 min · beginner
Retry with Backoff Pattern · 15 min · beginner
Replication Pattern · 25 min · intermediate
Caching Strategies Pattern · 25 min · intermediate
Persistent Connections Pattern · 20 min · intermediate
Load Balancing Pattern · 20 min · intermediate
Fan-out Pattern · 20 min · intermediate
Fan-in Pattern · 20 min · intermediate
Circuit Breaker Pattern · 20 min · intermediate
Eventual Consistency Pattern · 25 min · intermediate
Queue-based Load Leveling Pattern · 20 min · intermediate
Bloom Filters Pattern · 20 min · intermediate
Time-Series Storage Pattern · 20 min · intermediate
Bulkhead Pattern · 20 min · intermediate
Batch Processing Pattern · 20 min · intermediate
Write-Ahead Log Pattern · 20 min · intermediate
API Gateway Pattern · 20 min · intermediate
Backend for Frontend Pattern · 20 min · intermediate
Sidecar Pattern · 20 min · intermediate
Idempotency Pattern · 20 min · intermediate
Rate Limiting Pattern · 20 min · intermediate
Backpressure Pattern · 20 min · intermediate
Pub/Sub Pattern · 25 min · intermediate
Strong Consistency Pattern · 30 min · advanced
Conflict Resolution Pattern · 25 min · advanced
Leader Election Pattern · 25 min · advanced
Consensus Protocols Pattern · 30 min · advanced
CQRS Pattern · 28 min · advanced
LSM Trees Pattern · 25 min · advanced
Sharding Pattern · 25 min · advanced
Event Sourcing Pattern · 30 min · advanced
Stream Processing Pattern · 25 min · advanced
Change Data Capture Pattern · 25 min · advanced
Distributed Locking Pattern · 25 min · advanced
Two-Phase Commit Pattern · 25 min · advanced
System Design Pattern · Data Distribution · intermediate
Tags: fan-in, aggregation, metrics collection, consolidation

Fan-in Pattern

Aggregating data from many sources into one destination

Used in: Metrics Collection, Log Aggregation, Analytics Pipeline | 20 min read


Summary

Fan-in is a messaging pattern where multiple producers send messages that are aggregated by a single consumer or processing point. It's the opposite of fan-out: many sources converge to one destination. This pattern is essential for aggregating data from distributed sources (collecting logs from thousands of servers), implementing scatter-gather (query multiple databases and combine results), and coordinating parallel work (waiting for all workers to complete). Fan-in reduces complexity by providing a single point for aggregation, correlation, and final processing.

Key Takeaways

Many-to-One Aggregation

Multiple producers send messages to a single consumer or aggregation point. This centralizes processing and simplifies downstream logic. Common in logging, metrics collection, and data pipelines.
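As a minimal sketch of this many-to-one shape (not taken from the article), the Go snippet below merges several producer channels into a single consumer loop; the producer count and string payloads are illustrative assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// merge fans in any number of input channels into one output channel.
// The output is closed only after every input has been drained.
func merge(inputs ...<-chan string) <-chan string {
	out := make(chan string)
	var wg sync.WaitGroup
	wg.Add(len(inputs))
	for _, in := range inputs {
		go func(in <-chan string) {
			defer wg.Done()
			for msg := range in {
				out <- msg
			}
		}(in)
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	// Three illustrative producers, e.g. three servers emitting log lines.
	producers := make([]<-chan string, 0, 3)
	for i := 1; i <= 3; i++ {
		ch := make(chan string)
		go func(id int, ch chan<- string) {
			defer close(ch)
			for j := 0; j < 2; j++ {
				ch <- fmt.Sprintf("server-%d line %d", id, j)
			}
		}(i, ch)
		producers = append(producers, ch)
	}

	// Single consumer: the aggregation point.
	for line := range merge(producers...) {
		fmt.Println(line)
	}
}
```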

Correlation and Ordering

Messages from different sources may arrive out of order. Fan-in often requires correlation IDs to group related messages and ordering logic to process them correctly.
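To make correlation concrete, here is a small hypothetical sketch: partial results carry a CorrelationID and a Seq field (both invented names), arrive interleaved, and are grouped and re-ordered before final processing.

```go
package main

import (
	"fmt"
	"sort"
)

// Partial is a hypothetical message shape: results from different sources
// share a CorrelationID and carry a Seq number for ordering.
type Partial struct {
	CorrelationID string
	Seq           int
	Payload       string
}

func main() {
	// Messages arrive interleaved and out of order.
	incoming := []Partial{
		{"req-42", 2, "page 2"},
		{"req-7", 1, "page 1"},
		{"req-42", 1, "page 1"},
		{"req-42", 3, "page 3"},
	}

	// Group related messages by correlation ID ...
	groups := map[string][]Partial{}
	for _, p := range incoming {
		groups[p.CorrelationID] = append(groups[p.CorrelationID], p)
	}

	// ... then restore per-request order before final processing.
	for id, parts := range groups {
		sort.Slice(parts, func(i, j int) bool { return parts[i].Seq < parts[j].Seq })
		fmt.Println(id, "->", parts)
	}
}
```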

Backpressure Management

When many producers send to one consumer, the consumer can become overwhelmed. Implement backpressure mechanisms: rate limiting, buffering, or flow control to prevent overload.
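One simple way to get this behavior in Go is a bounded channel between the producers and the consumer: a full buffer blocks the senders, which paces them to the consumer's rate. The buffer size, producer count, and timings below are illustrative assumptions, not recommendations.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A bounded buffer between many producers and one consumer.
	// Capacity is an illustrative assumption; tune it for the workload.
	buffer := make(chan string, 8)

	// Fast producers: a full buffer blocks the send, which is the simplest
	// form of flow control. A select with a default case could drop or
	// reroute messages instead of blocking.
	for p := 0; p < 4; p++ {
		go func(p int) {
			for i := 0; ; i++ {
				buffer <- fmt.Sprintf("producer-%d msg-%d", p, i)
			}
		}(p)
	}

	// Slow consumer: drains at a fixed rate, so producers are paced by the
	// buffer rather than overwhelming the aggregation point.
	for i := 0; i < 10; i++ {
		fmt.Println("consumed:", <-buffer)
		time.Sleep(50 * time.Millisecond)
	}
}
```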

Scatter-Gather Pattern

Fan-out a request to multiple services, then fan-in the responses. Used for parallel queries, distributed search, and aggregating data from multiple sources.
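A rough sketch of scatter-gather, assuming three made-up services with fake latencies: the request fans out to each service in its own goroutine, and the responses fan back in over one channel before being combined.

```go
package main

import (
	"fmt"
	"time"
)

// queryService stands in for a call to one backend; the service names and
// latencies here are invented for illustration.
func queryService(name string, delay time.Duration) string {
	time.Sleep(delay)
	return "result from " + name
}

func main() {
	services := map[string]time.Duration{
		"inventory": 30 * time.Millisecond,
		"pricing":   10 * time.Millisecond,
		"reviews":   20 * time.Millisecond,
	}

	// Scatter: fan the request out to every service in parallel.
	responses := make(chan string, len(services))
	for name, delay := range services {
		go func(name string, delay time.Duration) {
			responses <- queryService(name, delay)
		}(name, delay)
	}

	// Gather: fan the responses back in and combine them.
	combined := make([]string, 0, len(services))
	for i := 0; i < len(services); i++ {
		combined = append(combined, <-responses)
	}
	fmt.Println(combined)
}
```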

Completion Detection

How do you know when all expected messages have arrived? Use timeouts, expected counts, or completion markers. Handle partial results gracefully.
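The sketch below combines two of those signals: it gathers until an expected count is reached or a deadline fires, then continues with whatever partial results it has. The worker count, latencies, and timeout are invented for illustration.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const expected = 5 // completion detection by expected count
	results := make(chan string, expected)

	// Workers with varying latency; the slowest ones will miss the deadline.
	for i := 0; i < expected; i++ {
		go func(i int) {
			time.Sleep(time.Duration(i) * 40 * time.Millisecond)
			results <- fmt.Sprintf("worker-%d done", i)
		}(i)
	}

	// Gather until all expected results arrive or the timeout fires,
	// then proceed gracefully with whatever partial results we have.
	deadline := time.After(100 * time.Millisecond)
	collected := make([]string, 0, expected)
gather:
	for len(collected) < expected {
		select {
		case r := <-results:
			collected = append(collected, r)
		case <-deadline:
			fmt.Printf("timeout: continuing with %d of %d results\n", len(collected), expected)
			break gather
		}
	}
	fmt.Println(collected)
}
```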

Single Point Bottleneck

The aggregation point can become a bottleneck. Scale by partitioning (multiple aggregators by key), or use hierarchical fan-in (aggregate locally, then globally).
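Here is one way partitioned fan-in can look: a hash of the message key routes each event to one of N aggregators, so no single consumer owns all the state. The partition count, keys, and counting logic are assumptions made for the example.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const numAggregators = 4 // illustrative partition count

// partition picks an aggregator for a key so that the same key always
// lands on the same aggregator (e.g. per-metric or per-customer counts).
func partition(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % numAggregators)
}

func main() {
	// One channel (and one consumer goroutine) per partition instead of
	// a single global aggregation point.
	channels := make([]chan string, numAggregators)
	counts := make([]map[string]int, numAggregators)
	var wg sync.WaitGroup
	for i := range channels {
		channels[i] = make(chan string, 64)
		counts[i] = map[string]int{}
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			for key := range channels[i] {
				counts[i][key]++ // each aggregator owns only its own state
			}
		}(i)
	}

	// Producers route each event by key; no cross-partition coordination.
	events := []string{"checkout", "login", "login", "search", "checkout"}
	for _, e := range events {
		channels[partition(e)] <- e
	}
	for _, ch := range channels {
		close(ch)
	}
	wg.Wait()
	fmt.Println(counts)
}
```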

Pattern Details

Distributed Search Example:

When a user searches on an e-commerce site:
1. The query goes to multiple index shards (fan-out)
2. Each shard returns partial results
3. Results must be merged, ranked, and paginated (fan-in)
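A hedged sketch of the fan-in half of that flow: given per-shard hit lists (field names invented), one function merges them, ranks by score, and slices out the requested page.

```go
package main

import (
	"fmt"
	"sort"
)

// Hit is a hypothetical partial result returned by one index shard.
type Hit struct {
	Product string
	Score   float64
}

// mergeAndRank fans in the per-shard result lists, ranks them globally,
// and returns one page of results.
func mergeAndRank(shards [][]Hit, page, pageSize int) []Hit {
	var all []Hit
	for _, hits := range shards {
		all = append(all, hits...)
	}
	sort.Slice(all, func(i, j int) bool { return all[i].Score > all[j].Score })
	start := page * pageSize
	if start >= len(all) {
		return nil
	}
	end := start + pageSize
	if end > len(all) {
		end = len(all)
	}
	return all[start:end]
}

func main() {
	// Partial results from three shards (the fan-out already happened).
	shards := [][]Hit{
		{{"usb cable", 0.91}, {"usb hub", 0.42}},
		{{"usb charger", 0.88}},
		{{"usb adapter", 0.77}, {"usb fan", 0.15}},
	}
	fmt.Println(mergeAndRank(shards, 0, 3)) // fan-in: merged, ranked, paginated
}
```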

Log Aggregation Example:

1000 servers each produce logs:
- Each server: 1000 log lines/second
- Total: 1M log lines/second
- Need central storage for querying and alerting

Without fan-in, you'd need to query each server individually or duplicate logs everywhere.
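A common hierarchical variant is a per-server agent that batches lines locally before forwarding, so the central fan-in point sees a few requests per second from each host instead of a thousand. The sketch below assumes a made-up forward function standing in for the real transport (HTTP, gRPC, or a shipping agent); the batch size and flush interval are illustrative.

```go
package main

import (
	"fmt"
	"time"
)

// forward stands in for a send to the central log store; in practice this
// would be a network call to whatever aggregation service is in use.
func forward(batch []string) {
	fmt.Printf("forwarding batch of %d lines\n", len(batch))
}

// batchAndForward is a per-server agent: it buffers log lines locally and
// ships them in batches, reducing the message rate at the central fan-in point.
func batchAndForward(lines <-chan string, maxBatch int, flushEvery time.Duration) {
	batch := make([]string, 0, maxBatch)
	ticker := time.NewTicker(flushEvery)
	defer ticker.Stop()
	for {
		select {
		case line, ok := <-lines:
			if !ok {
				if len(batch) > 0 {
					forward(batch) // flush the remainder on shutdown
				}
				return
			}
			batch = append(batch, line)
			if len(batch) == maxBatch {
				forward(batch)
				batch = batch[:0]
			}
		case <-ticker.C:
			if len(batch) > 0 {
				forward(batch) // size- and time-based flushing
				batch = batch[:0]
			}
		}
	}
}

func main() {
	lines := make(chan string)
	go func() {
		for i := 0; i < 250; i++ {
			lines <- fmt.Sprintf("log line %d", i)
		}
		close(lines)
	}()
	batchAndForward(lines, 100, 500*time.Millisecond)
}
```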


Trade-offs

Aspect | Advantage | Disadvantage
