C++ Caching Strategies: Redis, Memcached, and Patterns [#50-8]

Key takeaway

Speed up read-heavy APIs with Redis: TTL design, invalidation on writes, probabilistic early expiration, and SET NX locks for stampede control.

Introduction: “The database is the bottleneck”

Repeated identical SELECTs overload the DB and inflate p99 latency. Read-heavy, slowly changing data (product lists, rankings, sessions) belongs in a cache—often Redis or Memcached from C++ via hiredis-style clients.

Topics:

  • Scenarios: DB saturation, stale data, cache stampede, multi-host consistency
  • Cache-Aside, Write-Through, invalidation
  • Distributed locks (SET key value NX EX) for coordination
  • Ops: TTL, monitoring, failure modes

See also: Redis clone from scratch for KV internals.


Scenarios

Problem | Approach
Same query thousands of times/sec | Cache-aside: read Redis → miss → DB → set Redis
Stale prices after an update | Invalidate or update the cache on write, not only via TTL
Thundering herd at TTL expiry | Single-flight lock, early jitter, stale-while-revalidate
Per-process map invisible to other servers | Shared, distributed Redis/Memcached tier
Cross-server inventory race | Distributed lock or atomic Lua script in Redis
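
The cache-aside flow from the first row can be sketched in C++. Here `FakeRedis` (a `std::unordered_map` with TTLs) stands in for the Redis client and `loadFromDb` for the real query; both names are illustrative, and a production version would issue hiredis-style GET/SETEX commands instead:

```cpp
#include <chrono>
#include <optional>
#include <string>
#include <unordered_map>

// Stand-in for a Redis client: value plus absolute expiry (an assumption for
// this sketch, not a real client API).
struct FakeRedis {
    struct Entry { std::string value; std::chrono::steady_clock::time_point expires; };
    std::unordered_map<std::string, Entry> data;

    std::optional<std::string> get(const std::string& key) {
        auto it = data.find(key);
        if (it == data.end() || std::chrono::steady_clock::now() >= it->second.expires)
            return std::nullopt;  // miss or expired
        return it->second.value;
    }
    void setex(const std::string& key, std::string value, std::chrono::seconds ttl) {
        data[key] = {std::move(value), std::chrono::steady_clock::now() + ttl};
    }
};

// Cache-aside: try the cache first, fall back to the DB on a miss, then
// populate the cache with a TTL so subsequent reads are served from memory.
template <typename LoadFn>
std::string cacheAsideGet(FakeRedis& cache, const std::string& key,
                          std::chrono::seconds ttl, LoadFn loadFromDb) {
    if (auto hit = cache.get(key)) return *hit;   // cache hit
    std::string value = loadFromDb(key);          // cache miss: query the DB
    cache.setex(key, value, ttl);                 // fill for later readers
    return value;
}
```

The TTL bounds staleness even if write-side invalidation is missed; the table's second row is why invalidation on writes is still needed for data like prices.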

Patterns

  • Cache-Aside: application owns when to fill cache.
  • Write-Through: writes go to DB and cache together (stronger consistency, more write cost).
  • Stampede: only one backend refreshes; others wait or serve slightly stale data.
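
The single-flight idea maps onto Redis's `SET key value NX EX`: the one caller that acquires the lock refreshes the value, everyone else serves slightly stale data. A local sketch of those semantics (`FakeLockStore` mimics NX/EX behavior in-process and is an assumption, not a client API):

```cpp
#include <chrono>
#include <string>
#include <unordered_map>

// Local stand-in for Redis SET key value NX EX: succeeds only if the key is
// absent or its TTL has elapsed, mirroring NX semantics.
class FakeLockStore {
    struct Entry { std::string value; std::chrono::steady_clock::time_point expires; };
    std::unordered_map<std::string, Entry> data_;
public:
    bool setNxEx(const std::string& key, const std::string& owner,
                 std::chrono::seconds ttl) {
        auto now = std::chrono::steady_clock::now();
        auto it = data_.find(key);
        if (it != data_.end() && now < it->second.expires) return false;  // held
        data_[key] = {owner, now + ttl};
        return true;
    }
    void del(const std::string& key) { data_.erase(key); }
};

// Single-flight refresh: the lock winner recomputes the value; losers keep
// serving the stale copy instead of stampeding the DB.
template <typename RefreshFn>
std::string refreshOrServeStale(FakeLockStore& locks, const std::string& key,
                                const std::string& owner,
                                const std::string& staleValue, RefreshFn refresh) {
    const std::string lockKey = "lock:" + key;
    if (locks.setNxEx(lockKey, owner, std::chrono::seconds(10))) {
        std::string fresh = refresh();   // only the lock holder hits the DB
        locks.del(lockKey);              // release once the cache is repopulated
        return fresh;
    }
    return staleValue;                   // everyone else serves stale data
}
```

The EX ttl is the safety net: if the lock holder crashes, the lock expires and another backend can take over the refresh.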

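The "early jitter" defense from the scenarios table can be made concrete with probabilistic early expiration: each reader may treat an entry as expired a little before its real TTL, with a probability that rises as expiry approaches, so refreshes spread out instead of all landing at the expiry instant. A minimal sketch (the function name and `beta` default are illustrative choices, not a library API):

```cpp
#include <chrono>
#include <cmath>
#include <random>

// Probabilistic early expiration: decide whether this reader should refresh
// now, even though the entry has not technically expired. `recomputeCost` is
// roughly how long a refresh takes; `beta` tunes eagerness (1.0 is a common
// starting point). Refresh when -recomputeCost * beta * ln(u) >= timeToExpiry
// for a uniform random u in (0, 1].
bool shouldRefreshEarly(std::chrono::milliseconds timeToExpiry,
                        std::chrono::milliseconds recomputeCost,
                        double beta, std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double u = uniform(rng);
    if (u <= 0.0) u = 1e-12;  // guard log(0)
    double earlyWindowMs =
        -static_cast<double>(recomputeCost.count()) * beta * std::log(u);
    return earlyWindowMs >= static_cast<double>(timeToExpiry.count());
}
```

The effect: far from expiry almost nobody refreshes; close to expiry a few readers do, so the cache is usually warm again before the TTL actually fires.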
Redis client (conceptual)

  • Connection pool, timeouts, pipelining for batch reads.
  • Serialization of values (JSON, protobuf).

Production

  • Metrics: hit rate, latency, evictions, memory.
  • Graceful degradation if Redis is down (optional direct DB with circuit breaker).
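
Graceful degradation can be sketched as a small circuit breaker: after a few consecutive Redis failures, bypass the cache and query the DB directly for a cooldown period, then let one probe through. A minimal illustrative version (the class, thresholds, and method names are assumptions, not a specific library):

```cpp
#include <chrono>

// Minimal circuit breaker: `threshold` consecutive failures open the circuit;
// while open, callers skip Redis and go straight to the DB; after `cooldown`
// the next caller is allowed through as a probe.
class CircuitBreaker {
    int failures_ = 0;
    int threshold_;
    std::chrono::steady_clock::duration cooldown_;
    std::chrono::steady_clock::time_point openedAt_{};
    bool open_ = false;
public:
    CircuitBreaker(int threshold, std::chrono::seconds cooldown)
        : threshold_(threshold), cooldown_(cooldown) {}

    bool allowCacheCall() {
        if (!open_) return true;
        if (std::chrono::steady_clock::now() - openedAt_ >= cooldown_) {
            open_ = false;      // half-open: let one probe reach Redis
            failures_ = 0;
            return true;
        }
        return false;           // still open: bypass Redis, hit the DB
    }
    void recordSuccess() { failures_ = 0; }
    void recordFailure() {
        if (++failures_ >= threshold_) {
            open_ = true;
            openedAt_ = std::chrono::steady_clock::now();
        }
    }
};
```

Note that while the breaker is open, every request lands on the DB, so the cooldown should be short enough to recover quickly but long enough to avoid hammering a Redis node that is still restarting.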


Summary

Distributed caching cuts DB load and latency when paired with clear invalidation rules and stampede defenses. Redis is the usual default for rich data structures and locking primitives.