Redis Caching Strategies | Cache-Aside Through Refresh-Ahead & TTL Patterns

Key takeaways

This article compares five cache patterns and pairs TTL, invalidation, and stampede mitigation with Node.js and Redis, the same stack used in the Docker Compose guides on this site.

Introduction

Redis is an in-memory key–value store used for sessions, rate limiting, and pub/sub—but caching remains one of its top use cases. How you attach the cache changes consistency, latency, and failure behavior—that is what Cache-Aside, Read-through, Write-through, Write-behind, and Refresh-ahead formalize.

This article walks through each pattern in the same order: concept → minimal Node.js sketch → TTL and invalidation notes. Pair it with running Redis via Docker Compose.

After reading this post

  • You can separate responsibilities (app vs cache vs DB) across patterns
  • You can explain TTL, cache stampede, and invalidation in operational terms
  • You know where to hook logic in Node.js code

Table of contents

  1. Concepts: why patterns differ
  2. Pattern behavior and Node.js sketches
  3. TTL, invalidation, and stampedes
  4. Advanced: production considerations
  5. Performance comparison
  6. Real-world cases
  7. Troubleshooting
  8. Conclusion

Concepts: why patterns differ

Basics

  • Cache hit: Data is in Redis → skip the database.
  • Cache miss: Not in Redis → read source (often DB) and decide whether to populate cache.
  • Consistency: How much staleness you tolerate after a DB update drives the pattern choice.

Why it matters

The database is authoritative but expensive; Redis is fast but volatile and size-limited. Patterns define who fills and who invalidates the cache.


Pattern behavior and Node.js sketches

The examples below assume ioredis and async/await; they are minimal sketches, not production code.

// lib/redis.js
import Redis from 'ioredis';
export const redis = new Redis(process.env.REDIS_URL);

1. Cache-Aside (lazy loading)

The app checks the cache first; on miss, loads from the DB and stores in Redis.

async function getUser(id) {
  const key = `user:${id}`;
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  // assume db.query resolves to an array of rows
  const rows = await db.query('SELECT * FROM users WHERE id = $1', [id]);
  const row = rows[0];
  if (!row) return null;
  await redis.set(key, JSON.stringify(row), 'EX', 300); // TTL 300s
  return row;
}

Traits: Simple and ubiquitous. On writes, the app owns invalidation or refresh rules.

2. Read-through

The cache layer (or store) calls the loader on miss. The app only talks to the cache API.

// Concept sketch: wrapper runs loader on miss
async function readThrough(key, ttlSec, loader) {
  const hit = await redis.get(key);
  if (hit) return JSON.parse(hit);
  const value = await loader();
  if (value != null)
    await redis.set(key, JSON.stringify(value), 'EX', ttlSec);
  return value;
}

Traits: Centralizes logic—often easier to maintain when ORM/cache modules own the loader.

3. Write-through

On writes, update both DB and cache for higher consistency.

async function updateUser(id, data) {
  await db.query('UPDATE users SET ... WHERE id = $1', [id]);
  const rows = await db.query('SELECT * FROM users WHERE id = $1', [id]);
  await redis.set(`user:${id}`, JSON.stringify(rows[0]), 'EX', 300);
}

Traits: Read path stays simple; write latency may increase due to double writes.

4. Write-behind (write-back)

Write to cache first; flush to DB asynchronously in batches.

Traits: Can improve write throughput/latency but adds loss risk and complex recovery—often unsuitable for payments, inventory, etc.
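
A minimal sketch of the idea, assuming a hypothetical `db.batchWrite` batched upsert and Redis 6.2+ for `LPOP` with a count; `redis` and `db` are passed in explicitly here so the flush logic stays testable:

```javascript
// Write-behind sketch: a write lands in Redis plus a queue; a periodic
// job drains the queue and flushes it to the DB in batches.
async function writeBehind(redis, key, value) {
  await redis.set(key, JSON.stringify(value)); // reads see the new value at once
  await redis.rpush('write-queue', JSON.stringify({ key, value })); // DB write deferred
}

async function flushQueue(redis, db, batchSize = 100) {
  // LPOP with a count needs Redis >= 6.2; returns null when the queue is empty
  const batch = await redis.lpop('write-queue', batchSize);
  if (!batch || batch.length === 0) return 0;
  await db.batchWrite(batch.map((item) => JSON.parse(item)));
  return batch.length;
}
```

Run `flushQueue` on an interval or in a worker; anything queued but not yet flushed is lost if Redis dies, which is exactly the loss risk noted above.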

5. Refresh-ahead

Background refresh before expiry to reduce miss rate on hot keys.

Traits: Helps traffic spikes; requires job design and duplicate-refresh control.
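
One way to sketch it, assuming the caller passes in the `redis` client and a `loader`: `PTTL` reports the remaining lifetime, and a background refresh fires when less than a configurable fraction remains. Duplicate-refresh control is omitted here; a lock like the singleflight in the stampede section would prevent several processes refreshing the same key.

```javascript
// Refresh-ahead sketch: serve hits as usual, but when a key is close to
// expiring, kick off a background reload so the next readers still hit.
async function getWithRefreshAhead(redis, key, ttlSec, loader, threshold = 0.2) {
  const cached = await redis.get(key);
  if (cached != null) {
    const pttl = await redis.pttl(key); // remaining lifetime in ms
    if (pttl >= 0 && pttl < ttlSec * 1000 * threshold) {
      loader()
        .then((fresh) => redis.set(key, JSON.stringify(fresh), 'EX', ttlSec))
        .catch(() => {}); // a failed refresh just leaves the old value in place
    }
    return JSON.parse(cached);
  }
  const fresh = await loader();
  await redis.set(key, JSON.stringify(fresh), 'EX', ttlSec);
  return fresh;
}
```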


TTL, invalidation, and stampedes

TTL (time to live)

  • TTL on keys lets stale data expire naturally.
  • Too short → DB load; too long → longer inconsistency windows.
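
A common companion tweak is adding jitter so keys cached in the same burst don't all expire at the same instant; `ttlWithJitter` here is a hypothetical helper, and 10% is an arbitrary choice:

```javascript
// Add up to 10% random jitter to a base TTL so that keys written
// together expire at slightly different times.
function ttlWithJitter(baseSec, jitterRatio = 0.1) {
  return baseSec + Math.floor(baseSec * jitterRatio * Math.random());
}

// e.g. redis.set(key, payload, 'EX', ttlWithJitter(300))
```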

Invalidate on write (pairs well with Cache-Aside)

async function updateUser(id, data) {
  await db.query('UPDATE users SET ... WHERE id = $1', [id]);
  await redis.del(`user:${id}`);
}
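
An alternative to deleting every affected key is version stamping: bump a per-entity version counter and build keys from it, letting old entries age out via their TTL. A sketch with hypothetical helper names:

```javascript
// Readers build keys from the current version; invalidating everything
// for an entity type is a single INCR.
async function currentKey(redis, entity, id) {
  const v = (await redis.get(`ver:${entity}`)) || '0';
  return `${entity}:${id}:v${v}`; // e.g. user:42:v3
}

async function invalidateEntity(redis, entity) {
  await redis.incr(`ver:${entity}`); // old versioned keys expire via their TTL
}
```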

Cache stampede

When a hot key expires, many requests can hit the DB at once. Mitigate with probabilistic early expiration or singleflight (one request loads while others wait or retry).

const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function getWithSingleFlight(key, ttl, loader) {
  const lockKey = `lock:${key}`;
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const got = await redis.set(lockKey, '1', 'EX', 10, 'NX');
  if (!got) {
    await sleep(50); // another request is loading; wait and retry
    return getWithSingleFlight(key, ttl, loader);
  }
  try {
    // double-check: the previous lock holder may have populated the key
    const refilled = await redis.get(key);
    if (refilled) return JSON.parse(refilled);

    const value = await loader();
    if (value != null) await redis.set(key, JSON.stringify(value), 'EX', ttl);
    return value;
  } finally {
    await redis.del(lockKey);
  }
}
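
The "probabilistic early expiration" half can be sketched as a helper that decides, per read, whether to refresh before the TTL actually runs out (loosely in the spirit of the XFetch algorithm; the 0.1 scaling factor is an arbitrary assumption):

```javascript
// Each reader rolls an exponential sample; reads close to expiry are
// increasingly likely to trigger an early refresh, so hot keys get
// renewed before they vanish and readers don't pile onto the DB at once.
function shouldRefreshEarly(remainingMs, ttlMs, beta = 1.0) {
  const earlyWindow = -Math.log(Math.random()) * beta * ttlMs * 0.1;
  return remainingMs <= earlyWindow;
}

// On a cache hit: if shouldRefreshEarly(await redis.pttl(key), ttlMs) is
// true, reload from the DB and re-set the key; otherwise serve the hit.
```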

Advanced: production considerations

  • Connection pools: Size Redis clients and DB pools for process count.
  • Key naming: Use service:entity:id style to avoid collisions.
  • Compression: Consider MessagePack or compression for large payloads.
  • Failure policy: Document whether you fall back to DB or fail requests when Redis is down.
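
For the failure-policy point, one sketch is to treat Redis errors as cache misses so an outage degrades latency rather than availability; `loadFromDb` is a placeholder for your DB read:

```javascript
// Fallback policy sketch: a Redis error on read or write is swallowed
// (and should be logged), and the request is served from the DB.
async function getWithFallback(redis, key, loadFromDb) {
  let cached = null;
  try {
    cached = await redis.get(key);
  } catch (err) {
    // Redis down: continue as a miss instead of failing the request
  }
  if (cached != null) return JSON.parse(cached);

  const value = await loadFromDb();
  try {
    await redis.set(key, JSON.stringify(value), 'EX', 300);
  } catch (err) {
    // best-effort repopulate; ignore cache write failures
  }
  return value;
}
```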

Performance comparison

Pattern       | Read latency    | Write latency | Consistency                   | Complexity
------------- | --------------- | ------------- | ----------------------------- | ----------
Cache-Aside   | Low on hit      | Low           | Depends on TTL + invalidation | Low
Read-through  | Low             | Unchanged     | Depends on loader             | Medium
Write-through | Low             | Higher        | Relatively stronger           | Medium
Write-behind  | Low             | Very low      | Hard to guarantee             | High
Refresh-ahead | Very low on hit | Unchanged     | Depends on TTL/preload timing | High

Real-world cases

  • Product detail: Cache-Aside + short TTL + invalidate on price change.
  • Announcement banner: Read-through wrapper to keep app code small.
  • Feeds/rankings: If recomputation is expensive, avoid naive Write-behind—often prefer batch precomputation.

See also the Node.js performance guide.


Troubleshooting

Symptom                   | Cause                         | Mitigation
------------------------- | ----------------------------- | -----------------------------------------------
Stale data intermittently | Missing invalidation          | del/version keys on all update paths
Redis memory spikes       | Large values / unbounded keys | TTL, maxmemory-policy, data design
Sudden DB load            | Stampede                      | Singleflight, early refresh
Cascading timeouts        | Redis slowness                | Connection limits, slow logs, network isolation

Conclusion

  • Many teams start with Cache-Aside and evolve toward Write-through or Refresh-ahead when needed.
  • TTL plus explicit invalidation keeps operations manageable.
  • For database choice, continue with PostgreSQL vs MySQL and database integration.