Redis Complete Guide | Caching, Pub/Sub, Streams, Rate Limiting & Sessions
Key Takeaways
Put a cache in front of a slow query and a 500ms response can become a 50ms one, with the database handling a fraction of its former load. This guide covers what you need to run Redis in production: data types, caching strategies, Pub/Sub, Streams, rate limiting, and sessions.
Why Redis?
Redis (Remote Dictionary Server) is an in-memory data structure store: sub-millisecond reads and writes, and 100,000+ operations per second on a single node. It's a go-to tool for the most common backend performance problems:
Slow database query (500ms) → Cache result in Redis → 5ms
Session stored in DB → Store in Redis → Instant, no DB needed
Real-time notifications → Redis Pub/Sub → Push to all subscribers
API being hammered → Redis rate limiting → Block abusers atomically
Installation & Connection
# Docker (quickest)
docker run -d --name redis -p 6379:6379 redis:7-alpine
# macOS
brew install redis && redis-server
# Linux
sudo apt install redis-server
# Node.js client
npm install redis
Connect with error handling and reconnection:
// lib/redis.ts
import { createClient } from 'redis';
export const redis = createClient({
url: process.env.REDIS_URL || 'redis://localhost:6379',
socket: {
reconnectStrategy: (retries) => Math.min(retries * 50, 2000),
},
});
redis.on('error', (err) => console.error('Redis error:', err));
redis.on('reconnecting', () => console.log('Redis reconnecting...'));
await redis.connect();
console.log('Redis connected');
Data Types
String — Simple Key-Value
// Set and get
await redis.set('user:1:name', 'Alice');
const name = await redis.get('user:1:name'); // 'Alice'
// Set with expiry (seconds)
await redis.setEx('session:abc123', 3600, 'user_data'); // Expires in 1 hour
// Atomic increment — great for counters and rate limiting
await redis.incr('page:views'); // 1
await redis.incr('page:views'); // 2
await redis.incrBy('score:user1', 10); // Increment by 10
Hash — Structured Object
// Store user object
await redis.hSet('user:1', {
name: 'Alice',
email: '[email protected]',
role: 'admin',
});
// Get single field
const email = await redis.hGet('user:1', 'email'); // '[email protected]'
// Get all fields
const user = await redis.hGetAll('user:1');
// { name: 'Alice', email: '[email protected]', role: 'admin' }
// Update single field without overwriting the whole object
await redis.hSet('user:1', 'role', 'superadmin');
List — Queue or Recent Activity
// LPUSH adds to the left (front), RPOP removes from the right (back)
// This creates a FIFO queue
await redis.lPush('job:queue', 'job1');
await redis.lPush('job:queue', 'job2');
await redis.lPush('job:queue', 'job3');
const nextJob = await redis.rPop('job:queue'); // 'job1' (FIFO)
// Keep only the last 100 items (sliding window)
await redis.lTrim('recent:actions:user1', 0, 99);
// Get range
const recentJobs = await redis.lRange('job:queue', 0, -1); // All items
Set — Unique Collection
// Add unique tags
await redis.sAdd('post:42:tags', ['typescript', 'nodejs', 'redis']); // node-redis takes an array, not varargs
await redis.sAdd('post:42:tags', 'typescript'); // Duplicate — ignored
// Check membership (O(1))
const hasTag = await redis.sIsMember('post:42:tags', 'typescript'); // true
// Get all members
const tags = await redis.sMembers('post:42:tags');
// Set operations
const commonTags = await redis.sInter(['post:42:tags', 'post:43:tags']); // Intersection
Sorted Set — Leaderboard / Priority Queue
// Add with scores
await redis.zAdd('leaderboard', [
{ score: 2350, value: 'alice' },
{ score: 1800, value: 'bob' },
{ score: 2100, value: 'charlie' },
]);
// Top 10 (descending score)
const top10 = await redis.zRangeWithScores('leaderboard', 0, 9, { REV: true });
// [{ value: 'alice', score: 2350 }, { value: 'charlie', score: 2100 }, ...]
// Get rank (0-indexed, highest score = rank 0)
const rank = await redis.zRevRank('leaderboard', 'alice'); // 0 (first place)
// Update score
await redis.zIncrBy('leaderboard', 50, 'bob'); // bob's score += 50
Caching Strategies
Cache-Aside (Lazy Loading)
The most common pattern — load data into cache only when requested:
async function getUser(id: number) {
const cacheKey = `user:${id}`;
// 1. Check cache
const cached = await redis.get(cacheKey);
if (cached) {
return JSON.parse(cached); // Cache hit
}
// 2. Cache miss — query database
const user = await db.user.findUnique({ where: { id } });
if (!user) return null;
// 3. Store in cache for 1 hour
await redis.setEx(cacheKey, 3600, JSON.stringify(user));
return user;
}
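One wrinkle with a fixed TTL: if many keys get cached around the same time (after a deploy or a cache flush), they all expire together and the database absorbs a burst of simultaneous misses, a cache stampede. A common mitigation is adding random jitter to the TTL so expirations spread out. A minimal sketch (the `jitteredTtl` helper is illustrative, not a library function):

```typescript
// Spread expirations out by adding up to `spread` (fractional) random
// jitter on top of the base TTL, e.g. 3600s becomes 3600-3960s at 0.1.
function jitteredTtl(baseSeconds: number, spread = 0.1): number {
  return Math.floor(baseSeconds * (1 + Math.random() * spread));
}

// In getUser() above, the cache write would become:
// await redis.setEx(cacheKey, jitteredTtl(3600), JSON.stringify(user));
```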
Write-Through
Update the cache whenever you write to the database:
async function updateUser(id: number, data: Partial<User>) {
// 1. Update database
const user = await db.user.update({ where: { id }, data });
// 2. Update cache immediately
await redis.setEx(`user:${id}`, 3600, JSON.stringify(user));
return user;
}
async function deleteUser(id: number) {
await db.user.delete({ where: { id } });
await redis.del(`user:${id}`); // Invalidate cache
}
Bulk Invalidation with Pattern
// Invalidate all cache keys for a user
async function invalidateUserCache(userId: number) {
const keys = await redis.keys(`user:${userId}:*`);
if (keys.length > 0) {
await redis.del(keys);
}
}
⚠️ Avoid KEYS in production for large keyspaces — it blocks Redis. Use SCAN instead:
async function scanAndDelete(pattern: string) {
let cursor = 0;
do {
const reply = await redis.scan(cursor, { MATCH: pattern, COUNT: 100 });
cursor = reply.cursor;
if (reply.keys.length > 0) {
await redis.del(reply.keys);
}
} while (cursor !== 0);
}
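node-redis v4 also ships an async iterator over SCAN, `scanIterator`, which hides the cursor bookkeeping entirely. A sketch written against a minimal client interface so the pattern stands on its own (the `ScanClient` type is illustrative, not part of the library; method names match node-redis v4):

```typescript
// Minimal slice of the client API this helper uses.
type ScanClient = {
  scanIterator(options: { MATCH: string; COUNT: number }): AsyncIterable<string>;
  del(key: string): Promise<number>;
};

// Delete every key matching `pattern`, scanning incrementally so Redis
// is never blocked the way a single KEYS call would block it.
async function deleteByPattern(client: ScanClient, pattern: string): Promise<number> {
  let deleted = 0;
  for await (const key of client.scanIterator({ MATCH: pattern, COUNT: 100 })) {
    deleted += await client.del(key);
  }
  return deleted;
}
```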
Pub/Sub — Real-Time Messaging
Redis Pub/Sub delivers messages to all subscribers of a channel. Use it for real-time notifications and server-to-server events.
// subscriber.ts
import { createClient } from 'redis';
const subscriber = createClient();
await subscriber.connect();
// Subscribe to a channel
await subscriber.subscribe('notifications', (message, channel) => {
const event = JSON.parse(message);
console.log(`[${channel}] ${event.type}:`, event);
// Handle the event...
});
// publisher.ts — publish from anywhere
await redis.publish('notifications', JSON.stringify({
type: 'order_completed',
orderId: 42,
userId: 123,
amount: 99.99,
}));
Real-World: Socket.IO + Redis Pub/Sub
This pattern lets multiple Node.js servers share real-time events:
import { Server } from 'socket.io';
import { createClient } from 'redis';
const io = new Server(3000);
const subscriber = createClient();
const publisher = createClient();
await Promise.all([subscriber.connect(), publisher.connect()]);
// Forward Redis messages to Socket.IO clients
await subscriber.subscribe('chat', (message) => {
const data = JSON.parse(message);
io.to(data.room).emit('message', data); // Broadcast to room
});
io.on('connection', (socket) => {
socket.on('join', (room) => socket.join(room));
socket.on('chat', async ({ room, user, text }) => {
// Publish to Redis — all servers pick it up
await publisher.publish('chat', JSON.stringify({ room, user, text }));
});
});
Redis Streams
Streams are a persistent, ordered log — like Kafka but built into Redis. Useful for event sourcing and reliable message processing.
// Produce events
await redis.xAdd('events:orders', '*', {
type: 'order_placed',
orderId: '42',
userId: '123',
amount: '99.99',
});
// Consume events (simple read)
const messages = await redis.xRead(
{ key: 'events:orders', id: '0' }, // '0' = from the beginning
{ COUNT: 10 }
);
// Consumer group — distributed processing with acknowledgement.
// '$' = deliver only messages added after the group is created;
// MKSTREAM creates the stream if it doesn't exist yet.
await redis.xGroupCreate('events:orders', 'email-service', '$', { MKSTREAM: true });
// Worker reads and processes messages
const pending = await redis.xReadGroup(
'email-service',
'worker-1',
{ key: 'events:orders', id: '>' }, // '>' = undelivered messages
{ COUNT: 5 }
);
// xReadGroup returns null when nothing is pending, so guard the access
for (const { id, message } of pending?.[0]?.messages ?? []) {
await sendOrderConfirmationEmail(message);
await redis.xAck('events:orders', 'email-service', id); // Mark as processed
}
Streams vs Pub/Sub:
- Pub/Sub: Fire-and-forget, messages lost if no subscriber. Best for notifications.
- Streams: Persistent, consumer groups, at-least-once delivery. Best for event processing.
Rate Limiting
Fixed Window (Simple)
async function rateLimit(
userId: string,
limit: number = 100,
windowSeconds: number = 60
): Promise<{ allowed: boolean; remaining: number }> {
const windowKey = Math.floor(Date.now() / (windowSeconds * 1000));
const key = `rate:${userId}:${windowKey}`;
const count = await redis.incr(key);
if (count === 1) {
await redis.expire(key, windowSeconds); // Set expiry on first request
}
return {
allowed: count <= limit,
remaining: Math.max(0, limit - count),
};
}
// Express middleware
app.use(async (req, res, next) => {
const result = await rateLimit(req.ip, 100, 60);
res.setHeader('X-RateLimit-Remaining', result.remaining);
if (!result.allowed) {
return res.status(429).json({ error: 'Too many requests' });
}
next();
});
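The trade-off with fixed windows is the boundary: a client can spend its full limit at the end of one window and again at the start of the next, so worst-case burst throughput is roughly double the configured limit. The window arithmetic makes this concrete (`windowKey` mirrors the key derivation inside `rateLimit` above):

```typescript
// Same derivation as in rateLimit(): bucket the timestamp into
// fixed windows of `windowSeconds`.
function windowKey(nowMs: number, windowSeconds: number): number {
  return Math.floor(nowMs / (windowSeconds * 1000));
}

// Two requests 2ms apart can land in different windows,
// so each gets a fresh counter:
const before = windowKey(59_999, 60); // window 0
const after = windowKey(60_001, 60);  // window 1 — counter resets
```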
Sliding Window (Precise)
async function slidingWindowRateLimit(userId: string, limit = 100) {
const key = `rate:sliding:${userId}`;
const now = Date.now();
const windowMs = 60_000; // 1 minute
// Remove requests older than the window
await redis.zRemRangeByScore(key, 0, now - windowMs);
const count = await redis.zCard(key);
if (count >= limit) {
return { allowed: false, remaining: 0 };
}
// Record this request
await redis.zAdd(key, { score: now, value: `${now}-${Math.random()}` });
await redis.expire(key, 60);
return { allowed: true, remaining: limit - count - 1 };
}
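One caveat with the version above: the ZCARD check and the ZADD happen in separate commands, so two concurrent requests can both slip under the limit. Redis runs Lua scripts atomically, which closes that gap. A sketch, assuming node-redis v4's `redis.eval(script, { keys, arguments })` signature (the script body and the `slidingWindowArgs` helper are illustrative):

```typescript
// Executed atomically by Redis: trim the window, count, and only record
// the request if it is still under the limit. Returns 1 if allowed.
const slidingWindowScript = `
  local key = KEYS[1]
  local now = tonumber(ARGV[1])
  local window = tonumber(ARGV[2])
  local limit = tonumber(ARGV[3])
  redis.call('ZREMRANGEBYSCORE', key, 0, now - window)
  if redis.call('ZCARD', key) >= limit then
    return 0
  end
  redis.call('ZADD', key, now, ARGV[1] .. '-' .. ARGV[4])
  redis.call('PEXPIRE', key, window)
  return 1
`;

// Build the EVAL arguments for one request (Redis receives ARGV as strings).
function slidingWindowArgs(userId: string, limit: number, windowMs: number) {
  return {
    keys: [`rate:sliding:${userId}`],
    arguments: [String(Date.now()), String(windowMs), String(limit), String(Math.random())],
  };
}

// Usage sketch:
// const allowed = await redis.eval(slidingWindowScript, slidingWindowArgs('user1', 100, 60_000));
```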
Session Management
import crypto from 'crypto';
async function createSession(userId: string, metadata: object = {}) {
const sessionId = crypto.randomUUID();
const sessionData = JSON.stringify({ userId, ...metadata, createdAt: Date.now() });
await redis.setEx(`session:${sessionId}`, 86_400, sessionData); // 24 hours
return sessionId;
}
async function getSession(sessionId: string) {
const data = await redis.get(`session:${sessionId}`);
return data ? JSON.parse(data) : null;
}
async function extendSession(sessionId: string, ttlSeconds = 86_400) {
await redis.expire(`session:${sessionId}`, ttlSeconds); // Reset TTL on activity
}
async function destroySession(sessionId: string) {
await redis.del(`session:${sessionId}`);
}
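To support "log out everywhere", you also need to find every session belonging to a user, and a Set per user works as a secondary index alongside the session keys above. A sketch written against a minimal client interface (the `SessionClient` type, the helper names, and the `user:ID:sessions` key layout are illustrative; method names match node-redis v4):

```typescript
// Minimal slice of the client API this sketch uses.
type SessionClient = {
  sAdd(key: string, member: string): Promise<number>;
  sMembers(key: string): Promise<string[]>;
  del(keys: string | string[]): Promise<number>;
};

const userSessionsKey = (userId: string) => `user:${userId}:sessions`;

// Call alongside createSession() so the index stays in sync.
async function indexSession(client: SessionClient, userId: string, sessionId: string) {
  await client.sAdd(userSessionsKey(userId), sessionId);
}

// "Log out everywhere": delete every session key plus the index itself.
async function destroyAllSessions(client: SessionClient, userId: string) {
  const ids = await client.sMembers(userSessionsKey(userId));
  await client.del([...ids.map((id) => `session:${id}`), userSessionsKey(userId)]);
}
```

Note the index Set has no TTL of its own, so it can accumulate IDs of already-expired sessions; deleting a nonexistent key is harmless, and the Set is cleared on full logout.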
Performance Optimization
Pipelining — Batch Multiple Commands
// Without pipeline: 3 round trips
await redis.set('key1', 'value1');
await redis.set('key2', 'value2');
await redis.set('key3', 'value3');
// With pipeline: 1 round trip
const pipeline = redis.multi();
pipeline.set('key1', 'value1');
pipeline.set('key2', 'value2');
pipeline.set('key3', 'value3');
await pipeline.exec();
Transactions with WATCH
// Optimistic locking: abort if the balance changes mid-transaction.
// In node-redis v4, WATCH needs its own connection (executeIsolated),
// and an aborted transaction rejects exec() with a WatchError.
import { WatchError } from 'redis';

async function deductBalance(userId: string, amount: number) {
  const key = `balance:${userId}`;
  return redis.executeIsolated(async (client) => {
    await client.watch(key);
    const balance = parseInt((await client.get(key)) ?? '0', 10);
    if (balance < amount) throw new Error('Insufficient balance');
    try {
      return await client
        .multi()
        .decrBy(key, amount)
        .incrBy(`spent:${userId}`, amount)
        .exec();
    } catch (err) {
      if (err instanceof WatchError) throw new Error('Balance changed during transaction; retry');
      throw err;
    }
  });
}
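The "retry" in that error message can be automated: wrap the optimistic-locking call in a small loop that re-runs the whole WATCH/MULTI/EXEC sequence a few times before giving up. A generic sketch (the `withRetries` helper is illustrative):

```typescript
// Re-run an optimistic-locking operation (e.g. the WATCH/MULTI/EXEC
// sequence above) until it succeeds or the attempts are exhausted.
async function withRetries<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Usage sketch:
// await withRetries(() => deductBalance('user1', 25));
```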
Cluster Setup
For high availability and horizontal scaling:
import { createCluster } from 'redis';
const cluster = createCluster({
rootNodes: [
{ url: 'redis://node1:7000' },
{ url: 'redis://node2:7001' },
{ url: 'redis://node3:7002' },
],
defaults: {
socket: {
reconnectStrategy: (retries) => Math.min(retries * 100, 3000),
},
},
});
await cluster.connect();
Summary
| Use case | Redis feature | Key command |
|---|---|---|
| Cache DB results | String + TTL | setEx(key, seconds, value) |
| Session storage | String + TTL | setEx('session:id', 86400, data) |
| Real-time notifications | Pub/Sub | publish / subscribe |
| Rate limiting | String INCR | incr + expire |
| Leaderboard | Sorted Set | zAdd, zRangeWithScores (REV) |
| Event queue | List | lPush + rPop |
| Reliable event processing | Streams | xAdd, xReadGroup, xAck |
| Batch operations | Pipeline | multi() + exec() |