Caching
Redis Patterns
Redis data structures and common caching patterns.
Redis is not just a key-value store. It's a Swiss Army knife of data structures that can solve complex caching problems.
Redis Data Structures
Strings
The simplest type. Good for caching single objects.
- SET user:123 '{"id":123,"name":"Alice"}' - Store a JSON object
- GET user:123 - Retrieve the cached data
- EXPIRE user:123 3600 - Set the TTL to 1 hour
Hashes
Perfect for object fields and partial updates.
- HSET user:123 id 123 name "Alice" email "alice@example.com" - Set multiple fields
- HGET user:123 name - Get a specific field
- HINCRBY user:123 login_count 1 - Atomically increment a counter field
Lists
Useful for queues and recent items.
- LPUSH recent:posts "post:456" - Add to the front of the list
- LRANGE recent:posts 0 9 - Get the latest 10 items
- LTRIM recent:posts 0 99 - Keep only the latest 100
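The LPUSH/LTRIM pair implements a capped "most recent N" collection. To make the semantics concrete, here is a dependency-free in-memory sketch of what those three commands compute (the class name and cap are illustrative, not a Redis API):

```javascript
// In-memory model of a capped recent-items list, mirroring
// LPUSH (add to front), LTRIM 0 cap-1 (keep newest), LRANGE 0 n-1 (read).
class RecentItems {
  constructor(cap = 100) {
    this.cap = cap;
    this.items = [];
  }
  push(item) {            // LPUSH + LTRIM 0 cap-1
    this.items.unshift(item);
    this.items = this.items.slice(0, this.cap);
  }
  latest(n = 10) {        // LRANGE 0 n-1
    return this.items.slice(0, n);
  }
}

const recent = new RecentItems(3);
['a', 'b', 'c', 'd'].forEach(id => recent.push(id));
console.log(recent.latest(2)); // newest first: [ 'd', 'c' ]
```

Trimming on every push keeps the list bounded, which matters because an untrimmed Redis list grows without limit.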
Sets
Unique collections, good for tags and relationships.
- SADD user:123:followers 456 789 101 - Add multiple members
- SISMEMBER user:123:followers 456 - Check membership
- SCARD user:123:followers - Count members
Sorted Sets (ZSETs)
Sets with scores - perfect for leaderboards and rate limiting.
- ZADD leaderboard 1500 "user:123" 1800 "user:456" - Add members with scores
- ZREVRANGE leaderboard 0 9 - Get the top 10, highest score first
- ZINCRBY user:123:views 1 "article:789" - Increment a member's score
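To see what those commands compute, here is a dependency-free sketch of the leaderboard semantics, with a plain Map standing in for the ZSET (a real ZSET maintains the ordering incrementally rather than re-sorting on every read):

```javascript
// In-memory model of a ZSET leaderboard: zadd stores member -> score,
// topN returns members ordered by descending score (ZREVRANGE 0 n-1).
class Leaderboard {
  constructor() {
    this.scores = new Map();
  }
  zadd(member, score) {
    this.scores.set(member, score);
  }
  zincrby(member, delta) {     // ZINCRBY: atomic in real Redis
    const next = (this.scores.get(member) || 0) + delta;
    this.scores.set(member, next);
    return next;
  }
  topN(n) {                    // ZREVRANGE 0 n-1
    return [...this.scores.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, n)
      .map(([member]) => member);
  }
}

const board = new Leaderboard();
board.zadd('user:123', 1500);
board.zadd('user:456', 1800);
board.zincrby('user:123', 400); // user:123 now at 1900
console.log(board.topN(2)); // [ 'user:123', 'user:456' ]
```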
Common Caching Patterns
1. Full Page Cache
// Cache rendered HTML for fast responses
async function renderPost(postId) {
const cacheKey = `page:post:${postId}`;
let html = await redis.get(cacheKey);
if (!html) {
html = await renderer.render('post', { post: await db.posts.find(postId) });
await redis.setex(cacheKey, 300, html); // 5 minutes
}
return html;
}
2. Rate Limiting with Sliding Window
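The idea, before the Redis version: record a timestamp per request, discard timestamps older than the window, and compare the remaining count to the limit. A dependency-free in-memory sketch (the clock is passed in as an argument purely to keep the logic deterministic):

```javascript
// In-memory sliding-window limiter: one timestamp per request,
// entries older than `window` seconds are dropped before counting.
// Like the Redis version below, a rejected request is still recorded.
function makeSlidingWindowLimiter(limit = 10, window = 60) {
  const hits = new Map(); // userId -> array of request timestamps (seconds)
  return function allow(userId, now) {
    const recent = (hits.get(userId) || []).filter(t => t > now - window);
    recent.push(now);
    hits.set(userId, recent);
    return recent.length <= limit;
  };
}

const allow = makeSlidingWindowLimiter(2, 60);
console.log(allow('u1', 0));  // true  (1st request)
console.log(allow('u1', 1));  // true  (2nd request)
console.log(allow('u1', 2));  // false (3rd inside the window)
console.log(allow('u1', 65)); // true  (the first requests aged out)
```

The Redis implementation below does exactly this, but stores the timestamps in a ZSET so the state is shared across application servers.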
async function checkRateLimit(userId, limit = 10, window = 60) {
const key = `rate:${userId}`;
const now = Math.floor(Date.now() / 1000);
const pipeline = redis.pipeline();
// Remove old entries
pipeline.zremrangebyscore(key, 0, now - window);
// Add current request
pipeline.zadd(key, now, `${now}-${Math.random()}`);
// Count requests in window
pipeline.zcard(key);
// Set expiry
pipeline.expire(key, window);
const results = await pipeline.exec();
// exec() returns [error, value] pairs; index 2 is the ZCARD reply
const count = results[2][1];
return count <= limit;
}
3. Distributed Lock with RedLock
class RedLock {
  constructor(redisInstances, retryCount = 3, retryDelay = 200) {
    this.redisInstances = redisInstances;
    this.retryCount = retryCount;
    this.retryDelay = retryDelay;
  }
  async lock(resource, ttl = 10000) {
    const nonce = `${Date.now()}-${Math.random()}`;
    for (let attempt = 0; attempt < this.retryCount; attempt++) {
      const startTime = Date.now();
      const successes = await Promise.all(
        this.redisInstances.map(redis =>
          redis.set(resource, nonce, 'PX', ttl, 'NX')
            .then(reply => reply === 'OK')
            .catch(() => false)
        )
      );
      const successCount = successes.filter(Boolean).length;
      const majority = Math.floor(this.redisInstances.length / 2) + 1;
      // The lock only counts if a majority granted it AND there is
      // TTL left after subtracting the time spent acquiring it
      const validity = ttl - (Date.now() - startTime);
      if (successCount >= majority && validity > 0) {
        return { resource, nonce, validity };
      }
      // Release anything we did acquire before retrying
      await this.unlock(resource, nonce);
      await this.sleep(this.retryDelay);
    }
    throw new Error('Failed to acquire lock');
  }
  async unlock(resource, nonce) {
    // Compare-and-delete: only remove the key if we still own it,
    // so we never delete a lock another client has since acquired
    const script = `
      if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('del', KEYS[1])
      end
      return 0
    `;
    await Promise.all(
      this.redisInstances.map(redis =>
        redis.eval(script, 1, resource, nonce).catch(() => {})
      )
    );
  }
  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
4. Token Bucket Algorithm
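The Redis implementation below does its bookkeeping inside a Lua script so the read-modify-write is atomic. The bookkeeping itself, in a dependency-free in-memory sketch first (this version uses fractional tokens and takes `now` in seconds as an argument for clarity; the Lua version works in whole tokens and microseconds):

```javascript
// In-memory token bucket: holds up to `capacity` tokens and refills
// at `refillRate` tokens per second of elapsed time.
class MemoryTokenBucket {
  constructor(capacity, refillRate) {
    this.capacity = capacity;
    this.refillRate = refillRate;
    this.tokens = capacity;   // a new bucket starts full
    this.lastRefill = 0;
  }
  consume(n, now) {
    // Refill in proportion to elapsed time, capped at capacity
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillRate);
    this.lastRefill = now;
    if (this.tokens >= n) {
      this.tokens -= n;
      return true;
    }
    return false;
  }
}

const bucket = new MemoryTokenBucket(5, 1); // 5 tokens, 1 token/sec
console.log(bucket.consume(5, 0)); // true: bucket starts full
console.log(bucket.consume(1, 0)); // false: empty, no time has passed
console.log(bucket.consume(2, 2)); // true: 2 seconds refilled 2 tokens
```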
class TokenBucket {
constructor(key, capacity, refillRate) {
this.key = key;
this.capacity = capacity;
this.refillRate = refillRate; // tokens per second
}
  async consume(tokens = 1) {
    const script = `
      local key = KEYS[1]
      local capacity = tonumber(ARGV[1])
      local requested = tonumber(ARGV[2])
      local interval = 1000000 / tonumber(ARGV[3]) -- microseconds per token
      local bucket = redis.call('hmget', key, 'tokens', 'last_refill')
      local current_tokens = tonumber(bucket[1]) or capacity
      local last_refill = tonumber(bucket[2])
      local t = redis.call('time')
      local now = t[1] * 1000000 + t[2]
      if not last_refill then last_refill = now end
      local tokens_to_add = math.floor((now - last_refill) / interval)
      current_tokens = math.min(capacity, current_tokens + tokens_to_add)
      -- Advance last_refill only by the whole tokens granted, so partial
      -- progress toward the next token is not lost between calls
      last_refill = last_refill + tokens_to_add * interval
      local allowed = 0
      if current_tokens >= requested then
        current_tokens = current_tokens - requested
        allowed = 1
      end
      redis.call('hmset', key, 'tokens', current_tokens, 'last_refill', last_refill)
      redis.call('expire', key, math.ceil(capacity / tonumber(ARGV[3])))
      return allowed
    `;
    const result = await redis.eval(script, 1, this.key, this.capacity, tokens, this.refillRate);
    return result === 1; // true if the tokens were consumed
  }
}
Performance Optimization
Pipeline Commands
// Bad: Round trip for each command
const user = await redis.get('user:123');
const posts = await redis.get('user:123:posts');
const followers = await redis.get('user:123:followers');
// Good: Single round trip
const pipeline = redis.pipeline();
pipeline.get('user:123');
pipeline.get('user:123:posts');
pipeline.get('user:123:followers');
const results = await pipeline.exec();
// ioredis returns an [error, value] pair per command, so unwrap the values
const [[, user], [, posts], [, followers]] = results;
Lua Scripts for Atomic Operations
// Atomically increment a counter, setting the expiry only when the key
// is first created so the daily window actually resets
const script = `
  local current = redis.call('INCR', KEYS[1])
  if current == 1 then
    redis.call('EXPIRE', KEYS[1], ARGV[1])
  end
  return current
`;
await redis.eval(script, 1, 'daily:views:page:123', 86400);
Memory Management
Redis is in-memory. Monitor your memory usage:
- Use MEMORY USAGE key to check an individual key's size
- Set maxmemory and maxmemory-policy (allkeys-lru is common)
- Use appropriate data structures (hashes vs. strings for objects)
- Consider Redis persistence vs. pure caching needs
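For example, a cache-only instance might cap memory and evict least-recently-used keys when the cap is reached (a redis.conf sketch; the 2gb figure is illustrative, tune it for your workload):

```
# redis.conf: bound memory use and evict LRU keys across the whole keyspace
maxmemory 2gb
maxmemory-policy allkeys-lru
```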
Redis vs. Memcached
Choose Redis when:
- You need data structures beyond strings
- You need persistence or durability
- You need pub/sub or streaming capabilities
Choose Memcached when:
- Simple string caching is all you need
- You need maximum memory efficiency
- You want to scale across many CPU cores (Memcached is multi-threaded, while Redis executes commands on a single thread)