2 AM, Django server crawling to a halt — and Redis was what I was missing
I was running a Django API that returned a product list for a mobile app. Everything was fine under low traffic. But when a sale hit, the server started timing out. I jumped in to investigate and found MySQL handling around 400 queries per second — most of them the exact same SELECT statement, just with different filter parameters.
The cause wasn’t hard to find: every request was hitting the database directly, even though the results hadn’t changed in 5 minutes. That’s called a missing cache layer.
Why does the database get overloaded even with simple queries?
Every time a client calls the endpoint /products?category=shoes, Django runs a full SQL query: MySQL has to parse the statement → pick an index → read from disk (or the buffer pool) → return the result. At 400 queries per second with 80% of them duplicates, MySQL burns nearly all of its CPU redoing the same work over and over.
The simplest fix: add a cache layer in between. The first query hits the database, and the result gets stored in cache. Subsequent requests are served from cache without touching the database at all. This is exactly what Redis does best.
Installing Redis on Linux — fast and clean
Ubuntu / Debian
# Install from the official Redis repository (latest version, not the older Ubuntu package)
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt update
sudo apt install redis -y
# Check service status (on Debian/Ubuntu the systemd unit is named redis-server)
sudo systemctl status redis-server
sudo systemctl enable redis-server
CentOS / RHEL / Rocky Linux
sudo dnf install epel-release -y
sudo dnf install redis -y
sudo systemctl start redis
sudo systemctl enable redis
Verify Redis is running
redis-cli ping
# Expected output: PONG
If you see PONG, you're good to go. If not, check systemctl status redis-server (redis on RHEL-family systems): nine times out of ten, either the service hasn't started or a firewall is blocking port 6379.
The most commonly used Redis commands in practice
SET and GET — the basics
redis-cli
# Store a value
SET username "johndoe"
# Read it back
GET username
# => "johndoe"
# Store with TTL (auto-expires after 300 seconds)
SET session_token "abc123" EX 300
# Check remaining time in seconds
TTL session_token
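The same commands map one-to-one onto the redis-py client used later in this post. A minimal sketch, with the caveat that the store_session/lookup_session helper names are mine, not part of any API:

```python
def store_session(client, token: str, user: str, ttl: int = 300) -> None:
    # Equivalent of: SET session:<token> <user> EX <ttl>
    client.set(f"session:{token}", user, ex=ttl)

def lookup_session(client, token: str):
    # Equivalent of: GET session:<token>  (returns None if expired or missing)
    return client.get(f"session:{token}")

# Usage against a live server (assumes `pip install redis`):
#   import redis
#   r = redis.Redis(host="localhost", port=6379, decode_responses=True)
#   store_session(r, "abc123", "johndoe")
#   lookup_session(r, "abc123")
```

Passing the client in as an argument also makes this trivial to test without a running server.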
Hash — storing objects with multiple fields
# Store user information
HSET user:1001 name "John Doe" email "[email protected]" age 28
# Read a single field
HGET user:1001 name
# Read all fields
HGETALL user:1001
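From Python, the hash commands take a mapping. A hedged sketch mirroring the user:1001 example above (the save_user/load_user names are illustrative):

```python
def save_user(client, user_id: int, fields: dict) -> None:
    # HSET user:<id> name ... email ... age ...
    client.hset(f"user:{user_id}", mapping=fields)

def load_user(client, user_id: int) -> dict:
    # HGETALL user:<id>  (returns an empty dict if the key does not exist)
    return client.hgetall(f"user:{user_id}")
```

One gotcha worth knowing: Redis stores hash values as strings, so with decode_responses=True a numeric field like age 28 comes back as "28".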
List — a simple queue
# Push to the front
LPUSH job_queue "send_email"
LPUSH job_queue "resize_image"
# Pop from the back (FIFO)
RPOP job_queue
# => "send_email"
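The LPUSH/RPOP pair gives you the same minimal FIFO queue from Python. A sketch (the queue name matches the CLI example; the helper names are mine):

```python
def enqueue(client, job: str) -> None:
    # LPUSH job_queue <job>  (push to the front)
    client.lpush("job_queue", job)

def dequeue(client):
    # RPOP job_queue  (pop from the back: oldest job first, None when empty)
    return client.rpop("job_queue")
```

For anything beyond a toy, a library like Celery or RQ (mentioned below) handles retries and worker management on top of exactly this primitive.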
Deleting and checking keys
# Check if a key exists
EXISTS username
# => 1 (exists) or 0 (doesn't exist)
# Delete a key
DEL username
# List all keys (never run this in production: KEYS blocks the server while it scans every key)
KEYS *
# Use SCAN instead in production
SCAN 0 MATCH user:* COUNT 100
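redis-py wraps that SCAN cursor loop in scan_iter, which is the safe way to do the "touch every key under a prefix" chore from application code. A sketch (delete_by_prefix is my name for it, not a built-in):

```python
def delete_by_prefix(client, prefix: str, batch: int = 100) -> int:
    # Walks keys incrementally with SCAN instead of KEYS, so the server
    # is never blocked on a single long-running command.
    deleted = 0
    for key in client.scan_iter(match=f"{prefix}*", count=batch):
        client.delete(key)
        deleted += 1
    return deleted
```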
Using Redis cache in Python — a real-world example
Back to the original Django API problem. Here’s how I fixed it that night:
import redis
import json
import hashlib

# Connect to Redis
r = redis.Redis(host='localhost', port=6379, db=0, decode_responses=True)

def get_products(category: str, filters: dict) -> list:
    # Generate a cache key from the query parameters
    cache_key = f"products:{category}:{hashlib.md5(json.dumps(filters, sort_keys=True).encode()).hexdigest()}"

    # Try to get from cache
    cached = r.get(cache_key)
    if cached:
        return json.loads(cached)

    # Cache miss → query the database (db is the app's existing DB handle)
    results = db.query("SELECT * FROM products WHERE category = %s ...", category)

    # Store in cache with a 5-minute TTL
    r.setex(cache_key, 300, json.dumps(results))
    return results
Deployed it that same night. MySQL queries dropped from 400/sec down to around 40/sec. Response time went from 800ms down to 20ms. Just 30 lines of caching code and the server stopped timing out.
Minimum Redis configuration for production
The default config file is at /etc/redis/redis.conf. Here’s what you need to change right away:
# Limit the amount of RAM Redis can use
maxmemory 512mb
# When RAM is full: evict least recently used keys first (ideal for caching)
maxmemory-policy allkeys-lru
# Bind to localhost only — don't expose externally unless necessary
bind 127.0.0.1
# Set a password (required if the port is exposed externally)
requirepass your_strong_password_here
# Save snapshots to disk (RDB persistence)
save 900 1
save 300 10
save 60 10000
After editing the config:
sudo systemctl restart redis-server   # use 'redis' on RHEL-family systems
# Test authentication
redis-cli -a your_strong_password_here ping
Use Redis for the right problems — don’t force it where it doesn’t fit
Redis is powerful, but it’s not a silver bullet. I once saw a team use Redis as the primary database for orders — with persistence disabled. One server restart wiped out 3 days of transaction data. It was an expensive lesson: understand the right role for each tool before you touch production data.
Redis is best suited for:
- API response caching — reduce database load, speed up reads
- Session storage — store user sessions and authentication tokens
- Rate limiting — count requests per IP, prevent spam
- Job queues — used with Celery or RQ
- Pub/Sub messaging — simple real-time notifications
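Of that list, rate limiting is the one that fits in a few lines. A hedged fixed-window sketch using INCR + EXPIRE (the rate:<ip> key scheme and the default limits are made up for illustration):

```python
def allow_request(client, ip: str, limit: int = 100, window: int = 60) -> bool:
    # INCR rate:<ip>; the first hit in each window also sets the TTL,
    # so the counter resets itself automatically.
    key = f"rate:{ip}"
    count = client.incr(key)
    if count == 1:
        client.expire(key, window)
    return count <= limit
```

INCR is atomic, so concurrent requests count correctly; the one weakness of this two-command pattern is a crash between INCR and EXPIRE leaving a key with no TTL, which production setups usually close with a small Lua script.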
Avoid using Redis for:
- Data that must be permanently durable (orders, financial transactions)
- Complex relational data (multi-table joins)
- Large files or binary blobs
Quick Redis monitoring
# View real-time statistics
redis-cli info stats
# Cache hit rate
redis-cli info stats | grep -E 'keyspace_hits|keyspace_misses'
# Watch commands in real-time (like top, but for Redis)
redis-cli monitor
If keyspace_hits is much higher than keyspace_misses, your cache is working well. If not, review your TTL settings and cache key logic.
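The hit rate from those two counters is just hits / (hits + misses). A tiny helper, assuming you feed it the stats dict that redis-py's info("stats") returns:

```python
def cache_hit_rate(stats: dict) -> float:
    # keyspace_hits / (keyspace_hits + keyspace_misses); 0.0 when there is no traffic yet
    hits = stats.get("keyspace_hits", 0)
    misses = stats.get("keyspace_misses", 0)
    total = hits + misses
    return hits / total if total else 0.0

# Usage against a live server:
#   import redis
#   r = redis.Redis(decode_responses=True)
#   cache_hit_rate(r.info("stats"))  # e.g. 0.9 means 90% of reads came from cache
```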
Redis isn’t complicated. I had it working the first time I installed it. The challenge isn’t the installation — it’s knowing when to use it and configuring it correctly so it doesn’t let you down at the worst possible moment. Next time your server starts crawling in the middle of the night, check your cache layer first — that’s usually the culprit.