The Problem: When RabbitMQ is too heavy for small and medium projects
Back when I was building a Notification Service for a startup, our system only needed to handle about 50,000 events per day. The team chose RabbitMQ for its “industry standard” reputation. The result? On a VPS with 2GB of RAM, the Erlang runtime consumed nearly 500MB just to keep the broker alive under zero load. Configuring Exchanges, Queues, and Bindings also became unnecessarily cumbersome for the simple need of pushing and receiving messages.
In reality, the project was already using Redis for caching. Maintaining an additional RabbitMQ cluster caused operational costs to skyrocket without providing much extra value. I started wondering: Is there a way to leverage Redis as a professional Message Queue (MQ)? Could it ensure messages aren’t lost and balance the load across multiple workers?
Analysis: Why plain Redis Pub/Sub and Lists can’t handle a real MQ
Before version 5.0, we often had to “hack” an MQ using two familiar but risky methods:
- Redis List (LPUSH/BRPOP): It acts like a pipe, pushing into one end and pulling from the other. This method is simple but lacks “Fan-out” capabilities (sending one message to multiple recipients). Most dangerously, if a worker pulls a message and then crashes, that message vanishes completely.
- Redis Pub/Sub: This mechanism is pure real-time “fire and forget.” If a subscriber disconnects while a publisher is sending, the message is lost forever. It lacks history and acknowledgement (ACK) mechanisms.
To bridge this gap, Redis Streams was introduced. It combines the persistence of Lists with flexible distribution capabilities. You get powerful Consumer Groups similar to Kafka but with extremely low resource consumption.
MQ implementation options within the Redis ecosystem
Depending on the complexity of your problem, you can choose one of three approaches:
- Using Lists (Simple Queue): Best for simple background jobs where data loss isn’t a critical concern.
- Using Pub/Sub: Suitable for chat apps or push notifications, where an offline user doesn’t need to catch up on missed messages later.
- Using Redis Streams: The best choice for Microservices or Event-driven systems that require “At-least-once delivery” guarantees.
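For context, the List-based option from the first bullet takes only a few lines. The sketch below is mine, not from any library, and it marks exactly where the data-loss window sits:

```python
import json

def enqueue(conn, queue, job):
    """Producer side: LPUSH a JSON-serialized job onto the list."""
    conn.lpush(queue, json.dumps(job))

def pop_job(conn, queue):
    """Worker side: BRPOP blocks until a job arrives.

    Danger zone: after BRPOP returns, the job exists only in this
    process's memory. If the worker crashes before finishing it,
    the job is gone -- there is no PEL and no redelivery.
    """
    _key, raw = conn.brpop(queue)
    return json.loads(raw)
```

Here `conn` is assumed to be a `redis.Redis` client created with `decode_responses=True`; the one-way pop is precisely the weakness that Streams’ ACK mechanism closes.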
Hands-on Solution: Building a Message Queue with Redis Streams
A Stream operates like an append-only log file: it stores field-value pairs under timestamp-based IDs, which makes tracing any message effortless.
Basic Operations: Pushing data into a Stream
Instead of LPUSH, we use the XADD command. The * character instructs Redis to auto-generate a unique ID.
# Add an order to the 'orders' stream
redis-cli XADD orders * user_id 1001 item "iPhone 15" status "pending"
Redis will return an ID like 1712745600000-0. You can use this ID to check the message status at any time.
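The same XADD call in redis-py looks like the sketch below; the helper for recovering the timestamp baked into the ID is my own addition, and `conn` is assumed to be a connected `redis.Redis` client:

```python
def add_order(conn, user_id, item):
    """XADD with the default id='*': Redis generates '<ms-epoch>-<seq>'."""
    return conn.xadd("orders", {"user_id": user_id,
                                "item": item,
                                "status": "pending"})

def id_timestamp_ms(entry_id):
    """The part of the ID before the '-' is the server's millisecond timestamp."""
    return int(entry_id.split("-")[0])
```

So `id_timestamp_ms("1712745600000-0")` recovers the creation time in milliseconds without storing a separate timestamp field.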
Intelligent Load Balancing with Consumer Groups
This is the “killer feature.” You can group workers together to share the processing workload. If worker A is busy, worker B will automatically pick up the next message.
# Create group 'order_processors' for the 'orders' stream
# '$' means only receive new messages from this point forward;
# MKSTREAM creates the stream if it doesn't exist yet
redis-cli XGROUP CREATE orders order_processors $ MKSTREAM
Then, workers read messages using the XREADGROUP command:
# Worker 'worker_1' asks for 1 message never delivered to any consumer (the special ID '>')
# Note: '>' must be quoted, otherwise the shell treats it as output redirection
redis-cli XREADGROUP GROUP order_processors worker_1 COUNT 1 STREAMS orders '>'
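With redis-py and `decode_responses=True`, the XREADGROUP reply arrives as a nested list. A small helper (my own, not part of redis-py) makes that shape explicit:

```python
def first_message(reply):
    """Unpack an XREADGROUP reply from redis-py.

    The reply is shaped as [(stream_name, [(entry_id, {field: value}), ...]), ...];
    an empty list means no message was available.
    """
    for _stream, entries in reply:
        for entry_id, fields in entries:
            return entry_id, fields
    return None
```

Keeping this unpacking in one place avoids repeating the double loop in every worker.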
ACK Mechanism and XPENDING: Never Lose Track of a Message
In reality, workers can easily crash while calling third-party APIs. Redis Streams solves this through the PEL (Pending Entries List). When you read a message, it temporarily stays in the PEL. Redis only confirms it’s finished once you send the XACK command.
# Acknowledge successful processing
redis-cli XACK orders order_processors 1712745600000-0
If a message stays in the PEL for too long, use XPENDING to inspect it and XCLAIM to reassign it to another worker for reprocessing. This is how I recover from worker failures without needing external monitoring tools.
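A recovery pass with redis-py might look like the sketch below. The 60-second idle threshold and the function names are my assumptions, and `conn` is assumed to be a connected `redis.Redis` client:

```python
STALE_MS = 60_000  # reclaim messages un-ACKed for over 60 s (tune per workload)

def stale_ids(pending_entries):
    """XPENDING entries arrive from redis-py as dicts; we only need the IDs."""
    return [entry["message_id"] for entry in pending_entries]

def reclaim_stuck(conn, stream, group, rescuer, batch=10):
    """Find messages stuck in the PEL and XCLAIM them for `rescuer`."""
    pending = conn.xpending_range(stream, group, min="-", max="+",
                                  count=batch, idle=STALE_MS)
    ids = stale_ids(pending)
    if ids:
        # min_idle_time guards against racing a worker that just woke up:
        # XCLAIM skips any message that was redelivered in the meantime
        conn.xclaim(stream, group, rescuer, min_idle_time=STALE_MS,
                    message_ids=ids)
    return ids
```

Running this periodically (e.g., from a cron-like loop in one worker) keeps the PEL from silently accumulating orphaned messages.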
Real-world Python Code Example
Below is a sample snippet I often use to implement a basic worker:
import redis
import time

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

stream_name = 'orders'
group_name = 'order_processors'
consumer_name = 'worker_v1'

# Create the group (and, via MKSTREAM, the stream) if they don't exist yet.
# id='0' lets the group consume the stream's existing history; use '$' for new messages only.
try:
    r.xgroup_create(stream_name, group_name, id='0', mkstream=True)
except redis.exceptions.ResponseError as e:
    # BUSYGROUP just means the group already exists -- anything else is a real error
    if 'BUSYGROUP' not in str(e):
        raise

while True:
    # Block up to 5 seconds for a message not yet delivered to any consumer
    messages = r.xreadgroup(group_name, consumer_name, {stream_name: '>'},
                            count=1, block=5000)
    for stream, entries in messages:
        for message_id, data in entries:
            print(f"Processing order: {data['item']} for customer {data['user_id']}")
            # Simulate processing logic
            time.sleep(0.5)
            # Acknowledge completion so Redis removes it from the PEL
            r.xack(stream_name, group_name, message_id)
            print(f"Order completed: {message_id}")
Battle-tested Tips: How to keep Redis from ‘eating’ your RAM
While powerful, if used incorrectly, Redis will quickly drain your server’s resources. Don’t forget these rules:
- Always limit Stream length: Redis stores data in RAM. Don’t XADD indefinitely. Use MAXLEN ~ 10000 to automatically trim old records once the stream passes roughly 10,000 messages.
- Monitor with XINFO: Regularly run XINFO GROUPS orders. If the number of Pending messages spikes, it’s a signal that you need to scale out workers immediately.
- Use Redis Sentinel: The MQ is the heart of your system. Don’t let it run as a single point of failure. Set up Sentinel for automatic failover if the master dies to avoid system-wide congestion.
- Separate Databases: Don’t use DB 0 for both Cache and MQ. Separating resources prevents a bloated Cache from crashing your Message Queue.
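The first two tips translate to a couple of one-liners in redis-py; the helper names and the 10,000 cap below are illustrative, and `conn` is assumed to be a connected `redis.Redis` client:

```python
def add_capped(conn, stream, fields, cap=10_000):
    """XADD with 'MAXLEN ~ cap': approximate trimming is much cheaper,
    since Redis only drops whole internal macro-nodes at a time."""
    return conn.xadd(stream, fields, maxlen=cap, approximate=True)

def pending_backlog(groups_info):
    """Condense XINFO GROUPS output into {group_name: pending_count}."""
    return {g["name"]: g["pending"] for g in groups_info}

# Example check: pending_backlog(conn.xinfo_groups("orders"))
# returns something like {"order_processors": 42} if workers are falling behind
```

Alerting when any value in `pending_backlog` crosses a threshold is a cheap substitute for a dedicated monitoring stack.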
In short, Redis Streams is a heavy-duty yet elegant weapon. For projects requiring high speed and resource efficiency, I always prioritize it over giants like Kafka or RabbitMQ.

