Up and Running in 15 Minutes: HAProxy Basics
Our company’s single nginx server started hitting 8–12 second response times during peak hours — CPU pinned at 95%, no more room to scale vertically. Instead of upgrading hardware, I spun up 2 backend servers and placed HAProxy in front of them. From the moment I started installing to having traffic evenly distributed, it took about 15–20 minutes.
This section walks you through building the same stack from scratch.
Topology Overview
                 Internet
                     ↓
          [HAProxy] 192.168.1.10:80
             ↓               ↓
      [Backend1]         [Backend2]
      192.168.1.11       192.168.1.12
      (nginx)            (nginx)
Installing HAProxy
# Ubuntu/Debian
sudo apt update && sudo apt install -y haproxy
# CentOS/RHEL (on releases before 8, use yum instead of dnf)
sudo dnf install -y haproxy
# Check version
haproxy -v
Minimal Config File to Get Started
Edit the file /etc/haproxy/haproxy.cfg:
global
    log /dev/log local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web_frontend
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server backend1 192.168.1.11:80 check
    server backend2 192.168.1.12:80 check
# Validate config before restarting
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Restart the service
sudo systemctl restart haproxy
sudo systemctl enable haproxy
Point your browser at the HAProxy server’s IP — requests will alternate between backend1 and backend2. That’s the basic setup done.
Deep Dive: Load Balancing Algorithms
You don’t need to know them all — the following 3 algorithms cover 90% of use cases. Which one I pick depends on the request characteristics of each service:
roundrobin — Default, suitable for most cases
Requests are distributed evenly in rotation: 1→2→1→2. Works well when all backend servers have identical hardware specs.
backend web_servers
    balance roundrobin
    server backend1 192.168.1.11:80 check weight 1
    server backend2 192.168.1.12:80 check weight 1
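If the two machines aren't identical, roundrobin still works: assign proportional weights instead of equal ones. A sketch, assuming backend1 has roughly three times the capacity of backend2 (the 3:1 ratio is illustrative):

```haproxy
backend web_servers
    balance roundrobin
    # backend1 gets ~3 of every 4 requests (illustrative ratio)
    server backend1 192.168.1.11:80 check weight 3
    server backend2 192.168.1.12:80 check weight 1
```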
leastconn — Use when requests have uneven processing times
New requests always go to the least busy server — the one with the fewest active connections at that moment. Ideal for API backends that mix heavy file upload endpoints with lightweight JSON endpoints in the same pool.
backend api_servers
    balance leastconn
    server api1 192.168.1.21:8080 check
    server api2 192.168.1.22:8080 check
source — IP-based sticky sessions
The same client IP is always routed to the same backend. Use this when your application stores sessions locally on each server and hasn’t migrated to Redis yet.
backend legacy_app
    balance source
    hash-type consistent
    server app1 192.168.1.31:80 check
    server app2 192.168.1.32:80 check
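One caveat with source: clients behind a shared NAT all present the same IP and therefore all land on the same backend. If the application can tolerate a cookie, HAProxy's cookie-based persistence spreads those clients out while still pinning each session. A sketch; the cookie name SERVERID is an arbitrary choice:

```haproxy
backend legacy_app
    balance roundrobin
    # HAProxy inserts a SERVERID cookie on the first response; later
    # requests stick to the same backend regardless of client IP
    cookie SERVERID insert indirect nocache
    server app1 192.168.1.31:80 check cookie app1
    server app2 192.168.1.32:80 check cookie app2
```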
Advanced Health Checks
By default, check only verifies the TCP connection. To actually validate the HTTP response (the http-check directives below require HAProxy 2.2+):
backend web_servers
    balance roundrobin
    option httpchk
    http-check send meth GET uri /health hdr Host example.com
    http-check expect status 200
    server backend1 192.168.1.11:80 check inter 5s rise 2 fall 3
    server backend2 192.168.1.12:80 check inter 5s rise 2 fall 3
Parameter meanings:
inter 5s — check every 5 seconds
rise 2 — requires 2 consecutive successful checks before marking the server as UP
fall 3 — 3 consecutive failed checks before the server is marked DOWN
I deploy a /health endpoint returning 200 OK on each backend — independent of the database or cache, it simply reflects the health of the process itself. This lets HAProxy distinguish a truly dead backend from one that’s just busy handling heavy workloads.
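The endpoint itself can be a few lines of nginx config on each backend. A minimal sketch, assuming it goes inside the existing server { } block (the exact config file location varies by distro):

```nginx
# Inside the server { } block on each backend
location /health {
    access_log off;          # don't flood logs with check traffic
    default_type text/plain;
    return 200 "OK\n";       # answered by nginx itself: no app, DB, or cache involved
}
```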
Advanced: Stats Page, SSL Termination, and ACLs
Enable the Stats Dashboard
HAProxy ships with a built-in web dashboard — open it and you can immediately see which backends are UP or DOWN, current connection counts, error rates, and average response times. No additional tools needed:
frontend stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats auth admin:strong_password
    stats hide-version
Visit http://192.168.1.10:8404/stats to see all backend statuses, connection counts, and response times.
SSL Termination — Handling HTTPS at HAProxy
To have HAProxy accept HTTPS and forward plain HTTP to backends (backends don’t need SSL configured):
# Combine cert and key into a single .pem file
sudo mkdir -p /etc/haproxy/certs
sudo bash -c 'cat /etc/ssl/certs/example.com.crt /etc/ssl/private/example.com.key \
    > /etc/haproxy/certs/example.com.pem'
sudo chmod 600 /etc/haproxy/certs/example.com.pem
frontend web_frontend
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    # Redirect HTTP → HTTPS
    http-request redirect scheme https unless { ssl_fc }
    # Forward header so backends know the client used HTTPS
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    default_backend web_servers
ACLs — Path-based or Domain-based Routing
ACLs allow HAProxy to classify requests and route them to different backend pools. A typical example: /api/ goes to API servers, static files go to CDN cache, and everything else goes to web servers:
frontend web_frontend
    bind *:80
    acl is_api path_beg /api/
    acl is_static path_end .jpg .png .css .js
    use_backend api_servers if is_api
    use_backend static_servers if is_static
    default_backend web_servers

backend api_servers
    balance leastconn
    server api1 192.168.1.21:8080 check

backend static_servers
    balance roundrobin
    server cdn1 192.168.1.31:80 check

backend web_servers
    balance roundrobin
    server web1 192.168.1.11:80 check
    server web2 192.168.1.12:80 check
Production Tips from Real-World Experience
Zero-Downtime Config Reload
This is my favorite thing about HAProxy — reload the configuration without dropping existing connections:
# Validate config first
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Graceful reload (no dropped connections)
sudo systemctl reload haproxy
# or
sudo kill -USR2 $(cat /var/run/haproxy.pid)
Drain a Server Before Maintenance
Before updating a backend, I always drain traffic rather than killing the server outright. Use weight 0 via the HAProxy runtime socket:
# Connect to the HAProxy runtime API
echo "set weight web_servers/backend1 0" | \
    sudo socat stdio /var/run/haproxy/admin.sock
# Check server state
echo "show servers state" | \
    sudo socat stdio /var/run/haproxy/admin.sock
# After the update is done, restore the configured weight
# (a % value is relative to the weight set in the config file)
echo "set weight web_servers/backend1 100%" | \
    sudo socat stdio /var/run/haproxy/admin.sock
Remember to add this line to the global section — without it, socat will report a connection error:
stats socket /var/run/haproxy/admin.sock mode 660 level admin
Detailed Logging for Debugging
When clients start seeing 502s and you have no idea which backend is responsible, a custom log format lets you trace each request down to the specific server:
frontend web_frontend
    bind *:80
    option httplog
    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %tsc %ac/%fc/%bc/%sc/%rc %{+Q}r"
    default_backend web_servers
# Watch logs in real time
sudo tail -f /var/log/haproxy.log | grep "502"
The %b/%s field in the log format shows the backend name and the specific server that handled the request. This is exactly how I discovered that backend1 was returning 502s because PHP-FPM had run out of workers — while backend2 was completely healthy.
Limit Connections per IP to Prevent Abuse
frontend web_frontend
    bind *:80
    # Limit to 100 concurrent connections per IP
    stick-table type ip size 100k expire 30s store conn_cur
    tcp-request connection track-sc1 src
    tcp-request connection reject if { sc_conn_cur(1) gt 100 }
    default_backend web_servers
I put this in place after a crawler script accidentally hammered the system with 500 simultaneous connections from a single IP — HAProxy now blocks requests before they ever reach nginx.
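Concurrent connections aren't the only abuse signal worth tracking. A single stick-table can store several counters at once, so request rate can be limited in the same frontend. A sketch extending the config above; the 10-second window and both thresholds are illustrative:

```haproxy
frontend web_frontend
    bind *:80
    # One table, two counters: concurrent connections and request
    # rate per source IP over a 10s window
    stick-table type ip size 100k expire 30s store conn_cur,http_req_rate(10s)
    tcp-request connection track-sc1 src
    tcp-request connection reject if { sc_conn_cur(1) gt 100 }
    # Return 429 to clients exceeding 200 requests per 10s
    http-request deny deny_status 429 if { sc_http_req_rate(1) gt 200 }
    default_backend web_servers
```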

