Up and Running in 5 Minutes: A Complete Working Stack
Instead of theory, let’s start with the most practical thing possible — an app + database stack you can run right now. Create a docker-compose.yml file:
version: '3.8' # optional in Compose v2; newer versions treat this field as obsolete
services:
  db:
    image: mysql:8.0
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: myapp
      MYSQL_USER: app_user
      MYSQL_PASSWORD: app_pass
    volumes:
      - db_data:/var/lib/mysql
  app:
    image: wordpress:latest
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: myapp
      WORDPRESS_DB_USER: app_user
      WORDPRESS_DB_PASSWORD: app_pass
    depends_on:
      - db
volumes:
  db_data:
Start the stack:
docker compose up -d
Check the status:
docker compose ps
docker compose logs -f app
Open http://localhost:8080 — WordPress is running with MySQL. The whole process takes under 30 seconds. Compare that to doing it manually: one docker run command for MySQL with 5–6 flags, another for WordPress with 8–10 flags, then manually creating a network so the two containers can see each other… Compose wraps all of that into a single file and a single command.
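For comparison, here is a rough sketch of that manual equivalent (the network name is my choice; running it requires a Docker daemon):

```shell
# Roughly what Compose does for you: create a network, then run two containers on it
docker network create myapp-net

docker run -d --name db --network myapp-net \
  --restart unless-stopped \
  -e MYSQL_ROOT_PASSWORD=rootpass \
  -e MYSQL_DATABASE=myapp \
  -e MYSQL_USER=app_user \
  -e MYSQL_PASSWORD=app_pass \
  -v db_data:/var/lib/mysql \
  mysql:8.0

docker run -d --name app --network myapp-net \
  --restart unless-stopped \
  -p 8080:80 \
  -e WORDPRESS_DB_HOST=db \
  -e WORDPRESS_DB_NAME=myapp \
  -e WORDPRESS_DB_USER=app_user \
  -e WORDPRESS_DB_PASSWORD=app_pass \
  wordpress:latest
```

And that is before you think about teardown, restarts, or keeping those flags in sync across a team.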
Deep Dive: Understanding the Details So You Don’t Get Burned
depends_on Does Not Mean “Wait Until the Service Is Ready”
This is a trap I fell into the first time I set up a production environment. depends_on only guarantees that containers start in order — it does not guarantee that the service inside is ready to accept connections. MySQL typically needs 15–30 seconds to initialize, but the app container starts immediately and may try to connect to a database that isn’t accepting connections yet.
The proper fix is to use healthchecks:
services:
  db:
    image: mysql:8.0
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
  app:
    depends_on:
      db:
        condition: service_healthy
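With the healthcheck in place, you can watch the state flip from "starting" to "healthy" (requires the stack to be running):

```shell
# 'docker compose ps' shows health in the STATUS column, e.g. "Up 40 seconds (healthy)"
docker compose ps db

# Or inspect the raw health state of the db container directly
docker inspect --format '{{.State.Health.Status}}' "$(docker compose ps -q db)"
```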
Networks: How Containers “See” Each Other
Docker Compose automatically creates a dedicated bridge network for each project. Containers communicate using service names — not IP addresses. That’s why you use WORDPRESS_DB_HOST: db instead of a hardcoded IP: IPs can change on every restart, but service names don’t.
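You can see this name resolution in action from inside a container (assuming the stack above is running; the wordpress image is Debian-based, so getent is available):

```shell
# Compose's embedded DNS resolves the service name to the container's current IP
docker compose exec app getent hosts db
```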
As your stack grows, you’ll want to segment traffic. For example, nginx needs to talk to both the frontend and backend, but the database should never be reachable from the internet:
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true # No internet access
services:
  nginx:
    networks:
      - frontend
      - backend
  api:
    networks:
      - backend
  db:
    networks:
      - backend
internal: true is a great trick for isolating your database — the db container can’t make outbound connections to the internet, which significantly reduces your attack surface.
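A quick way to confirm the isolation behaves as intended, using the service names from the snippet above (substitute whatever outbound command the image actually ships with):

```shell
# From db, outbound traffic should fail: its only network is internal
docker compose exec db curl -m 5 https://example.com    # expect a timeout/error

# From nginx, which is also on the non-internal frontend network, it succeeds
docker compose exec nginx curl -sI -m 5 https://example.com
```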
Volumes: Named vs Bind Mounts
services:
  app:
    volumes:
      # Named volume — managed by Docker, persists even when the container is removed
      - app_data:/var/www/html/uploads
      # Bind mount — mounts a directory from the host (great for development)
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
      # tmpfs — in-memory only, lost when the container stops (use for cache)
      - type: tmpfs
        target: /tmp/cache
volumes:
  app_data:
A simple rule of thumb: use bind mounts for config files during development — edit nginx.conf on the host and it takes effect immediately without rebuilding the image. Use named volumes for data that needs to persist, like databases and uploads — Docker manages them and they survive container deletion and recreation.
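Named volumes are easy to find later; Compose prefixes them with the project name (so the exact name below depends on your project, e.g. myapp_app_data):

```shell
# List all volumes Docker manages
docker volume ls

# See where the data actually lives on the host (the "Mountpoint" field)
docker volume inspect myapp_app_data
```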
Advanced: Patterns I Use Daily in Production
Separating Environments with .env Files
Never hardcode credentials in docker-compose.yml. Use a .env file and add it to .gitignore right away:
# .env
MYSQL_ROOT_PASSWORD=super_secret_pass
MYSQL_DATABASE=production_db
APP_PORT=8080
APP_ENV=production
# docker-compose.yml
services:
  db:
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
  app:
    ports:
      - "${APP_PORT}:80"
    environment:
      APP_ENV: ${APP_ENV}
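Before deploying, it’s worth checking that the variables were interpolated the way you expect (the .env.production filename here is just an example):

```shell
# Render the final config with all ${...} variables substituted from .env
docker compose config

# Point Compose at a different env file per environment
docker compose --env-file .env.production config
```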
Override Files for Dev and Prod
This pattern is incredibly handy when your team develops and deploys production from the same codebase. Keep shared config in docker-compose.yml and create separate override files for each environment:
# docker-compose.override.yml (automatically loaded when running 'docker compose up')
services:
  app:
    volumes:
      - .:/var/www/html # Mount source code during development
    environment:
      APP_DEBUG: "true"
# docker-compose.prod.yml
services:
  app:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
    restart: always
# Deploy to production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
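Before actually deploying, you can preview exactly what the merged configuration will look like:

```shell
# Print the effective config after merging the base file with the prod overrides
docker compose -f docker-compose.yml -f docker-compose.prod.yml config
```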
Resource Limits — Protecting the Entire Cluster
Running a cluster with 30+ containers, I’ve run into this situation more than once: a service silently leaks memory, consumes all available RAM, and brings down the entire node — and nothing is in place to stop it before the damage is done. After applying resource limits, that problem never came back; overall resource usage also dropped around 40%, because each service was constrained to its fair share.
services:
  api:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 256M
        reservations:
          cpus: '0.25'
          memory: 128M
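You can verify the limits actually took effect once the service is up:

```shell
# One-shot snapshot; the MEM USAGE / LIMIT column should show the 256M cap
docker stats --no-stream

# On cgroup v2 hosts, the limit is also visible from inside the container
docker compose exec api cat /sys/fs/cgroup/memory.max
```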
Profiles: Only Run Services When You Need Them
services:
  app: # No profile = always runs
  db: # No profile = always runs
  adminer:
    image: adminer
    profiles: ["tools"] # Only runs when called with --profile tools
    ports:
      - "8081:8080"
# Run app + db normally
docker compose up -d
# Bring up adminer when you need to debug the database
docker compose --profile tools up -d
Practical Tips: Useful Commands Most People Forget
Commands That Are Handy in Day-to-Day Work
# Tail logs from multiple services at once, filtered by keyword
docker compose logs -f --tail=100 app db | grep ERROR
# Exec into a running container
docker compose exec app bash
# Scale a service without taking down others
docker compose up -d --scale worker=3
# Rebuild and redeploy a single service
docker compose up -d --no-deps --build app
# View real-time resource usage
docker compose stats
# Pull the latest images and restart
docker compose pull && docker compose up -d
Name Your Project — Avoid Conflicts Three Months Down the Road
Docker Compose names containers using the pattern {project}-{service}-{replica} (Compose v1 used underscores instead of hyphens). The project name defaults to the directory name. The problem: two different projects whose directories are both named app get the same project name, so they share a network and collide on container names — a bug that’s extremely painful to debug when it hits you at 2 AM. Declaring a name at the top of your file solves it entirely:
name: myapp-production
services:
  ...
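If you’d rather not touch the file, the project name can also be set per invocation or via the environment:

```shell
# Per command
docker compose -p myapp-production up -d

# Via environment variable (e.g. in your shell profile or CI)
export COMPOSE_PROJECT_NAME=myapp-production

# List running Compose projects and the names they resolved to
docker compose ls
```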
Regular Cleanup to Keep Disk Space Under Control
# Remove containers and networks for the project (volumes are kept)
docker compose down
# Remove volumes too — be careful, this deletes data!
docker compose down -v
# Full Docker cleanup: stopped containers, unused images, volumes
docker system prune -a --volumes
I have a cron job running docker system prune -f weekly on the server — it saves tens of gigabytes of disk space every month without any manual intervention.
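As a sketch, such a crontab entry might look like this (the schedule and binary path are my choices; adjust to your server):

```shell
# m h dom mon dow  — prune stopped containers and dangling images every Sunday at 03:00
0 3 * * 0 /usr/bin/docker system prune -f
```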

