The Real Problem Before Docker
I still remember my early days on the team — finishing a build on my local machine, pushing it to the server, and watching it crash immediately. The reason? The server was running Python 3.8 while my machine had Python 3.11. Different library versions, missing environment variables, mismatched configs… Every deploy was a game of chance.
Docker was built to solve exactly this: packaging your entire application — code, dependencies, configuration — into a single unit that runs anywhere. No more “it works on my machine.”
Docker is not a Virtual Machine (VM). A VM creates a full virtual computer with its own OS, weighing several gigabytes. Containers share the host OS kernel and only isolate at the process level — they start in seconds and consume RAM in megabytes rather than gigabytes.
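You can see the shared kernel for yourself: a container reports the same kernel version as its host, regardless of which distribution its image is based on. A quick sketch (once Docker is installed, which the next section covers):

```shell
# Kernel version on the host
uname -r

# Kernel version inside an Alpine-based container:
# same output, because containers share the host kernel
docker run --rm alpine uname -r
```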
Installing Docker on Ubuntu/Linux
Most VPS instances today run Ubuntu, so this guide covers Ubuntu 22.04/24.04. The process is nearly identical on Debian.
Step 1: Remove older versions if present
sudo apt remove docker docker-engine docker.io containerd runc
Step 2: Add the official Docker repository
sudo apt update
sudo apt install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Step 3: Install Docker Engine
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Step 4: Allow running Docker without sudo
sudo usermod -aG docker $USER
newgrp docker
Note that newgrp applies the group only to your current shell; log out and log back in for the change to take effect in all new sessions. Quick check:
docker --version
# Docker version 26.x.x, build xxxxxxx
docker run hello-world
If you see the “Hello from Docker!” message, the installation was successful.
Core Concepts You Need to Understand
Before typing your first command, spend five minutes getting these four concepts straight — it will save you far more time than you would lose debugging later. I’ve seen many people confuse Images with Containers, accidentally delete the wrong thing, restart the wrong component, and spend entire sessions debugging without knowing where the problem lies.
Image: The Immutable Blueprint
An Image is a read-only file containing everything needed to run an application: a base OS, libraries, code, and environment variables. Think of it like a Windows ISO — you don’t edit the ISO itself, you just use it to create a running instance.
# Pull nginx image from Docker Hub
docker pull nginx:1.25
# List available images
docker images
# Remove image
docker rmi nginx:1.25
Container: A Running Instance of an Image
A Container is the concrete, running instance of an Image. Just as one ISO can install 10 different virtual machines, a single Image can spawn as many Containers as you need, each fully isolated from the others and from the host OS.
# Run nginx container, map port 8080 on host → 80 in container
docker run -d -p 8080:80 --name my-nginx nginx:1.25
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop and remove container
docker stop my-nginx
docker rm my-nginx
Volume: Persistent Data Storage
Containers are ephemeral by nature — when you delete a container, all data inside is gone too. Volumes are the solution: mount a directory from the host into the container, and the data persists independently of the container’s lifecycle.
# Mount /data/mysql from host into /var/lib/mysql inside the container
docker run -d \
-v /data/mysql:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
--name mysql-db \
mysql:8.0
Your data now lives at /data/mysql on the host — delete and recreate the container as many times as you like, and the data remains intact.
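A bind mount like the one above ties data to a specific host path. Docker also supports named volumes, which Docker manages itself (under /var/lib/docker/volumes by default) and which are easier to reference later in Compose files. A sketch of the same MySQL setup with a named volume:

```shell
# Create a named volume and use it instead of a host path
docker volume create mysql-data
docker run -d \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  --name mysql-db \
  mysql:8.0

# List and inspect volumes
docker volume ls
docker volume inspect mysql-data
```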
Network: Container Communication
By default, each container gets its own IP on the bridge network, but this IP changes on every restart. Connecting via hostname is far more stable. Create a custom network so containers in the same stack can find each other by name:
# Create network
docker network create my-app-net
# Run containers in that network
docker run -d --network my-app-net --name db mysql:8.0
docker run -d --network my-app-net --name web my-webapp
# Container "web" connects to "db" via hostname "db" — no need to know the IP
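To verify that the network’s built-in DNS is working, you can resolve the db hostname from a throwaway container on the same network (this assumes the containers above are running):

```shell
# Resolve and ping "db" from a temporary container on my-app-net
docker run --rm --network my-app-net alpine ping -c 1 db

# Or check name resolution from the web container itself,
# if its base image provides getent
docker exec web getent hosts db
```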
Writing Your First Dockerfile
A Dockerfile is a text file that tells Docker how to build an Image from scratch — instructions that add files or run commands (such as RUN, COPY, ADD) each create a new image layer. Here’s a real-world example for a Python Flask app:
FROM python:3.12-slim
WORKDIR /app
# Copy requirements first — leverage Docker layer cache
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
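The Dockerfile above assumes an app.py and a requirements.txt in the build context. A minimal sketch of both (the route and message are just placeholders):

```python
# app.py — minimal Flask app matching the Dockerfile above.
# requirements.txt would contain a single line: flask
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from inside a container!"

if __name__ == "__main__":
    # Bind to 0.0.0.0, not 127.0.0.1, otherwise the port
    # mapped with -p 5000:5000 is unreachable from the host
    app.run(host="0.0.0.0", port=5000)
```

The host="0.0.0.0" detail matters: Flask’s default of 127.0.0.1 only listens inside the container, so the published port would connect to nothing.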
Build and test it:
# Build image from Dockerfile in current directory
docker build -t my-flask-app:v1 .
# Run for testing
docker run -d -p 5000:5000 my-flask-app:v1
Pro tip: always COPY requirements.txt and RUN pip install before COPYing your code. Docker caches layers in order — if your code changes but requirements don’t, the library installation step will use the cache, making builds significantly faster.
Monitoring and Inspecting Containers
After deployment is when the real work begins. Containers can crash silently, leak RAM, or throw errors you won’t notice — so make sure you know these basic commands from day one.
Viewing container logs
# View container logs
docker logs my-nginx
# Follow logs in realtime (like tail -f)
docker logs -f my-nginx
# View only the last 50 lines
docker logs --tail 50 my-nginx
Monitoring resource usage
# View CPU/RAM usage in realtime
docker stats
# Snapshot without follow
docker stats --no-stream
On a production cluster running 30+ containers, I used docker stats to identify which containers were consuming abnormal resources. After analyzing and optimizing the Dockerfiles — applying multi-stage builds and reducing unnecessary layers — I cut resource usage by 40% without adding more servers. Monitoring tools like Prometheus or Grafana are great, but docker stats is available right after installing Docker, no extra setup needed.
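For quick checks like the one described above, docker stats also accepts a Go-template --format flag, which is handy for scripting or for trimming the output to just the columns you care about:

```shell
# One-shot snapshot of name, CPU and memory per container
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```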
Entering a container to debug
# Open a bash shell in the running container
docker exec -it my-nginx bash
# Run a single command in the container
docker exec my-nginx nginx -t
# View detailed container information
docker inspect my-nginx
Regularly cleaning up resources
Over time, old images, stopped containers, and unused volumes pile up and eat disk space. A quick cleanup command:
# Remove unused resources (stopped containers, dangling images, unused networks)
docker system prune
# Add -a to also remove images not used by any container
docker system prune -a
# Check Docker disk usage
docker system df
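On a busy server it is worth automating this cleanup. One possible approach (the path and schedule here are assumptions — adjust the retention window to taste) is a daily cron job that prunes anything unused for more than 24 hours:

```shell
# /etc/cron.d/docker-prune (hypothetical path): run daily at 03:00.
# The "until=24h" filter keeps recently used resources safe.
0 3 * * * root docker system prune -af --filter "until=24h" > /dev/null 2>&1
```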
Next Steps After Mastering the Basics
Got those commands down? The next thing you’ll use daily is Docker Compose. Instead of typing 5–6 docker run commands with all their flags, you define your entire stack in a YAML file and just run docker compose up -d:
services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: secret
volumes:
  pgdata:
# Start the whole stack in the background
docker compose up -d
# Follow logs from all services
docker compose logs -f
# Stop and remove the stack
docker compose down
Once Docker Compose becomes second nature, the path to Docker Swarm, Kubernetes, or automating your CI/CD pipeline is shorter than you think — because everything builds on the same 4 concepts you just learned: Image, Container, Volume, Network.
