Deploying Node.js with Docker: Tips & Tricks from Real-World Experience

Docker tutorial - IT technology blog

Problems I Faced When Deploying Node.js for the First Time

The first time I used Docker Compose on a real project, I made quite a few basic mistakes that seem funny in hindsight. The built image weighed over 1GB, the container would run for a few hours then stop without explanation, and environment variables were hard-coded directly into the Dockerfile. Everything worked on my local machine but fell apart on the server.

If you’re just starting to deploy Node.js with Docker and running into exactly these problems — this article is for you.

Why Does Node.js Often Run into Problems When Containerized?

Docker and Node.js don’t always play well together. There are a few traps that almost every beginner falls into:

  • node_modules is too large: This directory can reach several hundred MB — copying it entirely into the image is wasteful.
  • Wrong base image: node:latest defaults to full Debian — unnecessarily heavy.
  • Process is not PID 1: Node runs inside the container but doesn’t receive the SIGTERM signal properly, causing graceful shutdown to fail.
  • Exposed environment variables: Hard-coding DB_PASSWORD into the Dockerfile and pushing it to GitHub is a real thing — I’ve seen it happen firsthand.
  • Running as root: Unnecessary, and if exploited, the damage is far greater compared to running as a non-root user.
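The PID 1 trap is easy to see for yourself. The snippet below is a minimal sketch (not from the original article): run it on your host and then inside a container started without --init, and compare the PIDs:

```javascript
// pid-check.js: inside a container started without --init, Node itself is
// usually PID 1, so it must handle SIGTERM and reap zombie processes on
// its own -- the kernel gives PID 1 no default signal handlers.
console.log(`running as PID ${process.pid}`);

if (process.pid === 1) {
  console.log('PID 1: handle SIGTERM yourself, or start with tini/--init');
}
```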

Ways to Deploy Node.js with Docker

Option 1: Simple Dockerfile (not recommended for production)

The way most people start:

FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "index.js"]

Build and run:

docker build -t myapp .
docker run -p 3000:3000 myapp

It works — but the image weighs in at ~1.1GB. The node_modules from the host machine gets copied in as-is, which can cause native module errors if the OS differs. Not to mention there’s no proper signal handling for shutdown.

Option 2: Multi-stage build (better, but still lacking)

Multi-stage builds separate the build and runtime steps, significantly reducing image size:

# Stage 1: Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Stage 2: Runtime
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
CMD ["node", "index.js"]

The image shrinks to ~200MB thanks to alpine. But it still runs as root, and signal handling is still missing.

Production-Ready Dockerfile: The Template I Actually Use

After many rounds of debugging and gradual improvements, this is the Dockerfile I’m currently running stably in production:

Step 1: Create a .dockerignore file

Create this file first — a step that’s often forgotten but important for keeping junk out of the image:

node_modules
npm-debug.log
.git
.env
*.md
dist
.DS_Store

Step 2: Production-standard Dockerfile

FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

FROM node:18-alpine
WORKDIR /app

# Create a dedicated user, don't run as root
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Change file ownership
RUN chown -R appuser:appgroup /app
USER appuser

EXPOSE 3000

# Handle PID 1 properly: use tini, `docker run --init`, or `init: true` in Compose
CMD ["node", "--max-old-space-size=512", "index.js"]

Step 3: Docker Compose for Development and Production

Compose makes managing environment variables and volumes much cleaner compared to typing out docker run with a wall of flags:

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    env_file:
      - .env          # Environment variables loaded from .env file, not hard-coded
    environment:
      - NODE_ENV=production
    restart: unless-stopped
    init: true   # run Docker's built-in tini as PID 1 for signal forwarding
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          memory: 512M

Step 4: Add a Health Check Endpoint to the App

This route tells Docker the app is alive and responding. I like to include uptime as well — handy for debugging right after a container restarts:

app.get('/health', (req, res) => {
  res.status(200).json({ status: 'ok', uptime: process.uptime() });
});
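If the app depends on a database or cache, it can be worth failing the health check when that dependency is down, since wget exits non-zero on a 5xx response and Docker then marks the container unhealthy. A sketch under my own assumptions: checkDb is a hypothetical placeholder for your real connectivity test (a SELECT 1, a Redis PING, and so on):

```javascript
// Sketch: return 503 when a dependency check fails. The Compose
// healthcheck's wget call fails on 5xx, flagging the container unhealthy.
async function healthHandler(checkDb, res) {
  try {
    await checkDb(); // hypothetical: throws/rejects when the dependency is down
    res.status(200).json({ status: 'ok', uptime: process.uptime() });
  } catch (err) {
    res.status(503).json({ status: 'degraded', error: err.message });
  }
}

// Wired up as: app.get('/health', (req, res) => healthHandler(pingDb, res));
```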

Step 5: Handle Graceful Shutdown

This part gets skipped the most — and it’s also why requests get silently dropped on every new deployment. When Docker sends SIGTERM, the app needs to handle it properly instead of dying abruptly mid-request:

const server = app.listen(3000);

process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully...');
  server.close(() => {
    console.log('Server closed.');
    process.exit(0);
  });
});
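One refinement worth adding on top of this: if an in-flight request hangs, server.close() never fires its callback, and docker stop eventually falls back to SIGKILL. A hard deadline on our side keeps the exit clean. This is a sketch with an illustrative SHUTDOWN_TIMEOUT_MS constant; the exit function is injectable so the logic can be tested without killing the process:

```javascript
// Sketch: graceful shutdown with a hard deadline so a stuck request
// can't block the container from exiting.
const SHUTDOWN_TIMEOUT_MS = 10000;

function shutdown(server, exit = process.exit) {
  const timer = setTimeout(() => exit(1), SHUTDOWN_TIMEOUT_MS);
  timer.unref(); // don't let this timer alone keep the event loop alive
  server.close(() => {
    clearTimeout(timer);
    exit(0); // all in-flight requests finished cleanly
  });
}

// Wired up as: process.on('SIGTERM', () => shutdown(server));
```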

Build and Run

# Build image
docker compose build

# Run in detached mode
docker compose up -d

# View logs
docker compose logs -f app

# Check health check status
docker inspect --format='{{json .State.Health}}' container_name

Real-World Tips to Save Yourself a Headache

  • Always pin versions: Use node:18.20-alpine instead of node:18-alpine so a new minor release can't change your runtime out from under you; pin the exact patch version (or the image digest) if you need fully reproducible builds.
  • Use npm ci instead of npm install: ci installs exactly according to package-lock.json without upgrading dependencies — critical for reproducible builds.
  • Never commit the .env file: Add it to .gitignore and use .env.example as a template.
  • Limit memory: Node.js can eat all available RAM if there’s a memory leak. Set --max-old-space-size and deploy.resources.limits.memory in Compose.
  • Layer caching: Copy package*.json first, run npm ci, then copy your code. Docker will cache the node_modules layer — rebuilds only take seconds if dependencies haven’t changed.

Inspecting the Image After Building

# Check image size
docker images myapp

# Check which user the process is running as
docker exec container_name whoami

# Scan for security vulnerabilities (if Docker Scout is available)
docker scout cves myapp

Follow the steps above and your image should come in at around 150–250MB, roughly an 80% reduction from the default approach. It runs as a non-root user, has a health check, and shuts down cleanly on restart. Each one sounds like a small thing, but together they’re the difference between a container that runs stably for months and an app that keeps crashing at 3 in the morning.
