Why Docker-in-Docker Is No Longer the Top Choice
In the early days of implementing CI/CD on Kubernetes, Docker-in-Docker (DinD) was the default choice for many. However, for DinD to function, you must run the build container with privileged: true, and that is a serious security hole: if an attacker compromises the build container, they can escape it and take control of the entire underlying node of the cluster.
Security teams at large corporations will usually reject requests for root privileges or access to /var/run/docker.sock immediately. Kaniko was created to solve this problem once and for all. It is an open-source tool from Google that allows building images from a Dockerfile directly in userspace without needing a Docker daemon.
Practical experience from managing a system with over 50 microservices shows that switching to Kaniko not only makes the system more secure but also cuts RAM/CPU usage by 35-40%, because you no longer keep a resource-hungry Docker daemon running in the background for every build job.
Deploying Kaniko: No Complicated Installation Needed
You don’t need to run apt-get install to get Kaniko. The tool comes pre-packaged as an executable Docker image (executor). You simply call this image in your pipeline to start building immediately.
I usually prefer using the gcr.io/kaniko-project/executor:debug image. The debug version comes with a built-in shell (like /busybox/sh). It’s extremely useful when you need to inject helper scripts or inspect the filesystem while the pipeline is running.
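If you want to poke around that shell yourself, one way (assuming a local Docker daemon is available) is to override the entrypoint when running the debug image:

```shell
# Open an interactive BusyBox shell inside the Kaniko debug image;
# --entrypoint overrides the default executor binary.
docker run --rm -it --entrypoint /busybox/sh gcr.io/kaniko-project/executor:debug
```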
Quickly Test Kaniko on Your Local Machine
Before deploying to a server, test Kaniko’s mechanism locally using the following command:
```shell
docker run \
  -v "$(pwd)":/workspace \
  -v ~/.docker/config.json:/kaniko/.docker/config.json \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=Dockerfile \
  --context=dir:///workspace \
  --destination=your-registry.com/your-image:v1
```
Integrating into GitLab CI
The configuration for GitLab CI is very clean. You don’t need to set up any complex Docker services:
```yaml
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
```
Optimizing Build Performance to Save Time
Without a cache configuration, every pipeline run will be a struggle as it has to download every layer from scratch. For Node.js projects with heavy node_modules folders, optimization is mandatory.
1. Authenticating with a Container Registry
Kaniko needs permission to push images to Docker Hub, GCR, or ECR. It looks for a config.json file in the /kaniko/.docker/ directory. You can generate this file in your pipeline script from CI credentials, so they never have to be committed to the repository.
```json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "base64_encoded_token"
    }
  }
}
```
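Rather than committing that file, you can generate it in a before_script step. The sketch below wraps the logic in a helper function (write_kaniko_config is a name I made up for illustration); CI_REGISTRY, CI_REGISTRY_USER, and CI_REGISTRY_PASSWORD are GitLab's predefined variables:

```shell
# Sketch: build a Docker-style config.json from registry credentials.
# The "auth" value is base64("user:password"), which is what the
# Docker config format expects.
write_kaniko_config() {
  registry="$1"; user="$2"; password="$3"; dir="${4:-/kaniko/.docker}"
  mkdir -p "$dir"
  auth="$(printf '%s:%s' "$user" "$password" | base64 | tr -d '\n')"
  cat > "$dir/config.json" <<EOF
{
  "auths": {
    "${registry}": {
      "auth": "${auth}"
    }
  }
}
EOF
}

# In a GitLab job you would call it like this:
# write_kaniko_config "${CI_REGISTRY}" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}"
```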
2. Enabling Layer Caching
This technique helped me reduce build times from 7 minutes to less than 2 minutes. Kaniko saves successfully built layers to a separate registry as a cache.
```shell
/kaniko/executor \
  --context "${CI_PROJECT_DIR}" \
  --cache=true \
  --cache-repo=your-registry.com/kaniko-cache \
  --destination=your-registry.com/your-app:latest
```
When --cache=true is enabled, Kaniko checks whether the layer produced by each Dockerfile command already exists in the cache-repo. On a hit, it pulls the cached layer instead of re-running the command.
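Cache hits also depend on instruction order in the Dockerfile itself. A sketch for the Node.js case mentioned earlier (file names assume a standard npm project): copying the package manifests before the source means the heavy dependency layer is reused for as long as the manifests are unchanged.

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Manifests change rarely, so this layer (and the npm ci layer below)
# can be served from the Kaniko cache-repo on most builds
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
# Application source changes on almost every commit, so it comes last
COPY . .
CMD ["node", "server.js"]
```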
Troubleshooting and Real-World Tips
Despite its power, Kaniko can sometimes confuse newcomers with a few specific errors.
- Out of Memory (OOM) Error: Kaniko extracts the entire filesystem into memory to take snapshots. If you are building an image containing large data (over 2GB), increase the RAM limit for the build Pod to at least 4GB.
- Permission Denied Error: Usually caused by the config.json file being in the wrong place. Double-check that the absolute path is /kaniko/.docker/config.json.
- Deep Debugging: Add the -v=debug flag to see detailed logs of how Kaniko scans the filesystem. You will discover exactly which step is making your image unusually heavy.
A small tip for Kubernetes users: pass the --digest-file=/dev/termination-log parameter so the pipeline can accurately capture the pushed image's SHA digest. Deploying by digest avoids the accidental-deployment errors that come with relying on the mutable latest tag, a common and risky habit.
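In Kubernetes, whatever a container writes to /dev/termination-log surfaces in the Pod's status as the terminated container's message, so you can read the digest back with kubectl. A sketch (the pod and deployment names here are made up):

```shell
# Read the digest Kaniko wrote to /dev/termination-log; Kubernetes
# exposes it in the terminated container's "message" field.
DIGEST="$(kubectl get pod kaniko-build \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}')"

# Pin the deployment to the immutable digest instead of a mutable tag.
kubectl set image deployment/my-app app="your-registry.com/your-app@${DIGEST}"
```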
Getting used to Kaniko might take a bit more initial configuration time. However, the peace of mind regarding security and resource efficiency it provides is well worth it for any production system.

