Slow pipelines, deployment anxiety — you’re not alone
I once worked on a 5-person web app project where the GitLab CI/CD pipeline took 18–22 minutes on every push. Nobody wanted to merge or deploy; everyone dreaded the long wait, only to hit a failure at the last step. Feature branches lived for weeks, merge conflicts piled up, and every sprint release was a nightmare.
The problem wasn’t GitLab. It was how the pipeline was configured. After restructuring .gitlab-ci.yml and applying the right strategies, the pipeline dropped to 6–7 minutes. The team started merging daily instead of stacking everything up at the end of the sprint. This article covers what we actually did — and if you’re new to automation pipelines, this beginner’s guide to CI/CD with GitHub Actions gives a solid foundation on the concepts before diving deeper.
Understanding how GitLab CI/CD actually works
Optimizing the wrong thing is worse than not optimizing at all. Get a solid grasp of GitLab CI/CD’s execution model before touching any config.
Stage vs Job vs Pipeline
GitLab runs pipelines through stages. Jobs within the same stage run in parallel — stages run sequentially. Keep this in mind, because it determines how you split jobs and control execution order.
```yaml
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .

test-unit:
  stage: test
  script:
    - pytest tests/unit/

test-integration:
  stage: test  # runs in parallel with test-unit
  script:
    - pytest tests/integration/
```
DAG — Directed Acyclic Graph
GitLab 12.2 introduced needs: to create DAG pipelines. Instead of waiting for an entire stage to finish, a job runs as soon as its dependencies complete — bypassing stage barriers entirely.
```yaml
stages:
  - build
  - test
  - deploy

build-backend:
  stage: build
  script: make build-backend

build-frontend:
  stage: build
  script: make build-frontend

test-backend:
  stage: test
  needs: [build-backend]  # no need to wait for build-frontend to finish
  script: make test-backend

test-frontend:
  stage: test
  needs: [build-frontend]  # runs as soon as build-frontend is done
  script: make test-frontend

deploy-staging:
  stage: deploy
  needs: [test-backend, test-frontend]
  script: make deploy-staging
```
With this structure, test-backend and test-frontend no longer have to wait for each other — saving several minutes right away.
Pipeline optimization from A to Z
1. Smart caching — stop reinstalling dependencies every time
The number one reason pipelines are slow: no dependency caching. Every time a job runs, the runner re-downloads all node_modules or Python packages from scratch — easily burning 3–5 minutes on something that was already installed yesterday.
```yaml
variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  key:
    files:
      - requirements.txt  # cache key changes when this file changes
  paths:
    - .cache/pip
    - venv/

test:
  stage: test
  before_script:
    - python -m venv venv
    - source venv/bin/activate
    - pip install -r requirements.txt
  script:
    - pytest
```
For Node.js, use package-lock.json as the cache key:
```yaml
cache:
  key:
    files:
      - package-lock.json
  paths:
    - node_modules/
```
2. Parallel matrix — run tests across multiple environments simultaneously
Instead of running tests sequentially across each Python version, use parallel:matrix:
```yaml
test:
  stage: test
  image: python:$PYTHON_VERSION
  parallel:
    matrix:
      - PYTHON_VERSION: ["3.10", "3.11", "3.12"]
  script:
    - pip install -r requirements.txt
    - pytest --tb=short
```
Three jobs run in parallel instead of one after another, so total wall-clock time drops to roughly the duration of the slowest single job.
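One caveat when combining the matrix with the caching from earlier: each Python version needs its own cache, or the jobs will overwrite each other's installed packages. A sketch using GitLab's `cache:key:prefix` (the prefix value is illustrative):

```yaml
test:
  stage: test
  image: python:$PYTHON_VERSION
  parallel:
    matrix:
      - PYTHON_VERSION: ["3.10", "3.11", "3.12"]
  cache:
    key:
      files:
        - requirements.txt
      prefix: "py-$PYTHON_VERSION"  # separate cache per Python version
    paths:
      - .cache/pip
  script:
    - pip install -r requirements.txt
    - pytest --tb=short
```

Without the prefix, all three matrix jobs would share one cache keyed only on requirements.txt.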
3. Rules instead of only/except — precise control over when jobs run
only/except is deprecated. Use rules: for cleaner, more explicit logic:
```yaml
deploy-production:
  stage: deploy
  script:
    - make deploy-prod
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH  # only runs on main
      when: manual  # requires a manual trigger
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: never  # skip in MR pipelines
```
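`rules:` also supports `changes:`, which is how jobs can be stripped from feature-branch pipelines when the files they cover weren't touched. A sketch, assuming frontend code lives under `frontend/` (the paths and job are illustrative):

```yaml
test-frontend:
  stage: test
  script:
    - npm test
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - frontend/**/*
        - package-lock.json
```

A backend-only MR then skips the frontend tests entirely instead of waiting on them.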
4. Artifacts — passing files between stages
Build once, use everywhere. Don’t rebuild images or binaries at every stage:
```yaml
build:
  stage: build
  script:
    - make build
    - echo $CI_COMMIT_SHA > build_info.txt
  artifacts:
    paths:
      - dist/
      - build_info.txt
    expire_in: 1 hour  # auto-deleted after 1 hour to save storage

deploy:
  stage: deploy
  needs:
    - job: build
      artifacts: true  # download artifacts from the build job
  script:
    - cat build_info.txt
    - rsync -av dist/ user@server:/var/www/app/
```
Real-world continuous deployment strategies
Blue-Green Deployment
The idea: maintain two production environments (Blue is live, Green gets the new deployment). Once Green is stable, switch traffic to Green — zero downtime. If you want to automate triggering deployments via webhooks rather than manual SSH, building an auto-deploy webhook pairs well with this strategy.
```yaml
stages:
  - deploy
  - switch  # "switch" is not a default stage, so it must be declared

deploy-green:
  stage: deploy
  script:
    - docker pull myapp:$CI_COMMIT_SHA  # assumes the image was pushed to a registry during build
    - docker rm -f myapp-green || true  # remove any leftover green container, running or stopped
    - docker run -d --name myapp-green -p 8081:8080 myapp:$CI_COMMIT_SHA
    - sleep 10
    - curl -f http://localhost:8081/health || (docker rm -f myapp-green && exit 1)
  environment:
    name: production-green

switch-traffic:
  stage: switch
  needs: [deploy-green]
  when: manual
  script:
    - nginx -s reload  # or update the load balancer
    - docker rm -f myapp-blue || true  # the old blue container is no longer needed
    - docker rename myapp-green myapp-blue
  environment:
    name: production
```
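The `nginx -s reload` line glosses over the actual switch. One common approach is an upstream block whose target the pipeline rewrites before reloading. A sketch, assuming nginx proxies to the active container's published host port (file path and ports are illustrative):

```nginx
# /etc/nginx/conf.d/myapp.conf
upstream myapp {
    server 127.0.0.1:8081;  # green's port; pointed at 8080 (blue) before the switch
}

server {
    listen 80;
    location / {
        proxy_pass http://myapp;
    }
}
```

The pipeline (or a small script on the server) swaps the port in this file, runs `nginx -t` to validate, then `nginx -s reload` picks up the change without dropping connections.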
Canary Deployment
Canary deployment doesn’t send 100% of traffic to the new version immediately. Only 5–10% of users receive the new build first — monitor error rates and latency for a few hours. Roll out to everyone only once metrics look healthy.
```yaml
stages:
  - deploy
  - promote  # "promote" is not a default stage, so it must be declared

deploy-canary:
  stage: deploy
  script:
    - kubectl set image deployment/myapp-canary app=myapp:$CI_COMMIT_SHA
    - kubectl scale deployment/myapp-canary --replicas=1  # 1 of 10 total pods = ~10% of traffic, assuming myapp runs 9 replicas
  environment:
    name: production/canary
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

promotion-canary:
  stage: promote
  needs: [deploy-canary]
  when: manual
  script:
    # After verifying metrics, roll out to everyone
    - kubectl set image deployment/myapp app=myapp:$CI_COMMIT_SHA
    - kubectl scale deployment/myapp-canary --replicas=0
  environment:
    name: production
```
Managing environment variables and secrets properly
Never hardcode credentials in .gitlab-ci.yml. Use GitLab CI/CD Variables (Settings → CI/CD → Variables). The same principle applies beyond CI — securing API keys in production follows the same logic of keeping secrets out of code and config files.
```yaml
deploy:
  script:
    - echo "$DEPLOY_KEY" > /tmp/deploy_key
    - chmod 600 /tmp/deploy_key
    - ssh -i /tmp/deploy_key user@$PROD_SERVER "cd /app && ./deploy.sh"
    - rm /tmp/deploy_key
```
For sensitive credentials: enable Protected (accessible only on protected branches) and Masked (hidden from logs). These two checkboxes take 5 seconds to enable and can save you a world of trouble.
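Masking matters because job logs are often visible to the whole team. With DEPLOY_KEY masked, even an accidental echo comes out redacted in the log:

```yaml
debug:
  script:
    - echo "$DEPLOY_KEY"  # appears as [MASKED] in the job log, not the actual key
```

Note that masking only covers log output; it does not stop a malicious job from exfiltrating the value, which is what the Protected flag and protected branches are for.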
Pipeline as Code — splitting CI files for large projects
When .gitlab-ci.yml grows past 300 lines, use include: to split it into multiple files:
```yaml
# .gitlab-ci.yml
include:
  - local: .gitlab/ci/build.yml
  - local: .gitlab/ci/test.yml
  - local: .gitlab/ci/deploy.yml
  - project: 'myorg/ci-templates'  # reuse templates from another repo
    ref: main
    file: '/templates/docker.yml'

stages:
  - build
  - test
  - deploy
```
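Each included file is just ordinary CI YAML that GitLab merges into the main configuration. As an illustration, `.gitlab/ci/build.yml` might contain only the build jobs (the job contents here are assumptions, mirroring the earlier examples):

```yaml
# .gitlab/ci/build.yml
build-backend:
  stage: build
  script: make build-backend

build-frontend:
  stage: build
  script: make build-frontend
```

Reviewers can then read the build, test, and deploy configuration independently instead of scrolling one 300-line file.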
Results after applying these changes
Back to the project I mentioned at the start. Three things delivered 80% of the gains: DAG with needs:, dependency caching, and a parallel test matrix. Stripping out unnecessary jobs on feature branches brought the pipeline from 20 minutes down to 6. The team started merging daily, and feature branches rarely survived more than 2 days.
The most noticeable change wasn’t the time — it was the team’s attitude toward deployments. Blue-Green allows a rollback in 30 seconds if something goes wrong. When the cost of a bad deploy is just 30 seconds of recovery, the team becomes far more willing to ship — going from once every two weeks to several times a week. Pairing fast pipelines with effective code review practices compounds the benefit: faster feedback loops mean smaller, safer changes land in production continuously.
A good pipeline isn’t a complex pipeline — it’s one that runs fast, fails rarely, and anyone can read. Start with caching and parallel jobs. Once the team is comfortable with that, then think about Blue-Green or Canary. And if your deployments touch Docker images, optimizing Docker image size is another high-leverage change that speeds up every build and pull across your pipeline.
