By the end of this tutorial, you’ll have a fully functional GitLab CI/CD pipeline that automatically tests your code, generates reports, and deploys to multiple environments. In about 75 minutes, you’ll build a production-ready testing workflow that catches bugs early and accelerates your release cycle.
## What You’ll Build
You’ll create a GitLab CI/CD pipeline that:
- Runs unit, integration, and end-to-end tests automatically
- Executes tests in parallel across multiple stages
- Generates code coverage and quality reports
- Deploys to staging and production environments
- Implements test result caching for faster pipelines
- Sends notifications to team channels on failures
- Creates dynamic test environments for each merge request
This removes the manual-testing bottlenecks and inconsistent quality checks that slow down development.
## Learning Objectives

In this tutorial, you’ll learn:
- How to configure `.gitlab-ci.yml` for testing workflows
- How to implement multi-stage pipelines with dependencies
- How to use GitLab’s built-in Docker registry for test containers
- How to cache test results and dependencies effectively
- How to implement parallel test execution
- How to secure secrets using GitLab CI/CD variables
**Time Estimate:** 75-90 minutes
## Prerequisites

### Required Software

Before starting, install:
| Tool | Version | Purpose |
|---|---|---|
| Git | 2.30+ | Version control |
| Docker | 20.10+ | Container runtime |
| Node.js | 18.x+ | Test environment |
| GitLab account | - | CI/CD platform |
**Installation:**

```bash
# macOS
brew install git docker node

# Linux (Ubuntu/Debian)
sudo apt update
sudo apt install git docker.io nodejs npm

# Windows (using Chocolatey)
choco install git docker-desktop nodejs
```
### Required Knowledge
You should be familiar with:
- Basic Git operations (commit, push, merge)
- YAML syntax fundamentals
- Basic Docker concepts
- Not required: Advanced DevOps or Kubernetes knowledge
### Required Resources
- GitLab account (free tier works fine)
- Repository with existing tests
- Docker Hub account (optional, for custom images)
## Step 1: Create Basic GitLab CI/CD Configuration

In this step, we’ll create the foundational `.gitlab-ci.yml` file.

### Create Pipeline Configuration

In your repository root, create `.gitlab-ci.yml`:
```yaml
# .gitlab-ci.yml
image: node:18-alpine

stages:
  - test
  - build
  - deploy

variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/.cypress"

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - node_modules
    - .cypress

before_script:
  - npm ci --cache .npm --prefer-offline

unit-tests:
  stage: test
  script:
    - npm run test:unit
  coverage: '/All files[^|]*\|[^|]*\s+([\d\.]+)/'
  artifacts:
    when: always
    reports:
      junit: junit.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
```
**What this does:**
- `image`: Uses the Node.js 18 Alpine base image (lightweight)
- `stages`: Defines pipeline phases (test → build → deploy)
- `cache`: Caches npm and Cypress for faster runs
- `before_script`: Installs dependencies before each job
- `coverage`: Extracts coverage percentage from output
- `artifacts`: Saves test results and coverage reports
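One gap the snippet leaves open: the `junit` and `coverage_report` artifacts assume `npm run test:unit` actually writes JUnit XML to `junit.xml` and Cobertura output to `coverage/cobertura-coverage.xml`. The tutorial doesn’t prescribe a test runner; as a sketch, with Jest and the `jest-junit` reporter installed (both assumptions), the script could look like this:

```yaml
unit-tests:
  stage: test
  script:
    # jest-junit writes ./junit.xml by default; the cobertura coverage
    # reporter writes coverage/cobertura-coverage.xml, matching the
    # artifact paths declared above.
    - npx jest --ci --coverage --coverageReporters=cobertura --reporters=default --reporters=jest-junit
```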
### Push and Trigger Pipeline

```bash
git add .gitlab-ci.yml
git commit -m "Add GitLab CI/CD pipeline"
git push origin main
```

Navigate to **CI/CD → Pipelines** in GitLab. You should see:

```
✅ Pipeline #1234567 passed
   ✅ test stage
      ✅ unit-tests (42s)
```
**Checkpoint:** You now have automated unit tests running on every push.

## Step 2: Add Multi-Stage Testing Pipeline

### Implement Integration and E2E Tests

Expand `.gitlab-ci.yml` with additional test stages:
```yaml
stages:
  - test
  - integration
  - e2e
  - build
  - deploy

# Unit tests (from Step 1)
unit-tests:
  stage: test
  script:
    - npm run test:unit
  coverage: '/All files[^|]*\|[^|]*\s+([\d\.]+)/'
  artifacts:
    when: always
    reports:
      junit: junit.xml

# Integration tests
integration-tests:
  stage: integration
  services:
    - postgres:15-alpine
    - redis:7-alpine
  variables:
    POSTGRES_DB: testdb
    POSTGRES_USER: test
    POSTGRES_PASSWORD: testpass
    DATABASE_URL: "postgresql://test:testpass@postgres:5432/testdb"
    REDIS_URL: "redis://redis:6379"
  script:
    - npm run db:migrate
    - npm run test:integration
  artifacts:
    when: always
    reports:
      junit: test-results/integration-junit.xml

# E2E tests with Playwright
e2e-tests:
  stage: e2e
  image: mcr.microsoft.com/playwright:v1.40.0-focal
  script:
    - npm ci
    - npm run build
    - npx playwright test
  artifacts:
    when: always
    paths:
      - playwright-report/
      - test-results/
    expire_in: 30 days
```
**What’s new:**
- `services`: Spins up PostgreSQL and Redis for integration tests
- `variables`: Environment-specific test configuration
- `npm run db:migrate`: Prepares database schema before tests
- Custom image: Uses Playwright’s official Docker image for E2E tests
- `expire_in`: Artifacts automatically deleted after 30 days
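One practical caveat: service containers can take a few seconds to start accepting connections, so `npm run db:migrate` may race PostgreSQL on a cold start. A small readiness wait helps; here’s a sketch assuming the Alpine-based job image, where the `postgresql-client` package provides `pg_isready`:

```yaml
integration-tests:
  before_script:
    # A job-level before_script replaces the global one, so repeat the install
    - npm ci --cache .npm --prefer-offline
    - apk add --no-cache postgresql-client
    # Wait (up to 30s) until the postgres service accepts connections
    - timeout 30 sh -c 'until pg_isready -h postgres -U test; do sleep 1; done'
```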
### Verify Pipeline Stages

Push the changes and check the pipeline visualization:

```
✅ Pipeline #1234568 passed (3m 42s)
├─ ✅ test stage (42s)
│  └─ unit-tests
├─ ✅ integration stage (1m 15s)
│  └─ integration-tests
└─ ✅ e2e stage (1m 45s)
   └─ e2e-tests
```

**Checkpoint:** The multi-stage pipeline now runs unit, integration, and E2E tests sequentially.

## Step 3: Implement Parallel Test Execution

### Split Tests Across Multiple Jobs

For faster feedback, run tests in parallel:
```yaml
unit-tests:
  stage: test
  parallel: 4
  script:
    - npm run test:unit -- --shard=${CI_NODE_INDEX}/${CI_NODE_TOTAL}
  artifacts:
    when: always
    reports:
      junit: junit-${CI_NODE_INDEX}.xml

e2e-tests:
  stage: e2e
  image: mcr.microsoft.com/playwright:v1.40.0-focal
  parallel:
    matrix:
      - BROWSER: [chromium, firefox, webkit]
  script:
    - npm ci
    - npx playwright test --project=$BROWSER
  artifacts:
    when: always
    paths:
      - playwright-report-$BROWSER/
    reports:
      junit: test-results/junit-$BROWSER.xml
```
**How parallel execution works:**
- `parallel: 4`: Splits unit tests into 4 concurrent jobs
- `CI_NODE_INDEX` / `CI_NODE_TOTAL`: Built-in variables for sharding
- Matrix strategy: Runs E2E tests in 3 browsers simultaneously
- Dynamic artifact names: Each parallel job uploads separate results
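Note that stages are still barriers: every job in one stage must finish before the next stage starts. GitLab’s `needs` keyword relaxes this into a DAG, so a job starts the moment its listed dependencies pass; a sketch using the job names above:

```yaml
e2e-tests:
  stage: e2e
  # Start as soon as all four unit-test shards pass, without waiting
  # for any other jobs in earlier stages.
  needs: ["unit-tests"]
```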
**Performance gain:**
- Unit tests: 2m 40s → 40s (4x speedup)
- E2E tests: 5m 30s → 2m 10s (~2.5x speedup, 3 browsers in parallel)

**Expected result:**

```
✅ Pipeline #1234569 passed (2m 25s)
├─ ✅ test stage (40s)
│  ├─ unit-tests [1/4]
│  ├─ unit-tests [2/4]
│  ├─ unit-tests [3/4]
│  └─ unit-tests [4/4]
└─ ✅ e2e stage (2m 10s)
   ├─ e2e-tests [chromium]
   ├─ e2e-tests [firefox]
   └─ e2e-tests [webkit]
```
## Step 4: Add Code Quality and Security Scanning

### Integrate GitLab Code Quality

Add quality checks to your pipeline:

```yaml
include:
  - template: Code-Quality.gitlab-ci.yml
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml

stages:
  - test
  - integration
  - e2e
  - quality
  - security
  - build
  - deploy

code_quality:
  stage: quality
  artifacts:
    reports:
      codequality: gl-code-quality-report.json

sast:
  stage: security

dependency_scanning:
  stage: security
```
**What this adds:**
- Code Quality: Analyzes code for complexity, duplication, and maintainability
- SAST: Static Application Security Testing for vulnerabilities
- Dependency Scanning: Checks for known security issues in dependencies
- Merge request widgets: Results appear directly in the MR interface
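The templates run with zero configuration, but they expose variables for tuning. For example, the SAST template honors `SAST_EXCLUDED_PATHS` to keep analyzers out of test code (the value below mirrors GitLab’s documented default):

```yaml
variables:
  # Comma-separated paths the SAST analyzers should skip
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"
```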
### View Quality Reports

In merge requests, you’ll see:

```
Code Quality: 4 issues found
⚠️ Cognitive complexity in auth.js (Score: 15/10)
⚠️ Similar blocks of code in api.js and utils.js

Security: 2 vulnerabilities detected
🔴 High: SQL Injection in user-query.js
🟡 Medium: Insecure randomness in token-generator.js
```
## Step 5: Implement Dynamic Test Environments

### Create Review Apps for Each MR

Add dynamic environments for manual QA testing:

```yaml
deploy-review:
  stage: deploy
  image: alpine:latest
  before_script: []   # skip the global npm install; this image has no Node.js
  script:
    - apk add --no-cache curl
    - |
      curl --request POST \
        --header "PRIVATE-TOKEN: $DEPLOY_TOKEN" \
        --data "environment=review-$CI_COMMIT_REF_SLUG" \
        "https://api.your-platform.com/deploy"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://review-$CI_COMMIT_REF_SLUG.your-app.com
    on_stop: stop-review
  only:
    - merge_requests

stop-review:
  stage: deploy
  image: alpine:latest
  before_script: []   # same: no Node.js needed here
  script:
    - echo "Stopping review environment"
    - apk add --no-cache curl
    - |
      curl --request DELETE \
        --header "PRIVATE-TOKEN: $DEPLOY_TOKEN" \
        "https://api.your-platform.com/environments/review-$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual
  only:
    - merge_requests
```
**How review apps work:**
- `environment`: Creates a temporary deployment for each MR
- `on_stop`: Defines the cleanup job to run when the MR is closed
- Dynamic URL: Each MR gets a unique testing URL
- Manual cleanup: `when: manual` requires explicit action to destroy
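Relying on a manual stop means forgotten review apps can linger. If you’d rather have GitLab expire them automatically, `environment:auto_stop_in` schedules the `on_stop` job for you; a sketch:

```yaml
deploy-review:
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://review-$CI_COMMIT_REF_SLUG.your-app.com
    on_stop: stop-review
    auto_stop_in: 1 week   # runs stop-review automatically a week after the last deploy
```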
**Result in merge request:**

```
Environment: review/feature-login
🔗 View app: https://review-feature-login.your-app.com
🗑️ Stop environment (manual action)
```
## Step 6: Add Notifications and Monitoring

### Configure Slack Notifications

Add a notification job:

```yaml
notify-failure:
  stage: .post
  image: curlimages/curl:latest
  before_script: []   # skip the global npm install; this image has no Node.js
  script:
    - |
      curl -X POST $SLACK_WEBHOOK_URL \
        -H 'Content-Type: application/json' \
        -d '{
          "text": "❌ Pipeline failed for '"$CI_PROJECT_NAME"'",
          "blocks": [
            {
              "type": "section",
              "text": {
                "type": "mrkdwn",
                "text": "*Pipeline Failure*\n\nBranch: `'"$CI_COMMIT_REF_NAME"'`\nCommit: '"$CI_COMMIT_SHORT_SHA"'\nAuthor: '"$CI_COMMIT_AUTHOR"'"
              }
            },
            {
              "type": "actions",
              "elements": [
                {
                  "type": "button",
                  "text": { "type": "plain_text", "text": "View Pipeline" },
                  "url": "'"$CI_PIPELINE_URL"'"
                }
              ]
            }
          ]
        }'
  when: on_failure
  only:
    - main
    - develop
```
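If you also want a heads-up when things go green on release branches, the same pattern works with the condition flipped; a minimal sketch:

```yaml
notify-success:
  stage: .post
  image: curlimages/curl:latest
  before_script: []   # no npm install needed here either
  script:
    - >
      curl -X POST "$SLACK_WEBHOOK_URL"
      -H 'Content-Type: application/json'
      -d "{\"text\": \"✅ Pipeline passed for $CI_PROJECT_NAME on $CI_COMMIT_REF_NAME\"}"
  when: on_success
  only:
    - main
```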
### Set Up Slack Webhook

1. Create a Slack app at api.slack.com/apps
2. Enable “Incoming Webhooks”
3. Copy the webhook URL
4. In GitLab, go to **Settings → CI/CD → Variables** and add:
   - Key: `SLACK_WEBHOOK_URL`
   - Value: [paste webhook URL]
   - Protected: ✅ (only available on protected branches)
   - Masked: ✅ (hidden in job logs)
**Expected notification:**

```
❌ Pipeline failed for my-qa-project
Branch: `main`
Commit: a1b2c3d
Author: Developer Name
[View Pipeline]
```
## Step 7: Optimize Pipeline Performance

### Implement Advanced Caching

Optimize the caching strategy:

```yaml
cache:
  key:
    files:
      - package-lock.json
  paths:
    - .npm
    - node_modules
  policy: pull

# Override cache policy for install jobs
.install_deps:
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - .npm
      - node_modules
    policy: pull-push

unit-tests:
  extends: .install_deps
  stage: test
  script:
    - npm ci --cache .npm --prefer-offline
    - npm run test:unit
```
**Caching improvements:**
- Key by lockfile: Cache invalidates only when dependencies change
- `policy: pull`: Most jobs only read the cache (faster)
- `policy: pull-push`: The first job updates the cache
- `.install_deps` template: Reusable cache configuration via `extends`
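One edge case: the very first pipeline after a lockfile change finds no cache under the new key and installs from scratch. On GitLab 16.0+ you can soften this with `cache:fallback_keys`, which reuses an older cache when the exact key misses; a sketch (the fallback key name is arbitrary):

```yaml
cache:
  key:
    files:
      - package-lock.json
  fallback_keys:
    - npm-cache-default   # used when no cache matches the lockfile key yet
  paths:
    - .npm
  policy: pull
```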
### Add Pipeline Optimization Rules

Skip unnecessary jobs:

```yaml
unit-tests:
  stage: test
  rules:
    # Run on MRs and on the default branch, but only when code, tests,
    # or package.json changed; doc-only commits skip the job entirely.
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      changes:
        - "src/**/*"
        - "tests/**/*"
        - package.json
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - "src/**/*"
        - "tests/**/*"
        - package.json
    - when: never

e2e-tests:
  stage: e2e
  rules:
    - if: '$CI_MERGE_REQUEST_LABELS =~ /skip-e2e/'
      when: never
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: always
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual
```
**Rules logic:**
- Run on MRs and the default branch
- Skip if only documentation changed
- Skip E2E tests if the MR has the `skip-e2e` label
- Make E2E manual on MRs, automatic on the default branch
**Performance gains:**
- Doc-only changes: Pipeline skipped (0s vs 3m)
- Dependency caching: Install time 2m → 15s
- Smart job skipping: Average pipeline 3m → 1m 30s
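One more rules trick worth adding: when a branch has an open MR, GitLab can spawn both a branch pipeline and an MR pipeline for the same commit. The standard `workflow:rules` pattern from GitLab’s docs suppresses the duplicate:

```yaml
workflow:
  rules:
    # Always run merge request pipelines
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    # Skip branch pipelines when the branch already has an open MR
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    # Otherwise run branch pipelines
    - if: '$CI_COMMIT_BRANCH'
```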
## Testing Your Implementation

### Test Case 1: Full Pipeline

Create a test commit affecting code:

```bash
echo "// test change" >> src/index.js
git add src/index.js
git commit -m "test: trigger full pipeline"
git push origin main
```
**Expected result:**

```
✅ Pipeline #1234570 passed (1m 45s)
├─ ✅ test: unit-tests [1-4/4] (25s)
├─ ✅ integration: integration-tests (45s)
├─ ✅ e2e: e2e-tests [chromium,firefox,webkit] (1m 10s)
├─ ✅ quality: code_quality (20s)
└─ ✅ security: sast, dependency_scanning (35s)
```
### Test Case 2: Skip E2E Tests

Create an MR with the `skip-e2e` label:

```bash
git checkout -b feature/docs-update
echo "# Documentation" >> README.md
git add README.md
git commit -m "docs: update README"
git push origin feature/docs-update
```
In GitLab, create the MR and add the label `skip-e2e`.
**Expected result:**

```
✅ Pipeline #1234571 passed (35s)
├─ ✅ test: unit-tests [1-4/4] (25s)
├─ ⏭️ e2e: e2e-tests (skipped - label: skip-e2e)
└─ ✅ quality: code_quality (10s)
```
### Validation Checklist
- Unit tests run in parallel
- Integration tests use database services
- E2E tests execute in multiple browsers
- Code quality reports appear in MR
- Review environments deploy automatically
- Slack notifications sent on failure
- Cache reduces dependency install time
## Troubleshooting

### Issue 1: Services Not Connecting

**Symptoms:**

```
Error: connect ECONNREFUSED 127.0.0.1:5432
```

**Cause:** Integration tests are trying to connect to localhost instead of the service hostname.

**Solution:** Use the service alias as the hostname:

```yaml
integration-tests:
  services:
    - name: postgres:15-alpine
      alias: postgres   # use this as the hostname
  variables:
    DATABASE_URL: "postgresql://test:testpass@postgres:5432/testdb"
```
### Issue 2: Pipeline Timeout

**Symptoms:**

```
Job exceeded maximum timeout of 60 minutes
```

**Solution:** Increase the job timeout:

```yaml
e2e-tests:
  timeout: 90 minutes   # increase from the default 60m
```

Or optimize test execution:

```yaml
e2e-tests:
  parallel: 5   # split across more jobs
  script:
    - npx playwright test --workers=2   # reduce per-job parallelism
```
### Issue 3: Cache Not Working

**Symptoms:**
- `npm ci` reinstalls everything every time
- Cache size shows 0 MB

**Check the cache key:**

```yaml
cache:
  key:
    files:
      - package-lock.json   # ensure this file exists and is committed
  paths:
    - .npm
    - node_modules
```

**Verify the cache in the job log:** look for `Restoring cache` and `Saving cache` messages at the start and end of the job output.
## Next Steps

Congratulations! You’ve built a production-grade GitLab CI/CD testing pipeline. 🎉

### What You’ve Built

You now have:
- ✅ Multi-stage pipeline with unit, integration, and E2E tests
- ✅ Parallel test execution for faster feedback
- ✅ Code quality and security scanning
- ✅ Dynamic review environments for each MR
- ✅ Smart caching for optimized performance
- ✅ Slack notifications on failures

### Level Up Your Skills

#### Easy Enhancements (30 min each)

**Add Visual Regression Testing**

```yaml
visual-tests:
  stage: e2e
  script:
    # Compares screenshots against committed baselines; regenerate baselines
    # locally with `npx playwright test --update-snapshots` when intended.
    - npx playwright test --reporter=html
```
**Enable Auto-Merge on Success**

```yaml
auto-merge:
  stage: .post
  image: curlimages/curl:latest   # default Node image has no curl
  script:
    - |
      curl --request PUT \
        --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
        "$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/merge"
  rules:
    - if: '$CI_MERGE_REQUEST_LABELS =~ /auto-merge/'
      when: on_success
```
#### Intermediate Enhancements (1-2 hours each)

**Implement a Test Reports Dashboard**
- Use the GitLab Test Reports API
- Create a custom analytics dashboard
- Track test trends over time

**Add Performance Testing**

```yaml
performance-tests:
  stage: e2e
  script:
    - npm run lighthouse-ci
  artifacts:
    reports:
      performance: performance.json
```
#### Advanced Enhancements (3+ hours)

**Multi-Project Pipelines**
- Trigger downstream pipelines in microservices
- Coordinate cross-repo testing
- Guide: Multi-project pipelines

**Kubernetes Integration**
- Deploy to Kubernetes for testing
- Use GitLab Auto DevOps
- Implement canary deployments
## Conclusion

### What You Accomplished
In this tutorial, you:
- ✅ Created a multi-stage GitLab CI/CD pipeline
- ✅ Implemented parallel test execution for speed
- ✅ Added code quality and security scanning
- ✅ Set up dynamic review environments
- ✅ Configured Slack notifications
- ✅ Optimized pipeline with smart caching and rules
### Key Takeaways
- GitLab CI/CD is powerful: Built-in features like code quality, SAST, and review apps accelerate development
- Parallel execution matters: Strategic parallelization cuts pipeline time by 50-70%
- Smart caching is essential: Proper cache configuration reduces redundant work
- Rules optimize costs: Skip unnecessary jobs to save compute resources
Questions or feedback? Drop a comment below!
Found this helpful? Share it with your team!