How to Use Docker in CI/CD Pipelines on Linux
Docker has revolutionized the way we build, test, and deploy applications by providing consistent, portable environments across different stages of the software development lifecycle. When integrated into Continuous Integration and Continuous Deployment (CI/CD) pipelines on Linux systems, Docker enables teams to achieve faster, more reliable deployments while maintaining consistency between development, testing, and production environments.
This comprehensive guide will walk you through everything you need to know about implementing Docker in your CI/CD pipelines on Linux, from basic concepts to advanced deployment strategies. You'll learn how to containerize applications, set up automated builds, implement testing workflows, and deploy to production environments using popular CI/CD platforms.
Prerequisites and Requirements
Before diving into Docker CI/CD implementation, ensure you have the following prerequisites in place:
System Requirements
- Linux-based system (Ubuntu 18.04+, CentOS 7+, or similar distribution)
- Minimum 4GB RAM and 20GB available disk space
- Root or sudo access for Docker installation
- Network connectivity for downloading Docker images and dependencies
Required Software
- Docker Engine: Latest stable version (20.10+)
- Docker Compose: For multi-container applications
- Git: For version control integration
- Text editor: VS Code, vim, or your preferred editor
Knowledge Prerequisites
- Basic understanding of Linux command line
- Familiarity with containerization concepts
- Basic knowledge of software development workflows
- Understanding of version control systems (Git)
Installing and Configuring Docker on Linux
Docker Installation
First, let's install Docker on your Linux system. The following example uses Ubuntu, but similar steps apply to other distributions:
```bash
# Update package index
sudo apt-get update

# Install required packages
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

# Add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine and plugins
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
Post-Installation Configuration
Configure Docker to run without sudo and start automatically:
```bash
# Add your user to the docker group
sudo usermod -aG docker $USER
# Log out and back in (or run `newgrp docker`) for the group change to take effect

# Start and enable Docker service
sudo systemctl start docker
sudo systemctl enable docker

# Verify installation
docker --version
docker run hello-world
```
Understanding Docker in CI/CD Context
Core Concepts
Docker in CI/CD pipelines serves multiple purposes:
1. Build Environment Consistency: Ensures identical build environments across all stages
2. Dependency Management: Packages applications with all required dependencies
3. Scalability: Enables easy horizontal scaling of applications
4. Isolation: Provides process and resource isolation for security
5. Portability: Allows applications to run consistently across different environments
CI/CD Pipeline Stages with Docker
A typical Docker-enabled CI/CD pipeline includes the following stages (a command-level sketch follows the list):
1. Source Code Management: Code changes trigger pipeline execution
2. Build Stage: Application is built inside Docker containers
3. Test Stage: Automated tests run in isolated Docker environments
4. Security Scanning: Container images are scanned for vulnerabilities
5. Registry Push: Built images are pushed to container registries
6. Deployment: Containers are deployed to target environments
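At the command level, these stages map onto a handful of Docker operations. The sketch below is illustrative only: the image name, registry, and test command are placeholders, and in practice a CI platform (covered later in this guide) runs these steps automatically:
```bash
# Illustrative sketch; image name, registry, and test command are placeholders
IMAGE="registry.example.com/myapp:$(git rev-parse --short HEAD)"
docker build -t "$IMAGE" .                          # Build stage
docker run --rm "$IMAGE" npm test                   # Test stage
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    aquasec/trivy image "$IMAGE"                    # Security scanning
docker push "$IMAGE"                                # Registry push
# Deployment then pulls and runs the pushed tag on the target environment
```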
Creating Docker-Ready Applications
Sample Application Structure
Let's create a sample Node.js application to demonstrate Docker CI/CD integration:
```bash
# Create project directory
mkdir docker-cicd-demo
cd docker-cicd-demo

# Initialize Node.js project
npm init -y
npm install express

# Create application file
cat > app.js << EOF
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;
app.get('/', (req, res) => {
res.json({
message: 'Hello from Docker CI/CD Pipeline!',
version: process.env.APP_VERSION || '1.0.0',
timestamp: new Date().toISOString()
});
});
app.get('/health', (req, res) => {
res.status(200).json({ status: 'healthy' });
});
app.listen(port, () => {
console.log(\`Server running on port \${port}\`);
});
module.exports = app;
EOF
```
Creating Dockerfile
Create an optimized Dockerfile for your application:
```dockerfile
# Use official Node.js runtime as base image
FROM node:18-alpine AS builder

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Production stage
FROM node:18-alpine AS production

# Create non-root user
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001

# Set working directory
WORKDIR /app

# Copy dependencies from builder stage
COPY --from=builder /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs . .

# Switch to non-root user
USER nodejs

# Expose port
EXPOSE 3000

# Health check (use busybox wget; curl is not included in Alpine-based images)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget -qO- http://localhost:3000/health || exit 1

# Start application
CMD ["node", "app.js"]
```
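To sanity-check the image locally before wiring it into a pipeline (the tag name here is an arbitrary example):
```bash
# Build the production stage and run it in the background
docker build --target production -t docker-cicd-demo:local .
docker run -d --rm --name demo -p 3000:3000 docker-cicd-demo:local
# Exercise the endpoints, then stop the container
curl http://localhost:3000/
curl http://localhost:3000/health
docker stop demo
```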
Docker Compose for Development
Create a `docker-compose.yml` file for local development:
```yaml
version: '3.8'
services:
app:
build:
context: .
dockerfile: Dockerfile
target: production
ports:
- "3000:3000"
environment:
- NODE_ENV=development
- APP_VERSION=dev
volumes:
- .:/app
- /app/node_modules
restart: unless-stopped
redis:
image: redis:7-alpine
ports:
- "6379:6379"
restart: unless-stopped
networks:
default:
name: docker-cicd-network
```
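One command then brings up the app and Redis for local development:
```bash
# Build images as needed and start all services in the background
docker compose up --build -d
# Follow application logs; tear everything down when finished
docker compose logs -f app
docker compose down
```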
Implementing CI/CD with Popular Platforms
GitHub Actions Integration
Create `.github/workflows/ci-cd.yml` for GitHub Actions:
```yaml
name: Docker CI/CD Pipeline
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set up Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run tests
run: npm test
- name: Run linting
run: npm run lint
build-and-push:
needs: test
runs-on: ubuntu-latest
if: github.event_name == 'push'
permissions:
contents: read
packages: write
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Log in to Container Registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=sha,prefix={{branch}}-
- name: Build and push Docker image
uses: docker/build-push-action@v4
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
deploy:
needs: build-and-push
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- name: Deploy to production
run: |
echo "Deploying to production environment"
# Add your deployment commands here
```
GitLab CI Integration
Create `.gitlab-ci.yml` for GitLab CI/CD:
```yaml
stages:
- test
- build
- deploy
variables:
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: "/certs"
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
services:
- docker:20.10.16-dind
before_script:
- docker info
test:
stage: test
image: node:18-alpine
script:
- npm ci
- npm run test
- npm run lint
coverage: '/Lines\s:\s(\d+\.\d+)%/'
artifacts:
reports:
coverage_report:
coverage_format: cobertura
path: coverage/cobertura-coverage.xml
build:
stage: build
image: docker:20.10.16
script:
- echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
- docker build -t $IMAGE_TAG .
- docker push $IMAGE_TAG
only:
- main
- develop
deploy_staging:
stage: deploy
image: docker:20.10.16
script:
- echo "Deploying to staging environment"
- docker pull $IMAGE_TAG
- docker stop staging-app || true
- docker rm staging-app || true
- docker run -d --name staging-app -p 3001:3000 $IMAGE_TAG
environment:
name: staging
url: http://staging.example.com
only:
- develop
deploy_production:
stage: deploy
image: docker:20.10.16
script:
- echo "Deploying to production environment"
- docker pull $IMAGE_TAG
- docker stop production-app || true
- docker rm production-app || true
- docker run -d --name production-app -p 3000:3000 $IMAGE_TAG
environment:
name: production
url: http://production.example.com
only:
- main
when: manual
```
Jenkins Pipeline Integration
Create a `Jenkinsfile` for Jenkins pipeline:
```groovy
pipeline {
agent any
environment {
DOCKER_REGISTRY = 'your-registry.com'
IMAGE_NAME = 'docker-cicd-demo'
IMAGE_TAG = "${BUILD_NUMBER}"
}
stages {
stage('Checkout') {
steps {
checkout scm
}
}
stage('Test') {
agent {
docker {
image 'node:18-alpine'
}
}
steps {
sh 'npm ci'
sh 'npm run test'
sh 'npm run lint'
}
post {
always {
junit 'test-results.xml'
publishCoverage adapters: [coberturaAdapter('coverage/cobertura-coverage.xml')]
}
}
}
stage('Build Docker Image') {
steps {
script {
def image = docker.build("${DOCKER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}")
docker.withRegistry("https://${DOCKER_REGISTRY}", 'docker-registry-credentials') {
image.push()
image.push('latest')
}
}
}
}
stage('Security Scan') {
steps {
script {
sh "docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image ${DOCKER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
}
}
}
stage('Deploy to Staging') {
when {
branch 'develop'
}
steps {
script {
sh """
docker pull ${DOCKER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}
docker stop staging-app || true
docker rm staging-app || true
docker run -d --name staging-app -p 3001:3000 ${DOCKER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}
"""
}
}
}
stage('Deploy to Production') {
when {
branch 'main'
}
steps {
input message: 'Deploy to production?', ok: 'Deploy'
script {
sh """
docker pull ${DOCKER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}
docker stop production-app || true
docker rm production-app || true
docker run -d --name production-app -p 3000:3000 ${DOCKER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}
"""
}
}
}
}
post {
always {
cleanWs()
}
failure {
emailext (
subject: "Pipeline Failed: ${env.JOB_NAME} - ${env.BUILD_NUMBER}",
body: "Pipeline failed. Check console output at ${env.BUILD_URL}",
to: "${env.CHANGE_AUTHOR_EMAIL}"
)
}
}
}
```
Advanced Docker CI/CD Strategies
Multi-Stage Builds
Implement multi-stage builds for optimized production images:
```dockerfile
# Development stage
FROM node:18-alpine AS development
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]

# Test stage
FROM development AS test
RUN npm run test
RUN npm run lint

# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Production stage
FROM node:18-alpine AS production
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs . .
USER nodejs
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "app.js"]
```
Container Registry Management
Authenticate your pipeline to the container registry it pushes to; the command differs by provider:
```bash
# Docker Hub login
echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin

# AWS ECR login
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com

# Google Container Registry login
echo $GCR_SERVICE_KEY | base64 -d | docker login -u _json_key --password-stdin https://gcr.io
```
Blue-Green Deployment Strategy
Implement blue-green deployments with Docker:
```bash
#!/bin/bash
# blue-green-deploy.sh
APP_NAME="myapp"
NEW_VERSION=$1
CURRENT_COLOR=$(docker ps --filter "name=${APP_NAME}" --format "{{.Names}}" | grep -o 'blue\|green' | head -1)
if [ "$CURRENT_COLOR" = "blue" ]; then
NEW_COLOR="green"
OLD_COLOR="blue"
else
NEW_COLOR="blue"
OLD_COLOR="green"
fi
echo "Deploying version $NEW_VERSION to $NEW_COLOR environment"
# Deploy new version
docker run -d \
--name "${APP_NAME}-${NEW_COLOR}" \
--network myapp-network \
-e NODE_ENV=production \
-e APP_VERSION=$NEW_VERSION \
myregistry/${APP_NAME}:${NEW_VERSION}
# Health check: probe from inside the new container, which is only reachable
# on the internal network (assumes an Alpine-based image with busybox wget)
sleep 10
if docker exec "${APP_NAME}-${NEW_COLOR}" wget -qO- http://localhost:3000/health; then
echo "Health check passed, switching traffic"
# Update load balancer or proxy configuration
# This depends on your load balancer setup
# Stop old version
docker stop "${APP_NAME}-${OLD_COLOR}"
docker rm "${APP_NAME}-${OLD_COLOR}"
echo "Deployment successful"
else
echo "Health check failed, rolling back"
docker stop "${APP_NAME}-${NEW_COLOR}"
docker rm "${APP_NAME}-${NEW_COLOR}"
exit 1
fi
```
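Usage is a single argument, the version tag to roll out; the registry, network, and port values inside the script are examples to adapt:
```bash
chmod +x blue-green-deploy.sh
./blue-green-deploy.sh 1.2.0
```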
Testing Strategies in Docker CI/CD
Unit Testing in Containers
Create a dedicated testing Dockerfile:
```dockerfile
FROM node:18-alpine AS test
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run test -- --coverage --watchAll=false
CMD ["npm", "test"]
```
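Because the tests execute in a RUN instruction, building the image runs the suite and any failure aborts the build. Assuming the file is saved as `Dockerfile.test`:
```bash
# A failing test exits non-zero, which fails the build and the CI job
docker build -f Dockerfile.test -t docker-cicd-demo:test .
```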
Integration Testing
Set up integration tests with Docker Compose:
```yaml
version: '3.8'
services:
app:
build:
context: .
target: test
depends_on:
- redis
- postgres
environment:
- NODE_ENV=test
- REDIS_URL=redis://redis:6379
- DATABASE_URL=postgresql://testuser:testpass@postgres:5432/testdb
volumes:
- .:/app
- /app/node_modules
redis:
image: redis:7-alpine
postgres:
image: postgres:14-alpine
environment:
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpass
POSTGRES_DB: testdb
```
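Assuming this file is saved as `docker-compose.test.yml`, the pipeline can run the suite and inherit the app container's exit code:
```bash
# Run tests; --exit-code-from makes the command exit with the app's status
docker compose -f docker-compose.test.yml up \
    --build --abort-on-container-exit --exit-code-from app
# Remove containers, networks, and volumes afterwards
docker compose -f docker-compose.test.yml down -v
```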
End-to-End Testing
Implement E2E tests with Cypress in Docker:
```dockerfile
FROM cypress/included:12.3.0
WORKDIR /app
COPY package*.json ./
COPY cypress.config.js ./
COPY cypress ./cypress
RUN npm ci
CMD ["npx", "cypress", "run"]
```
Security Best Practices
Image Security Scanning
Integrate security scanning into your pipeline:
```bash
# Trivy scanning
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    aquasec/trivy image --severity HIGH,CRITICAL myapp:latest

# Grype (Anchore) scanning
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    anchore/grype myapp:latest
```
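To gate the pipeline on findings rather than only report them, Trivy's `--exit-code` option makes the scan itself fail the job:
```bash
# Exit non-zero (failing the CI job) if HIGH or CRITICAL issues are found
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    aquasec/trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest
```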
Secure Dockerfile Practices
```dockerfile
# Use specific version tags
FROM node:18.12.1-alpine

# Create non-root user
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001

# Use COPY instead of ADD
COPY --chown=nodejs:nodejs . .

# Remove build-time packages (assumes an earlier
# `apk add --no-cache --virtual .build-deps ...` in this image)
RUN apk del .build-deps

# Use specific port
EXPOSE 3000

# Switch to non-root user
USER nodejs

# Use exec form for CMD
CMD ["node", "app.js"]
```
Monitoring and Logging
Application Monitoring
Implement health checks and monitoring:
```javascript
// health-check.js
const http = require('http');
const options = {
host: 'localhost',
port: 3000,
path: '/health',
timeout: 2000
};
const request = http.request(options, (res) => {
if (res.statusCode === 200) {
process.exit(0);
} else {
process.exit(1);
}
});
request.on('error', () => {
process.exit(1);
});
request.end();
```
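Because the script exits 0 only on a healthy response, it doubles as a dependency-free health probe for slim images that ship neither curl nor wget:
```dockerfile
# Invoke the Node-based probe directly; no extra tools needed in the image
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD ["node", "health-check.js"]
```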
Centralized Logging
Set up centralized logging with Docker:
```yaml
version: '3.8'
services:
app:
build: .
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
labels:
- "logging=true"
logspout:
image: gliderlabs/logspout
volumes:
- /var/run/docker.sock:/var/run/docker.sock
command: syslog+tls://logs.papertrailapp.com:12345
```
Common Issues and Troubleshooting
Build Performance Issues
Problem: Slow Docker builds in CI/CD pipeline
Solutions:
```bash
# Reuse layers from a previously pushed image as build cache
docker pull myapp:latest || true
docker build --cache-from myapp:latest -t myapp:new .

# In the Dockerfile, BuildKit cache mounts speed up package installs
# (requires BuildKit: DOCKER_BUILDKIT=1, the default in recent Docker):
#   FROM node:18-alpine AS deps
#   WORKDIR /app
#   COPY package*.json ./
#   RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev
```
Registry Authentication Problems
Problem: Docker push/pull authentication failures
Solutions:
```bash
# Inspect stored registry credentials and registry configuration
cat ~/.docker/config.json
docker system info | grep -i registry

# Reset authentication (pass the password via stdin, not as an argument)
docker logout
echo "$PASSWORD" | docker login --username "$USERNAME" --password-stdin
```
Container Resource Limits
Problem: Containers running out of memory or CPU
Solutions:
```yaml
# Docker Compose resource limits
services:
app:
build: .
deploy:
resources:
limits:
memory: 512M
cpus: '0.5'
reservations:
memory: 256M
cpus: '0.25'
```
Network Connectivity Issues
Problem: Services cannot communicate between containers
Solutions:
```bash
# Debug network connectivity
docker network ls
docker network inspect bridge

# Create custom network
docker network create --driver bridge myapp-network
docker run --network myapp-network myapp:latest
```
Image Size Optimization
Problem: Large Docker images affecting deployment speed
Solutions:
```dockerfile
# Use Alpine Linux base images
FROM node:18-alpine

# Clean the npm cache in the same layer that fills it
RUN npm ci --omit=dev && npm cache clean --force

# Keep the build context small with a .dockerignore file
# (listing e.g. node_modules, .git, and *.md)
```
Best Practices and Tips
Dockerfile Best Practices
1. Use specific base image tags: Avoid `latest` tags in production
2. Minimize layers: Combine RUN commands when possible (see the sketch after this list)
3. Leverage build cache: Order instructions from least to most frequently changing
4. Use multi-stage builds: Separate build and runtime environments
5. Run as non-root user: Improve security by avoiding root privileges
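As a minimal sketch of points 1-3 together (the file layout is illustrative): a pinned base tag, combined RUN commands, and cache-friendly ordering that copies dependency manifests before source code:
```dockerfile
# 1. Pin the base image instead of using :latest
FROM node:18.12.1-alpine
WORKDIR /app
# 3. Dependency manifests first: editing application code
#    no longer invalidates the cached install layer
COPY package*.json ./
# 2. Combine related commands into a single RUN layer
RUN npm ci --omit=dev && npm cache clean --force
# Frequently changing source comes last
COPY . .
CMD ["node", "app.js"]
```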
CI/CD Pipeline Optimization
1. Parallel execution: Run independent jobs concurrently
2. Conditional deployments: Use branch-based deployment strategies
3. Artifact caching: Cache dependencies and build artifacts
4. Fast feedback: Fail fast on critical issues
5. Environment parity: Keep development, staging, and production similar
Container Registry Management
1. Image tagging strategy: Use semantic versioning and git commit hashes (a sketch follows this list)
2. Registry cleanup: Implement automated cleanup policies
3. Security scanning: Scan images before deployment
4. Access control: Implement proper authentication and authorization
5. Backup strategies: Ensure registry data is backed up
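A tagging sketch that combines both identifiers, with a placeholder registry name and the version read from package.json:
```bash
# Tag with the semantic version plus the short commit hash (names are examples)
VERSION=$(node -p "require('./package.json').version")
SHA=$(git rev-parse --short HEAD)
docker build \
    -t "registry.example.com/myapp:${VERSION}" \
    -t "registry.example.com/myapp:${VERSION}-${SHA}" .
docker push "registry.example.com/myapp:${VERSION}"
docker push "registry.example.com/myapp:${VERSION}-${SHA}"
```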
Monitoring and Alerting
1. Health checks: Implement comprehensive health endpoints
2. Log aggregation: Centralize logging across all containers
3. Metrics collection: Monitor application and infrastructure metrics
4. Alerting rules: Set up alerts for critical issues
5. Performance monitoring: Track application performance metrics
Conclusion
Implementing Docker in CI/CD pipelines on Linux provides numerous benefits including consistent environments, improved scalability, and faster deployments. This comprehensive guide has covered everything from basic Docker setup to advanced deployment strategies, security best practices, and troubleshooting common issues.
Key takeaways for successful Docker CI/CD implementation:
1. Start Simple: Begin with basic containerization and gradually add complexity
2. Automate Everything: Leverage CI/CD platforms to automate build, test, and deployment processes
3. Security First: Implement security scanning and follow best practices from the beginning
4. Monitor Continuously: Set up comprehensive monitoring and logging from day one
5. Iterate and Improve: Continuously optimize your pipelines based on feedback and metrics
Next Steps
To further enhance your Docker CI/CD implementation:
1. Explore Kubernetes: Consider container orchestration for complex applications
2. Implement GitOps: Adopt GitOps practices for declarative deployments
3. Advanced Security: Implement runtime security monitoring and compliance scanning
4. Performance Optimization: Fine-tune container resources and application performance
5. Disaster Recovery: Develop comprehensive backup and recovery strategies
By following the practices and examples outlined in this guide, you'll be well-equipped to build robust, scalable, and secure Docker-based CI/CD pipelines on Linux that can handle the demands of modern software development and deployment.