From code commit to production deployment.
CI/CD CHANGED SOFTWARE DEVELOPMENT.
Remember manually deploying applications? Uploading files via FTP, running commands on production servers, hoping nothing breaks? Those days are over. CI/CD pipelines automate the entire process from code commit to production deployment.
WHY CI/CD?
Continuous Integration means every code change triggers automated tests and builds. Continuous Deployment means your code automatically reaches production after passing all checks. The result: faster releases, fewer bugs, happier users.
BECOME A DEVOPS ENGINEER.
Learn to build robust CI/CD pipelines with GitHub Actions, GitLab CI, and Jenkins. Understand deployment strategies, testing automation, and infrastructure as code. Ship software with confidence.
12 lessons. Complete CI/CD control.
1. What is CI/CD? Understanding the software delivery pipeline. (Beginner)
2. Git workflows, branching strategies, and trunk-based development. (Beginner)
3. Unit tests, integration tests, end-to-end tests, and test coverage. (Beginner)
4. Building your first workflow with GitHub Actions. (Intermediate)
5. GitLab CI pipelines, .gitlab-ci.yml, and runners. (Intermediate)
6. Jenkins pipelines, Jenkinsfile, and configuration. (Intermediate)
7. Makefiles, Gradle, Maven, and build tools. (Intermediate)
8. Managing build artifacts with Nexus, Artifactory, and GitHub Packages. (Intermediate)
9. Blue-green, canary, rolling deployments, and feature flags. (Advanced)
10. Terraform, Ansible, and immutable infrastructure. (Advanced)
11. Docker, Kaniko, Buildah, and container registries. (Advanced)
12. GitOps, observability, and enterprise CI/CD architecture. (Advanced)
CI/CD stands for Continuous Integration and Continuous Delivery (or Deployment). It's a methodology and set of practices that automate the building, testing, and deployment of software.
Continuous Integration: developers merge code changes frequently, typically several times per day. Each merge triggers an automated build and test process.
Continuous Delivery: code changes are automatically prepared for release to production. The deployment to production is triggered manually but is itself automated.
Continuous Deployment: every change that passes tests is automatically deployed to production. No manual intervention is needed.
A typical CI/CD pipeline includes source, build, test, and deploy stages.
Effective CI/CD starts with good version control practices. Your branching strategy directly impacts your CI/CD pipeline design.
Developers work in short-lived branches (less than 2 days) or directly on main:
main (or trunk)
├── feature/auth-login (2 days)
├── feature/api-v2 (1 day)
└── hotfix/security-patch (hours)
Benefits: Fewer merge conflicts, faster integration, easier CI/CD
More structured approach with dedicated branches:
main
├── develop
│   ├── feature/new-feature
│   ├── release/1.0.0
│   └── bugfix/fix-login
└── hotfix/urgent-fix
Best for: Projects with scheduled releases
Simple, branch-based workflow:
main
├── feature/add-login
├── feature/update-api
└── fix/typo-in-readme
# Good commit messages
feat: add user authentication
fix: resolve memory leak in cache
docs: update API documentation
refactor: simplify user service
test: add unit tests for auth module
chore: update dependencies
<type>(<scope>): <description>
[optional body]
[optional footer]
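This format is easy to enforce in CI. Below is a minimal sketch of a commit-subject validator; the type list mirrors the examples above, and real tooling such as commitlint handles many more cases:

```python
import re

# Pattern for the <type>(<scope>): <description> format shown above.
COMMIT_RE = re.compile(
    r'^(?P<type>feat|fix|docs|refactor|test|chore)'
    r'(\((?P<scope>[\w-]+)\))?'
    r': (?P<description>.+)$'
)

def is_valid_commit(subject: str) -> bool:
    """Return True if the commit subject line follows the convention."""
    return COMMIT_RE.match(subject) is not None

print(is_valid_commit('feat: add user authentication'))    # True
print(is_valid_commit('fix(cache): resolve memory leak'))  # True
print(is_valid_commit('updated some stuff'))               # False
```

A check like this can run as an early pipeline step so malformed messages fail fast, before any build time is spent.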
Automated testing is the foundation of CI/CD. Without reliable tests, you can't have confidence in your automated pipeline.
Test individual functions and methods in isolation:
# Example: Python unit test
def test_calculate_discount():
    assert calculate_discount(100, 10) == 10
    assert calculate_discount(100, 0) == 0
    assert calculate_discount(100, 50) == 50
Characteristics: Fast, isolated, many per feature
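The test above assumes a calculate_discount function exists. A minimal implementation (an assumption for illustration) makes the example runnable end to end:

```python
def calculate_discount(price: float, percent: float) -> float:
    """Return the discount amount for a given price and percentage."""
    if not 0 <= percent <= 100:
        raise ValueError('percent must be between 0 and 100')
    return price * percent / 100

def test_calculate_discount():
    assert calculate_discount(100, 10) == 10
    assert calculate_discount(100, 0) == 0
    assert calculate_discount(100, 50) == 50

# pytest would discover test_* functions automatically; calling it directly
# here just demonstrates that the assertions pass.
test_calculate_discount()
print('all assertions passed')
```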
Test how components work together:
# Example: API integration test
def test_create_user():
    response = client.post('/api/users', json={
        'name': 'John',
        'email': 'john@example.com'
    })
    assert response.status_code == 201
    assert 'id' in response.json()
Characteristics: Moderate speed, test interactions
Test complete user workflows:
// Example: Cypress e2e test
it('user can login', () => {
  cy.visit('/login')
  cy.get('[data-testid=email]').type('user@example.com')
  cy.get('[data-testid=password]').type('password')
  cy.get('[data-testid=login-btn]').click()
  cy.url().should('include', '/dashboard')
})
Characteristics: Slow, test complete flows
Aim for meaningful coverage, not necessarily 100%:
# Using pytest with coverage
pytest --cov=src --cov-report=html
# Example output
Name                    Stmts   Miss  Cover
-------------------------------------------
src/models/user.py         50      5    90%
src/services/auth.py      100     20    80%
src/api/routes.py          80      0   100%
-------------------------------------------
TOTAL                     230     25    89%
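The Cover column is plain arithmetic: covered statements divided by total statements. Reproducing the TOTAL row from the example output:

```python
# (statements, missed) per file, taken from the coverage report above.
files = {
    'src/models/user.py':   (50, 5),
    'src/services/auth.py': (100, 20),
    'src/api/routes.py':    (80, 0),
}

total_stmts = sum(stmts for stmts, _ in files.values())
total_miss = sum(miss for _, miss in files.values())
coverage = round(100 * (total_stmts - total_miss) / total_stmts)
print(f'TOTAL {total_stmts} stmts, {total_miss} missed -> {coverage}%')  # 89%
```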
GitHub Actions is GitHub's built-in CI/CD platform. It integrates directly with your GitHub repository and allows you to automate workflows.
Create a workflow file in .github/workflows/ci.yml:
name: CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run tests
        run: |
          pytest --cov=src
      - name: Upload coverage
        uses: codecov/codecov-action@v4
on:
  push:
    branches: [main]
    tags: ['v*']
  pull_request:
    paths:
      - 'src/**'
      - 'tests/**'
  workflow_dispatch:  # Manual trigger
  schedule:
    - cron: '0 0 * * *'  # Daily at midnight
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    outputs:
      version: ${{ steps.version.outputs.version }}
    steps:
      - id: version
        run: echo "version=1.0.0" >> $GITHUB_OUTPUT
  test:
    needs: build
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.10', '3.11', '3.12']
    steps:
      - uses: actions/checkout@v4
      - run: echo "Testing with ${{ matrix.python-version }}"
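A matrix strategy fans every combination of its axes out into a separate job. A small Python sketch of the expansion (the os axis is added here purely for illustration):

```python
from itertools import product

# Each key is a matrix axis; every combination becomes one CI job.
matrix = {
    'python-version': ['3.10', '3.11', '3.12'],
    'os': ['ubuntu-latest', 'macos-latest'],
}

jobs = [dict(zip(matrix, combo)) for combo in product(*matrix.values())]
for job in jobs:
    print(job)
print(f'{len(jobs)} jobs generated')  # 3 versions x 2 runners = 6 jobs
```

This is why large matrices get expensive quickly: job count is the product of all axis lengths.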
Reusable actions from the community:
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: '20'
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: us-east-1
GitLab CI is GitLab's built-in CI/CD platform. It's known for its powerful pipeline features and easy setup.
Create .gitlab-ci.yml in your repository:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: python:3.11
  script:
    - pip install -r requirements.txt
    - python -m compileall src
  artifacts:
    paths:
      - build/

test:
  stage: test
  image: python:3.11
  script:
    - pip install pytest pytest-cov
    - pytest --cov=src
  coverage: '/TOTAL.*\s+(\d+%)$/'

deploy:
  stage: deploy
  script:
    - echo "Deploying to production"
  only:
    - main
Runners execute pipeline jobs. You can use shared runners or register your own:
# Register a runner
gitlab-runner register \
--url https://gitlab.com \
--token REGISTRATION_TOKEN \
--executor docker \
--docker-image ubuntu:latest
deploy:
  script: echo "Deploying"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

build:
  stage: build
  script: echo "Building"
  artifacts:
    paths:
      - build/

test:
  stage: test
  script: echo "Testing"
  needs:
    - build
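needs turns jobs into a dependency graph rather than strict sequential stages. A sketch of how such a DAG resolves to a valid execution order, using Python's standard-library graphlib:

```python
from graphlib import TopologicalSorter

# Each job maps to the set of jobs it needs, mirroring the pipeline above.
needs = {
    'build': set(),
    'test': {'build'},
    'deploy': {'test'},
}

order = list(TopologicalSorter(needs).static_order())
print(order)  # ['build', 'test', 'deploy']
```

Jobs with no path between them in the graph can run in parallel, which is how DAG pipelines finish faster than stage-by-stage ones.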
Jenkins is one of the oldest and most widely used CI/CD tools. It's highly customizable with thousands of plugins.
# Using Docker
docker run -d -p 8080:8080 -p 50000:50000 \
-v jenkins_home:/var/jenkins_home \
jenkins/jenkins:lts
# Using apt (Ubuntu)
wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt update
sudo apt install jenkins
pipeline {
    agent any
    environment {
        DOCKER_IMAGE = 'myapp'
        REGISTRY = 'docker.io'
    }
    stages {
        stage('Build') {
            steps {
                sh 'pip install -r requirements.txt'
            }
        }
        stage('Test') {
            steps {
                sh 'pytest --cov=src'
            }
            post {
                always {
                    junit 'reports/*.xml'
                    archiveArtifacts artifacts: 'coverage.xml'
                }
            }
        }
        stage('Build Docker') {
            steps {
                sh '''
                    docker build -t $DOCKER_IMAGE:$BUILD_NUMBER .
                    docker tag $DOCKER_IMAGE:$BUILD_NUMBER $DOCKER_IMAGE:latest
                '''
            }
        }
        stage('Deploy') {
            when {
                branch 'main'
            }
            steps {
                sh 'kubectl apply -f k8s/'
            }
        }
    }
    post {
        success {
            slackSend color: 'good', message: "Build succeeded!"
        }
        failure {
            slackSend color: 'danger', message: "Build failed!"
        }
    }
}
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        sh 'make build'
    }
    stage('Test') {
        sh 'make test'
    }
}
Build automation tools compile your code, manage dependencies, and create deployable artifacts.
.PHONY: build test clean install run docker

build:
	go build -o bin/app

test:
	go test -v -coverprofile=coverage.out ./...

clean:
	rm -rf bin/ coverage.out

install:
	go install

run:
	go run main.go

docker:
	docker build -t myapp:latest .
# pom.xml
<project>
  <groupId>com.example</groupId>
  <artifactId>myapp</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter</artifactId>
    </dependency>
  </dependencies>
</project>

# Commands
mvn clean package    # Build
mvn test             # Test
mvn spring-boot:run  # Run
// build.gradle.kts
plugins {
    kotlin("jvm") version "1.9.0"
    application
}

application {
    mainClass.set("com.example.App")
}

dependencies {
    implementation("org.jetbrains.kotlin:kotlin-stdlib")
}

tasks.test {
    useJUnit()
}

// Commands
gradle build    # Build
gradle test     # Test
gradle run      # Run
gradle bootJar  # Spring Boot JAR
// package.json
{
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon server.js",
    "test": "jest --coverage",
    "build": "webpack --mode production",
    "lint": "eslint src/"
  }
}
Build artifacts need to be stored, versioned, and managed. Artifact repositories provide centralized storage.
# .npmrc for NPM
@myorg:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=${NPM_TOKEN}
# In workflow
- name: Publish to GitHub Packages
  run: npm publish
  env:
    NODE_AUTH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# GitHub Container Registry
docker build -t ghcr.io/user/image:latest .
echo $CR_PAT | docker login ghcr.io -u USER --password-stdin
docker push ghcr.io/user/image:latest
# AWS ECR
aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_REGISTRY
docker build -t $ECR_REGISTRY/myapp:latest .
docker push $ECR_REGISTRY/myapp:latest
# Upload artifact
curl -u admin:password -X PUT \
"https://artifactory.example.com/artifactory/libs-release/myapp.jar" \
-T myapp.jar
# Download artifact
curl -u admin:password -o myapp.jar \
"https://artifactory.example.com/artifactory/libs-release/myapp.jar"
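Repositories accumulate many versions of the same artifact, so version ordering matters. Below is a sketch of picking the newest tag by semantic version, assuming plain MAJOR.MINOR.PATCH tags with no pre-release suffixes:

```python
def parse_semver(tag: str) -> tuple:
    """Turn '1.10.2' into (1, 10, 2) so tuples compare numerically."""
    return tuple(int(part) for part in tag.split('.'))

tags = ['1.0.0', '1.10.2', '1.2.9', '0.9.9']
latest = max(tags, key=parse_semver)
print(latest)  # 1.10.2 -- tuple comparison beats naive string sorting
```

A plain string sort would wrongly rank '1.2.9' above '1.10.2', which is why version-aware comparison is worth a helper.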
How you deploy to production matters. Different strategies trade off between risk, speed, and complexity.
Gradually replace old instances with new ones:
# Kubernetes rolling update
kubectl rollout restart deployment/myapp
# Kubernetes deployment config
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
Pros: Simple, no downtime
Cons: Can't easily roll back instantly
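The mechanics can be sketched in a few lines: with maxSurge: 1 and maxUnavailable: 0, one new pod starts before one old pod is retired, so capacity never drops. This is a simplified model, not Kubernetes' actual controller logic:

```python
def rolling_update(old_pods, new_version):
    """Replace pods one at a time: surge a new pod, then retire an old one."""
    old = list(old_pods)
    new = []
    while old:
        new.append(f'{new_version}-{len(new)}')  # surge: +1 pod (maxSurge: 1)
        old.pop()  # retire one old pod; capacity never dips (maxUnavailable: 0)
    return new

print(rolling_update(['v1-0', 'v1-1', 'v1-2'], 'v2'))
# ['v2-0', 'v2-1', 'v2-2']
```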
Two identical environments, with traffic swapped all at once:
# Deploy to green
kubectl apply -f green-deployment.yaml
# Test green
curl -H "Host: app.green.example.com" http://lb
# Switch traffic
kubectl patch service myapp -p '{"spec":{"selector":{"version":"green"}}}'
Pros: Instant rollback, no partial states
Cons: Requires double resources
Gradually shift traffic to new version:
# Istio canary
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp-v1
      weight: 90
    - destination:
        host: myapp-v2
      weight: 10
Pros: Test with real traffic, gradual rollout
Cons: More complex setup
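The weight-based split can be simulated by hashing each user into one of 100 buckets and sending buckets below the canary weight to v2. This is a sketch of the idea, not Istio's actual routing algorithm; hashing keeps each user's assignment stable across requests:

```python
import hashlib

def route(user_id: str, canary_weight: int = 10) -> str:
    """Deterministically route a user to v1 or v2 based on a hash bucket."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return 'myapp-v2' if bucket < canary_weight else 'myapp-v1'

versions = [route(f'user-{i}') for i in range(1000)]
share = versions.count('myapp-v2') / len(versions)
print(f'v2 share: {share:.1%}')  # roughly 10% of users
```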
// In code
if (featureFlags.isEnabled('new-feature')) {
    return renderNewFeature();
} else {
    return renderOldFeature();
}

// Toggle without deployment
featureFlags.enable('new-feature')
featureFlags.disable('new-feature')
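The toggle API above is illustrative. Below is a minimal Python sketch of a flag store with percentage rollouts (a hypothetical class, not a real library):

```python
import zlib

class FeatureFlags:
    """Toy in-memory flag store supporting percentage rollouts."""

    def __init__(self):
        self._rollouts = {}

    def enable(self, name, percent=100):
        self._rollouts[name] = percent

    def disable(self, name):
        self._rollouts.pop(name, None)

    def is_enabled(self, name, user_id):
        # Hash flag+user into 100 buckets so each user's result is stable
        # and each flag's rollout is independent of other flags.
        percent = self._rollouts.get(name, 0)
        bucket = zlib.crc32(f'{name}:{user_id}'.encode()) % 100
        return bucket < percent

flags = FeatureFlags()
flags.enable('new-feature', percent=50)
print(flags.is_enabled('new-feature', 'user-42'))
flags.disable('new-feature')
print(flags.is_enabled('new-feature', 'user-42'))  # False once disabled
```

Production systems (LaunchDarkly, Unleash, and similar) add targeting rules, audit logs, and persistence on top of this same core idea.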
CI/CD works best when your infrastructure is also defined in code. This ensures consistency and repeatability.
name: Terraform
on:
  push:
    paths:
      - 'infra/**'
  pull_request:
    paths:
      - 'infra/**'
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.5.0
      - name: Terraform Format
        run: terraform fmt -check
        working-directory: infra/
      - name: Terraform Init
        run: terraform init
        working-directory: infra/
      - name: Terraform Plan
        run: terraform plan
        working-directory: infra/
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve
        working-directory: infra/
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install Ansible
        run: pip install ansible
      - name: Run Ansible Playbook
        run: ansible-playbook -i inventory/prod deploy.yml
        env:
          ANSIBLE_HOST_KEY_CHECKING: 'false'
Containers have become the standard deployment format. Let's look at building and deploying containers in CI/CD.
name: Build and Push
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ghcr.io/${{ github.repository }}:${{ github.sha }}
            ghcr.io/${{ github.repository }}:latest
      - name: Build multi-platform
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: .
          format: 'sarif'
          output: 'trivy-results.sarif'
      - name: Upload Trivy results to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-results.sarif'
Enterprise CI/CD requires additional considerations for security, compliance, and observability.
GitOps uses Git as the single source of truth for infrastructure and applications:
# GitOps workflow
# 1. Developer commits code
# 2. CI builds and tests
# 3. CD updates Git with new image tag
# 4. GitOps operator (ArgoCD/Flux) syncs to cluster

# Example: Update deployment in Git
- name: Update deployment
  run: |
    cd infrastructure
    yq -i '.spec.template.spec.containers[0].image = "ghcr.io/${{ github.repository }}:${{ github.sha }}"' deployment.yaml
    git config user.email "ci@example.com"
    git config user.name "CI"
    git add deployment.yaml
    git commit -m "Deploy ${{ github.sha }}"
    git push
# Application manifest
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/user/repo.git
    targetRevision: HEAD
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Monitor your CI/CD pipelines: track build duration, success rate, deployment frequency, lead time for changes, and mean time to recovery.
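Pipeline metrics such as deployment frequency and change failure rate can be computed directly from a deployment log. A sketch with made-up records:

```python
from datetime import date

# Each record: the day a deployment happened and whether it failed.
deployments = [
    {'day': date(2024, 1, 1), 'failed': False},
    {'day': date(2024, 1, 2), 'failed': True},
    {'day': date(2024, 1, 2), 'failed': False},
    {'day': date(2024, 1, 5), 'failed': False},
]

span_days = (max(d['day'] for d in deployments)
             - min(d['day'] for d in deployments)).days + 1
frequency = len(deployments) / span_days
failure_rate = sum(d['failed'] for d in deployments) / len(deployments)
print(f'{frequency:.1f} deploys/day, {failure_rate:.0%} change failure rate')
# 0.8 deploys/day, 25% change failure rate
```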
Congratulations on completing this guide! You've learned CI/CD fundamentals, Git workflows and testing strategies, building pipelines with GitHub Actions, GitLab CI, and Jenkins, build tools and artifact management, deployment strategies, infrastructure as code, container builds, and GitOps.
Continue learning with deeper dives into Kubernetes, observability, and platform engineering.