// Automate everything. Deploy fearlessly.
JENKINS IS THE OLD GUARD—AND THAT'S EXACTLY WHY IT'S POWERFUL.
In an era of shiny new SaaS CI/CD platforms that want to lock you into their ecosystem, Jenkins remains the free, open-source workhorse that runs on your infrastructure. You own it. You control it. You modify it. No vendor lock-in. No per-minute billing. No arbitrary limits.
BUILD YOUR OWN CI/CD PIPELINE.
Jenkins gives you complete control over how software moves from developer keyboard to production. Every step is visible, every configuration is text, every pipeline is version-controlled. When something breaks, you have the logs, the scripts, and the power to fix it—not some support ticket waiting game.
MASTER THE AUTOMATION ENGINE.
Learn to build pipelines that test your code, build your artifacts, deploy to your servers, and notify your team. Learn Groovy scripting to customize every behavior. Integrate with Git, Docker, Kubernetes, Ansible, and hundreds of other tools. Jenkins is infinitely extensible because it's built by developers, for developers.
12 lessons. Complete Jenkins control.
Beginner: What is CI/CD? Installing Jenkins, understanding the architecture, and your first build.
Beginner: Jobs, builds, plugins, and the Jenkins dashboard. Understanding the core concepts.
Beginner: Creating your first freestyle job. Build triggers, source management, and build steps.
Intermediate: Jenkinsfile basics. Declarative vs scripted pipelines. Writing your first pipeline.
Intermediate: Stages, steps, agents, post-actions. Advanced pipeline syntax and best practices.
Intermediate: Master-agent architecture. Configuring agents. Distributed build execution.
Intermediate: Essential plugins. Pipeline plugins, Docker integration, Git integration, and more.
Intermediate: Running tests in pipeline. Test reporting. Quality gates and code coverage.
Advanced: Storing build artifacts. Archiving, artifact repositories, and artifact promotion.
Advanced: Matrix authorization. Role-based access control. Securing your Jenkins instance.
Advanced: Building Docker images in pipeline. Docker-in-Docker. Containerized builds.
Advanced: Jenkins on Kubernetes. Dynamic agents. Scaling your CI/CD infrastructure.
Jenkins is an open-source automation server that enables developers to build, test, and deploy software reliably. It's the most widely used CI/CD tool in the world, with over 200,000 active installations. Originally created by Kohsuke Kawaguchi in 2004 as Hudson, it was renamed Jenkins in 2011 after a dispute with Oracle.
Jenkins automates the software development process through continuous integration and continuous delivery (CI/CD). It monitors version control systems for changes, automatically triggers builds, runs tests, and can deploy applications to various environments.
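That loop can be sketched as a minimal declarative pipeline. The build and test commands below are placeholders for whatever your project uses:

```groovy
// Minimal CI loop: watch version control, build, test
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')   // check the repository roughly every 5 minutes
    }
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // placeholder build command
            }
        }
        stage('Test') {
            steps {
                sh 'make test'    // placeholder test command
            }
        }
    }
}
```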
With so many SaaS CI/CD platforms available—GitHub Actions, GitLab CI, CircleCI, Travis CI—you might wonder why Jenkins still matters. Here's why:
Jenkins can run on any machine that supports Java. Here's how to install it on Linux:
# Add Jenkins repository
curl -fsSL https://pkg.jenkins.io/jenkins.io-2023.key | sudo gpg --dearmor -o /usr/share/keyrings/jenkins.gpg
echo "deb [signed-by=/usr/share/keyrings/jenkins.gpg] https://pkg.jenkins.io/debian binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list

# Update and install
sudo apt update
sudo apt install jenkins -y

# Start Jenkins
sudo systemctl start jenkins
sudo systemctl enable jenkins
After installation, access Jenkins at http://localhost:8080. You'll need the initial admin password from:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Understanding Jenkins architecture is crucial for effective usage:
Let's create your first Jenkins job—a simple freestyle project:
echo "Hello from Jenkins!"

Watch the console output. You've just executed your first automated build!
In Jenkins, a job (now called "item") is a configurable task that Jenkins executes. A build is a single execution of that job. Each build has a number, timestamp, and outcome (success, failure, or unstable).
Jobs can be configured with:
Triggers define when Jenkins starts a build:
Jenkins periodically checks your repository for changes:
# Check every 5 minutes
H/5 * * * *

# Check every 15 minutes during work hours (Mon-Fri, 9am-5pm)
H/15 9-17 * * 1-5
Your version control system notifies Jenkins of changes (more efficient than polling):
# In GitHub, add webhook: # http://YOUR_JENKINS_URL/github-webhook/
Run builds on a schedule (useful for reports, maintenance):
# Daily at midnight
H 0 * * *

# Every 6 hours
H H/6 * * *
Make your builds flexible with parameters:
Parameters are exposed to build steps as environment variables ($PARAM_NAME):

#!/bin/bash
# Build with parameters
echo "Building version: $VERSION"
echo "Environment: $ENVIRONMENT"

# Use in Docker builds
docker build -t myapp:$VERSION .
Each job gets a dedicated workspace directory—typically /var/lib/jenkins/workspace/JOB_NAME. This is where:
The workspace persists between builds unless you configure cleanup.
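If stale workspace contents ever cause flaky builds, a post-build cleanup is the usual fix. A sketch, assuming the Workspace Cleanup plugin (which provides the cleanWs() step):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // placeholder build command
            }
        }
    }
    post {
        // Wipe the workspace after every run so the next build starts clean
        always { cleanWs() }
    }
}
```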
Every build provides:
Freestyle projects are the most flexible job type in Jenkins. They allow you to configure various build triggers, source code management, build steps, and post-build actions through a web interface.
Let's build a real project that pulls from Git:
https://github.com/yourusername/your-repo.git
Branch specifier: */main or */master

Jenkins automatically checks out code, but you can customize this.
#!/bin/bash
npm install
npm test

#!/bin/bash
npm run build
build/**/*

**/test-results/*.xml

Jenkins provides built-in environment variables:
# Useful Jenkins environment variables
$WORKSPACE      # Path to job workspace
$BUILD_NUMBER   # Current build number
$BUILD_URL      # URL to this build
$JOB_NAME       # Name of the job
$GIT_COMMIT     # Current Git commit hash
$GIT_BRANCH     # Current Git branch
$CHANGES        # Changes since last build
You can also define custom environment variables in job configuration.
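As an illustrative sketch, a build step can combine Jenkins-provided variables with your own (the artifact name below is hypothetical):

```shell
#!/bin/bash
# Fall back to "dev" when BUILD_NUMBER is unset, so the script
# also runs outside Jenkins (e.g. on a developer machine).
VERSION="${BUILD_NUMBER:-dev}"
ARTIFACT="myapp-${VERSION}.tar.gz"   # hypothetical artifact name
echo "Packaging ${ARTIFACT}"
```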
Configure when builds should fail:
Create a freestyle project that:
Pipeline as Code (PaC) means defining your entire CI/CD workflow in a file called Jenkinsfile that's version-controlled alongside your code. This brings:
A Jenkinsfile can be written in two syntaxes:
pipeline {
agent any
environment {
MY_VAR = 'value'
}
stages {
stage('Build') {
steps {
echo 'Building...'
sh 'npm install'
}
}
stage('Test') {
steps {
echo 'Testing...'
sh 'npm test'
}
}
stage('Deploy') {
steps {
echo 'Deploying...'
}
}
}
post {
always {
echo 'Pipeline complete'
}
success {
echo 'Build succeeded!'
}
failure {
echo 'Build failed!'
}
}
}
node {
stage('Build') {
sh 'npm install'
}
stage('Test') {
sh 'npm test'
}
stage('Deploy') {
sh './deploy.sh'
}
}
Two ways to use Jenkinsfile:
Paste Jenkinsfile directly into Jenkins job configuration.
Jenkinsfile lives in your repository:
(in the repository root, named Jenkinsfile)

pipeline {
// Where to run (any agent, or specific label)
agent { label 'my-agent' }
// Environment variables
environment {
APP_NAME = 'myapp'
REGISTRY = 'docker.io'
}
// Define stages
stages {
stage('Checkout') {
steps {
// Get source code
checkout scm
}
}
stage('Build') {
steps {
// Build your application
sh 'make build'
}
}
stage('Test') {
steps {
// Run tests
sh 'make test'
}
}
}
// Actions after stages
post {
always { cleanWs() }
success { echo 'Done!' }
}
}
Create a Jenkinsfile that:
Stages are logical groupings of work. Steps are the actual commands that do the work.
stage('Build') {
steps {
sh 'npm install'
sh 'npm run build'
}
}
You can run multiple steps in sequence. Each step must succeed for the next to run.
Conditionally execute stages:
stages {
stage('Deploy Prod') {
when {
branch 'main'
}
steps {
sh './deploy-prod.sh'
}
}
stage('Deploy Staging') {
when {
branch 'develop'
}
steps {
sh './deploy-staging.sh'
}
}
}
Other when conditions:
when {
environment name: 'DEPLOY_TO', value: 'production'
expression { return params.DEPLOY }
}
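Conditions can also be combined. A sketch using the built-in allOf and not wrappers:

```groovy
when {
    // Run only on main, only when the DEPLOY parameter is set,
    // and never for pull request builds
    allOf {
        branch 'main'
        expression { return params.DEPLOY }
        not { changeRequest() }
    }
}
```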
Speed up builds by running stages in parallel:
stages {
stage('Test') {
parallel {
stage('Unit Tests') {
steps {
sh 'npm run unit-test'
}
}
stage('Integration Tests') {
steps {
sh 'npm run integration-test'
}
}
stage('E2E Tests') {
steps {
sh 'npm run e2e-test'
}
}
}
}
}
Run the same stage with different configurations:
matrix {
axes {
axis {
name 'NODE_VERSION'
values '14', '16', '18', '20'
}
axis {
name 'OS'
values 'ubuntu', 'centos'
}
}
stages {
stage('Test') {
steps {
sh 'npm test'
}
}
}
}
Actions that run after all stages:
post {
always {
echo 'Runs regardless of result'
}
success {
echo 'Only runs on success'
}
failure {
echo 'Only runs on failure'
}
unstable {
echo 'Runs when build is unstable'
}
changed {
echo 'Runs when build result differs from previous'
}
cleanup {
echo 'Runs after always, even if failed'
cleanWs()
}
}
stage('Deploy') {
steps {
timeout(time: 10, unit: 'MINUTES') {
retry(3) {
sh './deploy.sh'
}
}
}
}
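Another common guard before a risky stage is a manual approval, using the built-in input step. The timeout ensures an unanswered prompt eventually aborts the build:

```groovy
stage('Deploy') {
    steps {
        // Pause until a human confirms; abort if nobody answers in time
        timeout(time: 30, unit: 'MINUTES') {
            input message: 'Deploy to production?', ok: 'Deploy'
        }
        sh './deploy.sh'
    }
}
```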
Jenkins can distribute builds across multiple machines. The architecture consists of:
Static machines that are always available:
# In Jenkins UI:
# Manage Jenkins > Manage Nodes > New Node
# Configure:
#   - Number of executors: 2
#   - Remote root directory: /var/jenkins/agents/agent1
#   - Labels: linux docker
#   - Launch method: via SSH
Agents that connect to the controller themselves via JNLP (the inbound method formerly launched through Java Web Start):
# On agent machine:
java -jar agent.jar -jnlpUrl http://jenkins:8080/computer/agent1/slave-agent.jnlp -secret SECRET_TOKEN
Dynamic agents that spin up on demand (AWS, Azure, Kubernetes):
# Install the Amazon EC2 plugin
# Configure cloud in Manage Jenkins > Manage Clouds
# Agents automatically launch and terminate based on demand
pipeline {
// A declarative pipeline takes one top-level agent; the commented
// lines below are alternatives, not simultaneous declarations.
// Run on any available agent
agent any
// Or specify by label:
//   agent { label 'docker && linux' }
// Or combine multiple labels (AND logic):
//   agent { label 'docker && ubuntu && high-memory' }
// Or exclude labels:
//   agent { node { label '!windows' } }
stages {
stage('Build') {
agent { label 'docker' }
steps {
sh 'docker build .'
}
}
}
}
Run builds in isolated Docker containers:
pipeline {
agent {
docker {
image 'node:18-alpine'
label 'docker-host'
args '-v /tmp:/tmp'
}
}
stages {
stage('Build') {
steps {
sh 'node --version'
}
}
}
}
This pulls the Docker image, runs a container, and executes the pipeline inside it.
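A per-stage variant is also possible, so only selected stages run inside a container; reuseNode keeps the stage on the same node and workspace as the rest of the pipeline. A sketch:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            // Only this stage runs inside the Node container
            agent {
                docker {
                    image 'node:18-alpine'
                    reuseNode true
                }
            }
            steps {
                sh 'npm ci && npm run build'
            }
        }
    }
}
```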
For dynamic scaling, use Kubernetes:
pipeline {
agent {
kubernetes {
label 'pod-template'
defaultContainer 'jnlp'
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: builder
image: node:18
command:
- cat
tty: true
'''
}
}
stages {
stage('Build') {
steps {
container('builder') {
sh 'npm install'
}
}
}
}
}
Jenkins' power comes from its plugin ecosystem. Here are must-have plugins:
# Via Jenkins UI:
# Manage Jenkins > Manage Plugins > Available
# Search and install

# Via REST API:
curl -X POST http://jenkins:8080/pluginManager/installNecessaryPlugins \
  -d ""

# Or download a .hpi file and copy to:
# /var/lib/jenkins/plugins/
pipeline {
triggers {
githubPush()
}
stages {
stage('Build') {
steps {
echo 'GitHub push triggered this build'
}
}
}
}
pipeline {
agent {
docker {
image 'maven:3.8-openjdk-11'
args '-v ~/.m2:/root/.m2'
}
}
stages {
stage('Build with Maven') {
steps {
sh 'mvn --version'
}
}
}
}
Automated testing in CI/CD catches bugs early, prevents bad code from reaching production, and gives confidence in your deployments. A proper CI/CD pipeline should:
pipeline {
agent any
stages {
stage('Install Dependencies') {
steps {
sh 'npm install'
}
}
stage('Unit Tests') {
steps {
sh 'npm test -- --coverage'
}
}
stage('Integration Tests') {
steps {
sh 'npm run integration-test'
}
}
}
post {
always {
junit 'test-results/*.xml'
cobertura coberturaReportFile: 'coverage/cobertura-coverage.xml'
}
}
}
Jenkins can display beautiful test reports:
# Most test frameworks can output JUnit XML
# npm test -- --junit test-results/junit.xml
# mvn test -Dtest=JUnitTest -Dsurefire.useFile=false
# Jenkins configuration:
post {
always {
junit 'test-results/**/*.xml'
}
}
Enforce minimum quality standards:
pipeline {
stages {
stage('Test & Coverage') {
steps {
sh 'npm test && npm run coverage'
}
post {
always {
jacoco()
}
}
}
stage('Quality Gate') {
steps {
script {
// readFile returns a string, so parse the report before comparing numbers.
// readJSON comes from the Pipeline Utility Steps plugin; the path into
// the report (total.lines.pct here) depends on your coverage tool's format.
def coverage = readJSON(file: 'coverage/coverage.json').total.lines.pct
def threshold = 80
if (coverage < threshold) {
error "Code coverage ${coverage}% is below threshold ${threshold}%"
}
}
}
}
}
}
Speed up test execution with parallelization:
pipeline {
stages {
stage('Parallel Tests') {
parallel {
stage('Unit Tests') {
steps {
sh 'npm run unit-tests'
}
}
stage('Integration Tests') {
steps {
sh 'npm run integration-tests'
}
}
stage('E2E Tests') {
steps {
sh 'npm run e2e-tests'
}
}
}
}
}
}
Artifacts are files produced by a build—binaries, Docker images, installers, reports. Jenkins can archive these files so they're available after the build completes.
Configure in post-build actions:
build/**/*.jar

pipeline {
stages {
stage('Build') {
steps {
sh 'npm run build'
}
}
}
post {
success {
archiveArtifacts artifacts: 'build/**/*', fingerprint: true
}
}
}
The fingerprint option helps track which build produced which artifact.
// archiveArtifacts has no retention parameters of its own; retention is
// configured through the build discarder (keep artifacts 5 days, max 10 builds)
options {
buildDiscarder(logRotator(artifactDaysToKeepStr: '5', artifactNumToKeepStr: '10'))
}
archiveArtifacts artifacts: 'build/**', allowEmptyArchive: true
Or configure globally in Manage Jenkins > Configure System > Artifact Manager
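To pull an archived artifact into a different job, the Copy Artifact plugin is the usual route. A sketch, where the upstream job name is illustrative:

```groovy
stage('Fetch Build Output') {
    steps {
        // Copy archived files from the latest successful run of an
        // upstream job named "myapp-build" (illustrative name)
        copyArtifacts projectName: 'myapp-build',
                      selector: lastSuccessful(),
                      filter: 'build/**/*'
    }
}
```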
Promote builds to different "environments" (e.g., from staging to production):
// Using promotion plugin
pipeline {
stages {
stage('Build') {
steps {
sh 'make build'
}
}
}
}
// Promotion process:
// 1. Configure in job > Promotion
// 2. Name: "Production Ready"
// 3. Criteria: Manual approval + tests passed
// 4. Actions: Copy artifacts, trigger deployment
For production, use external artifact repositories:
// Using Artifactory plugin
pipeline {
stages {
stage('Build & Publish') {
steps {
rtBuildInfo()
rtUpload(
serverId: 'artifactory-server',
spec: '''{
"files": [{
"pattern": "build/**/*.jar",
"target": "libs-release-local/"
}]
}'''
)
}
}
}
}
// Using S3 plugin
pipeline {
post {
success {
s3Upload(
bucket: 'my-artifacts',
file: 'build/app.jar',
path: "builds/${BUILD_NUMBER}/app.jar"
)
}
}
}
Jenkins is powerful, which means it needs strong security. A compromised Jenkins can execute arbitrary code on your infrastructure.
Configure in Manage Jenkins > Security:
Install Role Strategy Plugin for granular control:
# Manage Roles:
# - Global roles: admin, developer, viewer
# - Project roles: regex patterns for job names
# - Agent roles: for agent management

# Example roles:
# Global:
#   - admin: Overall/Administer
#   - developer: Overall/Read + Job/Build + Job/Create
#   - viewer: Overall/Read
#
# Project (regex):
#   - dev-.*: Job/Build on jobs matching dev-*
#   - prod-.*: Job/Build + Job/Deploy on prod-*
Store secrets securely with Jenkins Credentials:
pipeline {
environment {
DOCKER_CREDS = credentials('docker-hub-credentials')
}
stages {
stage('Deploy') {
steps {
sh 'docker login -u $DOCKER_CREDS_USR -p $DOCKER_CREDS_PSW'
}
}
}
}
For plain secrets:
withCredentials([string(credentialsId: 'my-secret', variable: 'MY_SECRET')]) {
sh 'echo $MY_SECRET'
}
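withCredentials supports other binding types too. A sketch combining a username/password pair and an SSH private key, where the credential IDs and hosts are illustrative:

```groovy
withCredentials([
    usernamePassword(credentialsId: 'nexus-login',   // illustrative ID
                     usernameVariable: 'NEXUS_USER',
                     passwordVariable: 'NEXUS_PASS'),
    sshUserPrivateKey(credentialsId: 'deploy-key',   // illustrative ID
                      keyFileVariable: 'SSH_KEY')
]) {
    // Variables exist only inside this block and are masked in the log
    sh 'curl -u $NEXUS_USER:$NEXUS_PASS https://nexus.example.com/health'
    sh 'ssh -i $SSH_KEY deploy@server ./restart.sh'
}
```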
Enable CSRF protection (the crumb issuer) to prevent cross-site request forgery:
Docker transforms CI/CD by providing consistent build environments. Let's explore how to use Docker in Jenkins pipelines.
pipeline {
agent any
environment {
REGISTRY = 'docker.io'
IMAGE_NAME = 'myapp'
REGISTRY_CREDS = credentials('docker-hub')
}
stages {
stage('Checkout') {
steps {
checkout scm
}
}
stage('Build Image') {
steps {
sh '''
docker build -t $IMAGE_NAME:$BUILD_NUMBER .
docker build -t $IMAGE_NAME:latest .
'''
}
}
stage('Test Image') {
steps {
sh '''
docker run --rm $IMAGE_NAME:$BUILD_NUMBER npm test
'''
}
}
stage('Push Image') {
steps {
sh '''
echo $REGISTRY_CREDS_PSW | docker login $REGISTRY -u $REGISTRY_CREDS_USR --password-stdin
docker push $IMAGE_NAME:$BUILD_NUMBER
docker push $IMAGE_NAME:latest
docker logout
'''
}
}
}
}
Sometimes you need to run Docker inside Docker:
// In agent configuration or pipeline.
// Note: mounting the host's Docker socket lets the container drive the
// host daemon, while a true docker:dind image runs its own daemon and
// needs --privileged; in practice you pick one approach, not both.
agent {
docker {
image 'docker:24-dind'
args '--privileged -v /var/run/docker.sock:/var/run/docker.sock'
}
}
stages {
stage('Build') {
steps {
sh 'docker build .'
}
}
}
Build efficient, small images with multi-stage builds:
# Dockerfile

# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
# In pipeline
stage('Build Optimized Image') {
steps {
sh 'docker build --target production -t myapp:optimized .'
}
}
Handle secrets during build without exposing them in layers:
# BuildKit secrets
# Enable BuildKit
export DOCKER_BUILDKIT=1
# Build with secrets
docker build --secret id=npm,env=NPM_TOKEN .
# Dockerfile
RUN --mount=type=secret,id=npm \
NPM_TOKEN=$(cat /run/secrets/npm) npm install
pipeline {
environment {
AWS_REGION = 'us-east-1'
ECR_REGISTRY = "${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com"
}
stages {
stage('Build & Push to ECR') {
steps {
sh '''
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
docker build -t $ECR_REGISTRY/myapp:$BUILD_NUMBER .
docker push $ECR_REGISTRY/myapp:$BUILD_NUMBER
'''
}
}
}
}
Running Jenkins in Kubernetes provides dynamic scaling, resource efficiency, and infrastructure as code. Let's set up Jenkins on Kubernetes and configure dynamic agents.
# jenkins.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: jenkins/jenkins:lts
ports:
- containerPort: 8080
- containerPort: 50000
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-pvc
---
apiVersion: v1
kind: Service
metadata:
name: jenkins
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
selector:
app: jenkins
# Apply
kubectl apply -f jenkins.yaml

# Get admin password
kubectl exec -it jenkins-xxxxx -- cat /var/jenkins_home/secrets/initialAdminPassword
# Pod template configuration:
# - Name: builder
# - Label: builder
# - Container template:
#   - Name: jnlp
#   - Image: jenkins/inbound-agent:latest
#   - Working directory: /home/jenkins/agent
#   - Command: "" (leave empty)
#   - Arguments: "" (leave empty)
pipeline {
agent {
kubernetes {
label 'jenkins-builder'
defaultContainer 'builder'
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: builder
image: node:18-alpine
command:
- cat
tty: true
volumeMounts:
- name: workspace
mountPath: /workspace
volumes:
- name: workspace
emptyDir: {}
'''
}
}
stages {
stage('Build') {
steps {
container('builder') {
sh '''
cd /workspace
npm install
npm test
'''
}
}
}
}
}
Create custom agent images for your tech stack:
# Dockerfile for a custom agent (Alpine variant, so apk is available)
FROM jenkins/inbound-agent:alpine

# Install the tools you need; package names are Alpine's
# (kubectl and helm may need to come from upstream releases
# on older Alpine versions)
USER root
RUN apk add --no-cache \
nodejs \
npm \
python3 \
py3-pip \
docker-cli \
kubectl \
helm
# Create working directory
WORKDIR /home/jenkins/agent
# Switch back to jenkins user
USER jenkins
# Build and push to your registry
docker build -t myregistry/jenkins-agent:node18 .
docker push myregistry/jenkins-agent:node18
Then use in pipeline:
agent {
kubernetes {
label 'custom-agent'
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: jnlp
image: myregistry/jenkins-agent:node18
'''
}
}
Configure resource requests and limits:
agent {
kubernetes {
label 'builder'
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: builder
image: node:18
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "500m"
'''
}
}
You've completed the Jenkins mastery guide. You now know how to:
Next steps: