KUBERNETES

// Orchestrate at scale.

CONTAINERS ARE JUST THE BEGINNING.

When you have dozens of services, across multiple servers, with scaling needs—managing containers manually becomes impossible. Kubernetes automates the hard stuff.

DECLARE, DON'T EXECUTE.

Tell Kubernetes what you want (3 replicas, auto-scaling, rolling updates). It figures out how to get there. Your infrastructure becomes code—versionable, testable, reviewable.

THE CLOUD-NATIVE STANDARD.

Kubernetes is the operating system of the cloud. AWS, GCP, and Azure all offer managed Kubernetes (EKS, GKE, AKS). Learn it once, deploy everywhere.

START LEARNING →

// The Path to K8s Mastery

12 lessons. Complete orchestration.

LESSON 01

Introduction to Kubernetes

Container orchestration and K8s architecture

Beginner
LESSON 02

Pods - The Smallest Unit

Run containers in Kubernetes pods

Beginner
LESSON 03

Deployments and ReplicaSets

Scale and manage containerized applications

Beginner
LESSON 04

Services - Stable Networking

Expose applications and enable service discovery

Intermediate
LESSON 05

ConfigMaps and Secrets

Manage configuration and sensitive data

Intermediate
LESSON 06

Ingress - HTTP Routing

Route external traffic to internal services

Intermediate
LESSON 07

Persistent Storage

Volumes, PVs, PVCs, and stateful applications

Intermediate
LESSON 08

Resource Limits and Quality of Service

Requests, limits, and QoS classes

Advanced
LESSON 09

Helm - Package Manager

Install and manage applications with Helm charts

Advanced
LESSON 10

Production Best Practices

Security, monitoring, and production readiness

Advanced
LESSON 11

Security in Kubernetes

RBAC, Pod Security, Network Policies, and secrets

Advanced
LESSON 12

CI/CD with Kubernetes

GitOps, ArgoCD, and automated deployments

Advanced

// Why Kubernetes?

Kubernetes (K8s) provides self-healing, scaling, rolling updates, and service discovery out of the box.

Whether you run 3 containers or 3000—Kubernetes scales with you.

The future is containerized. Kubernetes is the cockpit.

// Tools & References

📖 Kubernetes Docs

Official documentation

kubernetes.io/docs

🛠️ kubectl

Kubernetes CLI

kubernetes.io

📦 Helm

Package manager for K8s

helm.sh

🏠 Minikube

Local K8s cluster

minikube.sigs.k8s.io

☸️ Kind

Kubernetes in Docker

kind.sigs.k8s.io

📊 K9s

Terminal UI for K8s

k9scli.io

// Introduction to Kubernetes

What is Kubernetes?

Kubernetes (K8s) is an open-source container orchestration platform that automates deployment, scaling, and management of containerized applications.

Kubernetes Architecture

  • Control Plane (Master): The brain
    API Server, etcd, Scheduler, Controller Manager
  • Worker Nodes: The muscle
    Kubelet, Kube Proxy, Container Runtime (Docker/Containerd)

Key Concepts

  • Pod: Smallest deployable unit (1+ containers)
  • Node: Worker machine (VM or physical)
  • Cluster: Collection of nodes + control plane
  • Service: Stable endpoint for pods
  • Deployment: Manages pod replicas

Ways to Run Kubernetes

  • Minikube: Single-node local cluster
  • Kind: Kubernetes IN Docker
  • Cloud: EKS, GKE, AKS
  • Self-hosted: kubeadm, k3s

Install kubectl

$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/
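
Verify the client before touching a cluster (the last two commands assume a cluster such as Minikube is already running):

```shell
$ kubectl version --client   # Confirm the binary is installed and on PATH
$ kubectl cluster-info       # Show the control plane endpoint (needs a cluster)
$ kubectl get nodes          # List nodes in the cluster (needs a cluster)
```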

Quiz

What is the smallest deployable unit in Kubernetes?

What configuration format does Kubernetes commonly use?

What is the Kubernetes command-line tool called?

What container runtime does Kubernetes typically use?

Which Kubernetes component validates configurations?

Which component decides where pods run?

Where does Kubernetes store cluster state?

Which component ensures desired state matches actual state?

Answers

  1. pod
  2. yaml
  3. kubectl
  4. docker
  5. api server
  6. scheduler
  7. etcd
  8. controller

// Pods - The Smallest Unit

Understanding Pods

A Pod is the smallest deployable object in Kubernetes. It can contain one or more tightly-coupled containers that share:

  • Network namespace (same IP, ports)
  • Storage volumes
  • IPC namespace

Pod YAML

$ cat nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
      requests:
        memory: "64Mi"
        cpu: "250m"

Managing Pods

$ kubectl apply -f nginx-pod.yaml     # Create pod
$ kubectl get pods                    # List pods
$ kubectl get pods -o wide            # More details
$ kubectl describe pod nginx          # Pod details
$ kubectl logs nginx                  # View logs
$ kubectl exec -it nginx -- /bin/sh   # Shell into pod
$ kubectl delete pod nginx            # Delete pod
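
To hit the pod from your machine without a Service, port-forwarding is handy (standard kubectl; run curl in a second terminal):

```shell
$ kubectl port-forward pod/nginx 8080:80   # Tunnel local port 8080 to the pod's port 80
$ curl http://localhost:8080               # In another terminal: nginx welcome page
```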

Multi-Container Pods

spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: log-collector
    env:
    - name: LOG_LEVEL
      value: "debug"

Quiz

What do containers in a Pod share?

What file format defines a Pod?

What command creates resources from a file?

What command lists all pods?

What command views container output?

What command runs commands in a container?

What field defines CPU and memory limits?

What is a secondary container in a Pod called?

Answers

  1. network namespace
  2. yaml
  3. kubectl apply
  4. kubectl get pods
  5. kubectl logs
  6. kubectl exec
  7. resources
  8. sidecar

// Deployments and ReplicaSets

Why Not Just Pods?

Pods are mortal—they die. Deployments give you:

  • Self-healing: Automatic pod replacement
  • Scaling: Multiple replicas
  • Rolling updates: Zero-downtime deployments
  • Rollback: Go back to previous version

Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80

Managing Deployments

$ kubectl apply -f deployment.yaml                     # Create deployment
$ kubectl get deployments                              # List deployments
$ kubectl get rs                                       # List ReplicaSets
$ kubectl get pods                                     # Pods managed by deployment
$ kubectl scale deployment nginx-deployment --replicas=5          # Scale
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.26  # Update image
$ kubectl rollout status deployment/nginx-deployment   # Check rollout
$ kubectl rollout undo deployment/nginx-deployment     # Rollback

Update Strategies

  • RollingUpdate (default): Gradually replace pods
  • Recreate: Kill all, then create all (downtime)
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
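
Rollouts are tracked as numbered revisions, so you can inspect and target them; a sketch using the deployment above:

```shell
$ kubectl rollout history deployment/nginx-deployment                # List revisions
$ kubectl rollout history deployment/nginx-deployment --revision=2   # Inspect one revision
$ kubectl rollout undo deployment/nginx-deployment --to-revision=1   # Roll back to a specific revision
$ kubectl rollout pause deployment/nginx-deployment                  # Pause mid-rollout (resume to continue)
```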

Quiz

Which command updates the image in a deployment?

What field specifies number of pods?

What ensures a specific number of pods running?

What manages ReplicaSets?

What is the default update strategy?

What command reverts to previous version?

What changes the number of replicas?

What manages deployment revisions?

Answers

  1. kubectl set image
  2. replicas
  3. replicaset
  4. deployment
  5. rolling update
  6. rollback
  7. scale
  8. kubectl rollout

// Services - Stable Networking

The Service Problem

Pods are ephemeral—their IPs change. Services provide stable endpoints.

Service Types

  • ClusterIP (default): Internal cluster IP
  • NodePort: Expose on each node's IP
  • LoadBalancer: External load balancer
  • ExternalName: DNS CNAME alias

ClusterIP Service

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP

NodePort Service

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort

Managing Services

$ kubectl get services                 # List services
$ kubectl get svc                      # Short form
$ kubectl describe svc nginx-service   # Service details
$ kubectl get endpoints                # Pod IPs backing service
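
A quick way to test a ClusterIP service from inside the cluster is a throwaway pod (the busybox tag here is illustrative). Cluster DNS resolves the service name automatically:

```shell
$ kubectl run tmp --rm -it --image=busybox:1.36 --restart=Never -- wget -qO- http://nginx-service
$ kubectl run tmp --rm -it --image=busybox:1.36 --restart=Never -- nslookup nginx-service.default.svc.cluster.local
```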

Headless Services

For StatefulSets or when you need direct pod access:

spec:
  clusterIP: None
  selector:
    app: myapp
  ports:
  - port: 80

Quiz

What Service type exposes a service on each node's IP?

What is the default Service type?

What Service type works with cloud load balancers?

What field ties a Service to pods?

What port does traffic go to on the pod?

What Service has no cluster IP?

Services are discovered via what?

What Service type maps to external DNS?

Answers

  1. nodeport
  2. clusterip
  3. loadbalancer
  4. selector
  5. targetport
  6. headless
  7. dns
  8. externalname

// ConfigMaps and Secrets

Configuration in Kubernetes

Separate config from code using ConfigMaps and Secrets.

ConfigMap - Non-Sensitive Config

# From literal values
$ kubectl create configmap app-config --from-literal=ENV=production --from-literal=LOG_LEVEL=debug

# From file
$ kubectl create configmap nginx-config --from-file=nginx.conf

# From env file
$ kubectl create configmap app-env --from-env-file=.env

ConfigMap YAML

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: "postgres://db:5432/app"
  CACHE_TTL: "3600"
  config.json: |
    {
      "logLevel": "debug",
      "features": {
        "featureA": true,
        "featureB": false
      }
    }

Using ConfigMaps in Pods

spec:
  containers:
  - name: app
    image: myapp:latest
    env:
    - name: DATABASE_URL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: DATABASE_URL
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: app-config
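
Instead of wiring keys one by one, envFrom imports every key in a ConfigMap (or Secret) as environment variables; a sketch against the app-config above:

```yaml
spec:
  containers:
  - name: app
    image: myapp:latest
    envFrom:
    - configMapRef:        # every key in app-config becomes an env var
        name: app-config
    - secretRef:           # works for Secrets too
        name: db-credentials
```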

Secrets - Sensitive Data

$ kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=secret123
$ kubectl create secret tls my-tls --cert=cert.crt --key=cert.key

Secret YAML (base64 encoded)

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=
  password: c2VjcmV0MTIz

Using Secrets

env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: password
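
Remember: base64 is an encoding, not encryption. Anyone who can read the Secret can decode it; the values below are the ones from the Secret YAML above:

```shell
# Decoding needs no key; these are the db-credentials values from above
echo 'YWRtaW4=' | base64 -d && echo        # admin
echo 'c2VjcmV0MTIz' | base64 -d && echo    # secret123

# Against a live cluster, read a value back out the same way:
# kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d
```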

Quiz

Which Kubernetes object should store database passwords?

What stores non-sensitive configuration?

How can ConfigMap data be passed to pods?

How else can ConfigMap data be mounted?

Secret values are stored in what encoding?

What prevents ConfigMap changes from propagating?

What creates a ConfigMap from CLI?

What imports all configmap values as env vars?

Answers

  1. secret
  2. configmap
  3. environment variables
  4. volumes
  5. base64
  6. immutable
  7. kubectl create configmap
  8. envfrom

// Ingress - HTTP Routing

What is Ingress?

Ingress exposes HTTP/HTTPS routes from outside the cluster to Services, filling the role a reverse proxy like nginx or Apache plays in traditional setups.

Install Ingress Controller

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.4/deploy/static/provider/cloud/deploy.yaml

Ingress Resource

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80

Multiple Paths

spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

TLS/HTTPS

spec:
  tls:
  - hosts:
    - example.com
    secretName: my-tls-secret
  rules:
  - host: example.com
    ...
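
The secretName must reference a TLS Secret in the same namespace. For local testing you can self-sign one (the openssl subject is illustrative):

```shell
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout tls.key -out tls.crt -subj "/CN=example.com"
$ kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key
```

In production, a tool like cert-manager can issue and renew these certificates automatically.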

Managing Ingress

$ kubectl get ingress                   # List ingresses
$ kubectl describe ingress my-ingress   # Details
$ kubectl delete ingress my-ingress     # Delete

Quiz

What do you need to create first before using Ingress resources?

What protocol does Ingress route?

What field routes by domain name?

What routes different URLs?

What is a common Ingress controller?

What enables HTTPS in Ingress?

What references the TLS certificate?

What adds controller-specific config?

Answers

  1. ingress controller
  2. http
  3. host
  4. path
  5. nginx
  6. tls
  7. secretname
  8. annotations

// Persistent Storage

The Storage Problem

Containers are ephemeral—data is lost when they restart. PersistentVolumes provide durable storage.

PersistentVolume (PV)

Cluster-wide storage resource:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data

PersistentVolumeClaim (PVC)

Request for storage:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  resources:
    requests:
      storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard

Using PVC in Pod

spec:
  containers:
  - name: app
    image: myapp
    volumeMounts:
    - name: data
      mountPath: /var/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
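
After applying, check that the claim actually bound; a PVC stuck in Pending usually means no PV or StorageClass satisfies it:

```shell
$ kubectl get pv,pvc            # STATUS should read Bound on both sides
$ kubectl describe pvc my-pvc   # Events explain why a claim stays Pending
```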

Storage Classes

$ kubectl get storageclass          # List storage classes
NAME       PROVISIONER            RECLAIMPOLICY
standard   kubernetes.io/gce-pd   Delete
fast       pd-ssd                 Delete

Cloud Storage Examples

Note: the in-tree awsElasticBlockStore and gcePersistentDisk plugins are deprecated in favor of CSI drivers; on modern clusters you usually just pick a StorageClass and let dynamic provisioning create the disk.

# AWS EBS (legacy in-tree plugin)
spec:
  storageClassName: gp3
  awsElasticBlockStore:
    volumeID: vol-12345

# GCE PD (legacy in-tree plugin)
spec:
  storageClassName: standard
  gcePersistentDisk:
    pdName: my-disk

Quiz

What is the relationship between PVC and PV?

What represents cluster storage?

What requests storage from PV?

What access mode allows single node mount?

What provisions PVs dynamically?

What is a node-based storage type?

What determines PV handling on PVC deletion?

What field specifies where to mount in container?

Answers

  1. pvc requests storage from pv
  2. persistentvolume
  3. persistentvolumeclaim
  4. readwriteonce
  5. storageclass
  6. hostpath
  7. reclaim policy
  8. mountpath

// Resource Limits and Quality of Service

Why Resource Limits?

  • Prevent one app from consuming all resources
  • Enable scheduling decisions
  • Define QoS classes for pod eviction priority

Resource Requests vs Limits

  • Requests: Minimum guaranteed (scheduler uses this)
  • Limits: Maximum allowed (cgroup enforces this)
spec:
  containers:
  - name: app
    image: myapp
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"

QoS Classes

  • Guaranteed: Requests = Limits for every container
  • Burstable: Some requests or limits set, but not meeting the Guaranteed criteria
  • BestEffort: No requests or limits defined

Guaranteed QoS Example

spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "250m"
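
Kubernetes records the computed class on the pod status, so you can check which QoS class a pod landed in (for the pod above, expect Guaranteed):

```shell
$ kubectl get pod nginx -o jsonpath='{.status.qosClass}'
$ kubectl describe pod nginx | grep 'QoS Class'
```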

LimitRanges

Set defaults and limits for namespaces:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - default:
      memory: "256Mi"
      cpu: "500m"
    defaultRequest:
      memory: "128Mi"
      cpu: "250m"
    type: Container

ResourceQuotas

Limit total resources per namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "20"

Quiz

What does the scheduler use to decide where to place a pod?

What caps maximum resource usage?

What QoS class has guaranteed resources?

What QoS class has no requests or limits?

What QoS class is the default?

What sets default resources for namespace?

What limits total resources per namespace?

What is measured in millicores?

Answers

  1. requests
  2. limits
  3. guaranteed
  4. besteffort
  5. burstable
  6. limitrange
  7. resourcequota
  8. cpu

// Helm - Package Manager

What is Helm?

Helm is the package manager for Kubernetes. It bundles K8s manifests into "charts" that are versioned, configurable, and reusable.

Installing Helm

$ curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Helm Concepts

  • Chart: Package of K8s manifests
  • Repository: Collection of charts
  • Release: Deployed instance of a chart
  • Values: Configuration overrides

Using Charts

$ helm repo add bitnami https://charts.bitnami.com/bitnami   # Add repo
$ helm repo update                        # Update
$ helm search repo nginx                  # Search
$ helm install my-nginx bitnami/nginx     # Install
$ helm list                               # List releases
$ helm upgrade my-nginx bitnami/nginx     # Upgrade
$ helm rollback my-nginx 1                # Rollback
$ helm uninstall my-nginx                 # Uninstall

Custom Values

$ helm install my-app bitnami/wordpress --set mariadb.db.password=secret
$ helm install my-app bitnami/wordpress -f values.yaml

values.yaml Example

replicaCount: 3

image:
  repository: myapp
  tag: v1.0
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  hostname: myapp.example.com

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi

Create Your Own Chart

$ helm create mychart
$ ls mychart/
Chart.yaml  charts/  templates/  values.yaml
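
Before installing, render and sanity-check the chart locally (standard Helm commands; the release name is arbitrary):

```shell
$ helm lint mychart                                   # Static checks on the chart
$ helm template my-release mychart                    # Render manifests without a cluster
$ helm install my-release mychart --dry-run --debug   # Simulate an install against the cluster
$ helm package mychart                                # Bundle into mychart-0.1.0.tgz
```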

Quiz

What is a Helm Release?

What is a Helm package called?

What file contains default config?

What directory holds Kubernetes manifests?

What deploys a chart?

What updates a release?

What reverts to previous version?

What manages chart repositories?

Answers

  1. a deployed instance of a chart
  2. chart
  3. values.yaml
  4. templates
  5. helm install
  6. helm upgrade
  7. helm rollback
  8. helm repo

// Production Best Practices

Ready for Production?

Running K8s in production requires attention to:

Security

  • RBAC: Use Role/RoleBinding, not ClusterRole unless needed
  • Network Policies: Limit pod-to-pod communication
  • Pod Security: Use Pod Security Standards
  • Secrets: Use external secrets operators

Network Policy Example

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

High Availability

  • Multiple Replicas: Spread across nodes
  • Pod Disruption Budgets: Ensure min available during updates
  • Anti-affinity: Spread pods across nodes/zones
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp
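
The anti-affinity bullet above can be sketched as a hard rule: never schedule two myapp pods on the same node (swap in preferredDuringSchedulingIgnoredDuringExecution for a soft rule):

```yaml
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: myapp
            topologyKey: kubernetes.io/hostname   # at most one myapp pod per node
```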

Observability

  • Logging: Aggregate logs (Loki, ELK)
  • Metrics: Prometheus + Grafana
  • Tracing: Jaeger or Tempo
  • Health Checks: Liveness, readiness probes

Health Checks

spec:
  containers:
  - name: app
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5

CI/CD with Kubernetes

  • GitOps: ArgoCD or Flux
  • Image Scanning: Trivy, Clair
  • Image Updates: Renovate, Dependabot

Next Steps

  • Service Mesh: Istio, Linkerd
  • GitOps: ArgoCD, Flux
  • Multi-cluster: Federation, Karmada
  • Serverless: Knative

Quiz

What ensures a minimum number of pods during voluntary disruptions?

What checks if pod can receive traffic?

What checks if container should restart?

What delays health checks for slow starting containers?

How many pods minimum for high availability?

What spreads pods across nodes?

What restricts pod-to-pod communication?

What controls user permissions?

Answers

  1. poddisruptionbudget
  2. readiness probe
  3. liveness probe
  4. startup probe
  5. replicas
  6. anti-affinity
  7. network policy
  8. rbac

// Security in Kubernetes

Security Overview

Securing Kubernetes requires defense in depth. Protect your cluster at every layer—from infrastructure to workloads.

Security Principle: Assume breach. Design your cluster so that even if one component is compromised, the damage is contained.

RBAC (Role-Based Access Control)

RBAC controls who can do what in your cluster. Always follow the principle of least privilege.

Role vs ClusterRole

  • Role: Namespaced permissions
  • ClusterRole: Cluster-wide permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
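
kubectl can impersonate a subject to verify a binding does what you intended (assuming the Role and RoleBinding above are applied and no other bindings grant jane more):

```shell
$ kubectl auth can-i list pods --as jane -n production     # yes
$ kubectl auth can-i delete pods --as jane -n production   # no
```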

Pod Security Standards

Pod Security Standards define security policies at three levels:

  • Privileged: Unrestricted, maximum permissions
  • Baseline: Prevents known privilege escalations
  • Restricted: Best practices, hardened security
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL

Network Policies

By default, all pods can communicate. Network policies restrict this.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

Default Deny All Traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Secrets Management Best Practices

  • Never commit secrets to git - Use external secret operators
  • Use encryption at rest - Enable etcd encryption
  • Rotate secrets regularly - Automate with operators
  • Limit secret access - RBAC on secrets
$ kubectl create secret generic db-creds --from-literal=password=$(openssl rand -base64 32)

External Secrets Operators

  • External Secrets Operator: Sync secrets from AWS/GCP/Azure vaults
  • Sealed Secrets: Encrypt secrets for git storage
  • Vault by HashiCorp: Enterprise secret management

Pod Security Admission

Built-in replacement for PodSecurityPolicy (deprecated and removed in v1.25):

apiVersion: v1
kind: Namespace
metadata:
  name: restricted-ns
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest

Security Checklist

  • Enable audit logging
  • Use TLS everywhere
  • Regular vulnerability scanning (Trivy, Snyk)
  • Network segmentation with policies
  • Restrict container capabilities
  • Use read-only root filesystems
Pro Tip: Run kube-bench to check your cluster against CIS Kubernetes Benchmark.

Quiz

What controls user permissions in Kubernetes?

What binds a Role to a user or service account?

What resource restricts pod-to-pod communication?

Which Pod Security Standard is the most secure?

What Kubernetes object stores sensitive data?

What syncs secrets from cloud vaults to Kubernetes?

What security principle grants minimum necessary permissions?

What security setting prevents container filesystem modifications?

Answers

  1. rbac
  2. rolebinding
  3. networkpolicy
  4. restricted
  5. secret
  6. external secrets operator
  7. least privilege
  8. read only root filesystem

// CI/CD with Kubernetes

CI/CD Overview

Continuous Integration and Continuous Deployment (CI/CD) automates building, testing, and deploying applications to Kubernetes.

CI/CD Pipeline: Code commit → Build image → Run tests → Scan image → Deploy to staging → Deploy to production

Traditional CI/CD vs GitOps

Traditional                   GitOps
Push-based deployments        Pull-based reconciliation
CI tool pushes to cluster     Git is single source of truth
Harder to audit               Full audit trail in git
Can drift from config         Automatic drift correction

GitOps with ArgoCD

ArgoCD is the most popular GitOps tool for Kubernetes. It watches a git repo and ensures the cluster matches.

$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

ArgoCD Application

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/myapp.git
    targetRevision: HEAD
    path: k8s/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
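
Once the Application is applied, the argocd CLI shows sync state and lets you inspect or trigger syncs manually:

```shell
$ argocd app list            # Applications with sync and health status
$ argocd app diff my-app     # Compare git state against the live cluster
$ argocd app sync my-app     # Trigger a sync outside the automated policy
```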

Jenkins on Kubernetes

Running Jenkins inside your cluster gives you scalable build agents.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-pvc

Jenkins Pipeline for K8s

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Test') {
            steps {
                sh 'docker run myapp:${BUILD_NUMBER} npm test'
            }
        }
        stage('Push') {
            steps {
                sh 'docker push myapp:${BUILD_NUMBER}'
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl set image deployment/myapp myapp=myapp:${BUILD_NUMBER}'
            }
        }
    }
}

Automated Deployments

Techniques for automated deployments:

  • Image Updater: Automatically update images when new tags pushed
  • Webhook Triggers: Git webhooks trigger deployments
  • Polling: Check for new images periodically

ArgoCD Image Updater

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    argocd-image-updater.argoproj.io/image-list: myapp=myregistry/myapp
    argocd-image-updater.argoproj.io/myapp.update-strategy: semver

Progressive Delivery

  • Blue/Green: Two environments, instant switch
  • Canary: Route small % traffic to new version
  • Feature Flags: Enable features for specific users
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: myapp
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  service:
    port: 80
  analysis:
    interval: 30s
    threshold: 5
    maxWeight: 50
    stepWeight: 10

Security in CI/CD

  • Image Scanning: Trivy, Clair, Snyk
  • Policy Enforcement: OPA, Kyverno
  • RBAC for CI: Service accounts with minimal permissions
  • Secrets in CI: Use external vaults, never hardcode
$ trivy image myapp:latest

CI/CD Tools Comparison

  • ArgoCD: GitOps, Kubernetes-native, UI included
  • Flux: GitOps, CNCF graduated, GitOps Toolkit
  • Jenkins: Traditional CI, flexible, plugin ecosystem
  • GitLab CI: Integrated with Git, good for monorepos
  • GitHub Actions: Native GitHub integration, simple
Best Practice: Use GitOps for production deployments. Your git repo becomes the audit trail and single source of truth.

Quiz

What methodology uses git as the single source of truth for deployments?

What is the most popular GitOps tool for Kubernetes?

What CI tool can run inside Kubernetes with scalable agents?

What deployment strategy routes small percentage of traffic first?

What security practice checks for vulnerabilities in container images?

What tool scans container images for vulnerabilities?

What deployment strategy maintains two identical environments?

What is the CNCF-graduated GitOps alternative to ArgoCD?

Answers

  1. gitops
  2. argocd
  3. jenkins
  4. canary
  5. image scanning
  6. trivy
  7. blue green
  8. flux