// Orchestrate at scale.
CONTAINERS ARE JUST THE BEGINNING.
When you have dozens of services across multiple servers, each with its own scaling needs, managing containers manually becomes impossible. Kubernetes automates the hard stuff.
DECLARE, DON'T EXECUTE.
Tell Kubernetes what you want (3 replicas, auto-scaling, rolling updates) and it figures out how to get there. Your infrastructure becomes code: versionable, testable, reviewable.
THE CLOUD-NATIVE STANDARD.
Kubernetes is the operating system of the cloud. AWS, GCP, and Azure all offer managed Kubernetes. Learn it once, deploy everywhere.
12 lessons. Complete orchestration.
- Container orchestration and K8s architecture (Beginner)
- Run containers in Kubernetes pods (Beginner)
- Scale and manage containerized applications (Beginner)
- Expose applications and enable service discovery (Intermediate)
- Manage configuration and sensitive data (Intermediate)
- Route external traffic to internal services (Intermediate)
- Volumes, PVs, PVCs, and stateful applications (Intermediate)
- Install and manage applications with Helm charts (Advanced)
- Security, monitoring, and production readiness (Advanced)
- RBAC, Pod Security, Network Policies, and secrets (Advanced)
- GitOps, ArgoCD, and automated deployments (Advanced)
Kubernetes (K8s) provides:
Whether you run 3 containers or 3,000, Kubernetes scales with you.
The future is containerized. Kubernetes is the cockpit.
Kubernetes (K8s) is an open-source container orchestration platform that automates deployment, scaling, and management of containerized applications.
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/
What is the smallest deployable unit in Kubernetes?
What configuration format does Kubernetes commonly use?
What is the Kubernetes command-line tool called?
What container runtime does Kubernetes typically use?
Which Kubernetes component validates configurations?
Which component decides where pods run?
Where does Kubernetes store cluster state?
Which component ensures desired state matches actual state?
A Pod is the smallest deployable object in Kubernetes. It can contain one or more tightly-coupled containers that share:
$ cat nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
      requests:
        memory: "64Mi"
        cpu: "250m"
$ kubectl apply -f nginx-pod.yaml    # Create pod
$ kubectl get pods                   # List pods
$ kubectl get pods -o wide           # More details
$ kubectl describe pod nginx         # Pod details
$ kubectl logs nginx                 # View logs
$ kubectl exec -it nginx -- /bin/sh  # Shell into pod
$ kubectl delete pod nginx           # Delete pod
spec:
containers:
- name: web
image: nginx
- name: sidecar
image: log-collector
env:
- name: LOG_LEVEL
value: "debug"
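A sidecar like the log collector above usually reads the main container's output through a shared volume; a minimal sketch using an emptyDir (the log-collector image and mount paths are illustrative):

```yaml
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx   # nginx writes its logs here
  - name: sidecar
    image: log-collector          # hypothetical image, as in the example above
    volumeMounts:
    - name: logs
      mountPath: /logs            # sidecar reads the same files
  volumes:
  - name: logs
    emptyDir: {}                  # scratch space shared by both containers, deleted with the Pod
```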
What do containers in a Pod share?
What file format defines a Pod?
What command creates resources from a file?
What command lists all pods?
What command views container output?
What command runs commands in a container?
What field defines CPU and memory limits?
What is a secondary container in a Pod called?
Pods are mortal; when they die, they are not recreated on their own. Deployments give you:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.25
ports:
- containerPort: 80
$ kubectl apply -f deployment.yaml                                # Create deployment
$ kubectl get deployments                                         # List deployments
$ kubectl get rs                                                  # List ReplicaSets
$ kubectl get pods                                                # Pods managed by deployment
$ kubectl scale deployment nginx-deployment --replicas=5          # Scale
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.26  # Update image
$ kubectl rollout status deployment/nginx-deployment              # Check rollout
$ kubectl rollout undo deployment/nginx-deployment                # Rollback
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
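Scaling can also be automated: a HorizontalPodAutoscaler adjusts the replica count from observed CPU utilization. A sketch, assuming the nginx-deployment above and a metrics server running in the cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:               # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # scale out when average CPU exceeds 70% of requests
```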
Which command updates the image in a deployment?
What field specifies number of pods?
What ensures a specific number of pods running?
What manages ReplicaSets?
What is the default update strategy?
What command reverts to previous version?
What changes the number of replicas?
What manages deployment revisions?
Pods are ephemeral and their IPs change. Services provide stable endpoints.
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app: nginx
ports:
- port: 80
targetPort: 80
type: ClusterIP
apiVersion: v1
kind: Service
metadata:
name: nginx-nodeport
spec:
selector:
app: nginx
ports:
- port: 80
targetPort: 80
nodePort: 30080
type: NodePort
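Two more Service types come up in practice: LoadBalancer, which asks the cloud provider for an external load balancer, and ExternalName, which maps a Service to an external DNS name. Hedged sketches (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer             # cloud provider allocates an external IP
---
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # resolves via a DNS CNAME, no proxying
```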
$ kubectl get services                # List services
$ kubectl get svc                     # Short form
$ kubectl describe svc nginx-service  # Service details
$ kubectl get endpoints               # Pod IPs backing service
For StatefulSets or when you need direct pod access:
spec:
clusterIP: None
selector:
app: myapp
ports:
- port: 80
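A StatefulSet pairs with such a headless Service to give each pod a stable DNS identity. A minimal sketch (the myapp names and serviceName are assumptions, since the headless Service above omits its metadata):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-headless   # must name the headless Service
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: myapp            # pods get stable names: myapp-0, myapp-1
```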
What Service type exposes a service on each node's IP?
What is the default Service type?
What Service type works with cloud load balancers?
What field ties a Service to pods?
What port does traffic go to on the pod?
What Service has no cluster IP?
Services are discovered via what?
What Service type maps to external DNS?
Separate config from code using ConfigMaps and Secrets.
# From literal values
$ kubectl create configmap app-config --from-literal=ENV=production --from-literal=LOG_LEVEL=debug
# From file
$ kubectl create configmap nginx-config --from-file=nginx.conf
# From env file
$ kubectl create configmap app-env --from-env-file=.env
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: "postgres://db:5432/app"
  CACHE_TTL: "3600"
  config.json: |
    {
      "logLevel": "debug",
      "features": {
        "featureA": true,
        "featureB": false
      }
    }

Using ConfigMaps in Pods
spec:
  containers:
  - name: app
    image: myapp:latest
    env:
    - name: DATABASE_URL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: DATABASE_URL
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: app-config

Secrets - Sensitive Data
$ kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=secret123
$ kubectl create secret tls my-tls --cert=cert.crt --key=cert.key

Secret YAML (base64 encoded)
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=
  password: c2VjcmV0MTIz

Using Secrets
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: password

Quiz
Which Kubernetes object should store database passwords?
What stores non-sensitive configuration?
How can ConfigMap data be passed to pods?
How else can ConfigMap data be mounted?
Secret values are stored in what encoding?
What prevents ConfigMap changes from propagating?
What creates a ConfigMap from CLI?
What imports all configmap values as env vars?
Show Answers
Answers
- secret
- configmap
- environment variables
- volumes
- base64
- immutable
- kubectl create configmap
- envFrom
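Note that the base64 used by Secrets is an encoding, not encryption; anyone with read access can decode the values. This can be reproduced directly in a shell (the values match the db-credentials example above):

```shell
# Secrets are base64-encoded, not encrypted
echo -n 'secret123' | base64        # prints c2VjcmV0MTIz
echo -n 'c2VjcmV0MTIz' | base64 -d  # prints secret123
```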
Ingress exposes HTTP/HTTPS routes from outside the cluster to services. It's the K8s equivalent of what an nginx or Apache reverse proxy does in traditional setups.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.4/deploy/static/provider/cloud/deploy.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- host: myapp.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-service
port:
number: 80
spec:
rules:
- host: example.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: api-service
port:
number: 8080
- path: /
pathType: Prefix
backend:
service:
name: web-service
port:
number: 80
spec:
tls:
- hosts:
- example.com
secretName: my-tls-secret
rules:
- host: example.com
...
$ kubectl get ingress                  # List ingresses
$ kubectl describe ingress my-ingress  # Details
$ kubectl delete ingress my-ingress    # Delete
What do you need to create first before using Ingress resources?
What protocol does Ingress route?
What field routes by domain name?
What routes different URLs?
What is a common Ingress controller?
What enables HTTPS in Ingress?
What references the TLS certificate?
What adds controller-specific config?
Containers are ephemeral; data is lost when they restart. PersistentVolumes provide durable storage.
Cluster-wide storage resource:
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 10Gi
storageClassName: standard
accessModes:
- ReadWriteOnce
hostPath:
path: /mnt/data
Request for storage:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
resources:
requests:
storage: 5Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
spec:
containers:
- name: app
image: myapp
volumeMounts:
- name: data
mountPath: /var/data
volumes:
- name: data
persistentVolumeClaim:
claimName: my-pvc
$ kubectl get storageclass   # List storage classes
NAME       PROVISIONER            RECLAIMPOLICY
standard   kubernetes.io/gce-pd   Delete
fast       pd-ssd                 Delete
# AWS EBS (note: in-tree cloud volume plugins like this are deprecated in favor of CSI drivers)
spec:
storageClassName: gp3
awsElasticBlockStore:
volumeID: vol-12345
# GCE PD
spec:
storageClassName: standard
gcePersistentDisk:
pdName: my-disk
What is the relationship between PVC and PV?
What represents cluster storage?
What requests storage from PV?
What access mode allows single node mount?
What provisions PVs dynamically?
What is a node-based storage type?
What determines PV handling on PVC deletion?
What field specifies where to mount in container?
spec:
containers:
- name: app
image: myapp
resources:
requests:
memory: "128Mi"
cpu: "250m"
limits:
memory: "256Mi"
cpu: "500m"
spec:
containers:
- name: nginx
image: nginx
resources:
requests:
memory: "128Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "250m"
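The two specs above illustrate the Burstable (requests below limits) and Guaranteed (requests equal to limits) QoS classes. A pod that sets no resources at all lands in the third class, BestEffort; a sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod
spec:
  containers:
  - name: app
    image: nginx   # no resources block on any container => QoS class BestEffort
```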
Set defaults and limits for namespaces:
apiVersion: v1
kind: LimitRange
metadata:
name: default-limits
spec:
limits:
- default:
memory: "256Mi"
cpu: "500m"
defaultRequest:
memory: "128Mi"
cpu: "250m"
type: Container
Limit total resources per namespace:
apiVersion: v1
kind: ResourceQuota
metadata:
name: my-quota
spec:
hard:
requests.cpu: "4"
requests.memory: "8Gi"
limits.cpu: "8"
limits.memory: "16Gi"
pods: "20"
What does the scheduler use to decide where to place a pod?
What caps maximum resource usage?
What QoS class has guaranteed resources?
What QoS class has no requests or limits?
What QoS class is the default?
What sets default resources for namespace?
What limits total resources per namespace?
What is measured in millicores?
Helm is the package manager for Kubernetes. It bundles K8s manifests into "charts" that are versioned, configurable, and reusable.
$ curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami  # Add repo
$ helm repo update                                          # Update
$ helm search repo nginx                                    # Search
$ helm install my-nginx bitnami/nginx                       # Install
$ helm list                                                 # List releases
$ helm upgrade my-nginx bitnami/nginx                       # Upgrade
$ helm rollback my-nginx 1                                  # Rollback
$ helm uninstall my-nginx                                   # Uninstall
$ helm install my-app bitnami/wordpress --set mariadb.db.password=secret
$ helm install my-app bitnami/wordpress -f values.yaml
replicaCount: 3
image:
repository: myapp
tag: v1.0
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 80
ingress:
enabled: true
hostname: myapp.example.com
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 250m
memory: 256Mi
$ helm create mychart
$ ls mychart/
Chart.yaml  charts/  templates/  values.yaml
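Files under templates/ are standard manifests with Go template placeholders that Helm fills from values.yaml at install time. A trimmed, illustrative sketch of a templates/deployment.yaml (the .Values keys match the values.yaml shown above):

```yaml
# templates/deployment.yaml (trimmed sketch; selector and labels omitted for brevity)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: app
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
```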
What is a Helm Release?
What is a Helm package called?
What file contains default config?
What directory holds Kubernetes manifests?
What deploys a chart?
What updates a release?
What reverts to previous version?
What manages chart repositories?
Running K8s in production requires attention to:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: myapp-pdb
spec:
minAvailable: 2
selector:
matchLabels:
app: myapp
spec:
containers:
- name: app
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
What ensures a minimum number of pods during voluntary disruptions?
What checks if pod can receive traffic?
What checks if container should restart?
What delays health checks for slow starting containers?
How many pods minimum for high availability?
What spreads pods across nodes?
What restricts pod-to-pod communication?
What controls user permissions?
Securing Kubernetes requires defense in depth. Protect your cluster at every layer, from infrastructure to workloads.
RBAC controls who can do what in your cluster. Always follow the principle of least privilege.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: production
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
namespace: production
subjects:
- kind: User
name: jane
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
Pod Security Standards define security policies at three levels: privileged, baseline, and restricted.
apiVersion: v1
kind: Pod
metadata:
name: secure-pod
spec:
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
containers:
- name: app
image: myapp:latest
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
By default, all pods can communicate. Network policies restrict this.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: api-allow
spec:
podSelector:
matchLabels:
app: api
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
spec:
podSelector: {}
policyTypes:
- Ingress
$ kubectl create secret generic db-creds --from-literal=password=$(openssl rand -base64 32)
Pod Security Admission is the built-in alternative to the deprecated PodSecurityPolicy:
apiVersion: v1
kind: Namespace
metadata:
name: restricted-ns
labels:
pod-security.kubernetes.io/enforce: restricted
pod-security.kubernetes.io/enforce-version: latest
Use kube-bench to check your cluster against the CIS Kubernetes Benchmark.
What controls user permissions in Kubernetes?
What binds a Role to a user or service account?
What resource restricts pod-to-pod communication?
Which Pod Security Standard is the most secure?
What Kubernetes object stores sensitive data?
What syncs secrets from cloud vaults to Kubernetes?
What security principle grants minimum necessary permissions?
What security setting prevents container filesystem modifications?
Continuous Integration and Continuous Deployment (CI/CD) automates building, testing, and deploying applications to Kubernetes.
| Traditional | GitOps |
|---|---|
| Push-based deployments | Pull-based reconciliation |
| CI tool pushes to cluster | Git is single source of truth |
| Harder to audit | Full audit trail in git |
| Can drift from config | Automatic drift correction |
ArgoCD is the most popular GitOps tool for Kubernetes. It watches a git repo and ensures the cluster matches.
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
namespace: argocd
spec:
project: default
source:
repoURL: https://github.com/myorg/myapp.git
targetRevision: HEAD
path: k8s/overlays/production
destination:
server: https://kubernetes.default.svc
namespace: production
syncPolicy:
automated:
prune: true
selfHeal: true
Running Jenkins inside your cluster gives you scalable build agents.
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: jenkins/jenkins:lts
ports:
- containerPort: 8080
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-pvc
pipeline {
agent any
stages {
stage('Build') {
steps {
sh 'docker build -t myapp:${BUILD_NUMBER} .'
}
}
stage('Test') {
steps {
sh 'docker run myapp:${BUILD_NUMBER} npm test'
}
}
stage('Push') {
steps {
sh 'docker push myapp:${BUILD_NUMBER}'
}
}
stage('Deploy') {
steps {
sh 'kubectl set image deployment/myapp myapp=myapp:${BUILD_NUMBER}'
}
}
}
}
Techniques for automated deployments:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
annotations:
argocd-image-updater.argoproj.io/image-list: myapp=myregistry/myapp
argocd-image-updater.argoproj.io/myapp.update-strategy: semver
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: myapp
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: myapp
service:
port: 80
analysis:
interval: 30s
threshold: 5
maxWeight: 50
stepWeight: 10
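Blue-green deployment, the other strategy commonly contrasted with canary releases, keeps two identical environments and flips traffic between them by switching a Service selector. A sketch (the labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue   # change to "green" to cut all traffic over at once
  ports:
  - port: 80
```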
$ trivy image myapp:latest
What methodology uses git as the single source of truth for deployments?
What is the most popular GitOps tool for Kubernetes?
What CI tool can run inside Kubernetes with scalable agents?
What deployment strategy routes small percentage of traffic first?
What security practice checks for vulnerabilities in container images?
What tool scans container images for vulnerabilities?
What deployment strategy maintains two identical environments?
What is the CNCF-graduated GitOps alternative to ArgoCD?