// If it happens, log it.
LOGGING IS OBSERVABILITY.
In production, you can't debug with print statements. You need structured logs, centralized aggregation, and powerful search. When things break at 3 AM, good logs are your best friend.
WHY CENTRALIZED LOGGING?
With dozens of servers and services, checking logs on each machine is impractical. Centralized logging aggregates everything in one place. Search across all logs, create dashboards, and get alerts when errors spike.
BECOME OBSERVABLE.
Master the ELK stack (Elasticsearch, Logstash, Kibana) or Grafana Loki. Learn to structure logs, create alerts, and build dashboards. The key to running reliable systems is seeing what's happening.
12 lessons. Complete logging control.
1. Beginner: Understand log levels, structured logging, and best practices.
2. Beginner: Configure syslog, journald, and system log rotation.
3. Beginner: Implement logging in Python, Go, and Node.js applications.
4. Beginner: Create JSON logs with timestamps, levels, and context.
5. Intermediate: Use Filebeat, Fluentd, and Journalbeat to forward logs.
6. Intermediate: Install Elasticsearch and understand the data model.
7. Intermediate: Parse, transform, and enrich logs with Logstash pipelines.
8. Intermediate: Build visualizations and dashboards in Kibana.
9. Advanced: Set up Loki for cost-effective log aggregation.
10. Advanced: Create alerts for errors, spikes, and anomalies.
11. Advanced: Optimize Elasticsearch and handle billions of logs.
12. Advanced: Secure logs, manage access, and meet compliance requirements.
The old approach of SSHing to each server and grepping through log files doesn't scale. When you have microservices, containers, and auto-scaling, you need centralized logging.
The ELK stack (Elasticsearch, Logstash, Kibana) is the most popular solution. Elasticsearch stores and searches logs, Logstash processes them, and Kibana provides the interface.
Grafana Loki offers a cheaper alternative, storing logs in object storage and only indexing labels. Either way, centralized logging is essential for debugging production issues.
You can't fix what you can't see. Own your logs.
Logging is the practice of recording events, errors, and information from your applications. Good logs help you understand what your system is doing and debug when things go wrong.
1. What level for normal operations?
2. What level for application crashes?
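The standard levels map directly onto these questions in Python's built-in logging module: routine events log at INFO, and a crash is CRITICAL. A minimal sketch:

```python
import logging

# Severity ordering: DEBUG < INFO < WARNING < ERROR < CRITICAL
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("app")

log.info("request handled")                      # normal operations
log.warning("cache miss rate is climbing")       # worth a look, not broken
log.error("payment gateway timed out")           # a request failed
log.critical("out of memory, shutting down")     # the application is crashing
log.debug("suppressed: below the INFO threshold")
```

Setting the threshold to INFO keeps noisy DEBUG output out of production logs while preserving everything you need at 3 AM.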
Standard logging daemon on Unix systems:
1. What command views systemd logs?
1. What module for logging in Python?
JSON logs are machine-parseable and work great with log aggregation:
1. What format is machine-parseable?
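One way to produce such logs in Python is a custom formatter that renders every record as a single JSON line (the field names here are illustrative, not a fixed standard):

```python
import json
import logging
import time

class JSONFormatter(logging.Formatter):
    """Render each log record as one machine-parseable JSON line."""
    def format(self, record):
        return json.dumps({
            "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
log = logging.getLogger("api")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("user logged in")
```

Each line is independently parseable, so a forwarder can ship it and the aggregator can index every field without extra parsing.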
Agents that forward logs from servers to centralized logging systems:
1. What forwards logs to central systems?
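As an illustration, a minimal Filebeat configuration tails local files and ships them to a Logstash endpoint (the path and host below are placeholders):

```yaml
# filebeat.yml (sketch): tail local log files and forward them.
filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/myapp/*.log        # placeholder path

output.logstash:
  hosts: ["logstash.internal:5044"]  # placeholder host
```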
Distributed search and analytics engine, optimized for log storage and searching:
1. What stores documents in Elasticsearch?
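In the Elasticsearch data model, an index holds JSON documents, and every field in a document is indexed for search. A hypothetical log document:

```json
{
  "@timestamp": "2024-01-15T03:12:45Z",
  "level": "ERROR",
  "service": "checkout",
  "message": "payment gateway timeout after 30s",
  "host": { "name": "web-07" }
}
```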
Input → Filter → Output
1. What parses unstructured logs?
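A sketch of that three-stage pipeline, assuming the conventional Beats port and a local Elasticsearch (hosts and patterns are illustrative):

```
input {
  beats { port => 5044 }                      # receive events from Filebeat
}

filter {
  grok {
    # parse unstructured web-server lines into named fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # use the event's own timestamp, not the ingest time
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```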
Visualization and dashboarding for Elasticsearch:
1. What visualizes Elasticsearch data?
Cost-effective log aggregation from Grafana. Unlike ELK, it only indexes labels, not full text:
| Feature | Loki | ELK |
| --- | --- | --- |
| Storage | Cheap (S3/GCS) | Expensive |
| Indexing | Labels only | Full text |
| Setup | Simple | Complex |
1. What indexes only labels?
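Because only labels are indexed, Loki queries select streams by label first, then filter the raw text. Two illustrative LogQL queries (the `app` label is hypothetical):

```
{app="checkout"} |= "error"              # streams labeled app=checkout, lines containing "error"
rate({app="checkout"} |= "error" [5m])   # per-second rate of those lines over 5 minutes
```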
Alerting watches your logs and notifies you when errors, spikes, or anomalies cross a threshold:
1. What triggers on conditions?
Index lifecycle management (ILM) in Elasticsearch rolls over, shrinks, and deletes indices as they age:
1. What manages index lifecycle?
Audit logging records who accessed which data, a common compliance requirement:
1. What tracks access to data?