Nginx: The Web Server That Powers the Internet
Apache, IIS, Lighttpd, Caddy—all valid web servers. But if you want high performance, massive concurrency, and the flexibility to serve as either a web server or reverse proxy, there's one name that dominates: Nginx (pronounced "engine-x").
According to Netcraft, Nginx serves more websites than any other web server. It's the backbone of Netflix, Instagram, Airbnb, and countless other high-traffic sites. Let's dive into why Nginx has become the go-to choice for modern web infrastructure.
Why Nginx Exists
In the early 2000s, Apache was king. It was flexible, feature-rich, and had excellent module support. But as the internet evolved, Apache's process-per-connection model started to show its age. When handling thousands of concurrent connections—each keeping a process alive—the memory requirements ballooned.
Igor Sysoev created Nginx in 2004 to solve this specific problem. His design goals were simple: handle 10,000+ concurrent connections with minimal memory, serve static content blazingly fast, and be a rock-solid reverse proxy.
Architecture: Event-Driven, Not Thread-Based
Apache uses a process or thread per connection. This is straightforward but resource-intensive. Nginx uses an event-driven, asynchronous model. A single worker process can handle thousands of connections.
Think of it like a restaurant:
- Apache = One waiter per table
- Nginx = One efficient waiter juggling dozens of tables
This approach uses far less memory and CPU, making Nginx incredibly efficient under load.
Installing Nginx
```shell
# Debian/Ubuntu
sudo apt update
sudo apt install nginx

# RHEL/CentOS (use dnf instead of yum on newer releases)
sudo yum install nginx

# Start now and enable at boot
sudo systemctl start nginx
sudo systemctl enable nginx

# Check the installed version and service status
nginx -v
sudo systemctl status nginx
```
On Debian-based systems, the default configuration lives in /etc/nginx/, with the main config at nginx.conf and per-site configs in sites-available/. (Other distributions typically use a conf.d/ directory instead of sites-available/.)
A Basic Nginx Configuration
```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    root /var/www/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
}
```
Save this to /etc/nginx/sites-available/example, then link it to sites-enabled/:
```shell
sudo ln -s /etc/nginx/sites-available/example /etc/nginx/sites-enabled/
sudo nginx -t                 # Test the configuration for syntax errors
sudo systemctl reload nginx   # Apply without dropping connections
```
Key Concepts
server blocks
Similar to Apache's VirtualHosts. Each block defines how Nginx handles requests for a particular domain or IP.
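For instance, two server blocks can share port 80, with Nginx selecting the one whose server_name matches the request's Host header. A minimal sketch (the domain names and paths here are placeholders):

```nginx
# Name-based virtual hosting: Nginx picks the block whose
# server_name matches the incoming Host header.
server {
    listen 80;
    server_name blog.example.com;
    root /var/www/blog;
}

server {
    listen 80;
    server_name shop.example.com;
    root /var/www/shop;
}
```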
location blocks
Match URIs within a server. You can use exact matches, prefixes, or regular expressions:
```nginx
location / {            # Prefix match (default)
    ...
}

location = /about {     # Exact match
    ...
}

location ~ /api/\d+$ {  # Regex match (case-sensitive)
    ...
}
```
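Match order matters more than the order blocks appear in the file: an exact match wins outright, then Nginx finds the longest matching prefix, checks regexes in order, and falls back to that prefix only if no regex matches. A sketch of the modifiers involved (paths are illustrative):

```nginx
location = /exact { ... }      # 1. Exact match: used immediately
location ^~ /static/ { ... }   # 2. Prefix that, once matched, skips regex checks
location ~ \.php$ { ... }      # 3. Regexes, tried in the order they appear
location / { ... }             # 4. Longest remaining prefix as the fallback
```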
Reverse Proxy
Nginx shines as a reverse proxy. It sits in front of your application servers and forwards requests to them:
```nginx
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
Load Balancing
Nginx can distribute traffic across multiple backend servers:
```nginx
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
```
SSL/TLS Termination
Handle HTTPS at Nginx, passing plain HTTP to backends:
```nginx
server {
    # On Nginx 1.25+, prefer a separate "http2 on;" directive
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/private/server.key;

    location / {
        proxy_pass http://localhost:3000;
    }
}
```
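A companion server block usually redirects plain HTTP to HTTPS. This is a common pattern rather than something the HTTPS block requires:

```nginx
# Redirect all HTTP traffic to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```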
Performance Tips
Enable Gzip Compression
```nginx
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml;
```
Configure Worker Processes
Note that worker_processes sits at the top level of nginx.conf, while worker_connections must go inside an events block:

```nginx
worker_processes auto;       # Usually one per CPU core

events {
    worker_connections 1024; # Connections per worker
}
```
Use Caching
```nginx
proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=my_cache:10m;

location / {
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;
    proxy_pass http://backend;
}
```
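To verify the cache is actually working, you can expose its status in a response header. The header name below is a convention, not a requirement:

```nginx
location / {
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;
    # $upstream_cache_status reports HIT, MISS, EXPIRED, etc.
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://backend;
}
```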
Set Proper Timeouts
```nginx
keepalive_timeout 65;
client_max_body_size 10M;
proxy_connect_timeout 60s;
proxy_read_timeout 60s;
```
Nginx as a Load Balancer
Nginx supports multiple load balancing methods:
- round_robin — Default, sequential distribution (no directive needed)
- least_conn — Route to the server with the fewest active connections
- ip_hash — Route based on client IP (sticky sessions)
- Weights — add a `weight=N` parameter to individual server lines to skew distribution
```nginx
upstream backend {
    least_conn;
    server backend1.example.com weight=3;
    server backend2.example.com;
    server backend3.example.com;
}
```
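Upstream server lines can also carry basic failure handling, so a dead backend is taken out of rotation automatically. The thresholds below are illustrative:

```nginx
upstream backend {
    # Mark a server down for 30s after 3 failed attempts
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
    # Only receives traffic when the others are unavailable
    server backup1.example.com backup;
}
```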
Nginx vs Apache
When should you use each?
| Use Nginx when: | Use Apache when: |
|---|---|
| High traffic expected | Need .htaccess overrides |
| Serving static files | Running PHP with mod_php |
| Reverse proxy needed | Simple shared hosting |
| WebSocket support needed | Deep Apache ecosystem needed |
Real-World Example: Node.js with Nginx
Running a Node.js app? Use Nginx in front:
```nginx
# Nginx serves static files directly and proxies everything else
server {
    listen 80;
    server_name myapp.com;

    # Static assets, served straight from disk with long cache lifetimes
    location /static/ {
        alias /var/www/myapp/static/;
        expires 30d;
    }

    # API requests, with WebSocket upgrade support
    location /api/ {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
    }

    # Everything else goes to the Node.js app
    location / {
        proxy_pass http://localhost:3000;
    }
}
```
Learn More
Nginx is endlessly configurable. Once you're comfortable with the basics, explore:
- Rate limiting — Prevent abuse
- HTTP/2 and HTTP/3 — Modern protocols
- ModSecurity — Web application firewall
- Nginx Amplify — Monitoring and analytics
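As a taste of what's ahead, rate limiting takes only two directives. The zone name, rate, and burst values below are illustrative:

```nginx
# Allow each client IP 10 requests/second, with a burst of 20
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://localhost:3000;
    }
}
```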
Master Nginx, and you master modern web infrastructure.
Whether you're running a personal blog or a high-traffic SaaS, Nginx deserves a place in your toolkit. It's lightweight, fast, and incredibly versatile.