Nginx Reverse Proxy Configuration for Node.js | SSL, upstream & Logs
Key takeaway
Put Nginx in front of Node.js for TLS termination, upstream load balancing, logging, and WebSocket proxying—patterns you can reuse from dev through production.
Introduction
Nginx is a high-performance web server and reverse proxy. Putting it in front of a Node.js app for TLS termination, static files, load balancing, compression, and rate limiting is extremely common. Node focuses on business logic; the edge handles connections, encryption, and routing.
This article covers a single Node instance behind Nginx and scaling out with upstream to multiple Node processes. For container stacks, pair this with Docker Compose for Node.js.
What you will learn
- Proxy headers and upstream blocks you can paste into real `nginx.conf` files
- Let’s Encrypt TLS patterns
- Access/error logs and WebSocket upgrade settings
Table of contents
- Concepts: reverse proxy and Nginx
- Hands-on: nginx.conf
- Advanced: load balancing, buffers, limits
- Performance perspective
- Real-world scenarios
- Troubleshooting
- Conclusion
Concepts: reverse proxy and Nginx
Basics
- Reverse proxy: Clients connect only to Nginx; Nginx forwards to the backend (Node).
- TLS termination: Browser ↔ Nginx uses HTTPS; Nginx ↔ Node often uses HTTP on an internal network.
- upstream: Groups multiple backends and chooses round robin, least_conn, ip_hash (sticky sessions), etc.
Why use it
A single Node process mainly uses one CPU core; teams scale with PM2 cluster or multiple containers plus Nginx load balancing. Serving static assets from Nginx also reduces load on the event loop.
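As a sketch of the backend side of that pattern, the same Node server can be started once per port so that Nginx has several upstream targets to balance across. The port numbers (3000, 3001, …) are illustrative, matching the upstream examples later in this article:

```javascript
// Minimal Node HTTP server; run one copy per port listed in the
// Nginx upstream block (PORT=3000 node server.mjs, PORT=3001 ...).
import http from 'node:http';

const port = Number(process.env.PORT ?? 3000);

const server = http.createServer((req, res) => {
  // Report which instance answered; handy when verifying load balancing
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(`ok from :${port}\n`);
});

server.listen(port, '127.0.0.1', () => {
  console.log(`node_app instance listening on ${port}`);
});
```

Hitting the proxied endpoint repeatedly should then show responses alternating between instances, depending on the balancing algorithm chosen in the upstream block.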
Hands-on: nginx.conf
Minimal example: HTTP → Node (dev or internal)
```nginx
# /etc/nginx/conf.d/app.conf
upstream node_app {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://node_app;
    }
}
```
HTTPS + Let’s Encrypt (paths assume certbot defaults)
```nginx
upstream node_app {
    least_conn;
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    keepalive 64;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;

    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache   shared:SSL:50m;
    ssl_protocols       TLSv1.2 TLSv1.3;

    access_log /var/log/nginx/api.access.log combined;
    error_log  /var/log/nginx/api.error.log warn;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://node_app;
    }

    # WebSocket (e.g. Socket.IO)
    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_pass http://node_app;
    }
}
```
Trust proxy in Express
```javascript
// Express: honor X-Forwarded-* when behind a proxy
import express from 'express';

const app = express();
app.set('trust proxy', 1); // trust exactly one hop (Nginx)

app.get('/health', (_req, res) => res.send('ok'));

app.listen(process.env.PORT ?? 3000);
```
Docker Compose
If Nginx and the API share a Docker network, the upstream entry becomes `server api:3000;` (Compose service names resolve via Docker DNS). See Docker Compose for Node.js.
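A minimal Compose sketch of that layout might look like the following; the service names (`api`, `proxy`) and the mounted config path are assumptions, not fixed conventions:

```yaml
services:
  api:
    build: .
    expose:
      - "3000"            # reachable as api:3000 on the Compose network
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./app.conf:/etc/nginx/conf.d/app.conf:ro
    depends_on:
      - api
```

Because only `proxy` publishes a host port, the Node service stays unreachable except through Nginx.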
Advanced: load balancing, buffers, limits
Algorithms
| Directive | Use |
|---|---|
| round-robin (default) | Even distribution |
| `least_conn` | Prefer the server with the fewest active connections (good for long requests) |
| `ip_hash` | Sticky sessions keyed by client IP |
Body size and timeouts
```nginx
# In the server{} or location{} block
client_max_body_size  10m;
proxy_connect_timeout 10s;
proxy_send_timeout    60s;
proxy_read_timeout    60s;
```
Simple rate limit
```nginx
# limit_req_zone must be defined at the http{} level
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

location / {
    limit_req zone=api_limit burst=20 nodelay;
    # ... proxy settings
}
```
Performance perspective
Nginx handles many concurrent connections with an event-driven model; Node excels at I/O-bound work. Serving static files directly from Nginx keeps that traffic off Node’s event loop entirely.
| Setup | Benefit | Watch |
|---|---|---|
| TLS at Nginx | Less crypto work in Node | Automate cert renewal (certbot) |
| keepalive to upstream | Lower latency | Tune timeouts and pool sizes |
| gzip / brotli | Smaller payloads | Higher CPU use |
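For the gzip row above, a common starting point looks like this; the exact types, level, and threshold are tuning choices rather than requirements:

```nginx
# In the http{} or server{} block
gzip on;
gzip_comp_level 5;      # CPU vs. payload-size trade-off
gzip_min_length 1024;   # skip tiny responses where gzip adds overhead
gzip_types text/plain text/css application/json application/javascript;
```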
Real-world scenarios
- Blue/green: Switch upstream groups or route to new containers after health checks pass.
- Staging: Different `server_name`, same Node image, different `DATABASE_URL`.
- Deploy pipeline: After GitHub Actions pushes an image, only reload Nginx (`nginx -s reload`).
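The blue/green switch can be as small as repointing the upstream and reloading; the color-suffixed hostnames here are hypothetical:

```nginx
# Cut over by editing the active server line, then: nginx -s reload
upstream app_active {
    # server app_blue:3000;   # previous color, kept commented for quick rollback
    server app_green:3000;    # new color, after health checks pass
    keepalive 32;
}
```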
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| 502 Bad Gateway | Node down or wrong port | ss -tlnp, container logs |
| Redirects use http | Missing X-Forwarded-Proto | Add proxy_set_header X-Forwarded-Proto $scheme |
| WebSocket drops | Upgrade headers not forwarded | Use the WebSocket location pattern |
| Real IP is always Nginx | trust proxy not set | Express trust proxy; log $http_x_forwarded_for |
| Upload fails with 413 | Default client_max_body_size | Increase the limit |
Tip: use `curl -v https://api.example.com` to inspect the TLS handshake and headers; temporarily raise `error_log` to `info` when chasing upstream errors.
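The “real IP” row comes down to how X-Forwarded-For is read. As a sketch of what Express’s `trust proxy` setting does with one trusted hop (this `clientIp` helper is illustrative, not an Express API):

```javascript
// With exactly one trusted proxy (Nginx), the last entry in
// X-Forwarded-For was appended by Nginx and is the real client IP.
function clientIp(xffHeader, socketAddr) {
  if (!xffHeader) return socketAddr; // direct connection, no proxy header
  const hops = xffHeader.split(',').map((s) => s.trim()).filter(Boolean);
  return hops[hops.length - 1] ?? socketAddr;
}

console.log(clientIp('198.51.100.1, 203.0.113.7', '10.0.0.2')); // last hop wins
```

This is also why `trust proxy` must match your topology: trusting more hops than actually exist lets clients spoof their IP by sending a forged X-Forwarded-For header.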
Conclusion
- Nginx upstream and proxy headers are the foundation for running Node behind a proxy.
- Define TLS, logs, and WebSocket once to keep staging and production aligned.
- For the full deployment picture, see Node.js deployment guide.