Nginx Reverse Proxy Configuration for Node.js | SSL, upstream & Logs

Key takeaway

Put Nginx in front of Node.js for TLS termination, upstream load balancing, logging, and WebSocket proxying—patterns you can reuse from dev through production.

Introduction

Nginx is a high-performance web server and reverse proxy. Putting it in front of a Node.js app for TLS termination, static files, load balancing, compression, and rate limiting is extremely common. Node focuses on business logic; the edge handles connections, encryption, and routing.

This article covers a single Node instance behind Nginx and scaling out with upstream to multiple Node processes. For container stacks, pair this with Docker Compose for Node.js.

What you will learn

  • Proxy headers and upstream blocks you can paste into real nginx.conf files
  • Let’s Encrypt TLS patterns
  • Access/error logs and WebSocket upgrade settings

Table of contents

  1. Concepts: reverse proxy and Nginx
  2. Hands-on: nginx.conf
  3. Advanced: load balancing, buffers, limits
  4. Performance perspective
  5. Real-world scenarios
  6. Troubleshooting
  7. Conclusion

Concepts: reverse proxy and Nginx

Basics

  • Reverse proxy: Clients connect only to Nginx; Nginx forwards to the backend (Node).
  • TLS termination: Browser ↔ Nginx uses HTTPS; Nginx ↔ Node often uses HTTP on an internal network.
  • upstream: Groups multiple backends and chooses round robin, least_conn, ip_hash (sticky sessions), etc.
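As a sketch, switching between these strategies is a one-directive change inside the upstream block (backends and ports here are placeholders):

```nginx
upstream node_app {
    # Round robin is the default; uncomment exactly one directive to change it.
    # least_conn;   # route to the backend with the fewest active connections
    # ip_hash;      # pin each client IP to one backend (sticky sessions)
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}
```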

Why use it

A single Node.js process runs JavaScript on one CPU core, so teams scale out with PM2's cluster mode or multiple containers and let Nginx load-balance across them. Serving static assets straight from Nginx also keeps that work off the Node event loop.


Hands-on: nginx.conf

Minimal example: HTTP → Node (dev or internal)

# /etc/nginx/conf.d/app.conf
upstream node_app {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://node_app;
    }
}

HTTPS + Let’s Encrypt (paths assume certbot defaults)

upstream node_app {
    least_conn;
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    keepalive 64;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;

    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_protocols TLSv1.2 TLSv1.3;

    access_log /var/log/nginx/api.access.log combined;
    error_log  /var/log/nginx/api.error.log warn;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://node_app;
    }

    # WebSocket (e.g. Socket.IO)
    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_pass http://node_app;
    }
}
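One piece this example assumes but does not show: a companion port-80 server that redirects plain HTTP to HTTPS. Certbot's nginx plugin usually generates an equivalent block for you; a minimal hand-written version looks like this:

```nginx
server {
    listen 80;
    server_name api.example.com;
    return 301 https://$host$request_uri;
}
```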

Trust proxy in Express

// Express: honor X-Forwarded-* when behind a proxy
import express from 'express';

const app = express();
app.set('trust proxy', 1); // trust exactly one proxy hop (Nginx)

app.get('/health', (_req, res) => res.send('ok'));

app.listen(3000); // the port Nginx's upstream points at
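To see why these headers matter, here is a tiny illustrative helper (hypothetical, not Express internals) that mimics how the left-most X-Forwarded-For entry is read as the client address. This is only trustworthy when every hop in the chain is a proxy you control, which is what trust proxy encodes:

```javascript
// Each proxy appends its peer's address to X-Forwarded-For, so for
// "client, proxy1" the left-most entry is the original client.
function clientIpFromXff(xff, fallback) {
  if (!xff) return fallback; // no proxy header: fall back to the socket address
  return xff.split(',')[0].trim();
}

console.log(clientIpFromXff('203.0.113.7, 10.0.0.5', '10.0.0.5')); // → 203.0.113.7
console.log(clientIpFromXff(undefined, '10.0.0.5'));               // → 10.0.0.5
```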

Docker Compose

If Nginx and API share a Docker network, upstream uses server api:3000;. See Docker Compose for Node.js.
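For example, assuming the Compose service is named api, the upstream block becomes:

```nginx
upstream node_app {
    server api:3000;  # resolved via Docker's embedded DNS on the shared network
    keepalive 32;
}
```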


Advanced: load balancing, buffers, limits

Algorithms

| Directive | Use |
| --- | --- |
| round-robin (default) | Even distribution |
| least_conn | Prefer the server with the fewest active connections (good for long requests) |
| ip_hash | Sticky sessions keyed by client IP |

Body size and timeouts

    client_max_body_size 10m;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

Simple rate limit

# In the http {} context:
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

# Then inside the server block:
    location / {
        limit_req zone=api_limit burst=20 nodelay;
        # ... proxy settings
    }

Performance perspective

Nginx handles many concurrent connections with an event-driven model, and Node excels at I/O-bound work. Serving static files directly from Nginx keeps file serving out of the Node process and frees event-loop time for application logic.

| Setup | Benefit | Watch |
| --- | --- | --- |
| TLS at Nginx | Less crypto work in Node | Automate cert renewal (certbot) |
| keepalive to upstream | Lower latency | Tune timeouts and pool sizes |
| gzip / brotli | Smaller payloads | Higher CPU use |
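A minimal compression sketch for the http context (brotli requires the third-party ngx_brotli module, which stock Nginx builds do not include; the values below are reasonable starting points, not tuned recommendations):

```nginx
gzip on;
gzip_comp_level 5;
gzip_min_length 1024;   # skip tiny responses where headers dominate the cost
gzip_types text/plain text/css application/json application/javascript image/svg+xml;
gzip_vary on;           # emit Vary: Accept-Encoding so caches store both forms
```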

Real-world scenarios

  • Blue/green: Switch upstream groups or route to new containers after health checks pass.
  • Staging: Different server_name, same Node image, different DATABASE_URL.
  • Deploy pipeline: After GitHub Actions pushes an image, a config reload (nginx -s reload) is enough; no full Nginx restart is needed.

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| 502 Bad Gateway | Node down or wrong port | Check ss -tlnp and container logs |
| Redirects use http | Missing X-Forwarded-Proto | Add proxy_set_header X-Forwarded-Proto $scheme |
| WebSocket drops | Upgrade headers not forwarded | Use the WebSocket location pattern |
| Real IP is always Nginx | trust proxy not set | Set Express trust proxy; log $http_x_forwarded_for |
| Upload fails with 413 | Default client_max_body_size (1m) | Raise the limit |

Tip: curl -v https://api.example.com for TLS/headers; temporarily raise error_log to info for upstream errors.


Conclusion

  • Nginx upstream and proxy headers are the foundation for running Node behind a proxy.
  • Define TLS, logs, and WebSocket once to keep staging and production aligned.
  • For the full deployment picture, see Node.js deployment guide.