HTTP Protocols Complete Guide | HTTP/1.1, HTTP/2, HTTP/3, WebSocket & QUIC

Key Takeaways

HTTP evolved from a simple text protocol to a binary, multiplexed, encrypted-by-default transport. This guide explains what changed between HTTP versions, why the changes matter for performance, and when to use WebSocket or SSE.

HTTP Version Timeline

HTTP started as a minimal protocol for fetching HTML documents and has evolved into the backbone of modern real-time applications. Each version was driven by concrete performance problems with the previous one.

HTTP/0.9 (1991): GET only, no headers, HTML responses only
HTTP/1.0 (1996): Headers, status codes, POST — new connection per request
HTTP/1.1 (1997): Persistent connections, chunked transfer, Host header
HTTP/2  (2015):  Binary, multiplexed, header compression, server push
HTTP/3  (2022):  QUIC transport (UDP), eliminates TCP head-of-line blocking

HTTP/1.1 Limitations

HTTP/1.1’s core problem is head-of-line blocking: a connection serves one request/response at a time (pipelining is in the spec, but browsers disabled it because responses must still come back in order). While one response is in flight, everything else queues behind it. Browsers work around this by opening 6-8 parallel TCP connections per domain, but each connection requires its own TCP and TLS handshake, which is expensive on high-latency networks.

Headers are another source of waste. Every HTTP/1.1 request repeats the full header set (User-Agent, Accept, cookies, and so on), often 800 bytes or more, even when nothing has changed. On a page with 50 requests, that’s roughly 40 KB of redundant, uncompressed header data.

HTTP/1.1 connection lifecycle:
  1. TCP handshake (1 RTT before the request can be sent)
  2. TLS handshake (1-2 RTT, if HTTPS; 1 RTT with TLS 1.3)
  3. Request → Response
  4. Next request waits (head-of-line blocking)

Workarounds browsers use:
  - 6-8 parallel TCP connections per domain
  - Domain sharding (cdn1.example.com, cdn2.example.com)
  - Resource bundling (all JS in one file to reduce requests)
  - Sprite sheets (combine images)

HTTP/1.1 request:
  GET /index.html HTTP/1.1
  Host: example.com
  User-Agent: Mozilla/5.0
  Accept: text/html
  [~800 bytes of headers, uncompressed, repeated every request]

HTTP/2: Multiplexing

HTTP/2 was designed to fix HTTP/1.1’s fundamental problems at the protocol level rather than relying on browser workarounds. The key innovation is multiplexing: multiple requests and responses flow simultaneously over a single TCP connection as independent streams. There’s no waiting — stream 3 doesn’t block because stream 1 is still loading.

HTTP/2 also switches from text to binary encoding, which is more compact and faster to parse. Headers are compressed with HPACK, which remembers which headers were sent before and only transmits the differences. On subsequent requests in a session, headers shrink from ~800 bytes to a handful of bytes.
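The "remember and send only the differences" idea can be sketched with a toy indexed table. This is a deliberately simplified illustration, not real HPACK (which uses a static table, a dynamic table, and Huffman coding): a header seen before is replaced by a small index into a shared table.

```typescript
// Toy sketch of HPACK's core idea (NOT the real HPACK wire format):
// both endpoints keep a table of headers seen so far; a repeated
// header is sent as a tiny index instead of the full name/value pair.
type Header = [name: string, value: string];

class ToyHeaderCompressor {
  private table: string[] = [];

  encode(headers: Header[]): (number | string)[] {
    return headers.map(([name, value]) => {
      const literal = `${name}: ${value}`;
      const idx = this.table.indexOf(literal);
      if (idx !== -1) return idx;   // seen before → send 1 small index
      this.table.push(literal);     // first time → send the literal
      return literal;
    });
  }
}

const enc = new ToyHeaderCompressor();
enc.encode([['user-agent', 'Mozilla/5.0'], ['accept', 'text/html']]);  // literals
enc.encode([['user-agent', 'Mozilla/5.0'], ['accept', 'text/html']]);  // → [0, 1]
```

Real HPACK keeps the tables synchronized on both sides of the connection, which is also why it only works when frames arrive reliably and in order.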

HTTP/2 connection model:
  One TCP connection
    Stream 1: GET /api/user   → streaming response
    Stream 3: GET /api/posts  → streaming response (parallel!)
    Stream 5: POST /api/log   → streaming response (parallel!)

  All streams share one connection
  No head-of-line blocking at HTTP layer (TCP still can block)

HTTP/2 features:

  • Binary framing: requests/responses as binary frames (not text)
  • Multiplexing: multiple streams on one connection
  • Header compression (HPACK): common headers compressed, not repeated
  • Server push: server sends assets before client requests them
  • Stream prioritization: important resources first

Checking HTTP/2

# curl shows protocol version
curl -I --http2 https://example.com
# HTTP/2 200
# content-type: text/html

# Chrome DevTools → Network → Protocol column
# h2 = HTTP/2, h3 = HTTP/3, http/1.1 = HTTP/1.1

# Browser console
performance.getEntriesByType('navigation')[0].nextHopProtocol
# "h2" or "h3"

Server Push (deprecated in practice)

# Nginx: push assets with the main HTML
location = /index.html {
    http2_push /styles.css;
    http2_push /app.js;
}

Note: Server push was removed from Chrome and is deprecated in most other browsers: servers tended to push resources the client already had cached, wasting bandwidth. Use <link rel="preload"> or 103 Early Hints instead.
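A preload hint can replace the push config above; a minimal sketch using the same illustrative asset path:

```nginx
# Preload hint instead of push: the browser starts fetching styles.css
# early, but skips the fetch entirely if it already has it cached.
location = /index.html {
    add_header Link "</styles.css>; rel=preload; as=style";
}
```

The key difference from push is that the client stays in control: a hint can be ignored, a pushed resource arrives whether it was needed or not.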

HTTP/2 with Nginx

server {
    listen 443 ssl http2;  # Enable HTTP/2 (nginx 1.25.1+ prefers a separate "http2 on;" directive)
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;  # HTTP/2 works with TLS 1.2+

    location / {
        proxy_pass http://backend;
    }
}

HTTP/3 and QUIC

The TCP head-of-line blocking problem:
  HTTP/2 over TCP:
    Packet 1 (Stream 1 data) — lost!
    Packet 2 (Stream 3 data) — waiting for packet 1
    Packet 3 (Stream 5 data) — waiting for packet 1
    All streams blocked until TCP retransmits packet 1

  HTTP/3 over QUIC:
    Packet 1 (Stream 1 data) — lost, QUIC retransmits
    Packet 2 (Stream 3 data) — delivered! Stream 3 continues
    Packet 3 (Stream 5 data) — delivered! Stream 5 continues
    Only Stream 1 waits for retransmission
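The diagram above can be modeled as a toy simulation (an illustration of the delivery rules, not of any real QUIC implementation): TCP exposes one strictly ordered byte stream, so delivery stops at the first loss, while QUIC only holds back the stream the lost packet belonged to.

```typescript
// Toy model of head-of-line blocking: same packets, different ordering scope.
type Packet = { stream: number; seq: number; lost?: boolean };

// TCP: a single ordered stream — nothing after the first loss is delivered
// to the application until the retransmission arrives.
function tcpDeliverable(packets: Packet[]): Packet[] {
  const firstLoss = packets.findIndex((p) => p.lost);
  return firstLoss === -1 ? packets : packets.slice(0, firstLoss);
}

// QUIC: ordering is per stream — only streams with a lost packet wait.
function quicDeliverable(packets: Packet[]): Packet[] {
  const blocked = new Set(packets.filter((p) => p.lost).map((p) => p.stream));
  return packets.filter((p) => !p.lost && !blocked.has(p.stream));
}

const pkts: Packet[] = [
  { stream: 1, seq: 1, lost: true },
  { stream: 3, seq: 1 },
  { stream: 5, seq: 1 },
];
tcpDeliverable(pkts);   // → [] — everything stalls behind the lost packet
quicDeliverable(pkts);  // → streams 3 and 5 still deliver
```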

QUIC advantages:

  • 0-RTT handshake on repeat connections (TLS 1.3 session resumption)
  • Connection migration — connection survives IP change (WiFi → mobile)
  • Independent streams — packet loss only blocks affected stream
  • Built on UDP — less OS kernel overhead, user-space implementation

HTTP/3 Support

# Cloudflare, Fastly, Google, Akamai all support HTTP/3
# Check: curl --http3 https://cloudflare.com -I  (needs a curl build with HTTP/3 enabled)

# Nginx: HTTP/3 support in nginx 1.25+ (experimental)
listen 443 quic reuseport;
listen 443 ssl;
ssl_protocols TLSv1.3;  # QUIC requires TLS 1.3
add_header Alt-Svc 'h3=":443"; ma=86400';  # Advertise HTTP/3

WebSocket — Bidirectional Real-Time

WebSocket upgrades an HTTP connection to a persistent, bidirectional channel.

Client → Server:
  GET /ws HTTP/1.1
  Upgrade: websocket
  Connection: Upgrade
  Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==

Server → Client:
  HTTP/1.1 101 Switching Protocols
  Upgrade: websocket
  Connection: Upgrade
  Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

[Connection upgraded — now both sides can send at any time]
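The Sec-WebSocket-Accept value above is not arbitrary: per RFC 6455, the server appends a fixed GUID to the client’s key, SHA-1 hashes the result, and base64-encodes the digest. The handshake shown uses the RFC’s own sample values, so a Node.js sketch reproduces it exactly:

```typescript
import { createHash } from 'node:crypto';

// Fixed GUID defined in RFC 6455 — the same for every WebSocket server.
const WS_GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

// Derive Sec-WebSocket-Accept from the client's Sec-WebSocket-Key.
function acceptKey(secWebSocketKey: string): string {
  return createHash('sha1')
    .update(secWebSocketKey + WS_GUID)
    .digest('base64');
}

console.log(acceptKey('dGhlIHNhbXBsZSBub25jZQ=='));
// → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

This proves to the client that the server actually speaks WebSocket, rather than being a plain HTTP server that happened to echo the headers.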

WebSocket Server (Node.js)

import { WebSocketServer, WebSocket } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

const clients = new Set<WebSocket>();

wss.on('connection', (ws, req) => {
  clients.add(ws);
  console.log(`Client connected. Total: ${clients.size}`);

  ws.on('message', (data) => {
    const message = data.toString();
    console.log('Received:', message);

    // Broadcast to all connected clients
    for (const client of clients) {
      if (client.readyState === WebSocket.OPEN) {
        client.send(message);
      }
    }
  });

  ws.on('close', () => {
    clients.delete(ws);
    console.log(`Client disconnected. Total: ${clients.size}`);
  });

  ws.on('error', (err) => {
    console.error('WebSocket error:', err);
    clients.delete(ws);
  });

  // Send welcome message
  ws.send(JSON.stringify({ type: 'connected', timestamp: Date.now() }));
});

WebSocket Client (Browser)

let retries = 0;
let ws: WebSocket;

function connect() {
  ws = new WebSocket('wss://example.com/ws');  // wss = secure WebSocket

  ws.onopen = () => {
    retries = 0;  // reset backoff after a successful connect
    console.log('Connected');
    ws.send(JSON.stringify({ type: 'join', room: 'main' }));
  };

  ws.onmessage = (event) => {
    const message = JSON.parse(event.data);
    console.log('Received:', message);
  };

  ws.onclose = (event) => {
    console.log('Disconnected:', event.code, event.reason);
    // Auto-reconnect with capped backoff
    setTimeout(connect, 1000 * Math.min(++retries, 30));
  };

  ws.onerror = (error) => {
    console.error('WebSocket error:', error);
  };
}

connect();

// Send from anywhere (once open)
ws.send(JSON.stringify({ type: 'message', text: 'Hello!' }));

// Close gracefully
ws.close(1000, 'User logged out');

WebSocket with Nginx

location /ws {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 86400s;    # Long timeout for persistent connections
    proxy_send_timeout 86400s;
}
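Hardcoding Connection "upgrade" works, but a common nginx pattern (an addition here, not part of the config above) is to only send it when the client actually requested an upgrade:

```nginx
# Map in the http {} block: forward "Connection: upgrade" only for
# genuine upgrade requests; close the header otherwise.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# then inside location /ws:
#     proxy_set_header Connection $connection_upgrade;
```

This lets the same proxied location serve both plain HTTP requests and WebSocket upgrades correctly.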

Server-Sent Events (SSE)

SSE streams events from server to client over a regular HTTP connection. Simpler than WebSocket when you only need server → client:

// Express SSE endpoint
app.get('/events', (req, res) => {
  // Required headers
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  // Send event helper
  const sendEvent = (event: string, data: any, id?: string) => {
    if (id) res.write(`id: ${id}\n`);
    res.write(`event: ${event}\n`);
    res.write(`data: ${JSON.stringify(data)}\n\n`);
  };

  // Send initial data
  sendEvent('connected', { timestamp: Date.now() });

  // Send periodic updates
  const interval = setInterval(() => {
    sendEvent('update', { price: getStockPrice(), time: Date.now() });
  }, 1000);

  // Cleanup when client disconnects
  req.on('close', () => {
    clearInterval(interval);
    res.end();
  });
});

// Browser client — EventSource API
const eventSource = new EventSource('/events');

eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Update:', data);
};

eventSource.addEventListener('update', (event) => {
  updatePriceDisplay(JSON.parse(event.data));
});

// EventSource auto-reconnects on disconnect
// Last-Event-ID header resumes from last received event

eventSource.close();  // Stop listening
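On reconnect, EventSource automatically sends the last `id:` it saw in a Last-Event-ID header. The server-side replay logic can be sketched as a pure function (the event log and numeric ids are assumptions for illustration; they are not part of the EventSource API):

```typescript
// Hypothetical in-memory event log kept by the server so that a
// reconnecting client can be caught up instead of missing events.
type LoggedEvent = { id: number; data: string };

// Return every event after the client's Last-Event-ID.
function eventsSince(log: LoggedEvent[], lastEventId: number): LoggedEvent[] {
  return log.filter((e) => e.id > lastEventId);
}

const log: LoggedEvent[] = [
  { id: 1, data: 'a' },
  { id: 2, data: 'b' },
  { id: 3, data: 'c' },
];
eventsSince(log, 2);  // → [{ id: 3, data: 'c' }]
```

The endpoint would read the header (e.g. `req.headers['last-event-id']`), replay the missed events, and then continue streaming live ones.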

Comparison

Feature            HTTP/1.1     HTTP/2        HTTP/3      WebSocket       SSE
Protocol           TCP          TCP           QUIC/UDP    TCP             TCP
Direction          Req/Res      Req/Res       Req/Res     Bidirectional   Server→Client
Multiplexing       No           Yes           Yes         N/A             N/A
HOL blocking       HTTP + TCP   TCP only      None        N/A             N/A
TLS required       No           In practice   Yes         No (ws://)      No
Auto-reconnect     N/A          N/A           N/A         Manual          Built-in
Browser support    All          All           ~96%        All             All
