C++ WebSocket Complete Guide | Beast Handshake, Frames, Ping/Pong [#30-1]
What this article covers
Hands-on WebSocket in C++ with Boost.Beast: RFC 6455 handshake, frame layout, masking, Ping/Pong heartbeats, async client/server, errors, and production patterns (Redis, sticky LB, metrics).
Introduction: “We need bidirectional real-time messaging”
Why HTTP polling breaks down
// ❌ Problem: polling wastes work
while (true) {
auto response = httpGet("/api/messages");
if (response.has_new_messages()) {
process(response.messages);
}
std::this_thread::sleep_for(std::chrono::seconds(1));
}
// Issues:
// - Mostly empty responses
// - Up to one period of latency
// - Load scales with clients × poll rate
What goes wrong in production:
- Latency bounded by the poll interval
- Wasted requests when nothing changed
- Load ≈ concurrent users × poll frequency
- Cost from traffic and CPU spent on empty replies
More scenarios
Scenario 2: polling melts the server
Fifty thousand users polling once per second is fifty thousand HTTP transactions per second. WebSocket keeps one socket per subscriber, which usually shrinks overhead dramatically.
Scenario 3: market data latency
A five-second poll means quotes can be five seconds stale, unacceptable for low-latency trading. Push over WebSocket delivers updates as they arrive.
Scenario 4: lost messages after reconnect
Mobile backgrounds drop TCP sessions. Without sequence numbers or offsets, you cannot resume mid-stream after reconnect.
Scenario 5: corporate proxies block upgrades
Some networks block Upgrade: websocket. Fallback strategies include long polling over HTTPS or WSS on port 443.
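Scenario 4 hinges on sequence numbers. As a minimal sketch (the `ReplayBuffer` name and API are illustrative, not from any library), a bounded buffer keyed by sequence number lets a reconnecting client ask "resend everything after seq N":

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <string>
#include <utility>
#include <vector>

// Bounded replay buffer keyed by sequence number. A reconnecting client
// reports the last seq it saw; the server resends everything newer.
class ReplayBuffer {
    std::deque<std::pair<uint64_t, std::string>> buf_;
    uint64_t next_seq_ = 1;
    size_t cap_;
public:
    explicit ReplayBuffer(size_t cap) : cap_(cap) {}

    uint64_t push(std::string msg) {
        uint64_t seq = next_seq_++;
        buf_.emplace_back(seq, std::move(msg));
        if (buf_.size() > cap_) buf_.pop_front();  // oldest falls off
        return seq;
    }

    // Everything the client missed after last_seen (empty if nothing newer).
    std::vector<std::string> replay_from(uint64_t last_seen) const {
        std::vector<std::string> out;
        for (const auto& [seq, msg] : buf_)
            if (seq > last_seen) out.push_back(msg);
        return out;
    }
};
```

Clients too far behind (their seq already evicted) fall back to a full state reload.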
How WebSocket helps:
// ✅ WebSocket: server pushes when data exists
ws.async_read(buffer,
[&](beast::error_code ec, std::size_t bytes) {
if (!ec) {
process(buffer);
ws.async_read(buffer, ...); // keep waiting for the next message
}
});
// Benefits:
// - Event-driven delivery
// - One connection, bidirectional
// - No periodic polling loop
Goals:
- Understand the WebSocket protocol (handshake, frames)
- Use websocket::stream from Boost.Beast
- Implement Ping/Pong heartbeats
- Handle errors and reconnect
- Compare WebSocket vs HTTP polling
- Sketch production patterns (chat, dashboards)
Prerequisites: Boost.Beast 1.70+
After reading you should be able to reason about frames in Wireshark, ship a Beast client/server, and tune timeouts for real networks.
Mental model
Think of sockets as addresses and async I/O as delivery routes: Asio schedules handlers (like workers) across threads; a strand keeps one connection’s work serialized.
Practical focus: examples come from large C++ services—edge cases that textbooks skip (Safari + TLS, masking, proxy timeouts).
Contents
- Protocol layout
- Handshake
- Frames
- Beast WebSocket client
- Beast WebSocket server
- Ping/Pong heartbeat
- Common errors
- Best practices
- Performance
- Production examples
1. WebSocket protocol layout
Connection lifecycle
sequenceDiagram
participant C as Client
participant S as Server
C->>S: HTTP GET /ws<br/>Upgrade: websocket<br/>Sec-WebSocket-Key: xxx
S->>C: HTTP 101 Switching Protocols<br/>Sec-WebSocket-Accept: yyy
Note over C,S: WebSocket established
C->>S: WebSocket Frame (Text)
S->>C: WebSocket Frame (Text)
C->>S: Ping
S->>C: Pong
C->>S: Close
S->>C: Close
Connection state machine
stateDiagram-v2
[*] --> Connecting: TCP connect
Connecting --> Open: 101 Switching Protocols
Connecting --> [*]: 400/403 etc.
Open --> Closing: Close frame received
Open --> [*]: Abrupt drop
Closing --> [*]: Close complete
HTTP vs WebSocket
| Aspect | HTTP | WebSocket |
|---|---|---|
| Connection | Often per request | Long-lived |
| Direction | Request/response | Bidirectional |
| Overhead | Large HTTP headers | Small frame header (2–14 B) |
| Timeliness | Needs polling | Push |
| Load | High with polling | Event-driven |
2. Handshake
Client request
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13
Important headers:
- Upgrade: websocket - request protocol switch
- Connection: Upgrade - HTTP upgrade hop
- Sec-WebSocket-Key - random 16 bytes, Base64
- Sec-WebSocket-Version: 13 - only standardized version
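For illustration, the client key is just 16 random bytes, Base64-encoded. A Beast client generates it for you inside `async_handshake`; the sketch below (helper names `base64` and `make_websocket_key` are hypothetical) shows the shape of the value:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <random>
#include <string>

// Standard Base64 (RFC 4648) over a byte buffer.
std::string base64(const uint8_t* data, size_t len) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    for (size_t i = 0; i < len; i += 3) {
        uint32_t n = static_cast<uint32_t>(data[i]) << 16;
        if (i + 1 < len) n |= static_cast<uint32_t>(data[i + 1]) << 8;
        if (i + 2 < len) n |= data[i + 2];
        out += tbl[(n >> 18) & 63];
        out += tbl[(n >> 12) & 63];
        out += (i + 1 < len) ? tbl[(n >> 6) & 63] : '=';
        out += (i + 2 < len) ? tbl[n & 63] : '=';
    }
    return out;
}

// Sec-WebSocket-Key: 16 random bytes, Base64-encoded (always 24 chars).
std::string make_websocket_key() {
    std::random_device rd;
    uint8_t nonce[16];
    for (auto& b : nonce) b = static_cast<uint8_t>(rd());
    return base64(nonce, sizeof(nonce));
}
```

Since 16 is not a multiple of 3, the encoded key always ends in `==` and is exactly 24 characters long.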
Server response
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
Raw bytes (first lines of the client request as sent on the wire):
47 45 54 20 2f 63 68 61 74 20 48 54 54 50 2f 31 GET /chat HTTP/1
2e 31 0d 0a 48 6f 73 74 3a 20 65 78 61 6d 70 6c .1..Host: exampl
65 2e 63 6f 6d 0d 0a 55 70 67 72 61 64 65 3a 20 e.com..Upgrade:
77 65 62 73 6f 63 6b 65 74 0d 0a 43 6f 6e 6e 65 websocket..Conne
63 74 69 6f 6e 3a 20 55 70 67 72 61 64 65 0d 0a ction: Upgrade..
Computing Sec-WebSocket-Accept:
#include <openssl/sha.h>
#include <boost/beast/core/detail/base64.hpp>
std::string computeAccept(const std::string& key) {
// RFC 6455: key + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
std::string magic = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";
std::string input = key + magic;
// SHA-1
unsigned char hash[SHA_DIGEST_LENGTH];
SHA1(reinterpret_cast<const unsigned char*>(input.c_str()),
input.size(), hash);
// Base64
std::string result;
result.resize(boost::beast::detail::base64::encoded_size(SHA_DIGEST_LENGTH));
result.resize(boost::beast::detail::base64::encode(
&result[0], hash, SHA_DIGEST_LENGTH));
return result;
}
Handshake flow
flowchart TB
Start[TCP connect] --> ClientReq[Client: HTTP Upgrade]
ClientReq --> ServerCheck{Server: valid?}
ServerCheck -->|Yes| ServerResp[Server: 101 Switching Protocols]
ServerCheck -->|No| ServerErr[Server: 400 Bad Request]
ServerResp --> WSOpen[WebSocket open]
ServerErr --> Close[Connection closed]
WSOpen --> DataExchange[Data frames]
style WSOpen fill:#4caf50
style ServerErr fill:#f44336
3. Frames
Frame layout
graph LR
A["FIN<br/>1 bit"] --> B["RSV<br/>3 bits"]
B --> C["Opcode<br/>4 bits"]
C --> D["Mask<br/>1 bit"]
D --> E["Payload Len<br/>7 bits"]
E --> F["Extended<br/>Payload Len"]
F --> G["Masking Key<br/>4 bytes"]
G --> H[Payload Data]
style A fill:#ff9800
style C fill:#4caf50
style D fill:#2196f3
Opcodes
| Opcode | Value | Role |
|---|---|---|
| Continuation | 0x0 | Continues previous fragment |
| Text | 0x1 | UTF-8 text |
| Binary | 0x2 | Binary payload |
| Close | 0x8 | Close connection |
| Ping | 0x9 | Heartbeat probe |
| Pong | 0xA | Heartbeat reply |
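The opcode lives in the low nibble of the first header byte, next to FIN and the mask bit. A minimal decoder for just the first two bytes (the `FrameHeader` type is illustrative, not a Beast API):

```cpp
#include <cassert>
#include <cstdint>

// First two bytes of every WebSocket frame (RFC 6455 §5.2):
// byte 0: FIN(1) RSV(3) opcode(4); byte 1: MASK(1) payload-len(7).
struct FrameHeader {
    bool fin;
    uint8_t opcode;
    bool masked;
    uint8_t payload_len7;  // 126/127 signal an extended length field
};

FrameHeader parse_header(uint8_t b0, uint8_t b1) {
    return FrameHeader{
        (b0 & 0x80) != 0,                  // FIN
        static_cast<uint8_t>(b0 & 0x0F),   // opcode
        (b1 & 0x80) != 0,                  // MASK
        static_cast<uint8_t>(b1 & 0x7F),   // 7-bit payload length
    };
}
```

For example, `0x81 0x82` decodes to a final Text frame, masked, with a 2-byte payload, exactly the header used in the worked example below.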
Close frames
// Beast: graceful close
ws_.close(websocket::close_code::normal);
// Optional close reason string
ws_.close({websocket::close_code::normal, "server shutting down"});
// In read handler when closed
if (ec == websocket::error::closed) {
auto reason = ws_.reason();
// reason.code (1000 normal, 1001 going away, ...)
// reason.reason (UTF-8 string)
}
Masking
Rule: client → server frames must be masked.
// XOR mask
void mask_payload(uint8_t* data, size_t len, const uint8_t mask_key[4]) {
for (size_t i = 0; i < len; ++i) {
data[i] ^= mask_key[i % 4];
}
}
Why: mitigates cache poisoning on misbehaving intermediaries.
Worked example: text “Hi”
Raw bytes when the client sends two bytes of UTF-8 text:
Byte 0: 0x81 (FIN=1, RSV=0, opcode Text)
Byte 1: 0x82 (MASK=1, payload len=2)
Bytes 2–5: 4-byte masking key (example 0x37 0xfa 0x21 0x3d)
Bytes 6–7: "Hi" XOR mask → 0x7f 0x93 (masked payload)
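We can sanity-check the masked bytes by running the XOR rule from the masking section over "Hi" with that key; applying the mask twice restores the plaintext, since XOR is its own inverse:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Same XOR rule as in the masking section above.
void mask_payload(uint8_t* data, size_t len, const uint8_t mask_key[4]) {
    for (size_t i = 0; i < len; ++i) {
        data[i] ^= mask_key[i % 4];
    }
}
```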
Sending with Beast (masking applied automatically in client role):
// Beast masks client→server frames for you
ws_.text(true);
ws_.async_write(net::buffer("Hi", 2), // explicit length: skip the NUL terminator
[](beast::error_code ec, std::size_t bytes) {
if (!ec) {
// 2 B payload + 2–14 B header on the wire
}
});
4. Beast WebSocket client
Minimal synchronous client
#include <boost/beast.hpp>
#include <boost/asio.hpp>
#include <iostream>
namespace beast = boost::beast;
namespace websocket = beast::websocket;
namespace net = boost::asio;
using tcp = net::ip::tcp;
class WebSocketClient {
net::io_context& ioc_;
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
public:
explicit WebSocketClient(net::io_context& ioc)
: ioc_(ioc), ws_(net::make_strand(ioc)) {}
void connect(const std::string& host, const std::string& port) {
// Resolve host
tcp::resolver resolver(ioc_);
auto results = resolver.resolve(host, port);
// TCP connect
net::connect(ws_.next_layer(), results);
// WebSocket handshake
ws_.handshake(host, "/");
std::cout << "WebSocket connected to " << host << "\n";
}
void send(const std::string& message) {
ws_.write(net::buffer(message));
}
std::string receive() {
buffer_.clear();
ws_.read(buffer_);
return beast::buffers_to_string(buffer_.data());
}
void close() {
ws_.close(websocket::close_code::normal);
}
};
// Example
int main() {
net::io_context ioc;
WebSocketClient client(ioc);
client.connect("echo.websocket.org", "80");
client.send("Hello, WebSocket!");
std::string response = client.receive();
std::cout << "Received: " << response << "\n";
client.close();
}
Async client
class AsyncWebSocketClient : public std::enable_shared_from_this<AsyncWebSocketClient> {
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
tcp::resolver resolver_; // member: must outlive the pending async_resolve
public:
explicit AsyncWebSocketClient(net::io_context& ioc)
: ws_(net::make_strand(ioc)), resolver_(ws_.get_executor()) {}
void connect(const std::string& host, const std::string& port) {
resolver_.async_resolve(host, port,
[self = shared_from_this(), host](
beast::error_code ec,
tcp::resolver::results_type results) {
if (ec) {
std::cerr << "Resolve error: " << ec.message() << "\n";
return;
}
// TCP connect
beast::get_lowest_layer(self->ws_).async_connect(results,
[self, host](beast::error_code ec, tcp::endpoint) {
if (ec) {
std::cerr << "Connect error: " << ec.message() << "\n";
return;
}
// WebSocket handshake
self->ws_.async_handshake(host, "/",
[self](beast::error_code ec) {
if (ec) {
std::cerr << "Handshake error: " << ec.message() << "\n";
return;
}
std::cout << "WebSocket connected\n";
self->do_read();
});
});
});
}
void send(const std::string& message) {
// Keep the payload alive until the async write completes (needs <memory>)
auto msg = std::make_shared<std::string>(message);
ws_.async_write(net::buffer(*msg),
[self = shared_from_this(), msg](beast::error_code ec, std::size_t) {
if (ec) {
std::cerr << "Write error: " << ec.message() << "\n";
}
});
}
private:
void do_read() {
auto self = shared_from_this();
ws_.async_read(buffer_,
[self](beast::error_code ec, std::size_t bytes) {
if (ec) {
if (ec != websocket::error::closed) {
std::cerr << "Read error: " << ec.message() << "\n";
}
return;
}
std::cout << "Received: "
<< beast::buffers_to_string(self->buffer_.data()) << "\n";
self->buffer_.clear();
self->do_read(); // wait for next message
});
}
};
5. Beast WebSocket server
Echo server sketch
class WebSocketSession : public std::enable_shared_from_this<WebSocketSession> {
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
public:
explicit WebSocketSession(tcp::socket socket)
: ws_(std::move(socket)) {}
void run() {
// Suggested timeouts
ws_.set_option(websocket::stream_base::timeout::suggested(
beast::role_type::server));
ws_.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(beast::http::field::server, "Beast WebSocket Server");
}));
// Accept HTTP upgrade
ws_.async_accept(
[self = shared_from_this()](beast::error_code ec) {
if (ec) {
std::cerr << "Accept error: " << ec.message() << "\n";
return;
}
self->do_read();
});
}
private:
void do_read() {
auto self = shared_from_this();
ws_.async_read(buffer_,
[self](beast::error_code ec, std::size_t) {
if (ec) {
if (ec == websocket::error::closed) {
std::cout << "Connection closed\n";
} else {
std::cerr << "Read error: " << ec.message() << "\n";
}
return;
}
// Echo: preserve text/binary flag
self->ws_.text(self->ws_.got_text());
self->ws_.async_write(self->buffer_.data(),
[self](beast::error_code ec, std::size_t) {
if (ec) {
std::cerr << "Write error: " << ec.message() << "\n";
return;
}
self->buffer_.clear();
self->do_read();
});
});
}
};
class WebSocketServer {
net::io_context& ioc_;
tcp::acceptor acceptor_;
public:
WebSocketServer(net::io_context& ioc, uint16_t port)
: ioc_(ioc),
acceptor_(ioc, tcp::endpoint(tcp::v4(), port)) {}
void run() {
do_accept();
}
private:
void do_accept() {
acceptor_.async_accept(
net::make_strand(ioc_),
[this](beast::error_code ec, tcp::socket socket) {
if (!ec) {
std::make_shared<WebSocketSession>(std::move(socket))->run();
}
do_accept();
});
}
};
int main() {
net::io_context ioc{1};
WebSocketServer server(ioc, 8080);
server.run();
std::cout << "WebSocket server listening on port 8080\n";
ioc.run();
}
6. Ping/Pong Heartbeat
Ping/Pong sequence
sequenceDiagram
participant C as Client
participant S as Server
Note over C: Ping every 30s
C->>S: Ping
S->>C: Pong
Note over C: keepalive OK
C->>S: Ping
Note over S: no Pong (dead peer)
Note over C: timeout → reconnect
Ping timer
class WebSocketClientWithPing : public std::enable_shared_from_this<WebSocketClientWithPing> {
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
net::steady_timer ping_timer_;
public:
explicit WebSocketClientWithPing(net::io_context& ioc)
: ws_(net::make_strand(ioc)),
ping_timer_(ws_.get_executor()) {}
void start_ping() {
ping_timer_.expires_after(std::chrono::seconds(30));
ping_timer_.async_wait(
[self = shared_from_this()](beast::error_code ec) {
if (ec) return;
// Send ping
self->ws_.async_ping({},
[self](beast::error_code ec) {
if (ec) {
std::cerr << "Ping error: " << ec.message() << "\n";
return;
}
self->start_ping(); // schedule next ping
});
});
}
};
Automatic Pong
Beast replies to Ping with Pong by default. For logging or custom behavior:
ws_.control_callback(
[](websocket::frame_type kind, beast::string_view payload) {
if (kind == websocket::frame_type::ping) {
std::cout << "Received Ping\n";
// Beast still sends Pong unless you take over
} else if (kind == websocket::frame_type::pong) {
std::cout << "Received Pong\n";
}
});
Heartbeat with Pong timeout
Example pattern: reconnect if Pong is missing:
class WebSocketWithHeartbeat : public std::enable_shared_from_this<WebSocketWithHeartbeat> {
websocket::stream<beast::tcp_stream> ws_;
beast::flat_buffer buffer_;
net::steady_timer ping_timer_;
net::steady_timer pong_timer_;
bool pong_received_ = true;
public:
void start_heartbeat() {
pong_received_ = true;
schedule_ping();
}
private:
void schedule_ping() {
ping_timer_.expires_after(std::chrono::seconds(30));
ping_timer_.async_wait(
[self = shared_from_this()](beast::error_code ec) {
if (ec) return;
if (!self->pong_received_) {
std::cerr << "Pong timeout - reconnecting\n";
self->reconnect();
return;
}
self->pong_received_ = false;
self->ws_.async_ping({},
[self](beast::error_code ec) {
if (ec) return;
self->schedule_pong_timeout();
self->schedule_ping();
});
});
}
void schedule_pong_timeout() {
pong_timer_.expires_after(std::chrono::seconds(10));
pong_timer_.async_wait(
[self = shared_from_this()](beast::error_code ec) {
if (ec) return;
if (!self->pong_received_) {
self->ws_.close(websocket::close_code::normal);
}
});
}
void setup_control_callback() {
// weak_ptr avoids a shared_ptr cycle through the stream's stored callback
std::weak_ptr<WebSocketWithHeartbeat> weak = shared_from_this();
ws_.control_callback(
[weak](websocket::frame_type kind, beast::string_view) {
auto self = weak.lock();
if (self && kind == websocket::frame_type::pong) {
self->pong_received_ = true;
}
});
}
void reconnect() { /* ... */ }
};
7. Common failures
Handshake errors
Typical causes: bad Sec-WebSocket-Key, wrong version.
// Error handling
ws_.async_handshake(host, "/",
[](beast::error_code ec) {
if (ec) {
if (ec == websocket::error::upgrade_declined) {
std::cerr << "Server declined the WebSocket upgrade\n";
} else if (ec == websocket::error::bad_http_version) {
std::cerr << "HTTP version mismatch\n";
} else {
std::cerr << "Handshake error: " << ec.message() << "\n";
}
}
});
Frame parsing
Typical causes: missing mask, illegal opcode.
| Symptom | Cause | Mitigation |
|---|---|---|
| Mask required | client sent unmasked payload | use Beast client or mask manually |
| Invalid opcode | bug or attack | validate before parsing |
| Payload too large | exceeds cap | set read_message_max |
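For the invalid-opcode row, a cheap guard before parsing: RFC 6455 defines data opcodes 0x0–0x2 and control opcodes 0x8–0xA, and everything else is reserved. A sketch (the helper name is illustrative; Beast performs this validation internally):

```cpp
#include <cassert>
#include <cstdint>

// RFC 6455 §5.2: data opcodes 0x0-0x2, control opcodes 0x8-0xA.
// Reserved values indicate a buggy or malicious peer.
bool is_valid_opcode(uint8_t op) {
    return op <= 0x2 || (op >= 0x8 && op <= 0xA);
}
```

A server receiving a reserved opcode should fail the connection (close code 1002, protocol error).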
Safari + WSS drops (use a strand)
Symptom: macOS Safari closes WSS when async work races across threads.
Fix: run every async op on one strand.
auto strand = net::make_strand(ioc);
// bind async ops to strand
ws_.async_handshake(host, "/",
net::bind_executor(strand, [](beast::error_code ec) {
// ...
}));
ws_.async_read(buffer_,
net::bind_executor(strand, [](beast::error_code ec, std::size_t bytes) {
// ...
}));
Connection timeout
Typical causes: firewall, wrong host/port, flaky Wi‑Fi.
// ❌ async_connect without deadline → can hang forever
beast::get_lowest_layer(ws_).async_connect(results, ...);
// ✅ set deadline / timeout on tcp_stream
beast::get_lowest_layer(ws_).expires_after(std::chrono::seconds(10));
beast::get_lowest_layer(ws_).async_connect(results,
[](beast::error_code ec, tcp::endpoint) {
if (ec == beast::error::timeout) {
std::cerr << "Connection timeout\n";
}
});
400 / 403 on upgrade
Typical causes: Origin mismatch, subprotocol mismatch, auth failure.
// A response decorator cannot see the request, so read the upgrade
// request yourself, check Origin, then accept (stream/buffer assumed):
beast::http::request<beast::http::string_body> req;
beast::http::read(stream, buffer, req); // plain HTTP read before accepting
if (req[beast::http::field::origin] != "https://myapp.com") {
// respond 403 Forbidden and close instead of upgrading
} else {
ws_.accept(req); // hand the already-parsed request to Beast
}
Payload too large (DoS)
Cause: malicious multi‑gigabyte frames.
// Beast default max is large; cap explicitly
ws_.read_message_max(1024 * 1024); // 1 MiB
// overflow → websocket::error::message_too_big
ws_.async_read(buffer_, [&](beast::error_code ec, std::size_t) {
if (ec == websocket::error::message_too_big) {
ws_.close(websocket::close_code::too_big);
}
});
Concurrent read/write (data races)
Cause: overlapping async_read/async_write on the same stream without serialization.
// ❌ overlapping writes from multiple callbacks
void do_read() {
ws_.async_read(buffer_, [this](beast::error_code ec, std::size_t) {
ws_.async_write(...); // another thread may also call do_read
do_read();
});
}
// ✅ construct stream on a strand
ws_ = websocket::stream<beast::tcp_stream>(net::make_strand(ioc));
shared_ptr cycles with timers
Cause: timers capture shared_from_this() and never cancel.
// ❌ timer keeps session alive forever
ping_timer_.async_wait([self = shared_from_this()](beast::error_code) {
self->ws_.async_ping(...);
});
// ✅ use weak_ptr for heartbeat callbacks
auto weak = std::weak_ptr<Session>(shared_from_this());
ping_timer_.async_wait([weak](beast::error_code ec) {
auto self = weak.lock();
if (!self || ec) return;
// ...
});
8. Best practices
Serialize with a strand
Run all async operations for one connection on a single strand to avoid overlapping read/write crashes.
websocket::stream<beast::tcp_stream> ws_{net::make_strand(ioc)};
Timeouts
ws_.set_option(websocket::stream_base::timeout::suggested(beast::role_type::server));
// Suggested defaults differ for server vs client roles (idle + handshake)
Cap message size
ws_.read_message_max(1024 * 1024); // 1 MiB
Reconnect with exponential backoff
void reconnect_with_backoff() {
auto delay = std::min(
base_delay_ * (1 << retry_count_),
std::chrono::seconds(60)
);
retry_timer_.expires_after(delay);
retry_timer_.async_wait([this](beast::error_code ec) {
if (!ec) {
connect();
retry_count_ = std::min(retry_count_ + 1, 10);
}
});
}
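For reference, here is the delay schedule that logic produces, assuming a 1-second base delay (`backoff_delay` is an illustrative std-only helper; the shift count is clamped so the multiplier cannot overflow):

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>

// Capped exponential backoff: 1s, 2s, 4s, ... clamped at 60s.
std::chrono::seconds backoff_delay(int retry_count) {
    auto delay = std::chrono::seconds(1) * (1 << std::min(retry_count, 6));
    return std::min(delay, std::chrono::seconds(60));
}
```

In production, adding random jitter to each delay helps avoid thundering-herd reconnects after a server restart.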
Structured logging
struct WsError {
beast::error_code ec;
std::string context;
std::chrono::system_clock::time_point when;
};
// Ship JSON logs to your observability stack
Checklist
- Serialize async ops with a strand
- Set read_message_max (~1 MiB typical for JSON chat)
- Apply timeout::suggested or explicit deadlines
- Ping ~30s; reconnect if Pong missing
- weak_ptr in heartbeat timers
- Optional control_callback tracing
9. Performance snapshot
WebSocket vs HTTP polling
Example: 10k concurrent users, one event per user per minute.
| Mode | Requests/min | Bandwidth (illustrative) | Latency |
|---|---|---|---|
| HTTP poll (1s) | 600,000 | large | up to 1s |
| Long poll | ~10,000 | medium | up to hold window |
| WebSocket | ~10,000 | small | push |
Takeaway: polling wastes bandwidth when updates are rare; WebSocket removes periodic HTTP overhead.
Order-of-magnitude resources
| Concurrent sockets | RAM | CPU |
|---|---|---|
| 1,000 | tens of MB | low % |
| 10,000 | hundreds of MB | mid teens % |
| 100,000 | multi‑GB | workload dependent |
10. Production sketches
Chat fan-out
class ChatServer {
std::set<std::shared_ptr<WebSocketSession>> sessions_;
std::mutex mutex_;
public:
void join(std::shared_ptr<WebSocketSession> session) {
std::lock_guard<std::mutex> lock(mutex_);
sessions_.insert(session);
}
void leave(std::shared_ptr<WebSocketSession> session) {
std::lock_guard<std::mutex> lock(mutex_);
sessions_.erase(session);
}
void broadcast(const std::string& message) {
std::lock_guard<std::mutex> lock(mutex_);
for (auto& session : sessions_) {
session->send(message);
}
}
};
Live dashboard
class DashboardServer {
std::map<std::string, std::shared_ptr<WebSocketSession>> subscribers_;
public:
void subscribe(const std::string& topic, std::shared_ptr<WebSocketSession> session) {
subscribers_[topic] = session;
}
void publish(const std::string& topic, const nlohmann::json& data) {
auto it = subscribers_.find(topic);
if (it != subscribers_.end()) {
it->second->send(data.dump());
}
}
// push metrics every second (example)
void pushMetrics() {
nlohmann::json metrics = {
{"cpu", getCpuUsage()},
{"memory", getMemoryUsage()},
{"requests", getRequestCount()}
};
publish("metrics", metrics);
}
};
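The map above keeps only one subscriber per topic; real dashboards usually need many. A Boost-free sketch of the fan-out core (the `TopicRegistry` name is illustrative; the `std::function` sink stands in for a session's `send()`):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Topic -> many subscribers; each sink stands in for a session's send().
class TopicRegistry {
    std::map<std::string,
             std::vector<std::function<void(const std::string&)>>> subs_;
public:
    void subscribe(const std::string& topic,
                   std::function<void(const std::string&)> sink) {
        subs_[topic].push_back(std::move(sink));
    }

    // Delivers payload to every subscriber; returns how many received it.
    size_t publish(const std::string& topic, const std::string& payload) {
        auto it = subs_.find(topic);
        if (it == subs_.end()) return 0;
        for (auto& sink : it->second) sink(payload);
        return it->second.size();
    }
};
```

In a live server the sinks would capture `weak_ptr`s to sessions, so disconnected clients can be pruned instead of kept alive by the registry.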
Load balancer + sticky sessions
Stateful features often need sticky sessions (same client → same backend) unless you externalize all fan-out (Redis, etc.).
# Nginx example
upstream websocket_backend {
ip_hash; # pin client IP to one upstream
server 10.0.1.1:8080;
server 10.0.1.2:8080;
}
location /ws {
proxy_pass http://websocket_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
Horizontal scale (Redis Pub/Sub)
Broadcast across nodes when connections are not all local:
// Node A publishes after handling a chat event
redis.publish("chat:room1", message);
// Nodes B/C subscribed to the channel forward to local room members
redis.subscribe("chat:room1", [this](const std::string& msg) {
for (auto& session : room1_sessions_) {
session->send(msg);
}
});
Health checks
// For Kubernetes/ECS, expose a plain HTTP /health (200 OK) on a separate port
// WebSocket-specific probes often use synthetic Ping/Pong or TCP checks
Production rollout checklist
- Prefer WSS on 443
- Configure LB stickiness when sessions are local
- Set proxy_read_timeout above heartbeat interval (often 3600s)
- Use Redis or a message bus for cross-node fan-out
- Metrics: active connections, msgs/sec, handshake failures
- Client reconnect with exponential backoff
References
Related posts
- C++ HTTP fundamentals | Beast #30-1
- Multithreaded network servers | strand #29-3
- C++ SSL/TLS with Asio #30-2
- C++ chat server | broadcast #31-1
Keywords
C++ WebSocket, Boost.Beast, handshake, frames, Ping, Pong, real-time.
Summary
| Topic | Detail |
|---|---|
| Wire format | HTTP upgrade → framed messages |
| Handshake | Sec-WebSocket-Key / Sec-WebSocket-Accept |
| Frames | Opcodes for text, binary, ping, pong, close |
| Masking | Required client→server |
| Heartbeat | Ping/Pong + app-level keepalives |
| Ops | WSS, LB timeouts, Redis fan-out, metrics |
FAQ
When do I need this in production?
Any bidirectional real-time workload: chat, market data, games, dashboards, notifications.
How much better than polling?
Polling issues scale with users × poll rate; WebSocket removes that multiplier for push-heavy workloads.
Safari drops my WSS connection
Serialize every async WebSocket operation on a strand (see above).
What to read next?
Continue the series: WebSocket deep dive #30-2, TLS #30-2.
Where to study deeper?
RFC 6455 and the official Beast docs.
Next: C++ guide #30-2 — SSL/TLS with OpenSSL and Asio
Previous: Multithreaded servers — io_context pools and strands
See also
- C++ WebSocket deep dive | errors & production patterns