Build a C++ Chat Server: Multi-Client Broadcast with Boost.Asio and Strands

Key takeaways

Broadcast messages safely: post work to a strand, per-session write queues, NICK protocol, recent history, slow-client limits, read buffer caps, multi-room sketch, benchmarks, and systemd/Docker deployment.

Introduction: broadcasting without data races

Problems: iterating participants_ while another thread's leave() invalidates the iterator; slow clients grow their write queues without bound; join/leave notifications must stay consistent with the membership set.

Core idea: asio::post(strand_, ...) serializes join, leave, deliver, and history updates. Each Session uses shared_ptr, enable_shared_from_this, async_read_until('\n'), and a write queue so only one async_write runs at a time.

Requirements: C++17+, Boost.Asio 1.70+.


Architecture

flowchart TB
    subgraph Server["Chat server"]
        Acceptor["acceptor / async_accept"]
        Room["ChatRoom / participants + history"]
        Strand["strand / synchronization"]
    end

    subgraph Sessions["Sessions"]
        S1[Session A]
        S2[Session B]
        S3[Session C]
    end

    Acceptor -->|new connection| S1
    Acceptor -->|new connection| S2
    Acceptor -->|new connection| S3

    S1 -->|join/leave/deliver| Strand
    S2 -->|join/leave/deliver| Strand
    S3 -->|join/leave/deliver| Strand

    Strand --> Room

    Room -->|async_write| S1
    Room -->|async_write| S2
    Room -->|async_write| S3

ChatRoom (outline)

  • std::set<std::shared_ptr<Session>> participants_
  • std::deque<std::string> history_ capped at max_history_
  • All mutations asio::post(strand_, ...)

deliver(message, sender): append to history; deliver to every participant except sender.


Session (outline)

  • async_read_until for lines; parse NICK name then chat lines as nick: text\n.
  • deliver: push to write_queue_; start do_write if idle.
  • do_write: async_write front message; on success pop and continue.
  • bind_executor(strand_, ...) on handlers sharing the room’s strand.
  • On eof / connection_reset: leave, broadcast_leave.
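The write-queue rule ("start do_write if idle, pop and continue on success") is the part that trips people up, so here is the queue state machine in isolation. The WriteQueue name is made up for this sketch, and the async_write call is replaced by a pluggable callback so the ordering is visible without a socket:

```cpp
#include <deque>
#include <functional>
#include <string>

// Pure-logic sketch of the per-session write queue. 'write' stands in for
// async_write; it must invoke its completion callback when the write is done.
class WriteQueue {
public:
    using WriteFn = std::function<void(const std::string&, std::function<void()>)>;

    explicit WriteQueue(WriteFn write) : write_(std::move(write)) {}

    void deliver(std::string msg) {
        bool idle = queue_.empty();
        queue_.push_back(std::move(msg));
        if (idle) do_write();  // only start a write when none is in flight
    }

private:
    void do_write() {
        write_(queue_.front(), [this] {
            queue_.pop_front();                // front message fully written
            if (!queue_.empty()) do_write();   // chain into the next one
        });
    }

    std::deque<std::string> queue_;
    WriteFn write_;
};
```

The invariant this enforces: at most one write is outstanding per session, and messages leave in arrival order, which is exactly why calling async_write directly from deliver would interleave bytes.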

Why strand?

Without serialization, concurrent deliver / leave calls corrupt participants_. A strand guarantees ordered, non-concurrent execution of the handlers posted to it, even when io_context::run() is called from multiple threads.


Protocol (text)

Client sends NICK alice\n, then lines; server broadcasts alice: … and system lines for join/leave.

Binary framing (length prefix) is covered in protocol #30-3.
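The text protocol needs only two pieces of parsing. A minimal sketch, with parse_nick and format_chat as hypothetical helper names (it also tolerates the trailing \r that telnet-style clients send):

```cpp
#include <optional>
#include <string>

// First line from a client must be "NICK <name>"; returns the name or
// nullopt if the line is not a valid NICK command.
std::optional<std::string> parse_nick(const std::string& line) {
    const std::string prefix = "NICK ";
    if (line.compare(0, prefix.size(), prefix) != 0) return std::nullopt;
    std::string name = line.substr(prefix.size());
    if (!name.empty() && name.back() == '\r') name.pop_back();  // tolerate CRLF
    if (name.empty()) return std::nullopt;
    return name;
}

// Later chat lines are broadcast as "<nick>: <text>\n".
std::string format_chat(const std::string& nick, const std::string& text) {
    return nick + ": " + text + "\n";
}
```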


Multi-room sketch

unordered_map<string, shared_ptr<ChatRoom>> protected by a mutex that guards only the map structure; each room still serializes its own participant state on its own strand.
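A sketch of that split: the mutex covers only get-or-create on the map, nothing about room internals. RoomDirectory and the stand-in ChatRoom type here are illustrative names:

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

struct ChatRoom {};  // stand-in for the real room (strand + participants + history)

class RoomDirectory {
public:
    // The mutex protects only the map structure; once a caller holds the
    // shared_ptr, all room mutations go through that room's own strand.
    std::shared_ptr<ChatRoom> get_or_create(const std::string& name) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto& slot = rooms_[name];
        if (!slot) slot = std::make_shared<ChatRoom>();
        return slot;
    }

private:
    std::mutex mutex_;
    std::unordered_map<std::string, std::shared_ptr<ChatRoom>> rooms_;
};
```

The lock is held only for the map lookup, so a slow broadcast in one room never blocks clients joining another.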


Errors and limits

  • Data race: fix with strand (or single-threaded io.run()).
  • Memory leak: ensure leave removes shared_ptr from set.
  • Queue bomb: cap write_queue_ and close socket.
  • DoS via read buffer: cap streambuf size or switch to framed reads.
  • shared_from_this in ctor: call join from start() after make_shared.
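The queue-bomb fix from the list above fits in a few lines. kMaxQueue and try_enqueue are assumed names, and 64 is an arbitrary example cap; in Session::deliver, a false return means "close this socket":

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Slow-client guard: refuse to buffer past a fixed cap. The caller treats
// a false return as a signal to drop the connection.
constexpr std::size_t kMaxQueue = 64;  // assumed limit, tune per deployment

bool try_enqueue(std::deque<std::string>& queue, std::string msg) {
    if (queue.size() >= kMaxQueue) return false;  // queue bomb: caller closes socket
    queue.push_back(std::move(msg));
    return true;
}
```

Dropping the slowest client is deliberate: the alternative is unbounded memory growth that eventually takes down every session.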

Benchmarks (illustrative)

Latency and memory scale with concurrent users, strand contention, and message rate—profile on your hardware.


Production

Logging, connection caps, message size limits, idle timeouts, TLS for internet-facing deployments, graceful shutdown on signals.


  • Event loop #29-2
  • Protocol #30-3
  • REST server #31-2

Next: REST API server #31-2

Previous: Protocol #30-3