Boost.Asio Introduction: io_context, async_read, and Non-Blocking I/O
Key points of this post
From blocking pain to async_accept chains: post vs dispatch, timers, client resolve/connect/read, session lifetime with shared_from_this, work_guard, and production timeouts and connection limits.
Introduction: why async I/O?
Blocking read ties up a thread per connection—thousands of connections ⇒ thousands of threads. Asio registers async_* operations; io_context::run() dispatches completions so few threads handle many connections.
Prerequisite: socket basics (#28).
Requirements: Boost.Asio or standalone Asio, C++14+.
Blocking vs async (diagrams in original)
Blocking: thread waits in read_some. Async: thread runs handlers when data is ready; I/O wait does not consume thread stack.
io_context, post, dispatch, run
- post: queue a handler.
- dispatch: run immediately if already inside run(); otherwise queue.
- run: block until no pending work remains.
- poll / poll_one: make non-blocking progress.
- restart: after run() returns, reset before calling run() again.
Async timer
steady_timer::async_wait—chain re-arm in the handler for periodic ticks. Combine with socket.cancel() for read timeouts.
async_read / async_write
- async_read: fills the buffer completely (or EOF).
- async_read_some: partial read; typical for protocols.
- async_read_until: reads up to a delimiter (e.g. newline) into a streambuf.
Keep shared_ptr to std::array, std::string, or streambuf across the async call.
Async client flow
async_resolve → async_connect → async_write → async_read—chain lambdas or bind to session methods.
Async server
async_accept → construct Session with shared_ptr + enable_shared_from_this → async_read_until loop → async_write echo → re-issue read. Always schedule the next async_accept in the handler.
Error handling
Check error_code; treat asio::error::eof as normal close; operation_aborted after cancel; log connection_reset, broken_pipe.
Common mistakes
- Dangling this in handlers: use shared_from_this.
- run() exits after one connection: re-issue the accept, or hold a work_guard.
- Stack buffer in an async call: use heap storage owned by a shared_ptr.
- Calling run() twice without restart(): no work is processed the second time.
- Same socket from multiple threads without a strand: data races.
Performance (illustrative)
Async often achieves much higher req/s than one-thread-per-connection at large connection counts—always measure on your workload.
Related posts
- Event loop #29-2
- Asio deadlock debugging
- Multithread server #29-3
Next: Event loop #29-2
Previous: Network errors #28-3