Actix Web Complete Guide — High-Performance Actor-Based Web Framework for Rust
Key takeaways
An advanced guide that covers Actix Web’s actor model, routing (App and Scope), Extractor and Responder, middleware chains, WebSocket, SSE, database integration, and microservice structure in one document.
What this article covers
Actix Web is a widely used high-performance asynchronous web framework in the Rust ecosystem. The name “Actix” suggests an actor model, but for typical REST and JSON APIs, async/await handlers and type-based extraction (Extractors) are central. For long-lived, stateful work such as WebSocket, you can opt into an actor style via crates like actix-web-actors; keeping that distinction in mind makes production designs simpler.
This article ties together splitting the routing tree with App and Scope, clarifying request and response boundaries with Extractor and Responder, middleware chain execution order, WebSocket and SSE, sharing a DB pool (sqlx), and bounded-context separation in a microservice style. It assumes readers comfortable with ownership and async Rust who can reason about production observability and failure modes.
1. Where Actix Web fits
1.1 Why Actix Web
The Rust web ecosystem includes Axum, Warp, Rocket, and others. Actix Web remains a strong choice for throughput, ecosystem maturity, documentation, and breadth of examples. When you need middleware composition, scope-based routing, and WebSocket and HTTP in one application, it offers a lot of design room.
1.2 What “actor” means here
Historically, the Actix runtime leaned on the actor model, and that remains a good fit where message loops and encapsulated state feel natural—e.g. WebSocket sessions. For most JSON APIs, async fn handlers and Extractors are enough. In other words, it is not “every request goes to an actor”; use the actor abstraction only where it helps, which matches modern codebases.
2. Minimal run: HttpServer and App
2.1 Entry point
HttpServer invokes the application factory once per worker thread. Because App::new() is built inside that closure, state shared between handlers must be prepared outside the closure and passed in via web::Data; do not treat the closure body as one-time global initialization.
// src/main.rs — conceptual example (actix-web 4.x)
use actix_web::{web, App, HttpResponse, HttpServer, Responder};

async fn health() -> impl Responder {
    HttpResponse::Ok().body("ok")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().route("/health", web::get().to(health))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
Why write it this way. The closure passed to HttpServer::new may run once per worker. Assuming a single app instance and putting shared mutable state inside the closure can lead to concurrency bugs. Stateless routes like health checks are safe with the pattern above.
When it breaks down. For resources you want to build once per process—DB connection pools, config caches, metric registries—it is better to build them before the server starts and share them explicitly, e.g. web::Data::new(pool). Note that Data already wraps its value in an Arc, so an extra Arc layer is redundant. Section 8 covers this with App::app_data.
3. App and Scope: routing tree design
3.1 resource and route
A resource groups HTTP method handlers. A scope groups a common path prefix, middleware, and data injection. When you express microservice boundaries as URL prefixes, scope is especially useful.
use actix_web::{web, App, HttpResponse, Responder};

async fn list_users() -> impl Responder {
    HttpResponse::Ok().body("[]")
}

async fn get_user(path: web::Path<u64>) -> impl Responder {
    HttpResponse::Ok().body(format!("user {}", path.into_inner()))
}

fn user_routes() -> actix_web::Scope {
    web::scope("/users")
        .route("", web::get().to(list_users))
        .route("/{id}", web::get().to(get_user))
}

// App::new().service(user_routes())
How it works. Under web::scope("/users"), "" maps to /users, and /{id} resolves to paths like /users/42. Path<u64> parses a single segment. For composite keys, use a tuple or a struct.
Practical tip. To version APIs under /api/v1, wrap with web::scope("/api/v1").service(...) so breaking changes per version stay inside one scope.
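A minimal sketch of that versioning pattern (the route and handler names here are illustrative, not from the article's earlier examples):

```rust
use actix_web::{web, HttpResponse};

// Hypothetical v1 routes; breaking changes for a v2 would live in a sibling scope
// so each version's surface stays contained.
fn v1_routes() -> actix_web::Scope {
    web::scope("/api/v1")
        .route("/users", web::get().to(|| async { HttpResponse::Ok().body("[]") }))
}

// App::new().service(v1_routes())
```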
3.2 Nested Scope and data visibility
You can attach different app_data per scope, and data registered on the parent App remains visible to handlers in child scopes unless a scope shadows it with its own app_data. Documenting team rules like “each scope is a permission boundary” reduces onboarding cost.
3.3 guard and method/host conditions
To attach header, host, or method conditions to the same path, use guard. Typical uses: split internal vs public APIs under the same prefix by host, or branch JSON vs HTML by Accept.
use actix_web::{guard, web, App, HttpResponse};

// Conceptual example
// App::new().service(
//     web::resource("/api/data")
//         .guard(guard::Get())
//         .to(|| async { HttpResponse::Ok().body("json") })
// )
Caveat: stacking many guards makes the routing table hard to reason about. Prefer splitting scopes and keeping guards minimal for maintainability.
3.4 default_service and 404 policy
To return a consistent JSON 404 for unregistered paths, use default_service. When serving API and static frontend from one server, a common order is register API scopes first, then an SPA fallback last.
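A sketch of that registration order, with a hand-written JSON body to keep the example dependency-light (a real project would serialize a shared error type):

```rust
use actix_web::{web, App, HttpResponse, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // ... register API scopes (and any static-file services) first ...
            .default_service(web::route().to(|| async {
                // Every unmatched path gets the same machine-readable 404.
                HttpResponse::NotFound()
                    .content_type("application/json")
                    .body(r#"{"code":"NOT_FOUND","message":"no such route"}"#)
            }))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```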
4. Extractor and Responder: fix boundaries with types
4.1 Common Extractors
- Path<T>: path variable parsing
- Query<T>: query string deserialization
- Json<T>: JSON body (400 on deserialization failure)
- Form<T>: form data
- Header<T> / HttpRequest: headers and raw request access
use actix_web::{web, HttpResponse, Responder};
use serde::Deserialize;

#[derive(Deserialize)]
struct PageQuery {
    page: Option<u32>,
    size: Option<u32>,
}

async fn search(q: web::Query<PageQuery>) -> impl Responder {
    let page = q.page.unwrap_or(1);
    HttpResponse::Ok().body(format!("page={}", page))
}
Why it matters. Extractors show what a handler requires from the request in its signature. The compiler enforces the contract more reliably than documentation alone.
4.2 FromRequest: custom Extractors
Cross-cutting concerns—auth tokens, tenant IDs, internal request IDs—stay cleaner when encapsulated with FromRequest. On failure you can return a response immediately, which clarifies division of labor vs middleware.
Conceptually it looks like this. (Project-specific error types and trait impls follow team standards.)
// Conceptual sketch: adjust the actual code to your project's Error/Response types
use actix_web::FromRequest;
use std::future::{ready, Ready};

struct AuthUser {
    sub: String,
}

impl FromRequest for AuthUser {
    type Error = actix_web::Error;
    type Future = Ready<Result<AuthUser, Self::Error>>;

    fn from_request(req: &actix_web::HttpRequest, _: &mut actix_web::dev::Payload) -> Self::Future {
        // e.g. validate a header, pull a verifier out of app_data, etc.
        let _token = req.headers().get("authorization");
        ready(Ok(AuthUser {
            sub: "user-1".into(),
        }))
    }
}
Caution. Heavy I/O inside an Extractor burdens the request thread/runtime. Prefer cached key validation or short async work; for long DB reads, move to a service layer and centralize timeout/retry policy.
4.3 Responder: response consistency
Implement impl Responder or use builders like HttpResponse::Ok().json(...). Teams often add a mapping layer that turns Error into Response for uniform error mapping.
use actix_web::{HttpResponse, Responder};
use serde::Serialize;

#[derive(Serialize)]
struct ApiError {
    code: &'static str,
    message: String,
}

fn bad_request(msg: &str) -> HttpResponse {
    HttpResponse::BadRequest().json(ApiError {
        code: "BAD_REQUEST",
        message: msg.to_string(),
    })
}

// Use together with an early-return pattern in handlers
4.4 Payload, binary bodies, uploads
Large files or streaming uploads may require consuming web::Payload directly. In review, always verify memory limits and temporary disk policy. With Multipart, missing per-field size limits can leave you open to denial-of-service.
4.5 Either and branched Extractors
If one endpoint must accept two content types (e.g. JSON or form), splitting handlers is clearest. If you must handle it in one function, use Either or a wrapper type and map deserialization failures to domain errors. A single giant handler often only raises test difficulty.
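When a single handler really must take both, actix-web's Either extractor tries the left extractor first and falls back to the right. A sketch with a hypothetical CreateUser payload:

```rust
use actix_web::{web, Either, HttpResponse, Responder};
use serde::Deserialize;

#[derive(Deserialize)]
struct CreateUser {
    name: String,
}

// Accepts the same payload as application/json or as a URL-encoded form.
async fn create_user(
    body: Either<web::Json<CreateUser>, web::Form<CreateUser>>,
) -> impl Responder {
    let user = match body {
        Either::Left(json) => json.into_inner(),
        Either::Right(form) => form.into_inner(),
    };
    HttpResponse::Created().body(user.name)
}
```

The match collapses both content types into one domain value early, so the rest of the handler stays format-agnostic.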
5. Middleware chain: order is meaning
5.1 Request and response directions with wrap
Middleware layers like an onion. Requests go outside→inside; responses return inside→outside. So logging is usually outer to measure full duration; compression often sits outside after the body is built.
use actix_web::middleware::Logger;
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .wrap(Logger::default())
            // .wrap(Compress::default()) // if needed
            // .wrap_fn(|req, srv| { ... }) // custom
            .route("/ping", actix_web::web::get().to(|| async { "pong" }))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
A common pitfall is an observability gap: if auth middleware sits outside the logger, requests it rejects never reach the logger, so 401s go unlogged. Note that Actix Web executes middleware in reverse registration order, so the last .wrap() call becomes the outermost layer. Keep logging and tracing spans as outer as possible; keep business rules in inner handlers and Extractors for easier debugging.
5.2 Cross-cutting policy with wrap_fn
Simple header injection, request IDs, or feature flags can be prototyped with wrap_fn. When complexity grows, promote to a dedicated middleware type for testability.
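A wrap_fn sketch of the request-ID idea, following the pattern in the actix-web docs (the header value here is a fixed placeholder; real code would generate a UUID per request):

```rust
use actix_web::dev::Service as _; // brings `call` into scope for `srv`
use actix_web::http::header::{HeaderName, HeaderValue};
use actix_web::{web, App, HttpResponse, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .wrap_fn(|req, srv| {
                let fut = srv.call(req);
                async move {
                    let mut res = fut.await?;
                    // Placeholder ID for illustration only.
                    res.headers_mut().insert(
                        HeaderName::from_static("x-request-id"),
                        HeaderValue::from_static("req-0001"),
                    );
                    Ok(res)
                }
            })
            .route("/ping", web::get().to(|| async { HttpResponse::Ok().body("pong") }))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```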
5.3 CORS and NormalizePath
For public APIs called from browsers, restrict origins, methods, and headers with actix_cors::Cors. Internal microservice gRPC/HTTP often has no browser in the path, so CORS may be irrelevant. Trailing-slash 404s can be mitigated with NormalizePath, but the real fix is agreeing REST URL rules (slashes on resources) as a team.
5.4 Error handlers: converge on consistent HTTP responses
Teams differ: map actix_web::Error to custom error types, or apply a common response body in middleware. What matters is one rule for HTTP status and body shape. Clients must distinguish retryable 429/503 from 400 where retrying does not help.
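One way to encode that rule is a domain error type implementing ResponseError, so every handler converges on the same status mapping. A sketch with hypothetical variants (real code would serialize the body with serde instead of hand-formatting JSON):

```rust
use actix_web::{http::StatusCode, HttpResponse, ResponseError};
use std::fmt;

// Hypothetical domain error; variants map 1:1 to an HTTP status policy.
#[derive(Debug)]
enum ApiError {
    Validation(String), // 400: retrying the same request will not help
    RateLimited,        // 429: retryable after backoff
    UpstreamDown,       // 503: retryable
}

impl fmt::Display for ApiError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ApiError::Validation(msg) => write!(f, "validation failed: {}", msg),
            ApiError::RateLimited => write!(f, "rate limited"),
            ApiError::UpstreamDown => write!(f, "upstream unavailable"),
        }
    }
}

impl ResponseError for ApiError {
    fn status_code(&self) -> StatusCode {
        match self {
            ApiError::Validation(_) => StatusCode::BAD_REQUEST,
            ApiError::RateLimited => StatusCode::TOO_MANY_REQUESTS,
            ApiError::UpstreamDown => StatusCode::SERVICE_UNAVAILABLE,
        }
    }

    fn error_response(&self) -> HttpResponse {
        HttpResponse::build(self.status_code())
            .content_type("application/json")
            .body(format!(r#"{{"error":"{}"}}"#, self))
    }
}

// Handlers can then return Result<HttpResponse, ApiError> and let the framework map errors.
```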
6. WebSocket: the actix-web-actors view
HTTP is request–response; WebSocket is a long-lived connection. In the Actix ecosystem, actix-web-actors is a common choice. A session actor processing messages sequentially helps reduce race conditions.
# Cargo.toml (example)
# actix-web = "4"
# actix-web-actors = "4"

// Conceptual example: real projects add error handling, ping/pong, and backpressure policies
use actix::prelude::*;
use actix_web::{web, Error, HttpRequest, HttpResponse};
use actix_web_actors::ws;

struct ChatSession;

impl Actor for ChatSession {
    type Context = ws::WebsocketContext<Self>;
}

impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for ChatSession {
    fn handle(&mut self, msg: Result<ws::Message, ws::ProtocolError>, ctx: &mut Self::Context) {
        if let Ok(ws::Message::Text(t)) = msg {
            ctx.text(t);
        }
    }
}

async fn ws_handler(req: HttpRequest, stream: web::Payload) -> Result<HttpResponse, Error> {
    ws::start(ChatSession {}, &req, stream)
}

// App::new().route("/ws", web::get().to(ws_handler))
Operations checklist. Design upgrade timeouts, idle disconnects, max connections at the reverse proxy (Nginx, Cloudflare), and message size limits in the app together. WebSocket clients need reconnect with exponential backoff on failure.
6.1 Broadcast and actor addresses
For chat or notifications where many sessions in one process must exchange messages, a common pattern registers session actors with a hub actor like Addr<HubActor>. Under horizontal scaling (many instances), you need cross-process broadcast and an external bus—Redis Pub/Sub, NATS, Kafka, etc. Keep Actix actors focused on per-instance session state and delegate cross-instance fan-out to the message bus for maintainability.
7. SSE (Server-Sent Events): streaming responses
SSE is a server-to-client, one-way stream. In Actix Web you implement it with chunked transfer and text/event-stream. Periodic heartbeats are common so proxies do not drop idle connections.
Conceptually:
- Status: HttpResponse::build(StatusCode::OK)
- Headers: Content-Type: text/event-stream, Cache-Control: no-cache
- Body: a stream of event: ...\ndata: ...\n\n frames
Combine async streams or periodic timers and consider backpressure. Browsers auto-reconnect for SSE, but servers should document duplicate events or event ID strategy.
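A sketch of such an endpoint under the assumption that futures-util, bytes, and tokio are available (they come with typical actix-web setups); the 15-second heartbeat interval is illustrative:

```rust
use std::{convert::Infallible, time::Duration};

use actix_web::{HttpResponse, Responder};
use bytes::Bytes;
use futures_util::stream;

// Emits one heartbeat frame every 15 seconds; a real endpoint would also
// multiplex domain events into the same stream.
async fn sse() -> impl Responder {
    let heartbeats = stream::unfold(0u64, |n| async move {
        tokio::time::sleep(Duration::from_secs(15)).await;
        let frame = Bytes::from(format!("event: ping\ndata: {}\n\n", n));
        Some((Ok::<Bytes, Infallible>(frame), n + 1))
    });

    HttpResponse::Ok()
        .content_type("text/event-stream")
        .insert_header(("Cache-Control", "no-cache"))
        .streaming(heartbeats)
}

// App::new().route("/events", actix_web::web::get().to(sse))
```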
7.1 Interaction with proxies and load balancers
Without proxy_buffering off and tuned proxy_read_timeout in Nginx, long-lived SSE connections may drop. Kubernetes Ingress and cloud LBs differ in idle timeout defaults, which often causes “works locally, drops in staging.” Set heartbeat interval shorter than infrastructure timeouts.
7.2 SSE vs WebSocket
- Bidirectional real-time (game input, frequent collaboration) → WebSocket candidate
- Server push only and simpler HTTP/2 and proxy story → SSE candidate
- Many legacy proxies in the path → test the operational behavior of both options against that infrastructure up front
8. Database integration: sqlx and web::Data
8.1 Connection pool as app data
// Conceptual example: adjust the connection string and error types to your environment
use actix_web::{web::Data, App, HttpServer};
use sqlx::postgres::PgPoolOptions;

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let pool = PgPoolOptions::new()
        .max_connections(10)
        .connect("postgres://user:pass@localhost/db")
        .await
        .expect("db");
    let pool = Data::new(pool);

    HttpServer::new(move || {
        App::new()
            .app_data(pool.clone())
            .route("/db-health", actix_web::web::get().to(db_health))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

async fn db_health(pool: Data<sqlx::PgPool>) -> impl actix_web::Responder {
    match sqlx::query_scalar::<_, i32>("SELECT 1")
        .fetch_one(pool.get_ref())
        .await
    {
        Ok(_) => actix_web::HttpResponse::Ok().body("db ok"),
        Err(_) => actix_web::HttpResponse::ServiceUnavailable().finish(),
    }
}
Why Data. Data<T> wires Arc-based sharing into the framework. Requiring Data<PgPool> in a handler signature makes DB dependency explicit for that route.
8.2 Transactions and request scope
To run multiple queries atomically in one HTTP request, open a request-scoped transaction and handle commit/rollback in one service function. Opening transactions all over handlers complicates nesting, leaks, and timeouts.
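A sketch of such a service function against a hypothetical accounts table, using sqlx 0.7-style executor calls (`&mut *tx`):

```rust
use sqlx::PgPool;

// One service function owns begin/commit for the whole request,
// so handlers never juggle transaction state themselves.
async fn transfer(pool: &PgPool, from: i64, to: i64, amount: i64) -> Result<(), sqlx::Error> {
    let mut tx = pool.begin().await?;

    sqlx::query("UPDATE accounts SET balance = balance - $1 WHERE id = $2")
        .bind(amount)
        .bind(from)
        .execute(&mut *tx)
        .await?;

    sqlx::query("UPDATE accounts SET balance = balance + $1 WHERE id = $2")
        .bind(amount)
        .bind(to)
        .execute(&mut *tx)
        .await?;

    // Dropping `tx` without committing rolls the transaction back,
    // so any `?` above leaves the database unchanged.
    tx.commit().await
}
```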
8.3 Diesel, SeaORM, and other options
- sqlx: SQL as strings with compile-time verification (including offline mode). Pairs well with migration ops for productivity.
- Diesel: Type-safe query builder, schema-centric; a common choice when you want queries checked against a PostgreSQL (or similar) schema at compile time.
- SeaORM: ActiveRecord style; weigh productivity vs learning curve.
Hooking into Actix Web is similar: put the pool in web::Data. The winner depends on your schema change process.
8.4 Migrations and deploy order
A common microservice failure is deploy vs DB migration ordering: new code sees old schema or the reverse. Without documenting expandable changes (add column, defaults) and contract-based compatibility (dual read paths), rollbacks become very hard.
9. Production microservices: structure and observability
9.1 Crates per bounded context
With a Cargo workspace (see rust-cargo-workspace-monorepo), splitting domain crates from API binaries makes it easy to align Actix Scope with directory structure 1:1.
- orders-api: /orders scope, order flows
- billing-api: /charges scope, payments
- Shared: auth, telemetry, error crates
9.2 Health, readiness, metrics
- Liveness: process is up (/health/live)
- Readiness: DB, queue, cache ready (/health/ready)
- Metrics: Prometheus endpoint or export via OpenTelemetry
For Kubernetes, probe paths must match deployment manifests.
9.3 Config and secrets
From a 12-factor view, config is environment variables; secrets live in Vault, Kubernetes Secrets, etc., and the app holds a validated snapshot at startup. Prefer loading a config struct once and sharing via web::Data over scattering std::env::var in Actix handlers for testability.
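A plain-std sketch of the “validated snapshot at startup” idea (the field names and defaults are illustrative; a real app would derive Deserialize and share the struct via web::Data):

```rust
#[derive(Clone, Debug)]
struct AppConfig {
    database_url: String,
    http_port: u16,
}

// Validate everything once; handlers then read an immutable snapshot.
fn parse_config(database_url: Option<String>, http_port: Option<String>) -> Result<AppConfig, String> {
    Ok(AppConfig {
        database_url: database_url.ok_or_else(|| "DATABASE_URL is required".to_string())?,
        http_port: http_port
            .and_then(|p| p.parse().ok())
            .unwrap_or(8080),
    })
}

// At startup, feed it from the environment exactly once:
// let cfg = parse_config(std::env::var("DATABASE_URL").ok(), std::env::var("HTTP_PORT").ok())?;
// Then share it: App::new().app_data(actix_web::web::Data::new(cfg.clone()))
```

Keeping the parsing pure (inputs as arguments, environment access at the edge) is what makes this testable without mutating process state.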
9.4 Failure propagation and timeouts
Microservices default to cascade failures. On the Actix side, spell out pool sizes, upstream HTTP client timeouts, and circuit breakers (external crates when needed). Balance “fail fast” vs “graceful degradation” with SLOs so design discussions become quantitative.
9.5 API gateway and BFF
It is common to put a gateway in front of several Actix services to centralize auth, rate limits, and routing. A BFF (Backend for Frontend) can separate mobile vs web response shaping, but watch for domain logic leaking into the BFF. Keep rules in domain services when possible; keep BFF close to aggregation and presentation.
9.6 Observability: logs, traces, metrics
With tracing and OpenTelemetry, ensure spans start outside middleware so they cover the full request. Put trace IDs in logs to link to distributed tracing. Ship at least RPS, latency histograms, error rate, pool exhaustion; alert only on metrics tied to user impact.
9.7 Handler tests with actix_web::test
In unit tests, test::init_service(App::new()...) then TestRequest::get() exercises routing, Extractors, and middleware together. Slower than pure functions, but closer to what you deploy. In CI, split suites considering parallelism and DB test container startup cost.
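A minimal version of that flow, assuming actix-web 4's built-in test utilities and the health handler from earlier sections:

```rust
#[cfg(test)]
mod tests {
    use actix_web::{test, web, App, HttpResponse};

    #[actix_web::test]
    async fn health_returns_200() {
        // Builds the real routing tree, so Extractors and middleware run too.
        let app = test::init_service(
            App::new().route("/health", web::get().to(|| async { HttpResponse::Ok().body("ok") })),
        )
        .await;

        let req = test::TestRequest::get().uri("/health").to_request();
        let res = test::call_service(&app, req).await;
        assert!(res.status().is_success());
    }
}
```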
9.8 HttpServer tuning overview
- workers: align with CPU cores; mixed sync blocking can make the threading model the bottleneck.
- backlog: connection backlog under spikes; tune together with kernel and LB settings.
- Graceful shutdown: for in-flight requests on deploy, define signal handling and drain policy.
10. Security and operations checklist
- TLS termination: usually at the reverse proxy; app may be plaintext on internal networks (policy-dependent).
- CORS: often only when browser clients exist. Server-to-server traffic does not use CORS.
- Request size limits: give uploads separate limits.
- PII in logs: masking rules so Authorization headers, emails, national IDs do not log raw.
11. Summary
Actix Web is attractive because it combines a high-performance HTTP stack with optional actor-style WebSocket in one framework. In practice, the essentials are matching URLs and team boundaries with App and Scope, typing contracts with Extractor and Responder, and ordering middleware for observability and security. Share the DB pool via web::Data; for microservices, document health, metrics, timeouts, and transaction boundaries to stay operable.
Further reading
- Pair with the Cargo workspace post (rust-cargo-workspace-monorepo) for a natural multi-service repo layout.
- For contrast, the Go Gin guide (golang-web-development-guide) helps you compare middleware, pools, and concurrency quickly.