Edge Computing & Serverless Guide | Cloudflare Workers, Vercel Edge, AWS Lambda
Key takeaways
Edge functions run your code in 300+ locations worldwide, cutting response times to 5-10ms versus 100-300ms from a central server. This guide covers Cloudflare Workers, Vercel Edge Functions, and AWS Lambda, and when each deployment model wins.
Edge vs Serverless vs Traditional
Traditional Server (e.g., EC2, VPS):
Location: 1-3 regions
Latency: 100-300ms from far users
Scaling: manual or auto-scaling groups
Cold start: 0 (always running)
Cost: pay for uptime
Serverless (Lambda, Cloud Functions):
Location: 1-3 regions
Latency: 50-300ms (+ cold start)
Scaling: automatic, per-request
Cold start: 100ms-3s
Cost: pay per invocation
Edge (Workers, Edge Functions):
Location: 100-300+ PoPs globally
Latency: 5-50ms (closest PoP)
Scaling: automatic
Cold start: ~0ms (V8 isolates)
Cost: pay per request
Constraint: no Node.js APIs, limited CPU
Cloudflare Workers
Workers run JavaScript/TypeScript at Cloudflare’s 300+ global edge locations.
# Install Wrangler CLI
npm install -D wrangler
# Create a new Worker
npx wrangler init my-worker
cd my-worker && npm install
Basic Worker
// src/index.ts
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const url = new URL(request.url);
// Route handling
if (url.pathname === '/') {
return new Response('Hello from the edge!', {
headers: { 'Content-Type': 'text/plain' },
});
}
if (url.pathname === '/api/time') {
return Response.json({
time: new Date().toISOString(),
region: request.cf?.colo, // Cloudflare data center code
country: request.cf?.country, // User's country
});
}
return new Response('Not Found', { status: 404 });
},
};
interface Env {
// KV namespace bindings from wrangler.toml
MY_KV: KVNamespace;
// Durable Object binding (used in the rate limiter example below)
RATE_LIMITER: DurableObjectNamespace;
// Secret environment variables
API_KEY: string;
}
KV Storage (Edge Key-Value Store)
// KV: eventually consistent, read-optimized, global
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
const key = url.searchParams.get('key');
if (request.method === 'GET' && key) {
// Read from KV (served from nearest PoP)
const value = await env.MY_KV.get(key);
if (!value) return new Response('Not found', { status: 404 });
return new Response(value);
}
if (request.method === 'PUT' && key) {
const body = await request.text();
// Write to KV (propagates globally within ~60s)
await env.MY_KV.put(key, body, {
expirationTtl: 3600, // Optional TTL in seconds
});
return new Response('OK');
}
return new Response('Method not allowed', { status: 405 });
},
};
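A common pattern on top of KV is read-through caching: check KV first, and only fall back to a slow origin on a miss. Below is a minimal sketch; `KVLike` mirrors just the subset of the `KVNamespace` interface used here, `InMemoryKV` is a stand-in so the pattern runs outside the Workers runtime, and `fetchFromOrigin` is a hypothetical slow data source.

```typescript
// Read-through caching over a KV-like interface.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// In-memory stand-in for KVNamespace, for running outside the Workers runtime.
class InMemoryKV implements KVLike {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key) ?? null; }
  async put(key: string, value: string, _opts?: { expirationTtl?: number }) {
    this.store.set(key, value);
  }
}

// Hypothetical slow origin (database or upstream API).
async function fetchFromOrigin(key: string): Promise<string> {
  return `value-for-${key}`;
}

async function getCached(
  kv: KVLike,
  key: string
): Promise<{ value: string; fromCache: boolean }> {
  const cached = await kv.get(key);
  if (cached !== null) return { value: cached, fromCache: true };
  const fresh = await fetchFromOrigin(key);
  await kv.put(key, fresh, { expirationTtl: 3600 }); // cache for an hour
  return { value: fresh, fromCache: false };
}

async function main() {
  const kv = new InMemoryKV();
  const first = await getCached(kv, 'user:1');
  console.log(first.value, first.fromCache);  // value-for-user:1 false (origin hit)
  const second = await getCached(kv, 'user:1');
  console.log(second.value, second.fromCache); // value-for-user:1 true (cache hit)
}
main();
```

Because KV is eventually consistent, this pattern suits data that tolerates ~60s of staleness, such as feature flags or rendered fragments, not counters or sessions.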
# wrangler.toml
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2024-01-01"
[[kv_namespaces]]
binding = "MY_KV"
id = "your-kv-namespace-id"
[vars]
ENVIRONMENT = "production"
Durable Objects (Strongly Consistent State)
// Durable Object: single-instance, strongly consistent (not KV)
// Use for: real-time collaboration, rate limiting, WebSocket rooms
export class RateLimiter {
private state: DurableObjectState;
// In-memory counters are fast, but reset if the object is evicted;
// persist via this.state.storage if counts must survive restarts.
private requests: number = 0;
private windowStart: number = Date.now();
constructor(state: DurableObjectState) {
this.state = state;
}
async fetch(request: Request): Promise<Response> {
const now = Date.now();
// Reset window every minute
if (now - this.windowStart > 60_000) {
this.requests = 0;
this.windowStart = now;
}
this.requests++;
if (this.requests > 100) {
return new Response('Rate limit exceeded', { status: 429 });
}
return Response.json({ requests: this.requests, limit: 100 });
}
}
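The fixed-window counting inside that Durable Object can be isolated as a pure function, which makes the reset-and-count behavior easy to test without the Workers runtime. This is a sketch of the same logic; the `hit` helper and its parameter defaults are illustrative, not part of any API.

```typescript
// The fixed-window rate-limit logic from the Durable Object above,
// extracted as a pure function for local testing.
interface Window { requests: number; windowStart: number }

function hit(
  w: Window,
  now: number,
  limit = 100,
  windowMs = 60_000
): { allowed: boolean; w: Window } {
  // Reset the window once it has elapsed
  if (now - w.windowStart > windowMs) {
    w = { requests: 0, windowStart: now };
  }
  const next = { requests: w.requests + 1, windowStart: w.windowStart };
  return { allowed: next.requests <= limit, w: next };
}

let w: Window = { requests: 0, windowStart: 0 };
let r = hit(w, 0, 2);
console.log(r.allowed); // true (1st request)
r = hit(r.w, 1, 2);
console.log(r.allowed); // true (2nd request)
r = hit(r.w, 2, 2);
console.log(r.allowed); // false (over the limit of 2)
r = hit(r.w, 70_000, 2);
console.log(r.allowed); // true (window has reset)
```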
// Worker uses the Durable Object
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const ip = request.headers.get('CF-Connecting-IP') || 'unknown';
const id = env.RATE_LIMITER.idFromName(ip);
const limiter = env.RATE_LIMITER.get(id);
return limiter.fetch(request);
},
};
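The `RATE_LIMITER` binding the Worker uses has to be declared in wrangler.toml, together with a migration that introduces the class. A sketch, with names matching the example above:

```toml
# wrangler.toml additions for the Durable Object example
[[durable_objects.bindings]]
name = "RATE_LIMITER"
class_name = "RateLimiter"

[[migrations]]
tag = "v1"
new_classes = ["RateLimiter"]
```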
Vercel Edge Functions
// app/api/geo/route.ts — Next.js Edge Route Handler
import { NextRequest } from 'next/server';
export const runtime = 'edge'; // Run at the edge
export async function GET(request: NextRequest) {
// Vercel populates geo data from request headers
// (note: Next.js 15 removed request.geo; on 15+ use geolocation() from @vercel/functions)
const country = request.geo?.country ?? 'US';
const city = request.geo?.city ?? 'Unknown';
return Response.json({ country, city, timestamp: Date.now() });
}
Edge Middleware (Next.js)
// middleware.ts — runs at the edge before every request
import { NextRequest, NextResponse } from 'next/server';
export function middleware(request: NextRequest) {
const { pathname, searchParams } = request.nextUrl;
// Auth check at the edge (fast — no origin roundtrip)
if (pathname.startsWith('/dashboard')) {
const token = request.cookies.get('session')?.value;
if (!token) {
return NextResponse.redirect(new URL('/login', request.url));
}
}
// Geo-based routing
const country = request.geo?.country;
if (pathname === '/' && country === 'JP') {
return NextResponse.rewrite(new URL('/ja', request.url));
}
// A/B testing (edge flag)
const bucket = request.cookies.get('ab-bucket')?.value ?? (Math.random() > 0.5 ? 'A' : 'B');
const response = NextResponse.next();
response.cookies.set('ab-bucket', bucket);
response.headers.set('X-AB-Bucket', bucket);
return response;
}
export const config = {
matcher: ['/((?!_next|api/public|favicon.ico).*)'],
};
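The `Math.random()` bucketing above relies on the cookie for stickiness; if the cookie is lost, the visitor may switch buckets. A common alternative is deterministic, hash-based bucketing on a stable visitor id, so assignment survives cookie loss. This sketch uses FNV-1a purely as a simple dependency-free hash; the `bucketFor` helper and the visitor-id scheme are illustrative assumptions, not part of Next.js.

```typescript
// Deterministic A/B bucketing: hash a stable visitor id instead of
// flipping a coin, so the bucket is reproducible across requests.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept in uint32 range
  }
  return h >>> 0;
}

function bucketFor(visitorId: string): 'A' | 'B' {
  return fnv1a(visitorId) % 2 === 0 ? 'A' : 'B';
}

// The same id always lands in the same bucket
console.log(bucketFor('user-123') === bucketFor('user-123')); // true
```

In the middleware, you would call `bucketFor()` with whatever stable id you have (for example a hashed IP or login id) before falling back to the random assignment.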
AWS Lambda
// handler.ts — Lambda function
import { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';
export async function handler(
event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> {
const { body } = event;
if (!body) {
return { statusCode: 400, body: JSON.stringify({ error: 'Body required' }) };
}
try {
const data = JSON.parse(body);
const result = await processData(data); // processData: your application logic (not shown)
return {
statusCode: 200,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(result),
};
} catch (error) {
console.error('Handler error:', error);
return { statusCode: 500, body: JSON.stringify({ error: 'Internal server error' }) };
}
}
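Since Lambda handlers are plain async functions, they can be unit-tested locally with a stubbed event before deploying. The sketch below trims the event and result types to just the fields used, and `processData` is a hypothetical stand-in for real business logic.

```typescript
// Local unit-test sketch for a Lambda-style handler.
// Types are simplified stand-ins for APIGatewayProxyEventV2/ResultV2.
interface StubEvent { body?: string }
interface StubResult { statusCode: number; body: string }

// Hypothetical business logic
async function processData(data: unknown): Promise<{ ok: boolean; echo: unknown }> {
  return { ok: true, echo: data };
}

async function handler(event: StubEvent): Promise<StubResult> {
  if (!event.body) {
    return { statusCode: 400, body: JSON.stringify({ error: 'Body required' }) };
  }
  try {
    const result = await processData(JSON.parse(event.body));
    return { statusCode: 200, body: JSON.stringify(result) };
  } catch {
    return { statusCode: 500, body: JSON.stringify({ error: 'Internal server error' }) };
  }
}

async function main() {
  console.log((await handler({})).statusCode);                  // 400
  console.log((await handler({ body: '{"x":1}' })).statusCode); // 200
  console.log((await handler({ body: 'not json' })).statusCode); // 500
}
main();
```

The same approach scales up to real tests by importing the actual handler and building full `APIGatewayProxyEventV2` fixtures.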
Reduce Lambda Cold Starts
// ✅ Initialize SDK clients outside the handler (reused across warm invocations)
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
const dynamodb = new DynamoDBClient({ region: 'us-east-1' }); // Created once
export async function handler(event: any) {
// dynamodb is reused on warm starts — no reconnection overhead
const result = await dynamodb.send(/* ... */);
return result;
}
// ✅ Keep bundle small — cold start time correlates with bundle size
// Use tree-shaking, avoid large dependencies
// Lambda: target < 10MB bundle, ideally < 1MB
// ✅ Provisioned Concurrency (AWS) — keep Lambdas pre-warmed
// Set in Lambda configuration: Provisioned concurrency = 5
// Keeps those instances always warm, eliminating cold starts for them
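The "initialize outside the handler" tip can be taken one step further with lazy memoization: the expensive client is built only when a code path actually needs it, and then reused for the rest of the container's lifetime. In this sketch, `ExpensiveClient` is a hypothetical stand-in for something like an SDK client whose construction is costly.

```typescript
// Lazy, memoized client initialization: built at most once per warm
// container, and only if a request actually uses it.
let constructions = 0;

class ExpensiveClient {
  constructor() {
    constructions++; // pretend this sets up connections, TLS, credentials, etc.
  }
  async query(q: string): Promise<string> {
    return `result:${q}`;
  }
}

let client: ExpensiveClient | undefined;
function getClient(): ExpensiveClient {
  client ??= new ExpensiveClient(); // memoized across invocations
  return client;
}

async function handler(): Promise<string> {
  return getClient().query('select 1');
}

async function main() {
  await handler();
  await handler();
  console.log(constructions); // 1: the client is reused across warm invocations
}
main();
```

The trade-off versus module-scope initialization: lazy construction shifts the cost from cold start to the first request that needs the client, which helps when only some routes use it.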
Choosing the Right Deployment Model
Use Edge when:
✅ Authentication/authorization checks
✅ Geo-based routing and personalization
✅ Rate limiting
✅ A/B testing
✅ Static responses with simple logic
✅ Caching and cache invalidation
❌ Complex database queries
❌ File I/O
❌ Node.js-specific libraries
Use Serverless (Lambda) when:
✅ Event-driven processing (S3 upload, SQS message)
✅ Scheduled jobs (cron)
✅ APIs with variable traffic (zero to burst)
✅ Backend for mobile apps
✅ Data transformation pipelines
❌ Long-running processes (>15 min on Lambda)
❌ Always-on services with steady traffic (per-invocation pricing costs more than a reserved server)
Use Traditional Server when:
✅ WebSocket connections
✅ Long-running processes
✅ Stateful services
✅ Complex computation
✅ When you need predictable cost at scale
Observability at the Edge
// Cloudflare Workers: use ctx.waitUntil for background tasks
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const start = Date.now();
const response = await handleRequest(request, env);
// Log after response is sent — doesn't block the response
ctx.waitUntil(
logRequest({
url: request.url,
status: response.status,
duration: Date.now() - start,
country: request.cf?.country,
})
);
return response;
},
};