Kubernetes in Practice: A Complete Guide to Container Orchestration

In a nutshell

A hands-on guide to orchestrating containers with Kubernetes: Pods, Services, Deployments, ConfigMaps, Ingress, Helm, monitoring, and autoscaling, all illustrated with practical examples.

Introduction

Kubernetes (K8s) is the de facto standard for container orchestration. Developed at Google and released as open source, Kubernetes automates the deployment, scaling, and management of containers.

Field notes: while migrating a large infrastructure to Kubernetes, autoscaling during traffic spikes cut our server costs by 30% and shortened deployments from 2 hours to 10 minutes. This article shares what we learned.

What this article covers:
- Kubernetes core concepts
- Pods, Services, and Deployments
- ConfigMaps and Secrets
- Ingress and load balancing
- Package management with Helm
- Monitoring and logging
- Deployment strategies in practice
1. Kubernetes Fundamentals

Architecture
```mermaid
flowchart TB
    subgraph Master["Control Plane"]
        API[API Server]
        Scheduler[Scheduler]
        Controller[Controller Manager]
        etcd[(etcd)]
    end
    subgraph Worker1["Worker Node 1"]
        Kubelet1[Kubelet]
        Proxy1[kube-proxy]
        Pod1[Pod]
        Pod2[Pod]
    end
    subgraph Worker2["Worker Node 2"]
        Kubelet2[Kubelet]
        Proxy2[kube-proxy]
        Pod3[Pod]
    end
    API --> Kubelet1
    API --> Kubelet2
    API --> etcd
```
Key Concepts

| Concept | Description |
|---|---|
| Cluster | A set of nodes |
| Node | A physical or virtual server |
| Pod | The smallest deployable unit of containers |
| Service | A stable network endpoint for a group of Pods |
| Deployment | Declarative deployment of Pods |
| Namespace | Resource isolation |
2. Cluster Setup

Minikube (local)

```bash
# Install (macOS)
brew install minikube

# Install (Linux)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start a cluster
minikube start

# Check status
kubectl cluster-info
kubectl get nodes
```
Installing kubectl

```bash
# macOS
brew install kubectl

# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install kubectl /usr/local/bin/

# Check the version
kubectl version --client
```
Basic Commands

```bash
# List resources
kubectl get pods
kubectl get services
kubectl get deployments
kubectl get all

# Show details
kubectl describe pod <pod-name>

# View logs
kubectl logs <pod-name>
kubectl logs -f <pod-name>  # stream

# Open a shell in a container
kubectl exec -it <pod-name> -- /bin/bash

# Delete resources
kubectl delete pod <pod-name>
kubectl delete -f deployment.yaml
```
3. Pod

Basic Pod

```yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
```

```bash
# Create
kubectl apply -f pod.yaml

# Inspect
kubectl get pods
kubectl describe pod nginx-pod

# Delete
kubectl delete pod nginx-pod
```
Multi-Container Pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: myapp:1.0
      ports:
        - containerPort: 8080
    - name: sidecar
      image: logger:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log
  volumes:
    - name: logs
      emptyDir: {}
```
4. Service

ClusterIP (default)

```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```
NodePort

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080  # allowed range: 30000-32767
```
LoadBalancer

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```
5. Deployment

Basic Deployment

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
```

```bash
# Deploy
kubectl apply -f deployment.yaml

# Check status
kubectl get deployments
kubectl get pods

# Scale
kubectl scale deployment nginx-deployment --replicas=5

# Rolling update
kubectl set image deployment/nginx-deployment nginx=nginx:1.26

# Roll back
kubectl rollout undo deployment/nginx-deployment

# History
kubectl rollout history deployment/nginx-deployment
```
Rolling Update Strategy

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2         # extra Pods that may be created at once
      maxUnavailable: 1   # Pods that may be unavailable at once
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: myapp:2.0
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
```
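As a quick sanity check on those two knobs: with `replicas: 10`, `maxSurge: 2`, and `maxUnavailable: 1`, a rollout never drops below 9 available Pods and never runs more than 12 in total. Plain shell arithmetic, no cluster needed:

```shell
# Pod-count bounds during a rolling update
replicas=10
max_surge=2
max_unavailable=1

upper=$((replicas + max_surge))        # most Pods that may exist at once
lower=$((replicas - max_unavailable))  # fewest Pods that must stay available

echo "between $lower and $upper Pods during the rollout"
```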
6. ConfigMap & Secret

ConfigMap

```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgres://db:5432/mydb"
  log_level: "info"
  config.json: |
    {
      "feature_flags": {
        "new_ui": true
      }
    }
```
```yaml
# Using it in a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    spec:
      containers:
        - name: app
          image: myapp:1.0
          env:
            - name: DATABASE_URL
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: database_url
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: log_level
          volumeMounts:
            - name: config
              mountPath: /etc/config
      volumes:
        - name: config
          configMap:
            name: app-config
```
Secret

```yaml
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  # values must be base64-encoded
  db-password: cGFzc3dvcmQxMjM=
  api-key: YWJjZGVmZ2hpamtsbW5vcA==
```

```bash
# Create the Secret
kubectl create secret generic app-secret \
  --from-literal=db-password=password123 \
  --from-literal=api-key=abcdefg

# Inspect
kubectl get secrets
kubectl describe secret app-secret
```
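A Secret's `data` values are base64-encoded, not encrypted, so anyone with read access to the Secret can decode them. The values above can be reproduced locally:

```shell
# Encode a value for a Secret's data field (-n: suppress the trailing newline)
encoded=$(echo -n 'password123' | base64)
echo "$encoded"   # cGFzc3dvcmQxMjM=

# Decode a value read back from a Secret
decoded=$(echo "$encoded" | base64 -d)
echo "$decoded"   # password123
```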
```yaml
# Using it in a Deployment
spec:
  containers:
    - name: app
      image: myapp:1.0
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: db-password
```
7. Ingress

Installing an Ingress Controller

```bash
# NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.0/deploy/static/provider/cloud/deploy.yaml

# Verify
kubectl get pods -n ingress-nginx
```
Basic Ingress

```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 8080
```
TLS/HTTPS

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
    - hosts:
        - myapp.com
      secretName: tls-secret
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
```

```bash
# Create the TLS Secret
kubectl create secret tls tls-secret \
  --cert=tls.crt \
  --key=tls.key
```
8. Volumes

emptyDir

```yaml
# Ephemeral volume (removed when the Pod is deleted)
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: myapp:1.0
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir: {}
```
PersistentVolume & PersistentVolumeClaim

```yaml
# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/postgres
---
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

```yaml
# Using it in a Deployment
spec:
  containers:
    - name: postgres
      image: postgres:16
      volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: postgres-storage
      persistentVolumeClaim:
        claimName: postgres-pvc
```
9. Helm

Installing Helm

```bash
# macOS
brew install helm

# Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Check the version
helm version
```

Using Charts

```bash
# Add a repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search for charts
helm search repo postgres

# Install
helm install my-postgres bitnami/postgresql

# Verify
helm list
kubectl get pods

# Upgrade
helm upgrade my-postgres bitnami/postgresql --set auth.password=newpassword

# Uninstall
helm uninstall my-postgres
```
Creating a Custom Chart

```bash
# Scaffold a chart
helm create myapp
```

Directory layout:

```
myapp/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    └── ingress.yaml
```

```yaml
# Chart.yaml
apiVersion: v2
name: myapp
description: My Application
version: 1.0.0
appVersion: "1.0"
```
```yaml
# values.yaml
replicaCount: 3

image:
  repository: myapp
  tag: "1.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  hosts:
    - host: myapp.com
      paths:
        - path: /
          pathType: Prefix

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi
```
```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "myapp.name" . }}
  template:
    metadata:
      labels:
        app: {{ include "myapp.name" . }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 80
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```

```bash
# Install
helm install myapp ./myapp

# Install with overridden values
helm install myapp ./myapp --set replicaCount=5
```
10. Monitoring

Prometheus + Grafana

```bash
# Install with Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack

# Verify
kubectl get pods -n default
kubectl get svc -n default

# Access Grafana
kubectl port-forward svc/prometheus-grafana 3000:80
# http://localhost:3000 (admin/prom-operator)
```
Metrics Server

```bash
# Install
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Check resource usage
kubectl top nodes
kubectl top pods
```
Logging

```bash
# Logs from a single Pod
kubectl logs <pod-name>

# Logs from all Pods matching a label
kubectl logs -l app=nginx

# Logs from the previous container instance
kubectl logs <pod-name> --previous

# Stream logs
kubectl logs -f <pod-name>
```
11. Practical Examples

Example 1: Deploying a Web Application

```yaml
# frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: myapp/frontend:1.0
          ports:
            - containerPort: 3000
          env:
            - name: API_URL
              value: "http://backend-service:8080"
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 3000
```
```yaml
# backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: myapp/backend:1.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: url
            - name: REDIS_URL
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: redis_url
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
    - port: 8080
      targetPort: 8080
```
```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - myapp.com
      secretName: myapp-tls
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 8080
```

```bash
# Deploy
kubectl apply -f frontend-deployment.yaml
kubectl apply -f backend-deployment.yaml
kubectl apply -f ingress.yaml

# Verify
kubectl get all
kubectl get ingress
```
Example 2: PostgreSQL StatefulSet

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None  # headless Service
  selector:
    app: postgres
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
            - name: POSTGRES_DB
              value: mydb
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgres-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```
Example 3: CronJob

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-job
spec:
  schedule: "0 2 * * *"  # every day at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              image: backup-tool:1.0
              command:
                - /bin/sh
                - -c
                - |
                  pg_dump $DATABASE_URL > /backup/db-$(date +%Y%m%d).sql
                  aws s3 cp /backup/db-$(date +%Y%m%d).sql s3://backups/
              env:
                - name: DATABASE_URL
                  valueFrom:
                    secretKeyRef:
                      name: db-secret
                      key: url
          restartPolicy: OnFailure
```
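In the schedule `"0 2 * * *"` the five fields are minute, hour, day of month, month, and day of week. The job names each dump by date; the filename pattern can be previewed locally (assuming GNU `date`, as on most Linux images; the example date is arbitrary):

```shell
# Reproduce the backup filename for a fixed example date
backup_name="db-$(date -u -d '2024-01-15' +%Y%m%d).sql"
echo "$backup_name"   # db-20240115.sql
```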
Autoscaling

Horizontal Pod Autoscaler (HPA)

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```

```bash
# Create an HPA imperatively
kubectl autoscale deployment app-deployment \
  --cpu-percent=70 \
  --min=2 \
  --max=10

# Verify
kubectl get hpa
```
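The HPA controller picks the replica count with roughly `desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization)`. The numbers below are made up for illustration: 4 replicas averaging 90% CPU against a 70% target scale out to 6:

```shell
# ceil(current * utilization / target) via awk
desired=$(awk 'BEGIN {
  current = 4; utilization = 90; target = 70
  d = current * utilization / target   # 5.14...
  if (d > int(d)) d = int(d) + 1      # round up
  print d
}')
echo "$desired"   # 6
```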
Vertical Pod Autoscaler (VPA)

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment
  updatePolicy:
    updateMode: "Auto"
```
Health Checks

Liveness Probe

```yaml
spec:
  containers:
    - name: app
      image: myapp:1.0
      livenessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 10
        timeoutSeconds: 5
        failureThreshold: 3
```

Readiness Probe

```yaml
spec:
  containers:
    - name: app
      image: myapp:1.0
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5
```

Startup Probe

```yaml
spec:
  containers:
    - name: app
      image: myapp:1.0
      startupProbe:
        httpGet:
          path: /startup
          port: 8080
        failureThreshold: 30
        periodSeconds: 10
```
Namespace

Creating and Using Namespaces

```bash
# Create namespaces
kubectl create namespace dev
kubectl create namespace prod

# Deploy a resource into a specific namespace
kubectl apply -f deployment.yaml -n dev

# Change the default namespace
kubectl config set-context --current --namespace=dev

# Verify
kubectl get pods -n dev
kubectl get all --all-namespaces
```

```yaml
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    environment: development
```
Deployment Strategies in Practice

Blue-Green Deployment

```yaml
# Blue (current version)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
        - name: app
          image: myapp:1.0
---
# Green (new version)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
        - name: app
          image: myapp:2.0
---
# Service (switch traffic via the selector)
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: myapp
    version: blue  # change blue -> green to cut over
  ports:
    - port: 80
      targetPort: 8080
```

```bash
# Cut over
kubectl patch service app-service -p '{"spec":{"selector":{"version":"green"}}}'

# Roll back
kubectl patch service app-service -p '{"spec":{"selector":{"version":"blue"}}}'
```
Canary Deployment

```yaml
# Existing version (90%)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
        - name: app
          image: myapp:1.0
---
# New version (10%)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
        - name: app
          image: myapp:2.0
---
# Service (covers both versions)
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: myapp  # no version selector
  ports:
    - port: 80
      targetPort: 8080
```
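Because both Deployments sit behind one Service, kube-proxy spreads requests roughly evenly across all ready Pods, so the canary's traffic share is approximately its fraction of total replicas:

```shell
# Approximate canary traffic share with 9 stable + 1 canary replicas
stable=9
canary=1
share=$(( 100 * canary / (stable + canary) ))
echo "${share}% of traffic reaches the canary"   # 10% of traffic reaches the canary
```

To shift more traffic to the canary, scale the two Deployments rather than editing the Service.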
Security

RBAC (Role-Based Access Control)

```yaml
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
---
# Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
subjects:
  - kind: ServiceAccount
    name: app-sa
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io
```
NetworkPolicy

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
```
Best Practices

1. Set Resource Limits

```yaml
# ✅ Always set both requests and limits
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
```
2. Health Checks

```yaml
# ✅ Configure both liveness and readiness probes
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```
3. Labels and Annotations

```yaml
# ✅ Use clear, descriptive labels
metadata:
  labels:
    app: myapp
    version: "1.0"
    environment: production
    team: backend
  annotations:
    description: "Main application backend"
    contact: "[email protected]"
```
4. Separate Namespaces

```bash
# ✅ One namespace per environment
kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace prod
```
Troubleshooting

When a Pod Won't Start

```bash
# 1. Check Pod status
kubectl get pods

# 2. Get details
kubectl describe pod <pod-name>

# 3. Check logs
kubectl logs <pod-name>

# 4. Check events
kubectl get events --sort-by=.metadata.creationTimestamp
```
Common Issues

- ImagePullBackOff — the image cannot be pulled. Check the image name, tag, and registry credentials.
- CrashLoopBackOff — the container keeps exiting. Check the logs, environment variables, and health checks.
- Pending — insufficient cluster resources. Run `kubectl describe pod` to see the reason.
One-line summary: Kubernetes is the standard for container orchestration; once you understand Pods, Services, Deployments, and Helm, you can manage large container applications efficiently.