Kubernetes Practical Guide | Pods, Deployments, Services, Ingress & Helm

Key Takeaways

Kubernetes orchestrates containers at scale — handling deployment, scaling, networking, and self-healing. This guide covers the core objects you'll work with every day, from Pods to Ingress, plus Helm and real troubleshooting commands.

How Kubernetes Works

Kubernetes (K8s) is a container orchestration platform. You describe your desired state in YAML manifests; Kubernetes continuously reconciles actual state with desired state.

You:        kubectl apply -f deployment.yaml  (desired state)
Kubernetes: "I see 0 pods, you want 3. Creating 3 pods."
            → Node failure: "Pod died. Creating replacement."
            → CPU spike: "HPA says scale to 5. Creating 2 more pods."

Architecture:

Control Plane                    Worker Nodes
─────────────────────────        ──────────────────────────
API Server  ←── kubectl          Node 1: Pod A, Pod B
etcd (state store)               Node 2: Pod C, Pod D
Scheduler                        Node 3: Pod E
Controller Manager

Core Objects

Object            Purpose
────────────────  ─────────────────────────────────────────────────────
Pod               Smallest deployable unit — one or more containers
Deployment        Manages Pod replicas, rolling updates, rollbacks
Service           Stable network endpoint for a set of Pods
ConfigMap         Non-sensitive configuration data
Secret            Sensitive data (passwords, tokens, TLS certs)
Ingress           HTTP/HTTPS routing from outside the cluster
PersistentVolume  Persistent storage that outlives Pods
HPA               Horizontal Pod Autoscaler — scale based on CPU/memory

kubectl — Essential Commands

# Context and cluster
kubectl config get-contexts                   # List clusters
kubectl config use-context my-cluster         # Switch cluster
kubectl cluster-info

# Resource inspection
kubectl get pods                              # List pods (default namespace)
kubectl get pods -n kube-system               # Specific namespace
kubectl get pods -A                           # All namespaces
kubectl get pods -o wide                      # Show node, IP
kubectl get all                               # All resources in namespace

kubectl describe pod my-pod                   # Detailed pod info + events
kubectl describe deployment my-app

# Logs
kubectl logs my-pod                           # Pod logs
kubectl logs my-pod -c my-container           # Specific container
kubectl logs my-pod --previous                # Previous container (after crash)
kubectl logs -f my-pod                        # Follow logs
kubectl logs -l app=my-app --all-containers   # All pods matching label

# Exec into a pod
kubectl exec -it my-pod -- /bin/bash
kubectl exec -it my-pod -c my-container -- sh

# Apply and delete
kubectl apply -f deployment.yaml              # Create/update
kubectl apply -f ./k8s/                       # Apply all files in directory
kubectl delete -f deployment.yaml
kubectl delete pod my-pod
kubectl delete pod my-pod --force --grace-period=0  # Force delete (stuck pods)

# Port forwarding (for testing)
kubectl port-forward pod/my-pod 8080:8080
kubectl port-forward svc/my-service 8080:80

# Copy files
kubectl cp my-pod:/app/logs/app.log ./app.log

Namespaces

Namespaces partition cluster resources — use them to separate environments or teams.

kubectl create namespace staging
kubectl get namespaces

# Set default namespace (saves typing -n flag)
kubectl config set-context --current --namespace=production

Pods

A Pod wraps one or more containers that share network and storage.

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
    version: v1
spec:
  containers:
    - name: app
      image: my-app:1.0.0
      ports:
        - containerPort: 8080
      env:
        - name: PORT
          value: "8080"
      resources:
        requests:
          memory: "64Mi"
          cpu: "100m"         # 100 millicores = 0.1 CPU
        limits:
          memory: "128Mi"
          cpu: "500m"
      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20

Pods are ephemeral — if a bare Pod dies, nothing recreates it. Don't run Pods directly in production; use Deployments.

Deployments

Deployments manage Pod replicas with rolling updates and rollback support.

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # Max extra pods during update
      maxUnavailable: 0     # Never go below 3 pods during update
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.2.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: url
            - name: APP_ENV
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: environment
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "1000m"
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5

# Deployment operations
kubectl rollout status deployment/my-app     # Watch rollout progress
kubectl rollout history deployment/my-app    # View revision history
kubectl rollout undo deployment/my-app       # Rollback to previous version
kubectl rollout undo deployment/my-app --to-revision=2  # Rollback to specific revision

# Scale
kubectl scale deployment my-app --replicas=5

# Update image
kubectl set image deployment/my-app app=my-app:1.3.0
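The maxSurge/maxUnavailable settings in the strategy above bound how far a rollout can drift from the desired replica count. A minimal sketch of the arithmetic, using the values from the manifest (replicas=3, maxSurge=1, maxUnavailable=0):

```shell
# Bounds on pod count during a rolling update:
#   peak pods      = replicas + maxSurge
#   min ready pods = replicas - maxUnavailable
replicas=3; max_surge=1; max_unavailable=0
echo "peak pods during rollout: $(( replicas + max_surge ))"        # 4
echo "minimum ready pods:       $(( replicas - max_unavailable ))"  # 3
```

With maxUnavailable=0, Kubernetes only terminates an old pod once a new one passes its readiness probe, so capacity never drops below 3.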

Services

Services provide a stable network endpoint for a set of Pods (selected by labels).

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app         # Matches Pods with this label
  ports:
    - port: 80          # Service port
      targetPort: 8080  # Pod port
  type: ClusterIP       # Internal only (default)

Service types:

# ClusterIP — internal access only (default)
type: ClusterIP

# NodePort — exposed on every node's IP at a static port (30000-32767)
type: NodePort
ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080     # Access: any-node-ip:30080

# LoadBalancer — creates cloud load balancer (AWS ELB, GCP LB)
type: LoadBalancer
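Regardless of type, every Service also gets an in-cluster DNS name following the pattern service.namespace.svc.cluster.local. A small sketch of how a client pod would address the Service (the service and namespace names here are illustrative):

```shell
# In-cluster URL for a Service named "my-app" in namespace "production"
svc=my-app
ns=production
echo "http://${svc}.${ns}.svc.cluster.local"
# → http://my-app.production.svc.cluster.local
```

Within the same namespace, the short name (http://my-app) resolves as well.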

ConfigMaps and Secrets

# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  environment: "production"
  log_level: "info"
  max_connections: "100"

# secret.yaml — values are base64 encoded
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  url: cG9zdGdyZXNxbDovL3VzZXI6cGFzc3dAZGI6NTQzMi9teWRi  # base64
  password: c3VwZXJzZWNyZXQ=

# Create secrets from literal values (easier than base64)
kubectl create secret generic db-secret \
  --from-literal=url=postgresql://user:pass@db:5432/mydb \
  --from-literal=password=supersecret

# Create from file
kubectl create secret generic tls-secret \
  --from-file=tls.crt=./cert.pem \
  --from-file=tls.key=./key.pem
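Keep in mind that Secret values are base64-encoded, not encrypted; anyone who can read the Secret can decode them. The round trip, using the password from the manifest above (plain base64, no cluster required):

```shell
# What `kubectl create secret` does to each value under the hood:
printf 'supersecret' | base64            # → c3VwZXJzZWNyZXQ=

# Decoding a value read back from the cluster, e.g.
#   kubectl get secret db-secret -o jsonpath='{.data.password}' | base64 -d
printf 'c3VwZXJzZWNyZXQ=' | base64 -d    # → supersecret
```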

Use secrets as environment variables or volume mounts:

containers:
  - name: app
    # As environment variables
    env:
      - name: DB_URL
        valueFrom:
          secretKeyRef:
            name: db-secret
            key: url

    # As a volume (file)
    volumeMounts:
      - name: secrets-vol
        mountPath: /etc/secrets
        readOnly: true

volumes:
  - name: secrets-vol
    secret:
      secretName: db-secret

Ingress — HTTP Routing

Ingress routes external HTTP/HTTPS traffic to Services based on host and path rules.

# Install Nginx Ingress Controller (common choice)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"  # Auto SSL
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.com
      secretName: myapp-tls   # cert-manager creates this
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80

Resource Limits and HPA

Set resource requests (scheduling guarantee) and limits (hard cap), then let HPA scale automatically:

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # Scale when avg CPU > 70%
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80

kubectl get hpa                          # View current HPA status
kubectl describe hpa my-app-hpa          # Detailed metrics and events
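The HPA uses a simple proportional formula: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A worked example in shell arithmetic (the utilization numbers are illustrative):

```shell
# 3 replicas averaging 140% CPU utilization against a 70% target:
current=3; util=140; target=70
# ceil(a/b) via integer math: (a + b - 1) / b
echo $(( (current * util + target - 1) / target ))   # → 6
```

When multiple metrics are configured, as above, the HPA computes a desired count per metric and takes the largest.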

PersistentVolumes for Stateful Apps

# pvc.yaml — request storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce      # Single node read-write
  storageClassName: gp2  # AWS EBS SSD (cloud-specific)
  resources:
    requests:
      storage: 20Gi

# Use PVC in a StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgres-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2
        resources:
          requests:
            storage: 20Gi

Helm — Kubernetes Package Manager

Helm manages complex Kubernetes applications with templating and versioning.

# Install Helm
brew install helm

# Add a chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart
helm install my-postgres bitnami/postgresql \
  --set auth.postgresPassword=mypassword \
  --set primary.persistence.size=20Gi

# Install with custom values file
helm install my-app ./my-chart -f values-prod.yaml

# Upgrade
helm upgrade my-app ./my-chart -f values-prod.yaml

# Rollback
helm rollback my-app 1    # Rollback to revision 1

# List releases
helm list

# Uninstall
helm uninstall my-postgres

Create a Helm Chart

helm create my-app    # Scaffold chart structure

my-app/
├── Chart.yaml            # Chart metadata
├── values.yaml           # Default values
└── templates/
    ├── deployment.yaml   # Template using {{ .Values.* }}
    ├── service.yaml
    └── ingress.yaml

# values.yaml
replicaCount: 3
image:
  repository: my-app
  tag: "1.0.0"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
ingress:
  enabled: true
  host: myapp.com

# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: app
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
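You can render templates locally without installing anything, e.g. with helm template my-app ./my-app. With the default values.yaml above, the deployment renders roughly to the following (release name my-app assumed; toYaml emits map keys alphabetically):

```yaml
# Approximate rendered output of templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-app          # {{ .Release.Name }}-app
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: app
          image: my-app:1.0.0
          resources:
            limits:
              cpu: 500m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
```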

Troubleshooting

# Pod won't start — check events
kubectl describe pod <pod-name>
# Look for: "Back-off restarting failed container", image pull errors, OOMKilled

# Check pod logs (including previous container after crash)
kubectl logs <pod-name> --previous

# Pod stuck in Pending — check node resources
kubectl describe pod <pod-name>  # Look for "Insufficient cpu/memory"
kubectl describe nodes           # Check node capacity and allocations

# Check all events in namespace (sorted by time)
kubectl get events --sort-by='.metadata.creationTimestamp'

# Container OOMKilled — increase memory limit
kubectl describe pod <pod-name>
# Look for: "OOMKilled", "Last State: Terminated: Reason: OOMKilled"

# Service not reachable — verify selector matches pod labels
kubectl get svc my-service -o yaml    # Check selector
kubectl get pods --show-labels        # Check pod labels
kubectl describe endpoints my-service # See which pods are registered

# Debug network with a temporary pod
kubectl run debug --image=curlimages/curl -it --rm -- sh
# Then: curl http://my-service/health

Production Checklist

# deployment.yaml production settings
spec:
  replicas: 3                      # Always > 1 for availability
  strategy:
    rollingUpdate:
      maxUnavailable: 0            # Never take all pods down at once
  template:
    spec:
      containers:
        - resources:
            requests:              # Set requests — required for scheduling
              cpu: 100m
              memory: 128Mi
            limits:                # Set limits — prevent noisy neighbors
              cpu: 1000m
              memory: 512Mi
          readinessProbe:          # Required — prevents sending traffic to unready pods
            httpGet:
              path: /health
              port: 8080
          livenessProbe:           # Required — restarts crashed containers
            httpGet:
              path: /health
              port: 8080
      affinity:
        podAntiAffinity:           # Spread pods across nodes
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-app
                topologyKey: kubernetes.io/hostname
