Kubernetes Deployment with minikube | Node.js API, Deployment & Service

Key takeaways

Practice production-like rollouts locally: minikube start, image load or docker-env build, apply YAML, readiness/liveness probes, and common failure modes.

Introduction

Kubernetes schedules containers with declarative YAML. Before production, minikube remains a popular way to practice the same Pod / Deployment / Service model locally. This walkthrough covers cluster up → image available to the node → kubectl apply → verify with port-forward.

A stateless Node.js HTTP API is a natural fit; add a /health endpoint and graceful shutdown so the app cooperates with liveness/readiness probes. The examples assume Docker for building images; note that minikube docker-env requires the Docker runtime inside minikube, while minikube image load also works with containerd.

After reading this post

  • Start/stop minikube and point kubectl at the right context
  • Deploy a Node API with Deployment + Service YAML
  • Use minikube image load or minikube docker-env for local-only images
  • Triage ImagePullBackOff, CrashLoopBackOff, and wrong kubectl context

Table of contents

  1. Concepts
  2. Hands-on implementation
  3. Advanced usage
  4. Performance notes
  5. Real-world cases
  6. Troubleshooting
  7. Conclusion

Concepts

How a Node API maps to Kubernetes

| Resource | Role |
| --- | --- |
| Pod | Smallest runnable unit (usually created/managed by a Deployment) |
| Deployment | Desired replicas, rolling updates, image version |
| Service | Stable DNS name and port in front of Pods (ClusterIP, NodePort, LoadBalancer) |

Even though minikube is single-node, you still exercise the same kubectl commands and manifests as in production.

Why minikube

  • No cloud bill while learning Ingress, HPA, Metrics API (via addons).
  • Good bridge for teams moving from Docker Compose to declarative orchestration.

Hands-on implementation

1) Prerequisites

  • Docker (or another minikube driver)
  • kubectl, minikube (latest stable recommended)

minikube version
kubectl version --client

2) Start minikube

minikube start --driver=docker   # or a VM driver
kubectl cluster-info
kubectl get nodes

Confirm context:

kubectl config current-context
# e.g. minikube
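Scripts that apply manifests can guard against pointing at the wrong cluster before doing anything. A minimal sketch, assuming the context is named minikube (the default) and hedging for machines where kubectl is not installed:

```shell
# Guard for deploy scripts: refuse to run against the wrong cluster.
# Falls back to "none" when kubectl is missing or unconfigured.
ctx="$(kubectl config current-context 2>/dev/null || echo none)"
if [ "$ctx" != "minikube" ]; then
  echo "expected context minikube, got: $ctx" >&2
  # exit 1   # uncomment in real scripts to abort here
fi
```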

3) Sample Node.js API (with health)

server.mjs:

import http from 'http';

const port = Number(process.env.PORT || 3000);

const server = http.createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok' }));
    return;
  }
  res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
  res.end('hello from node on k8s\n');
});

server.listen(port, '0.0.0.0', () => {
  console.log(`listening on ${port}`);
});
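The introduction mentions graceful shutdown alongside /health, but the sample only covers the latter. A minimal sketch of the SIGTERM handling you could fold into server.mjs (the self-signal in the listen callback is demo-only, so the snippet terminates on its own):

```javascript
import http from 'http';

const server = http.createServer((req, res) => res.end('ok'));

let shuttingDown = false;
process.on('SIGTERM', () => {
  shuttingDown = true;
  // Stop accepting new connections; in-flight requests finish first.
  // Kubernetes sends SIGTERM, waits terminationGracePeriodSeconds
  // (30s by default), then SIGKILLs whatever is left.
  server.close(() => console.log('closed gracefully'));
});

server.listen(0, () => {
  // Demo only: signal ourselves so the sketch exits by itself.
  process.kill(process.pid, 'SIGTERM');
});
```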

Dockerfile:

FROM node:22-alpine
WORKDIR /app
COPY server.mjs .
ENV NODE_ENV=production
EXPOSE 3000
USER node
CMD ["node", "server.mjs"]

Build locally:

docker build -t demo-node-api:1.0.0 .

4) Make the image visible to minikube (pick one)

Option A — build into minikube’s Docker daemon (requires the Docker runtime inside minikube; with containerd, use Option B)

eval $(minikube docker-env)
docker build -t demo-node-api:1.0.0 .
eval $(minikube docker-env -u)

Option B — build on host, then load

docker build -t demo-node-api:1.0.0 .
minikube image load demo-node-api:1.0.0

5) Deployment and Service manifests

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-node-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-node-api
  template:
    metadata:
      labels:
        app: demo-node-api
    spec:
      containers:
        - name: api
          image: demo-node-api:1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
          env:
            - name: PORT
              value: '3000'
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 3
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: demo-node-api
spec:
  selector:
    app: demo-node-api
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP

Apply:

kubectl apply -f deployment.yaml
kubectl rollout status deployment/demo-node-api
kubectl get pods,svc

6) Verify connectivity

kubectl port-forward service/demo-node-api 8080:80
# another terminal
curl -s http://127.0.0.1:8080/health

Advanced usage

  • ConfigMap / Secret: Move API keys and DB URLs out of plain env in real environments.
  • Resource limits: Set resources.requests / limits to reduce OOM and CPU throttling surprises.
  • Ingress addon: minikube addons enable ingress to practice host-based routing.
  • Namespaces: kubectl create ns dev and -n dev for team/environment isolation.
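The first two bullets can be sketched in YAML. The names here (demo-node-api-config, DB_URL, the DB host) are illustrative, not from the walkthrough; envFrom injects every ConfigMap key as an environment variable, and the resources block bounds scheduling and throttling:

```yaml
# configmap.yaml: keys become env vars via envFrom.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-node-api-config
data:
  PORT: '3000'
  DB_URL: postgres://db.dev.svc.cluster.local:5432/app
---
# Fragment to merge into the container entry in deployment.yaml:
envFrom:
  - configMapRef:
      name: demo-node-api-config
resources:
  requests:    # scheduler reserves this much for the Pod
    cpu: 100m
    memory: 128Mi
  limits:      # beyond this, CPU is throttled and memory is OOM-killed
    cpu: 500m
    memory: 256Mi
```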

Performance notes

| Topic | Notes |
| --- | --- |
| Replicas | On a laptop, 1–2 replicas is typical; scaling out matters more on multi-worker clusters |
| Probe intervals | Too aggressive adds load; too loose keeps routing traffic to dead Pods |
| Image size | Alpine bases and multi-stage builds shorten image pulls and rollouts |
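The image-size row can be made concrete. A multi-stage sketch, assuming a package.json and package-lock.json exist (this walkthrough's single-file server does not strictly need one; the shape is what matters):

```dockerfile
# Stage 1: install production dependencies with the full toolchain.
FROM node:22-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Stage 2: the runtime image only gets node_modules and the app code.
FROM node:22-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=deps /app/node_modules ./node_modules
COPY server.mjs .
USER node
EXPOSE 3000
CMD ["node", "server.mjs"]
```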

Real-world cases

  • CI: Push images to a registry, set image: registry/app:git-sha, and roll the Deployment; locally, substitute with minikube image load.
  • Developer laptops: Build the same Dockerfile used for Compose, apply it in minikube to review YAML with staging-like resources.
  • Chaos practice: Delete one replica with kubectl delete pod and confirm the ReplicaSet recreates it.

Troubleshooting

| Symptom | What to check |
| --- | --- |
| ImagePullBackOff | Image missing on the node → minikube image load or minikube docker-env; imagePullPolicy: Never works only with local images |
| CrashLoopBackOff | kubectl logs deploy/demo-node-api --previous; probe path/port mismatch |
| kubectl targets wrong cluster | kubectl config use-context minikube |
| Service exists but no connection | targetPort vs containerPort mismatch; kubectl get endpoints demo-node-api |

Conclusion

The core loop is: make the image available to the cluster, keep Pods healthy with probes, and expose them with a Service. Once this feels natural, layering Ingress, GitOps, and HPA in a real cluster is much easier—compare with your Docker Compose setup side by side.