
StatefulSet vs Deployment

See why StatefulSets exist by comparing the same app deployed both ways.

Time: ~15 minutes · Difficulty: Intermediate

What you'll learn:

  • Stable pod identity (ordinal suffixes -0, -1) vs random pod names
  • Per-pod persistent storage via volumeClaimTemplates
  • Init containers for one-time setup
  • Headless services and why StatefulSets need them
  • PVC lifecycle (survives pod deletion)

A Deployment gives pods random names and uses shared or ephemeral storage. When a pod is deleted, the replacement gets a new name and starts with empty storage. This breaks workloads that need stable identity or persistent per-pod data (databases, caches, queues).

A StatefulSet solves this with ordered, stable pod names and dedicated PVCs that reattach on restart.

Navigate to the demo directory:

```sh
cd demos/statefulset
```

Apply the manifests:

```sh
kubectl apply -f manifests/namespace.yaml
kubectl apply -f manifests/counter-script.yaml
kubectl apply -f manifests/deployment-version.yaml
kubectl apply -f manifests/statefulset-version.yaml
```

Wait for all pods to be ready:

```sh
kubectl get pods -n statefulset-demo -w
```

You should see pods like:

  • counter-deploy-7b8f9c-abc12 (random suffix)
  • counter-deploy-7b8f9c-xyz34 (random suffix)
  • counter-sts-0 (stable name)
  • counter-sts-1 (stable name)
Deployment pods get new names

```sh
# Note the pod names
kubectl get pods -l variant=deployment -n statefulset-demo

# Delete a pod
kubectl delete pod -l variant=deployment -n statefulset-demo --wait=false

# Watch - the replacement has a different name
kubectl get pods -l variant=deployment -n statefulset-demo -w
```
StatefulSet pods keep their names

```sh
# Note the pod names (counter-sts-0, counter-sts-1)
kubectl get pods -l variant=statefulset -n statefulset-demo

# Delete pod-0
kubectl delete pod counter-sts-0 -n statefulset-demo

# Watch - counter-sts-0 comes back with the SAME name
kubectl get pods -l variant=statefulset -n statefulset-demo -w
```
Deployment loses data on restart

```sh
# Check the boot count (will show 1)
kubectl exec counter-deploy-<TAB> -n statefulset-demo -- cat /data/counter

# Delete the pod
kubectl delete pod -l variant=deployment -n statefulset-demo --wait=false

# Check the new pod's boot count (back to 1 - data was lost)
kubectl exec <new-pod-name> -n statefulset-demo -- cat /data/counter
```

StatefulSet preserves data across restarts

```sh
# Check the boot count
kubectl exec counter-sts-0 -n statefulset-demo -- cat /data/counter

# Delete the pod
kubectl delete pod counter-sts-0 -n statefulset-demo

# Wait for it to come back, then check again (count incremented, not reset)
kubectl wait --for=condition=Ready pod/counter-sts-0 -n statefulset-demo --timeout=60s
kubectl exec counter-sts-0 -n statefulset-demo -- cat /data/counter
```

The counter increments because the same PVC reattaches to the same pod.

```sh
# See the per-pod PVCs created by volumeClaimTemplates
kubectl get pvc -n statefulset-demo
# Output:
# data-counter-sts-0   Bound   ...
# data-counter-sts-1   Bound   ...
```

Each pod gets its own PVC named <template-name>-<pod-name>. These PVCs survive pod deletion and even scale-down.

```
manifests/
├── namespace.yaml            # statefulset-demo namespace
├── counter-script.yaml       # ConfigMap with init script
├── deployment-version.yaml   # Deployment + Service (emptyDir, random names)
└── statefulset-version.yaml  # StatefulSet + headless Service (PVC per pod)
```
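The counter script itself isn't shown here; a minimal sketch of what such a ConfigMap might look like (the key name and the exact script are assumptions, not the repo's actual contents):

```yaml
# Hypothetical sketch - the real counter-script.yaml may differ.
apiVersion: v1
kind: ConfigMap
metadata:
  name: counter-script          # assumed name
  namespace: statefulset-demo
data:
  init.sh: |
    #!/bin/sh
    # Read the previous boot count (0 if the file doesn't exist yet),
    # increment it, and render an HTML page for nginx to serve.
    count=$(cat /data/counter 2>/dev/null || echo 0)
    count=$((count + 1))
    echo "$count" > /data/counter
    echo "<h1>Boot count: $count</h1>" > /usr/share/nginx/html/index.html
```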

Both versions run the same app: an init container reads a counter file, increments it, writes an HTML page, then nginx serves it. The difference is storage:

  • Deployment: uses emptyDir, lost on pod deletion
  • StatefulSet: uses volumeClaimTemplates, persists across restarts
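The storage difference comes down to a `volumeClaimTemplates` stanza on the StatefulSet instead of an `emptyDir` volume in the pod spec. A hedged sketch of the relevant part of statefulset-version.yaml (the storage size is an assumption, and the selector and pod template are elided):

```yaml
# Sketch of the StatefulSet storage stanza; the real manifest may differ.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: counter-sts
  namespace: statefulset-demo
spec:
  serviceName: counter-sts        # must name the headless Service
  replicas: 2
  # ...selector and pod template elided...
  volumeClaimTemplates:
    - metadata:
        name: data                # yields PVCs named data-<pod-name>
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Mi        # assumed size
```

The template name `data` plus pod names `counter-sts-0` and `counter-sts-1` produce exactly the `data-counter-sts-0` and `data-counter-sts-1` PVCs seen earlier.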

The StatefulSet also requires a headless Service (clusterIP: None) for stable DNS. Each pod gets a DNS entry: counter-sts-0.counter-sts.statefulset-demo.svc.cluster.local.
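A headless Service is an ordinary Service with `clusterIP` set to `None`. A sketch of what the one in statefulset-version.yaml likely looks like (the port and selector labels are assumptions):

```yaml
# Sketch of a headless Service; the real manifest may differ.
apiVersion: v1
kind: Service
metadata:
  name: counter-sts
  namespace: statefulset-demo
spec:
  clusterIP: None                 # headless: no virtual IP, per-pod DNS records instead
  selector:
    variant: statefulset          # assumed to match the StatefulSet's pod labels
  ports:
    - port: 80
```

Because there is no cluster IP to load-balance through, DNS resolves directly to pod IPs, and each pod additionally gets its own stable per-pod record.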

  1. Scale the StatefulSet and watch pods come up in order:

     ```sh
     kubectl scale statefulset counter-sts --replicas=4 -n statefulset-demo
     kubectl get pods -l variant=statefulset -n statefulset-demo -w
     ```

  2. Scale back down and verify the PVCs are NOT deleted:

     ```sh
     kubectl scale statefulset counter-sts --replicas=2 -n statefulset-demo
     kubectl get pvc -n statefulset-demo
     ```

  3. Access individual pods via DNS from a debug pod:

     ```sh
     kubectl run -it debug --rm --image=busybox -n statefulset-demo -- \
       wget -qO- http://counter-sts-0.counter-sts:80
     ```
Clean up

```sh
kubectl delete namespace statefulset-demo
```

Note: Deleting the namespace also deletes the PVCs. Outside of that, scaling down a StatefulSet does NOT delete its PVCs by default; you must clean them up manually (or configure a persistentVolumeClaimRetentionPolicy on the StatefulSet).

See docs/deep-dive.md for a detailed explanation of StatefulSet ordering guarantees, update strategies, PVC retention policies, headless service mechanics, and when to use StatefulSet vs Deployment vs DaemonSet.

Move on to Jobs & CronJobs to learn about batch processing and scheduled tasks.