
Deployment Strategies

Compare rolling update, blue/green, and canary deployment patterns.

Time: ~15 minutes · Difficulty: Intermediate

  • Rolling update: gradual replacement with maxSurge and maxUnavailable
  • Blue/green: instant switch between two full environments
  • Canary: send a small percentage of traffic to the new version
  • Rollback: undo a bad deployment
  • When to use each strategy

Navigate to the demo directory and create the namespace:

```sh
cd demos/deployment-strategies
kubectl apply -f manifests/namespace.yaml
```

Rolling update is the default Kubernetes strategy: pods running the old version are replaced gradually by pods running the new one.
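The manifest itself is not shown on this page, but the relevant part of a Deployment spec typically looks like the sketch below (assumed to mirror `manifests/rolling-update.yaml`; the values match the maxSurge=1/maxUnavailable=1 behavior described in this demo):

```yaml
# Sketch of a RollingUpdate strategy stanza (assumption, not the
# verbatim contents of manifests/rolling-update.yaml)
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired count
      maxUnavailable: 1    # at most 1 pod may be down during the update
```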

```sh
kubectl apply -f manifests/rolling-update.yaml
kubectl get pods -l app=rolling-app -n deploy-strategy-demo -w
```

Trigger a rolling update by changing the image:

```sh
kubectl set image deploy/rolling-app app=nginx:1.25.3-alpine -n deploy-strategy-demo
kubectl rollout status deploy/rolling-app -n deploy-strategy-demo
```

Watch the pods being replaced one at a time (maxSurge=1, maxUnavailable=1).
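To confirm which surge settings a live Deployment is actually using, you can read them back from the API. This is cluster-dependent, so it is only a sketch using the demo's Deployment name:

```sh
# Print the effective rolling-update parameters of the demo Deployment
kubectl get deploy rolling-app -n deploy-strategy-demo \
  -o jsonpath='{.spec.strategy.rollingUpdate}{"\n"}'
```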

To roll back, use `kubectl rollout undo`; `rollout history` lists past revisions:

```sh
kubectl rollout undo deploy/rolling-app -n deploy-strategy-demo
kubectl rollout history deploy/rolling-app -n deploy-strategy-demo
```

In a blue/green deployment, both versions run simultaneously as full environments; switching the Service selector moves all traffic instantly.
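A minimal sketch of the wiring (assumed to mirror `manifests/blue-green.yaml`, which is not shown on this page; the `version: blue`/`version: green` labels are inferred from the patch commands below):

```yaml
# Sketch: the Service initially selects only the blue pods. The green
# Deployment runs in parallel but receives no traffic until the
# selector is flipped.
apiVersion: v1
kind: Service
metadata:
  name: bluegreen-app
  namespace: deploy-strategy-demo
spec:
  selector:
    app: bluegreen-app
    version: blue        # change to "green" to switch all traffic at once
  ports:
    - port: 80
```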

```sh
kubectl apply -f manifests/blue-green.yaml
```

Traffic goes to Blue:

```sh
kubectl port-forward svc/bluegreen-app 8080:80 -n deploy-strategy-demo &
curl http://localhost:8080
# Shows: BLUE version
```

Switch to Green by changing the Service selector:

```sh
kubectl patch svc bluegreen-app -n deploy-strategy-demo \
  -p '{"spec":{"selector":{"version":"green"}}}'
curl http://localhost:8080
# Shows: GREEN version
```

Instant rollback:

```sh
kubectl patch svc bluegreen-app -n deploy-strategy-demo \
  -p '{"spec":{"selector":{"version":"blue"}}}'
```

Stop the port-forward when done: `kill %1`

In a canary deployment, the new version runs alongside the old one with only a few replicas. The Service routes to both, weighted by replica count.
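A sketch of how the shared label makes this work (assumed to mirror `manifests/canary.yaml`; the Deployment and Service names are taken from the commands below, the `track` label is an illustrative assumption):

```yaml
# Sketch: both Deployments carry the label the Service selects on, so
# their pods all become endpoints of the same Service.
#   app-stable pods: app: canary-app, track: stable   (4 replicas)
#   app-canary pods: app: canary-app, track: canary   (1 replica)
apiVersion: v1
kind: Service
metadata:
  name: canary-app
  namespace: deploy-strategy-demo
spec:
  selector:
    app: canary-app      # matches stable AND canary pods
  ports:
    - port: 80
```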

```sh
kubectl apply -f manifests/canary.yaml
```

The Service selector uses the shared label `app: canary-app`, so traffic reaches both the stable Deployment (4 pods) and the canary (1 pod). Roughly 80% hits stable, 20% hits the canary.
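The split is simply proportional to ready replicas, since each endpoint gets a roughly equal share of Service traffic. A quick back-of-the-envelope check (plain shell, no cluster required):

```shell
#!/bin/sh
stable=4   # replicas behind the stable Deployment
canary=1   # replicas behind the canary Deployment
total=$((stable + canary))
# Each ready pod receives roughly an equal share of Service traffic
echo "canary share: $((100 * canary / total))%"
echo "stable share: $((100 * stable / total))%"
```

This prints a 20% / 80% split, matching the 1-in-5 replica ratio. Note the granularity is limited by replica counts: with 5 pods total, the smallest possible canary slice is 20%.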

```sh
# Run multiple requests to see the distribution
for i in $(seq 1 10); do
  kubectl exec deploy/app-stable -n deploy-strategy-demo -- \
    wget -qO- http://canary-app 2>/dev/null
done
```

To promote the canary, scale it up and scale stable down:

```sh
kubectl scale deploy app-canary --replicas=4 -n deploy-strategy-demo
kubectl scale deploy app-stable --replicas=0 -n deploy-strategy-demo
```
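Promotion can also be done gradually instead of in one jump. A hedged sketch using the same demo names (the step sizes and sleep interval are arbitrary assumptions, not part of the demo):

```sh
# Shift traffic in steps by adjusting replica counts while keeping
# the total at 5, then retire stable entirely.
for step in 2 3 4; do
  kubectl scale deploy app-canary --replicas=$step -n deploy-strategy-demo
  kubectl scale deploy app-stable --replicas=$((5 - step)) -n deploy-strategy-demo
  sleep 60   # watch metrics/logs here before continuing
done
kubectl scale deploy app-stable --replicas=0 -n deploy-strategy-demo
```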
| Strategy       | Downtime | Rollback Speed     | Resource Cost | Risk   |
|----------------|----------|--------------------|---------------|--------|
| Rolling Update | Zero     | Seconds (undo)     | 1x + surge    | Low    |
| Blue/Green     | Zero     | Instant (selector) | 2x            | Low    |
| Canary         | Zero     | Instant (scale)    | 1x + canary   | Lowest |
| Recreate       | Yes      | Slow (redeploy)    | 1x            | High   |
Clean up by deleting the demo namespace:

```sh
kubectl delete namespace deploy-strategy-demo
```

See docs/deep-dive.md for a detailed explanation of Deployment controller internals, revision history, Argo Rollouts for advanced canary, traffic mirroring, and A/B testing patterns.

Move on to Pod Security to learn container security hardening.