
Progressive Delivery (Argo Rollouts)

Canary deployments with automated traffic shifting, health analysis, and auto-rollback.

Time: ~20 minutes Difficulty: Advanced

Resources: This demo needs ~1GB RAM. Clean up other demos first: task clean:all

What you'll learn:

  • Argo Rollouts: a drop-in replacement for Deployments with advanced rollout strategies
  • Canary deployments: shift traffic gradually (10% -> 50% -> 100%)
  • AnalysisTemplate: automated health checks that gate promotion
  • Auto-rollback: bad deployments are automatically reverted
  • The difference between a Deployment rolling update and a true canary
Service ──> Argo Rollout (canary) ──> Stable ReplicaSet (90%)
                                 └──> Canary ReplicaSet (10%)
                                           |
                                 AnalysisRun checks health
                                           |
                               Auto-promote or auto-rollback

Version 1 serves “Stable” HTML. We trigger a v2 rollout that serves “Canary” HTML. Traffic shifts from 10% to 50% to 100%, with a health check running at each step. If the canary fails health checks, the rollout automatically rolls back.

Terminal window
kubectl create namespace argo-rollouts 2>/dev/null || true
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml

Wait for the controller:

Terminal window
kubectl get pods -n argo-rollouts -w

Install the kubectl Plugin (optional but recommended)

Terminal window
# Linux (amd64)
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x kubectl-argo-rollouts-linux-amd64
sudo mv kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
# macOS (arm64)
# curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-darwin-arm64
# chmod +x kubectl-argo-rollouts-darwin-arm64
# sudo mv kubectl-argo-rollouts-darwin-arm64 /usr/local/bin/kubectl-argo-rollouts

Verify:

Terminal window
kubectl argo rollouts version

Navigate to the demo directory:

Terminal window
cd demos/progressive-delivery
Apply the demo manifests:

Terminal window
kubectl apply -f manifests/namespace.yaml
kubectl apply -f manifests/analysis-template.yaml
kubectl apply -f manifests/service.yaml
kubectl apply -f manifests/rollout.yaml
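
The two Services referenced by the Rollout look roughly like this (a sketch; the actual manifests/service.yaml may differ). Argo Rollouts patches each Service's selector at runtime with the pod-template-hash of the matching ReplicaSet, which is how it steers traffic without a service mesh:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: canary-app-stable
  namespace: rollouts-demo
spec:
  selector:
    app: canary-app   # Rollouts adds a pod-template-hash selector at runtime
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: canary-app-canary
  namespace: rollouts-demo
spec:
  selector:
    app: canary-app   # patched to select only canary pods during a rollout
  ports:
  - port: 80
    targetPort: 80
```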

Wait for all pods:

Terminal window
kubectl get pods -n rollouts-demo -w

Check the rollout status:

Terminal window
kubectl argo rollouts get rollout canary-app -n rollouts-demo

All 4 replicas should show as Stable. Verify v1 is serving:

Terminal window
kubectl port-forward svc/canary-app-stable 8080:80 -n rollouts-demo &
curl http://localhost:8080
# <html><body><h1>Version 1 - Stable</h1>...
kill %1

Apply the v2 manifest which changes the image and the HTML content:

Terminal window
kubectl apply -f manifests/rollout-v2.yaml

Immediately watch the rollout:

Terminal window
kubectl argo rollouts get rollout canary-app -n rollouts-demo -w

You will see:

  1. 10% weight - 1 canary pod starts, AnalysisRun begins health checking
  2. 30s pause - traffic continues at 10% canary / 90% stable
  3. 50% weight - more canary pods are created
  4. 30s pause - traffic at 50/50
  5. 100% weight - canary becomes the new stable, old ReplicaSet scales down

Press Ctrl+C when the rollout completes.

While the rollout is in progress (during a pause), check both services:

Terminal window
# Stable service - serves v1
kubectl port-forward svc/canary-app-stable 8080:80 -n rollouts-demo &
curl http://localhost:8080
kill %1
# Canary service - serves v2
kubectl port-forward svc/canary-app-canary 8081:80 -n rollouts-demo &
curl http://localhost:8081
kill %1
Inspect the analysis runs:

Terminal window
# List analysis runs
kubectl get analysisruns -n rollouts-demo
# Check the latest analysis run
kubectl describe analysisrun -n rollouts-demo -l rollouts-pod-template-hash

The analysis run executes a Job that sends a wget request to the canary service. If it gets HTTP 200, the metric reports “healthy” and the rollout proceeds. Three successful checks are required.
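
The AnalysisTemplate behind this looks roughly like the following sketch (the real manifests/analysis-template.yaml may differ; the template name, image tag, and service URL here are assumptions). It uses the Job metric provider, with count requiring three successful measurements and failureLimit triggering the abort described below:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: canary-health            # hypothetical name
  namespace: rollouts-demo
spec:
  metrics:
  - name: http-check
    count: 3                     # three successful checks required to pass
    interval: 10s
    failureLimit: 1              # abort the rollout once this is exceeded
    provider:
      job:
        spec:
          backoffLimit: 0
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: check
                image: busybox:1.36
                # Job succeeds only if the canary answers with HTTP 200
                command: ["wget", "-q", "-O-", "http://canary-app-canary.rollouts-demo.svc"]
```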

Step 5: Simulate a Bad Deployment (Auto-Rollback)

Deploy a version that will fail the health check. We use an image that does not exist, so the canary pods will never become ready:

Terminal window
kubectl argo rollouts set image canary-app nginx=nginx:999.999.999-doesnotexist -n rollouts-demo

Watch the rollout:

Terminal window
kubectl argo rollouts get rollout canary-app -n rollouts-demo -w

The canary pod fails to start (ImagePullBackOff). The AnalysisRun fails because the health check cannot reach the canary. After the failure limit is reached, the rollout automatically aborts and scales down the failed canary ReplicaSet.

Terminal window
# Check rollout status - should show "Degraded"
kubectl argo rollouts status canary-app -n rollouts-demo
# Check events
kubectl describe rollout canary-app -n rollouts-demo | grep -A 10 "Events"

The stable version (v2 from Step 2) continues serving traffic. No downtime.

Abort the failed rollout and restore a healthy image:

Terminal window
kubectl argo rollouts abort canary-app -n rollouts-demo
kubectl argo rollouts set image canary-app nginx=nginx:1.25.4-alpine -n rollouts-demo

Watch recovery:

Terminal window
kubectl argo rollouts get rollout canary-app -n rollouts-demo -w
The demo is driven by these files:

manifests/
  namespace.yaml           # rollouts-demo namespace
  rollout.yaml             # Argo Rollout with canary strategy (v1)
  rollout-v2.yaml          # Updated Rollout with new image/content (v2)
  service.yaml             # Stable and canary Services
  analysis-template.yaml   # AnalysisTemplate with wget health check

Argo Rollout vs Deployment:

| Feature                       | Deployment | Argo Rollout           |
| ----------------------------- | ---------- | ---------------------- |
| Rolling update                | Yes        | Yes                    |
| Canary with traffic splitting | No         | Yes                    |
| Automated analysis            | No         | Yes (AnalysisTemplate) |
| Auto-rollback on failure      | No         | Yes                    |
| Pause between steps           | No         | Yes                    |
| Blue-green strategy           | No         | Yes                    |

Canary progression:

Step 1: setWeight 10% ──> 1 canary pod, AnalysisRun starts
Step 2: pause 30s ──> Health checks run, traffic at 10/90
Step 3: setWeight 50% ──> 2 canary pods, traffic at 50/50
Step 4: pause 30s ──> Health checks continue
Step 5: setWeight 100% ──> Canary becomes stable, old RS scales down
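
The progression above maps to a canary strategy block roughly like this (a sketch; the exact fields live in manifests/rollout.yaml and may differ, and the analysis template name is hypothetical). The background analysis starts at the first step and gates every promotion:

```yaml
strategy:
  canary:
    canaryService: canary-app-canary   # Rollouts points this at the canary ReplicaSet
    stableService: canary-app-stable   # and this at the stable ReplicaSet
    analysis:                          # background AnalysisRun gating promotion
      templates:
      - templateName: canary-health    # hypothetical template name
    steps:
    - setWeight: 10
    - pause: {duration: 30s}
    - setWeight: 50
    - pause: {duration: 30s}
```

Reaching the end of the steps list is the implicit "setWeight 100%": the canary ReplicaSet is promoted to stable and the old one scales down.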
Things to try next:

  1. Manually promote during a pause instead of waiting:

    Terminal window
    kubectl argo rollouts promote canary-app -n rollouts-demo
  2. Check the rollout history:

    Terminal window
    kubectl argo rollouts get rollout canary-app -n rollouts-demo --no-color
  3. Try a blue-green strategy (edit rollout.yaml to replace the canary block):

    strategy:
      blueGreen:
        activeService: canary-app-stable
        previewService: canary-app-canary
        autoPromotionEnabled: true
        autoPromotionSeconds: 30
  4. View the Argo Rollouts dashboard:

    Terminal window
    kubectl argo rollouts dashboard -n rollouts-demo

    Open http://localhost:3100 in your browser.

Clean up when you're done:

Terminal window
kubectl delete namespace rollouts-demo
kubectl delete namespace argo-rollouts

See docs/deep-dive.md for a detailed explanation of progressive delivery patterns, traffic management with Istio and NGINX ingress integration, Prometheus-based AnalysisTemplates, blue-green vs canary trade-offs, and how GitOps pipelines trigger Argo Rollouts.

Move on to Istio Service Mesh to learn about mTLS, traffic splitting, and observability with a service mesh.