
Istio Service Mesh

Deploy Istio service mesh with traffic management, mutual TLS, and observability.

Time: ~20 minutes
Difficulty: Advanced

Resource Requirements: Istio needs additional resources. Ensure minikube has at least 4 CPUs and 8GB RAM. Clean up other demos first with task clean:all.

This demo covers:

  • Istio service mesh architecture and components
  • Automatic sidecar injection for pods
  • Traffic splitting with weighted routing
  • Mutual TLS (mTLS) between services
  • Istio Gateway for ingress traffic
  • DestinationRules and VirtualServices for traffic management
  • Observability with istioctl proxy-status

Install Istio using istioctl:

Terminal window
# Download Istio (if not already installed)
curl -L https://istio.io/downloadIstio | sh -
cd istio-*
export PATH=$PWD/bin:$PATH
# Install Istio with demo profile
istioctl install --set profile=demo -y

Verify the installation:

Terminal window
kubectl get pods -n istio-system

You should see istiod and istio-ingressgateway running.

Navigate to the demo directory:

Terminal window
cd demos/istio-service-mesh

Create the namespace with automatic sidecar injection enabled:

Terminal window
kubectl apply -f manifests/namespace.yaml
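The namespace manifest is expected to look roughly like the sketch below (the actual file may differ). The `istio-injection: enabled` label is what turns on automatic sidecar injection for pods created in this namespace:

```yaml
# Sketch of manifests/namespace.yaml (assumed contents)
apiVersion: v1
kind: Namespace
metadata:
  name: istio-demo
  labels:
    istio-injection: enabled  # istiod injects an Envoy sidecar into new pods here
```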

Deploy the frontend and backend services:

Terminal window
kubectl apply -f manifests/deployment-frontend.yaml
kubectl apply -f manifests/deployment-backend-v1.yaml
kubectl apply -f manifests/deployment-backend-v2.yaml
kubectl apply -f manifests/service-frontend.yaml
kubectl apply -f manifests/service-backend.yaml

Wait for pods to be ready with sidecars injected:

Terminal window
kubectl get pods -n istio-demo

Each pod should show 2/2 containers (application + Envoy sidecar).

Apply Istio traffic management rules:

Terminal window
kubectl apply -f manifests/destination-rule.yaml
kubectl apply -f manifests/virtual-service.yaml
kubectl apply -f manifests/gateway.yaml

Check that Istio sidecars are injected:

Terminal window
kubectl get pods -n istio-demo -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'

You should see each pod has two containers: the application and istio-proxy.

Verify mutual TLS is enabled:

Terminal window
istioctl proxy-status

All proxies should show SYNCED status.

Check the mTLS configuration for a backend pod (`istioctl authn tls-check` was removed in Istio 1.5; `istioctl experimental describe` is its closest current replacement):

Terminal window
istioctl experimental describe pod backend-v1-<pod-id> -n istio-demo
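mTLS can also be enforced explicitly with a PeerAuthentication policy. This resource is not part of the demo manifests; it is an illustrative sketch of how STRICT mode would be applied namespace-wide:

```yaml
# Illustration only — not included in the demo's manifests/
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-demo
spec:
  mtls:
    mode: STRICT  # reject any plaintext traffic to sidecar-enabled pods
```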

Get the ingress gateway URL:

Terminal window
export INGRESS_HOST=$(minikube ip)
export INGRESS_PORT=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo "http://$GATEWAY_URL"

Test the frontend:

Terminal window
curl http://$GATEWAY_URL

The VirtualService routes 80% of traffic to backend v1 and 20% to backend v2. Test this from inside the frontend pod:

Terminal window
# Get frontend pod name
export FRONTEND_POD=$(kubectl get pod -n istio-demo -l app=frontend -o jsonpath='{.items[0].metadata.name}')
# Send 10 requests and observe the distribution
for i in {1..10}; do
  kubectl exec -n istio-demo $FRONTEND_POD -c nginx -- curl -s http://backend/headers | grep X-Forwarded-Host
done
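The weighted split itself can be illustrated without a cluster. This local shell sketch (an illustration only, not part of the demo) approximates the per-request 80/20 selection that the VirtualService performs:

```shell
#!/usr/bin/env bash
# Simulate 1000 requests through an 80/20 weighted route.
count_v1=0
count_v2=0
for _ in $(seq 1 1000); do
  if (( RANDOM % 100 < 80 )); then  # ~80% of draws map to subset v1
    count_v1=$(( count_v1 + 1 ))
  else
    count_v2=$(( count_v2 + 1 ))
  fi
done
echo "v1: $count_v1  v2: $count_v2"
```

Over 1000 simulated requests the counts converge toward the configured weights, just as repeated curls through the mesh would.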

To see which backend version handled the request, check the pod logs:

Terminal window
kubectl logs -n istio-demo -l app=backend,version=v1 -c httpbin --tail=5
kubectl logs -n istio-demo -l app=backend,version=v2 -c httpbin --tail=5
The demo's manifest files:

manifests/
  namespace.yaml             # istio-demo namespace with istio-injection: enabled
  deployment-frontend.yaml   # nginx frontend (1 replica)
  deployment-backend-v1.yaml # httpbin backend v1 (1 replica)
  deployment-backend-v2.yaml # httpbin backend v2 (1 replica)
  service-frontend.yaml      # ClusterIP service for frontend
  service-backend.yaml       # ClusterIP service for backend (selects both v1 and v2)
  destination-rule.yaml      # defines subsets v1 and v2, enables mTLS
  virtual-service.yaml       # routes 80% to v1, 20% to v2
  gateway.yaml               # Istio Gateway + VirtualService for external access
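Given that layout, gateway.yaml plausibly contains something like the following sketch (the resource name and hosts here are assumptions; consult the actual file):

```yaml
# Sketch of the Gateway half of manifests/gateway.yaml (names assumed)
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: frontend-gateway    # assumed name
  namespace: istio-demo
spec:
  selector:
    istio: ingressgateway   # binds to the default istio-ingressgateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```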

How Istio works:

  1. The istio-injection: enabled label on the namespace tells Istio to automatically inject an Envoy sidecar into every pod.
  2. The sidecar intercepts all inbound and outbound traffic to the pod.
  3. DestinationRule defines traffic policies (like mTLS) and subsets based on pod labels (version: v1, version: v2).
  4. VirtualService routes traffic to specific subsets with weight-based routing (80/20 split).
  5. Gateway exposes services to external traffic via the istio-ingressgateway.
  6. All service-to-service communication is encrypted with mutual TLS by default.
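Points 3 and 4 can be made concrete. Below is a minimal sketch of the two resources, assumed to match destination-rule.yaml and virtual-service.yaml in spirit (exact field values may differ from the actual files):

```yaml
# Sketch of the demo's traffic-management resources (assumed contents)
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend
  namespace: istio-demo
spec:
  host: backend
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL    # enforce mTLS for traffic to this host
  subsets:
  - name: v1
    labels:
      version: v1           # selects pods labeled version=v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: backend
  namespace: istio-demo
spec:
  hosts:
  - backend
  http:
  - route:
    - destination:
        host: backend
        subset: v1
      weight: 80            # 80% of requests
    - destination:
        host: backend
        subset: v2
      weight: 20            # 20% of requests
```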

Traffic flow:

External → Gateway → frontend (Envoy sidecar) → backend Service
                                                      │
                                      VirtualService routes to subsets
                                           ↓                    ↓
                                       v1 (80%)             v2 (20%)
Things to try next:

  1. Shift all traffic to v2:

    Terminal window
    kubectl patch virtualservice backend -n istio-demo --type=merge -p '
    {
      "spec": {
        "http": [{
          "route": [{
            "destination": {
              "host": "backend",
              "subset": "v2"
            }
          }]
        }]
      }
    }'
  2. Inject a 5-second delay for 50% of requests to v1:

    Terminal window
    kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: backend
      namespace: istio-demo
    spec:
      hosts:
      - backend
      http:
      - fault:
          delay:
            percentage:
              value: 50.0
            fixedDelay: 5s
        route:
        - destination:
            host: backend
            subset: v1
    EOF
  3. Test circuit breaking by setting connection limits:

    Terminal window
    kubectl patch destinationrule backend -n istio-demo --type=merge -p '
    {
      "spec": {
        "trafficPolicy": {
          "connectionPool": {
            "tcp": {
              "maxConnections": 1
            },
            "http": {
              "http1MaxPendingRequests": 1,
              "maxRequestsPerConnection": 1
            }
          },
          "outlierDetection": {
            "consecutive5xxErrors": 1,
            "interval": "1s",
            "baseEjectionTime": "3m",
            "maxEjectionPercent": 100
          }
        }
      }
    }'
  4. View Envoy configuration for a pod:

    Terminal window
    istioctl proxy-config routes $FRONTEND_POD -n istio-demo
  5. Raise the Envoy proxy's log level to see traffic details (this calls Envoy's admin API on port 15000):

    Terminal window
    kubectl exec -n istio-demo $FRONTEND_POD -c istio-proxy -- \
      curl -X POST "http://localhost:15000/logging?level=debug"
    kubectl logs -n istio-demo $FRONTEND_POD -c istio-proxy --tail=20

Delete the demo namespace:

Terminal window
kubectl delete namespace istio-demo

Optionally, uninstall Istio:

Terminal window
istioctl uninstall --purge -y
kubectl delete namespace istio-system

See docs/deep-dive.md for a detailed explanation of Istio architecture, control plane vs data plane, Envoy proxies, traffic management patterns, security policies, observability integration with Prometheus and Grafana, and production best practices.

Explore policy enforcement with Kyverno to learn about admission control and policy-as-code.