
Multi-Tenant Platform

Three teams share one cluster with isolated namespaces, enforced quotas, locked-down RBAC, and network segmentation.

Time: ~20 minutes · Difficulty: Advanced

Resources: This demo needs ~512MB RAM. Clean up other demos first: task clean:all

  • Namespace-per-team isolation strategy
  • ResourceQuota: cap total CPU, memory, and pod count per team
  • LimitRange: inject default resource limits so nothing runs unbounded
  • RBAC: give each team exactly the permissions they need, nothing more
  • NetworkPolicy: control which teams can talk to each other
  • How demos 14 (RBAC), 19 (Network Policies), and 20 (Resource Quotas) work together in a real platform

Three teams share a single cluster:

┌──────────────────────────────────────────────────────────┐
│                      Shared Cluster                      │
│                                                          │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐  │
│  │   tenant-    │   │   tenant-    │   │   tenant-    │  │
│  │   frontend   │   │   backend    │   │     data     │  │
│  │              │   │              │   │              │  │
│  │  1 CPU/1Gi   │   │  2 CPU/2Gi   │   │  2 CPU/4Gi   │  │
│  │  deploy+svc  │   │  +configmaps │   │  +PVCs       │  │
│  │              │   │  +secrets    │   │  +secrets    │  │
│  └──────┬───────┘   └──────┬───────┘   └──────▲───────┘  │
│         │  allowed         │  allowed         │          │
│         └────────────────>└──────────────────┘          │
│             ✗ direct access to data                     │
└──────────────────────────────────────────────────────────┘
  • Frontend can reach Backend but NOT Data
  • Backend can reach Data
  • Data accepts traffic only from Backend
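Rules like these are typically built from a default-deny baseline plus narrow allow rules. As a hypothetical sketch of the tenant-data side (the demo's actual rules live in manifests/network-policies.yaml; the names and selectors here are assumptions):

```yaml
# Sketch only: deny all ingress to tenant-data, then allow it from tenant-backend.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-data
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress              # no ingress rules listed => all inbound traffic denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-backend
  namespace: tenant-data
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: tenant-backend
```

Because policies are additive, the allow rule punches a hole in the deny-all baseline without replacing it.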

NetworkPolicy enforcement requires a CNI that supports it:

minikube start --cpus=4 --memory=8192 --cni=calico

Note that minikube cannot switch CNI plugins on a running cluster, and the default CNI accepts NetworkPolicy objects without enforcing them. If your cluster was started without the --cni flag, delete it (minikube delete) and recreate it with the command above.

Navigate to the demo directory:

cd demos/multi-tenant

Apply the tenancy manifests:

kubectl apply -f manifests/namespaces.yaml
kubectl apply -f manifests/quotas.yaml
kubectl apply -f manifests/limit-ranges.yaml
kubectl apply -f manifests/rbac.yaml
kubectl apply -f manifests/network-policies.yaml
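For reference, a ResourceQuota like the ones just applied looks roughly like this sketch for the frontend namespace. The 1 CPU / 1Gi totals come from the demo; the request/limit split and the pod cap of 10 are illustrative assumptions:

```yaml
# Hypothetical sketch of one entry in manifests/quotas.yaml.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: frontend-quota
  namespace: tenant-frontend
spec:
  hard:
    requests.cpu: "1"      # aggregate CPU requests across the namespace
    limits.cpu: "1"        # aggregate CPU limits
    requests.memory: 1Gi   # aggregate memory requests
    limits.memory: 1Gi     # aggregate memory limits
    pods: "10"             # pod-count cap (illustrative value)
```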

Deploy a sample app in each namespace:

kubectl apply -f manifests/sample-apps.yaml

Verify that everything is in place:

# Check all three namespaces
kubectl get namespaces -l purpose=multi-tenant-demo
# Check pods in each namespace
kubectl get pods -n tenant-frontend
kubectl get pods -n tenant-backend
kubectl get pods -n tenant-data
# Check quotas
kubectl describe resourcequota frontend-quota -n tenant-frontend
kubectl describe resourcequota backend-quota -n tenant-backend
kubectl describe resourcequota data-quota -n tenant-data

Frontend team can manage deployments and services

# Works: frontend team lists deployments in its namespace
kubectl auth can-i list deployments.apps \
--as=system:serviceaccount:tenant-frontend:frontend-team -n tenant-frontend
# Works: frontend team creates services in its namespace
kubectl auth can-i create services \
--as=system:serviceaccount:tenant-frontend:frontend-team -n tenant-frontend
# Denied: frontend team cannot manage configmaps
kubectl auth can-i create configmaps \
--as=system:serviceaccount:tenant-frontend:frontend-team -n tenant-frontend
# Denied: frontend team cannot access backend namespace
kubectl auth can-i list pods \
--as=system:serviceaccount:tenant-frontend:frontend-team -n tenant-backend
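The checks above imply a Role scoped to deployments and services, bound to the team's ServiceAccount. A hypothetical sketch of what the frontend portion of manifests/rbac.yaml might contain (the exact verbs and resource lists are assumptions):

```yaml
# Sketch only: names mirror the ServiceAccount used in the can-i checks.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: frontend-team
  namespace: tenant-frontend
rules:
  # Full control of workloads and services, matching the checks above
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Read-only pod access (an assumption; not exercised by the checks above)
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: frontend-team
  namespace: tenant-frontend
subjects:
  - kind: ServiceAccount
    name: frontend-team
    namespace: tenant-frontend
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: frontend-team
```

Because a Role and RoleBinding are both namespaced, nothing here grants any access outside tenant-frontend, which is why the cross-namespace check is denied.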

Backend team has additional configmap and secret access

# Works: backend team manages configmaps
kubectl auth can-i create configmaps \
--as=system:serviceaccount:tenant-backend:backend-team -n tenant-backend
# Works: backend team manages secrets
kubectl auth can-i create secrets \
--as=system:serviceaccount:tenant-backend:backend-team -n tenant-backend
# Denied: backend team cannot manage PVCs
kubectl auth can-i create persistentvolumeclaims \
--as=system:serviceaccount:tenant-backend:backend-team -n tenant-backend
Data team can manage PVCs and secrets

# Works: data team manages PVCs
kubectl auth can-i create persistentvolumeclaims \
--as=system:serviceaccount:tenant-data:data-team -n tenant-data
# Works: data team manages secrets
kubectl auth can-i create secrets \
--as=system:serviceaccount:tenant-data:data-team -n tenant-data

Frontend can reach Backend:

kubectl exec deploy/frontend-app -n tenant-frontend -- \
wget -qO- --timeout=3 http://backend-app.tenant-backend.svc.cluster.local

Frontend cannot reach Data; the request times out:

kubectl exec deploy/frontend-app -n tenant-frontend -- \
wget -qO- --timeout=3 http://data-app.tenant-data.svc.cluster.local
# wget: download timed out

Backend can reach Data:

kubectl exec deploy/backend-app -n tenant-backend -- \
wget -qO- --timeout=3 http://data-app.tenant-data.svc.cluster.local

Try to exceed the frontend team’s 1 CPU quota:

# This deployment requests 800m CPU per pod x 2 = 1600m, exceeding the 1 CPU quota
kubectl create deployment greedy-frontend \
--image=nginx:1.25.3-alpine \
--replicas=2 \
-n tenant-frontend \
--dry-run=client -o yaml | \
kubectl patch -f - --type=strategic --local -o yaml \
-p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","resources":{"requests":{"cpu":"800m","memory":"256Mi"},"limits":{"cpu":"800m","memory":"256Mi"}}}]}}}}' | \
kubectl apply -f -
# Check events for quota rejection
kubectl get events -n tenant-frontend --field-selector reason=FailedCreate
# Clean up
kubectl delete deployment greedy-frontend -n tenant-frontend 2>/dev/null
manifests/
namespaces.yaml # 3 namespaces with team labels
quotas.yaml # ResourceQuota per namespace (CPU, memory, pod caps)
limit-ranges.yaml # LimitRange per namespace (default container limits)
rbac.yaml # ServiceAccount + Role + RoleBinding per team
network-policies.yaml # deny-all baseline + selective allow rules
sample-apps.yaml # One nginx deployment + service per namespace
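To illustrate the LimitRange piece, here is a minimal sketch of what limit-ranges.yaml might contain for one namespace. The default values below are assumptions for illustration, not the demo's actual numbers:

```yaml
# Sketch only: injects defaults into any container that omits resources.
apiVersion: v1
kind: LimitRange
metadata:
  name: frontend-limits
  namespace: tenant-frontend
spec:
  limits:
    - type: Container
      default:             # limits injected when a container specifies none
        cpu: 200m
        memory: 128Mi
      defaultRequest:      # requests injected when a container specifies none
        cpu: 100m
        memory: 64Mi
```

This is what makes ResourceQuota workable in practice: without injected defaults, any pod lacking explicit requests/limits would be rejected outright in a namespace with a quota on those resources.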

The multi-tenant model combines four building blocks:

  1. ResourceQuota prevents any team from consuming more than its share
  2. LimitRange ensures every container gets defaults so no pod runs unbounded
  3. RBAC restricts what each team can do inside their own namespace
  4. NetworkPolicy restricts which namespaces can communicate
Team     | CPU Quota | Memory Quota | Extra Permissions     | Network
---------|-----------|--------------|-----------------------|---------------------------
Frontend | 1 CPU     | 1Gi          | Deployments, Services | -> Backend only
Backend  | 2 CPU     | 2Gi          | + ConfigMaps, Secrets | -> Data only
Data     | 2 CPU     | 4Gi          | + PVCs                | Ingress from Backend only
  1. List all permissions for a team:

    kubectl auth can-i --list \
    --as=system:serviceaccount:tenant-backend:backend-team -n tenant-backend
  2. Check quota usage across all tenant namespaces:

    for ns in tenant-frontend tenant-backend tenant-data; do
    echo "=== $ns ==="
    kubectl describe resourcequota -n $ns | grep -A 5 "Used"
    done
  3. Try to deploy a pod without resource specs and see LimitRange inject defaults:

    kubectl run no-limits -n tenant-frontend --image=busybox:1.36 --command -- sleep 3600
    kubectl get pod no-limits -n tenant-frontend -o jsonpath='{.spec.containers[0].resources}' | python3 -m json.tool
    kubectl delete pod no-limits -n tenant-frontend
  4. List all network policies across tenant namespaces:

    for ns in tenant-frontend tenant-backend tenant-data; do
    echo "=== $ns ==="
    kubectl get networkpolicies -n $ns
    done

Clean up by deleting the tenant namespaces:

kubectl delete namespace tenant-frontend tenant-backend tenant-data

See docs/deep-dive.md for a detailed explanation of multi-tenant cluster design patterns, hierarchical namespaces, tenant isolation with admission webhooks, cost allocation strategies, and how managed Kubernetes platforms implement multi-tenancy.

Move on to Chaos Engineering to learn how to test Kubernetes resilience by deliberately breaking things.