# Multi-Tenant Platform
Three teams share one cluster with isolated namespaces, enforced quotas, locked-down RBAC, and network segmentation.
Time: ~20 minutes | Difficulty: Advanced
Resources: This demo needs ~512MB RAM. Clean up other demos first:
```bash
task clean:all
```
## What You Will Learn

- Namespace-per-team isolation strategy
- ResourceQuota: cap total CPU, memory, and pod count per team
- LimitRange: inject default resource limits so nothing runs unbounded
- RBAC: give each team exactly the permissions they need, nothing more
- NetworkPolicy: control which teams can talk to each other
- How demos 14 (RBAC), 19 (Network Policies), and 20 (Resource Quotas) work together in a real platform
## The Scenario

Three teams share a single cluster:
```
┌─────────────────────────────────────────────────────────┐
│ Shared Cluster                                          │
│                                                         │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐ │
│  │ tenant-      │   │ tenant-      │   │ tenant-      │ │
│  │ frontend     │   │ backend      │   │ data         │ │
│  │              │   │              │   │              │ │
│  │ 1 CPU/1Gi    │   │ 2 CPU/2Gi    │   │ 2 CPU/4Gi    │ │
│  │ deploy+svc   │   │ +configmaps  │   │ +PVCs        │ │
│  │              │   │ +secrets     │   │ +secrets     │ │
│  └──────┬───────┘   └──────┬───────┘   └──────────────┘ │
│         │ allowed          │ allowed          ▲         │
│         └─────────>        └──────────────────┘         │
│       ✗ direct access to data                           │
└─────────────────────────────────────────────────────────┘
```

- Frontend can reach Backend but NOT Data
- Backend can reach Data
- Data accepts traffic only from Backend
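
These rules are enforced by the policies in manifests/network-policies.yaml. As a rough sketch only, the rule that admits only Backend traffic into the data namespace would look something like this (the policy name is an assumption; the `kubernetes.io/metadata.name` label is set automatically by Kubernetes on every namespace):

```yaml
# Sketch, not the demo's actual manifest: allow ingress to every pod in
# tenant-data, but only from pods in the tenant-backend namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-only      # assumed name
  namespace: tenant-data
spec:
  podSelector: {}               # applies to all pods in tenant-data
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: tenant-backend
```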
## Prerequisites

NetworkPolicy enforcement requires a CNI that supports it:

```bash
minikube start --cpus=4 --memory=8192 --cni=calico
```

Or enable Calico on an existing cluster:

```bash
minikube addons enable calico
```

## Deploy
Navigate to the demo directory:

```bash
cd demos/multi-tenant
```

### Step 1: Deploy Namespaces and Governance

```bash
kubectl apply -f manifests/namespaces.yaml
kubectl apply -f manifests/quotas.yaml
kubectl apply -f manifests/limit-ranges.yaml
kubectl apply -f manifests/rbac.yaml
kubectl apply -f manifests/network-policies.yaml
```

### Step 2: Deploy Sample Apps

```bash
kubectl apply -f manifests/sample-apps.yaml
```

## Verify
```bash
# Check all three namespaces
kubectl get namespaces -l purpose=multi-tenant-demo

# Check pods in each namespace
kubectl get pods -n tenant-frontend
kubectl get pods -n tenant-backend
kubectl get pods -n tenant-data

# Check quotas
kubectl describe resourcequota frontend-quota -n tenant-frontend
kubectl describe resourcequota backend-quota -n tenant-backend
kubectl describe resourcequota data-quota -n tenant-data
```

## Test RBAC
### Frontend team can manage deployments and services

```bash
# Works: frontend team lists deployments in its namespace
kubectl auth can-i list deployments.apps \
  --as=system:serviceaccount:tenant-frontend:frontend-team -n tenant-frontend

# Works: frontend team creates services in its namespace
kubectl auth can-i create services \
  --as=system:serviceaccount:tenant-frontend:frontend-team -n tenant-frontend

# Denied: frontend team cannot manage configmaps
kubectl auth can-i create configmaps \
  --as=system:serviceaccount:tenant-frontend:frontend-team -n tenant-frontend

# Denied: frontend team cannot access backend namespace
kubectl auth can-i list pods \
  --as=system:serviceaccount:tenant-frontend:frontend-team -n tenant-backend
```

### Backend team has additional configmap and secret access
```bash
# Works: backend team manages configmaps
kubectl auth can-i create configmaps \
  --as=system:serviceaccount:tenant-backend:backend-team -n tenant-backend

# Works: backend team manages secrets
kubectl auth can-i create secrets \
  --as=system:serviceaccount:tenant-backend:backend-team -n tenant-backend

# Denied: backend team cannot manage PVCs
kubectl auth can-i create persistentvolumeclaims \
  --as=system:serviceaccount:tenant-backend:backend-team -n tenant-backend
```

### Data team gets PVC access
```bash
# Works: data team manages PVCs
kubectl auth can-i create persistentvolumeclaims \
  --as=system:serviceaccount:tenant-data:data-team -n tenant-data

# Works: data team manages secrets
kubectl auth can-i create secrets \
  --as=system:serviceaccount:tenant-data:data-team -n tenant-data
```

## Test Network Policies
### Frontend to Backend (allowed)

```bash
kubectl exec deploy/frontend-app -n tenant-frontend -- \
  wget -qO- --timeout=3 http://backend-app.tenant-backend.svc.cluster.local
```

### Frontend to Data (blocked)

```bash
kubectl exec deploy/frontend-app -n tenant-frontend -- \
  wget -qO- --timeout=3 http://data-app.tenant-data.svc.cluster.local
# wget: download timed out
```

### Backend to Data (allowed)

```bash
kubectl exec deploy/backend-app -n tenant-backend -- \
  wget -qO- --timeout=3 http://data-app.tenant-data.svc.cluster.local
```

## Test Quotas
Try to exceed the frontend team’s 1 CPU quota:

```bash
# This deployment requests 800m CPU per pod x 2 = 1600m, exceeding the 1 CPU quota
kubectl create deployment greedy-frontend \
  --image=nginx:1.25.3-alpine \
  --replicas=2 \
  -n tenant-frontend \
  --dry-run=client -o yaml | \
  kubectl patch -f - --type=strategic --local -o yaml \
    -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","resources":{"requests":{"cpu":"800m","memory":"256Mi"},"limits":{"cpu":"800m","memory":"256Mi"}}}]}}}}' | \
  kubectl apply -f -

# Check events for quota rejection
kubectl get events -n tenant-frontend --field-selector reason=FailedCreate

# Clean up
kubectl delete deployment greedy-frontend -n tenant-frontend 2>/dev/null
```

## What is Happening
```
manifests/
  namespaces.yaml        # 3 namespaces with team labels
  quotas.yaml            # ResourceQuota per namespace (CPU, memory, pod caps)
  limit-ranges.yaml      # LimitRange per namespace (default container limits)
  rbac.yaml              # ServiceAccount + Role + RoleBinding per team
  network-policies.yaml  # deny-all baseline + selective allow rules
  sample-apps.yaml       # One nginx deployment + service per namespace
```

The multi-tenant model combines four mechanisms:
- ResourceQuota prevents any team from consuming more than its share
- LimitRange ensures every container gets defaults so no pod runs unbounded
- RBAC restricts what each team can do inside their own namespace
- NetworkPolicy restricts which namespaces can communicate
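
The quota and default-limit pair for the frontend team likely resembles the following sketch. The `frontend-quota` name matches the object queried in Verify; the pod cap, LimitRange name, and default values are assumptions (manifests/quotas.yaml and manifests/limit-ranges.yaml are authoritative):

```yaml
# Sketch of the frontend team's governance objects, not the demo's exact files.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: frontend-quota         # name used by `kubectl describe resourcequota` above
  namespace: tenant-frontend
spec:
  hard:
    requests.cpu: "1"          # the 1 CPU cap exercised in Test Quotas
    requests.memory: 1Gi
    pods: "10"                 # assumed pod cap
---
apiVersion: v1
kind: LimitRange
metadata:
  name: frontend-defaults      # assumed name
  namespace: tenant-frontend
spec:
  limits:
    - type: Container
      default:                 # injected as limits when a container omits them
        cpu: 200m
        memory: 128Mi
      defaultRequest:          # injected as requests
        cpu: 100m
        memory: 64Mi
```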
| Team | CPU Quota | Memory Quota | Extra Permissions | Network |
|---|---|---|---|---|
| Frontend | 1 CPU | 1Gi | Deployments, Services | -> Backend only |
| Backend | 2 CPU | 2Gi | + ConfigMaps, Secrets | -> Data only |
| Data | 2 CPU | 4Gi | + PVCs | Ingress from Backend only |
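
Each row of the table maps to a Role bound to the team's ServiceAccount. A sketch of the backend team's pair, inferred from the `can-i` checks above (the object names and exact verb list are assumptions; manifests/rbac.yaml is authoritative):

```yaml
# Sketch inferred from the RBAC checks, not the demo's exact manifest.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backend-team           # assumed name
  namespace: tenant-backend
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: [""]
    # no persistentvolumeclaims here: that can-i check is denied
    resources: ["services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: backend-team-binding   # assumed name
  namespace: tenant-backend
subjects:
  - kind: ServiceAccount
    name: backend-team
    namespace: tenant-backend
roleRef:
  kind: Role
  name: backend-team
  apiGroup: rbac.authorization.k8s.io
```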
## Experiment

1. List all permissions for a team:

   ```bash
   kubectl auth can-i --list \
     --as=system:serviceaccount:tenant-backend:backend-team -n tenant-backend
   ```

2. Check quota usage across all tenant namespaces:

   ```bash
   for ns in tenant-frontend tenant-backend tenant-data; do
     echo "=== $ns ==="
     kubectl describe resourcequota -n $ns | grep -A 5 "Used"
   done
   ```

3. Try to deploy a pod without resource specs and see LimitRange inject defaults:

   ```bash
   # Note: -n must come before --, or it is passed to the container command
   kubectl run no-limits -n tenant-frontend --image=busybox:1.36 --command -- sleep infinity
   kubectl get pod no-limits -n tenant-frontend \
     -o jsonpath='{.spec.containers[0].resources}' | python3 -m json.tool
   kubectl delete pod no-limits -n tenant-frontend
   ```

4. List all network policies across tenant namespaces:

   ```bash
   for ns in tenant-frontend tenant-backend tenant-data; do
     echo "=== $ns ==="
     kubectl get networkpolicies -n $ns
   done
   ```
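
The listing should show a default-deny policy in each namespace plus the selective allow rules. The deny-all baseline is the standard empty-selector pattern, sketched here for one namespace (the policy name is an assumption):

```yaml
# Standard default-deny ingress baseline, applied per tenant namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # assumed name
  namespace: tenant-frontend
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed, so all inbound traffic is denied
```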
## Cleanup

```bash
kubectl delete namespace tenant-frontend tenant-backend tenant-data
```

## Further Reading
See docs/deep-dive.md for a detailed explanation of multi-tenant cluster design patterns, hierarchical namespaces, tenant isolation with admission webhooks, cost allocation strategies, and how managed Kubernetes platforms implement multi-tenancy.

## Next Step

Move on to Chaos Engineering to learn how to test Kubernetes resilience by deliberately breaking things.