
Resource Quotas & LimitRanges

Prevent resource exhaustion with namespace-level governance.

Time: ~10 minutes · Difficulty: Intermediate

  • ResourceQuota: cap total CPU, memory, pods, and services per namespace
  • LimitRange: set default and max resource limits per container
  • Why pods get stuck in Pending (“insufficient quota”)
  • How quotas enforce that every pod specifies resource requests

Navigate to the demo directory:

cd demos/resource-quotas

Apply the namespace, quota, and limit range:

kubectl apply -f manifests/namespace.yaml
kubectl apply -f manifests/resource-quota.yaml
kubectl apply -f manifests/limit-range.yaml
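The manifests themselves aren't reproduced on this page. Based on the caps and defaults listed in the file summary below, they likely resemble this sketch (the LimitRange object name and exact field layout are assumptions):

```yaml
# Sketch of manifests/resource-quota.yaml, inferred from the listed caps
# (1 CPU request, 2 CPU limit, 5 pods, 3 services).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: quota-demo
spec:
  hard:
    requests.cpu: "1"        # 1000m total CPU requests across the namespace
    limits.cpu: "2"          # 2000m total CPU limits
    requests.memory: 1Gi
    limits.memory: 2Gi
    pods: "5"
    services: "3"
---
# Sketch of manifests/limit-range.yaml; the object name is a guess.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: quota-demo
spec:
  limits:
    - type: Container
      defaultRequest:        # injected when a container omits requests
        cpu: 50m
        memory: 64Mi
      default:               # injected when a container omits limits
        cpu: 200m
        memory: 128Mi
      min:
        cpu: 25m
      max:
        cpu: 500m
        memory: 512Mi
```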

Check the quota:

kubectl describe resourcequota compute-quota -n quota-demo

Deploy a small app that fits within the quota, then re-check usage:

kubectl apply -f manifests/small-app.yaml
kubectl get pods -n quota-demo
kubectl describe resourcequota compute-quota -n quota-demo

The “Used” column now shows resources consumed by the 2 pods.
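For reference, a small-app.yaml consistent with these numbers (2 replicas × 100m CPU request = the 200m shown under "Used") might look like the following sketch; the image and memory values are assumptions:

```yaml
# Hypothetical sketch of manifests/small-app.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: small-app
  namespace: quota-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: small-app
  template:
    metadata:
      labels:
        app: small-app
    spec:
      containers:
        - name: app
          image: busybox:1.36
          command: ["sleep", "infinity"]
          resources:
            requests:
              cpu: 100m      # 2 pods x 100m = 200m of the 1000m quota
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 128Mi
```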

Now deploy an app that exceeds the remaining quota:

kubectl apply -f manifests/greedy-app.yaml
kubectl get pods -n quota-demo
kubectl get deploy greedy-app -n quota-demo

The small-app already uses 200m of the 1000m CPU request quota. The greedy-app wants 400m per pod x 3 = 1200m, but only 800m remains. At most 2 greedy pods can schedule. The third gets rejected by the quota admission controller.
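The admission arithmetic can be sketched with the numbers above (all values come from this demo's quota and manifests):

```python
# Quota headroom arithmetic for CPU requests, using this demo's numbers.
QUOTA_REQUESTS_CPU_M = 1000      # ResourceQuota: requests.cpu = 1 (1000m)
small_app_used_m = 200           # small-app: 2 pods already admitted
greedy_request_per_pod_m = 400   # greedy-app: per-pod CPU request

remaining_m = QUOTA_REQUESTS_CPU_M - small_app_used_m    # 800m of headroom
pods_that_fit = remaining_m // greedy_request_per_pod_m  # integer division

print(f"remaining={remaining_m}m, greedy pods admitted={pods_that_fit} of 3")
# -> remaining=800m, greedy pods admitted=2 of 3
```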

Check why:

kubectl describe deploy greedy-app -n quota-demo | grep -A 5 "Conditions"
kubectl get events -n quota-demo --field-selector reason=FailedCreate
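A greedy-app.yaml matching the arithmetic above (3 replicas × 400m CPU request = 1200m wanted, against 800m remaining) could look like this sketch; the image and memory values are assumptions:

```yaml
# Hypothetical sketch of manifests/greedy-app.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greedy-app
  namespace: quota-demo
spec:
  replicas: 3                # only 2 of these will be admitted
  selector:
    matchLabels:
      app: greedy-app
  template:
    metadata:
      labels:
        app: greedy-app
    spec:
      containers:
        - name: app
          image: busybox:1.36
          command: ["sleep", "infinity"]
          resources:
            requests:
              cpu: 400m      # 3 x 400m = 1200m > 800m remaining
              memory: 128Mi
            limits:
              cpu: 500m      # at the LimitRange max
              memory: 256Mi
```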

Deploy a pod without resource specs. The LimitRange injects defaults:

kubectl run no-limits --image=busybox:1.36 -n quota-demo --command -- sleep infinity
kubectl get pod no-limits -n quota-demo -o jsonpath='{.spec.containers[0].resources}' | python3 -m json.tool

The container gets the LimitRange defaults: 50m CPU request, 200m CPU limit, 64Mi memory request, and 128Mi memory limit.
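Assuming the defaults in the table at the end of this page, the printed resources stanza should look roughly like:

```json
{
    "limits": {
        "cpu": "200m",
        "memory": "128Mi"
    },
    "requests": {
        "cpu": "50m",
        "memory": "64Mi"
    }
}
```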

Try to exceed the LimitRange max:

kubectl run too-big --image=busybox:1.36 -n quota-demo \
--overrides='{"spec":{"containers":[{"name":"too-big","image":"busybox:1.36","command":["sleep","infinity"],"resources":{"requests":{"cpu":"1","memory":"1Gi"}}}]}}'

This fails because the LimitRange max is 500m CPU and 512Mi memory.

manifests/
namespace.yaml # quota-demo namespace
resource-quota.yaml # Caps: 1 CPU req, 2 CPU limit, 5 pods, 3 services
limit-range.yaml # Defaults: 50m/200m CPU, min 25m, max 500m
small-app.yaml # 2 pods, fits within quota
greedy-app.yaml # 3 pods, exceeds CPU quota

ResourceQuota limits the total resources a namespace can consume. LimitRange limits what a single container can request and provides defaults for containers that don’t specify resources.

|                 | ResourceQuota Limit | LimitRange Default  | LimitRange Max |
|-----------------|---------------------|---------------------|----------------|
| CPU requests    | 1000m total         | 50m per container   | 500m           |
| CPU limits      | 2000m total         | 200m per container  | 500m           |
| Memory requests | 1Gi total           | 64Mi per container  | 512Mi          |
| Memory limits   | 2Gi total           | 128Mi per container | 512Mi          |

Clean up:

kubectl delete namespace quota-demo

See docs/deep-dive.md for a detailed explanation of quota scopes, priority class quotas, count quotas for CRDs, LimitRange types (Pod, PVC), and multi-team namespace strategies.

Move on to Pod Disruption Budgets to learn how to maintain availability during maintenance.