Kustomize: Deep Dive
This document explains what Kustomize does, why it exists, and how every moving part in this demo works. It uses real YAML from the 03-kustomize demo throughout.
Table of Contents
- The Problem Kustomize Solves
- The Base + Overlay Model
- kustomization.yaml Field by Field
- ConfigMapGenerator In Depth
- Strategic Merge Patches vs JSON Patches
- Name Prefixes and Namespace Scoping
- Labels vs commonLabels
- Image Overrides Without Touching the Deployment
- Building a New Overlay: Staging
- Kustomize vs Helm
The Problem Kustomize Solves
You have a Kubernetes application. It needs to run in dev, staging, and production. Each environment differs slightly: different replica counts, different resource limits, different hostnames.
The naive approach: copy-paste the entire set of manifests into three directories. Now you have three copies of deployment.yaml, three copies of service.yaml, and so on.
This breaks down fast.
- A bug fix in the health check path means editing three files.
- Someone adds a label in dev but forgets production.
- The manifests drift apart silently. Nobody notices until production breaks.
Kustomize fixes this by letting you define a single set of base manifests and then layer environment-specific changes on top. You never duplicate the base. You only write what differs.
The Base + Overlay Model
Kustomize organizes files into two concepts:
Base: The shared, canonical manifests. This is your application as it should run in the general case. In this demo:
```
base/
  kustomization.yaml    # Ties the base resources together
  deployment.yaml       # 3 replicas, health checks, resource limits
  service.yaml          # ClusterIP on port 80
  index.html            # Default landing page (blue)
  nginx.conf            # Nginx server configuration
```

Overlays: Environment-specific layers that modify the base. Each overlay references the base and specifies only what changes:
```
overlays/
  development/
    kustomization.yaml     # 2 replicas, dev- prefix, lower resources
    deployment-patch.yaml  # Smaller CPU/memory
    index.html             # Orange-themed page
    ingress.yaml           # nginx-dev.local hostname
  production/
    kustomization.yaml     # 5 replicas, prod- prefix, higher resources
    deployment-patch.yaml  # Larger CPU/memory
    index.html             # Green-themed page
    ingress.yaml           # nginx-prod.local hostname
```

The base never knows about the overlays. Overlays point to the base via a relative path. This is a one-way dependency. You can add or remove overlays without touching the base at all.
When you run `kubectl apply -k overlays/development/`, Kustomize reads the base, applies the overlay’s modifications, and produces a final set of manifests. Nothing gets committed or generated on disk. It is purely a build-time transformation.
kustomization.yaml Field by Field
The `kustomization.yaml` file is the entry point for Kustomize. Every directory that Kustomize processes must have one. Here is the base:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

metadata:
  name: nginx-base

resources:
  - deployment.yaml
  - service.yaml

configMapGenerator:
  - name: nginx-config
    files:
      - index.html
      - nginx.conf

labels:
  - pairs:
      app: nginx-app
      version: v1.0.0
    includeSelectors: true

images:
  - name: nginx
    newTag: 1.25.3-alpine
```

And here is the development overlay:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

metadata:
  name: nginx-development

namespace: kustomize-dev

resources:
  - ../../base
  - ingress.yaml

namePrefix: dev-

configMapGenerator:
  - behavior: replace
    files:
      - index.html
    name: nginx-config

replicas:
  - count: 2
    name: nginx-app

images:
  - name: nginx
    newTag: 1.25.3-alpine

labels:
  - includeSelectors: true
    pairs:
      environment: development

patches:
  - path: deployment-patch.yaml
```

Let’s break down each field.
resources
Lists the YAML files (or directories) that form this layer’s input. In the base, it points to local manifest files:
```yaml
resources:
  - deployment.yaml
  - service.yaml
```

In an overlay, it points to the base directory and can add overlay-specific resources:
```yaml
resources:
  - ../../base
  - ingress.yaml
```

The `../../base` path tells Kustomize to process the base’s kustomization.yaml first, then layer this overlay’s changes on top. The `ingress.yaml` is an entirely new resource that only exists in this environment.
namespace
Sets the namespace on all resources in the output:
```yaml
namespace: kustomize-dev
```

Every resource gets `metadata.namespace: kustomize-dev` in the final output. You don’t need to hardcode namespaces in any individual manifest. The base manifests stay namespace-agnostic.
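As a sketch of the effect (abridged, hypothetical output; the real build also applies the other transformers, such as the name prefix):

```yaml
# base/service.yaml: no namespace set
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
---
# After building the development overlay (abridged)
apiVersion: v1
kind: Service
metadata:
  name: dev-nginx-service     # namePrefix also applies
  namespace: kustomize-dev    # injected by the namespace field
```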
namePrefix
Prepends a string to every resource name:
```yaml
namePrefix: dev-
```

A Deployment named `nginx-app` becomes `dev-nginx-app`. A Service named `nginx-service` becomes `dev-nginx-service`. Kustomize also updates all internal references: if another resource refers to the Service or Deployment by name, that reference is rewritten too. More on this in Name Prefixes and Namespace Scoping.
replicas
Overrides the replica count for a named Deployment (or StatefulSet):
```yaml
replicas:
  - count: 2
    name: nginx-app
```

The base defines `replicas: 3`. The development overlay drops it to 2. Production raises it to 5. You don’t need a patch file for this. It is a first-class field in the kustomization.
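The effect, sketched against this demo’s base (abridged; other transforms omitted):

```yaml
# base/deployment.yaml (relevant slice)
spec:
  replicas: 3

# Development overlay output
spec:
  replicas: 2
```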
images
Overrides image references without editing the Deployment YAML:
```yaml
images:
  - name: nginx
    newTag: 1.25.3-alpine
```

Kustomize scans all containers across all resources. Any container using an image named `nginx` gets its tag replaced with `1.25.3-alpine`. You can also use `newName` to change the registry or repository entirely. See Image Overrides for details.
labels
Adds labels to all resources:
```yaml
labels:
  - pairs:
      app: nginx-app
      version: v1.0.0
    includeSelectors: true
```

The `pairs` map defines label key-value pairs. When `includeSelectors` is true, these labels also get injected into `spec.selector.matchLabels` and `spec.template.metadata.labels` on Deployments. This keeps selectors consistent with metadata labels automatically.
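A sketch of where those labels land on the built Deployment (abridged, assuming the base Deployment in this demo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:                 # metadata labels
    app: nginx-app
    version: v1.0.0
spec:
  selector:
    matchLabels:          # injected because includeSelectors: true
      app: nginx-app
      version: v1.0.0
  template:
    metadata:
      labels:             # pod template labels, kept in sync
        app: nginx-app
        version: v1.0.0
```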
configMapGenerator
Generates ConfigMaps from files or literal values:
```yaml
configMapGenerator:
  - name: nginx-config
    files:
      - index.html
      - nginx.conf
```

This is covered in depth in the next section.
patches
Applies modifications to existing resources:
```yaml
patches:
  - path: deployment-patch.yaml
```

Points to a patch file that gets merged into matching resources. Covered in Strategic Merge Patches.
ConfigMapGenerator In Depth
Instead of writing a ConfigMap YAML by hand, Kustomize generates one for you. The base defines:
```yaml
configMapGenerator:
  - name: nginx-config
    files:
      - index.html
      - nginx.conf
```

This reads `index.html` and `nginx.conf` from the same directory and creates a ConfigMap with two data keys. The equivalent hand-written ConfigMap would look roughly like:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-<hash>
data:
  index.html: |
    <!DOCTYPE html>
    ...
  nginx.conf: |
    server {
      listen 80;
      ...
    }
```

Hash Suffixes and Rolling Updates
Notice the `<hash>` in the name. Kustomize appends a deterministic hash suffix to every generated ConfigMap name, like `nginx-config-g4mb7h92kf`. This hash is computed from the ConfigMap’s contents.
Why? Because Kubernetes Deployments don’t restart pods when a ConfigMap changes. If you update index.html and reapply, the ConfigMap gets a new name (new hash). The Deployment reference changes too. Kubernetes sees the Deployment spec changed and triggers a rolling update. Pods get the new configuration automatically.
This is one of the most useful features of Kustomize. It turns ConfigMap updates into proper rollouts with zero extra effort.
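To illustrate (the second hash value here is made up, not real output):

```yaml
# Build 1: generated from the original index.html
kind: ConfigMap
metadata:
  name: nginx-config-g4mb7h92kf

# Build 2: after editing index.html, the content hash changes
kind: ConfigMap
metadata:
  name: nginx-config-b5k8t2c9hm
```

Because the Deployment’s volume reference is rewritten to the new name, the pod template changes and a rolling update begins.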
Kustomize also updates all references to the ConfigMap. The Deployment in this demo mounts nginx-config as a volume:
```yaml
volumes:
  - name: config-volume
    configMap:
      name: nginx-config
```

After Kustomize builds the output, this becomes:
```yaml
volumes:
  - name: config-volume
    configMap:
      name: dev-nginx-config-<hash>
```

The name prefix (`dev-`) and the hash suffix are both applied. References track automatically.
The behavior Field
In the overlays, the ConfigMapGenerator uses `behavior: replace`:
```yaml
configMapGenerator:
  - behavior: replace
    files:
      - index.html
    name: nginx-config
```

The `behavior` field controls how overlay ConfigMaps interact with base ConfigMaps of the same name:
| Behavior | Effect |
|---|---|
| `create` | Default. Fails if a ConfigMap with this name already exists in the base. |
| `replace` | Completely replaces the base ConfigMap’s data with the overlay’s data. |
| `merge` | Merges overlay data keys into the base ConfigMap. Existing keys are overwritten, others are preserved. |
In this demo, the development overlay uses `replace` and only lists `index.html`. This replaces the base’s ConfigMap data entirely, swapping in the orange-themed landing page. The `nginx.conf` from the base is dropped because `replace` does a full replacement, not a merge.
If you wanted to keep `nginx.conf` from the base and only swap `index.html`, you would use `behavior: merge` instead:
```yaml
configMapGenerator:
  - behavior: merge
    files:
      - index.html
    name: nginx-config
```

This would keep the base’s `nginx.conf` key and overwrite only the `index.html` key.
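The merged ConfigMap would then contain both keys, roughly like this (data values elided):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dev-nginx-config-<hash>
data:
  nginx.conf: |      # preserved from the base
    ...
  index.html: |      # overwritten by the overlay's file
    ...
```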
Literal Values
ConfigMapGenerator also supports inline values with `literals`:
```yaml
configMapGenerator:
  - name: app-settings
    literals:
      - LOG_LEVEL=debug
      - MAX_CONNECTIONS=100
```

This creates a ConfigMap with two data keys: `LOG_LEVEL` and `MAX_CONNECTIONS`. Useful for simple configuration that does not warrant a file.
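The generated ConfigMap would look roughly like this (hash suffix elided; note that ConfigMap data values are always strings):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings-<hash>
data:
  LOG_LEVEL: debug
  MAX_CONNECTIONS: "100"
```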
Strategic Merge Patches vs JSON Patches
Kustomize supports two patching mechanisms. This demo uses strategic merge patches.
Strategic Merge Patches
A strategic merge patch looks like a partial version of the resource you want to modify. You include only the fields you want to change. Kustomize merges it into the matching resource.
Here is the development deployment patch from this demo:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  template:
    spec:
      containers:
        - name: nginx
          resources:
            requests:
              memory: "32Mi"
              cpu: "25m"
            limits:
              memory: "64Mi"
              cpu: "50m"
          env:
            - name: ENVIRONMENT
              value: "development"
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
      volumes:
        - name: config-volume
          configMap:
            name: nginx-config
```

And the production patch:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  template:
    spec:
      containers:
        - name: nginx
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          env:
            - name: ENVIRONMENT
              value: "production"
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
      volumes:
        - name: config-volume
          configMap:
            name: nginx-config
```

Both patches target the same Deployment (`metadata.name: nginx-app`). Kustomize identifies the target by apiVersion, kind, and name. The merge happens field by field. Only specified fields are overwritten. Everything else from the base is preserved (health checks, ports, etc.).
For lists like `containers`, Kubernetes uses the `name` field as the merge key. The patch says `name: nginx`, so Kustomize finds the container named `nginx` in the base and merges in the new `resources`, `env`, and `volumeMounts`. It does not add a second container.
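A minimal worked example of the merge-key behavior (the `ports` field here is hypothetical, added for illustration):

```yaml
# Base container
containers:
  - name: nginx
    image: nginx:1.25.3-alpine
    ports:
      - containerPort: 80

# Patch: same merge key (name: nginx), so fields merge in place
containers:
  - name: nginx
    resources:
      limits:
        memory: "64Mi"

# Merged result: base fields preserved, patch fields added
containers:
  - name: nginx
    image: nginx:1.25.3-alpine
    ports:
      - containerPort: 80
    resources:
      limits:
        memory: "64Mi"
```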
Key differences between the two patches:
- Dev requests 25m CPU and 32Mi memory. Prod requests 100m CPU and 128Mi memory.
- Dev sets `ENVIRONMENT=development`. Prod sets `ENVIRONMENT=production`.
- Both reference the same `nginx-config` ConfigMap, but after Kustomize processing, dev’s resolves to `dev-nginx-config-<hash>` and prod’s to `prod-nginx-config-<hash>`.
JSON Patches (RFC 6902)
JSON patches use explicit operations (`add`, `remove`, `replace`, `move`, `copy`, `test`) to modify resources. They are more surgical but more verbose.
This demo does not use JSON patches, but here is what the dev resource override would look like as one:
```yaml
patches:
  - target:
      kind: Deployment
      name: nginx-app
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/memory
        value: "64Mi"
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/cpu
        value: "50m"
      - op: replace
        path: /spec/template/spec/containers/0/resources/requests/memory
        value: "32Mi"
      - op: replace
        path: /spec/template/spec/containers/0/resources/requests/cpu
        value: "25m"
      - op: add
        path: /spec/template/spec/containers/0/env
        value:
          - name: ENVIRONMENT
            value: "development"
```

Notice how the JSON patch uses array indices (`/containers/0`) instead of merge keys. This makes it fragile if the container order changes. Strategic merge patches are usually the better choice for Kubernetes resources because they understand the schema.
When to use which:
| Use Case | Patch Type |
|---|---|
| Overriding fields in a Kubernetes resource | Strategic merge patch |
| Removing a field entirely | JSON patch (`op: remove`) |
| Adding items to the middle of a list | JSON patch (`op: add` with index) |
| Patching non-Kubernetes resources (CRDs without schema) | JSON patch |
Name Prefixes and Namespace Scoping
Two simple fields that prevent a lot of pain.
namePrefix
The development overlay sets `namePrefix: dev-`. Production sets `namePrefix: prod-`. Every resource name gets this prefix applied.
Without prefixes, deploying both environments to the same cluster would cause name collisions. Two Deployments named nginx-app, two Services named nginx-service. The second apply would overwrite the first.
With prefixes:
- Dev creates `dev-nginx-app`, `dev-nginx-service`, `dev-nginx-ingress`
- Prod creates `prod-nginx-app`, `prod-nginx-service`, `prod-nginx-ingress`
Kustomize also updates cross-resource references. The Ingress in the dev overlay references nginx-service in its backend:
```yaml
backend:
  service:
    name: nginx-service
    port:
      number: 80
```

After Kustomize processes it, this becomes `dev-nginx-service`. The Service name in the Ingress tracks the Service’s actual name. You don’t do this manually.
There is also `nameSuffix` if you prefer suffixes. And you can combine both.
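For example (hypothetical, not used in this demo):

```yaml
namePrefix: dev-
nameSuffix: -v2
# A Deployment named nginx-app would come out as dev-nginx-app-v2
```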
Namespace Scoping
```yaml
namespace: kustomize-dev
```

This sets `metadata.namespace` on every resource in the output. Dev deploys to `kustomize-dev`. Prod deploys to `kustomize-prod`. Full resource isolation.
Together, name prefixes and namespace scoping give you two independent layers of collision prevention. Even if you accidentally deploy both overlays to the same namespace, the prefixes keep resource names unique. Even if you remove the prefixes, the namespaces keep resources isolated. Belt and suspenders.
Labels vs commonLabels
Section titled “Labels vs commonLabels”This demo uses the labels field. You may also encounter commonLabels in older examples. They are different.
The labels Field (Current Best Practice)
```yaml
labels:
  - pairs:
      app: nginx-app
      version: v1.0.0
    includeSelectors: true
```

This is the modern approach. The `labels` field gives you explicit control:
- `pairs`: The labels to add.
- `includeSelectors`: Whether to also inject these labels into `spec.selector.matchLabels` on Deployments, StatefulSets, and similar workload resources. Also adds them to `spec.template.metadata.labels`.
- `includeTemplates`: Whether to include labels in pod template metadata (defaults to `true` when `includeSelectors` is `true`).
You can define multiple label groups with different settings:
```yaml
labels:
  - pairs:
      app: nginx-app
    includeSelectors: true
  - pairs:
      team: platform
    includeSelectors: false
```

This adds `app: nginx-app` to both metadata labels and selectors, but `team: platform` only to metadata labels. Selectors are immutable after creation, so being selective about what goes into them matters.
commonLabels (Deprecated)
```yaml
commonLabels:
  app: nginx-app
  version: v1.0.0
```

The `commonLabels` field adds labels everywhere: metadata, selectors, template labels. All or nothing. You cannot exclude labels from selectors.
This is a problem. If you add a new label to commonLabels after a Deployment already exists, the selector changes. Kubernetes does not allow updating selectors on existing Deployments. The apply fails.
The labels field with includeSelectors: false avoids this entirely. You can add informational labels without breaking selectors.
Recommendation: Always use `labels` with explicit `includeSelectors` settings. Avoid `commonLabels`.
Image Overrides Without Touching the Deployment
The base Deployment specifies:
```yaml
containers:
  - name: nginx
    image: nginx:1.25.3-alpine
```

The base kustomization also includes:
```yaml
images:
  - name: nginx
    newTag: 1.25.3-alpine
```

This might seem redundant when the tags match. But the power shows when overlays need different images. If production needed a hardened image from an internal registry, the overlay could specify:
```yaml
images:
  - name: nginx
    newName: registry.internal.example.com/nginx
    newTag: 1.25.3-alpine-hardened
```

The images transformer scans all container specs across every resource. It matches on the image name (`nginx`) and replaces it. No patch file needed. No touching the Deployment YAML.
Available fields:
| Field | Purpose | Example |
|---|---|---|
| `name` | The image name to match | `nginx` |
| `newName` | Replace the image name/registry | `my-registry.io/nginx` |
| `newTag` | Replace the tag | `1.25.3-alpine` |
| `digest` | Pin to an exact image digest | `sha256:abc123...` |
You can combine `newName` and `newTag`. Or use `digest` instead of `newTag` for immutable deployments.
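A digest-pinned override would look like this (the digest value is a placeholder, and the registry name is hypothetical):

```yaml
images:
  - name: nginx
    newName: my-registry.io/nginx
    digest: sha256:abc123...   # replace with the actual image digest
```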
This is particularly useful for CI/CD pipelines. Your build step produces a new image tag. Instead of sed-replacing values in YAML, your pipeline runs:
```shell
cd overlays/production
kustomize edit set image nginx=my-registry.io/nginx:build-1234
```

This updates the kustomization.yaml programmatically. Clean and scriptable.
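After running that command, the `images` stanza in the overlay’s kustomization.yaml should end up looking roughly like:

```yaml
images:
  - name: nginx
    newName: my-registry.io/nginx
    newTag: build-1234
```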
Building a New Overlay: Staging
You want a staging environment that sits between dev and prod. Here is how to build it step by step.
Step 1: Create the Directory
```shell
mkdir -p overlays/staging
```

Step 2: Write the kustomization.yaml
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

metadata:
  name: nginx-staging

namespace: kustomize-staging

resources:
  - ../../base
  - ingress.yaml

namePrefix: staging-

configMapGenerator:
  - behavior: replace
    files:
      - index.html
    name: nginx-config

replicas:
  - count: 3
    name: nginx-app

images:
  - name: nginx
    newTag: 1.25.3-alpine

labels:
  - includeSelectors: true
    pairs:
      environment: staging

patches:
  - path: deployment-patch.yaml
```

Key decisions:
- `namespace: kustomize-staging` isolates it from dev and prod.
- `namePrefix: staging-` prevents name collisions.
- `replicas: 3` sits between dev (2) and prod (5).
- `environment: staging` label for easy filtering.
Step 3: Write the Deployment Patch
Create `overlays/staging/deployment-patch.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  template:
    spec:
      containers:
        - name: nginx
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
          env:
            - name: ENVIRONMENT
              value: "staging"
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
      volumes:
        - name: config-volume
          configMap:
            name: nginx-config
```

Resources sit between dev (32Mi/25m) and prod (128Mi/100m).
Step 4: Create the Landing Page
Create `overlays/staging/index.html`:
```html
<!DOCTYPE html>
<html>
<head>
  <title>Minikube Demo - Staging</title>
  <style>
    body { font-family: Arial, sans-serif; margin: 40px; background-color: #fff8e1; }
    .container { text-align: center; }
    .title { color: #f9a825; }
    .env-badge { background: #f9a825; color: white; padding: 5px 15px; border-radius: 20px; display: inline-block; margin: 10px; }
  </style>
</head>
<body>
  <div class="container">
    <h1 class="title">Welcome to Minikube Demo</h1>
    <div class="env-badge">STAGING</div>
    <p>This application is deployed using GitOps with ArgoCD and Kustomize</p>
    <p>Environment: Staging Environment</p>
  </div>
</body>
</html>
```

Yellow theme. Follows the same pattern as dev (orange) and prod (green).
Step 5: Create the Ingress
Create `overlays/staging/ingress.yaml`:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  labels:
    app: nginx-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: nginx-staging.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
```

Step 6: Preview and Deploy
Preview the generated manifests:
```shell
kubectl kustomize overlays/staging/
```

Compare with other environments:
```shell
diff <(kubectl kustomize overlays/staging/) \
     <(kubectl kustomize overlays/production/)
```

Deploy:
```shell
kubectl apply -k overlays/staging/
```

The entire process required zero changes to the base. That is the point.
Kustomize vs Helm
Both tools solve the multi-environment configuration problem. They take very different approaches.
Kustomize: Patching Over a Base
Kustomize starts with plain YAML and layers changes on top. There is no templating language. No `{{ .Values.replicaCount }}`. Your base manifests are valid Kubernetes YAML that you could apply directly with `kubectl apply -f`.
Strengths:
- No new syntax to learn. It is all YAML.
- Base manifests are valid, readable Kubernetes resources.
- Built into `kubectl` (`kubectl apply -k`). No extra tooling.
- Easy to understand what each overlay changes because you see diffs, not template interpolation.
- Works well with GitOps tools like ArgoCD.
Weaknesses:
- No conditionals. You cannot say “if production, add this sidecar.” You must include the sidecar in the patch.
- No loops. If you need 10 similar resources with slight variations, you write them all.
- Complex customizations require many patch files.
- No package distribution story. You cannot publish a “Kustomize chart” to a registry.
Helm: Templating with Values
Helm uses Go templates to generate YAML. A `values.yaml` file provides variables. Templates use `{{ }}` syntax to interpolate those values.
Strengths:
- Conditionals and loops. Dynamic manifest generation.
- Package distribution via Helm charts and registries.
- Large ecosystem of community-maintained charts.
- Lifecycle management (install, upgrade, rollback, uninstall).
- Values files make parameterization explicit.
Weaknesses:
- Templates are not valid YAML. They are Go template files that produce YAML. Harder to read and debug.
- Whitespace and indentation bugs are common.
- Debugging requires `helm template` to see the rendered output.
- Extra tooling required (`helm` CLI).
- Chart complexity can spiral.
When to Use Each
| Scenario | Recommended Tool |
|---|---|
| Internal application with 2-5 environments | Kustomize |
| Simple environment-specific overrides (replicas, resources, namespaces) | Kustomize |
| GitOps workflows with ArgoCD or Flux | Kustomize (native support) |
| Distributing reusable application packages | Helm |
| Complex applications needing conditionals and loops | Helm |
| Installing third-party software (databases, monitoring) | Helm |
| Applications with many configurable parameters | Helm |
Using Both Together
They are not mutually exclusive. A common pattern:
- Use a Helm chart to generate base manifests.
- Use Kustomize overlays on top of the Helm output for environment-specific tweaks.
ArgoCD supports this directly. You can point an ArgoCD Application at a Kustomize overlay that references Helm-generated output.
For the use case in this demo, where you have a straightforward application with a handful of environment differences, Kustomize is the right choice. It keeps things simple, readable, and close to plain Kubernetes YAML.