
Kustomize: Deep Dive

This document explains what Kustomize does, why it exists, and how every moving part in this demo works. It uses real YAML from the 03-kustomize demo throughout.


You have a Kubernetes application. It needs to run in dev, staging, and production. Each environment differs slightly: different replica counts, different resource limits, different hostnames.

The naive approach: copy-paste the entire set of manifests into three directories. Now you have three copies of deployment.yaml, three copies of service.yaml, and so on.

This breaks down fast.

  • A bug fix in the health check path means editing three files.
  • Someone adds a label in dev but forgets production.
  • The manifests drift apart silently. Nobody notices until production breaks.

Kustomize fixes this by letting you define a single set of base manifests and then layer environment-specific changes on top. You never duplicate the base. You only write what differs.


Kustomize organizes files into two concepts:

Base: The shared, canonical manifests. This is your application as it should run in the general case. In this demo:

base/
├── kustomization.yaml    # Ties the base resources together
├── deployment.yaml       # 3 replicas, health checks, resource limits
├── service.yaml          # ClusterIP on port 80
├── index.html            # Default landing page (blue)
└── nginx.conf            # Nginx server configuration

Overlays: Environment-specific layers that modify the base. Each overlay references the base and specifies only what changes:

overlays/
├── development/
│   ├── kustomization.yaml     # 2 replicas, dev- prefix, lower resources
│   ├── deployment-patch.yaml  # Smaller CPU/memory
│   ├── index.html             # Orange-themed page
│   └── ingress.yaml           # nginx-dev.local hostname
└── production/
    ├── kustomization.yaml     # 5 replicas, prod- prefix, higher resources
    ├── deployment-patch.yaml  # Larger CPU/memory
    ├── index.html             # Green-themed page
    └── ingress.yaml           # nginx-prod.local hostname

The base never knows about the overlays. Overlays point to the base via a relative path. This is a one-way dependency. You can add or remove overlays without touching the base at all.

When you run kubectl apply -k overlays/development/, Kustomize reads the base, applies the overlay’s modifications, and produces a final set of manifests. Nothing gets committed or generated on disk. It is purely a build-time transformation.


The kustomization.yaml file is the entry point for Kustomize. Every directory that Kustomize processes must have one. Here is the base:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: nginx-base
resources:
- deployment.yaml
- service.yaml
configMapGenerator:
- name: nginx-config
  files:
  - index.html
  - nginx.conf
labels:
- pairs:
    app: nginx-app
    version: v1.0.0
  includeSelectors: true
images:
- name: nginx
  newTag: 1.25.3-alpine

And here is the development overlay:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: nginx-development
namespace: kustomize-dev
resources:
- ../../base
- ingress.yaml
namePrefix: dev-
configMapGenerator:
- behavior: replace
  files:
  - index.html
  name: nginx-config
replicas:
- count: 2
  name: nginx-app
images:
- name: nginx
  newTag: 1.25.3-alpine
labels:
- includeSelectors: true
  pairs:
    environment: development
patches:
- path: deployment-patch.yaml

Let’s break down each field.

The resources field lists the YAML files (or directories) that form this layer’s input. In the base, it points to local manifest files:

resources:
- deployment.yaml
- service.yaml

In an overlay, it points to the base directory and can add overlay-specific resources:

resources:
- ../../base
- ingress.yaml

The ../../base path tells Kustomize to process the base’s kustomization.yaml first, then layer this overlay’s changes on top. The ingress.yaml is an entirely new resource that only exists in this environment.

The namespace field sets the namespace on all resources in the output:

namespace: kustomize-dev

Every resource gets metadata.namespace: kustomize-dev in the final output. You don’t need to hardcode namespaces in any individual manifest. The base manifests stay namespace-agnostic.

The namePrefix field prepends a string to every resource name:

namePrefix: dev-

A Deployment named nginx-app becomes dev-nginx-app. A Service named nginx-service becomes dev-nginx-service. Kustomize also updates internal name references: if another resource refers to nginx-service by name, that reference is rewritten to dev-nginx-service. More on this in Name Prefixes and Namespace Scoping.

The replicas field overrides the replica count for a named Deployment (or StatefulSet):

replicas:
- count: 2
  name: nginx-app

The base defines replicas: 3. The development overlay drops it to 2. Production raises it to 5. You don’t need a patch file for this. It is a first-class field in the kustomization.
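Production follows the same pattern. Based on the counts quoted in this demo, its entry would look like:

```yaml
# Production overlay (same field, different count -- values from this demo)
replicas:
- count: 5
  name: nginx-app
```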

The images field overrides image references without editing the Deployment YAML:

images:
- name: nginx
  newTag: 1.25.3-alpine

Kustomize scans all containers across all resources. Any container using an image named nginx gets its tag replaced with 1.25.3-alpine. You can also use newName to change the registry or repository entirely. See Image Overrides for details.

The labels field adds labels to all resources:

labels:
- pairs:
    app: nginx-app
    version: v1.0.0
  includeSelectors: true

The pairs map defines label key-value pairs. When includeSelectors is true, these labels also get injected into spec.selector.matchLabels and spec.template.metadata.labels on Deployments. This keeps selectors consistent with metadata labels automatically.
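Concretely, with includeSelectors: true the built Deployment carries the labels in all three places. A sketch of the relevant fields in the output (everything else elided):

```yaml
# Sketch of the built Deployment's label-related fields
metadata:
  labels:
    app: nginx-app
    version: v1.0.0
spec:
  selector:
    matchLabels:
      app: nginx-app
      version: v1.0.0
  template:
    metadata:
      labels:
        app: nginx-app
        version: v1.0.0
```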

The configMapGenerator field generates ConfigMaps from files or literal values:

configMapGenerator:
- name: nginx-config
  files:
  - index.html
  - nginx.conf

This is covered in depth in the next section.

The patches field applies modifications to existing resources:

patches:
- path: deployment-patch.yaml

Points to a patch file that gets merged into matching resources. Covered in Strategic Merge Patches.


Instead of writing a ConfigMap YAML by hand, Kustomize generates one for you. The base defines:

configMapGenerator:
- name: nginx-config
  files:
  - index.html
  - nginx.conf

This reads index.html and nginx.conf from the same directory and creates a ConfigMap with two data keys. The equivalent hand-written ConfigMap would look roughly like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-<hash>
data:
  index.html: |
    <!DOCTYPE html>
    ...
  nginx.conf: |
    server {
      listen 80;
      ...
    }

Notice the <hash> in the name. Kustomize appends a deterministic hash suffix to every generated ConfigMap name, like nginx-config-g4mb7h92kf. This hash is computed from the ConfigMap’s contents.

Why? Because Kubernetes Deployments don’t restart pods when a ConfigMap changes. If you update index.html and reapply, the ConfigMap gets a new name (new hash). The Deployment reference changes too. Kubernetes sees the Deployment spec changed and triggers a rolling update. Pods get the new configuration automatically.

This is one of the most useful features of Kustomize. It turns ConfigMap updates into proper rollouts with zero extra effort.

Kustomize also updates all references to the ConfigMap. The Deployment in this demo mounts nginx-config as a volume:

volumes:
- name: config-volume
  configMap:
    name: nginx-config

After Kustomize builds the output, this becomes:

volumes:
- name: config-volume
  configMap:
    name: dev-nginx-config-<hash>

The name prefix (dev-) and the hash suffix are both applied. References track automatically.

In the overlays, the ConfigMapGenerator uses behavior: replace:

configMapGenerator:
- behavior: replace
  files:
  - index.html
  name: nginx-config

The behavior field controls how overlay ConfigMaps interact with base ConfigMaps of the same name:

Behavior   Effect
create     Default. Fails if a ConfigMap with this name already exists in the base.
replace    Completely replaces the base ConfigMap’s data with the overlay’s data.
merge      Merges overlay data keys into the base ConfigMap. Existing keys are overwritten, others are preserved.

In this demo, the development overlay uses replace and only lists index.html. This replaces the base’s ConfigMap data entirely, swapping in the orange-themed landing page. The nginx.conf from the base is dropped because replace does a full replacement, not a merge.

If you wanted to keep nginx.conf from the base and only swap index.html, you would use behavior: merge instead:

configMapGenerator:
- behavior: merge
  files:
  - index.html
  name: nginx-config

This would keep the base’s nginx.conf key and overwrite only the index.html key.
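The difference is easiest to see in the generated ConfigMap’s data keys. A sketch (file contents elided):

```yaml
# behavior: replace (this demo) -- only the overlay's files survive
data:
  index.html: "...orange-themed page from the overlay..."

# behavior: merge -- base keys are kept, overlapping keys overwritten
data:
  index.html: "...orange-themed page from the overlay..."
  nginx.conf: "...server configuration from the base..."
```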

ConfigMapGenerator also supports inline values with literals:

configMapGenerator:
- name: app-settings
  literals:
  - LOG_LEVEL=debug
  - MAX_CONNECTIONS=100

This creates a ConfigMap with two data keys: LOG_LEVEL and MAX_CONNECTIONS. Useful for simple configuration that does not warrant a file.
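The generated ConfigMap would look roughly like this (hash suffix abbreviated). Note that all literal values land in data as strings:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings-<hash>
data:
  LOG_LEVEL: debug
  MAX_CONNECTIONS: "100"
```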


Kustomize supports two patching mechanisms. This demo uses strategic merge patches.

A strategic merge patch looks like a partial version of the resource you want to modify. You include only the fields you want to change. Kustomize merges it into the matching resource.

Here is the development deployment patch from this demo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  template:
    spec:
      containers:
      - name: nginx
        resources:
          requests:
            memory: "32Mi"
            cpu: "25m"
          limits:
            memory: "64Mi"
            cpu: "50m"
        env:
        - name: ENVIRONMENT
          value: "development"
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/nginx/html/index.html
          subPath: index.html
      volumes:
      - name: config-volume
        configMap:
          name: nginx-config

And the production patch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  template:
    spec:
      containers:
      - name: nginx
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        env:
        - name: ENVIRONMENT
          value: "production"
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/nginx/html/index.html
          subPath: index.html
      volumes:
      - name: config-volume
        configMap:
          name: nginx-config

Both patches target the same Deployment (metadata.name: nginx-app). Kustomize identifies the target by apiVersion, kind, and name. The merge happens field by field. Only specified fields are overwritten. Everything else from the base is preserved (health checks, ports, etc.).

For lists like containers, Kubernetes uses the name field as the merge key. The patch says name: nginx, so Kustomize finds the container named nginx in the base and merges in the new resources, env, and volumeMounts. It does not add a second container.
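A sketch of the merged dev container, combining the patch with values quoted elsewhere in this document (base fields not shown in the text are marked as carried over):

```yaml
# Merged result for the dev container (sketch, not verbatim build output)
containers:
- name: nginx                       # merge key: matched against the base container
  image: nginx:1.25.3-alpine        # from the base / images transformer
  resources:                        # from the patch -- overwrites the base values
    requests:
      memory: "32Mi"
      cpu: "25m"
    limits:
      memory: "64Mi"
      cpu: "50m"
  env:                              # added by the patch
  - name: ENVIRONMENT
    value: "development"
  # ports, livenessProbe, readinessProbe: preserved from the base unchanged
```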

Key differences between the two patches:

  • Dev requests 25m CPU and 32Mi memory. Prod requests 100m CPU and 128Mi memory.
  • Dev sets ENVIRONMENT=development. Prod sets ENVIRONMENT=production.
  • Both reference the same nginx-config ConfigMap, but after Kustomize processing, dev’s resolves to dev-nginx-config-<hash> and prod’s to prod-nginx-config-<hash>.

JSON patches use explicit operations (add, remove, replace, move, copy, test) to modify resources. They are more surgical but more verbose.

This demo does not use JSON patches, but here is what the dev resource override would look like as one:

patches:
- target:
    kind: Deployment
    name: nginx-app
  patch: |-
    - op: replace
      path: /spec/template/spec/containers/0/resources/limits/memory
      value: "64Mi"
    - op: replace
      path: /spec/template/spec/containers/0/resources/limits/cpu
      value: "50m"
    - op: replace
      path: /spec/template/spec/containers/0/resources/requests/memory
      value: "32Mi"
    - op: replace
      path: /spec/template/spec/containers/0/resources/requests/cpu
      value: "25m"
    - op: add
      path: /spec/template/spec/containers/0/env
      value:
      - name: ENVIRONMENT
        value: "development"

Notice how the JSON patch uses array indices (/containers/0) instead of merge keys. This makes it fragile if the container order changes. Strategic merge patches are usually the better choice for Kubernetes resources because they understand the schema.

When to use which:

Use Case                                                 Patch Type
Overriding fields in a Kubernetes resource               Strategic merge patch
Removing a field entirely                                JSON patch (op: remove)
Adding items to the middle of a list                     JSON patch (op: add with index)
Patching non-Kubernetes resources (CRDs without schema)  JSON patch
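For instance, removing a field entirely is the classic JSON patch use case. A hypothetical example against this demo’s Deployment:

```yaml
# Hypothetical: drop the resource limits from the nginx container entirely
patches:
- target:
    kind: Deployment
    name: nginx-app
  patch: |-
    - op: remove
      path: /spec/template/spec/containers/0/resources/limits
```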

Two simple fields that prevent a lot of pain.

The development overlay sets namePrefix: dev-. Production sets namePrefix: prod-. Every resource name gets this prefix applied.

Without prefixes, deploying both environments to the same cluster would cause name collisions. Two Deployments named nginx-app, two Services named nginx-service. The second apply would overwrite the first.

With prefixes:

  • Dev creates dev-nginx-app, dev-nginx-service, dev-nginx-ingress
  • Prod creates prod-nginx-app, prod-nginx-service, prod-nginx-ingress

Kustomize also updates cross-resource references. The Ingress in the dev overlay references nginx-service in its backend:

backend:
  service:
    name: nginx-service
    port:
      number: 80

After Kustomize processes it, this becomes dev-nginx-service. The Service name in the Ingress tracks the Service’s actual name. You don’t do this manually.

There is also nameSuffix if you prefer suffixes. And you can combine both.
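A hypothetical combination, for illustration:

```yaml
# Hypothetical: prefix and suffix together
namePrefix: dev-
nameSuffix: -v2
# nginx-app becomes dev-nginx-app-v2; name references update accordingly
```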

namespace: kustomize-dev

This sets metadata.namespace on every resource in the output. Dev deploys to kustomize-dev. Prod deploys to kustomize-prod. Full resource isolation.

Together, name prefixes and namespace scoping give you two independent layers of collision prevention. Even if you accidentally deploy both overlays to the same namespace, the prefixes keep resource names unique. Even if you remove the prefixes, the namespaces keep resources isolated. Belt and suspenders.


This demo uses the labels field. You may also encounter commonLabels in older examples. They are different.

labels:
- pairs:
    app: nginx-app
    version: v1.0.0
  includeSelectors: true

This is the modern approach. The labels field gives you explicit control:

  • pairs: The labels to add.
  • includeSelectors: Whether to also inject these labels into spec.selector.matchLabels on Deployments, StatefulSets, and similar workload resources. Also adds them to spec.template.metadata.labels.
  • includeTemplates: Whether to include labels in pod template metadata (defaults to true when includeSelectors is true).

You can define multiple label groups with different settings:

labels:
- pairs:
    app: nginx-app
  includeSelectors: true
- pairs:
    team: platform
  includeSelectors: false

This adds app: nginx-app to both metadata labels and selectors, but team: platform only to metadata labels. Selectors are immutable after creation, so being selective about what goes into them matters.

commonLabels:
  app: nginx-app
  version: v1.0.0

The commonLabels field adds labels everywhere: metadata, selectors, template labels. All or nothing. You cannot exclude labels from selectors.

This is a problem. If you add a new label to commonLabels after a Deployment already exists, the selector changes. Kubernetes does not allow updating selectors on existing Deployments. The apply fails.

The labels field with includeSelectors: false avoids this entirely. You can add informational labels without breaking selectors.

Recommendation: Always use labels with explicit includeSelectors settings. Avoid commonLabels.


Image Overrides Without Touching the Deployment


The base Deployment specifies:

containers:
- name: nginx
  image: nginx:1.25.3-alpine

The base kustomization also includes:

images:
- name: nginx
  newTag: 1.25.3-alpine

This might seem redundant when the tags match. But the power shows when overlays need different images. If production needed a hardened image from an internal registry, the overlay could specify:

images:
- name: nginx
  newName: registry.internal.example.com/nginx
  newTag: 1.25.3-alpine-hardened

The images transformer scans all container specs across every resource. It matches on the image name (nginx) and replaces it. No patch file needed. No touching the Deployment YAML.

Available fields:

Field    Purpose                          Example
name     The image name to match          nginx
newName  Replace the image name/registry  my-registry.io/nginx
newTag   Replace the tag                  1.25.3-alpine
digest   Pin to an exact image digest     sha256:abc123...

You can combine newName and newTag. Or use digest instead of newTag for immutable deployments.
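A digest pin, sketched with the placeholder digest from the table above (a real digest is 64 hex characters):

```yaml
images:
- name: nginx
  newName: registry.internal.example.com/nginx
  digest: sha256:abc123...   # placeholder; a digest pin replaces the tag entirely
```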

This is particularly useful for CI/CD pipelines. Your build step produces a new image tag. Instead of sed-replacing values in YAML, your pipeline runs:

cd overlays/production
kustomize edit set image nginx=my-registry.io/nginx:build-1234

This updates the kustomization.yaml programmatically. Clean and scriptable.
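After that command, the overlay’s kustomization.yaml would contain an images entry along these lines:

```yaml
# Written by `kustomize edit set image` (sketch)
images:
- name: nginx
  newName: my-registry.io/nginx
  newTag: build-1234
```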


You want a staging environment that sits between dev and prod. Here is how to build it step by step.

mkdir -p overlays/staging

Create overlays/staging/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: nginx-staging
namespace: kustomize-staging
resources:
- ../../base
- ingress.yaml
namePrefix: staging-
configMapGenerator:
- behavior: replace
  files:
  - index.html
  name: nginx-config
replicas:
- count: 3
  name: nginx-app
images:
- name: nginx
  newTag: 1.25.3-alpine
labels:
- includeSelectors: true
  pairs:
    environment: staging
patches:
- path: deployment-patch.yaml

Key decisions:

  • namespace: kustomize-staging isolates it from dev and prod.
  • namePrefix: staging- prevents name collisions.
  • replicas: 3 sits between dev (2) and prod (5).
  • environment: staging label for easy filtering.

Create overlays/staging/deployment-patch.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  template:
    spec:
      containers:
      - name: nginx
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
        env:
        - name: ENVIRONMENT
          value: "staging"
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/nginx/html/index.html
          subPath: index.html
      volumes:
      - name: config-volume
        configMap:
          name: nginx-config

Resources sit between dev (32Mi/25m) and prod (128Mi/100m).

Create overlays/staging/index.html:

<!DOCTYPE html>
<html>
<head>
  <title>Minikube Demo - Staging</title>
  <style>
    body {
      font-family: Arial, sans-serif;
      margin: 40px;
      background-color: #fff8e1;
    }
    .container { text-align: center; }
    .title { color: #f9a825; }
    .env-badge {
      background: #f9a825;
      color: white;
      padding: 5px 15px;
      border-radius: 20px;
      display: inline-block;
      margin: 10px;
    }
  </style>
</head>
<body>
  <div class="container">
    <h1 class="title">Welcome to Minikube Demo</h1>
    <div class="env-badge">STAGING</div>
    <p>This application is deployed using GitOps with ArgoCD and Kustomize</p>
    <p>Environment: Staging Environment</p>
  </div>
</body>
</html>

Yellow theme. Follows the same pattern as dev (orange) and prod (green).

Create overlays/staging/ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  labels:
    app: nginx-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: nginx-staging.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80

Preview the generated manifests:

kubectl kustomize overlays/staging/

Compare with other environments:

diff <(kubectl kustomize overlays/staging/) \
<(kubectl kustomize overlays/production/)

Deploy:

kubectl apply -k overlays/staging/

The entire process required zero changes to the base. That is the point.


Both tools solve the multi-environment configuration problem. They take very different approaches.

Kustomize starts with plain YAML and layers changes on top. There is no templating language. No {{ .Values.replicaCount }}. Your base manifests are valid Kubernetes YAML that you could apply directly with kubectl apply -f.

Strengths:

  • No new syntax to learn. It is all YAML.
  • Base manifests are valid, readable Kubernetes resources.
  • Built into kubectl (kubectl apply -k). No extra tooling.
  • Easy to understand what each overlay changes because you see diffs, not template interpolation.
  • Works well with GitOps tools like ArgoCD.

Weaknesses:

  • No conditionals. You cannot say “if production, add this sidecar.” You must include the sidecar in the patch.
  • No loops. If you need 10 similar resources with slight variations, you write them all.
  • Complex customizations require many patch files.
  • No package distribution story. You cannot publish a “Kustomize chart” to a registry.

Helm uses Go templates to generate YAML. A values.yaml file provides variables. Templates use {{ }} syntax to interpolate those values.

Strengths:

  • Conditionals and loops. Dynamic manifest generation.
  • Package distribution via Helm charts and registries.
  • Large ecosystem of community-maintained charts.
  • Lifecycle management (install, upgrade, rollback, uninstall).
  • Values files make parameterization explicit.

Weaknesses:

  • Templates are not valid YAML. They are Go template files that produce YAML. Harder to read and debug.
  • Whitespace and indentation bugs are common.
  • Debugging requires helm template to see the rendered output.
  • Extra tooling required (helm CLI).
  • Chart complexity can spiral.

Scenario                                                                 Recommended Tool
Internal application with 2-5 environments                               Kustomize
Simple environment-specific overrides (replicas, resources, namespaces)  Kustomize
GitOps workflows with ArgoCD or Flux                                     Kustomize (native support)
Distributing reusable application packages                               Helm
Complex applications needing conditionals and loops                      Helm
Installing third-party software (databases, monitoring)                  Helm
Applications with many configurable parameters                           Helm

They are not mutually exclusive. A common pattern:

  1. Use a Helm chart to generate base manifests.
  2. Use Kustomize overlays on top of the Helm output for environment-specific tweaks.

ArgoCD supports this directly. You can point an ArgoCD Application at a Kustomize overlay that references Helm-generated output.
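Kustomize itself can also inflate a chart during the build via its helmCharts field (requires running kustomize build with --enable-helm). A sketch with a hypothetical chart name, repo, and values file:

```yaml
# Hypothetical overlay: inflate a Helm chart, then apply Kustomize transforms on top
helmCharts:
- name: my-app
  repo: https://charts.example.com
  version: 1.2.3
  releaseName: my-app
  valuesFile: values-production.yaml
namePrefix: prod-
```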

For the use case in this demo, where you have a straightforward application with a handful of environment differences, Kustomize is the right choice. It keeps things simple, readable, and close to plain Kubernetes YAML.