
Reloader: Deep Dive

This document explains how Stakater Reloader works internally, the ConfigMap and Secret update problem it solves, annotation modes, and how it compares to alternatives. It covers production considerations, common pitfalls, and when to use each reload strategy.

Kubernetes has a fundamental gap in its configuration management design. When you update a ConfigMap or Secret that a pod uses, the pod does not automatically restart to pick up the changes. The behavior depends on how the configuration is injected.

Kubernetes provides two ways to inject ConfigMap and Secret data into containers:

  1. Environment variables (via env or envFrom)
  2. Volume mounts

Each method handles updates differently:

Environment variables are set at container start and never updated. If you change a ConfigMap key that is injected as an environment variable, the running pod keeps the old value. The pod must be restarted to see the new value.

# From manifests/deployment-auto.yaml
containers:
- name: nginx
  env:
  - name: APP_MESSAGE
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: APP_MESSAGE

After the pod starts, APP_MESSAGE is frozen. Updating app-config does not change the environment variable in the running container.

Volume mounts behave differently. When you mount a ConfigMap or Secret as a volume, the kubelet watches for changes and updates the mounted files. The propagation delay is typically 30 to 90 seconds, depending on the kubelet sync period.

But there is a critical limitation: the kubelet only updates volume-mounted files for whole-directory mounts (configMap, secret, downwardAPI, and projected volumes). If you use subPath to mount individual files, the kubelet copies the file at pod start and never updates it.

# From manifests/deployment-auto.yaml
volumeMounts:
- name: html
  mountPath: /usr/share/nginx/html
  readOnly: true
volumes:
- name: html
  configMap:
    name: app-config
    items:
    - key: index.html
      path: index.html

This is a whole-directory configMap volume with item selection. The files update automatically when the ConfigMap changes. But even though the files update, the application must reload its configuration to use the new values. An nginx pod serving static HTML from a ConfigMap volume will continue serving the old cached content until the pod restarts or nginx reloads its configuration.

Before automated reload tools existed, teams handled configuration updates manually:

  1. Manual rolling restarts: Run kubectl rollout restart deployment/myapp after updating ConfigMaps. This works but requires manual intervention and is error-prone in CI/CD pipelines.

  2. Hash suffix strategy: Use Kustomize’s configMapGenerator or Helm hooks to append a hash of the ConfigMap content to its name. When the content changes, the ConfigMap gets a new name, and the deployment references the new name, triggering a rolling update. This works but creates ConfigMap clutter (old versions are never deleted unless you use a cleanup job).

  3. Immutable ConfigMaps with versioned names: Create a new ConfigMap for every config change (for example, app-config-v2, app-config-v3) and update the deployment to reference the new name. Same clutter problem as hash suffixes.

  4. Custom scripts: Write CI/CD pipeline steps that patch the deployment’s pod template annotations to force a restart. This works but scatters logic across deployment pipelines.

Reloader automates option 4. It watches ConfigMaps and Secrets, detects changes, and patches the deployment to trigger a rolling restart.

Stakater Reloader is a Kubernetes controller running as a deployment in the cluster. It watches for changes to ConfigMaps, Secrets, Deployments, StatefulSets, DaemonSets, and other workload resources.

Reloader uses the Kubernetes informer pattern to watch resources. When you install Reloader, it starts watch streams against the API server for:

  • ConfigMaps (all namespaces or namespace-scoped, depending on RBAC)
  • Secrets (all namespaces or namespace-scoped)
  • Deployments, StatefulSets, DaemonSets, Rollouts (Argo Rollouts)

The informer caches the current state and receives update events when resources change. This is efficient and creates minimal API server load.

When a ConfigMap or Secret is updated, Reloader receives an event. It calculates the hash of the ConfigMap or Secret data and compares it to the previously cached hash. If the hash differs, it identifies which workloads reference this ConfigMap or Secret.

The reference detection works by:

  1. Reading the workload’s pod template spec.
  2. Checking all env, envFrom, and volumes for references to ConfigMaps and Secrets.
  3. Matching the referenced names against the changed resource.

If a match is found, Reloader checks the workload’s annotations to see if it should trigger a reload.
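The three steps above can be sketched in Python. This is a simplified illustration of the detection logic, not Reloader's actual Go implementation, and it only covers ConfigMap references:

```python
def referenced_configmaps(pod_spec):
    """Collect ConfigMap names referenced via env, envFrom, and volumes."""
    names = set()
    for container in pod_spec.get("containers", []):
        for env in container.get("env", []):
            ref = env.get("valueFrom", {}).get("configMapKeyRef")
            if ref:
                names.add(ref["name"])
        for env_from in container.get("envFrom", []):
            ref = env_from.get("configMapRef")
            if ref:
                names.add(ref["name"])
    for volume in pod_spec.get("volumes", []):
        if "configMap" in volume:
            names.add(volume["configMap"]["name"])
    return names

# A pod spec shaped like the demo deployment above.
pod_spec = {
    "containers": [{
        "name": "nginx",
        "env": [{
            "name": "APP_MESSAGE",
            "valueFrom": {"configMapKeyRef": {"name": "app-config",
                                              "key": "APP_MESSAGE"}},
        }],
    }],
    "volumes": [{"name": "html", "configMap": {"name": "app-config"}}],
}
print(referenced_configmaps(pod_spec))  # {'app-config'}
```

A changed ConfigMap triggers a reload check only if its name appears in this set.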

Kubernetes watches the pod template spec for changes. Any change to the pod template triggers a rolling update. Reloader exploits this by patching an annotation on the pod template:

spec:
  template:
    metadata:
      annotations:
        reloader.stakater.com/last-reloaded-from: "app-config-abc123"

The annotation value includes a hash or timestamp. When the ConfigMap changes, Reloader updates this annotation with a new value. Kubernetes detects the pod template change and starts a rolling update.
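The trigger mechanism can be sketched as follows. The annotation key is the one shown above; the hashing details are illustrative, not Reloader's exact scheme:

```python
import hashlib

def config_hash(data):
    """Deterministic hash of a ConfigMap's data, independent of key order."""
    canonical = "\n".join(f"{k}={v}" for k, v in sorted(data.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

def patch_annotation(pod_template, cm_name, data):
    """Patching the pod template annotation is what makes Kubernetes roll pods."""
    meta = pod_template.setdefault("metadata", {})
    anns = meta.setdefault("annotations", {})
    anns["reloader.stakater.com/last-reloaded-from"] = f"{cm_name}-{config_hash(data)}"

template = {}
patch_annotation(template, "app-config", {"APP_MESSAGE": "Hello from v1"})
v1 = template["metadata"]["annotations"]["reloader.stakater.com/last-reloaded-from"]
patch_annotation(template, "app-config", {"APP_MESSAGE": "Hello from v2"})
v2 = template["metadata"]["annotations"]["reloader.stakater.com/last-reloaded-from"]
print(v1 != v2)  # True: the pod template changed, so a rolling update starts
```

Because the annotation value is derived from the data, an unchanged ConfigMap produces an unchanged pod template and no restart.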

The rolling update follows the deployment’s strategy:

spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1

Pods are replaced one by one. The new pods pick up the updated ConfigMap data (either as environment variables set at start or as volume-mounted files that have already been updated by the kubelet).

Reloader supports multiple annotation modes, giving you fine-grained control over what triggers a restart.

The simplest mode. Add a single annotation to the workload and Reloader watches all ConfigMaps and Secrets referenced in the pod spec.

# From manifests/deployment-auto.yaml
metadata:
  name: web-auto
  namespace: reloader-demo
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  template:
    spec:
      containers:
      - name: nginx
        env:
        - name: APP_MESSAGE
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_MESSAGE
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_PASSWORD
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
      volumes:
      - name: html
        configMap:
          name: app-config

This deployment references:

  • ConfigMap app-config (via env and volumeMounts)
  • Secret app-secret (via env)

Reloader detects both references. If either app-config or app-secret changes, the deployment restarts.

This mode is convenient but has a downside: updating any referenced ConfigMap or Secret triggers a restart, even if the change is irrelevant to the application. For example, if you add a new key to app-config that the application does not use, the deployment still restarts.

Specific Resource Mode (Fine-Grained Control)


If you want to reload only when specific ConfigMaps or Secrets change, use the resource-specific annotations:

# From manifests/deployment-specific.yaml
metadata:
  name: web-specific
  namespace: reloader-demo
  annotations:
    configmap.reloader.stakater.com/reload: "app-config"
spec:
  template:
    spec:
      containers:
      - name: nginx
        env:
        - name: APP_MESSAGE
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_MESSAGE

This deployment only restarts when app-config changes. It ignores changes to any Secrets, even if the pod references them.

You can watch multiple resources by separating them with commas:

annotations:
  configmap.reloader.stakater.com/reload: "app-config,feature-flags"
  secret.reloader.stakater.com/reload: "db-credentials,api-keys"

Now the deployment restarts if any of the four resources change.

This mode gives you control but requires more annotation maintenance. If you add a new ConfigMap reference to the pod spec, you must also update the annotation.

No Annotation (Default Kubernetes Behavior)


If a deployment has no Reloader annotations, it behaves like standard Kubernetes. Updates to ConfigMaps and Secrets do not trigger restarts.

# From manifests/deployment-ignored.yaml
metadata:
  name: web-ignored
  namespace: reloader-demo
spec:
  template:
    spec:
      containers:
      - name: nginx
        env:
        - name: APP_MESSAGE
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_MESSAGE

This deployment is in the same namespace as the others and references the same ConfigMap. But without the Reloader annotation, updating app-config does not restart it. The pod keeps running with the old environment variable values until you manually restart it.

This is useful for:

  • Workloads that should not automatically restart (for example, long-running batch jobs).
  • Testing configuration changes in a canary pod before rolling them out to the entire deployment.
  • ConfigMaps that hold data unrelated to the application runtime (for example, documentation, initialization scripts).

Search mode is less common but powerful. It allows a deployment to watch ConfigMaps or Secrets based on labels instead of names.

Add reloader.stakater.com/search: "true" to the deployment:

metadata:
  annotations:
    reloader.stakater.com/search: "true"

Then label ConfigMaps or Secrets with reloader.stakater.com/match: "true":

apiVersion: v1
kind: ConfigMap
metadata:
  name: dynamic-config
  labels:
    reloader.stakater.com/match: "true"
data:
  KEY: "value"

Now the deployment watches all ConfigMaps with the match: "true" label, regardless of whether the pod spec references them. This is useful for scenarios where ConfigMaps are created dynamically by operators or external systems, and you want deployments to reload when any matching ConfigMap appears or changes.

You can also use label selectors for more complex matching:

annotations:
  reloader.stakater.com/search: "true"
  reloader.stakater.com/match-labels: "app=myapp,env=prod"

This watches ConfigMaps and Secrets that have both app=myapp and env=prod labels.
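The selector behaves like a standard Kubernetes equality-based label selector: every listed key-value pair must be present on the object. A sketch of that matching logic (illustrative, not Reloader's source):

```python
def parse_selector(selector):
    """Parse a "k1=v1,k2=v2" selector string into a dict."""
    return dict(pair.split("=", 1) for pair in selector.split(","))

def matches(selector, labels):
    """True when every selector pair is present in the object's labels."""
    required = parse_selector(selector)
    return all(labels.get(k) == v for k, v in required.items())

print(matches("app=myapp,env=prod", {"app": "myapp", "env": "prod", "team": "x"}))  # True
print(matches("app=myapp,env=prod", {"app": "myapp", "env": "staging"}))            # False
```

Extra labels on the ConfigMap do not matter; only the listed pairs are checked.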

Sometimes you want to prevent a specific ConfigMap or Secret from triggering reloads, even if it is referenced in pod specs. The ignore annotation is placed on the ConfigMap or Secret itself, not on the workload:

apiVersion: v1
kind: ConfigMap
metadata:
  name: static-config
  annotations:
    reloader.stakater.com/ignore: "true"
data:
  STATIC_KEY: "never-changes"

No workload will restart when static-config is updated, even if it has the reloader.stakater.com/auto: "true" annotation.

This is useful for ConfigMaps that change frequently for reasons unrelated to application runtime (for example, logs, metrics config, feature flags that are read at request time).

The demo’s ConfigMap holds both simple key-value pairs and multi-line file content:

# From manifests/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: reloader-demo
data:
  APP_MESSAGE: "Hello from v1"
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>Reloader Demo</title>
      <style>
        body { font-family: Arial; text-align: center; padding: 50px; background-color: #3498db; color: white; }
        h1 { font-size: 3em; }
      </style>
    </head>
    <body>
      <h1>Hello from v1</h1>
      <p>This page is served from a ConfigMap.</p>
      <p>Update the ConfigMap to see Reloader in action!</p>
    </body>
    </html>

Both keys are monitored. When you patch the ConfigMap to update APP_MESSAGE, Reloader detects the change and triggers a restart. The same happens if you update index.html.

The Secret uses stringData for plain text input:

# From manifests/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
  namespace: reloader-demo
type: Opaque
stringData:
  DB_PASSWORD: "super-secret-v1"
  API_KEY: "abc123xyz"

When you patch the Secret, the API server base64-encodes the new values and stores them in the data field. Reloader watches the data field and detects the hash change.
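You can reproduce the server-side encoding yourself to see exactly what ends up in the data field that Reloader hashes:

```python
import base64

# The same values as manifests/secret.yaml.
string_data = {"DB_PASSWORD": "super-secret-v1", "API_KEY": "abc123xyz"}

# The API server merges stringData into data as base64-encoded strings.
data = {k: base64.b64encode(v.encode()).decode() for k, v in string_data.items()}
print(data["DB_PASSWORD"])  # c3VwZXItc2VjcmV0LXYx
```

Any change to the plaintext produces different base64, and therefore a different hash.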

Reloader is not the only way to handle ConfigMap updates. Each approach has different trade-offs.

Pros:

  • Simple to use (add an annotation).
  • Works with any workload type (Deployment, StatefulSet, DaemonSet, Argo Rollouts).
  • No changes to manifest structure required.
  • Centralized control (one controller for the entire cluster).

Cons:

  • Adds a cluster-level dependency (if Reloader is down, reloads do not happen).
  • Requires RBAC permissions to watch and patch workloads.
  • Annotation syntax can be error-prone (typos are silently ignored).
  • Reloads all pods even if only one needs the new config.

Wave is similar to Reloader but uses a hash annotation directly in the pod template. Instead of watching for ConfigMap changes, Wave calculates a hash of all referenced ConfigMaps and Secrets and injects it as an annotation at deployment time.

spec:
  template:
    metadata:
      annotations:
        wave.pusher.com/update-on-config-change: "true"

Wave runs as a mutating admission webhook. When you create or update a deployment, Wave calculates the hash of all referenced ConfigMaps and Secrets and adds an annotation to the pod template. If you later update a ConfigMap, you must also update the deployment (even a no-op change like adding a label) to trigger the webhook and recalculate the hash.

Pros:

  • No background controller watching resources (lower cluster load).
  • Works with GitOps workflows (the hash is stored in the manifest).

Cons:

  • Requires updating the deployment manifest to trigger the hash recalculation.
  • Admission webhook adds deployment latency.
  • Less intuitive workflow (why do I need to touch the deployment to update a ConfigMap?).

Kustomize can generate ConfigMaps with a hash suffix based on their content:

configMapGenerator:
- name: app-config
  files:
  - config.yaml

Kustomize generates a ConfigMap named app-config-<hash>. When config.yaml changes, the hash changes, and the deployment references a new ConfigMap name.
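The effect can be sketched like this. Note this is an illustration of the pattern only; Kustomize computes its real hash differently, over the full generated ConfigMap object:

```python
import hashlib

def suffixed_name(name, content):
    """Append a short content hash so changed content yields a new name."""
    digest = hashlib.sha256(content.encode()).hexdigest()[:10]
    return f"{name}-{digest}"

v1 = suffixed_name("app-config", "greeting: Hello from v1")
v2 = suffixed_name("app-config", "greeting: Hello from v2")
print(v1 != v2)  # True: the Deployment now references a brand-new ConfigMap name
```

Because the Deployment's pod spec references the new name, the name change itself is a pod template change, which triggers the rolling update without any controller.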

Pros:

  • No runtime controller or webhook required.
  • Works with any Kubernetes cluster (no additional components).
  • ConfigMap content is immutable (good for auditing).

Cons:

  • Old ConfigMaps are never deleted (cluster fills with orphaned ConfigMaps unless you run a cleanup job).
  • Requires Kustomize in your deployment pipeline.
  • Does not work well with Secrets (hash suffix leaks information about Secret content).

The simplest option. Update the ConfigMap, then run:

kubectl rollout restart deployment/myapp

Pros:

  • No additional tools or controllers.
  • Explicit control over when restarts happen.
  • Works with any Kubernetes version.

Cons:

  • Requires manual intervention (does not work for automated CD pipelines).
  • Easy to forget (deploy a config change, forget to restart, wonder why nothing changed).

Use Reloader when:

  • You deploy frequently and need automated config updates.
  • You use environment variables for configuration (they never auto-update).
  • Your CD pipeline lacks good integration with kubectl (for example, ArgoCD without sync hooks).
  • You have many microservices sharing ConfigMaps (avoid N manual restarts).

Skip Reloader when:

  • You use Kustomize and prefer the hash suffix pattern.
  • You want explicit control over restarts (for example, blue-green deployments).
  • You cannot run cluster-level controllers (restricted environments).
  • Your ConfigMaps rarely change (manual restarts are fine).

By default, Reloader installs with cluster-wide RBAC permissions. It can watch and patch resources in all namespaces. In multi-tenant clusters or restricted environments, you may want to scope Reloader to specific namespaces.

Install Reloader with namespace-scoped RBAC:

helm install reloader stakater/reloader \
  -n reloader \
  --set watchGlobally=false \
  --set namespaceSelector="team=platform"

This limits Reloader to namespaces with the label team=platform. It ignores ConfigMaps and workloads in other namespaces.

You can also deploy multiple Reloader instances, each scoped to a different namespace or set of namespaces. This is useful in environments where different teams own different namespaces and want independent control over reload behavior.

Reloader triggers rolling restarts. If you update a ConfigMap referenced by 50 deployments, all 50 will restart simultaneously (or as fast as the controller can patch them). This can overwhelm the cluster scheduler and cause temporary service disruption.

Mitigation strategies:

  1. Use specific annotations: Only watch the ConfigMaps that truly require restarts. If a deployment uses app-config but only the index.html key matters, do not use auto mode.

  2. Batch updates: Update ConfigMaps during maintenance windows or use canary deployments to test changes on a subset of pods first.

  3. Rate limiting: Some Reloader forks support rate-limiting restart triggers. This spreads restarts over time instead of triggering all at once.

  4. PodDisruptionBudgets: Ensure all critical deployments have PDBs to prevent too many pods restarting at once.
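For point 4, a minimal PodDisruptionBudget might look like the following. The name and label are assumptions matching this demo; note that a PDB guards voluntary evictions (for example node drains), while the pace of the rolling update itself is governed by the Deployment's maxUnavailable and maxSurge, so set both conservatively for critical workloads.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-auto-pdb
  namespace: reloader-demo
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: web-auto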

Reloader logs every reload event to stdout. Integrate these logs with your cluster logging system (Loki, Elasticsearch, CloudWatch) and create alerts for unexpected restarts.

Example log entry:

Changes detected in 'app-config' (ConfigMap), Updating workload 'web-auto' (Deployment)

You can also expose Prometheus metrics from Reloader (if using a fork that supports it) to track:

  • Number of ConfigMaps watched
  • Number of restarts triggered
  • Restart failure rate

Some teams extend Reloader with custom logic:

  • Trigger Slack notifications when a reload happens.
  • Call a webhook to pre-warm caches before the new pods start.
  • Delay restarts until off-peak hours (batch restarts overnight).

Reloader itself does not support webhooks, but you can fork it or use a sidecar that watches Reloader logs and triggers external systems.

During incidents or maintenance, you may want to temporarily disable Reloader without uninstalling it. There is no built-in pause feature, but you can scale the Reloader deployment to zero:

kubectl scale deployment reloader-reloader -n reloader --replicas=0

This stops all watches. ConfigMap and Secret updates will not trigger restarts until you scale back up:

kubectl scale deployment reloader-reloader -n reloader --replicas=1

Reloader silently ignores annotations it does not recognize. If you write:

annotations:
  reloader.stakater.com/aut0: "true" # Typo: aut0 instead of auto

Nothing happens. The deployment does not reload. No error is logged. Always double-check annotation spelling.

Reloader only watches ConfigMaps and workloads in namespaces it has RBAC permissions for. If you install Reloader in the reloader namespace with namespace-scoped RBAC but try to use it in the default namespace, it will not work.

Check Reloader’s RBAC:

kubectl get clusterrole reloader-reloader -o yaml

Ensure it has watch, get, and list permissions for ConfigMaps, Secrets, and workloads.

You update a ConfigMap, wait for the pods to restart, and they do not. The most common cause: you forgot to add the Reloader annotation to the deployment in the first place. The deployment must have the annotation before Reloader will watch it.

If you apply a script that updates 20 ConfigMaps used by the same deployment, Reloader will trigger 20 restarts in quick succession. The deployment never stabilizes.

The fix: batch ConfigMap updates into a single transaction (use kubectl apply -f with all ConfigMaps at once, or use a kustomization.yaml). Reloader debounces updates over a short window (a few seconds) to avoid duplicate restarts.
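Debouncing amounts to collapsing a burst of change events into a single trigger. A conceptual sketch (the window length and algorithm in real Reloader may differ):

```python
def debounce(event_times, window):
    """Collapse timestamps closer together than `window` into single triggers."""
    triggers = []
    last = None
    for t in sorted(event_times):
        if last is None or t - last >= window:
            triggers.append(t)
            last = t
    return triggers

# Twenty ConfigMap updates within two seconds produce one restart, not twenty.
events = [i * 0.1 for i in range(20)]
print(len(debounce(events, window=5.0)))  # 1
```

Updates spaced further apart than the window still each trigger their own restart.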

If you set both reloader.stakater.com/auto: "true" and configmap.reloader.stakater.com/reload: "app-config", the auto annotation takes precedence. The specific annotation is ignored. Pick one mode and stick with it.

6. Using subPath Without Understanding the Limitation


You mount a ConfigMap with subPath and expect volume updates to propagate:

volumeMounts:
- name: config
  mountPath: /etc/myapp/config.yaml
  subPath: config.yaml

The kubelet copies the file at pod start and never updates it. Even if Reloader triggers a restart, the new pod gets a fresh copy of the file. But if you expected the running pod to see updates without a restart, you will be disappointed. This is a Kubernetes limitation, not a Reloader issue.

Reloader receives change events from its API server watches, so detection itself is fast, but propagation still takes time: the kubelet sync for volume-mounted files and the rolling update itself (pods are replaced one by one, gated by readiness checks). From ConfigMap update to all pods running with the new config can take 30 to 60 seconds.

If you need faster propagation, consider a push-based system (for example, an init container that fetches config from a central service at pod start).

If a ConfigMap is marked immutable: true, the API server rejects any attempt to modify its data. Reloader cannot help here because the ConfigMap cannot change.

Immutable ConfigMaps are useful for versioned configuration. Create a new ConfigMap for each version:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2
immutable: true
data:
  APP_MESSAGE: "Hello from v2"

Then update the deployment to reference app-config-v2. This triggers a rolling update without Reloader’s involvement.

Note that Reloader plays no part in this workflow. Reloader reacts when the data inside a ConfigMap with an unchanged name changes; it does not react to a deployment referencing a different ConfigMap name. With versioned immutable ConfigMaps, the rolling update happens simply because you changed the pod spec.

Reloader is lightweight. It uses informer caches to avoid constant API server polling. The memory footprint depends on the number of ConfigMaps, Secrets, and workloads in the cluster.

Typical resource usage:

  • CPU: 10-50m (idle), up to 200m during heavy reload activity
  • Memory: 50-100 MiB for small clusters (< 100 ConfigMaps), up to 500 MiB for large clusters (1000+ ConfigMaps)

The bottleneck is usually informer cache size and event volume. The informers keep every watched ConfigMap and Secret in memory, and every update event must be processed. In clusters with thousands of ConfigMaps, this increases memory usage and controller churn. Use namespace scoping and specific annotations to reduce the watch surface.

Reloader requires RBAC permissions to:

  • Watch ConfigMaps, Secrets, Deployments, StatefulSets, DaemonSets
  • Patch Deployments, StatefulSets, DaemonSets (to add the reload annotation)

This is a powerful permission set. An attacker with control over Reloader could patch deployments to inject malicious annotations or trigger unwanted restarts.

Hardening steps:

  1. Run Reloader with a dedicated ServiceAccount: Do not use the default ServiceAccount.
  2. Limit namespace access: Use namespace-scoped RBAC instead of cluster-wide permissions.
  3. Audit Reloader actions: Enable audit logging for Reloader’s ServiceAccount to track what it patches.
  4. Use PodSecurityPolicies or PodSecurityStandards: Restrict Reloader’s pod to non-privileged mode, read-only root filesystem, and drop all capabilities.

If Reloader is not triggering restarts when you expect:

  1. Check Reloader logs:

    kubectl logs -f deployment/reloader-reloader -n reloader

    Look for log lines indicating ConfigMap change detection and workload updates.

  2. Verify the annotation:

    kubectl get deployment web-auto -n reloader-demo -o yaml | grep reloader

    Ensure the annotation is present and spelled correctly.

  3. Check RBAC:

    kubectl auth can-i get configmaps --as=system:serviceaccount:reloader:reloader-reloader -n reloader-demo
    kubectl auth can-i patch deployments --as=system:serviceaccount:reloader:reloader-reloader -n reloader-demo

    Both should return yes.

  4. Manually trigger a restart:

    kubectl rollout restart deployment/web-auto -n reloader-demo

    If this works, Reloader is not the issue. Check ConfigMap references in the pod spec.

  5. Check for event logs:

    kubectl get events -n reloader-demo --sort-by='.lastTimestamp'

    Look for warnings about failed mounts or missing ConfigMaps.