Reloader: Deep Dive
This document explains how Stakater Reloader works internally, the ConfigMap and Secret update problem it solves, annotation modes, and how it compares to alternatives. It covers production considerations, common pitfalls, and when to use each reload strategy.
Why Reloader Exists
Kubernetes has a fundamental gap in its configuration management design. When you update a ConfigMap or Secret that a pod uses, the pod does not automatically restart to pick up the changes. The behavior depends on how the configuration is injected.
The Native Update Problem
Kubernetes provides two ways to inject ConfigMap and Secret data into containers:

- Environment variables (via `env` or `envFrom`)
- Volume mounts
Each method handles updates differently:
Environment variables are set at container start and never updated. If you change a ConfigMap key that is injected as an environment variable, the running pod keeps the old value. The pod must be restarted to see the new value.
```yaml
# From manifests/deployment-auto.yaml
containers:
  - name: nginx
    env:
      - name: APP_MESSAGE
        valueFrom:
          configMapKeyRef:
            name: app-config
            key: APP_MESSAGE
```

After the pod starts, `APP_MESSAGE` is frozen. Updating `app-config` does not change the environment variable in the running container.
Volume mounts behave differently. When you mount a ConfigMap or Secret as a volume, the kubelet watches for changes and updates the mounted files. The propagation delay is typically 30 to 90 seconds, depending on the kubelet sync period.
But there is a critical limitation: the kubelet only refreshes files when the ConfigMap or Secret is mounted as a whole directory (it publishes new contents with an atomic symlink swap). If you use subPath to mount an individual file, the kubelet copies the file at pod start and never updates it.
```yaml
# From manifests/deployment-auto.yaml
volumeMounts:
  - name: html
    mountPath: /usr/share/nginx/html
    readOnly: true
volumes:
  - name: html
    configMap:
      name: app-config
      items:
        - key: index.html
          path: index.html
```

This is a whole-directory ConfigMap mount with item selection. The files update automatically when the ConfigMap changes. But even though the files update, the application must reload its configuration to use the new values. An nginx pod serving static HTML from a ConfigMap volume will continue serving the old cached content until the pod restarts or nginx reloads its configuration.
What People Did Before Reloader
Before automated reload tools existed, teams handled configuration updates manually:

1. Manual rolling restarts: Run `kubectl rollout restart deployment/myapp` after updating ConfigMaps. This works but requires manual intervention and is error-prone in CI/CD pipelines.
2. Hash suffix strategy: Use Kustomize's `configMapGenerator` or Helm hooks to append a hash of the ConfigMap content to its name. When the content changes, the ConfigMap gets a new name, and the deployment references the new name, triggering a rolling update. This works but creates ConfigMap clutter (old versions are never deleted unless you use a cleanup job).
3. Immutable ConfigMaps with versioned names: Create a new ConfigMap for every config change (for example, `app-config-v2`, `app-config-v3`) and update the deployment to reference the new name. Same clutter problem as hash suffixes.
4. Custom scripts: Write CI/CD pipeline steps that patch the deployment's pod template annotations to force a restart. This works but scatters logic across deployment pipelines.
Reloader automates option 4. It watches ConfigMaps and Secrets, detects changes, and patches the deployment to trigger a rolling restart.
How Reloader Works Internally
Stakater Reloader is a Kubernetes controller running as a deployment in the cluster. It watches for changes to ConfigMaps, Secrets, Deployments, StatefulSets, DaemonSets, and other workload resources.
The Watch Mechanism
Reloader uses the Kubernetes informer pattern to watch resources. When you install Reloader, it starts watch streams against the API server for:
- ConfigMaps (all namespaces or namespace-scoped, depending on RBAC)
- Secrets (all namespaces or namespace-scoped)
- Deployments, StatefulSets, DaemonSets, Rollouts (Argo Rollouts)
The informer caches the current state and receives update events when resources change. This is efficient and creates minimal API server load.
Change Detection
When a ConfigMap or Secret is updated, Reloader receives an event. It calculates the hash of the ConfigMap or Secret data and compares it to the previously cached hash. If the hash differs, it identifies which workloads reference this ConfigMap or Secret.
The reference detection works by:
- Reading the workload’s pod template spec.
- Checking all `env`, `envFrom`, and `volumes` entries for references to ConfigMaps and Secrets.
- Matching the referenced names against the changed resource.
If a match is found, Reloader checks the workload’s annotations to see if it should trigger a reload.
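The detection steps above can be sketched in a few lines. This is a simplified illustration, not Reloader's actual Go implementation: `config_hash` and `referenced_configmaps` are hypothetical helpers, and a real controller would also handle Secrets, `envFrom`, init containers, and projected volumes more exhaustively.

```python
import hashlib
import json

def config_hash(data: dict) -> str:
    """Hash ConfigMap/Secret data deterministically (sorted keys), so
    re-applying identical data yields the same digest and no restart."""
    canonical = json.dumps(data, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def referenced_configmaps(pod_spec: dict) -> set:
    """Collect ConfigMap names referenced via env, envFrom, and volumes."""
    names = set()
    for container in pod_spec.get("containers", []):
        for env in container.get("env", []):
            ref = env.get("valueFrom", {}).get("configMapKeyRef")
            if ref:
                names.add(ref["name"])
        for source in container.get("envFrom", []):
            if "configMapRef" in source:
                names.add(source["configMapRef"]["name"])
    for volume in pod_spec.get("volumes", []):
        if "configMap" in volume:
            names.add(volume["configMap"]["name"])
    return names
```

Because the hash is computed over sorted keys, a no-op re-apply of the same data produces an identical digest and triggers nothing.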
The Rolling Restart Trigger
The Deployment controller watches the pod template spec for changes: any modification to the pod template triggers a rolling update. Reloader exploits this by patching an annotation on the pod template:
```yaml
spec:
  template:
    metadata:
      annotations:
        reloader.stakater.com/last-reloaded-from: "app-config-abc123"
```

The annotation value includes a hash or timestamp. When the ConfigMap changes, Reloader updates this annotation with a new value. Kubernetes detects the pod template change and starts a rolling update.
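The patch itself is tiny, something in the spirit of the following sketch (`reload_patch` is a hypothetical helper; the exact annotation value format is an implementation detail of the controller):

```python
import json

def reload_patch(resource_name: str, data_hash: str) -> dict:
    """Build a strategic-merge patch body that bumps a pod-template
    annotation. Any change to the pod template, even an annotation,
    is enough to make the Deployment controller start a rolling update."""
    return {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        "reloader.stakater.com/last-reloaded-from":
                            f"{resource_name}-{data_hash[:8]}",
                    }
                }
            }
        }
    }

# Serialized, this is the kind of body a controller sends to the API
# server (equivalent to `kubectl patch deployment ... -p <json>`).
patch_json = json.dumps(reload_patch("app-config", "abc123def456"))
```

The key point is that the patch touches only `spec.template.metadata.annotations`; replicas, containers, and volumes are untouched, so nothing changes except the rollout trigger.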
The rolling update follows the deployment’s strategy:
```yaml
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
```

Pods are replaced one by one. The new pods pick up the updated ConfigMap data (either as environment variables set at start or as volume-mounted files that have already been updated by the kubelet).
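The maxUnavailable/maxSurge arithmetic can be sketched as follows (`rollout_bounds` is a hypothetical helper mirroring how the Deployment controller resolves these fields: percentage values for maxUnavailable round down, for maxSurge round up):

```python
import math

def rollout_bounds(replicas: int, max_unavailable, max_surge):
    """Return (minimum pods kept available, maximum pods that may exist)
    during a rolling update. Values may be absolute ints or "N%" strings."""
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100 * replicas
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return int(value)

    unavailable = resolve(max_unavailable, round_up=False)
    surge = resolve(max_surge, round_up=True)
    return replicas - unavailable, replicas + surge
```

With `replicas: 2`, `maxUnavailable: 0`, `maxSurge: 1` the bounds are (2, 3): both old pods stay available while one new pod at a time is surged in, which is why a Reloader-triggered restart causes no downtime for ready-checked workloads.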
Annotation Modes
Reloader supports multiple annotation modes, giving you fine-grained control over what triggers a restart.
Auto Mode (Watch Everything)
The simplest mode. Add a single annotation to the workload and Reloader watches all ConfigMaps and Secrets referenced in the pod spec.
```yaml
# From manifests/deployment-auto.yaml
metadata:
  name: web-auto
  namespace: reloader-demo
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  template:
    spec:
      containers:
        - name: nginx
          env:
            - name: APP_MESSAGE
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: APP_MESSAGE
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secret
                  key: DB_PASSWORD
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html
              readOnly: true
      volumes:
        - name: html
          configMap:
            name: app-config
```

This deployment references:

- ConfigMap `app-config` (via `env` and `volumeMounts`)
- Secret `app-secret` (via `env`)
Reloader detects both references. If either app-config or app-secret changes, the deployment restarts.
This mode is convenient but has a downside: updating any referenced ConfigMap or Secret triggers a restart, even if the change is irrelevant to the application. For example, if you add a new key to app-config that the application does not use, the deployment still restarts.
Specific Resource Mode (Fine-Grained Control)
If you want to reload only when specific ConfigMaps or Secrets change, use the resource-specific annotations:
```yaml
# From manifests/deployment-specific.yaml
metadata:
  name: web-specific
  namespace: reloader-demo
  annotations:
    configmap.reloader.stakater.com/reload: "app-config"
spec:
  template:
    spec:
      containers:
        - name: nginx
          env:
            - name: APP_MESSAGE
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: APP_MESSAGE
```

This deployment only restarts when `app-config` changes. It ignores changes to any Secrets, even if the pod references them.
You can watch multiple resources by separating them with commas:
```yaml
annotations:
  configmap.reloader.stakater.com/reload: "app-config,feature-flags"
  secret.reloader.stakater.com/reload: "db-credentials,api-keys"
```

Now the deployment restarts if any of the four resources change.
This mode gives you control but requires more annotation maintenance. If you add a new ConfigMap reference to the pod spec, you must also update the annotation.
No Annotation (Default Kubernetes Behavior)
If a deployment has no Reloader annotations, it behaves like standard Kubernetes. Updates to ConfigMaps and Secrets do not trigger restarts.
```yaml
# From manifests/deployment-ignored.yaml
metadata:
  name: web-ignored
  namespace: reloader-demo
spec:
  template:
    spec:
      containers:
        - name: nginx
          env:
            - name: APP_MESSAGE
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: APP_MESSAGE
```

This deployment is in the same namespace as the others and references the same ConfigMap. But without the Reloader annotation, updating app-config does not restart it. The pod keeps running with the old environment variable values until you manually restart it.
This is useful for:
- Workloads that should not automatically restart (for example, long-running batch jobs).
- Testing configuration changes in a canary pod before rolling them out to the entire deployment.
- ConfigMaps that hold data unrelated to the application runtime (for example, documentation, initialization scripts).
Search and Match Mode (Advanced)
Search mode is less common but powerful. Instead of reloading for everything the pod spec references, the workload reloads only for ConfigMaps or Secrets that are explicitly marked.

Add `reloader.stakater.com/search: "true"` to the deployment:

```yaml
metadata:
  annotations:
    reloader.stakater.com/search: "true"
```

Then annotate the ConfigMaps or Secrets that should trigger reloads with `reloader.stakater.com/match: "true"`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dynamic-config
  annotations:
    reloader.stakater.com/match: "true"
data:
  KEY: "value"
```

Now the deployment reloads only when a referenced ConfigMap carrying the match annotation changes; referenced resources without it are ignored. This is useful when ConfigMaps are created by operators or external systems: the owner of the ConfigMap, not the owner of the deployment, decides whether changes should trigger reloads.
For coarser filtering, the Reloader controller itself can be restricted to resources matching a label selector (via its `--resource-label-selector` flag), for example `app=myapp,env=prod`, so that ConfigMaps and Secrets without matching labels are never watched at all.
The Ignore Annotation
Sometimes you want to prevent a specific ConfigMap or Secret from triggering reloads, even if it is referenced in pod specs. The ignore annotation is placed on the ConfigMap or Secret itself, not on the workload:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: static-config
  annotations:
    reloader.stakater.com/ignore: "true"
data:
  STATIC_KEY: "never-changes"
```

No workload will restart when `static-config` is updated, even workloads annotated with `reloader.stakater.com/auto: "true"`.
This is useful for ConfigMaps that change frequently for reasons unrelated to application runtime (for example, logs, metrics config, feature flags that are read at request time).
ConfigMap and Secret Data Structure
The demo’s ConfigMap holds both simple key-value pairs and multi-line file content:
```yaml
# From manifests/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: reloader-demo
data:
  APP_MESSAGE: "Hello from v1"
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>Reloader Demo</title>
      <style>
        body {
          font-family: Arial;
          text-align: center;
          padding: 50px;
          background-color: #3498db;
          color: white;
        }
        h1 { font-size: 3em; }
      </style>
    </head>
    <body>
      <h1>Hello from v1</h1>
      <p>This page is served from a ConfigMap.</p>
      <p>Update the ConfigMap to see Reloader in action!</p>
    </body>
    </html>
```

Both keys are monitored. When you patch the ConfigMap to update `APP_MESSAGE`, Reloader detects the change and triggers a restart. The same happens if you update `index.html`.
The Secret uses stringData for plain text input:
```yaml
# From manifests/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
  namespace: reloader-demo
type: Opaque
stringData:
  DB_PASSWORD: "super-secret-v1"
  API_KEY: "abc123xyz"
```

When you patch the Secret, the API server base64-encodes the new values and stores them in the `data` field. Reloader watches the `data` field and detects the hash change.
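What the API server does with `stringData` can be reproduced in a couple of lines (illustrative only; `encode_string_data` is a hypothetical helper, and the real API server also merges `stringData` over any existing `data` keys):

```python
import base64

def encode_string_data(string_data: dict) -> dict:
    """Mimic the API server: base64-encode each stringData value into the
    form it is stored under the Secret's data field."""
    return {
        key: base64.b64encode(value.encode("utf-8")).decode("ascii")
        for key, value in string_data.items()
    }
```

This is also why `kubectl get secret app-secret -o yaml` shows opaque base64 strings rather than the plain text you wrote: clients must decode `data` values before use.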
Trade-offs and Alternatives
Reloader is not the only way to handle ConfigMap updates. Each approach has different trade-offs.
Reloader (Annotation-Based Controller)
Pros:
- Simple to use (add an annotation).
- Works with any workload type (Deployment, StatefulSet, DaemonSet, Argo Rollouts).
- No changes to manifest structure required.
- Centralized control (one controller for the entire cluster).
Cons:
- Adds a cluster-level dependency (if Reloader is down, reloads do not happen).
- Requires RBAC permissions to watch and patch workloads.
- Annotation syntax can be error-prone (typos are silently ignored).
- Reloads all pods even if only one needs the new config.
Wave (Pusher’s Reloader Alternative)
Wave takes a hash-based approach to the same problem. It is a controller that watches workloads carrying its opt-in annotation, calculates a single hash over all ConfigMaps and Secrets referenced in the pod spec, and injects that hash as a pod-template annotation. When a referenced ConfigMap or Secret changes, Wave recalculates the hash; the pod template changes, and Kubernetes performs a rolling update.

```yaml
metadata:
  annotations:
    wave.pusher.com/update-on-config-change: "true"
```

Note that the annotation goes on the workload’s own metadata, not on the pod template.

Pros:

- A single annotation covers all referenced ConfigMaps and Secrets.
- The config hash is recorded in the pod template, so you can see which configuration version each ReplicaSet was rolled out with (useful for GitOps auditing).

Cons:

- Coarser control than Reloader (no per-resource, search, or ignore annotations).
- A smaller ecosystem and fewer supported workload types than Reloader.
- Like Reloader, it is a cluster-level controller dependency.
Kustomize configMapGenerator Hash Suffix
Kustomize can generate ConfigMaps with a hash suffix based on their content:

```yaml
configMapGenerator:
  - name: app-config
    files:
      - config.yaml
```

Kustomize generates a ConfigMap named `app-config-<hash>`. When `config.yaml` changes, the hash changes, and the deployment references a new ConfigMap name.
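The effect of the generator can be approximated like this (a sketch only; Kustomize's real suffix algorithm differs in detail, so do not expect identical hashes, and `suffixed_name` is a hypothetical helper):

```python
import hashlib
import json

def suffixed_name(name: str, data: dict) -> str:
    """Derive a content-addressed ConfigMap name: identical data always
    produces the same name, and any data change produces a new name,
    which in turn changes the pod spec and triggers a rolling update."""
    digest = hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()
    return f"{name}-{digest[:10]}"
```

Content addressing is what makes the approach GitOps-friendly: the rendered manifests fully determine which ConfigMap version each deployment uses, with no runtime controller involved.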
Pros:
- No runtime controller or webhook required.
- Works with any Kubernetes cluster (no additional components).
- ConfigMap content is immutable (good for auditing).
Cons:
- Old ConfigMaps are never deleted (cluster fills with orphaned ConfigMaps unless you run a cleanup job).
- Requires Kustomize in your deployment pipeline.
- Does not work well with Secrets (hash suffix leaks information about Secret content).
Manual Rolling Restart
The simplest option. Update the ConfigMap, then run:

```shell
kubectl rollout restart deployment/myapp
```

Pros:
- No additional tools or controllers.
- Explicit control over when restarts happen.
- Works with any Kubernetes version.
Cons:
- Requires manual intervention (does not work for automated CD pipelines).
- Easy to forget (deploy a config change, forget to restart, wonder why nothing changed).
When to Use Reloader
Use Reloader when:
- You deploy frequently and need automated config updates.
- You use environment variables for configuration (they never auto-update).
- Your CD pipeline lacks good integration with kubectl (for example, ArgoCD without sync hooks).
- You have many microservices sharing ConfigMaps (avoid N manual restarts).
Skip Reloader when:
- You use Kustomize and prefer the hash suffix pattern.
- You want explicit control over restarts (for example, blue-green deployments).
- You cannot run cluster-level controllers (restricted environments).
- Your ConfigMaps rarely change (manual restarts are fine).
Production Considerations
RBAC and Namespace Scoping
By default, Reloader installs with cluster-wide RBAC permissions. It can watch and patch resources in all namespaces. In multi-tenant clusters or restricted environments, you may want to scope Reloader to specific namespaces.
Install Reloader with namespace-scoped RBAC:
```shell
helm install reloader stakater/reloader \
  -n reloader \
  --set watchGlobally=false \
  --set namespaceSelector="team=platform"
```

This limits Reloader to namespaces with the label `team=platform`. It ignores ConfigMaps and workloads in other namespaces.
You can also deploy multiple Reloader instances, each scoped to a different namespace or set of namespaces. This is useful in environments where different teams own different namespaces and want independent control over reload behavior.
Avoiding Restart Storms
Reloader triggers rolling restarts. If you update a ConfigMap referenced by 50 deployments, all 50 will restart simultaneously (or as fast as the controller can patch them). This can overwhelm the cluster scheduler and cause temporary service disruption.
Mitigation strategies:

- Use specific annotations: Only watch the ConfigMaps that truly require restarts. If a deployment uses `app-config` but only the `index.html` key matters, do not use `auto` mode.
- Batch updates: Update ConfigMaps during maintenance windows or use canary deployments to test changes on a subset of pods first.
- Rate limiting: Some Reloader forks support rate-limiting restart triggers, spreading restarts over time instead of firing them all at once.
- PodDisruptionBudgets: Ensure all critical deployments have PDBs to prevent too many pods restarting at once.
Alerting and Monitoring
Reloader logs every reload event to stdout. Integrate these logs with your cluster logging system (Loki, Elasticsearch, CloudWatch) and create alerts for unexpected restarts.
Example log entry:
```
Changes detected in 'app-config' (ConfigMap), Updating workload 'web-auto' (Deployment)
```

You can also scrape Reloader’s Prometheus metrics endpoint to track:
- Number of ConfigMaps watched
- Number of restarts triggered
- Restart failure rate
Webhooks and Custom Integrations
Some teams extend Reloader with custom logic:
- Trigger Slack notifications when a reload happens.
- Call a webhook to pre-warm caches before the new pods start.
- Delay restarts until off-peak hours (batch restarts overnight).
Reloader itself does not support webhooks, but you can fork it or use a sidecar that watches Reloader logs and triggers external systems.
Pause and Resume
During incidents or maintenance, you may want to temporarily disable Reloader without uninstalling it. There is no built-in pause feature, but you can scale the Reloader deployment to zero:
```shell
kubectl scale deployment reloader-reloader -n reloader --replicas=0
```

This stops all watches. ConfigMap and Secret updates will not trigger restarts until you scale back up:
```shell
kubectl scale deployment reloader-reloader -n reloader --replicas=1
```

Common Pitfalls
1. Typo in Annotation Name
Reloader silently ignores annotations it does not recognize. If you write:
```yaml
annotations:
  reloader.stakater.com/aut0: "true"  # Typo: aut0 instead of auto
```

Nothing happens. The deployment does not reload. No error is logged. Always double-check annotation spelling.
2. Watching the Wrong Namespace
Reloader only watches ConfigMaps and workloads in namespaces it has RBAC permissions for. If you install Reloader in the reloader namespace with namespace-scoped RBAC but try to use it in the default namespace, it will not work.
Check Reloader’s RBAC:
```shell
kubectl get clusterrole reloader-reloader -o yaml
```

Ensure it has `watch`, `get`, and `list` permissions for ConfigMaps and Secrets, and `patch` permission on Deployments, StatefulSets, and DaemonSets.
3. Forgetting to Apply the Annotation
You update a ConfigMap, wait for the pods to restart, and they do not. The most common cause: you forgot to add the Reloader annotation to the deployment in the first place. The deployment must have the annotation before Reloader will watch it.
4. Too Many Restarts During Bulk Updates
If you apply a script that updates 20 ConfigMaps used by the same deployment, Reloader will trigger 20 restarts in quick succession. The deployment never stabilizes.
The fix: batch the ConfigMap updates so they are applied together (a single `kubectl apply -f` against a directory, or a `kustomization.yaml`). Reloader debounces updates over a short window (a few seconds) to avoid duplicate restarts.
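The debouncing idea, coalescing a burst of updates into a single restart trigger, can be sketched like this (illustrative only; this is not Reloader's actual implementation or window length):

```python
class Debouncer:
    """Fire once per burst: each event pushes the deadline back, and the
    action runs only after `window` seconds pass with no further events."""

    def __init__(self, window: float):
        self.window = window
        self.last_event = None  # timestamp of the most recent event, if any

    def record_event(self, now: float) -> None:
        self.last_event = now  # a new event re-arms the timer

    def should_fire(self, now: float) -> bool:
        if self.last_event is None:
            return False  # nothing pending
        if now - self.last_event >= self.window:
            self.last_event = None  # burst is over; fire exactly once
            return True
        return False
```

With this pattern, twenty ConfigMap updates arriving inside the window produce one restart trigger instead of twenty.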
5. Mixing Auto and Specific Annotations
If you set both reloader.stakater.com/auto: "true" and configmap.reloader.stakater.com/reload: "app-config", the auto annotation takes precedence. The specific annotation is ignored. Pick one mode and stick with it.
6. Using subPath Without Understanding the Limitation
You mount a ConfigMap with subPath and expect volume updates to propagate:
```yaml
volumeMounts:
  - name: config
    mountPath: /etc/myapp/config.yaml
    subPath: config.yaml
```

The kubelet copies the file at pod start and never updates it. Even if Reloader triggers a restart, the new pod gets a fresh copy of the file. But if you expected the running pod to see updates without a restart, you will be disappointed. This is a Kubernetes limitation, not a Reloader issue.
7. Expecting Instant Restarts
Reloader reacts to watch events, but propagation is not instant: there is event-processing latency, and the rolling update itself replaces pods one by one with readiness checks between them. From ConfigMap update to all pods running with the new config can take 30 to 60 seconds.
If you need faster propagation, consider a push-based system (for example, an init container that fetches config from a central service at pod start).
Reloader and Immutable ConfigMaps
If a ConfigMap is marked immutable: true, the API server rejects any attempt to modify its data. Reloader cannot help here because the ConfigMap cannot change.
Immutable ConfigMaps are useful for versioned configuration. Create a new ConfigMap for each version:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2
immutable: true
data:
  APP_MESSAGE: "Hello from v2"
```

Then update the deployment to reference `app-config-v2`. This triggers a rolling update without Reloader’s involvement.
Note that Reloader plays no part in this versioned workflow. It triggers restarts when the data inside a ConfigMap of the same name changes, not when a deployment is pointed at a differently named ConfigMap. With versioned immutable ConfigMaps, the rolling update happens simply because you changed the pod spec to reference a new name.
Performance and Resource Usage
Reloader is lightweight. It uses informer caches to avoid constant API server polling. The memory footprint depends on the number of ConfigMaps, Secrets, and workloads in the cluster.
Typical resource usage:
- CPU: 10-50m (idle), up to 200m during heavy reload activity
- Memory: 50-100 MiB for small clusters (< 100 ConfigMaps), up to 500 MiB for large clusters (1000+ ConfigMaps)
The main cost is the informer cache. Reloader lists and watches ConfigMaps and Secrets with one watch stream per resource type, not per object, so in clusters with thousands of ConfigMaps the in-memory cache grows large and the initial list is expensive for the API server. Use namespace scoping to shrink the watch surface, and specific annotations to avoid unnecessary restarts.
Security Considerations
Reloader requires RBAC permissions to:
- Watch ConfigMaps, Secrets, Deployments, StatefulSets, DaemonSets
- Patch Deployments, StatefulSets, DaemonSets (to add the reload annotation)
This is a powerful permission set. An attacker with control over Reloader could patch deployments to inject malicious annotations or trigger unwanted restarts.
Hardening steps:
- Run Reloader with a dedicated ServiceAccount: Do not use the default ServiceAccount.
- Limit namespace access: Use namespace-scoped RBAC instead of cluster-wide permissions.
- Audit Reloader actions: Enable audit logging for Reloader’s ServiceAccount to track what it patches.
- Use Pod Security Standards: Restrict Reloader’s pod to non-privileged mode, a read-only root filesystem, and dropped capabilities (PodSecurityPolicies filled this role before their removal in Kubernetes 1.25).
Debugging Reloader
If Reloader is not triggering restarts when you expect:
1. Check Reloader logs:

   ```shell
   kubectl logs -f deployment/reloader-reloader -n reloader
   ```

   Look for log lines indicating ConfigMap change detection and workload updates.

2. Verify the annotation:

   ```shell
   kubectl get deployment web-auto -n reloader-demo -o yaml | grep reloader
   ```

   Ensure the annotation is present and spelled correctly.

3. Check RBAC:

   ```shell
   kubectl auth can-i get configmaps --as=system:serviceaccount:reloader:reloader-reloader -n reloader-demo
   kubectl auth can-i patch deployments --as=system:serviceaccount:reloader:reloader-reloader -n reloader-demo
   ```

   Both should return `yes`.

4. Manually trigger a restart:

   ```shell
   kubectl rollout restart deployment/web-auto -n reloader-demo
   ```

   If this works, Reloader is not the issue. Check the ConfigMap references in the pod spec.

5. Check for event logs:

   ```shell
   kubectl get events -n reloader-demo --sort-by='.lastTimestamp'
   ```

   Look for warnings about failed mounts or missing ConfigMaps.