# Multi-Container Patterns: Deep Dive
This document explains how multi-container pods work in Kubernetes, why they exist, and when to use each pattern. It covers shared namespaces, the classic design patterns, init containers, native sidecar containers (KEP-753), and real-world architectures.
## Why Pods Exist

Containers isolate processes. Pods group containers that must work together.
Kubernetes created the pod abstraction because some workloads require tightly coupled containers that share resources. A web server and its log shipper need access to the same files. A proxy and its backend need to talk over localhost. These containers form a single unit of deployment, scaling, and scheduling.
The alternative, running each container in its own pod, would force you to manage inter-pod networking, shared storage across nodes, and co-scheduling yourself. Pods handle all of that automatically.
## Shared Namespaces Inside a Pod

Containers in the same pod share Linux kernel namespaces. This is the mechanism that makes multi-container patterns possible.
### Network Namespace

All containers in a pod share one network namespace. They get the same IP address and the same set of ports. Any container can reach any other via localhost.
Look at the ambassador pattern from this demo:
```yaml
containers:
  - name: app
    image: nginx:1.25.3-alpine
    ports:
      - containerPort: 80
        name: app-port
  - name: proxy
    image: nginx:1.25.3-alpine
    ports:
      - containerPort: 8443
        name: proxy-port
```

The proxy forwards traffic to `http://localhost:80`, reaching the app container directly. No service discovery. No DNS. No network hops.
This also means port conflicts are real. Two containers in the same pod cannot both listen on port 80. Plan your port allocations just as you would for processes on the same host.
### IPC Namespace

Containers in a pod share the IPC namespace by default. This allows System V shared memory and POSIX message queues. Most modern applications use network sockets instead, but some legacy and HPC workloads depend on shared memory.
### PID Namespace

By default, each container has its own PID namespace. You can change this with `shareProcessNamespace: true`:
```yaml
spec:
  shareProcessNamespace: true
  containers:
    - name: app
      image: myapp:latest
    - name: debugger
      image: busybox:1.36
```

With PID sharing enabled, every container can see every other container’s processes. Useful for debugging sidecars, process monitoring, and signal forwarding. The pause container becomes PID 1, so your application process will have a different PID. If your app checks for PID 1 during signal handling, it will behave differently.
## Volume Sharing with emptyDir

Shared namespaces handle networking and processes. Shared volumes handle files. The `emptyDir` volume type is the most common mechanism for inter-container file communication.
An emptyDir is created when the pod is scheduled. It starts empty. All containers that mount it see the same files. When the pod is removed, the emptyDir is deleted permanently.
The sidecar logging pattern uses emptyDir:
```yaml
volumes:
  - name: log-volume
    emptyDir: {}
```

The app container mounts it read-write and appends log lines:
```yaml
containers:
  - name: app
    volumeMounts:
      - name: log-volume
        mountPath: /var/log/app
```

The log-shipper container mounts the same volume read-only:
```yaml
  - name: log-shipper
    volumeMounts:
      - name: log-volume
        mountPath: /var/log/app
        readOnly: true
```

Notice `readOnly: true` on the sidecar mount. The log shipper has no business writing to that volume. Marking it read-only enforces that boundary.
By default, emptyDir uses the node’s disk. You can back it with memory:
```yaml
volumes:
  - name: scratch
    emptyDir:
      medium: Memory
      sizeLimit: 64Mi
```

Memory-backed emptyDir volumes are fast but consume pod memory and count against memory limits. Use them for small scratch spaces like TLS certificate caches. Avoid them for log files that grow over time.
## The Sidecar Pattern

The sidecar extends the main container without modifying it. The application does not know the sidecar exists. It reads from or writes to a shared resource (usually a volume or the network namespace).
This demo’s sidecar-logging.yaml is the simplest example. The application writes structured logs to a file. The sidecar tails that file. In production, the sidecar would be Fluent Bit shipping logs to Elasticsearch or Splunk.
Production examples:

- Fluent Bit reads log files from a shared emptyDir, parses them, and forwards them to a centralized backend. No code changes to the app.
- Envoy runs as a sidecar in Istio, intercepting all traffic via iptables rules injected by an init container. The app talks to localhost, and Envoy handles mTLS, retries, and circuit breaking transparently.
Use sidecars for log collection, monitoring, config reload (watching a ConfigMap and sending SIGHUP), secret rotation (Vault agent), and traffic management (service mesh proxies). Do not use sidecars when the “sidecar” is actually a second application that could run independently.
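The config-reload case can be sketched as follows. This is a hypothetical pod, not part of this demo: the ConfigMap name, the `app.conf` key, and the polling interval are all assumptions, and `shareProcessNamespace` is enabled so the reloader can signal the nginx master process.

```yaml
spec:
  shareProcessNamespace: true     # required so the reloader can signal the app
  volumes:
    - name: config
      configMap:
        name: app-config          # assumed ConfigMap with key app.conf
  containers:
    - name: app
      image: nginx:1.25.3-alpine
      volumeMounts:
        - name: config
          mountPath: /etc/app
    - name: config-reloader
      image: busybox:1.36
      volumeMounts:
        - name: config
          mountPath: /etc/app
          readOnly: true
      command:
        - /bin/sh
        - -c
        - |
          # Poll the mounted ConfigMap; send SIGHUP to nginx on change.
          LAST=$(md5sum /etc/app/app.conf)
          while true; do
            CUR=$(md5sum /etc/app/app.conf)
            if [ "$CUR" != "$LAST" ]; then
              pkill -HUP nginx    # works because PIDs are visible across containers
              LAST=$CUR
            fi
            sleep 10
          done
```

Production reloaders usually use inotify instead of polling, but the structure is the same: the app never knows its config is being watched.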
## The Adapter Pattern

The adapter normalizes the output of the main container. It is structurally identical to a sidecar, but its purpose is format translation.
This demo shows it clearly. The app writes pipe-delimited metrics:
```yaml
- name: app
  command:
    - /bin/sh
    - -c
    - |
      while true; do
        TS=$(date +%s)
        CPU=$((RANDOM % 100))
        MEM=$((RANDOM % 512 + 128))
        echo "${TS}|cpu_usage|${CPU}|percent" >> /var/log/app/metrics.raw
        echo "${TS}|mem_usage|${MEM}|megabytes" >> /var/log/app/metrics.raw
        sleep 5
      done
```

The adapter converts to Prometheus exposition format:
```yaml
- name: adapter
  command:
    - /bin/sh
    - -c
    - |
      tail -f /var/log/app/metrics.raw | while IFS='|' read -r ts name value unit; do
        echo "app_${name}{unit=\"${unit}\"} ${value} ${ts}000"
      done
```

The raw line `1700000000|cpu_usage|42|percent` becomes `app_cpu_usage{unit="percent"} 42 1700000000000`. Prometheus can scrape it directly.
Use adapters for metrics normalization (custom formats to Prometheus/OpenTelemetry), protocol translation (legacy protocols to HTTP/gRPC), and log format standardization. The pattern is valuable in heterogeneous environments with third-party software you cannot modify.
## The Ambassador Pattern

The ambassador proxies network connections on behalf of the main container. The application talks to localhost. The ambassador handles the outside world.
In this demo, the ambassador terminates TLS:
```yaml
- name: proxy
  command:
    - /bin/sh
    - -c
    - |
      apk add --no-cache openssl > /dev/null 2>&1
      openssl req -x509 -newkey rsa:2048 -keyout /tmp/key.pem -out /tmp/cert.pem \
        -days 1 -nodes -subj '/CN=localhost' 2>/dev/null
      cat > /etc/nginx/conf.d/default.conf <<'CONF'
      server {
        listen 8443 ssl;
        ssl_certificate /tmp/cert.pem;
        ssl_certificate_key /tmp/key.pem;
        location / {
          proxy_pass http://localhost:80;
        }
      }
      CONF
      nginx -g 'daemon off;'
```

External clients connect to port 8443 with TLS. The proxy decrypts and forwards to the app on port 80 over plain HTTP. The application never deals with certificates or TLS configuration.
Common ambassador use cases:

- TLS termination (exactly this demo).
- Connection pooling (PgBouncer maintains pooled database connections; the app connects to localhost).
- Rate limiting (enforcing limits before requests reach the app).
- Service discovery (the app sends to localhost; the ambassador routes to the correct upstream).
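The connection-pooling case can be sketched like this. Everything here is illustrative, not from this demo: the image, the environment variable names, and the database Service hostname are assumptions you would replace with your own.

```yaml
# The app talks to PgBouncer on localhost:6432; PgBouncer holds a pool of
# connections to the real database elsewhere in the cluster.
containers:
  - name: app
    image: myapp:latest
    env:
      - name: DATABASE_URL
        value: postgres://app@localhost:6432/appdb   # localhost = the ambassador
  - name: pgbouncer
    image: bitnami/pgbouncer:latest    # assumed image
    ports:
      - containerPort: 6432
    env:
      - name: POSTGRESQL_HOST          # assumed env name for this image
        value: postgres.default.svc.cluster.local
```

Swapping the database (or adding TLS to the upstream connection) becomes a change to the ambassador's configuration, invisible to the app.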
## Init Containers

Init containers run before any application container starts. They run sequentially, in order, and each must succeed before the next begins.
This demo defines two:
```yaml
initContainers:
  - name: init-config
    image: busybox:1.36
    command:
      - /bin/sh
      - -c
      - |
        echo '{"database":"postgres","port":5432,"debug":true}' > /config/app.json
    volumeMounts:
      - name: config
        mountPath: /config
  - name: init-wait-db
    image: busybox:1.36
    command:
      - /bin/sh
      - -c
      - |
        echo "Init container: waiting for database..."
        sleep 2
        echo "Database ready (simulated)"
```

Execution order: `init-config` writes a config file to the shared volume. After it exits 0, `init-wait-db` runs. After it exits 0, the app container starts.
### Restart Behavior

If an init container fails, Kubernetes handles it according to the pod’s `restartPolicy`. With `restartPolicy: Always` (the default) or `OnFailure`, the kubelet restarts the failed init container repeatedly, with exponential back-off, and the app containers do not start until every init container has succeeded. With `restartPolicy: Never`, a failed init container marks the whole pod as failed.
Regular init containers do not support liveness, readiness, or startup probes; they are expected to run to completion. (Native sidecar containers, covered below, are the exception and do support probes.)
### Resource Inheritance

Init containers and app containers both count toward the pod’s resource footprint, but their requests are calculated differently. The effective init request is the maximum of any single init container’s request (they run sequentially). The effective app request is the sum of all app container requests (they run simultaneously). The pod’s overall request is the greater of these two values.
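A worked example makes the arithmetic concrete (the request values are illustrative):

```yaml
# Init containers run one at a time  -> effective init request = max(400m) = 400m
# App containers run concurrently    -> effective app request  = 100m + 200m = 300m
# Pod request = max(400m, 300m) = 400m CPU
initContainers:
  - name: migrate
    resources:
      requests:
        cpu: 400m
containers:
  - name: app
    resources:
      requests:
        cpu: 100m
  - name: metrics
    resources:
      requests:
        cpu: 200m
```

A heavy one-off init step (like a migration) can therefore dominate the pod’s scheduled request even though it never runs alongside the app containers.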
### Common Use Cases

Configuration download, database migrations, waiting for dependencies, setting file permissions on volumes, and pre-warming caches.
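The “waiting for dependencies” case is typically a small shell loop. A hedged sketch, where the Service hostname and port are assumptions:

```yaml
initContainers:
  - name: wait-for-postgres
    image: busybox:1.36
    command:
      - /bin/sh
      - -c
      - |
        # Block until the database Service accepts TCP connections.
        until nc -z postgres.default.svc.cluster.local 5432; do
          echo "waiting for postgres..."
          sleep 2
        done
```

Because the init container must exit 0 before app containers start, the app can assume the database is reachable the moment it boots.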
## Native Sidecar Containers (KEP-753)

Traditional sidecars have a fundamental problem: no lifecycle guarantees relative to the main application. A log shipper might start after logs are already being produced, or shut down before the app flushes its final entries.
Kubernetes addressed this with native sidecar containers through KEP-753 (alpha in 1.28, beta and enabled by default in 1.29, stable in 1.33). These are init containers with `restartPolicy: Always`:
```yaml
initContainers:
  - name: log-agent
    image: fluent/fluent-bit:latest
    restartPolicy: Always
    volumeMounts:
      - name: logs
        mountPath: /var/log/app
```

This tells Kubernetes: start this container before app containers, do not wait for it to exit, keep it running for the pod’s lifetime, and shut it down after all app containers stop.
Startup order:

1. Regular init containers run first (sequentially, to completion).
2. Native sidecar init containers start next (in declaration order).
3. Application containers start last.
Shutdown order:

1. Application containers receive SIGTERM first.
2. Native sidecars stop afterward, in reverse declaration order.
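Putting the ordering together, a pod that combines all three kinds of containers might look like this (names and images are illustrative):

```yaml
# Start: init-db-wait runs to completion, then log-agent starts and stays up,
# then app starts. Stop: app gets SIGTERM first, log-agent is stopped last.
spec:
  initContainers:
    - name: init-db-wait             # 1. regular init: runs to completion first
      image: busybox:1.36
      command: ["sh", "-c", "sleep 2"]
    - name: log-agent                # 2. native sidecar: starts, keeps running
      image: fluent/fluent-bit:latest
      restartPolicy: Always
  containers:
    - name: app                      # 3. starts after both of the above
      image: myapp:latest
```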
| Aspect | Traditional Sidecar | Native Sidecar (KEP-753) |
|---|---|---|
| Declared in | containers | initContainers with restartPolicy: Always |
| Startup order | No guarantee | Starts before app containers |
| Shutdown order | No guarantee | Stops after app containers |
| Resource calculation | Summed with app containers | Also summed with app containers (runs concurrently), unlike regular init containers |
| Job support | Blocks Job completion | Does not block Job completion |
The Job support point is significant. Before KEP-753, a sidecar in a Job would prevent the Job from completing. Native sidecars are exempt.
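A minimal sketch of that exemption (names and images are illustrative): when `worker` exits successfully, the Job completes, and Kubernetes shuts the sidecar down automatically.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-with-sidecar
spec:
  template:
    spec:
      restartPolicy: Never           # applies to the app container
      initContainers:
        - name: log-agent
          image: fluent/fluent-bit:latest
          restartPolicy: Always      # native sidecar: does not block completion
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "echo processing; sleep 5"]
```

With a traditional sidecar in `containers`, the pod would keep running after `worker` exited and the Job would never be marked complete.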
## Container Ordering Guarantees

What Kubernetes guarantees:

- Init containers run sequentially.
- Native sidecars start in declaration order; each must be up (and pass its startup probe, if one is defined) before the next container starts, but they are not required to exit.
- All regular init containers finish, and all native sidecars start, before app containers begin.
- App containers are all started at the same time.
- On shutdown, app containers get SIGTERM simultaneously.
- Native sidecars shut down in reverse order after app containers exit.
What Kubernetes does not guarantee:

- No ordering among application containers.
- No guarantee a container is “ready” just because it started (use readiness probes).
- No guarantee all containers start at the exact same instant.
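Because “started” does not mean “ready,” gate traffic with a readiness probe instead of relying on start order. The endpoint path and port here are assumptions:

```yaml
containers:
  - name: app
    image: myapp:latest
    readinessProbe:
      httpGet:
        path: /healthz      # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```

Until the probe succeeds, the pod is withheld from Service endpoints, which closes the gap between “process running” and “able to serve.”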
## When to Use Multi-Container Pods vs Separate Pods

Use multi-container pods when:

- Containers share the network namespace (localhost communication).
- Containers share files through a volume.
- They have the same scaling requirements.
- One container exists only to support the other.
Use separate pods when:

- Containers have different scaling needs.
- They have different release cycles.
- They can run on different nodes.
- They communicate through network APIs.
- A failure in one should not bring down the other.
The litmus test: “If I need to scale this container independently, is it in the wrong pod?” If yes, it belongs in its own pod behind a Service.
## Real-World Architectures

### Istio Service Mesh

Every pod in an Istio mesh gets an init container (`istio-init`) that configures iptables rules, and a sidecar (`istio-proxy`, which is Envoy) that intercepts all traffic. With native sidecar support, Envoy starts before the app and stops after it, preventing traffic drops during startup or shutdown. The app has zero awareness of the mesh.
### Vault Agent Injection

An init container (`vault-agent-init`) authenticates with Vault and writes secrets to a shared emptyDir. A sidecar (`vault-agent`) renews leases and rotates secrets. The app reads secrets from the shared volume. With native sidecars, secrets are guaranteed to be available before the app starts.
### Logging Pipelines

The app writes structured logs to a shared emptyDir. A Fluent Bit sidecar tails, parses, adds metadata, and forwards to Elasticsearch or Splunk. An init container may set up directories or download Fluent Bit config. You can swap logging backends without touching application code.
## Summary of Shared Resources

| Resource | How Shared | What It Enables |
|---|---|---|
| Network namespace | Automatic | localhost communication, port sharing |
| IPC namespace | Automatic | Shared memory, message queues |
| PID namespace | Opt-in (shareProcessNamespace) | Process visibility, signal forwarding |
| Volumes (emptyDir) | Explicit mounts | File sharing between containers |
| Pod identity | Automatic | Same service account, labels, IP |
## See Also

- README for step-by-step instructions to run this demo
- Persistent Volumes for the Kubernetes storage layer
- KEP-753 for the native sidecar specification