
Multi-Container Patterns

Learn the multi-container patterns used in production Kubernetes: sidecar, adapter, ambassador, init containers, and native sidecars.

Time: ~15 minutes | Difficulty: Intermediate

  • Sidecar: extend app behavior without changing the app (log shipping, monitoring)
  • Adapter: normalize output format (convert metrics to Prometheus format)
  • Ambassador: proxy connections on behalf of the app (TLS termination)
  • Init containers: run setup tasks before the app starts
  • Native sidecar containers (Kubernetes 1.28+) with restartPolicy: Always
  • Container lifecycle ordering and shared resources

Navigate to the demo directory:

Terminal window
cd demos/multi-container
Deploy the manifests:

Terminal window
kubectl apply -f manifests/namespace.yaml
kubectl apply -f manifests/

Wait for all pods to be ready:

Terminal window
kubectl get pods -n multi-container-demo
# Or block until everything is Ready:
kubectl wait --for=condition=Ready pod --all -n multi-container-demo --timeout=120s

The app writes logs to a shared volume. The sidecar tails and ships them.

Terminal window
# App logs (writing to file)
kubectl logs sidecar-logging -c app -n multi-container-demo
# Sidecar output (tailing the same file)
kubectl logs sidecar-logging -c log-shipper -n multi-container-demo

Both containers share the log-volume emptyDir. The app writes, the sidecar reads. Neither knows about the other.
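The manifest behind this demo (sidecar-logging.yaml) likely looks roughly like the sketch below. Container and volume names match the kubectl commands above; the images, commands, and log path are illustrative assumptions, not the demo's actual definition:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-logging
  namespace: multi-container-demo
spec:
  volumes:
    - name: log-volume
      emptyDir: {}          # shared scratch space, lives as long as the pod
  containers:
    - name: app
      image: busybox:1.36   # illustrative image
      # Writes a log line every couple of seconds into the shared volume
      command: ["sh", "-c", "while true; do echo \"$(date) request handled\" >> /var/log/app/app.log; sleep 2; done"]
      volumeMounts:
        - name: log-volume
          mountPath: /var/log/app
    - name: log-shipper
      image: busybox:1.36   # illustrative image
      # Tails the same file; a real shipper would forward it to a log backend
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: log-volume
          mountPath: /var/log/app
```

Neither container references the other by name; the emptyDir volume is the only coupling point.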

The app writes metrics in a custom pipe-delimited format. The adapter converts to Prometheus format.

Terminal window
# Raw app metrics (custom format)
kubectl exec adapter-format -c app -n multi-container-demo -- tail -5 /var/log/app/metrics.raw
# Adapter output (Prometheus format)
kubectl logs adapter-format -c adapter -n multi-container-demo | tail -5
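In manifest form, the adapter pattern is structurally identical to the sidecar: two containers, one shared volume. A rough sketch of adapter-format.yaml, assuming illustrative images, a hypothetical metrics-volume name, and a pipe-delimited input like `cpu|0.42`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: adapter-format
  namespace: multi-container-demo
spec:
  volumes:
    - name: metrics-volume
      emptyDir: {}
  containers:
    - name: app
      image: busybox:1.36   # illustrative image
      # Emits metrics in a custom pipe-delimited format
      command: ["sh", "-c", "while true; do echo \"cpu|0.42\" >> /var/log/app/metrics.raw; sleep 5; done"]
      volumeMounts:
        - name: metrics-volume
          mountPath: /var/log/app
    - name: adapter
      image: busybox:1.36   # illustrative image
      # Rewrites each line into Prometheus exposition format: metric_name value
      command: ["sh", "-c", "tail -F /var/log/app/metrics.raw | awk -F'|' '{print \"app_\" $1 \" \" $2}'"]
      volumeMounts:
        - name: metrics-volume
          mountPath: /var/log/app
```

The app never changes; only the adapter knows that the outside world expects Prometheus format.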

The proxy container terminates TLS and forwards to the app on localhost.

Terminal window
# Direct HTTP access (app container)
kubectl exec ambassador-proxy -c app -n multi-container-demo -- wget -qO- http://localhost:80
# TLS access via the proxy (ambassador container)
kubectl exec ambassador-proxy -c app -n multi-container-demo -- wget -qO- --no-check-certificate https://localhost:8443
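The ambassador pod presumably pairs the app with a TLS-terminating proxy that reaches it over localhost. A sketch of what ambassador-proxy.yaml might contain; the images, ports, and certificate paths are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ambassador-proxy
  namespace: multi-container-demo
spec:
  containers:
    - name: app
      image: nginx:1.25    # illustrative: serves plain HTTP on :80
      ports:
        - containerPort: 80
    - name: ambassador
      image: nginx:1.25    # illustrative: terminates TLS on :8443
      ports:
        - containerPort: 8443
      # The proxy config (mounted from a ConfigMap, not shown) would be roughly:
      #   server {
      #     listen 8443 ssl;
      #     ssl_certificate     /etc/tls/tls.crt;
      #     ssl_certificate_key /etc/tls/tls.key;
      #     location / { proxy_pass http://localhost:80; }
      #   }
```

Because both containers share the pod's network namespace, `proxy_pass http://localhost:80` reaches the app with no Service or DNS involved.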

Init containers run sequentially before the main container starts.

Terminal window
# See the init container logs
kubectl logs init-demo -c init-config -n multi-container-demo
kubectl logs init-demo -c init-wait-db -n multi-container-demo
# App uses the config created by init
kubectl logs init-demo -c app -n multi-container-demo
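The init-container.yaml manifest likely follows this shape. Container names match the commands above; the database host/port, config contents, and images are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
  namespace: multi-container-demo
spec:
  volumes:
    - name: config
      emptyDir: {}
  initContainers:
    - name: init-config
      image: busybox:1.36   # illustrative image
      # Step 1: render a config file into the shared volume
      command: ["sh", "-c", "echo 'mode=production' > /config/app.conf"]
      volumeMounts:
        - name: config
          mountPath: /config
    - name: init-wait-db
      image: busybox:1.36   # illustrative image
      # Step 2: block until the (hypothetical) db service answers on :5432
      command: ["sh", "-c", "until nc -z db 5432; do echo waiting for db; sleep 2; done"]
  containers:
    - name: app
      image: busybox:1.36   # illustrative image
      # Starts only after BOTH init containers exit successfully
      command: ["sh", "-c", "cat /config/app.conf && sleep 3600"]
      volumeMounts:
        - name: config
          mountPath: /config
```

Each init container must exit 0 before the next one starts; the main container starts only after all of them complete.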

Part 2: Native Sidecar Containers (Kubernetes 1.28+)


Native sidecars are defined in initContainers with restartPolicy: Always. They start before the main containers but keep running alongside them.
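In manifest form, the difference comes down to a single field. A minimal sketch of what sidecar-proxy.yaml could look like; the container names match the kubectl commands in this guide, while images and ports are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidecar-proxy
  namespace: multi-container-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sidecar-proxy
  template:
    metadata:
      labels:
        app: sidecar-proxy
    spec:
      initContainers:
        - name: proxy
          image: nginx:1.25        # illustrative image
          restartPolicy: Always    # this one field turns an init container into a native sidecar
          ports:
            - containerPort: 8080
      containers:
        - name: app
          image: nginx:1.25        # illustrative image
          ports:
            - containerPort: 80
```

Without `restartPolicy: Always`, Kubernetes would wait for the proxy to exit before starting the app; with it, the proxy starts first and then keeps running for the pod's whole lifetime.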

The proxy sidecar forwards requests to the main app and adds custom headers:

Terminal window
# Direct access to the app on port 80
kubectl exec -n multi-container-demo deployment/sidecar-proxy -c app -- wget -qO- http://localhost:80
# Access via the proxy sidecar on port 8080
kubectl exec -n multi-container-demo deployment/sidecar-proxy -c app -- wget -qO- http://localhost:8080
# Check proxy health endpoint
kubectl exec -n multi-container-demo deployment/sidecar-proxy -c app -- wget -qO- http://localhost:8080/health

The proxy runs as a native sidecar and shares the network namespace (localhost) with the main app. Kubernetes manages its lifecycle: it starts before the main container and stops gracefully when the pod terminates.

How Native Sidecars Differ from Classic Sidecars

Aspect         | Classic Sidecar       | Native Sidecar (1.28+)
---------------|-----------------------|------------------------------------------
Definition     | containers section    | initContainers with restartPolicy: Always
Startup order  | Undefined (parallel)  | Guaranteed before main containers
Shutdown order | Undefined             | Graceful, after main containers
Crash recovery | Pod restart policy    | Individual restart (Always)
Use cases      | Any companion process | Proxies, log shippers, service mesh

manifests/
  namespace.yaml        # multi-container-demo namespace
  sidecar-logging.yaml  # App + log shipper sharing a volume
  adapter-format.yaml   # App + format converter sharing a volume
  ambassador-proxy.yaml # App + TLS proxy sharing localhost
  init-container.yaml   # Two init containers + app
  sidecar-proxy.yaml    # Native sidecar proxy (restartPolicy: Always)
  service.yaml          # Services for proxy demos

Key principle: containers in the same pod share:

  • Network namespace (localhost is the same for all containers)
  • Volumes (emptyDir mounts shared between containers)
  • Lifecycle (containers are created and terminated together; a crashed container is restarted according to the pod's restartPolicy rather than killing the whole pod)

Pattern        | Communication        | Use Case
---------------|----------------------|------------------------------------------------
Sidecar        | Shared volume        | Log collection, monitoring agents, config reload
Adapter        | Shared volume        | Format conversion, protocol translation
Ambassador     | localhost            | TLS termination, connection pooling, rate limiting
Init           | Sequential execution | Database migrations, config download, wait-for-service
Native Sidecar | localhost + volumes  | Proxies, service mesh, log shippers (with lifecycle guarantees)
  1. Test sidecar crash recovery. Kill the native sidecar process and watch it restart automatically:

    Terminal window
    kubectl exec -n multi-container-demo -c proxy deployment/sidecar-proxy -- kill 1
    kubectl get pods -n multi-container-demo -w
  2. Remove restartPolicy: Always from the proxy in sidecar-proxy.yaml. The proxy then runs as a regular init container, and because its process never exits, the pod stays stuck in the Init phase and the main app never starts.

  3. Add a failing init container to init-container.yaml:

    initContainers:
      - name: failing-init
        image: busybox:1.36
        command: ["sh", "-c", "exit 1"]

    Apply and watch the pod get stuck in Init:Error. Init containers block the entire pod startup if they fail.

  4. Check resource accounting. Sidecars count toward the pod’s total resource requests and limits:

    Terminal window
    kubectl describe pod -l app=sidecar-proxy -n multi-container-demo | grep -A5 Requests
Clean up by deleting the namespace:

Terminal window
kubectl delete namespace multi-container-demo

See docs/deep-dive.md for a detailed explanation of shared namespaces, native sidecar containers (KEP-753), container ordering, and real-world multi-container architectures.

Move on to PersistentVolumes to learn about the Kubernetes storage layer.