
CloudNativePG

Run production-grade PostgreSQL on Kubernetes using the CloudNativePG operator (CNCF Sandbox project).

Time: ~15 minutes · Difficulty: Intermediate

What you'll learn:

  • How Kubernetes operators manage stateful workloads
  • Deploying a 3-instance PostgreSQL cluster with a single YAML file
  • Automatic failover: delete the primary, watch a standby promote in seconds
  • Operator-managed services for read-write, read-only, and any-instance access
  • Auto-generated credentials stored as Kubernetes Secrets

How This Differs from a Plain Postgres Deployment


The Redis demo uses a simple Deployment for PostgreSQL. That works for learning and development, but it leaves you with:

  • A single pod with no replication
  • No automatic failover
  • No managed backups
  • Manual credential management
  • Manual PVC lifecycle management

CloudNativePG replaces all of that with an operator. You declare the desired state (“I want 3 instances with 1Gi storage”), and the operator handles replication, failover, credentials, services, and PVC management.
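
That declaration fits in a short manifest. The repo's manifests/cluster.yaml isn't reproduced here, but a minimal CNPG Cluster matching the description above would look roughly like this (a sketch — names taken from the demo, the real file may set more fields):

```yaml
# Hypothetical sketch of manifests/cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: demo-pg
  namespace: cnpg-demo
spec:
  instances: 3        # one primary + two replicas, managed by the operator
  storage:
    size: 1Gi         # the operator creates and owns one PVC per instance
```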

Prerequisites:

  • Minikube running with at least 4 CPUs and 8 GB RAM
  • Helm v3.13+
Install the operator with Helm:

```sh
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
helm install cnpg cnpg/cloudnative-pg \
  --namespace cnpg-system \
  --create-namespace
```

Wait for the operator to be ready:

```sh
kubectl rollout status deployment/cnpg-cloudnative-pg -n cnpg-system
```

Navigate to the demo directory:

```sh
cd demos/cloudnative-pg
```

Then deploy the cluster:

```sh
kubectl apply -f manifests/namespace.yaml
kubectl apply -f manifests/cluster.yaml
```

Watch the cluster come up (takes 1-2 minutes):

```sh
kubectl get pods -n cnpg-demo -w
```

You should see 3 pods: demo-pg-1 (primary), demo-pg-2 (replica), demo-pg-3 (replica).

Check the cluster status:

```sh
# If you have the cnpg kubectl plugin:
kubectl cnpg status demo-pg -n cnpg-demo

# Without the plugin:
kubectl get cluster demo-pg -n cnpg-demo
kubectl get pods -l cnpg.io/cluster=demo-pg -n cnpg-demo -o wide
```
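
Scripting a health check amounts to parsing the Cluster resource's status block. A minimal Python sketch against stand-in JSON (the field names `phase`, `instances`, and `readyInstances` follow CNPG's status conventions, but treat them as assumptions and verify against your operator version):

```python
import json

# Stand-in for `kubectl get cluster demo-pg -n cnpg-demo -o json` (invented values).
cluster_json = json.dumps({
    "status": {
        "phase": "Cluster in healthy state",
        "instances": 3,
        "readyInstances": 3,
    }
})

status = json.loads(cluster_json)["status"]
healthy = status["readyInstances"] == status["instances"]
print(f'{status["phase"]}: {status["readyInstances"]}/{status["instances"]} ready')
```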

The operator auto-creates credentials. Retrieve them:

```sh
# Get the app user password
kubectl get secret demo-pg-app -n cnpg-demo \
  -o jsonpath='{.data.password}' | base64 -d && echo

# Get the full connection URI
kubectl get secret demo-pg-app -n cnpg-demo \
  -o jsonpath='{.data.uri}' | base64 -d && echo
```
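
Under the hood, those jsonpath-plus-base64 pipes just read fields from the Secret's `data` map, which Kubernetes stores base64-encoded. The same decoding in a short Python sketch (example values invented, not real credentials):

```python
import base64
import json

# Stand-in for `kubectl get secret demo-pg-app -o json` output (values invented).
secret_json = json.dumps({
    "data": {
        "username": base64.b64encode(b"app").decode(),
        "password": base64.b64encode(b"s3cr3t").decode(),
    }
})

data = json.loads(secret_json)["data"]
user = base64.b64decode(data["username"]).decode()
password = base64.b64decode(data["password"]).decode()

# Build the same kind of URI the operator stores under the `uri` key.
uri = f"postgresql://{user}:{password}@demo-pg-rw.cnpg-demo.svc:5432/app"
print(uri)
```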

Deploy a client pod and connect using DNS:

```sh
kubectl apply -f manifests/client-pod.yaml

# Wait for it to start
kubectl wait --for=condition=Ready pod/pg-client -n cnpg-demo --timeout=60s

# Connect to the primary (read-write)
kubectl exec -it pg-client -n cnpg-demo -- \
  psql "postgresql://app:$(kubectl get secret demo-pg-app -n cnpg-demo -o jsonpath='{.data.password}' | base64 -d)@demo-pg-rw:5432/app"
```

Alternatively, port-forward and connect from your own machine (psql will prompt for the password):

```sh
kubectl port-forward svc/demo-pg-rw 5432:5432 -n cnpg-demo
psql "postgresql://app@localhost:5432/app"
```

This is the key demo. Delete the primary pod and watch the operator promote a standby.

First, identify the current primary:

```sh
kubectl get pods -l cnpg.io/cluster=demo-pg -n cnpg-demo \
  -L cnpg.io/instanceRole
```

The pod with role `primary` is the current leader.

Write a row so you can verify durability after the failover:

```sh
kubectl exec -it pg-client -n cnpg-demo -- \
  psql "postgresql://app:$(kubectl get secret demo-pg-app -n cnpg-demo -o jsonpath='{.data.password}' | base64 -d)@demo-pg-rw:5432/app" \
  -c "CREATE TABLE failover_test (id serial, ts timestamp DEFAULT now()); INSERT INTO failover_test DEFAULT VALUES;"
```

Now delete the primary:

```sh
# Find the primary pod name
PRIMARY=$(kubectl get pod -l cnpg.io/cluster=demo-pg,cnpg.io/instanceRole=primary \
  -n cnpg-demo -o jsonpath='{.items[0].metadata.name}')
echo "Deleting primary: $PRIMARY"
kubectl delete pod "$PRIMARY" -n cnpg-demo
```
```sh
# Watch pods - a new primary is elected within seconds
kubectl get pods -l cnpg.io/cluster=demo-pg -n cnpg-demo \
  -L cnpg.io/instanceRole -w
```

A different pod now has the primary role. The deleted pod comes back as a replica.

Verify the data survived:

```sh
kubectl exec -it pg-client -n cnpg-demo -- \
  psql "postgresql://app:$(kubectl get secret demo-pg-app -n cnpg-demo -o jsonpath='{.data.password}' | base64 -d)@demo-pg-rw:5432/app" \
  -c "SELECT * FROM failover_test;"
```

Your data is intact. The demo-pg-rw service automatically switched to the new primary.
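
Because demo-pg-rw always resolves to the current primary, application code never needs to know which pod is leader — it only needs to reconnect when an in-flight connection drops during the switchover. A minimal retry sketch (the `connect` callable here is a stand-in for a real driver call such as psycopg's `connect`; it is not part of CNPG):

```python
import time

def connect_with_retry(connect, retries=5, delay=2.0):
    """Call `connect` until it succeeds, waiting between attempts.

    During a CNPG failover the -rw service briefly has no endpoint,
    so early attempts may fail; a later retry reaches the new primary.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return connect()
        except ConnectionError as exc:
            last_error = exc
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    raise last_error

# Demo with a fake driver that fails twice, then succeeds.
attempts = {"n": 0}
def fake_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("server closed the connection unexpectedly")
    return "connected"

print(connect_with_retry(fake_connect, delay=0.01))
```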

The operator creates three services with different routing behavior:

| Service | DNS Name | Routes To | Use For |
| --- | --- | --- | --- |
| demo-pg-rw | demo-pg-rw.cnpg-demo.svc | Primary only | Writes, DDL, transactions |
| demo-pg-ro | demo-pg-ro.cnpg-demo.svc | Replicas only | Read-heavy queries |
| demo-pg-r | demo-pg-r.cnpg-demo.svc | Any instance | Reads that tolerate stale data |
List them:

```sh
kubectl get svc -n cnpg-demo
```
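
The table above maps naturally onto a client-side routing rule: send writes to the -rw DNS name and reads to -ro (or -r when stale reads are acceptable). A tiny sketch — the DSN strings simply mirror the operator's service names; nothing here is a CNPG API:

```python
# Route connections to the appropriate operator-managed service by intent.
SERVICES = {
    "write": "demo-pg-rw.cnpg-demo.svc",  # primary only
    "read": "demo-pg-ro.cnpg-demo.svc",   # replicas only
    "any": "demo-pg-r.cnpg-demo.svc",     # any instance, may serve stale data
}

def dsn_for(intent: str, user: str = "app", db: str = "app") -> str:
    host = SERVICES[intent]
    return f"postgresql://{user}@{host}:5432/{db}"

print(dsn_for("write"))
print(dsn_for("read"))
```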
Things to try next:

  1. Scale up: Edit manifests/cluster.yaml, change instances: 3 to instances: 5, and apply. Watch new replicas join automatically.

  2. Check replication lag:

     ```sh
     kubectl exec -it demo-pg-1 -n cnpg-demo -- \
       psql -U postgres -c "SELECT * FROM pg_stat_replication;"
     ```

  3. View PostgreSQL logs:

     ```sh
     kubectl logs demo-pg-1 -n cnpg-demo | tail -20
     ```

  4. Inspect the managed PVCs:

     ```sh
     kubectl get pvc -n cnpg-demo
     ```
Clean up when you're done:

```sh
kubectl delete namespace cnpg-demo
helm uninstall cnpg -n cnpg-system
kubectl delete namespace cnpg-system
```

See docs/deep-dive.md for a detailed explanation of the operator architecture, the Cluster CRD fields, replication topology, backup strategies, connection pooling with PgBouncer, and how CNPG compares to other PostgreSQL operators.

Move on to StatefulSet vs Deployment to understand why stateful workloads need different abstractions.