# CloudNativePG

Run production-grade PostgreSQL on Kubernetes using the CloudNativePG operator (a CNCF Sandbox project).

**Time:** ~15 minutes · **Difficulty:** Intermediate
## What You Will Learn

- How Kubernetes operators manage stateful workloads
- Deploying a 3-instance PostgreSQL cluster with a single YAML file
- Automatic failover: delete the primary, watch a standby promote in seconds
- Operator-managed services for read-write, read-only, and any-instance access
- Auto-generated credentials stored as Kubernetes Secrets
## How This Differs from a Plain Postgres Deployment

The Redis demo runs PostgreSQL as a plain Deployment. That works for learning and development, but it leaves you with:
- A single pod with no replication
- No automatic failover
- No managed backups
- Manual credential management
- Manual PVC lifecycle management
CloudNativePG replaces all of that with an operator. You declare the desired state (“I want 3 instances with 1Gi storage”), and the operator handles replication, failover, credentials, services, and PVC management.
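The demo's `manifests/cluster.yaml` is not reproduced here, but a minimal CNPG `Cluster` resource matching that declaration would look roughly like this (a sketch using this demo's names and sizes, assuming the standard `postgresql.cnpg.io/v1` API):

```yaml
# Minimal sketch of a CNPG Cluster: 3 instances, 1Gi of storage each.
# The operator derives everything else: replication, services, secrets, PVCs.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: demo-pg
  namespace: cnpg-demo
spec:
  instances: 3
  storage:
    size: 1Gi
```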
## Prerequisites

- Minikube running with at least 4 CPUs and 8 GB RAM
- Helm v3.13+
## Install the Operator

```shell
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update

helm install cnpg cnpg/cloudnative-pg \
  --namespace cnpg-system \
  --create-namespace
```

Wait for the operator to be ready:

```shell
kubectl rollout status deployment/cnpg-cloudnative-pg -n cnpg-system
```

## Deploy a PostgreSQL Cluster
Navigate to the demo directory:

```shell
cd demos/cloudnative-pg
```

Then deploy the cluster:

```shell
kubectl apply -f manifests/namespace.yaml
kubectl apply -f manifests/cluster.yaml
```

Watch the cluster come up (takes 1-2 minutes):

```shell
kubectl get pods -n cnpg-demo -w
```

You should see three pods: `demo-pg-1` (primary), `demo-pg-2` (replica), and `demo-pg-3` (replica).
## Check Cluster Status

```shell
# If you have the cnpg kubectl plugin:
kubectl cnpg status demo-pg -n cnpg-demo

# Without the plugin:
kubectl get cluster demo-pg -n cnpg-demo
kubectl get pods -l cnpg.io/cluster=demo-pg -n cnpg-demo -o wide
```

## Connect to the Database
The operator auto-creates credentials. Retrieve them:

```shell
# Get the app user password
kubectl get secret demo-pg-app -n cnpg-demo \
  -o jsonpath='{.data.password}' | base64 -d && echo
```
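The values in a Secret's `data` field are base64-encoded (not encrypted), which is why these commands pipe through `base64 -d`. A quick local illustration with a made-up value:

```shell
# Secrets store base64-encoded bytes; decoding is a plain base64 operation
encoded="c3VwZXJzZWNyZXQ="        # hypothetical value, not a real credential
printf '%s' "$encoded" | base64 -d && echo
# prints: supersecret
```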
```shell
# Get the full connection URI
kubectl get secret demo-pg-app -n cnpg-demo \
  -o jsonpath='{.data.uri}' | base64 -d && echo
```

### Option A: From inside the cluster
Deploy a client pod and connect using DNS:

```shell
kubectl apply -f manifests/client-pod.yaml

# Wait for it to start
kubectl wait --for=condition=Ready pod/pg-client -n cnpg-demo --timeout=60s

# Connect to the primary (read-write)
kubectl exec -it pg-client -n cnpg-demo -- \
  psql "postgresql://app:$(kubectl get secret demo-pg-app -n cnpg-demo -o jsonpath='{.data.password}' | base64 -d)@demo-pg-rw:5432/app"
```

### Option B: Port-forward
```shell
kubectl port-forward svc/demo-pg-rw 5432:5432 -n cnpg-demo
```

Then, in another terminal, connect (supply the password retrieved above when prompted):

```shell
psql "postgresql://app@localhost:5432/app"
```

## Test Automatic Failover
This is the key demo. Delete the primary pod and watch the operator promote a standby.
### Step 1: Identify the primary

```shell
kubectl get pods -l cnpg.io/cluster=demo-pg -n cnpg-demo \
  -L cnpg.io/instanceRole
```

The pod with role `primary` is the current leader.
### Step 2: Write some data

```shell
kubectl exec -it pg-client -n cnpg-demo -- \
  psql "postgresql://app:$(kubectl get secret demo-pg-app -n cnpg-demo -o jsonpath='{.data.password}' | base64 -d)@demo-pg-rw:5432/app" \
  -c "CREATE TABLE failover_test (id serial, ts timestamp DEFAULT now()); INSERT INTO failover_test DEFAULT VALUES;"
```

### Step 3: Delete the primary
Section titled “Step 3: Delete the primary”# Find the primary pod namePRIMARY=$(kubectl get pod -l cnpg.io/cluster=demo-pg,cnpg.io/instanceRole=primary \ -n cnpg-demo -o jsonpath='{.items[0].metadata.name}')
echo "Deleting primary: $PRIMARY"kubectl delete pod "$PRIMARY" -n cnpg-demoStep 4: Watch failover happen
```shell
# Watch pods - a new primary is elected within seconds
kubectl get pods -l cnpg.io/cluster=demo-pg -n cnpg-demo \
  -L cnpg.io/instanceRole -w
```

A different pod now has the `primary` role. The deleted pod comes back as a replica.
### Step 5: Verify data survived

```shell
kubectl exec -it pg-client -n cnpg-demo -- \
  psql "postgresql://app:$(kubectl get secret demo-pg-app -n cnpg-demo -o jsonpath='{.data.password}' | base64 -d)@demo-pg-rw:5432/app" \
  -c "SELECT * FROM failover_test;"
```

Your data is intact. The `demo-pg-rw` service automatically switched to the new primary.
## Operator-Managed Services

The operator creates three services with different routing behavior:
| Service | DNS Name | Routes To | Use For |
|---|---|---|---|
| `demo-pg-rw` | `demo-pg-rw.cnpg-demo.svc` | Primary only | Writes, DDL, transactions |
| `demo-pg-ro` | `demo-pg-ro.cnpg-demo.svc` | Replicas only | Read-heavy queries |
| `demo-pg-r` | `demo-pg-r.cnpg-demo.svc` | Any instance | Reads that tolerate stale data |
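Since the three services expose identical PostgreSQL endpoints, the in-cluster connection strings differ only in the host component. A small sketch that prints all three (password omitted; the `app` user and database are this demo's defaults):

```shell
# Print the in-cluster DSN for each of the three operator-managed services
ns="cnpg-demo"
for svc in rw ro r; do
  echo "postgresql://app@demo-pg-${svc}.${ns}.svc:5432/app"
done
```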
```shell
kubectl get svc -n cnpg-demo
```

## Experiment
- **Scale up:** Edit `manifests/cluster.yaml`, change `instances: 3` to `instances: 5`, and apply. Watch new replicas join automatically.

- **Check replication lag:**

  ```shell
  kubectl exec -it demo-pg-1 -n cnpg-demo -- \
    psql -U postgres -c "SELECT * FROM pg_stat_replication;"
  ```

- **View PostgreSQL logs:**

  ```shell
  kubectl logs demo-pg-1 -n cnpg-demo | tail -20
  ```

- **Inspect the managed PVCs:**

  ```shell
  kubectl get pvc -n cnpg-demo
  ```
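For the scale-up experiment, the edit to `manifests/cluster.yaml` is a single field in the Cluster spec (a sketch, assuming the standard CNPG spec layout; the operator typically names new pods with the next serial numbers):

```yaml
spec:
  instances: 5   # was 3; the operator creates the extra replicas and their PVCs
```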
## Cleanup

```shell
kubectl delete namespace cnpg-demo
helm uninstall cnpg -n cnpg-system
kubectl delete namespace cnpg-system
```

## Further Reading
See `docs/deep-dive.md` for a detailed explanation of the operator architecture, the Cluster CRD fields, replication topology, backup strategies, connection pooling with PgBouncer, and how CNPG compares to other PostgreSQL operators.
## Next Step

Move on to StatefulSet vs Deployment to understand why stateful workloads need different abstractions.