# Microservices Platform
Deploy a multi-tier microservices application with a frontend, backend API, worker, Redis queue, and PostgreSQL database.
**Time:** ~15 minutes · **Difficulty:** Intermediate
**Resources:** This demo needs ~1 GB of RAM. Clean up other demos first:

```sh
task clean:all
```
## What You Will Learn

- Decomposing an application into independently deployable services
- Service-to-service communication inside a Kubernetes cluster
- Using Ingress to route external traffic to different backends
- Connecting application pods to databases and caches via environment variables
- Running background workers that process jobs from a queue
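As an illustration of the environment-variable pattern in the list above, a Deployment can inject connection details that resolve to the in-cluster Service DNS names. This is a sketch only; the variable names are hypothetical and the actual `backend.yaml` in this demo may wire things differently:

```yaml
# Sketch: a container spec handing connection details to the app via env vars.
# "postgres" and "redis" resolve via cluster DNS to the Services of those names.
# DATABASE_URL and REDIS_HOST are illustrative names, not from the demo manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: microservices-demo
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: nginx:1.27
          env:
            - name: DATABASE_URL   # hypothetical variable name
              value: postgres://orders@postgres:5432/ordersdb
            - name: REDIS_HOST     # hypothetical variable name
              value: redis
```

Because Services give each backing store a stable DNS name, the application never needs to know pod IPs; rescheduling postgres or redis does not change the values above.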
## Architecture

```
              +-----------+
              |  Ingress  |
              +-----+-----+
                    |
        +-----------+-----------+
        | /                     | /api
        v                       v
  +-----------+           +-----------+
  | Frontend  |           |  Backend  |
  |  (nginx)  |           | API(nginx)|
  +-----------+           +-----+-----+
                                |
                     +----------+----------+
                     |                     |
               +-----+-----+        +-----+-----+
               |   Redis   |        | PostgreSQL|
               |  (queue)  |        | (database)|
               +-----+-----+        +-----------+
                     |
               +-----+-----+
               |  Worker   |
               | (busybox) |
               +-----------+
```

The Ingress routes `/` to the Frontend and `/api` to the Backend API. The Backend connects to PostgreSQL for data storage and Redis for caching and job queuing. The Worker processes jobs from the Redis queue every 10 seconds.
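The path split shown in the diagram is what `ingress.yaml` implements. A minimal sketch, assuming standard path-based routing (the real manifest may use a different name or extra annotations):

```yaml
# Sketch: routes /api to the backend Service and everything else to the frontend.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: platform           # hypothetical name
  namespace: microservices-demo
spec:
  rules:
    - http:
        paths:
          - path: /api     # more specific prefix, matched for API traffic
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
          - path: /        # catch-all for the dashboard
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```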
## Deploy

### Step 1: Enable the Ingress addon

```sh
minikube addons enable ingress
```

### Step 2: Apply the manifests

```sh
kubectl apply -f demos/microservices-platform/manifests/namespace.yaml
kubectl apply -f demos/microservices-platform/manifests/postgres.yaml
kubectl apply -f demos/microservices-platform/manifests/redis.yaml
kubectl apply -f demos/microservices-platform/manifests/backend.yaml
kubectl apply -f demos/microservices-platform/manifests/worker.yaml
kubectl apply -f demos/microservices-platform/manifests/frontend.yaml
kubectl apply -f demos/microservices-platform/manifests/ingress.yaml
```

### Step 3: Wait for pods

```sh
kubectl get pods -n microservices-demo -w
```

Wait until all five pods (postgres, redis, backend, frontend, worker) show Running and 1/1 ready.
## Verify

```sh
# Check all pods are running
kubectl get pods -n microservices-demo
```
```sh
# Check all services
kubectl get svc -n microservices-demo
```
```sh
# Check the ingress
kubectl get ingress -n microservices-demo
```
```sh
# Access the frontend directly
kubectl port-forward svc/frontend 8080:80 -n microservices-demo
```
```sh
# Access the backend API directly
kubectl port-forward svc/backend 8081:80 -n microservices-demo
```

Open http://localhost:8080 to see the frontend dashboard. Open http://localhost:8081/api/orders to see the API response.
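The JSON you see at `/api/orders` comes from nginx itself rather than a real application server. One way to configure that is a ConfigMap-mounted server block that returns canned responses; this is a sketch with a hypothetical ConfigMap name and illustrative payloads, and the actual `backend.yaml` may differ:

```yaml
# Sketch: nginx configured to serve static JSON on the API paths.
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-config     # hypothetical name
  namespace: microservices-demo
data:
  default.conf: |
    server {
      listen 80;
      # Each location returns a fixed JSON body with the right content type.
      location /api/health {
        default_type application/json;
        return 200 '{"status":"ok"}';
      }
      location /api/orders {
        default_type application/json;
        return 200 '[{"id":1,"status":"pending"}]';
      }
    }
```

Mounting this file at `/etc/nginx/conf.d/default.conf` in the backend container is enough; no custom image is needed.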
```sh
# Watch the worker processing jobs
kubectl logs -f deploy/worker -n microservices-demo
```
```sh
# Verify PostgreSQL has seed data
kubectl exec -it deploy/postgres -n microservices-demo -- \
  psql -U orders -d ordersdb -c "SELECT * FROM orders;"
```
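The seed rows that query returns are created by an init script PostgreSQL runs on first startup. A sketch of how that is typically wired up with the official image: the ConfigMap name and sample row are hypothetical, and the column list is inferred from the INSERT used later in the Experiment section, so the exact SQL in `postgres.yaml` may differ:

```yaml
# Sketch: init SQL mounted into /docker-entrypoint-initdb.d, which the
# official postgres image executes once against an empty data directory.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init      # hypothetical name
  namespace: microservices-demo
data:
  init.sql: |
    CREATE TABLE orders (
      id            SERIAL PRIMARY KEY,
      customer_name TEXT NOT NULL,
      product       TEXT NOT NULL,
      quantity      INTEGER NOT NULL,
      status        TEXT NOT NULL DEFAULT 'pending'
    );
    -- Illustrative seed row; the demo inserts five of these.
    INSERT INTO orders (customer_name, product, quantity, status)
    VALUES ('Alice', 'Sensor A', 2, 'shipped');
```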
```sh
# Verify Redis is reachable
kubectl exec -it deploy/redis -n microservices-demo -- redis-cli PING
```

## What is Happening
```
manifests/
  namespace.yaml  # microservices-demo namespace
  postgres.yaml   # PostgreSQL 16 with init SQL creating orders table + seed data
  redis.yaml      # Redis 7 as queue and cache
  backend.yaml    # Nginx returning JSON at /api/health, /api/orders, /api/stats
  worker.yaml     # Busybox loop simulating job processing every 10 seconds
  frontend.yaml   # Nginx serving a static HTML dashboard
  ingress.yaml    # Routes /api to backend, / to frontend
```

PostgreSQL starts with a ConfigMap-mounted init script that creates an orders table and inserts five sample rows. Redis serves as a lightweight queue and cache. The Backend is an nginx server configured to return JSON responses on API endpoints. The Worker runs a shell loop that logs “Processing job” messages every 10 seconds, simulating a consumer pulling from the queue. The Frontend serves a static HTML page showing the architecture and linking to the API endpoints. The Ingress splits traffic by path so external users reach the correct service.
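The worker's 10-second loop is just a shell command in the pod spec. A sketch of what `worker.yaml` plausibly contains; the image tag and log message here are illustrative, not copied from the demo:

```yaml
# Sketch: busybox Deployment running an infinite loop that simulates
# pulling one job from the queue every 10 seconds.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
  namespace: microservices-demo
spec:
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c"]
          args:
            - |
              while true; do
                echo "Processing job at $(date)"
                sleep 10
              done
```

Because the loop never exits, the container stays Running, and `kubectl logs -f` shows a new line every 10 seconds.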
## Experiment

1. Scale the backend to handle more traffic:

   ```sh
   kubectl scale deployment backend --replicas=3 -n microservices-demo
   kubectl get pods -n microservices-demo -l app=backend
   ```

2. Scale the worker to process jobs faster:

   ```sh
   kubectl scale deployment worker --replicas=2 -n microservices-demo
   kubectl logs -f deploy/worker -n microservices-demo
   ```

3. Add more orders to PostgreSQL:

   ```sh
   kubectl exec -it deploy/postgres -n microservices-demo -- \
     psql -U orders -d ordersdb -c \
     "INSERT INTO orders (customer_name, product, quantity, status) VALUES ('Frank', 'Sensor F', 10, 'pending');"
   ```

4. Check Redis connectivity from the backend pod:

   ```sh
   kubectl exec -it deploy/backend -n microservices-demo -- \
     sh -c "apk add --no-cache redis && redis-cli -h redis PING"
   ```

5. Delete the worker pod and watch Kubernetes recreate it:

   ```sh
   kubectl delete pod -l app=worker -n microservices-demo --wait=false
   kubectl get pods -n microservices-demo -w
   ```
## Cleanup

```sh
kubectl delete namespace microservices-demo
```

## Further Reading

See docs/deep-dive.md for a detailed explanation of microservices decomposition patterns, service discovery in Kubernetes, ingress routing rules, and best practices for connecting services to databases and queues.
## Next Step

Move on to API Gateway to learn how Kong manages routing, rate limiting, and authentication for your APIs.