
Redis Caching

See the impact of Redis caching with an interactive dashboard that compares database vs cached response times.

Time: ~15 minutes · Difficulty: Intermediate

What you'll learn:

  • The cache-aside (lazy-loading) pattern
  • Redis as a Kubernetes-native cache with LRU eviction
  • Measuring cache hit rates and speedup factors
  • How TTL and eviction policies work in practice
Browser --> FastAPI App --> Redis (cache layer)
                 |
                 v
         PostgreSQL (data layer)

The app checks Redis first. On a cache miss, it queries PostgreSQL, stores the result in Redis with a 60-second TTL, and returns it. Subsequent requests hit the cache and return in under 1ms.
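The flow above can be sketched in a few lines of Python. This is a minimal illustration of the cache-aside pattern only: an in-memory dict stands in for Redis and a stub function stands in for PostgreSQL, so the function names and the fake backends here are assumptions, not the demo's actual code (see src/main.py for that).

```python
import time

# Stand-ins for the real backends: a dict plays Redis, a function plays
# PostgreSQL. In the actual app these would be redis-py and database calls.
cache: dict[str, tuple[float, str]] = {}  # key -> (expires_at, value)
TTL_SECONDS = 60

def query_database(key: str) -> str:
    """Pretend this is the slow PostgreSQL query (~300-400 ms in the demo)."""
    return f"result-for-{key}"

def get_with_cache_aside(key: str) -> tuple[str, bool]:
    """Return (value, hit). Check the cache first; on a miss, query the
    database and store the result with a TTL (cache-aside / lazy loading)."""
    entry = cache.get(key)
    now = time.monotonic()
    if entry is not None and entry[0] > now:
        return entry[1], True               # cache hit: serve from memory
    value = query_database(key)             # cache miss: go to the database
    cache[key] = (now + TTL_SECONDS, value) # populate the cache with a TTL
    return value, False

value, hit = get_with_cache_aside("categories:stats")    # first call: miss
value2, hit2 = get_with_cache_aside("categories:stats")  # second call: hit
```

The key property of cache-aside is that the application, not the cache, owns the read path: the cache is only ever populated as a side effect of a miss, so cold keys cost one database round trip and then stay hot until the TTL expires.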

Terminal window
eval $(minikube docker-env)
docker build -t cache-demo-app:latest -f demos/redis/Containerfile demos/redis/

If using Podman: podman build -t cache-demo-app:latest -f demos/redis/Containerfile demos/redis/

Terminal window
kubectl apply -f demos/redis/manifests/namespace.yaml
kubectl apply -f demos/redis/manifests/postgres.yaml
kubectl apply -f demos/redis/manifests/redis.yaml
kubectl apply -f demos/redis/manifests/app.yaml
Terminal window
kubectl get pods -n redis-demo -w

Wait until all three pods (postgres, redis, app) show Running and 1/1 ready.

Terminal window
minikube service cache-demo-app -n redis-demo

Or use port-forwarding:

Terminal window
kubectl port-forward svc/cache-demo-app 8000:8000 -n redis-demo

Open http://localhost:8000 in your browser.

The dashboard provides three actions:

  • Run Single Comparison: Queries the database directly, then queries with cache. Shows both response times side by side.
  • Run Load Test (10x): Runs 10 rounds of database vs cache comparisons. Builds a bar chart of response times.
  • Flush Cache: Clears Redis and resets stats. Use this to start fresh.

What to look for:

  • First request (cache miss): ~300-400ms (database query + artificial delay)
  • Cached request (cache hit): ~0.5-2ms
  • Typical speedup: 100-900x
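The figures above reduce to simple arithmetic. This is a sketch of how hit rate and speedup are typically computed; the demo's exact formulas live in src/main.py and may differ:

```python
def hit_rate(hits: int, misses: int) -> float:
    """Cache hit rate as a percentage of all lookups."""
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

def speedup(db_ms: float, cache_ms: float) -> float:
    """How many times faster the cached path is than the database path."""
    return db_ms / cache_ms

# With the ballpark numbers above: a 350 ms database query vs a 1 ms cache hit.
print(speedup(350.0, 1.0))  # 350x, inside the 100-900x range quoted above
print(hit_rate(9, 1))       # 90.0 - nine hits out of ten lookups
```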

The stats bar at the top shows cache hit rate, Redis memory usage, and number of cached keys.

manifests/
  namespace.yaml     # redis-demo namespace
  postgres.yaml      # PostgreSQL with 5,510 rows of seed data
  redis.yaml         # Redis 7 with 64MB memory, LRU eviction
  app.yaml           # FastAPI app (NodePort service)
src/
  main.py            # Cache-aside logic, 5 API endpoints
  requirements.txt   # Python dependencies
templates/
  dashboard.html     # Interactive single-page dashboard
Containerfile        # UBI9 + Python 3.11
init.sql             # Database schema: categories, products, reviews

API endpoints:

  • GET /api/categories: Category stats with COUNT, AVG, MIN, MAX aggregations
  • GET /api/products?q=...: Text search across product names and descriptions
  • GET /api/top-products: Top-rated products using window functions
  • POST /api/flush-cache: Clear Redis and reset stats
  • GET /api/stats: Cache hit/miss rates, Redis memory, key count
Things to try:

  1. Run a load test and observe the bar chart. Red bars (database) should tower over green bars (cache).

  2. Wait 60 seconds after a load test. The TTL expires, and the next request will be a cache miss again.

  3. Check Redis directly:

    Terminal window
    kubectl exec -it deploy/redis -n redis-demo -- redis-cli
    > KEYS *
    > INFO memory
    > TTL "categories:stats"
  4. Watch application logs:

    Terminal window
    kubectl logs -f deploy/cache-demo-app -n redis-demo
When you're done, clean up:

Terminal window
kubectl delete namespace redis-demo

See docs/deep-dive.md for a detailed explanation of the cache-aside pattern, cache key design, TTL strategy, connection pooling, Redis LRU eviction, the database queries, and when to use (or not use) Redis caching.

Move on to YAKD to visualize your cluster with a lightweight dashboard.