Network Policies: Deep Dive

This document explains how Kubernetes NetworkPolicy objects control traffic between pods, how CNI plugins implement those rules, and the patterns you need for production zero-trust networking.

The Default: Everything Talks to Everything

Without any NetworkPolicy objects, Kubernetes allows all ingress and egress traffic between all pods in all namespaces. Every pod can reach every other pod by IP. Every pod can reach the internet. There are no firewalls.

This is by design. Kubernetes networking follows a flat network model where every pod gets a routable IP. The assumption is that you layer security on top.

NetworkPolicy is that layer.

A NetworkPolicy is a namespace-scoped resource that selects pods via labels and declares allowed ingress and egress traffic. It is purely declarative. The Kubernetes API server stores the policy, but it does not enforce it. Enforcement is the CNI plugin’s job.

Network policies follow an additive (union) model:

  1. If no policies select a pod, all traffic is allowed (default open).
  2. If any policy selects a pod for a given direction (ingress or egress), all traffic of that type is denied except what the policies explicitly allow.
  3. Multiple policies on the same pod are combined with OR. If policy A allows traffic from pod X and policy B allows traffic from pod Y, both X and Y can reach the pod.

Policies never deny. There is no “deny from X” rule. You deny by omission: apply a policy that allows nothing, then add policies that allow specific traffic.
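As a concrete illustration of the union model, suppose both of these hypothetical policies select the same backend pods. Applied together, they allow traffic from frontend pods in the same namespace and from any pod in a namespace labeled purpose: monitoring (names and labels here are illustrative):

```yaml
# Policy A: allow ingress from frontend pods (illustrative)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-a
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
---
# Policy B: allow ingress from monitoring namespaces (illustrative)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-b
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: monitoring
```

Neither policy can take anything away from the other; the effective rule set is the union of both.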

The demo starts with a deny-all policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Breaking this down:

  • podSelector: {} selects all pods in the namespace.
  • policyTypes: [Ingress, Egress] activates both ingress and egress rules.
  • No ingress or egress rules are defined, so nothing is allowed.

This is the zero-trust starting point. Apply this first, then add specific allow rules.

The empty podSelector ({}) is critical. It matches everything. If you used podSelector: { matchLabels: { app: frontend } }, only frontend pods would be affected. Other pods would remain wide open.
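For contrast, a scoped variant might look like this — a hypothetical policy that locks down only frontend pods while leaving the rest of the namespace open (the name and label are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-frontend-only  # hypothetical name
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: frontend         # only pods with this label are affected
  policyTypes:
  - Ingress
  - Egress
```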

NetworkPolicy supports three selector types for defining traffic sources and destinations.

Select pods within the same namespace:

ingress:
- from:
  - podSelector:
      matchLabels:
        tier: frontend

This allows traffic from pods labeled tier: frontend in the same namespace as the policy. The demo uses this to allow frontend-to-backend communication:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 80

This says: pods with tier: backend accept ingress on TCP/80 from pods with tier: frontend.

Select pods from other namespaces:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        purpose: monitoring

This allows traffic from any pod in any namespace labeled purpose: monitoring. The pod labels do not matter, only the namespace label.
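For the selector above to match anything, the namespace itself must carry the label. A minimal manifest (the namespace name is an assumption):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring        # illustrative name
  labels:
    purpose: monitoring   # the label the namespaceSelector matches on
```

For an existing namespace, kubectl label namespace monitoring purpose=monitoring adds the same label.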

This is where a subtle but important distinction exists. Two items in the from array are OR-ed. Two selectors in the same item are AND-ed.

OR (two separate items):

ingress:
- from:
  - podSelector:
      matchLabels:
        app: frontend
  - namespaceSelector:
      matchLabels:
        name: monitoring

This allows traffic from frontend pods in the current namespace OR any pod in the monitoring namespace.

AND (combined in one item):

ingress:
- from:
  - podSelector:
      matchLabels:
        app: prometheus
    namespaceSelector:
      matchLabels:
        name: monitoring

This allows traffic only from pods labeled app: prometheus that are also in a namespace labeled name: monitoring. Both conditions must be true.

The difference is a single dash (-) in the YAML. This is the single most common NetworkPolicy mistake. An extra or missing dash changes the policy from AND to OR, potentially opening traffic you intended to restrict.

Select traffic by CIDR range:

egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/8
      except:
      - 10.0.1.0/24

IP blocks are used for external services, VPN ranges, or on-premises networks that are not part of the Kubernetes cluster. The except field carves out sub-ranges.

IP blocks do not apply to pod-to-pod traffic within the cluster. The CNI resolves pod selectors to IPs. Use pod selectors for in-cluster traffic.

Every connection has two sides. For pod A to reach pod B:

  1. Pod A needs an egress rule allowing traffic to pod B.
  2. Pod B needs an ingress rule allowing traffic from pod A.

The demo implements both sides explicitly:

# Backend INGRESS: accept from frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 80
---
# Frontend EGRESS: allow sending to backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-egress
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 80

Both are needed because the deny-all policy blocks both directions. If you only defined the ingress rule on the backend, the frontend’s egress would still be blocked.

When you apply a deny-all egress policy, DNS stops working. Pods cannot resolve service names. The demo addresses this with a dedicated DNS policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to: []
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

This allows all pods in the namespace to send DNS queries (port 53, both UDP and TCP) to any destination. The to: [] means “any destination.” This is necessary because CoreDNS runs in the kube-system namespace, and the pod needs to reach it.

You could restrict DNS egress to only the CoreDNS pods using a namespace selector, but this is fragile. CoreDNS might move namespaces or be replaced by a different DNS provider.

DNS over TCP on port 53 is included because DNS falls back to TCP for responses larger than 512 bytes. Without the TCP rule, large DNS responses would fail.

Instead of hardcoding port numbers, you can reference named ports:

ingress:
- from:
  - podSelector:
      matchLabels:
        app: frontend
  ports:
  - protocol: TCP
    port: http  # References the container port named "http"

The port name must match a containerPort name in the target pod’s spec. This is useful when port numbers vary across services but the port name is consistent.

Named ports resolve to the actual port number of the selected pods. If different pods expose the same named port on different numbers, each pod gets the correct rule.
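For a named-port rule to resolve, the target pod's container must declare a matching port name. A minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend          # illustrative
  labels:
    tier: backend
spec:
  containers:
  - name: web
    image: nginx:1.25    # illustrative
    ports:
    - name: http         # matched by "port: http" in the policy
      containerPort: 80
```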

The Kubernetes API defines NetworkPolicy, but enforcement depends entirely on the CNI plugin. The API server does not enforce anything. If your CNI does not support NetworkPolicy, policies are stored but ignored.

Calico implements NetworkPolicy using iptables rules on each node. When a policy is created or updated:

  1. The Calico Felix agent on each node watches the Kubernetes API for NetworkPolicy changes.
  2. Felix translates policies into iptables rules in custom chains.
  3. Rules are installed on every node where affected pods run.
  4. Packets that do not match any allow rule are dropped.

Calico also extends the Kubernetes NetworkPolicy API with its own CRDs (NetworkPolicy and GlobalNetworkPolicy in the projectcalico.org API group) that support features like:

  • Global policies across all namespaces
  • Deny rules (not just allow)
  • Application layer policies (HTTP method, path)
  • Policy ordering with explicit priority
  • DNS-based policies

Cilium uses eBPF programs instead of iptables. eBPF programs run in the kernel and are significantly more efficient than iptables chains, especially at scale.

Cilium advantages:

  • No iptables overhead (O(1) lookup vs O(n) chain traversal)
  • Layer 7 policy (HTTP, gRPC, Kafka-aware)
  • Identity-based enforcement (pods get numeric identities, policies match on identities)
  • Better visibility (Hubble for network flow observability)

Cilium also extends NetworkPolicy with CiliumNetworkPolicy CRDs that support:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-policy
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/api/.*"

This restricts not just which pods can connect, but what HTTP methods and paths they can use.

If you use a CNI that does not support NetworkPolicy (like Flannel), the policy objects are created in the API server but have zero effect. Traffic flows as if the policies do not exist. There are no warnings, no errors. The policies are simply inert.

This is a common gotcha in development environments. You test on Minikube without Calico and think your policies work because kubectl get networkpolicy shows them. They are not enforced.

A few baseline patterns recur. To deny all ingress:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Blocks all incoming traffic. Egress (outgoing) is unaffected.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress

Blocks all outgoing traffic. Ingress is unaffected. But this also blocks DNS, so pods cannot resolve service names.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

This is what the demo uses. Total lockdown. Add allow rules on top.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - {}

The ingress: [{}] means “allow from everywhere.” This effectively undoes a deny-all-ingress policy.

To allow egress to external services by IP range:

egress:
- to:
  - ipBlock:
      cidr: 203.0.113.0/24
  ports:
  - protocol: TCP
    port: 443

This allows HTTPS traffic to the 203.0.113.0/24 range. Use this for external APIs, SaaS services, or on-premises systems.

To allow all internet access except internal ranges:

egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 10.0.0.0/8
      - 172.16.0.0/12
      - 192.168.0.0/16

This allows traffic to any public IP but blocks private IP ranges.

NetworkPolicy testing is notoriously difficult because policies are silently enforced. A misconfigured policy results in timeouts, not error messages.

# Test connectivity from a labeled pod
kubectl run test-frontend --rm -it \
  --image=busybox:1.36 \
  --labels="tier=frontend" \
  -n netpol-demo \
  -- wget -qO- --timeout=3 http://backend

# Test from an unlabeled pod (should be blocked)
kubectl run test-unlabeled --rm -it \
  --image=busybox:1.36 \
  -n netpol-demo \
  -- wget -qO- --timeout=3 http://backend

Tools like kube-linter and polaris check for common mistakes (policies selecting no pods, missing DNS rules). The kubectl np-viewer plugin visualizes allowed communication paths. Cilium provides cilium connectivity test for end-to-end validation.

There is no ordering or priority in standard Kubernetes NetworkPolicy. All policies that select a pod are combined additively. You cannot override one policy with another. Adding more policies can only expand what is allowed, never restrict it.

Calico’s GlobalNetworkPolicy CRD adds an order field for explicit precedence. Cilium’s CRDs support explicit deny rules via ingressDeny and egressDeny sections, which take precedence over allow rules.
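As a sketch of ordered deny semantics in Calico (using the projectcalico.org/v3 schema; the policy name and the blocked CIDR are illustrative, not from the demo):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-metadata-egress  # illustrative name
spec:
  order: 100          # lower order values are evaluated first
  selector: all()     # applies to all endpoints, cluster-wide
  types:
  - Egress
  egress:
  - action: Deny      # explicit deny: not expressible in standard NetworkPolicy
    destination:
      nets:
      - 169.254.169.254/32   # e.g. a cloud metadata endpoint
  - action: Allow     # everything else is permitted
```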

In production, cross-namespace policies use namespaceSelector. Since Kubernetes v1.22, every namespace gets a kubernetes.io/metadata.name label automatically:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: monitoring

For DNS, the simplest approach is allowing port 53 to all destinations (what the demo does). To restrict DNS to only internal CoreDNS, combine a namespace selector for kube-system with a pod selector for k8s-app: kube-dns.
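A sketch of that tighter DNS rule, assuming the default CoreDNS deployment labels (k8s-app: kube-dns in the kube-system namespace; verify the labels in your cluster):

```yaml
egress:
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: kube-system
    podSelector:                 # AND-ed with the namespaceSelector above
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
```

Note the absence of a dash before podSelector: both selectors sit in one from item, so the rule matches only kube-dns pods in kube-system.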

As described above, the dash placement in YAML changes AND to OR. Review this in every policy.

Allowing ingress to a pod is not enough if the source pod’s egress is blocked. Both sides must be open.

A deny-all egress policy breaks DNS resolution. Always pair it with a DNS allow rule.

Flannel, kindnet, and some other CNIs do not enforce NetworkPolicy. Always verify your CNI supports it.

Standard NetworkPolicy is namespace-scoped. You cannot create a cluster-wide default deny in one object. You must create a deny-all in every namespace. Calico and Cilium CRDs offer cluster-scoped alternatives.