Network Policies: Deep Dive
This document explains how Kubernetes NetworkPolicy objects control traffic between pods, how CNI plugins implement those rules, and the patterns you need for production zero-trust networking.
The Default: Everything Talks to Everything
Without any NetworkPolicy objects, Kubernetes allows all ingress and egress traffic between all pods in all namespaces. Every pod can reach every other pod by IP. Every pod can reach the internet. There are no firewalls.
This is by design. Kubernetes networking follows a flat network model where every pod gets a routable IP. The assumption is that you layer security on top.
NetworkPolicy is that layer.
How NetworkPolicy Works
A NetworkPolicy is a namespace-scoped resource that selects pods via labels and declares allowed ingress and egress traffic. It is purely declarative. The Kubernetes API server stores the policy, but it does not enforce it. Enforcement is the CNI plugin’s job.
The Additive Model
Network policies follow an additive (union) model:
- If no policies select a pod, all traffic is allowed (default open).
- If any policy selects a pod for a given direction (ingress or egress), all traffic of that type is denied except what the policies explicitly allow.
- Multiple policies on the same pod are combined with OR. If policy A allows traffic from pod X and policy B allows traffic from pod Y, both X and Y can reach the pod.
Policies never deny. There is no “deny from X” rule. You deny by omission: apply a policy that allows nothing, then add policies that allow specific traffic.
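A sketch of the union model with two hypothetical policies (the names and labels here are illustrative, not from the demo): applied together, pods labeled app: api accept ingress from frontend pods OR batch pods.

```yaml
# Two policies selecting the same pods are OR-ed: the allowed set is the union.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-batch
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: batch
```

Neither policy can narrow what the other allows; deleting one only shrinks the allowed set.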
The deny-all Pattern
The demo starts with a deny-all policy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```

Breaking this down:

- `podSelector: {}` selects all pods in the namespace.
- `policyTypes: [Ingress, Egress]` activates both ingress and egress rules.
- No `ingress` or `egress` rules are defined, so nothing is allowed.
This is the zero-trust starting point. Apply this first, then add specific allow rules.
The empty podSelector ({}) is critical. It matches everything. If you used podSelector: { matchLabels: { app: frontend } }, only frontend pods would be affected. Other pods would remain wide open.
Selectors in Detail
NetworkPolicy supports three selector types for defining traffic sources and destinations.
Pod Selectors
Select pods within the same namespace:
```yaml
ingress:
- from:
  - podSelector:
      matchLabels:
        tier: frontend
```

This allows traffic from pods labeled `tier: frontend` in the same namespace as the policy. The demo uses this to allow frontend-to-backend communication:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 80
```

This says: pods with `tier: backend` accept ingress on TCP/80 from pods with `tier: frontend`.
Namespace Selectors
Select pods from other namespaces:
```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        purpose: monitoring
```

This allows traffic from any pod in any namespace labeled `purpose: monitoring`. The pod labels do not matter, only the namespace label.
Combining Pod and Namespace Selectors
This is where a subtle but important distinction exists. Two items in the `from` array are OR-ed. Two selectors in the same item are AND-ed.
OR (two separate items):
```yaml
ingress:
- from:
  - podSelector:
      matchLabels:
        app: frontend
  - namespaceSelector:
      matchLabels:
        name: monitoring
```

This allows traffic from `app: frontend` pods in the current namespace OR from any pod in a namespace labeled `name: monitoring`.
AND (combined in one item):
```yaml
ingress:
- from:
  - podSelector:
      matchLabels:
        app: prometheus
    namespaceSelector:
      matchLabels:
        name: monitoring
```

This allows traffic only from pods labeled `app: prometheus` that are also in a namespace labeled `name: monitoring`. Both conditions must be true.
The difference is a single dash (-) in the YAML. This is the single most common NetworkPolicy mistake. An extra or missing dash changes the policy from AND to OR, potentially opening traffic you intended to restrict.
IP Block Selectors
Select traffic by CIDR range:
```yaml
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/8
      except:
      - 10.0.1.0/24
```

IP blocks are used for external services, VPN ranges, or on-premises networks that are not part of the Kubernetes cluster. The `except` field carves out sub-ranges.
IP blocks do not apply to pod-to-pod traffic within the cluster. The CNI resolves pod selectors to IPs. Use pod selectors for in-cluster traffic.
Ingress and Egress Rules
Every connection has two sides. For pod A to reach pod B:
- Pod A needs an egress rule allowing traffic to pod B.
- Pod B needs an ingress rule allowing traffic from pod A.
The demo implements both sides explicitly:
```yaml
# Backend INGRESS: accept from frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 80
---
# Frontend EGRESS: allow sending to backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-egress
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 80
```

Both are needed because the deny-all policy blocks both directions. If you only defined the ingress rule on the backend, the frontend’s egress would still be blocked.
The DNS Problem
When you apply a deny-all egress policy, DNS stops working. Pods cannot resolve service names. The demo addresses this with a dedicated DNS policy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to: []
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

This allows all pods in the namespace to send DNS queries (port 53, both UDP and TCP) to any destination. The `to: []` means “any destination.” This is necessary because CoreDNS runs in the kube-system namespace, and the pod needs to reach it.
You could restrict DNS egress to only the CoreDNS pods using a namespace selector, but this is fragile. CoreDNS might move namespaces or be replaced by a different DNS provider.
DNS over TCP on port 53 is included because DNS falls back to TCP for responses larger than 512 bytes. Without the TCP rule, large DNS responses would fail.
Named Ports
Instead of hardcoding port numbers, you can reference named ports:
```yaml
ingress:
- from:
  - podSelector:
      matchLabels:
        app: frontend
  ports:
  - protocol: TCP
    port: http  # references the container port named "http"
```

The port name must match a containerPort name in the target pod’s spec. This is useful when port numbers vary across services but the port name is consistent.
Named ports resolve to the actual port number of the selected pods. If different pods expose the same named port on different numbers, each pod gets the correct rule.
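A minimal illustrative pod spec (the image and port number are hypothetical) showing the containerPort name that a `port: http` policy rule would resolve against:

```yaml
# The policy's "port: http" resolves to containerPort 8080 for this pod.
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    tier: backend
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - name: http          # referenced by the NetworkPolicy
      containerPort: 8080
```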
How CNIs Implement Policies
The Kubernetes API defines NetworkPolicy, but enforcement depends entirely on the CNI plugin. The API server does not enforce anything. If your CNI does not support NetworkPolicy, policies are stored but ignored.
Calico
Calico implements NetworkPolicy using iptables rules on each node. When a policy is created or updated:
- The Calico Felix agent on each node watches the Kubernetes API for NetworkPolicy changes.
- Felix translates policies into iptables rules in custom chains.
- Rules are installed on every node where affected pods run.
- Packets that do not match any allow rule are dropped.
Calico also extends the Kubernetes NetworkPolicy API with its own CRDs (NetworkPolicy and GlobalNetworkPolicy in the projectcalico.org API group) that support features like:
- Global policies across all namespaces
- Deny rules (not just allow)
- Application layer policies (HTTP method, path)
- Policy ordering with explicit priority
- DNS-based policies
Cilium
Cilium uses eBPF programs instead of iptables. eBPF programs run in the kernel and are significantly more efficient than iptables chains, especially at scale.
Cilium advantages:
- No iptables overhead (O(1) lookup vs O(n) chain traversal)
- Layer 7 policy (HTTP, gRPC, Kafka-aware)
- Identity-based enforcement (pods get numeric identities, policies match on identities)
- Better visibility (Hubble for network flow observability)
Cilium also extends NetworkPolicy with CiliumNetworkPolicy CRDs that support:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-policy
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/api/.*"
```

This restricts not just which pods can connect, but what HTTP methods and paths they can use.
What Happens Without a Supporting CNI
If you use a CNI that does not support NetworkPolicy (like Flannel), the policy objects are created in the API server but have zero effect. Traffic flows as if the policies do not exist. There are no warnings, no errors. The policies are simply inert.
This is a common gotcha in development environments. You test on Minikube without Calico and think your policies work because kubectl get networkpolicy shows them. They are not enforced.
Default Deny Patterns
Section titled “Default Deny Patterns”Deny All Ingress Only
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

Blocks all incoming traffic. Egress (outgoing) is unaffected.
Deny All Egress Only
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
```

Blocks all outgoing traffic. Ingress is unaffected. But this also blocks DNS, so pods cannot resolve service names.
Deny All Both
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```

This is what the demo uses. Total lockdown. Add allow rules on top.
Allow All Ingress (Reset)
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - {}
```

The single empty rule (`ingress: [{}]`) means “allow from everywhere.” This effectively undoes a deny-all-ingress policy.
CIDR Blocks for External Access
To allow egress to external services by IP range:
```yaml
egress:
- to:
  - ipBlock:
      cidr: 203.0.113.0/24
  ports:
  - protocol: TCP
    port: 443
```

This allows HTTPS traffic to the 203.0.113.0/24 range. Use this for external APIs, SaaS services, or on-premises systems.
To allow all internet access except internal ranges:
```yaml
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 10.0.0.0/8
      - 172.16.0.0/12
      - 192.168.0.0/16
```

This allows traffic to any public IP but blocks private IP ranges.
Policy Testing Approaches
NetworkPolicy testing is notoriously difficult because policies are silently enforced. A misconfigured policy results in timeouts, not error messages.
Manual Testing with Temporary Pods
Section titled “Manual Testing with Temporary Pods”# Test connectivity from a labeled podkubectl run test-frontend --rm -it \ --image=busybox:1.36 \ --labels="tier=frontend" \ -n netpol-demo \ -- wget -qO- --timeout=3 http://backend
# Test from an unlabeled pod (should be blocked)kubectl run test-unlabeled --rm -it \ --image=busybox:1.36 \ -n netpol-demo \ -- wget -qO- --timeout=3 http://backendPolicy Testing Tools
Tools like kube-linter and polaris check for common mistakes (policies selecting no pods, missing DNS rules). The kubectl np-viewer plugin visualizes allowed communication paths. Cilium provides cilium connectivity test for end-to-end validation.
Policy Ordering and Precedence
There is no ordering or priority in standard Kubernetes NetworkPolicy. All policies that select a pod are combined additively. You cannot override one policy with another. Adding more policies can only expand what is allowed, never restrict it.
Calico’s GlobalNetworkPolicy CRD adds an order field for explicit precedence. Cilium’s CRDs add explicit deny rules (ingressDeny and egressDeny), which standard NetworkPolicy lacks.
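A sketch of a cluster-wide default deny using Calico's CRD (the order value is illustrative; projectcalico.org/v3 resources require calicoctl or the Calico API server to apply):

```yaml
# GlobalNetworkPolicy is cluster-scoped; lower order values are evaluated first.
# With no allow rules, selected workloads default-deny in both directions.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  order: 2000            # evaluated after lower-order allow policies
  selector: all()        # every workload in every namespace
  types:
  - Ingress
  - Egress
```

Specific allow policies with a lower order (say, 100) then take precedence over this baseline.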
Multi-Namespace and DNS Policies
In production, cross-namespace policies use namespaceSelector. Since Kubernetes v1.22, every namespace gets a kubernetes.io/metadata.name label automatically:
```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: monitoring
```

For DNS, the simplest approach is allowing port 53 to all destinations (what the demo does). To restrict DNS to only internal CoreDNS, combine a namespace selector for kube-system with a pod selector for k8s-app: kube-dns.
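A sketch of that restricted DNS policy, assuming the default CoreDNS labels (verify with kubectl get pods -n kube-system --show-labels):

```yaml
# Allow DNS egress only to CoreDNS pods in kube-system.
# The namespaceSelector and podSelector in one item are AND-ed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-coredns-only
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

This breaks if the cluster's DNS provider uses different labels or a different namespace, which is the fragility trade-off noted above.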
Common Mistakes
Section titled “Common Mistakes”1. AND vs OR in Selectors
As described above, the dash placement in YAML changes AND to OR. Review this in every policy.
2. Forgetting Egress
Allowing ingress to a pod is not enough if the source pod’s egress is blocked. Both sides must be open.
3. Forgetting DNS
A deny-all egress policy breaks DNS resolution. Always pair it with a DNS allow rule.
4. Testing on a Non-Supporting CNI
Flannel, kindnet, and some other CNIs do not enforce NetworkPolicy. Always verify your CNI supports it.
5. Namespace-Scoped Only
Standard NetworkPolicy is namespace-scoped. You cannot create a cluster-wide default deny in one object. You must create a deny-all in every namespace. Calico and Cilium CRDs offer cluster-scoped alternatives.