
Falco Runtime Security: Deep Dive

This deep dive explains how Falco works under the hood, the eBPF driver architecture, rule engine internals, alert pipelines, and how to build a comprehensive runtime security strategy.

Security does not stop at deployment time. You can scan images, enforce admission policies, and lock down RBAC, but threats can still emerge at runtime:

  • A developer execs into a pod and runs a shell (lateral movement risk)
  • A compromised container reads sensitive files or writes to system directories
  • An attacker exploits a zero-day vulnerability after your image passed scanning
  • A misconfigured pod spawns a crypto miner or tries to contact a C2 server

Shift-left security (scan early, fail fast) is essential, but it is not enough. You need runtime detection to catch threats that slip through.

Modern Kubernetes security requires multiple layers:

| Layer   | Tool                     | What It Does                                       |
| ------- | ------------------------ | -------------------------------------------------- |
| Prevent | Kyverno, OPA             | Block bad configs at admission time                |
| Scan    | Trivy, Grype             | Detect vulnerabilities in images before deployment |
| Detect  | Falco                    | Catch suspicious behavior at runtime               |
| Respond | Falcosidekick, PagerDuty | Alert teams, trigger incident response             |

Falco sits in the detection layer. It does not prevent anything. It observes and alerts.

Falco uses eBPF (extended Berkeley Packet Filter) to monitor kernel events without loading a kernel module.

eBPF is a Linux kernel technology that lets you run sandboxed programs inside the kernel. Originally designed for network packet filtering, it now supports tracing, profiling, and security.

Key characteristics:

  • Safe: eBPF programs are verified before loading to ensure they cannot crash the kernel.
  • Efficient: Programs run in kernel space, avoiding context switches.
  • Non-intrusive: No kernel module required, no reboot needed.

Falco installs an eBPF program that hooks into syscall events. Every time a process in a container makes a syscall (like open, execve, connect), the eBPF program captures it and sends it to Falco in userspace.

The flow:

  1. Container process calls open("/etc/shadow", O_RDONLY)
  2. Linux kernel triggers syscall entry hook
  3. Falco’s eBPF program captures the event (syscall type, file path, process ID, container ID, etc.)
  4. Event is sent to Falco’s userspace process via a ring buffer
  5. Falco rule engine evaluates the event against loaded rules
  6. If a rule matches, Falco emits an alert
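The six steps above can be sketched as a toy pipeline in Python (illustrative only; the real engine is C++ and the dict keys merely mimic Falco's field names):

```python
# Toy sketch of Falco's event pipeline (illustrative; the real engine is C++).
# The dict below stands in for an event captured by the eBPF probe (steps 1-4).
event = {
    "evt.type": "open",
    "fd.name": "/etc/shadow",
    "proc.name": "cat",
    "proc.pid": 4242,
    "container.id": "abc123",
}

def sensitive_file_opened(evt: dict) -> bool:
    """Toy condition: open() on /etc/shadow from inside a container."""
    return (
        evt["evt.type"] == "open"
        and evt["fd.name"] == "/etc/shadow"
        and evt["container.id"] != "host"
    )

def emit_alert(evt: dict) -> str:
    return (
        f"Sensitive file opened (file={evt['fd.name']} "
        f"proc={evt['proc.name']} pid={evt['proc.pid']})"
    )

# Steps 5-6: evaluate the rule against the event, emit an alert on a match.
if sensitive_file_opened(event):
    print(emit_alert(event))
    # Sensitive file opened (file=/etc/shadow proc=cat pid=4242)
```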

Modern eBPF vs kernel module:

Falco supports three driver types:

  • modern_ebpf: CO-RE eBPF probe bundled in the Falco binary (recommended; no driver download or build, requires a recent kernel with BTF)
  • ebpf: Legacy eBPF probe, built or downloaded per kernel version
  • kmod: Kernel module (fallback where eBPF is unavailable)

In this demo, we used driver.kind=modern_ebpf for the best balance of performance and compatibility.

Falco rules are written in YAML and evaluated against every syscall event. Each rule has three parts: condition, output, and priority.

Here is a simplified version of the “Terminal shell in container” rule:

- rule: Terminal shell in container
  desc: A shell was spawned in a container with an attached terminal
  condition: >
    spawned_process and
    container and
    proc.name in (bash, sh, zsh, ksh, ash) and
    proc.tty != 0
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name
    command=%proc.cmdline pid=%proc.pid)
  priority: WARNING

Breaking it down:

  • condition: Boolean expression that matches events. If true, the rule fires.
  • output: Template string for the alert message. Uses field references like %user.name.
  • priority: EMERGENCY, ALERT, CRITICAL, ERROR, WARNING, NOTICE, INFO, DEBUG.
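The %field references in output behave like template substitutions. A rough Python equivalent of that substitution step (a sketch only; Falco's real formatter supports richer syntax, and `<NA>` is how Falco renders missing fields):

```python
import re

def render_output(template: str, fields: dict) -> str:
    """Replace Falco-style %field.name references with event values.
    Sketch only; Falco's real formatter supports richer syntax."""
    return re.sub(
        r"%([a-z0-9_.]+)",
        lambda m: str(fields.get(m.group(1), "<NA>")),
        template,
    )

alert = render_output(
    "Shell spawned in container (user=%user.name container=%container.name pid=%proc.pid)",
    {"user.name": "root", "container.name": "web-1", "proc.pid": 4242},
)
print(alert)
# Shell spawned in container (user=root container=web-1 pid=4242)
```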

Falco provides hundreds of fields you can use in conditions and outputs:

| Category   | Examples                                        |
| ---------- | ----------------------------------------------- |
| Process    | proc.name, proc.cmdline, proc.pid, proc.ppid    |
| File       | fd.name, fd.directory, fd.type                  |
| User       | user.name, user.uid, user.loginuid              |
| Container  | container.id, container.name, container.image   |
| Kubernetes | k8s.pod.name, k8s.ns.name, k8s.deployment.name  |
| Network    | fd.sip, fd.dip, fd.sport, fd.dport              |

Falco supports macros (reusable condition fragments) and lists (arrays of values):

- macro: spawned_process
  condition: evt.type = execve and evt.dir = <

- list: shell_binaries
  items: [bash, sh, zsh, ksh, ash, fish]

- rule: Shell in container
  condition: spawned_process and container and proc.name in (shell_binaries)
  output: "Shell detected (command=%proc.cmdline)"
  priority: WARNING

This makes rules easier to read and maintain.
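Conceptually, the engine inlines macros and lists into each rule's condition at load time. A simplified Python illustration of that expansion (names taken from the example above; naive string replacement for intuition, not Falco's actual parser):

```python
# Naive illustration of macro/list expansion. Falco's loader does this
# properly with a parser; plain string replacement is just for intuition.
MACROS = {"spawned_process": "evt.type = execve and evt.dir = <"}
LISTS = {"shell_binaries": ["bash", "sh", "zsh", "ksh", "ash", "fish"]}

def expand(condition: str) -> str:
    for name, items in LISTS.items():
        condition = condition.replace(name, ", ".join(items))
    for name, body in MACROS.items():
        condition = condition.replace(name, f"({body})")
    return condition

print(expand("spawned_process and container and proc.name in (shell_binaries)"))
# (evt.type = execve and evt.dir = <) and container and proc.name in (bash, sh, zsh, ksh, ash, fish)
```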

Detect file access:

condition: >
  evt.type = open and
  fd.name = /etc/shadow and
  container.id != host

Detect network connections:

condition: >
  evt.type = connect and
  fd.sip = 0.0.0.0 and
  container

Detect process execution:

condition: >
  evt.type = execve and
  proc.name in (nc, ncat, socat)

Detect writes to system directories:

condition: >
  evt.type = open and
  evt.arg.flags contains O_CREAT and
  fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin)

Falco is a CNCF Graduated project. This is the highest maturity level in the CNCF landscape.

What Graduated means:

  • Production-ready, widely adopted
  • Strong governance and contributor diversity
  • Comprehensive documentation and testing
  • Regular releases with stability guarantees

Other Graduated projects include Kubernetes, Prometheus, Envoy, and Helm. Falco earning this status signals it is the de facto standard for runtime security in Kubernetes.

Falco emits alerts to multiple destinations:

By default, Falco writes alerts to stdout. You see them via kubectl logs.

Pros: Simple, built-in, works everywhere.
Cons: Alerts disappear when pods restart. No integration with external systems.

Falcosidekick is a companion project that forwards Falco alerts to external systems:

  • Chat: Slack, Microsoft Teams, Mattermost
  • Incident Management: PagerDuty, Opsgenie
  • SIEM: Splunk, Elasticsearch, Datadog
  • Messaging: Kafka, NATS, AWS SNS
  • Storage: S3, Google Cloud Storage
  • Webhooks: Generic HTTP endpoints

Enabling Falcosidekick:

helm install falco falcosecurity/falco \
  --set falcosidekick.enabled=true \
  --set falcosidekick.webui.enabled=true \
  --set falcosidekick.config.slack.webhookurl=<YOUR_SLACK_WEBHOOK>

Now every Falco alert is posted to Slack in real time.
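Under the hood, Falcosidekick consumes Falco's JSON output (enabled via json_output: true) and reshapes it per destination. A minimal sketch of that reshaping for a chat message (the alert below is an abridged example; real alerts carry more output_fields):

```python
import json

# An abridged Falco alert as emitted with json_output: true.
raw = """{
  "rule": "Terminal shell in container",
  "priority": "Warning",
  "time": "2024-05-01T12:00:00.000000000Z",
  "output": "Shell spawned in container (user=root container=web-1 command=sh)",
  "output_fields": {"container.name": "web-1", "user.name": "root"}
}"""

def to_chat_text(alert: dict) -> str:
    """Build a chat-style message body from a Falco alert.
    Sketch only; Falcosidekick does this (and much more) for you."""
    return f"[{alert['priority'].upper()}] {alert['rule']}: {alert['output']}"

alert = json.loads(raw)
print(to_chat_text(alert))
# [WARNING] Terminal shell in container: Shell spawned in container (user=root container=web-1 command=sh)
```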

Falco Exporter exposes alerts as Prometheus metrics. You can create Grafana dashboards and Alertmanager rules:

helm install falco-exporter falcosecurity/falco-exporter

Metrics include:

  • falco_events_total{priority="warning"} (counter)
  • falco_events_total{rule="Terminal shell in container"} (counter)

| Tool               | Approach                              | Strengths                                                  |
| ------------------ | ------------------------------------- | ---------------------------------------------------------- |
| Falco              | Syscall monitoring via eBPF           | Detects behavior, kernel-level visibility, CNCF Graduated  |
| Tracee (Aqua)      | eBPF-based runtime security           | Similar to Falco, focuses on attack signatures             |
| Sysdig Secure      | Commercial platform (built on Falco)  | Enterprise features, managed rules, incident response      |
| Tetragon (Cilium)  | eBPF-based security observability     | Policy enforcement + detection, network-aware              |

Falco is the most widely adopted open-source option. If you need commercial support or advanced features, Sysdig Secure is the commercial offering built on Falco.

You can write your own rules to detect application-specific threats.

Example: Detect a Pod Accessing the Kubernetes API


Suppose your app should never call the Kubernetes API server. You can create a rule:

- rule: Unauthorized K8s API Access
  desc: Detect container accessing Kubernetes API
  condition: >
    evt.type = connect and
    fd.sip = 0.0.0.0 and
    fd.dip = 10.96.0.1 and
    fd.dport = 443 and
    container.image contains "my-app"
  output: >
    Unauthorized K8s API access
    (container=%container.name pod=%k8s.pod.name)
  priority: ERROR

Replace 10.96.0.1 with your cluster’s API server ClusterIP (kubectl get svc kubernetes -n default).

Detect when a container reads your application’s secret config:

- rule: Config File Read
  desc: Detect reads to application config
  condition: >
    evt.type = open and
    fd.name = /app/config/secrets.json and
    container.name != admin-pod
  output: >
    Secrets file read by unauthorized container
    (container=%container.name user=%user.name)
  priority: CRITICAL

Option 1: ConfigMap

Create a ConfigMap with your custom rules and mount it into the Falco pod:

apiVersion: v1
kind: ConfigMap
metadata:
  name: falco-custom-rules
  namespace: falco-system
data:
  custom-rules.yaml: |
    - rule: My Custom Rule
      condition: ...
      output: ...
      priority: WARNING

Update Falco Helm values to load it:

customRules:
  custom-rules.yaml: |-
    - rule: My Custom Rule
      ...

Option 2: Helm Values

Pass custom rules directly in values.yaml:

customRules:
  my-rules.yaml: |-
    - rule: Detect Package Manager
      condition: >
        spawned_process and
        proc.name in (apk, apt, yum, dnf)
      output: "Package manager run in container (command=%proc.cmdline)"
      priority: WARNING

Upgrade Falco with the updated values (helm upgrade falco falcosecurity/falco -f values.yaml).

Falco can be noisy in some environments. Here is how to reduce false positives:

If your CI/CD pipeline regularly execs into pods, exclude those pods:

- rule: Terminal shell in container
  condition: >
    spawned_process and
    proc.name in (bash, sh) and
    not k8s.ns.name in (ci-system, ci-runners)
  output: ...
  priority: WARNING

Do not edit the default rules directly. Use append in custom rules:

- rule: Terminal shell in container
  append: true
  condition: and not k8s.pod.label[ignore-falco] = "true"

This adds the condition to the existing rule without overwriting it.

If a rule fires too often and is not actionable, downgrade it from WARNING to INFO:

- rule: Terminal shell in container
  priority: INFO
  override:
    priority: replace

Disable a rule entirely:

- rule: Write below binary dir
  enabled: false

Only do this if you are sure the rule is not useful in your environment.

Falco is a detection tool, not a prevention or response tool. To build a complete strategy:

Run Falco in audit mode for a week. Collect all events. Identify normal patterns (CI jobs, health checks, monitoring agents). Write rules to exclude known-good behavior.

Not every alert requires an immediate response. Use priority levels:

  • CRITICAL/ERROR: Page the on-call engineer immediately.
  • WARNING: Post to Slack, review during business hours.
  • INFO: Log for forensic analysis, no active monitoring.
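That tiering is easy to encode in whatever glue code sits between Falco and your alerting stack. A minimal routing sketch (destination names are placeholders, not real integration identifiers):

```python
# Map Falco priorities to destinations, matching the tiers above.
# Destination names are placeholders for your real integrations.
ROUTES = {
    "CRITICAL": "pagerduty",
    "ERROR": "pagerduty",
    "WARNING": "slack",
    "INFO": "log-archive",
}

def route(priority: str) -> str:
    # Unmapped priorities (NOTICE, DEBUG, ...) fall back to the log archive.
    return ROUTES.get(priority.upper(), "log-archive")

print(route("Critical"))  # pagerduty
print(route("warning"))   # slack
print(route("DEBUG"))     # log-archive
```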

Connect Falco to your incident response workflow:

  • Alerts -> PagerDuty: Trigger incidents for CRITICAL alerts
  • Alerts -> SIEM: Correlate with other security events
  • Alerts -> Webhooks: Trigger automated remediation (kill pod, isolate network)
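The webhook path can drive automated remediation. A hedged sketch of the decision logic such a handler might run (field names follow Falco's output_fields; in production you would call the Kubernetes API rather than shell out, and gate any destructive action carefully):

```python
# Decide a remediation action from a Falco alert's output_fields.
# Sketch only: a real handler would authenticate, audit, and use the
# Kubernetes API instead of printing a kubectl command.
def remediation_command(alert: dict) -> str:
    fields = alert.get("output_fields", {})
    pod = fields.get("k8s.pod.name")
    ns = fields.get("k8s.ns.name", "default")
    if alert["priority"].upper() in ("CRITICAL", "ERROR") and pod:
        return f"kubectl delete pod {pod} -n {ns}"
    return "no-op"

alert = {
    "priority": "Critical",
    "output_fields": {"k8s.pod.name": "web-1", "k8s.ns.name": "prod"},
}
print(remediation_command(alert))  # kubectl delete pod web-1 -n prod
```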

Runtime security is one layer. Combine with:

  • Admission control (Kyverno): Prevent bad configs
  • Image scanning (Trivy): Block vulnerable images
  • Network policies: Restrict pod-to-pod traffic
  • RBAC: Limit who can exec into pods

Review Falco rules quarterly. Remove rules that never fire. Add new rules as threats evolve.

Falco has low overhead, but high-traffic systems may need tuning.

Falco typically uses:

  • CPU: 0.1-0.5 cores per node
  • Memory: 100-300 MB per node

High-throughput systems (1000+ syscalls/sec) may see higher usage.

Falco uses a ring buffer to store events before processing. If the buffer fills up, events are dropped.

Tune buffer size via Helm:

driver:
  ebpf:
    bufSizePreset: 4

Presets: 1 (smallest) to 8 (largest). Larger buffers reduce dropped events but use more memory.

Check for dropped events:

kubectl logs -l app.kubernetes.io/name=falco -n falco-system | grep "Falco internal: syscall event drop"

If you see drops, increase buffer size or reduce rule complexity.
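To quantify drops over time, you can tally them from the log stream. A small sketch (the regex assumes the drop-message wording shown above; verify it against your Falco version):

```python
import re

# Sample lines in the shape of Falco's syscall-drop warnings (treat the
# exact wording as an assumption; check logs from your Falco version).
LOG = """\
Falco internal: syscall event drop. 10 system calls dropped in last second.
Falco internal: syscall event drop. 3 system calls dropped in last second.
"""

def total_drops(log_text: str) -> int:
    return sum(int(n) for n in re.findall(r"(\d+) system calls dropped", log_text))

print(total_drops(LOG))  # 13
```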

Best practices for running Falco in production clusters:

Falco should run on every node, including control plane nodes:

tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane

Set resource limits to prevent Falco from consuming too much CPU and memory:

resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi

Use persistent storage for rule files and configuration:

volumes:
  - name: rules
    persistentVolumeClaim:
      claimName: falco-rules

For large organizations with many clusters, centralize alerts:

  • Run Falco in each cluster
  • Forward alerts to a central SIEM (Splunk, Elasticsearch)
  • Create cross-cluster dashboards in Grafana

Falco is stateless, so no special HA configuration is needed. Each node runs its own Falco pod; if a pod crashes, the kubelet restarts it. No stored state is lost on restart, since events stream live from kernel hooks, though events occurring while the pod is down are not captured.

Runtime security tools use different detection methods:

| Approach                   | How It Works                                            | Pros                                 | Cons                                              |
| -------------------------- | ------------------------------------------------------- | ------------------------------------ | ------------------------------------------------- |
| Syscall Monitoring         | Hook into kernel syscalls (Falco, Tracee)               | Deep visibility, detects zero-days   | High volume of events, needs tuning               |
| Behavioral Analysis        | Machine learning on process behavior (Datadog, Sysdig)  | Low false positives                  | Requires baseline, may miss novel attacks         |
| File Integrity Monitoring  | Detect file changes (AIDE, Tripwire)                    | Simple, low overhead                 | Reactive, does not detect memory attacks          |
| Network Anomaly Detection  | Analyze traffic patterns (Cilium Hubble)                | Detects C2 traffic                   | Misses local attacks, encrypted traffic is opaque |

Falco uses syscall monitoring. It is the most comprehensive approach but requires tuning to reduce noise.

Falco is highly extensible:

Falco supports plugins for custom data sources:

  • k8saudit: Parse Kubernetes audit logs
  • cloudtrail: Detect AWS API threats
  • okta: Monitor identity events

Install a plugin via Helm:

plugins:
  - name: k8saudit
    library_path: /usr/share/falco/plugins/libk8saudit.so

Write your own output handler in Go or Python. Connect Falco to proprietary systems.

Falco exposes a gRPC API for programmatic access:

helm install falco falcosecurity/falco \
  --set grpc.enabled=true

Use the API to build custom dashboards, incident response tools, or compliance reports.

Falco is the industry standard for Kubernetes runtime security. It uses eBPF to monitor syscalls, matches events against rules, and emits alerts when threats are detected.

Key takeaways:

  • Runtime security is essential (shift-left is not enough)
  • eBPF provides deep kernel visibility without kernel modules
  • Falco rules are flexible and powerful (hundreds of built-in rules, easy to customize)
  • Integrate with Falcosidekick, Prometheus, and SIEM for automated response
  • Combine with admission control (Kyverno), scanning (Trivy), and network policies for defense in depth

Falco is CNCF Graduated, production-ready, and widely adopted. If you are serious about Kubernetes security, Falco should be part of your stack.