
Tekton CI/CD Pipeline: Deep Dive

This document explains the internals behind a production-style CI/CD pipeline built with Tekton. It covers how Tekton Triggers turn external events into PipelineRuns, how Kaniko builds container images without a Docker daemon, and how Tekton fits into a broader GitOps architecture alongside ArgoCD.

For step-by-step instructions, see the README instead.


Tekton Triggers is a separate component from Tekton Pipelines. It adds four custom resources that connect external events to pipeline execution.

An EventListener is a custom resource that stands up an HTTP endpoint for incoming requests. When you apply one, Tekton creates a pod running an HTTP server and a Kubernetes Service pointing to it. External systems (GitHub, GitLab, Bitbucket) send webhooks to this Service.

TriggerBinding extracts values from the incoming HTTP request body. It maps JSON fields from the webhook payload to named parameters.

TriggerTemplate is a blueprint for creating Tekton resources. It receives the parameters extracted by the TriggerBinding and stamps out a PipelineRun (or TaskRun) with those values.

Interceptors sit between the EventListener and the TriggerBinding. They filter, validate, and transform incoming requests before the trigger fires.

The flow looks like this:

GitHub webhook POST
|
v
EventListener (receives HTTP request)
|
v
Interceptor (filters/validates/transforms)
|
v
TriggerBinding (extracts params from payload)
|
v
TriggerTemplate (creates PipelineRun with params)
|
v
PipelineRun (pipeline starts executing)

Here is the EventListener from this demo:

apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener
  namespace: tekton-cicd-demo
spec:
  serviceAccountName: default
  triggers:
    - name: github-push
      bindings:
        - ref: github-push-binding
      template:
        ref: build-deploy-trigger

The EventListener can host multiple triggers. Each trigger has its own bindings, template, and optional interceptors. This lets a single EventListener handle push events, pull request events, and tag events with different pipeline configurations.

The TriggerBinding maps webhook payload fields to Tekton parameters:

apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding
  namespace: tekton-cicd-demo
spec:
  params:
    - name: app-name
      value: demo-app
    - name: image-name
      value: demo-app:latest

In this demo, the binding uses static values. In a real-world scenario, you would extract values from the webhook payload using JSONPath expressions:

params:
  - name: git-revision
    value: $(body.head_commit.id)
  - name: git-repo-url
    value: $(body.repository.clone_url)
  - name: git-branch
    value: $(extensions.branch_name)

The $(body.*) syntax references fields in the HTTP request body. The $(header.*) syntax references HTTP headers. The $(extensions.*) syntax references values added by interceptors.
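Conceptually, the binding performs a path lookup into the parsed JSON body. A minimal Python sketch of that lookup (this is an illustration, not Tekton's implementation; the payload below is a trimmed, made-up GitHub push event):

```python
import json

def extract(payload: dict, path: str):
    """Walk a dotted path like 'head_commit.id' through a parsed JSON body,
    mimicking what a $(body.*) reference resolves to."""
    value = payload
    for key in path.split("."):
        value = value[key]
    return value

# A trimmed, fabricated GitHub push payload, as the EventListener would receive it.
webhook_body = json.loads("""
{
  "ref": "refs/heads/main",
  "head_commit": {"id": "a1b2c3d"},
  "repository": {"clone_url": "https://github.com/example/demo-app.git"}
}
""")

# The binding's job: turn payload fields into named parameters.
params = {
    "git-revision": extract(webhook_body, "head_commit.id"),
    "git-repo-url": extract(webhook_body, "repository.clone_url"),
}
```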

TriggerTemplate: Stamping Out PipelineRuns


The TriggerTemplate receives the extracted parameters and creates resources:

apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: build-deploy-trigger
  namespace: tekton-cicd-demo
spec:
  params:
    - name: app-name
      default: demo-app
    - name: image-name
      default: demo-app:latest
  resourcetemplates:
    - apiVersion: tekton.dev/v1
      kind: PipelineRun
      metadata:
        generateName: triggered-run-
      spec:
        pipelineRef:
          name: build-and-deploy
        params:
          - name: app-name
            value: $(tt.params.app-name)
          - name: image-name
            value: $(tt.params.image-name)
        workspaces:
          - name: shared-workspace
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 512Mi

Note the $(tt.params.*) syntax. This is specific to TriggerTemplates and references the template’s own params, not pipeline params. The resourcetemplates array can contain any Kubernetes resource, not just PipelineRuns.


Common Expression Language (CEL) interceptors let you filter and transform webhook payloads before triggering a pipeline. They are the most powerful interceptor type.

Not every webhook should trigger a build. You might want to build only on pushes to the main branch:

triggers:
  - name: main-branch-push
    interceptors:
      - ref:
          name: "cel"
        params:
          - name: "filter"
            value: "body.ref == 'refs/heads/main'"
    bindings:
      - ref: github-push-binding
    template:
      ref: build-deploy-trigger

The filter parameter is a CEL expression evaluated against the request. If it returns false, the trigger does not fire. No PipelineRun is created. The EventListener returns a 202 response but takes no action.

CEL interceptors can also add computed values to the request using overlays:

interceptors:
  - ref:
      name: "cel"
    params:
      - name: "filter"
        value: "body.ref.startsWith('refs/heads/')"
      - name: "overlays"
        value:
          - key: branch_name
            expression: "body.ref.split('/')[2]"

This extracts the branch name from refs/heads/main and makes it available as $(extensions.branch_name) in the TriggerBinding.
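The filter-and-overlay behavior can be mimicked in plain Python to make the semantics concrete (this models the two expressions above, it is not a CEL evaluator):

```python
def cel_like_trigger(body: dict):
    """Mimic the interceptor above:
    filter:  body.ref.startsWith('refs/heads/')  -- false means no PipelineRun
    overlay: body.ref.split('/')[2]              -- exposed as extensions.branch_name

    Note the [2] index means a branch containing slashes (refs/heads/release/v1)
    would yield only its first segment; that limitation is in the CEL expression
    itself, not in this sketch.
    """
    ref = body.get("ref", "")
    if not ref.startswith("refs/heads/"):
        return None  # filter failed: the trigger does not fire
    return {"branch_name": ref.split("/")[2]}  # overlay result
```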

Beyond CEL, Tekton provides:

  • GitHub interceptor: Validates webhook signatures using a shared secret. Prevents unauthorized triggers.
  • GitLab interceptor: Same concept for GitLab webhooks.
  • Bitbucket interceptor: Same concept for Bitbucket webhooks.
  • Webhook interceptor: Calls an external HTTP service for custom validation or transformation.

Interceptors can be chained. A common pattern is GitHub signature validation followed by CEL filtering:

interceptors:
  - ref:
      name: "github"
    params:
      - name: "secretRef"
        value:
          secretName: github-webhook-secret
          secretKey: token
  - ref:
      name: "cel"
    params:
      - name: "filter"
        value: "body.ref == 'refs/heads/main'"

In production, the EventListener needs to be reachable from GitHub’s servers. The typical setup involves:

  1. Expose the EventListener Service via an Ingress or Route (OpenShift)
  2. Configure a webhook in your GitHub repository settings pointing to that URL
  3. Create a Kubernetes Secret with the webhook shared secret
  4. Add a GitHub interceptor to validate the signature
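Step 4 is worth unpacking: GitHub signs the raw request body with HMAC-SHA256 using the shared secret and sends the digest in the X-Hub-Signature-256 header. The GitHub interceptor performs this check for you; a sketch of what it verifies (the secret and body values here are made up):

```python
import hashlib
import hmac

def signature_valid(secret: bytes, raw_body: bytes, header_value: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw body and compare it to the
    'sha256=<hexdigest>' value GitHub sends. compare_digest avoids
    timing side channels."""
    expected = "sha256=" + hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header_value)

secret = b"example-webhook-secret"   # the value stored in the Kubernetes Secret
body = b'{"ref": "refs/heads/main"}'
# What GitHub would put in the X-Hub-Signature-256 header for this body:
header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
```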

The EventListener creates a Service named el-<eventlistener-name>. In this demo, that is el-github-listener. You can verify it:

kubectl get svc -n tekton-cicd-demo -l eventlistener=github-listener

For local testing, the demo uses kubectl port-forward and curl to simulate a webhook:

curl -X POST http://localhost:8090 \
-H "Content-Type: application/json" \
-d '{"ref": "refs/heads/main"}'

Kaniko builds container images inside a container, without needing a Docker daemon. This is critical for Kubernetes CI because running Docker-in-Docker requires privileged containers, which is a security risk.

The Docker daemon uses a client-server architecture. The docker build command sends a build context to the Docker daemon, which processes the Dockerfile layer by layer. Kaniko does the same thing, but entirely in userspace within a single process.

Kaniko’s process:

  1. Parse the Dockerfile
  2. For each instruction (FROM, COPY, RUN, etc.), execute it in the current filesystem context
  3. After each instruction, snapshot the filesystem and create a layer diff
  4. Pack the layers into an OCI-compliant image tarball
  5. Optionally push the image to a registry
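Step 3, the snapshot, can be modeled as a diff between two filesystem states. A toy sketch (real Kaniko hashes file contents and metadata; here a "filesystem" is just a path-to-content dict):

```python
def layer_diff(before: dict, after: dict) -> dict:
    """A layer is the set of files added or changed by one Dockerfile
    instruction: everything present in 'after' that differs from 'before'."""
    return {path: content for path, content in after.items()
            if before.get(path) != content}

fs_before = {"/etc/nginx/nginx.conf": "stock config"}

# Simulate: COPY nginx.conf /etc/nginx/conf.d/default.conf
fs_after = dict(fs_before)
fs_after["/etc/nginx/conf.d/default.conf"] = "custom config"

layer = layer_diff(fs_before, fs_after)
```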

No daemon. No socket mounting. No privileged mode. Kaniko runs as a regular unprivileged container.

Here is the build-image Task from this demo:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: build-image
  namespace: tekton-cicd-demo
spec:
  params:
    - name: image-name
      type: string
  workspaces:
    - name: source
  results:
    - name: image-digest
  steps:
    - name: create-dockerfile
      image: alpine:3.19
      script: |
        #!/bin/sh
        cd $(workspaces.source.path)
        cat > Dockerfile <<'DOCKERFILE'
        FROM nginx:1.25.3-alpine
        COPY nginx.conf /etc/nginx/conf.d/default.conf
        COPY index.html /usr/share/nginx/html/index.html
        EXPOSE 8080
        DOCKERFILE
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=$(workspaces.source.path)/Dockerfile
        - --context=$(workspaces.source.path)
        - --destination=$(params.image-name)
        - --no-push
        - --tarPath=$(workspaces.source.path)/image.tar

The --no-push flag tells Kaniko to save the image as a tar file instead of pushing to a registry. In production, you would remove --no-push and provide registry credentials via a Kubernetes Secret mounted as a Docker config.

Kaniko supports layer caching to speed up builds. Two approaches:

Registry-based caching stores layer caches in a container registry:

args:
  - --cache=true
  - --cache-repo=registry.example.com/cache/my-app

Kaniko checks the registry for existing layers before building. If a layer’s inputs have not changed, it pulls the cached layer instead of rebuilding.
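The reuse decision hinges on a content-addressed cache key: a layer's key depends on its instruction and on every layer before it, so changing an early instruction invalidates everything downstream. A simplified model (Kaniko's real keys also cover the files an instruction touches):

```python
import hashlib

def cache_key(parent_key: str, instruction: str) -> str:
    """Chain each instruction's hash onto its parent's key, so a key
    transitively encodes the whole build history up to that layer."""
    return hashlib.sha256((parent_key + instruction).encode()).hexdigest()

base = cache_key("", "FROM golang:1.21")
k1 = cache_key(base, "COPY go.mod .")
k2 = cache_key(k1, "RUN go build -o /app")
```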

Local caching uses a persistent volume:

args:
  - --cache=true
  - --cache-dir=/cache

This is faster than registry-based caching but requires a PVC that persists across builds.

Kaniko fully supports multi-stage Dockerfiles. Each stage is built independently, and only the final stage’s filesystem becomes the output image. This is the standard pattern for building compiled applications:

FROM golang:1.21 AS builder
COPY . .
RUN go build -o /app
FROM gcr.io/distroless/base-debian12
COPY --from=builder /app /app
ENTRYPOINT ["/app"]

Kaniko processes both stages. The builder stage’s filesystem is discarded after the COPY instruction in the final stage extracts the compiled binary.


This demo uses both patterns. The prepare-source task is defined inline within the Pipeline. The test, build, and deploy tasks are standalone Task resources referenced by name.

tasks:
  - name: prepare-source
    taskSpec:
      workspaces:
        - name: source
      steps:
        - name: copy-source
          image: bitnami/kubectl:1.28
          script: |
            #!/bin/bash
            kubectl get configmap sample-app-source -n tekton-cicd-demo \
              -o jsonpath='{.data.index\.html}' > $(workspaces.source.path)/index.html
            kubectl get configmap sample-app-source -n tekton-cicd-demo \
              -o jsonpath='{.data.nginx\.conf}' > $(workspaces.source.path)/nginx.conf
    workspaces:
      - name: source
        workspace: shared-workspace

  - name: test
    taskRef:
      name: run-tests
    runAfter:
      - prepare-source
    workspaces:
      - name: source
        workspace: shared-workspace

Inline specs are good for simple, one-off tasks that are specific to a single pipeline. The prepare-source task is tightly coupled to this pipeline’s ConfigMap layout. It would not make sense to reuse it elsewhere.

Reusable tasks are good for common operations. Testing, building, and deploying are patterns that apply across many pipelines. Defining them as standalone Tasks lets you version them independently and share them across teams.

The tradeoff: inline specs keep everything in one file but make pipelines longer. Reusable tasks keep pipelines clean but require managing separate resources.


Tekton Tasks run as pods. Those pods need Kubernetes API access to interact with the cluster. This demo creates a dedicated RBAC setup:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-deployer
  namespace: tekton-cicd-demo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-deployer
  namespace: tekton-cicd-demo
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]

This follows the principle of least privilege. The pipeline ServiceAccount can read ConfigMaps (for source), manage Deployments and Services (for deploying), and list pods (for status checks). It cannot delete resources, create namespaces, or access secrets.
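A Role grants nothing on its own; it must be bound to the ServiceAccount. A RoleBinding along these lines completes the setup (a sketch reusing the tekton-deployer names from the manifest above, since the demo's actual binding is not shown here):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-deployer
  namespace: tekton-cicd-demo
subjects:
  - kind: ServiceAccount
    name: tekton-deployer
    namespace: tekton-cicd-demo
roleRef:
  kind: Role
  name: tekton-deployer
  apiGroup: rbac.authorization.k8s.io
```

The PipelineRun then runs under this ServiceAccount, so the task pods inherit exactly these permissions and nothing more.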

In production, you would also want:

  • Separate ServiceAccounts per pipeline so different pipelines have different permission scopes
  • ClusterRoles for cross-namespace operations if a pipeline deploys to multiple namespaces
  • ImagePullSecrets on the ServiceAccount for pulling private container images in task steps

Tekton Pipelines-as-Code (PaC) is a component that stores pipeline definitions inside your source repository. Instead of applying Tekton resources to the cluster manually, you commit files in a .tekton/ directory alongside your application code.

When a pull request is opened or code is pushed, PaC reads the pipeline definitions from the repository and creates PipelineRuns automatically. This gives you:

  • Version-controlled pipelines. The pipeline definition travels with the code it builds.
  • Pull request testing. Pipeline changes are tested in the PR context before merging.
  • No cluster-side pipeline management. Developers own their pipelines without needing cluster admin access.

A typical .tekton/ directory structure:

.tekton/
├── pull-request.yaml   # Pipeline triggered on PRs
└── push.yaml           # Pipeline triggered on pushes to main

PaC integrates with GitHub Apps, GitLab webhooks, and Bitbucket Cloud. It is the recommended approach for teams that want self-service CI/CD.


Tekton Chains observes TaskRun completions, signs their results, and generates provenance attestations. This addresses the software supply chain security problem: how do you prove that an artifact was built from specific source code, by a specific pipeline, in a specific environment?

  1. Chains runs as a controller alongside the Tekton Pipeline controller
  2. It watches for completed TaskRuns
  3. When a TaskRun produces an image (signalled via results named IMAGE_URL and IMAGE_DIGEST), Chains generates an attestation
  4. The attestation is signed using cosign (keyless or with a key pair)
  5. The signed attestation is stored alongside the image in the registry

Chains generates attestations in the SLSA (Supply-chain Levels for Software Artifacts) provenance format. SLSA defines levels of increasing assurance:

  • SLSA Level 1: Provenance exists (build metadata is recorded)
  • SLSA Level 2: Provenance is signed and hosted on a build service
  • SLSA Level 3: Provenance is non-forgeable (hardened build platform)

Tekton Chains with a properly configured cluster can achieve SLSA Level 2 out of the box and Level 3 with additional hardening.

The attestation includes:

  • Source repository URL and commit SHA
  • Builder image and version
  • Pipeline and Task definitions used
  • Start and completion times
  • All parameters passed to the build
  • Output image digest

This creates an auditable chain from source commit to deployed artifact.


By default, TaskRun and PipelineRun records are stored in etcd as Kubernetes resources. This works, but etcd is not designed for long-term storage of historical data. As runs accumulate, etcd performance degrades.

Tekton Results provides a separate storage backend. It:

  • Stores run records in a relational database (PostgreSQL or MySQL)
  • Exposes a gRPC and REST API for querying historical runs
  • Allows Kubernetes-side records to be pruned without losing history
  • Supports log streaming and storage

The architecture separates hot data (running pipelines in etcd) from cold data (historical records in a database). The Results API lets you query across namespaces, filter by status, and retrieve logs long after the original pods have been cleaned up.


Tekton and ArgoCD solve different halves of the delivery problem. They complement each other.

Aspect            | Tekton (CI)                      | ArgoCD (CD)
------------------|----------------------------------|-------------------------------
Purpose           | Build and test                   | Deploy and sync
Input             | Source code + triggers           | Git manifests
Output            | Container images + test results  | Running workloads
Execution model   | Run-to-completion (pipelines)    | Continuous reconciliation
State management  | PipelineRun (finite)             | Application (infinite loop)
Failure handling  | Retry, fail, stop                | Self-healing, drift correction
Trigger           | Webhook, manual, cron            | Git poll, webhook

Tekton is imperative: “run these steps in this order.” ArgoCD is declarative: “make the cluster look like this Git repo.”

You could deploy from a Tekton pipeline (this demo does exactly that with kubectl apply in the deploy task). But this loses ArgoCD’s continuous reconciliation. If someone manually changes a deployment, ArgoCD detects the drift and corrects it. A Tekton pipeline only runs once and moves on.

You could also trigger builds from ArgoCD. But ArgoCD is not designed for build orchestration. It does not handle multi-step workflows, test execution, or image building.


In a production GitOps setup, Tekton and ArgoCD divide responsibilities cleanly:

Developer pushes code to app repo
|
v
Tekton EventListener receives webhook
|
v
Tekton Pipeline: test --> build --> push image
|
v
Tekton updates image tag in deployment repo (GitOps repo)
|
v
ArgoCD detects change in GitOps repo
|
v
ArgoCD syncs: deploys new image to cluster

Two repositories are involved:

  1. Application repo: Contains source code and Tekton pipeline definitions (via Pipelines-as-Code). Tekton watches this for changes.
  2. GitOps repo: Contains Kubernetes manifests (or Helm charts, or Kustomize overlays) with image references. ArgoCD watches this for changes.

The bridge between them is the image tag update. After Tekton builds and pushes a new image, the last pipeline step updates the image tag in the GitOps repo. ArgoCD picks up that change and deploys it.
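That final pipeline step is usually a small text rewrite followed by a git commit and push. A Python sketch of the rewrite half (the manifest snippet and registry name are made up; real pipelines often use yq or kustomize edit set image instead):

```python
import re

def bump_image_tag(manifest: str, image: str, new_tag: str) -> str:
    """Replace the tag on a specific image reference in a manifest string.
    A real pipeline step would run this against a cloned GitOps repo,
    then commit and push the change for ArgoCD to pick up."""
    pattern = re.compile(rf"(image:\s*{re.escape(image)}):\S+")
    return pattern.sub(rf"\1:{new_tag}", manifest)

manifest = """
containers:
  - name: app
    image: registry.example.com/demo-app:v1.0.3
"""

updated = bump_image_tag(manifest, "registry.example.com/demo-app", "v1.0.4")
```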

This separation means:

  • Developers own CI. Pipeline definitions live in the app repo.
  • Platform teams own CD. ArgoCD Applications and deployment manifests live in the GitOps repo.
  • Git is the audit trail. Every deployment is traceable to a commit in both repos.
  • Rollback is a git revert. Reverting the image tag commit in the GitOps repo triggers ArgoCD to redeploy the previous version.

This demo collapses the full GitOps flow into a single pipeline for simplicity. The deploy-app task uses kubectl apply directly instead of updating a GitOps repo:

steps:
  - name: deploy
    image: bitnami/kubectl:1.28
    script: |
      #!/bin/bash
      kubectl apply -n tekton-cicd-demo -f - <<EOF
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: $(params.app-name)
        labels:
          app: $(params.app-name)
          deployed-by: tekton
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: $(params.app-name)
        template:
          metadata:
            labels:
              app: $(params.app-name)
          spec:
            containers:
              - name: app
                image: $(params.image-name)
                ports:
                  - containerPort: 8080
      EOF

This works for learning. In production, replace this step with one that commits the new image tag to a GitOps repository and let ArgoCD handle the actual deployment.


  1. Triggers turn webhooks into PipelineRuns. EventListener receives HTTP, TriggerBinding extracts data, TriggerTemplate stamps out resources. Interceptors filter and validate in between.

  2. CEL interceptors are the gatekeeper. They prevent unwanted events from triggering builds. Branch filtering, payload validation, and data transformation all happen here.

  3. Kaniko builds images in userspace. No Docker daemon, no privileged containers. It parses Dockerfiles, executes instructions, snapshots filesystem layers, and produces OCI images.

  4. Inline specs keep simple tasks close to the pipeline. Reusable tasks keep common operations standardized. Use both patterns intentionally.

  5. RBAC is not optional. Pipeline pods interact with the Kubernetes API. Scope their permissions to exactly what they need.

  6. Tekton Chains closes the supply chain loop. Automatic signing and attestation give you provenance from source to artifact.

  7. Tekton is CI. ArgoCD is CD. Together they form a complete GitOps pipeline with Git as the single source of truth and clear separation of build and deploy responsibilities.