Tekton CI/CD Pipeline: Deep Dive
This document explains the internals behind a production-style CI/CD pipeline built with Tekton. It covers how Tekton Triggers turn external events into PipelineRuns, how Kaniko builds container images without a Docker daemon, and how Tekton fits into a broader GitOps architecture alongside ArgoCD.
For step-by-step instructions, see the README instead.
Tekton Triggers Architecture
Tekton Triggers is a separate component from Tekton Pipelines. It adds four custom resources that connect external events to pipeline execution.
The Four Resources
EventListener is a Kubernetes Service that receives incoming HTTP requests. When you create an EventListener, Tekton creates a pod running an HTTP server and a Service pointing to it. External systems (GitHub, GitLab, Bitbucket) send webhooks to this Service.
TriggerBinding extracts values from the incoming HTTP request body. It maps JSON fields from the webhook payload to named parameters.
TriggerTemplate is a blueprint for creating Tekton resources. It receives the parameters extracted by the TriggerBinding and stamps out a PipelineRun (or TaskRun) with those values.
Interceptors sit between the EventListener and the TriggerBinding. They filter, validate, and transform incoming requests before the trigger fires.
How They Connect
The flow looks like this:
```
GitHub webhook POST
        |
        v
EventListener (receives HTTP request)
        |
        v
Interceptor (filters/validates/transforms)
        |
        v
TriggerBinding (extracts params from payload)
        |
        v
TriggerTemplate (creates PipelineRun with params)
        |
        v
PipelineRun (pipeline starts executing)
```

Here is the EventListener from this demo:
```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener
  namespace: tekton-cicd-demo
spec:
  serviceAccountName: default
  triggers:
    - name: github-push
      bindings:
        - ref: github-push-binding
      template:
        ref: build-deploy-trigger
```

The EventListener can host multiple triggers. Each trigger has its own bindings, template, and optional interceptors. This lets a single EventListener handle push events, pull request events, and tag events with different pipeline configurations.
TriggerBinding: Extracting Webhook Data
The TriggerBinding maps webhook payload fields to Tekton parameters:
```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding
  namespace: tekton-cicd-demo
spec:
  params:
    - name: app-name
      value: demo-app
    - name: image-name
      value: demo-app:latest
```

In this demo, the binding uses static values. In a real-world scenario, you would extract values from the webhook payload using JSONPath expressions:
```yaml
params:
  - name: git-revision
    value: $(body.head_commit.id)
  - name: git-repo-url
    value: $(body.repository.clone_url)
  - name: git-branch
    value: $(extensions.branch_name)
```

The $(body.*) syntax references fields in the HTTP request body. The $(header.*) syntax references HTTP headers. The $(extensions.*) syntax references values added by interceptors.
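All three syntaxes can appear in one binding. As an illustration (the parameter names and the X-GitHub-Event header are assumptions for this sketch, not part of the demo):

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-metadata-binding
spec:
  params:
    - name: git-revision
      value: $(body.head_commit.id)     # field from the webhook body
    - name: github-event
      value: $(header.X-GitHub-Event)   # HTTP header set by GitHub
    - name: git-branch
      value: $(extensions.branch_name)  # value added by a CEL overlay
```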
TriggerTemplate: Stamping Out PipelineRuns
The TriggerTemplate receives the extracted parameters and creates resources:
```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: build-deploy-trigger
  namespace: tekton-cicd-demo
spec:
  params:
    - name: app-name
      default: demo-app
    - name: image-name
      default: demo-app:latest
  resourcetemplates:
    - apiVersion: tekton.dev/v1
      kind: PipelineRun
      metadata:
        generateName: triggered-run-
      spec:
        pipelineRef:
          name: build-and-deploy
        params:
          - name: app-name
            value: $(tt.params.app-name)
          - name: image-name
            value: $(tt.params.image-name)
        workspaces:
          - name: shared-workspace
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 512Mi
```

Note the $(tt.params.*) syntax. This is specific to TriggerTemplates and references the template's own params, not pipeline params. The resourcetemplates array can contain any Kubernetes resource, not just PipelineRuns.
CEL Interceptors for Event Filtering
Common Expression Language (CEL) interceptors let you filter and transform webhook payloads before triggering a pipeline. They are the most powerful interceptor type.
Filtering Events
Not every webhook should trigger a build. You might want to build only on pushes
to the main branch:
```yaml
triggers:
  - name: main-branch-push
    interceptors:
      - ref:
          name: "cel"
        params:
          - name: "filter"
            value: "body.ref == 'refs/heads/main'"
    bindings:
      - ref: github-push-binding
    template:
      ref: build-deploy-trigger
```

The filter parameter is a CEL expression evaluated against the request. If it returns false, the trigger does not fire and no PipelineRun is created. The EventListener returns a 202 response but takes no action.
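Filters can also combine headers and body fields. A sketch using the header.match helper from Tekton's CEL environment (the exact expression is an assumption, not taken from this demo):

```yaml
interceptors:
  - ref:
      name: "cel"
    params:
      - name: "filter"
        # Fire only for GitHub push events targeting main.
        value: "header.match('X-GitHub-Event', 'push') && body.ref == 'refs/heads/main'"
```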
Adding Data with Overlays
CEL interceptors can also add computed values to the request using overlays:
```yaml
interceptors:
  - ref:
      name: "cel"
    params:
      - name: "filter"
        value: "body.ref.startsWith('refs/heads/')"
      - name: "overlays"
        value:
          - key: branch_name
            expression: "body.ref.split('/')[2]"
```

This extracts the branch name from refs/heads/main and makes it available as $(extensions.branch_name) in the TriggerBinding.
Other Interceptor Types
Beyond CEL, Tekton provides:
- GitHub interceptor: Validates webhook signatures using a shared secret. Prevents unauthorized triggers.
- GitLab interceptor: Same concept for GitLab webhooks.
- Bitbucket interceptor: Same concept for Bitbucket webhooks.
- Webhook interceptor: Calls an external HTTP service for custom validation or transformation.
Interceptors can be chained. A common pattern is GitHub signature validation followed by CEL filtering:
```yaml
interceptors:
  - ref:
      name: "github"
    params:
      - name: "secretRef"
        value:
          secretName: github-webhook-secret
          secretKey: token
  - ref:
      name: "cel"
    params:
      - name: "filter"
        value: "body.ref == 'refs/heads/main'"
```

GitHub Webhook Integration
In production, the EventListener needs to be reachable from GitHub's servers. The typical setup involves:
- Expose the EventListener Service via an Ingress or Route (OpenShift)
- Configure a webhook in your GitHub repository settings pointing to that URL
- Create a Kubernetes Secret with the webhook shared secret
- Add a GitHub interceptor to validate the signature
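Step 1 might look like the following sketch; the hostname and ingress class are placeholders, not part of this demo:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: github-listener
  namespace: tekton-cicd-demo
spec:
  ingressClassName: nginx
  rules:
    - host: tekton-webhooks.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: el-github-listener   # Service created by the EventListener
                port:
                  number: 8080
```

The Secret from step 3 can be created with kubectl create secret generic github-webhook-secret --from-literal=token=<random-string>, using the same value you enter in the GitHub webhook settings.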
The EventListener creates a Service named el-<eventlistener-name>. In this
demo, that is el-github-listener. You can verify it:
```shell
kubectl get svc -n tekton-cicd-demo -l eventlistener=github-listener
```

For local testing, the demo uses kubectl port-forward and curl to simulate a webhook:

```shell
curl -X POST http://localhost:8090 \
  -H "Content-Type: application/json" \
  -d '{"ref": "refs/heads/main"}'
```

Kaniko Build Internals
Kaniko builds container images inside a container, without needing a Docker daemon. This is critical for Kubernetes CI because running Docker-in-Docker requires privileged containers, which is a security risk.
How Kaniko Works Without Docker
The Docker daemon uses a client-server architecture. The docker build command
sends a build context to the Docker daemon, which processes the Dockerfile
layer by layer. Kaniko does the same thing, but entirely in userspace within a
single process.
Kaniko’s process:
- Parse the Dockerfile
- For each instruction (FROM, COPY, RUN, etc.), execute it in the current filesystem context
- After each instruction, snapshot the filesystem and create a layer diff
- Pack the layers into an OCI-compliant image tarball
- Optionally push the image to a registry
No daemon. No socket mounting. No privileged mode. Kaniko runs as a regular unprivileged container.
Here is the build-image Task from this demo:
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: build-image
  namespace: tekton-cicd-demo
spec:
  params:
    - name: image-name
      type: string
  workspaces:
    - name: source
  results:
    - name: image-digest
  steps:
    - name: create-dockerfile
      image: alpine:3.19
      script: |
        #!/bin/sh
        cd $(workspaces.source.path)
        cat > Dockerfile <<'DOCKERFILE'
        FROM nginx:1.25.3-alpine
        COPY nginx.conf /etc/nginx/conf.d/default.conf
        COPY index.html /usr/share/nginx/html/index.html
        EXPOSE 8080
        DOCKERFILE
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=$(workspaces.source.path)/Dockerfile
        - --context=$(workspaces.source.path)
        - --destination=$(params.image-name)
        - --no-push
        - --tarPath=$(workspaces.source.path)/image.tar
```

The --no-push flag tells Kaniko to save the image as a tar file instead of pushing to a registry. In production, you would remove --no-push and provide registry credentials via a Kubernetes Secret mounted as a Docker config.
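A sketch of that production variant, assuming a Secret named registry-credentials of type kubernetes.io/dockerconfigjson (the secret name and registry URL are placeholders). Kaniko looks for credentials at /kaniko/.docker/config.json:

```yaml
steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:latest
    args:
      - --dockerfile=$(workspaces.source.path)/Dockerfile
      - --context=$(workspaces.source.path)
      - --destination=registry.example.com/team/demo-app:latest
    volumeMounts:
      - name: docker-config
        mountPath: /kaniko/.docker   # where the executor reads config.json
volumes:
  - name: docker-config
    secret:
      secretName: registry-credentials
      items:
        - key: .dockerconfigjson
          path: config.json
```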
Layer Caching
Kaniko supports layer caching to speed up builds. Two approaches:
Registry-based caching stores layer caches in a container registry:
```yaml
args:
  - --cache=true
  - --cache-repo=registry.example.com/cache/my-app
```

Kaniko checks the registry for existing layers before building. If a layer's inputs have not changed, it pulls the cached layer instead of rebuilding.
Local caching uses a persistent volume:
```yaml
args:
  - --cache=true
  - --cache-dir=/cache
```

This is faster than registry-based caching but requires a PVC that persists across builds.
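Wiring that up inside a Task might look like this sketch, assuming a pre-created PVC named kaniko-cache-pvc:

```yaml
steps:
  - name: build
    image: gcr.io/kaniko-project/executor:latest
    args:
      - --cache=true
      - --cache-dir=/cache
      - --context=$(workspaces.source.path)
      - --destination=$(params.image-name)
    volumeMounts:
      - name: kaniko-cache
        mountPath: /cache
volumes:
  - name: kaniko-cache
    persistentVolumeClaim:
      claimName: kaniko-cache-pvc   # must persist across TaskRuns
```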
Multi-Stage Builds
Kaniko fully supports multi-stage Dockerfiles. Each stage is built independently, and only the final stage's filesystem becomes the output image. This is the standard pattern for building compiled applications:
```dockerfile
FROM golang:1.21 AS builder
COPY . .
RUN go build -o /app

FROM gcr.io/distroless/base-debian12
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

Kaniko processes both stages. The builder stage's filesystem is discarded after the COPY instruction in the final stage extracts the compiled binary.
Inline Task Specs vs Reusable Tasks
This demo uses both patterns. The prepare-source task is defined inline within
the Pipeline. The test, build, and deploy tasks are standalone Task
resources referenced by name.
Inline Task Spec
```yaml
tasks:
  - name: prepare-source
    taskSpec:
      workspaces:
        - name: source
      steps:
        - name: copy-source
          image: bitnami/kubectl:1.28
          script: |
            #!/bin/bash
            kubectl get configmap sample-app-source -n tekton-cicd-demo \
              -o jsonpath='{.data.index\.html}' > $(workspaces.source.path)/index.html
            kubectl get configmap sample-app-source -n tekton-cicd-demo \
              -o jsonpath='{.data.nginx\.conf}' > $(workspaces.source.path)/nginx.conf
    workspaces:
      - name: source
        workspace: shared-workspace
```

Reusable Task Reference
```yaml
tasks:
  - name: test
    taskRef:
      name: run-tests
    runAfter:
      - prepare-source
    workspaces:
      - name: source
        workspace: shared-workspace
```

When to Use Which
Inline specs are good for simple, one-off tasks that are specific to a
single pipeline. The prepare-source task is tightly coupled to this pipeline’s
ConfigMap layout. It would not make sense to reuse it elsewhere.
Reusable tasks are good for common operations. Testing, building, and deploying are patterns that apply across many pipelines. Defining them as standalone Tasks lets you version them independently and share them across teams.
The tradeoff: inline specs keep everything in one file but make pipelines longer. Reusable tasks keep pipelines clean but require managing separate resources.
RBAC for Pipelines
Tekton Tasks run as pods. Those pods need Kubernetes API access to interact with the cluster. This demo creates a dedicated RBAC setup:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-deployer
  namespace: tekton-cicd-demo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-deployer
  namespace: tekton-cicd-demo
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
```

This follows the principle of least privilege. The pipeline ServiceAccount can read ConfigMaps (for source), manage Deployments and Services (for deploying), and list pods (for status checks). It cannot delete resources, create namespaces, or access secrets.
In production, you would also want:
- Separate ServiceAccounts per pipeline so different pipelines have different permission scopes
- ClusterRoles for cross-namespace operations if a pipeline deploys to multiple namespaces
- ImagePullSecrets on the ServiceAccount for pulling private container images in task steps
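The Role and ServiceAccount only take effect once bound together. The demo's actual binding is not shown in this document, but a RoleBinding matching the names above would look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-deployer
  namespace: tekton-cicd-demo
subjects:
  - kind: ServiceAccount
    name: tekton-deployer
    namespace: tekton-cicd-demo
roleRef:
  kind: Role
  name: tekton-deployer
  apiGroup: rbac.authorization.k8s.io
```

A PipelineRun then selects the account, for example via spec.taskRunTemplate.serviceAccountName: tekton-deployer in the tekton.dev/v1 API.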
Pipeline-as-Code
Tekton Pipelines-as-Code (PaC) is a component that stores pipeline definitions
inside your source repository. Instead of applying Tekton resources to the
cluster manually, you commit .tekton/ directory files alongside your
application code.
When a pull request is opened or code is pushed, PaC reads the pipeline definitions from the repository and creates PipelineRuns automatically. This gives you:
- Version-controlled pipelines. The pipeline definition travels with the code it builds.
- Pull request testing. Pipeline changes are tested in the PR context before merging.
- No cluster-side pipeline management. Developers own their pipelines without needing cluster admin access.
A typical .tekton/ directory structure:
```
.tekton/
  pull-request.yaml   # Pipeline triggered on PRs
  push.yaml           # Pipeline triggered on pushes to main
```

PaC integrates with GitHub Apps, GitLab webhooks, and Bitbucket Cloud. It is the recommended approach for teams that want self-service CI/CD.
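A minimal sketch of what .tekton/push.yaml might contain. The on-event and on-target-branch annotations are PaC's standard event matchers; the pipeline content itself is a placeholder:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: push-pipeline
  annotations:
    # PaC creates this PipelineRun only for push events on main.
    pipelinesascode.tekton.dev/on-event: "[push]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
spec:
  pipelineSpec:
    tasks:
      - name: build
        taskRef:
          name: build-image   # placeholder task reference
```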
Tekton Chains: Supply Chain Security
Tekton Chains observes TaskRun completions and automatically signs the results and generates provenance attestations. This addresses the software supply chain security problem: how do you prove that an artifact was built from specific source code, by a specific pipeline, in a specific environment?
How Chains Works
Section titled “How Chains Works”- Chains runs as a controller alongside the Tekton Pipeline controller
- It watches for completed TaskRuns
- When a TaskRun produces an image (via results named IMAGE_URL and IMAGE_DIGEST), Chains generates an attestation
- The attestation is signed using cosign (keyless or with a key pair)
- The signed attestation is stored alongside the image in the registry
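A Task advertises its image to Chains through those results. A sketch (Kaniko's --digest-file flag writes the digest where the result expects it; the image reference is a placeholder):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: build-and-report
spec:
  results:
    - name: IMAGE_URL
      description: Fully qualified reference of the built image
    - name: IMAGE_DIGEST
      description: Digest of the built image
  steps:
    - name: build
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=.
        - --destination=registry.example.com/team/demo-app:latest
        # Write the digest directly into the Tekton result file.
        - --digest-file=$(results.IMAGE_DIGEST.path)
    - name: record-url
      image: alpine:3.19
      script: |
        #!/bin/sh
        echo -n "registry.example.com/team/demo-app:latest" \
          > $(results.IMAGE_URL.path)
```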
SLSA Compliance
Chains generates attestations in the SLSA (Supply-chain Levels for Software Artifacts) provenance format. SLSA defines levels of increasing assurance:
- SLSA Level 1: Provenance exists (build metadata is recorded)
- SLSA Level 2: Provenance is signed and hosted on a build service
- SLSA Level 3: Provenance is non-forgeable (hardened build platform)
Tekton Chains with a properly configured cluster can achieve SLSA Level 2 out of the box and Level 3 with additional hardening.
What Gets Signed
The attestation includes:
- Source repository URL and commit SHA
- Builder image and version
- Pipeline and Task definitions used
- Start and completion times
- All parameters passed to the build
- Output image digest
This creates an auditable chain from source commit to deployed artifact.
Tekton Results: Long-Term Storage
By default, TaskRun and PipelineRun records are stored in etcd as Kubernetes resources. This works, but etcd is not designed for long-term storage of historical data. As runs accumulate, etcd performance degrades.
Tekton Results provides a separate storage backend. It:
- Stores run records in a relational database (PostgreSQL or MySQL)
- Exposes a gRPC and REST API for querying historical runs
- Allows Kubernetes-side records to be pruned without losing history
- Supports log streaming and storage
The architecture separates hot data (running pipelines in etcd) from cold data (historical records in a database). The Results API lets you query across namespaces, filter by status, and retrieve logs long after the original pods have been cleaned up.
Comparison: Tekton CI vs ArgoCD CD
Tekton and ArgoCD solve different halves of the delivery problem. They complement each other.
| Aspect | Tekton (CI) | ArgoCD (CD) |
|---|---|---|
| Purpose | Build and test | Deploy and sync |
| Input | Source code + triggers | Git manifests |
| Output | Container images + test results | Running workloads |
| Execution model | Run-to-completion (pipelines) | Continuous reconciliation |
| State management | PipelineRun (finite) | Application (infinite loop) |
| Failure handling | Retry, fail, stop | Self-healing, drift correction |
| Trigger | Webhook, manual, cron | Git poll, webhook |
Tekton is imperative: “run these steps in this order.” ArgoCD is declarative: “make the cluster look like this Git repo.”
Why Not Use One Tool for Both?
You could deploy from a Tekton pipeline (this demo does exactly that with
kubectl apply in the deploy task). But this loses ArgoCD’s continuous
reconciliation. If someone manually changes a deployment, ArgoCD detects the
drift and corrects it. A Tekton pipeline only runs once and moves on.
You could also trigger builds from ArgoCD. But ArgoCD is not designed for build orchestration. It does not handle multi-step workflows, test execution, or image building.
The Full GitOps Architecture
In a production GitOps setup, Tekton and ArgoCD divide responsibilities cleanly:
```
Developer pushes code to app repo
        |
        v
Tekton EventListener receives webhook
        |
        v
Tekton Pipeline: test --> build --> push image
        |
        v
Tekton updates image tag in deployment repo (GitOps repo)
        |
        v
ArgoCD detects change in GitOps repo
        |
        v
ArgoCD syncs: deploys new image to cluster
```

Two repositories are involved:
- Application repo: Contains source code and Tekton pipeline definitions (via Pipeline-as-Code). Tekton watches this for changes.
- GitOps repo: Contains Kubernetes manifests (or Helm charts, or Kustomize overlays) with image references. ArgoCD watches this for changes.
The bridge between them is the image tag update. After Tekton builds and pushes a new image, the last pipeline step updates the image tag in the GitOps repo. ArgoCD picks up that change and deploys it.
This separation means:
- Developers own CI. Pipeline definitions live in the app repo.
- Platform teams own CD. ArgoCD Applications and deployment manifests live in the GitOps repo.
- Git is the audit trail. Every deployment is traceable to a commit in both repos.
- Rollback is a git revert. Reverting the image tag commit in the GitOps repo triggers ArgoCD to redeploy the previous version.
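The image-tag update that bridges the two repos is usually a small final pipeline step. A sketch, assuming the pipeline exposes a git-revision param and the step's pod already has push credentials for the GitOps repo (the repo URL, paths, and image name are all placeholders):

```yaml
steps:
  - name: bump-image-tag
    image: alpine/git:2.43.0
    script: |
      #!/bin/sh
      set -e
      git clone https://git.example.com/platform/gitops-repo.git /work
      cd /work/apps/demo-app
      # Point the manifest at the freshly built image.
      sed -i "s|image: demo-app:.*|image: demo-app:$(params.git-revision)|" deployment.yaml
      git config user.email "ci@example.com"
      git config user.name "tekton-ci"
      git commit -am "ci: deploy demo-app $(params.git-revision)"
      git push origin main
```

Teams often use yq or kustomize edit set image instead of sed for the rewrite; the shape of the step is the same.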
This Demo’s Simplified Version
This demo collapses the full GitOps flow into a single pipeline for simplicity.
The deploy-app task uses kubectl apply directly instead of updating a GitOps
repo:
```yaml
steps:
  - name: deploy
    image: bitnami/kubectl:1.28
    script: |
      #!/bin/bash
      kubectl apply -n tekton-cicd-demo -f - <<EOF
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: $(params.app-name)
        labels:
          app: $(params.app-name)
          deployed-by: tekton
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: $(params.app-name)
        template:
          metadata:
            labels:
              app: $(params.app-name)  # must match the selector above
          spec:
            containers:
              - name: app
                image: $(params.image-name)
                ports:
                  - containerPort: 8080
      EOF
```

This works for learning. In production, replace this step with one that commits the new image tag to a GitOps repository and lets ArgoCD handle the actual deployment.
Key Takeaways
- Triggers turn webhooks into PipelineRuns. EventListener receives HTTP, TriggerBinding extracts data, TriggerTemplate stamps out resources. Interceptors filter and validate in between.
- CEL interceptors are the gatekeeper. They prevent unwanted events from triggering builds. Branch filtering, payload validation, and data transformation all happen here.
- Kaniko builds images in userspace. No Docker daemon, no privileged containers. It parses Dockerfiles, executes instructions, snapshots filesystem layers, and produces OCI images.
- Inline specs keep simple tasks close to the pipeline. Reusable tasks keep common operations standardized. Use both patterns intentionally.
- RBAC is not optional. Pipeline pods interact with the Kubernetes API. Scope their permissions to exactly what they need.
- Tekton Chains closes the supply chain loop. Automatic signing and attestation give you provenance from source to artifact.
- Tekton is CI. ArgoCD is CD. Together they form a complete GitOps pipeline with Git as the single source of truth and clear separation of build and deploy responsibilities.
See Also
- README for step-by-step instructions to run this demo
- Tekton Basics for foundational concepts
- Tekton Triggers documentation
- Kaniko documentation
- Tekton Chains documentation