# Tekton Basics: Deep Dive

This document explains how Tekton works under the hood. It covers the control
plane, how Tasks become pods, how Pipelines orchestrate execution, and the
mechanics behind params, workspaces, and results. The goal is to give you a
mental model of what happens when you `kubectl create` a TaskRun or PipelineRun.
If you want step-by-step instructions, see the README instead.
## Tekton’s Control Plane

Tekton installs as a set of Kubernetes controllers in the `tekton-pipelines`
namespace. Three components handle everything.
The Tekton Pipeline Controller watches for TaskRun and PipelineRun custom resources. When you create one, the controller reconciles it, meaning it reads the desired state and creates the Kubernetes objects needed to achieve it. For a TaskRun, it creates a pod. For a PipelineRun, it creates multiple TaskRuns.
The Tekton Webhook validates and mutates incoming Tekton resources. When you
`kubectl apply` a Task or Pipeline, the webhook checks the schema, verifies
field types, and sets defaults before the resource is persisted in etcd. This
catches errors at admission time rather than at execution time.
The Reconciler Loop is the core pattern. The controller uses the standard Kubernetes controller-runtime reconciliation model:
- Watch for changes to TaskRun/PipelineRun resources
- Compare current state against desired state
- Take action (create pod, update status, handle failure)
- Requeue if the resource is not yet in a terminal state
This is the same pattern used by Deployment controllers, StatefulSet controllers, and every other Kubernetes operator. Tekton is just another set of custom resources with controllers that reconcile them.
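The observable output of this loop is the resource's `status` block, which the controller updates on every reconciliation. An abridged, illustrative TaskRun status after a successful run might look like this (field values are hypothetical):

```yaml
status:
  conditions:
    - type: Succeeded
      status: "True"
      reason: Succeeded
      message: All Steps have completed executing
  podName: hello-run-xk7m2-pod        # the pod the controller created
  startTime: "2024-01-01T12:00:00Z"
  completionTime: "2024-01-01T12:00:20Z"
```

`kubectl get taskrun -o yaml` shows this block growing as the reconciler makes progress.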
## Tasks: Steps as Containers

A Task is the fundamental unit of work. Each Task runs as a single pod, and each step within the Task runs as a separate container in that pod. This is the key insight: Tekton maps steps to containers, not to shell commands.
Here is the `hello` Task from this demo:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: hello
  namespace: tekton-demo
spec:
  params:
    - name: name
      type: string
      default: "World"
  steps:
    - name: greet
      image: alpine:3.19
      script: |
        #!/bin/sh
        echo "Hello, $(params.name)!"
        echo "Running on $(hostname) at $(date -u)"
```

This single-step Task produces a pod with one container. The container image is
`alpine:3.19`, and the script is injected as the container’s entrypoint.
## Entrypoint Rewriting

Tekton does not run your containers the way you might expect. It replaces the
container’s entrypoint with its own binary, `/tekton/bin/entrypoint`. This
binary handles:
- Step ordering: Even though Kubernetes init containers run sequentially, Tekton uses regular containers with entrypoint coordination. Each step waits for its predecessor to write a completion marker before starting.
- Result collection: The entrypoint reads termination messages and writes them to a known location.
- Timeout enforcement: The entrypoint tracks elapsed time and kills the step if it exceeds the configured timeout.
The rewriting happens transparently. You write a script block, and Tekton
handles the rest. Your script gets mounted as a file inside the container, and
the rewritten entrypoint executes it.
### Step Ordering Mechanics

When a Task has multiple steps, Tekton needs to run them sequentially within a
single pod. The `git-info` Task demonstrates this:
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: git-info
  namespace: tekton-demo
spec:
  params:
    - name: repo-url
      type: string
  workspaces:
    - name: source
  results:
    - name: commit-sha
    - name: file-count
  steps:
    - name: clone
      image: alpine/git:2.43.0
      script: |
        #!/bin/sh
        cd $(workspaces.source.path)
        git clone $(params.repo-url) repo
        cd repo
        git log --oneline -1
    - name: report
      image: alpine:3.19
      script: |
        #!/bin/sh
        cd $(workspaces.source.path)/repo
        SHA=$(git rev-parse --short HEAD)
        COUNT=$(find . -type f -not -path './.git/*' | wc -l)
        echo -n "$SHA" > $(results.commit-sha.path)
        echo -n "$COUNT" > $(results.file-count.path)
```

Two steps, two containers, one pod. The `clone` step runs first. The `report`
step runs second. They share the same workspace volume mount, so the cloned
repo is available to both containers.
Internally, Tekton uses a `/tekton/run` directory with numbered marker files.
Step 0 writes `/tekton/run/0/out` when it completes. Step 1’s entrypoint polls
for that file before starting. This coordination is invisible to you as a Task
author.
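You can see this coordination if you inspect the pod that Tekton generates. A step container's spec looks roughly like the following sketch (abridged and illustrative; exact flag names and paths vary by Tekton version):

```yaml
containers:
  - name: step-report
    image: alpine:3.19
    command: ["/tekton/bin/entrypoint"]
    args:
      - "-wait_file"              # block until the previous step's marker exists
      - "/tekton/run/0/out"
      - "-post_file"              # write this marker when this step finishes
      - "/tekton/run/1/out"
      - "-entrypoint"             # then exec the real step command
      - "/tekton/scripts/script-1"
```

Every step except the first waits on its predecessor's marker file, which is how sequential ordering is enforced among containers that Kubernetes would otherwise start concurrently.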
### Why Steps Are Containers (Not Shell Commands)

This design has real consequences:
- Each step can use a different container image. A clone step uses
  `alpine/git`, a lint step uses `alpine`, a build step uses `kaniko`. No need to install everything in one image.
- Steps are isolated by container boundaries. A crash in step 1 does not corrupt the memory of step 2.
- Resource limits can be set per step. A build step might need 2Gi of memory while a report step needs 32Mi.
The tradeoff is startup time. Each container has image pull and initialization overhead. For Tasks with many small steps, this adds up. Five steps means five container startups within one pod.
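Per-step limits use the step-level `computeResources` field. A sketch (image names are illustrative):

```yaml
steps:
  - name: build
    image: gcr.io/kaniko-project/executor:latest
    computeResources:
      requests:
        memory: 2Gi      # heavy build work
      limits:
        memory: 2Gi
  - name: report
    image: alpine:3.19
    computeResources:
      limits:
        memory: 32Mi     # tiny reporting step
```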
## Pipeline Controller: DAG Execution

A Pipeline is a directed acyclic graph (DAG) of Tasks. The Pipeline controller does not run Tasks itself. It creates TaskRuns, one per Task, and lets the TaskRun controller handle pod creation.
Here is the `repo-check` Pipeline from this demo:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: repo-check
  namespace: tekton-demo
spec:
  params:
    - name: repo-url
      type: string
  workspaces:
    - name: shared-workspace
  tasks:
    - name: fetch-repo
      taskRef:
        name: git-info
      params:
        - name: repo-url
          value: $(params.repo-url)
      workspaces:
        - name: source
          workspace: shared-workspace
    - name: lint-yaml
      taskRef:
        name: yaml-lint
      runAfter:
        - fetch-repo
      workspaces:
        - name: source
          workspace: shared-workspace
    - name: report
      taskRef:
        name: hello
      runAfter:
        - lint-yaml
      params:
        - name: name
          value: "Pipeline complete! Commit $(tasks.fetch-repo.results.commit-sha) has $(tasks.fetch-repo.results.file-count) files"
```

### Fan-Out and Fan-In

The `runAfter` field creates explicit dependencies. Without it, tasks run in
parallel. The DAG for this pipeline is strictly linear:

```
fetch-repo --> lint-yaml --> report
```

But consider a pipeline with this structure:
```yaml
tasks:
  - name: fetch        # no runAfter, starts immediately
  - name: unit-test
    runAfter: [fetch]
  - name: lint
    runAfter: [fetch]
  - name: deploy
    runAfter: [unit-test, lint]
```

This creates a fan-out/fan-in pattern:

```
          +--> unit-test --+
fetch --> |                |--> deploy
          +--> lint -------+
```

`unit-test` and `lint` run in parallel after `fetch` completes. `deploy` waits
for both to finish. The Pipeline controller tracks each TaskRun’s status and
creates the next batch of TaskRuns only when dependencies are satisfied.
### How the Controller Decides What to Run Next

On each reconciliation loop, the Pipeline controller:
- Lists all TaskRuns created for this PipelineRun
- Checks which ones are complete (succeeded or failed)
- Evaluates the DAG to find tasks whose dependencies are all satisfied
- Creates TaskRuns for those tasks
- Updates the PipelineRun status
If any task fails, the controller stops creating new TaskRuns and marks the
PipelineRun as failed (unless the task has a retries count configured).
## Parameters: String, Array, and Object Types

Tekton parameters are typed. Three types exist.
**String** is the most common. It holds a single value.

```yaml
params:
  - name: name
    type: string
    default: "World"
```

**Array** holds a list of values. Useful for passing multiple arguments to a command.

```yaml
params:
  - name: flags
    type: array
    default: ["--verbose", "--output=json"]
```

Array params are referenced with `$(params.flags[*])` to expand all elements,
or `$(params.flags[0])` for a specific index.

**Object** holds key-value pairs. Added in Tekton v0.46. Useful for structured configuration.

```yaml
params:
  - name: config
    type: object
    properties:
      host:
        type: string
      port:
        type: string
```

Object params are referenced with `$(params.config.host)`.
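A common place array params pay off is expanding into a container's `args`. A sketch, where the `mytool` command is hypothetical:

```yaml
steps:
  - name: run
    image: alpine:3.19
    command: ["mytool"]
    # The star reference must be its own args element; it expands
    # in place, e.g. to: --verbose --output=json
    args: ["$(params.flags[*])"]
```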
In this demo, the `hello` Task uses a string param. The TaskRun passes
`"Tekton"` as the value:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: hello-run-
  namespace: tekton-demo
spec:
  taskRef:
    name: hello
  params:
    - name: name
      value: "Tekton"
```

Parameter substitution happens at TaskRun creation time, not at runtime. The
controller resolves `$(params.name)` to `"Tekton"` and injects the literal
value into the pod spec before the pod is created.
## Workspaces: Sharing Files Between Tasks

Workspaces are Tekton’s abstraction for mounted volumes. A workspace can be backed by several Kubernetes volume types:
| Backing Type | Use Case | Persistence |
|---|---|---|
| PVC (PersistentVolumeClaim) | Share data across tasks in a pipeline | Survives task restarts |
| volumeClaimTemplate | Auto-provisioned PVC per PipelineRun | Cleaned up with PipelineRun |
| emptyDir | Temporary scratch space within a single task | Lost when pod completes |
| ConfigMap | Read-only configuration files | Cluster-managed |
| Secret | Credentials and sensitive data | Cluster-managed |
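For comparison with the PVC pattern below, a TaskRun can bind workspaces to an `emptyDir` or a Secret directly. A sketch (the `git-credentials` Secret is hypothetical):

```yaml
spec:
  workspaces:
    - name: scratch
      emptyDir: {}          # temporary, discarded with the pod
    - name: ssh-creds
      secret:
        secretName: git-credentials   # mounted read-only
```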
This demo uses a volumeClaimTemplate, which is the most common pattern for
pipelines:
```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: repo-check-run-
  namespace: tekton-demo
spec:
  pipelineRef:
    name: repo-check
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 256Mi
```

When the PipelineRun starts, Tekton creates a PVC from this template. That PVC
is mounted into every TaskRun that references the `shared-workspace` workspace.
When the PipelineRun is deleted, the PVC is garbage-collected.
### Why ReadWriteOnce Matters

`ReadWriteOnce` means the volume can only be mounted by pods on the same node.
This is why parallel tasks sharing a workspace can run into scheduling issues.
If `unit-test` and `lint` both need the same workspace and run in parallel,
their pods must land on the same node. Tekton handles this by setting node
affinity on the pods, but it is something to be aware of in production.
For truly parallel workloads, consider the `ReadWriteMany` access mode with an NFS-
or CephFS-backed StorageClass.
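A sketch of such a binding, assuming your cluster has an RWX-capable StorageClass named `nfs-csi` (the name is hypothetical):

```yaml
workspaces:
  - name: shared-workspace
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteMany        # removes the same-node constraint
        storageClassName: nfs-csi
        resources:
          requests:
            storage: 1Gi
```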
## Results: Passing Data Between Tasks

Results are Tekton’s mechanism for passing small pieces of data from one Task to
another. The `git-info` Task writes two results:
```yaml
results:
  - name: commit-sha
    description: The short commit SHA of HEAD
  - name: file-count
    description: Number of files in the repo
```

And the step writes to the result path:

```sh
echo -n "$SHA" > $(results.commit-sha.path)
echo -n "$COUNT" > $(results.file-count.path)
```

The downstream `report` task references these results:

```yaml
params:
  - name: name
    value: "Pipeline complete! Commit $(tasks.fetch-repo.results.commit-sha) has $(tasks.fetch-repo.results.file-count) files"
```

### How Results Work Internally

Results use Kubernetes termination messages. When a step container exits,
the Tekton entrypoint writes the results to the container’s termination message
file (`/dev/termination-log` by default). The kubelet reads this and stores it
in the pod status. The Tekton controller then extracts it from the pod status
and writes it into the TaskRun status.
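Once extracted, results surface in the TaskRun's status, where downstream tasks and `kubectl` can read them. An abridged, illustrative example (values are hypothetical):

```yaml
status:
  results:
    - name: commit-sha
      type: string
      value: a1b2c3d
    - name: file-count
      type: string
      value: "42"
```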
### Size Limits

This mechanism has a hard constraint. The total size of all results for a single TaskRun is limited to 4096 bytes (4KB). This includes result names and values. If you exceed this limit, the TaskRun fails.
For larger data, use workspaces. Write files to a shared PVC instead of using results. Results are meant for small metadata: commit SHAs, version strings, image digests, pass/fail flags.
As of Tekton v0.45+, you can enable the “larger results” feature using a sidecar-based approach that raises the limit significantly. But the default termination message approach has the 4KB ceiling.
## TaskRun Lifecycle

A TaskRun is the execution of a Task. Here is what happens when you create one.
1. **Admission.** The webhook validates the TaskRun. It checks that the referenced Task exists, that all required params are provided, and that workspace bindings are valid.
2. **Pod creation.** The controller resolves param substitutions, constructs a pod spec with one container per step (plus init containers for setup), and creates the pod.
3. **Execution.** The pod runs. Steps execute sequentially via entrypoint coordination. The controller watches the pod status.
4. **Completion.** When all containers finish, the controller reads termination messages to extract results, updates the TaskRun status with the outcome (Succeeded or Failed), and records the start/completion times.
5. **Cleanup.** If the TaskRun was created by a PipelineRun, the Pipeline controller reads its status and decides what to do next.
The `generateName` pattern used in this demo is worth understanding:

```yaml
metadata:
  generateName: hello-run-
```

This tells Kubernetes to append a random suffix, producing names like
`hello-run-xk7m2`. Each `kubectl create` produces a unique TaskRun. This is why
the README uses `kubectl create` instead of `kubectl apply`: apply requires a
fixed `name` and does not support `generateName`, while create mints a fresh
resource every time.
## PipelineRun Lifecycle

A PipelineRun is to a Pipeline what a TaskRun is to a Task. It represents a single execution of a Pipeline.
1. **Admission.** The webhook validates workspace bindings and params.
2. **DAG resolution.** The Pipeline controller builds the task graph from
   `runAfter` fields and result references.
3. **TaskRun creation.** The controller creates TaskRuns for tasks with no unsatisfied dependencies. It passes through params and workspace bindings.
4. **Monitoring.** On each reconciliation, the controller checks TaskRun statuses, resolves results from completed TaskRuns, and creates the next batch of TaskRuns.
5. **Completion.** When all TaskRuns finish (or one fails without retries), the PipelineRun is marked as Succeeded or Failed.
The PipelineRun from this demo:
```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: repo-check-run-
  namespace: tekton-demo
spec:
  pipelineRef:
    name: repo-check
  params:
    - name: repo-url
      value: "https://github.com/savitojs/k8s-learn-by-doing.git"
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 256Mi
```

This creates three TaskRuns in sequence: `fetch-repo`, `lint-yaml`, `report`.
## Timeout Handling

Tekton supports timeouts at three levels.
**Pipeline-level timeout** caps the entire PipelineRun:

```yaml
spec:
  timeouts:
    pipeline: "1h"
```

**Task-level timeout** caps individual tasks within a pipeline:

```yaml
spec:
  timeouts:
    tasks: "30m"
```

**Step-level timeout** caps a single step within a task:

```yaml
steps:
  - name: build
    timeout: "10m"
```

When a timeout fires, Tekton cancels the running pods. The PipelineRun or
TaskRun is marked as failed with a `TaskRunTimeout` or `PipelineRunTimeout`
reason.
The default timeout, if none is specified, is 1 hour. You can change this
globally in the `config-defaults` ConfigMap in the `tekton-pipelines` namespace.
Setting the default to 0 disables the timeout entirely.
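A sketch of that ConfigMap change, assuming the `default-timeout-minutes` key (the value shown is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  default-timeout-minutes: "120"   # "0" disables the default timeout
```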
## Retry Policies

Tasks within a pipeline can be retried on failure:

```yaml
tasks:
  - name: flaky-test
    taskRef:
      name: run-tests
    retries: 3
```

When a TaskRun fails and retries remain, the Pipeline controller creates a new
pod for that task. The retry count, attempt number, and each attempt’s status
are recorded in the TaskRun’s `status.retriesStatus` field.
**Important:** retries create new pods. The old pod is not restarted. This means workspace contents may be stale if a previous task modified them and failed partway through. Design tasks to be idempotent if you plan to use retries.
## Custom Tasks (Run API)

Tekton supports custom task types through the `Run` (v1alpha1) or `CustomRun`
(v1beta1) API. Instead of referencing a built-in Task, you reference a custom
controller:

```yaml
tasks:
  - name: approval
    taskRef:
      apiVersion: custom.tekton.dev/v1
      kind: ApprovalTask
```

The Tekton Pipeline controller does not run this task itself. Instead, a
separate controller watching ApprovalTask resources picks it up. This enables
use cases like manual approval gates, Slack notifications, or external system
integrations.
Custom tasks follow the same lifecycle conventions: they must update the
`status.conditions` field to indicate success or failure. The Pipeline
controller watches for status changes and proceeds with the DAG accordingly.
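For example, a custom controller might mark its run complete with a condition like this (an illustrative sketch; the `reason` and `message` values are hypothetical):

```yaml
status:
  conditions:
    - type: Succeeded        # the condition the Pipeline controller watches
      status: "True"
      reason: Approved
      message: Change approved by release manager
```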
## Tekton Hub: Reusable Tasks

Tekton Hub (hub.tekton.dev) is a catalog of community-maintained Tasks and Pipelines. Instead of writing a git-clone Task from scratch, you can install one:

```sh
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.9/git-clone.yaml
```

Then reference it in your Pipeline:

```yaml
tasks:
  - name: fetch-source
    taskRef:
      name: git-clone
```

The catalog includes tasks for common operations: `git-clone`, `buildah`, `kaniko`,
`golang-build`, `python-test`, and many more. The `git-info` Task in this demo is a
simplified version of the catalog’s `git-clone` task.
Tekton Hub also supports Bundles, which are OCI images containing Tekton resources. This lets you version and distribute tasks through container registries rather than raw YAML files.
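With the bundles resolver enabled in your installation, referencing a bundled task might look like this (a sketch; the registry path is hypothetical):

```yaml
taskRef:
  resolver: bundles
  params:
    - name: bundle
      value: registry.example.com/tekton/catalog:0.9   # OCI image holding the task
    - name: name
      value: git-clone
    - name: kind
      value: task
```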
## Comparison with Other CI Systems

### Tekton vs GitHub Actions

| Aspect | Tekton | GitHub Actions |
|---|---|---|
| Runs on | Your Kubernetes cluster | GitHub’s infrastructure |
| Definition format | Kubernetes YAML (CRDs) | YAML (workflow files) |
| Execution unit | Pod per Task | VM per job |
| Step isolation | Container per step | Process per step |
| State sharing | Workspaces (PVCs) | Artifacts (upload/download) |
| Trigger mechanism | EventListener + TriggerBinding | GitHub events (native) |
| Reusable components | Tekton Hub, Bundles | GitHub Marketplace |
| Cost model | Your cluster resources | GitHub-hosted minutes |
GitHub Actions is simpler to set up. Tekton gives you full control over the execution environment and keeps everything inside your cluster.
### Tekton vs Jenkins

| Aspect | Tekton | Jenkins |
|---|---|---|
| Architecture | Kubernetes-native (CRDs + controllers) | Standalone Java server |
| Scaling | Pods are ephemeral, cluster scales naturally | Permanent agents, manual scaling |
| Pipeline definition | Declarative YAML | Groovy DSL (Jenkinsfile) |
| Isolation | Each task is a fresh pod | Shared agent workspace |
| State management | Kubernetes resources (etcd) | Jenkins controller (filesystem) |
| Plugin ecosystem | Tekton Hub + custom tasks | Jenkins plugins (1800+) |
| Maintenance burden | Kubernetes operator updates | Java, plugin compatibility matrix |
Jenkins has a larger plugin ecosystem. Tekton has a cleaner execution model with no permanent infrastructure beyond the controller. Each pipeline run starts fresh, which eliminates the “dirty agent” problem that plagues Jenkins.
## Key Takeaways

- **Tekton is just Kubernetes.** Tasks are pods. Steps are containers. Pipelines are DAGs of TaskRuns. Everything is a custom resource reconciled by controllers.
- **Entrypoint rewriting is the magic.** Tekton replaces container entrypoints to enforce step ordering, collect results, and manage timeouts. You never see this, but it is the core mechanism.
- **Results use termination messages.** This is clever but limited to 4KB. Use workspaces for anything larger.
- **Workspaces are volumes.** PVCs for persistence, emptyDir for scratch, ConfigMaps and Secrets for configuration. The `volumeClaimTemplate` pattern auto-provisions and cleans up PVCs per PipelineRun.
- **The DAG is implicit.** `runAfter` and result references define the execution graph. The Pipeline controller resolves it on each reconciliation loop.
## See Also

- README for step-by-step instructions to run this demo
- Tekton CI/CD for a real-world pipeline with triggers
- Tekton documentation for the full reference