
Tekton Basics: Deep Dive

This document explains how Tekton works under the hood. It covers the control plane, how Tasks become pods, how Pipelines orchestrate execution, and the mechanics behind params, workspaces, and results. The goal is to give you a mental model of what happens when you kubectl create a TaskRun or PipelineRun.

If you want step-by-step instructions, see the README instead.


Tekton installs as a set of Kubernetes controllers in the tekton-pipelines namespace. Two components and one core pattern handle everything.

The Tekton Pipeline Controller watches for TaskRun and PipelineRun custom resources. When you create one, the controller reconciles it, meaning it reads the desired state and creates the Kubernetes objects needed to achieve it. For a TaskRun, it creates a pod. For a PipelineRun, it creates multiple TaskRuns.

The Tekton Webhook validates and mutates incoming Tekton resources. When you kubectl apply a Task or Pipeline, the webhook checks the schema, verifies field types, and sets defaults before the resource is persisted in etcd. This catches errors at admission time rather than at execution time.

The Reconciler Loop is the core pattern. The controller uses the standard Kubernetes controller-runtime reconciliation model:

  1. Watch for changes to TaskRun/PipelineRun resources
  2. Compare current state against desired state
  3. Take action (create pod, update status, handle failure)
  4. Requeue if the resource is not yet in a terminal state

This is the same pattern used by Deployment controllers, StatefulSet controllers, and every other Kubernetes operator. Tekton is just another set of custom resources with controllers that reconcile them.


A Task is the fundamental unit of work. Each Task runs as a single pod, and each step within the Task runs as a separate container in that pod. This is the key insight: Tekton maps steps to containers, not to shell commands.

Here is the hello Task from this demo:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: hello
  namespace: tekton-demo
spec:
  params:
  - name: name
    type: string
    default: "World"
  steps:
  - name: greet
    image: alpine:3.19
    script: |
      #!/bin/sh
      echo "Hello, $(params.name)!"
      echo "Running on $(hostname) at $(date -u)"

This single-step Task produces a pod with one container. The container image is alpine:3.19, and the script is injected as the container’s entrypoint.

Tekton does not run your containers the way you might expect. It replaces the container’s entrypoint with its own binary called /tekton/bin/entrypoint. This binary handles:

  • Step ordering: Kubernetes starts all regular containers in a pod simultaneously (only init containers run sequentially), so Tekton coordinates steps itself. Each step waits for its predecessor to write a completion marker before starting.
  • Result collection: The entrypoint reads termination messages and writes them to a known location.
  • Timeout enforcement: The entrypoint tracks elapsed time and kills the step if it exceeds the configured timeout.

The rewriting happens transparently. You write a script block, and Tekton handles the rest. Your script gets mounted as a file inside the container, and the rewritten entrypoint executes it.
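To make the rewriting concrete, here is a simplified sketch of what the generated container spec for the greet step might look like. The flag names and paths are illustrative; exact values vary by Tekton version.

```yaml
# Hypothetical sketch of a Tekton-rewritten step container (not the exact spec)
containers:
- name: step-greet
  image: alpine:3.19
  command:
  - /tekton/bin/entrypoint          # Tekton's binary replaces your entrypoint
  args:
  - -wait_file
  - /tekton/downward/ready          # block until the pod signals it is ready
  - -post_file
  - /tekton/run/0/out               # marker written when this step completes
  - -termination_path
  - /tekton/termination             # where results go for the kubelet to read
  - -entrypoint
  - /tekton/scripts/script-0        # your script block, mounted as a file
  - --
```

A multi-step Task chains these: step 1's entrypoint would pass `-wait_file /tekton/run/0/out`, blocking until step 0 finishes.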

When a Task has multiple steps, Tekton needs to run them sequentially within a single pod. The git-info Task demonstrates this:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: git-info
  namespace: tekton-demo
spec:
  params:
  - name: repo-url
    type: string
  workspaces:
  - name: source
  results:
  - name: commit-sha
  - name: file-count
  steps:
  - name: clone
    image: alpine/git:2.43.0
    script: |
      #!/bin/sh
      cd $(workspaces.source.path)
      git clone $(params.repo-url) repo
      cd repo
      git log --oneline -1
  - name: report
    image: alpine:3.19
    script: |
      #!/bin/sh
      cd $(workspaces.source.path)/repo
      SHA=$(git rev-parse --short HEAD)
      COUNT=$(find . -type f -not -path './.git/*' | wc -l)
      echo -n "$SHA" > $(results.commit-sha.path)
      echo -n "$COUNT" > $(results.file-count.path)

Two steps, two containers, one pod. The clone step runs first. The report step runs second. They share the same workspace volume mount, so the cloned repo is available to both containers.

Internally, Tekton uses a /tekton/run directory with numbered marker files. Step 0 writes /tekton/run/0/out when it completes. Step 1’s entrypoint polls for that file before starting. This coordination is invisible to you as a Task author.

Why Steps Are Containers (Not Shell Commands)


This design has real consequences:

  • Each step can use a different container image. A clone step uses alpine/git, a lint step uses alpine, a build step uses kaniko. No need to install everything in one image.
  • Steps are isolated by container boundaries. A crash in step 1 does not corrupt the memory of step 2.
  • Resource limits can be set per step. A build step might need 2Gi of memory while a report step needs 32Mi.

The tradeoff is startup time. Each container has image pull and initialization overhead. For Tasks with many small steps, this adds up. Five steps means five container startups within one pod.


A Pipeline is a directed acyclic graph (DAG) of Tasks. The Pipeline controller does not run Tasks itself. It creates TaskRuns, one per Task, and lets the TaskRun controller handle pod creation.

Here is the repo-check Pipeline from this demo:

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: repo-check
  namespace: tekton-demo
spec:
  params:
  - name: repo-url
    type: string
  workspaces:
  - name: shared-workspace
  tasks:
  - name: fetch-repo
    taskRef:
      name: git-info
    params:
    - name: repo-url
      value: $(params.repo-url)
    workspaces:
    - name: source
      workspace: shared-workspace
  - name: lint-yaml
    taskRef:
      name: yaml-lint
    runAfter:
    - fetch-repo
    workspaces:
    - name: source
      workspace: shared-workspace
  - name: report
    taskRef:
      name: hello
    runAfter:
    - lint-yaml
    params:
    - name: name
      value: "Pipeline complete! Commit $(tasks.fetch-repo.results.commit-sha) has $(tasks.fetch-repo.results.file-count) files"

The runAfter field creates explicit dependencies. Without it, tasks run in parallel. The DAG for this pipeline is strictly linear:

fetch-repo --> lint-yaml --> report

But consider a pipeline with this structure:

tasks:
- name: fetch
  # no runAfter, starts immediately
- name: unit-test
  runAfter: [fetch]
- name: lint
  runAfter: [fetch]
- name: deploy
  runAfter: [unit-test, lint]

This creates a fan-out/fan-in pattern:

         +--> unit-test --+
fetch -->|                |--> deploy
         +--> lint -------+

unit-test and lint run in parallel after fetch completes. deploy waits for both to finish. The Pipeline controller tracks each TaskRun’s status and creates the next batch of TaskRuns only when dependencies are satisfied.

How the Controller Decides What to Run Next


On each reconciliation loop, the Pipeline controller:

  1. Lists all TaskRuns created for this PipelineRun
  2. Checks which ones are complete (succeeded or failed)
  3. Evaluates the DAG to find tasks whose dependencies are all satisfied
  4. Creates TaskRuns for those tasks
  5. Updates the PipelineRun status

If any task fails, the controller stops creating new TaskRuns and marks the PipelineRun as failed (unless the task has a retries count configured).


Parameters: String, Array, and Object Types


Tekton parameters are typed. Three types exist.

String is the most common. It holds a single value.

params:
- name: name
  type: string
  default: "World"

Array holds a list of values. Useful for passing multiple arguments to a command.

params:
- name: flags
  type: array
  default: ["--verbose", "--output=json"]

Array params are referenced with $(params.flags[*]) to expand all elements, or $(params.flags[0]) for a specific index.
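Note that the star expansion only works in list-valued fields such as a step's args, not inside a script. A minimal sketch of how the flags param above might be consumed:

```yaml
steps:
- name: run
  image: alpine:3.19
  command: ["echo"]
  # $(params.flags[*]) expands each array element as a separate argument,
  # here: --verbose --output=json
  args: ["$(params.flags[*])"]
```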

Object holds key-value pairs. Added in Tekton v0.46. Useful for structured configuration.

params:
- name: config
  type: object
  properties:
    host:
      type: string
    port:
      type: string

Object params are referenced with $(params.config.host).
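A sketch of how a step might consume the config object above — individual keys substitute like ordinary string params:

```yaml
steps:
- name: connect
  image: alpine:3.19
  script: |
    #!/bin/sh
    # each key is resolved to its value before the pod is created
    echo "Connecting to $(params.config.host):$(params.config.port)"
```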

In this demo, the hello Task uses a string param. The TaskRun passes "Tekton" as the value:

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: hello-run-
  namespace: tekton-demo
spec:
  taskRef:
    name: hello
  params:
  - name: name
    value: "Tekton"

Parameter substitution happens at TaskRun creation time, not at runtime. The controller resolves $(params.name) to "Tekton" and injects the literal value into the pod spec before the pod is created.


Workspaces are Tekton’s abstraction for mounted volumes. A workspace can be backed by several Kubernetes volume types:

| Backing type | Use case | Persistence |
| --- | --- | --- |
| PVC (PersistentVolumeClaim) | Share data across tasks in a pipeline | Survives task restarts |
| volumeClaimTemplate | Auto-provisioned PVC per PipelineRun | Cleaned up with PipelineRun |
| emptyDir | Temporary scratch space within a single task | Lost when pod completes |
| ConfigMap | Read-only configuration files | Cluster-managed |
| Secret | Credentials and sensitive data | Cluster-managed |

This demo uses a volumeClaimTemplate, which is the most common pattern for pipelines:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: repo-check-run-
  namespace: tekton-demo
spec:
  pipelineRef:
    name: repo-check
  workspaces:
  - name: shared-workspace
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 256Mi

When the PipelineRun starts, Tekton creates a PVC from this template. That PVC is mounted into every TaskRun that references the shared-workspace workspace. When the PipelineRun is deleted, the PVC is garbage-collected.

ReadWriteOnce means the volume can be mounted read-write by only one node at a time. This is why parallel tasks sharing a workspace can run into scheduling issues. If unit-test and lint both need the same workspace and run in parallel, their pods must land on the same node. Tekton handles this by co-scheduling the pods with affinity rules, but it is something to be aware of in production.

For truly parallel workloads, consider ReadWriteMany access mode with an NFS or CephFS-backed StorageClass.
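A sketch of what that binding might look like; the storageClassName is illustrative and must match an RWX-capable provisioner in your cluster:

```yaml
workspaces:
- name: shared-workspace
  volumeClaimTemplate:
    spec:
      accessModes:
      - ReadWriteMany          # allows mounts from multiple nodes
      storageClassName: nfs-client   # hypothetical RWX StorageClass
      resources:
        requests:
          storage: 256Mi
```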


Results are Tekton’s mechanism for passing small pieces of data from one Task to another. The git-info Task writes two results:

results:
- name: commit-sha
  description: The short commit SHA of HEAD
- name: file-count
  description: Number of files in the repo

And the step writes to the result path:

echo -n "$SHA" > $(results.commit-sha.path)
echo -n "$COUNT" > $(results.file-count.path)

The downstream report task references these results:

params:
- name: name
  value: "Pipeline complete! Commit $(tasks.fetch-repo.results.commit-sha) has $(tasks.fetch-repo.results.file-count) files"

Results use Kubernetes termination messages. When a step container exits, the Tekton entrypoint writes the results to the container’s termination message file (/dev/termination-log by default). The kubelet reads this and stores it in the pod status. The Tekton controller then extracts it from the pod status and writes it into the TaskRun status.
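Once extracted, results surface in the TaskRun status, roughly like this (values illustrative):

```yaml
# Simplified sketch of a completed TaskRun's status
status:
  results:
  - name: commit-sha
    type: string
    value: a1b2c3d
  - name: file-count
    type: string
    value: "42"
```

This is where the Pipeline controller reads values when resolving $(tasks.fetch-repo.results.commit-sha) for downstream tasks.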

This mechanism has a hard constraint. The total size of all results for a single TaskRun is limited to 4096 bytes (4KB). This includes result names and values. If you exceed this limit, the TaskRun fails.

For larger data, use workspaces. Write files to a shared PVC instead of using results. Results are meant for small metadata: commit SHAs, version strings, image digests, pass/fail flags.

As of Tekton v0.45+, you can enable the “larger results” feature using a sidecar-based approach that raises the limit significantly. But the default termination message approach has the 4KB ceiling.


A TaskRun is the execution of a Task. Here is what happens when you create one.

1. Admission. The webhook validates the TaskRun. It checks that the referenced Task exists, that all required params are provided, and that workspace bindings are valid.

2. Pod creation. The controller resolves param substitutions, constructs a pod spec with one container per step (plus init containers for setup), and creates the pod.

3. Execution. The pod runs. Steps execute sequentially via entrypoint coordination. The controller watches the pod status.

4. Completion. When all containers finish, the controller reads termination messages to extract results, updates the TaskRun status with the outcome (Succeeded or Failed), and records the start/completion times.

5. Cleanup. If the TaskRun was created by a PipelineRun, the Pipeline controller reads its status and decides what to do next.

The generateName pattern used in this demo is worth understanding:

metadata:
  generateName: hello-run-

This tells Kubernetes to append a random suffix, producing names like hello-run-xk7m2. Each kubectl create produces a unique TaskRun. This is why the README uses kubectl create instead of kubectl apply: apply needs a fixed metadata.name to track the resource, so it rejects manifests that only set generateName.


A PipelineRun is to a Pipeline what a TaskRun is to a Task. It represents a single execution of a Pipeline.

1. Admission. The webhook validates workspace bindings and params.

2. DAG resolution. The Pipeline controller builds the task graph from runAfter fields and result references.

3. TaskRun creation. The controller creates TaskRuns for tasks with no unsatisfied dependencies. It passes through params and workspace bindings.

4. Monitoring. On each reconciliation, the controller checks TaskRun statuses, resolves results from completed TaskRuns, and creates the next batch of TaskRuns.

5. Completion. When all TaskRuns finish (or one fails without retries), the PipelineRun is marked as Succeeded or Failed.

The PipelineRun from this demo:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: repo-check-run-
  namespace: tekton-demo
spec:
  pipelineRef:
    name: repo-check
  params:
  - name: repo-url
    value: "https://github.com/savitojs/k8s-learn-by-doing.git"
  workspaces:
  - name: shared-workspace
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 256Mi

This creates three TaskRuns in sequence: fetch-repo, lint-yaml, report.


Tekton supports timeouts at three levels.

Pipeline-level timeout caps the entire PipelineRun:

spec:
  timeouts:
    pipeline: "1h"

Task-level timeout caps individual tasks within a pipeline:

spec:
  timeouts:
    tasks: "30m"

Step-level timeout caps a single step within a task:

steps:
- name: build
  timeout: "10m"

When a timeout fires, Tekton cancels the running pods. The PipelineRun or TaskRun is marked as failed with a TaskRunTimeout or PipelineRunTimeout reason.

The default timeout, if none is specified, is 1 hour. You can change this globally in the config-defaults ConfigMap in the tekton-pipelines namespace. Setting the default to 0 disables the timeout entirely.
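A sketch of what that ConfigMap change might look like; the key is default-timeout-minutes, and the value shown here is illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  # applies to any TaskRun/PipelineRun that does not set its own timeout;
  # "0" disables the default timeout entirely
  default-timeout-minutes: "120"
```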


Tasks within a pipeline can be retried on failure:

tasks:
- name: flaky-test
  taskRef:
    name: run-tests
  retries: 3

When a TaskRun fails and retries remain, the Pipeline controller creates a new pod for that task. The retry count, attempt number, and each attempt’s status are recorded in the TaskRun’s status.retriesStatus field.

Important: retries create new pods. The old pod is not restarted. This means workspace contents may be stale if a previous task modified them and failed partway through. Design tasks to be idempotent if you plan to use retries.
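One way to make a step retry-safe is to reset any partial output before doing work. A minimal sketch, assuming a workspace named source and a hypothetical out directory:

```yaml
steps:
- name: build
  image: alpine:3.19
  script: |
    #!/bin/sh
    # idempotent setup: discard leftovers from a failed previous attempt
    rm -rf "$(workspaces.source.path)/out"
    mkdir -p "$(workspaces.source.path)/out"
    # ... actual build work writes only into the fresh directory ...
```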


Tekton supports custom task types through the Run (v1alpha1) or CustomRun (v1beta1) API. Instead of referencing a built-in Task, you reference a custom controller:

tasks:
- name: approval
taskRef:
apiVersion: custom.tekton.dev/v1
kind: ApprovalTask

The Tekton Pipeline controller does not handle this TaskRun. Instead, a separate controller watching ApprovalTask resources picks it up. This enables use cases like manual approval gates, Slack notifications, or external system integrations.

Custom tasks follow the same lifecycle conventions: they must update the status.conditions field to indicate success or failure. The Pipeline controller watches for status changes and proceeds with the DAG accordingly.
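A simplified sketch of the status a custom controller might set on its CustomRun once the approval is granted (reason and message are illustrative):

```yaml
status:
  conditions:
  - type: Succeeded
    status: "True"       # "False" marks failure, "Unknown" means still running
    reason: Approved
    message: Approved by release manager
```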


Tekton Hub (hub.tekton.dev) is a catalog of community-maintained Tasks and Pipelines. Instead of writing a git-clone Task from scratch, you can install one:

kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.9/git-clone.yaml

Then reference it in your Pipeline:

tasks:
- name: fetch-source
  taskRef:
    name: git-clone

The catalog includes tasks for common operations: git-clone, buildah, kaniko, golang-build, python-test, and many more. The git-info Task in this demo is a simplified version of the catalog’s git-clone task.

Tekton Hub also supports Bundles, which are OCI images containing Tekton resources. This lets you version and distribute tasks through container registries rather than raw YAML files.
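With remote resolution enabled, a Pipeline can reference a bundled task directly via the bundles resolver. A sketch, using a hypothetical registry path:

```yaml
taskRef:
  resolver: bundles
  params:
  - name: bundle
    value: registry.example.com/tekton/git-clone:0.9   # illustrative OCI ref
  - name: name
    value: git-clone     # which resource inside the bundle
  - name: kind
    value: task
```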


| Aspect | Tekton | GitHub Actions |
| --- | --- | --- |
| Runs on | Your Kubernetes cluster | GitHub's infrastructure |
| Definition format | Kubernetes YAML (CRDs) | YAML (workflow files) |
| Execution unit | Pod per Task | VM per job |
| Step isolation | Container per step | Process per step |
| State sharing | Workspaces (PVCs) | Artifacts (upload/download) |
| Trigger mechanism | EventListener + TriggerBinding | GitHub events (native) |
| Reusable components | Tekton Hub, Bundles | GitHub Marketplace |
| Cost model | Your cluster resources | GitHub-hosted minutes |

GitHub Actions is simpler to set up. Tekton gives you full control over the execution environment and keeps everything inside your cluster.

| Aspect | Tekton | Jenkins |
| --- | --- | --- |
| Architecture | Kubernetes-native (CRDs + controllers) | Standalone Java server |
| Scaling | Pods are ephemeral, cluster scales naturally | Permanent agents, manual scaling |
| Pipeline definition | Declarative YAML | Groovy DSL (Jenkinsfile) |
| Isolation | Each task is a fresh pod | Shared agent workspace |
| State management | Kubernetes resources (etcd) | Jenkins controller (filesystem) |
| Plugin ecosystem | Tekton Hub + custom tasks | Jenkins plugins (1800+) |
| Maintenance burden | Kubernetes operator updates | Java, plugin compatibility matrix |

Jenkins has a larger plugin ecosystem. Tekton has a cleaner execution model with no permanent infrastructure beyond the controller. Each pipeline run starts fresh, which eliminates the “dirty agent” problem that plagues Jenkins.


  1. Tekton is just Kubernetes. Tasks are pods. Steps are containers. Pipelines are DAGs of TaskRuns. Everything is a custom resource reconciled by controllers.

  2. Entrypoint rewriting is the magic. Tekton replaces container entrypoints to enforce step ordering, collect results, and manage timeouts. You never see this, but it is the core mechanism.

  3. Results use termination messages. This is clever but limited to 4KB. Use workspaces for anything larger.

  4. Workspaces are volumes. PVCs for persistence, emptyDir for scratch, ConfigMaps and Secrets for configuration. The volumeClaimTemplate pattern auto-provisions and cleans up PVCs per PipelineRun.

  5. The DAG is implicit. runAfter and result references define the execution graph. The Pipeline controller resolves it on each reconciliation loop.