docs/pipelines.md
A Pipeline is a collection of Tasks that you define and arrange in a specific order
of execution as part of your continuous integration flow. Each Task in a Pipeline
executes as a Pod on your Kubernetes cluster. You can configure various execution
conditions to fit your business needs.
## Pipeline

A `Pipeline` definition supports the following fields:

- Required:
  - `apiVersion` - Specifies the API version, for example `tekton.dev/v1beta1`.
  - `kind` - Identifies this resource object as a `Pipeline` object.
  - `metadata` - Specifies metadata that uniquely identifies the `Pipeline` object. For example, a `name`.
  - `spec` - Specifies the configuration information for this `Pipeline` object. This must include:
    - `tasks` - Specifies the `Tasks` that comprise the `Pipeline` and the details of their execution.
- Optional:
  - `params` - Specifies the `Parameters` that the `Pipeline` requires.
  - `workspaces` - Specifies a set of Workspaces that the `Pipeline` requires.
  - `tasks`:
    - `name` - the name of this `Task` within the context of this `Pipeline`.
    - `displayName` - a user-facing name of this `Task` within the context of this `Pipeline`.
    - `description` - a description of this `Task` within the context of this `Pipeline`.
    - `taskRef` - a reference to a `Task` definition.
    - `taskSpec` - a specification of a `Task`.
    - `runAfter` - Indicates that a `Task` should execute after one or more other `Tasks` without output linking.
    - `retries` - Specifies the number of times to retry the execution of a `Task` after a failure. Does not apply to execution cancellations.
    - `when` - Specifies `when` expressions that guard the execution of a `Task`; allow execution only when all `when` expressions evaluate to `true`.
    - `timeout` - Specifies the timeout before a `Task` fails.
    - `params` - Specifies the `Parameters` that a `Task` requires.
    - `workspaces` - Specifies the `Workspaces` that a `Task` requires.
    - `matrix` - Specifies the `Parameters` used to fan out a `Task` into multiple `TaskRuns` or `Runs`.
  - `results` - Specifies the location to which the `Pipeline` emits its execution results.
  - `displayName` - a user-facing name of the `Pipeline` that may be used to populate a UI.
  - `description` - Holds an informative description of the `Pipeline` object.
  - `finally` - Specifies one or more `Tasks` to be executed in parallel after all other tasks have completed.
Each `finally` task supports the following fields:

- `name` - the name of this `Task` within the context of this `Pipeline`.
- `displayName` - a user-facing name of this `Task` within the context of this `Pipeline`.
- `description` - a description of this `Task` within the context of this `Pipeline`.
- `taskRef` - a reference to a `Task` definition.
- `taskSpec` - a specification of a `Task`.
- `retries` - Specifies the number of times to retry the execution of a `Task` after a failure. Does not apply to execution cancellations.
- `when` - Specifies `when` expressions that guard the execution of a `Task`; allow execution only when all `when` expressions evaluate to `true`.
- `timeout` - Specifies the timeout before a `Task` fails.
- `params` - Specifies the `Parameters` that a `Task` requires.
- `workspaces` - Specifies the `Workspaces` that a `Task` requires.
- `matrix` - Specifies the `Parameters` used to fan out a `Task` into multiple `TaskRuns` or `Runs`.

## Workspaces

Workspaces allow you to specify one or more volumes that each `Task` in the `Pipeline` requires during execution. You specify one or more `Workspaces` in the `workspaces` field. For example:
spec:
workspaces:
- name: pipeline-ws1 # The name of the workspace in the Pipeline
tasks:
- name: use-ws-from-pipeline
taskRef:
name: gen-code # gen-code expects a workspace with name "output"
workspaces:
- name: output
workspace: pipeline-ws1
- name: use-ws-again
taskRef:
name: commit # commit expects a workspace with name "src"
runAfter:
- use-ws-from-pipeline # important: use-ws-from-pipeline writes to the workspace first
workspaces:
- name: src
workspace: pipeline-ws1
For simplicity, you can also map the name of the Workspace in a `PipelineTask` to match the name of the Workspace declared in the `Pipeline`.
For example:
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
name: pipeline
spec:
workspaces:
- name: source
tasks:
- name: gen-code
taskRef:
name: gen-code # gen-code expects a Workspace named "source"
workspaces:
- name: source # <- mapping workspace name
- name: commit
taskRef:
name: commit # commit expects a Workspace named "source"
workspaces:
- name: source # <- mapping workspace name
runAfter:
- gen-code
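At runtime, the Pipeline-level Workspace is bound to an actual volume by the `PipelineRun`. A minimal sketch, assuming a pre-existing PersistentVolumeClaim named `my-pvc` (the run name and claim name below are hypothetical):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: pipelinerun-with-workspace  # hypothetical name
spec:
  pipelineRef:
    name: pipeline
  workspaces:
    - name: source                  # matches the Pipeline's workspace name
      persistentVolumeClaim:
        claimName: my-pvc           # hypothetical pre-existing PVC
```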
For more information, see:

- Workspaces in Pipelines
- Workspaces in a PipelineRun code example
- Variables in a PipelineRun, including `workspaces.<name>.bound`

## Parameters

(See also Specifying Parameters in Tasks)
You can specify global parameters, such as compilation flags or artifact names, that you want to supply
to the Pipeline at execution time. Parameters are passed to the Pipeline from its corresponding
PipelineRun and can replace template values specified within each Task in the Pipeline.
Parameter names:

- Must only consist of alphanumeric characters, hyphens (`-`), and underscores (`_`).
- Must begin with a letter or an underscore (`_`).

For example, `fooIs-Bar_` is a valid parameter name, but `barIsBa$` or `0banana` are not.
Each declared parameter has a type field, which can be set to either array or string.
array is useful in cases where the number of compilation flags being supplied to the Pipeline
varies throughout its execution. If no value is specified, the type field defaults to string.
When the actual parameter value is supplied, its parsed type is validated against the type field.
The description and default fields for a Parameter are optional.
The following example illustrates the use of Parameters in a Pipeline.
The following `Pipeline` declares two input parameters:

- `context`, which passes its value (a string) to the `Task` to set the value of the `pathToContext` parameter within the `Task`.
- `flags`, which passes its value (an array) to the `Task` to set the value of the `flags` parameter within the `Task`. The `flags` parameter within the `Task` must also be an array.

If you specify a value for the `default` field and invoke this `Pipeline` in a `PipelineRun` without specifying a value for `context`, that value will be used.

Note: Input parameter values can be used as variables throughout the `Pipeline` by using variable substitution.
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
name: pipeline-with-parameters
spec:
params:
- name: context
type: string
description: Path to context
default: /some/where/or/other
- name: flags
type: array
description: List of flags
tasks:
- name: build-skaffold-web
taskRef:
name: build-push
params:
- name: pathToDockerFile
value: Dockerfile
- name: pathToContext
value: "$(params.context)"
- name: flags
value: ["$(params.flags[*])"]
The following PipelineRun supplies a value for context:
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pipelinerun-with-parameters
spec:
pipelineRef:
name: pipeline-with-parameters
params:
- name: "context"
value: "/workspace/examples/microservices/leeroy-web"
- name: "flags"
value:
- "foo"
- "bar"
> :seedling: `enum` is an alpha feature. The `enable-param-enum` feature flag must be set to `"true"` to enable this feature.

Parameter declarations can include an `enum`, which is a predefined set of valid values that the `Pipeline` `Param` accepts. If a `Param` has both an `enum` and a `default` value, the `default` value must be in the `enum` set. For example, the valid values for the `Param` "message" are limited to `v1` and `v2`:
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: pipeline-param-enum
spec:
params:
- name: message
enum: ["v1", "v2"]
default: "v1"
tasks:
- name: task1
params:
- name: message
value: $(params.message)
steps:
- name: build
image: bash:3.2
script: |
echo "$(params.message)"
If the `Param` value passed in by a `PipelineRun` is NOT in the predefined `enum` list, the `PipelineRun` will fail with reason `InvalidParamValue`.

If a `PipelineTask` references a `Task` with an `enum`, the enums specified in the Pipeline `spec.params` (pipeline-level enum) must be a subset of the enums specified in the referenced `Task` (task-level enum). An empty pipeline-level enum is invalid in this scenario, since an empty enum set indicates a "universal set" that allows all possible values. The same rules apply to Pipelines with embedded Tasks.

In the example below, the referenced `Task` accepts `v1` and `v2` as valid values, while the `Pipeline` further restricts the valid values to `v1`.
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: param-enum-demo
spec:
params:
- name: message
type: string
enum: ["v1", "v2"]
steps:
- name: build
image: bash:latest
script: |
echo "$(params.message)"
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: pipeline-param-enum
spec:
params:
- name: message
enum: ["v1"] # note that an empty enum set is invalid
tasks:
- name: task1
params:
- name: message
value: $(params.message)
taskRef:
name: param-enum-demo
Note that this subset restriction only applies to the task-level params with a direct single reference to pipeline-level params. If a task-level param references multiple pipeline-level params, the subset validation is not applied.
apiVersion: tekton.dev/v1
kind: Pipeline
...
spec:
params:
- name: message1
enum: ["v1"]
- name: message2
enum: ["v2"]
tasks:
- name: task1
params:
- name: message
value: "$(params.message1) and $(params.message2)"
      taskSpec:
        params:
          - name: message
            enum: [...] # the message enum is not required to be a subset of message1 or message2
...
Tekton validates user-provided values in a PipelineRun against the enum specified in the PipelineSpec.params. Tekton also validates
any resolved param value against the enum specified in each PipelineTask before creating the TaskRun.
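As a sketch of this runtime validation (the `PipelineRun` name below is hypothetical), a run that supplies a value outside the declared enum is rejected:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: pipelinerun-param-enum  # hypothetical name
spec:
  pipelineRef:
    name: pipeline-param-enum
  params:
    - name: message
      value: "v3"  # not in the declared enum ["v1", "v2"]; the run fails with reason InvalidParamValue
```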
See usage in this example
As with embedded PipelineRuns, you can propagate params declared in the Pipeline down to the inlined pipelineTasks and their inlined Steps. Wherever a resource (e.g. a pipelineTask) or a StepAction is referenced, the parameters must be passed explicitly.
For example, the following is a valid yaml.
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
name: pipeline-propagated-params
spec:
params:
- name: HELLO
default: "Hello World!"
- name: BYE
default: "Bye World!"
tasks:
- name: echo-hello
taskSpec:
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
- name: echo-bye
taskSpec:
steps:
- name: echo-action
ref:
name: step-action-echo
params:
- name: msg
value: "$(params.BYE)"
The same rules defined in pipelineruns apply here.
## Adding Tasks to the Pipeline

Your `Pipeline` definition must reference at least one `Task`. Each `Task` within a `Pipeline` must have a valid `name` and a `taskRef` or a `taskSpec`. For example:
tasks:
- name: build-the-image
taskRef:
name: build-push
Note: Specifying both `apiVersion` and `kind` in a `taskRef` creates a `CustomRun`; don't set `apiVersion` if you are only referring to a `Task`.
or
tasks:
- name: say-hello
taskSpec:
steps:
- image: ubuntu
script: echo 'hello there'
Note that any task specified in taskSpec will be the same version as the Pipeline.
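To illustrate the earlier note about `apiVersion` and `kind`: a `taskRef` that sets both resolves to a custom task and produces a `CustomRun`. A sketch, with a hypothetical API group, kind, and resource name:

```yaml
tasks:
  - name: run-custom-task
    taskRef:
      apiVersion: example.dev/v1alpha1  # hypothetical custom task API group/version
      kind: Example                     # hypothetical custom task kind
      name: my-custom-task              # hypothetical resource name
```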
### displayName in PipelineTasks

The `displayName` field is an optional field that allows you to add a user-facing name for the `PipelineTask`, which can be used to populate and distinguish tasks in the dashboard. For example:
spec:
tasks:
- name: scan
displayName: "Code Scan"
taskRef:
name: sonar-scan
The displayName also allows you to parameterize the human-readable name of your choice based on the
params, the task results,
and the context variables. For example:
spec:
params:
- name: application
tasks:
- name: scan
displayName: "Code Scan for $(params.application)"
taskRef:
name: sonar-scan
- name: upload-scan-report
displayName: "Upload Scan Report $(tasks.scan.results.report)"
taskRef:
name: upload
Specifying task results in the `displayName` does not introduce an inherent resource dependency among tasks. The pipeline author is responsible for specifying dependencies explicitly, either by using `runAfter` or by relying on `when` expressions or task results in `params`.
The fully resolved `displayName` is also available in the status as part of `pipelineRun.status.childReferences`. Clients such as the dashboard, CLI, etc. can retrieve the `displayName` from the `childReferences`. The `displayName` mainly drives a better user experience; it is not validated for content or length by the controller.
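As a sketch of what clients can read back (the run name and resolved value below are hypothetical), the resolved `displayName` appears in the `PipelineRun` status like this:

```yaml
status:
  childReferences:
    - apiVersion: tekton.dev/v1
      kind: TaskRun
      name: my-run-scan                    # hypothetical TaskRun name
      pipelineTaskName: scan
      displayName: "Code Scan for billing" # fully resolved at runtime
```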
### Specifying remote Tasks

A `taskRef` field may specify a `Task` in a remote location such as git. Support for specific types of remote locations depends on the Resolvers your cluster's operator has installed. For more information, including a tutorial, see the resolution docs. The example below demonstrates referencing a `Task` in git:
tasks:
- name: "go-build"
taskRef:
resolver: git
params:
- name: url
value: https://github.com/tektoncd/catalog.git
- name: revision
# value can use params declared at the pipeline level or a static value like main
value: $(params.gitRevision)
- name: pathInRepo
value: task/golang-build/0.3/golang-build.yaml
### Pipelines in PipelineTasks

> :seedling: Specifying `pipelines` in `PipelineTasks` is an alpha feature. The `enable-api-fields` feature flag must be set to `"alpha"` to specify `PipelineRef` or `PipelineSpec` in a `PipelineTask`. This feature is in Preview Only mode and not yet supported/implemented.
Apart from `taskRef` and `taskSpec`, `pipelineRef` and `pipelineSpec` allow you to specify a `Pipeline` in a `pipelineTask`. This allows you to generate a child `pipelineRun` which is inherited by the parent `pipelineRun`.
kind: Pipeline
metadata:
name: security-scans
spec:
tasks:
- name: scorecards
taskSpec:
steps:
- image: alpine
name: step-1
script: |
echo "Generating scorecard report ..."
- name: codeql
taskSpec:
steps:
- image: alpine
name: step-1
script: |
echo "Generating codeql report ..."
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: clone-scan-notify
spec:
tasks:
- name: git-clone
taskSpec:
steps:
- image: alpine
name: step-1
script: |
echo "Cloning a repo to run security scans ..."
- name: security-scans
runAfter:
- git-clone
pipelineRef:
name: security-scans
---
For further information, read Pipelines in Pipelines.
### Parameters in PipelineTasks

You can also provide `Parameters`:
spec:
tasks:
- name: build-skaffold-web
taskRef:
name: build-push
params:
- name: pathToDockerFile
value: Dockerfile
- name: pathToContext
value: /workspace/examples/microservices/leeroy-web
### Matrix in PipelineTasks

> :seedling: `Matrix` is a beta feature. The `enable-api-fields` feature flag can be set to `"beta"` to specify `Matrix` in a `PipelineTask`.

You can also provide `Parameters` through the `matrix` field:
spec:
tasks:
- name: browser-test
taskRef:
name: browser-test
matrix:
params:
- name: browser
value:
- chrome
- safari
- firefox
include:
- name: build-1
params:
- name: browser
value: chrome
- name: url
value: some-url
For further information, read Matrix.
### Workspaces in PipelineTasks

You can also provide `Workspaces`:
spec:
tasks:
- name: use-workspace
taskRef:
name: gen-code # gen-code expects a workspace with name "output"
workspaces:
- name: output
workspace: shared-ws
### Tekton Bundles

A Tekton Bundle is an OCI artifact that contains Tekton resources such as `Tasks`, which can be referenced within a `taskRef`. There is currently a hard limit of 20 objects in a bundle.

You can reference a Tekton Bundle in a `TaskRef` in both `v1` and `v1beta1` using remote resolution. The example syntax shown below for `v1` uses remote resolution and requires enabling beta features.
spec:
tasks:
- name: hello-world
taskRef:
resolver: bundles
params:
- name: bundle
value: docker.io/myrepo/mycatalog
- name: name
value: echo-task
- name: kind
value: Task
You may also specify a tag as you would with a Docker image which will give you a fixed,
repeatable reference to a Task.
spec:
taskRef:
resolver: bundles
params:
- name: bundle
value: docker.io/myrepo/mycatalog:v1.0.1
- name: name
value: echo-task
- name: kind
value: Task
You may also specify a fixed digest instead of a tag.
spec:
taskRef:
resolver: bundles
params:
- name: bundle
value: docker.io/myrepo/mycatalog@sha256:abc123
- name: name
value: echo-task
- name: kind
value: Task
Any of the above options will fetch the image using the ImagePullSecrets attached to the
ServiceAccount specified in the PipelineRun.
See the Service Account section
for details on how to configure a ServiceAccount on a PipelineRun. The PipelineRun will then
run that Task without registering it in the cluster allowing multiple versions of the same named
Task to be run at once.
Tekton Bundles may be constructed with any toolsets that produce valid OCI image artifacts
so long as the artifact adheres to the contract.
### runAfter field

If you need your `Tasks` to execute in a specific order within the `Pipeline`, use the `runAfter` field to indicate that a `Task` must execute after one or more other `Tasks`.
In the example below, we want to test the code before we build it. Since there
is no output from the test-app Task, the build-app Task uses runAfter
to indicate that test-app must run before it, regardless of the order in which
they are referenced in the Pipeline definition.
workspaces:
- name: source
tasks:
- name: test-app
taskRef:
name: make-test
workspaces:
- name: source
workspace: source
- name: build-app
taskRef:
name: kaniko-build
runAfter:
- test-app
workspaces:
- name: source
workspace: source
### retries field

For each `Task` in the `Pipeline`, you can specify the number of times Tekton should retry its execution when it fails. When a `Task` fails, the corresponding `TaskRun` sets its `Succeeded` `Condition` to `False`. The `retries` field instructs Tekton to retry executing the `Task` when this happens. Retries are executed even when other `Tasks` in the `Pipeline` have failed, unless the `PipelineRun` has been cancelled or gracefully cancelled.
If you expect a Task to encounter problems during execution (for example,
you know that there will be issues with network connectivity or missing
dependencies), set its retries field to a suitable value greater than 0.
If you don't explicitly specify a value, Tekton does not attempt to execute
the failed Task again.
In the example below, the execution of the build-the-image Task will be
retried once after a failure; if the retried execution fails, too, the Task
execution fails as a whole.
tasks:
- name: build-the-image
retries: 1
taskRef:
name: build-push
### onError field

When a `PipelineTask` fails, the rest of the `PipelineTasks` are skipped and the `PipelineRun` is declared a failure. If you would like to ignore such a `PipelineTask` failure and continue executing the rest of the `PipelineTasks`, you can specify `onError` for that `PipelineTask`.

`onError` can be set to `stopAndFail` (the default) or `continue`. The failure of a `PipelineTask` with `stopAndFail` stops and fails the whole `PipelineRun`. A `PipelineTask` that fails with `continue` does not fail the whole `PipelineRun`, and the rest of the `PipelineTasks` will continue to execute.

To ignore a `PipelineTask` failure, set `onError` to `continue`:
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: demo
spec:
tasks:
- name: task1
onError: continue
taskSpec:
steps:
- name: step1
image: alpine
script: |
exit 1
At runtime, the failure is ignored when determining the `PipelineRun` status. The `PipelineRun` message contains the ignored failure info:
status:
conditions:
- lastTransitionTime: "2023-09-28T19:08:30Z"
message: 'Tasks Completed: 1 (Failed: 1 (Ignored: 1), Cancelled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
...
Note that the `TaskRun` status is reported as usual, regardless of `onError`. Failed but ignored `TaskRuns` result in a failed status with reason `FailureIgnored`.
For example, the TaskRun created by the above PipelineRun has the following status:
$ kubectl get tr demo-run-task1
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
demo-run-task1 False FailureIgnored 12m 12m
To specify onError for a step, please see specifying onError for a step.
Note: Setting `retries` and `onError: continue` at the same time is NOT allowed.
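For example, a combination like the following sketch is rejected by validation (the task and step names are illustrative):

```yaml
tasks:
  - name: task1
    retries: 1         # retrying the task...
    onError: continue  # ...while also ignoring its failure is NOT allowed
    taskSpec:
      steps:
        - name: step1
          image: alpine
          script: exit 1
```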
### Produce results with onError

When a `PipelineTask` is set to ignore errors and the `PipelineTask` is able to initialize a result before failing, the result is made available to consumer `PipelineTasks`.
tasks:
- name: task1
onError: continue
taskSpec:
results:
- name: result1
steps:
- name: step1
image: alpine
script: |
echo -n 123 | tee $(results.result1.path)
exit 1
The consumer PipelineTasks can access the result by referencing $(tasks.task1.results.result1).
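A minimal sketch of such a consumer (the task name `task2` and param name `previous` are hypothetical):

```yaml
  - name: task2  # hypothetical consumer task
    params:
      - name: previous
        value: "$(tasks.task1.results.result1)"  # introduces a resource dependency on task1
    taskSpec:
      params:
        - name: previous
      steps:
        - name: echo
          image: alpine
          script: |
            echo "consumed: $(params.previous)"
```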
If the result is NOT initialized before failing, and there is a PipelineTask consuming it:
tasks:
- name: task1
onError: continue
taskSpec:
results:
- name: result1
steps:
- name: step1
image: alpine
script: |
exit 1
echo -n 123 | tee $(results.result1.path)
- If the consuming `PipelineTask` has `onError: stopAndFail`, the `PipelineRun` will fail with `InvalidTaskResultReference`.
- If the consuming `PipelineTask` has `onError: continue`, the consuming `PipelineTask` will be skipped with reason `Results were missing`, and the `PipelineRun` will continue to execute.

### Task execution using when expressions

To run a `Task` only when certain conditions are met, it is possible to guard task execution using the `when` field. The `when` field allows you to list a series of references to `when` expressions.
The components of when expressions are input, operator and values:
| Component | Description | Syntax |
|---|---|---|
| `input` | Input for the `when` expression; defaults to an empty string if not provided. | * Static values e.g. `"ubuntu"` * Variables (parameters or results) e.g. `"$(params.image)"` or `"$(tasks.task1.results.image)"` or `"$(tasks.task1.results.array-results[1])"` |
| `operator` | `operator` represents an `input`'s relationship to a set of `values`; a valid `operator` must be provided. | `in` or `notin` |
| `values` | An array of string values; the `values` array must be provided and has to be non-empty. | * An array param e.g. `["$(params.images[*])"]` * An array result e.g. `["$(tasks.task1.results.array-results[*])"]` * `values` can contain static values e.g. `"ubuntu"` * `values` can contain variables (parameters or results) or a Workspace's bound state e.g. `["$(params.image)"]` or `["$(tasks.task1.results.image)"]` or `["$(tasks.task1.results.array-results[1])"]` |

The `Parameters` are read from the `Pipeline` and `Results` are read directly from previous `Tasks`. Using `Results` in a `when` expression in a guarded `Task` introduces a resource dependency on the previous `Task` that produced the `Result`.
The declared when expressions are evaluated before the Task is run. If all the when expressions evaluate to True, the Task is run. If any of the when expressions evaluate to False, the Task is not run and the Task is listed in the Skipped Tasks section of the PipelineRunStatus.
In these examples, the `first-create-file` task will only be executed if the `path` parameter is `README.md`, the `echo-file-exists` task will only be executed if the `exists` result from the `check-file` task is `yes`, and the `run-lint` task will only be executed if the `lint-config` optional workspace has been provided by a `PipelineRun`.
tasks:
- name: first-create-file
when:
- input: "$(params.path)"
operator: in
values: ["README.md"]
taskRef:
name: first-create-file
---
tasks:
- name: echo-file-exists
when:
- input: "$(tasks.check-file.results.exists)"
operator: in
values: ["yes"]
taskRef:
name: echo-file-exists
---
tasks:
- name: run-lint
when:
- input: "$(workspaces.lint-config.bound)"
operator: in
values: ["true"]
taskRef:
name: lint-source
---
tasks:
- name: deploy-in-blue
when:
- input: "blue"
operator: in
values: ["$(params.deployments[*])"]
taskRef:
name: deployment
For an end-to-end example, see PipelineRun with when expressions.
There are a lot of scenarios where `when` expressions can be really useful. Some of these are:

- Checking whether the result of a previous `Task` is as expected

### Use CEL expression in WhenExpression

> :seedling: `CEL in WhenExpression` is an alpha feature. The `enable-cel-in-whenexpression` feature flag must be set to `"true"` to enable the use of `CEL` in `WhenExpression`.

CEL (Common Expression Language) is a declarative language designed for simplicity, speed, safety, and portability, which can be used to express a wide variety of conditions and computations.

You can define a CEL expression in a `WhenExpression` to guard the execution of a `Task`. The CEL expression must evaluate to either `true` or `false`. You can use a single line of CEL string to replace the current `WhenExpression`'s `input`+`operator`+`values`. For example:
# current WhenExpressions
when:
- input: "foo"
operator: "in"
values: ["foo", "bar"]
- input: "duh"
operator: "notin"
values: ["foo", "bar"]
# with cel
when:
- cel: "'foo' in ['foo', 'bar']"
- cel: "!('duh' in ['foo', 'bar'])"
CEL can offer more conditional functions, such as numeric comparisons (e.g. >, <=, etc), logic operators (e.g. OR, AND), Regex Pattern Matching. For example:
when:
# test coverage result is larger than 90%
- cel: "'$(tasks.unit-test.results.test-coverage)' > 0.9"
# params is not empty, or params2 is 8.5 or 8.6
- cel: "'$(params.param1)' != '' || '$(params.param2)' == '8.5' || '$(params.param2)' == '8.6'"
# param branch matches pattern `release/.*`
- cel: "'$(params.branch)'.matches('release/.*')"
CEL supports string substitutions: you can reference the string value, an array index, or an object key of a param/result. For example:
when:
# string result
- cel: "$(tasks.unit-test.results.test-coverage) > 0.9"
# array indexing result
- cel: "$(tasks.unit-test.results.test-coverage[0]) > 0.9"
# object result key
- cel: "'$(tasks.objectTask.results.repo.url)'.matches('github.com/tektoncd/.*')"
# string param
- cel: "'$(params.foo)' == 'foo'"
# array indexing
- cel: "'$(params.branch[0])' == 'foo'"
# object param key
- cel: "'$(params.repo.url)'.matches('github.com/tektoncd/.*')"
Note: the reference needs to be wrapped with single quotes.
Whole Array and Object replacements are not supported yet. The following usage is not supported:
when:
- cel: "'foo' in '$(params.array_params[*])'"
- cel: "'foo' in '$(params.object_params[*])'"
In addition to the cases listed above, you can craft any valid CEL expression as defined by the cel-spec language definition.

CEL expressions are validated by the admission webhook, and a validation error is returned if an expression is invalid.

Note: To use Tekton's variable substitution, you need to wrap the reference with single quotes. This also means that if you pass another CEL expression via params or results, it won't be executed, so CEL injection is disallowed. For example:

- This is valid: `'$(params.foo)' == 'foo'`
- This is invalid: `$(params.foo) == 'foo'`
- CEL's own variable resolution is not supported yet, so this is also invalid: `params.foo == 'foo'`
### Guarding a Task and its dependent Tasks

To guard a `Task` and its dependent `Tasks`, you can either:

- cascade the `when` expressions to the specific dependent `Tasks` to be guarded as well, or
- compose the `Task` and its dependent `Tasks` as a unit to be guarded and executed together using Pipelines in Pipelines.

#### Cascade when expressions to the specific dependent Tasks

Pick and choose which specific dependent `Tasks` to guard as well, and cascade the `when` expressions to those `Tasks`.
Taking the use case below, a user who wants to guard manual-approval and its dependent Tasks:
tests
|
v
manual-approval
| |
v (approver)
build-image |
| v
v slack-msg
deploy-image
The user can design the Pipeline to solve their use case as such:
tasks:
#...
- name: manual-approval
runAfter:
- tests
when:
- input: $(params.git-action)
operator: in
values:
- merge
taskRef:
name: manual-approval
- name: build-image
when:
- input: $(params.git-action)
operator: in
values:
- merge
runAfter:
- manual-approval
taskRef:
name: build-image
- name: deploy-image
when:
- input: $(params.git-action)
operator: in
values:
- merge
runAfter:
- build-image
taskRef:
name: deploy-image
- name: slack-msg
params:
- name: approver
value: $(tasks.manual-approval.results.approver)
taskRef:
name: slack-msg
#### Compose using Pipelines in Pipelines

Compose a set of `Tasks` as a unit of execution using Pipelines in Pipelines, which allows for guarding a `Task` and its dependent `Tasks` (as a sub-`Pipeline`) using `when` expressions.

Note: Pipelines in Pipelines is an experimental feature.
Taking the use case below, a user who wants to guard manual-approval and its dependent Tasks:
tests
|
v
manual-approval
| |
v (approver)
build-image |
| v
v slack-msg
deploy-image
The user can design the Pipelines to solve their use case as such:
## sub pipeline (approve-build-deploy-slack)
tasks:
- name: manual-approval
runAfter:
- integration-tests
taskRef:
name: manual-approval
- name: build-image
runAfter:
- manual-approval
taskRef:
name: build-image
- name: deploy-image
runAfter:
- build-image
taskRef:
name: deploy-image
- name: slack-msg
params:
- name: approver
value: $(tasks.manual-approval.results.approver)
taskRef:
name: slack-msg
---
## main pipeline
tasks:
#...
- name: approve-build-deploy-slack
runAfter:
- tests
when:
- input: $(params.git-action)
operator: in
values:
- merge
taskRef:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
name: approve-build-deploy-slack
### Guarding a Task only

When `when` expressions evaluate to `False`, the `Task` will be skipped and:

- the ordering-dependent `Tasks` will be executed
- the resource-dependent `Tasks` (and their dependencies) will be skipped because of missing `Results` from the skipped parent `Task`. When we add support for default `Results`, the resource-dependent `Tasks` may be executed if default `Results` from the skipped parent `Task` are specified. In addition, if a resource-dependent `Task` needs a file from a guarded parent `Task` in a shared `Workspace`, make sure to handle the execution of the child `Task` in case the expected file is missing from the `Workspace` because the guarded parent `Task` is skipped.

On the other hand, the rest of the `Pipeline` will continue executing.
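The advice above about missing files in a shared Workspace can be sketched as a defensive step in the child `Task` (the workspace and file names are hypothetical):

```yaml
  steps:
    - name: consume-report
      image: alpine
      script: |
        # The guarded parent task may have been skipped, so the file may not exist.
        if [ -f "$(workspaces.shared.path)/report.txt" ]; then  # hypothetical workspace/file
          cat "$(workspaces.shared.path)/report.txt"
        else
          echo "report.txt missing; the guarded parent task was likely skipped"
        fi
```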
tests
|
v
manual-approval
| |
v (approver)
build-image |
| v
v slack-msg
deploy-image
Taking the use case above, a user who wants to guard manual-approval only can design the Pipeline as such:
tasks:
#...
- name: manual-approval
runAfter:
- tests
when:
- input: $(params.git-action)
operator: in
values:
- merge
taskRef:
name: manual-approval
- name: build-image
runAfter:
- manual-approval
taskRef:
name: build-image
- name: deploy-image
runAfter:
- build-image
taskRef:
name: deploy-image
- name: slack-msg
params:
- name: approver
value: $(tasks.manual-approval.results.approver)
taskRef:
name: slack-msg
If manual-approval is skipped, execution of its dependent Tasks (slack-msg, build-image and deploy-image)
would be unblocked regardless:
- `build-image` and `deploy-image` should be executed successfully
- `slack-msg` will be skipped because it is missing the `approver` `Result` from `manual-approval`
  - dependents of `slack-msg` would have been skipped too if it had any of them
  - if `manual-approval` specifies a default `approver` `Result`, such as "None", then `slack-msg` would be executed (supporting default `Results` is in progress)

### timeout field

You can use the `timeout` field in the `Task` spec within the `Pipeline` to set the timeout
of the TaskRun that executes that Task within the PipelineRun that executes your Pipeline.
The Timeout value is a duration conforming to Go's ParseDuration
format. For example, valid values are 1h30m, 1h, 1m, and 60s.
Note: If you do not specify a Timeout value, Tekton instead honors the timeout for the PipelineRun.
Note: If the specified Task timeout is greater than the Pipeline timeout as configured in the PipelineRun, the Pipeline will time-out first causing the Task to timeout before its configured timeout.
For example, if the PipelineRun sets timeouts.pipeline = 1h and the Pipeline sets tasks[0].timeout = 3h, the task will still timeout after 1h.
See PipelineRun - Configuring a failure timeout for details.
Note: Task timeouts specified in the Pipeline can be overridden at runtime using the taskRunSpecs field in the PipelineRun. This provides flexibility to adjust timeouts for specific execution contexts without modifying the Pipeline definition.
In the example below, the build-the-image Task is configured to time out after 90 seconds:
spec:
tasks:
- name: build-the-image
taskRef:
name: build-push
timeout: "0h1m30s"
Tekton provides variables to inject values into the contents of certain fields. The values you can inject come from a range of sources including other fields in the Pipeline, context-sensitive information that Tekton provides, and runtime information received from a PipelineRun.
The mechanism of variable substitution is quite simple - string replacement is performed by the Tekton Controller when a PipelineRun is executed.
See the complete list of variable substitutions for Pipelines and the list of fields that accept substitutions.
For an end-to-end example, see using context variables.
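As a small sketch of the substitution mechanism, a PipelineTask can pass context-sensitive values into its params (the param names and the print-params Task are hypothetical):

```yaml
tasks:
  - name: print-context
    params:
      - name: pipeline-name
        value: "$(context.pipeline.name)"         # replaced with the Pipeline's name
      - name: run-namespace
        value: "$(context.pipelineRun.namespace)" # replaced with the PipelineRun's namespace
    taskRef:
      name: print-params                          # hypothetical Task consuming the params
```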
retries and retry-count variable substitutions
Tekton supports variable substitution for the retries
parameter of PipelineTask. Variables like context.pipelineTask.retries and
context.task.retry-count can be added to the parameters of a PipelineTask.
context.pipelineTask.retries will be replaced by retries of the PipelineTask, while
context.task.retry-count will be replaced by current retry number of the PipelineTask.
```yaml
params:
  - name: pipelineTask-retries
    value: "$(context.pipelineTask.retries)"
taskSpec:
  params:
    - name: pipelineTask-retries
  steps:
    - image: ubuntu
      name: print-if-retries-exhausted
      script: |
        if [ "$(context.task.retry-count)" = "$(params.pipelineTask-retries)" ]
        then
          echo "This is the last retry."
        fi
        exit 1
```
Note: Every PipelineTask can only access its own retries and retry-count. These
values aren't accessible for other PipelineTasks.
Results
Tasks can emit Results when they execute. A Pipeline can use these
Results for two different purposes:
- A Pipeline can pass the Result of a Task into the Parameters or when expressions of another.
- A Pipeline can itself emit Results and include data from the Results of its Tasks.

Note: Tekton does not enforce that Results are produced at the Task level. If a Pipeline attempts to consume a Result that was declared by a Task but not produced, it will fail. TEP-0048 proposes introducing default values for Results to help Pipeline authors manage this case.
Passing one Task's Results into the Parameters or when expressions of another
Sharing Results between Tasks in a Pipeline happens via
variable substitution - one Task emits
a Result and another receives it as a Parameter with a variable such as
$(tasks.<task-name>.results.<result-name>). Pipelines support two additional types of
Results and Parameters: array []string and object map[string]string.
Array result is a beta feature and can be enabled by setting enable-api-fields to alpha or beta.
| Result Type | Parameter Type | Specification | enable-api-fields |
|---|---|---|---|
| string | string | $(tasks.<task-name>.results.<result-name>) | stable |
| array | array | $(tasks.<task-name>.results.<result-name>[*]) | alpha or beta |
| array | string | $(tasks.<task-name>.results.<result-name>[i]) | alpha or beta |
| object | object | $(tasks.<task-name>.results.<result-name>[*]) | alpha or beta |
| object | string | $(tasks.<task-name>.results.<result-name>.key) | alpha or beta |
Note: Whole Array and Object Results (using star notation) cannot be referenced in a script.
When one Task receives the Results of another, there is a dependency created between those
two Tasks. In order for the receiving Task to get data from another Task's Result,
the Task producing the Result must run first. Tekton enforces this Task ordering
by ensuring that the Task emitting the Result executes before any Task that uses it.
In the snippet below, a param is provided its value from the commit Result emitted by the
checkout-source Task. Tekton will make sure that the checkout-source Task runs
before this one.
```yaml
params:
  - name: foo
    value: "$(tasks.checkout-source.results.commit)"
  - name: array-params
    value: "$(tasks.checkout-source.results.array-results[*])"
  - name: array-indexing-params
    value: "$(tasks.checkout-source.results.array-results[1])"
  - name: object-params
    value: "$(tasks.checkout-source.results.object-results[*])"
  - name: object-element-params
    value: "$(tasks.checkout-source.results.object-results.objectkey)"
```
Note: If checkout-source exits successfully without initializing commit Result,
the receiving Task fails and causes the Pipeline to fail with InvalidTaskResultReference:
unable to find result referenced by param 'foo' in 'task';: Could not find result with name 'commit' for task run 'checkout-source'
In the snippet below, a when expression is provided its value from the exists Result emitted by the
check-file Task. Tekton will make sure that the check-file Task runs before this one.
```yaml
when:
  - input: "$(tasks.check-file.results.exists)"
    operator: in
    values: ["yes"]
```
For an end-to-end example, see Task Results in a PipelineRun.
Note that when expressions are whitespace-sensitive. In particular, when producing results intended for inputs to when
expressions that may include newlines at their close (e.g. cat, jq), you may wish to truncate them.
```yaml
taskSpec:
  params:
    - name: jsonQuery-check
  steps:
    - image: ubuntu
      name: store-name-in-results
      script: |
        curl -s https://my-json-server.typicode.com/typicode/demo/profile | jq -r .name | tr -d '\n' | tee $(results.name.path)
```
Emitting Results from a Pipeline
A Pipeline can emit Results of its own for a variety of reasons - an external
system may need to read them when the Pipeline is complete, they might summarise
the most important Results from the Pipeline's Tasks, or they might simply
be used to expose non-critical messages generated during the execution of the Pipeline.
A Pipeline's Results can be composed of one or many Task Results emitted during
the course of the Pipeline's execution. A Pipeline Result can refer to its Tasks'
Results using a variable of the form $(tasks.<task-name>.results.<result-name>).
After a Pipeline has executed, the PipelineRun will be populated with the Results
emitted by the Pipeline. These will be written to the PipelineRun's
status.pipelineResults field.
In the example below, the Pipeline specifies a results entry with the name sum that
references the outputValue Result emitted by the calculate-sum Task.
```yaml
results:
  - name: sum
    description: the sum of all three operands
    value: $(tasks.calculate-sum.results.outputValue)
```
For an end-to-end example, see Results in a PipelineRun.
In the example below, the Pipeline collects array and object results from Tasks.
```yaml
results:
  - name: array-results
    type: array
    description: whole array
    value: $(tasks.task1.results.array-results[*])
  - name: array-indexing-results
    type: string
    description: array element
    value: $(tasks.task1.results.array-results[1])
  - name: object-results
    type: object
    description: whole object
    value: $(tasks.task2.results.object-results[*])
  - name: object-element
    type: string
    description: object element
    value: $(tasks.task2.results.object-results.foo)
```
For an end-to-end example see Array and Object Results in a PipelineRun.
A Pipeline Result is not emitted if any of the following are true:
- The PipelineTask referenced by the Pipeline Result failed. The PipelineRun will also have failed.
- The PipelineTask referenced by the Pipeline Result was skipped.
- The PipelineTask referenced by the Pipeline Result didn't emit the referenced Task Result. This should be considered a bug in the Task and may fail a PipelineTask in future.
- The Pipeline Result uses a variable that doesn't point to an actual PipelineTask. This will result in an InvalidTaskResultReference validation error during PipelineRun execution.
- The Pipeline Result uses a variable that doesn't point to an actual Result in a PipelineTask. This will cause an InvalidTaskResultReference validation error during PipelineRun execution.

Note: Since a Pipeline Result can contain references to multiple Task Results, if any of those
Task Result references are invalid the entire Pipeline Result is not emitted.
Note: If a PipelineTask referenced by the Pipeline Result was skipped, the Pipeline Result will not be emitted and the PipelineRun will not fail due to a missing result.
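Since a Pipeline Result may combine several Task Results, a minimal sketch could look like the following (the task and result names are illustrative); if any one reference is invalid, the whole Result is withheld:

```yaml
results:
  - name: summary
    description: combines Results from two Tasks; all references must be valid for the Result to be emitted
    value: "commit=$(tasks.checkout-source.results.commit) digest=$(tasks.build-image.results.digest)"
```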
Task execution order
You can connect Tasks in a Pipeline so that they execute in a Directed Acyclic Graph (DAG).
Each Task in the Pipeline becomes a node on the graph that can be connected with an edge
so that one will run before another and the execution of the Pipeline progresses to completion
without getting stuck in an infinite loop.
This is done using:
- resource dependencies: Results of one Task being passed into params or when expressions of another
- ordering dependencies: runAfter clauses on the corresponding Tasks

For example, the Pipeline defined as follows
```yaml
tasks:
  - name: lint-repo
    taskRef:
      name: pylint
  - name: test-app
    taskRef:
      name: make-test
  - name: build-app
    taskRef:
      name: kaniko-build-app
    runAfter:
      - test-app
  - name: build-frontend
    taskRef:
      name: kaniko-build-frontend
    runAfter:
      - test-app
  - name: deploy-all
    taskRef:
      name: deploy-kubectl
    runAfter:
      - build-app
      - build-frontend
```
executes according to the following graph:
```
     |            |
     v            v
  test-app    lint-repo
  /       \
 v         v
build-app  build-frontend
  \        /
   v      v
   deploy-all
```
In particular:
- The lint-repo and test-app Tasks have no runAfter clauses and start executing simultaneously.
- Once test-app completes, both build-app and build-frontend start executing simultaneously since they both runAfter the test-app Task.
- The deploy-all Task executes once both build-app and build-frontend complete, since it is supposed to runAfter them both.
- The Pipeline completes execution once both lint-repo and deploy-all complete execution.

The displayName field is an optional field that allows you to add a user-facing name of the Pipeline that can be used to populate a UI. For example:
```yaml
spec:
  displayName: "Code Scan"
  tasks:
    - name: scan
      taskRef:
        name: sonar-scan
```
The description field is an optional field and can be used to provide a description of the Pipeline.
Adding Finally to the Pipeline
You can specify a list of one or more final tasks under the finally section. finally tasks are guaranteed to be executed
in parallel after all PipelineTasks under tasks have completed, regardless of success or error. finally tasks are very
similar to PipelineTasks under the tasks section and follow the same syntax. Each finally task must have a
valid name and a taskRef or
taskSpec. For example:
```yaml
spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: cleanup-test
      taskRef:
        name: cleanup
```
displayName in finally tasks
Similar to specifying displayName in pipelineTasks, finally tasks also
allow you to add a user-facing name of the finally task that can be used to populate and distinguish it in the dashboard.
For example:
```yaml
spec:
  finally:
    - name: notification
      displayName: "Notify"
      taskRef:
        name: notification
    - name: notification-using-context-variable
      displayName: "Notification from $(context.pipeline.name)"
      taskRef:
        name: notification
```
The displayName also allows you to parameterize the human-readable name of your choice based on the
params, the task results,
and the context variables.
The fully resolved displayName is also available in the status as part of pipelineRun.status.childReferences.
Clients such as the dashboard, CLI, etc. can retrieve the displayName from the childReferences. The displayName mainly
drives a better user experience; it is not validated for content or length by the controller.
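As a sketch of such parameterization (the param name, task name, and result name below are hypothetical), a displayName can mix params and Task results:

```yaml
finally:
  - name: notify
    # "team" is an assumed Pipeline param; "commit" an assumed Result of clone-app-repo
    displayName: "Notify $(params.team) about commit $(tasks.clone-app-repo.results.commit)"
    taskRef:
      name: notification
```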
Workspaces in finally tasks
finally tasks can specify Workspaces which PipelineTasks might have utilized,
e.g. a mount point for credentials held in Secrets. To support that requirement, you can specify one or more
Workspaces in the workspaces field for the finally tasks, similar to tasks.
```yaml
spec:
  workspaces:
    - name: shared-workspace
  tasks:
    - name: clone-app-source
      taskRef:
        name: clone-app-repo-to-workspace
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
  finally:
    - name: cleanup-workspace
      taskRef:
        name: cleanup-workspace
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
```
Parameters in finally tasks
Similar to tasks, you can specify Parameters in finally tasks:
```yaml
spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: report-results
      taskRef:
        name: report-results
      params:
        - name: url
          value: "someURL"
```
matrix in finally tasks
:seedling: Matrix is a beta feature. The enable-api-fields feature flag can be set to "beta" to specify Matrix in a PipelineTask.
Similar to tasks, you can also provide Parameters through matrix
in finally tasks:
```yaml
spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: report-results
      taskRef:
        name: report-results
      params:
        - name: url
          value: "someURL"
      matrix:
        params:
          - name: slack-channel
            value:
              - "foo"
              - "bar"
        include:
          - name: build-1
            params:
              - name: slack-channel
                value: "foo"
              - name: flags
                value: "-v"
```
For further information, read Matrix.
Task execution results in finally
finally tasks can be configured to consume Results of PipelineTasks from the tasks section:
```yaml
spec:
  tasks:
    - name: clone-app-repo
      taskRef:
        name: git-clone
  finally:
    - name: discover-git-commit
      params:
        - name: commit
          value: $(tasks.clone-app-repo.results.commit)
```
Note: The scheduling of such a finally task does not change; it is still executed in parallel with other
finally tasks after all non-finally tasks are done.
The controller resolves Task Results before executing the finally task discover-git-commit. If the task
clone-app-repo failed before initializing commit, or was skipped due to a when expression,
leaving the Task Result commit uninitialized, the finally task discover-git-commit is added to the list of
skippedTasks and the rest of the finally tasks continue executing. The Pipeline exits with completion instead of
success if a finally task is added to the list of skippedTasks.
Pipeline result with finally
finally tasks can emit Results, and these Results emitted from the finally tasks can be configured in the
Pipeline Results. References to Results from finally follow the same naming convention as references to Results from tasks: $(finally.<finally-pipelinetask-name>.results.<result-name>).
```yaml
results:
  - name: comment-count-validate
    value: $(finally.check-count.results.comment-count-validate)
finally:
  - name: check-count
    taskRef:
      name: example-task-name
```
In this example, pipelineResults in status will show the name-value pair for the result comment-count-validate which is produced in the Task example-task-name.
PipelineRun Status with finally
With finally, PipelineRun status is calculated based on the PipelineTasks under the tasks section and the finally tasks.
Without finally:
| PipelineTasks under tasks | PipelineRun status | Reason |
|---|---|---|
| all PipelineTasks successful | true | Succeeded |
| one or more PipelineTasks skipped and rest successful | true | Completed |
| single failure of PipelineTask | false | Failed |
With finally:
| PipelineTasks under tasks | finally tasks | PipelineRun status | Reason |
|---|---|---|---|
| all PipelineTasks successful | all finally tasks successful | true | Succeeded |
| all PipelineTasks successful | one or more failures of finally tasks | false | Failed |
| one or more PipelineTasks skipped and rest successful | all finally tasks successful | true | Completed |
| one or more PipelineTasks skipped and rest successful | one or more failures of finally tasks | false | Failed |
| single failure of PipelineTask | all finally tasks successful | false | Failed |
| single failure of PipelineTask | one or more failures of finally tasks | false | Failed |
Overall, PipelineRun state transitions for the respective scenarios are:
- all PipelineTasks and finally tasks are successful: Started -> Running -> Succeeded
- one or more PipelineTasks skipped and the rest successful: Started -> Running -> Completed
- a PipelineTask failed / one or more finally tasks failed: Started -> Running -> Failed

Please refer to the table under Monitoring Execution Status to learn about
what kind of events are triggered based on the PipelineRun status.
Status of pipelineTask
A Pipeline can check the status of a specific pipelineTask from the tasks section in finally through the task
parameters:
```yaml
finally:
  - name: finaltask
    params:
      - name: task1Status
        value: "$(tasks.task1.status)"
    taskSpec:
      params:
        - name: task1Status
      steps:
        - image: ubuntu
          name: print-task-status
          script: |
            if [ "$(params.task1Status)" = "Failed" ]
            then
              echo "Task1 has failed, continue processing the failure"
            fi
```
This kind of variable can have any one of the values from the following table:
| Status | Description |
|---|---|
| Succeeded | taskRun for the pipelineTask completed successfully |
| Failed | taskRun for the pipelineTask completed with a failure or was cancelled by the user |
| None | the pipelineTask has been skipped or no execution information is available for the pipelineTask |
For an end-to-end example, see status in a PipelineRun.
Status of All Tasks
A Pipeline can check the aggregate status of all the tasks in the tasks section from finally through the task parameters:
```yaml
finally:
  - name: finaltask
    params:
      - name: aggregateTasksStatus
        value: "$(tasks.status)"
    taskSpec:
      params:
        - name: aggregateTasksStatus
      steps:
        - image: ubuntu
          name: check-task-status
          script: |
            if [ "$(params.aggregateTasksStatus)" = "Failed" ]
            then
              echo "Looks like one or more tasks returned failure, continue processing the failure"
            fi
```
This kind of variable can have any one of the values from the following table:
| Status | Description |
|---|---|
| Succeeded | all tasks have succeeded |
| Failed | one or more tasks failed |
| Completed | all tasks completed successfully, including one or more skipped tasks |
| None | no aggregate execution status available (i.e. none of the above); one or more tasks could be pending/running/cancelled/timed out |
For an end-to-end example, see $(tasks.status) usage in a Pipeline.
finally Task execution using when expressions
Similar to Tasks, finally Tasks can be guarded using when expressions
that operate on static inputs or variables. Like in Tasks, when expressions in finally Tasks can operate on
Parameters and Results. Unlike in Tasks, when expressions in finally tasks can also operate on the Execution Status of Tasks.
when expressions using Parameters in finally Tasks
when expressions in finally Tasks can utilize Parameters as demonstrated using golang-build
and send-to-channel-slack Catalog
Tasks:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipelinerun-
spec:
  pipelineSpec:
    params:
      - name: enable-notifications
        type: string
        description: a boolean indicating whether the notifications should be sent
    tasks:
      - name: golang-build
        taskRef:
          name: golang-build
        # […]
    finally:
      - name: notify-build-failure # executed only when build task fails and notifications are enabled
        when:
          - input: $(tasks.golang-build.status)
            operator: in
            values: ["Failed"]
          - input: $(params.enable-notifications)
            operator: in
            values: ["true"]
        taskRef:
          name: send-to-slack-channel
        # […]
  params:
    - name: enable-notifications
      value: "true"
```
when expressions using Results in finally Tasks
when expressions in finally tasks can utilize Results, as demonstrated using git-clone
and github-add-comment Catalog Tasks:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipelinerun-
spec:
  pipelineSpec:
    tasks:
      - name: git-clone
        taskRef:
          name: git-clone
      - name: go-build
        # […]
    finally:
      - name: notify-commit-sha # executed only when commit sha is not the expected sha
        when:
          - input: $(tasks.git-clone.results.commit)
            operator: notin
            values: [$(params.expected-sha)]
        taskRef:
          name: github-add-comment
        # […]
  params:
    - name: expected-sha
      value: 54dd3984affab47f3018852e61a1a6f9946ecfa
```
If the when expressions in a finally task use Results from skipped or failed non-finally Tasks, the
finally task is also skipped and included in the list of Skipped Tasks in the status, just as when using
Results in other parts of the finally task.
when expressions using Execution Status of PipelineTask in finally tasks
when expressions in finally tasks can utilize the Execution Status of PipelineTasks,
as demonstrated using golang-build and
send-to-channel-slack Catalog Tasks:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipelinerun-
spec:
  pipelineSpec:
    tasks:
      - name: golang-build
        taskRef:
          name: golang-build
        # […]
    finally:
      - name: notify-build-failure # executed only when build task fails
        when:
          - input: $(tasks.golang-build.status)
            operator: in
            values: ["Failed"]
        taskRef:
          name: send-to-slack-channel
        # […]
```
For an end-to-end example, see PipelineRun with when expressions.
when expressions using Aggregate Execution Status of Tasks in finally tasks
when expressions in finally tasks can utilize
Aggregate Execution Status of Tasks as demonstrated:
```yaml
finally:
  - name: notify-any-failure # executed only when one or more tasks fail
    when:
      - input: $(tasks.status)
        operator: in
        values: ["Failed"]
    taskRef:
      name: notify-failure
```
For an end-to-end example, see PipelineRun with when expressions.
finally task execution order
It's not possible to configure or modify the execution order of the finally tasks. Unlike Tasks in a Pipeline,
all finally tasks run simultaneously and start executing once all PipelineTasks under tasks have settled which means
no runAfter can be specified in finally tasks.
Using Custom Tasks
Custom Tasks have been promoted from v1alpha1 to v1beta1. From v0.43.0 through v0.46.0, the Pipeline controller can create either a v1alpha1 or v1beta1 Custom Task, gated by the feature flag custom-task-version, defaulting to v1beta1. You can set custom-task-version to v1alpha1 or v1beta1 to control which version is created.
Starting from v0.47.0, feature flag custom-task-version is removed and only v1beta1 Custom Task will be supported. See the migration doc for details.
Custom Tasks
can implement behavior that doesn't correspond directly to running a workload in a Pod on the cluster.
For example, a custom task might execute some operation outside of the cluster and wait for its execution to complete.
A PipelineRun starts a custom task by creating a CustomRun instead of a TaskRun.
In order for a custom task to execute, there must be a custom task controller running on the cluster
that is responsible for watching and updating CustomRuns which reference their type.
To specify the custom task type you want to execute, the taskRef field
must include the custom task's apiVersion and kind as shown below.
Using apiVersion will always create a CustomRun. If apiVersion is set, kind is required as well.
```yaml
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
```
This creates a Run/CustomRun of a custom task of type Example in the example.dev API group with the version v1alpha1.
A validation error will be returned if apiVersion or kind is missing.
You can also specify the name of a custom task resource object previously defined in the cluster.
```yaml
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
```
If the taskRef specifies a name, the custom task controller should look up the
Example resource with that name and use that object to configure the execution.
If the taskRef does not specify a name, the custom task controller might support
some default behavior for executing unnamed tasks.
For v1alpha1.Run
```yaml
spec:
  tasks:
    - name: run-custom-task
      taskSpec:
        apiVersion: example.dev/v1alpha1
        kind: Example
        spec:
          field1: value1
          field2: value2
```
For v1beta1.CustomRun
```yaml
spec:
  tasks:
    - name: run-custom-task
      taskSpec:
        apiVersion: example.dev/v1alpha1
        kind: Example
        customSpec:
          field1: value1
          field2: value2
```
If the custom task controller supports the in-line or embedded task spec, this will create a Run/CustomRun of a custom task of
type Example in the example.dev API group with the version v1alpha1.
If the taskSpec is not supported, the custom task controller should produce proper validation errors.
Please take a look at the
developer guide for custom controllers supporting taskSpec.
taskSpec support for PipelineRun was designed and discussed in
TEP-0061.
If a custom task supports parameters, you can use the
params field to specify their values:
```yaml
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
      params:
        - name: foo
          value: bah
```
The Parameters in the Params field will accept
context variables that will be substituted, including:
- PipelineRun name, namespace and uid
- Pipeline name
- PipelineTask retries

```yaml
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
      params:
        - name: foo
          value: $(context.pipeline.name)
```
:seedling: Matrix is an alpha feature. The enable-api-fields feature flag must be set to "alpha" to specify Matrix in a PipelineTask.
If a custom task supports parameters, you can use the
matrix field to specify their values, if you want to fan out the Custom Task:
```yaml
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
      params:
        - name: foo
          value: bah
      matrix:
        params:
          - name: bar
            value:
              - qux
              - thud
        include:
          - name: build-1
            params:
              - name: common-package
                value: path-to-common-pkg
```
For further information, read Matrix.
If the custom task supports it, you can provide Workspaces to share data with the custom task.
```yaml
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
      workspaces:
        - name: my-workspace
```
Consult the documentation of the custom task that you are using to determine whether it supports workspaces and how to name them.
Results
If the custom task produces Results, you can reference them in a Pipeline using the normal syntax,
$(tasks.<task-name>.results.<result-name>).
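For instance, a minimal sketch consuming a Result from a custom task (the digest Result name and the print-digest Task are hypothetical):

```yaml
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
    - name: consume-result
      params:
        - name: digest
          value: "$(tasks.run-custom-task.results.digest)" # digest is an assumed Result name
      taskRef:
        name: print-digest # hypothetical Task
```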
Timeout
For v1alpha1.Run: if the custom task supports it, as we recommend, you can provide a timeout to specify the maximum running time of a Run (including all retry attempts or other operations).
For v1beta1.CustomRun: if the custom task supports it, as we recommend, you can provide a timeout to specify the maximum running time of one CustomRun execution.
```yaml
spec:
  tasks:
    - name: run-custom-task
      timeout: 2s
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
```
Consult the documentation of the custom task that you are using to determine whether it supports Timeout.
Retries
If the custom task supports it, you can provide retries to specify how many times you want to retry the custom task.
```yaml
spec:
  tasks:
    - name: run-custom-task
      retries: 2
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
```
Consult the documentation of the custom task that you are using to determine whether it supports Retries.
Known Custom Tasks
We try to list as many known Custom Tasks as possible here so that users can easily find what they want. Please feel free to share the Custom Task you implemented in this table.
| Custom Task | Description |
|---|---|
| Wait Task Beta | Waits a given amount of time before succeeding, specified by an input parameter named duration. Supports timeout and retries. |
| Approvals | Pauses the execution of PipelineRuns and waits for manual approvals. Version 0.6.0 and up. |
| Custom Task | Description |
|---|---|
| Pipeline Loops | Runs a Pipeline in a loop with varying Parameter values. |
| Common Expression Language | Provides Common Expression Language support in Tekton Pipelines. |
| Wait | Waits a given amount of time, specified by a Parameter named "duration", before succeeding. |
| Approvals | Pauses the execution of PipelineRuns and waits for manual approvals. Versions up to and including 0.5.0. |
| Pipelines in Pipelines | Defines and executes a Pipeline in a Pipeline. |
| Task Group | Groups Tasks together as a Task. |
| Pipeline in a Pod | Runs Pipeline in a Pod. |
For a better understanding of Pipelines, study our code examples.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License.