This document lists the configuration options for the GitLab .gitlab-ci.yml file.
This file is where you define the CI/CD jobs that make up your pipeline.
You can get started with a new `.gitlab-ci.yml` file by following a tutorial that demonstrates a simple
or complex pipeline. To view a large `.gitlab-ci.yml` file used in an enterprise, see the
`.gitlab-ci.yml` file for `gitlab`.

When you are editing your `.gitlab-ci.yml` file, you can validate it with the
CI Lint tool.
GitLab CI/CD configuration uses YAML formatting, so the order of keywords is not important unless otherwise specified.
Use CI/CD expressions for more dynamic pipeline configuration options.
<!-- If you are editing content on this page, follow the instructions for documenting keywords: <https://docs.gitlab.com/development/cicd/cicd_reference_documentation_guide/> -->

A GitLab CI/CD pipeline configuration includes:
Global keywords that configure pipeline behavior:
| Keyword | Description |
|---------|-------------|
| `default` | Custom default values for job keywords. |
| `include` | Import configuration from other YAML files. |
| `stages` | The names and order of the pipeline stages. |
| `variables` | Define default CI/CD variables for all jobs in the pipeline. |
| `workflow` | Control what types of pipeline run. |

Header keywords:

| Keyword | Description |
|---------|-------------|
| `spec` | Define specifications for external configuration files. |
Jobs configured with job keywords:
| Keyword | Description |
|---------|-------------|
| `after_script` | Override a set of commands that are executed after the job. |
| `allow_failure` | Allow a job to fail. A failed job does not cause the pipeline to fail. |
| `artifacts` | List of files and directories to attach to a job on success. |
| `before_script` | Override a set of commands that are executed before the job. |
| `cache` | List of files that should be cached between subsequent runs. |
| `coverage` | Code coverage settings for a given job. |
| `dast_configuration` | Use configuration from DAST profiles on a job level. |
| `dependencies` | Restrict which artifacts are passed to a specific job by providing a list of jobs to fetch artifacts from. |
| `environment` | Name of an environment to which the job deploys. |
| `extends` | Configuration entries that this job inherits from. |
| `identity` | Authenticate with third-party services using identity federation. |
| `image` | Use Docker images. |
| `inherit` | Select which global defaults all jobs inherit. |
| `interruptible` | Defines if a job can be canceled when made redundant by a newer run. |
| `manual_confirmation` | Define a custom confirmation message for a manual job. |
| `needs` | Execute jobs earlier than the stage ordering. |
| `pages` | Upload the result of a job to use with GitLab Pages. |
| `parallel` | How many instances of a job should be run in parallel. |
| `release` | Instructs the runner to generate a release object. |
| `resource_group` | Limit job concurrency. |
| `retry` | When and how many times a job can be auto-retried in case of a failure. |
| `rules` | List of conditions to evaluate and determine selected attributes of a job, and whether or not it's created. |
| `script` | Shell script that is executed by a runner. |
| `run` | Run configuration that is executed by a runner. |
| `secrets` | The CI/CD secrets the job needs. |
| `services` | Use Docker services images. |
| `stage` | Defines a job stage. |
| `start_in` | Delay job execution for a specified duration. Requires `when: delayed`. |
| `tags` | List of tags that are used to select a runner. |
| `timeout` | Define a custom job-level timeout that takes precedence over the project-wide setting. |
| `trigger` | Defines a downstream pipeline trigger. |
| `variables` | Define CI/CD variables for individual jobs. |
| `when` | When to run the job. |
Deprecated keywords that are no longer recommended for use.
Some keywords are not defined in a job. These keywords control pipeline behavior or import additional pipeline configuration.
## default

{{< history >}}

- `id_tokens` introduced in GitLab 16.4.

{{< /history >}}
You can set global defaults for some keywords. Each default keyword is copied to every job that doesn't already have it defined.
Default configuration does not merge with job configuration. If the job already has a keyword defined, the job keyword takes precedence and the default configuration for that keyword is not used.
Keyword type: Global keyword.
Supported values: These keywords can have custom defaults:
Example of `default`:

```yaml
default:
  image: ruby:3.0
  retry: 2

rspec:
  script: bundle exec rspec

rspec 2.7:
  image: ruby:2.7
  script: bundle exec rspec
```
In this example:

- `image: ruby:3.0` and `retry: 2` are the default keywords for all jobs in the pipeline.
- The `rspec` job does not have `image` or `retry` defined, so it uses the defaults of
  `image: ruby:3.0` and `retry: 2`.
- The `rspec 2.7` job does not have `retry` defined, but it does have `image` explicitly defined.
  It uses the default `retry: 2`, but ignores the default `image` and uses the `image: ruby:2.7`
  defined in the job.

Additional details:

- Control inheritance of default keywords in jobs with `inherit:default`.

## include

Use `include` to include external YAML files in your CI/CD configuration.
You can split one long .gitlab-ci.yml file into multiple files to increase readability,
or reduce duplication of the same configuration in multiple places.
You can also store template files in a central repository and include them in projects.
The `include` files are:

- Merged with those in the `.gitlab-ci.yml` file.
- Always evaluated first and then merged with the content of the `.gitlab-ci.yml` file,
  regardless of the position of the `include` keyword.

The time limit to resolve all files is 30 seconds.
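For illustration, one `include` section can combine several of the methods described in the following sections. This is a hedged sketch; the file paths and project name are hypothetical:

```yaml
include:
  # A file in the same repository
  - local: '/templates/.gitlab-ci-template.yml'
  # A file from another project on the same GitLab instance
  - project: 'my-group/my-project'
    ref: main
    file: '/templates/.builds.yml'
  # A built-in template
  - template: Auto-DevOps.gitlab-ci.yml
```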
Keyword type: Global keyword.
Supported values: The include subkeys:
And optionally:
Additional details:

- Only certain CI/CD variables can be used with `include` keywords.
- Keyword configuration defined in included files is merged with the configuration in the
  `.gitlab-ci.yml` file. The two configurations are merged together, and the
  configuration in the `.gitlab-ci.yml` file takes precedence over the included configuration.
- If you rerun a job, the `include` files are not fetched again. All jobs in a pipeline use the configuration
  fetched when the pipeline was created. Any changes to the source `include` files
  do not affect job reruns.
- If you rerun a pipeline, the `include` files are fetched again. If they changed after the last
  pipeline run, the new pipeline uses the changed configuration.

### include:component

Use `include:component` to add a CI/CD component to the
pipeline configuration.
Keyword type: Global keyword.
Supported values: The full address of the CI/CD component, formatted as
<fully-qualified-domain-name>/<project-path>/<component-name>@<specific-version>.
Example of `include:component`:

```yaml
include:
  - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@1.0
```
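Components can also accept inputs when they define them with `spec:inputs`. A hedged sketch; the component path, version, and input name are hypothetical:

```yaml
include:
  - component: $CI_SERVER_FQDN/my-org/components/sast@1.2.0
    inputs:
      stage: test
```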
Related topics:
### include:local

Use `include:local` to include a file that is in the same repository and branch as the configuration file containing the `include` keyword.
Use `include:local` instead of symbolic links.
Keyword type: Global keyword.
Supported values:

A full path relative to the root directory (`/`):

- The YAML file must have the extension `.yml` or `.yaml`.
- You can use `*` and `**` wildcards in the file path.

Example of `include:local`:

```yaml
include:
  - local: '/templates/.gitlab-ci-template.yml'
```

You can also use shorter syntax to define the path:

```yaml
include: '.gitlab-ci-production.yml'
```
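Because wildcards are allowed in the path, you can include every matching file at once. A hedged sketch; the directory name is hypothetical:

```yaml
include:
  - local: '/templates/*.gitlab-ci.yml'  # Include every matching template in the directory
```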
Additional details:

- The `.gitlab-ci.yml` file and the local file must be on the same branch.
- The `include` configuration is always evaluated based on the location of the file
  containing the `include` keyword, not the project running the pipeline. If a
  nested include is in a configuration file
  in a different project, `include: local` checks that other project for the file.

### include:project

To include files from another private project on the same GitLab instance,
use `include:project` and `include:file`.
Keyword type: Global keyword.
Supported values:

- `include:project`: The full GitLab project path.
- `include:file`: A full file path, or array of file paths, relative to the root directory (`/`).
  The YAML files must have the `.yml` or `.yaml` extension.
- `include:ref`: Optional. The ref to retrieve the file from. Defaults to the `HEAD` of the project
  when not specified.

Example of `include:project`:
```yaml
include:
  - project: 'my-group/my-project'
    file: '/templates/.gitlab-ci-template.yml'
  - project: 'my-group/my-subgroup/my-project-2'
    file:
      - '/templates/.builds.yml'
      - '/templates/.tests.yml'
```

You can also specify a ref:

```yaml
include:
  - project: 'my-group/my-project'
    ref: main                                      # Git branch
    file: '/templates/.gitlab-ci-template.yml'
  - project: 'my-group/my-project'
    ref: v1.0.0                                    # Git Tag
    file: '/templates/.gitlab-ci-template.yml'
  - project: 'my-group/my-project'
    ref: 787123b47f14b552955ca2786bc9542ae66fee5b  # Git SHA
    file: '/templates/.gitlab-ci-template.yml'
```
Additional details:

- The `include` configuration is always evaluated based on the location of the file
  containing the `include` keyword, not the project running the pipeline. If a
  nested include is in a configuration file
  in a different project, `include: local` checks that other project for the file.
- When a pipeline starts, the `.gitlab-ci.yml` file configuration included by all methods is evaluated.
  The configuration is a snapshot in time and persists in the database. GitLab does not reflect any changes to
  the referenced `.gitlab-ci.yml` file configuration until the next pipeline starts.
- A `not found or access denied` error may be displayed if the user does not have access to any of the included files.
- When including a YAML file from another project with `ref`, consider:
  - Without an explicit `ref`, the resolved `ref` might be ambiguous.
  - Using a protected `ref` in the other project. Protected tags and branches are more likely to pass through change management before changing.

### include:remote

Use `include:remote` with a full URL to include a file from a different location.
Keyword type: Global keyword.
Supported values:

A public URL accessible by an HTTP/HTTPS `GET` request:

- The YAML file must have the extension `.yml` or `.yaml`.

Example of `include:remote`:

```yaml
include:
  - remote: 'https://gitlab.com/example-project/-/raw/main/.gitlab-ci.yml'
```
Additional details:

- No variables are available in the `include` section of nested includes.
- To protect against unexpected changes to the remote content, use the `integrity` keyword.
- If you link to another GitLab project you own, consider the use of both
  protected branches and protected tags
  to enforce change management rules.

### include:template

Use `include:template` to include `.gitlab-ci.yml` templates.
Keyword type: Global keyword.
Supported values:

A CI/CD template, for example `Auto-DevOps.gitlab-ci.yml`.

Example of `include:template`:

```yaml
# File sourced from the GitLab template collection
include:
  - template: Auto-DevOps.gitlab-ci.yml
```
Multiple `include:template` files:

```yaml
include:
  - template: Android-Fastlane.gitlab-ci.yml
  - template: Auto-DevOps.gitlab-ci.yml
```

Additional details:

- All templates can be viewed in `lib/gitlab/ci/templates`.
  Not all templates are designed to be used with `include:template`, so check template
  comments before using one.
- No variables are available in the `include` section of nested includes.

### include:inputs
Use include:inputs to set the values for input parameters when the included configuration
uses spec:inputs and is added to the pipeline.
Keyword type: Global keyword.
Supported values: A string, numeric value, or boolean.
Example of `include:inputs`:

```yaml
include:
  - local: 'custom_configuration.yml'
    inputs:
      website: "My website"
```
In this example:

- The configuration contained in `custom_configuration.yml` is added to the pipeline,
  with a `website` input set to a value of `My website` for the included configuration.

Additional details:

- If the included configuration uses `spec:inputs:type`,
  the input value must match the defined type.
- If the included configuration uses `spec:inputs:options`,
  the input value must match one of the listed options.

Related topics:

### include:rules

You can use `rules` with `include` to conditionally include other configuration files.
Keyword type: Global keyword.
Supported values: These rules subkeys:
Some CI/CD variables are supported.
Example of `include:rules`:

```yaml
include:
  - local: build_jobs.yml
    rules:
      - if: $INCLUDE_BUILDS == "true"

test-job:
  stage: test
  script: echo "This is a test job"
```
In this example, if the `INCLUDE_BUILDS` variable is:

- `true`, the `build_jobs.yml` configuration is included in the pipeline.
- Not `true` or does not exist, the `build_jobs.yml` configuration is not included in the pipeline.

Related topics:

- Examples of using `include` with `rules`.
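Conditional includes can also key off repository contents rather than variables. A hedged sketch using `rules:exists`; the file names are hypothetical:

```yaml
include:
  - local: docker_jobs.yml
    rules:
      - exists:
          - Dockerfile  # Only include when a Dockerfile exists in the repository
```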
### include:integrity
Use integrity with include:remote to specify a SHA256 hash of the included remote file.
If integrity does not match the actual content, the remote file is not processed
and the pipeline fails.
Keyword type: Global keyword.
Supported values: Base64-encoded SHA256 hash of the included content.
Example of `include:integrity`:

```yaml
include:
  - remote: 'https://gitlab.com/example-project/-/raw/main/.gitlab-ci.yml'
    integrity: 'sha256-L3/GAoKaw0Arw6hDCKeKQlV1QPEgHYxGBHsH4zG1IY8='
```
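To produce the expected value, hash the exact remote file content with SHA256 and Base64-encode the raw digest, prefixed with `sha256-`. As a hedged sketch, a one-off job could print the value; the URL is from the example above, and the use of `curl` and `openssl` assumes they are available in the job image:

```yaml
print-integrity:
  script:
    - curl -fsSL 'https://gitlab.com/example-project/-/raw/main/.gitlab-ci.yml' -o remote.yml
    # Base64-encode the binary SHA256 digest and add the required prefix
    - echo "sha256-$(openssl dgst -sha256 -binary remote.yml | openssl base64 -A)"
```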
### include:cache

{{< history >}}

- Introduced with a flag named `ci_cache_remote_includes`. Disabled by default.

{{< /history >}}

> [!flag]
> The availability of this feature is controlled by a feature flag. For more information, see the history. This feature is available for testing, but not ready for production use.
Use cache with include:remote to cache the fetched remote file content and reduce HTTP requests.
When enabled, the remote file is cached for a specified time-to-live (TTL), improving pipeline performance
for configurations that use the same remote includes repeatedly.
Consider the trade-off between performance and freshness when setting cache durations. Longer cache durations improve performance but might use stale content if the remote file changes frequently.
When cache is not defined, the remote file is fetched every time.
Keyword type: Global keyword.
Supported values:

- `true`: Enable caching with a default time-to-live (TTL) of 1 hour.
- A duration string in `minutes`, `hours`, or `days` (minimum 1 minute).

Example of `include:cache`:

```yaml
include:
  - remote: 'https://gitlab.com/example-project/-/raw/main/sample1.gitlab-ci.yml'
    cache: true
  - remote: 'https://gitlab.com/example-project/-/raw/main/sample2.gitlab-ci.yml'
    cache: '1 day'
```
Additional details:

- `cache` can only be used with `include:remote`.
- When you use `integrity` with `cache`, the integrity check is performed
  on every pipeline run, even when using cached content.

## stages
Use stages to define stages that contain groups of jobs. Use stage
in a job to configure the job to run in a specific stage.
If `stages` is not defined in the `.gitlab-ci.yml` file, the default pipeline stages are:

- `.pre`
- `build`
- `test`
- `deploy`
- `.post`

The order of the items in `stages` defines the execution order for jobs:

- Jobs in the same stage run in parallel.
- Jobs in the next stage run after the jobs from the previous stage complete successfully.
If a pipeline contains only jobs in the .pre or .post stages, it does not run.
There must be at least one other job in a different stage.
Keyword type: Global keyword.
Example of `stages`:

```yaml
stages:
  - build
  - test
  - deploy
```
In this example:

- All jobs in `build` execute in parallel.
- If all jobs in `build` succeed, the `test` jobs execute in parallel.
- If all jobs in `test` succeed, the `deploy` jobs execute in parallel.
- If all jobs in `deploy` succeed, the pipeline is marked as `passed`.

If any job fails, the pipeline is marked as `failed` and jobs in later stages do not
start. Jobs in the current stage are not stopped and continue to run.

Additional details:

- If a job does not specify a `stage`, the job is assigned the `test` stage.

Related topics:

- To make a job start earlier and ignore the stage order, use the `needs` keyword.
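The default `.pre` and `.post` stages always run first and last, without being listed in `stages`. A hedged sketch; the job names and scripts are hypothetical:

```yaml
stages:
  - build
  - test

lint:
  stage: .pre          # Runs before all other stages
  script: echo "Linting"

compile:
  stage: build
  script: echo "Compiling"

unit-tests:
  stage: test
  script: echo "Testing"

notify:
  stage: .post         # Runs after all other stages
  script: echo "Notifying"
```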
## workflow

Use `workflow` to control pipeline behavior.
You can use some predefined CI/CD variables in
workflow configuration, but not variables that are only defined when jobs start.
Related topics:
### workflow:auto_cancel:on_new_commit

{{< history >}}

- Introduced with a flag named `ci_workflow_auto_cancel_on_new_commit`. Disabled by default.
- Generally available. Feature flag `ci_workflow_auto_cancel_on_new_commit` removed.

{{< /history >}}
Use workflow:auto_cancel:on_new_commit to configure the behavior of
the auto-cancel redundant pipelines feature.
Supported values:

- `conservative`: Cancel the pipeline, but only if no jobs with `interruptible: false` have started yet. Default when not defined.
- `interruptible`: Cancel only jobs with `interruptible: true`.
- `none`: Do not auto-cancel any jobs.

Example of `workflow:auto_cancel:on_new_commit`:

```yaml
workflow:
  auto_cancel:
    on_new_commit: interruptible

job1:
  interruptible: true
  script: sleep 60

job2:
  interruptible: false  # Default when not defined.
  script: sleep 60
```
In this example:

- If a new commit is pushed to the branch after `job1` and `job2` start, only `job1` is canceled.

### workflow:auto_cancel:on_job_failure

{{< history >}}

- Introduced in GitLab 16.10 with a flag named `auto_cancel_pipeline_on_job_failure`. Disabled by default.
- Generally available in GitLab 16.11. Feature flag `auto_cancel_pipeline_on_job_failure` removed.

{{< /history >}}
Use workflow:auto_cancel:on_job_failure to configure which jobs should be canceled as soon as one job fails.
Supported values:

- `all`: Cancel the pipeline and all running jobs as soon as one job fails.
- `none`: Do not auto-cancel any jobs.

Example of `workflow:auto_cancel:on_job_failure`:

```yaml
stages: [stage_a, stage_b]

workflow:
  auto_cancel:
    on_job_failure: all

job1:
  stage: stage_a
  script: sleep 60

job2:
  stage: stage_a
  script:
    - sleep 30
    - exit 1

job3:
  stage: stage_b
  script:
    - sleep 30
```
In this example, if job2 fails, job1 is canceled if it is still running and job3 does not start.
Related topics:

### workflow:name

{{< history >}}

- Introduced with a flag named `pipeline_name`. Disabled by default.
- Generally available. Feature flag `pipeline_name` removed.

{{< /history >}}

You can use `name` in `workflow:` to define a name for pipelines.

All pipelines are assigned the defined name. Any leading or trailing spaces in the name are removed.
Supported values:
Examples of `workflow:name`:

A simple pipeline name with a predefined variable:

```yaml
workflow:
  name: 'Pipeline for branch: $CI_COMMIT_BRANCH'
```

A configuration with different pipeline names depending on the pipeline conditions:

```yaml
variables:
  PROJECT1_PIPELINE_NAME: 'Default pipeline name'  # A default is not required

workflow:
  name: '$PROJECT1_PIPELINE_NAME'
  rules:
    - if: '$CI_MERGE_REQUEST_LABELS =~ /pipeline:run-in-ruby3/'
      variables:
        PROJECT1_PIPELINE_NAME: 'Ruby 3 pipeline'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      variables:
        PROJECT1_PIPELINE_NAME: 'MR pipeline: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME'
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH  # For default branch pipelines, use the default name
```
Additional details:

- Variables defined in `workflow:rules:variables` become default variables available in all jobs,
  including trigger jobs which forward variables to downstream pipelines by default.
  If the downstream pipeline uses the same variable, the variable is overwritten
  by the upstream variable value. Be sure to either:
  - Use unique variable names in every project's pipeline configuration, like `PROJECT1_PIPELINE_NAME`.
  - Use `inherit:variables` in the trigger job and list the
    exact variables you want to forward to the downstream pipeline.

### workflow:rules

The `rules` keyword in `workflow` is similar to `rules` defined in jobs,
but controls whether or not a whole pipeline is created.

When no rules evaluate to true, the pipeline does not run.
Supported values: You can use some of the same keywords as job-level `rules`:

- `rules: if`.
- `rules: changes`.
- `rules: exists`.
- `when`, can only be `always` or `never` when used with `workflow`.
- `variables`.

Example of `workflow:rules`:

```yaml
workflow:
  rules:
    - if: $CI_COMMIT_TITLE =~ /-draft$/
      when: never
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```
In this example, pipelines run if the commit title (first line of the commit message) does not end with `-draft`
and the pipeline is for either:

- A merge request.
- The default branch.

Additional details:

- `start_in`, `allow_failure`, and `needs` are not supported in `workflow:rules`,
  but do not cause a syntax violation. Though they have no effect, do not use them
  in `workflow:rules` as it could cause syntax failures in the future. See
  issue 436473 for more details.

Related topics:

#### workflow:rules:variables

You can use `variables` in `workflow:rules` to define variables for
specific pipeline conditions.
When the condition matches, the variable is created and can be used by all jobs
in the pipeline. If the variable is already defined at the top level as a default variable,
the workflow variable takes precedence and overrides the default variable.
Keyword type: Global keyword.
Supported values: Variable name and value pairs:

- The name can use only numbers, letters, and underscores (`_`).
- The value must be a string.

Example of `workflow:rules:variables`:

```yaml
variables:
  DEPLOY_VARIABLE: "default-deploy"

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      variables:
        DEPLOY_VARIABLE: "deploy-production"  # Override globally-defined DEPLOY_VARIABLE
    - if: $CI_COMMIT_BRANCH =~ /feature/
      variables:
        IS_A_FEATURE: "true"                  # Define a new variable.
    - if: $CI_COMMIT_BRANCH                   # Run the pipeline in other cases

job1:
  variables:
    DEPLOY_VARIABLE: "job1-default-deploy"
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      variables:                                   # Override DEPLOY_VARIABLE defined
        DEPLOY_VARIABLE: "job1-deploy-production"  # at the job level.
    - when: on_success                             # Run the job in other cases
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"

job2:
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"
```
When the branch is the default branch:

- In `job1`, `DEPLOY_VARIABLE` is `job1-deploy-production`.
- In `job2`, `DEPLOY_VARIABLE` is `deploy-production`.

When the branch is `feature`:

- In `job1`, `DEPLOY_VARIABLE` is `job1-default-deploy`, and `IS_A_FEATURE` is `true`.
- In `job2`, `DEPLOY_VARIABLE` is `default-deploy`, and `IS_A_FEATURE` is `true`.

When the branch is something else:

- In `job1`, `DEPLOY_VARIABLE` is `job1-default-deploy`.
- In `job2`, `DEPLOY_VARIABLE` is `default-deploy`.

Additional details:

- Variables defined in `workflow:rules:variables` become default variables available in all jobs,
  including trigger jobs which forward variables to downstream pipelines by default.
  If the downstream pipeline uses the same variable, the variable is overwritten
  by the upstream variable value. Be sure to either:
  - Use unique variable names in every project's pipeline configuration, like `PROJECT1_VARIABLE_NAME`.
  - Use `inherit:variables` in the trigger job and list the
    exact variables you want to forward to the downstream pipeline.

#### workflow:rules:auto_cancel

{{< history >}}

- Introduced with a flag named `ci_workflow_auto_cancel_on_new_commit`. Disabled by default.
- Generally available. Feature flag `ci_workflow_auto_cancel_on_new_commit` removed.
- `on_job_failure` option for `workflow:rules` introduced in GitLab 16.10 with a flag named `auto_cancel_pipeline_on_job_failure`. Disabled by default.
- `on_job_failure` option for `workflow:rules` generally available in GitLab 16.11. Feature flag `auto_cancel_pipeline_on_job_failure` removed.

{{< /history >}}
Use workflow:rules:auto_cancel to configure the behavior of
the workflow:auto_cancel:on_new_commit or
the workflow:auto_cancel:on_job_failure features.
Supported values:

- `on_new_commit`: Same values as `workflow:auto_cancel:on_new_commit`.
- `on_job_failure`: Same values as `workflow:auto_cancel:on_job_failure`.

Example of `workflow:rules:auto_cancel`:

```yaml
workflow:
  auto_cancel:
    on_new_commit: interruptible
    on_job_failure: all
  rules:
    - if: $CI_COMMIT_REF_PROTECTED == 'true'
      auto_cancel:
        on_new_commit: none
        on_job_failure: none
    - when: always  # Run the pipeline in other cases

test-job1:
  script: sleep 10
  interruptible: false

test-job2:
  script: sleep 10
  interruptible: true
```
In this example, workflow:auto_cancel:on_new_commit
is set to interruptible and workflow:auto_cancel:on_job_failure
is set to all for all jobs by default. But if a pipeline runs for a protected branch,
the rule overrides the default with on_new_commit: none and on_job_failure: none. For example, if a pipeline
is running for:
- A non-protected branch: `test-job1` continues to run and `test-job2` is canceled.
- A protected branch: both `test-job1` and `test-job2` continue to run.

## Header keywords

Some keywords must be defined in a header section of a YAML configuration file.
The header must be at the top of the file, separated from the rest of the configuration
with `---`.
## spec
Add a spec section to the header of a YAML file to configure the behavior of a pipeline
when a configuration is added to the pipeline with the include keyword.
Specs must be declared at the top of a configuration file, in a header section separated
from the rest of the configuration with ---.
### spec:inputs

You can use `spec:inputs` to define inputs for the CI/CD configuration.
Use the interpolation format `$[[ inputs.input-id ]]` to reference the values outside of the header section.
Inputs are evaluated and interpolated when the configuration is fetched during pipeline creation.
When using inputs, interpolation completes before the configuration is merged
with the contents of the .gitlab-ci.yml file.
Keyword type: Header keyword. spec must be declared at the top of the configuration file,
in a header section.
Supported values: A hash of strings representing the expected inputs.
Example of `spec:inputs`:

```yaml
spec:
  inputs:
    environment:
    job-stage:
---

scan-website:
  stage: $[[ inputs.job-stage ]]
  script: ./scan-website $[[ inputs.environment ]]
```
Additional details:

- Inputs are mandatory when included, unless you use `spec:inputs:default`
  to set a default value. Avoid mandatory inputs unless you only use inputs with
  `include:inputs`.
- Inputs expect strings by default. Use `spec:inputs:type` to set a
  different input type.

Related topics:

#### spec:inputs:default
Inputs are mandatory when included, unless you set a default value with spec:inputs:default.
Use default: '' to have no default value.
Keyword type: Header keyword. spec must be declared at the top of the configuration file,
in a header section.
Supported values: A string representing the default value, or ''.
Example of `spec:inputs:default`:

```yaml
spec:
  inputs:
    website:
    user:
      default: 'test-user'
    flags:
      default: ''
---

# The pipeline configuration would follow...
```
In this example:

- `website` is mandatory and must be defined.
- `user` is optional. If not defined, the value is `test-user`.
- `flags` is optional. If not defined, it has no value.

Additional details:

#### spec:inputs:description
Use description to give a description to a specific input. The description does
not affect the behavior of the input and is only used to help users of the file
understand the input.
Keyword type: Header keyword. spec must be declared at the top of the configuration file,
in a header section.
Supported values: A string representing the description.
Example of `spec:inputs:description`:

```yaml
spec:
  inputs:
    flags:
      description: 'Sample description of the `flags` input details.'
---

# The pipeline configuration would follow...
```

#### spec:inputs:options
Inputs can use options to specify a list of allowed values for an input.
The limit is 50 options per input.
Keyword type: Header keyword. spec must be declared at the top of the configuration file,
in a header section.
Supported values: An array of input options. Only string and number type inputs can be used with options.
Example of `spec:inputs:options`:

```yaml
spec:
  inputs:
    environment:
      options:
        - development
        - staging
        - production
---

# The pipeline configuration would follow...
```
In this example:

- `environment` is mandatory and must be defined with one of the values in the list.

Additional details:

#### spec:inputs:regex
Use spec:inputs:regex to specify a regular expression that the input must match.
Keyword type: Header keyword. spec must be declared at the top of the configuration file,
in a header section.
Supported values: Must be a regular expression.
Example of `spec:inputs:regex`:

```yaml
spec:
  inputs:
    version:
      regex: ^v\d\.\d+(\.\d+)?$
---

# The pipeline configuration would follow...
```
In this example, inputs of v1.0 or v1.2.3 match the regular expression and pass validation.
An input of v1.A.B does not match the regular expression and fails validation.
Additional details:

- `inputs:regex` can only be used with a `type` of `string`, not `number` or `boolean`.
- Do not wrap the regular expression in the `/` character. For example, use `regex.*`, not `/regex.*/`.
- `inputs:regex` uses RE2 to parse regular expressions.

#### spec:inputs:rules
Use spec:inputs:rules to define conditional options and default values for an input
based on the values of other inputs.
Keyword type: Header keyword. spec must be declared at the top of the configuration file,
in a header section.
Supported values: An array of rule objects. Each rule can have:

- `if`: A conditional expression to check input values, using `$[[ inputs.input-id ]]` syntax.
- `options`: An array of allowed values for the input.
- `default`: The default value for the input when this rule matches. Use `default: null` to allow users to enter their own value for the input.

Example of `spec:inputs:rules`:

```yaml
spec:
  inputs:
    environment:
      options: ['development', 'production']
      default: 'development'
    instance_type:
      description: 'VM instance size'
      rules:
        - if: $[[ inputs.environment ]] == 'development'
          options: ['small', 'medium']
          default: 'small'
        - if: $[[ inputs.environment ]] == 'production'
          options: ['large', 'xlarge']
          default: 'large'
---

deploy:
  script: echo "Deploying $[[ inputs.instance_type ]] instance"
```
In this example, when environment is development, users can only select small or
medium instances. When environment is production, only large or xlarge instances
are available.
Additional details:

- Rules are evaluated in order, and the first matching `if` condition is used.
- A rule without an `if` condition acts as a fallback when no other rules match.
- Each rule must define `options` with at least one value.
- Each rule's `options` must also define a `default` value that exists in the options list.
- You cannot combine `rules` and top-level `options` or `default` for the same input.

Related topics:

#### spec:inputs:type

By default, inputs expect strings. Use `spec:inputs:type` to set a different required
type for inputs.
Keyword type: Header keyword. spec must be declared at the top of the configuration file,
in a header section.
Supported values: Can be one of:

- `array`, to accept an array of inputs.
- `string`, to accept string inputs (default when not defined).
- `number`, to only accept numeric inputs.
- `boolean`, to only accept `true` or `false` inputs.

Example of `spec:inputs:type`:
```yaml
spec:
  inputs:
    job_name:
    website:
      type: string
    port:
      type: number
    available:
      type: boolean
    array_input:
      type: array
---

# The pipeline configuration would follow...
```
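As a sketch of how typed inputs might be consumed in the configuration body; the job name, server command, and input names are hypothetical:

```yaml
spec:
  inputs:
    port:
      type: number      # Rejects non-numeric values at include time
      default: 8080
    debug:
      type: boolean
      default: false
---

run-server:
  script:
    - ./start-server --port $[[ inputs.port ]] --debug=$[[ inputs.debug ]]
```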
### spec:include

{{< history >}}

- Introduced with a flag named `ci_file_inputs`. Disabled by default.
- Generally available. Feature flag `ci_file_inputs` removed.

{{< /history >}}
Use spec:include to include external input definitions from other files.
You can share and reuse input definitions across multiple pipeline configurations.
Keyword type: Header keyword. spec must be declared at the top of the configuration file,
in a header section.
Supported values: An array of include locations. Supports local, remote, and project includes only.
Example of `spec:include`:

```yaml
spec:
  include:
    - local: /shared-inputs.yml
  inputs:
    environment:
      default: production
---

deploy:
  script: echo "Deploying to $[[ inputs.environment ]]"
```

With multiple includes from different sources:

```yaml
spec:
  include:
    - local: /base-inputs.yml
    - remote: 'https://example.com/ci/common-inputs.yml'
    - project: 'my-group/shared-configs'
      ref: main
      file: '/ci/team-inputs.yml'
  inputs:
    environment:
      default: production
---

deploy:
  script: echo "Deploying to $[[ inputs.environment ]]"
```
Additional details:

- You can use `spec:include` in CI/CD components.
- Included files must contain only the `inputs` key. Other keys cause validation errors.
- Supports `local`, `remote`, and `project` include types.
  Does not support `template`, `component`, or `artifact` includes.

Related topics:

### spec:component

{{< history >}}

- Introduced with a flag named `ci_component_context_interpolation`. Enabled by default.
- Generally available. Feature flag `ci_component_context_interpolation` removed.

{{< /history >}}
Use spec:component to define which component context data is available for interpolation
in a CI/CD component.
Component context provides metadata about the component itself, such as its name, version, and the commit SHA. This allows component templates to reference their own metadata dynamically.
Use the interpolation format $[[ component.field-name ]] to reference component context
values in the component template.
Keyword type: Header keyword. spec must be declared at the top of the configuration file,
in a header section.
Supported values: An array of strings. Each string must be one of:

- `name`: The component name as specified in the component path.
- `sha`: The commit SHA of the component.
- `version`: The resolved semantic version from the catalog resource. Returns `null` if the reference does not resolve to a published catalog version.
- `reference`: The original reference specified after `@` in the component path.
  For example, `1.0`, `~latest`, a branch name, or a commit SHA.

Example of `spec:component`:
```yaml
spec:
  component: [name, version, reference]
  inputs:
    stage:
      default: build
---

build-image:
  stage: $[[ inputs.stage ]]
  image: registry.example.com/$[[ component.name ]]:$[[ component.version ]]
  script:
    - echo "Building with component version $[[ component.version ]]"
    - echo "Component reference: $[[ component.reference ]]"
```
Additional details:

- The `version` field resolves to the actual semantic version when using:
  - An exact version like `@1.0.0` (returns `1.0.0`).
  - A partial version like `@1.0` (returns the latest matching version, for example `1.0.2`).
  - `@~latest` (returns the latest version).
- The `reference` field always returns the exact value specified after `@`:
  - `@1.0` returns `1.0` (while `version` might return `1.0.2`).
  - `@~latest` returns `~latest` (while `version` returns the actual version number).
  - `@abc123` returns `abc123` (while `version` returns `null`).

Related topics:

### spec:description
Use spec:description to provide a short description of the component. The description
is displayed in the CI/CD Catalog on the component details page, above the inputs table.
Keyword type: Header keyword. spec must be declared at the top of the configuration file,
in a header section.
Supported values: A string describing the component.
Example of `spec:description`:

```yaml
spec:
  description: "A description of the component visible to users in the CI/CD Catalog."
  inputs:
    stage:
      default: test
---

scan-job:
  stage: $[[ inputs.stage ]]
  script: ./run-scan.sh
```
The following topics explain how to use keywords to configure CI/CD pipelines.
### after_script

{{< history >}}

- `after_script` commands for canceled jobs introduced in GitLab 17.0.

{{< /history >}}
Use after_script to define an array of commands to run last, after a job's before_script and
script sections complete. after_script commands also run when:
- A job is canceled while the `before_script` or `script` sections are still running.
- A job fails with failure type `script_failure`, but not other failure types.

Job configuration and default configuration do not merge together.
If the pipeline has default:after_script defined, and the job also has after_script,
the job configuration takes precedence and the default configuration is not used.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values: An array including:

- Single-line commands.
- Long commands split over multiple lines.
- YAML anchors.

CI/CD variables are supported.
Example of after_script:
job:
script:
- echo "An example script section."
after_script:
- echo "Execute this command after the `script` section completes."
Additional details:
- Scripts you specify in `after_script` execute in a new shell, separate from any
  `before_script` or `script` commands. As a result, they:
  - Have the current working directory set back to the default.
  - Don't have access to changes done by commands defined in the `before_script` or `script`,
    including:
    - Command aliases and variables exported in `script` scripts.
    - Changes outside of the working tree, like software installed by a
      `before_script` or `script` script.
  - Have a separate timeout, which can be configured with the `RUNNER_AFTER_SCRIPT_TIMEOUT` variable.
    In GitLab 16.3 and earlier, the timeout is hard-coded to 5 minutes.
- If the `script` section succeeds and the `after_script` times out or fails, the job
  exits with code `0` (`Job Succeeded`).
- If a job times out, the `after_script` commands do not execute by default.
  You can ensure `after_script` runs by setting appropriate `RUNNER_SCRIPT_TIMEOUT` and
  `RUNNER_AFTER_SCRIPT_TIMEOUT` values that don't exceed the job's timeout.
- Using `after_script` at the top level, but not in the `default` section, is deprecated.

Execution timing and file inclusion:
- `after_script` commands execute before cache and artifact upload operations.
- Files created or modified in `after_script` are included in artifacts.
- Files created or modified in `after_script` are included in cache uploads.
- Any file `after_script` creates or modifies in the specified cache or artifact paths
  is captured and uploaded. You can use this timing for scenarios like generating
  test reports or appending logs that should ship with the job's artifacts.
In the following example, the only files that are not included are those created or modified after the artifact or cache upload stages:
job:
script:
- echo "main" > output.txt
- build_something
after_script:
- echo "modified in after_script" >> output.txt # This WILL be in the artifact
- generate_test_report > report.html # This WILL be in the artifact
artifacts:
paths:
- output.txt
- report.html
cache:
paths:
- output.txt # Will include the "modified in after_script" line
For more information, see job execution flow.
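The separate `after_script` timeout can be tuned per job through runner variables. A minimal sketch, assuming the `RUNNER_SCRIPT_TIMEOUT` and `RUNNER_AFTER_SCRIPT_TIMEOUT` variables described above (the values are illustrative):

```yaml
job:
  variables:
    # Keep the sum of these values within the job's overall timeout.
    RUNNER_SCRIPT_TIMEOUT: 15m
    RUNNER_AFTER_SCRIPT_TIMEOUT: 5m
  script:
    - ./run-build.sh
  after_script:
    - ./upload-logs.sh
```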
Related topics:
- Use `after_script` with `default`
  to define a default array of commands that should run after all jobs.
- You can skip `after_script` commands if the job is canceled.
- Use color codes with `after_script`
  to make job logs easier to review.

### allow_failure

Use `allow_failure` to determine whether a pipeline should continue running when a job fails.

- To let the pipeline continue running subsequent jobs, use `allow_failure: true`.
- To stop the pipeline from running subsequent jobs, use `allow_failure: false`.

When jobs are allowed to fail (`allow_failure: true`) an orange warning ({{< icon name="status_warning" >}})
indicates that a job failed. However, the pipeline is successful and the associated commit
is marked as passed with no warnings.
This same warning is displayed when:
The default value for allow_failure is:
- `true` for manual jobs.
- `false` for jobs that use `when: manual` inside `rules`.
- `false` in all other cases.

Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- `true` or `false`.

Example of `allow_failure`:
job1:
stage: test
script:
- execute_script_1
job2:
stage: test
script:
- execute_script_2
allow_failure: true
job3:
stage: deploy
script:
- deploy_to_staging
environment: staging
In this example, job1 and job2 run in parallel:
- If `job1` fails, jobs in the `deploy` stage do not start.
- If `job2` fails, jobs in the `deploy` stage can still start.

Additional details:
- You can use `allow_failure` as a subkey of `rules`.
- If `allow_failure: true` is set, the job is always considered successful, and later jobs with
  `when: on_failure` don't start if this job fails.
- You can use `allow_failure: false` with a manual job to create a blocking manual job.
  A blocked pipeline does not run any jobs in later stages until the manual job
  is started and completes successfully.

#### allow_failure:exit_codes

Use `allow_failure:exit_codes` to control when a job should be
allowed to fail. The job is `allow_failure: true` for any of the listed exit codes,
and `allow_failure: false` for any other exit code.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:

- A single exit code.
- An array of exit codes.

Example of `allow_failure:exit_codes`:
test_job_1:
script:
- echo "Run a script that results in exit code 1. This job fails."
- exit 1
allow_failure:
exit_codes: 137
test_job_2:
script:
- echo "Run a script that results in exit code 137. This job is allowed to fail."
- exit 137
allow_failure:
exit_codes:
- 137
- 255
### artifacts

{{< history >}}

- Symlinks are no longer followed, which happened in some edge cases with previous GitLab Runner versions.

{{< /history >}}
Use artifacts to specify which files to save as job artifacts.
Job artifacts are a list of files and directories that are
attached to the job when it succeeds, fails, or always.
The artifacts are sent to GitLab after the job finishes. They are available for download in the GitLab UI if the size is smaller than the maximum artifact size.
By default, jobs in later stages automatically download all the artifacts created
by jobs in earlier stages. You can control artifact download behavior in jobs with
dependencies.
When using the needs keyword, jobs can only download
artifacts from the jobs defined in the needs configuration.
Job artifacts are only collected for successful jobs by default, and artifacts are restored after caches.
Job configuration and default configuration do not merge together.
If the pipeline has default:artifacts defined, and the job also has artifacts,
the job configuration takes precedence and the default configuration is not used.
#### artifacts:paths

Paths are relative to the project directory (`$CI_PROJECT_DIR`) and can't directly
link outside it.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- An array of file paths, relative to the project directory.
- You can use wildcard patterns that use glob and
  doublestar.Glob patterns.
- For GitLab Pages jobs, if the `pages.publish` path is specified, it is automatically appended to `artifacts:paths`,
  so you don't need to specify it again. If the `pages.publish` path is not specified,
  the `public` directory is automatically appended to `artifacts:paths`.

CI/CD variables are supported.
Example of artifacts:paths:
job:
artifacts:
paths:
- binaries/
- .config
This example creates an artifact with .config and all the files in the binaries directory.
Additional details:
- If not defined with `artifacts:name`, the artifacts file
  is named `artifacts`, which becomes `artifacts.zip` when downloaded.

Related topics:

- To restrict which jobs a specific job fetches artifacts from, see `dependencies`.

#### artifacts:exclude

Use `artifacts:exclude` to prevent files from being added to an artifacts archive.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- An array of file paths, relative to the project directory.
- You can use wildcard patterns that use glob or
  doublestar.PathMatch patterns.

Example of `artifacts:exclude`:
artifacts:
paths:
- binaries/
exclude:
- binaries/**/*.o
This example stores all files in binaries/, but not *.o files located in
subdirectories of binaries/.
Additional details:
- `artifacts:exclude` paths are not searched recursively.
- Files matched by `artifacts:untracked` can be excluded using
  `artifacts:exclude` too.

Related topics:

#### artifacts:expire_in

Use `expire_in` to specify how long job artifacts are stored before
they expire and are deleted. The expire_in setting does not affect:
After their expiry, artifacts are deleted hourly by default (using a cron job), and are not accessible anymore.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values: The expiry time. If no unit is provided, the time is in seconds. Valid values include:
- `'42'`
- `42 seconds`
- `3 mins 4 sec`
- `2 hrs 20 min`
- `2h20min`
- `6 mos 1 day`
- `47 yrs 6 mos and 4d`
- `3 weeks and 2 days`
- `never`

Example of `artifacts:expire_in`:
job:
artifacts:
expire_in: 1 week
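To keep a job's artifacts forever, use the `never` value:

```yaml
job:
  artifacts:
    expire_in: never
```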
Additional details:
- To prevent artifacts from expiring, set `expire_in` to `never`.
- If the expiry time is too short, jobs in later stages of a long pipeline might try to fetch
  expired artifacts and fail with a `could not retrieve the needed artifacts` error.
  Set the expiry time to be longer, or use `dependencies` in later jobs
  to ensure they don't try to fetch expired artifacts.
- `artifacts:expire_in` doesn't affect GitLab Pages deployments. To configure Pages deployments' expiry, use `pages.expire_in`.

#### artifacts:expose_as

Use the `artifacts:expose_as` keyword to
expose artifacts in the merge request UI.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- The name to display in the merge request UI for the artifacts download link.
  Must be combined with `artifacts:paths`.

Example of `artifacts:expose_as`:
test:
script: ["echo 'test' > file.txt"]
artifacts:
expose_as: 'artifact 1'
paths: ['file.txt']
Additional details:
- You can use `expose_as` only once per job, with a maximum of 10 jobs per merge request.
- Artifacts are not displayed if CI/CD variables are used in the `artifacts:paths` values.
- Directory paths must end with `/`. For example, `directory/` works with `artifacts:expose_as`,
  but `directory` does not.
- If `artifacts:paths` only includes a single file, the link opens the file directly.
  In all other cases, the link opens the artifacts browser.

Related topics:

#### artifacts:name

Use the `artifacts:name` keyword to define the name of the created artifacts
archive. You can specify a unique name for every archive.
If not defined, the default name is artifacts, which becomes artifacts.zip when downloaded.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- The name of the artifacts archive. CI/CD variables are supported.
  Must be combined with `artifacts:paths`.

Example of `artifacts:name`:
To create an archive with a name of the current job:
job:
artifacts:
name: "job1-artifacts-file"
paths:
- binaries/
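Because CI/CD variables are supported in the name, you can make each archive unique per job and branch. A sketch using predefined variables:

```yaml
job:
  artifacts:
    # Archive is named after the job and the branch, for example "job-main.zip".
    name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
    paths:
      - binaries/
```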
Related topics:
#### artifacts:public

{{< history >}}

- Artifacts created with `artifacts:public` before 15.10 are not guaranteed to remain private after this update.
- Feature flag `non_public_artifacts` removed.

{{< /history >}}

[!note]
`artifacts:public` is now superseded by `artifacts:access`, which has more options.
Use artifacts:public to control whether job artifacts in public pipelines are available for download
with the GitLab UI and API by anonymous users, or Guest and Reporter roles.
[!warning] This option only affects GitLab UI and API access. CI/CD jobs using job tokens could still access artifacts with the runner API, regardless of this setting. To restrict job token access, configure your project's CI/CD visibility settings to Only project members.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- `true` (default): Artifacts in a job in public pipelines are available for download by anyone,
  including anonymous users, or users with the Guest or Reporter role.
- `false`: Artifacts in the job are only available for download by users with the Developer, Maintainer, or Owner role.

Example of `artifacts:public`:
job:
artifacts:
public: false
#### artifacts:access

{{< history >}}

- `maintainer` option introduced in GitLab 18.4.

{{< /history >}}
Use artifacts:access to determine who can access the job artifacts from the GitLab UI
or API. This option does not prevent you from forwarding artifacts to downstream pipelines.
You cannot use artifacts:public and artifacts:access in the same job.
[!warning] This option only affects GitLab UI and API access. CI/CD jobs using job tokens could still access artifacts with the runner API, regardless of this setting. To restrict job token access, configure your project's CI/CD visibility settings to Only project members.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- `all` (default): Artifacts in a job in public pipelines are available for download by anyone,
  including anonymous, guest, and reporter users.
- `developer`: Artifacts in the job are only available for download by users with the Developer, Maintainer, or Owner role.
- `maintainer`: Artifacts in the job are only available for download by users with the Maintainer or Owner role.
- `none`: Artifacts in the job are not available for download by anyone.

Example of `artifacts:access`:
job:
artifacts:
access: 'developer'
Additional details:
- `artifacts:access` affects all `artifacts:reports` too,
  so you can also restrict access to artifacts for reports.

#### artifacts:reports

Use `artifacts:reports` to collect artifacts generated by
included templates in jobs.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
Example of artifacts:reports:
rspec:
stage: test
script:
- bundle install
- rspec --format RspecJunitFormatter --out rspec.xml
artifacts:
reports:
junit: rspec.xml
Additional details:
- To be able to browse the report output files, include them in the `artifacts:paths` keyword. This uploads and stores the artifact twice.
- Artifacts created for `artifacts:reports` are always uploaded, regardless of the job results (success or failure).
  You can use `artifacts:expire_in` to set an expiration
  date for the artifacts.

#### artifacts:untracked

Use `artifacts:untracked` to add all Git untracked files as artifacts (along
with the paths defined in artifacts:paths). artifacts:untracked ignores configuration
in the repository's .gitignore, so matching artifacts in .gitignore are included.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- `true` or `false` (default if not defined).

Example of `artifacts:untracked`:
Save all Git untracked files:
job:
artifacts:
untracked: true
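Untracked files can also be filtered with `artifacts:exclude`. For example, to save all untracked files except text files (the pattern is illustrative):

```yaml
job:
  artifacts:
    untracked: true
    exclude:
      - "*.txt"  # Untracked .txt files are left out of the archive
```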
Related topics:
#### artifacts:when

Use `artifacts:when` to upload artifacts on job failure or despite the
failure.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- `on_success` (default): Upload artifacts only when the job succeeds.
- `on_failure`: Upload artifacts only when the job fails.
- `always`: Always upload artifacts (except when jobs time out). For example, when
  uploading artifacts
  required to troubleshoot failing tests.

Example of `artifacts:when`:
job:
artifacts:
when: on_failure
Additional details:
- The artifacts created for `artifacts:reports` are always uploaded,
  regardless of the job results (success or failure). `artifacts:when` does not change this behavior.

### before_script

Use `before_script` to define an array of commands that should run before each job's
script commands, but after artifacts are restored.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values: An array including:

- Single-line commands.
- Long commands split over multiple lines.
- YAML anchors.

CI/CD variables are supported.
Example of before_script:
job:
before_script:
- echo "Execute this command before any 'script:' commands."
script:
- echo "This command executes after the job's 'before_script' commands."
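A job-level `before_script` replaces, rather than merges with, one defined in `default`. A minimal sketch of the precedence:

```yaml
default:
  before_script:
    - echo "Runs in every job that has no before_script of its own."

job1:
  script:
    - echo "Runs after the default before_script."

job2:
  before_script:
    - echo "Replaces the default before_script entirely."
  script:
    - echo "Job script."
```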
Additional details:
- Scripts you specify in `before_script` are concatenated with any scripts you specify
  in the main `script`. The combined scripts execute together in a single shell.
- Using `before_script` at the top level, but not in the `default` section, is deprecated.

Related topics:

- Use `before_script` with `default`
  to define a default array of commands that should run before the `script` commands in all jobs.
  If the pipeline has `default:before_script` defined, and the job also has `before_script`,
  the job configuration takes precedence and the default configuration is not used.
- Use color codes with `before_script`
  to make job logs easier to review.

### cache

{{< history >}}

- Symlinks are no longer followed, which happened in some edge cases with previous GitLab Runner versions.

{{< /history >}}
Use cache to specify a list of files and directories to
cache between jobs. You can only use paths that are in the local working copy.
Caches are:

- Shared between pipelines and jobs.
- By default, not shared between protected and unprotected branches.
- Restored before artifacts.
- Limited to a maximum of four different caches per job.

You can disable caching for specific jobs, for example to override:

- A default cache defined with `default`.
- The configuration for a job added with `include`.
Job configuration and default configuration do not merge together.
If the pipeline has default:cache defined, and the job also has cache,
the job configuration takes precedence and the default configuration is not used.
For more information about caches, see Caching in GitLab CI/CD.
Using cache at the top level, but not in the default section, is deprecated.
#### cache:paths

Use the `cache:paths` keyword to choose which files or directories to cache.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- An array of paths relative to the project directory (`$CI_PROJECT_DIR`).
  You can use wildcards that use glob and
  doublestar.Glob patterns.

CI/CD variables are supported.
Example of cache:paths:
Cache all files in binaries that end in .apk and the .config file:
rspec:
script:
- echo "This job uses a cache."
cache:
key: binaries-cache
paths:
- binaries/*.apk
- .config
Additional details:
- The `cache:paths` keyword includes files even if they are untracked or in your `.gitignore` file.

Related topics:

- See the common use cases for caches for more `cache:paths` examples.

#### cache:key

Use the `cache:key` keyword to give each cache a unique identifying key. All jobs
that use the same cache key use the same cache, including in different pipelines.
If not set, the default key is default. All jobs with the cache keyword but
no cache:key share the default cache.
Must be used with cache: paths, or nothing is cached.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- A string.
- A predefined CI/CD variable.
- A combination of both.

Example of `cache:key`:
cache-job:
script:
- echo "This job uses a cache."
cache:
key: binaries-cache-$CI_COMMIT_REF_SLUG
paths:
- binaries/
Additional details:
- If you use Windows Batch to run your shell scripts, replace `$` with `%`. For example: `key: %CI_COMMIT_REF_SLUG%`.
- The `cache:key` value can't contain:
  - The `/` character, or the equivalent URI-encoded `%2F`.
  - Only the `.` character (any number), or the equivalent URI-encoded `%2E`.
- If your cache is shared between jobs but the jobs use different paths, you must set a unique `cache:key`.
  Otherwise cache content can be overwritten.

Related topics:

- You can specify a fallback cache key to use if the specified `cache:key` is not found.
- See the common use cases for caches for more `cache:key` examples.

#### cache:key:files

Use `cache:key:files` to generate a new cache key when the content of the specified files change.
If the content remains unchanged, the cache key remains consistent across branches and pipelines.
You can reuse caches and rebuild them less often, which speeds up subsequent pipeline runs.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- An array of one or two file paths.

CI/CD variables are not supported.
Example of cache:key:files:
cache-job:
script:
- echo "This job uses a cache."
cache:
key:
files:
- Gemfile.lock
- package.json
paths:
- vendor/ruby
- node_modules
This example creates a cache for Ruby and Node.js dependencies. The cache
is tied to the current versions of the Gemfile.lock and package.json files. When one of
these files changes, a new cache key is computed and a new cache is created. Any future
job runs that use the same Gemfile.lock and package.json with cache:key:files
use the new cache, instead of rebuilding the dependencies.
Additional details:
- The cache `key` is a SHA computed from the content of the listed files. If a file doesn't exist, it's ignored in the key calculation.
  If none of the specified files exist, the fallback key is `default`.
- Wildcard patterns like `**/package.json` can be used.

#### cache:key:files_commits

Use `cache:key:files_commits` to generate a new cache key when the latest commit changes
for the specified files. cache:key:files_commits cache keys change whenever
the specified files have a new commit, even if the file content remains identical.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- An array of file paths.

Example of `cache:key:files_commits`:
cache-job:
script:
- echo "This job uses a commit-based cache."
cache:
key:
files_commits:
- package.json
- yarn.lock
paths:
- node_modules
This example creates a cache based on the commit history of package.json and yarn.lock.
If the commit history changes for these files, a new cache key is computed and a new cache is created.
Additional details:
- The cache `key` is a SHA computed from the most recent commit for each specified file.
- If none of the specified files have commits, the fallback key is `default`.
- You cannot use `cache:key:files_commits` and `cache:key:files` in the same cache configuration.

#### cache:key:prefix

Use `cache:key:prefix` to combine a prefix with the SHA computed for `cache:key:files`.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- A string.
- A predefined CI/CD variable.
- A combination of both.

Example of `cache:key:prefix`:
rspec:
script:
- echo "This rspec job uses a cache."
cache:
key:
files:
- Gemfile.lock
prefix: $CI_JOB_NAME
paths:
- vendor/ruby
For example, adding a prefix of $CI_JOB_NAME causes the key to look like rspec-feef9576d21ee9b6a32e30c5c79d0a0ceb68d1e5.
If a branch changes Gemfile.lock, that branch has a new SHA checksum for cache:key:files.
A new cache key is generated, and a new cache is created for that key. If Gemfile.lock
is not found, the prefix is added to default, so the key in the example would be rspec-default.
Additional details:
- If no file in `cache:key:files` is changed in any commits, the prefix is added to the `default` key.

#### cache:untracked

Use `untracked: true` to cache all files that are untracked in your Git repository.
Untracked files include files that are:
- Ignored because of the `.gitignore` configuration.
- Created, but not added to the checkout with `git add`.

Caching untracked files can create unexpectedly large caches if the job downloads:

- Dependencies, like gems or node modules, which are usually untracked.
- Artifacts from a different job. Files extracted from the artifacts are untracked by default.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- `true` or `false` (default).

Example of `cache:untracked`:
rspec:
script: test
cache:
untracked: true
Additional details:
- You can combine `cache:untracked` with `cache:paths` to cache all untracked files, as well as files in the configured paths.
  Use `cache:paths` to cache any specific files, including tracked files, or files that are outside of the working directory,
  and use `cache:untracked` to also cache all untracked files. For example:
rspec:
script: test
cache:
untracked: true
paths:
- binaries/
In this example, the job caches all untracked files in the repository, as well as all the files in binaries/.
If there are untracked files in binaries/, they are covered by both keywords.
#### cache:unprotect

{{< history >}}
{{< /history >}}
Use cache:unprotect to set a cache to be shared between protected
and unprotected branches.
[!warning]
When set to `true`, users without access to protected branches can read and write to cache keys used by protected branches.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- `true` or `false` (default).

Example of `cache:unprotect`:
rspec:
script: test
cache:
unprotect: true
#### cache:when

Use `cache:when` to define when to save the cache, based on the status of the job.
Must be used with cache: paths, or nothing is cached.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- `on_success` (default): Save the cache only when the job succeeds.
- `on_failure`: Save the cache only when the job fails.
- `always`: Always save the cache.

Example of `cache:when`:
rspec:
script: rspec
cache:
paths:
- rspec/
when: 'always'
This example stores the cache whether the job fails or succeeds.

#### cache:policy

To change the upload and download behavior of a cache, use the `cache:policy` keyword.
By default, the job downloads the cache when the job starts, and uploads changes
to the cache when the job ends. This caching style is the pull-push policy (default).
To set a job to only download the cache when the job starts, but never upload changes
when the job finishes, use cache:policy:pull.
To set a job to only upload a cache when the job finishes, but never download the
cache when the job starts, use cache:policy:push.
Use the pull policy when you have many jobs executing in parallel that use the same cache.
This policy speeds up job execution and reduces load on the cache server. You can
use a job with the push policy to build the cache.
Must be used with cache: paths, or nothing is cached.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- `pull`
- `push`
- `pull-push` (default)

Example of `cache:policy`:
prepare-dependencies-job:
stage: build
cache:
key: gems
paths:
- vendor/bundle
policy: push
script:
- echo "This job only downloads dependencies and builds the cache."
- echo "Downloading dependencies..."
faster-test-job:
stage: test
cache:
key: gems
paths:
- vendor/bundle
policy: pull
script:
- echo "This job script uses the cache, but does not update it."
- echo "Running tests..."
Related topics:
#### cache:fallback_keys

Use `cache:fallback_keys` to specify a list of keys to try to restore cache from
if there is no cache found for the cache:key. Caches are retrieved in the order specified
in the fallback_keys section.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- An array of cache keys.

Example of `cache:fallback_keys`:
rspec:
script: rspec
cache:
key: gems-$CI_COMMIT_REF_SLUG
paths:
- rspec/
fallback_keys:
- gems
when: 'always'
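Beyond per-job `fallback_keys`, a last-resort fallback can be set with the `CACHE_FALLBACK_KEY` CI/CD variable, which GitLab Runner tries when no other key matches. A sketch, assuming that runner variable:

```yaml
variables:
  # Tried after cache:key and any fallback_keys fail to match.
  CACHE_FALLBACK_KEY: global-fallback

rspec:
  script: rspec
  cache:
    key: gems-$CI_COMMIT_REF_SLUG
    paths:
      - rspec/
```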
### coverage

Use `coverage` with a custom regular expression to configure how code coverage
is extracted from the job output. The coverage is shown in the UI if at least one
line in the job output matches the regular expression.
To extract the code coverage value from the match, GitLab uses
this smaller regular expression: \d+(?:\.\d+)?.
Supported values:

- An RE2 regular expression. Must start and end with `/`. Must match the coverage number.
  May match surrounding text as well, so you don't need to use a regular expression character group
  to capture the exact number.
  Because it uses RE2 syntax, all groups must be non-capturing.

Example of `coverage`:
job1:
script: rspec
coverage: '/Code coverage: \d+(?:\.\d+)?/'
In this example:
- A line like `Code coverage: 67.89% of lines covered` would match the regular expression.
- GitLab then extracts the number with `\d+(?:\.\d+)?`.
  The sample regular expression matches a code coverage of `67.89`.

Additional details:
### dast_configuration

{{< details >}}
{{< /details >}}
Use the dast_configuration keyword to specify a site profile and scanner profile to be used in a
CI/CD configuration. Both profiles must first have been created in the project. The job's stage must
be dast.
Keyword type: Job keyword. You can use it only as part of a job.

Supported values: One each of `site_profile` and `scanner_profile`.

- Use `site_profile` to specify the site profile to be used in the job.
- Use `scanner_profile` to specify the scanner profile to be used in the job.

Example of `dast_configuration`:
stages:
- build
- dast
include:
- template: DAST.gitlab-ci.yml
dast:
dast_configuration:
site_profile: "Example Co"
scanner_profile: "Quick Passive Test"
In this example, the dast job extends the dast configuration added with the include keyword
to select a specific site profile and scanner profile.
Additional details:
Related topics:
### dependencies

Use the `dependencies` keyword to define a list of specific jobs to fetch artifacts
from. The specified jobs must all be in earlier stages. You can also set a job to download no artifacts at all.
When dependencies is not defined in a job, all jobs in earlier stages are considered dependent
and the job fetches all artifacts from those jobs.
To fetch artifacts from a job in the same stage, you must use needs:artifacts.
You should not combine dependencies with needs in the same job.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:

- The names of jobs to fetch artifacts from.
- An empty array (`[]`), to configure the job to not download any artifacts.

Example of `dependencies`:
build osx:
stage: build
script: make build:osx
artifacts:
paths:
- binaries/
build linux:
stage: build
script: make build:linux
artifacts:
paths:
- binaries/
test osx:
stage: test
script: make test:osx
dependencies:
- build osx
test linux:
stage: test
script: make test:linux
dependencies:
- build linux
deploy:
stage: deploy
script: make deploy
environment: production
In this example, two jobs have artifacts: build osx and build linux. When test osx is executed,
the artifacts from build osx are downloaded and extracted in the context of the build.
The same thing happens for test linux and artifacts from build linux.
The deploy job downloads artifacts from all previous jobs because of
the stage precedence.
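To skip artifact downloads entirely, pass an empty array. For example, a deploy job like the one above that needs no artifacts:

```yaml
deploy:
  stage: deploy
  script: make deploy
  dependencies: []  # Download no artifacts from earlier stages
  environment: production
```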
Additional details:
### environment

Use `environment` to define the environment that a job deploys to.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: The name of the environment the job deploys to, in one of these formats:
- Plain text, including letters, digits, spaces, and these characters: `-`, `_`, `/`, `$`, `{`, `}`.
- CI/CD variables, including variables defined in the `.gitlab-ci.yml` file. You can't use variables defined in a `script` section.

Example of `environment`:
deploy to production:
stage: deploy
script: git push production HEAD:main
environment: production
Additional details:
- If you specify an `environment` and no environment with that name exists, an environment is
  created.

#### environment:name

Set a name for an environment.
Common environment names are qa, staging, and production, but you can use any name.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: The name of the environment the job deploys to, in one of these formats:
- Plain text, including letters, digits, spaces, and these characters: `-`, `_`, `/`, `$`, `{`, `}`.
- CI/CD variables, including variables defined in the `.gitlab-ci.yml` file. You can't use variables defined in a `script` section.

Example of `environment:name`:
deploy to production:
stage: deploy
script: git push production HEAD:main
environment:
name: production
#### environment:url

Set a URL for an environment.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: A single URL, in one of these formats:
- Plain text, like `https://prod.example.com`.
- CI/CD variables, including variables defined in the `.gitlab-ci.yml` file. You can't use variables defined in a `script` section.

Example of `environment:url`:
deploy to production:
stage: deploy
script: git push production HEAD:main
environment:
name: production
url: https://prod.example.com
Additional details:
#### environment:on_stop

Closing (stopping) environments can be achieved with the `on_stop` keyword
defined under environment. It declares a different job that runs to close the
environment.
Keyword type: Job keyword. You can use it only as part of a job.
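For example, a deploy job can declare a stop job with `on_stop`, and the stop job uses `action: stop` for the same environment (job names are illustrative):

```yaml
review_app:
  stage: deploy
  script: make deploy-app
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop_review_app  # Job that closes this environment

stop_review_app:
  stage: deploy
  script: make delete-app
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
```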
Additional details:
- See `environment:action` for more details and an example.

#### environment:action

Use the `action` keyword to specify how the job interacts with the environment.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: One of the following keywords:
| Value | Description |
|---|---|
| `start` | Default value. Indicates that the job starts the environment. The deployment is created after the job starts. |
| `prepare` | Indicates that the job is only preparing the environment. It does not trigger deployments. Read more about preparing environments. |
| `stop` | Indicates that the job stops an environment. Read more about stopping an environment. |
| `verify` | Indicates that the job is only verifying the environment. It does not trigger deployments. Read more about verifying environments. |
| `access` | Indicates that the job is only accessing the environment. It does not trigger deployments. Read more about accessing environments. |
Example of environment:action:
stop_review_app:
stage: deploy
variables:
GIT_STRATEGY: none
script: make delete-app
when: manual
environment:
name: review/$CI_COMMIT_REF_SLUG
action: stop
#### environment:auto_stop_in

{{< history >}}

- Support for the `prepare`, `access`, and `verify` environment actions introduced in GitLab 17.7.

{{< /history >}}
The auto_stop_in keyword specifies the lifetime of the environment. When an environment expires, GitLab
automatically stops it.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: A period of time written in natural language. For example, these are all equivalent:
- `168 hours`
- `7 days`
- `one week`
- `never`

CI/CD variables are supported.
Example of environment:auto_stop_in:
review_app:
script: deploy-review-app
environment:
name: review/$CI_COMMIT_REF_SLUG
auto_stop_in: 1 day
When the environment for review_app is created, the environment's lifetime is set to 1 day.
Every time the review app is deployed, that lifetime is also reset to 1 day.
The auto_stop_in keyword can be used for all environment actions except stop.
Some actions can be used to reset the scheduled stop time for the environment. For more information, see
Access an environment for preparation or verification purposes.
Related topics:
#### environment:kubernetes

{{< history >}}

- `agent` keyword introduced in GitLab 17.6.
- `namespace` and `flux_resource_path` keywords introduced in GitLab 17.7.
- `namespace` and `flux_resource_path` keywords deprecated in GitLab 18.4.
- `dashboard:namespace` and `dashboard:flux_resource_path` keywords introduced in GitLab 18.4.

{{< /history >}}
Use the kubernetes keyword to configure the dashboard for Kubernetes
and GitLab-managed Kubernetes resources for an environment.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- `agent`: A string specifying the GitLab agent for Kubernetes. The format is `path/to/agent/project:agent-name`. If the agent is connected to the project running the pipeline, use `$CI_PROJECT_PATH:agent-name`.
- `dashboard:namespace`: A string representing the Kubernetes namespace where the environment is deployed. The namespace must be set together with the `agent` keyword. `namespace` is deprecated.
- `dashboard:flux_resource_path`: A string representing the full path to the Flux resource, such as a HelmRelease. The Flux resource must be set together with the
  `agent` and `dashboard:namespace` keywords. `flux_resource_path` is deprecated.
- `managed_resources`: A hash with the `enabled` keyword to configure the
  GitLab-managed Kubernetes resources for the environment.
  - `managed_resources:enabled`: A boolean value indicating whether GitLab-managed Kubernetes resources are enabled for the environment.
- `dashboard`: A hash with the `dashboard:namespace` and `dashboard:flux_resource_path` keywords to configure the
  dashboard for Kubernetes for the environment.

Example of `environment:kubernetes`:
deploy:
stage: deploy
script: make deploy-app
environment:
name: production
kubernetes:
agent: path/to/agent/project:agent-name
dashboard:
namespace: my-namespace
flux_resource_path: helm.toolkit.fluxcd.io/v2/namespaces/flux-system/helmreleases/helm-release-resource
Example of environment:kubernetes when disabling managed resources:
deploy:
stage: deploy
script: make deploy-app
environment:
name: production
kubernetes:
agent: path/to/agent/project:agent-name
managed_resources:
enabled: false
dashboard:
namespace: my-namespace
flux_resource_path: helm.toolkit.fluxcd.io/v2/namespaces/flux-system/helmreleases/helm-release-resource
This configuration:

- Sets up the deploy job to deploy to the production environment.
- Associates the agent named agent-name with the environment.
- Configures the dashboard for Kubernetes with the namespace my-namespace and the flux_resource_path set to helm.toolkit.fluxcd.io/v2/namespaces/flux-system/helmreleases/helm-release-resource.

Additional details:
- To use the dashboard for Kubernetes, you must have user_access for the environment's project or its parent group.
- CI/CD variables are supported for the agent, namespace, and flux_resource_path attributes.
- If you set agent, you do not have to set the namespace, and cannot set flux_resource_path. However, this configuration lists all namespaces in a cluster in the dashboard for Kubernetes.

#### environment:deployment_tier

{{< history >}}
{{< /history >}}
Use the deployment_tier keyword to specify the tier of the deployment environment.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: One of the following:
- production
- staging
- testing
- development
- other

CI/CD variables are supported, including variables defined in the .gitlab-ci.yml file. You can't use variables defined in a script section.

Example of environment:deployment_tier:
deploy:
script: echo
environment:
name: customer-portal
deployment_tier: production
Additional details:
Related topics:
#### Dynamic environments

Use CI/CD variables to dynamically name environments.
For example:
deploy as review app:
stage: deploy
script: make deploy
environment:
name: review/$CI_COMMIT_REF_SLUG
url: https://$CI_ENVIRONMENT_SLUG.example.com/
The deploy as review app job is marked as a deployment to dynamically
create the review/$CI_COMMIT_REF_SLUG environment. $CI_COMMIT_REF_SLUG
is a CI/CD variable set by the runner. The
$CI_ENVIRONMENT_SLUG variable is based on the environment name, but suitable
for inclusion in URLs. If the deploy as review app job runs in a branch named
pow, this environment would be accessible with a URL like https://review-pow.example.com/.
The common use case is to create dynamic environments for branches and use them as review apps. You can see an example that uses review apps at https://gitlab.com/gitlab-examples/review-apps-nginx/.
### extends

Use extends to reuse configuration sections. It's an alternative to YAML anchors
and is a little more flexible and readable.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
Example of extends:
.tests:
stage: test
image: ruby:3.0
rspec:
extends: .tests
script: rake rspec
rubocop:
extends: .tests
script: bundle exec rubocop
In this example, the rspec job uses the configuration from the .tests template job.
When creating the pipeline, GitLab:
- Merges the .tests content with the rspec job.

The combined configuration is equivalent to these jobs:
rspec:
stage: test
image: ruby:3.0
script: rake rspec
rubocop:
stage: test
image: ruby:3.0
script: bundle exec rubocop
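For comparison, the same reuse can be expressed with YAML anchors, the mechanism that extends is an alternative to. This anchor-based version is an illustrative sketch, not part of the official example:

```yaml
.tests: &tests        # anchor holding the shared keys
  stage: test
  image: ruby:3.0

rspec:
  <<: *tests          # merge the anchored keys into this job
  script: rake rspec

rubocop:
  <<: *tests
  script: bundle exec rubocop
```

Unlike extends, anchors only work within a single file, which is one reason extends is the more flexible option.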
Additional details:
- You can use multiple parents for extends.
- The extends keyword supports up to eleven levels of inheritance, but you should avoid using more than three levels.
- In the example above, .tests is a hidden job, but you can extend configuration from regular jobs as well.

Related topics:

- Reuse configuration sections by using extends.
- Use extends to reuse configuration from included configuration files.

### hooks

{{< history >}}
- Introduced with a flag named ci_hooks_pre_get_sources_script. Disabled by default.
- Feature flag ci_hooks_pre_get_sources_script removed.

{{< /history >}}
Use hooks to specify lists of commands to execute on the runner
at certain stages of job execution, like before retrieving the Git repository.
Job configuration and default configuration do not merge together.
If the pipeline has default:hooks defined, and the job also has hooks,
the job configuration takes precedence and the default configuration is not used.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- pre_get_sources_script.

#### hooks:pre_get_sources_script

{{< history >}}

- Introduced with a flag named ci_hooks_pre_get_sources_script. Disabled by default.
- Feature flag ci_hooks_pre_get_sources_script removed.

{{< /history >}}
Use hooks:pre_get_sources_script to specify a list of commands to execute on the runner
before cloning the Git repository and any submodules.
You can use it for example to:
Supported values: An array including:
CI/CD variables are supported.
Example of hooks:pre_get_sources_script:
job1:
hooks:
pre_get_sources_script:
- echo 'hello job1 pre_get_sources_script'
script: echo 'hello job1 script'
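As a slightly more practical sketch (the Git settings shown are illustrative assumptions, not requirements), the hook can adjust Git configuration before the runner fetches the repository:

```yaml
job-with-git-config:
  hooks:
    pre_get_sources_script:
      # These commands run before the repository and submodules are cloned.
      - git config --global http.postBuffer 524288000
      - echo "Git configured before clone"
  script: echo "repository is available here"
```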
Related topics:
### identity

{{< details >}}
{{< /details >}}
{{< history >}}
- Introduced with a flag named google_cloud_support_feature_flag. This feature is in beta.
- Feature flag google_cloud_support_feature_flag removed.

{{< /history >}}
This feature is in beta.
Use identity to authenticate with third party services using identity federation.
Keyword type: Job keyword. You can use it only as part of a job or in the default: section.
Supported values: An identifier. Supported providers:
- google_cloud: Google Cloud. Must be configured with the Google Cloud IAM integration.

Example of identity:
job_with_workload_identity:
identity: google_cloud
script:
- gcloud compute instances list
Related topics:
### id_tokens

{{< history >}}
{{< /history >}}
Use id_tokens to create ID tokens to authenticate with third party services. All
JWTs created this way support OIDC authentication. The required aud sub-keyword is used to configure the aud claim for the JWT.
Job configuration and default configuration do not merge together.
If the pipeline has default:id_tokens defined, and the job also has id_tokens,
the job configuration takes precedence and the default configuration is not used.
Supported values:
- Token names with their aud claims. aud supports:
Example of id_tokens:
job_with_id_tokens:
id_tokens:
ID_TOKEN_1:
aud: https://vault.example.com
ID_TOKEN_2:
aud:
- https://gcp.com
- https://aws.com
SIGSTORE_ID_TOKEN:
aud: sigstore
script:
- command_to_authenticate_with_vault $ID_TOKEN_1
- command_to_authenticate_with_aws $ID_TOKEN_2
- command_to_authenticate_with_gcp $ID_TOKEN_2
Related topics:
### image

Use image to specify a Docker image that the job runs in.
Job configuration and default configuration do not merge together.
If the pipeline has default:image defined, and the job also has image,
the job configuration takes precedence and the default configuration is not used.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values: The name of the image, including the registry path if needed, in one of these formats:
- <image-name> (Same as using <image-name> with the latest tag)
- <image-name>:<tag>
- <image-name>@<digest>

CI/CD variables are supported.
Example of image:
default:
image: ruby:3.0
rspec:
script: bundle exec rspec
rspec 2.7:
image: registry.example.com/my-group/my-project/ruby:2.7
script: bundle exec rspec
In this example, the ruby:3.0 image is the default for all jobs in the pipeline.
The rspec 2.7 job does not use the default, because it overrides the default with
a job-specific image section.
Additional details:
- Using image at the top level, but not in the default section, is deprecated.

Related topics:

#### image:name

The name of the Docker image that the job runs in. Similar to image used by itself.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values: The name of the image, including the registry path if needed, in one of these formats:
- <image-name> (Same as using <image-name> with the latest tag)
- <image-name>:<tag>
- <image-name>@<digest>

CI/CD variables are supported.
Example of image:name:
test-job:
image:
name: "registry.example.com/my/image:latest"
script: echo "Hello world"
Related topics:
#### image:entrypoint

Command or script to execute as the container's entry point.
When the Docker container is created, the entrypoint is translated to the Docker --entrypoint option.
The syntax is similar to the Dockerfile ENTRYPOINT directive,
where each shell token is a separate string in the array.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
Example of image:entrypoint:
test-job:
image:
name: super/sql:experimental
entrypoint: [""]
script: echo "Hello world"
Related topics:
#### image:docker

{{< history >}}

- user input option introduced in GitLab 16.8.

{{< /history >}}
Use image:docker to pass options to runners using the Docker executor
or the Kubernetes executor.
This keyword does not work with other executor types.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
A hash of options for the Docker executor, which can include:
- platform: Selects the architecture of the image to pull. When not specified, the default is the same platform as the host runner.
- user: Specify the username or UID to use when running the container.

Example of image:docker:
arm-sql-job:
script: echo "Run sql tests"
image:
name: super/sql:experimental
docker:
platform: arm64/v8
user: dave
Additional details:
- image:docker:platform maps to the docker pull --platform option.
- image:docker:user maps to the docker run --user option.

#### image:kubernetes

{{< history >}}

- user input option introduced in GitLab Runner 17.11.
- user input option extended to support uid:gid format in GitLab 18.0.

{{< /history >}}
Use image:kubernetes to pass options to the GitLab Runner Kubernetes executor.
This keyword does not work with other executor types.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
A hash of options for the Kubernetes executor, which can include:
- user: Specify the username or UID to use when the container runs. You can also use it to set the GID by using the UID:GID format.

Example of image:kubernetes with only UID:
arm-sql-job:
script: echo "Run sql tests"
image:
name: super/sql:experimental
kubernetes:
user: "1001"
Example of image:kubernetes with both UID and GID:
arm-sql-job:
script: echo "Run sql tests"
image:
name: super/sql:experimental
kubernetes:
user: "1001:1001"
#### image:pull_policy

{{< history >}}

- Introduced with a flag named ci_docker_image_pull_policy. Disabled by default.
- Feature flag ci_docker_image_pull_policy removed.

{{< /history >}}
The pull policy that the runner uses to fetch the Docker image.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Supported values:
- One pull policy, or multiple pull policies in an array: always, if-not-present, or never.

Examples of image:pull_policy:
job1:
script: echo "A single pull policy."
image:
name: ruby:3.0
pull_policy: if-not-present
job2:
script: echo "Multiple pull policies."
image:
name: ruby:3.0
pull_policy: [always, if-not-present]
Additional details:
- If the configured pull policy is not in the runner's allowed pull policies, the job fails with an error similar to: ERROR: Job failed (system failure): the configured PullPolicies ([always]) are not allowed by AllowedPullPolicies ([never]).

Related topics:
### inputs

{{< history >}}
{{< /history >}}
Use inputs to define typed and validated inputs for a job. Job inputs
can be overridden when manually running or retrying a job.
Job inputs are parameters that provide type safety and validation. Unlike CI/CD variables, only inputs explicitly defined in the job can be specified when running or retrying the job. All job input names must be predefined.
Reference job input values with the ${{ job.inputs.INPUT_NAME }} expression syntax.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
A hash of input names, where each input is configured with one or more subkeys:
- default (required)
- type
- options
- description
- regex

Example of inputs:
test_job:
inputs:
test_suite:
default: unit
description: Which test suite to run
options: [unit, integration, e2e]
parallel_count:
type: number
default: 5
description: Number of parallel test runners
verbose:
type: boolean
default: false
description: Enable verbose test output
script:
- 'echo "Running ${{ job.inputs.test_suite }} tests"'
- 'if [ "${{ job.inputs.verbose }}" == "true" ]; then export TEST_VERBOSE=1; fi'
- ./run_tests.sh --suite ${{ job.inputs.test_suite }} --parallel ${{ job.inputs.parallel_count }}
Additional details:
#### inputs:default

All job inputs must have a default value defined with default.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: Any value matching the input's type.
Example of inputs:default:
test_job:
inputs:
environment:
default: staging
timeout:
type: number
default: 30
#### inputs:type

Use type to define the data type of the input value.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- string (default)
- number
- boolean
- array

Example of inputs:type:
test_job:
inputs:
count:
type: number
default: 5
enabled:
type: boolean
default: true
#### inputs:description

Use description to provide information about the input's purpose.
The description does not affect the input's behavior.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: A string.
Example of inputs:description:
deploy_job:
inputs:
environment:
default: staging
description: Target deployment environment
#### inputs:options

Use options to specify a list of allowed values for an input.
The input value must match one of the listed options exactly (case-sensitive). Validation fails if the value does not match an option.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: An array of allowed values.
Example of inputs:options:
deploy_job:
inputs:
environment:
default: staging
options: [development, staging, production]
#### inputs:regex

Use regex to specify a regular expression pattern that the input value must match.
Validation fails if the value does not match the regular expression.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: A regular expression string.
Example of inputs:regex:
deploy_job:
inputs:
version:
default: v1.0.0
regex: ^v\d+\.\d+\.\d+$
In this example, an input value of v1.1.1 passes the regex validation, but an input of
v1.1.1-beta does not.
### inherit

Use inherit to control inheritance of default keywords and variables.

#### inherit:default

Use inherit:default to control the inheritance of default keywords.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- true (default) or false to enable or disable the inheritance of all default keywords.
- A list of specific default keywords to inherit.

Example of inherit:default:
default:
retry: 2
image: ruby:3.0
interruptible: true
job1:
script: echo "This job does not inherit any default keywords."
inherit:
default: false
job2:
script: echo "This job inherits only the two listed default keywords. It does not inherit 'interruptible'."
inherit:
default:
- retry
- image
Additional details:
- You can also list the default keywords to inherit on one line: default: [keyword1, keyword2]

#### inherit:variables

Use inherit:variables to control the inheritance of default variables keywords.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- true (default) or false to enable or disable the inheritance of all default variables.
- A list of specific variables to inherit.

Example of inherit:variables:
variables:
VARIABLE1: "This is default variable 1"
VARIABLE2: "This is default variable 2"
VARIABLE3: "This is default variable 3"
job1:
script: echo "This job does not inherit any default variables."
inherit:
variables: false
job2:
script: echo "This job inherits only the two listed default variables. It does not inherit 'VARIABLE3'."
inherit:
variables:
- VARIABLE1
- VARIABLE2
Additional details:
- You can also list the variables to inherit on one line: variables: [VARIABLE1, VARIABLE2]

### interruptible

{{< history >}}
- Support for trigger jobs introduced in GitLab 16.8.

{{< /history >}}
Use interruptible to configure the auto-cancel redundant pipelines
feature to cancel a job before it completes if a new pipeline on the same ref starts for a newer commit. If the feature
is disabled, the keyword has no effect. The new pipeline must be for a commit with new changes. For example,
the Auto-cancel redundant pipelines feature has no effect
if you select New pipeline in the UI to run a pipeline for the same commit.
The behavior of the Auto-cancel redundant pipelines feature can be controlled by
the workflow:auto_cancel:on_new_commit setting.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- true or false (default).

Example of interruptible with the default behavior:
workflow:
auto_cancel:
on_new_commit: conservative # the default behavior
stages:
- stage1
- stage2
- stage3
step-1:
stage: stage1
script:
- echo "Can be canceled."
interruptible: true
step-2:
stage: stage2
script:
- echo "Can not be canceled."
step-3:
stage: stage3
script:
- echo "Because step-2 can not be canceled, this step can never be canceled, even though it's set as interruptible."
interruptible: true
In this example, a new pipeline causes a running pipeline to be:
- Canceled, if only step-1 is running or pending.
- Not canceled, after step-2 starts.

Example of interruptible with the auto_cancel:on_new_commit:interruptible setting:
workflow:
auto_cancel:
on_new_commit: interruptible
stages:
- stage1
- stage2
- stage3
step-1:
stage: stage1
script:
- echo "Can be canceled."
interruptible: true
step-2:
stage: stage2
script:
- echo "Can not be canceled."
step-3:
stage: stage3
script:
- echo "Can be canceled."
interruptible: true
In this example, a new pipeline causes a running pipeline to cancel step-1 and step-3 if they are running or pending.
Additional details:
- Only set interruptible: true if the job can be safely canceled after it has started, like a build job. Deployment jobs usually shouldn't be canceled, to prevent partial deployments.
- With workflow:auto_cancel:on_new_commit: conservative:
  - Jobs that have not yet started are always considered interruptible: true, regardless of the job's configuration. The interruptible configuration is only considered after the job starts.
  - Running jobs are only canceled if all running jobs are configured with interruptible: true or no jobs configured with interruptible: false have started at any time. After a job with interruptible: false starts, the entire pipeline is no longer considered interruptible.
  - If the pipeline triggered a downstream pipeline, but no job with interruptible: false in the downstream pipeline has started yet, the downstream pipeline is also canceled.
- You can add an optional manual job with interruptible: false in the first stage of a pipeline to allow users to manually prevent a pipeline from being automatically canceled. After a user starts the job, the pipeline cannot be canceled by the Auto-cancel redundant pipelines feature.
- When using interruptible with a trigger job:
  - If workflow:auto_cancel is set to conservative, the trigger job's interruptible configuration has no effect.
  - If workflow:auto_cancel is set to interruptible, a trigger job with interruptible: true can be automatically canceled.

### needs

Use needs to execute jobs out-of-order. Relationships between jobs
that use needs can be visualized as a directed acyclic graph.
You can ignore stage ordering and run some jobs without waiting for others to complete. Jobs in multiple stages can run concurrently.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- An array of jobs.
- An empty array ([]), to set the job to start as soon as the pipeline is created.

Example of needs:
linux:build:
stage: build
script: echo "Building linux..."
mac:build:
stage: build
script: echo "Building mac..."
lint:
stage: test
needs: []
script: echo "Linting..."
linux:rspec:
stage: test
needs: ["linux:build"]
script: echo "Running rspec on linux..."
mac:rspec:
stage: test
needs: ["mac:build"]
script: echo "Running rspec on mac..."
production:
stage: deploy
script: echo "Running production..."
environment: production
This example creates four paths of execution:
- The lint job runs immediately without waiting for the build stage to complete because it has no needs (needs: []).
- The linux:rspec job runs as soon as the linux:build job finishes, without waiting for mac:build to finish.
- The mac:rspec job runs as soon as the mac:build job finishes, without waiting for linux:build to finish.
- The production job runs as soon as all previous jobs finish: lint, linux:build, linux:rspec, mac:build, and mac:rspec.

Additional details:
- The maximum number of jobs in the needs array is limited.
- If needs refers to a job that uses the parallel keyword, it depends on all jobs created in parallel, not just one job. It also downloads artifacts from all the parallel jobs by default. If the artifacts have the same name, they overwrite each other and only the last one downloaded is saved.
  - To have needs refer to a subset of parallelized jobs (and not all of the parallelized jobs), use the needs:parallel:matrix keyword.
- If needs refers to a job that might not be added to a pipeline because of only, except, or rules, the pipeline might fail to create. Use the needs:optional keyword to resolve a failed pipeline creation.
- If a pipeline has jobs with needs: [] and jobs in the .pre stage, they will all start as soon as the pipeline is created. Jobs with needs: [] start immediately, and jobs in the .pre stage also start immediately.

#### needs:artifacts

When a job uses needs, it no longer downloads all artifacts from previous stages
by default, because jobs with needs can start before earlier stages complete. With
needs you can only download artifacts from the jobs listed in the needs configuration.
Use artifacts: true (default) or artifacts: false to control when artifacts are
downloaded in jobs that use needs.
Keyword type: Job keyword. You can use it only as part of a job. Must be used with needs:job.
Supported values:
- true (default) or false.

Example of needs:artifacts:
test-job1:
stage: test
needs:
- job: build_job1
artifacts: true
test-job2:
stage: test
needs:
- job: build_job2
artifacts: false
test-job3:
needs:
- job: build_job1
artifacts: true
- job: build_job2
- build_job3
In this example:
- The test-job1 job downloads the build_job1 artifacts.
- The test-job2 job does not download the build_job2 artifacts.
- The test-job3 job downloads the artifacts from all three build_jobs, because artifacts is true, or defaults to true, for all three needed jobs.

Additional details:

- Do not use needs with dependencies in the same job.

#### needs:project

{{< details >}}
{{< /details >}}
Use needs:project to download artifacts from up to five jobs in other pipelines.
The artifacts are downloaded from the latest successful specified job for the specified ref.
To specify multiple jobs, add each as separate array items under the needs keyword.
If there is a pipeline running for the ref, a job with needs:project
does not wait for the pipeline to complete. Instead, the artifacts are downloaded
from the latest successful run of the specified job.
needs:project must be used with job, ref, and artifacts.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- needs:project: A full project path, including namespace and group.
- job: The job to download artifacts from.
- ref: The ref to download artifacts from.
- artifacts: Must be true to download artifacts.

Examples of needs:project:
build_job:
stage: build
script:
- ls -lhR
needs:
- project: namespace/group/project-name
job: build-1
ref: main
artifacts: true
- project: namespace/group/project-name-2
job: build-2
ref: main
artifacts: true
In this example, build_job downloads the artifacts from the latest successful build-1 and build-2 jobs
on the main branches in the group/project-name and group/project-name-2 projects.
You can use CI/CD variables in needs:project, for example:
build_job:
stage: build
script:
- ls -lhR
needs:
- project: $CI_PROJECT_PATH
job: $DEPENDENCY_JOB_NAME
ref: $ARTIFACTS_DOWNLOAD_REF
artifacts: true
Additional details:
- To download artifacts from a different pipeline in the current project, set project to be the same as the current project, but use a different ref than the current pipeline. Concurrent pipelines running on the same ref could override the artifacts.
- You can't use needs:project in the same job as trigger.
- When you use needs:project to download artifacts from another pipeline, the job does not wait for the needed job to complete. Using needs to wait for jobs to complete is limited to jobs in the same pipeline. Make sure that the needed job in the other pipeline completes before the job that needs it tries to download the artifacts.
- Downloading artifacts from jobs that run in parallel is not supported.
- CI/CD variables are supported for project, job, and ref.

Related topics:

- To download artifacts between parent-child pipelines, use needs:pipeline:job.

#### needs:pipeline:job

A child pipeline can download artifacts from a successfully finished job in its parent pipeline or another child pipeline in the same parent-child pipeline hierarchy.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- needs:pipeline: A pipeline ID. Must be a pipeline present in the same parent-child pipeline hierarchy.
- job: The job to download artifacts from.

Example of needs:pipeline:job:
Parent pipeline (.gitlab-ci.yml):
stages:
- build
- test
create-artifact:
stage: build
script: echo "sample artifact" > artifact.txt
artifacts:
paths: [artifact.txt]
child-pipeline:
stage: test
trigger:
include: child.yml
strategy: mirror
variables:
PARENT_PIPELINE_ID: $CI_PIPELINE_ID
Child pipeline (child.yml):
use-artifact:
script: cat artifact.txt
needs:
- pipeline: $PARENT_PIPELINE_ID
job: create-artifact
In this example, the create-artifact job in the parent pipeline creates some artifacts.
The child-pipeline job triggers a child pipeline, and passes the CI_PIPELINE_ID
variable to the child pipeline as a new PARENT_PIPELINE_ID variable. The child pipeline
can use that variable in needs:pipeline to download artifacts from the parent pipeline.
Having the create-artifact and child-pipeline jobs in subsequent stages ensures that
the use-artifact job only executes when create-artifact has successfully finished.
Additional details:
- The pipeline attribute does not accept the current pipeline ID ($CI_PIPELINE_ID). To download artifacts from a job in the current pipeline, use needs:artifacts.
- You can't use needs:pipeline:job in a trigger job, or to fetch artifacts from a multi-project pipeline. To fetch artifacts from a multi-project pipeline, use needs:project.
- The job listed in needs:pipeline:job must complete with a status of success or the artifacts can't be fetched. Issue 367229 proposes to allow fetching artifacts from any job with artifacts.

#### needs:optional

To need a job that sometimes does not exist in the pipeline, add optional: true
to the needs configuration. If not defined, optional: false is the default.
Jobs that use rules, only, or except and that are added with include
might not always be added to a pipeline. GitLab checks the needs relationships before starting a pipeline:
- If a needs entry has optional: true and the needed job is present in the pipeline, the job waits for it to complete before starting.
- If the needs section contains only optional jobs, and none are added to the pipeline, the job starts immediately (the same as an empty needs entry: needs: []).
- If a needed job has optional: false, but it was not added to the pipeline, the pipeline fails to start with an error similar to: 'job1' job needs 'job2' job, but it was not added to the pipeline.

Keyword type: Job keyword. You can use it only as part of a job.
Example of needs:optional:
build-job:
stage: build
test-job1:
stage: test
test-job2:
stage: test
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
deploy-job:
stage: deploy
needs:
- job: test-job2
optional: true
- job: test-job1
environment: production
review-job:
stage: deploy
needs:
- job: test-job2
optional: true
environment: review
In this example:
- build-job, test-job1, and test-job2 start in stage order.
- When test-job2 is added to the pipeline, so:
  - deploy-job waits for both test-job1 and test-job2 to complete.
  - review-job waits for test-job2 to complete.
- When test-job2 is not added to the pipeline, so:
  - deploy-job waits for only test-job1 to complete, and does not wait for the missing test-job2.
  - review-job has no other needed jobs and starts immediately (at the same time as build-job), like needs: [].

Additional details:

- You can't use needs:optional with needs:parallel:matrix.

#### needs:pipeline

You can mirror the pipeline status from an upstream pipeline to a job by
using the needs:pipeline keyword. The latest pipeline status from the default branch is
replicated to the job.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- A full project path, including namespace and group. If the project is in the same group or namespace, you can omit them from the project keyword. For example: project: group/project-name or project: project-name.

Example of needs:pipeline:
upstream_status:
stage: test
needs:
pipeline: other/project
Additional details:
- If you add the job keyword to needs:pipeline, the job no longer mirrors the pipeline status. The behavior changes to needs:pipeline:job.

#### needs:parallel:matrix

{{< history >}}
{{< /history >}}
Jobs can use parallel:matrix to run a job multiple times in parallel in a single pipeline,
but with different variable values for each instance of the job.
Use needs:parallel:matrix to execute jobs out-of-order depending on parallelized jobs.
Keyword type: Job keyword. You can use it only as part of a job. Must be used with needs:job.
Supported values: An array of hashes of matrix identifiers:
- The variables and values must match those defined in the parallel:matrix job.

Example of needs:parallel:matrix:
linux:build:
stage: build
script: echo "Building linux..."
parallel:
matrix:
- PROVIDER: aws
STACK:
- monitoring
- app1
- app2
linux:rspec:
stage: test
needs:
- job: linux:build
parallel:
matrix:
- PROVIDER: aws
STACK: app1
script: echo "Running rspec on linux..."
The previous example generates the following jobs:
linux:build: [aws, monitoring]
linux:build: [aws, app1]
linux:build: [aws, app2]
linux:rspec
The linux:rspec job runs as soon as the linux:build: [aws, app1] job finishes.
Additional details:
- You cannot use needs:parallel:matrix with needs:optional.
- The order of the matrix identifiers in needs:parallel:matrix must match the order of the matrix variables in the needed job. For example, reversing the order of the variables in the linux:rspec job in the previous example would be invalid:
linux:rspec:
stage: test
needs:
- job: linux:build
parallel:
matrix:
- STACK: app1 # The variable order does not match `linux:build` and is invalid.
PROVIDER: aws
script: echo "Running rspec on linux..."
Related topics:
- needs:parallel:matrix.

### pages

Use pages to define a GitLab Pages job that
uploads static content to GitLab. The content is then published as a website.
You must:
- Set pages: true to publish a directory named public.
- Set pages.publish if you want to use a different content directory.
- Have an index.html file in the root of the content directory.

Keyword type: Job keyword or Job name (deprecated). You can use it only as part of a job.
Supported values:

- true

Example of pages:
create-pages:
stage: deploy
script:
- mv my-html-content public
pages: true # specifies that this is a Pages job and publishes the default public directory
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
environment: production
This example renames the my-html-content/ directory to public/.
This directory is exported as an artifact and published with GitLab Pages.
Example using a configuration hash:
create-pages:
stage: deploy
script:
- echo "nothing to do here"
pages: # specifies that this is a Pages job and publishes the default public directory
publish: my-html-content
expire_in: "1 week"
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
environment: production
This example does not move the directory, but uses the publish property directly.
It also configures the pages deployment to be unpublished after a week.
Additional details:
- Using pages as a job name is deprecated.
- To use pages as a job name without triggering a Pages deployment, set the pages property to false.

#### pages.publish

{{< history >}}

- Deprecated the top-level publish property in GitLab 17.9.
- Added the publish property under the pages keyword in GitLab 17.9.
- Appending the pages.publish path automatically to artifacts:paths introduced in GitLab 17.10.

{{< /history >}}
Use pages.publish to configure the content directory of a pages job.
Keyword type: Job keyword. You can use it only as part of a pages job.
Supported values: A path to a directory containing the Pages content.
In GitLab 17.10 and later, if not specified, the default public directory is used.
If specified, this path is automatically appended to artifacts:paths.
Example of pages.publish:
```yaml
create-pages:
  stage: deploy
  script:
    - npx @11ty/eleventy --input=path/to/eleventy/root --output=dist
  pages:
    publish: dist  # this path is automatically appended to artifacts:paths
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  environment: production
```
This example uses Eleventy to generate a static website and
output the generated HTML files into the `dist/` directory. This directory is exported
as an artifact and published with GitLab Pages.
It is also possible to use variables in the pages.publish field. For example:
```yaml
create-pages:
  stage: deploy
  script:
    - mkdir -p $CUSTOM_FOLDER/$CUSTOM_SUBFOLDER
    - cp -r public $CUSTOM_FOLDER/$CUSTOM_SUBFOLDER
  pages:
    publish: $CUSTOM_FOLDER/$CUSTOM_SUBFOLDER  # this path is automatically appended to artifacts:paths
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  variables:
    CUSTOM_FOLDER: "custom_folder"
    CUSTOM_SUBFOLDER: "custom_subfolder"
```
The publish path specified must be relative to the build root.
Additional details:

- The top-level `publish` keyword is deprecated and must now be nested under the `pages` keyword.

#### pages.path_prefix

{{< details >}}

{{< /details >}}

{{< history >}}

- Introduced with a flag named `pages_multiple_versions_setting`, disabled by default.
- Feature flag `pages_multiple_versions_setting` removed.

{{< /history >}}
Use pages.path_prefix to configure a path prefix for parallel deployments of GitLab Pages.
Keyword type: Job keyword. You can use it only as part of a pages job.
Supported values:
The given value is converted to lowercase and shortened to 63 bytes. Everything except alphanumeric characters or periods is replaced with a hyphen. Leading and trailing hyphens or periods are not permitted.
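The sanitization rules above can be sketched in Python. This is an illustrative approximation only, not GitLab's actual implementation, and it approximates the 63-byte truncation with character truncation (equivalent for ASCII input):

```python
import re

def sanitize_path_prefix(value: str) -> str:
    """Approximate the pages.path_prefix sanitization: lowercase,
    truncate to 63 characters (a stand-in for the 63-byte limit),
    replace everything except alphanumerics and periods with hyphens,
    and strip leading/trailing hyphens and periods."""
    value = value.lower()[:63]
    value = re.sub(r"[^a-z0-9.]", "-", value)
    return value.strip("-.")

print(sanitize_path_prefix("Feature/My_Branch"))  # feature-my-branch
```

For example, a branch named `Feature/My_Branch` would produce the prefix `feature-my-branch`.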
Example of pages.path_prefix:
```yaml
create-pages:
  stage: deploy
  script:
    - echo "Pages accessible through ${CI_PAGES_URL}"
  pages:  # specifies that this is a Pages job and publishes the default public directory
    path_prefix: "$CI_COMMIT_BRANCH"
```
In this example, a different pages deployment is created for each branch.
#### pages.expire_in

{{< details >}}
{{< /details >}}
{{< history >}}
{{< /history >}}
Use expire_in to specify how long a deployment should be available before
it expires. After the deployment is expired, it's deactivated by a cron
job running every 10 minutes.
By default, parallel deployments expire
automatically after 24 hours.
To disable this behavior, set the value to never.
Keyword type: Job keyword. You can use it only as part of a pages job.
Supported values: The expiry time. If no unit is provided, the time is in seconds. Variables are also supported. Valid values include:

- `'42'`
- `42 seconds`
- `3 mins 4 sec`
- `2 hrs 20 min`
- `2h20min`
- `6 mos 1 day`
- `47 yrs 6 mos and 4d`
- `3 weeks and 2 days`
- `never`
- `$DURATION`

Example of `pages.expire_in`:
```yaml
create-pages:
  stage: deploy
  script:
    - echo "Pages accessible through ${CI_PAGES_URL}"
  pages:  # specifies that this is a Pages job and publishes the default public directory
    expire_in: 1 week
```
### parallel

{{< history >}}

- Maximum value of `parallel` is increased from 50 to 200.

{{< /history >}}
Use parallel to run a job multiple times in parallel in a single pipeline.
Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently.
Parallel jobs are named sequentially from job_name 1/N to job_name N/N.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- A numeric value from `1` to `200`.

Example of `parallel`:
```yaml
test:
  script: rspec
  parallel: 5
```
This example creates 5 jobs that run in parallel, named test 1/5 to test 5/5.
Additional details:

- Every parallel job has a `CI_NODE_INDEX` and `CI_NODE_TOTAL`
  predefined CI/CD variable set.
- A pipeline with jobs that use `parallel` might:
  - Create more jobs than there are available runners, leaving jobs
    pending while waiting for an available runner.
  - Return a `job_activity_limit_exceeded` error if creating the pipeline would cause
    the total number of jobs across all active pipelines to exceed the instance limit.

Related topics:
#### parallel:matrix

{{< history >}}
{{< /history >}}
Use parallel:matrix to run a job multiple times in parallel in a single pipeline,
but with different variable values for each instance of the job.
Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: An array of hashes of variables:
- The variable names can use only numbers, letters, and underscores (`_`).

Example of `parallel:matrix`:
```yaml
deploystacks:
  stage: deploy
  script:
    - bin/deploy
  parallel:
    matrix:
      - PROVIDER: aws
        STACK:
          - monitoring
          - app1
          - app2
      - PROVIDER: [gcp, vultr]
        STACK: [data, processing]
  environment: $PROVIDER/$STACK
```
The example generates 7 parallel deploystacks jobs, each with different values
for PROVIDER and STACK:
- `deploystacks: [aws, monitoring]`
- `deploystacks: [aws, app1]`
- `deploystacks: [aws, app2]`
- `deploystacks: [gcp, data]`
- `deploystacks: [gcp, processing]`
- `deploystacks: [vultr, data]`
- `deploystacks: [vultr, processing]`

Additional details:
- `parallel:matrix` jobs add the matrix values to the job names to differentiate
  the jobs from each other. However, long values can cause job names to exceed the
  255-character limit. For more information, see epic 11791.
- You cannot use the matrix values as variables for `rules:if`.
- You cannot create multiple matrix configurations with the same values but different names. Job names are generated from the matrix values, not the names, so matrix entries with identical values generate identical job names that overwrite each other.
For example, this test configuration would try to create two series of identical jobs,
but the OS2 versions overwrite the OS versions:
```yaml
test:
  parallel:
    matrix:
      - OS: [ubuntu]
        PROVIDER: [aws, gcp]
      - OS2: [ubuntu]
        PROVIDER: [aws, gcp]
```
Related topics:
- `needs:parallel:matrix`

### release

Use `release` to create a release.
The release job must have access to the glab CLI,
which must be in the $PATH.
If you use the Docker executor,
you can use this image from the GitLab container registry: registry.gitlab.com/gitlab-org/cli:latest
If you use the Shell executor or similar,
install glab CLI on the server where the runner is registered.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: The release subkeys:
- `tag_name`
- `tag_message` (optional)
- `name` (optional)
- `description`
- `ref` (optional)
- `milestones` (optional)
- `released_at` (optional)
- `assets:links` (optional)

Example of the `release` keyword:
```yaml
release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/cli:latest
  rules:
    - if: $CI_COMMIT_TAG  # Run this job when a tag is created manually
  script:
    - echo "Running the release job."
  release:
    tag_name: $CI_COMMIT_TAG
    name: 'Release $CI_COMMIT_TAG'
    description: 'Release created using the CLI.'
```
This example creates a release when a Git tag is created manually.
Additional details:

- Release jobs must include the `script` keyword. A release
  job can use the output from script commands. If you don't need the script, you can use a placeholder:

  ```yaml
  script:
    - echo "release job"
  ```

  For more details, see issue 223856, which aims to remove this restriction.
- The `release` section executes after the `script` keyword and before the `after_script`.
- A release is created only if the job's main script succeeds.
- If the release already exists, it is not updated and the job with the `release` keyword fails.

Related topics:
- Create a release using the `release` keyword.

#### release:tag_name

Required. The Git tag for the release.
If the tag does not exist in the project yet, it is created at the same time as the release. New tags use the SHA associated with the pipeline.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
CI/CD variables are supported.
Example of release:tag_name:
To create a release when a new tag is added to the project:
- Use the `$CI_COMMIT_TAG` CI/CD variable as the `tag_name`.
- Use `rules:if` to configure the job to run only for new tags.

```yaml
job:
  script: echo "Running the release job for the new tag."
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release description'
  rules:
    - if: $CI_COMMIT_TAG
```
To create a release and a new tag at the same time, your rules
should not configure the job to run only for new tags. A semantic versioning example:
```yaml
job:
  script: echo "Running the release job and creating a new tag."
  release:
    tag_name: ${MAJOR}_${MINOR}_${REVISION}
    description: 'Release description'
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
```
#### release:tag_message

If the tag does not exist, the newly created tag is annotated with the message specified by `tag_message`.
If omitted, a lightweight tag is created.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
Example of release:tag_message:
```yaml
release_job:
  stage: release
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release description'
    tag_message: 'Annotated tag message'
```
#### release:name

The release name. If omitted, it is populated with the value of `release: tag_name`.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
Example of release:name:
```yaml
release_job:
  stage: release
  release:
    name: 'Release $CI_COMMIT_TAG'
```
#### release:description

The long description of the release.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- A string with the long description.
- The path to a file that contains the description.
  - The file location must be relative to the project directory (`$CI_PROJECT_DIR`).
  - If the file is a symbolic link, it must be in the `$CI_PROJECT_DIR`.
  - The `./path/to/file` and filename can't contain spaces.

Example of `release:description`:
```yaml
job:
  release:
    tag_name: ${MAJOR}_${MINOR}_${REVISION}
    description: './path/to/CHANGELOG.md'
```
Additional details:
- The `description` is evaluated by the shell that runs `glab`.
  You can use CI/CD variables to define the description, but some shells
  use different syntax
  to reference variables. Similarly, some shells might require special characters
  to be escaped. For example, backticks (`` ` ``) might need to be escaped with a backslash (`\`).

#### release:ref

The ref for the release, if the `release: tag_name` doesn't exist yet.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:

- A commit SHA, tag name, or branch name. CI/CD variables are supported.
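This section has no example, so here is a minimal hypothetical sketch. It assumes a job that creates a release and a new tag named `v1.0.0`; because the tag does not exist yet, `ref` tells GitLab which commit the new tag should point at:

```yaml
# Hypothetical example: the tag v1.0.0 does not exist yet, so the
# release creates it at the commit the pipeline runs for.
release_job:
  stage: release
  script: echo "Creating release v1.0.0."
  release:
    tag_name: v1.0.0
    ref: $CI_COMMIT_SHA
    description: 'Release v1.0.0'
```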
#### release:milestones

The title of each milestone the release is associated with.
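This section has no example, so here is a minimal hypothetical sketch. The milestone titles `m1` and `m2` are placeholders; they must match milestones that already exist in the project:

```yaml
# Hypothetical example: associate the release with two existing milestones.
release_job:
  stage: release
  script: echo "Creating release associated with milestones."
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release description'
    milestones:
      - m1
      - m2
```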
#### release:released_at

The date and time when the release is ready.
Supported values:
Example of release:released_at:
```yaml
released_at: '2021-03-15T08:00:00Z'
```
Additional details:

- If not defined, the current date and time is used.

#### release:assets:links

Use `release:assets:links` to include asset links in the release.
Example of release:assets:links:
```yaml
assets:
  links:
    - name: 'asset1'
      url: 'https://example.com/assets/1'
    - name: 'asset2'
      url: 'https://example.com/assets/2'
      filepath: '/pretty/url/1' # optional
      link_type: 'other' # optional
```
### resource_group

Use `resource_group` to create a resource group that
ensures a job is mutually exclusive across different pipelines for the same project.
For example, if multiple jobs that belong to the same resource group are queued simultaneously,
only one of the jobs starts. The other jobs wait until the resource_group is free.
Resource groups behave similarly to semaphores in other programming languages.
You can choose a process mode to strategically control the job concurrency for your deployment preferences. The default process mode is unordered. To change the process mode of a resource group, use the API to send a request to edit an existing resource group.
You can define multiple resource groups per environment. For example, when deploying to physical devices, you might have multiple physical devices. Each device can be deployed to, but only one deployment can occur per device at any given time.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- Only letters, digits, `-`, `_`, `/`, `$`, `{`, `}`, `.`, and spaces.
  It can't start or end with `/`. CI/CD variables are supported.

Example of `resource_group`:
```yaml
deploy-to-production:
  script: deploy
  resource_group: production
```
In this example, two deploy-to-production jobs in two separate pipelines can never run at the same time. As a result,
you can ensure that concurrent deployments never happen to the production environment.
Related topics:
### retry

Use `retry` to configure how many times a job is retried if it fails.
If not defined, defaults to 0 and jobs do not retry.
When a job fails, the job is processed up to two more times, until it succeeds or reaches the maximum number of retries.
By default, all failure types cause the job to be retried. Use retry:when or retry:exit_codes
to select which failures to retry on.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- `0` (default), `1`, or `2`.

Example of `retry`:
```yaml
test:
  script: rspec
  retry: 2

test_advanced:
  script:
    - echo "Run a script that results in exit code 137."
    - exit 137
  retry:
    max: 2
    when: runner_system_failure
    exit_codes: 137
```
The `test_advanced` job is retried up to 2 times if the exit code is `137` or if there is
a runner system failure.
#### retry:when

Use `retry:when` with `retry:max` to retry jobs for only specific failure cases.
retry:max is the maximum number of retries, like retry, and can be
0, 1, or 2.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- `always`: Retry on any failure (default).
- `unknown_failure`: Retry when the failure reason is unknown.
- `script_failure`: Retry when:
  - The script failed.
  - The runner failed to pull the image, with the `docker`, `docker+machine`, or `kubernetes` executors.
- `api_failure`: Retry on API failure.
- `stuck_or_timeout_failure`: Retry when the job got stuck or timed out.
- `runner_system_failure`: Retry if there is a runner system failure (for example, job setup failed).
- `runner_unsupported`: Retry if the runner is unsupported.
- `stale_schedule`: Retry if a delayed job could not be executed.
- `job_execution_timeout`: Retry if the script exceeded the maximum execution time set for the job.
- `archived_failure`: Retry if the job is archived and can't be run.
- `unmet_prerequisites`: Retry if the job failed to complete prerequisite tasks.
- `scheduler_failure`: Retry if the scheduler failed to assign the job to a runner.
- `data_integrity_failure`: Retry if there is an unknown job problem.

Example of `retry:when` (single failure type):
```yaml
test:
  script: rspec
  retry:
    max: 2
    when: runner_system_failure
```
If there is a failure other than a runner system failure, the job is not retried.
Example of retry:when (array of failure types):
```yaml
test:
  script: rspec
  retry:
    max: 2
    when:
      - runner_system_failure
      - stuck_or_timeout_failure
```
#### retry:exit_codes

{{< history >}}

- Introduced with a flag named `ci_retry_on_exit_codes`. Disabled by default.
- Feature flag `ci_retry_on_exit_codes` removed.

{{< /history >}}
Use retry:exit_codes with retry:max to retry jobs for only specific failure cases.
retry:max is the maximum number of retries, like retry, and can be
0, 1, or 2.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- A single failure exit code.
- An array of failure exit codes.

Example of `retry:exit_codes`:
```yaml
test_job_1:
  script:
    - echo "Run a script that results in exit code 1. This job isn't retried."
    - exit 1
  retry:
    max: 2
    exit_codes: 137

test_job_2:
  script:
    - echo "Run a script that results in exit code 137. This job will be retried."
    - exit 137
  retry:
    max: 1
    exit_codes:
      - 255
      - 137
```
Related topics:
- You can specify the number of retry attempts for certain stages of job execution using variables.

### rules

Use `rules` to include or exclude jobs in pipelines.
Rules are evaluated when the pipeline is created, and evaluated in order. When a match is found, no more rules are checked and the job is either included or excluded from the pipeline depending on the configuration. If no rules match, the job is not added to the pipeline.
`rules` accepts an array of rules. Each rule must have at least one of:

- `if`
- `changes`
- `exists`
- `when`

Rules can also optionally be combined with:

- `allow_failure`
- `needs`
- `variables`
- `interruptible`

You can combine multiple keywords together for complex rules.
The job is added to the pipeline:

- If an `if`, `changes`, or `exists` rule matches, and is also configured with `when: on_success` (default if not defined),
  `when: delayed`, or `when: always`.
- If a rule is reached that is only `when: on_success`, `when: delayed`, or `when: always`.

The job is not added to the pipeline:

- If no rules match.
- If a rule matches and has `when: never`.

For additional examples, see Specify when jobs run with rules.
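The first-match behavior described above can be sketched in Python. This is an illustrative model only, not GitLab's implementation; the lambda conditions stand in for `if` expressions:

```python
def evaluate_rules(rules, context):
    """Model first-match rule evaluation: rules are checked in order,
    the first matching rule decides the job's `when` value, and later
    rules are ignored. None means no rule matched: job not added."""
    for rule in rules:
        condition = rule.get("if")
        # A rule without an `if` condition always matches when reached.
        if condition is None or condition(context):
            # `when` defaults to on_success when the rule omits it.
            return rule.get("when", "on_success")
    return None

rules = [
    {"if": lambda ctx: ctx["branch"].startswith("feature"), "when": "manual"},
    {"if": lambda ctx: ctx["branch"] == "main"},  # when defaults to on_success
]
print(evaluate_rules(rules, {"branch": "main"}))       # on_success
print(evaluate_rules(rules, {"branch": "feature-x"}))  # manual
print(evaluate_rules(rules, {"branch": "dev"}))        # None
```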
#### rules:if

Use `rules:if` clauses to specify when to add a job to a pipeline:

- If an `if` statement is true, add the job to the pipeline.
- If an `if` statement is true, but it's combined with `when: never`, do not add the job to the pipeline.
- If an `if` statement is false, check the next `rules` item (if any more exist).

`if` clauses are evaluated based on the values of CI/CD variables and the
`rules` execution flow.

Keyword type: Job-specific and pipeline-specific. You can use it as part of a job
to configure the job behavior, or with `workflow` to configure the pipeline behavior.
Supported values:

- A CI/CD variable expression.

Example of `rules:if`:
```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/ && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != $CI_DEFAULT_BRANCH
      when: never
    - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/
      when: manual
      allow_failure: true
    - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME
```
Additional details:

- You cannot use nested variables with `if`. See issue 327780 for more details.
- If a rule matches and has no `when` defined, the rule uses the `when`
  defined for the job, which defaults to `on_success` if not defined.
- You can mix `when` at the job-level with `when` in `rules`.
  The `when` configuration in `rules` takes precedence over `when` at the job-level.
- Unlike variables in `script`
  sections, variables in `rules` expressions are always formatted as `$VARIABLE`.
- You can use `rules:if` with `include` to conditionally include other configuration files.
- CI/CD variables on the right side of `=~` and `!~` expressions are evaluated as regular expressions.

Related topics:

- Common `if` expressions for `rules`.
- Use `rules` to run merge request pipelines.

#### rules:changes

Use `rules:changes` to specify when to add a job to a pipeline by checking for changes
to specific files.
For new branch pipelines or when there is no Git push event, rules: changes always evaluates to true
and the job always runs. Pipelines like tag pipelines, scheduled pipelines,
and manual pipelines, all do not have a Git push event associated with them.
To cover these cases, use rules: changes: compare_to to specify
the branch to compare against the pipeline ref.
If you do not use compare_to, you should use rules: changes only with branch pipelines
or merge request pipelines, though
`rules: changes` still evaluates to true when creating a new branch. With:

- Merge request pipelines, `rules:changes` compares the changes with the target MR branch.
- Branch pipelines, `rules:changes` compares the changes with the previous commit on the branch.

Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
An array including any number of:
- Paths to files.
- Wildcard paths for single directories, for example `path/to/directory/*`.
- Wildcard paths for a directory and all its subdirectories, for example `path/to/directory/**/*`.
- Wildcard glob paths for all files with the same extension or multiple extensions, for example `*.md` or `path/to/directory/*.{rb,py,sh}`.
- Wildcard paths to files in the root directory, or all directories, wrapped in double quotes. For example `"*.json"` or `"**/*.json"`.

Example of `rules:changes`:
```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - Dockerfile
      when: manual
      allow_failure: true

docker build alternative:
  variables:
    DOCKERFILES_DIR: 'path/to/dockerfiles'
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - $DOCKERFILES_DIR/**/*
```
In this example:

- If the pipeline is a merge request pipeline, check `Dockerfile` and the files in
  `$DOCKERFILES_DIR/**/*` for changes.
- If `Dockerfile` has changed, add the job to the pipeline as a manual job, and the pipeline
  continues running even if the job is not triggered (`allow_failure: true`).
- If a file in `$DOCKERFILES_DIR/**/*` has changed, add the job to the pipeline.
- If no listed files changed, do not add the jobs to the pipeline (same as `when: never`).

Additional details:
- `changes` patterns are evaluated with Ruby's `File.fnmatch`
  with the flags
  `File::FNM_PATHNAME | File::FNM_DOTMATCH | File::FNM_EXTGLOB`.
- A maximum of 50,000 checks are performed against
  `changes` patterns or file paths. After the 50,000th check, rules with patterned
  globs always match. In other words, the `changes` rule always assumes a match when
  more than 50,000 files changed, or if there are fewer than 50,000 changed files but
  the `changes` rules are checked more than 50,000 times.
- There is a limit on the number of patterns or file paths you can define per
  `rules:changes` section.
- `changes` resolves to true if any of the matching files are changed (an OR operation).
- The `$` character is used for both variables and paths. For example, if the
  `$VAR` variable exists, its value is used. If it does not exist, the `$` is interpreted
  as being part of a path.
- You should not use `./`, double slashes (`//`), or any other kind of relative path.
  Paths are matched with exact string comparison, they are not evaluated like in a shell.

Related topics:
#### rules:changes:paths

{{< history >}}
{{< /history >}}
Use rules:changes to specify that a job only be added to a pipeline when specific
files are changed, and use rules:changes:paths to specify the files.
rules:changes:paths is the same as using rules:changes without
any subkeys. All additional details and related topics are the same.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- An array of file paths, the same as `rules:changes`.

Example of `rules:changes:paths`:
```yaml
docker-build-1:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - Dockerfile

docker-build-2:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        paths:
          - Dockerfile
```
In this example, both jobs have the same behavior.
#### rules:changes:compare_to

{{< history >}}

- Introduced with a flag named `ci_rules_changes_compare`. Enabled by default.
- Feature flag `ci_rules_changes_compare` removed.

{{< /history >}}
Use rules:changes:compare_to to specify which ref to compare against for changes to the files
listed under rules:changes:paths.
Keyword type: Job keyword. You can use it only as part of a job, and it must be combined with rules:changes:paths.
Supported values:
- A branch name, like `main`, `branch1`, or `refs/heads/branch1`.
- A tag name, like `tag1` or `refs/tags/tag1`.
- A commit SHA, like `2fg31ga14b`.

CI/CD variables are supported.
Example of rules:changes:compare_to:
```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        paths:
          - Dockerfile
        compare_to: 'refs/heads/branch1'
```
In this example, the docker build job is only included when the Dockerfile has changed
relative to refs/heads/branch1 and the pipeline source is a merge request event.
Additional details:
- Using `compare_to` in some situations can cause unexpected results.

Related topics:

- Use `rules:changes:compare_to` to skip a job if the branch is empty.

#### rules:exists

{{< history >}}

- The maximum number of checks against `exists` patterns or file paths increased from 10,000 to 50,000 in GitLab 17.7.

{{< /history >}}
Use exists to run a job when certain files or directories exist in the repository.
Keyword type: Job keyword. You can use it as part of a job or an include.
Supported values:
- An array of file paths. Paths are relative to the project directory (`$CI_PROJECT_DIR`)
  and can't directly link outside it. File paths can use glob patterns and
  CI/CD variables.

Example of `rules:exists`:
```yaml
job1:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - exists:
        - Dockerfile

job2:
  variables:
    DOCKERPATH: "**/Dockerfile"
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - exists:
        - $DOCKERPATH
```
In this example:
- `job1` runs if a `Dockerfile` exists in the root directory of the repository.
- `job2` runs if a `Dockerfile` exists anywhere in the repository.

Additional details:
- `exists` patterns are evaluated with Ruby's `File.fnmatch`
  with the flags
  `File::FNM_PATHNAME | File::FNM_DOTMATCH | File::FNM_EXTGLOB`.
- A maximum of 50,000 checks are performed against
  `exists` patterns or file paths. After the 50,000th check, rules with patterned
  globs always match. In other words, the `exists` rule always assumes a match in
  projects with more than 50,000 files, or if there are fewer than 50,000 files but
  the `exists` rules are checked more than 50,000 times.
- There is a limit on the number of patterns or file paths you can define per
  `rules:exists` section.
- `exists` resolves to true if any of the listed files are found (an OR operation).
- With job-level `rules:exists`, GitLab searches for the files in the project and
  ref that runs the pipeline. When using `include` with `rules:exists`,
  GitLab searches for the files or directories in the project and ref of the file that contains the `include`
  section. The project containing the `include` section can be different than the project
  running the pipeline when using nested includes or configuration from another project.
- `rules:exists` cannot search for the presence of artifacts,
  because `rules` evaluation happens before jobs run and artifacts are fetched.

#### rules:exists:paths

{{< history >}}
- Introduced with a flag named `ci_support_rules_exists_paths_and_project`. Disabled by default.
- Feature flag `ci_support_rules_exists_paths_and_project` removed.

{{< /history >}}
rules:exists:paths is the same as using rules:exists without
any subkeys. All additional details are the same.
Keyword type: Job keyword. You can use it as part of a job or an include.
Supported values:

- An array of file paths.

Example of `rules:exists:paths`:
```yaml
docker-build-1:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      exists:
        - Dockerfile

docker-build-2:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      exists:
        paths:
          - Dockerfile
```
In this example, both jobs have the same behavior.
#### rules:exists:project

{{< history >}}

- Introduced with a flag named `ci_support_rules_exists_paths_and_project`. Disabled by default.
- Feature flag `ci_support_rules_exists_paths_and_project` removed.

{{< /history >}}
Use rules:exists:project to specify the location in which to search for the files
listed under rules:exists:paths. Must be used with rules:exists:paths.
Keyword type: Job keyword. You can use it as part of a job or an include, and it must be combined with rules:exists:paths.
Supported values:
- `exists:project`: A full project path, including namespace and group.
- `exists:ref`: Optional. The commit ref to use to search for the file. The ref can be a tag, branch name, or SHA. Defaults to the `HEAD` of the project when not specified.

Example of `rules:exists:project`:
```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - exists:
        paths:
          - Dockerfile
        project: my-group/my-project
        ref: v1.0.0
```
In this example, the docker build job is only included when the Dockerfile exists in
the project my-group/my-project on the commit tagged with v1.0.0.
#### rules:when

Use `rules:when` alone or as part of another rule to control conditions for adding
a job to a pipeline. rules:when is similar to when, but with slightly
different input options.
If a rules:when rule is not combined with if, changes, or exists, it always matches
if reached when evaluating a job's rules.
Keyword type: Job-specific. You can use it only as part of a job.
Supported values:
- `on_success` (default): Run the job only when no jobs in earlier stages fail.
- `on_failure`: Run the job only when at least one job in an earlier stage fails.
- `never`: Don't run the job regardless of the status of jobs in earlier stages.
- `always`: Run the job regardless of the status of jobs in earlier stages.
- `manual`: Add the job to the pipeline as a manual job.
  The default value for `allow_failure` changes to `false`.
- `delayed`: Add the job to the pipeline as a delayed job.

Example of `rules:when`:
```yaml
job1:
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
    - if: $CI_COMMIT_REF_NAME =~ /feature/
      when: delayed
    - when: manual
  script:
    - echo
```
In this example, `job1` is added to pipelines:

- For the default branch, with `when: on_success`, which is the default behavior
  when `when` is not defined.
- For feature branches, as a delayed job.
- In all other cases, as a manual job.

Additional details:
- For `on_success` and `on_failure`:
  - Jobs that are allowed to fail with `allow_failure: true` in earlier stages are considered successful, even if they failed.
- When you use `rules:when: manual` to add a manual job:
  - `allow_failure` becomes `false` by default. This default is the opposite of
    using `when: manual` to add a manual job.
  - To get the same behavior as `when: manual` defined outside of `rules`, set `rules: allow_failure` to `true`.

#### rules:allow_failure

Use `allow_failure: true` in `rules` to allow a job to fail
without stopping the pipeline.
You can also use allow_failure: true with a manual job. The pipeline continues
running without waiting for the result of the manual job. allow_failure: false
combined with when: manual in rules causes the pipeline to wait for the manual
job to run before continuing.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- `true` or `false`. Defaults to `false` if not defined.

Example of `rules:allow_failure`:
```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH
      when: manual
      allow_failure: true
```
If the rule matches, then the job is a manual job with allow_failure: true.
Additional details:
- The `rules:allow_failure` overrides the job-level `allow_failure`,
  and only applies when the specific rule triggers the job.

#### rules:needs

{{< history >}}

- Introduced with a flag named `introduce_rules_with_needs`. Disabled by default.
- Feature flag `introduce_rules_with_needs` removed.

{{< /history >}}
Use needs in rules to update a job's needs for specific conditions. When a condition matches a rule, the job's needs configuration is completely replaced with the needs in the rule.
Keyword type: Job-specific. You can use it only as part of a job.
Supported values:
- An array of job names as strings.
- An empty array (`[]`), to set the job needs to none when the specific condition is met.

Example of `rules:needs`:
```yaml
build-dev:
  stage: build
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
  script: echo "Feature branch, so building dev version..."

build-prod:
  stage: build
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script: echo "Default branch, so building prod version..."

tests:
  stage: test
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
      needs: ['build-dev']
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      needs: ['build-prod']
  script: echo "Running dev specs by default, or prod specs when default branch..."
```
In this example:
- If the branch is not the default branch, the `tests` job needs the `build-dev` job.
- If the branch is the default branch, the `tests` job needs the `build-prod` job.

Additional details:

- `needs` in rules override any `needs` defined at the job-level. When overridden, the behavior is the same as job-level `needs`.
- `needs` in rules can accept `artifacts` and `optional`.

#### rules:variables

Use `variables` in `rules` to define variables for specific conditions.
Keyword type: Job-specific. You can use it only as part of a job.
Supported values:
- A hash of variables in the format `VARIABLE-NAME: value`.

Example of `rules:variables`:
```yaml
job:
  variables:
    DEPLOY_VARIABLE: "default-deploy"
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      variables:                              # Override DEPLOY_VARIABLE defined
        DEPLOY_VARIABLE: "deploy-production"  # at the job level.
    - if: $CI_COMMIT_REF_NAME =~ /feature/
      variables:
        IS_A_FEATURE: "true"                  # Define a new variable.
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"
```
#### rules:interruptible

{{< history >}}
{{< /history >}}
Use interruptible in rules to update a job's interruptible value for specific conditions.
Keyword type: Job-specific. You can use it only as part of a job.
Supported values:
- `true` or `false`.

Example of `rules:interruptible`:
```yaml
job:
  script: echo "Hello, Rules!"
  interruptible: true
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      interruptible: false  # Override interruptible defined at the job level.
    - when: on_success
```
Additional details:
- The `rules:interruptible` overrides the job-level `interruptible`,
  and only applies when the specific rule triggers the job.

### run

{{< details >}}

{{< /details >}}

{{< history >}}

- Introduced with a flag named `pipeline_run_keyword`. Disabled by default. Requires GitLab Runner 17.1.
- Feature flag `pipeline_run_keyword` removed in GitLab 17.5.

{{< /history >}}
{{< alert type="note" >}}

This feature is available for testing, but not ready for production use.

{{< /alert >}}
Use run to define a series of steps to be executed in a job. Each step can be either a script or a predefined step.
You can also provide optional environment variables and inputs.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
An array of hashes, where each hash can include these keys:

- `name`: A string representing the name of the step.
- `script`: A string containing shell commands to execute.
- `step`: A string identifying a predefined step to run.
- `env`: Optional. A hash of environment variables specific to this step.
- `inputs`: Optional. A hash of input parameters for predefined steps.

Each array entry must have a `name`, and one of `script` or `step` (but not both).
Example of run:
```yaml
job:
  run:
    - name: 'hello_steps'
      script: 'echo "hello from step1"'
    - name: 'bye_steps'
      step: gitlab.com/gitlab-org/ci-cd/runner-tools/echo-step@main
      inputs:
        echo: 'bye steps!'
      env:
        var1: 'value 1'
```
In this example, the job has two steps:
- `hello_steps` runs the `echo` shell command.
- `bye_steps` uses a predefined step with an environment variable and an input parameter.

Additional details:

- Each step can define either a `script` or a `step` key, but not both.
- The `run` configuration cannot be used together with the existing `script`, `after_script`, or `before_script` keywords.

### script

Use `script` to specify commands for the runner to execute.
All jobs except trigger jobs require a script keyword.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: An array including:

- Single-line commands.
- Long commands split over multiple lines.
- YAML anchors.

CI/CD variables are supported.
Example of script:
```yaml
job1:
  script: "bundle exec rspec"

job2:
  script:
    - uname -a
    - bundle exec rspec
```
Additional details:
- When you use special characters in `script`, you must use single quotes (`'`) or double quotes (`"`).

Related topics:

- You can use color codes with `script`
  to make job logs easier to review.

### secrets

{{< details >}}
{{< /details >}}
Use secrets to specify CI/CD secrets to:
- Retrieve from an external secrets provider.
- Make available in jobs as CI/CD variables (`file` type by default).

#### secrets:vault

{{< history >}}

- `generic` engine option introduced in GitLab Runner 16.11.

{{< /history >}}
Use `secrets:vault` to specify secrets provided by HashiCorp Vault.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- `engine:name`: Name of the secrets engine. Can be one of `kv-v2` (default), `kv-v1`, or `generic`.
- `engine:path`: Path to the secrets engine.
- `path`: Path to the secret.
- `field`: Name of the field where the password is stored.

Example of `secrets:vault`:
To specify all details explicitly and use the KV-V2 secrets engine:
job:
secrets:
DATABASE_PASSWORD: # Store the path to the secret in this CI/CD variable
vault: # Translates to secret: `ops/data/production/db`, field: `password`
engine:
name: kv-v2
path: ops
path: production/db
field: password
You can shorten this syntax. With the short syntax, engine:name and engine:path
both default to kv-v2:
job:
secrets:
DATABASE_PASSWORD: # Store the path to the secret in this CI/CD variable
vault: production/db/password # Translates to secret: `kv-v2/data/production/db`, field: `password`
To specify a custom secrets engine path in the short syntax, add a suffix that starts with @:
job:
secrets:
DATABASE_PASSWORD: # Store the path to the secret in this CI/CD variable
vault: production/db/password@ops # Translates to secret: `ops/data/production/db`, field: `password`
#### secrets:gcp_secret_manager

{{< history >}}
{{< /history >}}
Use secrets:gcp_secret_manager to specify secrets provided by GCP Secret Manager.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- `name`: Name of the secret.
- `version`: Version of the secret.

Example of `secrets:gcp_secret_manager`:
job:
secrets:
DATABASE_PASSWORD:
gcp_secret_manager:
name: 'test'
version: 2
Related topics:

- Use GCP Secret Manager secrets in GitLab CI/CD.

#### secrets:azure_key_vault

{{< history >}}
{{< /history >}}
Use `secrets:azure_key_vault` to specify secrets provided by an Azure Key Vault.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- `name`: Name of the secret.
- `version`: Version of the secret.

Example of `secrets:azure_key_vault`:
job:
secrets:
DATABASE_PASSWORD:
azure_key_vault:
name: 'test'
version: 'test'
Related topics:

- Use Azure Key Vault secrets in GitLab CI/CD.

#### secrets:file

Use `secrets:file` to configure the secret to be stored as either a
`file` or `variable` type CI/CD variable.
By default, the secret is passed to the job as a file type CI/CD variable. The value
of the secret is stored in the file and the variable contains the path to the file.
If your software can't use file type CI/CD variables, set file: false to store
the secret value directly in the variable.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- `true` (default) or `false`.

Example of `secrets:file`:
job:
secrets:
DATABASE_PASSWORD:
vault: production/db/password@ops
file: false
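With the default `file: true`, the CI/CD variable holds a path to a file containing the secret, so scripts read the file rather than the variable itself. A sketch, reusing the Vault path from the example above:

```yaml
read-secret-job:
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@ops  # Default `file: true`: $DATABASE_PASSWORD is a path to a file
  script:
    # Read the secret's value from the file the variable points to
    - cat "$DATABASE_PASSWORD"
```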
Additional details:
- The `file` keyword is a setting for the CI/CD variable and must be nested under
  the CI/CD variable name, not in the `vault` section.

#### secrets:token

{{< history >}}
{{< /history >}}
Use secrets:token to explicitly select a token to use when authenticating with the external secrets provider by referencing the token's CI/CD variable.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:

- The name of an ID token defined with `id_tokens`.
Example of secrets:token:
job:
id_tokens:
AWS_TOKEN:
aud: https://aws.example.com
VAULT_TOKEN:
aud: https://vault.example.com
secrets:
DB_PASSWORD:
vault: gitlab/production/db
token: $VAULT_TOKEN
Additional details:
- If the `token` keyword is not set and there is only one token defined, the defined token is automatically used.
- If multiple tokens are defined, specify which token to use by setting the `token` keyword.
  If you do not specify which token to use, it is not possible to predict which token is used each time the job runs.

### services

Use `services` to specify any additional Docker images that your scripts require to run successfully. The services image is linked
to the image specified in the `image` keyword.
Job configuration and default configuration do not merge together.
If the pipeline has default:services defined, and the job also has services,
the job configuration takes precedence and the default configuration is not used.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values: The name of the services image, including the registry path if needed, in one of these formats:
- `<image-name>` (same as using `<image-name>` with the `latest` tag)
- `<image-name>:<tag>`
- `<image-name>@<digest>`

CI/CD variables are supported, but not for `alias`.
To customize alias dynamically, use CI/CD inputs instead.
Example of services:
default:
image:
name: ruby:2.6
entrypoint: ["/bin/bash"]
services:
- name: my-postgres:11.7
alias: db-postgres
entrypoint: ["/usr/local/bin/db-postgres"]
command: ["start"]
before_script:
- bundle install
test:
script:
- bundle exec rake spec
In this example, GitLab launches two containers for the job:
- A Ruby container that runs the `script` commands.
- A PostgreSQL container. The `script` commands in the Ruby container can connect to
  the PostgreSQL database at the `db-postgres` hostname.

Additional details:

- Defining `services` at the top level, but not in the `default` section, is deprecated.

Related topics:

- Available settings for `services`.
- Define `services` in the `.gitlab-ci.yml` file.

#### services:name

The full name of the image to use for the service.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Supported values: The name of the service image, including the registry path if needed, in one of these formats:
- `<image-name>` (same as using `<image-name>` with the `latest` tag)
- `<image-name>:<tag>`
- `<image-name>@<digest>`

CI/CD variables are supported.
Example of services:name:
services:
- name: postgres:11.7
- name: registry.example.com/my-org/custom-service:latest
Additional details:
- Use `alias` to define unique name aliases when using multiple identical service images, or when the service image name is long.
- When you use `entrypoint`, `command`, or `variables`, the `name` keyword is required.

#### services:alias

{{< history >}}
{{< /history >}}
Additional aliases to access the service from the job's container.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Supported values: A string with one or more aliases separated by spaces or commas.
Example of services:alias:
services:
- name: postgres:11.7
alias: db,postgres,pg
- name: mysql:latest
alias: mysql-1
#### services:docker

{{< history >}}

- `user` input option introduced in GitLab 16.8.

{{< /history >}}
Use services:docker to pass options to the Docker executor of a GitLab Runner.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
A hash of options for the Docker executor, which can include:
- `platform`: Selects the architecture of the image to pull. When not specified,
  the default is the same platform as the host runner.
- `user`: Specify the username or UID to use when running the container.

Example of `services:docker`:
arm-sql-job:
script: echo "Run sql tests in service container"
image: ruby:2.6
services:
- name: super/sql:experimental
docker:
platform: arm64/v8
user: dave
Additional details:
- `services:docker:platform` maps to the `docker pull --platform` option.
- `services:docker:user` maps to the `docker run --user` option.

#### services:kubernetes

{{< history >}}

- `user` input option introduced in GitLab Runner 17.11.
- `user` input option extended to support the `uid:gid` format in GitLab 18.0.

{{< /history >}}
Use services:kubernetes to pass options to the GitLab Runner Kubernetes executor.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
A hash of options for the Kubernetes executor, which can include:
- `user`: Specify the username or UID to use when the container runs. You can also use it to set the GID by using the `UID:GID` format.

Example of `services:kubernetes` with only the UID:
arm-sql-job:
script: echo "Run sql tests"
image: ruby:2.6
services:
- name: super/sql:experimental
kubernetes:
user: "1001"
Example of services:kubernetes with both UID and GID:
arm-sql-job:
script: echo "Run sql tests"
image: ruby:2.6
services:
- name: super/sql:experimental
kubernetes:
user: "1001:1001"
#### services:entrypoint

A command or script to execute as the container's entrypoint.
When the Docker container is created, the entrypoint is translated to the Docker --entrypoint option.
The syntax is similar to the Dockerfile ENTRYPOINT directive,
where each shell token is a separate string in the array.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Supported values: An array of strings representing the entrypoint command.
Example of services:entrypoint:
services:
- name: my-postgres:11.7
entrypoint: ["/usr/local/bin/db-postgres"]
#### services:command

A command or script that should be used as the container's command.
It's translated to arguments passed to Docker after the image's name. The syntax is similar to the
Dockerfile CMD directive,
where each shell token is a separate string in the array.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Supported values: An array of strings representing the command.
Example of services:command:
services:
- name: super/sql:latest
command: ["/usr/bin/super-sql", "run"]
#### services:variables

Use `services:variables` to pass additional environment variables to the service. Service variables are passed exclusively to the service container and are not available to the job container.
The syntax is the same as job variables.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Supported values: A hash of environment variable names and values.
Example of services:variables:
services:
- name: postgres:11.7
alias: db
variables:
POSTGRES_DB: "my_custom_db"
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "example"
PGDATA: "/var/lib/postgresql/data"
#### services:pull_policy

{{< history >}}

- Introduced with a flag named `ci_docker_image_pull_policy`. Disabled by default.
- Feature flag `ci_docker_image_pull_policy` removed.

{{< /history >}}
The pull policy that the runner uses to fetch the Docker image. Requires GitLab Runner 15.1 or later.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Supported values:
- A single pull policy, or multiple pull policies in an array.
  Can be `always`, `if-not-present`, or `never`.

Examples of `services:pull_policy`:
job1:
script: echo "A single pull policy."
services:
- name: postgres:11.6
pull_policy: if-not-present
job2:
script: echo "Multiple pull policies."
services:
- name: postgres:11.6
pull_policy: [always, if-not-present]
Additional details:
- If the pull policy is not in the runner's allowed pull policies, the job fails with an error similar to:
  `ERROR: Job failed (system failure): the configured PullPolicies ([always]) are not allowed by AllowedPullPolicies ([never])`.

### stage

Use `stage` to define which stage a job runs in. Jobs in the same
stage can execute in parallel (see Additional details).
If stage is not defined, the job uses the test stage by default.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: A string, which can be a:

- Default stage.
- User-defined stage.
Example of stage:
stages:
- build
- test
- deploy
job1:
stage: build
script:
- echo "This job compiles code."
job2:
stage: test
script:
- echo "This job tests the compiled code. It runs when the build stage completes."
job3:
script:
- echo "This job also runs in the test stage."
job4:
stage: deploy
script:
- echo "This job deploys the code. It runs when the test stage completes."
environment: production
Additional details:
- Jobs in the same stage can run in parallel on a single runner if the runner's `concurrent` setting
  is greater than `1`.

#### stage: .pre

Use the `.pre` stage to make a job run at the start of a pipeline. By default, `.pre` is
the first stage in a pipeline. User-defined stages execute after .pre.
You do not have to define .pre in stages.
If a pipeline contains only jobs in the .pre or .post stages, it does not run.
There must be at least one other job in a different stage.
Keyword type: You can only use it with a job's stage keyword.
Example of stage: .pre:
stages:
- build
- test
job1:
stage: build
script:
- echo "This job runs in the build stage."
first-job:
stage: .pre
script:
- echo "This job runs in the .pre stage, before all other stages."
job2:
stage: test
script:
- echo "This job runs in the test stage."
Additional details:
- If a pipeline has jobs with `needs: []` and jobs in the `.pre` stage, they will
  all start as soon as the pipeline is created. Jobs with `needs: []` start immediately,
  ignoring any stage configuration.
- Pipeline execution policies can use a `.pipeline-policy-pre` stage which runs before `.pre`.

#### stage: .post

Use the `.post` stage to make a job run at the end of a pipeline. By default, `.post`
is the last stage in a pipeline. User-defined stages execute before .post.
You do not have to define .post in stages.
If a pipeline contains only jobs in the .pre or .post stages, it does not run.
There must be at least one other job in a different stage.
Keyword type: You can only use it with a job's stage keyword.
Example of stage: .post:
stages:
- build
- test
job1:
stage: build
script:
- echo "This job runs in the build stage."
last-job:
stage: .post
script:
- echo "This job runs in the .post stage, after all other stages."
job2:
stage: test
script:
- echo "This job runs in the test stage."
Additional details:
- Pipeline execution policies can use a `.pipeline-policy-post` stage which runs after `.post`.

### tags

Use `tags` to select a specific runner from the list of all runners that are
available for the project.
When you register a runner, you can specify the runner's tags, for
example ruby, postgres, or development. To pick up and run a job, a runner must
be assigned every tag listed in the job.
Job configuration and default configuration do not merge together.
If the pipeline has default:tags defined, and the job also has tags,
the job configuration takes precedence and the default configuration is not used.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:

- An array of tag names. Tags are case-sensitive.
- CI/CD variables are supported.
Example of tags:
job:
tags:
- ruby
- postgres
In this example, only runners with both the ruby and postgres tags can run the job.
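Because CI/CD variables are supported in `tags`, runner selection can be made dynamic. A sketch, where `RUNNER_TAG` is a variable name chosen for illustration:

```yaml
variables:
  RUNNER_TAG: 'linux'

variable-tag-job:
  tags:
    - docker
    - $RUNNER_TAG  # Resolves to `linux` unless overridden when the pipeline runs
  script:
    - echo "Runs on a runner tagged with both docker and the value of RUNNER_TAG."
```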
Additional details:
- The number of tags assigned to a job must be less than `50`.

### timeout

Use `timeout` to configure a timeout for a specific job. If the job runs for longer
than the timeout, the job fails.
The job-level timeout can be longer than the project-level timeout, but can't be longer than the runner's timeout.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: A period of time written in natural language. For example, these are all equivalent:
- `3600 seconds`
- `60 minutes`
- `one hour`

Example of `timeout`:
build:
script: build.sh
timeout: 3 hours 30 minutes
test:
script: rspec
timeout: 3h 30m
Additional details:
- The `timeout` keyword is not supported in the `default` configuration. Define `timeout` in individual job configurations instead.
  For more information, see issue 213634.

### trigger

{{< history >}}

- Support for `environment` introduced in GitLab 16.4.

{{< /history >}}
Use trigger to declare that a job is a "trigger job" which starts a
downstream pipeline that is either:

- A multi-project pipeline.
- A child pipeline.
Trigger jobs can use only a limited set of GitLab CI/CD configuration keywords. The keywords available for use in trigger jobs are:
- `allow_failure`.
- `extends`.
- `needs`, but not `needs:project`.
- `only` and `except`.
- `parallel`.
- `rules`.
- `stage`.
- `trigger`.
- `variables`.
- `when` (only with a value of `on_success`, `on_failure`, `always`, or `manual`).
- `resource_group`.
- `environment`.

Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- `trigger:project`.
- `trigger:include`.

Example of `trigger`:
trigger-multi-project-pipeline:
trigger: my-group/my-project
Additional details:
- You can use `when:manual` in the same job as `trigger`, but you cannot
  use the API to start `when:manual` trigger jobs. See issue 284086
  for more details.
- CI/CD variables defined in a `variables` section (globally) or in the trigger job are forwarded
  to the downstream pipeline as trigger variables. Use `trigger:forward` to control how
  these variables are forwarded to downstream pipelines.
- Environment variables defined in the runner's `config.toml` are not available to trigger jobs and are not passed to downstream pipelines.
- You cannot use `needs:pipeline:job` in a trigger job.

#### trigger:inputs

{{< history >}}
{{< /history >}}
Use trigger:inputs to set the inputs for a multi-project pipeline
when the downstream pipeline configuration uses spec:inputs.
Example of trigger:inputs:
trigger-job:
  trigger:
    project: 'my-group/my-project'
    inputs:
      website: "My website"
#### trigger:include

Use `trigger:include` to declare that a job is a "trigger job" which starts a
child pipeline.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:

- The path of a child pipeline's configuration file.
Example of trigger:include:
trigger-child-pipeline:
trigger:
include: path/to/child-pipeline.gitlab-ci.yml
Additional details:
Use:
- `trigger:include:artifact` to trigger a dynamic child pipeline.
- `trigger:include:inputs` to set the inputs when the downstream pipeline configuration
  uses `spec:inputs`.
- `trigger:include:local` for a path to a child pipeline configuration file when:
  - Combining multiple child pipeline configuration files.
  - Combined with `trigger:include:inputs` to pass inputs to the child pipeline. For example:
staging-job:
trigger:
include:
- local: path/to/child-pipeline.yml
inputs:
environment: staging
- `trigger:include:project` to trigger a child pipeline with a configuration file in a different project.
  If the file contains additional `include` entries, GitLab looks for the files
  in the project running the pipeline, not the project hosting the file.
- `trigger:include:template` to trigger a child pipeline with a CI/CD template.
#### trigger:include:inputs

{{< history >}}
{{< /history >}}
Use trigger:include:inputs to set the inputs for a child pipeline
when the downstream pipeline configuration uses spec:inputs.
Example of `trigger:include:inputs`:
trigger-job:
trigger:
include:
- local: path/to/child-pipeline.yml
inputs:
website: "My website"
#### trigger:project

Use `trigger:project` to declare that a job is a "trigger job" which starts a
multi-project pipeline.
By default, the multi-project pipeline triggers for the default branch. Use trigger:branch
to specify a different branch.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:

- The path of the downstream project.
Example of trigger:project:
trigger-multi-project-pipeline:
trigger:
project: my-group/my-project
Example of trigger:project for a different branch:
trigger-multi-project-pipeline:
trigger:
project: my-group/my-project
branch: development
#### trigger:strategy

{{< history >}}

- `strategy:mirror` option introduced in GitLab 18.2.

{{< /history >}}
Use trigger:strategy to force the trigger job to wait for the downstream pipeline to complete
before it is marked as success.
This behavior is different than the default, which is for the trigger job to be marked as
success as soon as the downstream pipeline is created.
This setting makes your pipeline execution linear rather than parallel.
Supported values:
- `mirror`: Mirrors the status of the downstream pipeline exactly.
- `depend`: Not recommended, use `mirror` instead. The trigger job status shows failed, success,
  or running, depending on the downstream pipeline status. See additional details.

Example of `trigger:strategy`:
trigger_job:
trigger:
include: path/to/child-pipeline.yml
strategy: mirror
In this example, jobs from subsequent stages wait for the triggered pipeline to successfully complete before starting.
Additional details:
- With `strategy:depend` (no longer recommended, use `strategy:mirror` instead):
  - If the downstream pipeline has a failed job with `allow_failure: true`,
    the downstream pipeline is considered successful and the trigger job shows **success**.

#### trigger:forward

{{< history >}}

- Feature flag `ci_trigger_forward_variables` removed.

{{< /history >}}
Use trigger:forward to specify what to forward to the downstream pipeline. You can control
what is forwarded to both parent-child pipelines
and multi-project pipelines.
Forwarded variables do not get forwarded again in nested downstream pipelines by default,
unless the nested downstream trigger job also uses trigger:forward.
Supported values:
- `yaml_variables`: `true` (default), or `false`. When `true`, variables defined
  in the trigger job are passed to downstream pipelines.
- `pipeline_variables`: `true` or `false` (default). When `true`, pipeline variables
  are passed to the downstream pipeline.

Example of `trigger:forward`:
Run this pipeline manually, with
the CI/CD variable MYVAR = my value:
variables: # default variables for each job
VAR: value
# Default behavior:
# - VAR is passed to the child
# - MYVAR is not passed to the child
child1:
trigger:
include: .child-pipeline.yml
# Forward pipeline variables:
# - VAR is passed to the child
# - MYVAR is passed to the child
child2:
trigger:
include: .child-pipeline.yml
forward:
pipeline_variables: true
# Do not forward YAML variables:
# - VAR is not passed to the child
# - MYVAR is not passed to the child
child3:
trigger:
include: .child-pipeline.yml
forward:
yaml_variables: false
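Forwarded variables stop at the first downstream pipeline unless each nested trigger job opts in again. A sketch of a child pipeline that re-forwards to a grandchild (the `.grandchild-pipeline.yml` path is illustrative):

```yaml
# Content of .child-pipeline.yml
grandchild:
  trigger:
    include: .grandchild-pipeline.yml
    forward:
      pipeline_variables: true  # Re-forward pipeline variables received from the parent
```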
Additional details:
- Variables forwarded with `trigger:forward` are pipeline variables,
  which have high precedence. If a variable with the same name is defined in the downstream pipeline,
  that variable is usually overwritten by the forwarded variable.

### when

Use `when` to configure the conditions for when jobs run. If not defined in a job,
the default value is `when: on_success`.
Keyword type: Job keyword. You can use it as part of a job. when: always and when: never can also be used in workflow:rules.
Supported values:
- `on_success` (default): Run the job only when no jobs in earlier stages fail.
- `on_failure`: Run the job only when at least one job in an earlier stage fails.
- `never`: Don't run the job regardless of the status of jobs in earlier stages.
  Can only be used in a `rules` section or `workflow: rules`.
- `always`: Run the job regardless of the status of jobs in earlier stages.
- `manual`: Add the job to the pipeline as a manual job.
- `delayed`: Add the job to the pipeline as a delayed job.

Example of `when`:
stages:
- build
- cleanup_build
- test
- deploy
- cleanup
build_job:
stage: build
script:
- make build
cleanup_build_job:
stage: cleanup_build
script:
- cleanup build when failed
when: on_failure
test_job:
stage: test
script:
- make test
deploy_job:
stage: deploy
script:
- make deploy
when: manual
environment: production
cleanup_job:
stage: cleanup
script:
- cleanup after jobs
when: always
In this example, the script:
- Executes `cleanup_build_job` only when `build_job` fails.
- Always executes `cleanup_job` as the last step in the pipeline regardless of
  success or failure.
- Executes `deploy_job` when you run it manually in the GitLab UI.

Additional details:

- For `on_success` and `on_failure`:
  - Jobs that were allowed to fail with `allow_failure: true` in earlier stages are considered successful, even if they failed.
- The default value of `allow_failure` is `true` with `when: manual`. The default value
  changes to `false` with `rules:when: manual`.

Related topics:

- `when` can be used with `rules` for more dynamic job control.
- `when` can be used with `workflow` to control when a pipeline can start.

### manual_confirmation

{{< history >}}
{{< /history >}}
Use manual_confirmation with when: manual to define a custom confirmation message for manual jobs.
If no manual job is defined with when: manual, this keyword has no effect.
Manual confirmation works with all manual jobs, including environment stop jobs that use
environment:action: stop.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:

- A string with the confirmation message.
Example of manual_confirmation:
delete_job:
stage: post-deployment
script:
- make delete
when: manual
manual_confirmation: 'Are you sure you want to delete this environment?'
stop_production:
stage: cleanup
script:
- echo "Stopping production environment"
environment:
name: production
action: stop
when: manual
manual_confirmation: "Are you sure you want to stop the production environment?"
### start_in

Use `start_in` to delay the execution of a job for a specified duration after the job is created.
You must configure when: delayed for the job.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: A period of time in seconds, minutes, or hours. Must be less than or equal to one week. Examples of valid values:
- `'5'` (5 seconds)
- `'10 seconds'`
- `'30 minutes'`
- `'1 hour'`
- `'1 day'`

Example of `start_in`:
deploy_production:
stage: deploy
script:
- echo "Deploying to production"
when: delayed
start_in: 30 minutes
In this example, the deploy_production job starts 30 minutes after the previous stage completes.
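When a job uses `rules`, the delay must be configured inside the rule rather than at the job level. A sketch, with job and branch names chosen for illustration:

```yaml
deploy_canary:
  stage: deploy
  script:
    - echo "Deploying canary"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: delayed        # `when` and `start_in` live in the rule, not at the job level
      start_in: 10 minutes
```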
Additional details:
- `start_in` only works when `when` is set to `delayed`. If you use any other value for `when`, the configuration is invalid.
- If a job uses `rules`, `start_in` and `when` must be defined in the rules, not at the job level.
  Otherwise, you receive a validation error: `config key may not be used with 'rules': start_in`.
- `start_in` is not supported with `workflow:rules`, but does not cause any syntax violation.

### variables

Use `variables` to define CI/CD variables.
Variables can be defined in a CI/CD job, or as a top-level (global) keyword to define default CI/CD variables for all jobs.
Additional details:
- Use `trigger:forward`
  to forward these variables to downstream pipelines.

#### Job variables

You can use job variables in commands in the job's `script`, `before_script`, or `after_script` sections,
and also with some job keywords. Check the Supported values section of each job keyword
to see if it supports variables.
You cannot use job variables as values for global keywords like
include.
Supported values: Variable name and value pairs:
- The variable name can use only numbers, letters, and underscores (`_`). In some shells,
  the first character must be a letter.

CI/CD variables are supported.
Example of job variables:
review_job:
variables:
DEPLOY_SITE: "https://dev.example.com/"
REVIEW_PATH: "/review"
script:
- deploy-review-script --url $DEPLOY_SITE --path $REVIEW_PATH
In this example:
- `review_job` has `DEPLOY_SITE` and `REVIEW_PATH` job variables defined.
  Both job variables can be used in the `script` section.

#### Default variables

Variables defined in a top-level `variables` section act as default variables
for all jobs.
Each default variable is made available to every job in the pipeline, except when the job already has a variable defined with the same name. The variable defined in the job takes precedence, so the value of the default variable with the same name cannot be used in the job.
Like job variables, you cannot use default variables as values for other global keywords,
like include.
Supported values: Variable name and value pairs:
- The variable name can use only numbers, letters, and underscores (`_`). In some shells,
  the first character must be a letter.

CI/CD variables are supported.
Examples of variables:
variables:
DEPLOY_SITE: "https://example.com/"
deploy_job:
stage: deploy
script:
- deploy-script --url $DEPLOY_SITE --path "/"
environment: production
deploy_review_job:
stage: deploy
variables:
DEPLOY_SITE: "https://dev.example.com/"
REVIEW_PATH: "/review"
script:
- deploy-review-script --url $DEPLOY_SITE --path $REVIEW_PATH
environment: production
In this example:
- `deploy_job` has no variables defined. The default `DEPLOY_SITE` variable is copied to the job
  and can be used in the `script` section.
- `deploy_review_job` already has a `DEPLOY_SITE` variable defined, so the default `DEPLOY_SITE`
  is not copied to the job. The job also has a `REVIEW_PATH` job variable defined.
  Both job variables can be used in the `script` section.

#### variables:description

Use the `description` keyword to define a description for a default variable.
The description displays with the prefilled variable name when running a pipeline manually.
Keyword type: You can only use this keyword with default variables, not job variables.
Supported values:

- A string.
Example of variables:description:
variables:
DEPLOY_NOTE:
description: "The deployment note. Explain the reason for this deployment."
Additional details:
- When used without `value`, the variable exists in pipelines that were not triggered manually,
  and the default value is an empty string (`''`).

#### variables:value

Use the `value` keyword to define a pipeline-level (default) variable's value. When used with
variables: description, the variable value is prefilled when running a pipeline manually.
Keyword type: You can only use this keyword with default variables, not job variables.
Supported values:

- A string.
Example of variables:value:
variables:
DEPLOY_ENVIRONMENT:
value: "staging"
description: "The deployment target. Change this variable to 'canary' or 'production' if needed."
Additional details:
- When used without `variables: description`, the behavior is
  the same as `variables`.

#### variables:options

{{< history >}}
{{< /history >}}
Use variables:options to define an array of values that are selectable in the UI when running a pipeline manually.
Must be used with `variables: value`, and the string defined for `value`:

- Must also be one of the strings in the `options` array.
- Is the default selection.

If there is no `description`,
this keyword has no effect.
Keyword type: You can only use this keyword with default variables, not job variables.
Supported values:

- An array of strings.
Example of variables:options:
variables:
DEPLOY_ENVIRONMENT:
value: "staging"
options:
- "production"
- "staging"
- "canary"
description: "The deployment target. Set to 'staging' by default."
#### variables:expand

{{< history >}}

- Introduced with a flag named `ci_raw_variables_in_yaml_config`. Disabled by default.
- Feature flag `ci_raw_variables_in_yaml_config` removed.

{{< /history >}}
Use the expand keyword to configure a variable to be expandable or not.
Keyword type: You can use this keyword with both default and job variables.
Supported values:
- `true` (default): The variable is expandable.
- `false`: The variable is not expandable.

Example of `variables:expand`:
variables:
VAR1: value1
VAR2: value2 $VAR1
VAR3:
value: value3 $VAR1
expand: false
In this example:

- The result of `VAR2` is `value2 value1`.
- The result of `VAR3` is `value3 $VAR1`.

Additional details:

- The `expand` keyword can only be used with default and job `variables` keywords.
  You can't use it with `rules:variables` or `workflow:rules:variables`.