doc/administration/cicd/job_artifacts_troubleshooting.md
# Troubleshooting job artifacts

When administering job artifacts, you might encounter the following issues.
## Artifacts copied with incorrect filenames during migration

Before GitLab 18.6, migrating from remote to local storage could lead to artifacts being copied with incorrect filenames.
For example:

- Expected filename: `path/to/artifacts/2025_10_15/922/485/artifacts.zip`
- Actual filename: `path/to/artifacts/2025_10_15/922/485/4f8681af93715b90c913e507f24b05cc6ca6e` (with no `.zip` extension)

If this happened to your GitLab instance, run:

```shell
gitlab-rake gitlab:artifacts:fix_artifact_filepath
```

This task checks for artifacts in local storage that have an incorrect filename and renames them to the expected filename.
## Job artifacts using too much disk space

Job artifacts can fill up your disk space quicker than expected. Some possible reasons are:

- Artifacts with an `unknown` status might not be processed by automatic cleanup. You can check for these artifacts and clean them up to reclaim disk space.

In these and other cases, identify the projects most responsible for disk space usage, figure out what types of artifacts are using the most space, and in some cases, manually delete job artifacts to reclaim disk space.
Artifacts housekeeping is the process that identifies which artifacts are expired and can be deleted.
### Artifacts with `unknown` status

Some artifacts have a status of `unknown` because the housekeeping system cannot
determine their correct lock status. These artifacts are not processed by automatic
cleanup even after they expire, and can contribute to excessive disk space usage.

To check if your instance has artifacts with `unknown` status:
Start a database console:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-psql
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# Find the toolbox pod
kubectl --namespace <namespace> get pods -lapp=toolbox

# Connect to the PostgreSQL console
kubectl exec -it <toolbox-pod-name> -- /srv/gitlab/bin/rails dbconsole --include-password --database main
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -it <container_name> /bin/bash
gitlab-psql
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H psql -d gitlabhq_production
```
{{< /tab >}}
{{< /tabs >}}
Run the following query:

```sql
select expire_at, file_type, locked, count(*) from p_ci_job_artifacts
where expire_at is not null and
file_type != 3
group by expire_at, file_type, locked having count(*) > 1;
```
If records are returned with a `locked` status of `2`, these are `unknown` artifacts.
For example:
```plaintext
expire_at                      | file_type | locked | count
-------------------------------+-----------+--------+--------
2021-06-21 22:00:00+00         | 1         | 2      | 73614
2021-06-21 22:00:00+00         | 2         | 2      | 73614
2021-06-21 22:00:00+00         | 4         | 2      | 3522
2021-06-21 22:00:00+00         | 9         | 2      | 32
2021-06-21 22:00:00+00         | 12        | 2      | 163
```
If you have `unknown` artifacts, you can set shorter expiration times or manually remove them to reclaim disk space.
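As a cross-check from a Rails console, you can count these artifacts directly. This is a minimal sketch, assuming that `locked: 2` corresponds to the `unknown` status, matching the query above:

```ruby
# Count expired artifacts whose lock status is unknown (locked: 2)
Ci::JobArtifact.where(locked: 2).where.not(expire_at: nil).count
```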
### Clean up `unknown` artifacts

To clean up `unknown` artifacts, you can set shorter expiration times,
which allows the automatic cleanup process to handle them:
1. Start a Rails console.
1. Set the expiration to the current time for `unknown` artifacts:
   ```ruby
   # This marks unknown artifacts for immediate cleanup
   Ci::JobArtifact.where(locked: 2).update_all(expire_at: Time.current)
   ```
The automatic housekeeping process then cleans up these artifacts during its next run.
### `@final` artifacts not deleted from object store

In GitLab 16.1 and later, artifacts are uploaded directly to their final storage location in the `@final` directory, rather than using a temporary location first.

An issue in GitLab 16.1 and 16.2 causes artifacts to not be deleted from object storage when they expire:
the cleanup process for expired artifacts does not remove artifacts from the `@final` directory. This issue is fixed in GitLab 16.3 and later.

Administrators of GitLab instances that ran GitLab 16.1 or 16.2 for some time could see an increase in object storage used by artifacts. Follow this procedure to check for and remove these artifacts.

Removing the files is a two-stage process:

1. Generate a list of orphaned objects in the `@final` directory.
1. Delete the listed objects.

To generate the list of orphaned objects:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
docker exec -it <container-id> bash
gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```

Either write the output to a persistent volume mounted in the container, or copy the output file out of the session when the command completes.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:cleanup:list_orphan_job_artifact_final_objects RAILS_ENV=production
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# Find the toolbox pod
kubectl get pods --namespace <namespace> -lapp=toolbox

# Open a shell in the toolbox pod
kubectl exec -it <toolbox-pod-name> -c toolbox -- bash

gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```

When the command completes, copy the file out of the session onto persistent storage.
{{< /tab >}}
{{< /tabs >}}
The Rake task has some additional features that apply to all types of GitLab deployment:

- Scanning object storage can be interrupted. Progress is recorded in Redis and used to resume scanning artifacts from that point.
- By default, the Rake task generates a CSV file:

  ```plaintext
  /opt/gitlab/embedded/service/gitlab-rails/tmp/orphan_job_artifact_final_objects.csv
  ```

- Set an environment variable to specify a different filename:

  ```shell
  # Packaged GitLab
  sudo su -
  FILENAME='custom_filename.csv' gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
  ```

- If the output file (the default, or the specified file) already exists, the task appends entries to it.

Each row contains the fields `object_path,object_size`, comma-separated, with no header row. For example:

```plaintext
35/13/35135aaa6cc23891b40cb3f378c53a17a1127210ce60e125ccf03efcfdaec458/@final/1a/1a/5abfa4ec66f1cc3b681a4d430b8b04596cbd636f13cdff44277211778f26,201
```
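Before deleting anything, you might want to estimate how much storage the orphaned objects consume. A minimal Ruby sketch that totals the `object_size` column, assuming the default output path:

```ruby
require "csv"

# Sum the object_size field (second column) of the generated CSV
csv_path = "/opt/gitlab/embedded/service/gitlab-rails/tmp/orphan_job_artifact_final_objects.csv"
total_bytes = CSV.foreach(csv_path).sum { |row| row[1].to_i }

puts "Orphaned objects use #{total_bytes} bytes"
```

After reviewing the list, delete the orphaned objects: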
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:delete_orphan_job_artifact_final_objects
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
docker exec -it <container-id> bash
gitlab-rake gitlab:cleanup:delete_orphan_job_artifact_final_objects
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:cleanup:delete_orphan_job_artifact_final_objects RAILS_ENV=production
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# Find the toolbox pod
kubectl get pods --namespace <namespace> -lapp=toolbox

# Open a shell in the toolbox pod
kubectl exec -it <toolbox-pod-name> -c toolbox -- bash

gitlab-rake gitlab:cleanup:delete_orphan_job_artifact_final_objects
```
{{< /tab >}}
{{< /tabs >}}
The following applies to all types of GitLab deployment:

- The input file can be specified with the `FILENAME` variable. By default, the script looks for:

  ```plaintext
  /opt/gitlab/embedded/service/gitlab-rails/tmp/orphan_job_artifact_final_objects.csv
  ```

- A record of the deleted objects is written to a new file:
  - The file is in the same directory as the input file.
  - The filename is prefixed with `deleted_from--`. For example: `deleted_from--orphan_job_artifact_final_objects.csv`.

The rows in the file are `object_path,object_size,object_generation/version`, for example:

```plaintext
35/13/35135aaa6cc23891b40cb3f378c53a17a1127210ce60e125ccf03efcfdaec458/@final/1a/1a/5abfa4ec66f1cc3b681a4d430b8b04596cbd636f13cdff44277211778f26,201,1711616743796587
```
### List projects and builds with artifacts that expire late or never

Using a Rails console, you can find projects that have job artifacts with either:

- No expiration date.
- An expiration date far in the future.

Similar to deleting artifacts, use the following example time frames and alter them as needed:

- `7.days.from_now`
- `10.days.from_now`
- `2.weeks.from_now`
- `3.months.from_now`
- `1.year.from_now`

Each of the following scripts also limits the search to 50 results with `.limit(50)`, but this number can also be changed as needed:
```ruby
# Find builds & projects with artifacts that never expire
builds_with_artifacts_that_never_expire = Ci::Build.with_downloadable_artifacts.where(artifacts_expire_at: nil).limit(50)
builds_with_artifacts_that_never_expire.find_each do |build|
  puts "Build with id #{build.id} has artifacts that don't expire and belongs to project #{build.project.full_path}"
end

# Find builds & projects with artifacts that expire after 7 days from today
builds_with_artifacts_that_expire_in_a_week = Ci::Build.with_downloadable_artifacts.where('artifacts_expire_at > ?', 7.days.from_now).limit(50)
builds_with_artifacts_that_expire_in_a_week.find_each do |build|
  puts "Build with id #{build.id} has artifacts that expire at #{build.artifacts_expire_at} and belongs to project #{build.project.full_path}"
end
```
### List projects by total size of job artifacts

List the top 20 projects, sorted by the total size of job artifacts stored, by running the following code in the Rails console:
```ruby
include ActionView::Helpers::NumberHelper
ProjectStatistics.order(build_artifacts_size: :desc).limit(20).each do |s|
  puts "#{number_to_human_size(s.build_artifacts_size)} \t #{s.project.full_path}"
end
```
You can change the number of projects listed by modifying `.limit(20)` to the number you want.
### List largest job artifacts in a single project

List the 50 largest job artifacts in a single project by running the following code in the Rails console:
```ruby
include ActionView::Helpers::NumberHelper
project = Project.find_by_full_path('path/to/project')
Ci::JobArtifact.where(project: project).order(size: :desc).limit(50).map { |a| puts "ID: #{a.id} - #{a.file_type}: #{number_to_human_size(a.size)}" }
```
You can change the number of job artifacts listed by modifying `.limit(50)` to the number you want.
### List artifacts in a single project

List the artifacts for a single project, sorted by artifact size. The output includes the:

- ID of the job that created the artifact
- Artifact size
- Artifact file type
- Artifact creation date
- Location of the artifact file on disk
```ruby
p = Project.find_by_id(<project_id>)
arts = Ci::JobArtifact.where(project: p)

list = arts.order(size: :desc).limit(50).each do |art|
  puts "Job ID: #{art.job_id} - Size: #{art.size}b - Type: #{art.file_type} - Created: #{art.created_at} - File loc: #{art.file}"
end
```
To change the number of job artifacts listed, change the number in `limit(50)`.
### Delete old builds and artifacts

{{< alert type="warning" >}}

These commands remove data permanently. Before running them in a production environment, you should try them in a test environment first and make a backup of the instance that can be restored if needed.

{{< /alert >}}

#### Delete old artifacts for a project

This step also erases artifacts that users have chosen to keep:
```ruby
project = Project.find_by_full_path('path/to/project')
builds_with_artifacts = project.builds.with_downloadable_artifacts

builds_with_artifacts.where("finished_at < ?", 1.year.ago).each_batch do |batch|
  batch.each do |build|
    Ci::JobArtifacts::DeleteService.new(build).execute
  end

  batch.update_all(artifacts_expire_at: Time.current)
end
```
#### Delete old artifacts instance-wide

This step also erases artifacts that users have chosen to keep:
```ruby
builds_with_artifacts = Ci::Build.with_downloadable_artifacts

builds_with_artifacts.where("finished_at < ?", 1.year.ago).each_batch do |batch|
  batch.each do |build|
    Ci::JobArtifacts::DeleteService.new(build).execute
  end

  batch.update_all(artifacts_expire_at: Time.current)
end
```
#### Delete old job logs and artifacts for a project

```ruby
project = Project.find_by_full_path('path/to/project')
builds = project.builds
admin_user = User.find_by(username: 'username')

builds.where("finished_at < ?", 1.year.ago).each_batch do |batch|
  batch.each do |build|
    print "Ci::Build ID #{build.id}... "

    if build.erasable?
      Ci::BuildEraseService.new(build, admin_user).execute
      puts "Erased"
    else
      puts "Skipped (Nothing to erase or not erasable)"
    end
  end
end
```
#### Delete old job logs and artifacts instance-wide

```ruby
builds = Ci::Build.all
admin_user = User.find_by(username: 'username')

builds.where("finished_at < ?", 1.year.ago).each_batch do |batch|
  batch.each do |build|
    print "Ci::Build ID #{build.id}... "

    if build.erasable?
      Ci::BuildEraseService.new(build, admin_user).execute
      puts "Erased"
    else
      puts "Skipped (Nothing to erase or not erasable)"
    end
  end
end
```
In these examples, `1.year.ago` is a Rails `ActiveSupport::Duration` method.
Start with a long duration to reduce the risk of accidentally deleting artifacts that are still in use.
Rerun the deletion with shorter durations as needed, for example `3.months.ago`, `2.weeks.ago`, or `7.days.ago`.
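To gauge the impact of each cutoff before deleting anything, you can count the matching builds first. A read-only sketch that reuses the scope from the deletion scripts above:

```ruby
# Preview how many builds with downloadable artifacts fall under each cutoff
[1.year.ago, 3.months.ago, 2.weeks.ago, 7.days.ago].each do |cutoff|
  count = Ci::Build.with_downloadable_artifacts.where("finished_at < ?", cutoff).count
  puts "#{count} builds with artifacts finished before #{cutoff.to_date}"
end
```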
The method `erase_erasable_artifacts!` is synchronous, and upon execution the artifacts are immediately removed;
they are not scheduled by a background queue.
### How artifact deletion works

When artifacts are deleted, the process occurs in two phases:

1. `Ci::JobArtifact` records are removed from the database and
   converted to `Ci::DeletedObject` records with a future `pick_up_at` timestamp.
1. The `Ci::ScheduleDeleteObjectsCronWorker` worker
   processes the `Ci::DeletedObject` records and physically removes the files.

The removal is deliberately limited to prevent overwhelming system resources:
each `Ci::DeletedObject` record has a `pick_up_at` timestamp that determines when it becomes
eligible for physical deletion.

For large-scale deletions, the physical cleanup can take a significant amount of time before disk space is fully reclaimed. Cleanup could take several days for very large deletions.
If you need to reclaim disk space quickly, you can expedite artifact deletion.

### Expedite artifact deletion

If you need to reclaim disk space quickly after deleting a large number of artifacts, you can bypass the standard scheduling limitations and expedite the deletion process.

{{< alert type="warning" >}}

These commands put significant load on your system if you are deleting a large number of artifacts.

{{< /alert >}}
```ruby
# Set pick_up_at to the current time on all queued objects
# This marks them for immediate deletion
Ci::DeletedObject.update_all(pick_up_at: Time.current)

# Get the count of objects marked for deletion
Ci::DeletedObject.where("pick_up_at < ?", Time.current).count

# Delete the artifacts from disk
while Ci::DeletedObject.where("pick_up_at < ?", Time.current).count > 0
  Ci::DeleteObjectsService.new.execute
  sleep(10)
end

# Get the count of objects marked for deletion (should now be zero)
Ci::DeletedObject.count
```
### Delete old pipelines

{{< alert type="warning" >}}

These commands remove data permanently. Before running them in a production environment, consider seeking guidance from a Support Engineer. You should also try them in a test environment first and make a backup of the instance that can be restored if needed.

{{< /alert >}}

Deleting a pipeline also removes that pipeline's:

- Job artifacts
- Job logs
- Pipeline and job metadata

Removing job and pipeline metadata can help reduce the size of the CI tables in the database. The CI tables are usually the largest tables in an instance's database.
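Before running either destructive loop below, you might preview how many pipelines match the cutoff. A read-only sketch:

```ruby
# Count pipelines that finished more than a year ago, instance-wide
Ci::Pipeline.where("finished_at < ?", 1.year.ago).count

# The same count scoped to a single project
project = Project.find_by_full_path('path/to/project')
project.ci_pipelines.where("finished_at < ?", 1.year.ago).count
```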
#### Delete old pipelines for a project

```ruby
project = Project.find_by_full_path('path/to/project')
user = User.find(1)

project.ci_pipelines.where("finished_at < ?", 1.year.ago).each_batch do |batch|
  batch.each do |pipeline|
    puts "Erasing pipeline #{pipeline.id}"
    Ci::DestroyPipelineService.new(pipeline.project, user).execute(pipeline)
  end
end
```
#### Delete old pipelines instance-wide

```ruby
user = User.find(1)

Ci::Pipeline.where("finished_at < ?", 1.year.ago).each_batch do |batch|
  batch.each do |pipeline|
    puts "Erasing pipeline #{pipeline.id} for project #{pipeline.project_id}"
    Ci::DestroyPipelineService.new(pipeline.project, user).execute(pipeline)
  end
end
```
## Job artifacts fail to upload with error 500

If you are using object storage for artifacts and a job artifact fails to upload, review:

- The job log for an error message similar to:

  ```plaintext
  WARNING: Uploading artifacts as "archive" to coordinator... failed id=12345 responseStatus=500 Internal Server Error status=500 token=abcd1234
  ```

- The workhorse log for an error message similar to:

  ```json
  {"error":"MissingRegion: could not find region configuration","level":"error","msg":"error uploading S3 session","time":"2021-03-16T22:10:55-04:00"}
  ```

In both cases, you might need to add `region` to the job artifact object storage configuration.
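For example, on the Linux package with consolidated object storage, the region belongs in the connection settings in `/etc/gitlab/gitlab.rb`. This is a sketch with placeholder values; adjust the provider, region, and credentials for your environment:

```ruby
# /etc/gitlab/gitlab.rb: placeholder connection settings
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
  'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
}
```

Run `sudo gitlab-ctl reconfigure` after editing the file.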
## Error: `500 Internal Server Error (Missing file)`

Bucket names that include folder paths are not supported with consolidated object storage.
For example, `bucket/path`. If a bucket name has a path in it, you might receive an error similar to:

```plaintext
WARNING: Uploading artifacts as "archive" to coordinator... POST https://gitlab.example.com/api/v4/jobs/job_id/artifacts?artifact_format=zip&artifact_type=archive&expire_in=1+day: 500 Internal Server Error (Missing file)
FATAL: invalid argument
```

If a job artifact fails to upload due to the previous error when using consolidated object storage, make sure you are using separate buckets for each data type.
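On the Linux package, that means giving each object type a dedicated bucket name with no path component. A sketch with example bucket names:

```ruby
# /etc/gitlab/gitlab.rb: one bucket per object type, no folder paths in the names
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'gitlab-artifacts'
gitlab_rails['object_store']['objects']['lfs']['bucket'] = 'gitlab-lfs'
gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'gitlab-uploads'
```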
## Error: `FATAL: invalid argument` when using Windows mount

If you are using a Windows mount with CIFS for job artifacts, you may see an
`invalid argument` error when the runner attempts to upload artifacts:

```plaintext
WARNING: Uploading artifacts as "dotenv" to coordinator... POST https://<your-gitlab-instance>/api/v4/jobs/<JOB_ID>/artifacts: 500 Internal Server Error id=1296 responseStatus=500 Internal Server Error status=500 token=*****
FATAL: invalid argument
```

To work around this issue, you can try:

- Using the `nolease` mount option to disable file leasing.

For more information, see the investigation details.
## Artifact storage usage displays an incorrect value

Sometimes the artifacts storage usage displays an incorrect value for the total storage space used by artifacts. To recalculate the artifact usage statistics for all projects in the instance, you can run this background script:

```shell
gitlab-rake gitlab:refresh_project_statistics_build_artifacts_size[https://example.com/path/file.csv]
```

The `https://example.com/path/file.csv` file must list the project IDs for
all projects for which you want to recalculate artifact storage usage. Use this format for the file:

```plaintext
PROJECT_ID
1
2
```
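If you want to recalculate statistics for every project on the instance, one way to produce this file is from a Rails console. A sketch; the output path is only an example, and you must still host the file at a URL the Rake task can reach:

```ruby
# Write a PROJECT_ID CSV listing every project on the instance
File.open("/tmp/project_ids.csv", "w") do |file|
  file.puts "PROJECT_ID"
  Project.find_each { |project| file.puts project.id }
end
```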
The artifact usage value can fluctuate to 0 while the script is running. After
recalculation, usage should display as expected again.
## Job artifact flow

The following flow diagrams illustrate how job artifacts work. These diagrams assume object storage is configured for job artifacts.
### Artifact download flow with proxy download disabled

With `proxy_download` set to `false`, GitLab
redirects the runner to download artifacts from object storage with a
pre-signed URL. It is usually faster for runners to fetch from the
source directly, so this configuration is generally recommended. It
should also reduce bandwidth usage because the data does not have to be
fetched by GitLab and sent to the runner. However, it does require
giving runners direct access to object storage.

The request flow looks like:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
    accTitle: Direct artifact download flow
    accDescr: Runner authenticates, gets redirected to object storage, and downloads artifacts directly.
    autonumber
    participant C as Runner
    participant O as Object Storage
    participant W as Workhorse
    participant R as Rails
    participant P as PostgreSQL
    C->>+W: GET /api/v4/jobs/:id/artifacts?direct_download=true
    Note over C,W: gitlab-ci-token@<CI_JOB_TOKEN>
    W-->+R: GET /api/v4/jobs/:id/artifacts?direct_download=true
    Note over W,R: gitlab-ci-token@<CI_JOB_TOKEN>
    R->>P: Look up job for CI_JOB_TOKEN
    R->>P: Find user who triggered job
    R->>R: Does user have :read_build access?
    alt Yes
        R->>W: Send 302 redirect to object storage presigned URL
        R->>C: 302 redirect
        C->>O: GET <presigned URL>
    else No
        R->>W: 401 Unauthorized
        W->>C: 401 Unauthorized
    end
```
In this diagram:

- First, the runner attempts to fetch a job artifact by using the
  `GET /api/v4/jobs/:id/artifacts` endpoint. The runner attaches the
  `direct_download=true` query parameter on the first attempt to indicate
  that it is capable of downloading from object storage directly. Direct
  downloads can be disabled in the runner configuration with the
  `FF_USE_DIRECT_DOWNLOAD` feature flag, which is set to `true` by default.
- The runner sends the GET request using HTTP Basic Authentication
  with the `gitlab-ci-token` username and an auto-generated
  CI/CD job token as the password. This token is generated by GitLab and
  given to the runner at the start of a job.
- The GET request gets passed to the GitLab API, which looks up the token in the database and finds the user who triggered the job.

In steps 5-8:

- If the user has access to the build, then GitLab generates
  a presigned URL and sends a `302 Redirect` with the `Location` set to that
  URL. The runner follows the 302 redirect and downloads the artifacts.
- If the job cannot be found or the user does not have access to the job, then the API returns `401 Unauthorized`.
The runner does not retry the download for certain HTTP status codes.
However, if the runner receives any other status code, such as a `500` error,
it re-attempts to download the artifacts two more times, sleeping 1 second
between each attempt. The subsequent attempts omit `direct_download=true`.
### Artifact download flow with proxy download enabled

If `proxy_download` is `true`, GitLab always fetches the
artifacts from object storage and sends the data to the runner, even if
the runner sends the `direct_download=true` query parameter. Proxy
downloads might be desirable if runners have restricted network access.
The following diagram is similar to the disabled proxy download example,
except at steps 6-9, GitLab does not send a `302 Redirect` to the
runner. Instead, GitLab instructs Workhorse to fetch the data and stream
it back to the runner. From the runner's perspective, the original GET
request to `/api/v4/jobs/:id/artifacts` returns the binary data
directly.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
    accTitle: Proxied artifact download flow
    accDescr: Runner authenticates, GitLab fetches from object storage, and streams artifacts back.
    autonumber
    participant C as Runner
    participant O as Object Storage
    participant W as Workhorse
    participant R as Rails
    participant P as PostgreSQL
    C->>+W: GET /api/v4/jobs/:id/artifacts?direct_download=true
    Note over C,W: gitlab-ci-token@<CI_JOB_TOKEN>
    W-->+R: GET /api/v4/jobs/:id/artifacts?direct_download=true
    Note over W,R: gitlab-ci-token@<CI_JOB_TOKEN>
    R->>P: Look up job for CI_JOB_TOKEN
    R->>P: Find user who triggered job
    R->>R: Does user have :read_build access?
    alt Yes
        R->>W: SendURL with object storage presigned URL
        W->>O: GET <presigned URL>
        O->>W: <artifacts data>
        W->>C: <artifacts data>
    else No
        R->>W: 401 Unauthorized
        W->>C: 401 Unauthorized
    end
```
## Error: `413 Request Entity Too Large`

If the artifacts are too large, the job might fail with the following error:

```plaintext
Uploading artifacts as "archive" to coordinator... too large archive <job-id> responseStatus=413 Request Entity Too Large status=413" at end of a build job on pipeline when trying to store artifacts to <object-storage>.
```

You might need to:

- Increase `client_max_body_size` in the NGINX configuration file.
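On the Linux package, NGINX is managed through `/etc/gitlab/gitlab.rb`, so the equivalent change is the `nginx['client_max_body_size']` setting. A sketch; the size shown is only an example:

```ruby
# /etc/gitlab/gitlab.rb: raise the NGINX request body limit for large artifact uploads
nginx['client_max_body_size'] = '1g'
```

Run `sudo gitlab-ctl reconfigure` to apply the change.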