Job artifact troubleshooting for administrators

When administering job artifacts, you might encounter the following issues.

Job artifacts using too much disk space

Job artifacts can fill up your disk space more quickly than expected, for example when artifact expiration times are set longer than necessary or when many jobs produce large artifacts.

In these and other cases, identify the projects using the most disk space, determine which artifact types consume the most space, and, if necessary, manually delete job artifacts to reclaim disk space.

Artifacts housekeeping

Artifacts housekeeping is the process that identifies which artifacts are expired and can be deleted.

Housekeeping disabled in GitLab 14.6 to 15.2

Artifact housekeeping was disabled in GitLab 14.6. It was significantly improved in GitLab 14.10, and the changes were backported to patch versions of GitLab 14.6 and later, behind feature flags that were disabled by default. The flags were enabled by default in GitLab 15.3.

If artifacts housekeeping does not seem to be working in GitLab 14.6 to GitLab 15.2, you should check if the feature flags are enabled.

To check if the feature flags are enabled:

  1. Start a Rails console.

  2. Check if the feature flags are enabled.

    • GitLab 14.10 and earlier:

      Feature.enabled?(:ci_detect_wrongly_expired_artifacts, default_enabled: :yaml)
      Feature.enabled?(:ci_update_unlocked_job_artifacts, default_enabled: :yaml)
      Feature.enabled?(:ci_destroy_unlocked_job_artifacts, default_enabled: :yaml)
      
    • GitLab 15.0 and later:

      Feature.enabled?(:ci_detect_wrongly_expired_artifacts)
      Feature.enabled?(:ci_update_unlocked_job_artifacts)
      Feature.enabled?(:ci_destroy_unlocked_job_artifacts)
      
  3. If any of the feature flags are disabled, enable them:

    Feature.enable(:ci_detect_wrongly_expired_artifacts)
    Feature.enable(:ci_update_unlocked_job_artifacts)
    Feature.enable(:ci_destroy_unlocked_job_artifacts)
    

These changes include switching artifacts from unlocked to locked if they should be retained.

Artifacts with unknown status

Artifacts created before housekeeping was updated have a status of unknown. After they expire, these artifacts are not processed by the new housekeeping.

You can check the database to confirm if your instance has artifacts with the unknown status:

  1. Start a database console:

    Linux package (Omnibus)
    sudo gitlab-psql
    
    Helm chart (Kubernetes)
    # Find the toolbox pod
    kubectl --namespace <namespace> get pods -lapp=toolbox
    # Connect to the PostgreSQL console
    kubectl exec -it <toolbox-pod-name> -- /srv/gitlab/bin/rails dbconsole --include-password --database main
    
    Docker
    sudo docker exec -it <container_name> /bin/bash
    gitlab-psql
    
    Self-compiled (source)
    sudo -u git -H psql -d gitlabhq_production
    
  2. Run the following query:

    select expire_at, file_type, locked, count(*) from ci_job_artifacts
    where expire_at is not null and
    file_type != 3
    group by expire_at, file_type, locked having count(*) > 1;
    

If records are returned, then there are artifacts which the housekeeping job is unable to process. For example:

           expire_at           | file_type | locked | count
-------------------------------+-----------+--------+--------
 2021-06-21 22:00:00+00        |         1 |      2 |  73614
 2021-06-21 22:00:00+00        |         2 |      2 |  73614
 2021-06-21 22:00:00+00        |         4 |      2 |   3522
 2021-06-21 22:00:00+00        |         9 |      2 |     32
 2021-06-21 22:00:00+00        |        12 |      2 |    163

Artifacts with locked status 2 are unknown. Check issue #346261 for more details.
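
If you prefer the Rails console over a database console, you can run a rough equivalent of the query above. This is a minimal sketch; it assumes the integer values shown in the output (a locked value of 2 meaning unknown, and file_type 3 excluded as in the SQL):

# Count expiring artifacts with the unknown locked status,
# mirroring the filters in the SQL query above.
Ci::JobArtifact.where(locked: 2).where.not(expire_at: nil).where.not(file_type: 3).count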

Clean up unknown artifacts

The Sidekiq worker that processes all unknown artifacts is enabled by default in GitLab 15.3 and later. It analyzes the artifacts returned by the above database query and determines which should be locked or unlocked. Artifacts are then deleted by that worker if needed.

The worker can be enabled on self-managed instances running GitLab 14.10 and later:

  1. Start a Rails console.

  2. Check if the feature is enabled.

    • GitLab 14.10:

      Feature.enabled?(:ci_job_artifacts_backlog_work, default_enabled: :yaml)
      
    • GitLab 15.0 and later:

      Feature.enabled?(:ci_job_artifacts_backlog_work)
      
  3. Enable the feature, if needed:

    Feature.enable(:ci_job_artifacts_backlog_work)
    

The worker processes 10,000 unknown artifacts every seven minutes, or roughly two million in 24 hours.
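
Based on that rate, you can roughly estimate how long the worker needs to clear a backlog. A minimal sketch for the Rails console, assuming a locked value of 2 identifies unknown artifacts and the default rate of 10,000 artifacts every seven minutes:

# Rough estimate of time to clear the unknown-artifact backlog,
# at the default rate of 10,000 artifacts every 7 minutes.
unknown_count = Ci::JobArtifact.where(locked: 2).count
minutes = (unknown_count / 10_000.0).ceil * 7
puts "About #{minutes} minutes (roughly #{(minutes / 60.0).round(1)} hours)"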

There is a related ci_job_artifacts_backlog_large_loop_limit feature flag which causes the worker to process unknown artifacts in batches that are five times larger. This flag is not recommended for use on self-managed instances.

List projects and builds with artifacts with a specific expiration (or no expiration)

Using a Rails console, you can find projects that have job artifacts with either:

  • No expiration date.
  • An expiration date more than 7 days in the future.

Similar to deleting artifacts, use the following example time frames and alter them as needed:

  • 7.days.from_now
  • 10.days.from_now
  • 2.weeks.from_now
  • 3.months.from_now

Each of the following scripts also limits the search to 50 results with .limit(50), but this number can also be changed as needed:

# Find builds & projects with artifacts that never expire
builds_with_artifacts_that_never_expire = Ci::Build.with_downloadable_artifacts.where(artifacts_expire_at: nil).limit(50)
builds_with_artifacts_that_never_expire.find_each do |build|
  puts "Build with id #{build.id} has artifacts that don't expire and belongs to project #{build.project.full_path}"
end

# Find builds & projects with artifacts that expire after 7 days from today
builds_with_artifacts_that_expire_in_a_week = Ci::Build.with_downloadable_artifacts.where('artifacts_expire_at > ?', 7.days.from_now).limit(50)
builds_with_artifacts_that_expire_in_a_week.find_each do |build|
  puts "Build with id #{build.id} has artifacts that expire at #{build.artifacts_expire_at} and belongs to project #{build.project.full_path}"
end

List projects by total size of job artifacts stored

List the top 20 projects, sorted by the total size of job artifacts stored, by running the following code in the Rails console (sudo gitlab-rails console):

include ActionView::Helpers::NumberHelper
ProjectStatistics.order(build_artifacts_size: :desc).limit(20).each do |s|
  puts "#{number_to_human_size(s.build_artifacts_size)} \t #{s.project.full_path}"
end

You can change the number of projects listed by modifying .limit(20) to the number you want.

List largest artifacts in a single project

List the 50 largest job artifacts in a single project by running the following code in the Rails console (sudo gitlab-rails console):

include ActionView::Helpers::NumberHelper
project = Project.find_by_full_path('path/to/project')
Ci::JobArtifact.where(project: project).order(size: :desc).limit(50).map { |a| puts "ID: #{a.id} - #{a.file_type}: #{number_to_human_size(a.size)}" }

You can change the number of job artifacts listed by modifying .limit(50) to the number you want.

List artifacts in a single project

List the artifacts for a single project, sorted by artifact size. The output includes the:

  • ID of the job that created the artifact
  • artifact size
  • artifact file type
  • artifact creation date
  • on-disk location of the artifact

Run the following code in the Rails console (sudo gitlab-rails console):

p = Project.find_by_id(<project_id>)
arts = Ci::JobArtifact.where(project: p)

arts.order(size: :desc).limit(50).each do |art|
  puts "Job ID: #{art.job_id} - Size: #{art.size}b - Type: #{art.file_type} - Created: #{art.created_at} - File loc: #{art.file}"
end

To change the number of job artifacts listed, change the number in limit(50).

Delete job artifacts from jobs completed before a specific date

caution
These commands remove data permanently from the database and from storage. Before running them, we highly recommend seeking guidance from a Support Engineer, or running them in a test environment with a backup of the instance ready to be restored, just in case.

If you need to manually remove job artifacts associated with multiple jobs while retaining their job logs, this can be done from the Rails console (sudo gitlab-rails console):

  1. Select jobs to be deleted:

    To select all jobs with artifacts for a single project:

    project = Project.find_by_full_path('path/to/project')
    builds_with_artifacts = project.builds.with_downloadable_artifacts
    

    To select all jobs with artifacts across the entire GitLab instance:

    builds_with_artifacts = Ci::Build.with_downloadable_artifacts
    
  2. Delete job artifacts older than a specific date. To preview how many builds are affected before deleting anything, see the count example after these steps.

    note
    This step also erases artifacts that users have chosen to "keep".

    builds_to_clear = builds_with_artifacts.where("finished_at < ?", 1.week.ago)
    builds_to_clear.find_each do |build|
      Ci::JobArtifacts::DeleteService.new(build).execute
      build.update!(artifacts_expire_at: Time.now)
    end
    

    In GitLab 15.3 and earlier, use the following instead:

    builds_to_clear = builds_with_artifacts.where("finished_at < ?", 1.week.ago)
    builds_to_clear.find_each do |build|
      build.artifacts_expire_at = Time.now
      build.erase_erasable_artifacts!
    end
    

    1.week.ago is a Rails ActiveSupport::Duration method which calculates a new date or time in the past. Other valid examples are:

    • 7.days.ago
    • 3.months.ago
    • 1.year.ago

    erase_erasable_artifacts! is a synchronous method, and upon execution the artifacts are immediately removed; they are not scheduled by a background queue.
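
Before running the deletion loop, you can check how many builds match the cutoff without deleting anything. A minimal preview, assuming the builds_with_artifacts variable from step 1:

# Count the builds that would be affected by the deletion loop.
builds_with_artifacts.where("finished_at < ?", 1.week.ago).count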

Delete job artifacts and logs from jobs completed before a specific date

caution
These commands remove data permanently from both the database and disk. Before running them, we highly recommend seeking guidance from a Support Engineer, or running them in a test environment with a backup of the instance ready to be restored, just in case.

If you need to manually remove all job artifacts associated with multiple jobs, including job logs, this can be done from the Rails console (sudo gitlab-rails console):

  1. Select the jobs to be deleted:

    To select jobs with artifacts for a single project:

    project = Project.find_by_full_path('path/to/project')
    builds_with_artifacts = project.builds.with_downloadable_artifacts
    

    To select jobs with artifacts across the entire GitLab instance:

    builds_with_artifacts = Ci::Build.with_downloadable_artifacts
    

    Selecting a large number of rows at one time can cause high memory usage, and the process might eventually be killed with an Out-of-Memory (OOM) error. To avoid this, process the jobs in smaller batches. The find_each call in step 3 already fetches records in batches of 1000 by default; to use a different batch size, pass the batch_size option in that step:

    builds_to_clear.find_each(batch_size: 100) do |build|
      # process each build as shown below
    end
    
  2. Select the user who is shown in the web UI as having erased the job:

    admin_user = User.find_by(username: 'username')
    
  3. Erase the job artifacts and logs older than a specific date:

    builds_to_clear = builds_with_artifacts.where("finished_at < ?", 1.week.ago)
    builds_to_clear.find_each do |build|
      print "Ci::Build ID #{build.id}... "
    
      if build.erasable?
        Ci::BuildEraseService.new(build, admin_user).execute
        puts "Erased"
      else
        puts "Skipped (Nothing to erase or not erasable)"
      end
    end
    

    In GitLab 15.3 and earlier, replace Ci::BuildEraseService.new(build, admin_user).execute with build.erase(erased_by: admin_user).

    1.week.ago is a Rails ActiveSupport::Duration method which calculates a new date or time in the past. Other valid examples are:

    • 7.days.ago
    • 3.months.ago
    • 1.year.ago

Job artifact upload fails with error 500

If you are using object storage for artifacts and a job artifact fails to upload, review:

  • The job log for an error message similar to:

    WARNING: Uploading artifacts as "archive" to coordinator... failed id=12345 responseStatus=500 Internal Server Error status=500 token=abcd1234
    
  • The workhorse log for an error message similar to:

    {"error":"MissingRegion: could not find region configuration","level":"error","msg":"error uploading S3 session","time":"2021-03-16T22:10:55-04:00"}
    

In both cases, you might need to add the region to the job artifact object storage configuration.
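
For example, with the Linux package and consolidated object storage, the region is set in the connection settings in /etc/gitlab/gitlab.rb. This is a minimal sketch assuming AWS S3; the region value is only an example:

# /etc/gitlab/gitlab.rb -- consolidated object storage connection.
# A missing 'region' key is a common cause of the MissingRegion error.
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
  'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
}

After editing /etc/gitlab/gitlab.rb, run sudo gitlab-ctl reconfigure for the change to take effect.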

Job artifact upload fails with 500 Internal Server Error (Missing file)

Bucket names that include folder paths (for example, bucket/path) are not supported with consolidated object storage. If a bucket name has a path in it, you might receive an error similar to:

WARNING: Uploading artifacts as "archive" to coordinator... POST https://gitlab.example.com/api/v4/jobs/job_id/artifacts?artifact_format=zip&artifact_type=archive&expire_in=1+day: 500 Internal Server Error (Missing file)
FATAL: invalid argument

If a job artifact fails to upload with the above error when using consolidated object storage, make sure you are using separate buckets for each data type.
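
For example, with the Linux package and consolidated object storage, each data type gets its own bucket, with no path component in the bucket name. A sketch with example bucket names:

# /etc/gitlab/gitlab.rb -- one dedicated bucket per data type,
# with no folder path inside the bucket name.
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'gitlab-artifacts'
gitlab_rails['object_store']['objects']['lfs']['bucket'] = 'gitlab-lfs'
gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'gitlab-uploads'

A name like gitlab/artifacts (a path inside one shared bucket) is what triggers the error above.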

Job artifacts fail to upload with FATAL: invalid argument when using Windows mount

If you are using a Windows mount with CIFS for job artifacts, you may see an invalid argument error when the runner attempts to upload artifacts:

WARNING: Uploading artifacts as "dotenv" to coordinator... POST https://<your-gitlab-instance>/api/v4/jobs/<JOB_ID>/artifacts: 500 Internal Server Error  id=1296 responseStatus=500 Internal Server Error status=500 token=*****
FATAL: invalid argument

To work around this issue, you can try:

  • Switching to an ext4 mount instead of CIFS.
  • Upgrading to at least Linux kernel 5.15, which contains a number of important bug fixes for CIFS file leases.
  • For older kernels, using the nolease mount option to disable file leasing.

For more information, see the investigation details.

Usage quota shows incorrect artifact storage usage

Introduced in GitLab 14.10.

Sometimes the artifacts storage usage displays an incorrect value for the total storage space used by artifacts. To recalculate the artifact usage statistics for all projects in the instance, run the following Rake task:

gitlab-rake gitlab:refresh_project_statistics_build_artifacts_size[https://example.com/path/file.csv]

The https://example.com/path/file.csv file must list the project IDs for all projects for which you want to recalculate artifact storage usage. Use this format for the file:

PROJECT_ID
1
2
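
One way to generate such a file is from the Rails console. This is a sketch that writes every project ID to a local file in the expected format; the path is only an example, and you must then host the file at a URL your instance can reach:

# Write a CSV of all project IDs in the format shown above.
File.open('/tmp/project_ids.csv', 'w') do |f|
  f.puts 'PROJECT_ID'
  Project.select(:id).find_each { |project| f.puts project.id }
end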

The artifact usage value can fluctuate to 0 while the task is running. After recalculation, usage should display as expected again.