Pipelines for the GitLab project
Pipelines for `gitlab-org/gitlab` (as well as the `dev` instance’s) are configured in the usual `.gitlab-ci.yml`, which itself includes files under `.gitlab/ci/` for easier maintenance.
We’re striving to dogfood GitLab CI/CD features and best practices as much as possible.
Predictive test jobs before a merge request is approved
To reduce the pipeline cost and shorten the job duration, before a merge request is approved, the pipeline runs a predictive set of RSpec & Jest tests that are likely to fail for the merge request changes.
After a merge request has been approved, the pipeline contains the full RSpec & Jest tests. This ensures that all tests have been run before a merge request is merged.
Overview of the GitLab project test dependency
To understand how the predictive test jobs are executed, we need to understand the dependency between GitLab code (frontend and backend) and the respective tests (Jest and RSpec). This dependency can be visualized in the following diagram:
In summary:
- RSpec tests are dependent on the backend code.
- Jest tests are dependent on both frontend and backend code, the latter through the frontend fixtures.
Predictive Tests Dashboards
- https://app.periscopedata.com/app/gitlab/1116767/Test-Intelligence-Accuracy
- https://app.periscopedata.com/app/gitlab/899368/EP---Predictive-testing-analysis
The `detect-tests` CI job
Most CI/CD pipelines for `gitlab-org/gitlab` run a `detect-tests` CI job in the `prepare` stage to detect which backend/frontend tests should be run based on the files that changed in the given MR.
The `detect-tests` job creates several files containing the backend/frontend tests that should be run. Those files are read in subsequent jobs in the pipeline, and only those tests are executed.
RSpec predictive jobs
Determining predictive RSpec test files in a merge request
To identify the RSpec tests that are likely to fail in a merge request, we use static mappings and dynamic mappings.
Static mappings
We use the `test_file_finder` gem, with a static mapping maintained in the `tests.yml` file for special cases that cannot be mapped via coverage tracing (see where it’s used).
The test mappings map each source file to a list of test files that depend on that source file.
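The idea behind a static source-to-test mapping can be sketched as follows. Note that the YAML schema and the rules below are simplified illustrations for this example, not the real `tests.yml` format:

```ruby
require "yaml"

# A simplified stand-in for a tests.yml-style static mapping:
# each entry maps a source-file regexp to a test file.
yaml = <<~YAML
  mapping:
    - source: 'config/routes\\.rb'
      test: 'spec/routing/routing_spec.rb'
    - source: 'db/structure\\.sql'
      test: 'spec/db/schema_spec.rb'
YAML

rules = YAML.safe_load(yaml).fetch("mapping")

# Return the statically mapped test files for a changed file.
def tests_for(rules, changed_file)
  rules.filter_map do |rule|
    rule["test"] if changed_file.match?(/\A#{rule["source"]}\z/)
  end
end

tests_for(rules, "db/structure.sql") # => ["spec/db/schema_spec.rb"]
```

A changed file that matches no rule simply maps to an empty list, so it contributes no predictive tests from the static mapping.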
Dynamic mappings
First, we use the `test_file_finder` gem, with a dynamic mapping strategy from test coverage tracing (generated via the `Crystalball` gem) (see where it’s used).
In addition to `test_file_finder`, we have added several advanced mappings to detect even more tests to run:
- `FindChanges` (!74003): Automatically detect Jest tests to run upon backend changes (via frontend fixtures)
- `PartialToViewsMappings` (#395016): Run view specs when Rails partials included in those views are changed in an MR
- `JsToSystemSpecsMappings` (#386754): Run certain system specs if a JavaScript file was changed in an MR
- `GraphqlBaseTypeMappings` (#386756): If a GraphQL type class changed, we should try to identify the other GraphQL types that potentially include this type, and run their specs
- `ViewToSystemSpecsMappings` (#395017): When a view gets changed, we try to find feature specs that would test that area of the code
- `ViewToJsMappings` (#386719): If a JS file is changed, we should try to identify the system specs that are covering this JS component
- `FindFilesUsingFeatureFlags` (#407366): If a feature flag was changed, we check which Ruby files include that feature flag, and we add them to the list of changed files in the `detect-tests` CI job. The remainder of the job then detects which frontend/backend tests should be run based on those changed files.
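As a toy illustration of how one of these mappings might work, here is a `PartialToViewsMappings`-style rule sketched in Ruby: when a Rails partial changes, find views that render it, and map those views to their view specs. All paths, view contents, and the scanning logic below are invented for the example and are not the real implementation:

```ruby
# A made-up, in-memory "repository" of views and their contents.
VIEWS = {
  "app/views/projects/show.html.haml" => "= render 'shared/alert'",
  "app/views/groups/show.html.haml"   => "= render 'shared/banner'"
}.freeze

def view_specs_for_partial(partial_path, views = VIEWS)
  # "app/views/shared/_alert.html.haml" => "shared/alert"
  ref = partial_path
          .delete_prefix("app/views/")
          .sub(%r{/_}, "/")
          .sub(/\.html\.haml\z/, "")

  views.filter_map do |view_path, content|
    # Keep views that render the changed partial...
    next unless content.include?("render '#{ref}'")

    # ...and map each one to its conventional view spec path.
    view_path.sub("app/views/", "spec/views/") + "_spec.rb"
  end
end

view_specs_for_partial("app/views/shared/_alert.html.haml")
# => ["spec/views/projects/show.html.haml_spec.rb"]
```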
Exceptional cases
In addition, there are a few circumstances where we would always run the full RSpec tests:
- when the `pipeline:run-all-rspec` label is set on the merge request. This label triggers all RSpec tests, including those run in the `as-if-foss` jobs.
- when the `pipeline:mr-approved` label is set on the merge request, and the code changes satisfy the `backend-patterns` rule. Note that this label is assigned by triage automation when the merge request is approved by any reviewer. It is not recommended to apply this label manually.
- when the merge request is created by an automation (for example, Gitaly update or MR targeting a stable branch)
- when the merge request is created in a security mirror
- when any CI configuration file is changed (for example, `.gitlab-ci.yml` or `.gitlab/ci/**/*`)
Have you encountered a problem with backend predictive tests?
If so, have a look at the Engineering Productivity RUNBOOK on predictive tests for instructions on how to act upon predictive test issues. Additionally, if you identified any test selection gaps, let `@gl-quality/eng-prod` know so that we can take the necessary steps to optimize test selection.
Jest predictive jobs
Determining predictive Jest test files in a merge request
To identify the Jest tests that are likely to fail in a merge request, we pass a list of all the changed files into `jest` using the `--findRelatedTests` option.
In this mode, `jest` resolves all the dependencies related to the changed files, which include test files that have these files in the dependency chain.
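Conceptually, this dependency-chain resolution is a reverse traversal of the module import graph. Below is a small illustrative sketch (in Ruby for consistency with the other examples here; Jest’s actual resolver is far more involved, and the file names and graph are invented):

```ruby
require "set"

# A made-up import graph: file => the files it imports.
DEPS = {
  "widget_spec.js" => ["widget.js"],
  "widget.js"      => ["utils.js"],
  "other_spec.js"  => ["other.js"]
}.freeze

def related_tests(changed_files)
  # Invert the graph: file => files that depend on it.
  dependents = Hash.new { |h, k| h[k] = [] }
  DEPS.each { |file, imports| imports.each { |i| dependents[i] << file } }

  # Breadth-first walk from the changed files through their dependents.
  seen = Set.new(changed_files)
  queue = changed_files.dup
  until queue.empty?
    file = queue.shift
    dependents[file].each { |dep| queue << dep if seen.add?(dep) }
  end

  # Keep only the test files reached by the walk.
  seen.select { |f| f.end_with?("_spec.js") }.sort
end

related_tests(["utils.js"]) # => ["widget_spec.js"]
```

Here a change to `utils.js` selects `widget_spec.js` because the spec reaches `utils.js` through `widget.js`, while `other_spec.js` stays unselected.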
Exceptional cases
In addition, there are a few circumstances where we would always run the full Jest tests:
- when the `pipeline:run-all-jest` label is set on the merge request
- when the merge request is created by an automation (for example, Gitaly update or MR targeting a stable branch)
- when the merge request is created in a security mirror
- when a relevant CI configuration file is changed (`.gitlab/ci/rules.gitlab-ci.yml`, `.gitlab/ci/frontend.gitlab-ci.yml`)
- when any frontend dependency file is changed (for example, `package.json`, `yarn.lock`, `config/webpack.config.js`, `config/helpers/**/*.js`)
- when any vendored JavaScript file is changed (for example, `vendor/assets/javascripts/**/*`)

The `rules` definitions for full Jest tests are defined at `.frontend:rules:jest` in `rules.gitlab-ci.yml`.
Have you encountered a problem with frontend predictive tests?
If so, have a look at the Engineering Productivity RUNBOOK on predictive tests for instructions on how to act upon predictive test issues.
Fork pipelines
We run only the predictive RSpec & Jest jobs for fork pipelines, unless the `pipeline:run-all-rspec` label is set on the MR. The goal is to reduce the compute quota consumed by fork pipelines.
See the experiment issue.
Fail-fast job in merge request pipelines
To provide faster feedback when a merge request breaks existing tests, we implemented a fail-fast mechanism.
An `rspec fail-fast` job is added in parallel to all other `rspec` jobs in a merge request pipeline. This job runs the tests that are directly related to the changes in the merge request.
If any of these tests fail, the `rspec fail-fast` job fails, triggering a `fail-pipeline-early` job to run. The `fail-pipeline-early` job:
- Cancels the currently running pipeline and all in-progress jobs.
- Sets the pipeline’s status to `failed`.
For example:
The `rspec fail-fast` job is a no-op if there are more than 10 test files related to the merge request. This prevents the `rspec fail-fast` duration from exceeding the average `rspec` job duration and defeating its purpose.
This number can be overridden by setting a CI/CD variable named `RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD`.
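The gating rule described above can be sketched like this. This is illustrative only; the real logic lives in the project’s CI scripts:

```ruby
# Sketch of the fail-fast gating rule: run the fail-fast job only
# when the number of matching test files is within the threshold,
# with RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD as an override.
DEFAULT_THRESHOLD = 10

def run_fail_fast?(matching_test_files, env = ENV)
  threshold = Integer(env.fetch("RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD", DEFAULT_THRESHOLD))
  matching_test_files.size <= threshold
end

run_fail_fast?(Array.new(5)  { |i| "spec/#{i}_spec.rb" }, {})  # => true
run_fail_fast?(Array.new(11) { |i| "spec/#{i}_spec.rb" }, {})  # => false (no-op)
run_fail_fast?(Array.new(11) { |i| "spec/#{i}_spec.rb" },
               { "RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD" => "20" }) # => true
```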
Re-run previously failed tests in merge request pipelines
In order to reduce the feedback time after resolving failed tests for a merge request, the `rspec rspec-pg14-rerun-previous-failed-tests` and `rspec rspec-ee-pg14-rerun-previous-failed-tests` jobs run the failed tests from the previous MR pipeline.
This was introduced on August 25th 2021, with https://gitlab.com/gitlab-org/gitlab/-/merge_requests/69053.
How it works
- The `detect-previous-failed-tests` job (`prepare` stage) detects the test files associated with failed RSpec jobs from the previous MR pipeline.
- The `rspec rspec-pg14-rerun-previous-failed-tests` and `rspec rspec-ee-pg14-rerun-previous-failed-tests` jobs run the test files gathered by the `detect-previous-failed-tests` job.
Merge Trains
Why do we need to have a “stable” master branch to enable merge trains?
If the master branch is unstable (that is, CI/CD pipelines for the master branch are failing frequently), all of the merge request pipelines that were added AFTER a faulty merge request pipeline would have to be cancelled and added back to the train, which would create a lot of delays if the merge train is long.
How stable does the master branch have to be for us to enable merge trains?
We don’t have a specific number, but we need to have better numbers for flaky tests failures and infrastructure failures (see the Master Broken Incidents RCA Dashboard).
Could we gradually move to merge trains in our CI/CD configuration?
There was a proposal from a contributor, but the approach is not without some downsides: see the original proposal and discussion.
Faster feedback for some merge requests
Broken Master Fixes
When you need to fix a broken `master`, you can add the `pipeline:expedite` label to expedite the pipelines that run on the merge request.
Note that the merge request also needs to have the `master:broken` or `master:foss-broken` label set.
Revert MRs
To make your revert MRs faster, use the revert MR template before you create your merge request. It will apply the `pipeline:expedite` label and others that will expedite the pipelines that run on the merge request.
The `pipeline:expedite` label
When this label is assigned, the following steps of the CI/CD pipeline are skipped:
- The `e2e:package-and-test` job.
- The `rspec:undercoverage` job.
- The entire Review Apps process.
Apply the label to the merge request, and run a new pipeline for the MR.
Test jobs
We have dedicated jobs for each testing level and each job runs depending on the
changes made in your merge request.
If you want to force all the RSpec jobs to run regardless of your changes, you can add the `pipeline:run-all-rspec` label to the merge request.
End-to-end jobs
The `e2e:package-and-test` child pipeline runs end-to-end jobs automatically depending on changes, and is manual in other cases.
See `.qa:rules:package-and-test` in `rules.gitlab-ci.yml` for the specific list of rules.
If you want to force `e2e:package-and-test` to run regardless of your changes, you can add the `pipeline:run-all-e2e` label to the merge request.
Consult the End-to-end Testing dedicated page for more information.
Review app jobs
The `start-review-app-pipeline` child pipeline deploys a Review App and runs end-to-end tests against it automatically depending on changes, and is manual in other cases.
See `.review:rules:start-review-app-pipeline` in `rules.gitlab-ci.yml` for the specific list of rules.
If you want to force a Review App to be deployed regardless of your changes, you can add the `pipeline:run-review-app` label to the merge request.
Consult the Review Apps dedicated page for more information.
As-if-FOSS jobs
The `* as-if-foss` jobs run the GitLab test suite “as if FOSS”, meaning as if the jobs would run in the context of `gitlab-org/gitlab-foss`. These jobs are only created in the following cases:
- when the `pipeline:run-as-if-foss` label is set on the merge request
- when the merge request is created in the `gitlab-org/security/gitlab` project
- when any CI configuration file is changed (for example, `.gitlab-ci.yml` or `.gitlab/ci/**/*`)

The `* as-if-foss` jobs are run in addition to the regular EE-context jobs. They have the `FOSS_ONLY='1'` variable set and get the `ee/` folder removed before the tests start running.
The intent is to ensure that a change doesn’t introduce a failure after `gitlab-org/gitlab` is synced to `gitlab-org/gitlab-foss`.
As-if-JH cross project downstream pipeline
What it is
This pipeline is also called JiHu validation pipeline, and it’s currently allowed to fail. When that happens, follow What to do when the validation pipeline fails.
How we run it
The `start-as-if-jh` job triggers a cross-project downstream pipeline which runs the GitLab test suite “as if JiHu”, meaning as if the pipeline would run in the context of GitLab JH. These jobs are only created in the following cases:
- when changes are made to feature flags
- when the `pipeline:run-as-if-jh` label is set on the merge request

This pipeline runs under the context of a generated branch in the GitLab JH validation project, which is a mirror of the GitLab JH mirror.
The generated branch name is the merge request’s branch name prefixed with `as-if-jh/`. This generated branch is based on the merge request branch, with changes downloaded from the corresponding JH branch applied on top, to make the whole pipeline run as if JiHu.
The intent is to ensure that a change doesn’t introduce a failure after GitLab is synchronized to GitLab JH.
When to consider applying the `pipeline:run-as-if-jh` label
If a Ruby file is renamed and there’s a corresponding `prepend_mod` line, it’s likely that GitLab JH is relying on it and requires a corresponding change to rename the module or class it’s prepending.
Corresponding JH branch
You can create a corresponding JH branch on GitLab JH by appending `-jh` to the branch name. If a corresponding JH branch is found, the as-if-jh pipeline grabs files from that branch, rather than from the default branch `main-jh`.
This is why, when we want to fetch the corresponding JH branch, we should fetch it from the main mirror, rather than from the validation project.
How the as-if-JH pipeline was configured
The whole process looks like this (note that the `sync-as-if-jh-branch` step only happens when there are dependency changes):
Tokens set in the project variables
- `ADD_JH_FILES_TOKEN`: This is a GitLab JH mirror project token with `read_api` permission, to be able to download JiHu files.
- `AS_IF_JH_TOKEN`: This is a GitLab JH validation project token with `write_repository` permission, to push generated `as-if-jh/*` branches.
How we generate the as-if-JH branch
First, the `add-jh-files` job downloads the required JiHu files from the corresponding JH branch, saving them in artifacts. Next, the `prepare-as-if-jh-branch` job creates a new branch from the merge request branch, commits the changes, and finally pushes the branch to the validation project.
Optionally, if the merge request has changes to the dependencies, we have an additional step: the `sync-as-if-jh-branch` job triggers a downstream pipeline on the `as-if-jh-code-sync` branch in the validation project. This job performs the same process as JiHu code-sync, making sure the dependency changes can be brought to the as-if-jh branch prior to running the validation pipeline.
If there are no dependency changes, we don’t run this process.
How we trigger and run the as-if-JH pipeline
After the `as-if-jh/*` branch is prepared and optionally synchronized, the `start-as-if-jh` job triggers a pipeline in the validation project to run the cross-project downstream pipeline.
How the GitLab JH mirror project is set up
The GitLab JH mirror project is private and CI is disabled.
It’s a pull mirror pulling from GitLab JH, mirroring all branches, overriding divergent refs, and triggering no pipelines when the mirror is updated.
The pulling user is `@gitlab-jh-bot`, who is a maintainer in the project. The credentials can be found in the 1Password engineering vault.
No password is used for mirroring because GitLab JH is a public project.
How the GitLab JH validation project is set up
This GitLab JH validation project is public and CI is enabled, with temporary project variables set.
It’s a pull mirror pulling from the GitLab JH mirror, mirroring specific branches: `(master|main-jh)`, overriding divergent refs, and triggering no pipelines when the mirror is updated.
The pulling user is `@gitlab-jh-validation-bot`, who is a maintainer in the project, and also a reporter in the GitLab JH mirror. The credentials can be found in the 1Password engineering vault.
A personal access token from `@gitlab-jh-validation-bot` with `write_repository` permission is used as the password to pull changes from the GitLab JH mirror. The username is set to `gitlab-jh-validation-bot`.
There is also a pipeline schedule to run maintenance pipelines with the variable `SCHEDULE_TYPE` set to `maintenance`, running every day and updating the cache.
The default CI/CD configuration file is also set at `jh/.gitlab-ci.yml` so it runs exactly like GitLab JH.
Additionally, a special branch `as-if-jh-code-sync` is set and protected. Maintainers can push and developers can merge for this branch. We need to let developers merge because we need to let developers trigger pipelines for this branch. This is a compromise until we resolve “Developer-level users no longer able to run pipelines on protected branches”.
It’s used to run `sync-as-if-jh-branch` to synchronize the dependencies when the merge request changed the dependencies. See How we generate the as-if-JH branch for how it works.
Temporary GitLab JH validation project variables
- `BUNDLER_CHECKSUM_VERIFICATION_OPT_IN` is set to `false`. We can remove this variable after JiHu has `jh/Gemfile.checksum` committed. More context can be found at: Setting it to `false` to skip it.
Why do we have both the mirror project and validation project?
We have separate projects for several reasons.
- Security: Previously, we had the mirror project only. However, to fully mitigate a security issue, we had to make the mirror project private.
- Isolation: We want to run JH code in a completely isolated and standalone project. We should not run it under the `gitlab-org` group, which is where the mirror project is. The validation project is completely isolated.
- Cost: We don’t want to connect to JiHuLab.com from each merge request. It is more cost effective to mirror the code from JiHuLab.com to somewhere at GitLab.com, and have our merge requests fetch code from there. This means that the validation project can fetch code from the mirror, rather than from JiHuLab.com. The mirror project periodically fetches from JiHuLab.com.
- Branch separation/security/efficiency: We want to mirror all branches, so that we can fetch the corresponding JH branch from JiHuLab.com. However, we don’t want to overwrite the `as-if-jh-code-sync` branch in the validation project, because we use it to control the validation pipeline and it has access to `AS_IF_JH_TOKEN`. However, we cannot mirror all branches except a single one. See this issue for details. Given this issue, the validation project is set to only mirror `master` and `main-jh`. Technically, we don’t even need those branches, but we do want to keep the repository up-to-date with all the default branches so that, when we push changes from the merge request, we only need to push the changes from the merge request, which can be more efficient.
- Separation of concerns:
  - The validation project only has the following branches:
    - `master` and `main-jh` to keep changes up-to-date.
    - `as-if-jh-code-sync` for dependency synchronization. We should never mirror this.
    - `as-if-jh/*` branches from the merge requests. We should never mirror these.
  - All branches in the mirror project come from JiHuLab.com. We never push anything to the mirror project, nor does it run any pipelines. CI/CD is disabled in the mirror project.
We can consider merging the two projects to simplify the setup and process, but we need to make sure that all of these reasons are no longer concerns.
The `rspec:undercoverage` job
Introduced in GitLab 14.6.
The `rspec:undercoverage` job runs `undercover` to detect, and fail if, any changes introduced in the merge request have zero test coverage.
The `rspec:undercoverage` job obtains coverage data from the `rspec:coverage` job.
If the `rspec:undercoverage` job detects missing coverage due to a CE method being overridden in EE, add the `pipeline:run-as-if-foss` label to the merge request and start a new pipeline.
In the event of an emergency, or a false positive from this job, add the `pipeline:skip-undercoverage` label to the merge request to allow this job to fail.
Troubleshooting `rspec:undercoverage` failures
The `rspec:undercoverage` job has known bugs that can cause false positive failures. You can test coverage locally to determine if it’s safe to apply `pipeline:skip-undercoverage`. For example, using `<spec>` as the name of the test causing the failure:
1. Run `SIMPLECOV=1 bundle exec rspec <spec>`.
1. Run `scripts/undercoverage`.

If these commands return `undercover: ✅ No coverage is missing in latest changes`, then you can apply `pipeline:skip-undercoverage` to bypass pipeline failures.
Test suite parallelization
Our current RSpec test parallelization setup is as follows:
- The `retrieve-tests-metadata` job in the `prepare` stage ensures we have a `knapsack/report-master.json` file:
  - The `knapsack/report-master.json` file is fetched from the latest `main` pipeline which runs `update-tests-metadata` (for now it’s the 2-hourly `maintenance` scheduled master pipeline). If it’s not there, we initialize the file with `{}`.
- Each `[rspec|rspec-ee] [migration|unit|integration|system|geo] n m` job is run with `knapsack rspec` and should have an evenly distributed share of tests:
  - It works because the jobs have access to the `knapsack/report-master.json`, since the “artifacts from all previous stages are passed by default”.
  - The jobs set their own report path to `"knapsack/${TEST_TOOL}_${TEST_LEVEL}_${DATABASE}_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json"`.
  - If knapsack is doing its job, test files that are run should be listed under `Report specs`, not under `Leftover specs`.
- The `update-tests-metadata` job (which only runs on scheduled pipelines for the canonical project) updates the `knapsack/report-master.json` in 2 ways:
  - By default, it takes all the `knapsack/rspec*.json` files and merges them all together into a single `knapsack/report-master.json` file that is saved as an artifact.
  - (Experimental) When the `AVERAGE_KNAPSACK_REPORT` environment variable is set to `true`, instead of merging the reports, the job calculates the average of the test durations between `knapsack/report-master.json` and `knapsack/rspec*.json` to reduce the performance impact from potentially random factors such as spec ordering, runner hardware differences, flaky tests, etc. This experimental approach aims to better predict the duration of each spec file, to distribute load among parallel jobs more evenly so the jobs can finish around the same time.

After that, the next pipeline uses the up-to-date `knapsack/report-master.json` file.
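The two `update-tests-metadata` strategies described above (merge by default, average when `AVERAGE_KNAPSACK_REPORT` is set) can be sketched as follows. The file contents and durations are invented for the example:

```ruby
# Knapsack reports map spec files to measured durations (seconds).
# A previous report-master and two per-job reports, made up:
master  = { "spec/a_spec.rb" => 10.0, "spec/b_spec.rb" => 4.0 }
reports = [
  { "spec/a_spec.rb" => 12.0 },
  { "spec/c_spec.rb" => 6.0 }
]

# Default strategy: merge all per-job reports into one report.
merged = reports.reduce({}, :merge)
# => {"spec/a_spec.rb"=>12.0, "spec/c_spec.rb"=>6.0}

# AVERAGE_KNAPSACK_REPORT=true strategy: average the new durations
# against the previous report-master where a spec appears in both.
averaged = merged.merge(master) { |_file, new_time, old_time| (new_time + old_time) / 2 }
# => {"spec/a_spec.rb"=>11.0, "spec/c_spec.rb"=>6.0, "spec/b_spec.rb"=>4.0}
```

Averaging smooths out one-off slow or fast runs (for example, a spec that happened to land on slower runner hardware), which is the stated goal of the experimental mode.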
Flaky tests
Automatic skipping of flaky tests
We used to skip tests that are known to be flaky, but we stopped doing so since that could lead to an actually broken `master`.
Instead, we introduced a fast-quarantining process to proactively quarantine any flaky test reported in `#master-broken` incidents.
This fast-quarantining process can be disabled by setting the `$FAST_QUARANTINE` variable to `false`.
Automatic retry of failing tests in a separate process
Unless the `$RETRY_FAILED_TESTS_IN_NEW_PROCESS` variable is set to `false` (`true` by default), RSpec tests that failed are automatically retried once in a separate RSpec process. The goal is to get rid of most side-effects from previous tests that may lead to a subsequent test failure.
We keep track of retried tests in the `$RETRIED_TESTS_REPORT_FILE` file saved as an artifact by the `rspec:flaky-tests-report` job.
See the experiment issue.
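The retry flow can be sketched as follows. The failed-example data shape and the helper name are invented for the illustration; the real implementation lives in the project’s scripts:

```ruby
# Sketch: given the failed examples from a finished RSpec run,
# build a fresh `rspec` invocation covering just their files, so
# the retry happens in a brand-new process without the original
# run's side-effects. Honors the opt-out variable described above.
def rerun_command(failed_examples, enabled: ENV.fetch("RETRY_FAILED_TESTS_IN_NEW_PROCESS", "true") == "true")
  return nil unless enabled

  files = failed_examples.map { |ex| ex.fetch("file_path") }.uniq
  return nil if files.empty?

  ["bundle", "exec", "rspec", *files]
end

# A made-up failure report with two failures in the same file:
failed = [
  { "file_path" => "spec/models/user_spec.rb", "id" => "./spec/models/user_spec.rb[1:2]" },
  { "file_path" => "spec/models/user_spec.rb", "id" => "./spec/models/user_spec.rb[1:3]" }
]

rerun_command(failed)
# => ["bundle", "exec", "rspec", "spec/models/user_spec.rb"]
```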
Compatibility testing
By default, we run all tests with the versions that run on GitLab.com.
Other versions (usually one back-compatible version, and one forward-compatible version) should be running in nightly scheduled pipelines.
Exceptions to this general guideline should be motivated and documented.
Ruby versions testing
We’re running Ruby 3.0 on GitLab.com, as well as for the default branch. To prepare for the next Ruby version, we run merge requests in Ruby 3.1.
This took effect when Run merge requests in Ruby 3.1 by default was merged. See the Ruby 3.1 epic for the roadmap to fully make Ruby 3.1 the default.
To make sure both Ruby versions are working, we also run our test suite against both Ruby 3.0 and Ruby 3.1 on dedicated 2-hourly scheduled pipelines.
For merge requests, you can add the `pipeline:run-in-ruby3_0` label to switch the Ruby version to 3.0. When you do this, the test suite will no longer run in Ruby 3.1 (the default for merge requests).
When the pipeline is running in a Ruby version not considered default, an additional job `verify-default-ruby` also runs and always fails, to remind us to remove the label and run in the default Ruby before merging the merge request. At the moment, both Ruby 3.0 and Ruby 3.1 are considered default.
This should let us:
- Test changes for Ruby 3.1
- Make sure it will not break anything when it’s merged into the default branch
PostgreSQL versions testing
Our test suite runs against PostgreSQL 14 as GitLab.com runs on PostgreSQL 14 and Omnibus defaults to PG14 for new installs and upgrades.
We do run our test suite against PostgreSQL 14 on nightly scheduled pipelines.
We also run our test suite against PostgreSQL 13 upon specific database library changes in merge requests and `main` pipelines (with the `rspec db-library-code pg13` job).
Current versions testing
| Where? | PostgreSQL version | Ruby version |
|---|---|---|
| Merge requests | 14 (default version), 13 for DB library changes | 3.1 |
| `master` branch commits | 14 (default version), 13 for DB library changes | 3.0 (default version) |
| `maintenance` scheduled pipelines for the `master` branch (every even-numbered hour) | 14 (default version), 13 for DB library changes | 3.0 (default version) |
| `maintenance` scheduled pipelines for the `ruby3_1` branch (every odd-numbered hour), see below | 14 (default version), 13 for DB library changes | 3.1 |
| `nightly` scheduled pipelines for the `master` branch | 14 (default version), 13, 15 | 3.0 (default version) |
There are 2 pipeline schedules used for testing Ruby 3.1. One triggers a pipeline in the `ruby3_1-sync` branch, which updates the `ruby3_1` branch with the latest `master`; no pipelines are triggered by this push. The other schedule triggers a pipeline in `ruby3_1` 5 minutes after it, which is considered the maintenance schedule to run test suites and update the cache.
The `ruby3_1` branch must not have any changes. The branch is only there to set `RUBY_VERSION` to `3.1` in the maintenance pipeline schedule.
The `gitlab` job in the `ruby3_1-sync` branch uses a `gitlab-org/gitlab` project token with `write_repository` scope and `Maintainer` role with no expiration. The token is stored in the `RUBY3_1_SYNC_TOKEN` variable in `gitlab-org/gitlab`.
Redis versions testing
Our test suite runs against Redis 6 as GitLab.com runs on Redis 6 and Omnibus defaults to Redis 6 for new installs and upgrades.
We do run our test suite against Redis 7 on `nightly` scheduled pipelines, specifically when running forward-compatible PostgreSQL 15 jobs.
Current versions testing
| Where? | Redis version |
|---|---|
| MRs | 6 |
| default branch (non-scheduled pipelines) | 6 |
| `nightly` scheduled pipelines | 7 |
Single database testing
By default, all tests run with multiple databases.
We also run tests with a single database in nightly scheduled pipelines, and in merge requests that touch database-related files.
Single database tests run in two modes:
- Single database with one connection: GitLab connects to all the tables using one connection pool. This runs through all the jobs that end with `-single-db`.
- Single database with two connections: GitLab connects to the `gitlab_main` and `gitlab_ci` database tables using different database connections. This runs through all the jobs that end with `-single-db-ci-connection`.

If you want to force tests to run with a single database, you can add the `pipeline:run-single-db` label to the merge request.
Monitoring
The GitLab test suite is monitored for the `main` branch, and any branch that includes `rspec-profile` in its name.
Logging
- Rails logging to `log/test.log` is disabled by default in CI for performance reasons. To override this setting, provide the `RAILS_ENABLE_TEST_LOG` environment variable.
Pipelines types for merge requests
In general, pipelines for an MR fall into one of the following types (from shorter to longer), depending on the changes made in the MR:
- Documentation pipeline: For MRs that touch documentation.
- Backend pipeline: For MRs that touch backend code.
- Review app pipeline: For MRs that touch frontend code.
- End-to-end pipeline: For MRs that touch code in the `qa/` folder.
A “pipeline type” is an abstract term that mostly describes the “critical path” (that is, the chain of jobs whose individual durations sum to the pipeline’s duration). We use these “pipeline types” in metrics dashboards to detect which types and jobs need to be optimized first.
An MR that touches multiple areas is associated with the longest type applicable. For instance, an MR that touches both backend and frontend would fall into the “Review app” pipeline type, since this type takes longer to finish than the “Backend” pipeline type.
We use the `rules:` and `needs:` keywords extensively to determine the jobs that need to be run in a pipeline. Note that an MR that includes multiple types of changes would have a pipeline that includes jobs from multiple types (for example, a combination of docs-only and code-only pipelines).
Following are graphs of the critical paths for each pipeline type. Jobs that aren’t part of the critical path are omitted.
Documentation pipeline
Backend pipeline
Review app pipeline
End-to-end pipeline
CI configuration internals
See the dedicated CI configuration internals page.
Performance
See the dedicated CI configuration performance page.