Upgrade the GitLab chart
Before upgrading your GitLab installation, you need to check the changelog corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new GitLab chart version.
Upgrades have to follow a supported upgrade path. Because the GitLab chart versions don’t follow the same numbering as GitLab versions, see the version mappings between them.
We also recommend that you take a backup first. Also note that you must provide all values using `helm upgrade --set key=value` syntax or `-f values.yaml` instead of using `--reuse-values`, because some of the current values might be deprecated.

You can retrieve your previous `--set` arguments cleanly with `helm get values <release name>`. If you direct this into a file (`helm get values <release name> > gitlab.yaml`), you can safely pass this file via `-f`: `helm upgrade gitlab gitlab/gitlab -f gitlab.yaml`. This safely replaces the behavior of `--reuse-values`.
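Put together, and assuming a release named `gitlab` installed from the `gitlab/gitlab` chart, the sequence looks like this:

```shell
# Extract the values you previously supplied, then upgrade with them.
helm get values gitlab > gitlab.yaml
helm upgrade gitlab gitlab/gitlab -f gitlab.yaml
```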
Steps
If you're upgrading to the `7.0` version of the chart, follow the manual upgrade steps for 7.0.

If you're upgrading to the `5.0` version of the chart, follow the manual upgrade steps for 5.0.

If you're upgrading to the `4.0` version of the chart, follow the manual upgrade steps for 4.0.

If you're upgrading to an older version of the chart, follow the upgrade steps for older versions.

Before you upgrade, reflect on your set values and whether you've possibly "over-configured" your settings. We expect you to maintain a small list of modified values and to leverage most of the chart defaults. If you've explicitly set a large number of settings by:
- Copying computed settings
- Copying all settings and explicitly defining values that are actually the same as the default values
This will almost certainly cause issues during the upgrade, because the configuration structure could have changed across versions, which causes problems when applying your settings. We cover how to check this in the following steps.
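One way to audit this is to compare your supplied values against the chart defaults (a sketch, assuming a release named `gitlab`; `helm get values` returns only user-supplied values, so any entry identical to the defaults was copied in explicitly):

```shell
# Dump the chart defaults and your user-supplied values, then compare.
helm show values gitlab/gitlab > defaults.yaml
helm get values gitlab > gitlab.yaml
diff defaults.yaml gitlab.yaml
```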
The following are the steps to upgrade GitLab to a newer version:
- Check the change log for the specific version you would like to upgrade to.
- Go through the deployment documentation step by step.
- Extract your previously provided values:

  ```shell
  helm get values gitlab > gitlab.yaml
  ```
- Decide on all the values you need to carry through as you upgrade. GitLab has reasonable default values, and while upgrading, you can attempt to pass in all values from the above command, but it could create a scenario where a configuration has changed across chart versions and it might not map cleanly. We advise keeping a minimal set of values that you want to explicitly set, and passing those during the upgrade process.
- Perform the upgrade, with the values extracted in the previous step:

  ```shell
  helm upgrade gitlab gitlab/gitlab \
    --version <new version> \
    -f gitlab.yaml \
    --set gitlab.migrations.enabled=true \
    --set ...
  ```
During a major database upgrade, we ask you to set `gitlab.migrations.enabled` to `false`. Ensure that you explicitly set it back to `true` for future updates.
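For example, during a major database upgrade, the upgrade command from the previous step becomes (other flags unchanged):

```shell
# Disable migrations for the major database upgrade only;
# set this back to true on the next upgrade.
helm upgrade gitlab gitlab/gitlab \
  --version <new version> \
  -f gitlab.yaml \
  --set gitlab.migrations.enabled=false
```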
Upgrade the bundled PostgreSQL chart
If you are using an external PostgreSQL database (`postgresql.install` is `false`), you do not need to perform this step.

Upgrade the bundled PostgreSQL to version 13
PostgreSQL 13 is supported by GitLab 14.1 and later. PostgreSQL 13 brings significant performance improvements.
To upgrade the bundled PostgreSQL to version 13, the following steps are required:
- Prepare the existing database.
- Delete existing PostgreSQL data.
- Update the `postgresql.image.tag` value to `13.6.0` and reinstall the chart to create a new PostgreSQL 13 database.
- Restore the database.
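In a values file, that `postgresql.image.tag` change corresponds to:

```yaml
postgresql:
  image:
    tag: 13.6.0
```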
Upgrade the bundled PostgreSQL to version 12
As part of the `5.0.0` release of this chart, we upgraded the bundled PostgreSQL version from `11.9.0` to `12.7.0`. This is not a drop-in replacement. Manual steps need to be performed to upgrade the database. The steps are documented in the 5.0 upgrade steps.
Upgrade the bundled PostgreSQL to version 11
As part of the `4.0.0` release of this chart, we upgraded the bundled PostgreSQL chart from `7.7.0` to `8.9.4`. This is not a drop-in replacement. Manual steps need to be performed to upgrade the database. The steps are documented in the 4.0 upgrade steps.
Upgrade to version 7.0
If you're upgrading from a `6.x` version of the chart to the latest `7.0` release, you must first update to the latest `6.11.x` patch release for the upgrade to work. The 7.0 release notes describe the supported upgrade path.

The `7.0.x` release may require manual steps in order to perform the upgrade.
- If you use the bundled `bitnami/redis` sub-chart to provide an in-cluster Redis service, you must manually delete the StatefulSet for Redis before upgrading to version 7.0 of the GitLab chart. Follow the steps in Update the bundled Redis sub-chart below.
Update the bundled Redis sub-chart
Release 7.0 of the GitLab chart updates the bundled `bitnami/redis` sub-chart to version `16.13.2` from the previously installed `11.3.4`. Due to changes in the `matchLabels` applied to the `redis-master` StatefulSet in the sub-chart, upgrading without manually deleting the StatefulSet results in the following error:
```plaintext
Error: UPGRADE FAILED: cannot patch "gitlab-redis-master" with kind StatefulSet: StatefulSet.apps "gitlab-redis-master" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy' and 'minReadySeconds' are forbidden
```
To delete the `RELEASE-redis-master` StatefulSet:
1. Scale down the replicas to `0` for the `webservice`, `sidekiq`, `kas`, and `gitlab-exporter` Deployments:

   ```shell
   kubectl scale deployment --replicas 0 --selector 'app in (webservice, sidekiq, kas, gitlab-exporter)' --namespace <namespace>
   ```
2. Delete the `RELEASE-redis-master` StatefulSet:

   ```shell
   kubectl delete statefulset RELEASE-redis-master --namespace <namespace>
   ```

   Replace `<namespace>` with the namespace where you installed the GitLab chart.
3. Follow the standard upgrade steps. Due to how Helm merges changes, you may need to manually scale up the Deployments you scaled down in step one.
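If the Deployments do not come back up on their own after the upgrade, you can scale them back up with the inverse of the command from step one (a sketch; the replica count of `1` is illustrative, so use the counts your installation normally runs):

```shell
# Scale the previously scaled-down Deployments back up after the upgrade.
kubectl scale deployment --replicas 1 --selector 'app in (webservice, sidekiq, kas, gitlab-exporter)' --namespace <namespace>
```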
Use of global.redis.password
To mitigate a configuration type conflict, we've deprecated the use of `global.redis.password` in favor of `global.redis.auth`.

In addition to displaying a deprecation notice, if you see the following warning message from `helm upgrade`:

```plaintext
coalesce.go:199: warning: destination for password is a table. Ignoring non-table value
```

it is an indication that you are still setting `global.redis.password` in your values file.
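For example, a values excerpt that used the deprecated key moves to the new one like this (a sketch; the secret name and key are illustrative):

```yaml
# Deprecated (pre-7.0):
# global:
#   redis:
#     password:
#       enabled: true
#       secret: gitlab-redis-secret
#       key: redis-password
# Use instead:
global:
  redis:
    auth:
      enabled: true
      secret: gitlab-redis-secret
      key: redis-password
```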
useNewIngressForCerts on Ingresses
If you are upgrading an existing chart from `7.x` to a later version, and are changing `global.ingress.useNewIngressForCerts` to `true`, you must also update any existing cert-manager `Certificate` objects to delete the `acme.cert-manager.io/http01-override-ingress-name` annotation.

You must make this change because, with this attribute set to `false` (the default), the annotation is added to the Certificates, and cert-manager uses it to identify which Ingress method to use for that certificate. The annotation is not automatically removed by only changing this attribute to `true`. Manual action is needed; otherwise, cert-manager keeps using the old behavior for pre-existing Ingresses.
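One way to remove the annotation in bulk is with `kubectl annotate`, where the trailing `-` deletes the annotation. This sketch assumes all Certificates in the release namespace are affected, so review the list first:

```shell
# Remove the override annotation from every Certificate in the namespace.
kubectl annotate certificate --all \
  acme.cert-manager.io/http01-override-ingress-name- \
  --namespace <namespace>
```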
Upgrade to version 6.0
If you're upgrading from a `5.x` version of the chart to the latest `6.0` release, you must first update to the latest `5.10.x` patch release for the upgrade to work. The 6.0 release notes describe the supported upgrade path.

No additional manual changes are required in `6.0`, so once you're on the latest `5.10.x` patch release you can follow the regular upgrade steps.
Upgrade to version 5.9
Sidekiq pod never becomes ready
Upgrading to `5.9.x` may lead to a situation where the Sidekiq pod does not become ready. The pod starts and appears to work properly, but never listens on port `3807`, the default metrics endpoint port (`metrics.port`). As a result, the Sidekiq pod is not considered ready.
This can be resolved from the Admin Area:
- On the left sidebar, at the bottom, select Admin Area.
- Select Settings > Metrics and profiling.
- Expand Metrics - Prometheus.
- Ensure that Enable health and performance metrics endpoint is enabled.
- Restart the affected pods.
There is additional conversation about this scenario in a closed issue.
Upgrade to version 5.5
The `task-runner` chart was renamed to `toolbox`, and the old name was removed in `5.5.0`. As a result, any mention of `task-runner` in your configuration should be renamed to `toolbox`. In version 5.5 and later, use the `toolbox` chart; in version 5.4 and earlier, use the `task-runner` chart.
Missing object storage secret error
Upgrading to 5.5 or newer might cause an error similar to the following:
```plaintext
Error: UPGRADE FAILED: execution error at (gitlab/charts/gitlab/charts/toolbox/templates/deployment.yaml:227:23): A valid backups.objectStorage.config.secret is needed!
```
If the secret mentioned in the error already exists and is correct, then this error is likely because an object storage configuration value still references `task-runner` instead of the new `toolbox`. Rename `task-runner` to `toolbox` in your configuration to fix this.
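For example, a values excerpt from 5.4 or earlier must move under the new key (the secret name and key here are illustrative):

```yaml
# Before (chart 5.4 and earlier):
# gitlab:
#   task-runner:
#     backups:
#       objectStorage:
#         config:
#           secret: storage-config
#           key: config
# After (chart 5.5 and later):
gitlab:
  toolbox:
    backups:
      objectStorage:
        config:
          secret: storage-config
          key: config
```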
There is an open issue about clarifying the error message.
Upgrade to version 5.0
If you're upgrading from a `4.x` version of the chart to the latest `5.0` release, you must first update to the latest `4.12.x` patch release for the upgrade to work. The 5.0 release notes describe the supported upgrade path.

The `5.0.0` release requires manual steps in order to perform the upgrade. If you're using the bundled PostgreSQL, the best way to perform this upgrade is to back up your old database and restore it into a new database instance.
If you are using an external PostgreSQL database, you should first upgrade the database to version 12 or greater. Then follow the standard upgrade steps.
If you are using the bundled PostgreSQL database, you should follow the bundled database upgrade steps.
Troubleshooting 5.0 release upgrade process
- If you see any failure during the upgrade, it may be useful to check the description of the `gitlab-upgrade-check` pod for details:

  ```shell
  kubectl get pods -lrelease=RELEASE,app=gitlab
  kubectl describe pod <gitlab-upgrade-check-pod-full-name>
  ```
Upgrade to version 4.0
The `4.0.0` release requires manual steps in order to perform the upgrade. If you're using the bundled PostgreSQL, the best way to perform this upgrade is to back up your old database and restore it into a new database instance.
If you are using an external PostgreSQL database, you should first upgrade the database to version 11 or greater. Then follow the standard upgrade steps.
If you are using the bundled PostgreSQL database, you should follow the bundled database upgrade steps.
Troubleshooting 4.0 release upgrade process
- If you see any failure during the upgrade, it may be useful to check the description of the `gitlab-upgrade-check` pod for details:

  ```shell
  kubectl get pods -lrelease=RELEASE,app=gitlab
  kubectl describe pod <gitlab-upgrade-check-pod-full-name>
  ```
4.8: Repository data appears to be lost upgrading Praefect
The Praefect chart is not yet considered suitable for production use.
If you have enabled Praefect before upgrading to version 4.8 of the chart (GitLab 13.8), note that the StatefulSet name for Gitaly will now include the virtual storage name.
In version 4.8 of the Praefect chart, the ability to specify multiple virtual storages was added, making it necessary to change the StatefulSet name.
Any existing Praefect-managed Gitaly StatefulSet names (and, therefore, their associated PersistentVolumeClaims) will change as well, leading to repository data appearing to be lost.
Prior to upgrading, ensure that:

- All your repositories are in sync across the Gitaly Cluster, and GitLab is not in use during the upgrade. To check whether the repositories are in sync, run the following command in one of your Praefect pods:

  ```shell
  /usr/local/bin/praefect -config /etc/gitaly/config.toml dataloss
  ```

- You have a complete and tested backup.
Repository data can be restored by following the managing persistent volumes documentation, which provides guidance on reconnecting existing PersistentVolumeClaims to previous PersistentVolumes. A key step of the process is setting the old PersistentVolumes' `persistentVolumeReclaimPolicy` to `Retain`. If this step is missed, actual data loss will likely occur.
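For example, the reclaim policy can be set with a patch like the following, where `<pv-name>` stands in for each old PersistentVolume backing a Gitaly PersistentVolumeClaim:

```shell
# Keep the underlying volume (and its data) even if the claim is deleted.
kubectl patch pv <pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```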
After reviewing the documentation, there is a scripted summary of the procedure in a comment on a related issue.
Having reconnected the PersistentVolumes, it is likely that all your repositories will be set `read-only` by Praefect, as shown by running the following in a Praefect container:

```shell
praefect -config /etc/gitaly/config.toml dataloss
```
If all your Git repositories are in sync across the old persistent volumes, use the `accept-dataloss` procedure for each repository to fix the Gitaly Cluster in Praefect.
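As a sketch run from a Praefect container, the `accept-dataloss` subcommand takes the virtual storage, the repository's relative path, and the storage to treat as authoritative (all flag values below are placeholders):

```shell
# Mark one storage's copy of the repository as the authoritative version.
praefect -config /etc/gitaly/config.toml accept-dataloss \
  -virtual-storage <virtual-storage> \
  -repository <relative-path> \
  -authoritative-storage <storage-name>
```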
We have an issue open to verify that this is the best approach to fixing Praefect.