Contribute to Helm chart development

Our contribution policies can be found in CONTRIBUTING.md

Contributing documentation changes to the charts requires only a text editor. Documentation is stored in the doc/ directory.

Architecture

Before starting development, it is helpful to review the goals, architecture, and design decisions for the charts.

See Architecture of GitLab Helm charts for this information.

Environment setup

See setting up your development environment to prepare your workstation for charts development.

Style guide

See the chart development style guide for guidelines and best practices for chart development.

Writing and running tests

We run several different types of tests to validate that the charts work as intended.

Developing RSpec tests

Unit tests are written in RSpec and stored in the spec/ directory of the chart repository.

Read the notes on creating RSpec tests to validate the functionality of the chart.
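As a quick orientation, the unit tests can typically be run from the repository root with standard Ruby tooling. This is a hedged sketch assuming a Gemfile-based RSpec setup; the narrowed spec path is illustrative, not a real file.

```shell
# Install the Ruby dependencies declared in the repository's Gemfile.
bundle install

# Run the full suite from the spec/ directory of the chart repository.
bundle exec rspec

# Or narrow the run to a single spec file (hypothetical path).
bundle exec rspec spec/charts/example_spec.rb
```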

Running GitLab QA

GitLab QA can be used to run integration and functional tests against a deployed cloud-native GitLab installation.

Read more in the GitLab QA chart docs.
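As a rough illustration, the `gitlab-qa` gem can drive tests against an existing installation. This is a hedged sketch: the image tag, URL, and password value below are placeholders, and the GitLab QA chart docs remain the authoritative source for the exact invocation and required variables.

```shell
# Install the QA orchestration gem.
gem install gitlab-qa

# Point the suite at a running installation (placeholder credentials/URL).
GITLAB_USERNAME=root GITLAB_PASSWORD="$ROOT_PASSWORD" \
  gitlab-qa Test::Instance::Any ee:nightly https://gitlab.example.com
```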

ChaosKube

ChaosKube can be used to test the fault tolerance of highly available cloud-native GitLab installations.

Read more in the ChaosKube chart docs.
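For a sense of what a fault-tolerance run looks like, here is a hedged sketch of invoking ChaosKube directly. ChaosKube defaults to dry-run mode, which only logs the pods it would have deleted; the label selector below is a hypothetical example for a GitLab release, not a value from the chart docs.

```shell
# Dry run: log candidate pods every 10 minutes without deleting anything.
chaoskube \
  --interval=10m \
  --namespaces=default \
  --labels=release=gitlab   # hypothetical selector for the GitLab release

# Once satisfied with the dry-run output, add --no-dry-run to actually
# delete pods and exercise the installation's fault tolerance.
```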

ClickHouse

See the instructions for configuring an external ClickHouse server with GitLab.

Versioning and Release

Details on the version scheme, branching, and tags can be found in the release document.

Changelog Entries

All CHANGELOG.md entries should be created via the changelog entries workflow.

Pipelines

GitLab CI pipelines run for:

  • Merge requests
  • Default branch
  • Stable branches
  • Tags

The configuration for these CI pipelines is managed in:

Review apps

We use Review apps in CI to deploy running instances of the Helm Charts and test against them.

We deploy these Review apps to our EKS and GKE clusters, confirm that the Helm release is created successfully, and then run GitLab QA and other RSpec tests.

For merge requests specifically, we make use of vcluster to create ephemeral clusters. This allows us to test against newer versions of Kubernetes more quickly due to the ease of configuration and simplified environments that do not include External DNS or Cert Manager dependencies. In this case, we simply deploy the Helm Charts, confirm the release was created successfully, and validate that Webservice is in the Ready state. This approach takes advantage of Kubernetes readiness probes to ensure that the application is in a healthy state. See issue 5013 for more information on our vcluster implementation plan.

When to fork upstream charts

No changes, no fork

Any chart that does not require changes to function for our use should not be forked into this repository.

Guidelines for forking

Sensitive information

If a given chart expects that sensitive communication secrets, such as passwords or cryptographic keys, will be presented from within the environment, we prefer to use initContainers.

Extending functionality

There are some cases where we need to extend the functionality of a chart in a way that an upstream may not accept.

Handling configuration deprecations

There are times in development when a change in behavior requires a functionally breaking change. We try to avoid such changes, but some items cannot be handled without one.

To handle this, we have implemented the deprecations template. This template is designed to recognize properties that need to be replaced or relocated, and to inform the user of the actions they need to take. It compiles all messages into a list, and then stops the deployment via a fail call. This informs the user of every problem at once while preventing deployment of the chart in a broken or unexpected state.

See the documentation of the deprecations template for further information on the design, functionality, and how to add new deprecations.
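The aggregation behavior described above can be sketched in plain shell, independent of Helm. This is a minimal illustration of the pattern only; the property names are hypothetical, not real chart settings.

```shell
# Collect every deprecation message first, then stop once with the full
# list, instead of failing on the first problem encountered.
collect_deprecations() {
  messages=""
  messages="${messages}DEPRECATED: gitlab.oldProperty has moved to gitlab.newProperty\n"
  messages="${messages}DEPRECATED: registry.legacySetting has been removed\n"
  if [ -n "$messages" ]; then
    printf "%b" "$messages"
    return 1   # mirrors the template's `fail` call stopping the deployment
  fi
}

collect_deprecations || echo "deployment stopped"
```

The key design point mirrored here is that all checks run before the single failure, so a user fixing their values sees every required action in one pass.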

Attempt to catch problematic configurations

Due to the complexity of these charts and their level of flexibility, there are some overlaps where it is possible to produce a configuration that would lead to an unpredictable or entirely non-functional deployment. In an effort to prevent known-problematic combinations of settings, we have the following two patterns in place:

  • We use schema validations for all our sub-charts to ensure the user-specified values meet expectations. See the documentation to learn more.
  • We implement template logic designed to detect and warn the user that their configuration will not work. See the documentation of the checkConfig template for further information on the design and functionality, and how to add new configuration checks.
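Both guard rails surface at template or install time, so a quick way to exercise them locally is to render the chart with a known-bad value. This is a hedged sketch: the value path below is hypothetical and only illustrates the mechanism.

```shell
# Render the chart with an intentionally invalid value (hypothetical key;
# assume the sub-chart schema expects an integer here).
helm template gitlab gitlab/gitlab \
  --set some.subchart.replicaCount=not-a-number

# Schema violations and checkConfig failures both abort rendering with an
# error message describing the offending configuration.
```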

Verifying registry

In development mode, verifying the Registry with Docker clients can be difficult. This is partly due to issues with the certificate of the Registry. You can either add the certificate or expose the Registry over HTTP (see global.hosts.registry.https). Note that adding the certificate is more secure than the insecure-registry solution.
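A hedged sketch of both routes from the Docker client side follows. The hostname is a placeholder, and note that the `tee` command below overwrites any existing daemon configuration rather than merging into it.

```shell
# More secure route: add the Registry certificate to the Docker trust store.
sudo mkdir -p /etc/docker/certs.d/registry.example.com
sudo cp registry-ca.crt /etc/docker/certs.d/registry.example.com/ca.crt

# Insecure route: allow plain HTTP for this registry host.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["registry.example.com"]
}
EOF
sudo systemctl restart docker
```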

Keep in mind that the Registry uses the external domain name of the MinIO service (see global.hosts.minio.name). You may encounter an error when using internal domain names, for example with custom TLDs for a development environment. The common symptom is that you can log in to the Registry but cannot push or pull images. This is generally because the Registry container(s) cannot resolve the MinIO domain name to find the correct endpoint (you can see the errors in the container logs).
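One way to confirm this symptom is to test name resolution from inside a Registry pod. This is a hedged sketch: the deployment name and hostname are placeholders (use the value of global.hosts.minio.name for your installation), and the minimal Registry image may not ship `nslookup`.

```shell
# Check whether the Registry pod can resolve the MinIO external hostname
# (falls back to getent if nslookup is not present in the image).
kubectl exec deploy/gitlab-registry -- nslookup minio.example.com \
  || kubectl exec deploy/gitlab-registry -- getent hosts minio.example.com

# Inspect the Registry logs for MinIO endpoint errors.
kubectl logs deploy/gitlab-registry | grep -i minio
```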

Troubleshooting a development environment

Developers may encounter unique issues while working on new chart features. Refer to the troubleshooting guide for information if your development cluster seems to have strange issues.

Note:
The troubleshooting steps outlined in the link above are for development clusters only. Do not use these procedures in a production environment, or data will be lost.

Additional Helm information

Some information on how the inner workings of Helm behave: