Using the GitLab Pages chart

The gitlab-pages subchart provides a daemon for serving static websites from GitLab projects.

Requirements

This chart depends on access to the Workhorse services, either as part of the complete GitLab chart or provided as an external service reachable from the Kubernetes cluster this chart is deployed onto.

Configuration

The gitlab-pages chart is configured in two parts: Global settings and Chart settings.

Global Settings

We share some common global settings among our charts. See the Globals Documentation for details.

Chart settings

The tables in the following two sections contain all the possible chart configurations that can be supplied to the helm install command using the --set flags.
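
For example, when the Pages subchart is deployed as part of the complete GitLab chart, these settings are nested under the gitlab.gitlab-pages key. A minimal sketch, assuming a release named gitlab installed from the gitlab/gitlab chart (the values shown are illustrative, not recommendations):

helm upgrade --install gitlab gitlab/gitlab \
  --set gitlab.gitlab-pages.hpa.minReplicas=2 \
  --set gitlab.gitlab-pages.metrics.enabled=true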

General settings

Parameter | Default | Description
----------|---------|------------
annotations | | Pod annotations
common.labels | {} | Supplemental labels that are applied to all objects created by this chart.
deployment.strategy | {} | Allows one to configure the update strategy used by the deployment. When not provided, the cluster default is used.
extraEnv | | List of extra environment variables to expose
extraEnvFrom | | List of extra environment variables from other data sources to expose
hpa.behavior | {scaleDown: {stabilizationWindowSeconds: 300}} | Behavior contains the specifications for up- and downscaling behavior (requires autoscaling/v2beta2 or higher)
hpa.customMetrics | [] | Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in targetAverageUtilization)
hpa.cpu.targetType | AverageValue | Set the autoscaling CPU target type, must be either Utilization or AverageValue
hpa.cpu.targetAverageValue | 100m | Set the autoscaling CPU target value
hpa.cpu.targetAverageUtilization | | Set the autoscaling CPU target utilization
hpa.memory.targetType | | Set the autoscaling memory target type, must be either Utilization or AverageValue
hpa.memory.targetAverageValue | | Set the autoscaling memory target value
hpa.memory.targetAverageUtilization | | Set the autoscaling memory target utilization
hpa.minReplicas | 1 | Minimum number of replicas
hpa.maxReplicas | 10 | Maximum number of replicas
hpa.targetAverageValue | | DEPRECATED: Set the autoscaling CPU target value
image.pullPolicy | IfNotPresent | GitLab image pull policy
image.pullSecrets | | Secrets for the image repository
image.repository | registry.gitlab.com/gitlab-org/build/cng/gitlab-pages | GitLab Pages image repository
image.tag | | image tag
init.image.repository | | initContainer image
init.image.tag | | initContainer image tag
init.containerSecurityContext | {} | initContainer specific securityContext
keda.enabled | false | Use KEDA ScaledObjects instead of HorizontalPodAutoscalers
keda.pollingInterval | 30 | The interval to check each trigger on
keda.cooldownPeriod | 300 | The period to wait after the last trigger reported active before scaling the resource back to 0
keda.minReplicaCount | | Minimum number of replicas KEDA will scale the resource down to, defaults to hpa.minReplicas
keda.maxReplicaCount | | Maximum number of replicas KEDA will scale the resource up to, defaults to hpa.maxReplicas
keda.fallback | | KEDA fallback configuration, see the documentation
keda.hpaName | | The name of the HPA resource KEDA will create, defaults to keda-hpa-{scaled-object-name}
keda.restoreToOriginalReplicaCount | | Specifies whether the target resource should be scaled back to the original replica count after the ScaledObject is deleted
keda.behavior | | The specifications for up- and downscaling behavior, defaults to hpa.behavior
keda.triggers | | List of triggers to activate scaling of the target resource, defaults to triggers computed from hpa.cpu and hpa.memory
metrics.enabled | true | If a metrics endpoint should be made available for scraping
metrics.port | 9235 | Metrics endpoint port
metrics.path | /metrics | Metrics endpoint path
metrics.serviceMonitor.enabled | false | If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping, note that enabling this removes the prometheus.io scrape annotations
metrics.serviceMonitor.additionalLabels | {} | Additional labels to add to the ServiceMonitor
metrics.serviceMonitor.endpointConfig | {} | Additional endpoint configuration for the ServiceMonitor
metrics.annotations | | DEPRECATED: Set explicit metrics annotations. Replaced by template content.
metrics.tls.enabled | false | TLS enabled for the metrics endpoint
metrics.tls.secretName | {Release.Name}-pages-metrics-tls | Secret for the metrics endpoint TLS cert and key
podLabels | | Supplemental Pod labels. Will not be used for selectors.
resources.requests.cpu | 900m | GitLab Pages minimum CPU
resources.requests.memory | 2G | GitLab Pages minimum memory
securityContext.fsGroup | 1000 | Group ID under which the pod should be started
securityContext.runAsUser | 1000 | User ID under which the pod should be started
securityContext.fsGroupChangePolicy | | Policy for changing ownership and permission of the volume (requires Kubernetes 1.23)
containerSecurityContext | | Override container securityContext under which the container is started
containerSecurityContext.runAsUser | 1000 | Allow to overwrite the specific security context under which the container is started
service.externalPort | 8090 | GitLab Pages exposed port
service.internalPort | 8090 | GitLab Pages internal port
service.name | gitlab-pages | GitLab Pages service name
service.annotations | | Annotations for all Pages services.
service.primary.annotations | | Annotations for the primary service only.
service.metrics.annotations | | Annotations for the metrics service only.
service.customDomains.annotations | | Annotations for the custom domains service only.
service.customDomains.type | LoadBalancer | Type of service created for handling custom domains
service.customDomains.internalHttpsPort | 8091 | Port where the Pages daemon listens for HTTPS requests
service.customDomains.nodePort.http | | Node Port to be opened for HTTP connections. Valid only if service.customDomains.type is NodePort
service.customDomains.nodePort.https | | Node Port to be opened for HTTPS connections. Valid only if service.customDomains.type is NodePort
service.sessionAffinity | None | Type of the session affinity. Must be either ClientIP or None (this only makes sense for traffic originating from within the cluster)
service.sessionAffinityConfig | | Session affinity config. If service.sessionAffinity == ClientIP, the default session sticky time is 3 hours (10800 seconds)
serviceLabels | {} | Supplemental service labels
tolerations | [] | Toleration labels for pod assignment
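
For example, the deployment.strategy and hpa settings above could be combined in a values file. A minimal sketch with illustrative values, assuming CPU-utilization-based autoscaling is desired:

deployment:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace at most one pod at a time
hpa:
  minReplicas: 2
  maxReplicas: 10
  cpu:
    targetType: Utilization          # scale on utilization rather than AverageValue
    targetAverageUtilization: 75     # illustrative threshold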

Pages specific settings

Parameter | Default | Description
----------|---------|------------
artifactsServerTimeout | 10 | Timeout (in seconds) for a proxied request to the artifacts server
artifactsServerUrl | | API URL to proxy artifact requests to
extraVolumeMounts | | List of extra volume mounts to add
extraVolumes | | List of extra volumes to create
gitlabCache.cleanup | int | See: Pages Global Settings
gitlabCache.expiry | int | See: Pages Global Settings
gitlabCache.refresh | int | See: Pages Global Settings
gitlabClientHttpTimeout | | GitLab API HTTP client connection timeout in seconds
gitlabClientJwtExpiry | | JWT token expiry time in seconds
gitlabRetrieval.interval | int | See: Pages Global Settings
gitlabRetrieval.retries | int | See: Pages Global Settings
gitlabRetrieval.timeout | int | See: Pages Global Settings
gitlabServer | | GitLab server FQDN
headers | [] | Specify any additional HTTP headers that should be sent to the client with each response. Multiple headers can be given as an array, header and value as one string, for example ['my-header: myvalue', 'my-other-header: my-other-value']
insecureCiphers | false | Use default list of cipher suites, which may contain insecure ones like 3DES and RC4
internalGitlabServer | | Internal GitLab server used for API requests
logFormat | json | Log output format
logVerbose | false | Verbose logging
maxConnections | | Limit on the number of concurrent connections to the HTTP, HTTPS, or proxy listeners
maxURILength | | Limit the length of the URI, 0 for unlimited
propagateCorrelationId | | Reuse existing Correlation-ID from the incoming request header X-Request-ID if present
redirectHttp | false | Redirect pages from HTTP to HTTPS
sentry.enabled | false | Enable Sentry reporting
sentry.dsn | | The address for sending Sentry crash reporting to
sentry.environment | | The environment for Sentry crash reporting
serverShutdownTimeout | 30s | GitLab Pages server shutdown timeout
statusUri | | The URL path for a status page
tls.minVersion | | Specifies the minimum SSL/TLS version
tls.maxVersion | | Specifies the maximum SSL/TLS version
useHTTPProxy | false | Use this option when GitLab Pages is behind a Reverse Proxy.
useProxyV2 | false | Force HTTPS request to utilize the PROXYv2 protocol.
zipCache.cleanup | int | See: Zip Serving and Cache Configuration
zipCache.expiration | int | See: Zip Serving and Cache Configuration
zipCache.refresh | int | See: Zip Serving and Cache Configuration
zipOpenTimeout | int | See: Zip Serving and Cache Configuration
zipHTTPClientTimeout | int | See: Zip Serving and Cache Configuration
rateLimitSourceIP | | See: GitLab Pages rate-limits. To enable rate-limiting use extraEnv=["FF_ENFORCE_IP_RATE_LIMITS=true"]
rateLimitSourceIPBurst | | See: GitLab Pages rate-limits
rateLimitDomain | | See: GitLab Pages rate-limits. To enable rate-limiting use extraEnv=["FF_ENFORCE_DOMAIN_RATE_LIMITS=true"]
rateLimitDomainBurst | | See: GitLab Pages rate-limits
rateLimitTLSSourceIP | | See: GitLab Pages rate-limits. To enable rate-limiting use extraEnv=["FF_ENFORCE_IP_TLS_RATE_LIMITS=true"]
rateLimitTLSSourceIPBurst | | See: GitLab Pages rate-limits
rateLimitTLSDomain | | See: GitLab Pages rate-limits. To enable rate-limiting use extraEnv=["FF_ENFORCE_DOMAIN_TLS_RATE_LIMITS=true"]
rateLimitTLSDomainBurst | | See: GitLab Pages rate-limits
serverReadTimeout | 5s | See: GitLab Pages global settings
serverReadHeaderTimeout | 1s | See: GitLab Pages global settings
serverWriteTimeout | 5m | See: GitLab Pages global settings
serverKeepAlive | 15s | See: GitLab Pages global settings
authCookieSessionTimeout | 10m | See: GitLab Pages global settings
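
As an illustration, several of these settings could be set together in the Pages values (nested under gitlab.gitlab-pages when using the complete chart). All values below are examples, not recommendations:

redirectHttp: true            # redirect HTTP requests to HTTPS
maxURILength: 1024            # cap request URI length
artifactsServerTimeout: 30    # seconds
headers:
  - "X-Custom-Header: my-value"   # extra header sent with each response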

Configuring the ingress

This section controls the GitLab Pages Ingress.

Name | Type | Default | Description
-----|------|---------|------------
apiVersion | String | | Value to use in the apiVersion field.
annotations | String | | This field is an exact match to the standard annotations for Kubernetes Ingress.
configureCertmanager | Boolean | false | Toggles the Ingress annotations cert-manager.io/issuer and acme.cert-manager.io/http01-edit-in-place. The acquisition of a TLS certificate for GitLab Pages via cert-manager is disabled because a wildcard certificate acquisition requires a cert-manager Issuer with a DNS01 solver, and the Issuer deployed by this chart only provides an HTTP01 solver. For more information see the TLS requirement for GitLab Pages.
enabled | Boolean | | Setting that controls whether to create Ingress objects for services that support them. When not set, the global.ingress.enabled setting is used.
tls.enabled | Boolean | | When set to false, you disable TLS for the Pages subchart. This is mainly useful for cases in which you cannot use TLS termination at ingress-level, like when you have a TLS-terminating proxy before the Ingress Controller.
tls.secretName | String | | The name of the Kubernetes TLS Secret that contains a valid certificate and key for the pages URL. When not set, global.ingress.tls.secretName is used instead. Defaults to not being set.
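
For example, to serve Pages over TLS with a pre-existing certificate, the Pages values might look like the following sketch; tls-star-pages-example is a placeholder secret name:

ingress:
  enabled: true
  tls:
    enabled: true
    secretName: tls-star-pages-example   # Kubernetes TLS Secret holding the wildcard cert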

Chart configuration examples

extraVolumes

extraVolumes allows you to configure extra volumes chart-wide.

Below is an example use of extraVolumes:

extraVolumes: |
  - name: example-volume
    persistentVolumeClaim:
      claimName: example-pvc

extraVolumeMounts

extraVolumeMounts allows you to configure extra volumeMounts on all containers chart-wide.

Below is an example use of extraVolumeMounts:

extraVolumeMounts: |
  - name: example-volume
    mountPath: /etc/example

Configuring the networkpolicy

This section controls the NetworkPolicy. This configuration is optional and is used to limit Egress and Ingress of the Pods to specific endpoints.

Name | Type | Default | Description
-----|------|---------|------------
enabled | Boolean | false | This setting enables the NetworkPolicy
ingress.enabled | Boolean | false | When set to true, the Ingress network policy will be activated. This will block all Ingress connections unless rules are specified.
ingress.rules | Array | [] | Rules for the Ingress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below
egress.enabled | Boolean | false | When set to true, the Egress network policy will be activated. This will block all Egress connections unless rules are specified.
egress.rules | Array | [] | Rules for the Egress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below

Example Network Policy

The gitlab-pages service requires Ingress connections on ports 80 and 443, and Egress connections for DNS and to the default Workhorse port 8181. This example adds the following network policy:

  • All Ingress requests from the network on TCP 0.0.0.0/0 port 80 and 443 are allowed
  • All Egress requests to the network on UDP 10.0.0.0/8 port 53 are allowed for DNS
  • All Egress requests to the network on TCP 10.0.0.0/8 port 8181 are allowed for Workhorse

Note that the example provided is only an example and may not be complete.

networkpolicy:
  enabled: true
  ingress:
    enabled: true
    rules:
      - from:
        - ipBlock:
            cidr: 0.0.0.0/0
        ports:
          - port: 80
            protocol: TCP
          - port: 443
            protocol: TCP
  egress:
    enabled: true
    rules:
      - to:
        - ipBlock:
            cidr: 10.0.0.0/8
        ports:
          - port: 8181
            protocol: TCP
          - port: 53
            protocol: UDP

TLS access to GitLab Pages

To have TLS access to the GitLab Pages feature you must:

  1. Create a dedicated wildcard certificate for your GitLab Pages domain in this format: *.pages.<yourdomain>.

  2. Create the secret in Kubernetes:

    kubectl create secret tls tls-star-pages-<mysecret> --cert=<path/to/fullchain.pem> --key=<path/to/privkey.pem>
    
  3. Configure GitLab Pages to use this secret:

    gitlab:
      gitlab-pages:
        ingress:
          tls:
            secretName: tls-star-pages-<mysecret>
    
  4. Create a DNS entry in your DNS provider with the name *.pages.<yourdomain> pointing to your LoadBalancer (see the example record below).
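
For instance, assuming the LoadBalancer received the external IP 203.0.113.10 (a documentation-range address used here purely for illustration), the wildcard record could look like this in zone-file syntax:

*.pages.example.com.  300  IN  A  203.0.113.10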

Configuring KEDA

The keda section enables the installation of KEDA ScaledObjects instead of regular HorizontalPodAutoscalers. This configuration is optional and can be used when there is a need for autoscaling based on custom or external metrics.

Most settings default to the values set in the hpa section where applicable.

If the following are true, CPU and memory triggers are added automatically based on the CPU and memory thresholds set in the hpa section:

  • triggers is not set.
  • The corresponding request.cpu.request or request.memory.request setting is also set to a non-zero value.

If no triggers are set, the ScaledObject is not created.

Refer to the KEDA documentation for more details about those settings.

Name | Type | Default | Description
-----|------|---------|------------
enabled | Boolean | false | Use KEDA ScaledObjects instead of HorizontalPodAutoscalers
pollingInterval | Integer | 30 | The interval to check each trigger on
cooldownPeriod | Integer | 300 | The period to wait after the last trigger reported active before scaling the resource back to 0
minReplicaCount | Integer | | Minimum number of replicas KEDA will scale the resource down to, defaults to hpa.minReplicas
maxReplicaCount | Integer | | Maximum number of replicas KEDA will scale the resource up to, defaults to hpa.maxReplicas
fallback | Map | | KEDA fallback configuration, see the documentation
hpaName | String | | The name of the HPA resource KEDA will create, defaults to keda-hpa-{scaled-object-name}
restoreToOriginalReplicaCount | Boolean | | Specifies whether the target resource should be scaled back to the original replica count after the ScaledObject is deleted
behavior | Map | | The specifications for up- and downscaling behavior, defaults to hpa.behavior
triggers | Array | | List of triggers to activate scaling of the target resource, defaults to triggers computed from hpa.cpu and hpa.memory
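
For example, an explicit CPU trigger could replace the computed defaults. A minimal sketch using KEDA's standard cpu scaler (the thresholds and replica counts are illustrative):

keda:
  enabled: true
  minReplicaCount: 2
  maxReplicaCount: 10
  triggers:
    - type: cpu                 # KEDA's built-in CPU scaler
      metricType: Utilization   # scale on average CPU utilization
      metadata:
        value: "75"             # target 75% utilization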