Using the GitLab-Spamcheck chart

The spamcheck sub-chart provides a deployment of Spamcheck, an anti-spam engine developed by GitLab, originally to combat the rising amount of spam on GitLab.com, and later made public for use in self-managed GitLab instances.

Requirements

This chart depends on access to the GitLab API.
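
For a quick sanity check that the GitLab API is reachable from inside the cluster, you can run a one-off curl pod. The service name, namespace, port, and endpoint below are assumptions based on a release named gitlab in the default namespace; adjust them to match your deployment:

# One-off pod that curls the GitLab readiness endpoint; the service
# name and port are assumptions and may differ in your environment.
kubectl run api-check --rm -i --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://gitlab-webservice-default.default.svc:8181/-/readiness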

Configuration

Enable Spamcheck

Spamcheck is disabled by default. To enable it on your GitLab instance, set the Helm property global.spamcheck.enabled to true, for example:

helm upgrade --force --install gitlab . \
  --set global.hosts.domain='your.domain.com' \
  --set global.hosts.externalIP=XYZ.XYZ.XYZ.XYZ \
  --set certmanager-issuer.email='me@example.com' \
  --set global.spamcheck.enabled=true
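
If you manage configuration through a values file instead of --set flags, the equivalent entry is:

# values.yaml
global:
  spamcheck:
    enabled: true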

Configure GitLab to use Spamcheck

  1. On the top bar, select Menu > Admin.
  2. On the left sidebar, select Settings > Reporting.
  3. Expand Spam and Anti-bot Protection.
  4. Update the Spam Check settings:
    1. Select the Enable Spam Check via external API endpoint checkbox.
    2. For URL of the external Spam Check endpoint, use grpc://gitlab-spamcheck.default.svc:8001, where default is replaced with the Kubernetes namespace where GitLab is deployed (see the lookup example after these steps).
    3. Leave Spam Check API key blank.
  5. Select Save changes.
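
If you are unsure of the exact endpoint to enter, you can look up the Spamcheck service that the chart creates. The command below assumes a release named gitlab deployed to the default namespace:

# Confirm the service name, namespace, and port; the gRPC URL is then
# grpc://<service>.<namespace>.svc:<port>.
kubectl get service gitlab-spamcheck --namespace default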

Installation command line options

The table below contains all the possible chart configurations that can be supplied to the helm install command using --set flags.

| Parameter | Default | Description |
|-----------|---------|-------------|
| annotations | {} | Pod annotations |
| common.labels | {} | Supplemental labels that are applied to all objects created by this chart. |
| deployment.livenessProbe.initialDelaySeconds | 20 | Delay before liveness probe is initiated |
| deployment.livenessProbe.periodSeconds | 60 | How often to perform the liveness probe |
| deployment.livenessProbe.timeoutSeconds | 30 | When the liveness probe times out |
| deployment.livenessProbe.successThreshold | 1 | Minimum consecutive successes for the liveness probe to be considered successful after having failed |
| deployment.livenessProbe.failureThreshold | 3 | Minimum consecutive failures for the liveness probe to be considered failed after having succeeded |
| deployment.readinessProbe.initialDelaySeconds | 0 | Delay before readiness probe is initiated |
| deployment.readinessProbe.periodSeconds | 10 | How often to perform the readiness probe |
| deployment.readinessProbe.timeoutSeconds | 2 | When the readiness probe times out |
| deployment.readinessProbe.successThreshold | 1 | Minimum consecutive successes for the readiness probe to be considered successful after having failed |
| deployment.readinessProbe.failureThreshold | 3 | Minimum consecutive failures for the readiness probe to be considered failed after having succeeded |
| deployment.strategy | {} | Allows one to configure the update strategy used by the deployment. When not provided, the cluster default is used. |
| hpa.behavior | {scaleDown: {stabilizationWindowSeconds: 300 }} | Behavior contains the specifications for up- and downscaling behavior (requires autoscaling/v2beta2 or higher) |
| hpa.customMetrics | [] | Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in targetAverageUtilization) |
| hpa.cpu.targetType | AverageValue | Set the autoscaling CPU target type, must be either Utilization or AverageValue |
| hpa.cpu.targetAverageValue | 100m | Set the autoscaling CPU target value |
| hpa.cpu.targetAverageUtilization | | Set the autoscaling CPU target utilization |
| hpa.memory.targetType | | Set the autoscaling memory target type, must be either Utilization or AverageValue |
| hpa.memory.targetAverageValue | | Set the autoscaling memory target value |
| hpa.memory.targetAverageUtilization | | Set the autoscaling memory target utilization |
| hpa.targetAverageValue | | DEPRECATED: Set the autoscaling CPU target value |
| image.repository | registry.gitlab.com/gitlab-com/gl-security/engineering-and-research/automation-team/spam/spamcheck | Spamcheck image repository |
| logging.level | info | Log level |
| maxReplicas | 10 | HPA maxReplicas |
| maxUnavailable | 1 | HPA maxUnavailable |
| minReplicas | 2 | HPA minReplicas |
| podLabels | {} | Supplemental Pod labels. Not used for selectors. |
| resources.requests.cpu | 100m | Spamcheck minimum CPU |
| resources.requests.memory | 100M | Spamcheck minimum memory |
| securityContext.fsGroup | 1000 | Group ID under which the pod should be started |
| securityContext.runAsUser | 1000 | User ID under which the pod should be started |
| securityContext.fsGroupChangePolicy | | Policy for changing ownership and permission of the volume (requires Kubernetes 1.23) |
| serviceLabels | {} | Supplemental service labels |
| service.externalPort | 8001 | Spamcheck external port |
| service.internalPort | 8001 | Spamcheck internal port |
| service.type | ClusterIP | Spamcheck service type |
| serviceAccount.enabled | false | Flag for using a ServiceAccount |
| serviceAccount.create | false | Flag for creating a ServiceAccount |
| tolerations | [] | Toleration labels for pod assignment |
| extraEnvFrom | {} | List of extra environment variables from other data sources to expose |
| priorityClassName | | Priority class assigned to pods. |
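
For example, to raise the log level and slow down the liveness probe at install time, you can pass the corresponding values with --set. This sketch assumes Spamcheck is deployed as a sub-chart of the GitLab umbrella chart, so its values are nested under the gitlab.spamcheck key, and that the engine accepts a debug log level:

helm upgrade --install gitlab . \
  --set global.spamcheck.enabled=true \
  --set gitlab.spamcheck.logging.level=debug \
  --set gitlab.spamcheck.deployment.livenessProbe.periodSeconds=120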

Chart configuration examples

tolerations

tolerations allow you to schedule pods on tainted worker nodes.

Below is an example use of tolerations:

tolerations:
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"

annotations

annotations allows you to add annotations to the Spamcheck pods. For example:

annotations:
  kubernetes.io/example-annotation: annotation-value
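
To confirm that the annotation was applied, you can inspect a running pod. The label selector below is an assumption and may need adjusting to match the labels your deployment uses:

# Print the annotations of the first matching pod (selector is illustrative).
kubectl get pods -l app=spamcheck \
  -o jsonpath='{.items[0].metadata.annotations}'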

resources

resources allows you to configure the minimum and maximum amount of resources (memory and CPU) a Spamcheck pod can consume.

For example:

resources:
  requests:
    memory: 100M
    cpu: 100m
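
Because resources covers both the minimum (requests) and maximum (limits), a fuller sketch would also set limits. The limit values here are illustrative, not recommendations:

resources:
  requests:
    cpu: 100m      # minimum CPU reserved for a Spamcheck pod
    memory: 100M   # minimum memory reserved for a Spamcheck pod
  limits:
    cpu: 200m      # illustrative CPU ceiling
    memory: 200M   # illustrative memory ceiling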

livenessProbe/readinessProbe

deployment.livenessProbe and deployment.readinessProbe provide a mechanism to help control the termination of Spamcheck Pods in certain scenarios, such as when a container is in a broken state.

For example:

deployment:
  livenessProbe:
    initialDelaySeconds: 10
    periodSeconds: 20
    timeoutSeconds: 3
    successThreshold: 1
    failureThreshold: 10
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 5
    timeoutSeconds: 2
    successThreshold: 1
    failureThreshold: 3
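
If you need to verify how these settings behave in practice, failed probes are recorded as pod events. Replace the placeholder pod name and namespace to match your deployment:

# Recent events, including liveness and readiness probe failures,
# appear at the end of the output.
kubectl describe pod <spamcheck-pod-name> --namespace default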

Refer to the official Kubernetes documentation for additional details regarding this configuration.