Provision GitLab Cloud Native Hybrid on AWS EKS

GitLab “Cloud Native Hybrid” is a hybrid of the cloud native technology Kubernetes (EKS) and EC2. While as much of the GitLab application as possible runs in Kubernetes or on AWS services (PaaS), the GitLab service Gitaly must still run on EC2. Gitaly is a layer designed to overcome limitations of the Git binaries in a horizontally scaled architecture. You can read more about why Gitaly was built, and why Git's limitations mean it must currently run on instance compute, in Git Characteristics That Make Horizontal Scaling Difficult.

Amazon provides a managed Kubernetes service offering known as Amazon Elastic Kubernetes Service (EKS).

Tested AWS Bill of Materials by reference architecture size

| GitLab Cloud Native Hybrid Ref Arch | GitLab Baseline Performance Test Results (using the Linux package on instances) | AWS Bill of Materials (BOM) for CNH | AWS Build Performance Testing Results for CNH | CNH Cost Estimate, 3 AZs* |
| --- | --- | --- | --- | --- |
| 2K Linux package installation | 2K Baseline | 2K Cloud Native Hybrid on EKS | GPT Test Results | 1 YR EC2 Compute Savings + 1 YR RDS & ElastiCache RIs (2 AZ cost estimate is in the BOM below) |
| 3K | 3K Baseline | 3K Cloud Native Hybrid on EKS | 3K Full Fixed Scale GPT Test Results; 3K Elastic Auto Scale GPT Test Results | 1 YR EC2 Compute Savings + 1 YR RDS & ElastiCache RIs (2 AZ cost estimate is in the BOM below) |
| 5K | 5K Baseline | 5K Cloud Native Hybrid on EKS | 5K Full Fixed Scale GPT Test Results; 5K AutoScale from 25% GPT Test Results | 1 YR EC2 Compute Savings + 1 YR RDS & ElastiCache RIs |
| 10K | 10K Baseline | 10K Cloud Native Hybrid on EKS | 10K Full Fixed Scale GPT Test Results; 10K Elastic Auto Scale GPT Test Results | 10K 1 YR EC2 Compute Savings + 1 YR RDS & ElastiCache RIs |
| 50K | 50K Baseline | 50K Cloud Native Hybrid on EKS | 50K Full Fixed Scale GPT Test Results; 50K Elastic Auto Scale GPT Test Results | 50K 1 YR EC2 Compute Savings + 1 YR RDS & ElastiCache RIs |

*Cost calculations for actual implementations are a rough guideline with the following considerations:

  • Actual choices about instance types should be based on GPT testing of your configuration.
  • The first year of actual usage will reveal potential savings due to lower than expected usage, especially for ramping migrations where the full loading takes months, so be careful not to commit to savings plans too early or for too long.
  • The cost estimates assume full scale of the Kubernetes cluster nodes 24 x 7 x 365. Savings due to ‘idling scale-in’ are not considered because they are highly dependent on the usage patterns of the specific implementation.
  • Costs for GitLab Runners, data egress, and storage are not included, as they are highly dependent on the configuration of a specific implementation and on development behaviors (for example, frequency of committing or frequency of builds).
  • These estimates will change over time as GitLab tests and optimizes compute choices.

Available Infrastructure as Code for GitLab Cloud Native Hybrid

The GitLab Environment Toolkit (GET) is an effort made by GitLab to create a multi-cloud, multi-GitLab (Linux package installation + Cloud Native Hybrid) toolkit to provision GitLab. GET is developed by GitLab developers and is open to community contributions. GET is where GitLab is investing its resources as the primary option for Infrastructure as Code, and is being actively used in production as a part of GitLab Dedicated.

For more information about the project, see GitLab Environment Toolkit.

The AWS Quick Start for GitLab Cloud Native Hybrid on EKS is developed by AWS, GitLab, and the community that contributes to AWS Quick Starts, whether directly to the GitLab Quick Start or to the underlying Quick Start dependencies GitLab inherits (for example, EKS Quick Start).

GET is recommended for most deployments. The AWS Quick Start can be used if CloudFormation is your preferred IaC language, if integration with AWS services such as Control Tower is desired, if you prefer a UI-driven configuration experience, or when any aspect in the table below is an overriding concern.

note
This automation is in Open Beta. GitLab is working with AWS on resolving the outstanding issues before it is fully released. You can subscribe to this issue to be notified of progress and release announcements: AWS Quick Start for GitLab Cloud Native Hybrid on EKS Status: Beta.

The Beta version deploys Aurora PostgreSQL, but the release version will deploy Amazon RDS PostgreSQL due to known issues with Aurora. All performance testing results will also be redone after this change has been made.
| | AWS Quick Start for GitLab Cloud Native Hybrid on EKS | GitLab Environment Toolkit (GET) |
| --- | --- | --- |
| Overview and Vision | AWS Quick Start | GitLab Environment Toolkit |
| Licensing | Open Source (Apache 2.0) | GitLab Enterprise Edition license (GitLab Premium tier) |
| GitLab Support | GitLab Beta Support | GitLab GA Support |
| GitLab Reference Architecture Compliant | Yes | Yes |
| GitLab Performance Tool (GPT) Tested | Yes | Yes |
| Amazon Well Architected Compliant | Yes (via the Quick Start program) | Critical portions reviewed by AWS |
| Target Cloud Platforms | AWS | AWS, Google, Azure |
| IaC Languages | CloudFormation (Quick Starts) | Terraform, Ansible |
| Community Contributions and Participation (Ecosystem) | GitLab QSG: Getting Started. For QSG dependencies (for example, EKS): substantial | Getting Started |
| Compatible with AWS Meta-Automation Services (via CloudFormation) | AWS Service Catalog (direct import); ServiceNow via an AWS Service Catalog Connector; Jira Service Manager via an AWS Service Catalog Connector; AWS Control Tower (integration); Quick Starts; AWS SaaS Factory | No |
| Results in a Ready-to-Use Instance | Yes | Manual actions or supplemental IaC required |
| **Configuration Features** | | |
| Can deploy the Linux package (non-Kubernetes) | No | Yes |
| Can deploy a single instance by using the Linux package (non-Kubernetes) | No | Yes |
| Complete Internal Encryption | 85%, targeting 100% | Manual |
| AWS GovCloud Support | Yes | TBD |
| No Code Form-Based Deployment User Experience Available | Yes | No |
| Full IaC User Experience Available | Yes | Yes |

Two and Three Zone High Availability

While GitLab Reference Architectures generally encourage three zone redundancy, AWS Quick Starts and the AWS Well Architected framework consider two zone redundancy to be Well Architected. Individual implementations should weigh the costs of two and three zone configurations against their own high availability requirements for a final configuration.

Gitaly Cluster uses a consistency voting system to implement strong consistency between synchronized nodes. Regardless of the number of availability zones implemented, a minimum of three Gitaly and three Praefect nodes is always needed in the cluster to avoid voting stalemates caused by an even number of nodes.
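
To make the quorum arithmetic concrete, here is a small illustrative sketch (not GitLab code): with n voters, strong consistency requires a majority quorum of floor(n/2) + 1, so an even node count tolerates no more failures than the next smaller odd count, and a 50/50 split can stall a vote.

```python
# Illustrative only: the majority-quorum arithmetic behind the
# "minimum of three nodes" rule for Gitaly Cluster voting.

def majority_quorum(nodes: int) -> int:
    """Votes required before a change is considered committed."""
    return nodes // 2 + 1

for n in range(2, 7):
    quorum = majority_quorum(n)
    tolerated = n - quorum  # nodes that can fail while a quorum remains
    print(f"{n} nodes: quorum={quorum}, tolerated failures={tolerated}")

# 2 nodes tolerate 0 failures; 3 and 4 both tolerate only 1.
# Three is therefore the smallest count that survives a node loss,
# and odd counts avoid even-split stalemates.
```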

Streamlined Performance Testing of AWS Quick Start Prepared GitLab Instances

An abbreviated set of performance testing instructions is available for testing a GitLab instance prepared using the AWS Quick Start for GitLab Cloud Native Hybrid on EKS. The instructions assume no familiarity with the GitLab Performance Tool and can be accessed here: Performance Testing an Instance Prepared using AWS Quick Start for GitLab Cloud Native Hybrid on EKS.

AWS GovCloud Support for AWS Quick Start for GitLab CNH on EKS

The AWS Quick Start for GitLab Cloud Native Hybrid on EKS has been tested with GovCloud and works with the following restrictions and understandings:

  • GovCloud does not have public Route 53 hosted zones, so you must set the following parameters (a scripted sketch follows this list):

    | CloudFormation Quick Start form field | CloudFormation parameter | Setting |
    | --- | --- | --- |
    | Create Route 53 hosted zone | CreatedHostedZone | No |
    | Request AWS Certificate Manager SSL certificate | CreateSslCertificate | No |
  • The Quick Start creates public load balancer IPs so that you can easily configure your local hosts file to reach the GitLab GUI when deploying tests. However, you may need to manually alter this if public load balancers are not part of your provisioning plan. We are planning to make non-public load balancers a configuration option (issue: Short Term: Documentation and/or Automation for private GitLab instance with no internet Ingress).
  • As of 2021-08-19, AWS GovCloud has Graviton instances available for Amazon RDS PostgreSQL, but not for ElastiCache Redis.
  • It is challenging to get the Quick Start template to load in GovCloud from the Standard Quick Start URL, so the generic ones are provided here:
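
If you script the stack creation instead of using the CloudFormation console form, the GovCloud parameter settings from the table above can be passed programmatically. The following is a hypothetical sketch using boto3; the stack name and template URL are placeholders, and every other Quick Start parameter still needs a real value:

```python
# Hypothetical sketch: creating the Quick Start stack in GovCloud with
# boto3. StackName and TemplateURL are placeholders, not real values.
# CreatedHostedZone and CreateSslCertificate are the two parameters from
# the table above that must be "No" because GovCloud has no public
# Route 53 hosted zones.
import boto3

cfn = boto3.client("cloudformation", region_name="us-gov-west-1")

cfn.create_stack(
    StackName="gitlab-cnh-eks",  # placeholder
    TemplateURL="https://example.com/gitlab-eks.template.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "CreatedHostedZone", "ParameterValue": "No"},
        {"ParameterKey": "CreateSslCertificate", "ParameterValue": "No"},
        # ...supply every remaining Quick Start parameter here.
    ],
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"],
)
```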

AWS PaaS qualified for all GitLab implementations

For both Linux package and Cloud Native Hybrid implementations, the following GitLab service roles can be performed by AWS services (PaaS). Any PaaS solutions that require preconfigured sizing based on the scale of your instance are also listed in the per-instance-size Bill of Materials lists. PaaS that do not require specific sizing are not repeated in the BOM lists (for example, AWS Certificate Manager).

These services have been tested with GitLab.

Some services, such as log aggregation and outbound email, are not specified by GitLab, but are noted where they are provided.

| GitLab Services | AWS PaaS (Tested) | Provided by AWS Cloud Native Hybrid Quick Start |
| --- | --- | --- |
| **Tested PaaS mentioned in Reference Architectures** | | |
| PostgreSQL Database | Amazon RDS PostgreSQL | Yes |
| Redis Caching | Redis ElastiCache | Yes |
| Gitaly Cluster (Git Repository Storage), including Praefect and PostgreSQL | ASG and Instances | Yes - ASG and instances. Note: Gitaly cannot be put into a Kubernetes cluster. |
| All GitLab storages besides Git Repository Storage (includes Git-LFS, which is S3 compatible) | AWS S3 | Yes |
| **Tested PaaS for supplemental services** | | |
| Front End Load Balancing | AWS ELB | Yes |
| Internal Load Balancing | AWS ELB | Yes |
| Outbound Email Services | AWS Simple Email Service (SES) | Yes |
| Certificate Authority and Management | AWS Certificate Manager (ACM) | Yes |
| DNS | AWS Route53 (tested) | Yes |
| GitLab and Infrastructure Log Aggregation | AWS CloudWatch Logs | Yes (ContainerInsights Agent for EKS) |
| Infrastructure Performance Metrics | AWS CloudWatch Metrics | Yes |
| **Supplemental services and configurations (tested)** | | |
| Prometheus for GitLab | AWS EKS (Cloud Native only) | Yes |
| Grafana for GitLab | AWS EKS (Cloud Native only) | Yes |
| Administrative Access to GitLab Backend | Bastion Host in VPC | Yes - HA, preconfigured for cluster management |
| Encryption (In Transit / At Rest) | AWS KMS | Yes |
| Secrets Storage for Provisioning | AWS Secrets Manager | Yes |
| Configuration Data for Provisioning | AWS Parameter Store | Yes |
| AutoScaling Kubernetes | EKS AutoScaling Agent | Yes |

GitLab Cloud Native Hybrid on AWS

2K Cloud Native Hybrid on EKS

2K Cloud Native Hybrid on EKS Bill of Materials (BOM)

GPT Test Results

  • TBD

Deploy Now

Deploy Now links leverage the AWS Quick Start automation and only pre-populate the number of instances and instance types for the Quick Start, based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the Quick Start documentation’s Deployment steps section.

  • Deploy Now: AWS Quick Start for 2 AZs
  • Deploy Now: AWS Quick Start for 3 AZs
note
On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates. Instead, use the AWS Calculator links in the “GitLab on AWS Compute” table above and customize the estimate with your desired savings plan.

BOM Total: the Bill of Materials total. This is what you use when building this configuration.

Ref Arch Raw Total: the totals if the configuration were built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.

Idle Configuration (Scaled-In): a configuration that can be used to scale in during times of low demand and/or for warm standby Geo instances. It requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.

| Service | Ref Arch Raw (Full Scaled) | AWS BOM | Example Full Scaled Cost (On Demand, US East) |
| --- | --- | --- | --- |
| Webservice | 12 vCPU, 16 GB | | |
| Sidekiq | 2 vCPU, 8 GB | | |
| Supporting services such as NGINX, Prometheus, etc. | 2 vCPU, 8 GB | | |
| GitLab Ref Arch Raw Total K8s Node Capacity | 16 vCPU, 32 GB | | |
| One node for overhead and miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc.) | + 8 vCPU, 16 GB | | |
| **Grand Total w/ Overheads** (minimum hosts = 3) | 24 vCPU, 48 GB | c5.2xlarge (8 vCPU/16 GB) x 3 nodes = 24 vCPU, 48 GB | $1.02/hr |
| **Idle Configuration (Scaled-In)** | 16 vCPU, 32 GB | c5.2xlarge x 2 | $0.68/hr |
note
If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
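
The idle (scaled-in) rows in these tables depend on EKS autoscaling. As a purely hypothetical illustration of pinning a cluster to the 2K idle size (2 x c5.2xlarge) with boto3, assuming EKS managed node groups (the Quick Start may instead use self-managed Auto Scaling groups, where `autoscaling.update_auto_scaling_group` is the analogous call):

```python
# Hypothetical sketch only: pin an EKS managed node group to the 2K
# idle configuration from the table above (2 x c5.2xlarge). Cluster and
# node group names are placeholders. In practice, scale-in should be
# driven by the Cluster Autoscaler plus pod autoscaling, not by one-off
# API calls.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.update_nodegroup_config(
    clusterName="gitlab-cnh",           # placeholder
    nodegroupName="gitlab-node-group",  # placeholder
    scalingConfig={"minSize": 2, "maxSize": 3, "desiredSize": 2},
)
```
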
| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM (directly usable in AWS Quick Start) | Example Cost (US East, 3 AZ) | Example Cost (US East, 2 AZ) |
| --- | --- | --- | --- | --- |
| Bastion Host (Quick Start) | 1 HA instance in ASG | t2.micro for prod, m4.2xlarge for performance testing | | |
| PostgreSQL (Amazon RDS PostgreSQL nodes configuration, GPT tested) | 2 vCPU, 7.5 GB | db.r6g.large x 3 nodes (6 vCPU, 48 GB); tested with Graviton ARM | 3 nodes x $0.26 = $0.78/hr | 3 nodes x $0.26 = $0.78/hr |
| Redis | 1 vCPU, 3.75 GB (across 12 nodes for Redis Cache, Redis Queues/Shared State, Sentinel Cache, Sentinel Queues/Shared State) | cache.m6g.large x 3 nodes (6 vCPU, 19 GB) | 3 nodes x $0.15 = $0.45/hr | 2 nodes x $0.15 = $0.30/hr |
| **Gitaly Cluster Details** (Gitaly & Praefect must have an uneven node count for HA) | | | | |
| Gitaly Instances (in ASG) | 12 vCPU, 45 GB (across 3 nodes) | m5.xlarge x 3 nodes (12 vCPU, 48 GB) | $0.192 x 3 = $0.58/hr | $0.192 x 3 = $0.58/hr |
| *The GitLab Reference Architecture for 2K is not highly available and therefore has a single Gitaly node and no Praefect. The AWS Quick Start must be HA, so it implements Praefect from the 3K Reference Architecture to meet that requirement.* | | | | |
| Praefect (instances in ASG with load balancer) | 6 vCPU, 10 GB (across 3 nodes) | c5.large x 3 nodes (6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | $0.09 x 3 = $0.21/hr |
| Praefect PostgreSQL(1) (AWS RDS) | 6 vCPU, 5.4 GB (across 3 nodes) | Not applicable; reuses GitLab PostgreSQL | $0 | $0 |
| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |
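
As a worked example of how the per-row hourly figures combine (illustrative only, and subject to the On Demand pricing caveat in the note above), the following sums the 3 AZ example costs from the two 2K tables and projects an average month:

```python
# Illustrative arithmetic only: total the 3 AZ example hourly rates
# from the 2K BOM tables above and project an average 730-hour month.
# Excludes the Bastion Host, storage, data egress, Runners, and any
# savings plans or reserved instances.
hourly_rates = {
    "EKS nodes (c5.2xlarge x 3)": 1.02,
    "RDS PostgreSQL (db.r6g.large x 3)": 0.78,
    "ElastiCache Redis (cache.m6g.large x 3)": 0.45,
    "Gitaly (m5.xlarge x 3)": 0.58,
    "Praefect (c5.large x 3)": 0.21,
    "Internal load balancer (AWS ELB)": 0.10,
}

total_per_hour = sum(hourly_rates.values())  # ~$3.14/hr
hours_per_month = 730                        # 24 x 365 / 12
print(f"~${total_per_hour:.2f}/hr, ~${total_per_hour * hours_per_month:,.0f}/month")
```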

3K Cloud Native Hybrid on EKS

3K Cloud Native Hybrid on EKS Bill of Materials (BOM)

GPT Test Results

  • 3K Full Fixed Scale GPT Test Results

  • 3K AutoScale from 25% GPT Test Results

    Elastic Auto Scale GPT Test Results start with an idle scaled cluster and then run the standard GPT test to determine whether the EKS Auto Scaler performs well enough to keep up with performance test demands. In general, this is a substantially harder ramp than the scaling required when ramping is driven by standard production workloads.

Deploy Now

Deploy Now links leverage the AWS Quick Start automation and only pre-populate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the Quick Start documentation’s Deployment steps section.

note
On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates. Instead, use the AWS Calculator links in the “GitLab on AWS Compute” table above and customize the estimate with your desired savings plan.

BOM Total: the Bill of Materials total. This is what you use when building this configuration.

Ref Arch Raw Total: the totals if the configuration were built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.

Idle Configuration (Scaled-In): a configuration that can be used to scale in during times of low demand and/or for warm standby Geo instances. It requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.

| Service | Ref Arch Raw (Full Scaled) | AWS BOM | Example Full Scaled Cost (On Demand, US East) |
| --- | --- | --- | --- |
| Webservice | 4 pods x (5 vCPU & 6.25 GB) = 20 vCPU, 25 GB | | |
| Sidekiq | 8 pods x (1 vCPU & 2 GB) = 8 vCPU, 16 GB | | |
| Supporting services such as NGINX, Prometheus, etc. | 2 allocations x (2 vCPU and 7.5 GB) = 4 vCPU, 15 GB | | |
| GitLab Ref Arch Raw Total K8s Node Capacity | 32 vCPU, 56 GB | | |
| One node for overhead and miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc.) | + 16 vCPU, 32 GB | | |
| **Grand Total w/ Overheads, Full Scale** (minimum hosts = 3) | 48 vCPU, 88 GB | c5.2xlarge (8 vCPU/16 GB) x 5 nodes = 40 vCPU, 80 GB (Full Fixed Scale GPT Test Results) | $1.70/hr |
| **Possible Idle Configuration (Scaled-In 75% - round up)**; pod autoscaling must also be adjusted to enable a lower idling configuration | 24 vCPU, 48 GB | c5.2xlarge x 4 | $1.36/hr |

Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.
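
A minimal sketch of that packing arithmetic (a hypothetical helper, not part of any GitLab tooling): for a given Grand Total target, the node count for an instance type must cover both the vCPU and the memory dimension. Note that the GPT-tested BOM above runs leaner than this naive bound (5 x c5.2xlarge rather than 6), which is why testing, not arithmetic alone, should drive the final choice.

```python
# Hypothetical helper: how many nodes of a given instance type cover
# both the vCPU and the memory side of a Grand Total target?
import math

def nodes_needed(target_vcpu: float, target_gb: float,
                 node_vcpu: int, node_gb: int) -> int:
    """Smallest node count meeting both the vCPU and memory targets."""
    return max(math.ceil(target_vcpu / node_vcpu),
               math.ceil(target_gb / node_gb))

# 3K Grand Total w/ Overheads: 48 vCPU, 88 GB.
print(nodes_needed(48, 88, 8, 16))   # c5.2xlarge (8 vCPU/16 GB) -> 6
print(nodes_needed(48, 88, 16, 32))  # c5.4xlarge (16 vCPU/32 GB) -> 3
```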

note
If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM (directly usable in AWS Quick Start) | Example Cost (US East, 3 AZ) | Example Cost (US East, 2 AZ) |
| --- | --- | --- | --- | --- |
| Bastion Host (Quick Start) | 1 HA instance in ASG | t2.micro for prod, m4.2xlarge for performance testing | | |
| PostgreSQL (Amazon RDS PostgreSQL nodes configuration, GPT tested) | 18 vCPU, 36 GB (across 9 nodes for PostgreSQL, PgBouncer, Consul) | db.r6g.xlarge x 3 nodes (12 vCPU, 96 GB); tested with Graviton ARM | 3 nodes x $0.52 = $1.56/hr | 3 nodes x $0.52 = $1.56/hr |
| Redis | 6 vCPU, 18 GB (across 6 nodes for Redis Cache, Sentinel) | cache.m6g.large x 3 nodes (6 vCPU, 19 GB) | 3 nodes x $0.15 = $0.45/hr | 2 nodes x $0.15 = $0.30/hr |
| **Gitaly Cluster Details** | | | | |
| Gitaly Instances (in ASG) | 12 vCPU, 45 GB (across 3 nodes) | m5.xlarge x 3 nodes (12 vCPU, 48 GB) | $0.192 x 3 = $0.58/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect (instances in ASG with load balancer) | 6 vCPU, 5.4 GB (across 3 nodes) | c5.large x 3 nodes (6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect PostgreSQL(1) (Amazon RDS) | 6 vCPU, 5.4 GB (across 3 nodes) | Not applicable; reuses GitLab PostgreSQL | $0 | |
| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |

5K Cloud Native Hybrid on EKS

5K Cloud Native Hybrid on EKS Bill of Materials (BOM)

GPT Test Results

  • 5K Full Fixed Scale GPT Test Results

  • 5K AutoScale from 25% GPT Test Results

    Elastic Auto Scale GPT Test Results start with an idle scaled cluster and then run the standard GPT test to determine whether the EKS Auto Scaler performs well enough to keep up with performance test demands. In general, this is a substantially harder ramp than the scaling required when ramping is driven by standard production workloads.

Deploy Now

Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the Quick Start documentation’s Deployment steps section.

note
On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates. Instead, use the AWS Calculator links in the “GitLab on AWS Compute” table above and customize the estimate with your desired savings plan.

BOM Total: the Bill of Materials total. This is what you use when building this configuration.

Ref Arch Raw Total: the totals if the configuration were built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.

Idle Configuration (Scaled-In): a configuration that can be used to scale in during times of low demand and/or for warm standby Geo instances. It requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.

| Service | Ref Arch Raw (Full Scaled) | AWS BOM | Example Full Scaled Cost (On Demand, US East) |
| --- | --- | --- | --- |
| Webservice | 10 pods x (5 vCPU & 6.25 GB) = 50 vCPU, 62.5 GB | | |
| Sidekiq | 8 pods x (1 vCPU & 2 GB) = 8 vCPU, 16 GB | | |
| Supporting services such as NGINX, Prometheus, etc. | 2 allocations x (2 vCPU and 7.5 GB) = 4 vCPU, 15 GB | | |
| GitLab Ref Arch Raw Total K8s Node Capacity | 62 vCPU, 96.5 GB | | |
| One node for Quick Start overhead and miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc.) | + 8 vCPU, 16 GB | | |
| **Grand Total w/ Overheads, Full Scale** (minimum hosts = 3) | 70 vCPU, 112.5 GB | c5.2xlarge (8 vCPU/16 GB) x 9 nodes = 72 vCPU, 144 GB (Full Fixed Scale GPT Test Results) | $2.38/hr |
| **Possible Idle Configuration (Scaled-In 75% - round up)**; pod autoscaling must also be adjusted to enable a lower idling configuration | 24 vCPU, 48 GB | c5.2xlarge x 7 | $1.85/hr |

Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.

note
If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM (directly usable in AWS Quick Start) | Example Cost (US East, 3 AZ) | Example Cost (US East, 2 AZ) |
| --- | --- | --- | --- | --- |
| Bastion Host (Quick Start) | 1 HA instance in ASG | t2.micro for prod, m4.2xlarge for performance testing | | |
| PostgreSQL (Amazon RDS PostgreSQL nodes configuration, GPT tested) | 21 vCPU, 51 GB (across 9 nodes for PostgreSQL, PgBouncer, Consul) | db.r6g.2xlarge x 3 nodes (24 vCPU, 192 GB); tested with Graviton ARM | 3 nodes x $1.04 = $3.12/hr | 3 nodes x $1.04 = $3.12/hr |
| Redis | 9 vCPU, 27 GB (across 6 nodes for Redis, Sentinel) | cache.m6g.xlarge x 3 nodes (12 vCPU, 39 GB) | 3 nodes x $0.30 = $0.90/hr | 2 nodes x $0.30 = $0.60/hr |
| **Gitaly Cluster Details** | | | | |
| Gitaly Instances (in ASG) | 24 vCPU, 90 GB (across 3 nodes) | m5.2xlarge x 3 nodes (24 vCPU, 96 GB) | $0.384 x 3 = $1.15/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect (instances in ASG with load balancer) | 6 vCPU, 5.4 GB (across 3 nodes) | c5.large x 3 nodes (6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect PostgreSQL(1) (Amazon RDS) | 6 vCPU, 5.4 GB (across 3 nodes) | Not applicable; reuses GitLab PostgreSQL | $0 | |
| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |

10K Cloud Native Hybrid on EKS

10K Cloud Native Hybrid on EKS Bill of Materials (BOM)

GPT Test Results

  • 10K Full Fixed Scale GPT Test Results

  • 10K Elastic Auto Scale GPT Test Results

    Elastic Auto Scale GPT Test Results start with an idle scaled cluster and then run the standard GPT test to determine whether the EKS Auto Scaler performs well enough to keep up with performance test demands. In general, this is a substantially harder ramp than the scaling required when ramping is driven by standard production workloads.

Deploy Now

Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the Quick Start documentation’s Deployment steps section.

note
On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates. Instead, use the AWS Calculator links in the “GitLab on AWS Compute” table above and customize the estimate with your desired savings plan.

BOM Total: the Bill of Materials total. This is what you use when building this configuration.

Ref Arch Raw Total: the totals if the configuration were built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.

Idle Configuration (Scaled-In): a configuration that can be used to scale in during times of low demand and/or for warm standby Geo instances. It requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.

| Service | Ref Arch Raw (Full Scaled) | AWS BOM (directly usable in AWS Quick Start) | Example Full Scaled Cost (On Demand, US East) |
| --- | --- | --- | --- |
| Webservice | 20 pods x (5 vCPU & 6.25 GB) = 100 vCPU, 125 GB | | |
| Sidekiq | 14 pods x (1 vCPU & 2 GB) = 14 vCPU, 28 GB | | |
| Supporting services such as NGINX, Prometheus, etc. | 2 allocations x (2 vCPU and 7.5 GB) = 4 vCPU, 15 GB | | |
| GitLab Ref Arch Raw Total K8s Node Capacity | 128 vCPU, 158 GB | | |
| One node for overhead and miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc.) | + 16 vCPU, 32 GB | | |
| **Grand Total w/ Overheads, Fully Scaled** (minimum hosts = 3) | 142 vCPU, 190 GB | c5.4xlarge (16 vCPU/32 GB) x 9 nodes = 144 vCPU, 288 GB (Full Fixed Scale GPT Test Results) | $6.12/hr |
| **Possible Idle Configuration (Scaled-In 75% - round up)**; pod autoscaling must also be adjusted to enable a lower idling configuration | 40 vCPU, 80 GB | c5.4xlarge x 7 (Elastic Auto Scale GPT Test Results) | $4.76/hr |

Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.

note
If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM | Example Cost (US East, 3 AZ) | Example Cost (US East, 2 AZ) |
| --- | --- | --- | --- | --- |
| Bastion Host (Quick Start) | 1 HA instance in ASG | t2.micro for prod, m4.2xlarge for performance testing | | |
| PostgreSQL (Amazon RDS PostgreSQL nodes configuration, GPT tested) | 36 vCPU, 102 GB (across 9 nodes for PostgreSQL, PgBouncer, Consul) | db.r6g.2xlarge x 3 nodes (24 vCPU, 192 GB) | 3 nodes x $1.04 = $3.12/hr | 3 nodes x $1.04 = $3.12/hr |
| Redis | 30 vCPU, 114 GB (across 12 nodes for Redis Cache, Redis Queues/Shared State, Sentinel Cache, Sentinel Queues/Shared State) | cache.m5.2xlarge x 3 nodes (24 vCPU, 78 GB) | 3 nodes x $0.62 = $1.86/hr | 2 nodes x $0.62 = $1.24/hr |
| **Gitaly Cluster Details** | | | | |
| Gitaly Instances (in ASG) | 48 vCPU, 180 GB (across 3 nodes) | m5.4xlarge x 3 nodes (48 vCPU, 192 GB) | $0.77 x 3 = $2.31/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect (instances in ASG with load balancer) | 6 vCPU, 5.4 GB (across 3 nodes) | c5.large x 3 nodes (6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect PostgreSQL(1) (Amazon RDS) | 6 vCPU, 5.4 GB (across 3 nodes) | Not applicable; reuses GitLab PostgreSQL | $0 | |
| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |

50K Cloud Native Hybrid on EKS

50K Cloud Native Hybrid on EKS Bill of Materials (BOM)

GPT Test Results

  • 50K Full Fixed Scale GPT Test Results

  • 50K Elastic Auto Scale GPT Test Results

    Elastic Auto Scale GPT Test Results start with an idle scaled cluster and then run the standard GPT test to determine whether the EKS Auto Scaler performs well enough to keep up with performance test demands. In general, this is a substantially harder ramp than the scaling required when ramping is driven by standard production workloads.

Deploy Now

Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the Quick Start documentation’s Deployment steps section.

note
On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates. Instead, use the AWS Calculator links in the “GitLab on AWS Compute” table above and customize the estimate with your desired savings plan.

BOM Total: the Bill of Materials total. This is what you use when building this configuration.

Ref Arch Raw Total: the totals if the configuration were built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.

Idle Configuration (Scaled-In): a configuration that can be used to scale in during times of low demand and/or for warm standby Geo instances. It requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.

| Service | Ref Arch Raw (Full Scaled) | AWS BOM (directly usable in AWS Quick Start) | Example Full Scaled Cost (On Demand, US East) |
| --- | --- | --- | --- |
| Webservice | 80 pods x (5 vCPU & 6.25 GB) = 400 vCPU, 500 GB | | |
| Sidekiq | 14 pods x (1 vCPU & 2 GB) = 14 vCPU, 28 GB | | |
| Supporting services such as NGINX, Prometheus, etc. | 2 allocations x (2 vCPU and 7.5 GB) = 4 vCPU, 15 GB | | |
| GitLab Ref Arch Raw Total K8s Node Capacity | 428 vCPU, 533 GB | | |
| One node for overhead and miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc.) | + 16 vCPU, 32 GB | | |
| **Grand Total w/ Overheads, Fully Scaled** (minimum hosts = 3) | 444 vCPU, 565 GB | c5.4xlarge (16 vCPU/32 GB) x 28 nodes = 448 vCPU, 896 GB (Full Fixed Scale GPT Test Results) | $19.04/hr |
| **Possible Idle Configuration (Scaled-In 75% - round up)**; pod autoscaling must also be adjusted to enable a lower idling configuration | 40 vCPU, 80 GB | c5.2xlarge x 10 (Elastic Auto Scale GPT Test Results) | $6.80/hr |

Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.

note
If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM | Example Cost (US East, 3 AZ) | Example Cost (US East, 2 AZ) |
| --- | --- | --- | --- | --- |
| Bastion Host (Quick Start) | 1 HA instance in ASG | t2.micro for prod, m4.2xlarge for performance testing | | |
| PostgreSQL (Amazon RDS PostgreSQL nodes configuration, GPT tested) | 96 vCPU, 360 GB (across 3 nodes) | db.r6g.8xlarge x 3 nodes (96 vCPU, 768 GB total) | 3 nodes x $4.15 = $12.45/hr | 3 nodes x $4.15 = $12.45/hr |
| Redis | 30 vCPU, 114 GB (across 12 nodes for Redis Cache, Redis Queues/Shared State, Sentinel Cache, Sentinel Queues/Shared State) | cache.m6g.2xlarge x 3 nodes (24 vCPU, 78 GB total) | 3 nodes x $0.60 = $1.80/hr | 2 nodes x $0.60 = $1.20/hr |
| **Gitaly Cluster Details** | | | | |
| Gitaly Instances (in ASG) | 64 vCPU, 240 GB x 3 nodes | m5.16xlarge x 3 nodes (64 vCPU, 256 GB each) | $3.07 x 3 = $9.21/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect (instances in ASG with load balancer) | 4 vCPU, 3.6 GB x 3 nodes | c5.xlarge x 3 nodes (4 vCPU, 8 GB each) | $0.17 x 3 = $0.51/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect PostgreSQL(1) (AWS RDS) | 2 vCPU, 1.8 GB x 3 nodes | Not applicable; reuses GitLab PostgreSQL | $0 | |
| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |

Helpful Resources

This page contains information related to upcoming products, features, and functionality. It is important to note that the information presented is for informational purposes only. Please do not rely on this information for purchasing or planning purposes. As with all projects, the items mentioned on this page are subject to change or delay. The development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.