Gitaly and Geo capabilities

It is common to want a storage solution for your data that is highly available, quickly recoverable, highly performant, and fully resilient. However, each of these properties comes with trade-offs.

The following tables are intended to help you choose the right combination of capabilities for your requirements.

Gitaly capabilities

| Capability | Availability | Recoverability | Data Resiliency | Performance | Risks/Trade-offs |
|---|---|---|---|---|---|
| Gitaly Cluster | Very high - tolerant of node failures. | RTO for a single node of 10 s with no manual intervention. | Data is stored on multiple nodes. | Good - while writes may take slightly longer due to voting, read distribution improves read speeds. | Trade-off: a slight decrease in write speed in exchange for redundant, strongly consistent storage. Risks: does not support snapshot backups, and the GitLab backup task can be slow for large data sets. |
| Gitaly Shards | A single storage location is a single point of failure. | Only the shards that failed need to be restored. | Single point of failure. | Good - repositories can be allocated to shards to spread load. | Trade-off: repositories must be manually assigned to shards to balance load and storage space (see the sketch after this table). Risks: each single point of failure relies on the recovery process when a single-node failure occurs. |
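
The trade-off in the Gitaly Shards row is that repository placement is an operator responsibility. As a minimal sketch only, assuming two hypothetical Gitaly hosts (`gitaly1.internal` and `gitaly2.internal`) and using the Omnibus `git_data_dirs` setting, a sharded layout in `/etc/gitlab/gitlab.rb` might look like this:

```ruby
# Sketch only - storage names, hostnames, and ports are illustrative.
# Each entry is an independent shard, and each shard is its own single
# point of failure that must be restored individually if it is lost.
git_data_dirs({
  "default"  => { "gitaly_address" => "tcp://gitaly1.internal:8075" },
  "storage2" => { "gitaly_address" => "tcp://gitaly2.internal:8075" },
})
```

Balancing repositories between `default` and `storage2` (for example, with repository storage weights or by moving repositories) is then a manual, ongoing task.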

Geo capabilities

If your availability needs to span multiple zones or multiple locations, read about Geo.

| Capability | Availability | Recoverability | Data Resiliency | Performance | Risks/Trade-offs |
|---|---|---|---|---|---|
| Geo | Depends on the architecture of the Geo site. It is possible to deploy secondaries in single and multiple node configurations. | Eventually consistent. The recovery point depends on replication lag, which depends on a number of factors such as network speeds. Geo supports failover from a primary to a secondary site using manual commands that are scriptable. | Geo replicates 100% of planned data types and verifies 50%. See the limitations table for more detail. | Improves read/clone times for users of a secondary site. | Geo is not intended to replace other backup/restore solutions. Because of replication lag and the possibility of replicating bad data from a primary site, customers should also take regular backups of their primary site and test the restore process. |
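
As a rough sketch of how a Geo deployment is expressed, each site's nodes are assigned a Geo role in `/etc/gitlab/gitlab.rb`. The snippet below assumes the Omnibus `geo_primary_role` and `geo_secondary_role` roles; a working deployment also needs database replication, shared secrets, and other settings not shown here:

```ruby
# On nodes in the primary site - sketch only.
roles ['geo_primary_role']

# On nodes in the secondary site - sketch only.
roles ['geo_secondary_role']
```

Failover from the primary to a secondary site is a deliberate promotion step driven by manual (but scriptable) commands, which is why the mitigation paths in the next table list it as manual intervention.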

Scenarios for failure modes and available mitigation paths

The following table outlines failure modes and mitigation paths for the product offerings detailed in the tables above. Note: the Gitaly Cluster rows (marked with *) assume an odd replication factor of 3 or greater; see the worked example after the table.

| Gitaly Mode | Loss of Single Gitaly Node | Application / Data Corruption | Regional Outage (Loss of Instance) | Notes |
|---|---|---|---|---|
| Single Gitaly Node | Downtime - must restore from backup. | Downtime - must restore from backup. | Downtime - must wait for outage to end. | |
| Single Gitaly Node + Geo Secondary | Downtime - must restore from backup; can perform a manual failover to the secondary. | Downtime - must restore from backup; errors could have propagated to the secondary. | Manual intervention - failover to the Geo secondary. | |
| Sharded Gitaly Install | Partial downtime - only repositories on the impacted node are affected; must restore from backup. | Partial downtime - only repositories on the impacted node are affected; must restore from backup. | Downtime - must wait for outage to end. | |
| Sharded Gitaly Install + Geo Secondary | Partial downtime - only repositories on the impacted node are affected; must restore from backup; could perform a manual failover to the secondary for impacted repositories. | Partial downtime - only repositories on the impacted node are affected; must restore from backup; errors could have propagated to the secondary. | Manual intervention - failover to the Geo secondary. | |
| Gitaly Cluster Install* | No downtime - swaps the repository primary to another node after 10 seconds. | Not applicable; all writes are voted on by multiple Gitaly Cluster nodes. | Downtime - must wait for outage to end. | Snapshot backups for Gitaly Cluster nodes are not supported at this time. |
| Gitaly Cluster Install* + Geo Secondary | No downtime - swaps the repository primary to another node after 10 seconds. | Not applicable; all writes are voted on by multiple Gitaly Cluster nodes. | Manual intervention - failover to the Geo secondary. | Snapshot backups for Gitaly Cluster nodes are not supported at this time. |
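
To make the replication factor note above concrete, here is the arithmetic, assuming a simple majority quorum on transactional writes (the number of nodes that must agree before a write is acknowledged):

```math
\text{quorum}(r) = \left\lfloor \frac{r}{2} \right\rfloor + 1, \qquad \text{tolerated node failures} = r - \text{quorum}(r)
```

With a replication factor of 3, the quorum is 2 and one node can fail without losing write availability; with a factor of 5, the quorum is 3 and two failures are tolerated. An even factor such as 4 still tolerates only one failure (its quorum is 3), which is why an odd replication factor is assumed.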