doc/administration/backup_restore/backup_gitlab.md
{{< details >}}
{{< /details >}}
GitLab backups protect your data and help with disaster recovery.
The optimal backup strategy depends on your GitLab deployment configuration, data volume, and storage locations. These factors determine which backup methods to use, where to store backups, and how to structure your backup schedule.
For larger GitLab instances, alternative backup strategies include:
{{< history >}}
{{< /history >}}
GitLab provides a command-line interface to back up your entire instance. By default, the backup creates an archive in a single compressed tar file. This file includes:
[!warning] You are highly advised to read about storing configuration files and to back them up separately.
As a rough guideline, if you are using a 1k reference architecture with less than 100 GB of data, then follow these steps:
See also:
As the volume of GitLab data grows, the backup command takes longer to execute. Backup options such as backing up Git repositories concurrently and incremental repository backups can help to reduce execution time. At some point, the backup command becomes impractical by itself. For example, it can take 24 hours or more.
Starting with GitLab 18.0, repository backup performance has been significantly improved for repositories with large numbers of references (branches, tags). This improvement can reduce backup times from hours to minutes for affected repositories. No configuration changes are required to benefit from this enhancement.
In some cases, architecture changes may be warranted to allow backups to scale.
Further reading:
The following data needs to be backed up.
In the simplest case, GitLab has one PostgreSQL database in one PostgreSQL server on the same VM as all other GitLab services. But depending on configuration, GitLab may use multiple PostgreSQL databases in multiple PostgreSQL servers.
In general, this data is the single source of truth for most user-generated content in the Web interface, such as issue and merge request content, comments, permissions, and credentials.
PostgreSQL also holds some cached data like HTML-rendered Markdown, and by default, merge request diffs. However, merge request diffs can also be configured to be offloaded to the file system or object storage.
Gitaly Cluster (Praefect) uses a PostgreSQL database as a single source of truth to manage its Gitaly nodes.
A common PostgreSQL utility, pg_dump, produces a backup file which can be used to restore a PostgreSQL database. The backup command uses this utility under the hood.
Unfortunately, the larger the database, the longer it takes pg_dump to execute. Depending on your situation, the duration becomes impractical at some point (days, for example). If your database is over 100 GB, pg_dump, and by extension the backup command, is likely not usable. For more information, see alternative backup strategies.
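To gauge whether pg_dump is still practical for your instance, you can time a manual dump. The following is a rough sketch only, assuming Linux package defaults (the `gitlab-psql` OS user, the packaged `pg_dump` binary, the local socket directory, and the `gitlabhq_production` database name); it is not the exact command the backup task runs:

```shell
# Time a manual dump of the main GitLab database over the local socket.
# Assumes Linux package defaults; adjust paths and the database name as needed.
time sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dump \
  --host=/var/opt/gitlab/postgresql \
  gitlabhq_production | gzip -c -1 > /tmp/gitlabhq_production.sql.gz
```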
A GitLab instance can have one or more repository shards. Each shard is a Gitaly instance or Gitaly Cluster (Praefect) that is responsible for allowing access and operations on the locally stored Git repositories. Gitaly can run on a machine:
Each project can have up to 3 different repositories:

- The project repository, where the source code is stored.
- The wiki repository, where wiki content is stored.
- The design repository, where design artifacts are indexed.

They all live in the same shard and share the same base name, with a -wiki or -design suffix for the wiki and design repository cases.
Personal and project snippets, and group wiki content, are stored in Git repositories.
Project forks are deduplicated on a live GitLab site using pool repositories.
The backup command produces a Git bundle for each repository and tars them all up. This duplicates pool repository data into every fork. In our testing, 100 GB of Git repositories took a little over 2 hours to back up and upload to S3. At around 400 GB of Git data, the backup command is likely not viable for regular backups. For more information, see alternative backup strategies.
GitLab stores blobs (or files) such as issue attachments or LFS objects into either:

- The file system in a specific location.
- An Object Storage solution.
The backup command doesn't back up blobs that aren't stored on the file system. If you're using object storage, be sure to enable backups with your object storage provider.
Provider-specific backup guides:
See also:
GitLab container registry storage can be configured in either:

- The file system in a specific location.
- An Object Storage solution.

The backup command does not back up registry data when it is stored in Object Storage.
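If registry data lives in object storage, back it up with your storage provider's tooling. For example, a sketch using the AWS CLI; the bucket names are placeholders, not values from this document:

```shell
# Copy the registry bucket to a dedicated backup bucket.
aws s3 sync s3://<registry-bucket> s3://<registry-backup-bucket>
```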
See also:
[!warning] The backup Rake task GitLab provides does not store your configuration files. The primary reason for this is that your database contains items including encrypted information for two-factor authentication and the CI/CD secure variables. Storing encrypted information in the same location as its key defeats the purpose of using encryption in the first place. For example, the secrets file contains your database encryption key. If you lose it, then the GitLab application will not be able to decrypt any encrypted values in the database.
Additionally, the secrets file may change after upgrades.
You should back up the configuration directory. At the very minimum, you must back up:
{{< tabs >}}
{{< tab title="Linux package" >}}
- `/etc/gitlab/gitlab-secrets.json`
- `/etc/gitlab/gitlab.rb`

For more information, see Backup and restore Linux package (Omnibus) configuration.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
- `/home/git/gitlab/config/secrets.yml`
- `/home/git/gitlab/config/gitlab.yml`

{{< /tab >}}
{{< tab title="Docker" >}}
Back up the `/srv/gitlab/config` directory.

{{< /tab >}}
{{< tab title="GitLab Helm chart" >}}
{{< /tab >}}
{{< /tabs >}}
You may also want to back up any TLS keys and certificates (/etc/gitlab/ssl, /etc/gitlab/trusted-certs), and your
SSH host keys
to avoid man-in-the-middle attack warnings if you have to perform a full machine restore.
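For example, a minimal sketch that archives the configuration directory and SSH host keys together, assuming the Linux package layout and an example destination path of your choosing:

```shell
# Archive /etc/gitlab (includes gitlab-secrets.json, gitlab.rb, ssl/, and
# trusted-certs/) plus the SSH host keys, to a location outside the
# application backup.
sudo tar -czf /secure/offsite/gitlab-config-$(date +%Y%m%d).tar.gz \
  /etc/gitlab \
  /etc/ssh/ssh_host_*_key /etc/ssh/ssh_host_*_key.pub
```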
In the unlikely event that the secrets file is lost, see When the secrets file is lost.
GitLab uses Redis both as a cache store and to hold persistent data for our background jobs system, Sidekiq. The provided backup command does not back up Redis data. This means that in order to take a consistent backup with the backup command, there must be no pending or running background jobs.
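To check for outstanding background jobs before starting a backup, you can query Sidekiq's statistics. A sketch using the Rails runner (Linux package command shown):

```shell
# Print queued and running Sidekiq job counts; both should be 0 for a
# fully consistent backup.
sudo gitlab-rails runner 'stats = Sidekiq::Stats.new; puts "enqueued=#{stats.enqueued} busy=#{stats.workers_size}"'
```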
Elasticsearch is an optional database for advanced search. It can improve search of both source code and user-generated content in issues, merge requests, and discussions. The backup command does not back up Elasticsearch data. Elasticsearch data can be regenerated from PostgreSQL data after a restore.
Manual backup options:
See also: Backup command details.
To be able to back up and restore, ensure that Rsync is installed on your system. If you installed GitLab:
- Using the Linux package, Rsync is already installed.
- Using a self-compiled installation, check whether `rsync` is installed, and install it if not.

Important considerations:
To create a backup:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
sudo gitlab-backup create
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Run the backup task by using kubectl to run the backup-utility script on the GitLab toolbox pod. For more details, see the charts backup documentation.
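For example, assuming a deployed toolbox pod:

```shell
kubectl exec <Toolbox pod name> -it -- backup-utility
```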
{{< /tab >}}
{{< tab title="Docker" >}}
Run the backup from the host.
docker exec -t <container name> gitlab-backup create
{{< /tab >}}
{{< tab title="Self-compiled" >}}
sudo -u git -H bundle exec rake gitlab:backup:create RAILS_ENV=production
{{< /tab >}}
{{< /tabs >}}
If your GitLab deployment has multiple nodes, you need to pick a node for running the backup command. You must ensure that the designated node:
Example output:
Dumping database tables:
- Dumping table events... [DONE]
- Dumping table issues... [DONE]
- Dumping table keys... [DONE]
- Dumping table merge_requests... [DONE]
- Dumping table milestones... [DONE]
- Dumping table namespaces... [DONE]
- Dumping table notes... [DONE]
- Dumping table projects... [DONE]
- Dumping table protected_branches... [DONE]
- Dumping table schema_migrations... [DONE]
- Dumping table services... [DONE]
- Dumping table snippets... [DONE]
- Dumping table taggings... [DONE]
- Dumping table tags... [DONE]
- Dumping table users... [DONE]
- Dumping table users_projects... [DONE]
- Dumping table web_hooks... [DONE]
- Dumping table wikis... [DONE]
Dumping repositories:
- Dumping repository abcd... [DONE]
Creating backup archive: <backup-id>_gitlab_backup.tar [DONE]
Deleting tmp directories...[DONE]
Deleting old backups... [SKIPPING]
For detailed information about the backup process, see Backup archive process.
The command-line tool GitLab provides to back up your instance can accept more options.
The default backup strategy is to essentially stream data from the respective
data locations to the backup using the Linux command tar and gzip. This works
fine in most cases, but can cause problems when data is rapidly changing.
When data changes while tar is reading it, the error `file changed as we read it`
may occur and cause the backup process to fail. In that case, you can use
the backup strategy called copy. The strategy copies data files
to a temporary location before calling tar and gzip, avoiding the error.
A side-effect is that the backup process takes up to an additional 1X disk space. The process does its best to clean up the temporary files at each stage so the problem doesn't compound, but it could be a considerable change for large installations.
To use the copy strategy instead of the default streaming strategy, specify
STRATEGY=copy in the Rake task command. For example:
sudo gitlab-backup create STRATEGY=copy
[!warning] If you use a custom backup filename, you can't limit the lifetime of the backups.
Backup files are created with filenames according to specific defaults. However, you can
override the <backup-id> portion of the filename by setting the BACKUP
environment variable. For example:
sudo gitlab-backup create BACKUP=dump
The resulting file is named dump_gitlab_backup.tar. This is useful for
systems that make use of rsync and incremental backups, and results in
considerably faster transfer speeds.
By default, Gzip fast compression is applied during backup of:
See also:
The default command is gzip -c -1. You can override this command with COMPRESS_CMD. Similarly, you can override the decompression command with DECOMPRESS_CMD.
Caveats:
- The custom command is used in a pipeline, so it must write its output to stdout.
- The resulting filenames still end in `.gz`.
- The default decompression command, used during restore, is `gzip -cd`. Therefore, if you override the compression command to use a format that cannot be decompressed by `gzip -cd`, you must override the decompression command during restore.
- Environment variables must be placed before the command. For example, `gitlab-backup create COMPRESS_CMD="pigz -c --best"` doesn't work as intended; use `COMPRESS_CMD="gzip -c --best" gitlab-backup create` instead.
If gzip was used for backup, then restore does not require any options:
gitlab-backup restore
If your backup destination has built-in automatic compression, then you may wish to skip compression.
The tee command pipes stdin to stdout.
COMPRESS_CMD=tee gitlab-backup create
And on restore:
DECOMPRESS_CMD=tee gitlab-backup restore
pigz

[!warning] While we support using `COMPRESS_CMD` and `DECOMPRESS_CMD` to override the default Gzip compression library, we only test the default Gzip library with default options on a routine basis. You are responsible for testing and validating the viability of your backups. We strongly recommend this as a general best practice for backups, whether overriding the compression command or not. If you encounter issues with another compression library, you should revert to the default. Troubleshooting and fixing errors with alternative libraries are a lower priority for GitLab.
An example of compressing backups with pigz using 4 processes:
sudo COMPRESS_CMD="pigz --compress --stdout --fast --processes=4" gitlab-backup create
Because pigz compresses to the gzip format, it is not required to use pigz to decompress backups which were compressed by pigz. However, it can still have a performance benefit over gzip. An example of decompressing backups with pigz:
sudo DECOMPRESS_CMD="pigz --decompress --stdout" gitlab-backup restore
[!note]
`pigz` is not included in the GitLab Linux package. You must install it yourself.
zstd

[!warning] While we support using `COMPRESS_CMD` and `DECOMPRESS_CMD` to override the default Gzip compression library, we only test the default Gzip library with default options on a routine basis. You are responsible for testing and validating the viability of your backups. We strongly recommend this as a general best practice for backups, whether overriding the compression command or not. If you encounter issues with another compression library, you should revert to the default. Troubleshooting and fixing errors with alternative libraries are a lower priority for GitLab.
An example of compressing backups with zstd using 4 threads:
sudo COMPRESS_CMD="zstd --compress --stdout --fast --threads=4" gitlab-backup create
An example of decompressing backups with zstd:
sudo DECOMPRESS_CMD="zstd --decompress --stdout" gitlab-backup restore
[!note]
`zstd` is not included in the GitLab Linux package. You must install it yourself.
To ensure the generated archive is transferable by rsync, you can set the GZIP_RSYNCABLE=yes
option. This sets the --rsyncable option to gzip, which is useful only in
combination with setting the Backup filename option.
The --rsyncable option in gzip isn't guaranteed to be available
on all distributions. To verify that it's available in your distribution, run
gzip --help or consult the man pages.
sudo gitlab-backup create BACKUP=dump GZIP_RSYNCABLE=yes
Depending on your installation type, slightly different components can be skipped on backup creation.
{{< tabs >}}
{{< tab title="Linux package (Omnibus) / Docker / Self-compiled" >}}
<!-- source: <https://gitlab.com/gitlab-org/gitlab/-/blob/d693aa7f894c7306a0d20ab6d138a7b95785f2ff/lib/backup/manager.rb#L117-133> -->

- `db` (database)
- `repositories` (Git repositories data, including wikis)
- `uploads` (attachments)
- `builds` (CI job output logs)
- `artifacts` (CI job artifacts)
- `pages` (Pages content)
- `lfs` (LFS objects)
- `terraform_state` (Terraform states)
- `registry` (Container registry images)
- `packages` (Packages)
- `ci_secure_files` (Project-level secure files)
- `external_diffs` (External merge request diffs)

{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
<!-- source: <https://gitlab.com/gitlab-org/build/CNG/-/blob/068e146db915efcd875414e04403410b71a2e70c/gitlab-toolbox/scripts/bin/backup-utility#L19> -->

- `db` (database)
- `repositories` (Git repositories data, including wikis)
- `uploads` (attachments)
- `artifacts` (CI job artifacts and output logs)
- `pages` (Pages content)
- `lfs` (LFS objects)
- `terraform_state` (Terraform states)
- `registry` (Container registry images)
- `packages` (Package registry)
- `ci_secure_files` (Project-level Secure Files)
- `external_diffs` (Merge request diffs)

{{< /tab >}}
{{< /tabs >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
sudo gitlab-backup create SKIP=db,uploads
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
See Skipping components in charts backup documentation.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=db,uploads RAILS_ENV=production
{{< /tab >}}
{{< /tabs >}}
SKIP= is also used to:
- Skip the creation of the tar file (`SKIP=tar`).
- Skip uploading the backup to remote storage (`SKIP=remote`).

[!note] It is not possible to skip the tar creation when using object storage for backups.
The last part of creating a backup is generation of a .tar file containing all the parts. In some cases, creating a .tar file might be wasted effort or even directly harmful, so you can skip this step by adding tar to the SKIP environment variable. Example use-cases:
- When the backup is picked up by other backup software.
- To speed up incremental backups by avoiding having to extract the backup every time. (In this case, `PREVIOUS_BACKUP` and `BACKUP` must not be specified, otherwise the specified backup is extracted, but no .tar file is generated at the end.)

Adding `tar` to the `SKIP` variable leaves the files and directories containing the
backup in the directory used for the intermediate files. These files are
overwritten when a new backup is created, so you should make sure they are copied
elsewhere, because you can only have one backup on the system.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
sudo gitlab-backup create SKIP=tar
{{< /tab >}}
{{< tab title="Self-compiled" >}}
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=tar RAILS_ENV=production
{{< /tab >}}
{{< /tabs >}}
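After an untarred backup completes, copy the staging directory before the next run overwrites it. A sketch assuming the Linux package default backup path and a hypothetical destination:

```shell
# The untarred backup parts are left in the backup staging directory
# (/var/opt/gitlab/backups by default on the Linux package).
sudo rsync -a /var/opt/gitlab/backups/ /mnt/offsite/gitlab-backup-staging/
```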
{{< history >}}
- Server-side repository backups introduced in `gitlab-backup` in GitLab 16.3.
- Server-side support in `gitlab-backup` for restoring a specified backup instead of the latest backup introduced in GitLab 16.6.
- Server-side support in `gitlab-backup` for creating incremental backups introduced in GitLab 16.6.
- Server-side support in `backup-utility` introduced in GitLab 17.0.

{{< /history >}}
Instead of storing large repository backups in the backup archive, repository backups can be configured so that the Gitaly node that hosts each repository is responsible for creating the backup and streaming it to object storage. This helps reduce the network resources required to create and restore a backup.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
sudo gitlab-backup create REPOSITORIES_SERVER_SIDE=true
{{< /tab >}}
{{< tab title="Self-compiled" >}}
sudo -u git -H bundle exec rake gitlab:backup:create REPOSITORIES_SERVER_SIDE=true
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
kubectl exec <Toolbox pod name> -it -- backup-utility --repositories-server-side
When you are using cron-based backups,
add the --repositories-server-side flag to the extra arguments.
{{< /tab >}}
{{< /tabs >}}
When using multiple repository storages, repositories can be backed up or restored concurrently to help fully use CPU time. The following variables are available to modify the default behavior of the Rake task:
- `GITLAB_BACKUP_MAX_CONCURRENCY`: The maximum number of projects to back up at the same time. Defaults to the number of logical CPUs.
- `GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY`: The maximum number of projects to back up at the same time on each storage. This allows the repository backups to be spread across storages. Defaults to 2.

For example, with 4 repository storages:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
sudo gitlab-backup create GITLAB_BACKUP_MAX_CONCURRENCY=4 GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY=1
{{< /tab >}}
{{< tab title="Self-compiled" >}}
sudo -u git -H bundle exec rake gitlab:backup:create GITLAB_BACKUP_MAX_CONCURRENCY=4 GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY=1
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
toolbox:
  #...
  extra: {}
  extraEnv:
    GITLAB_BACKUP_MAX_CONCURRENCY: 4
    GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY: 1
{{< /tab >}}
{{< /tabs >}}
{{< history >}}
{{< /history >}}
[!note] Only repositories support incremental backups. Therefore, if you use
INCREMENTAL=yes, the task creates a self-contained backup tar archive. This is because all subtasks except repositories are still creating full backups (they overwrite the existing full backup). See issue 19256 for a feature request to support incremental backups for all subtasks.
Incremental repository backups can be faster than full repository backups because they only pack changes since the last backup into the backup bundle for each repository.
Backup archives produced by gitlab-backup are portable and self-contained because they contain all the steps needed to restore each repository from the original full backup onward.
To restore an incremental backup to a new GitLab instance (no pre-existing data), you must create the incremental backup from a full backup. Do not skip any backup components when creating the base backup.
With server-side repository backups, incremental repository backup files are stored separately in object storage. Each increment depends on all prior steps back to the original full backup.
[!warning] Do not delete incremental backup files from object storage. If an intermediate file is deleted (for example, through an object storage lifecycle policy), the backup chain is broken and the backup cannot be restored.
For more details, see Restoring an incremental repository backup.
Use the PREVIOUS_BACKUP=<backup-id> option to choose the backup to use. By default, a backup file is created
as documented in the Backup ID section. You can override the <backup-id> portion of the filename by setting the
BACKUP environment variable.
To create an incremental backup, run:
sudo gitlab-backup create INCREMENTAL=yes PREVIOUS_BACKUP=<backup-id>
To create an untarred incremental backup from a tarred backup, use SKIP=tar:
sudo gitlab-backup create INCREMENTAL=yes SKIP=tar
{{< history >}}
{{< /history >}}
When using multiple repository storages,
repositories from specific repository storages can be backed up separately
using the REPOSITORIES_STORAGES option. The option accepts a comma-separated list of
storage names.
For example:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
sudo gitlab-backup create REPOSITORIES_STORAGES=storage1,storage2
{{< /tab >}}
{{< tab title="Self-compiled" >}}
sudo -u git -H bundle exec rake gitlab:backup:create REPOSITORIES_STORAGES=storage1,storage2
{{< /tab >}}
{{< /tabs >}}
{{< history >}}
{{< /history >}}
You can back up specific repositories using the REPOSITORIES_PATHS option.
Similarly, you can use SKIP_REPOSITORIES_PATHS to skip certain repositories.
Both options accept a comma-separated list of project or group paths. If you
specify a group path, all repositories in all projects in the group and
descendant groups are included or skipped, depending on which option you used.
For example, to back up all repositories for all projects in Group A (group-a), the repository for
Project C in Group B (group-b/project-c),
and skip Project D in Group A (group-a/project-d):
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
sudo gitlab-backup create REPOSITORIES_PATHS=group-a,group-b/project-c SKIP_REPOSITORIES_PATHS=group-a/project-d
{{< /tab >}}
{{< tab title="Self-compiled" >}}
sudo -u git -H bundle exec rake gitlab:backup:create REPOSITORIES_PATHS=group-a,group-b/project-c SKIP_REPOSITORIES_PATHS=group-a/project-d
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
REPOSITORIES_PATHS=group-a SKIP_REPOSITORIES_PATHS=group-a/project_a2 backup-utility --skip db,registry,uploads,artifacts,lfs,packages,external_diffs,terraform_state,ci_secure_files,pages
{{< /tab >}}
{{< /tabs >}}
[!note] It is not possible to skip the tar creation when using object storage for backups.
You can let the backup script upload the .tar file it creates to remote storage.
In the following example, we use Amazon S3 for storage, but you can also use
other cloud providers like Google Cloud Storage and Azure, or local mounted shares.
See also:
For Linux package (Omnibus):
Add the following to /etc/gitlab/gitlab.rb:
gitlab_rails['backup_upload_connection'] = {
  'provider' => 'AWS',
  'region' => 'eu-west-1',
  # Choose one authentication method:
  # IAM Profile
  # 'use_iam_profile' => true,
  # OR AWS Access and Secret key
  'aws_access_key_id' => 'AKIAKIAKI',
  'aws_secret_access_key' => 'secret123'
}
gitlab_rails['backup_upload_remote_directory'] = 'my.s3.bucket'
# Consider using multipart uploads when file size reaches 100 MB. Enter a number in bytes.
# gitlab_rails['backup_multipart_chunk_size'] = 104857600
If you're using the IAM Profile authentication method, ensure the instance where backup-utility is to be run has the following policy set (replace <backups-bucket> with the correct bucket name):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::<backups-bucket>/*"
    }
  ]
}
Reconfigure GitLab for the changes to take effect.
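To verify that the instance profile grants the expected access, you can round-trip a test object with the AWS CLI (assuming the CLI is installed; <backups-bucket> as above):

```shell
# Upload and delete a test object using only the permissions in the policy above.
echo test | aws s3 cp - s3://<backups-bucket>/iam-profile-test
aws s3 rm s3://<backups-bucket>/iam-profile-test
```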
AWS supports these modes for server side encryption:
Use your mode of choice with GitLab. Each mode has similar, but slightly different, configuration methods.
To enable SSE-S3, in the backup storage options set the server_side_encryption
field to AES256. For example, in the Linux package (Omnibus):
gitlab_rails['backup_upload_storage_options'] = {
  'server_side_encryption' => 'AES256'
}
To enable SSE-KMS, you need the
KMS key via its Amazon Resource Name (ARN) in the arn:aws:kms:region:acct-id:key/key-id format.
Under the backup_upload_storage_options configuration setting, set:
- `server_side_encryption` to `aws:kms`.
- `server_side_encryption_kms_key_id` to the ARN of the key.

For example, in the Linux package (Omnibus):
gitlab_rails['backup_upload_storage_options'] = {
  'server_side_encryption' => 'aws:kms',
  'server_side_encryption_kms_key_id' => 'arn:aws:kms:<YOUR KMS KEY ID>'
}
SSE-C requires you to set these encryption options:
- `backup_encryption`: AES256.
- `backup_encryption_key`: Unencoded, 32-byte (256-bit) key. The upload fails if this isn't exactly 32 bytes.

For example, in the Linux package (Omnibus):
gitlab_rails['backup_encryption'] = 'AES256'
gitlab_rails['backup_encryption_key'] = '<YOUR 32-BYTE KEY HERE>'
If the key contains binary characters and cannot be encoded in UTF-8,
specify the key with the GITLAB_BACKUP_ENCRYPTION_KEY environment variable instead.
For example:
gitlab_rails['env'] = { 'GITLAB_BACKUP_ENCRYPTION_KEY' => "\xDE\xAD\xBE\xEF" * 8 }
This example can be used for a bucket in Amsterdam (AMS3):
Add the following to /etc/gitlab/gitlab.rb:
gitlab_rails['backup_upload_connection'] = {
  'provider' => 'AWS',
  'region' => 'ams3',
  'aws_access_key_id' => 'AKIAKIAKI',
  'aws_secret_access_key' => 'secret123',
  'endpoint' => 'https://ams3.digitaloceanspaces.com'
}
gitlab_rails['backup_upload_remote_directory'] = 'my.s3.bucket'
Reconfigure GitLab for the changes to take effect.
If you see a 400 Bad Request error message when using Digital Ocean Spaces,
the cause may be the use of backup encryption. Because Digital Ocean Spaces
doesn't support encryption, remove or comment the line that contains
gitlab_rails['backup_encryption'].
Not all S3 providers are fully compatible with the Fog library. For example,
if you see a 411 Length Required error message after attempting to upload,
you may need to downgrade the aws_signature_version value from the default
value to 2, due to this issue.
For self-compiled installations:
Edit /home/git/gitlab/config/gitlab.yml:
backup:
  # snip
  upload:
    # Fog storage connection settings, see https://fog.github.io/storage/ .
    connection:
      provider: AWS
      region: eu-west-1
      aws_access_key_id: AKIAKIAKI
      aws_secret_access_key: 'secret123'
      # If using an IAM Profile, leave aws_access_key_id & aws_secret_access_key empty
      # ie. aws_access_key_id: ''
      # use_iam_profile: 'true'
    # The remote 'directory' to store your backups. For S3, this would be the bucket name.
    remote_directory: 'my.s3.bucket'
    # Specifies Amazon S3 storage class to use for backups, this is optional
    # storage_class: 'STANDARD'
    #
    # Turns on AWS Server-Side Encryption with Amazon Customer-Provided Encryption Keys for backups, this is optional
    # 'encryption' must be set in order for this to have any effect.
    # 'encryption_key' should be set to the 256-bit encryption key for Amazon S3 to use to encrypt or decrypt.
    # To avoid storing the key on disk, the key can also be specified via the GITLAB_BACKUP_ENCRYPTION_KEY environment variable.
    # encryption: 'AES256'
    # encryption_key: '<key>'
    #
    # Turns on AWS Server-Side Encryption with Amazon S3-Managed keys (optional)
    # https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html
    # For SSE-S3, set 'server_side_encryption' to 'AES256'.
    # For SSE-KMS, set 'server_side_encryption' to 'aws:kms'. Set
    # 'server_side_encryption_kms_key_id' to the ARN of customer master key.
    # storage_options:
    #   server_side_encryption: 'aws:kms'
    #   server_side_encryption_kms_key_id: 'arn:aws:kms:YOUR-KEY-ID-HERE'
Restart GitLab for the changes to take effect.
To use Google Cloud Storage to save backups, you must first create an access key from the Google console:
For the Linux package (Omnibus):
Edit /etc/gitlab/gitlab.rb:
gitlab_rails['backup_upload_connection'] = {
  'provider' => 'Google',
  'google_storage_access_key_id' => 'Access Key',
  'google_storage_secret_access_key' => 'Secret',
  ## If you have CNAME buckets (foo.example.com), you might run into SSL issues
  ## when uploading backups ("hostname foo.example.com.storage.googleapis.com
  ## does not match the server certificate"). In that case, uncomment the following
  ## setting. See: https://github.com/fog/fog/issues/2834
  #'path_style' => true
}
gitlab_rails['backup_upload_remote_directory'] = 'my.google.bucket'
Reconfigure GitLab for the changes to take effect.
For self-compiled installations:
Edit /home/git/gitlab/config/gitlab.yml:
backup:
  upload:
    connection:
      provider: 'Google'
      google_storage_access_key_id: 'Access Key'
      google_storage_secret_access_key: 'Secret'
    remote_directory: 'my.google.bucket'
Restart GitLab for the changes to take effect.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
Edit /etc/gitlab/gitlab.rb:
gitlab_rails['backup_upload_connection'] = {
  'provider' => 'AzureRM',
  'azure_storage_account_name' => '<AZURE STORAGE ACCOUNT NAME>',
  'azure_storage_access_key' => '<AZURE STORAGE ACCESS KEY>',
  'azure_storage_domain' => 'blob.core.windows.net', # Optional
}
gitlab_rails['backup_upload_remote_directory'] = '<AZURE BLOB CONTAINER>'
If you are using a managed identity, omit azure_storage_access_key:
gitlab_rails['backup_upload_connection'] = {
  'provider' => 'AzureRM',
  'azure_storage_account_name' => '<AZURE STORAGE ACCOUNT NAME>',
  'azure_storage_domain' => '<AZURE STORAGE DOMAIN>' # Optional
}
gitlab_rails['backup_upload_remote_directory'] = '<AZURE BLOB CONTAINER>'
Reconfigure GitLab for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
Edit /home/git/gitlab/config/gitlab.yml:
backup:
  upload:
    connection:
      provider: 'AzureRM'
      azure_storage_account_name: '<AZURE STORAGE ACCOUNT NAME>'
      azure_storage_access_key: '<AZURE STORAGE ACCESS KEY>'
    remote_directory: '<AZURE BLOB CONTAINER>'
Restart GitLab for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
For more details, see the table of Azure parameters.
This option works only for remote storage. If you want to group your backups,
you can pass a DIRECTORY environment variable:
sudo gitlab-backup create DIRECTORY=daily
sudo gitlab-backup create DIRECTORY=weekly
If you have configured GitLab to upload backups in a remote storage,
you can use the SKIP=remote option to skip uploading your backups to the remote storage.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
sudo gitlab-backup create SKIP=remote
{{< /tab >}}
{{< tab title="Self-compiled" >}}
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=remote RAILS_ENV=production
{{< /tab >}}
{{< /tabs >}}
You can send backups to a locally-mounted share (for example, NFS, CIFS, or SMB) using the Fog
Local storage provider.
To do this, you must set the following configuration keys:
- `backup_upload_connection.local_root`: mounted directory that backups are copied to.
- `backup_upload_remote_directory`: subdirectory of the `backup_upload_connection.local_root` directory. It is created if it doesn't exist. If you want to copy the tarballs to the root of your mounted directory, use `.`.

When mounted, the directory set in the `local_root` key must be owned by either:

- The `git` user. So, mounting with the `uid=` of the `git` user for CIFS and SMB.
- The user that you are executing the backup tasks as. For the Linux package, this is the `git` user.

Because file system performance may affect overall GitLab performance, we don't recommend using cloud-based file systems for storage.
Don't set the following configuration keys to the same path:
- `gitlab_rails['backup_path']` (`backup.path` for self-compiled installations).
- `gitlab_rails['backup_upload_connection'].local_root` (`backup.upload.connection.local_root` for self-compiled installations).

The `backup_path` configuration key sets the local location of the backup file. The `upload` configuration key is
intended for use when the backup file is uploaded to a separate server, perhaps for archival purposes.
If these configuration keys are set to the same location, the upload feature fails because a backup already exists at the upload location. This failure causes the upload feature to delete the backup because it assumes it's a residual file remaining after the failed upload attempt.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
Edit /etc/gitlab/gitlab.rb:
gitlab_rails['backup_upload_connection'] = {
  :provider => 'Local',
  :local_root => '/mnt/backups'
}

# The directory inside the mounted folder to copy backups to
# Use '.' to store them in the root directory
gitlab_rails['backup_upload_remote_directory'] = 'gitlab_backups'
Reconfigure GitLab for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
Edit /home/git/gitlab/config/gitlab.yml:
backup:
  upload:
    # Fog storage connection settings, see https://fog.github.io/storage/ .
    connection:
      provider: Local
      local_root: '/mnt/backups'
    # The directory inside the mounted folder to copy backups to
    # Use '.' to store them in the root directory
    remote_directory: 'gitlab_backups'
Restart GitLab for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
The backup archives created by GitLab (1393513186_2014_02_27_gitlab_backup.tar)
have the owner/group git/git and 0600 permissions by default. This is
meant to prevent other system users from reading GitLab data. If you need the backup
archives to have different permissions, you can use the archive_permissions
setting.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
Edit /etc/gitlab/gitlab.rb:
gitlab_rails['backup_archive_permissions'] = 0644 # Makes the backup archives world-readable
Reconfigure GitLab for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
Edit /home/git/gitlab/config/gitlab.yml:
backup:
  archive_permissions: 0644 # Makes the backup archives world-readable
Restart GitLab for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
[!warning] The following cron jobs do not back up your GitLab configuration files or SSH host keys.
Important: Remember to also back up your GitLab configuration files and SSH host keys separately.
You can schedule a cron job that backs up your repositories and GitLab metadata.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
Edit the crontab for the root user:
sudo su -
crontab -e
There, add the following line to schedule the backup for every day at 2 AM:
0 2 * * * /opt/gitlab/bin/gitlab-backup create CRON=1
{{< /tab >}}
{{< tab title="Self-compiled" >}}
Edit the crontab for the git user:
sudo -u git crontab -e
Add the following lines at the bottom:
# Create a full backup of the GitLab repositories and SQL database every day at 2am
0 2 * * * cd /home/git/gitlab && PATH=/usr/local/bin:/usr/bin:/bin bundle exec rake gitlab:backup:create RAILS_ENV=production CRON=1
{{< /tab >}}
{{< /tabs >}}
The CRON=1 environment setting directs the backup script to hide all progress
output if there aren't any errors. This is recommended to reduce cron spam.
When troubleshooting backup problems, however, replace CRON=1 with --trace to log verbosely.
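For example, to run the scheduled backup manually with verbose tracing when debugging (Linux package command shown; `--trace` is a standard Rake flag):

```shell
sudo gitlab-rake gitlab:backup:create --trace
```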
[!warning] The process described in this section doesn't work if you used a custom filename for your backups.
To prevent regular backups from using all your disk space, you may want to set
a limited lifetime for backups. The next time the backup task runs, backups
older than the backup_keep_time are pruned.
This configuration option manages only local files. GitLab doesn't prune old files stored in a third-party object storage because the user may not have permission to list and delete files. It's recommended that you configure the appropriate retention policy for your object storage.
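For example, a sketch of a matching retention policy applied with the AWS CLI; the bucket name reuses my.s3.bucket from the upload examples, and the 7-day window mirrors the backup_keep_time example below, but both are assumptions to adjust:

```shell
# Expire remote backup objects after 7 days, matching backup_keep_time.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my.s3.bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-gitlab-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 7}
    }]
  }'
```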
See also:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
Edit /etc/gitlab/gitlab.rb:
## Limit backup lifetime to 7 days - 604800 seconds
gitlab_rails['backup_keep_time'] = 604800
Reconfigure GitLab for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
Edit /home/git/gitlab/config/gitlab.yml:
backup:
  ## Limit backup lifetime to 7 days - 604800 seconds
  keep_time: 604800
Restart GitLab for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
Do not back up or restore GitLab through a PgBouncer connection. These tasks must bypass PgBouncer and connect directly to the PostgreSQL primary database node, or they cause a GitLab outage.
When the GitLab backup or restore task is used with PgBouncer, the following error message is shown:
ActiveRecord::StatementInvalid: PG::UndefinedTable
Each time the GitLab backup runs, GitLab starts generating 500 errors, and PostgreSQL logs errors about missing tables:
ERROR: relation "tablename" does not exist at character 123
This happens because the task uses pg_dump, which sets a null search path and explicitly includes the schema in every SQL query to address CVE-2018-1058.
Because connections are reused with PgBouncer in transaction pooling mode,
PostgreSQL fails to search the default public schema. As a result,
this clearing of the search path causes tables and columns to appear
missing.
Technical references:
There are two ways to fix this:
{{< history >}}
{{< /history >}}
By default, GitLab uses the database configuration stored in a
configuration file (database.yml). However, you can override the database settings
for the backup and restore task by setting environment
variables that are prefixed with GITLAB_BACKUP_:
- `GITLAB_BACKUP_PGHOST`
- `GITLAB_BACKUP_PGUSER`
- `GITLAB_BACKUP_PGPORT`
- `GITLAB_BACKUP_PGPASSWORD`
- `GITLAB_BACKUP_PGSSLMODE`
- `GITLAB_BACKUP_PGSSLKEY`
- `GITLAB_BACKUP_PGSSLCERT`
- `GITLAB_BACKUP_PGSSLROOTCERT`
- `GITLAB_BACKUP_PGSSLCRL`
- `GITLAB_BACKUP_PGSSLCOMPRESSION`

For example, to override the database host and port to use 192.168.1.10 and port 5432 with the Linux package (Omnibus):
sudo GITLAB_BACKUP_PGHOST=192.168.1.10 GITLAB_BACKUP_PGPORT=5432 /opt/gitlab/bin/gitlab-backup create
If you run GitLab on multiple databases, you can override database settings by including
the database name in the environment variable. For example, if your main and ci databases are
hosted on different database servers, you would append their name after the GITLAB_BACKUP_ prefix,
leaving the PG* names as is:
sudo GITLAB_BACKUP_MAIN_PGHOST=192.168.1.10 GITLAB_BACKUP_CI_PGHOST=192.168.1.12 /opt/gitlab/bin/gitlab-backup create
See the PostgreSQL documentation for more details on what these parameters do.
gitaly-backup for repository backup and restore

The gitaly-backup binary is used by the backup Rake task to create and restore repository backups from Gitaly.
gitaly-backup replaces the previous backup method that directly calls RPCs on Gitaly from GitLab.
The backup Rake task must be able to find this executable. In most cases, you don't need to change
the path to the binary as it should work fine with the default path /opt/gitlab/embedded/bin/gitaly-backup.
If you have a specific reason to change the path, it can be configured in the Linux package (Omnibus):
Add the following to /etc/gitlab/gitlab.rb:
gitlab_rails['backup_gitaly_backup_path'] = '/path/to/gitaly-backup'
Reconfigure GitLab for the changes to take effect.
Because every deployment may have different capabilities, you should first review what data needs to be backed up to better understand if, and how, you can leverage them.
For example, if you use Amazon RDS, you might choose to use its built-in backup and restore features to handle your GitLab PostgreSQL data, and exclude PostgreSQL data when using the backup command.
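For example, a sketch that excludes the PostgreSQL dump from the archive when an external service such as RDS automated snapshots already covers it (Linux package command shown; the db component name is documented above):

```shell
sudo gitlab-backup create SKIP=db
```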
See also:
In the following cases, consider using file system data transfer or snapshots as part of your backup strategy:
[!warning] Gitaly Cluster (Praefect) does not support snapshot backups.
When considering using file system data transfer or snapshots:
- GitLab must be shut down (`sudo gitlab-ctl stop`) before doing a file system transfer (with rsync, for example) or taking a snapshot to ensure all data in memory is flushed to disk. GitLab consists of multiple subsystems (Gitaly, database, file storage) that have their own buffers, queues, and storage layers. GitLab transactions can span these subsystems, which results in parts of a transaction taking different paths to disk. On live systems, file system transfers and snapshot runs fail to capture parts of the transaction still in memory.

Example: Amazon Elastic Block Store (EBS)

- A GitLab server using the Linux package, hosted on Amazon AWS, with an EBS drive containing an ext4 file system mounted at /var/opt/gitlab. In this case, you could make an application backup by taking an EBS snapshot.

Example: Logical Volume Manager (LVM) snapshots + rsync

- A GitLab server using the Linux package, with an LVM logical volume mounted at /var/opt/gitlab.
- Replicating the /var/opt/gitlab directory using rsync would not be reliable because too many files would change while rsync is running.
- Instead of rsync-ing /var/opt/gitlab, we create a temporary LVM snapshot, which we mount as a read-only file system at /mnt/gitlab_backup. This allows a longer-running rsync job to create a consistent replica on a remote server (see the sketch below).

If you're running GitLab on a virtualized server, you can possibly also create VM snapshots of the entire GitLab server. It's not uncommon however for a VM snapshot to require you to power down the server, which limits this solution's practical use.
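A sketch of the LVM snapshot approach described above; the volume group and logical volume names, snapshot size, and destination host are assumptions to adjust for your layout:

```shell
# Create a read-only snapshot, replicate it, then clean up.
sudo lvcreate --snapshot --size 10G --name gitlab-snap /dev/vg0/gitlab-data
sudo mount -o ro /dev/vg0/gitlab-snap /mnt/gitlab_backup
sudo rsync -a /mnt/gitlab_backup/ backup-host:/srv/gitlab-replica/
sudo umount /mnt/gitlab_backup
sudo lvremove -f /dev/vg0/gitlab-snap
```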
First, ensure you back up existing GitLab data while skipping repositories:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
sudo gitlab-backup create SKIP=repositories
{{< /tab >}}
{{< tab title="Self-compiled" >}}
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=repositories RAILS_ENV=production
{{< /tab >}}
{{< /tabs >}}
For manually backing up the Git repository data on disk, there are multiple possible strategies:
Git repositories must be copied in a consistent way. If repositories are copied during concurrent write operations, inconsistencies or corruption issues can occur. This can lead to repository corruption, missing commits, or incomplete backup data.
To prevent writes to the Git repository data, there are two possible approaches:
Use maintenance mode to place GitLab in a read-only state.
Create explicit downtime by stopping all Gitaly services before backing up the repositories:
sudo gitlab-ctl stop gitaly
# execute git data copy step
sudo gitlab-ctl start gitaly
You can copy Git repository data using any method, as long as writes are prevented on the data being copied (to prevent inconsistencies and corruption issues). In order of preference and safety, the recommended methods are:
Use rsync with archive-mode, delete, and checksum options, for example:
rsync -aR --delete --checksum source destination # be extra safe with the order as it will delete existing data if inverted
Use a tar pipe to copy the entire repository's directory to another server or location.
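A sketch of a tar pipe over SSH, assuming the Linux package's default repository path and a reachable backup host (both are assumptions):

```shell
# Stream the repositories directory to a remote host without an intermediate file.
sudo tar -C /var/opt/gitlab/git-data -cf - repositories | \
  ssh backup-host 'tar -C /srv/gitlab-repo-backup -xf -'
```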
Use sftp, scp, cp, or any other copying method.
One way of backing up repositories without requiring instance-wide downtime is to programmatically mark projects as read-only while copying the underlying data.
There are a few possible downsides to this:
There is an experimental script that attempts to automate this process in the Geo team Runbooks project.