doc/administration/packages/container_registry_metadata_database.md
{{< details >}}
{{< /details >}}
{{< history >}}
{{< /history >}}
The metadata database provides several enhancements to the container registry that improve performance and add new features. The work on the GitLab Self-Managed release of the registry metadata database feature is tracked in epic 5521.
By default, the container registry uses object storage or a local file system to persist metadata related to container images. This method to store metadata limits how efficiently the data can be accessed, especially data spanning multiple images, such as when listing tags. By using a database to store this data, many new features are possible, including online garbage collection which removes old data automatically with zero downtime.
This database works in conjunction with the storage already used by the registry, but does not replace object storage or a file system. You must continue to maintain a storage solution even after performing a metadata import to the metadata database.
For Helm Charts installations, see Manage the container registry metadata database in the Helm Charts documentation.
The metadata database architecture supports performance improvements, bug fixes, and new features that are not available with legacy metadata storage. These enhancements include:
Due to technical constraints of legacy metadata storage, new features are only implemented for the metadata database version. Non-security bug fixes might be limited to the metadata database version.
The createdAt and publishedAt timestamp values for image tags are set to the import date. This is intentional to ensure consistency, because the legacy registry does not collect tag published dates for all images. While some images have build dates in their metadata, many do not. For more information, see issue 1384.
You can import metadata from existing registries to the metadata database, and use online garbage collection.
Some database-enabled features are only enabled for GitLab.com, and automatic database provisioning for the registry database is not available. Review the feature support table in the feedback issue for the status of features related to the container registry database.
Prerequisites:
For installations that have never written data to the container registry, no import is required. You must only enable the database before writing data to the registry.
For more information, see the instructions for new installations.
You can import your existing container registry metadata using either the one-step or the three-step import method. A few factors affect the duration of the import:
You do not need to do the following before importing:
[!note] The metadata import only targets tagged images. Untagged and unreferenced manifests, and the layers exclusively referenced by them, are left behind and become inaccessible. Untagged images were never visible through the GitLab UI or API, but they could become "dangling" objects left behind in the storage backend. After the import, all images are subject to continuous online garbage collection, which by default deletes any untagged and unreferenced manifests and layers that remain for longer than 24 hours.
If you regularly run offline garbage collection, use the one-step import method. It takes a similar amount of time and is a simpler operation than the three-step import method.
If your registry is too large to regularly run offline garbage collection, use the three-step import method to significantly reduce the read-only time.
If you use an external database, make sure you set up the external database connection before proceeding with a migration path.
For more information, see Using an external database.
{{< history >}}
{{< /history >}}
To resume an interrupted import, skip repositories that were pre-imported within the last 72 hours. Repositories are pre-imported either:
To control how recent a pre-import must be to be skipped, configure the --pre-import-skip-recent flag, which defaults to 72 hours.
For example:
# Skip repositories imported within 6 hours from the start of the import command
--pre-import-skip-recent 6h
# Disable skipping behavior
--pre-import-skip-recent 0
For more information about valid duration units, see Go duration strings.
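As an illustration, the flag can be passed to the import command. This is a sketch that assumes the Linux package `gitlab-ctl registry-database import` subcommand; verify the subcommand and flags for your version with `--help` before relying on it:

```shell
# Resume an interrupted import, skipping repositories pre-imported
# within the last 6 hours (sketch; verify the subcommand with --help).
sudo gitlab-ctl registry-database import --pre-import-skip-recent 6h

# Re-import everything, ignoring earlier pre-import progress.
sudo gitlab-ctl registry-database import --pre-import-skip-recent 0
```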
It may take approximately 48 hours after the import for your registry storage to decrease. This delay is a normal and expected part of online garbage collection, and ensures that garbage collection does not interfere with image pushes. See the monitor online garbage collection section for how to track the progress and health of the online garbage collector.
{{< history >}}
{{< /history >}}
Prefer mode is a configuration option for the metadata database that lets the registry fall back to legacy metadata storage when an existing registry has not been imported to the database yet.
To enable prefer mode:
In /etc/gitlab/gitlab.rb, set database.enabled to "prefer" instead of true or false:
registry['database'] = {
'enabled' => 'prefer',
'host' => '<your_database_host>',
'port' => 5432,
'user' => '<your_database_user>',
'password' => '<your_database_password>',
'dbname' => '<your_database_name>',
}
Save the file and reconfigure GitLab.
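The reconfigure step is the standard Linux package command:

```shell
sudo gitlab-ctl reconfigure
```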
After you reconfigure GitLab, the registry evaluates which metadata backend to use at startup based on lockfiles that track previous writes to the filesystem or database:
- If the registry finds filesystem metadata, it falls back to legacy storage and behaves as if enabled: false until you complete a metadata import.
- If the registry finds database metadata, it uses the metadata database and behaves as if enabled: true.

The fallback decision occurs once at startup and does not change while the registry is running. There is no automatic retry or reconnection to the database after a fallback. To move from filesystem to database mode after a fallback, complete the standard metadata import and restart the registry.
To verify which metadata backend your registry is using, use one of the following methods.
Send a request to the registry /v2/ endpoint:
curl --silent --head "https://registry.example.com/v2/" | grep --ignore-case "gitlab-container-registry-database-enabled"
Inspect the gitlab-container-registry-database-enabled response header:

- true: The registry is using the metadata database.
- false: The registry is using legacy filesystem storage.

To check lockfiles on disk, look for these files in the configured storage backend at
<rootdirectory>/docker/registry/lockfiles/:
- database-in-use: The registry is using the metadata database.
- filesystem-in-use: The registry is using legacy filesystem storage.

If both lockfiles exist, the registry is in an invalid state and does not start.
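For a filesystem-backed registry on a Linux package installation, you can list the lockfiles directly. The data path below is an assumption based on the default Linux package layout; substitute your configured storage root:

```shell
# List registry lockfiles under the configured storage root.
# /var/opt/gitlab/gitlab-rails/shared/registry is the default Linux
# package path and may differ on your instance.
ls /var/opt/gitlab/gitlab-rails/shared/registry/docker/registry/lockfiles/
```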
The registry logs which metadata backend it selects at startup.
To check registry logs, look for one of the following messages:
If the registry falls back to legacy storage (prefer mode only):
database prefer mode enabled, but found filesystem metadata: falling back to legacy metadata
If the registry connects to the database:
using the metadata database
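On a Linux package installation, you can search for these messages in the registry log. The log path is an assumption based on the default Linux package layout:

```shell
# Search the registry log for the metadata backend selected at startup.
sudo grep -E "using the metadata database|falling back to legacy metadata" \
  /var/log/gitlab/registry/current
```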
The container registry supports two types of migrations:
By default, the registry applies both regular schema and post-deployment migrations simultaneously. To reduce downtime during upgrades, you can skip post-deployment migrations and apply them manually after the application starts.
{{< tabs >}}
{{< tab title="GitLab 18.7 and later" >}}
To apply both regular schema and post-deployment migrations before the application starts:
Run database migrations:
sudo gitlab-ctl registry-database migrate up
To skip post-deployment migrations:
Run regular schema migrations only:
sudo gitlab-ctl registry-database migrate up --skip-post-deployment
As an alternative to the --skip-post-deployment flag, you can also set the SKIP_POST_DEPLOYMENT_MIGRATIONS environment variable to true:
SKIP_POST_DEPLOYMENT_MIGRATIONS=true sudo gitlab-ctl registry-database migrate up
After starting the application, apply any pending post-deployment migrations:
sudo gitlab-ctl registry-database migrate up
{{< /tab >}}
{{< tab title="GitLab 18.6 and earlier" >}}
To apply both regular schema and post-deployment migrations before the application starts:
Run database migrations:
sudo -u registry gitlab-ctl registry-database migrate up
To skip post-deployment migrations:
Run regular schema migrations only:
sudo -u registry gitlab-ctl registry-database migrate up --skip-post-deployment
As an alternative to the --skip-post-deployment flag, you can also set the SKIP_POST_DEPLOYMENT_MIGRATIONS environment variable to true:
SKIP_POST_DEPLOYMENT_MIGRATIONS=true sudo -u registry gitlab-ctl registry-database migrate up
After starting the application, apply any pending post-deployment migrations:
sudo -u registry gitlab-ctl registry-database migrate up
{{< /tab >}}
{{< /tabs >}}
[!note] The migrate up command offers some extra flags that control how the migrations are applied. Run sudo gitlab-ctl registry-database migrate up --help for details.
The initial runs of online garbage collection after the import vary in duration based on the number of imported images. Monitor the efficiency and health of your online garbage collection during this period.
After completing an import, expect the database to experience a period of high load as the garbage collection queues drain. This high load is caused by a high number of individual database calls from the online garbage collector processing the queued tasks.
Regularly check PostgreSQL and registry logs for any errors or warnings. In the registry logs,
pay special attention to logs filtered by component=registry.gc.*.
Use monitoring tools like Prometheus and Grafana to visualize and track garbage collection metrics,
focusing on metrics with a prefix of registry_gc_*. These include the number of objects
marked for deletion, objects successfully deleted, run intervals, and durations.
See enable the registry debug server
for how to enable Prometheus.
Monitor the health and status of garbage collection task queues for blobs and manifests.
{{< tabs >}}
{{< tab title="GitLab 18.10 and later" >}}
The following command displays information related to online garbage collection.
sudo gitlab-ctl registry-database gc-stats
Example output:
=== Blob Review Queue ===
Tasks Pending Removal: 42
Tasks ready for GC review (review_after has passed).
┌───────────────────────────────────────────────────────────────────┬─────────────────────┬─────────────────┐
│ DIGEST │ REVIEW AFTER │ EVENT │
├───────────────────────────────────────────────────────────────────┼─────────────────────┼─────────────────┤
│ sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22e │ 2026-01-16 21:56:13 │ blob_upload │
│ sha256:b4f5e6d7c8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2 │ 2026-01-16 19:56:13 │ manifest_delete │
│ sha256:c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3 │ 2026-01-16 17:56:13 │ layer_delete │
└───────────────────────────────────────────────────────────────────┴─────────────────────┴─────────────────┘
Long Overdue Tasks: 5
Tasks pending longer than configured delay - may need attention.
┌───────────────────────────────────────────────────────────────────┬─────────────────────┬──────────────┬─────────┐
│ DIGEST │ REVIEW AFTER │ EVENT │ OVERDUE │
├───────────────────────────────────────────────────────────────────┼─────────────────────┼──────────────┼─────────┤
│ sha256:d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4 │ 2026-01-11 23:56:13 │ blob_upload │ 4d 0h │
│ sha256:e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5 │ 2026-01-13 23:56:13 │ layer_delete │ 2d 0h │
└───────────────────────────────────────────────────────────────────┴─────────────────────┴──────────────┴─────────┘
High Retry Tasks: 2
Tasks with >10 review attempts - may indicate persistent issues.
┌───────────────────────────────────────────────────────────────────┬─────────────────────┬─────────────────┬─────────┐
│ DIGEST │ REVIEW AFTER │ EVENT │ RETRIES │
├───────────────────────────────────────────────────────────────────┼─────────────────────┼─────────────────┼─────────┤
│ sha256:f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6 │ 2026-01-17 00:56:13 │ blob_upload │ 15 │
│ sha256:a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7 │ 2026-01-17 01:56:13 │ manifest_delete │ 12 │
└───────────────────────────────────────────────────────────────────┴─────────────────────┴─────────────────┴─────────┘
=== Manifest Review Queue ===
Tasks Pending Removal: 128
Tasks ready for GC review (review_after has passed).
┌───────────────┬─────────────┬─────────────────────┬──────────────────────┐
│ REPOSITORY ID │ MANIFEST ID │ REVIEW AFTER │ EVENT │
├───────────────┼─────────────┼─────────────────────┼──────────────────────┤
│ 1001 │ 12345 │ 2026-01-16 22:56:13 │ tag_delete │
│ 1002 │ 67890 │ 2026-01-16 20:56:13 │ manifest_upload │
│ 1003 │ 11111 │ 2026-01-16 18:56:13 │ tag_switch │
│ 2001 │ 22222 │ 2026-01-16 16:56:13 │ manifest_list_delete │
└───────────────┴─────────────┴─────────────────────┴──────────────────────┘
Long Overdue Tasks: 8
Tasks pending longer than configured delay - may need attention.
┌───────────────┬─────────────┬─────────────────────┬─────────────────┬─────────┐
│ REPOSITORY ID │ MANIFEST ID │ REVIEW AFTER │ EVENT │ OVERDUE │
├───────────────┼─────────────┼─────────────────────┼─────────────────┼─────────┤
│ 3001 │ 33333 │ 2026-01-12 23:56:13 │ tag_delete │ 3d 0h │
│ 3002 │ 44444 │ 2026-01-14 23:56:13 │ manifest_delete │ 1d 0h │
└───────────────┴─────────────┴─────────────────────┴─────────────────┴─────────┘
High Retry Tasks: 3
Tasks with >10 review attempts - may indicate persistent issues.
┌───────────────┬─────────────┬─────────────────────┬─────────────────┬─────────┐
│ REPOSITORY ID │ MANIFEST ID │ REVIEW AFTER │ EVENT │ RETRIES │
├───────────────┼─────────────┼─────────────────────┼─────────────────┼─────────┤
│ 4001 │ 55555 │ 2026-01-17 00:26:13 │ tag_delete │ 18 │
│ 4002 │ 66666 │ 2026-01-17 00:41:13 │ manifest_upload │ 11 │
└───────────────┴─────────────┴─────────────────────┴─────────────────┴─────────┘
{{< /tab >}}
{{< tab title="GitLab 18.9 and earlier" >}}
The following queries return tasks that were retried more than 10 times, or were eligible for review for longer than 24 hours. The online garbage collector should pick up an item for review within 24 hours with few failed attempts. If any rows are returned, investigate the health of your online garbage collector.
For manifests:
SELECT
repository_id,
manifest_id,
ROUND(
EXTRACT(
EPOCH
FROM
AGE(NOW(), review_after)
) / 3600
) AS hours_eligible_for_review,
review_count as failed_review_attempts,
event
FROM
gc_manifest_review_queue
WHERE
review_after < NOW() - INTERVAL '24 hours'
OR review_count > 10
LIMIT
20;
For blobs:
SELECT
substring(encode(digest, 'hex'), 3) AS digest,
ROUND(
EXTRACT(
EPOCH
FROM
AGE(NOW(), review_after)
) / 3600
) AS hours_eligible_for_review,
review_count as failed_review_attempts,
event
FROM
gc_blob_review_queue
WHERE
review_after < NOW() - INTERVAL '24 hours'
OR review_count > 10
LIMIT
20;
Check the number of tasks eligible for review by running the following queries:
SELECT COUNT(*) FROM gc_blob_review_queue WHERE review_after < NOW();
SELECT COUNT(*) FROM gc_manifest_review_queue WHERE review_after < NOW();
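One way to run these queries on a Linux package installation is through gitlab-psql. This sketch assumes the registry database is named registry; adjust the name to match your configuration:

```shell
# Count garbage collection tasks that are currently eligible for review.
sudo gitlab-psql -d registry \
  -c "SELECT COUNT(*) FROM gc_blob_review_queue WHERE review_after < NOW();"
sudo gitlab-psql -d registry \
  -c "SELECT COUNT(*) FROM gc_manifest_review_queue WHERE review_after < NOW();"
```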
{{< /tab >}}
{{< /tabs >}}
Generally, there should be relatively low counts of items ready for review, often nearing zero. However, there might be more if:
If there are tasks with many retries or that are long overdue, check the registry logs
for messages related to garbage collection. Filter for entries with
component="registry.gc.*" and investigate any error messages.
The unfiltered sizes of the gc_manifest_review_queue and gc_blob_review_queue
are not good indicators of the health of the online garbage collector. These
queues constantly have new entries added to them, so they never fully clear
for an active registry.
Additionally, not all items in these queues are removed from storage. Consult the online garbage collection specification for a full explanation of these queues.
Large numbers of tasks eligible for review are also not necessarily a cause for concern. The garbage collector might be working through items caused by a spike in activity.
Similarly, the created_at date of these tasks alone is not a good health indicator.
When an event adds the same blob or manifest to the queue, the review_after
of the existing task is updated, which postpones the review. No duplicate task is created.
This can occur any number of times, so tasks created months ago are not a cause for concern.
If the number of tasks eligible for review remains high, you can increase the
frequency of the garbage collection blob and manifest worker runs by updating your
interval configuration from the default (5s) to 1s:
registry['gc'] = {
'blobs' => {
'interval' => '1s'
},
'manifests' => {
'interval' => '1s'
}
}
After the import load has been cleared, you should fine-tune these settings for the long term to avoid unnecessary CPU load on the database and registry instances. You can gradually increase the interval to a value that balances performance and resource usage.
To verify data consistency after the import, use the crane validate
tool. This tool checks that all image layers and manifests in your container registry
are accessible and correctly linked. By running crane validate, you confirm that
the images in your registry are complete and accessible.
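For example, to validate a single image remotely (the registry host and image path are placeholders):

```shell
# Check that every layer and manifest of the image is accessible and
# correctly linked, querying the registry over the network.
crane validate --remote registry.example.com/<group>/<project>/<image>:<tag>
```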
If most of your images are tagged, garbage collection does not significantly reduce storage space, because it only deletes untagged images.
Implement cleanup policies to remove unneeded tags, which eventually causes images to be removed through garbage collection and storage space to be recovered.
By default, GitLab 18.3 and later preprovisions a logical database within the main GitLab database for container registry metadata. However, to scale your registry, you might want to use a dedicated external database.
Afterward, follow the same steps for the default database, substituting your own database values. Start with the database disabled, taking care to enable and disable the database as instructed:
registry['database'] = {
'enabled' => false,
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
{{< history >}}
{{< /history >}}
When the metadata database is turned on, backups must include both the registry storage backend and the database.
The backup method depends on your storage type:
- gitlab-backup includes the registry automatically.

Back up storage and database as close together in time as possible to ensure a consistent registry state. To restore the registry, you must apply both backups.
In GitLab 18.10 and later, gitlab-backup create and gitlab-backup restore include the
registry metadata database automatically when the metadata database is configured. On Helm chart
(Kubernetes) installations, backup-utility behaves the same way.
The metadata database must be configured in gitlab.rb or in your Helm values file.
No additional configuration is required. The backup tools read the registry database connection settings from the existing configuration.
If you call the backup Rake task directly, you must set the following environment variables on the node that runs the backup:
| Variable | Required | Description |
|---|---|---|
| `REGISTRY_DATABASE_HOST` | Yes | The database host. |
| `REGISTRY_DATABASE_NAME` | Yes | The database name. |
| `REGISTRY_DATABASE_USER` | Yes | The database user. |
| `REGISTRY_DATABASE_PORT` | No | The database port. Defaults to 5432. |
| `REGISTRY_DATABASE_PASSWORD` | No | The database password. |
| `REGISTRY_DATABASE_SSLMODE` | No | Whether to require SSL. Set to `require` or omit. |
| `REGISTRY_DATABASE_SSLCERT` | No | The path to the client certificate. |
| `REGISTRY_DATABASE_SSLKEY` | No | The path to the client private key. |
| `REGISTRY_DATABASE_SSLROOTCERT` | No | The path to the CA certificate. |
| `REGISTRY_DATABASE_CONNECT_TIMEOUT` | No | The connection timeout in seconds. |
The backup Rake task activates the registry database backup when it detects any of the following credentials:
- REGISTRY_DATABASE_PASSWORD
- REGISTRY_DATABASE_SSLCERT
- REGISTRY_DATABASE_SSLKEY
- REGISTRY_DATABASE_SSLROOTCERT

Without credentials, the registry database is not included in the backup. The same environment variables must be set when restoring.
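For example, to run the backup Rake task directly with the registry database variables set (all values are placeholders):

```shell
# Run a backup with the registry database connection details provided
# through environment variables. Substitute your own values.
sudo REGISTRY_DATABASE_HOST=<registry_database_host> \
     REGISTRY_DATABASE_NAME=<registry_database_name> \
     REGISTRY_DATABASE_USER=<registry_database_user> \
     REGISTRY_DATABASE_PASSWORD=<registry_database_password> \
     gitlab-rake gitlab:backup:create
```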
If you use GitLab 18.9 or earlier, or if you prefer to manage registry database
backups separately, use standard PostgreSQL tools like pg_dump and pg_restore
to back up and restore the registry database independently.
{{< history >}}
{{< /history >}}
For Helm chart (Kubernetes) deployments, configure the toolbox pod with dedicated database credentials for backup and restore operations. Two separate PostgreSQL users are required:
Configure one or both users, depending on which operations you need.
Before you begin, enable the container registry metadata database by setting registry.database.enabled: true.
You must manually create the Kubernetes Secret before deploying. The chart does not auto-generate this secret.
For example, to create a secret with both backup and restore passwords:
kubectl create secret generic my-registry-db-password-secret \
--from-literal=backupPassword="BACKUP_USER_PASSWORD" \
--from-literal=restorePassword="RESTORE_USER_PASSWORD"
Add the required YAML to your Helm values.yaml to configure backup and restore users. Refer to the following table for configuration setting definitions.
| Setting | Default | Description |
|---|---|---|
| `backupUser` | | PostgreSQL username for backup operations. Required to enable registry database backups. |
| `restoreUser` | | PostgreSQL username for restore operations. Required to enable registry database restores. |
| `password.secret` | `<release-name>-toolbox-registry-database-password` | Name of the Kubernetes Secret containing the passwords. |
| `password.backupPasswordKey` | `backupPassword` | Key in the Kubernetes Secret for the backup user's password. |
| `password.restorePasswordKey` | `restorePassword` | Key in the Kubernetes Secret for the restore user's password. |
The following example configures both backup and restore users:
gitlab:
toolbox:
backups:
registry:
database:
# PostgreSQL username for backing up the registry database
backupUser: "registry_backup"
# PostgreSQL username for restoring the registry database
restoreUser: "registry_restore"
password:
# Name of the Kubernetes Secret containing the passwords
secret: "my-registry-db-password-secret"
# Key in the Secret for the backup user's password
backupPasswordKey: "backupPassword"
# Key in the Secret for the restore user's password
restorePasswordKey: "restorePassword"
If no backupUser or restoreUser is configured, the registry database backup
is silently skipped and the toolbox pod operates normally.
The backup user requires read-only access to dump the registry database. The restore user requires superuser privileges to restore it.
For Linux package installations, these users and permissions are created
automatically when database_backup_username, database_backup_password,
database_restore_username, and database_restore_password are configured.
For self-compiled or external database installations, create the users and grant permissions manually:
-- Create the backup user with minimal privileges for pg_dump.
-- The registry database uses both the 'public' and 'partitions' schemas.
CREATE ROLE registry_backup WITH LOGIN PASSWORD 'password'
NOINHERIT NOCREATEDB NOSUPERUSER NOREPLICATION;
GRANT CONNECT ON DATABASE registry TO registry_backup;
-- Grant read-only access on both schemas
GRANT USAGE ON SCHEMA public TO registry_backup;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO registry_backup;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO registry_backup;
ALTER DEFAULT PRIVILEGES FOR ROLE registry IN SCHEMA public
GRANT SELECT ON TABLES TO registry_backup;
ALTER DEFAULT PRIVILEGES FOR ROLE registry IN SCHEMA public
GRANT SELECT ON SEQUENCES TO registry_backup;
GRANT USAGE ON SCHEMA partitions TO registry_backup;
GRANT SELECT ON ALL TABLES IN SCHEMA partitions TO registry_backup;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA partitions TO registry_backup;
ALTER DEFAULT PRIVILEGES FOR ROLE registry IN SCHEMA partitions
GRANT SELECT ON TABLES TO registry_backup;
ALTER DEFAULT PRIVILEGES FOR ROLE registry IN SCHEMA partitions
GRANT SELECT ON SEQUENCES TO registry_backup;
-- Create the restore user with superuser privileges.
-- SUPERUSER is required for database restore operations because the
-- restore process must SET ROLE to the registry owner and
-- CREATE TRIGGER on all tables.
CREATE ROLE registry_restore WITH LOGIN PASSWORD 'password' SUPERUSER;
When configured, the chart creates a volume mounted at /etc/gitlab/registry-db/ in both the
toolbox Deployment and the backup CronJob. The volume is read-only and includes the
following:
- Credential files for backupUser and restoreUser.

The backup-utility in the toolbox pod reads these files and includes the registry
metadata database in backup and restore operations.
If any required credential files are missing, the backup-utility logs a warning
and continues with the backup of other resources.
SSL certificate paths for mutual TLS authentication with PostgreSQL are only
included when SSL is configured globally (global.psql.ssl). If SSL is
configured only at the registry subchart level (registry.database.ssl), those
settings are not passed to the toolbox.
When using Geo, each site maintains its own registry database and object storage. Back up the registry database and object storage at each site independently. Geo does not replicate the registry database between sites.
To downgrade the registry to a previous version after the import is complete, you must restore a backup of the desired version.
When using GitLab Geo with the container registry, you must configure separate database and object storage stacks for the registry at each site. Geo replication to the container registry uses events generated from registry notifications, rather than by database replication.
Each Geo site requires a separate, site-specific:
This diagram illustrates the data flow and basic architecture:
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TB
accTitle: Geo architecture for the container registry metadata database
accDescr: The primary site sends events to the secondary site through the GitLab Rails notification system for Geo replication.
subgraph "Primary site"
P_Rails[GitLab Rails]
P_Reg[Container registry]
P_RegDB[(Registry database)]
P_Obj[(Object storage)]
P_Reg --> P_RegDB
P_RegDB --> P_Obj
end
subgraph "Secondary site"
S_Rails[GitLab Rails]
S_Reg[Container registry]
S_RegDB[(Registry database)]
S_Obj[(Object storage)]
S_Reg --> S_RegDB
S_RegDB --> S_Obj
end
P_Reg -- "Notifications" --> P_Rails
P_Rails -- "Events" --> S_Rails
S_Rails --> S_Reg
Use separate database instances on each site because:
You can revert your registry to use object storage metadata after completing a metadata import.
[!warning] When you revert to object storage metadata, any container images, tags, or repositories added or deleted between the import completion and this revert operation are not available.
To revert to object storage metadata:
Restore a backup taken before the migration.
Add the following configuration to your /etc/gitlab/gitlab.rb file:
registry['database'] = {
'enabled' => false,
}
Save the file and reconfigure GitLab.
To review errors and troubleshooting solutions and workarounds, see Troubleshooting the container registry metadata database.