This guide walks you through copying storage objects from a managed Supabase platform project to a self-hosted instance, using rclone to perform an S3-to-S3 copy.
<Admonition type="caution">

Direct file copy (for example, downloading files and placing them into `volumes/storage/`) does not work. Self-hosted Storage uses an internal file structure that differs from what you get when downloading files from the platform. Use the S3 protocol to transfer objects so that Storage creates the correct metadata records.

</Admonition>
## Prerequisites

You need:

- A managed Supabase platform project containing the storage objects to copy
- A self-hosted Supabase instance with the S3 protocol configured (see Configure S3 Storage)
- rclone installed on the machine performing the copy
## Step 1: Get your platform S3 credentials

In your managed Supabase project dashboard, go to Storage > S3 Configuration > Access keys. Generate a new access key pair and copy:

- The access key ID and secret access key
- The endpoint: `https://<project-ref>.supabase.co/storage/v1/s3`
- The region (for example, `us-east-1`)

For better performance with large files, use the direct storage hostname: `https://<project-ref>.storage.supabase.co/storage/v1/s3`
## Step 2: Create buckets on the self-hosted instance

Buckets must exist on the destination before you can copy objects into them. You can create them through the dashboard UI or with the SQL Editor.
<Admonition type="tip">

If you already restored your platform database to self-hosted using the restore guide, your bucket definitions are already present. You can skip this step.

</Admonition>

To list your platform buckets, connect to your platform database and run:
```sql
select id, name, public from storage.buckets order by name;
```
Then create matching buckets on your self-hosted instance. Connect to your self-hosted database and run:
```sql
insert into storage.buckets (id, name, public)
values
  ('your-storage-bucket', 'your-storage-bucket', false)
on conflict (id) do nothing;
```
Repeat for each bucket, setting `public` to `true` or `false` as appropriate.
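If you have many buckets, you can script the inserts instead of repeating them by hand. A minimal sketch, assuming `psql` is installed and `PLATFORM_DB_URL` and `SELF_HOSTED_DB_URL` are placeholder connection strings for the two databases:

```bash
# Read bucket definitions from the platform database and recreate them
# on the self-hosted instance. PLATFORM_DB_URL and SELF_HOSTED_DB_URL are
# placeholders; substitute your own connection strings.
psql "$PLATFORM_DB_URL" -At -F $'\t' \
  -c "select id, name, public::text from storage.buckets" |
while IFS=$'\t' read -r id name public; do
  echo "Creating bucket: $name"
  psql "$SELF_HOSTED_DB_URL" -c \
    "insert into storage.buckets (id, name, public)
     values ('$id', '$name', $public)
     on conflict (id) do nothing;"
done
```

Note that this sketch does not escape single quotes in bucket names; adjust it if your bucket IDs contain special characters.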
## Step 3: Configure rclone

Create or edit your rclone configuration file (`~/.config/rclone/rclone.conf`):
```ini
[platform]
type = s3
provider = Other
access_key_id = your-platform-access-key-id
secret_access_key = your-platform-secret-access-key
endpoint = https://your-project-ref.supabase.co/storage/v1/s3
region = your-project-region

[self-hosted]
type = s3
provider = Other
access_key_id = your-self-hosted-access-key-id
secret_access_key = your-self-hosted-secret-access-key
endpoint = http://your-domain:8000/storage/v1/s3
region = your-self-hosted-region
```
Replace the credentials with your actual values. For self-hosted, use the `REGION`, `S3_PROTOCOL_ACCESS_KEY_ID`, and `S3_PROTOCOL_ACCESS_KEY_SECRET` you configured in Configure S3 Storage.
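If you prefer not to edit the file by hand, recent rclone versions can create the same remotes non-interactively. A sketch using the same placeholder values:

```bash
# Create both remotes from the CLI. Requires an rclone version that
# accepts key=value pairs with `rclone config create`.
rclone config create platform s3 \
  provider=Other \
  access_key_id=your-platform-access-key-id \
  secret_access_key=your-platform-secret-access-key \
  endpoint=https://your-project-ref.supabase.co/storage/v1/s3 \
  region=your-project-region

rclone config create self-hosted s3 \
  provider=Other \
  access_key_id=your-self-hosted-access-key-id \
  secret_access_key=your-self-hosted-secret-access-key \
  endpoint=http://your-domain:8000/storage/v1/s3 \
  region=your-self-hosted-region
```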
## Step 4: Test the connection

Verify both remotes connect:
```bash
rclone lsd platform:
rclone lsd self-hosted:
```
Both commands should list your buckets.
## Step 5: Copy the objects

Copy a single bucket:
```bash
rclone copy platform:your-storage-bucket self-hosted:your-storage-bucket --progress
```
To copy all buckets:
```bash
for bucket in $(rclone lsf platform: | tr -d '/'); do
  echo "Copying bucket: $bucket"
  rclone copy "platform:$bucket" "self-hosted:$bucket" --progress
done
```
For large migrations, consider adding `--transfers 4` to increase parallelism, or `--checkers 8` to speed up the comparison phase. See the flags documentation for all options.
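For example, a copy tuned for a larger migration might look like this (the values are starting points, not benchmarks):

```bash
rclone copy platform:your-storage-bucket self-hosted:your-storage-bucket \
  --progress --transfers 4 --checkers 8
```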
## Step 6: Verify the migration

Compare object counts between source and destination:
```bash
rclone size platform:your-storage-bucket && \
rclone size self-hosted:your-storage-bucket
```
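For a deeper check than totals, `rclone check` compares the two remotes object by object (falling back to size-only comparison where hashes are unavailable):

```bash
# --one-way only verifies that every source object exists in the destination.
rclone check platform:your-storage-bucket self-hosted:your-storage-bucket --one-way
```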
Open Studio on your self-hosted instance and browse the storage buckets to confirm files are accessible.
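You can also confirm that Storage created a metadata row for each copied object, since objects without rows in `storage.objects` won't appear in Studio. A sketch, assuming `psql` and a placeholder `SELF_HOSTED_DB_URL` connection string:

```bash
# Count metadata rows per bucket; totals should match the source bucket counts.
psql "$SELF_HOSTED_DB_URL" -c \
  "select bucket_id, count(*) from storage.objects group by bucket_id order by bucket_id;"
```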
## Troubleshooting

If you see `SignatureDoesNotMatch` when connecting to either remote:
- Check that the endpoint URL includes the full `/storage/v1/s3` path.
- Check that `REGION`, `S3_PROTOCOL_ACCESS_KEY_ID`, and `S3_PROTOCOL_ACCESS_KEY_SECRET` in your `.env` file match your rclone config.

If rclone reports that a bucket doesn't exist on the self-hosted side, create it first (see Step 2). The S3 protocol does not auto-create buckets on copy.
For very large files, increase rclone's timeout:
```bash
rclone copy platform:your-storage-bucket self-hosted:your-storage-bucket --timeout 30m
```
If `rclone lsd platform:` returns nothing, verify the endpoint URL ends with `/storage/v1/s3` and that the S3 access keys have not expired. Regenerate them from the dashboard if needed.
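If the failure still isn't obvious, rclone's debug output shows the requests it makes, which helps distinguish signing problems from wrong endpoints:

```bash
# -vv enables debug logging; --dump headers prints the HTTP headers exchanged.
rclone lsd platform: -vv --dump headers
```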