:::info KUDOS
This document is contributed by our community contributor TreeDy. We may not actively maintain this document.
:::
A common scenario is processing large datasets on a powerful instance (e.g., with a GPU) and then migrating the entire RAGFlow service to a different production environment (e.g., a CPU-only server). This guide explains how to safely back up and restore your data using our provided migration script.
By default, RAGFlow uses Docker volumes to store all persistent data, including your database, uploaded files, and search indexes. You can see these volumes by running:
```bash
docker volume ls
```
The output will look similar to this:
```
DRIVER    VOLUME NAME
local     docker_esdata01
local     docker_minio_data
local     docker_mysql_data
local     docker_redis_data
```
These volumes contain all the data you need to migrate.
:::note
The volume name prefix (e.g., docker_) comes from the Docker Compose project name. By default it is docker (derived from the directory name). If you started RAGFlow with docker compose -p <project_name>, your volumes will be prefixed with <project_name>_ instead, for example ragflow_mysql_data.
:::
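The prefix rule can be sanity-checked mechanically. A minimal shell sketch (the suffixes are the volume names listed above; `PROJECT` stands for whatever you passed to `docker compose -p`):

```shell
# Print the volume names Compose is expected to use for a given project name.
# The default project name is "docker", matching the listing above.
PROJECT="${PROJECT:-docker}"
for suffix in esdata01 minio_data mysql_data redis_data; do
  echo "${PROJECT}_${suffix}"
done
```

Running this with `PROJECT=ragflow` prints `ragflow_mysql_data` and friends, which is exactly the set of volumes a restore on the target machine must reproduce.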
Before starting the migration, you must stop all running RAGFlow services on the source machine. Navigate to the project's root directory and run:
```bash
docker compose -f docker/docker-compose.yml down
```
If you started RAGFlow with a custom project name (e.g., docker compose -p ragflow), include it in the command:
```bash
docker compose -p ragflow -f docker/docker-compose.yml down
```
**Important:** Do not use the `-v` flag (e.g., `docker compose down -v`), as this will delete all your data volumes. The migration script includes a safety check and will refuse to run while services are active.
We provide a convenient script to package all your data volumes into a single backup folder.
For a quick reference of the script's commands and options, you can run:
```bash
bash docker/migration.sh help
```
To create a backup, run the following command from the project's root directory:
```bash
bash docker/migration.sh backup
```
This will create a `backup/` folder in your project root containing compressed archives of your data volumes.
You can also specify a custom name for your backup folder:
```bash
bash docker/migration.sh backup my_ragflow_backup
```
This will create a folder named my_ragflow_backup/ instead.
If you started RAGFlow with a custom project name (e.g., docker compose -p ragflow), use the -p flag so the script can find the correct volumes:
```bash
bash docker/migration.sh -p ragflow backup
bash docker/migration.sh -p ragflow backup my_ragflow_backup
```
Copy the entire backup folder (e.g., backup/ or my_ragflow_backup/) from your source machine to the RAGFlow project directory on your target machine. You can use tools like scp, rsync, or a physical drive for the transfer.
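However you copy it, it is worth checksumming the backup on the source machine and verifying it on the target. A minimal sketch, assuming the default `backup/` folder (the demo file below stands in for the real volume archives):

```shell
# On the source machine: write a checksum manifest next to the backup.
mkdir -p backup
echo "demo archive" > backup/demo.tar.gz   # stand-in for a real volume archive
(cd backup && find . -type f -exec sha256sum {} \; | sort) > backup.sha256

# On the target machine, after copying backup/ and backup.sha256 over:
(cd backup && sha256sum -c ../backup.sha256)
```

A clean run prints one `OK` line per archive; any `FAILED` line means the file was corrupted in transit and should be copied again.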
On the target machine, ensure that RAGFlow services are not running. Then, use the migration script to restore your data from the backup folder.
If your backup folder is named backup/, run:
```bash
bash docker/migration.sh restore
```
If you used a custom name, specify it in the command:
```bash
bash docker/migration.sh restore my_ragflow_backup
```
If the target machine uses a custom project name, use the -p flag to ensure the volumes are created with the correct prefix:
```bash
bash docker/migration.sh -p ragflow restore
bash docker/migration.sh -p ragflow restore my_ragflow_backup
```
The script will automatically create the necessary Docker volumes and unpack the data.
Note: If the script detects that Docker volumes with the same names already exist on the target machine, it will warn you that restoring will overwrite the existing data and ask for confirmation before proceeding.
Once the restore process is complete, you can start the RAGFlow services on your new machine:
```bash
docker compose -f docker/docker-compose.yml up -d
```
If you use a custom project name:
```bash
docker compose -p ragflow -f docker/docker-compose.yml up -d
```
Note: If you have previously started RAGFlow with Docker Compose on the target machine, back up any existing data you want to keep (as described above), then remove the old volumes before starting:

```bash
# Back up first with `bash docker/migration.sh backup backup_dir_name`.
# WARNING: the -v flag below DELETES the existing Docker volumes.
docker compose -f docker/docker-compose.yml down -v
docker compose -f docker/docker-compose.yml up -d
```
Your RAGFlow instance is now running with all the data from your original machine.
:::info KUDOS
This document is contributed by our community contributor arogan178. We may not actively maintain this document.
:::
By default, RAGFlow creates one bucket per Knowledge Base (dataset) and one bucket per user folder. This can be problematic when your storage provider limits how many buckets you can create, or when credentials and access policies must be managed per bucket.

Single Bucket Mode allows you to configure RAGFlow to use a single bucket with a directory structure instead of multiple buckets.
With the default multi-bucket layout:

```
bucket: kb_12345/
└── document_1.pdf

bucket: kb_67890/
└── document_2.pdf

bucket: folder_abc/
└── file_3.txt
```

With single bucket mode:

```
bucket: ragflow-bucket/
└── ragflow/
    ├── kb_12345/
    │   └── document_1.pdf
    ├── kb_67890/
    │   └── document_2.pdf
    └── folder_abc/
        └── file_3.txt
```
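The mapping between the two layouts can be expressed as a tiny path function. This is an illustrative sketch, not RAGFlow's actual code; the name `object_location` is hypothetical:

```python
from typing import Optional, Tuple

def object_location(kb_id: str, filename: str,
                    bucket: Optional[str] = None,
                    prefix_path: Optional[str] = None) -> Tuple[str, str]:
    """Return (bucket, object_key) for a stored document.

    Without a configured bucket: one bucket per knowledge base (default mode).
    With a configured bucket: one shared bucket; the KB id becomes a key prefix.
    """
    if not bucket:
        return kb_id, filename
    key = "/".join(part for part in (prefix_path, kb_id, filename) if part)
    return bucket, key

# Default (multi-bucket) mode
print(object_location("kb_12345", "document_1.pdf"))
# -> ('kb_12345', 'document_1.pdf')

# Single-bucket mode, matching the layout above
print(object_location("kb_12345", "document_1.pdf",
                      bucket="ragflow-bucket", prefix_path="ragflow"))
# -> ('ragflow-bucket', 'ragflow/kb_12345/document_1.pdf')
```

Note that only the *addressing* changes between modes; the set of stored objects is the same, which is why switching modes requires migrating existing data (see the troubleshooting notes below).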
Edit your `service_conf.yaml` or set environment variables:

```yaml
minio:
  user: "your-access-key"
  password: "your-secret-key"
  host: "minio.example.com:443"
  bucket: "ragflow-bucket"   # Default bucket name
  prefix_path: "ragflow"     # Optional prefix path
```
Or using environment variables:

```bash
export MINIO_USER=your-access-key
export MINIO_PASSWORD=your-secret-key
export MINIO_HOST=minio.example.com:443
export MINIO_BUCKET=ragflow-bucket
export MINIO_PREFIX_PATH=ragflow
```
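As a rough illustration of how those variables fit together (the dict shape and defaults here are illustrative, not RAGFlow's actual config loader):

```python
import os

# Collect the MinIO settings from the environment variables shown above.
minio_conf = {
    "user": os.environ.get("MINIO_USER", ""),
    "password": os.environ.get("MINIO_PASSWORD", ""),
    "host": os.environ.get("MINIO_HOST", "localhost:9000"),
    "bucket": os.environ.get("MINIO_BUCKET", ""),          # empty -> multi-bucket mode
    "prefix_path": os.environ.get("MINIO_PREFIX_PATH", ""),
}

# An empty bucket means the default one-bucket-per-KB behavior.
single_bucket_mode = bool(minio_conf["bucket"])
uses_tls = minio_conf["host"].endswith(":443")  # port 443 implies a secure connection
```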
For AWS S3 (or other S3-compatible storage), the equivalent configuration is:

```yaml
s3:
  access_key: "your-access-key"
  secret_key: "your-secret-key"
  endpoint_url: "https://s3.amazonaws.com"
  bucket: "my-ragflow-bucket"
  prefix_path: "production"
  region: "us-east-1"
```
When using single bucket mode, you only need permissions for one bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::ragflow-bucket",
        "arn:aws:s3:::ragflow-bucket/*"
      ]
    }
  ]
}
```
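If you script your infrastructure, the policy can be templated for any bucket name. A minimal sketch (the bucket name and `policy.json` path are placeholders) that also sanity-checks the result:

```shell
# Generate the single-bucket IAM policy for a given bucket name.
# BUCKET is a placeholder; substitute your own bucket.
BUCKET="ragflow-bucket"
cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::${BUCKET}",
        "arn:aws:s3:::${BUCKET}/*"
      ]
    }
  ]
}
EOF
# Confirm the generated file is valid JSON before attaching it.
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid JSON"
```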
If you're migrating from multi-bucket mode to single-bucket mode:
```bash
# Example using mc (MinIO Client)
mc alias set old-minio http://old-minio:9000 ACCESS_KEY SECRET_KEY
mc alias set new-minio https://new-minio:443 ACCESS_KEY SECRET_KEY

# Copy every knowledge-base bucket into the new single-bucket structure
mc ls old-minio/ | grep kb_ | while read -r line; do
  bucket=$(echo "$line" | awk '{print $5}')
  bucket=${bucket%/}   # drop the trailing slash mc prints after bucket names
  mc cp --recursive "old-minio/$bucket/" "new-minio/ragflow-bucket/ragflow/$bucket/"
done
```
To enable single bucket mode, set a bucket (and optionally a prefix path):

```yaml
minio:
  bucket: "my-single-bucket"
  prefix_path: "ragflow"
```

To return to the default multi-bucket behavior, leave both unset:

```yaml
minio:
  # Leave bucket and prefix_path empty or commented out
  # bucket: ''
  # prefix_path: ''
```
- **Access denied errors.** Solution: Ensure your IAM policy grants access to the bucket specified in the configuration.
- **Files not found after switching modes.** Solution: The path structure changes between modes, so you'll need to migrate existing data.
- **TLS/connection errors.** Solution: Ensure `secure: True` is set in the MinIO connection (automatically handled for port 443).