# MLflow Docker Compose Setup

This directory provides a Docker Compose setup for running MLflow locally with a PostgreSQL backend store and RustFS for S3-compatible artifact storage.
## Components

- **MLflow Tracking Server**: serves the REST API and UI (default: http://localhost:5000).
- **PostgreSQL**: stores MLflow's metadata (experiments, runs, params, metrics).
- **RustFS**: S3-compatible artifact storage for model files and run artifacts.
## Prerequisites

Verify that Docker and Docker Compose are installed:

```bash
docker --version
docker compose version
```

## Setup

Clone the repository and enter this directory:

```bash
git clone https://github.com/mlflow/mlflow.git
cd mlflow/docker-compose/
```

Copy and customize the environment file:

```bash
cp .env.dev.example .env
```
The `.env` file defines the common variables:

**PostgreSQL:**

```
POSTGRES_USER=mlflow
POSTGRES_PASSWORD=mlflow
POSTGRES_DB=mlflow
```

**S3:**

```
AWS_ACCESS_KEY_ID=s3admin
AWS_SECRET_ACCESS_KEY=s3admin
AWS_DEFAULT_REGION=us-east-1
S3_BUCKET=mlflow
```

**RustFS:**

```
RUSTFS_CONSOLE_ENABLE=true
```

**MLflow:**

```
MLFLOW_VERSION=latest
MLFLOW_HOST=0.0.0.0
MLFLOW_PORT=5000
MLFLOW_BACKEND_STORE_URI=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
MLFLOW_ARTIFACTS_DESTINATION=s3://${S3_BUCKET}
MLFLOW_S3_ENDPOINT_URL=http://storage:9000
```

## Start the Stack

Inside the `mlflow/docker-compose` directory:
```bash
docker compose up -d
```

This starts the PostgreSQL, RustFS, and MLflow containers in the background.
Check status:

```bash
docker compose ps
```

Tail logs:

```bash
docker compose logs -f
```
Once running, open the MLflow UI at http://localhost:5000 (or the port defined in `.env`). You can now log runs, metrics, artifacts, and models to your local MLflow instance.
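As a quick availability check, you can poll the tracking server's health endpoint from the host (a minimal sketch; assumes `curl` is installed and the default port is unchanged):

```bash
# The tracking server's /health endpoint returns HTTP 200 with "OK" once it is up.
curl -s http://localhost:5000/health
```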
## Teardown

To stop and remove the containers:

```bash
docker compose down
```

To reset everything, including volumes:

```bash
docker compose down -v
```
## RustFS Notes

Set the server domains/host so that virtual-hosted-style requests can be resolved by RustFS (the value must match the Compose service DNS name):

```
RUSTFS_SERVER_DOMAINS=storage:9000
```
Prefer the AWS CLI `s3api` commands for bucket creation. Some S3 clients default to path-style requests on custom endpoints; if bucket creation fails with `InvalidBucketName`, switch to `s3api` or a client such as MinIO `mc`.
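For example, a minimal bucket-creation sketch (assumes the credentials from `.env` are exported in your shell and that RustFS is reachable from the host on port 9000):

```bash
# Create the artifact bucket through the low-level s3api interface.
# S3_BUCKET comes from .env; "mlflow" is used as a fallback here.
aws --endpoint-url=http://localhost:9000 s3api create-bucket --bucket "${S3_BUCKET:-mlflow}"
```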
Inside MLflow, use the internal endpoint:

```
MLFLOW_S3_ENDPOINT_URL=http://storage:9000
MLFLOW_ARTIFACTS_DESTINATION=s3://mlflow/
```
RustFS usually responds on `/health` with a JSON body that contains the server status:

```bash
curl -s http://127.0.0.1:9000/health | grep -q '"status"\s*:\s*"ok"'
```

Use that command in a container healthcheck. Do not pass `-f` to curl, since 4xx responses may appear during bootstrap.
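At the compose level, that check could look like the following sketch (the `storage` service name and the timing values are assumptions, and the image must ship `curl` and `grep`):

```yaml
services:
  storage:
    healthcheck:
      # Plain `curl -s` (no `-f`): RustFS can return 4xx while bootstrapping,
      # which would otherwise mark a still-starting container as unhealthy.
      test: ["CMD-SHELL", "curl -s http://127.0.0.1:9000/health | grep -q '\"status\"\\s*:\\s*\"ok\"'"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 15s
```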
## Verify S3 Storage

With the following set in `.env`:

```
MLFLOW_ARTIFACTS_DESTINATION=s3://<bucket>/
MLFLOW_S3_ENDPOINT_URL=http://<service>:<port>
```

verify that S3 storage is working:

```bash
aws --endpoint-url=${MLFLOW_S3_ENDPOINT_URL} s3api list-buckets
echo hi > /tmp/t.txt
aws --endpoint-url=${MLFLOW_S3_ENDPOINT_URL} s3 cp /tmp/t.txt s3://${S3_BUCKET}/t.txt
aws --endpoint-url=${MLFLOW_S3_ENDPOINT_URL} s3 cp s3://${S3_BUCKET}/t.txt -
```

If this passes, MLflow can read and write artifacts to RustFS.
## Troubleshooting

- `InvalidBucketName` on `create-bucket`: use `s3api` (virtual-host friendly) or MinIO `mc`, and ensure `RUSTFS_SERVER_DOMAINS` matches the S3 hostname.
- Make sure `MLFLOW_S3_ENDPOINT_URL` uses the service name visible from MLflow (e.g., http://storage:9000).
- Reset the stack:

  ```bash
  docker compose down -v
  docker compose up -d
  ```

Tail individual service logs:

```bash
docker compose logs -f mlflow
docker compose logs -f postgres
docker compose logs -f storage
```
To change the configuration, edit `.env` and restart the containers:

```bash
docker compose down
docker compose up -d
```
## Using the Stack

Point your client at the tracking server:

```bash
export MLFLOW_TRACKING_URI=http://localhost:5000
```

Then log runs with `mlflow.start_run()` (Python) or the MLflow CLI; see the quick check below. Customize `.env` and `docker-compose.yml` to fit your local workflow (e.g., change image tags, add volumes, etc.). You now have a fully local MLflow stack with persistent metadata and artifact storage, ideal for development and experimentation.
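A minimal end-to-end check through the MLflow CLI (a sketch; assumes the `mlflow` package is installed on the host, and the experiment name is purely illustrative):

```bash
export MLFLOW_TRACKING_URI=http://localhost:5000
# Creates an experiment on the tracking server; it should then show up in the UI.
mlflow experiments create --experiment-name compose-smoke-test
```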