# S3 Compatibility Tests

This directory contains scripts for running S3 compatibility tests against RustFS.
## Quick Start

Run the local S3 compatibility test script:

```bash
./scripts/s3-tests/run.sh
```
The script will automatically:

- Compile RustFS with `cargo build --release` (skipped if the binary is recent, less than 30 minutes old)
- Save test results under `artifacts/s3tests-${TEST_MODE}/`

## Deployment Modes

The script supports four deployment modes, controlled via the `DEPLOY_MODE` environment variable:
### build (default)

Compile with `cargo build --release` and run:

```bash
DEPLOY_MODE=build ./scripts/s3-tests/run.sh

# Or simply (build is the default)
./scripts/s3-tests/run.sh

# Force rebuild even if binary exists and is recent
./scripts/s3-tests/run.sh --no-cache
```
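The cache check behind this skip can be sketched roughly as follows. This is an illustration only, not the script's exact implementation; the binary path and the 30-minute window are taken from the defaults described in this README.

```shell
#!/bin/sh
# Sketch of a binary-freshness check (illustrative, not run.sh's exact
# logic). `find -mmin -30` prints the path only if the file was
# modified less than 30 minutes ago.
BIN="./target/release/rustfs"
if [ -n "$(find "$BIN" -mmin -30 2>/dev/null)" ]; then
  echo "binary is recent, skipping build"
else
  echo "binary missing or stale, building"
  # cargo build --release
fi
```

Passing `--no-cache` simply bypasses a check like this and always rebuilds.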
Behavior:

- Reuses an existing recent binary (unless `--no-cache` is specified)

### binary

Use a pre-compiled binary file:
```bash
# Use default path (./target/release/rustfs)
DEPLOY_MODE=binary ./scripts/s3-tests/run.sh

# Specify custom binary path
DEPLOY_MODE=binary RUSTFS_BINARY=./target/release/rustfs ./scripts/s3-tests/run.sh
```
### docker

Build a Docker image and run in a container:
```bash
DEPLOY_MODE=docker ./scripts/s3-tests/run.sh
```
Behavior:

- Builds the image from `Dockerfile.source`
- Creates the Docker network (`rustfs-net`) if it doesn't exist

### existing

Connect to an already running RustFS service:
```bash
DEPLOY_MODE=existing S3_HOST=127.0.0.1 S3_PORT=9000 ./scripts/s3-tests/run.sh

# Connect to remote service
DEPLOY_MODE=existing S3_HOST=192.168.1.100 S3_PORT=9000 ./scripts/s3-tests/run.sh
```
Behavior:

- The target service must have the alt user (`rustfsalt`) provisioned, or the script will provision it automatically

## Automatic Cleanup

The script uses trap handlers to automatically clean up resources when it exits (success or failure).
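The trap-handler pattern amounts to the following minimal sketch; the actual cleanup function in `run.sh` does more (stopping the service, removing containers, etc.):

```shell
#!/bin/sh
# Minimal sketch of trap-based cleanup: the EXIT trap fires on normal
# exit and on failure alike, so resources are released either way.
cleanup() {
  echo "cleaning up (exit status $?)"
  # e.g. kill the service process, remove temp data, stop containers
}
trap cleanup EXIT
echo "running tests..."
```

Because the handler is attached to `EXIT` rather than to individual signals, even an early `exit 1` from a failed step still triggers the cleanup.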
## Options and Environment Variables

- `-h, --help`: Show help message
- `--no-cache`: Force rebuild even if binary exists and is recent (`build` mode)
- `DEPLOY_MODE`: Deployment mode, one of:
  - `build`: Compile with `cargo build --release` and run (default)
  - `binary`: Use a pre-compiled binary file
  - `docker`: Build a Docker image and run in a container
  - `existing`: Use an already running service
- `RUSTFS_BINARY`: Path to the binary file (`binary` mode, default: `./target/release/rustfs`)
- `DATA_ROOT`: Root directory for test data storage (default: `target`)
  - Data is stored in `${DATA_ROOT}/test-data/${CONTAINER_NAME}`; for example, `DATA_ROOT=/tmp` stores data in `/tmp/test-data/rustfs-single/`
- `S3_ACCESS_KEY`: Main user access key (default: `rustfsadmin`)
- `S3_SECRET_KEY`: Main user secret key (default: `rustfsadmin`)
- `S3_ALT_ACCESS_KEY`: Alt user access key (default: `rustfsalt`)
- `S3_ALT_SECRET_KEY`: Alt user secret key (default: `rustfsalt`)
- `S3_REGION`: S3 region (default: `us-east-1`)
- `S3_HOST`: S3 service host (default: `127.0.0.1`)
- `S3_PORT`: S3 service port (default: `9000`)
- `TEST_MODE`: Test mode (default: `single`)
- `MAXFAIL`: Stop after N failures (default: 1)
- `XDIST`: Enable parallel execution with N workers (default: 0, disabled)
- `MARKEXPR`: pytest marker expression for filtering tests
- `TESTEXPR`: Optional pytest `-k` expression for custom runs
  - By default the run is restricted to the tests listed in `implemented_tests.txt`; `TESTEXPR` overrides the implemented test list
- `S3TESTS_CONF_TEMPLATE`: Path to the s3tests config template (default: `.github/s3tests/s3tests.conf`)
  - The template is processed with `envsubst` to substitute variables (e.g., `${S3_HOST}`)
- `S3TESTS_CONF`: Path to the generated s3tests config (default: `s3tests.conf`)
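The `envsubst` step can be illustrated with a throwaway template. The two-line template below is hypothetical; the real template lives at `.github/s3tests/s3tests.conf` and contains the full s3tests configuration.

```shell
#!/bin/sh
# Illustration of envsubst-based config generation with a made-up
# two-line template; ${S3_HOST} and ${S3_PORT} are replaced with the
# current environment values.
export S3_HOST="${S3_HOST:-127.0.0.1}" S3_PORT="${S3_PORT:-9000}"
TEMPLATE=$(mktemp)
CONF=$(mktemp)
printf 'host = ${S3_HOST}\nport = ${S3_PORT}\n' > "$TEMPLATE"
envsubst < "$TEMPLATE" > "$CONF"
cat "$CONF"
```

With the defaults above, the generated file contains `host = 127.0.0.1` and `port = 9000`.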
## Examples

### build mode (default)

```bash
# Basic usage - compiles and runs automatically
# Skips compilation if binary exists and is less than 30 minutes old
./scripts/s3-tests/run.sh

# Force rebuild (skip cache check, always compile)
./scripts/s3-tests/run.sh --no-cache

# Run all tests, stop after 50 failures
MAXFAIL=50 ./scripts/s3-tests/run.sh

# Enable parallel execution (4 worker processes)
# Automatically installs pytest-xdist if needed
XDIST=4 ./scripts/s3-tests/run.sh

# Use custom data storage location
# Data will be stored in /tmp/test-data/rustfs-single/
DATA_ROOT=/tmp ./scripts/s3-tests/run.sh

# Run specific test markers (e.g., test multipart uploads only)
MARKEXPR="multipart" ./scripts/s3-tests/run.sh
```
### binary mode

```bash
# First compile the binary
cargo build --release

# Run with default path
DEPLOY_MODE=binary ./scripts/s3-tests/run.sh

# Specify custom path
DEPLOY_MODE=binary RUSTFS_BINARY=/path/to/rustfs ./scripts/s3-tests/run.sh

# Use binary with parallel tests
DEPLOY_MODE=binary XDIST=4 ./scripts/s3-tests/run.sh
```
### docker mode

```bash
# Build Docker image and run in container
DEPLOY_MODE=docker ./scripts/s3-tests/run.sh

# Run with parallel tests
DEPLOY_MODE=docker XDIST=4 ./scripts/s3-tests/run.sh
```
### existing mode

```bash
# Connect to locally running service (default: 127.0.0.1:9000)
DEPLOY_MODE=existing ./scripts/s3-tests/run.sh

# Connect to remote service
DEPLOY_MODE=existing S3_HOST=192.168.1.100 S3_PORT=9000 ./scripts/s3-tests/run.sh

# Test specific features (custom marker expression)
DEPLOY_MODE=existing MARKEXPR="not lifecycle and not versioning" ./scripts/s3-tests/run.sh

# Use custom credentials
DEPLOY_MODE=existing \
  S3_ACCESS_KEY=myaccesskey \
  S3_SECRET_KEY=mysecretkey \
  ./scripts/s3-tests/run.sh

# Use custom config template and output path
S3TESTS_CONF_TEMPLATE=my-configs/s3tests.conf.template \
  S3TESTS_CONF=my-s3tests.conf \
  ./scripts/s3-tests/run.sh
```
## Test Results

Test results are saved in the `artifacts/s3tests-${TEST_MODE}/` directory (default: `artifacts/s3tests-single/`):

- `junit.xml`: Test results in JUnit format (compatible with CI/CD systems)
- `pytest.log`: Detailed pytest logs with full test output
- `rustfs-${TEST_MODE}/rustfs.log`: RustFS service logs
- `rustfs-${TEST_MODE}/inspect.json`: Service metadata (PID, binary path, mode, etc.)

View the results:
```bash
# Check test summary
grep -E "testsuite|testcase" artifacts/s3tests-single/junit.xml

# View test logs
less artifacts/s3tests-single/pytest.log

# View service logs
less artifacts/s3tests-single/rustfs-single/rustfs.log
```
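For a quick pass/fail summary without reading the whole file, the count attributes on the `<testsuite>` element can be pulled out with `grep`. The attribute names (`tests`, `failures`, `errors`) are the standard JUnit ones; the path assumes the default `TEST_MODE=single`.

```shell
#!/bin/sh
# Extract tests/failures/errors counts from the JUnit report,
# assuming the default artifacts path from the section above.
JUNIT="artifacts/s3tests-single/junit.xml"
if [ -f "$JUNIT" ]; then
  grep -oE 'tests="[0-9]+"|failures="[0-9]+"|errors="[0-9]+"' "$JUNIT"
fi
```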
## Dependencies

### Manual

The following dependencies must be installed manually on your system:

- **Python 3**: Required for running s3-tests
  - Check: `python3 --version`
  - Install: `brew install python3`, `apt-get install python3`, or `yum install python3`
- **Git**: Required for cloning the s3-tests repository
  - Check: `git --version`
  - Install: `brew install git`, `apt-get install git`, or `yum install git`
- **Port checking tools**: One of the following, for port availability checks
  - `nc` (netcat): `apt-get install netcat` or `brew install netcat`
  - `timeout`: usually pre-installed on Linux
- **Docker**: Required for `DEPLOY_MODE=docker`
  - Check: `docker --version`
- **Rust toolchain**: Required for `DEPLOY_MODE=build` (default)
  - Check: `rustc --version` and `cargo --version`

### Automatic

The script will automatically install the following dependencies if they are missing (no manual action required):

- **awscurl**: For S3 API calls and user provisioning
  - Installed via `python3 -m pip install --user --upgrade pip awscurl` to `$HOME/.local/bin/awscurl`
- **tox**: For running s3-tests in an isolated Python environment
  - Installed via `python3 -m pip install --user --upgrade pip tox` to `$HOME/.local/bin/tox`
- **gettext-base**: For the `envsubst` command (config file generation)
  - Install: `brew install gettext` or `sudo apt-get install gettext-base`
- **s3-tests repository**: Automatically cloned from `https://github.com/ceph/s3-tests.git` into `${PROJECT_ROOT}/s3-tests` if not present

Note: The script adds `$HOME/.local/bin` to PATH automatically, so auto-installed Python tools are accessible.
## Proxy Handling

The script automatically disables proxies for localhost requests to avoid interference. All proxy environment variables (`http_proxy`, `https_proxy`, `HTTP_PROXY`, `HTTPS_PROXY`) are unset at script startup, and `NO_PROXY` is set to `127.0.0.1,localhost,::1`.
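In shell terms, this is equivalent to the following prologue (a sketch of the startup behavior just described):

```shell
#!/bin/sh
# Unset all proxy variables and pin NO_PROXY for localhost traffic,
# matching the startup behavior described above.
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
export NO_PROXY="127.0.0.1,localhost,::1"
echo "NO_PROXY=$NO_PROXY"
```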
## Manual Cleanup

The test script automatically cleans up processes and containers when it exits. If you need to clean up manually, a dedicated cleanup script is available:

```bash
# Clean up port 9000 and test data directory
./scripts/s3-tests/cleanup.sh

# Use custom port and host
S3_PORT=9001 S3_HOST=127.0.0.1 ./scripts/s3-tests/cleanup.sh
```
The cleanup script will free the configured port and remove the test data directory (`target/test-data/rustfs-single/` by default).

If the cleanup script doesn't work or you need more control:
```bash
# Kill process on port 9000
lsof -ti:9000 | xargs kill -9

# Or use netstat (the -p flag is required to show PIDs)
kill -9 $(netstat -tulnp | grep :9000 | awk '{print $7}' | cut -d'/' -f1)

# Remove test data
rm -rf target/test-data/rustfs-single/

# Stop Docker container (if using docker mode)
docker rm -f rustfs-single

# Remove Docker network (if using docker mode)
docker network rm rustfs-net
```
## Troubleshooting

### Port already in use

If port 9000 is already in use, change the port:

```bash
S3_PORT=9001 ./scripts/s3-tests/run.sh
```
Note: The script automatically checks if the port is available before starting (except in existing mode). If the port is in use, the script will exit with an error message.
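That pre-start check can be sketched with `nc`, one of the port-checking tools listed under dependencies (an illustration; the script may use a different tool such as `timeout` when `nc` is unavailable):

```shell
#!/bin/sh
# Fail fast if something already listens on the target port,
# mirroring the pre-start check described above.
S3_PORT="${S3_PORT:-9000}"
if nc -z 127.0.0.1 "$S3_PORT" 2>/dev/null; then
  echo "port $S3_PORT is already in use" >&2
  exit 1
fi
echo "port $S3_PORT is free"
```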
### Docker container issues

Check the Docker logs:

```bash
docker logs rustfs-single
```
For `binary` mode, ensure the binary is compiled:

```bash
cargo build --release
```

Or specify the correct path:

```bash
DEPLOY_MODE=binary RUSTFS_BINARY=/path/to/rustfs ./scripts/s3-tests/run.sh
```
### Service not ready

If the service is slow to start, increase the wait time or check the service status:

```bash
curl http://127.0.0.1:9000/health
```
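A simple way to wait for readiness is to poll the health endpoint in a loop. This is a hypothetical helper, not part of `run.sh` (which has its own wait logic); the endpoint path follows the example above.

```shell
#!/bin/sh
# wait_for_health: poll URL until it answers or TIMEOUT seconds pass.
# Hypothetical helper; returns 0 once the endpoint responds, 1 on timeout.
wait_for_health() {
  url="$1"; timeout="${2:-30}"; elapsed=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "service at $url not healthy after ${timeout}s" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "service at $url is healthy"
}
```

With a running service, `wait_for_health http://127.0.0.1:9000/health 60` blocks for up to a minute before giving up.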
For `existing` mode, ensure the service is running and accessible:

```bash
# Check if service is reachable
curl http://192.168.1.100:9000/health

# Verify S3 API is responding
awscurl --service s3 --region us-east-1 \
  --access_key rustfsadmin \
  --secret_key rustfsadmin \
  -X GET "http://192.168.1.100:9000/"
```
## CI Integration

This script mirrors the GitHub Actions workflow defined in `.github/workflows/e2e-s3tests.yml` and follows the same steps.