ARCHITECTURE.md
Last updated: 2026-04-13 · Revision: 1 (draft)
This document describes the high-level architecture of RustFS. If you want to familiarize yourself with the code base, you are in the right place!
See also CONTRIBUTING.md for development workflow.
RustFS is a high-performance, S3-compatible distributed object storage system written in Rust. It uses erasure coding for data durability, supports multi-tenancy through IAM/STS, and provides a web-based admin console.
A running RustFS node exposes:
- S3 API — object and bucket operations
- Admin API (under the /minio/ prefix) — cluster management, IAM, metrics

The core data flow for a PUT request looks like:
```
HTTP request
 → server (TLS, auth, routing, compression)
 → app/object_usecase (validation, policy, lifecycle)
 → storage/ecfs (erasure coding, encryption, checksums)
 → ecstore (disk pool selection, data distribution)
 → rio (reader pipeline: encrypt → compress → hash → write)
 → io-core (zero-copy I/O, buffer pool, direct I/O)
 → local disk / remote disk via RPC
```
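The rio step in this chain is composed reader adapters. A minimal sketch of the idea — the `HashReader` type here is illustrative, not the actual rio API:

```rust
use std::io::{self, Read};
use sha2::{Digest, Sha256};

/// Illustrative only: a reader adapter that hashes bytes as they stream
/// through, mirroring the encrypt → compress → hash composition above.
/// In practice each stage wraps the previous one the same way.
struct HashReader<R: Read> {
    inner: R,
    hasher: Sha256,
}

impl<R: Read> Read for HashReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = self.inner.read(buf)?;
        self.hasher.update(&buf[..n]); // observe the bytes without copying them
        Ok(n)
    }
}

fn main() -> io::Result<()> {
    let mut reader = HashReader {
        inner: &b"hello object"[..],
        hasher: Sha256::new(),
    };
    io::copy(&mut reader, &mut io::sink())?;
    println!("content hash: {:x}", reader.hasher.finalize());
    Ok(())
}
```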
The repository is a Cargo workspace with a flat crates/ layout:
```
rustfs/                  # Workspace root (virtual manifest)
├── rustfs/              # Main binary + library crate (75K lines)
│   └── src/
│       ├── main.rs      # Entry point, startup sequence
│       ├── lib.rs       # Module tree root
│       ├── server/      # HTTP server, TLS, routing, middleware
│       ├── admin/       # Admin API handlers and console
│       ├── app/         # Use-case layer (object, bucket, multipart)
│       ├── storage/     # Storage engine interface and implementation
│       ├── auth.rs      # S3 request authentication
│       ├── config/      # CLI args, config parsing, workload profiles
│       └── ...
├── crates/              # 39 library crates
│   ├── ecstore/         # Erasure-coded storage engine (⚠️ 87K lines)
│   ├── rio/             # Reader I/O pipeline (encrypt, compress, hash)
│   ├── io-core/         # Zero-copy I/O, scheduling, buffer pool
│   ├── io-metrics/      # I/O metrics collection
│   ├── common/          # Shared runtime state, globals, data usage types
│   ├── config/          # Configuration types and parsing
│   ├── utils/           # Pure utility functions
│   ├── ...              # (see "Crate Reference" below)
│   └── e2e_test/        # End-to-end integration tests
└── docs/                # Design documents and analysis
```
The main crate (rustfs/src/) is organized in layers, top to bottom:
| Layer | Directory | Responsibility |
|---|---|---|
| Server | server/ | HTTP listener, TLS, CORS, compression, middleware, graceful shutdown |
| Admin | admin/ | Admin API routing, 30+ handler modules, web console |
| App | app/ | Use-case orchestration: object_usecase, bucket_usecase, multipart_usecase |
| Storage | storage/ | S3 API translation, erasure-coded FS, SSE encryption, RPC, concurrency |
| Auth | auth.rs | S3 signature verification, credential validation |
| Config | config/ | CLI parsing, config struct, workload profiles |
A request flows downward through the layers. No layer should reach upward (e.g., storage must not import from admin).
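To make the direction concrete, here is a minimal sketch (the names are illustrative, not the actual RustFS types) of the use-case layer calling down into storage through a trait it consumes, so storage never needs to import app types:

```rust
// Illustrative only — not the actual RustFS API.
#[derive(Debug, thiserror::Error)]
pub enum StorageError {
    #[error("disk not found: {0}")]
    DiskNotFound(String),
}

// Defined at the storage layer; knows nothing about HTTP or use-cases.
pub trait StorageBackend {
    fn put_object(&self, bucket: &str, key: &str, data: &[u8]) -> Result<(), StorageError>;
}

// The app layer depends downward on the storage abstraction.
pub struct ObjectUsecase<S: StorageBackend> {
    storage: S,
}

impl<S: StorageBackend> ObjectUsecase<S> {
    pub fn put(&self, bucket: &str, key: &str, data: &[u8]) -> Result<(), StorageError> {
        // validation / policy checks would happen here, then delegate downward
        self.storage.put_object(bucket, key, data)
    }
}
```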
Crates are organized in a dependency DAG with 9 depth levels (0 = leaf, 8 = top); a sketch for regenerating this listing follows it:
```
Depth 0 — LEAF (no internal deps):
    appauth, checksums, config, credentials, crypto, io-metrics,
    madmin, s3-common, workers, zip

Depth 1:
    io-core   (→ io-metrics)
    policy    (→ config, credentials, crypto)
    utils     (→ config)            ⚠️ inverted: utils should be leaf

Depth 2:
    concurrency, filemeta, keystone, kms, lock, obs,
    signer, targets, trusted-proxies

Depth 3:
    common    (→ filemeta, madmin)  ⚠️ inverted: common should be leaf

Depth 4:
    object-capacity, protos, rio

Depth 5 — CORE:
    ecstore   (16 internal deps, 11 dependents — the architectural heart)

Depth 6:
    audit, heal, iam, metrics, notify, s3select-api, scanner

Depth 7:
    object-io, protocols, s3select-query

Depth 8 — TOP:
    rustfs    (35 internal deps — the binary, depends on almost everything)
```
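The listing can be recomputed mechanically. A quick sketch, assuming the serde_json crate and `cargo metadata --format-version 1` piped on stdin (the tool itself is hypothetical):

```rust
// Sketch: compute each workspace crate's dependency depth from cargo metadata.
// Usage: cargo metadata --format-version 1 | <this tool>
// Assumes an acyclic graph (cargo enforces this for package dependencies).
use std::collections::{HashMap, HashSet};
use std::io::Read;

fn depth(id: &str, edges: &HashMap<String, Vec<String>>, memo: &mut HashMap<String, usize>) -> usize {
    if let Some(&d) = memo.get(id) {
        return d;
    }
    let d = edges[id]
        .iter()
        .filter(|dep| edges.contains_key(dep.as_str())) // workspace-internal edges only
        .map(|dep| depth(dep, edges, memo) + 1)
        .max()
        .unwrap_or(0); // no internal deps → depth 0 (leaf)
    memo.insert(id.to_string(), d);
    d
}

fn main() {
    let mut input = String::new();
    std::io::stdin().read_to_string(&mut input).unwrap();
    let meta: serde_json::Value = serde_json::from_str(&input).unwrap();

    let members: HashSet<&str> = meta["workspace_members"]
        .as_array().unwrap()
        .iter().map(|m| m.as_str().unwrap())
        .collect();

    // resolve.nodes lists every package with its resolved dependency ids
    let mut edges: HashMap<String, Vec<String>> = HashMap::new();
    for node in meta["resolve"]["nodes"].as_array().unwrap() {
        let id = node["id"].as_str().unwrap();
        if !members.contains(id) {
            continue;
        }
        let deps = node["dependencies"].as_array().unwrap()
            .iter().map(|d| d.as_str().unwrap().to_string())
            .collect();
        edges.insert(id.to_string(), deps);
    }

    let mut memo = HashMap::new();
    for id in edges.keys().cloned().collect::<Vec<_>>() {
        println!("depth {:>2}  {}", depth(&id, &edges, &mut memo), id);
    }
}
```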
Crate Reference, grouped by area.

Core Infrastructure:
| Crate | Lines | Purpose |
|---|---|---|
config | 3.3K | Configuration types and environment parsing |
utils | 8.7K | Pure utilities (paths, compression, network, retry) |
common | 4.4K | Shared runtime state, globals, data usage types, metrics |
madmin | 5.5K | Admin API request/response types |
I/O Pipeline:
| Crate | Lines | Purpose |
|---|---|---|
io-core | 6.5K | Zero-copy I/O, buffer pool, direct I/O, scheduling, backpressure |
io-metrics | 4.5K | I/O operation metrics and counters |
rio | 6.9K | Composable reader chain (encrypt → compress → hash → limit) |
object-io | 2.4K | High-level object read/write using rio + ecstore |
concurrency | 1.8K | Concurrency control wrappers over io-core |
Storage Engine:
| Crate | Lines | Purpose |
|---|---|---|
ecstore | 87K | ⚠️ Erasure-coded storage: disks, pools, buckets, replication, lifecycle |
filemeta | 10K | File/object metadata types and versioning |
checksums | 732 | Checksum computation |
lock | 7.1K | Distributed lock manager |
heal | 5.9K | Data healing / bitrot repair |
scanner | 5.4K | Background data usage scanner |
object-capacity | 2.5K | Capacity tracking and management |
Security & Auth:
| Crate | Lines | Purpose |
|---|---|---|
crypto | 1.6K | Encryption primitives |
credentials | 713 | Credential types (access key / secret key) |
signer | 1.4K | S3 v4 request signing (see the SigV4 sketch after this table) |
iam | 9.0K | Identity and access management |
policy | 8.8K | Policy engine (S3 bucket/IAM policies) |
kms | 8.1K | Key management service integration |
keystone | 1.9K | OpenStack Keystone auth |
appauth | 143 | Application-level auth tokens |
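For orientation on signer: the heart of SigV4 is a chained HMAC key derivation. A minimal sketch using the hmac and sha2 crates — this is the standard AWS algorithm, not the actual rustfs-signer API:

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

fn hmac_sha256(key: &[u8], msg: &[u8]) -> Vec<u8> {
    let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts any key length");
    mac.update(msg);
    mac.finalize().into_bytes().to_vec()
}

/// SigV4 signing key: HMAC-chain the secret through date, region, and service.
fn signing_key(secret: &str, date: &str, region: &str, service: &str) -> Vec<u8> {
    let k_date = hmac_sha256(format!("AWS4{secret}").as_bytes(), date.as_bytes());
    let k_region = hmac_sha256(&k_date, region.as_bytes());
    let k_service = hmac_sha256(&k_region, service.as_bytes());
    hmac_sha256(&k_service, b"aws4_request")
}

fn main() {
    // The derived key then signs the "string to sign" built from the canonical request.
    let key = signing_key("secret", "20260413", "us-east-1", "s3");
    println!("{} key bytes", key.len());
}
```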
Protocol & API:
| Crate | Lines | Purpose |
|---|---|---|
protos | 5.7K | Protobuf/gRPC definitions for inter-node RPC |
protocols | 18K | FTP/FTPS, WebDAV, Swift API support |
s3-common | 738 | Shared S3 types |
s3select-api | 1.9K | S3 Select interface |
s3select-query | 3.6K | S3 Select query engine |
Observability:
| Crate | Lines | Purpose |
|---|---|---|
metrics | 8.4K | Prometheus metric collectors |
io-metrics | 4.5K | I/O-specific metrics |
obs | 5.6K | OpenTelemetry tracing and telemetry |
audit | 2.4K | Audit logging |
Events:
| Crate | Lines | Purpose |
|---|---|---|
notify | 5.5K | Event notification system |
targets | 3.2K | Notification targets (Kafka, AMQP, webhook, etc.) |
Other:
| Crate | Lines | Purpose |
|---|---|---|
trusted-proxies | 4.0K | Trusted proxy / IP forwarding |
zip | 986 | ZIP archive support for bulk downloads |
workers | 136 | Simple worker abstraction |
The following invariants are rules that the codebase should follow. Some are currently violated (marked with ⚠️). Documenting them here makes the violations explicit and trackable.
Layers flow downward. Server → Admin/App → Storage → ecstore → rio/io-core. No upward imports.
Leaf crates have zero internal dependencies. config, credentials, crypto, io-metrics, madmin, s3-common should depend only on external crates. ⚠️ Violated: utils depends on config, and common depends on filemeta and madmin.

Each type has exactly one definition. Types shared across crates must be defined in one crate and re-exported or imported by others. ⚠️ Violated: ReplicationStats (4 copies), LastMinuteLatency (3 copies), BackpressureConfig (3 copies), DataUsageInfo (2 copies).

ecstore does not know about HTTP or S3 protocol details. It operates on storage-level abstractions (objects, buckets, disks, pools).
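As an illustration of that boundary (the names are hypothetical, not the real ecstore signatures), such an interface speaks in buckets and objects, never in HTTP or XML:

```rust
// Hypothetical sketch of a protocol-agnostic storage boundary.
// Nothing here mentions HTTP, S3 XML, or request signing.
pub struct ObjectInfo {
    pub size: u64,
    pub etag: String,
}

#[derive(Debug, thiserror::Error)]
pub enum StoreError {
    #[error("bucket not found: {0}")]
    BucketNotFound(String),
    #[error("object not found: {0}/{1}")]
    ObjectNotFound(String, String),
}

pub trait ObjectStore {
    fn put_object(&self, bucket: &str, key: &str, data: &[u8]) -> Result<ObjectInfo, StoreError>;
    fn get_object(&self, bucket: &str, key: &str) -> Result<Vec<u8>, StoreError>;
}
```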
The rustfs binary crate is the only place that wires everything together.
Individual crates should be testable in isolation.
Error types use thiserror with descriptive names (e.g., StorageError, not bare Error). ⚠️ Violated: some crates export a bare pub enum Error; 2 crates use snafu; heal uses anyhow in library code.

Known issues. This section documents problems in the current architecture so the team can track and address them deliberately.
common/scanner code duplication (~3K lines). scanner depends on common
but maintains its own copies of DataUsageInfo, LastMinuteLatency, and related
types instead of importing them.
ecstore is a monolith (87K lines, 163 files). It contains disk management, bucket management, erasure coding, replication, lifecycle, RPC, and configuration — all in one crate. It should be decomposed along its existing subdirectories.
Dependency inversions. utils → config and common → filemeta/madmin break
the layering model. These need to be untangled.
Three-layer BackpressureConfig/DeadlockConfig duplication across io-core, concurrency, and rustfs/storage. Should be defined once with builder/composition.
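A sketch of the intended end state (field names are hypothetical): one canonical type with a builder, composed by the other layers instead of redefined:

```rust
// Hypothetical: lives once in io-core; concurrency and rustfs/storage reuse it.
#[derive(Debug, Clone)]
pub struct BackpressureConfig {
    pub max_inflight: usize,
    pub queue_depth: usize,
}

impl BackpressureConfig {
    pub fn builder() -> BackpressureConfigBuilder {
        BackpressureConfigBuilder::default()
    }
}

#[derive(Default)]
pub struct BackpressureConfigBuilder {
    max_inflight: Option<usize>,
    queue_depth: Option<usize>,
}

impl BackpressureConfigBuilder {
    pub fn max_inflight(mut self, n: usize) -> Self { self.max_inflight = Some(n); self }
    pub fn queue_depth(mut self, n: usize) -> Self { self.queue_depth = Some(n); self }

    pub fn build(self) -> BackpressureConfig {
        // Defaults are illustrative, not the codebase's actual values.
        BackpressureConfig {
            max_inflight: self.max_inflight.unwrap_or(64),
            queue_depth: self.queue_depth.unwrap_or(1024),
        }
    }
}
```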
Inconsistent error handling. Three strategies (thiserror/snafu/anyhow) and
mixed naming (bare Error vs descriptive names).
Ambiguous common vs utils boundary. Both described as "utilities and data structures." Need clear ownership rules.
The project convention is thiserror for typed errors with descriptive names.
See AGENTS.md: "Prefer thiserror for library-facing error types."
```rust
// GOOD
#[derive(Debug, thiserror::Error)]
pub enum StorageError {
    #[error("disk not found: {0}")]
    DiskNotFound(String),
}

// AVOID
pub enum Error { ... }   // too generic
anyhow::Result<T>        // in library code (OK in tests/CLI)
```
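A common companion pattern, sketched here rather than mandated by the codebase: each layer defines its own descriptive error and converts lower-layer errors with #[from], so ? works across layer boundaries:

```rust
// Builds on the StorageError example above; ObjectError is illustrative.
#[derive(Debug, thiserror::Error)]
pub enum ObjectError {
    #[error("storage failure")]
    Storage(#[from] StorageError),
    #[error("object too large: {0} bytes")]
    TooLarge(u64),
}

fn put(size: u64) -> Result<(), ObjectError> {
    if size > 5 * 1024 * 1024 * 1024 {
        return Err(ObjectError::TooLarge(size));
    }
    lower_level_write()?; // StorageError converts automatically via #[from]
    Ok(())
}

fn lower_level_write() -> Result<(), StorageError> {
    Ok(())
}
```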
Logging uses the tracing crate (info!, warn!, error!, debug!, trace!) with structured fields, e.g. tracing::info!(bucket = %name, "created bucket"). The telemetry runtime and schema live in rustfs-obs; I/O metrics live in rustfs-io-metrics and surface through rustfs-obs.
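A slightly fuller sketch of that convention (field names are illustrative; assumes the tracing and tracing_subscriber crates):

```rust
use tracing::{info, instrument};

// Record structured fields on a span instead of interpolating into the message.
#[instrument(skip_all, fields(bucket = %bucket, key = %key, size = data.len()))]
fn put_object(bucket: &str, key: &str, data: &[u8]) {
    // ... write path elided ...
    info!("object written");
}

fn main() {
    tracing_subscriber::fmt().init();
    put_object("photos", "cat.png", &[0u8; 16]);
}
```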
MINIO_* → RUSTFS_*)FullReady ┌─────────┐
│ rustfs │ (binary + lib, 75K lines)
│ main │
└────┬────┘
│
┌───────────────┼───────────────┐
│ │ │
┌────▼────┐ ┌────▼────┐ ┌─────▼─────┐
│ server │ │ admin │ │ app │
│ (HTTP) │ │(console)│ │(use-cases) │
└────┬────┘ └────┬────┘ └─────┬─────┘
│ │ │
└───────────────┼───────────────┘
│
┌──────▼──────┐
│ storage │
│ (ecfs, SSE, │
│ RPC, ACL) │
└──────┬──────┘
│
┌──────────────────┼──────────────────┐
│ │ │
┌─────▼─────┐ ┌──────▼──────┐ ┌──────▼──────┐
│ ecstore │ │ rio │ │ io-core │
│ (87K,core) │ │ (readers) │ │ (zero-copy) │
└─────┬──────┘ └─────────────┘ └─────────────┘
│
┌─────┬──┼──┬─────┬──────┐
│ │ │ │ │ │
common utils config policy filemeta ...
"Where does S3 PutObject go?"
server/ routes → app/object_usecase validates → storage/ecfs encodes →
ecstore distributes → rio encrypts/compresses → io-core writes
"Where are bucket policies enforced?"
app/bucket_usecase calls into crates/policy/
"Where is replication configured?"
admin/handlers/replication.rs and admin/handlers/site_replication.rs for API,
ecstore/src/bucket/replication/ for engine
"Where do I add a new admin endpoint?"
Add handler in admin/handlers/, register in admin/router.rs
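The shape of that change, sketched against an axum-style router. The real signatures live in admin/router.rs, so treat every name below as hypothetical:

```rust
use axum::{routing::get, Json, Router};

// Hypothetical handler: the real ones live in admin/handlers/.
async fn cluster_status() -> Json<serde_json::Value> {
    Json(serde_json::json!({ "status": "ok" }))
}

// Hypothetical registration: mirror whatever admin/router.rs actually does.
pub fn admin_router() -> Router {
    Router::new().route("/minio/admin/v3/cluster-status", get(cluster_status))
}
```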
"Where do I add a new metric?"
Define descriptor/collector in crates/obs/src/metrics/, expose via /minio/v2/metrics
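A registration sketch assuming the prometheus crate (the real descriptors live in crates/obs/src/metrics/ and may use a different API; the metric name here is made up):

```rust
use prometheus::{IntCounterVec, Opts, Registry};

// Hypothetical metric: a labeled counter registered against a registry.
pub fn register_put_counter(registry: &Registry) -> prometheus::Result<IntCounterVec> {
    let counter = IntCounterVec::new(
        Opts::new("rustfs_put_objects_total", "Total PutObject requests"),
        &["bucket"],
    )?;
    registry.register(Box::new(counter.clone()))?;
    Ok(counter)
}

fn main() -> prometheus::Result<()> {
    let registry = Registry::new();
    let counter = register_put_counter(&registry)?;
    counter.with_label_values(&["photos"]).inc();
    Ok(())
}
```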
Inspired by matklad's ARCHITECTURE.md and rust-analyzer's architecture.md.