# Burn Store

Advanced model storage and serialization for the Burn deep learning framework.

A comprehensive storage library for Burn that enables efficient model serialization, cross-framework interoperability, and advanced tensor management.
Migrating from burn-import? See the Migration Guide for help moving from
`PyTorchFileRecorder`/`SafetensorsFileRecorder` to the new Store API.
```rust
use burn_store::{
    BurnpackStore, HalfPrecisionAdapter, ModuleSnapshot, PyTorchToBurnAdapter,
    PytorchStore, SafetensorsStore,
};

// Load from PyTorch
let mut store = PytorchStore::from_file("model.pt");
model.load_from(&mut store)?;

// Load from SafeTensors (with PyTorch adapter)
let mut store = SafetensorsStore::from_file("model.safetensors")
    .with_from_adapter(PyTorchToBurnAdapter);
model.load_from(&mut store)?;

// Save to Burnpack
let mut store = BurnpackStore::from_file("model.bpk");
model.save_into(&mut store)?;

// Save with half precision (F32 -> F16, ~50% smaller files)
let adapter = HalfPrecisionAdapter::new();
let mut store = BurnpackStore::from_file("model_f16.bpk")
    .with_to_adapter(adapter.clone());
model.save_into(&mut store)?;

// Load half precision back (F16 -> F32, same adapter)
let mut store = BurnpackStore::from_file("model_f16.bpk")
    .with_from_adapter(adapter);
model.load_from(&mut store)?;
```
For comprehensive documentation, see the Burn Book's Saving and Loading chapter.
## Benchmarks

```sh
# Generate model files (one-time setup)
uv run benches/generate_unified_models.py

# Run loading benchmarks
cargo bench --bench unified_loading

# Run saving benchmarks
cargo bench --bench unified_saving

# With a specific backend
cargo bench --bench unified_loading --features metal
```
## License

This project is dual-licensed under MIT and Apache-2.0.