<a name="top"></a>
<h1 align="center">mistral.rs</h1>
<h3 align="center">Fast, flexible LLM inference.</h3>

<p align="center">
| <a href="https://ericlbuehler.github.io/mistral.rs/"><b>Documentation</b></a> | <a href="https://crates.io/crates/mistralrs"><b>Rust SDK</b></a> | <a href="https://ericlbuehler.github.io/mistral.rs/PYTHON_SDK.html"><b>Python SDK</b></a> | <a href="https://discord.gg/SZrecqK8qw"><b>Discord</b></a> |
</p>

Run any model with `mistralrs run -m user/model` and quantize it with `mistralrs quantize`. `mistralrs serve --ui` gives you a web interface instantly, and `mistralrs tune` benchmarks your system and picks optimal quantization + device mapping.

## Installation

Linux/macOS:

```bash
curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/EricLBuehler/mistral.rs/master/install.sh | sh
```
Windows (PowerShell):

```powershell
irm https://raw.githubusercontent.com/EricLBuehler/mistral.rs/master/install.ps1 | iex
```
For manual installation and other platforms, see the [Documentation](https://ericlbuehler.github.io/mistral.rs/).
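If you'd rather build from source, a plain Cargo build works as a sketch; the accelerator feature flags (e.g. `cuda`, `metal`) are assumptions based on the backends this README mentions, so check the docs for the exact names:

```bash
# Sketch: build from source. Accelerator feature flags such as
# `--features cuda` are assumptions; consult the docs for your platform.
git clone https://github.com/EricLBuehler/mistral.rs.git
cd mistral.rs
cargo build --release
# The compiled binary lands in target/release/
```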
## Quick Start

```bash
# Interactive chat
mistralrs run -m Qwen/Qwen3-4B

# One-shot prompt (no interactive session)
mistralrs run -m Qwen/Qwen3-4B -i "What is the capital of France?"

# One-shot with an image
mistralrs run -m google/gemma-4-E4B-it --image photo.jpg -i "Describe this image"

# Or start a server with web UI
mistralrs serve --ui -m google/gemma-4-E4B-it
```
Then visit http://localhost:1234/ui for the web chat interface.
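The server can also be driven programmatically. Here is a sketch, assuming it exposes an OpenAI-compatible `/v1/chat/completions` route (consistent with the `model="default"` convention in the Python example below); the exact route is not confirmed by this README:

```bash
# Hedged sketch: assumes an OpenAI-compatible chat completions endpoint
# on the same port the web UI uses.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "default",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 256
      }'
```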
## The `mistralrs` CLI

The CLI is designed to be zero-config: just point it at a model and go.
It offers simple subcommands (`run`, `serve`, `bench`), plus `mistralrs tune` to automatically benchmark and configure optimal settings for your hardware:

```bash
# Auto-tune for your hardware and emit a config file
mistralrs tune -m Qwen/Qwen3-4B --emit-config config.toml

# Run using the generated config
mistralrs from-config -f config.toml

# Diagnose system issues (CUDA, Metal, HuggingFace connectivity)
mistralrs doctor
```
## Features

- **Performance**
- **Quantization** (full docs)
- **Flexibility**
- **Agentic Features**
Request a new model | Full compatibility tables
## Python SDK

```bash
pip install mistralrs  # or mistralrs-cuda, mistralrs-metal, mistralrs-mkl, mistralrs-accelerate
```
```python
from mistralrs import Runner, Which, ChatCompletionRequest

runner = Runner(
    which=Which.Plain(model_id="Qwen/Qwen3-4B"),
    in_situ_quant="4",
)

res = runner.send_chat_completion_request(
    ChatCompletionRequest(
        model="default",
        messages=[{"role": "user", "content": "Hello!"}],
        max_tokens=256,
    )
)
print(res.choices[0].message.content)
```
Python SDK | Installation | Examples | Cookbook
## Rust SDK

```bash
cargo add mistralrs
```
```rust
use anyhow::Result;
use mistralrs::{IsqType, TextMessageRole, TextMessages, MultimodalModelBuilder};

#[tokio::main]
async fn main() -> Result<()> {
    let model = MultimodalModelBuilder::new("google/gemma-4-E4B-it")
        .with_isq(IsqType::Q4K)
        .with_logging()
        .build()
        .await?;

    let messages = TextMessages::new().add_message(
        TextMessageRole::User,
        "Hello!",
    );

    let response = model.send_chat_request(messages).await?;
    println!("{:?}", response.choices[0].message.content);
    Ok(())
}
```
## Docker

For quick containerized deployment:
```bash
docker pull ghcr.io/ericlbuehler/mistral.rs:latest
docker run --gpus all -p 1234:1234 ghcr.io/ericlbuehler/mistral.rs:latest \
  serve -m Qwen/Qwen3-4B
```
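For gated models, you'll need Hugging Face credentials inside the container. A sketch follows; the `HF_TOKEN` variable and cache path are assumptions not confirmed by this README, so check the docs for the names the image actually honors:

```bash
# Sketch: pass a Hugging Face token and reuse the host model cache.
# HF_TOKEN and the cache mount path are assumptions.
docker run --gpus all -p 1234:1234 \
  -e HF_TOKEN \
  -v "$HOME/.cache/huggingface:/root/.cache/huggingface" \
  ghcr.io/ericlbuehler/mistral.rs:latest \
  serve -m Qwen/Qwen3-4B
```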
For production use, we recommend installing the CLI directly for maximum flexibility.
For complete documentation, see the [Documentation](https://ericlbuehler.github.io/mistral.rs/).
Contributions welcome! Please open an issue to discuss new features or report bugs. If you want to add a new model, please contact us via an issue and we can coordinate.
This project would not be possible without the excellent work at Candle. Thank you to all contributors!
mistral.rs is not affiliated with Mistral AI.
<p align="right"> <a href="#top">Back to Top</a> </p>