examples/python/cookbook.ipynb
# First, install Rust: https://rustup.rs/
%pip install mistralrs-cuda -v
from mistralrs import Runner, Which, ChatCompletionRequest
runner = Runner(
which=Which.GGUF(
tok_model_id="mistralai/Mistral-7B-Instruct-v0.1",
quantized_model_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
quantized_filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
)
)
res = runner.send_chat_completion_request(
ChatCompletionRequest(
model="mistral",
messages=[
{"role": "user", "content": "Tell me a story about the Rust type system."}
],
max_tokens=256,
presence_penalty=1.0,
top_p=0.1,
temperature=0.1,
)
)
print(res)
Let's walk through this code.
from mistralrs import Runner, Which, ChatCompletionRequest
This imports the required classes for our example. The Runner class handles loading and running the model; the supported model types are enumerated by the Which class.
runner = Runner(
which=Which.GGUF(
tok_model_id="mistralai/Mistral-7B-Instruct-v0.1",
quantized_model_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
quantized_filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
tokenizer_json=None,
repeat_last_n=64,
)
)
This tells the Runner to actually load the model. It will use a CUDA, Metal, or CPU device depending on which features were enabled when the package was built.
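If you are not on an NVIDIA GPU, install the package variant matching your hardware instead of mistralrs-cuda. A sketch of the backend-specific installs; the package names below follow the project's naming convention, so verify the exact list against the mistral.rs README:

# Backend-specific installs (verify package names in the mistral.rs README):
# %pip install mistralrs        # CPU only
# %pip install mistralrs-metal  # Apple Metal (macOS)
# %pip install mistralrs-cuda   # NVIDIA CUDA, as used above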
res = runner.send_chat_completion_request(
ChatCompletionRequest(
model="mistral",
messages=[{"role":"user", "content":"Tell me a story about the Rust type system."}],
max_tokens=256,
presence_penalty=1.0,
top_p=0.1,
temperature=0.1,
)
)
print(res)
Now we actually send a request! The messages are specified in the same format as the OpenAI API.
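Because the message format is OpenAI-compatible, you can also pass a whole conversation with alternating user and assistant turns. A minimal sketch reusing the runner from above (the message contents are illustrative):

# Multi-turn conversation: prior turns are passed as context.
res = runner.send_chat_completion_request(
    ChatCompletionRequest(
        model="mistral",
        messages=[
            {"role": "user", "content": "What is an enum in Rust?"},
            {"role": "assistant", "content": "A type with a fixed set of variants."},
            {"role": "user", "content": "Show a one-variant example."},
        ],
        max_tokens=64,
    )
)
print(res.choices[0].message.content)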
from mistralrs import Runner, Which, ChatCompletionRequest, Architecture
runner = Runner(
which=Which.Plain(
model_id="mistralai/Mistral-7B-Instruct-v0.1",
arch=Architecture.Mistral,
)
)
res = runner.send_chat_completion_request(
ChatCompletionRequest(
model="mistral",
messages=[
{"role": "user", "content": "Tell me a story about the Rust type system."}
],
max_tokens=256,
presence_penalty=1.0,
top_p=0.1,
temperature=0.1,
)
)
print(res)
Let's walk through this code too and see the difference between loading a Plain model and loading a GGUF model.
from mistralrs import Runner, Which, ChatCompletionRequest, Architecture
This imports the required classes for our example. The Runner class handles loading and running the model; the supported model types are enumerated by the Which class. Note that we also import the Architecture enum, which selects the architecture for non-GGUF models.
runner = Runner(
which=Which.Plain(
model_id="mistralai/Mistral-7B-Instruct-v0.1",
arch=Architecture.Mistral,
tokenizer_json=None,
repeat_last_n=64,
)
)
This tells the Runner to actually load the model as a Mistral architecture. It will use a CUDA, Metal, or CPU device depending on which features were enabled when the package was built.
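To load a different non-GGUF model, pass the matching Architecture variant. A hypothetical sketch, assuming the model ID and a Gemma variant of the Architecture enum (check the enum's members in your installed version):

runner = Runner(
    which=Which.Plain(
        model_id="google/gemma-1.1-2b-it",  # assumed model ID, for illustration
        arch=Architecture.Gemma,            # assumed enum variant; verify it exists
    )
)

The remaining cells collect minimal recipes for each loading mode, starting with a quantized GGUF model: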
from mistralrs import Runner, Which
runner = Runner(
which=Which.GGUF(
tok_model_id="mistralai/Mistral-7B-Instruct-v0.1",
quantized_model_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
quantized_filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
)
)
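Or a plain (unquantized) model, selected by architecture: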
from mistralrs import Runner, Which, Architecture
runner = Runner(
which=Which.Plain(
tok_model_id="mistralai/Mistral-7B-Instruct-v0.1",
arch=Architecture.Mistral,
)
)
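You can also activate X-LoRA adapters on top of a GGUF model by passing the adapter model ID and an ordering file: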
from mistralrs import Runner, Which
runner = Runner(
which=Which.XLoraGGUF(
tok_model_id=None, # Automatically determine from ordering file
quantized_model_id="TheBloke/zephyr-7B-beta-GGUF",
quantized_filename="zephyr-7b-beta.Q4_0.gguf",
xlora_model_id="lamm-mit/x-lora",
order="orderings/xlora-paper-ordering.json",
tgt_non_granular_index=None,
)
)
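With any of these Runners, sending a request works the same way: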
from mistralrs import ChatCompletionRequest
res = runner.send_chat_completion_request(
ChatCompletionRequest(
model="mistral",
messages=[
{"role": "user", "content": "Tell me a story about the Rust type system."}
],
max_tokens=256,
presence_penalty=1.0,
top_p=0.1,
temperature=0.1,
)
)
print(res.choices[0].message.content)
print(res.usage)