docs/PHI4MM.md
microsoft/Phi-4-multimodal-instruct

The Phi 4 Multimodal model is supported in the Rust, Python, and HTTP APIs, and supports ISQ for increased performance.
The Python and HTTP APIs support sending images as a URL, a local path, or a base64 encoded string.
The Rust SDK accepts images loaded via the image crate.
Note: The Phi 4 Multimodal model works best with a single image, although sending multiple images is supported.
Note: When sending multiple images, they will all be resized to the smallest dimensions into which every image fits without cropping. Aspect ratio is not preserved in that case.
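For illustration, here is a minimal sketch of what a multi-image user message looks like in the Python and HTTP APIs; the URLs and prompt are placeholders, and the content list can be dropped into any of the requests shown below.

```python
# Sketch of a user message carrying two images plus a text prompt.
# Both images are resized to common dimensions as described in the note above.
multi_image_message = {
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/first.jpg"}},
        {"type": "image_url", "image_url": {"url": "https://example.com/second.jpg"}},
        {"type": "text", "text": "Compare these two images in detail."},
    ],
}
```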
Phi 4 Multimodal also supports audio inputs!
You can find this example here.
We support an OpenAI-compatible HTTP API for multimodal models. This example demonstrates sending a chat completion request with an image.
Note: The image_url may be either a path, URL, or a base64 encoded string.
Image:
<h6><a href = "https://www.nhmagazine.com/mount-washington/">Credit</a></h6>Prompt:
What is shown in this image? Write a detailed response analyzing the scene.
Output:
A mountain with snow on it.
mistralrs serve multimodal -p 1234 -m microsoft/Phi-4-multimodal-instruct
from openai import OpenAI
client = OpenAI(api_key="foobar", base_url="http://localhost:1234/v1/")
completion = client.chat.completions.create(
model="default",
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://www.nhmagazine.com/content/uploads/2019/05/mtwashingtonFranconia-2-19-18-108-Edit-Edit.jpg"
},
},
{
"type": "text",
"text": "What is shown in this image? Write a detailed response analyzing the scene.",
},
],
},
],
max_tokens=256,
frequency_penalty=1.0,
top_p=0.1,
temperature=0,
)
resp = completion.choices[0].message.content
print(resp)
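As noted above, the image_url can also carry a local path or base64 data instead of a URL. Below is a minimal sketch of encoding a local file as a base64 data URL in the OpenAI style; the file path is a placeholder, and depending on the client you may also be able to pass the raw base64 string or the path directly.

```python
import base64

# Read a local image (placeholder path) and wrap it in a base64 data URL.
with open("my_image.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

# This part can replace the "image_url" entry in the request above.
image_part = {
    "type": "image_url",
    "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
}
```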
You can find this example here.
This is a minimal example of running the Phi 4 Multimodal model on an image fetched from a URL.
use anyhow::Result;
use mistralrs::{IsqType, TextMessageRole, MultimodalMessages, MultimodalModelBuilder};
#[tokio::main]
async fn main() -> Result<()> {
let model =
MultimodalModelBuilder::new("microsoft/Phi-4-multimodal-instruct")
.with_isq(IsqType::Q4K)
.with_logging()
.build()
.await?;
let bytes = match reqwest::blocking::get(
"https://cdn.britannica.com/45/5645-050-B9EC0205/head-treasure-flower-disk-flowers-inflorescence-ray.jpg",
) {
Ok(http_resp) => http_resp.bytes()?.to_vec(),
Err(e) => anyhow::bail!(e),
};
let image = image::load_from_memory(&bytes)?;
let messages = MultimodalMessages::new().add_image_message(
TextMessageRole::User,
"What is depicted here? Please describe the scene in detail.",
vec![image],
);
let response = model.send_chat_request(messages).await?;
println!("{}", response.choices[0].message.content.as_ref().unwrap());
dbg!(
response.usage.avg_prompt_tok_per_sec,
response.usage.avg_compl_tok_per_sec
);
Ok(())
}
You can find this example here.
This example demonstrates loading and sending a chat completion request with an image.
Note: The image_url may be either a path, URL, or a base64 encoded string.
from mistralrs import Runner, Which, ChatCompletionRequest, MultimodalArchitecture
runner = Runner(
which=Which.MultimodalPlain(
model_id="microsoft/Phi-4-multimodal-instruct",
arch=MultimodalArchitecture.Phi4MM,
),
)
res = runner.send_chat_completion_request(
ChatCompletionRequest(
model="default",
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://upload.wikimedia.org/wikipedia/commons/e/e7/Everest_North_Face_toward_Base_Camp_Tibet_Luca_Galuzzi_2006.jpg"
},
},
{
"type": "text",
"text": "What is shown in this image? Write a detailed response analyzing the scene.",
},
],
}
],
max_tokens=256,
presence_penalty=1.0,
top_p=0.1,
temperature=0.1,
)
)
print(res.choices[0].message.content)
print(res.usage)
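ISQ, mentioned at the top of this page and used in the Rust example via .with_isq, can also be requested when constructing the Runner. This is a sketch only: the in_situ_quant parameter and the "Q4K" name are assumptions carried over from the Rust builder.

```python
from mistralrs import Runner, Which, MultimodalArchitecture

# Sketch: load the model with in-situ quantization enabled.
# `in_situ_quant` and the "Q4K" type name are assumed to mirror the Rust builder.
runner = Runner(
    which=Which.MultimodalPlain(
        model_id="microsoft/Phi-4-multimodal-instruct",
        arch=MultimodalArchitecture.Phi4MM,
    ),
    in_situ_quant="Q4K",
)
```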
Alongside vision, Phi 4 Multimodal in mistral.rs can accept audio as an additional modality. This unlocks fully local pipelines such as text + speech + vision -> text, where the model can reason jointly over what it hears and what it sees.
mistral.rs automatically decodes the supplied audio (WAV, MP3, FLAC, OGG, and anything else Symphonia can handle) into 16-bit PCM.
Audio is delivered with the audio_url content type, which mirrors OpenAI's official specification:
{
"role": "user",
"content": [
{
"type": "audio_url",
"audio_url": { "url": "https://upload.wikimedia.org/wikipedia/commons/4/42/Bird_singing.ogg" }
},
{
"type": "image_url",
"image_url": { "url": "https://www.allaboutbirds.org/guide/assets/og/528129121-1200px.jpg" }
},
{
"type": "text",
"text": "Describe what is happening in this clip in as much detail as possible."
}
]
}
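For example, against the HTTP server started earlier on this page, the same payload can be sent with the OpenAI Python client. This is a sketch that reuses the audio and image URLs from the snippet above; only max_tokens is set, and other sampling parameters are left at their defaults.

```python
from openai import OpenAI

client = OpenAI(api_key="foobar", base_url="http://localhost:1234/v1/")

completion = client.chat.completions.create(
    model="default",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "audio_url",
                    "audio_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/4/42/Bird_singing.ogg"
                    },
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://www.allaboutbirds.org/guide/assets/og/528129121-1200px.jpg"
                    },
                },
                {
                    "type": "text",
                    "text": "Describe what is happening in this clip in as much detail as possible.",
                },
            ],
        }
    ],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```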
use anyhow::Result;
use mistralrs::{AudioInput, IsqType, TextMessageRole, MultimodalMessages, MultimodalModelBuilder};
#[tokio::main]
async fn main() -> Result<()> {
let model = MultimodalModelBuilder::new("microsoft/Phi-4-multimodal-instruct")
.with_isq(IsqType::Q4K)
.with_logging()
.build()
.await?;
let audio_bytes = reqwest::blocking::get(
"https://upload.wikimedia.org/wikipedia/commons/4/42/Bird_singing.ogg",
)?
.bytes()?
.to_vec();
let audio = AudioInput::from_bytes(&audio_bytes)?;
let image_bytes = reqwest::blocking::get(
"https://www.allaboutbirds.org/guide/assets/og/528129121-1200px.jpg",
)?
.bytes()?
.to_vec();
let image = image::load_from_memory(&image_bytes)?;
let messages = MultimodalMessages::new()
.add_multimodal_message(
TextMessageRole::User,
"Describe in detail what is happening.",
vec![image],
vec![audio],
vec![],
);
let response = model.send_chat_request(messages).await?;
println!("{}", response.choices[0].message.content.as_ref().unwrap());
Ok(())
}
With this, you now have a single-call pipeline that fuses sound, vision, and text, all running locally through mistral.rs! 🔥