Use {#kb assistant::InlineAssist} to open the Inline Assistant in editors, text threads, the rules library, channel notes, and the terminal panel.
The Inline Assistant sends your current selection (or line) to a language model and replaces it with the response.
If you're using the Inline Assistant for the first time, you need to have at least one LLM provider or external agent configured.
If you have already set up an LLM provider to interact with the Agent Panel, then that will also work for the Inline Assistant.
The one exception is external agents: unlike in the Agent Panel, they can't currently be used to generate changes with the Inline Assistant.
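If you haven't configured a provider yet, a minimal sketch of a `settings.json` entry is shown below. It mirrors the configuration format used later in this page; the specific provider and model values are placeholders, so substitute whichever provider you've actually set up:

```json
{
  "agent": {
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-sonnet-4-5"
    }
  }
}
```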
You can add context in the Inline Assistant the same way you do in the Agent Panel.
You can also create a thread in the Agent Panel, then reference it with @thread in the Inline Assistant. This lets you refine a specific change from a larger thread without re-explaining context.
The Inline Assistant can generate multiple changes at once:
With multiple cursors, pressing {#kb assistant::InlineAssist} sends the same prompt to each cursor position, generating changes at all locations simultaneously.
This works well with excerpts in multibuffers.
You can use the Inline Assistant to send the same prompt to multiple models at once.
Here's how you can customize your settings file to add this functionality:
```json
{
  "agent": {
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-sonnet-4-5"
    },
    "inline_alternatives": [
      {
        "provider": "zed.dev",
        "model": "gpt-5-mini"
      }
    ]
  }
}
```
When multiple models are configured, the Inline Assistant UI shows buttons that let you cycle between the outputs generated by each model.
The models you specify here are always used in addition to your default model.
For example, the following configuration will generate three outputs for every assist: one with Claude Sonnet 4.5 (the default model), one with GPT-5-mini, and one with Gemini 3 Flash.
```json
{
  "agent": {
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-sonnet-4-5"
    },
    "inline_alternatives": [
      {
        "provider": "zed.dev",
        "model": "gpt-5-mini"
      },
      {
        "provider": "zed.dev",
        "model": "gemini-3-flash"
      }
    ]
  }
}
```
Both the Inline Assistant and Edit Prediction generate inline code, but they work differently.
The key difference: the Inline Assistant is explicit and prompt-driven, while Edit Prediction is automatic and context-inferred.
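Because Edit Prediction is automatic, it is controlled by its own setting rather than by a prompt. A minimal sketch, assuming the `edit_prediction_provider` setting under `features` (the setting name and value may differ in your Zed version):

```json
{
  "features": {
    "edit_prediction_provider": "zed"
  }
}
```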
To create a custom keybinding that prefills a prompt, you can add the following to your keymap:
```json
[
  {
    "context": "Editor && mode == full",
    "bindings": {
      "ctrl-shift-enter": [
        "assistant::InlineAssist",
        { "prompt": "Build a snake game" }
      ]
    }
  }
]
```
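The same pattern can be adapted to other places where the Inline Assistant is available. For example, a sketch of a terminal-specific binding (the `Terminal` context name and the prompt are assumptions; check the context names your keymap actually uses):

```json
[
  {
    "context": "Terminal",
    "bindings": {
      "ctrl-shift-enter": [
        "assistant::InlineAssist",
        { "prompt": "Explain what the last command did" }
      ]
    }
  }
]
```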