docs/features/tool_calling.md
# Tool Calling

vLLM currently supports named function calling, as well as the `auto`, `required` (as of `vllm>=0.8.3`), and `none` options for the `tool_choice` field in the chat completion API.
## Quickstart

Start the server with tool calling enabled. This example uses Meta's Llama 3.1 8B model, so we need to use the `llama3_json` tool calling chat template from the vLLM examples directory:
```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --enable-auto-tool-choice \
  --tool-call-parser llama3_json \
  --chat-template examples/tool_chat_template_llama3.1_json.jinja
```
Next, make a request that triggers the model to use the available tools:
??? code

    ```python
    import json

    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

    def get_weather(location: str, unit: str):
        return f"Getting the weather for {location} in {unit}..."

    tool_functions = {"get_weather": get_weather}

    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string", "description": "City and state, e.g., 'San Francisco, CA'"},
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location", "unit"],
                },
            },
        },
    ]

    response = client.chat.completions.create(
        model=client.models.list().data[0].id,
        messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
        tools=tools,
        tool_choice="auto",
    )

    tool_call = response.choices[0].message.tool_calls[0].function
    print(f"Function called: {tool_call.name}")
    print(f"Arguments: {tool_call.arguments}")
    print(f"Result: {tool_functions[tool_call.name](**json.loads(tool_call.arguments))}")
    ```
Example output:

```text
Function called: get_weather
Arguments: {"location": "San Francisco, CA", "unit": "fahrenheit"}
Result: Getting the weather for San Francisco, CA in fahrenheit...
```
This example demonstrates:

- Setting up the server with tool calling enabled
- Defining an actual function to handle tool calls
- Making a request with `tool_choice="auto"`
- Handling the structured response and executing the corresponding function

You can also specify a particular function using named function calling by setting `tool_choice={"type": "function", "function": {"name": "get_weather"}}`. Note that this will use the structured outputs backend, so the first time this is used there will be several seconds of latency (or more) as the FSM is compiled before being cached for subsequent requests.
Remember that it's the caller's responsibility to:

1. Define the available tools in the request.
2. Parse the returned tool call and execute the corresponding function.
3. Return the function result to the model for any follow-up generation (see the sketch below).
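As a sketch of that last step, here is one way to feed the tool result back to the model, reusing the `client`, `tools`, `tool_functions`, and `response` objects (and the `json` import) from the quickstart snippet above. The message layout follows the OpenAI chat format; this is illustrative, not the only way to structure it:

```python
# Rebuild the conversation: the original user turn plus the assistant's tool call.
assistant_message = response.choices[0].message
messages = [
    {"role": "user", "content": "What's the weather like in San Francisco?"},
    assistant_message.model_dump(exclude_none=True),
]

# Execute the call and send the result back as a tool-role message,
# referencing the tool call ID so the model can match it up.
tool_call = assistant_message.tool_calls[0]
result = tool_functions[tool_call.function.name](**json.loads(tool_call.function.arguments))
messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": result})

# The model now produces a final natural-language answer.
followup = client.chat.completions.create(
    model=client.models.list().data[0].id,
    messages=messages,
    tools=tools,
)
print(followup.choices[0].message.content)
```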
For more advanced usage, including parallel tool calls and different model-specific parsers, see the sections below.
## Named Function Calling

vLLM supports named function calling in the chat completion API by default. This should work with most structured outputs backends supported by vLLM. You are guaranteed a validly-parsable function call - not a high-quality one.
vLLM will use structured outputs to ensure the response matches the tool parameter object defined by the JSON schema in the `tools` parameter.

For best results, specify the expected output format/schema in the prompt, so that the model's intended generation is aligned with the schema that the structured outputs backend forces it to generate.
To use a named function, define the functions in the `tools` parameter of the chat completion request and specify the name of one of them in the `tool_choice` parameter.
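For example, reusing the `client` and `tools` objects from the quickstart above, a named-function request looks like this (a minimal sketch):

```python
# Force the model to call get_weather; the structured outputs backend
# constrains the generated arguments to the declared parameter schema.
response = client.chat.completions.create(
    model=client.models.list().data[0].id,
    messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
print(response.choices[0].message.tool_calls[0].function.arguments)
```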
## Required Function Calling

vLLM supports the `tool_choice='required'` option in the chat completion API. Like named function calling, it uses structured outputs, so it is enabled by default and works with any supported model. However, support for alternative decoding backends is still on the roadmap for the V1 engine.
When `tool_choice='required'` is set, the model is guaranteed to generate one or more tool calls based on the tool list in the `tools` parameter. The number of tool calls depends on the user's query, and the output strictly follows the schemas defined in `tools`.
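A minimal sketch, again reusing `client` and `tools` from the quickstart (the two-city prompt is illustrative):

```python
# With tool_choice="required", at least one tool call is guaranteed;
# a multi-part query may yield several schema-conforming calls.
response = client.chat.completions.create(
    model=client.models.list().data[0].id,
    messages=[{"role": "user", "content": "Compare the weather in Seattle and Boston."}],
    tools=tools,
    tool_choice="required",
)
for call in response.choices[0].message.tool_calls:
    print(call.function.name, call.function.arguments)
```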
## None Function Calling

vLLM supports the `tool_choice='none'` option in the chat completion API. When this option is set, the model will not generate any tool calls and will respond with regular text content only, even if tools are defined in the request.
!!! note
    When tools are specified in the request, vLLM includes tool definitions in the prompt by default, regardless of the `tool_choice` setting. To exclude tool definitions when `tool_choice='none'`, use the `--exclude-tools-when-tool-choice-none` option.
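A minimal sketch, reusing `client` and `tools` from the quickstart:

```python
# With tool_choice="none", the model answers in plain text even though
# tools are defined in the request; message.tool_calls will be empty.
response = client.chat.completions.create(
    model=client.models.list().data[0].id,
    messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
    tools=tools,
    tool_choice="none",
)
print(response.choices[0].message.content)
```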
## Constrained Decoding Behavior

Whether vLLM enforces the tool parameter schema during generation depends on the `tool_choice` mode:

| `tool_choice` value | Schema-constrained decoding | Behavior |
|---|---|---|
| Named function | Yes (via structured outputs backend) | Arguments are guaranteed to be valid JSON conforming to the function's parameter schema. |
| `"required"` | Yes (via structured outputs backend) | Same as named function. The model must produce at least one tool call. |
| `"auto"` | No | The model generates freely. A tool-call parser extracts tool calls from the raw text. Arguments may be malformed or not match the schema. |
| `"none"` | N/A | No tool calls are produced. |

When schema conformance matters, prefer `tool_choice="required"` or named function calling over `"auto"`.
### The `strict` Parameter

The OpenAI API supports a `strict` field on function definitions. When set to `true`, OpenAI uses constrained decoding to guarantee that tool-call arguments match the function schema, even in `tool_choice="auto"` mode.

vLLM does not implement `strict` mode today. The `strict` field is accepted in requests (to avoid breaking clients that set it), but it has no effect on decoding behavior. In `auto` mode, argument validity depends entirely on the model's output quality and the parser's extraction logic.

Tracking issues: #15526, #16313.
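Until `strict` is implemented, one workaround is to validate arguments client-side. A sketch using the third-party `jsonschema` package (an assumption of this example, not something vLLM ships; install with `pip install jsonschema`):

```python
import json

from jsonschema import ValidationError, validate

def parse_and_validate(tool_call, tools):
    """Parse a tool call's arguments and check them against the declared schema."""
    schemas = {t["function"]["name"]: t["function"]["parameters"] for t in tools}
    try:
        args = json.loads(tool_call.function.arguments)
        validate(instance=args, schema=schemas[tool_call.function.name])
        return args
    except (json.JSONDecodeError, KeyError, ValidationError) as exc:
        # Malformed or schema-violating arguments: retry, repair, or reject.
        raise ValueError(f"Invalid tool call: {exc}") from exc
```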
## Automatic Function Calling

To enable this feature, you should set the following flags:

- `--enable-auto-tool-choice` -- **mandatory** for auto tool choice. It tells vLLM that you want to enable the model to generate its own tool calls when it deems appropriate.
- `--tool-call-parser` -- select the tool parser to use (listed below). Additional tool parsers will continue to be added in the future, and you can also register your own tool parsers with `--tool-parser-plugin`.
- `--tool-parser-plugin` -- optional tool parser plugin used to register user-defined tool parsers into vLLM; the registered tool parser name can then be specified in `--tool-call-parser`.
- `--chat-template` -- optional for auto tool choice. The path to a chat template that handles tool-role messages and assistant-role messages containing previously generated tool calls. Hermes, Mistral and Llama models have tool-compatible chat templates in their `tokenizer_config.json` files, but you can specify a custom template. This argument can also be set to `tool_use` if your model has a tool use-specific chat template configured in its `tokenizer_config.json`, in which case it will be used per the Transformers specification; see the Hugging Face documentation on chat templates for details and an example of such a `tokenizer_config.json`.

If your favorite tool-calling model is not supported, please feel free to contribute a parser & tool use chat template!
!!! note
    With `tool_choice="auto"`, tool-call arguments are extracted from the model's raw text output by the selected parser. No schema-level constraint is applied during decoding, so arguments may occasionally be malformed or violate the function's parameter schema. See Constrained Decoding Behavior above for details.
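In practice this means auto-mode callers should treat argument parsing as fallible. A defensive sketch (the fallback strategy is up to the application):

```python
import json

def safe_parse_arguments(tool_call):
    """Return parsed arguments, or None if the parser-extracted JSON is malformed."""
    try:
        return json.loads(tool_call.function.arguments)
    except json.JSONDecodeError:
        # Options: surface the raw string, retry the request, or fall back
        # to tool_choice="required" for schema-constrained decoding.
        return None
```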
### Hermes Models (`hermes`)

All Nous Research Hermes-series models newer than Hermes 2 Pro should be supported.

- `NousResearch/Hermes-2-Pro-*`
- `NousResearch/Hermes-2-Theta-*`
- `NousResearch/Hermes-3-*`

Note that the Hermes 2 Theta models are known to have degraded tool call quality and capabilities due to the merge step in their creation.

Flags: `--tool-call-parser hermes`
### Mistral Models (`mistral`)

Supported models:

- `mistralai/Mistral-7B-Instruct-v0.3` (confirmed)

Known issues:

1. Mistral 7B struggles to generate parallel tool calls correctly.
2. For the Transformers tokenization backend only: Mistral's `tokenizer_config.json` chat template requires tool call IDs that are exactly 9 digits, which is much shorter than what vLLM generates. Since an exception is thrown when this condition is not met, the following additional chat templates are provided:
    - `examples/tool_chat_template_mistral.jinja`: the official Mistral chat template, adjusted so that it works with vLLM (`tool_call_id` fields are truncated to the last 9 digits)
    - `examples/tool_chat_template_mistral_parallel.jinja`: a variant that adds a tool-use system prompt when tools are provided, for better reliability with parallel tool calling

Recommended flags:

- To use the official Mistral AI format: `--tool-call-parser mistral`
- To use the Transformers format when available: `--tokenizer_mode hf --config_format hf --load_format hf --tool-call-parser mistral --chat-template examples/tool_chat_template_mistral_parallel.jinja`
!!! note
    Models officially released by Mistral AI have two possible formats:

    1. The official format that is used by default with `auto` or `mistral` arguments:

        `--tokenizer_mode mistral --config_format mistral --load_format mistral`

        This format uses [mistral-common](https://github.com/mistralai/mistral-common), Mistral AI's tokenizer backend.

    2. The Transformers format, when available, that is used with `hf` arguments:

        `--tokenizer_mode hf --config_format hf --load_format hf --chat-template examples/tool_chat_template_mistral_parallel.jinja`
### Llama Models (`llama3_json`)

Supported models:

All Llama 3.1, 3.2 and 4 models should be supported.

- `meta-llama/Llama-3.1-*`
- `meta-llama/Llama-3.2-*`
- `meta-llama/Llama-4-*`

The tool calling that is supported is JSON-based tool calling. For the pythonic tool calling introduced by the Llama 3.2 models, see the `pythonic` tool parser below. For Llama 4 models, it is recommended to use the `llama4_pythonic` tool parser. Other tool calling formats, like the built-in python tool calling or custom tool calling, are not supported.
Known issues:

1. Parallel tool calls are not supported.
2. The model can generate parameters in the wrong format, such as an array serialized as a string instead of an array.
vLLM provides two JSON-based chat templates for Llama 3.1 and 3.2:

- `examples/tool_chat_template_llama3.1_json.jinja`: the official Llama 3.1 chat template, tweaked so that it works better with vLLM
- `examples/tool_chat_template_llama3.2_json.jinja`: extends the Llama 3.1 template with support for images
Recommended flags: `--tool-call-parser llama3_json --chat-template {see_above}`
vLLM also provides pythonic and JSON-based chat templates for Llama 4, but pythonic tool calling is recommended. For Llama 4 models, use `--tool-call-parser llama4_pythonic --chat-template examples/tool_chat_template_llama4_pythonic.jinja`.
### IBM Granite

Supported models:
- `ibm-granite/granite-4.0-h-small` and other Granite 4.0 models

    Recommended flags: `--tool-call-parser granite4`

- `ibm-granite/granite-3.0-8b-instruct`

    Recommended flags: `--tool-call-parser granite --chat-template examples/tool_chat_template_granite.jinja`

    `examples/tool_chat_template_granite.jinja`: this is a modified chat template from the original on Hugging Face. Parallel function calls are supported.

- `ibm-granite/granite-3.1-8b-instruct`

    Recommended flags: `--tool-call-parser granite`

    The chat template from Hugging Face can be used directly. Parallel function calls are supported.

- `ibm-granite/granite-20b-functioncalling`

    Recommended flags: `--tool-call-parser granite-20b-fc --chat-template examples/tool_chat_template_granite_20b_fc.jinja`

    `examples/tool_chat_template_granite_20b_fc.jinja`: this is a modified chat template from the original on Hugging Face, which is not vLLM-compatible. It blends function description elements from the Hermes template and follows the same system prompt as the "Response Generation" mode from the paper. Parallel function calls are supported.
### InternLM Models (`internlm`)

Supported models:

- `internlm/internlm2_5-7b-chat` (confirmed)

Known issues:

- Although this implementation also supports InternLM2, the tool call results are not stable when tested with the `internlm/internlm2-chat-7b` model.

Recommended flags: `--tool-call-parser internlm --chat-template examples/tool_chat_template_internlm2_tool.jinja`
### Jamba Models (`jamba`)

AI21's Jamba-1.5 models are supported.

- `ai21labs/AI21-Jamba-1.5-Mini`
- `ai21labs/AI21-Jamba-1.5-Large`

Flags: `--tool-call-parser jamba`
### xLAM Models (`xlam`)

The xLAM tool parser is designed to support models that generate tool calls in various JSON formats. It detects function calls in several different output styles:

1. Direct JSON arrays: output starting with `[` and ending with `]`
2. Thinking tags: `<think>...</think>` tags containing JSON arrays
3. Markdown code blocks with the `json` language tag
4. Tool call tags: `[TOOL_CALLS]` or `<tool_call>...</tool_call>` tags

Parallel function calls are supported, and the parser can effectively separate text content from tool calls.
Supported models:

- Llama-based xLAM models: `Salesforce/Llama-xLAM-2-8B-fc-r`, `Salesforce/Llama-xLAM-2-70B-fc-r`
- Qwen-based xLAM models: `Salesforce/xLAM-1B-fc-r`, `Salesforce/xLAM-3B-fc-r`, `Salesforce/Qwen-xLAM-32B-fc-r`

Flags:

- For Llama-based xLAM models: `--tool-call-parser xlam --chat-template examples/tool_chat_template_xlam_llama.jinja`
- For Qwen-based xLAM models: `--tool-call-parser xlam --chat-template examples/tool_chat_template_xlam_qwen.jinja`

### Qwen Models

For Qwen2.5, the chat template in `tokenizer_config.json` already includes support for Hermes-style tool use. Therefore, you can use the `hermes` parser to enable tool calls for Qwen models. For more detailed information, please refer to the official Qwen documentation.
- `Qwen/Qwen2.5-*`
- `Qwen/QwQ-32B`

Flags: `--tool-call-parser hermes`
### MiniMax Models (`minimax_m1`)

Supported models:

- `MiniMaxAi/MiniMax-M1-40k` (use with `examples/tool_chat_template_minimax_m1.jinja`)
- `MiniMaxAi/MiniMax-M1-80k` (use with `examples/tool_chat_template_minimax_m1.jinja`)

Flags: `--tool-call-parser minimax --chat-template examples/tool_chat_template_minimax_m1.jinja`
### DeepSeek-V3 Models (`deepseek_v3`)

Supported models:

- `deepseek-ai/DeepSeek-V3-0324` (use with `examples/tool_chat_template_deepseekv3.jinja`)
- `deepseek-ai/DeepSeek-R1-0528` (use with `examples/tool_chat_template_deepseekr1.jinja`)

Flags: `--tool-call-parser deepseek_v3 --chat-template {see_above}`
### DeepSeek-V3.1 Models (`deepseek_v31`)

Supported models:

- `deepseek-ai/DeepSeek-V3.1` (use with `examples/tool_chat_template_deepseekv31.jinja`)

Flags: `--tool-call-parser deepseek_v31 --chat-template {see_above}`
### GPT-OSS Models (`openai`)

Supported models:

- `openai/gpt-oss-20b`
- `openai/gpt-oss-120b`

Flags: `--tool-call-parser openai`
### Kimi-K2 Models (`kimi_k2`)

Supported models:

- `moonshotai/Kimi-K2-Instruct`

Flags: `--tool-call-parser kimi_k2`
### Hunyuan Models (`hunyuan_a13b`)

Supported models:

- `tencent/Hunyuan-A13B-Instruct` (the chat template is already included in the Hugging Face model files)

Flags:

- For non-reasoning: `--tool-call-parser hunyuan_a13b`
- For reasoning: `--tool-call-parser hunyuan_a13b --reasoning-parser hunyuan_a13b`

### LongCat-Flash-Chat Models (`longcat`)

Supported models:

- `meituan-longcat/LongCat-Flash-Chat`
- `meituan-longcat/LongCat-Flash-Chat-FP8`

Flags: `--tool-call-parser longcat`
### GLM-4.5 Models (`glm45`)

Supported models:

- `zai-org/GLM-4.5`
- `zai-org/GLM-4.5-Air`
- `zai-org/GLM-4.6`

Flags: `--tool-call-parser glm45`
### GLM-4.7 Models (`glm47`)

Supported models:

- `zai-org/GLM-4.7`
- `zai-org/GLM-4.7-Flash`

Flags: `--tool-call-parser glm47`
### FunctionGemma Models (`functiongemma`)

Google's FunctionGemma is a lightweight (270M parameter) model specifically designed for function calling. It is built on Gemma 3 and optimized for edge deployment on devices like laptops and phones.

Supported models:

- `google/functiongemma-270m-it`

FunctionGemma uses a unique output format with `<start_function_call>` and `<end_function_call>` tags:

```text
<start_function_call>call:get_weather{location:<escape>London<escape>}<end_function_call>
```

Flags: `--tool-call-parser functiongemma --chat-template examples/tool_chat_template_functiongemma.jinja`

!!! note
    FunctionGemma is intended to be fine-tuned for your specific function-calling task. The base model provides general function-calling capabilities, but best results are achieved with task-specific fine-tuning. See Google's FunctionGemma documentation for fine-tuning guides.
### Qwen3-Coder Models (`qwen3_xml`)

Supported models:

- `Qwen/Qwen3-Coder-480B-A35B-Instruct`
- `Qwen/Qwen3-Coder-30B-A3B-Instruct`

Flags: `--tool-call-parser qwen3_xml`
### Olmo 3 Models (`olmo3`)

Olmo 3 models output tool calls in a format very similar to the one expected by the `pythonic` parser (see below), with a few differences. Each tool call is a pythonic string, but parallel tool calls are newline-delimited, and the calls are wrapped in XML tags as `<function_calls>...</function_calls>`. In addition, the parser accepts the JSON boolean and null literals (`true`, `false`, and `null`) alongside the pythonic ones (`True`, `False`, and `None`), as the example below illustrates.
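For illustration, parallel calls in this format might look like the following hypothetical model output (the function names and arguments are placeholders, not part of any real model's toolset):

```text
<function_calls>
get_weather(city='San Francisco', metric='celsius')
get_weather(city='Seattle', metric='celsius', verbose=true)
</function_calls>
```

Note the JSON-style `true` literal in the second call, which this parser accepts alongside the pythonic `True`.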
Supported models:

- `allenai/Olmo-3-7B-Instruct`
- `allenai/Olmo-3-32B-Think`

Flags: `--tool-call-parser olmo3`
### GigaChat3 Models (`gigachat3`)

Use the chat template from the Hugging Face model files.

Supported models:

- `ai-sage/GigaChat3-702B-A36B-preview`
- `ai-sage/GigaChat3-702B-A36B-preview-bf16`
- `ai-sage/GigaChat3-10B-A1.8B`
- `ai-sage/GigaChat3-10B-A1.8B-bf16`

Flags: `--tool-call-parser gigachat3`
### Models with Pythonic Tool Calls (`pythonic`)

A growing number of models output a Python list to represent tool calls instead of using JSON. This has the advantage of inherently supporting parallel tool calls and removing ambiguity around the JSON schema required for tool calls. The `pythonic` tool parser can support such models.

As a concrete example, these models may look up the weather in San Francisco and Seattle by generating:

```python
[get_weather(city='San Francisco', metric='celsius'), get_weather(city='Seattle', metric='celsius')]
```
Limitations:

- The model must not generate both text and tool calls in the same generation. This may not be hard to change for a specific model, but the community currently lacks consensus on which tokens to emit when starting and ending tool calls. (In particular, the Llama 3.2 models emit no such tokens.)
- Llama's smaller models struggle to use tools effectively.
Example supported models:

- `meta-llama/Llama-3.2-1B-Instruct` ⚠️ (use with `examples/tool_chat_template_llama3.2_pythonic.jinja`)
- `meta-llama/Llama-3.2-3B-Instruct` ⚠️ (use with `examples/tool_chat_template_llama3.2_pythonic.jinja`)
- `Team-ACE/ToolACE-8B` (use with `examples/tool_chat_template_toolace.jinja`)
- `fixie-ai/ultravox-v0_4-ToolACE-8B` (use with `examples/tool_chat_template_toolace.jinja`)
- `meta-llama/Llama-4-Scout-17B-16E-Instruct` ⚠️ (use with `examples/tool_chat_template_llama4_pythonic.jinja`)
- `meta-llama/Llama-4-Maverick-17B-128E-Instruct` ⚠️ (use with `examples/tool_chat_template_llama4_pythonic.jinja`)

Flags: `--tool-call-parser pythonic --chat-template {see_above}`
!!! warning
    Llama's smaller models frequently fail to emit tool calls in the correct format. Results may vary depending on the model.
## How to Write a Tool Parser Plugin

A tool parser plugin is a Python file containing one or more `ToolParser` implementations. You can write a `ToolParser` similar to the `Hermes2ProToolParser` in vllm/tool_parsers/hermes_tool_parser.py.

Here is a summary of a plugin file:
??? code

    ```python
    # import the required packages

    # define a tool parser and register it to vllm
    # the name list in register_module can be used
    # in --tool-call-parser. you can define as many
    # tool parsers as you want here.

    class ExampleToolParser(ToolParser):
        def __init__(self, tokenizer: TokenizerLike):
            super().__init__(tokenizer)

        # adjust request. e.g.: set skip special tokens
        # to False for tool call output.
        def adjust_request(
            self, request: ChatCompletionRequest | ResponsesRequest
        ) -> ChatCompletionRequest | ResponsesRequest:
            return request

        # implement the tool call parse for stream call
        def extract_tool_calls_streaming(
            self,
            previous_text: str,
            current_text: str,
            delta_text: str,
            previous_token_ids: Sequence[int],
            current_token_ids: Sequence[int],
            delta_token_ids: Sequence[int],
            request: ChatCompletionRequest,
        ) -> DeltaMessage | None:
            return delta

        # implement the tool parse for non-stream call
        def extract_tool_calls(
            self,
            model_output: str,
            request: ChatCompletionRequest,
        ) -> ExtractedToolCallInformation:
            return ExtractedToolCallInformation(
                tools_called=False,
                tool_calls=[],
                content=text,
            )

    # register the tool parser to ToolParserManager
    ToolParserManager.register_lazy_module(
        name="example",
        module_path="vllm.tool_parsers.example",
        class_name="ExampleToolParser",
    )
    ```
Then you can use this plugin in the command line like this:

```bash
--enable-auto-tool-choice \
--tool-parser-plugin <absolute path of the plugin file> \
--tool-call-parser example \
--chat-template <your chat template>
```