# Tool Call Engine
Tarko's Tool Call Engine determines how the Agent processes and executes tool calls. Different engines provide compatibility with various LLM providers and use cases.
The Tool Call Engine handles how tool definitions are presented to the model and how tool calls are parsed out of its responses.

Tarko provides three engine types (`ToolCallEngineType` in the source):
## Native Engine (`native`)

**Best for:** models with native function calling support (GPT-4, Claude 3.5, etc.)
```typescript
import { Agent } from '@tarko/agent';

const agent = new Agent({
  toolCallEngine: 'native',
  model: {
    provider: 'openai',
    id: 'gpt-4o',
    apiKey: process.env.OPENAI_API_KEY,
  },
  tools: [weatherTool],
});
```
**How it works:** tool schemas are passed through the provider's function-calling API, and the model returns structured tool calls rather than free-form text.
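Conceptually, the native engine maps each tool definition onto the provider's function-calling payload. The `ToolDef` shape and `toNativePayload` helper below are an illustrative sketch, not Tarko's actual internals:

```typescript
// Sketch: a tool definition becomes an entry in the provider's `tools`
// array (shown here in the OpenAI-style function-calling shape).
interface ToolDef {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema for the arguments
}

function toNativePayload(tool: ToolDef) {
  return {
    type: 'function' as const,
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.parameters,
    },
  };
}

const weatherTool: ToolDef = {
  name: 'get_weather',
  description: 'Get the current weather for a location',
  parameters: {
    type: 'object',
    properties: { location: { type: 'string' } },
    required: ['location'],
  },
};

const payload = toNativePayload(weatherTool);
console.log(payload.function.name); // "get_weather"
```

Because the provider enforces the schema, the model's tool calls arrive already structured and need no text parsing.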
## Prompt Engineering Engine (`prompt_engineering`)

**Best for:** models without native function calling, or custom parsing needs
```typescript
const agent = new Agent({
  toolCallEngine: 'prompt_engineering',
  model: {
    provider: 'volcengine',
    id: 'doubao-seed-1-6-vision-250815',
    apiKey: process.env.ARK_API_KEY,
  },
  tools: [weatherTool],
});
```
**How it works:** tool descriptions are injected into the system prompt, and the engine parses tool calls out of the model's plain-text response.
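A minimal sketch of the parsing side, assuming a delimiter-based format (`<tool_call>…</tool_call>` is an illustrative convention here, not necessarily Tarko's actual wire format):

```typescript
interface ParsedToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

// Scan the model's text reply for a JSON tool call wrapped in a
// known delimiter, and parse it if present.
function parseToolCall(text: string): ParsedToolCall | null {
  const match = text.match(/<tool_call>([\s\S]*?)<\/tool_call>/);
  if (!match) return null;
  try {
    return JSON.parse(match[1]) as ParsedToolCall;
  } catch {
    return null; // malformed JSON is this engine's main failure mode
  }
}

const reply =
  'Let me check.\n<tool_call>{"name":"get_weather","arguments":{"location":"Tokyo"}}</tool_call>';
const call = parseToolCall(reply);
console.log(call?.name); // "get_weather"
```

The reliance on the model emitting well-formed JSON inside the delimiter is why this engine scores lower on reliability than the other two.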
## Structured Outputs Engine (`structured_outputs`)

**Best for:** models that support structured (schema-constrained) output but not function calling
```typescript
const agent = new Agent({
  toolCallEngine: 'structured_outputs',
  model: {
    provider: 'anthropic',
    id: 'claude-3-5-sonnet-20241022',
    apiKey: process.env.ANTHROPIC_API_KEY,
  },
  tools: [weatherTool],
});
```
**How it works:** the engine constrains the model's output to a JSON schema, so tool calls come back as well-formed JSON that can be parsed directly.
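As a sketch, assuming a simple response envelope (the `responseSchema` and `StructuredReply` shapes below are illustrative, not Tarko's actual schema):

```typescript
// Illustrative schema the model's whole reply is constrained to:
// either plain content, a tool call, or both.
const responseSchema = {
  type: 'object',
  properties: {
    content: { type: 'string' },
    toolCall: {
      type: 'object',
      properties: {
        name: { type: 'string' },
        arguments: { type: 'object' },
      },
      required: ['name', 'arguments'],
    },
  },
};

interface StructuredReply {
  content?: string;
  toolCall?: { name: string; arguments: Record<string, unknown> };
}

// Because the output is schema-constrained, parsing is a plain JSON.parse
// with no delimiter scanning.
function parseStructured(raw: string): StructuredReply {
  return JSON.parse(raw) as StructuredReply;
}

console.log(Object.keys(responseSchema.properties)); // ["content","toolCall"]
const reply = parseStructured(
  '{"toolCall":{"name":"get_weather","arguments":{"location":"Tokyo"}}}'
);
console.log(reply.toolCall?.name); // "get_weather"
```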
## Automatic Engine Selection

Tarko can automatically select the best engine for your model:
```typescript
// Tarko will choose the optimal engine based on the model provider
const agent = new Agent({
  // toolCallEngine not specified - auto-selected
  model: {
    provider: 'openai',
    id: 'gpt-4o',
    apiKey: process.env.OPENAI_API_KEY,
  },
  tools: [weatherTool],
});
```
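A hypothetical sketch of provider-based selection (the provider-to-engine mapping below is illustrative; Tarko's real selection logic lives in the agent internals):

```typescript
type ToolCallEngineType = 'native' | 'prompt_engineering' | 'structured_outputs';

// Assumed capability table: providers known to support native
// function calling get the native engine.
const NATIVE_PROVIDERS = new Set(['openai', 'anthropic', 'azure-openai']);

function selectEngine(provider: string): ToolCallEngineType {
  if (NATIVE_PROVIDERS.has(provider)) return 'native';
  return 'prompt_engineering'; // universal fallback
}

console.log(selectEngine('openai'));     // "native"
console.log(selectEngine('volcengine')); // "prompt_engineering"
```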
## Explicit Selection

Choose an engine explicitly based on your needs:
```typescript
// Force prompt engineering for custom control
const agent = new Agent({
  toolCallEngine: 'prompt_engineering',
  model: {
    provider: 'openai', // Even for OpenAI, use prompt engineering
    id: 'gpt-4o',
    apiKey: process.env.OPENAI_API_KEY,
  },
  tools: [weatherTool],
});
```
## Engine Comparison

| Engine | Reliability | Performance | Compatibility | Use Case |
|---|---|---|---|---|
| `native` | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | Production with supported models |
| `structured_outputs` | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Models with schema support |
| `prompt_engineering` | ⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Universal compatibility |
## Examples

From `multimodal/tarko/agent/examples/tool-calls/basic.ts`:
```typescript
import { Agent, Tool, z, LogLevel } from '@tarko/agent';

const agent = new Agent({
  model: {
    provider: 'volcengine',
    id: 'doubao-seed-1-6-vision-250815',
    apiKey: process.env.ARK_API_KEY,
  },
  tools: [locationTool, weatherTool],
  logLevel: LogLevel.DEBUG,
  // toolCallEngine will be auto-selected based on model capabilities
});
```
From `multimodal/tarko/agent/examples/streaming/tool-calls.ts`:
```typescript
const agent = new Agent({
  model: {
    provider: 'volcengine',
    id: 'doubao-seed-1-6-vision-250815',
    apiKey: process.env.ARK_API_KEY,
  },
  tools: [locationTool, weatherTool],
  toolCallEngine: 'native',
  enableStreamingToolCallEvents: true,
});
```
## Debugging Tool Calls

Enable debug logging to inspect how tool calls are parsed:

```typescript
import { LogLevel } from '@tarko/agent';

const agent = new Agent({
  toolCallEngine: 'prompt_engineering',
  logLevel: LogLevel.DEBUG, // See detailed tool call parsing
  tools: [weatherTool],
});
```
Stream the run to observe tool call and result events:

```typescript
const response = await agent.run({
  input: "What's the weather?",
  stream: true,
});

for await (const event of response) {
  if (event.type === 'tool_call') {
    console.log('Tool called:', event.toolCall.function.name);
  }
  if (event.type === 'tool_result') {
    console.log('Tool result:', event.result);
  }
}
```
## Troubleshooting

- **Tool calls not being detected:** try `prompt_engineering` for broader compatibility
- **Parsing errors with prompt engineering:** try `structured_outputs` if the model supports schemas
- **Performance issues:** `native` is fastest for supported models; `prompt_engineering` adds parsing overhead

## Engine Selection Flowchart

```text
Does your model support native function calling?
├─ Yes → Use 'native' (recommended)
└─ No
   ├─ Does it support structured outputs?
   │  ├─ Yes → Use 'structured_outputs'
   │  └─ No → Use 'prompt_engineering'
   └─ Need custom parsing logic?
      └─ Consider implementing a custom engine
```
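The flowchart above can be expressed as a small helper (the capability flags are assumed inputs for illustration, not part of Tarko's API):

```typescript
interface ModelCapabilities {
  nativeFunctionCalling: boolean;
  structuredOutputs: boolean;
}

// Walk the decision tree: native first, then structured outputs,
// then the universal prompt-engineering fallback.
function chooseEngine(caps: ModelCapabilities): string {
  if (caps.nativeFunctionCalling) return 'native';
  if (caps.structuredOutputs) return 'structured_outputs';
  return 'prompt_engineering';
}

console.log(chooseEngine({ nativeFunctionCalling: true, structuredOutputs: true }));   // "native"
console.log(chooseEngine({ nativeFunctionCalling: false, structuredOutputs: false })); // "prompt_engineering"
```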