# Tools
{: .no_toc }

{{ page.description }}
{: .fs-6 .fw-300 }

## Table of contents
{: .no_toc .text-delta }

1. TOC
{:toc}
## What Are Tools?

Tools bridge the gap between the AI model's conversational abilities and the real world. You define them as subclasses of `RubyLLM::Tool` and attach them to a `RubyLLM::Chat`; the model can then delegate tasks it cannot perform itself to your application code.
Common use cases include fetching real-time data, searching databases, performing calculations, and calling external APIs.
## Creating a Tool

Define a tool by creating a class that inherits from `RubyLLM::Tool`.
```ruby
class Weather < RubyLLM::Tool
  description "Gets current weather for a location"

  params do # The params DSL is available in v1.9+; older versions should use the param helper instead
    string :latitude, description: "Latitude (e.g., 52.5200)"
    string :longitude, description: "Longitude (e.g., 13.4050)"
  end

  def execute(latitude:, longitude:)
    url = "https://api.open-meteo.com/v1/forecast?latitude=#{latitude}&longitude=#{longitude}&current=temperature_2m,wind_speed_10m"

    response = Faraday.get(url)
    JSON.parse(response.body)
  rescue => e
    { error: e.message }
  end
end
```
### Key Components

- **`description`**: A class method defining what the tool does. Crucial for the AI model to understand its purpose. Keep it clear and concise.
- **`params`** (v1.9+): The DSL for describing your input schema. Declare nested objects, arrays, enums, and optional fields in one place. If you only need flat keyword arguments, the older `param` helper (v1.0+) remains available; see "Using the `param` Helper for Simple Tools" below.
- **`execute`**: The instance method containing your Ruby code. It receives the keyword arguments defined by your schema and returns the payload the model will see (typically a String, Hash, or `RubyLLM::Content`).

The tool's class name is automatically converted to a snake_case name used in the API call (e.g., `WeatherLookup` becomes `weather_lookup`). This is the name the LLM uses to call the tool. You can override it by defining a `name` method in your tool class:

```ruby
class WeatherLookup < RubyLLM::Tool
  def name
    "Weather"
  end
end
```
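The conversion is the usual CamelCase-to-snake_case transformation. As a rough illustration (this is a stand-alone sketch, not RubyLLM's internal code), it behaves like:

```ruby
# Illustrative CamelCase -> snake_case conversion, similar in spirit to
# what happens to tool class names. Not RubyLLM's actual implementation.
def tool_name_for(class_name)
  class_name
    .gsub("::", "_")                          # namespaced classes flatten
    .gsub(/([A-Z]+)([A-Z][a-z])/, '\1_\2')    # split acronym boundaries
    .gsub(/([a-z\d])([A-Z])/, '\1_\2')        # split word boundaries
    .downcase
end

puts tool_name_for("WeatherLookup") # => "weather_lookup"
```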
{: .note }
If a model attempts to call a tool that doesn't exist (sometimes called "tool hallucination"), RubyLLM handles this gracefully by:
- Returning an error message to the model indicating which tool it tried to call
- Listing the actually available tools
- Allowing the conversation to continue so the model can correct itself
This prevents crashes and gives the model a chance to use the correct tool or respond appropriately.
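For illustration, the recovery message works along these lines (a hypothetical sketch; the exact wording RubyLLM uses may differ):

```ruby
# Hypothetical sketch of the error payload returned to the model when it
# hallucinates a tool name. Names and wording are illustrative only.
def unknown_tool_reply(requested, available)
  "Tool '#{requested}' does not exist. Available tools: #{available.join(', ')}."
end

puts unknown_tool_reply("weather_report", %w[weather_lookup calculator])
```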
## Defining Parameters

RubyLLM ships with two complementary approaches:

- the `params` DSL for expressive, structured inputs (v1.9+)
- the `param` helper for quick, flat argument lists (v1.0+)

Start with the DSL whenever you need anything beyond a handful of simple strings; it keeps complex schemas maintainable and identical across every provider.
### The `params` DSL
{: .d-inline-block }

v1.9.0+
{: .label .label-green }
When you need nested objects, arrays, enums, or union types, the `params do ... end` DSL produces the JSON Schema that function-calling models expect while staying Ruby-flavoured.
```ruby
class Scheduler < RubyLLM::Tool
  description "Books a meeting"

  params do
    object :window, description: "Time window to reserve" do
      string :start, description: "ISO8601 start time"
      string :finish, description: "ISO8601 end time"
    end

    array :participants, of: :string, description: "Email addresses to invite"

    any_of :format, description: "Optional meeting format" do
      string enum: %w[virtual in_person]
      null
    end
  end

  def execute(window:, participants:, format: nil)
    # ...
  end
end
```
RubyLLM bundles the DSL through `ruby_llm-schema`, so every project has the same schema builders out of the box.
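For reference, the Scheduler example above describes roughly this JSON Schema (a hand-written approximation, not output captured from RubyLLM):

```ruby
require "json"

# Approximate JSON Schema for the Scheduler tool's params block.
# Hand-written for illustration; the generated schema may differ in detail.
schema = {
  type: "object",
  properties: {
    window: {
      type: "object",
      description: "Time window to reserve",
      properties: {
        start:  { type: "string", description: "ISO8601 start time" },
        finish: { type: "string", description: "ISO8601 end time" }
      }
    },
    participants: {
      type: "array",
      description: "Email addresses to invite",
      items: { type: "string" }
    },
    format: {
      description: "Optional meeting format",
      anyOf: [
        { type: "string", enum: %w[virtual in_person] },
        { type: "null" }
      ]
    }
  }
}

puts JSON.pretty_generate(schema)
```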
### Using the `param` Helper for Simple Tools

If your tool just needs a few scalar arguments, stick with the `param` helper. RubyLLM translates these declarations into JSON Schema under the hood.
```ruby
class Distance < RubyLLM::Tool
  description "Calculates distance between two cities"

  param :origin, desc: "Origin city name"
  param :destination, desc: "Destination city name"
  param :units, type: :string, desc: "Unit system (metric or imperial)", required: false

  def execute(origin:, destination:, units: "metric")
    # ...
  end
end
```
### Supplying a Raw JSON Schema
{: .d-inline-block }

v1.9.0+
{: .label .label-green }
Prefer to own the JSON Schema yourself? Pass a schema hash (or a class/object responding to `#to_json_schema`) directly to `params`:
```ruby
class Lookup < RubyLLM::Tool
  description "Performs catalog lookups"

  params type: "object",
         properties: {
           sku: { type: "string", description: "Product SKU" },
           locale: { type: "string", description: "Country code", default: "US" }
         },
         required: %w[sku],
         additionalProperties: false,
         strict: true

  def execute(sku:, locale: "US")
    # ...
  end
end
```
RubyLLM normalizes symbol keys, deep duplicates the schema, and sends it to providers unchanged. This gives you full control when you need it.
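In practice, "normalizes symbol keys and deep duplicates the schema" behaves like this sketch (illustrative code, not RubyLLM's internals):

```ruby
# Illustrative deep-copy-with-string-keys normalization. Mutating the
# normalized copy leaves your original schema hash untouched.
def deep_stringify(value)
  case value
  when Hash  then value.each_with_object({}) { |(k, v), h| h[k.to_s] = deep_stringify(v) }
  when Array then value.map { |v| deep_stringify(v) }
  else value
  end
end

original = { type: "object", properties: { sku: { type: "string" } } }
normalized = deep_stringify(original)

normalized["properties"]["sku"]["type"] = "integer"
puts original[:properties][:sku][:type] # => "string" (original untouched)
```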
### Returning Attachments from Tools

Tools can return `RubyLLM::Content` objects with file attachments, allowing you to pass images, documents, or other files from your tools to the AI model:
```ruby
class AnalyzeTool < RubyLLM::Tool
  description "Analyzes data and returns results with visualizations"

  param :query, desc: "Analysis query"

  def execute(query:)
    # Generate analysis and create visualization
    chart_path = generate_chart(query)

    # Return Content with text and attachments
    RubyLLM::Content.new(
      "Analysis complete for: #{query}",
      [chart_path] # Attach the generated chart (array of paths/blobs)
    )
  end

  private

  def generate_chart(query)
    # Your chart generation logic
    "/tmp/chart_#{Time.now.to_i}.png"
  end
end
```
```ruby
chat = RubyLLM.chat.with_tool(AnalyzeTool)
response = chat.ask("Analyze sales trends for Q4")
# The AI receives both the text and the chart image
```
When a tool returns a `Content` object, the model receives both the text and the attached files as part of the tool result. This is particularly useful for tools that generate charts or reports, retrieve documents, or produce other files the model should inspect.
### Tools with Custom Initialization

Tools can have custom initialization:
```ruby
class DocumentSearch < RubyLLM::Tool
  description "Searches documents by relevance"

  param :query, desc: "The search query"
  param :limit, type: :integer, desc: "Maximum number of results", required: false

  def initialize(database)
    @database = database
  end

  def execute(query:, limit: 5)
    # Search in @database
    @database.search(query, limit: limit)
  end
end

# Initialize with dependencies
search_tool = DocumentSearch.new(MyDatabase)
chat.with_tool(search_tool)
```
## Using Tools in Chat

Attach tools to a `Chat` instance using `with_tool` or `with_tools`.
```ruby
# Create a chat instance
chat = RubyLLM.chat(model: '{{ site.models.openai_tools }}') # Use a model that supports tools

# Instantiate your tool if it requires arguments, otherwise use the class
weather_tool = Weather.new

# Add the tool(s) to the chat
chat.with_tool(weather_tool)
# Or add multiple: chat.with_tools(WeatherLookup, AnotherTool.new)

# Replace all tools with new ones
chat.with_tools(NewTool, AnotherTool, replace: true)

# Clear all tools
chat.with_tools(replace: true)

# Ask a question that should trigger the tool
response = chat.ask "What's the current weather like in Berlin? (Lat: 52.52, Long: 13.40)"
puts response.content
# => "Current weather at 52.52, 13.4: Temperature: 12.5°C, Wind Speed: 8.3 km/h, Conditions: Mainly clear, partly cloudy, and overcast."
```
### Tool Choice and Call Limits
{: .d-inline-block }

v1.13.0+
{: .label .label-green }
Control tool behavior with two options:

- `choice` controls which tools the model is allowed/required to use.
- `calls` controls how many tool calls can appear in one assistant response.

#### Tool Choice (`choice`)

Use `choice` to control whether the model can call tools and which one it can call.
```ruby
# Model decides if a tool is needed
chat.with_tools(Weather, Calculator, choice: :auto)

# Model must call a tool
chat.with_tools(Weather, Calculator, choice: :required)

# Disable tool calls
chat.with_tools(Weather, Calculator, choice: :none)

# Force one specific tool (symbol or class)
chat.with_tools(Weather, Calculator, choice: :weather)
chat.with_tools(Weather, Calculator, choice: Weather)
```
Valid values:

- `:auto`
- `:required`
- `:none`
- a specific tool, given as a Symbol or a tool class

{: .note }
With `:required` or a specific tool choice, `tool_choice` is automatically reset to `nil` after tool execution to prevent infinite loops.
#### Parallel Tool Calls (`calls`)

{: .note }
Providers usually call this *parallel tool calling*. We call it `calls` because "parallel" can be misleading: tools are not executed in parallel unless the tool executor itself is parallelized. `calls` describes the actual behavior directly: `:many` means multiple tool calls in one assistant response, and `:one` means one tool call in one assistant response.
Use `calls` to control how many tool calls the model may return in a single assistant response.
```ruby
# Allow multiple tool calls in one response
chat.with_tools(Weather, Calculator, calls: :many)

# Allow one tool call in one response
chat.with_tools(Weather, Calculator, calls: :one)
# equivalent:
chat.with_tools(Weather, Calculator, calls: 1)
```
Valid values:

- `:many`
- `:one` (or the equivalent `1`)

If `calls` is not provided, RubyLLM uses provider/model defaults, which are usually equivalent to `calls: :many`.
{: .note }
Tool choice and call-count controls are provider/model dependent.
RubyLLM will attempt to use tools with any model. If the model doesn't support function calling, the provider will return an appropriate error when you call ask.
## How Tool Execution Works

When you ask a question that the model determines requires a tool:

1. The model analyzes your question and decides the `WeatherLookup` tool is needed, extracting the latitude and longitude from your message.
2. Instead of answering directly, the model returns a tool call request containing the tool name (`weather_lookup`) and arguments (`{ latitude: 52.52, longitude: 13.40 }`).
3. RubyLLM receives the request, finds the registered `WeatherLookup` tool, and calls its `execute(latitude: 52.52, longitude: 13.40)` method.
4. Your `execute` method runs (calling the weather API) and returns a result string.
5. RubyLLM sends that result back to the model in a message with the `:tool` role.
6. The model reads the tool result.
7. The model generates a final text response incorporating the weather data.
8. `chat.ask` returns a `RubyLLM::Message` object containing the text generated in step 7.

This entire multi-step process happens behind the scenes within a single `chat.ask` call when a tool is invoked.
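The round trip described above can be sketched in plain Ruby with a stand-in model (every name here is illustrative; this is not RubyLLM's internal code):

```ruby
# Self-contained sketch of the ask -> tool call -> tool result -> answer
# loop. FakeModel stands in for the real provider API.
class FakeModel
  def call(messages)
    if messages.none? { |m| m[:role] == :tool }
      # First turn: request a tool call instead of answering
      { tool_call: { name: "weather_lookup",
                     arguments: { latitude: 52.52, longitude: 13.4 } } }
    else
      # Second turn: produce the final answer from the tool result
      { text: "It is 12.5°C in Berlin." }
    end
  end
end

TOOLS = {
  "weather_lookup" => ->(latitude:, longitude:) {
    "Temperature: 12.5°C at #{latitude}, #{longitude}"
  }
}.freeze

def ask(model, question)
  messages = [{ role: :user, content: question }]
  loop do
    reply = model.call(messages)
    return reply[:text] if reply[:text] # final answer: done

    call = reply[:tool_call]
    result = TOOLS.fetch(call[:name]).call(**call[:arguments])
    messages << { role: :tool, content: result } # send result back to model
  end
end

puts ask(FakeModel.new, "Weather in Berlin?")
```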
## Monitoring Tool Calls

You can monitor tool execution using event callbacks to track when tools are called and what they return:
```ruby
chat = RubyLLM.chat(model: '{{ site.models.openai_tools }}')
              .with_tool(Weather)
              .on_tool_call do |tool_call|
                # Called when the AI decides to use a tool
                puts "Calling tool: #{tool_call.name}"
                puts "Arguments: #{tool_call.arguments}"
              end
              .on_tool_result do |result|
                # Called after the tool returns its result
                puts "Tool returned: #{result}"
              end

response = chat.ask "What's the weather in Paris?"
# Output:
# Calling tool: weather
# Arguments: {"latitude": "48.8566", "longitude": "2.3522"}
# Tool returned: {"temperature": 15, "conditions": "Partly cloudy"}
```
These callbacks are useful for logging, debugging, showing progress in your UI, and collecting usage metrics.
### Limiting Tool Calls

To prevent excessive API usage or infinite loops, you can use callbacks to limit tool calls:
```ruby
# Limit total tool calls per conversation
call_count = 0
max_calls = 10

chat = RubyLLM.chat(model: '{{ site.models.openai_tools }}')
              .with_tool(Weather)
              .on_tool_call do |tool_call|
                call_count += 1
                if call_count > max_calls
                  raise "Tool call limit exceeded (#{max_calls} calls)"
                end
              end

# The conversation will stop if it tries to use tools more than 10 times
chat.ask("Check weather for every major city...")
```
{: .warning }
Raising an exception in `on_tool_call` breaks the conversation flow: the LLM expects a tool response after requesting a tool call, so this can leave the chat in an inconsistent state. Consider using better models or clearer tool descriptions to prevent loops instead of hard limits.
## Provider-Specific Tool Parameters
{: .d-inline-block }

v1.9.0+
{: .label .label-green }
Some providers accept additional metadata alongside the JSON Schema (for example, Anthropic's `cache_control` hints). Use `with_params` to declare these once on the tool class, and RubyLLM will merge them into the payload when the provider supports the keys.
```ruby
class TodoTool < RubyLLM::Tool
  description "Adds a task to the shared TODO list"

  params do
    string :title, description: "Human-friendly task description"
  end

  with_params cache_control: { type: "ephemeral" }

  def execute(title:)
    Todo.create!(title:)
    "Added “#{title}” to the list."
  end
end
```
Provider metadata is passed through verbatim. Set `RUBYLLM_DEBUG=true` if you want to inspect the final payload while experimenting.
## Halting Continuation with `halt`

After a tool executes, the LLM normally continues the conversation to explain what happened. In rare cases, you might want to skip this and return the tool result directly.

The `halt` helper stops the LLM from continuing after your tool:
```ruby
class SaveFileTool < RubyLLM::Tool
  description "Save content to a file"

  param :path, desc: "File path"
  param :content, desc: "File content"

  def execute(path:, content:)
    File.write(path, content)
    halt "Saved to #{path}" # Returns this directly, no LLM commentary
  end
end

# Without halt: LLM adds "I've successfully saved the file to config.yml..."
# With halt: Just returns "Saved to config.yml"
```
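Conceptually, `halt` wraps the result in a sentinel that short-circuits the continuation step. A simplified stand-alone sketch (not RubyLLM's implementation; names are illustrative):

```ruby
# Sentinel-style sketch of halt: a Halt result skips the follow-up LLM turn,
# while a plain result is handed back to the model for commentary.
Halt = Struct.new(:content)

def handle_tool_result(result)
  return result.content if result.is_a?(Halt) # skip the LLM follow-up turn
  "LLM summary of: #{result}"                  # normally the model continues
end

puts handle_tool_result(Halt.new("Saved to config.yml"))
puts handle_tool_result("Saved to config.yml")
```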
{: .warning }
The LLM's continuation is usually helpful: it provides context and natural language formatting. Only use `halt` when you specifically need to bypass this behavior.
### Streaming Sub-Agents with `halt`

A tool can delegate to another chat (a sub-agent), stream its output, and use `halt` to return the sub-agent's answer directly:

```ruby
class DelegateTool < RubyLLM::Tool
  description "Delegate to expert"

  param :query, desc: "The query"

  def execute(query:)
    response = RubyLLM.chat
                      .with_instructions("You are an expert...")
                      .ask(query) { |chunk| print chunk } # Stream to user

    halt response.content # Skip router's commentary
  end
end
```
{: .note }
Sub-agents work perfectly without `halt`! You can create sub-agents and stream their responses without using `halt`; the router will simply summarize what the sub-agent said, which is often helpful. Use `halt` only when you specifically want to skip the router's summary.
## MCP Support

For MCP server integration, check out the community-maintained `ruby_llm-mcp` gem.
## Debugging Tool Calls

Set the `RUBYLLM_DEBUG` environment variable to see detailed logging, including tool calls and results.

```bash
export RUBYLLM_DEBUG=true
# Run your script
```
You'll see log lines similar to:

```
D, [timestamp] -- RubyLLM: Tool weather_lookup called with: {:latitude=>52.52, :longitude=>13.4}
D, [timestamp] -- RubyLLM: Tool weather_lookup returned: "Current weather at 52.52, 13.4: Temperature: 12.5°C, Wind Speed: 8.3 km/h, Conditions: Mainly clear, partly cloudy, and overcast."
```
See the [Error Handling Guide]({% link _advanced/error-handling.md %}#debugging) for more on debugging.
## Error Handling in Tools

Tools should handle errors based on whether they're recoverable. For recoverable problems (invalid input, a temporarily unavailable service), return a hash like `{ error: "description" }` so the model can see what went wrong and adjust; let truly unexpected errors raise.

```ruby
def execute(location:)
  return { error: "Location too short" } if location.length < 3

  # Fetch weather data...
rescue Faraday::ConnectionFailed
  { error: "Weather service unavailable" }
end
```
See the [Error Handling Guide]({% link _advanced/error-handling.md %}#handling-errors-within-tools) for more discussion.
## Security

{: .warning }
Treat any arguments passed to your `execute` method as potentially untrusted user input, as the AI model generates them based on the conversation.

- Never pass raw arguments from the AI to `eval`, `system`, `send`, or direct SQL interpolation.
- Follow the principle of least privilege: ensure `execute` only has access to the resources it absolutely needs.
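For example, an allow-list keeps model-generated arguments away from sensitive sinks. A minimal sketch with hypothetical names:

```ruby
# Allow-list validation for a model-supplied argument. Only known-safe
# values pass through; anything else raises before reaching a query.
ALLOWED_TABLES = %w[products orders].freeze

def safe_table_name(name)
  unless ALLOWED_TABLES.include?(name)
    raise ArgumentError, "table not allowed: #{name.inspect}"
  end
  name
end

puts safe_table_name("products") # => "products"
```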