
Building a Metaphor Data Agent

llama-index-integrations/tools/llama-index-tools-metaphor/examples/metaphor.ipynb


This tutorial walks through using the tools provided by the Metaphor API to let LLMs easily search the Internet and retrieve cleaned-up HTML content.

To get started, you will need an OpenAI API key and a Metaphor API key.

We will import the relevant agents and tools and pass them our keys here:

```python
# Set up OpenAI
import os

os.environ["OPENAI_API_KEY"] = "sk-your-key"

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI

# Set up the Metaphor tool
from llama_index.tools.metaphor.base import MetaphorToolSpec

metaphor_tool = MetaphorToolSpec(
    api_key="your-key",
)

# Convert the tool spec into a list of agent tools and print their names
metaphor_tool_list = metaphor_tool.to_tool_list()
for tool in metaphor_tool_list:
    print(tool.metadata.name)
```

Testing the Metaphor tools

We've imported our agent, set up the API keys, and initialized our tool, printing the methods it has available. Let's test out the tool before setting up our agent.

All of the Metaphor search tools make use of the AutoPrompt option, where Metaphor passes the query through an LLM to refine and improve it.

```python
metaphor_tool.search("machine learning transformers", num_results=3)
```

```python
metaphor_tool.retrieve_documents(["iEYMai5rS9k0hN5_BH0VZg"])
```

```python
metaphor_tool.find_similar(
    "https://www.mihaileric.com/posts/transformers-attention-in-disguise/"
)
```

```python
metaphor_tool.search_and_retrieve_documents(
    "This is the best explanation for machine learning transformers:", num_results=1
)
```

We can see we have different tools to search for results, retrieve the results, find results similar to a web page, and finally a tool that combines search and document retrieval into a single call. We will test them out in an agent below.
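Later cells in this tutorial select tools from the list by position (e.g. `metaphor_tool_list[2]`), which depends on the order in which the spec registers its functions. A name-based lookup is less brittle. The sketch below uses stand-in `Tool`/`ToolMetadata` classes (not the real LlamaIndex types) and assumes the tool names include `search_and_retrieve_documents` and `current_date`:

```python
from dataclasses import dataclass

# Stand-ins for tool objects: real LlamaIndex tools expose a
# `metadata.name` attribute, mirrored here with tiny dataclasses.
@dataclass
class ToolMetadata:
    name: str

@dataclass
class Tool:
    metadata: ToolMetadata

tool_list = [
    Tool(ToolMetadata(n))
    for n in [
        "search",
        "retrieve_documents",
        "search_and_retrieve_documents",
        "find_similar",
        "current_date",
    ]
]

def get_tool(tools, name):
    """Select a tool by its metadata name instead of a positional index."""
    return next(t for t in tools if t.metadata.name == name)

print(get_tool(tool_list, "search_and_retrieve_documents").metadata.name)
```

The same pattern works on the real `metaphor_tool_list`, since each entry carries its name in `tool.metadata.name`.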

Using the Search and Retrieve documents tools in an Agent

We can create an agent with access to the above tools and start testing it out:

```python
from llama_index.core.workflow import Context

# Create an agent with the full, unwrapped Metaphor tool list
agent = FunctionAgent(
    tools=metaphor_tool_list,
    llm=OpenAI(model="gpt-4.1"),
)

# Context to store chat history
ctx = Context(agent)
```

```python
print(await agent.run("What are the best restaurants in Toronto?", ctx=ctx))
```

```python
print(await agent.run("tell me more about Osteria Giulia", ctx=ctx))
```

Avoiding Context Window Issues

The above example shows the core uses of the Metaphor tool. We can easily retrieve a clean list of links related to a query, and then we can fetch the content of an article as a cleaned-up HTML extract. Alternatively, the search_and_retrieve_documents tool directly returns the documents from our search result.

We can see that the content of the articles is somewhat long compared to current LLM context windows, and so to allow retrieval and summarization of many documents we will set up another tool from LlamaIndex that loads text into a vector store and queries it for retrieval. This is where the search_and_retrieve_documents tool becomes particularly useful. The agent can make a single query to retrieve a large number of documents using a very small number of tokens, and then make follow-up queries to retrieve specific information from those documents.
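To make the token savings concrete, here is a toy sketch of the load-then-read pattern. It is not the LlamaIndex implementation: the real LoadAndSearchToolSpec builds a vector index, while this hypothetical ToyDocumentStore just ranks word chunks by their overlap with the query.

```python
# Toy illustration of load-then-read: documents are ingested once into a
# store, then queried repeatedly with small requests. Chunks are scored by
# simple word overlap instead of embeddings.
class ToyDocumentStore:
    def __init__(self):
        self.chunks = []

    def load(self, documents, chunk_size=50):
        # Split each document into fixed-size word chunks and store them.
        for doc in documents:
            words = doc.split()
            for i in range(0, len(words), chunk_size):
                self.chunks.append(" ".join(words[i : i + chunk_size]))

    def read(self, query, top_k=1):
        # Rank stored chunks by how many query words they contain.
        q = set(query.lower().split())
        scored = sorted(
            self.chunks,
            key=lambda c: len(q & set(c.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

store = ToyDocumentStore()
store.load(
    [
        "A transformer is a deep learning architecture based on attention.",
        "Superconductors conduct electricity with zero resistance.",
    ]
)
print(store.read("what is a transformer")[0])
# → A transformer is a deep learning architecture based on attention.
```

Only the short, retrieved chunk goes back to the LLM, rather than the full document text, which is the same reason the wrapped tool below keeps token usage low.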

```python
from llama_index.core.tools.tool_spec.load_and_search.base import (
    LoadAndSearchToolSpec,
)

# The search_and_retrieve_documents tool is the third in the tool list, as seen above
wrapped_retrieve = LoadAndSearchToolSpec.from_defaults(
    metaphor_tool_list[2],
)
```

Our wrapped retrieval tool separates loading and reading into separate interfaces. We use load to load the documents into the vector store, and read to query the vector store. Let's try it out:

```python
wrapped_retrieve.load(
    "This is the best explanation for machine learning transformers:"
)
print(wrapped_retrieve.read("what is a transformer"))
print(wrapped_retrieve.read("who wrote the first paper on transformers"))
```

Creating the Agent

We are now ready to create an agent that can use Metaphor's services to their full potential. We will use our wrapped read and load tools, as well as the current_date utility, for the following agent and test it out below:

```python
# Pass only the wrapped tools and the current_date utility
agent = FunctionAgent(
    tools=[*wrapped_retrieve.to_tool_list(), metaphor_tool_list[4]],
    llm=OpenAI(model="gpt-4.1"),
)
```

```python
print(
    await agent.run(
        "Can you summarize everything published in the last month regarding news on"
        " superconductors"
    )
)
```

We asked the agent to retrieve documents related to superconductors from the last month. It used the current_date tool to determine today's date, applied Metaphor's publication-date filters when calling search, loaded the resulting documents with the wrapped load tool, and read them with the wrapped read tool.
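The date arithmetic the agent performs can be sketched as follows. `last_month_range` is a hypothetical helper, the 30-day window is only an approximation of "last month", and the assumption here is that Metaphor's published-date filters accept ISO-formatted date strings like these:

```python
from datetime import date, timedelta

def last_month_range(today):
    """Derive an approximate one-month publication window ending today,
    returned as ISO date strings suitable for date filters."""
    start = today - timedelta(days=30)  # rough stand-in for "one month"
    return start.isoformat(), today.isoformat()

start, end = last_month_range(date(2023, 8, 15))
print(start, end)
# → 2023-07-16 2023-08-15
```

In the agent loop, the `today` value would come from the current_date tool rather than being hard-coded.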

We can make another query to the vector store to read from it again, now that the articles are loaded: