
MCP ToolSpec

llama-index-integrations/tools/llama-index-tools-mcp/examples/mcp.ipynb


This tool connects to MCP servers and allows an agent to call the tools they provide.

This idea was adapted from Integrate MCP tools into LlamaIndex.

To run this example, you need to edit the .env file and set the correct values for the IPinfo API token and an OpenAI API key.
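For reference, the .env file holds the two credentials mentioned above. The variable names below are an assumption (check .env.example in the repo for the actual names); the values are placeholders:

```shell
# Hypothetical .env contents -- variable names assumed, see .env.example
IPINFO_API_TOKEN=your-ipinfo-token
OPENAI_API_KEY=your-openai-api-key
```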

bash
# get the code
git clone https://github.com/run-llama/llama_index
cd llama_index/llama-index-integrations/tools/llama-index-tools-mcp/examples

# install dependencies
pip install ipinfo

cp .env.example .env
# NOTE: edit the .env file to have the correct values!

# run the server
python mcp_server.py --server_type=sse

In this example, we will build a toy agent that can query the user's IP info.

It's built using the AgentWorkflow class from LlamaIndex. If that's new to you, you can read more about it!

python
from llama_index.llms.openai import OpenAI
import dotenv

dotenv.load_dotenv()

llm = OpenAI(model="gpt-4o")
python
from llama_index.tools.mcp import McpToolSpec
from llama_index.core.agent.workflow import FunctionAgent, ToolCallResult, ToolCall
from llama_index.core.workflow import Context

SYSTEM_PROMPT = """\
You are an AI assistant.

Before you help a user, you need to fetch the ip info first, to help you follow the laws of the country.
"""


async def get_agent(tools: McpToolSpec):
    tools = await tools.to_tool_list_async()
    agent = FunctionAgent(
        name="Agent",
        description="An agent that can fetch the ip info of the user.",
        tools=tools,
        llm=llm,
        system_prompt=SYSTEM_PROMPT,
    )
    return agent


async def handle_user_message(
    message_content: str,
    agent: FunctionAgent,
    agent_context: Context,
    verbose: bool = False,
):
    handler = agent.run(message_content, ctx=agent_context)
    async for event in handler.stream_events():
        if verbose and isinstance(event, ToolCall):
            print(f"Calling tool {event.tool_name} with kwargs {event.tool_kwargs}")
        elif verbose and isinstance(event, ToolCallResult):
            print(f"Tool {event.tool_name} returned {event.tool_output}")

    response = await handler
    return str(response)
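The streaming loop above just filters the handler's event stream by type. Here is a self-contained sketch of that pattern, using toy stand-ins (ToyToolCall, ToyToolCallResult) rather than LlamaIndex's real event classes:

```python
# Self-contained sketch of the "stream events, dispatch by type" pattern
# used in handle_user_message above. ToyToolCall / ToyToolCallResult are
# stand-ins for LlamaIndex's ToolCall / ToolCallResult event classes.
import asyncio
from dataclasses import dataclass, field


@dataclass
class ToyToolCall:
    tool_name: str
    tool_kwargs: dict = field(default_factory=dict)


@dataclass
class ToyToolCallResult:
    tool_name: str
    tool_output: str


async def stream_events():
    # A real handler yields a heterogeneous stream of events as the agent runs.
    yield ToyToolCall(tool_name="fetch_ipinfo")
    yield ToyToolCallResult(tool_name="fetch_ipinfo", tool_output="{...}")


async def main() -> list[str]:
    log = []
    async for event in stream_events():
        # Dispatch on the event type, exactly as the verbose loop above does.
        if isinstance(event, ToyToolCall):
            log.append(f"Calling tool {event.tool_name} with kwargs {event.tool_kwargs}")
        elif isinstance(event, ToyToolCallResult):
            log.append(f"Tool {event.tool_name} returned {event.tool_output}")
    return log


print(asyncio.run(main()))
```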
python
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

# We assume an MCP server is running at 127.0.0.1:8000; point the client at your own MCP server if it runs elsewhere.
mcp_client = BasicMCPClient("http://127.0.0.1:8000/sse")
mcp_tool = McpToolSpec(client=mcp_client)

# get the agent
agent = await get_agent(mcp_tool)

# create the agent context
agent_context = Context(agent)
python
# Run the agent!
while True:
    user_input = input("Enter your message: ")
    if user_input == "exit":
        break
    print("User: ", user_input)
    response = await handle_user_message(user_input, agent, agent_context, verbose=True)
    print("Agent: ", response)

Here, we can see the agent calling the fetch_ipinfo tool to get the IP info! The tool itself runs remotely on the MCP server.

The McpToolSpec connects to the MCP server and creates a FunctionTool for each tool registered on it.

python
tools = await mcp_tool.to_tool_list_async()
for tool in tools:
    print(tool.metadata.name, tool.metadata.description)

You can also limit which tools McpToolSpec creates by passing a list of tool names (allowed_tools) to the McpToolSpec constructor.

python
# Filtering on a name that no registered tool matches yields an empty list
mcp_tool = McpToolSpec(client=mcp_client, allowed_tools=["some fake tool"])
tools = await mcp_tool.to_tool_list_async()
for tool in tools:
    print(tool.metadata.name, tool.metadata.description)

Alternatively, you can use the get_tools_from_mcp_url or aget_tools_from_mcp_url function directly to get a list of FunctionTools from an MCP server.

python
from llama_index.tools.mcp import (
    get_tools_from_mcp_url,
    aget_tools_from_mcp_url,
)

# async variant (use get_tools_from_mcp_url for the synchronous version)
tools = await aget_tools_from_mcp_url("http://127.0.0.1:8000/sse")

By default, this will use our BasicMCPClient, which will run a command or connect to the URL and return the tools.

You can also pass in a custom ClientSession to use a different client.

You can also specify a list of allowed tools to filter the tools that are returned.

python
from llama_index.tools.mcp import BasicMCPClient

client = BasicMCPClient("http://127.0.0.1:8000/sse")

tools = await aget_tools_from_mcp_url(
    "http://127.0.0.1:8000/sse",
    client=client,
    allowed_tools=["fetch_ipinfo"],
)

Then you can create the agent directly from the obtained list of FunctionTools.

python
agent = FunctionAgent(
    name="Agent",
    description="An agent that can fetch the ip info of the user.",
    tools=tools,
    llm=llm,
    system_prompt=SYSTEM_PROMPT,
)
python
# Create a fresh context for the new agent, then run it!
agent_context = Context(agent)

while True:
    user_input = input("Enter your message: ")
    if user_input == "exit":
        break
    print("User: ", user_input)
    response = await handle_user_message(user_input, agent, agent_context, verbose=True)
    print("Agent: ", response)