The toolbox-langchain package provides a Python interface to the MCP Toolbox service, enabling you to load and invoke tools from your own applications.
Install the package:

```bash
pip install toolbox-langchain
```
Here's a minimal example to get you started using LangGraph:
```python
from toolbox_langchain import ToolboxClient
from langchain_google_vertexai import ChatVertexAI
from langgraph.prebuilt import create_react_agent

async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    tools = toolbox.load_toolset()

    model = ChatVertexAI(model="gemini-3-flash-preview")
    agent = create_react_agent(model, tools)

    prompt = "How's the weather today?"

    for s in agent.stream({"messages": [("user", prompt)]}, stream_mode="values"):
        message = s["messages"][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()
```
{{< notice tip >}} For a complete, end-to-end example including setting up the service and using an SDK, see the full tutorial: Toolbox Quickstart Tutorial {{< /notice >}}
Import and initialize the toolbox client.
```python
from toolbox_langchain import ToolboxClient

# Replace with your Toolbox service's URL
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    # Use client
    pass
```
The SDK supports multiple transport protocols for communicating with the Toolbox server. By default, the client uses the latest supported version of the Model Context Protocol (MCP).
You can explicitly select a protocol using the `protocol` option during client initialization. This is useful if you need to use the native Toolbox HTTP protocol or pin the client to a specific legacy version of MCP.
{{< notice note >}}
We currently support different versions of the MCP protocol.
{{< /notice >}}
| Constant | Description |
|---|---|
| `Protocol.MCP` | (Default) Alias for the default MCP version (currently `2025-06-18`). |
| `Protocol.MCP_v20251125` | MCP Protocol version 2025-11-25. |
| `Protocol.MCP_v20250618` | MCP Protocol version 2025-06-18. |
| `Protocol.MCP_v20250326` | MCP Protocol version 2025-03-26. |
| `Protocol.MCP_v20241105` | MCP Protocol version 2024-11-05. |
```python
from toolbox_langchain import ToolboxClient
from toolbox_core.protocol import Protocol

async with ToolboxClient("http://127.0.0.1:5000", protocol=Protocol.MCP) as toolbox:
    # Use client
    pass
```
If you want to pin the client to MCP version `2025-03-26`:
```python
from toolbox_langchain import ToolboxClient
from toolbox_core.protocol import Protocol

async with ToolboxClient("http://127.0.0.1:5000", protocol=Protocol.MCP_v20250326) as toolbox:
    # Use client
    pass
```
A toolset is a collection of related tools. You can load all tools from the default toolset, load a specific toolset, or load an individual tool:
```python
# Load all tools
tools = toolbox.load_toolset()

# Load a specific toolset
tools = toolbox.load_toolset("my-toolset")

# Load a single tool
tool = toolbox.load_tool("my-tool")
```
Loading individual tools gives you finer-grained control over which tools are available to your LLM agent.
LangChain's agents can dynamically choose and execute tools based on the user input. Include tools loaded from the Toolbox SDK in the agent's toolkit:
```python
from langchain_google_vertexai import ChatVertexAI

model = ChatVertexAI(model="gemini-3-flash-preview")

# Initialize agent with tools
agent = model.bind_tools(tools)

# Run the agent
result = agent.invoke("Do something with the tools")
```
Integrate the Toolbox SDK with LangGraph to use Toolbox service tools within a graph-based workflow. You can follow the official LangGraph guide with minimal changes.
Represent each tool as a LangGraph node, encapsulating the tool's execution within the node's functionality:
```python
from toolbox_langchain import ToolboxClient
from langchain_google_vertexai import ChatVertexAI
from langgraph.graph import StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

# Define the function that calls the model
def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}  # Return a list to add to existing messages

model = ChatVertexAI(model="gemini-3-flash-preview")

builder = StateGraph(MessagesState)
tool_node = ToolNode(tools)

builder.add_node("agent", call_model)
builder.add_node("tools", tool_node)
```
Connect tool nodes with LLM nodes. The LLM decides which tool to use based on input or context. Tool output can be fed back into the LLM:
```python
from typing import Literal

from langgraph.graph import END, START
from langchain_core.messages import HumanMessage

# Define the function that determines whether to continue or not
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"  # Route to "tools" node if LLM makes a tool call
    return END  # Otherwise, stop

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue)
builder.add_edge("tools", "agent")

graph = builder.compile()

graph.invoke({"messages": [HumanMessage(content="Do something with the tools")]})
```
Execute a tool manually using the invoke method:
```python
result = tools[0].invoke({"name": "Alice", "age": 30})
```
This is useful for testing tools or when you need precise control over tool execution outside of an agent framework.
This section describes how to authenticate the ToolboxClient itself when connecting to a Toolbox server instance that requires authentication. This is crucial for securing your Toolbox server endpoint, especially when deployed on platforms like Cloud Run, GKE, or any environment where unauthenticated access is restricted.
This client-to-server authentication ensures that the Toolbox server can verify the identity of the client making the request before any tool is loaded or called. It is different from Authenticating Tools, which deals with providing credentials for specific tools within an already connected Toolbox session.
You'll need this type of authentication if your Toolbox server is configured to deny unauthenticated requests, for example when it is deployed on Cloud Run or GKE with unauthenticated access disabled. Without proper client authentication in these scenarios, attempts to connect or make calls (like `load_tool`) will likely fail with `Unauthorized` errors.
The ToolboxClient allows you to specify functions (or coroutines for the async
client) that dynamically generate HTTP headers for every request sent to the
Toolbox server. The most common use case is to add an Authorization header with
a bearer token (e.g., a Google ID token).
These header-generating functions are called just before each request, ensuring that fresh credentials or header values can be used.
You can configure these dynamic headers as follows:
```python
from toolbox_langchain import ToolboxClient

async with ToolboxClient(
    "toolbox-url",
    client_headers={"header1": header1_getter, "header2": header2_getter, ...}
) as client:
    # Use client
    pass
```
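Each value in `client_headers` is a getter: a function (or coroutine) that returns the header value and is invoked before every request. As a minimal sketch, assuming a hypothetical `fetch_fresh_token()` helper that returns a current token, a getter might look like:

```python
def auth_header_getter() -> str:
    # Called by the client immediately before each request,
    # so a freshly minted credential can be returned every time.
    token = fetch_fresh_token()  # hypothetical helper; replace with your own token logic
    return f"Bearer {token}"
```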
For Toolbox servers hosted on Google Cloud (e.g., Cloud Run) that require Google ID token authentication, the helper module `auth_methods` provides utility functions.

1. Configure Permissions: Grant the `roles/run.invoker` IAM role on the Cloud Run service to the principal. This could be your user account email or a service account.
2. Configure Credentials: Make sure credentials for that principal (for example, Application Default Credentials) are available in the environment running your application.
3. Connect to the Toolbox Server:
```python
from toolbox_langchain import ToolboxClient
from toolbox_core import auth_methods

URL = "https://your-toolbox-service-url"  # Replace with your Toolbox server's URL

auth_token_provider = auth_methods.aget_google_id_token(URL)  # can also use sync method
async with ToolboxClient(
    URL,
    client_headers={"Authorization": auth_token_provider},
) as client:
    tools = client.load_toolset()

    # Now, you can use the client as usual.
```
{{< notice info >}} Always use HTTPS to connect your application with the Toolbox service, especially when using tools with authentication configured. Using HTTP exposes your application to serious security risks. {{< /notice >}}
Some tools require user authentication to access sensitive data.
Toolbox currently supports authentication using the OIDC protocol with ID tokens (not access tokens) for Google OAuth 2.0.
Refer to the Toolbox documentation for instructions on configuring tools with authenticated parameters.
You need a method to retrieve an ID token from your authentication service:
```python
async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder

async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    tools = toolbox.load_toolset()

    auth_tool = tools[0].add_auth_token_getter("my_auth", get_auth_token)  # Single token

    multi_auth_tool = tools[0].add_auth_token_getters({"auth_1": get_auth_1, "auth_2": get_auth_2})  # Multiple tokens

    # OR

    auth_tools = [tool.add_auth_token_getter("my_auth", get_auth_token) for tool in tools]

    auth_tool = toolbox.load_tool("my-tool", auth_token_getters={"my_auth": get_auth_token})

    auth_tools = toolbox.load_toolset(auth_token_getters={"my_auth": get_auth_token})
```
{{< notice note >}} Adding auth tokens during loading only affects the tools loaded within that call. {{< /notice >}}

A complete example:
```python
import asyncio

from toolbox_langchain import ToolboxClient


async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder


async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tool = toolbox.load_tool("my-tool")
        auth_tool = tool.add_auth_token_getter("my_auth", get_auth_token)
        result = auth_tool.invoke({"input": "some input"})
        print(result)


asyncio.run(main())
```
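In the examples above, `get_auth_token` returns a placeholder. As one possible implementation for tools configured with a Google auth service, you could fetch a Google ID token with the `google-auth` library. This is only a sketch: it assumes `google-auth` is installed, credentials capable of minting ID tokens (such as a service account) are available, and `YOUR_CLIENT_ID` stands in for the OAuth client ID configured for your auth service.

```python
import google.auth.transport.requests
import google.oauth2.id_token


async def get_auth_token() -> str:
    # Fetch a Google ID token whose audience matches the OAuth client ID
    # configured for the Toolbox auth service ("YOUR_CLIENT_ID" is a placeholder).
    request = google.auth.transport.requests.Request()
    return google.oauth2.id_token.fetch_id_token(request, "YOUR_CLIENT_ID")
```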
Predetermine values for tool parameters using the SDK. These values won't be modified by the LLM. This is useful when a parameter should be fixed by your application rather than chosen by the model.
```python
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    tools = toolbox.load_toolset()

    bound_tool = tools[0].bind_param("param", "value")  # Single param

    multi_bound_tool = tools[0].bind_params({"param1": "value1", "param2": "value2"})  # Multiple params

    # OR

    bound_tools = [tool.bind_param("param", "value") for tool in tools]

    bound_tool = toolbox.load_tool("my-tool", bound_params={"param": "value"})

    bound_tools = toolbox.load_toolset(bound_params={"param": "value"})
```
{{< notice note >}} Binding parameter values during loading only affects the tools loaded in that call. {{< /notice >}}
Use a function to bind dynamic values:
```python
def get_dynamic_value():
    # Logic to determine the value
    return "dynamic_value"

dynamic_bound_tool = tool.bind_param("param", get_dynamic_value)
```
{{< notice note >}} You don’t need to modify tool configurations to bind parameter values. {{< /notice >}}
For better performance through cooperative
multitasking, you can
use the asynchronous interfaces of the ToolboxClient.
{{< notice note >}}
Asynchronous interfaces like `aload_tool` and `aload_toolset` require an asynchronous environment. For guidance on running asynchronous Python programs, see the asyncio documentation.
{{< /notice >}}
```python
import asyncio

from toolbox_langchain import ToolboxClient


async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tool = await toolbox.aload_tool("my-tool")
        tools = await toolbox.aload_toolset()
        response = await tool.ainvoke()


if __name__ == "__main__":
    asyncio.run(main())
```
The SDK supports OpenTelemetry tracing and metrics via the toolbox-core layer, following the MCP Semantic Conventions.
First install the telemetry extra from toolbox-core:
```bash
pip install toolbox-core[telemetry]
```
Then pass `telemetry_enabled=True` when creating your client:
```python
from toolbox_langchain import ToolboxClient

with ToolboxClient("http://127.0.0.1:5000", telemetry_enabled=True) as toolbox:
    tool = toolbox.load_tool("my-tool")
    result = tool.invoke({"param": "value"})
```
Configure your OpenTelemetry TracerProvider and MeterProvider before creating the client. See the toolbox-core OpenTelemetry documentation for a full setup example.
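If you don't already have providers registered, a minimal sketch using the console exporters from `opentelemetry-sdk` (an assumption; swap in your preferred exporters for production) looks like this:

```python
from opentelemetry import metrics, trace
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import ConsoleMetricExporter, PeriodicExportingMetricReader
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Register a TracerProvider so the client's spans have somewhere to go.
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(tracer_provider)

# Register a MeterProvider so the client's metrics are exported periodically.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

# Create the ToolboxClient with telemetry_enabled=True (as shown above) after this setup.
```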