# LlamaIndex Tools Integration: AWS Bedrock AgentCore
This module provides a runtime adapter and tools for deploying and extending LlamaIndex agents with Amazon Bedrock AgentCore -- including managed compute via AgentCore Runtime, sandboxed browser automation, and code execution.
## Installation

Your AWS credentials must have permission to call the relevant `bedrock-agentcore:*` actions (see the AgentCore documentation for details).

(Optional) To run the examples below, first install:

```shell
pip install llama-index llama-index-llms-bedrock-converse
```

Install the main tools package:

```shell
pip install llama-index-tools-aws-bedrock-agentcore
```
## AgentCore Runtime

The `AgentCoreRuntime` adapter deploys any LlamaIndex agent to Amazon Bedrock AgentCore Runtime -- a managed compute platform for AI agents. It wraps `BedrockAgentCoreApp` from the `bedrock-agentcore` SDK, providing the required `POST /invocations` and `GET /ping` endpoints.
```python
from llama_index.llms.bedrock_converse import BedrockConverse
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.tools.aws_bedrock_agentcore import AgentCoreRuntime

llm = BedrockConverse(
    model="us.anthropic.claude-sonnet-4-6-v1",
    region_name="us-west-2",
)
agent = FunctionAgent(llm=llm, tools=[])

# One-liner -- starts uvicorn on port 8080
AgentCoreRuntime.serve(agent)
```
For more control, construct the runtime explicitly:

```python
runtime = AgentCoreRuntime(
    agent=agent,
    stream=True,  # SSE streaming (default)
    port=8080,  # Required port for AgentCore deployment
    debug=False,
)
runtime.run()
```
To add persistent conversation memory, pass a configured `AgentCoreMemory` to `serve`:

```python
from llama_index.memory.bedrock_agentcore import (
    AgentCoreMemory,
    AgentCoreMemoryContext,
)

memory = AgentCoreMemory(
    context=AgentCoreMemoryContext(
        memory_id="your-memory-id",
        actor_id="user-123",
    ),
    region_name="us-west-2",
)

# Session ID from the X-Amzn-Bedrock-AgentCore-Runtime-Session-Id header
# is automatically wired to memory
AgentCoreRuntime.serve(agent, memory=memory)
```
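The header-to-session wiring can be pictured with a small sketch. This is illustrative only -- the real logic lives inside `AgentCoreRuntime`, and the `resolve_session` helper below is hypothetical:

```python
# Illustrative sketch: mapping the AgentCore session header to a memory
# session ID. Only the header name comes from the runtime contract;
# resolve_session and the fallback value are made up for this example.
SESSION_HEADER = "X-Amzn-Bedrock-AgentCore-Runtime-Session-Id"


def resolve_session(headers: dict, default: str = "default-session") -> str:
    """Return the AgentCore session ID from request headers, or a fallback."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get(SESSION_HEADER.lower(), default)


print(resolve_session({"x-amzn-bedrock-agentcore-runtime-session-id": "abc-123"}))
```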
Invoke the running server over HTTP:

```shell
# Non-streaming
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, what can you do?"}'

# Streaming (SSE)
curl -N -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, what can you do?"}'
```
The adapter accepts `prompt`, `message`, or `input` as the payload key.
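The key-fallback behavior can be sketched in a few lines. `extract_prompt` is a hypothetical helper for illustration, not part of the package API; only the three accepted key names come from the documentation above:

```python
# Illustrative sketch of the documented payload contract: the first of
# "prompt", "message", or "input" present in the JSON body is used.
def extract_prompt(payload: dict) -> str:
    for key in ("prompt", "message", "input"):
        if key in payload:
            return str(payload[key])
    raise ValueError("payload must contain 'prompt', 'message', or 'input'")


print(extract_prompt({"message": "Hello, what can you do?"}))
```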
When `stream=True` (the default), the SSE stream emits these event types:
| Event | Fields | Description |
| --- | --- | --- |
| `agent_stream` | `delta`, `response`, `thinking_delta?` | Token-by-token LLM output |
| `tool_call` | `tool_name`, `tool_kwargs` | Before tool execution |
| `tool_result` | `tool_name`, `tool_output` | After tool execution |
| `done` | `response` | Final agent response |
| `error` | `message` | Error during streaming |
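A client consuming the stream can dispatch on these event types. The sketch below is a minimal, assumption-laden parser: it supposes each event arrives as a `data: {...}` line with the type carried in a `type` field, which may differ from the adapter's actual wire format -- check the real payloads before relying on it:

```python
import json


# Hypothetical client-side sketch: split an SSE body into "data:" payloads
# and reassemble agent_stream deltas into the final text.
def parse_sse(body: str) -> list:
    events = []
    for block in body.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data: "):
                events.append(json.loads(line[len("data: ") :]))
    return events


stream = (
    'data: {"type": "agent_stream", "delta": "Hel"}\n\n'
    'data: {"type": "agent_stream", "delta": "lo"}\n\n'
    'data: {"type": "done", "response": "Hello"}\n\n'
)
text = "".join(e.get("delta", "") for e in parse_sse(stream) if e["type"] == "agent_stream")
print(text)  # Hello
```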
For programmatic access (e.g. in tests), the underlying ASGI app is exposed:

```python
runtime = AgentCoreRuntime(agent=agent)
app = runtime.app  # BedrockAgentCoreApp (Starlette-based)
# Use with httpx.AsyncClient for testing
```
## AgentCore Browser

The AgentCore Browser toolspec provides a set of tools for interacting with web browsers in a secure sandbox environment. It enables your LlamaIndex agents to navigate websites, extract content, click elements, and more.
Included tools:

- `navigate_browser`: Navigate to a URL
- `click_element`: Click on an element using CSS selectors
- `extract_text`: Extract all text from the current webpage
- `extract_hyperlinks`: Extract all hyperlinks from the current webpage
- `get_elements`: Get elements matching a CSS selector
- `navigate_back`: Navigate to the previous page
- `current_webpage`: Get information about the current webpage
- `generate_live_view_url`: Generate a presigned URL for human oversight of a browser session
- `take_control`: Take manual control of a browser session (disables automation)
- `release_control`: Release manual control (re-enables automation)

Lifecycle methods available for programmatic use (not exposed as agent tools): `list_browsers`, `create_browser`, `delete_browser`, `get_browser`.

You can optionally pass a custom identifier for VPC-enabled browser resources:
```python
from llama_index.tools.aws_bedrock_agentcore import AgentCoreBrowserToolSpec

tool_spec = AgentCoreBrowserToolSpec(
    region="us-west-2",
    identifier="my-custom-browser-id",
)
```
Example usage:

```python
import asyncio

import nest_asyncio
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.bedrock_converse import BedrockConverse
from llama_index.tools.aws_bedrock_agentcore import AgentCoreBrowserToolSpec

nest_asyncio.apply()  # In case of an existing loop (e.g. in JupyterLab)


async def main():
    tool_spec = AgentCoreBrowserToolSpec(region="us-west-2")
    tools = tool_spec.to_tool_list()

    llm = BedrockConverse(
        model="us.anthropic.claude-sonnet-4-6-v1",
        region_name="us-west-2",
    )

    agent = FunctionAgent(
        tools=tools,
        llm=llm,
    )

    task = "Go to https://news.ycombinator.com/ and tell me the titles of the top 5 posts."
    response = await agent.run(task)
    print(str(response))

    await tool_spec.cleanup()


if __name__ == "__main__":
    asyncio.run(main())
```
## AgentCore Code Interpreter

The AgentCore Code Interpreter toolspec provides a set of tools for interacting with a secure code interpreter sandbox environment. It enables your LlamaIndex agents to execute code, run shell commands, manage files, and perform computational tasks.
Included tools:

- `execute_code`: Run code in various languages (primarily Python)
- `execute_command`: Run shell commands
- `read_files`: Read content of files in the environment
- `list_files`: List files in directories
- `delete_files`: Remove files from the environment
- `write_files`: Create or update files
- `start_command`: Start long-running commands asynchronously
- `get_task`: Check status of async tasks
- `stop_task`: Stop running tasks
- `upload_file`: Upload a file with an optional semantic description
- `upload_files`: Upload multiple files at once
- `install_packages`: Install Python packages via pip
- `download_file`: Download a file from the sandbox
- `download_files`: Download multiple files from the sandbox
- `clear_context`: Clear all variable state in the Python execution context

Lifecycle methods available for programmatic use (not exposed as agent tools): `list_code_interpreters`, `create_code_interpreter`, `delete_code_interpreter`, `get_code_interpreter`.

You can optionally pass a custom identifier for VPC-enabled code interpreter resources:
```python
from llama_index.tools.aws_bedrock_agentcore import (
    AgentCoreCodeInterpreterToolSpec,
)

tool_spec = AgentCoreCodeInterpreterToolSpec(
    region="us-west-2",
    identifier="my-custom-interpreter-id",
)
```
Example usage:

```python
import asyncio

import nest_asyncio
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.bedrock_converse import BedrockConverse
from llama_index.tools.aws_bedrock_agentcore import (
    AgentCoreCodeInterpreterToolSpec,
)

nest_asyncio.apply()  # In case of an existing loop (e.g. in JupyterLab)


async def main():
    tool_spec = AgentCoreCodeInterpreterToolSpec(region="us-west-2")
    tools = tool_spec.to_tool_list()

    llm = BedrockConverse(
        model="us.anthropic.claude-sonnet-4-6-v1",
        region_name="us-west-2",
    )

    agent = FunctionAgent(
        tools=tools,
        llm=llm,
    )

    code_task = "Write a Python function that calculates the factorial of a number and test it."
    code_response = await agent.run(code_task)
    print(str(code_response))

    command_task = "Use terminal CLI commands to: 1) Show the environment's Python version. 2) Show me the list of Python packages currently installed in the environment."
    command_response = await agent.run(command_task)
    print(str(command_response))

    await tool_spec.cleanup()


if __name__ == "__main__":
    asyncio.run(main())
```