docs/sf/providers/aws/guide/agents/README.md
AWS Bedrock AgentCore is a fully managed service for deploying AI agents built with any framework -- LangGraph, Strands Agents, CrewAI, or your own custom code. It provides memory, custom tools, web browsing, and code execution capabilities. The Serverless Framework provisions and manages AgentCore infrastructure alongside your Lambda functions.
Deploy your first AI agent with a minimal configuration and your agent code.
1. Configuration (serverless.yml):

```yaml
service: my-ai-agent

provider:
  name: aws
  region: us-east-1

ai:
  agents:
    chatbot: {}
```

Note: Use `{}` for an empty agent configuration. YAML requires explicit empty braces.
2. Agent code:
JavaScript (index.js):

```javascript
import { BedrockAgentCoreApp } from 'bedrock-agentcore/runtime'
import { z } from 'zod'

// Your agent setup - use any framework (LangGraph, Strands Agents, CrewAI, etc.)
const agent = createYourAgent()

const app = new BedrockAgentCoreApp({
  invocationHandler: {
    requestSchema: z.object({
      prompt: z.string(),
    }),
    async process(request) {
      // Your agent logic
      const result = await agent.invoke(request.prompt)
      return result
    },
  },
})

app.run()
```
Python (agent.py):

```python
from bedrock_agentcore.runtime import BedrockAgentCoreApp

# Your agent setup - use any framework (LangGraph, Strands Agents, CrewAI, etc.)
agent = create_your_agent()

app = BedrockAgentCoreApp()

@app.entrypoint
def agent_invocation(payload, context):
    # Your agent logic
    result = agent.invoke(payload.get("prompt"))
    return {"result": result}

app.run()
```
3. Deploy:

```bash
serverless deploy
```
The Framework automatically builds a Docker image from your source code, pushes it to ECR, and deploys the agent. No Dockerfile is needed.
See the full examples listed at the end of this guide.
When you run `serverless deploy`, the Framework automatically builds the container image, pushes it to ECR, and provisions the agent runtime. You write the agent logic and the serverless.yml configuration; the Framework handles the infrastructure.
AgentCore supports three deployment methods: automatic build from source, a custom Dockerfile, and direct code deployment (Python only).

Automatic build is the simplest option. The Framework builds a Docker image from your source code:
```yaml
ai:
  agents:
    myAgent: {} # No Dockerfile needed
```
Requirements for auto-build:

For Node.js projects:

- `package.json` - required
- A lockfile (`package-lock.json`, `yarn.lock`, or `pnpm-lock.yaml`)
- An entry point: `index.js` or `server.js` in the project root, or a `start` script in `package.json`
- Node version from `engines.node` in `package.json` (defaults to latest LTS)

For Python projects:

- `requirements.txt` or `pyproject.toml` - required for dependency installation

Best for: Getting started quickly, simple projects
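The Node.js entry-point rules above can be sketched as a small detection function. This is a rough approximation: the Framework's actual build logic may differ, and the precedence shown here is an assumption.

```python
import json
from pathlib import Path

def detect_node_entrypoint(project_dir):
    """Approximate the auto-build rules: package.json is required,
    then a start script or index.js/server.js provides the command."""
    root = Path(project_dir)
    pkg_path = root / "package.json"
    if not pkg_path.exists():
        return None  # package.json is required for auto-build
    pkg = json.loads(pkg_path.read_text())
    if "start" in pkg.get("scripts", {}):
        return ["npm", "start"]
    for candidate in ("index.js", "server.js"):
        if (root / candidate).exists():
            return ["node", candidate]
    return None  # no recognizable entry point
```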
Provide your own Dockerfile for full control. The Framework auto-detects it:
```yaml
ai:
  agents:
    myAgent: {} # Auto-detects Dockerfile in project directory
```
Node.js Dockerfile:

```dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "index.js"]
```
Python Dockerfile:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "agent.py"]
```
Add optional configuration as needed:
```yaml
ai:
  agents:
    myAgent:
      environment:
        MODEL_ID: us.anthropic.claude-sonnet-4-5-20250929-v1:0
      lifecycle:
        idleRuntimeSessionTimeout: 900 # seconds (60-28800)
        maxLifetime: 3600 # seconds (60-28800)
```
Best for: Multi-language projects, complex dependencies, full control over the container
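Both lifecycle values must fall in the documented 60-28800 second range. A hypothetical client-side check of those bounds (the relationship between the two timeouts enforced below is an assumption, not documented behavior):

```python
def validate_lifecycle(idle_timeout, max_lifetime):
    """Check lifecycle settings against the documented 60-28800s bounds."""
    for name, value in [("idleRuntimeSessionTimeout", idle_timeout),
                        ("maxLifetime", max_lifetime)]:
        if not 60 <= value <= 28800:
            raise ValueError(f"{name} must be between 60 and 28800 seconds")
    # Assumption: an idle timeout longer than the max lifetime is pointless
    if idle_timeout > max_lifetime:
        raise ValueError("idleRuntimeSessionTimeout should not exceed maxLifetime")
    return True
```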
Deploy Python code directly without Docker:
```yaml
ai:
  agents:
    myAgent:
      handler: agent.py # Triggers code deployment mode
      runtime: python3.13 # or python3.10, python3.11, python3.12
      environment:
        MODEL_ID: us.anthropic.claude-sonnet-4-5-20250929-v1:0
```
Best for: Simple Python agents, faster iterations, no Docker setup
AgentCore provides infrastructure components that you reference in your agent code:
```yaml
ai:
  agents:
    myAgent:
      # Deployment method (choose one)
      artifact:
        image: # Docker deployment
      # OR
      handler: agent.py # Code deployment
      runtime: python3.12

      # Optional configuration
      environment:
        MODEL_ID: us.anthropic.claude-sonnet-4-5-20250929-v1:0
      lifecycle:
        idleRuntimeSessionTimeout: 900 # seconds (60-28800)
        maxLifetime: 3600 # seconds (60-28800)
      tags:
        team: ai
        project: chatbot
```
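The "choose one" rule above can be illustrated with a small validator. This is a sketch; how the Framework actually enforces the rule is an assumption.

```python
def deployment_method(agent_config):
    """Return which deployment method an agent config selects.
    Sketch of the 'choose one' rule: artifact (Docker) and handler
    (code deployment) are mutually exclusive; with neither present,
    the Framework auto-builds from source."""
    has_artifact = "artifact" in agent_config
    has_handler = "handler" in agent_config
    if has_artifact and has_handler:
        raise ValueError("Specify either artifact or handler, not both")
    if has_artifact:
        return "docker"
    if has_handler:
        return "code"
    return "auto-build"
```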
```yaml
ai:
  memory:
    conversations:
      expiration: 30 # days
      strategies:
        - SemanticMemoryStrategy:
            Name: Conversations

  agents:
    myAgent:
      memory: conversations # Reference memory by name
```
Learn more: Memory Configuration
```yaml
ai:
  tools:
    calculator:
      function: calculatorFunction
      toolSchema:
        - name: calculate
          description: Perform calculations
          inputSchema:
            type: object
            properties:
              expression:
                type: string
            required:
              - expression

  gateways:
    default:
      tools:
        - calculator

  agents:
    myAgent:
      gateway: default # Reference gateway by name
```
Learn more: Gateway Configuration
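For illustration, a Lambda-style handler could back the `calculate` tool declared above. The event shape (a dict matching the tool's `inputSchema`) is an assumption; the sketch uses a restricted AST walk rather than `eval` for safety:

```python
import ast
import operator

# Allowed binary operators for the restricted arithmetic evaluator
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def handler(event, context=None):
    # event mirrors the tool's inputSchema: {"expression": "<arithmetic>"}
    expression = event["expression"]
    return {"result": _eval(ast.parse(expression, mode="eval"))}
```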
```bash
# Local development mode - runs agent locally with hot reload
serverless dev

# Invoke a deployed agent
serverless invoke --agent myAgent --data '{"prompt": "Hello!"}'

# View agent logs
serverless logs --agent myAgent

# Tail agent logs in real-time
serverless logs --agent myAgent --tail

# View deployment info
serverless info
```
`serverless dev` runs your agent locally in Docker, injects AWS credentials, watches for file changes, and provides an interactive chat CLI. See Dev Mode for details.

`serverless invoke --agent` supports `--data`, `--path` (file input), and `--session-id` (for multi-turn conversations).

`serverless logs --agent` supports `--tail`, `--startTime`, `--filter`, and `--interval` -- the same options as function logs.
- LangGraph with simple tools
- Real-time token streaming via SSE
- Conversation persistence across invocations
- Connect Lambda functions and APIs as agent tools
- Web automation and content extraction
- Secure Python code execution
- Deploy an MCP server as an AgentCore runtime