<!--
title: Serverless Framework - AWS Bedrock AgentCore
description: Deploy AI agents with AWS Bedrock AgentCore using the Serverless Framework
short_title: AI Agents
keywords:
  [
    'Serverless Framework',
    'AWS Bedrock',
    'AgentCore',
    'AI Agents',
    'LangGraph',
    'Memory',
    'Gateway',
    'Browser Tool',
    'Code Interpreter',
  ]
-->

<!-- DOCS-SITE-LINK:START automatically generated -->

Read this on the main serverless docs site

<!-- DOCS-SITE-LINK:END -->

AI Agents

AWS Bedrock AgentCore is a fully managed service for deploying AI agents built with any framework, including LangGraph, Strands Agents, CrewAI, or your own custom code. It provides memory, custom tools, web browsing, and code execution capabilities. The Serverless Framework provisions and manages AgentCore infrastructure alongside your Lambda functions.

Quick Start

Deploy your first AI agent with a minimal configuration and your agent code.

1. Configuration (serverless.yml):

yml
service: my-ai-agent

provider:
  name: aws
  region: us-east-1

ai:
  agents:
    chatbot: {}

Note: Use {} when an agent needs no configuration. In YAML, a bare chatbot: key parses as null, so the empty mapping must be written explicitly.

2. Agent code:

JavaScript (index.js):

javascript
import { BedrockAgentCoreApp } from 'bedrock-agentcore/runtime'
import { z } from 'zod'

// Your agent setup - use any framework (LangGraph, Strands Agents, CrewAI, etc.)
const agent = createYourAgent()

const app = new BedrockAgentCoreApp({
  invocationHandler: {
    requestSchema: z.object({
      prompt: z.string(),
    }),
    async process(request) {
      // Your agent logic
      const result = await agent.invoke(request.prompt)
      return result
    },
  },
})

app.run()

Python (agent.py):

python
from bedrock_agentcore.runtime import BedrockAgentCoreApp

# Your agent setup - use any framework (LangGraph, Strands Agents, CrewAI, etc.)
agent = create_your_agent()

app = BedrockAgentCoreApp()

@app.entrypoint
def agent_invocation(payload, context):
    # Your agent logic
    result = agent.invoke(payload.get("prompt"))
    return {"result": result}

app.run()

3. Deploy:

bash
serverless deploy

The Framework automatically builds a Docker image from your source code, pushes it to ECR, and deploys the agent. No Dockerfile is needed.

See the Examples section below for complete, working examples.

What the Framework Manages

When you run serverless deploy, the Framework automatically handles:

  • Docker image builds - Builds from your Dockerfile or automatically from source code, pushes to ECR
  • IAM roles - Creates least-privilege execution roles for each agent component (runtime, memory, gateway, browser, code interpreter)
  • CloudFormation resources - Provisions Runtime, Endpoint, Gateway, Memory, Browser, and Code Interpreter resources
  • Environment variables - Injects memory IDs, gateway URLs, and other references into your agent's environment
  • Code packaging - For Python code deployment, packages and uploads code to S3

You write the agent logic and serverless.yml configuration; the Framework handles the infrastructure.
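For example, agent code can pick up these injected references at startup. The variable names below are illustrative assumptions, not documented names; run serverless info after deploying to see what your stack actually exposes:

```python
import os

# Illustrative sketch: read Framework-injected references with
# local-dev fallbacks. MEMORY_ID and GATEWAY_URL are assumed names,
# not documented ones; check your deployed environment.
def runtime_config() -> dict:
    return {
        "memory_id": os.environ.get("MEMORY_ID", "local-dev-memory"),
        "gateway_url": os.environ.get("GATEWAY_URL", "http://localhost:8080"),
    }
```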

Deployment Options

AgentCore supports three deployment methods:

Auto-build (No Dockerfile)

The simplest option. The Framework automatically builds a Docker image from your source code:

yml
ai:
  agents:
    myAgent: {} # No Dockerfile needed

Requirements for auto-build:

For Node.js projects:

  • package.json - required
  • A lockfile - required (package-lock.json, yarn.lock, or pnpm-lock.yaml)
  • Entry point: index.js or server.js in project root, or a start script in package.json
  • Node.js version: set via engines.node in package.json (defaults to latest LTS)

For Python projects:

  • requirements.txt or pyproject.toml - required for dependency installation

Best for: Getting started quickly, simple projects
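A minimal package.json that satisfies these auto-build requirements might look like the following; the package name and version numbers are illustrative, not prescribed:

```json
{
  "name": "my-ai-agent",
  "version": "1.0.0",
  "type": "module",
  "engines": { "node": "20" },
  "scripts": { "start": "node index.js" },
  "dependencies": {
    "bedrock-agentcore": "^0.1.0"
  }
}
```

Pair this with a lockfile generated by your package manager (for example, npm install produces package-lock.json).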

Dockerfile Deployment

Provide your own Dockerfile for full control. The Framework auto-detects it:

yml
ai:
  agents:
    myAgent: {} # Auto-detects Dockerfile in project directory

Node.js Dockerfile:

dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "index.js"]

Python Dockerfile:

dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "agent.py"]

Add optional configuration as needed:

yml
ai:
  agents:
    myAgent:
      environment:
        MODEL_ID: us.anthropic.claude-sonnet-4-5-20250929-v1:0
      lifecycle:
        idleRuntimeSessionTimeout: 900 # seconds (60-28800)
        maxLifetime: 3600 # seconds (60-28800)

Best for: Multi-language projects, complex dependencies, full control over the container

Code Deployment (Python Only)

Deploy Python code directly without Docker:

yml
ai:
  agents:
    myAgent:
      handler: agent.py # Triggers code deployment mode
      runtime: python3.13 # or python3.10, python3.11, python3.12
      environment:
        MODEL_ID: us.anthropic.claude-sonnet-4-5-20250929-v1:0

Best for: Simple Python agents, faster iterations, no Docker setup

Core Concepts

AgentCore provides infrastructure components that you reference in your agent code:

  • Runtime - Your deployed agent application (Docker or Python)
  • Gateway - Converts Lambda/APIs/MCP servers into agent tools
  • Memory - Conversation persistence and context management
  • Browser - Managed web automation capabilities
  • Code Interpreter - Secure Python code execution
  • Dev Mode - Local development with hot reload

Prerequisites

  • AWS account with Bedrock model access
  • Docker installed (for image deployment and auto-build)
  • Node.js 20+ (for JavaScript agents)
  • Serverless Framework v4+

Configuration Reference

Basic Runtime

yml
ai:
  agents:
    myAgent:
      # Deployment method (choose one)
      artifact:
        image: # Docker deployment
      # OR
      handler: agent.py # Code deployment
      runtime: python3.12

      # Optional configuration
      environment:
        MODEL_ID: us.anthropic.claude-sonnet-4-5-20250929-v1:0
      lifecycle:
        idleRuntimeSessionTimeout: 900 # seconds (60-28800)
        maxLifetime: 3600 # seconds (60-28800)
      tags:
        team: ai
        project: chatbot

With Memory

yml
ai:
  memory:
    conversations:
      expiration: 30 # days
      strategies:
        - SemanticMemoryStrategy:
            Name: Conversations

  agents:
    myAgent:
      memory: conversations # Reference memory by name

Learn more: Memory Configuration

With Gateway (Custom Tools)

yml
ai:
  tools:
    calculator:
      function: calculatorFunction
      toolSchema:
        - name: calculate
          description: Perform calculations
          inputSchema:
            type: object
            properties:
              expression:
                type: string
            required:
              - expression

  gateways:
    default:
      tools:
        - calculator

  agents:
    myAgent:
      gateway: default # Reference gateway by name

Learn more: Gateway Configuration
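The calculatorFunction referenced above is an ordinary Lambda function. Below is a minimal sketch that evaluates arithmetic safely rather than calling eval(); the event shape (tool input delivered directly as the event) is an assumption to verify against the Gateway documentation:

```python
import ast
import operator

# Supported arithmetic operations for the `calculate` tool.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def _eval(node):
    # Walk the parsed expression, allowing only numbers and the
    # whitelisted operators above.
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.operand))
    raise ValueError("unsupported expression")

def handler(event, context):
    # Assumes the tool input arrives as the event itself; adjust to
    # the actual payload shape your Gateway delivers.
    expression = event["expression"]
    return {"result": _eval(ast.parse(expression, mode="eval").body)}
```

Parsing with the ast module instead of eval() keeps the tool from executing arbitrary code passed in by the model.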

Development & Testing

bash
# Local development mode - runs agent locally with hot reload
serverless dev

# Invoke a deployed agent
serverless invoke --agent myAgent --data '{"prompt": "Hello!"}'

# View agent logs
serverless logs --agent myAgent

# Tail agent logs in real-time
serverless logs --agent myAgent --tail

# View deployment info
serverless info

serverless dev runs your agent locally in Docker, injects AWS credentials, watches for file changes, and provides an interactive chat CLI. See Dev Mode for details.

serverless invoke --agent supports --data, --path (file input), and --session-id (for multi-turn conversations).

serverless logs --agent supports --tail, --startTime, --filter, and --interval, the same options available for function logs.

Examples

Basic Agent

LangGraph with simple tools:

Streaming

Real-time token streaming via SSE:

Memory

Conversation persistence across invocations:

Gateway (Custom Tools)

Connect Lambda functions and APIs as agent tools:

Browser

Web automation and content extraction:

Code Interpreter

Secure Python code execution:

MCP Server

Deploy an MCP server as an AgentCore runtime:

Next Steps