Deploy AI agents as serverless AWS Lambda functions.
```bash
cd examples/aws
export OPENAI_API_KEY="your-key"

# TypeScript
cd typescript && bun run build
sam deploy --guided --parameter-overrides OpenAIApiKey=$OPENAI_API_KEY
```
```text
┌──────────────┐     ┌─────────────────┐     ┌────────────────┐
│    Client    │────▶│   API Gateway   │────▶│     Lambda     │
│              │◀────│   (HTTP API)    │◀────│   (elizaOS)    │
└──────────────┘     └─────────────────┘     └────────────────┘
```
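The API Gateway → Lambda wiring shown above is what `sam deploy` provisions. A minimal SAM template sketch of that wiring (resource names, the handler path, and the timeout are illustrative, not necessarily the exact template shipped in `examples/aws`):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31

Parameters:
  OpenAIApiKey:
    Type: String
    NoEcho: true   # keep the key out of CloudFormation output

Resources:
  AgentFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: dist/index.handler   # illustrative build-output path
      Runtime: nodejs20.x
      MemorySize: 512
      Timeout: 30
      Environment:
        Variables:
          OPENAI_API_KEY: !Ref OpenAIApiKey
      Events:
        Chat:
          Type: HttpApi               # the HTTP API shown in the diagram
          Properties:
            Path: /chat
            Method: post
```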
| Language | Directory | Runtime |
|---|---|---|
| TypeScript | `examples/aws/typescript/` | Node.js 20 |
| Python | `examples/aws/python/` | Python 3.11 |
| Rust | `examples/aws/rust/` | Custom runtime |
```typescript
import { APIGatewayProxyHandler } from "aws-lambda";
import { AgentRuntime } from "@elizaos/core";
import { openaiPlugin } from "@elizaos/plugin-openai";

// Cache the runtime in module scope so warm invocations skip re-initialization.
let runtime: AgentRuntime | null = null;

export const handler: APIGatewayProxyHandler = async (event) => {
  if (!runtime) {
    runtime = new AgentRuntime({
      character: { name: "Eliza", bio: "A helpful AI." },
      plugins: [openaiPlugin],
    });
    await runtime.initialize();
  }

  const { message } = JSON.parse(event.body || "{}");
  const response = await runtime.useModel("TEXT_LARGE", { prompt: message });

  return {
    statusCode: 200,
    body: JSON.stringify({ response: String(response) }),
  };
};
```
```bash
cd examples/aws/typescript
bun run start   # starts a local server on port 3000

curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!"}'
```
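On success the endpoint returns a JSON body with a single `response` field, matching the `JSON.stringify({ response })` in the handler; the reply text itself depends on the model:

```json
{ "response": "Hello! How can I help you today?" }
```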
| Variable | Default | Description |
|---|---|---|
| `OPENAI_API_KEY` | (required) | OpenAI API key |
| `CHARACTER_NAME` | `Eliza` | Agent name |
| `CHARACTER_BIO` | `A helpful AI.` | Agent bio |
| `LOG_LEVEL` | `INFO` | Logging level |
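The defaults in the table can be applied with a small fallback helper when the handler builds its configuration. A sketch, assuming only plain `process.env` access (the `envOr` helper and `config` shape are illustrative, not part of the example's actual source):

```typescript
// Illustrative helper: read an environment variable, falling back to a default
// when it is unset or empty.
function envOr(name: string, fallback: string): string {
  const value = process.env[name];
  return value !== undefined && value !== "" ? value : fallback;
}

// Defaults mirror the table above; OPENAI_API_KEY has no default and is required.
const config = {
  characterName: envOr("CHARACTER_NAME", "Eliza"),
  characterBio: envOr("CHARACTER_BIO", "A helpful AI."),
  logLevel: envOr("LOG_LEVEL", "INFO"),
};

console.log(config);
```

With none of the variables set, the agent runs as "Eliza" with `INFO` logging.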
Estimated monthly cost, assuming 512 MB memory, 2 s average duration, and 10K requests/month:
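A back-of-the-envelope calculation under those assumptions, using the publicly listed x86 Lambda rates at the time of writing (~$0.0000166667 per GB-second and $0.20 per million requests; these are assumptions here, so verify current AWS pricing before relying on them):

```typescript
// Assumed us-east-1 x86 pricing; subject to change.
const PRICE_PER_GB_SECOND = 0.0000166667;
const PRICE_PER_MILLION_REQUESTS = 0.2;

const memoryGb = 512 / 1024;        // 0.5 GB
const durationSeconds = 2;          // average invocation
const requestsPerMonth = 10_000;

const gbSeconds = memoryGb * durationSeconds * requestsPerMonth; // 10,000 GB-s
const computeCost = gbSeconds * PRICE_PER_GB_SECOND;
const requestCost = (requestsPerMonth / 1_000_000) * PRICE_PER_MILLION_REQUESTS;

console.log(`~$${(computeCost + requestCost).toFixed(2)}/month before the free tier`);
```

At 10,000 GB-seconds this workload also sits well inside the Lambda free tier (400,000 GB-seconds/month), so the effective compute cost may be $0.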