docs/src/content/en/guides/deployment/aws-lambda.mdx
import Steps from "@site/src/components/Steps";
import StepItem from "@site/src/components/StepItem";
Deploy your Mastra application to AWS Lambda using Docker containers and the AWS Lambda Web Adapter. This approach runs your Mastra server as a containerized Lambda function with automatic scaling.
:::info
This guide covers deploying the Mastra server. If you're using a server adapter or web framework, deploy the way you normally would for that framework.
:::
You'll need:

- An AWS account
- The AWS CLI installed and configured (run `aws configure` to authenticate)
- Docker installed locally

:::warning
On AWS Lambda, the filesystem is ephemeral, so any local database file will be lost between invocations. If you're using `LibSQLStore` with a local file, configure it to use a remote LibSQL-compatible database (for example, Turso) instead.
:::
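If you go with Turso, one way to provision a remote database is via the Turso CLI. A sketch (the database name `mastra-db` is a placeholder):

```shell
# Create a remote database and grab the credentials Mastra will need.
# "mastra-db" is a placeholder name.
turso db create mastra-db
turso db show mastra-db --url        # value for TURSO_DATABASE_URL
turso db tokens create mastra-db     # value for TURSO_AUTH_TOKEN
```

Pass the URL and token to your deployed function as environment variables, as described later in this guide.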
Create a `Dockerfile` in your project root:
```dockerfile
FROM node:22-alpine

WORKDIR /app

# Install dependencies
COPY package*.json ./
RUN npm ci

# Copy source and build Mastra
COPY src ./src
RUN npx mastra build

# Alpine compatibility for some native deps
RUN apk add --no-cache gcompat

# Add the Lambda Web Adapter
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.9.1 /lambda-adapter /opt/extensions/lambda-adapter

# Run as non-root
RUN addgroup -g 1001 -S nodejs && \
    adduser -S mastra -u 1001 && \
    chown -R mastra:nodejs /app
USER mastra

# Adapter / app configuration
ENV PORT=8080
ENV NODE_ENV=production
ENV AWS_LWA_READINESS_CHECK_PATH="/api"

# Start the Mastra server
CMD ["node", ".mastra/output/index.mjs"]
```
:::note
This Dockerfile uses npm. If you're using pnpm, yarn, or another package manager, adjust the commands accordingly (e.g. `npm ci` won't work with pnpm lockfiles).
:::
Set the environment variables used by the commands that follow:

```bash
export PROJECT_NAME="your-mastra-app"
export AWS_REGION="us-east-1"
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
```
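These variables combine into the ECR image URI used when tagging and pushing the image. With placeholder values (the account ID shown is not real; yours comes from `aws sts get-caller-identity`), the URI looks like this:

```shell
# Placeholder values for illustration only
PROJECT_NAME="your-mastra-app"
AWS_REGION="us-east-1"
AWS_ACCOUNT_ID="123456789012"

echo "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROJECT_NAME:latest"
# 123456789012.dkr.ecr.us-east-1.amazonaws.com/your-mastra-app:latest
```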
Build the Docker image for Lambda's arm64 architecture:

```bash
export DOCKER_BUILDKIT=0
docker build --platform linux/arm64 -t "$PROJECT_NAME" .
```
:::note
BuildKit's default image format can cause compatibility issues with Lambda. Setting `DOCKER_BUILDKIT=0` uses the classic builder, which produces images in a format Lambda reliably accepts.
:::
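If you'd rather keep BuildKit, a possible alternative is to disable the provenance attestation manifests that Lambda's registry tends to reject. A sketch, assuming a Docker version with buildx and the `--provenance` flag:

```shell
# Keep BuildKit, but skip provenance attestations and load the
# result into the local image store
docker buildx build --platform linux/arm64 --provenance=false \
  -t "$PROJECT_NAME" --load .
```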
Create an ECR repository to hold the image:

```bash
aws ecr create-repository --repository-name "$PROJECT_NAME" --region "$AWS_REGION"
```
Authenticate Docker with your ECR registry:

```bash
aws ecr get-login-password --region "$AWS_REGION" | \
  docker login --username AWS --password-stdin \
  "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"
```
Tag the image with the ECR repository URI and push it:

```bash
docker tag "$PROJECT_NAME":latest \
  "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROJECT_NAME":latest
docker push \
  "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROJECT_NAME":latest
```
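To confirm the push succeeded, you can list the image tags now in the repository:

```shell
# Should include "latest" among the returned image tags
aws ecr describe-images --repository-name "$PROJECT_NAME" \
  --region "$AWS_REGION" --query 'imageDetails[].imageTags'
```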
In the AWS Lambda console, choose **Create function** and select **Container image**, then configure:

- **Function name**: Enter a name (e.g. `mastra-app`)
- **Container image URI**: Click **Browse images**, select your ECR repository, and choose the `latest` tag
- **Architecture**: Select **arm64**
Click **Create function** to create the Lambda function.
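The same function can be created from the CLI. A sketch — `LAMBDA_ROLE_ARN` is a placeholder for an execution role you've already created:

```shell
# $LAMBDA_ROLE_ARN is a placeholder for an existing Lambda execution role
aws lambda create-function \
  --function-name mastra-app \
  --package-type Image \
  --code ImageUri="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROJECT_NAME:latest" \
  --architectures arm64 \
  --role "$LAMBDA_ROLE_ARN" \
  --region "$AWS_REGION"
```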
In the function's **General configuration**, set:

- **Memory**: 512 MB (adjust based on your application's needs)
- **Timeout**: 1 minute or higher (the default 3 seconds is too low for most Mastra applications)
Click **Save**.
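If you're scripting the deployment instead of clicking through the console, the equivalent CLI call is:

```shell
# Raise memory and timeout for the Mastra server
aws lambda update-function-configuration \
  --function-name mastra-app \
  --memory-size 512 \
  --timeout 60 \
  --region "$AWS_REGION"
```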
Under **Configuration → Environment variables**, add the variables your application needs:

- `OPENAI_API_KEY`: Your OpenAI API key (if using OpenAI)
- `ANTHROPIC_API_KEY`: Your Anthropic API key (if using Anthropic)
- Other provider-specific API keys as needed (e.g. `TURSO_DATABASE_URL` and `TURSO_AUTH_TOKEN` if using LibSQL with Turso)
Click **Save**.
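The same environment variables can be set from the CLI (the values shown are placeholders; note this call replaces the function's entire environment, so include every variable):

```shell
# Placeholder values — substitute your real keys.
# --environment replaces all existing variables, so list them all.
aws lambda update-function-configuration \
  --function-name mastra-app \
  --environment "Variables={OPENAI_API_KEY=your-key,TURSO_DATABASE_URL=your-url,TURSO_AUTH_TOKEN=your-token}" \
  --region "$AWS_REGION"
```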
Under **Configuration → Function URL**, create a function URL:

- **Auth type**: Select **NONE** for public access
- Under **Additional settings**, check **Configure cross-origin resource sharing (CORS)**, then configure:
- **Allow origin**: `*`
- **Allow headers**: `content-type` (`x-amzn-request-context` is also required when used with services like CloudFront/API Gateway)
- **Allow methods**: `*`
Click **Save**.
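From the CLI, the function URL and its public-invoke permission can be created like this (a sketch of the console steps above):

```shell
# Create the function URL with open CORS for testing
aws lambda create-function-url-config \
  --function-name mastra-app \
  --auth-type NONE \
  --cors '{"AllowOrigins":["*"],"AllowHeaders":["content-type"],"AllowMethods":["*"]}' \
  --region "$AWS_REGION"

# Grant public permission to invoke the function URL
aws lambda add-permission \
  --function-name mastra-app \
  --action lambda:InvokeFunctionUrl \
  --principal "*" \
  --function-url-auth-type NONE \
  --statement-id FunctionURLAllowPublicAccess \
  --region "$AWS_REGION"
```

Once created, you can smoke-test the deployment with `curl "<your-function-url>/api"`.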
:::warning
For production deployments:

- Set up [authentication](/docs/server/auth) before exposing your endpoints publicly
- Restrict CORS origins to your trusted domains
- Use AWS IAM roles for secure access to other AWS services
- Store sensitive environment variables in AWS Secrets Manager or Parameter Store
:::
This guide provides a quickstart for deploying Mastra to AWS Lambda. For production workloads, consider enabling CloudWatch monitoring for your Lambda function, setting up AWS X-Ray for distributed tracing, and configuring provisioned concurrency for consistent performance.
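As an example of the last point, provisioned concurrency can be configured from the CLI. A sketch — it cannot target `$LATEST`, so you'd first publish a version and point an alias (here named `live`, a placeholder) at it:

```shell
# Provisioned concurrency requires a published version or alias
VERSION=$(aws lambda publish-version --function-name mastra-app \
  --query Version --output text)
aws lambda create-alias --function-name mastra-app \
  --name live --function-version "$VERSION"

# Keep two warm execution environments ready
aws lambda put-provisioned-concurrency-config \
  --function-name mastra-app \
  --qualifier live \
  --provisioned-concurrent-executions 2
```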