# @mastra/sentry

Sentry AI Observability exporter for Mastra applications.

## Installation

```bash
npm install @mastra/sentry
```
## Configuration

The exporter automatically reads credentials from environment variables:

```bash
# Required
SENTRY_DSN=https://[email protected]/...

# Optional
SENTRY_ENVIRONMENT=production
SENTRY_RELEASE=1.0.0
```
## Usage

```typescript
import { Mastra } from '@mastra/core';
import { SentryExporter } from '@mastra/sentry';

const mastra = new Mastra({
  // ... other Mastra options
  observability: {
    configs: {
      sentry: {
        serviceName: 'my-service',
        exporters: [new SentryExporter()],
      },
    },
  },
});
```
You can also pass credentials directly:

```typescript
import { Mastra } from '@mastra/core';
import { SentryExporter } from '@mastra/sentry';

const mastra = new Mastra({
  // ... other Mastra options
  observability: {
    configs: {
      sentry: {
        serviceName: 'my-service',
        exporters: [
          new SentryExporter({
            dsn: 'https://[email protected]/...',
            environment: 'production', // Optional - deployment environment
            tracesSampleRate: 1.0, // Optional - send 100% of transactions to Sentry
            release: '1.0.0', // Optional - version of your code deployed
          }),
        ],
      },
    },
  },
});
```
## Options

| Option | Type | Description |
|---|---|---|
| `dsn` | `string` | Data Source Name; tells the SDK where to send events. Defaults to the `SENTRY_DSN` env var |
| `environment` | `string` | Deployment environment (enables filtering issues and alerts by environment). Defaults to the `SENTRY_ENVIRONMENT` env var, or `'production'` |
| `tracesSampleRate` | `number` | Fraction of transactions sent to Sentry (`0.0` = 0%, `1.0` = 100%). Defaults to `1.0` |
| `release` | `string` | Version of your deployed code (helps identify regressions and track deployments). Defaults to the `SENTRY_RELEASE` env var |
| `options` | `object` | Additional Sentry SDK options (`integrations`, `beforeSend`, etc.) |
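The `options` field forwards additional settings to the underlying Sentry SDK. As a minimal sketch of how a `beforeSend` hook could be used to drop noisy events (the event shape here is simplified for illustration; the real Sentry SDK type is richer):

```typescript
// Simplified event shape for illustration only.
type SentryEvent = { message?: string };

// Drop health-check noise before it leaves the process.
// Returning null tells the Sentry SDK to discard the event.
function beforeSend(event: SentryEvent): SentryEvent | null {
  if (event.message?.includes('health-check')) {
    return null;
  }
  return event;
}

// Forwarded through the exporter's `options` field, e.g.:
// new SentryExporter({ options: { beforeSend } });
```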
## Span Mapping

The exporter maps Mastra span types to Sentry operations:

- `MODEL_GENERATION` spans include token usage, model parameters, and streaming info
- `TOOL_CALL` and `MCP_TOOL_CALL` spans track tool executions
- `WORKFLOW_RUN` and `WORKFLOW_STEP` spans track workflow execution

| Mastra SpanType | Sentry Operation | Span Name Pattern | Notes |
|---|---|---|---|
| `AGENT_RUN` | `gen_ai.invoke_agent` | `invoke_agent {agent}` | Accumulates tokens from the child `MODEL_GENERATION` span |
| `MODEL_GENERATION` | `gen_ai.chat` | `chat {model} [stream]` | Contains aggregated streaming data |
| `MODEL_STEP` | (skipped) | - | Skipped to simplify trace hierarchy |
| `MODEL_CHUNK` | (skipped) | - | Too granular; data aggregated in `MODEL_GENERATION` |
| `TOOL_CALL` | `gen_ai.execute_tool` | `execute_tool {tool}` | |
| `MCP_TOOL_CALL` | `gen_ai.execute_tool` | `execute_tool {tool}` | |
| `WORKFLOW_RUN` | `workflow.run` | `workflow` | |
| `WORKFLOW_STEP` | `workflow.step` | `step` | |
| `WORKFLOW_CONDITIONAL` | `workflow.conditional` | `step` | |
| `WORKFLOW_CONDITIONAL_EVAL` | `workflow.conditional` | `step` | |
| `WORKFLOW_PARALLEL` | `workflow.parallel` | `step` | |
| `WORKFLOW_LOOP` | `workflow.loop` | `step` | |
| `WORKFLOW_SLEEP` | `workflow.sleep` | `step` | |
| `WORKFLOW_WAIT_EVENT` | `workflow.wait` | `step` | |
| `PROCESSOR_RUN` | `ai.processor` | `step` | |
| `GENERIC` | `ai.span` | `span` | |
## Span Attributes

Common attributes (all spans):

- `sentry.origin`: `auto.ai.mastra` (identifies spans from Mastra)
- `ai.span.type`: Mastra span type (e.g., `model_generation`, `tool_call`)

For `MODEL_GENERATION` and `MODEL_STEP` spans:

- `gen_ai.operation.name`: `chat`
- `gen_ai.system`: Model provider (e.g., `openai`, `anthropic`)
- `gen_ai.request.model`: Model identifier (e.g., `gpt-4`)
- `gen_ai.request.messages`: Input messages/prompts (JSON)
- `gen_ai.response.text`: Output text response
- `gen_ai.usage.input_tokens`: Input token count
- `gen_ai.usage.output_tokens`: Output token count
- `gen_ai.usage.cache_read_input_tokens`: Cached input tokens
- `gen_ai.usage.cache_write_input_tokens`: Cache write tokens
- `gen_ai.usage.reasoning_tokens`: Reasoning tokens (for models like o1)
- `gen_ai.request.temperature`: Temperature parameter
- `gen_ai.request.max_tokens`: Max tokens parameter
- `gen_ai.request.top_p`, `top_k`, `frequency_penalty`, `presence_penalty`: Other parameters
- `gen_ai.request.stream`: Whether streaming was requested
- `gen_ai.response.streaming`: Whether the response was streamed
- `gen_ai.response.tool_calls`: Tool calls made during generation (JSON array)
- `gen_ai.completion_start_time`: Time the first token arrived (for TTFT calculation)

For `TOOL_CALL` spans:

- `gen_ai.operation.name`: `ai.toolCall`
- `gen_ai.tool.name`: Tool identifier
- `gen_ai.tool.type`: `function`
- `gen_ai.tool.call.id`: Tool call ID
- `gen_ai.tool.input`: Tool input (JSON)
- `gen_ai.tool.output`: Tool output (JSON)
- `gen_ai.tool.description`: Tool description
- `tool.success`: Whether the tool call succeeded

For `AGENT_RUN` spans:
- `gen_ai.operation.name`: `invoke_agent`
- `gen_ai.agent.name`: Agent identifier
- `gen_ai.pipeline.name`: Agent name (for the Sentry AI view)
- `gen_ai.agent.instructions`: Agent instructions
- `gen_ai.agent.prompt`: Agent prompt
- `gen_ai.request.messages`: Input message (normalized)
- `gen_ai.request.available_tools`: Available tools (JSON array)
- `gen_ai.response.model`: Model from the child `MODEL_GENERATION` span
- `gen_ai.response.text`: Output text from the child `MODEL_GENERATION` span
- `gen_ai.usage.input_tokens`: Input tokens from the child `MODEL_GENERATION` span
- `gen_ai.usage.output_tokens`: Output tokens from the child `MODEL_GENERATION` span
- `gen_ai.usage.total_tokens`: Total tokens from the child `MODEL_GENERATION` span
- `gen_ai.usage.cache_read_input_tokens`: Cached input tokens from the child `MODEL_GENERATION` span
- `gen_ai.usage.cache_write_input_tokens`: Cache write tokens from the child `MODEL_GENERATION` span
- `gen_ai.usage.reasoning_tokens`: Reasoning tokens from the child `MODEL_GENERATION` span
- `agent.max_steps`: Maximum steps allowed
- `agent.available_tools`: Available tools (comma-separated)

## Complete Example

```typescript
import { Mastra, Agent } from '@mastra/core';
import { SentryExporter } from '@mastra/sentry';
import { openai } from '@ai-sdk/openai';

const mastra = new Mastra({
  observability: {
    configs: {
      sentry: {
        serviceName: 'my-ai-app',
        exporters: [
          new SentryExporter({
            dsn: process.env.SENTRY_DSN,
            environment: process.env.NODE_ENV,
            tracesSampleRate: 0.1, // Send 10% of transactions to Sentry (recommended for high-load backends)
          }),
        ],
      },
    },
  },
});

const agent = new Agent({
  name: 'customer-support',
  instructions: 'Help customers with their questions',
  model: openai('gpt-4'),
  mastra,
});

// All agent executions will be traced in Sentry
const result = await agent.generate('How do I reset my password?');
```
## Sampling

Set `tracesSampleRate` to `1.0` while testing. In production, lower it to send fewer transactions to Sentry:

```typescript
new SentryExporter({
  tracesSampleRate: 0.1, // Send only 10% of transactions (recommended for high-load applications)
});
```

Note: to disable tracing entirely, omit `tracesSampleRate` altogether rather than setting it to `0`.
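Because omitting `tracesSampleRate` behaves differently from setting it to `0`, per-environment sampling is easiest to express by building the config conditionally. A minimal sketch (the environment names and rates are illustrative, not part of the exporter API):

```typescript
// Choose a sample rate per environment. Returning an empty object
// omits the tracesSampleRate key entirely, leaving it unset.
function samplingFor(env: string | undefined): { tracesSampleRate?: number } {
  if (env === 'production') return { tracesSampleRate: 0.1 }; // 10% in production
  if (env === 'test') return {}; // no key at all: tracing left unconfigured
  return { tracesSampleRate: 1.0 }; // everything during development
}

// Spread into the exporter config, e.g.:
// new SentryExporter({ dsn: process.env.SENTRY_DSN, ...samplingFor(process.env.NODE_ENV) });
```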