# @mastra/otel-exporter

Export Mastra traces to any OpenTelemetry-compatible observability platform.
> ⚠️ **Important:** This package requires you to install an additional exporter package based on your provider. Each provider section below includes the specific installation command.
## Zero-Config Setup

All providers support zero-config setup via environment variables. Set the appropriate variables and the exporter will automatically use them:

| Provider | Environment Variables |
|---|---|
| Dash0 | `DASH0_API_KEY` (required), `DASH0_ENDPOINT` (required), `DASH0_DATASET` (optional) |
| SigNoz | `SIGNOZ_API_KEY` (required), `SIGNOZ_REGION` (optional), `SIGNOZ_ENDPOINT` (optional) |
| New Relic | `NEW_RELIC_LICENSE_KEY` (required), `NEW_RELIC_ENDPOINT` (optional) |
| Traceloop | `TRACELOOP_API_KEY` (required), `TRACELOOP_DESTINATION_ID` (optional), `TRACELOOP_ENDPOINT` (optional) |
| Laminar | `LMNR_PROJECT_API_KEY` (required), `LAMINAR_ENDPOINT` (optional) |
## Dash0

### Installation

```bash
# Dash0 uses gRPC protocol, requires both packages
npm install @mastra/otel-exporter @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js
```

### Environment Variables

```bash
# Required
DASH0_API_KEY=your-api-key
DASH0_ENDPOINT=ingress.us-west-2.aws.dash0.com:4317

# Optional
DASH0_DATASET=production
```
### Configuration

```typescript
import { OtelExporter } from '@mastra/otel-exporter';
import { Mastra } from '@mastra/core/mastra';

const mastra = new Mastra({
  // ...
  observability: {
    configs: {
      otel: {
        serviceName: 'my-service',
        exporters: [new OtelExporter({ provider: { dash0: {} } })],
      },
    },
  },
});
```
Or pass credentials explicitly:

```typescript
new OtelExporter({
  provider: {
    dash0: {
      apiKey: 'your-api-key',
      endpoint: 'ingress.us-west-2.aws.dash0.com:4317',
      dataset: 'production', // Optional
    },
  },
});
```
> **Note:** Get your endpoint from your Dash0 dashboard. It should be in the format `ingress.{region}.aws.dash0.com:4317`.
## SigNoz

### Installation

```bash
npm install @mastra/otel-exporter @opentelemetry/exporter-trace-otlp-proto
```

### Environment Variables

```bash
# Required
SIGNOZ_API_KEY=your-api-key

# Optional
SIGNOZ_REGION=us # 'us' | 'eu' | 'in'
SIGNOZ_ENDPOINT=https://my-signoz.example.com # For self-hosted
```
### Configuration

```typescript
import { OtelExporter } from '@mastra/otel-exporter';
import { Mastra } from '@mastra/core/mastra';

const mastra = new Mastra({
  // ...
  observability: {
    configs: {
      otel: {
        serviceName: 'my-service',
        exporters: [new OtelExporter({ provider: { signoz: {} } })],
      },
    },
  },
});
```
Or pass credentials explicitly:

```typescript
new OtelExporter({
  provider: {
    signoz: {
      apiKey: 'your-api-key',
      region: 'us', // Optional: 'us' | 'eu' | 'in'
      endpoint: 'https://my-signoz.example.com', // Optional: for self-hosted
    },
  },
});
```
## New Relic

### Installation

```bash
npm install @mastra/otel-exporter @opentelemetry/exporter-trace-otlp-proto
```

### Environment Variables

```bash
# Required
NEW_RELIC_LICENSE_KEY=your-license-key

# Optional
NEW_RELIC_ENDPOINT=https://otlp.eu01.nr-data.net # For EU region
```
### Configuration

```typescript
import { OtelExporter } from '@mastra/otel-exporter';
import { Mastra } from '@mastra/core/mastra';

const mastra = new Mastra({
  // ...
  observability: {
    configs: {
      otel: {
        serviceName: 'my-service',
        exporters: [new OtelExporter({ provider: { newrelic: {} } })],
      },
    },
  },
});
```
Or pass credentials explicitly:

```typescript
new OtelExporter({
  provider: {
    newrelic: {
      apiKey: 'your-license-key',
      endpoint: 'https://otlp.eu01.nr-data.net', // Optional: for EU region
    },
  },
});
```
## Traceloop

### Installation

```bash
# Traceloop uses HTTP/JSON protocol
npm install @mastra/otel-exporter @opentelemetry/exporter-trace-otlp-http
```

### Environment Variables

```bash
# Required
TRACELOOP_API_KEY=your-api-key

# Optional
TRACELOOP_DESTINATION_ID=my-destination
TRACELOOP_ENDPOINT=https://custom.traceloop.com
```
### Configuration

```typescript
import { OtelExporter } from '@mastra/otel-exporter';
import { Mastra } from '@mastra/core/mastra';

const mastra = new Mastra({
  // ...
  observability: {
    configs: {
      otel: {
        serviceName: 'my-service',
        exporters: [new OtelExporter({ provider: { traceloop: {} } })],
      },
    },
  },
});
```
Or pass credentials explicitly:

```typescript
new OtelExporter({
  provider: {
    traceloop: {
      apiKey: 'your-api-key',
      destinationId: 'my-destination', // Optional
      endpoint: 'https://custom.traceloop.com', // Optional
    },
  },
});
```
## Laminar

### Installation

```bash
npm install @mastra/otel-exporter @opentelemetry/exporter-trace-otlp-proto
```

### Environment Variables

```bash
# Required
LMNR_PROJECT_API_KEY=your-api-key

# Optional
LAMINAR_ENDPOINT=https://api.lmnr.ai/v1/traces
```
### Configuration

```typescript
import { OtelExporter } from '@mastra/otel-exporter';
import { Mastra } from '@mastra/core/mastra';

const mastra = new Mastra({
  // ...
  observability: {
    configs: {
      otel: {
        serviceName: 'my-service',
        exporters: [new OtelExporter({ provider: { laminar: {} } })],
      },
    },
  },
});
```
Or pass credentials explicitly:

```typescript
new OtelExporter({
  provider: {
    laminar: {
      apiKey: 'your-api-key',
      endpoint: 'https://api.lmnr.ai/v1/traces', // Optional
    },
  },
});
```
## Custom Collectors

### Zipkin

```bash
npm install @mastra/otel-exporter @opentelemetry/exporter-zipkin
```

```typescript
import { OtelExporter } from '@mastra/otel-exporter';
import { Mastra } from '@mastra/core/mastra';

const mastra = new Mastra({
  // ...
  observability: {
    configs: {
      otel: {
        serviceName: 'mastra-service',
        exporters: [
          new OtelExporter({
            provider: {
              custom: {
                endpoint: 'http://localhost:9411/api/v2/spans',
                protocol: 'zipkin',
              },
            },
          }),
        ],
      },
    },
  },
});
```
### Choosing a Protocol

Choose the appropriate exporter based on your collector's protocol:

```bash
# For HTTP/JSON: Human-readable, larger payload, good for debugging
npm install @mastra/otel-exporter @opentelemetry/exporter-trace-otlp-http

# For HTTP/Protobuf: Binary format, smaller payload, recommended for production
npm install @mastra/otel-exporter @opentelemetry/exporter-trace-otlp-proto

# For gRPC: Bidirectional streaming, lowest latency, requires gRPC support
npm install @mastra/otel-exporter @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js

# For Zipkin: Zipkin-specific format
npm install @mastra/otel-exporter @opentelemetry/exporter-zipkin
```

Most providers recommend HTTP/Protobuf for production use.
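The protocol-to-package pairing above can be captured in a small lookup. The package names come directly from the install commands in this section; the helper itself is illustrative and not part of the `@mastra/otel-exporter` API:

```typescript
// Maps each supported protocol to the exporter package(s) it requires.
// Package names are taken from the install commands above; this helper
// is a sketch, not part of @mastra/otel-exporter.
type Protocol = 'http/json' | 'http/protobuf' | 'grpc' | 'zipkin';

const requiredPackages: Record<Protocol, string[]> = {
  'http/json': ['@opentelemetry/exporter-trace-otlp-http'],
  'http/protobuf': ['@opentelemetry/exporter-trace-otlp-proto'],
  'grpc': ['@opentelemetry/exporter-trace-otlp-grpc', '@grpc/grpc-js'],
  'zipkin': ['@opentelemetry/exporter-zipkin'],
};

// Builds the full npm install command for a given protocol.
function installCommand(protocol: Protocol): string {
  return `npm install @mastra/otel-exporter ${requiredPackages[protocol].join(' ')}`;
}
```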
### Generic OTLP Collector

```typescript
import { OtelExporter } from '@mastra/otel-exporter';
import { Mastra } from '@mastra/core/mastra';

const mastra = new Mastra({
  // ...
  observability: {
    configs: {
      otel: {
        serviceName: 'mastra-service',
        exporters: [
          new OtelExporter({
            provider: {
              custom: {
                endpoint: 'https://your-collector.example.com/v1/traces', // Required at runtime
                protocol: 'http/protobuf', // Optional: 'http/json' | 'http/protobuf' | 'grpc' | 'zipkin'
                headers: { // Optional
                  'x-api-key': process.env.API_KEY,
                },
              },
            },
          }),
        ],
      },
    },
  },
});
```
Exporter dependencies are intentionally optional so you only install the protocol package your chosen provider actually needs. If you forget to install the required exporter, you'll get a helpful error message telling you exactly what to install.
### Endpoint Handling

- The exporter appends `/v1/traces` to the base endpoint (or provider-specific paths) where needed.
- Pass custom `headers` with standard HTTP headers; use lowercase header names (e.g. `authorization` instead of `Authorization`).

| Provider | Protocol | Endpoint Format | Notes |
|---|---|---|---|
| Dash0 | gRPC | ingress.{region}.aws.dash0.com:4317 | Get from dashboard |
| SigNoz | HTTP/Protobuf | https://ingest.{region}.signoz.cloud:443/v1/traces | Cloud hosted |
| New Relic | HTTP/Protobuf | https://otlp.nr-data.net:443/v1/traces | US region |
| Traceloop | HTTP/JSON | https://api.traceloop.com/v1/traces | Default endpoint |
| Laminar | HTTP/Protobuf | https://api.lmnr.ai/v1/traces | Default endpoint |
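The path handling described above can be sketched as a small normalization step. This assumes the exporter simply appends `/v1/traces` when the base endpoint does not already end with it; the real logic may differ per provider:

```typescript
// Appends the standard OTLP traces path to a base endpoint unless it is
// already present. Illustrative sketch only; the actual exporter may use
// provider-specific paths.
function normalizeTracesEndpoint(base: string): string {
  const trimmed = base.replace(/\/+$/, ''); // drop trailing slashes
  return trimmed.endsWith('/v1/traces') ? trimmed : `${trimmed}/v1/traces`;
}
```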
## Configuration

```typescript
// Main configuration interface
interface OtelExporterConfig {
  // Provider configuration (discriminated union)
  provider?: ProviderConfig;

  // Export configuration
  timeout?: number;   // Export timeout in milliseconds (default: 30000)
  batchSize?: number; // Max spans per batch (default: 512)

  // Debug
  logLevel?: 'debug' | 'info' | 'warn' | 'error';
}
```
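The documented defaults (30000 ms timeout, 512 spans per batch) can be pictured as a normalization step applied to whatever you pass in. The `withDefaults` helper is hypothetical, shown only to make the defaults concrete:

```typescript
// Subset of the configuration interface relevant to export tuning.
interface ExportSettings {
  timeout?: number;   // Export timeout in milliseconds
  batchSize?: number; // Max spans per batch
}

// Applies the defaults documented above. Hypothetical helper for
// illustration; not exported by @mastra/otel-exporter.
function withDefaults(config: ExportSettings): Required<ExportSettings> {
  return {
    timeout: config.timeout ?? 30000,
    batchSize: config.batchSize ?? 512,
  };
}
```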
The OtelExporter uses OpenTelemetry's `BatchSpanProcessor` for efficient span export: spans are buffered and exported in batches of up to `batchSize` spans. This reduces network overhead and avoids blocking your application on every individual span.
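The batching behavior can be pictured with a minimal buffer that flushes once `batchSize` spans accumulate. This is a simplification of what `BatchSpanProcessor` does; the real processor also flushes on a timer and on shutdown:

```typescript
// Minimal batch buffer: collects spans and hands them to an export
// callback in groups of `batchSize`. A simplified sketch of OpenTelemetry's
// BatchSpanProcessor, which additionally flushes on a schedule.
class BatchBuffer<S> {
  private buffer: S[] = [];

  constructor(
    private batchSize: number,
    private exportBatch: (spans: S[]) => void,
  ) {}

  add(span: S): void {
    this.buffer.push(span);
    if (this.buffer.length >= this.batchSize) this.flush();
  }

  flush(): void {
    if (this.buffer.length === 0) return;
    this.exportBatch(this.buffer);
    this.buffer = [];
  }
}
```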
## Semantic Conventions

This exporter follows the OpenTelemetry Semantic Conventions for GenAI to ensure compatibility with observability platforms.

Spans are named following OTEL conventions:

- LLM calls: `chat {model}` or `tool_selection {model}`
- Tool executions: `tool.execute {tool_name}`
- Agent runs: `invoke_agent {agent_name}`
- Workflows: `workflow.{workflow_id}`

The exporter maps Mastra's tracing data to OTEL-compliant attributes:
**Model attributes:**

- `gen_ai.operation.name` - Operation type (chat, tool.execute, agent.run, workflow.run)
- `gen_ai.provider.name` - AI provider (openai, anthropic, etc.)
- `gen_ai.request.model` - Model identifier
- `gen_ai.usage.input_tokens` - Number of input tokens
- `gen_ai.usage.output_tokens` - Number of output tokens
- `gen_ai.usage.total_tokens` - Total token count
- `gen_ai.request.temperature` - Sampling temperature
- `gen_ai.request.max_tokens` - Maximum tokens to generate
- `gen_ai.request.top_p` - Top-p sampling parameter
- `gen_ai.request.top_k` - Top-k sampling parameter
- `gen_ai.response.finish_reasons` - Reason for completion
- `gen_ai.response.model` - Actual model used in response (may differ from request)
- `gen_ai.response.id` - Unique response identifier
- `gen_ai.prompt` - Input prompt (for Model spans)
- `gen_ai.completion` - Model output (for Model spans)
- `server.address` - Server address for the model endpoint
- `server.port` - Server port for the model endpoint

**Tool attributes:**

- `gen_ai.tool.name` - Tool identifier
- `gen_ai.tool.description` - Tool description
- `gen_ai.tool.success` - Whether tool execution succeeded
- `gen_ai.tool.input` - Tool input parameters
- `gen_ai.tool.output` - Tool execution result

**Agent attributes:**

- `gen_ai.agent.id` - Agent identifier
- `gen_ai.agent.name` - Human-readable agent name
- `gen_ai.conversation.id` - Conversation/thread/session identifier
- `agent.id` - Agent identifier (also included for compatibility)
- `agent.max_steps` - Maximum agent steps

**Workflow attributes:**

- `workflow.id` - Workflow identifier
- `workflow.status` - Workflow execution status

**Error attributes:**

- `error` - Boolean indicating an error occurred
- `error.type` - Error identifier
- `error.message` - Error description
- `error.domain` - Error domain/category

For enhanced observability, you can enable additional content attributes that capture detailed message data. These attributes may contain sensitive information and should only be enabled with proper consent and security considerations.
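As an illustration of this mapping, here is a hypothetical converter from a simplified model-call record to the OTEL GenAI attribute names listed above. The input shape (`ModelCall`) is an assumption for the example, not Mastra's actual internal span type:

```typescript
// Simplified stand-in for a model-call record; not Mastra's real type.
interface ModelCall {
  provider: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
}

// Maps the record onto the OTEL GenAI attribute names documented above.
function toOtelAttributes(call: ModelCall): Record<string, string | number> {
  return {
    'gen_ai.operation.name': 'chat',
    'gen_ai.provider.name': call.provider,
    'gen_ai.request.model': call.model,
    'gen_ai.usage.input_tokens': call.inputTokens,
    'gen_ai.usage.output_tokens': call.outputTokens,
    'gen_ai.usage.total_tokens': call.inputTokens + call.outputTokens,
  };
}
```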
To enable content attributes:

```typescript
new OtelExporter({
  provider: {
    /* your provider config */
  },
  genAiConventions: {
    includeContentAttributes: true, // Default: false
  },
});
```
When enabled, the following additional attributes are captured:

- `gen_ai.input.messages` - Structured input messages in OpenTelemetry format
- `gen_ai.output.messages` - Structured output messages in OpenTelemetry format
- `gen_ai.system_instructions` - Agent system instructions/prompts

These attributes convert Mastra's message format to the OpenTelemetry GenAI standard message schema, providing detailed conversation history and tool interactions.

**Privacy Considerations:** content attributes can include user messages, system prompts, and tool data. Enable them only with proper consent and appropriate data-handling safeguards.
## Troubleshooting

If you forget to install the required exporter package, you'll get a clear error message:

```
HTTP/Protobuf exporter is not installed (required for signoz).
To use HTTP/Protobuf export, install the required package:

npm install @opentelemetry/exporter-trace-otlp-proto
# or
pnpm add @opentelemetry/exporter-trace-otlp-proto
# or
yarn add @opentelemetry/exporter-trace-otlp-proto
```
## License

Apache 2.0