src/platform/packages/shared/kbn-synthtrace/EXAMPLES.md
This document contains all usage examples for @kbn/synthtrace. See the README.md for general documentation.
```ts
import { service, timerange, toElasticsearchOutput } from '@kbn/synthtrace';

const instance = service({ name: 'synth-go', environment: 'production', agentName: 'go' }).instance(
  'instance-a'
);

const from = new Date('2021-01-01T12:00:00.000Z').getTime();
const to = new Date('2021-01-01T13:00:00.000Z').getTime();
```
```ts
const traceEvents = timerange(from, to)
  .interval('1m')
  .rate(10)
  .flatMap((timestamp) =>
    instance
      .transaction({ transactionName: 'GET /api/product/list' })
      .timestamp(timestamp)
      .duration(1000)
      .success()
      .children(
        instance
          .span('GET apm-*/_search', 'db', 'elasticsearch')
          .timestamp(timestamp + 50)
          .duration(900)
          .destination('elasticsearch')
          .success()
      )
      .serialize()
  );
```
```ts
const metricsets = timerange(from, to)
  .interval('30s')
  .rate(1)
  .flatMap((timestamp) =>
    instance
      .appMetrics({
        'system.memory.actual.free': 800,
        'system.memory.total': 1000,
        'system.cpu.total.norm.pct': 0.6,
        'system.process.cpu.total.norm.pct': 0.7,
      })
      .timestamp(timestamp)
      .serialize()
  );
```
```ts
const esEvents = toElasticsearchOutput(traceEvents.concat(metricsets));
```
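The `timerange(...).interval(...).rate(...)` chain above controls at which timestamps events are emitted: one bucket per interval, `rate` events per bucket. As a rough illustration of that expansion (a sketch only; `expandTimerange` is a hypothetical helper, not the actual @kbn/synthtrace internals):

```typescript
// Sketch: expand a [from, to) range into event timestamps, emitting
// `rate` events at the start of each interval-sized bucket.
function expandTimerange(from: number, to: number, intervalMs: number, rate: number): number[] {
  const timestamps: number[] = [];
  for (let bucket = from; bucket < to; bucket += intervalMs) {
    for (let i = 0; i < rate; i++) {
      timestamps.push(bucket);
    }
  }
  return timestamps;
}

// One hour at a 1-minute interval and rate 10 yields 60 * 10 timestamps.
const stamps = expandTimerange(
  Date.parse('2021-01-01T12:00:00.000Z'),
  Date.parse('2021-01-01T13:00:00.000Z'),
  60_000,
  10
);
```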
@kbn/synthtrace can also automatically generate transaction metrics, span destination metrics and transaction breakdown metrics based on the generated trace events:
```ts
import {
  getTransactionMetrics,
  getSpanDestinationMetrics,
  getBreakdownMetrics,
} from '@kbn/synthtrace';

const esEvents = toElasticsearchOutput([
  ...traceEvents,
  ...getTransactionMetrics(traceEvents),
  ...getSpanDestinationMetrics(traceEvents),
  ...getBreakdownMetrics(traceEvents),
]);
```
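Conceptually, these helpers roll the raw trace events up into metric documents bucketed by time and dimension. A simplified sketch of the idea (illustrative only; `aggregateTransactionMetrics` is a hypothetical stand-in, and the real helpers emit full APM metric documents):

```typescript
// Minimal shape of a raw transaction event for this sketch.
interface TraceEvent {
  '@timestamp': number;
  'transaction.name': string;
  'transaction.duration.us': number;
}

// Sketch: group raw transaction events into one metric document per
// minute and transaction name, carrying a count and an average duration.
function aggregateTransactionMetrics(events: TraceEvent[]) {
  const buckets = new Map<string, { count: number; totalDuration: number }>();
  for (const event of events) {
    const minute = Math.floor(event['@timestamp'] / 60_000) * 60_000;
    const key = `${minute}:${event['transaction.name']}`;
    const bucket = buckets.get(key) ?? { count: 0, totalDuration: 0 };
    bucket.count += 1;
    bucket.totalDuration += event['transaction.duration.us'];
    buckets.set(key, bucket);
  }
  return Array.from(buckets.entries()).map(([key, { count, totalDuration }]) => ({
    key,
    doc_count: count,
    'transaction.duration.avg.us': totalDuration / count,
  }));
}
```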
For live data ingestion:

```
node scripts/synthtrace simple_trace.ts --target=http://admin:changeme@localhost:9200 --live
```

For a fixed time window:

```
node scripts/synthtrace simple_trace.ts --target=http://admin:changeme@localhost:9200 --from=now-24h --to=now
```
When running the CLI locally, you can ingest data into a locally running Elasticsearch and Kibana instance with:

```
node scripts/synthtrace simple_trace.ts
```

This assumes both Elasticsearch and Kibana are running on the default localhost ports with default credentials.
If the Kibana URL differs from the Elasticsearch URL in protocol or hostname, explicitly pass the `--kibana` option to the CLI along with `--target`. For example, when running Elasticsearch (with SSL) and Kibana (without SSL) locally in serverless mode:

```
node scripts/synthtrace simple_trace.ts --target=https://elastic_serverless:changeme@localhost:9200 --kibana=http://elastic_serverless:changeme@localhost:5601
```
If you are ingesting data into Elastic Cloud, pass the `--target` option with the Elastic Cloud URL:

```
node scripts/synthtrace simple_trace.ts --target=https://<username>:<password>@your-cloud-cluster.kb.us-west2.gcp.elastic-cloud.com/
```
You can use a Kibana API key for authentication by passing the `--apiKey` option:

```
node scripts/synthtrace simple_trace.ts --target=https://my-deployment.es.us-central1.gcp.elastic.cloud --apiKey="your-api-key"
```
Synthtrace includes a wide variety of scenarios for generating different types of synthetic data. Each scenario is designed to test specific features or use cases in Kibana.
### simple_trace

Generates basic APM trace data with transactions and spans.

Options:

- `numServices` (number, default: 3): Number of services to generate
- `pipeline` (string): APM pipeline to use

Usage:

```
node scripts/synthtrace simple_trace --live
node scripts/synthtrace simple_trace --from=now-24h --to=now
```

Example with options:

```
node scripts/synthtrace simple_trace --scenarioOpts.numServices=5 --live
```
### distributed_trace

Generates distributed traces across multiple services with parent-child relationships.

Usage:

```
node scripts/synthtrace distributed_trace --live
```

### distributed_trace_long

Generates long-running distributed traces with deep call stacks.

Usage:

```
node scripts/synthtrace distributed_trace_long --live
```

### many_services

Generates data for a large number of services.

Usage:

```
node scripts/synthtrace many_services --live
```

### many_instances

Generates data for multiple instances of the same service.

Usage:

```
node scripts/synthtrace many_instances --live
```

### many_transactions

Generates a high volume of transactions.

Usage:

```
node scripts/synthtrace many_transactions --live
```

### many_errors

Generates a high volume of APM error documents with varied messages and types.

Usage:

```
node scripts/synthtrace many_errors --live
```

### many_dependencies

Generates traces with many service dependencies.

Usage:

```
node scripts/synthtrace many_dependencies --live
```

### service_map

Generates data for testing the service map visualization with predefined service relationships.

Usage:

```
node scripts/synthtrace service_map --live
```

### service_map_oom

Generates data designed to test the service map with out-of-memory scenarios.

Usage:

```
node scripts/synthtrace service_map_oom --live
```

### diagnostic_service_map

Generates diagnostic data for service map testing.

Usage:

```
node scripts/synthtrace diagnostic_service_map --live
```
### high_throughput

Generates high-throughput APM data for performance testing.

Usage:

```
node scripts/synthtrace high_throughput --live
```

### low_throughput

Generates low-throughput APM data.

Usage:

```
node scripts/synthtrace low_throughput --live
```

### variance

Generates data with high variance in metrics and durations.

Usage:

```
node scripts/synthtrace variance --live
```

### mobile

Generates mobile APM data (iOS/Android).

Usage:

```
node scripts/synthtrace mobile --live
```

### trace_with_orphan_items

Generates traces with orphan spans/transactions.

Usage:

```
node scripts/synthtrace trace_with_orphan_items --live
```

### trace_with_service_names_with_slashes

Generates traces with service names containing slashes.

Usage:

```
node scripts/synthtrace trace_with_service_names_with_slashes --live
```

### span_links

Generates traces with span links.

Usage:

```
node scripts/synthtrace span_links --live
```

### services_without_transactions

Generates services that have metrics but no transactions.

Usage:

```
node scripts/synthtrace services_without_transactions --live
```

### other_bucket_group

Generates data for testing "other" bucket grouping.

Usage:

```
node scripts/synthtrace other_bucket_group --live
```

### service_summary_field_version_dependent

Generates data for testing service summary field version dependencies.

Usage:

```
node scripts/synthtrace service_summary_field_version_dependent --live
```

### apm_ml_anomalies

Generates data designed to trigger ML anomaly detection.

Usage:

```
node scripts/synthtrace apm_ml_anomalies --live
```
### simple_logs

Generates simple, structured log documents with varying log levels.

Usage:

```
node scripts/synthtrace simple_logs --type=log --live
node scripts/synthtrace simple_logs --type=log --from=now-24h --to=now
```

### sample_logs

Generates sample logs from various systems (Linux, Windows, Android).

Options:

- `rpm` (number): Requests per minute
- `systems` (string|string[]): Systems to generate logs for
- `streamType` ('classic'|'wired'): Stream type

Usage:

```
node scripts/synthtrace sample_logs --type=log --live
```

Example with options:

```
node scripts/synthtrace sample_logs --type=log --scenarioOpts.rpm=100 --live
```
### logs_and_metrics

Generates a combination of log documents and APM metrics for several services with a fixed 50% error rate.

Usage:

```
node scripts/synthtrace logs_and_metrics --type=log --live
```
### logs_and_metrics_custom_error_rate

Generates a combination of log documents and APM metrics with configurable error and debug rates.

Options:

- `errorRate` (number, default: 0.5): Error rate between 0 and 1 (minimum 1% if provided)
  - 0.0 = 0% errors (all successful)
  - 0.1 = 10% errors
  - 0.5 = 50% errors
  - 1.0 = 100% errors (all failures)
- `debugRate` (number, default: 0): Debug log rate between 0 and 1 (minimum 1% if provided)
- `numServices` (number, default: 3): Number of services to generate
- `transactionsPerMinute` (number, default: 360): Total transactions per minute
- `logsPerMinute` (number): Override calculated logs per minute
- `interval` (string, default: '1m'): Time interval for rate calculation
  - '1s' = per second (better for live mode)
  - '1m' = per minute (default)
  - '5m' = per 5 minutes
- `isLogsDb` (boolean, default: false): Use LogsDB format

Accuracy Notes:

The scenario uses advanced algorithms to generate accurate log distributions; to override the calculated rate, set `logsPerMinute` manually.

Usage:

```
node scripts/synthtrace logs_and_metrics_custom_error_rate --live --scenarioOpts.errorRate=0.1
node scripts/synthtrace logs_and_metrics_custom_error_rate --live --scenarioOpts='{"errorRate":0.2,"debugRate":0.1}'
```
Examples with different configurations:

```
# Generate with 10% error rate and 5% debug rate
node scripts/synthtrace logs_and_metrics_custom_error_rate --type=log --live --scenarioOpts='{"errorRate":0.1,"debugRate":0.05}'

# Generate with custom service count and transaction rate
node scripts/synthtrace logs_and_metrics_custom_error_rate --type=log --live --scenarioOpts='{"errorRate":0.2,"numServices":10,"transactionsPerMinute":72}'
```
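As a rough illustration of how an error rate maps onto generated outcomes per bucket (a sketch only; `outcomesForBucket` is a hypothetical helper, not the scenario's actual algorithm):

```typescript
// Sketch: deterministically mark `errorRate` of a bucket's
// transactions as failures and the rest as successes.
function outcomesForBucket(transactionCount: number, errorRate: number): Array<'success' | 'failure'> {
  const failures = Math.round(transactionCount * errorRate);
  return Array.from({ length: transactionCount }, (_, i) => (i < failures ? 'failure' : 'success'));
}
```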
### apache_logs

Generates Apache access and error logs.

Usage:

```
node scripts/synthtrace apache_logs --type=log --live
```

### kubernetes_logs

Generates Kubernetes container and audit logs.

Usage:

```
node scripts/synthtrace kubernetes_logs --type=log --live
```

### unstructured_logs

Generates unstructured log documents.

Usage:

```
node scripts/synthtrace unstructured_logs --type=log --live
```

### distributed_unstructured_logs

Generates distributed unstructured logs across multiple services.

Usage:

```
node scripts/synthtrace distributed_unstructured_logs --live --scenarioOpts.distribution=uniform
```

### failed_logs

Generates logs that represent failed operations.

Usage:

```
node scripts/synthtrace failed_logs --type=log --live
```

### degraded_logs

Generates degraded or malformed logs.

Usage:

```
node scripts/synthtrace degraded_logs --type=log --live
```

### slash_logs

Generates logs with slashes in field values.

Usage:

```
node scripts/synthtrace slash_logs --type=log --live
```

### simple_non_ecs_logs

Generates simple logs that don't follow ECS (Elastic Common Schema).

Usage:

```
node scripts/synthtrace simple_non_ecs_logs --type=log --live
```

### simple_otel_logs

Generates OpenTelemetry format logs.

Usage:

```
node scripts/synthtrace simple_otel_logs --type=log --live
```

### otel_logs_and_metrics_only

Generates OpenTelemetry logs and metrics only.

Usage:

```
node scripts/synthtrace otel_logs_and_metrics_only --type=log --live
```
### infra_hosts_with_apm_hosts

Generates infrastructure host metrics along with APM host data.

Usage:

```
node scripts/synthtrace infra_hosts_with_apm_hosts --live
```

### infra_docker_containers

Generates Docker container metrics.

Usage:

```
node scripts/synthtrace infra_docker_containers --live
```

### infra_k8s_containers

Generates Kubernetes container metrics.

Usage:

```
node scripts/synthtrace infra_k8s_containers --live
```

### infra_aws_rds

Generates AWS RDS infrastructure metrics.

Usage:

```
node scripts/synthtrace infra_aws_rds --live
```
### logs_traces_hosts

Generates a comprehensive set of correlated logs, APM traces, and host metrics.

Options:

- `numSpaces` (number, default: 1): Number of spaces
- `numServices` (number, default: 10): Number of services
- `numHosts` (number, default: 10): Number of hosts
- `numAgents` (number, default: 5): Number of agents
- `numDatasets` (number, default: 6): Number of datasets
- `datasets` (string[]): Custom list of datasets
- `degradedRatio` (number, default: 0.25): Percentage of malformed logs
- `numCustomFields` (number, default: 50): Number of custom fields per document
- `customFieldPrefix` (string, default: 'field'): Prefix for custom fields
- `logsInterval` (string, default: '1m'): Log generation interval
- `logsRate` (number, default: 1): Log generation rate
- `ingestHosts` (boolean, default: true): Whether to ingest host metrics
- `ingestTraces` (boolean, default: true): Whether to ingest traces
- `logsdb` (boolean, default: false): Use LogsDB format

Usage:

```
node scripts/synthtrace logs_traces_hosts --type=log --live
```

Example with options:

```
node scripts/synthtrace logs_traces_hosts --type=log --scenarioOpts='{"numServices":20,"numHosts":50}' --live
```
### kubernetes_logs_traces_pods

Generates a comprehensive set of correlated Kubernetes logs, APM traces, and Kubernetes pod/container metrics.

Options:

- `numServices` (number, default: 10): Number of Kubernetes services to generate
- `numPods` (number, default: 20): Number of Kubernetes pods to generate
- `numContainers` (number, default: 30): Number of Kubernetes containers to generate
- `numAgents` (number, default: 5): Number of agents
- `logsInterval` (string, default: '1m'): Log generation interval
- `logsRate` (number, default: 1): Log generation rate
- `ingestPods` (boolean, default: true): Whether to ingest pod metrics
- `ingestContainers` (boolean, default: true): Whether to ingest container metrics
- `ingestTraces` (boolean, default: true): Whether to ingest traces
- `logsdb` (boolean, default: false): Use LogsDB format

Usage:

```
node scripts/synthtrace kubernetes_logs_traces_pods --live
```

Example with options:

```
node scripts/synthtrace kubernetes_logs_traces_pods --scenarioOpts='{"numServices":20,"numPods":50,"numContainers":100}'
```
### aws_lambda

Generates AWS Lambda function traces and metrics.

Usage:

```
node scripts/synthtrace aws_lambda --live
```

### azure_functions

Generates Azure Functions traces and metrics.

Usage:

```
node scripts/synthtrace azure_functions --live
```

### many_otel_services

Generates data for many OpenTelemetry services.

Usage:

```
node scripts/synthtrace many_otel_services --live
```

### otel_simple_trace

Generates simple OpenTelemetry traces.

Usage:

```
node scripts/synthtrace otel_simple_trace --live
```

### degraded_synthetics_monitors

Generates degraded synthetic monitor data.

Usage:

```
node scripts/synthtrace degraded_synthetics_monitors --live
```

### cloud_services_icons

Generates data for testing cloud service icons.

Usage:

```
node scripts/synthtrace cloud_services_icons --live
```

### agent_config

Generates data for testing agent configuration.

Usage:

```
node scripts/synthtrace agent_config --live
```
The following scenarios are located in the `sre_incidents/` directory:

### spiked_latency

Generates data simulating a latency spike incident.

Usage:

```
node scripts/synthtrace sre_incidents/spiked_latency --live
```

### bad_feature_flag

Generates data simulating a bad feature flag deployment.

Usage:

```
node scripts/synthtrace sre_incidents/bad_feature_flag --live
```
Many scenarios accept custom options via `--scenarioOpts`. Options can be passed in two formats.

JSON format:

```
node scripts/synthtrace logs_and_metrics_custom_error_rate --scenarioOpts='{"errorRate":0.1,"debugRate":0.05,"numServices":5}'
```

Key-value format:

```
node scripts/synthtrace logs_and_metrics_custom_error_rate --scenarioOpts=errorRate=0.1,debugRate=0.05,numServices=5
```

Note: When using the key-value format, use `--scenarioOpts=` (with an equals sign), not `--scenarioOpts.` (with a dot). The dot notation (`--scenarioOpts.key=value`) only works for single options and cannot be combined with comma-separated values.
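Both formats ultimately produce the same options object. A minimal sketch of how such a value could be interpreted (illustrative only; `parseScenarioOpts` is a hypothetical helper, not the CLI's actual parser):

```typescript
// Sketch: accept --scenarioOpts either as a JSON object or as
// comma-separated key=value pairs, coercing numbers and booleans.
function parseScenarioOpts(raw: string): Record<string, unknown> {
  if (raw.trim().startsWith('{')) {
    return JSON.parse(raw);
  }
  const opts: Record<string, unknown> = {};
  for (const pair of raw.split(',')) {
    const [key, value] = pair.split('=');
    if (value === 'true' || value === 'false') opts[key] = value === 'true';
    else if (value !== '' && !Number.isNaN(Number(value))) opts[key] = Number(value);
    else opts[key] = value;
  }
  return opts;
}
```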
You can combine multiple options:

```
node scripts/synthtrace simple_trace --live --clean --scenarioOpts.numServices=10 --logLevel=debug
```
You can backfill historical data and then continue with live generation:

```
# First, backfill 24 hours of data
node scripts/synthtrace simple_trace --from=now-24h --to=now

# Then start live generation
node scripts/synthtrace simple_trace --from=now-24h --to=now --live
```
For high-volume scenarios, you can tune performance:

```
# Use multiple workers for parallel processing
node scripts/synthtrace high_throughput --workers=4 --live

# Increase concurrency for bulk indexing
node scripts/synthtrace high_throughput --concurrency=5 --live

# Adjust live bucket size
node scripts/synthtrace high_throughput --liveBucketSize=500 --live
```
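Conceptually, multiple workers let disjoint slices of the overall time range be generated in parallel. A minimal sketch of such partitioning (illustrative only; `partitionRange` is a hypothetical helper, not the actual worker implementation):

```typescript
// Sketch: split [from, to) into `workers` contiguous, non-overlapping
// slices so each worker can generate its own portion of the range.
function partitionRange(from: number, to: number, workers: number): Array<[number, number]> {
  const step = Math.ceil((to - from) / workers);
  const slices: Array<[number, number]> = [];
  for (let start = from; start < to; start += step) {
    slices.push([start, Math.min(start + step, to)]);
  }
  return slices;
}
```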
To clean existing data before generating new data:

```
node scripts/synthtrace simple_trace --clean --live
```

Note: The `--clean` option only cleans APM indices tracked by the client. For custom indices (like heartbeat), scenarios may implement their own cleanup logic.
You can use date math expressions for time ranges:

```
# Last 24 hours
node scripts/synthtrace simple_trace --from=now-24h --to=now

# Last week
node scripts/synthtrace simple_trace --from=now-7d --to=now

# Specific date range
node scripts/synthtrace simple_trace --from=2024-01-01T00:00:00Z --to=2024-01-02T00:00:00Z
```
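An expression like `now-24h` resolves to an absolute timestamp at startup. A sketch covering only the subset used above (illustrative; `resolveDateMath` is a hypothetical helper, and the CLI relies on a full date-math parser):

```typescript
// Sketch: resolve 'now', 'now-<n><unit>' (s/m/h/d), or an ISO-8601
// timestamp to epoch milliseconds.
function resolveDateMath(expression: string, now = Date.now()): number {
  if (expression === 'now') return now;
  const match = expression.match(/^now-(\d+)([smhd])$/);
  if (match) {
    const value = Number(match[1]);
    const unitMs = { s: 1_000, m: 60_000, h: 3_600_000, d: 86_400_000 }[match[2] as 's' | 'm' | 'h' | 'd'];
    return now - value * unitMs;
  }
  return new Date(expression).getTime();
}
```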
Control verbosity with log levels:

```
# Verbose output
node scripts/synthtrace simple_trace --logLevel=verbose --live

# Debug output
node scripts/synthtrace simple_trace --logLevel=debug --live

# Quiet mode
node scripts/synthtrace simple_trace --logLevel=error --live
```