# Monitoring Integrations
Cube Cloud allows exporting logs and metrics to external monitoring tools so you can leverage your existing monitoring stack and retain logs and metrics for the long term.
<Note>
  Available on Enterprise plan. You can also choose a Monitoring Integrations tier.
</Note>

<Warning>
  Monitoring integrations pause while a deployment is auto-suspended.
</Warning>

Monitoring integrations are only available for production environments.
Under the hood, Cube Cloud uses Vector, an open-source tool for collecting and delivering monitoring data. It supports a wide range of destinations, also known as sinks.
<iframe
  width="100%"
  height="400"
  src="https://www.youtube.com/embed/iPD0axEYU6k"
  title="YouTube video"
  frameBorder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
  allowFullScreen
/>

Monitoring integrations work with various popular monitoring tools. Check the following guides and configuration examples for tool-specific instructions:
<CardGroup cols={2}>
  <Card title="Amazon CloudWatch" img="https://static.cube.dev/icons/aws.svg" href="/admin/monitoring/monitoring-integrations/cloudwatch" />
  <Card title="Amazon S3" img="https://static.cube.dev/icons/aws.svg" href="/admin/monitoring/monitoring-integrations/s3" />
  <Card title="Datadog" img="https://static.cube.dev/icons/datadog.svg" href="/admin/monitoring/monitoring-integrations/datadog" />
  <Card title="Grafana Cloud" img="https://static.cube.dev/icons/grafana.svg" href="/admin/monitoring/monitoring-integrations/grafana-cloud" />
  <Card title="New Relic" img="https://static.cube.dev/icons/new-relic.svg" href="/admin/monitoring/monitoring-integrations/new-relic" />
</CardGroup>

To enable monitoring integrations, navigate to Settings → Monitoring Integrations and click Enable Vector to add a Vector agent to your deployment. You can use the dropdown to select a Monitoring Integrations tier.
Under Metrics export, you will see credentials for the `prometheus_exporter` sink, in case you'd like to set up metrics export.
Additionally, create a `vector.toml` configuration file next to your `cube.js` file. This file keeps the sink configuration. You have to commit this file to the main branch of your deployment for the Vector configuration to take effect.

You can use environment variables prefixed with `CUBE_CLOUD_MONITORING_` to reference configuration parameters securely in the `vector.toml` file.
Example configuration for exporting logs to Datadog:

```toml
[sinks.datadog]
type = "datadog_logs"
default_api_key = "$CUBE_CLOUD_MONITORING_DATADOG_API_KEY"
```
Sinks accept the `inputs` option that allows you to specify which components of a Cube Cloud deployment should export their logs:

| Input name | Description |
|---|---|
| `cubejs-server` | Logs of API instances |
| `refresh-scheduler` | Logs of the refresh worker |
| `warmup-job` | Logs of the pre-aggregation warm-up |
| `cubestore` | Logs of Cube Store |
| `query-history` | Query History export |
Example configuration for exporting logs to Datadog:

```toml
[sinks.datadog]
type = "datadog_logs"
inputs = [
  "cubejs-server",
  "refresh-scheduler",
  "warmup-job",
  "cubestore"
]
default_api_key = "da8850ce554b4f03ac50537612e48fb1"
compression = "gzip"
```
When exporting Cube Store logs using the `cubestore` input, you can filter logs by providing an array of their severity levels via the `levels` option. If not specified, only `error` and `info` logs will be exported.

| Level | Exported by default? |
|---|---|
| `error` | ✅ Yes |
| `info` | ✅ Yes |
| `debug` | ❌ No |
| `trace` | ❌ No |
If you'd like to adjust the severity levels of logs from API instances and the refresh scheduler, use the `CUBEJS_LOG_LEVEL` environment variable.

You can use a wide range of destinations for logs, supported by Vector as sinks.

Example configuration for exporting all logs, including all Cube Store logs, to Azure Blob Storage:
```toml
[sinks.azure]
type = "azure_blob"
container_name = "my-logs"
connection_string = "DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net"
inputs = [
  "cubejs-server",
  "refresh-scheduler",
  "warmup-job",
  "cubestore"
]

[sinks.azure.cubestore]
levels = [
  "trace",
  "info",
  "debug",
  "error"
]
```
Metrics are exported using the `metrics` input. Metrics will have their respective metric names and types: `gauge` or `counter`.

All metrics of the `counter` type reset to zero at midnight (UTC) and increment during the next 24 hours.
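Because these counters reset daily, query them with reset-aware functions on the receiving side. In Prometheus, for instance, `rate()` and `increase()` account for counter resets, so a sketch like the following still yields correct results across the midnight reset (metric names are taken from the table below; the one-hour range is an arbitrary choice):

```
# Requests per hour, tolerant of the daily counter reset
increase(cube_requests_total[1h])

# Share of erroneous requests over the last hour
sum(increase(cube_requests_errors_total[1h]))
  / sum(increase(cube_requests_total[1h]))
```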
You can filter metrics by providing an array of input names via the `list` option.
| Input name | Metric name, type | Description |
|---|---|---|
| `cpu` | `cube_cpu_usage_ratio`, gauge | CPU usage of a particular node in the deployment. Usually, a number in the 0–100 range. May exceed 100 if the node is under load |
| `memory` | `cube_memory_usage_ratio`, gauge | Memory usage of a particular node in the deployment. Usually, a number in the 0–100 range. May exceed 100 if the node is under load |
| `requests-count` | `cube_requests_total`, counter | Number of API requests to the deployment |
| `requests-success-count` | `cube_requests_success_total`, counter | Number of successful API requests to the deployment |
| `requests-errors-count` | `cube_requests_errors_total`, counter | Number of erroneous API requests to the deployment |
| `requests-duration` | `cube_requests_duration_ms_total`, counter | Total time taken to process API requests, milliseconds |
| `requests-success-duration` | `cube_requests_duration_ms_success`, counter | Total time taken to process successful API requests, milliseconds |
| `requests-errors-duration` | `cube_requests_duration_ms_errors`, counter | Total time taken to process erroneous API requests, milliseconds |
You can further filter exported metrics by providing an array of `inputs`. It applies to metrics only.
Example configuration for exporting all metrics from `cubejs-server` to Prometheus using the `prometheus_remote_write` sink:

```toml
[sinks.prometheus]
type = "prometheus_remote_write"
inputs = [
  "metrics"
]
endpoint = "https://prometheus.example.com:8087/api/v1/write"

[sinks.prometheus.auth]
# Strategy, credentials, etc.

[sinks.prometheus.metrics]
list = [
  "cpu",
  "memory",
  "requests-count",
  "requests-errors-count",
  "requests-success-count",
  "requests-duration"
]
inputs = [
  "cubejs-server"
]
```
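As a sketch of filling in the auth section, Vector's `basic` auth strategy takes a user and a password. The variable names below are assumptions, shown to illustrate the `CUBE_CLOUD_MONITORING_` prefix described above:

```toml
[sinks.prometheus.auth]
strategy = "basic"
# Hypothetical variable names, referenced securely via the
# CUBE_CLOUD_MONITORING_ environment variable prefix
user = "$CUBE_CLOUD_MONITORING_PROMETHEUS_USER"
password = "$CUBE_CLOUD_MONITORING_PROMETHEUS_PASSWORD"
```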
Metrics are exported in the Prometheus format, which is compatible with the following sinks:

- `prometheus_exporter` (native to Prometheus, compatible with Mimir)
- `prometheus_remote_write` (compatible with Grafana Cloud)

Example configuration for exporting all metrics from `cubejs-server` to Prometheus using the `prometheus_exporter` sink:

```toml
[sinks.prometheus]
type = "prometheus_exporter"
inputs = [
  "metrics"
]

[sinks.prometheus.metrics]
list = [
  "cpu",
  "memory",
  "requests-count",
  "requests-errors-count",
  "requests-success-count",
  "requests-duration"
]
inputs = [
  "cubejs-server"
]
```
Navigate to Settings → Monitoring Integrations to find the credentials for the `prometheus_exporter` sink under Metrics export.
You can also customize the user name and password for `prometheus_exporter` by setting the `CUBE_CLOUD_MONITORING_METRICS_USER` and `CUBE_CLOUD_MONITORING_METRICS_PASSWORD` environment variables, respectively.
With Query History export, you can bring Query History data to an external monitoring solution for further analysis.

<Info>
  Requires the M tier of Monitoring Integrations.
</Info>

<iframe
  width="100%"
  height="400"
  src="https://www.youtube.com/embed/6Xf2ayeQZC8"
  title="YouTube video"
  frameBorder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
  allowFullScreen
/>

To configure Query History export, add the `query-history` input to the `inputs` option of the sink configuration. Example configuration for exporting Query History data to the standard output of the Vector agent:
```toml
[sinks.my_console]
type = "console"
inputs = [
  "query-history"
]
target = "stdout"
encoding = { codec = "json" }
```
Exported data includes the following fields:

| Field | Description |
|---|---|
| `trace_id` | Unique identifier of the API request. |
| `account_name` | Name of the Cube Cloud account. |
| `deployment_id` | Identifier of the deployment. |
| `environment_name` | Name of the environment, `NULL` for production. |
| `api_type` | Type of data API used (`rest`, `sql`, etc.), `NULL` for errors. |
| `api_query` | Query executed by the API, represented as a string. |
| `security_context` | Security context of the request, represented as a string. |
| `status` | Status of the request: `success` or `error`. |
| `error_message` | Error message, if any. |
| `start_time_unix_ms` | Start time of the execution, Unix timestamp in milliseconds. |
| `end_time_unix_ms` | End time of the execution, Unix timestamp in milliseconds. |
| `api_response_duration_ms` | Duration of the execution in milliseconds. |
| `cache_type` | Cache type: `no_cache`, `pre_aggregations_in_cube_store`, etc. |
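With the `console` sink and `codec = "json"` shown above, each Query History record arrives as one JSON object per line. As a minimal sketch of downstream analysis (the sample records below are made up for illustration; only the field names follow the export schema), you could parse the JSON-lines output and compute an error rate and average response duration:

```python
import json

# Illustrative JSON-lines output; field names follow the
# Query History export schema, but the values are invented.
raw = """\
{"trace_id": "a1", "api_type": "rest", "status": "success", "api_response_duration_ms": 120}
{"trace_id": "a2", "api_type": "sql", "status": "error", "api_response_duration_ms": 40, "error_message": "Syntax error"}
{"trace_id": "a3", "api_type": "rest", "status": "success", "api_response_duration_ms": 80}
"""

records = [json.loads(line) for line in raw.splitlines()]

# Share of failed requests and mean response time across all records
error_rate = sum(1 for r in records if r["status"] == "error") / len(records)
avg_duration_ms = sum(r["api_response_duration_ms"] for r in records) / len(records)

print(f"error rate: {error_rate:.0%}")            # error rate: 33%
print(f"avg duration: {avg_duration_ms:.0f} ms")  # avg duration: 80 ms
```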
See this recipe for an example of analyzing data from Query History export.