rfd/0027-mtls-metrics.md
This RFD proposes an option to secure the `/metrics` endpoint read by Prometheus with mTLS, to prevent leaking metrics shipped to Prometheus over an insecure network.
Currently the `/metrics` handler resides under `initDiagnosticService()` in `lib/service/service.go` and is served at the address provided by the `--diag-addr` flag given to `teleport start`.
We could have enabled TLS globally in the server implemented in `initDiagnosticService` and verified client certificates only in the metrics handler, keeping the other `healthz`, `readyz`, and debug endpoints intact. However, that would trigger a TLS renegotiation, which is not supported by Prometheus and has been removed from TLS 1.3, so it is not an option we can consider.
Instead, in order to achieve this, we have to move the `/metrics` endpoint to its own `initMetricsService` server, where TLS is toggled on or off depending on the settings supplied in the config.
This implementation will only support user-provided certs and CA for now. Using Teleport's Host CA and generated certs is an option that can be considered in the future for self-hosted Teleport instances. That approach is not optimal for Teleport Cloud because Prometheus would have to wait for Teleport to start before it could be provisioned. There are other security and design concerns you can read about here: https://github.com/gravitational/teleport/pull/6469
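A minimal sketch of the proposed dedicated metrics server in Go, using only the standard library: the server requires and verifies a client certificate (mTLS), and a client configured the way a reconfigured Prometheus scraper would be pulls `/metrics` once. The helper names (`selfSignedCert`, `fetchMetrics`) and the demo metric are illustrative, not Teleport's actual implementation; the certificate is generated in memory so the example is self-contained, whereas the real feature reads operator-supplied files.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"io"
	"math/big"
	"net"
	"net/http"
	"net/http/httptest"
	"time"
)

// selfSignedCert generates a throwaway self-signed certificate in memory so
// this sketch is self-contained; in the real feature the operator supplies
// key_file/cert_file and ca_certs on disk instead.
func selfSignedCert() (tls.Certificate, *x509.CertPool, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return tls.Certificate{}, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "metrics-demo"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(time.Hour),
		KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		BasicConstraintsValid: true,
		IsCA:                  true,
		IPAddresses:           []net.IP{net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return tls.Certificate{}, nil, err
	}
	leaf, err := x509.ParseCertificate(der)
	if err != nil {
		return tls.Certificate{}, nil, err
	}
	pool := x509.NewCertPool()
	pool.AddCert(leaf)
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key, Leaf: leaf}, pool, nil
}

// fetchMetrics starts an mTLS-only /metrics server and scrapes it once,
// the way a reconfigured Prometheus instance would.
func fetchMetrics() (string, error) {
	cert, pool, err := selfSignedCert()
	if err != nil {
		return "", err
	}
	mux := http.NewServeMux()
	mux.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "teleport_demo_metric 1")
	})
	srv := httptest.NewUnstartedServer(mux)
	srv.TLS = &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,                           // from 'ca_certs' in the config
		ClientAuth:   tls.RequireAndVerifyClientCert, // this is what makes it mTLS
	}
	srv.StartTLS()
	defer srv.Close()

	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{cert}, // the scraper presents its own cert
	}}}
	resp, err := client.Get(srv.URL + "/metrics")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	body, err := fetchMetrics()
	if err != nil {
		panic(err)
	}
	fmt.Print(body)
}
```

A plain `client.Get` without a client certificate would fail the handshake here, which is the property the proposal relies on to keep metrics off insecure networks.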
The metrics service config will look like the following:

```yaml
metrics_service:
  # 'enabled: no' or the absence of this section altogether means that metrics
  # will still be hosted at the 'diag-addr' provided to teleport start as a flag.
  enabled: yes
  # 'listen_addr' is the new address where the metrics will be hosted.
  # Defaults to port 3081.
  listen_addr: localhost:3081
  # 'mtls: no' will ship metrics in clear text to prometheus.
  mtls: yes
  # 'keypairs' should be provided alongside 'mtls: yes'. Only user-generated
  # certs and CA are currently supported, but that can change to support
  # certs provided by teleport if there is demand for it.
  keypairs:
  - key_file: key.pem
    cert_file: cert.pem
  # 'ca_certs' should be provided alongside 'mtls: yes'. These are the CA certs
  # of the prometheus instances consuming the metrics.
  ca_certs:
  - ca.pem
```
Enabling the metrics_service in the config overrides hosting metrics at the diag-addr endpoint. To be clear: if both the metrics_service is enabled and diag-addr is set, metrics will only be available through the metrics service.
Teleport will still support shipping metrics over the diag-addr endpoint for those who wish to continue using it, and there is currently no timeline for its deprecation.
To use the new metrics service, Prometheus will have to be reconfigured to scrape the new address defined in the config, using client certs for mTLS if needed.
Here are the steps to a simple migration scenario:
1. Configure a new Prometheus scrape job that pulls from the metrics_service endpoint. mTLS is optional. Example:

```yaml
- job_name: 'teleport_new_metrics_service'
  scheme: https
  tls_config:
    ca_file: "ca.pem"
    cert_file: "cert.pem"
    key_file: "key.pem"
  metrics_path: /metrics
  static_configs:
  - targets:
    - localhost:3081
```

2. Update the Teleport config to enable the metrics_service. Example:

```yaml
metrics_service:
  enabled: yes
  listen_addr: localhost:3081
  mtls: yes
  keypairs:
  - key_file: key.pem
    cert_file: cert.pem
  ca_certs:
  - ca.pem
```

3. Remove the old Prometheus job pulling from diag-addr once the metrics_service is up. Note that switching Prometheus from diag-addr to pull from the metrics_service before updating Teleport with the new config will cause a gap in the metrics between the time Prometheus has been updated and the new Teleport config has been applied.
4. Update the documentation with all the relevant changes.