HashiCorp Vault enables teams to secure, store, and control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
Configure Vault to send server metrics to Honeycomb with an OpenTelemetry Collector.
The Vault server’s metrics endpoint supports Prometheus-formatted metrics. As with other services that expose such an endpoint, use an OpenTelemetry Collector to scrape it and send the metrics to Honeycomb.
Refer to Vault’s documentation for a list of key metrics, as well as the full telemetry reference.
Prometheus metrics are not enabled by default. To enable them, set the prometheus_retention_time value to at least twice the scrape interval of your OpenTelemetry Collector. The HashiCorp documentation also suggests setting disable_hostname to avoid hostname-prefixed metric names.
A suggested configuration can be saved as metrics.hcl on each Vault server:
telemetry {
  disable_hostname          = true
  prometheus_retention_time = "12h"
}
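Vault needs to load this file along with its main configuration; the -config flag may be repeated or pointed at a directory of .hcl files. A minimal sketch, assuming the main configuration lives at /etc/vault/config.hcl (adjust both paths to your deployment):
# The -config flag may be repeated; the paths below are assumptions.
vault server \
  -config=/etc/vault/config.hcl \
  -config=/etc/vault/metrics.hcl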
Since Vault’s /sys/metrics
endpoint is authenticated, we need to create both a read-metrics
ACL policy and a metrics token
for the OpenTelemetry Collector to use when scraping Vault metrics.
The following example creates a read-metrics ACL policy that grants read capability on the metrics endpoint:
vault policy write read-metrics - << EOF
path "/sys/metrics" {
  capabilities = ["read"]
}
EOF
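As an optional sanity check, the stored policy can be read back before moving on:
# Optional: confirm the policy was written as expected
vault policy read read-metrics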
Once the read-metrics ACL policy is created, the next step is to create a token with that policy for the Collector to use when scraping metrics from Vault.
The following example writes the token to the file metrics-token in the Vault configuration directory:
vault token create \
  -field=token \
  -policy read-metrics \
  > /etc/vault/metrics-token
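Because this token grants read access to Vault metrics, it is worth restricting the file’s permissions and confirming the token works before configuring the Collector. A sketch, assuming the Collector runs as a user named otelcol and VAULT_ADDR points at your Vault server (both are assumptions; adjust to your deployment):
# Lock down the token file; "otelcol" is an assumed service user.
chown otelcol /etc/vault/metrics-token
chmod 0400 /etc/vault/metrics-token
# Optional: the endpoint should return Prometheus-formatted metrics
curl --header "X-Vault-Token: $(cat /etc/vault/metrics-token)" \
  "$VAULT_ADDR/v1/sys/metrics?format=prometheus"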
Scraping the Vault server’s Prometheus metrics endpoint requires configuring an OpenTelemetry Collector with a pipeline that starts with a Prometheus receiver and ends with an OTLP exporter. Depending on how you deploy Vault, the resource detection processor can further enrich the OTLP metrics sent to Honeycomb.
The following example OpenTelemetry Collector configuration uses the resource detection processor with the system detector:
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: vault
          scrape_interval: 60s
          metrics_path: /v1/sys/metrics
          authorization:
            credentials_file: /etc/vault/metrics-token
          static_configs:
            - targets:
                - localhost:8200 # Vault's default API port
processors:
  batch:
  resourcedetection/os:
    detectors:
      - system
    system:
      hostname_sources:
        - os
exporters:
  otlp/metrics:
    endpoint: api.honeycomb.io:443 # US instance
    #endpoint: api.eu1.honeycomb.io:443 # EU instance
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "vault"
service:
  pipelines:
    metrics:
      receivers:
        - prometheus
      processors:
        - resourcedetection/os
        - batch
      exporters:
        - otlp/metrics
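The resource detection processor ships with the OpenTelemetry Collector Contrib distribution, so use a contrib build (or a custom build that includes it). A minimal sketch of running it, assuming the configuration above is saved as /etc/otelcol-contrib/config.yaml (the path and binary name depend on how the Collector was installed):
# Run the Collector with the configuration above; adjust the path as needed.
otelcol-contrib --config /etc/otelcol-contrib/config.yaml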