This feature is available as part of the Honeycomb Enterprise and Pro plans.
Querying metrics requires sending metrics data to Honeycomb first.
Metrics are stored in Honeycomb as fields on events.
They can be queried just like any other data in a dataset.
The value of any given metric field is the measurement collected at the timestamp associated with the event.
Metric resources and attributes are also stored as fields, so you can use WHERE and GROUP BY clauses to plot specific timeseries.
In particular, the RATE_SUM aggregate operator can be a useful way to query counter-style metrics.
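To build intuition for what a rate-style aggregate does with a counter, here is a minimal sketch (an illustration of the general technique, not Honeycomb's implementation): a monotonically increasing counter is turned into per-interval deltas, so the graph shows change per interval rather than the ever-growing total.

```python
def rate(samples):
    """Given (timestamp, counter_value) pairs sorted by time, return
    (timestamp, delta) pairs for each consecutive interval."""
    deltas = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        # A counter reset (value drops) is treated as a restart from zero.
        delta = v1 - v0 if v1 >= v0 else v1
        deltas.append((t1, delta))
    return deltas

# A counter sampled every 60 seconds, with a process restart at t=180.
samples = [(0, 100), (60, 160), (120, 220), (180, 10)]
print(rate(samples))  # [(60, 60), (120, 60), (180, 10)]
```

The raw counter would graph as an ever-climbing line; the deltas graph as the request rate you usually care about.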
Multiple metrics appear together on the same event if they were received as part of the same OTLP request, have equivalent timestamps when truncated to the second (Honeycomb truncates metric timestamps to the second for improved compaction), and share the same set of resources and attributes. Find out how Honeycomb converts incoming metrics data into events.
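The merging rule above can be sketched as follows. This is an illustrative model with assumed data shapes, not Honeycomb's internals: points from one request merge onto a single event when their second-truncated timestamps and their attribute sets match, and each metric becomes a field on that event.

```python
def group_into_events(points):
    """points: iterable of (metric_name, value, unix_ts_ns, attrs_dict).
    Returns a dict mapping (second, frozen_attrs) -> event field dict."""
    events = {}
    for name, value, ts_ns, attrs in points:
        second = ts_ns // 1_000_000_000          # truncate to the second
        key = (second, frozenset(attrs.items()))
        event = events.setdefault(key, dict(attrs))
        event[name] = value                      # each metric is a field
    return events

points = [
    ("cpu.utilization", 0.42, 1_700_000_000_123_000_000, {"host": "web-1"}),
    ("mem.used",        2048, 1_700_000_000_456_000_000, {"host": "web-1"}),
    ("cpu.utilization", 0.10, 1_700_000_000_123_000_000, {"host": "web-2"}),
]
events = group_into_events(points)
print(len(events))  # 2: web-1's two metrics merge; web-2 stays separate
```

The two `web-1` points land on one event because their timestamps agree once truncated to the second and their attributes match; the `web-2` point gets its own event despite the identical timestamp.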
It may be useful to view infrastructure metrics for your systems alongside query results from non-metrics datasets. For instance, a system running out of memory, CPU, or network resources might be the reason for an out-of-compliance SLO or an alerting trigger, and seeing the graph of the problem alongside graphs of relevant system resources can confirm or refute this kind of hypothesis.
The query page has a Metrics tab that allows you to view a selected set of metrics timeseries that cover the same time range as the main query. The timeseries shown can be configured in dataset settings for the main query’s dataset. Correlations can come from a suggested set of metrics, generated by Honeycomb based on the fields in your metrics dataset, or they can come from a Board.
To modify the correlations that are shown for a dataset:
Datasets that contain metrics are periodic: data is captured at a regular, known interval, or granularity. For these datasets, it is helpful to ensure that all queries default to that granularity or coarser, which avoids spiky or confusing graphs.
The Default Granularity setting allows you to specify the expected interval for a periodic dataset. Queries in this dataset will not drop below the default granularity. You can still override the default on any individual query, if needed.
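The effect of the setting can be sketched in a few lines (an assumed model, not Honeycomb's code): when data arrives every 60 seconds but a query buckets at 10 seconds, most buckets are empty and the graph looks spiky, so the requested granularity is clamped up to the dataset default.

```python
DEFAULT_GRANULARITY = 60  # seconds; assumed collection interval for the example

def effective_granularity(requested):
    """Clamp a query's requested granularity to the dataset default:
    finer-than-default requests are coarsened; coarser ones pass through."""
    return max(requested, DEFAULT_GRANULARITY)

print(effective_granularity(10))   # 60 -> coarsened to the default
print(effective_granularity(300))  # 300 -> coarser granularities are allowed
```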
To modify the Default Granularity setting: