Querying Metrics

Querying metrics requires sending metrics data to Honeycomb first.
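
If metrics are not yet flowing into Honeycomb, one common path is the OpenTelemetry SDK with an OTLP exporter. The following is a minimal Python sketch, assuming an environment that accepts OTLP over gRPC with an x-honeycomb-team header; the API key, dataset name, instrument name host.memory_bytes, and the read_memory_bytes helper are placeholders to adapt to your own setup.

from opentelemetry import metrics
from opentelemetry.metrics import CallbackOptions, Observation
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter


def read_memory_bytes() -> int:
    # Placeholder: substitute a real reading, for example from psutil.
    return 0


def observe_memory(options: CallbackOptions):
    # Called by the SDK on each export; returns the current measurement.
    return [Observation(read_memory_bytes())]


# Export metrics to Honeycomb over OTLP/gRPC every 10 seconds.
exporter = OTLPMetricExporter(
    endpoint="api.honeycomb.io:443",
    headers={
        "x-honeycomb-team": "YOUR_API_KEY",      # placeholder API key
        "x-honeycomb-dataset": "host-metrics",   # placeholder metrics dataset
    },
)
reader = PeriodicExportingMetricReader(exporter, export_interval_millis=10_000)
provider = MeterProvider(
    resource=Resource.create({"service.name": "example-host"}),
    metric_readers=[reader],
)
metrics.set_meter_provider(provider)

meter = metrics.get_meter("example.meter")
meter.create_observable_gauge(
    "host.memory_bytes",
    callbacks=[observe_memory],
    unit="By",
    description="Resident memory in use on the host",
)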

Write Queries for Metrics Data 

Metrics are stored in Honeycomb as fields on events, so they can be queried just like any other data in a dataset. However, the kinds of queries typically written for metrics differ from those written for traces.

Common VISUALIZE Operations 

Use any of the following common operations in the VISUALIZE clause of Query Builder when visualizing metrics data:

  • HEATMAP(<metric_field_name>)
  • AVG(<metric_field_name>)
  • SUM(<metric_field_name>)
  • MAX(<metric_field_name>)
  • MIN(<metric_field_name>)
  • PXX(<metric_field_name>)

We recommend that you combine HEATMAP with other Visualize Operations to get a better sense of trends over time.

Refer to the Visualize Operations documentation for more information on these operators.

For metrics data, avoid using the COUNT VISUALIZE operation. COUNT measures the total number of metrics events rather than the actual value of a metric.

For example, when tracking memory utilization of a host, COUNT will only show how many metrics events arrived, not the memory measurement itself over time. Instead, use HEATMAP(host.memory_bytes) and AVG(host.memory_bytes) to visualize it, assuming the instrument that measures memory utilization is named host.memory_bytes.
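
If you build queries programmatically rather than in Query Builder, the same pairing can be written as a query specification. The sketch below is a Python rendering of such a spec, assuming the Query Data API's calculation format with op and column fields and a time_range in seconds; verify the exact schema against the API reference.

import json

# A query specification that pairs HEATMAP with AVG, as recommended above.
# The op names mirror the Query Builder VISUALIZE operations; time_range is
# in seconds. host.memory_bytes is the example field from this page.
query_spec = {
    "calculations": [
        {"op": "HEATMAP", "column": "host.memory_bytes"},
        {"op": "AVG", "column": "host.memory_bytes"},
    ],
    "time_range": 7200,  # query over the last two hours
}

print(json.dumps(query_spec, indent=2))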

Track the Rate of Change 

Tracking the rate at which a measurement changes over time is a common operation when working with metrics data. To do this, use the RATE_MAX, RATE_AVG, and RATE_SUM aggregate operators.

A common way to query metrics is to have two stacked visualization operations, such as:

VISUALIZE
AVG(host.memory_bytes)
RATE_AVG(host.memory_bytes)

When you visualize both operations, the results show the average memory utilization over time and also highlight any interesting spikes in its rate of change.
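
Expressed as a query specification, the stacked pair above might look like the following sketch, assuming the API accepts the same operator names as Query Builder; the breakdown on host.name is hypothetical and only applies if your events carry a per-host field.

import json

# AVG stacked with RATE_AVG, mirroring the VISUALIZE example above, with an
# optional breakdown so each host's rate of change is graphed separately.
query_spec = {
    "calculations": [
        {"op": "AVG", "column": "host.memory_bytes"},
        {"op": "RATE_AVG", "column": "host.memory_bytes"},
    ],
    "breakdowns": ["host.name"],  # hypothetical per-host field
    "time_range": 7200,
}

print(json.dumps(query_spec, indent=2))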

How Metrics are Stored in Honeycomb 

The value of any given metric field is the measurement collected at the timestamp associated with the event.

Multiple metrics appear together on the same event if they:

  • were received as part of the same OTLP request,
  • have equivalent timestamps when truncated to the second (we truncate metric timestamps to the second for improved compaction), and
  • share the same set of unique resources and attributes.

Find out how Honeycomb converts incoming metrics data into events.
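
To make the grouping rule concrete, here is an illustrative Python sketch, not Honeycomb's actual implementation: a second-truncated timestamp plus matching resource and attribute sets form the key that decides which metrics land on the same event.

from collections import defaultdict

def group_datapoints(datapoints):
    """Group OTLP-style datapoints into events keyed by (second, resource, attributes).

    Each datapoint is a dict with name, value, timestamp_ns, resource, and attributes.
    """
    events = defaultdict(dict)
    for dp in datapoints:
        timestamp_s = dp["timestamp_ns"] // 1_000_000_000  # truncate to the second
        key = (
            timestamp_s,
            tuple(sorted(dp["resource"].items())),
            tuple(sorted(dp["attributes"].items())),
        )
        # Each metric becomes a field on the event that shares this key.
        events[key][dp["name"]] = dp["value"]
    return events

# Two datapoints from the same request, same second, same resource and
# attributes: both become fields on a single event.
example = [
    {"name": "host.memory_bytes", "value": 3.2e9,
     "timestamp_ns": 1_700_000_000_123_000_000,
     "resource": {"host.name": "web-1"}, "attributes": {}},
    {"name": "host.cpu.utilization", "value": 0.41,
     "timestamp_ns": 1_700_000_000_456_000_000,
     "resource": {"host.name": "web-1"}, "attributes": {}},
]
print(group_datapoints(example))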

Metrics Correlations 

It may be useful to view infrastructure metrics for your systems alongside query results from non-metrics datasets. For instance, a system running out of memory, CPU, or network resources might be the reason for an out-of-compliance SLO or an alerting trigger, and seeing the graph of the problem alongside graphs of relevant system resources could confirm or deny this kind of hypothesis.

The query page has a Metrics tab that allows you to view a selected set of metrics timeseries that cover the same time range as the main query. The timeseries shown can be configured in dataset settings for the main query’s dataset. Correlations can come from a suggested set of metrics, generated by Honeycomb based on the fields in your metrics dataset, or they can come from a Board.

To modify the correlations that are shown for a dataset:

  1. Navigate to the Datasets tab in Honeycomb.
  2. Select Settings on the right side of a dataset’s row.
  3. Under Metrics > Display metrics in context, use the dropdown to select a source for the Metrics tab for that dataset.

Default Granularity 

Datasets that contain metrics are periodic: data is captured at a regular, known interval, or granularity. For these Datasets, it is helpful to ensure that all queries default to using that granularity or higher, which avoids spiky or confusing graphs.

The Default Granularity setting allows you to specify the expected interval for a periodic dataset. Queries in this Dataset will not drop below the default granularity. You can still override the default on any individual queries, if needed.
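
In a programmatic query specification, the per-query override corresponds to a granularity value in seconds. The sketch below assumes the Query Data API's granularity field and a dataset whose metrics arrive once per minute.

import json

# Overriding granularity on a single query. A value below the dataset's
# Default Granularity would produce the spiky, confusing graphs described above.
query_spec = {
    "calculations": [{"op": "AVG", "column": "host.memory_bytes"}],
    "time_range": 86400,  # last 24 hours
    "granularity": 60,    # one bucket per minute
}

print(json.dumps(query_spec, indent=2))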

To modify the Default Granularity setting:

  1. Select the Datasets icon in the left sidebar in Honeycomb.
  2. Select the name of the desired Dataset in the list of Datasets. The next screen shows the Dataset Settings.
  3. Under Overview > Default Granularity, use the dropdown to select the minimum interval for this dataset.
