Temporal Aggregation Concepts

Explore how temporal aggregation shapes metrics data for visualization and analysis.

Important

This feature is in beta, and we would love your feedback!

To opt in, join our Pollinators Community Slack and ping us in the #discuss-metrics channel. Pro and Enterprise users can also contact Honeycomb Support or email support@honeycomb.io.

Introduction 

When working with metrics, time alignment is key. Metrics data arrives as timeseries—streams of values for a single metric, segmented by attributes like http.route or k8s.node.name.

Raw metric values often arrive at irregular intervals, making direct comparison or visualization challenging. Additionally, some metrics are reported as monotonic sums, where the meaningful insight comes from the difference between consecutive values over a time period, which requires correctly handling counter resets when values jump back to zero.

Raw values may also represent measurements over varying time ranges—the interval between the current and previous capture—which may not align neatly with the fixed time steps used in your queries.

Honeycomb solves this by using temporal aggregation: a process that reshapes raw timeseries values into regularly spaced, query-aligned values.

What is Temporal Aggregation? 

Temporal aggregation groups raw metric values into fixed-duration time steps and applies a summarizing function to each group.

A step represents one slice of time in your query, like a single minute in a one-minute granularity query. Honeycomb collects all the raw metric values that fall within that step and applies a temporal aggregation function, such as LAST(), INCREASE(), RATE(), or SUMMARIZE(), to compute a single value per timeseries for that time slice. The result is a time-aligned series of points that you can cleanly visualize or group by attributes like route or node.

Without temporal aggregation, your charts would be incomplete, misaligned, or misleading, especially when comparing multiple timeseries.

How It Works 

When you run a metrics query, Honeycomb automatically:

  1. Identifies the relevant timeseries based on the filters and time range of your query. Each unique combination of metric and attributes (for example, http.server.request.count by http.route and k8s.node.name) is stored as its own timeseries when your instrumentation reaches Honeycomb.
  2. Divides the query’s time range into evenly spaced steps using the desired granularity (for example, 60-second intervals).
  3. Applies a temporal aggregation function to each timeseries within each step to produce a single value per step.

This step-aligned output forms the foundation for your charts, groupings, and further analysis.

Example 

Suppose you are tracking memory usage across multiple hosts with the gauge process.memory.usage, collected every 10 seconds. If you query it over a two-hour range with one-minute granularity, Honeycomb will:

  • Divide the time range into 120 one-minute steps,
  • Pull the six data points per minute from each host,
  • Apply the LAST() function to each group of six values,
  • Return one value per minute, per host.
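
This pipeline can be sketched in a few lines of Python. The following is a minimal illustration of the three steps from How It Works, not Honeycomb's implementation; the data shapes and function names are assumptions made for the example:

  from collections import defaultdict

  def temporally_aggregate(points, start, end, step_seconds, agg):
      # Steps 1-2: keep in-range (timestamp, value) points and bucket
      # them into fixed-duration steps.
      buckets = defaultdict(list)
      for ts, value in sorted(points):
          if start <= ts < end:
              buckets[(ts - start) // step_seconds].append(value)
      # Step 3: apply the aggregation function once per step.
      n_steps = (end - start) // step_seconds
      return [agg(buckets[i]) if buckets[i] else None for i in range(n_steps)]

  def last(values):
      return values[-1]  # the most recent value in the step

  # process.memory.usage for one host, sampled every 10 seconds for two hours
  points = [(t, 500.0 + (t % 300)) for t in range(0, 7200, 10)]
  series = temporally_aggregate(points, start=0, end=7200, step_seconds=60, agg=last)
  assert len(series) == 120  # 120 one-minute steps, one LAST() value each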

Why It Matters: Preparing for Spatial Aggregation 

Once temporal aggregation aligns your timeseries in time, Honeycomb can group and summarize across dimensions like route, node, or service. This step is known as spatial aggregation; it aggregates across multiple timeseries at the same time step to produce a single summarized value per group. In Honeycomb, spatial aggregation corresponds to the operations you define in the VISUALIZE clause.

For example, if you group your query by http.route, Honeycomb first aligns all timeseries that share the same value for http.route to the same time steps, then computes a percentile or average across those aligned series.

Spatial aggregation depends on clean, aligned time steps, so temporal aggregation always comes first.
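
Continuing the sketch from the example above (again an illustration built on assumed data shapes, not Honeycomb's code), spatial aggregation only has to combine values that already share the same step boundaries:

  def spatially_aggregate(aligned_series, combine):
      # Combine several step-aligned series into one value per step,
      # skipping steps where a series reported no data (None).
      return [combine([v for v in step if v is not None])
              for step in zip(*aligned_series)]

  # Average memory usage across three hosts at each one-minute step
  host_a = [10.0, 12.0, 11.0]
  host_b = [20.0, 18.0, None]  # this host missed one step
  host_c = [30.0, 31.0, 29.0]
  print(spatially_aggregate([host_a, host_b, host_c],
                            combine=lambda vs: sum(vs) / len(vs)))
  # [20.0, 20.33..., 20.0]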

Understanding Monotonicity and Temporality 

All metric types include extra metadata that hints at how they should be aggregated over time. Two properties guide how aggregation functions are applied:

  • Monotonicity describes whether a sum metric only increases or can go both up and down.

    • Monotonic: The value always increases or resets to zero (for example, total requests served).
    • Non-monotonic: The value may increase or decrease (for example, queue length).
  • Temporality describes what each data point represents in time (see the sketch after this list).

    • Cumulative: Each value represents the total since the start of the measurement.
    • Delta: Each value represents the change since the previous measurement.
    Note
    Honeycomb supports both cumulative and delta metrics natively, unlike some legacy metrics systems that forced users to choose one. This flexibility lets you use your existing data as is, and use aggregation functions to generate different views, which reduces complexity at ingest time.
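
As a small illustration of temporality (with made-up request counts), the same underlying activity produces different data points under each representation:

  from itertools import accumulate

  # Requests handled in each successive reporting interval
  per_interval = [5, 3, 0, 7]

  delta_points = per_interval                         # delta: change since last report
  cumulative_points = list(accumulate(per_interval))  # cumulative: running total
  print(delta_points)       # [5, 3, 0, 7]
  print(cumulative_points)  # [5, 8, 8, 15]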

Supported Temporal Aggregation Functions 

Honeycomb supports four core temporal aggregation functions. Each one reshapes raw metrics into time-aligned results that suit different types of analysis.

To learn how Honeycomb applies these functions and how you can override them when needed, visit Applying Temporal Aggregation Functions.

LAST(metric) 

LAST(metric) returns the most recent data point in each step.

Use this function for metrics that represent a current state or sample, such as memory usage or thread count. It can also be used for non-monotonic sums, where values may fluctuate up and down.

Example: Show the last reported memory usage per node every minute.

SUMMARIZE(metric) 

SUMMARIZE(metric) adds up all values within each time step. If a value spans multiple steps, Honeycomb interpolates to distribute the value proportionally.

This function is best for delta-style metrics that track a count or total within a given window, like requests received, log entries written, or bytes transferred.

Note
For histograms, SUMMARIZE() adds the values in each bucket independently, preserving the bucket structure across time steps.

Example: Count the total number of HTTP requests per minute across all Kubernetes pods.
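
A rough sketch of the per-bucket behavior, assuming two histogram data points with the same bucket boundaries land in one step:

  # Counts per latency bucket, e.g. <10ms, 10-100ms, >=100ms
  buckets_a = [4, 10, 2]
  buckets_b = [1, 6, 3]

  # SUMMARIZE-style addition is element-wise per bucket
  summarized = [a + b for a, b in zip(buckets_a, buckets_b)]
  print(summarized)  # [5, 16, 5]: the bucket structure is preserved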

INCREASE(metric[, range_interval_seconds]) 

INCREASE(metric[, range_interval_seconds]) measures the change in a metric’s value across a range. It handles counter resets automatically and interpolates to match the range’s boundaries.

Use this function for monotonic, cumulative metrics where the total always increases, like total bytes sent or number of connections handled.

Note

For histograms, INCREASE() calculates the difference for each bucket independently, as well as for the total value.

If data points are missing at interval boundaries, Honeycomb will extrapolate, but only up to half the duration of a captured interval. This avoids overestimating changes when samples are dropped.

Example: Calculate the total number of errors over time, even if the service restarts.

RATE(metric[, range_interval_seconds]) 

RATE(metric[, range_interval_seconds]) calculates the per-second rate of change over the time range. It works just like INCREASE(), but divides the result by the range’s duration in seconds.

This function is useful for smoothing spikes or understanding trends as normalized rates.

Example: Track request throughput as requests per second, even if raw request counts vary dramatically.
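
The relationship between the two functions can be sketched in a few lines (a simplified illustration that ignores counter resets and boundary interpolation, both of which the real functions handle):

  def rate(points, window_start, window_end):
      # Per-second rate: the increase across the window divided by
      # the window's duration in seconds.
      values = [v for ts, v in sorted(points)
                if window_start <= ts <= window_end]
      return (values[-1] - values[0]) / (window_end - window_start)

  # A counter that grows by 600 over a 60-second window -> 10.0/second
  points = [(0, 1000), (30, 1300), (60, 1600)]
  print(rate(points, 0, 60))  # 10.0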

Understanding the range_interval_seconds Argument 

Some temporal aggregation functions accept an optional range_interval_seconds argument. This argument controls the size of the window Honeycomb uses to calculate changes over time. Use range_interval_seconds to make temporal aggregation more resilient to sparse data or uneven reporting intervals.

By default, Honeycomb uses the query’s granularity, or time step, as the range interval, but sometimes you may want to override this default to get more accurate results. By setting INCREASE(metric, 300), you allow Honeycomb to look back over a five-minute window when calculating the increase for each step (see the sketch after the list below).

Use range_interval_seconds when:

  • You want to smooth results by calculating increases or rates over a longer time window.
  • You want consistent results even if you zoom in or zoom out of your graph.
  • You are troubleshooting gaps or unexpected zero values in your charts.
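
To see why a longer lookback helps, consider a counter that reports every 90 seconds but is queried at 60-second granularity. This hypothetical sketch only shows which samples each window can see; the real calculation also interpolates and scales the result:

  points = [(0, 100), (90, 130), (180, 160), (270, 190)]  # reported every 90s

  def points_in(points, start, end):
      return [v for ts, v in points if start <= ts <= end]

  # A 60-second window ending at t=240 catches at most one sample,
  # so no increase can be computed for that step.
  print(points_in(points, 180, 240))  # [160]

  # A 300-second lookback ending at t=240 spans several samples, so
  # the step still gets a meaningful, smoothed increase.
  print(points_in(points, -60, 240))  # [100, 130, 160]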

Handling Counter Resets 

The INCREASE() function is designed for monotonic cumulative metrics, which are metrics that count up over time, like total requests served or bytes sent.

But sometimes counters reset, such as during service restarts or container redeployments. When this happens, a raw difference calculation would produce a misleading negative value.

Honeycomb automatically detects and corrects for these resets:

  • If a later value is lower than an earlier one within the same step, Honeycomb treats it as a reset and starts counting from the new value.
  • If the data point includes a start time (as with OpenTelemetry) and that start time changes, Honeycomb treats this as a reset, even if the new value is higher than the previous one.
  • Instead of returning a negative delta, Honeycomb calculates the increase from zero after the reset.

This logic ensures that your results reflect real activity, not artifacts from service restarts or instrumentation quirks.

Example

A counter reports these values during a one-minute step:

10:01:05 — 8,450
10:01:30 — 8,700
10:01:45 — 250  ← service restarted

Without reset handling, the calculation would incorrectly show a drop of 8,450.

With INCREASE(), Honeycomb computes:

  • +250 from 8,450 to 8,700
  • Reset detected
  • +250 after the reset, counting from an assumed zero up to the new value of 250

This leads to a total increase of 500.
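
The value-drop case can be sketched in a few lines of Python (an illustration of the rule above, not Honeycomb's implementation; start-time-based reset detection is omitted):

  def increase_with_resets(values):
      # Sum the deltas within a step; a drop in a monotonic counter
      # is treated as a reset, and counting restarts from zero.
      total = 0
      for prev, curr in zip(values, values[1:]):
          if curr >= prev:
              total += curr - prev  # normal monotonic growth
          else:
              total += curr         # reset: count from zero up to curr
      return total

  # The values from the example above: a restart occurs mid-step
  print(increase_with_resets([8450, 8700, 250]))  # 500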