Filter spans, metrics, and logs using the filter processor for the OpenTelemetry Collector.
The filter processor for the OpenTelemetry (OTel) Collector filters telemetry based on conditions you provide. If you have instrumentations creating a lot of unneeded signals, the filter processor is a great way to reduce this noisy, noncritical data.
You can use the OpenTelemetry Transformation Language (OTTL) to write filtering conditions for each type of telemetry. If any condition is met, the telemetry is dropped; multiple conditions are ORed together, as in the sketch after the following table.
| Configuration Option | OTTL Context |
|---|---|
| `traces.span` | Span |
| `traces.spanevent` | SpanEvent |
| `metrics.metric` | Metric |
| `metrics.datapoint` | DataPoint |
| `logs.log_record` | Log |
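For example, in this minimal sketch (the `filter/noisy` name and the route values are placeholders, not part of this guide), a span is dropped if either condition under `traces.span` matches, because conditions are ORed together:

```yaml
processors:
  filter/noisy:
    error_mode: ignore
    traces:
      span:
        # a span matching EITHER condition is dropped
        - 'attributes["http.route"] == "/healthcheck"'
        - 'attributes["http.route"] == "/metrics"'
```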
Honeycomb converts the `instrumentation_scope.name` field into `library.name`. To filter based on the value of an instrumentation scope, use `instrumentation_scope.name` instead of `library.name` in your filter conditions.
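For example, a minimal sketch (the scope name `io.opentelemetry.contrib.redis` is only an illustration) that drops spans emitted by a specific instrumentation scope:

```yaml
processors:
  filter/by_scope:
    error_mode: ignore
    traces:
      span:
        # drop spans produced by this (hypothetical) instrumentation scope
        - 'instrumentation_scope.name == "io.opentelemetry.contrib.redis"'
```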
To use the filter processor, add the `filter` component as a processor in your OTel Collector configuration file:
```yaml
processors:
  # add the filter processor
  filter/simple:
    error_mode: ignore
    # tell it to operate on span data
    traces:
      span:
        - 'attributes["container.name"] == "app_container_1"'
```
Then add the filter processor to a compatible pipeline:
```yaml
service:
  pipelines:
    traces:
      processors: [filter/simple, batch]
```
An example Collector configuration:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  batch:
  filter/simple:
    error_mode: ignore
    traces:
      span:
        - 'attributes["container.name"] == "app_container_1"'
exporters:
  otlp:
    endpoint: "api.honeycomb.io:443" # US instance
    #endpoint: "api.eu1.honeycomb.io:443" # EU instance
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter/simple, batch]
      exporters: [otlp]
```
Here are some example configurations for filtering spans, metrics, and logs. You can find more examples in the filter processor repository.
An example with filters for each type:
```yaml
processors:
  filter:
    error_mode: ignore
    traces:
      span:
        - 'attributes["container.name"] == "app_container_1"'
        - 'resource.attributes["host.name"] == "localhost"'
        - 'name == "app_3"'
      spanevent:
        - 'attributes["grpc"] == true'
        - 'IsMatch(name, ".*grpc.*")'
    metrics:
      metric:
        - 'name == "my.metric" and resource.attributes["my_label"] == "abc123"'
        - 'type == METRIC_DATA_TYPE_HISTOGRAM'
      datapoint:
        - 'metric.type == METRIC_DATA_TYPE_SUMMARY'
        - 'resource.attributes["service.name"] == "my_service_name"'
    logs:
      log_record:
        - 'IsMatch(body, ".*password.*")'
        - 'severity_number < SEVERITY_NUMBER_WARN'
```
Drop spans based on a resource attribute:
```yaml
processors:
  filter:
    error_mode: ignore
    traces:
      span:
        - IsMatch(resource.attributes["k8s.pod.name"], "my-pod-name.*")
```
Drop span events based on attribute and span event name:
```yaml
processors:
  filter:
    traces:
      # Drop only span events that have the 'grpc' attribute set to true
      # and whose name contains 'grpc'.
      spanevent:
        - 'attributes["grpc"] == true and IsMatch(name, ".*grpc.*") == true'
```
Drop metrics with an invalid type:
```yaml
processors:
  filter:
    error_mode: ignore
    metrics:
      metric:
        - type == METRIC_DATA_TYPE_NONE
```
Drop metrics based on name and value:
```yaml
processors:
  filter:
    error_mode: ignore
    metrics:
      datapoint:
        - metric.name == "k8s.pod.phase" and value_int == 4
```
Drop metrics that have a given attribute key on any data point, using the filter processor’s HasAttrKeyOnDatapoint() function:
```yaml
filter:
  error_mode: ignore
  metrics:
    metric:
      - 'HasAttrKeyOnDatapoint("some.metric")'
```
Drop metrics with a given attribute key and value using the filter processor’s HasAttrOnDatapoint() function:
```yaml
filter:
  error_mode: ignore
  metrics:
    metric:
      - 'HasAttrOnDatapoint("some.metric", "true")'
```
Drop logs based on log body or log severity:
```yaml
filter:
  error_mode: ignore
  logs:
    log_record:
      - 'IsMatch(body, ".*password.*")'
      - 'severity_number < SEVERITY_NUMBER_WARN'
```