
OpenTelemetry Collector

The OpenTelemetry Collector offers a vendor-agnostic way to gather observability data from a variety of instrumentation solutions and send that data to Honeycomb. Applications instrumented with OpenTelemetry SDKs or with Jaeger, Zipkin, or OpenCensus can use the OpenTelemetry Collector to send trace data to Honeycomb as events. Additionally, applications instrumented with OpenTelemetry SDKs or with metrics data from Prometheus, StatsD, Influx, and others can use the OpenTelemetry Collector to send metrics data to Honeycomb.

Honeycomb supports the OpenTelemetry Protocol version 0.7.0 over gRPC and HTTP/Protobuf. This means you can use the OpenTelemetry Collector and its standard OTLP exporter to send data to Honeycomb without any additional exporters or plugins.

The Collector consists of three main components: receivers, processors, and exporters, which are combined to construct telemetry pipelines. To send trace or metrics data to Honeycomb, you must configure an OTLP exporter, passing in your Honeycomb API key and dataset as headers:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"
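
Honeycomb also accepts OTLP over HTTP/Protobuf. If gRPC egress is blocked in your environment, the Collector's built-in `otlphttp` exporter can be used instead; a sketch, with the same placeholder header values:

```yaml
exporters:
  otlphttp:
    endpoint: "https://api.honeycomb.io"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"
```

If you use this variant, reference `otlphttp` rather than `otlp` in the pipeline's `exporters` list.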

You must then include the OTLP exporter in the relevant pipeline:

service:
  extensions: []
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlp]
    metrics:
      receivers: [hostmetrics]
      processors: []
      exporters: [otlp]

Note: Ingesting metrics is available as part of the Honeycomb Enterprise plan.

The following is a complete configuration file example for a Collector instance that accepts Jaeger and OpenTelemetry (over gRPC and HTTP) trace data and exports the data to Honeycomb:

receivers:
  jaeger:
    protocols:
      thrift_http:
        endpoint: "0.0.0.0:14268"
  otlp:
    protocols:
      grpc: # on port 4317
      http: # on port 4318

processors:
  batch:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"

extensions:
  health_check:
  pprof:
  zpages:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [batch]
      exporters: [otlp]

See the Collector documentation for more examples.

Running the Collector

You can run the Collector in Docker to try it out locally. This is useful when adding instrumentation during development and you want to send events to Honeycomb.

For instance, if your config file is called otel_collector_config.yaml in the current working directory, the following command will run the Collector with open ports for sending OTLP protocol:

$ docker run \
  -p 14268:14268 \
  -p 4317-4318:4317-4318 \
  -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
  otel/opentelemetry-collector-contrib:latest

More details on running the Collector can be found in its documentation.

Scrubbing Sensitive Information

Sometimes you want to make sure that certain information does not leave your application or service. This could be for regulatory reasons involving personally identifiable information, or to ensure that certain information does not end up being stored with a vendor. Scrubbing attributes from a span is possible with the built-in Attributes Span Processor. Span processors in OpenTelemetry let you hook into the lifecycle of a span and modify its contents before it is sent to a backend.

The Attributes Span Processor is configured with actions; specifically, the delete action can be used to remove span attributes. Other actions are also available, such as upsert to replace the value or hash to obscure the value with its SHA-1 hash.

processors:
  attributes:
    actions:
      - key: my.sensitive.data
        action: delete
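
If the value must remain available for grouping or comparison but should not be readable, the hash action can be used instead of delete; a sketch using the same hypothetical attribute key:

```yaml
processors:
  attributes:
    actions:
      - key: my.sensitive.data
        action: hash # replaces the value with its SHA-1 hash
```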

Handling Large Requests

If a request is too large, Honeycomb will return an error when the Collector tries to send it. The request size limit is 15 MB. To mitigate these errors, set a limit on the batch size and enable compression when exporting to Honeycomb.

Configuring Max Batch Size

The Batch Processor has send_batch_size and send_batch_max_size configuration options. These specify the number of items, regardless of their byte size, to include in a batch. No single value fits every workload, since requests vary in size, but these options are worth tuning to find the right limit and ensure data is sent reliably.
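
For example, a batch processor capped at 200 items per batch could look like the following; the values are illustrative starting points, not recommendations:

```yaml
processors:
  batch:
    send_batch_size: 200      # size that triggers sending a batch
    send_batch_max_size: 200  # hard cap; larger batches are split
```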

Enabling Compression

Collector exporters have configuration options to enable compression for both gRPC and HTTP. Currently supported compression types include gzip, snappy, and zstd.
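
For instance, gzip compression can be enabled on the OTLP exporter with a single setting (placeholder header values shown):

```yaml
exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    compression: gzip # compress payloads before sending
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"
```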

Replacing the Honeycomb OpenTelemetry Collector Exporter

The Honeycomb API supports the OpenTelemetry Protocol (OTLP) for OpenTelemetry trace ingest. This means the Honeycomb-specific exporter is no longer required when using the OpenTelemetry Collector; instead, you should use the built-in OTLP exporter to send trace data to Honeycomb.

For most users, migrating to the OTLP exporter in the OpenTelemetry Collector means updating your configuration file to include an OTLP exporter.

For example, below is a basic OpenTelemetry Collector configuration using the Honeycomb exporter:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  honeycomb:
    api_key: "{your-api-key}"
    dataset: "{your-dataset}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [honeycomb]

It can be updated to use the OTLP exporter like this:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "{your-api-key}"
      "x-honeycomb-dataset": "{your-dataset}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]

Sample Rate Attribute

The Honeycomb exporter’s sample_rate_attribute configuration option allows you to use a custom attribute to specify the sample rate per span that gets sent to Honeycomb’s backend. This feature is not supported by the OTLP exporter, but can be replaced with a configuration-driven attribute processor.

For example, below is an OpenTelemetry Collector configuration that uses a sample rate attribute:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  honeycomb:
    api_key: "{your-api-key}"
    dataset: "{your-dataset}"
    sample_rate_attribute: "hny.sample_rate"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [honeycomb]

It can be updated to use the attributes processor like this:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
  attributes/copy-sample-rate:
    actions:
      - key: sampleRate
        action: upsert
        from_attribute: hny.sample_rate

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "{your-api-key}"
      "x-honeycomb-dataset": "{your-dataset}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, attributes/copy-sample-rate]
      exporters: [otlp]

This attributes processor instructs the OpenTelemetry Collector to copy the value of the hny.sample_rate attribute to an attribute called sampleRate, which the Honeycomb backend automatically reads and uses as the sample rate for that span. You can read more about the attributes processor and sampling with Honeycomb.

Troubleshooting

If data is not arriving in Honeycomb as expected, add a debug-level logger to emit the data to the console for review. In the exporters section of your config file, add a logging exporter with loglevel of debug. The logging exporter should also be added to the service section, either replacing or accompanying the otlp exporter.

If the Collector is running in Docker, or its console output is otherwise difficult to inspect, you can also send the data to a file for review. Add an additional file exporter with a path to the file that should contain the output.

This example includes an otlp exporter for sending to Honeycomb, a logging exporter for debug-level logging to the console, and a file exporter for storing the data logged.

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"
  logging:
    loglevel: debug
  file: # optionally export data to a file
    path: /var/lib/data.json # optional file to store exported data

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp, logging, file] # only add file if added above