OpenTelemetry Collector | Honeycomb


OpenTelemetry Collector

The OpenTelemetry Collector offers a vendor-agnostic way to gather observability data from a variety of instrumentation solutions and send that data to Honeycomb. Applications instrumented with OpenTelemetry SDKs or with Jaeger, Zipkin, or OpenCensus can use the OpenTelemetry Collector to send trace data to Honeycomb as events. Additionally, applications instrumented with OpenTelemetry SDKs or with metrics data from Prometheus, StatsD, Influx, and others can use the OpenTelemetry Collector to send metrics data to Honeycomb.

Honeycomb supports receiving telemetry data via OpenTelemetry’s native protocol, OTLP, over gRPC and HTTP/protobuf. Currently supported versions of OTLP protobuf definitions are 0.7.0 through 0.16.0 for traces, and 0.7.0 through 0.11.0 for metrics.

This means you can use the OpenTelemetry Collector and its standard OTLP exporter to send data to Honeycomb without any additional exporters or plugins.

The Collector consists of three types of components: receivers, processors, and exporters, which are combined to construct telemetry pipelines.

If using the dataset-only data model (Honeycomb Classic), follow the Honeycomb Classic variants below. Not sure? Learn more about Honeycomb versus Honeycomb Classic.

To send trace or metrics data to Honeycomb Classic, you must configure an OTLP exporter, passing in your Honeycomb API Key and dataset as headers:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"

If metrics data are also being sent through to Honeycomb, you may consider adding an additional (separate) exporter and dataset:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"
  otlp/metrics:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_METRICS_DATASET"

You must then include the OTLP exporter in the relevant pipeline:

service:
  extensions: []
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlp]
    metrics:
      receivers: [hostmetrics]
      processors: []
      exporters: [otlp/metrics]

If you are not using Honeycomb Classic, to send trace data to Honeycomb you must configure an OTLP exporter, passing in only your Honeycomb API Key as a header:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"

If metrics data are also being sent through to Honeycomb, the destination dataset for metrics must also be added as an additional header:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_METRICS_DATASET"

Alternatively, you may consider adding an additional (separate) exporter and dataset:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
  otlp/metrics:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_METRICS_DATASET"

You must then include the OTLP exporter(s) in the relevant pipeline:

service:
  extensions: []
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlp]
    metrics:
      receivers: [hostmetrics]
      processors: []
      exporters: [otlp/metrics]

Note: Ingesting metrics is available as part of the Honeycomb Enterprise and Pro plans. Find out more about metrics.

The following is a complete configuration file example for a Collector instance that accepts Jaeger and OpenTelemetry (over gRPC and HTTP) trace data, as well as Prometheus metrics, and exports the data to Honeycomb:

A Honeycomb Classic configuration, which includes the trace dataset header, is shown first; the non-Classic equivalent follows.

receivers:
  jaeger:
    protocols:
      thrift_http:
        endpoint: "0.0.0.0:14268"
  otlp:
    protocols:
      grpc: # on port 4317
      http: # on port 4318
  prometheus:
    config:
      scrape_configs:
        - job_name: "prometheus"
          scrape_interval: 15s
          static_configs:
            - targets: ["0.0.0.0:9100"]

processors:
  batch:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"
  otlp/metrics:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_METRICS_DATASET"

extensions:
  health_check:
  pprof:
  zpages:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [prometheus]
      processors: []
      exporters: [otlp/metrics]
The non-Classic equivalent, which omits the trace dataset header:

receivers:
  jaeger:
    protocols:
      thrift_http:
        endpoint: "0.0.0.0:14268"
  otlp:
    protocols:
      grpc: # on port 4317
      http: # on port 4318
  prometheus:
    config:
      scrape_configs:
        - job_name: "prometheus"
          scrape_interval: 15s
          static_configs:
            - targets: ["0.0.0.0:9100"]

processors:
  batch:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
  otlp/metrics:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_METRICS_DATASET"

extensions:
  health_check:
  pprof:
  zpages:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [prometheus]
      processors: []
      exporters: [otlp/metrics]

See the Collector documentation for more examples.

HTTP Instead of gRPC 

To use HTTP instead of gRPC, use the otlphttp exporter and update the endpoint:

receivers:
  otlp:
    protocols:
      grpc: # on port 4317
      http: # on port 4318

exporters:
  otlphttp:
    endpoint: "https://api.honeycomb.io"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlphttp]

Running the Collector 

You can run the Collector in Docker to try it out locally. This is useful when developing instrumentation and you want to send events to Honeycomb from your development environment.

For instance, if your configuration file is called otel_collector_config.yaml in the current working directory, the following command will run the Collector with open ports for sending OTLP protocol:

$ docker run \
  -p 14268:14268 \
  -p 4317-4318:4317-4318 \
  -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
  otel/opentelemetry-collector-contrib:latest
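
If you prefer Docker Compose, an equivalent sketch looks like this (the service name otel-collector is arbitrary, and the config file is assumed to sit in the same directory as the Compose file):

services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    volumes:
      - ./otel_collector_config.yaml:/etc/otel/config.yaml
    ports:
      - "14268:14268"
      - "4317-4318:4317-4318"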

More details on running the Collector can be found in its documentation.

Scrubbing Sensitive Information 

Sometimes you want to make sure that certain information does not escape your application or service. This could be for regulatory reasons regarding personally identifiable information, or because you want to ensure that certain information does not end up being stored with a vendor. Scrubbing attributes from a span is possible with the built-in Attributes Span Processor. Span processors in OpenTelemetry allow you to hook into the lifecycle of a span and modify its contents before it is sent to a backend.

The Attributes Span Processor can be configured with actions; specifically, the delete action can be used to remove span attributes. Other actions are also available, such as upsert to replace the value or hash to replace the value with its SHA-1 hash.

processors:
  attributes:
    actions:
      - key: my.sensitive.data
        action: delete
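
For example, the hash and upsert actions can be sketched like this (the attribute names here are illustrative, not from the original example):

processors:
  attributes:
    actions:
      - key: user.email
        action: hash     # replace the value with its SHA-1 hash
      - key: deployment.environment
        action: upsert   # insert the attribute, or overwrite it if present
        value: production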

Handling Large Requests 

The Honeycomb API limits request sizes to 15 MB; requests larger than that will be rejected with an error. To avoid oversized requests, set a limit on the batch size and enable compression when exporting to Honeycomb.

Configuring Max Batch Size 

The Batch Processor has two relevant configuration options: send_batch_size and send_batch_max_size. These specify the number of data points, regardless of their size, to include in a batch. There is no one-size-fits-all value for these options, since requests vary in size, but they are worth tuning to find limits that let data be sent reliably.
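
As a sketch, with illustrative values that you should tune for your own payload sizes:

processors:
  batch:
    send_batch_size: 8192      # batch is sent once this many items accumulate
    send_batch_max_size: 10000 # hard upper bound on items per batch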

Enabling Compression 

Collector exporters have configuration options to enable compression, including for both gRPC and HTTP. Current supported compression types include gzip, snappy, and zstd.
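
For example, gzip compression can be enabled on the gRPC and HTTP exporters like this (a sketch; choose whichever exporter your pipeline uses):

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    compression: gzip
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
  otlphttp:
    endpoint: "https://api.honeycomb.io"
    compression: gzip
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"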

Replacing the Honeycomb OpenTelemetry Collector Exporter 

The Honeycomb API supports the OpenTelemetry Protocol (OTLP) for OpenTelemetry trace ingest. This means the Honeycomb-specific exporter is no longer required when using the OpenTelemetry Collector and instead you should use the built-in OTLP exporter to send trace data to Honeycomb.

For most users, migrating to the OTLP exporter in the OpenTelemetry Collector means updating your configuration file to include an OTLP exporter.

For example, below is a basic OpenTelemetry Collector configuration using the Honeycomb exporter:

A Honeycomb Classic configuration, which includes a dataset, is shown first; the non-Classic equivalent follows.

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  honeycomb:
    api_key: "YOUR_API_KEY"
    dataset: "YOUR_DATASET"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [honeycomb]

It can be updated to use the OTLP exporter like this:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
The non-Classic equivalent, which omits the dataset:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  honeycomb:
    api_key: "YOUR_API_KEY"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [honeycomb]

It can be updated to use the OTLP exporter like this:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]

Sample Rate Attribute 

The Honeycomb exporter’s sample_rate_attribute configuration option allows you to use a custom attribute to specify the sample rate per span that gets sent to Honeycomb’s backend. This feature is not supported by the OTLP exporter, but can be replaced with a configuration-driven attribute processor.

For example, below is an OpenTelemetry Collector configuration that uses a sample rate attribute:

A Honeycomb Classic configuration, which includes a dataset, is shown first; the non-Classic equivalent follows.

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  honeycomb:
    api_key: "YOUR_API_KEY"
    dataset: "YOUR_DATASET"
    sample_rate_attribute: "hny.sample_rate"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [honeycomb]

It can be updated to use the attributes processor like this:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
  attributes/copy-sample-rate:
    actions:
      - key: sampleRate
        action: upsert
        from_attribute: hny.sample_rate

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, attributes/copy-sample-rate]
      exporters: [otlp]
The non-Classic equivalent, which omits the dataset:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  honeycomb:
    api_key: "YOUR_API_KEY"
    sample_rate_attribute: "hny.sample_rate"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [honeycomb]

It can be updated to use the attributes processor like this:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
  attributes/copy-sample-rate:
    actions:
      - key: sampleRate
        action: upsert
        from_attribute: hny.sample_rate

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, attributes/copy-sample-rate]
      exporters: [otlp]

This attributes processor tells the OpenTelemetry Collector to copy the value of the hny.sample_rate attribute to an attribute called sampleRate, which the Honeycomb backend will automatically read and use as the sample rate for that span. You can read more about the attributes processor and sampling with Honeycomb.

Overriding or Setting the Service Name in a Collector Host 

In some cases, it may be preferable to override or set a service name in a collector host rather than have SDKs that generate OpenTelemetry data set the service name.

To do this, use the OTEL_SERVICE_NAME environment variable:

OTEL_SERVICE_NAME=my-service-name
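
Alternatively, a resource processor in the Collector configuration can set or override service.name on telemetry passing through. This is a sketch (the service name is a placeholder), and the processor must also be added to the relevant pipelines:

processors:
  resource:
    attributes:
      - key: service.name
        value: my-service-name
        action: upsert  # set service.name, overwriting any value from the SDK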

Troubleshooting 

If data is not arriving in Honeycomb as expected, add a debug-level logger to emit the data to the console for review. In the exporters section of your config file, add a logging exporter with loglevel of debug. The logging exporter should also be added to the service section, either replacing or accompanying the otlp exporter.

If the collector is running in Docker or otherwise difficult to parse via the console, you can also send the data to a specific file for review. Add an additional file exporter with a path to the file that should contain the output.

This example includes an otlp exporter for sending to Honeycomb, a logging exporter for debug-level logging to the console, and a file exporter for storing the data logged.

receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  batch:
exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
  logging:
    loglevel: debug
  file: # optionally export data to a file
    path: /var/lib/data.json # optional file to store exported data
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp, logging, file] # only add file if added above

Verify OTLP Protobuf Definitions 

Honeycomb supports receiving telemetry data via OpenTelemetry’s native protocol, OTLP, over gRPC and HTTP/protobuf. Currently supported versions of OTLP protobuf definitions are 0.7.0 through 0.16.0 for traces, and 0.7.0 through 0.11.0 for metrics.

If the protobuf version in use by the SDK does not match a version supported by Honeycomb, a different version of the SDK may need to be used. If the SDK's protobuf version is newer than this range, temporarily use an SDK with a supported version until the newer version is supported. If the SDK's protobuf version is older than this range, upgrade the SDK to a version with supported protobuf definitions. If you have added a direct dependency on a proto library, ensure its version of the protobuf definitions matches the one supported by the SDK.
