Send Logs with OpenTelemetry's Filelog Receiver

If the applications running in your Kubernetes cluster are not instrumented with OpenTelemetry, you can still collect the logs from their containers.

In this guide, you will learn how to get additional insight into your workloads by using an OpenTelemetry Collector to send your container logs to Honeycomb with OpenTelemetry's Filelog Receiver.

Before You Begin 

Before beginning this guide, you should have:

- A Kubernetes cluster running the applications whose logs you want to collect
- An OpenTelemetry Collector deployed to that cluster in DaemonSet mode with the OpenTelemetry Helm chart
- A Honeycomb API key

Collect Logs 

Enable the Filelog Receiver to collect logs by adding the logsCollection preset to the values file for your OpenTelemetry DaemonSet-mode Collector. Place it near the top of the values file, at the same level as the config section:

presets:
  logsCollection:
    enabled: true
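
If you install the Collector with the OpenTelemetry Helm chart, presets sit alongside the chart's other top-level keys. A minimal sketch of such a values file, using the chart's mode key to request a DaemonSet deployment:

mode: daemonset

presets:
  logsCollection:
    enabled: true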

Tune Logs Collection 

By default, the logsCollection preset in the OpenTelemetry Helm chart configures the Collector to collect the logs of every pod in the cluster. In larger clusters, you may want to configure the preset to target specific pods or applications so that you only collect the log data you need.
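
Under the hood, the preset adds a filelog receiver to the Collector's configuration and wires it into the logs pipeline. A rough sketch of the kind of receiver configuration it generates is shown below; the exact output of the chart may differ, and the paths and options here are illustrative:

config:
  receivers:
    filelog:
      # Tail every container log file written on the node.
      include:
        - /var/log/pods/*/*/*.log
      # Skip the Collector's own logs to avoid a feedback loop.
      exclude:
        - /var/log/pods/*/opentelemetry-collector/*.log
      # Read only lines written after the Collector starts.
      start_at: end
      include_file_path: true
      operators:
        # Parse the container runtime's log format (containerd, CRI-O, or Docker).
        - type: container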

Restrict Logs by Location or Name 

To collect only logs in a specific directory or with a specific filename, combine the preset with a filelog receiver configuration that overrides the default include paths:

presets:
  logsCollection:
    enabled: true

config:
  receivers:
    filelog:
      include:
        - /var/log/pods/my-namespace*/*/*.log
        - /var/log/pods/*mypodname*/*/*.log
        - /var/log/pods/*/my-containername/*.log
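
The Filelog Receiver also accepts an exclude list, which removes files that would otherwise match an include pattern. As a sketch, with an illustrative namespace name, you could keep collecting everything except one noisy namespace:

config:
  receivers:
    filelog:
      include:
        - /var/log/pods/*/*/*.log
      exclude:
        # Illustrative: skip logs from pods in the noisy-namespace namespace.
        - /var/log/pods/noisy-namespace*/*/*.log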

Restrict Logs by Label Selector 

To collect logs for only certain label selectors, use the Kubernetes Attributes processor to extract pod labels as resource attributes, and then the Filter processor to drop log records that do not match:

presets:  
  kubernetesAttributes:
    enabled: true
    extractAllPodLabels: true
  logsCollection:
    enabled: true

config:
  processors:
    filter:
      error_mode: ignore
      logs:
        log_record:
          - resource.attributes["app.kubernetes.io/component"] != "myapp1"

  exporters:
    otlp:
      endpoint: "api.honeycomb.io:443" # US instance
      #endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "YOUR_API_KEY"
        "x-honeycomb-dataset": "myapp1-logs"

  service:
    pipelines:
      logs:
        receivers:
          - filelog
        processors:
          - memory_limiter
          - k8sattributes
          - filter
          - batch
        exporters:
          - otlp
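
The same pattern works with other resource attributes that the Kubernetes Attributes processor adds by default. For example, a sketch of a filter that keeps only logs from a single, illustrative namespace:

config:
  processors:
    filter:
      error_mode: ignore
      logs:
        log_record:
          # Drop any record whose pod is not in the "production" namespace.
          - resource.attributes["k8s.namespace.name"] != "production"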

Send Logs to Different Datasets 

To send specific logs to different datasets, define a Filter processor and an OTLP exporter for each application, and wire each pair into its own logs pipeline:

presets:  
  kubernetesAttributes:
    enabled: true
    extractAllPodLabels: true
  logsCollection:
    enabled: true

config:
  processors:
    filter/myapp1:
      error_mode: ignore
      logs:
        log_record:
          - resource.attributes["app.kubernetes.io/component"] != "myapp1"

    filter/myapp2:
      error_mode: ignore
      logs:
        log_record:
          - resource.attributes["app.kubernetes.io/component"] != "myapp2"

  exporters:
    otlp/myapp1:
      endpoint: "api.honeycomb.io:443" # US instance
      #endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "YOUR_API_KEY"
        "x-honeycomb-dataset": "myapp1-logs"
    otlp/myapp2:
      endpoint: "api.honeycomb.io:443" # US instance
      #endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "YOUR_API_KEY"
        "x-honeycomb-dataset": "myapp2-logs"

  service:
    pipelines:
      logs:
        receivers:
          - filelog
        processors:
          - memory_limiter
          - k8sattributes
          - filter/myapp1
          - batch
        exporters:
          - otlp/myapp1
      logs/myapp2:
        receivers:
          - filelog
        processors:
          - memory_limiter
          - k8sattributes
          - filter/myapp2
          - batch
        exporters:
          - otlp/myapp2