Get answers about how your applications on Kubernetes are performing in production using OpenTelemetry Collectors and Honeycomb—in under 10 minutes.
Whether you want to maximize Honeycomb’s capabilities or begin with a more limited set of telemetry data for your Kubernetes applications, we recommend that you use OpenTelemetry, a highly configurable, open-source, and vendor-neutral instrumentation framework. In this guide, you will learn how to get answers about how your applications on Kubernetes are performing in production using OpenTelemetry Collectors and Honeycomb—and you’ll do it in under 10 minutes. When you finish, you’ll have visibility into in-depth Kubernetes data, including Kubernetes logs, events, and node/cluster metrics.
And you’ll be given the opportunity to take the next step toward leveraging Honeycomb’s full potential by instrumenting your code.
In the next 10 minutes, you will create a series of OpenTelemetry Collectors that work together to pull in the correct telemetry and enrich it with Kubernetes-specific metadata, which helps you correlate issues.
Your implementation will also lay the foundation for your applications to send telemetry data, if you choose to instrument them.
Each node will contain a Collector, which will use the node’s Kubelet API to gather metrics data about the node and the node’s pod resources.
The entire cluster will contain a separate Collector, which will use the Kubernetes API to get details about Kubernetes Events, such as active deployments.
Applications will be able to use the node’s IP address to send telemetry data (logs, metrics, and traces) to the Collector that is local to the node, if you instrument them.
Each Collector will send telemetry data directly to Honeycomb over gRPC.
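As a sketch of the application side of that layout, a pod can discover its node’s IP address through the Kubernetes downward API and point its OTLP exporter at the node-local Collector. The field names here are standard Kubernetes and OpenTelemetry conventions; the port assumes the Collector’s default OTLP gRPC port:

```yaml
# Hypothetical fragment of an application pod spec: expose the node's IP
# to the container and use it as the OTLP endpoint for the node-local
# Collector (default OTLP gRPC port 4317).
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4317"
```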
Step 1: Create a Namespace
To help you manage your objects in the cluster, create a namespace to contain the Collector infrastructure.
In this example, we call the namespace honeycomb.
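Assuming you use kubectl, creating the namespace is a single command (the name honeycomb matches the rest of this guide):

```shell
# Create the namespace that will hold the Collector deployments and the API key Secret.
kubectl create namespace honeycomb
```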
Step 2: Configure Kubernetes with Your Honeycomb API Key
Within your new namespace, create a Kubernetes Secret that contains your Honeycomb API Key.
You can find your Honeycomb API Key in your environment’s settings in the Honeycomb UI.
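As a minimal sketch, assuming your API key is in the HONEYCOMB_API_KEY environment variable, you can create the Secret with the name (honeycomb) and key (api-key) that the Collector configuration reads via a secretKeyRef:

```shell
# Store the Honeycomb API key in a Secret named "honeycomb" under the key "api-key",
# inside the namespace created earlier.
kubectl create secret generic honeycomb \
  --namespace honeycomb \
  --from-literal=api-key="$HONEYCOMB_API_KEY"
```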
Step 3: Create the Cluster-Level Collector
Create a values file for the cluster-level Collector. This version sends data to Honeycomb’s US instance; if your team is hosted on the EU instance, use the EU variant that follows instead.

```yaml
mode: deployment
image:
  repository: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-k8s
extraEnvs:
  - name: HONEYCOMB_API_KEY
    valueFrom:
      secretKeyRef:
        name: honeycomb
        key: api-key
# We only want one of these collectors - any more and we'd produce duplicate data
replicaCount: 1
presets:
  # enables the k8sclusterreceiver and adds it to the metrics pipelines
  clusterMetrics:
    enabled: true
  # enables the k8sobjectsreceiver to collect events only and adds it to the logs pipelines
  kubernetesEvents:
    enabled: true
config:
  receivers:
    k8s_cluster:
      collection_interval: 30s
      metrics:
        # Disable replicaset metrics by default. These are typically high volume, low signal metrics.
        # If volume is not a concern, then the following blocks can be removed.
        k8s.replicaset.desired:
          enabled: false
        k8s.replicaset.available:
          enabled: false
    jaeger: null
    zipkin: null
  processors:
    transform/events:
      error_mode: ignore
      log_statements:
        - context: log
          statements:
            # adds a new watch-type attribute from the body if it exists
            - set(attributes["watch-type"], body["type"]) where IsMap(body) and body["type"] != nil
            # create new attributes from the body if the body is an object
            - merge_maps(attributes, body, "upsert") where IsMap(body) and body["object"] == nil
            - merge_maps(attributes, body["object"], "upsert") where IsMap(body) and body["object"] != nil
            # Transform the attributes so that the log events use the k8s.* semantic conventions
            - merge_maps(attributes, attributes["metadata"], "upsert") where IsMap(attributes["metadata"])
            - set(attributes["k8s.pod.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "Pod"
            - set(attributes["k8s.node.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "Node"
            - set(attributes["k8s.job.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "Job"
            - set(attributes["k8s.cronjob.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "CronJob"
            - set(attributes["k8s.namespace.name"], attributes["regarding"]["namespace"]) where attributes["regarding"]["kind"] == "Pod" or attributes["regarding"]["kind"] == "Job" or attributes["regarding"]["kind"] == "CronJob"
            # Transform the type attributes into OpenTelemetry Severity types.
            - set(severity_text, attributes["type"]) where attributes["type"] == "Normal" or attributes["type"] == "Warning"
            - set(severity_number, SEVERITY_NUMBER_INFO) where attributes["type"] == "Normal"
            - set(severity_number, SEVERITY_NUMBER_WARN) where attributes["type"] == "Warning"
  exporters:
    otlp/k8s-metrics:
      endpoint: "api.honeycomb.io:443" # US instance
      #endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
        "x-honeycomb-dataset": "k8s-metrics"
    otlp/k8s-events:
      endpoint: "api.honeycomb.io:443" # US instance
      #endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
        "x-honeycomb-dataset": "k8s-events"
  service:
    pipelines:
      traces: null
      metrics:
        receivers: [k8s_cluster]
        exporters: [otlp/k8s-metrics]
      logs:
        receivers: [k8sobjects]
        processors: [memory_limiter, transform/events, batch]
        exporters: [otlp/k8s-events]
ports:
  jaeger-compact:
    enabled: false
  jaeger-thrift:
    enabled: false
  jaeger-grpc:
    enabled: false
  zipkin:
    enabled: false
```
If your Honeycomb team is hosted on Honeycomb’s EU instance, use this values file instead. It is identical except that the EU endpoints are active:

```yaml
mode: deployment
image:
  repository: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-k8s
extraEnvs:
  - name: HONEYCOMB_API_KEY
    valueFrom:
      secretKeyRef:
        name: honeycomb
        key: api-key
# We only want one of these collectors - any more and we'd produce duplicate data
replicaCount: 1
presets:
  # enables the k8sclusterreceiver and adds it to the metrics pipelines
  clusterMetrics:
    enabled: true
  # enables the k8sobjectsreceiver to collect events only and adds it to the logs pipelines
  kubernetesEvents:
    enabled: true
config:
  receivers:
    k8s_cluster:
      collection_interval: 30s
      metrics:
        # Disable replicaset metrics by default. These are typically high volume, low signal metrics.
        # If volume is not a concern, then the following blocks can be removed.
        k8s.replicaset.desired:
          enabled: false
        k8s.replicaset.available:
          enabled: false
    jaeger: null
    zipkin: null
  processors:
    transform/events:
      error_mode: ignore
      log_statements:
        - context: log
          statements:
            # adds a new watch-type attribute from the body if it exists
            - set(attributes["watch-type"], body["type"]) where IsMap(body) and body["type"] != nil
            # create new attributes from the body if the body is an object
            - merge_maps(attributes, body, "upsert") where IsMap(body) and body["object"] == nil
            - merge_maps(attributes, body["object"], "upsert") where IsMap(body) and body["object"] != nil
            # Transform the attributes so that the log events use the k8s.* semantic conventions
            - merge_maps(attributes, attributes["metadata"], "upsert") where IsMap(attributes["metadata"])
            - set(attributes["k8s.pod.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "Pod"
            - set(attributes["k8s.node.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "Node"
            - set(attributes["k8s.job.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "Job"
            - set(attributes["k8s.cronjob.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "CronJob"
            - set(attributes["k8s.namespace.name"], attributes["regarding"]["namespace"]) where attributes["regarding"]["kind"] == "Pod" or attributes["regarding"]["kind"] == "Job" or attributes["regarding"]["kind"] == "CronJob"
            # Transform the type attributes into OpenTelemetry Severity types.
            - set(severity_text, attributes["type"]) where attributes["type"] == "Normal" or attributes["type"] == "Warning"
            - set(severity_number, SEVERITY_NUMBER_INFO) where attributes["type"] == "Normal"
            - set(severity_number, SEVERITY_NUMBER_WARN) where attributes["type"] == "Warning"
  exporters:
    otlp/k8s-metrics:
      # endpoint: "api.honeycomb.io:443" # US instance
      endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
        "x-honeycomb-dataset": "k8s-metrics"
    otlp/k8s-events:
      # endpoint: "api.honeycomb.io:443" # US instance
      endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
        "x-honeycomb-dataset": "k8s-events"
  service:
    pipelines:
      traces: null
      metrics:
        receivers: [k8s_cluster]
        exporters: [otlp/k8s-metrics]
      logs:
        receivers: [k8sobjects]
        processors: [memory_limiter, transform/events, batch]
        exporters: [otlp/k8s-events]
ports:
  jaeger-compact:
    enabled: false
  jaeger-thrift:
    enabled: false
  jaeger-grpc:
    enabled: false
  zipkin:
    enabled: false
```
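These values can be applied with Helm. The following is a sketch, assuming you saved the cluster Collector values as values-cluster.yaml and use the upstream opentelemetry-collector chart; the release name otel-collector-cluster is an assumption chosen to match the pod names in the output later in this guide:

```shell
# Add the upstream OpenTelemetry Helm chart repository.
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Install the cluster-level Collector into the honeycomb namespace,
# using the values file from this guide (filename is an assumption).
helm install otel-collector-cluster open-telemetry/opentelemetry-collector \
  --namespace honeycomb \
  --values values-cluster.yaml
```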
Check that the Collectors are installed by using kubectl to confirm that the pods are running:
kubectl get pods --namespace honeycomb
This command should return something like:
```
NAME                                                             READY   STATUS    RESTARTS   AGE
otel-collector-cluster-opentelemetry-collector-7c9cc9f8d-k9ncw   1/1     Running   0          9m21s
otel-collector-opentelemetry-collector-agent-fcn5v               1/1     Running   0          17m
```
The result should contain one pod running under a name containing the prefix otel-collector-cluster, and one pod running under a name containing the word agent for each node in your Kubernetes cluster.
You should now have an OpenTelemetry installation in your cluster that can:
Receive tracing data from service applications in your cluster and forward it to Honeycomb.
Gather and send metrics data from all of the pods in your cluster.
Gather and send metrics data about the nodes in your cluster.
To explore metrics related to your Kubernetes cluster, log in to Honeycomb and query the k8s-metrics dataset in your environment.
Try asking the Query Assistant questions like:
“Show me the average CPU of my pods”
“What’s the P99 memory usage of my nodes?”
To explore the events emitted by Kubernetes itself, query the k8s-events dataset. These may be a little harder to understand, so try asking the Query Assistant questions like:
“Show me the pods that have a reason of Started”
“Show me pods that are crashing”
For each question you ask, the Query Assistant will create a general query.
You can further customize each query by adding visualizations, or by filtering or grouping the data.
Now that you have created an observability pipeline and are receiving metrics, you can use them to get even more visibility into your Kubernetes cluster.
Add low-code, automatic instrumentation to your applications
Do you want more insight into your application data, but can’t fully instrument your code yet?
You can get even more insight by using the OpenTelemetry Operator to automatically instrument your applications.
To learn more, visit Low-Code Auto-Instrumentation with the OpenTelemetry Operator for Kubernetes.
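As a taste of what that looks like, the Operator injects auto-instrumentation when a workload opts in via a pod annotation. The sketch below assumes a Java application and that the Operator and an Instrumentation resource have already been set up, as described in the guide linked above; the Deployment name is hypothetical:

```yaml
# Hypothetical Deployment snippet: the annotation asks the OpenTelemetry
# Operator to inject the Java auto-instrumentation agent into the pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # hypothetical application name
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
```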