Whether you want to maximize Honeycomb’s capabilities or begin with a more limited set of telemetry data for your Kubernetes applications, we recommend that you use OpenTelemetry, a highly configurable, open-source, and vendor-neutral instrumentation framework.
In this guide, you will learn how to get answers about how your applications on Kubernetes are performing in production using OpenTelemetry Collectors and Honeycomb, and you'll do it in under 10 minutes.
When you finish, you’ll have visibility into in-depth Kubernetes data, including Kubernetes logs, events, and node/cluster metrics. And you’ll be given the opportunity to take the next step toward leveraging Honeycomb’s full potential by instrumenting your code.
In the next 10 minutes, you will create a series of OpenTelemetry Collectors that will work together to pull in the correct telemetry and apply Kubernetes-specific data to it, which will help correlate issues. Your implementation will also lay the foundation for your applications to send telemetry data, if you choose to instrument them.
When you finish, you will have access to:
- uptime, *_request_utilization, and *_limit_utilization metrics
- Kubernetes metadata attributes on your telemetry, including:
  - k8s.namespace.name
  - k8s.deployment.name
  - k8s.statefulset.name
  - k8s.daemonset.name
  - k8s.cronjob.name
  - k8s.job.name
  - k8s.node.name
  - k8s.pod.name
  - k8s.pod.uid
  - k8s.pod.start_time
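The *_limit_utilization and *_request_utilization metrics express current usage as a fraction of the pod's configured limit or request. As an illustrative calculation (not actual Collector output), a pod using 250Mi of a 500Mi memory limit would report a memory_limit_utilization of 0.50:

```shell
# utilization = current usage / configured limit (or request)
awk 'BEGIN { printf "%.2f\n", 250 / 500 }'
# → 0.50
```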
Before beginning this guide, you should have:
- A Kubernetes cluster to deploy to.
- Helm (version 3.9 or later) installed locally.
- The kubectl command-line utility installed locally.

You'll also need your Honeycomb API Key. You can find your Honeycomb API Key in your Environment Settings.
Let’s get started!
To help you manage your objects in the cluster, create a namespace to contain the collector infrastructure.
In this example, we call the namespace honeycomb
.
kubectl create namespace honeycomb
Within your new namespace, create a Kubernetes Secret that contains your Honeycomb API Key. You can find your Honeycomb API Key in your environment in Honeycomb.
export HONEYCOMB_API_KEY=mykey
kubectl create secret generic honeycomb --from-literal=api-key=$HONEYCOMB_API_KEY --namespace=honeycomb
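Kubernetes stores Secret values base64-encoded. If you want to sanity-check what that encoding looks like, you can reproduce it locally (shown here with the placeholder value from above, not your real key):

```shell
# Kubernetes base64-encodes Secret data; 'mykey' is the placeholder value from above
printf 'mykey' | base64
# → bXlrZXk=
```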
OpenTelemetry’s Helm GitHub repository includes Helm charts with all of the resources you need to deploy Collectors to your Kubernetes cluster.
Add the repo:
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
Update your repos to ensure Helm is aware of the latest versions:
helm repo update
Deploy your Collectors: a Deployment-mode Collector to collect the metrics for your cluster, and a DaemonSet-mode Collector to collect the application telemetry data and metrics from your cluster's node(s). You can deploy both Collectors using the same Helm chart, but with different names and values files.
Deploy the Deployment-mode Collector:
helm install otel-collector-cluster open-telemetry/opentelemetry-collector --namespace honeycomb --values https://docs.honeycomb.io/send-data/kubernetes/values-files/values-deployment.yaml
mode: deployment
image:
  repository: otel/opentelemetry-collector-k8s
extraEnvs:
  - name: HONEYCOMB_API_KEY
    valueFrom:
      secretKeyRef:
        name: honeycomb
        key: api-key
# We only want one of these collectors - any more and we'd produce duplicate data
replicaCount: 1
presets:
  # enables the k8sclusterreceiver and adds it to the metrics pipelines
  clusterMetrics:
    enabled: true
  # enables the k8sobjectsreceiver to collect events only and adds it to the logs pipelines
  kubernetesEvents:
    enabled: true
config:
  receivers:
    k8s_cluster:
      collection_interval: 30s
      metrics:
        # Disable replicaset metrics by default. These are typically high volume, low signal metrics.
        # If volume is not a concern, then the following blocks can be removed.
        k8s.replicaset.desired:
          enabled: false
        k8s.replicaset.available:
          enabled: false
    jaeger: null
    zipkin: null
  processors:
    transform/events:
      error_mode: ignore
      log_statements:
        - context: log
          statements:
            # adds a new watch-type attribute from the body if it exists
            - set(attributes["watch-type"], body["type"]) where IsMap(body) and body["type"] != nil
            # create new attributes from the body if the body is an object
            - merge_maps(attributes, body, "upsert") where IsMap(body) and body["object"] == nil
            - merge_maps(attributes, body["object"], "upsert") where IsMap(body) and body["object"] != nil
            # Transform the attributes so that the log events use the k8s.* semantic conventions
            - merge_maps(attributes, attributes["metadata"], "upsert") where IsMap(attributes["metadata"])
            - set(attributes["k8s.pod.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "Pod"
            - set(attributes["k8s.node.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "Node"
            - set(attributes["k8s.job.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "Job"
            - set(attributes["k8s.cronjob.name"], attributes["regarding"]["name"]) where attributes["regarding"]["kind"] == "CronJob"
            - set(attributes["k8s.namespace.name"], attributes["regarding"]["namespace"]) where attributes["regarding"]["kind"] == "Pod" or attributes["regarding"]["kind"] == "Job" or attributes["regarding"]["kind"] == "CronJob"
            # Transform the type attributes into OpenTelemetry Severity types.
            - set(severity_text, attributes["type"]) where attributes["type"] == "Normal" or attributes["type"] == "Warning"
            - set(severity_number, SEVERITY_NUMBER_INFO) where attributes["type"] == "Normal"
            - set(severity_number, SEVERITY_NUMBER_WARN) where attributes["type"] == "Warning"
  exporters:
    otlp/k8s-metrics:
      endpoint: "api.honeycomb.io:443" # US instance
      #endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
        "x-honeycomb-dataset": "k8s-metrics"
    otlp/k8s-events:
      endpoint: "api.honeycomb.io:443" # US instance
      #endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
        "x-honeycomb-dataset": "k8s-events"
  service:
    pipelines:
      traces: null
      metrics:
        exporters: [ otlp/k8s-metrics ]
      logs:
        processors: [ memory_limiter, transform/events, batch ]
        exporters: [ otlp/k8s-events ]
ports:
  jaeger-compact:
    enabled: false
  jaeger-thrift:
    enabled: false
  jaeger-grpc:
    enabled: false
  zipkin:
    enabled: false
Deploy the DaemonSet-mode Collector:
helm install otel-collector open-telemetry/opentelemetry-collector --namespace honeycomb --values https://docs.honeycomb.io/send-data/kubernetes/values-files/values-daemonset.yaml
mode: daemonset
image:
  repository: otel/opentelemetry-collector-k8s
# Required to use the kubeletstats cpu/memory utilization metrics
clusterRole:
  create: true
  rules:
    - apiGroups:
        - ""
      resources:
        - nodes/proxy
      verbs:
        - get
extraEnvs:
  - name: HONEYCOMB_API_KEY
    valueFrom:
      secretKeyRef:
        name: honeycomb
        key: api-key
presets:
  # enables the k8sattributesprocessor and adds it to the traces, metrics, and logs pipelines
  kubernetesAttributes:
    enabled: true
    extractAllPodLabels: true
    extractAllPodAnnotations: true
  # enables the kubeletstatsreceiver and adds it to the metrics pipelines
  kubeletMetrics:
    enabled: true
config:
  receivers:
    jaeger: null
    zipkin: null
    kubeletstats:
      insecure_skip_verify: true # required as most clusters use self-signed certificates
      collection_interval: 30s
      metric_groups:
        - node
        - pod
      metrics:
        k8s.node.uptime:
          enabled: true
        k8s.pod.uptime:
          enabled: true
        k8s.pod.cpu_limit_utilization:
          enabled: true
        k8s.pod.cpu_request_utilization:
          enabled: true
        k8s.pod.memory_limit_utilization:
          enabled: true
        k8s.pod.memory_request_utilization:
          enabled: true
  exporters:
    otlp:
      endpoint: "api.honeycomb.io:443" # US instance
      #endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
    otlp/k8s-metrics:
      endpoint: "api.honeycomb.io:443" # US instance
      #endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
        "x-honeycomb-dataset": "k8s-metrics"
    otlp/k8s-logs:
      endpoint: "api.honeycomb.io:443" # US instance
      #endpoint: "api.eu1.honeycomb.io:443" # EU instance
      headers:
        "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
        "x-honeycomb-dataset": "k8s-logs"
  service:
    pipelines:
      traces:
        receivers: [otlp]
        exporters: [otlp]
      metrics:
        exporters: [otlp/k8s-metrics]
      logs:
        exporters: [otlp/k8s-logs]
ports:
  jaeger-compact:
    enabled: false
  jaeger-thrift:
    enabled: false
  jaeger-grpc:
    enabled: false
  zipkin:
    enabled: false
If Collector installation fails and returns an error like the following:
Error: INSTALLATION FAILED: template: opentelemetry-collector/templates/service.yaml:38:28: executing "opentelemetry-collector/templates/service.yaml" at <include "opentelemetry-collector.service
InternalTrafficPolicy" .>: error calling include: template: opentelemetry-collector/templates/_helpers.tpl:148:44: executing "opentelemetry-collector.serviceInternalTrafficPolicy" at <eq .Values.
service.enabled true>: error calling eq: incompatible types for comparison
Make sure you have Helm 3.9+ installed, as older Helm versions typically cause this error.
Check that the Collectors are installed by using the kubectl command to see if the pods are running:
kubectl get pods --namespace honeycomb
This command should return something like:
NAME READY STATUS RESTARTS AGE
otel-collector-cluster-opentelemetry-collector-7c9cc9f8d-k9ncw 1/1 Running 0 9m21s
otel-collector-opentelemetry-collector-agent-fcn5v 1/1 Running 0 17m
You should see one pod running under a name containing otel-collector-cluster, and one pod running under a name containing the word agent for each node in your Kubernetes cluster.

You should now have an OpenTelemetry installation in your cluster that can:
- collect cluster-level metrics and Kubernetes events (the Deployment-mode Collector)
- collect node and pod metrics from the kubelet (the DaemonSet-mode Collector)
- receive traces, metrics, and logs from your applications over OTLP
- forward all of that telemetry to Honeycomb
After a few minutes, data should start flowing into Honeycomb.
To explore metrics related to your Kubernetes cluster, log in to Honeycomb and query the k8s-metrics dataset in your environment.
Try asking the Query Assistant questions like:
To explore the events emitted by Kubernetes itself, query the k8s-events dataset. These may be a little harder to understand, so try asking the Query Assistant questions like:
For each question you ask, the Query Assistant will create a general query. You can further customize each query by adding visualizations, or by filtering or grouping the data.
Now that you have created an observability pipeline and have gotten some metrics, you can use these to get even more visibility into your Kubernetes cluster.
Configure your applications to send data to the OpenTelemetry Collectors. Have you already instrumented your applications with OpenTelemetry? You'll need to configure your pods and applications to send data to your new Collectors.
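One common pattern for this (a hedged sketch, not part of this guide's values files) is to point each pod at the DaemonSet-mode Collector running on its own node, using the Kubernetes downward API; the OTEL_EXPORTER_OTLP_ENDPOINT variable is read by OpenTelemetry SDKs, and 4317 is the default OTLP gRPC port:

```yaml
# Hypothetical container spec fragment for an instrumented application.
# NODE_IP resolves to the node this pod is scheduled on, where the
# DaemonSet Collector is assumed to be listening for OTLP traffic.
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4317"
```

This keeps application telemetry on-node, so the DaemonSet Collector can enrich it with the k8s.* attributes before exporting to Honeycomb.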
Add low-code, automatic instrumentation to your applications. Do you want more insight into your application data, but can't fully instrument your code yet? You can get even more insight by using the OpenTelemetry Operator to automatically instrument your applications. To learn more, visit Low-Code Auto-Instrumentation with the OpenTelemetry Operator for Kubernetes.
Get no-code insights into Kubernetes network telemetry. Are you working in a coding language that is currently unsupported by OpenTelemetry trace instrumentation? Are you not able to touch your code? Honeycomb can still give you visibility into the traffic traveling between your pods. To learn more about the quickest approach to getting insights into the interconnectivity of your services, visit No-code Kubernetes Insights with the Honeycomb Network Agent.
The DaemonSet-mode Collector uses the following components:
The Deployment-mode Collector uses the following components: