If you need to troubleshoot sending data to Honeycomb, explore these solutions to common issues.
Troubleshoot issues related to the OpenTelemetry Collector.
If data is not arriving in Honeycomb as expected, add a debug-level logger to emit the data to the console for review.
In the exporters section of your config file, add a logging exporter with loglevel of debug. The logging exporter should also be added to the service section, either replacing or accompanying the otlp exporter.
If the collector is running in Docker or its console output is otherwise difficult to review, you can also send the data to a specific file for review. Add an additional file exporter with a path to the file that should contain the output. This example includes an otlp exporter for sending to Honeycomb, a logging exporter for debug-level logging to the console, and a file exporter for storing the data logged.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  batch:
exporters:
  otlp:
    endpoint: "api.honeycomb.io:443" # US instance
    #endpoint: "api.eu1.honeycomb.io:443" # EU instance
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
  logging:
    loglevel: debug
  file: # optionally export data to a file
    path: /var/lib/data.json # optional file to store exported data
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp, logging, file] # only add file if added above
Sometimes a CORS error may occur when setting up browser telemetry.
Confirm the receiver’s setup has the correct port defined.
The default port for http is 4318.
If this port or endpoint is overwritten in the collector configuration file, ensure it matches the endpoint set in the application sending telemetry.
Confirm the allowed_origins list in the receivers matches the origin of the browser telemetry.
If there is a load balancer in front of the Collector, it should also be configured to accept requests from the browser origin.
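If needed, CORS settings can be added to the Collector's OTLP HTTP receiver. A minimal sketch, assuming the browser application is served from http://localhost:3000 (adjust the origin for your environment):
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
        cors:
          allowed_origins:
            - "http://localhost:3000"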
One way to determine whether the issue is rooted in how the application is exporting as opposed to network connectivity is to issue a curl command to the server from the browser origin. For example, if the application was running on http://localhost:3000, and the collector was listening on port 4318 at http://otel-collector.com/v1/traces:
curl -i http://otel-collector.com/v1/traces -H "Origin: http://localhost:3000" -H "Access-Control-Request-Method: POST" -H "Access-Control-Request-Headers: X-Requested-With" -H "Content-Type: application/json" -X OPTIONS --verbose
The response from the server should include Access-Control-Allow-Credentials: true.
Honeycomb supports receiving telemetry data via OpenTelemetry’s native protocol, OTLP, over gRPC, HTTP/protobuf, and HTTP/JSON. The minimum supported versions of OTLP protobuf definitions are 0.7.0 for traces and metrics.
If the protobuf version in use by the SDK does not match a supported version by Honeycomb, a different version of the SDK may need to be used. If the SDK’s protobuf version is older than the minimum supported version, and telemetry is not appearing as expected in Honeycomb, upgrade the SDK to a version with the supported protobuf definitions. If using an added dependency on a proto library, ensure the version of protobuf definitions matches the supported version of the SDK.
Troubleshoot issues related to the OpenTelemetry SDKs and Honeycomb Distributions.
Sometimes a CORS error may occur when setting up browser telemetry.
Confirm the receiver’s setup has the correct port defined.
The default port for http is 4318.
If this port or endpoint is overwritten in the collector configuration file, ensure it matches the endpoint set in the application sending telemetry.
Confirm the allowed_origins list in the receivers matches the origin of the browser telemetry.
If there is a load balancer in front of the Collector, it should also be configured to accept requests from the browser origin.
One way to determine whether the issue is rooted in how the application is exporting as opposed to network connectivity is to issue a curl command to the server from the browser origin. For example, if the application was running on http://localhost:3000, and the collector was listening on port 4318 at http://otel-collector.com/v1/traces:
curl -i http://otel-collector.com/v1/traces -H "Origin: http://localhost:3000" -H "Access-Control-Request-Method: POST" -H "Access-Control-Request-Headers: X-Requested-With" -H "Content-Type: application/json" -X OPTIONS --verbose
The response from the server should include Access-Control-Allow-Credentials: true.
If a "This trace has multiple spans sharing the same non-null span ID" error appears in Honeycomb, it is likely that your application is not instrumented correctly and is sending the same trace to Honeycomb more than once.
One possible misconfiguration is initializing OpenTelemetry more than once. Make sure to only initialize OpenTelemetry once when the application starts, and then use the Tracing API throughout the application runtime to add manual instrumentation.
To enable debugging when running the Honeycomb OpenTelemetry Node SDK, set the DEBUG environment variable to true:
export DEBUG=true
To set the debug option in code instead of an environment variable, add debug to the HoneycombSDK:
const sdk = new HoneycombSDK({
  apiKey: "your-api-key",
  serviceName: "your-service-name",
  instrumentations: [getNodeAutoInstrumentations()],
  sampleRate: 5,
  localVisualizations: true,
  debug: true,
})
When the debug setting is enabled, the Honeycomb SDK configures a DiagConsoleLogger that logs telemetry to the console with the log level of Debug.
The debug setting in the Honeycomb SDK will also output to the console the full options configuration, including but not limited to protocol, API Key, and endpoint.
If you are not using the Honeycomb SDK, or if you wish to change the logging level, you can still use the logger directly:
const opentelemetry = require("@opentelemetry/api");

opentelemetry.diag.setLogger(
  new opentelemetry.DiagConsoleLogger(),
  opentelemetry.DiagLogLevel.DEBUG
);
Keep in mind that printing to the console is not recommended for production and should only be used for debugging purposes.
Honeycomb supports receiving telemetry data via OpenTelemetry’s native protocol, OTLP, over gRPC, HTTP/protobuf, and HTTP/JSON. The minimum supported versions of OTLP protobuf definitions are 0.7.0 for traces and metrics.
If the protobuf version in use by the SDK does not match a supported version by Honeycomb, a different version of the SDK may need to be used. If the SDK’s protobuf version is older than the minimum supported version, and telemetry is not appearing as expected in Honeycomb, upgrade the SDK to a version with the supported protobuf definitions. If using an added dependency on a proto library, ensure the version of protobuf definitions matches the supported version of the SDK.
You may receive a 464 error response from the Honeycomb API when sending telemetry using gRPC and HTTP1.
The gRPC format depends on using HTTP2 and any request over HTTP1 will be rejected by the Honeycomb servers.
.proto Files Error
If your application builds using a bundler, like Webpack or ESBuild, or if your application uses TypeScript, you will see an error saying that a specific .proto file is not found:
opentelemetry/proto/collector/trace/v1/trace_service.proto not found in any of the include paths <directory>/protos
This error appears because the .proto files are not imported or required by the library that uses them, so bundlers do not know to include them in the final build.
There are two ways to work around this error:
Copy the /protos Directory to the Correct Location
Modify the bundler configuration to copy the /protos directory from the library to the same level as the build directory:
Webpack:
const CopyPlugin = require("copy-webpack-plugin");

module.exports = {
  // ... other config ...
  plugins: [
    new CopyPlugin({
      patterns: [
        {
          from: "./node_modules/@opentelemetry/otlp-grpc-exporter-base/build/protos/**/*",
          to: "./protos"
        }
      ],
    }),
  ],
};
ESBuild:
const { copy } = require('esbuild-plugin-copy')

require('esbuild').build({
  // ... other config ...
  plugins: [
    copy({
      resolveFrom: 'cwd',
      assets: {
        from: ['./node_modules/@opentelemetry/otlp-grpc-exporter-base/build/protos/**/*'],
        to: ['./protos'],
        keepStructure: true
      },
    }),
  ],
}).catch(() => process.exit(1));
TypeScript configuration currently cannot specify which extra files to copy for your build. In this case, use a postbuild npm script to copy the proto files to the correct location.
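For example, a postbuild script might look like the following sketch; the cp command and destination are assumptions to adjust for your build layout and platform:
{
  "scripts": {
    "build": "tsc",
    "postbuild": "cp -r ./node_modules/@opentelemetry/otlp-grpc-exporter-base/build/protos ./protos"
  }
}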
If a "This trace has multiple spans sharing the same non-null span ID" error appears in Honeycomb, it is likely that your application is not instrumented correctly and is sending the same trace to Honeycomb more than once.
One possible misconfiguration is initializing OpenTelemetry more than once. Make sure to only initialize OpenTelemetry once when the application starts, and then use the Tracing API throughout the application runtime to add manual instrumentation.
Using MyPy requires turning on support for namespace packages.
To turn on support from the command line, run:
mypy --namespace-packages
Or to turn on support from your project configuration file, add:
[tool.mypy]
namespace_packages = true
The OpenTelemetry Python SDK typically shows errors in the console when applicable. If no errors appear but your data is not in Honeycomb as expected, you can enable debug mode, which prints all spans to the console. This will help confirm whether your app is being instrumented with the data you expect.
Set the DEBUG environment variable:
export DEBUG=true
Keep in mind that printing to the console is not recommended for production and should only be used for debugging purposes.
If the application uses Flask, instrumentation will not work if Flask debugging is enabled.
Unset the FLASK_DEBUG variable or set it to false.
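For example:
unset FLASK_DEBUG
# or
export FLASK_DEBUG=false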
If DEBUG is enabled and FLASK_DEBUG is disabled, the output in the console will show:
DEBUG:sitecustomize:Instrumented flask
* Debug mode: off
If the application uses Django, instrumentation requires the DJANGO_SETTINGS_MODULE and the --noreload flag. First set the environment variable for DJANGO_SETTINGS_MODULE based on the name of your settings file.
export DJANGO_SETTINGS_MODULE=myapp.settings
Then run the application with the --noreload flag to avoid Django running main twice.
For example, the command to run the application may look like this:
opentelemetry-instrument python manage.py runserver --noreload
The service name is a required configuration value.
If it is unspecified, all trace data will be sent to a default dataset called unknown_service.
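If you configure the SDK with environment variables, the standard OpenTelemetry variable sets the service name; for example:
export OTEL_SERVICE_NAME=your-service-name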
Honeycomb’s OpenTelemetry Distribution for Python can create a link to a trace visualization in the Honeycomb UI for local traces. Local visualizations enable a faster feedback cycle when adding, modifying, or verifying instrumentation.
To enable local visualizations, set the HONEYCOMB_ENABLE_LOCAL_VISUALIZATIONS environment variable to true:
export HONEYCOMB_ENABLE_LOCAL_VISUALIZATIONS=true
Then, run your application:
opentelemetry-instrument python myapp.py
The output displays the name of the root span and a link to Honeycomb that shows its trace. For example:
Trace for <root-span-name>
Honeycomb link: <link to Honeycomb trace>
Select the link to view the trace in detail within the Honeycomb UI.
Running the opentelemetry-bootstrap command with --action=install does not add packages to a requirements.txt file, and instead only adds to the current environment.
This can result in inadvertently missing dependencies in checked-in code or container images.
If using pip, our recommendation is as listed in Acquire Dependencies:
1. Run opentelemetry-bootstrap.
2. Add the packages to the requirements.txt file manually or by using opentelemetry-bootstrap >> requirements.txt.
3. Run pip install -r requirements.txt.
If you do install with the bootstrap command, run pip freeze and manually add the necessary packages to the requirements.txt file.
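Put together, the workflow might look like the following sketch, which assumes an activated virtual environment and an existing requirements.txt:
opentelemetry-bootstrap >> requirements.txt
pip install -r requirements.txt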
The opentelemetry-bootstrap command does not include the option to install packages when using Poetry. Refer to the outputted list of packages to manually install each package and add to the pyproject.toml file.
To confirm the installed packages in your poetry environment, run poetry show.
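For example, if the bootstrap output lists opentelemetry-instrumentation-flask (an illustrative package; use the packages from your own output):
poetry add opentelemetry-instrumentation-flask
poetry show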
OpenTelemetry packages generally work best when they are all upgraded together. The Releases for opentelemetry-python and the Releases for opentelemetry-python-contrib list the compatible versions for each.
For example, Version 1.20.0/0.41b0 means core packages versioned 1.20.0 are compatible with Contrib packages versioned 0.41b0.
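For example, to upgrade a core package and a Contrib package together, using the version pairing above and an illustrative instrumentation package:
pip install --upgrade "opentelemetry-sdk==1.20.0" "opentelemetry-instrumentation-flask==0.41b0"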
Honeycomb supports receiving telemetry data via OpenTelemetry’s native protocol, OTLP, over gRPC, HTTP/protobuf, and HTTP/JSON. The minimum supported versions of OTLP protobuf definitions are 0.7.0 for traces and metrics.
If the protobuf version in use by the SDK does not match a supported version by Honeycomb, a different version of the SDK may need to be used. If the SDK’s protobuf version is older than the minimum supported version, and telemetry is not appearing as expected in Honeycomb, upgrade the SDK to a version with the supported protobuf definitions. If using an added dependency on a proto library, ensure the version of protobuf definitions matches the supported version of the SDK.
You may receive a 464 error response from the Honeycomb API when sending telemetry using gRPC and HTTP1.
The gRPC format depends on using HTTP2 and any request over HTTP1 will be rejected by the Honeycomb servers.
The service name is a required configuration value.
If it is unspecified, all trace data will be sent to a default dataset called unknown_service.
If a "This trace has multiple spans sharing the same non-null span ID" error appears in Honeycomb, it is likely that your application is not instrumented correctly and is sending the same trace to Honeycomb more than once.
One possible misconfiguration is initializing OpenTelemetry more than once. Make sure to only initialize OpenTelemetry once when the application starts, and then use the Tracing API throughout the application runtime to add manual instrumentation.
To enable debugging when running with the OpenTelemetry Java Agent, set the otel.javaagent.debug system property or OTEL_JAVAAGENT_DEBUG environment variable to true.
When this setting is provided, the Agent configures a LoggingSpanExporter that logs traces & metrics data.
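For example, using placeholder paths for the agent and application jars:
# Using the environment variable:
export OTEL_JAVAAGENT_DEBUG=true
java -javaagent:path/to/opentelemetry-javaagent.jar -jar path/to/myapp.jar

# Or using the system property:
java -Dotel.javaagent.debug=true -javaagent:path/to/opentelemetry-javaagent.jar -jar path/to/myapp.jar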
If you are not using the OpenTelemetry Java Agent, you can add a LoggingSpanExporter to your builder configuration. This will require adding another dependency on io.opentelemetry:opentelemetry-exporter-logging.
import io.honeycomb.opentelemetry.OpenTelemetryConfiguration;
import io.opentelemetry.exporter.logging.LoggingSpanExporter; // for debugging
import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor; // for debugging

public OpenTelemetry honeycomb() {
    return OpenTelemetryConfiguration.builder()
        .setApiKey("your-api-key")
        .setServiceName("your-service-name")
        .addSpanProcessor(SimpleSpanProcessor.create(new LoggingSpanExporter())) // for debugging
        .buildAndRegisterGlobal();
}
Keep in mind that printing to the console is not recommended for production and should only be used for debugging purposes.
A gRPC transport is required to transmit OpenTelemetry data. The Honeycomb SDK includes grpc-netty-shaded.
If you are using another gRPC dependency, version conflicts can come up with an error like this:
java.lang.NoSuchMethodError: io/grpc/ClientStreamTracer$StreamInfo$Builder.setPreviousAttempts(I)Lio/grpc/ClientStreamTracer$StreamInfo$Builder; (loaded from file:/app.jar by jdk.internal.loader.ClassLoaders$AppClassLoader@193b9e51) called from class io.grpc.internal.GrpcUtil (loaded from file:/io.grpc/grpc-core/1.41.0/882b6572f7d805b9b32e3993b1d7d3e022791b3a/grpc-core-1.41.0.jar by jdk.internal.loader.ClassLoaders$AppClassLoader@193b9e51).
If you would like to use another gRPC transport, you can exclude the grpc-netty-shaded transitive dependency:
Gradle:
dependencies {
    implementation('io.honeycomb:honeycomb-opentelemetry-sdk:1.7.0') {
        exclude group: 'io.grpc', module: 'grpc-netty-shaded'
    }
}
Maven:
<dependency>
  <groupId>io.honeycomb</groupId>
  <artifactId>honeycomb-opentelemetry-sdk</artifactId>
  <version>1.7.0</version>
  <exclusions>
    <exclusion>
      <groupId>io.grpc</groupId>
      <artifactId>grpc-netty-shaded</artifactId>
    </exclusion>
  </exclusions>
</dependency>
You may receive a 464 error response from the Honeycomb API when sending telemetry using gRPC and HTTP1.
The gRPC format depends on using HTTP2 and any request over HTTP1 will be rejected by the Honeycomb servers.
Additionally, older JVMs may not have sufficient gRPC support and may attempt to send telemetry using HTTP1.
To resolve this, either update to a newer JVM or use http/protobuf as the transfer protocol.
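One way to switch protocols is the standard OpenTelemetry exporter environment variable, shown here as a sketch; the exact mechanism depends on how your exporter is configured:
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf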
The service name is a required configuration value.
If it is unspecified, all trace data will be sent to a default dataset called unknown_service.
If a "This trace has multiple spans sharing the same non-null span ID" error appears in Honeycomb, it is likely that your application is not instrumented correctly and is sending the same trace to Honeycomb more than once.
One possible misconfiguration is initializing OpenTelemetry more than once. Make sure to only initialize OpenTelemetry once when the application starts, and then use the Tracing API throughout the application runtime to add manual instrumentation.
By default, the .NET SDK does not emit error information to the console.
The SDK does include a self-diagnostics feature to help with troubleshooting by writing errors to a log file.
Enable self-diagnostics by creating a file in the current working directory called OTEL_DIAGNOSTICS.json:
{
  "LogDirectory": ".",
  "FileSize": 1024,
  "LogLevel": "Error"
}
Running the app with the above configuration in the OTEL_DIAGNOSTICS.json file will generate a log file named ExecutableName.ProcessId.log, such as console.30763.log.
The newly generated log file will contain any applicable errors for the app.
The LogDirectory represents the directory in which the log file will be stored, and can be changed to output to a different location. The FileSize is the maximum size in KiB to which the log file can grow, and can be adjusted if a larger size is needed to prevent overwriting of logging output.
Adjust the log level as needed for more or less verbose logging, using the fields available with System Diagnostics.
To disable this error log, delete the OTEL_DIAGNOSTICS.json file.
If you are using the Honeycomb OpenTelemetry Distribution, warnings will appear in the console if you are missing an API Key or Dataset.
If no errors appear but your data is not in Honeycomb as expected, use a ConsoleExporter to print your spans to the console.
This will help confirm whether your app is being instrumented with the data you expect.
First, add the ConsoleExporter package:
dotnet add package OpenTelemetry.Exporter.Console --prerelease
Then add the ConsoleExporter to your configuration:
using var tracerProvider = OpenTelemetry.Sdk.CreateTracerProviderBuilder()
    .AddHoneycomb(options)
    .AddConsoleExporter() // for debugging
    .Build();
Keep in mind that printing to the console is not recommended for production and should only be used for debugging purposes.
You may receive a 464 error response from the Honeycomb API when sending telemetry using gRPC and HTTP1.
The gRPC format depends on using HTTP2 and any request over HTTP1 will be rejected by the Honeycomb servers.
Previous versions of the Honeycomb OpenTelemetry Distribution had a package called Honeycomb.OpenTelemetry.AutoInstrumentations, which was renamed to Honeycomb.OpenTelemetry.CommonInstrumentations.
The name change of the instrumentation package is intended to better reflect the purpose of the package, but is a potential upgrade step to ensure the appropriate package is being used.
Refer to our Releases page on GitHub as needed.
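When upgrading, the swap might look like this sketch using the dotnet CLI:
dotnet remove package Honeycomb.OpenTelemetry.AutoInstrumentations
dotnet add package Honeycomb.OpenTelemetry.CommonInstrumentations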
Previous versions of OpenTelemetry used AddOpenTelemetryTracing and AddOpenTelemetryMetrics to add tracing and metrics. AddOpenTelemetryTracing has been replaced with AddOpenTelemetry().WithTracing. AddOpenTelemetryMetrics has been replaced with AddOpenTelemetry().WithMetrics.
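A minimal before-and-after sketch, assuming the OpenTelemetry.Extensions.Hosting package and a console exporter purely for illustration:
using Microsoft.Extensions.DependencyInjection;
using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

var services = new ServiceCollection();

// Previously: services.AddOpenTelemetryTracing(tracing => tracing.AddConsoleExporter());
// Previously: services.AddOpenTelemetryMetrics(metrics => metrics.AddConsoleExporter());

// Current API:
services.AddOpenTelemetry()
    .WithTracing(tracing => tracing.AddConsoleExporter())
    .WithMetrics(metrics => metrics.AddConsoleExporter());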
The service name is a required configuration value.
If it is unspecified, all trace data will be sent to a default dataset called unknown_service.
If a "This trace has multiple spans sharing the same non-null span ID" error appears in Honeycomb, it is likely that your application is not instrumented correctly and is sending the same trace to Honeycomb more than once.
One possible misconfiguration is initializing OpenTelemetry more than once. Make sure to only initialize OpenTelemetry once when the application starts, and then use the Tracing API throughout the application runtime to add manual instrumentation.
If no errors appear but your data is not in Honeycomb as expected, you can set the DEBUG environment variable, which will both log the distribution configuration to stdout and configure spans to be output to stdout.
This will help confirm whether your app is being instrumented with the data you expect.
export DEBUG=true
Keep in mind that printing to the console should only be used for debugging purposes. It is not recommended for production.
By default, the Honeycomb Distribution uses a secure exporter. To export to an insecure endpoint, such as a local collector on the same network, set the Insecure option for the exporter with an environment variable:
export OTEL_EXPORTER_OTLP_INSECURE=true
To set the Insecure option in code instead of an environment variable:
otelconfig.ConfigureOpenTelemetry(otelconfig.WithExporterInsecure(true))
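A fuller sketch in context, assuming the otel-config-go module used by the Honeycomb Go distribution (github.com/honeycombio/otel-config-go/otelconfig):
package main

import (
    "log"

    "github.com/honeycombio/otel-config-go/otelconfig"
)

func main() {
    // Export to an insecure (non-TLS) endpoint, such as a local Collector.
    otelShutdown, err := otelconfig.ConfigureOpenTelemetry(
        otelconfig.WithExporterInsecure(true),
    )
    if err != nil {
        log.Fatalf("error setting up OpenTelemetry: %v", err)
    }
    defer otelShutdown()
}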
Honeycomb supports receiving telemetry data via OpenTelemetry’s native protocol, OTLP, over gRPC, HTTP/protobuf, and HTTP/JSON. The minimum supported versions of OTLP protobuf definitions are 0.7.0 for traces and metrics.
If the protobuf version in use by the SDK does not match a supported version by Honeycomb, a different version of the SDK may need to be used. If the SDK’s protobuf version is older than the minimum supported version, and telemetry is not appearing as expected in Honeycomb, upgrade the SDK to a version with the supported protobuf definitions. If using an added dependency on a proto library, ensure the version of protobuf definitions matches the supported version of the SDK.
You may receive a 464 error response from the Honeycomb API when sending telemetry using gRPC and HTTP1.
The gRPC format depends on using HTTP2 and any request over HTTP1 will be rejected by the Honeycomb servers.
If no errors appear in the console but your data is not in Honeycomb as expected, use a ConsoleSpanExporter to print your spans to the console.
This will help confirm whether your app is being instrumented with the data you expect.
As shown in the OpenTelemetry SDK for Ruby, configuration options can be set via environment variables or programmatically.
To use an environment variable to print to the console, set the variable before calling configure:
require 'opentelemetry/sdk'
require 'opentelemetry/exporter/otlp'
require 'opentelemetry/instrumentation/all'

ENV['OTEL_TRACES_EXPORTER'] = 'console' # for debugging

OpenTelemetry::SDK.configure do |c|
  c.use_all()
end
Alternatively, add the ConsoleSpanExporter to your configuration:
OpenTelemetry::SDK.configure do |c|
  c.add_span_processor(
    OpenTelemetry::SDK::Trace::Export::SimpleSpanProcessor.new(
      OpenTelemetry::SDK::Trace::Export::ConsoleSpanExporter.new
    )
  )
  ...
end
Keep in mind that printing to the console is not recommended for production and should only be used for debugging purposes.
The service name is a required configuration value.
If it is unspecified, all trace data will be sent to a default dataset called unknown_service.
If a "This trace has multiple spans sharing the same non-null span ID" error appears in Honeycomb, it is likely that your application is not instrumented correctly and is sending the same trace to Honeycomb more than once.
One possible misconfiguration is initializing OpenTelemetry more than once. Make sure to only initialize OpenTelemetry once when the application starts, and then use the Tracing API throughout the application runtime to add manual instrumentation.
The OpenTelemetry libraries may produce an error on Apple Mac M1 computers.
/rubygems/core_ext/kernel_require.rb:92:in `require': cannot load such file -- google/protobuf_c (LoadError)
RubyGems installs this gem with native extensions by default. Reinstalling the gem without these extensions may resolve this error. First, uninstall the native version of the google-protobuf library.
gem uninstall google-protobuf
Then, install the google-protobuf library for the ruby platform.
gem install google-protobuf --platform=ruby
Honeycomb supports receiving telemetry data via OpenTelemetry’s native protocol, OTLP, over gRPC, HTTP/protobuf, and HTTP/JSON. The minimum supported versions of OTLP protobuf definitions are 0.7.0 for traces and metrics.
If the protobuf version in use by the SDK does not match a supported version by Honeycomb, a different version of the SDK may need to be used. If the SDK’s protobuf version is older than the minimum supported version, and telemetry is not appearing as expected in Honeycomb, upgrade the SDK to a version with the supported protobuf definitions. If using an added dependency on a proto library, ensure the version of protobuf definitions matches the supported version of the SDK.
You may receive a 464 error response from the Honeycomb API when sending telemetry using gRPC and HTTP1.
The gRPC format depends on using HTTP2 and any request over HTTP1 will be rejected by the Honeycomb servers.
Troubleshoot issues related to LibHoney.
If you do not specify a dataset, event data will be sent to a dataset called unknown_dataset.
If using Honeycomb Classic without specifying a dataset, you will get an error at runtime and no data will be sent.
Our Python SDK supports an optional debug mode.
Simply pass debug=True to the init function at startup to get verbose logging of events sent to and responses from the Honeycomb API.
Popular servers like uWSGI and Gunicorn utilize a pre-fork model where requests are delegated to separate Python processes.
Initializing the SDK before the fork happens can lead to a state where events cannot be sent. To initialize the SDK correctly, you will need to run your init code inside a post-fork hook.
Users of uWSGI can use a postfork decorator.
Simply add the @postfork decorator to the function that initializes the SDK, and it will be executed post-fork.
import logging
import os

import libhoney
from uwsgidecorators import postfork

@postfork
def init_libhoney():
    logging.info(f'libhoney initialization in process pid {os.getpid()}')
    libhoney.init(writekey="YOUR_API_KEY", dataset="honeycomb-uwsgi-example", debug=True)
Gunicorn users can define a post_worker_init function in the Gunicorn configuration, and initialize the SDK there.
# conf.py
import logging
import os

import libhoney

def post_worker_init(worker):
    logging.info(f'libhoney initialization in process pid {os.getpid()}')
    libhoney.init(writekey="YOUR_API_KEY", dataset="honeycomb-gunicorn-example", debug=True)
Then start gunicorn with the -c option:
gunicorn -c /path/to/conf.py
Celery uses a pre-fork approach to create worker processes.
You can specify a worker_process_init decorated function to initialize the Python SDK after each worker has started.
import logging
import os

import libhoney
from celery.signals import worker_process_init

@worker_process_init.connect
def initialize_honeycomb(**kwargs):
    logging.info(f'libhoney initialization in process pid {os.getpid()}')
    libhoney.init(writekey="YOUR_API_KEY", dataset="honeycomb-celery-example", debug=True)
Troubleshoot issues related to Honeytail.
Below, find some general debugging tips when trying to send data to Honeycomb.
“Datasets” are created when we first begin receiving data under a new “Dataset Name” (used/specified by all of our SDKs and agents).
If you do not see an expected dataset yet, our servers most likely have not yet received anything from you. To figure out why, the simplest step is to add a --debug flag to your honeytail call. This should output information about whether lines are being parsed, failing to send to our servers, or whether honeytail is receiving any input at all.
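For example, keeping your existing flags and adding --debug (the parser, file path, and dataset name here are placeholders):
honeytail --debug \
  --parser=json \
  --file=/var/log/myapp/app.log \
  --writekey=YOUR_API_KEY \
  --dataset=my-dataset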
Another useful thing to try may be to add --status_interval=1 to your flags, which will output a line like the below each second (newlines added for legibility):
INFO[0002] Summary of sent events avg_duration=295.783µs
count_per_status=map[400:10]
errors=map[]
fastest=259.689µs
response_bodies=map[request body is too large:10]
slowest=348.297µs
total=10
The total here is the number of events sent to Honeycomb; the rest are stats characterizing how those events were sent and received. (A total=0 value would clue us into the fact that honeytail is not sending any events at all.)
In the line above, we see that events were, in fact, invalid and being rejected by the server.
When using honeytail, the --dataset (-d for short) argument will determine the name of the dataset created on Honeycomb’s servers.
If you are writing into an existing dataset, the quickest way to check for new data is to run a COUNT query over the last 30 minutes.
If your new events do not appear, try the --debug or --status_interval=1 flags described above. (Change the status interval from 1 to 5 to see the summary every 5 seconds.)
honeytail does not seem to be progressing on my log file
Are you trying to send data from an existing file? honeytail’s default behavior is to watch files and process newly-appended data. If you are attempting to send data from an existing file, make sure to use the --backfill flag. This flag will make sure honeytail begins reading the file from the beginning and exits when finished.
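For example, to read an existing file from the beginning (placeholder parser, paths, and names):
honeytail --backfill \
  --parser=json \
  --file=/var/log/myapp/old.log \
  --writekey=YOUR_API_KEY \
  --dataset=my-dataset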
Our JSON parser makes a best-effort attempt to parse and understand timestamps in your JSON logs. Take a look at the Timestamp parsing section of the JSON docs to see timestamp formats understood by default.
If you suspect your timestamp format is unconventional, or the time field is keyed by an unconventional field name, providing --json.timefield and --json.format arguments will nudge honeytail in the right direction.
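For example, if your events keep their timestamp in a field named ts (a hypothetical field name; the format string must match what actually appears in your logs):
honeytail --parser=json \
  --file=/var/log/myapp/app.log \
  --writekey=YOUR_API_KEY \
  --dataset=my-dataset \
  --json.timefield=ts \
  --json.format="2006-01-02T15:04:05Z07:00"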
First, check out honeytail’s General Troubleshooting section for general debugging tips.
log_format <format> not found in given config
Make sure the file referenced by --nginx.conf contains your log format definitions. The log format definition should look something like the example below, and should contain whatever format name you are passing to --nginx.format:
log_format combined '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
… which defines the output log format for the log format name “combined.”
Note that, in more advanced nginx setups, it is possible for the log format to be defined in an overall nginx.conf file, while a different config file (maybe under, say, /etc/nginx/sites-enabled/api.conf) tells nginx how to output the access_log and which format to use. In this case, you will want to make sure to use the config file containing the log_format line for the --nginx.conf argument.
--debug reveals failed to parse nginx log line messages
If your log format has fields that are likely to have spaces in them, make sure to surround that field with quotes. For example, if $my_upstream_var is likely to contain spaces, you will want to change this:
log_format main '$remote_addr $host $my_upstream_var $request $other_field';
to a log_format with quotes:
log_format main '$remote_addr $host "$my_upstream_var" $request $other_field';
You can make sure that your quotes had the right effect by peeking at the nginx logs your server is outputting to check that the $my_upstream_var value is correctly surrounded by quotes. It is good practice to put any variable that comes from an HTTP header in double quotes, because you are depending on whoever is sending you traffic to put only one string in the header.
Some headers also default to multiple words.
For example, the $http_authorization header is represented by a - if it is absent and is two words (Basic abcdef123456) when present.
First, check out honeytail’s General Troubleshooting section for general debugging tips.
--debug does not seem to show anything useful
Take a look at the --file being handed to honeytail and make sure it looks like a MySQL slow query log, with blocks of comments containing metadata alternating with the MySQL commands issued.
An example excerpt from a MySQL slow query log might look like:
# Time: 151008 0:31:03
# User@Host: rails[rails] @ [10.252.10.158]
# Query_time: 0.000547 Lock_time: 0.000019 Rows_sent: 1 Rows_examined: 938
use rails;
SET timestamp=1444264263;
SELECT `app_data`.* FROM `app_data` WHERE (`app_data`.user_id = 69213) LIMIT 1;
Did you remember to SET the GLOBAL long_query_time?
Our parser relies on reading your server’s slow query logs, which contain much more valuable metadata than the general log—and the default slow query threshold is 10 seconds.
Try checking the output of:
mysql> SELECT @@GLOBAL.long_query_time;
If it is not 0, take another look at the steps to Configure MySQL Query Logging.
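Those steps are covered in Configure MySQL Query Logging; as a rough sketch, lowering the threshold and enabling the slow query log looks something like:
mysql> SET GLOBAL long_query_time = 0;
mysql> SET GLOBAL slow_query_log = 'ON';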
First, check out honeytail’s General Troubleshooting section for general debugging tips.
--debug does not seem to show anything useful
Take a look at the --file being handed to honeytail and make sure it contains PostgreSQL query statements.
An example excerpt from a PostgreSQL log file might look like:
2017-11-10 23:24:01 UTC [1998-1] LOG: autovacuum launcher started
2017-11-10 23:24:01 UTC [2000-1] [unknown]@[unknown] LOG: incomplete startup packet
2017-11-10 23:24:02 UTC [2003-1] postgres@postgres LOG: duration: 4.356 ms statement: SELECT d.datname as "Name",
pg_catalog.pg_get_userbyid(d.datdba) as "Owner",
pg_catalog.pg_encoding_to_char(d.encoding) as "Encoding",
d.datcollate as "Collate",
d.datctype as "Ctype",
pg_catalog.array_to_string(d.datacl, E'\n') AS "Access privileges"
FROM pg_catalog.pg_database d
ORDER BY 1;
Also check that the value you are passing in the --postgresql.log_line_prefix flag matches PostgreSQL’s configured value, which you can find using SHOW log_line_prefix at a psql prompt:
# SHOW log_line_prefix;
log_line_prefix
---------------------
%t [%p-%l] %q%u@%d
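Then pass that same value to honeytail; for example, with placeholder file, key, and dataset values:
honeytail --parser=postgresql \
  --file=/var/log/postgresql/postgresql.log \
  --writekey=YOUR_API_KEY \
  --dataset=my-dataset \
  --postgresql.log_line_prefix='%t [%p-%l] %q%u@%d'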