
Getting JSON logs into Honeycomb

JSON is one of the most flexible formats in today’s data landscape, and our JSON connector is a great fit for your application’s custom log data.

Unstructured text logs are so 2009; whether you’re primarily using Honeycomb, JSON over Logstash, or some other JSON-friendly service, pointing your existing logs at Honeycomb is simple.

Data expectations

Honeycomb expects data with a flat structure. By default, any structure deeper than top-level keys is serialized, and a string representation of the content is used as the field’s value. However, Honeycomb can automatically unpack nested JSON objects and flatten them into unique columns. This is a per-dataset setting, and it is off by default. You must be a team owner to change this setting.

If you enable this setting, nested objects are flattened into new fields, with field names derived from the keys. For example, {"outer": {"inner": 42}} would become a field outer.inner with a value of 42.

To tell Honeycomb to automatically unpack JSON objects:

  1. Navigate to Settings > Schema for the dataset you want to configure.
  2. Check “Automatically unpack nested JSON”.
  3. Choose the “Maximum unpacking depth” for your data.

Changing this setting takes effect within 60 seconds.

Note: If your objects are deeply nested, unpacking may result in a very large number of columns in Honeycomb. Consider unpacking only to the level of columns you will find useful. Any objects nested more deeply than the depth you select here will be converted to strings under the last unpacked column. In particular, if nested structures in your data can be created or added by your users (for example, HTTP headers), consider not unpacking them to that level.
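For instance, with “Maximum unpacking depth” set to 1, a hypothetical event like

{"request": {"path": "/status", "headers": {"x-custom": "abc"}}}

would produce a field request.path with the value /status, plus a field request.headers containing the string {"x-custom": "abc"} rather than one column per header.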

Installation

Download and install the latest honeytail by running:

wget -q https://honeycomb.io/download/honeytail/linux/honeytail_1.733_amd64.deb && \
      echo 'bd135df2accd04d37df31aa4f83bd70227666690143829e2866de8086a1491d2  honeytail_1.733_amd64.deb' | sha256sum -c && \
      sudo dpkg -i honeytail_1.733_amd64.deb

The package installs honeytail, its config file /etc/honeytail/honeytail.conf, and some start scripts. The binary itself is honeytail, available if you need it in unpackaged form or for ad-hoc use.

You should modify the config file to uncomment and set, at minimum, the parser name, your write key, the log file to tail, and the target dataset (the same values passed as flags in the example commands below).
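A minimal sketch of those settings in /etc/honeytail/honeytail.conf might look like the following; the option names follow honeytail’s INI-style config and the log file path is a placeholder, so confirm both against the comments in your installed config file:

[Required Options]
; Parser module to use for this log format
ParserName = json
; Your Honeycomb API key
WriteKey = YOUR_API_KEY
; Placeholder path to the JSON log file to tail
LogFiles = /var/log/myapp/current.json
; Honeycomb dataset to send events to
Dataset = API Server Logs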

Launch the agent

Start up a honeytail process using upstart or systemd, or by launching the process by hand. This tails the log file specified in the config and leaves the process running as a daemon.

$ sudo initctl start honeytail
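The initctl command applies to upstart systems. On hosts running systemd, assuming the package installed a unit named honeytail, the equivalent is:

$ sudo systemctl start honeytail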

Backfilling archived logs

To backfill existing data, run honeytail with --backfill the first time:

honeytail -c /etc/honeytail/honeytail.conf \
  --file /var/log/myapp/log12.json \
  --backfill

This command can also be used at any point to backfill from older, rotated log files. You can read more about backfill behavior in the honeytail documentation.

Note: If you’ve chosen to backfill from old JSON logs, don’t forget to transition to the default streaming behavior to stream live logs to Honeycomb!

Timestamp parsing

Honeycomb expects all events to contain a timestamp field; if one is not provided, the server will associate the current time of ingest with the given payload.

By default, we look for a few candidate fields based on name (e.g. "timestamp", "time") and handle a number of common time formats.
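For example, a hypothetical log line like the following, which carries an RFC3339 value (one widely supported format) in a field named "timestamp", would be handled with no extra flags:

{"timestamp":"2016-08-12T15:12:06-08:00","color":"orange","size":3}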

If your timestamps aren’t handled correctly by default, use the --json.timefield and --json.format flags to tell honeytail where and how to extract the event’s timestamp.

For example, given a JSON log file with events like the following:

{"color":"orange","size":3,"server_time":"Aug 12 2016, 15:12:06 -0800"}
{"color":"blue","server_time":"Sep 01 2016, 06:10:32 -0800","size":4}

The command to consume those log lines (while retaining the "server_time" field as the event’s timestamp) would look something like:

honeytail --writekey=YOUR_API_KEY --dataset="API Server Logs" --parser=json \
  --file=/var/log/api_server.log \
  --json.timefield="server_time" --json.format="%b %d %Y, %k:%M:%S %z"

The --json.timefield="server_time" argument tells honeytail to consider the "server_time" value to be the canonical timestamp for the events in the specified file.

The --json.format argument specifies the timestamp format to be used while parsing. (It understands common strftime formats.)
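Mapping that format string onto the sample timestamps above: %b is the abbreviated month name (Aug), %d the day of the month (12), %Y the four-digit year (2016), %k the hour on a 24-hour clock (15), %M the minutes (12), %S the seconds (06), and %z the numeric time zone offset (-0800).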

Ultimately, the above command would produce events with the following fields (note that the times below are represented in UTC; Honeycomb parses time zone information if provided):

time                   color    size
2016-08-12T23:12:06Z   orange   3
2016-09-01T14:10:32Z   blue     4