Connecting Logstash to Honeycomb

Thanks to Logstash’s flexible plugin architecture, you can send a copy of all the traffic that Logstash is processing to Honeycomb. This topic explains how to use Logstash plugins to convert incoming log data into events and then send them to Honeycomb.

Data format requirements

Honeycomb is at its best when the events you send are broad and capture lots of information about a given process or transaction. For guidance on how to think about building events, start with Building Better Events. To learn more, check out the rest of the “Event Foo” series on our blog.

You can use Logstash filter plugins to process incoming log data into Honeycomb events. These plugins transform the data into top-level keys based on its original format. We’ve found these to be especially useful:

  • grok matches regular expressions and ships with predefined patterns for many common log formats (such as apache, nginx, or haproxy).
  • json parses JSON-encoded strings and breaks them out into individual fields.
  • kv matches key=value pairs and breaks them out into individual fields.

To add and configure filter plugins, refer to Working with Filter Plugins on the Logstash documentation site.
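As a quick illustration, here is a minimal filter block combining the json and kv plugins. This is only a sketch: the params field name is hypothetical, and the right source fields depend on how your data arrives.

filter {
  json {
    # Parse the JSON document in the "message" field into top-level fields.
    source => "message"
  }
  kv {
    # Break key=value pairs out of a (hypothetical) "params" field
    # into individual top-level fields.
    source => "params"
  }
}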

Example: Using Logstash filter plugins to process haproxy logs for Honeycomb ingestion

Let’s say you’re sending haproxy logs (in HTTP mode) to Logstash. A log line describing an individual request looks something like this (borrowed from the haproxy config manual):

Feb  6 12:14:14 localhost \
          haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in \
          static/srv1 10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0 {1wt.eu} \
          {} "GET /index.html HTTP/1.1"

Logstash puts this line in a message field. In the filter block of the Logstash pipeline config fragment below, we use the grok filter plugin and tell it to parse the message and make all of its content available as top-level fields. And, since we don’t need the raw line anymore, we tell grok to remove the message field.

The mutate filter plugin takes the numeric fields extracted from the haproxy log line and converts them to integers so that Honeycomb can do math on them later.

filter {
  grok {
    # Parse the haproxy HTTP log line into top-level fields using the
    # predefined HAPROXYHTTP pattern, then drop the raw message.
    match => ["message", "%{HAPROXYHTTP}"]
    remove_field => ["message"]
  }
  mutate {
    # grok captures everything as strings; convert the numeric fields
    # to integers so Honeycomb can aggregate them.
    convert => {
      "actconn" => "integer"
      "backend_queue" => "integer"
      "beconn" => "integer"
      "bytes_read" => "integer"
      "feconn" => "integer"
      "http_status_code" => "integer"
      "retries" => "integer"
      "srv_queue" => "integer"
      "srvconn" => "integer"
      "time_backend_connect" => "integer"
      "time_backend_response" => "integer"
      "time_duration" => "integer"
      "time_queue" => "integer"
      "time_request" => "integer"
    }
  }
}
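After these filters run, the event that Logstash passes to its outputs carries the parsed values as top-level fields. For the sample log line above, the event would look roughly like this (abbreviated; the exact field names come from the HAPROXYHTTP grok pattern definition):

{
  "client_ip": "10.0.1.2",
  "frontend_name": "http-in",
  "backend_name": "static",
  "server_name": "srv1",
  "http_verb": "GET",
  "http_request": "/index.html",
  "http_status_code": 200,
  "bytes_read": 2750,
  "time_duration": 109
}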

Sending data to Honeycomb

Now that all the fields in the message are nicely extracted into events, send them on to Honeycomb! To send events, configure an output plugin.

You can use Logstash’s HTTP output plugin to craft HTTP requests to the Honeycomb API.

This config example sends the data to a dataset called “logstash.”

output {
  http {
    url => "https://api.honeycomb.io/1/batch/logstash"
    http_method => "post"
    headers => {
      "X-Honeycomb-Team" => "YOUR_API_KEY"
    }
    format => "json_batch"
    http_compression => true
  }
}
  • Specify a URL to send the data to: https://api.honeycomb.io/1/batch/<dataset_name>.
  • Add your Honeycomb Team API key so that Logstash is authorized to send data to Honeycomb.
  • Specify the output format as JSON batch.
  • Specify the use of HTTP compression.
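Rather than hard-coding the API key in the config file, you can have Logstash read it from the environment: pipeline configs support ${VAR} substitution. Assuming the key is exported as HONEYCOMB_API_KEY (a variable name chosen here for illustration), the header could instead be written as:

headers => {
  "X-Honeycomb-Team" => "${HONEYCOMB_API_KEY}"
}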

Then, restart Logstash. When it’s back up, you will find the new dataset on your landing page.
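How you restart depends on how Logstash is installed; on a systemd-managed host, for example, it would be something like:

sudo systemctl restart logstash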