Our connector pulls your PostgreSQL logs into Honeycomb so you can analyze the PostgreSQL traffic on your machines and quickly get a handle on the database queries triggered by your application logic. It surfaces attributes like:
- The normalized query shape
- Time spent executing the query
- Transaction ID
- Client information
- … and more!
This document is for running PostgreSQL directly.
If running PostgreSQL on RDS, Honeycomb offers support for ingesting RDS PostgreSQL logs via CloudWatch Logs with the option to convert these unstructured logs into structured logs.
Otherwise, read on to ingest your PostgreSQL logs with honeytail.
Configure PostgreSQL Query Logging
Before running honeytail, turn on slow query logging for all queries if possible.
To turn on slow query logging, edit your postgresql.conf and set:
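A minimal sketch of the relevant settings (illustrative values; leave the rest of your logging configuration unchanged):

```
# postgresql.conf
log_min_duration_statement = 0   # 0 logs every statement, along with its duration
log_statement = 'none'           # let log_min_duration_statement drive query logging
```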
log_statement controls which types of queries are logged, but it is superseded when log_min_duration_statement is set to 0, which effectively logs all queries.
Setting log_statement to any other value will change the format of the query logs in a way that is not currently supported by the Honeycomb PostgreSQL parser. You can also apply these settings from a psql shell, as shown below.
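A sketch of the equivalent commands from a psql shell (assumes a role allowed to run ALTER SYSTEM; the settings take effect after the configuration is reloaded):

```
-- run from psql as a superuser
ALTER SYSTEM SET log_min_duration_statement = 0;
ALTER SYSTEM SET log_statement = 'none';
SELECT pg_reload_conf();  -- reload so the new settings take effect
```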
Also take note of the log_line_prefix configuration line in your postgresql.conf. It will look something like this:
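For example (this value varies by distribution and configuration; copy yours exactly, including any trailing space):

```
# postgresql.conf
log_line_prefix = '%t [%p-%l] %q%u@%d '
```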
Install and Run Honeytail
On your PostgreSQL host, download and install the latest honeytail build for your platform:
- deb-amd64
- deb-arm64
- rpm
- bin-linux-amd64
- bin-linux-arm64
- bin-darwin-amd64
- source
Download the honeytail_1.10.0_amd64.deb package, verify the package, and install it. The packages install honeytail, its config file /etc/honeytail/honeytail.conf, and some start scripts.
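As a sketch of the Debian/Ubuntu path (the download URL and checksum verification step are omitted here; get the current release and its checksum from Honeycomb's download links above):

```
# install the downloaded, verified package; this also drops
# /etc/honeytail/honeytail.conf and the start scripts in place
sudo dpkg -i honeytail_1.10.0_amd64.deb
```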
Build honeytail from source if you need it in an unpackaged form or for ad-hoc use.
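A minimal sketch of an ad-hoc build (assuming a recent Go toolchain; the source lives at github.com/honeycombio/honeytail):

```
# clone the source and install the honeytail binary into $GOPATH/bin (or $GOBIN)
git clone https://github.com/honeycombio/honeytail.git
cd honeytail
go install ./...
```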
To consume the current slow query log from the beginning, run:
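A sketch of what that invocation might look like (YOUR_API_KEY and the log file path are placeholders; the --tail.read_from and --postgresql.log_line_prefix option names are assumptions, so confirm them with honeytail --help):

```
honeytail \
  --parser=postgresql \
  --writekey=YOUR_API_KEY \
  --dataset=postgresql \
  --file=/var/log/postgresql/postgresql-main.log \
  --tail.read_from=beginning \
  --postgresql.log_line_prefix='%t [%p-%l] %q%u@%d '
```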
Troubleshooting
Check out honeytail Troubleshooting for debugging tips.
Run Honeytail Continuously
To run honeytail continuously as a daemon process, first modify the configuration file /etc/honeytail/honeytail.conf and uncomment and set:
- ParserName to postgresql
- WriteKey to your API key, available from the account page
- LogFiles to the path for your PostgreSQL log file
- Dataset to the name of the dataset you wish to create with this log file
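After uncommenting, the values might look something like this (the section header below is an assumption; keep whatever section names your shipped /etc/honeytail/honeytail.conf already uses and only fill in these keys):

```
[Required Options]
; parse PostgreSQL slow query logs
ParserName = postgresql
; your Honeycomb API key, from the account page
WriteKey = YOUR_API_KEY
; path to the PostgreSQL log file to tail
LogFiles = /var/log/postgresql/postgresql-main.log
; name of the dataset to create with this log file
Dataset = postgresql
```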
Then, start honeytail using upstart or systemd:
- upstart
- systemd
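For example, assuming the packaged job and unit are both named honeytail:

```
# upstart
sudo initctl start honeytail

# systemd
sudo systemctl start honeytail
```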
Backfill Archived Logs
You may have archived logs that you would like to import into Honeycomb. If you have a log file located at /var/log/postgresql/postgresql-main.log, you can backfill using this command:
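A sketch of the backfill invocation (same placeholders as above; --backfill reads the file from the beginning and exits when it reaches the end):

```
honeytail \
  --parser=postgresql \
  --writekey=YOUR_API_KEY \
  --dataset=postgresql \
  --file=/var/log/postgresql/postgresql-main.log \
  --backfill
```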
You can read more about honeytail's backfill behavior here.
honeytail does not unzip log files, so you will need to decompress archived logs before backfilling.
Scrub Personally Identifiable Information
While we believe strongly in the value of being able to track down the precise query causing a problem, we understand the concerns around exporting log data that may contain sensitive user information. With that in mind, we recommend using honeytail's PostgreSQL parser, but adding a --scrub_field=query flag to hash the concrete query value.
The normalized_query attribute will still be representative of the shape of the query, and identifying patterns including specific queries will still be possible—but the sensitive information will be completely obscured before leaving your servers.
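For example, the same invocation as sketched above, with the concrete query hashed before it leaves your servers:

```
honeytail \
  --parser=postgresql \
  --writekey=YOUR_API_KEY \
  --dataset=postgresql \
  --file=/var/log/postgresql/postgresql-main.log \
  --scrub_field=query
```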
More information about dropping or scrubbing sensitive fields can be found here.