Getting RDS PostgreSQL logs into Honeycomb

Amazon’s Relational Database Service (RDS) lets you use a number of databases without having to administer them yourself. The Honeycomb RDS connector gives you access to the same data as if you were running PostgreSQL on your own server.

Honeycomb allows you to calculate metrics and statistics on the fly while retaining the full-resolution log lines (and the original query that started it all).

Note: Run the following commands from any Linux host with the appropriate AWS credentials to access the RDS API.
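Any of the standard AWS credential mechanisms should work. For example, you can export credentials as environment variables (the values below are placeholders), or configure a profile with aws configure:

export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_REGION=<region-code>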

Before you run the RDS connector

Before running the RDS connector, configure your RDS PostgreSQL instance to output queries in its log file. Refer to Amazon’s documentation on setting Parameter Groups to get started.

In the Parameter Group, set the option that enables query logging.
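In PostgreSQL, query logging is governed by the log_min_duration_statement parameter: 0 logs every statement, while a higher value (in milliseconds) logs only slower queries. As a sketch, assuming a custom parameter group named my-postgres-params, you could apply it with the AWS CLI:

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-postgres-params \
    --parameters "ParameterName=log_min_duration_statement,ParameterValue=0,ApplyMethod=immediate"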

If you switch to a new Parameter Group when you make this change, make sure you restart the database.
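The restart can be done from the RDS Console or, for example, with the AWS CLI (the instance identifier is a placeholder):

aws rds reboot-db-instance --db-instance-identifier <instance-identifier>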

Once you’ve made the change, verify that you are getting RDS logs via the RDS Console.
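If you prefer the command line, a quick check is to list the instance’s log files and fetch the tail of the most recent one (instance identifier and log file name are placeholders):

aws rds describe-db-log-files --db-instance-identifier <instance-identifier>
aws rds download-db-log-file-portion \
    --db-instance-identifier <instance-identifier> \
    --log-file-name <log-file-name>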

Download the RDS connector (rdslogs)

rdslogs will stream the current PostgreSQL query log from RDS or download older log files. The rdslogs source is available on GitHub.

Get and verify the current Linux version of rdslogs:

wget -q https://honeycomb.io/download/rdslogs/rdslogs_1.108_amd64.deb && \
      echo 'f289b871552170a88e8f5a545d4587343acda1e208a73ef92ae4dda8aa477a5d  rdslogs_1.108_amd64.deb' | sha256sum -c && \
      sudo dpkg -i rdslogs_1.108_amd64.deb
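As a quick sanity check after installing, dpkg can confirm the package is registered and show where the binary was placed:

dpkg -s rdslogs && dpkg -L rdslogs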

Stream current logs to Honeycomb

Use the rdslogs command with the --output flag set to honeycomb to connect to RDS and send data from the current log to Honeycomb.

You will need your RDS instance identifier (from the instances page of the RDS Console), the AWS region it runs in, and your Honeycomb API key (from your Honeycomb account page):

rdslogs \
    -i <instance-identifier> \
    --region=<region-code> \
    --output=honeycomb \
    --writekey=YOUR_API_KEY \
    --dataset='RDS PostgreSQL' \
    --dbtype=postgresql

If you are logged in to Honeycomb, the write key above is pre-populated with the first write key for your team; otherwise, replace YOUR_API_KEY with a write key from your Honeycomb account page.

Use --sample_rate to send only a subset of your data (1 in N log lines; N defaults to 1, meaning every line is sent). Sampling in Honeycomb is described in detail in Sampling high volume data.
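For example, to keep roughly 1 in 20 log lines, add the flag to the streaming command above (the value 20 is just an illustration; tune it to your traffic):

rdslogs \
    -i <instance-identifier> \
    --region=<region-code> \
    --output=honeycomb \
    --writekey=YOUR_API_KEY \
    --dataset='RDS PostgreSQL' \
    --dbtype=postgresql \
    --sample_rate=20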

Scrub personally identifiable information

We believe strongly in the value of being able to track down the precise query causing a problem. At the same time, we understand the concerns around exporting log data that may contain sensitive user information, so you have the option of hashing the contents of each logged query.

To hash the concrete query, add the flag --scrub_query. The normalized_query attribute will still be representative of the shape of the query and identifying patterns (including specific queries) will still be possible, but the sensitive information will be completely obscured before leaving your servers.
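As a sketch, this is just the streaming command from above with the one extra flag:

rdslogs -i <instance-identifier> --region=<region-code> --output=honeycomb \
    --writekey=YOUR_API_KEY --dataset='RDS PostgreSQL' --dbtype=postgresql --scrub_query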

For more information about dropping or scrubbing sensitive fields, see “Dropping or scrubbing fields” in the Agent documentation section.

Backfill existing logs

If you’re getting started with Honeycomb, you can load the past 24 hours of logs into Honeycomb to start finding interesting things right away. Launch this command to run in the background (it will take some time) while you hook up the live stream. (However, if you just now enabled the slow query log, you won’t have the past 24 hours of logs. You can skip this step and go straight to streaming.)

The following commands will download all available slow query logs to a newly created slow_logs directory and then start up honeytail to send the parsed events to Honeycomb. You’ll need your RDS instance identifier (from the instances page of the RDS Console) and your Honeycomb API key (from your Honeycomb account page).

mkdir slow_logs && \
rdslogs \
    -i <instance-identifier> \
    --download \
    --download_dir=slow_logs \
    --dbtype=postgresql && \
honeytail \
    --writekey=YOUR_API_KEY \
    --dataset='RDS PostgreSQL' \
    --parser=postgresql \
    --postgresql.log_line_prefix="%t:%r:%u@%d:[%p]:" \
    --file='slow_logs/*' \
    --backfill

Once you’ve finished backfilling your old logs, we recommend transitioning to the default streaming behavior to stream current logs.