Getting RDS logs for MySQL into Honeycomb

Amazon’s Relational Database Service (RDS) lets you use a number of databases without having to administer them yourself. The Honeycomb RDS connector gives you access to the same data as if you were running MySQL on your own server.

The Honeycomb RDS connector surfaces attributes like:

- the raw query and its normalized form (normalized_query)
- query time and lock time
- rows examined and rows sent
- the client host and user that issued the query

Honeycomb allows you to calculate metrics and statistics on the fly while retaining the full-resolution log lines (and the original MySQL query that started it all).

Once you’ve got data flowing, take a look at our starter queries; these entry points offer our recommendations for comparing lock retention by normalized query, scan efficiency by table, or read vs. write distribution by host.

rdslogs CLI integration

rdslogs is a CLI tool that polls the RDS API for database instance logs, parses them, and submits them to Honeycomb.

Download the RDS connector (rdslogs)

rdslogs will stream the MySQL slow query log from RDS or download older log files. It can stream them to STDOUT or directly to Honeycomb. You can view the rdslogs source here.
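As a quick sanity check before sending anything to Honeycomb, you can stream the parsed slow query log to STDOUT. A minimal sketch (the stdout value mirrors the honeycomb value used for --output below; the instance identifier and region are placeholders for your own values):

rdslogs \
    -i <instance-identifier> \
    --region=<region-code> \
    --output=stdout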

Get and verify the current Linux version of rdslogs:

wget -q https://honeycomb.io/download/rdslogs/rdslogs_1.108_amd64.deb && \
      echo 'f289b871552170a88e8f5a545d4587343acda1e208a73ef92ae4dda8aa477a5d  rdslogs_1.108_amd64.deb' | sha256sum -c && \
      sudo dpkg -i rdslogs_1.108_amd64.deb

Stream current logs to Honeycomb

Use the rdslogs command with the --output flag set to honeycomb to connect to RDS and send data from the current log to Honeycomb.

You will need the following information:

- your RDS instance identifier (from the instances page of the RDS Console)
- the AWS region code for your instance
- your Honeycomb API key (from your Honeycomb account page)

rdslogs \
    -i <instance-identifier> \
    --region=<region-code> \
    --output=honeycomb \
    --writekey=YOUR_API_KEY \
    --dataset='RDS MySQL'

If you are logged in to Honeycomb, the write key above is pre-populated with the first write key for your team.

Use --sample_rate to send only a subset of your data (1 out of every N log lines; the default, N=1, sends every line). Sampling in Honeycomb is described in detail in Sampling high volume data.
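For example, appending the flag to the streaming command above sends roughly one in five log lines (the value 5 here is purely illustrative):

rdslogs \
    -i <instance-identifier> \
    --region=<region-code> \
    --output=honeycomb \
    --writekey=YOUR_API_KEY \
    --dataset='RDS MySQL' \
    --sample_rate=5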

Backfill existing logs

If you’re getting started with Honeycomb, you can load the past 24 hours of logs into Honeycomb to start finding interesting things right away. Launch this command to run in the background (it will take some time) while you hook up the live stream. (However, if you just now enabled the slow query log, you won’t have the past 24 hours of logs. You can skip this step and go straight to streaming.)

The following command will download all available slow query logs to a newly created slow_logs directory and then start up honeytail to send the parsed events to Honeycomb. You’ll need your RDS instance identifier (from the instances page of the RDS Console) and your Honeycomb API key (from your Honeycomb account page).

mkdir slow_logs && \
    rdslogs \
    -i <instance-identifier> \
    --download --download_dir=slow_logs && \
    honeytail \
    --writekey=YOUR_API_KEY \
    --dataset='RDS MySQL' \
    --parser=mysql \
    --file='slow_logs/*' \
    --backfill

Once you’ve finished backfilling your old logs, we recommend transitioning to the default streaming behavior to stream current logs.

Scrub personally identifiable information

We believe strongly in the value of being able to track down the precise query causing a problem. At the same time, we understand the concerns around exporting log data that may contain sensitive user information, so rdslogs gives you the option of hashing the query contents.

To hash the concrete query, add the flag --scrub_query. The normalized_query attribute will still be representative of the shape of the query and identifying patterns (including specific queries) will still be possible, but the sensitive information will be completely obscured before leaving your servers.
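For example, adding the flag to the streaming command hashes each raw query before it is sent:

rdslogs \
    -i <instance-identifier> \
    --region=<region-code> \
    --output=honeycomb \
    --writekey=YOUR_API_KEY \
    --dataset='RDS MySQL' \
    --scrub_query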

For more information about dropping or scrubbing sensitive fields, see “Dropping or scrubbing fields” in the Agent documentation section.

Agentless Integration for MySQL Logs

As an alternative to using the rdslogs CLI tool, you can configure your RDS instance to mirror its logs to Cloudwatch Logs, then install the Agentless Integration for MySQL Logs. This integration is a Lambda function subscribed to your instance’s RDS Log Group. It parses log events as they arrive and submits them to Honeycomb. Note that configuring your RDS instance to send its logs to Cloudwatch will incur additional costs.

Before you run the RDS connector

Before running the RDS connector, configure MySQL running on RDS to output the slow query log to a file. Refer to Amazon’s documentation on setting Parameter Groups to get started, and find more detail about the configuration options below in the MySQL docs for the slow query log.

Set the following options in the Parameter Group:

- slow_query_log = 1 to enable the slow query log
- long_query_time set to the threshold (in seconds) above which queries are logged; lower values capture more queries
- log_output = FILE so the log is written to a file rather than a table

If you switch to a new Parameter Group when you make these changes, make sure you restart the database.
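If you manage Parameter Groups with the AWS CLI rather than the console, a sketch of the equivalent change might look like the following; the parameter group name is a placeholder and the long_query_time value is just an example threshold:

aws rds modify-db-parameter-group \
    --db-parameter-group-name <parameter-group-name> \
    --parameters \
        "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
        "ParameterName=long_query_time,ParameterValue=1,ApplyMethod=immediate" \
        "ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"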

Next, enable publishing of MySQL slow query logs to AWS Cloudwatch Logs. You can do this in the RDS console in the instance configuration. See the AWS docs for full details.

Configuring Cloudwatch Logs in RDS

This change can be done without instance downtime. Once you’ve made the above changes, verify that logs are being received by Cloudwatch Logs via the Cloudwatch Logs Console.
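If you prefer the CLI to the console for this step, a rough sketch of enabling the slow query log export and then checking that the log group has appeared might look like this (the instance identifier is a placeholder):

# enable export of the slowquery log to Cloudwatch Logs
aws rds modify-db-instance \
    --db-instance-identifier <instance-identifier> \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["slowquery"]}'

# confirm that log streams exist in the RDS slowquery log group
aws logs describe-log-streams \
    --log-group-name /aws/rds/instance/<instance-identifier>/slowquery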

Install the Agentless Integration for MySQL Logs

The MySQL integration exists as an AWS Lambda function deployed in your AWS account. It subscribes to the Cloudwatch Log Group created by RDS, parses log lines, and submits them as events to Honeycomb. You can view the source here.

To install the integration, you will need:

- your Honeycomb API key (from your Honeycomb account page)
- the name of the Cloudwatch Log Group that contains your RDS slow query logs

To get started, click this AWS quick-create link. This will launch the Cloudformation Stack creation wizard and will prompt you for a few key inputs, including the Cloudwatch Log Group name, your Honeycomb API key, the dataset to send events to, and optional parameters such as ScrubQuery (covered below).

Installing MySQL Integration

Scrub personally identifiable information

To hash the concrete query, set the ScrubQuery parameter to true when installing the integration. The normalized_query attribute will still be representative of the shape of the query and identifying patterns (including specific queries) will still be possible, but the sensitive information will be completely obscured before being submitted to our API.
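If you deploy the stack with the AWS CLI instead of the quick-create link, the same option can be passed as a stack parameter. A rough sketch only, with the template URL and any other required parameters left as placeholders for the values from the quick-create wizard:

aws cloudformation create-stack \
    --stack-name honeycomb-mysql-logs \
    --template-url <quick-create-template-url> \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=ScrubQuery,ParameterValue=true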