
Getting AWS Elastic Load Balancer (ELB) logs into Honeycomb

Honeycomb provides an agentless integration for ingesting AWS Elastic Load Balancer (ELB)-based data. The integration runs as one or more Lambda functions, subscribed to PutObject events on your bucket.

The source is available on GitHub and instructions for getting started are provided here. Do you have a use case not covered here? Please open an issue.

If using an Application Load Balancer (ALB), you should use our ALB agentless integration instead.

Prerequisites

You will need permission to:

  • deploy a Cloudformation (CFN) stack with an IAM role in your AWS account
  • edit your S3 bucket events configuration

Install

To install, you can use the Honeycomb ELB AWS Cloudformation template. Selecting this template link launches the AWS Cloudformation console with the appropriate template to guide you through the installation process.

Cloudformation Stack Creation

ELB JSON Integration

This ELB integration accepts lines with arbitrary JSON, such as structured logs written in JSON format.

Provide the following required parameters:

  • Stack Name
  • S3 Bucket Name
  • Your Honeycomb API Key (optionally encrypted)
  • Honeycomb Dataset Name

Optional parameters to supply:

  • Sample rate
  • The ID of the AWS Key Management Service key used to encrypt your API Key. If your API Key is not encrypted, do not set a value here
  • List of fields to remove from any generated events sent to Honeycomb
  • Match and filter patterns to include or exclude S3 keys
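If you prefer launching the stack from the CLI instead of the console, the same parameters can be written as a Cloudformation parameters file. A minimal sketch: the `HoneycombWriteKey` name comes from the Encrypting Your API Key section below, while the other parameter names and all values are illustrative assumptions.

```shell
# Sketch: stack parameters as a CLI parameters file. Names other than
# HoneycombWriteKey are assumptions based on the fields listed above.
cat > params.json <<'EOF'
[
  { "ParameterKey": "S3BucketName",      "ParameterValue": "my-elb-logs" },
  { "ParameterKey": "HoneycombWriteKey", "ParameterValue": "abc123" },
  { "ParameterKey": "HoneycombDataset",  "ParameterValue": "elb-logs" },
  { "ParameterKey": "SampleRate",        "ParameterValue": "1" }
]
EOF
# Launch by pointing at the template behind the link above (requires the IAM
# permissions listed under Prerequisites):
# aws cloudformation create-stack --stack-name honeycomb-elb-integration \
#   --template-url <template-url> --capabilities CAPABILITY_IAM \
#   --parameters file://params.json
python3 -m json.tool params.json > /dev/null && echo "params.json is valid"
```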

Example Log Format

ELB logs are stored in the designated S3 bucket as JSON-formatted payloads that the Lambda function reads and extracts into Honeycomb events. The integration expects each line in the S3 files to contain a JSON object and nothing else.

{"field1": "data1", "field2": "data2", "field3": 12345, "field4": {"field5": false}}
{"field1": "data1", "field2": "data2", "field3": 12345, "field4": {"field5": false}}

Subscribing to Bucket Events

After installing the ELB integration, configure your bucket to trigger the Lambda after each PutObject event. To do this, access the S3 Console and follow these steps:

From the S3 console, select the bucket whose events you want to subscribe to. Then select the Properties tab.

S3 Console Bucket Properties

Find Advanced Settings and select Events.

S3 Console Advanced Settings

Under the Events section, enable Put and Complete Multipart Upload. Then, select the Lambda function belonging to the Honeycomb ELB integration. If you have multiple integrations, remember to select the integration belonging to the stack that has permissions to access your bucket.

Optionally, set a prefix and suffix if you want only a subset of objects to be processed by the integration, which is recommended if the bucket has multiple uses.

S3 Console Enable Events
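The same subscription can also be expressed as an S3 notification configuration and applied with the CLI. A sketch, with the bucket name, Lambda function ARN, and prefix as illustrative assumptions; the Put and Complete Multipart Upload events above map to the `s3:ObjectCreated:Put` and `s3:ObjectCreated:CompleteMultipartUpload` event types.

```shell
# Sketch: subscribing the integration's Lambda to bucket events via the CLI.
# Bucket name, function ARN, and prefix are illustrative assumptions.
cat > notification.json <<'EOF'
{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123455678910:function:S3LambdaHandler-example",
      "Events": [
        "s3:ObjectCreated:Put",
        "s3:ObjectCreated:CompleteMultipartUpload"
      ],
      "Filter": {
        "Key": { "FilterRules": [ { "Name": "prefix", "Value": "elb-logs/" } ] }
      }
    }
  ]
}
EOF
# Apply it (requires s3:PutBucketNotification on the bucket):
# aws s3api put-bucket-notification-configuration \
#   --bucket my-elb-logs --notification-configuration file://notification.json
python3 -m json.tool notification.json > /dev/null && echo "notification.json is valid"
```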

Encrypting Your API Key

When installing the integration, you must supply your Honeycomb API Key via a Cloudformation parameter. Cloudformation parameters are not encrypted and are plainly viewable to anyone with access to your Cloudformation stacks or Lambda functions. For this reason, we strongly recommend encrypting your Honeycomb API Key. To encrypt your key, use AWS Key Management Service (KMS).

First, you’ll need to create a KMS key if you don’t have one already. The default account keys are not suitable for this use case.

$ aws kms create-key --description "used to encrypt secrets"
{
    "KeyMetadata": {
        "AWSAccountId": "123455678910",
        "KeyId": "a38f80cc-19b5-486a-a163-a4502b7a52cc",
        "Arn": "arn:aws:kms:us-east-1:123455678910:key/a38f80cc-19b5-486a-a163-a4502b7a52cc",
        "CreationDate": 1524160520.097,
        "Enabled": true,
        "Description": "used to encrypt honeycomb secrets",
        "KeyUsage": "ENCRYPT_DECRYPT",
        "KeyState": "Enabled",
        "Origin": "AWS_KMS",
        "KeyManager": "CUSTOMER"
    }
}
# optionally, create an alias for the KMS key to describe the key's usage: 
$ aws kms create-alias --alias-name alias/secrets_key --target-key-id=a38f80cc-19b5-486a-a163-a4502b7a52cc

Save a file containing only your Honeycomb API Key to be passed into the encryption step. For example, if abc123 is your Honeycomb API Key and my-key is the name of the file, create the file like this:

$ echo -n abc123 > my-key
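The `-n` flag matters here: without it, a trailing newline would become part of the encrypted payload, and the key sent to Honeycomb after decryption would likely be rejected. A quick byte-count check:

```shell
printf '%s' abc123 > my-key   # same effect as echo -n: no trailing newline
wc -c < my-key                # 6 bytes, exactly the length of abc123
```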

Next, encrypt your Honeycomb API Key:

$ aws kms encrypt --key-id=a38f80cc-19b5-486a-a163-a4502b7a52cc --plaintext fileb://my-key
{
    "CiphertextBlob": "AQICAHge4+BhZ1sURk1UGUjTZxmcegPXyRqG8NCK8/schk381gGToGRb8n3PCjITQPDKjxuJAAAAcjBwBgkqhkiG9w0BBwagYzBhAgEAMFwGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM0GLK36ChLOlHQiiiAgEQgC9lYlR3qvsQEhgILHhT0eD4atgdB7UAMW6TIAJw9vYsPpnbHhqhO7V8/mEa9Iej+g==",
    "KeyId": "arn:aws:kms:us-east-1:702835727665:key/a38f80cc-19b5-486a-a163-a4502b7a52cc"
}

Record the CiphertextBlob and the last part of the KeyId from the encryption step. In the example above, the last part of the KeyId is a38f80cc-19b5-486a-a163-a4502b7a52cc. Enter the CiphertextBlob into the Cloudformation template as the HoneycombWriteKey. Enter the KeyId into the Cloudformation template as the KMSKeyId.

For more information about the need for the fileb:// prefix, see the AWS Reference Guide.

Troubleshooting

Integration Logs

The ELB integration is a normal Lambda function, which means you can see its metrics and log messages from the Lambda Console. Look for functions starting with S3LambdaHandler. From there, you can view error rate, latency, and Cloudwatch logs.

Missing Events

If not all of your events appear and sampling is not enabled, your S3 files may be too large to process inside the maximum Lambda runtime of 5 minutes. Some possible solutions:

  • Increase the LambdaMemorySize parameter in the stack creation screen. Lambda increases CPU proportionally with reserved memory, and allocating more CPU can allow the integration to process more data in less time.
  • Send smaller files more frequently. Lambda is meant to scale horizontally and can handle many small log files better than a few large ones.
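One way to act on the second suggestion is to split a large newline-delimited JSON file into fixed-size chunks before uploading. A local sketch with illustrative file names (real chunks would be thousands of lines, not two):

```shell
# Three one-line JSON events standing in for a large log file.
printf '%s\n' '{"field1": "a"}' '{"field1": "b"}' '{"field1": "c"}' > big.log
# Split into 2-line chunks. Each chunk remains valid newline-delimited JSON
# because split -l only cuts on line boundaries.
split -l 2 big.log chunk-
ls chunk-*   # chunk-aa (2 lines) and chunk-ab (1 line)
```

Each resulting chunk can then be uploaded as its own S3 object, letting Lambda fan out across many small invocations.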

Updating and Redeploying

When updating to a newer integration version or correcting a misconfigured installation, delete the CFN stack completely and re-create it using the template link above, rather than updating the stack in place.

Advanced Use

If you have an existing workflow for configuring infrastructure, consider directly configuring the Lambdas to meet your needs. Examine our Cloudformation and Terraform example templates in our repository to get started.