AWS Integrations Quick Start | Honeycomb

AWS Integrations Quick Start

Honeycomb AWS Integrations can be set up with either AWS CloudFormation or HashiCorp Terraform, depending on your preferred Infrastructure-as-Code (IaC) method and deployment processes.

Note
Honeycomb AWS Integrations utilize AWS CloudWatch, Amazon Kinesis, and AWS Lambda to send data to Honeycomb. Refer to How AWS Integrations Work to reference which AWS services use which methods. Note that standard AWS charges apply. Please refer to Amazon for specifics on associated egress costs.

AWS CloudFormation 

Honeycomb provides a CloudFormation template to automate configuration of various AWS services to Honeycomb. Each integration is an independently deployable stack, and the combined template deploys them all together, with the ability to turn individual integrations on and off as needed.

Supported CloudFormation Integrations:

  • CloudWatch Logs
  • CloudWatch Metrics
  • RDS CloudWatch Logs
  • Amazon S3 Bucket Logs

AWS CloudFormation Setup 

Choose from the available AWS CloudFormation templates.

Each integration option features a Launch Stack button and a list of required parameters or inputs during installation.

We also offer a “quick start” AWS CloudFormation template that provides a streamlined path to integrate your AWS environments with Honeycomb. The quick start template combines all of the per-integration templates below, allowing you to configure many integrations in a single CloudFormation stack.

The quick start template may be suitable for many production purposes, but we encourage you to use per-integration templates in a way that suits your AWS environment.

Note
If a misconfiguration occurs during AWS CloudFormation installation, it is better to delete the CloudFormation stack completely and re-create it using the quick-create links than to attempt to repair the existing stack.

Quick Start Template 

This AWS CloudFormation template allows the configuration of multiple integrations from a single CloudFormation stack.

Select Launch Stack to start the install.

Required Inputs 

Enter a value for the required input in the UI, or, if using the CLI or API, include the required input and its value (see the example invocation below).

  • HoneycombAPIKey: Your Honeycomb Team’s API Key.
Note
All other parameters are optional. If you provide no additional parameters, the template only creates an S3 Bucket.
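
For example, if you install with the AWS CLI rather than the console, a create-stack call might look like the following sketch. The stack name, template URL, and capabilities are placeholders; use the template URL behind the Launch Stack link for your region:

# Deploy the quick start stack with the AWS CLI.
# The template URL below is a placeholder for the quick start template's real location.
aws cloudformation create-stack \
  --stack-name honeycomb-quickstart \
  --template-url "https://example-bucket.s3.amazonaws.com/honeycomb-quickstart.yml" \
  --parameters ParameterKey=HoneycombAPIKey,ParameterValue="$HONEYCOMB_API_KEY" \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM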

CloudWatch Logs Template 

This AWS CloudFormation template integrates up to six CloudWatch Log Groups and ships their logs to a Honeycomb dataset.

Select Launch Stack to start the install.

Required Inputs 

Enter a value for each required input in the UI, or, if using the CLI or API, include each required input and its value (see the example invocation below).

  • HoneycombAPIKey: Your Honeycomb Team’s API Key.
  • HoneycombDataset: The target Honeycomb dataset that the stream publishes to.
  • LogGroupName: A CloudWatch Log Group name. Additional Log Groups can be added with the LogGroupNameX parameters.
  • S3FailureBucketArn: The ARN of the S3 Bucket that will store any logs that failed to be sent to Honeycomb.
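
For example, a create-stack invocation with the four required parameters might look like this sketch; the stack name, template URL, and parameter values are all placeholders:

# Deploy the CloudWatch Logs stack with placeholder values.
aws cloudformation create-stack \
  --stack-name honeycomb-cloudwatch-logs \
  --template-url "https://example-bucket.s3.amazonaws.com/cloudwatch-logs.yml" \
  --parameters \
      ParameterKey=HoneycombAPIKey,ParameterValue="$HONEYCOMB_API_KEY" \
      ParameterKey=HoneycombDataset,ParameterValue=cloudwatch-logs \
      ParameterKey=LogGroupName,ParameterValue=/aws/lambda/my-function \
      ParameterKey=S3FailureBucketArn,ParameterValue=arn:aws:s3:::my-failure-bucket \
  --capabilities CAPABILITY_IAM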

CloudWatch Metrics Template 

This AWS CloudFormation template integrates all metrics flowing to CloudWatch Metrics and ships them to a Honeycomb dataset.

Select Launch Stack to start the install.

Required Inputs 

Enter a value for each required input in the UI, or, if using the CLI or API, include each required input and its value (see the example invocation below).

  • HoneycombAPIKey: Your Honeycomb Team’s API Key.
  • S3FailureBucketArn: The ARN of the S3 Bucket that will store any logs that failed to be sent to Honeycomb.
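
The equivalent CLI sketch, again with a placeholder stack name, template URL, and bucket ARN:

# Deploy the CloudWatch Metrics stack with placeholder values.
aws cloudformation create-stack \
  --stack-name honeycomb-cloudwatch-metrics \
  --template-url "https://example-bucket.s3.amazonaws.com/cloudwatch-metrics.yml" \
  --parameters \
      ParameterKey=HoneycombAPIKey,ParameterValue="$HONEYCOMB_API_KEY" \
      ParameterKey=S3FailureBucketArn,ParameterValue=arn:aws:s3:::my-failure-bucket \
  --capabilities CAPABILITY_IAM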

RDS CloudWatch Logs Template 

This AWS CloudFormation template streams RDS logs from CloudWatch to a Kinesis Firehose, which applies a data transform to structure the logs before sending them to Honeycomb.

Select Launch Stack to start the install.

Required Inputs 

Enter a value for each required input in the UI, or, if using the CLI or API, include each required input and its value (see the example invocation below).

  • HoneycombAPIKey: Your Honeycomb Team’s API Key.
  • HoneycombDataset: The target Honeycomb dataset that the stream publishes to.
  • DBEngineType: The engine type of your RDS database. One of aurora-mysql, aurora-postgresql, mariadb, sqlserver, mysql, oracle, or postgresql.
  • LogGroupName: A CloudWatch Log Group name for RDS logs. Additional Log Groups can be added with the LogGroupNameX parameters.
  • S3FailureBucketArn: The ARN of the S3 Bucket that will store any logs that failed to be sent to Honeycomb.
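
A CLI sketch with placeholder values; the LogGroupName follows RDS's /aws/rds/instance/<db-name>/<log-type> export naming convention:

# Deploy the RDS CloudWatch Logs stack with placeholder values.
aws cloudformation create-stack \
  --stack-name honeycomb-rds-logs \
  --template-url "https://example-bucket.s3.amazonaws.com/rds-logs.yml" \
  --parameters \
      ParameterKey=HoneycombAPIKey,ParameterValue="$HONEYCOMB_API_KEY" \
      ParameterKey=HoneycombDataset,ParameterValue=rds-mysql-logs \
      ParameterKey=DBEngineType,ParameterValue=mysql \
      ParameterKey=LogGroupName,ParameterValue=/aws/rds/instance/my-db/slowquery \
      ParameterKey=S3FailureBucketArn,ParameterValue=arn:aws:s3:::my-failure-bucket \
  --capabilities CAPABILITY_IAM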

Amazon S3 Bucket Logs Template 

This AWS CloudFormation template supports sending logs from an S3 bucket to Honeycomb.

Select Launch Stack to start the install.

Required Inputs 

Enter a value for each required input in the UI, or, if using the CLI or API, include each required input and its value (see the example invocation below).

  • HoneycombAPIKey: Your Honeycomb Team’s API Key.
  • HoneycombDataset: The target Honeycomb dataset to publish to.
  • ParserType: The type of log file to parse. Choose one of alb, elb, cloudfront, keyval, json, s3-access, or vpc-flow.
  • S3BucketArn: The ARN of the S3 Bucket storing the logs.
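
A CLI sketch with placeholder values, using the alb parser as an example:

# Deploy the S3 bucket logs stack with placeholder values.
aws cloudformation create-stack \
  --stack-name honeycomb-s3-logs \
  --template-url "https://example-bucket.s3.amazonaws.com/s3-logfile.yml" \
  --parameters \
      ParameterKey=HoneycombAPIKey,ParameterValue="$HONEYCOMB_API_KEY" \
      ParameterKey=HoneycombDataset,ParameterValue=alb-logs \
      ParameterKey=ParserType,ParameterValue=alb \
      ParameterKey=S3BucketArn,ParameterValue=arn:aws:s3:::my-log-bucket \
  --capabilities CAPABILITY_IAM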

HashiCorp Terraform 

Honeycomb provides a Terraform module to automate configuration of various AWS services to Honeycomb. Each integration is an independently deployable submodule, and the top-level module deploys them all together, with the ability to turn individual integrations on and off as needed.

Supported Terraform Integrations:

  • CloudWatch Logs
  • CloudWatch Metrics
  • RDS Logs
  • Amazon S3 Bucket Logs

Terraform Setup 

Implement all of the Terraform submodules at once, or choose among the available Terraform submodules.

To configure all supported Terraform integrations at once, add the minimal Terraform configuration below, which includes the required fields for every supported integration.

module "honeycomb-aws-integrations" {
  source = "honeycombio/integrations/aws"

  # aws cloudwatch logs integration
  cloudwatch_log_groups = [module.log_group.cloudwatch_log_group_name] // CloudWatch Log Group names to stream to Honeycomb.

  # aws rds logs integration
  enable_rds_logs  = true
  rds_db_name      = var.db_name
  rds_db_engine    = "mysql"
  rds_db_log_types = ["slowquery"] // valid types include general, slowquery, error, and audit (audit will be unstructured)

  # aws metrics integration - pro/enterprise Honeycomb teams only
  # enable_cloudwatch_metrics = true

  # s3 logfile - alb access logs
  s3_bucket_arn  = var.s3_bucket_arn
  s3_parser_type = "alb" // valid types are alb, elb, cloudfront, vpc-flow-log, s3-access, json, and keyval

  #honeycomb
  honeycomb_api_key = var.honeycomb_api_key             // Honeycomb API key.
  honeycomb_dataset = "terraform-aws-integrations-test" // Your Honeycomb dataset name that will receive the logs.

  # Users generally do not need to set this, but it may be necessary when working with a proxy like Honeycomb's Refinery.
  honeycomb_api_host = var.honeycomb_api_host
}

Set the TF_VAR_honeycomb_api_key environment variable to your team’s Honeycomb API Key. Terraform maps TF_VAR_<name> environment variables onto the input variable of the same name, and the match is case-sensitive, so the suffix must match the honeycomb_api_key variable your configuration declares.

export TF_VAR_honeycomb_api_key=$HONEYCOMB_API_KEY

Set the environment variables with your AWS credentials.

export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION

Please refer to Terraform documentation for more details and options.

Now, run terraform plan and terraform apply in sequence.
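
One common sequence, assuming the module configuration above is saved in your working directory:

# Download the module and providers, preview the changes, then apply them.
terraform init
terraform plan -out=tfplan
terraform apply tfplan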

For more configuration options, refer to USAGE.md.

To configure for CloudWatch Logs, add the minimal Terraform configuration, which includes the required fields:

module "honeycomb-aws-cloudwatch-logs-integration" {
  source = "honeycombio/integrations/aws//modules/cloudwatch-logs"

  name = var.cloudwatch_logs_integration_name // A name for the Integration.

  #aws cloudwatch integration
  cloudwatch_log_groups = ["/aws/lambda/S3LambdaHandler-test"] // CloudWatch Log Group names to stream to Honeycomb.
  s3_failure_bucket_arn        = var.s3_bucket_name
  // S3 bucket ARN that will store any logs that failed to be sent to Honeycomb.

  #honeycomb
  honeycomb_api_key      = var.HONEYCOMB_API_KEY // Honeycomb API key.
  honeycomb_dataset_name = "cloudwatch-logs" // Your Honeycomb dataset name that will receive the logs.
}

Set the TF_VAR_honeycomb_api_key environment variable to your team’s Honeycomb API Key.

export TF_VAR_honeycomb_api_key=$HONEYCOMB_API_KEY

Set the environment variables with your AWS credentials.

export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION

Please refer to Terraform documentation for more details and options.

Now you can run terraform plan/apply in sequence.

For more configuration options, refer to USAGE.md.

To configure for CloudWatch Metrics, add the minimal Terraform configuration, which includes the required fields:

module "honeycomb-aws-cloudwatch-metrics-integration" {
  source = "honeycombio/integrations/aws//modules/cloudwatch-metrics"

  name = var.cloudwatch_metrics_integration_name // A name for the integration.

  honeycomb_api_key      = var.honeycomb_api_key // Honeycomb API key.
  honeycomb_dataset_name = "cloudwatch-metrics"  // Your Honeycomb dataset name that will receive the metrics.

  s3_failure_bucket_arn = var.s3_bucket_arn // The ARN of the S3 bucket that will store any metrics that failed to be sent to Honeycomb.
}

Set the TF_VAR_honeycomb_api_key environment variable to your team’s Honeycomb API Key.

export TF_VAR_honeycomb_api_key=$HONEYCOMB_API_KEY

Set the environment variables with your AWS credentials.

export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION

Please refer to Terraform documentation for more details and options.

Now you can run terraform plan/apply in sequence.

For more configuration options, refer to USAGE.md.

To configure for RDS Logs, add the minimal Terraform configuration, which includes the required fields:

module "honeycomb-aws-rds-logs-integration" {
  source = "honeycombio/integrations/aws//modules/rds-logs"

  name                   = "rds-logs-integration"
  db_engine              = "mysql"
  db_name                = "mysql-db-name"
  db_log_types           = ["slowquery"]
  honeycomb_api_key      = var.honeycomb_api_key // Your Honeycomb team's API key
  honeycomb_dataset_name = "rds-mysql-logs"

  s3_failure_bucket_arn = var.s3_bucket_arn      // The full ARN of the bucket storing Kinesis Firehose failure logs.
}

Set the TF_VAR_honeycomb_api_key environment variable to your team’s Honeycomb API Key.

export TF_VAR_honeycomb_api_key=$HONEYCOMB_API_KEY

Set the environment variables with your AWS credentials.

export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION

Please refer to Terraform documentation for more details and options.

Now you can run terraform plan/apply in sequence.

For more configuration options, refer to USAGE.md.

To configure for Amazon S3 Logs from a bucket, add the minimal Terraform configuration, which includes the required fields:

module "logs_from_a_bucket_integrations" {
  source = "honeycombio/integrations/aws//modules/s3-logfile"
  name   = var.logs_integration_name

  parser_type   = var.parser_type   // valid types are alb, elb, cloudfront, vpc-flow-log, s3-access, json, and keyval
  s3_bucket_arn = var.s3_bucket_arn // The full ARN of the bucket storing the logs.

  honeycomb_api_key      = var.honeycomb_api_key // Your Honeycomb team's API key.
  honeycomb_dataset_name = "alb-logs"            // Your Honeycomb dataset name that will receive the logs.
}

Set the TF_VAR_honeycomb_api_key environment variable to your team’s Honeycomb API Key.

export TF_VAR_honeycomb_api_key=$HONEYCOMB_API_KEY

Set the environment variables with your AWS credentials.

export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION

Please refer to Terraform documentation for more details and options.

Now you can run terraform plan/apply in sequence.

For more configuration options, refer to USAGE.md.

Troubleshooting 

Kinesis Firehose 

Kinesis Firehose sends failed data to the configured S3 bucket. Depending on the stage of the flow at which the error occurred, the error log lands in a different subdirectory. For example, if Honeycomb's HTTP endpoint returns a non-2xx status code, the error log will be located in the http-endpoint-failed directory with the exact status code and error message.
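
To inspect failures, you can list the failure bucket with the AWS CLI; the bucket name and object key below are placeholders for your configured failure bucket:

# List error logs written by Kinesis Firehose, grouped by failure stage.
aws s3 ls s3://my-failure-bucket/ --recursive

# Stream a specific error log to stdout for inspection.
aws s3 cp s3://my-failure-bucket/http-endpoint-failed/example-object -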

RDS 

A prerequisite for using both the RDS Logs Terraform module and the RDS Logs CloudFormation stack is enabling RDS log exports to CloudWatch. Refer to Publishing database logs to Amazon CloudWatch for instructions. Once enabled, the integration can read logs from those CloudWatch Log Groups and begin streaming them to Honeycomb.
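
As a sketch, log exports can also be enabled on an existing instance with the AWS CLI; the instance identifier and log types below are example values:

# Enable slow query and error log export to CloudWatch for an existing RDS instance.
aws rds modify-db-instance \
  --db-instance-identifier my-mysql-instance \
  --cloudwatch-logs-export-configuration '{"EnableLogTypes":["slowquery","error"]}' \
  --apply-immediately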

Note
When enabling log exports for RDS PostgreSQL, ensure the log_statement attribute on the parameter group is not set to ALL.
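
One way to check the current value, assuming your instance uses a custom parameter group with the placeholder name below:

# Show the log_statement setting on the DB parameter group.
aws rds describe-db-parameters \
  --db-parameter-group-name my-postgres-params \
  --query "Parameters[?ParameterName=='log_statement'].[ParameterName,ParameterValue]"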

Currently, structured logs are supported for the following:

  • MySQL general logs
  • MySQL slow query logs
  • MySQL error logs
  • PostgreSQL slow query logs

For any RDS log types not listed above, the integration will still deliver them to Honeycomb, but as unstructured logs.