
About Honeycomb Classic

Honeycomb now supports an expanded data model with Environments and Services. An Environment represents a named group of your datasets. A Service represents one or more programs sharing an executable.

Previously, datasets in Honeycomb were independent, unconnected buckets of information. This existing, dataset-only data model is now known as Honeycomb Classic.

Today, Honeycomb supports datasets, services, and environments as groupable, relational structures. Moving forward, users who create a new team will have access to Honeycomb and this expanded data model. Read more about how to structure Environments and Services in our best practices guide.

Is my Team Using Honeycomb Classic?

You can check whether your team is a Honeycomb Classic team by selecting Account > My Account in the left menu. Select Your Teams in the secondary left menu, which displays a list of your teams and their types.

Your teams page showing two team types for Honeycomb and Honeycomb Classic (screenshot)

Get Access to Environments and Services 

Until the migration process is released, you cannot create Environments and Services in your existing Honeycomb Classic team(s). If you want to test the new features, you can create a new team. To create a new team with your existing account, select Account > My Account in the left menu, then select Your Teams in the secondary left menu to display the list of teams associated with your account. Scroll to the Create Team section to create a new team.

What Honeycomb Classic Means for You 

Existing Honeycomb users will continue to use Honeycomb Classic and its dataset-only structure. A migration process that lets you transform Honeycomb Classic datasets so that they are grouped under an Environment will be released in the near future. In the meantime, you can run pre-checks and prepare for the upcoming migration.

Differences between Honeycomb and Honeycomb Classic 

Many features in Honeycomb now allow you to define and see data for a particular Service or the entire Environment:

Feature | Honeycomb Classic | Honeycomb
Scope API keys by team | ✓ |
Scope API keys by Environment | | ✓
Query across a single Dataset | ✓ | ✓
Query across an Environment or multiple Datasets | | ✓
Create Markers for a Dataset | ✓ | ✓
Create Markers for an Environment | | ✓
Create Service Datasets directly from traces in OpenTelemetry | | ✓
View events in Home with span.kind = server spans | | ✓
Improved organization of Honeycomb with specific Environment and Service structures | | ✓

With Environments and Services, you can start to scope more of what you see and send into Honeycomb.
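As an illustration of that scoping, here is a minimal sketch of sending trace data into an Environment so that spans are grouped into a Service dataset. It assumes the OpenTelemetry Python SDK with the OTLP/HTTP exporter, an Environment-scoped API key in a HONEYCOMB_API_KEY environment variable, and an illustrative service name; it is a sketch of one possible setup, not the only supported one.

    # Minimal sketch: export traces to Honeycomb over OTLP/HTTP. The API key
    # identifies the team and, in the expanded data model, the Environment;
    # service.name determines the Service dataset the spans are grouped into.
    # Assumed packages: opentelemetry-sdk, opentelemetry-exporter-otlp-proto-http.
    import os

    from opentelemetry import trace
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    # "checkout-service" is an illustrative service name.
    resource = Resource.create({"service.name": "checkout-service"})

    exporter = OTLPSpanExporter(
        endpoint="https://api.honeycomb.io/v1/traces",
        headers={"x-honeycomb-team": os.environ["HONEYCOMB_API_KEY"]},
    )

    provider = TracerProvider(resource=resource)
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("charge-card"):
        pass  # application work happens here

If your team is still on Honeycomb Classic, the same exporter typically also needs an x-honeycomb-dataset header, because Classic API keys are scoped to the team rather than to an Environment.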

New Dataset Changes 

A Classic dataset is any dataset in a Honeycomb Classic team. Classic datasets follow the original data model, in which datasets were independent, unconnected buckets of data. In the expanded data model with Environments and Services, datasets still exist as buckets for your data. Now, two additional types of datasets exist:

  • Service datasets are trace datasets created with defined trace fields and split by Service name.
  • General datasets refer to all other datasets and are typically non-trace datasets.
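For contrast with the Service dataset example above, here is a minimal sketch of sending a non-trace event to a general dataset. It assumes the libhoney Python SDK (package name: libhoney); the dataset name, field names, and HONEYCOMB_API_KEY environment variable are illustrative.

    # Minimal sketch: send a single non-trace event to a general dataset
    # using the libhoney Python SDK.
    import os

    import libhoney

    libhoney.init(
        writekey=os.environ["HONEYCOMB_API_KEY"],
        dataset="nightly-batch-jobs",  # illustrative general (non-trace) dataset
        api_host="https://api.honeycomb.io",
    )

    event = libhoney.new_event()
    event.add_field("job", "rebuild-search-index")
    event.add_field("duration_ms", 5123)
    event.send()

    libhoney.close()  # flush any pending events before exit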

About Migration 

A migration process is being developed to help you migrate your data from Classic datasets to Environments. When released, the migration process will include guidance on the following tasks to convert your datasets:

  • Assess existing datasets and perform proactive cleanup (a sketch of this step follows this list)
  • Create Environments and generate new API keys
  • Update certain instrumentation packages and SDKs
  • Update any references to the Honeycomb API
  • Configure any schema mismatches across Services
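For the first task above, a quick inventory of your datasets can come from Honeycomb's Datasets API. The following is a minimal sketch, assuming the Python requests library and an API key in a HONEYCOMB_API_KEY environment variable; fields other than name and slug in the response may vary.

    # Minimal sketch: list the team's datasets before migration using the
    # Honeycomb Datasets API (GET /1/datasets).
    import os

    import requests

    response = requests.get(
        "https://api.honeycomb.io/1/datasets",
        headers={"X-Honeycomb-Team": os.environ["HONEYCOMB_API_KEY"]},
        timeout=10,
    )
    response.raise_for_status()

    for dataset in response.json():
        # last_written_at (if present) helps spot stale datasets to clean up first.
        print(dataset["slug"], dataset.get("last_written_at", "unknown"))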

Migration Preparation 

There are some pre-checks and cleanup that you should complete before migration:

Query Builder page showing a COUNT GROUP BY SERVICE NAME LIMIT 1000 query (screenshot)

  1. Identify the datasets that currently hold trace data for your applications. You will want to review all datasets with traces before migrating them to Service datasets.
  2. Ensure your Service names reflect actual service names: they should not be empty strings and should not include high-cardinality values, such as a unique ID like a process ID or build ID.
    1. In the Query Builder, run a VISUALIZE COUNT GROUP BY service.name LIMIT 1000 query. Note that service.name is the field that defines the service in OpenTelemetry. If you are using Beelines, the field name may differ depending on the language, so you may need to run the query on a different column based on your data or instrumentation library.
    2. Review the service.name results and check for organizational relevance or indicators of mis-instrumentation. For any unexpected service names, update your instrumentation to correct them.
    3. Read more about instrumenting services in our best practices guide.
  3. Review trace schema definitions for any mismatches that may impact your environment querying experience (a sketch of this check follows this list).
    1. Mismatches include situations where Service A uses trace-id for the trace ID while Service B uses t.id.
    2. If you do not want to write verbose environment queries, such as COUNT WHERE trace-id = X OR t.id = X, you may want to update your instrumentation to send consistent fields.
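For the schema check in step 3, one way to see which trace ID field each trace dataset uses is Honeycomb's Columns API. The following is a minimal sketch, assuming the Python requests library, an API key in a HONEYCOMB_API_KEY environment variable, and illustrative dataset slugs and candidate field names.

    # Minimal sketch: compare trace ID field names across trace datasets using
    # the Honeycomb Columns API (GET /1/columns/{datasetSlug}).
    import os

    import requests

    TRACE_DATASETS = ["service-a", "service-b"]  # illustrative dataset slugs
    TRACE_ID_CANDIDATES = {"trace.trace_id", "trace-id", "t.id"}  # illustrative field names

    headers = {"X-Honeycomb-Team": os.environ["HONEYCOMB_API_KEY"]}

    for slug in TRACE_DATASETS:
        response = requests.get(
            f"https://api.honeycomb.io/1/columns/{slug}", headers=headers, timeout=10
        )
        response.raise_for_status()
        field_names = {column["key_name"] for column in response.json()}
        matches = sorted(field_names & TRACE_ID_CANDIDATES)
        print(f"{slug}: trace ID field(s) found: {matches or 'none of the candidates'}")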
