About Honeycomb Classic | Honeycomb

About Honeycomb Classic

Honeycomb now supports an expanded data model with Environments and Services. An Environment represents a named group of your datasets. A Service represents one or more programs sharing an executable.

Previously, datasets in Honeycomb were independent, unconnected buckets of information. This existing, dataset-only data model is now known as Honeycomb Classic.

Today, Honeycomb supports datasets, services, and environments as groupable, relational structures. Read more about how to structure Environments and Services in our best practices guide.

Am I Using Honeycomb Classic?

To check whether you are using Honeycomb Classic, look at the label below the Honeycomb logo in the left navigation menu. When you are working within Honeycomb Classic, a Classic label with a gray background appears.

The environment selector set to Classic (screenshot)

Select the label below the Honeycomb logo in the left navigation menu to reveal the Environments list. If Honeycomb Classic exists for your team, a gray Classic entry will appear in your Environments list.

The environment selector selected and showing the list of Classic and other Environments (screenshot)

All Secure Tenancy customers default to the Honeycomb Classic experience, which does not support Environments and Services.

What Honeycomb Classic Means for You 

Existing Honeycomb users who had a team before the Honeycomb Environments and Services change now have a Classic section in the Environments list.

All data created with our previous dataset-only data model is now located in the Classic section. Your data and configurations continue to work in Honeycomb Classic; however, there are some product differences due to the new Environment-related features.

A migration process will be released in the near future to group Honeycomb Classic datasets under an Environment. In the meantime, you can run pre-checks and prepare for the upcoming migration.

Differences Between Honeycomb and Honeycomb Classic 

Many features in Honeycomb now allow you to define and see data for a particular Service or the entire Environment:

| Honeycomb Classic             | Honeycomb                                                              |
| ----------------------------- | ---------------------------------------------------------------------- |
| Scope API keys to Classic     | Scope API keys by Environment                                          |
| Query across a single Dataset | Query across an Environment or multiple Datasets                       |
| Create Markers for a Dataset  | Create Markers for an Environment                                      |
| Not available                 | Create Service Datasets directly from traces in OpenTelemetry          |
| Not available                 | View events in Home with span.kind = server spans                      |
| Not available                 | Improved organization of Honeycomb with specific Environment and Service structures |

With Environments and Services, you begin to scope more of what you see and send into Honeycomb. See a detailed list of changes between Honeycomb and Honeycomb Classic.

New Dataset Changes 

A Classic dataset is any dataset in a Classic team. Classic datasets follow the previous data model, in which datasets were independent, unconnected buckets of data. Datasets in the expanded data model with Environments and Services still exist as buckets for your data. Now, two additional types of datasets exist:

  • Service datasets are trace datasets created with defined trace fields and split by Service name.
  • General datasets refer to all other datasets and are typically non-trace datasets.

About Migration 

A migration process is being developed to help move your data from Classic datasets to Environments. It is intended for teams that need a defined path to move Honeycomb assets and relevant data to Environments; a migration may not be necessary for all teams. When released, the migration process will include information on the following tasks to convert your datasets:

  • Assess existing datasets and perform proactive cleanup
  • Create Environments and generate new API keys
  • Update certain instrumentation packages and SDKs
  • Update any references to the Honeycomb API
  • Configure any schema mismatches across Services
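
The schema-mismatch task can be approached mechanically once you know which fields each Service emits: any field missing from the set common to all Services is a candidate for inconsistent naming. The sketch below is illustrative only, with hypothetical service and field names:

```python
# Hypothetical field names per service, e.g. as pulled from each dataset's schema.
schemas = {
    "service-a": {"trace-id", "span-id", "duration_ms"},
    "service-b": {"t.id", "span-id", "duration_ms"},
}

# Fields present in every service's schema:
common = set.intersection(*schemas.values())

# Fields unique to one service are candidates for a naming mismatch,
# e.g. "trace-id" vs. "t.id" both meaning the trace ID.
mismatches = {svc: fields - common for svc, fields in schemas.items()}
```

Here `mismatches` would flag `trace-id` and `t.id` as fields to reconcile before querying across the Environment.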

Migration Preparation 

Some pre-checks and cleanup should be done before migration:

Query Builder page showing a COUNT GROUP BY SERVICE NAME LIMIT 1000 query (screenshot)

  1. Identify the datasets that currently hold trace data for your applications today. You will want to review all datasets with traces before migrating them to service datasets.
  2. Ensure that your Service names reflect actual Service names, that they are not empty strings, and that they do not include high-cardinality values, such as a unique ID like a process ID or build ID.
    1. In the Query Builder, run a VISUALIZE COUNT GROUP BY service.name LIMIT 1000 query. Note that service.name is the field in OpenTelemetry that defines the service. If you are using Beelines, this field may be different depending on the language. You may need to run the query on another column based on your data or instrumentation library.
    2. Review the service.name results and check for organizational relevance or indicators of mis-instrumentation. For any strange service names, update your instrumentation to correct them.
    3. Read more about instrumenting services in our best practices guide.
  3. Review trace schema definitions for any mismatches that may impact your environment querying experience.
    1. Mismatches may include situations where Service A uses trace-id for the trace ID while Service B uses t.id for the trace ID.
    2. If you do not want to write verbose environment queries, such as COUNT where trace-id = X or t.id = X, you may want to update your instrumentation to send consistent fields.
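
The service-name review in step 2 can be partially automated. The sketch below is an illustrative heuristic, not Honeycomb tooling; the example names and the regular expression are assumptions about what mis-instrumented service.name values might look like:

```python
import re

# Heuristic: service names that are empty, or that end in a long numeric or
# hex suffix (e.g. a process ID or build ID), likely need instrumentation fixes.
HIGH_CARDINALITY = re.compile(r"(\d{4,}|[0-9a-f]{8,})$", re.IGNORECASE)

def suspicious(service_name: str) -> bool:
    """Return True if a service.name value looks mis-instrumented."""
    name = service_name.strip()
    return name == "" or bool(HIGH_CARDINALITY.search(name))

# Hypothetical service.name values, as might come from the GROUP BY query above:
names = ["checkout", "checkout-48213", "", "auth-service", "worker-7f3a9c21"]
flagged = [n for n in names if suspicious(n)]
# flagged == ["checkout-48213", "", "worker-7f3a9c21"]
```

Names flagged this way are the ones worth tracing back to their instrumentation before converting the dataset to a Service dataset.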

Questions and Support 

If you have questions about Honeycomb Classic, go to the #discuss-hny-classic channel in Honeycomb Pollinators Community Slack to ask or learn more.
