The Environments and Services feature in Honeycomb adds an extended data model with groupable, relational structures for datasets, services, and environments. The Honeycomb E & S diagram (below) reflects the new data model where a team can have multiple environments, such as Non-Prod and Prod, and group multiple datasets under each environment.
This change introduces some differences in requirements, behaviors, and features compared to Honeycomb Classic, as detailed on this page.
A migration process is under development, but teams can already create Environments and send data to them.
Learn about how to structure Environments and Services, how many datasets can be in an environment, and more in our best practices guide.
Previously, teams had to make dataset structuring decisions up front, and accommodating OpenTelemetry traces often meant sending all traces to a single dataset.
Environments provide an additional level of organization to structure your data in Honeycomb. This allows traces to be distributed across multiple datasets within an environment.
If initially sending traces that may create new Service Datasets in Honeycomb, the API key needs both the send events and create datasets permissions. If no new Service Dataset creation is anticipated, the API key needs only the send events permission.
With the introduction of Environments, trace data is linked to an Environment that is identified implicitly by the API Key used. Specifying a Dataset name is no longer required to submit trace data, but a Dataset name is still required to submit metrics data. The service name specified in each trace span is used to assign the span to a dataset.
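Conceptually, this assignment works like a group-by on each span's service name: one trace can fan out across several service datasets. The sketch below illustrates the idea in plain Python; the span fields and service names are made up for illustration, and this is not Honeycomb's actual implementation.

```python
# Sketch: spans from a single trace landing in different service datasets.
# Field names and values here are illustrative only.
from collections import defaultdict

spans = [
    {"trace_id": "abc123", "name": "GET /cart", "service_name": "frontend"},
    {"trace_id": "abc123", "name": "lookup",    "service_name": "cart-api"},
    {"trace_id": "abc123", "name": "SELECT",    "service_name": "cart-db"},
]

datasets = defaultdict(list)
for span in spans:
    # Each span is assigned to the dataset named after its service.
    datasets[span["service_name"]].append(span)

print(sorted(datasets))  # → ['cart-api', 'cart-db', 'frontend']
```

Even though the spans are split across three datasets, they share a trace ID, so the full trace can still be queried at the environment level.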
Environments and Services introduces major changes to the API used to send data. As a result, some libraries require updating, while others have a recommended minimum version. While minimum version guidance appears here, we recommend using the latest version.
A minimum Beeline version is required for traces to split across service datasets. Please use the minimum required version for each library in use:
- beeline-go, version 1.7.0
- beeline-java, version 2.0.0
- beeline-nodejs, version 3.3.0
- beeline-python, version 3.3.0
- beeline-ruby, version 2.9.0
With Environments, Beelines require a serviceName configuration instead of a Dataset configuration.
The appropriate configuration is found in the Beelines documentation for each language.
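As one example, beeline-python spells the option service_name. A minimal initialization sketch follows; the write key is a placeholder, and exact option names for other languages are in each Beeline's documentation.

```python
import beeline

# With Environments, configure a service name rather than a dataset name.
# The API key implies the environment; the service name determines which
# service dataset this process's spans land in.
beeline.init(
    writekey="YOUR_ENVIRONMENT_API_KEY",  # placeholder key
    service_name="my-service",            # replaces the old dataset option
)
```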
A minimum Refinery version is required for traces to split across service datasets.
refinery has a minimum required version of 1.12.0.
Update to the minimum recommended version of your Honeycomb OpenTelemetry Distribution, or OTel distro, for the best experience with configuration and default behaviors:
- honeycomb-opentelemetry-java, version 0.9.0
- honeycomb-opentelemetry-dotnet, version 0.21.0-beta
The minimum recommended version provides the best experience for configuration and default behaviors. Please use the minimum recommended version or higher for each library in use:
- libhoney-dotnet, version 1.3.0
- libhoney-go, version 2.1.0
- libhoney-java, version 1.5.0
- libhoney-js, version 3.1.0
- libhoney-py, version 2.1.0
- libhoney-rb, version 2.2.0
With Environments, you can query across all of your environment's datasets or within a specific dataset. When you scope a query to a specific dataset, the Query Builder shows only that dataset's schema, which can simplify queries. Use the Dataset selector in the Query Builder to set your scope.
Triggers and SLOs can be filtered by dataset. Select a dataset name when viewing Triggers or SLOs to filter the list to that specific dataset.
Mark notable points in time in both Environment and Dataset queries with Markers.
Charts on the Home page include additional filters to show source requests.
The API Key header is the same, but accepts an Environment API key: "X-Honeycomb-Team: YOUR_ENVIRONMENT_API_KEY"
Use this Environment API key to scope your data to the specific Environment instead of the entire team.
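For example, an event submission to the Events API would carry the Environment API key in that header. The sketch below builds such a request with Python's standard library without sending it; the dataset name and payload fields are illustrative, and the key is a placeholder.

```python
import json
import urllib.request

# Build (but do not send) an event request authenticated with an
# Environment API key. The dataset name in the URL is illustrative.
payload = json.dumps({"duration_ms": 153, "service_name": "frontend"}).encode()
req = urllib.request.Request(
    "https://api.honeycomb.io/1/events/my-dataset",
    data=payload,
    headers={
        "X-Honeycomb-Team": "YOUR_ENVIRONMENT_API_KEY",  # environment-scoped key
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.method)  # → POST
```

Because the key is scoped to an environment, the same request made with a different environment's key would land the event in that other environment.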
You can currently only create a derived column for a dataset. If you need a derived column across multiple services, you will need to create one for each service. Derived Columns created for your Datasets will be available in both Environment queries and Dataset queries. To quickly create many derived columns at once, use the Derived Columns API.
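Creating the same derived column per service can be scripted as a loop that issues one API call per service dataset. The sketch below builds the requests with Python's standard library but does not send them; the service names, the column expression, and the placeholder key are illustrative, and the exact endpoint and payload shape should be checked against the Derived Columns API reference.

```python
import json
import urllib.request

services = ["frontend", "cart-api", "cart-db"]  # hypothetical service datasets

# One derived-column definition, reused for every service dataset.
definition = {
    "alias": "duration_s",
    "expression": "DIV($duration_ms, 1000)",  # illustrative expression
}

reqs = []
for service in services:
    reqs.append(urllib.request.Request(
        f"https://api.honeycomb.io/1/derived_columns/{service}",
        data=json.dumps(definition).encode(),
        headers={
            "X-Honeycomb-Team": "YOUR_ENVIRONMENT_API_KEY",  # placeholder key
            "Content-Type": "application/json",
        },
        method="POST",
    ))

print(len(reqs))  # → 3
```

The same loop shape applies to the SLOs and Triggers APIs mentioned below: one POST per service dataset.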
Currently, SLOs and Triggers are only available for datasets. If you need an SLO or Trigger across multiple services, you will need to create one for each service. To quickly create many at once, use the SLOs API and Triggers API.