Last Updated: May 20, 2025
Honeycomb provides this notice to describe our approach to developing artificial intelligence (AI) features for our service and to provide more information about Honeycomb Intelligence, a suite of AI features made available through the service. Currently, Honeycomb Intelligence includes the features described below, with more to come. We have also included answers to some frequently asked questions about our use of AI at the bottom of this notice.
It is our goal to continually improve our services and customer experience, and we believe AI can help increase the value and quality of the experience we provide. As we develop and deploy AI-based features, we are committed to the following principles:
Use AI Where it Makes Sense
AI features should be developed to enhance our products and services in places where AI provides a unique benefit.
Transparency
AI features should be well-scoped to their purpose and should not claim capabilities they cannot reasonably deliver. Limitations of AI features should be disclosed. It should be clear when a user is interacting with or using AI features and how that AI is being used.
Fairness and Inclusivity
AI features should avoid bias and discrimination. AI features should be useful and accessible to all.
Reliability and Safety
AI features should be designed to function in a reliable and safe manner. We are committed to monitoring and addressing unreliable behaviors when they arise, including potentially removing a feature if it is deemed too problematic.
Privacy and Security
Through our development of AI features, we remain committed to respecting our customers’ privacy and to enabling our customers to do the same. We design our AI features to meet the same privacy and security standards as our other product functionality.
Accountability
AI features should be monitored on an ongoing basis to ensure goals are met. Issues presented by AI features should be tracked and remediated.
Honeycomb Intelligence Features

| Feature | Description | Model Providers | Data Interaction |
|---|---|---|---|
| Query Assistant | Textual interface that helps users create runnable Honeycomb queries. | OpenAI, AWS Bedrock | Uses user input, dataset/environment schema information, and sample telemetry values to produce runnable Honeycomb queries. |
| AI Assisted Calculated Fields | Text-to-expression UI that helps users create valid Calculated Field expressions. | OpenAI, AWS Bedrock | Uses user input, dataset/environment schema information, and sample telemetry values to produce valid Calculated Fields. |
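For illustration only, the sketch below (written as a plain Python dictionary) shows the kind of runnable query a natural-language request to Query Assistant might translate into. It is a hypothetical example rather than Honeycomb’s implementation: the structure follows Honeycomb’s published query specification, and the example prompt and column names are assumptions.

```python
# Hypothetical illustration of Query Assistant-style translation.
# A request such as "count errors by service over the last hour" could
# map to a runnable Honeycomb query specification like the one below.
# The column names ("error", "service.name") are placeholder assumptions.
query_spec = {
    "time_range": 3600,                 # last hour, expressed in seconds
    "calculations": [{"op": "COUNT"}],  # count matching events
    "filters": [{"column": "error", "op": "=", "value": True}],
    "breakdowns": ["service.name"],     # group results by service
}

print(query_spec)
```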
| Feature | Description | Model Providers | Data Interaction |
|---|---|---|---|
| Honeycomb MCP | Interface for your Honeycomb telemetry in any client application that connects to the Honeycomb MCP. | Client-dependent / user-controlled model provider | Uses user-provided text, dataset/environment schema information, and sample telemetry values to create/read/update Honeycomb queries and any entity within Honeycomb. |
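As a rough, hypothetical sketch of what “client-dependent” means in practice: an MCP-capable client connects to the Honeycomb MCP, and the model provider is whichever one that client is configured to use. The example below uses the open-source MCP Python SDK; the server command and environment variable are placeholders, not Honeycomb’s documented setup.

```python
# Hypothetical sketch using the open-source MCP Python SDK ("mcp" package).
# The server command and environment variable below are placeholders;
# consult Honeycomb's MCP documentation for the actual connection details.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    server = StdioServerParameters(
        command="honeycomb-mcp",                      # placeholder command
        env={"HONEYCOMB_API_KEY": "<your-api-key>"},  # placeholder variable
    )
    # Open a stdio transport to the server and start an MCP session.
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # List the tools the server exposes (e.g., query/read operations).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```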
Frequently Asked Questions

Can Honeycomb Intelligence be turned off?

Yes. Honeycomb Intelligence can be toggled on or off at the Team level by Team Owners.
Do any AI features run automatically, even when Honeycomb Intelligence is turned off?

No. While there may be AI features that activate “passively” to surface insights proactively, these are governed by the Team-level AI settings.
Are there usage limits on Honeycomb Intelligence features?

Not at the moment. We may place limits on some Honeycomb Intelligence features in the future to mitigate costs or to address misuse and abuse. These limits may change over time.
Is Honeycomb Intelligence the only way Honeycomb uses AI?

No. Honeycomb also uses other AI techniques, such as anomaly detection and statistical (deterministic) models, to power some features.
Is my data used to train AI models?

Honeycomb does not use any AI model providers that train foundation models based on input. We may fine-tune pre-trained models to provide a better product offering. Other machine learning systems may require training a different kind of machine learning model on a per-Team basis, or performing a “fit” operation for a statistical model.
Can Honeycomb Intelligence be used with protected health information or other sensitive data?

No. At this time, Honeycomb Intelligence may not be used with protected health information or other sensitive data. Please review our Supplemental Terms for more information.
Do AI model providers have access to my data?

In some cases, we use “offline” models, where the underlying model provider does not process or otherwise have access to any input data. In other scenarios, we may use “online” models, in which case the AI model provider may serve as a subprocessor. Where an AI model provider used by Honeycomb may receive personal data on a subprocessor basis, Honeycomb will add any such provider to its subprocessor list. Currently, the offline models we use are self-hosted within our cloud service provider, AWS, through AWS Bedrock, such that the underlying AI model provider does not have access to the data.
Do the models behind Honeycomb Intelligence change over time?

Yes. We regularly test newly released models from our model providers to evaluate their efficacy. A given feature of Honeycomb Intelligence may call several different models, sometimes from different model providers, in the course of producing a response, depending on the task.
Does Honeycomb Intelligence support image or audio inputs and outputs?

We do not currently support multimodal (image, audio) inputs and outputs, but we may do so in the future.
How do I enable or disable Honeycomb Intelligence?

Honeycomb Intelligence can be enabled or disabled in Team Settings by a Team Owner.