
Query Assistant

Query Assistant uses machine learning systems, such as a Large Language Model (LLM) based on a generative pre-trained transformer, to help you create Honeycomb Queries using natural language.

Query Assistant explores how Machine Learning (ML) can be employed to improve the overall Honeycomb product experience, starting with querying. Our guiding philosophy is to build tools that enhance, not replace, user intuition.

Use Query Assistant to:

  • learn how to query in Honeycomb faster
  • start with a generated query, then refine it to explore your data

Using Query Assistant 

You can access Query Assistant underneath the Query Builder. To use Query Assistant, enter your prompt in the search box and select Get Query, or select one of the suggested questions below the search box. Based on your entry or selection, Query Assistant creates and runs a query in the Query Builder. Results appear after the screen refreshes.

Screenshot of Query Assistant with search box and three suggested queries
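
For example, a prompt such as "which endpoints are slowest?" might produce a query that calculates P95 latency grouped by route. The sketch below is purely illustrative: the field names (duration_ms, http.route) and time range are hypothetical, and the actual result depends on your dataset schema and the model's output. It is written as a Python dictionary shaped like a Honeycomb query specification.

    # Illustration only: the prompt, field names, and time range below are
    # hypothetical; the query Query Assistant generates depends on your schema.
    prompt = "which endpoints are slowest?"

    # One possible result, expressed as a Honeycomb query specification:
    # P95 of duration_ms over the last two hours, grouped by route.
    generated_query = {
        "time_range": 7200,  # seconds
        "calculations": [{"op": "P95", "column": "duration_ms"}],
        "breakdowns": ["http.route"],
        "orders": [{"op": "P95", "column": "duration_ms", "order": "descending"}],
        "limit": 10,
    }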

Viewing Query Assistant 

You can expand and collapse the Query Assistant display, and that choice persists. You can also control your team’s ability to use Query Assistant in Team Settings.

Limitations 

Each user can run up to 25 natural language queries per 24-hour period.

Query Assistant is not available to any Honeycomb customer who has signed a HIPAA Business Associate Agreement (BAA).

Data Use 

Honeycomb uses OpenAI’s API for Query Assistant. Honeycomb sends information to OpenAI’s API only for the purpose of generating a runnable query based on your input, and data is sent only when you execute a natural language query. Honeycomb does not use any data to train ML models. In the future, we are interested in using data to create more personalized user experiences, but we have no plans to incorporate the data itself, and all data remains subject to our Data Retention window.

What Honeycomb sends to OpenAI:

  • Your natural language input
  • The names of fields in your dataset schema

What Honeycomb does NOT send to OpenAI:

  • Identifying information
  • The values of data sent to Honeycomb
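
As a rough sketch of that boundary (not Honeycomb's actual implementation, whose prompt format is not published), the snippet below shows how a request could combine only the two items listed above: your natural language input and the schema field names, with no field values and no identifying information.

    # Illustration only: the wording and structure of this prompt are assumptions,
    # and the input and field names are hypothetical examples.
    natural_language_input = "count of errors by service in the last hour"
    schema_field_names = ["service.name", "error", "duration_ms", "http.status_code"]

    request_messages = [
        {
            "role": "system",
            "content": "Generate a Honeycomb query. Available columns: "
            + ", ".join(schema_field_names),
        },
        {"role": "user", "content": natural_language_input},
    ]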

OpenAI does not train models on data sent via their API. OpenAI does retain all data for a short period of time to monitor for abuse and misuse. Honeycomb does not use their opt-in mechanism for training and has no plans to offer that as an option for users at this time.

OpenAI’s API exposes the base Large Language Model (LLM) that ChatGPT also uses. ChatGPT adds additional layers of machine learning systems suited for a general-purpose chat application and uses a subset of the data it receives to further train those systems. The systems that ChatGPT adds on top of the LLM are not part of Honeycomb’s product implementation.

Troubleshooting 

Limited Fields in Large Dataset 

Users with large datasets may not have an optimal Query Assistant experience because the full schema is truncated.

If you have a very large Dataset, Honeycomb sorts your Dataset fields by recency and truncates the list, because the full schema would not fit into the context window of the ML model we use. We are exploring ways to use only the most relevant subset of a schema instead of truncating, which should improve the accuracy of query generation.
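
A minimal sketch of that behavior, assuming a hypothetical SchemaField type and a fixed field cap standing in for the model's real context-window limit:

    from dataclasses import dataclass

    @dataclass
    class SchemaField:
        name: str
        last_seen: float  # Unix timestamp of the most recent event with this field

    def fields_for_prompt(fields: list[SchemaField], max_fields: int = 500) -> list[str]:
        """Sort fields by recency and keep only the most recent ones.

        Illustrative only: in practice the cutoff is driven by the model's
        context window, not a fixed field count.
        """
        most_recent_first = sorted(fields, key=lambda f: f.last_seen, reverse=True)
        return [f.name for f in most_recent_first[:max_fields]]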

Different Answers with Same Input 

Query Assistant may give a different answer for the same input because Large Language Models (LLMs) are nondeterministic. While we do our best to achieve a degree of consistency for similar inputs, we cannot guarantee the same query for the same input each time. If you need a consistent query to run, we recommend saving the query to a Board for later use.
