Ask questions about your telemetry data in natural language. Canvas combines conversational AI with interactive visualizations to help you analyze observability data, understand system behavior, and troubleshoot issues within Honeycomb: ask a question, and Canvas queries your data and displays the results visually.

What is Canvas?

Canvas is an AI-guided workspace that translates natural language questions into queries, runs them against your data, and presents results through interactive visualizations. Unlike traditional query building, Canvas maintains context throughout your investigation, letting you ask follow-up questions and refine your analysis conversationally. Use Canvas to:
  • Query and analyze traces, logs, and metrics without writing queries manually.
  • Investigate errors, traffic spikes, and performance issues.
  • Understand why a Service Level Objective (SLO) alert fired or discover potential SLOs based on specific fields.
  • Explore patterns across user cohorts or service dependencies.

How It Works

Canvas combines AI-powered query generation with Honeycomb’s core capabilities to support conversational investigation.

Technical integration

Canvas integrates directly with Honeycomb’s query engine and has access to:
  • Team information
  • Environment and Dataset metadata
  • Field schemas and sample values
  • Query execution capabilities
  • Trace visualization tools
  • SLO and Trigger information
  • BubbleUp for identifying outliers
Canvas translates natural language requests into structured queries using Honeycomb’s API and presents results in an accessible format with appropriate visualizations.
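As a rough illustration of what "structured queries" means here, the sketch below shows the kind of query specification Canvas might generate. The field names follow Honeycomb's Query Data API, but the exact spec Canvas emits for any given question may differ:

```python
# Illustrative sketch: a spec Canvas might generate from the question
# "What's the p99 latency per service over the last hour?"
# Field names follow Honeycomb's Query Data API; this is an assumption
# about the shape of the output, not Canvas's actual internals.
query_spec = {
    "time_range": 3600,  # last hour, in seconds
    "calculations": [{"op": "P99", "column": "duration_ms"}],
    "breakdowns": ["service.name"],  # one result series per service
    "orders": [{"op": "P99", "column": "duration_ms", "order": "descending"}],
}

print(query_spec["calculations"][0]["op"])
```

Selecting View query on a result panel shows the actual query Canvas generated, which you can then adjust in Query Builder.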

Prompt intelligence

Canvas understands references to your Datasets, Environments, and fields. It can also link to existing queries and traces, making it easier to explore complex questions without switching between multiple tools.

Query transparency

Canvas makes all generated queries available throughout your investigation. You can examine the exact query syntax Canvas creates from your natural language questions and modify queries directly in the Query Builder.

Persistent workspaces

Canvas auto-saves investigations as persistent workspaces that you can revisit and share with Team members.

Privacy and security

Canvas operates within strict security boundaries to protect your data. Canvas:
  • Only accesses data within your Honeycomb workspace
  • Respects existing access controls and permissions
  • Doesn’t store conversation history beyond the current session
  • Can’t execute actions outside of the Honeycomb platform
Additionally, you can choose whether to share your individual Canvas investigations with your team or keep them private.

Creating a Canvas

Begin an investigation by creating a new Canvas or opening an existing query to use as context in Canvas. To start a new Canvas investigation:
  1. Select Canvas () from the navigation menu.
  2. Enter your question or investigation goal as a prompt.
  3. Submit () your prompt.
To open an existing query in Canvas:
  • Build and run a query, then select the sparkles icon () on the Query Results page.
  • From a query panel on a Board, select Open in Canvas () from the options menu ().

Adding context to prompts

Reference specific Datasets, Environments, or fields in your prompts using mentions (@). You can also include links to queries.

Team-level context

Team owners can add custom context that is included in every prompt sent to Canvas by their teammates. To add team-level custom context:
  1. Select Canvas () from the navigation menu.
  2. Select the gear icon () for Team Context Settings.
  3. Enter your team-level prompt in the editor and Save to apply it to your Team.

Example prompts

Use these examples as starting points to explore what’s possible in Canvas:
  • “What’s causing the spike in database request volume?”
  • “Investigate /link/to/a/trace/”
  • “What SLOs would you recommend for @my_dataset based on fields prefixed with my_prefix?”
  • “Which user cohorts are experiencing higher latency?”
  • “What fields are available in the frontend dataset?”
  • “Compare database query performance between production and staging.”

Effective question types

  • Diagnostic questions to identify causes of specific issues: Why did service X have high latency at 2pm yesterday?
  • Comparative questions to analyze differences between services or environments: How does the error rate in production compare to staging?
  • Trend analysis questions to examine patterns over time: Show me the pattern of database connections over the last week.
  • Correlation questions to explore relationships between metrics: Is there a relationship between cache miss rate and API latency?

Working with Canvas Results

Canvas displays the queries it generates from your natural language questions in visualization panels alongside your chat session.

Interacting with visualizations

Interact with Canvas visualizations the same way you would with Query Results:
  • Select data points to view traces or examine the underlying query.
  • Hover over data points to explore details, such as timestamps and breakdowns of the selected dimension.

Editing generated queries

Each visualization panel includes options to examine the query syntax:
  • Select View query.
  • Select Open in new tab from the options menu ().
Both options open the query in Query Builder, where you can modify it to refine results.

Sharing a query

Share individual queries from a Canvas investigation by selecting the link icon () on any query panel. This copies a shareable link to that specific query.

Sharing a Canvas investigation

Generate a shareable link to a Canvas investigation to share it with your Team. To share a non-private Canvas, select Share ().

Setting Canvas Privacy

Control who can access your Canvas investigation by setting its privacy level. Canvas investigations default to Shared with my team (), making them visible to all Team members. To create a private investigation that only you can access, select Private to me () from the privacy dropdown. You can change the privacy setting at any time during your investigation.

Revisiting a Canvas

Canvas automatically saves your investigations as named, persistent workspaces that can be revisited. To return to a previous investigation, select it from the Recent investigations section on the Canvas entry page.

Best Practices

Follow these guidelines to get the most accurate and relevant results from Canvas.
  • Be specific in your prompts:
    • Include specific service names when applicable.
    • Specify time ranges when looking at historical data.
    • Mention particular metrics or dimensions of interest.
    • State the relationship or pattern you are exploring.
  • Iterate on your investigation:
    • Start with a general question and refine based on initial results.
    • Ask follow-up questions to dig deeper into patterns Canvas identifies.
  • Provide context:
    • If switching topics, give Canvas enough context to understand the new direction.
    • Use mentions (@) to reference specific Datasets, Environments, or fields.
  • Ask for explanations:
    • Ask Canvas to explain query logic or data interpretation to deepen your understanding.

Limitations

Canvas operates within these boundaries:
  • Canvas works with data already collected in Honeycomb and can’t access external systems.
  • Complex analytical questions may require iterative refinement.
  • Results are limited by the data retention policy of your Honeycomb account.
  • Canvas can’t modify your Honeycomb configuration or infrastructure.

Troubleshooting

If Canvas doesn’t respond as expected, try these troubleshooting steps.

Canvas can’t find my Dataset

Canvas needs accurate Dataset names and proper permissions to access your data. If Canvas reports it can’t find a Dataset:
  • Verify Dataset name spelling and case sensitivity.
  • Check permissions and access rights to the Dataset.
  • Confirm the Dataset exists in the selected Environment.
  • Ensure data is actively being sent to the Dataset.
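To check the last point, you can send a test event through Honeycomb's Events API and confirm it lands in the Dataset. The sketch below only builds the request; the API key and Dataset name are placeholders, and the commented-out send requires the third-party `requests` package and a valid ingest key:

```python
# Sketch: construct a test event for Honeycomb's Events API to verify
# a Dataset is receiving data. API_KEY and DATASET are placeholders --
# swap in your own values before sending.
import json

API_KEY = "YOUR_API_KEY"  # placeholder: a Honeycomb ingest key
DATASET = "my-service"    # placeholder: the Dataset Canvas can't find

url = f"https://api.honeycomb.io/1/events/{DATASET}"
headers = {
    "X-Honeycomb-Team": API_KEY,
    "Content-Type": "application/json",
}
payload = json.dumps({"message": "canvas-connectivity-test", "duration_ms": 1})

# To actually send (requires `pip install requests` and a valid key):
# import requests
# resp = requests.post(url, headers=headers, data=payload)
# resp.raise_for_status()  # a 2xx response means events are landing

print(url)
```

If the event arrives but Canvas still can't find the Dataset, recheck the name, Environment, and permissions listed above.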

Canvas returns no results

Query parameters like time range and filters determine whether Canvas finds matching data. If Canvas returns no results:
  • Verify the time range covers periods with expected data.
  • Check if filters are too restrictive or contain logical errors.
  • Confirm the queried fields exist and contain the expected data type.
  • Increase the time window to capture more potential matches.

Canvas created the wrong query

Like all AI tools, Canvas may not always provide the exact answer you need. If Canvas makes a mistake or doesn’t understand your question, try these approaches:
  • Provide more specific guidance in your initial request.
  • Break complex questions into simpler components.
  • Check for ambiguous terms that might be misinterpreted.
  • Ask follow-up questions to clarify your investigation.
  • Use mentions (@) to specify Datasets, Environments, or fields.
  • Provide additional details about what you’re investigating.