Honeycomb MCP Use Cases


Review real-world use cases and strategies for guiding AI agents to query and analyze data with Honeycomb MCP.

Important
This feature is in beta, and we would love your feedback!

Note
Most examples were created and tested with models like Claude Opus 4 and GPT-4.1.

Querying Honeycomb 

Agents can use Model Context Protocol (MCP) tools to explore your data and answer detailed questions about system behavior. Both goal-directed queries (like responding to an alert) and broader investigations (like identifying performance issues) tend to work well with modern, state-of-the-art large language models (LLMs).

To get useful results:

  • Give specific instructions: If you are responding to a Trigger, mention it by name and tell the agent to use it as a starting point.
  • Point to known issues: For example, if you have observed a latency spike or anomaly, describe it in your prompt so the agent can focus on the relevant time window or service.

Even open-ended prompts like “Investigate latency in the api-gateway service” can be productive. In our testing, agents often begin with duration_ms percentiles (p50, p95, p99) as a baseline.
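
For reference, the following is a minimal sketch of that kind of baseline, expressed as a Honeycomb query specification. The service.name filter value and the time window are assumptions for illustration:

    {
      "calculations": [
        { "op": "P50", "column": "duration_ms" },
        { "op": "P95", "column": "duration_ms" },
        { "op": "P99", "column": "duration_ms" }
      ],
      "filters": [
        { "column": "service.name", "op": "=", "value": "api-gateway" }
      ],
      "time_range": 7200
    }

The two-hour time_range (7200 seconds) is an arbitrary starting window; widen it if the anomaly you described in your prompt is older.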

Tip
Clean, well-described fields will improve results. Unclear field names or calculated fields without descriptions can confuse the agent or lead to weaker analysis.

Improving Instrumentation 

MCP can help agents understand and improve your instrumentation, especially when paired with code access or examples.

Keep these key patterns in mind:

  • Use live examples: Ask the agent to look at how other services are instrumented in your codebase. For example: “Write a new service and base its instrumentation on other Golang services in this repo.”

  • Combine auto-instrumentation with refinement: Apply zero-code OpenTelemetry instrumentation, then let the agent analyze the results using MCP (a Go sketch of this pattern follows this list). The agent can:

    • Identify duplicated telemetry
    • Consolidate or remove redundant spans
    • Create net new instrumentation based on actual business logic
  • Audit and iterate: Pair with an agent on the actual shape of your data, and ask it to evaluate your overall instrumentation quality. Once the agent builds that understanding, you can commit its artifacts or share them with teammates and other agents as part of a continuous loop of instrumentation improvement.
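
The sketch below illustrates the refinement pattern in Go, since the example prompt above targets Golang services: one hand-written span for a piece of business logic, layered on top of whatever zero-code instrumentation already emits. The package, tracer name, and the checkout.discount_applied attribute are all hypothetical; only the OpenTelemetry calls themselves are real.

    package checkout

    import (
        "context"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/attribute"
    )

    // ApplyDiscount wraps one business-logic step in its own span, so the
    // work shows up in traces instead of being lumped into the parent HTTP
    // span that auto-instrumentation already creates.
    func ApplyDiscount(ctx context.Context, orderID string) error {
        // The tracer and span names here are hypothetical; match the
        // naming conventions your other services already use.
        ctx, span := otel.Tracer("checkout").Start(ctx, "ApplyDiscount")
        defer span.End()

        applied, err := applyDiscountRules(ctx, orderID)
        if err != nil {
            span.RecordError(err)
            return err
        }
        span.SetAttributes(attribute.Bool("checkout.discount_applied", applied))
        return nil
    }

    // applyDiscountRules is a stand-in for your real business logic.
    func applyDiscountRules(ctx context.Context, orderID string) (bool, error) {
        return orderID != "", nil
    }

An agent auditing this service over MCP can then confirm that the new span and attribute actually appear in your data.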

Migrating Queries to Honeycomb 

LLMs are generally good at translating between observability query languages, especially when you already have telemetry in Honeycomb that maps to your old system's data.

If you are migrating queries from PromQL, Datadog, or another system:

  1. Paste the existing query into the prompt.
  2. Ask the agent to use MCP to generate an equivalent Honeycomb query.
  3. Let it iterate until the result is either a match or a useful approximation, as in the example below.
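
For example, a common PromQL latency query such as:

    histogram_quantile(0.99,
      sum(rate(http_request_duration_seconds_bucket{service="api-gateway"}[5m])) by (le))

might converge on a Honeycomb query specification like the following. The metric name and service label above are assumptions, and the result is an approximation rather than an exact match: Prometheus interpolates within histogram buckets, while Honeycomb computes percentiles from the raw duration_ms values on events. Note the unit change from seconds to milliseconds:

    {
      "calculations": [
        { "op": "P99", "column": "duration_ms" }
      ],
      "filters": [
        { "column": "service.name", "op": "=", "value": "api-gateway" }
      ],
      "time_range": 3600,
      "granularity": 300
    }

The granularity of 300 seconds loosely mirrors the five-minute rate window in the original query.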

Running Autonomous Agents with Honeycomb 

If you are building fully autonomous agents that use Honeycomb regularly, you will get better results by helping your agents build context and avoid unnecessary work. Iterating on your prompts and agent guidance is key.

  • Be explicit about what matters: Tell the agent exactly how to query your data. For example, list which environments and datasets are relevant. This prevents the agent from relearning the structure of your system each time.
  • Reduce ambiguity: Provide access to source-of-truth files beyond Honeycomb, like your telemetry schemas. These can help the agent investigate more effectively.
  • Capture useful patterns: Save reliable prompts, queries, or instructions in agent memory files, like the example excerpt below. Reusing these lets the agent build on past successes instead of starting from scratch.
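
For example, a memory file entry might look like this (the environment, dataset, and field names are all made up for illustration):

    ## Honeycomb query guidance
    - Production telemetry lives in the prod environment; start with the
      api-gateway dataset unless the task names another service.
    - For a latency baseline, run P50/P95/P99 of duration_ms grouped by
      service.name.
    - Ignore the sandbox environment; its data is synthetic.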