Review real-world use cases and strategies for guiding AI agents to query and analyze data with Honeycomb MCP.
Agents can use Model Context Protocol (MCP) tools to explore your data and answer detailed questions about system behavior. Both goal-directed queries (like responding to an alert) and broader investigations (like identifying performance issues) tend to work well with state-of-the-art large language models (LLMs).
To get useful results, give the agent a clear goal to work toward. Even open-ended prompts like “Investigate latency in the api-gateway service” can be productive.
In our testing, agents often begin with duration_ms percentiles (p50, p95, p99) as a baseline.
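For example, a baseline query an agent might run during a latency investigation could look like the following Honeycomb query spec. This is a sketch, not literal output from the MCP tools: the column names duration_ms, service.name, and http.route assume OpenTelemetry-style instrumentation, and your dataset's fields may differ.

```json
{
  "time_range": 7200,
  "breakdowns": ["http.route"],
  "calculations": [
    { "op": "P50", "column": "duration_ms" },
    { "op": "P95", "column": "duration_ms" },
    { "op": "P99", "column": "duration_ms" }
  ],
  "filters": [
    { "column": "service.name", "op": "=", "value": "api-gateway" }
  ],
  "orders": [
    { "op": "P99", "column": "duration_ms", "order": "descending" }
  ]
}
```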
MCP can help agents understand and improve your instrumentation, especially when paired with code access or examples.
Keep these key patterns in mind:
- Use live examples: Ask the agent to look at how other services are instrumented in your codebase. For example: “Write a new service and base its instrumentation on other Golang services in this repo.” A sketch of what that instrumentation might look like follows this list.
- Combine auto-instrumentation with refinement: Apply zero-code OpenTelemetry instrumentation, then let the agent analyze the results with MCP and suggest refinements.
- Audit and iterate: Pair with an agent on your actual data shape, and ask it to evaluate your overall instrumentation quality. Once the agent builds understanding, you can commit its artifacts, or share them with teammates or other agents as part of a continuous loop of instrumentation improvement.
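As a rough illustration of the first pattern, here is what minimal Go instrumentation in that style might look like. It is a sketch under assumptions, not a prescribed setup: the service name checkout, the /checkout route, and the cart.items attribute are hypothetical, and your repo's exporter configuration and semantic-convention version will differ.

```go
package main

import (
	"context"
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)

// initTracer wires up an OTLP/HTTP exporter. The endpoint and the
// x-honeycomb-team header come from the standard OTEL_EXPORTER_OTLP_*
// environment variables, the same ones zero-code instrumentation uses.
func initTracer(ctx context.Context) (*sdktrace.TracerProvider, error) {
	exp, err := otlptracehttp.New(ctx)
	if err != nil {
		return nil, err
	}
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exp),
		sdktrace.WithResource(resource.NewWithAttributes(
			semconv.SchemaURL,
			semconv.ServiceNameKey.String("checkout"), // hypothetical service name
		)),
	)
	otel.SetTracerProvider(tp)
	return tp, nil
}

func main() {
	ctx := context.Background()
	tp, err := initTracer(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer tp.Shutdown(ctx)

	tracer := otel.Tracer("checkout")

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// A manual child span with an attribute the agent might suggest
		// after reviewing real traces in Honeycomb.
		_, span := tracer.Start(r.Context(), "calculate-total")
		span.SetAttributes(attribute.Int("cart.items", 3)) // hypothetical field
		span.End()
		w.Write([]byte("ok"))
	})

	// otelhttp gives every request a root span with standard HTTP attributes.
	http.Handle("/checkout", otelhttp.NewHandler(handler, "checkout"))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```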
LLMs are generally very good at translating between observability query languages, especially when you already have telemetry available in Honeycomb that maps to your old system. If you are migrating from PromQL, Datadog, or another system, give the agent your existing queries and ask it to produce Honeycomb equivalents.
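For instance, a PromQL latency query such as `histogram_quantile(0.95, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))` maps, under the assumption that your spans carry a duration_ms column, onto a Honeycomb query along these lines (note the unit change from seconds to milliseconds):

```json
{
  "time_range": 3600,
  "granularity": 300,
  "calculations": [
    { "op": "P95", "column": "duration_ms" }
  ]
}
```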
If you are building fully autonomous agents that use Honeycomb regularly, you will get better results by helping your agents build context and avoid unnecessary work. Iterating on your prompts and agent guidance is key.
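One way to build that context is to commit a short, hand-checked notes file that agents read before querying. Everything in this sketch is hypothetical; the file name, dataset names, and conventions are examples, not a required format:

```
# honeycomb-agent-notes.md (hypothetical)
Environment: production
Key datasets: api-gateway, checkout, payments
Latency column: duration_ms (milliseconds); error flag: error (boolean)
Prefer P95/P99 over averages; default time range: last 2 hours
Check existing boards and triggers before running new exploratory queries
```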