Core Concepts of Honeycomb MCP


Understand how Honeycomb MCP uses Model Context Protocol to let AI agents explore your observability data.

Important
This feature is in beta, and we would love your feedback!

What is Model Context Protocol? 

Model Context Protocol (MCP) is a standard that lets AI agents and large language models (LLMs) interact with external tools and services in a consistent, structured way. With MCP, you can enable AI agents to perform specific actions, like browsing the web, editing local files, or fetching GitHub issues and pull requests.
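Under the hood, MCP messages are JSON-RPC 2.0: a client discovers tools with a `tools/list` request and invokes one with `tools/call`, passing the tool's name and arguments. A minimal sketch in Python of what such a request looks like on the wire (the tool name and arguments here are illustrative, not from any real server):

```python
import json

# MCP messages are JSON-RPC 2.0. A client invokes a tool with the
# "tools/call" method, passing the tool's name and its arguments.
# The tool name and arguments below are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch_github_issue",  # hypothetical tool name
        "arguments": {"repo": "example/repo", "issue": 42},
    },
}

# Serialize to the JSON that would travel between client and server.
wire = json.dumps(request)
print(wire)
```

The server replies with a JSON-RPC response containing the tool's result, which the client hands back to the LLM as context.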

Why Does It Matter? 

You may already use AI tools like Cursor, Claude Code, Codename Goose, or one of the many (many) AI assistants that have emerged in the developer ecosystem since 2024. These tools have empowered developers and operators by expanding what LLMs, such as Anthropic Claude, OpenAI GPT, and DeepSeek R1, can do. Instead of only answering questions and generating text snippets, LLMs can now perform tasks by using tools. MCP provides a standardized way to define and expose those tools, making it easier for AI agents to discover and use them reliably.

How MCP Servers Fit In 

An MCP server is the implementation of the MCP standard. It exposes your tools in a structured, machine-readable format. Think of the server as the bridge between AI agents and the services or data sources they need to interact with.

Honeycomb MCP Server 

Honeycomb MCP Server brings Honeycomb’s observability investigation approach to LLMs via AI agents. We want to let AI agents query, explore, and iterate on telemetry data just as you do in our UI.

In practice, we have seen that AI agents using Honeycomb MCP can do meaningful work with Honeycomb. They can:

  • Investigate and diagnose latency or error spikes
  • Identify performance outliers and suggest optimization opportunities
  • Translate existing dashboards and alerts into Honeycomb’s query language

We are excited to see the new ways you will use this integration to enhance your workflows!

Key Concepts 

Get familiar with how the Honeycomb MCP server works under the hood. Learn about the tools your agent can access, how security is handled, and how to get the most out of prompting.

Tools 

MCP makes Honeycomb functionality available to AI agents by exposing it as discrete tools. Each tool performs a specific task, like running a query or fetching a trace.

Some available tools include:

  • run_unsaved_query: Runs a query against one or more datasets without saving it.
  • search_columns: Finds field names in a dataset using regular expressions.
  • get_trace: Retrieves all spans in a trace from an environment by trace ID.

Agents can also use tools to list available environments, datasets, and fields, which can be useful for building context before querying.
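To make this concrete, an agent invoking `run_unsaved_query` would send a `tools/call` request naming that tool. The argument names sketched below (`environment`, `dataset`, `calculations`, `time_range`) are assumptions for illustration, not the tool's documented schema; an agent would discover the actual parameters from the server's tool listing:

```python
import json

# Sketch of a tools/call request an agent might send to run an
# unsaved Honeycomb query. The argument schema shown here is an
# assumption for illustration only; the real parameters come from
# the server's tools/list response.
tool_call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "run_unsaved_query",
        "arguments": {
            "environment": "production",  # assumed parameter name
            "dataset": "api-gateway",     # assumed parameter name
            "calculations": [
                {"op": "P99", "column": "duration_ms"}  # assumed shape
            ],
            "time_range": 3600,  # assumed: trailing window in seconds
        },
    },
}

print(json.dumps(tool_call, indent=2))
```

The result comes back as structured data the agent can reason over, iterate on, or feed into a follow-up query.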

Security Model 

Honeycomb MCP follows the same security standards as the rest of the Honeycomb platform.

By design, agents can only read data; they cannot write data to Honeycomb or modify existing Honeycomb resources. The only exception is the feedback tool, which agents can use to submit structured feedback to Honeycomb.

This design keeps your data safe while still giving agents the power to explore, investigate, and ask meaningful questions.

Best Practices 

Once your agent is connected, the next challenge is helping it deliver useful, accurate results. This section offers practical tips for crafting better prompts and managing context across interactions.

Effective Prompting and Context Management 

The quality of your prompt directly affects how well your agent performs. Clear direction and good context go a long way toward helping the agent deliver useful, accurate results.

Some ways to guide your agent effectively include:

  • Be specific: Vague prompts like “Why is the system slow?” leave too much room for guesswork. Instead, try something more focused: “Investigate a latency spike between 12:00 and 13:00 in the api-gateway service.” Include details like service names, attributes, or signal types.
  • Provide context up front: If you are working with a specific codebase, run the agent from that repo and let it know that it can look at the code for details. Mention relevant services, environments, or datasets in your prompt to narrow its focus.
  • Use files to manage context across steps: For multi-step tasks, like plotting series data or comparing results over time, ask the agent to store responses in files. It can read those files later as it continues to reason or assemble output.
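The file-based pattern in the last bullet can be as simple as asking the agent to write an intermediate result to disk and read it back in a later step, rather than relying on earlier conversation turns staying in context. A minimal sketch of that round trip (the file name and result structure are illustrative):

```python
import json
import tempfile
from pathlib import Path

# Step 1: the agent stores a query result to a file so it survives
# across steps, even if earlier turns fall out of the context window.
result = {"series": [120, 95, 240], "unit": "ms", "window": "12:00-13:00"}
path = Path(tempfile.gettempdir()) / "latency_result.json"  # illustrative name
path.write_text(json.dumps(result))

# Step 2: later in the task, the agent reads the file back instead of
# reconstructing the data from conversation history.
restored = json.loads(path.read_text())
print(restored["window"])
```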

Next Steps 

Continue your MCP journey with these focused resources:

  • Connecting AI Agents to Honeycomb MCP: Follow step-by-step instructions to connect common agents to Honeycomb MCP.
  • Example Use Cases: Explore real-world use cases and tips for working with Honeycomb via MCP.
  • Troubleshooting: Find solutions to common configuration issues and learn how to verify that your agent is connected and working correctly.