AI agents may invoke tools, call other agents, or prompt a Generative AI (GenAI) model with multi-step user input. The distributed and non-deterministic nature of agentic workflows can make monitoring and debugging difficult. Instrument your AI agents with OpenTelemetry (OTel) GenAI semantic conventions to get full visibility into agent sessions and see your agents in the Agent Timeline.
- Check out Fast AI Feedback Loops with Honeycomb and OpenTelemetry for an example of agent instrumentation using Pydantic.
- See the example storechat application repo for further agent instrumentation examples.
Enriching your traces with GenAI context
Add the following OTel GenAI attributes to your agent spans.

| Attribute | Description |
|---|---|
| gen_ai.conversation.id | Unique identifier for the conversation or session. Used to group all traces and spans belonging to the same agent conversation. |
| gen_ai.agent.name | Name of the agent emitting the span. In multi-agent workflows, each agent should have a unique name. |
| gen_ai.operation.name | Type of agentic operation occurring: chat, create_agent, embeddings, execute_tool, generate_content, invoke_agent, invoke_workflow, retrieval, or text_completion. |
| gen_ai.usage.input_tokens | Number of tokens used in the GenAI input prompt. |
| gen_ai.usage.output_tokens | Number of tokens used in the GenAI response. |
| gen_ai.request.model | Name of the model requested. |
| gen_ai.response.model | Name of the model that generated the response. This can differ from the requested model. |
| gen_ai.response.finish_reasons | Why the model stopped generating tokens. Examples: ["stop"], ["tool_calls"], ["stop", "length"]. |
| gen_ai.tool.name | Name of the tool called by the agent. |
| gen_ai.tool.call.id | Unique identifier for the tool call. |
| gen_ai.tool.call.arguments | Parameters passed to the tool call. |
| gen_ai.tool.call.result | Result returned by the tool call (if any). See recording errors and exceptions for handling failed tool calls. |
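For example, a minimal manual-instrumentation sketch in Python using the OpenTelemetry SDK. The agent name, model, conversation ID, and token counts below are placeholder values, not part of the conventions:

```python
from opentelemetry import trace

tracer = trace.get_tracer("weather-agent-instrumentation")

# All values below are placeholders; replace them with your agent's real
# session, agent, model, and usage details.
with tracer.start_as_current_span("chat gpt-4o") as span:
    span.set_attribute("gen_ai.conversation.id", "conv_12345")
    span.set_attribute("gen_ai.agent.name", "weather-agent")
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.request.model", "gpt-4o")

    # ... call your model here ...

    span.set_attribute("gen_ai.response.model", "gpt-4o-2024-08-06")
    span.set_attribute("gen_ai.usage.input_tokens", 1250)
    span.set_attribute("gen_ai.usage.output_tokens", 320)
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
```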
Span events for input prompts, completions, and evaluations
Full prompts, chat history, and completion responses may be too large or contain personally identifiable information (PII) or other sensitive data. Store prompts, chat history, and completion responses in span events so they can be filtered by your OTel Collector:
- Prompt events: chat history or input prompts provided to the model. GenAI prompts or chats may contain PII or other sensitive data.
- Completion events: messages returned by the model. Each message represents a specific model response. GenAI responses may contain PII or other sensitive data.
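A sketch of recording this content as span events, assuming JSON-serialized message lists. The event names gen_ai.content.prompt and gen_ai.content.completion and their gen_ai.prompt / gen_ai.completion attributes are one common convention, not a requirement; use whatever names your instrumentation and Collector filters expect:

```python
import json

from opentelemetry import trace

tracer = trace.get_tracer("weather-agent-instrumentation")

with tracer.start_as_current_span("chat gpt-4o") as span:
    # Record the input messages as a span event (rather than a span attribute)
    # so a Collector processor can drop or redact the content.
    span.add_event(
        "gen_ai.content.prompt",
        attributes={
            "gen_ai.prompt": json.dumps(
                [{"role": "user", "content": "What is the weather in Paris?"}]
            )
        },
    )

    # ... call your model here ...

    # Record the model's reply as its own event.
    span.add_event(
        "gen_ai.content.completion",
        attributes={
            "gen_ai.completion": json.dumps(
                [{"role": "assistant", "content": "It is 18°C and sunny."}]
            )
        },
    )
```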
Attach gen_ai.evaluation.result events to the GenAI operation span to see evaluations in the GenAI tab.
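For example, a sketch of attaching an evaluation event to the operation span. The gen_ai.evaluation.* attribute names follow the in-development OTel GenAI evaluation conventions, and the evaluator name and score values are illustrative:

```python
from opentelemetry import trace

tracer = trace.get_tracer("weather-agent-instrumentation")

with tracer.start_as_current_span("chat gpt-4o") as span:
    # ... run the model call and your evaluator here ...

    # Evaluator name and score values are placeholders.
    span.add_event(
        "gen_ai.evaluation.result",
        attributes={
            "gen_ai.evaluation.name": "relevance",
            "gen_ai.evaluation.score.value": 0.92,
            "gen_ai.evaluation.score.label": "relevant",
        },
    )
```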
Use unique names for each agent
Each agent should have its own unique gen_ai.agent.name.
Sub-agents should use their own distinct name, instead of inheriting the parent agent’s name.
The Agent Timeline uses the gen_ai.agent.name value for agent grouping.
If gen_ai.agent.name is omitted on a span, it will show up as "Unknown" on the Agent Timeline.
How to instrument one agent calling another agent
When one agent calls or invokes another agent, the calling agent should emit the invoke_agent span, not the agent being called.
The called agent then emits its own spans (chat, execute_tool, and so on) under its own unique gen_ai.agent.name.
For more information on agent invocation spans, check out the OpenTelemetry documentation.
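For example, a sketch of an orchestrator agent invoking a billing-support agent. The agent names and model are placeholders, and the attribute values follow the guidance above (the caller emits the invoke_agent span; the called agent's spans carry its own name):

```python
from opentelemetry import trace

tracer = trace.get_tracer("multi-agent-instrumentation")

# The calling agent ("orchestrator") emits the invoke_agent span.
with tracer.start_as_current_span("invoke_agent billing_support") as invoke_span:
    invoke_span.set_attribute("gen_ai.operation.name", "invoke_agent")
    invoke_span.set_attribute("gen_ai.agent.name", "orchestrator")

    # The called agent then emits its own spans under its own unique name.
    with tracer.start_as_current_span("chat gpt-4o") as chat_span:
        chat_span.set_attribute("gen_ai.operation.name", "chat")
        chat_span.set_attribute("gen_ai.agent.name", "billing_support")
        # ... the billing_support agent's model call happens here ...
```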
Recording errors and exceptions
Record errors or exceptions following the OTel specification and include as many attributes as you can:
- error.type / exception.type
- error.message / exception.message
- error.stacktrace / exception.stacktrace
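For example, recording a failed tool call with the Python OTel SDK. The tool name and the failure itself are placeholders:

```python
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer("weather-agent-instrumentation")

with tracer.start_as_current_span("execute_tool get_weather") as span:
    span.set_attribute("gen_ai.operation.name", "execute_tool")
    span.set_attribute("gen_ai.tool.name", "get_weather")
    try:
        raise TimeoutError("upstream weather API timed out")  # placeholder failure
    except Exception as exc:
        # record_exception adds an exception event carrying exception.type,
        # exception.message, and exception.stacktrace.
        span.record_exception(exc)
        span.set_status(Status(StatusCode.ERROR, str(exc)))
```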
Naming generative AI operation spans
Generative AI operation spans should follow these naming conventions. Naming your spans this way ensures they render correctly in the Agent Timeline.

| Operation | gen_ai.operation.name | Span Name Pattern |
|---|---|---|
| Chat | chat | chat {model} |
| Create GenAI agent | create_agent | create_agent {agent_name} |
| Tool execution | execute_tool | execute_tool {tool_name} |
| Agent invocation | invoke_agent | invoke_agent {agent_name} |
| Embeddings | embeddings | embeddings {model} |
| RAG retrieval | retrieval | retrieval {data_source} |
| Multimodal content generation | generate_content | generate_content {model} |
| Text completions | text_completion | text_completion {model} |
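For example, a RAG retrieval span named according to the "retrieval {data_source}" pattern above; the data source name is a placeholder:

```python
from opentelemetry import trace

tracer = trace.get_tracer("rag-agent-instrumentation")

data_source = "product_docs"  # placeholder data source name

# Span name follows the "retrieval {data_source}" pattern from the table above.
with tracer.start_as_current_span(f"retrieval {data_source}") as span:
    span.set_attribute("gen_ai.operation.name", "retrieval")
    # ... query the vector store or search index here ...
```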