Text to JSON Quick Start

An extremely common use case for large language models (LLMs) is to convert text to JSON where the LLM:

  • generates a JSON object from user input
  • parses that object
  • validates its contents
  • uses the content elsewhere in the application
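
For illustration, here is a minimal TypeScript sketch of that flow. The callLLM function is a hypothetical stand-in for your model client, and Ajv is just one of several JSON Schema validators you could use:

import Ajv from "ajv";

// Hypothetical LLM client call; replace with your provider's SDK.
declare function callLLM(prompt: string): Promise<string>;

const ajv = new Ajv();
const schema = { type: "object", required: ["name"] }; // your JSON schema

async function textToJson(userInput: string) {
  // 1. Generate a JSON object from user input.
  const response = await callLLM(`Return a JSON object for: ${userInput}`);

  // 2. Parse that object. JSON.parse throws on malformed JSON.
  const parsed = JSON.parse(response);

  // 3. Validate its contents against the schema.
  if (!ajv.validate(schema, parsed)) {
    throw new Error(`Invalid JSON object: ${ajv.errorsText()}`);
  }

  // 4. Use the content elsewhere in the application.
  return parsed;
}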

Collecting observability data for this process is a natural fit because each step has clear, measurable outcomes. This data answers questions like:

  • Do some user inputs result in higher latency or more errors?
  • Was the JSON object structurally valid as per the JSON schema?
  • Did the JSON object contain expected fields?
  • Did the new version of our prompt result in more invalid JSON objects?
  • Are there particular patterns in the user input that result in invalid JSON objects?

Adding telemetry to your LLM application and analyzing it with Honeycomb provides a flexible way to measure and improve the overall user experience. Use our Quick Start to instrument your LLM application.

Before You Begin 

Before instrumenting your LLM application, you’ll need to do a few things:

  1. Create a Honeycomb Account. Signup is free!
  2. Create a Honeycomb Team. Complete your account creation by giving us a team name. Honeycomb uses teams to organize groups of users, grant them access to data, and create a shared work history in Honeycomb.
    Tip
    We recommend using your company or organization name as your Honeycomb team name.
  3. Get Your Honeycomb API Key. To send data to Honeycomb, you’ll need your Honeycomb API Key. Once you create your team, you will be able to view or copy your API key. Make note of it; you will need it later! You can also find your Honeycomb API Key any time in your Environment Settings.

Send Telemetry Data to Honeycomb 

Once you have your Honeycomb API key and your LLM application to instrument, it’s time to send telemetry data to Honeycomb!

To instrument your LLM, you will add automatic instrumentation to your code for standard trace data telemetry, and then add custom instrumentation specifically for your LLM.

Add Automatic Instrumentation to Your Code 

The quickest way to start seeing your trace data in Honeycomb is to use OpenTelemetry, an open-source collection of tools, APIs, and SDKs, to automatically inject instrumentation code into your application without requiring explicit changes to your codebase.

Note
Automatic instrumentation works slightly differently within each language, but the general idea is that it attaches hooks into popular tools and frameworks and “watches” for certain functions to be called. When they’re called, the instrumentation automatically starts and completes trace spans on behalf of your application.

When you add automatic instrumentation to your code, OpenTelemetry will inject spans, which represent units of work or operations within your application that you want to capture and analyze for observability purposes.

Note
This Quick Start uses the npm dependency manager. For instructions with yarn or if using TypeScript, read our OpenTelemetry Node.js documentation.

Acquire Dependencies 

Open your terminal, navigate to the location of your project on your drive, and install OpenTelemetry’s automatic instrumentation meta package:

npm install --save @opentelemetry/auto-instrumentations-node
auto-instrumentations-node: OpenTelemetry’s meta package that provides a way to add automatic instrumentation to any Node application, capturing telemetry data from a number of popular libraries and frameworks, like express, dns, http, and more.

Alternatively, install individual instrumentation packages.
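
For example, to instrument only http and express, you might install just those packages instead (a sketch; choose the packages that match your stack):

npm install --save @opentelemetry/instrumentation-http @opentelemetry/instrumentation-express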

Initialize 

Create an initialization file, commonly known as the tracing.js file:

// Example filename: tracing.js
'use strict';

const opentelemetry = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');

const sdk = new opentelemetry.NodeSDK({
    traceExporter: new OTLPTraceExporter(),
    instrumentations: [
        getNodeAutoInstrumentations({
            // we recommend disabling fs autoinstrumentation since it can be noisy
            // and expensive during startup
            '@opentelemetry/instrumentation-fs': {
                enabled: false,
            },
        }),
    ],
});

sdk.start();

Configure the OpenTelemetry SDK 

Use environment variables to configure the OpenTelemetry SDK:

export OTEL_SERVICE_NAME="your-service-name"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io:443" # US instance
#export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.eu1.honeycomb.io:443" # EU instance
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=your-api-key"
OTEL_SERVICE_NAME: Service name. When you send data, Honeycomb creates a dataset in which to store your data and uses this as the name. Can be any string.
OTEL_EXPORTER_OTLP_PROTOCOL: The data format that the SDK uses to send telemetry to Honeycomb. For more on data format configuration options, read Choosing between gRPC and HTTP.
OTEL_EXPORTER_OTLP_ENDPOINT: Honeycomb endpoint to which you want to send your data.
OTEL_EXPORTER_OTLP_HEADERS: Adds your Honeycomb API Key to the exported telemetry headers for authorization. Learn how to find your Honeycomb API Key.
Note

If you use Honeycomb Classic, you must also specify the Dataset using the x-honeycomb-dataset header.

export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=your-api-key,x-honeycomb-dataset=your-dataset"
Note
If you are sending data directly to Honeycomb, you must configure the API key and service name. If you are using an OpenTelemetry Collector, configure your API key at the Collector level instead.
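
As a sketch, configuring the API key at the Collector level means putting it in the Collector's OTLP exporter configuration rather than in your application's environment:

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "your-api-key"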

Run Your Application 

Run the Node.js app and include the initialization file you created:

node -r ./tracing.js YOUR_APPLICATION_NAME.js

Be sure to replace YOUR_APPLICATION_NAME with the name of your application’s main file.

Alternatively, you can import the initialization file as the first step in your application lifecycle.
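
For example, assuming your entry file is named app.js, you could load the tracing setup before anything else:

// First line of app.js
require('./tracing.js');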

In Honeycomb’s UI, you should now see your application’s incoming requests and outgoing HTTP calls generate traces.

Note
This Quick Start uses the pip package manager. For instructions with poetry, read our OpenTelemetry Python documentation.

Acquire Dependencies 

  1. Install the OpenTelemetry Python packages:

    python -m pip install opentelemetry-instrumentation \
        opentelemetry-distro \
        opentelemetry-exporter-otlp
    
  2. Install instrumentation libraries for the packages used by your application. We recommend using the opentelemetry-bootstrap tool that comes with the OpenTelemetry SDK to scan your application packages and print out a list of available instrumentation libraries. You should then add these libraries to your requirements.txt file:

    opentelemetry-bootstrap >> requirements.txt
    pip install -r requirements.txt
    

    If you do not use a requirements.txt file, you can install the libraries directly in your current environment:

    opentelemetry-bootstrap --action=install
    

Configure the OpenTelemetry SDK 

Use environment variables to configure the OpenTelemetry SDK:

export OTEL_SERVICE_NAME="your-service-name"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io:443" # US instance
#export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.eu1.honeycomb.io:443" # EU instance
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<your-api-key>"
OTEL_SERVICE_NAME: Service name. When you send data, Honeycomb creates a dataset in which to store your data and uses this as the name. Can be any string.
OTEL_EXPORTER_OTLP_PROTOCOL: The data format that the SDK uses to send telemetry to Honeycomb. For more on data format configuration options, read Choosing between gRPC and HTTP.
OTEL_EXPORTER_OTLP_ENDPOINT: Honeycomb endpoint to which you want to send your data.
OTEL_EXPORTER_OTLP_HEADERS: Adds your Honeycomb API Key to the exported telemetry headers for authorization. Learn how to find your Honeycomb API Key.

To learn more about configuration options, visit Agent Configuration in the OpenTelemetry documentation.

Note

If you use Honeycomb Classic, you must also specify the Dataset using the x-honeycomb-dataset header.

export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=your-api-key,x-honeycomb-dataset=your-dataset"
Note
If you are sending data directly to Honeycomb, you must configure the API key and service name. If you are using an OpenTelemetry Collector, configure your API key at the Collector level instead.

Run Your Application 

Run your Python application using the OpenTelemetry Python automatic instrumentation tool opentelemetry-instrument, which configures the OpenTelemetry SDK:

opentelemetry-instrument python YOUR_APPLICATION_NAME.py

Be sure to replace YOUR_APPLICATION_NAME with the name of your application’s main file.

In Honeycomb’s UI, you should now see your application’s incoming requests and outgoing HTTP calls generate traces.

Acquire Dependencies 

The automatic instrumentation agent for OpenTelemetry Java will automatically generate trace data from your application. The agent is packaged as a JAR file and runs within the same JVM as your application.

In order to use the automatic instrumentation agent, you must first download it:

curl -L -O https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar

Configure the OpenTelemetry SDK 

As per the OpenTelemetry specification, you must set a service.name resource in your SDK configuration. The service name is used as the name of a dataset to store trace data in Honeycomb.

When using OpenTelemetry for Java, all of the following configuration properties are required:

System Property / Environment Variable: Value

otel.traces.exporter / OTEL_TRACES_EXPORTER: otlp
otel.metrics.exporter / OTEL_METRICS_EXPORTER: otlp (*)
otel.exporter.otlp.endpoint / OTEL_EXPORTER_OTLP_ENDPOINT: https://api.honeycomb.io (US instance) or https://api.eu1.honeycomb.io (EU instance)
otel.exporter.otlp.traces.endpoint / OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: https://api.honeycomb.io/v1/traces (defaults to the value of OTEL_EXPORTER_OTLP_ENDPOINT)
otel.exporter.otlp.metrics.endpoint / OTEL_EXPORTER_OTLP_METRICS_ENDPOINT: https://api.honeycomb.io/v1/metrics (US instance) or https://api.eu1.honeycomb.io/v1/metrics (EU instance) (*)
otel.exporter.otlp.headers / OTEL_EXPORTER_OTLP_HEADERS: x-honeycomb-team=HONEYCOMB_API_KEY
otel.exporter.otlp.traces.headers / OTEL_EXPORTER_OTLP_TRACES_HEADERS: x-honeycomb-team=HONEYCOMB_API_KEY (defaults to the value of OTEL_EXPORTER_OTLP_HEADERS)
otel.exporter.otlp.metrics.headers / OTEL_EXPORTER_OTLP_METRICS_HEADERS: x-honeycomb-team=HONEYCOMB_API_KEY,x-honeycomb-dataset=HONEYCOMB_DATASET (*)
otel.service.name / OTEL_SERVICE_NAME: the service.name attribute to use for all spans

Fields marked with an asterisk (*) are required for exporting metrics to Honeycomb.

To learn more about configuration options, visit the OpenTelemetry SDK Autoconfigure GitHub repository.

Run Your Application 

Run your application with the automatic instrumentation agent attached:

java -javaagent:opentelemetry-javaagent.jar -jar /path/to/myapp.jar

You can also include configuration values with an invocation of your application:

java \
-Dotel.javaagent.configuration-file=/path/to/properties/file \
-javaagent:opentelemetry-javaagent.jar \
-jar /path/to/myapp.jar
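
As a sketch, that properties file might set the same values described in the configuration table above (substitute your own service name and API key):

otel.service.name=your-service-name
otel.traces.exporter=otlp
otel.exporter.otlp.endpoint=https://api.honeycomb.io
otel.exporter.otlp.headers=x-honeycomb-team=HONEYCOMB_API_KEY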

In Honeycomb’s UI, you should now see your application’s incoming requests and outgoing HTTP calls generate traces.

Note
This Quick Start uses ASP.NET Core.

Acquire Dependencies 

Install the OpenTelemetry .NET packages. For example, with the .NET CLI, use:

dotnet add package OpenTelemetry
dotnet add package OpenTelemetry.Extensions.Hosting
dotnet add package OpenTelemetry.Instrumentation.AspNetCore
dotnet add package OpenTelemetry.Instrumentation.Http

Initialize 

Initialize the TracerProvider during application setup.

services.AddOpenTelemetry().WithTracing(builder => builder
    .AddAspNetCoreInstrumentation()
    .AddHttpClientInstrumentation()
    .AddOtlpExporter());

Configure 

Use environment variables to configure the OpenTelemetry SDK:

export OTEL_SERVICE_NAME="your-service-name"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io:443" # US instance
#export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.eu1.honeycomb.io:443" # EU instance
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<your-api-key>"
OTEL_SERVICE_NAME: Service name. When you send data, Honeycomb creates a dataset in which to store your data and uses this as the name. Can be any string.
OTEL_EXPORTER_OTLP_PROTOCOL: The data format that the SDK uses to send telemetry to Honeycomb. For more on data format configuration options, read Choosing between gRPC and HTTP.
OTEL_EXPORTER_OTLP_ENDPOINT: Honeycomb endpoint to which you want to send your data.
OTEL_EXPORTER_OTLP_HEADERS: Adds your Honeycomb API Key to the exported telemetry headers for authorization. Learn how to find your Honeycomb API Key.

Run 

Run your application:

dotnet run

In Honeycomb’s UI, you should now see your application’s incoming requests and outgoing HTTP calls generate traces.

Acquire Dependencies 

Install OpenTelemetry Go packages:

go get \
  github.com/honeycombio/otel-config-go/otelconfig \
  go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp

Initialize 

Prepare your application to send spans to Honeycomb.

Open or create a file called main.go:

package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/honeycombio/otel-config-go/otelconfig"
    "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

// Implement an HTTP Handler function to be instrumented
func httpHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello, World")
}

func main() {
    // use otelconfig to set up OpenTelemetry SDK
    otelShutdown, err := otelconfig.ConfigureOpenTelemetry()
    if err != nil {
        log.Fatalf("error setting up OTel SDK: %v", err)
    }
    defer otelShutdown()

    // Initialize HTTP handler instrumentation
    handler := http.HandlerFunc(httpHandler)
    wrappedHandler := otelhttp.NewHandler(handler, "hello")
    http.Handle("/hello", wrappedHandler)

    // Serve HTTP server
    log.Fatal(http.ListenAndServe(":3030", nil))
}

Configure the OpenTelemetry SDK 

Once you have acquired the necessary dependencies, you can configure your SDK to send events to Honeycomb, and then run your application to see traces.

export OTEL_SERVICE_NAME="your-service-name"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io:443" # US instance
#export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.eu1.honeycomb.io:443" # EU instance
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=your-api-key"
OTEL_EXPORTER_OTLP_ENDPOINT: Honeycomb endpoint to which you want to send your data.
OTEL_EXPORTER_OTLP_HEADERS: Sets the x-honeycomb-team header to the API Key you generated in Honeycomb.
OTEL_SERVICE_NAME: Service name. When you send data, Honeycomb creates a dataset in which to store your data and uses this as the name. Can be any string.

Run Your Application 

Run your application:

go run YOUR_APPLICATION_NAME.go

Be sure to replace YOUR_APPLICATION_NAME with the name of your application’s main file.

In Honeycomb’s UI, you should now see your application’s incoming requests and outgoing HTTP calls generate traces.

Acquire Dependencies 

Add these gems to your Gemfile:

gem 'opentelemetry-sdk'
gem 'opentelemetry-exporter-otlp'
gem 'opentelemetry-instrumentation-all'
opentelemetry-sdk: Required to create spans
opentelemetry-exporter-otlp: An exporter to send data in the OTLP format
opentelemetry-instrumentation-all: A meta package that provides instrumentation for Rails, Sinatra, several HTTP libraries, and more

Install the gems using your terminal:

bundle install

Initialize 

Initialize OpenTelemetry early in your application lifecycle. For Rails applications, we recommend that you use a Rails initializer. For other Ruby services, initialize as early as possible in the startup process.

# config/initializers/opentelemetry.rb
require 'opentelemetry/sdk'
require 'opentelemetry/exporter/otlp'
require 'opentelemetry/instrumentation/all'

OpenTelemetry::SDK.configure do |c|
    c.use_all() # enables all instrumentation!
end

Configure the OpenTelemetry SDK 

Use environment variables to configure OpenTelemetry to send events to Honeycomb:

export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io" # US instance
#export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.eu1.honeycomb.io" # EU instance
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=your-api-key"
export OTEL_SERVICE_NAME="your-service-name"
OTEL_EXPORTER_OTLP_ENDPOINT: Base endpoint to which you want to send your telemetry data.
OTEL_EXPORTER_OTLP_HEADERS: List of headers to apply to all outgoing telemetry data. Place your API Key generated in Honeycomb in the x-honeycomb-team header. Learn how to find your Honeycomb API Key.
OTEL_SERVICE_NAME: Service name. When you send data, Honeycomb creates a dataset in which to store your data and uses this as the name. Can be any string.

Run Your Application 

Run your Ruby application.

In Honeycomb’s UI, you should now see your application’s incoming requests and outgoing HTTP calls generate traces.

If your preferred language is not covered here, you can find relevant instrumentation information in the OpenTelemetry community documentation.

Tip
For Rust, we recommend you use the opentelemetry and opentelemetry-otlp crates to send data to Honeycomb over OTLP.

For any required configuration values, see Using the Honeycomb OpenTelemetry Endpoint.

Generate Automated Data 

Now that you have added automatic instrumentation to your application and have it running in your development environment, interact with your application by making a few requests. Making requests to your service will generate telemetry data and send it to Honeycomb where it will appear in the Honeycomb UI within seconds.
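
For example, if you are running the Go sample from earlier, a few requests to its endpoint will do (adjust the host, port, and path to match your own service):

curl http://localhost:3030/hello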

Add Custom Instrumentation for LLMs 

With OpenTelemetry’s automatic instrumentation now installed in your application, trace data is being sent to Honeycomb.

The next step is to add custom instrumentation that tracks all relevant information related to your LLM feature. In OpenTelemetry, custom instrumentation is called manual instrumentation. To get the most out of your traces, use the OpenTelemetry APIs to instrument your code directly.

To add custom instrumentation, create a single span that tracks all relevant information related to your LLM feature. Specifically, the minimum information to track must include:

  • User ID
  • Prompt version
  • User input
  • Full prompt text
  • Full LLM response
  • Any errors encountered while parsing or validating the JSON
  • Error message
  • Token count

Examples 

The following code examples show how to capture the correct information on an OpenTelemetry span:

import { trace, Span, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("llm.tracer");

function getJsonFromText(
  userInput: string,
  userId: string,
  promptTemplate: string,
  promptVersion: string
) {
  return tracer.startActiveSpan("app.get_json_from_text", (span: Span) => {
    span.setAttribute("app.user_id", userId);
    span.setAttribute("app.llm.prompt_version", promptVersion);
    span.setAttribute("app.llm.user_input", userInput);

    try {
        // Programmatically build the full prompt.
        // The output is the entire prompt you'd send to the LLM,
        // after RAG or any other context-building operations.
        const fullPrompt = buildFullPrompt(promptTemplate, userInput);

        span.setAttribute("app.llm.prompt_text", fullPrompt);

        // Call the LLM and get back the text of the result
        // and the number of tokens used.
        const { response, tokenCount } = callLLM(fullPrompt);

        span.setAttribute("app.llm.response", response);
        span.setAttribute("app.llm.token_count", tokenCount);

        // Parse the JSON object and validate it,
        // capturing any errors you might encounter.
        const result = parseAndValidateResponse(response);

        return result;
    } catch (ex) {
        // Track any unexpected errors.
        span.setStatus({ code: SpanStatusCode.ERROR });
        span.recordException(ex as Error);
    } finally {
        span.end();
    }
  });
}

function parseAndValidateResponse(llmResult: string) {
  const currentSpan = trace.getActiveSpan();

  // Extract and parse the JSON object from the LLM response.
  const { extracted, result, extractionError } = extractAndParseJson(llmResult);
  if (!extracted) {
    currentSpan?.setAttribute("error.message", extractionError);
    currentSpan?.setStatus({ code: SpanStatusCode.ERROR });
    return null;
  }

  // Validate the structure of the result, capturing
  // any validation errors you might encounter.
  const { validated, validationError } = validateResult(result);
  if (!validated) {
    currentSpan?.setAttribute("error.message", validationError);
  }

  return result;
}
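
The same instrumentation in Python: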
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer("llm.tracer")

def get_json_from_text(user_input, user_id, prompt_template, prompt_version):
    with tracer.start_as_current_span("app.get_json_from_text") as span:
        span.set_attribute("app.user_id", user_id)
        span.set_attribute("app.llm.prompt_version", prompt_version)
        span.set_attribute("app.llm.user_input", user_input)

        try:
            # Programmatically build the full prompt.
            # The output is the entire prompt you'd send to the LLM,
            # after RAG or any other context-building operations.
            full_prompt = build_full_prompt(prompt_template, user_input)

            span.set_attribute("app.llm.prompt_text", full_prompt)

            # Call the LLM and get back the text of the result
            # and the number of tokens used.
            llm_result, token_count = call_llm(full_prompt)

            span.set_attribute("app.llm.response", response)
            span.set_attribute("app.llm.token_count", token_count)

            # Parse the JSON object and validate it,
            # capturing any errors you might encounter.
            result = parse_and_validate_response(llm_result)

            return result
        except Exception as ex:
            # Track any unexpected errors.
            span.set_status(Status(StatusCode.ERROR))
            span.record_exception(ex)

def parse_and_validate_response(llm_result):
    current_span = trace.get_current_span()

    # Extract and parse the JSON object from the LLM response.
    extracted, result, extraction_error = extract_and_parse_json(llm_result)
    if not extracted:
        current_span.set_attribute("error.message", extraction_error)
        current_span.set_status(Status(StatusCode.ERROR))
        return None

    # Validate the structure of the result, capturing
    # any validation errors you might encounter.
    validated, validation_error = validate_result(result)
    if not validated:
        current_span.set_attribute("error.message", validation_error)

    return result

Explore Your Data 

With your app running and telemetry being sent to Honeycomb, it’s time to explore your data.

Create a Board 

For quick reference over time, create a Board to show LLM-specific queries of interest. We recommend creating the Board before running queries, so you can easily save queries to it later.

To create a Board:

  1. In the Honeycomb UI’s left navigation menu, select Boards with its Bulletin Board icon. When the left navigation menu is compact, only the icon appears.
  2. Select New Board.
  3. In the modal that appears, name your new board, such as “LLM Dashboard.” Optionally, give your Board a description to help others find and use it. Determine the board’s Sharing setting: Public to the Team or Limited to Collaborators.
  4. Select Create to finish. Your new board appears next.

Next Steps 

  1. Select Add Query to go to the Query Builder display.
  2. Use the example queries in the next section to populate your LLM Board.
  3. Follow the directions to add queries to an existing Board.

Create Queries 

Now it’s time to create your first queries for LLMs!

Use the query examples below to explore the performance and behavior of your LLM application. These attributes will exist in your data if you added the custom LLM instrumentation in the previous step.

Enter each example query using the Query Builder. These example queries use two to three of the VISUALIZE, WHERE, and GROUP BY clauses, located at the top of the Query Builder.

  • VISUALIZE - Performs a calculation and displays a corresponding graph over time. Most VISUALIZE queries return a line graph, while the HEATMAP visualization shows the distribution of data over time
  • WHERE - Filters based on attribute parameter(s)
  • GROUP BY - Groups fields by attribute parameter(s)

Track Overall Latency 

This query tracks the overall latency of all LLM-related operations and surfaces the slowest requests.

VISUALIZE: HEATMAP(duration_ms), MAX(duration_ms)
WHERE: name = app.get_json_from_text

Use to identify any spikes in latency, or if latency is increasing over time. In the event of a spike, you can investigate what happened by using BubbleUp to find outliers.

Track Invalid JSON Objects 

This query shows each instance in which a user input led to an invalid JSON object, whether because of a parsing error or a validation error.

VISUALIZE: COUNT
WHERE: name = app.get_json_from_text, error exists
GROUP BY: app.llm.user_input, app.llm.response, error.message

Use to identify exactly which inputs lead to bad behavior, which makes it easier to identify specific bugs to solve.

Track all User Inputs Grouped by Response and Errors 

This query groups all inputs and LLM responses for requests that succeeded.

VISUALIZE: COUNT
WHERE: name = app.get_json_from_text, error does-not-exist
GROUP BY: app.llm.user_input, app.llm.response

Use to understand general user behavior, and to identify any patterns in user input that leads to a particularly useful response. It’s just as helpful to understand what’s working as it is to understand what isn’t.

Show Token Usage Over Time 

This query tracks token use over time, grouped by user ID.

VISUALIZE: HEATMAP(app.llm.token_count)
WHERE: name = app.get_json_from_text
GROUP BY: app.user_id

Use to understand how many tokens are being used over time, and to attribute that usage to specific users, as a small number of users is often responsible for the majority of usage.

Investigate Specific Traces 

The queries on our LLM Board act as a starting point. If you’re curious about specific behavior, you can view a specific trace that represents one request. Select any point on a graph, and in the menu that appears, select View trace. The next screen displays a trace detail view that lets you see what happened step by step.

Next Steps 

The queries on our LLM Board act as a starting point you can return to. If you’re curious about specific behaviors, start with any query and:

  1. Add fields to the GROUP BY clause to slice your data and reveal interesting field values.
  2. Use BubbleUp to find outlier behavior and identify its contributing characteristics.
  3. Select a specific trace that represents one request to see what happened step-by-step.