Agent Metrics - A Guide for Engineers

Measuring the performance of non-deterministic, compound systems like LLM-powered chat applications is fundamentally different from measuring traditional software. An output can be syntactically perfect and seem plausible, yet still be factually incorrect, unhelpful, or unsafe.

A robust measurement strategy requires a multi-layered approach that covers everything from operational efficiency to nuanced aspects of output quality and user success. This requires a shift in thinking from simple pass/fail tests to a portfolio of metrics that, together, paint a comprehensive picture of system performance.

This guide breaks down metric design into two parts:

  1. Foundational Metric Types: The basic building blocks of any measurement system.
  2. A Layered Framework for LLM Systems: A specific, hierarchical approach for applying these metrics to your application.

Part 1: Foundational Metric Types

These are the fundamental ways to structure a measurement. Understanding these types is the first step to building a meaningful evaluation suite.

1. Classification (Categorical)

Measures which discrete, unordered category an item belongs to. The categories have no intrinsic order, and an item can only belong to one. This is crucial for segmenting analysis and routing logic.

Core Question: "What kind of thing is this?" or "Which bucket does this fall into?"

Examples:

  • Intent Recognition: [BookFlight], [CheckWeather], [GeneralChat]. This allows you to measure performance on a per-intent basis, as sketched after this list.
  • Error Type: [API_Failure], [Hallucination], [PromptRefusal], [InvalidToolOutput]. Segmenting errors is the first step to fixing them.
  • Tool Used: [Calculator], [CalendarAPI], [SearchEngine]. Helps diagnose issues with specific tools in a multi-tool agent.
  • Conversation Stage: [Greeting], [InformationGathering], [TaskExecution], [Confirmation].
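
Building on the intent example above, here is a minimal sketch of per-category segmentation. It assumes evaluation records have already been logged as dicts with hypothetical intent and passed fields:

```python
from collections import defaultdict

# Hypothetical evaluation records: an intent label plus a pass/fail outcome.
records = [
    {"intent": "BookFlight", "passed": True},
    {"intent": "BookFlight", "passed": False},
    {"intent": "CheckWeather", "passed": True},
    {"intent": "GeneralChat", "passed": True},
]

# Tally pass/fail counts per intent so failures can be localized to a bucket.
totals = defaultdict(lambda: {"passed": 0, "total": 0})
for record in records:
    bucket = totals[record["intent"]]
    bucket["total"] += 1
    bucket["passed"] += int(record["passed"])

for intent, bucket in totals.items():
    rate = bucket["passed"] / bucket["total"]
    print(f"{intent}: {rate:.0%} pass rate over {bucket['total']} cases")
```
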
2. Binary (Boolean)

A simplified version of classification with only two outcomes. It's the basis of most pass/fail tests and is particularly useful for high-stakes decisions where nuance is less important than a clear "go/no-go" signal.

Core Question: "Did it succeed or not?" or "Does this meet the minimum bar?"

Examples:

  • Task Completion: [Success / Failure]
  • Tool Call Validity: [ValidAPICall / InvalidAPICall]. Was the generated tool call syntactically correct? (A minimal validity check is sketched after this list.)
  • Contains Citation: [True / False]. Did the model cite a source for its claim?
  • Safety Filter Triggered: [True / False]. A critical metric for monitoring responsible AI guardrails.
  • Factually Correct: [True / False]. A high-stakes check that often requires human review or a ground-truth dataset.
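
As a concrete binary check, the sketch below validates a generated tool call: it must parse as JSON and name a known tool with its required arguments. The tool registry and field names are illustrative assumptions, not a standard schema.

```python
import json

# Hypothetical registry of tools and the argument names each one requires.
KNOWN_TOOLS = {"Calculator": {"expression"}, "CalendarAPI": {"date", "title"}}

def is_valid_tool_call(raw: str) -> bool:
    """Binary metric: True only if the call parses and matches the registry."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(call, dict):
        return False
    required = KNOWN_TOOLS.get(call.get("tool"))
    return required is not None and required.issubset(call.get("arguments", {}))

print(is_valid_tool_call('{"tool": "Calculator", "arguments": {"expression": "2+2"}}'))  # True
print(is_valid_tool_call('{"tool": "Calculator", "arguments": {}}'))                     # False
```
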
3. Ordinal

Similar to classification, but the categories have a clear, intrinsic order or rank. This allows for more nuanced evaluation than binary metrics, capturing shades of quality. These scales are often defined in human evaluation rubrics.

Core Question: "How good is this on a predefined scale?"

Examples:

  • User Satisfaction Score: [1: Very Unsatisfied, ..., 5: Very Satisfied]. The classic user feedback mechanism.
  • Answer Relevance: [1: Irrelevant, 2: Somewhat Relevant, 3: Highly Relevant]. A common human-annotated metric.
  • Readability: [HighSchool_Level, College_Level, PhD_Level]. Helps align model output with the target audience.
  • Safety Risk: [NoRisk, LowRisk, MediumRisk, HighRisk]. Granular assessment for safety-critical applications.
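
Because ordinal labels carry an order, they can be summarized with rank-aware statistics rather than plain accuracy. A minimal sketch, assuming hypothetical ratings on the 1-5 satisfaction scale above:

```python
from statistics import median

# Hypothetical user satisfaction ratings on the 1-5 ordinal scale.
ratings = [5, 4, 2, 5, 3, 4, 1, 4]

# The median respects rank order; a mean would treat the scale as interval data.
print("median rating:", median(ratings))

# Share of ratings at or above 4 ("satisfied or better").
threshold = 4
share = sum(r >= threshold for r in ratings) / len(ratings)
print(f"ratings >= {threshold}: {share:.0%}")
```
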
4. Continuous (Scalar)

Measures a value on a continuous range, often normalized to 0.0-1.0 for scores, though any numeric range is possible. These scores are often generated by other models or algorithms and provide fine-grained signals.

Core Question: "How much of a certain quality does this have?"

Examples:

  • Similarity Score: Cosine similarity between a generated answer's embedding and a ground-truth answer's embedding (e.g., 0.87); see the sketch after this list.
  • Confidence Score: The model's own reported confidence in its tool use or answer, if the API provides it.
  • Toxicity Probability: The likelihood that a response is toxic, as determined by a separate classification model (e.g., 0.05).
  • Groundedness Score: A score from 0 to 1 indicating how much of the generated text is supported by provided source documents.
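
The similarity score above, for instance, is just the cosine of the angle between two embedding vectors. A minimal NumPy sketch with made-up vectors (in practice these would come from an embedding model):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Continuous metric in [-1, 1]; closer to 1 means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up embeddings standing in for a generated answer and a ground-truth answer.
generated = np.array([0.1, 0.8, 0.3])
reference = np.array([0.2, 0.7, 0.4])

print(f"similarity: {cosine_similarity(generated, reference):.2f}")
```
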
5. Count & Ratio

Measures the number of occurrences of an event or the proportion of one count to another. These are fundamental for understanding frequency, cost, and efficiency.

Core Question: "How many?" or "What proportion?"

Examples:

  • Token Count: Number of tokens in the prompt or response. This directly impacts both cost and latency.
  • Number of Turns: How many back-and-forths in a conversation. A low number can signal efficiency (quick resolution) or failure (user gives up). Context is key.
  • Hallucination Rate: (Count of responses with hallucinations) / (Total responses). A key quality metric, computed in the sketch after this list.
  • Tool Use Attempts: The number of times the agent tried to use a tool before succeeding or failing. High numbers can indicate a flawed tool definition or a confused model.
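
Counts and ratios fall out directly from logged results. A minimal sketch of hallucination rate and average tokens per request, assuming hypothetical per-response records with a human-labeled hallucinated flag:

```python
# Hypothetical logged responses with human labels and token counts.
responses = [
    {"hallucinated": False, "prompt_tokens": 220, "completion_tokens": 90},
    {"hallucinated": True,  "prompt_tokens": 180, "completion_tokens": 140},
    {"hallucinated": False, "prompt_tokens": 310, "completion_tokens": 75},
]

hallucination_rate = sum(r["hallucinated"] for r in responses) / len(responses)
avg_tokens = sum(r["prompt_tokens"] + r["completion_tokens"] for r in responses) / len(responses)

print(f"hallucination rate: {hallucination_rate:.1%}")
print(f"average tokens per request: {avg_tokens:.0f}")
```
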
6. Positional / Rank

Measures the position of an item in an ordered list. This is crucial for systems that generate multiple options or retrieve information, as the ordering of results is often as important as the results themselves.

Core Question: "Where in the list was the correct answer?" or "How high up was the user's choice?"

Examples:

  • Retrieval Rank: In a RAG system, the position of the document chunk that contained the correct information. A rank of 1 is ideal; a rank of 50 suggests a poor retriever. Aggregation across queries is sketched after this list.
  • Candidate Generation: If the system generates 3 draft emails, which one did the user select? (1st, 2nd, or 3rd). If users consistently pick the 3rd option, maybe it should be the 1st.
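
Retrieval ranks are commonly aggregated as Mean Reciprocal Rank (MRR): the average of 1/rank across queries, counting a miss as 0. A minimal sketch with made-up ranks:

```python
# Hypothetical rank of the first relevant chunk per query; None means it was never retrieved.
ranks = [1, 3, None, 2, 1]

# Mean Reciprocal Rank: 1/rank per query, 0 for misses, averaged over all queries.
mrr = sum(1.0 / r for r in ranks if r is not None) / len(ranks)
print(f"MRR: {mrr:.2f}")  # 0.57 for the ranks above
```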

Part 2: A Layered Framework for LLM Systems

Thinking in layers helps isolate problems and understand the system's health from different perspectives. A failure at a lower level (e.g., high latency) will inevitably impact the higher levels (e.g., user satisfaction).

Layer 1: Operational & System Metrics (Is it working?)

This is the foundation. If the system isn't running, nothing else matters. These metrics are objective, easy to collect, and tell you about the health and efficiency of your service.

Latency (Time-based):
  • Time to First Token (TTFT): How long until the user starts seeing a response? This is a primary driver of perceived performance. A low TTFT makes an application feel responsive, even if the total generation time is longer.
  • Total Generation Time: Full time from prompt submission to completion.
Throughput (Volume-based):
  • Requests per Second (RPS): How many requests can the system handle? Essential for capacity planning.
Cost (Resource-based):
  • Tokens per Request: Average prompt and completion tokens. This is the primary driver of direct LLM API costs.
  • Cost per Conversation: Total cost of a multi-turn interaction, including all LLM calls, tool calls, and other API services.
Reliability (Error-based):
  • API Error Rate: How often do calls to the LLM or other external tools fail (e.g., due to network issues, rate limits, or invalid requests)?
  • System Uptime: The classic operational metric, representing the percentage of time the service is available.
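
Latency metrics such as TTFT and total generation time are easy to capture around a streaming response. A minimal sketch, assuming a hypothetical stream_completion() generator that yields tokens as they arrive:

```python
import time
from collections.abc import Iterable

def measure_latency(stream: Iterable[str]) -> dict:
    """Record time to first token and total generation time for one streamed response."""
    start = time.monotonic()
    ttft = None
    tokens = 0
    for _token in stream:
        if ttft is None:
            ttft = time.monotonic() - start  # first token arrived
        tokens += 1
    return {
        "time_to_first_token_s": ttft,
        "total_generation_time_s": time.monotonic() - start,
        "completion_tokens": tokens,
    }

# Usage with a hypothetical streaming client:
#   metrics = measure_latency(stream_completion(prompt))
```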

Layer 2: Output Quality Metrics (Is the output good?)

This is the most complex layer and specific to generative AI. "Goodness" is multi-faceted and often subjective. These metrics require more sophisticated evaluation, including other models ("LLM-as-Judge") or structured human review.

Faithfulness / Groundedness (Is it true?):
  • Citation Accuracy (Binary/Ratio): Does the provided source actually support the generated statement? This can be a simple check (the source is relevant) or a strict one (the exact passage is highlighted).
  • Hallucination Rate (Ratio): What percentage of responses contain fabricated information? Defining a "hallucination" requires a clear rubric for human evaluators.
  • Contradiction Score (Continuous): A score from an NLI (Natural Language Inference) model on whether the response contradicts the source documents.
Relevance (Is it on-topic?):
  • Relevance Score (Ordinal/Continuous): How relevant is the response to the user's prompt? Often rated on a scale (e.g., 1-5) or scored by another model using embeddings.
  • Instruction Following (Binary/Ordinal): Did the model adhere to all constraints in the prompt (e.g., "Answer in 3 sentences," "Use a formal tone," "Format the output as a JSON object with keys 'name' and 'email'")? This is a key measure of model steerability; a minimal check is sketched at the end of this layer.
Clarity & Coherence (Is it well-written?):
  • Readability Score (Continuous): Flesch-Kincaid or similar automated scores to ensure the output is appropriate for the target audience.
  • Grammar/Spelling Errors (Count): Number of detected mistakes.
  • Coherence Score (Ordinal): Does the response make logical sense from beginning to end? This is highly subjective and almost always requires human judgment.
Safety & Responsibility (Is it safe?):
  • Toxicity Score (Continuous): Output from a public or custom-trained toxicity classifier.
  • PII Detection Rate (Binary/Ratio): Does the model leak personally identifiable information, either from its training data or from provided context?
  • Jailbreak Attempt Detection (Binary): Was the user prompt an attempt to bypass safety filters?
  • Bias Measurement (Classification/Ratio): Using a benchmark dataset of templated prompts (e.g., "The [profession] from [country] went to..."), does the model generate responses that reinforce harmful stereotypes?
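
Some of these checks are cheap and deterministic. For the instruction-following example above ("Format the output as a JSON object with keys 'name' and 'email'"), a minimal binary check might look like this:

```python
import json

def follows_json_instruction(response: str, required_keys: set[str]) -> bool:
    """Binary instruction-following check: valid JSON object containing the required keys."""
    try:
        data = json.loads(response)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and required_keys.issubset(data)

print(follows_json_instruction('{"name": "Ada", "email": "ada@example.com"}', {"name", "email"}))  # True
print(follows_json_instruction('Sure! Here is the JSON you asked for...', {"name", "email"}))      # False
```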

Layer 3: Task & User Success Metrics (Did it help?)

This is the ultimate measure of value. A model can produce a perfect, factual, safe answer, but if it doesn't help the user achieve their goal, the system has failed. These metrics connect model performance to real-world impact.

Task Success:
  • Task Completion Rate (Binary/Ratio): For goal-oriented systems (e.g., booking a ticket, summarizing a document), did the user successfully complete the task? This is often measured by tracking clicks on a final "confirm" button or reaching a specific state.
  • Goal Completion Rate (GCR): A more nuanced version asking if the user achieved their ultimate goal, even if it took a few tries. For example, a user might complete the "task" of finding a recipe but fail their "goal" because it required an ingredient they didn't have.
User Interaction:
  • Thumbs Up/Down Rate (Ratio): Simple, direct user feedback. The most valuable signal when available.
  • Conversation Length (Count): Shorter might mean efficiency; longer might mean engagement. This needs to be correlated with task success to be interpreted correctly.
  • Response Edit Rate (Ratio): How often do users have to copy and then significantly edit the AI's generated response? A high rate is a strong negative signal.
  • Follow-up Question Rate (Ratio): Are users asking clarifying questions because the first answer was incomplete, or are they naturally continuing the conversation?
Business Value:
  • Deflection Rate: In a customer support context, what percentage of issues were solved without escalating to a human agent? A high deflection rate is only good if user satisfaction is also high. Notably, resolution-based deflection is also the pricing structure for Fin by Intercom.
  • Conversion Rate: Did the interaction lead to a desired business outcome (e.g., a sale, a sign-up)?
  • User Retention (Ratio): Are users coming back to use the application? This is a powerful long-term indicator of value.
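
Most of these are simple ratios over interaction logs; the hard part is instrumenting the events reliably. A minimal sketch, assuming hypothetical conversation records with feedback and escalation fields:

```python
# Hypothetical conversation logs with explicit feedback and escalation outcomes.
conversations = [
    {"feedback": "up",   "escalated_to_human": False},
    {"feedback": None,   "escalated_to_human": True},
    {"feedback": "down", "escalated_to_human": False},
    {"feedback": "up",   "escalated_to_human": False},
]

rated = [c for c in conversations if c["feedback"] is not None]
thumbs_up_rate = sum(c["feedback"] == "up" for c in rated) / len(rated)
deflection_rate = sum(not c["escalated_to_human"] for c in conversations) / len(conversations)

print(f"thumbs-up rate (of rated conversations): {thumbs_up_rate:.0%}")
print(f"deflection rate: {deflection_rate:.0%}")
```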