

The Case for Open Inference: What Signoz is missing

There’s a quiet but important debate happening in the LLM observability world — whether developers should stick with OpenTelemetry (OTel) or move to OpenInference, a spec built by Arize for tracing LLM workloads.

Most people frame this as “OTel vs OpenInference,” as if one will win and the other will vanish. That’s the wrong lens. The real question is: what are you trying to observe — systems or reasoning?

If your product is a traditional service with predictable APIs, OTel works beautifully. But once your system starts reasoning, calling tools, and chaining LLMs together, you’ve left OTel’s comfort zone. That’s where OpenInference comes in — it extends OTel with span types and attributes that actually make sense for LLMs: token usage, cost per request, time to first token, and tool calls. It’s not competing with OTel — it’s extending it into a new domain.

So when I read the recent Signoz essay arguing that "OTel is enough for LLM observability" I disagreed — not because OTel is wrong, but because the framing misses what's fundamentally new about GenAI workloads.

OpenInference is an enriched version of OTel built specifically for LLMs. It offers more specific span types for LLMs, including LLM, tool, and agent. It is first and foremost built for developers to use in their own LLM-powered applications, where they want to log LLM choices and actions. OpenInference was designed to be complementary to OpenTelemetry GenAI 1, not an alternative. It already provides OTel-friendly instrumentations and semantic conventions.

For example, the metrics OpenInference takes seriously are token usage, cost per request, time to first token, and tool call rates, whereas core OTel cares about the RED metrics 2 (rate, errors, duration), i.e., throughput, error rates, and latency.

Signoz is not alone in making a case for OTel against OpenInference. The main case is quite simple: OTel is a well-established standard with broad language support, while OpenInference is newer, has limited adoption, and its OTel compatibility is superficial.

The Case for OpenInference

I disagree with the core premise of the case for OTel against OpenInference: that compatibility trumps everything else.

If that were true, we would never have needed anything beyond Postgres for structured data. Yet from ClickHouse to Snowflake, we see plenty of differentiation in use cases and tradeoffs. The same applies to OpenInference and OTel.

To extend the metaphor: every database adapts SQL into its own dialect. Like SQL dialects, OTel and OpenInference share a common syntax but serve different workloads. If you're building an LLM product that needs to explain its reasoning, optimize latency, or debug tool calls, OpenInference gives you visibility that OTel simply can't today:

OpenInference is a better choice for agentic systems

Agents are products where the LLM, not the developer, is in the driver's seat; all modern agents 3 share this property. Because the control flow is decided at runtime by the model, the debug and root-cause-analysis loop is much faster and easier with OpenInference.

For instance, when an agent fails, a standard OTel trace might show a single long-running process_request span. An OpenInference trace, however, would immediately break this down into LLM -> TOOL_CALL(search) -> RETRIEVER -> LLM spans, instantly revealing that the failure occurred because the search tool got stuck in a loop, or that the query-rewriting step needs improvement.
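To make this concrete, here is a minimal sketch of emitting such a nested trace with the standard OpenTelemetry Python SDK, using OpenInference-style span kinds and attributes. The attribute keys below are illustrative of OpenInference's semantic conventions rather than quoted verbatim from the spec.

```python
# A minimal sketch: nested agent/LLM/tool spans via the plain OTel SDK,
# annotated with OpenInference-style span kinds and LLM attributes.
# Attribute names are illustrative, not a verbatim copy of the spec.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent-demo")

with tracer.start_as_current_span("agent") as agent_span:
    agent_span.set_attribute("openinference.span.kind", "AGENT")

    with tracer.start_as_current_span("llm.plan") as llm_span:
        llm_span.set_attribute("openinference.span.kind", "LLM")
        llm_span.set_attribute("llm.token_count.prompt", 812)
        llm_span.set_attribute("llm.token_count.completion", 64)

    with tracer.start_as_current_span("tool.search") as tool_span:
        tool_span.set_attribute("openinference.span.kind", "TOOL")
        tool_span.set_attribute("tool.name", "search")
```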

This nesting visualization is also quite powerful and something I deeply appreciate when trying to debug agents:

Figure: Phoenix tracing visualization (source: Arize Phoenix Tracing Documentation)

Logs as Analytics data

OpenInference logs are data meant not only for alerting on 500s and 429s but for product analytics and observability; the same traces double as an analytics dataset rather than a stream of API alerts.

This shift toward product analytics is critical because conversational interfaces lack traditional user-intent signals like click events. Given the higher effort of text and voice interactions, we also can't rely on the law of large numbers 6.

For example: With semantically rich spans for TOOL_CALL, a product manager can finally answer questions like, 'Which tools are my users' agents invoking most often?' or 'Are users getting stuck in a loop trying to use the calendar tool?'. These are product questions, not just engineering alerts, and they are invisible in a standard RED metrics dashboard.
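As a sketch of what that analysis can look like once spans are exported, here is a hypothetical example using pandas. The column names and the export step are assumptions for illustration, not any specific vendor's API.

```python
# A minimal sketch: treating exported spans as product-analytics data.
# Assumes spans have already been pulled into a DataFrame; columns are hypothetical.
import pandas as pd

spans = pd.DataFrame(
    [
        {"span_kind": "TOOL", "tool_name": "calendar", "conversation_id": "c1", "status": "ERROR"},
        {"span_kind": "TOOL", "tool_name": "search",   "conversation_id": "c1", "status": "OK"},
        {"span_kind": "TOOL", "tool_name": "calendar", "conversation_id": "c2", "status": "ERROR"},
        {"span_kind": "LLM",  "tool_name": None,       "conversation_id": "c2", "status": "OK"},
    ]
)

tool_calls = spans[spans["span_kind"] == "TOOL"]

# "Which tools are my users' agents invoking most often?"
print(tool_calls["tool_name"].value_counts())

# "Are users getting stuck in a loop trying to use the calendar tool?"
calendar_errors = (
    tool_calls[tool_calls["tool_name"] == "calendar"]
    .groupby("conversation_id")["status"]
    .apply(lambda s: (s == "ERROR").sum())
)
print(calendar_errors.sort_values(ascending=False))
```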

The third reason is a little different. It's useful to revisit why OpenInference exists at all: OTel didn't have a decent spec for GenAI workloads. It is on its way to having one, but modern APIs keep changing; OpenAI, for instance, moved from Chat Completions to the Responses API 4. As a specialized, single-focus project, OpenInference is structurally positioned to adapt to the rapid evolution of GenAI APIs far more quickly than a large, consensus-driven standards body like OTel.

What can OpenInference do better?

  1. OpenInference is maintained and developed entirely by Arize.ai. It is not a community effort, despite being Apache 2.0 licensed (the project's contributor list reveals this quickly 5). This single-vendor stewardship creates natural friction against broader adoption.

  2. The most effective way for Arize to counter this would be to aggressively pursue OTel compatibility, proving that OpenInference is a good-faith extension of the ecosystem rather than a replacement aimed at vendor lock-in. That would let any OTel-based logging framework or tool consume OpenInference traces, making a switch to Arize Phoenix from other backends far easier.

Putting myself in others' shoes

From Arize's perspective, delaying the push for OTel compatibility might be the right move. The standard hasn't matured fast enough, and moving independently lets them stay ahead of the curve and address developer needs more quickly.

If I were at Signoz and wanted to own GenAI logging and alerting, I'd create a Signoz plugin that accepts both OpenInference traces and OTel GenAI 1 traces, offering the same nested visualization capabilities that make Phoenix compelling.

Conclusion

Ultimately, the debate isn't about OTel versus OpenInference. We should acknowledge that observing generative AI is a fundamentally new problem that requires more than just knowing if an endpoint is healthy. We need a richer vocabulary to understand what our applications are doing and why. And the strategic bet is clear: LLMs are not another web service — they’re runtime decision-makers. And observing them needs a richer language than latency, error, and throughput.

If OTel is the lingua franca of distributed systems, OpenInference is the dialect for reasoning systems. The two are not competitors — they’re layers. OTel tells you how your system behaves, OpenInference tells you why.

References


  1. OTel GenAI Spec 

  2. RED metrics 

  3. A good alternative definition is a LLM while loop with tools: https://www.braintrust.dev/blog/agent-while-loop 

  4. OpenAI built the Responses API to support multi-modal and tool calling as first class concerns: OpenAI blog on Why Responses API 

  5. Most contributors to Arize's OpenInference are its own employees, with the top 3 contributors coming from its DevRel team. Source: GitHub Contributions 

  6. The law of large numbers makes click signals useful for SaaS and consumer applications alike 

Agent Metrics - A Guide for Engineers

Measuring the performance of non-deterministic, compound systems like LLM-powered chat applications is fundamentally different from traditional software. An output can be syntactically perfect and seem plausible, yet be factually incorrect, unhelpful, or unsafe.

A robust measurement strategy requires a multi-layered approach that covers everything from operational efficiency to nuanced aspects of output quality and user success. This requires a shift in thinking from simple pass/fail tests to a portfolio of metrics that, together, paint a comprehensive picture of system performance.

This guide breaks down metric design into three parts:

  1. Foundational Metric Types: The basic building blocks of any measurement system.
  2. A Layered Framework for LLM Systems: A specific, hierarchical approach for applying these metrics to your application.
  3. Multi-Turn Chat Metrics: Specialized metrics for evaluating conversational systems beyond single-turn interactions.


Part 1: Foundational Metric Types

These are the fundamental ways to structure a measurement. Understanding these types is the first step to building a meaningful evaluation suite.

1. Classification (Categorical)

Measures which discrete, unordered category an item belongs to. The categories have no intrinsic order, and an item can only belong to one. This is crucial for segmenting analysis and routing logic.

Core Question: "What kind of thing is this?" or "Which bucket does this fall into?"

Examples:

  • Intent Recognition: [BookFlight], [CheckWeather], [GeneralChat]. This allows you to measure performance on a per-intent basis.
  • Error Type: [API_Failure], [Hallucination], [PromptRefusal], [InvalidToolOutput]. Segmenting errors is the first step to fixing them.
  • Tool Used: [Calculator], [CalendarAPI], [SearchEngine]. Helps diagnose issues with specific tools in a multi-tool agent.
  • Conversation Stage: [Greeting], [InformationGathering], [TaskExecution], [Confirmation].

2. Binary (Boolean)

A simplified version of classification with only two outcomes. It's the basis of most pass/fail tests and is particularly useful for high-stakes decisions where nuance is less important than a clear "go/no-go" signal.

Core Question: "Did it succeed or not?" or "Does this meet the minimum bar?"

Examples:

  • Task Completion: [Success / Failure]
  • Tool Call Validity: [ValidAPICall / InvalidAPICall]. Was the generated tool call syntactically correct?
  • Contains Citation: [True / False]. Did the model cite a source for its claim?
  • Safety Filter Triggered: [True / False]. A critical metric for monitoring responsible AI guardrails.
  • Factually Correct: [True / False]. A high-stakes check that often requires human review or a ground-truth dataset.

3. Ordinal

Similar to classification, but the categories have a clear, intrinsic order or rank. This allows for more nuanced evaluation than binary metrics, capturing shades of quality. These scales are often defined in human evaluation rubrics.

Core Question: "How good is this on a predefined scale?"

Examples:

  • User Satisfaction Score: [1: Very Unsatisfied, ..., 5: Very Satisfied]. The classic user feedback mechanism.
  • Answer Relevance: [1: Irrelevant, 2: Somewhat Relevant, 3: Highly Relevant]. A common human-annotated metric.
  • Readability: [HighSchool_Level, College_Level, PhD_Level]. Helps align model output with the target audience.
  • Safety Risk: [NoRisk, LowRisk, MediumRisk, HighRisk]. Granular assessment for safety-critical applications.

4. Continuous (Scalar)

Measures a value on a continuous range, often normalized between 0.0 and 1.0 for scores, but can be any numeric range. These are often generated by other models or algorithms and provide fine-grained signals.

Core Question: "How much of a certain quality does this have?"

Examples:

  • Similarity Score: Cosine similarity between a generated answer's embedding and a ground-truth answer's embedding (e.g., 0.87).
  • Confidence Score: The model's own reported confidence in its tool use or answer, if the API provides it.
  • Toxicity Probability: The likelihood that a response is toxic, as determined by a separate classification model (e.g., 0.05).
  • Groundedness Score: A score from 0 to 1 indicating how much of the generated text is supported by provided source documents.

5. Count & Ratio

Measures the number of occurrences of an event or the proportion of one count to another. These are fundamental for understanding frequency, cost, and efficiency.

Core Question: "How many?" or "What proportion?"

Examples:

  • Token Count: Number of tokens in the prompt or response. This directly impacts both cost and latency.
  • Number of Turns: How many back-and-forths in a conversation. A low number can signal efficiency (quick resolution) or failure (user gives up). Context is key.
  • Hallucination Rate: (Count of responses with hallucinations) / (Total responses). A key quality metric.
  • Tool Use Attempts: The number of times the agent tried to use a tool before succeeding or failing. High numbers can indicate a flawed tool definition or a confused model.

6. Positional / Rank

Measures the position of an item in an ordered list. This is crucial for systems that generate multiple options or retrieve information, as the ordering of results is often as important as the results themselves.

Core Question: "Where in the list was the correct answer?" or "How high up was the user's choice?"

Examples:

  • Retrieval Rank: In a RAG system, the position of the document chunk that contained the correct information. A rank of 1 is ideal; a rank of 50 suggests a poor retriever.
  • Candidate Generation: If the system generates 3 draft emails, which one did the user select? (1st, 2nd, or 3rd). If users consistently pick the 3rd option, maybe it should be the 1st.

Part 2: A Layered Framework for LLM Systems

Thinking in layers helps isolate problems and understand the system's health from different perspectives. A failure at a lower level (e.g., high latency) will inevitably impact the higher levels (e.g., user satisfaction).

These three layers form a hierarchy where each builds on the previous:

graph TB
    Op[Layer 1: Operational<br/>Is it working?]
    Qual[Layer 2: Output Quality<br/>Is it good?]
    Success[Layer 3: User Success<br/>Does it help?]

    Op --> Qual --> Success

    style Op fill:#e8f4f8
    style Qual fill:#e1f5ff
    style Success fill:#ffe1f5

Layer 1: Operational & System Metrics (Is it working?)

This is the foundation. If the system isn't running, nothing else matters. These metrics are objective, easy to collect, and tell you about the health and efficiency of your service.

Latency (Time-based):
  • Time to First Token (TTFT): How long until the user starts seeing a response? This is a primary driver of perceived performance. A low TTFT makes an application feel responsive, even if the total generation time is longer (see the measurement sketch after this list).
  • Total Generation Time: Full time from prompt submission to completion.
Throughput (Volume-based):
  • Requests per Second (RPS): How many requests can the system handle? Essential for capacity planning.
Cost (Resource-based):
  • Tokens per Request: Average prompt and completion tokens. This is the primary driver of direct LLM API costs.
  • Cost per Conversation: Total cost of a multi-turn interaction, including all LLM calls, tool calls, and other API services.
Reliability (Error-based):
  • API Error Rate: How often do calls to the LLM or other external tools fail (e.g., due to network issues, rate limits, or invalid requests)?
  • System Uptime: The classic operational metric, representing the percentage of time the service is available.
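The latency numbers above are straightforward to capture at the call site. Below is a minimal sketch of measuring TTFT and total generation time around a streaming LLM call; the stream object is assumed to be any iterable of response chunks (for example, an OpenAI-style streaming response), and no particular client library is prescribed.

```python
import time

def measure_streaming_call(stream):
    """Measure TTFT and Total Generation Time for a streaming LLM response.

    `stream` is assumed to be an iterable of response chunks; the exact
    client API is not prescribed here.
    """
    start = time.monotonic()
    ttft = None
    chunks = []

    for chunk in stream:
        if ttft is None:
            ttft = time.monotonic() - start  # Time to First Token
        chunks.append(chunk)

    total = time.monotonic() - start  # Total Generation Time
    return {"ttft_s": ttft, "total_s": total, "chunks": len(chunks)}
```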

Layer 2: Output Quality Metrics (Is the output good?)

This is the most complex layer and specific to generative AI. "Goodness" is multi-faceted and often subjective. These metrics require more sophisticated evaluation, including other models ("LLM-as-Judge") or structured human review.

Faithfulness / Groundedness (Is it true?):
  • Citation Accuracy (Binary/Ratio): Does the provided source actually support the generated statement? This can be a simple check (the source is relevant) or a strict one (the exact passage is highlighted).
  • Hallucination Rate (Ratio): What percentage of responses contain fabricated information? Defining a "hallucination" requires a clear rubric for human evaluators.
  • Contradiction Score (Continuous): A score from an NLI (Natural Language Inference) model on whether the response contradicts the source documents.
Relevance (Is it on-topic?):
  • Relevance Score (Ordinal/Continuous): How relevant is the response to the user's prompt? Often rated on a scale (e.g., 1-5) or scored by another model using embeddings.
  • Instruction Following (Binary/Ordinal): Did the model adhere to all constraints in the prompt (e.g., "Answer in 3 sentences," "Use a formal tone," "Format the output as a JSON object with keys 'name' and 'email'")? This is a key measure of model steerability.
Clarity & Coherence (Is it well-written?):
  • Readability Score (Continuous): Flesch-Kincaid or similar automated scores to ensure the output is appropriate for the target audience.
  • Grammar/Spelling Errors (Count): Number of detected mistakes.
  • Coherence Score (Ordinal): Does the response make logical sense from beginning to end? This is highly subjective and almost always requires human judgment.
Safety & Responsibility (Is it safe?):
  • Toxicity Score (Continuous): Output from a public or custom-trained toxicity classifier.
  • PII Detection Rate (Binary/Ratio): Does the model leak personally identifiable information, either from its training data or from provided context?
  • Jailbreak Attempt Detection (Binary): Was the user prompt an attempt to bypass safety filters?
  • Bias Measurement (Classification/Ratio): Using a benchmark dataset of templated prompts (e.g., "The [profession] from [country] went to..."), does the model generate responses that reinforce harmful stereotypes?

Layer 3: Task & User Success Metrics (Did it help?)

This is the ultimate measure of value. A model can produce a perfect, factual, safe answer, but if it doesn't help the user achieve their goal, the system has failed. These metrics connect model performance to real-world impact.

Task Success:
  • Task Completion Rate (Binary/Ratio): For goal-oriented systems (e.g., booking a ticket, summarizing a document), did the user successfully complete the task? This is often measured by tracking clicks on a final "confirm" button or reaching a specific state.
  • Goal Completion Rate (GCR): A more nuanced version asking if the user achieved their ultimate goal, even if it took a few tries. For example, a user might complete the "task" of finding a recipe but fail their "goal" because it required an ingredient they didn't have.
User Interaction:
  • Thumbs Up/Down Rate (Ratio): Simple, direct user feedback. The most valuable signal when available.
  • Conversation Length (Count): Shorter might mean efficiency; longer might mean engagement. This needs to be correlated with task success to be interpreted correctly.
  • Response Edit Rate (Ratio): How often do users have to copy and then significantly edit the AI's generated response? A high rate is a strong negative signal.
  • Follow-up Question Rate (Ratio): Are users asking clarifying questions because the first answer was incomplete, or are they naturally continuing the conversation?
Business Value:
  • Deflection Rate: In a customer support context, what percentage of issues were solved without escalating to a human agent? A high deflection rate is only good if user satisfaction is also high. This is also the pricing structure for Fin by Intercom.
  • Conversion Rate: Did the interaction lead to a desired business outcome (e.g., a sale, a sign-up)?
  • User Retention (Ratio): Are users coming back to use the application? This is a powerful long-term indicator of value.

Part 3: Multi-Turn Chat — Measuring Conversations, Not Just Responses

Parts 1 and 2 established the foundational metric types and a layered framework applicable to any LLM system. Part 2's Layer 2 covered output quality metrics—relevancy, faithfulness, coherence—that apply to individual model responses. This foundation is essential, and now we extend it for agentic and conversational systems.

Multi-turn chat introduces complexities that single-turn evaluation cannot capture: context management across turns, user intent shifts, conversational flow, and the ability to recover from errors. A response that scores perfectly on relevancy and faithfulness in isolation can still derail an entire conversation if it ignores previous context or misinterprets evolving user intent.

This section focuses on what's unique to conversational systems: Conversation-Specific Metrics that evaluate the entire user journey, and User Interaction Signals that reveal implicit feedback traditional metrics miss.

The evaluation flow for conversations extends the single-turn approach:

graph LR
    Turn[Turn Metrics<br/>Quality per response]
    Conv[Conversation Metrics<br/>Success across turns]
    Signals[User Signals<br/>Behavior patterns]

    Turn --> Conv
    Signals --> Conv
    Conv --> Insights[Product Decisions]

Turn-Specific Metrics — Extending the Question-Answer Framework

At the micro-level, we must ensure each turn is high-quality. These metrics adapt classical QA evaluation to the conversational setting, and many have been operationalized in open-source frameworks like Ragas1.

1. Comparing Answer with Question: Relevancy

The most fundamental requirement is that the model's answer directly addresses the user's most recent query. If a user asks, "What were NVIDIA's Q2 earnings?" the model shouldn't respond with the stock price. This concept of "Answer Relevancy" is a cornerstone metric that measures how well the response satisfies the immediate user intent.

How to measure: This is often scored by human raters on a Likert scale. It can also be automated by using a powerful LLM as an evaluator, a technique that has shown strong correlation with human judgment2. Frameworks like Ragas implement this by using an LLM to generate potential questions from the answer and then measuring the semantic similarity between those generated questions and the original user query.
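A minimal sketch of that idea follows, assuming hypothetical generate_questions and embed helpers backed by an LLM and an embedding model; this shows the shape of the computation, not Ragas's actual code.

```python
# Answer-relevancy sketch: invent questions from the answer, then compare
# them to the user's original question. Helpers are hypothetical stand-ins.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_relevancy(question: str, answer: str, generate_questions, embed, n: int = 3) -> float:
    # 1. Ask an LLM to invent questions that the answer would satisfy.
    candidates = generate_questions(answer, n=n)
    # 2. Compare each invented question to the user's original question.
    q_vec = embed(question)
    sims = [cosine(q_vec, embed(c)) for c in candidates]
    # 3. The closer the invented questions are to the real one, the more
    #    directly the answer addresses the user's intent.
    return float(np.mean(sims))
```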

2. Comparing Answer with Context: Faithfulness

A model's response must be factually consistent with the information it was given. When a model generates information that cannot be verified against its context, it is often called a "hallucination." Faithfulness, or groundedness, measures the absence of such hallucinations.

How to measure: This involves a form of automated fact-checking. A common technique, used by Ragas, is to break the generated answer down into individual statements. Each statement is then verified against the source context to see if it can be directly inferred. The final score is the ratio of verified statements to the total number of statements3.
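A minimal sketch of this scoring loop, with extract_statements and is_supported as hypothetical LLM-backed helpers rather than Ragas's real implementation:

```python
# Faithfulness sketch: ratio of claims in the answer that the context supports.
def faithfulness(answer: str, context: str, extract_statements, is_supported) -> float:
    statements = extract_statements(answer)   # break the answer into atomic claims
    if not statements:
        return 0.0
    verified = sum(1 for s in statements if is_supported(s, context))
    return verified / len(statements)         # ratio of supported claims
```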

3. Answer vs. Pre-defined Aspects

Not all quality attributes are about factual correctness. Depending on the product, you may need to enforce specific stylistic or content requirements. These "aspect-based" evaluations ensure the model adheres to brand voice and product needs.

Common aspects include:

  • Tone: Is the response professional, friendly, empathetic, or neutral, as required?
  • Length: Does the answer respect length constraints (e.g., staying under a certain character count for a mobile interface)?
  • Required Information: Does the answer include necessary elements like legal disclaimers, links to sources, or specific product mentions?

Conversation-Specific Metrics — Capturing the Flow

A conversation can be composed of individually perfect turns and still be a total failure. Conversation-specific metrics analyze the entire user journey to identify broader patterns of success or failure.

Drop-off @ K Turns

This metric identifies the average number of turns after which a user abandons the conversation. A high drop-off rate after just one or two turns might indicate poor initial response quality. Conversely, a drop-off after many turns could mean the user successfully completed a complex task or, alternatively, gave up in exhaustion. Segmenting this metric by conversation outcome (e.g., user clicks "thumbs up" vs. just closes the window) is crucial for correct interpretation.
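A small sketch of how this segmentation might be computed from a conversation log, assuming a hypothetical table with one row per conversation:

```python
# Drop-off @ K Turns sketch over a toy conversation log.
import pandas as pd

conversations = pd.DataFrame(
    [
        {"conversation_id": "c1", "turns": 2, "outcome": "thumbs_down"},
        {"conversation_id": "c2", "turns": 7, "outcome": "thumbs_up"},
        {"conversation_id": "c3", "turns": 1, "outcome": "abandoned"},
        {"conversation_id": "c4", "turns": 6, "outcome": "abandoned"},
    ]
)

# Average number of turns before users leave, split by how the conversation ended.
print(conversations.groupby("outcome")["turns"].agg(["mean", "count"]))

# Share of conversations that end within K turns (here K = 2).
K = 2
drop_off_at_k = (conversations["turns"] <= K).mean()
print(f"Drop-off @ {K} turns: {drop_off_at_k:.0%}")
```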

Conversation Success Rate

Beyond individual turn quality, did the entire conversation achieve its goal? This binary or ordinal metric evaluates whether the conversation as a whole was successful. For goal-oriented dialogues (e.g., booking, troubleshooting), this can be measured by tracking whether the user reached a terminal success state. For open-ended conversations, this might require human annotation or LLM-as-judge evaluation of the full transcript against success criteria.

User Interactions as Product Intelligence

Beyond explicit feedback, how users physically interact with the chat interface provides powerful signals about the quality and utility of the model's responses. These signals fall into two categories: explicit expressions and implicit behaviors.

Explicit Signals

1. User Frustration

Frustration is a critical emotional signal to capture. It indicates a fundamental breakdown between user expectation and model performance. Users often directly express frustration. Look for patterns like:

  • Repeated question marks: ??
  • Direct challenges: Why??, That's not what I asked
  • Rephrasing the same query multiple times
  • Use of all-caps

2. User Confusion

Confusion differs from frustration. It signals that the user doesn't understand the model's response or reasoning, even if the model isn't necessarily "wrong." Look for:

  • Where did this come from? (Indicates a need for better citation/attribution.)
  • What are you talking about? (Indicates the model may have lost context.)

3. Need for Explanations

When users start asking meta-questions about the AI's capabilities, it reveals a gap in their mental model of how the system works. These questions are a goldmine for product improvement.

Examples:

  • Why can't you update the glossary for me?
  • Can you add a new contact to my list?

These interactions highlight user expectations about the model's agency and tool-use capabilities. Tracking these can directly inform the roadmap for feature development.

Implicit Signals

How users physically interact with the chat interface provides powerful, implicit signals about the quality and utility of the model's responses4.

  • User Copies Text: If a user highlights and copies the AI's answer, it's a strong positive signal. It suggests the information was useful enough to save or use elsewhere. If they copy their own prompt, it may mean they are giving up and trying the query in a different search engine.
  • User Takes a Screenshot: This is a powerful indicator of a peak moment—either extreme delight (a shockingly good answer) or extreme failure (a bizarre or hilarious error). While the sentiment is ambiguous without more context, it flags a conversation worthy of manual review.
  • User Copies a Citation Link: When a model provides sources, a user copying the URL is a stronger signal of interest and trust than a simple click. It implies an intent to save or share the source.
  • Long Click-Through Rate (CTR) to a Citation: A standard CTR simply tells you a link was clicked. A "long CTR," where you measure the dwell time on the linked page, is far more valuable. If a user clicks a citation and spends several minutes on that page, it validates that the source was highly relevant and useful, confirming the quality of the model's recommendation.

There is no single magic metric for evaluating multi-turn chat. A comprehensive strategy requires a multi-layered approach. It starts with the foundation of turn-specific quality—relevancy, faithfulness, and adherence to style. But to truly understand the user experience, you must layer on conversation-specific metrics that track the narrative flow and identify points of friction. Finally, by analyzing user interaction data, product teams can gain invaluable, implicit feedback to guide future development.

Conclusion

Building a comprehensive evaluation strategy for LLM-powered agents requires thinking beyond traditional software metrics. The framework presented here provides a systematic approach:

  1. Start with the fundamentals: Understand the six foundational metric types (Classification, Binary, Ordinal, Continuous, Count & Ratio, and Positional) and apply them appropriately to your specific use case.
  2. Think in layers: Structure your measurement strategy across three interconnected layers: operational metrics ensure your system is running efficiently, output quality metrics verify your responses are correct and safe, and user success metrics confirm you're delivering real value.
  3. Embrace multi-turn complexity: For conversational systems, evaluate both individual turns and entire conversations. Track how well each response addresses the immediate query, but also measure conversation-level patterns like topic drift, success rates, and recovery from errors.
  4. Combine implicit signals: User interactions—copying text, taking screenshots, abandonment patterns—often reveal more than explicit feedback. Build instrumentation to capture these behavioral signals.

The metrics you choose should reflect your product goals and constraints. A customer support bot should prioritize deflection rate and user satisfaction. A research assistant should emphasize faithfulness and citation accuracy. A creative writing tool might focus on user engagement and iteration patterns.

Start small. Implement operational metrics first, add a few key quality metrics for your most critical use cases, then gradually expand your coverage. The goal is not to measure everything, but to measure what matters for making informed decisions about where to invest your improvement efforts.

References


  1. Es, Shahul, et al. (2023). Ragas: Automated Evaluation of Retrieval Augmented Generation. arXiv preprint arXiv:2309.15217 

  2. Chiang, Wei-Lin, et al. (2023). Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90% ChatGPT Quality. https://lmsys.org/blog/2023-03-30-vicuna/ 

  3. Min, Sewon, et al. (2023). FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. arXiv preprint arXiv:2305.14251 

  4. Radlinski, Filip, et al. (2019). How Am I Doing?: Evaluating Conversational Search Systems Offline. Proceedings of the 2019 Conference on Human Information Interaction and Retrieval 

  5. NVIDIA NeMo Evaluator. (2023). NVIDIA Developer Documentation. https://docs.nvidia.com/nemo-framework/user-guide/latest/nemollm/nemo_evaluator.html 

  6. Zhang, Tianyi, et al. (2023). A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. arXiv preprint arXiv:2311.05232 

RAG Metrics for Technical Leaders: Beyond Recall

MRR, nDCG, Hit Rate, and Recall: Know Your Retrieval Metrics

If you're working on RAG, search, or anything that touches vector databases, you've probably run into a mess of evaluation metrics: MRR, nDCG, hit rate, recall. Everyone throws these terms around. Few explain them well.

This post is for practitioners who want to go from vague intuition to confident decisions.

If you're just starting out and debugging a hallucinating LLM, use Hit Rate. When you're ready to get serious, use MRR + Recall during retriever tuning, and nDCG + Hit Rate when tuning a reranker or doing system evals.

TL;DR: When to Use What

| Use Case / Need | Metric to Use | Why |
| --- | --- | --- |
| You just want to check if any correct result was retrieved | Hit Rate | Binary success metric, useful for RAG "was it in the top-k?" |
| You want to know how many of the correct results were found | Recall | Focused on completeness — how much signal did you recover |
| You want to know how early the 1st correct result appears | MRR | Good for single-answer QA and fast-hit UIs |
| You care about ranking quality across all relevant results | nDCG | Ideal for multi-relevance tasks like document or product search |

Understanding Each Metric in Detail

✅ Hit Rate

  • Binary metric: did any relevant doc show up in top-k?
  • Doesn’t care if it’s Rank 1 or Rank 5, just needs a hit.

Use Hit Rate when: You're debugging RAG. Great for checking if the chunk with the answer even made it through.

Think: "Did we even get one hit in the top-k?"

↑ Recall

  • Measures what fraction of all relevant documents were retrieved in top-k.
  • Penalizes for missing multiple relevant items.

Use Recall when: You want completeness. Think medical retrieval, financial documents, safety-critical systems.

Think: "Did we find enough of what we needed?"

🔮 MRR (Mean Reciprocal Rank)

  • Tells you how early the first relevant document appears.
  • If the first correct answer is at Rank 1 → score = 1.0
  • Rank 2 → score = 0.5; Rank 5 → score = 0.2

Use MRR when: Only one answer matters (QA, intent classification, slot filling). You care if your system gets it fast.

Think: "Do we hit gold in the first result?"

🔠 nDCG (Normalized Discounted Cumulative Gain)

  • Looks at all relevant docs, not just the first.
  • Discounts docs by rank: higher = better.
  • Supports graded relevance ("highly relevant" vs "somewhat relevant").

Use nDCG when: Ranking quality matters. Ideal for search, recsys, anything with many possible good results.

Think: "Did we rank the good stuff higher overall?"

How They Differ

| Metric | Binary or Graded | 1st Hit Only? | Sensitive to Rank? | Use For... |
| --- | --- | --- | --- | --- |
| Hit Rate | Binary | ❌ No | ❌ No (thresholded) | RAG debugging, presence check |
| Recall | Binary or Graded | ❌ No | ❌ No | Completeness, coverage |
| MRR | Binary | ✅ Yes | ✅ Yes | Fast hits, QA |
| nDCG | Graded | ❌ No | ✅ Yes | Ranking quality, search |

Retrieval Is Not One Metric

People default to one number because it's convenient. But retrieval is multi-objective:

  • You want early relevant hits (MRR)
  • You want most relevant hits (Recall)
  • You want them ranked well (nDCG)
  • You want to know if you're even in the game (Hit Rate)

Choose the metric that matches your product surface.

Pro Tips

  • Use Hit Rate when you're just starting out and debugging a hallucinating LLM

And then use the right metric for the right job:

  • Use MRR + Recall during retriever tuning
  • Use nDCG + Hit Rate when tuning a reranker or doing system evals

Final Word

MRR isn’t better than nDCG. Recall isn’t cooler than Hit Rate. They just answer different questions.

So the next time someone asks, "What's your retrieval performance?" You can say: "Depends. What do you care about?"