Definition
The systematic process of verifying an LLM's generated response against retrieved source documents or external knowledge bases, so that the output is grounded in those sources rather than hallucinated.
Related Concepts
- Hallucination: the specific failure mode that fact checking aims to resolve.
- Groundedness: the primary metric used to quantify successful fact checking.
- Chain-of-Verification (CoVe): a prompting technique used to implement self-correcting fact checking.
- NLI (Natural Language Inference): the underlying logical task of determining whether a premise (the source text) supports a hypothesis (the generated claim); see the sketch after this list.
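A minimal sketch of an entailment-based groundedness check, assuming the Hugging Face transformers and torch libraries and the publicly available roberta-large-mnli checkpoint; the model name and the 0.7 threshold are illustrative choices, not fixed requirements. The retrieved passage is treated as the premise and the generated claim as the hypothesis.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # illustrative choice of a public NLI model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def is_grounded(claim: str, source_passage: str, threshold: float = 0.7) -> bool:
    """Return True if the retrieved passage entails the generated claim."""
    # Premise = retrieved evidence, hypothesis = generated claim.
    inputs = tokenizer(source_passage, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment.
    entailment_prob = probs[model.config.label2id.get("ENTAILMENT", 2)].item()
    return entailment_prob >= threshold

# Flag a generated sentence that the retrieved context does not support.
context = "The Eiffel Tower was completed in 1889 and stands 330 metres tall."
print(is_grounded("The Eiffel Tower was finished in 1889.", context))  # expected: True
print(is_grounded("The Eiffel Tower was finished in 1925.", context))  # expected: False
```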
Conceptual Overview
Adding a fact-checking stage to a pipeline raises reliability, but it introduces architectural trade-offs: the secondary verification LLM calls or NLI models that do the checking add latency and increase inference cost. The stage typically sits between generation and delivery, checking the claims in the draft response against the retrieved evidence and flagging or revising anything the sources do not support.
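As a sketch of what such a secondary verification call can look like, the Chain-of-Verification pattern listed above reduces to three steps: plan verification questions, answer them against the retrieved context only, and revise the draft. The `call_llm` helper below is hypothetical, standing in for whatever chat or completions client the pipeline already uses, and the prompts are illustrative.

```python
def verify_response(question: str, draft_answer: str, context: str, call_llm) -> str:
    """CoVe-style check: plan verification questions, answer them from the context, revise."""
    # 1. Plan: ask the model which factual claims in its draft need checking.
    plan_prompt = (
        "List the factual claims in the answer below as short questions, one per line.\n"
        f"Answer: {draft_answer}"
    )
    verification_questions = [q for q in call_llm(plan_prompt).splitlines() if q.strip()]

    # 2. Execute: answer each verification question using only the retrieved context,
    #    so the check is grounded in the sources rather than the model's parametric memory.
    checks = []
    for q in verification_questions:
        check_prompt = (
            "Using ONLY the context below, answer the question. "
            "If the context does not say, reply 'UNSUPPORTED'.\n"
            f"Context: {context}\nQuestion: {q}"
        )
        checks.append((q, call_llm(check_prompt)))

    # 3. Revise: rewrite the draft, dropping or correcting any unsupported claims.
    revision_prompt = (
        f"Original question: {question}\nDraft answer: {draft_answer}\n"
        f"Verification results: {checks}\n"
        "Rewrite the answer, keeping only claims supported by the verification results."
    )
    return call_llm(revision_prompt)
```

Each draft answer now costs several extra model calls, which is exactly the latency and cost trade-off described above.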
Disambiguation
In AI, the term refers to automated hallucination detection or groundedness checking rather than manual journalistic verification.
Visual Analog
A court reporter cross-referencing a witness's live testimony against a stack of official evidence folders in real time.