Definition
The systematic process of verifying an LLM's generated response against the retrieved source documents or an external knowledge base to ensure groundedness and catch hallucinations. Adding this step to a pipeline improves reliability but introduces architectural trade-offs: the secondary verification calls (to a judge LLM or an NLI model) increase latency and inference cost.
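The verification step above can be sketched in miniature. The toy version below scores each response sentence by the lexical overlap of its content words with the retrieved sources, standing in for the NLI entailment model or judge-LLM call a real pipeline would use; the function names, stop-word list, and 0.5 threshold are illustrative assumptions, not a production recipe.

```python
# Toy groundedness check: flags response sentences whose content words
# are poorly supported by the retrieved source text.
# NOTE: real pipelines use an NLI model or a secondary LLM call here;
# lexical overlap is only a cheap stand-in for entailment.
import re

_STOP = {"the", "a", "an", "is", "are", "was", "it", "of", "to", "and", "in"}

def _content_words(text: str) -> set[str]:
    """Lowercased alphabetic tokens minus a small stop-word list."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in _STOP}

def groundedness_report(response: str, sources: list[str], threshold: float = 0.5):
    """Return (sentence, support_score, grounded?) per response sentence."""
    source_vocab: set[str] = set()
    for doc in sources:
        source_vocab |= _content_words(doc)
    report = []
    for sentence in re.split(r"(?<=[.!?])\s+", response.strip()):
        words = _content_words(sentence)
        score = len(words & source_vocab) / len(words) if words else 1.0
        report.append((sentence, round(score, 2), score >= threshold))
    return report

sources = ["The Eiffel Tower is 330 metres tall and located in Paris."]
response = "The Eiffel Tower is in Paris. It was painted green in 2024."
for sentence, score, grounded in groundedness_report(response, sources):
    print(f"{score:.2f} grounded={grounded}  {sentence}")
```

The first sentence is fully supported by the source; the second introduces claims absent from it and is flagged, illustrating how a verifier separates grounded from hallucinated content.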
Disambiguation
In AI, the term refers to automated hallucination detection or groundedness evaluation, not manual journalistic verification.
Visual Analog
A court reporter cross-referencing a witness's live testimony against a stack of official evidence folders in real time.