SmartFAQs.ai
Intermediate

Fact Checking

Definition

The systematic process of verifying an LLM’s generated response against retrieved source documents or external knowledge bases to ensure the answer is grounded and to catch hallucinations. Adding this verification step to a pipeline improves reliability, but it introduces architectural trade-offs: increased latency and higher inference cost from the secondary verification LLM calls or NLI model inference it requires.

Disambiguation

In AI, this term refers to automated 'Hallucination Detection' or 'Groundedness' evaluation rather than manual journalistic verification.

Visual Metaphor

"A Court Reporter cross-referencing a witness's live testimony against a stack of official evidence folders in real-time."

Key Tools
RAGAS, TruLens, DeepEval, LangSmith, Giskard, UpTrain