Hallucination

Definition

A phenomenon in which a Large Language Model (LLM) generates output that is factually incorrect, nonsensical, or ungrounded in the provided retrieval context. It occurs when raw token probability outweighs attribution to the source material. In RAG pipelines, the term specifically refers to 'faithfulness' failures, where the agent ignores the retrieved documents in favor of its internal parametric memory.
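
To make the 'faithfulness' framing concrete, the toy sketch below scores an answer against its retrieved context using plain lexical overlap. This is a deliberately naive heuristic for illustration only, with an arbitrary 0.5 overlap threshold; real evaluators (such as the tools listed below) extract individual claims with an LLM and verify each one against the context.

```python
import re

def naive_faithfulness(answer: str, contexts: list[str], threshold: float = 0.5) -> float:
    """Toy groundedness score: the fraction of answer sentences whose
    words mostly appear somewhere in the retrieved contexts. A real
    evaluator verifies extracted claims; this is lexical overlap only."""
    context_words = set(re.findall(r"\w+", " ".join(contexts).lower()))
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    supported = 0
    for sentence in sentences:
        words = set(re.findall(r"\w+", sentence.lower()))
        if words and len(words & context_words) / len(words) >= threshold:
            supported += 1  # sentence is (loosely) grounded in the context
    return supported / max(len(sentences), 1)

# The second sentence is a confabulation absent from the context,
# so it drags the score down to 0.5.
contexts = ["The Eiffel Tower was completed in 1889 for the World's Fair."]
answer = "The Eiffel Tower was completed in 1889. It was designed by Leonardo da Vinci."
print(f"faithfulness ~= {naive_faithfulness(answer, contexts):.2f}")
```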

Disambiguation

Distinguish from 'Creativity'; hallucinations are unintended factual confabulations, not stylistic variations.

Visual Metaphor

"A confident tour guide describing historical landmarks in a city they have never actually visited."

Key Tools

RAGAS, TruLens, DeepEval, G-Eval, LangCheck
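
As a concrete starting point, the sketch below shows how a faithfulness metric might be run with RAGAS. It assumes the ragas 0.1-era Python API (evaluate over a Hugging Face Dataset with question/answer/contexts columns) and an LLM API key in the environment; imports and column names differ across ragas versions, so treat this as a sketch rather than canonical usage.

```python
# Sketch of a faithfulness check with RAGAS (assumes the ragas ~0.1
# API; column names and imports vary across versions).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness

sample = {
    "question": ["When was the Eiffel Tower completed?"],
    "answer": ["It was completed in 1889 and designed by Leonardo da Vinci."],
    "contexts": [["The Eiffel Tower was completed in 1889 for the World's Fair."]],
}

# Faithfulness judges the answer only against the retrieved contexts,
# so the ungrounded da Vinci claim should lower the score.
result = evaluate(Dataset.from_dict(sample), metrics=[faithfulness])
print(result)  # e.g. {'faithfulness': 0.5}
```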