Definition
A phenomenon in which a Large Language Model (LLM) generates output that is factually incorrect, nonsensical, or ungrounded in the provided retrieval context; it occurs when raw token probability outweighs source attribution. In RAG pipelines, this specifically manifests as 'faithfulness' failures, where the agent ignores the retrieved documents in favor of its internal parametric memory.
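As a rough illustration of the faithfulness failure described above, the sketch below flags answer sentences that share little vocabulary with the retrieved chunks. The function names (check_grounding, _tokens), the MIN_OVERLAP threshold, and the example data are all invented for this sketch; production pipelines typically score faithfulness with NLI models or LLM judges rather than lexical overlap.

```python
# Minimal sketch: flag answer sentences with weak lexical support in the
# retrieved context. Names and threshold are illustrative, not a real API.
import re

MIN_OVERLAP = 0.5  # fraction of sentence tokens that must appear in the context


def _tokens(text: str) -> set[str]:
    """Lowercased word tokens, punctuation ignored."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def check_grounding(answer: str, retrieved_chunks: list[str]) -> list[tuple[str, float]]:
    """Return (sentence, support_score) pairs for weakly supported sentences.

    Low scores suggest the model may be drawing on parametric memory
    rather than the retrieved documents.
    """
    context_tokens = _tokens(" ".join(retrieved_chunks))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        sent_tokens = _tokens(sentence)
        if not sent_tokens:
            continue
        support = len(sent_tokens & context_tokens) / len(sent_tokens)
        if support < MIN_OVERLAP:
            flagged.append((sentence, support))
    return flagged


if __name__ == "__main__":
    context = ["The Eiffel Tower was completed in 1889 and stands in Paris."]
    answer = (
        "The Eiffel Tower was completed in 1889. "
        "It was designed by Leonardo da Vinci."
    )
    for sentence, score in check_grounding(answer, context):
        print(f"Possibly ungrounded ({score:.2f}): {sentence}")
```

Running this flags only the second sentence, the one with no support in the retrieved chunk, which is the kind of parametric-memory intrusion the definition describes.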
Disambiguation
Distinguish from 'Creativity': hallucinations are unintended factual confabulations, not stylistic variation.
Visual Analog
"A confident tour guide describing historical landmarks in a city they have never actually visited."
Related Concepts
- Grounding (Countermeasure)
- Faithfulness (Metric)
- Temperature (Influencing Factor; see the sketch after this list)
- Parametric Memory (Root Cause)
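To make the Temperature entry above concrete, here is a minimal sketch (with toy logits invented for illustration) of temperature-scaled softmax: higher temperatures flatten the next-token distribution, which raises the chance of sampling a low-probability and potentially ungrounded token.

```python
# Minimal sketch: how sampling temperature reshapes a next-token distribution.
# The logits below are toy values chosen purely for illustration.
import math


def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Softmax over logits scaled by 1/temperature (temperature > 0)."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]


# Imagine token 0 is the grounded continuation and tokens 1-2 are
# plausible-sounding alternatives not supported by the retrieved context.
logits = [4.0, 2.0, 1.0]
for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: {[round(p, 3) for p in probs]}")
```

Lowering temperature is therefore a common, if blunt, knob for reducing hallucination risk, at the cost of less varied output.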