Definition
A prompting technique used in RAG and AI agent workflows in which a small number of input-output examples are included in the context to guide the LLM's reasoning, output schema, or tool-calling logic. It leverages in-context learning to improve accuracy on domain-specific tasks, at the cost of higher token usage and increased inference latency per request.
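As a rough illustration, the sketch below assembles a few-shot prompt by prepending solved input-output pairs to the user's query before it is sent to a model. It is framework-agnostic plain Python; the ticket-classification task, the labels, and the example pairs are hypothetical placeholders, not part of the source material.

```python
# Minimal sketch of few-shot prompt assembly (no particular LLM SDK assumed).
# The task, labels, and example pairs are hypothetical placeholders.

FEW_SHOT_EXAMPLES = [
    {"input": "My card was charged twice for one order.", "output": "billing"},
    {"input": "The app crashes when I open settings.", "output": "bug"},
    {"input": "Can you add a dark mode?", "output": "feature_request"},
]

def build_few_shot_prompt(query: str) -> str:
    """Prepend solved input-output pairs so the model can imitate the pattern."""
    lines = ["Classify each support ticket into one label.", ""]
    for ex in FEW_SHOT_EXAMPLES:
        lines.append(f"Ticket: {ex['input']}")
        lines.append(f"Label: {ex['output']}")
        lines.append("")
    # The unanswered query goes last; the model completes the final "Label:".
    lines.append(f"Ticket: {query}")
    lines.append("Label:")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_few_shot_prompt("I was billed after cancelling my plan."))
```

The same pattern carries over to chat-style APIs, where each example becomes a user/assistant message pair placed ahead of the real request; this is also how the examples steer output schemas and tool-call formats.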
Related Concepts
- In-Context Learning (Prerequisite)
- Dynamic Example Selection (Component; see the sketch after this list)
- Chain-of-Thought (CoT) (Prerequisite)
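To illustrate the Dynamic Example Selection component named above, here is a minimal sketch that picks the k pool examples most similar to the incoming query and passes only those to the prompt builder. The word-overlap (Jaccard) similarity and the example pool are assumptions for brevity; production systems typically rank by embedding similarity instead.

```python
# Minimal sketch of dynamic example selection via word-overlap similarity.
# Example pool and task are hypothetical; real systems usually use embeddings.

EXAMPLE_POOL = [
    {"input": "Refund not received after return.", "output": "billing"},
    {"input": "Login page shows a 500 error.", "output": "bug"},
    {"input": "Please support CSV export.", "output": "feature_request"},
    {"input": "Charged twice this month.", "output": "billing"},
]

def jaccard(a: str, b: str) -> float:
    """Crude lexical similarity between two strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def select_examples(query: str, k: int = 2) -> list[dict]:
    """Return the k pool examples most similar to the query."""
    ranked = sorted(EXAMPLE_POOL, key=lambda ex: jaccard(query, ex["input"]), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    for ex in select_examples("I was charged twice after my return."):
        print(ex["input"], "->", ex["output"])
```

The selected pairs would then be formatted into the prompt as in the earlier sketch, keeping the added token cost bounded while keeping the examples relevant to the current query.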
Disambiguation
Not to be confused with fine-tuning (weight updates) or zero-shot prompting (no examples provided).
Visual Analog
A set of three solved practice problems provided at the top of a worksheet to demonstrate the required method for the remaining questions.