TL;DR
Tool use (Action) is the functional bridge that transforms an AI agent from a passive "predictor of text" into an active "executor of tasks." In the context of modern AI, tool use involves the ability of a Large Language Model (LLM) to recognize when it lacks the internal data or capability to solve a problem and subsequently invoke an external software function, API, or physical actuator. This process is rooted in distributed cognition, where the agent's "intelligence" is not confined to its weights but extends into the environment via the tools it manipulates. Key technical pillars include function calling, structured JSON output, and the Perception-Action loop, which allows agents to observe the results of their actions and iterate toward a goal.
Conceptual Overview
The Distributed Cognition System
Traditional AI models were viewed as "closed systems"—repositories of static knowledge. However, tool use reframes the agent as part of a distributed cognition system [src:001]. In this framework, cognition is not something that happens solely inside the "brain" (the neural network); rather, it is a process spread across the agent, the tool, and the environment.
When an agent uses a calculator, the "math" is not happening in the LLM's attention heads; it is happening in the external ALU (Arithmetic Logic Unit). The agent's role is to represent the problem, select the tool, and interpret the result. This creates a bidirectional relationship: the agent's mental model guides the tool manipulation, while the tool's output updates the agent's internal state [src:001, src:002].
The Perception-Action Framework
Contemporary research suggests that tool use is not merely a sequence of commands but a perception-action framework [src:003]. For an AI agent, "perception" involves parsing the current context (user prompt + previous tool outputs), and "action" involves generating a tool call.
This framework relies on the concept of affordances—the qualities of an object or tool that allow an individual to perform an action. For an AI agent, an API's documentation defines its affordances. The agent must learn to map its goals to these affordances through perceptuomotor adaptation—essentially learning the "feel" of the API through trial, error, and feedback [src:003, src:004].
Evolutionary and Developmental Context
Tool use is a hallmark of advanced biological intelligence. From primates using sticks to extract termites to humans developing complex machinery, tools extend physical and cognitive reach [src:005]. In AI, we see a parallel "developmental" arc:
- Reflexive Action: Simple if-then triggers.
- Representational Tool Use: Using tools based on internal models (e.g., early symbolic AI).
- Adaptive Tool Use: Modern agents that can select tools dynamically based on real-time environmental feedback [src:004].

Practical Implementations
Function Calling and Structured Output
The most common implementation of tool use in modern LLMs (like GPT-4, Claude 3.5, or Llama 3) is Function Calling. This is a protocol where the model is provided with a list of tool definitions (usually in JSON Schema format) and is trained to output a structured JSON object instead of natural language when a tool is required.
The Tool Definition Schema:
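A representative definition in this style, shown here as a Python dict for a hypothetical get_stock_price tool (the exact envelope varies by provider, but the JSON Schema core is shared):

```python
# A hypothetical tool definition in the JSON Schema style used by most
# function-calling APIs. The name, description, and parameters are what
# the model "sees"; they define the tool's affordances.
get_stock_price_tool = {
    "name": "get_stock_price",
    "description": "Return the latest trading price for a stock ticker.",
    "parameters": {
        "type": "object",
        "properties": {
            "ticker": {
                "type": "string",
                "description": "Exchange symbol, e.g. 'NVDA'.",
            }
        },
        "required": ["ticker"],
    },
}
```

The description fields matter as much as the types: they are the only signal the model has for deciding when this tool applies.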
The Execution Environment
The agent itself does not "run" the code. Instead, the system architecture follows a specific flow:
- Generation: The LLM generates the tool call: get_stock_price(ticker="NVDA").
- Interception: The agentic framework (e.g., LangChain, AutoGPT) intercepts this string.
- Execution: The framework executes the actual Python code or API request.
- Injection: The result (e.g., {"price": 135.50}) is injected back into the LLM's prompt as a "Tool Role" message.
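The four steps above can be sketched as a minimal dispatch loop. The registry and the get_stock_price stub are hypothetical stand-ins for real API calls:

```python
import json

# Hypothetical registry mapping tool names to real Python callables;
# the lambda stubs out a live market-data API.
TOOLS = {
    "get_stock_price": lambda ticker: {"price": 135.50},
}

def execute_tool_call(call_json: str) -> str:
    """Intercept a model-generated tool call, execute it, and return
    the result as a string for re-injection as a tool-role message."""
    call = json.loads(call_json)
    fn = TOOLS[call["name"]]          # Interception: look up the real code
    result = fn(**call["arguments"])  # Execution: run it outside the LLM
    return json.dumps(result)         # Injection: payload for the next prompt

# Generation: the model emits structured JSON instead of prose.
model_output = '{"name": "get_stock_price", "arguments": {"ticker": "NVDA"}}'
tool_message = execute_tool_call(model_output)  # '{"price": 135.5}'
```

Note that the model never touches TOOLS directly; it only ever produces and consumes strings, which is what keeps the execution layer controllable.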
Reinforcement Learning and Tool Manipulation (ART)
Advanced implementations use Augmenting Reinforcement Learning with Tool Manipulation (ART) [src:006]. In these systems, agents are not just prompted to use tools; they are trained via RL to optimize their tool selection. This is critical in complex environments where multiple tools could solve a problem, but one is more efficient or cost-effective.
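Illustrative only: ART itself is a full RL framework, but the "optimize tool selection" idea can be shown with a tiny epsilon-greedy bandit over two interchangeable tools, where the reward penalizes the costlier one (tool names and costs are invented):

```python
import random

random.seed(0)  # deterministic run for the example
TOOL_COST = {"cheap_search": 1.0, "expensive_search": 5.0}
value = {name: 0.0 for name in TOOL_COST}   # running value estimate per tool
counts = {name: 0 for name in TOOL_COST}

def select_tool(epsilon: float = 0.1) -> str:
    # Explore with probability epsilon, otherwise exploit the best estimate.
    if random.random() < epsilon:
        return random.choice(list(TOOL_COST))
    return max(value, key=value.get)

for _ in range(500):
    tool = select_tool()
    reward = 10.0 - TOOL_COST[tool]  # both tools succeed; cheaper is better
    counts[tool] += 1
    value[tool] += (reward - value[tool]) / counts[tool]  # incremental mean
```

After training, max(value, key=value.get) is "cheap_search": the agent has learned to prefer the cost-effective tool without being told to.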
Advanced Techniques
Tool Composition and Chains
Complex tasks often require Tool Composition, where the output of Tool A becomes the input for Tool B.
- Chain of Tools: Similar to "Chain of Thought," this technique encourages the model to plan a sequence of tool invocations [src:007].
- DAG-based Orchestration: For highly complex agents, tools are organized in a Directed Acyclic Graph (DAG), allowing for parallel tool execution (e.g., fetching weather and news simultaneously) before synthesizing a final answer.
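The parallel branch of such a DAG can be sketched with asyncio; the two fetchers below are stubs standing in for real weather and news APIs:

```python
import asyncio

# Stubbed, independent tool calls; in a real agent these would be HTTP requests.
async def fetch_weather(city: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"weather({city})"

async def fetch_news(topic: str) -> str:
    await asyncio.sleep(0.01)
    return f"news({topic})"

async def run_dag() -> list:
    # The two nodes share no edges, so the scheduler may run them
    # concurrently before the final synthesis step consumes both results.
    return await asyncio.gather(fetch_weather("Berlin"), fetch_news("AI"))

results = asyncio.run(run_dag())  # ['weather(Berlin)', 'news(AI)']
```

With a sequential chain the latencies would add; with the DAG they overlap, which is the main practical payoff of this orchestration style.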
Tool Retrieval (RAG for Tools)
As the number of available tools grows (e.g., an enterprise with 1,000+ internal APIs), it becomes impossible to fit all tool definitions into the LLM's context window. Tool Retrieval uses Retrieval-Augmented Generation (RAG) to find the top-k most relevant tools based on the user's query, injecting only those definitions into the prompt.
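A toy version of this selection step, using word overlap in place of embedding similarity (the tool names and descriptions are invented; the top-k shape is the same in real systems):

```python
# Score each tool description against the query and inject only the
# top-k definitions into the prompt, rather than all of them.
TOOL_DESCRIPTIONS = {
    "get_stock_price": "return the latest stock price for a ticker",
    "get_weather": "current weather forecast for a city",
    "send_email": "send an email message to a recipient",
}

def retrieve_tools(query: str, k: int = 1) -> list:
    q = set(query.lower().split())
    scored = sorted(
        TOOL_DESCRIPTIONS,
        key=lambda name: len(q & set(TOOL_DESCRIPTIONS[name].split())),
        reverse=True,
    )
    return scored[:k]

retrieve_tools("latest stock price for NVDA")  # ['get_stock_price']
```

Production retrievers swap the overlap score for cosine similarity over embedded descriptions, but the context-window budget logic is identical.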
Error Handling and Self-Correction
A "naive" agent fails if a tool returns a 404 error. An "advanced" agent treats the error as an Observation.
- Retry Logic: If a tool fails due to a timeout, the agent can decide to retry.
- Parameter Correction: If an API returns a "Missing Argument" error, the agent parses the error, realizes its mistake, and generates a corrected tool call.
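A minimal sketch of this correction loop, with a stubbed flaky_api standing in for a real endpoint (in a real agent the LLM would read the error text and regenerate the call; here the patch is hard-coded):

```python
def flaky_api(args: dict) -> dict:
    # Stub: fails with a structured error until the required argument appears.
    if "ticker" not in args:
        return {"error": "Missing Argument: ticker"}
    return {"price": 135.5}

def call_with_correction(args: dict, max_retries: int = 2) -> dict:
    for _ in range(max_retries + 1):
        result = flaky_api(args)
        if "error" not in result:
            return result  # success: stop retrying
        # Treat the error as an Observation and correct the parameters.
        if "Missing Argument: ticker" in result["error"]:
            args = {**args, "ticker": "NVDA"}
    return result  # give up after the retry budget is spent

call_with_correction({})  # {'price': 135.5}
```

The bounded retry budget is the important part: without it, a persistent failure becomes the infinite-loop failure mode discussed in the FAQ below.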
The ReAct Pattern
The ReAct (Reason + Act) pattern is the industry standard for tool use. It forces the agent to follow a structured cycle:
- Thought: "I need to find the current price of Bitcoin to answer the user."
- Action: get_crypto_price(symbol="BTC")
- Observation: {"price": 64000}
- Thought: "I have the price. Now I can formulate the final response."
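The cycle can be sketched as a loop with an iteration cap. Here fake_model scripts the two turns a real LLM would generate, and get_crypto_price is a stub for an exchange API:

```python
def get_crypto_price(symbol: str) -> dict:
    return {"price": 64000}  # stubbed exchange API

def fake_model(history: list) -> str:
    # A real LLM would generate these turns; we script them for the sketch.
    if not any("Observation" in h for h in history):
        return 'Action: get_crypto_price(symbol="BTC")'
    return "Final Answer: Bitcoin is trading at $64000."

def react_loop(question: str, max_iterations: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_iterations):
        step = fake_model(history)
        if step.startswith("Final Answer:"):
            return step
        # Execute the action and feed the observation back into the context.
        obs = get_crypto_price("BTC")
        history += [step, f"Observation: {obs}"]
    return "Stopped: iteration limit reached."

react_loop("What is the price of Bitcoin?")
```

The max_iterations guard is what production frameworks add on top of the bare pattern, for the loop-prevention reasons covered in the FAQ.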
Research and Future Directions
Toolformer and Self-Supervised Learning
A major research milestone is Toolformer, a model trained to decide which APIs to call, when to call them, and how to best incorporate the results. Unlike standard models that require manual prompting, Toolformer was trained in a self-supervised manner, learning to insert API calls into its own text generation to improve accuracy.
Gorilla and API-Specific LLMs
Gorilla is a fine-tuned Llama model reported to surpass GPT-4 at writing correct API calls. It addresses the "hallucination" problem in tool use (models inventing parameters or endpoints that don't exist) by being trained specifically on large API documentation datasets.
Autonomous Tool Discovery
The "holy grail" of tool use research is Autonomous Tool Discovery. Instead of being given a list of tools, the agent is given a "sandbox" (like a Linux terminal) and must explore the environment to discover what tools are available, how they work, and how to combine them to solve novel problems [src:003, src:006].
Embodied AI and Physical Tools
As AI agents move into robotics, "Action" shifts from API calls to physical motor control. Research in this area focuses on how the Perception-Action loop handles the high-latency, high-noise environment of the physical world, where a "tool" might be a physical hammer or a robotic gripper [src:002].
Frequently Asked Questions
Q: What is the difference between "Function Calling" and "Tool Use"?
Function calling is the technical mechanism (the structured output protocol), while tool use is the broader cognitive capability. An agent uses function calling to perform the act of tool use.
Q: Can an LLM use a tool it has never seen before?
If the tool's affordances (documentation/schema) are provided in the context window, a sufficiently advanced LLM (like GPT-4) can perform "zero-shot" tool use. However, for complex tools, fine-tuning or few-shot examples are often required for reliability.
Q: How do agents handle security when using tools?
Security is typically handled at the Execution Layer, not the LLM layer. This involves sandboxing (e.g., using Docker containers for code execution), API key scoping (giving the agent only the permissions it needs), and human-in-the-loop (HITL) confirmation for sensitive actions like deleting data or making payments.
Q: Why does tool use sometimes lead to "infinite loops"?
Infinite loops occur when an agent's "Observation" doesn't satisfy its "Reasoning" step, causing it to call the same tool repeatedly. This is usually mitigated by setting a max_iterations limit in the agent framework or improving the prompt to recognize repetitive failure.
Q: What is "Tool Hallucination"?
Tool hallucination occurs when an agent generates a call to a function that doesn't exist or uses arguments that are not defined in the provided schema. This is often solved by using models with better instruction-following capabilities or by using "constrained decoding" to force the model to only output valid JSON.
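A post-hoc validation guard, a simpler alternative to constrained decoding, can catch such hallucinations before execution. The schema table here is hypothetical:

```python
# Reject calls whose name or arguments are not declared in the schema.
SCHEMAS = {
    "get_stock_price": {"required": ["ticker"], "allowed": {"ticker"}},
}

def validate_call(name: str, arguments: dict) -> list:
    """Return a list of validation errors; an empty list means the call
    is safe to hand to the execution layer."""
    schema = SCHEMAS.get(name)
    if schema is None:
        return [f"unknown tool: {name}"]  # hallucinated function name
    errors = []
    for arg in arguments:
        if arg not in schema["allowed"]:
            errors.append(f"undeclared argument: {arg}")  # invented parameter
    for req in schema["required"]:
        if req not in arguments:
            errors.append(f"missing required argument: {req}")
    return errors

validate_call("get_stock_price", {"ticker": "NVDA"})  # []
```

On a non-empty error list, the framework can either refuse the call or feed the errors back to the model as an observation, turning validation into another self-correction signal.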
References
- [src:001] Tool Use: Embodiment and the Development of Modern Cognition
- [src:002] Tool Use as Embodied Cognition
- [src:003] The development of tool use: From action to cognition
- [src:004] Tool use and cognitive development
- [src:005] Tool Use in Animals: Cognition and Ecology
- [src:006] ART: Augmenting Reinforcement Learning with Tool Manipulation (arXiv)
- [src:007] Chain of Tools: Tool-Augmented Language Models (arXiv)