# How LLM Tools Work

## Tools in Large Language Models (LLMs)

Tools enable large language models (LLMs) to interact with external systems, APIs, or data sources, extending their capabilities beyond text generation.
Two aspects of tools are crucial:
- How to create tools
- How LLMs find and use these tools
## Create Tool

==The tool system is a form of metaprogramming.==
Tools are defined with metadata, including:
- Name: A unique identifier (e.g., get_current_weather).
- Description: A natural language explanation of what the tool does (e.g., “Retrieve the current weather for a given city”).
- Schema: A JSON schema or similar structure specifying the input parameters (e.g., {"city": {"type": "string"}}).
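Put together, a tool definition for the weather example might look like the following. This is a minimal sketch written as a Python dict in the JSON-schema style commonly used for tool calling; the exact wrapper fields vary by provider.

```python
# Hypothetical tool definition for get_current_weather.
weather_tool = {
    "name": "get_current_weather",
    "description": "Retrieve the current weather for a given city.",
    "parameters": {  # JSON schema describing the tool's inputs
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'New York'"},
        },
        "required": ["city"],
    },
}
```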
```python
# langchain tool
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return current weather in a city."""
    ...

# Metadata derived from the function:
#   name        = "get_weather"
#   description = "Return current weather in a city."
#   args        = {"city": str}
```
LangChain reads this metadata directly from the function: its name, docstring, and type hints.
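As a quick check, the decorated tool exposes that metadata as attributes. This is a sketch assuming `langchain_core`'s `@tool` decorator; the exact shape of `.args` can differ across versions.

```python
print(get_weather.name)         # "get_weather"
print(get_weather.description)  # "Return current weather in a city."
print(get_weather.args)         # e.g. {"city": {"title": "City", "type": "string"}}
```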
:exclamation: Note: ==More comprehensive descriptions and schemas help LLMs understand and use tools effectively.==
## Tool Detection

How do LLMs detect the required tool?
- ==Query Parsing==:
  - The LLM analyzes the user's query using its natural language processing capabilities.
  - It matches the query's intent and content to the tool descriptions or keywords. For example, a query like "What's the weather in New York?" aligns with a tool described as "Retrieve the current weather."
  - Modern LLMs, especially those fine-tuned for tool calling (e.g., OpenAI's GPT-4o), use semantic understanding to infer intent rather than relying solely on keywords.
- ==Tool Selection==:
  - Prompt-Based (LangChain): The LLM is given a prompt that includes tool descriptions and instructions to select the appropriate tool. The LLM reasons about the query (often using a framework like ReAct) and outputs a decision to call a specific tool with arguments.
  - Fine-Tuned Tool Calling (OpenAI): The LLM is trained to output a structured JSON object specifying the tool name and arguments directly, based on the query and tool schemas provided in the API call (see the sketch after this list).
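To make the fine-tuned path concrete, here is a minimal sketch of OpenAI-style tool calling using the official Python SDK. It assumes an API key is configured; the model name and the weather tool are illustrative, not part of the original example.

```python
import json
from openai import OpenAI

client = OpenAI()

# Tool schema passed to the API call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Retrieve the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in New York?"}],
    tools=tools,
)

# The model replies with a structured tool call instead of plain text.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name)                   # "get_current_weather"
print(json.loads(tool_call.function.arguments))  # {"city": "New York"}
```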
## Mock Tool Implementation

- Step 1: Define a Tool Function

```python
def add_numbers(x: int, y: int) -> int:
    """Add two numbers and return the result."""
    return x + y
```

- Step 2: Use inspect to Introspect
```python
import inspect

sig = inspect.signature(add_numbers)

# Print parameter names and types
for name, param in sig.parameters.items():
    print(f"{name}: {param.annotation} (default={param.default})")

# Print return type
print(f"Returns: {sig.return_annotation}")
```

- Step 3: Dynamically Call the Function
```python
# Assume this comes from LLM tool calling output
llm_output = {"x": 5, "y": 7}

# Dynamically call it
result = add_numbers(**llm_output)
print(result)  # ➜ 12
```
## Summary

- Uses `inspect.signature(func)` to introspect argument names and types.
- Formats this into metadata for the LLM prompt.
- Parses the LLM output (`{tool_name, tool_args}`).
- Validates the arguments.
- Calls the function: `tool.func(**tool_args)`.
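The whole loop can be tied together in a small sketch. This is illustrative only: the `TOOLS` registry and `describe_tool` helper are hypothetical names, and a real system would receive `tool_name`/`tool_args` from the LLM rather than hard-coding them.

```python
import inspect

# Hypothetical registry of callable tools (add_numbers is defined in Step 1).
TOOLS = {"add_numbers": add_numbers}

def describe_tool(func):
    """Format a function's signature into metadata for an LLM prompt."""
    sig = inspect.signature(func)
    params = {name: str(p.annotation) for name, p in sig.parameters.items()}
    return {"name": func.__name__, "description": func.__doc__, "args": params}

# 1. Build the prompt metadata from introspection.
tool_metadata = [describe_tool(f) for f in TOOLS.values()]

# 2. Pretend the LLM returned this structured output.
llm_output = {"tool_name": "add_numbers", "tool_args": {"x": 5, "y": 7}}

# 3. Validate the arguments against the signature, then dispatch.
func = TOOLS[llm_output["tool_name"]]
inspect.signature(func).bind(**llm_output["tool_args"])  # raises TypeError if invalid
print(func(**llm_output["tool_args"]))  # ➜ 12
```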