Large Language Models (LLMs) can answer most user questions, but on their own they cannot access or affect the real world. With access to tools and APIs, LLMs can perform tasks and actions that impact the real world, such as booking a flight, sending an email, or updating a database. In this section, we will learn how LLMs make use of tools and what MCP Servers are. You can skip to the next section to learn about tool use with Agentor.

Weather Agent Example

ChatGPT accesses weather using an external API.

What is an LLM Tool?

When we say “tool”, we mean a function that an LLM can call to perform a task or action. But how does an LLM know how to call a tool?
The LLM doesn’t call the tool directly; it only returns a JSON object with the tool name and the arguments to call it with.

Tool calling with OpenAI API

LLMs first need to know the details of a tool before they can call it. This is done by providing a tool schema to the LLM. In the following example, we define a tool schema for a weather API function that retrieves the current weather for a given location.
from openai import OpenAI

weather_tool_schema = {
    "type": "function",
    "name": "get_weather",
    "description": "Retrieves the current weather for the given location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and country, e.g. London, United Kingdom",
            },
            "units": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "The units in which the temperature will be returned.",
            },
        },
        "required": ["location", "units"],
        "additionalProperties": False,
    },
    "strict": True,
}

client = OpenAI()

# Ask a question and expose the tool; the model decides whether to call it.
response = client.responses.create(
    model="gpt-5-nano",
    input="What is the weather in London?",
    tools=[weather_tool_schema],
)
print(response)
The LLM responds with a JSON object containing the tool name and the arguments to call it with. It is the developer’s job to implement the tool and call it with those arguments. The tool result is then fed back to the LLM so it can answer the user’s question.
Response(model='gpt-5-nano-2025-08-07', object='response', output=[
    ResponseReasoningItem(summary=[], type='reasoning', content=None, encrypted_content=None, status=None),
    ResponseFunctionToolCall(
        arguments='{"location":"London, United Kingdom","units":"celsius"}',
        call_id='call_d1K7mBkbN62s4MChtPOkpckW',
        name='get_weather',
        type='function_call',
        id='fc_0945c6bd7187945700690166502f7881978963e94cbe4d4ee5',
        status='completed'
    )
]
)
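
To close the loop ourselves, we can parse the function call from the response, run a matching Python function, and send the result back as a function_call_output item in a second request. The sketch below uses a stubbed get_weather that returns hard-coded values; a real implementation would call an actual weather API.

import json

def get_weather(location: str, units: str) -> dict:
    # Stub for illustration only; a real version would call a weather API.
    return {"location": location, "temperature": 18, "units": units}

input_items = [{"role": "user", "content": "What is the weather in London?"}]
response = client.responses.create(
    model="gpt-5-nano",
    input=input_items,
    tools=[weather_tool_schema],
)

# Keep the model's output (including its tool call) in the conversation context.
input_items += response.output

for item in response.output:
    if item.type == "function_call":
        args = json.loads(item.arguments)
        result = get_weather(**args)  # the developer executes the tool
        input_items.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": json.dumps(result),
        })

# Second request: the model now sees the tool result and can answer the user.
final = client.responses.create(
    model="gpt-5-nano",
    input=input_items,
    tools=[weather_tool_schema],
)
print(final.output_text)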

Tool Calling Flow

The following diagram illustrates the complete flow of tool calling with an LLM: the user asks a question, the LLM returns a tool call, the application executes the tool, and the tool result is fed back to the LLM to produce the final answer.

MCP (Model Context Protocol) Server

MCP (Model Context Protocol) is an open standard for connecting LLM applications to external tools, APIs, and other resources. Let’s understand the “why” behind MCP by comparing tool integration with and without it.
| Without MCP | With MCP |
| --- | --- |
| Developers need to implement the tool-calling logic in the application. | A prebuilt MCP Server can be plugged into the LLM to provide tools and APIs. |
| Developers need to implement integration logic for every external tool (weather API, email API, etc.). | Think of MCP like a USB-C port for AI applications. |
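
As a concrete example, OpenAI’s Responses API lets us plug a remote MCP server in as a tool, so the model can discover and call the server’s tools without any integration code on our side. The sketch below assumes a hypothetical MCP server at https://example.com/mcp; swap in a real server URL to try it.

# A minimal sketch of attaching a remote MCP server via the Responses API.
# The server_url below is a hypothetical placeholder, not a real server.
response = client.responses.create(
    model="gpt-5-nano",
    input="What is the weather in London?",
    tools=[{
        "type": "mcp",
        "server_label": "weather",
        "server_url": "https://example.com/mcp",
        "require_approval": "never",
    }],
)
print(response.output_text)

Here the model lists the server’s tools, decides which one to call, and the API handles the round trip to the MCP server for us.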