Tool Calling
Both the Chat Completions and Responses APIs support function/tool calling — the model can decide to invoke functions you define, and you return the results.
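At a glance, the flow is: attach tool definitions to a request, detect a tool call in the reply, run the function yourself, append the result, and ask again. A minimal sketch of that round trip follows; `weather_tool` is the tool defined in the next section, `run_get_weather` stands in for your own implementation, and the `Message(Val(:tool), ...)` constructor and `tc.id` field are assumptions, so check your UniLM version for the exact tool-result API:

```julia
using UniLM

# Sketch only: `weather_tool` is the GPTTool defined in the next section,
# and `run_get_weather` stands in for your own implementation.
chat = Chat(model="gpt-5.2", tools=[weather_tool])
push!(chat, Message(Val(:user), "What's the weather in Paris?"))
result = chatrequest!(chat)

if result isa LLMSuccess && result.message.finish_reason == "tool_calls"
    tc = result.message.tool_calls[1]
    output = run_get_weather(tc.func.arguments)
    # Assumed constructor for tool-result messages; verify against your version
    push!(chat, Message(Val(:tool), output, tool_call_id=tc.id))
    final = chatrequest!(chat)  # the model now answers in plain text
end
```

The rest of this page walks through each step of this cycle in detail.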
Chat Completions Tool Calling
Defining Tools
Wrap your function schema in a GPTTool:
using UniLM
using JSON
weather_tool = GPTTool(
    func=GPTFunctionSignature(
        name="get_weather",
        description="Get current weather for a location",
        parameters=Dict(
            "type" => "object",
            "properties" => Dict(
                "location" => Dict("type" => "string", "description" => "City name"),
                "unit" => Dict("type" => "string", "enum" => ["celsius", "fahrenheit"])
            ),
            "required" => ["location"]
        )
    )
)
println("Tool type: ", weather_tool.type)
println("Function name: ", weather_tool.func.name)
println("Tool JSON:")
println(JSON.json(JSON.lower(weather_tool)))

Tool type: function
Function name: get_weather
Tool JSON:
{"type":"function","function":{"name":"get_weather","description":"Get current weather for a location","parameters":{"properties":{"location":{"type":"string","description":"City name"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"],"type":"object"}}}

Making Tool-Enabled Requests
chat = Chat(model="gpt-5.2", tools=[weather_tool])
push!(chat, Message(Val(:system), "You are a helpful assistant with access to weather data."))
push!(chat, Message(Val(:user), "What's the weather in Paris?"))
println("Chat has ", length(chat.tools), " tool(s) registered")
println("Request body:")
println(JSON.json(chat))

Chat has 1 tool(s) registered
Request body:
{"tools":[{"type":"function","function":{"name":"get_weather","description":"Get current weather for a location","parameters":{"properties":{"location":{"type":"string","description":"City name"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"],"type":"object"}}}],"messages":[{"role":"system","content":"You are a helpful assistant with access to weather data."},{"role":"user","content":"What's the weather in Paris?"}],"parallel_tool_calls":false,"model":"gpt-5.2"}

Handling Tool Calls
When the model wants to call a function, the result message will have finish_reason == "tool_calls":
chat = Chat(
    model="gpt-5.2",
    tools=[weather_tool],
    tool_choice=UniLM.GPTToolChoice(func=:get_weather)
)
push!(chat, Message(Val(:system), "Use the provided tools to answer."))
push!(chat, Message(Val(:user), "What's the weather in Paris?"))
result = chatrequest!(chat)
if result isa LLMSuccess
    println("Finish reason: ", result.message.finish_reason)
    tc = result.message.tool_calls[1]
    println("Function: ", tc.func.name)
    println("Arguments: ", JSON.json(tc.func.arguments, 2))
else
    println("Request failed — see result for details")
end

Finish reason: tool_calls
Function: get_weather
Arguments: {
  "location": "Paris",
  "unit": "celsius"
}

Controlling Tool Choice
# Let the model decide
chat = Chat(tools=[weather_tool], tool_choice="auto")
# Force the model to use a tool
chat = Chat(tools=[weather_tool], tool_choice="required")
# Prevent tool use
chat = Chat(tools=[weather_tool], tool_choice="none")

Responses API Tool Calling
The Responses API makes tool calling more ergonomic with dedicated types.
Function Tools
tool = function_tool(
    "calculate",
    "Evaluate a math expression",
    parameters=Dict(
        "type" => "object",
        "properties" => Dict(
            "expression" => Dict("type" => "string")
        ),
        "required" => ["expression"]
    ),
    strict=true
)
println("Tool: ", tool.name, " (strict=", tool.strict, ")")
println("JSON: ", JSON.json(JSON.lower(tool)))

Tool: calculate (strict=true)
JSON: {"strict":true,"type":"function","name":"calculate","description":"Evaluate a math expression","parameters":{"properties":{"expression":{"type":"string"}},"required":["expression"],"type":"object"}}

weather_fn = function_tool(
    "get_weather",
    "Get current weather for a location",
    parameters=Dict(
        "type" => "object",
        "properties" => Dict(
            "location" => Dict("type" => "string", "description" => "City name"),
            "unit" => Dict("type" => "string", "enum" => ["celsius", "fahrenheit"])
        ),
        "required" => ["location"]
    )
)
result = respond("What's the weather in Tokyo? Use celsius.", tools=[weather_fn])
calls = function_calls(result)
if !isempty(calls)
    println("Function: ", calls[1]["name"])
    println("Arguments: ", JSON.json(JSON.parse(calls[1]["arguments"]), 2))
else
    println("No function calls — ", output_text(result))
end

Function: get_weather
Arguments: {
  "location": "Tokyo",
  "unit": "celsius"
}

Web Search
The model can search the web — no function implementation needed:
ws = web_search(context_size="high")
println("Web search tool type: ", typeof(ws))
println("Context size: ", ws.search_context_size)

Web search tool type: WebSearchTool
Context size: high

result = respond(
    "What is the latest stable release of the Julia programming language?",
    tools=[web_search()]
)
if result isa ResponseSuccess
    println(output_text(result))
else
    println("Request failed — ", output_text(result))
end

The latest **stable** release of the Julia programming language is **Julia 1.12.4**. ([julialang.org](https://julialang.org/blog/2026/01/this-month-in-julia-world/?utm_source=openai))

File Search
Search over your uploaded vector stores:
result = respond(
    "Find the error handling policy",
    tools=[file_search(["vs_store_id_123"], max_results=5)]
)

Combining Tools
Mix different tool types freely:
tools = [
    web_search(),
    function_tool("save_summary", "Save a summary to the database",
        parameters=Dict(
            "type" => "object",
            "properties" => Dict(
                "title" => Dict("type" => "string"),
                "content" => Dict("type" => "string")
            )
        )
    )
]
println("Number of tools: ", length(tools))
for t in tools
    println(" - ", typeof(t))
end

Number of tools: 2
 - WebSearchTool
 - FunctionTool

Automated Tool Loop
Instead of manually handling tool calls, use tool_loop! (Chat Completions) or tool_loop (Responses API) for automatic dispatch:
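Under the hood, both variants follow the same shape: request, dispatch any tool calls to the matching callable, append the outputs, and repeat until the model answers in plain text. A simplified sketch of that idea, not the actual implementation; the callback field name and the tool-result `Message` constructor are assumptions:

```julia
# Illustrative only: the real tool_loop!/tool_loop handle errors, parallel
# calls, and result types (ToolCallOutcome, ToolLoopResult) more carefully.
function sketch_tool_loop!(chat, callable_tools; max_rounds=5)
    for _ in 1:max_rounds
        result = chatrequest!(chat)
        result isa LLMSuccess || return result
        # A plain text answer ends the loop
        result.message.finish_reason == "tool_calls" || return result
        for tc in result.message.tool_calls
            # Match the call to a CallableTool by function name and invoke it
            ct = first(t for t in callable_tools if t.tool.func.name == tc.func.name)
            output = ct.callback(tc.func.name, tc.func.arguments)  # field name assumed
            push!(chat, Message(Val(:tool), output))               # constructor assumed
        end
    end
end
```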
Chat Completions
ct = CallableTool(weather_tool, (name, args) -> "22C, sunny in $(args["location"])")
println("Callable tool wrapping: ", ct.tool.func.name)

Callable tool wrapping: get_weather

chat = Chat(model="gpt-5.2", tools=[ct.tool])
push!(chat, Message(Val(:user), "What's the weather in Paris?"))
result = tool_loop!(chat; tools=[ct])
# result.completed == true when the model gives a text response

Responses API
ct = CallableTool(
    function_tool("get_weather", "Get weather", parameters=Dict(...)),
    (name, args) -> "22C, sunny"
)
result = tool_loop("What's the weather?"; tools=[ct])

MCP Tool Integration
MCP servers expose tools that integrate directly with the tool loop via mcp_tools and mcp_tools_respond. See the MCP Guide for full details.
# Chat Completions + MCP
session = mcp_connect(`npx server`)
tools = mcp_tools(session)
chat = Chat(model="gpt-5.2", tools=map(t -> t.tool, tools))
push!(chat, Message(Val(:user), "Do something"))
result = tool_loop!(chat; tools)
# Responses API + MCP
tools = mcp_tools_respond(session)
result = tool_loop("Do something"; tools=tools)

See Also
- GPTTool, GPTFunctionSignature — Chat Completions tool types
- FunctionTool, WebSearchTool, FileSearchTool — Responses API tool types
- function_tool, web_search, file_search — convenience constructors
- CallableTool, ToolCallOutcome, ToolLoopResult — tool loop types
- tool_loop!, tool_loop — automated tool dispatch
- MCP Guide — MCP server integration