I spent two hours reading the Anthropic docs trying to understand tool_use. The docs are accurate — but they assume you already know what’s happening. This guide assumes you don’t, and shows the whole flow with working code you can run right now.
What “tool use” means in the Claude API
In the Claude API, a “tool” is what other platforms call a “skill” or “function.” You define tools when you call the API, and Claude can choose to call one of them instead of (or before) responding to you.
The flow goes like this:
1. You send a message to Claude with a list of tool definitions
2. Claude reads your message and decides: do I need a tool?
3. If yes — Claude returns a tool_use block (not a text response)
4. You run the actual function and get the result
5. You send the result back to Claude
6. Claude uses the result to write the final answer
That back-and-forth is the part the docs gloss over. Let’s build it step by step.
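In skeleton form, the whole loop looks like this. Everything here is a placeholder — `callModel` and `runTool` are stand-in names for the API call and your own function, both filled in over the next sections:

```javascript
// Generic sketch of the tool-use loop; callModel/runTool are injected stand-ins
async function toolLoop(callModel, runTool, userMessage) {
  const messages = [{ role: "user", content: userMessage }]; // step 1
  let response = await callModel(messages);
  while (response.stop_reason === "tool_use") {              // steps 2-3
    const call = response.content.find(b => b.type === "tool_use");
    const result = await runTool(call.name, call.input);     // step 4
    messages.push(
      { role: "assistant", content: response.content },      // step 5
      {
        role: "user",
        content: [
          { type: "tool_result", tool_use_id: call.id, content: JSON.stringify(result) }
        ]
      }
    );
    response = await callModel(messages);                    // step 6
  }
  return response.content.find(b => b.type === "text").text;
}
```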
Setup
Install the Anthropic SDK:
npm install @anthropic-ai/sdk
Set your API key:
export ANTHROPIC_API_KEY="your-key-here"
Step 1 — Define a tool
A tool definition tells Claude three things: what it’s called, what it does, and what input it needs.
const tools = [
{
name: "get_weather",
description:
"Get current weather conditions for a city. " +
"Use this when the user asks about weather, temperature, rain, " +
"or what to wear. Returns temperature in Celsius and conditions.",
input_schema: {
type: "object",
properties: {
city: {
type: "string",
description: "The city name, e.g. 'Mumbai' or 'London'"
}
},
required: ["city"]
}
}
];
Key things to note:
- input_schema follows JSON Schema format
- required tells Claude which fields it must provide before calling
- The description is what Claude uses to decide when to call this tool — write it clearly
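Schemas can be richer than a single string field. JSON Schema features like enum constrain what Claude is allowed to pass; the units field below is a hypothetical extension of the weather tool, not part of the example above:

```javascript
// Hypothetical richer schema: adds an optional, enum-constrained "units" field
const toolsWithUnits = [
  {
    name: "get_weather",
    description: "Get current weather conditions for a city.",
    input_schema: {
      type: "object",
      properties: {
        city: { type: "string", description: "The city name, e.g. 'Mumbai'" },
        units: {
          type: "string",
          enum: ["celsius", "fahrenheit"], // Claude must choose one of these
          description: "Temperature units. Defaults to celsius."
        }
      },
      required: ["city"] // units stays optional
    }
  }
];
```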
Step 2 — Send a message with tools
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
const response = await client.messages.create({
model: "claude-sonnet-4-6",
max_tokens: 1024,
tools: tools,
messages: [
{
role: "user",
content: "What's the weather like in Mumbai right now?"
}
]
});
console.log(response.stop_reason); // "tool_use" if Claude wants to call a tool
console.log(response.content); // array of content blocks
If Claude decides it needs the get_weather tool, stop_reason will be "tool_use" and content will contain a tool_use block:
{
"stop_reason": "tool_use",
"content": [
{
"type": "text",
"text": "I'll check the current weather in Mumbai for you."
},
{
"type": "tool_use",
"id": "toolu_01XYZ...",
"name": "get_weather",
"input": {
"city": "Mumbai"
}
}
]
}
Claude is saying: “I want to call get_weather with city: "Mumbai". You run it, give me the result.”
Step 3 — Run your function
Now you execute the actual function. This is just normal JavaScript — fetch an API, query a database, read a file, anything.
// Your actual implementation
async function get_weather({ city }) {
const geoRes = await fetch(
`https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`
);
const geoData = await geoRes.json();
if (!geoData.results?.length) {
return { error: `City not found: ${city}` };
}
const { latitude, longitude, name, country } = geoData.results[0];
const weatherRes = await fetch(
`https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current_weather=true&hourly=relativehumidity_2m`
);
const data = await weatherRes.json();
const current = data.current_weather;
const conditions = {
0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast",
45: "Foggy", 61: "Light rain", 63: "Moderate rain", 65: "Heavy rain",
80: "Rain showers", 95: "Thunderstorm"
};
return {
city: `${name}, ${country}`,
temperature: `${current.temperature}°C`,
condition: conditions[current.weathercode] ?? `Code ${current.weathercode}`,
humidity: `${data.hourly.relativehumidity_2m[0]}%` // first hourly sample, not the current hour
};
}
Step 4 — Send the result back to Claude
After running the function, you send the result back in a new message. The structure is specific — Claude needs both the tool_use block from its own response and your tool result:
// Find the tool_use block in Claude's response
const toolUseBlock = response.content.find(block => block.type === "tool_use");
// Run our function with the inputs Claude provided
const toolResult = await get_weather(toolUseBlock.input);
// Send Claude's response + our result back
const finalResponse = await client.messages.create({
model: "claude-sonnet-4-6",
max_tokens: 1024,
tools: tools,
messages: [
// Original user message
{ role: "user", content: "What's the weather like in Mumbai right now?" },
// Claude's tool_use response (must be included exactly)
{ role: "assistant", content: response.content },
// Our tool result
{
role: "user",
content: [
{
type: "tool_result",
tool_use_id: toolUseBlock.id, // must match the id from Claude's response
content: JSON.stringify(toolResult)
}
]
}
]
});
console.log(finalResponse.content[0].text);
// "Mumbai is currently experiencing partly cloudy skies at 31°C..."
Full working example
Here’s everything together in one file you can run:
// weather-agent.js
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
const tools = [
{
name: "get_weather",
description:
"Get current weather for a city. Use when the user asks about " +
"weather, temperature, rain, or what to wear.",
input_schema: {
type: "object",
properties: {
city: { type: "string", description: "City name" }
},
required: ["city"]
}
}
];
async function get_weather({ city }) {
const geo = await fetch(
`https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`
).then(r => r.json());
if (!geo.results?.length) return { error: "City not found" };
const { latitude, longitude, name, country } = geo.results[0];
const weather = await fetch(
`https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current_weather=true&hourly=relativehumidity_2m`
).then(r => r.json());
const codes = {
0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast",
61: "Light rain", 63: "Moderate rain", 65: "Heavy rain", 95: "Thunderstorm"
};
return {
city: `${name}, ${country}`,
temperature: `${weather.current_weather.temperature}°C`,
condition: codes[weather.current_weather.weathercode] ?? "Unknown",
humidity: `${weather.hourly.relativehumidity_2m[0]}%` // first hourly sample, not the current hour
};
}
// Tool dispatch — maps tool name to function
const toolFunctions = { get_weather };
async function chat(userMessage) {
const messages = [{ role: "user", content: userMessage }];
let response = await client.messages.create({
model: "claude-sonnet-4-6",
max_tokens: 1024,
tools,
messages
});
// Loop: keep handling tool calls until Claude gives a final text response
while (response.stop_reason === "tool_use") {
const toolUseBlock = response.content.find(b => b.type === "tool_use");
const fn = toolFunctions[toolUseBlock.name];
const result = fn ? await fn(toolUseBlock.input) : { error: "Unknown tool" };
messages.push(
{ role: "assistant", content: response.content },
{
role: "user",
content: [
{
type: "tool_result",
tool_use_id: toolUseBlock.id,
content: JSON.stringify(result)
}
]
}
);
response = await client.messages.create({
model: "claude-sonnet-4-6",
max_tokens: 1024,
tools,
messages
});
}
return response.content[0].text;
}
// Run it
const answer = await chat("Is it raining in Chennai right now?");
console.log(answer);
Run it. The file uses import and top-level await, so set "type": "module" in your package.json (or rename the file to weather-agent.mjs):
node weather-agent.js
Multiple tools — letting Claude choose
You can define multiple tools and let Claude pick the right one:
const tools = [
{
name: "get_weather",
description: "Get current weather for a city...",
input_schema: { /* ... */ }
},
{
name: "search_news",
description: "Search recent news articles on a topic. Use when the user asks about current events, news, or recent happenings.",
input_schema: {
type: "object",
properties: {
query: { type: "string", description: "Search query" },
max_results: { type: "number", description: "Max articles to return (default 5)" }
},
required: ["query"]
}
},
{
name: "calculate",
description: "Perform a mathematical calculation. Use for arithmetic, percentages, unit conversion, or any calculation that needs to be exact.",
input_schema: {
type: "object",
properties: {
expression: { type: "string", description: "The math expression to evaluate, e.g. '(100 * 1.075) ^ 10'" }
},
required: ["expression"]
}
}
];
Claude will read all three descriptions and pick whichever one fits the user’s message. If the user asks “what’s 15% of 4500?”, Claude calls calculate. If they ask “any news about the election?”, Claude calls search_news.
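None of those definitions come with implementations above, so here is a sketch of what calculate might look like. The whitelist-then-evaluate approach is only a demo; a production tool should use a proper expression parser instead:

```javascript
// Sketch of a calculate tool: whitelist the characters, then evaluate.
// Demo only — use a real expression parser in production.
function calculate({ expression }) {
  // allow digits, whitespace, parentheses, and basic operators only
  if (!/^[\d\s+\-*\/().%^]+$/.test(expression)) {
    return { error: "Unsupported characters in expression" };
  }
  // translate '^' (as used in the tool description) to JS exponentiation
  const js = expression.replace(/\^/g, "**");
  try {
    return { result: Function(`"use strict"; return (${js});`)() };
  } catch (e) {
    return { error: `Could not evaluate: ${e.message}` };
  }
}

// Dispatch table pairing each tool name with its implementation
const toolFunctions = { calculate /*, get_weather, search_news */ };
```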
Common mistakes
Sending the wrong message history
When you send the tool result, you must include the full conversation history — including Claude’s tool_use response. If you skip that, Claude gets confused because it doesn’t remember making the tool call.
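Concretely, the difference looks like this (the id and the result content are placeholders):

```javascript
// WRONG: no assistant turn, so the tool_result refers to a call Claude never made
const badHistory = [
  { role: "user", content: "What's the weather like in Mumbai right now?" },
  {
    role: "user",
    content: [{ type: "tool_result", tool_use_id: "toolu_01XYZ", content: "{...}" }]
  }
];

// RIGHT: Claude's own tool_use turn sits between the question and the result
const goodHistory = [
  { role: "user", content: "What's the weather like in Mumbai right now?" },
  {
    role: "assistant",
    content: [
      { type: "text", text: "I'll check the weather." },
      { type: "tool_use", id: "toolu_01XYZ", name: "get_weather", input: { city: "Mumbai" } }
    ]
  },
  {
    role: "user",
    content: [{ type: "tool_result", tool_use_id: "toolu_01XYZ", content: "{...}" }]
  }
];
```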
Not handling stop_reason: "end_turn"
Claude might answer without using any tool (e.g. “What’s 2+2?” — it doesn’t need calculate for that). Always check stop_reason before assuming a tool was called.
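A small guard covers both cases (extractText is a hypothetical helper name, not part of the SDK):

```javascript
// Pull the final text out of a response, or signal that a tool call is pending
function extractText(response) {
  if (response.stop_reason === "tool_use") {
    return null; // caller should run the tool loop instead
  }
  // end_turn: there may be more than one text block, so join them all
  return response.content
    .filter(block => block.type === "text")
    .map(block => block.text)
    .join("");
}
```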
Forgetting to stringify the result
The content field in a tool_result must be a string. Pass JSON.stringify(result) rather than the raw object; otherwise Claude sees "[object Object]" instead of your structured data.
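A tiny helper makes the stringify step hard to forget (toToolResult is a hypothetical name):

```javascript
// Wrap a plain JS result object in the tool_result shape Claude expects
function toToolResult(toolUseId, result) {
  return {
    type: "tool_result",
    tool_use_id: toolUseId,
    // Interpolating an object into a string gives "[object Object]";
    // JSON.stringify keeps the structure readable for Claude
    content: JSON.stringify(result)
  };
}
```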
What’s next
Understand the concept: What Are Agent Skills? AI Tools Explained Simply
Handle failures gracefully: Handling Errors in Agent Skills: Retries and Fallbacks
Test your tools before deploying: Testing and Debugging Agent Skills Before You Deploy
Unified SDK for Claude + OpenAI + Gemini: Vercel AI SDK Tools: One API for Claude and OpenAI Skills
OpenAI version: Agent Skills with the OpenAI API
Related Reading
Vercel AI SDK Tools: One API for Claude and OpenAI Skills
Vercel AI SDK's unified tool interface works with Claude, OpenAI, and Gemini. Write your skill once and switch AI providers without rewriting the agent loop.
Chaining Agent Skills: Research, Summarize, and Save
Build a skill chain where an agent searches the web, summarizes findings, and saves results to a file — all from a single prompt. Full Node.js walkthrough.
Agent Skills with Google Gemini: Function Calling Guide
Complete guide to Gemini function calling — define tools, handle function_call responses, return results, and compare syntax with Claude and OpenAI. Node.js.