I had a working agent with Claude tools. My client wanted to switch to GPT-4o. I rewrote the dispatch loop — different tool definition format, different stop signal, different result structure, different argument parsing.
Two weeks later they wanted to test Gemini 2.0. I rewrote it again.
The third time I said: enough. The Vercel AI SDK exists exactly for this — write your tools once, swap the AI provider with a single line. Here’s how it works.
## What the Vercel AI SDK actually is
The Vercel AI SDK is an open-source TypeScript library for building AI-powered applications. Despite the name, it’s not tied to Vercel’s hosting platform. It runs in any Node.js process — scripts, servers, CLI tools, whatever you’re building.
The key thing it gives you: a unified interface that works identically with Claude, OpenAI, Gemini, Mistral, and others. You write a tool once. You call generateText() once. Switching providers is one line.
```bash
npm install ai @ai-sdk/anthropic @ai-sdk/openai
```
Provider adapters are separate packages — install only what you need.
## Defining a tool: the `tool()` helper
Instead of writing raw JSON Schema like in the Claude and OpenAI posts, the Vercel AI SDK uses a tool() helper with Zod schemas for parameter validation.
```ts
import { tool } from "ai";
import { z } from "zod";

const getWeatherTool = tool({
  description:
    "Get current weather for a city. Use when the user asks about " +
    "weather, temperature, rain, or what to wear.",
  parameters: z.object({
    city: z.string().describe("The city name, e.g. 'Mumbai' or 'London'")
  }),
  execute: async ({ city }) => {
    // Your actual implementation
    const geo = await fetch(
      `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`
    ).then(r => r.json());
    if (!geo.results?.length) return { error: `City not found: ${city}` };
    const { latitude, longitude, name, country } = geo.results[0];
    const weather = await fetch(
      `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current_weather=true`
    ).then(r => r.json());
    const codes: Record<number, string> = {
      0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast",
      61: "Light rain", 63: "Moderate rain", 65: "Heavy rain", 95: "Thunderstorm"
    };
    return {
      city: `${name}, ${country}`,
      temperature: `${weather.current_weather.temperature}°C`,
      condition: codes[weather.current_weather.weathercode] ?? "Unknown"
    };
  }
});
```
Three things to notice:

- `parameters` is a Zod schema, not raw JSON Schema — you get TypeScript type inference and runtime validation for free
- The `execute` function lives inside the tool definition — no separate dispatch map
- The SDK calls `execute` automatically — you don't write the tool dispatch loop at all
## Calling `generateText()` — the loop you never write again
```ts
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const result = await generateText({
  model: anthropic("claude-sonnet-4-6"),
  tools: { get_weather: getWeatherTool },
  maxSteps: 5, // allows up to 5 tool call/response rounds
  messages: [
    { role: "user", content: "Is it raining in Mumbai?" }
  ]
});

console.log(result.text);
// "Mumbai is currently experiencing light rain at 27°C."
```
Setting `maxSteps: 5` tells the SDK to keep executing tool calls and returning their results to the model until it produces a final text response (or hits 5 rounds). The entire tool_use → execute → return loop from the Claude post is handled internally.
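For intuition, here is a rough schematic of the loop the SDK runs for you. This is an illustrative sketch with the provider call stubbed out as `callModel`; the type and function names here are ours, not the SDK's actual internals.

```ts
// Schematic of the loop generateText() runs internally when maxSteps > 1.
type ToolCall = { toolName: string; args: Record<string, unknown> };
type ModelReply = { text?: string; toolCalls: ToolCall[] };
type ToolMap = Record<string, (args: any) => Promise<unknown>>;

async function runToolLoop(
  callModel: (history: unknown[]) => Promise<ModelReply>,
  tools: ToolMap,
  maxSteps: number
): Promise<string> {
  const history: unknown[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(history);
    // No tool calls means the model produced its final answer
    if (reply.toolCalls.length === 0) return reply.text ?? "";
    // Otherwise: execute each requested tool and feed the result back
    for (const call of reply.toolCalls) {
      const result = await tools[call.toolName](call.args);
      history.push({ role: "tool", toolName: call.toolName, result });
    }
  }
  throw new Error("maxSteps exceeded without a final text answer");
}

// Stubbed model: asks for the weather tool once, then answers
let turn = 0;
const answer = await runToolLoop(
  async () =>
    turn++ === 0
      ? { toolCalls: [{ toolName: "get_weather", args: { city: "Mumbai" } }] }
      : { text: "Light rain in Mumbai.", toolCalls: [] },
  { get_weather: async () => ({ condition: "Light rain" }) },
  5
);
console.log(answer); // "Light rain in Mumbai."
```

This is the bookkeeping you no longer write: accumulating history, checking whether the reply contains tool calls, dispatching, and looping.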
You wrote the dispatch loop manually across Posts 1–6. With the Vercel AI SDK, it disappears.
## Switching providers: one line
```ts
import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";
import { google } from "@ai-sdk/google"; // npm install @ai-sdk/google

// Claude
const result = await generateText({
  model: anthropic("claude-sonnet-4-6"),
  tools: { get_weather: getWeatherTool },
  maxSteps: 5,
  messages: [{ role: "user", content: "Weather in Delhi?" }]
});

// GPT-4o — change one line
const result = await generateText({
  model: openai("gpt-4o"),
  tools: { get_weather: getWeatherTool },
  maxSteps: 5,
  messages: [{ role: "user", content: "Weather in Delhi?" }]
});

// Gemini — change one line
const result = await generateText({
  model: google("gemini-2.0-flash"),
  tools: { get_weather: getWeatherTool },
  maxSteps: 5,
  messages: [{ role: "user", content: "Weather in Delhi?" }]
});
```
The tool definition, the execute function, and your business logic don’t change at all.
## Zod schemas — why they're better than raw JSON Schema
Every tool in Posts 1–6 used a JSON Schema object. The Vercel AI SDK uses Zod. Here’s why that’s better:
**TypeScript inference:**
```ts
// Raw JSON Schema — no type safety
const result = await fn(toolBlock.input); // input is typed as `any`

// Zod schema — fully typed
execute: async ({ city }) => {
  // city is typed as `string` automatically
}
```
**Runtime validation:**
If the model passes city: 42 (a number instead of a string), Zod throws a validation error before your execute function runs. With raw JSON Schema, you’d get a runtime error inside your function — harder to debug.
**Inline documentation:**
```ts
z.object({
  city: z.string().describe("The city name, e.g. 'Mumbai'"),
  unit: z.enum(["celsius", "fahrenheit"]).default("celsius").describe("Temperature unit")
})
```
The .describe() calls feed directly into the tool definition sent to the model — no separate description field needed.
## `streamText()` — real-time responses with tools
For UI applications where you want text to stream as it’s generated:
```ts
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const stream = streamText({
  model: anthropic("claude-sonnet-4-6"),
  tools: { get_weather: getWeatherTool },
  maxSteps: 5,
  messages: [{ role: "user", content: "Weather in Mumbai?" }]
});

// Stream text chunks as they arrive
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```
Tool calls happen transparently during the stream. The model fetches weather mid-stream and uses the result in the continuation — all without you handling any of it.
## Before/after: the research chain
In the chaining post, the research script was ~60 lines of loop management. Here’s the same chain with the Vercel AI SDK:
**Before (manual loop):**

```ts
// ~60 lines managing messages array, tool_use blocks,
// tool result format, stop_reason checking, etc.
```
**After (Vercel AI SDK):**
```ts
import { generateText, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import { web_search } from "./web-search.js";
import { write_file } from "./file-skills.js";

const result = await generateText({
  model: anthropic("claude-sonnet-4-6"),
  maxSteps: 8,
  tools: {
    web_search: tool({
      description: "Search the web for current information on a topic.",
      parameters: z.object({ query: z.string(), maxResults: z.number().default(5) }),
      execute: ({ query, maxResults }) => web_search({ query, maxResults })
    }),
    summarize_text: tool({
      description: "Summarize a long text into key bullet points.",
      parameters: z.object({ text: z.string(), focus: z.string().optional() }),
      execute: async ({ text, focus }) => {
        // inline implementation or call your existing function
        return { summary: `Key points from: ${text.slice(0, 100)}...` };
      }
    }),
    write_file: tool({
      description: "Save content to a file.",
      parameters: z.object({
        path: z.string(),
        content: z.string(),
        mode: z.enum(["create", "append", "overwrite"]).default("create")
      }),
      execute: ({ path, content, mode }) => write_file({ path, content, mode })
    })
  },
  messages: [{ role: "user", content: `Research "TypeScript 5.5 features" and save a summary to notes/ts55.md` }]
});

console.log(result.text);
```
The loop management disappeared. The tool definitions are co-located with their implementations. The provider is swappable.
## Accessing tool call details
If you need to see what tools were called (for logging, debugging, or building UI feedback):
```ts
const result = await generateText({ ... });

// All steps in the conversation
for (const step of result.steps) {
  console.log("Step type:", step.stepType); // "initial", "tool-result", "continue"
  for (const toolCall of step.toolCalls ?? []) {
    console.log("Tool called:", toolCall.toolName, toolCall.args);
  }
  for (const toolResult of step.toolResults ?? []) {
    console.log("Tool result:", toolResult.toolName, toolResult.result);
  }
}
```
## Limitations: when to drop to the raw SDK
The Vercel AI SDK abstracts away provider-specific features. A few situations where you need to go back to the raw provider SDK:
| Situation | Why the SDK doesn't cover it | Workaround |
|---|---|---|
| Claude's `tool_choice: { type: "any" }` | Provider-specific options | Use raw `@anthropic-ai/sdk` |
| OpenAI structured outputs (JSON mode with schema) | OpenAI-specific feature | Use raw `openai` SDK |
| Gemini multimodal tool inputs (image data) | Gemini-specific feature | Use raw `@google/generative-ai` |
| Fine-grained token usage per tool call | SDK aggregates usage | Check `result.usage` for totals |
For most use cases — chat agents, tool chains, multi-step research — the Vercel AI SDK covers everything. Drop to the raw SDK only when you need a provider-specific feature that isn’t abstracted.
## What's next
- Complete the platform set with Gemini: Agent Skills with Google Gemini: Function Calling Guide
- Compare without the SDK abstraction: Agent Skills with the Claude API and Agent Skills with the OpenAI API
- Framework vs SDK — where does the Vercel AI SDK fit: LangGraph vs CrewAI vs Claude Agent Teams
## Related Reading
**Agent Skills with Google Gemini: Function Calling Guide**
Complete guide to Gemini function calling — define tools, handle function_call responses, return results, and compare syntax with Claude and OpenAI. Node.js.

**Chaining Agent Skills: Research, Summarize, and Save**
Build a skill chain where an agent searches the web, summarizes findings, and saves results to a file — all from a single prompt. Full Node.js walkthrough.

**Agent Skills with the OpenAI API: Function Calling Explained**
How to use OpenAI function calling with gpt-4o — define functions, handle tool_calls in responses, execute your code, and return results. Full Node.js working example.