Switching AI providers is usually a nightmare of rewriting dispatch loops and reformatting JSON schemas. You build an agent for Claude, and then someone decides GPT-4o is cheaper or Gemini is faster. Suddenly, you’re stuck translating Anthropic’s tool_use into OpenAI’s function_call syntax. I’ve been there, and it’s a waste of time. The Vercel AI SDK fixes this by giving you a unified interface that works with every major model. You write your skill once using Zod, and switching from Claude to GPT-4o becomes a one-line change. It’s the only way to build agents without losing your mind.
What is the Vercel AI SDK?
Don’t let the name fool you. This isn’t just for Vercel users. It’s an open-source library that runs anywhere Node.js lives—your laptop, a VPS, or a serverless function. It acts as a middleman between your code and the AI. Instead of learning five different APIs for five different models, you learn one. You write your logic once, and it just works everywhere.
The Scenario: You spent all night building a research agent using OpenAI. Then, your boss reads a tweet about Claude being better for coding and tells you to “just switch it over.” Without this SDK, you’re looking at four hours of refactoring. With it, you’re done in thirty seconds.
How do I define a tool with the tool helper?
Forget raw JSON objects. The SDK uses a tool() helper that integrates with Zod. This means you get real TypeScript types and automatic validation. If the model tries to pass a string where you expected a number, the SDK catches it before your code even runs.
import { tool } from "ai";
import { z } from "zod";

const getWeatherTool = tool({
  description: "Get current weather for a city.",
  parameters: z.object({
    city: z.string().describe("The city name, e.g. 'Mumbai'")
  }),
  execute: async ({ city }) => {
    // Your actual API call logic goes here
    return { temperature: "28°C", condition: "Sunny" };
  }
});
The best part is that the execute function is right there in the definition. No more giant switch statements to figure out which function to call.
The Scenario: You’re building a tool that expects a zip_code. The model gets lazy and sends 90210 as a raw number instead of a string. Your database expects a string. Zod catches this instantly, saving you from a weird “undefined” error that’s impossible to track down in the logs.
Can I really switch AI providers in one line?
Yes. The SDK abstracts the messy parts of the AI loop. You don’t have to check stop_reason or manually feed tool results back into the conversation. You just pass your tools to generateText and set maxSteps.
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const result = await generateText({
  model: anthropic("claude-3-5-sonnet-latest"),
  tools: { get_weather: getWeatherTool },
  maxSteps: 5,
  messages: [{ role: "user", content: "Is it raining?" }]
});
To switch to OpenAI, you swap the import to @ai-sdk/openai and change anthropic(...) to openai(...). Everything else, the messages, the tools, the loop, stays exactly the same. No more rewriting the message history logic or the tool response format.
The Scenario: You’re running out of credits on your Anthropic account in the middle of a big test run. You swap to OpenAI in the config file, hit restart, and keep going like nothing happened. It’s a lifesaver for your credit card.
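Concretely, the entire migration is this two-line diff against the generateText example above (assuming @ai-sdk/openai is installed; the model IDs are examples and change over time):

```typescript
// was: import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";

// was: model: anthropic("claude-3-5-sonnet-latest"),
model: openai("gpt-4o"),
```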
Why is Zod better than raw JSON schemas?
Raw JSON schema is a pain to write and even harder to maintain. Zod is just JavaScript. It’s readable, it’s concise, and it gives you autocomplete in your IDE. Plus, the .describe() method is what the AI actually reads. It’s a win-win for both the developer and the model.
- Type Safety: Your IDE knows exactly what city is.
- Validation: It rejects bad data before it hits your production database.
- Readability: It’s way shorter than a 20-line JSON object.
The Scenario: You’re collaborating on a project and your teammate adds a new parameter to a tool. Because it’s Zod, your IDE immediately shows you a red squiggly line everywhere you missed the change. No more hunting through documentation to see why the API is failing.
How do I handle real-time streaming with tools?
If you’re building a chat app, you don’t want the user waiting five seconds for a tool to finish. The streamText helper handles everything. It streams the text, pauses to run the tool, and then continues streaming the final answer. All of this happens without you writing a single line of streaming logic.
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const stream = streamText({
  model: anthropic("claude-3-5-sonnet-latest"),
  tools: { get_weather: getWeatherTool },
  maxSteps: 5,
  messages: [{ role: "user", content: "What's the vibe in Mumbai?" }]
});
It feels like magic. The model decides it needs the weather, fetches it, and incorporates it into the response while the user is still reading the first sentence.
The Scenario: You’re building a mobile app for users on spotty cellular connections. Streaming makes the app feel “alive” even when the AI is doing heavy lifting in the background. Without it, your users think the app is frozen and close it.
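On the consuming side, the pattern is a for await loop over the result’s textStream. Here’s a sketch with a simulated stream standing in for the SDK, so it runs without an API key (the fake chunks are made up; with the real SDK you’d iterate stream.textStream instead):

```typescript
// A stand-in async iterable that mimics the SDK's textStream
async function* fakeTextStream() {
  for (const chunk of ["It's ", "28°C ", "and sunny ", "in Mumbai."]) {
    yield chunk;
  }
}

// With the real SDK this would be: for await (const chunk of stream.textStream)
let answer = "";
for await (const chunk of fakeTextStream()) {
  answer += chunk;              // append to the UI as each piece arrives
  process.stdout.write(chunk);  // the user sees text immediately
}
console.log("\nFull answer:", answer);
```

The user starts reading after the first chunk instead of staring at a spinner while the tool call finishes.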
What to Read Next
- Complete the set with Gemini: Agent Skills with Google Gemini: Function Calling Guide
- Do it the hard way (Claude): Agent Skills with the Claude API
- Do it the hard way (OpenAI): Agent Skills with the OpenAI API