The npx skills CLI is great for personal agents like Claude Code or Cursor. But what if you are building a custom AI application? You don’t want to copy-paste your company’s guidelines into your backend codebase.
You want to dynamically load a Vercel Agent Skill directly into the Vercel AI SDK.
This turns your standalone skills into a shared brain for all your applications. If the skill updates on GitHub, your application gets smarter automatically on the next pull.
Here is the no-nonsense guide to wiring it up.
Method 1: The Direct Injection (System Prompt)
The simplest way to use a skill is to read its SKILL.md file and append it to your system prompt. This is ideal for static, universal rules (like coding standards or tone-of-voice guidelines) that apply to every interaction.
```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import fs from 'fs';

// Load the skill instructions from your local filesystem
const reactSkill = fs.readFileSync('./skills/react-best-practices/SKILL.md', 'utf-8');

export async function askAgent(userPrompt: string) {
  const { text } = await generateText({
    model: openai('gpt-4o'),
    system: `You are a senior frontend engineer. You must strictly follow these guidelines:

${reactSkill}`,
    prompt: userPrompt,
  });

  return text;
}
```
The Scenario: You’re building an internal “Code Review Bot” for PRs. You have a SKILL.md file that defines your strict accessibility (a11y) rules. By injecting the file directly into the system prompt, the bot evaluates every single line of code against your specific standards, not just OpenAI’s generic advice. When the accessibility team updates the rules, they just update the Markdown file.
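Since the whole point is that the Markdown file is the source of truth, you may prefer to pull SKILL.md straight from GitHub at request time instead of from the local filesystem. Here is a minimal sketch with a short-lived in-memory cache; the `acme/skills` repo and file path are hypothetical placeholders, and it assumes Node 18+ (global `fetch`):

```typescript
// Sketch: load a skill from a GitHub repo at request time, with a
// short-lived in-memory cache so you don't hit GitHub on every call.
const CACHE_TTL_MS = 5 * 60 * 1000;
const cache = new Map<string, { body: string; fetchedAt: number }>();

export function rawGitHubUrl(repo: string, branch: string, filePath: string): string {
  return `https://raw.githubusercontent.com/${repo}/${branch}/${filePath}`;
}

export async function loadSkill(repo: string, filePath: string, branch = 'main'): Promise<string> {
  const url = rawGitHubUrl(repo, branch, filePath);
  const hit = cache.get(url);
  if (hit && Date.now() - hit.fetchedAt < CACHE_TTL_MS) return hit.body;

  const res = await fetch(url);
  if (!res.ok) throw new Error(`Failed to load skill (${res.status}): ${url}`);

  const body = await res.text();
  cache.set(url, { body, fetchedAt: Date.now() });
  return body;
}

// Usage (hypothetical repo):
// const reactSkill = await loadSkill('acme/skills', 'react-a11y/SKILL.md');
```

With this in place, the accessibility team merges a PR to the skills repo and every deployed agent picks up the new rules within the cache TTL, with no redeploy.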
Method 2: The Agentic Approach (Skills as Tools)
If you have a massive library of 50 different skills, injecting all of them into the system prompt will overflow the context window and cause the model to hallucinate.
Instead, you use the Vercel AI SDK Tools to let the agent decide when it needs to read a skill. You give it a “Registry Tool.”
```typescript
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

export async function smartAgent(userPrompt: string) {
  const { text } = await generateText({
    model: openai('gpt-4o'),
    tools: {
      getSkillInstructions: tool({
        description: 'Get specific instructions for a domain-specific task.',
        parameters: z.object({
          skillName: z.enum(['db-opt', 'react-a11y', 'auth-flow']).describe('The name of the skill'),
        }),
        execute: async ({ skillName }) => {
          // Fetch the SKILL.md content from your filesystem or a remote URL
          return await fetchSkillContent(skillName);
        },
      }),
    },
    // Allow a follow-up generation step after the tool result comes back;
    // without this, generateText stops right after the tool call.
    maxSteps: 2,
    prompt: userPrompt,
  });

  return text;
}
```
The Scenario: A user asks your generic coding assistant: “How do I fix this slow Prisma query?” Instead of guessing, the agent realizes it has a `getSkillInstructions` tool. It calls the tool with `skillName: "db-opt"`. The tool returns your custom Prisma optimization guide. The agent then reads the guide and tells the user exactly how to fix the N+1 problem according to your company standards. It’s efficient and highly targeted.
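The `fetchSkillContent` helper in the tool’s `execute` function is left to you. Here is one possible sketch that maps the enum values to SKILL.md files on disk via an explicit registry (the directory names are hypothetical), so the model can never request an arbitrary path:

```typescript
import fs from 'node:fs/promises';
import path from 'node:path';

// Hypothetical registry: maps the tool's enum values to SKILL.md locations.
const SKILL_PATHS: Record<string, string> = {
  'db-opt': 'skills/db-optimization/SKILL.md',
  'react-a11y': 'skills/react-a11y/SKILL.md',
  'auth-flow': 'skills/auth-flow/SKILL.md',
};

export function resolveSkillPath(skillName: string, root = process.cwd()): string {
  const rel = SKILL_PATHS[skillName];
  if (!rel) throw new Error(`Unknown skill: ${skillName}`);
  return path.join(root, rel);
}

export async function fetchSkillContent(skillName: string): Promise<string> {
  return fs.readFile(resolveSkillPath(skillName), 'utf-8');
}
```

An explicit allowlist like this also doubles as the place to swap the filesystem read for the GitHub fetch from Method 1.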
Executable Skills (OpenAPI)
The Vercel ecosystem also supports skills that do things, not just instruct things. If a skill defines an OpenAPI specification, you can convert that spec into Vercel AI SDK tools using community packages that map OpenAPI operations onto tool definitions.
This allows you to publish a skill that says “Here is how you interact with our internal billing API” and have any agent instantly understand how to call your endpoints without manual mapping.
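To make the conversion step concrete, here is a heavily simplified sketch of the idea: walk an OpenAPI document and flatten each path/method operation into a tool-shaped descriptor (name, description, parameter schema). This is an illustration, not a real converter; production packages also handle `$ref` resolution, request bodies, and auth.

```typescript
type OpenAPISpec = {
  paths: Record<string, Record<string, {
    operationId?: string;
    summary?: string;
    parameters?: { name: string; schema?: { type?: string }; required?: boolean }[];
  }>>;
};

export type ToolDescriptor = {
  name: string;
  description: string;
  parameters: Record<string, { type: string; required: boolean }>;
};

// Flatten each path+method operation into a descriptor an agent can
// expose as a tool (e.g. by wrapping it with the AI SDK's tool() helper).
export function specToToolDescriptors(spec: OpenAPISpec): ToolDescriptor[] {
  const tools: ToolDescriptor[] = [];
  for (const [route, methods] of Object.entries(spec.paths)) {
    for (const [method, op] of Object.entries(methods)) {
      const parameters: ToolDescriptor['parameters'] = {};
      for (const p of op.parameters ?? []) {
        parameters[p.name] = { type: p.schema?.type ?? 'string', required: p.required ?? false };
      }
      tools.push({
        // Fall back to a synthetic name when the spec omits operationId
        name: op.operationId ?? `${method}_${route.replace(/\W+/g, '_')}`,
        description: op.summary ?? `${method.toUpperCase()} ${route}`,
        parameters,
      });
    }
  }
  return tools;
}
```

Run against a billing API’s spec, each descriptor becomes one callable tool, which is exactly the “instantly understand how to call your endpoints” behavior described above.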
The Final Verdict
Don’t write complex, monolithic infrastructure managers to handle tools and context. Write simple Markdown files. Publish them as Vercel Skills. Inject them dynamically. It keeps your codebase clean and your agents tightly focused.
Found this useful? Check out the beginning of this series: Introduction to Vercel Agent Skills.