I had an AI assistant that could answer any question I typed. It was impressive for about a week. Then I asked it to check my calendar and book a meeting. It said: “I can’t do that — I don’t have access to your calendar.”
That’s the wall every AI user hits eventually. The model knows a lot. But knowing and doing are different things.
Agent skills — sometimes called tools, functions, or actions — are how we give AI the ability to do things. Not just generate text about a task, but actually execute it.
The mental model
Think of an AI agent like a very smart new employee on their first day.
They have enormous knowledge. They can reason about any problem you give them. But they can’t do anything in your company yet — they don’t have accounts, access cards, or permissions.
Skills are the access cards.
Each skill gives the agent permission and instructions to interact with one specific system: your calendar, your database, a weather API, a file system, a search engine. Stack enough skills together and you have an agent that can actually get work done.
What a skill actually is
At its core, a skill is three things:
1. A name and description — so the AI knows what the skill does and when to use it.
2. A definition of inputs — what information the AI needs to provide when calling the skill (e.g., a search query, a date, a file path).
3. The actual code — a function that runs when the skill is called, talks to an API or service, and returns a result.
Here’s the simplest possible example. A skill that searches Wikipedia:
// The skill definition (what the AI sees)
{
name: "search_wikipedia",
description: "Search Wikipedia and return a summary of the article.",
parameters: {
query: {
type: "string",
description: "The search term"
}
}
}
// The actual code that runs
async function search_wikipedia({ query }) {
  // Encode the query so multi-word titles form a valid URL
  const response = await fetch(
    `https://en.wikipedia.org/api/rest_v1/page/summary/${encodeURIComponent(query)}`
  );
  if (!response.ok) {
    return `No Wikipedia article found for "${query}".`;
  }
  const data = await response.json();
  return data.extract;
}
When you ask the AI “what is the James Webb Space Telescope?”, it sees the skill description and decides: “I should use search_wikipedia here.” It calls the skill with { query: "James Webb Space Telescope" }, gets back a real Wikipedia summary, and uses that in its response.
The AI didn’t make up the information. It fetched it.
Skills vs. prompts — what’s the difference?
A prompt changes what the AI says. A skill changes what the AI can do.
| | Prompt | Skill |
|---|---|---|
| What it changes | The AI’s writing style, focus, or knowledge framing | The AI’s ability to take real actions |
| Where it lives | In the system message or user message | As a separate function definition |
| Side effects | None — text only | Can read/write data, call APIs, modify files |
| Persistence | Per-conversation | Persistent, reusable across conversations |
A well-written prompt makes an AI sound more expert. A well-defined skill makes an AI actually do something.
The four types of skills you’ll encounter
1. Data retrieval skills
Fetch real-world information: weather, stock prices, news, web search, database queries. The AI uses these to answer questions with current facts instead of relying on its training data.
get_weather(city: "Mumbai")
→ returns: { temp: 32, condition: "Partly cloudy", humidity: 78 }
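A data retrieval skill like this is mostly a thin wrapper: validate the input, call the service, shape the output. Here's a minimal sketch of what get_weather might look like; the endpoint URL and response shape are made up for illustration, and the fetch function is injectable so the handler can be tested without a live service:

```javascript
// Hedged sketch of a data retrieval skill handler. The URL and response
// fields are assumptions; a real skill would call an actual weather API.
async function get_weather({ city }, fetchFn = fetch) {
  if (!city || typeof city !== "string") {
    // Return an error the model can read, so it can ask the user for a city
    return { error: "Missing required parameter: city" };
  }
  const url = `https://api.example-weather.test/current?city=${encodeURIComponent(city)}`;
  const response = await fetchFn(url);
  if (!response.ok) {
    return { error: `Weather service returned ${response.status}` };
  }
  const data = await response.json();
  // Return only the fields the model needs, not the whole raw payload
  return { temp: data.temp, condition: data.condition, humidity: data.humidity };
}
```

Returning errors as data (rather than throwing) is a common choice here: the model can read `{ error: ... }` and recover gracefully in the conversation.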
2. Action skills
Perform operations: send an email, create a calendar event, write to a file, post to Slack. These are the most powerful and require the most care — they have real-world side effects.
send_email(to: "[email protected]", subject: "Weekly report", body: "...")
→ returns: { sent: true, messageId: "..." }
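Because action skills have side effects, a common pattern is to gate them behind an explicit confirmation flag. This is a sketch, not any particular email library: the deliver function stands in for whatever API call actually sends the message.

```javascript
// Hedged sketch: an action skill with a confirmation gate. deliver() is a
// placeholder for a real email API call.
async function send_email({ to, subject, body, confirm = false }, deliver) {
  if (!to || !subject) {
    return { sent: false, error: "to and subject are required" };
  }
  if (!confirm) {
    // Nothing is sent until confirm is explicitly true; the model can show
    // this preview to the user and ask before committing.
    return { sent: false, preview: { to, subject, body } };
  }
  const messageId = await deliver({ to, subject, body });
  return { sent: true, messageId };
}
```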
3. Computation skills
Run code, do math, process data. Useful when the AI needs to perform calculations it can’t reliably do in its head (large numbers, date math, data transformation).
calculate_compound_interest(principal: 100000, rate: 7.5, years: 10)
→ returns: { amount: 206103.16, interest_earned: 106103.16 }
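A computation skill is often just a pure function. Here's a sketch of the example above, assuming the rate is an annual percentage compounded yearly (the standard formula A = P(1 + r/100)^t):

```javascript
// Hedged sketch: compound interest, assuming annual compounding and a
// percentage rate. Results are rounded to two decimal places.
function calculate_compound_interest({ principal, rate, years }) {
  const amount = principal * Math.pow(1 + rate / 100, years);
  return {
    amount: Math.round(amount * 100) / 100,
    interest_earned: Math.round((amount - principal) * 100) / 100,
  };
}
```

This is exactly the kind of arithmetic a model should delegate: the code gets the cents right every time, where mental math on exponentials often drifts.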
4. Memory skills
Read from or write to a persistent store — so the agent remembers things between conversations. This is how agents like OpenClaw maintain context over weeks and months.
remember(key: "user_preference_language", value: "Python")
recall(key: "user_preference_language")
→ returns: "Python"
How the AI decides which skill to use
This is the part that confuses most people. The AI doesn’t randomly call skills — it reasons about which one to use based on:
- The conversation context — what you’re asking for
- The skill’s name and description — this is critical; bad descriptions lead to wrong skill calls
- Whether it has enough information — if a skill needs a city parameter and you haven’t mentioned one, it’ll ask
The model reads your skill descriptions the same way it reads your prompts. Good skill descriptions are clear, specific, and say when to use the skill — not just what it does.
❌ Bad: "Gets weather"
✅ Good: "Get current weather conditions and temperature for a city.
Use this when the user asks about weather, temperature,
or whether to bring an umbrella."
The loop: how a skill-enabled conversation works
Here’s what happens behind the scenes when you chat with a skill-enabled AI:
You: "Is it going to rain in Delhi tomorrow?"
AI: [sees get_weather skill available]
[decides to call: get_weather(city: "Delhi", date: "tomorrow")]
System: [runs get_weather function]
[returns: { forecast: "Heavy rain", temp: 28 }]
AI: [reads the result]
"Yes, Delhi is expecting heavy rain tomorrow with a high of 28°C.
You'll want an umbrella."
The user never sees the skill call. It just feels like the AI knows the answer. But under the hood, it fetched real data.
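The "System" step in that loop is just a dispatcher: look up the function the model asked for, run it, and hand the result back. This sketch uses an illustrative registry with a stubbed get_weather; no particular provider's wire format is implied:

```javascript
// Hedged sketch of the dispatch step in a skill-enabled agent loop.
// The skills registry and call shape are illustrative only.
const skills = {
  // Stubbed for the example; a real skill would call a weather API
  get_weather: async ({ city }) => ({ forecast: "Heavy rain", temp: 28 }),
};

async function runSkillCall(call) {
  const fn = skills[call.name];
  if (!fn) {
    // Unknown skill: return a readable error instead of crashing, so the
    // model can recover in the conversation
    return { error: `Unknown skill: ${call.name}` };
  }
  try {
    return await fn(call.arguments);
  } catch (err) {
    // Skill failures also go back to the model as data
    return { error: String(err.message || err) };
  }
}
```

The result of runSkillCall is appended to the conversation, the model reads it, and only then does it write the reply the user actually sees.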
Skills in different AI platforms
Every major AI platform supports skills — they just call them different things:
| Platform | What they call it |
|---|---|
| OpenClaw | Skills (SKILL.md files) |
| Claude (Anthropic API) | Tools (tool_use) |
| OpenAI API | Function calling |
| Vercel AI SDK | Tools |
| LangChain | Tools |
| Google Gemini | Function calling |
The concept is identical everywhere. The syntax and format vary. We’ll cover each in separate guides.
When should you add a skill?
Add a skill when:
- The AI needs information that changes over time (weather, prices, schedules)
- The AI needs to take a real action (send something, write something, call something)
- The AI needs to access your own data (databases, files, internal APIs)
- The AI’s answer depends on computation that needs to be exact
Don’t add a skill when:
- A good prompt is enough (writing, reasoning, analysis tasks)
- The skill would expose more risk than value (destructive operations without confirmation)
- You’re adding it “just in case” — unused skills add noise to the system prompt and slow down reasoning
What’s next
This was the concept. The next guides get practical:
Platform guides (pick your provider):
OpenClaw skills: Build Your First Agent Skill for OpenClaw — Using SKILL.md to add custom tools to your OpenClaw agent
Claude tools: Agent Skills with the Claude API — tool_use blocks, input schemas, and handling tool results in Node.js
OpenAI functions: Agent Skills with the OpenAI API — Function calling with gpt-4o from scratch
Gemini functions: Agent Skills with Google Gemini: Function Calling Guide — Gemini’s function declarations and chat session model
All providers in one SDK: Vercel AI SDK Tools: One API for Claude and OpenAI Skills — Switch providers without rewriting your tools
Practical skills:
Error handling: Handling Errors in Agent Skills: Retries and Fallbacks — Retries, fallbacks, and what the model sees when things go wrong
Testing: Testing and Debugging Agent Skills Before You Deploy — Unit testing, mocking AI calls, debugging bad descriptions
Memory: Agent Skills with Memory: Persisting State Between Chats — JSON and SQLite memory stores your agent builds over time
File access: File System Skills: Let Your Agent Read and Write Files — Safe sandboxed file access with path validation and size limits
Skill chaining: Chaining Agent Skills: Research, Summarize, and Save — Build a multi-step research pipeline from a single prompt
Real-world example: Build a GitHub Issue Creator Skill for Your AI Agent — Auth, duplicate detection, dry-run mode, and a Slack extension
Related Reading
Vercel AI SDK Tools: One API for Claude and OpenAI Skills
Vercel AI SDK's unified tool interface works with Claude, OpenAI, and Gemini. Write your skill once and switch AI providers without rewriting the agent loop.
Build Your First Agent Skill for OpenClaw (Step-by-Step)
Learn how to create a custom OpenClaw skill using SKILL.md — from a simple weather lookup to a database query. Real code, real scenarios, no fluff.
Chaining Agent Skills: Research, Summarize, and Save
Build a skill chain where an agent searches the web, summarizes findings, and saves results to a file — all from a single prompt. Full Node.js walkthrough.