
Agent Skills with the OpenAI API: Function Calling Explained

By Vishnu
| Updated: Mar 12, 2026

OpenAI calls agent skills “function calling.” The concept is simple. The model decides it needs to do something. It asks you to run a function. You run the code. Then you hand the result back. It is the exact same idea as Claude’s tool use. The syntax is just different. If you can master this, your AI stops being a chatbot. It becomes an agent that can actually get things done in the real world.

How is OpenAI’s “function calling” different from a regular chat?

A regular chat is just text in and text out. Function calling adds a middle step where the AI pauses to ask for data.

The Scenario: You’re building a personal shopping bot. A user says “Buy me a black hoodie.” GPT-4o doesn’t have a credit card or a shipping address. It stops and says “I need to use the process_purchase tool with item: 'black hoodie'.” You run the checkout code, get a confirmation number, and hand it back. GPT-4o then tells the user “Done! Your hoodie is on its way.”

You are the muscles. The AI is the brain. You work together to finish the task.

What do I need to get started with OpenAI?

You just need the OpenAI library and an API key. Make sure your key has permissions for the model you’re using.

npm install openai

How do I tell GPT-4o about my tools?

You define your tools in an array. OpenAI expects a specific structure that wraps your function inside a function key.

const tools = [{
  type: "function",
  function: {
    name: "get_weather",
    description: "Get weather for a city. Use for rain or temp questions.",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "The city name" }
      },
      required: ["city"]
    }
  }
}];

How do I start a conversation with functions enabled?

You pass the tools array into your chat completion request. The model will look at the tools and the user’s message to decide if it should “call” one.

import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from your environment

const response = await client.chat.completions.create({
  model: "gpt-4o",
  tools: tools,
  messages: [{ role: "user", content: "What's the weather in Mumbai?" }]
});

Where does my actual code go?

If the AI wants a tool, it returns a finish_reason of "tool_calls". Now you run your regular JavaScript code.

The Scenario: You’re testing your agent. You get a tool call back and try to use the arguments directly. But OpenAI sends those arguments as a raw string. Your code crashes because it’s trying to read city from a string of text. You forgot to JSON.parse().

const toolCall = response.choices[0].message.tool_calls[0];
const args = JSON.parse(toolCall.function.arguments); // Don't skip this!
const result = await get_weather(args);
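Once you have more than one tool, a small dispatch table keeps this step tidy. Here's a sketch: get_weather is a stubbed placeholder standing in for your real implementation, and the handler names must match the names in your tools array.

```javascript
// Hypothetical local implementation -- swap in your real code.
async function get_weather({ city }) {
  return { city, temp_c: 31, condition: "sunny" }; // stubbed result
}

// Map tool names (exactly as declared in the tools array) to handlers.
const handlers = { get_weather };

// Given one tool call from the response, run the matching handler.
async function runToolCall(toolCall) {
  const args = JSON.parse(toolCall.function.arguments); // arguments arrive as a string
  return handlers[toolCall.function.name](args);
}
```

Now adding a tool means writing one function and one entry in the tools array. Nothing else changes.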

How do I hand the data back to the AI?

You send the result back using a message with the role "tool". You must include the tool_call_id so the AI knows which request you’re answering.

const finalResponse = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "What's the weather in Mumbai?" },
    response.choices[0].message, // Include the original tool call
    {
      role: "tool",
      tool_call_id: toolCall.id,
      content: JSON.stringify(result)
    }
  ]
});

What does a complete OpenAI agent look like?

A real agent runs in a while loop. It keeps processing tool calls until the AI is satisfied and gives a final text answer.
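Here's a sketch of that loop. It assumes a `client` (an OpenAI client), the `tools` array from earlier, and a `handlers` map from tool names to your local functions; all three are wired up elsewhere.

```javascript
// Sketch of the full agent loop: keep calling tools until the model
// answers with plain text instead of a tool request.
async function runAgent(client, tools, handlers, userMessage) {
  const messages = [{ role: "user", content: userMessage }];

  while (true) {
    const response = await client.chat.completions.create({
      model: "gpt-4o",
      tools,
      messages,
    });
    const message = response.choices[0].message;
    messages.push(message); // keep the assistant turn in the history

    // No tool calls means the model produced its final text answer.
    if (!message.tool_calls) return message.content;

    // Run every requested tool and answer each call by its id.
    for (const toolCall of message.tool_calls) {
      const args = JSON.parse(toolCall.function.arguments);
      const result = await handlers[toolCall.function.name](args);
      messages.push({
        role: "tool",
        tool_call_id: toolCall.id,
        content: JSON.stringify(result),
      });
    }
  }
}
```

The push of `message` before the check matters: the assistant's tool-call turn has to be in the history when you send the tool results back.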

OpenAI vs. Claude: Which one is easier for developers?

OpenAI is slightly more annoying because it sends arguments as strings. You have to parse them yourself. Claude sends them as objects, which is much cleaner. OpenAI does have a big win though. It can request multiple tool calls in a single response.

Can OpenAI do multiple things at once?

Yes. This is called parallel tool calling. It’s great for speed.

The Scenario: You ask your agent to “Compare the weather in Mumbai and Delhi.” OpenAI doesn’t want to wait. It sends two tool calls at the same time. You run both weather checks in parallel and give both answers back at once. It’s way faster than doing them one by one.
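The scenario above maps neatly onto Promise.all. A sketch: the message's tool_calls array holds several entries, and you answer each one by its id. The `handlers` map is the same tool-name-to-function lookup assumed earlier.

```javascript
// Sketch: run every tool call in one response concurrently and build
// the "tool" messages to push back onto the conversation history.
async function runToolCallsInParallel(toolCalls, handlers) {
  return Promise.all(
    toolCalls.map(async (toolCall) => {
      const args = JSON.parse(toolCall.function.arguments);
      const result = await handlers[toolCall.function.name](args);
      return {
        role: "tool",
        tool_call_id: toolCall.id, // ties the answer to the right request
        content: JSON.stringify(result),
      };
    })
  );
}
```

Every tool call in the response must get its own "tool" message back, even if one of them fails; return an error string as the content rather than dropping the reply.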

Can I force the AI to use a tool?

Sometimes you don’t want the AI to decide. Set tool_choice to "required" to make it call some tool, or name a specific function to force that exact one. The default, "auto", lets the model choose; "none" disables tools entirely.
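A sketch of what forcing looks like in the request body. The tool definition is repeated here (trimmed down) so the snippet stands alone:

```javascript
// Minimal tool definition, repeated so this snippet is self-contained.
const tools = [{
  type: "function",
  function: {
    name: "get_weather",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
}];

// tool_choice pins the model to this exact function -- it must call
// get_weather, even if it thinks plain text would do.
const request = {
  model: "gpt-4o",
  tools,
  tool_choice: { type: "function", function: { name: "get_weather" } },
  messages: [{ role: "user", content: "What's the weather in Mumbai?" }],
};
```

Forcing is handy for structured extraction: you know exactly which tool should fire, so you don’t leave the choice up to the model.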

What are the biggest ways people break their OpenAI agents?

The most common mistake is skipping the conversation history.

  • History: You must include the assistant’s original tool call in the next request.
  • Parsing: Always JSON.parse the arguments.
  • Looping: Use a while loop to catch sequential tool calls.

What should I build next?