I was doing a late-night code review with my agent. It flagged three potential bugs—a race condition, a missing null check, and a deprecated API call. Then it asked: “Want me to file these as GitHub issues?” I said yes. Twenty seconds later, three properly formatted issues appeared in the repo, with labels, clear descriptions, and reproduction steps. This is the moment agent skills stop being toy demos and become genuinely useful. But getting here safely requires handling a few messy details that most tutorials skip over.
Why is filing issues harder than fetching weather?
Read-only skills like weather apps are safe. Filing GitHub issues is an action. It has side effects. If you mess up, you’re not just getting a wrong answer. You’re cluttering a real repository.
The Scenario: You’re finishing a long coding session. Your agent finds a critical security flaw. You tell it to “fix it and file an issue.” Without proper checks, the agent gets stuck in a loop and files 50 identical issues. Now your email inbox is a total disaster and your teammates are annoyed.
You need authentication. You need rate limits. And you definitely need a way to stop duplicates.
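The rate-limit problem is worth solving before anything else, because it’s what turns the 50-identical-issues scenario into a non-event. One lightweight approach is a per-session action budget: a counter that every side-effecting call must pass through. This is a sketch of that idea; the `ActionBudget` class is a hypothetical helper, not part of any library.

```javascript
// Hypothetical per-session guard: caps how many write actions an agent
// may perform, so a runaway loop can't file 50 identical issues.
export class ActionBudget {
  constructor(maxActions = 5) {
    this.maxActions = maxActions;
    this.used = 0;
  }

  // Call this at the top of every side-effecting tool (create issue,
  // post message, etc.). Throws once the cap is hit.
  spend(action) {
    if (this.used >= this.maxActions) {
      throw new Error(
        `Action budget exhausted (${this.maxActions}); refusing "${action}".`
      );
    }
    this.used += 1;
  }
}
```

Call `budget.spend("create_github_issue")` before each write. When the budget runs out, the agent gets a clear error it can surface to you instead of silently looping.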
What do I need to get started?
Install the GitHub client first. You’ll also need a way to manage your environment variables.
npm install @octokit/rest dotenv
Create a .env file for your token. Don’t ever hardcode this.
GITHUB_TOKEN=ghp_your_token_here
How do I write the actual code?
Start with a simple helper to grab your token. I like to keep the repository parsing separate so the agent doesn’t have to guess.
import { Octokit } from "@octokit/rest";
import "dotenv/config";

function getOctokit() {
  if (!process.env.GITHUB_TOKEN) {
    throw new Error("GITHUB_TOKEN is missing.");
  }
  return new Octokit({ auth: process.env.GITHUB_TOKEN });
}
How do I stop my agent from spamming my repo?
Before you create an issue, check if it already exists. This is the most important part of an action skill.
The Scenario: You and a coworker are both using AI agents on the same project. You both find the same bug. Without duplicate detection, your GitHub “Issues” tab suddenly has two identical reports. The rest of the team wastes time triaging the same problem twice.
export async function search_github_issues({ repo, query, state = "open" }) {
  const octokit = getOctokit();
  try {
    const response = await octokit.search.issuesAndPullRequests({
      q: `${query} repo:${repo} is:issue is:${state}`,
      per_page: 5
    });
    return { found: response.data.total_count, issues: response.data.items };
  } catch (err) {
    return { error: `Search failed: ${err.message}` };
  }
}
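Search gets you candidate matches, but titles rarely match word for word. A small comparison helper can decide whether a proposed issue is “close enough” to an existing one. This is a naive sketch (the `looksLikeDuplicate` helper is hypothetical); normalized substring matching catches the common cases, and you could swap in something smarter later.

```javascript
// Strip punctuation and case so "Race condition!" matches "race condition".
function normalize(title) {
  return title.toLowerCase().replace(/[^a-z0-9 ]/g, "").trim();
}

// Hypothetical helper: given the issues returned by search_github_issues,
// decide whether a proposed title already looks covered.
export function looksLikeDuplicate(existingIssues, proposedTitle) {
  const proposed = normalize(proposedTitle);
  return existingIssues.some((issue) => {
    const existing = normalize(issue.title);
    return (
      existing === proposed ||
      existing.includes(proposed) ||
      proposed.includes(existing)
    );
  });
}
```

If this returns `true`, have the agent link to the existing issue instead of filing a new one.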
Can I preview an issue before it goes live?
Always include a dry-run mode. This lets the agent show you what it’s about to do. It builds trust.
The Scenario: You ask your agent to “file an issue about the login bug.” Before it posts, it shows you a preview. You notice it included your personal API key in the reproduction steps. You tell it to remove the key before posting. Crisis averted.
export async function create_github_issue({ repo, title, body, dryRun = false }) {
  if (dryRun) {
    // Return the full draft, not just the title, so the human can
    // review the body (reproduction steps, etc.) before it goes live.
    return { dryRun: true, title, body };
  }
  const [owner, name] = repo.split("/");
  const octokit = getOctokit();
  try {
    const res = await octokit.issues.create({ owner, repo: name, title, body });
    return { created: true, url: res.data.html_url };
  } catch (err) {
    return { error: `Issue creation failed: ${err.message}` };
  }
}
How do I tell the AI what an issue should look like?
Your tool definition is the “instruction manual” for the AI. Be specific about what makes a good issue.
{
  name: "create_github_issue",
  description: "Create a new issue. Body must include steps to reproduce and expected behavior.",
  input_schema: {
    type: "object",
    properties: {
      repo: { type: "string", description: "Format: owner/repo" },
      title: { type: "string" },
      body: { type: "string" },
      dryRun: { type: "boolean", description: "Set to true for a preview first" }
    },
    required: ["repo", "title"]
  }
}
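The search skill deserves the same treatment. A matching definition (sketched here in the same shape as the one above) tells the model to search before it creates:

```
{
  name: "search_github_issues",
  description: "Search existing issues before creating a new one, to avoid duplicates.",
  input_schema: {
    type: "object",
    properties: {
      repo: { type: "string", description: "Format: owner/repo" },
      query: { type: "string", description: "Keywords from the proposed issue title" },
      state: { type: "string", enum: ["open", "closed"], description: "Defaults to open" }
    },
    required: ["repo", "query"]
  }
}
```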
What does the final agent look like?
You can wrap this in a simple loop. The agent will check for duplicates, show a preview, and then post.
The Scenario: You point the agent at a messy legacy codebase. It finds a deprecated function. It searches GitHub, finds no existing issue, shows you a dry-run preview, and with your “okay,” files the bug. You just saved fifteen minutes of manual work.
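That check-preview-confirm flow can be sketched as one function. To keep it testable, this version takes the search/create functions and a `confirm` callback as parameters; `fileIssueSafely` and `confirm` are hypothetical names, not part of any SDK.

```javascript
// Sketch of the full flow: search for duplicates, show a dry-run preview,
// ask for confirmation, then create for real. The search/create functions
// and the confirm callback are injected so the flow is easy to test.
export async function fileIssueSafely({ repo, title, body }, { search, create, confirm }) {
  const existing = await search({ repo, query: title });
  if (existing.found > 0) {
    return { skipped: true, reason: `Found ${existing.found} similar issue(s).` };
  }

  const preview = await create({ repo, title, body, dryRun: true });
  const approved = await confirm(preview); // e.g. show the draft in chat
  if (!approved) {
    return { skipped: true, reason: "User declined after preview." };
  }

  return create({ repo, title, body, dryRun: false });
}
```

Wire it up with the `search_github_issues` and `create_github_issue` functions from earlier, and a `confirm` that relays the preview to the human.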
Can I use this same pattern for Slack?
Yes. The logic is the same for every action API. Check for existing state, offer a dry run, and then commit.
export async function send_slack_message({ channel, message, dryRun = false }) {
  if (dryRun) return { dryRun: true, channel, message };
  // Assumes @slack/web-api is installed and SLACK_TOKEN is in your .env.
  const { WebClient } = await import("@slack/web-api");
  const slack = new WebClient(process.env.SLACK_TOKEN);
  const res = await slack.chat.postMessage({ channel, text: message });
  return { sent: true, ts: res.ts };
}
Is my agent safe to run on production repos?
Before you turn this on for real, run through this checklist. Don’t skip it.
- Token has the absolute minimum permissions (`repo` scope only).
- Token is stored in a `.env` file, not your code.
- `dryRun: true` is your default setting for the first week.
- You’ve tested it on a private “sandbox” repo first.
- You told the agent exactly which repo it’s allowed to touch.
What should I build next?
- One API for everything: Vercel AI SDK Tools: One API for Claude and OpenAI Skills
- Chain it with search: Chaining Agent Skills: Research, Summarize, and Save
- Handle the errors: Handling Errors in Agent Skills: Retries and Fallbacks