I gave an AI agent direct file system access. No restrictions, no guardrails.
It worked great for a week. Then I asked it to “organize my notes folder.” It scanned the directory, decided half the files were duplicates based on similar filenames, and deleted 40 files. Permanently. No recycle bin.
It was trying to be helpful. The problem was mine — I gave it a footgun with no safety.
File system skills need boundaries. Not because the AI is malicious — but because it optimizes for the task it’s given, and “organize” means something very different to an AI than it does to you.
The three failure modes
Before writing any code, understand what can go wrong:
1. Path traversal — the agent writes outside the intended directory.
User asks: "Save this note to ../../../etc/cron.daily/cleanup.sh"
Agent calls: write_file({ path: "../../../etc/cron.daily/cleanup.sh", content: "rm -rf /" })
This is a path traversal attack. Even without malicious intent, an agent following a user instruction could write anywhere if paths aren’t validated.
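The mechanics are easy to see in isolation. Node's path.resolve happily normalizes ".." segments right out of the base directory (the paths below are illustrative):

```javascript
import { resolve } from "node:path";

// Naive joining walks straight out of the intended directory:
const escaped = resolve("/home/vishnu/notes", "../../../etc/passwd");
console.log(escaped); // → "/etc/passwd"
```

Nothing in path.resolve objects to this; the validation has to be yours.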
2. Context-blowing large reads — reading a large file fills the context window with noise.
Agent calls: read_file({ path: "./node_modules/.cache/webpack/main.pack" })
// Returns 8MB of binary data into the context window
3. Destructive writes — operations that silently replace existing data.
Agent calls: write_file({ path: "notes/ideas.md", content: "..." })
// Without guardrails, this silently overwrites an existing ideas.md, because replacing is fs.writeFile's default behavior
All three are preventable with a sandbox pattern.
The sandbox: resolveSecure(basePath, userPath)
This function is the foundation of all four skills. It resolves a path, validates it stays inside basePath, and throws on any traversal attempt.
// sandbox.js
import { resolve, sep } from "node:path";
export function resolveSecure(basePath, userPath) {
const base = resolve(basePath);
const target = resolve(base, userPath);
// Ensure the resolved path stays inside the base path (sep keeps the check portable across platforms)
if (!target.startsWith(base + sep) && target !== base) {
throw new Error(
`Access denied: "${userPath}" resolves outside the allowed directory.`
);
}
return target;
}
Test it:
import { resolveSecure } from "./sandbox.js";
// These work:
resolveSecure("/home/vishnu/notes", "ideas.md"); // → "/home/vishnu/notes/ideas.md"
resolveSecure("/home/vishnu/notes", "work/project.md"); // → "/home/vishnu/notes/work/project.md"
// These throw:
resolveSecure("/home/vishnu/notes", "../passwords.txt"); // ❌ Access denied
resolveSecure("/home/vishnu/notes", "../../etc/passwd"); // ❌ Access denied
resolveSecure("/home/vishnu/notes", "/etc/cron.daily"); // ❌ Access denied
Every skill below calls resolveSecure before touching the file system.
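One gap worth knowing about: resolveSecure validates the path string, not what's on disk, so a symlink inside the sandbox that points outside it still escapes. If symlinks can appear in your base directory, a hardened variant can resolve the nearest existing ancestor's real path before checking. This is a sketch, not an exhaustive defense (broken symlinks and race conditions still need thought), and resolveSecureReal is my name for it, not a Node API:

```javascript
// sandbox-hardened.js — a sketch; the target file may not exist yet,
// so we resolve the deepest EXISTING ancestor and check its real path.
import { resolve, sep } from "node:path";
import { realpathSync, existsSync } from "node:fs";

export function resolveSecureReal(basePath, userPath) {
  const base = realpathSync(resolve(basePath)); // canonical base, symlinks resolved
  const target = resolve(base, userPath);
  // Walk up to the nearest path component that exists on disk...
  let probe = target;
  while (!existsSync(probe)) probe = resolve(probe, "..");
  // ...and resolve its symlinks, so a link inside the sandbox can't point out.
  const real = realpathSync(probe);
  if (real !== base && !real.startsWith(base + sep)) {
    throw new Error(`Access denied: "${userPath}" escapes the sandbox.`);
  }
  return target;
}
```

The string check in resolveSecure stays; this only adds a second check against the on-disk reality.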
Skill 1 — read_file
// file-skills.js
import { readFile, stat } from "node:fs/promises";
import { existsSync } from "node:fs";
import { resolveSecure } from "./sandbox.js";
const BASE_DIR = process.env.AGENT_FILES_DIR ?? `${process.env.HOME}/agent-files`;
const MAX_LINES = 200;
const MAX_SIZE_BYTES = 500 * 1024; // 500KB
export async function read_file({ path, maxLines = MAX_LINES }) {
let safePath;
try {
safePath = resolveSecure(BASE_DIR, path);
} catch (err) {
return { error: err.message };
}
if (!existsSync(safePath)) {
return { error: `File not found: "${path}"` };
}
const stats = await stat(safePath);
if (stats.isDirectory()) {
return { error: `"${path}" is a directory. Use list_directory to explore it.` };
}
if (stats.size > MAX_SIZE_BYTES) {
return {
error: `File is too large to read (${Math.round(stats.size / 1024)}KB). Maximum is ${MAX_SIZE_BYTES / 1024}KB.`,
fileSizeKb: Math.round(stats.size / 1024)
};
}
const content = await readFile(safePath, "utf8");
const lines = content.split("\n");
const truncated = lines.length > maxLines;
return {
path,
lines: lines.length,
truncated,
content: truncated ? lines.slice(0, maxLines).join("\n") + `\n\n[...truncated at ${maxLines} lines]` : content,
sizeKb: Math.round(stats.size / 1024)
};
}
The maxLines limit prevents a 10,000-line log file from flooding the context window. The model can request more lines if needed by passing a higher value — but the default keeps things sane.
Skill 2 — write_file
import { writeFile, appendFile, mkdir } from "node:fs/promises";
import { existsSync } from "node:fs";
import { dirname } from "node:path";
import { resolveSecure } from "./sandbox.js";
// BASE_DIR is the same constant defined in Skill 1
const MAX_WRITE_BYTES = 100 * 1024; // 100KB
export async function write_file({ path, content, mode = "create" }) {
if (!["create", "append", "overwrite"].includes(mode)) {
return { error: `Invalid mode "${mode}". Use "create", "append", or "overwrite".` };
}
if (!content && content !== "") {
return { error: "Content is required." };
}
if (Buffer.byteLength(content, "utf8") > MAX_WRITE_BYTES) {
return { error: `Content too large (${Math.round(Buffer.byteLength(content) / 1024)}KB). Maximum is ${MAX_WRITE_BYTES / 1024}KB.` };
}
let safePath;
try {
safePath = resolveSecure(BASE_DIR, path);
} catch (err) {
return { error: err.message };
}
// Create parent directories if they don't exist
await mkdir(dirname(safePath), { recursive: true });
const fileExists = existsSync(safePath);
if (mode === "create" && fileExists) {
return {
error: `File already exists: "${path}". Use mode "overwrite" to replace it or "append" to add to it.`,
exists: true
};
}
if (mode === "append") {
await appendFile(safePath, content, "utf8");
} else {
await writeFile(safePath, content, "utf8");
}
return {
written: true,
path,
mode,
sizeKb: Math.round(Buffer.byteLength(content) / 1024)
};
}
The "create" mode protects against accidental overwrites — the agent must explicitly use "overwrite" to replace an existing file. This means the model has to make a deliberate choice, which surfaces the intent clearly in the conversation.
Skill 3 — list_directory
import { readdir, stat } from "node:fs/promises";
import { join } from "node:path";
import { resolveSecure } from "./sandbox.js";
const MAX_DEPTH = 2;
const MAX_FILES = 200;
export async function list_directory({ path = ".", depth = 1 }) {
const clampedDepth = Math.min(depth, MAX_DEPTH);
let safePath;
try {
safePath = resolveSecure(BASE_DIR, path);
} catch (err) {
return { error: err.message };
}
async function walk(dirPath, currentDepth) {
const entries = await readdir(dirPath, { withFileTypes: true });
const results = [];
for (const entry of entries.slice(0, MAX_FILES)) {
const fullPath = join(dirPath, entry.name);
const relativePath = fullPath.replace(BASE_DIR + "/", "");
if (entry.isDirectory()) {
const children = currentDepth < clampedDepth ? await walk(fullPath, currentDepth + 1) : [];
results.push({ name: entry.name, type: "directory", path: relativePath, children });
} else {
const stats = await stat(fullPath);
results.push({
name: entry.name,
type: "file",
path: relativePath,
sizeKb: Math.round(stats.size / 1024),
modifiedAt: stats.mtime.toISOString()
});
}
}
return results;
}
let tree;
try {
tree = await walk(safePath, 1);
} catch (err) {
return { error: `Could not list "${path}": ${err.message}` };
}
return { path, entries: tree, baseDir: BASE_DIR };
}
Skill 4 — search_files
import { readdir, readFile, stat } from "node:fs/promises";
import { join } from "node:path";
import { resolveSecure } from "./sandbox.js";
const MAX_RESULTS = 100;
export async function search_files({ directory = ".", pattern, contentSearch }) {
let safePath;
try {
safePath = resolveSecure(BASE_DIR, directory);
} catch (err) {
return { error: err.message };
}
const matches = [];
async function scan(dirPath) {
if (matches.length >= MAX_RESULTS) return;
const entries = await readdir(dirPath, { withFileTypes: true });
for (const entry of entries) {
if (matches.length >= MAX_RESULTS) break;
const fullPath = join(dirPath, entry.name);
if (entry.isDirectory()) {
await scan(fullPath);
} else {
const relativePath = fullPath.replace(BASE_DIR + "/", "");
const nameMatch = !pattern || entry.name.includes(pattern);
if (nameMatch && !contentSearch) {
matches.push({ path: relativePath, name: entry.name });
} else if (nameMatch && contentSearch) {
try {
const fileStats = await stat(fullPath);
if (fileStats.size < 500 * 1024) { // skip large files
const content = await readFile(fullPath, "utf8");
if (content.includes(contentSearch)) {
const lineNum = content.split("\n").findIndex(l => l.includes(contentSearch)) + 1;
matches.push({ path: relativePath, name: entry.name, matchLine: lineNum });
}
}
} catch {
// Skip unreadable files silently
}
}
}
}
}
await scan(safePath);
return { found: matches.length, results: matches, truncated: matches.length >= MAX_RESULTS };
}
Tool definitions
export const fileSystemTools = [
{
name: "read_file",
description:
"Read the contents of a file. Use for markdown notes, text files, config files, and small code files. " +
"Returns the first 200 lines by default — pass maxLines for more. " +
"Do NOT use for binary files, images, or very large files.",
input_schema: {
type: "object",
properties: {
path: { type: "string", description: "File path relative to the files directory" },
maxLines: { type: "number", description: "Maximum lines to return (default 200)" }
},
required: ["path"]
}
},
{
name: "write_file",
description:
"Write content to a file. Use mode 'create' for new files (fails if file exists), " +
"'overwrite' to replace existing files, 'append' to add to the end. " +
"Always confirm with the user before using 'overwrite' on an important file.",
input_schema: {
type: "object",
properties: {
path: { type: "string", description: "File path relative to the files directory" },
content: { type: "string", description: "The text content to write" },
mode: { type: "string", enum: ["create", "append", "overwrite"], description: "Write mode (default: create)" }
},
required: ["path", "content"]
}
},
{
name: "list_directory",
description:
"List files and folders in a directory. Returns names, sizes, and last modified dates. " +
"Use to explore what files exist before reading or writing.",
input_schema: {
type: "object",
properties: {
path: { type: "string", description: "Directory path (default: root files directory)" },
depth: { type: "number", description: "How many levels deep to list (max 2, default 1)" }
}
}
},
{
name: "search_files",
description:
"Search for files by name pattern or content. " +
"Use when the user asks to find notes about a topic or locate a specific file.",
input_schema: {
type: "object",
properties: {
directory: { type: "string", description: "Directory to search in (default: root)" },
pattern: { type: "string", description: "Filename pattern to match (partial match)" },
contentSearch: { type: "string", description: "Text to search inside file contents" }
}
}
}
];
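To wire these definitions to the skills, a small dispatcher maps the tool-call name to the matching handler. A sketch, assuming tool calls arrive as { name, input } objects (Anthropic-style tool use); the stub handler stands in for the real skill functions so the snippet runs on its own:

```javascript
// In the real agent loop, import the skills from file-skills.js instead of stubbing.
const handlers = {
  read_file: async ({ path }) => ({ path, content: "(stub content)" }),
  // write_file, list_directory, search_files register the same way.
};

async function dispatchToolCall({ name, input }) {
  const handler = handlers[name];
  if (!handler) return { error: `Unknown tool: "${name}"` };
  return handler(input); // each skill returns { error } instead of throwing
}

const result = await dispatchToolCall({ name: "read_file", input: { path: "ideas.md" } });
console.log(result); // → { path: "ideas.md", content: "(stub content)" }
```

Because every skill returns an { error } object rather than throwing, the dispatcher never has to translate exceptions into something the model can read.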
What NOT to build
Do not create these skills — or if you do, require explicit user confirmation before executing:
delete_file — permanent, unrecoverable. If you must build it, always return what would be deleted first and require a separate confirm_delete call.
move_file / rename_file — agents have deleted files by moving them to /dev/null in edge cases. Require confirmation.
execute_file — never. An agent that can execute arbitrary files can do anything on your system.
The principle: read access is safe, write access needs guardrails, delete and execute access should always involve a human confirmation step.
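If you do ship a delete, the two-step shape described above looks roughly like this. A sketch only: request_delete and confirm_delete are illustrative names, the token store is a plain in-memory Map, and a real version should also run every path through resolveSecure:

```javascript
import { unlink } from "node:fs/promises";
import { randomUUID } from "node:crypto";

const pendingDeletes = new Map();

// Step 1: never deletes anything. Reports what WOULD be deleted
// and hands back a one-time confirmation token.
function request_delete({ path }) {
  const token = randomUUID();
  pendingDeletes.set(token, path);
  return { wouldDelete: path, confirmToken: token };
}

// Step 2: deletes only with a token from a prior request_delete call,
// so the destructive intent shows up as two explicit turns in the transcript.
async function confirm_delete({ confirmToken }) {
  const path = pendingDeletes.get(confirmToken);
  if (!path) return { error: "No pending delete for that token." };
  pendingDeletes.delete(confirmToken); // tokens are single-use
  await unlink(path);
  return { deleted: true, path };
}
```

The split forces the model to state what it is about to destroy before it can destroy it, which is exactly the pause my notes folder never got.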
Full usage example
Set the base directory:
export AGENT_FILES_DIR="$HOME/notes"
mkdir -p ~/notes
Then the agent can:
User: Find all notes mentioning "OpenClaw" and add a summary at the end of each one.
Agent: [calls search_files({ contentSearch: "OpenClaw" })]
→ finds: ai-tools.md, setup.md
[calls read_file({ path: "ai-tools.md" })]
→ returns content
[calls write_file({ path: "ai-tools.md", content: "...original + summary...", mode: "overwrite" })]
→ written: true
The agent can now work with real files — safely, within defined boundaries.
See how MCP servers compare for production file access: Build Your First MCP Server for Claude — MCP is the standardized, production-grade approach when you need file access across multiple AI tools.
What’s next
Chain file skills with search and summarize: Chaining Agent Skills: Research, Summarize, and Save
Add error handling to your file skills: Handling Errors in Agent Skills: Retries and Fallbacks
Persistent memory for your agent: Agent Skills with Memory: Persisting State Between Chats
Related reading
Chaining Agent Skills: Research, Summarize, and Save
Build a skill chain where an agent searches the web, summarizes findings, and saves results to a file — all from a single prompt. Full Node.js walkthrough.
Agent Skills with Memory: Persisting State Between Chats
Teach your agent to remember across conversations. Build read/write memory skills backed by a JSON file, then upgrade to SQLite — full Node.js code.
Agent Skills with Google Gemini: Function Calling Guide
Complete guide to Gemini function calling — define tools, handle function_call responses, return results, and compare syntax with Claude and OpenAI. Node.js.