# Security
14 posts filed under this topic.
## File System Skills: Let Your Agent Read and Write Files
Build safe file system skills that let an agent read, write, and list files — with path sandboxing, size limits, and guardrails to prevent runaway writes.
## SSH & GPG Cheat Sheet: Keys, Tunnels & Signed Commits
Complete SSH and GPG reference — key generation, ssh-agent, config file aliases, port forwarding, database tunneling, GPG signing, and signed git commits.
## How to Set Up a .env File and Stop Leaking Secrets
What .env files are, how to load them in Node.js, Python, and Docker, the common mistakes that expose API keys, and how to manage secrets safely in production.
## How to Set Up SSH Keys for GitHub (All Platforms)
Generate an SSH key, add it to GitHub, and never type a password again. Step-by-step for Ubuntu, macOS, and Windows — including multiple accounts and troubleshooting.
## Private Search Engines That Do Not Track
A practical 2026 guide to privacy-focused search engines that reduce tracking, plus what each option is actually good at.
## I Used Claude to Review My Code for a Week. Here Is What It Caught.
A week-long experiment using Claude as a daily code reviewer on a real Node.js project — bugs found, security issues caught, where it was wrong, and what changed.
## An AI Security Checklist for Small Teams Shipping Fast
A practical AI security checklist for small teams that want to move quickly without overlooking prompt handling, data exposure, tool access, or basic safeguards.
## Fight AI with AI: How to Use the Malwarebytes ChatGPT App to Catch Phishing Scams
Scammers now use generative AI to produce convincing phishing messages. Here is how the Malwarebytes app inside ChatGPT can help you investigate delivery scams, bank alerts, and suspicious links faster.
## How to Evaluate AI Security Tools Without Buying the Marketing
A practical guide to evaluating AI security products so teams can separate useful controls from vague dashboards and inflated claims.
## How to Red-Team Your Own Chatbot Before Users Do
A practical starting guide for teams that want to test their chatbot for jailbreaks, prompt injection, unsafe outputs, and data leakage before launch.
## Prompt Injection, Explained for Normal People
Prompt injection sounds technical, but the core idea is simple: attackers hide instructions inside content and try to make an AI system obey them.
## How to Spot AI-Generated Phishing Before You Click
Generative AI has made phishing emails cleaner and more believable. This guide shows the practical signs that still give them away.
## What You Should Never Paste Into AI Tools at Work
A practical security guide for teams using ChatGPT and other AI tools without accidentally leaking secrets, contracts, or customer data.
## Install DuckDuckGo on Any Device to Take Your Privacy Back!