Most AI features are just lazy engineering in disguise. Developers slap a chat box on a database because they’re too tired to build a real UI. It’s a mess. AI adds latency, drains your bank account, and hallucinates at the worst possible moments. You need a framework to decide when a large language model is actually necessary and when a simple if statement is better. This guide breaks down the deterministic vs. non-deterministic divide. You’ll learn to spot the “AI trap” before you ship a slow, expensive product that nobody asked for. Stop guessing. Start architecting for reality.
Should I use code or AI for this feature?
Most features are deterministic. Given the same input, they should always produce the same output. Sorting a list or calculating a total doesn’t need a trillion-parameter model. Write code. AI is for the “fuzzy” stuff—language, judgment, and interpretation.
The Scenario: You’re building a checkout page and decide to use AI to “verify” the shipping address. The AI decides “123 Main St” sounds fake because it’s a cliché. Now your customer can’t buy their shoes and you’re losing money because you replaced a simple regex with a moody model.
Rule of thumb: if you can write a reliable unit test that covers all cases, use code. If the “correct” output depends on tone or context, AI might be the answer.
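The address check from the scenario above is a good example of the rule. A deterministic shape check is unit-testable and instant; the sketch below is a minimal, assumed version (it only checks structure, not whether the address actually exists, so pair it with a real address-verification API if you need more):

```python
import re

def looks_like_street_address(value: str) -> bool:
    """Cheap structural check: a number followed by at least one word.

    Deterministic, so it unit-tests cleanly. This is a shape check,
    not true verification -- it will never reject "123 Main St" for
    sounding like a cliche.
    """
    return bool(re.match(r"^\d+\s+\S+", value.strip()))
```

Every input maps to exactly one output, so you can cover all the cases in a test suite and ship with confidence.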
How do I know if AI is worth the cost?
AI isn’t free. It’s slow. It’s expensive. Every call to a model adds a “latency tax” to your user experience. If a user expects an instant response, putting an LLM in the middle is a risky move that could ruin your retention numbers.
The Scenario: You’re at a coffee shop with terrible Wi-Fi and you need to log an expense. The app uses AI to “categorize” the receipt in real-time. You’re staring at a loading spinner for 15 seconds while the model thinks about whether a latte is “Travel” or “Dining.” You just want to close the app and go home.
Ask yourself: what happens when it fails? If a silent wrong answer would be catastrophic, keep the logic in code. If a slightly-off output is acceptable, AI is viable.
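One way to cap the latency tax is to give every model call a deadline and a deterministic fallback. A minimal sketch, assuming `call_model` is a hypothetical callable wrapping whatever LLM client you use:

```python
import concurrent.futures

# Shared worker pool so a timed-out call doesn't block the caller.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def categorize_expense(merchant: str, call_model, timeout_s: float = 2.0) -> str:
    """Try the model briefly; fall back to a deterministic default on failure.

    `call_model` is a stand-in for your actual LLM client. The point
    here is the fallback shape, not any specific API.
    """
    future = _pool.submit(call_model, merchant)
    try:
        return future.result(timeout=timeout_s)
    except Exception:
        # Slow Wi-Fi, model error, timeout: the user still gets an answer.
        return "Uncategorized"
```

The user at the coffee shop sees "Uncategorized" after two seconds instead of a fifteen-second spinner, and can fix the category themselves.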
When is plain code still the winner?
Code is predictable. It’s fast. It’s cheap. You don’t need a model to handle business logic, routing, or authentication. If you’re using AI for data validation or CRUD operations, you’re over-engineering your way into a technical debt nightmare.
The Scenario: Your app’s “smart” routing uses AI to guess which page a user wants next. It accidentally sends a guest user to the admin dashboard because it “felt” like they were an employee. You just had a major security breach because you trusted a prompt over a permission check.
Anything with a regulatory or compliance requirement belongs in code. Don’t let a model decide who gets access to what.
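Authorization is a lookup, not a judgment call. A minimal sketch of the permission check the scenario's app should have used (role and page names are illustrative):

```python
# Explicit allow-lists: boring, auditable, and impossible to "feel" wrong.
ROLE_PERMISSIONS = {
    "guest": {"storefront"},
    "employee": {"storefront", "inventory"},
    "admin": {"storefront", "inventory", "admin_dashboard"},
}

def can_access(role: str, page: str) -> bool:
    # Unknown roles get an empty set: deny by default.
    return page in ROLE_PERMISSIONS.get(role, set())
```

A guest asking for the admin dashboard gets a `False`, every single time, no matter how employee-like they seem.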
Where does AI actually earn its keep?
AI wins when it handles unstructured chaos. Generating first drafts, extracting data from messy emails, or summarizing long meetings is where the tech shines. These tasks require a level of “understanding” that rule-based code can’t match without years of work.
The Scenario: You’re staring at 500 angry customer feedback emails and your boss wants a summary in ten minutes. You feed them into a model. It identifies three main bugs and suggests a fix for the most common one. You look like a hero because you did two hours of work in sixty seconds.
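The code around a task like this stays dull on purpose: it batches the mess and constrains the ask, and the model does the reading. A sketch, assuming `complete(prompt)` is a hypothetical wrapper around your LLM client:

```python
def summarize_feedback(emails: list[str], complete) -> str:
    """Batch messy emails into one structured request.

    `complete` is a stand-in for whatever LLM client you use; the
    code's only job is assembling and constraining the prompt.
    """
    prompt = (
        "Summarize the customer feedback below.\n"
        "List the top 3 recurring issues with rough counts for each.\n\n"
        + "\n---\n".join(emails[:500])  # cap the batch to stay under the context limit
    )
    return complete(prompt)
```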
Can I mix code and AI without breaking everything?
The best features are hybrids. Code handles the structure; AI handles the language. Use code to extract structured fields and AI to classify them with a confidence score. If the score is low, use code to route the task to a human.
The Scenario: A support bot identifies a frustrated customer using sentiment analysis (AI). It immediately uses a hard-coded rule (Code) to ping a senior manager on Slack. The customer feels heard, the manager is in the loop, and no one had to guess.
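The hybrid pattern above can be sketched in a few lines. All three callables are hypothetical hooks (`classify` returns a label and a confidence score from the model; the other two are plain code):

```python
def route_ticket(message: str, classify, notify_manager, queue_for_human) -> str:
    """Hybrid flow: AI scores, code decides.

    The thresholds live in plain code, so they're testable,
    auditable, and easy to tune without touching a prompt.
    """
    label, confidence = classify(message)   # AI: fuzzy judgment
    if confidence < 0.7:                    # Code: hard rule on low confidence
        queue_for_human(message)
        return "human_review"
    if label == "angry":
        notify_manager(message)             # Code: deterministic escalation
        return "escalated"
    return label
```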
Why is my prompt making my app look stupid?
When AI handles a task, the prompt is the product. Vague prompts produce garbage. If you need JSON, define the schema. If you need a specific tone, give examples. Constrain the scope or the model will wander off into nonsense.
The Scenario: You built a “professional” financial assistant. You forgot to set the tone, so it starts using “bro” and fire emojis when talking about 401ks. Your users think your app was built by a teenager and they move their money elsewhere.
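Locking down tone and output shape looks something like the sketch below. The schema and wording are illustrative, and the validation step matters as much as the prompt, because you should never trust model output blindly:

```python
import json

# Pin down tone AND output shape; vague prompts produce "bro" and fire emojis.
SYSTEM_PROMPT = """You are a financial assistant for retirement accounts.
Tone: professional and plain. No slang, no emojis.
Respond ONLY with JSON matching this schema:
{"answer": string, "confidence": "low" | "medium" | "high"}"""

def parse_reply(raw: str) -> dict:
    """Validate the model's reply in code before it reaches a user."""
    data = json.loads(raw)  # raises if the model ignored the JSON instruction
    if set(data) != {"answer", "confidence"}:
        raise ValueError("schema violation")
    return data
```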
Am I making this too complicated?
The temptation to build a complex “multi-agent” pipeline is high. Resist it. Each extra call adds cost and failure points. If one well-designed prompt can solve the problem, don’t build a “swarm” of agents to do the same thing slower.
The Scenario: You build a five-agent system to “research and summarize” a news article. It takes two minutes to run and costs fifty cents per use. A simple regex and a single prompt could have done it in three seconds for a fraction of a penny.
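The math behind that scenario is worth doing up front. A back-of-envelope sketch with assumed per-token prices (check your provider's actual price sheet; these numbers are illustrative):

```python
# Assumed prices in $ per 1K tokens -- replace with your provider's real rates.
PRICE_PER_1K_INPUT = 0.01
PRICE_PER_1K_OUTPUT = 0.03

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one model call at the assumed rates."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# Five chained agents each re-reading ~3K tokens of context,
# versus one direct call over the same context:
single_call = call_cost(3000, 800)
multi_agent = 5 * call_cost(3000, 800)
```

At these assumed rates the single call runs about five cents and the five-agent pipeline about twenty-seven, before you count the extra latency of running the calls in sequence.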
Will this feature bankrupt my startup?
Design with real numbers. High-end models like Opus are expensive. If you’re calling them 10,000 times a day, your bill will explode. Use smaller, faster models for simple tasks and save the heavy hitters for when they’re actually needed.
The Scenario: You wake up to a $2,000 bill. A “power user” found a way to loop your most expensive prompt to generate a million cat poems. You didn’t set a cap, and now your runway just got shorter by a month because of a joke.
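A blunt per-user budget check would have stopped the cat-poem loop at a dollar. A minimal in-memory sketch, assuming you estimate each call's cost before making it (a real system would persist the counters and reset them on a schedule):

```python
from collections import defaultdict

DAILY_BUDGET_USD = 1.00
_spend = defaultdict(float)  # user_id -> dollars spent today (reset by a cron job)

def try_spend(user_id: str, estimated_cost: float) -> bool:
    """Reject the model call before it happens if it would blow the cap."""
    if _spend[user_id] + estimated_cost > DAILY_BUDGET_USD:
        return False
    _spend[user_id] += estimated_cost
    return True
```

Gate every expensive prompt behind `try_spend` and the worst a prankster can do is burn their own daily dollar.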
Related: AI Mistakes When Building Apps (And How to Fix Them) · Prompts That Go Wrong: What I Learned Shipping AI Features