The Anti-Patterns — Prompts That Produce Bad Code
Recognize and avoid the most common prompting mistakes that lead to buggy, bloated, or wrong code.
You've learned what makes a good prompt. Now let's talk about what makes a bad one.
These aren't hypothetical mistakes. They're patterns I see constantly — from beginners and experienced developers alike. Each one seems reasonable but consistently produces code that's buggy, bloated, over-complicated, or just wrong.
Recognizing these anti-patterns will save you hours of debugging and frustration.
Anti-Pattern 1: The Mega Prompt
What it looks like:
Build me a complete SaaS application with user registration, email
verification, OAuth with Google and GitHub, role-based access control,
a project dashboard with real-time updates, task management with
Kanban boards, file uploads with progress bars, Stripe integration
for three subscription tiers, an admin panel with user management,
email notifications, and a public API with rate limiting.

Why it fails: This prompt asks for months of development work in one shot. The AI will produce something that looks like it works, but:
- Most features will be half-implemented stubs
- The code structure will be inconsistent (the AI's approach drifts as it generates)
- Edge cases won't be handled
- The features won't integrate well with each other
The fix: Use the decomposition pattern from the earlier lesson. Build one feature at a time, verify it works, then add the next.
Anti-Pattern 2: The Vague Request
What it looks like:
Make the app better

Or the slightly more specific but still vague:
Improve the user experience

Why it fails: "Better" and "improve" are subjective terms with infinite interpretations. The AI will make changes — possibly many changes — based on its own judgment about what "better" means. Those changes might conflict with your design goals, break existing functionality, or take the project in a direction you didn't intend.
The fix: Be specific about what to change and what outcome you want:
On the product list page:
- Add loading skeletons that match the card layout while data loads
- Show a "No products found" message with a "Clear filters" button when filters return no results
- Add pagination (20 items per page) instead of loading all products at once

Anti-Pattern 3: The Implicit Standards Prompt
What it looks like:
Create a user profile page with best practices

Why it fails: "Best practices" according to whom? The AI has been trained on millions of projects with different conventions, styles, and opinions about what "best" means. It will pick one interpretation, and it might not match yours.
This is especially dangerous because the AI will confidently implement its chosen approach without mentioning the alternatives it didn't choose.
The fix: Make your standards explicit:
Create a user profile page. Requirements:
- Server component that fetches user data
- Form validation with Zod
- Optimistic UI updates when saving
- Error boundary for failed data loads
- Mobile-first responsive layout
- Follow the same component structure as our existing SettingsPage

Anti-Pattern 4: The Copy-Paste Error Dump
What it looks like:
Fix this error:
TypeError: Cannot read properties of undefined (reading 'map')
at ProductList (webpack-internal:///(app-pages-browser)/./src/components/ProductList.tsx:23:45)
at renderWithHooks (webpack-internal:///(app-pages-browser)/./node_modules/react-dom/cjs/react-dom.development.js:16305:18)
at mountIndeterminateComponent (webpack-internal:///(app-pages-browser)/./node_modules/react-dom/cjs/react-dom.development.js:20074:13)
at beginWork (webpack-internal:///(app-pages-browser)/./node_modules/react-dom/cjs/react-dom.development.js:21587:16)

Why it fails: The error message tells you what went wrong but not why. Without seeing the code that produced the error and understanding what the code was supposed to do, the AI makes generic suggestions like "add a null check" that might suppress the error without fixing the underlying problem.
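To see why "add a null check" can be the wrong fix, here's a minimal sketch in plain TypeScript. The `Product` shape and function names are hypothetical stand-ins for your real component and data layer:

```typescript
// Hypothetical data shape standing in for the real ProductList props.
type Product = { id: number; name: string };

// The generic suggestion: guard against undefined before calling .map.
// The error disappears, but the page silently renders nothing.
function renderNames(products?: Product[]): string[] {
  return (products ?? []).map((p) => p.name);
}

// The root-cause fix lives in the data layer: never hand the
// component undefined in the first place.
function normalizeProducts(rows: Product[] | null | undefined): Product[] {
  return rows ?? [];
}
```

The guard and the normalization look similar, but only the second answers why the data was undefined — which is exactly the context the AI needs from your prompt.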
The fix: Include context with the error:
I'm getting this error when the product list page loads:
TypeError: Cannot read properties of undefined (reading 'map')
Here's the component:
[paste ProductList code]
Here's how it's called:
[paste parent component usage]
The products data comes from this function:
[paste data fetching function]
I expect the page to show a grid of product cards. The products
array should come from Supabase but seems to be undefined.

Anti-Pattern 5: The Infinite Yes
What it looks like: Accepting every AI suggestion without reading it, then wondering why the code doesn't work or has weird behavior.
Why it fails: AI generates plausible code, not necessarily correct code. It's optimized to look right. And it usually does look right — until you actually run it and find:
- Edge cases that aren't handled
- Logic that works for the example but not for real data
- Dependencies or patterns that conflict with the rest of your project
- Security vulnerabilities hidden in reasonable-looking code
The fix: Develop a review habit. Before accepting any AI-generated code:
- Read the code at a high level — does the structure make sense?
- Check the logic — does it handle the cases you care about?
- Look for hardcoded values that should be configurable
- Check for security basics — is user input validated? Are there SQL injection risks?
- Run it and test with real-ish data
You don't need to understand every line. But you should understand the approach and verify it handles your important cases.
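As a concrete example of step 4, here's the kind of plausible-looking query an AI might generate, next to the reviewed version. This is a sketch with hypothetical function names; parameter placeholder syntax varies by database driver:

```typescript
// UNSAFE: user input interpolated straight into SQL. An attacker
// passing "x' OR '1'='1" turns this into a match-everything query.
function unsafeSearch(term: string): string {
  return `SELECT * FROM products WHERE name = '${term}'`;
}

// Reviewed version: keep SQL and values separate so the database
// driver can bind the parameter safely.
function safeSearch(term: string): { sql: string; params: string[] } {
  return { sql: "SELECT * FROM products WHERE name = $1", params: [term] };
}
```

Both functions look reasonable at a glance — which is the point. Only reading the logic, not the formatting, catches the difference.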
Anti-Pattern 6: The Moving Target
What it looks like:
You: Build a contact form with name, email, and message fields
AI: [builds the form]
You: Actually, add a phone number field and a subject dropdown
AI: [modifies the form]
You: Wait, change it to a multi-step form instead
AI: [rewrites as multi-step]
You: Can you also make the first version but with file uploads?
AI: [confused, generates something inconsistent]

Why it fails: Each change causes the AI to modify its approach. After several redirections, the AI loses track of which version you want and produces code that's a Frankenstein of multiple conflicting approaches.
The fix: Decide what you want before you prompt. If you're not sure, use chain-of-thought:
I need a contact form but I'm not sure about the structure yet.
Before building anything, help me decide:
- Should it be single-step or multi-step?
- What fields make sense for a B2B SaaS contact form?
- Should it support file attachments?
Give me your recommendation with reasoning, then I'll tell you what to build.

Anti-Pattern 7: The Trust Fall
What it looks like:
You: My payment system has a bug — sometimes users are charged twice
AI: Here's a fix [provides code change]
You: [Applies the fix directly to production without testing]Why it fails: AI doesn't have access to your production environment, your real data, or your actual payment processor's behavior. Its fix is based on the code you shared, which might not capture the full picture. Applying AI fixes to critical systems without testing is gambling.
The fix: Always test AI fixes in a safe environment first. For critical systems (payments, auth, data deletion), test with multiple scenarios:
Before I apply this fix, help me create test cases:
1. Normal payment flow — single charge
2. User clicks "Pay" twice quickly
3. Network timeout during payment
4. User's card is declined
5. Webhook arrives before the redirect completes
For each case, what should happen? What would indicate the bug is still present?

Anti-Pattern 8: The Kitchen Sink
What it looks like:
Add error handling, loading states, animations, accessibility
attributes, unit tests, integration tests, documentation comments,
TypeScript types, logging, analytics tracking, and caching to
this component.

Why it fails: This asks the AI to add ten different concerns to one component simultaneously. The result is bloated code where the actual business logic is buried under layers of plumbing. Worse, these additions often conflict — the animation logic interferes with the loading state, the caching breaks the analytics tracking.
The fix: Add one concern at a time. Start with the core functionality, then layer on complexity:
Step 1: Build the component with core functionality
Step 2: Add TypeScript types
Step 3: Add error and loading states
Step 4: Add accessibility attributes
Step 5: Write tests

Each step is reviewable. Each step builds on confirmed-working code.
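Step 3 can be reviewed on its own as plain TypeScript, before any framework wiring. One way (a sketch, not the only pattern) is to model loading and error as a discriminated union instead of loose boolean flags:

```typescript
// One state value that can only be in one of three shapes at a time,
// so "loading AND error" is unrepresentable.
type FetchState<T> =
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "success"; data: T };

function describeState<T>(state: FetchState<T>): string {
  switch (state.status) {
    case "loading":
      return "Loading...";
    case "error":
      return `Error: ${state.message}`;
    case "success":
      return "Loaded";
  }
}
```

Because the step is isolated, you can verify the state logic works before layering accessibility or tests on top of it.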
Anti-Pattern 9: The Outdated Assumption
What it looks like:
Create a React class component with componentDidMount lifecycle
method that fetches data from the API

Or:
Set up a Next.js page with getServerSideProps

Why it fails: These prompts reference older patterns. The AI will comply — it knows how to write class components and getServerSideProps — but the result will be outdated code that doesn't take advantage of modern features and patterns.
AI models are trained on internet data that includes old tutorials, deprecated patterns, and legacy code. If you specifically ask for an old pattern, you'll get it.
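For contrast, the modern shape is an async function that awaits its data before rendering. Sketched here in plain TypeScript without framework imports — `getUser` is a hypothetical stand-in for a Supabase query, and a real server component would return JSX rather than a string:

```typescript
// Hypothetical stand-in for a database or Supabase query.
async function getUser(id: number): Promise<{ name: string }> {
  return { name: `user-${id}` };
}

// Modern App Router style: an async "component" awaits its data
// during render, replacing componentDidMount and getServerSideProps.
async function UserProfile(id: number): Promise<string> {
  const user = await getUser(id);
  return `Profile: ${user.name}`;
}
```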
The fix: Either specify modern patterns or let the AI choose:
Create a server component in Next.js App Router that fetches user
data from Supabase and displays it. Use the current recommended
patterns for data fetching.

If you're unsure whether a pattern is current, ask:
What's the current recommended way to fetch data in Next.js 16?
I want to make sure I'm using modern patterns.

The Meta Anti-Pattern: Not Learning From Mistakes
The worst anti-pattern isn't a single type of bad prompt. It's making the same prompting mistake repeatedly without adjusting your approach.
When an AI interaction goes poorly:
- Look at your prompt — was it vague? Too big? Missing context?
- Identify which anti-pattern (if any) caused the problem
- Adjust your prompting approach for next time
- If you keep correcting the same issue, add it to your .cursorrules or CLAUDE.md
Good vibe coders improve their prompting over time. Each failed interaction is data about what works and what doesn't.
Try this now
Look at your last five bad AI interactions and label each one with an anti-pattern from this lesson. If the same label appears twice, that is not bad luck. It is a habit you can fix.
Prompt to give your agent
Use this when a conversation with the agent keeps going sideways: "I am going to paste three prompts that produced bad results. For each prompt:
- Classify the anti-pattern causing the failure
- Explain why the prompt created bad code or bad direction
- Rewrite it into a safer, more specific prompt
- Add the right constraints, review gates, and stop conditions
Use these anti-pattern labels when relevant: mega prompt, vague request, implicit standards, error dump, moving target, trust fall, kitchen sink, outdated assumption.
After the rewrites, give me one reusable rule I should add to my workflow so I stop making the same mistake."
What you must review yourself
- Whether the rewritten prompt actually fixes the root problem instead of just sounding more polished
- Whether the new prompt adds review gates before risky work
- Whether the suggested modern pattern is actually current for your stack
- Whether critical systems still have a testing plan before you apply the change
Common mistakes to avoid
- Blaming the model instead of the prompt. Bad outputs often start with bad instructions.
- Correcting the answer but not the workflow. If the same prompt shape keeps failing, rewrite the shape.
- Shipping high-risk fixes without validation. Payments, auth, deletion, and production infra still need tests.
- Asking for modern software with outdated assumptions. If you name the old pattern, the agent will often give it to you.
Key takeaways
- Most prompt failures fit a small set of recognizable anti-patterns
- Naming the anti-pattern helps you fix the instruction instead of arguing with the output
- Safer prompts add scope, constraints, and review checkpoints
- Better prompting is a workflow habit, not a one-time insight
What's Next
Next up: The Build-Review-Iterate Loop. This module taught you how to prompt well. The next workflow module shows how to use those prompts inside a repeatable cycle of generation, review, and correction.