Code Review Basics: What to Look for Before Deploy
Security First — Part 20 of 30
It was a Wednesday morning when Marcus, a solo founder, launched his SaaS app's beta. He'd built the whole thing in six weeks using Cursor and Claude — fast, clean, impressive. His AI assistant had written the user authentication flow, the database queries, the file upload handler. It all worked perfectly in testing.
By Friday, a security researcher had emailed him with a screenshot of his entire user table. Every email address. Every hashed password. Exposed.
The cause? A single line in his AI-generated API endpoint:
# What the AI wrote
user = db.query(f"SELECT * FROM users WHERE email = '{email}'")
SQL injection. A vulnerability old enough to vote. Marcus hadn't written it — his AI had. And Marcus hadn't caught it, because he'd never been taught what to look for.
This is the article Marcus needed before he deployed.
Why AI Code Needs Your Eyes Before It Ships
Let's be honest about the numbers. Veracode's 2025 GenAI Code Security Report tested over 100 large language models across four programming languages. The result: 45% of AI-generated code samples introduced OWASP Top 10 security vulnerabilities. Java fared worst, with a 72% failure rate. Python came in at 38%. JavaScript at 43%.
The Cloud Security Alliance put it plainly: AI coding assistants don't understand your application's risk model. They optimize for code that works, not code that's safe. And the Cycode 2026 AI Vulnerability report found that security researchers scanning close to 5,600 vibe-coded applications discovered over 2,000 vulnerabilities and 400+ exposed secrets.
None of this means AI coding tools are useless. They're extraordinary productivity multipliers. But a code review step — even a lightweight one — stands between you and Marcus's Friday.
Here's how to do it.
The Four Security Red Flags to Scan for First
You don't need to be a security engineer to catch the most dangerous patterns. You need to know what to search for.
1. Raw User Input in Queries or Commands
Any time you see user-supplied data (from a form, a URL parameter, an API request) flowing directly into a database query, shell command, or eval statement — stop. That's the pattern behind SQL injection, command injection, and code injection.
# RED FLAG — user input directly in a query string
results = db.execute(f"SELECT * FROM orders WHERE user_id = {user_id}")
# SAFE — parameterized query
results = db.execute("SELECT * FROM orders WHERE user_id = ?", (user_id,))
// RED FLAG — eval on user input
const result = eval(userExpression);
// SAFE — use a proper math library instead
import { evaluate } from 'mathjs';
const result = evaluate(userExpression);
The Cloud Security Alliance notes that when prompts are ambiguous, LLMs optimize for the shortest path to a working result — so an AI asked to evaluate a user-provided math expression will frequently reach for eval() because it solves the problem in one line. That same line opens the door to remote code execution.
How to find it: In your editor, search for f" or template literals (`) containing words like query, execute, run, system, exec, eval. Any string that mixes user variables with commands is a candidate for review.
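If you want that search as a repeatable script, here is a minimal sketch in Python. The regex is an assumption tuned to the examples above, so expect false positives and misses; treat every hit as a review candidate, not a verdict:

```python
import re

# Heuristic pattern: an f-string fed straight into a query/exec-style
# call. This is an assumption, not a complete detector.
RISKY = re.compile(r'(execute|query|system|run|eval)\(\s*f["\']')

def flag_lines(source: str):
    """Return (line_number, line) pairs that match the risky pattern."""
    return [(n, line) for n, line in enumerate(source.splitlines(), 1)
            if RISKY.search(line)]

sample = 'results = db.execute(f"SELECT * FROM orders WHERE id = {uid}")'
print(flag_lines(sample))  # flags line 1
```

Point it at each changed file before a deploy; anything it flags gets a human read.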
2. Missing Authentication and Authorization Checks
AI tends to write the happy path — the code that works when a logged-in user does the expected thing. It often omits the checks that protect endpoints when someone unexpected shows up.
# RED FLAG — no check that the requesting user owns this record
@app.route('/api/documents/<doc_id>')
def get_document(doc_id):
    doc = db.get(doc_id)
    return jsonify(doc)

# SAFE — verify ownership before returning
@app.route('/api/documents/<doc_id>')
@login_required
def get_document(doc_id):
    doc = db.get(doc_id)
    if doc.owner_id != current_user.id:
        return jsonify({'error': 'Forbidden'}), 403
    return jsonify(doc)
This class of bug — Broken Object Level Authorization (BOLA) — is OWASP's #1 API security risk. The AI wrote code that works for the legitimate user. It forgot about the attacker who manually changes doc_id in the URL to someone else's ID.
How to find it: Look at every route or endpoint in your AI-generated code. Ask: "What would happen if a logged-out user hit this? What if they changed the ID in the URL to someone else's?" If there's no ownership check, flag it.
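The "what if they changed the ID" question can be made mechanical with a tiny unit test that plays the attacker. A minimal sketch, with `Doc` and `can_access` as illustrative stand-ins for your own model and ownership check:

```python
from dataclasses import dataclass

# Illustrative stand-ins: a real app would load the record from the
# database and take the user ID from the session.
@dataclass
class Doc:
    id: int
    owner_id: int

def can_access(doc: Doc, current_user_id: int) -> bool:
    """Authorization check: the requester must own the record."""
    return doc.owner_id == current_user_id

doc = Doc(id=7, owner_id=42)
assert can_access(doc, 42)      # the owner is allowed
assert not can_access(doc, 99)  # an attacker who guessed the ID is not
```

One test like this per endpoint turns the BOLA question into something a CI run answers for you.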
3. Hardcoded Secrets and Credentials
This one is embarrassingly common and accounts for a huge portion of real breaches. The Cycode report found 400+ exposed secrets across 5,600 vibe-coded apps. AI often generates working examples with placeholder credentials — and placeholder credentials have a way of going to production.
# RED FLAG — hardcoded secrets
SECRET_KEY = "my-super-secret-key-123"
DATABASE_URL = "postgresql://admin:password123@localhost/mydb"
AWS_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE"
# SAFE — environment variables
import os
SECRET_KEY = os.environ.get('SECRET_KEY')
DATABASE_URL = os.environ.get('DATABASE_URL')
# Quick scan — search your entire project for common secret patterns
grep -rn --include="*.py" --include="*.js" --include="*.ts" \
  -E "(password|secret|api_key|token|aws_secret)\s*=\s*[\"'][^\"']{8,}" \
  . | grep -v node_modules | grep -v .git
How to find it: Run the grep command above before every deploy. Better yet, add it to your pre-commit hooks so it runs automatically. Also check that your .env files are listed in .gitignore before pushing.
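A pre-commit hook for this can be a few lines of shell. This sketch assumes a Unix shell and GNU grep; save it as `.git/hooks/pre-commit` and make it executable with `chmod +x`:

```shell
#!/bin/sh
# Sketch of a pre-commit hook: blocks the commit when a likely
# hardcoded secret appears. The pattern is a heuristic, so expect
# occasional false alarms; adjust the file globs to your project.
if grep -rn --include="*.py" --include="*.js" --include="*.ts" \
     -E "(password|secret|api_key|token|aws_secret)\s*=\s*[\"'][^\"']{8,}" \
     . | grep -v node_modules | grep -v .git
then
  echo "Possible hardcoded secret found; commit blocked." >&2
  exit 1
fi
```

For anything beyond a solo project, a dedicated scanner run in CI is the sturdier option, but this catches the obvious cases for free.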
4. Weak or Absent Input Validation
AI frequently skips the boring-but-critical step of validating what users send. Missing validation is the root cause of XSS attacks, path traversal, and a long list of other exploits. Veracode found that 86% of relevant AI-generated code samples failed to defend against Cross-Site Scripting (XSS) — the most common failure in their entire dataset.
// RED FLAG — displaying user content without sanitization
app.get('/profile/:username', (req, res) => {
  const user = getUserByName(req.params.username);
  res.send(`<h1>Hello, ${user.displayName}!</h1>`);
});

// SAFE — escape HTML before rendering
import { escape } from 'html-escaper';

app.get('/profile/:username', (req, res) => {
  const user = getUserByName(req.params.username);
  res.send(`<h1>Hello, ${escape(user.displayName)}!</h1>`);
});
How to find it: Search for any place where user-supplied data — from req.body, req.query, req.params, or form inputs — ends up directly in HTML, file paths, or shell commands without passing through a sanitization function first.
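On the Python side, the standard library's `html.escape` covers the basic output-encoding case. A minimal sketch mirroring the JavaScript example above:

```python
import html

def render_profile(display_name: str) -> str:
    """Escape user-supplied content before embedding it in HTML."""
    return f"<h1>Hello, {html.escape(display_name)}!</h1>"

# A script tag in the name comes out inert:
print(render_profile('<script>alert(1)</script>'))
# → <h1>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</h1>
```

In a real app you'd normally let your template engine (Jinja2 autoescaping, for example) do this for you; the danger zone is any hand-built string that bypasses it.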
The 15-Minute Pre-Deploy Review Workflow
You don't need hours. You need a repeatable checklist you actually run.
Step 1: Run a Static Analysis Scan (5 minutes)
Before you read a single line, let a tool do a first pass:
# Python — install bandit and scan your project
pip install bandit
bandit -r ./src -ll # -ll = medium and high severity only
# JavaScript/TypeScript — use semgrep (free tier available)
pip install semgrep
semgrep --config=p/javascript ./src
# Or use the Snyk CLI (free for individuals)
npm install -g snyk
snyk code test
These tools catch the mechanical patterns — hardcoded secrets, SQL concatenation, dangerous function calls — faster than human eyes. They're not perfect, but they eliminate the low-hanging fruit in minutes.
Step 2: The Five-Question Walkthrough (10 minutes)
For every major feature your AI built, ask these five questions:
- Can a logged-out user reach this? If it shouldn't be public, is there a `@login_required` decorator or middleware protecting it?
- Can user A access user B's data? Check every endpoint that accepts an ID or identifier in the URL or request body.
- Where does user input go? Trace any form field, URL parameter, or API input to its destination. Does it touch a database query, a file path, or HTML output?
- What secrets does this use? Are they in environment variables or hardcoded in the source?
- What happens with unexpected input? What if someone sends an empty string? A 10MB string? A string containing `<script>` tags?
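Question five can be turned into a small guard function. A sketch with arbitrarily chosen limits; `validate_username` and `MAX_LEN` are illustrative names, not a library API:

```python
MAX_LEN = 200  # arbitrary cap for this sketch; pick one that fits your data

def validate_username(raw) -> str:
    """Reject empty, oversized, or non-string input before using it."""
    if not isinstance(raw, str):
        raise ValueError("username must be a string")
    value = raw.strip()
    if not value:
        raise ValueError("username must not be empty")
    if len(value) > MAX_LEN:
        raise ValueError("username too long")
    return value

print(validate_username("  alice  "))  # normal input passes: alice
```

The point isn't this exact function; it's that every entry point should have one like it, so "unexpected input" fails loudly at the boundary instead of deep inside a query.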
Step 3: Ask Your AI to Review Its Own Work
This is a trick that actually works. Paste the code you're about to deploy into your AI assistant and ask:
Please review this code for security vulnerabilities. Specifically check for:
- SQL injection or command injection risks
- Missing authentication or authorization checks
- Hardcoded secrets or credentials
- Missing input validation or output encoding
- Any other OWASP Top 10 issues
Be specific about line numbers and how each issue could be exploited.
Researchers from Université du Québec found that ChatGPT can recognize security flaws in its own code when explicitly asked — it just doesn't volunteer that information unless you prompt it. The AI knows. You have to ask.
The Patterns AI Gets Wrong Most Often
Based on the research and real-world breach reports, here are the top AI-generated code patterns to watch for by language:
Python:
- `os.system()` or `subprocess.call(shell=True)` with user input → command injection
- String-formatted SQL queries → SQL injection
- `pickle.loads()` on untrusted data → deserialization attacks
- `random` module for security tokens instead of the `secrets` module → weak randomness
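The last item deserves a concrete contrast. A minimal sketch showing the weak pattern next to the standard-library fix:

```python
import random
import secrets

# RED FLAG: random is seedable and predictable; never use it for tokens
weak_token = ''.join(random.choices('0123456789abcdef', k=32))

# SAFE: secrets draws from the OS cryptographic RNG
strong_token = secrets.token_hex(16)  # 32 hex characters

print(weak_token, strong_token)
```

The two tokens look identical on screen, which is exactly why this bug survives review unless you search for `import random` near anything security-related.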
JavaScript/TypeScript:
- Template literals in SQL queries → SQL injection
- `innerHTML = userContent` → XSS
- `eval()` or `new Function()` on user strings → code injection
- JWT tokens verified without checking the algorithm → authentication bypass
Both:
- API endpoints without rate limiting → abuse and credential stuffing
- File upload handlers that don't validate file type or content → malware upload
- Error messages that expose stack traces or database details → information leakage
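For the error-leakage item the fix is always the same shape: log the real exception server-side, return a generic message to the client. A framework-agnostic sketch; `handle_request` is an illustrative helper, not a real API:

```python
import logging

logger = logging.getLogger(__name__)

def handle_request(fn, *args):
    """Run a handler; log the real error, return a generic client message."""
    try:
        return fn(*args), 200
    except Exception:
        logger.exception("unhandled error")  # full trace stays server-side
        return {"error": "Internal server error"}, 500  # nothing leaks

body, status = handle_request(lambda: "ok")
assert (body, status) == ("ok", 200)
```

Most web frameworks offer a built-in error handler hook for exactly this; the review question is whether your AI-generated code actually registered one.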
Your Pre-Deploy Security Checklist
Run the tools:
- Static analysis scan complete (Bandit, Semgrep, or Snyk)
- Grep for hardcoded secrets returned no results
- All dependencies checked with `npm audit` or `pip-audit`
Manual review — five questions answered for each feature:
- Authentication: every protected route has a login check
- Authorization: every data endpoint verifies ownership
- Input validation: user-supplied data is validated before use
- Output encoding: user content is escaped before rendering in HTML
- Secrets: all credentials are in environment variables, not source code
AI self-review:
- Pasted new AI-generated code into the AI with the security review prompt
- Addressed all issues the AI identified in its own code
Before git push:
- `.env` file is in `.gitignore`
- No test credentials or example API keys are in the commit
Ask The Guild
Community prompt: What's the most surprising security bug you've caught in AI-generated code — either through a review, a scan, or the hard way in production? Share the pattern (not the sensitive details) so other guild members know what to watch for. Bonus points if you share the prompt that generated the vulnerable code — let's build a community library of prompts that need extra scrutiny.
Sources: Veracode 2025 GenAI Code Security Report | Cloud Security Alliance — Understanding Security Risks in AI-Generated Code | Cycode — Top AI Security Vulnerabilities to Watch in 2026 | The Register — ChatGPT creates mostly insecure code | Fortune — AI coding tools exploded in 2025. The first security exploits show the risks | RunSafe Security — AI Generated Code and the Next Cyber Crisis