Security First — Part 20 of 30

Code Review Basics: What to Look for Before Deploy

Written by claude-sonnet-4 · Edited by claude-sonnet-4
code review · AI-generated code · security · SQL injection · XSS · authentication · authorization · input validation · vibe coding · OWASP · static analysis · hardcoded secrets · pre-deploy checklist · AI coding tools



It was a Wednesday morning when Marcus, a solo founder, launched his SaaS app's beta. He'd built the whole thing in six weeks using Cursor and Claude — fast, clean, impressive. His AI assistant had written the user authentication flow, the database queries, the file upload handler. It all worked perfectly in testing.

By Friday, a security researcher had emailed him with a screenshot of his entire user table. Every email address. Every hashed password. Exposed.

The cause? A single line in his AI-generated API endpoint:

# What the AI wrote
user = db.query(f"SELECT * FROM users WHERE email = '{email}'")

SQL injection. A vulnerability old enough to vote. Marcus hadn't written it — his AI had. And Marcus hadn't caught it, because he'd never been taught what to look for.

This is the article Marcus needed before he deployed.


Why AI Code Needs Your Eyes Before It Ships

Let's be honest about the numbers. Veracode's 2025 GenAI Code Security Report tested over 100 large language models across four programming languages. The result: 45% of AI-generated code samples introduced OWASP Top 10 security vulnerabilities. Java fared worst with a 72% failure rate; Python came in at 38% and JavaScript at 43%.

The Cloud Security Alliance put it plainly: AI coding assistants don't understand your application's risk model. They optimize for code that works, not code that's safe. And the Cycode 2026 AI Vulnerability report found that security researchers scanning close to 5,600 vibe-coded applications discovered over 2,000 vulnerabilities and 400+ exposed secrets.

None of this means AI coding tools are useless. They're extraordinary productivity multipliers. But a code review step — even a lightweight one — stands between you and Marcus's Friday.

Here's how to do it.


The Four Security Red Flags to Scan for First

You don't need to be a security engineer to catch the most dangerous patterns. You need to know what to search for.

1. Raw User Input in Queries or Commands

Any time you see user-supplied data (from a form, a URL parameter, an API request) flowing directly into a database query, shell command, or eval statement — stop. That's the pattern behind SQL injection, command injection, and code injection.

# RED FLAG — user input directly in a query string
results = db.execute(f"SELECT * FROM orders WHERE user_id = {user_id}")

# SAFE — parameterized query
results = db.execute("SELECT * FROM orders WHERE user_id = ?", (user_id,))

// RED FLAG — eval on user input
const result = eval(userExpression);

// SAFE — use a proper math library instead
import { evaluate } from 'mathjs';
const result = evaluate(userExpression);

The Cloud Security Alliance notes that when prompts are ambiguous, LLMs optimize for the shortest path to a working result. An AI asked to evaluate a user-provided math expression will frequently reach for eval() because it solves the problem in one line; it also opens the door to remote code execution.

How to find it: In your editor, search for f" or template literals (`) containing words like query, execute, run, system, exec, eval. Any string that mixes user variables with commands is a candidate for review.
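
The same hunt works from the command line. A rough first pass, sketched here for Python files (the pattern is deliberately broad; expect false positives and review each hit by hand):

```shell
# Flag Python f-strings that inline variables into SQL or exec-style calls.
# Broad on purpose: every hit needs a human look.
grep -rn --include="*.py" \
  -E "f[\"'][^\"']*(SELECT|INSERT|UPDATE|DELETE|exec|system|eval)" . \
  | grep -v node_modules || true
```

The trailing || true keeps the scan from failing a script when there are zero hits, since grep exits non-zero on no matches.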


2. Missing Authentication and Authorization Checks

AI tends to write the happy path — the code that works when a logged-in user does the expected thing. It often omits the checks that protect endpoints when someone unexpected shows up.

# RED FLAG — no check that the requesting user owns this record
@app.route('/api/documents/<doc_id>')
def get_document(doc_id):
    doc = db.get(doc_id)
    return jsonify(doc)

# SAFE — verify ownership before returning
@app.route('/api/documents/<doc_id>')
@login_required
def get_document(doc_id):
    doc = db.get(doc_id)
    if doc.owner_id != current_user.id:
        return jsonify({'error': 'Forbidden'}), 403
    return jsonify(doc)

This class of bug — Broken Object Level Authorization (BOLA) — is OWASP's #1 API security risk. The AI wrote code that works for the legitimate user. It forgot about the attacker who manually changes doc_id in the URL to someone else's ID.

How to find it: Look at every route or endpoint in your AI-generated code. Ask: "What would happen if a logged-out user hit this? What if they changed the ID in the URL to someone else's?" If there's no ownership check, flag it.
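
One way to make that check hard to forget is to centralize it. Here is a framework-agnostic sketch; the record loader and user-lookup hooks are illustrative names for this example, not a real Flask API:

```python
from functools import wraps

def owner_required(load_record, get_current_user_id):
    """Sketch of a reusable ownership check. Fetches the record,
    verifies the requester owns it, and only then calls the view.
    load_record and get_current_user_id are illustrative hooks."""
    def decorator(view):
        @wraps(view)
        def wrapped(record_id, *args, **kwargs):
            record = load_record(record_id)
            if record is None or record["owner_id"] != get_current_user_id():
                return {"error": "Forbidden"}, 403
            return view(record, *args, **kwargs)
        return wrapped
    return decorator
```

Routing every data endpoint through one helper like this makes the missing-check case easy to spot in review: any route that accepts an ID but lacks the decorator is a flag.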


3. Hardcoded Secrets and Credentials

This one is embarrassingly common and accounts for a huge portion of real breaches. The Cycode report found 400+ exposed secrets across 5,600 vibe-coded apps. AI often generates working examples with placeholder credentials — and placeholder credentials have a way of going to production.

# RED FLAG — hardcoded secrets
SECRET_KEY = "my-super-secret-key-123"
DATABASE_URL = "postgresql://admin:password123@localhost/mydb"
AWS_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE"

# SAFE — environment variables
import os
SECRET_KEY = os.environ.get('SECRET_KEY')
DATABASE_URL = os.environ.get('DATABASE_URL')

# Quick scan — search your entire project for common secret patterns
grep -rn --include="*.py" --include="*.js" --include="*.ts" \
  -E "(password|secret|api_key|token|aws_secret)[[:space:]]*=[[:space:]]*[\"'][^\"']{8,}" \
  . | grep -v node_modules | grep -v '\.git/'

How to find it: Run the grep command above before every deploy. Better yet, add it to your pre-commit hooks so it runs automatically. Also check your .env files are in .gitignore before pushing.
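
A minimal version of that hook, sketched as a plain POSIX shell script (save it as .git/hooks/pre-commit and make it executable; the pattern mirrors the grep above and should be tuned to your codebase):

```shell
#!/bin/sh
# Pre-commit sketch: block the commit if staged changes look like they
# contain a hardcoded secret. Tune the keyword list for your project.
if git diff --cached -U0 | \
   grep -E "(password|secret|api_key|token|aws_secret)[[:space:]]*=[[:space:]]*[\"'][^\"']{8,}"
then
  echo "Possible hardcoded secret in staged changes; commit blocked." >&2
  exit 1
fi
```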


4. Weak or Absent Input Validation

AI frequently skips the boring-but-critical step of validating what users send. Missing validation is the root cause of XSS attacks, path traversal, and a long list of other exploits. Veracode found that 86% of relevant AI-generated code samples failed to defend against Cross-Site Scripting (XSS) — the most common failure in their entire dataset.

// RED FLAG — displaying user content without sanitization
app.get('/profile/:username', (req, res) => {
  const user = getUserByName(req.params.username);
  res.send(`<h1>Hello, ${user.displayName}!</h1>`);
});

// SAFE — escape HTML before rendering
import { escape } from 'html-escaper';
app.get('/profile/:username', (req, res) => {
  const user = getUserByName(req.params.username);
  res.send(`<h1>Hello, ${escape(user.displayName)}!</h1>`);
});

How to find it: Search for any place where user-supplied data — from req.body, req.query, req.params, or form inputs — ends up directly in HTML, file paths, or shell commands without passing through a sanitization function first.
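
HTML output is one sink; file paths are another. A small Python sketch of the same validate-before-use principle for user-supplied filenames (the helper name is illustrative):

```python
import os

def safe_join(base_dir, user_filename):
    """Resolve a user-supplied filename and refuse anything that escapes
    base_dir, blocking '../'-style path traversal. Illustrative helper."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, user_filename))
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError("path traversal attempt blocked")
    return candidate
```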


The 15-Minute Pre-Deploy Review Workflow

You don't need hours. You need a repeatable checklist you actually run.

Step 1: Run a Static Analysis Scan (5 minutes)

Before you read a single line, let a tool do a first pass:

# Python — install bandit and scan your project
pip install bandit
bandit -r ./src -ll  # -ll = medium and high severity only

# JavaScript/TypeScript — use semgrep (free tier available)
pip install semgrep  # semgrep ships as a Python package (or: brew install semgrep)
semgrep --config=p/javascript ./src

# Or use the Snyk CLI (free for individuals)
npm install -g snyk
snyk code test

These tools catch the mechanical patterns — hardcoded secrets, SQL concatenation, dangerous function calls — faster than human eyes. They're not perfect, but they eliminate the low-hanging fruit in minutes.

Step 2: The Five-Question Walkthrough (10 minutes)

For every major feature your AI built, ask these five questions:

  1. Can a logged-out user reach this? If not, is there a @login_required decorator or middleware protecting it?
  2. Can user A access user B's data? Check every endpoint that accepts an ID or identifier in the URL or request body.
  3. Where does user input go? Trace any form field, URL parameter, or API input to its destination. Does it touch a database query, a file path, or HTML output?
  4. What secrets does this use? Are they in environment variables or hardcoded in the source?
  5. What happens with unexpected input? What if someone sends an empty string? A 10MB string? A string containing <script> tags?
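
Question 5 is the easiest to turn into code. A sketch of the boring-but-critical validation step, with illustrative limits chosen for this example:

```python
import html

MAX_NAME_LEN = 100  # illustrative limit; pick what fits your domain

def clean_display_name(raw):
    """Reject empty or oversized input, then escape HTML so '<script>'
    renders as text instead of executing."""
    if not isinstance(raw, str) or not raw.strip():
        raise ValueError("display name is required")
    if len(raw) > MAX_NAME_LEN:
        raise ValueError("display name too long")
    return html.escape(raw.strip())
```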

Step 3: Ask Your AI to Review Its Own Work

This is a trick that actually works. Paste the code you're about to deploy into your AI assistant and ask:

Please review this code for security vulnerabilities. Specifically check for:
- SQL injection or command injection risks
- Missing authentication or authorization checks  
- Hardcoded secrets or credentials
- Missing input validation or output encoding
- Any other OWASP Top 10 issues

Be specific about line numbers and how each issue could be exploited.

Researchers from Université du Québec found that ChatGPT can recognize security flaws in its own code when explicitly asked — it just doesn't volunteer that information unless you prompt it. The AI knows. You have to ask.


The Patterns AI Gets Wrong Most Often

Based on the research and real-world breach reports, here are the top AI-generated code patterns to watch for by language:

Python:

  • os.system() or subprocess.call(shell=True) with user input → command injection
  • String-formatted SQL queries → SQL injection
  • pickle.loads() on untrusted data → deserialization attacks
  • random module for security tokens instead of secrets module → weak randomness
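
The last bullet is a one-line fix. Python's random module is deterministic by design; the secrets module draws from the operating system's CSPRNG:

```python
import random
import secrets

# RED FLAG — Mersenne Twister output is predictable; an attacker who sees
# enough outputs can reconstruct the state and predict future tokens
weak_token = "".join(random.choices("0123456789abcdef", k=32))

# SAFE — secrets is built for tokens, keys, and password-reset links
reset_token = secrets.token_urlsafe(32)  # 32 random bytes, 43 URL-safe chars
session_id = secrets.token_hex(16)       # 16 random bytes, 32 hex chars
```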

JavaScript/TypeScript:

  • Template literals in SQL queries → SQL injection
  • innerHTML = userContent → XSS
  • eval() or new Function() on user strings → code injection
  • JWT tokens verified without checking the algorithm → authentication bypass

Both:

  • API endpoints without rate limiting → abuse and credential stuffing
  • File upload handlers that don't validate file type or content → malware upload
  • Error messages that expose stack traces or database details → information leakage
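
The information-leakage bullet has an equally small fix: log the details server-side, return a generic message to the client. A framework-agnostic Python sketch (the handler shape is illustrative):

```python
import logging
import traceback

logger = logging.getLogger("app")

def handle_unexpected_error(exc):
    """Log the full traceback for debugging, but send the client only a
    generic message and a 500 status: no stack trace, no SQL, no paths."""
    logger.error("unhandled error: %s", "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)))
    return {"error": "Internal server error"}, 500
```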

Your Pre-Deploy Security Checklist

Run the tools:

  • Static analysis scan complete (Bandit, Semgrep, or Snyk)
  • Grep for hardcoded secrets returned no results
  • All dependencies checked with npm audit or pip-audit

Manual review — five questions answered for each feature:

  • Authentication: every protected route has a login check
  • Authorization: every data endpoint verifies ownership
  • Input validation: user-supplied data is validated before use
  • Output encoding: user content is escaped before rendering in HTML
  • Secrets: all credentials are in environment variables, not source code

AI self-review:

  • Pasted new AI-generated code into the AI with the security review prompt
  • Addressed all issues the AI identified in its own code

Before git push:

  • .env file is in .gitignore
  • No test credentials or example API keys are in the commit

Ask The Guild

Community prompt: What's the most surprising security bug you've caught in AI-generated code — either through a review, a scan, or the hard way in production? Share the pattern (not the sensitive details) so other guild members know what to watch for. Bonus points if you share the prompt that generated the vulnerable code — let's build a community library of prompts that need extra scrutiny.


Sources: Veracode 2025 GenAI Code Security Report | Cloud Security Alliance — Understanding Security Risks in AI-Generated Code | Cycode — Top AI Security Vulnerabilities to Watch in 2026 | The Register — ChatGPT creates mostly insecure code | Fortune — AI coding tools exploded in 2025. The first security exploits show the risks | RunSafe Security — AI Generated Code and the Next Cyber Crisis

About Tom Hundley

Tom Hundley writes for builders who need stronger technical judgment around AI-assisted software work. The Guild turns production experience into public articles, copy-paste prompts, and structured learning paths that help non-software developers supervise AI agents more safely.
