Security First — Part 16 of 30

SQL Injection: The Attack Older Than Your Career

Written by claude-sonnet-4 · Edited by claude-sonnet-4
Tags: SQL injection, SQLi, parameterized queries, prepared statements, ORM, database security, secure coding, vibe coding, OWASP, web security, AI-generated code, Python security, JavaScript security



The Breach That Never Should Have Happened

Sometime in early 2025, a security researcher named Hack-Bat opened their laptop and pointed a tool called SQLMap at a Tamil Nadu government web portal. Within minutes, they had dumped the entire database.

What they found was staggering: over one million student records including names and dates of birth, 50,000+ Aadhaar numbers (India's national ID), 100,000+ user accounts with contact details and encrypted passwords, and — most critically — live authentication tokens that could be used to log in as any user on the platform, instantly.

The attack vector? A login form that didn't sanitize its inputs. The attacker typed ' OR IF(1=1, SLEEP(5), 0)-- into the username field. The database paused for five seconds. That pause was the confirmation: this system is injectable.

Then they ran SQLMap and walked out with millions of citizens' data.

The fix — parameterized queries — would have taken a competent developer about an hour to implement. The vulnerability had probably existed since the system was built.


Why We're Still Talking About This in 2026

SQL injection was first documented in 1998. It appeared in the original OWASP Top 10 in 2003. It has been in every major security curriculum for over two decades. And yet it keeps showing up in production systems, including, in 2025, in Fortinet's FortiWeb, a web application firewall (more on that below).

That last part bears repeating: a tool companies buy to stop SQL injection had SQL injection in it.

So why does this keep happening? The honest answer is that it's an easy mistake to make when you're moving fast — and AI coding tools make it faster to make that mistake at scale.


What SQL Injection Actually Is (No Hand-Waving)

Your web app talks to a database using a language called SQL. When a user fills out a form — a login box, a search field, a profile update — your app takes their input and builds a SQL query from it.

The vulnerable pattern looks like this:

# DANGEROUS — never do this
username = request.form['username']
password = request.form['password']

query = "SELECT * FROM users WHERE username='" + username + "' AND password='" + password + "'"
cursor.execute(query)

Now imagine an attacker types this into the username field:

admin' --

Your application assembles this SQL string:

SELECT * FROM users WHERE username='admin' --' AND password='anything'

In SQL, -- starts a comment. Everything after it is ignored. The password check disappears entirely. The query becomes:

SELECT * FROM users WHERE username='admin'

If there's an admin user in the database, the attacker is now logged in. No password required. No brute force. Just a couple of characters.
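To see the whole bypass end to end, here's a minimal self-contained sketch using Python's built-in sqlite3 module and a hypothetical one-row users table (an illustration of the pattern above, not any real system's schema):

```python
import sqlite3

# In-memory demo database with a hypothetical one-row users table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def vulnerable_login(username, password):
    # DANGEROUS: the same concatenation pattern shown above
    query = ("SELECT * FROM users WHERE username='" + username +
             "' AND password='" + password + "'")
    return conn.execute(query).fetchone()

# The attacker supplies a wrong password, yet the admin row comes back:
# the -- comments out the entire password check
row = vulnerable_login("admin' --", "wrong-password")
print(row)  # ('admin', 's3cret')
```

The same two characters that close the string and start the comment work against any SQL dialect that supports -- comments.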

The Classic "OR 1=1" Trick

Another common payload:

' OR '1'='1

This turns the login query into:

SELECT * FROM users WHERE username='' OR '1'='1' AND password='' OR '1'='1'

Because the trailing OR '1'='1' makes the WHERE clause true for every row, this returns every user in the table. The app typically logs you in as the first one, often the admin.

Going Deeper: UNION Attacks

Once an attacker confirms injection is possible, they escalate. A UNION-based attack appends a second SELECT statement to the original query:

' UNION SELECT username, password, email FROM users --

If the columns line up, the app helpfully returns the data from the users table right alongside whatever it was supposed to show you. This is how databases get dumped.
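Here's a sketch of why column alignment matters, applying the same vulnerable concatenation pattern to a hypothetical product search. The three columns in the payload line up with the three columns the app selects, so the private users table rides along in the results:

```python
import sqlite3

# Hypothetical two-table demo: a public search that leaks a private table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price TEXT, category TEXT)")
conn.execute("INSERT INTO products VALUES ('Widget', '9.99', 'tools')")
conn.execute("CREATE TABLE users (username TEXT, password TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret', 'admin@example.com')")

def vulnerable_search(term):
    # DANGEROUS: same concatenation flaw as the login example
    query = "SELECT name, price, category FROM products WHERE category='" + term + "'"
    return conn.execute(query).fetchall()

# Three columns in the payload match the three columns in the app's SELECT
payload = "' UNION SELECT username, password, email FROM users --"
print(vulnerable_search(payload))  # [('admin', 's3cret', 'admin@example.com')]
```

If the column counts didn't match, the database would error instead, which is why attackers probe with payloads of varying width until one succeeds.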

The Scary Part: It Goes Beyond Data Theft

The Fortinet FortiWeb vulnerability in 2025 showed something worse. Once the attacker had SQL injection, they used MySQL's INTO OUTFILE to write a malicious Python file onto the server's filesystem. The next time the server ran Python — which it does automatically — the attacker's code executed. SQL injection became remote code execution. Complete system takeover, starting from a text field.


Why AI Tools Keep Generating Vulnerable Code

If you've used ChatGPT, Claude, Copilot, or Cursor to write database queries, pay attention here.

AI coding assistants are trained on the entire history of code that humans have written and published. That includes decades of tutorials, Stack Overflow answers, and blog posts from the era before parameterized queries were standard — where the "normal" way to build a query was string concatenation.

When you ask an AI to "write a login function that checks the database," it often produces something like:

# What AI tools frequently generate
def login(username, password):
    query = f"SELECT * FROM users WHERE username='{username}' AND password='{password}'"
    result = db.execute(query)
    return result.fetchone()

This is clean. It's readable. It runs perfectly. And it is completely vulnerable to SQL injection.

According to Gomboc.ai's March 2026 analysis of AI-generated vulnerabilities, SQL injection is one of the most common flaws in AI-generated code specifically because "AI assists will provide SQL created via string formatting, such as f-strings in Python or template literals in JavaScript, as this is considered 'normal' in many older tutorials."

The AI isn't being careless. It's pattern-matching to the most common historical examples. Your job as a vibe coder is to recognize this pattern and know the correct alternative.


The Fix: Parameterized Queries (Prepared Statements)

The solution to SQL injection has existed since the late 1990s. It's called a parameterized query (also called a prepared statement). The concept is simple:

Stop building SQL strings. Send data separately.

Instead of concatenating user input into a SQL string, you write the query with placeholders, then pass the data as a separate argument. The database driver handles the separation — user input can never be treated as SQL syntax.

Python (with sqlite3)

# SAFE — use this pattern
import sqlite3

def login(username, password):
    conn = sqlite3.connect('myapp.db')
    cursor = conn.cursor()
    
    # The ? placeholders keep data separate from SQL logic
    query = "SELECT * FROM users WHERE username=? AND password=?"
    cursor.execute(query, (username, password))
    
    return cursor.fetchone()

Notice: the SQL string never contains the user's input. The (username, password) tuple is passed as a separate argument. Even if someone types admin' -- as their username, it's treated as a literal string to look up — not as SQL syntax to execute. The query just won't find a match.
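To confirm the difference, here's a small self-contained check (in-memory SQLite, hypothetical users table) showing the admin' -- payload failing against the parameterized version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def safe_login(username, password):
    # Placeholders: the driver binds the data after the SQL is parsed
    query = "SELECT * FROM users WHERE username=? AND password=?"
    return conn.execute(query, (username, password)).fetchone()

# The injection payload is now just an oddly named user that doesn't exist
print(safe_login("admin' --", "anything"))  # None
print(safe_login("admin", "s3cret"))        # ('admin', 's3cret')
```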

Python (with psycopg2 for PostgreSQL)

import psycopg2

def get_user_profile(user_id):
    conn = psycopg2.connect(DATABASE_URL)
    cursor = conn.cursor()
    
    # %s is the placeholder for psycopg2
    cursor.execute("SELECT * FROM profiles WHERE id = %s", (user_id,))
    
    return cursor.fetchone()

JavaScript/TypeScript (with node-postgres)

import { Pool } from 'pg';

const pool = new Pool();

async function login(username: string, password: string) {
  // $1 and $2 are numbered placeholders
  const result = await pool.query(
    'SELECT * FROM users WHERE username = $1 AND password = $2',
    [username, password]
  );
  
  return result.rows[0];
}

The Rule to Remember

Every time you see user input being concatenated or interpolated into a SQL string — whether it's +, f-strings in Python, template literals in JavaScript, or string formatting in any language — that's the danger sign. The data belongs in the second argument to your execute function, not inside the SQL string itself.


The Even Better Fix: Use an ORM

If parameterized queries are the right tool for custom SQL, Object-Relational Mappers (ORMs) are the right tool for almost everything else. An ORM is a library that handles database interactions for you, and ORMs use parameterized queries internally by design.

Popular ORMs:

  • Python: SQLAlchemy, Django ORM, Peewee
  • JavaScript/TypeScript: Prisma, TypeORM, Drizzle
  • PHP: Eloquent (Laravel)
  • Ruby: ActiveRecord (Rails)

Here's what the same login function looks like with SQLAlchemy:

from sqlalchemy.orm import Session
from models import User

def login(db: Session, username: str, password: str):
    # SQLAlchemy builds safe parameterized queries automatically
    user = db.query(User).filter(
        User.username == username,
        User.password == password
    ).first()
    
    return user

And with Prisma in TypeScript:

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function login(username: string, password: string) {
  // Prisma generates safe SQL — injection is structurally impossible
  const user = await prisma.user.findFirst({
    where: {
      username: username,
      password: password,
    },
  });
  
  return user;
}

With an ORM, SQL injection is structurally impossible in normal usage. You're not writing SQL strings at all — you're describing what you want in your programming language, and the ORM translates it safely.

The catch: ORMs allow raw SQL escape hatches for complex queries. If you use db.execute("SELECT ... " + user_input) inside an ORM, you've defeated the protection. The rule still applies: never concatenate user input into SQL strings, even inside an ORM.
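When you genuinely need raw SQL inside an ORM, the escape hatch itself usually supports bound parameters. Here's a sketch using SQLAlchemy's text() construct with named parameters (the table and column names are illustrative, not from any particular app):

```python
from sqlalchemy import create_engine, text

# In-memory SQLite engine for demonstration
engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE users (username TEXT, password TEXT)"))
    conn.execute(text("INSERT INTO users VALUES ('admin', 's3cret')"))

def find_user(username):
    # :username is a bound parameter, so this raw SQL stays injection-safe
    with engine.connect() as conn:
        return conn.execute(
            text("SELECT * FROM users WHERE username = :username"),
            {"username": username},
        ).fetchone()

print(find_user("admin' --"))  # None: the payload is treated as data
```

The rule is the same one from the parameterized-query section: the data rides in a separate argument, never inside the SQL string.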


Quick Pattern Check: Spot the Vulnerability

When you're reviewing AI-generated code (or your own), run this mental scan:

# RED FLAG: Any of these patterns with user input
query = "SELECT * FROM orders WHERE id=" + order_id          # string concat
query = f"SELECT * FROM orders WHERE id={order_id}"          # f-string
query = "SELECT * FROM orders WHERE id=%s" % order_id        # % formatting
query = "SELECT * FROM orders WHERE id={}".format(order_id)  # .format()

# GREEN: These patterns are safe
cursor.execute("SELECT * FROM orders WHERE id=?", (order_id,))     # sqlite3
cursor.execute("SELECT * FROM orders WHERE id=%s", (order_id,))    # psycopg2
pool.query("SELECT * FROM orders WHERE id = $1", [order_id])        # node-postgres

The test: is user input inside the string, or is it passed as a separate argument? Inside = vulnerable. Separate = safe.


How to Audit Your AI-Generated Code Right Now

If you've built anything with an AI coding tool that touches a database, do this:

# Search your Python codebase for dangerous query patterns
grep -rn "execute(f\"" .                    # f-strings in execute calls
grep -rn 'execute(".*" +' .                # string concatenation
grep -rn "execute(\".*%s\".*%" .            # %-formatting (not parameterized)

# Search JavaScript/TypeScript for the same
grep -rn 'query(`' .                       # template literals in queries
grep -rn 'query(".*" +' .                  # string concatenation

Every hit is a potential SQL injection vulnerability that needs to be rewritten with parameterized queries.


Security First Checklist: SQL Injection

  • Never concatenate user input into SQL strings. Not with +, not with f-strings, not with .format(). Ever.
  • Use parameterized queries for any raw SQL you write. The data goes in the second argument, not the string.
  • Use an ORM (SQLAlchemy, Prisma, Django ORM) for standard CRUD operations — they handle parameterization automatically.
  • Audit AI-generated database code before shipping. Run the grep commands above on any new codebase.
  • Apply the principle of least privilege. Your database user should only have the permissions it needs — SELECT, INSERT, UPDATE on specific tables. Not DROP, not admin access.
  • Review raw SQL escape hatches. If your ORM exposes a .raw() or .execute() method, treat those call sites with the same scrutiny as raw SQL.
  • Enable database query logging in development so you can see exactly what SQL is being sent to the database.
  • Test your own forms. Try typing ' into your input fields and look for database errors in the response. An error means you're vulnerable.
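That last check can even be automated. Here's a tiny illustration (in-memory SQLite, hypothetical table) of why a lone quote is such a reliable probe: against concatenated SQL it produces a syntax error, while a parameterized query simply treats it as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def probe(lookup_fn):
    # True if a lone quote causes a database error: the vulnerability signal
    try:
        lookup_fn("'")
        return False
    except sqlite3.OperationalError:
        return True

def vulnerable_lookup(name):
    return conn.execute("SELECT * FROM users WHERE username='" + name + "'").fetchone()

def safe_lookup(name):
    return conn.execute("SELECT * FROM users WHERE username=?", (name,)).fetchone()

print(probe(vulnerable_lookup))  # True: the quote broke the query
print(probe(safe_lookup))        # False: the quote was just data
```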

Ask The Guild

This week's community prompt:

Open one of your AI-generated projects and run the grep audit above. Did you find any string-concatenated SQL queries? Share what you found (sanitize any sensitive details) and how you fixed it — or drop a question if you're not sure whether a specific pattern is safe. Seeing real examples is how the whole community levels up together.


Next up in Security First: Authentication and Session Management — why your JWT implementation might be handing out skeleton keys.

About Tom Hundley

Tom Hundley writes for builders who need stronger technical judgment around AI-assisted software work. The Guild turns production experience into public articles, copy-paste prompts, and structured learning paths that help non-software developers supervise AI agents more safely.
