The Dev Bridge — Part 1

35 CVEs in One Month: What Senior Engineers See That AI Doesn't

Written by claude-opus-4-6 · Edited by claude-opus-4-6
security · code-review · cve · senior-engineering

In January 2026, six CVEs were attributed to AI-generated code. In February, fourteen. In March, thirty-five. The security research firm Rezilion estimates the actual number is between 400 and 700 for Q1 alone — the disclosed CVEs are the ones that were discovered, responsibly reported, and published. Most vulnerabilities don't follow that path.

The exponential growth curve tracks with adoption. More AI-generated code in production means more AI-generated vulnerabilities in production. This is not a reason to stop using AI tools. It's a reason to understand exactly what those tools miss — and why.

I've been doing code reviews for twenty-five years. The pattern-recognition I apply in a review took a long time to build. It doesn't come from reading about vulnerabilities; it comes from having been burned by them. From the time I missed an auth check in a rush and spent a weekend rolling back a breach. From the time a race condition in a payment system created duplicate charges for three hundred customers. From the time a junior engineer logged the full request body — including session tokens — to a log aggregation service that was accessible to everyone in the company.

AI doesn't have those experiences. It has training data. The difference matters.

The Four Patterns That Trip AI Consistently

1. Authentication Bypass via Trusting Client Input

AI-generated code frequently validates identity on the client side and then trusts the result on the server. The most common form: a JWT that's decoded and trusted without signature verification.

// AI generates this
const decoded = jwt.decode(token);  // Does NOT verify signature
const userId = decoded.sub;         // Attacker can forge any userId

// The correct pattern
const decoded = jwt.verify(token, process.env.JWT_SECRET);  // Throws if invalid
const userId = decoded.sub;

jwt.decode and jwt.verify look similar. The AI uses decode because it's simpler. The difference is the entire security model of your application.

2. Race Conditions and TOCTOU Vulnerabilities

TOCTOU (Time of Check to Time of Use) vulnerabilities happen when you check a condition, then act on it, and the condition can change between check and action. Classic example: concurrent discount redemption.

// AI generates this — vulnerable to race condition
async function redeemDiscount(userId, discountCode) {
  const discount = await getDiscount(discountCode);  // Check: is it valid?
  if (!discount.used) {
    await markDiscountUsed(discountCode);             // Two concurrent requests
    await applyDiscount(userId, discount.value);      // both pass the check
  }
}

// Safe: use database-level locking
async function redeemDiscount(userId, discountCode) {
  await db.transaction(async (trx) => {
    const discount = await trx('discounts')
      .where({ code: discountCode, used: false })
      .forUpdate()                                    // Locks the row
      .first();
    if (!discount) throw new Error('Invalid or already used');
    await trx('discounts').where({ code: discountCode }).update({ used: true });
    await applyDiscount(userId, discount.value, trx);
  });
}

AI doesn't naturally think about concurrent execution. It writes code that works when one person uses it. Senior engineers think about what happens when a thousand people hit it simultaneously.
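The window is easy to see in a simulation. Here is a minimal in-memory version of the vulnerable handler above; tick stands in for a database round-trip, and all the names are illustrative. Two concurrent calls both snapshot the discount as unused before either write lands:

```javascript
// In-memory stand-in for the vulnerable handler; all names are illustrative
const discounts = { SAVE10: { used: false, value: 10 } };
let applied = 0;

// Simulates a database round-trip
const tick = () => new Promise((resolve) => setImmediate(resolve));

async function getDiscount(code) {
  await tick();
  return { ...discounts[code] }; // a snapshot taken at check time
}

async function markDiscountUsed(code) {
  await tick();
  discounts[code].used = true;
}

async function applyDiscount(value) {
  await tick();
  applied += value;
}

// Same check-then-act shape as the vulnerable version
async function redeemDiscount(code) {
  const discount = await getDiscount(code);
  if (!discount.used) {
    await markDiscountUsed(code);
    await applyDiscount(discount.value);
  }
}

// Two "users" hit the endpoint at the same moment
const demoResult = Promise.all([
  redeemDiscount('SAVE10'),
  redeemDiscount('SAVE10'),
]).then(() => applied);

demoResult.then((total) => console.log(total)); // 20: the discount applied twice
```

The row-level lock in the safe version closes exactly this window: the second transaction blocks on forUpdate until the first commits, then finds no row matching used: false.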

3. Secrets in Logs

AI generates logging statements that capture context — which is good for debugging and catastrophic for security when that context includes session tokens, API keys, or PII.

// AI generates this
console.log('Processing request:', JSON.stringify(req.body));
// req.body may contain: { email, password, credit_card_number, session_token }

// Safe: log only what you need, explicitly
console.log('Processing request:', {
  userId: req.user?.id,
  action: req.body.action,
  // Never: req.body.password, req.headers.authorization
});

The pattern I recommend: treat every log statement in AI-generated code as a potential data exposure. Read each one and ask "what's the worst thing that could be in this variable?"
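One way to make that review mechanical is an allowlist: a log payload can only contain fields you have explicitly named, so a new secret added to the request body can never leak by default. A minimal sketch, where pickForLog and SAFE_FIELDS are illustrative names rather than a library API:

```javascript
// Allowlist logging: a log payload can only contain fields named here.
// pickForLog and SAFE_FIELDS are illustrative, not a library API.
const SAFE_FIELDS = ['action', 'resourceId', 'status'];

function pickForLog(body) {
  const out = {};
  for (const key of SAFE_FIELDS) {
    if (key in body) out[key] = body[key];
  }
  return out;
}

// A request body that mixes routine fields with secrets
const reqBody = {
  action: 'checkout',
  password: 'hunter2',
  session_token: 'abc123',
};

console.log('Processing request:', pickForLog(reqBody));
// Only { action: 'checkout' } reaches the logger; the secrets never can
```

Allowlisting beats denylisting here for the same reason it does everywhere in security: a denylist fails open when someone adds a new sensitive field, while an allowlist fails closed.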

4. Insecure Direct Object References (IDOR)

AI generates endpoints that accept an ID and return the corresponding resource without checking whether the requesting user is authorized to access it.

// AI generates this
app.get('/api/orders/:orderId', async (req, res) => {
  const order = await db.orders.findById(req.params.orderId);
  return res.json(order);
});
// Any authenticated user can access any order by guessing the ID.

// Safe: always filter by the requesting user's identity
app.get('/api/orders/:orderId', async (req, res) => {
  const order = await db.orders.findOne({
    id: req.params.orderId,
    userId: req.user.id  // Must belong to this user
  });
  if (!order) return res.status(404).json({ error: 'Not found' });
  return res.json(order);
});

Broken access control, the category that contains IDOR, sits at the top of the OWASP Top 10, and IDOR is its most common form in modern web applications. AI generates it constantly.
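The ownership rule is small enough to demonstrate end to end. A minimal in-memory model, where the store and findOrderFor are illustrative names: every lookup is scoped by the requester's identity, so another user's resource is simply not findable.

```javascript
// Minimal in-memory model of the ownership check; names are illustrative
const orders = [
  { id: 'o1', userId: 'alice', total: 40 },
  { id: 'o2', userId: 'bob', total: 99 },
];

// Scope every lookup by the requester's identity, not just the resource ID
function findOrderFor(userId, orderId) {
  return orders.find((o) => o.id === orderId && o.userId === userId) ?? null;
}

console.log(findOrderFor('alice', 'o1')); // alice sees her own order
console.log(findOrderFor('alice', 'o2')); // null: bob's order is invisible to alice
```

Note that the corrected handler above returns 404, not 403, when the scoped lookup comes back empty: a 403 would confirm to an attacker that the guessed ID exists.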

The Code Review Instinct

The pattern-recognition I apply in reviews is not a checklist. It's a set of reflexes built from exposure to real failures. When I see a function that accepts a user-provided ID, I automatically think "authorization check?" When I see logging, I think "what's being logged?" When I see a financial operation, I think "race condition?"

You can accelerate building this instinct by doing two things:

  1. Read CVE disclosures. Not the marketing blog posts about breaches — the actual CVE records and the code that caused them. The National Vulnerability Database is public. Reading ten CVEs from real applications teaches you more about attack patterns than most security courses.

  2. Do adversarial review of AI output. When AI generates code, put on the attacker's hat. Ask: "How would I abuse this if I were a malicious user?" Run through the OWASP Top 10 as a checklist. This is slower than accepting the code. It is significantly faster than responding to an incident.

Why This Is the Guild's Mission

The reason we built the AI Coding Guild is precisely this gap. Experienced engineers have pattern-recognition and instinct that took careers to develop. Vibe coders are shipping production software with AI assistance but without that foundation.

The guild's mission is not to slow down vibe coders or to make AI tools feel dangerous. It's to compress the transfer of hard-won knowledge. The CVE count going from 6 to 35 in three months is a knowledge gap problem. The solution is experienced engineers sharing what they know — the specific patterns, the real incidents, the concrete checks — with the people who need it.

That's what these articles are. That's what the guild is.

What to Do Next

  1. Add the four patterns to your AI review checklist: JWT verify vs decode, race conditions in financial operations, secrets in logs, authorization on every resource fetch.
  2. Read one CVE disclosure per week. Pick any recent one from the NVD (nvd.nist.gov) related to web applications. Read the code. Understand the pattern. It builds faster than you'd expect.
  3. If you're an experienced engineer: consider contributing to the guild's Q&A. The 35 CVEs in March represent developers who needed your pattern-recognition and didn't have it. That's a solvable problem.

The knowledge exists. The gap is in distribution. Let's close it.


🤖 Ghostwritten by Claude Opus 4.6 · Curated by Tom Hundley
