35 CVEs in One Month: What Senior Engineers See That AI Doesn't
In January 2026, six CVEs were attributed to AI-generated code. In February, fourteen. In March, thirty-five. The security research firm Rezilion estimates the actual number is between 400 and 700 for Q1 alone — the disclosed CVEs are the ones that were discovered, responsibly reported, and published. Most vulnerabilities don't follow that path.
The exponential growth curve tracks with adoption. More AI-generated code in production means more AI-generated vulnerabilities in production. This is not a reason to stop using AI tools. It's a reason to understand exactly what those tools miss — and why.
I've been doing code reviews for twenty-five years. The pattern-recognition I apply in a review took a long time to build. It doesn't come from reading about vulnerabilities; it comes from having been burned by them. From the time I missed an auth check in a rush and spent a weekend rolling back a breach. From the time a race condition in a payment system created duplicate charges for three hundred customers. From the time a junior engineer logged the full request body — including session tokens — to a log aggregation service that was accessible to everyone in the company.
AI doesn't have those experiences. It has training data. The difference matters.
The Four Patterns That Trip AI Consistently
1. Authentication Bypass via Trusting Client Input
AI-generated code frequently validates identity on the client side and then trusts the result on the server. The most common form: a JWT that's decoded and trusted without signature verification.
```javascript
// AI generates this
const jwt = require('jsonwebtoken');

const decoded = jwt.decode(token); // Does NOT verify the signature
const userId = decoded.sub;        // Attacker can forge any userId
```

```javascript
// The correct pattern
const jwt = require('jsonwebtoken');

const decoded = jwt.verify(token, process.env.JWT_SECRET); // Throws if invalid
const userId = decoded.sub;
```
jwt.decode and jwt.verify look similar. The AI uses decode because it's simpler. The difference is the entire security model of your application.
2. Race Conditions and TOCTOU Vulnerabilities
TOCTOU (Time of Check to Time of Use) vulnerabilities happen when you check a condition, then act on it, and the condition can change between check and action. Classic example: concurrent discount redemption.
```javascript
// AI generates this — vulnerable to a race condition
async function redeemDiscount(userId, discountCode) {
  const discount = await getDiscount(discountCode); // Check: is it valid?
  if (!discount.used) {
    await markDiscountUsed(discountCode);          // Two concurrent requests
    await applyDiscount(userId, discount.value);   // both pass the check
  }
}
```
```javascript
// Safe: use database-level locking (Knex-style transaction shown)
async function redeemDiscount(userId, discountCode) {
  await db.transaction(async (trx) => {
    const discount = await trx('discounts')
      .where({ code: discountCode, used: false })
      .forUpdate() // Locks the row until the transaction commits
      .first();
    if (!discount) throw new Error('Invalid or already used');
    await trx('discounts').where({ code: discountCode }).update({ used: true });
    await applyDiscount(userId, discount.value, trx);
  });
}
```
AI doesn't naturally think about concurrent execution. It writes code that works when one person uses it. Senior engineers think about what happens when a thousand people hit it simultaneously.
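The failure mode is easy to reproduce without a database. Here is a minimal in-memory sketch — the store and helper functions are made up for illustration, not the code above — showing two concurrent requests both passing the check before either acts:

```javascript
// Hypothetical in-memory store standing in for the database.
const discounts = { SAVE10: { used: false, value: 10 } };
let totalApplied = 0;

async function getDiscount(code) { return discounts[code]; }
async function markDiscountUsed(code) { discounts[code].used = true; }
async function applyDiscount(value) { totalApplied += value; }

async function redeemRacy(code) {
  const discount = await getDiscount(code);
  if (!discount.used) {                           // Check…
    await new Promise((r) => setTimeout(r, 10));  // …some latency…
    await markDiscountUsed(code);                 // …then act. Both requests
    await applyDiscount(discount.value);          // already got past the check.
  }
}

Promise.all([redeemRacy('SAVE10'), redeemRacy('SAVE10')]).then(() => {
  console.log(totalApplied); // 20 — a single-use discount applied twice
});
```

The 10ms delay just makes the window between check and act visible; in production, that window is the round-trip to your database, and it's always there.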
3. Secrets in Logs
AI generates logging statements that capture context — which is good for debugging and catastrophic for security when that context includes session tokens, API keys, or PII.
```javascript
// AI generates this
console.log('Processing request:', JSON.stringify(req.body));
// req.body may contain: { email, password, credit_card_number, session_token }
```

```javascript
// Safe: log only what you need, explicitly
console.log('Processing request:', {
  userId: req.user?.id,
  action: req.body.action,
  // Never: req.body.password, req.headers.authorization
});
```
The pattern I recommend: treat every log statement in AI-generated code as a potential data exposure. Read each one and ask "what's the worst thing that could be in this variable?"
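One way to make that question structural rather than ad hoc is an allowlist. A small helper — sketch below, with placeholder field names you'd replace with your own schema — drops everything you didn't explicitly decide to log:

```javascript
// Allowlist of fields that are safe to log. Anything not listed is dropped,
// so a new field added to the request body can't silently start leaking.
const SAFE_LOG_FIELDS = ['userId', 'action', 'requestId'];

function safeLogContext(context) {
  const out = {};
  for (const field of SAFE_LOG_FIELDS) {
    if (context[field] !== undefined) out[field] = context[field];
  }
  return out;
}

console.log('Processing request:', safeLogContext({
  userId: 42,
  action: 'checkout',
  password: 'hunter2',      // dropped
  session_token: 'abc123',  // dropped
}));
// → Processing request: { userId: 42, action: 'checkout' }
```

An allowlist fails safe: a denylist of "known bad" fields misses the next secret someone adds, while an allowlist requires a deliberate decision before anything new reaches the logs.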
4. Insecure Direct Object References (IDOR)
AI generates endpoints that accept an ID and return the corresponding resource without checking whether the requesting user is authorized to access it.
```javascript
// AI generates this
app.get('/api/orders/:orderId', async (req, res) => {
  const order = await db.orders.findById(req.params.orderId);
  return res.json(order);
});
// Any authenticated user can access any order by guessing the ID.
```

```javascript
// Safe: always filter by the requesting user's identity
app.get('/api/orders/:orderId', async (req, res) => {
  const order = await db.orders.findOne({
    id: req.params.orderId,
    userId: req.user.id, // The order must belong to this user
  });
  if (!order) return res.status(404).json({ error: 'Not found' });
  return res.json(order);
});
```
IDOR is an instance of Broken Access Control, the top category in the OWASP Top 10. AI generates it constantly.
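One defense is to make the check hard to forget by centralizing it. Here is a sketch — makeOwnedLoader and the in-memory store are illustrative, not a real library — of a loader that always scopes lookups to the requesting user, so individual handlers can't skip the ownership filter:

```javascript
// findOne(query) is whatever data-access call you already have; the only
// requirement is that it supports compound filters.
function makeOwnedLoader(findOne) {
  return async function loadOwned(resourceId, requestingUserId) {
    const row = await findOne({ id: resourceId, userId: requestingUserId });
    if (!row) {
      // 404 rather than 403: don't confirm the resource exists for other users.
      const err = new Error('Not found');
      err.status = 404;
      throw err;
    }
    return row;
  };
}

// Usage with an in-memory stand-in for db.orders:
const orders = [{ id: '1', userId: 'alice', total: 99 }];
const loadOrder = makeOwnedLoader(async (q) =>
  orders.find((o) => o.id === q.id && o.userId === q.userId)
);

loadOrder('1', 'alice').then((o) => console.log(o.total));     // 99
loadOrder('1', 'mallory').catch((e) => console.log(e.status)); // 404
```

Returning 404 for both "doesn't exist" and "exists but isn't yours" is deliberate: a 403 leaks the fact that the guessed ID is valid.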
The Code Review Instinct
The pattern-recognition I apply in reviews is not a checklist. It's a set of reflexes built from exposure to real failures. When I see a function that accepts a user-provided ID, I automatically think "authorization check?" When I see logging, I think "what's being logged?" When I see a financial operation, I think "race condition?"
You can accelerate building this instinct by doing two things:
Read CVE disclosures. Not the marketing blog posts about breaches — the actual CVE records and the code that caused them. The National Vulnerability Database is public. Reading ten CVEs from real applications teaches you more about attack patterns than most security courses.
Do adversarial review of AI output. When AI generates code, put on the attacker's hat. Ask: "How would I abuse this if I were a malicious user?" Run through the OWASP Top 10 as a checklist. This is slower than accepting the code. It is significantly faster than responding to an incident.
Why This Is the Guild's Mission
The reason we built the AI Coding Guild is precisely this gap. Experienced engineers have pattern-recognition and instinct that took careers to develop. Vibe coders are shipping production software with AI assistance but without that foundation.
The guild's mission is not to slow down vibe coders or to make AI tools feel dangerous. It's to compress the transfer of hard-won knowledge. The CVE count going from 6 to 35 in three months is a knowledge gap problem. The solution is experienced engineers sharing what they know — the specific patterns, the real incidents, the concrete checks — with the people who need it.
That's what these articles are. That's what the guild is.
What to Do Next
- Add the four patterns to your AI review checklist: JWT verify vs decode, race conditions in financial operations, secrets in logs, authorization on every resource fetch.
- Read one CVE disclosure per week. Pick any recent one from the NVD (nvd.nist.gov) related to web applications. Read the code. Understand the pattern. It builds faster than you'd expect.
- If you're an experienced engineer: consider contributing to the guild's Q&A. The 35 CVEs in March represent developers who needed your pattern-recognition and didn't have it. That's a solvable problem.
The knowledge exists. The gap is in distribution. Let's close it.
🤖 Ghostwritten by Claude Opus 4.6 · Curated by Tom Hundley