Your First Security Audit: A 15-Minute Checklist
Security First — Part 7 of 30
The Demo Link That Wasn't a Demo
In October 2025, a German AI company called localmind.ai posted a video on LinkedIn showing off their product. The video included a link to what looked like a public demo instance. A curious prospective customer clicked it, registered a free account, and started poking around.
They weren't in a sandbox. They weren't in a demo environment.
They were in the company's live production system — with Microsoft 365 administrator privileges.
The prospective customer could read customer emails. They could see who all the paying subscribers were. They could have done almost anything. Instead, they sent a warning email to every customer they could find, detailing the breach. localmind.ai shut down all services while they scrambled to investigate.
The cause? Classic vibe-coding misconfiguration. The authentication and role-based access logic had never been audited. Nobody had asked the question: Can a random new user get admin access?
That question takes about 30 seconds to test. But it requires knowing to ask it.
That's what today's checklist is for.
Why AI-Generated Code Needs Auditing More, Not Less
If you're using AI coding tools to build your app, here's something you need to internalize: AI-generated code is statistically more likely to contain security vulnerabilities than code written by experienced developers.
This isn't an opinion — it's documented. A 2025 study cited by the Cloud Security Alliance found that 62% of AI-generated code solutions contain design flaws or known security vulnerabilities, even when developers used the latest AI models. A separate analysis found that 45% of AI-written code had exploitable flaws.
Why? Two reasons.
First, AI models learn from the entire internet, including the vast amount of insecure code that exists there. If a vulnerable pattern — like raw SQL string concatenation — appears thousands of times in training data, the model produces it readily. It doesn't understand why the pattern is dangerous. It just knows it's common.
Second, AI optimizes for "working," not "secure." Ask an AI to evaluate a user-submitted math expression, and it might hand you `eval(expression)` — one line, works perfectly, demonstrates the concept. It also opens a remote code execution door an attacker could drive a truck through.
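The fix for this particular pattern is to never evaluate user input as code. Here's a minimal sketch of a safer alternative in Python: parse the expression with the standard-library `ast` module and allow only numeric literals and arithmetic operators, so anything else is rejected. (The operator whitelist here is illustrative — extend it to whatever your app genuinely needs, and nothing more.)

```python
import ast
import operator

# Sketch of a safer alternative to eval() for user-submitted math:
# walk the parsed AST and allow only numeric literals and arithmetic,
# so payloads like "__import__('os').system(...)" are rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expr):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 + 3 * 4"))  # 14
```

Same one-line convenience for the caller, but a malicious payload raises `ValueError` instead of executing.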
The IT Pro report from February 2026 named this "the illusion of correctness" — AI-generated code looks right, runs right, and fails under adversarial conditions that normal testing never simulates.
A security audit is how you break that illusion before an attacker does.
The 15-Minute Checklist
This audit is structured in five areas. Set a timer. Go fast. You're not looking for everything — you're looking for the most common, most catastrophic issues that AI-generated code produces.
Area 1: Exposed Secrets (3 minutes)
API keys, passwords, and tokens hardcoded in source files are the fastest path to a full compromise. The localmind.ai breach started here. The $87,500 Stripe charge from last week's article started here.
Run this in your terminal:
# Scan for common secret patterns in your codebase
grep -rn --include="*.js" --include="*.ts" --include="*.py" --include="*.env" \
  -E "(api_key|apikey|api-key|secret|password|token|ACCESS_KEY|SECRET_KEY)\s*=\s*['\"][A-Za-z0-9_-]{8,}" \
  . --exclude-dir=node_modules --exclude-dir=.git
Also check your .env file isn't committed to git:
# Check if .env is tracked by git (it should NOT be)
git ls-files .env
# If it returns a path, remove it immediately:
git rm --cached .env
echo ".env" >> .gitignore
git commit -m "remove .env from tracking"
And audit your git history — once a secret is committed, it lives in history even after deletion:
# Search git history for a known secret name, e.g. 'SECRET_KEY'
git log --all --full-history -S 'SECRET_KEY' --source
What you're looking for: Any hardcoded string that looks like a credential. If you find one, rotate the key immediately — assume it was already found.
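If you want this scan as a reusable script, the grep above can be sketched in a few lines of Python. The patterns here are illustrative, not exhaustive — a dedicated scanner will catch far more, but this catches the obvious offenders:

```python
import re

# Minimal sketch of the secrets grep as a reusable function.
# The pattern names are illustrative, not exhaustive.
SECRET_RE = re.compile(
    r"(api[_-]?key|secret|password|token)\s*=\s*['\"]([A-Za-z0-9_\-]{8,})['\"]",
    re.IGNORECASE,
)

def find_secrets(text):
    """Return every match that looks like a hardcoded credential."""
    return [m.group(0) for m in SECRET_RE.finditer(text)]

print(find_secrets('API_KEY = "sk_live_abcdef123456"'))
```

Note what it deliberately does not flag: `password = os.environ["DB_PASS"]` reads the credential from the environment, which is exactly the pattern you want.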
Area 2: Dependency Vulnerabilities (3 minutes)
In September 2025, attackers compromised 18 widely-used npm packages — including chalk, debug, ansi-styles, and strip-ansi — that had a combined 2.6 billion weekly downloads. The injected code silently redirected cryptocurrency wallet transactions. Countless downstream apps were potentially affected.
Your dependencies are someone else's code. You need to audit them.
For JavaScript/TypeScript projects:
# Check for known vulnerabilities in your dependencies
npm audit
# Auto-fix low-risk issues
npm audit fix
# Focus the report on high-severity issues and above
npm audit --audit-level=high
For Python projects:
# Install pip-audit (Google's open-source tool)
pip install pip-audit
# Run the audit
pip-audit
# Or use Safety for a second opinion
pip install safety
safety check
What you're looking for: HIGH and CRITICAL severity vulnerabilities. Don't try to fix everything — triage to the high-severity issues that affect production code paths.
Area 3: Authentication & Authorization (4 minutes)
This is the area that burned localmind.ai. It's also the area AI coding tools are worst at, because authentication bugs are logic bugs — they're not a missing bracket or a typo. They're a question the code never thought to ask.
Do this manually — open your app and try to break it:
- Create a regular user account. Try to access an admin URL directly: `/admin`, `/dashboard/admin`, `/api/admin/users`. Do you get blocked or do you get in?
- Test horizontal privilege escalation. If User A creates a resource (a post, a document, an order), can User B access it by guessing the ID? Try changing the ID in a URL: `/api/posts/123` → `/api/posts/124`. Does the app return data that belongs to another user?
- Test your API endpoints without authentication. Open a terminal and try calling your own API without a token:
# Test an authenticated endpoint without any token
curl -X GET https://yourapp.com/api/user/profile
# You should get a 401 Unauthorized, not actual data
# If you get data, you have a missing auth check
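That curl check is easy to automate so it runs before every deploy. Below is a minimal sketch using only Python's standard library; the `PROTECTED` list and the base URL are placeholders for your own app's endpoints:

```python
import urllib.error
import urllib.request

# Sketch of an automated version of the curl check. The PROTECTED list
# is a placeholder: name every endpoint of yours that should refuse
# unauthenticated requests.
PROTECTED = ["/api/user/profile", "/api/admin/users"]

def unauthenticated_status(base_url, path):
    """Return the HTTP status an anonymous client gets for this path."""
    try:
        with urllib.request.urlopen(base_url + path) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

def audit(base_url):
    """Return the endpoints that leak instead of returning 401/403."""
    failures = []
    for path in PROTECTED:
        status = unauthenticated_status(base_url, path)
        if status not in (401, 403):
            failures.append(f"{path} returned {status}, expected 401/403")
    return failures
```

Run `audit("https://yourapp.com")` and treat a non-empty result as a release blocker.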
- Check your AI-generated route handlers. AI tools often implement authentication at the page level but forget to protect the underlying API route. Look for patterns like this in your code:
// This protects the page, but NOT the API route it calls
// The /api/admin/users endpoint might still be public!
export const getServerSideProps = async (context) => {
const session = await getSession(context);
if (!session?.user?.isAdmin) {
return { redirect: { destination: '/login' } };
}
// ...
};
Every API route that returns sensitive data needs its own auth check.
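One way to make that rule structural rather than ad hoc is a per-route auth guard. Here's a framework-agnostic sketch in Python — the `request` dict is a stand-in for your framework's request/session object, so adapt the lookup to whatever auth middleware you actually use:

```python
from functools import wraps

# Framework-agnostic sketch: the API handler itself enforces auth,
# instead of trusting a page-level redirect. The `request` dict is a
# stand-in for your framework's request/session object.
def require_admin(handler):
    @wraps(handler)
    def wrapper(request):
        user = request.get("user")
        if not user or not user.get("is_admin"):
            return {"status": 401, "body": "Unauthorized"}
        return handler(request)
    return wrapper

@require_admin
def list_users(request):
    # Sensitive data: only reachable once the decorator has passed.
    return {"status": 200, "body": ["alice", "bob"]}

print(list_users({})["status"])                            # 401
print(list_users({"user": {"is_admin": True}})["status"])  # 200
```

The point of the decorator is that forgetting it is visible in a code review, whereas a missing inline `if` check is invisible.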
Area 4: Input Validation & Injection (3 minutes)
The OWASP Top 10 2025 still lists injection vulnerabilities in the top three. SQL injection, command injection, XSS — these have been around for 25 years. AI-generated code still produces them.
Search your codebase for dangerous patterns:
# Look for raw SQL string concatenation (SQL injection risk)
grep -rn --include="*.py" --include="*.js" --include="*.ts" \
-E '(query|sql|execute).*\+.*req\.' .
# Look for eval() usage (code injection risk)
grep -rn --include="*.js" --include="*.ts" \
'eval(' . --exclude-dir=node_modules
# Look for innerHTML assignment (XSS risk)
grep -rn --include="*.js" --include="*.ts" --include="*.jsx" --include="*.tsx" \
'innerHTML' . --exclude-dir=node_modules
Python example of what dangerous looks like:
# DANGEROUS: AI often generates this
def get_user(username):
query = "SELECT * FROM users WHERE username = '" + username + "'"
return db.execute(query)
# SAFE: Use parameterized queries instead
def get_user(username):
query = "SELECT * FROM users WHERE username = ?"
return db.execute(query, (username,))
If you're using an ORM like Prisma, Drizzle, or SQLAlchemy, you're mostly protected from SQL injection by default — but double-check any place where you're using raw query methods.
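If you want to see the difference concretely, here is a self-contained demo of the two queries above against an in-memory SQLite database seeded with one fake user:

```python
import sqlite3

# Self-contained demo: the same lookup done both ways against an
# in-memory SQLite database with a single fake user.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic injection string a user could submit

# Concatenation: the payload rewrites the WHERE clause and matches every row.
leaked = db.execute(
    "SELECT * FROM users WHERE username = '" + payload + "'"
).fetchall()
print(len(leaked))  # 1 -- rows dumped without knowing any username

# Parameterized: the payload is treated as a literal string; nothing matches.
safe = db.execute(
    "SELECT * FROM users WHERE username = ?", (payload,)
).fetchall()
print(len(safe))  # 0
```

Same payload, same table: concatenation leaks every row, the placeholder version leaks nothing.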
Area 5: Run a Free Automated Scanner (2 minutes)
Human eyes miss things. Automated tools catch things humans miss. Use both.
Semgrep is free, fast, and runs locally — no account required for the open-source version:
# Install Semgrep
pip install semgrep
# Run against your codebase with the OWASP Top 10 ruleset
semgrep --config=p/owasp-top-ten .
# Or run the full security audit ruleset
semgrep --config=p/security-audit .
A 2025 benchmark from sanj.dev rated Semgrep at 88% detection accuracy on AI-generated code — not perfect, but dramatically better than nothing.
Snyk offers a free tier with IDE integration and more detailed fix suggestions:
# Install Snyk CLI
npm install -g snyk
# Authenticate (free account)
snyk auth
# Test your project
snyk test
# Test your container image if you're using Docker
snyk container test your-image:tag
In StackHawk's 2025 guide to code security scanning tools, Semgrep is the top recommendation for teams that want fast, customizable scanning with minimal false positives. Run it before every major deployment.
The Honest Truth About AI Coding Tools and Security
In 2025, Fortune reported that AI coding tools saw their first real security exploits — and more broadly, that the agentic nature of these tools makes every piece of AI-generated code a candidate for review. Critical vulnerabilities were found in Cursor (CVE-2025-54135), Anthropic's MCP server (CVE-2025-53109), and Claude Code (CVE-2025-55284). These aren't obscure tools used by careless developers — these are the mainstream tools used by careful ones.
The takeaway isn't "stop using AI coding tools." The takeaway is: treat all AI-generated code the way a senior engineer treats junior developer code. Review it. Question it. Test it adversarially. The AI is brilliant at making things work. You are responsible for making things safe.
And a Wiz study referenced by Kaspersky found that 20% of vibe-coded apps have serious vulnerabilities or configuration errors. One in five. You don't want to be that one.
This 15-minute checklist doesn't guarantee a secure app. It guarantees you're not the low-hanging fruit.
Your 15-Minute Security Audit Checklist
Copy this. Run it before every launch.
Before you ship:
- Secrets scan — Run `grep` for hardcoded API keys, passwords, and tokens in source files
- Git history check — Verify `.env` is in `.gitignore` and not in git history
- Dependency audit — Run `npm audit` or `pip-audit`; fix all HIGH and CRITICAL items
- Admin access test — Log in as a normal user, try to access `/admin` URLs directly
- Horizontal privilege test — Try to access another user's resources by guessing/incrementing IDs
- API endpoint test — Hit your authenticated API endpoints with `curl` and no token; verify 401 responses
- Route handler review — Confirm every API route has its own auth check, not just the page
- SQL injection scan — Search codebase for string-concatenated SQL queries
- Injection pattern scan — Search for `eval()` and raw `innerHTML` assignments
- Semgrep scan — Run `semgrep --config=p/owasp-top-ten .` and review findings
After you launch:
- Set a monthly reminder to run `npm audit` or `pip-audit` again
- Subscribe to security advisories for your major dependencies
- Monitor your logs for unusual access patterns (repeated 404s on `/admin` paths = someone probing)
Ask The Guild
Community prompt: Run the Semgrep OWASP scan on one of your projects this week:
pip install semgrep
semgrep --config=p/owasp-top-ten .
What did it find? Were you surprised? Drop your results (not the sensitive details — just the categories of issues) in the #security channel. Bonus points if you fixed something and want to share what it was. The more we share our "oh no" moments as a community, the fewer of them we all have.
Next in Security First: Part 8 — HTTPS Isn't Enough: What TLS Actually Protects (And What It Doesn't).