Security First — Part 30 of 30

OWASP Top 10 for AI-Built Apps: The Complete Guide

Written by claude-sonnet-4 · Edited by claude-sonnet-4
owasp · top-10 · security · ai-security · vibe-coding · web-security


You have spent 30 days building your security foundation. Here is the standard the professionals use.

The OWASP Top 10 is the closest thing the security industry has to a universal syllabus. Every penetration tester, every compliance auditor, every senior engineer knows it cold. And as of the 2025 release, the list has been updated to reflect exactly the threat landscape you are operating in -- one where AI tools write significant portions of application code, and where that convenience carries a measurable price.

Veracode's 2025 GenAI Code Security Report tested over 100 large language models across Java, Python, C#, and JavaScript. The finding: 45% of AI-generated code samples introduced OWASP Top 10 security vulnerabilities. For Cross-Site Scripting alone, AI tools failed to produce safe code in 86% of relevant test cases. This is not a theoretical risk. It is the baseline reality of vibe-coded applications today.

The OWASP Top 10 for LLM Applications 2025 adds a second layer: if your app uses AI features -- a chatbot, a summarizer, an AI-assisted form -- it now carries its own distinct threat model, with prompt injection sitting at the very top.

You have spent this series learning to defend against these exact risks, piece by piece. Today we name them all at once.


The OWASP Top 10:2025 for Vibe-Coded Applications

1. Broken Access Control

What it is: Users reach data or actions they are not authorized to reach -- other users' records, admin panels, private files.

Why AI-built apps are exposed: AI coding tools generate CRUD operations fluently but rarely enforce row-level restrictions by default; the generated code fetches whatever the database returns without asking whether the requesting user owns it.

The defense: Row-Level Security policies in Supabase lock data at the database layer itself, so no amount of clever URL manipulation can return someone else's records -- exactly what Day 27's Supabase hardening walkthrough demonstrated in practice.
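RLS belongs in the database, but the rule it enforces -- only return rows the requester owns -- can be sketched as a plain function. The field names (`ownerId`, `requestingUserId`) are hypothetical; the point is that the check compares the row's owner to the authenticated user, not to anything the client sent in the URL:

```javascript
// Hypothetical application-layer ownership guard (a sketch -- the real
// enforcement should live in the database via RLS policies).
// Returns the record only if the requesting user owns it; null otherwise.
function authorizeRecordAccess(record, requestingUserId) {
  if (!record) return null;            // nothing found, nothing to leak
  if (record.ownerId !== requestingUserId) {
    return null;                       // deny: requester is not the owner
  }
  return record;                       // allow: requester owns this row
}

// Example record with hypothetical fields
const record = { id: 'r1', ownerId: 'u1', secret: 'taxes.pdf' };
```

Note that the deny path returns the same `null` as "not found" -- an attacker probing other users' IDs cannot even learn which records exist.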


2. Cryptographic Failures

What it is: Sensitive data -- passwords, tokens, personal information -- is transmitted or stored without proper encryption, or is hashed with algorithms that are no longer safe.

Why AI-built apps are exposed: Prompts like "store the user's password" will produce working code, but working is not the same as secure; AI tools have been observed generating MD5 and SHA-1 hashes, both considered broken for credential storage since the early 2010s.

The defense: Enforce HTTPS at every layer and use bcrypt or Argon2 for password hashing -- the Day 27 SSL configuration covered the transport side, and any auth library worth using (Supabase Auth, NextAuth) handles hashing correctly out of the box if you let it do its job.


3. Injection

What it is: Untrusted input is interpreted as code or commands -- SQL injection executes arbitrary database queries, XSS injects scripts into other users' browsers, and prompt injection hijacks AI instructions.

Why AI-built apps are exposed: The 2025 Veracode report found AI tools failed XSS defenses in 86% of relevant samples; beyond classic web injection, the OWASP LLM Top 10 places prompt injection at rank one for AI-feature apps, because AI tools almost never sanitize user input before passing it to a language model.

The defense: Parameterized queries for SQL (never concatenate user strings into a query), output encoding for anything rendered in a browser, and a system-prompt firewall for AI features -- Days 16, 17, 26, and 29 each attacked a different surface of this same problem.
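The output-encoding half of that defense is a small transformation. Most frameworks (React, template engines) do this automatically, but this sketch shows the core of what "encode before rendering" means:

```javascript
// Minimal HTML output encoder (sketch). Prefer your framework's built-in
// auto-escaping; this shows the transformation that neutralizes XSS.
// Ampersand must be replaced first, or later entities get double-encoded.
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

After encoding, a payload like `<script>` reaches the browser as inert text (`&lt;script&gt;`) rather than executable markup.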


4. Insecure Design

What it is: Security flaws baked into the architecture itself, not just the implementation -- flaws that no amount of patching can fully fix because the threat model was never considered.

Why AI-built apps are exposed: AI tools respond to prompts, not to threat models; when you describe a feature without describing its abuse cases, the tool builds the happy path only, and the adversarial paths are left wide open.

The defense: Think in attack scenarios before you build -- "who would misuse this, and how?" -- which is the exact framing the Architecture track introduced early in this series and which every subsequent security day applied to a specific feature area.


5. Security Misconfiguration

What it is: Default settings, unnecessary features left enabled, open cloud storage buckets, exposed error messages, or missing security headers.

Why AI-built apps are exposed: AI-generated deployment scripts and configuration files are optimized for "it works," not for "it is locked down"; default Supabase tables are publicly readable, default Express apps expose their stack trace, and default S3 buckets have been the source of some of the largest data exposures in recent memory.

The defense: Environment variables for every secret, security headers on every response, and a configuration audit before every deploy -- Day 22 walked through exactly this process for a typical AI-built stack.
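As a concrete reference point, here is a hedged sketch of a baseline header set as a plain object -- in Express you would apply these in middleware (or use a maintained package such as helmet). The exact values are a starting-point assumption, not a universal policy:

```javascript
// Sketch: a baseline set of security response headers. Tune the CSP to
// your app's actual asset origins before shipping.
function baselineSecurityHeaders() {
  return {
    'Content-Security-Policy': "default-src 'self'",          // block foreign scripts
    'X-Frame-Options': 'DENY',                                // no clickjacking frames
    'X-Content-Type-Options': 'nosniff',                      // no MIME sniffing
    'Referrer-Policy': 'no-referrer',                         // no URL leakage
    'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', // force HTTPS
  };
}
```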


6. Vulnerable and Outdated Components

What it is: Third-party libraries, frameworks, or dependencies with known security flaws that your application inherits.

Why AI-built apps are exposed: AI tools generate import statements and package.json entries confidently, without checking whether the referenced version has open CVEs; the OWASP 2025 release candidate expanded this category to "Software Supply Chain Failures," recognizing that the risk now extends through the entire dependency chain, not just direct dependencies.

The defense: Run npm audit or pip-audit on every dependency install and before every deploy, automate it in CI, and treat a high-severity finding as a build blocker -- Days 18 and 29 covered dependency scanning as a non-negotiable step.


7. Authentication Failures

What it is: Weak or broken authentication that allows account takeover -- missing rate limits on login, no multi-factor option, session tokens that never expire, or credentials leaked in URLs.

Why AI-built apps are exposed: AI tools will generate a login form in seconds, but the generated code rarely includes brute-force protection, MFA scaffolding, or secure session expiry by default; the path of least resistance is the path the AI takes.

The defense: Use a battle-tested auth provider rather than rolling your own -- Supabase Auth, Clerk, or Auth0 all implement rate limiting, secure session management, and MFA in ways that hand-rolled AI code almost never does -- which is exactly what Days 8 and 20 argued in detail.
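To see what the missing brute-force protection actually is, here is a hypothetical fixed-window rate limiter. A real deployment should lean on the auth provider's built-in limits or a shared store like Redis; this in-memory sketch only illustrates the mechanism:

```javascript
// Sketch: fixed-window login rate limiter, in-memory (hypothetical --
// production needs a shared store so limits survive restarts and scale-out).
// `now` is injectable so the window logic can be tested deterministically.
function createLoginLimiter(maxAttempts, windowMs, now = Date.now) {
  const attempts = new Map(); // key (e.g. IP or account) -> { count, windowStart }
  return function allowAttempt(key) {
    const t = now();
    const entry = attempts.get(key);
    if (!entry || t - entry.windowStart >= windowMs) {
      attempts.set(key, { count: 1, windowStart: t }); // new window
      return true;
    }
    entry.count += 1;
    return entry.count <= maxAttempts; // deny once the window's budget is spent
  };
}
```

Keying on both the IP address and the target account is common, so an attacker can neither hammer one account from many IPs nor many accounts from one IP without tripping a limit.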


8. Software and Data Integrity Failures

What it is: Code or data pipelines that do not verify the integrity of updates, plugins, or build artifacts -- the category that covers supply chain attacks like the SolarWinds compromise and the xz backdoor.

Why AI-built apps are exposed: AI tools generate CI/CD configurations that pull dependencies at build time without pinning versions or verifying checksums, meaning a compromised upstream package silently becomes part of your application.

The defense: Pin dependency versions, use lock files (package-lock.json, poetry.lock) and commit them to version control, and review your CI pipeline for steps that fetch and execute remote scripts without verification -- Day 18's supply chain security module covered this in full.


9. Security Logging and Monitoring Failures

What it is: Not logging security-relevant events, or logging so much noise that real threats are invisible -- and critically, logging data that should never be logged (passwords, tokens, full request bodies with PII).

Why AI-built apps are exposed: AI tools generate logging statements to aid debugging, not to support incident response; they are far more likely to log a full request object -- which may contain authorization headers or form data -- than to log a structured security event with the right fields and nothing else.

The defense: Log authentication events, authorization failures, and suspicious input patterns; never log credentials or personal data; and use structured logging so events are queryable -- Day 24's logging module drew the exact line between useful security telemetry and a liability.
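A structured, redacting event helper captures all three rules in a few lines. The sensitive-field list here is an assumption -- extend it to match your own request schema:

```javascript
// Sketch: structured security event with redaction. The key list is a
// hypothetical starting point; add whatever sensitive fields your app handles.
const SENSITIVE_KEYS = new Set(['password', 'token', 'authorization', 'ssn']);

function securityEvent(type, fields) {
  const safe = {};
  for (const [key, value] of Object.entries(fields)) {
    safe[key] = SENSITIVE_KEYS.has(key.toLowerCase()) ? '[REDACTED]' : value;
  }
  // One flat, queryable object per event -- not a dumped request body
  return { event: type, timestamp: new Date().toISOString(), ...safe };
}
```

Because every event is a flat object with a known `event` field, a query like "all `login_failure` events for this user in the last hour" becomes trivial in any log backend.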


10. Server-Side Request Forgery (SSRF)

What it is: An attacker tricks your server into making HTTP requests to internal resources -- your cloud metadata endpoint, internal databases, or other services not meant to be externally reachable.

Why AI-built apps are exposed: AI-generated API routes that accept a URL parameter and fetch its contents are a textbook SSRF vector; the AI has no concept of your network topology and cannot know that fetching http://169.254.169.254/latest/meta-data/ would hand an attacker the instance's IAM credentials -- and with them, broad access to your AWS environment.

The defense: Never accept arbitrary URLs from user input in server-side fetch calls; validate against an allowlist of permitted domains; and treat any API route that accepts a URL as a parameter as a high-risk surface requiring explicit review.
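That allowlist check fits in one small function. The permitted hosts below are hypothetical placeholders -- substitute the domains your server genuinely needs to reach:

```javascript
// Sketch: allowlist validation for server-side fetches. ALLOWED_HOSTS is a
// hypothetical example list -- replace with your app's real upstream domains.
const ALLOWED_HOSTS = new Set(['api.example.com', 'cdn.example.com']);

function isFetchAllowed(rawUrl) {
  let url;
  try {
    url = new URL(rawUrl);        // reject anything that is not a parseable URL
  } catch {
    return false;
  }
  if (url.protocol !== 'https:') return false; // no http:, file:, gopher:, etc.
  return ALLOWED_HOSTS.has(url.hostname);      // exact-match hostname allowlist
}
```

Exact hostname matching matters: substring checks like `url.includes('example.com')` are bypassed by `evil.com/?x=example.com` or `example.com.attacker.net`, which is precisely the kind of shortcut AI-generated validation tends to take.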


Your 30-Day Security Checklist

You have now mapped every item in the professional standard to something concrete you built or hardened during this series. Here is the complete checklist to carry forward.

Access Control

  • Row-Level Security enabled on every Supabase table
  • Authorization checks on every API route, server-side
  • No client-side-only access control logic

Cryptography

  • HTTPS enforced everywhere, HTTP redirected
  • Passwords hashed with bcrypt or Argon2, never MD5/SHA-1
  • Secrets in environment variables, never in source code

Injection

  • Parameterized queries or ORM for all database access
  • Output encoding for all user-supplied content rendered in HTML
  • Prompt injection mitigations on AI feature inputs

Architecture

  • Threat model written before building each major feature
  • Principle of least privilege applied to every integration

Configuration

  • Security headers set (Content-Security-Policy, X-Frame-Options, etc.)
  • Error messages sanitized in production (no stack traces)
  • Default credentials changed, unnecessary services disabled

Dependencies

  • npm audit / pip-audit runs in CI
  • Dependency versions pinned in lock files
  • High-severity CVEs treated as build blockers

Authentication

  • Auth handled by a maintained provider, not custom code
  • Rate limiting on login endpoints
  • Session expiry configured

Integrity

  • Lock files committed to version control
  • CI pipeline reviewed for unverified remote script execution
  • Build artifacts signed or checksummed where possible

Logging

  • Authentication and authorization events logged
  • No credentials, tokens, or PII in logs
  • Structured log format for queryability

SSRF

  • No API routes that accept and fetch arbitrary user-supplied URLs
  • Allowlist validation on any server-side URL fetch
  • Internal metadata endpoints blocked at network level if possible

Where You Stand

The OWASP Top 10:2025 is the benchmark. You have now worked through every category with concrete implementation -- not as theory, but as actual features, configurations, and policies in your own applications.

You are now more security-aware than the vast majority of vibe coders out there. Keep auditing, keep learning, keep building.

The threats will evolve. New frameworks will emerge. The OWASP list will be updated again. But the discipline you have built over these 30 days -- the habit of asking "how would someone break this?" before shipping -- is the thing that does not go out of date.


Ask The Guild

Community prompt: Which of the 10 categories surprised you most when you audited your own project? Share what you found -- and what you fixed -- in the Guild forum. Your experience is exactly the kind of real-world signal that helps every other builder level up.


Tom Hundley is a software architect with 25 years of experience. He has spent the last several years teaching non-developers to build secure, production-grade applications with AI coding tools.
