Prompt of the Day: Write Tests for Your AI-Generated API Routes
Series: Prompt of the Day — Part 7 of 30 | Track: Prompts | By Tom Hundley
The Monday Morning Fire Drill
A developer I coach—sharp guy, two years in—spent a weekend building a REST API with Cursor. Twelve routes, full CRUD, authentication middleware, error handling. The AI wrote beautiful, readable code. It looked like the work of a senior engineer. He deployed it Friday afternoon.
Monday morning he came in to find three support tickets: users couldn't update their profiles, deleted records were reappearing, and a specific endpoint was returning 200 OK with an empty body instead of the 404 users expected. None of it had been caught because none of it had been tested.
This story isn't unusual anymore. A December 2025 analysis by CodeRabbit of 470 open-source pull requests found that AI-generated code produces 10.83 issues per PR on average, compared to 6.45 for human-authored PRs—1.7x more bugs overall. Logic and correctness errors were 75% more common in AI-generated code. Error handling gaps were nearly 2x more frequent. These are exactly the bugs that don't announce themselves; they just quietly corrupt your production data until a user notices.
Veracode's 2025 GenAI Code Security Report, summarized by SoftwareSeni, found AI-generated code contains 2.74x more vulnerabilities than human-written code, with a 45% failure rate on secure coding benchmarks. One of the most common patterns: an AI generates an admin route handler that checks authentication but skips authorization—you're logged in, so you can do anything.
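To make that pattern concrete, here is a minimal illustrative sketch (not taken from any real codebase) of the authentication-without-authorization bug, written as plain functions so the difference is visible at a glance. The `req`, `users`, and role names are all hypothetical:

```javascript
// BUGGY: checks that *someone* is logged in, but never checks *who*.
function deleteUserInsecure(req, users) {
  if (!req.user) return { status: 401, body: { message: 'Unauthorized' } };
  // Any authenticated user reaches this point -- no role or ownership check.
  delete users[req.params.id];
  return { status: 204, body: null };
}

// FIXED: adds the authorization step the AI skipped.
function deleteUserSecure(req, users) {
  if (!req.user) return { status: 401, body: { message: 'Unauthorized' } };
  if (req.user.role !== 'admin') {
    return { status: 403, body: { message: 'Forbidden' } };
  }
  delete users[req.params.id];
  return { status: 204, body: null };
}
```

A logged-in non-admin gets a 204 from the first version and a 403 from the second. Both compile, both "work" in a demo, and only a test for the 403 case tells them apart.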
The fix is tests. The trick is knowing how to prompt for them.
The Prompt
You are a senior backend engineer writing a complete test suite for an Express.js API.
Here is the route file I need tested:
[PASTE YOUR ROUTE FILE HERE]
Write Vitest (or Jest) tests that cover:
1. Happy path — the expected successful response for each method (GET, POST, PUT, DELETE)
2. Auth failures — what happens when no token is sent, or an invalid/expired token is sent
3. Authorization failures — authenticated user trying to act on a resource they don't own
4. Validation errors — missing required fields, wrong types, out-of-range values
5. Not found cases — requesting a resource ID that doesn't exist
6. Unexpected server errors — mock the database to throw, verify the route returns 500 with a safe error message (no stack traces)
For each test:
- Use supertest to fire real HTTP requests against the Express app
- Mock the database layer (not the route), so tests don't need a real DB
- Use describe blocks grouped by route and HTTP method
- Assert on both the status code AND the response body shape
- Include at least one test that verifies a failed operation does NOT mutate data
Output the complete test file, ready to run with `vitest run` or `jest`.
Include a brief comment above each describe block explaining what scenario it covers.
Why It Works
This prompt succeeds because it forces the AI to think like a QA engineer, not a feature developer. Here's what each piece does:
Concrete coverage list. AI tools default to happy-path tests if you give them latitude. By explicitly naming auth failures, authorization failures, validation errors, 404s, and server errors, you're closing every escape hatch. The authorization failure case alone is critical—it catches the "logged in but shouldn't be allowed" bug that Apiiro found was 322% more common in AI-generated code.
Mock the database layer, not the route. This is the single most important instruction. If you tell AI to mock the route, it writes tests that always pass by construction. If you mock the DB and fire real HTTP requests through supertest, the test actually exercises the route logic—middleware, error handling, response formatting, all of it.
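The principle can be shown without any framework. Here is a minimal sketch, assuming a hypothetical `getUserRoute` with the DB injected as a parameter: a test stubs `db.findUser` to throw, and because the route logic itself still runs, the test proves the route's own error handling rather than the stub's:

```javascript
// Hypothetical route logic with the DB injected. A test can stub `db`
// while still exercising the route's branching and error handling.
async function getUserRoute(req, db) {
  try {
    const user = await db.findUser(req.params.id);
    if (!user) return { status: 404, body: { message: 'Not found' } };
    return { status: 200, body: user };
  } catch (err) {
    // This is the code under test: swallow the raw error, return a safe message.
    return { status: 500, body: { message: 'Internal server error' } };
  }
}
```

If you had mocked the route instead, the `try/catch` and the 404 branch would never execute, and the test would pass no matter how broken they were.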
Assert on body shape, not just status codes. A route that swallows an error and returns { success: false } with a 200 status will fool a status-only test every time.
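A tiny sketch makes the failure mode obvious. Assume a hypothetical handler that catches an error but still replies 200:

```javascript
// Hypothetical buggy handler: the operation failed, but the status says OK.
function updateProfileBuggy() {
  return { status: 200, body: { success: false } };
}

const res = updateProfileBuggy();
// A status-only assertion passes and hides the bug.
const statusOnlyTestPasses = res.status === 200;
// A body-shape assertion is what actually exposes it.
const bodyShapeTestPasses = res.body.success === true;
```

The status-only check goes green; only the body assertion reports the failure.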
The mutation guard. Requiring a test that confirms a failed DELETE or PUT doesn't corrupt data forces the AI to think about transactional behavior—something it almost never does unprompted.
Seeing It in Action
Here's a minimal example of what the prompt produces for a PATCH /users/:id route:
// Tests for PATCH /users/:id — update user profile
import { describe, it, expect, vi, beforeEach } from 'vitest';
import request from 'supertest';
import app from '../app';
import * as db from '../db/users';
describe('PATCH /users/:id', () => {
  beforeEach(() => vi.resetAllMocks());

  describe('happy path', () => {
    it('returns 200 with updated user on valid request', async () => {
      vi.spyOn(db, 'updateUser').mockResolvedValue({ id: '123', name: 'Jane Doe' });
      const res = await request(app)
        .patch('/users/123')
        .set('Authorization', 'Bearer valid-token')
        .send({ name: 'Jane Doe' });
      expect(res.status).toBe(200);
      expect(res.body).toMatchObject({ id: '123', name: 'Jane Doe' });
    });
  });

  describe('auth failures', () => {
    it('returns 401 when no token is provided', async () => {
      const res = await request(app).patch('/users/123').send({ name: 'Jane' });
      expect(res.status).toBe(401);
    });

    it('returns 401 when token is expired', async () => {
      const res = await request(app)
        .patch('/users/123')
        .set('Authorization', 'Bearer expired-token')
        .send({ name: 'Jane' });
      expect(res.status).toBe(401);
    });
  });

  describe('authorization failures', () => {
    it("returns 403 when user tries to update another user's profile", async () => {
      // Authenticated as user 456, trying to update user 123
      const res = await request(app)
        .patch('/users/123')
        .set('Authorization', 'Bearer token-for-user-456')
        .send({ name: 'Hacker' });
      expect(res.status).toBe(403);
    });

    it('does not mutate data on authorization failure', async () => {
      const updateSpy = vi.spyOn(db, 'updateUser');
      await request(app)
        .patch('/users/123')
        .set('Authorization', 'Bearer token-for-user-456')
        .send({ name: 'Hacker' });
      expect(updateSpy).not.toHaveBeenCalled();
    });
  });

  describe('validation errors', () => {
    it('returns 422 when name is an empty string', async () => {
      const res = await request(app)
        .patch('/users/123')
        .set('Authorization', 'Bearer valid-token')
        .send({ name: '' });
      expect(res.status).toBe(422);
    });
  });

  describe('server errors', () => {
    it('returns 500 with safe message when database throws', async () => {
      vi.spyOn(db, 'updateUser').mockRejectedValue(new Error('DB connection lost'));
      const res = await request(app)
        .patch('/users/123')
        .set('Authorization', 'Bearer valid-token')
        .send({ name: 'Jane' });
      expect(res.status).toBe(500);
      expect(res.body.message).toBe('Internal server error');
      // Stack trace must not leak to the client
      expect(res.body.stack).toBeUndefined();
    });
  });
});
Run it:
npx vitest run src/__tests__/users.test.ts
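For readers wondering what contract the suite above is checking, here is a hedged sketch of the kind of handler it assumes, written as a plain function rather than real Express middleware so each branch lines up with a describe block. The `auth` and `db` parameters are hypothetical injected dependencies, not part of any real codebase:

```javascript
// Sketch of the handler contract the example tests assume.
// `auth.verify` and `db.updateUser` are hypothetical dependencies.
async function patchUser(req, auth, db) {
  const caller = auth.verify(req.headers.authorization); // null if missing/expired
  if (!caller) return { status: 401, body: { message: 'Unauthorized' } };
  if (caller.id !== req.params.id) {
    // Authorization failure: reject *before* touching the DB.
    return { status: 403, body: { message: 'Forbidden' } };
  }
  if (typeof req.body.name !== 'string' || req.body.name.length === 0) {
    return { status: 422, body: { message: 'Invalid name' } };
  }
  try {
    const user = await db.updateUser(req.params.id, req.body);
    return { status: 200, body: user };
  } catch (err) {
    // Safe 500: no stack trace in the response.
    return { status: 500, body: { message: 'Internal server error' } };
  }
}
```

Notice that the 403 branch returns before `db.updateUser` is ever called; that ordering is exactly what the "does not mutate data" test from the suite pins down.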
The Anti-Prompt
What vibe coders actually type:
Write tests for my API routes
Why it fails:
This produces a handful of happy-path GET tests that mock the entire route handler, assert on mocked return values, and call it done. You'll get green checkmarks. Your API will still break in production because nothing tested what the route actually does when auth fails, when the database throws, or when a user sends malformed input. It's the testing equivalent of checking that your parachute bag is closed without checking if there's a parachute inside.
A January 2026 post on DEV Community from a developer who let AI write 47% of his codebase described finding middleware with three subtle vulnerabilities: incorrect auth state caching, missing token refresh handling, and a race condition during session validation. "These aren't bugs you catch in testing," he wrote—but they would have been, with a test suite that checked token expiry and concurrent requests.
Variations
For Python / FastAPI projects:
Write pytest tests using httpx.AsyncClient for this FastAPI router.
Cover: happy paths, 401/403 auth failures, 422 validation errors,
404 not-found, and 500 server errors from mocked DB failures.
Use pytest-asyncio and mock the repository layer with unittest.mock.
For Next.js App Router API routes:
Write Vitest tests for this Next.js App Router route handler using
next-test-api-route-handler. Mock the database with vi.mock().
Test all HTTP methods the route handles. Include auth failure cases
using the same auth pattern already in this codebase: [paste middleware].
For adding tests to an existing suite:
Here is my existing test file: [paste]
Here is the new route I just added: [paste]
Extend the test file with tests for the new route, following
exactly the same patterns, mock style, and describe structure
already in the file. Do not change the existing tests.
Today's Checklist
- Paste a real route file into today's prompt — don't test hypotheticals
- Confirm your test file mocks the DB layer, not the route handler
- Check that every test asserts on both status code and response body
- Add at least one test that confirms a rejected mutation doesn't change data
- Run `vitest run --coverage` and look for uncovered branches, not just line coverage
- Grep your route files for `catch` blocks; if they return `200`, you have a bug
- For auth routes specifically, write tests before you let AI touch them
Ask The Guild
This week's community prompt:
What's the sneakiest bug an AI-generated API route introduced in your codebase — and what test would have caught it? Drop your route snippet and the test you wish you'd had. Best submission gets featured in next week's roundup.
Tom Hundley is a software architect with 25 years of experience. He teaches vibe coders how to build production systems that don't call them at 2 AM.