Production Ready — Part 9 of 30

Writing Your First Test: The 15-Minute Version

Written by claude-sonnet-4 · Edited by claude-sonnet-4
Tags: testing, vitest, pytest, unit-testing, vibe-coding, ci-cd, javascript, python, production, beginners



The Checkout Bug That Cost $40,000 in One Weekend

In the spring of 2025, a small e-commerce team shipped a discount code feature. It worked great in their dev environment. They tested it manually — typed in a promo code, saw the price drop, hit checkout. Perfect.

What they hadn't tested: what happens when you apply two discount codes in a row. The function that calculated the discounted total was written by their AI coding assistant, and it had a subtle bug — it applied the second discount to the original price instead of the already-discounted price, but it also failed to remove the first discount from the session. The net result: some customers were checking out with items priced at $0.00.

The bug ran undetected through a long weekend. By Monday morning, 847 orders had gone through at zero cost. Refunds, chargebacks, and a weekend of manual reconciliation later, the team had learned an expensive lesson: a working demo is not a test.

One unit test — literally six lines of code — would have caught it in seconds.
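Here is a sketch of what that test could have looked like. The function mirrors the simplified discount logic we'll build later in this article; the stacked-discount expectation is the part the team never wrote down:

```javascript
// Illustrative sketch: applyDiscount mirrors the simplified function
// shown later in this article.
function applyDiscount(price, discountPercent) {
  return price - (price * discountPercent / 100);
}

// Stacking two 20% codes: the second discount should apply to the
// already-discounted $80, giving $64 -- never $0.
const afterFirst = applyDiscount(100, 20);
const afterSecond = applyDiscount(afterFirst, 20);

if (afterSecond !== 64) {
  throw new Error(`expected 64, got ${afterSecond}`);
}
```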


Why This Is the Moment to Start

For a long time, testing had a reputation problem. It felt like bureaucratic overhead — something big teams at Google did, not something a scrappy indie dev or small product team needed to worry about. If you were vibe-coding, you shipped fast and fixed issues when they came up.

That perception is collapsing, and the data is unambiguous.

The State of JavaScript 2025 survey — published in February 2026 after collecting responses from thousands of developers — named Vitest the most adopted technology in the entire JS ecosystem, with a +14% year-over-year usage jump. Its satisfaction score hit 97%. Its interest ratio — the percentage of developers who want to learn it once they've heard of it — reached 83%.

Vitest beat every framework. It beat every build tool. The message from the JavaScript community is clear: testing is no longer optional infrastructure. It's table stakes.

And if you've been vibe-coding with AI assistants, you have an even more urgent reason to start writing tests. As we covered in Part 8, AI-generated code regularly passes visual inspection while containing logic errors in edge cases — exactly the kind of bug that the checkout team above experienced. Tests are how you verify that the code your AI wrote actually does what you think it does.


What You're Actually Writing

Before we touch code, let's strip away the mystique.

A test is just a function that:

  1. Sets up some inputs
  2. Calls your code
  3. Checks that the output matches what you expected

That's it. If the output matches, the test passes (green). If it doesn't, the test fails (red) and tells you exactly what went wrong.

Here's a test in plain English before we write it in code:

"Given a cart with a $100 item, when I apply a 20% discount, the total should be $80."

Every good test is just a sentence like that, translated into code.
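In plain JavaScript, before any test framework enters the picture, that sentence translates almost word for word (a minimal sketch; a framework like Vitest wraps this same pattern in nicer syntax and reporting):

```javascript
// "Given a cart with a $100 item, when I apply a 20% discount,
//  the total should be $80."
function applyDiscount(price, discountPercent) {
  return price - (price * discountPercent / 100);
}

const total = applyDiscount(100, 20); // set up inputs, call the code
if (total !== 80) {                   // check the output
  throw new Error(`expected 80, got ${total}`);
}
```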


Setup: Vitest in Under 2 Minutes

If you're already in a Vite-based project (React, Vue, Svelte — most modern setups are), adding Vitest takes one command:

npm install -D vitest

Add this to your package.json:

{
  "scripts": {
    "test": "vitest",
    "test:run": "vitest run"
  }
}

That's it. You're ready. No config file required for the basics.

If you're in a Python project, pytest is your equivalent:

pip install pytest

Create a file called test_something.py and run pytest. It finds and runs tests automatically.


Writing Your First Real Test

Let's go back to the checkout bug. Here's a simplified version of the broken discount function:

// discount.js
export function applyDiscount(price, discountPercent) {
  return price - (price * discountPercent / 100);
}

This function looks fine. It works for the basic case. But what happens when someone passes in a negative discount? Or a discount over 100%? Or a null price?

Create a file called discount.test.js next to it:

import { describe, it, expect } from 'vitest';
import { applyDiscount } from './discount.js';

describe('applyDiscount', () => {
  it('applies a standard discount correctly', () => {
    expect(applyDiscount(100, 20)).toBe(80);
  });

  it('returns the full price when discount is 0', () => {
    expect(applyDiscount(100, 0)).toBe(100);
  });

  it('does not allow discount over 100%', () => {
    expect(() => applyDiscount(100, 150)).toThrow();
  });

  it('does not allow negative prices', () => {
    expect(() => applyDiscount(-50, 20)).toThrow();
  });
});

Run it:

npm test

Two tests pass. Two tests fail — because the current applyDiscount function doesn't validate its inputs. Now you know exactly what to fix, before this goes anywhere near production.

Here's the fixed version:

// discount.js
export function applyDiscount(price, discountPercent) {
  if (price < 0) throw new Error('Price cannot be negative');
  if (discountPercent < 0 || discountPercent > 100) {
    throw new Error('Discount must be between 0 and 100');
  }
  return price - (price * discountPercent / 100);
}

Run the tests again. All four pass. You now have a function that is verified against all four scenarios — and every time someone modifies this function in the future, those tests will run automatically and catch regressions.


The Python Version

For those working in Python — FastAPI backends, data pipelines, automation scripts:

# discount.py
def apply_discount(price: float, discount_percent: float) -> float:
    if price < 0:
        raise ValueError("Price cannot be negative")
    if not 0 <= discount_percent <= 100:
        raise ValueError("Discount must be between 0 and 100")
    return price - (price * discount_percent / 100)

# test_discount.py
import pytest
from discount import apply_discount

def test_standard_discount():
    assert apply_discount(100, 20) == 80

def test_zero_discount():
    assert apply_discount(100, 0) == 100

def test_rejects_discount_over_100():
    with pytest.raises(ValueError):
        apply_discount(100, 150)

def test_rejects_negative_price():
    with pytest.raises(ValueError):
        apply_discount(-50, 20)

Run with:

pytest -v

The -v flag gives you verbose output — each test name printed with a green PASSED or red FAILED next to it.


The Three Tests Every Function Should Have

You don't need 100% test coverage to get most of the benefit. Start here:

1. The happy path — the normal case that should work. applyDiscount(100, 20) returns 80. This is your baseline sanity check.

2. The edge case — what happens at the boundary. Zero discount, 100% discount, empty strings, null values. This is where AI-generated code most often breaks.

3. The error case — bad input that should throw an error or return a safe fallback. What does your function do when it gets garbage? If the answer is "something unpredictable," that's a bug waiting to happen.

One happy path test, one edge case test, one error case test. That's the minimum viable test suite for any function that touches data, money, auth, or user-facing output.
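Applied to a hypothetical clampQuantity helper (the function and its 1–99 range are invented here purely for illustration), the minimum viable suite looks like this:

```javascript
// Hypothetical helper, invented for illustration: clamps an order
// quantity to the range 1..99 and rejects non-numeric input.
function clampQuantity(n) {
  if (typeof n !== 'number' || Number.isNaN(n)) {
    throw new Error('Quantity must be a number');
  }
  return Math.min(99, Math.max(1, Math.trunc(n)));
}

// 1. Happy path: a normal value passes through unchanged.
if (clampQuantity(5) !== 5) throw new Error('happy path failed');

// 2. Edge case: the boundary value 0 is clamped up to 1.
if (clampQuantity(0) !== 1) throw new Error('edge case failed');

// 3. Error case: garbage input should throw, not return nonsense.
let threw = false;
try { clampQuantity('ten'); } catch { threw = true; }
if (!threw) throw new Error('error case failed');
```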


Making Tests Run Automatically

Tests only protect you if they actually run. Add them to your CI pipeline so they run on every push.

If you're on GitHub, create .github/workflows/test.yml:

name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run test:run

Now every push to your repo triggers the test suite. If tests fail, the CI run fails — a visible red X on your pull request or commit. This is your safety net. It costs nothing. It takes five minutes to set up. And it would have stopped the checkout bug from ever reaching a customer.
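If your project is Python, the equivalent workflow swaps the Node steps for Python ones (a sketch — adjust the Python version and dependency install to match your project):

```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - run: pip install pytest
      - run: pytest
```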


A Word on Testing AI-Written Code

Here's a habit that will save you repeatedly: whenever your AI assistant writes a function, immediately ask it to write the tests too.

Prompt Cursor or Copilot with something like:

"Write three tests for this function: a happy path test, an edge case with boundary values, and a test that verifies it throws on invalid input."

The AI will usually do this well. Then read the tests. If the tests look weak — only testing the obvious case, not testing errors — push back:

"Add tests for null input, empty arrays, and values outside the valid range."

Don't just run the code the AI writes. Run the tests the AI writes against the code the AI writes. Disagreements between them — tests that fail immediately — reveal where the AI's logic was inconsistent with its own stated intent. Those are bugs you just caught before production.


Checklist: Your First Test in 15 Minutes

  • Install Vitest (JS): npm install -D vitest and add the "test": "vitest" and "test:run": "vitest run" scripts to package.json
  • Install pytest (Python): pip install pytest
  • Pick one function — the one that handles the most important logic in your app (discounts, auth, data transforms)
  • Write the happy path test — the normal case should pass
  • Write one edge case test — zero values, empty strings, maximum values
  • Write one error case test — invalid input should throw or return a safe fallback
  • Run the tests and fix anything that fails
  • Add CI — copy the GitHub Actions workflow above, commit it, push it
  • Ask your AI assistant to write tests for the next function it generates

Start with just one function. One file. One set of three tests. The habit is more important than the coverage number right now.


Ask The Guild

Community prompt: What's the first function in your codebase you're going to write tests for — and what edge case are you most nervous about? Drop it in the comments. Bonus points if you share a test that failed and what bug it caught.


Sources: State of JavaScript 2025 Awards — Vitest Most Adopted Technology | Vitest 4 Adoption Guide — LogRocket Blog | 9 Biggest Software Bugs of 2025 — TestDevLab | State of JavaScript 2025 Survey Analysis — InfoQ


About Tom Hundley

Tom Hundley writes for builders who need stronger technical judgment around AI-assisted software work. The Guild turns production experience into public articles, copy-paste prompts, and structured learning paths that help non-software developers supervise AI agents more safely.
