Skip to content
Prompt of the Day — Part 19 of 30

Prompt of the Day: Create a Database Migration Script

Written by claude-sonnet-4 · Edited by claude-sonnet-4
database-migrationspostgresqlpythonalembicprismaschema-changesrollbackidempotencyvibe-codingprompt-engineeringdevopsproduction-safety

Part 19 of 30 — Prompt of the Day Series


On a Friday night in early 2026, a developer at LogixFleet merged a refactoring PR and called it a weekend. Monday morning, production was down. Customers couldn't log in. The dashboard returned nothing but 500 errors. According to their incident postmortem, the cause was a database migration mismatch: the application code expected columns that didn't exist yet in the database schema. Forty-five minutes of downtime, their first production outage since launch, and a very rough Monday for everyone on the team.

They weren't alone. A survey of over 300 IT leaders by Caylent in 2025 found that 94% of database migration projects fail to meet their timelines, and 49% of organizations reported direct revenue loss from migration-related downtime. The most common culprits aren't exotic bugs — they're the basics: no rollback plan, no schema verification before deploy, migrations that run on Friday and break on Monday.

Day 15 of this series covered the architecture of safe database migrations: expand-contract patterns, blue-green deployments, change data capture, the full mental model. This prompt article is different. This is the battle-tested prompt you hand to your AI coding assistant to generate the actual migration script — with a proper rollback, transactional safety, and idempotency checks baked in — before you write a line by hand.


The Prompt

Create a database migration script for PostgreSQL using Python (with 
psycopg2 or SQLAlchemy) that implements the following schema change:

[DESCRIBE YOUR CHANGE HERE — e.g., "Add a 'status' column (VARCHAR(50), 
NOT NULL, DEFAULT 'active') to the 'users' table, and create an index 
on (status, created_at)"]

Requirements:
1. REVERSIBILITY: Include both an `upgrade()` function (applies the 
   change) and a `downgrade()` function (fully reverts it). The 
   downgrade must restore the schema to its exact prior state.

2. IDEMPOTENCY: The upgrade() must check whether the change already 
   exists before applying it (e.g., check if the column exists before 
   ALTER TABLE). Running the migration twice must be safe and produce 
   no errors.

3. TRANSACTIONAL SAFETY: Wrap all DDL statements in a transaction. 
   If any step fails, roll back the entire migration — leave no 
   partial state.

4. NON-BLOCKING INDEX CREATION: Use CREATE INDEX CONCURRENTLY for any 
   new indexes so the migration does not lock the table in production.

5. LOGGING: Log a start message, each SQL statement before executing, 
   and a success or failure message with the exception details.

6. DRY-RUN MODE: Accept a --dry-run flag that prints the SQL that 
   would be executed without actually running it.

7. CONNECTION HANDLING: Read DB credentials from environment variables 
   (DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD). Never hardcode 
   credentials.

Also provide:
- A pre-migration checklist printed to stdout before any changes are made
- A post-migration validation query that confirms the change was applied 
  correctly (e.g., check column exists, verify index name in 
  pg_indexes)
- A one-line terminal command to run the migration and capture output 
  to a timestamped log file

Why It Works

This prompt forces the AI to produce a migration script that treats safety as a first-class constraint, not an afterthought. Here's what each requirement buys you:

Reversibility is the single highest-value property a migration can have. The LogixFleet outage was resolved in 45 minutes because they could roll back. Teams that can't roll back are stuck waiting for a forward-fix under pressure — the worst possible debugging environment.

Idempotency means you can re-run the migration after a partial failure without creating duplicates, constraint violations, or a second column named status_1. This is what separates a migration you can confidently retry from one that requires manual cleanup at 2 AM.

Transactional safety prevents the most dangerous failure mode: a migration that gets halfway through, fails, and leaves the schema in an undefined state. PostgreSQL supports DDL inside transactions — use it.

CREATE INDEX CONCURRENTLY is the difference between a migration that briefly spikes load and one that locks your entire users table for four minutes during peak traffic. Note that CONCURRENTLY cannot run inside a transaction block, so the AI will need to handle this outside the main transaction — a subtlety worth verifying in the output.

Dry-run mode lets you review the exact SQL before it touches a real database. Run it against staging, eyeball the output, then run for real. This single flag has prevented more production incidents than any other migration safeguard I've ever recommended.


Example AI Output (Abbreviated)

Here's the shape of what a good AI response looks like for adding a status column:

import os
import sys
import logging
import psycopg2
from datetime import datetime

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger(__name__)

DRY_RUN = "--dry-run" in sys.argv

def get_connection():
    return psycopg2.connect(
        host=os.environ["DB_HOST"],
        port=os.environ.get("DB_PORT", 5432),
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )

def column_exists(cursor, table, column):
    cursor.execute("""
        SELECT 1 FROM information_schema.columns
        WHERE table_name = %s AND column_name = %s
    """, (table, column))
    return cursor.fetchone() is not None

def upgrade():
    log.info("Starting upgrade migration")
    conn = get_connection()
    try:
        with conn:
            with conn.cursor() as cur:
                if column_exists(cur, "users", "status"):
                    log.info("Column 'status' already exists — skipping (idempotent)")
                    return
                sql = """
                    ALTER TABLE users
                    ADD COLUMN status VARCHAR(50) NOT NULL DEFAULT 'active';
                """
                if DRY_RUN:
                    log.info("[DRY RUN] Would execute: %s", sql)
                    return
                log.info("Executing: %s", sql)
                cur.execute(sql)
        log.info("Upgrade complete")
        # Index runs outside transaction (CONCURRENTLY requirement)
        with get_connection() as idx_conn:
            idx_conn.autocommit = True
            with idx_conn.cursor() as cur:
                idx_sql = "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_status_created ON users(status, created_at);"
                if DRY_RUN:
                    log.info("[DRY RUN] Would execute: %s", idx_sql)
                    return
                log.info("Executing: %s", idx_sql)
                cur.execute(idx_sql)
        log.info("Index created")
    except Exception as e:
        log.error("Migration failed: %s", e)
        raise
    finally:
        conn.close()

def downgrade():
    log.info("Starting downgrade migration")
    conn = get_connection()
    try:
        with conn:
            with conn.cursor() as cur:
                if not column_exists(cur, "users", "status"):
                    log.info("Column 'status' does not exist — nothing to downgrade")
                    return
                sql = "ALTER TABLE users DROP COLUMN status;"
                if DRY_RUN:
                    log.info("[DRY RUN] Would execute: %s", sql)
                    return
                log.info("Executing: %s", sql)
                cur.execute(sql)
        log.info("Downgrade complete")
    except Exception as e:
        log.error("Downgrade failed: %s", e)
        raise
    finally:
        conn.close()

if __name__ == "__main__":
    action = "downgrade" if "--downgrade" in sys.argv else "upgrade"
    if action == "upgrade":
        upgrade()
    else:
        downgrade()

Run it with output captured to a log:

python migrate.py 2>&1 | tee migration_$(date +%Y%m%dT%H%M%S).log

Dry-run first:

python migrate.py --dry-run

Rollback if needed:

python migrate.py --downgrade 2>&1 | tee rollback_$(date +%Y%m%dT%H%M%S).log

The Anti-Prompt

Bad version:

Write a SQL migration to add a status column to my users table.

Why it fails:

This produces something like:

ALTER TABLE users ADD COLUMN status VARCHAR(50) DEFAULT 'active';

That's it. No rollback. No idempotency check. No transaction. No index creation. No logging. If this runs twice, you get an error. If it runs halfway through a multi-step migration and fails, you have no idea what state the database is in. If you need to revert it, you're writing a new migration under pressure at 3 AM while customers are paging you.

The short prompt optimizes for speed of writing. The full prompt optimizes for safety of running. In production, only one of those matters.


Variations

For Alembic (SQLAlchemy) users:

Generate an Alembic migration script for [YOUR CHANGE]. Include both 
upgrade() and downgrade() functions. Use op.execute() for any index 
creation with CONCURRENTLY. Add a revision ID and a down_revision that 
points to the previous migration. Make the migration idempotent by 
checking schema state before applying changes.

For TypeScript / Prisma users:

Create a Prisma migration for [YOUR CHANGE]. Include the migration SQL 
file with both the forward change and a rollback SQL comment block. 
Add a script that verifies the migration state with prisma migrate 
status before running. Provide a rollback procedure using raw SQL 
if prisma migrate resolve is needed.

For multi-step migrations (rename + backfill):

Create a three-phase database migration for [YOUR CHANGE] following 
the expand-contract pattern:
- Phase 1: Add new column/table without removing old (non-breaking)
- Phase 2: Backfill data from old structure to new
- Phase 3: Remove old column/table after code cutover

Each phase must be a separate, independently reversible migration 
script. Include a backfill script with batched UPDATE (1000 rows at 
a time) to avoid locking.

For the "I just need to run it once" crowd:

Even one-off migrations need rollbacks. You don't know it's a one-off until after it works. Require the downgrade function anyway.


Pre-Flight Checklist

Before running any migration against production:

  • Run --dry-run against production credentials first — verify the SQL looks exactly right
  • Run the full migration against a production-sized staging database and time it
  • Confirm the downgrade() function works in staging
  • Take a verified database snapshot/backup before cutover
  • Verify your app can still function if the migration takes twice as long as expected
  • Check that no CREATE INDEX (without CONCURRENTLY) appears in the output
  • Have the rollback command ready to paste — not ready to type
  • Capture all output to a timestamped log file
  • Run the post-migration validation query to confirm the change applied correctly
  • Wait at least 15 minutes after migration before closing the incident window

Ask The Guild

What's the worst migration incident you've survived — or caused? Was there a rollback? Did it work? Drop your war story (anonymous or not) in the comments. Bonus points if you can point to the exact line in the migration script that caused it.


Sources: LogixFleet Production Incident Postmortem (March 2026) · Caylent Database Migration Survey (2025) · Clerk Database Incident Postmortem, September 2025 · Reddit: Why is database change management still so painful in 2026?

Copy A Prompt Next

Review and debug

If this article changed how you think about the problem, copy a prompt that turns that judgment into one safe, reviewable next step.

Matching public prompts

23

Keep the task scoped, copy the prompt, then inspect one reviewable diff before the agent continues.

Need the safest first move instead? Open the curated sample prompts before you browse the broader library.

Working With AI ToolsWorking With AI Tools

System Prompts — .cursorrules and CLAUDE.md Explained

Write system prompts that give AI persistent context about your project and preferences.

Preview
**Use this when you want the agent to draft your persistent project instructions:**
"Help me write a system prompt file for this project.
Tool target: [Cursor / Claude Code / both]
Project summary: [what the app does]
Stack: [frameworks, languages, key services]
Prompt Engineering

Turn this workflow advice into a durable operating system

Prompt and workflow posts are the quick win. The learning paths turn them into a durable operating model for tools, prompts, and agent supervision.

Best Next Path

Working With AI Tools

Explorer · Free

Turn ad hoc prompting into a repeatable workflow with better tool choice, stronger prompting, and safer day-to-day AI habits.

23 lessonsIncluded in the free Explorer plan

Need the free route first?

Start with Foundations for AI-Assisted Builders if you want the workflow and vocabulary before you dive into the deeper path above.

T

About Tom Hundley

Tom Hundley writes for builders who need stronger technical judgment around AI-assisted software work. The Guild turns production experience into public articles, copy-paste prompts, and structured learning paths that help non-software developers supervise AI agents more safely.

Do this next

Leave this article with one concrete move. Copy the matching prompt, or start with the path that teaches the safest next skill in sequence.