Prompt of the Day: Optimize Slow Database Queries
Part 12 of 30 — Prompt of the Day Series
It was a Thursday afternoon. A team I was consulting with had just shipped a feature that let users search their customer records — filter by region, date range, and status. Worked perfectly in staging. On Friday morning, with 800,000 rows in production, a routine search started taking 47 seconds. Users thought the app was broken. The CEO filed a Slack ticket.
The culprit? Three things stacked on top of each other: a missing composite index, a correlated subquery running once per row, and an ORM that was firing a separate query to fetch related account data for each of the 200 results returned. Classic N+1 plus a full table scan, on a table with no query monitoring in place.
This story plays out every week in production systems. In January 2025, the candidate search platform Pickle documented a two-hour outage caused by permission logic that checked access at the profile_field level, meaning millions of individual field rows were scanned per search request. Queries blew past Vercel's 15-second serverless timeout and appeared as outright failures to users. The fix was refactoring the permission check to the profile level so inaccessible records were filtered out before the expensive joins ran. In March 2026, a MongoDB production incident documented on AWS in Plain English showed the same pattern: missing indexes caused 90-second API latency on a workflow that looked completely fine in development.
The good news: with the right prompt, an AI assistant can find and fix these issues in minutes.
The Prompt
I have a slow database query in my [PostgreSQL / MySQL / MongoDB] application.
Help me diagnose and optimize it. Here is the context:
1. The slow query (include the actual SQL or ORM call):
[PASTE YOUR QUERY HERE]
2. The table(s) involved and approximate row counts:
[e.g., users: 500k rows, orders: 2M rows]
3. Existing indexes on these tables:
[PASTE OUTPUT OF \d tablename or SHOW INDEX FROM tablename]
4. The EXPLAIN / EXPLAIN ANALYZE output:
[PASTE EXPLAIN OUTPUT HERE]
5. ORM being used (if any):
[e.g., Prisma, SQLAlchemy, Django ORM, TypeORM, ActiveRecord]
Please:
a) Identify the root cause(s): missing indexes, N+1 queries, full table scans,
correlated subqueries, non-sargable WHERE clauses, or other issues.
b) Rewrite the query or ORM call to fix each problem found.
c) Provide the exact CREATE INDEX statements needed.
d) Show a before/after EXPLAIN plan comparison where possible.
e) Flag any N+1 patterns in the ORM usage and show the corrected eager-load syntax.
f) Estimate the expected performance improvement.
Why It Works
This prompt works because it feeds the AI the same information a DBA would demand before touching anything: the query, the schema, the indexes, and the execution plan. Without EXPLAIN output, an AI is just guessing. With it, the model can see exactly which operations are doing sequential scans, which join order the planner chose, and how many rows are being examined.
The explicit checklist in part (e) is critical. ORM-generated N+1 problems are invisible in the query itself — you have to ask the AI to look at the ORM call pattern, not just the raw SQL. According to a March 2025 analysis on DEV Community, N+1 queries are especially dangerous because each individual query is fast enough that it never appears in the slow query log. Only when you look at total query count do you see 101 queries where there should be 1.
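The query-count blowup is easy to reproduce. Here is a minimal, self-contained sketch using Python's built-in sqlite3 module; the users/orders schema, the row counts, and the counted() helper are illustrative stand-ins for an ORM's query logger, not part of any real library:

```python
import sqlite3

# In-memory database with a tiny users/orders schema (illustrative names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER REFERENCES users(id));
""")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(i, f"user{i}") for i in range(100)])
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(i, i % 100) for i in range(100)])

query_count = 0

def counted(sql, params=()):
    """Run a query and tally it, the way an ORM query logger would."""
    global query_count
    query_count += 1
    return conn.execute(sql, params).fetchall()

# N+1 pattern: one query for the orders, then one per order for its user.
orders = counted("SELECT id, user_id FROM orders")
for _, user_id in orders:
    counted("SELECT name FROM users WHERE id = ?", (user_id,))
n_plus_one = query_count  # 1 + 100 = 101 queries

# Eager pattern: a single JOIN fetches the same data.
query_count = 0
rows = counted("SELECT o.id, u.name FROM orders o JOIN users u ON u.id = o.user_id")
joined = query_count  # 1 query

print(n_plus_one, joined)  # 101 1
```

Each of the 101 queries finishes in microseconds, which is exactly why nothing shows up in a slow query log; only the total count gives the pattern away.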
Example: Fixing an N+1 in Python (SQLAlchemy)
Before — triggers N+1:
# Fetches all orders, then queries the users table once per order
orders = session.query(Order).all()
for order in orders:
    print(order.user.name)  # N separate SELECT queries
After — single JOIN query:
from sqlalchemy.orm import joinedload

orders = session.query(Order).options(joinedload(Order.user)).all()
for order in orders:
    print(order.user.name)  # no additional queries
Example: Adding the Right Index (PostgreSQL)
-- Before: full sequential scan on 2M row orders table
SELECT * FROM orders
WHERE user_id = 42 AND status = 'pending' AND created_at > '2025-01-01';
-- Check the plan
EXPLAIN ANALYZE SELECT * FROM orders
WHERE user_id = 42 AND status = 'pending' AND created_at > '2025-01-01';
-- Seq Scan on orders (cost=0.00..48200.00 rows=2000000 ...)
-- Fix: composite index matching the WHERE clause
CREATE INDEX CONCURRENTLY idx_orders_user_status_date
ON orders (user_id, status, created_at DESC);
-- After: index scan, query drops from 4.2s to 8ms
Example: Getting the EXPLAIN Output (Terminal)
# PostgreSQL
psql -U myuser -d mydb -c "EXPLAIN ANALYZE SELECT ...;"
# MySQL
mysql -u myuser -p mydb -e "EXPLAIN SELECT ...;"
# Check slow query log in MySQL
mysql -u myuser -p -e "SHOW VARIABLES LIKE 'slow_query_log';"
mysql -u myuser -p -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"
The Anti-Prompt
Bad version:
My app is slow. Can you make my database queries faster?
Why it fails: Without the query, the schema, the indexes, and the execution plan, the AI has nothing to work with. You'll get a generic lecture about indexes and SELECT * being bad. The AI might even suggest rewriting queries that aren't slow at all, or adding indexes on columns that are already indexed. Worse, it might blame your application code when the database is the entire problem — exactly what happened in the Pickle incident, where engineers initially suspected the frontend TanStack Query migration rather than the permission-level query logic.
Always give the AI the EXPLAIN output. Always.
Variations
For Django ORM users:
I'm using Django ORM. Here is the view code and the SQL query list that
django-debug-toolbar shows for it:
[PASTE VIEW CODE + SQL LIST]
Rewrite the queryset using select_related() and prefetch_related() to
eliminate N+1 queries. Show the before/after query count.
For TypeScript / Prisma users:
I have this Prisma query that is slow in production:
[PASTE QUERY]
The Prisma schema for the involved models is:
[PASTE SCHEMA]
Rewrite it to use include/select efficiently, add any missing @@index
directives to the schema, and generate the prisma migrate SQL
for the new indexes.
For MongoDB users:
I have a slow MongoDB aggregation pipeline:
[PASTE PIPELINE]
Collection size: [N] documents.
Here is the output of db.collection.explain("executionStats"):
[PASTE EXPLAIN OUTPUT]
Show me:
1. Which stages are doing COLLSCAN vs IXSCAN
2. The exact createIndex() calls to fix the scans
3. A rewritten pipeline that pushes $match and $sort stages early
to maximize index use
For identifying queries to fix first:
# PostgreSQL: find your top 10 slowest queries right now
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
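Note that pg_stat_statements is a contrib extension and is not active by default. Assuming a standard PostgreSQL install, enabling it looks roughly like this (the config path and restart mechanics depend on your setup):

```sql
-- postgresql.conf (requires a server restart to take effect):
--   shared_preload_libraries = 'pg_stat_statements'
-- Then, once per database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```

If the view above returns "relation pg_stat_statements does not exist", this setup step is the missing piece.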
Checklist: Before You Prompt
- Run EXPLAIN ANALYZE on the slow query and copy the full output
- Note approximate row counts for all involved tables
- Check existing indexes with \d tablename (PostgreSQL) or SHOW INDEX FROM tablename (MySQL)
- Enable slow query logging if you haven't already (catches queries > 1 second)
- If using an ORM, log the generated SQL to see what queries are actually firing
- Check pg_stat_statements (PostgreSQL) or the slow query log to confirm which query is the actual bottleneck; don't optimize by instinct
- Test index changes with CREATE INDEX CONCURRENTLY in production to avoid table locks
- Benchmark before and after with realistic data volumes, not the 50-row dev dataset
Ask The Guild
What's the most embarrassing database performance bug you've shipped to production? N+1 query that took down an API? A missing index you didn't find for six months? A query that ran fine for a year until the table hit 1 million rows? Share your war story — the more specific the better. Someone else in the Guild is about to make the exact same mistake.