Prompt of the Day: Build a Complete Search Feature with Embeddings
Part 30 of 30 -- Prompt of the Day
Thirty days ago, you started learning how to talk to AI coding tools. Today we end the series with the prompt I consider the most transformative in a modern web stack: semantic search powered by vector embeddings. Unlike a SQL LIKE query that matches exact characters, semantic search understands meaning -- a query for "affordable housing policy" can surface a document titled "rent control legislation" even with zero keyword overlap. This is not the future of search; it is the present, and your users already expect it.
The Prompt
You are an expert Next.js and Supabase engineer. Build a complete semantic search feature for my app using the following stack: OpenAI text-embedding-3-small, Supabase pgvector, and Next.js 14 App Router with TypeScript throughout.
Implement each of the following steps in full -- do not skip or abbreviate any of them:
1. DATABASE SCHEMA
- Enable the pgvector extension in Supabase: CREATE EXTENSION IF NOT EXISTS vector;
- Add an embedding column to the existing `documents` table:
ALTER TABLE documents ADD COLUMN IF NOT EXISTS embedding vector(1536);
- Create an HNSW index for fast cosine similarity search:
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);
2. EMBEDDING GENERATION FUNCTION (TypeScript, server-side)
- File: lib/embeddings.ts
- Accept a string, call the OpenAI API with model text-embedding-3-small
- Return a number[] (the 1536-dimension vector)
- Handle empty input by throwing a typed error
- Export type: EmbeddingVector = number[]
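A sketch of what step 2 might produce. The endpoint and response shape follow OpenAI's embeddings REST API; the `EmptyInputError` class name and the synchronous throw-before-fetch structure are my own choices, not part of the prompt:

```typescript
// lib/embeddings.ts -- generate a 1536-dimension embedding for a string.
// Assumes OPENAI_API_KEY is set and the global fetch of Node 18+.

export type EmbeddingVector = number[];

export class EmptyInputError extends Error {
  constructor() {
    super("Cannot embed an empty string");
    this.name = "EmptyInputError";
  }
}

export function generateEmbedding(text: string): Promise<EmbeddingVector> {
  // Throw synchronously on empty input so callers get a typed error
  // before any network round trip happens.
  if (!text.trim()) throw new EmptyInputError();
  return fetchEmbedding(text);
}

async function fetchEmbedding(text: string): Promise<EmbeddingVector> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  if (!res.ok) throw new Error(`OpenAI embeddings request failed: ${res.status}`);
  const json = (await res.json()) as { data: { embedding: EmbeddingVector }[] };
  return json.data[0].embedding;
}
```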
3. SUPABASE RPC FUNCTION (SQL)
- Function name: match_documents
- Parameters: query_embedding vector(1536), match_threshold float, match_count int
- Use cosine distance operator (<=>)
- Return: id, title, content, similarity (1 - cosine distance)
- Filter results below match_threshold, order by similarity descending
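Spelled out, step 3 follows the standard Supabase pgvector pattern. This is a sketch: the `uuid` type for `id` is an assumption about your `documents` table, so adjust the return types to match your schema:

```sql
-- match_documents: return the closest documents to a query embedding.
create or replace function match_documents(
  query_embedding vector(1536),
  match_threshold float,
  match_count int
)
returns table (id uuid, title text, content text, similarity float)
language sql stable
as $$
  select
    d.id,
    d.title,
    d.content,
    1 - (d.embedding <=> query_embedding) as similarity  -- <=> is cosine distance
  from documents d
  where 1 - (d.embedding <=> query_embedding) > match_threshold
  order by similarity desc
  limit match_count;
$$;
```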
4. NEXT.JS API ROUTE
- File: app/api/search/route.ts
- Method: POST, accepts JSON body { query: string }
- Validate: return 400 if query is empty or missing
- Generate embedding for the query using the lib/embeddings.ts function
- Call Supabase RPC match_documents with match_threshold 0.75 and match_count 10
- Return 200 with { results: SearchResult[] } or 500 with { error: string }
- Export TypeScript interface: SearchResult { id: string; title: string; content: string; similarity: number }
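A sketch of the route in step 4. App Router handlers accept Web-standard `Request` and return `Response`, so no `next/server` import is needed; to keep this sketch dependency-free it calls the RPC over Supabase's PostgREST endpoint and inlines a minimal `embed` helper -- in the real app you would import from `lib/embeddings.ts` and call `supabase.rpc("match_documents", ...)` with the JS client instead:

```typescript
// app/api/search/route.ts -- POST { query } -> { results } via match_documents.
// Assumes OPENAI_API_KEY, SUPABASE_URL, and SUPABASE_ANON_KEY are set.

export interface SearchResult {
  id: string;
  title: string;
  content: string;
  similarity: number;
}

// In the real app this comes from lib/embeddings.ts; inlined so the sketch
// stands alone.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  const json = (await res.json()) as { data: { embedding: number[] }[] };
  return json.data[0].embedding;
}

export async function POST(req: Request): Promise<Response> {
  let query: unknown;
  try {
    ({ query } = (await req.json()) as { query?: unknown });
  } catch {
    return Response.json({ error: "Invalid JSON body" }, { status: 400 });
  }
  if (typeof query !== "string" || !query.trim()) {
    return Response.json({ error: "query is required" }, { status: 400 });
  }
  try {
    const query_embedding = await embed(query);
    // PostgREST RPC call; supabase.rpc("match_documents", {...}) is equivalent.
    const rpc = await fetch(
      `${process.env.SUPABASE_URL}/rest/v1/rpc/match_documents`,
      {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          apikey: process.env.SUPABASE_ANON_KEY ?? "",
          Authorization: `Bearer ${process.env.SUPABASE_ANON_KEY}`,
        },
        body: JSON.stringify({ query_embedding, match_threshold: 0.75, match_count: 10 }),
      }
    );
    if (!rpc.ok) throw new Error(`match_documents failed: ${rpc.status}`);
    const results = (await rpc.json()) as SearchResult[];
    return Response.json({ results });
  } catch (err) {
    console.error(err); // log the raw error; return a generic message
    return Response.json({ error: "Search failed" }, { status: 500 });
  }
}
```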
5. REACT SEARCH COMPONENT
- File: components/SemanticSearch.tsx
- Controlled input with debounce of 400ms (implement debounce with useEffect and setTimeout, no external library)
- Show a loading spinner while the request is in flight
- Show a "No results found" message when the results array is empty and the query is non-empty
- Render each result as a card showing title, a truncated excerpt (first 160 chars of content), and similarity as a percentage
- Handle fetch errors gracefully -- display a user-friendly error message, log the raw error to console
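The debounce in step 5 is a cancel-and-reschedule pattern: every keystroke clears the previous timer and starts a new one, which is exactly what the useEffect cleanup function does. As a framework-free sketch of the same logic:

```typescript
// Debounce: run fn only after `ms` of silence. In the React component this
// same idea lives in a useEffect whose cleanup clears the pending timer.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  ms: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // cancel the previous schedule
    timer = setTimeout(() => fn(...args), ms);    // reschedule from now
  };
}
```

Inside `SemanticSearch.tsx` the hook form reads roughly: `useEffect(() => { const t = setTimeout(() => runSearch(query), 400); return () => clearTimeout(t); }, [query])` -- the returned cleanup plays the role of `clearTimeout(timer)` above.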
Produce all five files in order. Add a brief comment at the top of each file explaining its role. Do not use any vector search library other than the Supabase JS client and the pgvector SQL operators.
Why It Works
This prompt works because it eliminates every ambiguity an AI coding tool might otherwise resolve in the wrong direction.
Specificity kills hallucination. Naming text-embedding-3-small (not just "an embedding model"), specifying vector(1536) dimensions, and naming the exact cosine distance operator <=> prevents the model from guessing -- or reaching for a superseded alternative like text-embedding-ada-002.
Numbered, ordered steps force sequencing. The schema has to exist before the RPC function, and the RPC function has to exist before the API route. Listing them in dependency order means the output code will be in the right order to run without modification.
Edge cases are not an afterthought. Specifying "return 400 if query is empty," "No results found when array is empty," and "handle fetch errors gracefully" closes the gaps where vibe-coded apps typically break in production.
File paths are explicit. lib/embeddings.ts, app/api/search/route.ts, components/SemanticSearch.tsx -- the AI knows exactly where each piece lives, which keeps the output compatible with a standard Next.js 14 App Router project structure.
Supabase's own documentation for vector search with Next.js and OpenAI walks through this same architecture, which makes it the natural reference for production deployments on this stack.
The Anti-Prompt
Do not do this:
add semantic search to my app using embeddings
Why it fails: The AI has no idea which embedding model to use, what database you are on, what framework you are in, or how the search UI should behave. You will receive one of two outcomes: a toy example using an in-memory cosine similarity function over a hardcoded array, or a hallucinated library import that does not exist. Either way, you are starting over. Vague prompts produce confident-sounding but wrong code at a higher rate than almost any other input pattern -- and semantic search has enough moving parts that the compounded errors are severe.
Variations
Variation 1 -- Supabase pgvector with hybrid search fallback
Extend step 3 of the main prompt with:
Also create a second SQL function named hybrid_search that combines the pgvector cosine similarity result with a Postgres full-text search score using ts_rank. Accept an additional parameter query_text text, build a tsvector from the content column, and blend the two scores with a weight of 0.7 semantic + 0.3 keyword. Return the same SearchResult interface.
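The blend in this variation is a plain weighted sum. One wrinkle the prompt leaves implicit: cosine similarity lands in 0..1 but ts_rank is unbounded, so the keyword score should be normalized before blending. The `x / (x + 1)` squash below is my own choice of normalizer, not something the variation specifies:

```typescript
// Blend a cosine similarity (0..1) with a ts_rank keyword score (unbounded),
// weighted 0.7 semantic + 0.3 keyword as in the hybrid_search variation.
function hybridScore(semantic: number, tsRank: number): number {
  const keyword = tsRank / (tsRank + 1); // squash ts_rank into 0..1 (assumption)
  return 0.7 * semantic + 0.3 * keyword;
}
```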
Hybrid search is the pattern FocusReactive's March 2025 implementation guide recommends for production, because pure semantic search can miss exact-match queries for proper nouns and product codes.
Variation 2 -- Pinecone instead of pgvector
Replace steps 1 and 3 with:
Instead of Supabase pgvector, use Pinecone as the vector store. In lib/embeddings.ts, add a second export function upsertEmbedding(id: string, vector: number[], metadata: Record<string, string>) that writes to a Pinecone index named "documents". In the API route, replace the Supabase RPC call with a Pinecone query call, returning the top 10 matches above score 0.75.
Use this variation when your document count exceeds several million rows, where a dedicated vector database offers better index management and multi-region replication.
Variation 3 -- Simpler full-text search without embeddings
If you do not yet have an OpenAI API key and want a working search feature today:
Add full-text search to my documents table in Supabase using Postgres tsvector. Add a generated column fts of type tsvector built from to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, '')). Add a GIN index on fts. Create a Supabase RPC function text_search that accepts query_text text and returns documents ordered by ts_rank descending. Build the same Next.js API route and React component, but call text_search instead.
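The SQL this variation describes, spelled out as a sketch. `websearch_to_tsquery` is my choice of query parser (it handles quoted phrases and bare keywords gracefully); `plainto_tsquery` works too:

```sql
-- Generated tsvector column over title + content, kept in sync automatically.
alter table documents
  add column fts tsvector
  generated always as (
    to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, ''))
  ) stored;

create index on documents using gin (fts);

-- text_search: keyword search ranked by ts_rank, descending.
create or replace function text_search(query_text text)
returns setof documents
language sql stable
as $$
  select *
  from documents
  where fts @@ websearch_to_tsquery('english', query_text)
  order by ts_rank(fts, websearch_to_tsquery('english', query_text)) desc;
$$;
```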
This is not semantic search, but it is a massive upgrade over LIKE '%query%', requires no external API calls, and costs nothing to run.
Real-World Context
A December 2025 deep-dive on DEV Community about implementing semantic search with pgvector noted a key operational insight: embedding titles and body text as separate vectors -- rather than concatenating them -- meaningfully improves recall for short queries, because users search with "title-like keywords" that match poorly against long body embeddings. That is the kind of production detail that belongs in a follow-up prompt once the baseline feature is running.
The cost picture is also favorable. OpenAI's text-embedding-3-small model is priced at $0.02 per million tokens as of 2025 -- cheap enough that embedding a 10,000-document corpus at launch and re-embedding on content updates is operationally trivial for any early-stage product.
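To make that concrete, a back-of-the-envelope in TypeScript. The ~500 tokens-per-document average is an assumption for illustration; measure your own corpus:

```typescript
// Embedding cost estimate for text-embedding-3-small at $0.02 per 1M tokens.
const PRICE_PER_MILLION_TOKENS = 0.02;

function embeddingCostUsd(documents: number, avgTokensPerDoc: number): number {
  const totalTokens = documents * avgTokensPerDoc;
  return (totalTokens / 1_000_000) * PRICE_PER_MILLION_TOKENS;
}
```

Under that assumption, a 10,000-document corpus at ~500 tokens each is 5 million tokens -- roughly ten cents to embed the whole thing.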
Ask The Guild
This is Prompt 30 of 30 -- the last one. Over the past month, you have learned how to prompt for scaffolding, debugging, refactoring, testing, database design, API integrations, auth flows, and now search. The skill is not memorizing prompts. The skill is knowing what you want precisely enough to ask for it.
Here is your final reflection question:
Which single prompt from this series changed the way you work the most -- and what is the one thing you wish you had known on Day 1?
Share your answer in the thread. Read everyone else's. Thirty days of daily prompts is a foundation, not a finish line.