Few-Shot Prompting — Teaching AI by Showing Examples
Use examples to teach AI your patterns, styles, and conventions before it writes new code.
If you've ever tried to explain a coding pattern to someone and found it easier to just show them an example — that's few-shot prompting. Instead of describing what you want in words, you show the AI one, two, or three examples of the pattern, and it extrapolates.
This technique is surprisingly powerful for vibe coding. When the AI can see what you've already done, it produces code that matches your style, conventions, and patterns with much higher accuracy.
What "Few-Shot" Means
The name comes from machine learning research:
- Zero-shot — You give the AI instructions but no examples. "Write a function that validates email addresses."
- One-shot — You give one example. "Here's how I wrote the phone number validator. Now write one for email that follows the same pattern."
- Few-shot — You give two or three examples. "Here are three validators I've written. Create the email validator using the same pattern."
More examples generally produce better results, but there are diminishing returns. Two or three examples usually capture the pattern. Ten examples waste tokens without improving output.
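To make the validator example concrete, here is the kind of pair a one-shot prompt revolves around: a hand-written phone validator that serves as the example, and the email validator you would expect the AI to produce in the same shape. Both are illustrative sketches, not production-grade validation:

```typescript
// The example you show the AI (the "one shot"): an existing validator.
export function isValidPhoneNumber(input: string): boolean {
  // Strip common separators, then accept 7-15 digits with an optional +.
  const digits = input.replace(/[\s\-().]/g, "")
  return /^\+?\d{7,15}$/.test(digits)
}

// The kind of output you'd expect back: same naming, same shape, same style.
export function isValidEmail(input: string): boolean {
  // Deliberately loose: one @, no whitespace, a dot in the domain.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.trim())
}
```

The point is not the regexes; it is that the second function mirrors the first one's naming, signature, and comment style without you ever describing those conventions in words.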
Why Examples Work Better Than Descriptions
Consider this prompt:
Create a database utility function for getting a user's projects.
Follow our project's conventions.

The AI doesn't know your conventions. Even with a .cursorrules file, the AI might interpret "conventions" differently than you mean.
Now consider this:
Here's how we write database utility functions in this project:
// src/lib/db/tasks.ts
export async function getTasksByProject(projectId: string) {
  const supabase = createClient()
  const { data, error } = await supabase
    .from('tasks')
    .select('*')
    .eq('project_id', projectId)
    .order('created_at', { ascending: false })

  if (error) throw new DatabaseError('Failed to fetch tasks', error)
  return data as Task[]
}

export async function getTaskById(taskId: string) {
  const supabase = createClient()
  const { data, error } = await supabase
    .from('tasks')
    .select('*, assignee:users(*)')
    .eq('id', taskId)
    .single()

  if (error) throw new DatabaseError('Failed to fetch task', error)
  return data as TaskWithAssignee
}
Now write the equivalent functions for getting a user's projects:
- getProjectsByUser(userId: string)
- getProjectById(projectId: string) — include the project's members

The second prompt shows exactly what pattern to follow — how you create the Supabase client, how you handle errors, how you type-cast the results, how you name things. The AI doesn't have to guess any of this.
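When the prompt lands, the output should read like a sibling of the task functions. The sketch below shows that expected shape. Because this page can't reach a real database, createClient here is a tiny in-memory stand-in that mimics only the supabase-js calls used, and the Project types and sample rows are hypothetical — everything above the second divider is scaffolding, not part of the pattern:

```typescript
// ---- Stand-ins so this sketch runs on its own (hypothetical) ----
class DatabaseError extends Error {
  original: unknown
  constructor(message: string, original?: unknown) {
    super(message)
    this.original = original
  }
}

interface Project { id: string; user_id: string; created_at: string }
type ProjectWithMembers = Project & { members: { id: string; name: string }[] }

const rows: ProjectWithMembers[] = [
  { id: "p1", user_id: "u1", created_at: "2025-01-02", members: [{ id: "u1", name: "Ada" }] },
  { id: "p2", user_id: "u1", created_at: "2025-01-05", members: [] },
]

// Minimal in-memory fake of the supabase-js query-builder surface used below.
function createClient() {
  return {
    from(_table: string) {
      let result = [...rows]
      const api: any = {
        select: (_cols: string) => api,
        eq: (col: "user_id" | "id", value: string) => {
          result = result.filter((r) => r[col] === value)
          return api
        },
        order: (_col: string, _opts: { ascending: boolean }) => {
          result.sort((a, b) => b.created_at.localeCompare(a.created_at))
          return { data: result, error: null }
        },
        single: () =>
          result.length === 1
            ? { data: result[0], error: null }
            : { data: null, error: new Error("expected exactly one row") },
      }
      return api
    },
  }
}

// ---- The kind of output a good few-shot prompt produces: same client
// ---- setup, same error handling, same naming as the task examples ----
export async function getProjectsByUser(userId: string) {
  const supabase = createClient()
  const { data, error } = await supabase
    .from('projects')
    .select('*')
    .eq('user_id', userId)
    .order('created_at', { ascending: false })

  if (error) throw new DatabaseError('Failed to fetch projects', error)
  return data as Project[]
}

export async function getProjectById(projectId: string) {
  const supabase = createClient()
  const { data, error } = await supabase
    .from('projects')
    .select('*, members:users(*)')
    .eq('id', projectId)
    .single()

  if (error) throw new DatabaseError('Failed to fetch project', error)
  return data as ProjectWithMembers
}
```

Notice that nothing in the two exported functions had to be invented: every structural decision was lifted directly from the task examples.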
When to Use Few-Shot Prompting
Creating Multiple Similar Things
When you need to create several components, pages, or functions that follow the same pattern, build the first one carefully, then use it as the example for the rest.
Here's my ProductCard component:
[paste ProductCard code]
Now create these cards using the same pattern:
- TeamMemberCard — name, role, avatar, email
- TestimonialCard — quote, author name, company, rating stars
- PricingCard — plan name, price, feature list, CTA button

Matching an Existing Codebase
When you're adding to a project that already has established patterns:
The existing API routes in this project follow this pattern:
[paste an existing route]
Create a new API route at /api/settings that:
- GET: returns the current user's settings
- PUT: updates the user's settings
Follow the same error handling, auth checking, and response format.

Data Transformations
When you need to transform data in a specific way:
I need to transform raw database records into display-ready objects.
Input example:
{ id: "abc", created_at: "2025-01-15T10:30:00Z", price_cents: 2999, user_id: "u1" }
Output example:
{ id: "abc", createdAt: "January 15, 2025", price: "$29.99", userId: "u1" }
Rules I'm following:
- Snake case to camelCase
- Dates formatted as "Month Day, Year"
- Prices from cents to formatted dollars
- IDs stay unchanged
Now write a function that does this transformation for an array of these records.

Consistent Formatting
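For the Data Transformations prompt above, a hand-written version of the function — useful as a reference to check the AI's output against — might look like the following sketch. The field names and rules come straight from the example records; the en-US locale and UTC time zone are assumptions:

```typescript
interface RawRecord {
  id: string
  created_at: string // ISO timestamp
  price_cents: number
  user_id: string
}

interface DisplayRecord {
  id: string
  createdAt: string // "January 15, 2025"
  price: string // "$29.99"
  userId: string
}

function toDisplayRecord(raw: RawRecord): DisplayRecord {
  return {
    id: raw.id, // IDs stay unchanged
    createdAt: new Date(raw.created_at).toLocaleDateString("en-US", {
      month: "long",
      day: "numeric",
      year: "numeric",
      timeZone: "UTC", // assumed: format in UTC so the date never shifts by a day
    }),
    price: `$${(raw.price_cents / 100).toFixed(2)}`,
    userId: raw.user_id,
  }
}

export function toDisplayRecords(raws: RawRecord[]): DisplayRecord[] {
  return raws.map(toDisplayRecord)
}
```

Having a reference like this, even a rough one, makes it easy to spot when the AI silently deviates — say, by formatting dates in the local time zone.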
When you want output formatted a specific way:
I'm writing helper text for form fields. Here are examples of my style:
Field: Email
Helper: "We'll use this to send you order confirmations. No spam, ever."
Field: Password
Helper: "At least 8 characters. Mix in some numbers or symbols for extra security."
Field: Phone Number
Helper: "Optional. Only used if there's a delivery issue and we need to reach you fast."
Now write helper text for these fields:
- Billing Address
- Company Name (optional field)
- Preferred Contact Time

How to Structure Few-Shot Prompts
The pattern is always the same:
1. Brief context (optional)
2. One or more examples
3. The instruction for what to create
4. Any additional constraints

One Example (One-Shot)
Here's how I structure React page components:
[paste example]
Now create the Settings page following the same structure. It should have
sections for Profile, Notifications, and Billing.

Two Examples (Better Pattern Recognition)
Here are two API routes from this project:
Example 1 - GET route:
[paste GET route]
Example 2 - POST route with validation:
[paste POST route]
Create a new route at /api/teams that supports both GET (list user's teams)
and POST (create a new team with name and description).

Three Examples (Best for Complex Patterns)
Here are three database migration files from this project:
Migration 001:
[paste migration]
Migration 002:
[paste migration]
Migration 003:
[paste migration]
Create migration 004 that adds a "settings" table with columns:
user_id (foreign key to users), theme (text, default 'light'),
notifications_enabled (boolean, default true), timezone (text).
Include RLS policies matching the pattern from the other migrations.

The Example Quality Matters
Not all examples are equally useful. Good examples for few-shot prompting:
Show the complete pattern. Partial examples leave gaps the AI fills with guesses. Show the full file or function, not a snippet.
Represent your actual conventions. Use examples from your real codebase, not hypothetical code. Real code has real patterns — import styles, error handling, naming conventions — that hypothetical code often lacks.
Include edge cases. If your pattern handles special cases (null values, empty arrays, error states), include an example that demonstrates this.
Are recent. If your conventions have evolved, use examples from recent code, not older code that follows outdated patterns.
If your examples disagree with each other, stop and resolve that first. Few-shot prompting amplifies the pattern you show, so conflicting examples usually produce confident but messy output.
Combining Few-Shot With Other Techniques
Few-shot prompting works great combined with other techniques from this module:
Few-shot + System Prompts: Your .cursorrules file describes conventions in words. Few-shot examples show them in action. Together, they give the AI both the rules and the reference implementation.
Few-shot + Decomposition: When building features in steps, use the output from step 1 as the few-shot example for step 2. "Here's the user list page we just built. Now build the project list page following the same pattern."
Few-shot + Constraints: "Follow this pattern, but don't include the caching logic — we don't need it for this endpoint."
Try this now
Choose two real files from your codebase that represent the pattern you want repeated. Write down:
- what the agent should copy
- what the agent should adapt
- what it must not copy blindly
That short note is often the difference between "good imitation" and "weird cargo cult clone."
Prompt to give your agent
Use this when you want the agent to follow an existing pattern instead of inventing one: "I need a new [component / route / hook / migration]. Use these files as examples:
- [file 1]
- [file 2]
Replicate these aspects:
- [imports and file structure]
- [error handling pattern]
- [response shape or UI composition]
Adapt these aspects for the new task:
- [field names]
- [queries]
- [business logic]
Before coding, summarize the pattern you see. Call out any inconsistencies between the examples. Stop if the examples appear outdated or conflict with the rest of the repo."
What you must review yourself
- That the examples are current and still reflect your preferred conventions
- That the files do not contain hidden assumptions, legacy hacks, or outdated patterns
- That the agent copied the right structure instead of blindly copying old logic
- That no secrets, test fixtures, or environment-specific values leaked from the examples
Common mistakes to avoid
- Using too many examples. More context is not always better if the examples are noisy.
- Using inconsistent examples. If the samples disagree, the agent will improvise.
- Using toy examples for real work. Show the agent code at the same complexity level you actually need.
- Failing to say what should change. If you do not mark what is fixed versus adaptable, the agent may clone the wrong details.
Key takeaways
- Few-shot prompting works by showing the agent the pattern, not just naming it
- Two or three high-quality examples beat a wall of loosely related files
- Real code from your repo is usually more useful than hypothetical examples
- Always tell the agent what to replicate, what to adapt, and what to ignore
What's Next
Next up: Chain-of-Thought — Making AI Think Step by Step. Once the agent can follow your patterns, the next question is how to make it reason through harder problems before it writes code.