Prompt of the Day — Part 16 of 30

Prompt of the Day: Extract a Custom Hook from Repeated Logic

Written by claude-sonnet-4 · Edited by claude-sonnet-4
Tags: react, custom-hooks, refactoring, typescript, vibe-coding, dry-principle, code-quality, prompt-engineering, hooks, frontend



Last spring, a developer on our guild Slack shared a screenshot that I've thought about ever since.

He'd spent a weekend vibe-coding a dashboard app. Six components. All of them opened with something like this:

const [data, setData] = useState(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState(null);

useEffect(() => {
  fetch(`/api/${endpoint}`)
    .then(res => res.json())
    .then(setData)
    .catch(setError)
    .finally(() => setLoading(false));
}, [endpoint]);

Six times. Slightly different variable names. Same structure. The AI had happily scaffolded each component on demand, and each time it had written this block fresh.

This is not a hypothetical. It's the dominant failure mode of AI-assisted development in 2025. GitClear's research analyzing 211 million lines of code found that code duplication jumped since AI assistants became widespread — from 8.3% to 12.3% of all changes — with an eightfold spike in copy-paste patterns in 2024 alone. Even more alarming: refactoring rates collapsed from 25% of changes in 2021 to under 10% by 2024. For the first time in GitClear's dataset, copy-pasted code exceeded refactored code.

The AI didn't make your codebase worse on purpose. It optimized for making this component work, not for the health of your whole system. That's your job. And the single highest-leverage move you can make when you spot repeated patterns is: extract a custom hook.


The Prompt

Look at these [N] components in my codebase. They each contain repeated logic for 
[describe the pattern — e.g., "fetching data with loading and error states", 
"managing a form field with validation", "syncing state to localStorage"].

Extract this logic into a single custom React hook called `use[DescriptiveName]`.

Requirements:
- The hook should accept [list any parameters it needs to be flexible]
- It should return [the values and functions callers need]
- Include TypeScript types for inputs and return values
- Handle edge cases: [list them — e.g., "cleanup on unmount", "abort controller for fetch", "error boundary compatibility"]
- Show me how to refactor ONE of the original components to use the new hook
- Do not change the external behavior of any component

Why It Works

This prompt succeeds because it does four things that vague refactoring prompts almost never do.

It names the pattern explicitly. "Fetching data with loading and error states" is far more actionable than "clean this up." The AI knows exactly what to extract — it's not guessing.

It specifies the contract. Stating what the hook accepts and returns forces the AI to design an interface before writing implementation. This is the same discipline a senior engineer applies before touching a line of code.

It demands edge cases upfront. Without this, you'll get a hook that works on the happy path and breaks on unmount, concurrent renders, or rapid re-fetches. Telerik's React design patterns guide for 2025 notes that custom hooks allow stateful logic to be "tested independently from the used components" — but only if the hook is designed with cleanup and error handling in mind from the start.

It asks for a migration example. One concrete refactor is worth more than six abstract descriptions. It also lets you verify the hook API makes sense before you commit to using it everywhere.

Here's what a well-executed useFetch hook produced by that prompt looks like:

import { useState, useEffect, useRef } from 'react';

interface FetchState<T> {
  data: T | null;
  loading: boolean;
  error: Error | null;
}

export function useFetch<T>(url: string): FetchState<T> {
  const [data, setData] = useState<T | null>(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<Error | null>(null);
  const abortRef = useRef<AbortController | null>(null);

  useEffect(() => {
    abortRef.current?.abort();
    const controller = new AbortController();
    abortRef.current = controller;

    setLoading(true);
    setError(null);

    fetch(url, { signal: controller.signal })
      .then(res => {
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        return res.json() as Promise<T>;
      })
      .then(setData)
      .catch(err => {
        if (err.name !== 'AbortError') setError(err);
      })
      .finally(() => {
        // Skip the update if this request was aborted by a newer one,
        // so we don't clobber the loading state of the fetch in flight.
        if (!controller.signal.aborted) setLoading(false);
      });

    return () => controller.abort();
  }, [url]);

  return { data, loading, error };
}

And the refactored component:

// Before: 12 lines of repeated boilerplate
// After:
function UserProfile({ userId }: { userId: string }) {
  const { data: user, loading, error } = useFetch<User>(`/api/users/${userId}`);

  if (loading) return <Spinner />;
  if (error) return <ErrorMessage error={error} />;
  if (!user) return null;

  return <ProfileCard user={user} />;
}

The component dropped from ~20 lines to 8. More importantly, it now reads as what it does, not how it fetches.


The Anti-Prompt

Refactor this code to be cleaner.

Why it fails: "Cleaner" is not a specification. The AI will make cosmetic changes — rename variables, add comments, maybe consolidate the catch/finally — and hand it back. It has no reason to extract a hook unless you tell it the duplication problem exists across multiple components. It cannot see your whole codebase; it only sees what you show it.

Also deadly:

Make a custom hook for fetching.

This produces a generic useFetch with no abort controller, no TypeScript generics, no error typing, and a return shape that won't match what your components already expect. You'll spend twenty minutes adapting it. The "Rule of Three" from ByteIota's analysis of AI code quality applies here: when you see the same pattern for the third time, stop and refactor — but refactor with intention, not a throwaway prompt.


Variations

For form field management:

These components each manage a controlled input with validation. Extract a 
`useFormField(initialValue, validator)` hook that returns `{ value, onChange, 
error, reset }` and runs the validator on every change.
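The validator contract that prompt implies can be sketched framework-free. The names below (`Validator`, `composeValidators`, the example validators) are illustrative, not from any particular codebase:

```typescript
// A validator returns an error message, or null when the value passes.
type Validator<T> = (value: T) => string | null;

// Example validators a useFormField(initialValue, validator) hook might receive.
const required: Validator<string> = (v) =>
  v.trim() === '' ? 'This field is required' : null;

const maxLength = (limit: number): Validator<string> => (v) =>
  v.length > limit ? `Must be ${limit} characters or fewer` : null;

// Combine several validators; the first failure wins.
function composeValidators<T>(...validators: Validator<T>[]): Validator<T> {
  return (value) => {
    for (const validate of validators) {
      const error = validate(value);
      if (error) return error;
    }
    return null;
  };
}
```

Inside the hook, `onChange` would run the composed validator on every change and store the result in the `error` field it returns.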

For localStorage sync:

Extract a `useLocalStorage<T>(key, initialValue)` hook that keeps React state 
and localStorage in sync. Handle JSON parse errors gracefully and return 
[value, setValue, clearValue].
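The "handle JSON parse errors gracefully" requirement is the part the AI most often skips. Here's a framework-free sketch of that fallback logic — `readStoredValue` is my own illustrative name, not a known library API:

```typescript
// Parse a raw localStorage string, falling back to a default when the
// key is missing (null) or the stored value is not valid JSON.
function readStoredValue<T>(raw: string | null, fallback: T): T {
  if (raw === null) return fallback;
  try {
    return JSON.parse(raw) as T;
  } catch {
    return fallback;
  }
}
```

A `useLocalStorage` hook would call this with `localStorage.getItem(key)` when initializing state, so a corrupted entry degrades to `initialValue` instead of throwing during render.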

For debounced search:

These three search inputs each debounce their query before firing an API call. 
Extract a `useDebouncedSearch(delay)` hook. Debounce the input locally, 
expose the debounced value, and cancel pending calls on unmount.
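Under the hood, the cancellation behavior that prompt asks for is a timer that gets cleared on every new keystroke and on unmount. A framework-free sketch of that core logic (`createDebouncer` is an illustrative name — the hook itself would wrap this in state and a cleanup effect):

```typescript
// Collapses rapid schedule() calls into one delayed invocation;
// cancel() is what the hook's unmount cleanup would call.
function createDebouncer(delay: number) {
  let timer: ReturnType<typeof setTimeout> | null = null;
  return {
    schedule(fn: () => void) {
      if (timer !== null) clearTimeout(timer); // drop the pending call
      timer = setTimeout(fn, delay);
    },
    cancel() {
      if (timer !== null) clearTimeout(timer);
      timer = null;
    },
  };
}
```

In the hook, `schedule` would run on every input change to update the debounced value, and `cancel` would run in the effect's cleanup so no API call fires after the component unmounts.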

For detecting duplicate logic you haven't spotted yet:

Review the components in [list files or paste them]. Identify any stateful logic 
that appears more than once. List each pattern, how many times it's repeated, 
and suggest a custom hook name and interface for each one.

That last one is an audit prompt. Run it at the end of a vibe-coding session before you commit. It catches what your eyes miss after four hours of building.


Checklist

  • Identify the repeated pattern across components (loading/error/data, form state, subscriptions, browser APIs)
  • Name what the hook should accept as parameters
  • Name what the hook should return
  • Call out edge cases explicitly: cleanup, abort, error types, concurrent calls
  • Ask for one refactored example component to validate the API
  • Run the audit prompt after vibe-coding sessions to find duplication you missed
  • Verify behavior is unchanged — same outputs, same side effects, just centralized

Ask The Guild

What's the most common repeated logic pattern you see in AI-generated React code — fetch state, form fields, something else entirely? Drop your hook name and a one-line description of what it wraps in the comments. Bonus points if you share the prompt you used to extract it.


About Tom Hundley

Tom Hundley writes for builders who need stronger technical judgment around AI-assisted software work. The Guild turns production experience into public articles, copy-paste prompts, and structured learning paths that help non-software developers supervise AI agents more safely.
