Prompt of the Day — Part 18 of 30

Prompt of the Day: Build a File Upload with Presigned URLs

Written by claude-sonnet-4 · Edited by claude-sonnet-4
file-upload · presigned-urls · aws-s3 · python · fastapi · typescript · react · security · iam · vibe-coding · prompt-engineering · cloud-storage



In August 2025, security researchers discovered a publicly accessible S3 bucket containing 273,000 live Indian banking transaction PDFs — bank account numbers, home addresses, recurring debit authorizations — with roughly 3,000 new files being uploaded to it every single day. The company responsible hadn't been hacked. No credentials were stolen. Someone had built the file upload flow with an overly permissive bucket and no presigned URL architecture. The bucket was just... open.

That same year, iVision's security research team documented a different class of failure in presigned URL implementations: IDOR via path manipulation. Developers were generating presigned URLs based on a file path parameter sent from the client. Change the path, get a signed URL for someone else's file. Not a bug in AWS — a bug in how the server trusted the client to name the object it wanted.

Day 14 of this series covered the architecture of file uploads: the full mental model, the pipeline stages, virus scanning with GuardDuty, post-processing workflows. This prompt article is different. This is the copy-paste prompt you hand to your AI coding assistant to generate the entire presigned URL upload flow — securely — before you write a single line by hand.


The Prompt

Build a secure file upload system using AWS S3 presigned URLs in Python 
(FastAPI backend) and TypeScript (React frontend). The implementation 
must include:

1. PRESIGNED URL GENERATION ENDPOINT:
   - Create a POST /api/upload/presign endpoint that accepts 
     { filename, content_type, file_size_bytes } in the request body.
   - Authenticate the user before generating the URL (JWT or session check).
   - Replace the user-supplied filename with a UUID-based key 
     (e.g., uploads/{user_id}/{uuid}.{ext}) — never use the raw filename 
     as the S3 object key.
   - Validate file_size_bytes against a max limit (10MB default). 
     Reject if exceeded.
   - Validate content_type against an allowlist 
     (e.g., image/jpeg, image/png, application/pdf).
   - Generate the presigned PUT URL using boto3 with a 5-minute expiry 
     and include ContentType and ContentLength conditions in the 
     presigned URL policy.
   - Return: { upload_url, object_key, expires_in: 300 }.

2. FRONTEND UPLOAD COMPONENT (React + TypeScript):
   - Call /api/upload/presign first to get the presigned URL.
   - Upload the file directly to S3 using fetch() with method PUT,
     setting Content-Type from the file object.
   - Show upload progress using XMLHttpRequest upload progress events 
     (fetch() does not expose upload progress).
   - After successful upload, call a POST /api/upload/confirm endpoint 
     with the object_key to register the upload in your database.
   - Handle errors: presign failure, S3 PUT failure, confirm failure.

3. CONFIRM ENDPOINT (POST /api/upload/confirm):
   - Verify the authenticated user owns the object_key prefix 
     (user_id must match the prefix in the key).
   - Call s3.head_object() to confirm the file actually exists in S3 
     before writing to the database — never trust the client's claim 
     that the upload succeeded.
   - Store { user_id, object_key, original_filename, content_type, 
     file_size, created_at } in the database.
   - Return the internal file record ID.

4. SECURITY GUARDRAILS:
   - Use a dedicated IAM role for presigned URL generation with 
     s3:PutObject permission scoped only to the uploads/ prefix.
   - Set a bucket policy that enforces HTTPS-only access 
     (deny requests where aws:SecureTransport is false).
   - Block all public access on the bucket.
   - Do NOT grant the presign Lambda/function s3:GetObject — 
     it only needs to generate upload URLs.

5. LOCAL DEVELOPMENT:
   - Include a docker-compose.yml that runs LocalStack to emulate S3.
   - Provide the boto3 endpoint_url configuration for LocalStack.
   - Include sample .env.example with all required variables.

Do not stream files through your server. Do not accept file uploads 
as multipart/form-data to your backend. The backend only generates 
presigned URLs — the file bytes never touch your server.
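
Section 5's LocalStack wiring can be sketched as a small helper, assuming you signal local mode with an AWS_ENDPOINT_URL environment variable (the variable name is a convention here, not from the article; LocalStack's default edge port is 4566):

```python
import os

def s3_client_kwargs() -> dict:
    """Build the kwargs for boto3.client("s3", **kwargs).
    When AWS_ENDPOINT_URL is set (e.g. http://localhost:4566 for
    LocalStack), route all S3 calls there instead of real AWS."""
    kwargs = {"region_name": os.environ.get("AWS_REGION", "us-east-1")}
    endpoint = os.environ.get("AWS_ENDPOINT_URL")
    if endpoint:
        kwargs["endpoint_url"] = endpoint
    return kwargs
```

In production the variable is simply unset, so the same code path talks to real S3 with no LocalStack-specific branches scattered through the app.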

Why It Works

This prompt encodes six security decisions that AI assistants skip when you give them a loose request.

UUID key replacement. The iVision research found that presigned URL IDOR attacks almost always trace back to server-side code that uses the client-supplied filename as the S3 object key. If you ask the server for a presigned URL for ../../admin/config.json, a naive implementation generates exactly that. Forcing a UUID key server-side eliminates the attack surface entirely — the client never decides where the file lands.
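
A minimal sketch of that server-side key builder, assuming an extension allowlist mirroring the prompt's content types (the helper name and allowlist are illustrative, not from the article's sample code):

```python
import uuid
import posixpath

# Assumption: extensions mirror the content-type allowlist
ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png", "pdf"}

def build_object_key(user_id: str, client_filename: str) -> str:
    """Build a server-controlled S3 key. The client's filename contributes
    at most a vetted extension, never a path component."""
    # Strip any directory components the client may have smuggled in
    basename = posixpath.basename(client_filename.replace("\\", "/"))
    ext = basename.rsplit(".", 1)[-1].lower() if "." in basename else ""
    suffix = f".{ext}" if ext in ALLOWED_EXTENSIONS else ""
    return f"uploads/{user_id}/{uuid.uuid4()}{suffix}"
```

Even a hostile input like `../../admin/config.json` produces a key safely inside the caller's own `uploads/{user_id}/` prefix.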

ContentLength in the presigned policy. AWS allows you to embed conditions in the presigned URL that S3 enforces at upload time. Without a ContentLength condition, an attacker with your presigned URL can upload a 5GB file to your bucket. With it, S3 rejects any upload that doesn't match the exact byte count the server approved. This constraint is invisible in most AI-generated examples because it requires an extra parameter in the boto3 call that developers rarely know exists.

The confirm-via-head_object pattern. The prompt requires calling s3.head_object() before writing to your database. This is the defense against a subtle attack: a malicious user calls /confirm with an object_key they never actually uploaded to. If your confirm endpoint trusts the client's word, you've created a ghost record pointing to a file that may not exist — or worse, pointing to a file someone else uploaded. head_object() is authoritative: if S3 says the file isn't there, the confirm fails.

No file bytes through your server. The AWS prescriptive guidance on presigned URLs makes this the primary architectural principle: your API server should never be in the data path for file uploads. Beyond security, this has a major operational benefit — your API instances don't need memory scaled to handle large file buffers, and you don't pay for egress twice.

The 5-minute expiry. As TOC Consulting's 2026 AWS S3 security guide documents, a presigned URL inherits the full permissions of the IAM identity that signed it. A URL with a 7-day expiry is effectively a 7-day credential. Five minutes is enough for any reasonable upload; it's not enough for an attacker to find the URL in a log file and use it.

Dedicated IAM role. The presign function gets s3:PutObject on uploads/* only. Not s3:*. Not s3:GetObject. If that function is compromised, an attacker can upload to your uploads prefix — they cannot read any other objects, list the bucket, or delete files. Least privilege isn't a checkbox; it's what limits the blast radius when something goes wrong.
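
As a sketch, the role's inline policy might look like the following (bucket name and Sid are placeholders, not from the article):

```python
import json

# Illustrative least-privilege policy for the presign function's role.
# Note the absence of s3:GetObject, s3:ListBucket, and s3:DeleteObject.
PRESIGN_ROLE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploadPrefixOnly",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-upload-bucket/uploads/*",
        }
    ],
}

# Attach it with iam.put_role_policy(..., PolicyDocument=json.dumps(PRESIGN_ROLE_POLICY))
```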


Sample Output (Python/FastAPI — Key Sections)

import boto3
import uuid
import os
from fastapi import FastAPI, HTTPException, Depends
from pydantic import BaseModel

app = FastAPI()
s3 = boto3.client("s3", region_name=os.environ["AWS_REGION"])

ALLOWED_CONTENT_TYPES = {"image/jpeg", "image/png", "application/pdf"}
MAX_FILE_SIZE = 10 * 1024 * 1024  # 10MB
BUCKET = os.environ["S3_BUCKET_NAME"]

class PresignRequest(BaseModel):
    filename: str
    content_type: str
    file_size_bytes: int

@app.post("/api/upload/presign")
async def presign_upload(
    body: PresignRequest,
    current_user=Depends(get_current_user)  # your auth dependency
):
    if body.content_type not in ALLOWED_CONTENT_TYPES:
        raise HTTPException(400, f"Content type not allowed: {body.content_type}")

    if body.file_size_bytes > MAX_FILE_SIZE:
        raise HTTPException(400, "File exceeds 10MB limit")

    # Extract extension from original filename — never use the filename itself
    ext = body.filename.rsplit(".", 1)[-1].lower() if "." in body.filename else ""
    suffix = f".{ext}" if ext else ""  # avoid a trailing dot when there's no extension
    object_key = f"uploads/{current_user.id}/{uuid.uuid4()}{suffix}"

    upload_url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={
            "Bucket": BUCKET,
            "Key": object_key,
            "ContentType": body.content_type,
            "ContentLength": body.file_size_bytes,
        },
        ExpiresIn=300,  # 5 minutes
    )

    return {"upload_url": upload_url, "object_key": object_key, "expires_in": 300}


class ConfirmRequest(BaseModel):
    object_key: str

@app.post("/api/upload/confirm")
async def confirm_upload(
    body: ConfirmRequest,  # object_key arrives as JSON, matching the frontend
    current_user=Depends(get_current_user)
):
    # Verify this user owns the key prefix
    expected_prefix = f"uploads/{current_user.id}/"
    if not body.object_key.startswith(expected_prefix):
        raise HTTPException(403, "Object key does not belong to current user")

    # Verify file actually exists in S3 — never trust the client
    try:
        head = s3.head_object(Bucket=BUCKET, Key=body.object_key)
    except s3.exceptions.ClientError:
        raise HTTPException(404, "File not found in S3 — upload may have failed")

    # Now safe to write to database (db is your application's data layer)
    record = await db.files.create({
        "user_id": current_user.id,
        "object_key": body.object_key,
        "file_size": head["ContentLength"],
        "content_type": head["ContentType"],
    })
    return {"file_id": record.id}

And the frontend upload (TypeScript/React — the core logic):

async function uploadFile(file: File): Promise<string> {
  // Step 1: Get presigned URL from your backend
  const presignRes = await fetch('/api/upload/presign', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      filename: file.name,
      content_type: file.type,
      file_size_bytes: file.size,
    }),
  });
  if (!presignRes.ok) throw new Error('Failed to get upload URL');
  const { upload_url, object_key } = await presignRes.json();

  // Step 2: Upload directly to S3 — file bytes never touch your server
  const s3Res = await fetch(upload_url, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
  if (!s3Res.ok) throw new Error('S3 upload failed');

  // Step 3: Confirm with your backend
  const confirmRes = await fetch('/api/upload/confirm', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ object_key }),
  });
  if (!confirmRes.ok) throw new Error('Upload confirmation failed');
  const { file_id } = await confirmRes.json();
  return file_id;
}

The Anti-Prompt

Here's what a vibe coder writes when they just want file uploads to work:

Add file upload to my app using S3.

Why it fails: The AI will generate a multipart form handler that streams the file through your API server — every uploaded byte passes through your backend memory before landing in S3. On a $7/month VPS this chokes at concurrent uploads. On a serverless function it hits memory limits on files over a few MB.

Worse, the generated code will almost certainly use the original filename as the S3 object key (uploads/resume.pdf). That's the IDOR vector documented by iVision's research — an authenticated user can craft a request to generate a presigned URL for uploads/../other-user-id/resume.pdf and overwrite another user's file. The code looks correct. The tests pass. The vulnerability ships.

The anti-prompt also produces code with no confirm step. The client calls /complete-upload and your backend immediately writes the database record. But what if the S3 PUT failed silently? Or the user calls /complete-upload for a key they never uploaded? You now have ghost database records and a support ticket about a "successful" upload that doesn't exist.


Variations

For multipart uploads (files over 100MB):

[Same core prompt, but add: "Use S3 multipart upload for files over 
100MB. Generate presigned URLs for each part using 
create_multipart_upload + generate_presigned_url per part, then 
complete_multipart_upload after all parts succeed. The frontend 
should split the file into 10MB chunks and upload parts in parallel 
with a concurrency limit of 3."]
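
The server-side calls (create_multipart_upload, one generate_presigned_url per part, complete_multipart_upload) follow the boto3 API; the client-side chunk math can be sketched as a pure helper (names and the 10MB default are illustrative):

```python
PART_SIZE = 10 * 1024 * 1024  # 10MB chunks, as the variation prompt suggests

def part_ranges(file_size: int, part_size: int = PART_SIZE) -> list[tuple[int, int, int]]:
    """Return (part_number, start_byte, end_byte_exclusive) for each part.
    S3 part numbers are 1-based, and every part except the last must be
    at least 5MB, which a 10MB chunk size satisfies."""
    ranges = []
    start = 0
    part_number = 1
    while start < file_size:
        end = min(start + part_size, file_size)
        ranges.append((part_number, start, end))
        start = end
        part_number += 1
    return ranges
```

The frontend slices the File object with these offsets (file.slice(start, end)) and PUTs each slice to its per-part presigned URL.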

For Google Cloud Storage instead of S3:

[Replace the S3/boto3 references with: "Use Google Cloud Storage 
with the google-cloud-storage Python library. Replace presigned URLs 
with GCS signed URLs using service account credentials. The security 
requirements — UUID keys, content type validation, confirm-via-metadata, 
dedicated service account — remain identical."]

For image-only uploads with automatic resizing:

[Same prompt, then add: "After the confirm step, enqueue an 
async job (Celery or BullMQ) that: (1) re-validates the uploaded 
file is genuinely an image using python-magic on the raw bytes, 
(2) generates three resized versions (thumbnail/medium/full) using 
Pillow, (3) stores each variant to S3 under the same UUID prefix, 
(4) updates the database record with all three variant keys."]

For public CDN-served assets (profile photos, public attachments):

[Same core prompt, but modify the confirm step: "After s3.head_object 
confirms the file exists, copy the object to a separate public-read 
bucket (never make the upload bucket public). Store the public CDN 
URL, not the raw S3 key, in the database. The upload bucket stays 
private; the delivery bucket is public."]

Pre-Ship Checklist

  • S3 object keys are UUID-based — the original filename is never used as the key
  • Content type validated server-side against an allowlist before presigning
  • File size validated server-side against a max limit before presigning
  • Presigned URL expiry is 5 minutes or less
  • File bytes never pass through your API server
  • The confirm endpoint calls head_object() before writing to the database
  • The confirm endpoint validates the user owns the key prefix
  • IAM role for presigning has s3:PutObject only — no s3:GetObject, no s3:*
  • Block Public Access enabled on the upload bucket
  • Bucket policy denies requests where aws:SecureTransport is false (HTTPS-only)
  • LocalStack setup confirmed working for local development
  • Frontend handles all three failure modes: presign, S3 PUT, and confirm

Ask The Guild

What's the worst file upload mistake you've seen in production — or shipped yourself? Filenames used as S3 keys? Multipart bodies routed through an underpowered Lambda? A bucket that was "temporarily" public for a weekend and somehow never got locked down? Drop the war story in the thread. Every painful lesson shared here is one fewer 2 a.m. incident for someone else.


About Tom Hundley

Tom Hundley writes for builders who need stronger technical judgment around AI-assisted software work. The Guild turns production experience into public articles, copy-paste prompts, and structured learning paths that help non-software developers supervise AI agents more safely.
