Mobile Dev — Part 1

72,000 Photos Exposed: When AI Sets Up Your Firebase

Written by claude-opus-4-6 · Edited by claude-opus-4-6
mobile · firebase · security · cloud-storage

Tea was a dating safety app for women. Verifying that users were who they claimed to be meant collecting sensitive data: selfies and government ID documents. In July 2025, roughly 72,000 images were exposed to the public internet, including approximately 13,000 verification selfies and ID photos, because the Firebase storage bucket was configured with rules that allowed unauthenticated read access.

The engineering team had used an AI assistant to set up the Firebase backend. The AI generated working code. The app functioned correctly in testing. The security rules it generated were the ones Firebase shows in its quickstart documentation — which, for development speed, default to open access.

The AI did not know the difference between "rules that let you build quickly" and "rules you should ship to production." It had no context about what data was being stored. It generated what developers typically ask for: a setup that works.

Seventy-two thousand images. Thirteen thousand government IDs. That's the cost of not reading what your AI wrote.

Firebase Security Rules: What Open Looks Like

Firebase Storage rules use a simple declarative syntax. The most dangerous rule is also the most common default in AI-generated setups:

// This is what AI almost always generates for quick setup
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if true;  // Anyone can read or write anything
    }
  }
}

This rule allows any person on the internet, authenticated or not, to read every file in your storage bucket and write new files. It is not a starting point to refine later. It is a live data exposure.
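No exploit is required to abuse rules like these: Firebase Storage serves every bucket through a public REST API, so anyone who learns the bucket name can enumerate and download objects. The sketch below only constructs the relevant URLs (the bucket name and object path are hypothetical, and no request is actually sent):

```python
# Sketch: the public REST endpoints an attacker hits when rules are wide open.
# Bucket and path are hypothetical; nothing here touches the network.
from urllib.parse import quote


def listing_url(bucket: str) -> str:
    """URL that lists every object in the bucket when rules allow public reads."""
    return f"https://firebasestorage.googleapis.com/v0/b/{bucket}/o"


def download_url(bucket: str, object_path: str) -> str:
    """URL that serves an object's bytes when rules allow public reads."""
    # Object paths are percent-encoded, including the '/' separators.
    return (f"https://firebasestorage.googleapis.com/v0/b/{bucket}/o/"
            f"{quote(object_path, safe='')}?alt=media")


print(listing_url("my-app.appspot.com"))
print(download_url("my-app.appspot.com", "users/123/id.jpg"))
```

With `allow read: if true`, the first URL returns a JSON listing of every object name, and the second returns the raw bytes of any of them, with no credentials involved.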

The Principle of Least Privilege Applied to Storage

The right rules depend on your application's data model, but the principle is always the same: grant the minimum access necessary for the application to function, for the authenticated user who owns the data.

For a typical user-generated content app:

rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Users can only read and write their own files
    match /users/{userId}/{allPaths=**} {
      allow read: if request.auth != null && request.auth.uid == userId;
      allow write: if request.auth != null
                   && request.auth.uid == userId
                   && request.resource.size < 10 * 1024 * 1024  // 10MB limit
                   && request.resource.contentType.matches('image/.*');
    }

    // Public files (e.g., app assets) — read only, no write
    match /public/{allPaths=**} {
      allow read: if true;
      allow write: if false;
    }

    // Deny everything not explicitly allowed
    match /{allPaths=**} {
      allow read, write: if false;
    }
  }
}

This pattern:

  • Requires authentication for any user data access
  • Enforces ownership (you can only access your own files)
  • Limits file size and type to prevent abuse
  • Denies access to anything not explicitly permitted
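The allow-write condition in the rules above reads as a plain boolean predicate. This Python model of that logic (the function names are mine, not a Firebase API, and `startswith` stands in for the `image/.*` regex) makes the four checks explicit:

```python
# Model of the owner-only write rule above. This mirrors the rule's logic in
# plain Python as an illustration; it is not how Firebase evaluates rules.

MAX_BYTES = 10 * 1024 * 1024  # 10MB, matching the rule


def can_write(auth_uid, path_user_id, size_bytes, content_type):
    """True only when every condition in the storage rule holds."""
    return (
        auth_uid is not None                   # request.auth != null
        and auth_uid == path_user_id           # owner check: uid matches path
        and size_bytes < MAX_BYTES             # size limit
        and content_type.startswith("image/")  # content-type check
    )


# An authenticated owner uploading a 2MB JPEG: allowed.
print(can_write("uid123", "uid123", 2 * 1024 * 1024, "image/jpeg"))  # True
# An unauthenticated request: denied.
print(can_write(None, "uid123", 1024, "image/jpeg"))                 # False
# Authenticated, but writing to someone else's path: denied.
print(can_write("uid123", "uid999", 1024, "image/jpeg"))             # False
```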

Storage Bucket ACLs on Other Platforms

The Firebase incident is not Firebase-specific. The same pattern applies to AWS S3, Google Cloud Storage, and Azure Blob Storage. AI-generated bucket configurations routinely default to public read access or overly broad IAM policies.

For S3, the equivalent of "deny everything not explicitly allowed" is the Block Public Access setting:

# Enable Block Public Access on a bucket (should be on by default, verify it)
aws s3api put-public-access-block \
  --bucket my-app-user-uploads \
  --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

Then grant access only through signed URLs or through an application layer that validates authentication.
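"Signed URLs" means the server mints a time-limited, tamper-evident link instead of making the object public. The sketch below illustrates the idea with a plain HMAC; real S3 presigned URLs use AWS Signature Version 4 (e.g. boto3's `generate_presigned_url`), so treat this as a conceptual illustration, not a drop-in, and the domain is hypothetical:

```python
# Conceptual sketch of a signed URL: the server signs (path, expiry) with a
# secret it never ships to clients, and verifies both on every fetch.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # stays server-side; never in client code


def sign_url(path: str, expires_at: int) -> str:
    """Mint a time-limited link: anyone can use it, only we can mint it."""
    msg = f"{path}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://cdn.example.com{path}?expires={expires_at}&sig={sig}"


def verify(path, expires_at, sig, now=None):
    """Reject expired or tampered links; compare_digest avoids timing leaks."""
    now = int(time.time()) if now is None else now
    if now >= expires_at:
        return False
    msg = f"{path}:{expires_at}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)


url = sign_url("/users/123/photo.jpg", expires_at=1_900_000_000)
print(url)
```

The bucket itself stays fully private; the only way to read an object is through a link your application layer chose to mint for an authenticated user.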

A Pre-Launch Mobile Backend Security Checklist

This is the checklist I would run before shipping any mobile app with a Firebase or cloud storage backend. It takes less than an hour and would have prevented the Tea App incident.

Authentication

  • Firebase Auth is enabled and required for all non-public operations
  • Token validation is enforced in security rules (not just in app code)
  • Anonymous auth is disabled unless there's a specific reason for it

Storage Rules

  • Default open rules (allow read, write: if true) have been replaced
  • Each path has explicit allow rules; everything else is denied
  • File size and content type limits are enforced in rules
  • Rules have been tested in the Firebase Rules Playground

Database Rules (Firestore/Realtime)

  • No collection allows unauthenticated read or write
  • Sensitive fields (PII, payment data, IDs) are in restricted collections
  • Field-level validation is in rules, not just in app code
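The Storage pattern above translates directly to Firestore. A sketch (the collection name is hypothetical) of owner-only documents plus a locked-down default:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Users can only read and write their own profile document
    match /users/{userId} {
      allow read, write: if request.auth != null && request.auth.uid == userId;
    }

    // Deny everything not explicitly allowed
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```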

API Keys and Config

  • Firebase config keys are restricted to your app's bundle ID / SHA-1
  • Service account keys are not in client-side code
  • google-services.json / GoogleService-Info.plist are in .gitignore

Pre-Launch Test

  • Attempt to access another user's files using the Firebase REST API directly
  • Attempt an unauthenticated read of a private storage path
  • If both return 403, your rules are working
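Both probes can be scripted. The sketch below (stdlib only; the URL is hypothetical) fires an unauthenticated GET and classifies the status code, with the classification kept as a pure function so it is easy to test:

```python
# Sketch of the pre-launch probe: unauthenticated GET against a private path,
# then classify the result. The URL is hypothetical.
import urllib.error
import urllib.request


def classify(status: int) -> str:
    """Turn an HTTP status from an unauthenticated probe into a verdict."""
    if status in (401, 403):
        return "rules enforced"
    if status == 404:
        return "inconclusive (object may not exist; retry with a real path)"
    if 200 <= status < 300:
        return "EXPOSED: private data readable without auth"
    return f"unexpected status {status}"


def probe(url: str) -> str:
    try:
        with urllib.request.urlopen(url) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as e:
        return classify(e.code)


print(classify(403))
print(classify(200))
```

Run the probe against a path you know holds real user data; a 404 on a made-up path proves nothing, because Firebase can return 404 whether or not rules would have allowed the read.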

What to Do Next

  1. Open your Firebase console right now and check your Storage rules. If you see allow read, write: if true on any path that contains user data, fix it immediately.
  2. Run the pre-launch checklist against your current project before your next release.
  3. Add a prompt constraint when using AI for Firebase setup: "Generate Firebase security rules that require authentication for all user data access and enforce owner-only permissions. Never use allow read, write: if true."

The Tea App's users trusted it with their most sensitive documents. The app trusted AI to set up its security. Neither trust was warranted. Yours doesn't have to end the same way.


🤖 Ghostwritten by Claude Opus 4.6 · Curated by Tom Hundley
