72,000 Photos Exposed: When AI Sets Up Your Firebase
The Tea App was a dating safety application. Its purpose was to help people verify the identity of potential dates, which meant it collected sensitive data: selfie photos and government ID documents. In July 2025, 72,000 images were exposed to the public internet, including approximately 13,000 government IDs, because the Firebase storage bucket was configured with default rules that allowed unauthenticated read access.
The engineering team had used an AI assistant to set up the Firebase backend. The AI generated working code. The app functioned correctly in testing. The security rules it generated were the ones Firebase shows in its quickstart documentation — which, for development speed, default to open access.
The AI did not know the difference between "rules that let you build quickly" and "rules you should ship to production." It had no context about what data was being stored. It generated what developers typically ask for: a setup that works.
Seventy-two thousand images. Thirteen thousand government IDs. That's the cost of not reading what your AI wrote.
Firebase Security Rules: What Open Looks Like
Firebase Storage rules use a simple declarative syntax. The most dangerous rule is also the most common default in AI-generated setups:
```
// This is what AI almost always generates for quick setup
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if true; // Anyone can read or write anything
    }
  }
}
```
This rule allows any person on the internet, authenticated or not, to read every file in your storage bucket and write new files. It is not a starting point to refine later. It is a live data exposure.
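What "open" means in practice: every object in a Firebase Storage bucket is reachable over a public REST endpoint, so with `if true` rules each file is one unauthenticated GET away. A sketch of the probe, with a placeholder bucket and object path:

```shell
# Placeholder bucket and object path -- with open rules, this URL works
# for anyone on the internet, no credentials required.
BUCKET="my-app.appspot.com"
OBJECT="users/someUserId/id_photo.jpg"

# Object paths are URL-encoded in the REST endpoint (slashes become %2F).
ENCODED=$(printf '%s' "$OBJECT" | sed 's|/|%2F|g')
URL="https://firebasestorage.googleapis.com/v0/b/${BUCKET}/o/${ENCODED}?alt=media"
echo "$URL"

# Downloads the file with no auth token at all:
# curl -sS -o id_photo.jpg "$URL"
```

No SDK, no API key, no session: the URL alone is the attack surface.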
The Principle of Least Privilege Applied to Storage
The right rules depend on your application's data model, but the principle is always the same: grant the minimum access necessary for the application to function, for the authenticated user who owns the data.
For a typical user-generated content app:
```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Users can only read and write their own files
    match /users/{userId}/{allPaths=**} {
      allow read: if request.auth != null && request.auth.uid == userId;
      allow write: if request.auth != null
                   && request.auth.uid == userId
                   && request.resource.size < 10 * 1024 * 1024 // 10MB limit
                   && request.resource.contentType.matches('image/.*');
    }

    // Public files (e.g., app assets) -- read only, no write
    match /public/{allPaths=**} {
      allow read: if true;
      allow write: if false;
    }

    // Deny everything not explicitly allowed
    match /{allPaths=**} {
      allow read, write: if false;
    }
  }
}
```
This pattern:
- Requires authentication for any user data access
- Enforces ownership (you can only access your own files)
- Limits file size and type to prevent abuse
- Denies access to anything not explicitly permitted
Storage Bucket ACLs on Other Platforms
The Firebase incident is not Firebase-specific. The same pattern applies to AWS S3, Google Cloud Storage, and Azure Blob Storage. AI-generated bucket configurations routinely default to public read access or overly broad IAM policies.
For S3, the equivalent of "deny everything not explicitly allowed" is the Block Public Access setting:
```shell
# Enable Block Public Access on a bucket (should be on by default, verify it)
aws s3api put-public-access-block \
  --bucket my-app-user-uploads \
  --public-access-block-configuration \
  "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```
Then grant access only through signed URLs or through an application layer that validates authentication.
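For the signed-URL route, a short-lived link can be minted per object instead of making anything public. The bucket and key below are illustrative (reusing the bucket name from the example above), and the command assumes a configured AWS CLI:

```shell
# Mint a time-limited signed URL for one object instead of public-read.
# Bucket and key are placeholders; requires configured AWS credentials.
aws s3 presign s3://my-app-user-uploads/users/someUserId/photo.jpg \
  --expires-in 300   # link expires after 5 minutes
```

The application layer decides who gets a URL; S3 itself stays locked down.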
A Pre-Launch Mobile Backend Security Checklist
This is the checklist I would run before shipping any mobile app with a Firebase or cloud storage backend. It takes less than an hour and would have prevented the Tea App incident.
Authentication
- Firebase Auth is enabled and required for all non-public operations
- Token validation is enforced in security rules (not just in app code)
- Anonymous auth is disabled unless there's a specific reason for it
Storage Rules
- Default open rules (`allow read, write: if true`) have been replaced
- Each path has explicit allow rules; everything else is denied
- File size and content type limits are enforced in rules
- Rules have been tested in the Firebase Rules Playground
Database Rules (Firestore/Realtime)
- No collection allows unauthenticated read or write
- Sensitive fields (PII, payment data, IDs) are in restricted collections
- Field-level validation is in rules, not just in app code
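The owner-only pattern from Storage translates directly to Firestore rules. A minimal sketch, assuming a hypothetical `users` collection keyed by the owner's uid:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Each user document is readable and writable only by its owner
    match /users/{userId} {
      allow read, write: if request.auth != null && request.auth.uid == userId;
    }
    // Deny everything not explicitly allowed
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```

Sensitive fields belong in documents covered by rules like this, never in collections left readable for convenience.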
API Keys and Config
- Firebase config keys are restricted to your app's bundle ID / SHA-1
- Service account keys are not in client-side code
- `google-services.json` / `GoogleService-Info.plist` are in `.gitignore`
Pre-Launch Test
- Attempt to access another user's files using the Firebase REST API directly
- Attempt an unauthenticated read of a private storage path
- If both return 403, your rules are working
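Both probes can be scripted with curl against Firebase's REST endpoints. Everything below is a placeholder sketch: substitute your own bucket, a real private object path, your web API key, and a throwaway test account that does not own the object (the sign-in step assumes email/password auth is enabled):

```shell
#!/bin/sh
# Pre-launch probes. All values are placeholders -- replace before running.
BUCKET="my-app.appspot.com"
VICTIM_OBJECT="users%2FotherUserId%2Fid_photo.jpg"   # path slashes URL-encoded
API_KEY="your-web-api-key"
URL="https://firebasestorage.googleapis.com/v0/b/${BUCKET}/o/${VICTIM_OBJECT}?alt=media"

# Probe 1: unauthenticated read of a private path.
curl -s -o /dev/null -w 'unauthenticated: %{http_code}\n' --max-time 10 "$URL"

# Probe 2: authenticated read as the WRONG user. Sign in the test account
# via the Firebase Auth REST API to get an ID token...
TOKEN=$(curl -s --max-time 10 \
  "https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword?key=${API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{"email":"test@example.com","password":"test-password","returnSecureToken":true}' \
  | sed -n 's/.*"idToken": *"\([^"]*\)".*/\1/p')

# ...then request another user's file with that token.
curl -s -o /dev/null -w 'wrong user:      %{http_code}\n' --max-time 10 \
  -H "Authorization: Bearer ${TOKEN}" "$URL"

# Correct rules print 403 for both probes.
```

Anything other than two 403s means the rules are leaking.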
What to Do Next
- Open your Firebase console right now and check your Storage rules. If you see `allow read, write: if true` on any path that contains user data, fix it immediately.
- Run the pre-launch checklist against your current project before your next release.
- Add a prompt constraint when using AI for Firebase setup: "Generate Firebase security rules that require authentication for all user data access and enforce owner-only permissions. Never use `allow read, write: if true`."
The Tea App's users trusted it with their most sensitive documents. The app trusted AI to set up its security. Neither trust was warranted. Yours doesn't have to end the same way.
🤖 Ghostwritten by Claude Opus 4.6 · Curated by Tom Hundley