Security First — Part 6 of 30

Front-End vs Back-End: Why Browser API Keys Are Public

Written by claude-sonnet-4 · Edited by claude-sonnet-4
Tags: api-keys · front-end-security · back-end · browser-devtools · stripe · environment-variables · vibe-coding · security · next-js · publishable-keys


The $87,500 Lesson Nobody Plans to Learn

Picture this: A developer builds a payment app with the help of an AI coding tool. It works beautifully. They ship it. They're proud.

A few weeks later, 175 customers get charged $500 each. The developer didn't do it. But their Stripe secret API key — sitting right there in the front-end JavaScript — did.

$87,500 in unauthorized charges. Gone.

This isn't a hypothetical. A post circulating in March 2026 described exactly this scenario, with the author pointing out: "API keys were sitting on the front end. One prompt could have fixed it. But nobody asked that prompt."

That single distinction — front-end vs back-end — is one of the most important security concepts you will ever learn as a vibe coder. Let's break it down so completely that you never make this mistake.


What "Front-End" Actually Means for Security

Here's the mental model: anything that runs in a browser is public.

When a user visits your web app, their browser downloads your code. All of it. The HTML, the CSS, the JavaScript — it all arrives on their device. Every variable. Every string. Every API key you embedded in that code.

Open Chrome, hit F12 to open DevTools, click the Sources tab, and browse to any website's JavaScript files. You're reading their front-end code right now. No hacking required. Just F12.

That's the browser. It is a fundamentally public execution environment.

The back-end is different. Your server code runs on a machine you control. Users never see it. They send requests to it, and it sends responses back — but the code itself, including its secrets, stays private.

This is not a philosophical difference. It's a physical one.

┌─────────────────────────────────────────────────────┐
│  BROWSER (Public)           SERVER (Private)         │
│  ─────────────────          ─────────────────        │
│  HTML/CSS/JS visible  ───►  Node.js / Python / etc.  │
│  React components           Database credentials     │
│  PUBLISHABLE API keys       SECRET API keys          │
│  Anyone can read this       Only you can read this   │
└─────────────────────────────────────────────────────┘

The Two Types of API Keys (and Why They Exist)

Services like Stripe figured this out a long time ago. They give you two different keys for a reason.

Publishable keys (also called public keys) are designed to be embedded in front-end code. They identify your account to the service for low-stakes operations — like rendering Stripe's payment form widget in the browser. Even if someone copies your publishable key, they can't do much damage with it.

Secret keys are back-end only. According to Stripe's official documentation, "unlike publishable keys, which are safe to include in webpages and apps, secret keys must stay in your server environment. If an unauthorized party obtains your secret API key, they can make unauthorized charges, access customer data, or disrupt your integration."

Here's what they look like:

// ✅ SAFE — lives in your front-end JavaScript
const stripe = Stripe('pk_live_51ABC...');  // publishable key

// ❌ NEVER put this in your front-end — secret key
// stripe.secretKey = 'sk_live_51XYZ...';  // THIS IS A CRIME AGAINST YOUR USERS

The naming convention isn't arbitrary. pk_ = publishable key. sk_ = secret key. Stripe chose those prefixes so that automated scanners (and you) can instantly recognize which is which.
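Because the convention lives in the prefix, it can be checked mechanically. Here's a minimal sketch (the helper name and the prefix list are ours, not part of Stripe's API):

```javascript
// Classify a Stripe-style key by its prefix (illustrative helper, not a Stripe API).
function classifyKey(key) {
  if (key.startsWith('pk_')) return 'publishable'; // safe in front-end code
  if (key.startsWith('sk_')) return 'secret';      // back-end only
  return 'unknown';
}

console.log(classifyKey('pk_live_51ABC'));  // "publishable"
console.log(classifyKey('sk_live_51XYZ'));  // "secret"
```

A check like this makes a good pre-commit hook: refuse any commit whose front-end files contain a string classified as `secret`.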


The Google Gemini Wake-Up Call (February 2026)

Here's a newer, nastier version of this problem — one that blindsided developers who thought they were doing everything right.

For years, Google API keys (used for Maps, YouTube embeds, Firebase) were considered safe to embed in front-end code. Google's own documentation said so. Developers followed the rules.

Then Google launched Gemini AI.

In February 2026, security researchers at TruffleHog scanned a common internet crawl dataset and found 2,863 live Google API keys publicly embedded in websites and JavaScript files — some from major financial institutions, security companies, and Google itself. The keys had been sitting there for years, harmless.

Until Gemini changed everything.

When Google enabled the Generative Language API on projects, every existing API key in that project automatically inherited access to Gemini endpoints. Those old Maps keys could now query Gemini, access uploaded files, read cached conversation data, and rack up AI billing charges.

One developer on Reddit reported $82,314.44 in unexpected Gemini charges accumulated in just 36 hours — up from a typical $180/month spend.

The lesson: a key that was "publishable" yesterday may be a secret key tomorrow. Services evolve. Scopes expand. The only real protection is understanding why keys belong where they do — not just following rules that can change.


What AI Coding Tools Get Wrong

Here's the uncomfortable truth: AI coding assistants are optimized to make things work, not to make things secure.

When you prompt an AI to "add Stripe payments to my app" or "connect to the OpenAI API," it will often generate code that works — including code that puts credentials exactly where they shouldn't be.

A 2025 security article documented how modern front-end frameworks like Next.js, React, and Angular routinely leak API keys into their bundled JavaScript. In Next.js, any environment variable prefixed with NEXT_PUBLIC_ is automatically baked into the client-side build — meaning it's public by design. But the AI doesn't know the difference between NEXT_PUBLIC_STRIPE_KEY and NEXT_PUBLIC_STRIPE_SECRET_KEY. The prefix makes both public.

# next.js .env.local

# ✅ OK to be public
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_live_abc123

# ❌ NEVER prefix a secret with NEXT_PUBLIC_
# NEXT_PUBLIC_STRIPE_SECRET_KEY=sk_live_xyz789  ← this ends up in every user's browser
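To see why the prefix makes a value public, here's a rough model of what the bundler's inlining step does at build time (heavily simplified; the real Next.js build is more involved):

```javascript
// Simplified model of build-time env inlining: every process.env.NEXT_PUBLIC_*
// reference in the source is replaced with its literal value.
function inlinePublicEnv(source, env) {
  return source.replace(
    /process\.env\.(NEXT_PUBLIC_\w+)/g,
    (_, name) => JSON.stringify(env[name] ?? '')
  );
}

const bundled = inlinePublicEnv(
  'const key = process.env.NEXT_PUBLIC_STRIPE_SECRET_KEY;',
  { NEXT_PUBLIC_STRIPE_SECRET_KEY: 'sk_live_xyz789' }
);
console.log(bundled);  // → const key = "sk_live_xyz789";
```

After inlining, the "environment variable" no longer exists: it is a plain string literal in the JavaScript file that every visitor downloads.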

Always review AI-generated code for any credentials. Ask the AI explicitly: "Is any secret key in this code accessible from the browser?"


The Right Architecture: Back-End as Gatekeeper

The correct pattern is simple: your front-end never touches secret keys directly. It talks to your back-end, which holds the secrets and makes the sensitive API calls.

 Browser (User)          Your Server             Stripe/OpenAI
 ─────────────           ────────────            ─────────────

 "Charge my card"  ──►  Receives request
                        Loads sk_live from
                        environment variable
                        Makes Stripe API call  ──►  Charges card
                        Returns result         ◄──  Success
 ◄── "Success!"         Sends response

In practice, with a simple Node.js/Express back-end:

// server.js (back-end only — users never see this)
const express = require('express');
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);  // secret key from env var

const app = express();
app.use(express.json());  // parse JSON request bodies

app.post('/create-payment', async (req, res) => {
  const { amount } = req.body;
  const paymentIntent = await stripe.paymentIntents.create({
    amount,
    currency: 'usd',
  });
  res.json({ clientSecret: paymentIntent.client_secret });
});

app.listen(3000);

// frontend.js (public — this is fine)
const stripe = Stripe(process.env.NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY);  // Next.js inlines this at build time

// Ask YOUR server for the payment intent — not Stripe directly
const response = await fetch('/create-payment', { method: 'POST', ... });
const { clientSecret } = await response.json();

Notice what's happening: the front-end only ever holds the publishable key. The secret key lives in an environment variable on your server, injected at runtime, never written into code.
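A cheap guardrail on the server side is to fail fast at startup if a required secret is missing, or if someone accidentally put a publishable key in the secret slot. A minimal sketch (the helper name is ours):

```javascript
// Guardrail: load a secret from the environment and refuse to start
// if it's missing or looks like the wrong kind of key.
function requireSecretKey(name, env = process.env) {
  const value = env[name];
  if (!value) throw new Error(`Missing required env var: ${name}`);
  if (value.startsWith('pk_')) {
    throw new Error(`${name} looks like a publishable key; expected a secret (sk_) key`);
  }
  return value;
}

// Usage at the top of server.js:
// const stripe = require('stripe')(requireSecretKey('STRIPE_SECRET_KEY'));
```

Crashing loudly at boot is far better than silently running with the wrong credential until a customer gets charged.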


Security Checklist

Before you ship any project that uses external APIs, run through this list:

  • Open DevTools on your app (F12 → Sources → search for your key prefix). If you can find your key, attackers can too.
  • Check all environment variables — in Next.js, never prefix a secret with NEXT_PUBLIC_. In React, never use REACT_APP_ for secrets.
  • Use secret keys only in server-side code — Lambda functions, Express routes, FastAPI endpoints, or similar.
  • Store secrets in environment variables, not in code files (.env files should be in .gitignore).
  • Know the difference — if a service gives you two keys, read the docs to understand which is publishable and which is secret.
  • Audit after AI generates code — ask your AI assistant: "Does any secret credential appear in browser-accessible code?"
  • Set spending limits on AI services (OpenAI, Google Cloud, AWS) so a leaked key can't generate unlimited charges.
  • Rotate any key you suspect has been exposed — don't wait to confirm. Rotate first, investigate after.
  • Review which APIs your keys have access to — a key originally created for Maps may now have Gemini access. Check your Google Cloud Console.
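The first checklist item can also be automated: download your deployed bundle and scan its text for secret-key prefixes. A sketch (the pattern list is illustrative; extend it for the services your project actually uses):

```javascript
// Scan front-end bundle text for strings that look like secret credentials.
// Pattern list is illustrative: add patterns for your own providers.
const SECRET_PATTERNS = [
  /sk_(?:live|test)_[A-Za-z0-9]+/g,   // Stripe secret keys
  /AIza[A-Za-z0-9_-]{10,}/g,          // Google API keys
];

function findLeakedSecrets(bundleText) {
  return SECRET_PATTERNS.flatMap((re) => bundleText.match(re) ?? []);
}

const sample = 'const a = "pk_live_ok"; const b = "sk_live_51XYZoops";';
console.log(findLeakedSecrets(sample));  // [ 'sk_live_51XYZoops' ]
```

Run it against the same files you'd see in DevTools → Sources; if it finds anything, rotate that key before doing anything else.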

Ask The Guild

Community Prompt: Have you ever accidentally shipped an API key in front-end code — or caught yourself about to? What happened, and what did you change about your workflow? Drop your story in the comments. Your near-miss might save someone else from the real thing.


About Tom Hundley

Tom Hundley writes for builders who need stronger technical judgment around AI-assisted software work. The Guild turns production experience into public articles, copy-paste prompts, and structured learning paths that help non-software developers supervise AI agents more safely.
