Security First — Part 19 of 30

AI Hallucination Detection: When Your AI Invents Packages

Written by claude-sonnet-4 · Edited by claude-sonnet-4
Tags: slopsquatting, AI hallucination, package hallucination, supply chain security, PyPI security, npm security, vibe coding, AI coding tools, dependency security, malicious packages, package validation, ChatGPT security, Copilot security, software supply chain


It's a Tuesday afternoon in early 2024. A developer at Alibaba is following the README instructions for GraphTranslator, an open-source research tool. The instructions tell them to run:

pip install huggingface-cli

The install succeeds. No errors. The tool runs. Life goes on.

Except huggingface-cli isn't a real package. It was imagined by ChatGPT, which kept suggesting it when developers asked how to install Hugging Face tools. (The correct command is pip install -U "huggingface_hub[cli]" — a completely different package name.) Bar Lanyado, a security researcher at Lasso Security, noticed the AI kept hallucinating this same name. So he uploaded an empty, harmless package under that name to PyPI — just to see what would happen.

In three months, that empty package received over 30,000 genuine downloads. Alibaba was one of the companies pulling it in. A Hugging Face-owned project had incorporated it too, until Lanyado alerted them.

Here's the uncomfortable part: Lanyado made his package harmless on purpose. He was a researcher running an experiment. The next person to register that name might not be so generous.


Welcome to Slopsquatting

Yesterday we covered supply chain attacks via npm — how attackers compromise real packages and hijack maintainer accounts. Today's threat is different. Today we're talking about packages that never existed until an AI invented them.

Slopsquatting is the attack where someone:

  1. Monitors what package names AI coding assistants tend to hallucinate
  2. Registers those names on PyPI or npm
  3. Fills them with malware
  4. Waits

Every developer who asks an AI assistant the same question, gets the same hallucinated package name, and runs pip install or npm install becomes a victim. The install succeeds. No errors. The malware runs silently in the background — harvesting credentials, exfiltrating API keys, establishing persistence.

Unlike typosquatting (which targets human typing mistakes) or dependency confusion (which targets private package names), slopsquatting targets AI outputs. And AI assistants make the same mistakes, repeatedly, at scale.


The Research Numbers Are Alarming

In 2025, researchers from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma published a comprehensive study at USENIX Security 2025. They tested 16 popular AI coding models across 576,000 code samples. Their findings:

  • On average, 19.7% of recommended packages didn't exist — roughly one in five
  • That generated approximately 205,000 unique hallucinated package names
  • Commercial models (GPT-4, etc.) hallucinated at 5.2% — still significant
  • Open-source models like DeepSeek and WizardCoder hallucinated at 21.7%
  • In separate testing, Gemini fabricated packages in 64.5% of conversations

The number that should worry you most isn't the hallucination rate — it's the repeatability rate. The researchers found that 43% of hallucinated package names appeared consistently across 10 different runs of the same prompt. Over 58% reappeared in multiple runs.

That repeatability is what makes this a viable attack. Attackers don't have to guess. They run a handful of AI queries, note which fake package names keep appearing, register those names, and wait for the AI to keep recommending them to everyone else.

In January 2026, Aikido Security researcher Charlie Eriksen found an npm package called react-codeshift that nobody had registered yet — but AI agents had spread its name to 237 GitHub repositories through forks, and it was already receiving a couple of daily downloads from AI agents trying to install it. If an attacker had claimed it first, thousands of automated AI pipelines would have silently executed whatever malware was inside.


What Hallucinated Packages Actually Look Like

AI models don't hallucinate completely random strings. They hallucinate plausible-sounding names — which makes them more dangerous, not less. The USENIX research found:

  • 38% are conflations of two real things (e.g., express-mongoose, react-axios-client)
  • 13% are typo variants of existing packages
  • 51% are pure fabrications that sound totally legitimate

Here's a quick illustration of the pattern. If you ask an AI assistant to help you validate environment variables in a Node.js app, you might get:

// AI-suggested code
const { validateEnv } = require('env-validator-utils');

const config = validateEnv({
  DATABASE_URL: { required: true, type: 'string' },
  PORT: { required: false, type: 'number', default: 3000 },
});

Does env-validator-utils exist? Maybe. Maybe not. The name is perfectly plausible. You wouldn't know without checking — and most developers don't check.


How to Detect Hallucinated Packages Before You Install Them

The fix is simple: verify before you install. It takes 30 seconds and it's the single most important habit you can build.

Step 1: Check the Registry First

Before running any pip install or npm install that an AI suggests, look it up:

# For Python packages — check PyPI
pip index versions <package-name>

# Or just visit: https://pypi.org/project/<package-name>/

# For npm packages — check the registry
npm view <package-name>

# Or just visit: https://www.npmjs.com/package/<package-name>

If the package doesn't exist, pip index versions fails with a "No matching distribution found" error and npm view exits with an E404 error. That's your signal to stop.

Step 2: Check Package Age and Download Count

Even if the package does exist, that doesn't mean it's legitimate. A slopsquatting attacker may have already registered it. Look for red flags:

# Check when the npm package was first published
npm view <package-name> time.created

# Check recent download counts via the npm downloads API
curl -s https://api.npmjs.org/downloads/point/last-week/<package-name>

Red flags that should make you pause:

  • Package was created in the last few days or weeks
  • Zero or very low download counts (under 100 total)
  • No GitHub repository linked
  • Single version published, no updates
  • Description is vague or auto-generated-sounding
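Those red flags can be folded into a quick automated heuristic. Here's a minimal sketch in Python — the metadata field names (created, weekly_downloads, repository_url, version_count) are illustrative, so map them from whatever registry response you actually fetch:

```python
from datetime import datetime, timedelta, timezone

def red_flags(meta, now=None):
    """Return red-flag descriptions for a package, given its metadata.

    `meta` is assumed to carry: created (ISO 8601 string),
    weekly_downloads (int), repository_url (str or None), and
    version_count (int) — illustrative names, mapped from whatever
    registry response you fetch.
    """
    now = now or datetime.now(timezone.utc)
    flags = []
    created = datetime.fromisoformat(meta["created"].replace("Z", "+00:00"))
    if now - created < timedelta(days=30):
        flags.append("created less than 30 days ago")
    if meta.get("weekly_downloads", 0) < 100:
        flags.append("under 100 weekly downloads")
    if not meta.get("repository_url"):
        flags.append("no linked source repository")
    if meta.get("version_count", 0) <= 1:
        flags.append("only one published version")
    return flags

# A freshly registered package with no repo trips every check
suspicious = {
    "created": "2026-01-30T00:00:00Z",
    "weekly_downloads": 3,
    "repository_url": None,
    "version_count": 1,
}
```

Anything that returns a non-empty list deserves a manual look before you install.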

Step 3: Cross-Reference with the AI

After the AI suggests a package, ask it a follow-up:

You just suggested 'env-validator-utils'. Can you confirm this is a 
real, maintained package on npm? What's the GitHub repo URL? How 
many weekly downloads does it have?

A good AI assistant will either confirm the package with real details, or admit it isn't sure. Either way, you've made the AI accountable for its own suggestion.

Step 4: Use a Verification Script

If you're regularly building with AI assistance, add this check to your workflow. Save this as check-package.sh:

#!/bin/bash
# check-package.sh — verify a package exists before installing
# Usage: ./check-package.sh npm <package-name>
#        ./check-package.sh pypi <package-name>

ECOSYSTEM=$1
PACKAGE=$2

if [ "$ECOSYSTEM" = "npm" ]; then
  RESULT=$(npm view "$PACKAGE" name 2>&1)
  if echo "$RESULT" | grep -q "404\|E404\|Not found"; then
    echo "DANGER: $PACKAGE does not exist on npm. Do NOT install."
    exit 1
  else
    echo "OK: $PACKAGE exists on npm"
    npm view "$PACKAGE" version description homepage
  fi
elif [ "$ECOSYSTEM" = "pypi" ]; then
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://pypi.org/pypi/$PACKAGE/json")
  if [ "$STATUS" = "404" ]; then
    echo "DANGER: $PACKAGE does not exist on PyPI. Do NOT install."
    exit 1
  else
    echo "OK: $PACKAGE exists on PyPI"
    curl -s "https://pypi.org/pypi/$PACKAGE/json" | python3 -c \
      "import sys,json; d=json.load(sys.stdin)['info']; print(f'Version: {d[\"version\"]}\\nAuthor: {d[\"author\"]}\\nHome: {d[\"home_page\"]}')"
  fi
else
  echo "Usage: $0 [npm|pypi] <package-name>"
  exit 1
fi

Make it executable and use it every time an AI suggests a new package:

chmod +x check-package.sh
./check-package.sh npm react-codeshift
./check-package.sh pypi huggingface-cli

The AI-Validated-by-AI Problem

Here's the twist that keeps security researchers up at night: some AI-powered tools are being used to validate the packages that AI coding assistants suggest. When one hallucinating AI rubber-stamps another hallucinating AI's suggestions, the false confidence compounds.

Feross Aboukhadijeh, CEO of Socket, flagged a case in January 2025 where Google's AI Overview recommended a malicious npm package called @async-mutex/mutex — a typosquatted version of the legitimate async-mutex library — presenting it as a credible result to developers. The malicious package contained code designed to steal Solana private keys and exfiltrate them through Gmail's SMTP servers.

The lesson: never trust an AI to validate another AI's package recommendation. Only the registry itself — PyPI, npm — can confirm a package exists. And even then, existence isn't safety.


If You've Already Installed a Suspicious Package

If you've run pip install or npm install on an AI-suggested package without verifying it first, here's your incident response:

# 1. While the package is still installed, check what it contains
pip show -f <package-name>      # List all installed files (Python)
cat node_modules/<package-name>/package.json | grep -A5 '"scripts"'
                                # Look for preinstall/install/postinstall hooks

# 2. Uninstall the package
pip uninstall <package-name>    # Python
npm uninstall <package-name>    # Node.js

# 3. If you need a copy to inspect further, fetch it without installing
npm pack <package-name>             # Download the tarball, inspect offline
pip download --no-deps <package-name>

# 4. Rotate any credentials that were accessible in that environment
# — API keys, cloud tokens, database passwords, SSH keys

Inspect before you uninstall — pip show -f and the node_modules path only work while the package is still present. If the package had a postinstall script, assume it executed. Rotate all credentials immediately.
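The scripts check can be made more precise than a grep. Here's a minimal Python sketch that pulls out only the hooks npm runs automatically during installation — the sample manifest is invented for illustration:

```python
import json

# Hooks npm executes automatically during `npm install`
LIFECYCLE_HOOKS = {"preinstall", "install", "postinstall"}

def install_time_scripts(package_json_text):
    """Return the scripts in a package.json that run at install time."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items()
            if name in LIFECYCLE_HOOKS}

# Invented example manifest: one install-time hook, one harmless script
sample = json.dumps({
    "name": "some-hallucinated-pkg",
    "scripts": {"postinstall": "node collect.js", "test": "jest"},
})
# install_time_scripts(sample) returns only the postinstall entry
```

Run it against node_modules/<package-name>/package.json; any non-empty result means code may already have executed on your machine.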


Your Slopsquatting Defense Checklist

Before every AI-suggested package install:

  • Search the package registry manually (PyPI or npm) before running the install command
  • Check the package creation date — anything less than a few months old deserves extra scrutiny
  • Verify the package has a linked GitHub repo with real activity
  • Ask the AI to confirm the package exists and provide the official docs URL
  • Cross-check the package name against the AI's suggestion character by character
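That character-by-character cross-check can be automated for the typo-variant class of hallucinations. Here's a minimal sketch using Python's difflib — the known-good list is illustrative; in practice you'd use your own allowlist or the registry's most-downloaded packages:

```python
from difflib import SequenceMatcher

# Illustrative allowlist — substitute the packages your project actually trusts
KNOWN_GOOD = ["requests", "numpy", "pandas", "huggingface_hub", "async-mutex"]

def near_misses(name, known=KNOWN_GOOD, threshold=0.85):
    """Return known package names that `name` closely resembles but does
    not exactly match — a typosquat or hallucination signal."""
    return [k for k in known
            if k != name and SequenceMatcher(None, name, k).ratio() >= threshold]

# near_misses("requets") flags "requests"; an exact match flags nothing
```

A non-empty result is exactly the @async-mutex/mutex pattern from earlier: a name engineered to sit one glance away from a package you trust.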

Ongoing habits:

  • Never let an AI agent install packages automatically without human review
  • Keep AI temperature settings low in your coding tools (higher temperature = more hallucinations)
  • Run npm ci --ignore-scripts in CI pipelines to prevent postinstall execution
  • Use a software composition analysis (SCA) tool like Snyk or Socket that scans your full dependency tree, including nested dependencies
  • If you use AI agents with autonomous coding capabilities (Claude Code, Cursor, Copilot Workspace), scope their permissions so they cannot install packages without explicit approval
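The --ignore-scripts habit can also be made the default on a developer machine, not just in CI. A minimal .npmrc sketch:

```ini
# ~/.npmrc — never run lifecycle scripts automatically on install
ignore-scripts=true
```

With this set, npm install skips preinstall/postinstall hooks; explicitly invoked scripts like npm test and npm run still execute, but without their pre/post hooks. When a trusted package genuinely needs its build step, override it for that one install with npm install <package-name> --ignore-scripts=false.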

When something feels wrong:

  • If a package installs cleanly but you can't find any documentation, GitHub repo, or community discussion about it — treat it as malicious until proven otherwise
  • Rotate credentials immediately if you installed a package that turned out to be suspicious
  • Report confirmed malicious packages to the registry (PyPI's malware report form; for npm, the "Report malware" link on the package's registry page)

Ask The Guild

Community prompt: Have you ever caught your AI assistant recommending a package that turned out not to exist — or that had suspiciously low download counts? Share what happened in the thread below. What was the package name, which AI suggested it, and how did you catch it? Let's build a community list of hallucination patterns to watch out for.


Sources: Bar Lanyado / Lasso Security — AI Package Hallucinations | USENIX Security 2025 — "We Have a Package for You!" (Spracklen et al.) | Aikido Security — Slopsquatting: The AI Package Hallucination Attack | FOSSA — Slopsquatting: AI Hallucinations and the New Software Supply Chain Risk | Trend Micro — Slopsquatting: When AI Agents Hallucinate Malicious Packages | ThinkPol — Slopsquatting: the supply chain attack vibe coding made


About Tom Hundley

Tom Hundley writes for builders who need stronger technical judgment around AI-assisted software work. The Guild turns production experience into public articles, copy-paste prompts, and structured learning paths that help non-software developers supervise AI agents more safely.
