Last week we wrote about TeamPCP’s supply-chain attack on LiteLLM — the open-source library downloaded millions of times daily that connects applications to AI services. We warned that the blast radius was still expanding.
Now we have the first confirmed major casualty: Mercor, the $10 billion AI training data startup whose clients include OpenAI, Anthropic, and Meta.
What Happened
Fortune reports that Mercor confirmed it was “one of thousands of companies” affected by the TeamPCP supply-chain attack on LiteLLM. The company said it had “moved promptly” to contain and remediate the incident and that a third-party forensics investigation is underway.
But that’s the sanitized version. Here’s what’s actually unfolding:
Lapsus$ — the notorious extortion group known for social engineering attacks against Nvidia, Samsung, Microsoft, and Uber — has claimed responsibility for targeting Mercor specifically. According to TechCrunch, the group has published samples of allegedly stolen data on its leak site, including:
- Slack messages and internal ticketing data
- Two videos purportedly showing conversations between Mercor’s AI systems and contractors
- Claims of up to 4 terabytes of total data, including source code and database records
Security researchers from Wiz (now part of Google Cloud) have indicated that TeamPCP has recently begun collaborating with Lapsus$ and other ransomware/extortion groups — a significant escalation from the teenage hacking collective’s original profile.
Why Mercor Matters
Mercor isn’t just another startup. It’s a critical node in the AI training pipeline:
- $10B valuation after a $350M Series C led by Felicis Ventures (October 2025)
- Recruits domain experts — doctors, lawyers, writers, scientists — to produce training data
- Its customers include Anthropic, OpenAI, and Meta
- According to unconfirmed reports, datasets used by some customers and information about their secretive AI projects may have been compromised
That last point is the real headline. If Lapsus$ has obtained data related to the internal training processes of frontier AI labs, the implications extend far beyond a typical corporate breach. Training data curation is one of the most closely guarded competitive advantages in the industry.
The Kill Chain — Revisited
Here’s how the dominoes fell, connecting our earlier coverage:
- TeamPCP poisoned LiteLLM (versions 1.82.7–1.82.8) via a supply-chain attack on the Trivy CI runner and PyPI publishing pipeline
- The malicious code was a credential harvester that extracted 50+ types of secrets (API keys, database credentials, cloud tokens)
- Mercor, as one of thousands of LiteLLM users, had credentials exposed
- Lapsus$ used the harvested credentials (or collaborated with TeamPCP directly) to pivot deeper into Mercor’s infrastructure
- 4TB of data allegedly exfiltrated — including source code, Slack, databases, and contractor recordings
This is textbook supply-chain-to-extortion escalation: the initial compromise is automated and broad, then specialized extortion groups cherry-pick high-value targets from the victim pool.
The MOVEit Parallel
Fortune draws a direct comparison to the 2023 Cl0p/MOVEit campaign, where a single supply-chain vulnerability led to breaches at hundreds of organizations, affecting nearly 100 million individuals across government agencies, financial institutions, and healthcare providers. Extortion attempts from that campaign dragged on for months.
TeamPCP has publicly stated its intention to partner with ransomware and extortion groups to target affected companies at scale. If that strategy materializes, Mercor is the opening act — not the finale.
What This Means for OpenClaw Users
The LiteLLM supply-chain attack has direct relevance to the OpenClaw ecosystem. Many users run LiteLLM as a proxy layer for model routing. Here’s what to verify:
Immediate Actions
- Check your LiteLLM version — if you ever ran 1.82.7 or 1.82.8, assume credential compromise
- Rotate ALL API keys and tokens that were accessible to your LiteLLM instance
- Audit your pip/poetry lockfiles — pin to known-good SHA hashes, not just version numbers
- Check for unexpected outbound connections in your logs during the exposure window (March 23–25)
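The version check above can be scripted. The sketch below is a minimal example, assuming a standard pip-managed environment; the helper names are ours, not part of any LiteLLM or OpenClaw tooling:

```python
# Minimal sketch: flag whether the LiteLLM version in the current
# environment falls in the compromised range named above (1.82.7-1.82.8).
from importlib.metadata import version, PackageNotFoundError

# Versions reported as poisoned in the supply-chain attack.
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised_version(v: str) -> bool:
    """True if a LiteLLM version string falls in the poisoned range."""
    return v in COMPROMISED_VERSIONS

def litellm_install_is_compromised() -> bool:
    """Check the version installed in the current environment."""
    try:
        installed = version("litellm")
    except PackageNotFoundError:
        return False  # LiteLLM is not installed here
    return is_compromised_version(installed)
```

Note that this only inspects the current environment — if a compromised version was ever installed (in CI, in a since-deleted virtualenv), the credential-rotation step still applies.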
Structural Hardening
- Run LiteLLM (or any proxy) in an isolated environment — container, VM, or at minimum a separate user with restricted filesystem access
- Use OpenClaw’s built-in model routing where possible instead of external proxy layers
- Enable credential storage outside environment variables — use secrets managers or encrypted config rather than .env files that any compromised library can read
- Monitor your PyPI dependencies with pip-audit or Snyk — make it part of your update workflow
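The "credentials outside environment variables" point deserves a concrete illustration: any library running in your process — including a poisoned one — can read every entry in `os.environ`, so keys loaded from `.env` are free for the taking. A file read on demand with strict permissions is a step up. The sketch below is one way to do it, not an OpenClaw API; the path and permission policy are illustrative assumptions:

```python
# Sketch: load an API key from a permission-restricted file on demand,
# rather than exporting it into os.environ where any imported library
# can harvest it. The 0600-only policy here is our assumption.
import stat
from pathlib import Path

def load_secret(path: Path) -> str:
    """Read a secret from a file, refusing permissive permissions."""
    mode = path.stat().st_mode
    # Reject anything group- or world-accessible (looser than 0600).
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"{path} is accessible to other users")
    return path.read_text().strip()
```

A dedicated secrets manager (Vault, AWS Secrets Manager, the OS keychain) is better still, since it adds auditing and rotation; the pattern above is the minimum that removes the ambient-environment exposure.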
The Bigger Lesson
This incident crystallizes a risk we’ve been tracking: the AI tooling ecosystem is a high-value, lightly hardened target. LiteLLM has 95 million downloads and aggregates credentials for every major AI provider. It’s the definition of a crown-jewel dependency. TeamPCP — teenagers — understood this before most enterprise security teams did.
The credential aggregation pattern is especially dangerous. Any tool that holds keys to OpenAI, Anthropic, Google, Azure, and AWS simultaneously is a single point of compromise for an organization’s entire AI stack. Treat these tools with the same security posture you’d apply to an identity provider or secrets manager.
What’s Next
Mercor says a third-party forensics investigation is underway. Lapsus$ claims to have 4TB. TeamPCP says they’re scaling the extortion model.
If your organization uses LiteLLM — or any AI proxy that aggregates provider credentials — the question isn’t whether you were affected. It’s whether you’ve verified you weren’t.
We’ll continue tracking the fallout. The supply-chain attack surface in the AI ecosystem is still expanding faster than defenses are being built.
This post is part of our ongoing coverage of AI infrastructure security. See also: TeamPCP’s Nightmare Month, OpenClaw’s March CVE Tsunami.