In the last two weeks of March 2026, a loose crew of teenagers and young adults calling themselves TeamPCP executed one of the most consequential supply chain attacks in AI infrastructure history. The FBI issued a critical alert. Trend Micro called it “one of the most sophisticated multi-ecosystem supply chain campaigns publicly documented to date.” Forbes got a direct interview with the hackers.

The cascade went like this: compromise a security scanner, use it to poison an AI gateway, and watch the credentials flow in.

The Chain: Trivy → LiteLLM → Everything

Step 1: The security scanner becomes the attack. In late February, a TeamPCP operator exploited a misconfigured pull_request_target workflow in Trivy, the open-source vulnerability scanner used by ~10,000 companies. They stole the aqua-bot Personal Access Token. Aqua Security rotated credentials on March 1, but the rotation wasn’t atomic — TeamPCP kept valid tokens through the gap.
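For context, the dangerous pattern looks roughly like this (a simplified, hypothetical workflow, not Trivy's actual config): pull_request_target runs in the base repository's context with its secrets, so checking out and executing the pull request author's code hands those secrets to the author.

```yaml
# Hypothetical example of the risky pattern; not Trivy's actual workflow.
name: pr-check
on: pull_request_target   # runs in the BASE repo's context, with its secrets
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checking out the untrusted PR head...
          ref: ${{ github.event.pull_request.head.sha }}
      - run: make test   # ...and executing its code with secrets in scope
        env:
          BOT_TOKEN: ${{ secrets.BOT_PAT }}   # now attacker-reachable
```

The safe default, `on: pull_request`, runs untrusted code without access to secrets; `pull_request_target` exists for workflows that need write access but must never execute PR-controlled code.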

On March 19, they force-pushed 76 of 77 release tags in trivy-action to malicious commits. The payload scraped Runner.Worker process memory for secrets, harvested cloud credentials and SSH keys, encrypted them with AES-256-CBC + RSA-4096, and exfiltrated to a typosquatted domain (scan[.]aquasecurtiy[.]org). The legitimate Trivy scan still ran afterward — producing normal output, leaving no visible sign of compromise.

Step 2: The AI gateway becomes a backdoor. LiteLLM — an AI proxy that lets developers route between GPT-5, Claude, and other LLMs through a single interface — was a Trivy user. TeamPCP found LiteLLM’s PyPI publishing credentials in the Trivy CI haul and pushed malicious versions 1.82.7 and 1.82.8 to PyPI.

The payload was a three-stage monster:

  • Stage 1: Credential harvester targeting 50+ categories of secrets — AWS credentials, .env files, SSH keys, Kubernetes service account tokens
  • Stage 2: Kubernetes lateral movement toolkit capable of compromising entire clusters
  • Stage 3: Persistent backdoor for ongoing remote code execution

LiteLLM has been downloaded 95 million times. It concentrates API keys and cloud credentials by design — that’s its job. When it’s compromised, every secret it touches is exposed.
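To see why, consider what a minimal LiteLLM proxy config looks like (model names and environment variables here are illustrative): every provider key the proxy routes for is loaded into one process.

```yaml
# Illustrative LiteLLM proxy config; model names and env vars are examples.
model_list:
  - model_name: gpt-5
    litellm_params:
      model: openai/gpt-5
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude
    litellm_params:
      model: anthropic/claude-sonnet-4-5
      api_key: os.environ/ANTHROPIC_API_KEY
```

One compromised process, every key in scope.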

Step 3: The cascade continues. TeamPCP also hit Checkmarx KICS (another security scanner) and later pushed malicious Telnyx versions to PyPI, hiding a stealer inside WAV files. The campaign spanned PyPI, npm, Docker Hub, GitHub Actions, and OpenVSX in a single coordinated operation.
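The reporting doesn't detail the exact WAV embedding method, but the general trick is simple: a WAV file declares its own length in the RIFF header, so bytes appended after the declared chunks are invisible to players and to scanners that trust the header. A minimal sketch of the idea (defensive illustration, not the actual Telnyx payload):

```python
import io
import struct
import wave

def wav_with_trailing_payload(payload: bytes) -> bytes:
    """Build a valid WAV file with extra bytes appended after the audio data.

    Players and naive scanners stop at the declared chunk lengths, so the
    trailing bytes ride along unnoticed -- one plausible way to smuggle
    data inside a media file.
    """
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(8000)
        w.writeframes(b"\x00\x00" * 8000)  # one second of silence
    return buf.getvalue() + payload

def extract_trailing_payload(blob: bytes) -> bytes:
    """Recover appended bytes using the declared RIFF size.

    RIFF header: b'RIFF' + <size> + b'WAVE'; a well-formed file is
    exactly size + 8 bytes long, so anything past that is smuggled data.
    """
    declared = struct.unpack("<I", blob[4:8])[0] + 8
    return blob[declared:]
```

Detection is the mirror image: compare a media file's actual length against its declared length, and treat any surplus as suspect.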

The Hackers Spoke Up

In a Forbes interview, a TeamPCP spokesperson using the handle T00001B described the group as young people who couldn’t find paying work and turned to cybercrime. They confirmed using Anthropic’s Claude to build components that helped the malware spread — not for finding vulnerabilities, but for accelerating payload development.

Their most revealing comment: many AI developers appeared to be blindly downloading tools like LiteLLM, believing the open-source community had made them safe.

“This attack wasn’t highly sophisticated at all but it was initially effective for this reason,” T00001B told Forbes. “Nobody expected this to snowball as hard as it did.”

Ben Hirschberg, CTO at Armo, put it bluntly: “It’s a nightmare scenario for the cyber community. The rise of AI agents has made one of the most expensive parts of offensive cyber much cheaper than it used to be.”

The 13-Minute Pivot

Trend Micro’s analysis revealed how fast TeamPCP iterated. Version 1.82.7 (10:39 UTC) injected the payload directly into proxy_server.py — effective but visible in code review. Thirteen minutes later, version 1.82.8 switched to a .pth file injection method — stealthier, exploiting Python’s automatic execution of .pth files in site-packages.
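The .pth mechanism is worth understanding: at interpreter startup, site.py executes any line in a site-packages .pth file that begins with `import`, a legitimate hook (setuptools uses it) that doubles as a stealthy persistence mechanism. A small audit script, as a defensive sketch rather than a reconstruction of the payload:

```python
import pathlib
import site

def suspicious_pth_lines(site_dirs=None):
    """Flag .pth lines that execute code at interpreter startup.

    site.py runs any .pth line starting with 'import ' every time Python
    launches. Legitimate packages use this, so review findings by hand
    rather than deleting them blindly.
    """
    site_dirs = site_dirs if site_dirs is not None else site.getsitepackages()
    findings = []
    for d in site_dirs:
        for pth in pathlib.Path(d).glob("*.pth"):
            text = pth.read_text(encoding="utf-8", errors="replace")
            for lineno, line in enumerate(text.splitlines(), 1):
                if line.startswith("import "):
                    findings.append((str(pth), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for path, lineno, line in suspicious_pth_lines():
        print(f"{path}:{lineno}: {line}")
```

Running this on a clean environment will still surface a few entries (e.g. from setuptools); the point is to baseline them so an unexpected addition stands out.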

That rapid adaptation suggests either deep Python internals knowledge or — more likely given their own admissions — AI-assisted attack refinement.

What This Means for OpenClaw Users

If you use LiteLLM as an upstream proxy (a common pattern for multi-model routing), check immediately:

  1. Verify your LiteLLM version. If you ran 1.82.7 or 1.82.8 at any point after March 19, treat all credentials that LiteLLM could access as compromised. Rotate everything: API keys, AWS credentials, SSH keys, Kubernetes service account tokens.

  2. Audit your CI/CD pipelines. If you use trivy-action in GitHub Actions, check which tag you pinned. If you were on a mutable tag (not a commit SHA), you may have run the malicious version. Review Aqua’s advisory.

  3. Pin dependencies by hash, not tag. Mutable tags and version ranges are the attack surface. Pin to exact commit SHAs for GitHub Actions and exact version hashes for PyPI packages.

  4. Treat your security tooling as attack surface. The uncomfortable lesson: security scanners have the same access as deployment tooling. If they’re compromised, everything downstream is exposed.

  5. Don’t assume open-source is audited. LiteLLM is downloaded roughly 3.4 million times per day. The malicious versions were discovered because they caused crashes — not because anyone reviewed the code change.
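Items 1 and 3 can be partially automated. A rough sketch (the version numbers come from the reporting above; the regex and the 40-character SHA check are simplifications, and the helper names are mine):

```python
import re
from importlib import metadata

# Malicious versions named in Trend Micro's analysis.
MALICIOUS_LITELLM = {"1.82.7", "1.82.8"}

def litellm_status() -> str:
    """Report whether the installed litellm build is a known-bad version."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "litellm not installed"
    if version in MALICIOUS_LITELLM:
        return f"COMPROMISED: litellm {version} -- rotate every reachable credential"
    return f"litellm {version} is not one of the known-malicious versions"

# Match `uses: aquasecurity/trivy-action@<ref>` where <ref> is NOT a
# full 40-character commit SHA (i.e. a mutable tag or branch).
MUTABLE_REF = re.compile(
    r"uses:\s*aquasecurity/trivy-action@(?![0-9a-f]{40}\b)(\S+)"
)

def mutable_trivy_refs(workflow_text: str) -> list:
    """Return mutable (non-SHA-pinned) trivy-action refs in a workflow file."""
    return MUTABLE_REF.findall(workflow_text)
```

Point `mutable_trivy_refs` at each file under `.github/workflows/`; any hit means the workflow would have followed a force-pushed tag. This checks the version string only, so a cleanly reinstalled environment can still hold credentials stolen while a bad version was live.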

The Structural Problem

TeamPCP didn’t need a zero-day. They needed one misconfigured CI workflow, one non-atomic credential rotation, and the trust graph of the open-source ecosystem to do the rest.

AI infrastructure has a unique concentration risk: tools like LiteLLM are credential aggregators by design. They hold the keys to every LLM provider, every cloud account, every API integration they proxy. When the gateway falls, the blast radius is everything behind it.

The industry response is already moving. CrowdStrike’s $740M acquisition of SGNL — announced in early 2026 — targets exactly this: continuous dynamic authorization that eliminates standing privileges for human, non-human, and AI agent identities. The premise is that static credentials shouldn’t exist long enough to be stolen.

For now, though, the defense is manual: rotate, pin, audit, and stop trusting that someone else checked.


Sources: Forbes, Trend Micro, Arctic Wolf, The Hacker News, CrowdStrike