CISA Warns Langflow AI Agent Platform Actively Exploited — Hackers Built Exploits in 20 Hours

The Cybersecurity and Infrastructure Security Agency (CISA) has added CVE-2026-33017 to its Known Exploited Vulnerabilities (KEV) catalog, confirming active exploitation of a critical code injection flaw in Langflow, one of the most popular open-source frameworks for building AI agent workflows.

The vulnerability carries a CVSS score of 9.8 and allows unauthenticated remote code execution via a single crafted HTTP request.

What Makes This Remarkable

The timeline is what security teams should be staring at:

  • March 17: Vulnerability advisory published
  • +20 hours: Automated scanning begins
  • +21 hours: Working Python exploits deployed
  • +24 hours: Data harvesting (.env files, databases, credentials) underway

No public proof-of-concept existed. Attackers reverse-engineered the exploit directly from the advisory text — the vulnerable endpoint path and injection mechanism were described in enough detail to build a working weapon.

In the first week alone, Sysdig researchers observed over 1,000 exploitation attempts across multiple regions, with payloads including info stealers, reverse shells, and cryptominers.

Why This Matters for the AI Agent Ecosystem

Langflow is a visual drag-and-drop framework for building AI pipelines, with 145,000 stars on GitHub. It sits in the same ecosystem tier as OpenClaw — widely adopted by developers building autonomous AI workflows.

The critical detail: Langflow instances typically hold API keys for OpenAI, Anthropic, and AWS. Compromising a single Langflow instance gives attackers lateral movement into:

  • Connected databases and cloud services
  • CI/CD pipelines (if GitHub/GitLab credentials are stored)
  • Software supply chains downstream

As Sysdig warned: “The window between advisory publication and active exploitation is now measured in hours, not days or weeks.”

This Is the Second Time

CISA flagged active Langflow exploitation last year too — CVE-2025-3248, a similar unauthenticated RCE flaw that spawned the Flodrix botnet. The recurrence pattern is significant: AI agent frameworks are becoming repeat targets because they combine:

  1. High-value credentials (API keys to frontier AI models)
  2. System-level execution (code runs with process permissions)
  3. Rapid adoption outpacing security hardening

Sound familiar? This is the exact threat model OpenClaw’s security community has been wrestling with since the ClawHavoc campaign.

What OpenClaw Users Should Do

The Langflow CVE doesn’t directly affect OpenClaw, but the attack pattern applies to any framework running AI agent pipelines:

Immediate Actions

  • Never expose agent frameworks to the public internet without authentication
  • Rotate all API keys and cloud credentials if you’ve run Langflow ≤1.8.1
  • Audit outbound traffic from AI pipeline hosts for unexpected connections
  • Update Langflow to 1.9.0+ immediately if you use it
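After rotating credentials, it is worth verifying nothing was left behind in pipeline hosts' environment files. A minimal sketch that scans a dotenv-style file for common key formats — the prefixes below are widely published conventions, but treat the pattern list as an assumption and extend it for the providers you actually use:

```python
"""Scan a .env-style file for credentials that should be rotated
after a suspected compromise. Pattern list is a starting point,
not exhaustive — add your own providers."""
import re
from pathlib import Path

# Common public key prefixes (assumption: extend for your providers).
# Anthropic is matched first; the OpenAI pattern excludes "sk-ant-".
KEY_PATTERNS = {
    "anthropic": re.compile(r"\bsk-ant-[A-Za-z0-9_-]{20,}"),
    "openai": re.compile(r"\bsk-(?!ant-)[A-Za-z0-9_-]{20,}"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def find_credentials(text: str) -> list[tuple[str, str]]:
    """Return (provider, matched_key) pairs found in dotenv-style text."""
    return [
        (provider, match.group(0))
        for provider, pattern in KEY_PATTERNS.items()
        for match in pattern.finditer(text)
    ]

def scan_env_file(path: str) -> list[tuple[str, str]]:
    """Scan one .env file; anything returned should be rotated."""
    return find_credentials(Path(path).read_text())
```

Anything this flags on a host that ran an exposed Langflow instance should be rotated regardless of whether you see evidence of theft — the attack timeline above shows harvesting started within a day.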

Architectural Lessons

  • Sandbox execution environments: AI agent code should never run with the same permissions as the host process
  • Separate credential stores: Don’t embed API keys in the same environment your agents execute in
  • Monitor advisory feeds: With 20-hour exploit windows, patch cycles measured in weeks are suicide
  • Assume breach: If your AI pipeline was internet-facing and unpatched, treat credentials as compromised
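The first two lessons can be sketched with nothing but the standard library: run agent-generated code in a separate interpreter process with an empty environment (so stored API keys never reach it) and hard resource limits. This is a minimal POSIX-only illustration, not a production sandbox — a real deployment would layer on containers, seccomp, or gVisor:

```python
"""Minimal sketch: execute untrusted agent code in a child process
with no inherited credentials and hard rlimits. POSIX-only
(`resource` and `preexec_fn` are unavailable on Windows)."""
import resource
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    """Run `code` in an isolated child interpreter and capture its output."""
    def _limits():
        # Runs in the child just before exec: cap CPU and address space.
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))            # 2 s CPU
        resource.setrlimit(resource.RLIMIT_AS, (512 << 20,) * 2)   # 512 MB

    return subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode
        capture_output=True,
        text=True,
        timeout=timeout,
        env={},              # credentials in our environment never leak down
        preexec_fn=_limits,
    )
```

The key design choice is `env={}`: even if the agent code is hostile, the OpenAI/Anthropic/AWS keys held by the host process simply do not exist in its environment — exactly the separation the Langflow compromise chain exploited.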

The Acceleration Problem

The Langflow timeline crystallizes a trend we’ve been tracking: the advisory-to-exploit window has collapsed. Combined with the TeamPCP/LiteLLM supply-chain attack that hit Mercor this week, we’re seeing a consistent pattern — AI infrastructure is being targeted not for the models themselves, but for the credentials and access those models are connected to.
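With the window that short, "monitor advisory feeds" has to mean automation, not a weekly reading list. A sketch that polls CISA's public KEV catalog and flags entries matching products you run — the feed URL and JSON field names below reflect CISA's published schema but should be treated as assumptions and verified against cisa.gov:

```python
"""Sketch: poll the CISA KEV catalog and flag entries for products
in our inventory. URL and field names ("vulnerabilities", "product",
"cveID") are assumptions — verify against CISA's current schema."""
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def matching_entries(catalog: dict, products: set[str]) -> list[dict]:
    """Filter KEV entries whose product field matches our inventory
    (case-insensitive)."""
    wanted = {p.lower() for p in products}
    return [
        entry for entry in catalog.get("vulnerabilities", [])
        if entry.get("product", "").lower() in wanted
    ]

def check_feed(products: set[str]) -> list[dict]:
    """Fetch the live catalog and return entries for products we run."""
    with urllib.request.urlopen(KEV_URL, timeout=10) as resp:
        return matching_entries(json.load(resp), products)
```

Wired into a daily cron job or CI step against your software inventory, this turns a KEV addition into an alert the same day it lands rather than whenever someone next reads the news.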

CISA has given federal agencies until April 8 to patch or stop using Langflow. For everyone else, the deadline was 20 hours ago.


Sources: BleepingComputer, Dark Reading, Sysdig Research, CISA KEV