On April 1, 2026, CERT/CC published Vulnerability Note VU#221883 — four CVEs targeting CrewAI, the popular multi-agent orchestration framework with 48,000 GitHub stars. The most severe, CVE-2026-2275, carries a CVSS score of 9.6 (Critical) and enables remote code execution through a deceptively simple mechanism: the framework silently falls back to an insecure sandbox when Docker isn’t available.
This isn’t a hypothetical. An attacker who can interact with a CrewAI agent — including through prompt injection — can chain all four vulnerabilities together.
The Four Vulnerabilities
CVE-2026-2275 — Critical RCE via Silent Sandbox Downgrade (CVSS 9.6)
CrewAI’s CodeInterpreter tool is designed to run code inside Docker containers. But when Docker isn’t reachable, the tool silently falls back to SandboxPython — a mode that allows arbitrary C function calls. No warning. No log entry. No user consent.
The vulnerability triggers when allow_code_execution=True is set in the agent configuration, or when a developer adds the CodeInterpreter tool manually. In production environments where Docker can be unavailable (resource constraints, network partitions, or simply never installed), agents silently run in the insecure mode.
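The defensive pattern is to fail closed: verify that the Docker daemon actually answers before enabling code execution, rather than letting the framework decide at fallback time. A minimal sketch — `docker_available` and `make_agent_config` are illustrative helpers, not CrewAI APIs; only the `allow_code_execution` flag comes from the advisory:

```python
import shutil
import subprocess


def docker_available(timeout: float = 2.0) -> bool:
    """Return True only if the Docker daemon actually answers `docker info`."""
    if shutil.which("docker") is None:
        return False
    try:
        result = subprocess.run(
            ["docker", "info"], capture_output=True, timeout=timeout
        )
    except (subprocess.SubprocessError, OSError):
        return False
    return result.returncode == 0


def make_agent_config() -> dict:
    """Refuse to enable code execution unless the sandbox can really run."""
    if not docker_available():
        # Fail closed: no silent fallback to an insecure interpreter.
        raise RuntimeError("Docker sandbox unavailable; code execution disabled")
    return {"allow_code_execution": True}  # passed on to the agent constructor
```

With this guard in front of agent construction, a missing Docker daemon produces a loud startup failure instead of a silent downgrade.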
CVE-2026-2285 — Arbitrary Local File Read
The JSON loader tool reads files without path validation. An attacker who can influence the file path — through prompt injection or crafted input — can read any file on the server that the agent's process can access. Configuration files, credentials, environment variables, private keys.
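The missing check is ordinary path canonicalization against an allow-listed directory. A sketch of what a safe loader looks like — `ALLOWED_ROOT` and `safe_load_json` are illustrative names, not CrewAI's API:

```python
import json
from pathlib import Path

# Illustrative allow-listed data directory; adjust for your deployment.
ALLOWED_ROOT = Path("/srv/agent-data").resolve()


def safe_load_json(user_path: str) -> dict:
    """Resolve the path first, then reject anything outside the allowed root."""
    resolved = Path(user_path).resolve()
    if not resolved.is_relative_to(ALLOWED_ROOT):
        # Catches absolute paths, symlinks, and ../ traversal alike.
        raise PermissionError(f"path escapes allowed root: {resolved}")
    return json.loads(resolved.read_text())
```

Resolving before checking is the important step: a naive prefix check on the raw string is defeated by `..` segments and symlinks.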
CVE-2026-2286 — Server-Side Request Forgery
CrewAI’s RAG search tools don’t validate URLs at runtime. An attacker can redirect searches to internal services, cloud metadata endpoints (169.254.169.254), or other infrastructure that should never be accessible from an AI agent’s context.
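Runtime URL validation is straightforward to sketch: resolve the hostname and reject private, loopback, and link-local destinations before any request is made. `validate_outbound_url` is an illustrative helper, not part of CrewAI:

```python
import ipaddress
import socket
from urllib.parse import urlparse


def validate_outbound_url(url: str) -> None:
    """Raise ValueError if the URL targets internal infrastructure."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        raise ValueError(f"unsupported URL: {url!r}")
    # Check every address the name resolves to, not just the first.
    for info in socket.getaddrinfo(parsed.hostname, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            # Blocks 169.254.169.254 (cloud metadata), 10/8, 127/8, etc.
            raise ValueError(f"{url!r} resolves to blocked address {addr}")
```

Note the check must run at request time, per resolution: DNS rebinding means a hostname validated once can later point somewhere else.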
CVE-2026-2287 — Runtime Docker Check Failure
Even when Docker is available at startup, CrewAI doesn't re-check at runtime. If Docker becomes unavailable mid-session — crashed, resource-exhausted, or deliberately stopped — the framework silently downgrades to the insecure sandbox. It's the same RCE surface as CVE-2026-2275, but triggered by a runtime state change rather than initial configuration.
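The fix is to move the check from startup into the execution path, so the sandbox is verified immediately before every run. A sketch of a per-call guard — the names are illustrative, and probing with `docker info` is one way to ask the daemon directly rather than trusting a cached startup result:

```python
import shutil
import subprocess
from typing import Callable


def sandbox_ready() -> bool:
    """Ask the Docker daemon right now; a startup-time answer can go stale."""
    if shutil.which("docker") is None:
        return False
    try:
        probe = subprocess.run(["docker", "info"], capture_output=True, timeout=2)
    except (subprocess.SubprocessError, OSError):
        return False
    return probe.returncode == 0


def guarded_execute(run_in_sandbox: Callable[[str], str], code: str) -> str:
    """Re-verify the sandbox before each execution; never fall back."""
    if not sandbox_ready():
        raise RuntimeError("sandbox lost mid-session; refusing to execute")
    return run_in_sandbox(code)
```

The per-call probe adds a couple of seconds of worst-case latency, which is a reasonable price for never executing untrusted code outside the container.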
The Chain Attack
What makes this cluster dangerous isn’t any single CVE — it’s how they compose:
- Prompt injection reaches the agent through user input, RAG documents, or tool output
- CVE-2026-2286 (SSRF) lets the attacker probe internal infrastructure and cloud metadata
- CVE-2026-2285 (file read) extracts credentials, API keys, and configuration
- CVE-2026-2275/2287 (RCE) executes arbitrary code on the host
The attacker never needs direct network access to the server. They just need to get a crafted message into the agent’s context — through a document it processes, a web page it scrapes, or a conversation it participates in.
The Silent Downgrade Pattern
This is the third time in 2026 we’ve seen the “silent security downgrade” pattern in AI agent frameworks:
- Langflow (CVE-2026-33017): authentication bypass exploited within 20 hours of disclosure
- MCP servers: one-third vulnerable to SSRF, hundreds with zero authentication
- CrewAI: Docker sandbox silently disabled when infrastructure conditions change
The pattern is consistent: frameworks build security features that depend on external infrastructure (Docker, auth services, network policies), then silently degrade when that infrastructure isn’t available. The developer thinks they’re running in a sandbox. They’re not.
What This Means for OpenClaw Users
OpenClaw’s architecture is fundamentally different — it doesn’t use CrewAI’s sandbox model — but the lesson applies broadly:
1. Audit your agent framework dependencies. If you’re using CrewAI as a sub-agent orchestrator alongside OpenClaw, these CVEs affect you directly.
2. Check for silent fallbacks. Any tool or framework that says “we’ll use Docker if available” is telling you it has an insecure fallback path. Find it. Disable it.
3. Validate at runtime, not just startup. Security checks that only run at initialization are meaningless in long-running agent sessions where infrastructure state can change.
4. Treat prompt injection as a network attack. These CVEs are all exploitable through prompt injection. If your agent processes any untrusted input — and almost all agents do — the attack surface is open.
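For point 1, the audit can start as small as a dependency scan. A sketch — `audit_agent_frameworks` is an illustrative name, and flagging every installed CrewAI version is an assumption that holds only while no patched release exists:

```python
from importlib import metadata


def audit_agent_frameworks() -> list:
    """Return warnings for installed agent frameworks with open advisories."""
    warnings = []
    try:
        version = metadata.version("crewai")
    except metadata.PackageNotFoundError:
        return warnings  # CrewAI not installed in this environment
    # VU#221883: no patched version at publication, so every version is flagged.
    warnings.append(
        f"crewai {version} installed: affected by VU#221883 "
        "(CVE-2026-2275/2285/2286/2287)"
    )
    return warnings
```

Running this in CI keeps the advisory visible until a fixed release ships, at which point the flagging logic should compare against the patched version.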
Vendor Response
According to CERT/CC, CrewAI has “provided a statement addressing some, but not all, of the reported vulnerabilities.” The research was conducted by the @PwrtYrdn team.
As of publication, no patched version has been released. Organizations running CrewAI in production should ensure Docker is always available and monitored, restrict allow_code_execution to explicitly sandboxed environments, and audit RAG tool URL validation.
The full CERT/CC advisory is available at VU#221883. CrewAI users should monitor the project’s GitHub for patch releases.