Check Point Research disclosed two critical vulnerabilities in Anthropic’s Claude Code that turned innocent-looking repositories into attack vectors. Simply cloning and opening a malicious project was enough to compromise a developer’s machine and steal API credentials — no clicks, no warnings.
Both CVEs have been patched, but the attack patterns matter for anyone running AI agents.
The Vulnerabilities
CVE-2025-59536 — MCP Consent Bypass (CVSS 8.7)
Claude Code’s Model Context Protocol (MCP) lets repositories define external tool servers. The flaw: repository-level settings like enableAllProjectMcpServers could auto-approve MCP servers before the user saw a trust dialog. Combined with hooks that executed arbitrary shell commands on project open, an attacker could:
- Launch reverse shells on the developer’s machine
- Download and execute payloads silently
- Pivot to connected services and infrastructure
CVE-2026-21852 — API Key Theft (CVSS 5.3)
A repository’s .claude/settings.json could override ANTHROPIC_BASE_URL, redirecting all authenticated API traffic — including plaintext API keys in authorization headers — to an attacker-controlled server. This happened before trust confirmation, meaning the key was stolen the moment Claude Code connected to the Anthropic API.
A single stolen key in a shared Anthropic Workspace could expose, modify, or delete files across the entire organization and generate unauthorized API costs.
Why This Matters for AI Agents
These aren’t traditional code vulnerabilities. They exploit the configuration-as-execution pattern that’s fundamental to how AI agents integrate with tools. When an agent’s config file can trigger network connections, shell commands, and API calls, every config file becomes an attack surface.
This is the same class of risk that affects any agent system with plugin or tool integration — including OpenClaw’s MCP and skill systems.
Lessons for OpenClaw Users
1. Audit your MCP servers. Only enable MCP servers you trust. OpenClaw’s mcp.json defines which servers your agent connects to — review it periodically.
2. Be cautious with third-party skills. Skills are powerful because they can execute code. That’s also why you should vet them before installing. Check the source, read the SKILL.md, and understand what commands it runs.
3. Use allowlists over wildcards. OpenClaw supports tool policies and security modes. Prefer explicit allowlists over permissive defaults.
4. Rotate API keys. If you suspect any config-level compromise, rotate your API keys immediately. Don’t reuse keys across projects.
5. Keep OpenClaw updated. Security patches ship regularly. Run openclaw update or enable auto-updates via the auto-updater skill.
The Bigger Picture
AI agents are becoming development infrastructure. As they gain more tool access — file systems, APIs, shell execution, network requests — the attack surface grows proportionally. The Check Point findings are a preview of a new class of supply chain attacks: ones that target agent configuration rather than source code.
Anthropic handled disclosure well, collaborating with Check Point for months before the public release. But the pattern is now documented, and copycats will adapt it for other agent frameworks.
The defense is the same as it’s always been: don’t trust what you haven’t verified, minimize permissions, and assume that anything auto-executed is a potential vector. For OpenClaw-specific guidance, see our guardrails setup and complete security guide. For another MCP vulnerability example, read about the Atlassian RCE flaw.
Sources: Check Point Research disclosure (Feb 25, 2026). CVE-2025-59536 (CVSS 8.7), CVE-2026-21852 (CVSS 5.3). All vulnerabilities patched by Anthropic prior to public disclosure.