HiddenLayer just dropped the numbers that enterprise security teams have been dreading. Their 2026 AI Threat Landscape Report, based on a survey of 250 IT and security leaders, puts hard data behind what the industry has been sensing: agentic AI systems are already being breached in production.
The headline stat: 1 in 8 companies now report AI breaches directly linked to agentic systems. Not hypothetical. Not red-team exercises. Real breaches, in real enterprises, happening now.
Three Shifts That Changed the Threat Model
1. Agents Moved From Labs to Production
Agentic AI that can browse the web, execute code, access files, and interact with other agents went from experimentation to production deployments throughout 2025. This transforms prompt injection from a model flaw into an operational security risk with direct paths to system compromise.
“As soon as agents can browse the web, execute code and trigger real-world workflows, prompt injection is no longer just a model flaw,” said Marta Janus, principal security researcher at HiddenLayer. “It becomes an operational security risk.”
This tracks what we’ve seen across the OpenClaw ecosystem — from SOUL.md persistence attacks to browser-level agent hijacking. The attack surface isn’t theoretical anymore.
2. Reasoning Models Amplify Blast Radius
Self-improving and reasoning models are now mainstream. A single compromised model can influence downstream systems at scale — when an agent plans, reflects, and acts autonomously, the consequences of manipulation multiply through every connected system.
3. Edge AI Creates New Blind Spots
Smaller, specialized models deployed on devices, vehicles, and critical infrastructure shift AI execution away from centralized cloud controls. This decentralization introduces security blind spots in regulated and safety-critical environments where you can’t just patch a model running on an embedded device.
Supply Chain: The #1 Attack Vector
The most-cited source of AI-related breaches wasn’t prompt injection or social engineering — it was malware hidden in public model and code repositories, at 35% of all AI-related breaches.
Yet 93% of respondents continue to rely on open repositories. This is the exact tension we documented in the ClawHavoc campaign, where 800+ malicious skills were found in ClawHub — roughly 20% of the registry.
Enterprises can’t stop using open-source AI components. But the supply chain verification controls remain inadequate.
The Transparency Crisis
Perhaps the most damning finding: 53% of organizations admitted to withholding AI breach reporting due to fear of backlash.
This, despite 85% supporting mandatory breach disclosure.
Meanwhile, 31% don’t even know whether they experienced an AI security breach in the past year. You can’t fix what you can’t see — and you can’t see what you won’t report.
Shadow AI Accelerates
| Metric | 2025 | 2026 | Change |
|---|---|---|---|
| Orgs citing shadow AI as definite/probable problem | 61% | 76% | +15 pts |
| Orgs with internal conflict over AI security ownership | — | 73% | — |
| Orgs partnering externally for AI threat detection | — | 34% | — |
A 15-point year-over-year increase is one of the largest shifts in the dataset. The shadow agent discovery tools and ConductorOne’s AI Access Management launching this week respond to a measurable, accelerating problem.
Budget vs. Reality
91% of organizations added AI security budgets for 2025. But more than 40% allocated less than 10% of that budget to AI security specifically. Combined with the 73% reporting internal conflict over ownership, the organizational machinery to address these risks remains immature.
What This Means for OpenClaw Users
- Supply chain is the #1 vector — vet your skills, pin versions, use Chainguard hardened images when available
- Visibility is non-negotiable — if you can’t enumerate your running agents and their access, you’re in the 31% that doesn’t know if they’ve been breached
- Runtime controls matter more than perimeter defenses — Singulr Agent Pulse, AWS Bedrock AgentCore, and Manifold’s endpoint protection all target the runtime gap
- Self-hosted doesn’t mean self-secured — running OpenClaw locally gives you control, but only if you use it. The 42,900 exposed instances prove many don’t
The report lands days before RSAC 2026 (March 23-27). HiddenLayer just set the empirical baseline for that conversation.
The Bottom Line
“Agentic AI has evolved faster in the past 12 months than most enterprise security programs have in the past five years,” said HiddenLayer CEO Chris Sestito.
The data confirms the gap. One in eight breaches. 53% concealment. 31% blind. The question isn’t whether agentic AI introduces new risks — it’s whether enterprises can close the governance gap before the breach numbers get worse.