Here’s a number that should end every “are we ready?” meeting: 97% of enterprise security leaders expect a material AI-agent-driven security or fraud incident within the next 12 months. Nearly half expect one within six months. And the average enterprise allocates just 6% of its security budget to this risk.

The data comes from Arkose Labs’ 2026 Agentic AI Security Report, a survey of 300 enterprise leaders across security, fraud, identity, and AI functions. Respondents span North America, Europe, and Asia-Pacific, covering financial services, banking, technology, telecom, retail, healthcare, manufacturing, and digital services.

This isn’t a vendor scare piece with cherry-picked stats. The methodology is solid: 95% confidence level, ±5.6% margin of error, conducted in February 2026 — before the Mercor breach, before the Axios supply chain attack, before the latest wave of AI agent security disclosures.

The findings paint a picture of an industry that knows exactly what’s coming and is dramatically underinvesting in preparation.

The Ultimate Insider Threat

The report’s most striking finding: 87% of enterprise leaders agree that AI agents operating with legitimate credentials pose a greater insider threat risk than human employees.

This reframes the entire threat model. Traditional insider threat programs were built around people — disgruntled employees, negligent contractors, compromised accounts. AI agents now operate inside enterprise environments through service accounts, API tokens, and application identities that often carry significant privileges and whose activity closely resembles legitimate system behavior.

As one EVP of AML, Sanctions & Fraud told the researchers: “Stealthy increases in access rights undermine preventive controls.”

The insider threat of 2026 doesn’t need badge access. It already has API credentials.

Three Gaps Defining Enterprise Exposure

The report surfaces three operational vulnerabilities that cut across functions and geographies.

1. The Detection Illusion

More than 70% of security teams are not confident their current tools will scale as AI-driven attacks evolve. Respondents cited model drift, adaptive bypass techniques, and fragmented signals across systems as reasons detection may become harder — not easier — as autonomous systems mature.

Today’s detection stack was built for human-speed threats. AI agents operate at machine speed, across multiple systems, with legitimate credentials. The tools that catch today’s attacks may already be obsolete for tomorrow’s.

2. The Attribution Crisis

When an incident occurs, you need to know what happened, which systems were involved, and whether the initiating actor was human, automated, or adversarial. Only 26% of enterprise leaders are very confident they could definitively prove that an AI agent caused a security or fraud incident.

As one Director of Security Engineering put it: “Movement between interconnected systems can resemble legitimate operational behavior.”

This isn’t a technology problem. It’s a visibility and forensics problem — and it has direct implications for regulatory accountability and incident response. If you can’t attribute an action to an agent, you can’t investigate it, you can’t report it, and you can’t prevent it from happening again.
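Attribution starts with what you record. If every agent action is logged with the actor's identity, the credential used, and how the action was initiated, the "was it human, automated, or adversarial?" question becomes answerable. A minimal sketch of such a record, with purely illustrative field names (this is not OpenClaw's log schema):

```python
import json
import time
import uuid

def audit_record(agent_id: str, credential_id: str, tool: str,
                 args_summary: str, initiator: str) -> str:
    """Build one attributable audit log line (field names illustrative).

    `initiator` distinguishes human-approved, autonomous, and externally
    triggered actions, so forensics can later answer "who started this?"
    """
    record = {
        "event_id": str(uuid.uuid4()),   # unique ID for cross-referencing records
        "ts": time.time(),               # epoch seconds; prefer UTC ISO-8601 in production
        "agent_id": agent_id,            # which agent instance acted
        "credential_id": credential_id,  # which token or service account it used
        "tool": tool,                    # the tool or API invoked
        "args": args_summary,            # redacted summary; never log raw secrets
        "initiator": initiator,          # "human-approved" | "autonomous" | "external"
    }
    return json.dumps(record)

line = audit_record("openclaw-main", "svc-github-01", "github.create_pr",
                    "repo=infra, branch=fix-ci", "autonomous")
```

The key design choice is that identity and initiation travel with every event, rather than being reconstructed after the fact from infrastructure logs.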

3. The Governance Vacuum

57% of organizations have no formal governance controls for AI agents today. Yet 88% expect to have defined or mature frameworks within three years.

That three-year window — between where most organizations are now and where they expect to be — is exactly the period of maximum exposure. Attackers don’t wait for governance frameworks to mature.

The Acceleration Window

The report introduces a useful concept: the acceleration window — a compressed period where agentic AI deployment is outrunning the controls required to manage it.

This maps precisely to a pattern we've been documenting on this site for months: the industry ships governance tools after the agents are already running in production. The 6% budget allocation isn't a data point; it's a structural problem.

What This Means for OpenClaw Operators

If you’re running OpenClaw in a business context — or even as a personal agent with access to sensitive services — the Arkose Labs findings map directly to your threat model.

You Are the Enterprise

Even a single-user OpenClaw setup has:

  • Service accounts: API keys for OpenAI, Anthropic, cloud providers
  • Legitimate credentials: OAuth tokens for Gmail, Slack, GitHub
  • Autonomous decision-making: the agent acts on your behalf without per-action approval
  • Privilege accumulation: skills and tools grant expanding access over time

The 87% insider threat finding applies to you. Your agent operates with your credentials. If compromised — through a supply chain attack, a malicious skill, or a prompt injection — it’s acting as a trusted insider with your full access.

Practical Hardening Steps

1. Treat agent credentials as privileged identities

Don’t let your agent operate with your personal tokens. Create dedicated service accounts with minimum necessary permissions.
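One way to enforce this separation is a startup check that refuses to run if the agent's environment is missing its dedicated credentials or has inherited your personal ones. A sketch, assuming an `AGENT_*` naming convention that is my invention, not an OpenClaw requirement:

```python
# Illustrative convention: agent-scoped credentials live under AGENT_*
# variables, kept separate from personal tokens. Adjust names to your setup.
REQUIRED = ["AGENT_GITHUB_TOKEN", "AGENT_SLACK_TOKEN"]
FORBIDDEN = ["GITHUB_TOKEN", "SLACK_TOKEN"]  # personal tokens the agent must not inherit

def check_credentials(env: dict) -> list[str]:
    """Return a list of problems; an empty list means the environment passes."""
    problems = []
    for name in REQUIRED:
        if not env.get(name):
            problems.append(f"missing dedicated credential: {name}")
    for name in FORBIDDEN:
        if env.get(name):
            problems.append(f"personal credential leaked into agent env: {name}")
    return problems

# A clean agent environment passes; one that inherited a personal token does not.
assert check_credentials({"AGENT_GITHUB_TOKEN": "x", "AGENT_SLACK_TOKEN": "y"}) == []
```

Run the check against `os.environ` before the agent starts, and fail closed: a blocked startup is cheaper than an agent holding your full GitHub access.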

2. Enable audit logging

OpenClaw’s gateway supports logging all agent actions. Turn it on. You can’t attribute what you don’t record.

# In your OpenClaw config
logging:
  level: info
  # Log all tool invocations for forensic capability

3. Implement action boundaries

Use OpenClaw’s approval system for high-risk operations. The Task Brain update in v2026.3.31 introduced semantic approval categories that replace fragile name-based whitelists.
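The idea behind semantic categories is to gate on what an action does, not what it's called. A minimal sketch of the pattern (the risk taxonomy and action shape here are assumptions for illustration, not OpenClaw's actual implementation):

```python
from enum import Enum

class Risk(Enum):
    READ = "read"                 # fetches data, no side effects
    WRITE = "write"               # reversible changes
    DESTRUCTIVE = "destructive"   # deletes, payments, credential changes

def classify(action: dict) -> Risk:
    """Classify by declared semantics, not by matching tool names."""
    if action.get("deletes_data") or action.get("moves_money"):
        return Risk.DESTRUCTIVE
    if action.get("side_effects"):
        return Risk.WRITE
    return Risk.READ

def requires_approval(action: dict) -> bool:
    # Only destructive operations pause for a human; tune the threshold to taste.
    return classify(action) is Risk.DESTRUCTIVE

assert requires_approval({"deletes_data": True})
assert not requires_approval({"side_effects": True})
```

The advantage over a name whitelist: renaming or wrapping a tool doesn't change its category, so a new skill that deletes data is gated on day one.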

4. Monitor for credential drift

Periodically audit what your agent has access to. Skills can introduce new OAuth scopes, API integrations, and system access. If you can’t list your agent’s permissions, you can’t govern them.
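A periodic audit can be as simple as diffing the scopes currently granted against a baseline you reviewed and approved. A sketch (scope strings are illustrative):

```python
def scope_drift(baseline: set[str], current: set[str]) -> dict:
    """Compare currently granted scopes against an approved baseline."""
    return {
        "added": sorted(current - baseline),    # new access you never reviewed
        "removed": sorted(baseline - current),  # access that quietly disappeared
    }

baseline = {"gmail.readonly", "slack.chat:write"}
current = {"gmail.readonly", "slack.chat:write", "github.repo"}

drift = scope_drift(baseline, current)
assert drift["added"] == ["github.repo"]  # a skill picked up repo access; investigate
```

Anything in `added` is exactly the "stealthy increase in access rights" the report's respondents warned about; treat it as a finding until someone signs off and the baseline is updated.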

5. Segment your agent’s network

Run OpenClaw behind a firewall or VPN with explicit egress rules. An agent that can reach any endpoint can exfiltrate to any endpoint.
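Real enforcement belongs in the firewall, but an application-level allowlist check is a cheap second layer. A sketch, with a hypothetical allowlist, showing why exact-hostname matching matters:

```python
from urllib.parse import urlparse

# Illustrative allowlist: only the endpoints this agent legitimately needs.
ALLOWED_HOSTS = {"api.openai.com", "api.anthropic.com"}

def egress_permitted(url: str) -> bool:
    """Permit a request only if its exact hostname is on the allowlist."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

assert egress_permitted("https://api.anthropic.com/v1/messages")
# Exact matching also defeats suffix spoofing: this host is NOT allowed.
assert not egress_permitted("https://api.openai.com.evil.example/steal")
```

Note the second assertion: substring or prefix checks would wave `api.openai.com.evil.example` through, which is precisely the kind of exfiltration endpoint an attacker would register.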

The 6% Problem

The Arkose Labs report’s core tension isn’t that enterprises don’t understand the risk. It’s that understanding hasn’t translated into action.

97% awareness. 6% budget allocation. 57% with no governance controls. These numbers describe an industry running the largest experiment in autonomous software deployment in history — and betting that the controls will catch up before the first catastrophic incident forces them to.

For OpenClaw operators: you don’t have a three-year governance window. You have whatever time passes between reading this and the next supply chain compromise hitting your dependency tree. The controls you build today determine whether you’re investigating an incident or recovering from one.

The data says the incident is coming. The only question is whether your preparation looks like the 97% awareness or the 6% investment.