Darktrace’s 2026 State of AI Cybersecurity Report surveyed over 1,500 cybersecurity professionals across 14 countries. The headline number: 76% are worried about the security implications of AI agents in their organizations.

But the number that should alarm you is different: only 37% have a formal policy for securely deploying AI — down 8 percentage points from last year. Enterprises are adopting agents faster than they’re governing them.

The Key Numbers

The report, released February 3, covers the full spectrum of AI’s impact on enterprise security:

AI agent concerns:

  • 76% of security professionals are worried about AI agents in their organizations
  • 47% of senior security executives are “very or extremely concerned”
  • Top risk: data exposure (61%), followed by regulatory violations (56%) and AI tool misuse (51%)
  • Only 37% have formal AI deployment policies (down from 45% last year)

AI-powered threats:

  • 73% say AI-powered threats are already having a significant impact
  • 87% report AI is significantly increasing attack volume
  • 89% say AI is making attacks more sophisticated overall
  • 91% say AI is improving phishing and social engineering effectiveness
  • 46% feel unprepared to defend against AI-driven attacks — essentially unchanged from 45% last year

Top AI-powered attack concerns:

  1. Hyper-personalized phishing (50%)
  2. Automated vulnerability scanning (45%)
  3. Adaptive malware (40%)
  4. Deepfake voice fraud (39%)

AI in defense:

  • 77% have GenAI embedded in their security stack
  • 96% say AI significantly boosts speed and efficiency
  • 72% cite detecting novel threats as AI’s greatest impact
  • 14% let AI act independently in SOC; 70% enable AI with human approval; 13% limit AI to recommendations only

Why This Matters for OpenClaw Users

The report identifies agents’ access to sensitive data, their ability to interact with critical systems, and a lack of mature governance as the three key risk drivers. Sound familiar?

OpenClaw operates with exactly these characteristics — it accesses files, emails, APIs, and shell commands. It interacts with critical systems. And most personal deployments lack formal governance because they’re individual setups, not enterprise rollouts.

Darktrace’s own monitoring data adds a concrete example: in October 2025, the company observed a 39% month-over-month increase in anomalous data uploads to GenAI services. The average anomalous upload was 75MB, roughly 4,700 pages of documents. Sensitive data is already leaving organizations through AI tools at scale, often unchecked.

The report’s framing of agents as “a new class of insider risk” is particularly relevant. As Darktrace VP Issy Richards put it: “These systems can act with the reach of an employee — accessing sensitive data and triggering business processes — without human context or accountability.”

The Preparedness Paradox

Here’s the pattern that keeps repeating across every major AI security report this year:

  1. Concern is high — everyone knows agents introduce new risks
  2. Preparedness is flat — almost half feel unprepared, same as last year
  3. Adoption isn’t waiting — 77% already have GenAI in their security stack
  4. Governance is actually declining — formal policies dropped from 45% to 37%

This is the gap that RSAC 2026 made painfully visible. Every vendor had an agent security product. Every keynote warned about agent risks. But enterprises are still deploying faster than they’re securing.

The SOC Autonomy Spectrum

One of the more interesting findings: how much autonomy security teams give their own AI.

  • 14% let AI act independently (fully autonomous response)
  • 70% let AI act with human approval (human-in-the-loop)
  • 13% keep AI limited to recommendations only

That 14% fully autonomous figure is noteworthy. These are cybersecurity teams, arguably the most security-conscious segment of any organization, and one in seven of them is comfortable letting AI take action without human approval. The broader enterprise workforce is almost certainly less cautious.
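The same spectrum translates directly to a personal agent setup. Here is a minimal sketch of what an explicit autonomy gate could look like; this is illustrative Python, not anything OpenClaw ships, and the action names and the `gate` helper are hypothetical placeholders for whatever tool-execution layer your agent actually uses:

```python
from enum import Enum

class AutonomyLevel(Enum):
    RECOMMEND_ONLY = "recommend"      # the 13%: AI suggests, humans act
    HUMAN_APPROVAL = "approve"        # the 70%: AI acts after sign-off
    FULLY_AUTONOMOUS = "autonomous"   # the 14%: AI acts on its own

# Hypothetical per-task policy: read-only actions run unattended, anything
# that changes state waits for a human, anything destructive is never
# executed directly by the agent.
TASK_POLICY = {
    "read_file":   AutonomyLevel.FULLY_AUTONOMOUS,
    "send_email":  AutonomyLevel.HUMAN_APPROVAL,
    "run_shell":   AutonomyLevel.HUMAN_APPROVAL,
    "delete_data": AutonomyLevel.RECOMMEND_ONLY,
}

def gate(action: str, description: str, execute) -> None:
    """Route one agent action through its chosen autonomy level."""
    level = TASK_POLICY.get(action, AutonomyLevel.RECOMMEND_ONLY)  # default to most restrictive
    if level is AutonomyLevel.FULLY_AUTONOMOUS:
        execute()
    elif level is AutonomyLevel.HUMAN_APPROVAL:
        if input(f"Agent wants to {description}. Allow? [y/N] ").strip().lower() == "y":
            execute()
    else:
        print(f"Recommendation only: {description}")
```

The value isn’t the mechanism. It’s that every action gets an explicitly chosen level instead of inheriting a default nobody picked.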

What To Do With This

If you’re running OpenClaw in any capacity:

  1. Write down your AI policy. Even a personal one (a minimal sketch follows this list). What can your agents access? What can they do without asking? Where are the boundaries? The 63% of organizations without formal policies includes a lot of smart people who just haven’t gotten around to it.

  2. Audit your data exposure. What sensitive data can your OpenClaw agents reach? Emails, credentials, financial data, health records? The 75MB average anomalous upload means agents are moving significant volumes of data through AI services.

  3. Monitor agent behavior. Darktrace’s entire business model is based on learning “normal” behavior and flagging deviations. You can apply the same principle: know what your agents normally do, and notice when they don’t (see the second sketch after this list).

  4. Pick your autonomy level deliberately. The 14% / 70% / 13% split in SOC autonomy is a useful framework. Are you in the “act independently” camp, the “act with approval” camp, or the “recommend only” camp? For which tasks?
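On item 1, the policy doesn’t need to be prose. Here is a minimal sketch of a personal policy written down as data. It’s purely illustrative Python; every path, tool name, and limit is an assumption about a hypothetical setup, not anything OpenClaw defines:

```python
# ai_policy.py - a personal AI policy, written down so it actually exists.
# Every value below is a placeholder for a hypothetical setup, not an
# OpenClaw configuration format.

POLICY = {
    # What can your agents access?
    "allowed_paths": ["~/projects", "~/notes"],
    "blocked_paths": ["~/.ssh", "~/.aws", "~/Documents/taxes"],

    # What can they do without asking?
    "no_approval_needed": ["read_file", "web_search"],
    "approval_required": ["send_email", "write_file", "run_shell"],
    "never_allowed": ["modify_credentials", "upload_to_unknown_services"],

    # Where are the boundaries?
    "max_single_upload_bytes": 5 * 1024 * 1024,  # a 75MB upload should never look normal here
    "review_cadence_days": 30,                   # re-read and update this file monthly
}
```

Most of the value is in writing it at all: you can’t notice a deviation from a policy you never stated.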
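On item 3, the “learn normal, flag deviations” principle scales down to a single machine. A minimal sketch, assuming you already log each agent action as one JSON object per line; the log path and field names are assumptions, not a format OpenClaw ships:

```python
import json
from collections import Counter
from pathlib import Path

# Assumed log format: one JSON object per agent action, e.g.
# {"tool": "run_shell", "target": "/home/me/projects"}
LOG = Path("~/.agent/actions.log").expanduser()

def action_counts(lines):
    """Count how often each (tool, target) pair appears in the given log lines."""
    counts = Counter()
    for line in lines:
        event = json.loads(line)
        counts[(event["tool"], event.get("target", ""))] += 1
    return counts

lines = LOG.read_text().splitlines()
baseline = action_counts(lines[:-200])  # everything except recent activity
recent = action_counts(lines[-200:])    # the ~200 most recent actions

# Flag behavior the agent has never exhibited before.
for pair, count in recent.items():
    if pair not in baseline:
        print(f"new behavior: {pair} x{count}")
```

Crude next to what Darktrace sells, but “my agent has never done that before” is exactly the signal the report says most organizations aren’t watching for.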

The gap between concern and action is where incidents happen. Darktrace’s report doesn’t tell us anything we didn’t already know. What it tells us is that knowing hasn’t translated to doing — at any level of the industry.


The full 2026 State of AI Cybersecurity Report is available on Darktrace’s website. Survey conducted October–November 2025 across 1,540 cybersecurity leaders in 14 countries.