The numbers from a 2026 Gravitee survey of over 900 executives and practitioners paint a picture that should alarm every CISO: the enterprise AI agent governance gap is worse than anyone admits.

The Confidence Gap

  • What executives believe: 82% are confident their policies protect against unauthorized agent actions. What's actually happening: only 14.4% send agents to production with full security/IT approval.
  • What executives believe: policies exist. What's actually happening: runtime enforcement doesn't.

This is the defining disconnect of enterprise AI security in 2026. Policy documentation and runtime enforcement are not the same thing — but most organizations treat them as if they are.

The Visibility Crisis

Only 24.4% of organizations have full visibility into which AI agents communicate with each other.

More than half of all agents run without any security oversight or logging.

The average organization manages 37 deployed agents — a number that grows every quarter as individual teams spin up automation without central review.

Each undiscovered agent is an unmapped access path. Shadow AI security incidents cost an average of $670,000 more than standard incidents, driven by delayed detection and difficulty scoping the exposure.

The Execution Layer Problem

The survey reveals that most enterprises have focused their AI security efforts on the model layer — which vendors employees can use, what data those tools can see, which tools pass procurement review.

That work matters. But it leaves the execution layer completely open.

When an AI agent takes action, it does so through tool invocations: calling APIs, writing to databases, triggering workflows, pushing instructions to connected systems. This is where AI reasoning meets production infrastructure — and where most enterprises have no governance at all.

Tool invocations are trusted by default. There’s no risk scoring before execution, no policy enforcement at the connector level, and no audit trail showing what agents are actually doing.
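To make the gap concrete, here is a minimal sketch of what execution-layer governance could look like: a deny-by-default gate that risk-scores each tool invocation before it runs and writes an audit record either way. The tool names, risk tiers, and policy rules are illustrative assumptions, not any vendor's actual API.

```python
import time

# Illustrative policy table: which agents may call which tools, and the
# maximum risk tier each tool tolerates. All names here are hypothetical.
POLICY = {
    "read_crm_record": {"max_risk": "low", "allowed_agents": {"support-bot"}},
    "write_database":  {"max_risk": "high", "allowed_agents": {"etl-agent"}},
}

RISK_RANK = {"low": 0, "medium": 1, "high": 2}

AUDIT_LOG = []  # every invocation attempt lands here, allowed or not

def invoke_tool(agent_id, tool, args, risk):
    """Score and authorize a tool invocation before it executes."""
    rule = POLICY.get(tool)
    allowed = (
        rule is not None
        and agent_id in rule["allowed_agents"]
        and RISK_RANK[risk] <= RISK_RANK[rule["max_risk"]]
    )
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "tool": tool,
        "args": args,
        "risk": risk,
        "decision": "allow" if allowed else "deny",
    })
    if not allowed:
        raise PermissionError(f"{agent_id} denied for {tool}")
    return f"executed {tool}"  # stand-in for the real side effect
```

The inversion matters: the connector refuses by default and the audit trail captures denied attempts, not just successes — which is precisely the record most enterprises do not have today.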

The Identity Problem

The survey surfaces a fundamental architectural failure:

  • 45.6% of teams rely on shared API keys for agent-to-agent authentication
  • 25.5% of deployed agents can create and instruct other agents
  • Only 21.9% treat AI agents as independent, identity-bearing entities with their own access scopes and audit trails

When multiple agents share credentials, attribution becomes impossible. Your SIEM sees failed transactions but can’t tell which agent started the cascade or where it was compromised.

The organizations that treat agents as first-class security principals — with their own identities, scoped permissions, and audit trails — have a fundamentally cleaner picture of their environment.
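A sketch of that first-class-principal model, under assumed names (the class, registry, and scope strings are hypothetical, not a real identity product): each agent gets its own credential and scope set, so every action — and every denied action — is attributable to exactly one agent rather than a shared key.

```python
import secrets

class AgentIdentity:
    """One identity per agent: its own token, its own scopes."""
    def __init__(self, agent_id, scopes):
        self.agent_id = agent_id
        self.scopes = frozenset(scopes)
        self.token = secrets.token_hex(16)  # unique per agent, never shared

REGISTRY = {}   # token -> identity
AUDIT = []      # per-action attribution trail

def register(agent_id, scopes):
    ident = AgentIdentity(agent_id, scopes)
    REGISTRY[ident.token] = ident
    return ident

def act(token, action):
    """Authorize an action and record exactly which agent attempted it."""
    ident = REGISTRY.get(token)
    if ident is None or action not in ident.scopes:
        AUDIT.append({"agent": ident.agent_id if ident else "unknown",
                      "action": action, "ok": False})
        raise PermissionError(action)
    AUDIT.append({"agent": ident.agent_id, "action": action, "ok": True})
    return f"{ident.agent_id} performed {action}"
```

Contrast this with the shared-API-key pattern the survey describes: there, every audit row carries the same credential, and the "which agent started the cascade" question has no answer.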

What This Means for RSAC 2026

The Gravitee data validates the market thesis behind every agent security product launching at RSAC this week:

  • Agent discovery (Okta, Geordie AI, Entro) — because you can’t govern what you can’t see, and more than 75% of organizations can’t see how their agents communicate
  • Runtime enforcement (Zenity, CrowdStrike × NVIDIA) — because the execution layer is where attacks happen, not the model layer
  • Intent-based governance (Token Security, Proofpoint) — because static permissions fail for non-deterministic agents
  • Agent identity (Oasis, 1Password, Deutsche Telekom) — because 45.6% of teams still use shared credentials

Stanford’s Trustworthy AI Research Lab adds another data point: model-level guardrails alone are insufficient. Fine-tuning attacks bypassed Claude Haiku in 72% of cases and GPT-4o in 57%. Model-layer safety does not extend to the execution layer.

The Hard Truth

80.9% of technical teams have moved past planning into active testing or full deployment. The productivity gains are real. But the governance has not kept pace.

The organizations deploying AI agents without execution-layer controls are running with the same risk profile as organizations that ran cloud workloads in 2015 without IAM policies. We know how that played out.

The difference: agents move faster than humans, take more actions per minute, and can cascade failures across systems in ways traditional automation never could. The governance gap is the same — the blast radius is bigger.


Sources: Gravitee Survey via AGAT Software · Stanford Trustworthy AI Research Lab