Proofpoint’s CEO just said the quiet part out loud: AI agents have the same risk profile as human insiders.
In an interview at RSAC 2026, Sumit Dhawan drew a direct line between enterprise insider risk programs and the emerging challenge of governing AI agents. The argument is simple and hard to argue with: agents are non-deterministic, can be manipulated through prompt engineering, and operate with persistent access to sensitive systems. That’s not a firewall problem. That’s an insider threat problem.
Why Traditional Security Controls Don’t Apply
Traditional security tools were designed for Boolean, pattern-based logic — deterministic systems that follow predictable paths. Firewalls, ACLs, and signature-based detection assume the thing you’re protecting against behaves consistently.
AI agents don’t. They:
- Make different decisions given the same inputs (non-deterministic)
- Can be socially engineered via prompt injection
- Evolve their behavior based on context and conversation history
- Operate continuously with persistent credentials and access
This makes them functionally identical to human insiders — except they don’t take breaks, don’t get security awareness training, and can be compromised without anyone noticing a change in their “mood.”
Behavioral Drift as the Core Detection Model
Dhawan’s key insight: behavioral drift detection is the operative defense model for AI agents, just as it is for human insider risk.
When a human employee’s behavior deviates from their expected pattern — logging in at unusual hours, accessing files outside their role, exfiltrating data — controls escalate. The same mechanism applies to agents:
- An OpenClaw agent that suddenly starts accessing files it hasn’t touched before
- A coding agent that begins making network calls to unfamiliar endpoints
- An MCP-connected agent whose tool usage pattern shifts without a corresponding user request
These are behavioral drift signals, and they require the same kind of continuous monitoring that enterprise insider risk programs already provide.
”There’s No Code of Conduct for AI”
The quote that frames the entire problem:
“With AI, there is no code of conduct. There’s no form of integrity, per se — and it’s something that has to be coded up into a technology layer, which is an AI behavior safeguard layer.” — Sumit Dhawan, CEO, Proofpoint
Humans get onboarding, policies, training, and social pressure to behave within norms. AI agents get… a system prompt. Maybe some tool restrictions. The gap between human governance infrastructure and agent governance infrastructure is enormous.
The CISO Split: Proactive vs. Wait-and-See
Dhawan noted that CISOs are splitting into two camps:
Proactive camp: Building AI safeguard infrastructure now, treating agent governance as an extension of existing insider risk and DLP programs. These organizations are already deploying behavioral monitoring, credential scoping, and audit trails for their agents.
Wait-and-see camp: Watching the space develop, waiting for standards and vendor solutions to mature before investing. These organizations are accumulating unmonitored agent access and behavioral drift risk with every passing week.
The proactive camp has the right instinct. By the time the wait-and-see camp decides to act, they’ll be facing entrenched agent access patterns that are much harder to retrofit with controls.
What This Means for OpenClaw Operators
If you’re running OpenClaw agents with persistent access to tools, files, and APIs, you’re operating an insider risk surface. Here’s how to think about it:
Map Your Agent Access Like Employee Access
Every agent has an access footprint. Document it: which skills, which MCP servers, which credentials, which files. This is your agent’s “role” — and you should scope it with the same rigor you’d apply to a human employee.
Monitor for Behavioral Drift
Track what your agents actually do vs. what they’re expected to do. If an agent’s tool usage pattern changes — new endpoints, new file access, unusual timing — that’s a signal worth investigating.
Implement Graduated Escalation
Just like insider risk programs escalate from monitoring to intervention based on severity, your agent governance should have tiers:
- Normal: Agent operates within expected patterns
- Elevated: Unusual behavior detected, increase logging
- Critical: Agent taking actions outside its scope, require human approval or pause
Assume Compromise Is Possible
The insider risk model assumes any insider could be compromised — by incentives, coercion, or negligence. Apply the same assumption to agents: they can be prompt-injected, their MCP servers can be compromised, their context windows can be poisoned.
The Convergence
The cybersecurity industry spent 15 years building insider risk programs for humans. Now it needs the equivalent for AI agents — and it needs it faster, because agents scale in ways humans don’t.
A compromised human insider can exfiltrate data at human speed. A compromised AI agent can exfiltrate data at API speed, across every system it has access to, simultaneously.
The good news: the conceptual frameworks already exist. Behavioral baselines, drift detection, graduated response, least privilege, continuous monitoring. The technology needs to catch up to the model, not the other way around.
The bad news: most organizations haven’t started.
Source: ISMG interview with Sumit Dhawan, CEO of Proofpoint, RSAC Conference 2026