The security industry has been talking about “AI for security” for years. But Orca Security’s RSAC 2026 announcement is the first major platform release that treats AI agents as both the threat and the response — and connects them through a single data model.

Here's what actually shipped: two new autonomous agents, runtime AI detection that tracks every LLM call in your cloud, and a remediation workflow engine that turns findings into resolved issues.

The Threat Investigation Agent

Security teams drown in alerts. Cloud environments generate thousands of findings, and the highest-skill, most time-consuming work isn’t triaging — it’s investigating. A senior analyst spends hours correlating signals across cloud logs, identity records, network flows, and application traces to determine whether an alert is a real incident or noise.

Orca’s Threat Investigation Agent does that work autonomously:

  1. Ingests the alert with full context from Orca’s Unified Data Model — workload metadata, identity chains, exposure details, network topology
  2. Correlates signals across the environment, validating facts rather than making assumptions
  3. Produces an investigation report with a verdict, evidence chain, and proposed containment actions
  4. Explains its reasoning — every conclusion links back to the data that supported it
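
The four steps above can be sketched as a toy pipeline. The data model (`Evidence`, `InvestigationReport`) and the correlation heuristics are illustrative assumptions, not Orca's actual logic — the point is the shape: every verdict is backed by an explicit evidence chain.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str    # e.g. "identity records", "network flows"
    fact: str      # the validated observation
    supports: str  # the conclusion this evidence backs

@dataclass
class InvestigationReport:
    alert_id: str
    verdict: str                                   # "incident" or "benign"
    evidence_chain: list = field(default_factory=list)
    containment_actions: list = field(default_factory=list)

def investigate(alert: dict, context: dict) -> InvestigationReport:
    """Correlate an alert against environment context and build an
    evidence-backed verdict (toy heuristic, not Orca's implementation)."""
    report = InvestigationReport(alert_id=alert["id"], verdict="benign")
    # Validate facts against identity records rather than assuming.
    identity = context["identities"].get(alert["principal"])
    if identity and identity["privilege"] == "admin":
        report.evidence_chain.append(Evidence(
            source="identity records",
            fact=f'{alert["principal"]} holds admin privileges',
            supports="high blast radius"))
    # Cross-check network flows for anomalous origins.
    if alert["source_ip"] not in context["known_ips"]:
        report.evidence_chain.append(Evidence(
            source="network flows",
            fact=f'{alert["source_ip"]} has no prior history in this estate',
            supports="anomalous origin"))
    # Verdict only when multiple independent signals corroborate.
    if len(report.evidence_chain) >= 2:
        report.verdict = "incident"
        report.containment_actions = ["revoke session", "rotate credentials"]
    return report
```

Because each `Evidence` record names its source, the report satisfies the "every conclusion links back to the data" requirement by construction.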

The transparency piece is critical. Security agents that produce verdicts without explanations are useless in regulated environments. Orca’s approach — transparent decision logic with visible reasoning chains — is the design pattern that enterprise security teams actually need.

The practical impact: what previously took a senior analyst 2-4 hours of manual investigation now produces actionable results in minutes. Not by cutting corners, but by automating the correlation and fact-checking that consumes most investigation time.

The AppSec Triage Agent

SAST scanners are notorious for false positive rates that erode developer trust. When 60-80% of your alerts are noise, developers stop looking at any of them.

Orca’s AppSec Triage Agent attacks this directly:

  • Analyzes code context of each SAST finding — not just the pattern match, but the surrounding logic
  • Determines false positive likelihood based on actual code behavior (e.g., “this open redirect has explicit URL validation upstream”)
  • Automatically deprioritizes confirmed false positives by reducing risk scores
  • Lets humans override — every AI triage decision can be reviewed and reversed

For confirmed true positives, the agent chains into Orca’s existing AI-driven code remediation to generate pull requests with fixes. The full pipeline: detect → triage → fix → PR — mostly autonomous, with human review at the merge point.
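
A minimal sketch of the triage-with-override pattern described above. The mitigation catalog, scoring factor, and field names are invented for illustration; they are not Orca's API.

```python
def triage(finding: dict) -> dict:
    """Annotate a SAST finding with an AI triage decision: downweight it
    when the surrounding code context shows a mitigating control."""
    mitigations = {
        "open_redirect": "url_validation_upstream",
        "sql_injection": "parameterized_query",
    }
    mitigation = mitigations.get(finding["rule"])
    if mitigation and mitigation in finding["code_context"]:
        finding["likely_false_positive"] = True
        finding["risk_score"] = round(finding["risk_score"] * 0.1, 1)
        finding["rationale"] = f"mitigating control detected: {mitigation}"
    else:
        finding["likely_false_positive"] = False
    finding["human_override"] = None  # analyst can reverse the verdict
    return finding

def override(finding: dict, false_positive: bool, analyst: str) -> dict:
    """Human review wins over the AI decision — every triage is reversible."""
    finding["human_override"] = {"false_positive": false_positive, "by": analyst}
    finding["likely_false_positive"] = false_positive
    return finding
```

The design choice worth noting: the AI decision and the human override are stored side by side, so an audit trail survives even after a reversal.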

Runtime AI Detection: The Shadow AI Problem Gets a Solution

This is the announcement with the biggest long-term implications. Orca Sensor now identifies actual runtime usage of AI across cloud environments, regardless of provider or programming language. Not static scanning. Not configuration auditing. Real-time detection of which workloads are calling which LLMs, sending what data, through which MCP servers.

What Orca now tracks:

  • Which workloads invoke LLMs: inventory of AI usage across the estate
  • Sensitive data sent to models: data loss prevention for AI workflows
  • External MCP server connections: shadow tool integrations
  • AI provider and model identification: vendor risk management
  • Prompt injection vulnerability assessment: application-level risk
  • Internal vs. external AI interactions: attack surface mapping
  • Identity/process-to-AI correlation: governance and attribution

This addresses the visibility gap that Microsoft’s shadow agent research and AvePoint’s AgentPulse have highlighted. You can’t govern what you can’t see. Orca’s approach — correlating AI runtime activity with cloud context (workloads, identities, exposure, network) — produces a richer picture than AI-only governance tools.
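
One building block of this kind of runtime detection can be sketched as matching a workload's egress connections against a catalog of known AI provider hosts, then correlating each hit back to the process and identity that made the call. The endpoint list and record fields here are assumptions for illustration, not Orca Sensor's actual mechanism.

```python
# Known LLM API hosts (illustrative subset; a real sensor would
# maintain a much larger, continuously updated catalog).
AI_PROVIDERS = {
    "api.openai.com": "OpenAI",
    "api.anthropic.com": "Anthropic",
    "generativelanguage.googleapis.com": "Google",
}

def classify_egress(connections: list) -> list:
    """Tag each workload connection that targets an AI provider,
    preserving the process/identity context for attribution."""
    findings = []
    for conn in connections:
        provider = AI_PROVIDERS.get(conn["dest_host"])
        if provider:
            findings.append({
                "workload": conn["workload"],
                "process": conn["process"],
                "identity": conn["identity"],
                "provider": provider,
                # Internal vs. external split for attack surface mapping.
                "external": not conn["dest_host"].endswith(".internal"),
            })
    return findings
```

Carrying the identity and process on every record is what enables the "identity/process-to-AI correlation" row in the table above — the detection is useful for governance only if it attributes, not just observes.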

For OpenClaw users specifically: if your agent runs in a cloud environment, this is the kind of detection that will identify your MCP server connections, LLM API calls, and data flows. Understanding what enterprise security tools can see helps you build agents that play well with governance frameworks.

Orca Missions: From Findings to Resolution

The fourth piece is less glamorous but potentially more impactful: Orca Missions groups related security findings into remediation workflows with objectives and verification steps.

Instead of presenting 47 individual alerts about a misconfigured IAM role and its downstream effects, Missions creates a single workflow: “Remediate over-permissioned service account X — 47 related findings, 3 containment actions, verification criteria.” Teams work through structured missions rather than swimming in alert soup.
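
The grouping idea can be sketched as clustering findings by shared root cause and emitting one workflow per cluster. The field names and verification criteria below are hypothetical, not Orca Missions' schema.

```python
from collections import defaultdict

def build_missions(findings: list) -> list:
    """Group findings that share a root cause into a single mission
    with an objective, aggregated actions, and a verification step."""
    by_cause = defaultdict(list)
    for f in findings:
        by_cause[f["root_cause"]].append(f)
    missions = []
    for cause, group in by_cause.items():
        missions.append({
            "objective": f"Remediate {cause}",
            "finding_count": len(group),
            # Deduplicate containment actions across the grouped findings.
            "actions": sorted({a for f in group for a in f["actions"]}),
            "verify": f"re-scan shows 0 open findings for {cause}",
        })
    return missions
```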

Code Reachability Analysis

Orca also adds code reachability analysis that determines whether vulnerable code paths are actually executed. A critical CVE in a dependency that’s imported but never called is a different risk than one in a hot code path. Combining static analysis with runtime and agentless signals produces more accurate prioritization than any single method alone.
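
At its core, reachability is a graph search from application entry points to the vulnerable symbol. A minimal sketch follows; real tools also resolve dynamic dispatch, reflection, and runtime signals, all of which this ignores.

```python
from collections import deque

def is_reachable(call_graph: dict, entry_points: list, vulnerable_fn: str) -> bool:
    """Breadth-first search over the call graph: is the vulnerable
    function on any path from an application entry point?"""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        if fn == vulnerable_fn:
            return True
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False
```

An imported-but-never-called dependency shows up as a node with no path from any entry point — exactly the case the prioritization logic should downrank.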

The Bigger Picture

Orca’s release fits a pattern we’ve been tracking throughout RSAC 2026 pre-announcements:

The stack is assembling: score risk (AIVSS), govern access (Agent 365), map attack surface (Salt), detect and respond (Orca). Each layer addresses a different aspect of the agentic security problem. The question is whether these pieces will interoperate — or whether enterprises will need to build their own integration layer.

Orca is at Booth #1035 in the South Hall at Moscone, demonstrating all four capabilities live through March 26.