Here’s the problem with securing AI agents: the tools we use to score vulnerabilities were designed for software that sits still. CVSS measures how bad a buffer overflow is. EPSS predicts whether someone will exploit a known CVE. Neither can tell you how dangerous it is when an autonomous agent with persistent memory, tool access, and its own identity chain starts behaving unexpectedly.

OWASP’s new Agentic AI Vulnerability Scoring System (AIVSS) v0.8, released March 19, is the first serious attempt to fix that — and it’s being presented at RSAC 2026 today.

What AIVSS Actually Scores

AIVSS extends CVSS v4.0 with 10 agentic risk amplification factors that capture what makes agent vulnerabilities fundamentally different from traditional software bugs. Among them:

  • Autonomy level — How much does the agent act without human approval?
  • Tool access scope — What can the agent reach? File systems, APIs, databases, other agents?
  • Dynamic identity — Does the agent acquire or escalate credentials at runtime?
  • Persistent memory — Can a compromised state persist across sessions?
  • Workflow blast radius — If the agent goes wrong, how far does the damage cascade?
  • Multi-step attack chains — Can an attacker exploit multiple agent capabilities in sequence?

The math matters here. AIVSS doesn’t just add a subjective “agent risk” modifier — it uses a quantitative model that produces reproducible scores. Two different security teams evaluating the same agent vulnerability should arrive at comparable numbers.
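To make the reproducibility point concrete, here is a minimal sketch of what "CVSS base plus agentic amplification" could look like. The factor names come from the list above, but the averaging formula and the 0–1 factor scale are illustrative assumptions, not the published AIVSS math:

```python
# Hypothetical sketch: blending a CVSS v4.0 base score with agentic risk
# factors. The blend formula below is an assumption for illustration,
# NOT the actual AIVSS v0.8 model.

def agentic_amplified_score(cvss_base: float, factors: dict[str, float]) -> float:
    """Combine a CVSS base score (0-10) with agentic factors (each 0-1).

    The factors are averaged into a single agentic risk score on the same
    0-10 scale, then blended with the CVSS base.
    """
    if not 0.0 <= cvss_base <= 10.0:
        raise ValueError("CVSS base score must be in [0, 10]")
    agentic_risk = 10.0 * sum(factors.values()) / len(factors)  # rescale to 0-10
    return round((cvss_base + agentic_risk) / 2, 1)

score = agentic_amplified_score(
    7.5,  # e.g. a high-severity injection flaw in the agent's tool layer
    {
        "autonomy_level": 0.8,      # acts largely without human approval
        "tool_access_scope": 0.6,   # file system plus one external API
        "dynamic_identity": 0.4,    # limited runtime credential acquisition
        "persistent_memory": 0.9,   # compromised state survives sessions
    },
)
print(score)  # deterministic: same inputs always yield the same score
```

Whatever the exact formula, the key property is the one shown here: the inputs are enumerable and the arithmetic is fixed, so two teams assessing the same vulnerability with the same factor judgments get the same number.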

Why This Matters Now

Three converging pressures made this inevitable:

1. Agents are deploying faster than governance. Microsoft’s 1.3 billion agent forecast and Gartner’s 40% enterprise prediction aren’t hypothetical anymore. Organizations are running agents in production with no standardized way to assess risk.

2. Existing frameworks describe risks, not magnitudes. The OWASP Top 10 for Agentic Applications tells you what can go wrong. NIST’s agent security standards tell you what controls to implement. But neither gives you a number you can put in a risk register or compare across vendors.

3. Cyber insurance needs quantifiable risk. This is the real accelerant. AIVSS v0.8 is co-published with AIUC-1 (the AI Underwriting Criteria standard), with an official crosswalk mapping between the two frameworks. Insurers need numbers to price policies. AIVSS gives them numbers.

The Co-Publication Strategy

The simultaneous release with AIUC-1, the OWASP AI Exchange, and the OWASP Citizen Development Top 10 is deliberate. By publishing crosswalks between all four frameworks, OWASP creates a unified ecosystem where:

  • Security teams use AIVSS to score vulnerabilities
  • Insurance underwriters use AIUC-1 to price policies, mapped directly to AIVSS scores
  • Compliance teams use the crosswalks to satisfy multiple regulatory requirements with one assessment
  • Risk managers get a common language that works across security, underwriting, and compliance

The AIUC-1 crosswalk is available now — and it may end up being the most consequential document in the release. When insurance companies start requiring AIVSS scores for coverage, adoption will follow whether organizations like it or not.

What Changed From v0.5

Version 0.8 incorporates over 1,900 public comments from the v0.5 release. Key changes:

  Area                  | v0.5                  | v0.8
  Scoring model         | Conceptual framework  | Refined quantitative math
  Risk scenarios        | Generic examples      | Real-world attack chains
  Framework mapping     | NIST AI RMF only      | + OWASP Agentic Top 10, CSA MAESTRO, AIUC-1
  Empirical validation  | None                  | Expert survey data (Appendix D)
  Insurance integration | None                  | AIUC-1 crosswalk

The empirical data in Appendix D is notable — it documents relative risk rankings from expert contributors, grounding the framework in practitioner judgment rather than purely theoretical modeling.

The SSVC Companion

Alongside the quantitative scoring system, OWASP is developing a parallel SSVC (Stakeholder-Specific Vulnerability Categorization) decision tree for agentic AI. Where AIVSS gives you a number, SSVC gives you a decision: defer, attend, act, or respond immediately.
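A decision tree like that reduces to a short ordered set of rules. The sketch below shows the shape of such a tree mapping onto the four outcomes named above; the specific decision points (active exploitation, autonomy, blast radius) and their routing are assumptions for illustration, not the draft OWASP tree:

```python
# Illustrative SSVC-style decision tree for an agentic AI vulnerability.
# Decision points and routing are hypothetical; the OWASP draft defines
# its own. Only the four outcomes come from the SSVC model itself.

def ssvc_decision(exploitation_active: bool,
                  autonomy_high: bool,
                  blast_radius_wide: bool) -> str:
    """Return one of the four SSVC outcomes."""
    if exploitation_active and blast_radius_wide:
        return "respond immediately"   # live attack with cascading damage
    if exploitation_active or (autonomy_high and blast_radius_wide):
        return "act"                   # schedule remediation now
    if autonomy_high or blast_radius_wide:
        return "attend"                # track and assess on a deadline
    return "defer"                     # no action needed yet

# A contained agent with high autonomy but no known exploitation:
print(ssvc_decision(exploitation_active=False,
                    autonomy_high=True,
                    blast_radius_wide=False))  # prints "attend"
```

The appeal of the SSVC companion is exactly this: where a score still requires interpretation, a tree hands a triage analyst an answer.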

The SSVC draft is open for community review now, with v1.0 of the full AIVSS framework targeted for end of 2026. The public review period opens April 16.

What This Means for OpenClaw Users

If you’re running an OpenClaw agent with skills that connect to external services — which is most of you — AIVSS provides a framework for thinking about the risk surface that matters:

  • Skill-level tool access: Each skill’s external connections represent scorable attack surface
  • Memory persistence: OpenClaw’s memory system (MEMORY.md, daily files) is a persistence vector that AIVSS accounts for
  • Autonomy configuration: The difference between an agent that asks before sending emails and one that sends them automatically is a quantifiable AIVSS factor
  • MCP server exposure: Every MCP server connection maps directly to AIVSS tool access scoring
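The mapping in the list above can be sketched as a small inventory pass over an agent's configuration. Every field name here (skills, mcp_servers, auto_send_email, memory_files) is invented for illustration; this is not an actual OpenClaw schema or the AIVSS scoring input format:

```python
# Hypothetical sketch: extracting AIVSS-relevant risk inputs from an
# agent configuration. All config field names are invented examples,
# not a real OpenClaw or AIVSS schema.

agent_config = {
    "skills": {"email": ["smtp.example.com"], "files": ["/home/agent"]},
    "mcp_servers": ["db-readonly", "web-search"],
    "auto_send_email": True,          # acts without asking first
    "memory_files": ["MEMORY.md"],    # state persists across sessions
}

risk_surface = {
    # every external connection a skill or MCP server holds is attack surface
    "tool_access_scope": sum(len(v) for v in agent_config["skills"].values())
                         + len(agent_config["mcp_servers"]),
    # ask-first vs. send-automatically is a quantifiable autonomy difference
    "autonomy_level": "high" if agent_config["auto_send_email"] else "low",
    # persistent memory means compromised state can outlive a session
    "persistent_memory": bool(agent_config["memory_files"]),
}
print(risk_surface)
```

Even this crude inventory makes the point: the risk inputs are already sitting in your agent's configuration, waiting to be scored.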

The practical takeaway: as AIVSS matures toward v1.0, expect enterprise buyers and insurers to start asking for agent risk scores. Organizations deploying AI agents — whether OpenClaw or any other framework — should start familiarizing themselves with the scoring methodology now.

The Distinguished Review Board

The RSAC session (today at 9:40 AM, Moscone West 2001) features a review board that signals how seriously the intelligence and standards communities are taking this: Rob Joyce (former NSA Cybersecurity Director), Apostol Vassilev (NIST), Jason Clinton (Anthropic CISO), and Ken Huang (AIVSS project lead).

When the former head of NSA Cybersecurity is reviewing your agent vulnerability scoring framework, the enterprise world pays attention.

The full AIVSS v0.8 document is available at aivss.owasp.org.