The company that proved AI can out-hack humans just became a unicorn.
Xbow raised $120M in Series C funding led by DFJ Growth and Northzone, bringing total funding to $237M and valuation above $1 billion. The Seattle-based startup, founded in 2024 by former GitHub technology incubator lead Oege de Moor, deploys swarms of AI agents that autonomously conduct penetration testing — and last year, those agents reached #1 on HackerOne, the world’s largest bug bounty platform.
That’s not theoretical capability. That’s a measured result against the same targets human hackers compete on.
How It Works: Agent Swarms, Not Human Testers
Traditional pen testing: one human tester sequentially probes a system over weeks. Xbow: a swarm of AI agents simultaneously explore multiple attack vectors, reducing testing from weeks to hours.
De Moor describes the approach: “Think of it as a swarm of agents all trying different attack types across the attack surfaces. By being able to do many of them in parallel, it can work much faster than a human pen test.”
The next evolution is incremental testing — instead of retesting entire systems after every change, Xbow identifies what changed and focuses only on affected endpoints. This brings testing time down from hours to minutes, making continuous testing economically viable for the first time.
This matters because vibe coding is dramatically increasing the rate at which web applications are created. As de Moor notes: “AI vibe coding makes it possible for everyone to create more apps. These web apps are being created at a tremendous rate.” More apps means more attack surface, which means the old model of periodic human pen tests is fundamentally broken.
The $120M Goes to GPUs
A significant portion of the funding goes to compute infrastructure, particularly GPU resources. Autonomous hacking at scale requires massive inference capacity — every agent in the swarm is running an LLM that reasons about attack strategies, generates payloads, and evaluates responses in real time.
De Moor: “We are an AI-native company that needs a lot of GPU power.”
The rest funds team growth (currently 190 people) and expansion beyond web applications into mobile and native environments. Native applications present harder challenges — memory corruption, system-level vulnerabilities — but also represent a larger and less-tested attack surface.
Prompt Injection: AI Hacking AI
One of the most technically interesting areas Xbow is developing: specialized agents for detecting prompt injection vulnerabilities. This is AI systems trying to trick other AI systems into doing things they shouldn’t — a fundamentally new class of vulnerability that traditional testing tools can’t find.
De Moor acknowledges the challenge: “The special techniques for prompt injection are not yet very well known in the training set of the models.” This means Xbow has to develop novel attack patterns that weren’t in any model’s training data — genuine zero-day research at the intersection of AI security and AI capability.
The company is also building validators to ensure findings are accurate and reproducible. False positives in offensive security waste defender time; Xbow’s approach includes verification that each discovered vulnerability is genuinely exploitable.
The Humans-in-the-Loop Design
Despite the autonomous capability, Xbow maintains a human-in-the-loop design:
- Security teams provide credentials, scoping instructions, and focus areas — the same briefing they’d give a human pen tester
- The system maintains detailed logs of reasoning and actions for human review
- When the AI identifies something suspicious but can’t exploit it, humans step in to complete the analysis
- Results are presented with full context so defenders can prioritize and remediate
This is the right design pattern for offensive security: let AI handle the breadth and speed, let humans handle the judgment calls.
The Offensive AI Security Landscape
Xbow isn’t alone. A cluster of companies is building AI-powered offensive security:
| Company | Approach | Signal |
|---|---|---|
| Xbow | Autonomous pen testing swarms | $120M Series C, #1 HackerOne |
| Booz Allen (Vellox Striker) | AI adversary emulation | $12B defense contractor |
| RunSybil | AI agents for offensive testing | $40M Series A |
| Codex Security | AI vulnerability research | 14 CVEs in 30 days |
| XBOW AI | First critical CVE found by AI (CVSS 9.8) | March Patch Tuesday |
The pattern: AI agents are becoming both the attack surface and the primary tool for finding vulnerabilities in that surface. The companies that can build AI that thinks like an attacker are the ones best positioned to defend against AI-powered attacks.
What OpenClaw Users Should Know
Xbow’s success validates something OpenClaw users should take seriously: AI agents can and will find vulnerabilities in your systems faster than you can patch them.
If you’re exposing OpenClaw agents to the internet — through MCP servers, API endpoints, or web interfaces — assume that AI-powered scanners are already probing them. The 30,000+ internet-exposed OpenClaw instances found without authentication earlier this year are exactly the kind of target that autonomous pen testing swarms excel at finding and exploiting.
The defensive implications:
- Continuous testing > periodic audits — if your attack surface changes daily, annual pen tests are meaningless
- Prompt injection is a real vulnerability class — your agent’s system prompts, tool descriptions, and MCP configurations are all attack surfaces
- Speed matters — AI attackers don’t take breaks; your defenses shouldn’t either
- Test your own agents — before someone else does
The era of “we’ll do a pen test next quarter” is over. AI-powered attackers operate continuously, and defense needs to match that cadence.
Xbow’s $120M Series C was announced March 19, 2026. RSAC 2026 runs March 23–26 in San Francisco.