DataDome just published the numbers everyone was guessing at: 7.9 billion AI agent requests hit their network in January and February 2026 alone — a 5% increase over Q4 2025. For one customer, agentic traffic accounted for 9.75% of total traffic over a 30-day window.
The headline isn’t the volume. It’s what comes with it: an identity crisis that turns allowlists into attack surfaces.
## The Scale Problem
AI agents are crawling, indexing, and interacting with websites at volumes most organizations can’t classify, let alone manage. Unlike traditional bots that scrape in predictable patterns, AI agents visit thousands of sites per task, chain requests across sessions, and behave differently depending on what they’re trying to accomplish.
The traffic concentrates where the money is:
- E-commerce and retail: ~20% of agentic traffic volume
- Real estate: 17%
- Travel and tourism: 15%
These are the industries with the most valuable transactional data — exactly where you’d want visibility, and exactly where most organizations don’t have it.
## The Identity Spoofing Problem
This is where it gets dangerous. DataDome found that known AI agents are being actively impersonated at significant scale:
| Agent | Spoofed Requests | Impersonation Rate |
|---|---|---|
| Meta-externalagent | 16.4M | — |
| ChatGPT-User | 7.9M | — |
| PerplexityBot | — | 2.4% |
Meta ExternalAgent accounted for nearly 25% of traffic among the top AI agents in February 2026, followed by ChatGPT-User at 19.1% and Meta WebIndexer at 14.3%.
The implication: sites that allowlist known crawlers based on user-agent strings are exposed. A spoofed PerplexityBot or ChatGPT-User string turns an allowlist into an open door. Without the ability to classify agents by verified identity and intent, neither blocking nor allowlisting can be done with confidence.
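One established mitigation, documented by several crawler operators, is to verify a claimed agent with a reverse DNS lookup on the requesting IP, then forward-confirm the hostname, rather than trusting the user-agent string. A minimal Python sketch of that check; the suffix map, function names, and injectable resolvers here are illustrative assumptions, not DataDome's method:

```python
import socket

# Hypothetical map of agents to the DNS suffixes their operators use.
# These suffixes are assumptions for illustration; consult each
# operator's published verification docs for the real values.
TRUSTED_SUFFIXES = {
    "ChatGPT-User": (".openai.com",),
    "PerplexityBot": (".perplexity.ai",),
}

def verify_crawler(ip, claimed_agent,
                   suffixes=TRUSTED_SUFFIXES,
                   reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                   forward=socket.gethostbyname):
    """Reverse-DNS + forward-confirmation check.

    The user-agent string alone proves nothing: anyone can send
    "ChatGPT-User". This ties the claim to DNS the operator controls.
    Resolvers are injectable so the logic can be tested offline.
    """
    allowed = suffixes.get(claimed_agent)
    if not allowed:
        return False  # unknown agent: treat as unverified, not allowlisted
    try:
        host = reverse(ip)  # PTR lookup for the requesting IP
    except OSError:
        return False
    if not host.endswith(allowed):
        return False  # PTR points outside the operator's domain: likely spoofed
    try:
        # Forward-confirm: a forged PTR record fails this round trip.
        return forward(host) == ip
    except OSError:
        return False
```

A request claiming `PerplexityBot` from an IP whose PTR record resolves outside `perplexity.ai` (or whose hostname does not resolve back to that IP) fails the check, regardless of what the user-agent header says.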
## The Visibility Gap
DataDome VP of Threat Research Jérôme Segura framed it bluntly: “Invisible traffic is unmanaged traffic. And right now, most organizations cannot see this clearly enough to do anything meaningful about it.”
The core challenge: high-volume agents are not the same as high-value agents. One agent may drive referral value (a customer clicking through from ChatGPT’s answer). Another harvests data with no benefit to the site it visits. Without distinguishing between the two, security teams are guessing.
## What This Means for the Agent Ecosystem
This report provides empirical grounding for several trends we’ve been tracking:
1. Agent identity is the next security frontier. Products from Okta, Token Security, Deutsche Telekom, and F5 × Skyfire are all building agent identity infrastructure. DataDome’s data shows why: you can’t manage what you can’t identify.
2. Agentic commerce is real — and messy. AI agent traffic to retail sites from generative AI interfaces surged 4,700% year-over-year. Amazon’s Rufus influenced 66% of Black Friday 2025 purchases. But agent-driven discovery also means agent-driven data harvesting, price scraping, and competitive intelligence at machine speed.
3. Bots may overtake human web usage by 2027. DataDome’s data aligns with Search Engine Land’s projection. When agents visit thousands of sites per task while humans visit a handful, the traffic ratio inverts fast.
4. The “trust management” category is emerging. DataDome positions itself as handling “bot and agent trust management” — deciding which automated visitors to allow, restrict, or block. This is a different problem from endpoint agent security (Manifold), runtime agent security (Zenity), or agent identity governance (Oasis). The stack keeps getting deeper.
The full report is available at datadome.co/threat-research/ai-traffic-report.
Sources: BusinessWire · DataDome · Search Engine Land