Most AI labor market studies ask a hypothetical question: “Could AI do this task?” Anthropic just asked a different one: “Is AI actually doing this task right now — and how often?”

The answer, based on millions of real Claude interactions mapped to the U.S. government’s occupational database, is more concrete than anything we’ve seen before. And for programmers, it’s stark: 75% of their tasks are already being performed by AI in production usage.

AI Coverage: A New Metric

The report — “Labor market impacts of AI: A new measure and early evidence” — was authored by economists Maxim Massenkoff and Peter McCrory. It introduces AI Coverage: the fraction of a job’s real tasks that AI is actively completing today, measured from actual Claude usage data.

This isn’t theoretical capability scoring. It’s empirical measurement. The researchers analyzed millions of Claude sessions and mapped them to O*NET, the U.S. government’s taxonomy of 19,000+ job tasks across 1,000+ occupations.

An AI Coverage score of 0.75 means that in real Claude sessions, 75% of that job’s defined tasks are being performed by AI right now. Not “could be automated someday.” Being done.
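To make that concrete, here is a minimal sketch of how such a score could be computed. The data shapes, names, and toy numbers below are our own illustrative assumptions, not Anthropic's actual classification pipeline:

```python
from collections import defaultdict

# onet_tasks: each occupation's full set of O*NET task IDs (toy data).
# session_tasks: (occupation, task_id) pairs from classifying real AI
# sessions against the O*NET taxonomy -- the hard part in the real study.
onet_tasks = {
    "Software Developers": {"T1", "T2", "T3", "T4"},
    "Financial Analysts": {"T5", "T6", "T7", "T8", "T9"},
}
session_tasks = [
    ("Software Developers", "T1"),
    ("Software Developers", "T2"),
    ("Software Developers", "T4"),
    ("Financial Analysts", "T5"),
    ("Financial Analysts", "T7"),
    ("Financial Analysts", "T8"),
]

# Collect the distinct tasks observed being performed by AI per occupation.
observed = defaultdict(set)
for occupation, task_id in session_tasks:
    observed[occupation].add(task_id)

# AI Coverage: fraction of an occupation's defined tasks seen in actual usage.
for occupation, all_tasks in onet_tasks.items():
    coverage = len(observed[occupation] & all_tasks) / len(all_tasks)
    print(f"{occupation}: {coverage:.2f}")  # 0.75, then 0.60
```

The arithmetic at the end is the entire metric; the substance of the paper is in reliably mapping millions of sessions onto those task IDs.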

The Five Most Exposed Professions

  • Computer Science & Programming — 75% task coverage, the highest by a wide margin
  • Business & Finance Operations — financial analysis, modeling, reporting
  • Management — strategic planning, decision support, forecasting
  • Legal — contract review, research, drafting
  • Office & Administrative Support — data entry, scheduling, document management

Workers in these high-exposure occupations are projected to see slower employment growth through 2034 than workers in low-exposure ones.

What “75% Coverage” Actually Means

Anthropic is careful to distinguish between task displacement and job displacement. Most of what’s happening today is the former — AI handles specific tasks within a job, not the entire role.

A financial analyst with 60% AI Coverage isn’t obsolete. But their employer can now get 60% more output from the same headcount. That’s not a firing event — it’s a hiring headwind. The next analyst position that would have opened doesn’t get posted. The team that was going to grow from five to seven stays at five.
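As a back-of-the-envelope illustration (a toy model of ours, not anything from the paper): assume all tasks take equal time, and every AI-covered task still needs a human review pass at some fraction of its original cost. Coverage then translates into a capacity multiplier:

```python
def capacity_multiplier(coverage: float, review_cost: float) -> float:
    """Output per worker relative to a no-AI baseline, under the toy model."""
    # Human time per task: uncovered tasks are done in full; covered tasks
    # shrink to a review pass costing `review_cost` of the original time.
    human_time_per_task = (1 - coverage) + coverage * review_cost
    return 1 / human_time_per_task

print(capacity_multiplier(0.60, 0.0))  # 2.5   -- frictionless handoff
print(capacity_multiplier(0.60, 0.4))  # ~1.56 -- heavy review overhead,
                                       # near the "60% more output" range
```

The exact multiplier hinges entirely on the review-overhead assumption, but the hiring-headwind logic holds across the whole range.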

The phrase “Great Recession for white-collar workers” circulated widely in media coverage but doesn’t appear in the paper itself. Anthropic’s framing is more measured: “occupational shifts” rather than mass layoffs. The practical implication: your competitors who deploy AI will operate leaner and faster.

The Anthropic Institute

The report launched alongside the Anthropic Institute, announced March 11 and led by co-founder Jack Clark. It consolidates three research groups:

  • Frontier Red Team — proactively tests Claude for catastrophic misuse (bioweapons synthesis, cyberattack enablement, mass manipulation)
  • Societal Impacts Team — studies effects on democracy, information ecosystems, power concentration
  • Economic Research Team — produces ongoing AI Coverage measurements across occupations as models improve

This is strategic positioning. Anthropic is staking a claim as the authoritative voice on AI’s societal consequences, a kind of policy influence OpenAI has historically held alone. While OpenAI is landing government contracts, Anthropic is establishing itself as the research institution that governments consult when making policy.

The OpenClaw Connection

For OpenClaw users, this data validates what you’re already experiencing. If 75% of programming tasks are being handled by AI in Claude sessions, then the people building agent fleets, automating workflows, and deploying personal AI agents aren’t early adopters anymore. They’re ahead of an adoption curve that’s already reshaping employment.

The Korean workers paying premium prices for agent-building courses? The Chinese entrepreneurs lining up for OpenClaw installations? They’ve seen this data, or something like it, reflected in their own job markets.

What the Numbers Don’t Capture

The AI Coverage metric measures tasks Claude is performing today. It doesn’t capture:

  • Compound effects — when 75% of programming tasks are automated, the remaining 25% change in nature. You’re not writing code; you’re reviewing, architecting, and directing AI that writes code.
  • New tasks created — agent deployment, AI system monitoring, prompt engineering, and agent orchestration are tasks that didn’t exist in O*NET’s taxonomy two years ago.
  • Speed of acceleration — the metric will rise as models improve. What’s 75% today could be 90% in a year. The trajectory matters more than the snapshot.

The Bottom Line

This is the first credible, data-backed measurement of AI’s actual labor market impact. Not a survey. Not a capability assessment. Empirical usage data from millions of real sessions mapped to real occupations.

The disruption isn’t theoretical. It’s measurable, and Anthropic just published the ruler.

For anyone building with AI agents: you’re not on the speculative side of this curve anymore. You’re on the measured one. The question is whether the rest of the workforce catches up before the hiring headwinds become layoff tailwinds.