Claude is the most downloaded app in America.

Not an AI app. Not a productivity app. The most downloaded app, period — surpassing ChatGPT, Instagram, and TikTok on Apple’s App Store over the weekend of March 1-2, 2026. On Google Play, Claude climbed to #5 while ChatGPT dropped to #2.

The numbers behind the shift are staggering: ChatGPT uninstalls surged 295%. Claude downloads jumped over 51%. Anthropic reported free active users up 60% since January, daily sign-ups quadrupled, and paid subscribers doubled. The demand was so overwhelming that Claude experienced a 3-hour outage on March 2 as servers buckled under the load.

The QuitGPT Movement

The catalyst was OpenAI’s Pentagon deal and the subsequent designation of Anthropic as a supply chain risk after the company refused to allow Claude to be used for mass surveillance and autonomous weapons.

What started as scattered outrage became organized action. QuitGPT.org, a grassroots campaign, claims over 1.5 million people have taken action — either sharing on social media, signing the boycott, or using the site’s automated account deletion tool. The #QuitGPT and #CancelChatGPT hashtags trended globally on X.

On March 3, protesters gathered outside OpenAI’s San Francisco offices, organized by QuitGPT with a focus on opposing AI-powered mass domestic surveillance and lethal autonomous weapons.

The movement represents something new in the AI industry: users voting with their feet based on ethics, not just features.

What Sam Altman Said

OpenAI CEO Sam Altman responded by proposing two additional sentences to the Pentagon agreement that he said would address surveillance concerns. However, critics noted the language still included “consistent with applicable laws” — a phrase that, given executive authority over national security, provides little practical constraint.

Altman also stated publicly that Anthropic should not be designated a supply chain risk, positioning himself as a moderate voice even as his company benefits from the designation.

The OpenClaw Angle

For OpenClaw users, this isn’t just industry drama — it has practical implications.

Claude is the most popular model powering OpenClaw agents. The surge in Claude adoption means:

  1. Increased API load: If you’ve noticed occasional slowness or rate limits on Claude models, the massive user influx is likely a factor. Consider configuring fallback models in your OpenClaw setup.

  2. Anthropic’s financial position strengthens: More subscribers means more revenue for Claude development. This is good news for OpenClaw users — Anthropic can invest more in model quality and infrastructure.

  3. Model diversification matters more than ever: The Pentagon situation showed that geopolitical events can disrupt model access overnight. OpenClaw’s multi-model architecture — where you can switch between Claude, GPT, Gemini, and local models — is a genuine advantage.

Here’s a minimal fallback configuration for your clawdbot.json:

{
  "ai": {
    "model": "anthropic/claude-sonnet-4",
    "fallbackModels": [
      "google/gemini-2.5-pro",
      "openai/gpt-5"
    ]
  }
}
  4. The open-source advantage: Unlike ChatGPT or Claude’s web apps, OpenClaw runs on your hardware. No company can revoke your access, designate your tool a supply chain risk, or change the terms of service underneath you. In a world where AI providers are subject to political pressure, self-hosting is insurance.
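OpenClaw handles the fallback chain for you once it’s in the config, but the underlying idea is simple: try the primary model, and on a capacity error move down the list. Here’s a rough sketch of that logic in Python — the model names mirror the clawdbot.json example above, while the provider call is faked for illustration and is not a real OpenClaw or provider API:

```python
class RateLimited(Exception):
    """Raised when a provider rejects a request for capacity reasons."""

def complete_with_fallback(prompt, call_model, models):
    """Try each model in order; return (model, reply) from the first that answers."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model, prompt)
        except RateLimited as exc:
            last_error = exc  # provider overloaded; try the next model
    raise RuntimeError("all models rate-limited") from last_error

# Demo: the primary model is overloaded, so the first fallback answers.
def fake_call(model, prompt):
    if model == "anthropic/claude-sonnet-4":
        raise RateLimited(model)
    return f"{model}: ok"

models = ["anthropic/claude-sonnet-4", "google/gemini-2.5-pro", "openai/gpt-5"]
used, reply = complete_with_fallback("hello", fake_call, models)
print(used)  # google/gemini-2.5-pro
```

The order of the list is the order of preference, which is why the cheaper or more reliable alternates belong near the top of fallbackModels.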

Anthropic’s ChatGPT Import Tool

Riding the wave, Anthropic quickly shipped a simplified migration path: export your ChatGPT data via Settings → Data Controls, then import into Claude via Settings → Capabilities → Memory. It’s not seamless — you’ll lose conversation history formatting — but your memories and preferences carry over.

For OpenClaw users, this is less relevant since your agent’s memory lives in local files (MEMORY.md, daily notes) rather than in any provider’s cloud.
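Because that memory is just files on disk, backing it up takes a few lines — no export tool required. A minimal sketch (the MEMORY.md file name comes from the article; the timestamped backups/ folder scheme is an assumption, not an OpenClaw convention):

```python
import shutil
import pathlib
from datetime import datetime

def snapshot_memory(workdir=".", files=("MEMORY.md",)):
    """Copy agent memory files into a timestamped backups/ subfolder."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = pathlib.Path(workdir) / "backups" / stamp
    dest.mkdir(parents=True, exist_ok=True)
    for name in files:
        src = pathlib.Path(workdir) / name
        if src.exists():  # skip files the agent hasn't created yet
            shutil.copy2(src, dest / name)
    return dest
```

Pointing files at your daily-notes directory as well, or putting the whole workspace under git, accomplishes the same thing.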

What This Means Long-Term

The AI industry just experienced its first major user migration driven by values rather than technology. A few observations:

Ethics became a competitive advantage. Anthropic’s refusal to compromise on safety red lines — the thing that got them blacklisted by the Pentagon — is exactly what drove millions of new users to their platform. The market rewarded principled behavior.

The “walled garden” risk is real. ChatGPT users who relied entirely on OpenAI’s ecosystem had to scramble to export data and rebuild workflows. OpenClaw users, by contrast, just update a model name in their config.

Platform risk is geopolitical now. It’s not just about companies going bankrupt or changing APIs. Government action can disrupt your AI stack overnight. Self-hosted, open-source tools are the hedge.

The AI agent era was already favoring open, self-hosted architectures. The QuitGPT movement just accelerated the timeline. For a practical look at switching AI providers, see our guide on the best cheap models for OpenClaw.


The OpenClaw project is independent and not affiliated with any AI provider. We cover developments across the ecosystem that affect our community.