The Anthropic-Pentagon saga has escalated dramatically this week. Three developments happened nearly simultaneously: CEO Dario Amodei sent a blistering internal memo attacking OpenAI’s deal as “safety theater,” defense contractors started pulling Claude from their operations, and the US military continued using Claude for strikes against Iran — hours after banning it.

It has arguably been the most turbulent week in the AI industry's short history, and it matters for anyone building on these models.

Amodei’s Memo: “Straight Up Lies”

In a 1,600-word memo leaked via The Information, Amodei didn’t hold back. He described OpenAI’s Pentagon agreement as “maybe 20% real and 80% safety theater,” accusing Sam Altman of “gaslighting” employees and the public about what the deal actually permits.

The core issue: OpenAI’s agreement includes language prohibiting domestic mass surveillance and requiring “human responsibility for the use of force” — but it’s qualified by the phrase “consistent with applicable laws.” Amodei’s argument is that this qualifier guts the protection entirely, since future laws could authorize exactly what the language appears to prohibit.

“We haven’t donated to Trump,” Amodei wrote. “We haven’t given dictator-style praise to Trump.” He explicitly contrasted Anthropic’s stance with OpenAI co-founder Greg Brockman’s $25 million donation to Trump.

The memo is significant because Amodei has historically been measured and technocratic in public communications. This was personal, pointed, and clearly intended to rally Anthropic employees amid an existential business threat.

Defense Tech Companies Are Already Moving

While the legal designation is still technically unofficial — limited so far to social media posts from Defense Secretary Hegseth and President Trump — defense companies aren’t waiting.

Per CNBC reporting, the impact is concrete:

  • 10 portfolio companies at J2 Ventures (a defense-focused VC) have stopped using Claude for defense work
  • Lockheed Martin is expected to remove Anthropic from its supply chain (Reuters)
  • Multiple unnamed defense CEOs told CNBC they directed employees to stop using Claude as of last week
  • Companies are switching "out of an abundance of caution," even while calling Claude "excellent"
  • Palantir, which co-deployed Claude to classified networks and gets ~60% of US revenue from government contracts, declined to comment on its plans

Piper Sandler analysts warned that moving off Anthropic could cause “short-term disruptions” for Palantir, noting that “onboarding and negotiating replacement technology will take time and resources.”

Not everyone is rushing. C3 AI's Tom Siebel said he doesn't see a need to act "until it gets litigated." But the trend is clear: most companies that do business with the Pentagon are treating the ban as a fait accompli.

The Iran Contradiction

The most striking detail in this entire saga: the US military used Claude for intelligence assessments and target identification in strikes against Iran — hours after Trump declared that “EVERY Federal Agency” should stop using Anthropic’s technology.

The Wall Street Journal reported that planning for the strikes was already underway and relied on Claude. Trump had to walk back his demand for immediate cessation, replacing it with a six-month phase-out period. The reason is obvious — you can’t rip out the intelligence infrastructure supporting active military operations overnight.

This contradiction exposes the political nature of the designation. If Claude were genuinely a security risk, you wouldn’t use it to plan strikes against a nation-state. The designation is punitive, not protective.

What OpenClaw Users Should Do

This situation reinforces the same principle we’ve been advocating since day one: don’t be a single-provider shop.

The Commercial API Is Fine

Let’s be clear: none of this affects your ability to use Claude through the commercial API. Anthropic’s enterprise business is separate from military contracts. Your OpenClaw agent running Claude continues to work.

But Diversify Anyway

The lesson isn’t about Claude specifically — it’s about systemic risk. Political, regulatory, and business disruptions can hit any provider. OpenClaw’s architecture makes this easy:

In clawdbot.json:

{
  "model": "anthropic/claude-sonnet-4-6",
  "fallbackModel": "google/gemini-2.0-flash"
}

Consider keeping a local model available via Ollama as a third layer of resilience. It’s slower and less capable, but it’s yours — no government designation can touch it.
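A minimal sketch of that third layer, assuming your OpenClaw version resolves Ollama-hosted models through the same provider/model naming scheme — both the "ollama/" prefix and the model name below are illustrative, so check your version's provider docs rather than copying this verbatim:

{
  "model": "anthropic/claude-sonnet-4-6",
  "fallbackModel": "ollama/llama3.1"
}

Here the local model takes the fallback slot directly; if your setup supports chaining more than one fallback, the local model is better placed last, behind a second hosted provider.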

Anthropic’s Safety Stance Benefits You

Here’s the counterintuitive takeaway: Anthropic taking a $200 million hit to maintain safety red lines is good news for commercial users. The same principles that prevent Claude from being used for autonomous weapons also prevent it from being manipulated into harmful actions by prompt injection or rogue agents.

Tara Chklovski, CEO of Technovation, told CNBC that “Anthropic is the only one that has this very unique set of skills in technology” around safe military AI deployment. “Competition is so fierce that people think going fast and without the weight of these safeguards is the only way to succeed. Anthropic is showing that’s probably not the way.”

The Bigger Picture

Former Trump advisor Dean Ball called the designation “attempted corporate murder.” Politico reported that a former DOJ official specializing in technology law warned this could be “the first step toward partial nationalization of the AI industry.”

Even Ilya Sutskever — OpenAI co-founder turned competitor — weighed in on X: “It’s extremely good that Anthropic has not backed down… In the future, there will be much more challenging situations of this nature.”

Meanwhile, Sam Altman acknowledged his timing on the OpenAI deal was "sloppy" and added language clarifying that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons." Whether that qualifier ("intentionally") survives contact with military operations remains to be seen.

For AI agent builders, this week is a case study in why infrastructure decisions matter. The model you build on carries implicit political, ethical, and business continuity risks. Self-hosted, model-agnostic architecture isn’t just a nice-to-have — it’s insurance. OpenClaw’s model routing lets you switch providers in minutes.
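
The same insurance is cheap to build even outside OpenClaw. Below is a minimal TypeScript sketch of provider-agnostic routing with fallback; the ChatProvider interface and the stub providers are illustrative inventions for this post, not OpenClaw internals or real vendor SDK calls:

// Minimal provider-fallback sketch. All names are illustrative.
interface ChatProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Try each provider in order; move on when one fails for any reason
// (outage, rate limit, or a provider you had to drop overnight).
async function completeWithFallback(
  providers: ChatProvider[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider.complete(prompt);
    } catch (err) {
      lastError = err;
      console.warn(`${provider.name} failed, trying next provider`);
    }
  }
  throw new Error(`all providers failed: ${String(lastError)}`);
}

// Stub providers to demonstrate the routing; real ones would wrap API calls.
const primary: ChatProvider = {
  name: "anthropic/claude-sonnet-4-6",
  complete: async () => {
    throw new Error("simulated outage");
  },
};
const fallback: ChatProvider = {
  name: "google/gemini-2.0-flash",
  complete: async (prompt) => `fallback answer to: ${prompt}`,
};

completeWithFallback([primary, fallback], "status check").then(console.log);

The point of the abstraction is that the calling code never names a vendor: swapping providers means editing the array, not the application.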


This post follows our earlier coverage: Anthropic Designated a Pentagon Supply Chain Risk and Claude Becomes the #1 App in America.