The biggest story in AI right now isn’t a model release or a benchmark. It’s a contract.
In late February 2026, OpenAI signed a deal with the U.S. Department of Defense to deploy AI models on classified networks. Anthropic had been offered the deal first — and refused. What followed was a week of political threats, internal dissent, researcher resignations, and a public reckoning about what AI companies owe their users.
If you run OpenClaw, this matters to you. Here’s why.
What Happened
February 25: The Pentagon, already in talks with Anthropic, pivots to OpenAI after Anthropic pushes back on broad usage terms.
February 27: The Pentagon issues an ultimatum to Anthropic CEO Dario Amodei. Anthropic refuses to budge on its red lines: no AI enabling fully autonomous weapons, no mass surveillance of U.S. citizens. The response is hostile — officials publicly insult Anthropic, and Trump posts on Truth Social directing federal agencies to cease all use of Anthropic technology.
February 28: Sam Altman announces the deal on X, claiming it includes the same safeguards Anthropic demanded. Critics immediately note the “all lawful use” language as a loophole — under authorities like Executive Order 12333, “lawful” is broader than most people assume.
March 1-3: Internal OpenAI dissent surfaces. Researchers including Noam Brown, Aidan McLaughlin, and Leo Gao voice concerns. Protests occur outside OpenAI HQ. Altman holds an all-hands meeting and amends some contract language, admitting the rollout was rushed.
The Safeguards Gap
OpenAI’s contract prohibits “mass domestic surveillance” and “direct autonomous weapons,” enforced through contract language and technical monitoring. On paper, that sounds reasonable.
The problem is specificity. “Mass” surveillance has no legal definition. “Direct” autonomous weapons excludes systems that assist targeting decisions but leave a human in the loop — which covers most modern military AI. And “all lawful use” under existing intelligence authorities permits bulk collection of metadata and communications data that most people would call surveillance.
Anthropic’s proposed terms were firmer: explicit prohibitions on specific capabilities, independent technical audits, and the right to withdraw models if red lines were crossed. The Pentagon rejected those terms.
Why Open-Source Agents Are the Hedge
Here’s the uncomfortable reality: every AI agent runs on models provided by companies with their own interests. When you use Claude, you depend on Anthropic’s ethics. When you use GPT-4, you depend on OpenAI’s contracts. When you use Gemini, you depend on Google’s defense relationships.
OpenClaw doesn’t eliminate this dependency, but it changes the architecture:
Model Portability
OpenClaw users can swap models with a config change. If you disagree with OpenAI’s Pentagon deal, switch to Claude or Gemini or a local model. No migration, no data loss, no vendor lock-in. Your agent, your memory, your workflows — all stay intact.
```yaml
# Switch providers in seconds
ai:
  provider: anthropic
  model: claude-sonnet-4-6
```
Local-First Privacy
OpenClaw stores everything locally — memory, conversations, files. No data flows to model providers beyond the API calls you initiate. This is fundamentally different from cloud-hosted AI assistants where the provider controls your data.
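The distinction can be made concrete with a small sketch. This is not OpenClaw's actual storage code or schema; it is an illustrative local-first pattern, where memory lives in a SQLite file on your disk and the only bytes that ever leave the machine are in the payload you explicitly build for an API call.

```python
import json
import sqlite3

class LocalMemory:
    """Illustrative local-first memory store (hypothetical, not OpenClaw's schema)."""

    def __init__(self, path=":memory:"):
        # All history lives in a local SQLite database you control.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS messages (role TEXT, content TEXT)"
        )

    def remember(self, role, content):
        self.db.execute("INSERT INTO messages VALUES (?, ?)", (role, content))
        self.db.commit()

    def api_payload(self, new_prompt):
        # Only this explicitly constructed payload would be sent to a
        # provider; the store itself never syncs anywhere.
        history = self.db.execute("SELECT role, content FROM messages").fetchall()
        msgs = [{"role": r, "content": c} for r, c in history]
        msgs.append({"role": "user", "content": new_prompt})
        return json.dumps({"messages": msgs})

mem = LocalMemory()
mem.remember("user", "hello")
payload = json.loads(mem.api_payload("what changed?"))
print(len(payload["messages"]))  # 2
```

The point of the pattern: the provider sees a request, not your agent's state.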
Self-Hosted Infrastructure
Running OpenClaw on your own hardware means no provider can cut your access for political reasons. If the Pentagon threatens to blacklist a company (as happened to Anthropic), that’s a vendor risk. A self-hosted agent running local models has no vendor whose access can be revoked.
Transparency
OpenClaw is open-source. You can audit every line of code, every plugin, every data flow. Closed-source AI assistants ask you to trust their safety claims. Open-source lets you verify.
The Peter Steinberger Connection
This story has a personal dimension for the OpenClaw community. Peter Steinberger joined OpenAI just weeks before this deal was announced. He committed to keeping OpenClaw open-source through a foundation structure — and nothing about the Pentagon contract changes that.
But it does underscore why the foundation matters. OpenClaw’s independence from any single company isn’t just a governance preference. It’s a safety mechanism. When the company employing your project’s creator signs military contracts, the separation between project and employer isn’t abstract anymore.
What This Means Practically
If you use OpenAI models with OpenClaw: Nothing changes technically. The Pentagon deal doesn’t affect API access or model behavior for civilian users. But you should be aware that your API payments now fund a company with defense contracts.
If that concerns you: OpenClaw makes switching trivial. Anthropic’s Claude, Google’s Gemini, open-weight models via Ollama — you have options, and your agent doesn’t care which model powers it.
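As a hypothetical illustration of that last option, pointing the same agent at a locally served open-weight model could look like this (the field names mirror the earlier snippet; the exact keys and model name are assumptions, not documented OpenClaw config):

```yaml
# Hypothetical: same agent, local model served by Ollama
ai:
  provider: ollama
  model: llama3
```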
If you’re building for an organization: This is a concrete argument for model-agnostic agent architecture. Vendor ethics can shift overnight. Building on a platform that locks you to one provider is a strategic risk that just became very visible.
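What "model-agnostic architecture" means in practice is that application code depends on a narrow chat interface, and concrete providers are registered behind it. The sketch below is an assumption-laden toy, not any real SDK: provider names and the call signature are illustrative.

```python
from typing import Callable, Dict

# The agent depends only on this narrow interface: prompt in, text out.
Chat = Callable[[str], str]

_providers: Dict[str, Chat] = {}

def register(name: str, fn: Chat) -> None:
    """Register a backend under a provider name."""
    _providers[name] = fn

def complete(provider: str, prompt: str) -> str:
    """Route a prompt to whichever backend is configured."""
    return _providers[provider](prompt)

# Stand-in backends; in practice each would wrap a vendor SDK or a
# local inference server.
register("anthropic", lambda p: f"[claude] {p}")
register("ollama", lambda p: f"[local] {p}")

# Switching vendors is a one-string change, not a migration.
print(complete("anthropic", "hi"))
print(complete("ollama", "hi"))
```

The design choice is that vendor identity is data (a config string), not code, so an overnight shift in vendor ethics is a config edit rather than a rewrite.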
The Bigger Picture
The AI industry is splitting along geopolitical lines. OpenAI and Google are leaning into defense partnerships. Anthropic is drawing ethical lines. The EU is regulating. China is building independently.
In this environment, open-source isn’t just a development philosophy — it’s a sovereignty strategy. Running your own agent on your own infrastructure with your choice of models is the one architecture that doesn’t break when any single vendor changes course.
The Pentagon deal will be debated for years. The technical response is available today — start with our 10-minute setup guide and understand why self-hosting matters.
For more on OpenClaw’s relationship with OpenAI, see our coverage of Peter Steinberger joining OpenAI.