OpenAI Takes Pentagon Deal as Anthropic Fights Back
The frontier AI industry just split in two. On February 27, 2026, OpenAI announced a deal with the Department of Defense to deploy its models on classified military networks — hours after the Trump administration blacklisted Anthropic, labeled it a "supply-chain risk," and ordered every federal agency to rip out its technology. Two companies born from the same ideological DNA. Two radically different answers to the same question: Should AI companies draw red lines with the Pentagon?
What Actually Happened
The timeline matters here because it's brutal.
For weeks, Anthropic had been negotiating with the DoD over the terms of its existing contract, worth up to $200 million. The sticking point: Anthropic insisted on maintaining safeguards prohibiting the use of its AI for mass domestic surveillance and fully autonomous weapons systems. The Pentagon, under Defense Secretary Pete Hegseth, demanded unrestricted access for "all lawful purposes." Anthropic CEO Dario Amodei published a detailed statement on February 26 explaining why the company wouldn't budge.
The next day, President Trump fired back: "We don't need it, we don't want it, and will not do business with them again!" He called Anthropic a "radical left, woke company" and gave the Pentagon six months to phase out its tech. Hegseth then designated Anthropic a supply-chain risk — a classification normally reserved for foreign adversaries like Huawei or Kaspersky.
Hours later, OpenAI announced its own Pentagon deal. Same $200 million range. Same classified networks. The difference: OpenAI walked away with a signed contract.
The Twist: OpenAI Claims the Same Red Lines
Here's where this story gets genuinely interesting — and where the easy narrative breaks down.
OpenAI didn't just waltz in and hand the Pentagon an unrestricted AI toolkit. According to OpenAI's own disclosure, their agreement includes three explicit prohibitions: no mass domestic surveillance, no directing autonomous weapons systems, and no high-stakes automated decisions without human oversight. Sam Altman claims the company retains discretion over its safety stack, deploys via cloud rather than handing over model weights, and has contractual termination rights if the government breaches the agreement.
Sound familiar? Those are essentially the same red lines Anthropic demanded — the ones the Pentagon apparently found unacceptable.
So either OpenAI found a way to package the same restrictions in language the Pentagon could accept, or the administration was never actually opposed to the safeguards themselves. It was opposed to Anthropic.
The Real Divergence Isn't About Ethics — It's About Leverage
Let's be honest about what's happening here. This isn't a clean story about one company having principles and the other selling out. Both companies claim similar safety boundaries. The difference is in posture, relationships, and political capital.
OpenAI has spent the last two years aggressively courting Washington. Altman has been a regular presence on Capitol Hill. The company has positioned itself as the American AI champion — the entity you want building the future before China does. That framing plays exceptionally well with this administration.
Anthropic, meanwhile, has leaned into its identity as the "safety-first" lab. That reputation is an asset with researchers, enterprise customers, and European regulators. It is very much not an asset with a defense secretary who views AI safety discourse as an obstacle to military readiness.
The result: OpenAI got a contract with guardrails. Anthropic got blacklisted for demanding the same guardrails. The policy substance is nearly identical. The political outcome is a chasm.
The Supply-Chain Designation Is the Real Story
Forget the contract for a moment. The supply-chain risk designation is the move that should alarm every AI company — and every company in tech, period.
As Wired reported, this designation doesn't just cut Anthropic off from DoD contracts. It requires military contractors and vendors to certify they don't use Anthropic's models. That's a cascading commercial threat. Defense primes, systems integrators, consulting firms — anyone touching Pentagon money now has a reason to steer clear of Claude.
Anthropic has vowed to challenge the designation in court, calling it "unprecedented and legally unsound" for an American company. They're right that it's unprecedented. Whether the courts agree it's unsound is another matter entirely.
The chilling effect is already visible. Over 100 Google AI employees sent a letter to management demanding the company adopt Anthropic-style red lines for Gemini's military use. A separate public letter from nearly 50 OpenAI employees and 175 Google staffers criticized the Pentagon's negotiating tactics. The workforce is watching. And they're nervous.
What This Means for the Industry
Three implications stand out:
- Political alignment is now a competitive moat. OpenAI's Washington strategy just paid a $200 million dividend. Every frontier lab will recalculate how much political capital to invest — and with whom.
- Safety commitments are under political pressure. If the government can blacklist you for maintaining guardrails it dislikes, the incentive structure for responsible AI development just shifted dramatically. Even if OpenAI got similar terms, the message is clear: negotiate quietly, or get made an example of.
- The precedent is set for weaponizing procurement. Using a supply-chain risk designation against a domestic AI company — not a Chinese chipmaker, not a Russian cybersecurity firm — normalizes using national security tools for commercial and political leverage. That should terrify every tech CEO, regardless of their views on military AI.
The Uncomfortable Truth
OpenAI played this well. They got the deal, included meaningful (if untested) safeguards, publicly objected to Anthropic's blacklisting, and asked the DoD to offer the same terms to every AI company. That's a diplomatically masterful sequence. Credit where it's due.
But the broader picture is grim. We're watching the U.S. government pick winners among frontier AI labs based partly on political compatibility, and punish dissent with tools designed for foreign adversaries. OpenAI's red lines may look identical to Anthropic's on paper. The difference is that OpenAI got to keep doing business. Anthropic got called a national security threat.
That gap between identical principles and opposite outcomes? That's the new reality of AI governance in America. And it should make everyone — hawks and doves alike — deeply uncomfortable.
Related Articles
- Claude Is Picking Military Targets
- AI's Big Money Week
- Trump Bans Anthropic While Military Uses Claude for Iran Strikes
- Claude 4 vs. GPT-4.1 vs. Gemini 2.5
Want sharp analysis on AI policy and frontier lab strategy as it unfolds? Follow ultrathink.ai for coverage that doesn't flinch from the hard questions.
This article was ultrathought.