OpenAI Said It Would Hold the Line. Then It Made a Deal.

Last Thursday night, Sam Altman sent an internal memo to OpenAI employees that was quickly shared publicly. He wrote that if OpenAI were in Anthropic’s position with the Pentagon, his company would “largely follow Anthropic’s approach.” His stated red lines matched: no mass domestic surveillance, no autonomous lethal weapons, humans in the decision loop.

By Friday, OpenAI had struck a deal with the Pentagon.

The contents of that deal are not public.

That gap matters. It’s the most consequential unknown in a week full of them.

What happened Friday

The deadline the Pentagon had given Anthropic passed with no agreement. President Trump posted on Truth Social directing every federal agency to immediately stop using Anthropic’s technology. Not just the Department of Defense. Every agency.

Anthropic published a formal statement the same day. It was precise and legal in tone. They clarified the actual statutory scope of what a supply chain risk designation can do, challenged the designation as legally unsound, and announced they would fight it in court. They also made clear they haven’t yet received direct official communication confirming the designation is final.

The Pentagon’s rhetoric had been considerably less measured. The Undersecretary of Defense posted on X the night before that Dario Amodei “is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”

That’s an unusual thing for a senior government official to publish about a CEO whose company has held a $200 million contract with that same department since 2024.

What the ban actually covers

There’s a meaningful legal distinction being obscured in how this has been reported and how the Pentagon has framed it publicly.

Hegseth implied that any company doing business with the military would need to stop using Anthropic’s products across the board. Anthropic’s legal reading is that he doesn’t have the authority to do that.

(Update: There are some companies, like Lockheed Martin, making moves in that direction, which is exceptionally frustrating to see.)

The statutory basis for this designation, 10 USC 3252, applies to Department of Defense contracts. Legally, it cannot extend to how contractors use Claude for work unrelated to DoD, and it cannot affect individual users, commercial contracts, or API access. Anthropic is going to court to establish that line clearly.

For people using Claude for creative and business work: your access is not affected. That’s Anthropic’s legal position, and they appear to be prepared to defend it.

Who moves in when Anthropic moves out?

This is worth paying attention to.

Until this week, Anthropic was the only major AI company cleared to operate on the government’s classified networks. That’s a significant position, and it didn’t happen overnight. It required years of security clearances, infrastructure, and trust-building.

xAI, Elon Musk’s AI company, was cleared for classified government networks this week. Grok (known for its acceleration of nonconsensual pornography) is now the available alternative for the military’s most sensitive work.

That’s the context for understanding what this dispute is actually about, and who benefits from its resolution.

Back to OpenAI

When Altman’s memo was shared publicly, it read as a solidarity statement. OpenAI occupies a similar position in the AI landscape. If the Pentagon could compel Anthropic to remove its safeguards, the same pressure could eventually come for OpenAI.

Altman writing that his company would largely follow Anthropic’s approach was, in that context, a signal to the industry and to the government about where the major labs collectively stood.

Then OpenAI made a deal, on the same day Anthropic was being designated a supply chain risk.

Maybe the deal is entirely consistent with what Altman wrote in that memo. Maybe OpenAI found a way to satisfy the Pentagon’s requirements within their stated red lines. That’s possible. We don’t know because the terms haven’t been made public.

What we do know is that the gap between “we would largely hold Anthropic’s position” and “we have reached an agreement with the same Pentagon that Anthropic walked away from” needs explaining.

The explanation might be completely reasonable. Or it might tell us something important about what the government actually had to accept to close that deal.

Either way, it’s the question the rest of this story turns on.

Where things stand

Anthropic is in litigation posture. Bipartisan congressional voices, including a letter from Senators Markey and Van Hollen calling the pressure campaign “a chilling abuse of government power,” have pushed back. But the presidential directive is issued. The designation process is moving.

Anthropic’s statement is careful to say they want to ensure a smooth transition for military operations. They’re not trying to leave the Pentagon without tools. They’re trying to hold two specific lines while everything else negotiates. Whether the courts will let them is a separate question from whether those lines were the right ones to hold.

The broader situation: one major AI company has been banned from federal use for refusing to enable mass domestic surveillance and fully autonomous weapons. Another major AI company publicly said it would do the same… and then reached an agreement we haven’t seen. And the replacement option is a company whose founder is among the administration’s most prominent allies.

The facts of this week are strange enough. They don’t need editorializing.