The Rundown
Anthropic refused to let the Pentagon use its AI for mass surveillance or autonomous weapons — and got blacklisted for it
Defence Secretary Pete Hegseth labelled Anthropic a "supply chain risk" in an unprecedented move
Trump ordered every federal agency to immediately stop using Anthropic products
OpenAI swooped in hours later and signed a Pentagon deal — with the exact same restrictions Anthropic was demanding
Anthropic is challenging the designation in court, with legal experts calling it "likely illegal"
Over 500 OpenAI and Google employees signed an open letter in support of Anthropic — before their own company inked the deal
Dictate code. Ship faster.
Wispr Flow understands code syntax, technical terms, and developer jargon. Say async/await, useEffect, or try/catch and get exactly what you said. No hallucinated syntax. No broken logic.
Flow works system-wide in Cursor, VS Code, Windsurf, and every IDE. Dictate code comments, write documentation, create PRs, and give coding agents detailed context, all by talking instead of typing.
89% of messages sent with zero edits. 4x faster than typing. Millions of developers use Flow worldwide, including teams at OpenAI, Vercel, and Clay.
Available on Mac, Windows, iPhone, and now Android - free and unlimited on Android during launch.
How Did We Get Here?
ANTHROPIC HOLDS ITS GROUND
Anthropic had been negotiating with the Department of War for weeks.
The sticking point? Anthropic's terms of service. Anthropic wanted explicit language in the contract banning two things:
Mass surveillance of American citizens
Autonomous lethal weapons — AI that can kill without a human making the call
The Pentagon said no.
They wanted the ability to use Anthropic's AI for "any lawful purpose" — without specific carve-outs written into the agreement. Anthropic didn't budge.
Then things escalated fast.
THE BLACKLIST
Defence Secretary Pete Hegseth posted on social media designating Anthropic a "supply chain risk."
This wasn't just about Anthropic losing a government contract. The designation meant that any contractor, supplier, or partner doing business with the US military could no longer use Anthropic products either.
That's a huge deal. Anthropic's Claude is one of the most widely used AI models in enterprise software.
The result: a ripple effect across the entire tech supply chain.
Trump followed up on Truth Social, calling Anthropic's stance a "DISASTROUS MISTAKE" and ordering all federal agencies to "IMMEDIATELY CEASE" any use of their technology.
Anthropic walked away.
HERE COMES OPENAI
Within hours of the Anthropic crackdown, OpenAI CEO Sam Altman announced a new deal with the Pentagon.
Here's where it gets interesting.
Altman stated that OpenAI's agreement includes the exact same two restrictions Anthropic was fighting for — no mass surveillance, no autonomous weapons.
He also added a third: no "high-stakes automated decisions" like social credit systems. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force," Altman wrote on X.
So... OpenAI got the same protections Anthropic was asking for. Without the standoff.
There is a lack of transparency here.
OpenAI agreed the Pentagon could use its tech for "any lawful purpose" — but the contract hasn't been published, so there's no way to verify the actual wording.
WHAT HAPPENS NEXT?
Sam Altman has tried to defuse the situation with an AMA on 𝕏, but with little impact so far.
OpenAI has been losing customers as the 'QuitGPT' movement escalated this week.
The Department of War and Anthropic continue to trade blows online, with no resolution in sight.
Time will tell what approach OpenAI takes with the government — and where that leaves Anthropic.
AI TOOL DISCOUNTS
Don’t pay full price, use these codes:
Freepik: https://referral.freepik.com/mQI968S
Runway Code: JERROD25



