Federal Judge Blocks Trump Administration's Ban on Anthropic, Calls It 'First Amendment Retaliation'

A federal judge issues a preliminary injunction blocking the Trump administration from banning Anthropic's Claude AI from government use, ruling the move was illegal retaliation for the company's public stance on autonomous weapons.

AI Newspaper Today · 2 min read

A Landmark AI Policy Ruling

A federal judge in San Francisco has granted Anthropic a preliminary injunction in its lawsuit against the Trump administration, blocking the government from banning federal agencies from using the company's Claude AI models.

Judge Rita Lin issued the ruling on March 26, two days after oral arguments in what has become the most significant legal clash between an AI company and the U.S. government.

The Dispute

The conflict began when Anthropic CEO Dario Amodei publicly stated the company would not allow Claude to be used for autonomous weapons systems or to surveil American citizens. The Department of Defense wanted unfettered access to Anthropic's models across all lawful purposes. Anthropic wanted guardrails.

The Trump administration responded by directing federal agencies to stop using Anthropic's products and moved to designate the company as a supply chain threat to national security — a classification that would have effectively blacklisted it from all government contracts.

The Judge's Words

Judge Lin's 43-page ruling pulled no punches.

"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Lin wrote.

She went further: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."

The order bars the administration from implementing, applying, or enforcing the ban. Lin put the injunction on hold for seven days to allow the government time to appeal.

Implications for the AI Industry

The ruling sets a precedent that AI companies can impose ethical boundaries on how their products are used — even by the government — without facing retaliatory consequences. It also draws a clear line: the government cannot weaponize procurement policy to punish companies for exercising free speech.

Other AI labs, many of which face similar tensions between commercial opportunity and ethical red lines, are watching closely. The case could reshape how the government negotiates AI contracts for years to come.
