OpenAI and Google Employees Rally Behind Anthropic in Pentagon AI Lawsuit

More than 30 employees from OpenAI and Google DeepMind have filed a joint statement supporting Anthropic's legal challenge against the Defense Department's ban on its AI models in government systems.

AI Newspaper Today · 3 min read

Cross-Industry Solidarity

In a rare display of cross-company solidarity, more than 30 employees from OpenAI and Google DeepMind have filed a joint statement supporting Anthropic in its ongoing legal battle against the U.S. Department of Defense. The case, which began when the Trump administration labeled Anthropic a "supply-chain risk" and banned its AI models from government systems, has become a defining test of how government power intersects with AI development.

The employee statement argues that the DOD's designation was politically motivated rather than based on legitimate security concerns, and that allowing such designations to stand would create a chilling effect on AI safety research across the industry.

The Background

The conflict traces back to Anthropic's public stance on military applications of AI. The company has maintained policies limiting how its models can be used in weapons systems and military decision-making — positions that put it at odds with an administration eager to accelerate AI adoption in defense. In late 2025, the DOD designated Anthropic as a supply-chain risk, effectively barring federal agencies from purchasing or using its products.

Anthropic challenged the designation in federal court, arguing it violated both free-speech protections and administrative procedure requirements. In March 2026, a federal judge ruled that the ban did violate free-speech protections, but the DOD has appealed and the designation remains in effect pending resolution.

Why Competitors Are Joining In

The employee statement is notable because it comes from people who work at Anthropic's direct competitors. OpenAI and Google both sell AI products to government agencies and stand to gain commercially from Anthropic's exclusion. Yet the signatories argue that the principle at stake — whether the government can effectively blacklist an AI company for its safety positions — matters more than competitive advantage.

Several signatories are prominent AI safety researchers who have published work on the risks of deploying AI in high-stakes military contexts. Their statement emphasizes that Anthropic's cautious approach to military AI reflects responsible development practices that the industry should encourage, not penalize.

The Broader Stakes

The case has implications beyond Anthropic. If the DOD can designate AI companies as supply-chain risks based on their safety policies, every AI lab faces a choice: accommodate government demands for unrestricted military use, or risk losing access to one of the largest technology procurement markets in the world.

For the AI safety research community, the case tests whether safety-oriented policies are commercially sustainable. Anthropic has built its brand and recruited talent on the premise that building safe AI and building a successful business are compatible goals. A ruling that permanently excludes the company from government contracts would challenge that premise directly.

What Happens Next

The DOD's appeal is expected to be heard in the coming months. Meanwhile, bipartisan legislation has been introduced in Congress that would require the DOD to establish clear, transparent criteria for supply-chain risk designations in AI — criteria that would prevent politically motivated exclusions.

Anthropic continues to operate normally in the commercial market, where its Claude models compete with GPT and Gemini for enterprise customers. But the government market represents significant revenue potential, and the company has acknowledged that the ongoing legal uncertainty has affected its ability to close contracts with agencies that want to use its products but are wary of the designation's implications.

The outcome will likely set precedent for how democratic governments balance national security interests with the commercial AI ecosystem — and whether companies that prioritize safety can do so without paying a competitive penalty.
