Anthropic Accuses DeepSeek, Moonshot AI, and MiniMax of Industrial-Scale Model Distillation
Anthropic says three Chinese AI labs created over 24,000 fake accounts and used 16 million Claude exchanges to extract its model's capabilities, escalating tensions over AI intellectual property.

The Biggest AI Intellectual Property Dispute Yet
Anthropic has publicly accused three Chinese AI labs — DeepSeek, Moonshot AI (Kimi), and MiniMax — of orchestrating what it calls "industrial-scale distillation attacks" against its Claude models. The allegations, first disclosed on February 23, have dominated Reddit's r/MachineLearning and r/singularity communities for weeks and reignited debate over AI model theft, export controls, and the fragility of API-based business models.
According to Anthropic, the three companies collectively created more than 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, systematically extracting the model's most differentiated capabilities to train their own systems.
What Is Distillation — and Why Does It Matter?
Distillation is a well-established technique in machine learning where a smaller "student" model is trained to replicate the outputs of a larger "teacher" model. AI labs routinely use distillation internally to create lighter, cheaper versions of their own frontier models.
The controversy arises when competitors use the technique across company lines. By querying Claude millions of times and recording its responses, a rival lab can create training data that captures Claude's reasoning patterns, code generation abilities, and alignment behavior — effectively copying another company's research investment through its public API.
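To make the mechanism concrete, here is a minimal sketch of classical knowledge distillation in the soft-target formulation: a student model is trained to match a teacher's temperature-softened output distribution rather than hard labels. A tiny linear model stands in for the "teacher" (a real frontier model would be queried over an API instead); all names, sizes, and hyperparameters are illustrative, not anything from Anthropic's report.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T exposes more of the
    teacher's 'dark knowledge' about relative class similarities."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Toy "teacher": a fixed linear classifier over 3 classes.
X = rng.normal(size=(200, 4))                  # inputs ("prompts")
W_teacher = rng.normal(size=(4, 3))
teacher_probs = softmax(X @ W_teacher, T=2.0)  # recorded soft outputs

# Student: trained only on the teacher's outputs, never on ground truth,
# by minimizing soft cross-entropy (equivalent to KL up to a constant).
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(300):
    p = softmax(X @ W_student, T=2.0)
    grad = X.T @ (p - teacher_probs) / len(X)  # soft cross-entropy gradient
    W_student -= lr * grad

# The student now reproduces the teacher's decisions on most inputs.
agreement = np.mean((X @ W_student).argmax(1) == (X @ W_teacher).argmax(1))
```

The key point the sketch illustrates: the student needs only (input, output) pairs from the teacher, which is exactly what millions of recorded API exchanges provide.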
The Scale of Each Campaign
Anthropic's investigation revealed distinct patterns across the three labs:
- DeepSeek conducted more than 150,000 exchanges focused on improving foundational logic and alignment, particularly around generating responses to policy-sensitive queries. Anthropic suggests this targeted Claude's nuanced approach to content moderation.
- Moonshot AI generated over 3.4 million exchanges targeting agentic reasoning, tool use, coding, data analysis, and computer vision — essentially the full spectrum of Claude's enterprise capabilities.
- MiniMax ran the largest campaign, with 13 million exchanges concentrating on agentic coding, tool orchestration, and multi-step task completion.
Reddit's Reaction: Outrage and Skepticism
The story exploded across AI-focused subreddits within hours of Anthropic's disclosure. On r/MachineLearning, the top thread accumulated thousands of upvotes and hundreds of comments debating the ethics and legality of API-based distillation.
The community response has been split. One camp views this as straightforward intellectual property theft at an unprecedented scale. "This is not some gray area," wrote one highly upvoted commenter on r/singularity. "They created 24,000 fake accounts specifically to steal capabilities. That is fraud."
Others have taken a more nuanced position, pointing out that distillation from public APIs exists in a legal gray zone. Several commenters noted that OpenAI itself was accused of training on copyrighted data, and questioned whether querying a public API — even at massive scale — constitutes theft in any legally meaningful sense.
The Geopolitical Dimension
The timing of Anthropic's disclosure is significant. It came as U.S. policymakers were actively debating expanded AI chip export controls targeting China. Anthropic's allegations provide ammunition for those pushing stricter restrictions, framing Chinese AI development as partially dependent on extracting capabilities from American companies.
The accusations also raise uncomfortable questions about the security model underpinning the entire AI-as-a-service industry. If a frontier model's capabilities can be systematically extracted through its own API, the traditional moat of massive training compute becomes less defensible.
Industry Response and Defensive Measures
Anthropic says it has implemented new detection systems and is investing in defenses that make distillation attacks "harder to execute and easier to identify." The company is calling for a coordinated response across the AI industry, cloud providers, and policymakers.
As of this writing, none of the three companies — DeepSeek, Moonshot AI, or MiniMax — has issued a detailed public response to the allegations.
What Comes Next
The distillation dispute is likely to accelerate several trends already underway: tighter API rate limiting and usage monitoring across all frontier model providers, increased investment in watermarking and fingerprinting model outputs, and growing pressure on policymakers to establish clear legal frameworks around model-to-model knowledge transfer.
For the broader AI community, the episode is a stark reminder that in the race to build ever-more-capable AI systems, the line between competition and extraction is becoming increasingly difficult to draw.


