Feb 25, 2026 · 5 min read
Three Chinese AI Labs Used 24,000 Fake Accounts to Copy Claude
Anthropic identified industrial-scale distillation campaigns by DeepSeek, Moonshot AI, and MiniMax that generated over 16 million exchanges designed to extract Claude's most advanced capabilities.
Industrial-Scale Theft
On February 24, 2026, Anthropic published a detailed report documenting what it called "industrial-scale" campaigns by three Chinese AI companies to systematically extract the capabilities of its Claude AI model. The companies (DeepSeek, Moonshot AI, and MiniMax) collectively generated over 16 million exchanges through approximately 24,000 fraudulently created accounts.
The technique is called distillation. It involves querying a more powerful AI model with carefully crafted prompts, then using the responses to train a smaller, cheaper model that mimics the original's capabilities. While distillation can be a legitimate research technique, doing it at this scale through fake accounts violates terms of service and raises serious intellectual property and national security concerns.
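At its core, the mechanics are simple: collect prompt and response pairs from the teacher, then fine-tune a student on them. Below is a minimal sketch of the data-collection loop under stated assumptions; the prompt list, output file, and model choice are illustrative, and this is not a reconstruction of any lab's actual pipeline.

```python
# Minimal sketch of distillation data collection: query a teacher model,
# record prompt/response pairs, and save them as supervised fine-tuning
# data for a smaller "student" model. Prompts and file path are
# illustrative; a real pipeline would cover thousands of task-specific
# prompts per targeted capability.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompts = [
    "Explain step by step how to merge overlapping intervals.",
    "Write a Python function that parses ISO 8601 timestamps.",
]

pairs = []
for prompt in prompts:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed teacher model for illustration
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    pairs.append({"prompt": prompt, "completion": response.content[0].text})

# Each JSONL line becomes one training example for the student model.
with open("distillation_pairs.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```

The line between research and abuse is one of scale and authorization: the same loop, run across 24,000 fake accounts and millions of prompts, becomes the campaign Anthropic describes.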
What Each Company Targeted
Each laboratory focused on different Claude capabilities, suggesting three deliberate extraction campaigns run independently of one another:
- DeepSeek generated over 150,000 exchanges targeting reasoning capabilities, reward-model functionality, and censorship-safe alternatives to policy-sensitive queries
- Moonshot AI produced more than 3.4 million exchanges focused on agentic reasoning, tool use, coding, data analysis, and computer vision
- MiniMax was the most aggressive, generating over 13 million exchanges targeting agentic coding and tool orchestration
The targets were not random. These are Claude's most commercially valuable and technically differentiated capabilities: the ability to reason through complex problems, use external tools, write production-quality code, and operate autonomously as an agent.
Hydra Cluster Architecture
MiniMax and Moonshot operated what Anthropic described as "hydra cluster" architectures: sprawling networks of fake accounts that distributed traffic across APIs and cloud platforms to evade detection. One proxy network managed more than 20,000 fraudulent accounts simultaneously.
The scale of infrastructure required to maintain 24,000 fake accounts and process 16 million exchanges suggests these were not rogue employee projects. This was systematic, well-resourced, and operationally sophisticated, requiring significant compute, engineering, and coordination.
How Anthropic Caught Them
Anthropic detected the campaigns through multiple signals: IP address correlation, request metadata analysis, infrastructure indicators, and pattern recognition in prompt volume and structure. The company also received corroboration from industry partners, suggesting the problem extends beyond Claude.
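None of those signals is conclusive on its own; the pattern emerges when they are combined. As a toy sketch of one such signal, grouping accounts by shared source IP and flagging unusually large, high-volume clusters might look like the following (the thresholds, field layout, and function name are assumptions for illustration, not Anthropic's actual detection logic):

```python
# Toy sketch of one detection signal: cluster API accounts by shared
# source IP and flag IPs whose account count and aggregate request
# volume far exceed what independent users would produce.
from collections import defaultdict

def flag_suspicious_ips(request_log, min_accounts=50, min_requests=100_000):
    """request_log: iterable of (account_id, source_ip, request_count) tuples."""
    accounts_by_ip = defaultdict(set)
    requests_by_ip = defaultdict(int)

    for account_id, source_ip, request_count in request_log:
        accounts_by_ip[source_ip].add(account_id)
        requests_by_ip[source_ip] += request_count

    # An IP fronting dozens of accounts at enormous volume looks like a
    # proxy node in a "hydra cluster", not an office NAT gateway.
    return [
        ip
        for ip in accounts_by_ip
        if len(accounts_by_ip[ip]) >= min_accounts
        and requests_by_ip[ip] >= min_requests
    ]
```

Real systems layer many such heuristics, which is why corroboration from industry partners matters: proxy infrastructure reused against multiple providers is far easier to attribute.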
OpenAI has made similar allegations. In a joint statement, both companies called for broader industry action to address distillation attacks, noting that the problem is not limited to any single AI provider.
The Safety Problem
The national security implications go beyond intellectual property theft. When a model is distilled illicitly, the safety guardrails built into the original do not transfer. Anthropic invests significant resources in alignment research, safety testing, and content policies. A distilled copy strips those protections away.
Anthropic warned that models built through illicit distillation could enable "authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance" without the safeguards that responsible developers build in.
DeepSeek's specific focus on "censorship-safe alternatives to policy-sensitive queries" is particularly revealing. It suggests the goal was not just to replicate Claude's technical capabilities but to understand how to build a model that can handle sensitive topics without the ethical constraints Western companies apply.
What Happens Next
The disclosure comes as the US government debates tighter restrictions on AI chip exports to China. Anthropic's report provides concrete evidence that Chinese AI labs are actively working to close the capability gap through means other than independent research.
For the AI industry, the challenge is technical: how do you allow legitimate API access while preventing systematic extraction at scale? Rate limiting, behavioral analysis, and account verification can raise the cost of distillation, but determined actors with sufficient resources will always find workarounds.
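To see why these defenses only raise costs rather than close the door, consider the most basic one: a per-account token-bucket rate limiter. A minimal sketch (capacity and refill rate are arbitrary values chosen for illustration) makes the structural weakness obvious:

```python
# Minimal per-account token-bucket rate limiter. Capacity and refill
# rate are arbitrary; real systems tune them per account tier and pair
# them with behavioral analysis. The structural weakness: limits apply
# per account, so 24,000 fake accounts multiply the permitted
# throughput 24,000-fold.
import time

class TokenBucket:
    def __init__(self, capacity=60, refill_per_sec=1.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, TokenBucket] = {}  # account_id -> TokenBucket

def check_request(account_id: str) -> bool:
    return buckets.setdefault(account_id, TokenBucket()).allow()
```

Per-account throttling constrains any single identity, which is exactly why the campaigns Anthropic describes invested in thousands of identities.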
The 16 million exchanges that Anthropic documented represent the campaigns it caught. The question no one can answer yet is how many it did not.