
Anthropic complained to the US government about the CEO of a Chinese company who was a key speaker at Nvidia's annual conference; and now the US State Department has ordered …


The US State Department has ordered a global push to bring attention to what it says are widespread efforts by Chinese companies, including AI startup DeepSeek, to steal intellectual property from US artificial intelligence labs. The cable, dated Friday and sent to diplomatic and consular posts around the world, instructs diplomatic staff to speak to their foreign counterparts about "concerns over adversaries' extraction and distillation of US AI models." For those unaware, distillation is the process of training smaller AI models using the output of larger, more expensive ones, as part of an effort to lower the cost of training a powerful new AI tool.

The US government order follows a complaint in which Anthropic accused three prominent Chinese AI companies of using its Claude chatbot on a massive scale to secretly train rival models. In a blog post, San Francisco-based Anthropic alleged that Chinese labs DeepSeek, Moonshot AI, and MiniMax violated corporate law by interacting with Claude, its market-reshaping vibe-coding tool.

Incidentally, among the companies that Anthropic and the US government have accused is Moonshot AI, whose CEO and founder Zhilin Yang took the stage at Nvidia's biggest annual event of the year as a speaker at GTC 2026.
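The distillation described above can be sketched in miniature: a "student" model is trained to reproduce the softened output distribution of a "teacher" model. The example below is a generic, hypothetical illustration of the technique, not the method of any lab named in the article.

```python
import math

# Illustrative sketch of knowledge distillation: the student is trained to
# minimize the KL divergence between its output distribution and the
# teacher's temperature-softened distribution over the same inputs.

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution; a higher
    temperature produces a softer (flatter) distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence KL(teacher || student) over softened distributions.

    Softening exposes the teacher's relative confidence in wrong answers,
    which is the extra signal distillation transfers to the student."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# The loss is zero when the student exactly matches the teacher,
# and positive when it does not.
assert distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]) == 0.0
assert distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0.0
```

In practice the teacher's logits are usually not available from a hosted model, which is why large-scale distillation campaigns instead collect the teacher's text outputs and fine-tune the student on them directly.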

What Anthropic's complaint said about Moonshot AI

In its blog post, Anthropic said: "We have identified industrial-scale campaigns by three AI laboratories — DeepSeek, Moonshot, and MiniMax — to illicitly extract Claude's capabilities to improve their own models. These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions."

The blog post also went on to describe the technique that Moonshot AI used. "The three distillation campaigns detailed below followed a similar playbook, using fraudulent accounts and proxy services to access Claude at scale while evading detection. The volume, structure, and focus of the prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use," it said.

On Moonshot AI, the post gave a scale of over 3.4 million exchanges, and said the operation targeted:

* Agentic reasoning and tool use
* Coding and data analysis
* Computer-use agent development
* Computer vision

"Moonshot (Kimi models) employed hundreds of fraudulent accounts spanning multiple access pathways. Varied account types made the campaign harder to detect as a coordinated operation. We attributed the campaign through request metadata, which matched the public profiles of senior Moonshot staff. In a later phase, Moonshot used a more targeted approach, attempting to extract and reconstruct Claude's reasoning traces."

What the US State Department cable said

According to an exclusive Reuters report, the State Department cable said its purpose was to "warn of the risks of utilizing AI models distilled from U.S. proprietary AI models, and lay the groundwork for potential follow-up and outreach by the U.S. government." It also mentioned Chinese AI firms Moonshot AI and MiniMax.

The cable said that "AI models developed from surreptitious, unauthorized distillation campaigns enable foreign actors to release products that appear to perform comparably on select benchmarks at a fraction of the cost but do not replicate the full performance of the original system." It added that the campaigns also "deliberately strip security protocols from the resulting models and undo mechanisms that ensure those AI models are ideologically neutral and truth-seeking."


