Elon Musk has acknowledged in court that xAI trained Grok partly on the outputs of rival AI systems, a small but awkward admission for a man currently suing OpenAI over its own commercial evolution. The testimony landed in federal court in California, where Musk is accusing OpenAI, Sam Altman, and Greg Brockman of abandoning the startup’s original non-profit mission.
Asked directly whether xAI had used “distillation” – training a new model by systematically querying existing advanced AI systems – Musk said the practice was common in the industry and then, under follow-up questioning, answered: “Partly.” In plain English: yes, at least some borrowing happened. The detail matters because AI labs have spent months pretending this is somebody else’s dirty trick, even as the economics of model-building push everyone toward the same grey zone.
What distillation means for Grok
Distillation is attractive for a simple reason: it can help smaller teams mimic the behavior of much larger systems without paying for the same mountain of compute. That is why OpenAI, Anthropic, and Google have been fighting third-party copying so aggressively, while Chinese open-source projects have often been the public face of the debate. Musk’s testimony suggests the practice is not just a foreign problem or a theoretical policy issue – American labs are doing versions of it to one another.
- Technique: distillation, or learning from outputs of existing AI systems
- Use case: speeding up development of a model like Grok
- Risk: violating platform terms even if the method is not clearly illegal
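The core idea is easiest to see in the classic form of distillation (Hinton et al., 2015): the small “student” model is trained to match the large “teacher” model’s output probability distribution, softened by a temperature parameter. A minimal sketch of that objective, using invented toy logits – real pipelines average this loss over huge numbers of sampled prompts and responses:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; higher temperature
    softens the distribution, exposing more of the teacher's
    'dark knowledge' about near-miss answers."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's predictions against the
    teacher's softened output distribution -- the standard
    distillation objective. Logits here are toy values."""
    teacher_p = softmax(teacher_logits, temperature)
    student_p = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_p, student_p))

# A student whose logits track the teacher's incurs a lower loss
# than one that disagrees, so gradient descent pulls it toward
# mimicking the teacher's behavior.
teacher = [3.0, 1.0, 0.2]
close_student = [2.8, 1.1, 0.3]
far_student = [0.2, 1.0, 3.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

API-based distillation of the kind at issue in the testimony is looser – the querying lab only sees generated text, not logits – but the economics are the same: the teacher’s expensive training run is compressed into cheap supervised targets for the student.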
Musk’s hierarchy of AI leaders
During the hearing, Musk also ranked the current field in a way that will irritate more than a few rival executives. He put Anthropic first, followed by OpenAI, Google, and then Chinese open-source projects, while describing xAI – founded in 2023 – as a much smaller operation with only a few hundred employees. That is a useful reminder that startup bravado does not change headcount, and in AI, headcount still burns cash.
OpenAI has not commented publicly on the admission. Meanwhile, the bigger industry response is already underway: the Frontier Model Forum, an industry body backed by the major labs, is developing ways to spot suspicious high-volume requests that look like attempts to copy a model’s behavior. Expect that cat-and-mouse game to get messier, not cleaner, because the incentive to build cheaper rivals is not going away.
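The detection side of that cat-and-mouse game often starts with something mundane: per-client rate accounting. A toy sketch of a sliding-window monitor is below – the window size, threshold, and class name are all invented for illustration, and real defenses also look at prompt diversity and content patterns, not just volume:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # hypothetical detection window
MAX_REQUESTS = 500    # hypothetical per-client ceiling per window

class VolumeMonitor:
    """Toy sliding-window request counter: flags clients whose
    query rate looks like systematic model extraction. All
    thresholds here are illustrative, not from any real system."""

    def __init__(self):
        # client_id -> deque of request timestamps inside the window
        self.events = defaultdict(deque)

    def record(self, client_id, timestamp):
        """Log one request; return True if the client's recent
        volume exceeds the ceiling and should be flagged."""
        q = self.events[client_id]
        q.append(timestamp)
        # Drop timestamps that have aged out of the window.
        while q and q[0] <= timestamp - WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_REQUESTS

monitor = VolumeMonitor()
# 600 requests in under a minute trips the flag; one request does not.
flags = [monitor.record("scraper", t * 0.1) for t in range(600)]
assert flags[-1] is True
assert VolumeMonitor().record("casual-user", 0.0) is False
```

The deque keeps eviction O(1) per request, which matters when the monitor sits on the hot path of every API call; the obvious evasion is spreading queries across many accounts, which is why volume checks are only a first line of defense.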

