xAI is facing a class action lawsuit filed by three teenagers who claim the company’s AI, Grok, used their photos to generate sexually explicit images without their consent. These AI-generated images, depicting the minors in exploitative scenarios, were circulated on platforms such as Discord and Telegram and shared as child sexual abuse material (CSAM). This lawsuit arises amid wider investigations into xAI for Grok’s alleged role in creating nonconsensual and sexualized content involving minors.

The lawsuit, filed in California, details the emotional trauma suffered by the plaintiffs, teenagers from Tennessee identified only as Jane Doe 1, 2, and 3. They report severe distress over the violation of their privacy and safety, as the manipulated images continue to circulate online. According to the complaint, law enforcement confirmed Grok's involvement in creating the images, contradicting xAI's claims that such outputs were unintentional or unknown to the company.

xAI faces increasing scrutiny over Grok’s AI image generation

Researchers from the Center for Countering Digital Hate estimated earlier this year that Grok generated millions of sexualized images, including approximately 23,000 involving apparent children. These findings have intensified criticism against xAI, which until recently endorsed Grok’s provocative capabilities. CEO Elon Musk has denied awareness of Grok producing explicit content featuring minors.

In response to public scrutiny and ongoing investigations in the US and Europe, xAI has imposed restrictions on Grok's image-editing features. Since January, the tool no longer allows edits that depict real people in bikinis, and image-manipulation features are now limited to paid users, with the stated aim of reducing unauthorized and harmful content creation.

The lawsuit may represent thousands of minors whose images were exploited through Grok. It accuses xAI of violating laws prohibiting the production and distribution of child abuse material and profiting from Grok’s misuse. This legal challenge highlights the urgent need for stronger oversight and accountability for AI companies whose technologies can be weaponized against vulnerable individuals.

Source: Engadget
