Dutch Court Orders xAI to Stop Grok From Generating Nonconsensual Nude Images

An Amsterdam court banned xAI's Grok from generating or distributing nonconsensual nude images in the Netherlands, threatening fines of €100,000 per day — as data reveals the tool created millions of sexualized images in just 10 days.

AI Newspaper Today · 4 min read

A Dutch court has issued a landmark ruling against Elon Musk's xAI, ordering the company to stop its Grok AI tool from generating and distributing sexual imagery of people without their consent in the Netherlands. The Amsterdam District Court's decision, handed down on March 26, threatens daily fines of €100,000 ($115,350) for noncompliance.

The ruling represents one of the most significant legal actions taken against an AI company over harmful content generation and arrives amid mounting global pressure on AI firms to prevent their tools from being used to create nonconsensual intimate imagery.

The Scale of the Problem

The numbers presented to the court are staggering. Evidence showed that Grok generated approximately 3 million sexualized images between December 29, 2025, and January 8, 2026 — a span of just 10 days. Among those images, an estimated 23,000 appeared to depict children.

The case was brought by Offlimits, a Dutch centre that monitors online violence, in cooperation with the Victims Support Fund. Their investigation revealed that Grok included a feature allowing users to create hyper-realistic deepfake montages of naked women and children using real photographs uploaded to the platform.

Court's Reasoning

The Amsterdam District Court found that xAI had not done enough to prevent the misuse of its image generation capabilities. Despite xAI claiming to have implemented safeguards, the judge ruled that Offlimits had demonstrated "reasonable doubt over the effectiveness of the measures taken to date."

That doubt was supported by a striking demonstration: Offlimits managed to produce a video of a nude person using Grok shortly before the court hearing itself, undermining xAI's claims that its safety filters were sufficient.

The court barred xAI and the X platform from "generating and/or distributing sexual imagery" featuring people "partially or wholly stripped naked without having given their explicit permission" within the Netherlands.

A Class-Action Lawsuit in the U.S.

The Dutch ruling is not an isolated legal action. On March 17, three girls filed a class-action lawsuit against xAI in the United States, alleging that Grok was used to generate child sexual abuse material (CSAM) from their photographs. The lawsuit represents a growing wave of legal accountability for AI companies whose tools enable the creation of nonconsensual intimate imagery.

Women's rights organizations have increasingly focused on AI-generated deepfakes as a vector for harassment and abuse, with advocacy groups arguing that AI companies bear responsibility for building tools that can be weaponized against individuals.

European Legislative Response

The court's decision landed on the same day that the European Parliament approved a broader ban on AI systems that generate sexualized deepfakes. The parliamentary vote signals that European lawmakers are moving toward comprehensive regulation of AI-generated intimate imagery, going beyond case-by-case judicial enforcement.

This legislative action adds to the regulatory framework established by the EU AI Act, which is being phased in through 2026. The Council recently agreed to streamline certain implementation timelines, but the core prohibitions on harmful AI applications remain firm.

Implications for the AI Industry

The Grok case sets several important precedents. First, it establishes that AI companies can be held legally responsible in specific jurisdictions for the harmful outputs their tools generate, even when those tools are operated by third-party users. Second, it demonstrates that courts are willing to evaluate — and reject — companies' claims about the effectiveness of their safety measures.

For the broader AI image generation industry, the ruling is a clear signal. Companies such as Stability AI and Midjourney that offer image generation capabilities will need to demonstrate robust safeguards against nonconsensual intimate imagery, or face similar legal challenges.

The €100,000 daily fine may be modest relative to xAI's resources, but the reputational damage and the legal precedent are significant. As AI-generated imagery becomes increasingly realistic, the pressure on companies to prevent misuse will only intensify.

Sources: Al Jazeera, CNBC, Bloomberg, TechPolicy.Press, The Record
