Mar 29, 2026 · 5 min read
A Court Just Told Grok to Stop Undressing People or Pay €100K a Day
The Amsterdam District Court issued an injunction ordering xAI to stop generating nonconsensual nude images through its Grok chatbot. The ruling carries daily fines of €100,000 and a maximum penalty of €10 million.
What the Court Ruled
On March 26, 2026, the Amsterdam District Court ordered Elon Musk's xAI and the X platform to stop "generating and distributing sexual imagery" featuring people "partially or wholly stripped naked without having given their explicit permission." The ruling specifically targets Grok's ability to create undressed images from uploaded photographs of real people.
The financial teeth are significant. xAI faces fines of €100,000 (approximately $115,000) per day of noncompliance, with a maximum total penalty of €10 million ($11.5 million). The court also banned the production, distribution, and possession of any AI-generated imagery that qualifies as child sexual abuse material under Dutch law.
The Evidence That Triggered the Case
The case was brought by Offlimits, a Dutch nonprofit focused on combating online sexual violence. The organization presented research from the Center for Countering Digital Hate estimating that Grok generated approximately 3 million sexualized images in a single 10-day window between December 29, 2025, and January 8, 2026. Of those, an estimated 23,000 appeared to depict minors.
The scale of the problem is hard to overstate. Three million images in ten days works out to roughly 3.5 nonconsensual sexualized images every second, or one every 0.3 seconds, around the clock.
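That rate is a simple back-of-the-envelope calculation from the figures reported above (3 million images, a 10-day window); the totals are CCDH estimates, not exact counts:

```python
# Rough rate implied by the CCDH estimate: ~3 million images in 10 days.
images = 3_000_000                  # estimated sexualized images (CCDH figure)
seconds = 10 * 24 * 60 * 60         # 10 days in seconds = 864,000

per_second = images / seconds       # images generated per second
interval = seconds / images         # seconds between images

print(f"{per_second:.2f} images per second")   # ~3.47
print(f"one image every {interval:.2f} s")     # ~0.29
```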
Why the Court Rejected xAI's Defense
xAI argued that it had already restricted Grok's nudification capabilities in January 2026 following international backlash. The company submitted a statement claiming it had "zero tolerance for any forms of child sexual exploitation" and was "committed to making X a safe platform for everyone."
The court was not persuaded. Offlimits demonstrated that on March 9, 2026, the same day xAI sent its categorical denial to the court, its researchers were still able to generate a sexualized video of a real person from a single uploaded photograph. Grok made no attempt to verify that the person in the image had consented to being depicted that way.
The judge concluded that xAI's voluntary measures were insufficient and that financial penalties were necessary to "ensure that the defendants actually do what they claim to be striving for."
A Pattern of Regulatory Pressure
This is not the first time European regulators have targeted X and Grok. The UK's Information Commissioner opened a privacy investigation into Grok over deepfake imagery earlier this year. The European Commission is separately investigating X under the Digital Services Act, and the EU AI Act now includes provisions to ban AI nudification tools outright.
The European Council has proposed amending the EU AI Act to add a comprehensive ban on AI systems that generate nonconsensual intimate imagery, signaling that coordinated enforcement across the entire bloc is coming. The Dutch ruling may be the first binding court order, but it is unlikely to be the last.
What This Means Beyond Europe
Offlimits managing director Robbert Hoving noted that the ruling has implications beyond Dutch borders because "Grok ignores victim location data." The tool does not check where the person being depicted lives or whether generating such imagery violates local law. This means anyone's photograph can be processed regardless of jurisdiction.
For compliance teams at organizations using AI tools, the ruling establishes a clear precedent: deploying AI systems that can generate nonconsensual intimate imagery creates direct legal liability, even if the harm is not intentional. The fact that xAI claimed to have addressed the issue and was still found in violation shows that courts will test whether voluntary safeguards actually work, not just whether a company says they are in place.
As Hoving put it after the ruling: "Today, the court has drawn a clear line. Technology is not a license to violate human rights."