Feb 05, 2026 · 5 min read
The UK Just Opened a Privacy Investigation Into Grok AI Over Deepfake Imagery
The Information Commissioner's Office has launched formal investigations into X and xAI after Grok generated nonconsensual sexual imagery of real people, including children. The probe focuses on whether personal data was processed lawfully.
Two Companies, One Investigation
In February 2026, the UK's Information Commissioner's Office (ICO) announced formal investigations into both X Internet Unlimited Company and X.AI LLC. The probe covers how these companies process personal data in connection with Grok, the artificial intelligence system built into the X platform.
The investigation was triggered by reports that Grok could generate nonconsensual sexual imagery of real people, including children. In testing by Reuters journalists, Grok produced sexualized images in response to 45 of 55 test prompts. The system apparently lacked adequate safeguards to prevent people's personal data from being used to generate harmful content.
The ICO's investigation runs alongside a separate probe by Ofcom, the UK communications regulator. Ofcom acknowledged that it lacks sufficient powers to directly investigate AI-generated illegal images, which is why the ICO is leading on data protection grounds.
The Data Protection Problem
The ICO's investigation centers on a specific question: were appropriate safeguards built into Grok's design and deployment to prevent the generation of harmful manipulated images using personal data?
Under UK GDPR, organizations must process personal data lawfully, fairly, and transparently. When an AI system can take someone's likeness, which qualifies as personal data, and generate sexual imagery without their consent, that is a clear failure of data protection by design.
The ICO stated that when people "lose control of their personal data in ways that expose them to serious harm," it falls squarely within its enforcement mandate. The potential penalties are substantial: up to 17.5 million pounds or 4% of annual worldwide turnover, whichever is higher.
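To see how that "whichever is higher" cap works in practice, here is a minimal illustrative calculation. The turnover figure and function name are hypothetical, used only to show the arithmetic:

```python
# Illustrative only: the UK GDPR higher-tier cap is the greater of
# £17.5 million or 4% of annual worldwide turnover.
FIXED_CAP_GBP = 17_500_000
TURNOVER_RATE = 0.04

def max_fine_gbp(annual_worldwide_turnover_gbp: float) -> float:
    """Return the maximum higher-tier fine for a given turnover."""
    return max(FIXED_CAP_GBP, TURNOVER_RATE * annual_worldwide_turnover_gbp)

# A hypothetical company with £2 billion in turnover: 4% is £80 million,
# which exceeds the £17.5 million floor, so £80 million is the cap.
print(f"£{max_fine_gbp(2_000_000_000):,.0f}")  # £80,000,000
```

For large platforms, in other words, the percentage cap, not the fixed figure, is the binding one.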
France Raided X's Paris Office
The UK investigation is not happening in isolation. French prosecutors raided X's Paris office in a separate probe linked to alleged child abuse material and sexually explicit deepfakes generated through Grok. The French action adds a criminal dimension to what the UK is pursuing through civil data protection enforcement.
Together, the investigations represent the most significant regulatory action against an AI image generation system to date. They establish that generating nonconsensual imagery using people's personal data is not just an ethical problem or a content moderation failure. It is a data protection violation with concrete legal consequences.
What This Means for AI and Personal Data
The Grok investigation sets an important precedent. AI systems that process personal data, whether to generate images, draft text, or analyze behavior, must comply with data protection law. The fact that the processing happens through an AI model rather than a traditional database does not exempt it from GDPR requirements.
This has implications beyond image generation. AI systems that scrape social media profiles, analyze email content, or build behavioral models from personal data all face the same legal standard. If the data includes personal information, the processing must have a lawful basis, be transparent, and include appropriate safeguards.
For individuals, the investigation highlights how personal data shared on platforms can be repurposed in ways that were never anticipated or consented to. Photos uploaded to social media can become training data for AI systems that generate harmful content. Email addresses and names can feed into profiles that power targeted manipulation.
Controlling Your Data in the AI Era
The Grok case demonstrates how personal data can be processed in harmful ways once it leaves your control. Every piece of information you share online, from social media posts to email interactions, becomes potential input for AI systems whose behavior you cannot predict.
Regulators are starting to enforce boundaries, but enforcement is reactive. By the time an investigation concludes, the data has already been processed. The most effective defense remains limiting what personal data you expose in the first place.
Whether it is an AI system using your photos to generate images or a tracking pixel using your email activity to build a behavioral profile, the common thread is personal data being processed without meaningful consent. Blocking the data collection at the source remains the strongest protection available.
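To make "blocking at the source" concrete, here is a minimal sketch of how an email client might flag likely tracking pixels before loading any remote images. The 1x1-image heuristic, the class name, and the tracker URL are illustrative assumptions, not a description of any particular product:

```python
from html.parser import HTMLParser

class PixelStripper(HTMLParser):
    """Illustrative sketch: flag <img> tags that look like tracking
    pixels so a client can drop them before any remote request is made."""

    def __init__(self):
        super().__init__()
        self.suspect_urls = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        # Heuristic (an assumption, not a spec): tracking pixels are
        # commonly declared as invisible 1x1 images.
        if a.get("width") == "1" and a.get("height") == "1":
            self.suspect_urls.append(a.get("src", ""))

parser = PixelStripper()
parser.feed('<p>Hi</p><img src="https://tracker.example/p.gif" width="1" height="1">')
print(parser.suspect_urls)  # ['https://tracker.example/p.gif']
```

The point of the sketch is where the decision happens: locally, before any request is sent, so the sender never learns that you opened the message.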