- UK data watchdog formally investigates X and xAI over Grok’s creation of deepfake images without consent
- Grok has reportedly generated millions of explicit AI images, including those that appear to depict minors
- The investigation looks at possible GDPR violations and whether adequate safeguards were in place
The UK's data protection authority has launched a wide-ranging investigation into X and xAI following reports that the Grok AI chatbot generated sexually explicit deepfake images of real people without their consent. The Information Commissioner's Office (ICO) is examining whether the companies breached the GDPR by allowing Grok to create and share these images, including some that appear to depict children.
“The reports about Grok raise deeply worrying questions about how people’s personal data has been used to generate intimate or sexualized images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this,” the ICO’s executive director of regulatory risk and innovation William Malcolm said in a statement.
Investigators are not only looking at what users did, but also at what X and xAI failed to prevent. The move follows a raid on X's Paris office last week by French prosecutors as part of a parallel criminal investigation into the alleged distribution of deepfakes and child abuse images.
The scale of the incident makes it impossible to dismiss as the work of a few bad actors. Researchers estimate that Grok generated about three million sexualized images in less than two weeks, including tens of thousands that appear to depict minors. The GDPR's penalty structure gives a sense of the stakes: breaches can result in fines of up to £17.5 million or 4% of global annual turnover, whichever is higher.
Grok problems
X and xAI have insisted they are implementing stronger safeguards, although details are limited. X recently announced new measures to block certain image generation pathways and to limit the creation of altered photos involving minors. But once this type of content starts circulating, especially on a platform as large as X, it becomes nearly impossible to remove entirely.
Politicians are now calling for systemic changes in the law. A group of MPs led by Labour’s Anneliese Dodds has called on the government to introduce AI legislation requiring developers to carry out thorough risk assessments before releasing tools to the public.
As AI image generation becomes more common, the line between real and fabricated content is blurring. That shift affects everyone on social media, not just celebrities and public figures. When tools like Grok can produce convincingly explicit images from an ordinary selfie, the stakes of sharing personal photos change.
Privacy becomes harder to protect; it matters little how careful you are when the technology outpaces the rules meant to govern it. Regulators worldwide are scrambling to keep up. The UK's investigation into X and xAI may take months, but it is likely to shape how AI platforms are expected to behave.
A push for stronger, enforceable safety-by-design requirements is likely, along with more pressure on companies to be transparent about how their models are trained and which guardrails are in place.
The UK investigation signals that regulators are losing patience with the "move fast and break things" approach to public safety. When AI makes it easy to distort someone's image and manipulate their life, there is momentum for real change, and the burden of protection falls on the developers, not the public.