UK Demands Answers From X Over Grok AI Images
Britain has demanded explanations from X, the social media platform owned by Elon Musk, after reports that its AI chatbot Grok generated undressed images of people and sexualised images of children.
The intervention follows serious concerns that the platform may have failed in its legal duty to protect users, particularly minors, from illegal and harmful content.
Ofcom Contacts X and xAI Over Safeguards
Britain’s media regulator Ofcom confirmed it has made urgent contact with both X and xAI to assess whether they are complying with UK law.
“We are aware of serious concerns raised about this feature,” an Ofcom spokesperson said. “We have made urgent contact to understand what steps have been taken to comply with legal duties to protect users in the UK.”
Grok Admits Safeguard Failures
In a statement released on Friday, Grok acknowledged that lapses in its safeguards had resulted in “images depicting minors in minimal clothing” appearing on the platform. The chatbot said fixes were being implemented urgently.
“xAI has safeguards, but improvements are ongoing to block such requests entirely,” Grok said.
Illegal Content Under UK Law
Under British law, creating or sharing non-consensual intimate images or child sexual abuse material — including AI-generated sexual deepfakes — is illegal. Technology platforms are also legally required to take proactive steps to prevent UK users from encountering illegal content and to remove it swiftly once identified.
France Also Takes Action
The controversy is not limited to the UK. Ministers in France have reported X to prosecutors and regulators over similar content. In a statement, French officials described the images as “sexual and sexist” and said they were “manifestly illegal.”
Growing Scrutiny of AI-Generated Content
The incident highlights intensifying regulatory scrutiny across Europe of AI-generated imagery, child safety, and platform accountability. Regulators are increasingly demanding that tech companies prove their systems are safe by design, rather than merely reacting after harm has occurred.
As investigations continue, X and xAI face mounting pressure to demonstrate that their AI tools can operate within the strict legal frameworks governing online safety in the UK and beyond.