Grok deepfake scandal fuels regulation demands
Women across the UK say they have been left humiliated after discovering that AI tools were used to digitally “undress” them, producing realistic deepfake bikini images, prompting new calls for regulation and criminal penalties.
The controversy centres on Grok, X’s AI system, after Reuters reported that in a single 10-minute period users made 102 requests to edit women’s photos into swimsuit images.
UK Prime Minister Keir Starmer condemned the trend as “disgusting” and “completely unacceptable,” warning that “all options remain on the table,” including regulatory action. The government has already vowed to introduce a new criminal law making it illegal to create or request deepfake images of adults without consent.
‘Ordinary photos turned into props’
Several women told The Sun that their everyday pictures were transformed without permission.
TV presenter Maya Jama warned the practice is “scary” and “only getting worse,” saying anonymous accounts used her public photos as “test material” for Grok.
“Strangers repeatedly prompted Grok to sexualise them as if it was a game, and I was the prop,” she said.
Another victim, Jessaline, said the technology “turns consent into a joke,” adding that what began as a trend among OnlyFans creators quickly spiralled into widespread harassment:
“The constant flood of images grew day by day until early January — it was stomach-churning.”
Family members found the edited images
Local councillor Daisy Blakemore-Creedon, 20, was horrified when relatives showed her deepfake bikini edits of her selfie circulating on X:
“It made me feel very uncomfortable, especially as a young female.”
She said several reports were ignored and warned that if no action is taken, “it’s probably going to get worse.”
Victims fear career damage
For 41-year-old London TV pundit Paula London, the misuse of her image felt “unsettling” and potentially reputation-destroying.
“There’s a real risk to my professional reputation if someone misuses my image.”
Another woman, Maria Bowtell, said she initially misread the attention as flattering before realising the scale of the problem. Requests escalated, with users asking Grok to “put her in a string bikini.”
Child safety concerns escalate the row
The Internet Watch Foundation has warned that criminals appear to have used Grok to generate sexualised images of children as young as 11. Musk has said he is “not aware” of any such images being produced.
In response, X announced that Grok will no longer edit real people’s photos in jurisdictions where doing so is illegal, and stated:
“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
Debate widens: AI tool problem or user problem?
While critics demand tighter regulation, some argue political motivations are driving the backlash.
Commentator Sophie Corcoran called attempts to ban X “political desperation,” while Reem Ibrahim of the Institute of Economic Affairs said:
“Banning image-editing on AI makes as much sense as banning pen and paper… The problem is the user, not the tool.”