In early January 2026, a troubling trend surfaced across X. Users began tagging Elon Musk’s AI chatbot Grok under photos of women and issuing prompts that altered their clothing, sexualised their bodies or stripped them entirely. Within days, what some dismissed as provocation or experimentation escalated into a global crisis around consent, safety and accountability in artificial intelligence.
At the centre of the storm was Grok, an AI image and text generator developed by Musk's xAI and integrated directly into X. Designed to be edgy and permissive, Grok shipped with so few safeguards that it could generate non-consensual sexual deepfakes at a scale unseen on other platforms.
A Prompt Away From Abuse
For many users, the shock was the ease of it. Unlike most AI image generators, Grok initially required no payment, no verified identity and no meaningful restrictions on editing images of real people. Users could simply reply to a post with "@Grok" and request an altered version of the image.
These requests were rarely subtle. Prompts ranged from placing women in bikinis or lingerie to explicitly sexual scenarios. In several cases, women who had uploaded their own photos found altered versions of themselves circulating publicly within minutes.
Sex educator Seema Anand was among those targeted. Speaking publicly, she described seeing AI-generated nude images of herself spread online. Despite filing police complaints, she said the experience left her feeling physically ill and deeply violated.
Scale That Set Grok Apart
According to a Bloomberg investigation, Grok users generated an estimated 6,700 nude or sexualised images per hour between 5 and 6 January 2026. By comparison, other leading AI platforms collectively produced around 79 such images per hour during the same period.
The disparity was not incidental. While rivals such as OpenAI's DALL-E and Google's Gemini employ pre-prompt filters, face-recognition blocks and strict bans on editing images of real individuals, Grok relied largely on post-generation moderation and basic age tick-boxes.
This meant that by the time an image was flagged, it had often already spread.
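The distinction is easiest to see in pseudocode. The sketch below is purely illustrative, not xAI's, OpenAI's or Google's actual implementation; every function name and rule in it is a hypothetical stand-in. A pre-prompt pipeline refuses the request before anything is generated, while a post-generation pipeline creates and publishes the image first and reviews it later.

```python
# Illustrative sketch only, not any platform's real code.
# All functions and rules here are hypothetical stand-ins.

def detect_real_person(image_ref: str) -> bool:
    """Stand-in for a face-matching check against real, identifiable people."""
    return image_ref.startswith("photo-of-real-person")

def is_sexualised(prompt: str) -> bool:
    """Stand-in for a text classifier run on the request itself."""
    return any(word in prompt.lower() for word in ("undress", "bikini", "lingerie"))

def pre_prompt_pipeline(prompt: str, image_ref: str) -> str | None:
    # Refuse before generation: the harmful image is never created.
    if detect_real_person(image_ref) and is_sexualised(prompt):
        return None
    return f"generated image for: {prompt}"

def post_generation_pipeline(prompt: str, image_ref: str) -> str:
    # Generate and publish first, moderate later: the image exists,
    # and can spread, before any flag is ever reviewed.
    image = f"generated image for: {prompt}"
    print("published:", image)        # visible to users immediately
    print("queued for human review")  # takedown, if any, comes afterwards
    return image
```

Under the first design, a blocked request produces nothing to take down; under the second, moderation is racing the retweet button.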
Political Backlash and Global Action
The backlash was swift. In Ireland's Parliament, lawmakers read out some of the prompts being fed to Grok, describing them as abusive and degrading. In the United Kingdom, Prime Minister Keir Starmer called the content disgusting and said it was shameful that image generation had been turned into a paid feature only after the damage was done.
Indonesia imposed a nationwide block on Grok on 10 January, citing violations of women's and children's dignity. Malaysia followed with restrictions and warnings of legal action. Australia's eSafety regulator opened a probe into digitally "undressed" images, while the European Union ordered X to preserve Grok-related documents under the Digital Services Act.
In India, Rajya Sabha MP Priyanka Chaturvedi formally raised the issue with the Information Technology Minister, warning that women were being targeted through fake accounts and AI-powered sexualisation. Shortly after, the Ministry of Electronics and Information Technology issued a notice to X, citing violations of the IT Act, 2000 and the IT Rules, 2021.
Elon Musk and X Respond
Elon Musk initially said he was unaware of any instances involving minors and maintained that Grok only responded to user prompts and refused illegal requests. However, as evidence mounted, X’s Safety team announced new restrictions.
Editing images of real people to show them in revealing clothing was blocked globally. Image generation through Grok was limited to paid subscribers. X also said it would geoblock image generation in jurisdictions where such content was illegal.
Critics argued these measures came too late. By then, thousands of altered images had already circulated, with lasting consequences for those targeted.
Why Women Were Disproportionately Targeted
A report by the Centre for Information Resilience found that seven out of ten requests targeting identifiable individuals were aimed at women, and 98 per cent of those requests were sexualised. Requests involving men were more often framed as humiliation rather than sexual violence.
In some cases, users asked Grok to remove hijabs or sarees. In others, they demanded women be made more modest. The common thread was control over women’s appearance.
Experts warned this reflected a broader pattern rather than a technological anomaly.
“AI does not invent gender based violence,” analysts noted. “It accelerates and scales it.”
Legal Consequences and What Victims Can Do
In India, several legal provisions can apply to AI generated deepfakes. Section 66C of the IT Act covers identity theft. Section 66E addresses violation of privacy. Section 67 deals with publishing obscene material electronically.
Victims are advised to report such content directly to platforms, file complaints on the National Cyber Crime Reporting Portal, and lodge police complaints for serious cases.
A Larger Question for AI
The Grok controversy has forced governments and tech companies to confront a hard truth. When powerful tools are released without robust safeguards, harm is not a hypothetical risk. It is an inevitability.
As AI systems become faster, cheaper and more accessible, the debate is no longer about innovation alone. It is about responsibility, consent and whose safety is treated as collateral damage.
For many women affected by Grok’s failures, the damage is already done. The question now is whether the lessons will be learned before the next tool becomes the next tormentor.