“How is this allowed on a platform that is so important today? Why does it not moderate these posts? Why does it lack a team or technology to prevent this?” singer Shreya Ghoshal asked me in an interview last year.
Well before X (formerly Twitter) started allowing users to generate non-consensual images of people (not just women) in swimsuits, it allowed Artificial Intelligence (AI)-generated ads featuring celebrities to run without their consent. Shreya Ghoshal was not the only one who struggled to get these ads taken down: there were ads featuring all sorts of cricketers and celebrities, including the Ambani couple, Anant Ambani and Radhika Merchant.
Two years ago, the issue of non-consensual AI-generated content hit the headlines when a morphed video of actress Rashmika Mandanna surfaced. Until recently, this was largely a celebrity problem, and it led celebrities, starting with Anil Kapoor, to protect their “personality rights” in court so that such content becomes easier to take down.
The swimsuit trend began on X around the end of 2025, days after the social media platform allowed users to edit photos posted by others using its AI tool Grok. What happened next was predictable.
Generative AI and Personal Rights
This one change has converted an issue faced largely by celebrities into one that anyone can face, and by putting the tool in everyone’s hands, it has vastly expanded the scale of the problem.
It goes without saying that the disproportionate impact is on women, but we’ll soon see that it will also affect business leaders, politicians, and a much larger subset of people. These images won’t stay confined to X; they’ll make their way to the darker underbelly of social media: Telegram and WhatsApp groups.
X’s response is predictable as well: Elon Musk, who owns X, said that "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content."
This is a smart sleight of hand. First, in most countries, such content may not be illegal if it isn’t pornographic in nature. Second, while a user can complain, X may take its own sweet time taking the content down, by which point the hurt would already have been caused.
Third, Musk is squarely equating the creation of the content with the uploading of it, implying that Grok is not responsible: only the user is.
This also underlines a deeper problem with Generative AI: the outputs are based on photos provided to the platform by the user, the data it is trained on, and the prompt written by the user. Who is liable for the output is an issue that isn’t settled in law. Musk is clearly avoiding addressing this question by focusing on publishing.
In 2024, a controversy around AI content and bias made the news when someone asked Google’s Bard (now Gemini) whether Indian PM Narendra Modi is a fascist, and then published the response on X. I argued then that Bard could not be held responsible for its response because it was a private chat. If anything, the user was responsible for broadcasting that point of view by posting it on X.
In the current case of non-consensual, modified images, the content is clearly being published by Grok, and not by the user in question.
If there is illegal content published by Grok in response to a user request, then Grok must be held accountable.
Addressing the Regulations
There also isn’t a regulatory vacuum: Section 3(1)(b) of India’s IT Rules states that platforms like X “shall make reasonable efforts” to prevent users from publishing content that is, among other things, “obscene, pornographic, paedophilic, invasive of another’s privacy including bodily privacy, insulting or harassing on the basis of gender, racially or ethnically objectionable.”
If nothing else, this kind of output is harassing on the basis of gender, and while one may question whether it is invasive of privacy given that the output is AI-generated, it is potentially obscene. In that case, X is actively enabling the publishing of this content via its own AI service, and not making “reasonable efforts” to prevent it; quite the opposite.
Secondly, since its AI service is publishing such content, Grok and the company behind it, which also runs X, are potentially liable for it. It is true that AI companies cannot prevent every possible output once a model has been trained on certain data, but that does not absolve them of responsibility for what their services publish.
India’s Ministry of IT has sent X a notice telling it to remove this obscene and sexually explicit material, saying that platforms “cannot escape their duty or responsibility simply by pleading safe harbour (protections)”.
Safe harbour protections are provided to platforms that allow others to post content: they act as “intermediaries” and mere conduits. The company that runs X is not an intermediary here: its AI service is actively publishing this content, so safe harbour protections cannot apply to it. X also says in its terms of service that “All Content, including anything referenced therein, is the sole responsibility of the person who posted, generated, inputted, or created such Content”, suggesting that the user who gave the prompt to Grok is responsible for the output. However, a company's Terms of Service cannot override the law of the land.
X could have blocked these obscene outputs by design, as Bard did after the Narendra Modi incident, but it appears to have decided to wait and see what could go wrong, even though that was entirely predictable. This reeks of irresponsible behaviour.
That the Ministry is pleading with X to follow the law instead of enforcing it underlines the problem with governance in India today: if the entity on the other side is powerful, like Elon Musk is, the government avoids enforcement. It fails people like Shreya Ghoshal, who has been trying for over a year and a half to get morphed images taken down.
Digital India's Downward Curve
Things are obviously going to get worse from here: India’s Digital Personal Data Protection Act actively removes privacy protections against AI manipulation by stating that its protections do not apply to personal data you have made publicly available online. This means that if you publish your photo on your X or Instagram account, you have no privacy protections as far as that photo is concerned.
X’s Terms make it worse: they say that when users choose “to submit, input, create, generate, post or display” content, they give the platform rights to “adapt, modify, publish” that content, including “transforming” it, as AI models do. Under the clause on intellectual property rights, the Terms add that users grant the service a licence to make the content available to the rest of the world and, very strangely, “to let others do the same”. That last bit isn’t in the terms and conditions of Instagram and Facebook.
Therefore, if you post your photo on X, there is no protection for you: no privacy, no intellectual property rights, and no protection against the manipulation and transformation of your photos. And the government of India is unwilling to act on an apparently flagrant violation of the law beyond sending notices.
What we’re seeing today is an indication of what happens when platforms integrate AI creation tools into public user feeds. The liability cannot rest with the user alone when platforms let loose an AI service and allow it to publish without adequate checks and balances.
Safe harbour protections were never meant to apply to publishing, and consent should never be optional. Until India’s regulatory framework, especially the Digital Personal Data Protection Act, reflects this reality, these problems will continue.
(Nikhil Pahwa is the editor of MediaNama and is at x.com/nixxin. This is an opinion piece and the views expressed are the author's own. The Quint neither endorses nor is responsible for the same.)
