Shortly after actor and model Poonam Pandey appeared on India's Got Latent, YouTuber Samay Raina's show on YouTube, a video of the two went viral.
The video showed the actor sharing a kiss with Raina and gathered over four lakh views on a single post.
But this never happened.
While The Quint's WebQoof team was discussing the harmful use cases of videos generated using artificial intelligence (AI) tools, Meta's platforms pushed advertisements showing how to "make any two people kiss" and "undress people".
This not only makes it easier to create AI-generated content but also promotes such behaviour by making these apps easily accessible.
Even politicians are not exempt. For example, The Quint has seen AI-generated videos of Prime Minister Narendra Modi kissing Italian Prime Minister Giorgia Meloni.
Celebrities, however, are not the only ones falling prey to these videos. It is getting increasingly easy to morph visuals of two individuals, even ones who may never have met, to make it look like they were together. These videos are later used to blackmail victims and extort money from them.
These images are first shared by Facebook and Instagram pages that post celebrity gossip, where people fall for them, making the fake visuals go viral. What's surprising is that this content stays up on these pages because it does not flout Facebook's Community Standards for posts.
So, How Are These Videos Created?
The answer is Artificial Intelligence. The growth of generative AI has given the layperson the ability to manipulate or even create, from scratch, visuals that do not exist in reality. One may not even need to have extensive knowledge of code or software to do this because of AI-based applications that allow people to “make any photo kiss and hug.”
Seems like harmless fun, right?
This technology takes a grim turn too, as the same apps advertise themselves as platforms to “erase” and “remove” clothing from pictures, posing a serious threat to privacy and safety on the internet.
We identified five such applications. Their Facebook pages target people with advertisements for manipulating separate photos of people to show them kissing each other, and also promote a 'nudifier' feature, which lets users erase articles of clothing from any image of their choice.
It is not difficult to find these apps. In fact, they’ll come right to you on Facebook.
Over the course of January, our team came across ads for three of the applications examined in this report. The ads began appearing after we debunked the claims seen earlier in this story.
When we looked at the pages for these applications, we saw that they were predominantly run by admins from Vietnam and China, targeting people across the globe.
Cumulatively, the ads run by these pages had reached over 1.7 lakh Facebook users and had been shown to people across all age groups.
Two of these pages ran ads showing that their app could “remove anything” that a model wore, with one of them reaching over 15,000 people in the European Union alone.
These pages, and similar ones, also run ads for apps which allow users to create content resembling Raina and Pandey’s viral video.
This ad, by one such page, reached more than 64,000 people. At the time of writing this report, the page had 30 active ads and 170 inactive ones, which, together, had reached nearly 1.45 lakh Facebook users above the age of 18.
When we went through Meta’s advertisement standards for adult nudity and sexual activity, we found that none of these ads seemed to violate their policy.
At the top of the page, the policy says it specifies "additional protections beyond what is prohibited in the Community Standards on Adult Nudity and Sexual Activity."
As seen earlier, none of this content inherently violates Facebook’s Community Standards.
The ease of creating such content led to a wave of pages and accounts that make visuals of celebrities spending time with or kissing each other. These pages also used the ‘erase’ feature that these apps provide to share images of women in skimpy outfits.
We spoke with Amitabh Kumar, founder of Contrails AI, an AI content moderation and detection platform. On ways to improve ad content moderation, Kumar said, "platforms can deploy AI agents to scan all the ads instead of using older methods like classifiers, slur lists and human moderation".
None of the people in these pictures are known to have provided consent for using their likenesses in this manner.
Most of these pages use celebrities to gain followers and traction on social media. However, this does not mean that these morphed visuals are limited to celebrities.
A simple search on any web browser or social media platform makes it very easy for bad actors to collect pictures of a layperson and use any features these apps provide as they see fit.
Just this week, a 24-year-old man in Delhi blackmailed a woman by threatening to leak AI-generated nudes bearing her likeness and extorted money from her.
He reportedly used an “AI-enabled app to manipulate the victim’s profile picture,” the police said, as per this Indian Express report.
In December 2024, a teen in Hyderabad faced similar harassment when a man from Meerut, Uttar Pradesh, used her Instagram profile picture to create “a deepfake image” to harass and extort her.
Another such case happened in Uttar Pradesh’s Gorakhpur in November 2024, when a minor was targeted with similar morphed visuals made using AI-based apps.
Much to the victims' relief, timely police action and investigation identified the culprits in all three cases, and action was taken against them. However, these are not the only cases where AI-based apps have been used to blackmail women, and such cases are likely to continue if these apps and features are freely advertised on social media platforms.
While one could report these visuals on the platforms they appear on, Meta's Oversight Board admitted in July 2024 that similar reports had slipped through its systems, after two such cases, involving an Indian and an American public figure, went viral on its platforms.
A study by the National Commission for Women found that 54.8 percent of women have experienced cyber harassment. “Online harassment can lead to significant mental distress, anxiety, and fear among women, making them feel unsafe and vulnerable,” the study read.
Another study, by the Institute of Development Studies, found that 16 to 58 percent of women globally had experienced technology-facilitated gender-based violence, with Meta being the top platform for incidents of online gender-based violence (OGBV).
Last year, The Quint worked on a year-long project to document instances of gendered disinformation, with a focus on female Muslim journalists in India. We found that these disinformation campaigns were used to target female journalists and delegitimise their work by attacking their gender and religious identity. While these disinformation campaigns aren't new, the easy access to such applications will only exacerbate the problem.
Talking about possible solutions to the problem, Kumar suggests implementing safety by design, training AI models to reject the specific prompts that such apps rely on. He also recommends that platforms verify ad uploaders and that 'nudify' apps be removed from the App Store and Play Store.
Kumar also recommends maintaining logs of user actions to track the origin and use of features in such activities, allowing users to report misuse anonymously, and rewarding contributions that help identify and stop bad actors.
We reached out to Meta, whose representative told us,
“We have clear rules against adult sexual exploitation, including non-consensual intimate imagery, and we don’t allow ads for these nudifying apps. We remove these ads whenever we become aware of them, disable the accounts responsible and block links to websites hosting these apps. We know these exploitative apps will continue to try to get around our defenses, so we’re working to improve our technology to prevent them from coming back.”
A Meta representative
They added that all ads must adhere to their Community and additional Advertising Standards, "which also prohibit ads showing nudity, sexual activity or sexually suggestive imagery."
Acknowledging that this content presents a challenge, they said, "Our enforcement isn’t perfect, and both machines and people make mistakes - which is why we also give people ways to report any ad they think may break our rules."
"This is also a very adversarial space, where the people behind these apps constantly evolve their tactics to avoid enforcement - which is why we’re continuing to improve our technology to prevent them coming back."
(Note: This report was edited to add Meta's response.)