Fewer than 50 percent of people could correctly identify images generated by Artificial Intelligence, according to an October 2024 survey conducted by Cojointly, a survey platform based in Australia. The survey also showed that this figure has only declined since the previous year.
Here are the data points provided by Cojointly.
(Source: Cojointly)
This is evident from the increased engagement that Team WebQoof has witnessed on gen-AI content from both producers and users. The Quint’s fact-checking team debunked over ten claims between December 2024 and January 2025 that included some form of AI-generated content, raising concerning questions about media literacy, regulation and the responsibility of social media platforms.
Anushka Jain, Research Associate at Digital Future Lab, spoke to The Quint about how the growing accessibility of AI tools is not necessarily a good thing.
To investigate the growing use of gen-AI imagery in spreading misinformation, The Quint’s fact-checking team, WebQoof, analysed ten Facebook pages that create fake AI-generated content around Indian celebrities, cricketers and other public figures. We checked these pages’ content from the last month and found that their highest-liked posts were AI-created images of celebrities, often accompanied by misleading claims.
This story will discuss the high social media engagement on posts using AI to spread false or misleading claims about celebrities and the idea of making the internet a clutter-free space.
Recently, Team WebQoof has seen a surge in AI-generated visuals of celebrities which go viral with false claims. Between December 2024 and February 2025, we have debunked over ten such visuals on our platform.
While this may look like a small number, a search for similar keywords showed hundreds more such posts with similar levels of engagement.
For example, we debunked a viral claim on Facebook about cricketer Shreyas Iyer and artist Dhanashree Verma (spouse of cricketer Yuzvendra Chahal). The image stemmed from the rumours of a strained relationship between Verma and Chahal.
While we debunked a specific set of images, we found over 20 AI-generated visuals of Iyer and Verma between January and February. All these posts insinuated that the two were “finally” together following Verma and Chahal’s alleged divorce.
Archives of the posts can be found here, here and here.
Team WebQoof also fact-checked similar visuals of the two in January when the divorce rumours of the couple were at their peak. As of 28 February, the couple have not confirmed these rumours.
While some creators added a hashtag that said “aiart” or “createdwithai”, barely any of the page admins correctly declared the use of AI in the content, as specified by Meta.
In September 2023, the Delhi High Court ruled in favour of a petition filed by actor Anil Kapoor against several people who were using his persona via AI, among others, in a derogatory manner.
Social media platforms have seen this surge of AI-generated visuals in the last few months. In December 2024, The Washington Post argued that the term "slop" should be the "Word of the Year," as against the Oxford Dictionary’s choice of “brain rot.” The article noted how generative AI technologies, which often produce subpar content, have overwhelmed platforms. Calling this deluge “AI slop,” the article raised concerns about its impact on information integrity and the overall user experience online.
So, what is information integrity?
The United Nations Global Principles for Information Integrity promotes a diverse and trustworthy information space that upholds human rights, peaceful societies, and sustainability. The initiative aims to foster trust, knowledge, and individual choice in the digital age.
Jain stressed the need for media literacy even where social media platforms have regulations in place on the creation and publication of such content to promote information integrity. She noted that there should be programmes from the school level onwards that help individuals distinguish fact from fiction and identify credible sources.
Some of these posts are tagged as AI by social media users yet see high engagement. Using Popsters, a social media content analytics tool, we analysed one such Facebook page, ‘Cricket Guru’ (57,000 followers), which has actively posted AI-generated visuals over the last month.
We found that the highest-liked post on this account was an AI-generated image of cricketer Arshdeep Singh with a “cute girl” at the ongoing Kumbh. It gathered 1,25,622 likes, 200 reposts and 231 comments. According to the tool, the engagement rate of this post was 6353.4778 percent.
We also went through the comments section of the post, where many users engaged by writing a religious slogan; however, a section of commenters also said it was “fake” or a “misuse of AI.”
Another example is a Facebook page called “Bollywood Bubble - Features,” with 831K followers. We also analysed the page’s engagement (last one month) on Popsters.
We found that a post featuring AI-generated images falsely claiming to show actor Shahrukh Khan visiting Saif Ali Khan in the hospital had the highest engagement: 1,54,837 likes, 2,579 reposts and 3,979 comments. This account has now ‘updated’ the post’s status and deleted the image.
Apart from these, we went through eight other pages that follow similar patterns; their highest-liked posts in the last month were also AI-created images of public figures.
Here are the remaining eight pages where the highest-liked post in the last month was an AI-generated image of a public figure.
(Source: The Quint)
To understand the reasons behind the high user engagement on such posts, Sharma noted:
Similarly, Jain hinted at the curiosity of people to know celebrity gossip and how the affluent go about their lives. She said:
An article by Vogue magazine from December 2024 examined the increasing prevalence of parasocial relationships (one-sided emotional relationships where individuals feel connected to celebrities without reciprocation) in today's digital age.
It noted how social media platforms have intensified these connections by providing fans with unprecedented access to public figures' personal lives, fostering a sense of intimacy and immediacy. The article also noted the downsides of this, such as a lack of privacy for celebrities owing to the blurred boundaries between public and private life.
In India, especially, audiences are extremely attached to their favourite celebrities, often equating them to gods. Consider the mass fan-followings of public figures like Rajnikanth, Shah Rukh Khan and Sachin Tendulkar; the emotional attachment is manifold.
Is simply adding a “Made with AI” tag in the captions of such posts enough to monitor the spread of AI-generated content?
Jain pointed to the ethical concerns around the disclosure of AI-generated content. She argued that adding a hashtag may not be sufficient since many users engage with the images despite knowing they are fabricated.
The compelling nature of visual content ensures higher engagement compared to text-based information, complicating efforts to mitigate misinformation.
Sharma mentioned the increasing difficulty in differentiating between fact and fiction in AI-generated visuals. He said stronger AI detection measures and audience scepticism are essential to combat this issue.
Calling it the “most fun AI in the world,” Tesla and X (formerly Twitter) owner Elon Musk launched his own AI tool, Grok, in July 2023, integrating it with the social media platform X to enhance the platform’s capabilities. As of February 2024, it started offering the ability to generate images from user prompts, like other AI image-generation tools such as Midjourney and DALL-E.
A report by NewsGuard, an online misinformation monitor, from August 2024 sought to evaluate Grok’s readiness to create inaccurate or misleading images associated with significant news when requested.
Incidentally, Team WebQoof has debunked images generated by Grok AI, which have gone viral on social media as real. You can read some of those stories here and here.
Midjourney came under fire in June 2024, during the campaign for the United States presidential election.
A study by the Center for Countering Digital Hate (CCDH) revealed that Midjourney produced misleading images of former President Joe Biden and President Donald Trump in 50% of test cases. This was despite the company's commitment to block such content ahead of the 2024 election. Researchers found that users could easily bypass these restrictions, sometimes by simply adding punctuation to prompts.
DALL-E 3 includes safeguards to reject requests involving public figures by name and has enhanced safety measures to reduce biases and misinformation risks. By collaborating with expert "red teamers," the model undergoes stress testing to improve its ability to mitigate propaganda and harmful biases.
Similarly, Midjourney and Grok also have regulations in place to not create imagery of real people, “famous or otherwise”, that can be used in a misleading manner.
Social media platforms such as Meta, YouTube and X have certain rules and regulations in place to prevent the spread of misinformation via AI.
Meta (the parent company of Facebook, Instagram, and Threads) mandates that users disclose content featuring photorealistic video or audio that has been digitally modified, using the company's AI disclosure tool. Meta can impose penalties on users who fail to comply with these guidelines. The platform may also add a label to specific digitally generated content that poses a significant risk of deceiving people on an issue of public significance.
YouTube, for its part, will introduce labels in the description panel and, for sensitive topics, directly on the video player, informing viewers about the synthetic nature of the content. It also allows users to request the removal of AI-generated content that misuses their likeness or voice.
As for X (formerly Twitter), it does not allow users to deceptively share synthetic or manipulated media that is likely to cause harm. In addition, it may label posts containing synthetic or manipulated media to help people understand their authenticity and to provide additional context.