
As the war between Iran and Israel/USA entered its 26th day, an analysis of fact-check stories published by Team WebQoof found that nearly 50 percent of claims carried visuals created using Artificial Intelligence (AI) technology.
While this is not the first conflict of the AI era, it comes at a time when generating AI visuals has become accessible and affordable enough to fuel a narrative war built on mis- and disinformation. It is also a time when fact-checkers and journalists can use AI to detect and debunk such content.
With the quality of AI-generated imagery steadily improving, its influence on future wars could be unprecedented. It could be used by state actors to prolong wars, or by bad actors to manipulate public sentiment and destabilise stock markets.
Fact-checking and data analysis company NewsGuard tracked 50 false claims in the first twenty-five days of the war and found that they collectively amassed hundreds of millions of views, averaging about two new false narratives per day.
These claims largely push a pro-Iran agenda, increasingly rely on AI-generated imagery, and often attempt to dismiss credible reporting as fake, even though only a small portion originated from Iranian state media.
Among the viral claims was an image purportedly showing Iran’s Supreme Leader Ayatollah Ali Khamenei being retrieved from rubble after alleged US-Israeli strikes on Tehran, following statements by US President Donald Trump and Israeli Prime Minister Benjamin Netanyahu claiming he had been killed.
While the image may appear realistic at first glance, it contains several inconsistencies. Team WebQoof’s analysis found that the scene appeared staged, with elements such as the chair's placement and the turban appearing arranged artificially.
Here is a closer look at the visual.
(Source: Altered by The Quint)
AI-detection tool Hive Moderation revealed that the image was 90 percent AI-generated.
Here are the results by Hive Moderation.
(Source: Hive Moderation)
Social media was also abuzz with rumours that Netanyahu had been killed by Iran amid the ongoing conflict. Users claimed that he had made almost no public appearances and had not addressed the media in the first twelve days of the conflict.
This led to users sharing an image of Netanyahu being retrieved by several people from rubble, with social media pages claiming that he was dead. However, Team WebQoof found that the image was AI-generated.
Here are the results by Sightengine.
(Source: Sightengine)
Another category of widely circulated AI-generated content featured visuals of buildings and open spaces being bombed or attacked.
These videos were shared with claims such as: Iran had struck US bases near the Burj Khalifa in Dubai, launched hypersonic missile attacks on Tel Aviv, carried out strikes in Bahrain, captured US troops, and even targeted an Indian carrier in the Strait of Hormuz.
Some clips also falsely showed a US soldier crying after an alleged Iranian strike.
Here is a preview of the story.
A deepfake video of journalist Palki Sharma was shared to falsely claim that an Indian-origin man, spying for Israeli intelligence agencies, was arrested in Bahrain. Additionally, another AI-generated video purportedly showing a report by Al Jazeera falsely claimed that Netanyahu was killed inside a bunker.
Here are the results by Hive Moderation.
Here are the results by Deepfake-O-Meter.
Here is a preview of the story.
To debunk fake news generated using artificial intelligence, journalists rely on AI-detection tools to verify visuals. However, these tools come with significant limitations and cannot be fully trusted.
Team WebQoof uses tools such as Deepfake-O-Meter, Hive Moderation, AI or Not, Sightengine, Was It AI, Contrails AI and others.
Contrails, a tool developed by a Bengaluru-based start-up, performs better at detecting deepfake videos but falters at flagging other AI-generated content (AIGC).
Other tools, such as Deepfake-O-Meter, also fail to debunk viral videos showing alleged missile strikes or bombings.
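Because these detectors return probabilistic scores rather than definitive verdicts, newsroom workflows typically treat mid-range scores as a cue for manual verification instead of auto-labelling content. Here is a minimal illustrative sketch of that triage logic; the thresholds and labels are assumptions for illustration, not the actual workflow of any tool named above (services like Hive Moderation and Sightengine each have their own APIs and score formats).

```python
def classify_detection_score(score: float,
                             ai_threshold: float = 0.90,
                             review_threshold: float = 0.50) -> str:
    """Map a 0-1 'likely AI-generated' confidence score to a verdict.

    Detection scores are probabilistic, so anything between the two
    (illustrative) thresholds is routed to human review rather than
    being auto-labelled either way.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score >= ai_threshold:
        return "likely AI-generated"
    if score >= review_threshold:
        return "needs human review"
    return "likely authentic"

# Example: a tool reporting an image as 90 percent AI-generated
print(classify_detection_score(0.90))  # likely AI-generated
print(classify_detection_score(0.62))  # needs human review
print(classify_detection_score(0.10))  # likely authentic
```

The wide "needs human review" band reflects the limitations discussed above: scores near the middle carry too much uncertainty to support a verdict on their own.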
In her doctoral research at the University of Montreal’s Department of Computer Science and Operations Research, Dorsaf Sallami argued that AI fake-news detection tools often appear highly accurate in controlled environments, but that their real-world performance is far weaker. These tools rely on patterns and probabilities learned from training data, meaning their outputs reflect existing biases and gaps rather than objective truth.
Here is the preview of the paper.
(Source: Tech Explorer)
Another key limitation is that these systems function more like “mirrors” of their training data than true fact-checkers.
This makes them vulnerable to bias, incomplete datasets, and changing information landscapes, leading to misclassification of both real and false content, especially when dealing with nuanced or evolving news stories.
In practice, these tools struggle with reliability, context, and adaptability, making them insufficient as standalone solutions for combating misinformation.
Adding to this conversation, a research paper from the Al Jazeera Media Institute, exploring the complex relationship between artificial intelligence and wartime information during the Israel-Gaza, Russia-Ukraine, and Lake Chad conflicts, also examined the limitations of AI in verification.
Here is the preview of the paper.
(Source: Al Jazeera Media Institute)
These tools also fail to grasp context, such as evolving symbols or slang, and can produce translation errors, requiring constant human oversight.
Speaking to The Quint in March 2025, Anushka Jain, Research Associate at Digital Future Lab, noted:
Research by Clemson University’s Media Forensic Hub, as reported by The Times, highlighted how pro-Iran disinformation has been amplified through coordinated networks of social media accounts posing as ordinary users from the UK and Ireland.
Operating across platforms such as X, Instagram, and Bluesky, these accounts adopted local identities to blend seamlessly into Western online spaces and build credibility.
To appear authentic, many of these profiles initially engaged with domestic issues like Scottish independence or Irish politics.
Despite their convincing personas, investigators found clues, such as Farsi text, Iranian-linked metadata, and coordinated posting patterns, that exposed the networks. These tactics helped Iranian-linked actors covertly shape discourse, amplify false claims, and exploit trust in “local” voices at scale.
Additionally, US President Trump accused Iran of actively deploying artificial intelligence to generate and disseminate misleading content, particularly fake images and videos tied to the conflict.
A report by the BBC also pointed to the role of so-called “engagement farmers” who capitalise on conflict by sharing sensational and misleading content for profit. In response, X announced on 4 March that it would suspend creators from its revenue-sharing programme for 90 days if they fail to disclose AI-generated content related to armed conflicts.
Meanwhile, research by Al Jazeera notes a growing reliance on AI-powered “superbots” by both state and non-state actors.
The impact of such tactics has been evident in viral misinformation during the conflict. False claims about the death of Mojtaba Khamenei circulated widely, while his social media profile image was flagged as 99.8 percent AI-generated by detection tools.
This is Mojtaba's profile picture on his X page.
Here are the results by Hive Moderation.
Similarly, rumours surrounding Benjamin Netanyahu, including claims that a press conference video showed him with “six fingers,” were debunked by fact-checkers, confirming the footage was authentic.
At the institutional level, the Press Information Bureau’s (PIB) fact-check unit has been actively countering misinformation during the West Asia crisis.
While its focus has largely been on content involving Indian officials and military generals, it has also debunked viral AI-generated claims, such as a fabricated video alleging an Iranian attack on an Indian vessel.
Here is the preview of the post.
Together, these developments underscore how AI is redefining modern warfare, not just on the battlefield, but in the information ecosystem, where influence, perception, and narrative control have become critical weapons.
With the use of AI, wars can be prolonged, markets can be manipulated, and perceptions of reality can be altered.
Don’t trust everything you see online: In times of crisis, social media is flooded with information, but not all of it is true. Bad actors often exploit such situations, spreading AI-generated visuals and false claims to gain attention or profit.
Be wary of “verified” accounts: A blue tick doesn’t guarantee credibility. Many accounts posing as real-time news sources may still share misleading or false information.
Question “larger-than-life” visuals: If an image or video looks overly dramatic, hyper-realistic, or sensational, pause and verify it before believing or sharing it.
Use AI-detection tools: Several free tools let you upload images and check whether they’re AI-generated or manipulated.
Follow credible fact-checkers: Stay connected to reliable fact-checking organisations that regularly debunk viral misinformation during crises.
Leverage fact-checking helplines: Many fact-checking groups offer WhatsApp chatbots or contact channels where you can send suspicious content for verification.
Verify before you share: If you’re unsure about something, don’t forward it. Always cross-check information before amplifying it.
(Not convinced of a post or information you came across online and want it verified? Send us the details on WhatsApp at 9540511818, or e-mail it to us at webqoof@thequint.com and we'll fact-check it for you. You can also read all our fact-checked stories here.)