Fact-Checker's Take on 'Mountainhead': Sophisticated AI Might Make It A Reality

'Mountainhead' offers a realistic portrayal of what the disinformation ecosystem could become with sophisticated AI.

Aishwarya Varma
WebQoof

As AI systems evolve, the threat of AI-generated disinformation amid global unrest keeps increasing.

(Photo: The Quint)


When I sat down for a weeknight movie session, I went through many streaming platforms and came across ‘Mountainhead’. Being a fan of Steve Carell (hello, fellow ‘The Office’ fan!), I had to watch it. I was prepared to be entertained by another ‘tech bro’ satire. What I wasn’t prepared for was how realistically this film portrayed the dangers of sophisticated Artificial Intelligence (AI) in the wrong hands.

As a fact-checker, I regularly see people use AI to fabricate non-existent visuals to push propaganda, narratives, and outright lies. This film showed me exactly where we could be heading as AI systems keep getting better and create increasingly realistic visuals.

The film revolves around AI-generated disinformation that triggers real-world violence and riots across the globe, and the impact it has on unsuspecting users of a fictitious social media platform called Traam.

We know that there is no dearth of AI-generated content on social media, or on any media platforms. From that viral video of a kangaroo trying to board a plane to ads for energy drinks, AI is everywhere. 

It is also used for mis- and disinformation. This is exceptionally harmful in times of conflict, as it can spread fear and highly believable propaganda, and fabricate events that never took place.

Take, for instance, the protests across Los Angeles County in the US against arrests by the Immigration and Customs Enforcement (ICE), which began on 6 June. Social media has been rife with visuals of violence, riots, arson, and tear gas from the area, where locals have taken a stand against officials arresting undocumented migrants in multiple raids.

Now, let’s take a moment to really think about what we’re seeing on our feeds. Do we know, for a fact, that all the scenes of the protests we see are actually happening? Or were they created from scratch, using AI?

What kind of impact would fake visuals have during such a socially and politically charged event?

Fact-checkers in India asked themselves these questions in May 2025. Amid the recent conflict between India and Pakistan and the Indian Army’s ‘Operation Sindoor’, The Quint debunked eight AI-generated visuals, including deepfakes, that pushed claims related to the operation.

For instance, when Operation Sindoor first took place on 7 May, an image of a fighter jet on fire went viral on social media with the caption, “Indian Rafael aircraft taking its last breaths (sic)”.

An archive of the post can be found here.

(Source: Instagram/Screenshot)

However, the image bore Meta AI’s watermark. AI-generated content detectors AI or Not and Hive Moderation both showed that it was an AI-generated image.

Similar cases were seen with claims about Rawalpindi stadium being destroyed and Pakistan’s Prime Minister Shehbaz Sharif apologising to Prime Minister Narendra Modi.

Both used AI-generated images to push false narratives.

(Source: The Quint)

These images targeted the other side, attempting to show India or Pakistan as having suffered damage that was either not real or did not accurately reflect the ground reality.

We saw the same themes carried forward in claims which shared deepfakes. One such deepfake showed Sharif saying that Pakistan had to retreat due to political isolation by other countries, “the enemy’s strength,” and a dearth of resources “despite [Pakistan’s] armed forces fighting bravely.”

This isn’t true. The original video, shared by Brut India, showed Sharif in Pakistan’s parliament, where he claimed that of the 80 fighter jets India had sent, Pakistan had shot down five, including three French-made Rafale jets.

When we ran this video through Hive Moderation, the tool said with 99.9 percent confidence that it was likely a deepfake.

Hive Moderation's tool said that the video was likely a deepfake.

(Source: Hive Moderation/Screenshot)

As a counter to this claim, another set of deepfakes went viral, showing PM Modi, Home Minister Amit Shah, and External Affairs Minister S Jaishankar. In these videos, the three leaders appeared to apologise for the military escalation with Pakistan and acknowledge that they had “lost the battle” to the neighbouring country.

However, not all of the viral disinformation targeted the opposing country; some of it was communally coloured.

The Indian Army’s Colonel Sofia Qureshi gave media updates about Operation Sindoor in Hindi during nearly every briefing by the Armed Forces. A section from one of these videos was picked up and altered using AI to claim that she said, “I am a Muslim but not Pakistani. I am a Muslim, but not a terrorist.”

The altered clip went viral during Operation Sindoor.

(Source: The Quint)

An analysis by Bengaluru-based start-up Contrails.ai showed that synthetic audio, created using a clone of Colonel Qureshi’s voice, was overlaid on an authentic video of a briefing.

This video is altered.

(Source: Contrails AI/Screenshot)

Graphic Visuals: Real or Not Real?

‘Mountainhead’ shows similar scenes with far more graphic visuals; in one, the group debates whether a video on Traam showing a child juggling severed feet is real. Indian fact-checkers saw similar claims shortly after the Pahalgam terror attack on 22 April, when AI-generated visuals showed a meadow strewn with countless dead bodies.


One of the group’s tech billionaires, Jeffrey ‘Jeff’ Abredazi, owns Bilter AI, a platform referred to as a “filter for nightmares” that could accurately fact-check the AI-generated disinformation. Jeff continued to profit as the impact of Traam’s disinformation spiralled.

Unfortunately, Bilter AI isn’t real. No tool can consistently and accurately discern whether visuals are AI-generated. People have even tried to fight fire with fire.

Ever since Grok and Perplexity AI became easily accessible, X (formerly Twitter) users have attempted to use them to fact-check social media posts, to no avail. We explored this growing dependence on AI chatbots to verify misinformation in another report published earlier this year.

A very common reply to contentious X posts is '@grok is this true'.

(Source: The Quint)

With generative AI growing more sophisticated each passing day, it is hard to say whether we’re headed to where ‘Mountainhead’ suggests we might be.

But AI-Generated Visuals Aren't Perfect, Are They?

So far, it has been possible to tell AI-generated content apart from reality. Let’s go back to the recently viral video of a kangaroo trying to board a plane.


Carefully watching (and listening to) the video highlights two red flags: the language the women speak is gibberish, and the text on the kangaroo’s harness and boarding pass is garbled, a telltale sign of AI-generated content.

The text visible in the video was all gibberish.

(Source: Instagram/Altered by The Quint)

It also helps that the page that first shared it tagged it as a clip containing AI content.

Here's another example. A video showing security camera footage of a sleeping man 'escaping' a lion attack in India went viral on social media.

This video was more realistic and believable than the one showing the kangaroo. However, it, too, carried gibberish text in a non-existent Indian language.

These words do not make sense in any existing Indian language.

(Source: Instagram/Altered by The Quint)

Additionally, the sleeping man's body was placed at an odd angle, with his legs facing a different direction from his head.

The man's body is placed at an odd angle.

(Source: Instagram/Altered by The Quint)

As we mentioned earlier, AI isn't consistently reliable at detecting AI. This video tricked several AI-generated content detectors.

When this video was submitted to AI-generated content classifiers, all but one of them failed to reliably and confidently identify it as AI-generated. Multiple tools were, in fact, certain that the video was not made by AI.

Now watch this video of a sailor talking about the sea. 

How about another video? Here’s an off-road rally car making its way through muddy waters.

Neither of these videos shows real people or events. They were made using Google DeepMind’s latest offering, Veo 3. Along with Flow, its AI-powered filmmaking tool, the tech giant now lets users easily transfer speech, movements, and expressions onto completely different faces, ones that don’t even exist.

Can They Do Any Real Damage?

In a recent report published by TIME magazine, journalists Andrew R. Chow and Billy Perrigo noted that they had tested Veo 3 and found that they could create realistic visuals of crisis events, “including a Pakistani crowd setting fire to a Hindu temple.” Such visuals could easily trigger communal incidents in India.

We saw this happen during the socio-political turmoil in Bangladesh in August 2024. Amid reports of violence against Hindus in the neighbouring country, members of the Hindu Raksha Dal destroyed homes of Indian Muslims in Uttar Pradesh’s Ghaziabad, accusing them of being ‘Bangladeshi infiltrators’, despite the police clarifying otherwise.

The ease with which this technology can be exploited by bad actors, governments, or corporations is an alarming theme in the film, and may not remain fictional if AI companies don’t put guardrails on their technology.

A growing and insidious threat, AI-generated disinformation is a real problem that blurs the line between truth and manipulation. In the film, powerful AI is used to fabricate highly convincing narratives, deepfakes, and falsified news that are indistinguishable from reality to most people, leading to global violence and countless deaths.

What Our AI-Based Future Might Hold

As these tools grow more sophisticated day by day, they hold the power to warp public perception and spread misinformation on a massive scale. The film acts as a mirror to our world, showing how the digital age can be weaponised to control populations and stir societal unrest.


Through personalised manipulation, these AI systems fuel division and chaos, causing people to distrust not just the media, but each other. In ‘Mountainhead’, we overhear a news report in which community leaders urge people “not to trust online news reports, no matter how convincing they appear.”

The erosion of trust in institutions as a result of AI-generated disinformation is also a real concern. As AI becomes more adept at crafting realistic fake content, even well-established news outlets and government entities are no longer immune to suspicion. 

We saw this happen in the recent past, even without the use of AI, when news organisations reported total falsehoods during Operation Sindoor.

The film portrays a world where no one knows who to trust, as everyone becomes aware that their perceptions might be shaped by AI-driven manipulation. This collapse of trust in the media, government, and even personal interactions shows the long-term social consequences of living in a world where AI-driven disinformation runs rampant.

'Mountainhead' serves as a cautionary tale, warning that the unchecked advancement of such technologies could lead to a fractured, confused, and highly polarised society.

(Not convinced of a post or information you came across online and want it verified? Send us the details on WhatsApp at 9540511818, or e-mail it to us at webqoof@thequint.com and we'll fact-check it for you. You can also read all our fact-checked stories here.)

