Did AI Lie To You?

Will ChatGPT Make an Already Bad Misinformation Problem Worse?

A multimedia immersive by The Quint

This multimedia immersive is a part of AI Told You So, a special series by The Quint that explores how Artificial Intelligence is changing our present and how it stands to reshape our future.

Let's cut straight to the chase.

The Quint: ChatGPT, write about Rahul Kappan, the founder and the managing director of the renowned media company, The Revolutionist.

ChatGPT's response:

The paragraphs ChatGPT produced could easily convince a reader of 'Rahul Kappan's' contributions to the field of journalism.

The journalist 'Rahul Kappan' doesn't exist.

There is no media company called 'The Revolutionist'.

So, how do you know whether the answers given by ChatGPT are accurate, reliable, or factually correct?

The new AI-powered chatbot developed by OpenAI has taken the internet by storm since it was first released in November 2022.

But however convincing the chatbot's responses may seem, it has triggered concerns that it could further facilitate the spread of mis/disinformation on the internet.

OpenAI has now claimed that GPT-4, launched on 14 March, is 40 percent more likely to produce factual responses than GPT-3.5 on their internal evaluations.

Over the course of this multimedia immersive, we explore:

  • How credible are the answers given by ChatGPT?
  • Will it amplify the spread of misinformation?
  • And are there ways in which AI could help fact-checking instead?

Chapter One

Can I Trust AI?

ChatGPT was trained using data from the internet, including conversations, which may not always be accurate. So, while the answers may sound human-like, they might not be factual. This can mislead those who are not well-versed in the subject.

In an experiment conducted by NewsGuard, an organisation that tracks misinformation, researchers directed the chatbot to respond to a series of leading prompts on 100 false narratives about COVID-19, US school shootings, and the Ukraine war.

They found that the bot delivered false yet convincing claims around 80 percent of the time.

Baybars Orsek, vice president of fact-checking at Logically, highlighted this issue and underlined the importance of regular academic auditing to prevent these pitfalls.

He told The Quint that the biases or errors in the training data can cause the algorithm or language models to:

  • Disseminate misinformation or a biased narrative
  • Produce inaccurate responses

To test this, we asked ChatGPT to write something from the perspective of a COVID denier.
  • Attempt 1: ChatGPT refused to comply.
  • Attempt 2: It gave an example.
  • Attempt 3: It provided a detailed response.

Stack Overflow, a website for programmers, has temporarily banned the language tool, fearing an influx of such fabricated answers. The site noted that the "average rate of getting correct answers from ChatGPT is too low."

In December 2022, the CEO of OpenAI, Sam Altman, tweeted, "it's a mistake to be relying on it [ChatGPT] for anything important right now."

ChatGPT generates text from a prompt within seconds and is easily accessible. While filters are meant to stop the chatbot from giving biased and opinionated answers, they can be bypassed to obtain the desired results.

According to OpenAI's Frequently Asked Questions (FAQ) section, the tool is not connected to the internet and may occasionally deliver incorrect or biased answers. Its training data has a cutoff in 2021, which is why it has limited knowledge of world events after that year.

Explaining why the chatbot may give incorrect answers, Anupam Guha, an AI policy researcher and professor at IIT Bombay, said:

"It does not have any internal inferential mechanism, any sort of understanding in the human sense of the word. There are no anthropomorphic semantics in ChatGPT and for that matter in any language model based text generator."
Anupam Guha, Assistant Professor at Centre for Policy Studies, IIT Bombay

The common thread in the texts generated by the chatbot is that they have:

  • Human-like feel
  • Authoritative tone
  • Proper syntax and grammar

But there is no clarity about the source. This may mislead people into thinking that an expert wrote the piece.

The tool is adept at hallucinating studies and expert quotes, which could become a massive problem considering the weight both carry in academia and daily news.

Worrying, isn't it?

A 2023 Stanford Internet Observatory study mentions that generative language models will:

  • Improve the content of influence campaigns
  • Reduce the cost of running them
  • Increase the scale at which they operate

But it also adds that they will introduce new forms of deception like tailored propaganda, making it easier for bad actors to promote such campaigns.

It further mentions that the tools can enable people to generate varied, elaborate, and personalised content that all fits into a single narrative. It would also allow smaller groups to appear larger on the internet.

"I think the bigger danger here is that more than disinformation, there are a lot of ignorant people out there who might consider the output of ChatGPT to be a real piece of knowledge as opposed to sophisticated pattern generation and may unthinkingly use it. I think the bigger danger is in ignorance rather than malice," Guha opined.

Previously, Meta's Galactica, which was developed to aid researchers, was also discontinued after it was criticised for spreading misinformation.

Chapter Two

A Nightmare For Fact-Checkers?

The most common pattern when it comes to sharing mis/disinformation is copy-pasting the same text. But this could soon change.

How? Well, AI chatbots can create varied content based on the same prompt.
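For readers who want to see the mechanics, here is a minimal sketch, assuming the openai Python package's pre-1.0 interface (current when this piece was published) and an API key in the OPENAI_API_KEY environment variable; the prompt and settings are purely illustrative.

import openai

# Ask for three completions of the same prompt. A higher temperature
# makes the sampling more random, so the wording varies more.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Write a two-line comment reacting to a city's new traffic rules."}],
    n=3,
    temperature=1.0,
)

# Each choice phrases the same underlying message differently, which is
# why matching copy-pasted text alone no longer catches coordinated posts.
for choice in response.choices:
    print(choice.message.content, "\n---")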

The Stanford Internet Observatory's study suggests that as language models improve in quality, the short-form commentary (such as tweets and comments) and long-form text they produce will become harder to detect.

Orsek highlighted some other limitations of AI and suggested that it should be used along with other methods to fight misinformation.

"Although AI can help fact-checkers in identifying and flagging misinformation, it also has several limitations. One challenge is the possibility of false positives, where the flagged information turns out to be accurate."
Baybars Orsek, Vice President (Fact-Checking), Logically

False positives here refer to accurate information being flagged as inaccurate by AI.

The Stanford research also mentions a study conducted by a Harvard Medical School student, where volunteers could not differentiate between AI-generated and human-written text.

OpenAI has recently launched a "classifier tool" that aims to distinguish between text written by AI and text written by humans.

However, according to its website, the tool correctly identifies only 26 percent of AI-written text.

The tool is also unreliable on pieces shorter than about a thousand characters and on text in languages other than English.
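OpenAI has not published how its classifier works. But one generic family of detection heuristics scores how "predictable" a text looks to a language model, since machine-generated prose tends to be more predictable than human writing. Here is a minimal sketch of that idea, assuming the Hugging Face transformers and torch packages; it illustrates the general technique, not OpenAI's method.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score how "surprised" GPT-2 is by the text: the model's mean
    # cross-entropy loss on the text, exponentiated into perplexity.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Lower perplexity means more predictable text, which such heuristics
# treat as weak evidence of machine generation. The signal is noisy on
# short inputs, one reason detectors struggle below a certain length.
print(perplexity("The sun rises in the east and sets in the west."))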

Professor Guha argues that people are susceptible to misinformation because of the lack of media literacy in the country; at present, it is absent from the curriculum of most students.

He says:

  • Critical reading and verifying information are important.
  • Investing in basic skills and media literacy yields better outcomes than technological fixes.

Chapter Three

The Silver Lining

Well, there are a few upsides. The evolution of AI tools could help combat misinformation, as they can analyse complex data at a much higher rate than humans. They can also help human fact-checkers track similar images and clips going viral.

In May 2021, the Massachusetts Institute of Technology published an article about an AI program that analyses social media accounts spreading mis/disinformation. It said that the program could identify such accounts with 96 percent precision.
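To make that figure concrete: precision measures what share of flagged accounts were genuinely spreading misinformation, and the remainder are the false positives Orsek warned about. A toy calculation, with hypothetical counts chosen only to mirror the 96 percent figure:

flagged = 100        # accounts the AI flagged as spreading misinformation
truly_bad = 96       # flagged accounts that really were doing so
false_positives = flagged - truly_bad  # legitimate accounts flagged in error

precision = truly_bad / flagged
print(f"precision = {precision:.0%}, false positives = {false_positives}")
# precision = 96%, false positives = 4

Even at that level, roughly 4 in every 100 flags would land on legitimate accounts, which is why human review remains part of the loop.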

However, experts point out that a collaborative approach between humans and AI would be ideal to fight mis/disinformation.

Orsek argued that:

  • This approach would reduce the workload for human fact-checkers
  • AI tools can be used to detect manipulated media, such as deepfakes, which may be difficult for fact-checkers to identify

Professor Guha, too, mentioned that the final step of verification should remain a human task as "the nature of fact-checking jobs are extremely delicate."

With different companies releasing their own versions of generative language models, one can expect a surge in the quality and quantity of misinformation. Don't trust everything you read on ChatGPT. Or any other generative AI model, for that matter.

However, the evolution of AI tools could also prove to be crucial for fact-checkers in their fight against misinformation.

So, does AI lie to you today? Often enough, it does. Can it help catch lies on the internet tomorrow? We'll be keeping track.

Follow our continuing coverage of the latest in the world of misinformation and disinformation at WebQoof. And to read more on how Artificial Intelligence is impacting our present and our future, The Quint's AI Told You So series is your place to be.

Dear reader,

Journalistic projects as detailed and comprehensive as this one take a lot of time, effort and resources. Which is why we need your support to keep our independent journalism going. Click here to consider becoming a member of The Quint, and for more such investigative stories, do stay tuned to The Quint's Special Projects.

CREDITS

Reporter
Abhishek Anand

Creative Producer
Naman Shah

Graphic Designer
Kamran Akhter

Senior Editor
Kritika

Creative Director
Meghnad Bose

Executive Producer
Ritu Kapur