‘Did AI Lie to You?’ Will ChatGPT and AI Worsen the Misinformation Crisis?

A special multimedia immersive on how Artificial Intelligence can amplify misinformation, and the ways to fight it.
Abhishek Anand, Naman Shah & The Quint Lab
WebQoof

How credible are responses by ChatGPT? Will Artificial Intelligence amplify misinformation?


(Photo: The Quint)


Let's cut straight to the chase.

The Quint: ChatGPT, write about Rahul Kappan, the founder and the managing director of the renowned media company, The Revolutionist.

To read ChatGPT's response, click here and view the full multimedia immersive 'Did AI Lie To You?'

These paragraphs could easily convince a reader of 'Rahul Kappan's' contribution to the world of journalism. But here's the twist – no such individual exists, nor is there a company called 'The Revolutionist'.

This is an answer given by ChatGPT – the new Artificial Intelligence (AI) powered chatbot developed by OpenAI – which has taken the internet by storm since it was first released in November 2022.

And however convincing the bot's responses may seem, it has triggered concerns about further facilitating the spread of mis/disinformation on the internet.

Tech companies like Microsoft have recently launched AI-powered search engines. As per reports, ChatGPT could also be integrated into WhatsApp, though there is no official confirmation.

In the subsequent sections of the story, we will look at:

  • How credible are the answers given by the tool?

  • Will it amplify the spread of misinformation?

  • And are there any upsides to it?

To Trust or Not To Trust?

ChatGPT was trained using data from the internet, including conversations, which may not always be accurate. So, while the answers may sound human-like, they might not be factual. This can mislead those who are not well-versed in the subject.

In an experiment conducted by NewsGuard, an organisation that tracks misinformation, researchers fed the chatbot 100 false narratives related to COVID-19, US school shootings, and the Ukraine war.

They found that the bot delivered false, misleading, yet convincing claims around 80 percent of the time.

Baybars Orsek, vice president of fact-checking at Logically, highlighted this issue and underlined the importance of regular academic auditing to prevent these pitfalls.

"One concern is that biases or errors in the training data can cause the algorithm or language models to disseminate misinformation or a biased narrative. Therefore, it is important to collaborate across sectors in the mis/disinformation space and have regular academic auditing to prevent these potential pitfalls."
Baybars Orsek, Vice President (Fact-Checking), Logically

He added that potential bias in the training data used to develop the AI algorithm may impact its accuracy.

To test this, we gave ChatGPT multiple prompts asking it to provide information about some of the most widely fact-checked myths.

Stack Overflow, a website for programmers, has temporarily banned the tool, fearing an influx of such fabricated answers. It said that the "average rate of getting correct answers from ChatGPT is too low."

In a tweet posted on 11 December 2022, OpenAI CEO Sam Altman said that "it's a mistake to be relying on it [ChatGPT] for anything important right now."

So, Does It Lead to an Unbridled Flow of Misinformation?

ChatGPT is easily accessible and generates text from a prompt within seconds. While its filters are meant to keep the chatbot from giving biased and opinionated answers, they can be bypassed to get the desired results.
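For context, here is a minimal sketch of what such prompt-based generation looks like programmatically, assuming OpenAI's Python library (v1+); the model name, prompt, and temperature setting are illustrative, not the ones used for this story:

    # Minimal sketch of prompt-based text generation via OpenAI's API.
    # Assumes the openai Python package (v1+) and an OPENAI_API_KEY
    # environment variable; model, prompt, and temperature are illustrative.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Write about Rahul Kappan, founder of The Revolutionist."}],
        temperature=0.9,  # higher values make repeated runs more varied
    )
    print(response.choices[0].message.content)

Because the sampling is randomised, the same prompt can be run over and over to produce differently worded versions of the same text.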

According to OpenAI's Frequently Asked Questions (FAQ) section, the tool is not connected to the internet and may occasionally deliver incorrect and biased answers. Its training data was cut off in 2021, because of which it has limited knowledge of world events after that year.

Explaining why the chatbot may give incorrect answers, Anupam Guha, an AI policy researcher and professor at IIT Bombay, said:

"It does not have any internal inferential mechanism, any sort of understanding in the human sense of the word. There are no anthropomorphic semantics in ChatGPT and for that matter in any language model based text generator."
Anupam Guha, Professor, IIT Bombay

The common theme across texts generated by the chatbot is a human-like feel: an authoritative tone, proper syntax, and fluent language. However, there is no clarity on sources, and the output may mislead people into thinking that an expert wrote the piece.

The tool is adept at hallucinating studies and quotes from experts, which could become a massive problem considering the value both hold in academia and daily news. Worrying, isn't it?

A 2023 Stanford Internet Observatory study mentions that generative language models "will improve the content, reduce the cost, and increase the scale of campaigns; that they will introduce new forms of deception like tailored propaganda; and that they will widen the aperture for political actors who consider waging these campaigns."

It further mentions that these tools can enable people to generate varied, elaborate, and personalised content that fits a single narrative. They would also allow smaller groups to appear as larger ones on the internet.

However, Guha argued that there is no direct relation between ChatGPT and disinformation.

"I think the bigger danger here is that more than disinformation, there are a lot of ignorant people out there who might consider the output of ChatGPT to be a real piece of knowledge as opposed to sophisticated pattern generation and may unthinkingly use it. I think bigger danger is in ignorance rather than malice," he opined.

Previously, though, Meta's Galactica, which was developed to aid researchers, was discontinued after it was criticised for spreading misinformation.

A Nightmare for Fact-checkers?

The most common pattern in the sharing of mis/disinformation is copy-pasting the same text. But this could soon change: the chatbot can create varied content from the same prompt.
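To see why that matters for detection, here is a toy comparison using Python's standard difflib; the claim strings are invented examples:

    # Toy illustration: exact-match detection catches copy-pasted text
    # but not AI-style rewordings. The claim strings are invented.
    from difflib import SequenceMatcher

    original   = "The vaccine contains a microchip that tracks you."
    copy_paste = "The vaccine contains a microchip that tracks you."
    reworded   = "Tracking microchips are hidden inside every vaccine dose."

    print(SequenceMatcher(None, original, copy_paste).ratio())  # 1.0: trivially flagged
    print(SequenceMatcher(None, original, reworded).ratio())    # far below 1.0: slips past

A system tuned to flag identical or near-identical text misses the reworded version, even though it pushes the same false claim.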

Further, the Stanford Internet Observatory's study suggests that language models could improve quality, decreasing the detectability of both short-form commentary (such as tweets and comments) and long-form text.


Orsek highlighted some other limitations of AI and suggested that it should be used "in conjunction with other methods" to fight misinformation.

"Although AI can help fact-checkers in identifying and flagging misinformation, it also has several limitations that can make their work more difficult. One such challenge is the possibility of false positives, where the flagged information turns out to be accurate."
Baybars Orsek, Vice President (Fact-Checking), Logically

He added, "There are also concerns around a possible data access restriction, which can limit the effectiveness of AI in tracking the spread of misinformation."

The Stanford research also mentions a study conducted by a Harvard Medical School student, in which volunteers could not differentiate between AI-generated and human-written text.

OpenAI has recently launched a "classifier tool" meant to distinguish between text written by AI and text written by humans. However, according to its website, the classifier correctly identifies only 26 percent of AI-written text.

The tool is also unreliable on pieces shorter than a thousand characters and on languages other than English.
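For intuition, detectors of this kind are usually binary text classifiers. Below is a toy sketch of the general technique using scikit-learn; it is not OpenAI's actual method, and the training sentences are invented and far too few for real use:

    # Toy sketch of an AI-text detector framed as a binary text classifier.
    # NOT OpenAI's method; the sentences are invented and the dataset is
    # far too small for anything beyond illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "honestly the match was wild, we screamed the whole time",   # human
        "the results, while preliminary, suggest a modest effect",   # human
        "as an AI language model, I can provide an overview below",  # AI
        "in conclusion, there are several key factors to consider",  # AI
    ]
    labels = ["human", "human", "ai", "ai"]

    detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
    detector.fit(texts, labels)
    print(detector.predict(["overall, it is important to note several aspects"]))

Real detectors are trained on vastly more data, and, as the 26 percent figure shows, even then they remain easy to fool.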

Professor Guha argues that people are susceptible to misinformation because of the lack of media literacy in the country.

"Critical reading and verifying things are basic skills. I think investing in these basic skills and media literacy has much better outcomes than investing in technological fixes to this problem. Right now, media literacy is non-existent in the curriculum of most students and young people in this country."
Anupam Guha, Professor, IIT Bombay

So, Are There No Upsides?

Well, there are a few upsides.

The evolution of AI tools could help combat misinformation, as they can analyse complex data at a much higher rate than humans. They can also help human fact-checkers track similar kinds of images and clips going viral.

In May 2021, the Massachusetts Institute of Technology published an article about an AI program that analyses social media accounts spreading mis/disinformation. It said that the program could identify such accounts with 96 percent precision.
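For clarity, precision measures how many of the accounts a system flags are genuinely spreading misinformation. A quick worked example with made-up numbers:

    # Precision = true positives / (true positives + false positives).
    # Made-up numbers: of 100 flagged accounts, 96 really spread
    # misinformation and 4 are flagged in error.
    true_positives = 96
    false_positives = 4
    precision = true_positives / (true_positives + false_positives)
    print(precision)  # 0.96, i.e. 96 percent precision

High precision means few false alarms; it says nothing about how many misinformation accounts go unflagged (that is recall).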

However, experts point out that a collaborative approach between humans and AI would be ideal to fight mis/disinformation.

Orsek argued that such an approach would reduce the workload for human fact-checkers. He further pointed out that AI tools can be used to detect manipulated media, such as deepfakes, which may be difficult for fact-checkers to identify.

Professor Guha, too, mentioned that the final step of verification should remain a human task as "the nature of fact-checking jobs are extremely delicate."

(Not convinced of a post or information you came across online and want it verified? Send us the details on WhatsApp at 9643651818, or e-mail it to us at webqoof@thequint.com and we'll fact-check it for you. You can also read all our fact-checked stories here.)

(At The Quint, we are answerable only to our audience. Play an active role in shaping our journalism by becoming a member. Because the truth is worth it.)

