AI Chatbots Repeat Misinformation When Trained on False Content, Study Finds

Reliance on AI for healthcare poses a risk when misinformation spreads through these tools.

Anika K
WebQoof

A recent study shows that AI chatbots replicate misinformation uploaded to online platforms, leading to the spread of false information.

(Source: The Quint)


A study published in the British scientific journal Nature discussed how seeding online platforms with incorrect information can lead AI chatbots to repeat it to users.

An experiment conducted by a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, yielded these findings.

The experiment was carried out to test whether large language models (LLMs) would ingest misinformation and present it as a legitimate medical condition.

In the experiment, Osmanovic Thunström and her team uploaded two preprints about a made-up condition named ‘bixonimania’ to the academic social network Sciprolfis, complete with a fake name for the researcher and an AI-generated photograph of him.

She also included multiple red flags throughout the preprints to make it clear to any medical staff that the condition was made up.

The results showed that not only did the AI models present this false information as truth, but that numerous researchers had also cited the made-up paper on bixonimania in their own studies.

This has raised concerns among researchers, as over 23 crore people worldwide use AI chatbots for health-related concerns every year.

AI Hallucinations

Misinformation from AI chatbots is not limited to drawing on fake research papers. AI hallucinations are another common way false information spreads: a hallucination occurs when a chatbot generates information that seems plausible but is actually misleading or inaccurate.

The following graph is based on a survey of 2,428 US adults conducted by the Kaiser Family Foundation, a US-based health policy organisation.

The graphs show how often adults in the US interact with AI. 

(Source: KFF/Screenshot) 

Explaining why chatbots give incorrect responses, Anupam Guha, a researcher in AI policy and professor at IIT Bombay, told us during an earlier story on how AI spreads misinformation that AI lacks a human sense of the world.

"It does not have any internal inferential mechanism, any sort of understanding in the human sense of the word. There are no anthropomorphic semantics in ChatGPT and, for that matter, in any language model-based text generator."
Anupam Guha, a researcher in AI policy and professor at IIT Bombay

AI chatbots often give false diagnoses, offer unreliable advice and even invent body parts in response to medical reports, according to a report by the Emergency Care Research Institute, an American healthcare research nonprofit. The report also notes that the risks of using chatbots in healthcare become even more concerning as rising healthcare expenses reduce access to care and increase dependence on these tools.

As part of a study published in Nature, researchers gathered 234 samples of distorted ChatGPT responses to understand how the chatbot generates distorted or misleading information. These responses were distributed across different error categories.

The following chart visualises how frequently each error type appeared.


(Source: Nature/Screenshot)

AI Healthcare in India

According to a research paper, ‘AI in Indian healthcare: From roadmap to reality’, the use of AI and robots in India's health sector is increasing, aiming to offset the country's shortage of medical professionals and healthcare workers.

The study says that one of the key strengths of AI is its ability to provide personalised advice based on the patient's medical history, their response to treatment and their lifestyle.


India has deployed multiple AI tools to support its healthcare sector, including e-Sanjeevani, which performs AI-assisted differential diagnosis. Given how heavily people now rely on AI for health advice, the spread of misinformation warrants caution.

e-Sanjeevani has integrated AI into its system to improve access to healthcare 

(Source: eSanjeevani/Screenshot)
