
A study published in the British scientific journal Nature discussed how planting incorrect information online can lead AI chatbots to repeat it to users.
An experiment conducted by a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, yielded these findings.
The experiment tested whether large language models (LLMs) would ingest misinformation and present it as a legitimate medical condition. Thunström uploaded a preprint describing a fabricated condition, "Bixonimania", and included multiple red flags throughout it to make clear to any medical staff that the condition was made up.
The results showed that not only did these AI models present the false information as fact, but numerous researchers went on to cite the made-up paper on Bixonimania in their own studies.
This has raised concerns among researchers, as over 23 crore people worldwide use AI chatbots for health-related concerns every year.
Misinformation from AI chatbots is not limited to ingesting fake research papers. AI hallucination, another common way false information spreads, occurs when chatbots generate information that seems plausible but is actually misleading or inaccurate.
The following graph is based on a survey conducted by the Kaiser Family Foundation, a US-based health policy organisation. The survey sample included 2,428 US adults.
The graph shows how often adults in the US interact with AI.
(Source: KFF/Screenshot)
Explaining why chatbots give incorrect responses, Anupam Guha, an AI policy researcher and professor at IIT Bombay, told us in an earlier story on how AI spreads misinformation that AI lacks a human sense of the world.
AI chatbots often give false diagnoses, offer unreliable advice and even invent body parts in response to medical reports, according to a report by the Emergency Care Research Institute, an American healthcare research nonprofit. The report also notes that the risks of using chatbots in healthcare become even more concerning as rising healthcare costs reduce access to care and increase dependence on these tools.
As part of a study published in Nature, researchers gathered 234 samples of distorted ChatGPT responses to understand how ChatGPT generates distorted or misleading information, and classified the responses into different error categories.
The following chart visualises how frequently each error type appeared.
(Source: Nature/Screenshot)
According to a research paper, "AI in Indian healthcare: From roadmap to reality", the use of AI and robots in India's health sector is increasing, with the aim of addressing the country's shortage of medical professionals and healthcare workers.
The study says that one of the key strengths of AI is its ability to provide personalised advice based on the patient's medical history, their response to treatment and their lifestyle.
India has deployed multiple AI tools to support the country's healthcare sector, including e-Sanjeevani, which performs AI-assisted differential diagnosis. In the current context, where people depend on AI for health advice to a large extent, the spread of misinformation warrants caution.
e-Sanjeevani has integrated AI into its system to improve access to healthcare
(Source: eSanjeevani/Screenshot)