At first glance, an article about Meta’s AI chatbot that was published on Patna-based news portal Biharprabha reads like a regular 600-word news report that delves into the history of the AI bot, the controversy surrounding its responses, and the concerns raised, in particular, by Dr Emily Bender, a “leading AI ethics researcher”.
“The release of BlenderBot 3 demonstrates that Meta continues to struggle with addressing biases and misinformation within its AI models,” Dr Emily Bender is quoted as saying in the article titled ‘Meta’s AI Bot Goes Rogue, Spews Offensive Content’ published on 21 February.
But it turns out that the real Dr Emily Bender never actually said it. The entire quote was fabricated and misattributed to her in an article generated using an AI tool: Gemini, Google's large language model (LLM).
Confirming this to The Quint, Dr Bender said she "had no record of talking to any journalist from Biharprabha."
While the fake quote was removed from the article soon after Dr Bender reached out to the editor of Biharprabha, what may seem like a gaffe is actually part of a larger, more worrying trend of made-up quotes being attributed to real people in AI-generated articles published online.
It further underscores how academics like Dr Bender, and indeed anyone with a media presence, could lose control over how they are represented in public.
"The folks at Biharprabha probably don't know who I am, but the LLM they used produced my name and something people might believe I said (I didn't)," Dr Bender wrote in a post on X.
This article is a part of 'AI Told You So', a special series by The Quint that explores how Artificial Intelligence is changing our present and how it stands to shape our future. Click here to view the full collection of stories in the series.
AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call?
1. What Actually Happened
How did Dr Bender find out about the fake, AI-generated quote in the first place?
"I do a lot of media work, and don't always hear back from the journalists I've spoken with when pieces go to press. So, I periodically do a search on my name on news aggregator sites (Google, Bing). One of those turned up the Biharprabha story," she told The Quint.
Besides misattributing two fake quotes to her, the AI-generated article calls Dr Bender a "leading AI ethics researcher" when, in reality, she is a professor in the Department of Linguistics at the University of Washington as well as the director of the university's Computational Linguistics Laboratory.
That's not to say Dr Bender has nothing to do with the ethics of AI: the "societal impacts of language technology" is one of the professor's many research interests, according to her website.
Interestingly, the fake quote misattributed to Dr Bender could easily pass as something she said, even to someone familiar with her work or who follows her on social media. "The quote didn't sound like something I'd say, though I could see how someone might think it could," she told The Quint.
Still, the fact remains that Dr Bender didn't actually say it.
Responding to the professor's email, Biharprabha co-founder Abhishek Bharadwaj said, "Actually, we had prompted Gemini AI to create a story about Blenderbot 3's latest blunder and it created this article misquoting you."
"We have removed your quote and published a retraction at the bottom of the same article," Bharadwaj said in the email response to Dr Bender that was shared with The Quint.
For good measure, we also ran the text of the article through a few free-to-use AI text detectors for a third confirmation that the article was indeed AI-generated (though it must be noted that these detectors are not always reliable).
(Screenshots: detection results from WinstonAI, Quillbot, and Copyleaks.)
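For readers curious what such a check involves, here is a minimal sketch of scripting a query to a detector service. The endpoint, request fields, and response format below are hypothetical placeholders of our own; the actual WinstonAI, Quillbot, and Copyleaks APIs have their own schemas and require API keys.

```python
# An illustrative sketch only: the URL, payload fields, and response shape
# are hypothetical placeholders, not any real detector's actual API.
import requests

def check_ai_likelihood(text: str) -> float:
    resp = requests.post(
        "https://example-detector.invalid/api/v1/detect",  # placeholder endpoint
        json={"text": text},
        timeout=30,
    )
    resp.raise_for_status()
    # Assume the service returns a probability between 0 (human) and 1 (AI)
    return resp.json()["ai_probability"]

with open("article.txt") as f:  # hypothetical local copy of the article text
    article_text = f.read()

print(f"Estimated AI probability: {check_ai_likelihood(article_text):.0%}")
```

Whatever score comes back is probabilistic: one signal to weigh alongside admissions and tell-tale phrasing, never proof on its own.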
There are also a few tell-tale signs in the AI-generated article published by Biharprabha, such as one paragraph into which the term 'open AI' is randomly inserted.
We still don't know if the entire article or just parts of it were AI-generated, whether any human editor was involved in the process of publishing the article with the fake quote, and whether the news portal has stopped using AI tools to publish articles since the retraction.
The Quint has reached out to Biharprabha as well as Google with detailed questions. This report will be updated with their responses if we hear back.
2. Why It Matters
The article published by Biharprabha did not contain any disclosure or indication that it was generated by prompting an LLM like Gemini. Instead, the article's byline mentioned only 'BP Staff'.
As a result, Biharprabha's readers would have no idea whether the news they're reading and the news articles they're sharing are AI-generated or not.
Okay, but how many readers could that really be? Well, the news portal, founded in 2010, boasts roughly 30,000 followers on Facebook and another 2,255 on X.
In terms of SEO, the domain biharprabha[dot]com has been backlinked some 47,000 times, with 942 websites linking to it, according to Ahrefs.
Furthermore, Biharprabha's domain rating is 34, with 62.3 percent of its traffic being direct, 25.11 percent coming from referring websites, and 12.59 percent coming from organic search, according to Similarweb.
Even if Biharprabha is not as established as other Indian news outlets, the incident raises the prospect of similar fabricated quotes popping up in other AI-generated articles.
"I guess this leaves us with the job of trying to find and pushback against all such misquotes. Yet another kind of spam, sucking up literal and figurative energy to no benefit," Dr Bender said on X.
3. Biharprabha Is Not Alone
Last year, Futurism reported on how an AI-generated article published by a website called Enlightened Mindset also featured a fabricated quote that was attributed to a real person.
In this case too, the misquoted person was an AI researcher, Dr Reuben Binns, who reportedly specialises in machine learning and data protection at the University of Oxford.
"In a few years' time, this will not be an unusual experience for anyone whose name is on the internet somewhere," the real Dr Reuben Binns told Futurism, adding that your name or public persona not being something that you control anymore will just be a part of life.
Moreover, experts have pointed out that the low cost of production associated with publishing AI-generated content is only further polluting our information ecosystem.
For instance, a report by NewsGuard published last year identified 49 "low-quality" websites that used generative AI tools like OpenAI's ChatGPT to publish unreliable and cheap text content disguised as news reports purely to attract ad revenue.
"Now that everyone has access to synthetic text extruding machines that can generate text on a requested topic and in a requested style, any unfamiliar source is suspect," the University of Washington's Dr Emily Bender said.
"On the internet we (especially speakers of colonial languages) have access to news media from around the world, but we need to build networks that can help us contextualize what we see. This has been true for a long time, of course, it's just now it's gotten much cheaper to create plausible looking sites filled with worthless content," she added.
4. What Media Organisations Ought To Know
The Biharprabha incident comes at a time when newsrooms in India and across the world are experimenting with a wide array of generative AI tools at different stages of publishing.
However, some of these experiments have gone seriously wrong. In 2023, for instance, CNET was found to have published AI-generated articles, and the tech news website soon ended up issuing corrections on 41 of its 77 AI-generated stories, The Verge reported.
Gizmodo and Men's Journal are also part of that list of AI slip-ups.
According to Dr Bender, AI-generated news reports are prone to factual errors because LLMs are just picking the likely next words. "When the strings that come out of the system are ones that we deem to be true, that is just by chance," she said.
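To make that concrete, here is a minimal sketch of what "picking the likely next words" looks like in code. It uses Hugging Face's transformers library with GPT-2 as a small stand-in model; the prompt and model choice are our own illustrative assumptions, not anything from the Biharprabha incident.

```python
# A minimal sketch: text generation as repeated next-word (token) selection.
# GPT-2 is used purely as a small stand-in model for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The AI researcher said that"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(15):
    with torch.no_grad():
        logits = model(ids).logits      # scores for every possible next token
    next_id = logits[0, -1].argmax()    # greedily take the single most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
# The output is just a statistically plausible continuation; nothing in this
# loop checks whether the "quote" it produces was ever actually said.
```

Nothing in that loop consults a source or verifies a fact; when the output happens to be true, that is, as Dr Bender puts it, just by chance.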
"The work of journalism is not primarily the work of putting words on the page, but rather framing investigations, finding appropriate sources, asking those sources appropriate questions, and crafting a narrative out of all of the information the journalist finds through this process. And the heart of this has to be a commitment to truth and accuracy in reporting. So-called "generative AI" does none of those things," Dr Bender told The Quint.
Sports Illustrated (SI) has also received flak for allegedly publishing AI-generated content without disclosing it from the get-go.
When asked why it's important for media organisations to be upfront about their usage of AI tools, Dr Bender said, "For a news organisation to print the output of a generative AI system as if it were news is the most glaring declaration of lack of journalistic integrity. To do so without even providing transparency about what they have done is even worse."