OpenAI Takes Measures To Counter Election Misinformation: Will They Be Enough?

As India heads into election season, what is OpenAI doing to address AI-generated misinformation?

In an attempt to tackle AI-generated misinformation around elections, OpenAI has said that it "won't allow" chatbots that impersonate candidates. The ChatGPT-maker has also prohibited the use of its AI tools for political campaigning and lobbying.

These measures were announced in a blog post published by OpenAI on Monday, 15 January, amid rising concerns that generative AI could be used to influence election outcomes in various countries, including India.

"As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency," OpenAI said in a blog post.

With critical elections due in over 40 countries this year, what other measures is OpenAI taking to curb election misinformation fuelled by AI? Take a look.

This article is a part of 'AI Told You So', a special series by The Quint that explores how Artificial Intelligence is changing our present and how it stands to shape our future.

OpenAI's Plan To Combat Election Misinformation

OpenAI highlighted the following preventive actions that it is taking to prepare for elections this year:

No candidate impersonations: "People want to know and trust that they are interacting with a real person, business, or government. For that reason, we don’t allow builders to create chatbots that pretend to be real people (e.g., candidates) or institutions (e.g., local government)," the Sam Altman-led AI firm said.

  • OpenAI further pointed out that its popular text-to-image generator DALL-E would decline "requests that ask for image generation of real people, including candidates."

No campaigning, lobbying: "We’re still working to understand how effective our tools might be for personalized persuasion. Until we know more, we don’t allow people to build applications for political campaigning and lobbying," OpenAI said.

No inaccurate voting information: "We don’t allow applications that deter people from participation in democratic processes – for example, misrepresenting voting processes and qualifications (e.g., when, where, or who is eligible to vote) or that discourage voting (e.g., claiming a vote is meaningless)," the post read.

What Else Does OpenAI Have in the Works?

Besides enforcing the restrictions mentioned above, OpenAI outlined a few more steps:

Labelling AI-generated content: "Early this year, we will implement the Coalition for Content Provenance and Authenticity’s digital credentials – an approach that encodes details about the content’s provenance using cryptography – for images generated by DALL·E 3," the AI company said.

  • How will these digital signatures help? "Better transparency around image provenance – including the ability to detect which tools were used to produce an image – can empower voters to assess an image with trust and confidence in how it was made," the post read.

Detecting AI-generated content: OpenAI said that it is experimenting with a new tool for detecting images generated by DALL-E. "Our internal testing has shown promising early results, even where images have been subject to common types of modifications," it said.

  • "We plan to soon make it available to our first group of testers – including journalists, platforms, and researchers – for feedback," OpenAI added.

Making ChatGPT cite its sources: As the popular Large Language Model is increasingly integrated with existing sources of information, OpenAI said it aims to give ChatGPT users "access to real-time news reporting globally, including attribution and links."

Setting up reporting mechanisms: "With our new GPTs, users can report potential violations to us," OpenAI said.

Why Some of OpenAI's Measures Might Be Flawed

With India's 2024 general elections on the way, AI tools that generate synthetic images, audio, and video are likely to fuel the spread of false and misleading information.

Offering a glimpse of what lies ahead, fact-checking website BOOM recently reported on how AI-generated images of politicians were widely shared on social media platforms in the run-up to the Telangana Assembly elections in November last year.

The set of AI-generated images that emerged online depicted candidates contesting the polls, such as the Congress' Revanth Reddy and Bharat Rashtra Samithi (BRS) chief K Chandrasekhar Rao, in situations that never happened.

Interestingly, the images were reportedly generated using Microsoft's Image Creator tool, which is powered by OpenAI's DALL-E, indicating that users were able to bypass the filters built into the tool and successfully generate AI images of election candidates.

Such instances raise questions about the effectiveness of DALL-E's policy of declining user requests for images of election candidates. Furthermore, OpenAI's image detection tool is still undergoing tests, with no clarity on when exactly it will be rolled out.

Would it help to embed such AI-generated images with a digital signature? Yes and no.

The Coalition for Content Provenance and Authenticity (C2PA) initiative can make AI-generated images easier to identify, since it proposes using cryptographic methods to mark and sign AI-generated content with metadata about the origin of the image or video.
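
To illustrate the general idea (and not the C2PA specification itself, nor OpenAI's implementation), here is a minimal Python sketch of how cryptographically signed provenance metadata works: the generating tool hashes the image bytes, signs a small manifest with its private key, and a verifier can later check both the signature and that the image is unchanged. The manifest fields, the choice of Ed25519 keys, and all function names here are illustrative assumptions.

```python
# Minimal sketch of signed provenance metadata, assuming the `cryptography`
# package (pip install cryptography). NOT the actual C2PA spec: the manifest
# fields and key scheme are simplified for illustration.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(image_bytes: bytes, tool: str) -> dict:
    # Bind the claim to the exact pixels by hashing the image bytes.
    return {
        "claim_generator": tool,
        "created": datetime.now(timezone.utc).isoformat(),
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }


def sign_manifest(manifest: dict, key: Ed25519PrivateKey) -> bytes:
    # The generating tool signs the serialised manifest with its private key.
    return key.sign(json.dumps(manifest, sort_keys=True).encode())


def verify(manifest: dict, signature: bytes,
           public_key: Ed25519PublicKey, image_bytes: bytes) -> bool:
    # A verifier checks the hash first: any edit to the image breaks it.
    if hashlib.sha256(image_bytes).hexdigest() != manifest["image_sha256"]:
        return False
    try:
        public_key.verify(signature, json.dumps(manifest, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    image = b"\x89PNG...stand-in bytes for a generated image"

    manifest = make_manifest(image, tool="example-image-generator")
    sig = sign_manifest(manifest, key)

    print(verify(manifest, sig, key.public_key(), image))         # True
    print(verify(manifest, sig, key.public_key(), image + b"!"))  # False
```

Even a one-byte edit changes the hash and fails verification, which is what makes such credentials tamper-evident. The flip side, as noted below, is that an image carries a credential only if the tool that made it chose to attach one.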

However, open-source AI tools that aren't built by the big AI companies would sit outside such initiatives.

Another measure highlighted by OpenAI is attribution: when ChatGPT provides real-time news updates, it will also link out to the relevant sources.

However, a WIRED report published last year found that when Microsoft's Copilot was asked to recommend Telegram channels that discuss election integrity, the AI chatbot's response linked out to a page managed by a far-right group in the US.

Notably, Copilot is powered by OpenAI's GPT technology.

A 2023 study published by the non-profit groups AlgorithmWatch and AI Forensics found that one-third of Copilot's answers to questions about the Swiss and German elections contained factual errors, such as wrong election dates or outdated candidates.

Copilot even invented controversies concerning candidates in the Swiss and German polls, the study found.

"Prior to releasing new systems, we red team them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm. For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests," OpenAI said in its blog post on Monday.

On a side note, the generative AI company, reportedly valued at around $100 billion, recently deleted language from its usage policies that had expressly prohibited the use of its AI tools for military and warfare.

First reported by The Intercept, the tweaked policy raised concerns about OpenAI's tools potentially being used to serve the interests of the US military.
