
Is the Govt's AI Advisory an Attempt To Manipulate Information on Social Media?

The advisory, in effect, hands the government a power button over AI platforms.


In a move that sent ripples through the tech industry, the Union Ministry of Electronics and Information Technology (MeitY) issued an advisory on 1 March, in continuation of advisory No. 2(4)/2023-CyberLaws-2, dated 26 December 2023.

The controversial part of the advisory is that it mandates explicit government permission for the use of under-tested or unreliable AI models, including Large Language Models (LLMs) and other generative AI models, on the "Indian internet".

The legality of the advisory is itself doubtful, on the ground that it is ultra vires its parent statute, the Information Technology (IT) Act, 2000, which contains no provision empowering the government to require prior permission for the deployment of AI models.

Moreover, the advisory uses several vague and ambiguous terms, such as "bias", "threaten the integrity of the electoral process", "under-testing", and "unreliable", which form another ground for challenge.

The advisory also appears arbitrary: Union Minister of State for IT Rajeev Chandrasekhar later clarified on X that it is aimed at "significant platforms" and that only "large platforms", not "startups", need to seek permission before deploying their AI models.

Furthermore, the advisory requires platforms to deploy such models only after appropriately labelling their fallibility or unreliability and implementing a "consent popup" mechanism to inform users about potential inaccuracies.
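The advisory does not prescribe any technical standard for this labelling or consent mechanism. Purely as illustration, the sketch below shows one way a platform might wrap model output in a fallibility notice and gate it behind a consent pop-up; every name and string here is hypothetical, and `window.confirm` merely stands in for a real dialog component.

```typescript
// Hypothetical sketch only: the advisory prescribes no implementation,
// so every identifier and string below is illustrative, not mandated.

interface LabelledAiOutput {
  text: string;               // the model's generated response
  fallibilityNotice: string;  // the "unreliability" label the advisory calls for
}

// Attach a fallibility label to raw model output before display.
function labelOutput(rawText: string): LabelledAiOutput {
  return {
    text: rawText,
    fallibilityNotice:
      "This response was generated by an under-tested AI model and may be inaccurate.",
  };
}

// Gate display behind a consent pop-up; window.confirm stands in for a
// real dialog, and consent is not persisted in this sketch.
function showWithConsent(output: LabelledAiOutput): void {
  if (window.confirm(`${output.fallibilityNotice}\n\nDo you wish to continue?`)) {
    console.log(output.text); // a real UI would render this in the page
  }
}

// Example: showWithConsent(labelOutput("AI-generated answer..."));
```

Notably, the notice wording is simply a string chosen by whoever controls the mechanism, which is precisely why the language of such disclosures matters, as discussed below.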

This article is a part of 'AI Told You So', a special series by The Quint that explores how Artificial Intelligence is changing our present and how it stands to shape our future. Click here to view the full collection of stories in the series.


A Power Button for the Govt

The requirement of government permission introduces a concerning possibility of bias and censorship. By wielding the authority to grant or deny permission for AI models, the government holds significant leverage over the dissemination of information and the functioning of digital platforms.

This power could be exploited to favour AI models that generate favourable outcomes for the government while suppressing those that produce unfavourable results or challenge the status quo.

Furthermore, the requirement to label the fallibility or unreliability of AI output, coupled with the implementation of consent pop-ups, could be used as tools for manipulation.

The government could dictate the language and tone of these disclosures, potentially downplaying or omitting information that reflects negatively on its policies or actions. Users may be presented with biased or incomplete information, undermining their ability to make informed decisions about the reliability of AI-generated content.

The criteria for evaluating AI models and granting permission are not clearly defined, leaving room for arbitrary decision-making and opacity. Without mechanisms for oversight and accountability, there is a risk of favouritism, corruption, and misuse of power.

The advisory, in effect, hands the government a power button over AI platforms. In the context of elections, where public opinion and political messaging play a crucial role, the potential misuse of AI models for propaganda or disinformation campaigns is therefore particularly troubling.


In the Context of the Lok Sabha Polls

AI-powered algorithms can be used to amplify certain messages, manipulate public discourse, or target specific voter demographics with tailored content.

By controlling the deployment of AI models through government permission, the ruling party or candidates could gain an unfair advantage in shaping public opinion and influencing electoral outcomes.

The potential for government misuse of AI models in elections underscores the need for greater transparency, accountability, and oversight.

Independent regulatory bodies should be empowered to scrutinise the deployment of AI models during election campaigns and ensure compliance with ethical standards and democratic principles.

Mechanisms for public oversight and accountability should be established to prevent abuses of power and safeguard the integrity of the electoral process.

Moreover, civil society organisations, media watchdogs, and political opposition parties have a crucial role to play in monitoring and exposing any attempts at government manipulation of AI technologies for electoral purposes.

The issuance of such government advisories seeking to regulate AI platforms reflects a broader societal fear and misunderstanding surrounding AI technologies.

This fear is often fuelled by sensationalised media portrayals and science fiction narratives that depict AI as malevolent or uncontrollable.

While it's essential to acknowledge the potential risks associated with AI misuse, knee-jerk regulatory responses based on misconceptions can do more harm than good.


The Need for Demystifying AI Models

Instead of blanket regulations that treat all AI technologies as equal threats, policymakers should adopt a nuanced approach that considers the specific risks and benefits associated with different AI applications.

This approach requires collaboration between policymakers, technologists, ethicists, and other stakeholders to develop frameworks that promote responsible AI development and deployment.

Education also plays a crucial role in addressing misconceptions about AI. By fostering a better understanding of how AI technologies work and their limitations, we can demystify these technologies and empower individuals to engage in informed discussions about their societal implications.

Hence, the recent MeitY advisory reflects an attempt to manipulate AI-generated results and betrays a larger misunderstanding of generative AI and of the level of intelligence we attribute to LLMs and other AI tools.

This misunderstanding has contributed to reactionary government interventions that risk stifling innovation and hindering the potential benefits of AI technology.

Moving forward, it's essential for policymakers and society as a whole to adopt a more nuanced and informed approach to AI regulation that balances the potential risks with the immense opportunities for progress and innovation.

(Ravi Singh Chhikara is a practising advocate at the Delhi High Court. Vaishali Chauhan is a practising advocate at the Supreme Court and Delhi High Court. This is an opinion piece and the views expressed above are the authors' own. The Quint neither endorses nor is responsible for them.)
