
As AI Fever Grips the World, Can India Seize the Opportunity?

With India having indicated a framework around its usage, the power of AI can be exploited for the good of citizens.

Opinion | 6 min read

In early May this year, Geoffrey Hinton, the well-known British-Canadian computer scientist best known for his work in Artificial Intelligence (AI), publicly resigned from Google, citing his concerns about the risks of AI and his need to speak about its dangers without implicating Google, even as he defended Google for handling AI responsibly.

In two separate interviews around his resignation, he said that AI could soon surpass the information capacity of the human brain, and that AI could even wipe out humanity. Earlier, on 28 March this year, more than 1,100 individuals, ranging from global tech leaders to eminent citizens, signed an open letter posted online that called on "all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4."



The AI Revolution: A Boon or a Bane?

The timing of these two incidents is significant: across the globe, a debate had begun on the reach and impact of natural language processing tools driven by generative AI. It all started with the launch of ChatGPT by OpenAI in November last year, a chatbot that facilitated human-like conversations and much more, and could assist with tasks like composing emails, essays, and code in no time.

While GPT-4 is a more comprehensive multimodal large language model, considered broader in capability than ChatGPT, the fact remains that the output of both these models, as well as Google's Bard, is causing alarm across the world.

These fears are premised on the concerns that the open letter raised: should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilisation?


Impact of Technology on Global Order

These questions definitely need answers, and stakeholders cannot delay a global regime to address issues of technology and how it is managed as a global commons. The leading nations of the world have spent almost two decades looking at the impact of technology on the international order. While space and nuclear technologies have binding agreements, cyber technology has none, nor is one on the horizon.


The first attempt, the Council of Europe draft convention on cybercrime in 2001, was a failure. The more recent United Nations-fostered Group of Governmental Experts on Advancing Responsible State Behaviour in Cyberspace in the Context of International Security made some headway by prescribing 11 non-binding norms in July 2021, but there is still a long way to go before an agreement is reached.

Meanwhile, the march of technology continues. It is very clear that while an overarching agreement has to be in place, sectoral areas like AI need to be addressed more proactively, with arrangements arrived at so that product and solution developers, regulators, and sovereign authorities know how to deal with the impact of advances in technology.

Clearly, generative AI based on large language models offers a bigger challenge than anticipated in terms of its timing and the pace at which it is progressing. The technology is becoming increasingly sophisticated, with systems that can generate images, music, text, and even videos.

While these capabilities are impressive, they also raise concerns about potential misuse. There are fears that these AI systems could be used to spread fake news, create malicious content, or even impersonate individuals. One of the challenges of regulating generative AI is that it is difficult to determine the intent behind these systems.


AI & Human Creators: Where Does the Buck Stop?

Unlike human creators, AI systems do not have a moral compass or conscience. Therefore, it is essential to ensure that these systems are designed in a way that aligns with ethical standards. For instance, they should not be programmed to generate content that is racist, sexist, or discriminatory in any way.

Another challenge of regulating generative AI is that it can be difficult to distinguish between content generated by humans and content generated by AI. This makes it hard to hold individuals or organisations accountable for the content they publish. Therefore, it is essential to develop methods that can verify the source of generated content.

To regulate generative AI, it is necessary to have a clear set of guidelines and standards governing its development and use. These guidelines have to be developed in consultation with stakeholders, including industry experts, policymakers, and the general public.

One approach is to introduce legal frameworks that govern the development and use of generative AI. This would require policymakers to work closely with industry experts to develop regulations that balance the potential benefits of generative AI against the need to protect individuals and society as a whole.

Such regulations could include rules on the use of personal data, the transparency of AI systems, and the accountability of those who develop and use them. Another approach is to promote self-regulation within the industry, with industry experts developing their own codes of conduct governing the development and use of generative AI. These codes could be enforced by industry bodies or through peer review. However, governments and regulatory bodies have to address these challenges far more proactively than they do today.


India’s AI Leadership Is Warranted

In this context, it will be very pertinent for India to take a major role in the global order to address the concerns listed in the open letter. In November last year, India assumed the Chair of the Global Partnership on Artificial Intelligence (GPAI), an international initiative with 25 major nations as members that supports the responsible and human-centric development and use of AI.

In accepting the Chair, the Indian government indicated that it would work in close cooperation with member states to put in place a framework through which the power of AI can be exploited for the good of citizens and consumers across the globe, with adequate guardrails to prevent misuse and user harm; the opportunity could not be better. It also aligns naturally with the position India has taken as the current G20 chair, where it has espoused the reach of technology for the greater public good and the fostering of responsible AI in development and usage.

While Big Tech has been on the radar of governments and regulators across the world, largely for its significant influence and reach, the push for responsible technology development and deployment has not been impactful. The competition in generative AI has already shown that there is no industry-wide consensus on where to stop.

The 23 Asilomar AI Principles, guidelines on the ethics of research and development of beneficial AI drawn up in January 2017 by a group of AI researchers, technology experts, and legal scholars from different universities and organisations, have not been followed by developers. That is possibly a guiding tool that GPAI, under India's leadership, could deliberate on and fine-tune into a binding agreement among all stakeholders for responsible AI.

(Subimal Bhattacharjee is a commentator on cyber and security issues around Northeast India. He can be reached @subimal on Twitter. This is an opinion piece and the views expressed are the author’s own. The Quint neither endorses nor is responsible for them.)
