Google Won’t Use AI to Make Military Weapons, Says Sundar Pichai

Google will make sure that none of its AI-based technology is used for devious purposes, Pichai said. 
IANS
Sundar Pichai, CEO, Google, delivers his keynote address during Google I/O 2015. (Photo: Reuters)

After facing backlash over its involvement in an Artificial Intelligence (AI)-powered Pentagon project "Maven", Google CEO Sundar Pichai has emphasised that the company will not work on technologies that cause or are likely to cause overall harm.

About 4,000 Google employees had signed a petition demanding "a clear policy stating that neither Google nor its contractors will ever build warfare technology".

Following the backlash, Google decided not to renew the "Maven" AI contract with the US Defence Department after it expires in 2019.

"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," Pichai said in a blog post on Friday, 8 June.

Google also recently showcased Duplex, which allows Google Assistant to talk like a human without the person on the other end being able to tell the difference.

Facebook, Microsoft and even Amazon have become active participants in AI, but Google has been rather muted in its approach until now. Under Pichai, however, the search-engine and technology behemoth is looking to set things straight.

Google will incorporate its privacy principles in the development and use of its AI technologies, providing appropriate transparency and control over the use of data, Pichai emphasised.

In a blog post describing seven "AI principles", he said these are not theoretical concepts but "concrete standards that will actively govern our research and product development and will impact our business decisions".

How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.
Sundar Pichai, CEO, Google (https://www.blog.google/topics/ai/ai-principles/)

Google will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where it operates.

We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
Sundar Pichai, CEO, Google (https://www.blog.google/topics/ai/ai-principles/)

Pichai said Google will design AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research.

"We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies be subject to appropriate human direction and control," he added.

There is a growing emphasis on involving AI in a wider range of activities, and its role is only going to become more significant in the coming years.

(At The Quint, we are answerable only to our audience. Play an active role in shaping our journalism by becoming a member. Because the truth is worth it.)
