
Google AI Trumps Human Champion at Complex Chinese Game

For the first time, a Google AI has beaten a human champion of the 2,500-year-old complex Chinese game of Go.

Tech News
2 min read

For the first time, a Google computer programme has beaten a human champion at the 2,500-year-old Chinese game of Go, in an event seen as a milestone for artificial intelligence. The achievement also beat Facebook to the feat: earlier this month, Facebook founder Mark Zuckerberg said in a post on the social networking site that his AI research team was trying to develop a system that could learn to play Go.

The ancient Chinese game of Go is one of the last games where the best human players can still beat the best artificial...

Posted by Mark Zuckerberg on Tuesday, January 26, 2016

Meanwhile, Google DeepMind’s AlphaGo had already beaten the reigning three-time European Go champion Fan Hui 5-0 in October last year in London.

The game, invented in China 2,500 years ago, involves players taking turns to place black or white stones on a board, trying to capture the opponent’s stones or surround empty space to make points of territory. However, as simple as the rules are, Go is a game of profound complexity.

Researchers estimate that there are 10 to the power of 700 possible ways a Go game could be played. By contrast, chess, a game at which AIs can already play at grandmaster level, has about 10 to the power of 60 possible scenarios.

Screen showing Chinese Go players in the Ming Dynasty, made by Kano Eitoku. (Photo courtesy: Wikipedia)

To crack the game, Demis Hassabis, the chief executive of Google DeepMind, says they trained the neural networks on 30 million moves from games played by human experts, until the system could predict the human move 57 percent of the time.

But our goal is to beat the best human players, not just mimic them.

Demis Hassabis

To do this, AlphaGo learned to discover new strategies for itself by playing thousands of games between its neural networks and adjusting the connections through a trial-and-error process known as reinforcement learning.
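The self-play idea can be sketched in miniature. The toy below is not AlphaGo's method (DeepMind used deep neural networks and the full game of Go); it is a bare-bones illustration of learning from self-play and win/loss feedback alone, using the simple counter game Nim as a stand-in. All names and numbers here are illustrative assumptions.

```python
import random

# Toy stand-in for Go: Nim with 10 counters. Players alternately remove
# 1-3 counters; whoever takes the last counter wins.
# Q[(counters_left, counters_taken)] scores each move. It starts at zero
# and is nudged up after wins and down after losses over many self-play
# games: trial-and-error learning driven only by the game outcome.

Q = {}

def legal_moves(n):
    return [t for t in (1, 2, 3) if t <= n]

def pick(n, explore=0.1):
    # Mostly play the best-scoring move; sometimes explore at random.
    if random.random() < explore:
        return random.choice(legal_moves(n))
    return max(legal_moves(n), key=lambda t: Q.get((n, t), 0.0))

def self_play_game():
    # The program plays both sides, recording each side's moves.
    history = {0: [], 1: []}
    n, player = 10, 0
    while n > 0:
        t = pick(n)
        history[player].append((n, t))
        n -= t
        if n == 0:
            winner = player  # took the last counter
        player = 1 - player
    return winner, history

def train(games=5000, lr=0.1):
    for _ in range(games):
        winner, history = self_play_game()
        for player in (0, 1):
            reward = 1.0 if player == winner else -1.0
            for move in history[player]:
                # Nudge the move's score toward the final game result.
                Q[move] = Q.get(move, 0.0) + lr * (reward - Q.get(move, 0.0))

random.seed(0)
train()
```

After a few thousand self-play games, the score table should discover, for instance, that taking all three counters from a pile of three is a winning move, without ever being shown an expert game.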

“After all that training it was time to put AlphaGo to the test. First, we held a tournament between AlphaGo and the other top programmes at the forefront of computer Go,” said David Silver of Google DeepMind, the lead author of the study. AlphaGo won all but one of its 500 games against these programmes. The researchers then invited Hui for a closed-doors match, where AlphaGo won by five games to nil.

In March this year, AlphaGo will face a five-game challenge match in Seoul against the legendary Lee Sedol – the top Go player in the world over the past decade.

(With agency inputs)
