Science and Tech

Google's Game-Playing AI Software Isn't Just For Fun

Training a software program to play an ancient Chinese board game helped Google's DeepMind move artificial intelligence forward.

A software program is about to take on one of the best living players of Go, an ancient Chinese board game.

AlphaGo, developed by DeepMind, Google's artificial intelligence research company, will face off against Lee Se-dol, who holds the second-most international Go titles. Their series of matches starts March 9 and ends March 15.

Basically, it's man vs. machine, which makes for a pretty entertaining PR event. But the artificial intelligence company might have more to gain from competing than just public awareness and bragging rights.

The competition is drawing comparisons to IBM's Deep Blue supercomputer beating world chess champion Garry Kasparov in 1997.

But Go isn't like chess. In chess, a player has 20 possible opening moves; in Go, there are 361, one for each point on the 19-by-19 board, so the game is far harder to crack by brute-force search.
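A quick back-of-the-envelope calculation shows how fast that difference compounds. This sketch just raises the opening branching factors cited above to the fourth power; it's an illustration of the search-space gap, not an exact count of legal positions:

```python
# Rough game-tree sizes after four plies, using the opening branching
# factors cited above: 20 moves in chess vs. 361 in Go.
chess_positions = 20 ** 4   # 160,000
go_positions = 361 ** 4     # 16,983,563,041

# After only four moves, Go's tree is already over 100,000 times larger.
print(go_positions // chess_positions)
```

Real search spaces are messier than this (branching factors shrink as the board fills), but the gap only widens with deeper lookahead.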

DeepMind used deep neural networks so AlphaGo could analyze the moves of the game's best players and track which moves are most successful. Then, it improved what it learned by repeatedly playing games against itself.

DeepMind calls its process "deep reinforcement learning," and it's something that could be used in developing other artificial intelligence programs.
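The self-play idea can be sketched in miniature. The toy below is not DeepMind's system (AlphaGo pairs deep neural networks with tree search); it is a tabular agent learning a trivial take-away game purely by playing against itself, with terminal wins and losses as the only feedback. All names and the game itself are illustrative:

```python
import random

# Toy self-play reinforcement learning: 10 stones on the table, players
# alternate taking 1 or 2; whoever takes the last stone wins. The agent
# learns a value table Q[(stones, action)] from self-play outcomes alone.

def legal_actions(stones):
    return [a for a in (1, 2) if a <= stones]

def train(episodes=20000, alpha=0.5, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}  # (stones_remaining, action) -> estimated value for the mover
    for _ in range(episodes):
        stones, history = 10, []
        while stones > 0:
            acts = legal_actions(stones)
            # Epsilon-greedy: mostly exploit the table, sometimes explore.
            if rng.random() < epsilon:
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda x: Q.get((stones, x), 0.0))
            history.append((stones, a))
            stones -= a
        # The player who took the last stone wins (+1); walk the game
        # backward, flipping the sign each ply for the alternating players.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def greedy(Q, stones):
    """The learned policy: pick the highest-valued legal action."""
    return max(legal_actions(stones), key=lambda a: Q.get((stones, a), 0.0))
```

After training, `greedy(Q, 2)` takes both stones for the immediate win; no one ever told the agent the rules of good play, only who won each game. AlphaGo applies the same loop at vastly greater scale, with neural networks standing in for the lookup table.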

In the short term, it could be used to help Google with image and voice recognition, but AlphaGo's creators have bigger ambitions.

"Perhaps moving into medicine one day where we help patients personalize their treatments by using reinforcement learning to understand which sequence of treatments lead to the best outcomes for particular patients," said DeepMind's David Silver.

It sounds a bit like IBM's "Jeopardy!"-winning computer Watson, which is learning how to diagnose medical conditions and recommend treatments.

This video includes clips from goclubmilano / CC By 3.0 and images from Getty Images.