Google's artificial intelligence software has won its third straight match against a grandmaster of an ancient board game called Go.
Google's program, AlphaGo, won the first three games in a series of five against Lee Sedol, who's considered one of the world's best Go players.
This means Google has secured the $1 million in prize money from the competition, which it says it will donate to charity. But this was about more than money and bragging rights for Google.
Deep neural networks, like the ones used in AlphaGo, are becoming increasingly important to Google's business. They help identify faces in photos, understand commands spoken into smartphones, choose Internet search results and more.
And the Go victory over Sedol is a testament to how powerful Google's machine-learning techniques are. Go is played on a 19-by-19 board, so there is an enormous number of possible moves during a match. That's why a lot of Go players say it's a game of intuition as much as anything else.
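A back-of-the-envelope calculation shows why the board size matters. This sketch (the numbers are illustrative upper bounds, not an exact count of legal Go positions) counts the intersections on a 19-by-19 board and the naive number of ways to fill them:

```python
# Rough illustration of Go's search space on a 19-by-19 board.
board_points = 19 * 19  # 361 intersections where a stone can sit
print(board_points)     # 361

# A loose upper bound on board configurations: each intersection is
# empty, black, or white. This ignores the rules of Go, which rule
# out many of these positions, but it shows the scale of the problem.
configurations = 3 ** board_points
print(len(str(configurations)))  # 173 (a number with ~173 digits)
```

A number with roughly 173 digits dwarfs anything a computer could enumerate by brute force, which is why AlphaGo relies on learned pattern recognition rather than exhaustive search.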
To master the game, DeepMind, the Google-owned company that developed AlphaGo, used a technique called reinforcement learning. Basically, it had the program practice Go by playing thousands and thousands of matches against itself so it could learn which moves were most likely to lead to victory.
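The self-play idea can be sketched on a toy game. This is not DeepMind's code, and Go itself is far too large for a lookup table like this; it's a minimal illustration of the same loop, using a simple Nim variant (players remove 1 or 2 stones; whoever takes the last stone wins) and made-up names like `q` and `play_game`:

```python
import random

q = {}  # (stones_left, move) -> estimated value for the player to move

def best_move(stones, eps=0.1):
    """Pick the highest-valued move, exploring randomly 10% of the time."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: q.get((stones, m), 0.0))

def play_game():
    """One game of self-play; records each player's (state, move) pairs."""
    stones, history, player = 7, {0: [], 1: []}, 0
    while stones > 0:
        move = best_move(stones)
        history[player].append((stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player  # the player who just took the last stone
    return history, winner

def train(episodes=20000, lr=0.1):
    """Nudge each move's value toward the game's final outcome."""
    for _ in range(episodes):
        history, winner = play_game()
        for player, moves in history.items():
            reward = 1.0 if player == winner else -1.0
            for state_move in moves:
                old = q.get(state_move, 0.0)
                q[state_move] = old + lr * (reward - old)

random.seed(0)
train()
# With 7 stones, perfect play is to take 1, leaving the opponent
# on 6 (a losing position). The learned values should reflect that.
print(max((1, 2), key=lambda m: q[(7, m)]))
```

The key property, shared with AlphaGo's training at a vastly larger scale, is that no human examples are needed: the program generates its own experience and reinforces whatever led to wins.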
So now that we know an AI can teach itself to be a top-notch Go player, experts want to see what other things computers can learn.
Some researchers are testing how AI fares in Texas Hold'em poker to see what it does when it can't see its opponents' cards. Another AI is working on standardized tests, like the SAT, so we can see how it handles less predictable questions.
This video includes a clip from Google and images from Getty Images.