Artificial Intelligence Beats 'Most Complex Game Devised by Humans'

(Image: A Go board. Credit: Zerber/Shutterstock.com)

Make way for the robots.

An artificial intelligence system has defeated a professional Go player, cracking one of the longstanding grand challenges in the field. What's more, the new system, called AlphaGo, defeated the human player by learning the game from scratch using an approach known as "deep learning," the researchers involved say.

The stunning defeat suggests that the new artificial intelligence (AI) learning strategy could be a powerful tool in other arenas, such as analyzing reams of climate data with no apparent structure or making complicated medical diagnoses, the scientists said.

The researchers reported on the new matchup online today (Jan. 27) in the journal Nature.

Man versus machine

Ever since IBM's Deep Blue defeated Garry Kasparov in their iconic chess match in 1997, AI researchers have been quietly crafting machines that can master more and more human pastimes. In 2011, IBM's Watson defeated the Jeopardy! champion Ken Jennings, and last year, a computer named Claudico, which can "bluff" its way through Heads-Up No-Limit Texas Hold 'em, gave human poker players a run for their money at a Pittsburgh casino.

However, Go was a much harder nut to crack. The strategy game, which originated in China around 2,500 years ago, relies on deceptively simple rules. Players place black and white stones on a large gridded board in order to surround the most territory. Connected stones of one color stay alive as long as they retain at least one open adjacent point, called a liberty; groups whose liberties are all filled by the opponent are dead and removed from the board.

But behind the simple rules lies a game of incredible complexity. The best players spend a lifetime mastering the game, learning to recognize sequences of moves such as "the ladder," devising strategies for avoiding never-ending battles for territory called "ko wars," and developing an uncanny ability to look at the board and know in an instant which stones are alive, dead or in limbo.

"It's probably the most complex game devised by humans," study co-author Demis Hassabis, a computer scientist at Google DeepMind in London, said yesterday (Jan. 26)  at news conference. "It has 10 to the power 170 possible board positions, which is greater than the number of atoms in the universe."

The key to this complexity is Go's "branching pattern," Hassabis said. A Go player can choose from about 200 possible moves on each turn, compared with about 20 possible moves per turn in chess. In addition, there's no easy way to simply look at the board and quantify how well a player is doing at any given time. (In contrast, people can get a rough idea of who is winning a game of chess simply by assigning point values to the pieces still in play or captured, Hassabis said.)
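To get a feel for how fast those options multiply, here is a back-of-the-envelope calculation in Python. The per-turn move counts are the rough figures quoted above; the game lengths (60 turns for chess, 150 for Go) are our own illustrative assumptions, not numbers from the researchers.

```python
# Rough game-tree sizes: (moves per turn) ** (turns per game).
# Move counts are the article's rough figures; game lengths are
# our own illustrative assumptions.
chess_branching, chess_turns = 20, 60
go_branching, go_turns = 200, 150

chess_tree = chess_branching ** chess_turns
go_tree = go_branching ** go_turns

# Report the order of magnitude (number of decimal digits minus one).
print(f"chess: ~10^{len(str(chess_tree)) - 1} possible lines of play")
print(f"go:    ~10^{len(str(go_tree)) - 1} possible lines of play")
```

Even with these crude numbers, the Go tree comes out hundreds of orders of magnitude larger than the chess tree, which is why exhaustively checking every line of play is hopeless.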

As a result, brute-force approaches like the one behind IBM's Deep Blue don't work for Go, and the best AI systems have managed to defeat only amateur human Go players.

Deep learning

In the past, experts taught AI systems specific sequences of moves or tactical patterns. Hassabis and his colleagues instead trained AlphaGo with no such preconceived notions.

The program uses an approach called deep learning, or deep neural networks, in which calculations occur across several hierarchically organized layers, with each layer feeding its output into the next, higher layer.
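As a toy illustration of that layered structure, consider the minimal sketch below in Python with NumPy. The layer sizes and the random, untrained weights are our own assumptions purely for illustration; AlphaGo's actual networks were much larger convolutional networks trained on real game data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    """Random weights and zero biases for one fully connected layer."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

# Three stacked layers: a 19x19 board flattens to 361 input values.
layers = [make_layer(361, 128), make_layer(128, 128), make_layer(128, 361)]

def forward(board_features):
    """Feed the input up through the hierarchy, layer by layer."""
    x = board_features
    for weights, biases in layers:
        # Each layer transforms its input and passes the result upward.
        x = np.maximum(0.0, x @ weights + biases)  # ReLU nonlinearity
    return x

output = forward(rng.normal(size=361))
print(output.shape)  # (361,) - e.g., one score per board point
```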

In essence, AlphaGo "watched" millions of board positions from games between expert human players to learn the rules of play and basic strategy. The computer then played millions of games against itself to invent new Go strategies. On its own, AlphaGo graduated from mastering basic sequences of local moves to grasping larger tactical patterns, the researchers said.
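In code, that two-phase recipe has roughly the shape sketched below. This is our own schematic paraphrase, not DeepMind's pipeline: the "policy" here is just a table of move preferences standing in for a deep network, and the games are random placeholders.

```python
import random

random.seed(0)
policy = [0.0] * 361  # one preference score per point on a 19x19 board

# Phase 1: supervised learning - nudge the policy toward the moves that
# human experts actually played (placeholder random "games" here).
human_games = [[random.randrange(361) for _ in range(50)] for _ in range(200)]
for game in human_games:
    for expert_move in game:
        policy[expert_move] += 0.1

# Phase 2: reinforcement learning - play against yourself and reinforce
# the moves that belonged to the winning side.
for _ in range(1000):
    moves = [random.randrange(361) for _ in range(100)]  # stand-in game
    winner_moves = moves[::2] if random.random() < 0.5 else moves[1::2]
    for move in winner_moves:
        policy[move] += 0.01

print(max(range(361), key=policy.__getitem__))  # currently favored point
```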

To accomplish this task, AlphaGo relies on two neural networks: a "value network," which looks at a board position and estimates which player is likely to win, and a "policy network," which chooses promising moves. Over time, self-play games guided by the policy network were used to train the value network to judge how a game was progressing.
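The division of labor between the two networks can be sketched like this; it is again a toy stand-in with random weights, purely our own illustration of the two roles, not DeepMind's code.

```python
import numpy as np

rng = np.random.default_rng(1)

W_policy = rng.normal(scale=0.1, size=(361, 361))  # features -> move scores
W_value = rng.normal(scale=0.1, size=(361, 1))     # features -> one score

def policy_network(board):
    """Return a probability for each of the 361 board points (which move?)."""
    scores = board @ W_policy
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()  # softmax over candidate moves

def value_network(board):
    """Return an estimate of the current player's chance of winning."""
    return 1.0 / (1.0 + np.exp(-(board @ W_value)[0]))  # squash to (0, 1)

board = rng.normal(size=361)
print("best move:", int(policy_network(board).argmax()))
print("win probability:", float(value_network(board)))
```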

Unlike earlier methods, which attempted to calculate the consequences of every possible move via brute force, the program considers only the moves likeliest to win, the researchers said, an approach that strong human players also use.

"Our search looks ahead by playing the game many times over in its imagination," study co-author David Silver, a computer scientist at Google DeepMind who helped build AlphaGo, said at the news conference. "This makes AlphaGo search much more humanlike than previous approaches."

Total human defeat

Learning from humans seems to be a winning strategy.

AlphaGo trounced rival AI systems about 99.8 percent of the time, and defeated the reigning European Go champion, Fan Hui, five games to none in a formal match. Against other AI systems, the program can run on an ordinary desktop computer, though for the match against Hui, the team beefed up AlphaGo's processing power, using about 1,200 central processing units (CPUs) that split up the computational work.

And AlphaGo isn't finished with humans yet. It has set its sights on Lee Sedol, the world's best Go player, and a face-off is scheduled for March.

"You can think of him as the Roger Federer of the Go world," Hassabis said.

Many in the Go world were stunned by the defeat, but still held out hope for the mere mortal who will face off against AlphaGo in March.

"AlphaGo's strength is truly impressive! I was surprised enough when I heard Fan Hui lost, but it feels more real to see the game records," Hajin Lee, the secretary general of the International Go Confederation, said in a statement. "My overall impression was that AlphaGo seemed stronger than Fan, but I couldn't tell by how much. I still doubt that it's strong enough to play the world's top professionals, but maybe it becomes stronger when it faces a stronger opponent."

