What Is Intelligence? 20 Years After Deep Blue, AI Still Can't Think Like Humans

World Chess champion Garry Kasparov (left) ponders a chess move during the sixth and final game of his match with IBM's Deep Blue computer on May 11, 1997. (Image credit: Roger Celestin/Newscom)

When the IBM computer Deep Blue beat the world's greatest chess player, Garry Kasparov, in the last game of a six-game match on May 11, 1997, the world was astonished. It was the first time a reigning world chess champion had lost a match to a machine under standard tournament conditions.

That win for artificial intelligence was historic, not only for proving that computers can outperform the greatest minds in certain challenges, but also for showing the limitations and shortcomings of these intelligent hunks of metal, experts say.

Deep Blue also highlighted that, if scientists are going to build intelligent machines that think, they have to decide what "intelligent" and "think" mean. [Super-Intelligent Machines: 7 Robotic Futures]

Computers have their limits

During the six-game match, played over several days at the Equitable Center in Midtown Manhattan, Deep Blue beat Kasparov two games to one, with three games ending in draws. The machine approached chess by looking many moves ahead and working through the possible combinations — a search strategy often described as a "decision tree" (think of each possible move as a branch of the tree). Deep Blue "pruned" unpromising branches to cut down the number it had to examine and speed its calculations, yet it still "thought" through some 200 million positions every second.
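As a rough illustration of that pruning idea — not Deep Blue's actual algorithm, which ran largely on custom chess chips — a game-tree search with alpha-beta pruning can be sketched in a few lines of Python. The "game" object here, with its legal_moves, apply, evaluate and is_terminal methods, is a hypothetical stand-in for a real chess engine:

```python
# Minimal sketch of game-tree search with alpha-beta pruning.
# Illustrative only: the `game` interface (legal_moves, apply,
# evaluate, is_terminal) is a hypothetical stand-in for a real
# chess engine, not Deep Blue's actual code.

def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Return the best score the side to move can force, searching `depth` plies."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # static evaluation of the position

    if maximizing:
        best = float("-inf")
        for move in game.legal_moves(state):
            score = alphabeta(game.apply(state, move), depth - 1,
                              alpha, beta, False, game)
            best = max(best, score)
            alpha = max(alpha, best)
            if alpha >= beta:   # prune: the opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for move in game.legal_moves(state):
            score = alphabeta(game.apply(state, move), depth - 1,
                              alpha, beta, True, game)
            best = min(best, score)
            beta = min(beta, best)
            if beta <= alpha:   # prune
                break
        return best
```

The pruning step is what keeps the tree manageable: whole branches are skipped once it's clear a rational opponent would never steer the game into them.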

Despite those incredible computations, however, machines still fall short in other areas.

"Good as they are, [computers] are quite poor at other kinds of decision making," said Murray Campbell, a research scientist at IBM Research. "Some doubted that a computer would ever play as well as a top human.

"The more interesting thing we showed was that there's more than one way to look at a complex problem," Campbell told Live Science. "You can look at it the human way, using experience and intuition, or in a more computer-like way." Those methods complement each other, he said.

Although Deep Blue's win proved that humans could build a machine that's a great chess player, it also underscored how complex and difficult it is to build a computer that can handle even a single board game. IBM scientists spent years constructing Deep Blue, and all it could do was play chess, Campbell said. Building a machine that can tackle a variety of tasks, or learn how to do new ones, has proved more difficult, he added.

Learning machines

At the time Deep Blue was built, the field of machine learning hadn't progressed as far as it has now, and much of the computing power wasn't yet available, Campbell said. IBM's next intelligent machine, Watson, for example, works very differently from Deep Blue, operating more like a search engine. Watson showed that it could understand natural-language questions and respond to them by defeating longtime "Jeopardy!" champions Ken Jennings and Brad Rutter in 2011.

Machine learning systems that have been developed in the past two decades also make use of huge amounts of data that simply didn't exist in 1997, when the internet was still in its infancy. And programming has advanced as well.

The artificially intelligent program AlphaGo, which defeated the world's top players of the board game Go, also works differently from Deep Blue. AlphaGo played vast numbers of games against itself and used the patterns in those games to learn optimal strategies. The learning happened via neural networks — programs that operate loosely like the neurons in a human brain — and the hardware needed to train them wasn't practical in the 1990s, when Deep Blue was built, Campbell said.
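To make the contrast with Deep Blue's brute-force search concrete, here is a toy sketch of the self-play idea — a program that improves by playing against itself. This is an assumption-laden illustration, not AlphaGo's actual method: the "policy" and "game" objects and their methods are hypothetical stand-ins, and real systems like AlphaGo combine deep neural networks with far more sophisticated search and training.

```python
# Toy sketch of learning by self-play. The `policy` and `game`
# objects are hypothetical stand-ins, not AlphaGo's architecture.

def self_play_game(policy, game):
    """Play one game in which both sides choose moves with the same policy."""
    state, history = game.initial_state(), []
    while not game.is_over(state):
        move = policy.choose_move(state, game.legal_moves(state))
        history.append((state, move))
        state = game.apply(state, move)
    return history, game.winner(state)

def train(policy, game, num_games=1000):
    """Repeatedly self-play and nudge the policy toward moves that led to wins."""
    for _ in range(num_games):
        history, winner = self_play_game(policy, game)
        for state, move in history:
            if winner is None:                          # draw: no signal
                reward = 0.0
            elif game.player_to_move(state) == winner:  # move made by the eventual winner
                reward = 1.0
            else:                                       # move made by the eventual loser
                reward = -1.0
            policy.update(state, move, reward)
    return policy
```

The key design point is that no human games are needed: the program generates its own training data, and the hardware to do this at scale with neural networks simply didn't exist when Deep Blue was built.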

Thomas Haigh, an associate professor at the University of Wisconsin-Milwaukee who has written extensively on the history of computing, said Deep Blue's hardware was a showcase for IBM's engineering at the time; the machine combined several custom-made chips with others that were higher-end versions of the PowerPC processors used in personal computers of the day. [History of A.I.: Artificial Intelligence (Infographic)]

What is intelligence?

Deep Blue also demonstrated that a computer's intelligence might not have much to do with human intelligence.

"[Deep Blue] is a departure from the classic AI symbolic tradition of trying to replicate the functioning of human intelligence and understanding by having a machine that can do general-purpose reasoning," Haigh said, hence the effort to make a better chess-playing machine.

But that strategy was based more on computer builders' idea of what was smart than on what intelligence actually might be. "Back in the 1950s, chess was seen as something that smart humans were good at," Haigh said. "As mathematicians and programmers tended to be particularly good at chess, they viewed it as a good test of whether a machine could show intelligence."

That changed by the 1970s. "It was clear that the techniques that were making computer programs into increasingly strong chess players did not have anything to do with general intelligence," Haigh said. "So instead of thinking that computers were smart because they play chess well, we decided that playing chess well wasn't a test of intelligence after all."

The changes in how scientists define intelligence also show the complexity of certain kinds of AI tasks, Campbell said. Deep Blue might have been one of the most advanced computers at the time, but it was built to play chess, and only that. Even now, computers struggle with "common sense" — the kind of contextual information that humans generally don't think about, because it's obvious.

"Everyone above a certain age knows how the world works," Campbell said. Machines don't. Computers have also struggled with certain kinds of pattern-recognition tasks that humans find easy, Campbell added. "Many of the advances in the last five years have been in perceptual problems," such as face and pattern recognition, he said.

Another thing computers can't do, Campbell noted, is explain themselves. A human can describe her thought processes and how she learned something; computers can't really do that yet. "AIs and machine learning systems are a bit of a black box," he said.

Haigh noted that even Watson, in its "Jeopardy!" win, did not "think" like a person. "[Watson] used later generations of processors to implement a statistical brute force approach (rather than a knowledge-based logic approach) to Jeopardy!," he wrote in an email to Live Science. "It again worked nothing like a human champion, but demonstrated that being a quiz champion also has nothing to do with intelligence," in the way most people think of it.

Even so, "as computers come to do more and more things better than us, we'll either be left with a very specific definition of intelligence or maybe have to admit that computers actually are intelligent, but in a different way from us," Haigh said.

What's next in AI?

Because humans and computers "think" so differently, it will be a long time before a computer can, say, make a medical diagnosis all by itself, or handle a problem like designing residences that let people stay in their homes as they age, Campbell said. Deep Blue showed what a computer geared to a single task can do, but to date, nobody has built a generalized machine learning system that works as well as a purpose-built computer.

For example, computers can be very good at crunching lots of data and finding patterns that humans would miss. They can then make that information available to humans to make decisions. "A complementary system is better than a human or machine," Campbell said.

It's also probably time to tackle different kinds of problems, he said. Board games like chess and Go let each player see everything about the opponent's position — what's called a complete-information game. Real-world problems are not like that. "A lesson we should have learned by now … there's not that much more that we can learn from board games," Campbell said. (In 2017, the artificially intelligent program Libratus beat top professional poker players in a 20-day no-limit Texas Hold 'em competition; poker is a game of incomplete information.)

As for Deep Blue's fate, the computer was dismantled after the historic match with Kasparov; components of it are on display at the National Museum of American History in Washington, D.C., and the Computer History Museum in Mountain View, California. 

Original article on Live Science.

Jesse Emspak
Live Science Contributor
Jesse Emspak is a contributing writer for Live Science, Space.com and Tom's Guide. He focuses on physics, human health and general science. Jesse has a Master of Arts from the University of California, Berkeley School of Journalism, and a Bachelor of Arts from the University of Rochester. Jesse spent years covering finance and cut his teeth at local newspapers, working local politics and police beats. Jesse likes to stay active and holds a third degree black belt in Karate, which just means he now knows how much he has to learn.