Human and machine intelligence will duke it out on "Jeopardy!" next week when a computer named Watson competes against two of the trivia game show’s most celebrated champions.
Built by IBM with help from several universities, Watson can search a vast store of information and come up with answers to questions posed in plain English in seconds. This ability comes from its 2,800 computer processing units and a variety of software components that work in parallel to produce answers.
Here’s how Watson works: The computer has been preloaded with information taken from encyclopedias, news articles and even the Internet. Other components include software that analyzes the question or clue and figures out what category — person, place or thing, for example — the clue is referring to, as well as software that can use the result of the clue analysis to generate a large number of candidate answers.
Watson then searches its vast database for information that can support or disprove each of the candidate answers. The answers are each assigned a score, and after thousands of calculations — all performed in a couple of seconds — Watson arrives at a single answer.
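The pipeline described above — analyze the clue, generate candidates, score each against the evidence, pick the top answer — can be sketched in miniature. Everything here (the function names, the tiny knowledge base, the keyword-overlap scoring rule) is a hypothetical illustration of the general approach, not IBM's actual implementation.

```python
# Toy sketch of a Watson-style question-answering pipeline.
# All names, categories and scoring rules are illustrative assumptions.

def analyze_clue(clue: str) -> str:
    """Guess what type of answer the clue asks for (person, place or thing)."""
    lowered = clue.lower()
    if "who" in lowered:
        return "person"
    if "where" in lowered:
        return "place"
    return "thing"

def generate_candidates(category: str, knowledge: dict) -> list:
    """Treat every stored entry of the right type as a candidate answer."""
    return [name for name, info in knowledge.items() if info["type"] == category]

def score_candidate(candidate: str, clue: str, knowledge: dict) -> float:
    """Score a candidate by how many clue words its stored facts support."""
    facts = knowledge[candidate]["facts"].lower()
    clue_words = set(clue.lower().split())
    return sum(1 for word in clue_words if word in facts) / len(clue_words)

def answer(clue: str, knowledge: dict) -> str:
    """Run the whole pipeline and return the single best-scoring candidate."""
    category = analyze_clue(clue)
    candidates = generate_candidates(category, knowledge)
    return max(candidates, key=lambda c: score_candidate(c, clue, knowledge))

# A three-entry stand-in for Watson's preloaded encyclopedias and articles.
kb = {
    "Abraham Lincoln": {"type": "person",
                        "facts": "President who delivered the Gettysburg Address"},
    "Mount Everest":   {"type": "place",
                        "facts": "Highest mountain on Earth, in the Himalayas"},
    "Neil Armstrong":  {"type": "person",
                        "facts": "First person to walk on the moon in 1969"},
}

print(answer("Who delivered the Gettysburg Address?", kb))  # → Abraham Lincoln
```

The real system differs in every dimension of scale — thousands of candidates, hundreds of scoring algorithms, statistical confidence models — but the shape of the computation is the same.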
“When Watson was developed, it was initially developed and run just on a single quadcore processor and there were questions that took over an hour to finish processing,” said Eric Nyberg, a professor at the Language Technologies Institute at Carnegie Mellon University.
“That’s why IBM started working very diligently on mapping Watson’s computations onto a parallel distributed processor, so with the 2,800 processing units it can do all of this in a couple of seconds,” he added.
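The speedup Nyberg describes comes from scoring many candidates at once instead of one after another. A minimal sketch of that idea, using Python's standard thread pool as a stand-in for Watson's 2,800 processing units (the scoring function here is a hypothetical placeholder, not Watson's real computation):

```python
# Sketch: the same candidate-scoring work, fanned out across a worker pool.
# score() is a cheap stand-in for an expensive evidence search.
from concurrent.futures import ThreadPoolExecutor

def score(candidate: str) -> float:
    # Placeholder for a slow, independent evidence-gathering computation.
    return sum(ord(ch) for ch in candidate) % 100 / 100.0

def best_answer(candidates: list) -> str:
    # Each candidate is scored by a separate worker; because the scores are
    # independent, the wall-clock time is set by the slowest single candidate
    # rather than the sum of all of them.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(score, candidates))
    return max(zip(candidates, scores), key=lambda pair: pair[1])[0]

print(best_answer(["Toronto", "Chicago", "Tokyo"]))
```

On a single core this changes little; on hardware like Watson's, the same fan-out pattern turns an hour of sequential scoring into a couple of seconds.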
Despite having all this computing power at its disposal, Watson doesn’t always give the right answer.
“I’ve actually seen Watson give answers that I don’t think any human 'Jeopardy!' player would ever give,” Nyberg told TechNewsDaily. “On the other hand, I’ve seen Watson very quickly extract very obscure answers that I would be surprised if a human could get in one or two seconds.”
When Watson does get an answer wrong, it’s often because it lacks the context that we humans have learned through our experiences in the real world.
Unlike the computer Hal in the movie "2001: A Space Odyssey," Watson can’t learn through experience the way a human can. Instead, the team must transform “experiential knowledge” into factual information that Watson can understand.
This is no easy task, but it could hold the key to improving Watson’s performance in the future. “In the short term, one way of making Watson smarter is to actually think more carefully about how to represent that real-world knowledge or that context that we all have in a way that we could actually make it available to Watson,” Nyberg said.
Next up for Carnegie Mellon is to develop a fast and inexpensive way to build Watson clones.
“We’ve already moved ahead on a new project called machine reading, which is addressing 'how do you build something like Watson very quickly and inexpensively for a new topic?'” Nyberg said.
“Even though the technology works and it’s very impressive, the question now is, 'OK, how can we cost-effectively develop new applications of Watson with less time and money?'”