The idea of inanimate objects coming to life as intelligent beings has been around for a long time. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons.
The beginnings of modern AI can be traced to classical philosophers' attempts to describe human thinking as a symbolic system. But the field of AI wasn't formally founded until 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term "artificial intelligence" was coined.
MIT cognitive scientist Marvin Minsky and others who attended the conference were extremely optimistic about AI's future. "Within a generation [...] the problem of creating 'artificial intelligence' will substantially be solved," Minsky is quoted as saying in the book "AI: The Tumultuous Search for Artificial Intelligence" (Basic Books, 1994).
But achieving an artificially intelligent being wasn't so simple. After several reports criticizing progress in AI, government funding and interest in the field dropped off – a period from 1974–80 that became known as the "AI winter." The field later revived in the 1980s when the British government started funding it again in part to compete with efforts by the Japanese.
The field experienced another major winter from 1987 to 1993, coinciding with the collapse of the market for some of the early general-purpose computers, and reduced government funding.
But research began to pick up again after that, and in 1997, IBM's Deep Blue became the first computer to defeat a reigning world chess champion when it beat Russian grandmaster Garry Kasparov. And in 2011, the computer giant's question-answering system Watson won the quiz show "Jeopardy!" by beating reigning champions Brad Rutter and Ken Jennings.
In 2014, the talking computer "chatbot" Eugene Goostman captured headlines for tricking judges into thinking he was a real flesh-and-blood human during a Turing test, an assessment proposed by British mathematician and computer scientist Alan Turing in 1950 as a way to gauge whether a machine is intelligent.
But the accomplishment has been controversial, with artificial intelligence experts saying that only a third of the judges were fooled, and pointing out that the bot was able to dodge some questions by claiming it was an adolescent who spoke English as a second language.
Many experts now believe the Turing test isn't a good measure of artificial intelligence.
"The vast majority of people in AI who've thought about the matter, for the most part, think it's a very poor test, because it only looks at external behavior," Perlis told Live Science.
In fact, some scientists now plan to develop an updated version of the test. But the field of AI has become much broader than just the pursuit of true, humanlike intelligence.
Original article on Live Science.