'Donkey Kong' Smashes Neuroscientists in Thought Experiment
Never mind unraveling the mysteries of the human brain. A new study suggests that neuroscientists might not even have the analytical tools to understand the far simpler logic that drives the "brain" in "Donkey Kong."
In a thought experiment, two researchers asked the question: Could a neuroscientist understand a microprocessor? That is, if one considers the human brain to be an extremely complicated computer, could the field's widely used analytical approaches make sense of a far simpler computer?
How simple? They decided to try the Atari 2600, the game console — powered by what was then a blisteringly fast MOS 6502 microprocessor — that in the early 1980s brought the menacing, chest-beating, damsel-snatching gorilla named Donkey Kong into living rooms.
The researchers — Eric Jonas, a postdoctoral fellow at the University of California, Berkeley, and Konrad Kording, a professor of physical medicine and rehabilitation/physiology at Northwestern University in Chicago — chose the Atari 2600 as their "model organism" because it was complicated enough to present an analytical challenge, yet the engineers who created it had mapped it out thoroughly and understood it completely.
To mimic a typical brain study, they examined three types of "behaviors" for the Atari 2600 in the form of three different games: "Donkey Kong," "Space Invaders" and "Pitfall!" They then applied some of the data analysis methods that are commonly used in neuroscience to see whether those methods would reveal how the Atari "brain" — its microprocessor — processes information.
The methods did "reveal interesting structure" within the microprocessor, the researchers wrote in the paper describing the experiment. "However, in the case of the processor, we know its function and structure, and our results stayed well short of what we would call a satisfying understanding" of the Atari brain.
The results of their experiment were published today (Jan. 12) in the journal PLOS Computational Biology.
The field of neuroscience is expecting a windfall of data from new, large and well-funded research programs that have been developed to understand the human mind, like the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, Jonas told Live Science. Yet Jonas said that he questions the value of such data if the results cannot be properly understood.
"As people doing computational neuroscience, we really struggle to make sense of even the comparatively small data we acquire today, partly because we lack any sort of 'ground truth,'" Jonas said. "But if various synthetic systems like classic microprocessors can serve as a test bed, maybe we can make faster progress."
So, is it "game over" for neuroscience's current methods?
"I am actually very positive about progress in neuroscience," said Kording, who is also a research scientist at the Rehabilitation Institute of Chicago. "The fact that the field is able to take our contribution seriously shows that they at least have plans to overcome the problems we highlight."
Kording said that more than 80,000 people viewed an earlier version of the paper on a preprint server. Many loved it, he said, although many hated it, too. But he was happy that he and Jonas had started a dialogue.
Terrence Sejnowski, who directs the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in San Diego, told Live Science that he appreciates the need for researchers to develop a better conceptual framework for understanding neural processing. Indeed, Sejnowski was the first author on a 2014 paper in the journal Nature Neuroscience, which many in the field consider to be a road map for how to analyze the massive and diverse sets of neuroscience data that are expected to come from research projects in the coming years.
But he's not convinced that the Atari 2600 is a suitable model organism for testing out neuroscience's analytical tools.
"The microprocessor and the brain are two completely different types of computers, and one should not be surprised that different methods are needed to analyze them," Sejnowski said. "Let's do the converse experiment and analyze the brain using methods that work for micros [microchips], using a logic analyzer. This works great in reverse-engineering micros but would fail completely with the brain because the brain isn't a digital chip."
To be sure, the brain is a daunting kind of computer. And as neuroscientists go about unraveling its mysteries, they must feel a bit like little Mario, forever battling obstacles in their seemingly endless journey into unknown realms.
Christopher Wanjek is a Live Science contributor and a health and science writer. He is the author of three science books: Spacefarers (2020), Food at Work (2005) and Bad Medicine (2003). His "Food at Work" book and project, concerning workers' health, safety and productivity, was commissioned by the U.N.'s International Labor Organization. For Live Science, Christopher covers public health, nutrition and biology, and he has written extensively for The Washington Post and Sky & Telescope among others, as well as for the NASA Goddard Space Flight Center, where he was a senior writer. Christopher holds a Master of Health degree from Harvard School of Public Health and a degree in journalism from Temple University.