Science Fiction or Fact: Could a 'Robopocalypse' Wipe Out Humans?

Our terrifying nemeses in the coming war of man versus machine? Terminators on the rampage in "Terminator 2: Judgment Day." (Image credit: TriStar Pictures)

In this weekly series, Life's Little Mysteries explores the plausibility of popular sci-fi concepts. Warning: Some spoilers ahead!

If a bunch of sci-fi flicks have it right, a war pitting humanity against machines will someday destroy civilization. Two popular movie series based on such a "robopocalypse," the "Terminator" and "Matrix" franchises, are among those that suggest granting greater autonomy to artificially intelligent machines will end up dooming our species. (Only temporarily, of course, thanks to John Connor and Neo.)

Given the current pace of technological development, does the "robopocalypse" scenario seem more far-fetched or prophetic? The fate of the world could tip in either direction, depending on who you ask.

While computer science researchers disagree on the road ahead for machines, they say our relationship with them probably will be harmonious, not murderous. Yet there are a number of scenarios that could end with non-biological beings aiming to exterminate us.

"The technology already exists to build a system that will destroy the whole world, intentionally or unintentionally, if it just detects the right conditions," said Shlomo Zilberstein, a professor of computer science at the University of Massachusetts.

Machines at our command

Let's first consider the optimistic viewpoint: that machines always will act as our servants, not the other way around.

"One approach is not to develop systems that can be so dangerous if they are out of control," Zilberstein said.

Something like Skynet – the computerized defense network in "The Terminator" that decides to wipe out humanity – is already possible. So why has such a system not been built? A big reason: Nuclear-armed nations such as the United States would not want to turn over any of the responsibility for launching warheads to a computer. "What if there is a bug in the system? No one is going to take that risk," said Zilberstein. [What If There Were Another Technologically Advanced Species?]

On a smaller scale, however, a high degree of autonomy has been granted to Predator drones flying over the Middle East. "The number of robotic systems that can actually pull the trigger autonomously is already growing," said Zilberstein.

Still, a human operator monitors each drone and has the final say on whether to proceed with a missile strike. That certainly is not the case with Skynet, which, in the "Terminator" films, is given control of America's entire nuclear arsenal.

In "The Terminator," the military creates the program with the objective of reducing human error and slowness of response in case of an attack on the U.S.

When its human controllers realize the danger posed by an all-powerful Skynet, they try to shut it down. Skynet interprets the attempt as a threat to its existence and, to counter its perceived human enemy, launches America's nukes at Russia, provoking a retaliatory strike. Billions die in a nuclear holocaust. Skynet then builds factories that churn out robot armies to eliminate the remainder of humankind.

In a real-life scenario, Zilberstein thinks simple safeguards would prevent an autonomous system, such as one guarding a country's borders, from threatening more people than it is designed to. Plus, no such system would be programmed with the ability to make broad strategic decisions the way Skynet does.

"All the systems we're likely to build in the-near future will have specific abilities," Zilberstein said. "They will be able to monitor a region and maybe shoot, but they will not replace a [human] general."

Robots exceeding our grasp

Michael Dyer, a computer scientist at the University of California, Los Angeles, is less optimistic. He thinks "humans will ultimately be replaced by machines" and that the transition might not be peaceful. [Americans Want Robots, and They're Willing to Pay]

Continued progress in artificial intelligence research will produce machines as smart as we are within the next couple of hundred years, Dyer predicts. "Advanced civilizations reach a point of enough intelligence to understand how their own brain works, and then they build synthetic versions of themselves," he said.

The desire to do so might spring from attempts at establishing our own immortality, and that opportunity might be too much for humanity to resist. (Who wouldn't want to spend their ever-after with their consciousness walking around in a robot shell?)

Maybe that sort of changeover from biology to technology goes relatively smoothly. Other rise-of-the-machines scenarios are far rockier.

Dyer suggests a new arms race in robotic systems could end with one side running rampant. "In the case of warfare, by definition, the enemy side has no control of the robots that are trying to kill them," Dyer said. Like Skynet, the manufactured might turn against the manufacturers.

Or an innocuous-seeming overdependence on robots spirals out of control. Suppose a factory that makes robots stops following human commands, so an order is issued to shut off power to the factory. "But unfortunately, robots happen to manage the power station and so they refuse. So a command is issued by humans to stop the trucks from delivering necessary materials to the factory, but the drivers are robots, so they also refuse," Dyer said.

Perhaps using the Internet, robotic intelligences wrest control of a society that depends too much on its automata. ("The Animatrix," a 2003 collection of animated shorts that includes back stories for "The Matrix" movies, depicts such a situation.)

Overall, a bit of wisdom should keep humankind out of the traps dreamed up by Hollywood screenwriters. But the profit motive has certainly driven companies toward ever more automation, and the Cold War's reliance on the threat of mutually assured destruction shows that rationality does not always win.

"Doomsday scenarios are pretty easy to create, and I wouldn't rule out that kind of possibility," said Zilberstein. "But I'm personally not that worried."

Plausibility rating: Military leaders and corporations probably will not be so stupid as to grant high levels of programmed autonomy to catastrophically powerful weapon systems and critical industrial sectors. We give the "robopocalypse" two out of four Rocketboys.

This story was provided by Life's Little Mysteries, a sister site to LiveScience.

Adam Hadhazy
Adam Hadhazy is a contributing writer for Live Science and Space.com. He often writes about physics, psychology, animal behavior and, in general, story topics that explore the blurring line between today's science fiction and tomorrow's science fact. Adam has a Master of Arts degree from the Arthur L. Carter Journalism Institute at New York University and a Bachelor of Arts degree from Boston College. When not squeezing in reruns of Star Trek, Adam likes hurling a Frisbee or dining on spicy food. You can check out more of his work at www.adamhadhazy.com.