Sneaky Robots Taught the Art of Deception

The movie "Terminator Salvation" tells of the human resistance struggling to defeat Skynet and its robot army. (Image credit: Warner Bros.)

Imagine a robot deceiving its enemies by hiding so it won't get caught.

It's not a scene from one of the "Terminator" movies — it's the result of what may be the first detailed experiments into giving robots the capabilities for deceptive behavior.

"We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine, and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered," said researcher Ronald Arkin, a roboticist at the Georgia Institute of Technology.

The researchers first programmed robots to hide from other robots. They taught the deceivers to recognize situations that warranted craftiness: there had to be conflict between the deceivers and those seeking them, and the deceivers had to benefit from the deception. Once a robot deemed that a situation warranted deceit, it provided false information to benefit itself, basing its ploy on what it knew of its victim's capabilities and desires.
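The two conditions described above can be captured in a minimal sketch. The function names, the belief scores, and the signal-selection rule are illustrative assumptions for this article, not the researchers' actual algorithms.

```python
# Sketch of the deception-decision rule described above.
# Names and the partner-model representation are hypothetical.

def should_deceive(in_conflict: bool, benefits_from_deception: bool) -> bool:
    """Deception is warranted only when both conditions hold:
    the parties are in conflict, and the deceiver stands to gain."""
    return in_conflict and benefits_from_deception

def pick_false_signal(partner_beliefs: dict) -> str:
    """Choose the false signal the deceiver's model of its victim
    predicts is most likely to be believed. partner_beliefs maps
    each candidate signal to an estimated probability of success."""
    return max(partner_beliefs, key=partner_beliefs.get)
```

For example, a robot in conflict that models its pursuer as more likely to trust a trail to the right would emit that signal while actually going elsewhere.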

The researchers then ran 20 hide-and-seek experiments with two robots equipped with cameras. Colored markers were set up along three pathways that led to locations where the deceiving robot could hide. The deceiver randomly chose a hiding place from among the three possible locations.

The deceiver knew it had a pursuer looking for evidence of where it went. As such, before the deceiver moved toward its true destination, it knocked down markers leading to a different location. In other words, the deceiver created a false trail for the robot that went looking for it — "for example, that it was going to the right and then actually go to the left," explained engineer Alan Wagner, also of Georgia Tech.
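The false-trail tactic amounts to choosing a real hiding path and a different decoy path whose markers get knocked down. The sketch below assumes three labeled paths; it is a toy illustration, not the experimental code.

```python
import random

# Illustrative sketch of the false-trail tactic, assuming three
# hiding paths as in the experiments. Path labels are hypothetical.
PATHS = ("left", "center", "right")

def plan_false_trail():
    """Pick a true hiding path, then pick a different path whose
    markers the deceiver will knock down to mislead the seeker."""
    true_path = random.choice(PATHS)
    decoy_path = random.choice([p for p in PATHS if p != true_path])
    return {"hide_at": true_path, "knock_markers_on": decoy_path}
```

A seeker that reads the toppled markers as evidence of passage is then drawn down the decoy path while the deceiver hides elsewhere.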

The deceivers were able to fool the seekers 75 percent of the time, with failures resulting from the hiding robot's inability to knock over the correct markers. "The experimental results weren't perfect, but they demonstrated the learning and use of deception signals by real robots," Wagner said.

Although the researchers explored robot-robot deception, they noted their work could apply to robot-human interactions as well.

Deceptive robots on the battlefield could hide and trick opponents. Crafty robots in search and rescue operations might even mislead victims about how long it would take help to really arrive in order to calm them down or receive cooperation, the researchers suggested.

Still, Arkin noted that deceptive robots could pose ethical dilemmas.

"Machiavelli said that deception was something noble and appropriate in the context of warfare, and despicable in anything else," Arkin told TechNewsDaily. "When is deception appropriate, and when isn't it? Should robots never tell a lie, or are there circumstances that can warrant it? The nature of this research shouldn't be taken lightly, and it should be publicized that it's fairly easy to implement."

Wagner and Arkin detailed their findings online September 3 in the International Journal of Social Robotics.

Charles Q. Choi
Live Science Contributor
Charles Q. Choi is a contributing writer for Live Science. He covers all things human origins and astronomy, as well as physics, animals and general science topics. Charles has a Master of Arts degree from the University of Missouri-Columbia School of Journalism and a Bachelor of Arts degree from the University of South Florida. Charles has visited every continent on Earth, drinking rancid yak butter tea in Lhasa, snorkeling with sea lions in the Galapagos and even climbing an iceberg in Antarctica.