Robots Learn to Lie
In an experiment performed in a Swiss laboratory, 10 robots with downward-facing sensors competed for "food" - a light-colored ring on the floor. At the other end of the space, a darker ring - "poison" - was placed. The robots earned points for how much time they spent near food as opposed to poison.
The experimenters, engineers Sara Mitri and Dario Floreano and evolutionary biologist Laurent Keller, also gave the robots the ability to talk with each other. Each robot could produce a blue light visible to the others, which could give away the position of the "food" ring. Over time, the robots evolved to deceive each other about the food ring.
Their evolution was made possible by the artificial neural network that controlled each of the robots. The network consisted of 11 "neurons" connected to the robot's sensors and 3 that controlled its two tracks and its blue light. The neurons were linked by 33 connections - "synapses" - and the strength of each connection was set by a single 8-bit gene. In total, each robot's 264-bit genome determined how it reacted to information gleaned from its senses.
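The genome-to-network mapping described above can be sketched in a few lines of Python. The gene counts (33 synapses, 8 bits each, 264 bits total) come from the article; the exact weight range the bytes map to is not stated, so the scaling into [-1.0, 1.0] below is an assumption for illustration.

```python
import random

N_SYNAPSES = 33      # connections ("synapses") reported in the experiment
BITS_PER_GENE = 8    # one 8-bit gene per synapse weight
GENOME_BITS = N_SYNAPSES * BITS_PER_GENE  # 264 bits total

def random_genome():
    """A genome is a flat list of 264 bits."""
    return [random.randint(0, 1) for _ in range(GENOME_BITS)]

def decode_weights(genome):
    """Map each 8-bit gene to a synapse weight.

    The weight range is an assumption: the unsigned byte value
    0..255 is scaled linearly into [-1.0, 1.0].
    """
    weights = []
    for i in range(N_SYNAPSES):
        bits = genome[i * BITS_PER_GENE:(i + 1) * BITS_PER_GENE]
        value = int("".join(map(str, bits)), 2)  # 0..255
        weights.append(value / 127.5 - 1.0)      # -1.0..1.0
    return weights

genome = random_genome()
weights = decode_weights(genome)
```

Because every weight is determined by its own gene, a single bit flip changes exactly one connection strength - which is what makes the behavior evolvable by mutation and selection.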
The researchers devised a system of rounds in which groups of ten robots competed for "food" in separate arenas. After 100 rounds, the robots with the highest scores - the fittest of the population, in the Darwinian sense - "survived" to the next round.
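The selection step can be sketched minimally. The article says only that the highest scorers "survived" to the next round, so the survivor count, the truncation-selection scheme, and the per-bit mutation rate below are all assumptions, not the authors' actual operators.

```python
import random

GENOME_BITS = 264       # from the experiment: 33 synapses x 8-bit genes
MUTATION_RATE = 0.01    # per-bit flip probability (an assumed value)

def mutate(genome):
    """Flip each bit independently with a small probability."""
    return [b ^ 1 if random.random() < MUTATION_RATE else b
            for b in genome]

def next_generation(population, scores, survivors=5):
    """Truncation selection: keep the top scorers, then refill the
    population with mutated copies of those survivors."""
    ranked = [g for _, g in sorted(zip(scores, population),
                                   key=lambda pair: pair[0],
                                   reverse=True)]
    parents = ranked[:survivors]
    return [mutate(random.choice(parents))
            for _ in range(len(population))]
```

Repeated over many rounds, this loop is enough to let informative - or deceptive - light-emission strategies spread through the population whenever they raise a robot's score.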
At the start, the robots produced blue light at random. However, as the robots became better at finding food, the light became more and more informative and the bots became increasingly drawn to it. The food ring was large enough for just eight robots, so they had to jostle each other for the right to "feed". The effects of this competition became clear when Mitri, Floreano and Keller allowed the emission of blue light to evolve along with the rest of the robots' behavior.
As before, the lights shone randomly at first, and as the robots started to crowd around the food, their lights increasingly gave away its presence. The more successful robots became more secretive. By the 50th generation, they were much less likely to shine their lights near the food than elsewhere in the arena.
The research, reported in the Proceedings of the National Academy of Sciences, was written about in detail at ScienceBlogs.
Science fiction writers have given us some idea of what might happen when artificially intelligent beings lie. You may recall that in the excellent film version of Arthur C. Clarke's 1982 novel 2010, Dr. Chandra learns at last why the HAL-9000 computer killed one of the astronauts in the earlier 1968 film 2001: A Space Odyssey.
"... he was given full knowledge of the two objectives and was told not to reveal these objectives to Bowman or Poole. He was instructed to lie...
The situation was in conflict with the basic purpose of HAL's design - the accurate processing of information without distortion or concealment. He became trapped... HAL was told to lie - by people who find it easy to lie."
As we all know, people lie all the time. If robots have to deal with human beings, and live and work with them, should robots be allowed to learn to lie - if only for their own good?
This Science Fiction in the News story used with permission of Technovelgy.com.