Sit, Heel, Compute: Computers Learn Better by Imitating Dogs

A virtual dog ready for training (Image credit: Bei Peng, James MacGlashan, Robert Loftin, Michael L. Littman, David L. Roberts, Matthew E. Taylor)

From guide dogs for the visually impaired to search-and-rescue animals, canines can be trained to help with a wide range of critical tasks. So, it might come as no surprise that researchers are now designing machines to learn more like dogs.

Computer scientists have modeled machines to learn like dogs, with the short-term goal of improving human interactions with robots and the long-term hope of more efficiently training service animals.

These machines rely on human feedback. Real animal trainees, like dogs, also provide helpful, subtle cues about their understanding to human trainers, and now that aspect of a training relationship is being transferred to machine learning. [Super-Intelligent Machines: 7 Robotic Futures]

"Just about anybody can teach a dog to sit," said David Roberts, an assistant professor at North Carolina State University who studies video game design and dog training."But right now, you can't teach your computer to sit." That is, making even simple changes to the behavior of a machine typically requires tweaking pre-programmed settings, or would require a user who is proficient in computer programming.

Demonstrating the results of new research, however, trainers recently provided commands to virtual dogs and then gave the machine animals feedback (positive, negative or neutral) as they attempted to complete each task. The research was presented at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), which was held May 9-13 in Singapore.
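The article doesn't include the researchers' implementation, so the snippet below is only a minimal Python sketch of the kind of training loop it describes: a trainer issues a command, the virtual dog attempts an action, and a positive, negative or neutral signal nudges the dog's learned preferences. The class name, command set, update rule and learning rate here are illustrative assumptions, not the researchers' actual system.

    # Rough sketch (not the authors' algorithm): a virtual dog that adjusts
    # its preference for actions based on positive/negative/neutral feedback.
    import random
    from collections import defaultdict

    ACTIONS = ["sit", "heel", "fetch"]          # hypothetical command set

    class VirtualDog:
        def __init__(self, learning_rate=0.2):
            # preference[command][action]: how strongly the dog associates
            # each action with each spoken command (starts at 0.0)
            self.preference = defaultdict(lambda: defaultdict(float))
            self.learning_rate = learning_rate

        def respond(self, command):
            # Pick the best-known action, breaking ties randomly (exploration).
            prefs = self.preference[command]
            return max(ACTIONS, key=lambda a: (prefs[a], random.random()))

        def receive_feedback(self, command, action, feedback):
            # feedback is +1 (reward), -1 (punishment) or 0 (neutral).
            self.preference[command][action] += self.learning_rate * feedback

    # One training step: the trainer says "sit", the dog acts, the trainer reacts.
    dog = VirtualDog()
    action = dog.respond("sit")
    feedback = +1 if action == "sit" else -1    # trainer's judgment
    dog.receive_feedback("sit", action, feedback)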

The researchers had previously developed a program that allowed their robot dogs to learn from human trainers who were giving different styles of feedback. The newest study added a way for the robots to provide information back to the trainers. When some of the virtual dogs were confident in their understanding of a command, they tended to move quickly, but if they weren't sure what to do, their actions typically slowed down. These behaviors are much like what would be exhibited by a real dog, the researchers said.
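The article doesn't spell out how the virtual dogs measure their own confidence, but one simple way to turn learned preferences into a speed signal, sketched here purely as an assumption, is to compare the best action's preference with the runner-up's: a large gap reads as confidence and maps to fast movement, a small gap to hesitation.

    # Assumed mechanism, not taken from the paper: map the gap between the
    # top two learned preferences to an execution speed.
    def execution_speed(preferences, min_speed=0.2, max_speed=1.0):
        """preferences: mapping from action name to learned preference value."""
        values = sorted(preferences.values(), reverse=True)
        gap = values[0] - values[1] if len(values) > 1 else 0.0
        confidence = min(max(gap, 0.0), 1.0)   # clamp the gap into [0, 1]
        # A confident agent (large gap) moves near max_speed; an unsure one crawls.
        return min_speed + (max_speed - min_speed) * confidence

    # A dog that strongly prefers "sit" over the alternatives moves quickly...
    print(execution_speed({"sit": 0.9, "heel": 0.1, "fetch": 0.0}))    # 0.84
    # ...while one with nearly tied preferences hesitates.
    print(execution_speed({"sit": 0.35, "heel": 0.30, "fetch": 0.25})) # 0.24

However the real system computes it, the effect a trainer sees is the same kind of implicit cue a live dog gives: hesitation when the link between command and action is still ambiguous.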

"When an animal is very confident, then they are more likely to perform that behavior with great energy and great speed and great enthusiasm," Roberts told Live Science. "I wouldn't say there are explicit signals — there's sort of a general feeling or sense you get when you see [that] the animal gets it."

A robot varying its speed is "implicitly communicating its uncertainty," study co-author Matthew Taylor told Live Science in an email. Taylor is the director of the Intelligent Robot Learning Laboratory at Washington State University.

The variable-speed robot dogs, the ones that gave trainers extra information, performed better than fixed-speed dogs in a variety of measures, the researchers said. For example, the variable-speed dogs took less time to complete a complex task than dogs that always moved quickly or slowly.

However, although the variable-speed dogs received higher-quality feedback from trainers, the trainers reported that they preferred working with fixed-speed dogs. "It's not entirely clear why they didn't like it as much," Roberts said.

Taylor suggested that users didn't understand the reason for the changing speeds. He said he hopes that if users better understand why the dogs speed up and slow down, they might come to appreciate the variable speeds.

With further development of this style of artificial intelligence, Roberts thinks users could intuitively adjust their own behavior "to more effectively customize the behavior of their gadgets," he said.

And while dogs or robots with specialized skills currently require specialized trainers, the researchers still have an eye toward the in-demand tasks, like drug detection, performed by real-life canines. Taylor wrote, "The (very) long-term goal is to be able to automatically train dogs so that we can produce more service dogs at much lower cost."


Greg Uyeno is a science journalist. He has studied cognitive science at the University of California, Berkeley and journalism at New York University. He’s always interested in the language of science and the science of language.