Humans Willing to Blame Robots for Causing Harm

College students argued with a lying Robovie robot during a social psychology experiment. (Image credit: HINTS lab at the University of Washington)

A talking robot car that ran over a child or a battlefield robot that shot innocent civilians might never be taken to court, but a new experiment suggests that humans are likely to place blame on their mechanical servants as though the robots were people. The social psychology experiment, which involved a robot programmed to tell a lie, showed college students holding the robot morally accountable for its actions more often than not.

The college students did not consider "Robovie" morally accountable on a human level, but they judged the robot to be somewhere between a human and a vending machine. Many became noticeably upset and confrontational when the robot lied about how many items the students had found in a scavenger hunt, preventing them from winning a $20 prize.

"Most argued with Robovie," said Heather Gary, a doctoral student in developmental psychology at the University of Washington in Seattle. "Some accused Robovie of lying or cheating."

About 65 percent of the 40 students said Robovie was at least somewhat morally accountable for lying.

A handful of accidental deaths at robotic hands have occurred so far, and in none of those cases was blame placed on the robot. But the experiment suggests that future humanoid robots capable of socially interacting with humans will face moral judgments.

Humans could grow upset with their robot servants for stepping on a household pet, for instance, or feel resentful toward their talking robot car if a malfunction led to a deadly accident. On the battlefield, survivors of a robotic rampage might similarly direct their anger at a humanoid military robot.

Gary and her colleagues suggested that militaries need to consider the moral accountability of robot warriors in cases where robots hurt humans. Their paper, titled "Do Humans Hold a Humanoid Robot Morally Accountable for the Harm It Causes?," appears in the proceedings of the International Conference on Human-Robot Interaction.

But it's not all bad news for robot-human relations. Before it lied, Robovie got students to warm to it by making small talk about its hobbyist interest in bonsai trees and by trying to crack a joke.

A majority of the students said they would like to spend time with Robovie if they were lonely (63 percent), could generally trust Robovie (63 percent), believed Robovie could be their friend (70 percent), and would forgive Robovie if it upset them (78 percent).

But fewer than half the students said they would seek comfort from Robovie if they were sad (38 percent) or consider the robot an intimate friend (5 percent). One participant's interview highlighted these conflicting feelings toward the robot.

"I think that it would be calming to physically talk to something," the student said. "I almost said 'someone,' but I realized Robovie's not a someone. Uh, but I think it would be a good replacement for interpersonal connection. If you can't, like if there's not anyone around for you to talk to, I totally would’ve had a chat with Robovie."

Such mixed results suggest tomorrow's world of humans working with robots and perhaps even loving robot partners will be complex and freighted with emotional baggage — at least on the human side.

This story was provided by InnovationNewsDaily, a sister site to LiveScience.
