
How to Raise a Moral Robot (Op-Ed)

The "feeling" robot Chappie and his maker, Deon, face off in the film.
The "feeling" robot Chappie and his maker, Deon, face off in the film. (Image credit: Sony Pictures)

Bertram Malle is a professor of cognitive, linguistic and psychological sciences at Brown University and co-leader of Brown's Humanity-Centered Robotics Initiative, which studies human-robot interactions that can meet pressing societal needs and also raise important ethical, legal and economic questions. He contributed this article to Live Science's Expert Voices: Op-Ed & Insights.

Note: This article contains spoilers for the film "Chappie."

In the future, humans who create robots will be a lot more intelligent — and their robots will be a lot more moral — than those portrayed in the recent film "Chappie." Unlike in the movie, humans will not leave the master key for reprogramming their superintelligent agents in a storage locker without a checkout procedure; they will not let a person with a violent streak maneuver a massive killing machine without supervision; and they will know how to block a user from dumping a virus into the metal brains of an entire city's police robot fleet.

Robots, for their part, will not be designed to shoot and kill a criminal when that human poses no threat. Robots with armor so strong that guns at close range cannot destroy them will simply walk up to criminals and take their guns away. Likewise, robots that know that a heist is a crime (and refuse to engage in it) will also know that whacking a car and tossing a person around are crimes (and refuse to engage in them).

But for all it gets wrong, the movie rightly touches on perhaps the pivotal challenge of safely integrating robots into society: learning. Humans are arguably the most powerful learning machines in the universe (as we know it), and if robots are to be part of human society, they will have to become at least second best at learning.

Humans are born ignorant and dependent, desperately needing others to help them gain knowledge and skills. Humans have created cities, science and poetry because of their immense learning capacity, which is unleashed when they grow up in social communities in which everybody is their teacher.

The conclusion that true intelligence comes from learning, not just programming, is gaining acceptance in the artificial intelligence (AI) and robotics communities. A growing number of machine-learning approaches are now available, including inverse reinforcement learning, hierarchical Bayesian models, deep learning, apprenticeship learning and learning by demonstration. With those tools, robots can flexibly assimilate new information, turn that information into policies and learn from feedback — all of which enables robots to optimize their actions in dynamically changing environments.

But the drive for AI to require less programming and more learning must have its limits, and that is one thing "Chappie" shows us. The helpless, ignorant robot in the movie learns quickly from those around it. The problem is that those around it are a group of criminals, foul language and all. If we succeed in building sophisticated robots that learn, we will have to set limits on how robots learn. If robots are allowed to learn anything they can and want, in whatever environment they find themselves, they may be just as likely to become brutal bullies as sagacious saints.
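To see the stakes in miniature, consider a toy sketch of feedback-driven policy learning. Everything in it is invented for illustration (the states, the actions, the teacher); the point is simply that a learner of this kind optimizes whatever its teachers happen to reward.

```python
# A toy sketch (invented for illustration, not a real system) of
# feedback-driven policy learning: the robot turns teachers' reward
# signals into an action policy, with no notion of right and wrong.
import random
from collections import defaultdict

ACTIONS = ["share", "steal", "help", "threaten"]

class FeedbackLearner:
    def __init__(self, learning_rate=0.5, exploration=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.lr = learning_rate
        self.eps = exploration

    def choose(self, state):
        # Mostly pick the best-known action; occasionally explore.
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, feedback):
        # Nudge the value estimate toward the teacher's feedback.
        key = (state, action)
        self.q[key] += self.lr * (feedback - self.q[key])

# A community of criminal teachers produces a criminal policy:
robot = FeedbackLearner()
for _ in range(200):
    action = robot.choose("bystander_nearby")
    feedback = 1.0 if action == "steal" else -1.0
    robot.learn("bystander_nearby", action, feedback)
print(robot.choose("bystander_nearby"))   # almost certainly "steal"
```

Nothing in the learner distinguishes a saint's lessons from a thief's; that is the gap the two approaches below try to close.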

One way to tackle that problem is reactionary robot learning, where programmers establish rules, laws and protocols that prohibit a robot from learning anything that is socially undesirable. 
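Continuing the toy learner above, a minimal sketch of that gatekeeping might look like this (the PROHIBITED set is an invented placeholder for whatever rules programmers would actually write):

```python
# Continuing the FeedbackLearner sketch above: hard-coded rules veto
# undesirable lessons before the learner can absorb them.
PROHIBITED = {"steal", "threaten"}   # invented placeholder rules

class GatedLearner(FeedbackLearner):
    def learn(self, state, action, feedback):
        # Refuse to reinforce a prohibited action, no matter what the
        # teacher rewards; every other lesson passes through unchanged.
        if action in PROHIBITED and feedback > 0:
            return
        super().learn(state, action, feedback)
```

The weakness of such a blunt veto is that programmers must enumerate every socially undesirable behavior in advance; the more moderate alternative below avoids that.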

A more moderate approach would be democratic robot learning, in which programmers hard-code a small number of fundamental norms into the robot and let it learn the remaining context-specific norms through its interactions with the community in which it is raised. The fundamental norms will have to include the prevention of harm (especially to humans), but also politeness and respect, without which social interactions could not succeed. A host of specific norms will then translate the abstract norms into concrete behavior (e.g., what it means to be polite in a particular context) and define the conditions under which one fundamental norm can supersede another (e.g., it's OK to drop politeness when one tries to save someone from harm).

Democratic robot learning would also guide a robot in dealing with contradictory teachers. Say one person tries to teach the robot to share, and another tries to teach it to steal. In that case, the robot should ask the community at large who the legitimate teacher is. After all, the norms and morals of a community are typically held by at least a majority of its members. Just as humans have a natural tendency to look to their peers for guidance, thoughtful crowdsourcing should be another principle that learning robots obey.
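Here is one way that architecture might look as a sketch, with all norms, weights and examples invented for illustration: a few fundamental norms are fixed in code with a precedence order, and contradictory teachers are resolved by polling the community.

```python
# An illustrative sketch of "democratic robot learning": fundamental
# norms are hard-coded with a precedence order; conflicting lessons
# are settled by majority vote. Names and weights are invented.
from collections import Counter

FUNDAMENTAL = {"prevent_harm": 3, "respect": 2, "politeness": 1}  # precedence

def permitted(action_norms):
    """An action is permitted unless it violates a fundamental norm
    that no higher-ranked norm it serves can override (e.g., dropping
    politeness is acceptable when preventing harm)."""
    violated = [n for n in FUNDAMENTAL if n in action_norms.get("violates", [])]
    served = [n for n in FUNDAMENTAL if n in action_norms.get("serves", [])]
    if not violated:
        return True
    worst = max(FUNDAMENTAL[n] for n in violated)
    best = max((FUNDAMENTAL[n] for n in served), default=0)
    return best > worst

def resolve_teachers(lessons):
    """Crowdsource contradictory instruction: adopt the lesson held
    by the majority of the community."""
    return Counter(lessons).most_common(1)[0][0]

# Interrupting a conversation (impolite) to pull someone from traffic:
print(permitted({"violates": ["politeness"], "serves": ["prevent_harm"]}))  # True
# One teacher says "steal," nine say "share":
print(resolve_teachers(["steal"] + ["share"] * 9))  # "share"
```

In a real system the context-specific norms would be learned rather than listed; the sketch only shows how fixed precedence and majority polling could anchor that learning.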


But won't such learning robots take over the world and wipe out humanity? They likely won't, because the community in which they grow up will teach them better. In addition, we can equip robots with an unwavering prosocial orientation. As a result, they will follow moral norms more consistently than humans do, because, unlike humans, they will not experience those norms as conflicting with their own selfish needs. And in the rare case of a robot's deviant, antisocial action, we can check the entire record of the robot's decision making, determine exactly what went wrong and correct it. In most cases of human deviance, we have little insight into what went wrong in people's complex brains.

Perhaps the greatest threat from robots comes from the greatest weakness of humans: hatred and conflict between groups. By and large, humans are cooperative and benevolent toward those whom they consider part of their group, but they can become malevolent and ruthless toward those outside it. If robots learn such hostile sentiments and discriminatory actions, they may very well become a threat to humanity, or at least a threat to the groups that a robot counts as "outside" its community.

Somehow, society will have to protect robots from continuing this dark human heritage.

If we succeed, then we can trust robots to be helpful to humanity as a whole — lending a hand in production, health care, education and elder care. That is the AI we should encourage scientists to pursue, and those are the robots we should collectively raise.

The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Live Science.
