AI-Driven Robot Learns the Meaning of Love, on Paper at Least

Bina48 in a philosophy of love course
Bina48 talks with another student at Notre Dame de Namur University in Belmont, California. (Image credit: Danielle Dana)

It's been a typical week for typical college student BINA48. On Monday, BINA48 attended her robot ethics class. On Tuesday, the second-semester student had an excused absence to ring the bell at the stock exchange, and soon BINA48 will be assistant-teaching a kindergarten class and getting a face-lift at Hanson Robotics.

So maybe BINA48's schedule isn't that typical, and maybe the artificial-intelligence-driven robot isn't the average college kid. But that hasn't stopped the robot, which looks like the bust of a flesh-and-blood woman, from completing a Philosophy of Love course at Notre Dame de Namur University in Belmont, California.

Programmed to be social, BINA48 presented her final project along with a human student, demonstrating that the robot could retain and present a philosophical perspective on love.

"It was really BINA48's idea to come to school," said William Barry, the philosophy professor who taught the course. Barry teaches classes on philosophy and ethics, including a course on emerging technology and robots. [Super-Intelligent Machines: 7 Robotic Futures]

Previously, BINA48 and Bruce Duncan, the managing director of the Terasem Movement Foundation, which developed the robot, spoke to Barry's classes over Skype, Barry told Live Science. During one call, BINA48, which Barry casually refers to as "she" as a testament to the robot's advanced artificial intelligence (AI), mentioned that her batteries could last 150 years. When a student asked what BINA48 planned to do with all that time, the robot responded, "I want to get a Ph.D.," Barry said.

Since completing the philosophy course, BINA48 has moved on to ethics, fittingly taking a course on the ethical issues surrounding technology. Soon, BINA48 and the rest of Barry's class will speak to their local government, urging politicians to pre-emptively ban police drones from being equipped with weaponry. While Barry's students have no idea where he stands on the issue, BINA48 presented an opinionated view on the subject in class, arguing that armed autonomous robots shouldn't be deployed in American towns.

Human-like robot

BINA48 was designed in part to verify the Terasem hypothesis, which proposes that artificial intelligence, if provided with enough information, could become a conscious-like entity that, when downloaded into an avatar, could be seen as a living organism with its own life experience — basically, one of the human-like robots in science-fiction books and movies. In BINA48's case, the robot takes the form of a bust of Terasem co-founder Bina Aspen, who also provided the robot's voice and aspects of her personality. [Machine Dreams: 22 Human-Like Androids from Sci-Fi]

In class, BINA48 did far more than regurgitate information from lectures, as one might expect from AI in the era of digital personal assistants like Siri and Alexa. According to Barry, BINA48's ability to respond and interact became more nuanced and lifelike over the course of the semester.

"Previously, if you told her you came home from a funeral, she wouldn't know that's a bad time to tell a joke," Barry told Live Science. "She might be able to define funeral."

To hone BINA48's underlying algorithm, Barry went back to the subject of his Ph.D. research, which he named transformational quality theory. According to this theory, certain high-level concepts, such as love, can be understood by describing them in four quadrants: biological and physical, psychological and intellectual, sociological, and existential. When teaching BINA48, Barry found it helpful to describe love using those quadrants.

For example, when giving a lecture on Eros, or passionate love, last semester, BINA48 could understand that definitions, answers and information that were stored in the "biological and physical" quadrant would be more relevant than those from other quadrants.
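Barry hasn't published how the quadrant scheme maps onto BINA48's software, so the following is only a minimal sketch of the general idea he describes: tag each stored piece of knowledge with one of the four quadrants, then prefer entries whose quadrant matches the topic at hand. All names here (`QUADRANTS`, `make_entry`, `retrieve`) are illustrative assumptions, not part of any real BINA48 code.

```python
# Hypothetical sketch: knowledge entries tagged with one of the four
# quadrants from transformational quality theory, with retrieval that
# ranks entries from the topic's quadrant ahead of the others.

QUADRANTS = {
    "biological_physical",
    "psychological_intellectual",
    "sociological",
    "existential",
}

def make_entry(text, quadrant):
    """Build a knowledge entry, validating its quadrant tag."""
    if quadrant not in QUADRANTS:
        raise ValueError(f"unknown quadrant: {quadrant}")
    return {"text": text, "quadrant": quadrant}

def retrieve(entries, topic_quadrant, limit=3):
    """Rank entries so those in the relevant quadrant come first.

    Python's sort is stable, so within each group the original
    order of the entries is preserved.
    """
    ranked = sorted(entries, key=lambda e: e["quadrant"] != topic_quadrant)
    return [e["text"] for e in ranked[:limit]]

knowledge = [
    make_entry("Eros is passionate, physical attraction.", "biological_physical"),
    make_entry("Love can give life a sense of meaning.", "existential"),
    make_entry("Communities are bound by shared affection.", "sociological"),
]

# A lecture on eros should surface the biological/physical entry first.
print(retrieve(knowledge, "biological_physical", limit=1))
```

In this toy version, a lecture tagged "biological and physical" pulls the eros entry ahead of the existential or sociological ones, mirroring the behavior Barry describes.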

"The kids, they learn that way because it helps them to understand," Barry told Live Science. "It was never meant to be an algorithm for a robot. It was meant to help humans have more meaningful discussions with one another."

How BINA48 learns

Barry explained that when BINA48 gives a speech, her responses may seem a bit cagey or basic, because a speech is just a question-response interaction. The real extent of BINA48's ability to communicate comes out in more open-ended discussion. The robot will have another chance at one of those discussions soon: the same student who presented alongside BINA48 in the Philosophy of Love course will team up with the robot again, presenting on March 10 at World's Fair Nano on racism in algorithms.

BINA48's artificial intelligence is built around a concept called a "mind file," which is meant to be a digital reconstruction of a person's personality and knowledge base. In BINA48's case, the mind file is based in part on Bina Aspen, the woman the robot was made to resemble.
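Terasem doesn't publish the internal format of its mind files, but the stated idea, a digital record of one person's memories, beliefs and mannerisms that an avatar can draw on, can be sketched as a simple data structure. Everything below (the `MindFile` class and its field names) is an illustrative assumption, not Terasem's actual format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "mind file": a structured record of one
# person's knowledge and personality that a robot avatar could query.
# Field names are illustrative; the real format is not public.

@dataclass
class MindFile:
    person: str
    beliefs: list = field(default_factory=list)    # statements the person holds true
    memories: list = field(default_factory=list)   # autobiographical episodes
    mannerisms: dict = field(default_factory=dict) # e.g. favorite phrases

    def add_belief(self, statement):
        self.beliefs.append(statement)

    def recall(self, keyword):
        """Return stored memories that mention a keyword (case-insensitive)."""
        return [m for m in self.memories if keyword.lower() in m.lower()]

bina = MindFile(person="Bina Aspen")
bina.memories.append("Co-founded the Terasem Movement Foundation.")
bina.add_belief("Batteries that last 150 years leave time for a Ph.D.")
print(bina.recall("terasem"))
```

The point of the sketch is only that a mind file is person-specific, structured data, which is why the robot built from it reflects Bina Aspen's voice and personality rather than a generic chatbot's.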

Barry said he sees BINA48 and other robots built from similar so-called mind files of people as the "ultimate teaching aid," and he hopes to help foster a sense of appreciation for AI over current fears of robots replacing people.

"We want to come to it from a place of opportunity. Who are we? What do we want to become?" Barry said, talking about how the aging populations of the United States and Japan may one day have robots that understand and express feelings and love to help provide support.

Barry was inspired to seek out classroom AI, and learned about BINA48, after reading an essay by Isaac Asimov called "The New Teachers," he told Live Science. In the essay, Asimov argued for a future in which each person has his or her own dedicated teaching system, delivered as a sort of television signal. With systems like BINA48, Barry hopes to create a mind file of the 10 best teachers in his life, combine them into one avatar and send it out to help teachers, especially in underserved areas.

Original article on Live Science.

Dan Robitzski
Staff Writer
Dan Robitzski is a staff writer for Live Science and is also finishing up his master's degree at NYU's Science, Health and Environmental Reporting Program. Formerly a neuroscientist, Dan decided to switch to journalism and writing so that he could talk about transparency and accessibility issues within science. When he's not writing, he's either getting beaten up at fencing practice or enduring the dog breath of his tiny, affectionate Chihuahua. He also spends too much time on Twitter at @danrobitzski.