This Behind the Scenes article was provided to LiveScience in partnership with the National Science Foundation.
Everyone knows machines don't have feelings. But try telling that to your brain.
"We have one social brain, and it's the same whether we're dealing with a person or a machine," said Clifford I. Nass, the Thomas M. Storke Professor at Stanford University, who studies the social aspects of technology. "People use the same social graces with machines, especially computers, as they do with people."
Nass has devoted much of his research career to studying the ways in which humans respond socially to technology. Despite what most people know intellectually, they still often automatically treat computers and other devices like human beings, he said.
In a 1993 study, for example, he found that people unconsciously use social rules when interacting with computers. His subjects were much "nicer" to the computer they had worked with—responding favorably to the computer when it "asked" how it performed—than they were to another computer that "asked" the same question about the first computer. "It was as if they didn't want to hurt the first computer's feelings," Nass said.
Several years ago, his unusual research led to a collaboration with Robin Murphy, director of the Center for Robot-Assisted Search and Rescue at Texas A&M University and a professor of computer science and engineering there. He and Murphy, who is regarded as a founder of the field of rescue robotics, are working together to design a rescue robot that is user-friendly.
Rescue robots serve as a trapped disaster victim's lifeline to the outside world. But they are worthless if the victim finds them scary, bossy, out-of-control—or just plain creepy.
"Robots don't make eye contact. Their tone doesn't change. When they get closer to people, they start to violate their personal space," Murphy said. "If you are stuck somewhere for ten hours, and something scares you, or annoys you for long enough, you might start disregarding what it is asking you to do. The term that keeps coming up is 'creepy.' People find the robots that are supposed to be helping them creepy."
Nass and Murphy are working to ease the "creep" factor in rescue robots, hoping to reduce anxiety, and bolster existing rescue efforts. The National Science Foundation has funded the three-year project with a $1.2 million grant shared by the two universities as part of the American Recovery and Reinvestment Act of 2009. As an economic stimulus, the work will create at least five new research jobs in the short term, but, more importantly, the researchers expect it to jump-start a new industry.
"Several of these people will go out and start new companies based on this technology, and students will go out and work for these companies," Murphy said. "There is a burgeoning emergency response market—think about Haiti. We need more technology that is helpful for these situations. We are creating more knowledgeable people, and encouraging them to go into this sector."
Rescue robots have been used for more than a decade, but the early prototypes were mechanically primitive. "The 1995 Oklahoma City bombing and the earthquake in Kobe (Japan) created a great interest in rescue robots," Murphy said. "These events served as motivation to start focusing on rescue robots. But they weren't ready to go into the field until 1999."
The researchers hope to improve the devices in ways that will make them more valuable to law enforcement, such as in hostage negotiation, as well as in emergency response situations, where they are already in use. The robots also show promise in health care, where the researchers believe the economic potential is substantial.
The current project, also supported by Microsoft, will create a multimedia "head" attachment called the "survivor buddy" that can fit on any traditional robot and serve as the interface between trapped victims and the rest of the world for the ten or more hours it might take to extract them. An animator from Pixar—the company behind such popular films as "Wall-E" and "Up"—has volunteered to help design the motions.
"How do you design a robot that is socially appropriate at a time when a person is under extreme stress?" Nass asks. "My role is to come up with all the social aspects. We're doing work on body distance, for example. If the robot comes up too close and rolls right up next to you, that's pretty horrible. It has to do with the various social tricks humans use—it has to respect your personal space."
"But the robot can't be too far away," he adds. "What if the robot stood 100 feet back and said: 'I am very concerned about you. I am here to help you.' That also would be worrisome—the message is: 'I don't really care about you, because I am too far away.' It seems insincere—so insincerity is a very bad thing."
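Nass's "not too close, but not too far" observation amounts to keeping the robot inside a comfort band of standoff distances. The sketch below illustrates the idea in Python; the 0.5-meter and 3.0-meter band limits are placeholder assumptions for illustration, not figures from the research.

```python
def standoff_adjustment(distance_m, too_close_m=0.5, too_far_m=3.0):
    """Suggest a move (in meters, positive = back away) to keep the
    robot inside a social comfort band.

    The band limits are illustrative assumptions, not values from
    the Survivor Buddy project.
    """
    if distance_m < too_close_m:          # violating personal space
        return too_close_m - distance_m   # back off
    if distance_m > too_far_m:            # reads as disinterest
        return too_far_m - distance_m     # move closer (negative)
    return 0.0                            # already comfortable
```

A robot 0.25 meters away would be told to back off 0.25 meters, while one 5 meters away would be told to close 2 meters; anywhere in between, it holds position.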
Robots must be programmed to pick up on human cues and respond appropriately—just as humans do with other humans, Nass said.
"We need to design a robot that knows social graces and can garner trust and show respect and expertise," he said. "If you send down a robot that seems like a moron, that's not going to help. It's not going to make you like it. If it's going to be a companion, a buddy, then you'd better like it. Think of all the things you need to be an effective search and rescue buddy. The robot has to be likeable, seem smart, be trustworthy and seem caring, optimistic—but not overly optimistic."
He recalls the lessons learned many years ago when car company BMW introduced its early navigation system—featuring a female voice. Ultimately, the system was recalled. "German male drivers would not take directions from a woman," Nass said. The experience motivated a series of studies "that showed people gender stereotype like crazy," he adds.
The "survivor buddy" will have features that allow victims to engage in two-way video-conferencing, watch the news and listen to music. The media component emerged after a 2005 mine accident, one that did not involve rescue robots, in which trapped miners asked workers to lower them an MP3 player. "We know people get bored," Murphy said. "These miners got tired of talking to responders on the other side."
The survivor buddy prototype was completed last summer, but it hasn't yet been used in a disaster. It is a new robot head that the researchers hope will support any web-based activity, two-way video conferencing, and music and television playback, among other things. It also will be more user-friendly, which they hope will make it less creepy.
"The head will constantly maintain gaze control with you, always maintaining eye contact," Murphy said. "Social gaze is important. Another important thing is the motions—we want it to move more slowly when it's close to you."
Nass adds: "Consider doctors in an emergency room. Doctors move sort of fast—but not insanely fast. You don't see them running really fast—and you don't see them sauntering. There's a right speed for an emergency between wild, frantic speed and sauntering."
The scientists also plan to adjust the volume so that the device speaks more softly the closer it gets to a victim, and it will likely change its coloration. "Most robots now are painted black and have bright head lights," Murphy said.
This can be disconcerting when "you come in the dark at people and blind them—what's more, you can't see the robots in the dark because they are black," she said. "Those are the things we want to avoid. We hope to make it colorful and backlit—and turn the headlights down a little bit."
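The proximity rules the researchers describe, slower motion and softer speech as the robot nears a person, can be sketched as a simple scaling function. This is an illustrative Python sketch; the thresholds and scale factors are assumptions chosen for the example, not values from the Survivor Buddy project.

```python
def approach_profile(distance_m,
                     personal_space_m=1.0,
                     max_speed_mps=0.5,
                     max_volume=1.0):
    """Scale motion speed and speech volume down as the robot
    nears a person.

    All numbers here are illustrative guesses, not project values.
    Returns a (speed, volume) pair.
    """
    # "Closeness" factor in [0, 1]: 0 when far away, 1 at or inside
    # the personal-space boundary.
    closeness = max(0.0, min(1.0, personal_space_m / max(distance_m, 0.01)))
    speed = max_speed_mps * (1.0 - 0.8 * closeness)   # slow to 20% when very close
    volume = max_volume * (1.0 - 0.6 * closeness)     # soften to 40% when very close
    return speed, volume
```

At 5 meters the robot moves and speaks near its maximums; at half a meter, inside the assumed personal-space boundary, both drop sharply.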
The scientists plan to test the device in simulated rescue situations using actual people within scenarios as close as possible to the real thing, "without endangering anyone," Murphy said. "You can make people feel they are in a collapse—put them in a dark room, cover them with a blanket."
Previous testing on earlier robots—which prompted the "creep factor" finding—convinced the researchers they needed to make modifications if the rescue robots were to be effective.
"People who were well-fed and well-rested and just in there for an hour were showing significant reactions to the robot," Murphy said. "Imagine if you are already disoriented, or in a lot of pain or fear. The impact will be even more significant. It shows you how important it is to get it right."
For better or worse, research has shown that responses "we thought only applied to people also apply to technology," and that most people are unaware of this, Nass said.
In that earliest computer study, for example, his subjects insisted after the experiment that they would never give different responses to different computers—even though they did.
Moreover, "they were graduate students in electrical engineering in the computer science program at Stanford," Nass adds. "So if anybody knew that computers don't have feelings, these guys did."
Editor's Note: This research was supported by the National Science Foundation (NSF), the federal agency charged with funding basic research and education across all fields of science and engineering. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. See the Behind the Scenes Archive.