
When robots are good listeners, humans respond positively, says new research

Two new studies show that when humans share a personal story with a responsive robot, they judge themselves more favorably and want the robot nearby in high-stress situations.

Dr. Moran Mizrahi with robot Travis
Image: Kobi Zholtack

Robots are coming to take care of us. They already serve in social roles, from aides in schools to assistants in hospitals to crew members on cruise ships, and as technology improves, these caregiving roles are bound to expand.

As robots fill intimate roles in our lives, certain questions persist: Can these robots serve our emotional needs? Do we expect machines to fill in when humans can't? How will we perceive this type of care, and what other effects should we expect?

And given that we will allow robots into our lives in these intimate ways, how can we ensure that the humans they care for respond positively?

New research has examined these questions, looking specifically at what happens when robots display empathy towards humans.

"Perceiving another person as responsive to one's needs is inherent to the formation of emotional bonds," said Gurit Birnbaum, Associate Professor at the School of Psychology, Interdisciplinary Center (IDC), in Herzliya, Israel, and primary author of the research. "The partner's perceived responsiveness, meaning their support and validation of one's own emotional needs, benefits personal and relationship well-being because it signifies the belief that the other person can be counted on to reliably support us."

However, many caregiving robots fail to display these social skills effectively, said Birnbaum.

Most fall short in "evoking the appropriate sense of responsiveness that is characteristic of human disclosure and well-being," she said.

The new research, said Birnbaum, explored "whether implementing responsiveness cues in a robot would be compelling enough for these keys to human bonding to be also evident when interacting with an inanimate object."

Each study involved participants in one-on-one sessions with a non-humanoid robot, to which they disclosed a personal event. The robot would then respond, via gestures or text, either responsively or unresponsively.


They chose to use non-humanoid robots, said Birnbaum, "because humanoid robots may engender repulsion instead of attraction in many people due to their uncanny resemblance to humans. Consequently, these humanoid robots may fail to elicit positive perceptions and desire for future interactions."

"Non-humanoid robots, paradoxically, allow people to project their needs and desires and to respond to them in ways in which they typically respond to social partners," said Birnbaum, "for example, by seeking the robot's psychological proximity through their body language."

The first study was intended to show that a responsive robot is more appealing to humans—and, further, that humans treat a responsive robot differently than a nonresponsive one, approaching it more often and "using it as a source of consolation in times of need." To test this, 102 undergraduate students interacted with Travis, a "non-anthropomorphic robot with a vaguely creature-like structure, but without a face, capable of basic gesturing (e.g., nodding, swaying)" that stood under a foot tall. The study describes the procedure:

Upon arrival at the laboratory, participants were led to believe that we were testing a new speech-comprehension algorithm developed for robots. Then, they completed a demographic questionnaire and were asked to sit on the couch, facing Travis, and to disclose a personally negative event to it. Participants were informed that the robot would try to understand what they say and respond with a relevant response, using artificial intelligence and speech recognition.

In these videotaped sessions, participants would speak for up to seven minutes. There were two ways Travis could display "responsiveness": a verbal and a nonverbal condition. In the verbal condition, Travis would choose from a group of standardized responses. According to the study, these included "You must have gone through a very difficult time" and "I completely understand what you have been through," and could be altered to fit the story.


In the nonverbal condition, Travis would display certain physical cues, such as facing forward, gently swaying back and forth, and nodding, timed to the moments of disclosure in the participant's storytelling.

In contrast, in the "nonresponsive" condition, Travis displayed no verbal or nonverbal behavior after the subject spoke about the personal event.

The results? Responsive robots were perceived as more social, were more desirable to subjects as companions, and elicited more "approach" behaviors from humans. In the words of the authors, the increased approach shows that "the human mind utilizes responsiveness cues to ascribe social intentions to technological entities, such that people can treat robots as a haven of safety or as a source of consolation in times of need."

Study 2, in contrast, examined how humans would perceive responsiveness in a different condition: after disclosing a positive event. Beyond the participants' evaluations of robot responsiveness, the researchers looked at whether a responsive robot would bolster participants' self-perceptions during a stressful task.

In this study, participants were told they were testing an algorithm developed for dating sites, and were then instructed to recount a recent, positive dating story.

Results mirrored those of the first study. In addition, participants' self-evaluations during a stressful task improved in the responsive condition.

The authors of the study believe that, taken together, the results show "that humans not only utilize responsiveness cues to ascribe social intentions to robots, but they actually adjust their behavior towards responsive robots; want to use such robots as a source of consolation; and feel better about themselves while coping with challenges after interacting with these robots."

The findings suggest that design choices for sociable robots may carry significant consequences for the humans who interact with them.


About Hope Reese

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.
