Project

Human-Robot Trust

David DeSteno

An unconventional mix of research fields yields a new method for studying human behavior using social robots.

When we meet someone for the very first time, we can intuitively answer the question “How much can I trust this person?” Trust is a complex social-emotional concept, yet this inference is formed quickly and with better-than-chance accuracy. In situations where a person’s past behavior or reputation is unknown, we rely on other available sources of information to infer their motivations. Nonverbal behavior is one such source of information about underlying intentions, goals, and values. Much of our communication is channeled through facial expressions, body gestures, gaze direction, and many other nonverbal behaviors. Our ability to express and recognize these nonverbal expressions is at the core of social intelligence.

But which nonverbal behaviors constitute a signal of a novel person’s trustworthiness? Researchers in the psychology department at Northeastern University identified a candidate set of nonverbal cues (face touching, arms crossed, leaning backward, and hand touching) hypothesized to be indicative of untrustworthy behavior. To confirm and further validate such findings, a common practice in social psychology is to have a human actor perform specific nonverbal cues and to study their effects in a human-subjects experiment. A fundamental challenge inherent in this research design is that people regularly emit cues outside of their own awareness, which makes it difficult even for trained professional actors to express specific cues reliably.

Our strategy for meeting this challenge was to employ a social robotics platform. Using a humanoid robot, we took advantage of its programmable behavior to control exactly which cues were emitted to each participant (see example video above). In a human-subjects experiment conducted in collaboration with the Social Emotions Lab at Northeastern University and the Johnson Graduate School of Management at Cornell University, we found that the robot’s expression of the hypothesized nonverbal cues led participants to perceive the robot as a less trustworthy agent.
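
To make that experimental control concrete, here is a minimal sketch of how a scripted cue condition might be implemented. Everything in it is hypothetical: the `Robot` class, its `perform` method, and the gesture names are illustrative stand-ins, not the API of the platform actually used in the study.

```python
import random

class Robot:
    """Hypothetical stand-in for a humanoid robot's gesture interface."""

    def perform(self, gesture: str, duration_s: float) -> None:
        # A real platform would drive motors here; this stub just logs.
        print(f"Performing '{gesture}' for {duration_s:.1f}s")

# Candidate cue set hypothesized to signal untrustworthiness.
TARGET_CUES = ["face_touch", "arms_crossed", "lean_back", "hand_touch"]

# Neutral conversational gestures used as filler in both conditions.
NEUTRAL_GESTURES = ["nod", "hand_wave", "gaze_shift"]

def run_conversation(robot: Robot, condition: str,
                     n_turns: int = 12, seed: int = 0) -> None:
    """Emit a fixed, repeatable gesture script for one participant.

    condition: "cues" interleaves the target cues among neutral gestures;
               "neutral" plays neutral gestures only.
    """
    rng = random.Random(seed)  # same seed -> identical script every session
    for turn in range(n_turns):
        if condition == "cues" and turn % 3 == 2:
            # Insert each target cue at fixed points in the interaction.
            gesture = TARGET_CUES[(turn // 3) % len(TARGET_CUES)]
        else:
            gesture = rng.choice(NEUTRAL_GESTURES)
        robot.perform(gesture, duration_s=rng.uniform(1.0, 2.0))

if __name__ == "__main__":
    run_conversation(Robot(), condition="cues")
```

Because the script is deterministic, every participant in a given condition sees exactly the same cues at exactly the same moments, which is the reliability that even trained human actors struggle to provide.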