Modeling the Dynamics of Nonverbal Behavior on Interpersonal Trust for Human-Robot Interactions



Jin Joo Lee

Jin Joo Lee. Modeling the Dynamics of Nonverbal Behavior on Interpersonal Trust for Human-Robot Interactions. Master's Thesis, Massachusetts Institute of Technology, 2011.


We describe the design, implementation, and validation of a computational model for recognizing interpersonal trust in social interactions. We begin by leveraging pre-existing datasets to understand the relationship of synchronous movement, mimicry, and gestural cues with trust. We found that although synchronous movement was not predictive of trust, it is positively correlated with mimicry: people who mimicked each other more frequently also moved more synchronously in time. Revealing the versatile nature of unconscious mimicry, we found mimicry to be predictive of liking between participants rather than of trust. We reconfirmed that four negative gestural cues (leaning backward, face touching, hand touching, and crossing arms), taken together, are predictive of lower levels of trust, while three positive gestural cues (leaning forward, arms in lap, and open arms) are predictive of higher levels of trust. We train and validate a probabilistic graphical model using natural social interaction data from 74 participants. By observing how these seven important gestures unfold throughout the social interaction, our Trust Hidden Markov Model predicts with 94% accuracy whether an individual is willing to behave cooperatively or uncooperatively with their novel partner. By simulating the resulting model, we found that not only the frequency with which the predictive gestures are emitted matters, but also the sequence in which negative and positive cues are emitted. We attempt to automate this recognition process by detecting those trust-related behaviors through 3D motion capture technology and gesture recognition algorithms. Finally, we test how accurately our entire system, from low-level gesture recognition to high-level trust recognition, can predict whether an individual finds another to be trustworthy or untrustworthy.
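The classification approach described above — scoring a sequence of observed gestures under a hidden Markov model — can be illustrated with a minimal sketch. The gesture alphabet below matches the seven cues named in the abstract, but the two-state HMMs, their transition and emission probabilities, and the two-model (trusting vs. untrusting) comparison are illustrative assumptions, not the thesis's fitted Trust HMM; the sequence is scored with the standard forward algorithm.

```python
import numpy as np

# Observation alphabet: the seven predictive gestural cues from the abstract.
GESTURES = ["lean_back", "face_touch", "hand_touch", "cross_arms",
            "lean_forward", "arms_in_lap", "open_arms"]
IDX = {g: i for i, g in enumerate(GESTURES)}

def log_forward(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under an HMM.

    start: initial state probabilities, trans: state transition matrix,
    emit: per-state emission probabilities over the gesture alphabet.
    """
    alpha = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        # log-sum-exp over previous states, for numerical stability
        alpha = np.logaddexp.reduce(
            alpha[:, None] + np.log(trans), axis=0) + np.log(emit[:, o])
    return np.logaddexp.reduce(alpha)

# Illustrative (made-up) two-state parameters: the "trusting" model emits
# the positive cues more often, the "untrusting" model the negative cues.
start = np.array([0.5, 0.5])
trans = np.array([[0.8, 0.2],
                  [0.2, 0.8]])
emit_trust = np.array([[0.02, 0.02, 0.02, 0.04, 0.30, 0.30, 0.30],
                       [0.10, 0.10, 0.10, 0.10, 0.20, 0.20, 0.20]])
emit_distrust = np.array([[0.25, 0.25, 0.25, 0.15, 0.04, 0.03, 0.03],
                          [0.20, 0.20, 0.20, 0.20, 0.07, 0.07, 0.06]])

def classify(gesture_seq):
    """Label a gesture sequence by whichever HMM explains it better."""
    obs = [IDX[g] for g in gesture_seq]
    ll_t = log_forward(obs, start, trans, emit_trust)
    ll_d = log_forward(obs, start, trans, emit_distrust)
    return "trusting" if ll_t > ll_d else "untrusting"

print(classify(["lean_forward", "open_arms", "arms_in_lap"]))
print(classify(["cross_arms", "lean_back", "face_touch"]))
```

Because the forward algorithm accounts for state transitions, the ordering of cues — not just their counts — affects the likelihood, mirroring the abstract's finding that the sequence of negative and positive cues matters.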
