
Novel Wearable Apparatus for Quantifying and Reliably Measuring Social-Emotional Expression Recognition in Natural Face-to-Face Interaction

Alea Teeters, Rana el Kaliouby, Matthew Goodwin, Shandell M., Rosalind W. Picard

Abstract

Our wearable Self-Cam technology successfully gathered facial and head-movement videos from natural face-to-face conversations, enabling the construction of a new test of non-acted expression-reading ability. These videos are far more complex than those in the MindReading DVD, containing natural and potentially distracting movements such as head turns, hands over the face, lips moving with speech, and other facial gestures that most people find hard to read out of context. Using a subset of videos with high agreement among neurotypical (NT) raters, we found that the Groden students' best and worst recognition performance occurred when they viewed videos of themselves and of their peers.