Affect valence inference from facial action unit spectrograms

Daniel McDuff, Rana el Kaliouby, Karim Kassam, Rosalind W. Picard

Abstract

The face provides an important channel for communicating affect valence, the positive or negative emotional charge of an experience. This paper addresses the challenging pattern recognition problem of assigning affect valence labels (positive, negative, or neutral) to facial action sequences obtained from unsegmented videos coded using the Facial Action Coding System (FACS). The data were obtained from viewers watching eight short movies; each second of video was labeled with self-reported valence and hand-coded using FACS. We identify the most frequently occurring facial action units and propose the Facial Action Unit Spectrogram as a useful representation. We compare generative and discriminative classifiers on accuracy and computational complexity: Support Vector Machines, Hidden Markov Models, Conditional Random Fields, and Latent-Dynamic Conditional Random Fields. We conduct three tests of generalization with each model. The results provide a first benchmark for classifying self-reported valence from the spontaneous expressions of a large group of people (n=42). Success is demonstrated for increasing levels of generalization, and discriminative classifiers are shown to significantly outperform generative classifiers on this large data set. We discuss the challenges encountered in dealing with a naturalistic dataset with sparse observations and their implications for the results.
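As a rough illustration of the kind of pipeline the abstract describes (not the authors' actual implementation), the Python sketch below builds a toy facial action unit spectrogram from synthetic per-second FACS codes, converts sliding windows of it into feature vectors, and trains one of the discriminative classifiers mentioned, a Support Vector Machine. The number of action units, the window length, the kernel settings, and all data here are assumptions made purely for the sketch.

# Illustrative sketch only: features, windowing, and classifier settings
# are assumptions, not the paper's actual configuration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

N_AUS = 12        # number of tracked facial action units (assumed)
N_SECONDS = 480   # one FACS code vector per second of video
WINDOW = 5        # sliding-window length in seconds (assumed)

# A "facial action unit spectrogram": rows = action units, columns =
# seconds, entries = AU presence at that second (synthetic data here).
spectrogram = rng.integers(0, 2, size=(N_AUS, N_SECONDS)).astype(float)

# One self-reported valence label per second:
# 0 = negative, 1 = neutral, 2 = positive (synthetic labels here).
valence = rng.integers(0, 3, size=N_SECONDS)

# Flatten each sliding window of the spectrogram into one feature vector,
# labeled with the valence at the window's final second.
X, y = [], []
for t in range(N_SECONDS - WINDOW + 1):
    X.append(spectrogram[:, t:t + WINDOW].ravel())
    y.append(valence[t + WINDOW - 1])
X, y = np.asarray(X), np.asarray(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# One of the discriminative classifiers compared in the paper; the RBF
# kernel and C value are placeholder choices.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

On real FACS-coded data the features, temporal models (HMM, CRF, LDCRF), and evaluation protocol would differ; the sketch only shows how per-second AU codes can be framed as a windowed valence classification problem.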
