Talk: Prof. Jeffrey Cohn on automated analysis and synthesis of facial and vocal expressions

January 27, 2012
MIT Media Lab, E14-493
Computational behavioral science is making significant strides in the analysis and understanding of naturally occurring behavior. Active appearance models are an especially exciting approach: they can be used both to measure naturally occurring facial behavior and to synthesize near-photorealistic, real-time avatars that experimentally perturb and reveal the dynamics of social behavior. Cohn's interdisciplinary group of psychologists and computer scientists uses and extends these capabilities. Professor Cohn will present recent studies and the opportunities they offer; topics will include communication of emotion and pain, depression severity, and mother-infant synchrony. The group's findings inform behavioral science and raise new challenges and opportunities for computational behavioral science.
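As background for the abstract above: an active appearance model pairs a PCA model of landmark shape with a PCA model of texture. Below is a minimal sketch of the shape half alone (a point distribution model), using synthetic landmark data; all dimensions, data, and variable names here are illustrative assumptions, not details of Cohn's systems.

```python
import numpy as np

# Illustrative sketch of the PCA shape model underlying an AAM.
# Training data are synthetic: shapes drawn from a low-rank subspace plus noise.
rng = np.random.default_rng(0)

n_shapes, n_landmarks = 200, 5
mean_shape = rng.normal(size=2 * n_landmarks)        # flattened (x, y) pairs
basis = rng.normal(size=(2 * n_landmarks, 2))        # hidden modes of variation
coeffs = rng.normal(size=(n_shapes, 2))
shapes = (mean_shape + coeffs @ basis.T
          + 0.01 * rng.normal(size=(n_shapes, 2 * n_landmarks)))

# Build the model: PCA via SVD of the mean-centered training shapes.
mu = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
n_modes = 2
P = Vt[:n_modes].T                                   # principal shape modes

# Any shape is approximated as s ≈ mu + P @ b, where b are the shape parameters
# the fitting procedure would optimize against an image.
b = P.T @ (shapes[0] - mu)
reconstruction = mu + P @ b
```

Fitting a full AAM additionally warps image texture into the mean-shape frame and optimizes shape and appearance parameters jointly; the sketch above only shows how a small parameter vector `b` compactly encodes a face shape.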
Jeffrey Cohn is professor of psychology at the University of Pittsburgh and adjunct professor at the Robotics Institute, Carnegie Mellon University. He received his PhD in psychology from the University of Massachusetts at Amherst. He has led interdisciplinary and inter-institutional efforts to develop advanced methods of automatic analysis of facial expression and prosody and applied those tools to research in human emotion, interpersonal processes, social development, and psychopathology. He co-developed the influential Cohn-Kanade, MultiPIE, and Pain Archive databases, co-edited special issues of Image and Vision Computing on facial expression analysis, and co-chaired the most recent IEEE International Conference on Automatic Face and Gesture Recognition (FG2008). For more information, please see http://www.pitt.edu/~jeffcohn.