Our goal is to create autonomous systems—including interactive physical robots and synthetic characters in virtual worlds—that learn to communicate and interact in human-like ways. We also aim to better understand how children learn to communicate by collecting and analyzing in vivo observations of children in natural learning environments. Underlying both of these research threads is a theoretical interest in the cognitive structures and processes that ground symbolic communication in embodied interaction with the environment.
Applicants should have experience or strong interest in one or more of the following areas: computer vision and video analysis, machine learning, knowledge representation, cognitive/behavioral modeling, developmental psychology, and human-machine interface design. An interest in philosophy of language/mind or visual design is appreciated but not required.
Submission of a portfolio is optional.