Language, Cognition, and Computation Lecture Series
Sharon Oviatt (Department of Computer Science, Oregon Health & Science University):
"Recent Progress in the Design of Advanced Multimodal Interfaces"
Monday, June 14, 2004, 3:00 PM EDT
Bartos Theatre, MIT Media Lab (E15)
Host: AT&T Career Development Professor of Media Arts and Sciences, Cognitive Machines group
The advent of multimodal interfaces based on recognition of human speech, touch, pen input, gesture, gaze, and other natural behaviors represents just the beginning of a progression toward pervasive computational interfaces capable of human-like sensory perception. Such interfaces will eventually interpret continuous, simultaneous input from many different modes, recognized as users engage in everyday activities. They will also track and incorporate information from multiple sensors in the user's interface and the surrounding physical environment, in order to support intelligent multimodal, multisensor adaptation to the user, task, and usage environment.
In this talk, Oviatt will describe state-of-the-art research on multimodal interaction and interface design, and in particular two topics that are generating considerable activity at the moment both within her lab and around the world. The first topic focuses on major robustness gains that have been demonstrated for different types of multimodal systems, compared with unimodal ones. The second involves a recent surge of research activity on human multisensory processing and users' multimodal integration patterns during human-computer interaction, as well as implications for the design of adaptive multimodal interfaces. The long-term goal of research in these and related areas is the development of advanced multimodal interfaces that can support new functionality, unparalleled robustness, and flexible adaptation to individual users and real-world mobile usage contexts.
Sharon Oviatt is a professor and co-director of the Center for Human-Computer Communication (CHCC) in the Department of Computer Science at Oregon Health & Science University (OHSU). Her research focuses on human-computer interaction, spoken language and multimodal interfaces, and mobile and highly interactive systems. Examples of recent work include the development of novel design concepts for multimodal and mobile interfaces, robust interfaces for real-world field environments, adaptive conversational interfaces with animated software characters, and modeling of diverse user groups across the lifespan. She is an active member of the international HCI, speech, and multimodal communities. She has published over 85 scientific articles in a wide range of venues, including work featured in recent and upcoming special issues of Communications of the ACM, Human-Computer Interaction, Transactions on Human-Computer Interaction, IEEE Multimedia, Proceedings of the IEEE, and IEEE Transactions on Neural Networks. She received an NSF Special Extension for Creativity Award in 2000, and chaired the International Conference on Multimodal Interfaces 2003.