MACK: Media lab Autonomous Conversational Kiosk

Justine Cassell, Tom Stocky, Tim Bickmore, Yang Gao, Yukiko Nakano
Kimiko Ryokai, Dona Tversky, Catherine Vaucelle, Hannes Vilhjálmsson
MIT Media Lab
Cambridge, MA 02139 USA

In this paper, we describe an embodied conversational kiosk that builds on research in embodied conversational agents (ECAs) and on information displays in mixed reality and kiosk format to display spatial intelligence. ECAs leverage people's abilities to coordinate information displayed in multiple modalities, particularly information conveyed in speech and gesture. Mixed reality depends on users' interactions with everyday objects that are enhanced with computational overlays. We describe an implementation, MACK (Media lab Autonomous Conversational Kiosk), an ECA who can answer questions about, and give directions to, the MIT Media Lab's various research groups, projects, and people. MACK uses a combination of speech, gesture, and indications on a normal paper map that users place on a table between themselves and MACK. Research issues include users' differential attention to hand gestures, speech, and the map, and flexible architectures for embodied conversational agents that allow these modalities to be fused in both input understanding and output generation.
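The multimodal fusion the abstract mentions — combining a spoken utterance with a pointing gesture on the paper map to resolve what the user means — can be sketched as follows. This is an illustrative sketch only, not MACK's actual architecture: the event types, the deictic word list, and the timestamp-proximity heuristic are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class SpeechEvent:
    text: str    # recognized utterance, e.g. "Where is this group?"
    start: float # utterance start time, in seconds
    end: float   # utterance end time, in seconds

@dataclass
class GestureEvent:
    landmark: str  # map location the user pointed at (hypothetical label)
    time: float    # time of the pointing gesture, in seconds

def fuse(speech: SpeechEvent, gestures: list) -> str:
    """Resolve a deictic reference ("this", "that", ...) in the utterance
    by substituting the landmark of the gesture closest in time to the
    middle of the utterance. A toy heuristic, not MACK's real fusion."""
    deictics = {"this", "that", "here", "there"}
    words = speech.text.lower().strip("?.!").split()
    if not gestures or not any(w in deictics for w in words):
        return speech.text  # nothing to resolve, or no gesture observed
    mid = (speech.start + speech.end) / 2
    best = min(gestures, key=lambda g: abs(g.time - mid))
    resolved = [best.landmark if w in deictics else w for w in words]
    return " ".join(resolved)
```

For example, the utterance "Where is this?" accompanied by a point at a research group's location on the map would be resolved into a query about that group, which the generation side could then answer with coordinated speech and map-indicating gestures.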