Embodiment in Conversational Interfaces: Rea

J. Cassell, T. Bickmore, M. Billinghurst, L. Campbell, K. Chang, H. Vilhjálmsson, H. Yan
Gesture and Narrative Language Group
MIT Media Laboratory
E15-315
20 Ames St, Cambridge, Massachusetts
+1 617 253 4899
{justine, bickmore, markb, elwin, tetrion, hannes, yanhao}@media.mit.edu

In this paper, we argue for embodied conversational characters as the logical extension of the metaphor of human-computer interaction as a conversation. We argue that the only way to fully model the richness of human face-to-face communication is to rely on conversational analysis that describes sets of conversational behaviors as fulfilling conversational functions, both interactional and propositional. We demonstrate how to implement this approach in Rea, an embodied conversational agent capable of both multimodal input understanding and output generation in a limited application domain. Rea supports both social and task-oriented dialogue. We discuss issues that need to be addressed in creating embodied conversational agents, and describe the architecture of the Rea interface.