Overview

Like its predecessor [BodyChat], Situated Chat automatically animates the visual representations (avatars) of the participants in an online graphical chat. While BodyChat concentrated on using a social model to animate appropriate social behavior such as greetings and farewells, Situated Chat also builds a model of the discourse context, taking into account the shared visual environment, and uses it to generate nonverbal behavior of a propositional nature, such as referring gestures.

The System

The image above shows an example of how a pointing gesture is generated for the avatar of a user who has just typed "Have some wine".  Discourse entities are extracted from the typed input and their status is checked against the current discourse context.  If an entity is not found in the recent discourse history, but a related object is found in the environment, a pointing gesture towards that object is generated.  Relationships between discourse entities and objects are explored using the WordNet database.  If a discourse entity is found neither in the discourse history nor in the environment, it is marked as brand new and is emphasised with an iconic gesture.
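The decision procedure just described can be sketched in a few lines of code. This is an illustrative reconstruction, not the system's actual implementation: the function and variable names are invented, and a small hand-written dictionary stands in for the WordNet relationship queries.

```python
# Hypothetical sketch of the gesture-selection step described above.
# All names are illustrative; RELATED stands in for WordNet lookups.

DISCOURSE_HISTORY = {"party", "guests"}         # entities mentioned recently
VISIBLE_OBJECTS = {"bottle", "table", "chair"}  # objects in the shared scene

# Stand-in for a WordNet query: maps a discourse entity to related objects.
RELATED = {"wine": {"bottle", "glass"}}

def resolve_gesture(entity):
    """Choose a nonverbal behavior to accompany a discourse entity."""
    if entity in DISCOURSE_HISTORY:
        return None  # already given in the discourse; no gesture needed
    related_in_scene = RELATED.get(entity, set()) & VISIBLE_OBJECTS
    if related_in_scene:
        target = sorted(related_in_scene)[0]
        return ("point", target)   # deictic gesture toward the related object
    return ("iconic", entity)      # brand-new entity: emphasize iconically

# "Have some wine": "wine" is new, but a bottle is visible, so point at it.
print(resolve_gesture("wine"))    # ('point', 'bottle')
print(resolve_gesture("party"))   # None: already in the discourse history
print(resolve_gesture("music"))   # ('iconic', 'music')
```

The sketch mirrors the three cases in the text: given entities get no gesture, new entities with a visible related object get a pointing gesture, and entirely new entities get an iconic gesture.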

Further information

Situated Chat is a part of the [Life-Like Avatars] project and is the thesis project of [Hannes Hogni Vilhjalmsson].
