WordSense

Christian Vazquez

As more powerful and spatially aware augmented reality devices become available, we can leverage the user's context to embed reality with audio-visual content that enables learning in the wild. Second-language learners can explore their environment to acquire new vocabulary relevant to their current location. Items are identified, "labeled," and spoken aloud, allowing users to make meaningful connections between objects and words. Over time, word groups and sentences can be tailored to the user's current level of competence. When desired, a remote expert can join in real time for a more interactive "tag-along" learning experience.
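
The core loop this describes can be sketched roughly as follows. Everything below is a hypothetical placeholder, not the actual WordSense implementation: detect_objects() stands in for a real AR object detector, speak() for a text-to-speech engine, and the toy lexicon is illustrative only.

# Hypothetical sketch of the identify -> label -> speak loop.

VOCABULARY = {          # toy English -> Spanish lexicon (illustrative)
    "chair": "silla",
    "window": "ventana",
    "book": "libro",
}

def detect_objects(frame):
    """Stand-in for a real AR object detector; returns object names."""
    return ["chair", "book"]        # canned result for illustration

def speak(text):
    """Stand-in for a text-to-speech engine."""
    print(f"(speaking) {text}")

def label_scene(frame):
    """Label each recognized object with its second-language word."""
    for obj in detect_objects(frame):
        word = VOCABULARY.get(obj)
        if word is None:
            continue                # no vocabulary entry for this object
        print(f"label '{word}' anchored above the {obj}")
        speak(word)

label_scene(frame=None)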

Sentences & Definitions

Embedded words can be further enhanced by dynamically linking them to a database of definitions and example sentences. A number of such databases are publicly available in multiple languages.
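
As one concrete example, the sketch below enriches a labeled word with a definition and example sentence using WordNet through the NLTK library, one publicly available database of this kind; whether WordSense itself uses WordNet is an assumption here.

# Sketch: look up definitions and example sentences for a word.
import nltk
nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)   # Open Multilingual Wordnet
from nltk.corpus import wordnet as wn

def enrich(word, lang="eng"):
    """Return (definition, example sentence) pairs for a vocabulary word."""
    entries = []
    for synset in wn.synsets(word, lang=lang):
        examples = synset.examples()
        entries.append((synset.definition(),
                        examples[0] if examples else None))
    return entries

for definition, example in enrich("book"):
    print(f"definition: {definition}")
    if example:
        print(f"example:    {example}")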

Video Clips

After an object is identified and embedded with new vocabulary, WordSense can fetch a clip that portrays the usage of the word within cinematic content. A database containing short video clips of famous movie quotes is queried for the word to identify the moment at which it is spoken. The clip is then fetched and streamed above the object. When multiple clips exist for a particular vocabulary word, one is selected at random.
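
A minimal sketch of this lookup, assuming a hypothetical clip database keyed by vocabulary word; the schema, URLs, and timestamps below are illustrative only, standing in for whatever movie-quote database WordSense queries.

import random

# Hypothetical clip database: word -> list of clips with the
# timestamp (in seconds) at which the word is spoken.
CLIP_DB = {
    "book": [
        {"url": "https://example.com/clips/1.mp4", "spoken_at_s": 12.4},
        {"url": "https://example.com/clips/2.mp4", "spoken_at_s": 3.1},
    ],
}

def fetch_clip(word):
    """Pick one matching clip at random for streaming above the object."""
    clips = CLIP_DB.get(word)
    if not clips:
        return None                          # no cinematic usage found
    clip = random.choice(clips)              # randomize among matches
    return clip["url"], clip["spoken_at_s"]  # stream from the spoken moment

print(fetch_clip("book"))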