FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing

Lining Yao, Anthony DeVincenzi, Anna Pereira, Hiroshi Ishii

Abstract

We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users' focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing.
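As an illustration only (not the authors' implementation), the core "synthetic blur" idea described in the abstract can be sketched as a depth-thresholded background blur: pixels nearer than a focal depth stay sharp while everything else is diminished. The function name, the fixed depth threshold, and the assumption of an RGB frame with an aligned per-pixel depth map (e.g., from a Kinect-style sensor) are placeholders, not details taken from the paper.

```python
import cv2
import numpy as np


def synthetic_background_blur(frame_bgr: np.ndarray,
                              depth_mm: np.ndarray,
                              focal_depth_mm: float = 1500.0,
                              kernel_size: int = 31) -> np.ndarray:
    """Hypothetical sketch: blur pixels farther than focal_depth_mm."""
    # Heavily blurred copy of the whole frame (kernel size must be odd).
    blurred = cv2.GaussianBlur(frame_bgr, (kernel_size, kernel_size), 0)

    # Foreground mask: valid depth readings at or nearer than the focal depth.
    # A depth value of 0 typically means "no reading"; treat it as background.
    foreground = (depth_mm > 0) & (depth_mm <= focal_depth_mm)

    # Composite: sharp foreground over the blurred background.
    return np.where(foreground[:, :, None], frame_bgr, blurred)
```

A real system along the lines the abstract describes would drive the focal region from the tracked multimodal cues (voice, gesture, proximity) rather than a fixed threshold; this sketch only shows the compositing step.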