Can we become more socially connected with one another through the facilitation of a socially embodied agent? What types of social intelligence would enable these embodied agents to promote positive human-human interactions and relationships? What approaches should we take to responsibly design, build, and evaluate the computational systems that support a robot's social intelligence in human-human interaction? This dissertation aims to shed light on how we study these questions, deeply motivated by a societal call for responsible artificial intelligence (AI). The increasing availability of AI devices in people's everyday lives raises concerns about their potential harm to human-human connections and heightens the need to investigate the responsible design of AI agents for human flourishing.
Research efforts in social human-robot interaction (HRI) have mainly targeted a single person interacting with a single robot in domains such as education and healthcare. The increasing availability of social robots in people's everyday lives creates a new urgency to expand the HRI research focus from single-person contexts to multi-person interactions, such as with a small group of 2-6 people. A growing number of HRI studies have begun to examine how to design a robot's interaction role (e.g., robot moderator) and social behaviors (e.g., a robot's backchanneling behaviors) in multi-person HRI (MHRI), as well as how a robot can influence the processes and dynamics of human groups (e.g., group conflict and group participation). Nevertheless, this MHRI paradigm remains largely under-explored, particularly its conceptual frameworks, design principles, and technical tools. Current single-person HRI (SHRI) theories, approaches, and technical tools cannot readily scale to human groups, nor can they sufficiently capture the fundamental increase in complexity introduced by MHRI.
Motivated by this need, our research takes a multidisciplinary approach to developing both design frameworks and computational tools for fully autonomous, personalized robot companions that can engage in long-term social interactions with two people. To achieve this research goal, we are working on multiple projects targeting different aspects of this novel personalized MHRI paradigm.
In the DYADIC-MODEL project, we collected a multimodal dataset of 34 parent-child dyads reading and conversing together, with the goal of examining and modeling the interpersonal dynamics in dyadic interactions. For example, we analyzed both parents' and children's individual and dyadic nonverbal behaviors in relation to four relationship characteristics, i.e., child temperament, parenting style, parenting stress, and home literacy environment, and showed the importance of accounting for both individual- and dyad-scale nonverbal behaviors when predicting dyadic relationship characteristics.
In the MHRI-DESIGN project, we designed, developed, and implemented a novel parent-child-robot interaction paradigm in the context of shared reading. We then conducted a pilot triadic robot interaction study with 12 parent-child dyads. The pilot study investigates the effects of triadic reading on the human dyad's socio-affective connections and reading behaviors, and compares the effects of different robot behavior strategies. In addition, we are currently conducting qualitative and quantitative analyses on how to take a human-centered approach to designing next-generation robot companions for parent-child story time.
In the MHRI-THEORY project, we proposed a novel context-generic design framework, ADAPT-MHRI, for designing long-term adaptive multi-person interactions with a social robot. The framework builds upon existing work in the HRI field, aiming to unify and extend key concepts in MHRI. It makes the following unique contributions. First, it proposes the first generalizable MHRI design framework that integrates both robot behavior design and adaptation components while taking both group-level and individual-level design factors and considerations into account. Second, it provides step-by-step design guidelines for each component in ADAPT-MHRI, as well as three novel and distinct MHRI case studies, scaffolding researchers and designers to develop their contextualized MHRI studies from scratch. Lastly, it presents an overview of state-of-the-art MHRI research, key challenges, and future directions for MHRI.
In the MHRI-MODEL project, we are designing both robot affective sensing models for the multi-person context and robot behavior personalization models for triadic dyad-robot interactions. In addition, we are designing new evaluation methods to analyze the robot's long-term personalization effectiveness. This project builds upon our prior work (SHRI-MODEL) on designing a novel personalized human-robot interaction paradigm and understanding the impacts of interaction contexts on human-robot dynamics in the single-person setting.
The DYADIC-MODEL project:
H. Chen, Y. Zhang, F. Weninger, R. Picard, C. Breazeal, and H. W. Park. "Dyadic Speech-based Affect Recognition using DAMI-P2C Parent-child Multimodal Interaction Dataset". Proceedings of the 2020 International Conference on Multimodal Interaction (ICMI), 2020.
The MHRI-THEORY project:
The MHRI-DESIGN project:
The SHRI-MODEL Project: