Modeling Empathic Similarity in Personal Narratives

Jocelyn Shen

The most meaningful connections between people are often formed through the expression of shared vulnerability and emotional experiences. Despite the many ways we can connect through technology-mediated platforms today, loneliness, apathy, and mental distress remain pervasive around the world. We aim to use artificial intelligence (AI) as a tool to humanize personal experiences by identifying similarity in personal narratives based on empathic resonance, rather than raw semantic similarity. While people are naturally able to empathize with and relate to others' experiences, today's state-of-the-art AI systems are limited in such emotional reasoning capabilities.

This work focuses on endowing machines with the ability to reason about and quantify similarity in lived emotional experiences, which we term "empathic similarity." We operationalize empathic similarity in personal stories using large language models and insights from social psychology and narratology. We introduce EmpathicStories, a crowdsourced dataset of emotional personal experiences, and present a novel task of retrieving stories in line with human judgments of empathic similarity. Such an approach allows for the retrieval of stories that are truly relevant to a person's lived emotional experiences.

Using prompting approaches, we probe GPT-3's ability to understand empathic similarity. However, computing the similarity of a query story against every story in the database is inefficient and expensive. We therefore propose several methods combining fine-tuning and prompting to learn story embeddings that can be compared quickly using distance metrics. We evaluate our method against state-of-the-art language models and conduct a human evaluation to assess the impact of our system on improving users' empathy and connectedness. Furthermore, we explore the interpretability of the model's underlying emotional reasoning in order to better understand the transparency of the system if deployed in the real world. Such work has strong implications for studying the social-emotional reasoning capabilities of large language models and the potential for responsibly designed AI systems to foster prosocial behaviors and strengthen human-human connections.
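The efficiency argument above can be made concrete with a minimal sketch: once each story is mapped to a fixed embedding vector, retrieval reduces to nearest-neighbor search under a distance metric such as cosine similarity, rather than one expensive pairwise model call per story. The embeddings below are toy placeholders (in the actual system they would come from a fine-tuned encoder); the function names are illustrative, not from the thesis.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_top_k(query_emb: np.ndarray, story_embs: list, k: int = 2) -> list:
    # Rank all stored story embeddings by similarity to the query
    # and return the indices of the k most similar stories.
    scores = [cosine_sim(query_emb, e) for e in story_embs]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

# Toy pre-computed story embeddings (placeholders for encoder outputs).
story_embs = [
    np.array([1.0, 0.0]),
    np.array([0.7, 0.7]),
    np.array([0.0, 1.0]),
]
query = np.array([0.9, 0.1])

print(retrieve_top_k(query, story_embs, k=2))  # → [0, 1]
```

Because the story embeddings are computed once and cached, each new query costs only one encoder pass plus cheap vector comparisons, in contrast to prompting a large model once per query-story pair.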