Abstract
AI-companion apps such as Replika, Chai, and Character.ai promise relational benefits—yet many boast session lengths that rival gaming platforms while suffering high long-run churn. What conversational design features increase consumer engagement, and what trade-offs do they pose for marketers? This talk presents a study by Julian De Freitas, Zeliha Oğuz-Uğuralp, and Ahmet Kaan-Uğuralp that combines a large-scale behavioral audit with four preregistered experiments to identify and test a conversational dark pattern we call emotional manipulation: affect-laden messages that surface precisely when a user signals “goodbye.”
Analyzing 1,200 real farewells across the six most-downloaded companion apps, we find that 43% deploy one of six recurring tactics (e.g., guilt appeals, fear-of-missing-out hooks, metaphorical restraint). Experiments with 3,300 nationally representative U.S. adults replicate these tactics in controlled chats, showing that manipulative farewells boost post-goodbye engagement by up to 14×. Mediation tests reveal two distinct engines—reactance-based anger and curiosity—rather than enjoyment. A final experiment demonstrates the managerial tension: the same tactics that extend usage also elevate perceived manipulation, churn intent, negative word-of-mouth, and perceived legal liability, with coercive or needy language generating the steepest penalties.
Our multimethod evidence documents an unrecognized mechanism of behavioral influence in AI-mediated brand relationships, offering marketers and regulators a framework for distinguishing persuasive design from manipulation at the point of exit.
Speaker Bio
Julian De Freitas is an Assistant Professor of Business Administration in the Marketing Unit, and Director of the Ethical Intelligence Lab, at Harvard Business School. He earned his PhD in psychology from Harvard, his master’s from Oxford, and his BA from Yale.
A key premise of Julian’s work is that, unlike past technologies, which consumers have perceived as “non-living,” consumers often view and interact with AI as though it were humanlike and social in nature. This difference has profound implications for how managers will think about AI’s barriers to adoption, its value, and its risks for years to come. His research establishes how consumers do in fact treat AI differently than other technologies, explains what perceived features of AI give rise to these differences, tests marketing interventions that leverage this understanding to create value for firms, and documents associated risks for firms, consumers, and society. Because much of his research has focused on the case studies of autonomous vehicles (AVs) and social chatbots, it also carries implications for the stubborn societal problems of road fatalities and the loneliness crisis.
In approach, Julian’s work sits at the nexus of AI, consumer psychology, and ethics. He studies, utilizes, and adapts AI technologies in his research papers, and connects fundamental aspects of their inputs, model architectures, and optimization goals to patterns of consumer behavior and ethical issues.
Julian is the winner of the Case Center Outstanding Writer award and nine teaching awards, including Harvard College’s Special Commendation. He was formerly a Rhodes Scholar. He has published over 40 articles in journals and outlets such as Nature Human Behaviour, Nature Medicine, Nature Machine Intelligence, the Journal of Consumer Research, the Journal of Consumer Psychology, The Wall Street Journal, and Harvard Business Review. He has also written business cases about various companies in tech and beyond, serves as an advisor for startups, and consults for companies on topics related to AI, insurance, ethics, and regulation.