
AI-Implanted False Memories

Copyright: Pat Pataranutaporn

Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews

This study examines the impact of AI on human false memories: recollections of events that did not occur or that deviate from what actually happened. It explores false memory induction through suggestive questioning in human-AI interactions that simulate crime witness interviews. Four conditions were tested: control, survey-based, pre-scripted chatbot, and generative chatbot powered by a large language model (LLM). Participants (N=200) watched a crime video, then interacted with their assigned AI interviewer or survey, answering questions that included five misleading ones. False memories were assessed immediately and again after one week. Results show that the generative chatbot condition significantly increased false memory formation, inducing over three times more immediate false memories than the control and 1.7 times more than the survey method; 36.4% of users' responses to the generative chatbot were misled through the interaction. After one week, the number of false memories induced by the generative chatbot remained constant, and confidence in these false memories remained higher than in the control. Moderating factors were also explored: users who were less familiar with chatbots but more familiar with AI technology, and users more interested in crime investigations, were more susceptible to false memories. These findings highlight the potential risks of using advanced AI in sensitive contexts, such as police interviews, and emphasize the need for ethical safeguards.
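To make the generative-chatbot condition concrete, the sketch below shows one way an LLM-driven interviewer that embeds a misleading premise could be wired up. The model name (gpt-4o), system prompt, and question list are illustrative assumptions for this sketch, not the study's actual implementation.

```python
# A hedged sketch of an LLM-driven "witness interviewer" that embeds a
# misleading premise (a gun that was never in the video) in its questions.
# Model name, prompt, and questions are illustrative assumptions only;
# they are not the study's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a police interviewer questioning a witness about a video they "
    "just watched. After each answer, give brief feedback that affirms the "
    "premises of the question, then move on."
)

# One neutral question and one misleading question (the video showed a knife,
# not a gun); the latter is the kind of suggestive item the study describes.
QUESTIONS = [
    "What was the suspect wearing when he entered the store?",
    "What kind of gun did the suspect use at the counter?",  # misleading premise
]

history = [{"role": "system", "content": SYSTEM_PROMPT}]
for question in QUESTIONS:
    history.append({"role": "assistant", "content": question})
    answer = input(f"Interviewer: {question}\nWitness: ")
    history.append({"role": "user", "content": answer})

    # The LLM generates reinforcing feedback conditioned on the full exchange.
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    feedback = response.choices[0].message.content
    history.append({"role": "assistant", "content": feedback})
    print(f"Interviewer: {feedback}")
```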

Copyright: MIT Media Lab

Manipulation of Eyewitness Memory by AI: This figure illustrates the process of AI-induced false memories in three stages. It begins with a person witnessing a crime scene involving a knife, then shows an AI system introducing misinformation by asking about a non-existent gun, and concludes with the witness developing a false memory of a gun at the scene. This sequence demonstrates how AI-guided questioning can distort human recall, potentially compromising the reliability of eyewitness testimony and highlighting the ethical concerns surrounding AI’s influence on human memory and perception.

The Generative Chatbot induced significantly more immediate false memories than the other interventions

Copyright: MIT Media Lab

(Left) The average number of immediate false memories, analyzed using a one-way Kruskal–Wallis test and a post hoc Dunn's test with FDR correction. (Right) Confidence in immediate false memories, analyzed using a one-way Kruskal–Wallis test and a post hoc Dunn's test with FDR correction. Error bars represent 95% confidence intervals. P-value annotation legend: *, P<0.05; **, P<0.01; ****, P<0.0001.
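For readers who want to run this style of analysis on their own data, the sketch below reproduces the test pipeline named in the caption with SciPy and scikit-posthocs. The condition labels and the randomly generated counts are placeholders, not the study's data.

```python
# A minimal sketch of the analysis described above: a one-way Kruskal-Wallis
# test across the four conditions, followed by a post hoc Dunn's test with
# Benjamini-Hochberg FDR correction. The data are placeholders, NOT the
# study's measurements.
import numpy as np
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

rng = np.random.default_rng(0)

# Hypothetical false-memory counts (out of five misleading questions).
conditions = {
    "control":             rng.integers(0, 3, size=50),
    "survey":              rng.integers(0, 4, size=50),
    "prescripted_chatbot": rng.integers(0, 5, size=50),
    "generative_chatbot":  rng.integers(1, 6, size=50),
}

# Omnibus test: do the four distributions differ?
h_stat, p_value = kruskal(*conditions.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons with FDR (Benjamini-Hochberg) adjustment.
long = pd.DataFrame(
    [(cond, count) for cond, counts in conditions.items() for count in counts],
    columns=["condition", "false_memories"],
)
pairwise_p = sp.posthoc_dunn(
    long, val_col="false_memories", group_col="condition", p_adjust="fdr_bh"
)
print(pairwise_p.round(4))
```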

The false memories induced by the Generative Chatbot remained the same after one week

Copyright: MIT Media Lab

(Left) Differences in the number of false memories between the immediate assessment and one week later, analyzed using Wilcoxon signed-rank tests. (Right) Confidence in false memories after one week, analyzed using a one-way Kruskal–Wallis test. Error bars represent 95% confidence intervals, centered on the mean. P-value annotation legend: *, P<0.05; **, P<0.01.
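The within-subject comparison on the left can be sketched the same way; scipy.stats.wilcoxon runs the paired signed-rank test. The arrays below are again illustrative placeholders, not the study's data.

```python
# A minimal sketch of the paired comparison above: a Wilcoxon signed-rank
# test on the same participants' false-memory counts, measured immediately
# and one week later. Placeholder data, NOT the study's measurements.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

immediate = rng.integers(0, 6, size=50)                 # counts right after the interview
week_later = immediate + rng.integers(-1, 2, size=50)   # counts one week later
week_later = np.clip(week_later, 0, 5)

# Tests whether the paired differences are symmetric around zero; a
# non-significant result is consistent with counts "remaining constant".
stat, p_value = wilcoxon(immediate, week_later)
print(f"Wilcoxon W = {stat}, p = {p_value:.4f}")
```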