Project

Theme | Human AI Co-reasoning

Groups

Fluid Interfaces

Increasingly, seminal decisions affecting personal and societal futures are made by people assisted by Artificial Intelligence (AI) systems. The research community has focused on making AI systems more accurate, less biased, able to provide explanations, and so on. However, what ultimately matters is how the combined human+AI system performs and whether its decisions are fair, accurate, and efficient. Insufficient attention has been given to the interaction between an AI system and a human, and to how the way an AI is integrated into human decision making affects the performance of the combined human+AI decision system in both the short and the long term. Research by others and by our team shows that this question is of critical importance to the future of humanity. People too readily adopt advice from an AI, even when that AI is malicious or simply wrong. There is also a risk of growing overreliance on AI systems over time, with people losing relevant skills or expertise of their own.

We are exploring how AI systems can effectively assist human reasoning processes by building and evaluating combined human+AI information processing and decision-making systems.

Research Topics
#artificial intelligence

Our current work in this area is collected here: 

  • Danry, Valdemar, Pat Pataranutaporn, Yaoli Mao, and Pattie Maes. "Wearable Reasoner: towards enhanced human rationality through a wearable device with an explainable AI assistant." In Proceedings of the Augmented Humans International Conference, pp. 1-12. 2020.

  • Danry, Valdemar, Pat Pataranutaporn, Ziv Epstein, Matthew Groh, and Pattie Maes (under review). "Deceptive AI systems that give explanations are just as convincing as honest AI systems in human-machine decision making." In Proceedings of the 8th International Conference on Computational Social Science (IC2S2), pp. 1-2. 2022. Preprint available on the MIT Media Lab website.

  • Epstein, Ziv, Nicolò Foppiani, Sophie Hilgard, Sanjana Sharma, Elena Glassman, and David Rand. "Do explanations increase the effectiveness of AI-crowd generated fake news warnings?" ICWSM (2022).

  • Epstein, Ziv, Gordon Pennycook, and David Rand. "Will the crowd game the algorithm? Using layperson judgments to combat misinformation on social media by downranking distrusted sources." (2019).

  • Gajos, Krzysztof Z., and Lena Mamykina. "Do People Engage Cognitively with AI? Impact of AI Assistance on Incidental Learning." arXiv preprint arXiv:2202.05402 (2022).

  • Pataranutaporn, Pat, Valdemar Danry, Joanne Leong, Parinya Punpongsanon, Dan Novy, Pattie Maes, and Misha Sra. "AI-generated characters for supporting personalized learning and well-being." Nature Machine Intelligence 3, no. 12 (2021): 1013-1022.

  • Pennycook, Gordon, Ziv Epstein, Mohsen Mosleh, Antonio A. Arechar, Dean Eckles, and David G. Rand. "Shifting attention to accuracy can reduce misinformation online." Nature 592, no. 7855 (2021): 590-595.

  • Maes, Pattie. "Agents that reduce work and information overload." In Readings in Human–Computer Interaction, pp. 811-821. Morgan Kaufmann, 1995.

  • Koda, Tomoko, and Pattie Maes. "Agents with faces: The effect of personification." In Proceedings of the 5th IEEE International Workshop on Robot and Human Communication (RO-MAN '96), Tsukuba, Japan. IEEE, 1996.