Project

Theme | Human AI Co-reasoning

Copyright

Valdemar Danry


We explore how AI systems can effectively assist human reasoning by building and evaluating combined human+AI information-processing and decision-making systems.

Increasingly, seminal decisions affecting personal and societal futures are made by people assisted by Artificial Intelligence (AI) systems. While the research community has focused on making AI systems more accurate, less biased, and able to provide explanations, what ultimately matters is how the combined human+AI system performs and whether the decisions of that combined system are fair, accurate, and efficient.

Insufficient attention has been given to the interaction between an AI system and a human, and to how the way an AI is integrated into human decision making affects the performance of the combined human+AI decision system in the short and long term. Research by our team and others shows that this question is of critical importance to the future of humanity. People too readily adopt advice from an AI, even when that AI is malicious or simply wrong. There is also a risk of growing over-reliance on AI systems, with people forgetting relevant skills or expertise over time.
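The point that team performance, not AI accuracy alone, is what matters can be sketched with a toy model (our own simplification for illustration, not a model taken from the publications below): suppose the human blindly adopts the AI's answer with some fixed probability and otherwise answers independently.

```python
def combined_accuracy(ai_accuracy: float, human_accuracy: float,
                      reliance: float) -> float:
    """Expected accuracy of a human+AI team under blind adoption:
    with probability `reliance` the human takes the AI's answer,
    otherwise they answer on their own. All arguments are
    probabilities in [0, 1]; the parameter names are illustrative."""
    return reliance * ai_accuracy + (1.0 - reliance) * human_accuracy

# A strong AI lifts the team above the human working alone:
print(combined_accuracy(ai_accuracy=0.9, human_accuracy=0.7, reliance=0.8))  # ~0.86

# The same heavy reliance on a wrong (or malicious) AI drags the
# team well below the human's own 0.7 accuracy:
print(combined_accuracy(ai_accuracy=0.3, human_accuracy=0.7, reliance=0.8))  # ~0.38
```

Even this crude sketch shows why reliance is a design variable in its own right: the same adoption behavior that amplifies a good AI amplifies a bad one.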

Research Topics
#artificial intelligence

Some of our publications are listed here, and projects are described in detail below: 

  • Danry, Valdemar, Pat Pataranutaporn, Yaoli Mao, and Pattie Maes. "Wearable Reasoner: towards enhanced human rationality through a wearable device with an explainable AI assistant." In Proceedings of the Augmented Humans International Conference, pp. 1-12. 2020.
  • Danry, Valdemar, Pat Pataranutaporn, Ziv Epstein, Matthew Groh, and Pattie Maes (Under review). “Deceptive AI systems that give explanations are just as convincing as honest AI systems in human-machine decision making.” In Proceedings of 8th International Conference on Computational Social Science IC2S2, pp. 1-2. 2022. Preprint available at the MIT Media Lab’s website.
  • Epstein, Ziv, Nicolò Foppiani, Sophie Hilgard, Sanjana Sharma, Elena Glassman, and David Rand. "Do explanations increase the effectiveness of AI-crowd generated fake news warnings?" ICWSM (2022).
  • Epstein, Ziv, Gordon Pennycook, and David Rand. "Will the crowd game the algorithm? Using layperson judgments to combat misinformation on social media by downranking distrusted sources." (2019).
  • Pataranutaporn, Pat, Valdemar Danry, Joanne Leong, Parinya Punpongsanon, Dan Novy, Pattie Maes, and Misha Sra. "AI-generated characters for supporting personalized learning and well-being." Nature Machine Intelligence 3, no. 12 (2021): 1013-1022.
  • Pennycook, Gordon, Ziv Epstein, Mohsen Mosleh, Antonio A. Arechar, Dean Eckles, and David G. Rand. "Shifting attention to accuracy can reduce misinformation online." Nature 592, no. 7855 (2021): 590-595.
  • Koda, Tomoko, and Pattie Maes. "Agents with faces: The effect of personification." Proceedings 5th IEEE International Workshop on Robot and Human Communication. RO-MAN'96 TSUKUBA. IEEE, 1996.