Cognitive biases detrimentally influence decision-making across a range of domains, including healthcare, electoral behavior, and organizational management. To address the challenge of mitigating these biases, this study introduces MindMapper, an automated system that identifies cognitive biases in individuals' speech transcripts.
MindMapper leverages large language model (LLM) agents to continuously learn and adapt to the user, providing a model of the user's behavioral trends that they can inspect and learn from to identify personal vulnerabilities and strengthen their critical thinking.
To evaluate the system, we simulated AI-agent users over a one-week period, injecting known biases into their behavior to serve as a validation dataset. The system's performance was quantitatively assessed by comparing its bias identifications against the biases known to be present in these in-silico characters.
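The comparison described above can be sketched as a standard label-matching evaluation. The following is a minimal illustration, not the paper's actual scoring code; the function name and bias labels are hypothetical, and we assume detected and injected biases are represented as sets of category labels per simulated agent.

```python
# Hypothetical sketch: scoring detected biases against the biases
# injected into a simulated agent (names and labels are illustrative).

def score_detection(detected: set[str], injected: set[str]) -> dict[str, float]:
    """Compare the system's detected bias labels to the known injected set."""
    true_pos = len(detected & injected)  # biases both detected and injected
    precision = true_pos / len(detected) if detected else 0.0
    recall = true_pos / len(injected) if injected else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: one simulated agent with two injected biases,
# of which the system detected one and added one false positive.
scores = score_detection({"anchoring", "confirmation"},
                         {"anchoring", "sunk_cost"})
```

Aggregating such per-agent scores across the simulated population yields the quantitative assessment referred to above.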
These promising results underscore the potential of the proposed system to serve as a foundation for real-time bias detection and intervention, thereby improving decision quality in critical settings. Future work will focus on refining the system's accuracy and exploring its applicability in live environments, with the ultimate goal of facilitating more objective, bias-aware decision-making.