
Publication

Multimodal Inductive Transfer Learning for Detection of Alzheimer's Dementia and its Severity

@article{sarawgi2020multimodal,
  title={Multimodal Inductive Transfer Learning for Detection of Alzheimer's Dementia and its Severity},
  author={Sarawgi, Utkarsh and Zulfikar, Wazeer and Soliman, Nouran and Maes, Pattie},
  journal={arXiv preprint arXiv:2009.00700},
  year={2020}
}

Abstract

Alzheimer's disease is estimated to affect around 50 million people worldwide and is rising rapidly, with a global economic burden of nearly a trillion dollars. This calls for scalable, cost-effective, and robust methods for detection of Alzheimer's dementia (AD). We present a novel architecture that leverages acoustic, cognitive, and linguistic features to form a multimodal ensemble system. It uses specialized artificial neural networks with temporal characteristics to detect AD and its severity, which is reflected through Mini-Mental State Exam (MMSE) scores. We first evaluate it on the ADReSS challenge dataset, which is a subject-independent and balanced dataset matched for age and gender to mitigate biases, and is available through DementiaBank. Our system achieves state-of-the-art test accuracy, precision, recall, and F1-score of 83.3% each for AD classification, and state-of-the-art test root mean squared error (RMSE) of 4.60 for MMSE score regression. To the best of our knowledge, the system further achieves state-of-the-art AD classification accuracy of 88.0% when evaluated on the full benchmark DementiaBank Pitt database. Our work highlights the applicability and transferability of spontaneous speech to produce a robust inductive transfer learning model, and demonstrates generalizability through a task-agnostic feature-space. The source code is available at this URL.
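To make the abstract's description more concrete, below is a minimal sketch of a multimodal ensemble that fuses acoustic, linguistic, and cognitive features for joint AD classification and MMSE regression. It is not the authors' released code: the feature dimensions, the single LSTM acoustic branch, the MLP branches, and the shared fused representation are all illustrative assumptions chosen only to show the general shape of such a system.

# A minimal sketch of a multimodal ensemble, NOT the paper's implementation.
# Assumed (hypothetical): feature dimensions, one LSTM acoustic branch, MLP
# branches for linguistic and cognitive features, and a shared fused embedding
# feeding both the AD classifier and the MMSE regressor.
import torch
import torch.nn as nn


class AcousticBranch(nn.Module):
    """LSTM over per-frame acoustic features; returns a fixed-size embedding."""

    def __init__(self, feat_dim=40, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)

    def forward(self, x):              # x: (batch, time, feat_dim)
        _, (h, _) = self.lstm(x)
        return h[-1]                   # (batch, hidden)


class MLPBranch(nn.Module):
    """Small MLP for utterance-level linguistic or cognitive features."""

    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())

    def forward(self, x):              # x: (batch, feat_dim)
        return self.net(x)


class MultimodalEnsemble(nn.Module):
    """Concatenates the three branch embeddings and emits both task outputs."""

    def __init__(self, ling_dim=300, cog_dim=12, hidden=64):
        super().__init__()
        self.acoustic = AcousticBranch(hidden=hidden)
        self.linguistic = MLPBranch(ling_dim, hidden)
        self.cognitive = MLPBranch(cog_dim, hidden)
        self.classifier = nn.Linear(3 * hidden, 2)   # AD vs. non-AD logits
        self.regressor = nn.Linear(3 * hidden, 1)    # MMSE score estimate

    def forward(self, acoustic_seq, linguistic_feats, cognitive_feats):
        fused = torch.cat([
            self.acoustic(acoustic_seq),
            self.linguistic(linguistic_feats),
            self.cognitive(cognitive_feats),
        ], dim=-1)
        return self.classifier(fused), self.regressor(fused)


if __name__ == "__main__":
    model = MultimodalEnsemble()
    logits, mmse = model(torch.randn(4, 100, 40),   # 100 frames of 40-d acoustics
                         torch.randn(4, 300),       # linguistic embedding
                         torch.randn(4, 12))        # cognitive test features
    print(logits.shape, mmse.shape)                 # (4, 2) and (4, 1)

Sharing the fused feature space across the classification and regression heads mirrors the abstract's point about a task-agnostic feature space supporting inductive transfer between the AD detection and MMSE severity tasks, though the exact fusion and training strategy here is only an assumption.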
