Project

Predicting Driver Self-Reported Stress by Analyzing the Road Scene

Different approaches and models have been used to detect drivers' stress and affective states from physiology, facial expressions, and self-reports. Incorporating contextual information about the driving environment is expected to improve the accuracy of these models.

This project focuses on vision-based extraction of driving environmental context. Thanks to recent advances in machine learning and the availability of shared real-world driving datasets, we believe it is now feasible to train an automated system to predict a driving-induced state of stress from the visual scene.
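As an illustration only, the sketch below shows one plausible shape such a system could take: a pretrained image backbone extracts road-scene context features, and a small head maps them to a self-reported stress label. The model name, architecture, and use of PyTorch/torchvision are assumptions for the example, not the project's actual implementation.

```python
# Hypothetical sketch: predicting a self-reported stress label from a
# road-scene frame, assuming dashcam images paired with stress ratings.
import torch
import torch.nn as nn
from torchvision import models, transforms

class SceneStressPredictor(nn.Module):
    """Pretrained CNN backbone extracts scene features; a linear head
    produces a logit for a binary stressed / not-stressed label."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()      # keep the 512-d scene features
        self.backbone = backbone
        self.head = nn.Linear(512, 1)    # logit for self-reported stress

    def forward(self, frames):           # frames: (B, 3, 224, 224)
        return self.head(self.backbone(frames)).squeeze(-1)

# Standard ImageNet preprocessing would be applied to real dashcam frames.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = SceneStressPredictor()
dummy_batch = torch.randn(4, 3, 224, 224)   # stand-in for preprocessed frames
probs = torch.sigmoid(model(dummy_batch))   # per-frame stress probability
```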

This work may help not only with predicting driver stress in real-time applications but also with expanding the utility of other unlabeled datasets for additional research.
