Project

Real-time Smartphone-based Sleep Staging using 1-Channel EEG and Machine Learning

Judith Amores

Automatic, real-time sleep scoring is necessary to develop user interfaces that trigger stimuli during specific sleep stages. However, most automatic sleep scoring systems have focused on offline data analysis. We present the first real-time sleep staging system that runs deep learning in a smartphone application, without the need for servers, paired with a wearable EEG. A single channel of Electroencephalography (EEG) is streamed in real time and classified by a Time-Distributed Convolutional Neural Network (CNN). Polysomnography (PSG), the gold standard for sleep staging, requires a human scorer and is both complex and resource-intensive. Our work demonstrates an end-to-end, smartphone-based pipeline that infers sleep stages from single 30-second epochs, with an overall accuracy of 83.5% on 20-fold cross-validation for 5-stage classification using the open Sleep-EDF dataset.
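As a small illustration of the epoching step (a hypothetical sketch, not the paper's code), the snippet below slices a continuous single-channel EEG stream into the non-overlapping 30-second epochs on which classification runs, assuming the 100 Hz sampling rate of the Sleep-EDF EEG channels:

```python
import numpy as np

FS = 100                     # assumed sampling rate (Hz), as in Sleep-EDF EEG
EPOCH_SEC = 30               # standard scoring epoch length
EPOCH_LEN = FS * EPOCH_SEC   # 3000 samples per epoch

def to_epochs(eeg: np.ndarray) -> np.ndarray:
    """Slice a 1-D EEG signal into non-overlapping 30-second epochs.

    Trailing samples that do not fill a whole epoch are dropped,
    mirroring how a real-time classifier waits for a complete epoch.
    """
    n_epochs = len(eeg) // EPOCH_LEN
    return eeg[: n_epochs * EPOCH_LEN].reshape(n_epochs, EPOCH_LEN)

# e.g. 5 minutes of signal -> 10 epochs of 3000 samples each
signal = np.random.randn(5 * 60 * FS)
epochs = to_epochs(signal)
print(epochs.shape)  # (10, 3000)
```

In a real-time setting, each newly completed epoch would be handed to the classifier as soon as its 3000th sample arrives.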

For comparison, inter-rater reliability among sleep-scoring experts is about 80% (Cohen's κ = 0.68 to 0.76). We further propose an on-device metric, independent of the deep learning model, that increases the average accuracy of classifying deep sleep (N3) to more than 97.2% on 4 test nights using power spectral analysis.
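The implementation of this spectral metric is not reproduced here; as a hedged sketch of the general idea, relative delta-band (0.5 to 4 Hz) power, which dominates the EEG during N3 slow-wave sleep, can be estimated per epoch with Welch's method. The function name, sampling rate, and frequency bounds below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

FS = 100  # assumed sampling rate (Hz)

def relative_delta_power(epoch: np.ndarray, fs: int = FS) -> float:
    """Fraction of 0.5-30 Hz spectral power that lies in the delta band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 4)
    total_band = (freqs >= 0.5) & (freqs <= 30)
    delta_band = (freqs >= 0.5) & (freqs <= 4)
    # Uniform frequency grid, so summing PSD bins approximates band power.
    return psd[delta_band].sum() / psd[total_band].sum()

t = np.arange(0, 30, 1 / FS)
slow = np.sin(2 * np.pi * 1.5 * t)    # 1.5 Hz: delta-like slow wave
alpha = np.sin(2 * np.pi * 10 * t)    # 10 Hz: alpha-like activity
print(relative_delta_power(slow) > 0.9)    # True
print(relative_delta_power(alpha) < 0.1)   # True
```

An on-device N3 detector built this way would compare the per-epoch ratio against a tuned threshold; the specific threshold used in the paper is not shown here.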

Identification of sleep stages is important not only for diagnosing and treating sleep disorders but also for understanding the neuroscience of healthy sleep. PSG is used in hospitals to study sleep and diagnose sleep disorders; it involves recording multiple electrophysiological signals from the body, such as brain activity using EEG, heart rhythm through Electrocardiography (ECG), muscle tone through Electromyography (EMG), and eye movement through Electrooculography (EOG). PSG is a tedious procedure that requires skilled sleep technologists in a laboratory setting, with a minimum of 22 wires attached to the body to monitor sleep activity. The complexity of this setup requires sleeping in a hospital or laboratory with an expert monitoring and scoring signals in real time. This results in an unnatural sleep setup for the subject that affects the diagnosis, and it makes sub-optimal use of the time and energy resources needed for recording and scoring. There has therefore been significant research on automating sleep staging with wireless signals and more compact, wearable devices. Nevertheless, to the best of our knowledge, none of these systems implements 5-stage sleep classification in 30-second real-time epochs on a smartphone using single-channel EEG.

The goal of our research is to simplify and automate PSG on a smartphone for real-time interventions, which can potentially be used in Human-Computer Interaction applications.

We achieve automated classification by adapting a Time-Distributed Deep Convolutional Neural Network model to classify the 5 sleep stages. Per the new AASM rules, these stages are Wake, Rapid-Eye-Movement (REM) sleep, and Non-Rapid-Eye-Movement (NREM) stages N1, N2, and N3. We use a single channel of EEG recorded through a modified research version of the Muse headband. We have developed a TensorFlow Lite Android app that relies on this single-channel EEG recording alone. The app has a user-friendly interface for visualizing sleep stages and EEG data with real-time statistics. It connects via Bluetooth Low Energy (BLE) to the flexible EEG headband, making it portable and comfortable to use at home.
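The exact network architecture is described in the paper; as a minimal, hypothetical sketch of the time-distributed idea only, the same per-epoch feature extractor (here a single shared convolution filter standing in for the full CNN) is applied to every 30-second epoch in a sequence, so a subsequent temporal layer can use context across neighboring epochs:

```python
import numpy as np

rng = np.random.default_rng(0)

# A sequence of 10 epochs, each 3000 samples (30 s at an assumed 100 Hz).
sequence = rng.standard_normal((10, 3000))

# One shared weight set: a toy stand-in for the per-epoch CNN.
kernel = rng.standard_normal(50)

def epoch_features(epoch: np.ndarray) -> np.ndarray:
    """Per-epoch feature extractor: convolution + ReLU + global max pool."""
    conv = np.convolve(epoch, kernel, mode="valid")
    return np.maximum(conv, 0.0).max(keepdims=True)

# "Time-distributed": the SAME extractor (same kernel) runs on each epoch;
# a recurrent or temporal layer would then consume this feature sequence.
features = np.stack([epoch_features(ep) for ep in sequence])
print(features.shape)  # (10, 1)
```

In a framework such as Keras, this weight sharing across timesteps is what a TimeDistributed wrapper around the convolutional layers provides; the real model would then be converted for on-device inference with TensorFlow Lite.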

You can download our BSN research paper here:

Koushik, A., Amores, J., & Maes, P. (2019, May). Real-time Smartphone-based Sleep Staging using 1-Channel EEG. In 2019 IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN). IEEE.

You can also download our arXiv paper, published at the Machine Learning for Health (ML4H) Workshop at NeurIPS 2018:

Research Topics
#sensors #sleep #wellbeing