Project

ECHOS: Enhancing Communication using Holistic Observations and Sensing

Copyright

U.S. Air Force photo by Airman 1st Class Valerie Monroy

Approximately 3.5 million people in the US have Autism Spectrum Disorder (ASD), roughly 30% of whom are nonverbal or minimally verbal (mv). Individuals with mv-ASD often communicate emotions and desires through vocalizations that do not carry typical verbal content, as well as through gestures, e.g., pulling a caregiver toward a desired toy. Some vocalizations have self-consistent phonetic content (e.g., “ba” to mean “bathroom”), while others vary in tone, pitch, and duration depending on the individual’s emotional or physical state or intended communication.

We present, to our knowledge, the first project to study communicative intent and effect in naturalistic vocalizations without typical verbal content produced by people with mv-ASD. In interviews, parents of children with mv-ASD cited miscommunication with people who do not know their child well as a major source of stress. Our long-term vision is to design a device that helps others better understand and communicate with individuals with ASD by training machine learning models on primary caregivers’ unique knowledge of what an individual’s nonverbal communication means. Our current focus is on developing personalized models that classify vocalizations using in-the-moment labels provided live by caregivers via the ECHOS labeling app. As part of this work, we are developing scalable methods for collecting and live-labeling naturalistic data, along with processing methods that prepare the data for machine learning algorithms. We are piloting and refining our data collection, machine learning models, and vision with one family through a highly participatory design process.
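
The sketch below gives a rough sense of what a personalized vocalization classifier trained on caregiver labels could look like. It is illustrative only: the library choices, acoustic features, and label names are assumptions for the sketch, not the project's actual pipeline.

```python
# Minimal sketch (assumed, not the ECHOS implementation): map short vocalization
# clips to caregiver-provided labels (e.g., "request", "protest", "delight").
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def featurize(wav_path, sr=16000):
    """Summarize one clip with MFCC statistics, pitch statistics, and duration."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    f0 = librosa.yin(y, fmin=80, fmax=600, sr=sr)  # frame-level pitch track
    return np.concatenate([
        mfcc.mean(axis=1),
        mfcc.std(axis=1),
        [f0.mean(), f0.std(), len(y) / sr],
    ])


def evaluate(labeled_clips):
    """labeled_clips: list of (wav_path, caregiver_label) pairs, e.g. exported
    from a labeling app."""
    X = np.stack([featurize(path) for path, _ in labeled_clips])
    y = np.array([label for _, label in labeled_clips])
    # Per-individual datasets are small and often imbalanced, so use a simple
    # regularized model and report cross-validated scores.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    return cross_val_score(model, X, y, cv=5)
```

Because each individual's vocal repertoire is idiosyncratic, a model of this kind would be trained per person on that person's labeled clips rather than pooled across individuals.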

Research Topics
#autism research