Commalla: Communication for All


Authors retain copyright

Kristina T. Johnson


Project Description

Over 1 million people in the U.S. are non- or minimally speaking with respect to verbal language (mv*), including but not limited to people with autism spectrum disorders (ASD), Down syndrome (DS), and other genetic disorders. Mv* individuals communicate richly through vocalizations that do not have typical verbal content, as well as through gestures, augmentative and alternative communication (AAC), and other modalities. Some vocalizations have self-consistent phonetic content (e.g., “ba” to mean “bathroom”), while others vary in tone, pitch, and duration depending on the individual’s intended communication and affect.

We present, to our knowledge, the first project studying communicative intent and affect in naturalistic vocalizations without typical verbal content from mv* individuals. Parents of mv* children whom we interviewed cited miscommunication with people who do not know their child well as a major source of stress. Our long-term vision is a device that helps others better understand and communicate with mv* individuals, built by training machine learning models on primary caregivers’ unique knowledge of the meaning of an individual’s nonverbal communication. Our current focus is developing personalized models that classify vocalizations using in-the-moment (“live”) labels provided by caregivers via the Commalla labeling app. As part of this work, we are developing scalable methods for collecting and live-labeling naturalistic data, along with processing methods for using those data in machine learning algorithms. We are currently piloting and refining our data collection, machine learning models, and vision with a small number of families through a highly participatory design process.
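The personalization step described above — training a per-individual model on caregiver-labeled vocalizations — could be sketched roughly as below. Everything in this sketch is an illustrative assumption rather than the project's actual pipeline: the two example labels, the synthetic tone-based "vocalizations", the simple spectral features, and the scikit-learn classifier are all stand-ins.

```python
# Minimal sketch of a personalized vocalization classifier.
# Assumptions (not from the Commalla project): two hypothetical labels,
# synthetic audio in place of real recordings, hand-rolled spectral
# features, and a logistic-regression model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

RNG = np.random.default_rng(0)
SR = 8000  # assumed sample rate in Hz


def extract_features(clip):
    """Summarize a 1-D audio clip with a few spectral statistics.

    Real systems would use richer acoustic features; this is only
    enough to separate the synthetic examples below.
    """
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), d=1 / SR)
    energy = spectrum.sum() + 1e-9
    centroid = (freqs * spectrum).sum() / energy          # spectral centroid
    bandwidth = np.sqrt(((freqs - centroid) ** 2 * spectrum).sum() / energy)
    return np.array([np.log(energy), centroid, bandwidth, len(clip) / SR])


def synthetic_vocalization(label):
    """Stand-in for a caregiver-labeled recording: a noisy tone whose
    pitch depends on the (hypothetical) label."""
    pitch = {"request": 220.0, "frustration": 440.0}[label]
    duration = RNG.uniform(0.3, 0.8)  # seconds
    t = np.linspace(0.0, duration, int(SR * duration), endpoint=False)
    return np.sin(2 * np.pi * pitch * t) + 0.1 * RNG.standard_normal(t.size)


# One caregiver-labeled dataset per individual; here, 100 synthetic clips.
labels = ["request", "frustration"] * 50
X = np.stack([extract_features(synthetic_vocalization(y)) for y in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

# Train a personalized model on this individual's labeled clips only.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

Because each mv* individual's vocalizations are idiosyncratic, one model would be trained per person, with the caregiver's live labels serving as ground truth; pooling data across individuals is a separate question the project's participatory process would inform.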


Commalla is co-led by Kristy Johnson and Jaya Narain.

(Note: This project was previously called ECHOS.)  

Additional Commalla information: