
Frequently asked questions: conformable facial code extrapolation sensor (cFaCES)

  1. How did this research come about? What inspired cFaCES?
  2. Where and how long did this project take?
  3. What will cFaCES do?
  4. What is the importance of 3D-DIC?
  5. What kind of applications can be performed with this facial interface system?
  6. Can the device be worn daily?
  7. Will this research be applicable to other areas of the body or to monitor other modalities?
  8. How long will it be before we or practitioners can use cFaCES?
  9. What’s the benefit of this facial interface nonverbal communication technology compared to existing approaches?
  1. How did this research come about? What inspired cFaCES?

    It was a beautiful, warm April night when I met Dr. Stephen Hawking at the Harvard Society of Fellows, where I was a Junior Fellow just prior to joining the MIT Media Lab as a faculty member. Society dinners are known for lively, scintillating conversations among some of the most brilliant scholars and interesting minds; a place where conversations carry on for hours into the night. During dinner, Dr. Hawking exuded a warm and patient presence, with so much to tell and share, yet I sensed his struggle: it was taking too long for him to compose a sentence via his computer system. That night, while sitting by his side, I made up my mind to tackle this struggle by designing and developing a conformable interface that would allow him, and those like him, to compose messages seamlessly and thus carry on the conversation.

  2. Where and how long did this project take?

    This project took a year and a half to finish, starting in June 2018 at the MIT Media Lab. 

  3. What will cFaCES do?

    cFaCES is a conformable device composed of piezoelectric thin films on compliant substrates. It enables mechanically adaptive, predictable, and visually invisible in vivo monitoring of spatiotemporal epidermal strains, and decoding of distinct facial deformation signatures. cFaCES thus provides a predictable method for continuously tracking dynamic skin strain on the face and for biokinematic assessment of the underlying musculature. Built from low-cost materials with easily manufacturable processes and a seamless pipeline for fabrication, testing, and validation, cFaCES is a widely deployable system for real-time detection of facial motions, with potential for clinically viable nonverbal communication technologies.
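
    As a back-of-the-envelope illustration of the sensing principle, the sketch below estimates the idealized open-circuit voltage of a piezoelectric thin film under in-plane strain. The material parameters are illustrative values typical of AlN rather than the device's exact specifications, and real readouts are orders of magnitude smaller due to amplifier loading, charge leakage, and incomplete strain transfer from skin to film.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def open_circuit_voltage(strain, e31f=1.05, thickness=1e-6, eps_r=10.0):
    """Idealized open-circuit voltage (V) of a piezoelectric thin film.

    strain:    in-plane strain (dimensionless)
    e31f:      effective transverse piezoelectric coefficient, C/m^2
               (magnitude typical of AlN; sign omitted for simplicity)
    thickness: film thickness, m
    eps_r:     relative permittivity of the film
    """
    charge_density = e31f * strain                # C/m^2 on the electrodes
    return charge_density * thickness / (EPS0 * eps_r)

# 0.1% epidermal strain on a 1-micron film: ~11.9 V in this idealized,
# lossless model; measured signals are far smaller in practice.
print(f"{open_circuit_voltage(1e-3):.1f} V")
```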

  4. What is the importance of 3D-DIC?

    The Conformable Decoders lab aims to translate the plethora of physical patterns constantly communicated by the human body into beneficial signals. To decode these patterns, we create seamless conformable devices that accurately track soft tissue deformations using ultrathin piezoelectrics encapsulated in stretchable polymers. In the process of creating these devices, we realized that most biomedical sensor development pipelines lack an in-depth study of the target soft tissue prior to the design and fabrication of the device. Such knowledge would better inform sensor design, in particular how best to account for soft tissue strains across a wide array of dynamic motions, before the device design stage. We thus decided to use three-dimensional digital image correlation (3D-DIC) as a method for detailed biokinematic study of the target region upon which a sensor, built from mechanically active functional materials such as piezoelectrics, would ideally be placed. Just as chemical assays of a body part would be conducted before designing medication for disorders of that body part, 3D-DIC allows for the mechanical study of biological soft tissue before designing the mechanically active functional materials, on mechanically adaptive substrates, that are meant to intimately integrate with that tissue. 3D-DIC can determine the surface strains and deformations of any 3D object of interest.
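
    As a toy example of the kind of quantity 3D-DIC produces, the sketch below computes the Green-Lagrange surface strain of a single triangular facet from its tracked 3D vertex positions before and after deformation. The coordinates and helper names are illustrative, not part of our actual pipeline, which operates on dense correlated point clouds.

```python
import numpy as np

def facet_strain(ref, cur):
    """Green-Lagrange in-plane strain of one triangular surface facet.

    ref, cur: (3, 3) arrays of 3D vertex positions in the reference
    (undeformed) and current (deformed) frames, e.g. as tracked by a
    stereo 3D-DIC system.
    """
    def edges_2d(tri):
        # Express the two edge vectors in a local in-plane 2D basis.
        e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
        u = e1 / np.linalg.norm(e1)              # first in-plane axis
        n = np.cross(e1, e2)
        v = np.cross(n / np.linalg.norm(n), u)   # second in-plane axis
        return np.column_stack(([u @ e1, v @ e1], [u @ e2, v @ e2]))

    D_ref, D_cur = edges_2d(ref), edges_2d(cur)
    F = D_cur @ np.linalg.inv(D_ref)             # 2x2 deformation gradient
    return 0.5 * (F.T @ F - np.eye(2))           # Green-Lagrange strain

# Illustrative facet: a 1 mm reference triangle stretched 2% along x.
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cur = ref.copy()
cur[:, 0] *= 1.02
print(facet_strain(ref, cur))  # E[0,0] ~ 0.0202, other entries ~ 0
```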

  5. What kind of applications can be performed with this facial interface system?

    The immediate demonstration of the technology is as a nonverbal communication tool for individuals with speech or other impairments that make it difficult for them to speak or to use standard communication technologies. Such systems can also be developed for continuous clinical monitoring of a wide range of neuromuscular conditions, where variations in the strain values measured by cFaCES are anticipated, due either to time-dependent alterations in muscle movements (and thus in measurable epidermal deformations) caused by neurodegeneration, or to a response over the course of medical therapy.

  6. Can the device be worn daily?

    The device has a mechanical compliance similar to that of skin, making it comfortable for users to wear for extended periods of time. In addition, fabricating the AlN films in an 8-inch wafer process keeps the cost low (about $10 per cFaCES) and allows a disposable device structure. The device can be made visually, as well as mechanically, invisible, which allows for mental and physical comfort in daily use and can lead to easier adoption of the technology in society.

  7. Will this research be applicable to other areas of the body or to monitor other modalities?

    This technology can also be utilized in applications where discreet communication is desired, such as military communications, or situations where a user may be in danger and coded messages can be sent to notify loved ones or the authorities.

  8. How long will it be before we or practitioners can use cFaCES?

    We would need several years of further development to make this technology accessible and ready for use in clinics and at home, including further system optimization and more extensive mechanical and computational studies involving larger sets of motions and messages.

  9. What’s the benefit of this facial interface nonverbal communication technology compared to existing approaches?

    Present methods for in vivo characterization of facial deformations involve electromyography (EMG), skin impedance measurements, or camera tracking. Yet these typically impose a cumbersome computational load or rely on rigid, bulky structures with highly visible interfaces to the soft skin, making continuous use in daily life difficult, especially for individuals with neuromuscular disorders. Compared with the metrics available in the literature for these technologies, our work represents significant improvements toward more seamless interface coupling and lower computational load, while maintaining modest accuracy levels. cFaCES has a deliberately simple structure, a 2 x 2 spatial array of piezoelectric elements, chosen to reduce the amount of data processing required during real-time decoding and to push the boundary of decoding accuracy at lower cost and with less data-processing power and time.
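
    One simple real-time decoding strategy consistent with this design is sketched below: correlate the incoming voltage window from the four piezoelectric elements against pre-recorded average signatures for each motion and report the best match. This is a hypothetical illustration, not our exact decoding pipeline; the motion names, template library, and threshold are made up.

```python
import numpy as np

def decode_motion(window, templates, threshold=0.8):
    """Classify a facial motion from a 2 x 2 piezoelectric array.

    window:    (4, T) array, one voltage trace per element.
    templates: dict mapping motion name -> (4, T) average signature
               recorded during a calibration session.
    Returns the best-matching motion, or None if no template
    correlates strongly enough (e.g. the face was at rest).
    """
    def normalize(x):
        # Zero-mean, unit-norm each channel (Pearson-style correlation).
        x = x - x.mean(axis=1, keepdims=True)
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)

    w = normalize(window)
    scores = {name: float(np.mean(np.sum(w * normalize(t), axis=1)))
              for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Illustrative use with synthetic signatures for three motions.
rng = np.random.default_rng(0)
templates = {m: rng.standard_normal((4, 200))
             for m in ("smile", "open mouth", "pout")}
noisy_smile = templates["smile"] + 0.2 * rng.standard_normal((4, 200))
print(decode_motion(noisy_smile, templates))  # -> "smile"
```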

    Furthermore, when laminated on the facial skin, this low-cost, mass-manufacturable cFaCES enables the creation of a library of motions from which a large subset of human language could plausibly be inferred. The size of this subset depends on the number of distinct facial motions chosen for decoding, which in turn depends on the number of phrases or ideas to be communicated, and on the strategy for mapping motions to language. Encoding messages as ordered sequences of motions, for instance, allows the number of coded messages to grow exponentially with sequence length, as illustrated in the sketch below.
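
    To make the combinatorics concrete, the snippet below enumerates a codebook for k = 3 decodable motions and length-2 sequences, yielding 3^2 = 9 distinct codes. The motion names and phrases are purely illustrative.

```python
from itertools import product

motions = ["smile", "open mouth", "pout"]  # k = 3 decodable motions
n = 2                                      # messages are length-2 sequences

# k**n = 9 possible codes; assign each to a phrase (only a few shown).
codes = list(product(motions, repeat=n))
phrases = ["I love you", "I'm hungry", "Call the nurse"]  # illustrative
codebook = dict(zip(codes, phrases))

print(len(codes))                    # 9 = 3**2
print(codebook[("smile", "pout")])   # -> "Call the nurse"
```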
