By Josh Quittner
A Boston startup called AlterEgo on Monday unveiled a wearable device that allows users to communicate silently with computers, marking the first serious attempt to commercialize a revolutionary technology pioneered at the MIT Media Lab.
The device, described by the company as a “near-telepathic” interface, does not read brain activity. Instead, it detects faint neuromuscular signals in the face and throat when a person internally verbalizes words. Those signals are decoded by machine learning software and transmitted as commands or text. Responses are delivered privately through bone-conduction audio.
The approach builds on research first presented at MIT in 2018, when Arnav Kapur, then a graduate student at the Media Lab, introduced a prototype headset under the same name. That version demonstrated that subvocal speech, words articulated internally without audible sound, could be captured with sufficient accuracy to control simple systems. The lab positioned it as a potential aid for people with speech impairments, while also suggesting broader applications in human-computer interaction.
AlterEgo has not disclosed details about funding, launch timing, or commercialization strategy, but the company will present the technology publicly at the Axios AI+ Summit in Washington, D.C., on Sept. 17.