Giving Voice to the Voiceless Using Decoded Brain Signals

Decoded brain signals can be transformed into spoken words and sentences to provide a potential solution for people who have lost the ability to speak or gesture as a result of neurological impairments, according to a study published in Nature.

“Finding a way to restore speech is one of the great challenges in neurosciences,” says Dr. Leigh Hochberg, a professor of engineering at Brown University who wasn’t associated with the study, in an article published by NPR. “This is a really exciting new contribution to the field.”

Currently, people living with paralysis who lack the ability to speak or gesture rely upon eye movements or brain-controlled computer sensors to communicate, which allow them to slowly spell out words one letter at a time. However, this method is problematic in that it only enables individuals to produce fewer than 10 words per minute, compared with the approximately 150 words per minute of natural speech. Therefore, this method is “not the most efficient way to communicate,” said Dr. Edward Chang, a neurosurgeon at UCSF and one of the study’s authors.

In this study, researchers sought to find a viable means for paralyzed patients to seamlessly generate words and sentences. They assessed five volunteers with severe epilepsy who had electrodes temporarily placed on their brain surfaces, which enabled doctors to pinpoint the areas triggering seizures. While researchers recorded signals from the brain’s speech centers, which control the muscles of the tongue, lips, jaw, and larynx, the volunteers were prompted to read hundreds of sentences aloud. Those brain signals were then decoded by a computer and used to synthesize speech.

As “Recognizable as Speech”

In closed vocabulary tests, the study’s results suggest that listeners could accurately transcribe the speech synthesized from brain activity. The decoder was able to synthesize speech even when participants silently mimed sentences. Moreover, listeners were able to discern what the computer was saying most of the time. The authors wrote that these findings “advance the clinical viability of using speech neuroprosthetic technology to restore communication.”

Dr. Hochberg was especially impressed by the quality of the synthesized recordings, saying, “I pressed play, I listened to it with my eyes closed and what I heard was something that was recognizable as speech.”

Despite the study’s encouraging findings, which add to evidence that fluent speech can be restored to patients with neurological impairments, Hochberg cautions that there’s “still a lot of research, and clinical research in particular, that needs to happen.”