Duke University Neuroscientists Create Brain Implant That Translates Thoughts Into Speech
To address communication challenges faced by individuals with neurological disorders, a team of Duke University neuroscientists, neurosurgeons, and engineers has developed a technology that translates brain signals into coherent speech. This innovation holds great potential for individuals with conditions such as ALS or locked-in syndrome, offering a promising avenue for communication through brain-computer interfaces.
The technology, developed by neurology professor Gregory Cogan, PhD, in collaboration with Jonathan Viventi, PhD, and a team of neurosurgeons at Duke University Hospital, packs high-density, flexible brain sensors onto a medical-grade plastic substrate. This density allows the device to discern signals from neighboring brain cells, which is crucial for making accurate predictions about intended speech. The implant was tested on four patients who received it temporarily while undergoing brain surgery for other conditions, and it successfully translated their brain activity into speech sounds.
The study's lead author, Suseendrakumar Duraivel, and the team used a machine learning algorithm to process neural and speech data gathered during a listen-and-repeat task, during which activity was recorded from the patients' speech motor cortex. From the brain activity recordings alone, the algorithm predicted the sounds patients produced with notable accuracy.
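The decoding idea described above can be illustrated with a minimal, hypothetical sketch: a nearest-centroid classifier that maps synthetic "neural feature" vectors to phoneme labels. All names, data, and the classifier itself are invented for illustration; this is not the study's actual model or feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each phoneme evokes a characteristic activity
# pattern across recording channels; trials are noisy samples of it.
phonemes = ["p", "a", "k"]
n_channels = 16
true_patterns = {ph: rng.normal(size=n_channels) for ph in phonemes}

def make_trials(n_per_phoneme=50, noise=0.5):
    X, y = [], []
    for ph in phonemes:
        for _ in range(n_per_phoneme):
            X.append(true_patterns[ph] + noise * rng.normal(size=n_channels))
            y.append(ph)
    return np.array(X), np.array(y)

# "Train": estimate one centroid per phoneme from labeled trials.
X_train, y_train = make_trials()
centroids = {ph: X_train[y_train == ph].mean(axis=0) for ph in phonemes}

def decode(trial):
    # Predict the phoneme whose centroid is nearest to this trial.
    return min(phonemes, key=lambda ph: np.linalg.norm(trial - centroids[ph]))

# Evaluate on held-out synthetic trials.
X_test, y_test = make_trials(n_per_phoneme=20)
accuracy = np.mean([decode(x) == yt for x, yt in zip(X_test, y_test)])
print(f"decoding accuracy on synthetic data: {accuracy:.2f}")
```

Even this toy decoder performs far above the 1-in-3 chance level on clean synthetic data; the real challenge the Duke team faces is that actual neural signals are far noisier and the sound inventory far larger.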
While the current speed of speech decoding remains slower than natural speech, the team expects it to improve substantially. The findings were published in the journal Nature Communications, marking a significant advance in brain-computer interfaces and in the prospect of individuals with neurological disorders communicating through thought alone.