2021 Seed Grant
Joseph G. Makin, Ph.D.
Losing the ability to speak can be devastating. Sometimes, though, people lose this ability without losing the ability to plan the proper mouth movements for speaking. A stroke, or Lou Gehrig’s disease, can spare the parts of the brain responsible for issuing the right “commands” to the lips, jaw, and tongue, while damaging the parts of the nervous system through which those commands would normally travel. In these cases, it is theoretically possible to restore speech by monitoring the brain cells associated with intended speech, and then using an algorithm that translates those patterns of brain activity directly into the words the person is trying to say.
However, to do that, an electronic monitoring device needs to be implanted on the surface of the brain, under the skull. Fortunately for speech research, some people already have these devices temporarily implanted: patients who are undergoing treatment for epilepsy. In recent work, I developed and implemented a system that translates this brain activity into speech with nearly perfect accuracy, but only when the participant was restricted to speaking just 50 predetermined sentences, using a total vocabulary of 250 words.
Now our lab is proposing to scale this speech decoding up so that the participant can form any sentence he or she chooses, from a larger vocabulary of 1000 words. To do this, we plan to record brain activity from a group of epilepsy patients for several hours while they speak sentences drawn from a new set optimized for flexibility. Then we will develop and apply an improved speech-decoding algorithm that incorporates recent advances in “machine translation,” the technology behind services like Google Translate.
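For readers curious what a machine-translation-style decoder looks like in outline, the sketch below shows the general encoder–decoder pattern: one recurrent network compresses a sequence of neural-signal features into a summary state, and a second unrolls that state into a sequence of word indices. Everything here is an illustrative assumption — the channel counts, layer sizes, and random untrained weights are placeholders, not the lab's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 16   # electrode channels per time step (assumed)
HIDDEN = 32       # recurrent state size (assumed)
VOCAB = 1000      # target vocabulary of 1000 words (from the proposal)

# Random, untrained weights -- enough to show the data flow,
# not a trained decoder.
W_in = rng.standard_normal((HIDDEN, N_CHANNELS)) * 0.1
W_rec = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_dec = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_out = rng.standard_normal((VOCAB, HIDDEN)) * 0.1

def encode(neural_signals):
    """Run a simple RNN over (time, channels) neural data; return final state."""
    h = np.zeros(HIDDEN)
    for x_t in neural_signals:
        h = np.tanh(W_in @ x_t + W_rec @ h)
    return h

def decode(h, max_words=10):
    """Unroll a decoder RNN from the encoder state, emitting one word index per step."""
    words = []
    for _ in range(max_words):
        h = np.tanh(W_dec @ h)
        words.append(int(np.argmax(W_out @ h)))
    return words

# One simulated trial: 100 time steps of 16-channel activity.
signals = rng.standard_normal((100, N_CHANNELS))
word_ids = decode(encode(signals))
print(len(word_ids))  # 10 word indices, each between 0 and 999
```

In a real system the weights would be learned from the recorded brain activity paired with the spoken sentences, and the decoder would condition each step on the previously emitted word; this stripped-down version is only meant to convey why sequence-to-sequence translation methods map naturally onto the brain-to-text problem.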