Why the brain misunderstands speech after stroke
In the aftermath of a stroke, people are often left confused by what others are trying to tell them, but researchers have more to learn about what causes communication to go awry.
Now, researchers at Stanford’s Wu Tsai Neurosciences Institute and KU Leuven in Belgium suggest that, after a stroke, the brain spends insufficient time puzzling over the individual sounds that make up words. This makes it harder to figure out what someone else is saying.
Those results, published January 28, 2026, in the Journal of Neuroscience, could inform the design of new diagnostics and treatments for a variety of language disorders.
Such advances could have a significant impact. One in four adults will suffer from a stroke, and a third of stroke survivors will develop aphasia, a disorder that makes it harder to understand and produce written and spoken language. Aphasia can interfere with daily tasks and relationships, leaving people feeling painfully isolated.
To better understand aphasia, the new study focused on how the brain processes speech sounds—specifically, the sounds linguists call phonemes, the smallest units of sound that distinguish one word from another. The simple /r/ sound, for example, is the difference between removing a mole and a molar.
“We were interested to see whether people with a language disorder after stroke would mix up the order of different sounds when they hear speech, or whether it would be some other mechanism that goes wrong,” said Jill Kries, a postdoctoral fellow in psychology and the lead author of the new study.
Kries collected the data as a doctoral student advised by Maaike Vandermosten at KU Leuven. During the study, 63 people—39 of whom had post-stroke aphasia—listened to a story while the researchers tracked electrical activity in their brains with an electroencephalogram (EEG).
When Kries started her postdoc in the lab of Laura Gwilliams, a faculty scholar with Wu Tsai Neuro and Stanford Data Science and assistant professor of psychology, the three researchers used machine learning to identify how the brain responded to individual speech sounds—and how that response differed in people with aphasia.
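The article does not spell out the team's analysis pipeline, but the general idea behind this kind of machine-learning analysis—training a classifier at each time point to read phoneme information out of EEG recordings—can be sketched roughly. The sketch below is purely illustrative: the array sizes, labels, and simulated data are hypothetical placeholders, not the authors' code or data.

```python
# Minimal sketch (not the authors' pipeline): time-resolved decoding of
# phoneme identity from EEG epochs. All data here are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_phonemes, n_channels, n_times = 400, 64, 120   # hypothetical epoch dimensions
X = rng.standard_normal((n_phonemes, n_channels, n_times))  # EEG around each phoneme onset
y = rng.integers(0, 4, size=n_phonemes)          # hypothetical phoneme-class labels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Fit and evaluate a separate classifier at each time point, yielding a
# time course of how long phoneme information remains decodable from the EEG.
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
```

In an analysis of this general shape, the key comparison would be how quickly that decodability time course falls off in listeners with versus without aphasia.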
They were surprised to find that people with aphasia not only stopped processing sounds sooner than others but also showed less electrical activity in the parts of their brains associated with language processing.
There were also differences in how long the brains of people with aphasia spent parsing speech sounds in ambiguous parts of words. Some sounds occur in many different words and so give listeners little information for predicting the rest of the word. As a listener hears more of a word's sounds, fewer candidate words remain and the meaning becomes less ambiguous. (For example, the phoneme "m" starts many words, so it is more ambiguous than the combined phonemes of "man," which start fewer words. Even more certain is "manat," which the brain can confidently use to predict the full word: manatee.)
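This shrinking pool of candidates can be made concrete with a toy example. The sketch below is not from the study; it uses a made-up mini-lexicon and written prefixes as a stand-in for phoneme sequences, and it quantifies ambiguity as the entropy over the remaining candidate words, assuming (for simplicity) that they are equally likely.

```python
# Toy illustration (not from the study): each additional sound narrows the
# set of candidate words. The mini-lexicon below is invented.
import math

lexicon = ["man", "manatee", "mandate", "map", "mole", "molar", "moment"]

def candidates(prefix):
    """Words in the lexicon consistent with the sounds heard so far."""
    return [w for w in lexicon if w.startswith(prefix)]

def ambiguity_bits(prefix):
    """Entropy (in bits) over remaining candidates, assuming equal likelihood."""
    n = len(candidates(prefix))
    return math.log2(n) if n else 0.0

for prefix in ["m", "man", "manat"]:
    print(prefix, candidates(prefix), f"{ambiguity_bits(prefix):.2f} bits")
# "m" leaves many candidates (high ambiguity); "manat" leaves only "manatee".
```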
People without aphasia spend more time processing ambiguous parts of words compared to more certain ones, Kries and colleagues found. People with aphasia, on the other hand, did not take additional time with these ambiguous sounds.
The findings suggest that, rather than mixing up the order of sounds they hear, as Kries had hypothesized, people with aphasia may simply stop processing sounds too soon, before they can figure out what word they’re hearing.
The results could have implications for diagnosis and therapy, said Kries, who is already testing whether it’s possible to use this approach to identify which parts of an individual’s language are disrupted by aphasia.
Continued research may also inform the development of brain-computer interfaces to help people with aphasia communicate via computer. Currently, such devices have shown promise in people whose neural language systems remain intact, but who can’t speak due to paralysis or neurodegenerative diseases, for example. Kries hopes her work can help extend such interfaces to people with language-processing disorders such as aphasia by better understanding precisely what goes amiss in the brain.
Finally, the team is expanding its studies by conducting similar experiments in children with dyslexia, epilepsy, and autism, which are also associated with language difficulties.
“Language is affected in many neurological disorders,” said Kries. “Hopefully, we can find something that will help more than one group of people.”
Research Team
Study authors were Jill Kries, a postdoctoral scholar in the Department of Psychology; Laura Gwilliams, a faculty scholar at the Wu Tsai Neurosciences Institute and Stanford Data Science and an assistant professor in the Department of Psychology; and Maaike Vandermosten of the KU Leuven Department of Neurosciences, the Leuven Brain Institute, and the Leuven Interdisciplinary Language Institute.
Research Support
This work was supported by Research Foundation Flanders - FWO (G0D8520N), the BRAIN Foundation (A-0741551370), the Whitehall Foundation (2024-08-043), and the Esther A. & Joseph Klingenstein Fund.
Competing Interests
The authors declare no competing interests.