People with aphasia, a brain disorder affecting about a million people in the U.S., struggle to turn their thoughts into words and to comprehend spoken language.
A pair of researchers at The University of Texas at Austin has demonstrated an AI-based tool that can translate a person’s thoughts into continuous text without requiring the person to comprehend spoken words. Training the tool on a person’s own unique patterns of brain activity takes only about an hour. This builds on the team’s earlier work creating a brain decoder that required many hours of training on a person’s brain activity as the person listened to audio stories. The latest advance suggests it may be possible, with further refinement, for brain-computer interfaces to improve communication in people with aphasia.
“Being able to access semantic representations using both language and vision opens new doors for neurotechnology, especially for people who struggle to produce and comprehend language,” said Jerry Tang, a postdoctoral researcher at UT in the lab of Alex Huth and first author on a paper describing the work in Current Biology. “It gives us a way to create language-based brain-computer interfaces without requiring any amount of language comprehension.”