Researchers at The University of Texas at Austin (UT Austin) in the United States have created an artificial intelligence (AI) system that could help people who are mentally aware but physically unable to speak, such as those incapacitated by strokes.
The system, called a semantic decoder, can translate the brain activity of a person listening to a story, or silently imagining telling one, into a continuous stream of text, according to a study published in the journal Nature Neuroscience on Monday. Unlike other language-decoding systems currently under development, it does not require surgical implants, making the procedure non-invasive.
According to the study, the newly created decoder does not reconstruct speech word for word but instead captures the “essence” of what the user is hearing.
“This represents a significant step forward for a non-invasive method compared to previous efforts, which typically involved single words or short phrases,” stated Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.
“We are teaching the model to decode continuous language with complicated ideas for prolonged periods of time.”