Think of the words whirling around in your head: that tasteless joke you wisely kept to yourself at dinner; your unvoiced impression of your best friend’s new partner. Now imagine that someone could listen in.
On Monday, scientists at the University of Texas at Austin took another step in that direction. In a study published in the journal Nature Neuroscience, the researchers describe an AI that can interpret the private thoughts of human subjects by analyzing fMRI scans, which measure the flow of blood to different regions of the brain.
Researchers have already developed language-decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write just by thinking about writing. But the new language decoder is one of the first that does not rely on implants. In the study, it was able to turn a person’s imagined speech into actual words and, when subjects were shown silent films, it could generate relatively accurate descriptions of what was happening on screen.
“It’s not just a language stimulus,” said Alexander Huth, a neuroscientist at the university who helped lead the research. “We’re getting at the meaning, the idea of what’s going on. And the fact that we can do that is very exciting.”
The study centered on three participants, who came to Dr. Huth’s laboratory for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in the brain activity to the words and phrases the participants had heard.
Large language models like OpenAI’s GPT-4 and Google’s Bard are trained on vast amounts of writing to predict the next word in a sentence or phrase. In the process, a model creates a map of how words relate to one another. A few years ago, Dr. Huth found that particular pieces of this map – so-called context embeddings, which capture the semantic features, or meaning, of phrases – could be used to predict how the brain lights up in response to language.
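To make that idea concrete, an encoding model of this sort can be approximated with ordinary ridge regression, mapping context embeddings to voxel responses. The sketch below is purely illustrative: the data is random stand-in material, and the array shapes and regression setup are assumptions for the sake of the example, not the study’s actual pipeline.

```python
# Illustrative encoding-model sketch: predict fMRI voxel responses
# from language-model context embeddings. All data here is synthetic;
# the random arrays stand in for real embeddings and BOLD signals.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_phrases, emb_dim, n_voxels = 2000, 768, 500

# Stand-in for context embeddings of phrases the listener heard.
embeddings = rng.normal(size=(n_phrases, emb_dim))

# Stand-in for blood-oxygenation (BOLD) responses at each voxel,
# simulated as a noisy linear function of the embeddings.
true_weights = rng.normal(size=(emb_dim, n_voxels))
bold = embeddings @ true_weights + rng.normal(scale=5.0, size=(n_phrases, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(embeddings, bold, random_state=0)

# Ridge regression: one linear map from embedding space to all voxels.
encoder = Ridge(alpha=1.0)
encoder.fit(X_train, y_train)
predicted = encoder.predict(X_test)

# Score: per-voxel correlation between predicted and measured responses.
corr = [np.corrcoef(predicted[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxel correlation: {np.mean(corr):.2f}")
```

The point of the sketch is the direction of the mapping: from the meaning of language, as captured by embeddings, to predicted brain activity.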
In a basic sense, said Shinji Nishimoto, a neuroscientist at Osaka University who was not involved in the research, “brain activity is an encrypted signal, and language models provide a way to understand it.”
In the new study, Dr. Huth and his colleagues effectively reversed the process, using another AI to translate a participant’s fMRI images into words and phrases. The researchers tested the decoder by having the participants listen to a new recording, then seeing how closely the translation matched the actual transcript.
Almost every word was out of place in the decoded script, but the meaning of the passage was preserved. Essentially, the decoder was paraphrasing.
Original transcript: “I got up from my air mattress and pressed my face to the glass of the bedroom window hoping to see eyes staring back at me, only to find darkness.”
Decoded from brain activity: “I just kept walking to the window and opened the glass I stood on my toes and looked out I didn’t see anything and looked up again I didn’t see anything.”
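One way to picture that reversal: rather than inverting the regression directly, a decoder can have a language model propose candidate phrases and then rank them by how well their predicted brain responses, from an encoding model like the one sketched earlier, match the recorded scan. The snippet below is a loose, hypothetical illustration of that scoring idea; `encoder` and `embed` are assumed stand-ins, and the study’s actual search over word sequences is considerably more elaborate.

```python
# Illustrative decoding sketch: rank candidate phrases by how well an
# encoding model's predicted brain response matches the measured scan.
# `encoder` is a fitted embedding-to-voxel model like the ridge sketch
# above; `embed` is a hypothetical phrase-embedding function. Both are
# assumptions for illustration, not the study's code.
import numpy as np

def score_candidate(encoder, embed, phrase, measured_bold):
    """Correlation between predicted and measured voxel responses."""
    predicted = encoder.predict(embed(phrase).reshape(1, -1))[0]
    return np.corrcoef(predicted, measured_bold)[0, 1]

def decode_step(encoder, embed, candidates, measured_bold):
    """Greedy step: keep whichever candidate phrase best explains the scan.

    A real decoder searches over many partial word sequences at once;
    this single greedy choice only illustrates the scoring idea.
    """
    scores = [score_candidate(encoder, embed, c, measured_bold) for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

# Hypothetical usage, given a language model proposing continuations:
# candidates = ["I walked to the window", "I stayed in bed", ...]
# phrase, score = decode_step(encoder, embed, candidates, scan_frame)
```

Because the ranking rewards matching meaning rather than exact wording, a decoder built this way naturally produces the kind of paraphrase seen in the example above.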
While being scanned, the participants were also asked to silently imagine telling a story; afterward, they repeated the story aloud, for reference. Here, too, the decoding model captured the gist of the unspoken version.
Participant version: “Find a message from my husband saying that he changed his mind and he came back.”
Decoded version: “To see him for some reason I thought he would come to me and say he misses me.”
Finally, the subjects watched a brief, silent animated film, again while undergoing an fMRI scan. By analyzing their brain activity, the language model could decode a rough synopsis of what they were viewing – perhaps their internal description of what they were viewing.
The results suggest that the AI decoder captures not only words but also meaning. “Language perception is an externally driven process, while imagination is an active internal process,” said Dr. Nishimoto. “And the authors showed that the brain uses common representations across these processes.”
Greta Tuckute, a neuroscientist at the Massachusetts Institute of Technology who was not involved in the research, said the study addresses “the high-level question.”
“Can we decode meaning from the brain?” she continued. “In some ways, they show that, yes, we can.”
This language-decoding method has limitations, Dr. Huth and his colleagues noted. For one, fMRI scanners are bulky and expensive. Moreover, training the model is a long, tedious process, and to be effective it must be done on individuals. When the researchers tried to use a decoder trained on one person to read the brain activity of another, it failed, suggesting that every brain has a unique way of representing meaning.
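That cross-subject failure is easy to reproduce in the same toy setting: if each simulated “subject” maps embeddings to voxels with its own weights, a model fit on one subject predicts the other’s activity at roughly chance level. Again, this is a synthetic analogy for intuition, not the study’s data.

```python
# Toy illustration of the cross-subject failure: each simulated "subject"
# maps embeddings to voxels with different weights, so a model fit on
# one transfers poorly to the other. Purely synthetic, for intuition only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
emb = rng.normal(size=(1000, 128))

def simulate_subject(seed):
    # Each subject gets its own random embedding-to-voxel weights.
    w = np.random.default_rng(seed).normal(size=(128, 50))
    return emb @ w + rng.normal(scale=2.0, size=(1000, 50))

bold_a, bold_b = simulate_subject(10), simulate_subject(11)
model_a = Ridge().fit(emb[:800], bold_a[:800])

def mean_corr(pred, actual):
    return np.mean([np.corrcoef(pred[:, v], actual[:, v])[0, 1] for v in range(50)])

print("within-subject:", round(mean_corr(model_a.predict(emb[800:]), bold_a[800:]), 2))
print("cross-subject:", round(mean_corr(model_a.predict(emb[800:]), bold_b[800:]), 2))
```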
Participants were also able to shield their internal monologues, throwing off the decoder by thinking of other things. AI may be able to read our minds, but for now it will have to read them one at a time, and with our permission.