Brain scanner reads thoughts — at least a little bit

APA Science

With brain scanners and AI, US researchers were able to capture, at least approximately, certain types of thoughts in willing participants. In certain experimental situations, a decoder they developed used fMRI images to roughly reproduce what was going through the participants’ minds, the team writes in the journal “Nature Neuroscience.”

This brain-computer interface, which does not require surgery, could one day help people who have lost the ability to speak, for example as a result of a stroke, the researchers hope. However, experts are skeptical. The authors of the University of Texas study emphasize that their technology cannot be used to secretly read minds.

Brain-computer interfaces (BCI) are based on the principle of reading human thoughts via technical circuits, processing them, and translating them into movement or speech. In this way, paralyzed people could control an exoskeleton with their thoughts, or people with locked-in syndrome could communicate with the outside world. However, many of the systems currently under investigation require the surgical implantation of electrodes.

New approach relies on computers

In the new approach, a computer forms words and sentences based on brain activity. The researchers trained this speech decoder by having three test subjects listen to stories for 16 hours while lying in a functional magnetic resonance imaging (fMRI) scanner. fMRI makes changes in blood flow in brain areas visible, which in turn is an indicator of neuronal activity.
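
To make the training step more concrete, the following is a minimal sketch in Python of how such an encoding model could be set up: features of the story being heard are mapped to the measured fMRI responses with a regularized regression. The library choice, the feature representation, the array shapes and the placeholder data are illustrative assumptions and are not taken from the study.

```python
# Minimal sketch (not the study's code): fit an encoding model that predicts
# a subject's fMRI response from features of the story they were hearing.
# All shapes and data below are placeholders chosen for illustration.
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training data collected while the subject listened to stories
# in the scanner: one feature vector per fMRI time step (e.g. derived from a
# language model) and the measured blood-flow signal for each brain voxel.
n_timesteps, n_features, n_voxels = 2000, 256, 500
story_features = np.random.randn(n_timesteps, n_features)   # placeholder features
bold_responses = np.random.randn(n_timesteps, n_voxels)     # placeholder fMRI signal

# The encoding model learns which activity pattern a stretch of language
# tends to evoke; during decoding it is used to score candidate sentences.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(story_features, bold_responses)
print(encoding_model.score(story_features, bold_responses))
```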

In the next step, the subjects listened to new stories while their brains were again scanned in the fMRI tube. The previously trained speech decoder was now able to generate sequences of words from the fMRI data that, according to the researchers, correctly reproduced the content of what was heard. The system did not translate the recorded fMRI information into individual words. Instead, it used the relationships learned in training, together with artificial intelligence (AI), to assign the measured brain activity to the most likely sentences of the new stories.

Rainer Goebel, Head of the Department of Cognitive Neurosciences at Maastricht University in the Netherlands, explains the approach in an independent assessment: “A central idea of the work was to use an AI language model to narrow down the number of possible sentences that are consistent with a pattern of brain activity.”
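
The decoding step described above can be pictured as a guided search over candidate sentences: a language model proposes plausible continuations, a trained encoding model predicts the brain activity each candidate would evoke, and only the candidates whose prediction best matches the actually measured scan are kept. The Python sketch below illustrates this idea with hypothetical placeholder functions; it is not the study’s code or API.

```python
# Minimal sketch (not the study's code) of decoding as a beam search over
# sentences: a language model narrows down the candidates, an encoding model
# checks them against the measured brain activity. All functions below are
# hypothetical placeholders.
import numpy as np

N_VOXELS = 500  # illustrative size of a brain-activity pattern

def propose_continuations(prefix, n=5):
    """Hypothetical language-model step: return n plausible next words."""
    return [f"word{i}" for i in range(n)]  # placeholder vocabulary

def predict_bold(sentence):
    """Hypothetical encoding-model step: the fMRI pattern a sentence would evoke."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal(N_VOXELS)   # placeholder voxel pattern

def decode(measured_bold, steps=10, beam_width=3):
    """Keep the candidate sentences whose predicted activity best matches the scan."""
    beam = [""]
    for _ in range(steps):
        candidates = [f"{s} {w}".strip()
                      for s in beam for w in propose_continuations(s)]
        # Score candidates by similarity between predicted and measured activity.
        scores = [np.corrcoef(predict_bold(c), measured_bold)[0, 1]
                  for c in candidates]
        best = np.argsort(scores)[::-1][:beam_width]
        beam = [candidates[i] for i in best]
    return beam[0]

# Usage with a placeholder scan recorded while hearing a new story.
print(decode(np.random.default_rng(0).standard_normal(N_VOXELS)))
```

Keeping only a handful of candidates at each step is one way of “narrowing down” the space of possible sentences in the sense Goebel describes: the decoder never reads out individual words directly, but settles on the sentence that best explains the measured activity.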

Bad with pronouns

At a press conference about the study, co-author Jerry Tang illustrated the test results: The decoder rendered the phrase “I still don’t have my driver’s license” as “She hasn’t even started to learn to drive.” According to Tang, the example illustrates a difficulty: “The model is very bad with pronouns – but we still don’t know why.”

Overall, the decoder succeeds to the extent that many of the selected sentences in new, i.e., untrained, stories contain words from the source text or at least have a similar meaning, according to Rainer Goebel. “But there were also a lot of errors, which is very bad for a complete brain-computer interface, because in critical applications – for example, communication with locked-in patients – it is particularly important not to generate false statements.” Even more errors occurred when subjects were asked to imagine a story themselves or to watch a short animated silent movie and the decoder was supposed to reproduce the events in it.

For Goebel, the results of the presented system are overall too poor to be suitable for a reliable interface: “I dare to predict that fMRI-based BCIs will (unfortunately) probably remain limited to research studies with a few subjects in the future – as in this work.”

Christoph Reichert from the Leibniz Institute of Neurobiology is also skeptical: “If you look at the examples of presented and reconstructed text, it quickly becomes clear that this technology is still far from reliably generating an ‘imagined’ text from brain data.” However, the study hints at what could be possible if measurement techniques improve.

Ethical concerns as well

There are also ethical concerns: depending on future developments, measures to protect mental privacy may become necessary, the authors themselves write. In their experiments, however, the test subjects were able to sabotage the decoding: “If they mentally counted during decoding, named animals or thought of another story, the process was sabotaged,” Jerry Tang describes. The decoder also performed poorly when the model had been trained on a different person.

Service: Article DOI: 10.1038/s41593-023-01304-9