The latest research on artificial intelligence seems to be edging toward science fiction. A Japanese team has unveiled a "mind-reading" model based on Stable Diffusion, reports BFMTV. A detailed study was posted on the bioRxiv preprint server last December.
To achieve this, two researchers combined Stable Diffusion, a tool that generates images from plain text, with functional magnetic resonance imaging (fMRI) recordings. The goal is to reconstruct visual experiences from fMRI data, in effect translating what a person was thinking into an image.
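In broad strokes, that kind of decoding can be framed as learning a mapping from fMRI voxel activity to the latent representation a generative model works with. The sketch below is only a minimal illustration of that idea, using synthetic data and a simple ridge regression rather than the authors' actual method; every name, shape and value in it is assumed for the sake of the example.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: in the real study, X would be fMRI voxel activity
# recorded while a subject views an image, and Z the latent representation
# of that image inside the generative model. Shapes here are arbitrary.
rng = np.random.default_rng(0)
n_trials, n_voxels, latent_dim = 500, 2000, 64

true_map = rng.normal(size=(n_voxels, latent_dim))   # unknown brain-to-latent mapping
X = rng.normal(size=(n_trials, n_voxels))            # simulated fMRI responses
Z = X @ true_map + 0.1 * rng.normal(size=(n_trials, latent_dim))  # simulated image latents

X_train, X_test, Z_train, Z_test = train_test_split(X, Z, test_size=0.2, random_state=0)

# Learn a linear decoder from brain activity to the latent space.
decoder = Ridge(alpha=10.0)
decoder.fit(X_train, Z_train)

# A decoded latent for a held-out trial; a diffusion model conditioned on
# such a vector would then render the reconstructed image.
z_hat = decoder.predict(X_test[:1])
print("decoded latent shape:", z_hat.shape)
```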
Eight volunteers
According to a specialist in decoding fMRI signals interviewed by Liberation, the two Japanese researchers add little to current research, "whether in terms of application, methodology or scientific discovery." To construct these images, Yu Takagi and Shinji Nishimoto drew on the Natural Scenes Dataset (NSD). Created in 2022, it collects the fMRI readings of eight volunteers who were exposed to ten thousand images.
Text, then images
In all, 27,000 associations were encoded. The researchers then had to develop a mathematical method for translating the fMRI recordings into text that Stable Diffusion could turn into an image.
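One simple way to picture that translation step, though not necessarily the one the researchers used, is to fit a linear mapping from fMRI responses to a text-embedding space and then retrieve the nearest caption, which can serve as a prompt for Stable Diffusion. The sketch below uses entirely synthetic data; the caption bank, embeddings and dimensions are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical caption bank with made-up text embeddings; in practice these
# would come from a real text encoder.
captions = ["a red sports car on a road",
            "a snow-covered mountain",
            "a dog running on a beach"]
emb_dim = 32
caption_embs = rng.normal(size=(len(captions), emb_dim))

# Simulated training data: each caption gets its own "brain pattern", and
# each trial is a noisy version of the pattern for the image that was shown.
n_trials, n_voxels = 300, 1000
brain_patterns = rng.normal(size=(len(captions), n_voxels))
labels = rng.integers(0, len(captions), size=n_trials)
X = brain_patterns[labels] + 0.5 * rng.normal(size=(n_trials, n_voxels))
Y = caption_embs[labels]

# Fit the fMRI -> text-embedding mapping.
reg = Ridge(alpha=1.0).fit(X, Y)

# Decode a fresh recording: predict its text embedding, then retrieve the
# nearest caption, which could be handed to Stable Diffusion as a prompt.
x_new = brain_patterns[0] + 0.5 * rng.normal(size=n_voxels)
y_hat = reg.predict(x_new.reshape(1, -1))[0]
nearest = int(np.argmin(np.linalg.norm(caption_embs - y_hat, axis=1)))
print("decoded prompt:", captions[nearest])
```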
The upshot: comparing the images shown to the volunteers with those Stable Diffusion generated from the recorded signals, the results are strikingly similar. At most, the visuals produced by the AI are less sharp and less precise than the original images.
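For readers who want to put a number on "very similar" for their own reconstructions, a structural metric such as SSIM can compare an original image with its AI-generated counterpart. This is only an illustrative check, not the evaluation reported in the study; the images below are synthetic stand-ins.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(2)
original = rng.random((64, 64))  # stand-in for an image presented to a volunteer
reconstruction = np.clip(original + 0.1 * rng.normal(size=(64, 64)), 0.0, 1.0)  # stand-in for the AI output

# SSIM ranges up to 1.0; higher means the two images are structurally closer.
score = ssim(original, reconstruction, data_range=1.0)
print(f"SSIM: {score:.3f}")
```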