Neurotechnologists have developed a decoder that can reconstruct what you are seeing, thinking, and imagining – and put it into words.
A machine that reads your thoughts is now more science than science fiction, thanks to neurotechnologists at the University of Texas at Austin (UT Austin).
And for the first time, the machine doesn’t require its subject to be wired up with implants and electrodes.
Instead, this decoder is non-invasive: it uses functional magnetic resonance imaging (fMRI) to measure changes in blood flow in the brain and translate ideas into words.
“We are decoding something that is deeper than language,” Dr Alexander Huth from UT Austin told BBC Science Focus and other press. The decoder can grasp the intangible – the various shapes our thoughts take – and turn them into something understandable and, crucially, communicable.
This means that the person who is having their mind ‘read’ does not have to verbalise their thoughts. For those who have lost the ability to speak – following a stroke, for example – the decoder could restore communication channels non-invasively.
“Speech impairments can be highly debilitating,” said lead author Jerry Tang, a graduate student at UT Austin. “Providing some sort of additional communication channel could be really valuable.”
Until now, non-invasive speech decoders have only been able to reconstruct single words or short phrases. This paper, published in Nature Neuroscience, reveals a machine that can decode continuous, natural language.
The machine works by combining well-established decoding methods with modern machine learning. Essentially, it works like ChatGPT: a language model predicts how a sentence is likely to continue, and the decoder keeps the continuations that best match the recorded brain activity.
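In rough outline – and this is a minimal, self-contained sketch with made-up data and names, not the authors’ code – the decoder runs a guess-and-check loop: a language model proposes candidate next words, an encoding model predicts the brain response each candidate would evoke, and the candidates whose predictions best match the actual recording survive to the next round.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and word features; the study used a GPT-style
# language model and encoding models fitted to real fMRI data.
VOCAB = ["the", "dog", "ran", "home", "cat", "sat"]
EMBED = {w: rng.normal(size=8) for w in VOCAB}

def lm_propose(prefix, k=3):
    """Stand-in for the language model: propose k plausible next
    words for a candidate transcript (random here, for illustration)."""
    return [str(w) for w in rng.choice(VOCAB, size=k, replace=False)]

def predict_bold(words, W):
    """Toy encoding model: map a word sequence to a predicted brain
    response by averaging word features and projecting through W."""
    return W @ np.mean([EMBED[w] for w in words], axis=0)

def decode_step(beams, measured, W, beam_width=3):
    """One beam-search step: extend each candidate transcript, keep
    the continuations whose predicted response best matches the
    measured fMRI response (scored by negative squared error)."""
    scored = []
    for words in beams:
        for w in lm_propose(words):
            cand = words + [w]
            err = np.sum((predict_bold(cand, W) - measured) ** 2)
            scored.append((cand, -err))
    scored.sort(key=lambda c: c[1], reverse=True)
    return [c[0] for c in scored[:beam_width]]

# Demo with synthetic data: in the real system, W is fitted to each
# person from 16 hours of scans, and `measured` is a real recording.
W = rng.normal(size=(20, 8))
measured = rng.normal(size=20)
beams = [["the"]]
for _ in range(4):
    beams = decode_step(beams, measured, W)
print(" ".join(beams[0]))
```

Because the encoding model is fitted to one person’s brain, a decoder built this way only works on the individual it was trained on – a point the researchers return to below.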
Akin to an AI chatbot, the decoder needs to be trained on a lot of data – in this case, fMRI scans that measure blood flow in the brain. To amass enough neural data, each study participant listened to 16 hours of podcasts, including The Moth Radio Hour and The New York Times’s Modern Love.
Huth said: “What we were going for was mostly what is interesting and fun for the subjects to listen to, because that’s critical in actually getting good fMRI data – rather than boring subjects out of their skulls.”
Participants also watched silent film clips – mostly the short animated films shown before Pixar features, which contain no dialogue. Interestingly, this part of the experiment showed that the decoder translates something beyond language: ideas.
Blood flow in the brain changes slowly, over the course of a few seconds rather than the milliseconds of a neural impulse. This means that the decoder doesn’t translate the exact words we hear, but how we interpret them. As Huth explains, the way we make sense of something changes slowly – “so we can see how the idea evolves, even though the exact words get lost.”
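A tiny numpy sketch makes the point (the gamma-shaped response and all numbers here are illustrative assumptions, not the study’s haemodynamic model): because the blood-flow signal that fMRI measures unfolds over seconds, words arriving fractions of a second apart evoke almost identical signals.

```python
import numpy as np

# Why fMRI blurs individual words: the blood-flow (BOLD) response to
# a single stimulus is a slow wave lasting several seconds.
dt = 0.1                       # sampling step, seconds
t = np.arange(0, 30, dt)
h = (t ** 5) * np.exp(-t)      # gamma-shaped response, peaking ~5 s
h /= h.max()

def bold(onset_s):
    """BOLD time course evoked by a single word at `onset_s` seconds."""
    impulse = np.zeros_like(t)
    impulse[int(onset_s / dt)] = 1.0
    return np.convolve(impulse, h)[: len(t)]

# Two words spoken 0.3 s apart evoke almost indistinguishable signals,
# so the decoder can recover the gist but not the exact words.
r = np.corrcoef(bold(10.0), bold(10.3))[0, 1]
print(f"correlation between responses 0.3 s apart: {r:.3f}")  # ~0.99
```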
MRI machines are far from portable, so in the future the researchers hope to test their methods with affordable, portable devices such as functional near-infrared spectroscopy (fNIRS), which likewise measures blood oxygenation in the brain non-invasively.
What does this mean for brain privacy?
For many, the prospect of a brain-reading machine rings alarm bells. But the researchers stress that the current model can only be used on the individuals it was trained on. It also needs a participant’s cooperation to run: the study found that participants could actively resist being ‘decoded’.
They called this “sabotaging the decoder”: by performing simple tasks like counting, naming animals or telling their own story, participants could ‘distract’ the decoder from their other thoughts.
Of course, future technology may be able to get round this – so the researchers underlined the importance of researching privacy implications going forward.
“While this technology is in its infancy, it’s very important to regulate what brain data can and cannot be used for,” says Tang.
“There could be negative consequences if misused and misunderstood, so we think it’s important to make sure that the decoder’s capabilities are not misrepresented.”
Read more:
- Instant Genius Podcast: The fight to keep our brains private, with Nita Farahany
- How does my brain differentiate the different languages I speak?
- Brain scan study reveals how dogs respond to language