Will Neuroscientists Be Able to Read the Human Mind in the Future?



As the chords of Pink Floyd's "Another Brick in the Wall, Part 1" filled the operating room, neuroscientists at Albany Medical Center carefully recorded the activity of electrodes implanted in the brains of patients undergoing epilepsy surgery.

The purpose? To capture the electrical activity of brain regions tuned to attributes of music such as tone, rhythm, harmony, and words, and to see whether researchers could recreate what the patients were hearing.

More than a decade later, after neuroscientists at the University of California at Berkeley analyzed data from 29 such patients, the answer is clearly yes.

"After all, it was just a brick in the wall" The phrase appears visibly in the reconstructed song, its rhythms intact and the words muddy but decipherable. For the first time, researchers have reconstructed a recognizable song from brain recordings.

The reconstruction demonstrates the feasibility of recording and translating brainwaves to capture the musical elements of speech as well as its syllables. In humans, these musical elements (rhythm, stress, accent, and intonation), known as prosody, carry meaning that words alone cannot convey.

Because these intracranial electroencephalography (iEEG) recordings can be made only from the surface of the brain (as close as one can get to the auditory centers), no one will be eavesdropping on the songs in your head anytime soon.

But for people who have trouble communicating because of stroke or paralysis, such recordings from electrodes on the surface of the brain could help reproduce the musicality of speech that is missing from today's robot-like reconstructions.

"This is a great result," said Robert Knight, a neuroscientist at the Helen Wills Neuroscience Institute and UC Berkeley professor of psychology, who conducted the research with postdoctoral researcher Ludovic Bellier. "One of the things about music, for me, is that it has prosody and emotional content. As this whole field of brain-machine interfaces advances, this gives you a way to add musicality to future brain implants for people who need it, whether they have ALS or another neurological or developmental disorder that compromises speech output. It gives you the ability to decode not just the linguistic content, but also some of the prosodic content of speech, some of the affect. I think that's really where we have begun to crack the code."

As brain recording techniques improve, it may one day be possible to make such recordings without opening the skull, perhaps using sensitive electrodes attached to the scalp. Scalp EEG can already measure brain activity well enough to detect a single letter from a stream of letters, but the approach takes at least 20 seconds to identify each letter, making communication laborious and difficult.


"Noninvasive techniques are not accurate enough today. Let's hope that in the future we will be able to read activity in deeper parts of the brain with good signal quality for patients, just through electrodes placed outside the skull. But we are far from that." from there" said Bellier.

Bellier, Knight, and colleagues reported the results today in the journal PLOS Biology, noting that they have added "another brick in the wall of our understanding of music processing in the human brain."


Reading the mind? Not yet.

Brain-machine interfaces used today to help people communicate when they cannot speak can decode words, but the sentences produced have a robotic quality, akin to how the late Stephen Hawking sounded when he used a speech-generating device.

Bellier said, "Right now, technology is more like a keyboard for the mind." he said. "You can't read your thoughts from the keyboard. You have to press the buttons. And it kind of sounds robotic; Of course, there is less of what I call freedom of expression."

Bellier should know. He has played music since childhood: drums, classical guitar, piano, and bass, at one point performing in a heavy metal band. When Knight asked him to work on the musicality of speech, Bellier said, "You can bet I was excited when I got the proposal."

In 2012, Knight, postdoctoral researcher Brian Pasley, and colleagues became the first to reconstruct words a person hears from recordings of brain activity alone.

Other researchers have since taken Knight's work much further. Eddie Chang, a UC San Francisco neurosurgeon and senior co-author of the 2012 paper, has recorded signals from the motor area of the brain associated with jaw, lip, and tongue movements to reconstruct the speech intended by a paralyzed patient, with the words displayed on a computer screen.

That study, reported in 2021, used artificial intelligence to interpret brain recordings from a patient trying to vocalize sentences built from a vocabulary of 50 words.


Although Chang's technique was successful, the new study suggests that recordings from the auditory regions of the brain, where all aspects of sound are processed, could also capture other aspects of speech that are important in human communication.

Bellier said, "Decoding from auditory cortices, which are closer to the acoustics of sounds, as opposed to the motor cortex, which is closer to the movements made to create the acoustics of speech, is extremely promising." he added. "It will add some color to what is being decoded."

For the new study, Bellier reanalyzed brain recordings obtained while patients were played a roughly 3-minute segment of the Pink Floyd song, from the 1979 album The Wall. He hoped to go beyond previous studies, which had tested whether decoding models could identify different musical pieces and genres, and actually reconstruct musical phrases through regression-based decoding models.
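To make the idea of regression-based decoding concrete, here is a minimal sketch, not the study's actual pipeline: it assumes a hypothetical matrix of high-gamma iEEG features (time-lagged features per electrode) and fits a ridge regression that maps them onto the frequency bins of the song's auditory spectrogram. All names, shapes, and the stand-in random data are illustrative assumptions.

```python
# Sketch of regression-based decoding from iEEG features to an audio spectrogram.
# X: neural features, shape (n_timepoints, n_electrodes * n_lags)  (hypothetical)
# Y: target spectrogram, shape (n_timepoints, n_freq_bins)         (hypothetical)
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, n_features, n_freq_bins = 2000, 64 * 10, 32
X = rng.standard_normal((n_timepoints, n_features))   # stand-in iEEG features
Y = rng.standard_normal((n_timepoints, n_freq_bins))  # stand-in spectrogram

# Hold out a contiguous chunk of the song for testing (keep time order).
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, shuffle=False)

model = Ridge(alpha=10.0)          # linear map from neural features to spectrogram
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)     # reconstructed spectrogram for held-out audio

# One simple quality metric: correlation per frequency bin between true and predicted.
scores = [np.corrcoef(Y_test[:, k], Y_pred[:, k])[0, 1] for k in range(n_freq_bins)]
print(f"mean reconstruction correlation: {np.mean(scores):.2f}")
```

A predicted spectrogram of this kind can then be converted back into audible sound with a standard spectrogram-inversion step, which is what turns a decoded matrix of numbers into something a listener can recognize as the song.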

Bellier emphasized that the study, which used artificial intelligence to decode brain activity and then encode a reproduction, did not just create a black box for synthesizing speech. He and his colleagues also pinpointed new areas of the brain involved in detecting rhythm, such as a strumming guitar, and discovered that parts of the auditory cortex (in the superior temporal gyrus, located just behind and above the ear) respond at the onset of a voice or a synthesizer, while other areas respond to sustained vocals.
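As a rough illustration of how an encoding-style analysis can separate onset-driven from sustained responses (a sketch under assumed data, not the paper's own code), one could fit a small linear model per electrode that predicts its high-gamma activity from two hypothetical stimulus features, one marking note onsets and one tracking the sustained sound envelope, and then compare the fitted weights.

```python
# Illustrative encoding-model sketch; feature names and data are hypothetical stand-ins.
# stim_features: (n_timepoints, 2), columns = [note_onsets, sustained_envelope]
# electrode_hg:  (n_timepoints, n_electrodes), high-gamma power per electrode
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_timepoints, n_electrodes = 2000, 8
stim_features = rng.standard_normal((n_timepoints, 2))
electrode_hg = rng.standard_normal((n_timepoints, n_electrodes))

for elec in range(n_electrodes):
    fit = LinearRegression().fit(stim_features, electrode_hg[:, elec])
    onset_w, sustained_w = fit.coef_
    kind = "onset-like" if abs(onset_w) > abs(sustained_w) else "sustained-like"
    print(f"electrode {elec}: onset={onset_w:+.3f} sustained={sustained_w:+.3f} ({kind})")
```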

Researchers also confirmed that the right side of the brain is more attuned to music than the left side.

"Language is mostly in the left brain. Music, on the other hand, is more distributed, with a bias toward the right," Knight said.

"It wasn't clear that the same would hold for musical stimuli," Bellier said. "So here we confirm that this is not just something specific to speech, but something more fundamental to the auditory system and the way it processes both speech and music."

Knight is beginning new research to understand the brain circuits that allow some people with aphasia due to stroke or brain injury to communicate by singing when they cannot find the words to express themselves.
