A recent discovery explains how our brain manages conversations in noisy environments and could have a significant impact on the development of more efficient hearing aids.
Vinay Raghavan, a researcher at Columbia University in New York, offered an interesting explanation of how the brain perceives speech. According to him, the prevailing idea was that the brain processes only the voice of the person we are paying attention to.
However, Raghavan challenges that notion, noting that we don’t ignore someone yelling in a crowded place, even when we’re focusing on someone else.
Experts study how the human brain processes voices
In the controlled study by Vinay Raghavan and his team, electrodes were placed on the brains of seven people undergoing epilepsy surgery to monitor brain activity.
During this process, participants were presented with a 30-minute audio clip in which two voices were overlaid.
Participants remained awake during the operation and were instructed to alternate their attention between the two voices included in the audio. One of the voices was that of a man, the other that of a woman.
The overlapping voices spoke simultaneously and at similar volumes, but at certain times in the clip one voice was louder than the other, simulating the volume differences found in background conversations in crowded environments.
The research team used data from the participants’ brain activity to develop a model that predicted how the brain processes voices of different loudness and how this might vary depending on which voice the participant was asked to focus on.
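The article does not describe the model itself, but analyses of this kind often use an encoding model that maps features of each voice, such as its loudness envelope, onto the recorded neural activity. The sketch below is a minimal, hypothetical illustration of that idea using synthetic data; the variable names, weights, and fitting method are assumptions and do not correspond to the study's actual analysis.

```python
# Hypothetical sketch of a linear encoding model: predict neural activity
# from the loudness envelopes of two overlapping voices. All data here is
# synthetic; the study's real model and features are not described above.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_samples = 3000                      # e.g. 30 s of data at 100 Hz

# Stand-in loudness envelopes for the attended and unattended voice.
attended = np.abs(rng.normal(size=n_samples))
unattended = np.abs(rng.normal(size=n_samples))

# Simulated electrode signal: responds strongly to the attended voice,
# weakly to the unattended one, plus noise.
neural = 1.0 * attended + 0.2 * unattended + rng.normal(scale=0.5, size=n_samples)

# Fit weights that map the two envelopes onto the neural signal.
X = np.column_stack([attended, unattended, np.ones(n_samples)])
weights, *_ = lstsq(X, neural, rcond=None)
print("attended weight:   %.2f" % weights[0])
print("unattended weight: %.2f" % weights[1])

# A larger weight for one voice would indicate stronger encoding of that
# voice, mirroring the kind of comparison described in the article.
```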
The results of the study
The results showed that the louder of the two voices was encoded in both the primary auditory cortex, which handles the conscious perception of sound, and the secondary auditory cortex, which handles more complex sound processing.
This was surprising because the brain still encoded the louder voice even when participants had been instructed not to focus on it.
According to Raghavan, this study is groundbreaking because it uses neuroscience to show that the brain encodes language information even when we are not paying active attention to it.
The discovery opens a new way to understand how the brain processes stimuli to which we are not paying attention.
Traditionally, it has been assumed that the brain selectively processes only those stimuli that we consciously focus on. However, the results of this study challenge this view and show that the brain continues to encode information even when we are distracted or engaged in other tasks.
The results also showed that the quieter voice was processed in the primary and secondary auditory cortices only when participants were instructed to focus their attention on that particular voice.
What's more, surprisingly, the brain took an extra 95 milliseconds to process that voice as speech compared with when participants were told to focus on the louder voice.
According to Vinay Raghavan, these results suggest that the brain likely uses different mechanisms to encode and represent voices of different loudness during a conversation. This insight could be used to design more effective hearing aids.
The researcher suggests that if a hearing aid could detect which person the user is paying attention to, it could then increase the volume of just that person's voice.
A breakthrough of this caliber could greatly enhance the listening experience in noisy environments, allowing the user to better focus on the sound source of interest.
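As a rough illustration of that idea, the sketch below shows how a decoded "attended speaker" label might steer amplification toward one of two separated voice streams. The function name, boost value, and signals are all hypothetical; the article does not describe how such a device would decode attention or separate the voices.

```python
# Hypothetical sketch of attention-steered amplification: given two
# separated voice streams and a decoded "attended speaker" label, boost
# the attended stream before mixing. Everything here is illustrative;
# no real attention-decoding or source-separation method is implied.
import numpy as np

def mix_with_attention(voice_a: np.ndarray,
                       voice_b: np.ndarray,
                       attended: str,
                       boost_db: float = 6.0) -> np.ndarray:
    """Return a mono mix with the attended voice amplified by boost_db."""
    gain = 10 ** (boost_db / 20)           # convert dB to linear gain
    if attended == "a":
        return gain * voice_a + voice_b
    return voice_a + gain * voice_b

# Toy usage with synthetic 1-second signals at 16 kHz.
t = np.linspace(0, 1, 16000, endpoint=False)
voice_a = 0.1 * np.sin(2 * np.pi * 220 * t)   # stand-in for speaker A
voice_b = 0.1 * np.sin(2 * np.pi * 330 * t)   # stand-in for speaker B
output = mix_with_attention(voice_a, voice_b, attended="a")
```

In a real device, the separated streams would come from a source-separation front end and the attention label from decoded brain signals, which is the capability the study's findings hint might one day be feasible.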