MIT Scientists Develop New Computer Models to Mimic How Brain Processes Sound

HHTM
December 14, 2023

A team of researchers from MIT has developed computer models that closely replicate how the human brain processes sounds. These models could lead to improved hearing aids, cochlear implants, and other auditory devices.

The models use artificial intelligence, specifically deep neural networks. The MIT team trained the neural networks to perform hearing-related tasks like identifying environmental sounds and musical genres.
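To make the approach concrete, here is a minimal, hypothetical sketch of the kind of model involved: a deep neural network in PyTorch that maps a raw waveform to a sound-category label. The architecture, layer sizes, and random training batch are illustrative stand-ins, not the networks or data used in the study.

```python
import torch
import torch.nn as nn

# Illustrative audio classifier: a small 1-D convolutional network that maps
# a waveform to one of several sound-category labels (e.g., environmental
# sounds or musical genres). Layer sizes are arbitrary, not the study's.
class AudioClassifier(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=64, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=32, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),             # collapse the time axis
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, waveform):                 # waveform: (batch, 1, samples)
        x = self.features(waveform).squeeze(-1)  # (batch, 64)
        return self.head(x)                      # class logits

model = AudioClassifier()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a random batch of 2-second, 16 kHz clips.
waveforms = torch.randn(8, 1, 32000)
labels = torch.randint(0, 10, (8,))
loss = loss_fn(model(waveforms), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```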

Inside the models, abstract representations of audio inputs such as speech emerge as the system analyzes sounds to perform its tasks. The researchers compared these internal representations with fMRI scans of people listening to the same sounds, and found significant similarities between how the models and the auditory cortex represent real-world sounds.
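Comparisons like this are typically made with regression analyses that test how well the activations at one model stage predict measured brain responses to the same sounds. The sketch below illustrates that general idea using ridge regression from scikit-learn; the array shapes and random data are placeholders, not the study’s dataset or exact procedure.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Placeholder data: activations from one model stage for a set of natural
# sounds, and fMRI responses to the same sounds (one value per sound per
# voxel). Real data would come from the model and the scanner.
n_sounds, n_units, n_voxels = 165, 512, 1000
activations = np.random.randn(n_sounds, n_units)
voxel_responses = np.random.randn(n_sounds, n_voxels)

# Fit a regularized linear map from activations to voxel responses on a
# training split, then score predictions on held-out sounds. Model stages
# whose activations predict the cortex well count as "brain-like".
X_train, X_test, y_train, y_test = train_test_split(
    activations, voxel_responses, test_size=0.2, random_state=0
)
ridge = Ridge(alpha=10.0).fit(X_train, y_train)
predicted = ridge.predict(X_test)

# Per-voxel correlation between predicted and measured held-out responses.
scores = [np.corrcoef(predicted[:, v], y_test[:, v])[0, 1]
          for v in range(n_voxels)]
print(f"median held-out voxel correlation: {np.median(scores):.3f}")
```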

Senior study author Josh McDermott explained:

“What sets this study apart is it is the most comprehensive comparison of these kinds of models to the auditory system so far. The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain.”

Mimicking Both Structure and Function

The auditory cortex is the part of the brain responsible for hearing. Its hierarchical, compartmentalized structure supports distinct stages of processing for complex auditory scenes. The new computer models appear to mimic not only the cortex’s neural representations but also some of its processing functions.

Earlier stages of the model were more similar to early auditory cortex areas. Later stages matched better with downstream cortical regions. Different parts also specialized based on the model’s training. For example, speech-focused modules better resembled language-related brain areas.
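One way to summarize this correspondence is to ask, for each brain region, which model stage predicts its responses best; in a hierarchy-matched model, early regions peak at early stages and downstream regions at later ones. Here is a toy version of that summary, with random stand-in scores:

```python
import numpy as np

# Hypothetical prediction scores for every (model stage, brain region) pair,
# e.g., the held-out correlations from a regression analysis like the one
# sketched above. Random values stand in for real results.
stages = ["conv1", "conv2", "conv3", "conv4", "fc"]
regions = ["primary auditory cortex", "non-primary auditory cortex"]
scores = np.random.rand(len(stages), len(regions))  # rows: stages

for r, region in enumerate(regions):
    best = int(np.argmax(scores[:, r]))
    print(f"{region}: best predicted by model stage '{stages[best]}'")
```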

McDermott said this modularity supports existing theories of the brain’s hearing pathways. “The auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions,” he explained.

Insights for Engineering Auditory Devices

The researchers think these bio-inspired computer models can inform audio technologies like cochlear implants. These devices stimulate the auditory nerve to restore partial hearing to the deaf. But improving implant users’ speech comprehension in noisy environments remains an ongoing challenge.

Study lead author Greta Tuckute said that models trained to perform speech-related tasks in background noise produced representations most like those in the human brain. She speculated that training models in noise better replicates the conditions of real-world hearing. The findings suggest that noise inclusion is an important aspect of designing brain-like audio processing systems.
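A common way to train models in noise is to mix recorded background noise into clean clips at a controlled signal-to-noise ratio before they reach the network. Below is a minimal sketch of that kind of augmentation; the function name, SNR value, and random waveforms are illustrative, not the study’s training pipeline.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise clip into a speech clip at a target signal-to-noise ratio.

    Scales the noise so that 10*log10(P_speech / P_noise) equals snr_db,
    then adds it to the speech. Both inputs are 1-D waveforms of equal length.
    """
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12   # avoid division by zero
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Illustrative use: corrupt a clean training clip with background noise at
# 0 dB SNR so the network must learn to hear through realistic interference.
rng = np.random.default_rng(0)
clean = rng.standard_normal(32000)   # stand-in for a 2 s, 16 kHz speech clip
babble = rng.standard_normal(32000)  # stand-in for recorded background noise
noisy = mix_at_snr(clean, babble, snr_db=0.0)
```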

The team also found that versatility matters. Models trained on multiple types of inputs across different listening contexts were more human-like. This highlights the importance of adaptable and dynamic processing for advanced hearing technologies.

Tuckute concluded, “A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors.” Accurate computational models of human hearing could revolutionize assistive auditory devices and brain-computer interfaces. McDermott’s team now plans to leverage their findings to develop improved models for these applications.

Reference:

  • Tuckute G, Feather J, Boebinger D, McDermott JH (2023) Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions. PLOS Biology 21(12): e3002366. https://doi.org/10.1371/journal.pbio.3002366

Source: MIT, PLOS Biology
