Music and Hearing Devices: A Minimalist Perspective

Marshall Chasin
January 20, 2015

Naomi Croghan is a classical musician and a Research Audiologist at Cochlear Americas. Her research focuses on improving music and speech perception in complex environments with cochlear implants and hearing aids.

During the holiday season, I began reading a book called Minimalism: Live a Meaningful Life, by Joshua Fields Millburn and Ryan Nicodemus. This book describes how to use the principles of minimalism to develop the meaningful aspects of one’s life, such as health and relationships, by eliminating life’s excess. Aside from experiencing (hopefully) a little self-improvement, I began thinking about how the concepts of minimalism can also be applied to my professional work.

To many people, part of living a meaningful life involves hearing all of the sounds life has to offer, including music. The purpose of hearing aids and cochlear implants is to deliver meaningful auditory information to the listener while reducing the amount of distracting information. In other words, hearing devices should do only what they need to do for the task at hand. However, when someone is listening to music, it is difficult to know what information is “meaningful” and what is “excessive” – actually interfering with the music listening experience rather than enhancing it.

For speech, the concept of a meaningful auditory signal is relatively straightforward. Vowels, consonants, and transition cues must be audible and identifiable, and even a relatively rough representation of a speech signal can remain intelligible. However, excess information may be introduced through signal processing – for example, by the rapid gain changes of fast-acting dynamic-range compression. As long as the trade-off between audibility and distortion is tipped in favor of intelligibility, listeners will often tolerate some distortion in exchange for better speech understanding.
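To make the "fast-acting" part concrete, below is a minimal sketch (in Python) of a feed-forward compressor with short attack and release times. The threshold, ratio, and time constants are illustrative values chosen for the example, not settings from any actual device.

```python
# Minimal fast-acting dynamic-range compressor (illustrative values only).
import numpy as np

def fast_compressor(x, fs, threshold_db=-30.0, ratio=3.0,
                    attack_ms=5.0, release_ms=50.0):
    """Feed-forward compression of a mono signal x sampled at fs Hz."""
    x = np.asarray(x, dtype=float)
    # One-pole smoothing coefficients for the level estimator.
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))

    env = 0.0                  # running level estimate (linear amplitude)
    y = np.zeros_like(x)
    for n, sample in enumerate(x):
        level = abs(sample)
        # Track the envelope quickly on attacks, more slowly on releases.
        coeff = att if level > env else rel
        env = coeff * env + (1.0 - coeff) * level

        env_db = 20.0 * np.log10(max(env, 1e-9))
        # Above threshold, reduce gain according to the compression ratio.
        over_db = max(env_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)
        y[n] = sample * 10.0 ** (gain_db / 20.0)
    return y
```

With time constants this short, the gain begins to follow the fine structure of the waveform itself, which is precisely the distortion side of the trade-off described above; lengthen the time constants, or set the ratio to 1, and the processing approaches the simple linear scheme discussed next.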

For music, any amount of distortion can have a significant effect on sound quality. Using the previous example, fast-acting compression could be considered an "excessive" hearing-aid algorithm for music. A simpler, linear amplification scheme may be sufficient to transmit the "meaningful" auditory information in a music signal.

This raises the question: what is meaningful auditory information for music? The short answer is that it varies. For example, a practicing musician may need access to high-frequency harmonics that are spectrally balanced with the fundamental frequency, whereas a casual listener may be perfectly satisfied with a more restricted bandwidth. Luckily, Marshall Chasin has written on the subject of the different acoustic characteristics of different instruments, which can help determine what constitutes meaningful information in various musical contexts.
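As a toy illustration of why bandwidth matters more to some listeners than others, the short calculation below counts how many harmonics of a note fall within a given upper frequency limit. The 440 Hz fundamental and the bandwidth figures are hypothetical, chosen only to show the arithmetic.

```python
# Toy harmonic-series count (hypothetical fundamental and bandwidths).
def audible_harmonics(f0_hz, bandwidth_hz):
    """Return the harmonics of f0_hz that fall at or below bandwidth_hz."""
    return [k * f0_hz for k in range(1, int(bandwidth_hz // f0_hz) + 1)]

# Violin open A string (440 Hz): an 8 kHz bandwidth passes 18 harmonics,
# while a 4 kHz limit passes only 9.
print(len(audible_harmonics(440, 8000)))   # 18
print(len(audible_harmonics(440, 4000)))   # 9
```

For the practicing musician described above, those missing upper harmonics change the timbre they rely on; the casual listener may never miss them.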

The minimalist principle guides people on how to eliminate unnecessary things, but not everyone is expected to live out of a backpack. In other words, each individual is encouraged to find his or her own path to living a meaningful life. Likewise, different individuals may benefit from different levels of complexity in listening to music with hearing devices – the “stripped down” version of a device might not be appropriate for all listeners.

There are likely several reasons for this type of individual variability. In a study that Kathy Arehart, Jim Kates, and I did at the University of Colorado, we found that hearing aid users with better frequency resolution showed a slight preference for more complex signal processing with music (e.g., 18-channel compression vs. 3-channel compression). Although this effect was small, it is consistent with previous work showing that listeners with better spectro-temporal auditory processing are less susceptible to degradations in speech signals.
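For readers curious what "more complex" means mechanically in this context, here is a hedged sketch of multichannel compression that reuses the fast_compressor function from the earlier example: the signal is split into log-spaced bands, each band is compressed independently, and the bands are summed. The band edges and filter design are my own illustrative choices, not the processing used in the study.

```python
# Illustrative multichannel compression (band edges and filters are
# assumptions for this sketch, not the study's actual processing).
import numpy as np
from scipy.signal import butter, sosfilt

def multiband_compress(x, fs, n_channels=3, f_lo=100.0, f_hi=8000.0):
    """Split x into n_channels log-spaced bands, compress each, and sum.

    Note: fs must exceed 2 * f_hi so the top band edge is below Nyquist.
    """
    x = np.asarray(x, dtype=float)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    y = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)
        y += fast_compressor(band, fs)   # independent gain control per band
    return y
```

Switching n_channels from 3 to 18 is the kind of contrast the study examined: more channels permit finer frequency-specific gain shaping, but they also introduce more independent gain fluctuations across the spectrum, which some listeners apparently tolerate better than others.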

Another area currently receiving attention is the interaction between cognitive factors and speech perception. In my work with cochlear implant recipients, my colleagues and I have found that some recipients adjust to experimental sound-coding strategies more easily than others, and that this variability extends to their preferences for one strategy over another when it comes to music quality.

Could there be a cognitive aspect to using new and different strategies for both speech and music listening? The topic of individual variability in music perception with hearing devices is wide open for further study, and, of course, it would be useful for clinicians and researchers to be able to separate the “backpack” minimalists from those who might benefit from more advanced device parameters.

When faced with patients who are seeking improvements in music listening, a good starting point is to examine what meaningful information they are hoping to receive. What type of music do they want to listen to, and are there any specific instruments that they want to hear better? What’s more important – being able to understand lyrics or having balance between treble and bass? Is the patient an audiophile who wants the signal to sound clean (probably better served with a stripped-down program) or someone who just wants to rock out to as much sound as possible (probably okay with some distortion if it improves audibility)?

Understanding what is meaningful to the patient will help guide the type and amount of processing that is appropriate, and will help eliminate what is excessive, so that music quality is preserved.
