“Less change is more”, except when it comes to money: Part 1

Marshall Chasin
November 18, 2014

For listening to music through hearing aids, there seem to be two rules of thumb: one is that less change is more, and the other is that music settings are not all that different from speech (in quiet) settings. There are no thumbs left for the other side of the issue: the claim that music settings should be more compressed than those for speech.

One can argue (and I have argued this since the early 2000s) that the compression settings of hearing aids have more to do with the pathology of the individual’s cochlea than with the nature of the input stimulus. It doesn’t matter whether the input is music or speech. If cochlear hair cell damage is significant enough to require compression, then the settings for speech, music, and dog barking should be similar.

After more than a decade of thinking about this issue for hard-of-hearing musicians, and of seeing hundreds of them in the clinic, I do think that it’s “almost” true. However, well-controlled studies, performed mostly by Brian Moore, Todd Ricketts, and their colleagues, make it apparent that music and speech have some subtle differences when it comes to compression.

There are, however, major differences when it comes to cochlear pathology, especially in terms of bandwidth. For example, based on the work of both Moore and Ricketts, we can say that if a person has a mild or mildly sloping (sensorineural) hearing loss, then generally the bandwidth should be as wide as possible; if the hearing loss and/or the audiometric slope is more severe, then less may be more, and a narrower bandwidth would be useful. This is true of both speech and music, and it has more to do with hair cell pathology than with the nature of speech or music.

But back to compression. Ear and Hearing, the official publication of the American Auditory Society, ran an excellent article by Naomi Croghan and her PhD supervisors, Kathryn Arehart and James Kates (although everyone I know calls him “Jim”). Both are well known in this field, and those interested in hearing aids and music should follow their publications. And now we have Naomi Croghan to watch as well!

In their article, “Music Preferences With Hearing Aids: Effects of Signal Properties, Compression Settings, and Listener Characteristics,” a virtual hearing aid was used in which music could be processed through either 3 channels or 18 channels, with either a short release time (50 msec) or a long release time (1000 msec). Music (classical and rock) was piped through these various processing strategies and played to 18 experienced hearing aid users.

A virtual (or simulated) hearing aid approach is gaining wider acceptance for research (and education) in the hearing health care field; well-defined and controlled stimulus changes can be made that are not limited by the technology and software algorithms of any one hearing aid manufacturer. I use this approach in my clinical environment as well; it can be useful to let hard-of-hearing patients adjust their own settings while listening to various stimuli, including the different languages that they may speak.
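To make the virtual hearing aid idea concrete, here is a minimal sketch in Python of what a simulated multichannel compressor might look like. To be clear, this is my own toy illustration, not the processing used in the Croghan study: the band edges, filter order, knee point, and compression ratio are all assumptions, and this version leaves out the attack and release dynamics that I return to below.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_into_channels(x, fs, n_channels=3, f_lo=100.0, f_hi=8000.0):
    """Split a signal into log-spaced band-pass channels (f_hi must be below fs/2)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    filters = [butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
               for lo, hi in zip(edges[:-1], edges[1:])]
    return [sosfilt(sos, x) for sos in filters]

def static_compress(band, threshold_db=-30.0, ratio=3.0):
    """Toy WDRC for one channel: attenuate the band by how far its RMS level
    sits above the knee (no attack/release dynamics in this simplified version)."""
    rms_db = 20 * np.log10(np.sqrt(np.mean(band ** 2)) + 1e-12)
    over_db = max(rms_db - threshold_db, 0.0)
    return band * 10 ** (-over_db * (1 - 1 / ratio) / 20)

def virtual_aid(x, fs, n_channels=3):
    """Compress each channel independently, then recombine."""
    return sum(static_compress(b) for b in split_into_channels(x, fs, n_channels))
```

With a structure like this, changing n_channels from 3 to 18 is a one-line manipulation, which is exactly the kind of controlled comparison that is difficult to make within any one manufacturer’s proprietary software.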

When comparing classical and rock music, Croghan and her colleagues found the following:
“Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels.” (p. e170).

The first part of this makes sense (and would likely be similar to the preferences for speech, had speech been measured in this experiment). In both cases, less compression appears to be better than more compression. Or, more specifically, one could say that the change between linear and non-linear (compressed) processing should be minimized. When it comes to “change,” less is more. With slow-acting WDRC and its longer 1000-msec release time, the music sounds more consistent from moment to moment, because the processing stays with the same compressed properties for much of the signal’s duration; there is rarely a 1-second interval quiet enough to allow the WDRC to return to linear processing.
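To see why the release time matters, here is a short sketch of a one-pole envelope follower driving a compressor’s gain. Again, this is an illustrative assumption rather than the study’s algorithm: the attack is essentially instantaneous, the release uses the stated time constant, and the test signal is a crude stand-in for music with brief soft passages several times per second.

```python
import numpy as np

def gain_trace(x, fs, threshold_db=-30.0, ratio=3.0, release_ms=50.0):
    """Per-sample WDRC gain in dB: instant attack, one-pole release."""
    alpha = np.exp(-1.0 / (fs * release_ms / 1000.0))
    level = 1e-9
    gains = np.empty(len(x))
    for i, sample in enumerate(np.abs(x)):
        level = max(sample, alpha * level)        # jump up fast, decay slowly
        over_db = max(20 * np.log10(level + 1e-12) - threshold_db, 0.0)
        gains[i] = -over_db * (1 - 1 / ratio)     # gain reduction above the knee
    return gains

fs = 16000
t = np.arange(fs) / fs                            # 1 second of "music"
x = np.sin(2 * np.pi * 440 * t) * (0.05 + np.abs(np.sin(2 * np.pi * 3 * t)))
print(f"gain std,   50-msec release: {gain_trace(x, fs, release_ms=50.0).std():.2f} dB")
print(f"gain std, 1000-msec release: {gain_trace(x, fs, release_ms=1000.0).std():.2f} dB")
```

With the 1000-msec release, the gain hardly moves between the loud and soft passages; with the 50-msec release, it swings up and down several times per second, and that swinging is the audible “change.”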

If we were to measure the percentage of time that a circuit was in compression for music (or, similarly, was processing speech in a linear mode), then the higher the percentage of “similar” processing, the better the sound quality would be perceived to be. A circuit that changes quickly from compression to linear, and then back again, such as one with a very short 50-msec release time, would be perceived as having poor quality. “Less change is more.”
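That percentage would be straightforward to estimate from the same kind of envelope trace. A sketch, reusing the assumed follower from above with a hypothetical -30-dB knee:

```python
import numpy as np

def percent_in_compression(x, fs, threshold_db=-30.0, release_ms=50.0):
    """Percentage of samples whose tracked level sits above the compression knee."""
    alpha = np.exp(-1.0 / (fs * release_ms / 1000.0))
    level, above = 1e-9, 0
    for sample in np.abs(x):
        level = max(sample, alpha * level)        # instant attack, slow release
        above += 20 * np.log10(level + 1e-12) > threshold_db
    return 100.0 * above / len(x)
```

By this measure, a long release keeps the circuit above the knee through brief quiet intervals, so the percentage stays high and stable; a 50-msec release lets the circuit toggle between compressed and linear many times per second, which is exactly the rapid change that listeners rated as poorer quality.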

In part 2, I will re-examine the finding that 3 channels were preferred over 18 channels, at least for rock music. One would expect it to be the other way around.
