The rest of the stuff- part 7 of 7

Marshall Chasin
June 2, 2015

The music program is similar to the speech-in-quiet program

In previous entries of this 7-part blog series, the problem and some clinical and technical solutions were discussed.  ASSUMING that we now have a hearing aid that can handle the higher sound level elements of music without distortion, what constitutes an optimal music program?

In previous blog entries, the benefits of a single-channel system, especially for string-heavy music, were discussed.  In this blog entry, I would like to talk about frequency response, OSPL90 properties, and compression.

This sounds like a large task for a single blog entry but everything below can be summarized as “the same as the speech-in-quiet program” or “less is more”.

Tackling the “trivial problem” of frequency response and compression first, the reason the programming should be similar to a speech-in-quiet program is that this is a cochlear issue and not an environmental stimulus issue.  Both frequency response and compression have to do with hair cell damage, and perhaps only secondarily (if at all) with the differing spectral and temporal natures of music and speech.

Frequency response:

The work of Brian Moore and of Todd Ricketts really is all that is required- open a bottle of pinot noir wine and sit down by your computer or bookshelf and read everything you can get your hands on by these two authors.  Many of their publications have nothing to do with music, but by the second glass of wine, some of the connections will become apparent. (Beware- by the fourth glass of wine, you may not even be able to read the text of the articles, so hopefully any important information will already have been gleaned by the second glass.)

Here is what can be derived from their research:

  1. If a hearing loss is mild to moderate, the widest possible frequency response would be the best.
  2. If a hearing loss is greater than a moderate level, then a narrower frequency response would be the best.
  3. If a hearing loss configuration is steeply sloping, then a narrower frequency response would be the best.

For more precipitous hearing loss configurations, the possibility of cochlear dead regions may rear its ugly head, so avoiding these regions would be the best strategy.  This also goes for more severe hearing losses where there may be significant inner hair cell damage- less may be more.  This is true of both speech and music.

For milder hearing losses, there is no reason why there should be any restriction in frequency response.  If more high-frequency gain can be provided, it should be provided in both the speech program and the music program.  One may hear from a manufacturer that a “music program should have an extended bandwidth”.  This is silly- if an extended bandwidth is possible for music, it should also be possible for a speech program.
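
To make the three rules above a little more concrete, here is a minimal sketch in Python.  The function name, the 55 dB HL and 20 dB-per-octave cut-offs, and the 10,000/5,000 Hz limits are illustrative assumptions only- they are not values taken from Moore's or Ricketts' publications, and this is not a fitting formula.

```python
def music_program_bandwidth(pta_db_hl, slope_db_per_octave, dead_region_edge_hz=None):
    """Illustrative heuristic only, not a fitting formula.

    pta_db_hl           -- pure-tone average in dB HL (stand-in for loss severity)
    slope_db_per_octave -- audiogram slope; larger values mean a steeper configuration
    dead_region_edge_hz -- estimated edge of a suspected cochlear dead region, if any
    """
    # Rule 1: mild-to-moderate, gently sloping loss -> widest practical bandwidth
    if pta_db_hl <= 55 and slope_db_per_octave < 20:
        upper_limit_hz = 10000
    # Rules 2 and 3: greater-than-moderate or steeply sloping loss -> narrower response
    else:
        upper_limit_hz = 5000

    # "Less is more": avoid amplifying into a suspected cochlear dead region
    if dead_region_edge_hz is not None:
        upper_limit_hz = min(upper_limit_hz, dead_region_edge_hz)

    return upper_limit_hz


# Moderate, flat loss -> wide response; severe, steeply sloping loss with a
# suspected dead region above 4000 Hz -> narrow response
print(music_program_bandwidth(45, 10))        # 10000
print(music_program_bandwidth(70, 30, 4000))  # 4000
```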

Compression:

Assuming we are not on our fifth glass of wine by now, the optimal compression settings for a music program should also be similar to those of a speech program.  Again, like frequency response, the setting of compression is a cochlear issue and only secondarily related to the temporal and spectral characteristics of the input stimulus.

Whatever the compression setting was in the speech-in-quiet program, it should be similar to the compression setting for the music program.  There are sound theoretical reasons for this, and it has been borne out both clinically and in the research environment.

And like frequency response settings, if changes need to be made on follow-up visits, go ahead and make them.  This “less change is more” philosophy just points us in the right direction.

Output and OSPL90:

Assuming that compression is similar to a speech-in-quiet program, then the OSPL90 for a music program should be 6 dB lower than that of a speech-in-quiet program.

This 6 dB difference derives from the difference in crest factor between a speech signal and a music signal.  Speech emanates from a highly damped vocal tract, replete with soft tongue, cheeks, lips, nasal mucosa, and saliva.  There is a lot of acoustic damping in the human vocal tract.  The same cannot be said about a guitar, drum, violin, or clarinet- these are instruments that have hard walls and very few, if any, soft surfaces- musical instruments have inherently less damping than speech sounds.  Consequently, the peaks of musical instrument spectra are peakier (by about 6 dB) than those of speech.

If our goal is not to exceed a client’s loudness discomfort level, we need to ensure that even the peaks of the input stimulus do not exceed that level.  If the peaks of music are 6 dB higher than those of speech at the same sound level, then the OSPL90 for music should be 6 dB lower than that of the speech-in-quiet program.  This assumes that the OSPL90 level is determined by the RMS or average of a signal and not the peak.  With modern hearing aids, this is typically the case.
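
The arithmetic can be written out in a short sketch.  The function and the 110 dB SPL starting point are hypothetical examples for illustration, not manufacturer fitting parameters; the only figure taken from the discussion above is the roughly 6 dB crest-factor difference.

```python
def music_ospl90(speech_ospl90_db, crest_difference_db=6):
    """Lower the music-program OSPL90 by the crest-factor difference so that the
    peakier music signal reaches, but does not exceed, the same ceiling as the
    speech-in-quiet program.  This assumes OSPL90 is referenced to the RMS
    (average) level of the signal, as is typical in modern hearing aids.
    """
    return speech_ospl90_db - crest_difference_db


# Music's crest factor is about 6 dB greater than speech's, so an example
# speech-in-quiet OSPL90 of 110 dB SPL corresponds to a music-program OSPL90
# of roughly 104 dB SPL: both programs then hit the same maximum output on peaks.
print(music_ospl90(110))  # 104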

A music program and a speech-in-quiet program:

  1. Same frequency response
  2. Same compression characteristics
  3. 6 dB lower OSPL90 setting for the music program vs. the speech-in-quiet program
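
Putting the three points together, a minimal sketch of how a music program could be derived from the speech-in-quiet program.  The dictionary keys, compression ratios, and OSPL90 value are illustrative assumptions only, not fitting targets or any manufacturer's parameters.

```python
def derive_music_program(speech_in_quiet_program):
    """Start the music program as a copy of the speech-in-quiet program: keep its
    frequency response and compression characteristics, and lower the OSPL90
    setting by 6 dB to account for music's larger crest factor.
    """
    music_program = dict(speech_in_quiet_program)  # same frequency response, same compression
    music_program["ospl90_db"] = speech_in_quiet_program["ospl90_db"] - 6
    return music_program


speech_in_quiet = {
    "frequency_response": "as prescribed",                          # the verified speech-in-quiet response
    "compression_ratios": {"low": 1.5, "mid": 1.8, "high": 2.0},    # example values only
    "ospl90_db": 110,                                               # example value only
}

print(derive_music_program(speech_in_quiet))
```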
