A new reference test gain standard for music? Part 1

Marshall Chasin
December 1, 2015

Audiologists are quite aware of the ANSI S3.22 standard (and the equivalent European IEC standard), which specifies how to report whether a hearing aid works or doesn’t work.  This is not a performance standard stating that a hearing aid should work in a specific way; it is just a reporting standard on how to perform each test and then conclude whether the hearing aid does or does not meet its published specifications.

If a hearing aid is made of chocolate and distorts at 100%, that is fine as long as the specification sheet reports that the hearing aid distorts at 100%.  I don’t even think that being made of chocolate is necessarily a bad thing, although perhaps some stuffy engineers may disagree with me!


But such reporting standards are quite important, because a hearing aid tested today in Toronto, Canada should yield the same results as the same hearing aid tested tomorrow in Montana, USA, assuming the hearing aid is still functioning in the same way.  Such a standard also provides information on gain, output, and bandwidth that can help an audiologist select one hearing aid over another. Frequency response is one parameter that is measured at the reference test gain setting, and this, in some sense, allows us to “calibrate” our hearing aids, or at least provides us with some limited guidance.

An underbelly of reporting standards is that simple things with minimal relevance to the clinical population can be done to make a hearing aid “look better” on a specification sheet.  For example, reducing the gain at 1000 Hz, perhaps by using acoustical damping, would make the frequency response look wider, despite the fact that the absolute gain at 6000 Hz or 7000 Hz would be identical in the damped and undamped conditions.  (The reported bandwidth is defined by where the response curve crosses a line a fixed number of dB below the average mid-frequency response, so lowering the mid-frequency gain lowers that line and the curve crosses it farther out in frequency.)

So, what does this discussion of something very technical, and only really important to clinicians and hearing aid manufacturers, have to do with music?

Now here it does get technical (sorry to our non-audiologist and non-engineer readers).

The reference test gain (according to ANSI S3.22) is the volume control setting on the hearing aid that generates a gain equal to the OSPL90 – 77 dB.  It is typically reduced from the full-on volume setting, but in some hearing aids it can be equal to the full-on setting.  The interesting point for me is “77 dB”.

77 dB is equal to 65 dB SPL + 12 dB.


The 65 dB SPL is the level of average conversational speech at 1 meter, so this part is self-explanatory.  The 12 dB is the difference between the instantaneous broadband average (or RMS) of the speech and its peaks, also known as the crest factor.  For speech, because of the complex but highly damped nature of the human vocal tract, this difference is roughly 12 dB.  It would be different for baboon, dog, and goldfish speech, but that blog is best left for April 1 of next year.  It would also be different for musical instruments.
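Before moving on to music, here is a minimal sketch in Python (my own illustration, not anything taken from the standard) of how a crest factor can be measured from a recorded waveform.  The 12 dB figure for speech comes from measurements of real speech; the sine wave below is only there to show the arithmetic.

```python
# Minimal sketch (my own illustration, not part of ANSI S3.22) of how a
# crest factor can be measured: the difference in dB between a signal's
# instantaneous peak and its RMS (long-term average) level.
import numpy as np

def crest_factor_db(signal: np.ndarray) -> float:
    """Crest factor in dB: 20*log10(peak amplitude / RMS amplitude)."""
    peak = np.max(np.abs(signal))
    rms = np.sqrt(np.mean(signal ** 2))
    return 20.0 * np.log10(peak / rms)

# A pure sine wave has a crest factor of about 3 dB; running speech measures
# roughly 12 dB, and many musical instruments measure higher still.
fs = 16000                              # sample rate in Hz
t = np.arange(fs) / fs                  # one second of time samples
sine = np.sin(2 * np.pi * 440 * t)      # 440 Hz test tone
print(round(crest_factor_db(sine), 1))  # prints approximately 3.0
```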

Musical instruments are nowhere near as damped as the human vocal tract: tubas and violins don’t have soft cheeks, a tongue, saliva, and nasal cavities with narrow openings.  Actually, brass and woodwind instruments do have saliva (or at least condensation, which the uninitiated confuse for saliva), but overall a musical instrument has less damping than the human vocal tract.  Consequently, its peaks are at a higher level (and have narrower bandwidths) than the peaks emanating from a speaker’s mouth.

For musical instrument inputs to a hearing aid, the reference test gain should therefore be set lower than it is for speech inputs.  The crest factor of music is on the order of 6-8 dB greater than that of speech, and this in itself suggests that a music reference test gain should be 6-8 dB lower than the speech value.  For music, perhaps the reference test gain should be OSPL90 – 83 dB or OSPL90 – 85 dB rather than OSPL90 – 77 dB?
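For readers who want to see the arithmetic worked through, here is a small sketch under stated assumptions: the 120 dB SPL OSPL90 is purely hypothetical, and the little function is my own shorthand, not anything defined in ANSI S3.22.

```python
# Back-of-the-envelope sketch of the proposal above. The function name, the
# 120 dB SPL OSPL90, and the 6-8 dB adjustment applied to music are my own
# illustrative assumptions, not values defined in ANSI S3.22.
def reference_test_gain(ospl90_db: float, crest_factor_db: float,
                        average_input_db: float = 65.0) -> float:
    """Gain at the reference test setting: OSPL90 minus (average input + crest factor)."""
    return ospl90_db - (average_input_db + crest_factor_db)

ospl90 = 120.0  # hypothetical OSPL90 for some hearing aid, in dB SPL

print(reference_test_gain(ospl90, 12.0))  # speech: 120 - (65 + 12) = 43 dB of gain
print(reference_test_gain(ospl90, 18.0))  # music, crest factor 12 + 6: 120 - 83 = 37 dB
print(reference_test_gain(ospl90, 20.0))  # music, crest factor 12 + 8: 120 - 85 = 35 dB
```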

