Most of our training regarding hearing aids for hard-of-hearing people is based on the characteristics of speech. This makes sense because most of our clients are primarily concerned with hearing speech in quiet and in noise. But what about those people who need amplification to help them play music, or clients who simply want to listen to music on occasion? Does the “music program” need to be that different from a “speech-in-quiet program”?
Surprisingly, the answer is no.
Of the many differences between speech and music as inputs to a hearing aid, the two primary ones, at least for fitting hearing aids, are: 1. the higher sound levels of music, and 2. its larger crest factor. Even quiet music can exceed 100 dB SPL, whereas the highest sound levels of speech are in the mid-80 dB SPL region. The crest factor is simply the difference, in decibels, between the average (RMS) level of a sound and its peaks. This is on the order of 12 dB for speech, but for music it can be 18-20 dB. Because musical instruments are not as “damped” as the soft-walled, saliva- and mucus-filled human mouth, the peaks of music tend to be 6 to 8 dB higher than those of speech. Both factors contribute to the higher sound levels of music.
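As a rough illustration of this arithmetic, here is a minimal sketch (in Python, with synthetic placeholder signals rather than real recordings) of how a crest factor can be computed as the dB difference between a signal’s peaks and its RMS level. Actual speech and music recordings would be needed to reproduce the approximately 12 dB versus 18-20 dB figures quoted above.

```python
import numpy as np

def crest_factor_db(signal: np.ndarray) -> float:
    """Crest factor: difference in dB between the peak and the RMS level."""
    peak = np.max(np.abs(signal))
    rms = np.sqrt(np.mean(signal ** 2))
    return 20.0 * np.log10(peak / rms)

# Purely synthetic placeholders: a smoothed-noise "speech-like" signal, and the
# same signal with a few sharp, undamped peaks added to mimic music transients.
rng = np.random.default_rng(0)
speech_like = np.convolve(rng.standard_normal(16000), np.hanning(256), mode="same")
music_like = speech_like.copy()
music_like[::2000] += 2.0 * np.max(np.abs(speech_like))  # sparse higher peaks

print(f"speech-like crest factor: {crest_factor_db(speech_like):.1f} dB")
print(f"music-like crest factor:  {crest_factor_db(music_like):.1f} dB")
```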
The primary difference, the greater sound level of music (and its higher-level peaks), means that the hearing aids should be able to handle this higher-level input without distortion. This is where most hearing aids fall short.
Many modern digital hearing aids, primarily because of the analog-to-digital (A/D) converter and other “front end” characteristics, simply cannot handle the higher-level inputs that are characteristic of music. This is a front-end hardware issue and has nothing to do with the software programming settings applied later in the hearing aid circuitry. If the signal is distorted at this early input stage, then no amount of software manipulation later on will improve things. So-called “music programs” are of limited usefulness unless the front-end issue is taken care of first. Let us assume, however, that we are dealing with one of those hearing aids whose hardware has been designed to handle the louder inputs of music without distortion.
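The consequence of an overloaded front end can be shown with a toy simulation. The sketch below assumes a hypothetical hard-clipping input stage with an arbitrary ceiling; it is not a model of any particular manufacturer’s A/D converter. Once the peaks have been clipped, the same proportion of distortion is carried through every later processing stage.

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000.0 * t)          # 1 kHz test tone, 1 second

def front_end(x, ceiling=0.5):
    """Hypothetical input stage that hard-clips anything above its ceiling."""
    return np.clip(x, -ceiling, ceiling)

def distortion_ratio(x):
    """Energy outside the 1 kHz component, relative to the 1 kHz component."""
    spec = np.abs(np.fft.rfft(x))
    fundamental = spec[1000]                    # 1 Hz bins, so bin 1000 = 1 kHz
    residual = np.sqrt(np.sum(spec ** 2) - fundamental ** 2)
    return residual / fundamental

clipped = front_end(tone)                       # the input exceeds the ceiling
processed = 0.25 * clipped                      # any later "software" gain change

print(f"after front-end clipping: {distortion_ratio(clipped):.1%} distortion")
print(f"after later processing:   {distortion_ratio(processed):.1%} distortion")
```

Raising the front-end ceiling above the input peaks, which is the hardware fix described above, is the only thing that changes these numbers.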
What are some of the software programming issues that we need to be aware of?
1. “Compression should be greater for music.” Although there may be slight differences, the selection of compression has more to do with the sensorineural damage in the hard-of-hearing person’s cochlea and only secondarily with the properties of the input stimulus. With modern hearing aids (all of which use a form of average- or RMS-based compression), the compression parameters used in the speech-in-quiet program should be similar to those used in a music program (see the first sketch following this list).
2. “A broader bandwidth is better.” Many manufacturers suggest that their music programs should have a wider bandwidth than their speech programs. This is based on erroneous logic. The widest possible bandwidth of the amplified signal should always be sought unless there is some cochlear limitation, such as cochlear dead regions. The bandwidth of a speech-in-quiet program should therefore be similar to that of a music program. Rarely is there enough amplification in the higher-frequency region even for a speech-in-quiet program, so, if at all possible, the extra high-frequency amplification should be applied across the board. Bandwidth, like compression, is an individual issue, based primarily on cochlear function and not on the nature of the input stimulus.
3. “Extended low-frequency amplification is necessary for music.” While it is true that the entire left-hand side of the piano keyboard (essentially the notes of the bass clef) lies below the lowest frequency that is generally amplified for speech, it does not follow that these low-frequency notes need to be amplified. There are three reasons for this: a. most hearing aid fittings are non- or semi-occluding, so these low-frequency fundamentals enter through the vent unamplified and are still audible; b. while the fundamental energy of a note may not be amplified, its higher-frequency harmonics are, adding to the appreciation of the music; and c. it is incorrect to assume that hearing a specific note is what defines its pitch; it is the spacing between any two adjacent harmonics that defines the pitch, not the note per se. One does not need to hear the note C with a fundamental at 131 Hz, only the 131 Hz spacing, which can occur even between components at 1000 Hz and 1131 Hz. This is called the missing fundamental (see the second sketch following this list). For these reasons, extended low-frequency amplification for music is not required. Extended low-frequency sounds may be great for elephants and whales, but not for music…
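Two of the points above lend themselves to small numerical sketches. First, regarding point 1: the fragment below shows a simple static, RMS-level-driven, WDRC-style gain rule. The knee point, gain, and compression ratio are placeholder values, not a fitting prescription; the point is that whatever parameters suit the client’s cochlear loss can be applied equally to a speech-in-quiet program and a music program, provided the front end passes the higher-level music input cleanly.

```python
import numpy as np

def wdrc_gain_db(input_level_db, gain_below_knee=25.0, knee_db=50.0, ratio=2.0):
    """Static WDRC-style gain: linear below the knee point, compressed above it.
    All parameter values here are placeholders, not a fitting prescription."""
    above_knee = np.maximum(np.asarray(input_level_db, dtype=float) - knee_db, 0.0)
    return gain_below_knee - above_knee * (1.0 - 1.0 / ratio)

# Speech-level and music-level inputs (dB SPL) run through the same parameters.
for level in (50, 65, 80, 95, 105):
    print(f"{level:3d} dB SPL input -> {wdrc_gain_db(level):4.1f} dB gain")
```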
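Second, regarding point 3: the missing fundamental can be demonstrated by building a tone only from upper harmonics of 131 Hz and estimating its pitch. The autocorrelation estimate here is simply one way to illustrate the effect, not anything a hearing aid performs.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
f0 = 131.0                                     # roughly the pitch of C3

# Complex tone built only from upper harmonics; the 131 Hz fundamental is absent.
tone = sum(np.sin(2 * np.pi * f0 * h * t) for h in range(6, 12))  # ~786-1441 Hz

# A simple autocorrelation pitch estimate still finds ~131 Hz, because the
# 131 Hz spacing between adjacent harmonics sets the waveform's periodicity.
ac = np.correlate(tone, tone, mode="full")[len(tone) - 1:]
lo, hi = int(fs / 500), int(fs / 60)           # search pitches between 60-500 Hz
lag = lo + int(np.argmax(ac[lo:hi]))
print(f"estimated pitch: {fs / lag:.1f} Hz")   # ~131 Hz, with no 131 Hz energy
```

The estimate comes out near 131 Hz even though no energy is present at 131 Hz, which is why the low-frequency fundamentals need not be amplified for the pitch of the music to be preserved.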