This may sound like the title of a science fiction novel or a movie with Bruce Willis (or was that the “Fifth Element”?), but this is all about that one last piece of the puzzle to optimize a hearing aid for music. The last several years have seen a remarkable improvement in a hearing aid’s ability to handle the higher level inputs associated with music.
Whereas speech has sound levels on the order of 60-80 dB SPL, even quiet music can have levels on the order of 100-110 dB SPL. Until recently, modern digital hearing aids had been playing “catch up” with the analog technology of the late 1980s and early 1990s. Those old-style hearing aids were analog and, as such, did not have analog-to-digital (A/D) converters. The A/D converters in early digital hearing aids were typically restricted to transducing inputs of only 90-95 dB SPL, which resulted in poor fidelity when it came to playing and listening to music. More on this can be found elsewhere.
In the last several years, technologies have become available that have shifted the maximum input that can be digitized through an A/D converter up to over 110 dB SPL. These include, among others, the Live Music Plus technology from Bernafon, the Dream circuitry from Widex, and, most recently, the North Platform from Unitron and the Venture Platform from Phonak. In these last two cases, the manufacturers have changed from a 16-bit architecture to 24-bit, thereby allowing a greater dynamic range and a lower noise floor.
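As a rough sketch of why the move from 16 to 24 bits matters: the theoretical dynamic range of an ideal N-bit converter is about 6.02·N + 1.76 dB. This is a standard rule of thumb, not a figure from the manufacturers, and real converters fall short of the ideal:

```python
def ideal_dynamic_range_db(bits):
    """Theoretical dynamic range (dB) of an ideal N-bit A/D converter,
    using the standard 6.02*N + 1.76 rule of thumb."""
    return 6.02 * bits + 1.76

print(round(ideal_dynamic_range_db(16), 1))  # 98.1 dB
print(round(ideal_dynamic_range_db(24), 1))  # 146.2 dB
```

The extra 8 bits buy roughly 48 dB of theoretical headroom, which is what allows the input ceiling to move above 110 dB SPL while keeping the noise floor low.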
Yet one thing remains: the final element. Surprisingly, it is very “low tech”. So low tech, in fact, that it was previously available but was withdrawn from the marketplace: a single-channel hearing aid.
Single-channel hearing aids have been shown to be less than optimal for speech, especially in noisy environments. The resulting signal-to-noise ratio (SNR) can be rather poor with single-channel broadband amplification. Multi-band compression has been the mainstay of hearing aids since the late 1980s with the advent of the K-AMP.
However, speech is not music.
While typical SNRs for speech can be on the order of 0 to +5 dB, typical SNRs for music can be greater than +30 dB. The SNR advantages of multi-band compression for speech are simply not necessary at the much higher SNRs that are typical of music.
Let’s examine what would happen with a typical multi-band compression hearing aid. Imagine a violin playing a note with lower frequency fundamental energy (e.g., the G just above the middle of the piano keyboard, G [392 Hz]), but also with evenly spaced harmonics at integer multiples of 392 Hz: 784 Hz, 1176 Hz, 1568 Hz, 1960 Hz, and so on. The relative magnitudes of the harmonics of G are crucial, especially with stringed instruments such as the violin, viola, cello, and bass.
Imagine a multi-band compression hearing aid amplifying the fundamental G [392 Hz] by, say, 20 dB, and then amplifying the harmonics by some other amounts. The resulting amplified spectrum could sound like almost any instrument except a violin.
With stringed instruments, the amplification needs to be applied equally across the frequency band in order for the instrument to sound like itself. Multi-band compression can make a violin sound like a flute by differentially applying more or less gain to any number of the harmonics.
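The point can be illustrated with a toy calculation. The harmonic magnitudes and the per-band gains below are hypothetical numbers, not measurements; the goal is only to show that equal (single-band) gain preserves the fundamental-to-harmonic ratios while band-dependent gain does not:

```python
harmonics_hz = [392, 784, 1176, 1568, 1960]   # G [392 Hz] and its harmonics
magnitudes_db = [80, 74, 70, 66, 62]          # assumed relative levels

def band_gain_db(freq_hz):
    """Hypothetical multi-band gains: more gain in the higher bands."""
    if freq_hz < 750:
        return 10
    elif freq_hz < 1500:
        return 18
    return 25

single_band = [m + 20 for m in magnitudes_db]  # equal gain everywhere
multi_band = [m + band_gain_db(f) for f, m in zip(harmonics_hz, magnitudes_db)]

# Fundamental-to-harmonic differences, in dB:
orig_ratios = [magnitudes_db[0] - m for m in magnitudes_db]
sb_ratios = [single_band[0] - m for m in single_band]
mb_ratios = [multi_band[0] - m for m in multi_band]
print(orig_ratios == sb_ratios)  # True: the violin still sounds like a violin
print(orig_ratios == mb_ratios)  # False: the harmonic balance has changed
```

However much overall gain a single band applies, the spectral shape of the note survives; as soon as the gain depends on which band a harmonic falls into, the shape is altered.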
Perceptually, when listening to or playing stringed music, the magnitude of the fundamental-to-harmonic ratio is crucial and must be maintained. Only a true single-band hearing aid can accomplish this.
Woodwinds are a slightly different animal. When I play my clarinet, it is the lower frequency inter-harmonic noise that I am listening to that defines a high-fidelity sound. Although both my clarinet and a violin can generate a wide-band spectrum, the perceptual requirements of a woodwind sound are restricted to the lower frequency region, in many cases below 1000 Hz.
For string-heavy music such as classical music, a single-channel hearing aid is indeed the missing element. This is probably less so for hearing and playing woodwind music, but given the impressively higher SNRs that are characteristic of music, a true single-channel hearing aid is a necessity, and it has no downside for listening to and/or playing music.
And, from my experience, the placement of the EQ points has a lot of effect on fidelity.
Take the violin harmonics listed above: 392 Hz, 784 Hz, 1176 Hz, 1568 Hz, 1960 Hz. On most hearing aids, in that bandwidth, there are only 3 EQ points: 500 Hz, 1 kHz, and 1.5 kHz. Allowing one compressor per EQ point, satisfactory sound quality can’t be achieved because there are not enough EQ points. Further, note the frequencies for the violin – they are harmonic multiples and, as such, are octave-related (based on doublings), not on the round-number spacing of conventional hearing aid EQ points. Again, you cannot expect fidelity for music reproduction in an aid that only has 2 EQ points and two compression bands for half of the range of a piano, with EQ points that are not properly centered to be effective. Hearing aid processors are up to the task, but the control points are long outmoded.
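The mismatch the commenter describes can be sketched numerically. The three EQ points (taking the middle one as 1 kHz) and the use of semitones as the distance measure are assumptions for illustration:

```python
import math

violin_harmonics = [392 * n for n in range(1, 6)]  # 392 ... 1960 Hz
eq_points = [500, 1000, 1500]  # the three control points cited above (assumed)

def semitones_apart(f1, f2):
    """Musical distance between two frequencies, in semitones."""
    return abs(12 * math.log2(f1 / f2))

# Distance from each harmonic to its nearest EQ control point:
offsets = [min(semitones_apart(h, p) for p in eq_points)
           for h in violin_harmonics]
print([round(o, 1) for o in offsets])  # [4.2, 4.2, 2.8, 0.8, 4.6]
```

Only one of the five harmonics lands within a semitone of a control point; the rest sit several semitones away, which is the centering problem the commenter is pointing at.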
Your last post on compression for music was great and informative, but it omitted the other part of compression: release time. It has long been accepted for hearing aids that slow-release compression works better for music. In my experience, the opposite is true: for music, the release time must be as fast as possible. Otherwise – and I say this as a performing musician – all that will happen is that the compression will stay clamped down and the dynamic range will be reduced to practically nil. It is very stressful to be on stage when the hearing aid compression, with its glacially slow release time, has squashed all the instruments into a tiny dynamic range, so that the snare drum is at the same volume as the piano and the guitar, and the voice, too. The sound becomes one big mush. And all the work that was put into getting the EQ just right will be wasted, because the ever-on compressors step in and change the final EQ.
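The clamping effect described here can be sketched with a toy model. The threshold depth, release times, and the exponential-release assumption are all illustrative, not taken from any particular hearing aid:

```python
import math

def gain_reduction_after_peak(release_ms, elapsed_ms, initial_reduction_db=12.0):
    """Remaining gain reduction (dB) a given time after a loud transient,
    modeled as a simple exponential release."""
    return initial_reduction_db * math.exp(-elapsed_ms / release_ms)

# 100 ms after a snare hit, a slow (500 ms) release still holds the level
# down by ~9.8 dB, squashing whatever quieter instrument follows; a fast
# (50 ms) release has already recovered to ~1.6 dB of reduction.
slow = gain_reduction_after_peak(release_ms=500, elapsed_ms=100)
fast = gain_reduction_after_peak(release_ms=50, elapsed_ms=100)
print(round(slow, 1), round(fast, 1))  # 9.8 1.6
```

With the slow release, the gain reduction triggered by the loudest instrument is still in force when the quieter ones play, which is exactly the "one big mush" effect the commenter describes.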
It is my belief, drawn from personal experience, that hearing aid control points should be rethought and brought into line with the principles of audio production: principles like locating the EQ points in groups per octave, quick-release compression, and redoing MPOs (limiters) so that they work better with compression. I am sure there are some who will say that this is not worth doing because it addresses the needs of only a small group of people with hearing loss, but I say this is incorrect. Drawing a parallel to auto racing: just about every performance and safety feature in your car today was born from auto racing. IOW, what is learned by addressing the needs of musicians and people with critical listening skills can be applied to improving the sound quality of aids for all hard of hearing people.
Indeed. One of the reasons that Variable Speech Processing from Sonic is great for music is its impressively fast release (and attack) times, so that the more natural dynamic range between the peaks and valleys is restored. I believe this is something that even analog hearing aids couldn’t do.
Hello Marshall,
Thank you for this interesting article. A one-channel compressor option (or maybe two, with a crossover around 3 kHz) should be part of any hearing aid or personal sound amplifier, and the user should be able to choose a profile with this type of processing. Actually, it is only a software problem, and it is strange that it has not been implemented in modern aids. The ADC gain can be controlled by software as well, so I don’t see a problem adapting it accordingly. At least, we are planning to do that in our HearPhones software reference design: http://www.alango.com/hearing
Great contribution explaining why the way we handle inter-modulated harmonics matters.