Rick Ledbetter is a longtime professional bass player and composer with a profound, late-onset hearing loss. He has programmed his own aids through several sets to meet his needs for live performance, and he is quite thoughtful about this area. In April 2016 I published an open letter to hard-of-hearing musicians, which was reposted last week during the holiday season break, and what better response than this wish list for audiologists and hearing aid manufacturers…
Musicians. We have picky ears, from years of critical listening and learning what musical instruments should sound like, both solo and in ensemble. We can listen to an ensemble play and pick out what any one instrument is doing as the music goes by. So, to do that with hearing aids, we need aids that reproduce sound with the greatest fidelity and real-world frequency response at all volume levels, but particularly from 65 dB to over 100 dB SPL, to handle the brief peaks that even an orchestra can produce.
Hearing aids have long been considered speech amplification devices, and the programming reflects that: a lack of clean low end, and a reliance on sound modifiers like speech enhancers, noise reduction, directionality, etc., to make speech more intelligible in day-to-day life. However, these same active sound modifiers work against fidelity in live music situations. Feedback reducers don’t know the difference between feedback and a flute, a sustained violin, or an electric guitar high note. Noise reducers don’t know the difference between noise and a drum set with cymbals. And automatic program switchers go nearly berserk in the presence of live music, especially music with a wide dynamic range. Musicians can hear all of this.
In my experience, there isn’t much consideration given to fidelity at the dynamic range of live music, particularly at louder levels. MPOs (maximum power outputs) set too aggressively squash the sound and bury the sonic detail of the music. What I have found with the aids I have had is too much amplification on the 80 dB EQ curve, especially from 500 Hz to 2 kHz, and too much limiting from the MPOs. So the volume gets loud too quickly, then is limited too soon by the MPOs.
Typical ways to try to address this have been to create a single EQ band “Music” program, with little to no sound modifiers. But that doesn’t allow enough dynamic range, and it under-amplifies at lower volume levels. The solution usually offered is to change aid programs, but, number one, a performing musician doesn’t have the time to do that, and, number two, most aids will temporarily mute and chime when switching programs, which is a huge “no-no”. We need to hear all the time, and we also don’t have time to be fumbling with buttons on the aids or on the cell phone hearing aid app to change programs. We need both hands to play our instruments, so whatever program is used, it has to work for both live music and conversation.
Then there is the matter of EQ placement points. Over half of a piano is under 500 Hz, and more than half of musical instruments have their tonal characteristics down there, too. An electric guitar runs mostly from 250–500 Hz all the way up to 6 kHz. Electric bass runs from 32 Hz (low B) up, with the overtones in the 125–600 Hz range. Saxophones sit mostly around 500–750 Hz and up. So it is critical for musicians to have even response from 750 Hz down, yet hearing aids generally have only three EQ points in that range. So, when a musician needs low end and there is only a 250 Hz EQ point to do it with, too much gain in that region risks occlusion, known to a musician as “mud” and “boom”.
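As a quick back-of-the-envelope check on that piano figure, here is a small sketch (my own illustration, not Rick’s; it assumes a standard 88-key keyboard in A440 equal temperament and counts fundamentals only):

```python
# Back-of-the-envelope check: how much of an 88-key piano sits below 500 Hz?
# Assumes standard equal temperament with A4 (key 49) tuned to 440 Hz.

def key_frequency(n: int) -> float:
    """Fundamental frequency in Hz of piano key n (1 = A0, 49 = A4, 88 = C8)."""
    return 440.0 * 2 ** ((n - 49) / 12)

below_500 = [n for n in range(1, 89) if key_frequency(n) < 500.0]
print(f"{len(below_500)} of 88 keys ({len(below_500) / 88:.0%}) are below 500 Hz")
# B4 (key 51) is about 493.9 Hz, so keys 1 through 51 qualify: roughly 58% of
# the keyboard by fundamental alone, before counting the overtone-rich low
# instruments mentioned above.
```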
Musicians cannot be subject to the “try this and come back in two weeks” fitting process. We need our aids to be right from the beginning, or at least 80% there. The pre-programming formulas are not right for the demands of live music, and the audiologist often doesn’t have the sound gear to create real-world-level music in the clinic with real-world sound samples. An important point there is that 99% of recorded music is compressed, reducing its dynamic range.
Add to that, expecting high-end computer speakers to do the fitting job doesn’t work. The in-situ and RTA (real-time analyzer) tests are helpful, but, at least for me, the adjustments derived from in-situ testing and an RTA machine have never worked above 70 dB SPL.
Then finally, there is the communication barrier. I know it’s frustrating to an audiologist when the patient says “too loud” but what they actually need is “less 500 Hz at 80 dB SPL”. So imagine what it is like when the musician patient says, “the tenor saxes in their mid-range are too blatty, and the sound there is so muddy that I can’t tell the difference between the tenors and the trumpets on stage”.
To an audio engineer, that description points to what is probably a 750 Hz issue. But if the audiologist doesn’t share that frame of reference, what the musician says isn’t understood, and it becomes a case of frustration on both sides.
So I would like to propose a wish list for musicians, to address these points:
1 – Get the initial programming as close to “right” as possible from the very first fitting.
2 – Redefine what a proper “Music” program should be, to address the demands of live music at real-world volume levels. This would have no signal processing / sound enhancers at all, just EQ-ing and the maximum outputs properly set. In my experience, I have had better success by spreading out the three EQ curves, using WDRC, and a couple of other basic settings (see the sketch after this list).
3 – Add EQ bands below 500 Hz, with at least one at 125 Hz; some aids have this, but others do not. I know some hearing aid manufacturers have relocated their EQ points to reflect the demands of music, which is a great step forward. What would also help is adding two sweepable, narrow-range parametric EQ points that can notch out those problematic in-between spots.
4 – Spend more time dialing in better fidelity at louder volume levels.
5 – For hearing aids that use a cell phone app as a remote controller, add more EQ adjustment points beyond just bass and treble, or possibly a sweepable parametric EQ instead. Then give the audiologist the ability to download the data from the phone for in-clinic modifications to the settings.
6 – Allow an option for the hearing aid to change programs without notification and without muting. I rely on my cell phone app to change programs, so I can see what program I’m on there.
7 – Increase the headroom and fidelity on the input stage and output stage of the hearing aid, both with the hardware and the hearing aid’s processor and programming.
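To make the point about compression and MPOs concrete (items 2 and 4 above, and the “loud too quickly, then limited too soon” complaint earlier), here is a minimal single-band sketch of a WDRC input/output curve with an MPO ceiling. Every value in it (gain, kneepoint, compression ratio, MPO) is an illustrative assumption of mine, not any manufacturer’s fitting formula or Rick’s actual settings; the point is simply that a gentle compression ratio and a higher MPO preserve more of live music’s dynamic range than an aggressive speech-oriented setting.

```python
# Illustrative single-band WDRC input/output curve with an MPO ceiling.
# All values are made-up examples; real fittings are multi-band and
# prescribed from the wearer's audiogram.

def output_level(input_db: float,
                 gain_db: float = 20.0,       # linear gain below the kneepoint
                 kneepoint_db: float = 50.0,  # compression kneepoint (dB SPL in)
                 ratio: float = 2.0,          # compression ratio above the knee
                 mpo_db: float = 105.0) -> float:
    """Output level (dB SPL) for a given input level (dB SPL)."""
    if input_db <= kneepoint_db:
        out = input_db + gain_db
    else:
        out = kneepoint_db + gain_db + (input_db - kneepoint_db) / ratio
    return min(out, mpo_db)  # the MPO hard-limits whatever is left

# A speech-oriented setting (steeper ratio, low MPO) versus a music-oriented
# one (gentle ratio, higher MPO), swept across live-music input levels.
for level_in in (65, 80, 90, 100):
    speech = output_level(level_in, ratio=2.0, mpo_db=90.0)
    music = output_level(level_in, ratio=1.5, mpo_db=110.0)
    print(f"in {level_in:3d} dB SPL -> speech program {speech:5.1f} dB, "
          f"music program {music:5.1f} dB")
# The 35 dB spread between 65 and 100 dB of input collapses to about 12.5 dB
# in the speech-style program (with the loudest peaks flattened at the MPO),
# but survives as roughly 23 dB in the music-style program.
```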
Many of the needs of performing musicians overlap with those of other hearing aid wearers. While we have sharper critical listening skills and place more demands on our aids, some of the basic issues, like reducing the number of fitting sessions through more accurate initial programming, apply to everyone who wears an aid.
Great and interesting article, Marshall! When I was practicing audiology, I would have loved to have had Rick Ledbetter enter my office. What a great opportunity to learn more….
I am a professional trombone player, mostly jazz, and an electrical engineer.
I would love to have hearing devices dedicated to music performing and listening (live or recorded), which would be separate from my everyday hearing aids.
My music aids would have a frequency response which equalizes my hearing spectrum.
A primary goal in hearing aid design has been for them to be nearly invisible. This requirement forces tiny batteries, which forces low-current-consumption electronics, which forces a small dynamic range, which in turn results in distortion at high volumes and under-amplification at low volumes.
I say, throw away that “nearly invisible” requirement. The system could be:
1. Pin wired microphones on clothing
2. Wire the microphones directly to the processor, which could be strapped to your arm, in your pocket, hanging from your neck – whatever.
3. The processor is big enough for long-lasting batteries, good audio signal processing circuitry, and a Bluetooth transceiver to the headset.
4. The headset (Bluetooth-connected to the processor) is the user’s choice. Something like my Sennheiser MM400. It needs to have the required dynamic range.
It would even be possible to make this setup the primary one by including the speech-enhancement algorithms of hearing aids in the processor.
This system could be cheaper to manufacture than hearing aids too.
I invite your opinions.
Thanks to Rick Ledbetter for his ultimate “list.” Right on.
Tomorrow, this chamber music pro (piano) has the 3-week followup (ha)–with the senior audi in a large practice–the boss–and I am bringing the “list.” Just maybe… we will finally speak the same HoH language.
On Sunday (if not snowed in in Atlanta, GA), a lesson with my piano coach. He is used to my “please repeat,” sitting just three feet away; the 9-foot concert grand is not the problem, I assure you.
With my old Oticon “Atlas” baseline-digital aids, music and speech sounded “natural”–same program, pas de problem. Few others since could match the fidelity. Switching at the piano from my current Widex takes time–impractical, and my head hurts. Now I wonder if an “experienced listener” with “picky” ears and profound loss can keep on making beautiful music–and survive with her beautiful brains intact?
Kind regards to Dr. Chasin,
Esther Sokol
Like many hearing-impaired musicians, I used analog aids first. They worked pretty well; they were simple. After they gave out, about the only aids available were digital. I went through years of trying to find a brand that did not produce distortion when I played the banjo, but the digital aid manufacturers obviously did not think that musicians made up much of the hearing aid market. The emphasis was, and still is, on convenience, small size, streaming wireless data, etc. It was kind of ironic that the Widex 440 D-9s I currently use did in fact eliminate most of the distortion noise, but they are electronically coupled together (did the Widex boys really think that everybody has equal hearing loss on both sides?), so I cannot use the left aid at all. Another one of those counterproductive “convenience” features. Oh well. I also had a hard time getting enough high-frequency amplification out of them. Fortunately, my current audiologist accepted Marshall’s suggestion of using a “Libby” tube and made a new ear mold with a bell shape at the end of the tubing; it really worked quite well.
I know that for many of us with sound-mixing experience it is maximally frustrating that the aid companies don’t provide a common-sense graphic-equalizer adjustment. Too much automatic stuff, like one-size-fits-all, in their software. Maybe they will come around.
Lastly, I still get a kick out of going into the audiologist’s office, meeting an audi who does not know my background, and making a request for a fine adjustment, mentioning the problems I’m having using some of their terminology and making reference to frequency ranges, etc. They dabble around for a little while, then look up with a curious little smile and say, “Hmmm, you must be a musician.”
All the Best-Banjo Bruce
Great article! It shed some light for me on understanding musicians’ hearing needs.