This is the first of a seven-part blog series about music and hearing aids. Part 1 of this series defines the problem. This is not a new topic and has been covered at least a dozen times over the last several years in this blog. Part 2 in this series describes some clinical strategies that can be used while the client is sitting there, perhaps wearing a less-than-optimal hearing aid for music. Parts 3, 4, 5, and 6 are about technologies and how the hearing aid industry has responded to this problem. Parts 3 to 6 are in roughly the same order as the innovations they discuss were introduced into the marketplace to improve things for listening to and playing music. And part 7 of this series is about all of the other elements that make up a music hearing aid program.
Speech as an input to hearing aids is not a problem. Even in the olden days of analog hearing aids, the microphones were quite capable of transducing all elements of speech. The sudden bursts of stops (e.g., t, d, and k) and affricates (e.g., ch, j) could all be handled by hearing aids (and have been since the invention of the electret condenser microphone). And the sound levels of speech have always been pretty much within the operating range of even the most rudimentary hearing aids of the past.
But music is different.
The sudden transients of music, such as a loud percussive blast from a cymbal, can be handled easily, but the sound levels that are typical of music have long posed a problem.
The past several months have seen significant improvements, hence this blog series. Over the next two months, this series will review some of the technologies that can handle the higher sound levels that are characteristic of music: some are newer technologies, while others have been around for nearly a decade.
And, as with any new technology, there are some unforeseen benefits that can help with speech as well. The primary example is improving the sound quality of a hard of hearing person’s own voice. Speech at 1 meter averages around 65 dB SPL, with peaks on the order of 12-18 dB higher. Hearing aids that can handle inputs of around 90 dB SPL are therefore quite good at handling speech peaks of 83 dB SPL (65 dB + 18 dB). But what about a hard of hearing person’s own voice? Because a person’s mouth is so close to their hearing aid, the average level tends to hover around 85 dB SPL, with peaks that are again 12-18 dB higher; as a result, sound levels of over 100 dB SPL can reach the hearing aid input stage.
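The arithmetic above can be sketched in a few lines of code. This is purely illustrative: the 90 dB SPL input limit and the 12-18 dB crest factors are the round numbers used in this post, not specifications of any particular hearing aid.

```python
# Illustrative peak-level arithmetic; values are the round numbers from the text.
AID_INPUT_LIMIT_DB_SPL = 90  # assumed maximum clean input level for the aid

def peak_level(average_db_spl, crest_db):
    """Peak sound level: average level plus the crest (peak) factor in dB."""
    return average_db_spl + crest_db

# Speech at 1 meter: ~65 dB SPL average, peaks up to ~18 dB higher.
speech_peak = peak_level(65, 18)      # 83 dB SPL

# One's own voice at the hearing aid: ~85 dB SPL average, peaks up to ~18 dB higher.
own_voice_peak = peak_level(85, 18)   # 103 dB SPL

print(speech_peak <= AID_INPUT_LIMIT_DB_SPL)     # True: speech peaks fit under the limit
print(own_voice_peak <= AID_INPUT_LIMIT_DB_SPL)  # False: own-voice peaks exceed it
```

The point of the sketch is simply that the same 90 dB SPL "doorway" that comfortably passes speech from a conversational partner is too low for the wearer's own voice, whose peaks overshoot it by more than 10 dB.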
Simply stated, a hard of hearing person’s own voice is distorted by conventional hearing aid technology.
This is one area where work on improving a hard of hearing person’s ability to hear music better has ramifications for a person’s own voice.
Think of the input stage of a hearing aid as a doorway into its signal processing. Imagine a doorway so low that people have to bend down to get under it, or, if possible, one that can be jacked up so they don’t have to stoop as far. Hitting one’s head on the top of the doorway is akin to the peaks of music hitting the top and being prevented from getting through (at least without significant distortion). The technologies that will be discussed in parts #3 to #6 of this blog series are all about ducking under the doorway, or jacking it up in some way. Part #2 of this blog series will be about clinical strategies: things we can do with currently existing hearing aids.
More on this can be found at www.Chasin.ca/distorted_music , where three audio files demonstrate this phenomenon. And the next two months of this series will be spent on fixing this problem.