Three distinct programs for music?  So… what does the literature say?

Marshall Chasin
June 28, 2021

First, I must declare a conflict of interest!  I have a book coming out (hopefully) in the spring of 2022 called “Music and Hearing Aids” (Plural Publishing).  As part of the book, a literature review of previous research needed to be performed… and this is what the research says….

Actually, the research came from many sources, over many years, and with differing research approaches.  But in reviewing everything, it seems that going forward all hearing aids, in addition to their various speech programs, should have three distinct programs for music: listening to recorded/streamed music; listening to or playing live music; and listening to or playing “instrumental only” music.

1. Listening to recorded/streamed music:

Much of the research on this came out of the work by Croghan, Arehart, and Kates over several studies.  Among other things, they looked at pre-recorded music serving as an input to hearing aids.  Pre-recorded music has already been compression limited (CL) once during the recording process, in order to fit the wide dynamic range of music into the more limited dynamic range of mp3 and similar media, and it is compressed a second time by the hard of hearing listener’s own hearing aids.  This double compression can be problematic.  And indeed, these researchers found that linear amplification or slow-acting WDRC would be best.  Their results depended on the type of music, but a “less is more” approach to this hearing aid program seems reasonable: linear amplification, or slight WDRC with slow-acting time constants.
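To see why double compression squeezes recorded music, here is a minimal numeric sketch.  The thresholds and compression ratios below are assumed values for illustration only (they are not taken from the cited studies): a recording limiter followed by a WDRC hearing aid leaves only a fraction of the original dynamic range, which is why a more linear music program preserves more of what remains.

```python
# Illustrative only: the "double compression" problem for recorded/streamed music.
# Static view, with both stages above threshold; the numbers are assumptions,
# not measurements from the Croghan/Arehart/Kates studies.

def compress(level_db, threshold_db, ratio):
    """Static input/output curve of a simple compressor (dB in, dB out)."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

soft, loud = 60.0, 100.0   # assumed 40 dB range between soft and loud passages

# Stage 1: compression limiting applied during recording/mastering (assumed 4:1 above 70 dB)
s1 = [compress(x, 70.0, 4.0) for x in (soft, loud)]
# Stage 2: the listener's own WDRC hearing aid (assumed 2:1 above 50 dB)
s2 = [compress(x, 50.0, 2.0) for x in s1]

print(f"original range:          {loud - soft:.0f} dB")
print(f"after recording limiter: {s1[1] - s1[0]:.1f} dB")
print(f"after hearing-aid WDRC:  {s2[1] - s2[0]:.1f} dB")
```

With these assumed settings, a 40 dB range in the music shrinks to well under 10 dB after the second stage; a linear (or only slightly compressive, slow-acting) music program avoids most of that second squeeze.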

2. Listening to or playing live music:

This separate music program has been examined by many researchers over the years, including Croghan, Arehart, and Kates, but also researchers out of Vanderbilt (Ricketts and colleagues) and Cambridge University (Brian Moore, Michael Stone, and colleagues).  When all was said and done, this particular music program would be similar to a speech-in-quiet program but with frequency lowering, noise reduction, and feedback management disabled. (You have to read the book for the full story!)

Issues related to frequency response and bandwidth depend more on an individual’s cochlear damage than on the nature of the input stimulus per se.  Frequency response and compression settings similar to those of a speech-in-quiet program would also be the goal of this live music program.

3. Instrumental music program:

This third music program is based on some work from about a decade ago by Francis Kuk and his colleagues, as well as some recent work I did on frequency lowering: that one-octave “island of refuge” that appeared in HearingReview.com last December.  The idea is that if the only input to the hearing aid is music, with no vocals, then one can impose a linear frequency lowering algorithm of exactly one octave.  This technology was available in the hearing aid industry about a decade ago but was recommended for bird songs, speech, and music.  However, if it is restricted to instrumental music only, the results can be excellent.

When instrumental music is frequency lowered linearly by exactly one octave, the first (and other odd-numbered) harmonics line up perfectly with already existing harmonics of the music, and the second and other even-numbered harmonics create perfect fifths or thirds in the music.  These additional notes (fifths and thirds) are perhaps not what the composer had in mind, but they do not sound dissonant; just different, but acceptable.  And as with the live music program, noise reduction and feedback management should be disabled (or at least minimized).
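As a rough illustration of that arithmetic, here is a small sketch (assuming a single note with a 440 Hz fundamental; illustrative only, not drawn from the studies above) showing where the octave-lowered harmonics land relative to the harmonics that are already present: each lowered partial falls on a unison/octave, a perfect fifth, or a major third, never on a strongly dissonant interval.

```python
# Illustrative only: harmonics of a single 440 Hz note versus the same
# harmonics lowered by exactly one octave (frequency halved).

NOTE_NAMES = {1.0: "unison/octave", 1.5: "perfect fifth", 1.25: "major third"}

def interval_to_nearest(f, existing):
    """Ratio between f and the nearest existing harmonic at or below it."""
    below = max(h for h in existing if h <= f)
    return round(f / below, 3)

f0 = 440.0                                   # assumed fundamental (A4)
harmonics = [f0 * n for n in range(1, 7)]    # 440, 880, 1320, 1760, 2200, 2640 Hz
lowered = [h / 2 for h in harmonics]         # exactly one octave down

for f in lowered:
    if f < min(harmonics):
        print(f"{f:7.1f} Hz  -> one octave below the fundamental (consonant)")
    else:
        r = interval_to_nearest(f, harmonics)
        print(f"{f:7.1f} Hz  -> {NOTE_NAMES.get(r, f'ratio {r}')} relative to an existing harmonic")
```

Running this shows the lowered partials at 440, 880, and 1320 Hz coinciding with existing harmonics, while 660 Hz and 1100 Hz add a fifth and a third above existing harmonics, which is the “different but acceptable” effect described above.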

Some References:

Croghan, N.B.H., Arehart, K.H., and Kates, J.M. (2012). Quality and loudness judgments for music subjected to compression limiting. The Journal of the Acoustical Society of America, 132, 1177–1188.

Kuk, F., Korhonen, P., Peeters, H., Keenan, D.A., and Jessen, A. (2006). Linear frequency transposition: Extending the audibility of high-frequency information. Hearing Review, October 2006.

Moore, B.C.J. (1996). Perceptual consequences of cochlear hearing loss and their implications for the design of hearing aids. Ear and Hearing, 17(2), 133–161.

Ricketts, T.A., Dittberner, A.B., and Johnson, E.E. (2008). High-frequency amplification and sound quality in listeners with normal through moderate hearing loss. Journal of Speech, Language, and Hearing Research, 51, 160–172.


  1. Music is everything to me. When hearing loss began at age 80, music was my greatest loss. Previously, I had saved songs from iTunes to replay on my iPad.
    SO. Hello hearing loss, goodbye music! Or is there still hope for me?

    1. Marshall Chasin Author

      There is definitely hope. Most who feel that their appreciation of music has been degraded also have what is colloquially referred to as “cochlear dead regions”. This is where the nerve endings in a certain frequency region are too damaged to transmit the sound up to the brain without appreciable distortion. It is generally best to avoid these regions – less may be more. With speech we can use a form of frequency lowering that shifts the higher pitched consonants to a healthier part of your hearing mechanism. This will not work with music. Instead, slightly decreasing the amount of amplification may significantly improve your appreciation of music. You should discuss this with your audiologist.

    2. There are millions of cochlear implantees who have difficulty appreciating music. Late-deafened adults have different experiences than children who are born deaf and learn music as children. All implantees hear rhythmic music almost as well as normal hearing adults. Prof. Kate Gfeller of the University of Iowa is a leading authority on what kind of music implantees hear, as well as people without hearing loss. This should also be of interest to people for whom hearing aid amplification alone does not create enjoyable musical experiences. I am active in the Foundation for Hearing & Speech Resources in Chicago, where we are supporting innovative experiments improving musical experiences, especially for children who use cochlear implants. www.FHSR.org. Dr. Gfeller is on our team.
      Paul Lirie

    3. Marshall Chasin Author

      Hi Karen- pre-recorded music from iTunes or any other source has already undergone compression (actually called compression limiting). You may ask your audiologist to develop a program that is more “linear” than what you currently may have for iTunes listening.

  2. Cannot wait for Dr Chasin’s book to appear! What an encouraging development. I love music, play it, record it – and long ago was trained and worked at the BBC. Now, 50 years on, my vintage ears have to re-imagine the sweet upper harmonics of voice or violin! So, all strength to those very special audiology innovators who, as well as giving us great speech-in-noise improvements, also appreciate just how much live music means to people and what a challenge it is to create and fit a hearing aid that can do the job. Feels like there’s progress.

    1. Marshall Chasin Author

      Hi Mr. Owens-

      I would be “hopefully optimistic”. We still need to put amplified sound through a damaged hearing system, so there will always be some limitations. You can have the best stereo and sound system in the world, but if the loudspeakers have a tear in the material, there will always be limitations.

      Having said this, a lot of progress has been made, and you may want to chat with your local audiologist sooner rather than later… the book won’t be out until next year, and it’s aimed at audiologists rather than the general consumer of hearing aids.
