Music and Cochlear Implants – Part 2

Marshall Chasin
February 3, 2015

In part 1 of this blog series, we touched on some technologies that have improved the enjoyment and usability of hearing aids for listening to, and playing, music.  Part 1 ended with a caveat: the implicit assumption that whatever is best on a number of perception tests will undoubtedly result in increased enjoyment and appreciation of music may not hold.

This brings us to the work of Ward Drennan and his colleagues.  They recently published an article in the International Journal of Audiology entitled “Clinical Evaluation of Music Perception, Appraisal, and Experience in Cochlear Implant Users.”  Of interest is that the International Journal of Audiology is the official organ of the International Society of Audiology (ISA).  The ISA holds a convention every other year called the World Congress of Audiology, which in 2016 will be held in Vancouver, Canada.  The conference website is www.WCA2016.ca.

But back to what these researchers found…

The authors wanted to see whether there could be one test, or a number of clinically feasible tests (lasting, in total, no more than 1 hour), that could reflect how well a cochlear implant recipient will do in real life.  They also wanted to determine the correlation between how the recipients did on a formal music perception test and how much they appreciated and enjoyed music.  The assumption in much work of this kind is that if someone does well on a test, this will transfer to real-life benefits.  In some sense, Drennan and his colleagues should be applauded for even asking this question and not just assuming it to be true.

The authors first looked at the Clinical Assessment of Music Perception (CAMP) test, which has been in widespread use for the past 5-7 years in laboratories examining objective benefit in cochlear implant recipients.  This test has three perception subtests that the authors felt would be relevant for music.

They then looked at the Iowa Musical Background Questionnaire (IMBQ), which gave the cochlear implant recipients the chance to judge music on a number of 10-point scales such as “Sounds unlike music…. Sounds like music” or “Unnatural …. Natural.”

Both the CAMP and the IMBQ were administered to 145 cochlear implant users at 14 clinical sites across North America.

The results:

In short (… or is it too late?) … there was no significant correlation between how an individual cochlear implant user performed on the CAMP objective benefit test and the IMBQ appraisal/judgment scale for music.  There was a very weak, though statistically significant, correlation between the CAMP music timbre perception subtest and the IMBQ appraisal/judgment test, but that was all.
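To make the statistical claim concrete, here is a minimal sketch of what computing a Pearson correlation between a perception score and an appraisal rating looks like.  The scores below are entirely made up for illustration; the study's actual data are not reproduced here.

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical, randomly generated scores -- NOT the study's data.
random.seed(1)
camp_scores = [random.uniform(0, 100) for _ in range(145)]   # perception accuracy (%)
imbq_ratings = [random.uniform(1, 10) for _ in range(145)]   # 10-point appraisal scale

r = pearson_r(camp_scores, imbq_ratings)
print(round(r, 3))  # hovers near zero when the two measures are unrelated
```

An r near zero, as the study essentially found between CAMP and IMBQ, means that knowing someone's perception score tells you almost nothing about how much they report enjoying music.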

In other words, just because a cochlear implant recipient performed well on a perception test (even in a clinical setting, where fatigue is rarely a problem), this did not translate into increased enjoyment of the music.

There are some admitted limitations to this study, and the authors themselves pointed out that the sterile (my word) stimuli used in the CAMP perception test did not correspond to real music.

The bottom line is that algorithms and technologies should not be measured solely by “objective” tests, which may not reflect reality, nor solely against a person’s preference or stated enjoyment of a music stimulus.

Of course, I would lean towards the latter: if there is any weighting to be done, I feel that it should be weighted more heavily towards what the hard of hearing person feels sounds best.  Just as when fitting hearing aids, we need to do an initial fitting that is acceptable to the individual, and only later do the “fine tuning” to bring it in line with a “theoretical target,” and only then if necessary.


  1. “Of course, I would lean towards the latter- if there is any weighting that needs to be done, I feel that it should be weighted more heavily towards what the hard of hearing person feels sounds the best.”

    This is all too true, and I would like to know how many of those tested were performing musicians or audio engineers with trained ears. I will venture that none of these were field tests, and all were done in a lab: just the average CI user asked to evaluate some music clips played through a pair of speakers in a sound booth, with none conducted in the real world. Even with conventional aids, this still applies – what looks good on a graph or works in a lab doesn’t always work in real life. As you pointed out, this is about as far away from “real music” as it could be. Far better to either recreate a real-world music environment or to just hand the adjustment software over to musicians who know audio engineering, and see what they come up with.

    The problem for performing musicians is that the sound levels we work in – mostly around 80 dB – are very hard to recreate in a lab. On stage, we have to deal with room acoustics, occasional very loud sounds like percussion, the proximity of musical instrument amplifiers and sound systems, and quickly changing wide dynamic ranges. I think that any testing of CIs for music needs to include this sort of situation.
