Two neat capstone projects for music

There are many unresolved questions and issues in the subject area of hearing and music.  We actually know very little about how music is processed in the brain, and we are really only scratching the surface.  Nevertheless, rather than be stymied by the lack of knowledge in our field, I will at least attempt to delineate two areas for further research.  These would also make for fascinating graduate projects or Capstone essays for graduating audiologists.  Both topics are clinically relevant, both are empirical, and both have an academic twist.

1.  One receiver or two receivers or… for music?

This first topic is primarily about in-ear monitors (and also hearing aids for music).  Should an optimal in-ear monitor (or hearing aid) for listening to music have one receiver (as is the case for hearing aids) or be a multi-receiver system?  In the industry, manufacturers of in-ear monitors typically offer a wide selection, from a single-receiver system right up to 5 receivers, and the range of available monitors typically corresponds to differing price points.  But what is the science behind this, and is more actually better?

If one examines speech, at any one point in time there are either low-frequency sonorants or high-frequency obstruents, but never both at the same time.  Low-frequency sonorants are the vowels, nasals, and liquids (l and r).  Obstruents are the other stuff, such as stops (e.g., p, d, g), affricates (e.g., ch, j), and fricatives (e.g., s and z).  The hearing aid or in-ear monitor receiver is either vibrating slowly (for the sonorants) or quickly (for the obstruents), but never both at once.

In contrast, music can have low and high frequencies concurrently.  At any moment there can be low-frequency fundamentals and high-frequency harmonics sounding simultaneously.  The hearing aid or in-ear monitor receiver therefore needs to vibrate with a much more complex pattern than when transducing speech.
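To make this contrast concrete, here is a minimal sketch (illustrative only; the sample rate, fundamental, and number of harmonics are assumptions, not measurements of any instrument) that synthesizes a music-like tone with a low fundamental and its higher harmonics all present at once, and then confirms that the spectrum has energy in the low- and high-frequency regions simultaneously:

```python
import numpy as np

fs = 16000                 # sample rate in Hz (assumed for illustration)
t = np.arange(fs) / fs     # one second of time samples

# A music-like tone: a 220 Hz fundamental plus harmonics up to ~3.5 kHz,
# all sounding at the same time (unlike speech, which alternates between
# low-frequency sonorants and high-frequency obstruents).
harmonics = [220 * k for k in range(1, 17)]
tone = sum(np.sin(2 * np.pi * f * t) / k
           for k, f in enumerate(harmonics, start=1))

# Inspect the spectrum: there is energy both below 500 Hz and above 2 kHz
# in the very same instant of signal.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / fs)
low_energy = spectrum[freqs < 500].sum()
high_energy = spectrum[freqs > 2000].sum()
print(low_energy > 0 and high_energy > 0)  # prints True
```

A single receiver reproducing this waveform must trace the slow fundamental and the fast harmonics superimposed in one complex motion, which is the crux of the one-versus-many-receivers question.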

The same can actually be said about hearing aid microphones that are “tuned” to be optimized for different frequency regions, but this would be a much more complicated experiment, since the frequency response is usually programmed after the microphone stage.  This is not the case with the end-point receiver.

This rationale for multi-receiver systems may be compelling, but is it true?  There is an empirical study waiting to happen.  I have seen subjective judgements of single- versus multi-receiver systems, but nothing objective in the literature.  If I am incorrect, please send the reference my way.  Even if I am wrong (and I have been once or twice in the distant past), this would still make for a fascinating study for the interested student.


2.  One channel or two channels or … for music?

For good sound quality with string music, one needs to hear the relative balance of the lower-frequency fundamental and the higher-frequency harmonic structure.  If this is true (or is it merely an assumption?), then a single-channel hearing aid that treats the low- and higher-frequency regions the same would be ideal.  In contrast, for other musical instruments such as woodwinds, the lower-frequency inter-resonant breathiness (noise) defines a high-quality sound; even though a clarinet and a violin have similar bandwidths in their spectral output, the perceptual requirements for a clarinet call for much more low-frequency emphasis.

This is one of the reasons why many woodwind musicians with an acquired high-frequency (presbycusic or noise-induced) hearing loss can still perform and enjoy their music.  The same cannot, however, be said for violin and viola music.

The research question is whether, for string-heavy music (e.g., classical), one channel that treats everything the same may be better than a multi-channel instrument.  One can argue that when programming a hearing aid, one music program can be set for classical music (one channel), while non-classical music (i.e., minimal strings) can be set to as many channels as one would want.

There are currently no one-channel hearing aids on the market, and while setting every channel to the same kneepoint and compression characteristics will approximate a single-channel hearing aid, it will still not function as a true single-channel device.  The higher-frequency harmonics may be treated differently because of their differing spectral levels relative to the fundamental.
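A toy calculation illustrates why channels matter here.  This sketch assumes a simple input/output compression rule with a made-up 50 dB kneepoint and 2:1 ratio (illustrative values, not any manufacturer's settings), and compares how a single channel versus two independent channels treat a strong fundamental and a weaker harmonic:

```python
# Why multi-channel compression can alter the fundamental-to-harmonic
# balance. Kneepoint and ratio values are assumptions for illustration.

def wdrc_gain_db(level_db, kneepoint_db=50.0, ratio=2.0):
    """Gain rule: linear below the kneepoint, compressed above it."""
    if level_db <= kneepoint_db:
        return 0.0  # no gain reduction below the kneepoint
    return -(level_db - kneepoint_db) * (1 - 1 / ratio)

# A low-frequency fundamental at 80 dB SPL and a harmonic 20 dB weaker.
fundamental_db, harmonic_db = 80.0, 60.0
balance_in = fundamental_db - harmonic_db  # 20 dB apart at the input

# Single channel: both components receive the same gain (driven by the
# overall level), so their 20 dB relationship is preserved.
g = wdrc_gain_db(fundamental_db)
balance_single = (fundamental_db + g) - (harmonic_db + g)

# Two channels: each component is compressed according to its own level,
# so the output spacing shrinks and the harmonic balance changes.
balance_multi = ((fundamental_db + wdrc_gain_db(fundamental_db))
                 - (harmonic_db + wdrc_gain_db(harmonic_db)))

print(balance_single, balance_multi)  # 20.0 vs. 10.0 dB
```

Under these assumed settings the two-channel case squeezes a 20 dB fundamental-to-harmonic spacing down to 10 dB, which is exactly the kind of spectral-balance change the single-channel argument is about.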

Given the lack of commercially available single-channel hearing aids with which to perform a well-controlled experiment, one could argue that this may have no ultimate clinical relevance in any event.  However, stay tuned for a neat end-run around this empirical problem: a technology is just around the corner, and hopefully I will be able to blog about it within the next 6 months.

If it does turn out that a single-channel approach is better for listening to, and playing, classical-like music, then this is certainly something that a hearing aid manufacturer may want to implement in its current line of hearing aids.  Just because something doesn’t yet (or no longer) exist doesn’t mean that it cannot in the future.  If my memory serves me correctly, directional microphones were commonly implemented in hearing aids in the 1970s and early 1980s, but seemed to disappear for almost a decade until they came back with force.  Perhaps it is time for a single-channel hearing aid to return.  The 1988 K-AMP hearing aid was single channel, and when stacked up against the multi-channel hearing aids of its era, the K-AMP outperformed them all.

More on this in a future blog… stay tuned!

About Marshall Chasin

Marshall Chasin, AuD, is a clinical and research audiologist who has a special interest in the prevention of hearing loss for musicians, as well as in the treatment of those who have hearing loss. He has other special interests, such as clarinet and karate, but those may come out in the blog over time.