I am always surprised by how the various hearing aid manufacturers lump the two words “speech” and “music” together in one sentence… “Hearing aid X can help with speech and music, and can help you jump higher and run faster….”  Of course everyone knows that the last part is true and many of my hard of hearing clients can leap tall buildings in a single bound.  But lump “speech” and “music” together in one sentence??!!

In many ways, the construction of speech is much simpler than that of music.  I am fond of saying that speech is sequential and music is concurrent. Speech is one speech segment followed by another in time. There are some overlaps, which speech scientists call “co-articulation,” and some assimilation of one speech sound with an adjacent one, which linguists would say is governed by “phonological rules,” but in the end, speech is characterized by either lower frequency vowels and nasals (sonorants) one moment and possibly higher frequency stops, fricatives, and affricates (obstruents) the next moment. Speech does not have both low frequency sounds and high frequency sounds at the same time. It is like playing the piano keyboard with only one hand – you are either playing on the left side, the middle, or the right side, but never both sides at the same time.  Speech is sequential – one sound at a time, followed by another sound a moment later.

Music is concurrent; unlike speech, it must have both low frequency sounds and high frequency sounds occurring at the very same time.  Musicians call this harmony. Even while playing a single note on the piano keyboard, there is the fundamental or tonic – the note name that is played – and then integer multiples of that note spread out toward the right-hand side of the piano.
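The “integer multiples” idea can be sketched in a few lines of code. This is a minimal illustration, not audiology software; A4 = 440 Hz is chosen only as a familiar example note.

```python
# Sketch: the harmonics of a single note are integer multiples
# of its fundamental frequency. A4 = 440 Hz is an illustrative choice.
def harmonic_series(f0, n=5):
    """Return the first n harmonics of fundamental f0, including f0 itself."""
    return [f0 * k for k in range(1, n + 1)]

print(harmonic_series(440.0))
# [440.0, 880.0, 1320.0, 1760.0, 2200.0]
```

So even one piano note spreads energy across low and high frequency regions simultaneously – exactly what speech never does.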

Music cannot be like speech, one frequency region at a time. And speech cannot be like music, many frequency regions at a time.

Graphs that compare music and speech are simplistic; they should not be used to define amplified frequency responses or the characteristics of compression circuitry. They look pretty but have very limited value.

Graphs such as this are useless.

Algorithms that have been optimized for hearing speech need to be different from those that are optimized for listening to music. Even something as simple as feedback management can be quite useful for speech but disastrous for music. Imagine a feedback management system that confuses a high frequency harmonic for feedback – the music would essentially be shut off.  Ad hoc features such as “only restrict the feedback manager to sounds over 2000 Hz” would be slightly better than feedback managers that are active in all frequency regions, but even then, the higher frequency harmonics of the music would be nullified.
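The confusion is easy to see with a toy sketch. This is not any manufacturer's actual algorithm – just a naive, hypothetical detector that flags any frequency bin whose level stays stable from frame to frame, which is one classic signature of acoustic feedback:

```python
# Toy illustration (not a real hearing-aid algorithm): a naive feedback
# detector flags a frequency bin whose level is nearly constant across
# successive analysis frames.
def looks_like_feedback(levels_db, tol_db=1.0):
    """True if the frame-to-frame level variation stays within tol_db."""
    return max(levels_db) - min(levels_db) <= tol_db

whistle = [70.0, 70.2, 69.9, 70.1]           # genuine acoustic feedback
violin_harmonic = [65.0, 65.3, 64.8, 65.1]   # a sustained high harmonic

print(looks_like_feedback(whistle))          # True
print(looks_like_feedback(violin_harmonic))  # True -- a false alarm
```

A held musical harmonic is also a stable, narrow-band tone, so this simple rule cannot tell it apart from feedback – and the “fix” it applies would notch the music out.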

Another algorithm that has shown itself to be of great assistance with speech is frequency transposition or frequency compression. These phrases are meant to cover the wide range of frequency-shifting algorithms that are available, including linear and non-linear shifting and compression.  Imagine the second or third harmonic of a note being moved elsewhere. Discordance will result.
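Why discordance results can be shown with a toy non-linear compression rule. The cutoff and compression ratio below are hypothetical values for illustration only, not any manufacturer's settings:

```python
# Sketch of why frequency lowering creates discordance. Frequencies
# above a cutoff are compressed toward it; cutoff and ratio are
# hypothetical illustrative values.
def compress(f, cutoff=2000.0, ratio=2.0):
    """Map frequencies above `cutoff` closer to it by `ratio`."""
    if f <= cutoff:
        return f
    return cutoff + (f - cutoff) / ratio

harmonics = [440.0 * k for k in range(1, 7)]   # A4 and its harmonics
shifted = [compress(f) for f in harmonics]

# The 5th harmonic, 2200 Hz, lands at 2100 Hz -- no longer an integer
# multiple of 440 Hz, so it now clashes with the untouched lower harmonics.
print(shifted[4])  # 2100.0
```

For speech, such shifts can restore audibility of high frequency consonants; for music, they pull harmonics off their integer relationships and the result sounds out of tune.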

The best music strategy for severely damaged cochlear regions would not be to transpose away from that region, but simply to reduce the gain in that frequency region.  Creating additional inharmonic energy where it is not supposed to be will ruin music.

And indeed, in most cases of algorithms for amplifying music, less is usually more.  Turn off the fancy stuff and just listen to the music.  Wider or narrower frequency responses have nothing to do with the input source (music or speech) and as such should not differ between a “speech in quiet” program and a “music” program.

I recently saw a music producer whose ears are her life – and unfortunately she suffered a sudden partial sensorineural hearing loss in one ear. We were lucky enough to have her seen by an otolaryngologist within hours, and after an MRI, steroid injections were started. So far, this is not an unusual situation or course of action, although, with a few “favors” cashed in, we were able to get her into the system faster than usual.

The producer, not wanting to leave anything to chance, searched out Dr. Google and found something that I was not aware of.  This was a January 2014 article with the great name “Constraint-induced sound therapy for sudden sensorineural hearing loss – behavioral and neurophysiological outcomes.”

Music therapy can be useful BUT in conjunction with steroidal therapy. Courtesy of www.Risingstarzmusic.com

The idea behind the research is to listen to music in the affected ear while plugging the unaffected ear; this supposedly acts synergistically with the steroid injections to facilitate cochlear function recovery.

This was published in a www.Nature.com publication, so it was well peer reviewed, but I was surprised that I had never heard of this.

The word “Constraint” in the title refers to plugging the unaffected ear (with an earplug) while music is played at a safe level in the affected ear. The authors of the report claim that this can also be quite useful to re-establish normal cortical auditory maps – something that can be permanently altered despite resolution of the peripheral pure tone sensorineural hearing loss.

The authors provide several possible explanations for how this works.   They point out that “sound stimulation dilates blood vessels and increases red blood cell velocity in the cochlea”.  This, they argue, could improve the micro-circulation of blood (and oxygen) within the cochlea. Since oxygen deprivation is a major cause of cochlear hair cell death, this improved blood circulation may have helped to resolve the hypoxic situation. The authors also point out that even if there was no oxygen deprivation in the damaged cochlea, improved vascular flow brought about by sound stimulation would improve overall cochlear metabolism and confer other metabolic benefits by allowing more optimal removal of toxic substances in the cochlea, such as free radicals.

Music is Medicine, but it does not replace medicine. Courtesy of www.PetersonFamilyFoundation.org

The authors of this study do caution that they still cannot conclude whether the music in the affected ear was, in itself, beneficial, or whether the music merely enhanced the steroidal effect. There is no evidence to date that music in the affected ear is useful on its own; steroids remain the gold standard. Music in the affected ear appears to supplement the steroidal effect, but the mechanism(s) are as yet unknown.

So – while this is interesting (and something that I had previously not known as a clinical audiologist), it is important to underscore that listening to music in the affected ear is NOT a replacement for steroidal therapy.  It appears to supplement the steroidal therapy, but the mechanisms are still not understood.  Do not avoid the otolaryngologist at a time like this.  This is not a substitute for medical or steroidal intervention.