Rachel Hottle is the guest contributor to this week’s blog at HearTheMusic. Rachel is a fourth-year undergraduate at Swarthmore College near Philadelphia, Pennsylvania, where she is studying music and biology. This summer she is volunteering as a research assistant in the SMART (Science of Music, Auditory Research, and Technology) Lab at Ryerson University. She hopes to pursue graduate study in the field of music cognition.

The roar of a train, the laughter of children, the swell of a symphony. All are colorful aspects of daily life that are communicated to us via our sense of hearing. Hearing is an important asset that helps us respond to and interact with the world around us. Age-related hearing loss is a pervasive problem that often affects older adults’ quality of life. Such hearing loss can make it difficult to perform everyday tasks, such as determining where a sound is coming from, distinguishing speech from background noise, and understanding emotions conveyed in speech—all of which can lead to isolation and depression. Over the last few years, researchers in the SMART (Science of Music, Auditory Research, and Technology) lab at Ryerson University have become especially interested in the interaction between hearing loss and perception of emotion from auditory cues.

Perceiving and correctly identifying emotion in speech is important for interacting with and responding to others, and the loss of this ability can contribute to social isolation in older adults. Emotion is conveyed in speech through features such as pitch contour and dynamic contrast. For example, emotions such as anger and fear tend to be conveyed in a loud voice, while sadness and tenderness are conveyed in softer tones. In hearing-impaired individuals, a constricted dynamic range can leave “loud” emotions such as anger poorly differentiated from “quiet” emotions such as sadness and tenderness. This dynamic range is often limited in individuals with sensorineural hearing loss due to a phenomenon known as recruitment. In sensorineural hearing loss, some of the hair cells in the inner ear die and can no longer convey sound information to the brain. Adjacent hair cells are “recruited” to respond to the frequencies of the dead hair cells in addition to their own, which causes the sound reaching the brain to be perceived as louder than usual. It can also distort hearing, since hair cells tuned to multiple frequencies may now be responding to the same sound. As a result, hearing aids can be uncomfortable for individuals with severe recruitment.

To deal with the problem of recruitment, hearing aids must both amplify quiet sounds and compress loud sounds to keep them comfortable and minimize distortion. Since emotion in speech often relies on a wide dynamic range, it is important to investigate the effects of both hearing loss and hearing aids on the perception of emotion in speech. Our lab has found small but promising results regarding the effect of hearing aids on emotion perception. A study led by PhD student Gabe Nespoli found that hearing-aided individuals have physiological responses to emotional speech (as measured by skin conductance) similar to those of normal-hearing individuals, but are slower and less accurate at identifying the emotions presented (though still better than hearing-impaired individuals without hearing aids).
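To make the compression idea concrete, here is a minimal sketch of wide dynamic range compression in Python. The function name and every parameter value (gain, knee point, compression ratio) are invented for illustration and are not any manufacturer’s fitting algorithm: below a “knee” level, soft sounds get full amplification; above it, each extra decibel of input adds less than a decibel of output.

```python
import numpy as np

def wdrc_output(level_db, gain_db=20.0, knee_db=50.0, ratio=3.0):
    """Toy wide-dynamic-range compression: map an input level (dB SPL)
    to an output level. Below the knee, apply the full linear gain;
    above it, every `ratio` dB of extra input yields only 1 dB of
    extra output. All parameter values are illustrative, not clinical
    fitting targets."""
    level_db = np.asarray(level_db, dtype=float)
    linear = level_db + gain_db                                    # soft sounds: full gain
    compressed = knee_db + gain_db + (level_db - knee_db) / ratio  # loud sounds: squeezed
    return np.where(level_db <= knee_db, linear, compressed)

# A 40 dB input range (40-80 dB SPL) comes out as only 20 dB of output
# range: soft speech stays audible, loud sounds stay below discomfort.
for spl in (40, 50, 60, 70, 80):
    print(f"input {spl} dB SPL -> output {float(wdrc_output(spl)):.1f} dB SPL")
```

The trade-off described above falls out directly: the same squeezing that keeps loud sounds comfortable also shrinks the loudness differences that help distinguish “loud” emotions from “quiet” ones.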

Another study, conducted by Dr. Huiwen Goy, a postdoctoral fellow in the lab, found that hearing aids yielded small but significant improvements in identifying emotion in speech compared with hearing-impaired individuals who did not use hearing aids. While hearing aids may not raise emotion recognition to the level experienced by normal-hearing listeners, the results are encouraging when compared with unaided hearing loss. Nonetheless, there is certainly room for improvement.

Although self-report questionnaires measuring hearing loss and its associated limitations do exist, until recently no scale assessed the experiences of individuals with hearing loss with respect to emotional communication. In collaboration with Dr. Gurjit Singh and Stefan Launer, research scientists at Phonak, we developed the Emotional Communication in Hearing Questionnaire (EMO-CHeQ; Singh, Liskovoi, Launer, & Russo, submitted) to address this gap. Our seventeen-item questionnaire includes questions relating to characteristics of talkers, speech production, listening situations, and socio-emotional wellbeing. After validating the scale, we tested it with older adults with normal hearing, with hearing aids, and with hearing impairments but no hearing aids. We found that, in the domain of talker characteristics, individuals with normal hearing reported significantly less handicap than those with hearing aids or unaided hearing loss, while in the domain of situational factors there was no difference between individuals with normal hearing and those with hearing aids. This provides further evidence that hearing aids in their current form can help with some features of emotion perception in speech, but not others.

While musical emotion relies on some of the same prosodic cues as spoken emotion (dynamic range, pitch contour, speed), it differs in some important ways. Emotions in music are often redundantly coded, meaning that multiple emotional features are present at once: a “sad” musical passage may be simultaneously soft, slow, and low in pitch. And since instrumental music lacks the semantic content of emotional speech (the words we speak, which give clues to the emotion we are trying to convey), the emotions presented in music tend to be more exaggerated so the audience can distinguish them. All of this might make musical emotion easier than spoken emotion for hearing-impaired and hearing-aided individuals to parse. On the other hand, recruitment and dynamic range compression may make it harder for them to appreciate the emotion conveyed by music, which, especially in the classical genre, relies on a wide dynamic range from pianissimo to fortissimo.
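As a toy illustration of redundant coding (the cues, thresholds, and numbers below are invented for the example, not taken from the studies discussed here), several acoustic cues can each “vote” for the same arousal level, so the percept can survive the loss of any single cue, such as dynamics flattened by compression:

```python
# Toy cue-combination sketch: each acoustic cue independently "votes" for
# high or low arousal, so the overall percept survives the loss of any one
# cue. All thresholds and cue values are invented for illustration only.

def arousal_votes(tempo_bpm, loudness_db, pitch_hz, usable_cues):
    votes = []
    if "tempo" in usable_cues:
        votes.append("high" if tempo_bpm > 110 else "low")
    if "dynamics" in usable_cues:
        votes.append("high" if loudness_db > 70 else "low")
    if "pitch" in usable_cues:
        votes.append("high" if pitch_hz > 300 else "low")
    return max(set(votes), key=votes.count)  # majority vote

sad_passage = dict(tempo_bpm=60, loudness_db=55, pitch_hz=180)

# With all cues intact, and with the dynamics cue compressed away,
# the verdict is the same:
print(arousal_votes(**sad_passage, usable_cues={"tempo", "dynamics", "pitch"}))  # low
print(arousal_votes(**sad_passage, usable_cues={"tempo", "pitch"}))              # low
```

Real cue integration is of course far more graded than a majority vote; the point is only that redundancy gives musical emotion some slack that the sparser cues of speech may not have.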

A new preliminary study led by undergraduate Domenica Fanelli (Fanelli, 2017) found promising results regarding the parsing of musical emotion by hearing-aided individuals. Her study presented musical stimuli judged to be happy, sad, angry, fearful, or tender/calm to older adults who were hearing impaired, hearing aided, or normal hearing. She found that the hearing-aided group was slightly better than the other two groups at distinguishing low-arousal emotions such as sadness and tenderness, and performed just as well as the normal-hearing group at judging the high-arousal emotions of happiness, anger, and fear.

While these results are preliminary, our recent work suggests that hearing aids help those with hearing loss understand emotional cues in both speech and music. This is encouraging, as parsing emotion is a key part of speech communication as well as musical enjoyment. More research remains to be done to determine whether hearing aid technology can be improved to further increase emotional understanding.

As yet unpublished references:

Singh, G., Liskovoi, L., Launer, S., & Russo, F. (submitted for publication). The Emotional Communication in Hearing Questionnaire (EMO-CHeQ): Development and validation.

Fanelli, D. (2017). Perception of emotion in music by hearing-impaired and hearing-aided listeners. Unpublished undergraduate thesis.

We are a bag of biochemicals mixed with a lot of water, plus the tissues and bone that give us structure. My waistline may be a slim 34” at sea level but over 35” at the top of Aspen Mountain; we are held together by atmospheric pressure, bones, and other tissues. What makes us go, though, are the interactions of enzymes, proteins, amino acids, and a multitude of other chemical, bioelectrical, and mechanical processes.

What was colloquially referred to as the “mind/body connection” in the 1960s is now being investigated with tools that simply were not available back in the bell-bottom era. Although we are still just scratching the surface, we are learning about the long-term effects of factors such as stress on the body, and we are beginning to understand the science behind phrases like “stress is the big killer.”

In parts 1 and 2 of this blog series we reviewed a little of what we know about stress, including how negative emotions (through a stress-mediated system and glial excitotoxicity) can generate toxic levels of glutamate that make our cochlea and auditory system function at less than an optimal level.

Adenosine is no more exotic than caffeine (caffeine works largely by blocking adenosine receptors), and too little or too much of either can have dramatic effects.

Recent research by Dr. Stanislav Zakharenko and his colleagues in the St. Jude Department of Developmental Neurobiology has identified another biochemical that they think is implicated in the difference between how children and adults acquire language and music.

It has long been thought that there is a “critical period” after which we lose the ability to absorb language and music to the extent that children do. In short, children are sponges and adults, at least in my case, are brick walls. This is especially true for learning a second language and music. But what is the reason behind this critical period?

Dr. Zakharenko’s group examined the neuromodulator adenosine, one of the building blocks of life’s genetic machinery, which occurs naturally in the auditory thalamus, and found that, at least in mice, limiting its concentration allows adult mice to perform more like young mice.

Age-related critical period for learning a new language and for learning music

“By disrupting adenosine signaling in the auditory thalamus, we have extended the window for auditory learning for the longest period yet reported, well into adulthood and far beyond the usual critical period in mice,” said corresponding author Stanislav Zakharenko, M.D., Ph.D., a member of the St. Jude Department of Developmental Neurobiology. “These results offer a promising strategy to extend the same window in humans to acquire language or musical ability by restoring plasticity in critical regions of the brain, possibly by developing drugs that selectively block adenosine activity.”

Learning language or music is usually a breeze for children, but as even young adults know, that capacity declines dramatically with age. St. Jude Children’s Research Hospital scientists have evidence from mice that restricting a key chemical messenger in the brain helps extend efficient auditory learning much later in life.

It turns out that the concentration of the neuromodulator adenosine, which limits the release of the neurotransmitter glutamate, is somehow implicated in extending or delaying the critical period for learning language and music.

Like glutamate (and coffee, especially my first cup in the morning), too little is as bad as too much. But it does look like there may be an optimal concentration of adenosine that would allow adults to learn and discriminate as well as children. Stay tuned to this line of research.
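A minimal sketch of that inverted-U idea, with a curve and numbers that are purely illustrative rather than fitted to any data from these studies:

```python
import math

def learning_capacity(concentration, optimum=1.0, width=0.5):
    """Toy inverted-U: capacity peaks at some optimal concentration and
    falls off on both sides. All numbers are illustrative only."""
    return math.exp(-((concentration - optimum) ** 2) / (2 * width ** 2))

# Relative capacity across a sweep of (arbitrary-unit) concentrations:
for c in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"concentration {c:.1f} -> relative capacity {learning_capacity(c):.2f}")
```

Both tails of that curve are the point: the research question is not simply “more adenosine or less,” but whether the concentration can be nudged toward the peak.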