Rachel Hottle is the guest contributor to this week’s blog at HearTheMusic. Rachel is a fourth-year undergraduate at Swarthmore College near Philadelphia, Pennsylvania, where she is studying music and biology. This summer she is volunteering as a research assistant in the SMART Lab (Science of Music, Auditory Research, and Technology) at Ryerson University. She hopes to pursue graduate study in the field of music cognition.
The roar of a train, the laughter of children, the swell of a symphony. All are colorful aspects of daily life that are communicated to us via our sense of hearing. Hearing is an important asset that helps us respond to and interact with the world around us. Age-related hearing loss is a pervasive problem that often affects older adults’ quality of life. Such hearing loss can make it difficult to perform everyday tasks, such as determining where a sound is coming from, distinguishing speech from background noise, and understanding emotions conveyed in speech—all of which can lead to isolation and depression. Over the last few years, researchers in the SMART (Science of Music, Auditory Research, and Technology) lab at Ryerson University have become especially interested in the interaction between hearing loss and perception of emotion from auditory cues.
Perceiving and correctly identifying emotion in speech is important for interacting with and responding to others, and losing this ability can contribute to social isolation in older adults. Emotion is conveyed in speech through features such as pitch contour and dynamic contrasts. For example, emotions such as anger and fear tend to be conveyed in a loud voice, while sadness and tenderness are conveyed via softer tones. In hearing-impaired individuals, a constricted dynamic range may cause “loud” emotions such as anger to be less well differentiated from “quiet” emotions such as sadness. Dynamic range is often limited in individuals with sensorineural hearing loss because of a phenomenon known as recruitment. In sensorineural hearing loss, some of the hair cells in the inner ear die and are no longer able to convey sound information to the brain. The brain “recruits” adjacent hair cells to respond at the frequency of the dead hair cells as well as their original frequency. This causes the signal reaching the brain to sound louder than usual. It can also distort hearing, since hair cells for multiple frequencies may be responding at the same time because of their now double or triple duty. As a result, hearing aids can be uncomfortable for individuals with severe recruitment.
To deal with the problem of recruitment, hearing aids must both amplify quiet sounds and compress loud sounds, making them less uncomfortable while minimizing distortion. Since emotion in speech often relies on a wide dynamic range, it is important to investigate how hearing loss, and hearing aids themselves, affect the perception of emotion in speech. Our lab has found small but promising results regarding the effect of hearing aids on emotion perception. A study led by PhD student Gabe Nespoli found that hearing-aided individuals respond physiologically to emotional speech much as normal-hearing individuals do (as measured by skin conductance), but are slower and less accurate at identifying the emotions presented (although still better than hearing-impaired individuals without hearing aids).
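To make the compression idea mentioned above concrete, here is a minimal sketch of a wide dynamic range compression rule. It is not the signal processing of any actual hearing aid; the threshold, ratio, and gain values are invented purely for illustration. Quiet sounds receive the full gain, while sounds above a threshold receive progressively less, shrinking the range of levels that reaches the listener.

```python
import numpy as np

def wdrc_gain_db(level_db, threshold_db=50.0, ratio=3.0, base_gain_db=20.0):
    """Toy wide dynamic range compression rule (illustrative values only).

    Inputs below the threshold receive the full base gain (quiet sounds are
    amplified); above the threshold, each extra dB of input adds only
    1/ratio dB to the output (loud sounds are compressed).
    """
    level_db = np.asarray(level_db, dtype=float)
    over_knee = np.maximum(level_db - threshold_db, 0.0)  # dB above the knee
    gain_reduction = over_knee * (1.0 - 1.0 / ratio)      # less gain for loud input
    return base_gain_db - gain_reduction

# A quiet voice (~40 dB), conversational speech (~60 dB), an angry shout (~80 dB):
for input_db in (40.0, 60.0, 80.0):
    output_db = input_db + float(wdrc_gain_db(input_db))
    print(f"{input_db:.0f} dB in -> {output_db:.1f} dB out")
```

In this toy example, a 40 dB spread between the quietest and loudest inputs is squeezed into roughly 20 dB at the output, which is exactly the kind of narrowing that can blur the loudness cues separating “loud” emotions from “quiet” ones.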
Another study, conducted by Dr. Huiwen Goy, a postdoctoral fellow in the lab, found that hearing aids yielded small but significant improvements in identifying emotion in speech relative to hearing-impaired individuals without hearing aids. While hearing aids may not raise emotion recognition to the level experienced by normal-hearing listeners, the results are encouraging when compared with those of individuals with hearing loss who do not use hearing aids. Nonetheless, there is certainly room for improvement.
Although self-report questionnaires that measure hearing loss and its associated limitations do exist, until recently no scale assessed the experiences of individuals with hearing loss specifically with respect to emotional communication. In collaboration with Dr. Gurjit Singh and Stefan Launer, research scientists at Phonak, we developed the EMO-CHeQ (Emotional Communication in Hearing Questionnaire; Singh, Liskovoi, Launer, & Russo, submitted) to address this gap. The seventeen-item questionnaire includes items relating to characteristics of talkers, speech production, listening situations, and socio-emotional wellbeing. After validating the scale, we tested it with older adults with normal hearing, with hearing aids, and with hearing impairment but no hearing aids. We found that, in the domain of talker characteristics, individuals with normal hearing reported significantly less handicap than those with hearing aids or hearing loss, whereas in the domain of situational factors there was no difference between individuals with normal hearing and those with hearing aids. This provides further evidence that hearing aids in their current form can help with some aspects of emotion perception in speech, but not others.
While musical emotions rely on some of the same prosodic cues as spoken emotions (dynamic range, pitch contour, speed), musical emotion differs from spoken emotion in some important ways. Emotions in music are often redundantly coded, meaning that multiple emotional features are present at once. For example, a “sad” musical passage may be simultaneously soft, slow, and low in pitch. Since instrumental musical passages lack the semantic content of emotional speech (the words we speak, which may give clues to the emotion we are attempting to convey), the emotions presented in music need to be more exaggerated to be distinguishable to the audience. All this might mean that it should be easier for hearing-impaired and hearing-aided individuals to parse musical emotion compared with spoken emotion. However, it is also possible that recruitment and dynamic range compression make it more difficult for hearing-impaired and hearing-aided individuals to appreciate the emotion conveyed by music, which, especially in the classical genre, relies on a wide dynamic range from pianissimo to fortissimo.
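As a rough illustration of what redundant coding means, here is a hypothetical sketch in which several acoustic cues “vote” on the arousal of a passage. The cue values and thresholds are invented for illustration and are not drawn from any of the studies described here; the point is simply that when cues agree, the overall judgment can survive the loss or distortion of any single cue.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    loudness_db: float   # average level of the passage
    tempo_bpm: float     # speed
    pitch_height: float  # 0 = low register, 1 = high register

def guess_arousal(p: Passage) -> str:
    # Each cue casts a vote for high or low arousal; the agreement among
    # votes is the redundancy described in the text.
    arousal_votes = [
        p.loudness_db > 70,    # loud passages suggest high arousal
        p.tempo_bpm > 120,     # fast passages suggest high arousal
        p.pitch_height > 0.5,  # high register suggests high arousal
    ]
    if sum(arousal_votes) >= 2:
        return "high arousal (e.g., happy, angry, fearful)"
    return "low arousal (e.g., sad, tender)"

# A "sad" passage is redundantly soft, slow, and low:
print(guess_arousal(Passage(loudness_db=55, tempo_bpm=60, pitch_height=0.2)))
# A "happy" passage is redundantly loud, fast, and high:
print(guess_arousal(Passage(loudness_db=80, tempo_bpm=140, pitch_height=0.8)))
```

Under this toy framing, even if compression narrows the loudness cue, the votes from tempo and register can still carry a passage to the right side of the arousal divide, which is one intuition for why hearing-aided listeners might still fare well with musical emotion.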
A new preliminary study led by undergraduate Domenica Fanelli (Fanelli, 2017) found promising results regarding the parsing of musical emotion by hearing-aided individuals. Her study presented musical stimuli judged to be happy, sad, angry, fearful, or tender/calm to older adults with hearing impairment, with hearing aids, or with normal hearing. She found that the hearing-aided group was slightly better than the other two groups at distinguishing low-arousal emotions such as sadness and tenderness, and performed just as well as the normal-hearing group at judging the high-arousal emotions of happiness, anger, and fear.
While these results are preliminary, our recent work suggests that hearing aids help those with hearing loss understand emotional cues in both speech and music. This is encouraging, as parsing emotion is a key part of speech communication as well as musical enjoyment. More research remains to be done to determine whether hearing aid technology can be improved to further support emotional understanding.
References (unpublished at the time of writing):
Singh, G., Liskovoi, L., Launer, S., & Russo, F. (submitted for publication). The Emotional Communication in Hearing Questionnaire (EMO-CHeQ): Development and validation.
Fanelli, D. (2017). Perception of emotion in music by hearing-impaired and hearing-aided listeners. Unpublished undergraduate thesis.