I recall reading an article from 1983 in a then-new journal of the American Auditory Society called Ear and Hearing. I remember showing it to my colleagues and commenting on how silly it was, and that the journal would publish anything just to fill the pages of Ear and Hearing.
Well, I was wrong! Ear and Hearing was, and remains, one of the premier journals in our field, and the authors of that article (Lindgren and Axelsson) were two giants in the epidemiology of hearing loss. These guys just don’t make mistakes!
What Lindgren and Axelsson did was expose ten subjects to noise and to music at equal exposures – equal time-weighted Leq values and equal durations – and then measure the resulting Temporary Threshold Shift (or TTS). Temporary Threshold Shift, as the name suggests, is a temporary elevation of one’s hearing threshold as measured on an audiogram; it typically resolves within 16-18 hours.
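For readers who don’t work with noise measurements every day, here is a minimal sketch (my own illustration, not from the article) of what “equal time-weighted Leq” means: Leq is the steady level that carries the same acoustic energy as the actual fluctuating exposure, computed from the mean-square sound pressure re 20 µPa. The waveforms and the 90 dB target below are hypothetical.

```python
import numpy as np

P0 = 20e-6  # reference sound pressure, 20 micropascals (0 dB SPL)

def leq_db(pressure):
    """Equivalent continuous sound level: dB of the mean-square pressure."""
    return 10.0 * np.log10(np.mean((pressure / P0) ** 2))

# Two very different waveforms -- broadband "noise" and tonal "music" --
# scaled to the same RMS pressure, and hence the same Leq.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 48_000, endpoint=False)
noise = rng.standard_normal(t.size)
music = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

target_rms = P0 * 10 ** (90 / 20)  # scale both to roughly 90 dB SPL
for name, sig in (("noise", noise), ("music", music)):
    sig = sig * (target_rms / np.sqrt(np.mean(sig ** 2)))
    print(f"{name}: Leq = {leq_db(sig):.1f} dB SPL")  # both print 90.0
```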
When I first studied TTS and hearing loss back in the late 1970s and early 1980s, we knew of only two physical parameters that affected our auditory system – the sound level and the duration. So of course there should have been no difference between exposure to noise and exposure to music, as long as the levels and durations were the same.
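To make that reasoning concrete, here is a small sketch of the equal-energy view that dominated the field at the time: the exposure “dose” depends only on level and duration, so every 3 dB increase in level is traded against a halving of duration. The specific levels and durations below are illustrative, not from any study.

```python
def relative_energy(level_db, hours):
    """Relative acoustic energy of an exposure under the equal-energy rule."""
    return hours * 10 ** (level_db / 10)

# 85 dB for 8 hours carries almost exactly the same energy as
# 88 dB for 4 hours -- the familiar 3 dB exchange rate.
print(relative_energy(85, 8))   # ~2.53e9
print(relative_energy(88, 4))   # ~2.52e9

# By this logic, noise and music with equal Leq and equal duration
# should produce identical TTS -- which is not what was found.
```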
But Lindgren and Axelsson found that while four of the ten subjects had similar TTS results for the noise and the music, six of the ten demonstrated a greater TTS for the noise than for the music. Simply stated, as far as TTS is concerned, in the majority of cases noise appeared to be more damaging than an equivalent exposure to music.
The article sat on my desk gathering dust for quite some time because I couldn’t figure out why that was the case. I eventually ran across a translation of a 1970 article written in German by Hormann and his colleagues, and this provided some insight. The article predated the Lindgren and Axelsson one by more than a decade and was actually in their reference section, which I had never noticed (possibly because it was in German). In any event, Professor Hormann and his colleagues did a really neat study.
I am not sure that a university ethics board would approve it today, but then again, there were no lasting effects of what he did, other than perhaps some hurt feelings.
In a large first-year psychology class, students were asked whether they could roll their tongues and bring the two sides together. Of course this is genetic, and other than making you the life of a party, there is no real advantage to being able to do it.
If you could do it, you were treated like royalty and asked if you would go into a room to listen to some noise and then have your hearing tested. The experimenters were smiling, and I think he even offered to take the subjects out for a beer afterwards. This was a “positive” experience.
But if you could not do it, you were treated very poorly: you were yelled at and told that you needed to go into a room and have your ears blown out. Everyone involved was gruff and mean. This, of course, was a “negative” experience.
Unbeknownst to the two groups, the noise exposure used to create the TTS was identical; the only difference was whether the experience was framed “positively” or “negatively.”
It turned out that the TTS was much greater (18 dB) in the negatively predisposed group, but only 12 dB in the positively predisposed group. When the statistics were compared with the large-scale noise exposure models of the time, the expected TTS was 12 dB; for some reason, being negatively predisposed made a person more susceptible, adding 6 dB of measurable TTS. Again, there was something mysterious going on with noise and a negative predisposition.
Well, it turns out that being negatively predisposed increases the overall stress level, and glutamate can then rise to a toxic level… but more on that in Part 3.