What a good question! And also what a great Capstone project this would be for an Audiology student! The short answer is that we don’t really know, but here is the slightly longer answer.
Music, like speech, has lower frequency fundamental energy, and then a series of higher frequency harmonics whose frequencies depend intimately on the acoustic characteristics of the musical instruments. The sound pressures of the various harmonics typically characterize the quality of the musical instrument and help us distinguish a clarinet from a trumpet.
Speech is rather straightforward in that the harmonics are all integer multiples of the fundamental frequency (which we colloquially call pitch). Because the vibrating source in speech is our vocal cords, which are held tightly at both ends (much like a violin string), they function as a half-wavelength resonator. My fundamental frequency is about 125 Hz (which is also roughly an octave below middle C, which can be really great at parties), so the second harmonic would be at 2 x 125 Hz, or 250 Hz; the third harmonic at 3 x 125 Hz, or 375 Hz; and so on. All harmonics are equally spaced by 125 Hz, and it is this spacing that allows us to assign a pitch to one's voice. In speech, it doesn't really matter that much whether the fourth harmonic has a lower sound level than the third; it does, but even if we could artificially amplify the fourth harmonic, my voice would still sound natural, albeit slightly amplified.
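The arithmetic above is simple enough to sketch in a few lines of Python (my 125 Hz fundamental is the only number taken from the text; everything else follows from the integer-multiple rule):

```python
# Sketch: the harmonic series of a voice with a 125 Hz fundamental.
f0 = 125  # fundamental frequency in Hz

# The nth harmonic sits at n * f0 (the fundamental itself is n = 1).
harmonics = [n * f0 for n in range(1, 6)]
print(harmonics)  # [125, 250, 375, 500, 625]

# Adjacent harmonics are therefore always spaced by exactly f0,
# and it is this constant spacing that conveys the pitch of the voice.
spacings = [b - a for a, b in zip(harmonics, harmonics[1:])]
print(spacings)  # [125, 125, 125, 125]
```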
In short, the human voice is relatively immune to errors in its production. Stating this another way, humans can accept a wide range of energy patterns before we would judge it as being odd or not human sounding. We can understand a child’s [a] sound and know that it is the same as that of a fully grown adult’s [a] sound, despite having a dramatically different underlying harmonic structure. My fundamental frequency is 125 Hz whereas a young child may have one of 300 Hz and the phonetic elements of his speech would still be quite recognizable.
However, music appears to be a different animal altogether. Slight changes in the fundamental, missing harmonic elements (perhaps masked by noise), or notes that are unintentionally skipped are immediately noticed, and our judgment of that music can be severely affected.
And this is where an interesting Capstone project comes in. What are the limits of changes in a musical spectrum that will not be noticed? Now, I am being slightly facetious; this is not just a Capstone project, but a life's work. Boiled down to one simpler, more manageable question: for stringed instruments, how crucial is the harmonic balance (in terms of sound level) for good music appreciation? Let's backtrack a bit to clarify why I chose this one question of the many that could have been asked.
There is an assumption in the field (and I personally think that it's true) that altering the harmonic structure of stringed instruments is devastating to the quality of music, and that this is not the case for woodwinds and brass. It is the balance between the lower frequency fundamental and the higher frequency harmonics that defines the difference between a student-model violin and a Stradivarius. A Stradivarius violin, because of the wood used as well as a myriad of other undefined parameters, sounds the way it does because the higher frequency harmonics are at a specific sound level relative to the fundamental, while a student model has a different set of relationships. Focusing on the research methodology: using modern-day digital techniques, the amplitudes of the various harmonics can be altered, and these .wav files can then be played to groups of musicians (and non-musicians?) to determine preferences and quality judgments.
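As a rough sketch of that methodology, here is one way such stimuli could be generated with only the Python standard library. The specific fundamental (196 Hz, the open G string of a violin) and the harmonic levels are my own illustrative choices, not values from any study:

```python
import math
import struct
import wave

def synthesize(f0, harmonic_levels_db, duration=0.5, rate=44100):
    """Build a tone whose nth harmonic (at n * f0) has the given level
    in dB relative to the fundamental. Levels are hypothetical."""
    amps = [10 ** (db / 20.0) for db in harmonic_levels_db]
    peak = sum(amps)  # worst-case sum, used to normalize and avoid clipping
    samples = []
    for i in range(int(duration * rate)):
        t = i / rate
        s = sum(a * math.sin(2 * math.pi * (n + 1) * f0 * t)
                for n, a in enumerate(amps))
        samples.append(s / peak)
    return samples

def write_wav(path, samples, rate=44100):
    """Write mono 16-bit PCM so the stimulus can be played to listeners."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                               for s in samples))

# Example stimulus: fundamental at 0 dB, second harmonic 6 dB down,
# third harmonic 12 dB down. Rerun with other level sets to build
# the comparison conditions for a listening experiment.
tone = synthesize(196.0, [0.0, -6.0, -12.0])
write_wav("altered_harmonics.wav", tone)
```

In a real experiment one would of course start from recordings of actual instruments rather than pure synthesized harmonics, but the principle of digitally re-weighting the harmonic amplitudes before playback is the same.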
What I have just said may be completely bunk and the assumption that the balance is crucial may be erroneous, but now we have an empirical project waiting to be done by some eager graduate student.
In contrast to stringed instruments, woodwind players need to hear the lower frequency inter-resonant breathiness, and although a clarinet can generate significant higher frequency harmonic energy, it is not as important as it would be for string instrument players and listeners… or at least, that's the assumption. Another Capstone project waiting to happen?
What does this mean in terms of hearing aids and music?
These assumptions have led to more than two decades of hearing aid designs that were thought to be optimal for both listening to and playing music. The first hearing aid designed with music in mind was the 1988 K-AMP from Etymotic Research (www.etymotic.com), and indeed it is still being marketed by General Hearing (www.generalhearing.com) in the United States. To this day, the K-AMP remains state of the art for both listening to and playing music.
The K-AMP is a single-channel hearing aid, and one of the rationales for this was to be able to treat the lower frequency fundamental energy in the same way as the higher frequency harmonic structure. A 5 dB reduction for the fundamental also resulted in exactly a 5 dB reduction for each harmonic: no less, and no more. The structure of music is maintained with a single-channel hearing aid.
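A few lines of arithmetic show why a single gain applied across the board preserves the harmonic balance. The harmonic levels below are hypothetical numbers of my own, purely for illustration:

```python
# Hypothetical levels (dB SPL) for a note's fundamental and harmonics.
levels = {125: 70.0, 250: 64.0, 375: 60.0, 500: 55.0}

def apply_single_channel(levels_db, gain_db):
    # A single-channel device applies one gain to every frequency,
    # so every harmonic moves up or down by the same amount.
    return {f: spl + gain_db for f, spl in levels_db.items()}

reduced = apply_single_channel(levels, -5.0)

# The level *differences* between harmonics, which carry the timbre,
# are identical before and after the 5 dB reduction:
before = [levels[250] - levels[125], levels[375] - levels[250]]
after = [reduced[250] - reduced[125], reduced[375] - reduced[250]]
print(before == after)  # True
```

A multi-channel compressor, by contrast, may apply different gains to the bands holding the fundamental and the harmonics, changing exactly those inter-harmonic differences.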
Despite the lack of a single-channel hearing aid in today's marketplace (except for the K-AMP), I suspect that at least for string-heavy music such as Classical and Baroque music, adhering to the principle that compressors should treat all frequencies the same is reasonable. I would advocate for the hearing aid industry to consider developing a single-channel hearing aid, or at least a program that is truly single-channel, to be used as a "music program."
I would imagine that for non-classical music where there is less dependence on large string sections, a multi-channel hearing aid would be fine.
Well, at least that’s my assumption… still waiting for some neat Capstone projects to verify my intuition!