One of my all-time favorite movies is The Parent Trap; not the remake, but the original starring Brian Keith, Maureen O’Hara, and Hayley Mills. In the movie, the very wealthy grandmother of Hayley Mills’ character used one sentence to summarize the extreme wealth of the family and the large size of their house. She said to a servant, “Put both Steinways at the north end of the drawing room”. To this day, this is one of my all-time favorite sayings; it expresses so much information in just 11 words.

I have never had a chance to use that phrase, however. The closest I came was when I attended a two-piano, four-hands concert at the house of my Musicians’ Clinics of Canada partner, who indeed has two full-sized Steinway grand pianos in the same room. Of course, at his house, they occupy most of the first floor.

But what is the largest musical instrument ever built?

There are several candidates, such as the four-story-tall harp constructed at a recent Burning Man festival, and the tuba that my friend had to drag home from school in grade 8. (He later switched to the piccolo.)

Over the years, musical instrument builders have quietly competed among themselves to create very large versions of their typically smaller-scale instruments.

Eight German violin makers were responsible for the rather large violin in the picture below.

Large tubas have been built, and there is even the “octobass”, an oversized bowed string instrument.

But the largest instrument of all (so far) is the underground Stalacpipe Organ at Luray Caverns, which spreads over about 3.5 acres.  This was the brainchild of mathematician Leland W. Sprinkle (that’s really his name!).  While spelunking in a cave one day, his son hit his head on a stalactite, only to hear a distinct ring.  Where other dads might have rushed to see if their son was OK, Professor Sprinkle began to examine the distinct rings that other stalactites appeared to produce when struck.  I hope he didn’t continue to use his son’s head!

In any event, because stalactites (and presumably stalagmites) tend to have a resonant frequency determined by their size (length and mass), Sprinkle needed to search about 3.5 acres of cave to find an assortment of stalactites that covered the full range of notes on an organ keyboard.  An array of electronic hammers was then set up that could strike any individual stalactite, or any combination of them, to produce any note or chord.
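Just for fun, here is a tiny sketch (entirely hypothetical, in Python) of what the note-to-stalactite mapping behind such a hammer array might look like; the real organ’s console and wiring are of course far more involved, and the hammer numbers and note assignments below are invented for illustration only.

```python
# Hypothetical sketch of a note-to-stalactite hammer mapping.
# Hammer numbers and note assignments are invented for illustration.
NOTE_TO_HAMMER = {
    "C4": 17,   # solenoid hammer wired to the stalactite that rings near C4
    "E4": 42,
    "G4": 8,
}

def strike(note: str) -> None:
    print(f"fire hammer {NOTE_TO_HAMMER[note]} for {note}")

def play_chord(notes: list[str]) -> None:
    # A chord is simply several stalactites struck at (nearly) the same time.
    for note in notes:
        strike(note)

play_chord(["C4", "E4", "G4"])   # a C major chord from three stalactites
```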

Clearly this professor was tenured and didn’t need to publish-or-perish!

To date, this underground organ is the largest musical instrument ever built.  But tomorrow is another day!

Rick Ledbetter is a long-time professional bass player and composer with a profound, late-onset hearing loss. He has been programming his own aids, through several sets, to meet his needs for live performance, and he is quite thoughtful about this area.  In April 2016 I posted an open letter to hard-of-hearing musicians, which was repeated last week during the holiday season break, and what better response than this wish list for audiologists and hearing aid manufacturers…

Musicians: we have picky ears, from years of critical listening and learning what musical instruments should sound like, both solo and in ensemble. We can listen to an ensemble play and pick out what one instrument is doing while the music is going by. So, to do that with hearing aids, we need aids that reproduce sound with the greatest fidelity and real-world frequency response at all volume levels, but particularly from 65 dB to over 100 dB, to handle the brief peaks that even an orchestra can produce.

Hearing aids have long been considered speech amplification devices, and the programming reflects that: a lack of clean low end, and a reliance on sound modifiers such as speech enhancers, noise reduction, directionality, etc., to make speech more intelligible in day-to-day life. However, these same active sound modifiers work against fidelity in live music situations. Feedback reducers don’t know the difference between feedback and a flute, a sustained violin, or an electric guitar’s high note. Noise reducers don’t know the difference between noise and a drum set with cymbals. And automatic program switchers go nearly berserk in the presence of live music, especially music with a wide dynamic range. Musicians can hear all of this.

In my experience, there isn’t much consideration given to fidelity at the dynamic range of live music, particularly at louder levels. MPOs (maximum power outputs) set too aggressively squash the sound and bury the sonic detail of the music. What I have found with the aids I have had is too much amplification on the 80 dB gain curve, especially from 500 Hz to 2 kHz, and too much limiting from the MPOs. So the volume gets loud too quickly, and is then limited too soon by the MPOs.
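To make that concrete, here is a minimal sketch in Python of how an aggressively low MPO ceiling collapses the dynamic range of live music. The gain and MPO numbers are made up for illustration; they are not real fitting targets.

```python
import numpy as np

def aid_output(input_spl, gain_db, mpo_db):
    """Apply a fixed gain, then clamp the result at the MPO ceiling."""
    return np.minimum(input_spl + gain_db, mpo_db)

# A quiet passage, an average passage, and a brief orchestral peak (dB SPL).
live_music = np.array([65.0, 85.0, 102.0])

gentle   = aid_output(live_music, gain_db=15, mpo_db=115)  # generous ceiling
squashed = aid_output(live_music, gain_db=15, mpo_db=95)   # aggressive ceiling

print("input dynamic range:        ", live_music.max() - live_music.min(), "dB")
print("gentle MPO output range:    ", gentle.max() - gentle.min(), "dB")
print("aggressive MPO output range:", squashed.max() - squashed.min(), "dB")
# With the aggressive ceiling, the 37 dB input range collapses to 15 dB:
# the brief peaks and the mid-level passages come out at nearly the same level.
```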

A typical way to try to address this has been to create a single-EQ-band “Music” program with little to no sound modifiers. But that doesn’t allow enough dynamic range, and it under-amplifies at lower volume levels. The solution usually offered is to change programs, but a performing musician doesn’t have the time to do that, number one, and most aids will temporarily mute and chime when switching programs, which is a huge “no-no”. We need to hear all the time, and we also don’t have time to be fumbling with buttons on the aids, or with the cell phone hearing aid app, to change programs. We need both hands to play our instruments, so whatever program is used has to work for both live music and conversation.

Then there is the matter of EQ placement points. Over half of a piano’s range is under 500 Hz, and more than half of musical instruments have their tonal character down there too. An electric guitar sits mostly from 250 to 500 Hz all the way up to 6 kHz. An electric bass runs from 32 Hz (low B) on up, with its overtones in the 125 to 600 Hz range. Saxophones are mostly around 500 to 750 Hz and up. So it is critical for musicians to have even response from 750 Hz down, yet hearing aids generally have only three EQ points in that range. When a musician needs low end and there is only a 250 Hz EQ handle to do it with, too much gain in that region risks occlusion, known to a musician as “mud” and “boom”.
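The mismatch is easy to see if you line the numbers up. In the sketch below, the instrument figures come from the paragraph above; the EQ handle frequencies are an assumed, typical-looking layout rather than any manufacturer’s actual bands.

```python
# Illustrative only: instrument figures are taken from the letter above; the
# EQ handle frequencies are an assumed layout, not any manufacturer's bands.
instrument_low_end_hz = {
    "piano (more than half its range)":  27,   # lowest A0 is about 27.5 Hz
    "electric bass (low B fundamental)": 32,
    "electric bass (main overtones)":    125,
    "electric guitar":                   250,
    "saxophones":                        500,
}

eq_handles_hz = [250, 500, 750, 1500, 3000, 6000]   # assumed handle placement

critical_limit_hz = 750   # the letter asks for even response from here down
low_handles = [f for f in eq_handles_hz if f <= critical_limit_hz]
print(f"Handles at or below {critical_limit_hz} Hz: {low_handles}")

for name, lo in instrument_low_end_hz.items():
    usable = [f for f in low_handles if f >= lo]
    print(f"{name}: tone starts near {lo} Hz, shaped by {len(usable)} low handle(s)")
```

Most of the musical energy sits below 750 Hz, but only a handful of handles do.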

Musicians cannot be subject to the “try this and come back in two weeks” fitting process. We need our aids to be right from the beginning, or at least 80% of the way there. The pre-programming formulas are not right for the demands of live music, and the audiologist often doesn’t have the sound gear to reproduce real-world-level music in the clinic with real-world sound samples. An important point here is that 99% of recorded music is compressed, reducing its dynamic range.

Add to that, expecting high-end computer speakers to do the fitting job doesn’t work. The in-situ and RTA tests are helpful, but, at least for me, the adjustments derived from in-situ measurement and an RTA machine have never worked above 70 dB SPL.

Then, finally, there is the communication barrier. I know it’s frustrating to an audiologist when the patient says “too loud” when what they really need is “less 500 Hz at 80 dB SPL”. So imagine what it is like when the musician patient says, “The tenor saxes in their mid-range are too blatty, and the sound there is so muddy that I can’t tell the difference between the tenors and the trumpets on stage.”

To an audio engineer, that complaint would point to a probable 750 Hz issue. But if the audiologist doesn’t share that frame of reference, then what the musician says isn’t understood, and it becomes a case of frustration on both sides.
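A rough “musician-speak to frequency region” phrasebook can bridge some of that gap. In the sketch below, the “muddy”/“blatty” regions follow the tenor sax example above; the other entries are common audio-engineering rules of thumb that I am adding as assumptions, not audiological fact.

```python
# Rough, illustrative phrasebook. "muddy"/"blatty" follow the example in the
# letter; the other entries are audio-engineering rules of thumb (assumptions).
DESCRIPTOR_TO_REGION_HZ = {
    "boomy":  (60, 200),     # excess low bass
    "muddy":  (250, 500),    # too much low-mid energy
    "blatty": (500, 1000),   # honky low-mids, e.g. the tenor sax complaint
    "harsh":  (2000, 4000),  # upper-mid glare
    "shrill": (4000, 8000),  # excess treble
}

def translate(complaint: str) -> list[str]:
    """Turn a free-text complaint into candidate frequency regions to examine."""
    hits = []
    for word, (lo, hi) in DESCRIPTOR_TO_REGION_HZ.items():
        if word in complaint.lower():
            hits.append(f"{word}: look around {lo}-{hi} Hz")
    return hits

print(translate("The tenor saxes are too blatty and the sound is muddy"))
# -> ['muddy: look around 250-500 Hz', 'blatty: look around 500-1000 Hz']
```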

So I would like to propose a wish list for musicians, to address these points:

1 – Get the initial programming as close to “right” as possible from the very first fitting.

2 – Redefine what a proper “Music” program should be, to address the demands of live music at real-world volume levels. This would have no signal processing or sound enhancers at all, just EQ-ing and the maximum outputs properly set. In my experience, I have had better success by spreading out the three EQ curves, using WDRC, and adjusting a couple of other basic settings.

3 – Add EQ bands below 500 Hz, with at least one at 125 Hz; some aids have this, but others do not. I know some hearing aid manufacturers have relocated the EQ points to reflect the demands of music, which is a great step forward. What would also help would be two sweepable, narrow-range parametric EQ points that can notch out those problematic in-between spots (see the sketch after this list).

4 – Spend more time dialing in better fidelity at louder volume levels.

5 – For hearing aids that use a cell phone app as a remote controller, add more EQ adjustment points besides just bass and treble, or possibly a sweepable parametric EQ instead. Then give the audiologist the ability to download the data from the phone for in-clinic modifications to the settings.

6 – Allow an option for the hearing aid to change programs without notification and without muting. I rely on my cell phone app to change programs, so I can see what program I’m on there.

7 – Increase the headroom and fidelity of the hearing aid’s input and output stages, both in the hardware and in the processor and its programming.
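Item 3 above asks for sweepable, narrow-range parametric EQ points. As a rough signal-processing sketch of what that means, here is a standard peaking biquad from the RBJ Audio EQ Cookbook, in Python, notching a hypothetical 750 Hz trouble spot. This is an illustration of the general technique, not any manufacturer’s implementation, and the sample rate, Q, and cut depth are made-up values.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Biquad coefficients for a sweepable peaking/notching EQ
    (RBJ Audio EQ Cookbook). Negative gain_db cuts, positive boosts."""
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    a_coef = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / a_coef[0], a_coef / a_coef[0]

# Hypothetical use: cut 6 dB from a narrow band around 750 Hz -- the
# "blatty tenor sax" region from the letter -- at an assumed 16 kHz rate.
fs = 16000
b, a = peaking_eq(fs, f0=750, gain_db=-6.0, q=4.0)

t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 750 * t) + np.sin(2 * np.pi * 250 * t)
filtered = lfilter(b, a, signal)
# The 750 Hz component comes out roughly 6 dB quieter; 250 Hz is barely touched.
```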

Many of the needs of performing musicians overlap with those of other aid wearers. While we have more refined critical listening skills and place more demands on our aids, some of the basic issues, like reducing the number of fitting sessions through more accurate initial programming, still apply to everyone who wears an aid.