Things We May Not Have Thought About (Yet) for Music and Hearing Aids

adjusting hearing aids for music
June 4, 2024

Dr. Marshall Chasin, Director of Auditory Research at the Musicians’ Clinics of Canada, explores innovative topics around music and hearing aids. In this session, originally part of the 2024 Future of Hearing Healthcare Conference, Dr. Chasin examines aspects we may not have considered yet for optimizing hearing aids for musical inputs.

Dr. Chasin provides practical tips such as disabling advanced noise management features on hearing aids for music programs, using more linear settings for compressed streamed audio, and allowing user control over equalization settings.

He also introduces novel concepts like potential frequency raising algorithms to help cochlear implant users better perceive low frequencies.

Full Episode Transcript

Welcome to my talk today on four things we may not have thought about yet for music and hearing aids. I’m going to begin with some of the things we do know and go beyond that to some of the things that may or may not show up, depending on many factors, in the future. My name is Marshall Chasin, and I’m an audiologist at the Musicians’ Clinics of Canada. As the name suggests, I do a lot of work with musicians, both hard of hearing musicians as well as people who just like to listen to music, many of whom have normal hearing and some of whom have hearing difficulty.

As a conflict of interest declaration, I should point out that I do have a book called Music and Hearing Aids. It’s not really a conflict of interest; I think it’s an area of interest declaration. It’s from Plural Publishing. It’s actually quite a good book. Well, I wrote it, but obviously it’s quite a good book. It’s written from a clinical perspective, so it has all the research and all the references, but it also has 15 to 18, I can’t quite recall, embedded audio files, so that you’ll be able to click on them and actually listen to the phenomena that I will be talking about.

This is a picture that has nothing to do with this talk. It’s from the 1880s, from a gentleman whose last name was Brown. He gave a talk in the 1880s about how he thought the ear resolved sound into various musical notes, in other words, how we hear, long before we understood the function of the cochlea and the basilar membrane. Von Bekesy was in the audience, apparently, and he approved of it. It’s completely wrong, but it’s very romantic. I’ve checked it out, and you can rub certain parts of your ear and it’s supposed to make certain sounds, different parts of the pinna at certain resonances. That’s not the case, but it’s an interesting story nevertheless.

Eight things we do know about music and hearing aids. First, we have to disable the advanced features.
I’ll be talking a little bit about that later, but of course we want to disable the feedback management system. Even today we have hearing aid technology where the feedback manager cannot discern the difference between feedback and a higher frequency harmonic, and I have had cases where flutes have been turned off by the feedback manager, which confused the harmonics of the flute for a feedback squeal. Some manufacturers have restricted their feedback manager to the higher frequencies. That has improved things somewhat, but still, it’s quite problematic. So if you can, completely disable the feedback manager. The same thing, which I’ll talk about in a second, goes for disabling the noise management, another of the advanced features.

We’ll be talking about similar frequency responses. We do know that the frequency response in a hearing aid for a music program should be similar to that of a speech-in-quiet program. This, like the next one, similar compression characteristics for a music program versus a speech-in-quiet program, is the case because both of these things, the frequency response and the compression features, are more cochlear damage issues rather than issues with the nature of the input stimulus per se. And so that’s why they would be very similar. If you’ve, let’s say, implemented or want to implement a broader frequency response for a music program, then you should go back and ensure that your speech-in-quiet program also has that broader response. And I’ll be talking a little bit about that later.

Consider a more linear setting for streamed music. We know that from the work done in Colorado by Croghan, Arehart, and Kates, where they argued quite successfully and correctly, I think, that MP3s are already compression limited once, and then maybe the smartphone that’s being used also has a compressor on it that cannot be disabled.
So what is streamed out of your smartphone may actually be a highly compressed signal, and you don’t want your hearing aid program to compress it a third or fourth or fifth time down the line. It can actually make it sound very poor. It’s amazing how much compression we can get away with before the human auditory system notices something bad going on. But if you have the availability, you can have a linear setting for streamed music so that the hearing aid doesn’t overly compress highly compressed music that’s already coming in.

Of course, we want to make sure that the analog-to-digital front end of a digital hearing aid does not distort with the higher levels associated with music. If anyone’s heard me talk about this issue before, I do maybe talk too much about it, but it’s at the very front of the hearing aid, and you must take care that the hearing aid does not distort the music coming in. Once music is distorted at the front end, the analog-to-digital stage of the hearing aid, no amount of programming or magic later on in the circuitry will improve things. So you want an input limiting level that is sufficiently high such that the louder components of loud or live music, whether you’re playing yourself or just listening, and also the peaks that are involved with music, are not clipped or distorted in any way.

You want to keep your digital delay as low as possible. Now, there are many circuits that we do use, such as noise management, that are notoriously time hungry. The way many noise management systems work is that they sample the sound at one moment; a moment later, they sample again, and they correlate the two. If there’s a high correlation between the two, then it must be speech or music. If there’s a low correlation between the two, then it must be noise.
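The two-sample correlation idea just described can be sketched in a few lines of Python. This is a toy illustration only: the frame length, the 0.5 threshold, and the zero-lag Pearson-style correlation are my arbitrary choices, not the design of any real hearing aid.

```python
import numpy as np

def classify_frames(frame_a: np.ndarray, frame_b: np.ndarray,
                    threshold: float = 0.5) -> str:
    """Toy two-frame correlation detector: high correlation between two
    frames taken a moment apart suggests a structured signal (speech or
    music); low correlation suggests noise.  Threshold is illustrative."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    # Normalized correlation at zero lag (Pearson-style).
    r = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return "speech/music" if abs(r) >= threshold else "noise"

# A periodic signal correlates strongly with itself one period later...
t = np.linspace(0, 0.02, 320, endpoint=False)   # 20 ms at 16 kHz
tone = np.sin(2 * np.pi * 500 * t)              # 500 Hz: period = 2 ms
print(classify_frames(tone, np.sin(2 * np.pi * 500 * (t + 0.002))))  # speech/music

# ...while independent noise frames do not.
rng = np.random.default_rng(0)
print(classify_frames(rng.standard_normal(320), rng.standard_normal(320)))  # noise
```

A real system would compare many frames over time, which is one reason, per the talk, that noise management can add tens of milliseconds of delay.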
But it does take at least two samples over a period of time, and many hearing aids that boast digital delays on the order of one or two milliseconds can end up at 30 or 40 milliseconds in many cases once the noise management circuit has its way.

Also a suggestion, and maybe we’re not there yet: give the musician as much control as possible, with an app or various programs, over the sound of their music. Many apps now have at least three-band equalizers that can be used. The one thing that I won’t give the musician control over is the maximum output. I want to ensure that it does not exceed their tolerance level or, God forbid, create any additional hearing loss. So I’m in favor of giving the musician as much control as possible, with the exception of the maximum output, where I want to verify that and limit that to the extent possible.

And I do want to disable frequency lowering algorithms, whether they’re called SoundRecover or Audibility Extender or other terminology in the field, except in one case. For that one case, let’s look at frequency lowering first. We have to go way, way back to the 1950s. Hallowell Davis did a wonderful study in the 1950s with a lot of his colleagues, and in fact, he was a subject himself. I met Hallowell Davis back when ASHA was in Toronto, I think in 1980 or ’81, or maybe ’82. And steel yourself: whenever you meet someone famous in the field, be prepared to have something to say. I kind of went up to him and, blatantly, my first question even before hello was, were you a subject in the 1950 experiment? And he said, oh, geez, not another groupie. And I’m sure he’s forgotten all about the restraining order he took out on me after that. But Davis did a very interesting... no, he’s not like that. He wasn’t like that. Anyway, he did a very interesting study. He took ten servicemen, or ten to twelve, I can’t remember the number, and said, I want you to sacrifice one ear for your country.
So he created what he hoped was a temporary threshold shift in one ear by blasting them with a lot of loud noise, and protected the other ear. So they had a unilateral high frequency sensorineural loss, hopefully temporary. He was one of the subjects himself; it was common back then, in the 1940s and 50s, for experimenters to use themselves as subjects. This was true in phonetics with Ladefoged as well. He was frequently a subject in his own phonetics experiments.

But what they would do, once they created this temporary hearing loss in one ear, is that the subjects were provided with two unmarked knobs. One was frequency and one was sound level, or intensity. And they were asked to match the sound in the good ear with what they perceived or heard in the bad ear. In the lower frequencies there was a good one-to-one correspondence: as the pitch went up in the good ear, the pitch went up in the bad ear as well. But when you got into the higher frequency region around 3000 or 4000 Hz, where hearing was significantly affected, as the frequency in the good ear went up, the pitch in the bad ear didn’t get higher, it got louder. Hence many people use the term ‘recruitment’: more nerves were recruited, but the pitch was not higher. In other words, the damaged ear was more and more flat relative to the good ear, and in fact the subjects heard the sound as flat relative to the good ear.

And this is a great diagnostic indicator that we can use clinically. If someone comes in, you ask them the question: does music sound flat? Or, more often than not, they may volunteer to you: my music sounds flat compared to the way I remember it. That’s an indication of what we now call a cochlear dead region. Of course, in Hallowell Davis’s time, they just called it diplacusis. The opposite study has yet to be done.
What about a low frequency sensorineural hearing loss, a reverse slope, from Ménière’s or some other condition such as endolymphatic hydrops? People would presumably hear the sound as being sharp. That’s never been published yet, but I am working with two students right now at different universities who are trying to replicate that study with a low frequency sensorineural hearing loss.

Let’s go to this slide first. Brian Moore from Cambridge University in the 1990s came out with a wonderful test called the TEN test, the Threshold Equalizing Noise test. Then he came up with an HL version that was calibrated for the audiogram, so you wouldn’t have to calibrate. It took about ten minutes for four frequencies, at least for me; some people claim they can do it in eight minutes, but I could never do it in less than ten. I don’t do it anymore. It was great: it gave us the frequency region where the damage was significant enough that we would want to avoid it, and he called these cochlear dead regions.

Nowadays I just use this, my piano keyboard. It takes 12 seconds, and it’s something that the musician can actually do before they even come to the office. They can find a piano at home, at a friend’s place, at a music store. I happen to have a clinic piano, but you can download any number of apps off the Internet and be able to play audio files for your clients. Start from an octave or two above middle C, where the damage would typically occur, and have them play adjacent notes: white note, white note, white note, black note, black note, white note, and so on, all the way up to the right hand side of the piano keyboard. And the task is to tell us whether two adjacent notes do not sound different in pitch. Say you get to a region near the right hand side of the piano keyboard, which is around 4000 Hz: if two notes don’t sound significantly different in pitch, that’s evidence of a cochlear dead region, found in seconds rather than the TEN test’s ten minutes. That’s something that the musician can do themselves.
They may not know it’s 3000 Hz, because they think in terms of A and B flat and C. But if you know that the top end of the piano keyboard is around 4100 Hz, call it 4000, then an octave below is 2000. And if, let’s say, it was four or five white notes down from the top end of the piano keyboard, you suspect it’s probably around 3000. They would say the G; we would say 3000 Hz. So we know that 3000 Hz is the beginning of a cochlear dead region that we want to avoid. This whole thing takes ten to 12 seconds, and it’s also something that the musician can do at home and bring that information with them.

If you wanted to look at the research, there are two wonderful studies that talk about whether you would have these dead regions, whether, in other words, you would need a narrower frequency response, less high frequencies, than someone else. You have to look at Hashir Aazh and Brian Moore’s study in 2007, and then a similar study by Todd Ricketts, Andrew Dittberner, and Earl Johnson at Vanderbilt in 2008. They were not looking at music, they were looking at speech, but again, this is a cochlear thing, not a nature-of-the-input-sound thing per se, so it doesn’t matter whether it’s music or speech. And they found that with a mild loss, there’s probably no chance of any cochlear dead regions, and you could have a broad bandwidth. If the hearing loss was much more than a moderate level, say a 60 decibel sensorineural loss, then usually you’re going to run into cochlear dead regions, and you would have to restrict the higher frequency amplification or transpose lower into a healthier region. And similarly, if there’s a steeply sloping audiogram, this would also be evidence of a cochlear dead region, and again, you might want to have a narrower bandwidth.
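The keyboard arithmetic above can be checked with the standard equal-temperament formula. A Python sketch; the particular notes are my choices to match the talk’s round numbers (the real top of the piano is C8 at about 4186 Hz):

```python
import math

def note_freq(semitones_from_a4: int) -> float:
    """Equal-temperament frequency: each semitone is a factor of 2**(1/12),
    referenced to A4 = 440 Hz."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

c8 = note_freq(39)   # top of the piano, the "around 4100 Hz" in the talk
g7 = note_freq(34)   # G7, a few white keys below the top
c7 = note_freq(27)   # exactly one octave below the top

print(round(c8), round(g7), round(c7))   # 4186 3136 2093
```

So a musician reporting that notes stop changing pitch around the G near the top of the keyboard is telling you, in audiologist’s terms, about roughly 3000 Hz.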
So the rule of thumb is the widest possible bandwidth for speech and music programs if it’s a mild loss; not so wide, or frequency lowering, if it’s greater than a mild loss.

If you want to look at what the industry has done, or the terminology that the industry has used for their frequency lowering algorithms: frequency transposition is a linear decrease; an example of that is the Widex Audibility Extender. Frequency translation, also known as spectral envelope warping, is what the Starkey Spectral iQ has. The first one can be thought of as a cut and paste, taking a chunk linearly and moving it to a lower frequency region. Frequency translation is more of a copy, reduce the level somewhat, and paste. Numbers one and two are actually very similar in that they’re both linear, and we’ll come back to this shortly. A third approach, sound recovery, is a nonlinear compression; Phonak, Unitron, and a few other manufacturers use this nonlinear approach. When it comes to speech, any of these can work and be very useful. When it comes to music, either don’t do it, or use numbers one and two, the linear decreases, which we’ll be talking about very shortly.

The reason why frequency lowering works lies in what is being lowered: the high frequency, broadband signals, the s’s, the sh’s, the sibilants, the affricates, the fricatives. These have broad bands of energy. /s/, for example, is energy beginning at around 4000 Hz, but it would still sound like an /s/ if that was lowered to 3600 Hz and above. So it’s moving entire bands of noise a little bit lower. If it’s lowered too far, of course, it might be confused with the /sh/, which has energy starting at about 2500 Hz. And there’s some data that too much can be problematic for speech, especially in the higher frequencies, but a little bit can be very, very useful. What is not frequency lowered are the harmonics. In speech, if you look at the vowels and the nasals, they’re made up of harmonics, multiples of a fundamental frequency.
My fundamental frequency is 125 Hz, so I have harmonics at 125 Hz, 250 Hz, and so on, multiples of my fundamental, and these are not altered at all. Music, however, is a little bit different from speech: music is like the lower frequency part of speech, in that it has harmonics in the lower frequencies like speech does, but it also has harmonics at higher pitches. So if you decrease a harmonic that, let’s say, is at 3000 Hz down to 2900 Hz, it would sound odd, because pitch in music is so important and it’s defined by the relationship of the harmonics. If you squish them down, you can run into certain problems.

This is an experiment I did where I took just one half of one semitone of decrease, and only for those harmonics above 1500 Hz. The black is the original signal, this is a violin, I believe, and you can see in the higher frequencies on the right hand side it’s being decreased by just one half of one semitone. Almost nothing. So let’s listen to that, an ABA file, and see how it sounds: A being non-transposed, B being one half of one semitone, and A again being the original. And now we’re going to look at the same thing with the full music score.

But it’s okay for speech, because what is being transposed? Only the sibilants, the broadband sounds. Finally I can curl up with my book and escape to the roaming wilds of Nottingham. Finally I can curl up with my book and escape to the roaming wilds of Nottingham. Finally I can curl up with my book and escape to the roaming wilds of Nottingham. So there’s no significant decrease or negative aspect for speech.

So what do you do for music? Well, we generally just gradually roll off the high end. This is an example of a six decibel per octave roll-off above 1000 Hz. You still have the higher frequency elements there, but reduced. So yes, 3000 Hz at a high level may in fact be problematic and may cause distortion in the cochlea, but 3000 Hz at 50 decibels may not cause it. At a lower level, you can still get away with it.
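A six decibel per octave roll-off of that sort is just first-order low-pass behavior. A small sketch of how much each octave loses (Python; the 1000 Hz corner matches the example in the talk, but the exact filter shape in any given hearing aid is my simplification):

```python
import math

def rolloff_db(f_hz: float, corner_hz: float = 1000.0) -> float:
    """Gain in dB of a first-order low-pass roll-off: attenuation
    approaches 6 dB for every doubling of frequency above the corner."""
    return -10 * math.log10(1 + (f_hz / corner_hz) ** 2)

for f in (1000, 2000, 4000, 8000):
    print(f, round(rolloff_db(f), 1))
# 1000 -> -3.0 dB, 2000 -> -7.0, 4000 -> -12.3, 8000 -> -18.1:
# each octave above the corner loses roughly another 6 dB.
```

The point of the talk survives the arithmetic: the high harmonics are still present, just quiet enough that the damaged cochlea can tolerate them.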
And again, if you listen to it, it really doesn’t appreciably change. Again, the ABA format.

But there is one example that is an exception. It’s the island of refuge: what if it was exactly one octave? So if all frequencies above 1500 Hz were dropped exactly linearly, not logarithmically or otherwise, by exactly one octave, then all the harmonics line up with pre-existing harmonics. But then you have the creation of a few other harmonics that were not there in the original signal, and it could be something like a third or a fifth. Maybe a third or a fifth is not what the composer had in mind, but it still sounds pretty good.

So this is an example. Let me see now. This is a study I just did with Dave Fabry from Starkey and Francis Kuk from Widex. It came out in the January issue of Hearing Review, and we talk about how you can do that as long as it’s a linear system. This is an example where the black is the original, but the orange is the one-octave transposed version. As you can see along the left hand side, many of the harmonics line up with pre-existing harmonics, and then you have the creation of those intervening orange harmonics, which in this case would be a perfect fifth. So it’s an A, but it creates an E, and an A and an E together sound pretty good. Not what the composer had in mind, but it works pretty well. And this is what it sounds like.

Okay, this would be an example of a quarter wavelength instrument, such as a clarinet or trumpet or French horn, with odd-numbered multiples of the fundamental. Again, you get other created harmonics: you’d have the A, but also a C, and an A and a C together, a third, is again not what the composer had in mind, but it still sounds pretty good. But not for speech. If you do play it for speech, it does sound all muffled. Finally I can curl up with my book and escape to the roaming wilds of Nottingham.
Finally I can curl up with my book and escape to the roaming wilds of Nottingham. Finally I can curl up with my book and escape to the roaming wilds of Nottingham. So that was a case where too much frequency transposition, linear or otherwise, is just so detrimental for speech.

That brings us to the second topic, and this gets us into the future: frequency raising algorithms, as opposed to frequency lowering algorithms. To my knowledge, there has not been a technology or circuit that is a frequency raising algorithm, but I’m going to suggest that it’s possibly time for that, for two populations. One is people with reverse slope audiograms, where you have a low frequency cochlear dead region, low frequency diplacusis or distortion, and you want to shift that sound up into a healthier, mid or higher frequency region of the cochlea. Also, cochlear implants have significant difficulty transducing the lower frequency sounds and the rhythm, and many people feel that if someone is bimodally fit, that is, a cochlear implant in one ear and a hearing aid in the other ear, it’s better than two cochlear implants, because the hearing aid gives them something the cochlear implant does not.

So if the frequency raising algorithm is linear and, again, is exactly one octave, this should be good for both speech and music. Again, you’ll have certain of the harmonics lining up, and sometimes you can have the creation of things that are not there in the original signal, such as, again, a perfect fifth: a C also creates a G. So this is in the bass clef, and as you can see, it creates this new orange component. And if you look at it on the left hand side, this is more of a copy and paste thing where the arrow is. This new component is harmonic, quite non-dissonant, and sounds pretty good. So this is a cello, again in an ABA format. Okay. And again, this is for full orchestration, not just a single line of music.
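The harmonic bookkeeping behind that exact-one-octave "island of refuge" can be worked out in a few lines. A toy Python demonstration, shown here for the lowering case; the 220 Hz fundamental and the harmonic count are arbitrary choices of mine:

```python
# Why an exactly-one-octave linear shift is forgiving: shifted harmonics
# either land on partials already present, or create consonant intervals
# (a 3:2 frequency ratio is a perfect fifth).

f0 = 220.0                                   # an A, for illustration
original = {n * f0 for n in range(1, 9)}     # a string: all harmonics of f0
lowered  = {f / 2 for f in original}         # every partial down one octave

new_partials = sorted(lowered - original)    # components not in the original
print(new_partials)                          # [110.0, 330.0, 550.0, 770.0]

# 330 Hz against 220 Hz is a 3:2 ratio, the perfect fifth the talk
# mentions (an A plus an E):
print(330.0 / 220.0)                         # 1.5
```

The same arithmetic holds whether the exact-octave shift is downward, as in the hearing aid demos, or upward, as in the proposed frequency raising algorithm: aligned partials plus a few consonant newcomers.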
So it sounds pretty good. And it takes all the sounds that would either be missed by the processing of the cochlear implant, or that may be highly distorted by a low frequency sensorineural hearing loss, and shifts them up into a healthier region of the cochlea. So that’s something to look forward to, I think, in our industry.

The third point is that I want us all to rethink 1500 Hz. For those who know me, it’s my favorite frequency. I know it sounds nerdy to have a favorite frequency, but let’s talk about what 1500 Hz means to people. Well, one way we can get at this is by using oversized receivers for hearing aids. Now, the receiver’s power and size generate the gain and the output of the hearing aid, and of course we use the larger receivers to provide this gain and output for more significant hearing losses. We did a study where we used oversized receivers for people with milder, mild up to moderate level, hearing losses. We of course had to compensate, reducing the gain and the output by about six decibels when we went from the standard to the power receiver. But they thought that when we did that, it sounded so much better.

The receiver size and also the port opening, the opening of the nozzle, define the resonances. This is what we all learned in school as Helmholtz resonances, named after von Helmholtz. We do know that larger receivers, having a larger volume, have lower frequency resonances than smaller ones. And if you look at the nozzle port, we know that the narrower or thinner the port, the lower the resonant frequencies move. We don’t have to go through the Helmholtz equations, although they’re not that difficult; we learned this in speech acoustics in school. We do know, for example, looking at this slide of the formant structure, the first two formants of the vowels, ranging from /i/ on the far left to /u/ on the right.
And we can go from the /i/ right back to the /u/. Think of this as time along the x axis; the y axis is frequency. So these are actually the locations of the first and second resonant frequencies of the vocal tract; of course, we call them formants. But if you look at the one on the far left, /i/, and the one on the far right, /u/, each has the lowest frequency first formant. That’s no surprise: it just means that whenever you have the tongue close to the roof of the mouth, there’s a very narrow opening there, and that little narrow opening creates a very low frequency formant. So if we put an insert into the nozzle of a normal receiver, it would actually lower all the resonances. I’m not suggesting we do that, because it would roll off the higher frequencies as well, but you could.

Well, why am I even talking about that? Many of my musician patients, and others as well, do feel that when the resonances are lowered such that there’s one in the 1500 Hz region, it sounds warmer and more pleasant. And that’s the beauty of 1500 Hz. We also get that when we cup our hands behind the ear: when we do that, we’re actually increasing the sound around 1500 Hz by about ten to twelve decibels. This is from Uchanski and Sarli, who did some interesting work with a probe tube microphone system and found, in this case, about a twelve decibel mid frequency enhancement when you cup your ears. Of course, the pinna itself creates a pinna effect which continues on into the higher frequencies, but this artificial cupping just gives you back that 1500 Hz.
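To see what a mid frequency enhancement like that means numerically, here is a sketch of a bell-shaped boost centered at 1500 Hz, on the order of the "four or five decibels" suggested later in the talk. Python; the bell shape, the one-octave width, and the exact gain are illustrative choices of mine, not any manufacturer's or researcher's filter:

```python
import math

def boost_db(f_hz: float, center_hz: float = 1500.0,
             gain_db: float = 5.0, width_octaves: float = 1.0) -> float:
    """Bell-shaped boost on a log-frequency axis: full gain at the
    center, tapering off as you move away in octaves."""
    octaves_away = math.log2(f_hz / center_hz)
    return gain_db * math.exp(-(octaves_away / width_octaves) ** 2)

for f in (750, 1500, 3000, 6000):
    print(f, round(boost_db(f), 1))
# 750 -> 1.8 dB, 1500 -> 5.0, 3000 -> 1.8, 6000 -> 0.1:
# a gentle bump around 1500 Hz, symmetric in octaves, as intended.
```

The design point is that the boost is localized: an octave away it has mostly died out, so the overall frequency response is barely disturbed.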
Also, cupping your hand around a microphone. We’ve all seen pop singers, usually women, who, when they grab the microphone, don’t grab it by the handle as we’re taught to do in school, but grab around the cone at the top of the microphone, and that also increases the output at around 1500 Hz. If you look on the right hand side, the middle one, that cupping of the hand gives a 1500 Hz enhancement. They may claim it sounds better, but I think it’s more that it looks better on video that they’re almost kissing the microphone. I’m sure the sound engineer could just boost the gain by about five or six decibels at 1500 Hz to give them the warmth they may think they’re getting, so there’s no need to do this. But it does look good. Some people are suggesting training mics, where you could prevent people’s hands from cupping over the ball, the cone, of the microphone. But cupping does give you more boost at 1500 Hz.

Also, old style media: the cassettes that were used initially in the sixties, but certainly well into the eighties, the old analog cassettes. Both the Stones and the Beatles would only want to use one type of tape back then for all their music, and some of the tapes gave you a bit of a boost around 1500 Hz. Let’s look at the three that were available in the 1960s, seventies, and early eighties. Ampex 456 was a very commonly used one, and it had a bump in the lower frequency range. Scotch brand 250 had a bump in the higher frequency range. But the Agfa 467 had an increase around 1500 Hz, and this was the only one that the Beatles and the Stones would allow their music to be played on, or mass produced and distributed on, in the cassette era. This came out of a biography of Keith Richards by Bockris; there are several biographies of him, and that’s where I got that information. And he was quite adamant about that.
I’m not sure that Keith Richards knew it was 1500 Hz, but he probably knew it just sounded better. He didn’t know why, perhaps, but we know, and I think it’s due to 1500 Hz. So if you have patients who are happy with their hearing aids for both speech and music, as an experiment, boost up the gain and the output around 1500 Hz, not a lot, maybe four or five decibels, and see what they say. Some may not notice much of a difference, some may not like it, but I think most of them will like it, especially for the music program.

Four: using one’s own hearing aids as in-ear monitors. Now, there are two things that in-ear monitors cannot do yet. One is that they cannot provide a level dependent output like hearing aids can, where we have, let’s say, a lot of gain provided for soft level inputs, less gain for middle level inputs, and almost no gain or very little gain for loud, high level inputs. That level dependent nature of hearing aids has made them so useful in trying to address some of the issues with the damaged cochlea. In-ear monitors currently do not do this. They are linear, or straight, amplifiers. They can be connected to other things on the rack that could change the frequency response, equalizers, for example, but they are not level dependent.

The other thing brings us to the second point: one’s own hearing aids can provide a frequency response that has been optimized for the person’s hearing loss, with all the little nooks and crannies, and we can verify this with real ear measurement in the person’s ear. With an in-ear monitor, if you’ve ever dropped a probe microphone into the ear, they’re anything but flat. They can be made flat, and you can work with them to make them flat, but it’s quite rare for that to happen off the shelf. It’s like a first fit.
You have to do a little bit of playing with it to get it just right. So that’s the second thing that in-ear monitors cannot do. So can we use our own hearing aids in place of in-ear monitors for hard of hearing musicians? That’s this topic here. There’s an article at the bottom here, Chasin and Morris. Steve Morris and I wrote an article that came out in a 2024 issue of Hearing Review, which was a cookbook on how you do it. The original article that got me thinking about this was by Lesimple in 2020. At that time he worked for Bernafon, and now I believe he works for Sonova Corporation. And they suggested: why can’t you use a transmitting Bluetooth accessory, like a TV listening device, hook it up to the output of your audio rack in a live performance, and as long as the musician is within about 30 feet, 10 meters, which is the limit of Bluetooth today, they should be able to hear their own music effectively and play through their own amplification, with their own level dependent input and their own frequency response?

When I did that with some patients, it didn’t sound very good. In some cases it sounded a little bit quiet; in other cases it sounded distorted. And so I started to scratch my head as to why. Then I talked to some of the audio people, and they said, well, why don’t we just put a preamp between the output stage and the Bluetooth accessory? A preamp is just a volume control. For some people it did improve things: it made it sufficiently loud, and they said, oh, that sounds pretty good. Others were finding that it still sounded a little bit distorted, so that wasn’t the total solution. And it turns out the solution was that we want to ensure that balanced goes to balanced and unbalanced goes to unbalanced. Well, we don’t normally deal with issues like balanced and unbalanced in audiology.
But if you’ve ever taken a sound recording class, maybe as part of your curriculum, an acoustics or phonetics course in school: certain microphones, like the old dynamic microphones that didn’t require a battery, could have a cable that was maybe 2 meters long, or maybe 4 meters, 10 or 20 feet, and that was pretty fine. If it got beyond 15 to 20 feet of cabling, the sound from the microphone degraded significantly, and that was an unbalanced microphone. In contrast, the more modern type, the capacitor, or what used to be called the condenser microphone, has a power source in it and is a balanced system. We could have a cable that is 1 foot long, 500 feet long, or 5000 feet long; it wouldn’t really matter, the sound would not degrade. That was a balanced system.

So that’s how we learned about balanced and unbalanced, and we could have a whole session on the difference between the two. But trust me that some things that come out of an audio rack are balanced and some things are not. And if we just subscribe to the principle that we want to match them, make sure that balanced goes to balanced and unbalanced goes to unbalanced, we can actually solve this problem. That’s what I deal with in the Chasin and Morris article in Hearing Review. This is an example where I take the sound coming out of the rack and put it into the thing on the top left. Yes, it’s a preamp, but a preamp is about 80 or 90 dollars. This device, which I pick up for about the same, is sometimes called a direct injection box, but it’s a preamp and a balanced-to-unbalanced converter. It automatically takes care of the matching, and it was also 80 or 90 dollars. So I have one of these for 80 or 90 dollars. Unfortunately, I have to make sure that there’s an adapter coming out of the rack, you know, female to female XLR adapters, that will go into this preamp system. On one side it has the input; the other side goes to the accessory that you see there.
And through Bluetooth it transmits to the performer. By the way, you’d have to have what’s called an XLR adapter. For lack of any other reason to talk about XLR, it’s one of my favorite topics: why are those cables called XLRs? Those are the cables that you see in the top left with the three little pins in them, as opposed to the little RCA jacks. Cannon was the first manufacturer to make them, and on their Cannon X series, hence the X, they came out with a three-pronged output stage, which was excellent for a balanced system like a capacitor or condenser microphone. They found, though, that as the cord was pulled it often came detached. So they needed a latch, a little clip on the side of it, hence the L for latch. And then sometimes, when those three little pins were plugged in, they would bend over, so they recessed the holes in hard rubber so that the pins would go into a certain slot. We’ve seen them before, and that’s the R for rubber. So XLR just means Cannon X series, Latch, Rubber. But we all know about XLR cables. So from the rack, the sound goes through the XLR into my little adapter with a little volume control to act like a preamp. And then the other end goes to any TV listening system, which directs it to the individual listener, and they can route it directly to their hearing aids. So with this system, one accessory and one adapter, which is maybe less than $100, you can actually use your own hearing aids up on stage. And we’ve done this successfully on a number of occasions. Now, this is the most famous violin in the world. This is known as the Red Violin. There was a movie in the 1990s with Samuel L. Jackson about it. The idea behind this Red Mendelssohn violin is that while Stradivari was making it, his wife, unfortunately, passed away.
He was so distraught, the story goes, that he used some of his wife’s blood in the varnish of the violin. That’s been disproven; they’ve done genetic testing and there’s no evidence of any blood in it. But that was the story, and you still hear it today. This is the current owner of the Red Violin, Elizabeth Pitcairn. She happened to be giving a summer camp up in the Adirondacks, in upstate New York, where my wife has a cottage. And so my wife said, do you want to go and see the Red Violin? I thought she meant maybe a local theater group was doing a reenactment of the movie with Samuel L. Jackson about the Red Violin. But sure enough, when I got there, it was the Red Violin. And at the end of the performance, I was able to bully my way onto the stage. They had this twelve-year-old kid acting as security, but I could take him down, no problem. I have a black belt in karate. I’m actually only certified to take down five-year-olds, because the eight-year-olds typically come after you. But I was able to get very close to her. I had to keep my hands in my pockets and have my picture taken; I was not allowed to touch it. And I’m also sure that Elizabeth has forgotten all about that restraining order she took out on me as well, unfortunately. This is some contact information for me. Marshall dot Chasin will get directly to me, as will the info address. The website is kind of neat: it has a lot of good publications. If you click on it and scroll over to the publications, it will have sections on hearing loss prevention for musicians, on hearing aids and music, another section on acoustics, and a final section of humorous articles I’ve written over the years that talk about technical principles. For example, what if Humphrey Bogart in The Maltese Falcon, which is a 1940s film noir, was actually an audiologist, and somebody had come in complaining about a missing C sharp?
And in fact, it’s called The Case of the Missing C Sharp. Anyway, if you scroll over a little bit more to the demo section and then scroll down, all the audio files that you heard today can be found in the demos section. The frequency-raising audio files are not yet in there, but they should be shortly. Thank you.

Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week, and follow This Week in Hearing on LinkedIn and Twitter.

Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.

About the Presenter 

Marshall Chasin, AuD, MSc, is Director of Audiology and Research at the Musicians’ Clinics of Canada, Adjunct Professor at the University of Toronto (in Linguistics), and Associate Professor in the School of Communication Sciences and Disorders at Western University. Dr. Chasin holds a BSc in Mathematics and Linguistics from the University of Toronto, an MSc in Audiology and Speech Sciences from the University of British Columbia, and his AuD from the Arizona School of Health Sciences. He is the author of over 200 articles and 8 books, including Musicians and the Prevention of Hearing Loss. He is one of the founding editors of Hearing Health & Technology Matters and also writes a monthly column in Hearing Review called Back to Basics. Marshall has been the recipient of many awards over the years, including the 2004 Audiology Foundation of America Professional Leadership Award, the 2012 Queen Elizabeth II Silver Jubilee Award, the 2013 Jos Millar Shield Award from the British Society of Audiology, and the 2017 Canada 150 Medal. He has developed a TTS app called the Temporary Hearing Loss Test app.


