At the recent Consumer Electronics Show (CES) in Las Vegas, companies from around the world showcased their latest technologies. Dave Kemp hosts Andrew Bellavia to discuss his firsthand experience at CES, with a focus on innovations in hearing and hearable devices. Andy sheds light on the overarching technological advancements that are shaping the future of hearing devices.
During his time at CES, Andy had insightful discussions with various companies. He spoke with EssilorLuxottica about their groundbreaking hearing aid glasses and visited Absolute Audio Labs to explore their latest developments and partnership with Renesas. Additionally, Andy talked with the teams from Knowles and AVAtronics, and learned about JLab’s plans for entering the over-the-counter (OTC) market in 2024.
Dave and Andy analyze the implications of these new developments and discuss the latest trends in the hearing health and audio industries. They highlight key takeaways from CES, offering valuable insights into the future of audio technology and its impact on hearing health.
Full Episode Transcript
All right, everybody, and welcome to another episode of This Week in Hearing, joined today by my good buddy, Mr. Andy Bellavia. Andy, how you doing today? Doing pretty well, thanks. Despite the fact that I came away from CES with COVID, I’m fully recovered and ready to go. Right, that’s kind of what happens when you put 150,000 people in the same proximity. You’re bound to walk away with something. Unfortunately, it seems. Unfortunately, it’s true. Everybody used to joke about CES flu. Well, now there’s a few more things to worry about. So as you just alluded to there, I wanted to have you on kind of right on the heels of CES to hear about this year’s show. I know you were there and kind of bore witness to the whole thing, and so I just wanted to get a sense from you of what your big takeaways were, what you saw that was very interesting, noteworthy. So I’ll pass it over to you and let you take it away. Okay, thanks, Dave. And I appreciate being on to share my experiences at CES, really focusing on hearing and hearable devices. First off, it’s impossible to see everything at that show. Really, what I had to do was pick and choose to try and get a sampling of the different innovations that are going to affect what hearing devices and hearables look like in the near future. In terms of actual hearing devices, there were some companies there showing the typical familiar products. There was one notable exception, which I’ll share, but generally speaking, it was the underlying technologies like chips and software, and the ecosystem partnerships forming, that were really interesting. The one exception in hearing devices that was unusual was EssilorLuxottica. They made a big splash introducing the hearing aid glasses that they had announced earlier. As many people watching this episode probably know, you and I both saw the original prototypes in Milan in the summertime, and I’ve had a relationship with them since.
So I had to go over there and check it out for sure. Their chief audiologist and head of marketing shared their approach with me on their hearing device, and I recorded a short conversation I can share with you now. With me is Tami Harel, chief of audiology, and Davide D’Alena, head of marketing for Nuance Hearing at EssilorLuxottica. Thank you for joining me today. Thank you, Andy. Thank you to you, Andrew. I’d like to understand first, because you’re not a hearing company, how did this project come about? What was the initial impetus for developing hearing aids in any form, let alone in glasses form? Well, this has been coming for a long time. There has been a vision for a few years now of the opportunity to converge the vision and hearing industries, to better serve the consumer. There is a strong overlap in the needs of consumers, especially after the age of 50, who have hearing impairment and need vision correction. So we do believe there is a strong opportunity to better serve customers in need. We are targeting people with mild to moderate hearing loss. We know that in the world there are 1.6 billion people with a hearing impairment. Out of this 1.6 billion, which is the current picture, there are 400 million who have moderate to severe or profound hearing loss, and they are the actual target of traditional hearing aids. There are 1.2 billion people with mild to moderate hearing loss who have no solution today, and this is our main target. Those people are not approaching the category for a number of reasons: stigma, price, comfort. And we do think we have a solution to solve this problem for those consumers. So characterize the kind of consumer with hearing loss who would choose glasses over earbuds or behind-the-ear style, traditional form factors.
Well, as an audiologist, it is very difficult to have someone come into the clinic with mild hearing loss and go out with a set of hearing aids, for the reason Davide is describing: mostly stigma. We tackle both stigma and comfort, because we have a completely different form factor of hearing aid. Because this is the hearing aid. As you see, nothing goes in the ears. It’s completely invisible. And because of a specialized technology called beamforming, which is able to capture the sound in front of you, we also answer the need for better function in noisy situations. We, at Nuance Hearing, started by developing an array of microphones, a table microphone, for example, that is able to deliver this directional function. When EssilorLuxottica reached out, we essentially took this array of microphones and embedded it in glasses to create the best directional experience that you can have. And so, clearly, speech in noise was one target use case for these glasses. And I know from my trial in July that it worked exceptionally well, even though my hearing loss is outside the range that the glasses are meant to serve. What other use cases are you aiming at? Well, of course, speech in noise is the main use case. The main problem a lot of people with mild to moderate hearing loss have is the cocktail party effect. But there are other use cases, like the work environment. Sometimes in a work environment, even when it’s not so noisy, you cannot really understand every word your colleague says in a meeting. Another use case is, for example, when you are speaking in a language that is not your own, and so you need to focus to get exactly the words and the meaning of the other people. And I must also say that you don’t have to have hearing loss. You can be someone with normal hearing who has challenges in noisy situations.
So the glasses provide the best solution for that, because you can use them seamlessly. Whenever you feel that you need some help, or you notice the effort you have to put in, you can relax and let the glasses do their best to improve the signal-to-noise ratio for you. Well, you two made a couple of really important points, I think. First off, hidden hearing loss: the National Acoustic Laboratories identified 25 million people in the United States alone who don’t measure as having hearing loss, but who struggle to hear, typically in noise. And so these glasses could serve them better than hearing aids because of the directional microphones. And you brought up the work environment. I’ve talked about this before, in the run-up to me getting my own. When I started in the hearing industry a dozen years ago, I didn’t realize that six years after that I would be a customer. But what I realized was, I was going to China all the time and talking to a lot of people who were speaking with different accents and various levels of English proficiency, and I was struggling to understand them. And I would do, say, a two-week trip in China, and I would be dead tired. The first time I went to China after wearing my hearing aids, I had twice the energy. I think that’s a critical point for people who are on the fence about getting a hearing device. I think you got exactly the point, Andrew. Our aim is to reduce listening fatigue. So potentially there is application also for normal-hearing people. When you have an intense day of work and then you have social life at night in a busy restaurant, you want some superpower. And I think these kinds of glasses are giving you some superpower. That’s our mission: to improve quality of life, bringing better hearing to millions of people in the world. I think that video is really a good encapsulation of a lot of the key features and points of that device.
I know when you and I were in Milan, we were playing around with the prototype, and, kind of as Davide said, you don’t necessarily have to have hearing loss, or at least hearing loss that registers on an audiogram. I don’t. And to the point that you both were making around listening fatigue, I think that’s a very compelling point, because that’s how I felt. I was wearing that for about an hour at dinner. You kind of forget that you’re wearing it, and then you take it off and it becomes really noticeable. And I know that was kind of the feedback that everybody had. Abram said something really similar, which is, I think, a really interesting point: how much are our brains working and straining on a daily basis? And I think that’s a very interesting sort of point of value for these kinds of devices. Absolutely. And one size won’t fit all, and the glasses form factor is going to be terrific for a lot of people. I think they’re going to reach a whole new audience of people who are uncomfortable with earbuds and won’t put hearing devices in their ears. But it’s also the concept of it, because there are earbud designs approaching the same sort of thing, like the Sennheiser Conversation Clear Plus, which is not a hearing aid, but is meant to reduce listening fatigue in social situations. And so I really like that there is a developing range of solutions, so that wherever a person is at, whatever style of device they want to wear, there will be a device for them that will improve their lifestyle, regardless of their hearing loss level. Everything from these sorts of situational devices up to prescription hearing aids, whatever it is, is necessary to help them lead a better lifestyle. And the glasses form factor, I think, is a key part of that, because it’s different from all the others. It’s going to reach more people than just the others.
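As an aside for readers curious how the beamforming described in the interview actually works: a microphone array can be steered toward a talker in front of the wearer with classic delay-and-sum processing. The sketch below is mine, not EssilorLuxottica’s implementation; function names, the frequency-domain fractional-delay approach, and all parameters are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_angle, fs, c=343.0):
    """Steer a linear mic array toward look_angle via delay-and-sum.

    signals:       (n_mics, n_samples) time-domain mic recordings.
    mic_positions: (n_mics,) mic positions along the array axis, in meters.
    look_angle:    steering angle in radians (0 = broadside / straight ahead).
    fs:            sample rate in Hz; c is the speed of sound in m/s.
    """
    n_mics, n_samples = signals.shape
    # Relative arrival-time differences for a plane wave from look_angle.
    delays = mic_positions * np.sin(look_angle) / c              # seconds
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)               # Hz
    spectra = np.fft.rfft(signals, axis=1)
    # Fractional delays become per-frequency phase shifts.
    phase = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    aligned = spectra * phase
    # Averaging: the steered-direction signal adds coherently,
    # while off-axis and diffuse noise adds incoherently and is attenuated.
    return np.fft.irfft(aligned.mean(axis=0), n=n_samples)
```

With a look angle of zero, speech arriving from straight ahead is passed through while diffuse noise is reduced; real products layer per-frequency weighting and adaptive processing on top of this basic idea.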
I think the other interesting point, too, is that EssilorLuxottica is so dominant in the eye care space and has so many different brands. And so when they introduce this new product category, I think the thought is that whichever of those brands you want to choose from, initially, I’m sure they’re going to have a limited set of options. But down the line, the idea would be that if you want Ray-Bans or Persols or any of that family, this is an added option. You can choose these amplification frames in the same way that you can choose a very specific kind of prescription lens. And I think that’s a very important thing to point out, because we in the hearing healthcare industry have sort of all lived in the purview of these five major manufacturers. And now you have another giant company that has the capacity to really usher in cultural change. You and I have talked about this at length: hearing aids have a lot of stigma and baggage associated with them, and we’ve talked about how that serves as a detractor for adoption. But as we’ve pointed out before, I’m not sure if replacing that with an earbud is really going to be culturally acceptable. I don’t know if people are going to be willing to draw attention to themselves by wearing earbuds to solve that speech-in-noise problem, the loud cocktail party effect. And so I think it’s such a clever approach, where you take something that is as ubiquitous and as innocuous as glasses have now become, and you layer this new functionality onto it. Again, it’s not to say that this is a replacement for hearing aids or anything like that. As we’ve pointed out before, this is kind of the name of the game in this whole mild-to-moderate category, whether it’s how OTCs are going to impact things.
The point is, how do you get more people to engage with this system earlier, and not wait until it progresses to the point where it gets so bad that you’ve crossed into a medical, prescription-grade problem that requires a medical-grade solution? I think that’s what’s really exciting about this for me: it’s just another avenue for maybe roping some people into getting that first experience with it. And again, as those of us who have used it can attest, it is a very visceral thing to use it and then feel it taken away. It’s that effect of, how much is my brain working and straining unbeknownst to me, and how taxing is that on me? Yeah, absolutely. And you said a lot of really interesting things there, particularly around the stigma associated with wearing earbuds in social settings. I’ve been following JLab because they are a very youth-oriented brand, selling good-performing, cost-effective earbuds of various kinds. A year ago, they announced that they would be introducing over-the-counter hearing aids. Their first model would be a preset device with a target price of $99, which was supposed to come out towards the end of last year, and it didn’t. So I was more than a little interested to go visit JLab and see what they were up to. And their hearing device was actually there. And to their credit, they explained that it didn’t meet their performance criteria, so they held it back to work on it some more. We should see that device mid-2024 now. Because they are a company that’s getting earbuds in people’s ears at a young age, since that’s their target audience, they’ve been building a complete set of hearing solutions and reaching young people with them. For example, they have passive hearing protection. Win Cramer, the CEO of JLab, was on the show earlier, after the CES announcement last year.
One of the things he said was that all new products are going to have a listening-safe mode. Well, indeed, it was true. Everything new they were introducing had the listening-safe mode to prevent you from going to excessive sound levels, which I thought was really important, because you’re also reaching the young people who are their demographic with the hearing conservation message, plus the passive hearing protection. They also had kid-safe earphones. So I’m really looking forward to seeing these things come out, and I’m looking forward to trying their first over-the-counter hearing aid and getting that experience, because that’s another way to reach young people who’ve been using JLab earbuds and may very well consider wearing one. It was a very small, comfortable OTC hearing aid design from JLab. Though I do have to admit, one of the most fun things I saw when I was there was their new flagship, the Epic Lab Edition. That was fun to see because it was a project I worked on with Knowles. They had adopted the Knowles listening curve, which I thought was really interesting, that they adopted it and publicly branded it, one of the things I worked on. But I also like it because that listening curve is also geared towards people with hearing loss. It’s one of those things at the fusion between hearable devices and hearing devices, like music personalization. And I’ll say a little bit more about the listening curve and where that’s going later on. But it was a lot of fun to see that earbud in action and released, not least because it also had Auracast. So we’re seeing more and more brands coming with LE Audio and Auracast, and I think that train is now leaving the station; I’ll share some more about that. But really, I think the thing that was most intriguing about CES this year was what was going on with the DSP chipmakers and the people in the ecosystem surrounding them.
Unfortunately, a lot of what I learned was on a confidential basis, so I can’t share everything. But there were a lot of people there doing interesting things. Some of them are startups that audience members will be familiar with, like Greenwaves, Femtosense, whom I had interviewed last year, and another company called Aon Devices. They’re all targeting machine learning applications in audio. Plus there were the big established players like Cynthian, Analog Devices, and Renesas; people who design earbuds and other audio devices will be familiar with them. They’re the bigger players in the industry, and they’re all doing really interesting things, especially the latter two. I saw some things in their suites that are worth mentioning, from them and their ecosystem partners. Analog Devices, for example, recently announced a partnership with Mimi. The Mimi hearing system is in earbuds and televisions and other devices; you can take a hearing test and get a custom Mimi profile to make devices sound better given whatever state your hearing is in. They announced a partnership for Mimi’s personalization system to run natively on Analog Devices’ chips, so Mimi could run natively on earbuds and headphones using those chips. They actually had one of Mimi’s developers, Dr. Nuchavella, there representing and explaining the collaboration between the two. This is a big deal that’s going to get hearing personalization into a lot more devices. They also had a really interesting display in their suite showing their pure voice system. And this is where you see all the advanced techniques we’ve talked about before: beamforming, a voice vibration sensor, and noise suppression, geared towards delivering good call quality in challenging situations, and also making using a voice assistant with your earbud more seamless because the voice quality is better. But they’re also applying some of these acoustic techniques to the incoming audio as well.
So you start to see the incoming sound quality getting better and better, because it’s becoming possible to do more and more sophisticated techniques acoustically. And I’ll say something about the voice vibration sensor too, because that has some really interesting applications in hearing as well. But while we’re on the subject of chips, I’ll touch on Renesas. They made a really interesting announcement with Absolute Audio Labs. Absolute Audio Labs is a software company that developed what they call the soft hearing aid; in other words, it’s hearing aid software that will run on today’s consumer chips. Now, most of today’s consumer chips don’t go all day, so they can run their hearing aid software on, for example, Qualcomm’s chips and create a hearing aid with a consumer DSP, but it’s a situational device. Well, Renesas has the stated goal of creating fully capable chips that will run fully capable hearing aid software with all-day battery life. And they formed a partnership with Absolute Audio Labs so that AAL’s hearing aid software will run on Renesas’s chips. So how long have we talked about this fusion? Right, right. You used the term, I don’t remember where you got it from, the dividends of the smartphone wars, and how consumer DSPs are advancing by leaps and bounds. We’ve been talking about this for years. Well, now we’re actually at the point where all-day chips will be able to run full hearing aid software. And that’s really going to change the industry in important ways. Again, my takeaway, almost every single time I talk to you after you go to one of these CES events or whatever kind of consumer trade show it is, is this continual progression. The way I think of it is, you pop the hood of the device, and all of the pieces underneath are in this renaissance period right now, undergoing these transformations and this upgrade period.
And it seems like that upgrade period is almost complete. A lot of what we’ve discussed in the past, the DSPs and the consumer-grade technology, is becoming equipped to handle the more sophisticated feature sets that will usher in entirely new use cases for consumer-grade devices, like AI algorithms able to do more sophisticated levels of processing, parsing out different sounds from one another and isolating them. So it all equates to this ongoing progression that feels as if it’s been going on for five to seven years. And now it sure seems like we’re right on the precipice; I would guess maybe next CES, the 2025 CES, is where you’re going to really start to see a lot of these products come to market. I’m sure there will be some that are ahead and will kind of leapfrog the others, but that’s the general trend here. It seems like so much of this is moving in the direction where consumer-grade earbuds, $100 devices, are going to be capable of performing the types of applications that previously $1,000 devices were limited to. And the reason is that the components inside them, from the DSPs to the processors and the systems on chip, have advanced to the point where, in that Moore’s law way, everything just continues to get more powerful, but also smaller and more cost effective. We seem to be at the point where everything is going to start to bear fruit, and you’re going to start to see $100 pairs of earbuds that are capable of things that weren’t really fathomable five or seven years ago. We’ve been in this holding pattern waiting for this all to upgrade, and now that it’s almost here, it raises the question of how this is going to come to market, and what that is going to look like.
And again, I know you were privy to some information that’s confidential, but what’s your sense of what this period will look like between now and next year’s CES, in terms of everything starting to come online, more or less? Yeah, that’s really right. I would say another year is going to tell a lot. What I saw, what people were demonstrating in terms of the really sophisticated speech-in-noise separation techniques, was not quite there yet. But underneath it all, I’ll use an example: a company called Cadence. Cadence provides the building blocks for advanced audio DSPs, and they have a series called the HiFi series, with different capabilities for different applications. Well, they recently announced a new line of neural network processors that companies can incorporate into their chips. Now, it’s going to take a while, but at some point you’re going to start to see earbud DSPs, meant for in-ear use, that have these neural processors embedded, and that should start to really open things up. In the meantime, there are some pretty sophisticated noise reduction algorithms being developed that can run on the chips that are available today. I actually went and visited one of the companies working on this, AVAtronics; it’s another one I have a relationship with. They had a demo system there, and they did a really good job explaining what they’re doing today and why it’s important, because it’s available now, versus waiting for these neural network chips to get more sophisticated. So I’m going to play that short conversation with them. This is Finn Møller of AVAtronics, demonstrating their new ANC system and speech enhancement. For what applications? It’s a speech enhancement for OTC hearing aids. We’re in the process of making a reference design where we rely on our technology within active noise canceling and make a system that combats some of the hard problems in enhancing hearing in OTC earbuds and OTC hearing aids.
And so in this demo, then, you have a combination of your wideband ANC and speech enhancement. Tell me about the speech enhancement technique. Yes. The idea is that we take the state of the art in active noise canceling and combine that with AI-based speech enhancement; that is, we use active noise canceling to remove the noise in an earbud or headphone first, and then add in a speech-enhanced version of the desired signal. That combination of technologies, I believe, produces a very neat way of combating the problem of speech intelligibility in a noisy environment like a cocktail party. And yes, this is a great place to demonstrate it. Right. So what method of speech enhancement are you using? Well, we prepared a small setup here that outlines the principle, where we are applying a traditional mask-based approach to speech enhancement. Mask-based speech enhancement is really a method where you make a piano-key kind of equalizer-based system that notches down the frequency bins of the sound spectrum where the signal-to-noise ratio is believed to be bad, while leaving the ones that are believed to be good intact. And this is very dynamic. Correct. So for different voices, the masks will lay themselves out differently, so that you’re still filtering the noise and leaving the speech. That’s what the machine learning takes care of. It learns the principles of what the human voice sounds like. Trained on a large amount of speech data, it knows what speech sounds like compared to what noise sounds like, if it has a sufficient signal-to-noise ratio to work with. Okay, and there are people doing this with actual inline noise extraction and separation techniques. Why are you using the mask-based technique instead of that?
Well, this approach is something that you can actually implement on devices you can buy today. It’s implementable, for instance, on the Cadence HiFi 5 or on other processors on the market today. So we can implement this and go to market with an approach based on something like this, without having to wait for machine learning to mature and become low enough in processing power consumption to be built into a wearable that sits on the ear of a person. The whole demo is running here on this board. The active noise canceling runs on this one here, and it combines the noise-canceled signal picked up by the demo here with the speech-enhanced signal being fed through a PC here. It doesn’t, of course, mimic exactly how the system will work in the end, but it shows the combination, the marriage of speech enhancement with active noise canceling, and what that can lead to when you take the state of the art of both these technologies and put them together. And even though it’s running on a PC, this is the algorithm that you could port to a low-power device. Exactly. Well, let’s give it a try. Yeah, let’s do that. So this is now as silent, with these devices here, as you can probably get with AVAtronics active noise cancelling. So this is with the system off. For those of you who are watching this, I’m really struggling to hear Finn talk right now, which some would argue is a good thing, but there it is. So if I then start up the system, and let you wear a microphone that picks up my voice signal, and feed that through machine learning that removes the noisy pieces and leaves the voice unharmed in the signal. So let’s turn that on now. This is, of course, a single omnidirectional mic. It actually worked pretty well. Right. And if you do a little beamforming, like my hearing aids are doing right now? Right. That will, of course, be a part of the reference design as well, obviously, and picking all the low-hanging fruit in the traditional way is, of course, obvious.
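For readers who want to see the mask-based idea Finn describes in concrete form, here is a minimal sketch: a short-time Fourier transform, a Wiener-style ratio mask that attenuates bins with poor estimated SNR, and overlap-add resynthesis. In a real system like AVAtronics’, a trained network predicts the mask from the noisy audio alone; in this sketch, a mask computed from a known noise power estimate stands in for the network, and all names and parameter values are my own illustrative assumptions.

```python
import numpy as np

def stft(x, win=512, hop=256):
    """Windowed short-time Fourier transform, one row per frame."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames])

def istft(spectra, win=512, hop=256):
    """Weighted overlap-add resynthesis matching stft() above."""
    w = np.hanning(win)
    out = np.zeros(hop * (len(spectra) - 1) + win)
    norm = np.zeros_like(out)
    for i, spec in enumerate(spectra):
        out[i * hop:i * hop + win] += np.fft.irfft(spec, n=win) * w
        norm[i * hop:i * hop + win] += w ** 2
    return out / np.maximum(norm, 1e-12)

def mask_enhance(noisy, noise_power, win=512, hop=256):
    """Attenuate time-frequency bins where the estimated SNR is poor.

    noise_power: per-bin noise power estimate (assumed known here; a
    trained model would infer the mask from the noisy signal itself).
    """
    spectra = stft(noisy, win, hop)
    power = np.abs(spectra) ** 2
    snr = np.maximum(power - noise_power, 0.0) / (noise_power + 1e-12)
    mask = snr / (snr + 1.0)          # Wiener-style ratio mask in [0, 1]
    return istft(spectra * mask, win, hop)
```

Running a tone buried in white noise through `mask_enhance` with the noise’s average spectrum illustrates the idea: bins carrying the tone keep a mask near 1, while noise-only bins are pushed toward 0.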
Well, and I think the key here really is: I had my hearing aids out, so I was getting no hearing correction whatsoever. But because the SNR was improved so much, I could still understand you perfectly when using this microphone and the speech enhancement system. Whereas if I take these out in this noisy environment, it’s a lot more dicey, right? For me, I look at this and again it’s echoing what I just said, which is that all of this is coming together. It reminds me a lot of the conversations we had with Giles back when he was with Chatable AI, and this whole notion of, you know, what if hypothetically you had a system that was so fast that it was able to capture the audio and then parse it in real time, filter out the background noise and leave the speech, identify the two, and do it so fast, with so little latency, that you didn’t really notice it. It sounds sci-fi, but I think that, like you said, he was ahead of his time. It seems like that’s where a lot of this is moving toward. And to your point in the video, suddenly it’s like, do you then necessarily need amplification, or correction if you will, in that scenario, when what you really need in that very specific situation is to turn down the ambient noise and hone in on the speech? Yeah, that’s right. You can go a long way with SNR improvement. Now, to be fair, for somebody with the hearing loss level I have, it won’t be enough. It was fine with his voice, but with a higher-pitched voice, not so much. I mean, in a quiet room, if my spouse turns around and is facing the other way and she says something, I have a hard time understanding her.
So amplification is necessary in cases like mine, but for many people, it will be enough situationally to just have SNR improvement through these sorts of techniques, like EssilorLuxottica is doing or like AVAtronics is doing and so on, without necessarily having much or any amplification at all. Very much so. And what I really like is that it’s a range of solutions, for people ranging from somebody who has trouble with speech in noise but no hearing loss at all, all the way up to people with profound hearing loss, who are going to start to see these same techniques working on speech in noise for them as well. You know, the way I’ve thought about the OTC market for a while now is that what we have right now in this world of hearing care, in my opinion, is two very distinct markets. You have the prescription Rx market, which is a mature market. And I think that by and large it does a really good job: the devices, the distribution model, the manufacturers, everything in concert does a pretty good job of treating that problem at scale. I think there could be improvements, but that is the market that’s always existed, and the prescription market will always exist. But everything you’ve talked about today is about this new nascent market, the OTC mild-to-moderate market. And I think that when we look at the macro numbers and see them get lumped in with the prescription market, personally, I think it’s a little bit misleading, because it’s sort of a false equivalency to say that OTC hearing aids are more or less adjacent to prescription hearing aids. In reality, they’re really tackling two very different markets. And so I think there have been some missteps by the manufacturers getting into that space, in terms of how they’ve priced them and how they’ve positioned them.
And I think that what we’re seeing, because this is such a nascent, emerging market, is a lot of different lines of attack into this market, and we’re still at the early stages. Whether you’re looking at it from the lens of devices that maybe aren’t intended to be amplification devices, but whose ability to parse out the SNR is so sophisticated that they operate like a hearing aid in some capacity, or glasses that circumvent the whole stigma conundrum so that suddenly you have people willing to take the plunge. So again, I just think that what we’re seeing with the emergence of the OTC market really is about how you go after all of the systematic obstacles that are in the way, whether it’s stigma or price or access, all of these different things. Rather than just taking a watered-down hearing aid and saying, here you go, here’s the solution. I don’t think that’s ever going to work. And I think that what we’re seeing, as you’re a testament to, is a lot of really creative, out-of-the-box thinking about approaches that might be more palatable for the masses to embrace, in a way that I don’t think they ever will if it’s just an OTC hearing aid, to be perfectly frank. Yeah, absolutely. And I think there were more than a few of us who were disappointed, and wrote or spoke about their disappointment, in the first year of OTC hearing aids, where we didn’t really see the kind of innovation that the legislation was supposed to spur. This year, we’re starting to see the creative ideas coming. And I think when we revisit the second year of OTC, we’re going to have a completely different feeling.
And I mentioned the ecosystem before, and I think that's part of it, because there are a lot of people building these links that are creating the ecosystem necessary for people to innovate. And I'm going to use as an example my ex-employer Knowles, because they're kind of at the center of it all, and they've been making ecosystem partnerships with a variety of companies in order to facilitate that innovation. I went and had a short conversation with them about that, about the role of the ecosystem, and I'm going to play that now. This is Kalyan Nadella. He's the manager of receiver development at Knowles. All right, thanks, Andy. Please tell us more about the preferred listening curve, including its role in giving the best sound quality for people with hearing loss. Got it. So traditionally, TWS manufacturers have been using the Harman curve for most of their design development. But back in the day, the frequency range for that study was limited to 8 kHz, based on the limitation of the 711 coupler that was traditionally used. What we have done is to see if there is a preference for a listening curve that goes beyond 8 kHz, all the way up to 16 kHz. We went ahead with blind studies with around 100 volunteers to see if there is a preference for having a select boost in the high-frequency region. Once we went ahead with the blind tests, we were able to come up with the Knowles curve. The Knowles curve basically tells us the preferred listening curve for users that gives them the best listening experience. Very good. And how about the role of the listening curve with hearing-impaired people? Definitely. Actually, you bring up a very good point. During our study, we also found that if you classify the volunteers based on age, as age progresses we see a slight deterioration in listening capability, and people of advanced age actually preferred a higher boost in the high-frequency range.
And that is actually one of the key findings that was published in a journal article of the AES, and the article is available for download for anyone who is interested. Okay, terrific. And you have products now actually tuned to the Knowles curve and using a Knowles tweeter, correct? That's absolutely right. The latest product on the market is our JLab right here, which actually has the Knowles preferred curve as one of its EQ options. You may or may not have seen my visit to JLab before this, so you may have heard this twice at this point. Terrific. Thank you. The next part of the ecosystem that we've been talking about is our collaboration with various ODMs. What we realized is that some of the new customers who want to get into this TWS marketplace are actually hesitant, or they don't have enough resources to allocate to come up with new designs. We have various options in terms of form factors, in terms of the chipsets being used, in terms of the feature sets that are embedded into different reference designs. And one special note: we also have a few ODM products working on OTC hearing aids. This particular device right here has a single BA, which is a full-range BA, and which can provide amplification of about 40 dB. Back in 2019, once the Apple AirPods Pro were introduced, ANC became the industry norm. Every product that came to the marketplace had to have ANC to be considered a decent product. We believe that personalization and customization will be the next big thing coming to the TWS marketplace. Personalization, like an end user feeling a product belongs to them and is dedicated to them, brings value to the product. So we have worked with various partners listed here: Audiodo, Mimi, and Sonarworks. The main takeaway is that most of these partners actually look at your listening curve. They go through tests in order to figure out if you have any degradation in listening performance across the whole frequency bandwidth.
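The actual Knowles curve is published in their AES article; as a generic illustration of how a preferred listening curve gets applied, here is a sketch that imposes a high-frequency boost via an FFT-domain gain curve. The anchor-point values below are placeholders loosely mimicking the kind of boost older listeners preferred, not the real Knowles curve:

```python
import numpy as np

def apply_listening_curve(x, fs, freqs_hz, gains_db):
    """Apply a target listening curve as a zero-phase FFT-domain EQ.

    `freqs_hz`/`gains_db` define the curve at a few anchor points;
    gains at other frequencies are linearly interpolated.
    """
    spectrum = np.fft.rfft(x)
    bins = np.fft.rfftfreq(len(x), d=1.0 / fs)
    gain = 10.0 ** (np.interp(bins, freqs_hz, gains_db) / 20.0)
    return np.fft.irfft(spectrum * gain, n=len(x))

# Placeholder curve: flat up to 8 kHz, rising to +6 dB at 16 kHz.
curve_f = [0.0, 8000.0, 16000.0]
curve_db = [0.0, 0.0, 6.0]
```

With this placeholder curve, a 12 kHz tone comes out about 3 dB hotter while a 1 kHz tone is untouched, which is the shape of the effect described in the interview.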
And they try to compensate for that in order to bring back the original way that you used to listen to music when you were a kid. I really like this concept, because when you do hearing personalization, it's a non-threatening way to introduce people to thinking about their hearing and where their hearing is at, and at the same time to get a better listening experience. That's right. That's a very good way to put it. It's like going away from one-size-fits-all to personalized hearing. Terrific. Thank you. Appreciate your time. Thanks, Andy. You're welcome. I mean, I look at this, and again, for me, my first reaction here is that in order for this space to really ever take off, you're going to need the companies that sell the picks and axes, if this is a gold rush. You need the infrastructure players, you need the people that are enabling these other companies to layer their applications on top. And so I think it's going to be a group effort. For people that are watching this, ultimately what I'm gathering here is that what they're really doing is helping to enable a lot of other companies to be participants in this space. And some of these are more niche, smaller companies that are coming at it with one singular piece of innovation, if you will, and kind of enabling that, more or less. And so I just think that this will be one of those things that's sort of an invisible area of innovation for many. But it is that enablement layer that so much of what comes next will be layered on top of, so it's so necessary. And again, it speaks to what we've talked about, about how we're in this upgrade period right now. And so these are the kinds of incremental things that need to happen in order for the tangible applications to manifest. Yeah, absolutely. It's the development of the interlinked ecosystem that's going to enable innovation.
And of course, one of the other things I like about the way Knowles is doing things is a lot of emphasis on music quality. Every hearing-impaired person in the world will tell you that listening to music through their hearing devices is not a great experience. Right. I mean, there are people who spend their lives trying to make it better, people like Dr. Marshall Chasin, for example. And one of the things I like about some of the new developments, and I'll go back to Absolute Audio Labs here, is that they focus on music quality. This development of the software hearing aid is pretty sophisticated. When I tried the demo (this was their gen-two system; the gen-three system that they announced they're collaborating with Renesas on wasn't available for demo there), the gen-two system already had really good vocal quality in a noisy environment. But it was the music which intrigued me the most. I think it presages the time when all hearing aids are going to be better with music and not focus solely on the voice, because for people at younger ages coming into the need for hearing devices now, music has always been an important part of their lives. So I'm going to let them share a little bit about their music focus as part of the software hearing aid. I'm here with Aernout Arends of Absolute Audio Labs. They have announced their partnership with Renesas, and they're demonstrating their PYOUR Audio system for music-enhanced hearing aids. Please tell us what you're demonstrating today. What we're demonstrating today is basically building hearing aids using standard audio SoCs. So you're using the chips that you can find in regular TWS earbuds, and we add the speech intelligibility to that. The great benefit you get: you get standard connectivity, you get great music, great sound, great audio. And the speech intelligibility can be brought on par with the best hearing aids in the world. So this is really like a breakthrough in the hearing aid market.
Yeah. And this is really seminal, because we've been watching the convergence of TWS chips and hearing devices getting closer and closer to each other. And now, through your partnership with Renesas, you're actually in a place where you can create a standard-chip-based hearing aid with all-day battery life, correct? Absolutely. The great thing about the partnership with Renesas is that the line of audio chips they will be launching is so energy efficient that, for the first time, it will take a SoC-based hearing aid entirely through the day without a single charge. So that really opens up that market. Terrific. And then these devices will have both first-in-class audio enhancement and music quality. Oh, absolutely. It is really the best of both worlds. You get an audio experience that's incomparable to anything on the hearing aid market out there. You get connectivity, standard Android, iOS, no problem, just your normal connectivity. You get all kinds of benefits that can enhance your audio experience, such as 3D audio. It can integrate Alexa or other voice assistants, everything, because it's a standard audio chip. And on top of that, of course, it's all within the frame of good speech understanding, with all the algorithms that are needed to build a premium hearing aid. Okay, excellent. So let's do the demo. All right, we'll do the demo. We have here a setup with two artificial heads. The top one contains an Oticon More hearing aid with a best-in-class music experience; they won an award for their MyMusic feature. The other contains our SoC-based hearing aid. It's still the old platform, PYOUR Audio 2.0; PYOUR Audio 3.0 will be launching this year. This one is based on a Qualcomm chip. So we'll be streaming music from the phone directly to the hearing aids, and you can have a listen to the hearing aids using the headphones here. You can switch between the Oticons and the AAL prototype devices, and I will start the music streaming right now.
Ready? So what's the bandwidth of your device? The chip can handle 20 to 20,000 Hz. Currently the bandwidth is somewhat limited by the speaker, because it's a BA; that one ends around 11 kHz, and the low end starts around 60 or 70 Hz. It does actually produce output below that, but it's so low that you can barely hear it. And it's really a shame, because I'm wearing my hearing aids, which top out at about 7 kHz. And I can tell you, as a person who loves his music, I am very much waiting for devices that go a little higher. There is life after 7 kHz, actually. The music quality is very good with this. Yeah. What you really experience is the difference between taking a chip built for speech and trying to add music, and taking a chip built for music and adding speech. That's really the difference. And so really your value, then, is to incorporate both worlds, so that you have good music listening enjoyment and speech enhancement. Yes. I don't think there's a reason why the hearing impaired should be deprived of good audio. Thank you. Looking forward to seeing PYOUR 3.0 when it comes out, and everything you've accomplished, and the fruits of the relationship with Renesas as well. Yeah, thank you very much. We're also very much looking forward to that. It's a development that's ongoing, and we're sure it's not ended yet. It will take us much further. Well, thanks for spending some time with me today. Thank you very much. So, in your opinion: as he says, now we have the ability to have all-day wear. He's alluding to this breakthrough in battery life. What has really been the crux of this hub of innovation that's permitting all of this? I mean, I know we've talked about this to some extent, but is it a culmination of a bunch of different things, or is there a root to what's going on here? Is it the DSPs? Is it the systems on a chip? Or is it all of the above? Yeah, it is the chip development.
These are the collateral benefits of the smartphone wars. The chips are getting increasingly fast, increasingly sophisticated, and consuming less power while they're doing it. And that allows people like AAL to do a lot of really interesting things. The typical reason why hearing aids have only gone to 7 kHz or so is processor power. If you widen out the bandwidth, the chips have to run faster, and they consume more power. And a traditional hearing aid was always focused on speech intelligibility, putting all the necessary resources into improving that. When you go to beamforming microphones, it costs you chip power; it consumes more power to run beamforming microphones than not. The audio processing that goes on for a hearing-impaired person consumes power, so music quality was one of the things that would go out the door: we'll limit it to 7 kHz because that's enough for speech, and we'll try to get as much done as possible. Well, now the processors have advanced so much that you can have your cake and eat it too. You can have high bandwidth, you can have full earbud functionality, you can have standard Bluetooth connectivity, and you can have full hearing aid functions within a device that lasts all day, because the consumer chips have been advancing by leaps and bounds. So, in essence, when we say a rising tide lifts all ships, the rising tide here would literally be the advancements around the chip architectures and the chips themselves, in terms of what they're able to take on: more capable, more powerful, smaller. Right. And I think this is a very important point, which is that it's not just the consumer side, and as it relates to this conversation the OTC market, if you will, that's going to benefit. This is the prescription market too, right?
Because they're not going to have to make some of the trade-offs that they've historically been faced with, where at the end of the day they've usually opted for: we want to make sure these things do well with people's voices and can be worn for extended periods of time. As that becomes less of a trade-off, you can start to add in more of the things that used to be an either-or, where you had to decide what you were going to go with. Yeah, exactly. So the prescription hearing aid companies have had to do very customized chips to get it all done with all-day battery life, and now the consumer chips are capable of doing that. It's going to create some really interesting scenarios, because you could actually take a hearing device made with the Renesas chip and AAL software, and you could have either an earbud, an over-the-counter hearing device, or a prescription hearing device all in one unit. So imagine you get an over-the-counter hearing device made with these two partners, and you use it for a while, but your hearing loss gets worse over time. Imagine actually walking into an audiologist's office, and under professional care they open up the prescription capability of the device and then give you custom tuning, as an audiologist will do. And you could conceivably even do that by teleaudiology. So I own an earbud which is an OTC earbud, and then later a professional, either in person or remotely, gives me a custom fitting according to prescription principles. The device hasn't changed at all. That's a sort of interesting pathway that we could see coming in the not-too-distant future. That's really interesting and very exciting, because again, it used to be so binary: you had to have this or that, you had to have this trade-off or that trade-off. Sure, I can provide you with a piece of amplification that's going to really help you out in all of these ambient situations.
But frankly, one of the trade-offs is going to be that the music quality streamed through there is not going to be great. And I just think that's a really exciting prospect for the future: fewer and fewer of these trade-offs, whether it's the manufacturer having to make them, or ultimately the patient or consumer having to make those decisions based on what the device is capable of. Yeah, absolutely. So I think it's really a bright future, because there are so many avenues of innovation opening up to meet people where they are, at all levels of hearing loss and all levels of auditory function more broadly. Yeah, I have one more video, and I'm going to go back to Knowles, because the voice vibration sensor is another innovation which I think is making life really interesting in the hearing world. One of the primary things you have to wrestle with is how you hear your own voice. There's some pretty sophisticated work that goes on in order to make that comfortable for a person, especially with a sealed hearing aid. The voice vibration sensor makes that easier to deal with and provides a lot of other benefits. And so I'm going to share that, because it's relatively new. There have been voice vibration sensors before, but now that technology is also advancing, and you can see the applications for making hearing devices better. I'm here in my old stomping grounds at Knowles with Nikolai. He's in applications engineering, and Knowles recently released a voice vibration sensor with a myriad of applications, including in hearing health. Nikolai is going to share it with us. Yes. Hi, Andy, and thanks for the opportunity. Here I have a prototype of an earbud; you can think of it as a TWS device or a hearing aid device. And inside this prototype I have a microphone and a voice vibration sensor integrated. And one of the common problems in hearing aid devices is self-voice feedback, or self-voice echo.
Conventional microphones integrated in the hearing aid pick up the sounds around us, including the user's own voice, and when that is played back automatically into the ear, it can be disturbing and annoying for the user. So how can we find a data source that allows us to really separate the user's speech from all the rest of the sounds in the environment? The voice vibration sensor comes in really handy when integrated in the device. When I talk, it will vibrate only from the impact of my own voice and not from the sounds around me. In order to do a little demonstration, we're going to play a small game. I'm going to start a recording, and I will be reciting the alphabet. I'll ask Andy to count to ten, and we'll compare the outputs of the microphone and the vibration sensor and draw some conclusions from there. All right? You ready, Andy? Ready. Okay. A. One. C. Four. B. Good. Thank you. So what you see on the screen is microphone data on the left and vibration sensor data on the right-hand side. If I play it, you can hear that the microphone is picking up both my voice and Andy's voice. This is what it sounds like (Andy's mic should pick it up): A-B-C, two, F-G-H-I-J-K-L. Apologies for a lot of interference at the end, but you get the point that the microphone picks up both of us. However, I'm now going to play back the vibration sensor output. Due to the rubber tip piece here on the earbud, vibration pickup is limited to low and mid-range frequencies, so I'm limiting the playback to 5 kHz. But listen to the vibration sensor's raw output: A-B-C-D-E-F-G-H-I-J-K. You only hear the alphabet, not the numbers Andy was saying. So in conclusion, I can say that the vibration sensor provides very valuable data. If you see a signal on the vibration sensor, it means the user is talking.
So you could choose to shut down the hearing aid feedback loop to prevent a self-echo, or implement a cancellation algorithm subtracting the vibration data from the microphone data. This way, only the useful surrounding sound information will be present and played back to the user's ear. So you can actually control the amount of own-voice feedback, right? Because with a hearing aid you want a little bit of own-voice feedback, but you can actually control that, because you have the two inputs. Yes. What I'm demonstrating here is the capability of the sensor itself, and there is a lot of room for creativity in the DSP for the best user experience. And in related application areas, you can use it for wind noise reduction too, correct? Correct. Wind noise reduction, and a clear phone call experience in a loud environment, when you only want access to the user's voice and you want to cancel out everything around you. Okay, terrific. And when will we start to see this in products? So the sensor was launched earlier this year. It's in mass production. We have a lot of interest in the market. Multiple brands are evaluating it across different industries: consumer, automotive, and others. So how about we meet at CES 2025, and we'll show you some of the products with V2S inside. Hopefully. Looking forward to hearing it and seeing you there. Thank you. Thank you, Andy. Thank you. Right. How long has this own-voice problem plagued hearing aid wearers? And so it's like, well, how do you solve it? And now we're finally getting to the point where you have this really sophisticated set of sensors and innovative solutions being integrated and layered into these products, capable of making the devices more intelligent so that they're able to get to the root of what you're trying to solve. In this example, it's being able to signal to the device what is your voice and what is an external sound source.
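The gating idea Nikolai describes, shutting down or ducking the output while the sensor shows the wearer talking, can be sketched very simply. The frame size, energy threshold, and duck depth below are made-up values for illustration, not parameters of the Knowles part:

```python
import numpy as np

def own_voice_gate(mic, sensor, frame=160, energy_thresh=0.01, duck_db=-12.0):
    """Duck the processed output while the wearer is talking.

    `mic` carries the wearer's voice plus ambient sound; `sensor` is
    the vibration sensor, which responds only to the wearer's own
    voice. Frames where sensor energy exceeds `energy_thresh` are
    attenuated by `duck_db`, so the wearer hears less amplified
    own-voice. A real system would cross-fade or subtract rather than
    hard-switch per frame.
    """
    duck = 10.0 ** (duck_db / 20.0)
    out = np.asarray(mic, dtype=float).copy()
    talking = []
    for i in range(len(mic) // frame):
        seg = sensor[i * frame:(i + 1) * frame]
        active = bool(np.mean(np.square(seg)) > energy_thresh)
        talking.append(active)
        if active:
            out[i * frame:(i + 1) * frame] *= duck
    return out, talking
```

With a synthetic ambient signal and a sensor burst in the middle frames, only the frames where the sensor is active get attenuated, mirroring the alphabet-versus-numbers demo.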
And so I just think that, again, if you extrapolate out a few more years of this technology continuing to percolate, and as he mentioned, there are going to be a lot of creative DSP use cases for this. So it's that base layer of the foundation for innovative applications to be built upon. These are the picks and axes for this gold rush. And I just think this will usher in some really exciting things. It's one really specific use case, but as time goes on, this seems to be how a lot of this stuff is going to get solved: in essence, giving the device more mechanisms, its own little brain, so that it can autonomously operate in the fashion you're trying to program it to. And you're just adding more capabilities and senses, more or less, for it to make those determinations on its own. I didn't even share the half of it. We could have talked for another hour about all the things I saw with regard to Auracast and different sensors in devices. I've probably used up enough time already. I'm with you. And again, I think what's exciting is that I don't think we've really seen the consumer market yet. Put aside the notion of OTC and think of it more as the consumer market that's going to cater to people who have milder losses. Right. But as we've talked about before, these aren't necessarily single-purpose solutions attacking that. It's more like these are consumer applications that cater to the insatiable demand for audio, podcasting, and streaming. And for in-ear devices generally, the demand is up and to the right, and it doesn't seem to be stopping. And so I think what we're seeing are all of these next-generation ways in which those devices are going to advance.
And I think that what's exciting is that the real byproduct of that will be that these devices are going to be multipurpose, catering in some fashion to the various challenges that a hearing-impaired person faces. But because they're not just solutions for your hearing loss, it's like an added bonus on top of everything else. I think that is going to really appeal to the masses, because it's not perceived as a medical device; it's not perceived as something that signals your body is wearing down, or that you're getting older, or anything like that. These are lifestyle products that cater to wherever you are on that hearing spectrum. And as you mentioned, with it being about enhancing the fidelity and the music quality and all that, there's so much opportunity for education here. The majority of people think of hearing loss like a knob, right? A volume knob, up and down. But this lends itself to the opportunity to educate people and say, no, it's more that you have a spectrum of frequencies, and your ability to process sounds at some of those frequencies may have deteriorated. You do an amazing job of really distilling down some of the key points and making it easy to understand what's really going on, in a very tangible and understandable way. So thank you. Oh, well, thanks for that, Dave. I appreciate it. Well, thanks to everybody who tuned in here to the end. We will chat with you next time. Cheers. Bye bye. Thanks, everyone.
Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week, and follow This Week in Hearing on LinkedIn and Twitter.
Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.
About the Panel
Andrew Bellavia is the Founder of AuraFuturity. He has experience in international sales, marketing, product management, and general management. Audio has been both an abiding interest and a market he served professionally in these roles. Andrew has been deeply embedded in the hearables space since the beginning and is recognized as a thought leader in the convergence of hearables and hearing health. He has been a strong advocate for hearing care innovation and accessibility, work made more personal when he faced his own hearing loss and sought treatment. All these skills and experiences are brought to bear at AuraFuturity, providing go-to-market, branding, and content services to the dynamic and growing hearables and hearing health spaces.
Dave Kemp is the Director of Business Development & Marketing at Oaktree Products and the Founder & Editor of Future Ear. In 2017, Dave launched his blog, FutureEar.co, where he writes about what’s happening at the intersection of voice technology, wearables and hearing healthcare. In 2019, Dave started the Future Ear Radio podcast, where he and his guests discuss emerging technology pertaining to hearing aids and consumer hearables. He has been published in the Harvard Business Review, co-authored the book, “Voice Technology in Healthcare,” writes frequently for the prominent voice technology website, Voicebot.ai, and has been featured on NPR’s Marketplace.