This week, host Andrew Bellavia is joined by Dave Fabry, the Chief Innovation Officer of Starkey.
In this episode, the pair discuss the development of the newly launched Genesis AI hearing aids, a process said to have taken over five years. The Genesis AI boasts an entirely new chip, compression algorithm, and fitting formula, a new app called My Starkey, and new professional fitting software. Overall, the Genesis platform is intended to provide a comprehensive and personalized hearing experience for both the patient and the professional.
[Andy] Hello, everyone, and welcome to This Week in Hearing. For this episode, we have Dave Fabry. He's the Chief Innovation Officer at Starkey. He holds a master's degree in audiology and speech-language pathology and a PhD in hearing science. We're going to get under the hood a little bit on the new Genesis AI hearing aid. But before we do, Dave, anything else you can add by way of introduction?
[Dave] No, that was thorough. I like to say I have three degrees below zero, all from the University of Minnesota, even though I'm a native Wisconsinite and lifelong Packers fan. There's a lot of buzz about the Packers these days with our quarterback. But my professional career as an audiologist has been divided equally, really, between academic and clinical appointments. I worked at Walter Reed Army Medical Center after finishing my PhD, and then returned to Mayo Clinic in the sunny, southeastern, tropical portion of Minnesota, where I had done some of my training. After Walter Reed, I was invited back to join the staff at Mayo, and I remained there for a total of about 15 years. Then I did a couple of stints in industry prior to my role at Starkey, which began around 2009 and where I've been in a variety of roles. As you said, my current role as Chief Innovation Officer really allows me to ensure that the voice of the patient is felt in the development of new products. I have the privilege of still working: I'm licensed in the states of Minnesota and Florida and also in the country of Rwanda. That's a longer story for a different time, but I do still see patients on a regular basis as part of my role, so that, in the vernacular of keeping my saw sharp, I ensure that I'm staying fresh with our technology, the clinical tools, and the voice of the patient.
[Andy] Terrific. Well, you really bring a broad background to Starkey. I'm sure they're really glad to have you. Now, if this were a sports podcast, I would ask how the Packers ended up being the farm squad for the Jets, but we won't go there.

[Dave] Hey, be nice.

[Andy] Yes. We'll see who is going to get the better end of that deal, presuming that it does take place. That's for another podcast, too. Just like I'd love to hear more about your experience in Rwanda.

[Dave] Indeed.

[Andy] But for today, I'd like to start by understanding why now was the right time to develop a completely new platform from the ground up. And since you chose this time to do it, what were the key goals in doing so?
[Dave] Sure. And when you say "now," that would have to be an understatement, in the sense that we really began this journey five or six years ago, when Achin Bhowmik joined Starkey in around 2017 as our Chief Technology Officer. He came from Intel, where he was running the perceptual computing division. And Brandon Sawalich, in his role as president of the company and now CEO, along with Bill Austin, the founder of the company, really recognized the opportunity to rethink the way people consider hearing loss and hearing aids, and to try to reimagine and reinvent hearing aids: from a single-purpose device that, importantly, amplifies speech and other sounds to audibility, meeting patient needs, to a multipurpose, multifunction device. In 2018, we were the first hearing aid manufacturer to put sensors in hearing instruments, and that really began to look at health and wellness as another role, considering comorbidities with hearing loss, and then also using an intelligent assistant. That's similar to the way the rest of us are using consumer earbuds, but it really helps lessen the stigma associated with the use of hearing devices. And as we've seen the transition from the traditionalist generation into the baby boom generation as the primary target for our market, this journey that began five or six years ago is really the culmination of a lot of hard work and dedication on the part of, I would say, every single person in this organization, and specifically the 400 to 500 folks that we've had working in our R&D group over the last five years developing this feature set. We like to say around here, it's all new, everything. As you said, we have new programming software for the professional called Pro Fit.
We have a new user application called My Starkey, and then new form factors for the receiver-in-canal devices, the most popular on the market: a new receiver assembly that now uses ten wires out to the receiver, and a new Snap Fit 2.0. I'm biased because I work for Starkey, but I think we arguably had the best snap-fit receiver assembly in the past, because it didn't require any pins, which, with my aging eyes, I'd otherwise have to pop out to secure those receivers. But I think we've improved upon that even more with the new Snap Fit 2.0 receiver assembly. In addition, underneath the hood there is a new chip that really sets us up for the next several years in terms of computing capabilities, and we can talk a little bit about that. And then there's a new compression algorithm and a new fitting formula. So there really was no stone left unturned as we developed this technology with the patient and the professional in mind.

[Andy] Well, there's a lot to unpack there, and why don't we go through them one at a time? Given that this was a five-year design process, it's interesting that you've got the deep neural network running on board the chip, because if you go back to five years ago, when you must have been conceiving this, that was pretty rarefied air at the time. So it must have been a very iterative design process as the silicon caught up with your ambitions.

[Dave] Indeed.

[Andy] How did the final layout end up? I'm assuming, and tell me if I'm wrong, you're still not doing neural processing in the audio stream, but you're using the neural processor to classify sounds. Is that correct?

[Dave] Yeah, we're using the deep neural network right now. We are using it on board; it's an onboard DNN accelerator.
And we are using it in our Edge Mode application for moderately noisy and noisy environments, actually allowing the DNN to do its thing with Edge Mode. For those who are uninitiated, maybe I should back up and talk a little bit about Edge Mode, which we first introduced a couple of years ago on Edge AI and then improved with Evolv AI. With Genesis, we've made it essentially one button; we say it puts the power of artificial intelligence at the patient's fingertips. We've used machine learning classification in hearing aids for a decade or more. Back in the dark ages, when I first started fitting hearing aids and we wanted to equip them with directional microphones, noise management, and feedback cancellation, we would have to have separate programs for the types of environments (quiet, noisy, musical, et cetera) that a patient would manually switch between. So we've had environmental classification in devices using machine learning, where we know how to characterize speech, noise, and music via a number of attributes: timing, intensity, and pitch differences. Automated environmental classifiers have been available for some time as a machine learning process. We also know that AEC, as it's abbreviated, is only so accurate: even the most sophisticated ones are accurate only about 80% to 85% of the time. Now, a rhetorical question: why is it that AEC systems aren't 100% accurate? Because each individual person's hearing loss is a little bit different.

[Andy] And I know that from my own experience. Truth in hearing aid usage here: I'm running Phonak Audéo Life devices right now, their waterproof version. If I let the automatic mode run, its speech in noise is pretty good, but I was able to tweak it and get a little better performance out of it, creating a custom program riffing off of what the automatic mode was doing.
So I understand completely where you're coming from.

[Dave] Yeah. In hearing loss (Stuart Gatehouse coined the term "auditory ecology" years ago, before his untimely and premature death), a person's experience with sound, their social and auditory experience, and auditory and non-auditory differences make up that difference. But as well, it's not usually the case that when you're using your hearing aids you're in machinery noise or mechanical noise. It's far more common that you're at a cocktail party or a restaurant, where you want to hear one voice but not another, and it's hard for a machine learning system to differentiate between the signal of interest and the loud talker behind you. Many patients will remark, "I can hear the person over there who's shouting at the person he or she is eating with, rather than the person I'm interested in talking with." So what we did was combine a sophisticated machine learning classifier with listener intent, if you will, initially by allowing them to double-tap on the device. The IMUs, the sensors we put in hearing aids to track physical activity, could also be used to recognize a tap on the device. And now we've also streamlined it further, enabling people to use our user app and engage that button as well. What happens is roughly analogous to an acoustic snapshot, at that moment, of whatever environment I'm in: the acoustic environment, whether there's background noise or music playing, and the location of those sounds, but with the understanding that what's in front of me right now is what I want to hear. In that case, the tweaking is done automatically by the circuitry to optimize the audibility of the sound in front of the individual; typically that's speech. So it's doing that automatically for the 15% to 20% of the time where the classifier falls short, because speech can be a stimulus of interest, but it can also be background noise.
Music can be something you want to hear, or it can be something piped in overhead in an elevator that is noise. So, by combining sophisticated machine learning with listener intent, with this Edge Mode that lets them take an acoustic snapshot of the environment where they are at that moment, we can customize further and optimize by applying additional offsets to make the stimulus in front of them more audible. That's been available with Edge Mode for some time now. We've streamlined that and begun to use this onboard DNN accelerator specifically, still giving that one-touch button to always apply the best audibility for sound in front of them. But in addition, there's additional granularity in the current app: within Edge Mode, you can select either to enhance speech further, going further with offsets for the audibility of speech, or to provide more comfort in a noisy environment by making the noise management settings even more aggressive, considering where they are and the frequency content of those environments. If a person simply wants the best sound, as they've used Edge Mode in previous products, they have that. But we're beginning to take that onboard DNN accelerator, in those noisy restaurant and high-noise environments, and make a more aggressive noise management system than is available in the AEC programs you talk about, the automated programs that work pretty well the majority of the time. It provides additional benefit and additional granularity for those users who understand whether they want comfort or clarity in a specific environment. And I would say that even applies within an individual hearing aid user: let's say you're going to a lecture or a book reading at a bookstore and you want to hear what the person is saying, but then you move to a reception afterwards.
And now maybe it's a little later in the evening, the noise level is picking up, and you want to optimize comfort, whereas earlier you wanted to optimize clarity. You have that flexibility now within this Edge Mode, with a one-button touch, to really customize and optimize, without a non-technical person having to go into the software and make those adjustments. Edge Mode Plus is doing that for you.

[Andy] Okay, got it. And then will the hearing aids learn? In other words, if I use Edge Mode in a certain situation, will it remember what I have done in that situation and revert to that setting straight off the bat the next time?

[Dave] Yes and no. This platform is setting us up for the next multiple years, and we now have the horsepower in terms of processing and computational power, but we're just starting on the pathway to really exercise that DNN capability. What we do have now is the capability in this software to save programs. So you have the programs that the professional would set up, like restaurant or crowd. But in addition, I have a custom program that enables me to use Edge Mode Plus, starting on the way to learning and customizing. Let's say I go to Starbucks every morning, the barista is in one spot, I'm in line at the same time every day, and it's pretty noisy. I can either apply Edge Mode every day, or I can have Edge Mode saved as a custom program, the optimization I made one time when I walked in, without having to redo it the next time; it remembers the customization that was made at that moment in that environment. So we're on the way to making it learn and customize for individuals. But today, in terms of the true Turing test of "is it learning" in the DNN sense, we're early in that process, but that's where we're going. That's absolutely where we're going.

[Andy] No, I understand.
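As an aside for readers, the flow described above (classify an acoustic snapshot, then let listener intent choose the offsets applied on top of the prescribed fitting) can be sketched in a few lines of Python. This is purely illustrative: the class labels, thresholds, and dB values below are invented, not Starkey's implementation.

```python
# Illustrative Edge-Mode-style flow: classify an acoustic snapshot,
# then let listener intent choose the offsets applied on top of the
# prescribed fitting. All labels, thresholds, and dB values invented.

def classify_snapshot(snr_db: float, level_db: float) -> str:
    """Toy environment classifier (real systems use ML over many features)."""
    if level_db < 45:
        return "quiet"
    return "speech_in_noise" if snr_db < 10 else "speech"

def edge_mode_offsets(environment: str, intent: str) -> dict:
    """Map (environment, intent) to gain/noise-management offsets."""
    offsets = {"gain_db": 0, "noise_reduction_db": 0}
    if environment == "speech_in_noise":
        if intent == "enhance_speech":
            offsets.update(gain_db=3, noise_reduction_db=4)
        elif intent == "comfort":
            offsets.update(noise_reduction_db=8)  # more aggressive comfort
        else:  # one-touch "best sound" default
            offsets.update(gain_db=2, noise_reduction_db=4)
    return offsets

env = classify_snapshot(snr_db=5, level_db=70)  # e.g. a noisy restaurant
print(env)                                      # speech_in_noise
print(edge_mode_offsets(env, "comfort"))        # {'gain_db': 0, 'noise_reduction_db': 8}
```

Saving a custom program, as Dave describes, would amount to persisting the returned offsets so they can be reapplied without retaking the snapshot.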
[Andy] So today, once I've got Edge Mode doing what I want, I can save that setting and pop into it before I go into the Starbucks the next time.

[Dave] Or you can still have the flexibility of applying it fresh every time. It's combining machine learning plus listener intent, and that'll get you a long way. But this journey over the next several years, with the horsepower we have on this new chipset, in combination with the signal processing, the noise management, and the directionality, paints a picture of where we could be in several years. Let's say you and I have the same hearing loss, we're programmed by a professional, by an audiologist, and we go out into the real world. Let's say you're more active and I'm more sedentary. The idea may be that in several years, when the devices come back to the same professional, they will be optimized differently for you versus me, and that's where we're going. Now, that creates both opportunities and challenges for the professional, because in a sense the DNN is the ultimate black box. It's learning more the way humans learn speech and language. The example I use with DNNs (people often use the cat and the dog and all that) is the way a child learns language at a young age. They're plopped in their playpen and these humans come into their space. No one has taught the child phonetics or grammar or word structure or anything; these humans just come into their space uttering sounds that are unintelligible at first. But the child realizes that this one always says "Mama," this one says "Dada," this one says something else, and starts to mimic or replicate those sounds, and that person responds happily. The way that we learn language is not so much rule-based as it is by experience.
And we're entering into a period, in the next few years, of taking into account all of the acoustic environments in which a hearing aid user might want to wear their devices. The devices can be trained, but rather than following a fixed rule structure, the idea of this onboard DNN accelerator is that it's going to be able to be different for you than for me. And that's a really exciting time. But the audiologist in me starts thinking about when the patient comes back in and says, "I'm hearing really well, but I'd still like to tweak a little bit here or there," and I don't know exactly, in a DNN model, how it's gotten to that point and how I improve it. So I think the challenge, both on the industry side and on the professional side, is to unpack that black box of a DNN model, which isn't operating according to the same rule-based set. With rules, we say, for occlusion, you always want to reduce loud sounds and the low frequencies by three dB, or by this amount, when a patient says, "My voice sounds funny." Now it's going to be a little more interesting and personalized, both for the professional and the patient. And while some professionals may fear for their role, I think it creates an opportunity for those who can really dig in and understand their patient, their expectations, and that auditory ecology, to help optimize hearing by combining what the machine can do very well with what the professional knows and understands in the form of empathy. That's what we have as humans: we can connect and try to understand and appreciate what adjustments might be made.

[Andy] Yeah, that's right. I would say that I don't think the professional has anything to fear, in the sense that we're a long, long way from a hearing aid being able to fit itself properly, especially for more severe hearing loss.
In other words, when a person walks out of the audiologist's office for the first time, they're going to be most of the way there on account of the audiology profession and its expertise. But most people don't have the language to describe exactly why a situation is not optimal. And I'm telling an SLP this: everybody's complete auditory system is different, and there's no way an audiologist can know what's happening in a complete auditory system under different circumstances. And the usual client doesn't have the language to describe it. So I think the neural network application is perfect, especially if you then give the audiologist the capability of calling back what the user has done, so they can see the deviations from the original fitting algorithm. In different circumstances, you might have ended up with different compression settings, for example, based on experience.

[Dave] I think you raised several really good points there. First of all, when we think about this notion of understanding, we're not just fitting a pair of ears and matching real-ear measurements to a prescriptive target, whether it's a proprietary one like our e-STAT 2.0 or one of the independently validated ones. The reality is that those are derived for the average hearing loss. And your ears are merely acting as sensors, supplying what goes on to be integrated in the auditory cortex for processing. Really, when we think about it, the ears are sensors and the eyes are sensors. You know very well about lipreading; I love it when I see patients and they say, "Well, I'm losing some of my hearing, so I'd better learn to lipread." And I say, you've been lipreading all along, really integrating the eyes and ears. And in the early days of cochlear implants, there were also tactile devices: people could learn to understand voiced and unvoiced sounds from vibrations on the skin.
So we don't even fully know how hearing comes together at the cognitive level, beyond auditory processing, damage to the hair cells, and directionality. But we're really at a point, I think, where we now have the computing power to take into consideration some of the complexities of hearing that take place cognitively, and start to integrate not only vision and hearing but the spatial awareness that is so important to hearing aid users. We're entering, I think, a really exciting time. But I, like you, am not at all worried about the role of the professional. And I think this is where we have to think not only about fitting a pair of ears, but about how the device fits into the individual hearing aid user's ecosystem.

[Andy] Well, yeah, I agree with you 100%. Let's then take that in a slightly different direction and talk about overall cognition, because of the developing understanding of the relationship between hearing loss and dementia, and the role an audiologist could play, and in fact the role the devices could play, in monitoring overall cognitive ability, which is something Starkey has been working on for a while. So take me there: what is this new device doing in that respect, and where do you see this going in the future? Really what I'm talking about is health and well-being, sensor fusion, the integration of different sensors, and how you see that playing out.

[Dave] Yeah, and I know this is a recorded session, but today is sort of an interesting and important day. I love challenging the impossible. People would say, well, you can't make hearing aids a multipurpose, multifunction device. But March 16, 1926 was the day that Robert Goddard flew the first liquid-fueled rocket, well before anyone even envisioned the seemingly impossible feat of space travel. He did this not quite 100 years ago, and only 43 years later we landed on the Moon. And one of my favorite quotes of his is that it's difficult to say what's impossible:
the dream of yesterday is the hope of today and the reality of tomorrow. So I have to back up a little when we talk about sensor fusion and sensor integration, as to why we got to where we are right now with Genesis. It started several years ago when Achin Bhowmik, as mentioned, came to us. He had headed up the perceptual computing division at Intel, so he was well versed in AI applications in vision, and he has learned our discipline on the hearing side, really thinking about comorbidity. You point to cognition; I'll point to earlier research that, some 20 years ago, started to link hearing loss to cardiovascular disease, diabetes, high blood pressure, and stroke. Anything that restricted blood flow to the ears and the eyes showed a strong comorbidity between cardiovascular disease and hearing loss, and many cardiologists will say that in the aging population the ear is a barometer of overall cardiovascular health. One reason we started to incorporate sensors was to encourage people to be more physically active by measuring and monitoring their activity throughout the day: walking, running, even just getting up and sitting down to improve musculoskeletal strength. The other part, people say, "Well, I can do that on my wrist." Yes, but the thing you can't do on your wrist very easily is monitor not only whether a person is wearing their hearing aids but whether they're engaged in conversation with other individuals. Now, we're not measuring brain function; we don't have EEGs in these devices. But indirectly we're measuring it, and we continue to become more sophisticated, not only with the physical elements in Genesis. We can now monitor and automatically determine whether a person is walking, running, bicycling, lying down, or standing, so we know a little bit about their movements throughout the day. All good things to encourage people to be more physically active.
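The automatic activity detection described here can be illustrated with a toy classifier over accelerometer data. The features and thresholds below are invented for illustration; production devices use trained models over the IMU stream.

```python
# Toy IMU activity classifier: the variance of acceleration magnitude
# over a window separates stillness from walking and running.
# Thresholds are invented; real devices use trained models, and
# distinguishing lying down from standing would also use the gravity
# direction relative to the device axes (omitted here).
import statistics

def classify_activity(accel_magnitudes_g: list[float]) -> str:
    """Label a window of acceleration magnitudes (in g)."""
    var_g = statistics.variance(accel_magnitudes_g)
    if var_g < 0.01:
        return "stationary"
    return "walking" if var_g < 0.3 else "running"

print(classify_activity([1.0, 1.01, 0.99, 1.0]))     # stationary
print(classify_activity([1.2, 0.8, 1.4, 0.7, 1.3]))  # walking
print(classify_activity([1.8, 0.3, 2.0, 0.2]))       # running
```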
Similarly, we can use the AEC, in the same way that we're optimizing for different acoustic environments, to know when speech is present. Right now, we're able to monitor when I'm wearing the hearing aids. But if I'm wearing them 14 hours a day while sitting alone in the dark with no interaction with other people, that's not very good. So, in the same way that we're monitoring physical activity in terms of steps, we can monitor hearing activity, if you will. The app will show hearing aid usage: whether I'm using the hearing aids and whether I'm interacting with other individuals. Currently, I've interacted for about 2 hours and 25 minutes today with the devices on, and I'm on my way to a satisfactory amount of engagement with other people, on the basis of monitoring which acoustic classes I'm in. That serves to motivate me to hit that daily target. It seems simple, but one thing we know from the connection and comorbidity between cognitive decline and hearing loss is that we want to encourage people to be in a variety of acoustic environments through the day and to engage with other humans in conversation. We're measuring that indirectly, on the principle that what gets measured gets done. On a broader level, we say cognitive decline is correlated with untreated hearing loss. Some of the early results of the ACHIEVE trial are expected to come out mid-year this year, when they're presented in Amsterdam. If those show a causative relationship, that's going to really drive boomers like myself, who worry more about cognitive decline than my parents worried about cancer and cardiovascular disease. As a boomer, I've invested a lot: my dad had an 8th grade education; I have my PhD. I want to try to preserve as much of this as I can for as long as I can.
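The engagement metric described here, time spent in conversational acoustic classes tracked against a daily goal, could be sketched as follows. The class names and the target are assumptions for illustration, not Starkey's actual definitions.

```python
# Illustrative engagement tracker: accumulate per-minute environment
# classifications and count conversational minutes toward a daily
# target. Class labels and the 120-minute goal are invented.
from collections import Counter

SPEECH_CLASSES = {"speech", "speech_in_noise"}
DAILY_TARGET_MIN = 120  # assumed goal: 2 hours of engagement per day

def engagement_minutes(minute_labels: list[str]) -> int:
    """Count minutes classified as conversational environments."""
    counts = Counter(minute_labels)
    return sum(counts[c] for c in SPEECH_CLASSES)

day = ["quiet"] * 300 + ["speech"] * 90 + ["speech_in_noise"] * 55 + ["music"] * 40
minutes = engagement_minutes(day)
print(minutes, minutes >= DAILY_TARGET_MIN)  # 145 True
```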
And if anything can shorten the seven-to-ten-year delay between the time someone tells me I should get my hearing checked and when I actually get hearing aids, that will help. For boomers, it'll serve as a catalyst if indeed we start to see a causative relationship between untreated hearing loss and cognitive decline. Hearing aids are not going to solve or cure dementia. But if we can shorten that timeline and get people the stimulation of social engagement, that's a good thing. And the way we're going about it now is by monitoring the acoustic environments they're in, reporting that, and tracking it, so that I can track over a day, a week, a month, a year whether I'm improving. That's a good thing, and for me, it's one of the key elements of what Genesis has today. Tomorrow we're going in other directions that will become more sophisticated for discerning interactions, in ways that I'm not quite at liberty to talk about because we're working on them for future generations of products. But in Genesis today, we're monitoring social engagement and physical activity for the reasons I said: the comorbidities with cardiovascular disease, with cognition, even with falls. We know even a mild degree of hearing loss leads to a three-times elevation in the risk of falling. And we're working not only on fall detection; we do have a fall detection feature that can alert up to three trusted contacts if I fall while I'm wearing the devices, and even show on a map, using the location services on the phone, where I was when I fell. But as someone whose mother fell and unfortunately started the downward spiral that led to her death a few years later, we want to move from fall detection to monitoring fall risk and potentially even preventing falls before they occur. So we have a strong direction for that too.

[Andy] Okay.
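Fall detection from an ear-worn IMU is often described as looking for a free-fall signature followed by an impact; a minimal sketch of that idea follows. The thresholds and window are invented, and a production feature layers validated models, plus the contact alerting and location reporting mentioned above, on top of anything this simple.

```python
# Illustrative fall-detection sketch: a fall typically shows up in
# accelerometer data as near-free-fall (magnitude well below 1 g)
# followed shortly by a hard impact spike. Thresholds and window
# sizes here are invented, not a validated detector.

def detect_fall(samples_g: list[float],
                free_fall_g: float = 0.4,
                impact_g: float = 2.5,
                window: int = 10) -> bool:
    """Return True if a free-fall dip is followed by an impact spike."""
    for i, mag in enumerate(samples_g):
        if mag < free_fall_g:
            # Look for an impact within the next `window` samples.
            if any(m > impact_g for m in samples_g[i + 1:i + 1 + window]):
                return True
    return False

# Near-free-fall then a 3.1 g impact is flagged as a fall.
print(detect_fall([1.0, 0.9, 0.3, 0.2, 3.1, 1.0]))  # True
# Ordinary movement never trips both conditions.
print(detect_fall([1.1, 0.9, 1.2, 0.8, 1.1]))       # False
```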
And that's kind of where I was heading, because we're verging on the capability where you could actually monitor, and this is where you get into the machine learning again, a person's balance over time, by characterizing their activities and watching how those activities change, or how they change when performing the same activity. There's even the possibility of taking voiceprints and using changes in a voiceprint over time to monitor changes in cognition. So we're really on the verge of being able to do a lot.

[Dave] You're singing my song. Yeah. We were the first to recognize, I say as an audiologist, and this is my 40th year as an audiologist, that hearing and balance are both part of the same system. The cochlear-vestibular system is all connected. Too often, people forget about balance as a critical component that has the strongest comorbidity with hearing loss, because the organ of Corti and the vestibular system are connected in the inner ear. One of the things we believe is that professionals should be thinking about the patient not only in terms of the hair cells in their cochlea, but also in terms of the vestibular system, the risk of falls, and everything that goes into our balance. So we were really pleased to be the first and only manufacturer in our space to introduce a fall detection feature. And we're going to stay ahead of that curve too, because we have ambitions in some of those areas you're mentioning: to really help individuals over their lifespan keep their balance, and improve their balance if possible, to the degree that their anatomy and physiology allow.

[Andy] Okay, yeah, that's a really interesting line of research and line of activity. I'll be looking forward to seeing how that plays out in the years ahead. I'd love to come back and talk to you about that as we go.

[Dave] We're really excited about that area.

[Andy] Terrific. But let's talk a little bit about the Genesis AI specifically.
And now you've also got the Neuro Sound Technology, which, as I understand it, is largely wrapped up in your new compression system, where you're actually running fast and slow compression additively. How does that work, and what are the benefits?

[Dave] Sure. First, just a little bit about the Neuro Processor. From an engineering standpoint, computational power and battery life are always at odds in our world. It's always interesting to me when new entrants come into our space and think, "This can't be so hard, can it? Isn't it just turning things up and turning things down? These small devices are just kind of like an equalizer, right?" But then you tell them how low the power consumption requirements are, especially as we've transitioned into rechargeable batteries, and how long the expected runtime is. I don't know whether your devices use rechargeable or replaceable batteries, but the expectation is that you want to be able to use them all day, every day. And one of the challenges is that increasing computational power normally leads to lower battery life. So with our Neuro Processor, we've got six times more transistors, a four-times-faster processor, and ten times the noise reduction capability, and it now allows up to 80 million computations per hour. That seems like just a big number, but when you think about all of the different compression bands, the noise reduction, and everything the hearing aid is monitoring in the background, especially as we start thinking about some of these DNN models going forward, integrating directionality, noise management, gain compression, detecting whether wind is present, et cetera, we need that computational power. And now we've got the horsepower to do what we need to do, not only today but into the future.
And the beginning of that was to start really looking at this additive compressor, as you mentioned. What we've done is combine that new compression strategy with the e-STAT 2.0 fitting formula and with a 118 dB input dynamic range, which is the largest input dynamic range in the industry. Why is that important? The dynamic range of hearing for most people is about 100 dB or more, and we wanted to be able to capture, regardless of the degree of hearing loss, the soft sounds as well as the moderate and loud sounds. My friend Mead Killion, years ago, when he developed the K-AMP, one of the earliest single-channel nonlinear amplifiers, observed that hearing aid users at the time were largely using linear devices limited by feedback, peak clipping, or compression limiting. He said hearing aid users don't want to live their life under a 100-watt light bulb; they want to be able to hear the dynamics of different sounds. Moving to the sound quality issue: some of the biggest and most frequent comments we've heard from patients are about how quiet these devices are in low-ambient environments. It looks like you're sitting in a low-ambient environment there, and we don't want the hearing aids to have a high noise floor. So we've reduced our noise floor by 20 dB, and we've used that computational power and that 118 dB input range to map as broad a frequency response as a person's hearing loss allows and as broad a dynamic range as possible. e-STAT 2.0 does that to optimize the residual auditory area, regardless of whether the hearing loss is mild to moderate or severe to profound. We know there's this new category of OTC devices, but we still believe very strongly that the best outcomes are achieved with our technology in the professional's hands.
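One way to read "running fast and slow compression additively" is two level estimators with different time constants, each producing a gain, with the gains then combined. The sketch below shows only that generic idea (an assumption on my part, with invented ratios, thresholds, and time constants); it is not Starkey's actual algorithm.

```python
# Generic sketch of an additive fast/slow compressor: a fast level
# tracker follows syllables, a slow one follows the overall scene,
# and their gains (in dB) are blended. All parameters invented.

def smooth(prev_db: float, level_db: float, attack: float, release: float) -> float:
    """One-pole level tracker: different smoothing when rising vs falling."""
    alpha = attack if level_db > prev_db else release
    return prev_db + alpha * (level_db - prev_db)

def compress(levels_db, ratio_fast=3.0, ratio_slow=2.0, threshold_db=50.0):
    """Per-sample gains (dB) combining a fast and a slow compressor."""
    fast = slow = levels_db[0]
    gains = []
    for level in levels_db:
        fast = smooth(fast, level, attack=0.9, release=0.5)   # syllabic
        slow = smooth(slow, level, attack=0.1, release=0.01)  # scene-level
        g_fast = -max(0.0, fast - threshold_db) * (1 - 1 / ratio_fast)
        g_slow = -max(0.0, slow - threshold_db) * (1 - 1 / ratio_slow)
        gains.append(0.5 * g_fast + 0.5 * g_slow)  # additive blend
    return gains

# A level sweep: no gain reduction below threshold, increasing
# reduction as the fast tracker follows the peak.
print([round(g, 1) for g in compress([40, 60, 80, 60, 40])])
```

The appeal of the additive arrangement is that the slow branch preserves the overall dynamics Killion's light-bulb remark is about, while the fast branch still catches brief peaks.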
We applaud accessibility and affordability, but we know that our technology, optimized by the professional, leads to the best results. It's really about taking the dynamics of amplitude and frequency response into consideration to map them onto the residual auditory area in quiet, noisy, and musical environments, in as many different environments as a person can encounter. That's the aim of this new compressor. It sounds so simple: soft sounds should be soft, moderate sounds should be comfortable, loud sounds should never be uncomfortable, and people should preserve spatial awareness. Those are the top four drivers, year in and year out, in the MarkeTrak survey data showing what people with hearing loss expect from hearing aids. They want excellent sound quality, they want speech in noise, they want loud sounds kept from becoming uncomfortable, and they want to be able to locate sounds, not just hear that they're present. For someone with good vision, if I hear a sound and look the wrong way when it's actually over here, that's an annoyance; for someone who is blind or has low vision, it can be life-threatening. So those are the areas we're taking into consideration, recognizing that most people wear binaural devices, and trying to preserve as much of the residual auditory area as possible. Where we've seen considerable improvements is in speech intelligibility, in sound quality, and in how quiet these devices are, both for the professionals who often do listening checks and especially for the patients wearing them, who say, "Now I can hear a watch tick, I can hear a clock in the room, and I'm not uncomfortable, but I'm aware of things I wasn't picking up before." And then, moving into, if you will, defying gravity: pairing this improved computational power with what we did with our RIC.
This is the receiver-in-canal RT, so it has a telecoil and a rechargeable battery, and we've delivered up to 51 hours of battery life on a single charge. Think about the computational power in that new Neuro Sound processor plus this improved battery life. Whether you have an Android or an iPhone, or a computer, or any battery-powered device, lithium-ion batteries degrade over the years. We want every patient to have the confidence, with our rechargeable devices, of all-day use without range anxiety, without having to think, "How am I going to recharge before the end of the day?" Starting this model out at 51 hours, even after three years of use the battery life will still be longer than our closest competitor's RIC is today, out of the box. So we're future-proofing the battery life of these devices, knowing that lithium-ion batteries degrade over time, while still providing that confidence for all-day use. In the custom devices, like the ones I have on here, we're starting out with 42 hours. And the micro RIC, called the mRIC, is the smallest receiver-in-canal with a sensor. These all have the sensors and do the physical-activity and social-engagement tracking; that one is 41 hours. So we're very pleased, and we think we have a strong differentiator: improved computational power delivering sound quality and speech intelligibility improvements, with the expectation of all-day performance from that battery life. That's been extremely well received in our testing in preparation for launch.
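A quick back-of-envelope sketch of the battery headroom described above. The 80% three-year capacity-retention figure is an assumed, typical value for lithium-ion cells, not a number from the interview; only the rated hours come from the conversation.

```python
# Illustrative estimate of per-charge runtime after battery aging.
# Assumption: lithium-ion cells often retain roughly 80% of capacity
# after about three years of daily charging (ballpark figure, not a
# Starkey specification).
RATED_HOURS = {"RIC RT": 51, "custom": 42, "mRIC": 41}  # from the interview
RETENTION_3Y = 0.80  # assumed capacity retention after three years

for model, hours in RATED_HOURS.items():
    print(f"{model}: ~{hours * RETENTION_3Y:.0f} h per charge after 3 years")
```

Even with that assumed fade, the RIC RT would still deliver roughly 40 hours per charge, comfortably more than a full waking day.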
When we talk about the five or six years of development, over 500 patients were fitted with these devices during that time, with over 11,000 hours of use out in the real world before this product ever came to market. So we're quite confident in the results, which we'll be publishing as white papers and peer-reviewed publications, and we've improved durability and reliability. You mentioned you're fitted with devices that are waterproof; we have IP68 ratings on all of these products, the highest rating a hearing aid can get. I've taken my very own devices and dunked them in a meter of water for 30 minutes, pulled them out, dried them off, put them back in my ears, and run the self-check feature, a tool right in the My Starkey user app that at any moment tells me whether the receivers, the microphones, and the circuits are all working fine. As a clinician, I never punished patients who came back to me sheepishly and said, "I forgot to take my hearing aids out before I got in the shower," or "before I jumped in the pool." I'd say, "Wait, I'm not going to yell at you. For me, that's the best compliment you can give me: you forgot to take your hearing aids out because they were so comfortable and natural, and you were so happy with them, that you forgot you had them in."

[Andy] Well, it's funny you talk about dunking yours in the water, because I'd had these two or three weeks when we went down to the Gulf Coast of Florida, and I couldn't resist. They got a saltwater dunking every day I was there, and afterward all I did was take my squirt bottle and rinse them off. I'd thought about waterproof in terms of rain, an afternoon shower, forgetting you have them on when you jump in.
But when we went, and I think this is part of lifestyle, especially for younger, more active people or even people our age who remain active, you want to do a lot of different things. I hadn't thought about it in advance, but when we would go out in the water, I was confident I could wear them. When my spouse and I would talk, or we would talk with the people around us, I could hear them. The fact that I could confidently wear them in the sea was fantastic, because it would be a lot harder to interact with people without them. And up until these waterproof models came around, you couldn't take them in, especially in salt water. So that's really good. But I want to explore a couple of areas in the relationship between processing power and battery life. When I saw the range of 40 to 50 hours of battery life, the first thing I thought, as someone who has listened with headphones and earphones my entire adult life, was that I would be willing to throw away 10 hours of that battery life to have wider bandwidth for music. So I was going to ask you: I understand that you're covering the long-term degradation of the battery by going with longer battery life, so that three and five years from now they're still working all day.

[Dave] Yes.

[Andy] But are we at the point where you think Starkey will develop wider-bandwidth devices, because you're going to have more people for whom music, especially live music, is part of their experience? What's your feeling about that trade-off between battery life and bandwidth?

[Dave] Well, it's a great point. As I said, this is my fortieth year as an audiologist, and in the past it would often frustrate me, because people would say music isn't that important to hearing aid users because they have hearing loss. But we boomers always like to change convention, and I think we've really challenged that, both internally and externally, to say no.
Yes, I have a hearing loss, but I still want as broad a dynamic range as possible. And you've isolated that area of opportunity: when you compare hearing aids to consumer devices, consumer devices have active drivers and mechanisms to deliver broader bandwidth. But the price you pay for driving those low frequencies, which is really what delivers that sound quality, is shorter battery life. So there is a constant trade-off between long battery life and the ability to deliver the low frequencies that drive sound quality, for music in particular, but also for speech and other sounds. And yes, I think it's safe to say, because of the impact of the boomers saying, "I may have a hearing loss, but I still want to listen to, and enjoy, the music I've always listened to." Our lane is always going to be focused not on consumer audio but on patients with hearing loss, giving them the best audio along with the health-and-wellness and intelligent features. But for the person who says, "I have really minimal hearing loss, but I use these to stream audiobooks, music, and everything else," we want to deliver the same broad bandwidth some of those consumer audio devices deliver, knowing the battery life will be shorter. Like you, I'm willing to give up a certain amount. I still want to make sure I get all-day, every-day use, but I'll sacrifice a little battery life in exchange for improving the low-frequency sound quality in particular. So stay tuned on that. I think it's safe to say that's an area of opportunity for development, now that computational power, bandwidth, and battery life have improved to the point where all-day, every-day use is the expectation. And for music, we're continuing to work to deliver the best sound quality and speech intelligibility, for music and for speech, in every environment.
[Andy] Very much looking forward to that. I was actually thinking more of the live music experience, because when I want to stream music, I'll put earbuds in, and I have earbuds with a personalization routine that can help compensate. But when it comes to live music, you're getting your bass naturally through the domes anyway, right?

[Dave] Right. And as you know, one of the issues with the sound quality you get when sound is picked up at the hearing aid microphones and delivered into the ear is that the more you occlude the ear, the more bass you can deliver, but the more occlusion you get. There have been a variety of efforts, used with some success, but I don't think anyone has really solved that riddle yet for live music: delivering that broad bandwidth while ensuring low occlusion for your own voice and preserving some of that natural sound quality. But I think there are a number of ways you're going to see, and can expect, improvements, not only for streamed audio but also for live music. And again, I always say that as I get older, the decibel level seems to get louder and louder at live venues. We want to be sure people aren't exposing themselves to levels that could cause additional damage to their hearing in those venues. It may seem strange for a hearing aid manufacturer, but we actually work on products to prevent hearing loss; that's a conversation for another day. We also have devices and products in development for people before they have hearing loss, to prevent it before it occurs. But for live music, there continue to be opportunities to improve the bandwidth and the listener experience, as there are for streamed audio and music.

[Andy] Yeah, that's terrific. And when I see 50 hours of battery life, I know you're now getting the headroom to be able to do that.
But part of, or it seems like a major part of, what you worked on in this device was also the clinician experience. Tell me about that: how the clinician experience is improved, and how that in turn reflects on the client experience.

[Dave] You bet. But first, I realize the one thing you asked me about was the additive compression system, so let me finish that topic. What we found, in comparison to previous technology and other technologies on the market, is that this additive compressor uses the slower time constants to get into that residual auditory area, that 118 dB dynamic range. The slow constants keep the signal near the midpoint of the residual auditory area, and the fast constants handle the peaks and valleys, ensuring that soft sounds are audible and loud sounds are not uncomfortable. That provides long-term audibility along with the intelligibility that comes with the shorter constants. Just to close the loop on that one.

[Andy] Let me ask you a question in that regard, because the line in the press release was "optimizing sound over 80 million times an hour," which comes out to more than 20,000 times a second, a higher rate than the bandwidth of the hearing aid.

[Dave] Well, remember, you've got multiple channels; you've got speech, you've got noise, you've got compression. It's not just the bandwidth; it's the amplitude and the frequency, considering all of the sub-band changes. So it's not broader than the frequency range of the hearing aid; those computations are spread across the 24 channels in the premium product, which is about as many third-octave bands as the auditory system has within the range we're mostly concerned with.

[Andy] Okay, that makes sense. It's not 80 million times per hour in a single band; within each individual channel it's a slice of that. Now that makes a lot more sense.
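The arithmetic behind that exchange can be sketched as follows. The even split across 24 channels is an illustrative reading of Dave's explanation, not a published Starkey figure; in practice the adjustments also span noise reduction, compression, and other parameters.

```python
# "Optimizing sound over 80 million times an hour": overall rate
# versus a per-channel share. The uniform 24-channel split below is
# an illustrative assumption, not a manufacturer specification.
ADJUSTMENTS_PER_HOUR = 80_000_000
CHANNELS = 24  # premium-product channel count, from the interview

per_second = ADJUSTMENTS_PER_HOUR / 3600      # ~22,222 per second overall
per_channel = per_second / CHANNELS           # ~926 per channel per second
print(f"{per_second:,.0f}/s overall, ~{per_channel:,.0f}/s per channel")
```

So no single channel is updated anywhere near 22 kHz; the headline number is an aggregate across channels and parameter types, consistent with Dave's point.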
All right, so let's talk about the clinician experience and what's different.

[Dave] So what would you, as a professional, be concerned with in the user experience of the fitting software?

[Andy] Well, I'm not a clinician, but I would say I'd want the initial fit to be as accurate as possible in the shortest time possible.

[Dave] Okay. So in our Pro Fit software, the new update to Inspire, first of all, we had a lot of feedback from our customers. Just as 500 patients were fitted with this product, we also consulted with countless professionals, those who worked with us and those who worked with other products, about what they liked and didn't like. Many of them echoed your concern: they want to get to first fit fast, but still have all the other tools under the hood. In the past, with Inspire, people really liked the ability to independently adjust, in the left and right ears, each of the 24 channels for soft, moderate, and loud sounds and the MPO settings. We didn't want to take that away, but we did want to answer the call of those who said, "I want to get to that first fit fast." So now we have something called Minute Fit, which we describe as going from the box to first fit in four clicks. Literally, from the pre-fitting to that initial fit, whether you're using the manufacturer's prescribed formula or the integrated real-ear measurement we also offer, you get to verification in the individual ear in four clicks. That really shortens the time it takes to reach the initial fit and get initial patient feedback on whether you'll continue to adapt or adjust.
A couple of things within that, and a couple of pro tips for people who are familiar with us or who aren't. Some people run the feedback initialization algorithm on the devices, and others do not; they wait until someone complains about a feedback problem. We recommend running the feedback initialization. I also hear of people who run it with the devices on the table. That's the wrong way to do it. I understand why: they don't want to expose the patient to loud sounds. But whether it's a custom product or a standard product with a dome tip, whether vented or unvented, or a custom receiver mold, we want to run the feedback cancellation so that it takes into consideration the venting parameters and the depth of insertion, and actually adapts the initial settings on the basis of the feedback initialization with the venting in place. It's an optimization method that accounts for the coupling of the device to the individual's ear. So run the feedback initialization with the devices in the patient's ear, and when it asks whether you want to update the initial adjustments based on the feedback optimization, say yes; that's something we've gotten a lot of favorable comments on. It streamlines that optimization and personalization to the individual ear, and it all happens automatically in the process. The other thing concerns the receivers. In the past, when you put the receivers in and ran a pretest, you had to go in and perhaps look at the serial number to see which device was hooked up to the left receiver and which to the right.
Now, as soon as you couple the L, M, or H receivers at whatever length and pop them in, the software knows whether it's a left or right receiver and which power level it is, and that feeds automatically into the fitting parameters. It's just a streamlined process for the professional: do the pre-fitting with the receiver length, size, power, and ear side; get through that four-click process; run the initialization quickly for the patient and optimize individually; and then whether you use integrated real-ear measurements or not is up to you. We strongly encourage it for that personalization. I like to say that if you don't use real-ear measurements, integrated or not, it's like practicing astrology rather than astronomy. We know every patient is different, but at least you've got to make some measurements to get in the ballpark. Another area professionals will be interested in: in the Pro Fit software, we've dramatically improved the speed of firmware updates. You can now do binaural firmware updates in Pro Fit in about three minutes, much quicker than in the past. We've also enabled patients who can handle it to go into the My Starkey app at their convenience, if they have the technical capability, and update the firmware when it suits them, without necessarily going into the professional's office. Every time they look at their settings under My Devices, it will say either that the firmware is up to date or that an update is available, and it runs really fast, in just a minute or two, delivering those latest features. That's something, again, we baby boomers appreciate; we've been around computers.
We understand that it doesn't necessarily extend the overall life expectancy of the hearing aid, but having the latest feature and firmware updates throughout its life is a nice benefit, both for the patient and for the professional. And then the last thing is our TeleHear synchronous telehealth. We've integrated that and made it really simple to operate, much like this Zoom connection: I can see my patient and talk to them in real time. I connect through the app; they're on one end with their phone, and I'm on the other with my programming computer, and I can make nearly every adjustment I could make face to face, virtually, so that getting to optimized settings is efficient for the patient and the professional. Too often, I think, both professionals and patients say, "Do I really want to take the time to make those adjustments? I'll just live with them until I've got three or four, then go in and see the professional."

[Andy] And you're in a state with a large rural population, too, so there are plenty of people for whom that's a real journey. Being able to do it through telehealth matters. COVID taught us we could do things we didn't know we could do. People thought, "Oh, 60-year-olds can't use telehealth; they're not going to take the time for it." Well, we're all Zooming, or using Teams or Webex, and it's just one more little jump to use that TeleHear feature. It doesn't replace face-to-face interaction, but whether as a patient or as a provider, knowing I can see my caregiver face to face when I need them is great for the important things, while being able to make minor tweaks remotely, simply, without taking the day off or traveling from a rural area, is, as you say, a big thing. That's a pretty important convenience function.
[Dave] And our TeleHear system is very sophisticated in the capacity it gives the professional to interact with their patient as if they were in the same room.

[Andy] Yeah, that's terrific. There's a lot of talk about the uneven spread of audiologists throughout the country and how many counties don't even have an audiologist. So the more of this interim activity that can be done through telehealth, the better for patients.

[Dave] Yeah. We've lived through the development of accessibility and affordability of the technology, but that should not come at the sacrifice of access to professionals, and telehealth is an important tool in the many areas of the country with significant rural populations. We want to make sure patients continue to have the assistance of a professional on their hearing journey to help them optimize their outcomes.

[Andy] That's a great way of stating it. So I think we've done a pretty good job getting under the hood and taking a tour of the device. Are there any last things you want to say before we close?

[Dave] No, I think you really did your homework, and I appreciate your questions and the opportunity to go beyond the high-level information. The three areas we're most concerned with are, first and foremost, sound quality and speech intelligibility, then health and wellness, and then the one area we didn't talk much about: the voice assistant. We can now do real-time translation for 71 languages in this product, and we're pretty pleased with that; we've had a lot of feedback on it. Now, I'm not going to overpromise. It's not William Gibson's Neuromancer, where I'm instantly fluent in a language I know nothing about.
But I can tell you that I've worn these devices and used the translation feature around the world to communicate essential things, like how to find my hotel and how to get a beer, in languages I know nothing about. It's really been a treat to use that, along with the transcription and the ability to just tap and ask a question like, "What's the weather going to be?" The advance we made in Genesis is that I can now ask from the home screen of the app. There's a little microphone in the upper right; rather than having to tap, I can just press that and ask what the weather is going to be today. And I don't have to say, "What's the weather going to be today in Eden Prairie, Minnesota?" It now uses the location services on the phone, so all I have to do is ask for the weather. Which is just depressing anyway, because it's freezing rain and snow right now, so I don't even ask.

[Andy] I'm not that far south of you, so yeah, I know. It's actually raining here. And I didn't go there earlier because, although I've done entire seminars on voice assistant use through hearing aids and appreciate that sort of thing very much, it's a little outside the core hearing function. And yet I think for people our age and younger, it becomes more and more important, because they're used to doing those things in the rest of their lives, and it becomes a negative if they can't do the same through their ears. So it makes a lot of sense, and I appreciate it.

[Dave] What we really wanted to do was make it all one package. The new My Starkey app, we think, has a very simple layout: the basic functions, volume control, Edge Mode, plus, as I showed you earlier, the programs, which is also where I would go to update my firmware. And if I want, I can just swipe. I'm in Edge Mode now, so I can swipe on here and change programs.
And you notice that the color changes, too. For somebody with aging eyes, being able to simply swipe like that and not have to squint to see which program I'm in takes vision, hearing, and a simple aesthetic into account. And to use the voice assistant, all I have to do is hit the button. I know I could go in and use a different assistant program, but integrating it all into a simple, easy-to-use application is part of the simplicity we want, with the sophistication under the hood, as we've been talking about for the last hour.

[Andy] Well, that's a great way to wrap it up, because it's clear you've thought about the user experience from start to finish, with input from patients and providers alike. So that's excellent. I really appreciate you spending the time to go through the new hearing aid in detail: everything it does, and the design process and design thinking behind it. So thanks a lot. I appreciate you spending the time with me today.

[Dave] It's my pleasure. And someday I look forward to giving you a test ride.

[Andy] That would be great. Thank you.

[Dave] All right, thanks. Take care. Bye-bye.
Andrew Bellavia is the Founder of AuraFuturity. He has experience in international sales, marketing, product management, and general management. Audio has been both an abiding interest and a market he has served professionally in these roles. Andrew has been deeply embedded in the hearables space since the beginning and is recognized as a thought leader in the convergence of hearables and hearing health. He has been a strong advocate for hearing care innovation and accessibility, work made more personal when he faced his own hearing loss and sought treatment. All these skills and experiences are brought to bear at AuraFuturity, providing go-to-market, branding, and content services to the dynamic and growing hearables and hearing health spaces.
Dave Fabry, PhD, is the Chief Innovation Officer at Starkey, responsible for driving end-to-end innovations within the clinical audiology department. With a Ph.D. in hearing science from the University of Minnesota, he has had an accomplished career spanning academia, clinical roles, and industry positions. Dr. Fabry is also an active member of several audiology societies and is a licensed audiologist in Minnesota, Florida, and Rwanda. His expertise and experience in implementing forward-thinking concepts have been instrumental in shaping Starkey’s superior product designs.