Exploring ReSound Vivia Hearing Aids: AI-Powered Sound, Bluetooth LE Audio, and More

HHTM
February 4, 2025

GN’s new ReSound Vivia hearing aids integrate advanced AI to deliver a more adaptive and personalized listening experience. The devices leverage a deep neural network (DNN) trained on 13.5 million spoken sentences, enabling them to differentiate speech from background noise more effectively than previous models. Alongside ReSound Vivia, GN also launched ReSound Savi, a new essentials range featuring Bluetooth LE Audio and Auracast™ capabilities for expanded accessibility and connectivity.

To gain deeper insights into these innovations, Andrew Bellavia and Shari Eberts interviewed Laurel Christensen, Chief Audiology Officer at GN, and Andrew Dittberner, Chief Scientific Officer. The discussion highlighted ReSound Vivia’s Intelligent Focus feature, which allows users to direct their attention more naturally in noisy environments, mimicking the brain’s ability to process sound. This advancement marks a shift from traditional directional microphones and noise reduction filters to a more intuitive AI-driven listening experience.

In addition to speech enhancement, both ReSound Vivia and ReSound Savi integrate Auracast™ broadcast audio, a next-generation Bluetooth feature that allows users to connect to public audio sources in venues such as airports, theaters, and conference centers. GN has also introduced the world’s first app-integrated Auracast Assistant, simplifying access to streaming audio.

Full Episode Transcript

Until recently, hearing aids have, for the most part, used purely acoustic techniques to improve speech intelligibility in noisy situations. These typically include advanced filtering and directional microphones. AI played a supporting role in areas such as acoustic scene identification to automate hearing aid settings for different environments. That in itself was a major improvement. It sure beats having to change settings in the app when going from situation to situation, and modern devices often perform better than a person trying to create programs manually. More recently, AI has been implemented in other ways, such as identifying multiple voices and steering the direction of the mics toward them, even if they move. I’ve addressed this before, so I’ll just point out briefly that these techniques have, for the most part, been tapped out, with little room for further improvement.

As many hearing aid wearers continue to struggle in noise, manufacturers have been pushing the boundaries of a more advanced kind of AI to add another level of performance: in-line noise separation and speech enhancement. This requires a high-performance deep neural network, or DNN, placed directly in the audio path of the hearable device or hearing aid and carefully trained to recognize what is speech and what is noise. The DNN can either reside in the main processor chip or be a separate chip altogether. Either way, the DNN has to be fast and powerful enough to identify and remove enough noise in real time to make a difference, without degrading speech or adding too much delay. You have likely already benefited from AI noise separation without realizing it. For example, internet meeting platforms have been cleaning up your speech for years, but they have extremely large resources in the cloud to create the DNN and train it. Only recently has it become possible to pack that kind of performance into a hearing aid.

The latest company to accomplish this is GN, with their new model known as the ReSound Vivia and its Beltone equivalent, the Envision. Each hearing aid manufacturer implements AI noise reduction in their own way. Shari Eberts and I had the pleasure of attending GN’s prelaunch event in Las Vegas, where we got the details on how they did it. Let’s hear from Chief Audiology Officer Laurel Christensen and Chief Scientific Officer Andrew Dittberner, two of the people who are key to Vivia’s development.

Laurel, thanks again for joining me, and it’s great to have Shari with us as well. This is a great combination, because you’ve got two hearing-impaired people who approach life from totally different points of view, I think, right along with, obviously, an expert in hearing care and how to implement hearing devices. So thanks a lot for joining us.

Absolutely.

So congratulations on the launch of the Vivia. It’s really exciting to see how you’re implementing the DNN to achieve even better response. Now, if I understand correctly, and I’ll use the Nexia as an example, I’m wearing the Nexias now, and I’m really getting good results in this environment. So now I’m thinking, all right, if it’s quiet and I’m in All Around mode, okay, it’s working. It starts to get a little noisier, and the Nexia is doing some acoustic noise reduction. Then it gets noisier and the beamformers are dialing in. But now you’ve got the next level with the DNN. Is it correct that it’s kind of layered? The DNN would kick in afterwards?

So, that’s not exactly how it works.
In your All Around setting, what you’re getting first is what we call asymmetric directionality. It turns on one directional microphone so that you can still hear everything around you. When the environment gets very loud, then we turn on two directional microphones. This is a whole step above that, and you actually have to make a conscious decision to turn it on. It’s for when you are buried in noise and you are not doing well. You want to hear what you’re looking at, but you’re not hearing it very well. You’re going to change to the program that has the artificial intelligence.

And I love that both of you are here, because you just said you approach things in very different ways, and everybody does. What might be the signal of interest to you is not the signal of interest to somebody else. We don’t want the hearing aid to ever make those decisions for you. So the way we do this is that we have trained a deep neural network chip with very common, known noises, things that we would all agree are noise. We have trained it to know what that noise is. And then when it gets into that environment, it will take the noise down, and you will definitely hear it going down. But it doesn’t all go away. You’re still going to get that noise, because there are a lot of things that you might want to hear out of that noise. We’re going to let the brain do what it does best and process the sound, but we’re going to have spotlighted the speech in the midst of all that.

Well, I’m really glad you didn’t consult my mother, because she would have told you that Pink Floyd is noise.

Yes, see, that’s exactly my point. We just don’t know what the signal of interest is, and we need you to decide what that is, not the hearing aid to decide that for you.

Well, I love that, and that, I think, is really the future. And this is, as I understand it, just the first step in that process. Tell us a little bit about what your vision for the future is in terms of letting the consumer make that decision.

Yeah, I mean, the future is bright. You know, I think there was a time when we all looked at hearing in noise and hearing aids and wondered, was it going to get any better?

Yeah, you’d run out of all the acoustic possibilities.

That’s right. And I think now we’re in a state where we can do more, but there are power consumption issues with it. So today, anything in hearing technology that uses a deep neural network is trained offline. We train it with speech embedded in noise and sentences in different languages, and it’s trained and trained and trained on millions and millions of samples. Then it is put into the hearing aid to do that in the regular environment. Over time, with the power consumption limitations getting better, you could have real continuous learning, and you could do some other things. I mean, you’re going to be able to get even more noise out. You’re going to be able to learn and change all the time. You know, my colleague Andrew Dittberner, who’s really...

I talked with him before I talked to you. I don’t know what order it’s going to end up in the podcast, but...

Yeah, well, my colleague Andrew talks about the fact that you’ll even be able to kind of crowdsource the learnings that the hearing aid has had. Someday, if somebody goes into a restaurant, you’ll probably be able to download what worked in that restaurant, and you won’t have to go through that yourself.
So, I mean, we’re talking some years ahead of us for sure, but the future for hearing in noise is bright. But I think the most important thing is that you have to consider what the brain does, what the brain is capable of doing, because the brain is not damaged in most people with hearing loss. So we don’t want to interfere with the signal processing. We want to give the brain what it needs to make its own decisions, and you decide what you want to hear, not the AI. That’s why we call it intelligence augmented. We’re going to augment what the brain can already do.

Yeah, so you can’t really distort what you’re hearing too much in different ways or you’re going to disrupt the whole auditory process. And unfortunately that happens quite a lot, where the hearing aid is making the decisions, and then you can almost be stuck in a situation where you don’t want to hear something and you can’t get out of it, because the hearing aid has decided that’s what you want to listen to. And so, as part of this philosophy, if I understood correctly, I have to manually activate what Andrew calls ‘Beast Mode’.

Yes.

So I have to turn on Beast Mode. I could do it from here, like selecting modes, or I could get out the app and turn it on. And so if I choose to turn it on in that situation, then it is going to go to work in the environment I’m in at that moment.

At that moment. And then it will take out the background noise, and at the same time we activate our four-mic beamformer. So whatever you’re looking at is what you’re going to hear. But we’re also not going to take all the noise out. Even though the noise reduction is quite nice, you’re still going to be able to hear some of that noise so that you don’t lose sight of everything else that’s happening.

And if I’m in full-on noise reduction mode in a Nexia, I get a certain level of performance. What extra do I get, in whatever terms you want to express it? How much better is Beast Mode? What am I getting with Beast Mode?

It totally depends on the environment in many ways. I mean, the acoustics of the environment, how it was trained, what the background noise is. It will vary per environment. But yes, you will get a signal-to-noise ratio improvement above and beyond what that beamformer gives, and the beamformer actually gives quite a lot.

Okay, excellent. Thank you.

You’re welcome.

So how important is Auracast to this new product?

Well, you know, for me, I can’t wait until Auracast is everywhere. Just a little background on it: Bluetooth Low Energy Audio is a new Bluetooth standard that has this public broadcasting component, and we’re starting to see Bluetooth Low Energy Audio in things, not just hearing aids. You’re seeing it in earbuds, cochlear implants, anything that has it. And we’re not seeing it everywhere now. I’m hearing there are maybe six places now in the U.S.

Okay, we’ve got to start somewhere.

That’s right, we’re doing something. But you know, the reason for that is we really haven’t had good commercially available Auracast transmitters until recently. Ampetronic now has one, and all you have to do is put that up. And there are actually several now. Bettear in Israel is doing it, and there’s a French company, I can’t remember the name, doing it now.

So yeah, it’s coming. The transmitter infrastructure is coming.

Yep. And that’s what we had to have.
That, and an Auracast assistant, because you have to be able to find those streams somehow. Just like you find Wi-Fi, you’ve got to find the stream, and then you select your stream, and then it’s coming straight to your ears. No distance, no noise. It’s game changing. And it’s game changing for everyone, not just people with hearing impairment.

And so people who are fitted with this hearing aid and install the companion app get the Auracast assistant in the app directly?

Yep, they get the Auracast assistant directly. And then we have a TV streamer today that we’ve actually tried out at Lincoln Center, and we were able to get all of the range that we wanted just with our TV streamer.

I was shocked at how well that worked.

That’s amazing.

It really is game changing. And I don’t have a hearing loss, and I wanted my hearing aids in. I kept taking them out thinking, you know, does it sound better in or out? And it was better with those hearing aids in every single time, because it was plugged right into the soundboard.

Well, that to me is what I get so excited about for Auracast, because everyone’s going to want it. It’s not going to be something that’s just for people with hearing loss. It’s going to be something for everyone, to experience sound in a new and enhanced way in these difficult types of listening situations. And so who’s going to benefit the most? The people with hearing loss. But we really need to bring everyone along in that experience so that it rolls out faster.

I agree. The faster the better. Everyone will benefit. And I just think if you start the ball rolling, it’s really going to roll, because it’s really not expensive compared to what we used to do, or, you know, looping, which is a great technology that really served a phenomenal purpose. But for sound quality and ease and everything you’re going to get with Auracast, it’ll just be a lot better.

And looping advocates will say we still need to have those loops there until this product is available everywhere, right? Because people need it now, and we need it in the future as well.

Absolutely. Couldn’t agree more. There is no reason to take a loop out or do anything like that. These need to coexist until Auracast really is everywhere.

Exactly. Yeah. Hopefully that day will be here soon. I’m looking forward to it. I think it’ll be a big year for Auracast. What is the Auracast experience you’re most waiting to have?

Oh, that’s a great question. For me, it’s going to be at conferences and at the theater, so that I can go into any conference with confidence that I am going to be able to get that signal directly into whatever device I choose to use and be able to hear it clearly. And then going to the theater, a lot of times now you have different types of devices that you can use. They’re not great. They don’t always work. It matters where you sit. And I just think having that freedom and confidence to attend anything that I want at any time is just life changing. It’s transformational.

Absolutely. I cannot wait till we have that. What would be yours for Auracast?

Yeah, the same things. I mean, you know, I’ve spent 35 years of my life developing hearing aid technology. I am passionate about helping people hear, and this is just one of those leaps forward that will help everyone. So many people never even experienced telecoil and how much that does.
I mean, I just think that this is going to be very game changing for anyone with hearing loss, and that’s why I do it.

Yeah. And the availability is so much easier, right? It’s so much cheaper and easier for venues to implement this, and they’re not only helping what they might think is a small segment of their venue attendees; it’s helping everyone.

Yeah. I actually think CES is a great example, because in the Venetian they have three levels of meeting rooms as you’re going into the convention center. Those are all portable, configurable meeting rooms, so they can’t loop those, right? Because they can be a different setup for everything: some large, where the keynotes are, some small for private meeting spaces, and so on. So they bring in a portable sound system, put up a couple of speakers, it’s really reverberant, and it’s hard to hear. Whereas you could just take the Auracast transmitter on a pole, set it up, have a bank of receivers for people, or they can use their own devices, and you’re good to go. Did either of you experience the Auracast at CES? Yesterday they had accessibility sessions. Were you there?

I did. I went to one of the sessions, and I recorded it, and I even tried several different devices.

Oh, cool.

Yeah, so they had multiple rooms where you could go to different sessions and use and experience Auracast.

I didn’t get to go yesterday. But I think having that on the program, it was very much pointed out that it was about accessibility. I just think more and more awareness is what we have to have. So congratulations once again.

Thank you. Thank you. And a pleasure to talk with you, as always.

Really good to talk to both of you.

According to GN, their DNN was trained on 13.5 million spoken sentences in various languages and with varied vocal effort, across 3.9 million tuned sound parameters. These large data sets are necessary to provide improved performance in all situations without actually degrading the voices one wants to hear. Andrew Dittberner was the ideal person from whom to learn more about how it’s implemented and how it benefits people with hearing loss.

So I have with me Andrew Dittberner. He’s the Chief Scientific Officer for GN. Thank you for joining me today.

Yeah, thank you.

It’s really exciting what you’re announcing. Please tell us a little bit more about the AI implementation and your philosophy for executing it in a hearing device.

Sure, yeah, this is exciting. You know, artificial intelligence has been around for a while now, and everyone’s been really talking about ChatGPT and what it’s doing. On the hearing aid side, of course, we don’t have the processing power that, you know, an Nvidia chipset or a server full of Nvidia chips has. So what we try to do on the hearing aid side is basically embed what we call deep neural networks, which is just a smaller piece of hardware that we add into our existing hardware. Its job is to basically try to pick out the speech signal, clean it up, and represent it back to the user. That’s pretty standard technology that’s used now in many applications. What’s tricky on the hearing aid side is that we have very limited power to do this in. So we’re constrained by the number of MIPS that we can use, the memory, the power consumption, the battery life, and even the size of the actual device. We have to make sure we keep that small too, right?
So you’ve got a lot of constraints you’re working with, and that would have driven some of the decisions you’ve made. One chip or two?

We added a second chip in order to handle our AI function, basically our deep neural networks. And this was decided and done just because of its efficiency. We were able to get a very efficient chip in there so we wouldn’t constrain and overuse our battery.

Okay. And then how about size, say, compared to the Nexia? And talk about the battery life.

Yes, the size compared to the Nexia is awesome. Probably when you walked around this event here, you had a chance to do a side-by-side comparison, and you’ll be really pleasantly surprised that we didn’t compromise on the size. It’s fantastic. And then when it comes to the battery life, we wanted to keep what we had before. I mean, why should you lose battery life? What can we do to improve it? Now, obviously any sort of AI functionality is kind of like Bluetooth connectivity: it’s going to drain, right? So the more you use it, the more you drain. But we still developed it to be efficient and to be very similar to streaming, so you get at least 20 hours of battery life out of your system while you’re using that sort of functionality. And typically, if you don’t use it as much and you’re just doing the daily functions of a hearing aid, you should expect to get about 30 hours of life out of it.

Okay, so the implication is we’re in a perfect environment, because if anybody’s been to CES, they know that it’s this loud all day long, and I’ll still get full-day battery life even though I’m really taxing the DNN at that point.

Yeah. And definitely this isn’t daily life for any of us, right? But this is a time when you can really see the benefits of having something that’s power efficient. The last thing you want to do is spend half a day and have your device all of a sudden die on you. And then what do you do the next half of the day, when you have nothing? So definitely you need it.

Describe the philosophy of the DNN. How is it employed? Like, if I were in this environment, for example, how actually is the DNN working? What is it doing? How is it helping me hear better?

Sure. So hearing in noise, as you know, is the number one problem most hearing-impaired people face. It’s a significant problem, and even without hearing loss, we all suffer from it. Hearing in noise is hard. So what we do a little bit differently is that the DNN is kind of another tool in the toolbox for noise management. I mean, you have directional microphones, you have basic standard signal processing noise reduction that does gain reduction in bands and things like that. But what we do very uniquely with our DNN for environments like this is that when you’re really struggling to hear in noise, you want to give everything you have in your toolbox to that circumstance. So when we implement the DNN, we make sure we have our best beamformer on, we have any other noise management cleanup going on, and we want to preserve whatever we can for the human brain, so the brain can get involved in processing. And then we add the DNN, constrained to that beam, to do the work only in the direction you’re looking.
At this point, it doesn’t really make sense for a dominant person talking behind me to become the dominant voice to hear. I mean, we’re just talking, the two of us. The last thing I want is for the guy right behind me to be all I hear in my hearing aids, so that when you’re talking to me, I can’t hear you because he’s just louder.

Okay, so what you’ve done is essentially a layered approach. So if I leave it in, I’m assuming it’s still called All Around mode. If I leave it in All Around mode, at first I’m only going to get some acoustic noise reduction, then I’m going to start to get the beamformer, and then on top of all that, when necessary, you’re going to dial in the DNN as well, on top of the beamformer. So you’re not trying to discriminate behind me; you’re trying to focus on what’s in front of me, because in this sort of conversation, most of the time I want to talk to the person I’m looking at. Is that a correct way of describing it?

Absolutely. That’s exactly what we do. And the cool thing about that is, even if you have a second person talking beside you, you never get two talkers at once, right? So you get one person talking. When the other person starts talking, you turn and look, and the DNN will handle that voice the moment you look at it, just like it’s the main signal. So you’re controlling the DNN toward whatever you want to hear, instead of having the DNN just operate on its own, trying to decide for you what you need.

So I still have enough situational awareness that, for example, if she comes up and talks to me, I’ll hear her and I can turn and attend.

Yes. I mean, the thing is, you’re not eliminating all sounds; you’re just really accentuating the sound you’re looking at. So if someone is still talking off to the side, it’s going to be basically your beamformer that is your largest constraint.

All right. And so it’s clearly a layered approach. First you’re bringing in some acoustic noise reduction, and then the beamformer starts to dial in, like the Nexia I’m wearing now. But now you’ve got the DNN on top of that. How then did you actually decide what the training should be, what noise you were going to focus on, to give real improvement to real users?

Yeah, actually for this one, we went back to neuroscience to help guide us. There’s something in neuroscience that’s called spotlighting. In spotlighting, what happens is your brain focuses on a certain sound and actually puts all this energy into that sound. And what helps that sound become easier to interpret is when you add contrast to it. In other words, think of a dark environment here, where everything is dark. We can process the signal because we have enough light to process it. But if I add a light, like we do with this camera here, all of a sudden that signal becomes easier to process, because it unveils things that we normally would be really fighting with our brain power to process through. Now, even without this light you could still see it, but the light just makes it easier, and it reduces cognitive load and listening effort and things like that. But it isn’t necessarily always about SRT, the speech reception threshold. Sometimes you get SRT improvements. Other times you get signal-to-noise ratio improvements that help with cognitive load and listening fatigue, reducing those, making it easier to hear that person and easier for your brain to really process it.
Okay, and that was kind of the logic behind why we decided to really go in with our instrument and actually look a little beyond just SRT, into daily-life things like listening effort. You know, it’s hard to hear. You can hear, we all can hear a lot of the time in these environments, but it takes a lot of effort, and when you leave, you’re totally exhausted. Fatigued.

Especially tonight, where I’m going to be extremely fatigued from all this. Well, terrific. I really appreciate you going into a little more depth. It’s very exciting. Congratulations.

No, thank you very much. Appreciate it.

You’re welcome.

I had the chance to experience a demo with Vivias mounted on artificial ears, playing their output into headphones. While such a demo is never quite the same as wearing the devices themselves, the performance was impressive nonetheless. It’s exciting to see GN adding AI noise reduction to further improve performance over what was already an excellent previous generation of hearing aid. I’ll close this with a question for those millions of people with little or no audiometric hearing loss but difficulty understanding speech in noise: will GN eventually add this technology to a Jabra consumer or OTC product? As in any industry, cutting-edge features are first introduced at the upper end of the range and then filter down. It’ll be exciting to see how that plays out at GN and elsewhere as in-ear AI noise separation matures. To make some of the processing ideas discussed above more concrete, a few simplified, illustrative sketches follow.
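The first sketch illustrates, in broad strokes, the kind of in-line processing described in the interview: short audio frames so the added delay stays small, a per-band gain estimated by a model, and enhancement applied only to the already-beamformed look-direction signal without muting the background entirely. This is a minimal Python/NumPy sketch, not GN’s implementation; the frame sizes, the stand-in "DNN" (a simple SNR-based gain rule), and every threshold are illustrative assumptions.

    # Minimal sketch of in-line, mask-based noise reduction on a hearing aid's
    # audio path: process short frames to keep latency low, estimate a per-band
    # speech mask with a stand-in "DNN", and apply it to the beamformed
    # look-direction signal. All parameter values are illustrative assumptions.

    import numpy as np

    FRAME = 64            # samples per hop; small frames keep added delay low
    FFT_N = 128           # analysis window length (50% overlap)
    window = np.hanning(FFT_N)

    def toy_dnn_mask(mag_frame, noise_floor):
        # Stand-in for the trained network: returns a 0..1 gain per band.
        # A real DNN would be trained offline on millions of noisy sentences.
        snr = mag_frame / (noise_floor + 1e-9)
        return np.clip((snr - 1.0) / (snr + 1.0), 0.1, 1.0)  # never fully mute the noise

    def enhance(beamformed, sample_rate=16000):
        """Frame-by-frame enhancement of the already-beamformed signal."""
        out = np.zeros(len(beamformed) + FFT_N)
        noise_floor = np.full(FFT_N // 2 + 1, 1e-3)
        for start in range(0, len(beamformed) - FFT_N, FRAME):
            frame = beamformed[start:start + FFT_N] * window
            spec = np.fft.rfft(frame)
            mag = np.abs(spec)
            # crude running noise estimate, only to feed the toy mask
            noise_floor = 0.95 * noise_floor + 0.05 * np.minimum(noise_floor * 2, mag)
            gain = toy_dnn_mask(mag, noise_floor)
            out[start:start + FFT_N] += np.fft.irfft(spec * gain) * window
        return out[:len(beamformed)]

    if __name__ == "__main__":
        sr = 16000
        t = np.arange(sr) / sr
        speech_like = np.sin(2 * np.pi * 440 * t) * (t % 0.25 < 0.15)  # bursty "speech"
        noise = 0.5 * np.random.randn(sr)                              # steady noise
        cleaned = enhance(speech_like + noise, sr)
        print("input RMS %.3f -> output RMS %.3f" % (np.std(speech_like + noise), np.std(cleaned)))

In a real device this kind of work runs on dedicated low-power hardware; the sketch only shows the frame-based structure that keeps the added delay small and the idea of attenuating, rather than removing, the background.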
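The layering Christensen walks through (asymmetric directionality in the default program, two directional microphones as the environment gets louder, and a separate, user-selected program that engages the four-mic beamformer plus the DNN) can be summarized as simple selection logic. The sketch below is a hypothetical illustration: the type names and level thresholds are assumptions, and only its overall shape, that the DNN is never engaged automatically, reflects what is said in the interview.

    # Hypothetical sketch of the layered behavior described in the interview:
    # the default program escalates directionality on its own as the environment
    # gets louder, while the DNN program is only ever engaged by the wearer.
    # Thresholds and names are illustrative assumptions.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Directionality(Enum):
        OMNI = auto()
        ASYMMETRIC = auto()       # one directional mic, the other side stays open
        DUAL_DIRECTIONAL = auto()
        FOUR_MIC_BEAM = auto()    # used by the user-selected AI program

    @dataclass
    class HearingAidState:
        directionality: Directionality
        dnn_active: bool

    def all_around(ambient_db: float) -> HearingAidState:
        """Automatic layering in the default program (no DNN here)."""
        if ambient_db < 55:
            d = Directionality.OMNI
        elif ambient_db < 70:
            d = Directionality.ASYMMETRIC
        else:
            d = Directionality.DUAL_DIRECTIONAL
        return HearingAidState(d, dnn_active=False)

    def ai_program() -> HearingAidState:
        """User-selected program: four-mic beamformer plus DNN in the look direction."""
        return HearingAidState(Directionality.FOUR_MIC_BEAM, dnn_active=True)

    def current_state(user_selected_ai: bool, ambient_db: float) -> HearingAidState:
        # The hearing aid never turns the DNN on by itself; the wearer decides.
        return ai_program() if user_selected_ai else all_around(ambient_db)

    if __name__ == "__main__":
        for db in (45, 62, 80):
            print(db, current_state(user_selected_ai=False, ambient_db=db))
        print("user turns on the AI program:", current_state(True, 80))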
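Christensen also compares finding an Auracast broadcast to finding a Wi-Fi network: the assistant lists nearby streams, the wearer picks one, and the audio goes straight to both ears. The sketch below models only that flow with hypothetical data structures; it does not use any real Bluetooth LE Audio API, and names such as AuracastBroadcast and pick_broadcast are invented for illustration.

    # Hypothetical model of the "find a stream like you find Wi-Fi" flow for an
    # Auracast assistant. A real assistant would rely on the platform's
    # Bluetooth LE Audio stack to scan for broadcast announcements.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class AuracastBroadcast:
        name: str            # e.g. "Gate B12 announcements", "Theater main mix"
        broadcast_id: int
        encrypted: bool      # some venues protect streams with a code

    def pick_broadcast(available: List[AuracastBroadcast], choice: str) -> Optional[AuracastBroadcast]:
        """Let the wearer choose a broadcast by name, like picking a Wi-Fi network."""
        for b in available:
            if b.name == choice:
                return b
        return None

    def connect(broadcast: AuracastBroadcast, passcode: Optional[str] = None) -> str:
        # Stand-in for handing the selection to the hearing aids' LE Audio receiver.
        if broadcast.encrypted and passcode is None:
            return f"'{broadcast.name}' requires a passcode from the venue"
        return f"Streaming '{broadcast.name}' directly to both hearing aids"

    if __name__ == "__main__":
        nearby = [
            AuracastBroadcast("Gate B12 announcements", 0x1001, encrypted=False),
            AuracastBroadcast("Conference room A", 0x1002, encrypted=True),
        ]
        chosen = pick_broadcast(nearby, "Gate B12 announcements")
        if chosen:
            print(connect(chosen))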
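Finally, Dittberner quotes roughly 20 hours of battery life under heavy DNN or streaming-style use and about 30 hours under typical use. Assuming drain simply adds up linearly, an assumption made here for illustration rather than a published figure, those two endpoints are enough to estimate how long a mixed day lasts.

    # Worked example using only the two battery figures quoted in the interview
    # (about 20 hours under heavy DNN/streaming-style use, about 30 hours of
    # typical use). The linear-drain model is an assumption for illustration.

    def runtime_hours(heavy_use_hours: float,
                      heavy_life: float = 20.0,
                      typical_life: float = 30.0) -> float:
        """Total hours until empty if the first `heavy_use_hours` are heavy use."""
        battery_used_by_heavy = heavy_use_hours / heavy_life   # fraction of a full charge
        remaining = max(0.0, 1.0 - battery_used_by_heavy)
        return heavy_use_hours + remaining * typical_life

    if __name__ == "__main__":
        for heavy in (0, 4, 8):
            print(f"{heavy} h of heavy use -> about {runtime_hours(heavy):.1f} h total")
        # Under this model, 4 h of heavy use in a CES-like environment leaves 80%
        # of the charge, good for roughly another 24 h of typical use: about 28 h
        # in total, comfortably a full day.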


Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week, and follow This Week in Hearing on LinkedIn and on X (formerly Twitter).

Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.

About the Panel

Laurel A. Christensen, Ph.D. is the Chief Audiology Officer at GN Hearing. In this role she leads a global team of audiologists that are responsible for all aspects of audiology for the company including new product trials, audiology input to marketing, and global audiology relations which encompasses training and product support to subsidiaries world-wide. Prior to joining GN ReSound, she was a researcher and Director of Sales and Marketing at Etymotic Research in Elk Grove Village, IL. While at Etymotic, she was part of the development team for the D-MIC, the Digi-K, and the ERO-SCAN (otoacoustic emissions test system). Prior to this position, she was a tenured Associate Professor on the faculty at Louisiana State University Medical Center and part of the Kresge Hearing Research Laboratory in New Orleans, LA. During this time at LSUMC, she had multiple grants and contracts to do research including hearing aid regulatory research. In addition to her position at GN ReSound, she holds adjunct faculty appointments at Northwestern and Rush Universities. She served as an Associate Editor for both Trends in Amplification and the Journal of Speech and Hearing Research. Currently, she is on the board of the American Auditory Society and is a member of the advisory board for the Au.D. program at Rush University. Christensen received her Master’s degree in clinical audiology in 1989 and her Ph.D. in audiology in 1992, both from Indiana University.

Andrew Dittberner, Ph.D., is the Chief Scientific Officer at GN where he has worked in Research & Exploration for the past twenty years. He received his Master of Science degree in 1998 from the University of Arizona, his Ph.D. in 2002 from the University of Iowa, and completed professional graduate work in audio communication engineering at UCLA. Presently, he serves on a number of government and industry research working groups (e.g. MRC, IRC), continues to consult and support NIH-funded initiatives, and holds an adjunct professor position at Vanderbilt University. Recent accomplishments with his research team resulted in the release of a new binaural noise management system based on augmented intelligence (Omnia/Nexia products), the development of hearing protection prototypes that resulted in a new division in GN (FalCom), and prototypes using Artificial Intelligence for a personalized first fit in spatialized sound.

Shari Eberts is a passionate hearing health advocate and internationally recognized author and speaker on hearing loss issues. She is the founder of Living with Hearing Loss, a popular blog and online community for people with hearing loss, and an executive producer of We Hear You, an award-winning documentary about the hearing loss experience. Her book, Hear & Beyond: Live Skillfully with Hearing Loss (co-authored with Gael Hannan), is the ultimate survival guide to living well with hearing loss. Shari has an adult-onset genetic hearing loss and hopes that by sharing her story, she will help others to live more peacefully with their own hearing issues. Connect with Shari: Blog, Facebook, LinkedIn, Twitter.

Andrew Bellavia is the Founder of AuraFuturity. He has experience in international sales, marketing, product management, and general management. Audio has been both an abiding interest and a market he served professionally in these roles. Andrew has been deeply embedded in the hearables space since the beginning and is recognized as a thought leader in the convergence of hearables and hearing health. He has been a strong advocate for hearing care innovation and accessibility, work made more personal when he faced his own hearing loss and sought treatment. All these skills and experiences are brought to bear at AuraFuturity, providing go-to-market, branding, and content services to the dynamic and growing hearables and hearing health spaces.
