The Signia IX Hearing Aid Platform Explained: A Closer Look with Brian Taylor, AuD

HHTM
October 3, 2023

This week, Andy Bellavia interviews fellow co-host Brian Taylor to discuss the release of Signia’s new Integrated Experience (IX) hearing aid platform. The conversation explores the platform’s innovative features, including real-time conversation enhancement, advanced beamforming, and its readiness for future technologies like Auracast.

Taylor emphasizes the goal of optimizing conversations and improving the overall well-being of hearing aid users, showcasing the ongoing evolution of hearing aid technology to enhance user experiences.


Full Episode Transcript

Hello, everyone, and welcome to This Week in Hearing. It’s well known that understanding speech in noise is the most intractable problem that developers of hearing aids struggle to address. This is especially true for prescription hearing aids, which must really mind their power consumption to achieve all-day wear in a discreet, comfortable device. From the perspective of both an industry member and a hearing aid user, I can relate to the difficulty. Shari Eberts has written eloquently on this topic, so much so that she’s a reference in Signia’s latest white paper, released this month. Brian Taylor, Senior Director of Audiology at Signia, was one of the authors of that paper. In it, they described speech-in-noise measurements on their new Integrated Experience, or IX, hearing aid platform. Brian put aside his This Week in Hearing co-host hat to join me to discuss what is unique about IX. Welcome to the other side of the interview desk, Brian. Thanks, Andy. It’s great to be here with you.

And it’s great to have you. Most people know who you are already, but I’d be remiss if I didn’t ask you to introduce yourself and tell everyone a bit about your background. Well, sure, I’m happy to do that. I am the Senior Director of Audiology at Signia, so that’s my primary role in the profession. I’m also part of the This Week in Hearing team with you and a few others. I’m an editor of Audiology Practices, which is a quarterly publication from the Academy of Doctors of Audiology, and I teach a class at the University of Wisconsin. I’ve been an audiologist for more than 30 years. So it’s great to be with you. Thanks for having me on the broadcast.

Oh, it’s my pleasure. I always look forward to talking to people who are responsible for, or involved in, new hearing tech. So it’s great to dive into this platform and the feature Signia calls real-time conversation enhancement. Now, if I understand correctly, this builds on the earlier Augmented Experience introduced a couple of years ago. Maybe a good place to start is to describe that system. Right. So the AX platform came out in the spring of 2021, so it’s been a little more than two years. What was interesting and unique about AX is that it divided the listener’s soundscape, the incoming sounds in their environment, into two streams. There was a focus stream, which was primarily sounds coming from the front hemisphere of the wearer, and then a second stream, called the surrounding stream, primarily sounds coming from the back of the hearing aid wearer. What’s unique about that, really, is that each stream, the front sounds and the back sounds, is processed independently, with the goal, of course, of optimizing speech in the presence of background noise. So that’s, in a nutshell, what we call split processing, which was introduced in the AX platform.

So why would you have two streams and keep the ambient noise in the second stream, versus just trying to reduce it as much as possible? Well, that’s a great question. I think the best way to understand split processing and why it works is to contrast it with traditional directionality, front-facing directional microphones, which have been around, as you know, for many, many years, as well as contrasting it with omni. I think everybody knows that with an omni system, all the incoming sounds are processed as a single stream.
Whatever the predominant sound might be is how the hearing aid processes everything; that’s omni. With front-facing directional, which is what most directional systems are and have been for a long time, the gain from the back and the sides is greatly reduced. So there are some real limitations with that, which are overcome with the split processing system we’ve been talking about. And what are those limitations? Well, the limitation with front-facing directional is that the gain for some sounds of importance that might be to the side of or behind you can be reduced too much. And of course, the limitation of omni is that it has a tendency to pick up everything, speech and noise alike. Yeah, right. Omni makes sense as a limitation because you don’t improve the SNR at all; if you just go full omni, it doesn’t help. But it’s interesting, then: you’re doing what the earbud people would call ambient awareness by letting ambient sound from all around you in at a reduced level, so you still maintain awareness while focusing on the person in front. Exactly. That’s a limitation of a traditional front-facing directional system that is overcome with split processing.

Okay, and then what improvements were made when the real-time conversation enhancement system was created? Well, I think the big innovation is that rather than having just the front-back split, we can now split the front into three different snapshots, or three different streams, each one primarily from the front, and then also include a stream from the back. All four of those streams can be processed independently of one another, meaning that if speech is detected in one of those streams, the hearing aid will turn up the gain for the speech sounds and attenuate any noise that might be in that stream. It can do that now for three different spaces in the front, snapshots from the front, which, if you think about it, is really ideally suited for somebody listening in a conversation where there are multiple talkers, kind of to the side and maybe a little bit in front, with the back surrounding them. Yeah, it makes perfect sense. Like, you’re sitting at a round table with a few people around it, so you’re actually creating three different directional streams versus just widening out the beamforming to encompass all the people, and you’re processing each one separately. Exactly. Three from the front, and you can’t forget what’s coming from the rear of the listener as well. But yes, three primarily front-facing streams.

Okay. So you’re processing for speech enhancement on the three front streams, and then the back is really the original split system, so you have ambient awareness. Exactly. And if the back hemisphere picks up speech, if it detects it, it will also enhance that and turn up the gain for it as well. Okay, so four total, really, although typically it’s the three in front doing speech enhancement. And then, if I understand correctly, you can actually follow a moving speaker. How does that work? Well, the signal classification system in all hearing aids today, all prescription hearing aids at least, is highly sophisticated, and I think Signia is in that group, of course. It’s able to identify, through spectral analysis of the incoming sound, what it’s listening to. If the hearing aid’s classifier thinks it’s speech, it’ll more or less lock onto it.
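To make the multi-stream idea above concrete, here is a minimal sketch in Python of independent per-stream processing: each stream (three front snapshots plus a surround stream) gets its own crude speech detector and its own gain before the streams are recombined. The function names, threshold, and gain values are illustrative assumptions, not Signia’s actual algorithm; a real hearing aid would work in frequency bands with a far more capable classifier and finer gain control.

```python
import numpy as np

def looks_like_speech(stream: np.ndarray, threshold: float = 0.02) -> bool:
    """Very crude stand-in for a speech detector: flags a stream when its
    short-term energy fluctuates strongly, the way a speech envelope does."""
    frame = 160                                   # 10 ms frames at 16 kHz
    n = len(stream) // frame
    energy = np.array([np.sqrt(np.mean(stream[i * frame:(i + 1) * frame] ** 2))
                       for i in range(n)])
    return energy.std() > threshold

def process_streams(streams: dict[str, np.ndarray]) -> np.ndarray:
    """Process each stream independently, then recombine: streams where
    speech is detected get a boost, the others are attenuated."""
    out = np.zeros_like(next(iter(streams.values())), dtype=float)
    for sig in streams.values():
        gain = 2.0 if looks_like_speech(sig) else 0.5   # ~ +6 dB vs. -6 dB
        out += gain * sig
    return out

# Example: three front "snapshots" plus a surround stream. Random noise
# stands in for one second of microphone audio at 16 kHz.
rng = np.random.default_rng(0)
streams = {name: rng.normal(0.0, 0.01, 16000) for name in
           ("front_left", "front_center", "front_right", "surround")}
output = process_streams(streams)
```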
If the location or proximity of the speech changes, or its level changes, the hearing aid will continue to amplify that sound. So if the talker is moving around a little bit, it’s able to follow them and continue to amplify, or turn up the gain for, that incoming sound. So you can detect where speech is coming from and then essentially steer the beam. Exactly. And that’s really the foundation of it. Why that works so well in our product is that we’re on our fifth or sixth generation, arguably more, of bilateral beamforming, and that’s the foundation of how this system works. The generic term for that would be spatially based noise reduction. And you really get an advantage, I think, when you can use the microphones on both sides of the person’s head working together to get those added streams.

Yeah, that’s actually worth touching on, because if you’re only doing the beamforming in an individual ear, even if the left one is doing it and the right one is doing it, each of those ears only has a short distance between its two microphones to work with, and that limits how much beamforming can be done. If you spread it out, in other words, if you have the two ears talking to each other, then you can use the microphones on either side of the face together, and you have a wider distance between them; you have more control over the beamforming. Exactly. And that’s the foundation for how this multidirectional streaming platform works. It wouldn’t work without the sophistication of the bilateral beamforming system. Yeah, and I wanted to touch on that because there are beamforming earbuds too, and for milder hearing losses they’re going to work just fine to get the SNR up a little bit. But the sophistication of bilateral, or binaural, beamforming is something that for the most part has been reserved for hearing aids, with their more sophisticated processing. For me personally, one of the things that’s amazing about prescription hearing aids is the level of signal processing tech within.

Yeah, I think for anybody who’s been doing this for more than 25 years, the level of sophistication is amazing; each generation, each platform gets more and more complicated. It becomes harder and harder to explain. When I find myself in front of a group, I have to rely on a lot of charts and graphs, and I have to be able to draw things so people can see, with a visual, what’s going on. Yeah, right. And actually, your white paper does a great job of that. But the most amazing thing, really, is that all this is going on in a device which is so small and comfortable and whose battery lasts all day. Exactly. I often have this conversation: do I go with earbuds, OTC, or prescription hearing aids? In the context of hearing loss level, there are pretty good devices out there for a few hours of situational use, but when you need hearing all day, it’s a pretty impressive device you can put in your ears today with a prescription hearing aid. No doubt about it. Yeah, I think nowadays, with a rechargeable hearing aid, you get about 24 hours of use before you have to recharge it. So it’s pretty darn good.

It is. But the interesting thing is, as sophisticated as the beamforming is right now, you’re still only getting, according to your white paper, one or two dB of SNR improvement with real-time conversation enhancement turned on versus off. Right, between one and two dB.
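The aperture point discussed above is easy to see with a toy model. The sketch below compares a summed two-microphone pair at roughly single-device spacing (about 1.2 cm) with one microphone at each ear (about 16 cm apart), evaluating the response to a 1 kHz plane wave arriving from different directions. It is a free-field illustration of the spacing effect only, with assumed distances and no head shadow or steering delays; real binaural beamformers, including Signia’s, are far more sophisticated.

```python
import numpy as np

C = 343.0      # speed of sound in air, m/s
FREQ = 1000.0  # probe frequency, Hz

def summed_pair_response_db(spacing_m: float, azimuth_deg: float) -> float:
    """Response (dB re. straight ahead) of two summed omnidirectional mics
    to a plane wave arriving from azimuth_deg (0 = directly in front)."""
    path_diff = spacing_m * np.sin(np.radians(azimuth_deg))
    phase = np.pi * FREQ * path_diff / C
    return float(20 * np.log10(max(abs(np.cos(phase)), 1e-6)))

for azimuth in (0, 30, 60, 90):
    single = summed_pair_response_db(0.012, azimuth)    # two mics, one device
    binaural = summed_pair_response_db(0.160, azimuth)  # one mic at each ear
    print(f"{azimuth:>2} deg   single device: {single:6.1f} dB   "
          f"ear to ear: {binaural:6.1f} dB")
```

At this frequency the widely spaced pair attenuates sound from 90 degrees by roughly 20 dB, while the closely spaced pair barely changes it; that extra spatial leverage is what ear-to-ear processing provides.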
One or two dB doesn’t sound like much, but how significant is that in practice? No, I mean, that’s true. That’s a great question, because I think, first of all, it speaks to the fact that really all prescription hearing aids are pretty darn good these days. They all do a fairly decent job of improving the signal-to-noise ratio, so it’s harder to squeeze more and more performance improvement out of them as the platforms continue to evolve. That’s one thing. But 1 to 2 dB is still pretty significant. If you think about it, and I think a lot of your listeners will know this, if you look at a performance-intensity curve, the improvement as the SNR changes, that’s a pretty steep curve, especially for sentences. And 1 or 2 dB of improvement can sometimes mean more than a 10% improvement on a word recognition test. A 10% improvement in a noisy situation, even a 5% improvement, can be the difference between giving up because it’s too noisy and actually following the conversation. So 1 to 2 dB may not sound like a lot, but in many cases it can be a difference maker.

Yeah. So it’s really worth fighting for each dB. But let me ask you a question, then: the technology with these kinds of acoustic techniques has gotten really sophisticated. How much more gain do you anticipate being able to achieve? Or will it take a completely different breakthrough to get, say, the next 3 or 5 dB of SNR improvement? Yeah, I like to say that what we do in our profession is not revolutionary; it’s more evolutionary. So every platform is a little bit better down the road. I think you might start to see the artificial intelligence inside the hearing aid, the signal classification system, be trained to recognize certain voices. You could maybe train the hearing aid; imagine a child wearing a device, and you train it to recognize the voice of the parents or the teachers. And no matter what environment you’re in, it’s always going to lock onto that specific voice, amplify it, and suppress the other sounds a little bit. We’re not there yet, but that gives you a glimpse of the direction we’re heading: that kind of signal processing technique. Yeah. Okay, well, as someone who wears pretty modern devices and still wishes there were more, I’ll be waiting with bated breath for machine learning speech-in-noise separation techniques to hit prescription hearing aids. We’ll record a session when Signia introduces such a thing. Yeah. Hopefully down the road, in the not-too-distant future, we can talk more about it.

So, aside from the conversation enhancement, what else is in the IX platform that’s new? Well, there have been some upgrades in what we call the dynamic soundscape processor. The signal classification system has been upgraded to match this real-time conversation enhancement processing. It comes in a couple of different form factors, one of which is the Silk, a rechargeable instant-fit CIC. There have also been some upgrades, maybe not exactly with this launch but prior to it, to one of our signature features, OVP, Own Voice Processing 2.0, which has gotten a little bit more sophisticated. So, again, I think there have been some enhancements to the overall processing strategy that go along with this launch.
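Circling back to the one-to-two dB figure discussed above, here is a quick back-of-the-envelope example. It assumes a performance-intensity slope of about 8 percentage points per dB for sentences in noise, a commonly cited ballpark rather than a number taken from the IX white paper:

```python
# Assumed performance-intensity slope for sentences in noise, in
# percentage points of recognition per dB of SNR (ballpark value, not a
# measurement from the IX white paper).
PI_SLOPE_PCT_PER_DB = 8.0

for snr_gain_db in (1.0, 2.0):
    est_gain_pct = PI_SLOPE_PCT_PER_DB * snr_gain_db
    print(f"+{snr_gain_db:.0f} dB SNR -> roughly +{est_gain_pct:.0f} "
          "percentage points on a sentence recognition test")
```

At that assumed slope, 1 dB maps to roughly 8 points and 2 dB to roughly 16 points of word recognition, in line with the “more than 10%” figure mentioned above.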
Okay. And one of the interesting things I saw, and I’m quoting here, is that the IX hearing aids are ready for future Bluetooth LE Audio. When will that come about, and will Auracast be supported as well? The plan is yes, Auracast will be supported. For those that don’t know about Auracast, I would encourage you to look into it. Andy, you’ve probably had some people on here who have talked about it, but Auracast is a really big deal. So we’re happy to say that with future firmware updates, IX will support Auracast. I would expect that sometime in 2024; I don’t know exactly when, but I know it’s in the pipeline for the very near future. Okay, great. And you gave me an opening to plug the ten-minute video on Auracast I did for Computational Audiology, which is on the AuraFuturity YouTube channel now. Actually, no, I’m sorry, I did it as a LinkedIn post, because it’s on the Computational Audiology channel. That will give a good overview of Auracast and LE Audio and what it means for hearing-impaired people. Yeah, no, it’s huge. I don’t like to use the term game changer very often, but I think this might be a game changer. I think so, too.

Although there’s going to be such a long transition period that I still encourage people, if they’re responsible for a public place, not to hold back from installing a hearing loop, because the transition period is going to be so long. Signia has models in the IX line that have telecoils in them. So then, when you do the upgrade for Auracast, you’ll actually have a model with both the telecoil and Auracast, correct? Exactly. I’m glad you brought that up, because one of the models that launches in October is the Pure Charge&Go with the telecoil. We know that even though Auracast is on the horizon, for the next several years we need to have devices out there with telecoils that are compatible with loop systems. And our first product with IX will have a telecoil; at least one model will have a telecoil in it. So thanks for that.

Oh, that’s terrific. And I actually think Auracast is going to deploy a little bit differently. I think the first Auracast implementations are going to be in places with mass consumer appeal. So I think you’ll see things like more sports bars installing Auracast, so you can hear all ten TVs in the bar, and anybody with an Auracast-capable earbud will be able to do it. And so you have a much more immediate demand, and also a differentiator if you’re trying to get more people into your place. So I think Auracast is going to coexist with hearing loops in places like houses of worship and movie theaters, the single-channel applications, while some of the really interesting multichannel applications will be the first to get Auracast, and then Auracast will move in from there. But that’s got to be a ten-year span. So for somebody to know that their hearing aid can support loops today but be future-proof for Auracast installations tomorrow, I think that was a smart move. Yeah, that’s good; I’m glad you say that. You bring up an interesting point about sports bars. In a few years, when you walk into a sports bar, you might have everybody on their earbuds. They might actually be pretty quiet places, because everybody’s streaming right into their earbuds and listening to their own game from among all the TVs that are there. So it’s going to change things a little bit, I think.

Yeah, it’ll be interesting to see. And I should ask if the IX does the same thing as mine: in mine, I can vary the mix between streaming and ambient. So if I put the streaming part at about 30% and ambient at 70%, it sounds like a background radio, and I can talk to people perfectly while still getting the streaming audio. Is that the same case here? Yeah, exactly.
So you don’t necessarily have to be quieter. It’s no different than if the TV volume were turned up over there and you could hear it and still talk to your tablemates, with the advantage being that when you’re talking to your tablemates, you’re getting hearing-corrected audio and the benefit of the beamforming technique. So you can talk clearly with your tablemates and still hear the Auracast transmission, which is brilliant, utterly brilliant, in the way it’s going to make the experience of going to public places that much better for hearing-impaired people. Exactly. So it’s an exciting time to be in the profession and to be working with folks who need, or will benefit from, that kind of technology. Yeah, on a lot of different fronts. I’m, for example, getting the audio from this very podcast streamed to my hearing aids, because I can hear you much better that way than through the speakers of my computer.

So, as you look to the future: we’ve talked about some of the demands people have today, like connectivity, the value of Auracast, and the increasing sophistication of speech-in-noise processing techniques and beamforming with multiple streams. What are the needs you see that are yet to be addressed, and in what order will they be addressed? What are those important things that are still necessary to improve upon? Well, I think the whole area of using artificial intelligence, for both the fitter, the audiologist or hearing care professional, and the patient, to make what I would call smarter adjustments. I’ll give you an example of what I mean. We have a feature that’s been around now for three or four years called Signia Assistant, which essentially takes fitting information from thousands of patients who opt into the system. It allows a single individual wearing their hearing aids to take all of that adjustment information, about how other people have adjusted their hearing aids, and factor it into some options for how they might want to adjust their own. I’m making it sound more complicated than it really is, but with an app, a person can make adjustments on their hearing aid based on the fittings of thousands of other people. I think you’ll continue to see that evolve, and people who want to self-fit will probably be able to do it in a more intuitive and more accurate way than today, just because of the evolution of artificial intelligence.

I think you’ll probably see, and we already see, the hearing aid becoming sort of a wellness device that tracks heart rate, number of steps, how much you talk. We have a feature called My WellBeing that does all of that now, and I know others have fall detection, so you’ll probably see that become easier to use, and maybe more people thinking of their hearing aid as a wellness device. But fundamentally, at the end of the day, I think it’s always about how you can improve conversational ability: how can you improve somebody’s signal-to-noise ratio, how can you make the hearing aids sound more comfortable to reduce listening fatigue, those kinds of things. So we’re always looking at ways to incrementally improve signal processing. Okay, so the core hearing functions are going to continue to improve incrementally, but giving more app control to the users is something I’m certainly all for. In fact, I’ve created several of my own programs. On mine, for example, I didn’t have access to a mask mode.
I was able to create a mask mode that worked really well when everybody was wearing masks. And taking advantage of what’s generically called big data, in other words, being able to make some assumptions about what people need after fitting based on thousands of people’s user input, and using that to help guide a person to create programs that work better for them, I think that’s brilliant. Yeah, and biometrics too, because for biometrics to be really effective, the device has to be in your ear for a long time. If you’re just wearing it for an hour a day, you’re not going to derive very much insight from those little slivers of biometric input. But when you consider somebody wearing a device for 14, 16, 18 hours a day, and you’re able to gather biometric data, there’s a lot of room for tremendous insights about a person’s state of health and well-being.

Yeah, that’s a great point. I think one of the challenges, and maybe with artificial intelligence and big data we can make some inroads here, is that we need to find ways to get the average hearing aid wearer just to wear their hearing aids more consistently. I’m starting to see some data from a few places around the world showing that the auditory centers of the brain get rewired more quickly when a person wears their hearing aids for more hours per day or is fit to a target. The reason I say all that is that sometimes we forget some of the simple things: encouraging a person and teaching them how to wear their hearing aids consistently, counseling them so they don’t take them out of their ears, matching a prescription target for soft, average, and loud sounds. All those things are still as important today as ever, even as the technology advances. Yeah, honestly, it surprises me, because I’m so much more relaxed and less fatigued when I wear mine all day. I’m probably averaging about 15 hours if I go look at the stats in mine, and I don’t understand why people don’t wear them. So yes, that’s an endorsement for putting the devices in your ears, if you are a wearer and you’re listening to this podcast. I talk to patients every week, it seems, who can’t get over that hurdle of the first few months of use. They struggle for whatever reason, and that’s a whole other conversation, I think. But I wish more people wore their hearing aids as long as you do. Well, I think that’s a very good point. There’s probably improvement to be made in the onboarding experience: getting people used to the devices, helping them learn the features, getting them comfortable, so they do become all-day wearers. But that’s a different topic, as you said. Yeah, exactly. It’s something we’re thinking about at Signia, and hopefully you’ll see some new features down the road with respect to those things. Okay, I’m looking forward to seeing that, because anything that encourages people to wear their devices more often, or to begin their hearing journey sooner, is, I think, all for the best. Exactly. I agree 100%.

So, as we wrap it up, I’ll mention a couple of things. I’ll put the Signia white paper in the show notes so people can read it, because it’s a pretty interesting description of the technology in the IX platform. I’ll also throw in a link to the Auracast video for anybody who’s curious about how Auracast is going to be implemented. Before we go, any last thoughts? No, just thanks for having me on the broadcast. I would like, if I could, to put in a plug for IX with the listeners.
And that is: I think we’ve designed and built a hearing aid that’s really there to optimize conversation with this multidirectional, split processing. You’re going to hear more and more from us about the advantages of this type of technology when it comes to hearing in groups, enhancing conversations, and reconnecting people in the places that are the most important to them. Yeah. Ultimately, that’s what it’s all about, because the more comfortable people are in the more difficult settings, especially public places where it can be noisy, the more engaged they are and the better their state of well-being. So ultimately, that’s really what it’s all about: enabling people to maintain their social connections. Exactly. I think we all agree with that. And I’ll hopefully learn a little bit more in a few weeks when we meet at EUHA in Germany. Sounds good. Andy, thanks for having me on. I appreciate it. Oh, you’re quite welcome. Let me ask one last question: how can people reach you if they want to learn more? They can find me at the email address [email protected]. Terrific. Well, thanks a lot. Thanks for joining me and sitting on the other side of the table from the host position. And thanks to everyone for watching this episode of This Week in Hearing.

Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week and follow This Week in Hearing on LinkedIn and Twitter.

Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google, and more.

About the Panel

Brian Taylor, AuD, is the senior director of audiology for Signia. He is also the editor of Audiology Practices, a quarterly journal of the Academy of Doctors of Audiology, editor-at-large for Hearing Health and Technology Matters and adjunct instructor at the University of Wisconsin.

 

Andrew Bellavia is the Founder of AuraFuturity. He has experience in international sales, marketing, product management, and general management. Audio has been both an abiding interest and a market he has served professionally in these roles. Andrew has been deeply embedded in the hearables space since the beginning and is recognized as a thought leader in the convergence of hearables and hearing health. He has been a strong advocate for hearing care innovation and accessibility, work made more personal when he faced his own hearing loss and sought treatment. All these skills and experiences are brought to bear at AuraFuturity, providing go-to-market, branding, and content services to the dynamic and growing hearables and hearing health spaces.
