Can hearing aids be personalized for specific environments or even for people with no measurable hearing loss? In this episode, recorded at the National Acoustic Laboratories (NAL), host Andrew Bellavia sits down with Dr. Pádraig Kitterick to explore the groundbreaking advancements behind NAL-NL3 and COSI 2.0.
Together, they dive into the major updates to the world’s most widely used hearing aid fitting formula. NAL-NL3 builds on its predecessor, NAL-NL2, with improved comfort, adaptability, and new modules designed for challenging hearing scenarios—including noisy environments and individuals with minimal or no audiometric hearing loss. The discussion also covers how real-world data, clinician feedback, and cutting-edge AI techniques like reinforcement learning helped shape this next-generation solution.
The conversation closes with a look at COSI 2.0, a modernized approach to identifying patient needs using AI-enhanced self-reflection and goal-setting tools. These innovations represent a major leap toward more personalized, evidence-based hearing care.
- For more information about NAL-NL3, COSI 2.0 and other projects, be sure to visit NAL’s website.
Full Episode Transcript
When I arranged to visit NAL as part of
my Australian trip in March to learn more about NL3 and COSI 2, I was really
curious just how meaningful they would be. After all, NL2 has been a global
benchmark for years, and COSI is a basic open-ended questionnaire to encourage
clients to identify their hearing needs and how well they were met. What could they possibly do to update
them? Little did I know what was in the
fertile minds of the researchers at NAL. I was more than a little surprised, and
I think you will be too. Kind thanks to Pádraig Kitterick for
spending time with me, and for being so open in exchange for airing this after
their presentations at AAA and Audiology Australia. He made me feel right at home recording
in the NAL Library. Equal gratitude goes out to Justin Zakis
and Matt Croteau for fitting me with a set of NL3 hearing aids to take home and
try. I swear they were both reciting the real-ear speech passage under their breath, having done it so many times in the course of their trials. Stick around to the end where I share my
impressions. Hello, everyone, and welcome to This
Week in Hearing from the National Acoustic Laboratories. I’m here with Pádraig Kitterick. He’s the Head of Audiological Science, and
we’ve got a couple of exciting things to
discuss: the next level of the NAL formula, called NAL-NL3, and COSI 2.0. Thank you for joining me. – Thank you for having me. – So this is really exciting, actually. Let’s talk about NL3 first. Because in my mind, it’s almost like a
global standard fitting algorithm that’s known and used worldwide. What could you possibly do different? – So if we take a step back and think
about NAL-NL2, it’s got lots of bells and whistles, lots of knobs you can
turn, lots of parameters you can set, but it’s just, it’s really just one
formula that fits everybody. – Yes, so meaning regardless of
circumstances, NL2’s formula is running the same. – Exactly. – I mean, other than noise level, because
gain is changing with the– Exactly. So it changes the gain based on input
level, so it’s a nonlinear formula. But actually, it’s the same underpinning
formula that’s working the whole time. What we’ve realized, and the conclusion we’ve come to, is that one size no longer fits all. So when we think about NL2 and how we’re updating it, the philosophy at its core is to maximize intelligibility of speech in quiet, but don’t make it too
loud. So don’t exceed the loudness that a
person with normal hearing might
perceive. So that’s the core philosophy, we call
it. We’re retaining that for our speech in
quiet formula for NL3. What we realized is that there are needs
out there where that philosophy breaks down; it no longer provides the best solution. And so we’re going to start introducing new formulas that will go alongside the speech in quiet formula, and we’ll call these modules. And we’ll keep adding to those modules
over time. So I guess to answer your question,
there’s sort of two key things. One is we’re looking at the core speech
in quiet formula. And we’re going to address some sort of curly cases there, some sharp edges, so that it’s even easier to use and delivers that core philosophy of maximizing benefit for speech in quiet even more. But then there’s also when we’ve got to
introduce something brand new, where we’re actually changing the philosophy
to meet completely unmet needs that we believe all current hearing aid formulas for prescribing gain and compression don’t really provide an answer for, and I’m happy to come on and
talk about some of that. – OK, so when you talk about speech in
quiet, like we’re here right now in this beautiful setting, right, NL2 and NL3
will look very similar. A few refinements, but basically similar in how the nonlinear gain is applied. – Absolutely. So we’re sticking with what works well,
what has been widely validated. And so when you prescribe gain for quiet
with NL3, it will look quite similar. But there’s some important changes that
will be applied in certain cases. So when we went out to clinicians around
the world and we said, hey, where does NAL-NL2 run into problems for you? Where is it not the best solution? We got some very, very consistent
feedback. So one of the bits of feedback was about mixed losses. People often report that the amount
of gain that NL2 prescribes is too high to be tolerable by the
individual when it comes to mixed
losses. And so we’ve gone back and looked at that, and we’ve looked again at how we account for the conductive component and prescribe gain for the sensorineural component. And we’ve come up with a better balance that better reflects what we think is going to be necessary to achieve a good, acceptable result. – So you’ll actually then have base
NL3. There’ll be another version for people
with mixed losses? – Well, actually for NL3, when you prescribe, or when you input the air conduction thresholds and the bone conduction thresholds, it will automatically apply a refined approach to mixed losses. It’s not a radical change, but there were just some things about how NL2 calculated how much additional gain it needs because it’s a mixed loss; we’ve refined that calculation and it provides a much more achievable gain
profile. In particular, for example, NL2 would
ask for a lot of gain for conductive losses at very low frequencies and very
high frequencies that would be difficult to achieve and not very acceptable to
the end user. And so one of the things we’ve also done
there is we’ve looked at huge numbers of
fittings. So we have a lot of data on how NL2 is
used in the clinic. And we can see in the data that clinicians are not prescribing the amount of gain in mixed losses that NL2 asked for. They’re somewhere in between what NL2 asked for and what you would give for the sensorineural loss.
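To put the balance he is describing in rough terms (the split into components comes from the conversation; the weighting α below is a hypothetical placeholder, not NAL's published value): the conductive component of a mixed loss is the air-bone gap, and a prescription can add only a fraction of that gap on top of the gain it would prescribe for the sensorineural (bone conduction) component.

$$\mathrm{ABG}(f) = \mathrm{AC}(f) - \mathrm{BC}(f), \qquad G_{\mathrm{mixed}}(f) \approx G_{\mathrm{SNHL}}(f) + \alpha(f)\,\mathrm{ABG}(f), \qquad 0 \le \alpha(f) \le 1$$

Here AC and BC are the air and bone conduction thresholds at frequency f, and G_SNHL is the gain that would be prescribed for the sensorineural component alone. Clinicians landing "somewhere in between" corresponds to an α lower than the one NL2 effectively used.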
– So this is actually a key point, right? Because you actually took clinicians who fitted people with mixed losses; then, as they worked with those clients, they were adjusting the gain to maximize the clients’ satisfaction, and you’ve captured that data and baked it into NL3 for mixed losses. – That’s absolutely correct, and we’ve
done a similar thing across the board with high frequency gain. So a lot of the feedback we also got was
that NL2 prescribes a lot of high frequency gain, particularly for new
hearing aid users. Often people are fine tuning to reduce
that amount of gain. And so rather than just say, okay, we’ll
just reduce high frequency gain, we looked at what people are doing in the
clinic. So again, we have hundreds of thousands
of fittings we can look at. But then we went back and we said, okay,
we want to reduce that high frequency gain where it’s not adding meaningfully
to intelligibility. So it’s not just going in and turning
down the gain. It’s going in and saying, let’s find a
way to reduce the gain in a sensible way where we can still say that the solution
is going to maximize intelligibility, but will not add excessive high
frequency gain where it’s not really… – So it’s more comfortable and just as
intelligible. – Exactly. And then the final big change or pain
point that we’ve addressed with NL3 is reverse sloping losses. So again, this was a common sort of
report from clinicians that they would often find with reverse sloping losses,
that NL2 was asking for far too much gain in low frequencies and far too–
Regardless of conductive or mixed losses, right? – Exactly. So very similar reports. And so again, we went back, we looked at
thousands of clinical cases where we could see how it was really fit. And then we were able to go back and
think about the rationale and refine the formula so it would give a much more
acceptable fit. – And this is all baked into the fitting
software, so based on the nature of the hearing loss, you’ll get the gain
profile. – That’s exactly right. And it was funny, I could talk to an
audiologist in the United States or in the Netherlands, or in Australia,
or in any part of the world, and I would almost invariably get those same three
or four different cases that they would sort of note that NL2 would give them
something suboptimal. – Which is interesting, that’s also
language dependent, for example, the
same… As you make these tweaks, you’re not
having problems with more tonal languages, for example, or languages
with clicking sounds, or what have you. It actually works globally. – Yeah, absolutely. And in fact, one of the really big bits
of feedback we did get is the importance of retaining our tonal formula, for
example, as part of the solution, because people really valued the fact
that there was a way to optimize fits for people to use tonal languages. So that is something that we’ll retain
in NL3. I guess one other thing to mention is
that NL2 is ultimately an algorithm. It is based on science, where the authors, even over 10 years ago, were using quite advanced computational technology to take models of speech intelligibility. But you can imagine how, over the past 10 years, the technology available to solve that problem, to take these very nonlinear, complicated models of the perceptual processes of intelligibility and loudness, has moved on; we’ve now got fantastic technology that gives us really advanced tools. So we’ve brought some up-to-date technology into solving that problem. Now, in many cases that does not radically change the gain profile, but sometimes it does. In fact, sometimes what we find is that
the new technology that we’re using to solve that, what we call the
optimization problem, how do you maximize intelligibility and not exceed
a certain loudness level, we actually find that the answers that our more
modern technology gives are more similar to the kinds of things that clinicians
are fitting in the clinic. – Well, this is almost an application for
AI, really, right? Because you’re using the clinician
results as a training model for the new version of the algorithm. – That’s absolutely right. And we’re using something called
reinforcement learning, which is a technology that people will have heard
of that’s now being widely used to solve very complex problems. Google and DeepMind have applied that to
teach computers how to play Go and chess and all sorts of complicated things. It’s used by many industries to solve
very many complicated problems. And we’re now applying reinforcement learning to the various challenges that we have; in this case, maximizing modeled intelligibility while achieving certain loudness targets. And so that’s part of the difference that comes with NL3: we’re switching to this much newer technology, but we’re also learning from what really is achievable in the clinic, gain that is achievable in the real world, yet still maximizes intelligibility. And so we’re baking all of that into the core formula.
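To make the optimization problem being described concrete, here is a minimal sketch. It is not NAL's implementation: the band levels, thresholds, importance weights, and loudness cap are made-up numbers, the audibility and loudness functions are crude stand-ins for real intelligibility and loudness models, and brute-force search stands in for the reinforcement learning NAL describes using on the full-scale problem. It only shows the shape of the objective: maximize predicted intelligibility subject to a loudness ceiling.

```python
# Toy sketch of the optimization described above: choose per-band gains that
# maximize a stand-in intelligibility score without letting predicted loudness
# exceed a "normal hearing" ceiling. All numbers and both models are placeholders.
import itertools

BANDS_HZ        = [250, 500, 1000, 2000, 4000]    # audiometric bands
SPEECH_LTASS    = [55, 57, 54, 48, 42]            # unaided speech levels, dB SPL (made up)
THRESHOLDS      = [20, 25, 35, 50, 60]            # hearing thresholds, dB (made up)
BAND_IMPORTANCE = [0.10, 0.15, 0.25, 0.30, 0.20]  # SII-style importance weights (made up)
LOUDNESS_CAP    = 78.0                            # placeholder "normal loudness" ceiling

def intelligibility(gains):
    """Importance-weighted fraction of the speech dynamic range above threshold."""
    score = 0.0
    for g, s, t, w in zip(gains, SPEECH_LTASS, THRESHOLDS, BAND_IMPORTANCE):
        audible = max(0.0, min((s + g - t) / 30.0, 1.0))  # 30 dB speech dynamic range
        score += w * audible
    return score

def loudness(gains):
    """Crude loudness proxy: the highest aided band level."""
    return max(s + g for s, g in zip(gains, SPEECH_LTASS))

def prescribe(step=5, max_gain=40):
    """Exhaustive search over a coarse gain grid (fine for 5 bands in a demo)."""
    best, best_score = None, -1.0
    for gains in itertools.product(range(0, max_gain + 1, step), repeat=len(BANDS_HZ)):
        if loudness(gains) > LOUDNESS_CAP:
            continue  # violates the loudness constraint
        score = intelligibility(gains)
        if score > best_score:
            best, best_score = gains, score
    return dict(zip(BANDS_HZ, best)), best_score

if __name__ == "__main__":
    print(prescribe())
```

The modules discussed below keep this general structure but change the objective and constraints, for example allowing loudness to come down in noise as long as predicted intelligibility holds.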
– Well, we’ve been talking about core NL3 at this point. Tell me about modules. What are modules for? – So when we think about the core of NL3,
sort of what you might think of as the drop-in replacement for NL2, right? It’s based on that philosophy that I’ve
mentioned a few times, maximizing intelligibility, not exceeding normal
loudness. – In quiet. – And so the way we think about modules in
NL3 is where a use case or a population needs a different philosophy for how you
fit, then we’re going to make that formula a
new module. – And what are those populations? – So for the first release of NL3, which
will come later this year in 2025, we focused on what we think are the top two
things that we get told again and again and again. The first one is people with minimal or
no audiometric hearing loss. And the second one is about listening in
noisy environments. So we know that on both counts, more and more people are willing to consider hearing aid technology, even when they have minimal or no audiometric hearing loss. And there’s an increasing recognition that people can experience hearing difficulties, even if clinically we might measure a normal or near-normal audiogram. – You know I have to cite Brent Edwards,
right? And his 25 million Americans who have
no audiometric hearing loss, but difficulty understanding speech,
particularly in noise. And if you take that, it’s 25 million Americans, and we’re about 5% of the global population. Multiply that out, and you have 500 million people in that position.
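As a quick check on that extrapolation, taking the stated figure that the US is roughly 5% of the world's population:

$$\frac{25\ \text{million}}{0.05} = 500\ \text{million}$$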
– Yeah, and when I think of Brent’s seminal paper, where he shows 4 quadrants based on whether people have
an audiometric hearing loss or what we might think of as a clinically diagnosed
hearing loss. – I’ve shared that so many times. – And then the other dimension being, do
they actually report having a
difficulty? You know, we focus on the people who
have a hearing loss, almost irrespective of whether they say they have a
difficulty or not, or they think they
do. But there’s a large population, as you’re alluding to, of people who don’t have a hearing loss, or a classically defined audiometric hearing loss, but who report significant problems with speech in noise in particular. And so then the question is, we have
done a whole, sort of many, many years of research at NAL, where we have taken
hearing aids and we have fit these individuals with hearing aids and shown
again and again that they benefit. But a lot of clinicians who’ve read that research, who’ve heard Brent talk at conferences, say, okay, that’s all well and good, but I’ve got a client in front of me, how do I fit them? Because if I choose NAL-NL2, it just says
zero gain across the board. – So I’m assuming you’re really just
relying on the directional microphones then to deliver a little extra SNR. So you’re paying thousands of dollars
for a directional microphone,
essentially. – Well, that’s it. But also, from a sort of conventional
clinician sort of training perspective, you think, well, the milder the loss,
the more open you also want the fitting to be, because people don’t want their
ears occluded. So it’s not just that. It’s also that they’re only really
getting information from the hearing aid at a very, very narrow, restricted
frequency range. So people tend to report the hearing
aids as sounding tinny, so they don’t really like the sound quality. And so we’ve really stepped back from
this and thought, OK, what have we got to solve this problem? We have these hearing aids that are very
miniature devices. They are sophisticated signal processing
devices. They could potentially give you a broad
bandwidth, cleaned-up signal with advanced noise reduction algorithms. Major manufacturers are now introducing deep neural network based speech enhancement or noise reduction. – You only get all this if you run an
occluded fitting. – Exactly. So will people accept that? That’s a question I’m sure that many
clinicians would have. The assumption will be that people would
not accept a more occluding fitting. And I think they’re right if you’re
talking about quiet environments, but we’re here focused on where people say
they have the difficulty, which is in
noise. And it turns out that in noise, issues
with your own voice are a lot less of a problem, right? For probably reasons that are fairly
obvious. – I’ll say yes and no, because like when
I’ve tested a lot of devices, including a lot of consumer devices that go fully occluding, my very first test is to start chewing my food and see if I can still hear the person across the table. If I cannot, I am done. I’m done right there. – Absolutely. So it’s all about a trade-off, right? If you went to a fully occluding
fitting, chewing, swallowing, all of those things become a really big
problem. And when we’re talking about really
noisy situations that often involve eating and drinking and talking, then
you’ve got to balance that. So consequently, we’re focused on
fittings that are semi-occluding. You wouldn’t go right to a full
occluding fitting, right? But use a semi-occluding fitting so that
you’re still allowing that direct path at the low frequencies, so you don’t get
as big an issue with things like chewing and swallowing. – And you’re not providing so much gain that it gets uncomfortable, but you’re boosting the SNR. – Exactly, and that’s the challenge. It’s a trade-off: you have to provide people a big enough speech benefit. One, for them to notice, and two, for
them to be willing to tolerate wearing hearing aids and listening to that gain. Because we all know that it is going to
be a trade-off in these individuals. They have normal hearing. So acoustically speaking, they don’t
need amplification. So we are trying to give them the
cleaned-up signal from the hearing aid, and do that in a way that still does not compromise speech quality, the naturalness of speech, the comfort of the environment around them, chewing, swallowing, their own voice. So it’s a real balancing act. And that’s ultimately what we focused on
creating. – Well, this is really just one more level
of convergence between consumer devices and hearing devices, because ultimately, this is a great argument for situational
devices. – Yes. – If I go in and I’m prescribed a
prescription hearing aid, you know, I’m resistant to wearing it in quiet
situations because I don’t need it. But if you go with, say, an eight-hour
earbud running that module of NL3, you can also do more because you’re only
running an eight-hour device. You can do active occlusion reduction, for example. There are all kinds of things you could
do in an earbud that a person’s going to pop in their ear and go to a restaurant, but not wear all day. – Absolutely. And when we talk about our minimum
hearing loss module, it really is for that situational use. We do expect that as device
manufacturers integrate this module into their devices, then I’m sure that they
will be thinking, OK, how do you really optimize a device for this use case? With a transition to a transparency mode for quiet if you want to use it, or is it just used for noise? This I hope we will see, and I
expect we’ll see a lot of innovation around this. But I think the benefit from a sort of a
clinical and a service provider perspective is also that you know, this isn’t a one size fits all
either, right? This is a prescription that can also
adapt to the person’s hearing loss. So often you don’t have somebody with
completely normal hearing. You might have somebody with thresholds within the normal range up to maybe 3K, but then they start to have some high frequency hearing loss. So what do you do then? So one of the other benefits of our
formula approach is that it will automatically mix and match and merge
together sort of what we call a hybrid approach where they need gain for
peripheral compensation because maybe they’ve got some high frequency loss and
we’ll prescribe that gain. But then at the lower frequencies where
they don’t need that gain, we have that balance between comfort and access to
the hearing aid signal. So this is not something where, if their hearing isn’t perfectly normal, you can’t use it. It’s something that actually gradually adapts and, ultimately, at the extreme starts to look more like our core module.
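As an illustration of that hybrid idea (the cutoffs, the linear blend, and all the numbers below are hypothetical choices of mine, not NAL's formula), a prescription can slide band by band from the comfort-oriented minimal-loss target toward the core target as thresholds rise:

```python
# Minimal sketch (my own illustration, not NAL's actual formula) of a per-band
# "hybrid" prescription: within-normal bands use the minimal-loss target, bands
# with a clear loss use the core target, and bands in between are blended.

NORMAL_LIMIT_DB = 20   # hypothetical cutoff for "within normal limits"
LOSS_FULL_DB    = 40   # hypothetical point where the core prescription fully applies

def hybrid_gain(threshold_db, core_gain_db, minimal_gain_db=0.0):
    """Blend between the minimal-loss target and the core target for one band."""
    if threshold_db <= NORMAL_LIMIT_DB:
        weight = 0.0
    elif threshold_db >= LOSS_FULL_DB:
        weight = 1.0
    else:
        weight = (threshold_db - NORMAL_LIMIT_DB) / (LOSS_FULL_DB - NORMAL_LIMIT_DB)
    return (1 - weight) * minimal_gain_db + weight * core_gain_db

# Example: normal low-frequency thresholds with a sloping high-frequency loss.
thresholds = {250: 10, 500: 15, 1000: 20, 2000: 35, 4000: 55}   # dB HL (made up)
core_gains = {250: 5, 500: 8, 1000: 12, 2000: 22, 4000: 30}     # made-up core targets
for f, t in thresholds.items():
    print(f, round(hybrid_gain(t, core_gains[f]), 1))
```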
– Okay. And then we’ve been talking about people without audiometric hearing loss in noise. What about people with audiometric hearing loss in noise? – So that’s where our noise module comes
in. So obviously we have our existing speech in quiet solution, and that spans from people with a mild loss all the way to
people with a profound loss. And we looked at that and we said, OK,
generally speaking, what we know is that manufacturers have put a huge amount of
investment in hearing aids in managing noise, because the world is a noisy
place. Because communication often matters more
when you’re in noise, right? It’s when you’re socializing. It’s when you really want to hear what’s
said. You want to be part of the group
conversation. And so actually, that’s where the
hearing aid is really, really critical for a lot of people. – The key functionality of hearing aids,
right? You don’t want people to socially
isolate themselves. – Absolutely. And so in that case, we ask the question
to ourselves, is our speech in quiet formula actually the most fitting
solution for that use case? And the answer is no, we believe. Why? Because it assumes that all of the speech signal is perfectly audible. But of course, in a noisy situation,
that isn’t the case. So in a noisy situation, let’s assume
typically we’re at maybe a plus 6 dB signal-to-noise ratio in a noisy
restaurant, hopefully. We know that’s a typical signal-to-noise
ratio you get in many ecologically valid sort of real-world situations. So a lot of the speech cues are also
being masked by the noise. So we looked at that and we said, okay,
well, maybe there’s a compromise we can make here. Maybe actually we can reduce the
loudness, the perceived loudness of the signal without compromising the
intelligibility of the signal. So as you reduce the loudness of the
signal, there will come a point where the signal itself dips below the threshold at which you can hear it. But at positive signal-to-noise ratios, you can actually reduce loudness quite a bit before any audible bit of the speech signal really starts to become inaudible. And so we looked at that, because of
course we can’t change the signal-to-noise ratio by twiddling the
gains and the compression. You know, that requires directional
microphones, noise reduction, and the hearing aid manufacturers, they’re the
experts there, they’re advancing the technology there. But even the best technology will still
get you to a slightly more favorable signal-to-noise ratio. It doesn’t put you in a speech-in-quiet situation, right? There’s always background noise. And what hearing aid users often say is,
well, I don’t want the noise to go away. I still want to feel like I’m in the
ambience. I still want to have situational
awareness. – Very much so, yes. So we’re dealing with a situation where
you’ve got a positive signal-to-noise ratio, but the noise is still being
amplified, and we’re still prescribing gain based on the assumption you’re in
quiet. So what we do in our noise module is, again, to strike a balance. We want to make it more comfortable. How do we do that? We make it less loud. The challenge is, how do you make it less loud perceptually while maintaining the intelligibility? And that’s what our noise module does. – Which makes it as intelligible but more comfortable. – That’s absolutely correct.
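Here is a toy way to see the headroom argument behind that. Everything in it is a simplification of my own, not the NL3 noise module: the band levels and thresholds are invented, and a single across-the-board gain reduction stands in for a proper per-band prescription with real loudness and intelligibility models. The point it illustrates is the one made above: with a positive SNR, turning the whole mixture down leaves the SNR unchanged, the cues the noise was already masking stay masked, and audibility only begins to suffer once the amplified noise floor drops to the listener's threshold.

```python
# Toy illustration (my own simplification, not NAL's formula): how much overall
# gain reduction is available in noise before audibility starts to drop.
# Reducing gain moves speech and noise down together, so nothing new is masked
# by the noise; the limit is reached when the amplified noise floor falls to the
# listener's threshold and the threshold takes over as the effective mask.

AIDED_NOISE_DB = [62, 64, 61, 57, 52]   # aided noise band levels, dB SPL (made up)
THRESHOLDS_DB  = [25, 30, 38, 45, 44]   # hearing thresholds, dB SPL (made up)

def gain_reduction_headroom(noise_db, thresholds_db):
    """dB of overall gain reduction before the threshold starts masking speech
    components that the noise was not already masking."""
    return max(0.0, min(n - t for n, t in zip(noise_db, thresholds_db)))

if __name__ == "__main__":
    print(gain_reduction_headroom(AIDED_NOISE_DB, THRESHOLDS_DB))  # 8 dB with these numbers
```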
– So how do you know this works? – Good question. So for both modules, we’ve been running studies in-house at NAL now for some time. And we have a fantastic team of
audiologists, engineers, machine learning and AI
specialists right across the board who have been absolutely focused, almost
every single person now has been focused on helping with this, making sure that
NL3 is the best solution that we have. And so I’ll just maybe talk you through
some of the studies that we’ve been
running. So let me start with what we were just
talking about, listening in noise. So we’ve had over 40 people come in and
they have been existing hearing aid
users. Even just in the past four to six weeks,
we’ve had them come in, for example, and try the very latest formula that we’ve
been developing. We fit them with the existing formula,
and we fit them with our new noise
module. And we’re always comparing to our existing sort of NL3 speech in quiet. – Is this double-blind? I mean, do they know what they’re
getting? – So participants don’t know what they’re
getting. And so when we put them in the lab
– So I guess single blind, right? – So they’re single blind. So the audiologists will program, and we
will do speech-in-noise testing in the lab. So we will measure how many words
they’re getting correct, and we’ll put them in a very noisy 80 dB babble. We’ll put them in a really
– You’re using your chamber for this. – Absolutely. We’ve got an array of loudspeakers. We’ll actually put them in some
realistic background noises as well. So we’re testing across multiple
different background noises, multiple different noise levels, and we’re
measuring actually how many words they repeat correctly in a more conventional
laboratory-based task. And with and without our noise formula,
we see no significant change at all in their speech intelligibility. So, intelligibility is maintained. – How are you rating comfort though? How are you getting feedback? Is it more comfortable? Because if it isn’t more comfortable,
then what’s the point? – Absolutely. Our test is if you send people out in
the real world and you get them to go into the noisy places that they want to
listen, which do they prefer to listen with? – No, don’t make me go to a bar on The
Rocks tonight to try this out. – (laughs) So that’s ultimately
the real world test. It’s got to pass the real world test. So what we do is we send our
participants away with hearing aids from all five major manufacturers. So we’ve included devices from all major
manufacturers in these trials. This isn’t any particular device. We set those devices up to be in their
best performance. We give them the latest technology. If it’s got bells and whistles for
listening, everything’s turned on. But we give them two programs. One program is programmed with the new
noise module. The other program is programmed with NL3
core. – So they can switch between the programs
at their leisure. – That’s absolutely correct. But they don’t know which one is the new
or the older quiet formula. They’ve just got one in program one,
program two, we call them, right? And they have a phone app. That’s our ecological momentary
assessment app. Basically, it’s a smart name for a phone
app that allows them, in a situation, to just tell us quickly, okay, which do you prefer in terms of loudness, comfort, sound quality, naturalness? And ultimately, what we care about is
which one would you prefer to use in this situation? – They’re rating it on the spot. They’re not calling afterwards. They’re in the cafe. – They’re in the cafe and they’re doing
it. So the other thing is that our phone app
actually is able to record the sound
level. So we can also look at their ratings
when they’re in extremely noisy
environments. So, for example, 75, 80 dB or above. – So wait a minute, you’re actually dynamically
capturing the ambient noise level while they’re testing and giving their feedback. – That’s correct. We can actually look at the sort of
A-weighted sound level when they’re in that environment. And what we see is that in those noisy
environments, almost 70% of people are choosing and
saying, no, I prefer the noise module. – And what about the other 30? Are they preferring the other one, or is
it more of a toss-up? – It’s more of a toss-up. – So you have a smaller percentage of
people who are like, eh, either way, most people are saying the new module is
better. – Yeah, yeah. And that’s for us the most important
thing, right? We can show what we want in the lab, and
that is important, and we want to collect the evidence. And for us, being able to do the
intelligibility testing in the lab and to actually show empirically that
intelligibility is not significantly different, you know, I think that the
average speech scores were within like a percentage point or two, like they’re
basically identical. But if you looked at the gains as a
clinician, you might be surprised. You know, we’re really reducing down the
gain. But as we’ll show when we publish this
data, actually we’re really only making, probably only making very small parts of
the speech signal unintelligible or not audible, because a lot of it is masked out by the
noise. And so in noise, we have a lot more
latitude to reduce loudness. And so we believe that we’ve got a
really good pragmatic solution. – Excellent. And when will the papers be published and when will NL3
hit the marketplace? – Right, so really good question. So we’ve been working on NL3 studies
since last year. We’ve been gathering more and more evidence, and we really want this to have a really sound evidence foundation. So when we come back from the American
Academy of Audiology conference, which is happening in two weeks’ time, and the Audiology Australia conference, which is happening immediately afterwards, where– – Ah, just a little bit too long, I would have
stayed for it if it wasn’t another couple of weeks, right? – We would have loved to have you here for
that. So we’ll be presenting at those two major conferences, showing the results from the studies we’ve done so far, and when we land back in Sydney
then… in early April, we’ll be wrapping those
studies up, we’ll be asking ourselves the question, is there any more evidence
that we need? And then we’ll be sitting down and
writing those papers. So we expect that those papers, subject
to the typical peer review process, will be coming out in the next few months. So we’ll be wanting to get them out very
soon into journals. And then our timeline for the actual release of the product is
that we’ve been working very closely with hearing aid manufacturers and
manufacturers of verification equipment. So the kind of vendors that integrate
NL2 and our sort of prescription formulas into their software. We’ll be getting advance versions, so beta versions of those, to them in a few months’ time. We expect to give them the very final
sort of release to manufacture in
September. And so we hope then that as they release
and update their fitting software and their verification software after that
point, you’ll start to see an NL3 option available to you in their software. – And ultimately, I didn’t express it quite that way, but what I was wondering was when manufacturers would have
access to it. And you’re saying really September
timeframe. – Yeah, September is when they’ll get the
final product, but we also want to give it to them. So as soon as we have an early beta, which will be a few months before that,
we’ll be sharing that with them, making sure that we can work with them as a key
partner to ease the integration of this new technology so that hopefully we can
get it in the hands of clinicians as quickly as possible. – Excellent. This is really exciting because I
personally thought NL2 was like the end of it all, right? But the fact that you’re now able to develop optimizations for different situations, I think, is going to be really beneficial for people, including me, who probably has more difficulty in noisy situations out of proportion to my actual hearing loss. – Yeah. – Just looking forward to anything that
makes… and there are a lot of people in that same camp, right? – Absolutely, there’s a huge amount. I mean, we looked at the statistics even
here in Australia. We worked with a couple of different
hearing service providers, actually, and we came up with an estimate of anywhere
between 40 and even maybe as high as 60% of new adult clients who were walking in clinic doors had hearing loss that was either very minimal or within the normal hearing range, and a majority of those clients were reporting significant hearing problems if we used something, let’s say, like the HHIE or one of those sort of standard questionnaires. So that’s why we also put a big focus on that minimum hearing loss module. – That’s all really exciting stuff. And it’s funny you mentioned the HHIE. Let’s touch on COSI 2.0, because after getting fitted with this,
you might ask somebody to fill out the COSI, right? So the COSI is a very basic
questionnaire that gets to the root of a person’s actual experience. Like, what have you done with COSI 2.0 that makes it different? – So, you know, if we go back to what COSI
was all about, right, it was about putting the client at the center of
their care, right? We talk about client-centered care or
patient-centered care. You start off with what the needs of the
client are, and then you try to identify the appropriate solutions, the
rehabilitation that they need. And then the key thing with COSI is it
was all about judging the success of the
treatment. – And it was very open-ended, which makes
me wonder how you can improve it. – Absolutely. So when we talk to clinicians about
COSI, it has some downsides. One, it’s quite a general open
framework, right? It’s very open. If you look at the original COSI form,
it’s really more just to get the client to write down situations that are
important to them, and then you rate whether or not that situation has
improved. Okay? Simple concept. Hard to execute, though, when you might have clients who struggle to
think about what are the situations that are really important to me? Because maybe I’ve turned up to the
clinic because maybe my partner has thought it’s about time you go get your
ears checked and maybe, you know, sick of you turning up the TV too loud or has
noticed that you’re not socializing as much or as long with your friends, but
maybe that’s not so obvious to you. And the first time you’re being asked to
think about this is when you’re in the clinic. And all of a sudden, you’re sitting in front of the audiologist and they say, right, so what’s the problem I can solve? – (laughs) Okay, no, I totally get it, right? You need time to reflect on it. – Absolutely. And the other key challenge is for the
clinician. You maybe have 45 minutes to an hour, maybe you have the luxury of longer, right? But a lot of clinicians, they do not
have longer than that, right? And you’ve got a lot to get through at
an assessment appointment. Not only have you got to find out what
their needs are, You’ve got to find out, you’ve got to do
all of the diagnostic testing. You’ve got to explain all those results
to the client. You’ve then got to think about what
solution might actually address the client needs. You’ve got to then explain that to them
and try to decide what the next step is. So that’s a lot to fit in. And you try to do all of that while being incredibly client-centered and thinking about maybe a variety of different levels of access to information. So you’re giving a lot of information to
the clients. Some clients will soak that up like a
sponge. Other clients, you might need to take
more time explaining those things, not to mention if you’re dealing with
non-native English speakers, for example, or non-native in the language
that you’re practicing. – Well, yeah, I mean, this must be really difficult when you’re working, for example, as you do here in Australia, with the Aboriginal populations or people who live on the Torres Strait Islands, right? I mean, providing services to an underserved area. – Absolutely. You know, so there’s the challenge of
providing culturally safe services and adapting everything to no matter what
population you’re dealing with, so they feel safe and secure and comfortable to
talk about those needs, but also that you can make this relevant, right? So if you truly want to make a service client-centered, you can’t just put the client on the spot, at the center of it all, from the first time they walk in, when they’re not prepared for that. Now, that’s a hard problem to solve, you
know, and many service providers would say they can send out information in
advance, they can try to engage the client, it’s a real challenge. So we looked at that and said, okay, how
can we make the client even more centered in this whole process? Ultimately, you ideally want the
clinician and the client to be talking about the client’s needs as soon as they
walk into that appointment, but also for the clinician to already understand the client’s needs in the client’s own voice, right? Because that’s really important too. – And so how do you get there? – So what we’re doing is we’re leveraging
AI technology, which everybody is, of course, but we looked at sort of large
language models and we thought, you know, the beauty of this is not that we
can replace the clinician, because we don’t want to do that. We want the client to have a more naturalistic way of reflecting on what their needs are, right? And talking to somebody with natural language, you know, feels quite natural,
particularly if you’re not just sort of put in front of a bot of some kind. You know, you’re trying to undertake a
task. And we’re also able to train that
artificial intelligence so that it’s able to ask the right questions. And so, it’s able to actually go back to
the client and say, Okay, I’ve asked you some questions. I think the needs that you’re describing
are as follows. Is that correct? – And this would take place before the
visit? – Exactly. And so, the point is, it’s a time that
the client can reflect on what their needs are. So, what we’ve done is we’ve trained this very sophisticated AI model, we
call it a multi-agent model, but basically we’ve got multiple different
parts of this system. Part of the system has been trained on millions of client-clinician interactions discussing goals, right,
and discussing needs. – And I think this is really important
because people who think about AI are thinking about large language models where the internet was the input. – Exactly. – Of course, these models are unable to
determine what’s, you know, satire, what’s truth, what’s an outright lie,
which is how you end up with, you know, being told to put glue on your pizza to
keep the cheese from sliding off. But when you control the training input,
then the output is much more reliable. – Yeah. – So your training data are real
interactions. – Yes. I mean, you can say, forget AI being
artificial intelligence, think of it as audiological intelligence, right? We want to put in important information
that is actually relevant and useful for this kind of interaction. So that comes from millions of
client-clinician interactions around the original COSI, where we can see what are
the kinds of things that clients actually want and need. What are the kinds of things also that
tend to improve over time when somebody gets audiological intervention, right? Because sometimes people might write
down a need that’s very difficult to change or improve through audiological
intervention, but we want to focus on the needs that will change, because
that’s the kind of need that we think we want to be able to focus on and the
clinician wants to be able to focus on. So we have all of that. We’ve also got the kind of language and
terminology that clients typically use. So what we’ve done is we’ve brought that
knowledge together, plus we’ve thought about what kind of information we need
to capture through a discussion. So the COSI would say, hey, what’s this
situation? But with COSI 2, it’s not just the
situation. It’s why. Why do you want to improve in that
situation? Like, what is it about? It’s not just about understanding more speech in noise. It could be about not feeling left out
of the conversation. So we’re thinking carefully about what
questions we should ask the client. And we train the system so that when the client talks to it, what it gives back to them reflects back: hey, I’ve been listening to you, and I think your needs are the following.
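The interaction pattern being described, elicit a need, restate it in the client's own words, and ask for confirmation, can be sketched in a few lines. Everything below is a hypothetical illustration of that loop, not NAL's system: the llm() helper is a stand-in for whatever language-model backend the real multi-agent tool uses, and the prompts and limits are invented.

```python
# Hypothetical sketch of the elicit-reflect-confirm loop described above.
# llm() is a placeholder for a real language-model client; it is NOT NAL's system.

def llm(prompt: str) -> str:
    """Stand-in for a language-model call; echoes the client's text so the sketch runs."""
    return prompt.split(":", 1)[-1].strip()

def elicit_needs(max_needs: int = 5) -> list[str]:
    needs = []
    question = ("In which listening situations would you most like things to improve, "
                "and why does that matter to you?")
    for _ in range(max_needs):
        answer = input(question + "\n> ").strip()
        if answer.lower() in {"no", "none", "that's all"}:
            break
        # Restate the need, keeping the client's own wording where possible.
        summary = llm("Summarise this hearing-related need in one sentence, "
                      "keeping the client's own wording: " + answer)
        if input(f"I heard: '{summary}'. Is that right? (y/n) ").lower().startswith("y"):
            needs.append(summary)
        question = "Is there another situation you'd like to add? (Describe it, or say 'no'.)"
    return needs

if __name__ == "__main__":
    print(elicit_needs())
```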
– Yeah, so you’ve created a reflective listening system. – Exactly. – And this is done verbally, correct? – So this is done actually through a
web-based system. It’s very simple to use. So people simply get asked a question,
and they type in their response. – OK, so they’re typing the responses. – They’re typing in their response. I think that will come too, because
obviously we see with OpenAI, you know, these sort of natural interactions are becoming easier and
easier. We started off… – That’s COSI 3.0. With COSI 2.0, you’re interacting by typing. – You’re interacting by typing, and you can do
it on your phone or at a computer. And of course, we had questions, right? First question is, will people interact
with this? Will they want to say anything to this? Will people type? – Have this conversation typing into a
machine? – Exactly. And so we had all the same questions
that I’m sure your listeners and your viewers are also thinking about now. So at the moment, we’ve been running
studies with COSI 2 where when a client books in, we’ve been working with some
clinics locally here who are really innovative in their practice and they
say, hey, we want to introduce this to our clients. We’re willing to try to test this out. So when their clients get booked in for
an assessment, we intervene and we say, hey, your assessment’s coming up. Here’s a link, fill it out. We get a huge number of those clients. Typically, if you send a client something in advance, maybe up to 10%, but maybe more like 1%, of clients will do something with that information, right? It’s really hard to get people to
engage. So we’re getting more like between 40
and 60% of those clients. And they don’t know anything about this,
right? They’re not primed, they’re not selected clients, they’re just anybody who’s booked in for an assessment. And they’re just basically told, this
will help improve your experience and get you off and running in a better
state if you answer these questions first. So we’re getting more of them doing it; more than half the people are doing it. When they do it, they’re actually really engaging with it; they’re answering the prompts really thoughtfully. They’re telling the system about
multiple needs, so it goes through one need and then asks you if you have any
more, and they keep going, and they’re also not just typing one-word answers. Like, they’re doing this in the privacy
of their own home. They’re typing in really sort of
thoughtful, reflective information, and we ask them once they’ve done it. – But it’s so valuable in itself,
actually. The first time they’ve probably been
thoughtful and reflective on exactly what their hearing needs are. – Yeah, and it gives them that time. I mean, maybe they click on the link and
they think, oh, I’ll have to think about this, but they have the time to do it. That’s the key thing. And what we do immediately after they do
it is we say, hey, how good were those needs? So it told you some needs. How much do you trust those? How much do they actually reflect? And we see that the overwhelming
feedback is that they really do reflect what the client thought was their own
needs. – And then do you run an after COSI so you
get the improvement scores? – Yeah, so that’s what we’re running at
the moment. So we don’t have data on that yet,
because we’ve actually only been running COSI 2 since we came back in January. There hasn’t been enough time yet with the new version. But what we have been doing is we’ve
been asking the clinician. So when the client fills in that in
advance, that gets then sent to the
clinician. So then when the client walks in to the
assessment appointment, the clinician already has a set of needs based on what the client thought they were. – Sure, that would be the whole goal. – That’s the whole goal, right? So we asked the clinician things like,
did that save you time, first of all? Did that help you have a better
conversation, a deeper, more meaningful conversation with the client? And actually, did that enhance things
like the discussion around technology? Because you could already think, OK,
what is this client looking for? What are the kinds of needs? What kind of technology might they? And across the board, again, we see
really positive feedback from the
clinicians. They’re thinking that, yes, it’s great. They’re coming in. I’m able to immediately start a
conversation. We’re able to have a deeper, more
meaningful conversation. One of the things that really surprised
us, though, was some of the feedback from the
clinicians. It was actually to do with the kind of
language and words and topics that the clients had identified in advance. So clinicians said things like, actually,
we’re having conversations that are more about why it’s meaningful to them, the
emotional side of things, the sort of social side of the need that they have, because they raised it in their needs assessment. And so I don’t feel like I’m raising a
topic that might be a bit embarrassing or sensitive to talk about. – Yeah, they don’t know where the
emotional landmines are, but once you’ve gone through COSI 2.0, or the client has, you’ve cleared those
landmines. – Absolutely, because the client has come
up with a need that says, you know, I’m feeling isolated, I’m feeling lonely
because I’m not able to talk in noise. then the clinician can engage with that
in the appointment. And they feel like they can engage with
that because it’s safe to do so because the client has put that forward. – Right, because you have almost the
opposite risk where the clinician is afraid to bring up what might be
sensitive topics when in fact the client actually wants to talk about them. – That’s exactly right. So the early data is very encouraging. So we get really good engagement from
the clients, and really positive feedback, it seems. And our goal here is to enhance the quality of that conversation. This is what it’s all about. It’s about making sure that that
conversation with the client at assessment is really more about the
client needs and as centred on the client needs as possible. And ultimately, that should give us
better outcomes. What we’re seeing at the moment is that
those who go through that process seem to be more likely to go on and accept
and take up hearing technology. That’s an early signal we’re seeing in
the data. Compared to those who don’t do it, fewer
of them take up technology. And so we’re really interested in understanding why that is, and who those individuals are, and what it is about the tool that might have helped with
that. Ultimately, what we’re looking forward
to, and we’ll be getting this data back at the end of March, beginning of April, is
the outcome data. So the next part of this tool is that it
gets the client to self-reflect on how their experience has changed outside of
the clinic. They’re not doing it in front of the
clinician. They don’t need to say nice things just
because they don’t want to offend the
clinician. – After some period of time, they can then… this is where you get to the part of the original COSI: well, how much improvement have I seen? – Absolutely, right. And so we can again put that in their
hands. We can make that a naturalistic
interaction and we can get them thinking about that. And we can also remind them and say,
hey, two weeks ago, four weeks ago, whenever it was, you said that these were your needs. These were the things that you wanted. This is your language, not ours. You know, how are you going with that? What’s changed? Has it changed? – And so when will COSI 2 actually be
available for clinicians? – That’s a really good question. We don’t yet have a timeline. We’re at the really early stage of
proving this technology for us. You know, we want really solid evidence. We want to be convinced ourselves. The early data looks really compelling,
and we’ll be presenting some of that data at the two conferences that I
mentioned earlier, both in America and in Australia, in the coming weeks. And then there’ll be published papers following that, and then we’re looking at technology partners: who could we partner with to actually start getting this rolled out in bigger trials, in different populations? – So you roll it out with field trials
with a clinic or, you know, larger practices, and then… – Yeah, and we’re looking at different models as well. So obviously, what we’re doing at the moment is testing the traditional clinic-based model, where the
care models where clients would still interact with a
clinician but via some sort of video chat, so how could it fit into that? And what about self-fitting models and
self-care models where the client doesn’t talk to a clinician at all? Could it still play a valuable role there? Because it’s all about helping the
client self-reflect about what they
want. So we’re looking at all of these
different potential use cases, but we’re starting with that more sort of
traditional clinical model because we’re aware that there’s lots of people out
there using COSI today. And you know, if we’re convinced by the
evidence and the early evidence looks fantastic so far, then again, our goal,
just like NL3, is getting it into the hands of clinicians as quickly as we can. – Well, and I think that’s great, right? Because an informed client is a client
with a greater chance of walking away happy after the clinic visit. – And we always say that when it comes to their needs – Yeah. – and to technology, and some of our previous research around how you explain technology to people with hearing loss, and the great work that people like yourself and others do trying to convey and communicate all these complicated technologies: we would love every client to be as opinionated as we are about our smartphones, as we are about our cars. We would love a client to walk in clear and mindful and thoughtful about their needs and what they need, and opinionated about which of all the great technologies out there they want, and want to listen to, and want to try. So that ultimately, we want somebody who
comes forward to get the right solution for them and to have a successful
outcome. That’s the thing that’s going to help
people keep engaging with their hearing
health, keep going, and then as they age, as their needs change, as the technology they need changes, they’ll keep engaged
with it, just as we keep engaged with our general practitioner or our
physician about all sorts of other aspects of our health. – That’s just a really great summary of, you know, how NL3 and COSI 2 are going to bring that about. I’m really looking forward to seeing it
roll out and what the feedback and the outcomes are as
both of those become deployed. So thanks a lot. I really appreciate you explaining it. Very exciting stuff. – Thanks for having me on. You’re welcome. It really helped put NL3 in context by
trying out a set of hearing aids programmed with it. Justin and Matt set me up with a pair of
ReSound Omnia 9s, which included two speech-in-noise programs. One was programmed with base NL3, which would be similar to NL2 with my hearing loss. The other was with the new noise module. Both had the Omnias’ other speech-in-noise features turned on. I didn’t know which program was which. On three separate occasions, all noisy
and all with different companions, I alternated between both. The characteristics of each were
considerably different, and at first misleading. However, I quickly figured out which was
which. One mode sounded normal, which, for a two-generation-old hearing aid, included a bit of harshness, which I tend to think of as chirpy, when all the noise features are engaged. The other sounded like the volume was
turned down a couple of clicks. The temptation was to think the normal
one must be the new noise module boosting the speech. But with a bit of listening, I realized
that even if the sound scene was quieter and less harsh in the other program, I
could understand my companions just as well. I had it figured out and Pádraig
confirmed. It was interesting to spend a couple of
minutes without any hearing aids in my ears. It was quite comfortable to listen that
way, though, of course, intelligibility went way down. When I popped the Omnias back in, while
in what I assumed was the noise module, the comfort pretty much remained, but
the intelligibility went up. When I switched modes, the
intelligibility remained, but the sound was harsher. It may have been an unfair comparison,
but on the last evening, I swapped in a set of newer ReSounds, the Vivias. Of course, the Vivias performed better
overall, but I was looking at relative comfort. In that respect, the Vivias bested the
Omnias with the NL3 noise module, but not by much. My conclusion was that ReSound addressed
comfort by other means. It made me wonder if NL3’s noise module
would bring even further benefits. It also made me wonder if the greatest
improvement would be delivered in mid-grade hearing aids and OTC devices, which don’t have the latest DNNs and such. That remains to be seen, but this I can
say, at least for me and my hearing loss, mission accomplished, NAL!
Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week, and follow This Week in Hearing on LinkedIn and on X (formerly Twitter).
Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.
About the Panel
Pádraig Kitterick joined NAL in 2021 as Head of Audiological Science. Prior to that, he was Head of Hearing Sciences in the School of Medicine at the University of Nottingham, UK where he also led the hearing theme of the NIHR Nottingham Biomedical Research Centre. Pádraig’s research expertise is in evaluating hearing devices and technologies, both in the context of clinical trials and longitudinal studies. His work includes developing and validating measures that are sensitive to detecting changes in outcomes that are important to patients and to the clinicians that manage their hearing health. He has a particular interest in how quality of life should be measured in people with hearing loss. His work also seeks to understand how hearing loss that differs between the ears can affect how we hear the world, and how hearing devices and technology should be best used to address these forms of hearing loss.
Andrew Bellavia is the Founder of AuraFuturity. He has experience in international sales, marketing, product management, and general management. Audio has been both an abiding interest and a market he has served professionally in these roles. Andrew has been deeply embedded in the hearables space since the beginning and is recognized as a thought leader in the convergence of hearables and hearing health. He has been a strong advocate for hearing care innovation and accessibility, work made more personal when he faced his own hearing loss and sought treatment. All these skills and experiences are brought to bear at AuraFuturity, providing go-to-market, branding, and content services to the dynamic and growing hearables and hearing health spaces.