Computational audiology is defined as the augmentation of traditional hearing healthcare by digital methods, including artificial intelligence and machine learning.
This week, host Dave Kemp sits down with Jan-Willem Wasmann to talk about the potential of computational audiology to help more people in need of hearing healthcare service across the globe, and how it can help further improve the precision and efficiency of audiological care in general.
Dave Kemp 0:10
Okay, and welcome to another episode of This Week in Hearing. I'm joined by a great guest today, Jan-Willem Wasmann. So Jan, tell us a little bit about who you are and what you do.
Jan-Willem Wasmann 0:20
Dave, thank you for having me. I'm an audiologist working in Nijmegen at the Radboud Medical Center. My main work is dedicated to patient care. A small part is training other audiologists, and there's also a little bit of research. But working with patients is really what surfaces new questions: what kind of research is important, and what kinds of questions really impact the care we deliver to our patients. I studied physics and finished in 2010, then briefly worked in aerospace. I had this feeling that aerospace is really innovative, you know, but then I discovered that, for instance, building a satellite takes 15 years, and only then do you see the result. I'm a little bit too impatient to wait 15 years, so I changed to medical physics, got experience in audiology, did the training in Utrecht, and started working as an audiologist in Nijmegen in 2015.
Dave Kemp 1:36
Well, that’s awesome.
Dave Kemp 1:38
Awesome. Well, thanks for coming on today. The reason I wanted to bring you on is that I think this network that you've helped to form, the Computational Audiology Network, is really, really interesting, something that I look at as a pioneer in what the shape of audiology will look like in the future. I want to give you an opportunity to discuss how the Computational Audiology Network came to be, but from what I've gathered about your group, there are a lot of electrical engineers, research audiologists, and AI experts, and a lot of machine learning work going on. And I just find this whole field fascinating, because we hear all the time these surface-level assertions about, you know, the future of hearing aids with AI, but I don't think there's a whole lot of discussion happening in the quote-unquote mainstream about what that actually entails and what progress is being made around it. So I figure you're probably as well suited as anybody to talk about the nitty-gritty of what's going on within this whole computational audiology side of the industry. So why don't you share with us how the Computational Audiology Network came to be, and then how you would describe this network and the whole field of computational audiology?
Jan-Willem Wasmann 3:13
Well, I think it's important to realize that our hospital really has a network approach. I'm working as an audiologist within an ENT team, but there are also a lot of other departments, and there's a university, so there's a lot of knowledge about, for instance, artificial intelligence on our campus. And there is the biophysics department, so there's a lot of research going on there. That led to conversations about projects, about what we could do using artificial intelligence, for instance, for fitting cochlear implants, or for creating new, more efficient tests. These kinds of conversations with people working directly with me led to this idea of computational audiology, of combining machine learning and audiology. I think the one-sentence summary is: applying complex models in clinical care. Those models can be complex like models based on deep learning, but I think the NAL-NL2 model of how you would fit a hearing aid is also a complex model. And clinical care in itself is of course also really complex, in the sense that people have a lot of different hearing difficulties, and the best way to interact with people, help them, counsel them, and give proper rehabilitation is in itself also complex. Combining these is, I think, what has led to computational audiology. So the idea was that with these new tools we can improve our reach. Working here in an ENT and audiology clinic, seeing people in a consultation room, I'm able to help maybe 1,000 or 2,000 persons a year. But if you're able to scale up through remote care, by using tools to triage people that are at risk and invite them into our clinic, while maybe sharing other best practices that can be used at home or at their local practitioner, you get a different kind of scale.
Differences in scale, I think, are something that can be enabled with these computational approaches, with automation, actually with the whole Fourth Industrial Revolution. If you apply this to audiology, then what emerges is what we described in our computational audiology perspective paper. As for this group: via my colleague Chris Lanting I met Dave Moore, and Dave Moore introduced me to Dennis Barbour, who is an expert on machine learning audiometry. Those people, together with De Wet Swanepoel from South Africa, really broadened our perspective. Instead of only looking at how you would use this technology in the Netherlands, in an infrastructure that's already at a high level, where new technologies have to compete with and surpass a high level of existing protocols, we changed our perspective to: okay, if we look at the global burden of hearing loss, what would be the impact of these new tools? What we describe on the website is a mission of how to improve the situation for people that suffer from hearing loss, either by detecting hearing loss earlier, preventing it, or providing new tools or maybe cheaper tools,
using research, developments from engineering, and clinical insights, meaning best practices for how you best help people in a tele-audiology setting. If we are able to share those lessons and work together across all these different disciplines, in a multidisciplinary approach, that is, I think, in a nutshell what the Computational Audiology Network is: many different people from different backgrounds, from audiology, from artificial intelligence, from engineering, from clinics all over the world. While we were thinking about these projects, we saw that these discussions from different viewpoints are really important to make progress and to understand what's important. And we thought: what if we create a platform where people can share their views? That happened at the time of the lockdown in the Netherlands. We knew there were PhD students suffering from being locked in a room, not able to chat with peers, maybe working in really specialized fields with not much feedback from peers. We thought: if we are able to have interaction between PhD students, other students, clinicians, and experienced scientists who can help early-career researchers, that can improve projects. And if it improves projects, it may in the long term help us better address the global burden of hearing loss.
Dave Kemp 9:05
That's great. Yeah, when I was really digging in, I saw the conference, the virtual conference that you've hosted the last two years, and we'll talk about that a little bit toward the end. I was really impressed with the caliber of researchers and scientists that were all contributing. And I think it's great that you have this whole network and you've just launched a podcast. I listened to the first episode of it, and I thought we could maybe talk a little bit about that episode, because the topic there is all around machine learning. That episode was titled Bayesian Active Learning, which I figure comes from some scientist that has the last name Bayes. So I figured you could maybe share with us: why was that the first episode? I know you had a number of other folks on there; Dennis Barbour was on there, and a few other electrical engineers. I would love to hear why Bayesian active learning was the first episode, and what Bayesian active learning is in layman's terms, so that those of us that aren't as immersed in this world as you can wrap our heads around what this all means and how it applies to the field of audiology.
Jan-Willem Wasmann 10:38
Now, the reason we started with this topic is that we had been conducting a scoping review of automated audiometry, looking at all technologies or approaches available that can reach clinical accuracy, so that everything is calibrated and the results would be of similar precision and accuracy as you would have in the clinic. We knew that there had been systematic reviews of automated audiometry before, but the last one was from 2013, and of course there have been a lot of developments since, including machine learning techniques. While we were reviewing all these different approaches, we found two groups that had published an approach based on Bayesian active learning, and a third group that had been working on it, but from a different perspective, without the clinical validation that we required. That was all we could find. So one of the conclusions was: okay, at this moment there are, at the global level, three groups working on this with slightly different approaches. So let's ask them what their motivation was to do this, and how they would explain the approach. From there I can continue. Now, Bayesian active learning is a fully adaptive way of testing, where you use all previous results to determine the next stimulus you're going to use. So instead of using the Hughson-Westlake procedure for audiometry, where you have a fixed search grid, you could look at audiometry as determining a threshold moving through a field of frequency and intensity, and then you try to find the boundary that separates audible sounds from sounds you're not able to hear. If, for instance, I would see somebody on the street, my first assumption would be: this person has normal hearing. Then you can test this hypothesis by providing a sound and asking the person: do you hear it? And then you see: if, for instance, you provide a sound at a low intensity level, okay, this person didn't respond.
So I have to change my hypothesis to maybe a moderate hearing loss, etc. That, I would say, is how Bayesian active learning works: you do hypothesis testing, and you start with some basic assumption. You could say that's a model. For instance, what some groups have done is, based on your age, you already have a prediction of what your hearing would most likely be, and then you start testing it. You can define the boundaries of what you're looking at, and, you could say, the resolution or step size you're taking for the search. If you're, for instance, screening for hearing loss due to high sound levels, then you could say: okay, we're only looking at the higher frequencies where we expect damage in the ear. So during this testing, you're continually updating this model and determining, or estimating, the next point where you have the highest uncertainty. There you do the hypothesis testing by providing another stimulus. Dennis Barbour has published a couple of papers about this approach. Joseph Schlittenlacher from Cambridge has also done this; they've shown nice graphs of how, with only a few stimuli, maybe 20, you can determine a full audiogram, and this is something that's fully automated. Well, if you want to use this in clinical care, then of course it's also important to know whether the subject's response really was a response: is the test proper, did people pay attention when the stimulus was presented, etc. These are also things that you can account for when you use this machine learning, by adding another source of uncertainty that you can test for. And the reason I find this approach of active testing interesting, where you're more actively searching for
the solution than in passive testing, where you have a fixed grid and, regardless of what happened before, you just follow your fixed recipe, is that because you can more quickly find a model that's close to somebody's hearing status, you can choose to use fewer stimuli. Or you can choose to add more variables: not only test for hearing thresholds, but also test suprathreshold stimuli and test loudness, for instance, or other factors that you think are important for your clinical question.
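[Editor's note: the loop described above (start from a prior, present the most informative stimulus, update the model, repeat) can be sketched in a few lines of Python. This is a minimal illustrative example for a single frequency; the function names, the simple logistic psychometric model, the level grid, and the simulated listener are all assumptions for illustration, not the published algorithms of Barbour or Schlittenlacher.]

```python
import math
import random

def psychometric(level, threshold, slope=1.0):
    # Probability that a tone at `level` dB is heard, given a true threshold.
    return 1.0 / (1.0 + math.exp(-slope * (level - threshold)))

def estimate_threshold(respond, levels=range(-10, 101, 5), n_trials=20):
    """Bayesian active learning for one frequency.

    `respond(level)` returns True if the listener reports hearing the tone.
    A discrete posterior over candidate thresholds is updated after every
    trial, and the next stimulus is the level whose outcome is most
    uncertain (predicted probability of "heard" closest to 0.5).
    """
    candidates = list(levels)
    posterior = [1.0 / len(candidates)] * len(candidates)  # uniform prior

    for _ in range(n_trials):
        # Pick the most informative level: predicted response closest to 50/50.
        def uncertainty(lv):
            p_heard = sum(p * psychometric(lv, th)
                          for p, th in zip(posterior, candidates))
            return abs(p_heard - 0.5)
        level = min(candidates, key=uncertainty)

        heard = respond(level)

        # Bayes update of the posterior over thresholds.
        posterior = [p * (psychometric(level, th) if heard
                          else 1.0 - psychometric(level, th))
                     for p, th in zip(posterior, candidates)]
        total = sum(posterior)
        posterior = [p / total for p in posterior]

    # Posterior mean as the threshold estimate.
    return sum(p * th for p, th in zip(posterior, candidates))

# Simulated listener with a true 40 dB threshold at this frequency.
random.seed(0)
listener = lambda lv: random.random() < psychometric(lv, 40.0)
estimate = estimate_threshold(listener)  # typically within a few dB of 40
```

Because each stimulus is placed where the model is least certain, the posterior narrows far faster than a fixed grid sweep, which is the efficiency gain discussed above; the same loop extends naturally to more variables (frequency, loudness) by widening the model.
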
Dave Kemp 16:17
Yeah, one of the things I took away from it, when you were talking on the podcast, is that one of the applications today is that you can trim down the time it takes to test somebody from, like, five minutes to two minutes, which is great. But as I gather what you're saying, and I think about this from a remote care standpoint, and how you're saying that this can be automated, it seems like one of the greatest weapons we're going to have in the war against hearing loss is the ability to provide increasingly sophisticated ways to do a lot of the audiology remotely, relying on things like this. You're limited in the number of people you can see on a day-to-day basis; you only have a finite amount of time. So if you can start to implement tools that allow for a more outsized impact, that's really exciting to me. And I know you mentioned De Wet Swanepoel, and everything that he's doing with hearX is along the same vein, these digital tools. I feel as if we're in the early stages of that new foray of audiology. But looking at the work that you all are doing on the forefront of how these types of tools can be made better, and how the models that the machine learning is built on top of can just get better and better, it feels like we're moving in a direction where we really will have the ability to combat hearing loss at a much greater capacity, because so much of this will be automated.
Jan-Willem Wasmann 18:25
Yes, I fully agree. Yes.
Dave Kemp 18:27
So one other thing that I wanted to touch on, that I thought was fascinating when I was listening to the episode, was this whole theme of how the audiogram is, like, the gold-star use case for active learning in healthcare, broadly speaking. I felt like that's another really interesting thing: audiology, in a way, is sitting at the focal point of machine learning and healthcare, broadly speaking, because of the test bed that it provides. So I wanted to give you a chance to speak to this a little bit, because it sounds like some of the folks within the Computational Audiology Network came to the same conclusion on their own, and now you're all building on this. I just want you to speak a little bit about what the implications of that really are within the broader scope of healthcare and medicine.
Jan-Willem Wasmann 19:27
Yes, I think there are different aspects. Three years ago, when I thought about all these really promising deep learning techniques, all the examples I knew were in the visual field, for instance in image recognition, and I was not aware of how much progress was also happening in sound recognition and classification. So one of the things I think is important is to realize how quick and impressive these developments are. But it's something, I think, much harder to demonstrate: if you show a picture and then say, well, the computer is able to interpret this image, people can see, wow, that's impressive. But with a sound it's more difficult, I think, to demonstrate this. So I was really impressed by what Joseph Schlittenlacher said about the audiogram indeed being the ideal test bed, in the sense that the test is not intrusive, and that it allows for proper problem definition, making assumptions, and testing those assumptions. In other applications within medicine you, for instance, have to take a blood sample, and if you take a blood sample and it was not needed, well, that's of course something more intrusive, while it's also something that's really important to do efficiently. So if you're able to try out these kinds of tools in a field where the effect of an error is not as damaging or serious, as is the case in audiology, that, I think, makes it a good test bed. But also, we already make a lot of simplifications by testing a fixed number of frequencies, where you could say, well, maybe it's not necessary to test in-between frequencies or go for some other arbitrary precision. But there are also things we are not able to address now. For instance, people may show up in the clinic with tinnitus, maybe with smaller lesions in the inner ear, that go undetected if you follow the normal grid search.
And that's also something where, with these machine learning techniques, you could continue testing and find these small lesions, and maybe explain the underlying cause of the tinnitus. Or there are other questions that we do not address now, like what may be called hidden hearing loss, because it's beyond the sensitivity of the instruments we now have.
Dave Kemp 22:38
Yeah, I love that whole idea of being able to use a more comprehensive evaluation to detect these types of things, using the machine learning models that are really good at determining all of these kinds of things. So I guess we come to the close here. I just want to give you a chance to plug your conference. I know this is year three, so maybe just give us the broad strokes of what this conference entails, what the first two years were like, and what people could expect if they're interested in joining in this year, round three.
Jan-Willem Wasmann 23:28
So, with the conference, when we started, the idea was we would just invite people from audiology and AI, let them talk about their projects, maybe even ongoing projects, so that we could discuss them and see how to improve them. I think we launched a call for abstracts in April 2020, and we were overwhelmed by responses from the community: from hearing scientists, but also from small companies that had developed new tools. I expect that there were probably a lot of people that had an abstract or a talk that they were going to give but were not able to. And, yeah, now I realize that for me there was also a personal motivation to create this platform: I had a talk that I was going to give in June that year, but that whole conference was cancelled. So I had a story to tell but no podium, nowhere to tell it. And then we thought, well, with a virtual conference we are able to do this, and also lower the barriers for participation. For people who are not able to travel for four days to another continent, you can lower the barrier, so we made it free for everybody. What I'm still really happy about is that we were able to have people from renowned institutes telling about their latest ongoing research, while also people from India, Nepal, and South America were joining and listening to the stream. We also organized a workshop for patients, and that's something that was really inspiring for many people: we had a person from Australia, from Africa, from South America. And I remember how this guy from South America, in his early 30s, explained that he had to travel from Bolivia to Mexico to buy hearing aids. He was telling his story while using an app that translated speech to text, and vice versa. So I was seeing this new Google technology being used in a different country.
Here in the Netherlands, meanwhile, I didn't see people experimenting with this. That showed how powerful these new innovations can be: if it works in South America, it can work in the Netherlands, and vice versa, since 90% of people globally have smartphones, and many people have the latest smartphone, so they can use these kinds of innovations. So it was not only the traditional scientists and engineers; we also saw new players, app developers for instance. Another thing that was overwhelming was that when we published this call for abstracts, I think within days somebody suggested: well, maybe we can organize a workshop. That was Elle O'Brien, who told us: well, I can explain something about the whole machine learning pipeline, what that looks like, and we can run a workshop. Inspired by her enthusiasm, we decided to also organize a workshop about patients and their needs globally, another workshop about how to develop apps, and another one about how to do low-touch audiology, since we were in this lockdown and many clinics were experiencing the same problems. All of that was developed within eight weeks. It was a really intense time, and I was really lucky to have a lot of support from the Radboud University and hospital, with the
researchers here. Also, almost everybody who we asked to explain something about what they were doing, including renowned people like Fan-Gang Zeng and De Wet Swanepoel, confirmed; they were eager to join. There was a kind of pioneering enthusiasm, since we were among the first to embrace a virtual conference. We also used these new techniques without trying to emulate an actual, physical conference. We know that the listening span is shorter, so we went for short presentations of two to five minutes, which could be pre-recorded; we had these short videos that you can still watch. That was something we decided on, so that there were only short talks and more time for questions and answers. After this conference, I was really lucky that Tobias Goehring from Cambridge, who enjoyed the conference, could be convinced to do the next conference as a follow-up activity. I think he took it to the next level, expanding the network with more renowned researchers; he also improved the quality and made our approach more consistent, while still keeping it free for everybody. And I'm really glad that now Oldenburg, together with Hannover, the Cluster of Excellence in Germany, is organizing this next conference. Again, I think this edition also covers really new developments, like virtual reality applied in audiology. For clinicians it's really interesting to know what these developments look like and how they can change your practice or research, for instance, but also how machine learning and these new algorithms are used in the next generation of hearing aids. Another thing is the special session on predictive coding, so also how the complex models are developing. I think it's really exciting that we have all these special sessions and, of course, abstracts from research groups from all over the world.
And it's really international. That's also, I think, something which distinguishes this conference from others: many continents and many nationalities are involved.
Dave Kemp 30:09
Yeah, I mean, just for my own personal curiosity, I would love to join just to see what's on the bleeding edge of the science and what we're likely to see there. And I agree with you, I think it's really neat that it's international: what's going on in Africa, what's going on in Europe and Asia, what types of things are being studied and worked on. You can get a sense from there, I would imagine, of what's coming down the pike, whether it applies in the actual clinic itself or to the product offering in the future of the technology. So if you want to peek behind the curtain of what's coming down the pike, I would imagine that you should probably attend this conference. So where can I go to sign up and join in and be an attendee?
Jan-Willem Wasmann 31:02
Well, we're opening the registration April 26, so next Tuesday. We'll share this online on computationalaudiology.com, and people that already participated before will be informed via the newsletter. There will also, I think, be a lot of sharing via different social networks. Here I would also like to thank, for instance, the different platforms, like Hearing Health & Technology Matters, and other groups that shared our initial invitation back in 2020; that helped us reach a lot of people in a lot of different countries, and many societies internationally also shared the link to the conference. I hope that this conference will again be a window on the state of audiology. Normally during my work I only see patients from the Netherlands, but here you will also see how developments are going in different countries, and sometimes that can be something that inspires you, or helps you ask: is this something that's going to impact my practice? Or it can be a window where you think: okay, we have to prepare, or maybe make adjustments, to better apply this in our own country. And something that would maybe be the next level of the Computational Audiology Network is that the website was intended as a forum, so that people could actually interact every day with questions, share experiences, share resources. That's part of the website: there are a lot of models, but also tools that people can download, for instance, to do a hearing test at home or to do data analysis. You could say there are only two conference days a year, because we have a split of two days, the end of June and the first day of July. But for the rest of the year, I would hope that we have this interaction between researchers, engineers, and maybe also,
yeah, patients or people that experience hearing difficulties, and that we can continue to inspire each other, improving our work and getting excited about what's coming up.
Dave Kemp 33:36
Awesome. Well, thank you so much, Jan-Willem. This has been such a great conversation. I'm looking forward to attending; I'll probably just be a fly on the wall, observing and seeing what's what. But like I said, I'm still learning and trying to wrap my head around all this, and it certainly seems like it's just going to become more and more relevant for all of us operating in this industry, whether you're a practicing audiologist or someone like me, who is just supporting the industry by being a distributor. So it's been a great conversation. Do you have something you want to share?
Jan-Willem Wasmann 34:11
Yeah. To everybody who's listening to the talks as well, I would say: we try to give as much opportunity to interact as possible. You can ask your questions via the chat, or, for instance, on the website itself, where we publish all the abstracts; you can ask your question there and we can ask the presenter to follow up and address it. I would hope that it's not the case that only a small group of people is talking and the rest is only listening because they don't dare to ask questions. I hope that we make it easy for everybody to feel comfortable in, I think, a nice atmosphere. So far there has been a lot of trust between people, and people feel free to ask questions or maybe make appointments for follow-up conversations.
Dave Kemp 35:06
Fantastic. Awesome. Well this has been great. Thanks for everybody who tuned in here to the end and we will chat with you next time.
About the Panel
Jan-Willem Wasmann, MSc, is a medical physicist-audiologist at the Radboud Medical Center in Nijmegen, The Netherlands. Jan-Willem's recent work includes AI-guided CI fitting techniques, simulated directional hearing based on neural networks, and remote care.
Dave Kemp is the Director of Business Development & Marketing at Oaktree Products and the Founder & Editor of Future Ear. In 2017, Dave launched his blog, FutureEar.co, where he writes about what’s happening at the intersection of voice technology, wearables and hearing healthcare. In 2019, Dave started the Future Ear Radio podcast, where he and his guests discuss emerging technology pertaining to hearing aids and consumer hearables. He has been published in the Harvard Business Review, co-authored the book, “Voice Technology in Healthcare,” writes frequently for the prominent voice technology website, Voicebot.ai, and has been featured on NPR’s Marketplace.