Audiology in the Age of AI: How ChatGPT and Related Technologies Will Transform Hearing Healthcare

April 24, 2023

This week, host Dave Kemp explores the potential impact of AI on the future of hearing care, and what that means for clinicians and the patients they serve. He sits down with two leading voices in this area of audiology: De Wet Swanepoel, PhD and Jan-Willem Wasmann, MSc.

They discuss the potential benefits and risks of technologies like ChatGPT for the audiology community and for healthcare in general, including the need for a more community-driven approach to Large Language Models (LLMs), where clinicians and researchers can train a model on parameters specific to audiology and compile a database of best practices. This approach could help disseminate clinical approaches, give patients better access to information, and improve clinical workflows.

Overall, the conversation highlights the potential for LLMs to create a new era in healthcare and audiology.

Full Episode Transcript

{Dave} All right, everybody, and welcome to another episode of This Week in Hearing. Very excited to be joined today by De Wet Swanepoel and Jan-Willem Wasmann, two of the, I think, most cutting edge, forward thinking researchers and scientists in the field of audiology. Today we're going to be talking about ChatGPT and really the introduction of large language models, broadly speaking, and how these large language models might ultimately impact our industry and all the different professionals working within it, as well as the patient base. So we'll get into that and talk all about it. But first, let's start with some introductions. So why don't we start with you, Jan-Willem, a little bit about who you are and what you do.

{Jan-Willem} Hi, I'm Jan-Willem Wasmann. I work at the Radboud University Medical Center in Nijmegen as an audiologist, both as clinician and researcher. I really like to be involved in how AI can be used in audiology, and so that's why we coined the term computational audiology, and we wrote this perspective paper about three or four years ago. I'm really surprised at what we made as predictions at the time. I think all those predictions have already come to pass and even surpassed our expectations, and things are going really fast. And it's good, I guess, to explore this and see what could be the benefits, but also the risks for our community. And well, happy to discuss this with you today.

{Dave} Awesome. Well, thank you so much for being here. And De Wet?

{De Wet} Yes, Dave, it's good to be with you again. And with you, Jan-Willem, on the show. So, yes, my background is I'm a professor of audiology at the University of Pretoria in South Africa. I also have an adjunct position at the University of Colorado. My area of research interest has always been around technological innovation, connectivity, and how we can utilize that in hearing healthcare to make hearing care more accessible.
I think that's also where this link with the exciting technologies that we've seen come online with ChatGPT over the past couple of months has intersected. I also have a few other hats that I wear: I'm the editor in chief of the International Journal of Audiology, and I'm also a co-founder of a digital health tech company called the hearX Group.

{Dave} Awesome. Well, thank you two so much for being here. Like I said, couldn't have been joined by two better thinkers, I think, on this topic. So just to set the stage a little bit: when we're talking about these things, they feel kind of abstract and esoteric, but I think that we need to be conscious of just how pervasive and widespread these things are becoming. So OpenAI, the parent company of this large language model, ChatGPT, which has only been out and available to the public for about two months, has already amassed a user base of about 100 million users. That is the fastest application to ever reach 100 million users. So this thing's growing like wildfire. You have folks like Bill Gates out there saying that these large language models, whether it's ChatGPT or another one, will have the same order of magnitude of impact as the Internet and the PC. So you have some people out there really calling this thing out as being a seismic forcing function that's going to really change a lot of different things and a lot of different professions and just the way that we operate, just like the internet did. Right. And I think that we need to start thinking about, in audiology, what will this all mean and how will this impact us? So why don't we start, De Wet, I'm going to kick it over to you. If you could maybe frame the conversation beyond what I just did there very briefly about these large language models and this notion of AI powered internet. Can you just share your thoughts on what's going on right now and what these things really are?
{De Wet} Yeah, sure, Dave. I mean, I just agree with you, it's very exciting times. Anyone who's played on ChatGPT a little bit would agree that the power of these technologies is astounding. It's just remarkable. And apart from the personal kind of exposure and experience, we're seeing massive shifts in the entire industry, technological, but also in healthcare in general, in terms of how these technologies are changing the world around us as we speak. And as you mentioned, it's the fastest growing platform of users ever. And it's certainly one of those massive changes in technology that creates a new era. I mean, suddenly, if you're used to ChatGPT, doing a regular Google search feels like a two dimensional exercise, right? Six months ago, that wasn't the case. Now that's what it feels like. These technologies are super exciting. So AI chatbots are a type of generative AI that can generate text. They use these large language models that allow them to really provide answers to prompts or questions in a really human-like fashion. So in essence, they're just computer programs that use natural language processing to communicate with humans. But they are trained on tremendously large data sets, which means they draw from information that is in a way almost limitless in terms of availability on the net and in other large databases. So certainly very powerful technologies. I think what's also exciting, I mean, we talk about ChatGPT, but there's actually a wide range of other technologies that already existed before GPT that are now expanding exponentially because of what ChatGPT has done to bring it to the forefront. They've been brilliant in the way they've marketed it, to make it freely available and accessible to everyone. So the interest has just grown tremendously quickly. But I think what's important to recognize, Dave, is the fact that these technologies are not just siloed technologies that you go and access.
We have seen them proliferate in terms of integrations into other existing technologies. I think the most widely known example is the integration of ChatGPT into Bing as a search engine. Bing was almost a relic of the past, but now it's growing exponentially. It's becoming one of the most widely used search engines because it's integrating this AI technology into its platform. So that's just one example. But everything around us is starting to integrate this. I mean, every week we see new technologies: Salesforce is integrating it, Slack is integrating it into their platform. So we're going to be seeing these technologies pop up on everything we do, our calendars, our to-do lists, et cetera. So it is an important trend to think through generally, but also as audiologists and as hearing healthcare clinicians and researchers. It's going to change the way in which we interface with patients and provide our services.

{Dave} Yeah, that's really well said. Thank you for that nice overview there. I think that the Bing example is a really good one, because OpenAI did partner with Microsoft to really, I think, bring that technology to Microsoft and its search engine Bing. And what we're seeing, like you said, is that you had these sort of status quo technologies like Google that once upon a time was revolutionary and groundbreaking in and of itself. And now that's sort of being superseded by something that has the ability to, I think, generate the types of searches, and I think this is one area of application and use case that it's extremely well suited for, where these new search results have a level of context that we've never really seen before. And that context is derived from all kinds of different inputs, like Reddit and these different things that are sourcing a lot of customer feedback. So when you go and you search something like "what is the best hearing aid" or something like that.
In the past with Google, you would get a bunch of paid advertisements, and then there would be some method of authority to weigh those remaining search results. Now, what GPT would be doing is going and gathering a lot of different inputs, and it's going to probably spit out a totally different answer than what Google would. And so I just think that's a very specific example, but we're going to see a lot more of that. And I think that the key driver of what makes all of this so different is that contextual understanding of going beyond just the black and white, definitive, binary results that you would get with Google. And now you're seeing a layer of this context, and that opens up a giant can of worms in and of itself, because it's like, how does it get to these new answers that seem so authoritative, but are they flawed inherently? So I'll kick it to you, Jan-Willem, and get your thoughts on this whole thing.

{Jan-Willem} Yeah, that's a really good question. I would say that these large language models are actually excellent guessing machines. So if you ask this machine, okay, complete the sentence "once upon", then it will probably guess "a time" correctly, but that's something simple everybody can do. But then if you ask it not only to complete the sentence, but to complete a whole story, I just did it and asked it to make a story that's also nice for children to listen to. It will create a story and then explain at the end why it's confident. Because it used some story about magic, it's confident that children will like it. And, yeah, if you use it and test it for all these kinds of creative processes, I think it's really nice that it, out of nothing, can either hallucinate or create content. But also, because the main driver will probably be these search machines, people will get used to using Bing AI-like applications, and not only to ask where to buy something, but also probably about hearing aids, about their healthcare status.
And that was the reason that I just started creating some prompts. These prompts are the questions you can ask a chatbot, and I wanted to see what would happen if you ask these machines: okay, I have a hearing loss, what should I do? And I was actually surprised that the answers were quite accurate, although there's no reason to assume that they would be accurate, because the system that I used at the time, an older version of ChatGPT, has no clue about the world around it. It just uses this big set of training data that it can draw a lot of information from, and it came up with quite good answers that I could at least review and say, okay, this makes sense. So I see a lot of potential there. But at the same time it's important to realize, yeah, these are all answers that are likely, but they are not factual. And there I think we have to really think about, okay, how to discern facts from hallucinations, and what are then ways to proceed. And those will be different, I guess, for researchers, for clinicians, and for patients. But what we see with the research is that one example is the Evidence Hunt application, where there's a model like GPT-4 that's using only data from PubMed, or it's constrained to this data, and it will also show what PubMed articles it used for its answer. And what I just tried is prompting a question to this system. And then one possible application would be, if you have this answer, to say, okay, this is based on evidence, let's ask a system like GPT-4 to rewrite this into layman's terms, so that it's clear to, for instance, a patient. You will see, and then you can check if it makes sense. And I see these kinds of integrations to use it in healthcare, where there's still an expert in the loop. But for me it's easier, because I know this is based on quite new information, maybe not from the last year, but at least up to 2021. And it can help me explain it better to another person.
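The two-step workflow described here (get an evidence-constrained answer first, then ask the model to rewrite it in plain language, with an expert reviewing in between) can be sketched as a simple prompt chain. This is a hypothetical illustration only: `ask_llm` is a stand-in stub, not the Evidence Hunt or OpenAI API, and the prompt wording is invented for the example.

```python
# Hypothetical sketch of the two-step prompt chain described above.
# ask_llm() stands in for any chat-completion API; here it is stubbed
# so the prompt-construction logic can be run and inspected.

def ask_llm(prompt: str) -> str:
    # Stub: a real implementation would call a chat-completion API.
    return f"[model answer to: {prompt[:40]}...]"

def build_evidence_prompt(question: str) -> str:
    # Step 1: ask for an answer constrained to cited, peer-reviewed evidence.
    return (
        "Answer the question using only peer-reviewed evidence, "
        "and list the articles you relied on.\n"
        f"Question: {question}"
    )

def build_layman_prompt(evidence_answer: str) -> str:
    # Step 2: rewrite the evidence-based answer for a patient.
    return (
        "Rewrite the following clinical answer in plain language "
        "a patient can understand, without changing its meaning:\n"
        f"{evidence_answer}"
    )

def answer_for_patient(question: str) -> str:
    evidence_answer = ask_llm(build_evidence_prompt(question))
    # In the workflow described in the episode, an expert reviews
    # evidence_answer before the rewriting step.
    return ask_llm(build_layman_prompt(evidence_answer))
```

The important part of the design is the human checkpoint between the two calls: the clinician validates the evidence-based answer before it is simplified for the patient.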
Or another way I test this is by just telling a story to ChatGPT and seeing how it responds to it. If that's a good response, well, maybe if I then tell it to a person, that person will also understand it the right way. So it's a way to get feedback, for instance. And then other things. If we look into hearing healthcare specifically, I see these developments merging. There has been Siri since 2010, that was voice to text, so voice command. And around 2016 we saw this automatic speech recognition, so speech to text, which is of course a really helpful application for many people with hearing difficulties. And a future application would be, I see many people with hearing loss guessing what persons are saying. If a model then helps with the guessing, and maybe it is built into your device, it guesses what a speaker is going to say and gives this as a prior to the noise reduction system, which has to go really fast. I mean, these kinds of interactions could maybe be reality within five years; that's not far-fetched if I realize how fast it has been going in the last three months.

{Dave} There are two things that you said there that really resonate. The first one I want to circle back to is this idea of a large language model being restricted to one vertical, so PubMed. I think we're going to see a lot of this. So you have these broad based LLMs like ChatGPT that are scouring so much of the Internet, because think of how much written text exists out on the Internet today. I mean, it's basically accessing all of the open gardens, if you will, but there are closed gardens. And I think there are going to be a lot of advantages to having singularly trained LLMs within specific verticals. Healthcare, I think, is a really good example of this. So let's come back to that. But the other one that you mentioned is this idea that Siri has been around since 2010, and then 2016.
We kind of saw the Amazon Alexa, Google Assistant. And De Wet, you mentioned something at the beginning, before we even started recording, which is that a lot of this has been happening behind the scenes and percolating for years, and now we've seen it all be put together. And I couldn't agree more with that. I've been following the voice user interface space for a little while, and I know that what we've really seen in this Alexa era, as I think of it, is major improvements in natural language processing, text to speech, speech to text. So basically, computers beginning to actually interpret language and then being able to spit it back out, whether it's speaking it or it's just in a chat interface. So I think that what we're seeing is not some sudden emergence of brand new technology. This is actually a maturation of five technologies that have all matured to the point where now they've fused into this one thing. And that's where I think we're now seeing that byproduct of, okay, when you put together all of this and you have this ability to capture so much of the Internet, it almost reminds me of, I'm not sure if you've ever seen the movie Short Circuit 2, where Johnny 5 is the robot and he can read. He's constantly just gathering as much information, and he can read at lightning speed. So he's in the library and he's reading the entire encyclopedia. I mean, that's essentially what these things are able to do: they can read the whole Internet and spit back out to you a consolidation of that. So there's a superpower ability here, but obviously there's a big can of worms that that opens up, which is this idea of how do you make sure that the information that it's gathering is accurate? What kind of oversight is there? Those kinds of things, I think, are going to become paramount as we move forward.
And we know that there are already parts of the world that want to either slow this down or completely remove it. Italy, and I saw just this morning that Germany is now considering banning something like this. So I just kind of want to get your thoughts, either one of you, on this idea that you can't really put the genie back in the bottle. So it seems like we have to work around what exists. But if there are governments that are actively trying to put the clamps on this, I'm just curious if you feel like that's feasible in any way, and how you see this shaping and shaking out.

{De Wet} Yeah, Dave, maybe I can just respond to a couple of the things you mentioned, all super relevant comments, and there's so much to talk about here. I like the Siri story that Jan-Willem also introduced. I think Siri gives us a little bit of context. When it came out, it was absolutely revolutionary, but in a way, Siri now looks like a young infant, and ChatGPT is probably a five or six year old. It's still maturing, right? I mean, we haven't seen where this technology is going to go yet. We've had a bit of a foretaste, but I think there are a lot of exciting things to come. Obviously, with these new powerful technologies, there are all kinds of concerns that are raised, right? And I think those are important things to also mention and discuss, and those are also the things that are being raised in different forums. You've mentioned Italy raising concerns, Germany also raising some concerns about where the data is coming from and the privacy, et cetera. So I think those things need to be worked through and discussed. And there need to be good bodies to actually help us have better transparency and insight into how these models work, where they get their data from, and how we can utilize them in a way that actually helps us not to have a biased view from what these technologies are giving us.
Because the one thing these technologies are really good at is sounding convincing and confident, and they sound like humans. I mean, I think that's one of the powerful things about these AI chatbots: they really engage in a human interaction that feels natural to us, and that also fools us sometimes into believing them too easily, right? Because they do get it wrong. Someone compared ChatGPT to a really enthusiastic, young, inexperienced research assistant, and a super smart research assistant, right? So very eager to help, very eager to collate information and give it to you, but it does get it wrong, and we've seen that happen as well. So you need to have a way to validate and check that. Maybe some other general comments around what you mentioned in terms of oversight and managing this revolution in the research field. ChatGPT is amazing at supporting the writing of research documents and research articles. So when it went live on 30 November, researchers started using ChatGPT to write research papers, right? So you give it some information, and it's super at generating text and writing an article, even for you. And suddenly the big journals had to respond. So you can see ChatGPT is a co-author on many research papers already published at the moment. And then we saw some of the influential journals, like Nature, coming out to say they're not accepting ChatGPT as a legitimate author, and they needed to do some work to say what constitutes a legitimate author. Right? And they came back with a rebuttal to say an author needs to be able to take responsibility for what they're writing, which ChatGPT and any AI chatbot obviously can't. I think that's a good line. And they've made some good recommendations about next steps for us to not ban the technology, not try to get rid of it, like you said, not try to get the genie back in the bottle.
But how do we find ways to utilize this technology to help us be more effective, more efficient, more responsible, and to get information out quicker to people, so that we can actually move faster in this knowledge generation era that we're living in? So we need some guidelines, we need the right processes in place, but certainly I believe it's not the right approach to try and ban it; we should actually find a way to use it responsibly. And I think there it's also important for us to know how to acknowledge ChatGPT. It doesn't plagiarize, it gives good information, but we need to be able to report, to say ChatGPT was a tool that we utilized to generate this text or to write this article or to do this data analysis or whatever. So we need to find good, responsible ways of acknowledging its contribution, so that we're transparent in that way. So I've added a lot of additional comments, so let me just put it up to Jan-Willem or you, Dave, to comment.

{Dave} Jan-Willem, I'll kick it to you. Thoughts?

{Jan-Willem} Yeah, well, I think there's also good critique given by the researchers who say, okay, but this is all not validated information, you cannot use it in a clinic. And I must say that I agree that that's true, but that's in an ideal society, an ideal situation. Often I see that, for instance, people have questions for me, or they have already found some answers, and there are also a lot of errors there, and you have only a limited time to give an explanation. So you focus on a number of items to further address. But what I found interesting is that, for instance, ChatGPT also gave advice about healthy diets or about being thoughtful about sound levels. And those are things that either us clinicians take for granted, or that are a conversation in themselves, if you have only ten minutes, for instance, and you think, oh yeah, the diet is important, or hygiene, those kinds of things.
So it's interesting that these systems popped it up, and that could also maybe be the key for a next conversation with your specialist. So there I also see opportunities that maybe these AI chatbots can help you digest all this information or prepare for your appointment. And as clinicians we could also collaborate within our association, for instance, and see, okay, what are good prompts to ask, and maybe publish some kind of frequently used prompts that we could advise to patients, where we say, well, that is a good start. And of course, with some warnings about potential misuse, or, in case of doubt, contact your clinician or some health provider. In that way, I think it's helpful if we start to experiment with this instead of banning these technologies, also because, yeah, it's impossible, I guess, to ban, because it will be built into many applications in the near future.

{Dave} Yeah, it certainly feels like one of those things that it's going to be really hard to completely reverse and put back in the bottle, but there will probably be efforts to at least mitigate the speed. And that's, I think, what is probably both most astonishing and also most concerning: just the rate at which this seems to be progressing. I mean, the first iteration that was released, like De Wet said, in November, was really kind of mind blowing, and then this next version is even better, and so it's just kind of crazy to watch. But there were a couple of things that you said there, Jan-Willem, that I thought were really interesting. And maybe we should get into the article that you two wrote around this. So you mentioned these prompts. That's the terminology that's used when describing how to even communicate with these models: you're prompting the large language model. And so you guys did some prompts sort of from the perspective of the patient in a hearing healthcare setting, as well as the clinician.
And I thought there were a couple of really interesting things that came out of that that we can talk through. But to your point, because I think it's very specific, but I think this will be broadly applicable all over the place, is this idea of sort of unexpected answers. So if it's going to spit out seven bullet points of recommendations of what to do if you detect a hearing loss, the first five of the seven are probably going to be pretty generic. But then there are things that it's obviously sourcing from some publications that it's weighing as being authoritative. So it's factoring in diet and exercise and all that. So even if that's not something that's verbatim in the guidelines issued by some standards committee, it's still adding that in. And I think that we're going to see more of that, where these models have the opportunity to go beyond, I guess, the best practices, the status quo, and introduce things that might be a little bit more off the beaten path, which could actually be really significant in the grand scheme of things when you're thinking through all kinds of different medical anomalies, more or less. And the role of the doctor is largely to determine what's going on with you. I think that what really makes me excited about this is the idea of some sort of off the beaten path study that was done, that might be completely unbeknownst to the clinician, that this thing is surfacing insights from. And it seems like maybe that could be a real upside to this: because of the breadth at which it's going and scouring all of the different clinical data and studies and stuff like that, it might actually be surfacing some information that would not be surfaced if you're just strictly going off of the status quo today. So I'll just throw that out there and let you respond to it in whatever way you want.
But I think it would be good here to just start talking through how this applies to audiology, the patient, the clinician, the research, really any one of those different participants here.

{Jan-Willem} Okay, I'd like to reply to that. I think this transparency is also important, because if it's using these different sources, you need to be able to somehow assess its validity. And I think in these discussions it's overlooked that it's called OpenAI, for instance, but it's not an open organization at all. These models are not open source or publicly available, nor is its exact training data available. But if, in theory, such a model would be openly available to researchers in hearing healthcare, then it would be really interesting if you could train this model specifically on parameters important for audiology, and also maybe some of the important facts for our patients, for instance, to take into account, and that you can also add what the system is basing itself on. So, maybe I don't know how good our databases are, but you can imagine that if ENT doctors and audiologists around the globe would fill a database with their best practices and say, okay, just constrain this model to these best practices, it will help clinicians and students who are not up to date with best practice, for instance, to learn from it, and help disseminate these clinical approaches. While on the other hand, people who don't have access to a clinic can use prompts and then get information from this validated model; that would be really helpful. So I'd say that these commercial models have shown that this is really versatile and could be used, but hopefully through more community driven approaches that are open and also maybe freely available, because OpenAI is also now giving priority to people who pay, for instance, giving them more bandwidth, et cetera. That would be helpful, in the long term, in terms of how to organize your healthcare model.
It could be a good, cost-effective investment if many clinics throughout the country, and also patients, can benefit from better information, better access and better clinical workflows.

{Dave} One thing that comes to mind here, and either one of you can respond, is going back to the point that I was making earlier about these verticals. So think of OpenAI as being able to scour anything. But if you start to put parameters on that, then you can create, more or less, a definitive amount of information that it's sourcing from. So you think of, maybe, the way this evolves is you have these large medical institutions, like the Cleveland Clinic or something like that, where they're basically establishing that, okay, for anything related to cardiology, we want those standards or all of those best practices to be guided by the United States Heart Association or something like that. So for audiology it would be like the American Academy of Audiology. Whatever kind of prompt pertains to hearing healthcare and audiology, you need to use this set of parameters and this information to guide the answers. So that's, I think, one way that these could be shaped: like I said, these large bodies are actually defining what the language model is able to access to begin with. That's a level of oversight that I could see being implemented here, defining what's being sourced.

{Jan-Willem} Yeah. And that people know what's in the data. For instance, I tried to use prompts to constrain the model to only using American standards or British standards, and I didn't see any effect. So even if you try to build it into the prompts, apparently that's not the level at which to constrain it; it should be deeper in the model. And there, I guess, we really need as a field not just to embrace this technology, but to see how we can get better alternatives that have learned from this.
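The distinction being drawn here, constraining a model at the data level rather than via prompt wording, is the core idea behind retrieval-based designs. A minimal sketch, with made-up source names and documents purely for illustration: only documents from vetted sources ever reach the prompt, so no prompt phrasing can pull in unvetted material.

```python
# Illustrative sketch of constraining a model at the retrieval level
# rather than via prompt wording. Source names and document texts are
# invented for the example; a real system would use a vetted database
# of best practices and rank retrieved passages by relevance.

APPROVED_SOURCES = {"best_practice_db", "clinical_guideline"}

documents = [
    {"source": "best_practice_db",
     "text": "Fit hearing aids bilaterally when clinically indicated."},
    {"source": "random_forum",
     "text": "Garlic oil cures hearing loss."},
    {"source": "clinical_guideline",
     "text": "Refer sudden sensorineural hearing loss urgently."},
]

def retrieve(docs: list) -> list:
    # Filter happens BEFORE prompting: unvetted sources never reach the model.
    return [d["text"] for d in docs if d["source"] in APPROVED_SOURCES]

def build_prompt(query: str, docs: list) -> str:
    context = "\n".join(retrieve(docs))
    return (
        "Using ONLY the context below, answer the question.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )
```

Because the filtering happens in code before the model sees anything, the constraint holds regardless of how the prompt is phrased, which is what "deeper in the model" amounts to in practice.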
I mean, it's clear to everybody that GPT-4, or this chatbot from Google, all these systems, will not be the final shape. We're now maybe using these five or six year old AI advice systems, and I guess it's good to consider them as minors, so that if it's something really important, then you don't rely on the opinion of a five year old. So it's good to keep that in mind, and to see how to integrate these different systems in ways that keep the errors in check, while also allowing for upscaling applications and making benefits accessible in countries where maybe it's not affordable yet, or where the information is not even findable.

{De Wet} Yeah, I think one of the important applications for these AI chatbots is assisted diagnosis for clinicians. We've had systems available for many years, but they just haven't been as intuitive, and they haven't relied on such large models. I think we're seeing this taken to a whole new level. I agree with the transparency point, but the exciting thing is there are all kinds of ways in which these AI chatbots are going to improve, and are already improving, clinicians' engagements with patients. And I think we've spoken about the diagnostic side, but there are so many other ways. I mean, they're perfect at doing case histories, right? They can do an amazing case history, ask the right open ended questions and then narrow them down, so that you can have a thorough case history already done before you even see a patient. I think we're also seeing them contributing to the efficiency of the engagements with patients. There was just a recent article that came out about Microsoft actually embedding this into a tool that will allow clinicians to get patient notes transcribed automatically and then organized and structured for you after you've done your consultation. So it saves you a lot of time, and it increases the efficiency and effectiveness of our engagements as well.
And then, of course, there’s the whole idea that, as we collect information about a patient during the case history beforehand, but also during our consultation and testing, it can collate that information, but actually also start interpreting it for you, so that you have a cross-check and cross-validation when you speak to the patient, and it can make recommendations on treatment options that can then be validated by the clinician. And as you mentioned, Dave, I think it has the advantage that it considers everything in its database, so it can surface things and suggestions that we may sometimes forget about. But we need to recognize, as Jan-Willem also reminded us, that it needs oversight. We don’t yet know about the transparency, how the data has been put together, and what potential biases some of these models have. But it’s super exciting to see the whole clinical engagement being affected by these technologies. I think in the next couple of years we’re going to see them integrated into that entire journey. Or, another example: I already asked GPT-4 to interpret an audiogram with a mixed hearing loss. We can expect that these models are also going to output images, or maybe use images as input. So I can imagine that you hand over an audiogram, which is rather clinical information meant for the expert, and then as a patient, you can just take a photo of that audiogram and ask a chatbot to explain it to you, and maybe get a better interpretation, or a repetition of what the expert told you in that brief conversation. So there, too, it can help in explaining your patient journey. As for the risk, of course errors can be made, but that is manageable as long as you have checks and balances and ways to set things straight. If you use it for taking the patient history, for example, there is a follow-up moment with the clinician where things can be corrected.
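To make the audiogram-interpretation idea concrete for readers: the first pass a chatbot might take is the same arithmetic a clinician starts with, a pure-tone average and a severity label. A sketch, using hypothetical thresholds and one commonly cited severity scale (Clark, 1981); any real interpretation still belongs with a clinician:

```python
# Sketch of a first-pass audiogram interpretation. Thresholds are
# hypothetical; severity bands follow one commonly cited clinical scale.

def pure_tone_average(thresholds_db_hl: dict[int, int]) -> float:
    """Three-frequency PTA: average thresholds at 500, 1000, 2000 Hz."""
    return sum(thresholds_db_hl[f] for f in (500, 1000, 2000)) / 3

def severity(pta: float) -> str:
    """Map a PTA (dB HL) to a severity label."""
    bands = [(25, "normal"), (40, "mild"), (55, "moderate"),
             (70, "moderately severe"), (90, "severe")]
    for upper, label in bands:
        if pta <= upper:
            return label
    return "profound"

# Hypothetical right-ear air-conduction thresholds (Hz -> dB HL)
right_ear = {500: 40, 1000: 45, 2000: 50}
pta = pure_tone_average(right_ear)
print(f"PTA {pta:.0f} dB HL -> {severity(pta)} hearing loss")
```

The interesting part for a patient-facing chatbot is not this calculation, which is trivial, but wrapping it in plain-language explanation and flagging when the pattern (for example, air-bone gaps suggesting a mixed loss) needs an expert.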
Sorry, Dave, I mean, any clinician would complain to you about the amount of admin they have to do, right, and the report writing. ChatGPT, or these kinds of models, are absolutely perfect for generating those reports based on the data that you receive. So there is certainly potential for efficiency and cost-effectiveness gains in practices, but also in large healthcare systems. And that’s just the clinician side of users. What about the actual consumer or patient? How can they engage with it and actually benefit from this, even before they see a clinician, or as a support system after they’ve seen a clinician? Yeah, there are a couple of things coming to mind here, and I know we are coming up on time, but I just think this idea of having your own large language model, running on your own data, would be really impactful. So you’re talking about the patient’s perspective. What happens when you get to the point where a lot of those different electronic medical records, in all their different shapes and forms, from the audiogram that you’re uploading onto your iPhone in Apple Health to all of these other inputs, can be shared with a large language model of the future that’s literally specific to your data? How powerful would that be when it can start to take all of these different inputs and figure out what’s correlating with and impacting what: your diet, your sleep. You look at this trend of the quantified self with your Apple Watch, and this idea that there are more and more sensors and more data that you’re capturing and feeding into a model right now, but you don’t yet have the AI engine that’s going to really start to make sense of it. So I think that’s coming as well. That’s going to be really powerful and will change a lot of this stuff.
But again, we’re at day one here, and I think, even to your point, De Wet, about creating efficiencies for the clinician, a lot of that had a precursor, like the ability to transcribe your past meeting through your voice into a notes app. All of these things have enabled what’s happening now, because the data all exists. You have this large database, even for just one patient, of all their records. So it’s a matter of how you start to combine that, consolidate it, and draw insights from it. That’s a task that is almost impossible right now because of how fragmented the data is, and also, who has the bandwidth to do it? So that’s a perfect application, I think, of these large language models that we’re at the beginning of. I mean, we’re only seeing these things scour publicly available information across the internet. Think of when they start to be able to do this for personal records. And I probably just opened up another conversation there which I’m not sure we have time for. But closing thoughts in general? Maybe one short answer would be: I think that throwing more data at these models is not the solution now, because almost the entire internet is already used for training. Getting more out of the same information, constraining it, and wrapping other functions around it, other databases that have validated information, et cetera, that is, I think, the way forward. But we will see. I assume that nobody knows what’s in store for the rest of this year, let alone 2025. Yeah, maybe just one or two thoughts from my side as we close. I think we’ve covered a lot of ground in the healthcare space in general, and maybe touched a little bit on hearing healthcare specifically.
But of course, this technology covers every field of occupation and health in general, and it is also changing the landscape of the tech giants that we’re so used to. You mentioned it’s day one of this technology, Dave. I think it was Google’s CEO who downplayed ChatGPT’s prominence by saying it’s minute one of an entirely new journey. I think that’s true. But what we’re also seeing is these shifts and pushbacks between the different companies, everyone fighting for this space. As consumers, we’ll have the advantage that we’re going to get really fast development and good products. But the downside is that we’re going to have to monitor them, because I think some of these things have been released without enough information being provided about what data they’re using, the privacy and regulatory constraints they’re functioning within, et cetera. But in any case, all kinds of things to discuss; it’s an exciting new era that we’re in. Absolutely. I couldn’t agree more. On that note, we will end today’s conversation. I’m sure this will be the first of many conversations like this as this all starts to unfold and becomes more pervasive in our lives. So thank you so much, De Wet and Jan-Willem, for coming on today. And thanks to everybody who tuned in here to the end. We’ll chat with you next time. Cheers.

Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week and follow This Week in Hearing on LinkedIn and Twitter.

Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.


About the Panel

De Wet Swanepoel, Ph.D. is Professor of Audiology at the University of Pretoria, South Africa and adjunct professor in Otolaryngology-Head & Neck Surgery, University of Colorado School of Medicine. His research capitalizes on digital health technologies to explore, develop and evaluate innovative hearing services for greater access and affordability. He is Editor-in-Chief of the International Journal of Audiology and founder of a digital health company, hearX group.

Jan-Willem Wasmann holds an MSc degree in physics and works as an audiologist at the ENT department of the Radboud University Medical Center Nijmegen in the Netherlands. He believes in the potential of computational audiology. His recent work includes AI-guided CI fitting techniques, simulated directional hearing based on neural networks, and remote care.

Dave Kemp is the Director of Business Development & Marketing at Oaktree Products and the Founder & Editor of Future Ear. In 2017, Dave launched his blog, Future Ear, where he writes about what’s happening at the intersection of voice technology, wearables and hearing healthcare. In 2019, Dave started the Future Ear Radio podcast, where he and his guests discuss emerging technology pertaining to hearing aids and consumer hearables. He has been published in the Harvard Business Review, co-authored the book “Voice Technology in Healthcare,” writes frequently for a prominent voice technology website, and has been featured on NPR’s Marketplace.
