From picking a restaurant on a Saturday night to choosing a new dentist, all of us seem to rely more and more on online reviews.
In this episode of This Week in Hearing, Vinaya Manchaiah, professor at the University of Colorado School of Medicine, talks us through the consumer online review research his team of collaborators has conducted over the past few years.
Full Episode Transcript
Brian Taylor 0:10
Hi, everyone, and welcome to another episode of This Week in Hearing. I’m Brian Taylor. This week we’re going to be talking about online reviews. I think most of us are familiar with the process of writing online reviews; we might do it ourselves after a restaurant or some other experience we had. And even more of us are probably relying on online reviews before we go to a restaurant, or even before we visit a doctor or a dentist. I know I’ve relied on online reviews more and more, and over the last few years they’ve become a really important tool. Online reviews are actually something that I think hearing care professionals need to pay attention to, and there’s been some research done in that area. That’s what prompted me to bring on our guest today, Dr. Vinaya Manchaiah, who is a professor at the University of Colorado School of Medicine. I want to welcome Vinaya to the broadcast today. Welcome.
Vinaya Manchaiah 1:13
Oh, thank you, Brian, my pleasure to be here. I do read the blog. And I’ve seen plenty of these videos, and I really enjoy watching them. So thank you for hosting me today.
Brian Taylor 1:22
Well, it’s good to know that people are watching our stuff. You’re one of the most prolific researchers that I know. Every time I go to one of the journals, it seems like I see another article that you wrote, and so much of it is germane to clinical practice. One of those areas is what we’re going to talk about today: online reviews. But before we get into that topic, I thought it would be really helpful for our viewers if you could tell us a little bit about your background.
Vinaya Manchaiah 1:53
Okay, well, I started as a clinician, an audiologist, and worked in the clinical environment for some time. But for much of the last 10 to 12 years, most of my effort has been in the academic space, in academic jobs, teaching and research. I’ve always enjoyed clinical work, though, so I’ve kept up my clinical work throughout my career. Currently, I serve as a professor in the medical school here at the University of Colorado, and I also have a leadership position as the director of Audiology at the University of Colorado Hospital, so I oversee the audiology clinics here. In addition to that, I also do a little bit of clinical work. In terms of my research, I’ve been all over the place, looking at different things, and every year or couple of years my interests change and switch to a different direction. But I would say the last few years there has definitely been more focused effort on the internet, virtual spaces, and telehealth. Those are my main areas, and I call my lab the Virtual Hearing Lab. If you’re interested, you can just Google Virtual Hearing Lab and go to the website. I co-lead this with Professor De Wet Swanepoel in South Africa. We are really interested in understanding what happens in the virtual space, what kinds of tools we can develop that can supplement our clinical investigations, and how we can provide some rehabilitation as well using the internet and virtual spaces. So we’re interested in everything that happens online.
Brian Taylor 3:32
Well, that’s why we have you on the broadcast today. There are actually a lot of different topics we could talk about with you, but the one I wanted to focus on is consumer-driven health care and people’s desire to seek information and get advice online. And I think the first question for you, Vinaya, is what motivated you to get into that area of study?
Vinaya Manchaiah 3:58
So, a few different things. I think the initial interest basically started from my clinical experience as well as research. For example, I started developing and evaluating internet interventions, mainly for tinnitus, and now also for vestibular disorders: developing self-management programs that people could take on the internet and then looking at how their outcomes may substantially improve, and so on. One of the things that happened there was that people would read quite broadly and send us questions about a range of things we had never thought about. And also clinically, for instance, I recall several years ago a patient, an individual with tinnitus, asking me, where can I get laser therapy for tinnitus, and can you recommend a place? And I had never heard of laser therapy for tinnitus. So I took some time to look at PubMed and then told him, well, I had never heard of it, but it looks like there’s very limited evidence, so I can’t really recommend that you do this. That kind of prompted my interest in virtual spaces. I started thinking: it seems like people are spending a lot of time online looking for information, and that is driving their interest as well as their decision making, so maybe it is worth it for us to look into those spaces. But at that time I really didn’t have any expertise to do anything meaningful. Then something else happened. Interestingly, I collaborated with some sociologists in Sweden on understanding hearing loss from a new theoretical perspective. For example, much of the research on hearing loss and hearing aids is driven by stigma theory, which has some limitations. So we started looking at hearing loss and hearing aids from a new theoretical perspective called social representation theory. As part of that, we started applying some new methodologies, like looking at text data: how to analyze text data using modern analytical tools and software so that we can look for patterns. That gave me some insight into some methodologies, and then I thought, well, this is pretty cool, so how about we apply this to Facebook data and newspaper data and pretty much everything I can think of. And then the third, most important thing that happened was probably me moving to the US about seven or eight years ago and looking at what’s happening here in terms of technological advancement for direct-to-consumer devices and service delivery innovations. I don’t know how, but I got introduced to Abram Bailey, CEO of HearingTracker, and we started discussing. I think that also really strengthened my interest in looking at online spaces and virtual spaces, and then at things like reviews. So a few of these different things, the clinical experience, some introduction to methodology, and then the connection with HearingTracker, all came at the right time for us to start working in this area.
Brian Taylor 7:07
Yeah, I know Abram Bailey at HearingTracker collects an awful lot of data, and he does great work down there in Austin. What I wanted to focus on were three studies that you did in this area around online reviews. The first one I wanted to talk about was published in IJA, the International Journal of Audiology, I think last year. The title of the study, I have it here, is “Hearing aid acquisition and ownership: what can we learn from online consumer reviews?” Can you tell us what you did in that study?
Vinaya Manchaiah 7:39
Well, this is data from HearingTracker. In this we had about 1,400 online reviews of hearing aids. When we looked at the data, basically we had three types of things. One, we had some metadata, like the brand, the technology level, and how long people had been using these hearing aids. The second piece of information we had was user ratings for 10 different things in terms of their hearing aid performance, benefit, and satisfaction, which they would rate on a structured scale. And the third important element we had was text data, where hearing aid users would actually write about their experiences as a review. So we were looking at these data and were interested in understanding: does that tell us anything about hearing aid benefit and satisfaction? Now, we typically measure this in the clinical scenario using standardized questionnaires. But if you have text data, what does that tell us? So we took this data and applied some natural language processing, basically an automated way of looking for patterns or themes within the data, to find meaningful units from the text. And we also looked at the association between this text data and these ratings, just to see: if you do the rating, like a structured, standardized questionnaire, and look at the text data, do they tell you the same thing, or is there something more that comes out of the text data? That is exactly what we did in that study.
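For readers who want a concrete sense of what this kind of text-pattern analysis can look like in practice, here is a minimal sketch using scikit-learn’s TF-IDF vectorizer and k-means clustering. This is an illustrative stand-in, not the pipeline the study actually used, and the review strings are hypothetical.

```python
# Illustrative sketch: grouping free-text hearing aid reviews into themes.
# Not the study's actual pipeline; a generic scikit-learn stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reviews = [
    "Streams well to my iPhone but I still struggle in noisy restaurants",
    "My audiologist helped me pick the right model, great fit and comfort",
    "Battery life is poor and the app keeps disconnecting",
    "Much better than my old aids for hearing my grandkids in quiet",
]

# Turn each review into a weighted bag-of-words vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)

# Group reviews with similar vocabulary; k would be tuned on real data
# (the published study reported six text clusters).
k = 2
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Show the most characteristic terms per cluster.
terms = vectorizer.get_feature_names_out()
for c in range(k):
    center = model.cluster_centers_[c]
    top_terms = [terms[i] for i in center.argsort()[::-1][:5]]
    print(f"cluster {c}: {top_terms}")
```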
Brian Taylor 9:30
Yeah, and I know if you go to the study, it talks about the text data and the analytics around it. For our viewers out there, and maybe this is just for my own information, when you look at the charts, the words or phrases that are written the most have big letters, and things that are not written as much have smaller letters. So you get this infographic that’s kind of interesting as far as what’s been written. Am I right?
Vinaya Manchaiah 10:01
So, two things there. One, of course, it does look at the frequency, how frequently something comes up. But what is more important is actually not the frequency but the inter-relatedness. One way of looking at text data is that we read each statement and code it qualitatively, what does it say, and then analyze that data. But we have really good tools nowadays to do that automatically. You can read 100 people’s responses, but if you have 100,000 responses, you can’t read them all; it would take forever. So can we meaningfully analyze them using software? And the answer is yes, we are getting closer and closer to being able to do that. In addition to the frequency, it also looks for themes and specific patterns in what people are talking about.
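As a small illustration of the distinction Dr. Manchaiah draws between raw frequency (what drives the size of words in a word cloud) and inter-relatedness (which words tend to appear together in the same review), here is a short sketch in plain Python. The reviews are made up for demonstration.

```python
# Illustrative sketch: frequency view (word-cloud size) vs. co-occurrence view
# (which words appear together, the basis for finding themes).
# Hypothetical reviews; not data from the study.
from collections import Counter
from itertools import combinations

reviews = [
    "great sound but noisy restaurants are still hard",
    "bluetooth streaming to my phone works great",
    "restaurants and meetings are still noisy for me",
]

freq = Counter()   # how often each word appears across reviews
cooc = Counter()   # how often two words appear in the same review

for text in reviews:
    words = set(text.lower().split())
    freq.update(words)
    cooc.update(combinations(sorted(words), 2))

print(freq.most_common(5))  # frequency view
print(cooc.most_common(5))  # inter-relatedness view
```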
Brian Taylor 10:51
Oh, that’s interesting. So in this first study that I mentioned, that you kind of briefed us on, can you tell us what you found?
Vinaya Manchaiah 10:58
Well, several interesting things. The first thing: the cluster analysis of the text data resulted in six clusters, but mainly two domains. One, people were talking about device acquisition, like finding the right provider, the right type of device, selecting a hearing aid to suit the type of hearing problem they had, and so on. The second main theme was device use: quite a lot of discussion around smartphone connectivity, adjusting the hearing aid through the smartphone, and also hearing in noise. Those are some of the main themes that came out of this data. Another really interesting thing for me was that when we look at the quantitative ratings, that is, the ratings from the structured questions, nearly 80% of them reported good benefit or satisfaction, good or very good on a five-point scale. But when we look at the text data, we actually see quite a lot of issues being reported. Of course they’re satisfied with their hearing aid, but they still have issues. So I think the interesting point is that just because you’re happy with your hearing aid doesn’t mean you have no issues, which indicates that we need to look deeper. Another thing that came out: we also looked at the association between these ratings and the themes, to see if there is any relation between them. In that particular analysis we looked at whether, within certain themes, some ratings are over- or under-represented. For one of the themes, for example finding the right provider, the price point, and so on, whenever people talked about that, they had really low ratings, ratings of one, two, and three. That means if they have a problem finding the right person or finding the right type of device, they’re going to leave a really bad review. I would say this is a bird’s-eye view, and we need to get to the bottom of it, but in this first study we just wanted to understand what people were saying and how their text data relates to their quantitative ratings.
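The over- or under-representation idea described above can be pictured as a simple contingency-table analysis: cross-tabulate whether a review mentions a theme against its star rating and check whether some cells are more populated than chance would predict. The sketch below uses a chi-square test with made-up counts; it illustrates the concept and is not the study’s actual analysis.

```python
# Illustrative sketch of over/under-representation of ratings within a theme.
# The counts below are invented, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: reviews mentioning "finding the right provider / price" vs. not.
# Columns: rating 1-2, rating 3, rating 4-5.
table = np.array([
    [40, 25, 35],    # theme mentioned
    [60, 90, 350],   # theme not mentioned
])

chi2, p, dof, expected = chi2_contingency(table)
residuals = (table - expected) / np.sqrt(expected)  # standardized residuals

print(f"chi2={chi2:.1f}, p={p:.4f}")
print(residuals)  # large positive values flag over-represented cells
```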
Brian Taylor 13:14
That’s pretty interesting. What people say in the clinic, as far as complaints from one or two individuals, is one thing, and then you see it writ large when you start pooling all of that data across thousands of users all over the country. It’s pretty interesting. Was there anything in that study, if I remember right, about hearing in background noise? That was a real theme, right?
Vinaya Manchaiah 13:45
Yeah, that was also one of the dominant themes. One thing that we did not do in this study was look at whether people were talking negatively or positively about background noise. But just looking at the statements, I can say they were talking about both. Quite a lot of people were saying, oh, how cool this new hearing aid is compared to my previous hearing aids, I can do much better in background noise. And there were also plenty of people saying, oh, I still struggle quite a lot with background noise, and it’s not my hearing, the hearing aid is no good in background noise, and so on. So, yeah, that’s an interesting finding, and it does have some implications for us. One implication may be that technology has advanced tremendously and it has definitely helped people deal with background noise, but I think we may be placing too much emphasis on it. It may be time for us to advocate for better living spaces, to improve the acoustics of the space. That is one, and the second, maybe, is also thinking about all the accessories that we can couple with hearing aids that can help users deal better with background noise.
Brian Taylor 15:01
Yeah, and that’s really interesting, because I think it speaks to the limitations of hearing aid technology. When you get into these really adverse signal-to-noise ratios, I guess maybe for all of us in industry it’s a little bit unrealistic to think that the hearing aid itself can do everything. I really like the point you make that maybe we need to focus a little bit more on improving the acoustics of the room and on accessories that can help improve the signal-to-noise ratio. So it’s more than just good signal processing in the hearing aid; there are other things that we sometimes forget. The importance of these reviews, I think, is that they shine a light on a theme that you’re hearing in multiple places, so it speaks to the power of your research and how it applies to what happens in the clinic. Which leads me to the second study, which was published just a couple of weeks ago in AJA, the American Journal of Audiology. I have the title of that study here in front of me, because I want to make sure I get it right: “Online reviews of hearing aid acquisition and use: a qualitative thematic analysis.” Based on that title, it sounds like you’re looking more at what people experience after they’ve acquired hearing aids. So can you tell us about the second study that was in AJA? What did you do in this study?
Vinaya Manchaiah 16:28
So this is a follow-up study that basically uses the same HearingTracker data and hearing aid reviews. When we did the text pattern analysis, we learned something new, but the fact is that we were getting a bird’s-eye view of the data, not necessarily a granular view. So we thought it would be interesting to do a traditional qualitative analysis: read every single statement that users had made, assign a meaning unit to what they’re saying, and then see how that relates to the quantitative data. Is the qualitative analysis we’ve traditionally been doing similar to these new quantitative metrics, and what additional things can we learn from it? That’s exactly what we did in the study. And in terms of the findings, very interesting findings. In the bird’s-eye view, the text pattern analysis, we had found six main themes. But when we looked closely at the data, we actually found 11 themes and 136 sub-themes. That is crazy. It means that if you look very closely at the data, people are actually talking about a bunch of different things, and we can’t limit them to two or three main topics. Of course there are some dominant topics, but people talk about a range of different things, and that’s why I think it is important for us to get to the granularity of the data. At the same time, the qualitative analysis reinforced the quantitative analysis, because if you look at the main themes from the qualitative side, it still boils down to the clinical processes (getting their hearing tested, how they got hearing aids), the device (the type of device, its performance, physical appearance, fit, and things like that), and then the person (their experience and knowledge, whether they’re satisfied with the device, whether they have any changes in their quality of life, things like that). So it boils down to the main themes, but we can get a lot more granularity by reading and understanding the statements. One other really interesting thing in this study was that when we were coding, when we were reading this data, we also coded whether the statements were positive, neutral, or negative, or whether they were more like advice-giving for future hearing aid buyers or owners. We found that just over 60% of the statements were positive, which is pretty good; nearly 30% of the statements were actually negative; about 6 to 7% were neutral; and about 2% were advice-giving. This is also an interesting finding, because when we look at that single number for benefit and satisfaction, 80-plus percent indicate good or very good, but when we actually look at the statements, only just over 60% are positive, and the rest are negative or neutral. So it speaks to that earlier point: when we ask people to say more, maybe they can give more detail than the single number we arrive at from these questionnaires.
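The positive/neutral/negative coding in the study was done by human readers. As a sketch of how a similar valence tally could be approximated automatically on a much larger corpus, here is an example using NLTK’s VADER sentiment analyzer on hypothetical statements; the 0.05 cut-offs are the conventional VADER thresholds, not values from the paper.

```python
# Illustrative sketch: automated approximation of the manual valence coding.
# Statements are hypothetical; the study itself used human coders.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

statements = [
    "These aids changed my life, I can finally follow conversations",
    "Still very hard to hear in restaurants",
    "I wear them about eight hours a day",
]

def label(text: str) -> str:
    score = sia.polarity_scores(text)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

labels = [label(s) for s in statements]
for value in ("positive", "neutral", "negative"):
    share = labels.count(value) / len(labels)
    print(f"{value}: {share:.0%}")
```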
Brian Taylor 19:54
Yeah, that’s really interesting, because I know the last few MarkeTrak surveys report an overall satisfaction rating of 80-some percent. So you’re saying that at that sort of top level, if you ask that simple question, you’re getting the same thing, but if you dig into the details, you start to see that number drop a little bit.
Vinaya Manchaiah 20:16
That’s exactly what we found in the online reviews. Again, the sampling is very different from MarkeTrak, but the interesting thing is that when we do the same type of measurement, we get the same roughly 80% satisfaction, right? So there is definitely more to learn from this angle. In fact, we have a current project looking into whether we can use natural language to understand benefit and satisfaction. Again, we are doing this project under the Virtual Hearing Lab, and we have a researcher in South Africa working together with De Wet Swanepoel, basically looking into whether we can gather text information from hearing aid users and how that may relate to our standardized questionnaires. The text information may be very useful because, depending on what people say, we can understand their benefit and satisfaction, but in addition to that, we can also maybe help fine-tune the hearing aids, which we can’t do just using a number rating. So I think there is definitely more value in getting more textual data, or firsthand reports from hearing aid users.
Brian Taylor 21:30
Yeah, that’s really interesting, because it speaks to the value of machine learning: gathering all these large pools of data and then applying it at the clinical level with an individual. I can see how knowing this information in the clinic can help you drive satisfaction to a higher number with the individuals you see when you have these details. So that’s great. That brings me to the third study I wanted to talk about today, which was published last year in JAAA, the Journal of the American Academy of Audiology. I know this one pretty well because, in my role at Signia, we’ve talked about this study in different white papers to build the case around what consumers want and look for in their hearing aids. But in this JAAA study, you looked at consumer ratings, and this number is really high: 15,000 people. So tell us about the study that evaluated almost 15,000 consumer reviews.
Vinaya Manchaiah 22:33
So this, again, is data from HearingTracker, thanks to all of their efforts in gathering good data and helping us do some of this interesting work. The data comes from a particular tool they have called Help Me Choose. When consumers or patients go to their website, they can take this tool; it’s more like a decision aid. What it basically does is ask users about their preferences and then tell them which hearing aid may be more suitable for them. So it asks a bunch of questions, like, for this feature, how necessary is it for you, and it does the same for a bunch of features. At the end, an algorithm tells them, well, here’s a hearing aid that is suitable for you. In some ways this is actually pretty good data, because when we do questionnaire studies, we don’t know if the users are really reading the questions and answering them truthfully, whereas if they actually want to know which hearing aid is good for them, I’m pretty confident they’re answering these questions more truthfully. So we took this data from 15,000 users. We didn’t have much demographic data, like age and gender, which would have been very helpful, but at that time the purpose of collecting the data was not to do a study, so they didn’t collect it. We did have user ratings on more than 20 hearing aid attributes, and for many of these we had no previous data, for example rechargeability and Bluetooth connectivity; maybe some small-scale focus groups when companies were developing these features, but nothing large scale. So that was interesting for me. In that study we did a few things. One, we descriptively looked at which of these features are more or less desirable. Two, are there some elements that may relate to how desirable they are? And three, we also did a cluster analysis to look at whether there are patterns among hearing aid users based on their preferences. This is really important for clinicians as well as for manufacturers, and it is being done in every industry. If you go to Walmart or somewhere else, they’ll have all of our data and they can look at who their unique consumers are, right? If you buy one thing, are you likely to buy something else? For example, if you go in as a student, you may buy milk, some fruit, and a snack, whereas somebody else may buy something different. So are there specific patterns within this data? So we looked at the descriptive data on more and less desirable features and then looked at subgrouping these users.
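To make the cluster-analysis step concrete, here is a minimal sketch of segmenting users by their attribute-desirability ratings with k-means, choosing the number of clusters by silhouette score. The rating matrix is random stand-in data, not the HearingTracker Help Me Choose data, and the approach is an illustration rather than the study’s exact method.

```python
# Illustrative sketch: segmenting users by attribute-desirability ratings.
# Random stand-in data; not the study's dataset or exact method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# 200 hypothetical users x 20 attributes, each rated 1 (not needed) to 5 (must have).
ratings = rng.integers(1, 6, size=(200, 20)).astype(float)

best_k, best_score = None, -1.0
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(ratings)
    score = silhouette_score(ratings, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"chosen k={best_k}, silhouette={best_score:.2f}")
# Mean rating per attribute within each cluster then describes each segment,
# e.g. a "wants everything" group versus more selective groups.
```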
Brian Taylor 25:31
So I guess the million-dollar question is: what did you find from these almost 15,000 consumers?
Vinaya Manchaiah 25:39
Well, I’ll be brief; there are plenty of things we could talk about. In terms of the most desirable attributes, it comes down to four things: the ability to hear friends and family in quiet, and the ability to hear friends and family in noise. Those were the top rated. In addition to that, two other things that came out were comfort and reliability. Of course we’re focusing on a bunch of new things, but at the end of the day, I think what people really care about is: is your hearing aid reliable, is it comfortable to wear, and can you hear your friends and family in quiet and in noise? Nothing surprising there, but it really strengthens the idea that those are our core values, and that is what we should keep as a base. There were plenty of other things that people rated less desirable. For example, streaming to your hearing aid from your landline was not highly desirable, because who uses a landline these days? There were a few other things that were less desirable, but it gives us an understanding of which features matter, and which population may be interested in which features. On that second point, which population, we didn’t have much data: are men more likely to prefer some features, are women more likely to prefer others, are more educated people more likely to prefer some, or people with a particular background? We didn’t have that. But we have the data now, because we put in some new information and we have 5,000 users’ data, and we’re starting to analyze it, so we may be able to tell more in the coming months. The second important finding was that when we did the cluster analysis, very interestingly, we only found two clusters, or two groups of people. One group of users, a third of the population, nearly 5,000 of this 15,000 sample, wanted high-end technology; they wanted everything. It’s like buying a fully loaded car: they wanted a hearing aid fully loaded, and they were happy to pay a premium price for that.
Brian Taylor 27:45
You said a third, right?
Vinaya Manchaiah 27:46
Yeah, a third of them, whereas two-thirds of the users were more choosy; they wanted different things, specific preferences and things like that. So this is the pattern we found. But I’m still not fully convinced this was a good finding, because I think our grouping only goes so far if you only put in those preference ratings. If we add some other information, like degree of hearing loss or self-reported hearing loss, age, gender, income level, and other things, maybe that can give us a better grouping or segmentation of this population. That’s what we’re hoping to do in the next few months.
Brian Taylor 28:29
Do you think that these attributes could change over time? And are they based on user demographics?
Vinaya Manchaiah 28:36
Yeah, I think you asked two very important questions. Of course, the preferences for different hearing aid attributes will relate to some demographics; we don’t know which ones yet, and we’ll be able to answer some of those questions in the future. I’m also pretty confident that the trends change all the time. For example, in our previous study, one of the attributes, adjusting hearing aids from home, like programming the hearing aid from home, was not rated very desirable, because the data were pre-2019. Looking at the new data, I’m pretty confident that now, after the pandemic, many users probably would want to have that feature in their hearing aid. So this is an area where we have to look for trends over time, for sure.
Brian Taylor 29:28
Yeah, that’s interesting. Our time is running out here, so my final question to you, Vinaya, is this: I really want to get your take on the big picture. Taking these three studies as a whole, and all of your work around consumer online reviews, how does this research better inform audiologists or clinicians out there, and how can they apply your research findings to their daily work?
Vinaya Manchaiah 30:03
Excellent. Well, thank you, Brian, for this very important question. Beyond these studies, I may also add a few other studies that we have done. For example, we have looked at Google reviews of hearing healthcare professionals; we sampled about 40,000 reviews and looked at what people are saying about their audiologists. We’ve looked at Facebook data and other data. I think it’s the right time for research in this space, looking at all the things that are going on in terms of over-the-counter hearing aids and the direct-to-consumer movement in healthcare. It’s the right time for us to start looking into this, both with big data from research, but also individually in your clinic, because it is important for you to know what’s going on in your own setting and what type of feedback users are providing. A word of caution is that we have to be extremely careful about what types of inferences or conclusions we draw, for many reasons. The first is that there is obviously a sampling bias, a self-selection bias: people who are either extremely happy or extremely unhappy are more likely to leave a review. That is one important thing. Also, if you’re only focusing on online reviews, there is a lot of news about fake reviews, so I think we have to be mindful about what kinds of inferences we can draw. But my take is that it’s definitely an area we should focus on, even though some of this data is actually fake. Why? Because it is driving future sales and future service delivery. At an individual level, let’s say a consumer leaves feedback that you don’t recognize; then maybe you need to ask follow-up questions and try to identify who they are. At a larger scale, we have some new tools that can help figure out, based on the authenticity of the writing, whether that is a genuine review or a fake review. But nevertheless, I think it’s definitely time for us to look into this. The second point I would like to make is that I think we need to develop methodologies that can help us meaningfully analyze and understand this data, because we looked at about 1,400 reviews, but in the coming years it will be hundreds of thousands of reviews, not hundreds or thousands. How can we make meaningful sense of these really large data sets? I know it will be a challenge for researchers to develop algorithms that can help us analyze this meaningfully. But I see huge value both for clinicians and for industry, because when we look at technology turnaround, I think hearing aid technology turnaround has become somewhat like smartphone turnaround: something new comes up every year. When we look at clinical research, like clinical trials, there is no way we can keep up. To conceptualize an idea, get IRB approval, collect data, write, and publish takes years, and to get anything meaningful we need a few studies, which could take decades; by then we have moved on from that technology to something else. So I think the only way we can make any inferences about these new technologies is basically by using this consumer data. Again, I know we have to be cautious.
But I think if we can find a way to meaningfully analyze the data, it can have a really good effect on the industry: what kind of technology are we building, and how useful is it? And before the next round comes out within a year, can we make some small adjustments that really fix things based on the user perspective? I think there is huge value there. Definitely more work needs to be done in this space.
Brian Taylor 33:57
Yeah, I think that’s right on target. There’s no way that traditional research can keep up with the product launch cycle, so being able to collect data quickly like this, through online reviews, not only helps people in industry, but it also helps clinicians, I think, do a better job with the individuals who come into their office, knowing where some of their struggles or challenges might be. I think it really fuels better patient care when you have access to all of this data. So thanks to you and your colleagues for all the great work that you’ve done and all the contributions you’ve made in this area. I look forward, and I’m sure our viewers do as well, to reading more of this research as you publish it.
Vinaya Manchaiah 34:49
Well, thank you for the opportunity, Brian. I also just want to acknowledge the truly interdisciplinary, international team that we have: Abram Bailey at HearingTracker; researcher Pierre Ratinaud, a social scientist in France; Bec Bennett in Australia; Erin Picou at Vanderbilt; and, finally but most importantly, De Wet Swanepoel in South Africa. All of that effort really helped us get this out fairly quickly. It would have taken many years to do this alone, but we were able to get it done fairly quickly because of this team effort. Thank you for your interest, and thank you all for watching.
Brian Taylor 35:26
It takes a village, as they say. All right, well, thank you for your time. We really appreciate it.
Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week and follow This Week in Hearing on LinkedIn and Twitter.
Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.
About the Panel
Vinaya Manchaiah, AuD, PhD, serves as the Professor of Otolaryngology-Head & Neck Surgery at the University of Colorado School of Medicine and as the Director of Audiology at the University of Colorado Hospital (UCHealth). He is the Principal Investigator at the Virtual Hearing Lab. He also has a position as Extraordinary Professor at the Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa, and Adjunct Professor at the School of Allied Health Sciences, Manipal Academy of Higher Education, India.
He has worked in various clinical, research, teaching, and administrative roles, although his current academic appointment centers predominantly on research and research leadership. His research mainly focuses on improving the accessibility, affordability, and outcomes of care for hearing and balance disorders by promoting self-management and using digital technologies. Dr. Manchaiah has published over 200 manuscripts (>180 peer-reviewed) and 5 textbooks.
Brian Taylor, AuD, is the senior director of audiology for Signia. He is also the editor of Audiology Practices, a quarterly journal of the Academy of Doctors of Audiology, editor-at-large for Hearing Health and Technology Matters and adjunct instructor at the University of Wisconsin.