Drs. David Eagleman and Izzy Kohler from Neosensory join Amyn Amlani to explore their latest research findings on tinnitus management and speech clarity, utilizing the company’s unique haptic wristband.
The Washington Post recently highlighted Neosensory’s Duo product, which has shown promising results in significantly reducing tinnitus loudness and frequency. The research found that the reduction in TFI scores was even greater among individuals experiencing moderate to severe symptoms after 8 weeks. This advancement offers audiologists a new treatment option to consider for their patients.
In a newly published study, Neosensory explored the effectiveness of their Clarify product in improving speech comprehension for those with high-frequency hearing loss. By detecting high-frequency speech sounds and delivering distinct vibration patterns, the Clarify demonstrated notable enhancements in speech understanding among users with and without hearing aids.
The company is currently exploring different vibration patterns for signaling detected speech sounds, aiming to enhance the user experience. Additionally, due to growing interest among hearing professionals, they are transitioning toward greater collaboration with audiology clinics to broaden access to their technology. By providing supplementary sensory information to the brain, Neosensory’s wristbands offer a promising avenue for improving both tinnitus management and speech comprehension alongside auditory input.
Neosensory’s mission is focused on sending data streams to the brain through sound-to-touch products. In today’s episode, we discuss new research articles that shed light on this mission through improvements in tinnitus and speech understanding. I’m here today with two individuals from Neosensory: Dr. David Eagleman and Dr. Izzy Kohler. Thank you both for being here. Let’s just dive into some studies that you guys have published. One of the most recent ones was in the Washington Post, where you all tested a product of yours for tinnitus. Can you talk a little bit about the study design? And then we’ll get into a little bit of the dynamics of what those results are, and then the clinical application, if you don’t mind.

Sure. I’ll start off with talking about what it is we’re doing with tinnitus, and then, Izzy, you can talk about the actual study. So, just for clarity, what we have is this wristband that we’ve built that has vibratory motors on it. And what we’re doing for tinnitus is bimodal stimulation. That means sound plus touch. And that has been shown in a series of studies, even before we came along, to be very effective in driving down tinnitus. So, the way this works is, with our app, we play tones, and these are around your tinnitus frequency. And each time a tone is played, you’re feeling a corresponding buzz on your skin. And what we have found, and this is what we’ll talk about with the study, is that that drives down tinnitus. It’s not a cure, but it drives it down as much as any other version of bimodal stimulation. So, for example, this company, Lenire, does the same idea, but with sound and shocks on the tongue. Our data demonstrate that it can be touch from anywhere. There are many interpretations of this, but one interpretation is simply that you’re teaching the brain what is a real external sound, because every time there’s an external sound, you’re getting verification on your skin.
But the tinnitus is fake news and doesn’t get any confirmation, and therefore, that gets driven down. So we did a study. Izzy, I’ll let you talk about that.

Yeah. So what we did is we designed this wristband, and to test it out we used something called the Tinnitus Functional Index (TFI), which everybody knows is the validated measure for tinnitus. And we used the wristband plus the tones, and then we had a control group that was tones alone. One group did tones alone for eight weeks, and one group did the tones plus the wristband for eight weeks. And we gave them both the TFI on a weekly basis. And what we showed by the end of this study is that the group that had the bimodal stimulation, meaning they had the addition of the vibrations, actually improved significantly more on the TFI and saw a tremendous drop in their tinnitus loudness and frequency by the end of the eight weeks.

If I remember right, there’s also a result that you all found related to the severity of the tinnitus as well. Did I get that right?

Yes, that’s exactly right. So what we found is that those people who had a score of 50 or above at baseline on the TFI, meaning that they had moderate or severe tinnitus, actually saw a much greater drop in the TFI by the end of the study than those who just had minor tinnitus. That tells us that those people who have had severe tinnitus for quite a while, and find it quite bothersome, are actually able to drop it down to a very mild level, to the point where it’s very tolerable on a daily basis.

Did you all account for the length of time that somebody might have had tinnitus? So, for example, somebody might say, I’ve had tinnitus and I’ve noticed it for five years, versus somebody for 25 years. Does that make a difference in how this product is beneficial to that individual?

We don’t know the answer to that yet, but we are collecting that kind of data.
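To make the comparison described above concrete, here is a rough sketch in Python of how weekly TFI changes between a bimodal group and a tones-only control might be compared. The scores below are hypothetical, invented purely for illustration; they are not the study’s data, and this is not Neosensory’s analysis code.

```python
from statistics import mean

# Hypothetical (baseline, week_8) TFI scores, 0-100 scale; NOT the study's data.
bimodal = [(62, 31), (55, 28), (70, 40), (48, 25)]     # tones + wristband
tones_only = [(60, 50), (58, 49), (66, 58), (51, 44)]  # control group

def mean_drop(group):
    """Average baseline-to-week-8 reduction in TFI score for a group."""
    return mean(base - wk8 for base, wk8 in group)

bimodal_drop = mean_drop(bimodal)
control_drop = mean_drop(tones_only)

# The study's headline finding, in miniature: the bimodal group's drop is larger.
print(bimodal_drop, control_drop)
```

A real analysis would of course also test whether the between-group difference is statistically significant, which is what the study’s “significantly greater” result refers to.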
And so as we collect more and more subjects through ongoing studies, we will be able to answer that. Sorry, were you going to say something else to that?

Yeah, something very similar. In some of our preliminary data, it didn’t seem to make a difference. When we did a linear regression and tried to see if there was a trend, there was minimal, if any, effect there.

One thing I want to add here is that what we find generally is that this works. For example, for people with severe tinnitus, 91% of people showed an improvement on the TFI, but we don’t yet know exactly who that other 9% is. And this is where our research is aiming now. As we know, there are many etiologies for tinnitus. This works great for most people, and for some it doesn’t. So we’re trying to clarify where it doesn’t work so well. For whom is it? People with acoustic neuromas, for example? And that way we’ll be able to know exactly where this plays the best.

Yeah. And from a clinical standpoint, because a large majority of our audience is going to be from the clinical segment, the question becomes, how do they now become a player in this process for your product? Can you guys share some information on that?

Yeah. So, for the short history of our company, so far we’ve been selling direct to consumer, but just recently we are really moving to sell to audiology clinics and ENT clinics. And that’s wonderful for us, because a number of audiologists, we’ve noticed, have learned about Neosensory because of their patients, who have come to them and told them. And so they’ve reached out to us. Now we’re doing more outreach, and it’s spinning up as a very fast flywheel. But the idea is the wristbands go to the audiologist’s office, and when they have a patient with tinnitus, they can take care of that right then by saying, look, here’s this bimodal stimulation technique, scientifically validated, proven in the literature.
And again, this bimodal stimulation goes back at least ten or twelve years in the literature, not ours, but other groups’. They can say, look, here’s something that you can purchase right now and go home with. And I think that’s really critical.

And from what I’m understanding, there’s not a training mechanism that goes with this, is that correct?

So here’s what it is. Each day there’s a – well, we call it a training program, but each day for ten minutes, you sit down with the app, and the app plays the tones and you feel the vibrations, and you can read or do whatever, relax during that time. But we’ve gotten this down to only ten minutes a day. And when I say we’ve got it down, what I mean is we studied different time periods for how long people should use it in a day, and you get the same benefit out of ten minutes as you do out of longer. So that’s about the shortest you can do.

That’s really cool. And then most recently, there’s another study that you all have just completed that’s looking at speech comprehension. And this is with individuals with and without devices, hearing aids, if I’m not mistaken. Can you talk a little bit about that study?

Yeah, that’s right. So it’s using exactly the same hardware platform, which is to say this device with a microphone and these vibratory motors. But now it’s running a completely different algorithm. What it’s doing now is, for patients with high frequency hearing loss, it is listening in real time for high frequency phonemes. So it’s using our homebrewed AI to listen for /s/ and /t/ and /v/ and /c/ and things like that. And it buzzes in different ways to tell you, oh, I just heard an /s/. Oh, I heard a /t/. And so on. With high frequency hearing loss, typically a person can still hear; their ears are doing fine at the medium and low frequencies. This now clarifies what is happening at the high frequencies. That’s why we call it the Clarify, the Neosensory Clarify.
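The core idea just described, a detector flags a high-frequency phoneme and the wristband renders it as a distinct vibration pattern, can be sketched in a few lines of Python. The phoneme-to-motor mapping below is entirely hypothetical (Neosensory’s actual detection model and patterns are not public), and the detector is replaced by a stand-in list of already-detected phonemes.

```python
# Toy sketch of the Clarify concept, NOT Neosensory's actual algorithm.
# Hypothetical mapping: phoneme -> per-motor intensities (4 motors, 0.0-1.0).
PHONEME_PATTERNS = {
    "s":  (1.0, 0.0, 0.0, 0.0),
    "t":  (0.0, 1.0, 0.0, 0.0),
    "sh": (0.0, 0.0, 1.0, 0.0),
    "f":  (0.0, 0.0, 0.0, 1.0),
}

def pattern_for(phoneme: str):
    """Return the motor pattern for a detected high-frequency phoneme,
    or None if the phoneme isn't one the device signals."""
    return PHONEME_PATTERNS.get(phoneme)

# Stand-in for the real-time detector's output stream: "a" is a low/medium
# frequency vowel the ears handle on their own, so it produces no buzz.
detected = ["s", "a", "t"]
buzzes = [p for ph in detected if (p := pattern_for(ph)) is not None]
print(buzzes)
```

The later discussion of single-motor buzzes versus per-phoneme patterns is exactly a question about what this mapping should look like for fastest learning.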
And so we have developed that over the course of years, and we keep improving the software on it and the ability to detect that, including, by the way, in noise. You could be at a loud restaurant. There’s music playing, there’s forks dropping, there’s chairs scraping, but it’s not hearing any of that. It’s only picking up on phonemes, on high frequency phonemes that are being heard in the environment. And so we ran these studies on it. And so, Izzy, if you want to tell about the study.

So, for this study what we did is we took this preliminary group, and we used the APHAB for this particular study. We did an onboarding call with them, and then we had them use the wristband for at least 2 hours a day, but most of them got up to about five or six hours a day of speech exposure, just through normal daily activities. And then we made sure that at least one hour of that was some kind of focused activity, like watching television, being engaged in a conversation, or listening to a podcast, something along those lines that they were really engaged in or really wanted to understand what was being said. So that was sort of a formal practice, just to make sure they got that minimum of practice hours. And then we had them take the APHAB on a weekly basis. And what we showed at the end of this is that there was a significant drop in the APHAB. We did a lot of follow-up interviews at the end of the six weeks, and a lot of people subjectively did tell us that, oh, yeah, I can listen to the television, I don’t have to use closed captions, I can hear my spouse across the table so much better now. For a while, my spouse was very soft spoken and mumbled, and now I feel like I can understand her. So we got some really great results with this study. And we’re very excited to write this up, with the idea that there’s more coming. We’re certainly doing further research studies to further expand on what Clarify is capable of doing.
Just as a quick example of that, one of the things we’re looking at is, when it detects a phoneme, like a /t/ or an /s/ or whatever, does it just buzz a single motor, or does it do some pattern, like the /s/ pattern, the /t/ pattern, things like that? What is easier for people to learn on things like this? The next thing we’re working on now is the control group for this.

Yeah. And David, what’s really interesting to me is that one of the main issues with hearing aids is the fact that you have to deal with this medium called ‘space’, and you’ve got sounds bouncing around, you’ve got this noise, and as you pointed out, the wristband is not affected by that. So even though my auditory system, which is distorted, is now also getting a distorted signal because of the environment, you are now providing additional information that hopefully clarifies, as you pointed out with the name of your product, and allows that person’s performance to go up. And I’m assuming that happens pretty instantaneously, is that correct?

It doesn’t. So one’s brain has to learn how to fuse these two signals. What you can do is cognitively say, oh, wait, that buzz, that must be a /t/. So he must have said tap instead of sap. Fine. But in order for people to really just have it be like an ear, that takes time. So we track this through time. For example, Izzy mentioned this new study we have coming out was, you know, measuring people’s APHAB scores, their subjective impression of how easy it was to understand conversation, through time. And you just get this improvement that grows. But we think it takes at least four to six weeks for people to just feel like it’s a part of them, such that when they have the band on, they just hear and understand the conversation. And when they have the band off, they feel like they have a harder time understanding the conversation.
But the interesting part is, it’s all unconscious, the way that with your ears, you don’t think about, oh, Eagleman is saying some medium frequencies now and some high and some low; you just feel like you hear me. It’s the same thing. After some number of weeks, people just feel like they’re hearing the conversation.

Well, that’s really interesting. And does it matter? I know this is going to come up in a conversation. Does it matter what type of hearing aid the person is using? I’m going to assume the answer is no. But did you find anything?

So here’s what we did. We looked at whether they are wearing no hearing aids versus hearing aids, and we found there’s a difference, which is to say, if you’re not wearing any hearing aids at all, you get this big boost in clarity from the wristband. If you already are wearing hearing aids, you also get a boost, but it’s not as large, not surprisingly, because the hearing aid is doing its job as much as it can. And so there is a boost on top of the hearing aid, but it’s not as big as what you get without hearing aids. To my knowledge, and correct me if I’m wrong, we haven’t compared types of hearing aids against one another at this moment.

No, we haven’t actually collected data on the types of hearing aids. And then you also have to look at fit and other nuances, too. But we are currently looking at that, to see how the interaction between the hearing aids and the wristband actually occurs, because that’s going to be a big thing to consider. So ask us again in six months. We’ll have a better answer.

Well, the reason I’m bringing it up is, as you know, that signal processing changes, and some of these are fast acting versus slow acting, and it changes the voice and patterns and the things that you guys are doing. So it’ll be really interesting to figure out, is this type of signal processing better correlated with that outcome than another type of signal processing? Yeah. Cool.
As we collect that data, we’ll have a clearer picture on that.

All right, well, thank you both for coming on. I think what you guys are doing is absolutely fascinating. When I was a doc student, I studied with a guy by the name of Brad Rangert. And we did some of these things where we would stimulate the tongue to see how people would actually perceive different sounds. This was about 20 or 25 years ago. I don’t remember what the outcome was, I remember it was someone else’s dissertation, but I got to participate, and it was absolutely fascinating. So the stuff that you do brings me back a little bit. It’s not necessarily my area, but it is really cool. And I think it’s nice to have another option in your tool belt, given the fact that people are going to react to sensory information in different ways.

Yeah, that’s right. As you know, my view on the brain as a neuroscientist is that the brain is locked in silence and darkness. It gets all this input, but it doesn’t know where that input is coming from. It just knows whether it is useful information for operating in the outside world. And so if we push the information into the brain via an unusual channel, it is able to figure that out and do the right things with that info.

Yeah. Really cool. We look forward to having you guys back on as you continue to collect data and as you continue your journey in this world. So, thank you very much for being here, and we look forward to the next time that we have you all on.

Great. Thanks for having us here. Thank you.
Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.
About the Panel
David Eagleman, PhD, is the Founder and CEO of Neosensory. Dr. Eagleman is a neuroscientist at Stanford University. His research is published in the top journals in the field, including Science and Nature. Beyond his research and entrepreneurial endeavors, he is a bestselling author about the brain, with his books translated into 33 languages. He is also the host of the popular Inner Cosmos podcast and the creator and host of PBS’s Emmy-nominated series “The Brain with David Eagleman.”
Izzy Kohler, PhD, is the Lead Scientist at Neosensory. Dr. Kohler brings a rich background in neurological rehabilitation, human performance, and data analytics. She oversees Neosensory’s algorithm and science teams, marrying technical and neuroscience expertise to guide the company’s scientific development and testing. In her free time she enjoys outdoor activities, math, and traveling.
Amyn M. Amlani, PhD, is President of Otolithic, LLC, a consulting firm that provides competitive market analysis and support strategy, economic and financial assessments, segment targeting strategies and tactics, professional development, and consumer insights. Dr. Amlani has been in hearing care for 25+ years, with extensive professional experience in the independent and medical audiology practice channels, as an academic and scholar, and in industry. Dr. Amlani also serves as section editor of Hearing Economics for Hearing Health & Technology Matters (HHTM).