How AAVAA’s Innovative Technology Platform Could Revolutionize Hearing and Wearable Tech

HHTM
March 29, 2023
This week, host Dave Kemp sits down with AAVAA’s Founder and Chief Technical Officer, Naeem Komeilipoor. The Montreal-based startup recently completed a $2.0 million Seed Round which will fund the continued development and commercialization of its unique, patented technology that “aims to revolutionize human-machine interactivity and human-assistance devices”.
The company’s technology platform utilizes unobtrusive sensors that can be incorporated into existing form factors such as headphones, eyeglasses, and hearing aids, along with software that integrates easily with current device components. The sensors detect a wearer’s attention by combining brain signals, eye movement, and head direction in real time. AAVAA has also developed sound source separation technology that can be added to the configuration and allows users to directionally focus hearing devices such as earbuds, hearing aids, or even cochlear implants. The company has filed patent and grant applications with US and international agencies.
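For readers who want a concrete picture of what combining those signals might look like in software, here is a minimal sketch that fuses per-modality estimates of where a wearer is attending into a single steering direction. The inputs, weights, and function names are illustrative assumptions, not AAVAA’s actual fusion algorithm.

```python
import numpy as np

def fuse_attention(gaze_deg, head_deg, eeg_deg, weights=(0.5, 0.3, 0.2)):
    """Fuse three per-modality estimates of the attended direction
    (degrees, 0 = straight ahead, positive = wearer's right) into one
    steering angle via a confidence-weighted circular mean, which handles
    wrap-around at +/-180 degrees. Modalities and weights are assumptions."""
    angles = np.radians([gaze_deg, head_deg, eeg_deg])
    w = np.asarray(weights, dtype=float)
    return float(np.degrees(np.arctan2(
        (w * np.sin(angles)).sum(),
        (w * np.cos(angles)).sum(),
    )))

# Example: gaze says 30 degrees right, head is straight, EEG weakly agrees.
print(fuse_attention(gaze_deg=30.0, head_deg=0.0, eeg_deg=20.0))
```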

Full Episode Transcript

[Dave] All right, everybody, and welcome to another episode of This Week in Hearing. I am thrilled to be joined today by Naeem Komeilipoor. So, Naeem, thank you so much for being here. I wanted to have you on to talk about your company, AAVAA. So why don’t we start there: tell us a little bit about your background, about AAVAA, and what your technology is ultimately striving to achieve.

[Naeem] Sure, thank you very much, David. It’s a pleasure to be here. My name is Naeem, I’m the founder and CTO of AAVAA, and I’m a biomedical engineer and neuroscientist. What we do at AAVAA, practically, is build smart Bluetooth earphones and hearing aids, as well as smart glasses and headphones, that monitor people’s attention and subtle commands. By monitoring eye, brain, and facial activity, as well as head movement, we allow users to control their devices hands-free and voice-free, just by using their attention and subtle commands. The applications are far-reaching. The genesis of the project was to solve the cocktail party problem, which I’ll come back to later: understanding, in a noisy environment, what sounds people are paying attention to, so we can enhance those sounds and suppress the distracting noises. But we soon figured out that the applications go much further. There are lots of applications in the augmented and virtual reality markets, where people can seamlessly interact with their real and virtual environments using their attention and commands such as blinking, clenching, and so on. And our technology is ready as of today to be commercialized in the assistive technology market to help people with mobility impairments. For example, people with quadriplegia, who are paralyzed from the neck down, can use our device to command their other devices: they can steer their wheelchair, control their smart home devices, even type and control feeding robots. We are practically replacing the sip-and-puff switches they use today. And of course, in the consumer electronics market, our technology can turn a normal Bluetooth earphone into a brain-computer interface earphone with which you can skip songs or answer phone calls, and down the road we can also monitor stress, fatigue, or sleep. So there are lots of applications, but for now the main markets we are pursuing are hearables and hearing aids, as well as assistive technology and augmented and virtual reality.

[Dave] When I first started to learn about your company at a high level, all the different applications like the ones you just mentioned were really interesting. But obviously, this being This Week in Hearing, the one I really glommed onto was the way your technology takes a novel approach to potentially solving the holy grail of hearing healthcare, which is the cocktail party problem. It’s one thing how your hearing aids or hearables perform in a quiet setting, but as soon as you go into a loud setting with a lot of background noise, it’s a totally different ballgame, and being able to parse out the specific conversation you’re listening to, the speakers, and all that is a major challenge. I would guess that’s probably the number one culprit of dissatisfaction with hearing aids right now. So it’s a matter of how you solve that. Maybe you can speak a little to how your technology is specifically designed to provide a novel way of solving this.

[Naeem] Sure, you’re completely right. Let me tell you why I came up with this idea. I grew up with grandparents who were hearing aid users, and we have a large family, around 40 cousins.

[Dave] Wow.

[Naeem] And I remember that every time we were at a family gathering, my grandparents had a really hard time hearing and were frustrated. They wouldn’t participate in the conversation, and they would stop using their devices. I experienced their frustration firsthand growing up. Flash forward: as a biomedical engineer and neuroscientist, I wanted to start a company, and because I had studied the relation between sound and the brain during my PhD, I was looking for a problem to solve in this space. Around four years ago I was talking with my grandmother at a small family gathering and saw that she still had the same problem. That, for me, was the ‘aha!’ moment: time had passed and this problem hadn’t been solved, even though she was using very cutting-edge hearing aids. The idea I came up with was that there is only one way to solve the so-called cocktail party problem, the problem that hearing aids in noisy environments are not able to filter out distracting noises and enhance the desired sound, and that is to understand the user’s attention. It is the only solution, because if you don’t know what sounds someone is paying attention to, you don’t know what sound to enhance. That was the genesis of the project, so we started by trying to decode the auditory attention of the user.

I’ll give you an example. Auditory attention decoding is used in neuroscience: you have competing speakers, say a male and a female telling two different stories, and you are asked to pay attention to one and ignore the other. Using artificial intelligence and signal processing, there are methods that can tell you, to an acceptable accuracy but with a long delay, which sounds you are paying attention to. However, in the laboratory people wear 64 or 128 gel electrodes, it takes a long time, around 10 to 15 seconds, to decode the signal, and the accuracy is around 70%. So about six months in, I figured out that this technology could not be commercialized anytime soon.
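For context, the laboratory method Komeilipoor describes is usually the stimulus-reconstruction approach: a linear decoder maps time-lagged multichannel EEG back to an estimate of the speech envelope, and the attended talker is whichever competing envelope correlates best with the reconstruction over a decision window, which is why decisions historically took 10+ seconds. The sketch below is that generic textbook method, not AAVAA’s implementation; all names and the ridge penalty are illustrative.

```python
import numpy as np

def build_lags(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel: (samples, channels*lags)."""
    n, c = eeg.shape
    X = np.zeros((n, c * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return X

def train_decoder(eeg, attended_env, n_lags=32, ridge=1e3):
    """Ridge-regression mapping from lagged EEG to the attended speech envelope."""
    X = build_lags(eeg, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                           X.T @ attended_env)

def decode_attention(eeg, env_a, env_b, w, n_lags=32):
    """Reconstruct the envelope from EEG, then pick the competing talker
    whose envelope correlates best with the reconstruction."""
    rec = build_lags(eeg, n_lags) @ w
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return "A" if r_a > r_b else "B"
```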
[Naeem] So then I started thinking about what could be a temporary solution to this problem, and I came up with this idea: if we could understand where someone is facing or where they are looking, using brain and bio-signals, we could solve the problem partially. So that’s what we did. The human eye, David, is like a battery: you have positive charge on the cornea and negative charge on the retina, so it’s practically a battery, a dipole. When you move your eyes, it creates an electric field that propagates across your face and scalp. Close to the source, for example right around the eye, the signal is strong. Our secret sauce was being able to decode the shadow of these signals from around and inside the ear, using our AI. And that’s what we did. We were successful: in 2020 we came up with a demo in which, wearing our headphones, we could tell whether the person was looking left, center, or right, and accordingly enhance the sounds they were paying attention to. So again, we solved this problem.
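The eye-as-dipole idea can be illustrated with a toy decoder: band-pass the potential difference between electrodes on opposite sides of the head to isolate the slow horizontal electrooculography (EOG) component, then threshold its deflection to call left, center, or right. AAVAA’s actual decoder is AI-based and far more subtle; the channel placement, cutoffs, and threshold below are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def classify_gaze(left_ear, right_ear, fs, threshold_uv=40.0):
    """Classify horizontal gaze (left / center / right) from two ear-electrode
    traces in microvolts. Eye-movement potentials are slow, so keep ~0.1-10 Hz."""
    b, a = butter(2, [0.1, 10.0], btype="band", fs=fs)
    heog = filtfilt(b, a, left_ear - right_ear)  # horizontal EOG component
    n = int(fs)
    mean_deflection = heog[-n:].mean()           # average over the last second
    if mean_deflection > threshold_uv:
        return "left"
    if mean_deflection < -threshold_uv:
        return "right"
    return "center"
```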
[Naeem] We are perfecting this technology, and at the same time we figured out that we can decode not only the gaze but also other subtle commands, like blinking and clenching, which also have applications for hearing aids and hearables. Imagine there are multiple speakers in front of you and you want to let the device know which one you are focusing on. Beyond the fact that you are facing one or looking at the other, you can send a subtle command that says: I am listening to this sound, so please keep following it. These are the applications we are developing right now. At the same time, once you know what sound the user is paying attention to, you need to separate that sound source, enhance it, and suppress the rest. So in parallel we developed our beamforming algorithms, directional sound source separation using signal processing methods, as well as sound source separation using AI, which we’ll probably get back to later.
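To show how a decoded attention direction could steer the audio path, here is a minimal two-microphone delay-and-sum beamformer of the general kind Komeilipoor alludes to, with a decoded gaze label mapped to a steering angle. The array geometry, sample rate, and angle mapping are illustrative assumptions, not AAVAA’s algorithms.

```python
import numpy as np

def shift_samples(x, n):
    """Delay a signal by n >= 0 samples, zero-padding at the start."""
    y = np.zeros_like(x)
    y[n:] = x[:len(x) - n] if n > 0 else x
    return y

def delay_and_sum(mic_left, mic_right, angle_deg, fs, spacing_m=0.18):
    """Two-mic delay-and-sum beamformer. angle_deg is the target direction
    (0 = straight ahead, positive = wearer's right). The microphone closer
    to the source hears it first, so delay that channel until both channels
    add coherently for the target and less coherently for everything else."""
    tdoa = spacing_m * np.sin(np.radians(angle_deg)) / 343.0  # speed of sound, m/s
    n = int(round(abs(tdoa) * fs))
    if tdoa > 0:                    # source on the right: right mic leads
        mic_right = shift_samples(mic_right, n)
    elif tdoa < 0:                  # source on the left: left mic leads
        mic_left = shift_samples(mic_left, n)
    return 0.5 * (mic_left + mic_right)

# Steering from a decoded gaze label (the angle mapping is assumed):
# out = delay_and_sum(l, r, {"left": -60, "center": 0, "right": 60}[gaze], fs=16000)
```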
[Dave] Just to pull up here for a second so I make sure I’m following along. What’s really cool about this is, like you said, you almost need another input mechanism to indicate to the hearing aid. And I think that if you demystify what this whole notion of AI in your hearing aid really means, these are the kinds of applications that are going to start to become unlocked: you have another input into the smart device, and it’s registering, okay, there are ten speakers in this circle I’m standing in, and I’m gazing right at this person. Using the electrical signals you’re picking up, because like you said, the eye is essentially a battery shooting off all these signals you can actually read, it knows, okay, that’s who they’re looking at, and therefore it enhances the beamforming, focuses everything on that one speaker, and turns the volume down on everything else. Am I kind of following how this works?

[Naeem] Exactly. As you mentioned, solving the cocktail party problem is the holy grail of the hearing industry right now. You hear the buzzword everywhere, we have AI in our hearing aids, but no one knows what it really means to have AI. Imagine this scenario: even if you have an AI capable of separating sound sources in real time using one or more microphones, when you are in a situation with two or three competing speakers in front of you, no hearing aid can tell which sound to enhance. You can use head direction to increase the accuracy, but that still is not perfect, because in reality we communicate using our eyes: we pay attention to sound sources not only by turning our head but also by using our eyes, and also by listening, since sometimes the sound source comes from behind. So in order to take this revolution in sound source separation to the next level and solve the problem completely, there is only one solution, and that is to understand what sounds the user is paying attention to and then enhance those sounds. None of today’s solutions is perfect. And when we talk about AI in hearing aids, there are three different kinds of technology: speech enhancement or denoising, sound source separation, and auditory scene classification, which is understanding what environment you are in so that the hearing aid configuration adjusts automatically. But none of these technologies allows you to truly enhance the desired sounds and suppress the distracting noises. The kind of technology we are talking about is what we’ll see in the future, and that is the technology that can solve this problem, if not completely then to a great degree.
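Of the three categories, auditory scene classification is the simplest to sketch: extract coarse spectral features from a short audio window and assign the nearest scene learned offline. Real products use trained neural classifiers on mel-filterbank features; the labels, band pooling, and centroids below are assumptions for illustration.

```python
import numpy as np

def log_band_energies(audio, n_fft=512, n_bands=24):
    """Crude spectral features: windowed FFT power pooled into n_bands."""
    spec = np.abs(np.fft.rfft(audio * np.hanning(len(audio)), n_fft)) ** 2
    return np.log(np.array([b.mean() for b in np.array_split(spec, n_bands)]) + 1e-10)

def classify_scene(audio, centroids):
    """Nearest-centroid decision; `centroids` maps a scene label (e.g. "quiet",
    "speech", "babble", "traffic") to a feature vector learned offline."""
    feats = log_band_energies(audio)
    return min(centroids, key=lambda scene: np.linalg.norm(feats - centroids[scene]))
```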
[Dave] So I want to get into the actual practical aspect of this, right? Because when you were describing the electrode skull cap earlier, I’ve seen those before, and people walking around with a giant set of electrodes on their head obviously doesn’t seem feasible in reality. So how do you actually make this something that’s commercially viable, something people would actually wear? It sounds like you’ve done a lot of the heavy lifting already to get it into a form factor that’s approachable, but are there limitations because of the small size and the piece of real estate, I guess?

[Naeem] Exactly, you’re completely right. Some of the companies investigating this issue are still using gel electrodes in laboratory settings. The problem is that you don’t have dry electrodes, and the dry electrodes that do exist are not comfortable for extended wear. The way we solved this is by attacking the problem from all different angles. We are building our own sensors. To show you an example, these are our Bluetooth earphones; our newest device is going to be just Bluetooth earphones, with these sensors that we build. These are our glasses, a form factor that is likewise capable of understanding attention and being used for conversation enhancement. And these are our headphones, with the sensor located here. Again, when you have a full cap with gel electrodes, you have a high signal-to-noise ratio; when you reduce the number of sensors and use dry electrodes, the signal-to-noise ratio drops radically. To overcome that, you have to come at the problem from several sides. On one side, you need to build dry sensors capable of recording brain data with high quality, high signal-to-noise ratio, while remaining comfortable for extended wear. On another side, you need a good, ergonomic enclosure design, again yielding a high signal-to-noise ratio and comfort for extended wear. You need strong hardware capable of recording the data in real time, filtering out the noise, enhancing the signal, and decoding the attention. And at the same time, you need good machine learning and signal processing algorithms that further increase the signal-to-noise ratio. We managed to solve this. We don’t have 100% accuracy, but for gaze tracking, for example, we are now around 75% to 80% accurate, and for some other applications, such as facial gestures, the accuracy is higher. We are on our way, and we see progress every day; the models keep improving, and we know that at some point this will be ready for commercialization. As of today, decoding facial gestures and understanding where the head is facing are at a point where we can commercialize them, and our sound source separation and beamforming are ready to be commercialized as well.
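As a flavor of the real-time filtering stage he describes, the sketch below applies the standard first steps used to raise the effective SNR of raw electrode data: a band-pass to the EEG band of interest and a notch at the power-line frequency. The cutoffs and mains frequency are assumptions that vary by application and region, and this is generic practice rather than AAVAA’s pipeline.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess_eeg(raw, fs, mains_hz=60.0):
    """Minimal EEG cleanup: remove drift and high-frequency noise with a
    1-40 Hz band-pass, then suppress power-line interference with a notch."""
    b, a = butter(4, [1.0, 40.0], btype="band", fs=fs)
    x = filtfilt(b, a, raw, axis=-1)
    b_n, a_n = iirnotch(mains_hz, Q=30.0, fs=fs)
    return filtfilt(b_n, a_n, x, axis=-1)
```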
[Dave] Okay, interesting. So how do you see that working? Would this be something where you license your technology, or would you actually come to market with your own hardware? How does this become something we actually start to see in the market?

[Naeem] That’s a great question. Our business model is practically business-to-business, or business-to-business-to-consumer. We are trying to license this technology to OEMs: hearing aid manufacturers, hearable manufacturers, headphone manufacturers, and augmented and virtual reality manufacturers, and they are interested; they have reached out to us. At the same time, we dissected many devices, all the way from hearing aids to hearables, studied their hardware components, and built our hardware to be compatible with the existing hardware that’s available. This is the newest one we have, very miniaturized, and we are miniaturizing it further, down to this size. If one day we decide to go the B2C, business-to-consumer route, we would be capable of doing so, but that is not our plan. Our plan is to go the B2B route, where hearing aid and hearable companies incorporate our sensors into their devices. Because we are building it ourselves, the actual sensor material that picks up the brain signals as well as the hardware, they can easily incorporate it into their existing form factors. And let me tell you, David, at CES this year LG showed earbuds that for the first time incorporate these brain sensors, not for a speech enhancement application but for sleep monitoring. That’s a great signal for the whole industry that these sensors will soon be incorporated into consumer electronic devices as well as hearing aids. I have talked to many executives at hearing aid companies; many of them already have research groups working on this, or they are collaborating with universities or trying to solve the problem internally. But their approach is not right, because they are still using gel electrodes in laboratory settings, whereas from day zero we said we have to build this in wearable form factors suitable for extended wear. That’s why we started talking to some of them, and they are willing to collaborate with us. We now have 50 of these devices ready, and soon we will ship them out to these companies so they can try them in action and give us feedback. That will be the starting point for partnerships and so on.

[Dave] Yeah, because I think the interesting thing is that there’s been this notion that eventually hearing aids will be integrated with a variety of different sensors, and on the surface it’s kind of hard to grok what those applications will be. This is a very specific application people can understand: as you add more sensors, you better inform the hearing aid with these new inputs, and that ultimately results in something that, in theory, feels seamless for the hearing aid wearer: this thing’s amazing, it can now dynamically adjust itself in ways it couldn’t before. So for me it’s very informative to understand, first, the state of this stuff, how far out it is, and it sounds like we’re making a lot of progress; and second, what kinds of things this is going to unlock as it becomes commercialized and integrated into the offerings. Everything right now is just sort of ‘AI and hearing aids,’ and it’s like, what does that mean? These are real, specific things people can understand: the cocktail party effect, the thing we all struggle with, how do you actually solve it? Well, maybe one way is a more sophisticated input mechanism into the hearing aid, a more sophisticated method of parsing out sounds and who’s talking. That makes a ton of sense to me, but it’s a matter of when this all comes to market, and based on what you’re telling me, it sounds like we’re getting closer every year. Hopefully in the next few years this does start to become a reality.

[Naeem] Exactly, I couldn’t agree with you more. And remember, since the ear is a gateway to the body, the sky is the limit. The in-ear sensors we are developing are also capable of picking up heart rate signals, and since you’re already there, you can pick up temperature too. That’s the direction we are taking: we are practically making a Fitbit for the ear. A few hearing aid companies are taking this direction as well, and that’s what we will see in the future. Using your Bluetooth earphone or your hearing aid, not only can you command your device and have it understand your needs in real time, it can also measure other biometrics: heart rate, stress, fatigue. It is scientifically proven that you can capture these signals with in-ear EEG sensors, and they can be used for sleep monitoring as well. So really, the sky is the limit, and we’ll see many applications down the road.
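As a toy example of extracting one such biometric from an ear-worn sensor trace, the sketch below isolates the pulse band and counts beats. The cutoffs and peak heuristics are assumptions, and production heart-rate tracking is considerably more robust than this.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(signal, fs):
    """Estimate heart rate (BPM) from an ear-worn sensor trace by isolating
    the 0.7-3.5 Hz pulse band (~40-210 BPM) and counting beat peaks."""
    b, a = butter(3, [0.7, 3.5], btype="band", fs=fs)
    pulse = filtfilt(b, a, signal)
    # Require peaks at least 0.3 s apart (max ~200 BPM) and above the noise floor.
    peaks, _ = find_peaks(pulse, distance=int(0.3 * fs), prominence=pulse.std())
    duration_min = len(signal) / fs / 60.0
    return len(peaks) / duration_min
```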
[Dave] Yeah, just to say one last thing on this: I do think the ear is such an ideal place to record biometrics, and it’s going to be really interesting that you can monitor things like heart rate from your hearing aids. What’s going to take this from being perceived as a nice-to-have to a need-to-have is, as these capabilities become enabled and you can capture these metrics, what you can then do with them. That’s where I think it’s going to get really interesting, and that’s why this caught my attention so much: it’s really one of the first times I’m starting to fully understand, oh, okay, so that is a specific use case for brain wave readouts, what you could actually do with them. That seems to be the disconnect people have when they talk about this; it just feels kind of sci-fi and futuristic, you can do all these things, it’s going to be the Fitbit for your ear. But when you start to ground it in real, challenging applications that people encounter on a day-to-day basis, and you can speak to how these things can be used to solve them, that’s when things start to get really exciting. And I think that’s the difference between speaking about it at a high level and actually getting into the specifics of what it would mean. So I really appreciate you coming on and walking through exactly what this will mean as these kinds of functions and capabilities do become enabled in the next few years, as the miniaturization of these sensors continues to the point where they can fit on something like a hearing aid.

[Naeem] Exactly, I’m glad we share the same vision. One more thing I wanted to mention, David: cochlear implant users are also among those who would greatly benefit from this technology, for two reasons. First, they can use these non-invasive sensors, again, to understand what sounds they’re paying attention to. Second, they already have the real estate: you have the transmitter and the receiver, and the receiver is already implanted inside, so theoretically you get a higher signal-to-noise ratio, and researchers have shown that. Because it is located in the temporal bone, closer to the auditory cortex, you get a much cleaner signal. So not only will hearing aid users see this technology; it will be greatly beneficial for cochlear implant users as well. The auditory attention decoding I was talking about, which is very hard when you capture from outside on the scalp, is going to be much easier for cochlear implant users. And I’m really glad that some research groups and some companies have started investigating and working on this problem as well.

[Dave] Yeah, I couldn’t agree more. It’s fascinating: the way the cochlear implant is embedded in your skull really does lend itself to a lot of these new-age use cases, and it seems as if the cochlear implant might be the front lines of where we first see this implemented and commercialized. The most cutting-edge technology in this industry seems to be catering more and more to the highest degrees of hearing loss, with cochlear implants providing all these new additional use cases, which I think makes getting something like a cochlear implant that much more compelling, since it can feel daunting, being such a life change. So I think that’s really neat; that’s where we might see a lot of the first forays take place.

[Naeem] Exactly. To be honest with you, I didn’t think about cochlear implants when I started this project. One of the cochlear implant companies reached out to us and asked, do you think this technology could be used for our devices as well? And I said, yeah, of course, and you have the real estate, because you already have this receiver where you can easily implement electrodes and get a much cleaner signal. In the last few years, many papers have been published on integrating EEG signals, brain sensors, into the receiver, and that is also very promising. I think we will see this technology in action first in hearing aids and hearables.
And as the technology progresses and the cochlear implant companies realize, okay, there is this groundbreaking technology we need to incorporate, I think later we will see the cochlear implant companies also incorporating this technology and providing this service for their users.

[Dave] Fantastic. Well, thank you so much, Naeem, for coming on today. I really enjoyed this conversation, and I’m excited to learn more about AAVAA and watch it come to market. I appreciate you very much, and I appreciate everybody who tuned in here to the end. We will chat with you next time.

Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week and follow This Week in Hearing on LinkedIn and Twitter.

Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.


About the Panel

Naeem Komeilipoor, PhD, is the Founder and Chief Technology Officer at AAVAA. He has held numerous roles in the fields of neuroscience, AI, advanced audio, embedded systems, industrial design, material science, and more. Naeem is also an Entrepreneur in Residence at TandemLaunch Inc. and previously held the role of Scientific Project Manager at the YourRhythm Project in 2018. He obtained a PhD in Human Movement Science from Università degli Studi di Verona and Vrije Universiteit Amsterdam (VU Amsterdam) between 2011 and 2015. Prior to that, he obtained a Master’s degree in Biomedical/Medical Engineering from Chalmers University of Technology between 2009 and 2011.


Dave Kemp is the Director of Business Development & Marketing at Oaktree Products and the Founder & Editor of Future Ear. In 2017, Dave launched his blog, FutureEar.co, where he writes about what’s happening at the intersection of voice technology, wearables and hearing healthcare. In 2019, Dave started the Future Ear Radio podcast, where he and his guests discuss emerging technology pertaining to hearing aids and consumer hearables. He has been published in the Harvard Business Review, co-authored the book, “Voice Technology in Healthcare,” writes frequently for the prominent voice technology website, Voicebot.ai, and has been featured on NPR’s Marketplace.
