Innovations in Chip Technology Enabling Enhanced Hearing and Audio Devices

HHTM
October 11, 2022

How will innovations in microchip technology influence the future of hearing and audio devices? This week, Dave Kemp is joined by Jonathan Russo of Femtosense and Andrew Bellavia of AuraFuturity. They talk about some of the latest innovations and trends in the industry, including different use cases and how this will help improve the lives of people with hearing loss, for both OTC and prescription hearing devices.

They discuss the unique approach being used by Femtosense, which allows for significantly more efficient processing and less battery drain than today's devices, and how this will open the door for new features and use cases.

Full Episode Transcript

Dave Kemp 0:10
All right, everybody, and welcome to another episode of This Week in Hearing. I'm excited for today's chat with Andy Bellavia and Jonathan Russo. We're going to be talking a lot about chip technology, the innovation happening on that front, and some of the different use cases that this is going to begin to open up here in the world of hearing healthcare. So let's start with some introductions. John, why don't you share a little bit about who you are and what you do?

Jonathan Russo 0:36
Sure, yeah, thanks, Dave. So my name is John, and I'm working at an early-stage, recently Series A-funded startup called Femtosense, located in South San Francisco. We're all about bringing more efficient, smaller-form-factor edge AI to consumer devices and medical devices, predominantly in the speech and audio space.

Dave Kemp 1:02
Fantastic. And Andy, welcome back to the show. First time, I think, that you've been on since your new gig. So do you want to share a little bit about the new gig and what you're doing now?

Andy Bellavia 1:13
Yes, it is, actually. You know, I had been so used to the pattern that comes with working at Knowles, I had to rethink the whole thing. This is the first time under my new role as founder of AuraFuturity. Really, the goal is to help innovators in the hearable and hearing health space move the needle on what I think of as the global pandemic of hearing loss, through go-to-market and branding consultancy. So I appreciate you having me on again.

Dave Kemp 1:41
Absolutely. Okay, cool. So yeah, let's frame the conversation today. John, I know you approached me; you have a really cool company and the technology that you all are creating over there. So why don't we just kick things off with you giving a high-level, in-a-nutshell view of what it is that you think your company can help to solve here within the hearing healthcare space?

Jonathan Russo 2:05
Yeah, totally. So, as an approach to bringing AI to these consumer devices, predominantly for speech and audio, we decided to take a two-pronged approach. One is to create custom silicon that'll run AI algorithms much, much more efficiently than anything that's out there, and that boils down to a few technical points. But then we also build the algorithms themselves, which are fine-tuned and optimized to run on our hardware. And by doing both of those, you get, you know, 100x gains in energy efficiency over what's currently available today, and you get 10 times the amount of ML power, if you will, in a small form factor, because you have limited memory size, right? And so we adapt these algorithms, and we have this hardware. And we really wanted to go out and say, all right, well, we have this enabling technology, what problems can we solve out in the world? We don't just want to stick ML in every single product that we possibly can and not add any value. So it took a long time for us to dive in and identify the problems that were worth solving. And the number one problem that we identified, the one with a real big need, is solving that speech-in-noise problem for people who are hard of hearing, whether they're hearables wearers or medical-grade hearing aid wearers. So we wanted to target that application and give people better speech understanding in the presence of noise. That was the motivation behind going down this whole path of audio and speech processing, and it's been about two and a half years since we started embarking on this problem. It's been very interesting.

Dave Kemp 3:56
Well, I think it's really interesting for an outsider like yourself to come in and identify an age-old problem here in the hearing healthcare space, which is the cocktail party effect, where a lot of people might not even really register on an audiogram as having hearing loss in a quiet setting, but then you stick them in a room with a whole bunch of noise and other people conversing, and they're left with this feeling of, I can't really hear what people are saying around me, it's hard to isolate a single speaker. That's the speech-in-noise issue that I think has been this quest that so many have embarked on to try to solve. And I think that we're at this time now, you know, just from doing this podcast and trying to really understand the underlying innovation that's taking place right now with the devices, where what's becoming possible with artificial intelligence, as much of a buzzword as that seems, is really starting to have an effect on the ways in which these devices can circumvent old challenges. It does feel, in a way, that we're at this advent where maybe the technology is at the point where it can overcome some of these challenges. Andy, from your perspective, as somebody that has hearing loss and wears hearing aids, what are your thoughts as it relates to this whole idea of the cocktail party and how we could solve it?

Andy Bellavia 5:30
Well, speech in noise is really the key issue at all levels of hearing loss. You'll hear everybody in the industry talking about it, and you see people constantly working on next-generation devices to give even incremental improvements of one or two or three dB of apparent signal-to-noise ratio improvement to attack that cocktail party problem. I think there are really a couple of angles here, and you alluded to one of them. I've been reading research on the accuracy of self-reported hearing loss, which turns out to be not that good. And that has implications for OTC, because if people don't recognize they have hearing loss when it's mild, or only recognize it when it's more severe, that kind of undercuts the premise of OTC. But more related to what we're talking about here, there are also people who report hearing difficulties but measure normally on an audiogram. Brent Edwards has talked about this too, and these studies are all turning that up. And so there is no hearing aid solution for a person like that. However, being able to improve speech in noise without amplification probably will do a better job of helping those people enjoy such situations, more so than any amplification-based hearing device would do.

Dave Kemp 6:54
And so, John, can you kind of walk us through how your technology works, in terms of, from the genesis of, you know, my voice and the sound that it's transmitting, to how it gets filtered through your processors and all that, and then how it ultimately reaches the recipient on the other end? Just give us a sense of how this works.

Jonathan Russo 7:17
Yeah, sure. To start, I'll give you a small taste of what the current paradigm is. There are sort of two approaches. There's classic signal processing, right? Where you're doing various different compression techniques, and you're doing various different filters, like bandpass filters that focus on the speech frequency bands and filter out the bands that are associated with certain types of noise. The second classical approach is a beamforming approach, right? So you have two microphones, or more, on hearing aids, and you're basically trying to steer these microphones in the direction of where the most important sound is coming from. But it's not discerning what the most important sound is; it's basically just saying, okay, well, I need a cone of listening right in front of me. Which is fine, if you're facing the person it does okay, but if you're stepped back from that person, or if they're off to the side of you, it's very difficult. And so we took a different approach to solving this problem, which is coming at it from a DNN, deep neural network, or, in our case, recurrent neural network approach. These are very different from those two approaches in that you're not just altering presets of the hearing aid, and you're not just manipulating the microphone. So you're in a noisy situation, and you're talking to someone. You have audio that goes into the microphone of your earpiece, right? You take that signal, you do a transform on it, and it puts it into a bunch of frequencies. And then you have the AI, which is estimating which parts of those frequencies are noise and which parts are human speech. You train this algorithm to learn what is speech and what is noise on a bunch of clean speech and a bunch of noisy speech, in various different environments, for various different speakers and many different languages. And then you start to get really good at filtering out what is the noise and what is the speech. And after you have the clean signal that comes out, you have to reinsert it back into the audio stream and then play it through the speaker. All in about eight or ten milliseconds, and for a power budget that's going to be able to run all day on a tiny 100-milliamp-hour earbud like we find in an AirPod or something like that. And it's been very difficult for people to, A, get an algorithm that's that efficient, B, get an algorithm that's small enough that it can fit in the onboard memory of one of those earbuds, and then also get one that removes enough noise and doesn't distort the voice so badly that it's worse than listening in noise in the first place. And then you have to run that algorithm on something, right? So even if you were able to make an algorithm that good, you still couldn't run it on a microcontroller or an ARM processor, for example; you'd either ruin your battery life in about 15 minutes, or you'd just not have the computational power to run it. That's why you need the hardware and the software together, and that's why we came at it with that approach. So it's very different from the current paradigm.
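For readers who want a concrete picture of the mask-based pipeline Jonathan describes (transform the audio, estimate speech versus noise per frequency, re-synthesize the cleaned signal), here is a minimal PyTorch sketch. The architecture, frame sizes, and untrained model are illustrative assumptions, not Femtosense's actual implementation.

```python
# Minimal sketch of mask-based speech enhancement: STFT -> per-bin speech/noise
# estimate from a small recurrent net -> mask -> inverse STFT. Frame sizes and
# model are illustrative assumptions, not Femtosense's implementation.
import torch
import torch.nn as nn

N_FFT, HOP = 256, 128             # ~16 ms frames at 16 kHz; real products tune for latency
N_BINS = N_FFT // 2 + 1

class MaskEstimator(nn.Module):
    """Tiny recurrent network predicting a 0..1 mask per frequency bin."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(N_BINS, hidden, batch_first=True)
        self.out = nn.Linear(hidden, N_BINS)

    def forward(self, mag):                  # mag: (batch, frames, bins)
        h, _ = self.rnn(mag)
        return torch.sigmoid(self.out(h))    # mask values in [0, 1]

def enhance(noisy: torch.Tensor, model: MaskEstimator) -> torch.Tensor:
    """noisy: 1-D waveform tensor -> denoised waveform of the same length."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(noisy, N_FFT, HOP, window=window, return_complex=True)
    mag = spec.abs().T.unsqueeze(0)          # (1, frames, bins)
    mask = model(mag).squeeze(0).T           # (bins, frames)
    cleaned = spec * mask                    # scale magnitudes, keep the noisy phase
    return torch.istft(cleaned, N_FFT, HOP, window=window, length=noisy.shape[-1])

# Untrained model over one second of random "audio", just to show the data flow;
# a real system would train on paired clean/noisy speech and run frame by frame.
enhanced = enhance(torch.randn(16000), MaskEstimator())
```

In an earbud the same loop runs frame by frame so the end-to-end delay stays within the few-millisecond budget Jonathan mentions.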

Dave Kemp 10:30
So, Andy, I'm trying to wrap my head around this, because there's so much going on in such a micro amount of time. In my mind, you're basically taking this noisy environment, you're capturing it, you're processing it, and you're filtering it back to the person so that you've extracted the speech from the noise. And you do that in a way where there's, like, no latency, so that it feels like it's happening in real time, which is almost like magic. But I'm curious, Andy, from your perspective, and from an engineer's mindset, what has historically made this so challenging? Like Jonathan said, there's the power consumption factor; what else has historically made this kind of prohibitive?

Andy Bellavia 11:26
Yeah, so I think, ultimately, at a macro level it's about the processing power available that you can actually put in an earpiece. Now, if you think about the cocktail party problem, it's really that the noise is in the same frequency bands as the speech, because the noise is a lot of other people speaking, echoing off the walls and everything else. And so there are always limitations in what classical audio filtering can do. Modern hearing aids do a really nice job within those limitations, being able to filter just so and using beamforming mics and so on. I can tell you personally, even having gone from one generation of Phonak hearing aid to another, that there was an improvement in speech in noise that came with it. But there's a limit when, ultimately, the noise you are trying to filter out is in the same bands as the speech you're trying to understand. So the newer technique, such as what Femtosense is doing, is to simply, and correct me if I'm wrong, Jonathan, almost rebuild the signal. In other words, you're taking in the signal and, with much more advanced techniques, extracting out the speech and resending that out of the speaker while leaving the noise behind, which is different than just filtering the signal as it goes through. That takes a tremendous amount of processing power, and one of the reasons why earlier attempts have not worked out well is that running it on standard processors is a very power-hungry way to do it. You almost have to have the processor designed specifically for this kind of application in order to do it efficiently. We're now getting to the point where that is possible, and doing this will really make a step-change improvement in how speech is extracted from noise.

Dave Kemp 13:19
I think now it's probably a good time to play some of these clips that you sent me, John, just so people get a sense of what this sounds like. So what we'll do is, the first ones that you'll hear are just the standard recording, and then we'll go back to back with what it sounds like with the Femtosense technology applied to it.

Sample 13:39
[Audio demo: a clip of a speaker in a noisy environment ("the robots don't look like they're actually working... but this is unbelievable access... this is like the best marketing ever, everyone looks so hyped") played first unprocessed and then with the Femtosense noise reduction applied, followed by a Spanish-language voice clip played noisy and then with the noise reduced.]

Jonathan Russo 14:29
So you guys probably heard in the clips, you have a really noisy background situation, like a factory, and it does a really good job at removing that sort of non-steady-state background noise. But then you also heard a clip with some glass clinking, which can be a really high-frequency impulse sound that could be very painful for hearing aid users, as you're probably aware. So reducing the intensity of those sounds while still allowing the user to be aware of their surroundings is a really big plus point. So not only increasing the intelligibility of speech in noise, but just making hearing aids more comfortable in general.

Dave Kemp 15:11
Yeah, no, I think that's a great point to make there, and really cool clips. I mean, it's definitely a compelling use case. And so for me, where my head's at is, you know, how does this come to fruition within this market? Are you all building a standalone product? Are you partnering? Is this something that you can use to augment an existing manufacturer's device ecosystem? Just kind of walk me through your go-to-market strategy, more or less.

Jonathan Russo 15:44
Yeah, definitely. So like I mentioned before, we have this huge repository of algorithms, and then we also have custom silicon that we build; it's a coprocessor chip. And this coprocessor sits alongside a system on chip (SoC). So if you have a hearing aid system, or an AirPod-type system or something like that, this chip is implemented on top of that system. You can either implement it as an actual discrete chip, or you can implement it directly into that SoC as an IP block. So there are a couple of different ways to do it if you're a product manufacturer. There's directly partnering with those product manufacturers and selling them an entire solution: we have the algorithm, we have the chip, here's how you put it into your system to add the functionality. There are also the suppliers to those hearing aid and earbud companies that build Bluetooth SoCs or earbud SoCs; we're partnering with those companies as well, so that when you purchase an SoC for a hearing aid, it already has that functionality built in and there's no integration effort on the part of the product manufacturer. So there are a couple of different ways to go to market that way. We have people interested in running just our algorithm on their own hardware, and we have people that are interested in our hardware who want to run their own speech enhancement algorithm, or their own sound event detection algorithm, or something. And our software development kit is pretty standard in the industry; it just allows people to build these models in the way that runs most efficiently on the chip. So they can do all their algorithm development in the most common standard ML frameworks, like PyTorch or TensorFlow, and then really just deploy it down to the chip super easily, and then rapidly prototype and iterate so that they can get the best performance. And then if you need to send an update to the user because you have a brand new model that works well, you just send an update and they download the software.
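As a rough illustration of the development flow Jonathan outlines (build a model in a standard framework, then hand it off to a toolchain that targets the chip), the sketch below exports a stand-in model to ONNX. The actual Femtosense SDK calls are not shown here, and the tiny model is purely hypothetical.

```python
# Illustrative "train in a standard framework, export for an embedded target"
# flow. ONNX export stands in for a vendor toolchain step; this is not the
# Femtosense SDK, and the tiny model below is a placeholder.
import torch
import torch.nn as nn

# Stand-in per-frame mask model: 129 spectral magnitudes in, 129 mask values out.
model = nn.Sequential(nn.Linear(129, 64), nn.ReLU(), nn.Linear(64, 129), nn.Sigmoid())
model.eval()

example = torch.randn(1, 129)                     # one frame of magnitudes
torch.onnx.export(model, example, "mask_estimator.onnx",
                  input_names=["magnitude"], output_names=["mask"])
# A chip-specific toolchain would then compile and quantize the exported graph,
# and a newly trained model could later be pushed to devices as an over-the-air update.
```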

Andy Bellavia 17:48
So ultimately, the easiest way to implement this for, say, a true wireless earbud manufacturer would be to take your silicon and your software as a unit and incorporate it into the device. Which makes it really interesting, because you're now bringing this high level of speech-in-noise extraction to a device which isn't a hearing device at all. And it makes me curious. I haven't dived into any studies that might have been done, but it seems to me that if you have a person who's at that milder end of hearing loss, where they can understand people really well in quiet situations but have difficulty hearing people at a cocktail party, this solution could help. Do you have any data to show that this solution, without any amplification at all, will give real improvement to people in loud situations versus nothing?

Jonathan Russo 18:49
Yeah, great question. So right now we're building a study with a major university to run exactly this kind of clinical-setting test. You have a user that's sitting in a controlled environment with a bunch of background noise, and then you'll have a speaker, very controlled, and you'll say, okay, well, you'll wear these headphones, or you'll wear these earpieces, and no gain is applied. And you'll ask, how many words did you get correct in this scenario? Then we'll run the enhancement scenario, where we run our algorithm, and see how many words they get right there. We'll compare it to, here's just a straight pass-through, here's the traditional hearing aid algorithms of today, and then here's our algorithm, and ultimately compare those results. So right now I'm in the middle of designing that study with a postdoc.

Andy Bellavia 19:41
And you can run it at different levels of hearing loss, correct?

Jonathan Russo 19:45
Yeah, exactly. And in the hearing aid model scenario, you can even have custom settings for each individual user. So if they're in the study, you can run an audiogram, right? And then get their custom settings, like, this is the best possible fit for this person in this hearing scenario, and then that's compared to us.

Andy Bellavia 20:06
Okay, it’s really interesting because it’s almost the anti OTC if you will, in other words, a true wireless company who implements this, if it proves effective, you know, from mild to moderate hearing loss and really bringing recognition rates up, they can honestly say that this is not a hearing aid, we’re simply are eliminating the background noise. So you can enjoy yourself at a party, you can totally step away from all the stigma inducing discussions about hearing loss, and offer device which does nothing more than allow you to hear the speech as against all the background noise. It’s a fascinating area. You know, one of the reasons why I wonder how OTC per se will actually do, and how much will it be overrun by newer technologies like this at the milder end of hearing loss?

Dave Kemp 21:02
Yeah, I mean, I think you make such a good point there, because, you know, imagine this future where you have the professional that's equipped with... we've often talked about what the cheater-glasses equivalent will be in the hearing healthcare industry, and it very well could be something like this: a device that's specifically designed for speech in noise and doesn't even amplify. It's very situational; it's just another solution that exists for people. And I think, to your point, Andy, maybe one of the ways that we can eventually move past the stigmatization of hearing aids is that you have all kinds of new devices that do different things, and hearing aids are just one type of device that can augment your hearing abilities, if you will.

Andy Bellavia 21:55
Yeah, absolutely. If, at the milder end, speech in noise is enough to satisfy people in the situations where they have difficulty, great. With moderate to severe hearing loss and beyond, you're going to need professional assistance, you're going to need an audiologist and a hearing aid properly fitted. But that hearing aid will perform even better with a solution such as this one also incorporated. So really, as this technology develops and grows, I think at all levels of hearing loss, people win.

Dave Kemp 22:29
So as we come to the close here, one last question I have, and it's kind of just a free-for-all, anybody can answer it, and you can both answer it. But I'm just curious, from the layman's perspective, in trying to wrap my head around it: why now? What's different? What's changed? What's enabling this? Help me to understand, here in October of 2022, what's transpired in the last year or two, Jonathan, that's been an enabler. Have there been underlying breakthroughs in the technology, in machine learning and deep neural nets, that have enabled your company to exist? Is there something that you can specifically say, or a handful of things that you would say are the keys to why this is now becoming feasible?

Jonathan Russo 23:21
Yeah, in machine learning in general, it's always been this chicken-and-egg between, oh, we have this algorithm but no hardware to run it, and, we have powerful hardware but we don't have an algorithm that's performing well enough. And so I think it's advancement on both of those fronts. It's having a powerful enough and an efficient enough processor that can run the most state-of-the-art algorithms, and only recently have algorithmic advances gotten to the point where you can get great performance, and even then, running on the most efficient hardware, is it going to be enough, right? So we've made pretty big strides on the hardware side, and then also big strides on the algorithm side as well. Not to get into too much technical detail, but when you really apply these tenets of sparsity, which we're all about, now you don't have to do as many computations, right? You can get the same problem done with far fewer computations, and those computations that you do have to do, you can now do more efficiently on the hardware. And so I think both of those things paired together is really the perfect storm for making this possible. I don't think you could do it with just the hardware and regular off-the-shelf algorithms, and I don't think you could do it with just our algorithm without the hardware. And I may be a little biased, but...
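To make the sparsity argument concrete, here is a back-of-the-envelope sketch: if most weights in a layer are zero and the hardware can skip them, the multiply-accumulate count (a rough proxy for energy) drops roughly in proportion. The 90 percent sparsity figure and layer size are arbitrary assumptions for illustration, not Femtosense numbers.

```python
# Back-of-the-envelope look at sparsity: pruning ~90% of a layer's weights
# leaves ~10% of the multiply-accumulates, provided the hardware can skip zeros.
# The sparsity level and layer size here are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
dense = rng.standard_normal((256, 256))            # dense layer: 65,536 MACs per frame
sparse = dense * (rng.random((256, 256)) < 0.10)   # keep roughly 10% of the weights

x = rng.standard_normal(256)
y_dense, y_sparse = dense @ x, sparse @ x          # same output shape either way

macs_dense = dense.size
macs_sparse = int(np.count_nonzero(sparse))
print(f"dense MACs per frame:  {macs_dense}")
print(f"sparse MACs per frame: {macs_sparse} (~{macs_dense / macs_sparse:.0f}x fewer)")
```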

Andy Bellavia 24:38
Yeah, and really what you're saying is, if you can do it with fewer computations, that means lower battery drain. So when you think about an ordinary TWS earbud with, say, eight hours of battery life, if I take your silicon and your algorithms, apply them to that earbud, and run the algorithm, say, full time, what's the effect on battery life?

Jonathan Russo 25:02
Yeah, so we did a study where we compared it to an ARM M7 processor running the same algorithm. If you ran our algorithm on the M7, you'd get about 30 minutes of battery life, and it also gets pretty hot. But if you took our algorithm and ran it on our hardware, you're looking at multiple days of battery life, I guess. You would be dominated by the speaker energy of the thing, right? You'd be dominated by the microphone energy. So hitting that benchmark of eight-hour or ten-hour battery life, for someone who wants to wear these all day, is very, very attainable.
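The battery comparison comes down to simple arithmetic: runtime in hours is roughly battery capacity in milliamp-hours divided by average current draw in milliamps. The draw figures below are illustrative assumptions chosen to match the rough numbers in the conversation, not measured values.

```python
# Rough battery arithmetic: hours ~= capacity (mAh) / average draw (mA).
# The draw figures are illustrative assumptions, not measured Femtosense data.
CAPACITY_MAH = 100.0                      # typical small earbud, per the discussion

def battery_hours(avg_draw_ma: float) -> float:
    return CAPACITY_MAH / avg_draw_ma

print(battery_hours(200.0))   # ~0.5 h: a ~200 mA draw matches "about 30 minutes"
print(battery_hours(1.0))     # ~100 h: a ~1 mA draw would stretch to multiple days,
                              # leaving mic, speaker, and radio as the dominant loads
```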

Andy Bellavia 25:41
So meaning it's a very, very small percentage of the total power budget of everything else running on the earbud, and you wouldn't really notice a battery life reduction running your package on an earbud. Okay.

Jonathan Russo 25:53
Exactly. And to your point about reducing the number of computations, yeah, you can drive the power way down, or, with all of that free time that you're not spending doing computation on the chip, you can now run other things on top of it, right? So if you want to identify whether or not I was breathing heavily, or how many coughs I had throughout the day, or how many sighs I had, what can I infer about this user and their health? You can run algorithms for your heart rate from a PPG sensor, or even in-ear EEG; you have companies like IDUN, right, that are making an in-ear EEG sensor for tracking where your eyes go. And so when you have that processing power available, you can run the speech enhancement, but you can run all of those cool things as well. And to geek out just for a second and give you a taste of what we have in development: you'll be able to run this speech enhancement algorithm, run all of those biosignal processing things, and then the outputs of those biosignal processing things can be used to get you this total health score and inform these higher-level models about what behaviors and changes you can make in your life. And you can even go as far as muting certain sounds. If you train the algorithm to learn, oh, this is a baby crying, you can go to the app on your phone and be like, I want to hear a baby crying, and I want to hear this, but I don't want to hear glass breaking because that hurts, and I don't want to hear the TV in the background, and I don't want to hear my dog, I don't want to hear my son. So it's really cool what you can do, and you're really only limited by your imagination and your ability to create these algorithms. The processing power is there, the tools are there. It's just opened up for the engineers to really come up with what they can do.
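As a sketch of the selective-muting idea Jonathan mentions, the snippet below labels each audio frame with a small classifier and attenuates classes the user has muted in a companion app. The class list, untrained model, and attenuation factor are all hypothetical illustrations, not a description of Femtosense's feature.

```python
# Sketch of per-class sound muting: a small (untrained here) classifier labels
# each frame, and user-muted classes are attenuated before playback. The classes,
# model, and 0.1 attenuation factor are hypothetical.
import torch
import torch.nn as nn

CLASSES = ["speech", "baby_crying", "glass_breaking", "dog_barking", "tv"]
muted = {"glass_breaking", "tv"}          # chosen by the user in a companion app

classifier = nn.Sequential(nn.Linear(129, 32), nn.ReLU(), nn.Linear(32, len(CLASSES)))

def process_frame(mag_frame: torch.Tensor, audio_frame: torch.Tensor) -> torch.Tensor:
    """mag_frame: (129,) spectral magnitudes; audio_frame: time-domain samples."""
    with torch.no_grad():
        label = CLASSES[int(classifier(mag_frame).argmax())]
    return audio_frame * 0.1 if label in muted else audio_frame

out = process_frame(torch.randn(129), torch.randn(128))   # one frame of the stream
```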

Andy Bellavia 27:33
So let me ask you a related question then, because now you're putting a chip with a ton of processing power in there, and there are companies working on an app ecosystem, Bragi for one and Sonical for another. And I'm kind of going back to the article I wrote for World Hearing Day, where I supposed that in the end, when hearables have a true app store, you're going to be able to load all these different apps in, whether on a subscription basis or what have you, including amplification apps you get from third parties, which is going to short-circuit OTC. But more generally, I could get health apps, I could get sleep apps, I could get EEG monitoring apps and use EEG signals to control different things, all this sort of thing. How close do you think your silicon and technology is to being able to support a true hearables app ecosystem, as far as integration?

Jonathan Russo 28:34
Those companies that you mentioned, and even some other ones that are building that OS and some white-label goods, we've been talking to them since the very beginning. And I'm sitting at our electronics lab bench right now, which I can tell you has a number of different development kits on it and just wires hanging everywhere. But I think we're very close in that regard. It's a matter of how we're going to implement this, right? The technology is all there; it comes down to the details of, okay, where is this actually going to sit on the chip, and how is this going to be hooked up? All of the pieces are there; it's a matter of doing it now at this point. And then there's the reality of the situation, like what these business terms are going to be. But we want to get the tech out in the world, and we want to get it out there quickly. So I think it's right around the corner. And while we have a vested interest in also selling our algorithms, we want to open it up for people to build their own, and ultimately serve the end user and the earphones, right?

Andy Bellavia 29:31
I don’t think you can overestimate the impact this will have think about mobile phones before the smartphone came. Okay. What do you do with a mobile phone? All right. Okay. The other day I was sitting on our screen porch and I heard a bird I’d never heard before, and whipped out my phone. I started the Merlin app. Let it listen. It told me what the bird was right? What phone maker would think to build a bird call analysis. You know programmed in a feature phone, right? They wouldn’t do it. Okay, all of the innovation of mobile phones has come because they opened up that app ecosystem for anybody to develop on. Now, the same thing is coming to hearable devices. And you know it everywhere, including the hearable or the hearing world. This is going to change everything. Because people are going to be able to experiment and develop in release. Hearing related algorithms have increasingly more sophistication, can increasingly better performance in certain scenarios, along with all the other apps and, and health features and so on. It isn’t up to the hearable maker to try and figure out what’s best, right? They simply have to provide comfort, good sound quality, good battery life, a sensor package that’s accurate, and let the app developers have at it. That’s going to change everything.

Jonathan Russo 31:02
One more thing. We're going to be giving a number of demos at CES 2023, so come and stop by our booth, and you can always come by our suite, where we'll have a bunch of demos going as well. But if you want to hear this speech enhancement live, in person, on the show floor with all that background noise going, definitely come by and see Femtosense. I don't know which booth we're in yet, but I'm sure you'll be able to find it.

Dave Kemp 31:31
Okay, awesome. Well, thank you so much, Jonathan. Andy, great to have you on, first time being here as AuraFuturity. Looking forward to seeing you at AuDacity in a few weeks. And Jonathan, I'm sure this won't be the last time you're on the show as well. So with that, everybody, thanks for tuning in here to the end, and we will chat with you next time. Cheers!

Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week and follow This Week in Hearing on LinkedIn and Twitter.

Prefer to listen on the go? Tune in to the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.

 

About the Panel

Andrew Bellavia is the Founder of AuraFuturity. He has experience in international sales, marketing, product management, and general management. Audio has been both an abiding interest and a market he has served professionally in these roles. Andrew has been deeply embedded in the hearables space since the beginning and is recognized as a thought leader in the convergence of hearables and hearing health. He has been a strong advocate for hearing care innovation and accessibility, work made more personal when he faced his own hearing loss and sought treatment. All these skills and experiences are brought to bear at AuraFuturity, providing go-to-market, branding, and content services to the dynamic and growing hearables and hearing health spaces.

Jonathan Russo is responsible for business development at Femtosense, an early-stage startup focused on bringing more efficient, smaller-form-factor edge AI to consumer and medical devices, with a focus primarily on the speech and audio space.

 

Dave Kemp is the Director of Business Development & Marketing at Oaktree Products and the Founder & Editor of Future Ear. In 2017, Dave launched his blog, FutureEar.co, where he writes about what’s happening at the intersection of voice technology, wearables and hearing healthcare. In 2019, Dave started the Future Ear Radio podcast, where he and his guests discuss emerging technology pertaining to hearing aids and consumer hearables. He has been published in the Harvard Business Review, co-authored the book, “Voice Technology in Healthcare,” writes frequently for the prominent voice technology website, Voicebot.ai, and has been featured on NPR’s Marketplace.
