In recent surveys, up to 60% of hearables users said they would find conversational enhancement desirable. Percentages that large mean that even people who do not measure as having hearing loss want to hear others better in noisy situations.
Meeting this need calls for AI-based speech enhancement, but until now it has not been possible to incorporate effective, low-latency algorithms directly into hearables. That changed with twin announcements from Chatable and Knowles describing such a system, developed for Knowles' AISonic audio processor chip and incorporated into Knowles' new TWS developer kit.
In this segment, Dave Kemp discusses the implications of this development with Giles Tongue, CEO of Chatable, and Andy Bellavia, Director of Market Development for Knowles Corp. They describe how their companies' partnership revolutionizes transparency mode in hearables to enhance hearing for everyone, and how such a system could further improve understanding in noise when incorporated into hearing aids. They also share how other features of the TWS developer kit enable hearables designers to meet popular use cases such as premium sound, hands-free voice assistant access, and more. It all adds up to a future where hearables become an increasingly useful tool in daily life.
Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week, and follow This Week in Hearing on LinkedIn and Twitter.
Prefer to listen on the go? Tune in to the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.
Full Episode Transcript
Dave Kemp 0:09
Welcome to This Week in Hearing for another episode, a show where we discuss all types of innovation in emerging technology pertaining to the worlds of hearing aids, hearables, hearing health, pretty much anything that falls into this giant spectrum that is the world of hearing. With me today I have Giles Tongue and Andy Bellavia, and we're going to be discussing some really exciting new innovation stemming from these two companies. So I will now kick it over to them, let them introduce themselves, and tell us a little bit about who they are and what they do. We'll start with you, Andy.
Andy Bellavia 0:44
Thanks for having me on again, Dave. It's always a pleasure. Andy Bellavia, Director of Market Development for Knowles' hearing health tech group. I'm responsible for everything which is not a regulated hearing aid: in-ear monitors for professional musicians, radio communications earpieces, and true wireless hearable devices, which is of course what we're going to talk about today.
Dave Kemp 1:05
Perfect. And Giles?
Giles Tongue 1:06
Hi, I'm Giles, CEO of Chatable. We're a leading AI startup, based in London but funded by Mark Cuban, and we started on this exciting journey in about 2017. Thanks very much for having me on today.
Dave Kemp 1:21
Perfect. Well, the reason I wanted to bring you two on is that it was just announced a few days ago that Knowles has introduced their new AISonic audio processor chip, and included with this chip is an integration with Chatable's technology. So before I put the cart before the horse and get to what exactly this integration entails, I wanted to start with you, Andy, to understand, broadly speaking, the broad strokes of this chip and some of the innovation it will ultimately usher in.
Andy Bellavia 1:59
Sure. Just one minor correction: the chip itself has been around for a little bit. We, for example, recently released a kit that enables white goods and appliance makers to add voice to their existing devices. What's new is that we've now integrated the chip into a true wireless development kit. And the reason we did that is because the hearables market is really moving forward on a lot of different fronts. We've seen in all the different surveys how sound quality is the number one demand from people wearing wearable devices, and with longer and longer wear, where people are listening and consuming audio, they want good sound quality. So we enable the highest possible sound quality there. People want long-wear comfort, they want ANC, and with long wear comes voice assistant access; people are using their voice assistants more and more. So we've enabled local wake words, and multiple wake words, within the device. All of these things go into the kit so people can develop true wireless devices for all the use cases people are demanding, and one of those is hearing enhancement, conversational assistance. We've seen in recent surveys anywhere from 30 to 50% of people asking for it. And that's because, apart from hearing impairment, even people with normal hearing have a hard time understanding in difficult situations like loud restaurants and so on. I mean, how often do you see people yelling across a table at each other? And these are people with no hearing loss at all. Recall from an earlier discussion, Brent Edwards of the NAL identified, just in the United States, 25 million people who don't measure as having hearing loss but still have trouble hearing in loud situations. So one of the goals of this kit was to enable conversation assistance for everybody, and that's really the goal of the partnership with Chatable. Now, as for the chip itself, it's actually designed especially for audio applications. It has two different processors in it. One of them is very, very low power, and this is where your wake words happen, so you can have hands-free voice assistant access, and even two voice assistants. I could, for example, ask for Google, or I could ask for my music voice assistant, whatever I want to do. So multiple wake words, that sort of thing. The other processor is very high-speed, high-compute for audio applications, and when you talk about advanced features of the kit, this is really where the magic happens. It is designed especially to run very compute-intensive applications like Chatable's, and we're very, very thrilled to have this partnership going, because now for the first time you can take this sort of AI-based conversational assistance and put it in the ear. So at this point, Giles, why don't you go ahead and explain a little bit more how that actually works.
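To make the two-processor split concrete, here is a minimal sketch of how that kind of partitioning typically works: an always-on, low-power core watches for wake words and gates a high-compute core that runs the heavy audio AI. All names, thresholds, and logic here are hypothetical illustrations, not the AISonic SDK or Knowles firmware.

```python
# Illustrative dual-core partitioning: a tiny always-on detector
# gates a high-compute DSP core. Hypothetical code, not the AISonic SDK.
import numpy as np

WAKE_WORDS = {"hey_google", "play_music"}  # multiple local wake words

def low_power_core(frame: np.ndarray) -> str | None:
    """Always-on detector; a real device runs a small keyword-spotting
    network here, not this placeholder energy test."""
    return "hey_google" if float(np.mean(frame ** 2)) > 0.1 else None

def high_compute_core(frame: np.ndarray) -> np.ndarray:
    """High-speed core where compute-intensive AI (e.g., speech
    enhancement) would run; identity as a placeholder."""
    return frame

def process_frame(frame: np.ndarray, dsp_active: bool):
    word = low_power_core(frame)
    if word in WAKE_WORDS:
        dsp_active = True  # wake the big core only on demand
    out = high_compute_core(frame) if dsp_active else frame
    return out, dsp_active
```

The point of the split is power: the wake-word core can run continuously all day, while the expensive core stays idle until needed.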
Giles Tongue 4:37
Thanks, Andy, a great intro, and you used some of my lines already, which is convenient actually, as it segues me in. So let's start with transparency mode today. AirPods and other earbuds are very popular at the moment, expected to sell something like 2 billion units over the next four years. People are very comfortable using their earbuds and don't seem to want to take them out as much as you might imagine, wearing them to listen to music, for phone calls, etc., and people are used to all-day use. But the trouble with transparency mode at the moment is that, as much as you want to be able to sit down in your favorite coffee shop, have your friend come in, and just press transparency mode to have a conversation, what that's doing is letting in all the noise. So you're kind of back to where you started: the peaceful world you were sitting in and enjoying is now being interrupted by all the noise coming through with the conversation. It's worth dwelling on conversation for a second, because what we're getting from conversation is not just voice, it's information. When you're talking to someone, you're learning about their age, their sex, the intent of what they're talking about; a lot of information is coming through that conversation. And as the noise comes in, we lose that information. In a silent situation you might be able to hear a mosquito around you, but as somebody in the next room takes a phone call or the TV goes on, you can't hear that mosquito anymore. And the same is true for conversation: all of those little nuances you were picking up, you start to lose as the noise increases. So that's what's happening with transparency mode: you want to use it for the conversation, but ultimately you've got all this noise coming in. And that's ultimately why we hear from Futuresource, in the data you mentioned earlier on, that conversation enhancement is so in demand. What we're able to do with our tech is enhance that transparency mode and take it to a level where the conversation is actually enhanced. So if we go into a little bit of detail now, we can look at AI and why it's such a good fit for solving this problem, and the two challenges we had to overcome along the way. The first, of course, is latency. This is a challenge for any DSP, but particularly for AI: adding AI will add latency. A bit like on a Zoom call, if you have a lot of latency it's somewhat uncomfortable; if you add that kind of latency into an in-person conversation, it's egregious and unusable. So you don't want any latency. And the second thing you want is to have this actually on-chip in your device, which removes all of the inconvenience that might come from secondary devices, which indeed add latency themselves. Typical approaches to AI add latency and add some inconvenience factor because they need to run on third-party devices. What we've been working on is to remove that latency completely and also be able to run on-device. But the missing ingredient at this point was the chip. And that's why we were so excited earlier this year, when we started to have some private conversations with Knowles about the AISonic chip, to look at how we could begin to build a new AI that would work on this chip with the resources that were available. So that was a bit of a monologue, sorry.
But that brings us up to where we are now, with this new Chatable AI that's running on the Knowles chip.
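To put Giles' latency point in perspective, here is a rough back-of-envelope budget for frame-based, on-device audio processing. Every figure is an assumed example value chosen to make the arithmetic clear, not Chatable's or Knowles' actual numbers.

```python
# Rough, illustrative latency budget for frame-based in-ear processing.
# All figures are example values, not Chatable's or Knowles' numbers.
SAMPLE_RATE_HZ = 16_000   # a common rate for speech models
FRAME_SAMPLES = 64        # samples buffered before each model pass
LOOKAHEAD_FRAMES = 0      # future audio the model waits for
COMPUTE_MS = 1.0          # assumed per-frame inference time

frame_ms = 1000 * FRAME_SAMPLES / SAMPLE_RATE_HZ            # 4.0 ms
total_ms = frame_ms * (1 + LOOKAHEAD_FRAMES) + COMPUTE_MS   # 5.0 ms

# In-person conversation tolerates only a few milliseconds before the
# processed sound noticeably lags the direct sound leaking past the
# earbud, so every term must stay tiny; a cloud round trip of tens to
# hundreds of milliseconds is ruled out entirely.
print(f"{frame_ms:.1f} ms buffering + {COMPUTE_MS:.1f} ms compute "
      f"= {total_ms:.1f} ms algorithmic latency")
```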
Dave Kemp 8:06
Yeah, I think this is so exciting for a number of reasons. First of all, I've known both of you for a little while now, and I just find it so fascinating that your two respective companies really represent this convergence of technologies. Giles, Chatable has existed, but it's never really had a chip compatible enough to run its AI processing, right? And then you have this Knowles AISonic audio processor chip that's the perfect companion for this type of processing, that can run, like Andy said, these intensive computes. So what does this all translate into? Well, I think what's so exciting is that this is a DSP chip that can fit onto virtually any system on a chip out there. And so what we're going to see, and this is what I think has been the recurring theme, is that the underlying technology, what's actually going on under the hood of a lot of these different devices, is improving dramatically across all kinds of the components that make up the devices. Ultimately, what we're looking at right now is a horizon where we're going to have the next generation of hearing aids and hearables and true wireless headphones, and the actual guts of the devices are getting so much better. What it means is that the next $100 pair of Skullcandy headphones, or whatever kind of headphones are out there, or your next-generation hearing aids, are going to have all kinds of improved capabilities from a fundamental standpoint, derived from the innovation happening inside the devices. And this, I think, is such a good representation of where we are today in the state of technology for these devices: the smaller we go, the more granular we're able to go, because of Moore's law and just the nature of devices getting smaller and more efficient and more powerful. The end result is that all kinds of devices hitting the market soon will have really exciting features baked into them at the core, and I think that's cause for a lot of optimism. What are your thoughts there, Andy?
Andy Bellavia 10:25
Yeah, I agree. How many years have we been talking about what it'll take to make an all-day hearable device that people will really wear? If you just tick through the different things that are in this kit, it's representative of what's possible for any true wireless company to develop around now. I mean, begin with the speaker. We've got a hybrid driver in there: a dynamic speaker, an ordinary speaker, as a woofer for the bass, and then a balanced armature tweeter. This is becoming all the rage in high-performance hearables now, and it goes with HD audio streaming; people want better sound quality. Actually just today, it's the sixth as we record this, you can go read what Mark Sparrow at Forbes wrote about hybrid drivers and how good they sound; it was just published this morning. So you've got the sound quality, because if the sound quality is poor, people won't wear them all day. You have ANC; we actually chose Sony's ANC chip because theirs is one of the world's finest, and people want ANC for all-day wear, because they get on the train, where it's loud, and they still want to listen without blowing out their eardrums. Good quality ANC. We have microphones. We've got a vibration sensor, which gives you voice security, so that your neighbor can't trigger things; it has to be your own voice. And of course we've got the audio processor chip, which enables all kinds of advanced functions: for example, using Alango for the outgoing voice, so that's clear voice for your voice assistant or calls or whatnot on the outgoing side, and of course Chatable on the incoming side, so that you have essentially the best possible incoming sound quality under any circumstances and you can understand people no matter what environment you're in. So all the pieces have now come together to give comfortable, good-sounding, advanced features. It's almost like having your own ears but better, and with, you know, your butler walking behind you telling you everything you want. "Hey Jeeves, how do we get to this restaurant?" Right? And Jeeves answers you as naturally as possible. So now, after all this time, Dave, we really have a device that people actually can wear all day long and get a tremendous amount of value from.
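As a rough illustration of the signal paths Andy lists, here is a hypothetical sketch of the outgoing (TX) and incoming (RX) pipelines: the vibration sensor gates the outgoing voice (the slot Alango-style processing fills), while ambient sound is enhanced on the way in (the slot Chatable-style processing fills). The structure and names are assumptions for illustration, not the kit's actual firmware or APIs.

```python
# Hypothetical sketch of the earbud's two audio paths; not the kit's
# actual firmware. TX is gated by the vibration sensor, RX is enhanced.
import numpy as np

def is_own_voice(vibration: np.ndarray, threshold: float = 0.05) -> bool:
    """Voice security: only the wearer's own speech vibrates the earbud
    enough to cross the threshold, so a neighbor can't trigger it."""
    return float(np.mean(np.abs(vibration))) > threshold

def tx_path(mic: np.ndarray, vibration: np.ndarray):
    """Outgoing voice for calls/assistants; placeholder for
    Alango-style cleanup."""
    return mic if is_own_voice(vibration) else None

def rx_path(ambient: np.ndarray) -> np.ndarray:
    """Incoming ambient sound; placeholder for Chatable-style
    enhancement before the driver."""
    return ambient
```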
Dave Kemp 12:35
Really, really well said. Giles, I'm curious to come back around to the Chatable piece. Andy, that was such a good summary of everything that's going on, and you mentioned there's this facet of it that is the audio that comes in. I want to give you an opportunity here, Giles, because it's so interesting what you're all doing. Can you share what exactly it is that Chatable is doing, without giving away any of the secret sauce, but kind of in plain English: what's going on, and what is the end result for the consumer?
Giles Tongue 13:04
Yeah, sure. What we're trying to do is make sure that the information you're really after, which is the person speaking to you, comes to you without any kind of interference hitting it. There could be other noises in the room: it could be a coffee shop situation, or even at home with the TV on or people next door, whatever. All of these things have some kind of interfering effect on what you're trying to listen to, the other person's speech. So what we do with our AI is take the sound that's coming in and enhance that speech, so you find the conversation is just better. If you imagine you're in a coffee shop or something, and that person is suddenly a meter closer to you, that's the kind of effect we're creating here; we're really able to create a more vivid conversational experience for you, so that, as I say, the information is preserved. The effect is one of just general enhancement. "It's just better" is what we think people will be saying when they're using this. You switch from that transparency mode, where it's just everything, noise and all the rest of it, to our AI "transparency plus" feature, as we're calling it, and the conversation just becomes better, more vivid, brighter. Take off your AirPods, or your earbuds or whatever you're wearing, and you think, oh, I wish I had that back on. So that's the kind of feature we're delivering here. "Conversation enhancement" is absolutely the right phrase for it: a more vivid, brighter experience, which we think everyone will enjoy. It doesn't have to be a coffee shop; it could be at home, it could be at the skating rink, it could be wherever you are. This is a feature for everyone that should be as simple as touching a button and switching it on. And that's what's being enabled by this combination of really efficient AI with these new supercharged chips that are available to process all of these complex calculations.
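For a flavor of how AI speech enhancement is often built, here is a minimal sketch of a generic spectral-masking approach: a model scores each frequency bin and the resulting mask suppresses bins dominated by noise. This is a textbook-style illustration with a toy stand-in for the neural network, not Chatable's proprietary neuroscience-led method.

```python
# Generic spectral-masking speech enhancement, for illustration only;
# Chatable's actual AI is proprietary and may work quite differently.
import numpy as np

def enhance_frame(noisy: np.ndarray, mask_fn) -> np.ndarray:
    """Apply a time-frequency mask to one windowed audio frame."""
    spectrum = np.fft.rfft(noisy * np.hanning(len(noisy)))
    mask = mask_fn(np.abs(spectrum))  # one gain in [0, 1] per bin
    return np.fft.irfft(spectrum * mask, n=len(noisy))

def toy_mask(magnitudes: np.ndarray) -> np.ndarray:
    """Stand-in for a trained network: keep locally dominant bins
    (where speech energy tends to concentrate), duck the rest."""
    return np.clip(magnitudes / (magnitudes.max() + 1e-9), 0.1, 1.0)

# Example: one 64-sample frame of a 300 Hz "voice" plus noise.
rng = np.random.default_rng(0)
t = np.arange(64) / 16_000
noisy = np.sin(2 * np.pi * 300 * t) + 0.3 * rng.standard_normal(64)
clean_estimate = enhance_frame(noisy, toy_mask)
```

In a real system the mask comes from a neural network small enough to run per-frame on the chip's high-compute core within the latency budget sketched earlier.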
Andy Bellavia 14:48
Yeah, I want to add to that, because I've taken a lot of hearable devices out for a spin, and some of them may have a transparency mode, but it's really unnatural. It may be good for telling if a car is coming up behind you when you're running, but when you try to have conversations with it, you can't tell what direction sound is coming from, it doesn't sound natural, or the noise comes in with it. That's a real barrier to all-day hearables, whereas the Chatable system, which gives you not only completely natural but enhanced transparency, really breaks down one of the main barriers to wearing hearables all day.
Dave Kemp 15:19
Yeah, I couldn't agree more. And I think again it comes back to this whole theme: I'm going to put my hearing health hat on here, and Andy, we've kind of had this discussion before, but I continue to believe that one of the most significant ways we can improve adoption of hearing solutions, and that should not just be limited to hearing aids, I'm speaking more to this concept of doing anything to treat and alleviate your hearing loss, knowing how pervasive hearing loss is and how specific it is to each individual, is this ability, like Giles said, where you turn a button on. I think about that scenario, broadly speaking, at scale: lots and lots of people being able to experience, for the first time, sounds that they haven't really heard in years, whether it's just the clarity of a conversation. How often is it that, over the course of time, hearing degrades to the point where your brain just becomes acclimated to the fuzziness associated with your type of hearing in a conversation? So restoring clarity there, being able to hear the birds chirping again, being able to give people this restoration of their sense through technology. The big thing that does is it makes people aware of what they've lost, and I think that's probably one of the most significant ways we can help drive overall adoption, because it gives them a taste. When you start to have this massive proliferation of consumer devices that people are buying for streaming, they're not really buying them for these hearing augmentation purposes; some will be, but by and large, many of the people I'm describing here will be exposed to this post-purchase. And I think those are the ones that really represent the people that have historically been cited in this seven-year gap of taking action on hearing loss. Now we're looking at a scenario, very soon, where people will be exposed and get a taste of this. And whether they decide that their pair of true wireless headphones with this conversational enhancement boost is enough for them, or they want to go and adopt something a little more significant than that, I just think the name of the game is exposure, and this is going to create so much more exposure for folks who may have slowly lost their hearing over the course of time.
Andy Bellavia 17:50
Yeah, Dave, you make a lot of really excellent points there. To begin with, there are the people who just have a hard time hearing in loud environments and don't have hearing loss at all, which is most of us. So even apart from hearing loss, I think this is just a wonderful enhancement for all-day hearables. But your point is quite right: people who are at the front end of hearing loss, who only start to notice it in loud restaurants and so on, will suddenly realize how much better it can be. And that's already an accessibility issue, because you're putting a consumer-priced device in someone's ears that allows them to participate in daily life, and that's really important. And of course, as hearing loss gets more severe, you can see why this is valuable even in next-generation hearing aids. If you're doing true AI noise extraction, as well as the amplification that severely hearing-impaired people need, that's really wonderful, because, I'll tell you, even with modern hearing aids, it's probably the use case they satisfy the least. There are things they're doing to improve it, and in my own case I notice a vast improvement, but it's far from perfect. So you can see real value all the way up and down the hearing loss spectrum in the kind of approach Chatable is taking.
Dave Kemp 19:03
Right, I couldn't agree more there. And I think, again, it provides more of a footprint for people to be exposed to this kind of functionality. But kind of bringing it home here, what's exciting, again, is this idea that when you take the magnifying glass out and look at what's going on with these systems on a chip: it wasn't even that long ago that the whole premise didn't exist, that you weren't able to fit an entire system on a single chip. And when we're talking about little tiny devices that you wear inside your ears, size is a big deal. When you really start to understand what's happening at the most granular level of these devices, it gives you a lot of optimism around what comes next. Here we are in 2021, and I'm curious to get your overall thoughts about what else we can start to layer onto this. We don't have to go into specifics with use cases or anything, but I'm curious to understand from you two the ways in which this trajectory continues to play out.
Giles Tongue 20:09
Yeah, I can jump in from a possibly slightly different angle here, which is to say that the availability of these chips, enabling AI and other complex processes to happen on-device, may encourage, and we might be an example of this, other venture capitalists and others to get involved. And that will encourage new approaches to solving today's problems to be funded and brought forward. I mentioned we're funded by Mark Cuban, who likes to invest in AI, and we've been lucky enough to be able to bring what is a totally new approach of neuroscience-led AI into this field, and look what we've come up with. So hopefully this evolution continues. I mean, this is a big step, right? Chips that can enable AI to run in the ear: this is a huge, monumental moment. The outcome of what we're achieving here is the sort of thing Facebook and Google and all the other guys call moonshots. This topic we're on was unimaginable years ago; there are fleets of people at Facebook Reality Labs and the moonshot projects at X working on trying to solve these problems, and they've now been enabled, they've now been unlocked. So the Knowles system is now a platform on which other people can get funded and more exciting use cases can be unveiled. At the moment, the solution people are looking for is conversation enhancement, but once we're beyond that, what's the next thing, and the next thing? There are more and more features, no doubt, that will come down the road, and hopefully this development will encourage more innovative thinkers and new entrants to get funding and take their ideas forward. So it's difficult to know exactly what that next feature and evolution might be, but there are all these different areas you've been talking about, particularly Dave and Andy on this and other podcasts: conversational AI, integrating with the cloud, and how to enable different use cases just through wearing this instead of using a smartphone or a computer or other means of communicating. So yeah, it's the start of an exciting new wave.
Dave Kemp 22:19
Oh yeah, I completely agree. I think this whole idea of the processing being on the chip itself, that's the big breakthrough here. And like you said, you have Facebook out there calling this a moonshot; well, you've kind of already landed on the moon here. So I think that's so exciting. And to your point, you brought one piece of the puzzle, and you needed what Knowles had in order to really bring this thing to fruition. And I think that's the other thing here: you had mentioned Alango earlier, Andy, too, right? These are names that you actually hear throughout the industry, and the licensing of technology is really starting to become a theme; we've talked about that with Jacoti as well. So it's interesting how you're now seeing a lot of these software players come into the fold, licensing their technology to be baked into something like a Knowles chip. And again, the possibilities do start to make your head spin a little bit, because we are in this new frontier where a lot of this can be done on-chip. So, closing thoughts from you, Andy?
Andy Bellavia 23:21
Yeah, very well said. I mean, it's really an exciting time. You mentioned the licensing: we made this chip open, it's an open DSP, so anybody can write for it. So think about some of the things where we're landing on Mars, if you will, right? Think about Nikolaj over at Bragi, and how he's talked about what happens when you can actually start to derive emotional intent. His example is if I say "I need help" calmly, versus "I need help!" urgently. Deriving the emotional intent from that same expression means a lot, and you have people starting to approach that problem. And you can do mood analysis there; people are working on playing music according to your mood, that sort of thing. All of that starts to be possible with the processing power we're putting on-chip today. So I think it's really an exciting time for all the advanced use cases for hearables, and we're really going to start to see devices which are not only useful but really enhancing of a person's lifestyle. It's very, very exciting.
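As a purely hypothetical illustration of the "I need help" example: prosody features such as loudness and how much it swings can hint at urgency even when the words are identical. The toy thresholds below stand in for a trained emotion model; this is not Bragi's or anyone's actual implementation.

```python
# Toy prosody-based urgency cue; hypothetical illustration only,
# not Bragi's or anyone's actual emotion-detection method.
import numpy as np

def prosody_features(audio: np.ndarray, sr: int = 16_000) -> dict:
    """Crude urgency cues: overall loudness and how much the
    loudness swings across 20 ms frames."""
    frame = sr // 50
    n = len(audio) // frame
    energy = np.array([float(np.mean(audio[i*frame:(i+1)*frame] ** 2))
                       for i in range(n)])
    return {"mean_energy": float(energy.mean()),
            "energy_swing": float(energy.std())}

def sounds_urgent(feats: dict) -> bool:
    """Toy threshold rule standing in for a trained model."""
    return feats["mean_energy"] > 0.05 and feats["energy_swing"] > 0.02

# Example: one second of quiet, flat speech-like noise reads as calm.
rng = np.random.default_rng(0)
calm = 0.02 * rng.standard_normal(16_000)
print(sounds_urgent(prosody_features(calm)))  # False
```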
Dave Kemp 24:20
Giles, closing thoughts?
Giles Tongue 24:21
Yeah, totally. I can only speak from our own perspective, but I can tell you that every day we've spent working on the Knowles chip has felt like a lifetime. Dr. Andy on our team was talking about how, the other day, we were sitting in a room with two laptops going, a bunch of breadboards, the earbuds in, doing some stuff, and in an hour we've gone, "wow, that's much better than it was an hour ago." And then Andy was describing how, to do that before, it would probably have been five years of work and, you know, NASA's entire compute power; we'd done it in an hour, just three guys sitting in a room. That's the kind of incremental step we're now able to make at lightning-fast speed, and that's going to take us to some really exciting places. So yeah, we couldn't be more excited right now. We've got the chips there, and the reference design for TWS, which means we're literally able, within minutes, to go, "right, we learned a little, let's try that." And in the meantime, we're off making the AI better and better and better. It's just such an exciting moment: the speed of iteration and the potential for where we can go is just so exciting. This is really a huge moment, and I hope people realize quite where we are right now. We're really grateful to Knowles for believing in us, for giving us the chance, and for supporting us. And Andy, you mentioned the system is very easy to work with, which is a good thing for other people wanting to start developing on it; it really is a good system to work with. So we're just so grateful and excited for this moment. Even we don't know exactly where we go from here, but we've got some nice plans, and let's see if we can start ticking off some milestones.
Andy Bellavia 26:01
Well, thanks. I mean, yeah, we find the relationship extremely valuable as well, and I appreciated all that you said, Giles.
Dave Kemp 26:08
Fantastic. This is so cool. From a personal standpoint, it's kind of funny that two of the people I've really gotten to know through all the podcasts and everything just so happen to be the ones whose respective companies are bringing forward such a monumental, landmark milestone in this industry. It's so cool to see. And to your point, Giles, I hope people understand it; I think people are totally going to understand it once they start to see this manifest in the market in the not-too-distant future. Because I continue to say this everywhere, as loudly as possible: this next generation of hearing aids and hearables and true wireless devices is going to be amazing. So I'm so excited, and it's just getting started with what's coming. Lots and lots to be optimistic about. That's all for this week; thank you to everybody who tuned in, and we will chat with you next time.
Transcribed by Otter.ai
Andrew Bellavia is the Dir. of Market Development for Knowles Corp, a leading acoustic solutions provider to the hearables, smart speaker, mobile, and IoT industries. He has been personally involved in supporting the development of many innovative hearable devices since the beginning with pioneers like Bragi and Nuheara. Andrew is also an advocate for the role technology can play in addressing hearing loss, and in the practical use cases for voice in the coming hearables revolution. When not in the office he can usually be found running the roads of N. Illinois, and until recently, the world, often photographing as he goes.
Giles Tongue is the CEO of Chatable, an industry-leading artificial intelligence start-up founded in 2017 with investment from US billionaire Mark Cuban and headquartered in London, UK.
Dave Kemp is the Director of Business Development & Marketing at Oaktree Products and the Founder & Editor of Future Ear. In 2017, Dave launched his blog, FutureEar.co, where he writes about what’s happening at the intersection of voice technology, wearables and hearing healthcare. In 2019, Dave started the Future Ear Radio podcast, where he and his guests discuss emerging technology pertaining to hearing aids and consumer hearables. He has been published in the Harvard Business Review, co-authored the book, “Voice Technology in Healthcare,” writes frequently for the prominent voice technology website, Voicebot.ai, and has been featured on NPR’s Marketplace.