Voice Technology and the Future of Hearing Healthcare: Interview with Dave Kemp

Hearing Health & Technology Matters
May 28, 2020

Hearing Healthcare 2020 is a column where we explore the forces behind the changing landscape and disruptions impacting the hearing healthcare industry.

This week, voice technology expert Dave Kemp returns to talk with HHTM’s President, Kevin Liebe, about voice tech and its potential impact on the future of the hearing industry. 

 

KL: To start, could you provide readers with some of your background and how you came to be interested in hearables and voice technology?

DK: I started working at Oaktree at the beginning of 2016, and one of my early goals was to find an area within the industry where I could develop real expertise and create opportunities for public speaking. I’ve always been passionate about technology and sold software-as-a-service prior to joining Oaktree, so technology felt like the natural area for me to focus on. I noticed that made-for-iPhone hearing aids were becoming increasingly popular and that consumer wearables were migrating to the ear (such as Bragi’s Kickstarter campaign in 2015), which led me to start looking into potential use cases for “smart” ear-worn devices.

I launched my blog, Future Ear, in 2017, with the tagline, “connecting the trends converging around the ear.” In my initial post, I mentioned that we’ve effectively begun “wirelessly tethering our ears to the internet,” which has only become more pronounced in the years since, with the rise of AirPods and ambient media (e.g., podcasting). Another area I began researching was voice technology. Although Apple bought Siri in 2010 and baked it into the iPhone 4S, mass-market voice assistants didn’t really take off until Amazon launched its Echo speakers with Alexa in 2014. Google followed suit two years later with Google Home (housing Google Assistant), and it became clear that Google and Amazon in particular were betting that voice assistants would represent a new way to interface with technology.

I looked at voice assistants and hearables (including hearing aids) as working in tandem – the assistant would serve as the “ambient” interface, and the hearable would serve as the home for the assistant. In this scenario, we could operate our phones (and other smart devices) without having to touch them, as we could just speak to the assistant to execute an increasing number of the “tasks” we currently rely on our smartphones for (I wrote about this in 2017). Fast forward to today, and we’re really beginning to see progress toward making this scenario a reality.

 

KL: When thinking about hearables and the use of voice technology, many people might think of a teenager or 20-something using Apple AirPods to talk to Siri. How accurate is this perception, and do you have any demographic or usage data that might change this view?

DK: One of the most interesting aspects of smart speaker sales and voice assistant usage is that adoption of the technology is happening across the entire age spectrum, from young children to older adults. It sort of makes sense when you think about it – kids don’t even need to know how to read in order to play their favorite song through Alexa, and older adults don’t need to look at and tap on a small piece of glass to inquire about the weather. There’s not much of a learning curve.

According to Voicebot’s 2019 smart speaker survey, about 20% of smart speaker owners are 65+ years old, and half of them indicated that they use their device daily. If we apply these numbers to the number of Americans who own a smart speaker (90 million US adults), then we can assume roughly 18 million adults 65+ own a device, and 9 million of them use their device daily.
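For readers who want to sanity-check that back-of-envelope estimate, here is a minimal sketch of the arithmetic, assuming the rounded survey figures quoted above (roughly 90 million US adult smart speaker owners, about 20% of owners aged 65+, and about half of that group using a device daily):

```python
# Back-of-envelope estimate based on the rounded survey figures quoted above.
us_adult_smart_speaker_owners = 90_000_000   # Voicebot estimate cited in the interview
share_owners_65_plus = 0.20                  # ~20% of owners are 65+
share_daily_users = 0.50                     # ~half of that group reports daily use

owners_65_plus = us_adult_smart_speaker_owners * share_owners_65_plus
daily_users_65_plus = owners_65_plus * share_daily_users

print(f"Owners 65+: ~{owners_65_plus:,.0f}")            # ~18,000,000
print(f"Daily users 65+: ~{daily_users_65_plus:,.0f}")  # ~9,000,000
```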

So, I think it’s fair to say that a decent-sized portion of the patient demographic might be using Alexa or Google Assistant regularly, and the numbers are only going to rise.

 

KL: Some have recently begun proposing the increased use of voice technology as a way to reduce physical contact in a post-pandemic world. In your opinion, how will COVID-19 influence the development of voice technology and what sort of examples might we begin to see in the near future?

DK: There’s a lot of talk within the voice technology industry about this notion. In essence, what may have seemed rather trivial or gimmicky before might now be broadly appealing. We touch a lot of surfaces in public places – payment terminals, kiosks at the airport, buttons in the elevator, doors, etc. COVID-19 has made us much more germophobic, and that may motivate companies to implement voice-based functionality as an alternative.

The area I feel most confident will shift broadly to voice is payment terminals. With Amazon Pay, Apple Pay, Samsung Pay and Google Pay all widely available at US retailers, in combination with each of those companies having its own hearable and voice assistant, it seems plausible to me that voice-based payments may become a reality in the market. Google recently announced that Google Assistant can now authenticate purchases using voice biometrics – think fingerprint or Face ID, but with your voice as the method of authentication. Amazon also announced a partnership with Exxon Mobil where you’ll be able to pay at the pump with Alexa at Exxon gas stations. There’s momentum behind this premise.

All the puzzle pieces are right there, and we’re beginning to see them assembled. I’d wager that in 2-3 years, you’ll be able to pay for a ton of things through your voice assistant (which might be housed in your hearing aid), have it tied to Apple Pay or Google Pay or Cash App or whatever, and have it instantly transact from there. That’s where this all seems to be going and why I’m bullish on voice commerce.

 

KL: Should hearing professionals care about voice technology?

DK: The first made-for-iPhone hearing aid, ReSound’s LiNX, debuted in 2013. HIA reported that 94% of hearing aids sold in Q3 2019 had wireless capabilities (which I assume means Bluetooth). So, in just under seven years, we went from a technology that was a novelty in the industry to one that’s essentially standard.

I think we’re sort of at a similar point in time with voice-assistant-capable hearing aids. Today, they’re definitely a novelty, although if you watch someone actually use something like a Pixel phone and Google Assistant through their Phonak Marvel hearing aids, you’ll swear you’re seeing the future. At least that’s how I view them. I think hearing care professionals should at least be aware of these capabilities and monitor their progress, because there’s a good chance that an increasing number of their patient base already uses voice assistants through a smart speaker, connected car or smartphone, and might find it appealing.

The real beauty of this technology, in my opinion, is that it’s basically free for both the provider and the patient. This is a totally new feature set available on some of today’s hearing aids, and it’s only going to get more sophisticated over time and allow for more optionality as the underlying technology matures.

Being well-versed in this new feature set might be just another way the hearing professional can differentiate their services and ultimately enhance the patient experience.

 

KL: What do you think are the most exciting potential use cases of voice technology in the hearing industry over the next 5 years?

DK: Along with voice technology, another area I’ve been researching, writing and podcasting about is biometric data collection via wearables. Just as we’re beginning to see hearing aids become access points to consumer voice assistants, we’re also seeing hearing aids being outfitted with various sensors that can capture different biometric data. For example, Starkey’s Livio AI can capture metrics gathered from the inertial sensors embedded in the hearing aids. The first PPG (photoplethysmography) sensor, which can capture data such as heart rate, was embedded into a RIC hearing aid just last year. I would bet that biometric monitoring becomes a standardized feature set in time, because it’s suddenly feasible both financially and technically.

So, what does this have to do with voice technology? Well, I recently co-authored a book titled “Voice Technology in Healthcare,” in which I outlined a concept I’ve termed “Nurse Siri.” In essence, I believe that the hearing aids of tomorrow (3-5 years from now) will serve many functions beyond hearing amplification. One, as I’ve outlined here, is to play home to our voice assistants, and another is to capture an increasing amount of physiological data. Where things get really fun, however, is when you start to combine these future use cases. That’s what Nurse Siri is – a combination of the two, where the hearing aid captures the data and Nurse Siri analyzes it all for you to create actionable insights.

Imagine having a hearing aid that detects you’re getting sick from the physiological data it’s assessing through machine learning. The patient would be wearing their hearing aids for long periods each day, allowing data to be logged minute by minute, hour after hour. That’s going to provide a really solid baseline within the longitudinal data, against which any irregularities or anomalies can be flagged. This would effectively transform the device into a preventative health tool that proactively notifies the patient of threats to their health. In my eyes, that’s going to be really appealing to older adults. Or, imagine being able to manage your diet by telling your voice assistant about everything you eat, and having the assistant log it and provide guidance based on how your body responds to what you’re eating, at a physiological level. In this scenario, I guess it’s more like Coach Siri or Dietician Siri.

There are a lot of different ways to layer conversational AI (voice assistants) on top of biometric data; these are just a few examples.
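To make the “personal baseline versus anomaly” idea above a bit more concrete, here is a minimal, purely illustrative sketch of how per-minute heart-rate readings logged by a hypothetical sensor-equipped hearing aid could be compared against a rolling baseline. The window size, z-score method and threshold are assumptions for illustration only, not any manufacturer’s actual algorithm:

```python
from statistics import mean, stdev

def flag_anomalies(heart_rates, window=60, z_threshold=3.0):
    """Flag readings that deviate sharply from a rolling personal baseline.

    heart_rates: per-minute heart-rate samples (bpm), e.g. logged throughout
    the day by a hypothetical sensor-equipped hearing aid.
    window: number of prior samples that form the baseline.
    z_threshold: how many standard deviations from the baseline counts as an anomaly.
    """
    anomalies = []
    for i in range(window, len(heart_rates)):
        baseline = heart_rates[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; skip rather than divide by zero
        z = (heart_rates[i] - mu) / sigma
        if abs(z) >= z_threshold:
            anomalies.append((i, heart_rates[i], round(z, 1)))
    return anomalies

# Example: a steady baseline around 70 bpm with one suspicious spike at minute 100.
samples = [70 + (i % 3) for i in range(120)]
samples[100] = 110
print(flag_anomalies(samples))  # flags the spike at index 100
```

In a real product, the baseline would presumably span weeks of longitudinal data and combine multiple signals (heart rate, movement from the inertial sensors, and so on), but the underlying idea is the same: establish what is normal for the individual, then surface deviations.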

 

KL: Assuming the ‘Nurse Siri’ concept you describe evolves in meaningful ways in the next few years, this would seemingly place hearing healthcare professionals on the “front lines” of a patient’s overall wellness by helping monitor the reliability of these systems and potentially communicating irregularities to other medical professionals. Would you agree?

DK: It’s interesting that you should mention other medical professionals, and now I’m thinking back to the conversation you and I had on my podcast a few months ago where we discussed the 2020 Audiology Dispensing Survey you all had just put out at HHTM. One of the findings that really stood out to me was that roughly 55% of respondents indicated that their top referral source for new patients was doctors or medical referrals.

The way I look at it is, if your cardiologist is going to prescribe some kind of heart monitor and you’re also a candidate for some type of amplification, why not kill two birds with one stone with a hearing aid that provides both? There’s still work to be done by the manufacturers to ensure medical-grade monitoring, and we’re probably still years away from that, but I do think it’s within the realm of possibility that those types of capabilities become feasible without being a major drain on the battery or being cost-prohibitive.

I think we’re at the start of a new phase for hearing aids, because so much of the underlying hardware and software has been incubating and maturing this past decade. They reside in one of the most sought-after spots on the body (just look at where the R&D departments within the tech giants are focusing), and this industry stands to gain because it’s full of ear experts. I’m really optimistic about the future of the technology, which I think will allow for a much stronger value proposition and therefore create more incentive for people to establish a relationship with audiologists and other hearing professionals.

 

KL: Dave, thanks so much for sharing your take on voice technology with our readers and giving us some interesting food for thought on why it’s becoming increasingly relevant to those of us in the hearing industry.

DK: Thanks a lot, Kevin. I always appreciate all the insight you and the team at HHTM generate for the industry!

Dave Kemp is the Director of Business Development & Marketing at Oaktree Products and the Founder and Editor of Future Ear. In 2017, Dave launched his blog, FutureEar.co, where he writes about what’s happening at the intersection of voice technology, wearables and hearing healthcare. In 2019, Dave started the Future Ear Radio podcast where he and his guests discuss topics pertaining to what he’s covering through his blog. He has been published in the Harvard Business Review, co-authored the book, “Voice Technology in Healthcare,” writes frequently for the prominent voice technology website, Voicebot, and has been featured on NPR’s Marketplace. Dave travels the country giving talks to hearing care professionals on the technological evolution that the hearing aid is currently experiencing and the new use cases today’s hearing aids are supporting.

 

Kevin Liebe, AuD, is President and CEO of Hearing Health & Technology Matters (HHTM). He also serves as a Scientific Advisor to Neosensory, a Silicon Valley based startup pioneering experiences in sensory augmentation. As an audiologist, Kevin has experience in a variety of settings, including private practice, ENT, and industry. He is a past president and board member of the Washington State Academy of Audiology.
