
Why Artificial Intelligence is Coming to a Hearing Aid (and Audiology Practice) Near You

by Dr. Chris Heddon, Resonance Medical

Since the early 2010s, artificial intelligence (AI) and machine learning (ML) have been grabbing headlines in the popular media, and it has not been just hype. Companies like Google, Amazon, and Facebook, which have invested in AI by hiring neuroscientists formally trained to program models of brain function into computers, have created tangible value in terms of market capitalization, user adoption, and our understanding of machine intelligence.

With big tech companies spending billions on AI and hiring from an incredibly small talent pool of around 25,000 AI researchers worldwide, the market rate for candidates just out of PhD programs is well in excess of $300,000 per year. For experienced AI talent, salaries balloon into seven figures, not including signing bonuses. As high-tech and innovative as hearing aid companies are, this is not a space in which they are well positioned to compete for talent. Nevertheless, before the market for AI researchers became white hot, hearing aid companies had been working on various AI and machine learning approaches for quite some time. The removal of specific technological constraints, combined with the hearing aid industry’s need to address new and disruptive service delivery models, indicates that the time to bring AI to the hearing care market is now.

 

Until the widespread adoption of Bluetooth Low Energy (BLE)-enabled hearing aids, there was not a clear use case for either machine learning or AI in hearing aids. Hearing aids were small islands of computing power limited by their small, energy-efficient microprocessors. There was simply no option to perform the computationally intensive processes necessary for AI. Now, BLE links hearing aids not just to mobile devices, but also to our broader cloud computing infrastructure.

 

Given the high cost of premium clinic-based audiology services bundled with today’s hearing aids, the next evolution necessary for the hearing industry to reach additional markets is in providing an intelligent hearing aid that can be programmed outside of the clinic.

These next-generation intelligent hearing aid systems require three specific elements: (1) hearing aids with energy-efficient wireless connectivity, which gives them access to external computing power; (2) the ability for a hearing care professional to securely program a hearing aid from a distance, which gives the user access to the highest level of hearing care at all times, even in real-world environments; and (3) mobile phones with sufficient computing power to run AI on-device, which supports dynamically responsive intelligent hearing aids while also protecting user privacy and avoiding the battery drain of the constant cellular connection to a cloud-based server that would be needed if the AI ran in the cloud rather than on the user’s mobile device. The first element enables the latter two, while the latter two give end users access to the best combination of human and machine intelligence.
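The three elements above can be sketched, very loosely, as a data flow. Everything in this sketch is hypothetical: the class names, thresholds, and settings table are illustrative, not any vendor’s API. A real system would run a trained on-device model and push settings over an encrypted BLE link.

```python
from dataclasses import dataclass, field

@dataclass
class HearingAid:
    """Element 1: a BLE-connected aid that accepts settings pushed remotely."""
    settings: dict = field(
        default_factory=lambda: {"gain_db": 20, "noise_reduction": "off"}
    )

    def apply(self, new_settings: dict) -> None:
        """Stand-in for a secure over-the-air settings update."""
        self.settings.update(new_settings)

def classify_environment(rms_level_db: float, speech_ratio: float) -> str:
    """Element 3: on-device inference. A trained ML model would replace
    these hand-picked thresholds; they just keep the sketch self-contained."""
    if speech_ratio > 0.6 and rms_level_db > 65:
        return "speech_in_noise"
    if rms_level_db < 40:
        return "quiet"
    return "general"

# Element 2: settings a remote hearing care professional (or an AI acting
# within parameters they set) might prescribe for each detected environment.
REMOTE_FITTING_TABLE = {
    "speech_in_noise": {"gain_db": 25, "noise_reduction": "high"},
    "quiet": {"gain_db": 18, "noise_reduction": "off"},
    "general": {"gain_db": 20, "noise_reduction": "low"},
}

aid = HearingAid()
env = classify_environment(rms_level_db=70.0, speech_ratio=0.8)
aid.apply(REMOTE_FITTING_TABLE[env])
print(env, aid.settings)  # speech_in_noise {'gain_db': 25, 'noise_reduction': 'high'}
```

The point of the sketch is the division of labor: the aid only applies settings, the phone runs the inference, and the fitting logic can be updated remotely without touching the device firmware.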

 

Addressing the Future of Hearing Care

 

Hearing care professionals provide fantastic quality service to people who need hearing aids, and it is the kind of personalized service I prefer as a hearing aid user. But with roughly 15% market penetration among those who need a hearing aid, and an existing shortage of audiologists in the US market, it is clear that these professionals will not be able to serve everyone who needs quality hearing care without a marked change in the technology underpinning the hearing care industry’s service delivery model.

In addition to taking the much-needed step of making hearing aids available over-the-counter (OTC), the industry needs to find a way to program OTC hearing aids to fit the specific needs and preferences of the end user. Without automated initial and ongoing fitting, the OTC designation for hearing aids will be much ado about nothing. Look no further than Japan’s longstanding OTC market, which is often, and rightfully, cited as a counterpoint to the OTC model in the United States. With device return rates approaching 50% for some online direct-to-consumer (DTC) channels, OTC without intelligent, connected hearing aids that give the end user some degree of personalization is an unsustainable proposition.

 

Perhaps more important than the OTC category in the United States, China is the hearing industry’s fastest-growing market, expanding at 3–4 times current US and EU growth rates, according to publicly available Big 6 investor reports.

 

According to the World Health Organization, over 45% of people aged 65 or older in China have moderate-to-severe hearing loss. With these numbers in hand, hearing aid manufacturers have an obligation to their shareholders to begin deploying novel service delivery models that respond quickly to the Asian market with economics that can compete with commodity Chinese hardware, and that means embracing automation.

 

Artificial Intelligence in Hearing Aids

 

How close are we to intelligent connected hearing aids? Widex appears to be first to market, having announced its mobile-based machine learning platform at the 2018 American Academy of Audiology (AAA) conference; an overview of the technology also appeared in the April 2018 Hearing Review.

For clarity, AI and machine learning are terms companies often use interchangeably to describe their products. Technically speaking, ML is a subset of AI: AI describes the broader pursuit of machine intelligence, while ML refers to the more narrowly focused algorithms that learn from data. Widex’s machine learning approach appears to be a method of determining the end user’s “auditory intention” in a particular acoustic environment in order to reduce the amount of clinic time spent fine-tuning a hearing aid to the end user’s preferences. It would be reasonable to assume that Widex will also compare the individual preferences of large sets of anonymized users in order to more efficiently suggest appropriate settings to each user. Over time, this population-level approach should reduce the number of end-user interactions to something smaller than the roughly 20 interactions Widex currently reports.
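As a purely illustrative sketch (this is not Widex’s published algorithm), a paired-comparison search over a single fitting parameter shows how repeated A/B choices can converge on a user’s preference, and how a population-level prior narrows the starting range and cuts the number of interactions needed. All names and numbers here are assumptions for the example.

```python
def learn_preference(user_prefers, low=0.0, high=30.0, tol=1.0):
    """Ternary search over a single gain parameter (dB) driven by A/B choices.
    `user_prefers(a, b)` returns True if the user picks setting `a` over `b`.
    Works for any single-peaked (unimodal) preference."""
    interactions = 0
    while high - low > tol:
        a = low + (high - low) / 3
        b = high - (high - low) / 3
        if user_prefers(a, b):
            high = b  # preferred setting lies in the lower region
        else:
            low = a   # preferred setting lies in the upper region
        interactions += 1
    return (low + high) / 2, interactions

# Simulated user whose true preferred gain is 22 dB: they always pick
# whichever option is closer to that preference.
TRUE_GAIN = 22.0

def prefers(a, b):
    return abs(a - TRUE_GAIN) < abs(b - TRUE_GAIN)

# Cold start: search the full 0-30 dB range.
est, n_cold = learn_preference(prefers)

# Population prior: if anonymized users with a similar profile mostly
# preferred 18-26 dB, starting from that range needs fewer interactions.
est_prior, n_warm = learn_preference(prefers, low=18.0, high=26.0)
```

Both runs converge to within about 1 dB of the simulated preference, but the prior-informed run takes fewer A/B interactions, which is the same intuition behind using anonymized population data to shorten per-user fine-tuning.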

 

The real challenge with creating AI-driven hearing aids is in taking these novel technologies and deploying them on mobile devices. Without securely pairing with mobile devices capable of performing AI processes, hearing aid performance is held hostage to the connection speed of the mobile device as it attempts to offload AI operations to a server and, as noted above, increases the risk of exposing protected health information.

 

Again, Widex and others have been working on machine learning and AI solutions for quite some time. GN Hearing and Starkey have also built, and published papers on, desktop computer-based AI prototypes. However, these solutions have generally remained confined to the research lab.

 

Further Development Needed

 

A connected, intelligent hearing aid requires the support of AI researchers who select appropriate algorithms for optimizing hearing aid performance, as well as specialized developers who know how to efficiently program high-performance AI on mobile platforms. Right now, this talent works predominantly at places like Apple and Google, which have recently begun publishing mobile AI tools for developers to incorporate into their apps, including hearing-related apps. However, these general-purpose AI tools have not been tailored to the specific needs of hearing assessment, environmental detection, and device programming, so considerable custom development is still required to create a mobile-optimized AI architecture for hearing aids, even when using the best available off-the-shelf solutions.

Based on the above, and despite their developmental costs and challenges, AI-driven hearing aids will drive the next wave of growth in the hearing aid market. They will provide better quality experiences for users, reach additional markets, and perform functions that will be seen as essential to users who benefit from the power of these AI-driven tools.

 

*Stay tuned next week for Part 2, which will offer some thoughts on the 5-year outlook of the hearing aid industry.

 

Chris Heddon is the founding CEO of Resonance Medical, which makes mobile-based, artificial intelligence-driven software that programs hearing aids without the need for a human technician. Chris co-founded the company as a spinout of Northwestern University in 2014 with two Northwestern professors. He holds issued and pending patents for devices and software technology related to hearing restoration and drug delivery to the ear. He was drawn to the field of hearing loss when he discovered he had autoimmune hearing loss during medical school in his mid-twenties.

About HHTM

HHTM's mission is to bridge the knowledge gaps in treating hearing loss by providing timely information and lively insights to anyone who cares about hearing loss. Our contributors and readers are drawn from many sectors of the hearing field, including practitioners, researchers, manufacturers, educators, and, importantly, hearing-impaired consumers and those who love them.
