How Intel is Improving Accessibility for People with Hearing Loss with Darryl Adams

HHTM
June 27, 2023

This week, Andrew Bellavia sits down with Intel’s Head of Accessibility, Darryl Adams. They discuss Intel’s commitment to improving accessibility for individuals with hearing loss and other disabilities, and the initiatives currently underway at Intel.

The company recently announced hearing health collaborations with organizations like 3DP4ME and Accenture, utilizing 3D printing technology to develop custom-fitted hearing aids in developing countries. This approach is faster and more cost-effective than traditional manufacturing methods, aiming to increase access to hearing aids globally.

Additionally, Intel is working on integrating assistive technology with computers and improving the connection process for individuals using hearing aids and other hearing devices.

Full Episode Transcript

Hello everyone, and welcome to This Week in Hearing. Today, I’m more than happy to have with me Darryl Adams, Director of Accessibility at Intel. Perhaps because Intel has been somewhat quietly going about their business in the accessibility space, I was a bit surprised to learn all that they’re doing.

I think you’ll be pleasantly surprised too. Darryl, let’s begin with an introduction. Please tell everyone a bit about yourself and how you came to direct Intel’s accessibility initiatives.

Sure thing. And thanks, Andrew, for having me on the show. It’s a pleasure to be here. So my journey is a long one at Intel.

I’ve been working at Intel for 26 years, and for much of my career I was a technical program and project manager across all kinds of different innovation, research and development projects. Really exciting things. But along the way I’ve been growing into more challenges in my role, based on the progressive sight loss that I’m experiencing as well as having single-sided deafness. So as I’ve been working through, or basically trying to remain productive in, a role with declining eyesight, the necessity to really understand where assistive technology can fill the gaps for me became imperative. And as I started down that path, I recognized that, working at a company that is a global technology leader, maybe there was an opportunity to move beyond just supporting myself and figuring out my own career path.

But more, how might I be able to influence Intel to look at creating a future where technology is more inclusive and more accessible to more people? So that was the motivation. It came from a very personal space, and it’s been a fairly long road at that. But all told, it really does feel like we are moving in that direction.

And as technology evolves, and with the amount of compute that is becoming available over the next decade, there are tremendous opportunities to really remove many of the barriers that exist today, and also to improve all the areas where technology is already enabling people with disabilities.

I really love that story, because it’s about bringing your whole person to work with you and how you can use that to make a positive influence on so many people. There’s actually a story I read on a website entitled “It All Started in the Intel Cafeteria.” Tell us a bit about that. It’s a really great story.

Sure. Yeah. So it was probably the 2007 time frame, I believe.

I had a coworker who had severe dyslexia, to the point where he was unable to read, and he relied 100% on text to speech. He had a lot of challenges but at the same time was also very successful. And he felt, I think, very much as I did: that as an employee at Intel, maybe he would have the ability or the opportunity to influence the development of a product that could help himself and others.

So he developed the Intel Reader, which was a handheld device that would take a snapshot of text and basically read it right back to you. Today that seems commonplace; we see this everywhere now in smartphones and other devices that do it very well. But at the time it was fairly groundbreaking, and this was pre-iPhone, pre-smartphone for the most part.

And the great thing about the story is that this was one individual who had the idea, had the inspiration, had the problem to be solved, and he was able to influence and rally the support needed to actually make that happen. I was one of the folks he engaged with because, knowing that I was low vision, he was asking me: would this product help you? And if so, how could we make it better? I was able to explain to him that if you’re going to be using this device without being able to see it, or see it well, you’re going to need more tactile differentiation on the buttons, so someone can use it without seeing it.

He took that into consideration, and that helped the design of the product as well. Along the way I helped with some project management, and that experience just solidified in my mind that it is possible, as an individual, to make a difference if you have the ingenuity, the drive, and the passion to make it happen.

Well, that just speaks on so many different levels. One is the kind of company that Intel is, that the two of you could leverage your own experience to really establish initiatives within the company.

And the other is the importance of actually working with the intended end users when you’re working in the accessibility space. It’s far too easy to assume you know what they need, when in fact, if you ask somebody, they’re going to turn up all kinds of usability issues that you had never thought of. So that’s a terrific story. And of course, that led into your hearing work as well, which in the end resulted in the article in Forbes written by Steven Aquino. He led the article with the announcement that you had partnered with a nonprofit called 3DP4ME in the hearing health space. How did that partnership come about, and what are your mutual goals for the partnership?

Well, it started with Jason Solomyer, the founder of 3DP4ME, who reached out to us

as a technology and innovation company with a global presence, basically to ask if we would be interested in helping him scale his idea.

The purpose behind 3DP4ME is to really democratize hearing healthcare as a whole, bringing a technology angle to it to dramatically drive down the cost of hearing aids, as well as making them more easily available to populations who otherwise simply do not have access. You can create programs that do this at a small scale, and his interest is, once we sort out the process, how do we scale it to make a meaningful difference to a global audience?

We spoke through the details around the technology involved and the ways that we could scale, and we ended up bringing in Accenture as a partner as well, for the same reasons. With the synergy of the global presence and the ability to make things happen at larger scales, we felt like together we could do more, faster.

So we started out this year by organizing a pilot program in Jordan for 50 children, who went through the diagnosis process, were verified to be hard of hearing, and were all able to sit with an audiologist. From there, we’re exercising Jason’s vision: creating a scan of the ear with a handheld wand scanner, which produces a point cloud data file. That then gets converted into a CAD file that can be sent to a 3D printer, and a mold can be printed. The benefit of this is that you can see many, many patients in a day, capture all of those scans, and then print the molds for those hearing aids in volume.

Once that is complete, we create the plastics, snap in the electronics, and then fit the hearing aid. While the pilot wasn’t quite this quick, the goal is to do this in a day, so somebody could come in and have their hearing test, have their ear scanned, have the part printed, have it integrated, and then have it fitted within the same day.
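For readers curious about the scan-to-mold step Darryl describes, here is a minimal sketch of how an ear scan captured as a point cloud could be surfaced into a printable mesh. It assumes the open-source Open3D library and hypothetical file names; it is purely illustrative, not the pilot program’s actual tooling.

```python
# Illustrative only: point cloud from a wand scanner -> printable mold mesh.
# Assumes Open3D; file names are hypothetical.
import open3d as o3d

# Load the point cloud produced by the handheld wand scanner (hypothetical file).
pcd = o3d.io.read_point_cloud("ear_scan.ply")

# Poisson surface reconstruction needs point normals.
pcd.estimate_normals()

# Reconstruct a solid surface from the scanned points.
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# STL export expects triangle normals; compute them, then write the printable mold.
mesh.compute_triangle_normals()
o3d.io.write_triangle_mesh("ear_mold.stl", mesh)
```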

And the value of this is that not only is it far less expensive, it also minimizes the specialist’s time. If you can see a large number of folks in one day and make that happen end to end, that’s a win, rather than having people come back multiple times over weeks or months to complete the process. So that’s the goal.

Okay, so the project is really around the custom ear molds, not the hearing aids themselves, so that you can do rapid turnaround on custom ear molds for the hearing aids.

Yes, currently that is the case. But the bigger picture is recognizing that the solution goes well beyond the technology. There’s a need to be able to reach these populations and make this available, so there’s a need for broad distribution. And when somebody has lived with hearing loss, perhaps for their entire life, it’s not as simple as just giving them hearing and the day is done; there’s more process involved, with speech therapy and various other things required along that path. So we want to make sure we’re thinking about this holistically. We’re obviously concerned mostly with the technology angle in terms of what we bring to the table, but we want to do that in the context of the bigger picture, so that all the folks who will be able to benefit from this process in the future truly get the real benefit of hearing.

Okay, so the ear molds are the launch pad for this partnership, but you’re really thinking more holistically: delivering hearing care services to a global population of underserved people, perhaps people far from any audiologist, and thinking beyond the device. In other words, holistic care.

Yeah, that would be 3DP4ME’s mission. And I would say it’s probably accurate to say they’re focused very much on the general area around Jordan geographically, but the interest is that the availability of these processes and this technology would scale out for others to be able to utilize as well.

Okay, that makes perfect sense. For example, I know a person who’s building solar-rechargeable hearing aids in Botswana and having them fit, but I don’t think he’s doing custom ear molds. So with this process, when you scale it, you’re going to reach out to other organizations such as his in order to scale it and deliver it globally. And that by itself is no minor thing, because when you think about a person who lives far away from a hearing care professional, who might be running a roving circuit anyway, being able to do everything in one go or one day versus multiple visits is not an insubstantial benefit.

Right. I think that’s a very key point to this whole thing.

But in the article, he also mentioned several ongoing projects, and he quoted what you just said: democratize hearing solutions around the globe. People who know me know I’m pretty passionate about the need to deliver hearing care globally. There are now, according to the WHO, if I remember right, 300 million people who are debilitated by hearing loss.

And how do we reach those people when we’re shipping 20 million hearing aids a year, enough to fit roughly 10 million people? The number of hearing aids being shipped today is a drop in the bucket compared to the number of people who need them. So that goal of democratizing hearing solutions around the globe really piqued my interest. And there were some other things mentioned in the press release related to hearing. Tell us about them.

So we’re also looking at solving the problem around, well, the hearing component of communication when you’re working with a PC. The pandemic gave us a new way of working, where many people spend a lot of their day on their computer for work, and that now involves exactly this: meetings over video conference or audio conference. We were fortunate that the technology was at a place where it was able to support that shift.

Now, as we work with this new model and also look at more hybrid environments, where you’ve got remote people dialing into meetings that are taking place in person, this becomes even more of an issue. Actually, maybe it’s important to step back and consider the much bigger picture.

I like to start with the importance of human connection. To build relationships and develop true connection, communication is critical. And when we think about communication, we’re talking about the sender, the message, and the receiver. Across all three of those components there are opportunities for problems to be introduced. On the hearing side, the receiving side, this could be somebody who is hard of hearing, and that’s obviously a very complex space that means a lot of different things to a lot of different people. It can also mean that, contextually, there are noises in the environment causing disruptions and distractions, or things going on visually that are taking somebody’s attention. So there’s a lot of complexity there, and I like to work through each one of those ideas and determine how we might evolve technology to address them. In the case we’re talking about here today, hearing loss,

a key thing is that today it is typically very difficult to connect hearing aids to PCs. The experience is not great, and we’re changing that with the introduction of Bluetooth Low Energy Audio, or LE Audio. This is the future standard of Bluetooth audio. The standard was completed recently, Microsoft introduced support for it into Windows 11 as of last month, and companies like Intel are building out platforms to accelerate the audio stack on the platform.

We’ll see this coming out starting as early as next quarter, and certainly going into next year, together with an ecosystem of hearing aid manufacturers, headset manufacturers, and all kinds of audio device manufacturers. As companies begin to adopt LE Audio at scale, we’re going to see, I think, transformational positive impacts on usability and interoperability between hearing aids, hearing devices, and PCs, in addition to phones. So not only do you get simplified connectivity to your computer, so you can have meaningful and productive conversations with others while you’re on it, but you can also transfer that to the phone without much hassle. It’s thinking about the whole user experience end to end, with both hearing aids and other hearing devices.
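As a rough illustration of what LE Audio capability looks like at the Bluetooth protocol level, here is a hedged sketch that scans for nearby devices advertising the Published Audio Capabilities Service. It assumes the Python bleak library and the 16-bit assigned UUID 0x1850 for that service (worth verifying against the current Bluetooth specification); it is not how Windows 11 or Intel’s audio stack actually exposes LE Audio.

```python
# Illustrative LE Audio device discovery from a PC, assuming the bleak library (0.19+).
# Not the Windows 11 or Intel audio-stack API; just a sketch of finding devices that
# advertise an LE Audio GATT service.
import asyncio
from bleak import BleakScanner

# 128-bit form of the 16-bit assigned UUID 0x1850 (Published Audio Capabilities Service).
PACS_UUID = "00001850-0000-1000-8000-00805f9b34fb"

async def find_le_audio_devices():
    discovered = await BleakScanner.discover(timeout=10.0, return_adv=True)
    for device, adv in discovered.values():
        if PACS_UUID in (adv.service_uuids or []):
            print(f"LE Audio capable device: {device.name} ({device.address})")

asyncio.run(find_le_audio_devices())
```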

Well, it’s actually quite a coincidence, because I have Phonak hearing aids, and for some years now I’ve been doing all my meetings with them. Even though they’ll do Bluetooth Classic, it’s kind of a pain in the butt to make them work with the PC. So there’s a Phonak TV Connector right down there, and I do direct streaming through the TV Connector. I’m actually talking about LE Audio at the Computational Audiology Conference a little later this month, in which I address those issues.

So it’s fantastic that Intel is in this space, because in that talk I share a study that was done for a slightly different purpose, but it’s completely relevant. What they did was take a group of people with hearing loss and a group of people without hearing loss and give them a test. They did it visually first, to make sure there was no cognitive impairment, where they were exposed to words and then asked to recall them sometime later.

They all came out about the same. Then they did it with spoken words, with the hearing loss uncorrected, listening through headphones. And on that test, the normal-hearing people scored twice as well on recall as the hearing-impaired people. Then they reversed it: they corrected the sound for the hearing-impaired people and gave the normal-hearing people simulated hearing loss by tuning the audio, and the results flipped completely.

What that says is that when you’re having these meetings, think about your career. You’re having these meetings on a professional basis, and hearing-impaired people are automatically at a disadvantage in terms of how much they understand, take away, and are able to participate during those meetings. Right? It’s a career disadvantage if they can’t hear well.

And so to make it so you don’t have to play these games, like connecting a TV Connector, but instead have a simple connect-and-go, so everybody who needs it or finds benefit from it can have clear audio streaming directly to their devices, is wonderful. And of course that’s true for social engagements too.

Doing that part, making the PC connection to hearing devices seamless and effortless and with good audio quality, actually means a lot to a lot of people.

Yeah, I believe so. And I believe that the problem, as you described through that example, is so much bigger than many realize, partly because it’s not really possible to measure. What I find in speaking with many people with varying degrees of hearing loss is that many just haven’t even seen an audiologist. They’re not treated with hearing aids; they haven’t addressed the problem. They kind of bear that burden and just say, well, I’ll get by: I got half the message, or I get your point. They’re okay, it’s not great, but they just do the best they can. And if you think about that playing out around the world every minute of every day, people are just missing the point.

Whatever was being communicated, they did not get all of it, and the person doing the communicating doesn’t even know that. There’s often no indicator that says what I just said was not received exactly as I intended it. So this is a problem that is in many ways unmeasurable, but it is massive. To the extent that we can start solving on both sides of this equation, from the speech side, with clarity of speech enhanced by technology, to the ability to not only improve the quality of the audio signal of the voice but also remove the distracting or unnecessary portions of the signal, all these kinds of post-processing things are candidates for future enhancements. Whether they happen within a hearing aid or are post-processed on a PC client platform in a way that is specific to that individual’s needs, it’s that kind of personalized experience that I think we can see in the future.
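As a toy illustration of the client-side personalization Darryl alludes to, the sketch below applies per-band gain to audio according to a hypothetical hearing profile. It assumes NumPy and SciPy; the band edges and gains are invented for illustration and are not a fitting prescription, nor Intel’s implementation.

```python
# Toy client-side audio personalization: split audio into bands, apply per-band gain,
# recombine. Profile values are made up for illustration.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 16000

# Hypothetical profile: (low Hz, high Hz, gain in dB) per band.
PROFILE = [(125, 500, 0.0), (500, 2000, 6.0), (2000, 7500, 12.0)]

def personalize(audio: np.ndarray) -> np.ndarray:
    """Apply per-band gain to mono audio and recombine the bands."""
    shaped = np.zeros_like(audio)
    for low, high, gain_db in PROFILE:
        sos = butter(4, [low, high], btype="bandpass", fs=SAMPLE_RATE, output="sos")
        shaped += sosfilt(sos, audio) * (10 ** (gain_db / 20))
    return shaped

# Example: boost the higher bands of one second of synthetic audio.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
boosted = personalize(0.1 * np.sin(2 * np.pi * 3000 * t))
```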

There’s really a lot to unpack there. And when you talk about communication being a two-way street, I think there’s less realization than there should be among people doing the talking that it’s important to have good audio quality, because if you’re doing the talking, it means you want people to take away and understand what you’re saying. If you have poor audio quality, you will not get through the way you want.

It’s also the case that people are at all points on the hearing loss spectrum, because it can be gradual; some people don’t even realize it, and others may be in denial. But I remember at the beginning of the pandemic how much conversation there was about Zoom fatigue, and how much of Zoom fatigue was the cognitive overload that came with trying to listen under less-than-ideal conditions. When you were doing this for hours, how tired were you at the end of a day of meetings compared to the beginning, because you were straining the whole time?

I haven’t seen consistent measurements on how that works exactly, because it’s hard to measure cognitive load and all the rest. But empirically, better sound quality makes for easier meetings with less stress, and the study I cited says there’s a real benefit to being able to hear well. So delivering good hearing, and good communication quality both ways, during Zoom meetings is an important thing.

But that brings up another point. Because communication is a two-way street, we’ll take it one step further: we’ve been talking about people with hearing impairment, but I was surprised at what Intel is doing in alternative communication for people who cannot speak or who have speech disorders. I didn’t even think about it, actually, until I was doing my homework for this, that it was Intel who, at least of late, had been supplying the AAC solution for Stephen Hawking. And then the light bulb went off in my head: you have been working in this area, too.

Tell us more about that as well.

Yeah, so that’s probably one of the longer-standing areas where Intel has been actively engaged. It came about almost a couple of decades ago; Intel was involved with Stephen Hawking for a long time. It started off with a simple idea: how can we improve his ability to interact with his technology, including his synthesized voice? Basically, how much could we bring to the table to take full advantage of the capacities that he had, understanding that with ALS, motor neuron disease, and similar types of conditions, this is a gradually changing disability that you need to account for along the way. You can create a solution that works very well temporarily, with the understanding that it is going to have to evolve over time to continue to meet the person where they are in terms of their abilities.

So we wanted to build something that was flexible in that way, something that could take basically any type of input and convert it into a switch mechanism that would allow him to select text, produce the content he was interested in, and then output it with his synthesized voice.
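To make the “any input becomes a switch” idea concrete, here is a minimal sketch of switch scanning, the interaction pattern such systems build on: the software steps through choices and a single binary signal selects the highlighted one. It is plain Python written for illustration, not code from the Assistive Context-Aware Toolkit itself.

```python
# Illustrative switch scanning: cycle through choices; one binary "switch" event selects.
import itertools
import time

CHOICES = ["yes", "no", "thank you", "I need help", "more options"]

def scan_and_select(switch_pressed, dwell=1.5):
    """Cycle through CHOICES until switch_pressed() returns True; return the selection."""
    for choice in itertools.cycle(CHOICES):
        print(f"highlighting: {choice}")
        deadline = time.time() + dwell
        while time.time() < deadline:
            if switch_pressed():        # any sensor or signal can stand in for the switch
                return choice
            time.sleep(0.05)

# Demo with a stand-in switch that fires after a number of polls.
polls = iter([False] * 40 + [True])
print("selected:", scan_and_select(lambda: next(polls)))
```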

One of the interesting stories behind that, though, is that Stephen Hawking’s voice is iconic. That synthesis was very old; it was not new technology by any stretch. The team continued to work on all kinds of voice improvements over time, and they would show them to him and say, check this out, we can now do better intonation, we can make it sound more realistic. And he was just like, don’t. He said, this is my identity, and the world knows me by this voice.

We learned a couple of things from that. One is that thinking about your voice as your identity is profound. The other is that it’s not always the new technology that wins out; personal preference and lots of other things come into play. So this idea of listening to the communities that we’re trying to serve is just so important, and I bring it up in all the contexts in which we are acting in this space: we are not developing technology for its own sake. If we think we have an idea that might help a group that is marginalized or experiences barriers, we don’t pursue it unless we’re engaging with the people who are experiencing the problem we’re trying to solve and bringing them into the conversation, through research and formal and informal discussions, to understand.

So I think that started with Stephen, and along the way it evolved in a beautiful way. Not only were we able to help him continue to share his brilliant insights with the world, but both Stephen and Intel were very much for the idea that all of that technology would not be limited to his use; we would open source every aspect of it to allow others to use it or build upon it in whichever way they see fit. So all of that work is available as the Assistive Context-Aware Toolkit, which can be found on GitHub.

That’s inspiring to think about: people can just jump in and evolve the technology to suit specific needs, or help others who are experiencing similar scenarios or similar challenges.

But the latest thing we’re doing in that space is adding the ability to use a brain-computer interface as an input, and specifically what I’ll call a low-cost brain-computer interface. So the signals are not high fidelity but lower fidelity, and we use AI to do a better job of interpreting those signals and then translating them into actionable computing tasks, such as selecting specific keys on a keyboard or moving a mouse on a screen. That allows somebody with absolutely no motor control whatsoever to communicate via a computer and a synthesized voice. And then, lastly, you add the idea of generative AI and large language models into that, and their ability to put together really robust communication in short order is becoming more and more a reality.
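As a toy sketch of the “lower-fidelity signals plus AI” idea, the example below trains a simple classifier on synthetic feature windows and maps its predictions to selection events, for instance driving a switch scanner like the one shown earlier. It assumes NumPy and scikit-learn; a real brain-computer interface would use calibrated neural features and far more careful models.

```python
# Toy brain-computer-interface pipeline: classify feature windows, map to UI actions.
# Data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic feature vectors for two mental states: rest vs. intent-to-select.
rest = rng.normal(0.0, 1.0, size=(200, 8))
intent = rng.normal(1.0, 1.0, size=(200, 8))
X = np.vstack([rest, intent])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

def to_action(window: np.ndarray) -> str:
    """Map one feature window to a UI action, e.g. selecting the highlighted item."""
    return "select" if clf.predict(window.reshape(1, -1))[0] == 1 else "advance"

print(to_action(rng.normal(1.0, 1.0, size=8)))  # most likely "select"
```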

So I think there’s a tremendous upside here for folks who are experiencing ALS or motor neuron disease, in their ability to continue to communicate as their physical function decreases.

Yeah, there are a lot of different things going on that I think make communication much easier, with less cognitive overload, and more fluent for people who cannot speak in the traditional way. In fact, it’s funny, because one of my clients is a brain-computer interface company called Ava, and we’re working on exactly that situation. And you can see the machine learning or AI in the AAC devices themselves, which, in other words, are more predictive so that you can get to the intended thought or speech more quickly. I think that’s probably the single most beneficial thing going on here: you don’t have to type out individual keys, and you can get to even complex sentences very quickly, thanks to AI capability today.
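To illustrate the prediction idea Andrew describes, here is a minimal word-prediction sketch using a toy bigram model: the user picks whole suggested words rather than typing every key. Real AAC systems use far richer language models, including the large language models discussed above.

```python
# Toy next-word prediction for AAC-style input; a tiny bigram model over a tiny corpus.
from collections import Counter, defaultdict

CORPUS = "i would like a drink of water please i would like to rest now".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word: str, k: int = 3):
    """Return up to k most likely next words after prev_word."""
    return [word for word, _count in bigrams[prev_word].most_common(k)]

print(suggest("would"))  # ['like']
print(suggest("i"))      # ['would']
```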

And so for Intel, then, you’ve leveraged the Hawking experience not into finished AAC products, alternative communication products, but you’re enabling people to develop innovative solutions through the Assistive Context-Aware Toolkit. Is that correct?

That’s correct.

Okay, so there’s going to be “Intel Inside,” if I’m allowed to say that, in a number of different devices one may not even know about. You’re leveraging all your experience to advance the state of the art in AAC. Is that correct?

Yes.

That’s really fascinating. See, it goes back to the beginning, when I said I think people don’t necessarily know all that Intel is doing in this space, because you’re going about it quietly. And that’s a great example. You’re probably having much more influence in that space than people realize, by a long shot.

I hope so. I really just want to demonstrate technology’s place in this world in terms of how we improve people’s lives. For my particular interest, how are we doing that in a way that removes the technology barriers that exist today, so how do we make technology better so the barrier is removed, but also treats technology as an enabler, not only of access to digital information, digital experiences, and the digital economy, but in making sure that people across the spectrum of disability can come along as we create these new experiences. It just needs to be equitable.

But there are also things beyond that. I guess the analogy would be how GPS has changed the way people drive in general and get from A to B. People who are visually impaired or blind can rely on technology to do the same, but simply for walking indoors and through public spaces, having that same sort of experience that guides them and provides all the information they need about the context they’re in. These are all examples of technology enabling people: enabling independence at the very least, and hopefully also enabling people to pursue and complete an education, become employed, and have meaningful, lasting careers. That way they’re contributing fully to society and being productive, and there’s just win upon win in that equation. So I’m pretty excited about it.

Yeah, win upon win is a good way of putting it. And taking what you said, I think of it in terms of enabling people to have the best possible lifestyle and the best possible quality of life, and communication is so fundamental to that.

Communication is probably one of our most fundamental needs once you get beyond food and shelter, and I would almost say it’s a fundamental right, really. The fact that we have this technology to enable so many more people to communicate more effectively is really exciting, and I really applaud all the things that Intel is doing in that space. As we start to wrap up this discussion, what closing thoughts do you have about where we’re at today, how you see us going forward, and how you see Intel going forward?

Well, I think one of the key things I’ve been spending a lot of time thinking about lately is that we’re really at a point where things will be meaningfully changing in our relationship with technology as a whole. We have for decades been essentially following the same patterns in our use of technology.

If you think about the work context, we’ve been using desktop PCs and laptops to do the work. The example I like to cite is that when I started at Intel in the late nineties, I was given a laptop, and in my job I would create documents, send email, and create spreadsheets and presentations, and that’s how I would communicate. Fast forward to 2023, and that’s exactly what I’m doing. I have a laptop that is faster and certainly more powerful, and the way I interact with it is probably more pleasant, but it is the same set of processes.

The reason I bring that up is that the same barriers exist. If you’re somebody who is unable to use a keyboard, the whole process is still based on a keyboard today. If you’re somebody who’s not able to see a screen, the whole process is visual. If you’re unable to hear what’s going on in meetings remotely, that whole process is auditory. And, you know, we’re making incremental improvements to try to remove small parts of those barriers over time.

There’s a lot of good work that’s been done. But what I’m thinking is that with the amount of processing coming to market over the next decade, available both on the client, like a computer or a phone, and in the data center, it’s simply going to enable ways of computing and interacting with technology that we just haven’t had before. The idea of immersive computing is definitely an important example. Whether you’re talking about mixed or extended reality, virtual reality, or something as simple as a voice assistant, when you’re interacting with technology in ways that are more natural and conversational, the entire way we think about designing for that experience changes.

So when I put all of that together, the different modalities we can bring to bear with all the types of sensors we have at our disposal today, and the proliferation of very powerful generative AI, and AI in general, computer vision, language processing, all of these things are exploding. It’s going to allow us to interact in a completely different way. I have some ideas for how that can look, but no one really has the crystal ball in terms of exactly how it’s going to change. I can assure you, though, that it’s going to change.

And that’s the exciting part: to be part of this time, creating that future, is incredible.

Well, I, for one, really look forward to seeing just exactly how you and Intel create that future, for all the reasons you just said. So I very much appreciate you joining me today. It was really great to have you on. And thanks, everyone, for watching this episode of This Week in Hearing.

Thank you.

Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week and follow This Week in Hearing on LinkedIn and Twitter.

Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.

About the Panel

Andrew Bellavia is the Founder of AuraFuturity. He has experience in international sales, marketing, product management, and general management. Audio has been both an abiding interest and a market he has served professionally in these roles. Andrew has been deeply embedded in the hearables space since the beginning and is recognized as a thought leader in the convergence of hearables and hearing health. He has been a strong advocate for hearing care innovation and accessibility, work made more personal when he faced his own hearing loss and sought treatment. All these skills and experiences are brought to bear at AuraFuturity, providing go-to-market, branding, and content services to the dynamic and growing hearables and hearing health spaces.

Darryl Adams is the Director of Accessibility at Intel and has been with the company for more than 26 years. Darryl leads a team that works at the intersection of technology and human experience, helping discover new ways for people with disabilities to work, interact, and thrive. Darryl’s mission is to connect his passion for technology innovation with Intel’s disability inclusion efforts to help make computing and access to digital information more accessible for everyone, and to make Intel an employer of choice for employees with disabilities.
