This week, Andrew Bellavia is joined by Simin Soleimanifar, a research scientist and product specialist at the University of Illinois Urbana-Champaign. They explore Simin’s research on voice perception and speech production in cochlear implant (CI) users.
The study involved recording the voices of 13 individuals with bilateral CI devices and analyzing their ability to control volume variations during sustained vowel vocalizations. The findings revealed that CI users experienced higher voice variation compared to normal hearing individuals, indicating a disrupted vocalization auditory feedback loop. The interview highlights the implications of these findings for CI users’ communication and vocal health, emphasizing the importance of audiologists, speech language pathologists (SLPs), and interdisciplinary collaboration.
Full Episode Transcript
Hello everyone,
and welcome to this episode.
of This Week in Hearing,
I have with me
Simin Soleimanifar
She’s a research scientist and
product specialist at the
University of Illinois at
Urbana-Champaign,
and she’s going to share her
research that she’s done on
cochlear implants and how it
affects the way people speak.
Simin,
please introduce yourself and
share more. Hi, Andrew. And hi,
everyone.
Thank you for having me here
today. My name is Simin.
I’m thrilled to be here as a
guest on this amazing podcast.
A little background about me is
that I got my Bachelor’s and
Master’s degree in audiology and
then I started my PhD program
in speech and hearing
science at U of I.
Now I’m approaching the end of
my program and I’m working
on my dissertation,
which I’m excited to
talk about today.
I’ve dedicated my research
to the hearing impaired
community,
specifically those with
cochlear implants.
And today I’m excited to share
insights and conclusions
from my research.
And thank you again for
having me today.
And I’m ready to jump right in
and explore the fascinating world
of hearing care and cochlear
implants. Well,
it’s a pleasure to have you.
It was my good fortune to hear
some of this research at the
Project Voice conference
earlier.
And so I’m really excited
for listeners,
especially hearing care
professionals,
to understand more about
what you did.
Let’s start by explaining the
problem that you were
looking to investigate when you
embarked on this research.
Sure. So as I said,
my research focuses on
cochlear implants.
So for people who are not
familiar with this technology,
I’m going to just give you
a brief description.
So cochlear implants are like
tiny medical devices that are
implanted into our inner ears to
provide a sense of sound and
restore a degree of hearing
for people with severe
to profound hearing loss.
These devices consist of an
external part and an internal part.
They convert the acoustic
signal to a digital version,
and they can be implanted
in both ears,
which we call bilateral, or in just
one ear, which we call unilateral
implantation. Well,
so far the majority of research
has been focused on evaluating
the impact of these devices
on speech perception or on
localization or other perceptual
abilities in cochlear
implant users.
And these studies have
consistently shown a significant
benefit of cochlear implants
for speech perception,
either in quiet or in noise,
for localization, and in these
types of areas.
They also show that using
both devices together
has greater benefits compared to
just having one ear
and one device.
Because bilateral devices,
they can give you more natural
and enhanced listening
experience.
They can resemble the binaural
hearing abilities that normal
hearing listeners have.
But the question here is:
what about their voice?
Voice in the hearing impaired,
and specifically in those with
cochlear implants, is an area
that has received less attention.
So that was the first question
that brought me to
this research.
I’m going to go a little
bit into details.
So we know that what we perceive
can change what we produce.
You hear yourself,
monitor yourself, and
correct the errors in your
speech production in real time
while you are hearing yourself.
This is called the speech
perception and production loop,
and it can be driven by
different types of feedback.
And the most important one
is auditory feedback.
So we know that CI users have
access to auditory feedback,
even if to a limited extent,
after CI surgery.
So compared to before
getting CI,
they’re going to be better in
monitoring their voice.
But the problem is that they
still struggle with
certain types of
vocal tasks or certain
voice features.
So they could have robotic
or monotonic voice,
making it difficult for others
to understand them.
They may have difficulty
regulating their speech,
their speech level,
their loudness level,
and it leads to further
communication barriers for them.
And these challenges
can affect
their social interactions,
job opportunities, and overall
quality of life.
So if you have worked with them,
you might notice that they may
complain about the level of
their voice and they’re not sure
how loud or soft they’re
speaking. Right?
So I was wondering: what is
the effect of using these
devices, specifically both
together, on their voice and on
their ability to control the
volume of their voice?
That was the big picture of
my research and
its specific goal.
It’s interesting,
when I first heard you say
that at the conference,
I had to think a little
bit to
internalize what you said.
And I realized a good example of
that for people to consider
is when you wear occluding
earphones.
I’ve tried lots and lots of
hearing assistance devices
and what have now become
over the counter hearing aids,
and if they’re occluding
devices,
you tend to speak more quietly.
And so I would take these to a
loud restaurant and wear
them and try them out,
and my spouse would be on the
other side of the table,
and she’s like,
you have to talk louder.
You have to talk louder,
because the perception of my
voice was different than
what she was hearing.
And so that makes
a lot of sense.
And I know even hearing aid
makers have to work at feeding a
natural amount of voice into
people’s ears so that their self
perception is accurate.
But I understand with
cochlear implants,
because you’re not necessarily
100% mitigating severe to
profound hearing loss,
given the state of
cochlear implant technology,
this could become a
real problem. Now,
you had some collaborators when
you did this research too,
who was involved in it?
Sure.
I’m working in Binaural
hearing lab at U of I,
and my supervisor is Dr.
Justin Aronoff.
We’re together running
this project.
There are some other students
involved in the lab,
and we recently began
collaborating with
Cochlear Americas,
whose devices we use in
our research.
We’re looking forward to doing
the same research with
other brands.
Everything is going
on in our lab.
Okay,
got it.
And so then how actually did
you conduct the research?
What was the structure
of the research?
And how did you go about
determining people’s self voice
perception and its effects?
Yeah, that’s a great question.
So we recorded the voices of 13
individuals with bilateral
cochlear implants. As I said,
they all had the same brand,
Cochlear Americas,
and they performed a sustained
vowel vocalization test.
So I recorded their voice while
producing twelve American
English vowels
in three different conditions,
presented in random order.
So the conditions were using
both devices together while
producing the vowel,
and then using each ear
individually and alone,
right ear alone,
and then left ear alone.
And to see how good they are at
controlling the variation
of their voice volume,
we use a metric that is called
variation of peak amplitude
or VAM.
That is a measurement of long
term control of amplitude
variation.
And it actually shows the
stability of loudness,
or volume over a waveform.
A higher value
of this metric
means greater
variation, and thus a
poorer ability to control the
volume of your voice.
So actually,
we were looking for
smaller values.
A smaller value means
better control.
Okay, yeah,
that was the method that we
used for our research.
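For readers curious what a metric like this captures, here is a rough, hypothetical sketch in Python. The exact VAM formula isn’t given in the interview, so this simply measures the spread (coefficient of variation) of per-frame peak amplitudes across a sustained vowel: a steadier volume yields a smaller value, matching the interpretation described above. The function name, frame length, and test signals are assumptions for illustration only.

```python
import numpy as np

def peak_amplitude_variation(signal, sample_rate, frame_ms=50):
    """Spread of per-frame peak amplitudes over a sustained vowel.

    Smaller values mean steadier volume control, mirroring how the
    VAM metric is interpreted in the interview (not the exact formula).
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    # Peak amplitude within each short analysis frame
    peaks = np.array([
        np.abs(signal[i * frame_len:(i + 1) * frame_len]).max()
        for i in range(n_frames)
    ])
    # Coefficient of variation: peak spread relative to the mean peak
    return peaks.std() / peaks.mean()

# Compare a steady 2-second "vowel" with one whose volume wavers at 3 Hz
sr = 16_000
t = np.linspace(0, 2, 2 * sr, endpoint=False)
steady = np.sin(2 * np.pi * 150 * t)
wavering = (1 + 0.4 * np.sin(2 * np.pi * 3 * t)) * steady

print(peak_amplitude_variation(steady, sr)
      < peak_amplitude_variation(wavering, sr))  # True
```

In this toy comparison, the steady tone produces nearly identical peaks in every frame, while the amplitude-modulated tone does not, so the wavering signal scores higher, i.e., poorer volume control.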
And so then did you find
that people spoke
with a consistently different
volume, or was their volume
actually wavering as they spoke?
Yeah, exactly.
So what we saw was that
when CI users
use their devices
to vocalize,
they show a high amount of
variation in their voice, in
their waveform. So, for example,
normal hearing listeners
can easily keep their voice
at a fixed level and produce
a long vowel
for 5 seconds
or maybe longer,
and you can see a smooth and
flat waveform in their voice,
without any going up and down.
But for CI users,
either bilateral CI users
or unilateral ones,
we saw a lot of variation in
their voices compared to normal
hearing listeners.
They couldn’t hold their voice
at the same level.
There were a lot of
peaks
and valleys,
going up and down.
But that wasn’t the only
finding of this study.
The most interesting one was
that as soon as bilateral CI
users switched to use
just one device,
they got better at controlling
their voice.
We could see
a lower amount of
loudness and volume variation in
their voice as soon as they turned
off one of the devices.
So it’s contrary to the
popular belief
that having two devices
resembles having two ears.
You do get better
at speech perception,
but when it comes to
speech production,
we can see a conflict:
when they use just one device,
they get better at
controlling their voice.
And this is not just on level.
We previously did a study
in our lab and saw
the same effect for the
pitch of their voice:
how accurately they can
sing,
how well they can keep the pitch of
their voice at a specific level.
And again,
we saw that they do better
using just one device
compared to both devices
in voice tests.
So the general conclusion of
this research was that
using both devices,
although it has its benefits
in perceptual tests,
seems to
negatively affect the
voice in CI users.
So that seems to imply that the
vocalization auditory feedback
loop is actually broken. Right.
So, like,
I use the example of occluding
earphones – okay,
it caused me to speak
at a lower volume.
It was a consistent volume,
just lower,
and with a little
bit of practice
I could raise my voice and
speak so other people could hear
me wearing occluding earphones.
But you
are really implying that the
whole feedback loop is broken
if both volume and pitch are
wavering all over the place.
Would you draw that conclusion?
And if so,
are there actually any remedies
for that? Yeah, absolutely.
The reason is not
yet clear,
but what we assume is that there
might be a mismatch in loudness
cues that they can get from each
ear. And as I mentioned before,
what you perceive can reflect
on your production.
So if you get mismatched
cues from each ear,
it creates confusion when you
want to produce a vowel,
a word, or a long sentence.
It could be because of auditory
mismatch, as you mentioned,
between ears.
It could be even like
neural health.
You might have a better neural
health in one of your ears.
This is not something that
can get easily fixed.
But for remedies,
what we can do firstly is
that CI manufacturer,
they can work on syncing the two
devices to the maximum extent.
If we can get the same
cues from each ear,
I assume that solves
the problem to a great extent.
But then when it comes to voice
experts or speech language
pathologists,
I kind of have some takeaways
for them that I can mention
later in our discussion. Right,
okay.
So the immediate takeaway is
that it’s more about the
mismatch than it is the function
of the CI itself. Right.
And
are there things that hearing
care professionals can do in
adjusting the settings of
the CIs in order to get the
perception of the voice more
matched on one side
versus the other?
Or is that really an issue
for the manufacturers?
So
it depends on the CI manufacturer.
But voice experts and speech
language pathologists
can work to train CI users
to constantly monitor their
voice consciously.
So first I’m going to
talk a little bit about the
impact of this issue on their
communication.
Having a problem with
volume control
can impact
communication for CI users.
For example,
speaking too loudly
or too softly
can make it difficult for
others to understand them.
It can also cause fatigue or
discomfort for listener
or speaker.
And I had some CI users
saying they assumed they were
talking normally until, by
reading other people’s
facial expressions,
they realized they were too
loud, and they lost their
interest in the conversation.
You know, also,
inconsistent volume
can make it harder for
the listener to follow
the conversation.
And it becomes very important
when you are in a group setting
or when you are talking
about a complex topic.
But the more important thing is
that CI users are at risk of
developing voice disorders.
These disorders can happen
because of the increased strain
placed on the vocal cords when
they speak with high volume
variation, and this strain
can lead to hoarseness,
vocal fatigue, or other vocal
problems and disorders.
So CI users
may experience discomfort
or even pain when they speak.
So that’s why I wanted to
mention this before talking
about takeaways for hearing care
professionals, because it’s
really important to diagnose
voice disorders at the
earliest stages and avoid
their leading to further disorders
and disabilities.
Here are a few takeaways
that I have.
Hearing care professionals
should evaluate and monitor
their patients’ voice quality as
part of their regular
and overall care.
By incorporating voice
evaluation into routine
assessment,
they can identify difficulties
in their patients’ voice
quality. Also, audiologists
can tailor the CI fitting,
changing the settings
in a way that
maximizes the match
between the cues that
patients get from each ear.
So part of it falls on the
audiologists in the hearing care
team. And then education and
counseling are also
very important.
Professionals could educate
their patients about the
potential changes in their voice
quality and the importance
of vocal health.
I had some CI users in the lab
who had a history of
alcoholism or smoking, and their
voices got worse than those of
their peers in the same
hearing impaired community.
So they should care about their
vocal health. And hearing
care professionals
can provide strategies
and exercises to help
patients improve their vocal
health and their overall
communication.
Collaboration and
interdisciplinary approaches
are also very important.
Hearing care professionals can
work closely with speech
language pathologists and
vocal therapists to
develop treatment plans
that address both the
auditory needs and the
vocal aspects of
communication for CI users.
So those were some takeaways
from this research. Okay,
that’s really interesting.
And what I asked earlier, whether
the feedback loop was broken,
isn’t really true.
It’s only that when you
have the mismatch,
you don’t know how to
interpret it,
and so you’re going to waver
all over the place.
But it sounds like then with
appropriate training,
say from a speech language
pathologist,
you can actually work within
the new feedback signals,
if you will,
and learn how to modulate and
control the pitch of your voice
even so. Is that correct?
Exactly.
So as we discussed at the voice AI
conference, professionals
can explore the integration
of voice AI technology
into their practice.
Voice AI systems can provide
real time feedback on various
aspects of speech,
like loudness and intonation.
So by using these tools,
hearing care professionals
can empower their patients
to self monitor and to make
adjustments to their speech.
And it will ultimately improve
their voice quality. So,
for example,
if the voice AI can monitor
their voice over a long
period,
the voice experts,
speech language pathologists,
and the hearing care team
can monitor the changes
in CI users’ voices,
and they can teach them how to
make adjustments to keep
their voice fixed at
the same level.
So there are some training
methods that they can absolutely
use to improve their voice
quality and communication
skills. Okay,
so in addition to in person
training and rehab from an SLP,
you’re saying there are also
these sort of AI based tools.
Are they already available or is
that something that
would be possible?
I think both. I’m not very
familiar with the
technology, but as I’ve heard,
there are some technologies
that have launched,
and they are planning to
make them better and better.
So I think it’s not ready to
use at a larger scale,
but I’m hoping that very soon
they can be very useful.
Okay.
Which is really interesting
because I know at least one of
the CI manufacturers has an aural
rehab program already that can
be used to complement in person
rehab and training.
And so what you’re saying is if
these sorts of AI based tools
are built into those,
people can do additional rehab,
kind of like physical rehab, right?
Do additional rehab at home with
the tools in addition to having
in person rehab with an SLP,
and therefore improve the way
they vocalize. Is that correct?
Yeah, exactly. Because voice AI
can provide feedback
and be a kind of
personal coach for our CI
users. Many of my subjects,
they have desires to sing
or play in a band.
So just imagine such
a monitoring system:
it could help them sing or
play in tune. So, yeah,
besides having or participating
in rehabilitation programs,
they can have their own personal
coach at their home,
which is really interesting,
because the
greater lifestyle and well being
of a person is more than just
listening one on one with a
hearing aid or a cochlear
implant.
It’s all the other things they
do in their life, too.
Exactly.
A full life. Yeah.
They really have desires
to sing.
I have a lot of subjects
in the lab who,
before getting a CI or before
becoming deaf,
were playing
some instruments
or were singing,
but they couldn’t after that.
So they really would like to
enjoy different aspects
of their lives,
and I think they deserve it.
So I’m really looking forward to
seeing how technology can be
integrated into their lives and
improve the quality of
their life. Well,
this is a really fascinating
line of research,
and I can really see how
it will, in the end,
help CI users lead a fuller
and more enjoyable life.
And all the different
things that they do,
whether it’s music,
the way they interact
professionally,
and how important it is to be
able to hear and speak
well professionally,
lots of different parts of a
person’s lifestyle that this
research has the potential to
affect in a positive way.
It’s terrific,
and I look forward to seeing how
this all plays out in terms of
the therapies and the
tools you named.
So I appreciate you coming
on to explain that to us.
Do you have any closing
thoughts for people?
No. Thank you so much, Andrew,
for having me on This Week in
Hearing today.
It’s been a pleasure to share my
research and insights with your
audience. And yeah, that’s it.
Thank you.
And if people want to engage
with you after hearing this
podcast, how do they reach you?
They can check my LinkedIn page.
All of my contact information
is on my LinkedIn account.
And yeah,
Actually, we’re recruiting
CI users for the lab.
So if any CI users are
watching this podcast,
I would love to invite them
to come to our lab.
We’re doing a lot of different
experiments, and hearing
care professionals and teams
can reach out to me. Sure,
absolutely.
On my LinkedIn account. Well,
thank you, Simin.
I appreciate you coming on
and everybody watching.
Thanks for tuning in. Thank you.
It’s been a pleasure.
Have a great day, Andrew.
Same to you.
Be sure to subscribe to the TWIH YouTube channel for the latest episodes each week and follow This Week in Hearing on LinkedIn and Twitter.
Prefer to listen on the go? Tune into the TWIH Podcast on your favorite podcast streaming service, including Apple, Spotify, Google and more.
About the Panel
Andrew Bellavia is the Founder of AuraFuturity. He has experience in international sales, marketing, product management, and general management. Audio has been both an abiding interest and a market he served professionally in these roles. Andrew has been deeply embedded in the hearables space since the beginning and is recognized as a thought leader in the convergence of hearables and hearing health. He has been a strong advocate for hearing care innovation and accessibility, work made more personal when he faced his own hearing loss and sought treatment. All these skills and experiences are brought to bear at AuraFuturity, providing go-to-market, branding, and content services to the dynamic and growing hearables and hearing health spaces.
Simin Soleimanifar is a researcher in Speech and Hearing Science, specializing in research and development of hearing medical devices. Her work ranges from conducting feasibility studies to integrity testing and human studies. She also focuses on high-quality audio systems and voice-assistant technologies, with the goal of connecting people to the beautiful world of sound.