What Would Be a Preferred Hearing Aid Performance?

 

Would a consumer select the same hearing aid performance as was recommended by an audiologist if given a choice among different signal processing schemes?

Results show that there is little agreement, at least for the four hearing aid algorithms (in effect, four different hearing aid operating systems) compared in this study. This post presents consumer preference comparisons between two countries within the same study, the preferred algorithms in different listening environments, speech intelligibility in noise with the preferred algorithm, and SNR intelligibility comparisons among the four algorithms.


Country Comparisons

 

Would the results be similar to those found in the U.S. if the study were conducted with subjects in a different country – in this case, the Netherlands? Individual algorithm preferences in the Netherlands showed the same inconsistency between subjects' preferences and audiologists' recommendations as in the U.S., so those results will not be duplicated here. Instead, a few graphs have been selected to provide additional information not measured in the U.S. study.

 

Figure 1. Signal processing algorithm rank order preference differences between subjects from The Netherlands and the United States, for the same study.

Subject algorithm preferences in the two countries for first and second rank order are shown in Figure 1. The Clarity algorithm was ranked highest for first and second choice by subjects in The Netherlands. In the U.S., the Comfort and Clarity algorithms were rank ordered the same, with both being the preferred algorithms. The Comfort algorithm showed the greatest difference between the two countries, for which there seems to be no logical explanation. In general, the results show little consistency in how subjects rank order the algorithms they prefer, regardless of country.


Preferred Algorithm in Different Listening Environments

 

In The Netherlands study, subjects were asked to rank order their algorithm preferences when listening in the following common environments: quiet, party, car, street, and music (Figure 2). The Clarity algorithm was the clear winner, ranking first or tied for first in every listening environment except music, followed closely by the Equalizer algorithm.

Figure 2. Subjects’ preferred algorithms (rank ordered 1 and 2) when listening in the environments identified, and when allowed to adjust between the four different algorithms used in this study.


Speech Intelligibility in Noise with Preferred Algorithm

 

Figure 3. Speech intelligibility (SNR) when listening in noise. Each 1-dB improvement translates to a 9.6% intelligibility score increase.

The results for speech intelligibility, expressed as the signal-to-noise ratio (SNR) when listening in noise, are shown in Figure 3 for the subjects’ preferred algorithm. The graph shows the overall score for all fifteen subjects, for those having a pure-tone average (PTA) less than 45 dB, for those having a PTA greater than 45 dB, and a comparison to their previous hearing aids. The preferred algorithm provided improved listening in noise when compared with subjects’ own hearing aids, regardless of hearing level as expressed by the PTA.
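As a quick worked example (an illustration, not part of the study), the rule of thumb from Figure 3's caption – each 1-dB SNR improvement yielding about a 9.6% increase in intelligibility score – can be applied directly, assuming the linear relationship holds over the range of interest:

```python
PCT_PER_DB = 9.6  # % intelligibility gain per 1 dB of SNR improvement (Figure 3 caption)

def intelligibility_gain(snr_improvement_db: float) -> float:
    """Estimated intelligibility-score increase (percentage points),
    assuming the 9.6%-per-dB relationship is linear over this range."""
    return snr_improvement_db * PCT_PER_DB

# A 2-dB SNR advantage from a preferred algorithm would predict roughly
# a 19% higher intelligibility score:
print(f"{intelligibility_gain(2.0):.1f}%")  # -> 19.2%
```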

 

SNR Intelligibility Improvement by Algorithm

 

Figure 4. Average SNR (signal-to-noise ratio) improvement for each of the algorithms when all subjects are measured on each algorithm.

Does each algorithm provide equal SNR improvement when compared with the others? Figure 4 shows the SNR improvement when subjects were tested across all algorithms. The Clarity algorithm provided the greatest SNR improvement, and the Comfort algorithm the least. So, for the algorithms under investigation, it appears that they do not provide equal SNR improvement.


Summary

 

Experienced hearing aid wearers were provided with an open-platform system that allowed them to move back and forth among, and select between, four different but common algorithms (in effect, four different sets of hearing aid operational characteristics). These algorithms (Fidelity, Clarity, Comfort, and Equalizer) are not to be confused with selections among different listening environments (Quiet, Noise, Music, Restaurant, etc.), a common feature in current hearing aids; each algorithm is essentially a different hearing aid. The purpose of the study was to determine whether the algorithm recommended by an audiologist would be the same one the consumer preferred after a two-month period during which subjects wore the system (a BTE hearing aid and a remote algorithm selector). Results showed that there was essentially no agreement.

The investigation’s results raise lingering questions about hearing aid selection – what do we really know about it? Overall, results from this study show that:

  • Similar hearing thresholds are not satisfied by the same hearing aid signal processing scheme;
  • Appropriate hearing aid circuitry is not as accurately predicted as one might be led to believe;
  • Signal processing preferences of hearing aid users change over time;
  • Patients’/clients’/consumers’ signal processing preferences bear little resemblance to the recommendations made by audiologists;
  • Patients/clients/consumers are interested in participating in their hearing aid selection.

 

 

*This article was originally published at Wayne’s World on November 10, 2015. Title image courtesy USAR

 

 

Wayne Staab, PhD, is an internationally recognized authority on hearing aids. His professional career has included University teaching, hearing clinic work, hearing aid company management and sales, and extensive work with engineering in developing and bringing new technology and products to the discipline of hearing. Dr. Staab is the Founding Editor of Wayne’s World and served as the Editor-In-Chief of HHTM from 2015 to 2017.

Sound Localization – Time-of-Arrival Differences at the Ears

 

Time-of-arrival of sound at the two ears is an important contributor to sound localization. In this continuation of a series on binaural hearing, special attention is given to this second major contributor to localization. Last week’s post featured the interaural intensity difference (IID) as the other major contributor to sound localization by humans. Interaural means between the ears.

 

Interaural Time-of-Arrival Differences (ITD)

 

Figure 1. Sound from the front arrives at the two ears at essentially the same time and is heard with no time delay difference between the ears.

Normally, sounds generated in the environment travel through air and arrive at both ears. Sound travels quickly through air (roughly 343 m/s at room temperature), but it still takes a short time to reach the ears. This time element provides useful information to help the auditory system determine where a sound originates. If the sound comes directly from the front (Figure 1), the distance to the two ears is the same and the sound arrives, and is heard, at both ears at the same time. Whenever the sound source is equidistant from both ears, at either 0° or 180°, the ITD is equal to 0.

 

Figure 2. Interaural time-of-arrival difference (ITD) of a sound arriving at the two ears. In this case, the distance SL (sound left) is greater than SR (sound right), meaning that the sound waves reach the right ear (near ear) slightly sooner than the left ear (far ear).

 

On the other hand, when the sound comes from any direction other than directly front or rear, a disparity arises between the time-of-arrival of the sound at the right and left ears. The magnitude of the difference depends on the azimuth from which the sound is directed, but the interaural time-of-arrival difference (ITD) is greatest when the sound comes from 90° to one ear or the other (Figure 2). In the horizontal plane, this results in an approximately 0.6-msec delay in the signal arriving at the contralateral ear (Figure 3). The human auditory system is capable of responding to the very small ITDs that result from just a few degrees of azimuth displacement of the sound source.
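As a rough illustration (not from the original article), the classic Woodworth spherical-head approximation predicts the ITD from the head radius and the speed of sound. The values below (head radius ≈ 8.75 cm, c ≈ 343 m/s) are common textbook assumptions:

```python
import math

def woodworth_itd(azimuth_deg: float,
                  head_radius_m: float = 0.0875,
                  speed_of_sound: float = 343.0) -> float:
    """Approximate ITD (seconds) for a distant source, using the Woodworth
    spherical-head model: ITD = (r/c) * (theta + sin(theta)), where theta is
    the azimuth in radians (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for azimuth in (0, 15, 45, 90):
    print(f"{azimuth:3d} deg -> ITD = {woodworth_itd(azimuth) * 1000:.3f} ms")
# 90 deg gives ~0.66 ms, consistent with the ~0.6 msec figure cited above.
```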

 

Figure 3. Interaural time difference (ITD) in msec from 0° through 180° azimuth. The same ITD occurs for different horizontal azimuth positions around the head (blue dashed line values as examples). Note that ITDs can be the same whether the sound comes from the front or the rear, resulting in ambiguities as to where the sound is coming from and in frequent front/back localization errors.

 

Although the auditory system makes use of both IIDs (last week’s post) and ITDs, the latter are thought to play the more significant role in determining how far to the left or right a sound source lies. The ITD is fundamental to localizing sound sources at frequencies below 1500 Hz. Above 1500 Hz the cues become ambiguous, because the wavelength of sound becomes shorter than the path around the head between the two ears, so a given interaural phase difference can correspond to more than one source position. Both abrupt-onset and low-frequency sounds give rise to a usable ITD because sound reaches one ear before the other.
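A quick numeric check of that crossover (the speed of sound and the around-the-head path length below are assumed, typical values):

```python
SPEED_OF_SOUND  = 343.0  # m/s at room temperature
INTERAURAL_PATH = 0.23   # m, approximate around-the-head path between the ears

for freq_hz in (500, 1500, 4000):
    wavelength_m = SPEED_OF_SOUND / freq_hz
    ambiguous = wavelength_m < INTERAURAL_PATH
    print(f"{freq_hz:5d} Hz: wavelength = {wavelength_m * 100:5.1f} cm, "
          f"phase cue ambiguous: {ambiguous}")
# Near 1500 Hz the wavelength shrinks to roughly the interaural path length,
# so a single interaural phase difference can match several source positions.
```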

 

Duplex Theory of Sound Localization

 

The duplex theory of sound localization was developed by Lord Rayleigh in work spanning 1877 to 1907, and it takes into account both interaural intensity differences (IIDs) and interaural time differences (ITDs). The duplex theory states that ITDs are used to localize low-frequency sounds, while IIDs are used to localize high-frequency sounds (as explained in the previous post).

Localization for human adults is not good between 2,000 and 4,000 Hz, where sensitivity to both ITDs and IIDs is poor. Additionally, as shown in Figure 3 of this post and in Figure 3 of the previous post, a particular ITD or IID can arise from more than one azimuth location in space, resulting in ambiguities, especially front/back errors. This has led to what some have identified as the “cone of confusion” (Figure 4).

The pinna shadows sounds that originate from behind the listener: the amplitude of sounds above 2000 Hz coming from the back is about 2-3 dB lower than that of sounds originating from the front. This spectral difference is accepted as an additional cue that aids both elevation judgments and front/back discrimination[1].

 

Figure 4. Cone of confusion that can result from simple interaural cues. Fortunately, a normally functioning human auditory system can usually resolve such cone-of-confusion conditions.

 

Cone of confusion – Simple interaural cues alone cannot indicate whether a sound comes from the front, back, above, or below. For example, a sound occurring at 45° to one’s left and to the front will have the same ITD as one occurring 45° to the left and to the rear. The same holds true on the right side.

To add to the confusion, this occurs for sounds from above and below as well. In other words, a cone of confusion can occur at any position between directly left and directly right of a listener’s head. Fortunately, a normally functioning human auditory system can usually resolve such cone-of-confusion conditions.
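To see the front/back ambiguity numerically (again an illustration, not from the original article), the simplest ITD approximation, ITD ≈ (d/c)·sin(azimuth), returns identical values for mirrored front/rear positions:

```python
import math

def simple_itd(azimuth_deg: float,
               ear_distance_m: float = 0.175,
               speed_of_sound: float = 343.0) -> float:
    """Simplest ITD approximation: straight-line path difference
    d * sin(azimuth) divided by the speed of sound."""
    return (ear_distance_m / speed_of_sound) * math.sin(math.radians(azimuth_deg))

front = simple_itd(45)   # 45 deg to the left, front
rear  = simple_itd(135)  # mirrored position: 45 deg to the left, rear
print(f"front: {front * 1e3:.3f} ms, rear: {rear * 1e3:.3f} ms")
# Both print ~0.361 ms: the ITD alone cannot distinguish front from rear.
```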

 

Improving Localization

 

Turning one’s head can help reduce localization errors. However, this takes about 500 msec, considerable time by neural standards. The pinna allows sound to “bounce around” before entering the ear canal. How much of this passive filtering occurs depends on the direction from which the sound originates and the frequencies involved, and it contributes primarily to vertical sound localization.

 

Can Localization Occur if the Two Ears Are Dissimilar?

 

Altschuler and Comalli[2] commented that even when the two ears remain “unequal,” yet aided, as long as the perceived sound is intense enough to stimulate both ears, the important cues for localization (time, phase, and intensity) are perceived and used in a positive way.

 

Quick Office Test of Localization?

 

The following procedure was developed a number of years ago by Comalli and Altschuler[3]. They suggested purchasing an inexpensive small loudspeaker that can be moved around the listener in a one-yard circle. With eyes closed, the listener is asked to locate the source of the sound (moving from right to left or vice versa) by pointing, or by saying “right,” “left,” or “center.” Narrow bands of noise can be used as the signal source. The “center” for normal listeners is “12 o’clock” ±10 degrees. They recommended that hearing aids yielding a “center” wider than this be excluded in favor of those more closely approximating the “normal” standard. This might be a good test to use with some of the directional-microphone hearing aids, adaptive or fixed.

 

Implications for Hearing Aids

 

Binaural hearing aid fittings are a must for maximizing localization ability, even though hearing-impaired individuals have poorer localization ability than normal-hearing persons[4]. When fitted with hearing aids and tested at MCL (most comfortable loudness), listeners’ localization is poorer than their unaided localization.

Interestingly, localization can improve even when word recognition is poor. And a listener is more comfortable when the speaker can be located accurately.

Perhaps, following or concurrent with audibility, the primary goal of a hearing aid fitting should be to provide localization. Could localization, in the normal course of events, be a more significant goal than improvement in word recognition scores? An interesting thought.

 

*This article was originally published at Wayne’s World on March 2, 2015. Title image courtesy Wikimedia Commons

 

 
