New Method for Predicting Speech Intelligibility Holds Promise in Hearing Aid Design

Brian Taylor
April 6, 2017

Since at least 1950, when Fletcher and Galt published a scientific paper on the Articulation Index (AI), hearing care professionals have examined the relationship between frequency-specific audibility and speech intelligibility.

As most know, the current standard for quantifying this relationship is the Speech Intelligibility Index (SII). Although the SII has been around for about 20 years, there are other methods for examining the relationship between the audibility and intelligibility of speech, both in quiet and in background noise. Moreover, engineers and acousticians at hearing aid manufacturers have developed their own sophisticated algorithms that attempt to identify background noise and reduce it, while simultaneously amplifying sounds recognized as speech.
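For readers who want the gist of the calculation: the SII is essentially a weighted sum of band audibilities, with each frequency band contributing in proportion to its importance for speech. The Python sketch below illustrates that structure only; the bands, weights, and the 30-dB audibility ramp are simplified stand-ins, not the ANSI S3.5 tables.

```python
import numpy as np

# Simplified SII-style calculation: SII ~ sum_i I_i * A_i, where I_i is the
# importance weight of frequency band i and A_i (0..1) is its audibility.
# The bands and weights below are illustrative, not the ANSI S3.5 tables.
BANDS_HZ = [250, 500, 1000, 2000, 4000]       # documents which bands the weights refer to
IMPORTANCE = [0.10, 0.20, 0.25, 0.25, 0.20]   # illustrative; sums to 1.0

def band_audibility(speech_db, noise_db):
    """Audibility grows linearly over a 30 dB range above the effective noise floor."""
    return float(np.clip((speech_db - noise_db) / 30.0, 0.0, 1.0))

def simplified_sii(speech_levels_db, noise_levels_db):
    """Weighted sum of per-band audibilities."""
    return sum(w * band_audibility(s, n)
               for w, s, n in zip(IMPORTANCE, speech_levels_db, noise_levels_db))

# Example: speech well above the noise floor in the mid frequencies
print(simplified_sii([55, 60, 55, 50, 45], [40, 40, 40, 40, 40]))
```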

Recently, the Cognitive Signal Processing research group at Ruhr-Universität Bochum (RUB) in Germany developed a method for predicting speech intelligibility in noisy listening situations. According to one report, their results are more precise than those obtained with current standard methods and could be used to improve the performance of hearing aids in background noise.


Improving Hearing Aid Performance in Background Noise


The research was carried out in the course of the EU-funded project “Improved Communication through Applied Hearing Research.” To date, the standard approach for predicting speech intelligibility in hearing aid design has been the Short-Time Objective Intelligibility (STOI) measure. The STOI method requires a clean original signal, that is, an audio track recorded without any background noise. Speech intelligibility is then estimated from the differences between the original and the processed, degraded sound.
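To illustrate that intrusive (reference-based) principle, the sketch below compares short-time spectra of the clean and degraded signals and averages their correlation. It is a simplified illustration only, not the published STOI algorithm, which operates on one-third-octave bands over longer analysis segments.

```python
import numpy as np

def intrusive_intelligibility(clean, degraded, frame_len=256, hop=128):
    """Mean per-frame correlation between the magnitude spectra of a clean
    reference and a degraded signal (higher = more intelligible)."""
    window = np.hanning(frame_len)
    scores = []
    for start in range(0, min(len(clean), len(degraded)) - frame_len, hop):
        c = np.abs(np.fft.rfft(window * clean[start:start + frame_len]))
        d = np.abs(np.fft.rfft(window * degraded[start:start + frame_len]))
        c, d = c - c.mean(), d - d.mean()
        denom = np.linalg.norm(c) * np.linalg.norm(d)
        if denom > 0.0:
            scores.append(float(np.dot(c, d) / denom))  # Pearson correlation
    return float(np.mean(scores)) if scores else 0.0

# Example: a noisy copy of the reference scores lower than a perfect copy
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noisy = speech + 0.5 * rng.standard_normal(16000)
print(intrusive_intelligibility(speech, speech))  # ~1.0
print(intrusive_intelligibility(speech, noisy))   # noticeably lower
```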


The researchers at RUB have found a way to predict intelligibility that is more precise than the STOI method, yet requires no clean reference signal. Consequently, their findings might help shorten testing during the product development phase of hearing aids.
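The article does not describe the internals of the RUB method, so the sketch below should be read only as a generic illustration of reference-free (non-intrusive) prediction: it infers a signal-to-noise ratio from the noisy signal alone and maps it to a score between 0 and 1. The percentile choices and the SNR-to-score mapping are assumptions made for the example.

```python
import numpy as np

def blind_intelligibility_proxy(noisy, frame_len=256, hop=128):
    """Reference-free proxy: treat the quietest frames as noise and the
    loudest as speech-plus-noise, then map the implied SNR to [0, 1]."""
    energies = np.array([np.mean(noisy[i:i + frame_len] ** 2) + 1e-12
                         for i in range(0, len(noisy) - frame_len, hop)])
    noise_floor = np.percentile(energies, 10)   # assumption: 10th percentile ~ noise
    speech_peak = np.percentile(energies, 90)   # assumption: 90th percentile ~ speech
    snr_db = 10.0 * np.log10(speech_peak / noise_floor)
    # Assumed linear mapping: score 0 at -5 dB SNR, score 1 at +20 dB SNR
    return float(np.clip((snr_db + 5.0) / 25.0, 0.0, 1.0))
```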


Findings by the researchers may provide methods for engineering a more intelligent hearing aid that could automatically recognize the wearer’s current surroundings and situation. If, for example, a hearing aid user moves from a quiet street into a busy restaurant, the hearing aid would register an increase in background noise. Accordingly, it would filter out the ambient noise, without impairing the quality of the speech signal, more effectively than what currently exists in modern hearing aids.
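As a toy sketch of that adaptive behavior, the code below estimates the background level from the quietest frames and switches among noise-reduction settings accordingly. The thresholds and program names are hypothetical and not drawn from the article or any product.

```python
import numpy as np

def background_level_db(signal, frame_len=256, hop=128):
    """Noise-floor estimate in dB (relative to full scale) from the quietest frames."""
    energies = [np.mean(signal[i:i + frame_len] ** 2) + 1e-12
                for i in range(0, len(signal) - frame_len, hop)]
    return float(10.0 * np.log10(np.percentile(energies, 10)))

def choose_program(level_db):
    """Pick a noise-reduction setting from the estimated background level."""
    if level_db < -50.0:     # e.g., a quiet street
        return "nr_off"
    if level_db < -30.0:     # moderate background noise
        return "nr_moderate"
    return "nr_strong"       # e.g., a busy restaurant
```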



  1. I am pleased to note improvements in speech intelligibility research. I offer the following:
    As indicated, speech is a cognitive response. Understanding of speech is affected by overlapping circuit noise in the neural channels due to environmental noise, and by cochlear (sensory) limitations. Speech is a very complex transmission, almost like coding. It cannot be masked except by higher-energy interference. But the brain only requires speech cues for understanding, and this is evident in persons with profound SNHL. With neurogenesis and new dendritic connections at the dentate gyrus, people with profound losses can use cues well enough to understand conversations. Because understanding happens in ‘sentences’, word lists don’t make for better conclusions.
