H. Christopher Schweitzer, Ph.D.

Dr. Schweitzer is Director, HEAR4-U International and Technical Consultant for IMH Corporation.


In a previous post, Dr. Schweitzer discussed the accuracy and reliability of hearing threshold measurements, describing the inaccuracies and problems of current methods. Because hearing aids are fitted on the basis of this poor baseline information, other tests and procedures have been introduced to better determine in situ hearing aid performance. That we have still not achieved the desired results is explained further in this post, which looks at the evolution of electroacoustic measures of hearing aids and what they provide.  WJS, Editor


Accuracy and Precision

Tests of hearing aid performance have properly relied on engineers and their well-developed desire for both accuracy and precision. Control of level and frequency is specified in standards such as ANSI S3.22-2014, which provide precise tolerances for small deviations of both. Clinicians with well-equipped offices are able to conduct tests similar to those required of manufacturers. But it has long been understood that there are multiple discrepancies between the coupling and impedance of an individual’s ear and those properties in a controlled standard test device, such as a 2cc ear simulator. This naturally led to the development of instrumentation that produced in situ versions of these tests, the proliferation of Real Ear Measures (REMs), and new standardization protocols for insertion gain and outputs (ANSI S3.46-2013). However, while REMs have been undeniably important to the progress of clinical management of hearing aids, a case for looking past them can be made.


Looking Past REMs

First, it should be evident that measuring the sound pressure level at the tympanic membrane is still a measure of physical acoustics, not perceptual experience. It introduces an individual’s ‘organic coupler,’ which, to be sure, is preferable to a standardized machined one. Coupler-to-ear differences are unquestionably valuable to consider, whether by direct measure and calculation or by average estimates, especially for children and low-functioning hearing aid users. Some REM approaches may even incorporate body baffle effects, adding detail to what telecom engineers call the orthotelephonic response, i.e., the change in a sound pattern as it arrives at the eardrum (its transfer function). To their credit, some systems also use recorded speech passages in an attempt to relate the audiogram to the assumed audibility of those more relevant signals. But extending any of these measures to preferred listening levels, and to how best to configure the various hearing aid parameters, is still a winding and rather nebulous path. Applying pure tone audiograms with individual-ear REMs of any signal type still requires numerous inferential leaps and assumptions.
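The coupler-to-ear correction mentioned above can be sketched in a few lines. This is a minimal illustration of the general idea of adding a per-band real-ear-to-coupler difference (RECD) to 2cc coupler levels to estimate eardrum SPL; the numeric values below are illustrative placeholders, not clinical norms, and the function name is hypothetical.

```python
# Sketch: estimating eardrum SPL from 2cc coupler measurements by adding
# per-frequency real-ear-to-coupler difference (RECD) corrections.
# All dB values here are illustrative placeholders, NOT clinical norms.

COUPLER_SPL = {250: 92.0, 500: 95.0, 1000: 98.0, 2000: 101.0, 4000: 97.0}  # dB SPL in 2cc coupler
AVG_RECD = {250: 2.0, 500: 4.0, 1000: 5.0, 2000: 7.0, 4000: 10.0}          # dB, illustrative averages

def estimate_eardrum_spl(coupler_spl, recd):
    """Add the per-band RECD to coupler SPL to estimate real-ear SPL."""
    return {f: coupler_spl[f] + recd.get(f, 0.0) for f in coupler_spl}

real_ear = estimate_eardrum_spl(COUPLER_SPL, AVG_RECD)
for freq in sorted(real_ear):
    print(f"{freq:>5} Hz: {real_ear[freq]:.1f} dB SPL at eardrum")
```

Note that this per-band addition is still pure physical acoustics, which is exactly the limitation the paragraph above describes.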

The familiar flow pattern of the standard REM approach is illustrated in Figure 1. The wiggly arrows at the lower right indicate that attempting to use the prior measures to hit an uncertain “Target” in the listener’s mind is presumptive. The ‘target’ is a sound pattern that matches an individual’s personal preference for Comfortably Clear Listening levels (CLL) and a tonal bias for speech (complex signals) in normal conditions (binaural, in natural acoustic environmental space). Such settings presumably can only be known to the listener, although the advance of brain mapping instrumentation may someday close that gap. Even notable efforts by various manufacturers to forecast the presumed audibility of various speech components in insertion gain displays still rely on assumptions of audibility based on sound pressure levels at uncorrelated eardrum measurement points, and arguably still come up short of their intended use: to deliver a listener’s perceptual preferences into the basic settings of their hearing aids.

Figure 1. The typical hearing aid ‘validation’ flow pattern is precise, but by leaving out numerous aspects of the listener’s auditory perceptual operations, it lacks the accuracy to hit the target of Comfortably Clear Listening settings for speech in acoustically diverse conditions that normally rely on rich binaural processing cues. The ultimate “Target” includes perceptual nuances such as preferred tonal balance and sound quality.


Numerous appeals to many of these arguments have been made in past years,1,2,3,4 but a clinically expedient and useful alternative has been lacking. One comprehensive approach did see moderate clinical use in Europe,5,6,7 but extensive equipment requirements and rapid changes in hearing aid properties blunted its appeal.


The next part of this paper will describe a simple, but robust, tablet application that introduces an easy way to move beyond Real Ear to a Real Hear method of hearing aid verification.  



The introduction of REMs was an important and commendable step towards closing the gaps between hearing aid fittings and listener preferences. However, the assumption that proper verification of hearing aid fittings demands their use deserves critical discussion. As with all progress, there comes a time to honestly examine the limitations of familiar procedures, and to consider what more can be done to advance the delivery of professional services. Whereas these comments began by contrasting ‘accuracy’ with ‘precision,’ the case will be made that in ‘validating’ hearing aid fittings, the contrast between message reception and acoustic signal delivery must also be considered.



  1. Cox, RM & McDaniel, DM (1989). Development of the Speech Intelligibility Rating (SIR) Test for hearing aid comparisons. J Speech Hear Res, 32, 347-352.
  2. Van Tasell, DJ (1993). Hearing loss, speech, and hearing aids. J Speech Hear Res, 36(2), 228-244.
  3. Schweitzer, HC, Mortz, M & Vaughn, N (1999). Perhaps not by prescription, but by perception. High Performance Hearing Solutions (Hearing Review Supplement), 58-62.
  4. Schweitzer, HC & Donnelly, R (2013). Why it’s time to retire the audiogram (for hearing aid fittings). Hearing Health Matters.
  5. Schweitzer, HC & Haubold, J (2000). Fitting for an ‘Auditory Life’ (part 1). Hearing Review, 7(9), 42-51, 76.
  6. Haubold, J & Schweitzer, HC (2000). Closing the gaps on hearing aid acoustical satisfaction. Audiology Today, 12(1), 18-19.
  7. Schweitzer, HC & Haubold, J (2000). Fitting for an auditory life (part 2). Hearing Review, 7(10), 68-88.

by Christopher Schweitzer, Ph.D.


H. Christopher Schweitzer, PhD has a long history of research, development, and clinical activity related to hearing and hearing aids, and continues to own the Family Hearing Centers of Colorado. He is a frequent contributor to HHTM (Hearing Health and Technology Matters).



Figure 1. High in precision, low in accuracy.

Students in Signals and Systems learn that there is an important difference between ‘precision’ and ‘accuracy’ in the engineering-related sciences. If an archer sends multiple consecutive arrows to the upper left of the red bullseye, he receives high marks for precision (and its cousin, reliability), but not for accuracy, if the goal is to hit the center of the target (Figure 1). While there is a need for both, the user of a system can be misled by a sufficiency of one and a lack of the other. Such is the case with the measurement of real ears.
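The archery analogy can be put in numbers. In this minimal sketch, “accuracy” is the distance from the mean landing point to the bullseye, and “precision” is how tightly the shots cluster around their own mean; the coordinates are made up for illustration.

```python
import math

# Sketch of the precision-vs-accuracy distinction: four arrows all landing
# in a tight cluster to the upper left of the bullseye at (0, 0).
arrows = [(-3.1, 2.0), (-2.9, 2.2), (-3.0, 1.9), (-3.2, 2.1)]

def mean_point(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def accuracy_error(points, target=(0.0, 0.0)):
    """Distance from the mean landing point to the target (lower = more accurate)."""
    mx, my = mean_point(points)
    return math.hypot(mx - target[0], my - target[1])

def precision_spread(points):
    """RMS distance of shots from their own mean (lower = more precise)."""
    mx, my = mean_point(points)
    return math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2 for x, y in points) / len(points))

print(f"accuracy error:   {accuracy_error(arrows):.2f}")    # large: far from bullseye
print(f"precision spread: {precision_spread(arrows):.2f}")  # small: tight cluster
```

A large accuracy error alongside a small precision spread is exactly the “high precision, low accuracy” pattern of Figure 1.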


In the case of Real Ear Measures (REMs), it can be argued that the approach is rich in the virtue of precision but poor in accuracy when used as a tool to ‘verify’ or ‘validate’ hearing aid performance for individuals. Blistering arguments are often made about how unprofessional hearing aid fittings are when accomplished without REMs. Indeed, entire careers have been attached to that premise, as textbooks, articles, and graduate degrees have focused on the importance of carefully obtained, and presumably vital, REMs. But this paper puts forth a case that it is well past time to go beyond REMs, and to pursue simple Real Hear measures for more relevant (accurate) listener benefit.


It Starts with the Audiogram

The audiogram’s familiar graphic portrayal of a listener’s barely audible sensitivity pattern for selected frequencies in unnaturally isolated ears is the nearly universal ‘go to’ starting point for most hearing aid fittings. The audiogram’s historic value was always as a robust and reliable tool for differential diagnosis and for monitoring medically related progressive changes in hearing impairment. For that purpose, carefully obtained audiometric data can serve well with both accuracy and precision. But recall that those values represent minimum sensitivity for single tones with controlled durations presented to unnaturally isolated ears. Neurologically, the procedure may represent activation of several thousand peripheral neurons. To then apply these data to make predictions about comfortable, clear listening levels for complex signals, which involve hundreds of millions of cortical neurons along with multiple binaural interactions to extract rapidly updated, time-varying acoustic messages, is a massive stretch of confidence. But, of course, the pure-tone audiogram done on isolated ears in abnormal listening conditions is the basis for hearing aid fittings in offices around the world. Even the use of speech materials, while admittedly more relevant for the hearing aid user, still generally involves unnaturally isolated ears under contrived circumstances. To the credit of many researchers, valiant attempts have been made to launch those clinical arrows towards the ‘target’ of comfortably clear listening. Still, there remains a need to scrutinize the premises and outcomes of present approaches with a view towards greater accuracy and higher listener satisfaction.  Editor’s note: The “problem” with basic audiometric testing has been a revisited topic in HHTM publications: A, B, C, D.



Consider the important notion of omega, Ω. In psychophysical research it is arguably the focal point of validating a measurement, as expressed in the simple formula [ Ω = ƒ(S) ]. To paraphrase Yost1, this is the ‘gold standard’ of psychophysics research, of which clinical audiology is essentially a professional subset. The formula simply states that a behavioral measure (Ω), such as a threshold audiogram, has a functional relationship to the stimuli (S). This relationship between the physical properties of acoustic stimuli and the behavior reported in standard audiometric tests is admittedly precise, given all the standardized care applied to control ambient noise, signal levels, and their construction. So, once again, precision is generally not an issue of concern. But these measures of mostly peripheral reception of simplistic signals are obtained at a troubling distance from the acquisition of time-varying spoken streams of messages. They are unfulfilling at best as a means of working out the ‘appropriate’ pattern of amplification details, such as the slope of the frequency response, which often flattens as loudness increases above threshold. If the goal is to reduce the burden on the listener to interpret the brief burps of sound that convey clumps of meaning in spoken conversations, the “probe” of information needs to move above the tympanic membrane. While REMs are not behavioral measures, and hence fall outside the omega assumptions, they are generally coupled to audiograms in fitting protocols.
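The relation Ω = ƒ(S) can be made concrete with a common modelling choice from psychophysics: a logistic psychometric function mapping stimulus level to detection probability. This is a sketch, not a standard; the logistic form, threshold, and slope values below are illustrative assumptions.

```python
import math

# Sketch of omega = f(S): a behavioral measure (probability of detection)
# expressed as a function of the stimulus (level in dB). The logistic form
# and its parameters are a common modelling choice, not a clinical standard.

def detection_probability(level_db, threshold_db=20.0, slope=0.5):
    """Logistic psychometric function: P(detect) as a function of level (dB)."""
    return 1.0 / (1.0 + math.exp(-slope * (level_db - threshold_db)))

# The clinical 'threshold' is just one point on this function (P = 0.5);
# the audiogram records that single point and discards the rest of the curve.
for level in (10, 15, 20, 25, 30):
    print(f"{level} dB -> P(detect) = {detection_probability(level):.2f}")
```

The point of the sketch is that the audiogram collapses an entire behavioral function down to one number per frequency, which is part of the inferential distance the text describes.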


Consider Also Duration

The standard audiometric signal is designed to be presented for one second, or sometimes two, with controlled rise and fall times. Audiometric ‘pulsed tones,’ which are generally easier to hear at threshold for most people, are typically half a second, or sometimes as brief as 0.2 second (200 msec). The 200 msec minimum signal length relates to the temporal integration properties of the auditory system: signals shorter than 100 to 200 milliseconds require higher intensities to be perceived, as Zwislocki2 and others reported many years ago. Figure 2 is a reminder of temporal integration, showing how very brief signals require greater intensity to achieve perceptual equality with those that reach full integration of energy after approximately 100 msec or more, depending on frequency. Does it matter, one might ask? Given that many speech elements have durations of less than 50 msec, the answer would seem to be yes.

Figure 2. Loudness increases as brief signals are lengthened up to approximately 100 msec, as reported by Zwislocki2 and others. Note how the loudness of very brief signals, in the range of some consonant bursts, may be as much as 20 dB less than that of longer signals (such as those used in audiometry) for which loudness has fully integrated.


While vowel components of speech are generally several hundred milliseconds in duration, long enough for full integration of loudness, many speech plosives are much shorter, and they can reasonably be presumed to require higher sound levels to achieve audibility. So, while a 3 kHz audiometric tone of 500 msec may properly represent a threshold for that particular signal construction, it is entirely possible that a significant portion of the plosive /t/ or /k/ energy at 3 kHz is not audible, given the rapidly spoken duration of less than 10 msec in a transient phoneme3.
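The temporal integration argument above can be sketched with the textbook energy-integration approximation, under which threshold rises by roughly 10·log10(T_crit/T) dB for signals shorter than a critical duration. The 200 msec critical duration and the 10 dB/decade rule are simplifying assumptions for illustration, not an exact fit to any individual ear.

```python
import math

# Sketch of a simple energy-integration model of temporal summation:
# below an assumed critical duration, threshold rises as the signal shortens.
# The 10*log10 rule and the 200 ms critical duration are textbook
# approximations used here for illustration only.

def threshold_elevation_db(duration_ms, critical_ms=200.0):
    """Extra level (dB) a brief tone needs to reach threshold, vs. a long tone."""
    if duration_ms >= critical_ms:
        return 0.0
    return 10.0 * math.log10(critical_ms / duration_ms)

# Compare a long audiometric tone with progressively briefer signals,
# down to plosive-burst durations:
for dur in (500, 200, 50, 10):
    print(f"{dur:>4} ms signal: +{threshold_elevation_db(dur):.1f} dB to reach threshold")
```

Under these assumptions, a 10 msec burst needs roughly 13 dB more level than a long audiometric tone at the same frequency, which is why a threshold measured with a 500 msec tone can overstate the audibility of a brief plosive.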


And Frequency Modulations

Since seminal work at Haskins Laboratories in the 1950s,4,5 it has been well understood that many crucial elements that differentiate speech sounds change in frequency over short periods of time, i.e., they are frequency modulated. Indeed, Eimas and his colleagues6 were among several groups that showed that newborn babies appear to be ‘pre-wired’ to hear the brief FM signatures that differentiate, for example, /ba/ from /da/. Recent work in the neurosciences shows much more vigorous electrical activity in auditory regions of the human brain for FM signals than for simpler, non-modulated acoustic signals.8,9 Yet such signals are notably absent from audiometric tests; apart from the occasional use of ‘warble tones’ in some sound field measures, they have no place on a standard audiogram.

Figure 3. The 20-40 msec second-formant (F2) transitions that provide the acoustic distinctions between three consonants paired with the same vowel. After Delattre et al.4

As a reminder to the reader, Figure 3 illustrates the well-studied formant transitions (frequency modulations over a brief period of time) that distinguish the spoken syllables /ba/, /ga/, and /da/, as originally reported by the Haskins Labs group.
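A formant transition of the kind shown in Figure 3 is, acoustically, a short linear frequency sweep. The sketch below renders one as a frequency-modulated sinusoid; the start and end frequencies are illustrative stand-ins for a rising F2 transition, not measured Haskins values.

```python
import math

# Sketch: a 40 ms linear second-formant (F2) transition rendered as a
# frequency-modulated sinusoid. The 1100 -> 1800 Hz sweep is an
# illustrative stand-in for a rising F2 cue, not a measured value.

SAMPLE_RATE = 16_000  # Hz

def linear_chirp(f_start, f_end, duration_s, rate=SAMPLE_RATE):
    """Generate samples of a tone whose frequency sweeps linearly over time."""
    n = int(duration_s * rate)
    k = (f_end - f_start) / duration_s  # sweep rate in Hz per second
    samples = []
    for i in range(n):
        t = i / rate
        # Instantaneous phase of a linear sweep: 2*pi*(f0*t + 0.5*k*t^2)
        phase = 2 * math.pi * (f_start * t + 0.5 * k * t * t)
        samples.append(math.sin(phase))
    return samples

# A rising transition over 40 ms: 640 samples at 16 kHz.
transition = linear_chirp(1100.0, 1800.0, 0.040)
print(len(transition))  # 640
```

A steady audiometric tone is the degenerate case f_start == f_end; everything that distinguishes /ba/ from /da/ lives in the sweep term that the audiogram never presents.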


These properties of duration and modulation in the auditory system raise a simple question: how can clinicians characterize a listener’s struggles with hearing speech using signals that do not represent the way the auditory system is organized to receive speech? Relying on such tests is a fundamental inadequacy in a professional approach meant to alleviate the stress of spoken communication for hearing impaired individuals. There must be a willful acknowledgement that hearing for tones is not the same as hearing for speech at the neurophysiological level. Extracting ‘meaning’ from the patterned pulses of speech is immensely more complex than reporting the audibility of barely audible sinusoids. It is assumed that hearing professionals know these facts, but they have had no convenient, established alternative to the conventional pure tone audiogram for verifying hearing aids.


Next week:  Evolution of Electroacoustic Measures of Hearing Aids



  1. Yost, WA, Popper, AN & Fay, RR (1993). Human Psychophysics. Springer-Verlag, New York. Chapt 1, Psychoacoustics, 1-12.
  2. Zwislocki, J (1960). Theory of temporal auditory summation. J. Acoust. Soc. Am., 32, 1046.
  3. Wieringen, A & Pols, L (2006). Perception of highly dynamic properties of speech. Chapt 2 in Greenberg, S & Ainsworth, W (eds), Listening to Speech: An Auditory Perspective. Lawrence Erlbaum, Mahwah, NJ, 21-38.
  4. Delattre, PC, Liberman, AM & Cooper, FS (1955). Acoustic loci and transitional cues for consonants. J. Acoust. Soc. Am., 27, 769-773.
  5. Liberman, AM, Harris, KS, Hoffman, HS & Griffith, BC (1957). The discrimination of speech sounds within and across phoneme boundaries. J. Exp. Psychol., 54, 358-368.
  6. Eimas, PD, Siqueland, ER, Jusczyk, P & Vigorito, J (1971). Speech perception in infants. Science, 171(3968), 303-306.
  7. Miller, CL & Morse, PA (1976). The “heart” of categorical speech discrimination in young infants. J Speech Hear Res, 19(3), 578-589.
  8. Hart, HC, Palmer, AR & Hall, DA (2003). Amplitude and frequency-modulated stimuli activate common regions of human auditory cortex. Cerebral Cortex, 13(7), 773-781.
  9. Okamoto, H & Kakigi, R (2015). Encoding of frequency-modulation (FM) rates in human auditory cortex. Scientific Reports, 5, Article No. 18143.