A Multisystem Approach
Over time, many professionals have realized that both the bottom-up and top-down approaches are important at different times. These professionals may view auditory processing as involving both factors while adding other factors felt to be even more important than the mere bottom-up or top-down processes discussed above. This author has developed such an approach, called the Lucker MultiSystem Integrative Approach (LMSIA), and has published this information previously (Hawkins & Lucker, 2017).
The underlying theme of the LMSIA model is that both the auditory system (bottom-up processes) and various systems in the brain (including the language and cognitive centers) work in an integrative way (mixed together in harmony) to process what we hear so that, in the end, we understand what we have heard. At times, bottom-up processing is most important. At other times, top-down processing is most important. Most of the time, however, they work together, integrated, to successfully process what we hear. Thus, evaluation must look at all systems involved as well as the integration of these systems, and at how a breakdown in auditory processing can be due to deficits in one or more systems or in the integration between them.
Therefore, treatment must focus on improving all systems that are malfunctioning as well as addressing problems due to a lack of appropriate integration between these systems.
Understanding the Multisystem Approach
To better understand this approach, consider the following situations. Imagine you are sitting at home in a room where you can hear knocking at the front door. As you sit in that room, you hear "knock, knock, knock." As you hear the sound, your auditory and cognitive systems identify that the sounds came from the direction of the front door and that the pattern of the knocks matches what your memory system has stored as someone knocking. You had no idea before hearing the sound that there would be any sound coming from the front door. You only identify the knocks as coming from the door (having higher cortical centers focus on the sound received) after you have processed the incoming knocking, which first enters your ears and is then transmitted to the higher cortical centers (bottom-up).
As soon as your higher centers (top) recognize what was transmitted from the bottom up, the cortical centers start processing what the sound might mean. You might have been expecting someone, and immediately after your cortical centers process the incoming knocking, you think, "Oh, wow, that is the person I am expecting." You might then look at your watch (visual processing) to verify whether this could be the person who is supposed to come at 2 pm. Thus, your higher-level cortical centers check what they received from the bottom up against what they think the sound might be. The top-down centers decide to check, and you call out, as you walk to the door, "Who is it?" Your higher (top) centers have set up two possible decisions. If the voice sounds like the person you expect, and the person says the name you were expecting, your decision is that the person at the door is the person you were expecting. The second choice is that, if the voice is unrecognized and the person says a name that does not match the person you were expecting, the top centers rethink and change their decision. In this second case, the decision is to look out the window or through the peephole to see who is at the door. Thus, as you can see, bottom-up processing started the process while top-down processing became involved, and they worked together to identify who is at the door.
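The door-knock scenario above can be sketched, very loosely, as a verification loop in code. This is only a toy illustration of the idea that a bottom-up signal is checked top-down against a stored expectation; the function, data, and decision rule are all this editor's assumptions, not part of the LMSIA model itself.

```python
# Toy sketch: a bottom-up signal (voice quality, spoken name) is checked
# top-down against a stored expectation. All names and data are illustrative.

def identify_visitor(voice, spoken_name, expected):
    """Return the decision the higher (top) centers would make."""
    if voice == expected["voice"] and spoken_name == expected["name"]:
        return "open the door"          # bottom-up input confirms the hypothesis
    return "look through the peephole"  # mismatch: gather more bottom-up evidence

expected = {"voice": "familiar", "name": "Alex"}
print(identify_visitor("familiar", "Alex", expected))
print(identify_visitor("unfamiliar", "Sam", expected))
```

The point of the sketch is only that neither branch works alone: the comparison needs both the incoming (bottom-up) signal and the stored (top-down) expectation.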
Now consider a second situation: listening in a very noisy restaurant. When the person you are with starts discussing a topic for which you have no initial clue, you use a great deal of bottom-up processing to get the first clues to identify the topic. The bottom-up signal is not merely from the auditory system but also from the visual system. Pointing, showing something, body language, and so on provide very important bottom-up signals that allow the higher cortical systems (such as our language system and cognitive system) to begin their top-down processing. If we get clear auditory and visual signals, we switch from primarily bottom-up to primarily top-down processing. But once we have guessed what the topic might be, we validate our decision against the next bottom-up input we receive.
With all the noise in the restaurant, the incoming (bottom-up) signal is distorted. We rely heavily on bottom-up processing to verify the words we think we hear as the person is speaking. But language cues and our language knowledge stored in memory (another system in the brain) help us comprehend the message. For example, if we hear someone say something and we think it was "…where the great giraffes are stored," we would use our higher-level language center to determine whether those words fit appropriately in the sentence, which they do. We would then use our higher-level cognitive center to determine whether that makes sense, and the answer would be "no." Once the decision is "no," we would engage other cognitive, memory, and language areas of the cortex to try to figure out what it might really be instead of "great giraffes." If these higher-level centers figure, "Wait, I have heard that song before, and it says something about the grapes of wrath," our higher-level cortical system would review our short-term memory store of the sentence, replace "great giraffes" with "grapes of wrath," and then realize what was really said. But if we still can't figure it out, we might ask, "What are the 'great giraffes'?" At that point, we would put more emphasis on our bottom-up processing to check whether the incoming response sounds like "great giraffes" or "grapes of wrath." As soon as we process "grapes," our higher cortical centers could process the whole message, and we would not even need to hear "of wrath."
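The "great giraffes" repair described above can likewise be sketched as a tiny routine: a cognitive plausibility check rejects the heard phrase, and a sounds-alike search of memory supplies a replacement. Everything here is an illustrative assumption, including the crude stand-in for "sounds alike" (matching the first letter); real phonological similarity is far richer.

```python
# Toy sketch: top-down repair of a distorted bottom-up signal.
# A plausibility check rejects the heard phrase; memory supplies a
# similar-sounding replacement. All data and rules are illustrative.

def repair_phrase(heard, memory, makes_sense):
    if makes_sense(heard):
        return heard                    # bottom-up signal accepted as heard
    for candidate in memory:
        if candidate[0] == heard[0]:    # crude stand-in for "sounds alike"
            return candidate            # top-down replacement from memory
    return heard                        # no repair found; ask the speaker

memory = ["grapes of wrath", "plates of bronze"]
makes_sense = lambda phrase: "giraffes" not in phrase
print(repair_phrase("great giraffes", memory, makes_sense))   # repaired from memory
print(repair_phrase("grapes of wrath", memory, makes_sense))  # already sensible
```

As in the article's example, the repair only succeeds because bottom-up evidence (the sound of the phrase) and top-down knowledge (stored language and memory) are consulted together.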
What I hope this information indicates is that both top-down and bottom-up processes are critically important in the accurate and appropriate processing of the auditory information we receive. This matters because many professionals and educators working with children who have auditory processing problems tend to focus on only one process, missing work on the other, crucially important process. Both are important for successful processing of what we hear. It is hoped that those reading this will understand that bottom-up and top-down processing are equally important and that they must successfully work together for people to truly understand what they hear.
References:
American Academy of Audiology. (2010). Diagnosis, treatment, and management of children and adults with central auditory processing disorder [Clinical Practice Guidelines]. Retrieved from https://audiology-web.s3.amazonaws.com/migrated/CAPD%20Guidelines%208-2010.pdf_539952af956c79.73897613.pdf
American Speech-Language-Hearing Association. (2005). (Central) auditory processing disorders [Technical Report]. Retrieved from https://www.asha.org/policy
Hawkins, J., & Lucker, J. R. (2017). Looking at auditory processing from a multisystem perspective. Topics in Central Auditory Processing, 2(1), 4-12.
Rees, N. S. (1973). Auditory processing factors in language disorders: The view from Procrustes' bed. Journal of Speech and Hearing Disorders, 38, 304-315.
Jay R. Lucker, Ed.D., CCC-A/SLP, FAAA, is a Professor in the Department of Communication Sciences and Disorders at Howard University and also works in private practice specializing in Auditory Processing and Language Processing Disorders.
Excitation and inhibition are accomplished by the glutamate and GABA neurotransmitters. Both are efferent instructions originating at the hippocampus before CA actions take over. In people with hearing losses, GABA neurotransmitters are more active and effective than glutamate. This is easily demonstrated in a hearing test, where an artificial increase of just 5 dB in the input of spoken words causes a drop in the sensation of the subsequent words of as much as 10 dB, leading to erroneous findings in MCL computation. I can only guess that with the hearing loss in progress, and amplification being used, there is a degradation of bottom-up processing (amplification), and top-down functions then only serve to lower the action potentials of the incoming signals. Let me clarify that recognition of coded language occurs as a cognitive action after the memory system is activated by the hippocampal reaction to the action potential. In simple words (nothing simple, really!), the sequencing of such potentials determines their routing to the prefrontal cortex, where comparison of long-term memory storage with the incoming potential then helps determine recognition. In other words, recognition is the prelude to speech understanding. If the code is not understood, then deriving the meaning of the input will take more processing time. If the time is very limited (less than 2 msec), then it is lost, even when default processing is induced in the next mode.