Hearing With Your Ears, Listening With Your Brain

Imagine yourself at a reception, chatting one-on-one with another guest. You struggle to hear everything the person is saying amid the din of surrounding conversation, but even though you miss a word here and there, you have enough context to understand the gist of what the person is telling you.

In this situation, you are relying on different kinds of knowledge, including contextual cues and relevant memories, to process the information you receive. This knowledge is critical not only for those trying to converse in a noisy room but also for people with hearing impairments, who must rely on knowledge in this way for just about every conversation they have.

Scientists in recent years have grown increasingly aware of the integral role cognition plays in communication, and this awareness has spawned a new field of research called cognitive hearing science. This field examines the way our minds process the auditory signals being sent to the brain, factor in the complexity of what we’re listening to, and adjust to the quality of listening conditions.

The findings from this field hold particular significance for people with hearing impairments, whose inner ears don’t capture complete auditory information for the brain to process. Long-term effects of insufficient bottom-up signal processing may affect what is stored in the brain, sometimes causing a negative cycle: without understanding, knowledge is not updated, which in turn leads to reduced understanding in the future.

A New Model

Recent models of language understanding under adverse or distracting conditions have emphasized the complex interactions among language signal, working memory capacity (WMC) and related executive functions, and episodic and semantic long-term memory (LTM).

Cortical, top-down processes such as working memory (WM) and attention play important roles at both very early and late stages of language processing. Bottom-up factors relying on the cochlea, the auditory portion of the inner ear, are associated with phonological and linguistic processing. Depending on the listening task and how adverse the listening conditions are, these bottom-up and top-down processes interact at different levels of the auditory system.

My colleagues and I have developed a model that addresses this bottom-up–top-down interaction. The Ease of Language Understanding (ELU) model builds on the assumption that the brain “Rapidly, Automatically, and Multimodally Binds PHOnological” information together and represents it in a very short-term buffer we call RAMBPHO. In ideal listening conditions, the RAMBPHO input matches a sufficient number of phonological attributes in the mental lexicon — the systematic organization of vocabulary stored in our mind — and triggers our brains to access and comprehend words rapidly and implicitly, which updates our knowledge. But if hearing is impaired — either through a physical disability or simply because of competing noise — WM kicks in to support listening. Using both phonological and semantic LTM, we fill in, or infer, missing information, which then feeds back to and primes RAMBPHO for its next input.

The topic of the conversation and the regional accent of the person speaking to us are examples of semantic and phonological priming, respectively. The output of the system is some level of understanding or perceived gist, which in turn induces a semantic framing of the next explicit loop. To conceptualize the time scale on which the explicit loop operates, consider what happens when two people in a dialogue take turns speaking. Every turn primes RAMBPHO, so the system undergoes constant dynamic change, continually optimizing the brain’s work. Another output from the system is episodic LTM; the information encoded into LTM depends on the type of processing carried out in WM. The more adverse the listening conditions, the less spare capacity we have to encode information into LTM. The brain mechanisms that either facilitate or hinder smooth “online” (i.e., real-time) language processing or long-term cognitive change are vital to this area of study.
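
To make the flow of the model concrete, here is a toy sketch in Python. Everything in it (the matching threshold, the attribute sets, and the function names) is invented for this illustration; the ELU model itself is a theoretical model, not a computer program.

```python
# Toy simulation of the ELU model's implicit/explicit loop.
# The threshold, attribute sets, and lexicon format are invented
# for illustration; they are not part of any published implementation.

MATCH_THRESHOLD = 0.7  # assumed fraction of phonological attributes that must match

def rambpho_match(heard, entry):
    """Fraction of an entry's phonological attributes present in the input."""
    return len(heard & entry["phonology"]) / len(entry["phonology"])

def understand(heard, lexicon, semantic_prime=None):
    # Implicit route: the RAMBPHO input is compared against the mental lexicon.
    best = max(lexicon, key=lambda e: rambpho_match(heard, e))
    if rambpho_match(heard, best) >= MATCH_THRESHOLD:
        return best["word"], "implicit"  # rapid, effortless lexical access
    # Mismatch: the explicit working-memory loop infers the word from
    # semantic context (priming), at a cost in spare capacity.
    if semantic_prime is not None:
        primed = [e for e in lexicon if semantic_prime in e["semantics"]]
        if primed:
            best = max(primed, key=lambda e: rambpho_match(heard, e))
            return best["word"], "explicit"
    return None, "mismatch unresolved"

lexicon = [
    {"word": "rain",  "phonology": {"r", "ey", "n"},      "semantics": {"weather"}},
    {"word": "train", "phonology": {"t", "r", "ey", "n"}, "semantics": {"travel"}},
]

# Noise degrades the signal: only two attributes of "train" survive.
print(understand({"ey", "n"}, lexicon, semantic_prime="travel"))
# -> ('train', 'explicit')
```

When the implicit match succeeds, access is fast and effortless; when it fails, the explicit route consumes working-memory capacity, which is exactly the capacity that would otherwise be available for encoding the conversation into episodic LTM.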

Cognitive hearing science is not just about auditory aspects of speech; it also relates to lip reading and to visual language, such as sign language. It also includes cross-cultural comparisons of tests and tools that address the mechanisms involved in auditory perception. For example, the WM tests used are now being evaluated in many labs all over the world (see the Frontiers Topic on “The role of working memory and executive function in communication under adverse conditions,” eds. Rudner & Signoret, and the Topic on “Cognitive hearing mechanisms of language understanding: Short- and long-term perspectives,” eds. Rönnberg, Ellis, Sörqvist, & Zekveld).

New Findings and Predictions

Many kinds of signal processing and signal distortion in hearing instruments (e.g., amplitude/frequency compression, binary masking) affect phonological processing, thus also affecting RAMBPHO. If the RAMBPHO representation is less precise, then the person has a heightened chance of accessing incorrect or irrelevant information from the mental lexicon, according to the ELU model. For some people with low WMC, a hearing aid actually can hinder rather than facilitate speech processing. At the same time, more advanced signal processing can benefit people with high WMC.

This pattern was demonstrated in a 2007 study in which cognitive hearing scientists Thomas Lunner and Elisabet Sundewall Thorén gave 23 hearing aid users a task to measure their WMC and then presented the participants with speech in background noise that made the speech difficult to hear (Lunner & Sundewall Thorén, 2007). Critically, some participants wore hearing aids offering basic signal processing (called automatic volume control), whereas others wore hearing aids providing more advanced signal processing (wide dynamic range compression). Participants with low WMC showed better comprehension with basic signal processing, whereas those with high WMC heard better with the more advanced signal processing (see also Arehart, Souza, Baca, & Kates, 2013).
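
One way to picture the clinical implication of this crossover is as a simple fitting rule keyed to a cognitive test score. The sketch below is only a schematic reading of the pattern; the reading-span cutoff and scale are invented for the example, not taken from the study.

```python
# Hypothetical fitting rule expressing the WMC-by-signal-processing
# crossover reported by Lunner & Sundewall Thorén (2007).
# The reading-span cutoff and scale are invented for illustration.

def recommend_processing(reading_span_score, cutoff=20):
    """Return the signal-processing scheme predicted to suit this listener."""
    if reading_span_score >= cutoff:              # high WMC
        return "wide dynamic range compression"   # advanced processing
    return "automatic volume control"             # basic processing

print(recommend_processing(26))  # -> wide dynamic range compression
print(recommend_processing(14))  # -> automatic volume control
```

Any such rule would of course need clinical validation; the point is simply that the fitting decision turns on a cognitive measure, not on the audiogram alone.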

Beyond hearing aids, high WMC also benefits other aspects of language processing. For example, participants with hearing impairments and high WMC use phonological codes in WM more efficiently while carrying out rhyme and reading tasks than do those with low WMC. High WMC also shields the auditory system from distraction, thus supporting the attention process. Furthermore, high WMC is linked with the ability to make use of semantic cues that help separate speech from background noise, and it also may serve as an important cognitive shield against fatigue.

Low WMC, on the other hand, is associated with high activation in the prefrontal brain regions related to semantic processing when understanding speech amid surrounding noise. These frontal brain processes cause mental strain.

The ELU model also makes predictions about the effects of hearing impairments on LTM. Compared with people with normal hearing function, people with impairments are more likely to miss certain auditory cues during the course of the day, even if they’re using hearing aids. For instance, some everyday adverse listening conditions (e.g., multitalker babble) can confound a person with hearing problems because the hearing aid cannot segregate streams of speech in the same way that a healthy auditory system can.

In these cases, a person’s WM becomes fully occupied with attempting to resolve misinterpretations and misunderstandings. And because mismatches become more frequent, episodic LTM is encoded into and retrieved from less often.

We have demonstrated empirically that the LTM of people with hearing impairments falls into relative disuse and decline compared with their WM. In one study (Rönnberg et al., 2011), 160 hearing aid users were measured on their ability to recall words, action phrases, and sentences from LTM. Those with greater amounts of hearing loss performed worse on these tasks, even when age was taken into account. No such pattern was found for WM.

The results of Rönnberg et al. (2011) were reinforced in a study by Rönnberg, Hygge, Keidser, and Rudner (2014) involving more than 100,000 participants recruited through the UK Biobank (http://www.ukbiobank.ac.uk/), a research population and charity organization containing health data on 500,000 people. (Interested researchers can apply to use this resource for health-related research in the public good.) It is interesting to note that the negative effects of functional hearing loss appeared on a visuospatial prospective LTM task (the proxy for episodic LTM in Rönnberg et al., 2014), thus aligning with the modality-general episodic LTM effects obtained in the Rönnberg et al. (2011) study. This leads to the more general conclusion that potential audibility or attention problems related to the hearing impairment and to an auditory task (encoded with the hearing aid) cannot explain the data. Instead, it seems to be a more general, multimodal episodic memory system that is affected by hearing loss.

This episodic LTM decline related to hearing loss may be a cause of dementia. Recent research has shown that the risk of developing dementia of the Alzheimer’s type during a 10-year follow-up period increased by a factor of 4 to 5 for people with moderate to severe hearing impairments (Lin et al., 2011). This finding remained significant even when background variables such as age, sex, race, education, diabetes mellitus, smoking, and hypertension were taken into account. Because an episodic LTM deficit is an important component of developing Alzheimer’s disease, this may be an important link that previously has been overlooked.

New Applications

Hearing aid manufacturers and dispensers must be made aware that only people with high WMC will benefit from more advanced signal processing, and that the fitting of hearing aids should be based on more than the person’s auditory perception of speech. As has been shown, for some people with low WMC, signal processing that is too advanced impedes rather than helps cognitive processing of the signal. In this context, it is important for scientists to develop reliable brain indices of cognitive disturbance and effort, such as pupil-dilation and brain-oscillation measures, which may improve hearing aid fitting. Hopefully, future cognitive hearing aids will capitalize on such techniques, for example by using the electrical signals detected in the brain to control the online amplification and signal processing in hearing aids.
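
As a thought experiment, the closed-loop idea might look something like the following sketch, in which a hypothetical, normalized “listening effort” index (standing in for pupil-dilation or brain-oscillation measures) switches the aid between processing schemes. No real device or API is being described; every name and threshold here is an assumption.

```python
# Hypothetical closed-loop control for a "cognitive" hearing aid.
# The effort index, thresholds, and switching rule are all assumptions
# made for illustration; no existing device is known to work this way.

ADVANCED = "wide dynamic range compression"
BASIC = "automatic volume control"

def adjust_processing(mode, effort, high=0.8, low=0.4):
    """Fall back to simpler processing while listening effort stays high;
    restore advanced processing once effort drops."""
    if mode == ADVANCED and effort > high:
        return BASIC     # advanced processing seems to be overloading the listener
    if mode == BASIC and effort < low:
        return ADVANCED  # spare capacity available; advanced processing may help
    return mode

# Simulated stream of effort readings, rescaled to 0..1 (hypothetical values).
mode = ADVANCED
for effort in [0.5, 0.85, 0.9, 0.7, 0.35, 0.3]:
    mode = adjust_processing(mode, effort)
    print(f"effort={effort:.2f} -> {mode}")
```

The two thresholds create hysteresis so the aid does not flip between schemes on every fluctuation; in a real system, the choice of control signal and switching policy would be empirical questions.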

From the new cognitive hearing science perspective, it would be equally important to evaluate episodic LTM of the contents of a conversation in noise, because these contents signal what kind of spare cognitive capacity the listener had while attempting to follow the conversation. If the communication saps too many processing resources from the listener’s brain, that person will be left with fewer cognitive reserves to encode the information into episodic LTM.

Future studies also should evaluate the effects of WM training and, especially, whether any improvements in WMC transfer to other skills, such as speech understanding in noise tasks. Scientists also can use longitudinal studies to examine the effects of WMC training and hearing aid use, as well as how these effects relate to or shield the brain from age-related cognitive decline. All of these applications are important for clinical audiologists, psychologists, and other hearing health-care practitioners to know about in order to optimize the communication potential of people with hearing loss.

References

Arehart, K. H., Souza, P., Baca, R., & Kates, J. M. (2013). Working memory, age, and hearing loss: Susceptibility to hearing aid distortion. Ear and Hearing, 34, 251–260. doi:10.1097/AUD.0b013e318271aa5e

Arlinger, S., Lunner, T., Lyxell, B., & Pichora-Fuller, M. K. (2009). The emergence of cognitive hearing science. Scandinavian Journal of Psychology, 50, 371–384. doi:10.1111/j.1467-9450.2009.00753.x

Lin, F. R., Metter, E. J., O’Brien, R. J., Resnick, S. M., Zonderman, A. B., & Ferrucci, L. (2011). Hearing loss and incident dementia. Archives of Neurology, 68, 214–220. doi:10.1001/archneurol.2010.362

Lunner, T., & Sundewall Thorén, E. (2007). Interactions between cognition, compression, and listening conditions: Effects on speech-in-noise performance in a two-channel hearing aid. Journal of the American Academy of Audiology, 18, 604–617. doi:10.3766/jaaa.18.7.7

Rönnberg, J., Danielsson, H., Rudner, M., Arlinger, S., Sternäng, O., Wahlin, Å., & Nilsson, L.-G. (2011). Hearing loss is negatively related to episodic and semantic long-term memory but not to short-term memory. Journal of Speech, Language, and Hearing Research, 54, 705–726. doi:10.1044/1092-4388(2010/09-0088)

Rönnberg, J., Hygge, S., Keidser, G., & Rudner, M. (2014). The effect of functional hearing loss and age on long- and short-term visuospatial memory: Evidence from the UK Biobank resource. Frontiers in Aging Neuroscience, 6. doi:10.3389/fnagi.2014.00326

Rönnberg, J., Lunner, T., Zekveld, A., Sörqvist, P., Danielsson, H., Lyxell, B., … Rudner, M. (2013). The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Frontiers in Systems Neuroscience, 7. doi:10.3389/fnsys.2013.00031

Rönnberg, J., Rudner, M., & Lunner, T. (2011). Cognitive hearing science: The legacy of Stuart Gatehouse. Trends in Amplification, 15, 140–148. doi:10.1177/1084713811409762

Comments

In 2007 we reported that the ability to recognize speech or other familiar sounds by 340 listeners with normal hearing (sensitivity) appeared to be relatively distinct from spectral or temporal acuity or general intelligence. If we had not included an environmental sound identification task, we likely would have concluded that indeed, “speech is special.” At any rate, our results agree with Rönnberg’s model, except that it may need to be expanded to include familiar nonspeech sounds.

J Acoust Soc Am. 2007 Jul;122(1):418-35.
Individual differences in auditory abilities.
Kidd GR, Watson CS, Gygi B.

Very interesting article. I was just wondering whether people with cochlear implants were assessed in the studies mentioned as speech perception is significantly different from hearing aids.
Regards,
Ms M.

