Cognitive abilities for communication
To get a comprehensive picture of communicative disability, one needs to relate cognitive abilities to communicative potential and circumstances. Prediction studies and experimental manipulations have proven fruitful here. In our division, we have studied the communicative potential of persons with cerebral palsy, traumatic brain injury, epilepsy, and intellectual disability, using core cognitive concepts that typically draw on memory functions. More recently, I have shown that working memory capacity plays a key role for deafened speech-readers, sign language users, users of cochlear implants, and hearing aid users, especially when signal processing in the hearing aid is more advanced and/or when listening conditions are challenging.
We listen and understand with the brain
Imagine yourself at a party, chatting one on one with another guest. You struggle to hear everything the person is saying amid the din of surrounding conversation. And even though you miss a word here and there, you have enough context to understand the gist of what the person is saying. In this type of situation, you are relying on different kinds of knowledge stored in your brain, including contextual cues and relevant memories, to process the information you are receiving. This use of knowledge is critical not only for people trying to converse in a noisy ballroom, but especially for people with hearing impairments: imagine repeating the above mental processes in just about every conversation you have. We have named this research area Cognitive Hearing Science (CHS). CHS acknowledges the integral role cognition plays in communication. Specifically, CHS examines the way our minds process the auditory signals being sent to the brain, factoring in the complexity of what we are listening to and the quality of the listening conditions.
Modelling the signal-cognition interface in communication
In a model that my colleagues and I have developed, Ease-of-Language Understanding (ELU) under adverse or distracting listening conditions is assumed to depend on complex interactions among working memory capacity (WMC), attention and executive functions, and episodic and semantic long-term memory. This kind of approach has already proven fruitful for a more comprehensive grasp of the subtle interplay between bottom-up (sensory) and top-down (cognitive) processes, including both online processes and long-term changes (negative or positive) due to hearing impairment/deafness and aging. We test the model by means of both behavioral and brain imaging data, including data from functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and pupillometry. The model has played an important role in the development of the new area of Cognitive Hearing Science.
Long-term effects of hearing impairment
One prediction of the ELU model is that when the input to the ear is sufficiently poor, or when the person is hearing impaired, there is an increased probability that phonological-lexical activation in long-term memory will fail, i.e., there is a mismatch between the input to the brain and the representations stored in the brain. This mismatch triggers working memory activity, attempting to reconstruct and fill in what has been misheard. Working memory will therefore be constantly exercised, while long-term memory will be relatively disused: when some items are not encoded, they will not be stored or retrieved as frequently, even as working memory continues to be exercised. The general consequence is that long-term memory will decline while working memory capacities remain intact. We have shown this to be true with different types of memory tasks, and for purely visuospatial materials, in a large database, the UK Biobank.