SPEECH PERCEPTION AND UNDERSTANDING
Noise & Health, Year: 2010 | Volume: 12 | Issue: 49 | Page: 263-269
When cognition kicks in: Working memory and speech understanding in noise

1 Linnaeus Centre HEAD, The Swedish Institute for Disability Research, and Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
2 Linnaeus Centre HEAD, The Swedish Institute for Disability Research, and Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden; Oticon A/S, Research Centre Eriksholm, Snekkersten, Denmark
3 Linnaeus Centre HEAD, The Swedish Institute for Disability Research, and Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden; Audiology/ENT & EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands

Date of Web Publication: 21-Sep-2010
 
Abstract

Perceptual load and cognitive load can be separately manipulated and dissociated in their effects on speech understanding in noise. The Ease of Language Understanding model assumes a theoretical position where perceptual task characteristics interact with the individual's implicit capacities to extract the phonological elements of speech. Phonological precision and speed of lexical access are important determinants for listening in adverse conditions. If there are mismatches between the phonological elements perceived and phonological representations in long-term memory, explicit working memory (WM)-related capacities will be continually invoked to reconstruct and infer the contents of the ongoing discourse. Whether this induces a high cognitive load or not will in turn depend on the individual's storage and processing capacities in WM. Data suggest that modulated noise maskers may be processed as if they were speech maskers and may therefore induce an explicit, WM-based mode of processing. Individuals with high WM capacity benefit more than low WM-capacity individuals from fast amplitude compression at low or negative input speech-to-noise ratios. The general conclusion is that there is an overarching interaction between the focal purpose of processing in the primary listening task and the extent to which a secondary, distracting task taps into these processes.

Keywords:  Competing noise, ease of language understanding, masking, speech understanding, working memory

How to cite this article:
Rönnberg J, Rudner M, Lunner T, Zekveld AA. When cognition kicks in: Working memory and speech understanding in noise. Noise Health 2010;12:263-9


Introduction


In the history of audiology and speech communication research, the possibility that cognitive factors influence speech intelligibility in adverse listening conditions was recognized relatively late. [1],[2] This may in part be due to the fact that auditory scientists and audiologists have emphasized the possibility of predicting speech understanding in complex listening situations using relatively simple kinds of auditory or linguistic test stimuli, without considering the full impact of contributions from higher-order cognitive factors. [3],[4] Also, cognitive scientists have not considered the sensory influence on cognitive function in sufficient detail. [5],[6] Recognition of both these kinds of interaction, and of their consequences for information processing, has led to the development of a new interdisciplinary area called cognitive hearing science. [1],[7]

A general conclusion of the recent advances in cognitive hearing science is that the important features of future modeling must include the interplay between bottom-up and top-down processing, or in other terms, the interaction between automatic/implicit and deliberate/explicit kinds of information processing. [8],[9],[10] For example, recent research by our group demonstrates that explicit, top-down mechanisms that depend on working memory (WM) capacity may determine successful adaptation to signal processing algorithms, such as wide dynamic range compression, in assistive technology. [11],[12],[13]


Noise Modulation


Lyxell and Rönnberg [14] obtained interesting results in an early study on visual speechreading. Participants were presented with a video image of a speaker's face pronouncing words or sentences against different kinds of background conditions (quiet, stationary white noise, or meaningful noise - the same speaker reading from a newspaper article about the assassination of Olof Palme). The auditory target speech signal was not presented. WM capacity was assessed by the reading span test, which measures the ongoing storage and processing functions of WM. [15],[16] The two noise conditions did not affect visual speechreading performance per se. Nevertheless, WM capacity was correlated with speechreading, but only for the meaningful noise masker; for the white noise condition, there was no correlation between visual speechreading and reading span performance. The type of target stimulus (i.e., words or sentences) did not affect the relation between WM and speechreading performance. The results indicated that speechreading in noise capitalizes on cognitive resources in qualitatively different ways, depending on the type of noise masker.

George et al. [17] observed similar results in a study examining the role of auditory and non-auditory factors in speech perception in stationary and modulated noise by normal-hearing and hearing-impaired listeners. The results indicated that the speech reception threshold of the normal-hearing participants, in both stationary and modulated noise, was predicted best by the text reception threshold test. [18] The text reception threshold test is a visual analog of the speech reception threshold test. [19] Participants are asked to read aloud sentences that are partly masked by a vertical bar pattern. The test adaptively measures the percentage of unmasked text required for 50% correct sentence perception. The text reception threshold is related to the speech reception threshold in normal-hearing listeners. [18] For the hearing-impaired listeners included in the study of George et al. [17], the pure tone audiogram was the main predictor of speech perception in stationary noise. However, speech perception in modulated noise was related to auditory temporal acuity and the text reception threshold. Thus, generally consistent with Lyxell and Rönnberg, [14] this set of results indicates that non-auditory cognitive factors (linguistic inference-making and WM) are relevant for speech perception, specifically in fluctuating noise.
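
To make the adaptive logic of such tests concrete, the following is a minimal sketch of a one-up/one-down tracking procedure of the kind these threshold tests rely on; it converges on the stimulus level that yields 50% correct. The simulated listener, the step size, and all other parameters are illustrative assumptions, not the published test implementations.

```python
# Minimal sketch of a one-up/one-down adaptive procedure of the kind used in
# SRT- and TRT-style tests; it converges on the level giving 50% correct.
# The simulated listener and all parameter values are illustrative.
import math
import random

def simulated_listener(level, threshold=-5.0, slope=0.5):
    """Logistic psychometric function: P(correct) grows with level
    (e.g., SNR in dB for the SRT, or % unmasked text for the TRT)."""
    p = 1.0 / (1.0 + math.exp(-slope * (level - threshold)))
    return random.random() < p

def adaptive_track(start_level=10.0, step=2.0, n_trials=40):
    level, track = start_level, []
    for _ in range(n_trials):
        track.append(level)
        if simulated_listener(level):
            level -= step            # correct: make the task harder
        else:
            level += step            # incorrect: make the task easier
    tail = track[n_trials // 2:]     # average the converged half of the track
    return sum(tail) / len(tail)

random.seed(1)
print(f"Estimated 50%-correct threshold: {adaptive_track():.1f}")
```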

Compared to a stationary noise masker, a competing speech masker provides a 7 dB "release from masking" for young adults without hearing impairment, [20] whereas the masking release is smaller or absent for younger and older persons with hearing impairment. [17],[21],[22],[23] Thus, hearing loss is associated with a relative inability to benefit from the relatively silent periods in the noise masker. This inability to "listen in the dips" seems to be related to a reduced perception of the information provided by the temporal fine structure of the speech signal (the rapid fluctuations in the waveform) in the absence of envelope information (the slower modulation superimposed on the fine structure). [24] The role of temporal fine structure in listening in the dips is further reflected by the fact that increasing the amount of temporal fine structure information improves (lowers) the speech reception thresholds of listeners with normal hearing, to a larger extent for modulated than for stationary noise maskers. [25] However, the amount of masking release in fluctuating noise may in turn depend on the interaction between speech-to-noise ratio and impairment, such that a hearing-impaired person needs more favorable speech-to-noise ratios before the benefit of fluctuations becomes apparent. [26]
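
The envelope/fine-structure distinction drawn above can be illustrated with the standard analytic-signal decomposition: the magnitude of the analytic signal gives the slowly varying envelope, and the cosine of its instantaneous phase gives the rapidly fluctuating fine structure. A minimal sketch, assuming numpy and scipy, with a synthetic amplitude-modulated tone standing in for real speech:

```python
# Envelope vs. temporal fine structure via the Hilbert transform.
# Signal parameters are illustrative, not taken from any cited study.
import numpy as np
from scipy.signal import hilbert

fs = 16000                                       # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)                   # 50 ms of signal
carrier = np.sin(2 * np.pi * 1000 * t)           # 1 kHz "fine structure"
envelope_true = 0.5 * (1 + np.sin(2 * np.pi * 40 * t))  # 40 Hz modulation
x = envelope_true * carrier

analytic = hilbert(x)                            # analytic signal
envelope = np.abs(analytic)                      # slow modulation (envelope)
fine_structure = np.cos(np.angle(analytic))      # rapid fluctuations (TFS)

rms_err = np.sqrt(np.mean((envelope - envelope_true) ** 2))
print(f"envelope recovery error (RMS): {rms_err:.3f}")
```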

Kramer et al. [27] furthermore showed that the "speech intelligibility in noise" factor of a self-reported hearing disability questionnaire was better predicted by speech perception in modulated noise than by speech perception in stationary noise. This indicates that the hearing problems experienced by persons with hearing loss are tapped by modulated-noise speech perception tests, which require more cognitive capacity; this further underlines the value of examining the role of these cognitive abilities in hearing difficulties.


Amplification Algorithms and Noise Modulation


Another empirical illustration of the necessity of conceptualizing the interaction between auditory input characteristics and cognition was provided by Lunner and Sundewall-Thorén. [28] They manipulated the type of compression (slow/fast) in the hearing aid and the type of background noise (modulated/unmodulated). Slow compression creates quasi-linear amplification of the spoken input sound, which preserves syllable characteristics to a relatively high degree. In contrast, fast-acting compression gives nonlinear amplification that compresses syllables and alters the temporal envelope; because the envelope determines the acoustical form of the speech signal, this influences the cues used during phonological processing. [29] Fast-acting compression amplification can improve, but not perfectly restore, the ability to understand speech in modulated noise. [30]
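
As a rough illustration of the slow/fast distinction, the sketch below implements a single-channel compressor with a one-pole level detector and a static compression ratio; the parameter values and overall structure are simplifying assumptions, not the algorithm of any particular hearing aid. A short release time makes the gain track the syllabic envelope (fast-acting, nonlinear), whereas a long release time leaves the envelope largely intact (slow, quasi-linear).

```python
# Simplified single-channel dynamic range compression; illustrative only.
import numpy as np

def compress(x, fs, ratio=3.0, threshold_db=-30.0,
             attack_ms=5.0, release_ms=50.0):
    """Apply downward compression to signal x (numpy array, any scale)."""
    x = np.asarray(x, dtype=float)
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    level, y = 1e-6, np.empty_like(x)
    for i, s in enumerate(x):
        mag = abs(s)
        coeff = atk if mag > level else rel         # fast rise, slower fall
        level = coeff * level + (1.0 - coeff) * mag  # smoothed level estimate
        level_db = 20.0 * np.log10(max(level, 1e-6))
        # Above threshold, output grows only 1/ratio dB per input dB
        gain_db = min(0.0, (threshold_db - level_db) * (1.0 - 1.0 / ratio))
        y[i] = s * 10.0 ** (gain_db / 20.0)
    return y

# Fast compression (short release) tracks the syllabic envelope; slow,
# quasi-linear compression (long release) leaves it largely intact:
# y_fast = compress(x, fs, release_ms=50)     # "fast-acting"
# y_slow = compress(x, fs, release_ms=2000)   # "slow", quasi-linear
```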

Lunner and Sundewall-Thorén [28] showed that speech perception in relatively easy listening conditions (unmodulated noise and slow compression) was predicted by hearing threshold levels. However, in more demanding and ecologically more relevant conditions (modulated noise and fast compression), WM functions (i.e., letter monitoring performance) [31] played a prominent role in the prediction of speech-in-noise performance. The actual performance levels were similar across conditions. This study thereby demonstrates that a qualitatively different kind of explicit WM processing and storage function is triggered by the demanding compared with the less demanding listening conditions. In other words, the listening demands determine how a certain level of speech perception performance is reached. The details remain to be worked out as regards what constitutes a "demanding" listening situation in relation to different kinds of cognitive capacities and materials. [32],[33] The studies of Lyxell and Rönnberg [14] and George et al. [17] have in common that the involvement of cognitive capacity during speech perception was high when speech was masked by a fluctuating masker (i.e., a human voice, a square-wave modulated masker, or long-term speech spectrum noise modulated by and locked to one talker). [34] A tentative suggestion therefore is that fluctuating noise seems to signal demands on WM storage and processing to a larger extent than steady-state noise.


Phonological Mismatch, Working Memory Capacity, and Aided Speech Perception in Noise


The studies described above thus suggest that one important role of cognition in aided speech recognition in noise is to piece together the speech information available in the dips of a modulated masker. One of the reasons for a degradation of the percept may be loss of temporal fine structure. [25] A degraded percept resulting from loss of temporal fine structure can be partially compensated for by fast-acting compression, which makes the weaker parts of the signal more audible.

In several recent studies performed by our group, we have systematically manipulated the type of signal processing used in hearing aids and examined which conditions were associated with WM capacity. The rationale behind this approach is that if interindividual differences in WM capacity are related to speech perception performance in certain conditions, then these conditions apparently induce a qualitatively different, WM-based strategy of information processing. In one study, we manipulated the experimental settings in the hearing aids of experienced users such that they differed from, and "mismatched", the original settings of their hearing aids. [35] The unfamiliar hearing aid setting created a mismatch, presumably at the syllabic, phonological level (see also the section describing the ELU model in the current paper). This mismatch caused a dependence on WM capacity, as reflected by significant correlations between speech perception performance and the reading span test. This relation was observed across combinations of slow and fast compression settings in the experimental hearing aid, with steady as well as with modulated noise. In a generalization study, in which we re-analyzed three datasets (two of which were from a Danish population), we again observed that reading span performance was related to performance in different mismatch conditions. Thus, the phonological mismatch effect due to unfamiliar hearing aid settings is stable across the two Scandinavian languages. [36]

In yet another study, we directly manipulated the participants' experience with a particular hearing aid setting. After 9 weeks of training with a slow, quasi-linear compression release setting in the hearing aid, speech perception performance with a fast, nonlinear compression setting was related to WM performance, [13] whereas this was not true with the slow setting. That is, in the former case a phonological mismatch was induced, hence the dependence on WM capacity. This mismatch may be due to the fact that the nonlinear compression used at test substantially alters or "distorts" the extracted phonological representation compared with the slow compression setting to which the participants had become acclimatized during training; hence the mismatch between training and test. The reverse was not necessarily true (i.e., for training with a fast setting and testing with a slow setting), presumably because transfer was easier from practice with a relatively more distorted percept to a less affected speech signal. [13]


A Working Memory System for Ease of Language Understanding


To address issues related to speech understanding in noise more generally, the Ease of Language Understanding (ELU) model was developed [8],[9],[10] [Figure 1].
Figure 1: A working memory system for Ease of Language Understanding (ELU)[8],[9],[10]



It suggests one way of accounting for the role of cognition in language understanding in persons with hearing loss. The perceptual input to the model is conceptualized as multi-sensory or multi-modal linguistic information, which at a cognitive level is assumed to be Rapidly, Automatically, and Multi-modally Bound together to form Phonological streams of information (RAMBPHO; sign language is not dealt with in this paper, but see Rönnberg et al. [9]). As long as optimum conditions prevail, the RAMBPHO function mediates rapid and implicit unlocking of the lexicon by means of matching input with stored phonological representations in long-term memory. [9]

If suboptimal conditions prevail (e.g., due to hearing loss, suboptimal signal processing in a hearing aid, and/or noisy conditions), RAMBPHO information may fail to activate stored representations, and mismatch may occur. Mismatch can also arise due to slow lexical access and less precise phonological representations in long-term memory. However, mismatch or partial mismatch can be compensated for by other levels of language (e.g., semantic information). If appropriately manipulated, semantic information may prime phonological processing and sentence comprehension. [37]

Our current knowledge is that when phonological mismatch occurs, explicit processing and storage capacity is required to infer and abstract meaning, prospectively as well as retrospectively, on the basis of incomplete new information and previous knowledge. [35],[38] In addition, it is assumed that the relative contribution of explicit and implicit functions to language understanding varies as a function of mismatch as well as of talker-, context- and dialog-specific aspects. [10] Thus, ease of language understanding is, in general, negatively related to the degree of explicit involvement, and is in those instances predicted to depend on the storage and processing capacity with which explicit functions can be carried out.
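
Purely as a schematic caricature of the match/mismatch logic just described - not a computational implementation of the ELU model - the sketch below treats implicit RAMBPHO-based access as an exact lexical lookup and explicit WM-based compensation as fuzzy matching whose reach grows with WM capacity. The toy lexicon, the capacity scaling, and the matching heuristic are all invented for illustration.

```python
# Schematic ELU-style match/mismatch flow; all names are hypothetical.
from difflib import get_close_matches
from typing import Optional

LEXICON = ["rain", "train", "brain", "plane"]    # toy phonological lexicon

def understand(percept: str, wm_capacity: float) -> Optional[str]:
    if percept in LEXICON:                       # RAMBPHO match: rapid, implicit
        return percept
    # Mismatch: explicit processing; higher WM capacity tolerates poorer matches
    cutoff = 1.0 - 0.5 * wm_capacity             # wm_capacity assumed in [0, 1]
    hits = get_close_matches(percept, LEXICON, n=1, cutoff=cutoff)
    return hits[0] if hits else None             # None: understanding fails

print(understand("train", 0.2))  # intact input: implicit match -> 'train'
print(understand("tain", 0.2))   # degraded input, low WM capacity -> None
print(understand("tain", 0.9))   # degraded input, high WM capacity -> 'train'
```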


The Nature of Phonological Mismatch


According to the general description of the ELU model, several conditions may cause mismatch at different linguistic levels of analysis. [8] Here we focus on phonological mismatch. The studies referred to in the current paper, which examined the relation between speech processing algorithms and WM, fall into two classes. One deals with perceptual input that causes a mismatch with existing phonological representations in long-term memory. [28] The other deals with how perceptual learning with one kind of signal processing scheme - with subsequent modifications of phonological representations in long-term memory - can cause a mismatch with perceptual input from other signal processing schemes. [13] Both have in common that they signal a demand on explicit WM capacity. Across studies, it is especially a fast-acting compression algorithm combined with modulated noise that seems to provoke the highest dependence on WM capacity. [13],[36] We propose that high WM capacity allows hearing-impaired listeners to benefit from masking release in modulated noise in a way similar to that experienced by persons with normal hearing thresholds, by allowing them to piece together the information partially restored by fast-acting compression release. WM capacity would thereby partially compensate for loss of sensitivity to temporal fine structure. [25]

In further detail, Lunner et al. [12] suggested that the benefit from a fast amplitude compression system increases as the input speech-to-noise ratio becomes more negative. This is because the long-term output speech-to-noise ratio of the hearing aid is then increased relative to the input speech-to-noise ratio. [39] By implication, if persons with high WM capacity demonstrate speech reception thresholds at lower (more negative) input speech-to-noise ratios than persons with low WM capacity, the high-capacity persons will receive an added benefit from the relatively higher output speech-to-noise ratios (cf. the speech-to-noise ratio dependence of the benefit from fluctuating maskers). [26]
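
A minimal numerical sketch of this output speech-to-noise ratio effect, reusing the simplified level detector of the compressor above but returning its per-sample gains: the gains are computed on the speech + noise mixture and then applied to the speech and noise paths separately (the separation idea behind phase-inversion measurement techniques), so that long-term SNR can be compared before and after compression. The synthetic signals and all parameters are illustrative assumptions.

```python
# Long-term output SNR of a fast compressor for a modulated masker.
import numpy as np

def compressor_gains(x, fs, ratio=3.0, threshold_db=-30.0,
                     attack_ms=5.0, release_ms=50.0):
    """Per-sample linear gains of a simple fast-acting compressor."""
    x = np.asarray(x, dtype=float)
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    level, gains = 1e-6, np.empty_like(x)
    for i, s in enumerate(x):
        mag = abs(s)
        coeff = atk if mag > level else rel
        level = coeff * level + (1.0 - coeff) * mag
        level_db = 20.0 * np.log10(max(level, 1e-6))
        gain_db = min(0.0, (threshold_db - level_db) * (1.0 - 1.0 / ratio))
        gains[i] = 10.0 ** (gain_db / 20.0)
    return gains

fs = 16000
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1.0 / fs)
speech = rng.standard_normal(t.size)                      # steady speech stand-in
mod = np.where(np.sin(2 * np.pi * 4 * t) > 0, 1.0, 0.05)  # 4 Hz modulated masker
noise = mod * rng.standard_normal(t.size)

for snr_in in (-10, -5, 0):
    s = speech / np.sqrt(np.mean(speech ** 2))
    n = noise / np.sqrt(np.mean(noise ** 2)) * 10 ** (-snr_in / 20)
    g = compressor_gains(s + n, fs)   # gains derived from the mixture only
    snr_out = 10 * np.log10(np.mean((g * s) ** 2) / np.mean((g * n) ** 2))
    print(f"input SNR {snr_in:+3d} dB -> long-term output SNR {snr_out:+5.1f} dB")
```

Because the gain is highest in the masker dips, where the local speech-to-noise ratio is favorable, the long-term output SNR exceeds the (negative) input SNR in this toy setting.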

According to Mattys et al., [40] perceptual load as induced by stationary noise maskers leads to a reliance on acoustic features in segmentation judgments of ambiguous phrases. In contrast, cognitive load (as induced by secondary tasks such as divided attention and lexical memory tasks) strengthens the reliance on lexical and semantic knowledge (e.g., was it "mild option" or "mile doption" I just heard?). Perceptual load would thus lead to more "mile doption" judgments and cognitive load to more "mild option" judgments. Their model [40] does not predict that acoustic/phonological mismatches increase the cognitive load and hence result in a dependence on explicit WM resources in terms of the ELU model. On the face of it, this result is at odds with our phonological mismatch data. [13],[36]

However, if we conceive of our mismatch data as principally targeting phonological mismatches, then it is possible to reconcile the ELU model [8] with the predictions about perceptual load described by Mattys et al. [40] The ELU model concerns the cognitive consequences of a phonological mismatch for speech understanding in noise, whereas the model of Mattys et al. [40] concerns the reliance on different features for the purpose of segmentation. In the ELU model, the "bottleneck" of information processing and lexical access from long-term memory is a phonological representation (presumably at the syllabic level). [10] Phonological processing in RAMBPHO is affected by the probability of correct neural, bottom-up driven phonemic classification at an early stage in the processing of the information, by how the type of hearing impairment affects this classification (i.e., the effects of inner and outer hair cell damage on the classification), [37] and by whether visual information helps binding and optimizing the phonological (syllabic) percepts for unlocking the lexicon. [7]

Further to this point, the neural correlates of speech versus non-speech perception clearly differ. [41] Again, it seems that when speech is to be processed, a qualitatively different processing mode with a different neural signature occurs: areas for phonological (syllabic) analysis differ from brain areas merely involved in sound perception. [41] Perhaps amplitude-modulated noise signals speech or phonology, rather than noise, on the principle that "you can't hear speech as noise". [42] The consequence of this reasoning is that as long as an auditory stimulus resembles "speech" in the way in which it is processed by the brain, namely, as a competing source of information, it is likely that explicit WM resources will be tapped. Importantly, given sufficient WM capacity this does not necessarily imply a higher cognitive load, since "load" in our understanding is relative to the capacity of the individual. However, what can be expected is a qualitatively different compensatory strategy when the individual deploys his/her implicit and explicit resources at hand (Rönnberg, [10] Figures 1-3, pp. 73-4).

Thus, what we have articulated here is a position in between perceptual and cognitive load-induced strategies (Mattys et al. [40]), which emphasizes the interaction between perceptual task characteristics and (a) the individual's implicit capacities to pick up the phonological elements of speech in adverse listening conditions (via the actual ability to capitalize on temporal fine structure, the ability to multi-modally bind together the target phonological elements, and the speed with which the lexicon is accessed), as well as (b) the way the implicit functions interlock and interact with the explicit WM-related capacities to synthesize and infer the actual discourse. With high-quality implicit processes (fast lexical access and precise phonological representations in long-term memory), the probability of mismatch is reduced. With lower-quality implicit processes (slow lexical access and less precise phonological representations), the probability of mismatch is increased, and the level of speech understanding will then be a function of the compensatory potential of explicit WM-based strategies. [10]


The Nature of the Speech Task


Related to our discussion are results from a different line of research on the effects of distracting sounds on serial short-term memory recall. In particular, the transitions in vowels seem to be more distracting than consonant sounds when it comes to phonological similarity effects within consonant-vowel-consonant distracters. [43] Hughes et al. [43] have therefore argued in favor of a so-called "changing-state hypothesis". The essence of this hypothesis is that distraction is driven by change (or fluctuation) in the distracter, not by whether the distracter is speech. The distraction effect can also be obtained with other kinds of fluctuating sounds, such as tones, [44] as long as they disrupt the overall task of serial recall.

Thus, what seems to determine the effects, in general, is that interference is caused by similarity between the process applied to the distracter or masker stimuli and the process required by the focal task, and not by the contents per se [45] (but see Baddeley and Larsen [46]). For example, if the purpose of the task is to recall visually presented semantic information by category, recall is compromised by the semantic relatedness of the irrelevant speech distracter. However, if the instruction is to recall by serial order, the negative effect of semantic relatedness is reduced (semantic relatedness was manipulated by forward vs. reversed speech), and other acoustic/phonological distracters that affect seriation play a role instead. [45]

Translated back to the issue of phonological mismatch, amplitude modulation seems to signal speech, which thereby interferes with the processing of the phonological component, as long as the focal task is to recall or repeat (serially) the semantically unrelated words in lists such as the Hagerman lists. [28] Phonological mismatch effects - with a reliance on WM capacity - do not show as clearly with other types of natural and contextually richer sentences, such as the HINT sentences. [13] The ELU model is not defined in terms of dedicated loops for information processing, but rather in terms of the phonological information that is bound together (i.e., RAMBPHO; cf. Baddeley [47]) to access the phonological lexicon. [8] In that sense, the model may accommodate general processes that affect seriation and phonology, if seriation is a constraint for efficient on-line lexical access.

To further develop the ELU model, we have to study the demands on WM in semantically interfering conditions as well, i.e., when the distracting task interferes with the focal task of semantic processing of a sentence in noise, or with semantic priming of the to-be-reported sentence. In this way, other causes of mismatch and of explicit processing may be charted more broadly.


Ease of Language Understanding, Just Follow Conversation, and Type of Noise


In a study by Hygge et al. [48] using a conversation-following task ("Just Follow Conversation"), ELU was measured by asking participants to individually adjust the signal-to-noise ratio such that the listener was "just able to follow and comprehend a conversation". The maskers applied were (a) speech-spectrum random noise, (b) a male voice reading from another chapter of the text used for the female signal voice, or (c) the male voice played in reverse. Both normal-hearing and hearing-impaired persons participated. Each masker had a similarly detrimental effect on "ease" in the hearing-impaired group, whereas for the normal-hearing participants, the two speech maskers were equally distracting and both were less distracting than the random noise masker. It was argued that the hearing-impaired participants could not effectively use the temporal variations in the speech maskers, resulting in similar speech-to-noise ratios across conditions, all worse than those obtained by the normal-hearing controls. Thus, in contrast to the hearing-impaired group, the normal-hearing listeners could effectively utilize the temporal cues, silent stops, and pauses to follow the message of the signal voice better. [37],[49]

Hygge et al. had expected the forward, normal speech masker to be more distracting than the male voice played in reverse, because of the interfering semantic processing it would induce. However, a lack of interfering effects of the content of speech maskers has also been reported by others [20],[21],[50],[51] (see also the review by Bronkhorst [52]). Festen and Plomp [21] showed that the speech reception threshold is similar for speech masked by a single distracting voice and for speech masked by a noise masker spectrally shaped to the voice and multiplied by the voice envelope.

In conclusion, it seems that despite the lack of semantic interference, the study conditions referred to above have a phonological component in common. The reason for this result may again be related to the demands of the focal task. [45],[52] There is less possibility for semantic distraction as long as the focal task is to "just follow" the conversation.

One example of a manipulation that involves a more semantic [45] or cognitive [40] load was studied by Lyxell et al. [53] They used two "Just Follow Conversation" conditions: in the active listening condition, participants were instructed to prepare for forthcoming, randomly presented questions on the content of the speech; in the other condition, participants received a passive listening instruction. In both conditions, kindergarten babble was used as the noise masker. WM capacity correlated significantly with performance in the active but not in the passive condition for normal-hearing participants.

This manipulation may be seen as a parallel to the task of Mattys et al., [40] in which participants were instructed to maintain a set of words in memory while carrying out the segmentation judgments, and it is similar in spirit to the semantic category recall manipulations of Marsh et al. [45] In both cases, cognitive or semantic load is increased, resulting in higher reliance on lexical-semantic information and WM capacity. Manipulations of cognitive load in relation to different kinds of mismatch and in different types of noise remain to be investigated in future developments of the ELU model.


Conclusions


Perceptual load and cognitive load (cf. Mattys et al. [40]) can be dissociated by manipulating the nature of the secondary task and by assessing the effects that they have on the reliance on acoustic versus lexical aspects of coding speech.

In terms of a general mismatch concept included in the ELU model, there is yet another dimension to be noted: if the phonological mismatch with the lexicon is sufficiently large, cognitive resources will also be brought into play. Whether this is a heavy cognitive "load" or not will in turn depend on the WM capacity of the individual. [10] This underscores the value of assessing qualitative differences in speech processing.

Modulated noise maskers may be processed as if they were speech maskers and may therefore induce an explicit, WM-based mode of processing. The ELU model is about the interaction between the implicit capacity to pick up speech elements under adverse conditions and the explicit capacity to capitalize on those elements to reconstruct what was said.

High WM capacity individuals benefit more from fast amplitude compression at low or negative input speech-to-noise ratios, but phonological precursors, such as the ability to pick up temporal fine structure, may also aid in reconstructing and discovering what was uttered in modulated noise (speech or speech-like).

The general point is that there is an overarching interaction between the focal purpose of processing in the primary task (e.g., repetition, serial recall vs. recall by category or by comprehension of the gist of a conversation) and the extent to which the secondary, distracting task taps into these processes.

 
References

1. Arlinger S, Lunner T, Lyxell B, Pichora-Fuller MK. The emergence of cognitive hearing science. Scand J Psychol 2009;50:371-84.
2. Speech understanding and aging: Working Group on Speech Understanding and Aging, Committee on Hearing, Bioacoustics, and Biomechanics, Commission on Behavioral and Social Sciences and Education, National Research Council. J Acoust Soc Am 1988;83:859-95.
3. Auer ET Jr. Spoken word recognition by eye. Scand J Psychol 2009;50:419-25.
4. Lidestam B. Visual discrimination of vowel duration. Scand J Psychol 2009;50:427-35.
5. Gallacher J. Hearing, cognitive impairment and aging: A critical review. Rev Clin Gerontol 2005;14:1-11.
6. Pichora-Fuller MK. Use of supportive context by younger and older adult listeners: Balancing bottom-up and top-down information processing. Int J Audiol 2008;47:S72-82.
7. Campbell R, Rudner M, Rönnberg J. Editorial. Scand J Psychol 2009;50:367-9.
8. Rönnberg J, Rudner M, Foo C, Lunner T. Cognition counts: A working memory system for ease of language understanding (ELU). Int J Audiol 2008;47:S171-7.
9. Rönnberg J, Rudner M, Foo C. The cognitive neuroscience of signed language: Applications to a working memory system for sign and speech. In: Bäckman L, Nyberg L, editors. Memory, aging and the brain: A Festschrift in honour of Lars-Göran Nilsson. London: Psychology Press; 2010. p. 265-86.
10. Rönnberg J. Cognition in the hearing impaired and deaf as a bridge between signal and dialogue: A framework and a model. Int J Audiol 2003;42:S68-76.
11. Lunner T. Cognitive function in relation to hearing aid use. Int J Audiol 2003;42:S49-58.
12. Lunner T, Rudner M, Rönnberg J. Cognition and hearing aids. Scand J Psychol 2009;50:395-403.
13. Rudner M, Foo C, Rönnberg J, Lunner T. Cognition and aided speech recognition in noise: Specific role for cognitive factors following nine-week experience with adjusted compression settings in hearing aids. Scand J Psychol 2009;50:405-18.
14. Lyxell B, Rönnberg J. The effects of background noise and working memory capacity on speechreading performance. Scand Audiol 1993;22:67-70.
15. Daneman M, Carpenter PA. Individual differences in integrating information between and within sentences. J Exp Psychol Learn Mem Cogn 1983;9:561-84.
16. Rönnberg J, Arlinger S, Lyxell B, Kinnefors C. Visual evoked potentials: Relation to adult speechreading and cognitive function. J Speech Lang Hear Res 1989;32:725-35.
17. George EL, Zekveld AA, Kramer SE, Goverts ST, Festen JM, Houtgast T. Auditory and nonauditory factors affecting speech reception in noise by older listeners. J Acoust Soc Am 2007;121:2362-75.
18. Zekveld AA, George EL, Kramer SE, Goverts ST, Houtgast T. The development of the text reception threshold test: A visual analogue of the speech reception threshold test. J Speech Lang Hear Res 2007;50:576-84.
19. Plomp R, Mimpen AM. Improving the reliability of testing the speech reception threshold for sentences. Audiology 1979;18:43-52.
20. Duquesnoy AJ. Effect of a single interfering noise or speech source upon the binaural sentence intelligibility of aged persons. J Acoust Soc Am 1983;74:739-43.
21. Festen JM, Plomp R. Effects of fluctuating noise and interfering speech on the speech-reception threshold for impaired and normal hearing. J Acoust Soc Am 1990;88:1725-36.
22. George EL, Festen JM, Houtgast T. Factors affecting masking release for speech in modulated noise for normal-hearing and hearing-impaired listeners. J Acoust Soc Am 2006;120:2295-311.
23. Lorenzi C, Gilbert G, Carn H, Garnier S, Moore BC. Speech perception problems of the hearing impaired reflect inability to use temporal fine structure. Proc Natl Acad Sci U S A 2006;103:18866-9.
24. Hopkins K, Moore BC, Stone MA. Effects of moderate cochlear hearing loss on the ability to benefit from temporal fine structure information in speech. J Acoust Soc Am 2008;123:1140-53.
25. Hopkins K, Moore BC. The contribution of temporal fine structure to the intelligibility of speech in steady state and modulated noise. J Acoust Soc Am 2009;125:442-6.
26. Bernstein JG, Grant KW. Auditory and auditory-visual intelligibility of speech in fluctuating maskers for normal-hearing and hearing-impaired listeners. J Acoust Soc Am 2009;125:3358-72.
27. Kramer SE, Kapteyn TS, Festen JM, Tobi H. The relationship between self-reported hearing disability and measures of auditory disability. Int J Audiol 1996;35:277-87.
28. Lunner T, Sundewall-Thorén E. Interactions between cognition, compression, and listening conditions: Effects on speech-in-noise performance in a two-channel hearing aid. J Am Acad Audiol 2007;18:539-52.
29. Dillon H. Hearing Aids. Turramurra, Australia: Boomerang Press; 2001.
30. Moore BC, Peters RW, Stone MA. Benefits of linear amplification and multichannel compression for speech comprehension in backgrounds with spectral and temporal dips. J Acoust Soc Am 1999;105:400-11.
31. Gatehouse S, Naylor G, Elberling C. Benefits from hearing aids in relation to the interaction between the user and the environment. Int J Audiol 2003;42:S77-85.
32. Akeroyd MA. Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. Int J Audiol 2008;47:S53-71.
33. Cox R, Xu J. Short and long compression release times: Speech understanding, real-world preferences, and association with cognitive ability. J Am Acad Audiol 2010;21:121-38.
34. Hagerman B. Speech recognition in slightly and fully modulated noise for hearing-impaired subjects. Int J Audiol 2002;41:321-9.
35. Foo C, Rudner M, Rönnberg J, Lunner T. Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. J Am Acad Audiol 2007;18:553-66.
36. Rudner M, Foo C, Sundewall-Thorén E, Lunner T, Rönnberg J. Phonological mismatch and explicit cognitive processing in a sample of 102 hearing-aid users. Int J Audiol 2008;47:S91-8.
37. Stenfelt S, Rönnberg J. The signal-cognition interface: Interactions between degraded auditory signals and cognitive processes. Scand J Psychol 2009;50:383-93.
38. Hannon B, Daneman M. A new tool for measuring and understanding individual differences in the component processes of reading comprehension. J Educ Psychol 2001;93:103-8.
39. Naylor G, Johannesson RB. Long-term signal-to-noise ratio at the input and output of amplitude-compression systems. J Am Acad Audiol 2009;20:161-71.
40. Mattys SL, Brooks J, Cooke M. Recognizing speech under a processing load: Dissociating energetic from informational factors. Cogn Psychol 2009;59:203-43.
41. Patterson RD, Johnsrude IS. Functional imaging of the auditory processing applied to speech sounds. Philos Trans R Soc Lond B Biol Sci 2008;363:1023-35.
42. Whalen DH, Benson RR, Richardson M, Swainson B, Clark VP, Lai S, et al. Differentiation of speech and nonspeech processing within primary auditory cortex. J Acoust Soc Am 2006;119:575-81.
43. Hughes RW, Tremblay S, Jones DM. Disruption by speech of serial short-term memory: The role of changing-state vowels. Psychon Bull Rev 2005;12:886-90.
44. Jones DM, Macken WJ. Irrelevant tones produce an irrelevant speech effect: Implications for phonological coding in working memory. J Exp Psychol Learn Mem Cogn 1993;19:369-81.
45. Marsh JE, Hughes RW, Jones DM. Interference by process, not content, determines semantic auditory distraction. Cognition 2009;110:23-38.
46. Baddeley AD, Larsen JD. The phonological loop unmasked? A comment on the evidence for a "perceptual-gestural" alternative. Q J Exp Psychol 2007;60:497-504.
47. Baddeley AD. The episodic buffer: A new component of working memory? Trends Cogn Sci 2000;4:417-23.
48. Hygge S, Rönnberg J, Larsby B, Arlinger S. Normal-hearing and hearing-impaired subjects' ability to just follow conversation in competing speech, reversed speech, and noise backgrounds. J Speech Hear Res 1992;35:208-15.
49. Pichora-Fuller MK. Processing speed and timing in aging adults: Psychoacoustics, speech perception, and comprehension. Int J Audiol 2003;42:S59-67.
50. Binns C, Culling JF. The role of fundamental frequency contours in the perception of speech against interfering speech. J Acoust Soc Am 2007;122:1765-76.
51. Rhebergen K, Versfeld NJ, Dreschler WA. Release from informational masking by time reversal of native and non-native interfering speech. J Acoust Soc Am 2005;118:1274-7.
52. Bronkhorst AW. The cocktail party phenomenon: A review of research on speech intelligibility in multiple-talker conditions. Acust Acta Acust 2000;86:117-28.
53. Lyxell B, Borg E, Ohlsson IS. Cognitive skills and perceived effort in active and passive listening in a naturalistic sound environment. Publications from the Sound Environment Centre at Lund University, Report No. 8; p. 91-104.

Correspondence Address:
Jerker Rönnberg
Department of Behavioral Sciences and Learning, Linköping University, S-581 83, Linköping, Sweden


Source of Support: The Swedish Research Council 349-2007-8654, Conflict of Interest: None


DOI: 10.4103/1463-1741.70505

