Thomas Koelewijn
VU University Amsterdam
Publications
Featured research published by Thomas Koelewijn.
Acta Psychologica | 2010
Thomas Koelewijn; Adelbert W. Bronkhorst; Jan Theeuwes
Multisensory integration and crossmodal attention have a large impact on how we perceive the world. Therefore, it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and in what brain areas these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review focuses on the question of whether audiovisual interactions, and crossmodal attention in particular, are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework suggesting that both multisensory integration and attentional processes take place and can interact at multiple stages in the brain.
Ear and Hearing | 2012
Thomas Koelewijn; Adriana A. Zekveld; Joost M. Festen; Sophia E. Kramer
Objectives: Recent research has demonstrated that pupil dilation, a measure of mental effort (cognitive processing load), is sensitive to differences in speech intelligibility. The present study extends this outcome by examining the effects of masker type and age on the speech reception threshold (SRT) and mental effort. Design: In young and middle-aged adults, pupil dilation was measured while they performed an SRT task, in which spoken sentences were presented in stationary noise, fluctuating noise, or together with a single-talker masker. The masker levels were adjusted to achieve 50% or 84% sentence intelligibility. Results: The results show better SRTs for fluctuating noise and a single-talker masker compared with stationary noise, which replicates results of previous studies. The peak pupil dilation, reflecting mental effort, was larger in the single-talker masker condition compared with the other masker conditions. Remarkably, in contrast to the thresholds, no differences in peak dilation were observed between fluctuating noise and stationary noise. This effect was independent of the intelligibility level and age. Conclusions: To maintain similar intelligibility levels, participants needed more mental effort for speech perception in the presence of a single-talker masker than with the two other types of maskers. This suggests an additive interfering effect of speech information from the single-talker masker. The dissociation between these performance and mental effort measures underlines the importance of including measurements of pupil dilation as an independent index of mental effort during speech processing in different types of noisy environments and at different intelligibility levels.
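Fixed intelligibility points such as 50% or 84% are typically reached with an adaptive up-down procedure that adjusts the signal-to-noise ratio (SNR) trial by trial. The following is a minimal Python sketch of a simple 1-up-1-down track converging on the 50% point; the starting SNR, 2 dB step size, and trial-discard rule are illustrative assumptions, not the study's exact procedure.

```python
# Minimal 1-up-1-down adaptive track: a correct response makes the next
# trial harder (lower SNR), an incorrect response makes it easier.

def srt_track(responses, start_snr=0.0, step=2.0):
    """Return the SNR (dB) presented on each trial, given a sequence of
    correct/incorrect responses."""
    snr = start_snr
    levels = []
    for correct in responses:
        levels.append(snr)
        snr += -step if correct else step
    return levels

def estimate_srt(levels, discard=4):
    """Conventional SRT estimate: mean SNR over the trials remaining after
    discarding the initial approach to threshold."""
    tail = levels[discard:]
    return sum(tail) / len(tail)
```

For example, the response pattern correct-correct-incorrect-correct-incorrect-incorrect produces the levels `[0, -2, -4, -2, -4, -2]`, and averaging the final trials yields the threshold estimate.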
International Journal of Otolaryngology | 2012
Thomas Koelewijn; Adriana A. Zekveld; Joost M. Festen; Jerker Rönnberg; Sophia E. Kramer
It is often assumed that the benefit of hearing aids is not primarily reflected in better speech performance, but rather in less effortful listening in the aided than in the unaided condition. Before such a hearing aid benefit can be assessed, the present study examined how processing load while listening to masked speech relates to inter-individual differences in cognitive abilities relevant for language processing. Pupil dilation was measured in thirty-two normal-hearing participants while they listened to sentences masked by fluctuating noise or interfering speech at either 50% or 84% intelligibility. Additionally, working memory capacity, inhibition of irrelevant information, and written text reception were tested. Pupil responses were larger during interfering speech as compared to fluctuating noise. This effect was independent of intelligibility level. Regression analysis revealed that high working memory capacity, better inhibition, and better text reception were related to better speech reception thresholds. Apart from a positive relation to speech recognition, better inhibition and better text reception were also positively related to larger pupil dilation in the single-talker masker conditions. We conclude that better cognitive abilities not only relate to better speech perception, but also partly explain higher processing load in complex listening conditions.
Trends in Amplification | 2013
Jana Besser; Thomas Koelewijn; Adriana A. Zekveld; Sophia E. Kramer; Joost M. Festen
The ability to recognize masked speech, commonly measured with a speech reception threshold (SRT) test, is associated with cognitive processing abilities. Two cognitive factors frequently assessed in speech recognition research are the capacity of working memory (WM), measured by means of a reading span (Rspan) or listening span (Lspan) test, and the ability to read masked text (linguistic closure), measured by the text reception threshold (TRT). The current article provides a review of recent hearing research that examined the relationship of TRT and WM span to SRTs in various maskers. Furthermore, modality differences in WM capacity assessed with the Rspan compared to the Lspan test were examined and related to speech recognition abilities in an experimental study with young adults with normal hearing (NH). Span scores were strongly associated with each other, but were higher in the auditory modality. The results of the reviewed studies suggest that TRT and WM span are related to each other, but differ in their relationships with SRT performance. In NH adults of middle age or older, both TRT and Rspan were associated with SRTs in speech maskers, whereas TRT better predicted speech recognition in fluctuating nonspeech maskers. The associations with SRTs in steady-state noise were inconclusive for both measures. WM span was positively related to benefit from contextual information in speech recognition, but better TRTs related to less interference from unrelated cues. Data for individuals with impaired hearing are limited, but larger WM span seems to give a general advantage in various listening situations.
Social Neuroscience | 2008
Hein T. van Schie; Thomas Koelewijn; Ole Jensen; Robert Oostenveld; Eric Maris; Harold Bekkering
Abstract Lateralized magnetic fields were recorded from 12 subjects using a 151-channel magnetoencephalography (MEG) system to investigate temporal and functional properties of motor activation to the observation of goal-directed hand movements by a virtual actor. Observation of left and right hand movements generated a neuromagnetic lateralized readiness field (LRF) over contralateral motor cortex. The early onset of the LRF and the fact that the evoked component was insensitive to the correctness of the observed action suggest the operation of a fast and automatic form of motor resonance that may precede higher levels of action understanding.
Journal of Experimental Psychology: Human Perception and Performance | 2009
Thomas Koelewijn; Adelbert W. Bronkhorst; Jan Theeuwes
It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control.
Hearing Research | 2014
Thomas Koelewijn; Barbara G. Shinn-Cunningham; Adriana A. Zekveld; Sophia E. Kramer
Dividing attention over two streams of speech strongly decreases performance compared to focusing on only one. How divided attention affects cognitive processing load as indexed with pupillometry during speech recognition has so far not been investigated. In 12 young adults the pupil response was recorded while they focused on either one or both of two sentences that were presented dichotically and masked by fluctuating noise across a range of signal-to-noise ratios. In line with previous studies, performance decreased when processing two target sentences instead of one. Additionally, dividing attention to process two sentences caused larger pupil dilation and a later peak pupil latency than processing only one. This suggests an effect of attention on cognitive processing load (pupil dilation) during speech processing in noise.
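The two pupillometry measures mentioned here, peak pupil dilation and peak latency, are commonly derived from the pupil trace relative to a pre-stimulus baseline. A hedged Python sketch of that extraction follows; the baseline window, sampling rate, and units are illustrative assumptions rather than the authors' exact analysis pipeline.

```python
# Extract peak pupil dilation and peak latency from a pupil-diameter trace.
# The first `baseline_samples` samples are treated as the pre-stimulus
# baseline; dilation is measured relative to that baseline mean.

def pupil_peak(trace, baseline_samples, fs):
    """Return (peak dilation, peak latency in seconds) for a pupil trace
    sampled at `fs` Hz, relative to the pre-stimulus baseline."""
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    relative = [x - baseline for x in trace[baseline_samples:]]
    peak = max(relative)                    # largest dilation above baseline
    latency = relative.index(peak) / fs     # time from baseline offset to peak
    return peak, latency
```

In practice, traces are usually first cleaned for blinks and averaged over trials per condition before the peak is taken; this sketch shows only the final step.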
Journal of the Acoustical Society of America | 2014
Thomas Koelewijn; Adriana A. Zekveld; Joost M. Festen; Sophia E. Kramer
A recent pupillometry study on adults with normal hearing indicates that the pupil response during speech perception (cognitive processing load) is strongly affected by the type of speech masker. The current study extends these results by recording the pupil response in 32 participants with hearing impairment (mean age 59 yr) while they were listening to sentences masked by fluctuating noise or a single talker. Efforts were made to improve audibility of all sounds by means of spectral shaping. Additionally, participants performed tests measuring verbal working memory capacity, inhibition of interfering information in working memory, and linguistic closure. The results showed worse speech reception thresholds for speech masked by single-talker speech compared to fluctuating noise. In line with previous results for participants with normal hearing, the pupil response was larger when listening to speech masked by a single talker compared to fluctuating noise. Regression analysis revealed that larger working memory capacity and better inhibition of interfering information related to better speech reception thresholds, but these variables did not account for inter-individual differences in the pupil response. In conclusion, people with hearing impairment show more cognitive load during speech processing when there is interfering speech compared to fluctuating noise.
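Several of these studies relate a cognitive predictor (e.g. a working memory span score) to the speech reception threshold via regression. As an illustration of the simplest case, here is a plain ordinary-least-squares fit of SRT on a single predictor in Python; the variable names and example data are invented for the sketch and are not taken from the study, which used multiple predictors.

```python
# Ordinary least squares for a single predictor: fit SRT = slope * x + intercept,
# where x might be a working memory span score.

def ols_fit(x, y):
    """Return (slope, intercept) minimizing squared error of y on x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Covariance of x and y over variance of x gives the OLS slope.
    cov_xy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept
```

A negative slope here would correspond to the reported pattern: larger working memory capacity associated with better (lower) speech reception thresholds.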
Hearing Research | 2015
Thomas Koelewijn; Hilde de Kluiver; Barbara G. Shinn-Cunningham; Adriana A. Zekveld; Sophia E. Kramer
Recent studies have shown that prior knowledge about where, when, and who is going to talk improves speech intelligibility. How related attentional processes affect cognitive processing load has not been investigated yet. In the current study, three experiments investigated how the pupil dilation response is affected by prior knowledge of target speech location, target speech onset, and who is going to talk. A total of 56 young adults with normal hearing participated. They had to reproduce a target sentence presented to one ear while ignoring a distracting sentence simultaneously presented to the other ear. The two sentences were independently masked by fluctuating noise. Target location (left or right ear), speech onset, and talker variability were manipulated in separate experiments by keeping these features either fixed during an entire block or randomized over trials. Pupil responses were recorded during listening and performance was scored after recall. The results showed an improvement in performance when the location of the target speech was fixed instead of randomized. Additionally, location uncertainty increased the pupil dilation response, which suggests that prior knowledge of location reduces cognitive load. Interestingly, the observed pupil responses for each condition were consistent with subjective reports of listening effort. We conclude that communicating in a dynamic environment like a cocktail party (where participants in competing conversations move unpredictably) requires substantial listening effort because of the demands placed on attentional processes.
Attention Perception & Psychophysics | 2007
Erik Van der Burg; Christian N. L. Olivers; Adelbert W. Bronkhorst; Thomas Koelewijn; Jan Theeuwes
The second of two targets is often missed when presented shortly after the first target—a phenomenon referred to as the attentional blink (AB). Whereas the AB is a robust phenomenon within sensory modalities, the evidence for cross-modal ABs is rather mixed. Here, we test the possibility that the absence of an auditory-visual AB for visual letter recognition when streams of tones are used is due to the efficient use of echoic memory, allowing for the postponement of auditory processing. However, forcing participants to immediately process the auditory target, either by presenting interfering sounds during retrieval or by making the first target directly relevant for a speeded response to the second target, did not result in a return of a cross-modal AB. The findings argue against echoic memory as an explanation for efficient cross-modal processing. Instead, we hypothesized that a cross-modal AB may be observed when the different modalities use common representations, such as semantic representations. In support of this, a deficit for visual letter recognition returned when the auditory task required a distinction between spoken digits and letters.