Publication


Featured research published by Adelbert W. Bronkhorst.


Journal of the Acoustical Society of America | 1987

The effect of head‐induced interaural time and level differences on speech intelligibility in noise

Adelbert W. Bronkhorst; Rosina Plomp

A study was made of the effect of interaural time delay (ITD) and acoustic headshadow on binaural speech intelligibility in noise. A free-field condition was simulated by presenting recordings, made with a KEMAR manikin in an anechoic room, through earphones. Recordings were made of speech, reproduced in front of the manikin, and of noise, emanating from seven angles in the azimuthal plane, ranging from 0 degrees (frontal) to 180 degrees in steps of 30 degrees. From this noise, two signals were derived, one containing only ITD, the other containing only interaural level differences (ILD) due to headshadow. Using this material, speech reception thresholds (SRT) for sentences in noise were determined for a group of normal-hearing subjects. Results show that (1) for noise azimuths between 30 degrees and 150 degrees, the gain due to ITD lies between 3.9 and 5.1 dB, while the gain due to ILD ranges from 3.5 to 7.8 dB, and (2) ILD decreases the effectiveness of binaural unmasking due to ITD (on the average, the threshold shift drops from 4.6 to 2.6 dB). In a second experiment, also conducted with normal-hearing subjects, similar stimuli were used, but now presented monaurally or with an overall 20-dB attenuation in one channel, in order to simulate hearing loss. In addition, SRTs were determined for noise with fixed ITDs, for comparison with the results obtained with head-induced (frequency-dependent) ITDs. (Abstract truncated at 250 words.)
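The separation into ITD-only and ILD-only signals can be pictured with a small sketch. This is not the authors' actual signal processing; the peak-based delay estimate, the RMS-based level estimate, and the function names are assumptions made purely for illustration.

```python
# Hypothetical illustration of deriving an ITD-only and an ILD-only version of a
# binaural (left/right) impulse-response pair. Not the processing used in the study.
import numpy as np

def split_itd_ild(h_left, h_right):
    """Return (ITD-only, ILD-only) impulse-response pairs as (left, right) tuples."""
    h_left = np.asarray(h_left, dtype=float)
    h_right = np.asarray(h_right, dtype=float)

    # Delay of each ear taken from the strongest peak (assumes the direct sound dominates).
    d_left = int(np.argmax(np.abs(h_left)))
    d_right = int(np.argmax(np.abs(h_right)))
    # Level of each ear taken as the RMS of the full response.
    g_left = np.sqrt(np.mean(h_left ** 2))
    g_right = np.sqrt(np.mean(h_right ** 2))

    n = max(len(h_left), len(h_right))
    def impulse(delay, gain):
        out = np.zeros(n)
        out[delay] = gain
        return out

    # ITD only: keep the interaural delay, equalize the levels.
    itd_pair = (impulse(d_left, 1.0), impulse(d_right, 1.0))
    # ILD only: keep the interaural level difference, align the delays.
    d_mean = (d_left + d_right) // 2
    ild_pair = (impulse(d_mean, g_left), impulse(d_mean, g_right))
    return itd_pair, ild_pair
```

Noise convolved with the ITD-only pair then carries only the interaural delay, while noise convolved with the ILD-only pair carries only the head-shadow level difference.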


Journal of Experimental Psychology: Human Perception and Performance | 2008

Pip and Pop: Nonspatial Auditory Signals Improve Spatial Visual Search

E. van der Burg; Christian N. L. Olivers; Adelbert W. Bronkhorst; Jan Theeuwes

Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location or identity of the visual object. The experiments also show that the effect is not due to general alerting (because it does not occur with visual cues), nor is it due to top-down cuing of the visual change (because it still occurs when the pip is synchronized with distractors on the majority of trials). Instead, we propose that the temporal information of the auditory signal is integrated with the visual signal, generating a relatively salient emergent feature that automatically draws attention. Phenomenally, the synchronous pip makes the visual object pop out from its complex environment, providing a direct demonstration of spatially nonspecific sounds affecting competition in spatial visual processing.


Journal of Vision | 2008

Audiovisual events capture attention: Evidence from temporal order judgments

E. van der Burg; Christian N. L. Olivers; Adelbert W. Bronkhorst; Jan Theeuwes

Is an irrelevant audiovisual event able to guide attention automatically? In Experiments 1 and 2, participants were asked to make a temporal order judgment (TOJ) about which of two dots (left or right) appeared first. In Experiment 3, participants were asked to make a simultaneity judgment (SJ) instead. Such tasks have been shown to be affected by attention. Lateral to each of the dots, nine irrelevant distractors continuously changed color. Prior to the presentation of the first dot, a spatially non-informative tone was synchronized with the color change of one of these distractors, either on the same side or on the opposite side of the first dot. Even though both the tone and the distractors were completely irrelevant to the task, TOJs were affected by the synchronized distractor. TOJs were not affected when the tone was absent or synchronized with distractors on both sides. SJs were also affected by the synchronized distractor, ruling out an alternative response bias hypothesis. We conclude that audiovisual synchrony guides attention in an exogenous manner.


Nature | 1999

Auditory distance perception in rooms

Adelbert W. Bronkhorst; Tammo Houtgast

The perceived distance of a sound source in a room has been shown to depend on the ratio of the energies of direct and reflected sound. Although this relationship was verified in later studies, the research has never led to a quantitative model. The advent of techniques for the generation of virtual sound sources has made it possible to study distance perception using controlled, deterministic stimuli. Here we present two experiments that make use of such stimuli and we show that a simple model, based on a modified direct-to-reverberant energy ratio, can accurately predict the results and also provide an explanation for the ‘auditory horizon’ in distance perception. The modification of the ratio consists of the use of an integration time of 6 milliseconds in the calculation of the energy of the direct sound. This time constant seems to be important in spatial hearing; the precedence effect is also based on a similar integration window.
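The modified ratio lends itself to a compact formulation. The sketch below assumes the direct sound corresponds to the strongest peak of the room impulse response, integrates the energy over the first 6 ms after that peak, and treats everything later as reverberant; the function name and the peak-picking are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of a modified direct-to-reverberant energy ratio with a 6-ms
# integration window for the direct sound (illustrative, not the paper's code).
import numpy as np

def direct_to_reverberant_db(rir, fs, window_ms=6.0):
    """Energy ratio (dB) of a room impulse response, 6-ms direct-sound window."""
    rir = np.asarray(rir, dtype=float)
    onset = int(np.argmax(np.abs(rir)))              # assume the direct sound is the strongest peak
    cut = onset + int(round(window_ms * 1e-3 * fs))  # end of the 6-ms direct-sound window
    direct = np.sum(rir[onset:cut] ** 2)
    reverberant = np.sum(rir[cut:] ** 2)
    return 10.0 * np.log10(direct / reverberant)
```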


Journal of the Acoustical Society of America | 1995

Localization of real and virtual sound sources

Adelbert W. Bronkhorst

Localization of real and virtual sound sources was studied using two tasks. In the first task, subjects had to turn their head while the sound was continuously on and press a button when they thought they faced the source. In the second task, the source only produced a short sound and the subjects had to indicate, by pressing one of eight buttons, in which quadrant the source was located, and whether it was located above or below the horizontal plane. Virtual sound sources were created using head‐related transfer functions (HRTFs), measured with probe microphones placed in the ear canals of the subjects. Sound stimuli were harmonic signals with a fundamental frequency of 250 Hz and an upper frequency ranging from 4 to 15 kHz. Results, obtained from eight subjects, show that localization performance for real and virtual sources was similar in both tasks, provided that the stimuli did not contain frequencies above 7 kHz. When frequencies up to 15 kHz were included, performance for virtual sources was, in ge...
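As a rough illustration of this kind of stimulus generation, the sketch below builds a harmonic complex with a 250-Hz fundamental up to a chosen upper frequency and renders it as a virtual source by convolving it with a left/right HRIR pair. The equal-amplitude, zero-phase harmonics and the function names are assumptions for illustration only.

```python
# Hypothetical sketch: harmonic-complex stimulus plus HRIR-based virtual-source rendering.
import numpy as np

def harmonic_complex(f0=250.0, f_max=7000.0, dur=0.5, fs=48000):
    """Sum of equal-amplitude sine harmonics of f0 up to f_max (assumed spectrum)."""
    t = np.arange(int(dur * fs)) / fs
    harmonics = np.arange(f0, f_max + 1e-9, f0)      # 250 Hz, 500 Hz, ... up to f_max
    return np.sum(np.sin(2 * np.pi * harmonics[:, None] * t), axis=0)

def render_virtual_source(signal, hrir_left, hrir_right):
    """Filter the same signal with the left- and right-ear impulse responses."""
    left = np.convolve(signal, hrir_left)
    right = np.convolve(signal, hrir_right)
    return np.stack([left, right])
```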


Journal of the Acoustical Society of America | 1999

Contribution of spectral cues to human sound localization

Erno H. A. Langendijk; Adelbert W. Bronkhorst

The contribution of spectral cues to human sound localization was investigated by removing cues in 1/2-, 1- or 2-octave bands in the frequency range above 4 kHz. Localization responses were given by placing an acoustic pointer at the same apparent position as a virtual target. The pointer was generated by filtering a 100-ms harmonic complex with equalized head-related transfer functions (HRTFs). Listeners controlled the pointer via a hand-held stick that rotated about a fixed point. In the baseline condition, the target, a 200-ms noise burst, was filtered with the same HRTFs as the pointer. In other conditions, the spectral information within a certain frequency band was removed by replacing the directional transfer function within this band with the average transfer of this band. Analysis of the data showed that removing cues in 1/2-octave bands did not affect localization, whereas for the 2-octave band correct localization was virtually impossible. The results obtained for the 1-octave bands indicate that up-down cues are located mainly in the 6-12-kHz band, and front-back cues in the 8-16-kHz band. The interindividual spread in response patterns suggests that different listeners use different localization cues. The response patterns in the median plane can be predicted using a model based on spectral comparison of directional transfer functions for target and response directions.
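The band-wise cue removal can be sketched as follows: within the chosen band, the magnitude of the directional transfer function is replaced by its average over that band, so the band no longer carries direction-dependent spectral detail. Operating on the magnitude spectrum while keeping the original phase is an assumption made here for illustration; the study's own manipulation may differ.

```python
# Rough sketch of removing spectral cues in one frequency band of a directional
# transfer function by flattening its magnitude to the band average (illustrative).
import numpy as np

def remove_spectral_cues(dtf_ir, fs, band_hz):
    """Flatten a directional transfer function (impulse response) within band_hz = (f_lo, f_hi)."""
    dtf_ir = np.asarray(dtf_ir, dtype=float)
    spectrum = np.fft.rfft(dtf_ir)
    freqs = np.fft.rfftfreq(len(dtf_ir), d=1.0 / fs)
    band = (freqs >= band_hz[0]) & (freqs < band_hz[1])
    mag, phase = np.abs(spectrum), np.angle(spectrum)
    mag[band] = mag[band].mean()                     # replace with the average transfer of the band
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(dtf_ir))
```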


Journal of the Acoustical Society of America | 2000

Multichannel speech intelligibility and talker recognition using monaural, binaural, and three-dimensional auditory presentation

Rob Drullman; Adelbert W. Bronkhorst

In a 3D auditory display, sounds are presented over headphones in such a way that they seem to originate from virtual sources in a space around the listener. This paper describes a study on the possible merits of such a display for bandlimited speech with respect to intelligibility and talker recognition against a background of competing voices. Different conditions were investigated: speech material (words/sentences), presentation mode (monaural/binaural/3D), number of competing talkers (1-4), and virtual position of the talkers (in 45-degree steps around the front horizontal plane). Average results for 12 listeners show an increase in speech intelligibility for 3D presentation for two or more competing talkers compared to conventional binaural presentation. The ability to recognize a talker is slightly better and the time required for recognition is significantly shorter for 3D presentation in the presence of two or three competing talkers. Although absolute localization of a talker is rather poor, spatial separation appears to have a significant effect on communication. For speech intelligibility, talker recognition, and localization alike, no difference is found between the use of an individualized 3D auditory display and a general display.


Acta Psychologica | 2010

Attention and the multiple stages of multisensory integration: A review of audiovisual studies

Thomas Koelewijn; Adelbert W. Bronkhorst; Jan Theeuwes

Multisensory integration and crossmodal attention have a large impact on how we perceive the world. Therefore, it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and in what brain areas these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review focuses on the question of whether audiovisual interactions, and crossmodal attention in particular, are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework suggesting that both multisensory integration and attentional processes take place and can interact at multiple stages in the brain.


Journal of the Acoustical Society of America | 2000

Fidelity of three-dimensional-sound reproduction using a virtual auditory display

Erno H. A. Langendijk; Adelbert W. Bronkhorst

The fidelity of reproducing free-field sounds using a virtual auditory display was investigated in two experiments. In the first experiment, listeners directly compared stimuli from an actual loudspeaker in the free field with those from small headphones placed in front of the ears. Headphone stimuli were filtered using head-related transfer functions (HRTFs), recorded while listeners were wearing the headphones, in order to reproduce the pressure signatures of the free-field sounds at the eardrum. Discriminability was investigated for six sound-source positions using broadband noise as a stimulus. The results show that the acoustic percepts of real and virtual sounds were identical. In the second experiment, discrimination between virtual sounds generated with measured and interpolated HRTFs was investigated. Interpolation was performed using HRTFs measured for loudspeaker positions with different spatial resolutions. Broadband noise bursts with flat and scrambled spectra were used as stimuli. The results indicate that, for a spatial resolution of about 6 degrees, the interpolation does not introduce audible cues. For resolutions of 20 degrees or more, the interpolation introduces audible cues related to timbre and position. For intermediate resolutions (10 degrees - 15 degrees) the data suggest that only timbre cues were used.
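A minimal way to picture the interpolation is a linear blend of the complex HRTF spectra measured at the two nearest directions, weighted by angular distance, as in the sketch below. The study does not specify this particular scheme; it is an assumption for illustration only.

```python
# Illustrative sketch of interpolating an HRTF between two measured directions
# by linearly blending their complex spectra (not necessarily the study's method).
import numpy as np

def interpolate_hrtf(hrtf_a, hrtf_b, angle_a, angle_b, angle_target):
    """Linearly blend two complex HRTF spectra measured at angle_a and angle_b (degrees)."""
    w = (angle_target - angle_a) / (angle_b - angle_a)   # 0 at angle_a, 1 at angle_b
    return (1.0 - w) * np.asarray(hrtf_a) + w * np.asarray(hrtf_b)
```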


Attention Perception & Psychophysics | 2015

The cocktail-party problem revisited: early processing and selection of multi-talker speech

Adelbert W. Bronkhorst

How do we recognize what one person is saying when others are speaking at the same time? This review summarizes widespread research in psychoacoustics, auditory scene analysis, and attention, all dealing with early processing and selection of speech, which has been stimulated by this question. Important effects occurring at the peripheral and brainstem levels are mutual masking of sounds and “unmasking” resulting from binaural listening. Psychoacoustic models have been developed that can predict these effects accurately, albeit using computational approaches rather than approximations of neural processing. Grouping—the segregation and streaming of sounds—represents a subsequent processing stage that interacts closely with attention. Sounds can be easily grouped—and subsequently selected—using primitive features such as spatial location and fundamental frequency. More complex processing is required when lexical, syntactic, or semantic information is used. Whereas it is now clear that such processing can take place preattentively, there also is evidence that the processing depth depends on the task-relevancy of the sound. This is consistent with the presence of a feedback loop in attentional control, triggering enhancement of to-be-selected input. Despite recent progress, there are still many unresolved issues: there is a need for integrative models that are neurophysiologically plausible, for research into grouping based on other than spatial or voice-related cues, for studies explicitly addressing endogenous and exogenous attention, for an explanation of the remarkable sluggishness of attention focused on dynamically changing sounds, and for research elucidating the distinction between binaural speech perception and sound localization.

Collaboration


Dive into Adelbert W. Bronkhorst's collaborations.

Top Co-Authors

Jan Theeuwes, VU University Amsterdam
Kim White, VU University Amsterdam
Michael Arntzen, Delft University of Technology
Arjan J. Bosman, Radboud University Nijmegen