A. John Van Opstal
Radboud University Nijmegen
Publications
Featured research published by A. John Van Opstal.
Nature Neuroscience | 1998
Paul M. Hofman; Jos G.A. Van Riswick; A. John Van Opstal
Because the inner ear is not organized spatially, sound localization relies on the neural processing of implicit acoustic cues. To determine a sound's position, the brain must learn and calibrate these cues, using accurate spatial feedback from other sensorimotor systems. Experimental evidence for such a calibration system has been demonstrated in barn owls, but not in humans. Here, we demonstrate the existence of ongoing spatial calibration in the adult human auditory system. The spectral elevation cues of human subjects were disrupted by modifying their outer ears (pinnae) with molds. Although localization of sound elevation was dramatically degraded immediately after the modification, accurate performance was steadily reacquired. Interestingly, learning the new spectral cues did not interfere with the neural representation of the original cues, as subjects could localize sounds with both normal and modified pinnae.
Brain Research Bulletin | 1998
Maarten A. Frens; A. John Van Opstal
This paper reports on single-unit activity of saccade-related burst neurons (SRBNs) in the intermediate and deep layers of the monkey superior colliculus (SC), evoked by bimodal sensory stimulation. Monkeys were trained to generate saccadic eye movements towards visual stimuli, either in a unimodal visual saccade task or in a bimodal visual-auditory task. In the latter task, the monkeys were required to make an accurate saccade towards a visual target while ignoring an auditory stimulus. The presentation of an auditory stimulus in temporal and spatial proximity to the visual target influenced neither the accuracy nor the kinematic properties of the evoked saccades. However, it had a significant effect on the activity of 90% (45/50) of the SRBNs. The motor-related burst increased significantly in some neurons, but was suppressed in others. In visual-movement cells, comparable bimodal interactions were observed in both the visually evoked burst and the movement-related burst. The large differences observed in the movement-related activity of SRBNs for identical saccades under different sensory conditions do not support the hypothesis that such cells encode dynamic motor error. The only behavioral parameter affected by the presentation of the auditory stimulus was saccade latency. Auditory stimulation changed saccade latency in the majority of the experiments, yet the timing of peak collicular motor activity and saccade onset remained tightly coupled for all stimulus configurations. In addition, saccade latency varied as a function of the distance between the stimuli in 36% of the recordings. Interestingly, the occurrence of a spatial latency effect covaried significantly with a similar spatial influence on the SRBNs' firing rate. These cells were always most active in the bimodal task when both stimuli were in spatial register, and their activity decreased with increasing stimulus separation.
The Journal of Neuroscience | 2004
Marc M. Van Wanrooij; A. John Van Opstal
Monaurally deaf people lack the binaural acoustic difference cues in sound level and timing that are needed to encode sound location in the horizontal plane (azimuth). It has been proposed that these people therefore rely on the spectral pinna cues of their normal ear to localize sounds. However, the acoustic head-shadow effect (HSE) might also serve as an azimuth cue, despite its ambiguity when absolute sound levels are unknown. Here, we assess the contribution of either cue to two-dimensional (2D) sound localization in monaurally deaf listeners. In a localization test with randomly interleaved sound levels, we show that all monaurally deaf listeners relied heavily on the HSE, whereas binaural control listeners ignored this cue. However, some monaural listeners responded partly to the actual sound-source azimuth, regardless of sound level. We show that these listeners extracted azimuth information from their pinna cues. The better monaural listeners were able to localize azimuth on the basis of spectral cues, the better their ability to also localize sound-source elevation. In a subsequent localization experiment with one fixed sound level, monaural listeners rapidly adopted a strategy based on the HSE. We conclude that monaural spectral cues are not sufficient for adequate 2D sound localization under unfamiliar acoustic conditions. Thus, monaural listeners strongly rely on the ambiguous HSE, which may help them to cope with familiar acoustic environments.
Nature Neuroscience | 2003
Marcel P. Zwiers; A. John Van Opstal; Gary D. Paige
Auditory and visual target locations are encoded differently in the brain, but must be co-calibrated to maintain cross-sensory concordance. Mechanisms that adjust spatial calibration across modalities have been described (for example, prism adaptation in owls), though rudimentarily in humans. We quantified the adaptation of human sound localization in response to spatially compressed vision (0.5× lenses for 2–3 days). This induced a corresponding compression of auditory localization that was most pronounced for azimuth (minimal for elevation) and was restricted to the visual field of the lenses. Sound localization was also affected outside the field of visual–auditory interaction (shifted centrally, not compressed). These results suggest that spatially modified vision induces adaptive changes in adult human sound localization, including novel mechanisms that account for spatial compression. Findings are consistent with a model in which the central processing of sound location is encoded by recruitment rather than by a place code.
The Journal of Neuroscience | 2005
Paul M. Hofman; A. John Van Opstal
Human sound localization results primarily from the processing of binaural differences in sound level and arrival time for locations in the horizontal plane (azimuth) and of spectral shape cues generated by the head and pinnae for positions in the vertical plane (elevation). The latter mechanism incorporates two processing stages: a spectral-to-spatial mapping stage and a binaural weighting stage that determines the contribution of each ear to perceived elevation as a function of sound azimuth. We demonstrated recently that binaural pinna molds virtually abolish the ability to localize sound-source elevation, but, after several weeks, subjects regained normal localization performance. It is not clear which processing stage underlies this remarkable plasticity, because the auditory system could have learned the new spectral cues separately for each ear (spatial-mapping adaptation) or for one ear only, while extending its contribution into the contralateral hemifield (binaural-weighting adaptation). To dissociate these possibilities, we applied a long-term monaural spectral perturbation in 13 subjects. Our results show that, in eight experiments, listeners learned to localize accurately with new spectral cues that differed substantially from those provided by their own ears. Interestingly, five subjects, whose spectral cues were not sufficiently perturbed, never attained stable localization performance. Our findings indicate that the analysis of spectral cues may involve a correlation process between the sensory input and a stored spectral representation of the subject's ears, and that learning acts predominantly at a spectral-to-spatial mapping level rather than at the level of binaural weighting.
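The correlation-based spectral-to-spatial mapping described above can be illustrated with a short sketch. This is not the authors' model or code; the template bank, array shapes, and variable names are assumptions made purely for illustration: a measured spectral cue vector is correlated with a stored spectral template for each elevation, and the best-matching elevation is returned.

```python
# Conceptual sketch (an assumption, not the authors' model): estimate elevation
# by correlating a measured spectral cue vector with stored spectral templates
# ("the subject's own ears"), one template per elevation.
import numpy as np

def estimate_elevation(measured_spectrum, templates, elevations):
    """Return the elevation whose stored template correlates best with the input.

    measured_spectrum : (n_freq,) spectral cue vector (e.g., in dB)
    templates         : (n_elev, n_freq) stored spectral representations
    elevations        : (n_elev,) elevation labels in degrees
    """
    # Pearson correlation between the measurement and every template.
    m = measured_spectrum - measured_spectrum.mean()
    t = templates - templates.mean(axis=1, keepdims=True)
    corr = (t @ m) / (np.linalg.norm(t, axis=1) * np.linalg.norm(m))
    return elevations[np.argmax(corr)]

# Hypothetical use: a noisy copy of the template at +20 degrees is matched correctly.
rng = np.random.default_rng(2)
elevs = np.arange(-40, 41, 10)
bank = rng.normal(0, 1, (elevs.size, 64))
probe = bank[6] + rng.normal(0, 0.3, 64)       # bank[6] corresponds to +20 deg
print(estimate_elevation(probe, bank, elevs))  # -> 20
```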
The Journal of Neuroscience | 2004
Marcel P. Zwiers; Huib Versnel; A. John Van Opstal
The midbrain inferior colliculus (IC) is implicated in coding sound location, but evidence from behaving primates is scarce. Here we report single-unit responses to broadband sounds that were systematically varied within the two-dimensional (2D) frontal hemifield, as well as in sound level, while monkeys fixated a central visual target. Results show that IC neurons are broadly tuned to both sound-source azimuth and level in a way that can be approximated by multiplicative, planar modulation of the firing rate of the cell. In addition, a fraction of neurons also responded to elevation. This tuning, however, was more varied: some neurons were sensitive to a specific elevation; others responded to elevation in a monotonic way. Multiple-linear regression parameters varied from cell to cell, but the only topography encountered was a dorsoventral tonotopy. In a second experiment, we presented sounds from straight ahead while monkeys fixated visual targets at different positions. We found that auditory responses in a fraction of IC cells were weakly, but systematically, modulated by 2D eye position. This modulation was absent in the spontaneous firing rates, again suggesting a multiplicative interaction of acoustic and eye-position inputs. Tuning parameters to sound frequency, location, intensity, and eye position were uncorrelated. On the basis of simulations with a simple neural network model, we suggest that the population of IC cells could encode the head-centered 2D sound location and enable a direct transformation of this signal into the eye-centered topographic motor map of the superior colliculus. Both signals are required to generate rapid eye-head orienting movements toward sounds.
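As a rough illustration of the multiple-linear-regression characterization mentioned above, one can fit a plane to a cell's firing rate as a function of sound azimuth and sound level. This is a simplified, assumed sketch (hypothetical data, no multiplicative gain term), not the study's analysis code.

```python
# Minimal sketch (assumed, simplified): planar regression of firing rate on
# sound azimuth and sound level for a single, hypothetical IC cell.
import numpy as np

def fit_planar_rate(azimuth_deg, level_db, rate_spk_s):
    """Least-squares fit of rate ~ a*azimuth + b*level + c; returns [a, b, c]."""
    X = np.column_stack([azimuth_deg, level_db, np.ones_like(azimuth_deg)])
    coef, *_ = np.linalg.lstsq(X, rate_spk_s, rcond=None)
    return coef

# Hypothetical cell: rate increases toward one azimuth side and with sound level.
rng = np.random.default_rng(1)
az = rng.uniform(-90, 90, 200)
lev = rng.uniform(20, 70, 200)
rate = 0.3 * az + 0.8 * lev + 10 + rng.normal(0, 5, 200)
print(fit_planar_rate(az, lev, rate))  # coefficients near [0.3, 0.8, 10]
```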
Journal of the Acoustical Society of America | 2004
Joyce Vliegen; A. John Van Opstal
The localization of sounds in the vertical plane (elevation) deteriorates for short-duration wideband sounds at moderate to high intensities. The effect is described by a systematic decrease of the elevation gain (slope of stimulus-response relation) at short sound durations. Two hypotheses have been proposed to explain this finding. Either the sound localization system integrates over a time window that is too short to accurately extract the spectral localization cues (neural integration hypothesis), or the effect results from cochlear saturation at high intensities (adaptation hypothesis). While the neural integration model predicts that elevation gain is independent of sound level, the adaptation hypothesis holds that low elevation gains for short-duration sounds are only obtained at high intensities. Here, these predictions are tested over a larger range of stimulus parameters than has been done so far. Subjects responded with rapid head movements to noise bursts in the two-dimensional frontal space. Stimulus durations ranged from 3 to 100 ms; sound levels from 26 to 73 dB SPL. Results show that the elevation gain decreases for short noise bursts at all sound levels, a finding that supports the integration model. On the other hand, the short-duration gain also decreases at high sound levels, which is in line with the adaptation hypothesis. The finding that elevation gain was a nonmonotonic function of sound level for all sound durations, however, is predicted by neither model. It is concluded that both mechanisms underlie the elevation gain effect and a conceptual model is proposed to reconcile these findings.
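Since the elevation gain is defined above as the slope of the stimulus-response relation, it can be estimated with a simple linear regression. The sketch below is illustrative only; the data and variable names are hypothetical.

```python
# Illustrative sketch (not from the paper): the "elevation gain" is the slope
# of the linear regression of response elevation on stimulus elevation.
import numpy as np

def elevation_gain(stim_elev_deg, resp_elev_deg):
    """Return (gain, bias) of the best-fitting line resp = gain * stim + bias."""
    gain, bias = np.polyfit(stim_elev_deg, resp_elev_deg, deg=1)
    return gain, bias

# Hypothetical example: responses that undershoot the targets by 40%.
stim = np.array([-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0])
resp = 0.6 * stim + np.random.default_rng(0).normal(0, 2, stim.size)
print(elevation_gain(stim, resp))  # gain close to 0.6, bias near 0
```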
NeuroImage | 2012
Gleb Bezgin; Vasily A. Vakorin; A. John Van Opstal; Anthony R. McIntosh; Rembrandt Bakker
Non-invasive measuring methods such as EEG/MEG, fMRI and DTI are increasingly utilised to extract quantitative information on functional and anatomical connectivity in the human brain. These methods typically register their data in Euclidean space, so that one can refer to a particular activity pattern by specifying its spatial coordinates. Since each of these methods has limited resolution in either the time or spatial domain, incorporating additional data, such as those obtained from invasive animal studies, would be highly beneficial to link structure and function. Here we describe an approach to spatially register all cortical brain regions from the macaque structural connectivity database CoCoMac, which contains the combined tracing study results from 459 publications (http://cocomac.g-node.org). Brain regions from 9 different brain maps were directly mapped to a standard macaque cortex using the tool Caret (Van Essen and Dierker, 2007). The remaining regions in the CoCoMac database were semantically linked to these 9 maps using previously developed algebraic and machine-learning techniques (Bezgin et al., 2008; Stephan et al., 2000). We analysed neural connectivity using several graph-theoretical measures to capture global properties of the derived network, and found that Markov Centrality provides the most direct link between structure and function. With this registration approach, users can query the CoCoMac database by specifying spatial coordinates. Availability of deformation tools and homology evidence then allow one to directly attribute detailed anatomical animal data to human experimental results.
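Of the graph-theoretical measures mentioned above, Markov Centrality rates a node by how quickly a random walk reaches it, on average, from everywhere in the network. A minimal sketch follows; the toy adjacency matrix is an assumption for illustration (not CoCoMac data), and the formulation follows the standard mean-first-passage-time definition rather than the authors' code.

```python
# Hypothetical sketch: Markov Centrality on a toy directed connectivity graph.
# C_M(v) = n / sum_u m(u, v), where m(u, v) is the mean first-passage time of
# a random walk from node u to node v (graph must be strongly connected).
import numpy as np

# Toy adjacency matrix of a strongly connected directed graph (5 areas).
A = np.array([
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 1, 0, 1],
    [1, 1, 0, 0, 0],
], dtype=float)

P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
n = P.shape[0]

# Stationary distribution pi: dominant left eigenvector of P, normalized.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# Fundamental matrix Z = (I - P + W)^-1, where every row of W equals pi.
W = np.tile(pi, (n, 1))
Z = np.linalg.inv(np.eye(n) - P + W)

# Mean first-passage times m[u, v] = (Z[v, v] - Z[u, v]) / pi[v].
M = (np.diag(Z)[None, :] - Z) / pi[None, :]

# Markov Centrality: high for nodes reached quickly, on average, from everywhere.
centrality = n / M.sum(axis=0)
print(np.round(centrality, 3))
```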
Attention, Perception, & Psychophysics | 2008
Lars Riecke; A. John Van Opstal; Elia Formisano
A sound that is briefly interrupted by a silent gap is perceived as discontinuous. However, when the gap is filled with noise, the sound may be perceived as continuing through the noise. It has been shown that this continuity illusion depends on the masking of the omitted target sound, but the underlying mechanisms have yet to be quantified thoroughly. In this article, we systematically quantify the relation between perceived continuity and the duration, relative power, or notch width of the interrupting broadband noise for interrupted and noninterrupted amplitude-modulated tones at different frequencies. We fitted the psychometric results in order to estimate the range of the noise parameters that induced auditory grouping. To explain our results within a common theoretical framework, we applied a power spectrum model to the different masking results and estimated the critical bandwidth of the auditory filter that may be responsible for the continuity illusion. Our results set constraints on the spectral resolution of the mechanisms underlying the continuity illusion and provide a stimulus set that can be readily applied for neurophysiological studies of its neural correlates.
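Fitting psychometric results of this kind is typically done with a sigmoid. The sketch below is a hypothetical example (assumed data, parameter names, and logistic form; not the paper's fitting procedure) of estimating the notch width at which perceived continuity drops to 50%.

```python
# Illustrative sketch (assumed data): fit a logistic psychometric function to
# the proportion of "continuous" judgments versus the spectral notch width of
# the interrupting noise.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x50, slope):
    """Probability of hearing continuity as a function of notch width x."""
    return 1.0 / (1.0 + np.exp((x - x50) / slope))

# Hypothetical data: continuity breaks down as the notch widens (in octaves).
notch_width = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0])
p_continuous = np.array([0.95, 0.90, 0.75, 0.55, 0.35, 0.15, 0.05])

(x50, slope), _ = curve_fit(logistic, notch_width, p_continuous, p0=[0.8, 0.3])
print(f"50% continuity point: {x50:.2f} octaves, slope: {slope:.2f}")
```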
JARO: Journal of the Association for Research in Otolaryngology | 2011
Martijn J. H. Agterberg; A.F.M. Snik; Myrthe K. S. Hol; Thamar E. M. van Esch; C.W.R.J. Cremers; Marc M. Van Wanrooij; A. John Van Opstal
We examined horizontal directional hearing in patients with acquired severe unilateral conductive hearing loss (UCHL). All patients (n = 12) had been fitted with a bone-conduction device (BCD) to restore bilateral hearing. The patients were tested in the unaided (monaural) and aided (binaural) hearing condition. Five listeners without hearing loss were tested as a control group while listening with a monaural plug and earmuff, or with both ears (binaural). We randomly varied stimulus presentation levels to assess whether listeners relied on the acoustic head-shadow effect (HSE) for horizontal (azimuth) localization. Moreover, to prevent sound localization on the basis of monaural spectral shape cues from the head and pinna, subjects were exposed to narrow-band (1/3-octave) noises. We demonstrate that the BCD significantly improved sound localization in 8 of the 12 UCHL patients. Interestingly, under monaural hearing (BCD off), we observed fairly good unaided azimuth localization performance in 4 of the 12 patients. Our multiple regression analysis shows that all patients relied on the ambiguous HSE for localization. In contrast, acutely plugged control listeners did not employ the HSE. Our data confirm and further extend the results of recent studies on the use of sound localization cues in chronic and acute monaural listening.
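The multiple regression analysis referred to above can be sketched as follows; the variable names and data are hypothetical, and the model (response azimuth as a weighted sum of target azimuth and stimulus level) is a simplified stand-in for the study's actual analysis. A large level coefficient indicates reliance on the HSE, whereas a target-azimuth coefficient near 1 indicates veridical azimuth localization.

```python
# Illustrative sketch (assumed variable names, not the study's code): multiple
# regression of response azimuth on target azimuth and stimulus level.
import numpy as np

def regress_azimuth(target_az_deg, level_db, response_az_deg):
    """Least-squares fit of response ~ a*target + b*level + c; returns [a, b, c]."""
    X = np.column_stack([target_az_deg, level_db, np.ones_like(target_az_deg)])
    coef, *_ = np.linalg.lstsq(X, response_az_deg, rcond=None)
    return coef

# Hypothetical monaural listener whose responses are driven mainly by sound level.
rng = np.random.default_rng(3)
target = rng.uniform(-75, 75, 150)
level = rng.uniform(45, 65, 150)
response = 0.2 * target + 2.5 * (level - 55) + rng.normal(0, 8, 150)
print(regress_azimuth(target, level, response))  # small azimuth weight, large level weight
```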