Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Norbert Kopčo is active.

Publication


Featured research published by Norbert Kopčo.


Journal of the Acoustical Society of America | 2005

Localizing nearby sound sources in a classroom: Binaural room impulse responses

Barbara G. Shinn-Cunningham; Norbert Kopčo; Tara J. Martin

Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
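
To make the interaural cues discussed above concrete, here is a minimal sketch, not the authors' analysis code, of how a broadband interaural level difference and the direct-sound interaural time difference might be read out of a measured BRIR pair. The function name, the 44.1 kHz sampling rate, and the 2.5 ms direct-sound window are assumptions.

    import numpy as np

    def binaural_cues_from_brir(h_left, h_right, fs=44100, direct_ms=2.5):
        """Estimate broadband ILD (dB) and direct-sound ITD (s) from a BRIR pair.

        h_left, h_right : impulse responses at the two ears, trimmed so the
                          direct sound arrives within the first few milliseconds
        fs              : sampling rate in Hz (assumed value)
        direct_ms       : window treated as the direct-sound portion (assumed)
        """
        h_left = np.asarray(h_left, dtype=float)
        h_right = np.asarray(h_right, dtype=float)

        # Broadband ILD: energy ratio between the ears; positive means the
        # right-ear response carries more energy.
        ild_db = 10.0 * np.log10(np.sum(h_right ** 2) / np.sum(h_left ** 2))

        # Direct-sound ITD: cross-correlate only the initial window, which,
        # as the abstract notes, still carries the direct-sound timing cue.
        n = int(fs * direct_ms / 1000.0)
        xcorr = np.correlate(h_right[:n], h_left[:n], mode="full")
        lag = int(np.argmax(xcorr)) - (n - 1)  # >0: right ear lags the left
        itd_s = lag / fs
        return ild_db, itd_s

Applied to classroom BRIRs like those described above, the idea is that the ITD estimate from the early window stays usable, while the broadband ILD estimate is what reverberant energy compresses.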


Proceedings of the National Academy of Sciences of the United States of America | 2008

Object continuity enhances selective auditory attention

Virginia Best; Erol J. Ozmeral; Norbert Kopčo; Barbara G. Shinn-Cunningham

In complex scenes, the identity of an auditory object can build up across seconds. Given that attention operates on perceptual objects, this perceptual buildup may alter the efficacy of selective auditory attention over time. Here, we measured identification of a sequence of spoken target digits presented with distracter digits from other directions to investigate the dynamics of selective attention. Performance was better when the target location was fixed rather than changing between digits, even when listeners were cued as much as 1 s in advance about the position of each subsequent digit. Spatial continuity not only avoided well known costs associated with switching the focus of spatial attention, but also produced refinements in the spatial selectivity of attention across time. Continuity of target voice further enhanced this buildup of selective attention. Results suggest that when attention is sustained on one auditory object within a complex scene, attentional selectivity improves over time. Similar effects may come into play when attention is sustained on an object in a complex visual scene, especially in cases where visual object formation requires sustained attention.


Journal of the Acoustical Society of America | 2000

Spatial unmasking of nearby speech sources in a simulated anechoic environment.

Barbara G. Shinn-Cunningham; Jason Schickler; Norbert Kopčo; Ruth Y. Litovsky

Spatial unmasking of speech has traditionally been studied with target and masker at the same, relatively large distance. The present study investigated spatial unmasking for configurations in which the simulated sources varied in azimuth and could be either near or far from the head. Target sentences and speech-shaped noise maskers were simulated over headphones using head-related transfer functions derived from a spherical-head model. Speech reception thresholds were measured adaptively, varying target level while keeping the masker level constant at the "better" ear. Results demonstrate that small positional changes can result in very large changes in speech intelligibility when sources are near the listener as a result of large changes in the overall level of the stimuli reaching the ears. In addition, the difference in the target-to-masker ratios at the two ears can be substantially larger for nearby sources than for relatively distant sources. Predictions from an existing model of binaural speech intelligibility are in good agreement with results from all conditions comparable to those that have been tested previously. However, small but important deviations between the measured and predicted results are observed for other spatial configurations, suggesting that current theories do not accurately account for speech intelligibility for some of the novel spatial configurations tested.
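
The "better ear" in the procedure above is simply the ear with the higher target-to-masker ratio. The following is a sketch of that bookkeeping only; the signal names are hypothetical, and the ear inputs are assumed to come from a binaural simulation such as the spherical-head model mentioned in the abstract.

    import numpy as np

    def better_ear_tmr_db(target_l, target_r, masker_l, masker_r):
        """Target-to-masker ratio (dB) at each ear and which ear is 'better'.

        The four arguments are ear-input waveforms for the target and the
        speech-shaped masker after binaural simulation (hypothetical arrays).
        """
        def level_db(x):
            x = np.asarray(x, dtype=float)
            return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

        tmr_l = level_db(target_l) - level_db(masker_l)
        tmr_r = level_db(target_r) - level_db(masker_r)
        better = "left" if tmr_l >= tmr_r else "right"
        return tmr_l, tmr_r, better

For a nearby lateral target the two TMRs can differ substantially, which is the acoustic effect the abstract points to when it notes that small positional changes produce large intelligibility changes.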


Journal of the Acoustical Society of America | 2011

Effect of stimulus spectrum on distance perception for nearby sources.

Norbert Kopčo; Barbara G. Shinn-Cunningham

The effects of stimulus frequency and bandwidth on distance perception were examined for nearby sources in simulated reverberant space. Sources to the side [containing reverberation-related cues and interaural level difference (ILD) cues] and to the front (without ILDs) were simulated. Listeners judged the distance of noise bursts presented at a randomly roving level from simulated distances ranging from 0.15 to 1.7 m. Six stimuli were tested, varying in center frequency (300-5700 Hz) and bandwidth (200-5400 Hz). Performance, measured as the correlation between simulated and response distances, was worse for frontal than for lateral sources. For both simulated directions, performance was inversely proportional to the low-frequency stimulus cutoff, independent of stimulus bandwidth. The dependence of performance on frequency was stronger for frontal sources. These correlation results were well summarized by considering how mean response, as opposed to response variance, changed with stimulus direction and spectrum: (1) little bias was observed for lateral sources, but listeners consistently overestimated distance for frontal nearby sources; (2) for both directions, increasing the low-frequency cut-off reduced the range of responses. These results are consistent with the hypothesis that listeners used a direction-independent but frequency-dependent mapping of a reverberation-related cue, not the ILD cue, to judge source distance.
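
Performance above is quantified as the correlation between simulated and response distances. A minimal sketch of that measure is below; computing it on log-transformed distances is an assumption here (the abstract does not state the scale), as are the variable names.

    import numpy as np

    def distance_performance(simulated_m, responded_m, use_log=True):
        """Correlation between simulated and judged source distances.

        simulated_m, responded_m : per-trial distances in metres (hypothetical)
        use_log : if True, correlate log distances (an assumed convention)
        """
        sim = np.asarray(simulated_m, dtype=float)
        resp = np.asarray(responded_m, dtype=float)
        if use_log:
            sim, resp = np.log(sim), np.log(resp)
        return float(np.corrcoef(sim, resp)[0, 1])

    # Example with made-up responses over the 0.15-1.7 m range used in the study:
    # distance_performance([0.15, 0.5, 1.0, 1.7], [0.30, 0.55, 0.90, 1.40])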


The Journal of Neuroscience | 2009

Reference Frame of the Ventriloquism Aftereffect

Norbert Kopčo; I‐Fan Lin; Barbara G. Shinn-Cunningham; Jennifer M. Groh

Seeing the image of a newscaster on a television set causes us to think that the sound coming from the loudspeaker is actually coming from the screen. How images capture sounds is mysterious because the brain uses different methods for determining the locations of visual versus auditory stimuli. The retina senses the locations of visual objects with respect to the eyes, whereas differences in sound characteristics across the ears indicate the locations of sound sources referenced to the head. Here, we tested which reference frame (RF) is used when vision recalibrates perceived sound locations. Visually guided biases in sound localization were induced in seven humans and two monkeys who made eye movements to auditory or audiovisual stimuli. On audiovisual (training) trials, the visual component of the targets was displaced laterally by 5–6°. Interleaved auditory-only (probe) trials served to evaluate the effect of experience with mismatched visual stimuli on auditory localization. We found that the displaced visual stimuli induced a ventriloquism aftereffect in both humans (∼50% of the displacement size) and monkeys (∼25%), but only for locations around the trained spatial region, showing that audiovisual recalibration can be spatially specific. We tested the reference frame in which the recalibration occurs. On probe trials, we varied eye position relative to the head to dissociate head- from eye-centered RFs. Results indicate that both humans and monkeys use a mixture of the two RFs, suggesting that the neural mechanisms involved in ventriloquism occur in brain region(s) using a hybrid RF for encoding spatial information.
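
The head- versus eye-centered question can be pictured with a toy model in which the spatial region showing the aftereffect moves with the eyes by some fraction of the gaze shift. The sketch below is purely illustrative and is not the analysis used in the paper; the parametrisation and the example weight are assumptions.

    def aftereffect_center_deg(trained_center_deg, eye_shift_deg, w_eye=0.5):
        """Toy reference-frame model of the spatially specific aftereffect.

        trained_center_deg : head-centered azimuth of the trained region (deg)
        eye_shift_deg      : change in eye position since training (deg)
        w_eye              : weight on the eye-centered component (0..1, assumed)

        w_eye = 0 gives a head-centered prediction (the recalibrated region
        stays put when the eyes move); w_eye = 1 gives an eye-centered
        prediction (it moves fully with the eyes); intermediate values give
        the hybrid pattern reported above.
        """
        return trained_center_deg + w_eye * eye_shift_deg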


Proceedings of the National Academy of Sciences of the United States of America | 2012

Neuronal representations of distance in human auditory cortex

Norbert Kopčo; Samantha Huang; John W. Belliveau; Tommi Raij; Chinmayi Tengshe; Jyrki Ahveninen

Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15–100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects’ discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.
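
The direct-to-reverberant ratio (D/R) used above as an intensity-independent cue can be illustrated by splitting a room impulse response into an early "direct" window and the remaining reverberant tail. A minimal sketch, with the window length, sampling rate, and names all assumed:

    import numpy as np

    def direct_to_reverberant_db(rir, fs=44100, direct_ms=2.5):
        """Direct-to-reverberant energy ratio (dB) of a room impulse response.

        rir       : impulse response trimmed to start at the direct sound
        fs        : sampling rate in Hz (assumed value)
        direct_ms : duration treated as the direct path (assumed window)
        """
        rir = np.asarray(rir, dtype=float)
        n_direct = int(fs * direct_ms / 1000.0)
        direct_energy = np.sum(rir[:n_direct] ** 2)
        reverb_energy = np.sum(rir[n_direct:] ** 2)
        return 10.0 * np.log10(direct_energy / reverb_energy)

Because the direct-path energy falls with source distance while reverberant energy stays roughly constant in a given room, D/R decreases as the simulated source moves from 15 to 100 cm away, which is what makes it usable as a distance cue.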


Journal of the Acoustical Society of America | 2007

Sound localization with a preceding distractor.

Norbert Kopčo; Virginia Best; Barbara G. Shinn-Cunningham

Experiments explored how a distractor coming from a known location influences the localization of a subsequent sound, both in a classroom and in an anechoic chamber. Listeners localized a target click preceded by a distractor click coming from a location fixed throughout a run of trials (either frontal or lateral). The stimulus onset asynchrony (SOA) between distractor and target was relatively long (25-400 ms); control trials presented the target alone. The distractor induced bias and variability in target localization responses even at the longest SOA, with the specific pattern of effects differing between the two rooms. Furthermore, the presence of the distractor caused target responses to be displaced away from the distractor location in that run, even on trials with no distractor. This contextual bias built up anew in each run, over the course of minutes. The different effects illustrate that (a) sound localization is a dynamic process that depends both on the context and on the level of reverberation in the environment, and (b) interactions between sequential sound sources occur on time scales from hundreds of milliseconds to as long as minutes.
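
The two response statistics reported above, bias and variability, can be written down directly. The sketch below groups hypothetical trials by SOA; the data layout and names are assumptions.

    import numpy as np
    from collections import defaultdict

    def bias_and_variability(trials):
        """Per-SOA localization bias and variability.

        trials : iterable of (soa_ms, target_deg, response_deg) tuples
                 (hypothetical data; use soa_ms=None for control trials)
        Returns {soa_ms: (bias_deg, sd_deg)}, where bias is the mean signed
        error and variability is the standard deviation of the responses.
        """
        errors = defaultdict(list)
        responses = defaultdict(list)
        for soa_ms, target_deg, response_deg in trials:
            errors[soa_ms].append(response_deg - target_deg)
            responses[soa_ms].append(response_deg)
        return {soa: (float(np.mean(errors[soa])), float(np.std(responses[soa])))
                for soa in errors}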


Journal of the Acoustical Society of America | 2010

Speech localization in a multitalker mixture.

Norbert Kopčo; Virginia Best; Simon Carlile

An experiment was performed that measured, for the frontal audio-visual horizon, how accurately listeners could localize a female-voice target amidst four spatially distributed male-voice maskers. To examine whether listeners can make use of a priori knowledge about the configuration of the sources, performance was examined in two conditions: either the masker locations were fixed (in one of five known patterns) or the locations varied from trial to trial. The presence of maskers disrupted speech localization, even after accounting for reduced target detectability. Averaged across all target locations, the rms error in responses decreased by 20% when a priori knowledge about masker locations was available. The effect was even stronger for the target locations that did not coincide with the maskers (error reduction of 36%), while no change in errors was observed for targets coinciding with maskers. The benefits were reduced when the target-to-masker intensity ratio was increased or when the maskers were in a pattern that made it difficult to make use of the a priori information. The results confirm that localization in speech mixtures is modified by the listeners' expectations about the spatial arrangement of the sources.
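
The 20% and 36% figures above are reductions in root-mean-square localization error between the varied-location and fixed-pattern conditions. A minimal sketch of that comparison (all data and names hypothetical):

    import numpy as np

    def rms_error_deg(targets_deg, responses_deg):
        """Root-mean-square localization error in degrees."""
        t = np.asarray(targets_deg, dtype=float)
        r = np.asarray(responses_deg, dtype=float)
        return float(np.sqrt(np.mean((r - t) ** 2)))

    def percent_error_reduction(err_varied, err_fixed):
        """Percent drop in rms error when masker locations are known a priori."""
        return 100.0 * (err_varied - err_fixed) / err_varied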


Journal of the Acoustical Society of America | 2010

Exploring the benefit of auditory spatial continuity

Virginia Best; Barbara G. Shinn-Cunningham; Erol J. Ozmeral; Norbert Kopčo

Continuity of spatial location was recently shown to improve the ability to identify and recall a sequence of target digits presented in a mixture of confusable maskers [Best et al. (2008). Proc. Natl. Acad. Sci. U.S.A. 105, 13174-13178]. Three follow-up experiments were conducted to explore the basis of this improvement. The results suggest that the benefits of spatial continuity cannot be attributed to (a) the ability to plan where to direct attention in advance; (b) freedom from having to redirect attention across large distances; or (c) the challenge of filtering out signals that are confusable with the target.


Journal of the Acoustical Society of America | 2003

Spatial unmasking of nearby pure-tone targets in a simulated anechoic environment

Norbert Kopčo; Barbara G. Shinn-Cunningham

Detection thresholds were measured for different spatial configurations of 500- and 1000-Hz pure-tone targets and broadband maskers. Sources were simulated using individually measured head-related transfer functions (HRTFs) for source positions varying in both azimuth and distance. For the spatial configurations tested, thresholds ranged over 50 dB, primarily as a result of large changes in the target-to-masker ratio (TMR) with changes in target and masker locations. Intersubject differences in both HRTFs and in binaural sensitivity were large; however, the overall pattern of results was similar across subjects. As expected, detection thresholds were generally smaller when the target and masker were separated in azimuth than when they were at the same location. However, in some cases, azimuthal separation of target and masker yielded little change or even a small increase in detection threshold. Significant intersubject differences occurred as a result both of differences in monaural and binaural acoustic cues in the individualized HRTFs and of different binaural contributions to performance. Model predictions captured general trends in the pattern of spatial unmasking. However, subject-specific model predictions did not account for the observed individual differences in performance, even after taking into account individual differences in HRTF measurements and overall binaural sensitivity. These results suggest that individuals differ not only in their overall sensitivity to binaural cues, but also in how their binaural sensitivity varies with the spatial position of (and interaural differences in) the masker.
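
The binaural simulation step above, rendering a source at a given azimuth and distance through individually measured HRTFs, amounts to convolving the source waveform with the left- and right-ear head-related impulse responses. A minimal sketch under that assumption; the function and variable names are hypothetical.

    import numpy as np
    from scipy.signal import fftconvolve

    def simulate_binaural(source, hrir_left, hrir_right):
        """Render a monaural source at the position encoded by an HRIR pair.

        source                : monaural waveform (hypothetical array)
        hrir_left, hrir_right : measured head-related impulse responses for
                                the desired azimuth and distance (equal length)
        Returns an (N, 2) array of left/right ear-input signals.
        """
        left = fftconvolve(source, hrir_left)
        right = fftconvolve(source, hrir_right)
        return np.stack([left, right], axis=1)

The target-to-masker ratio at each ear, and hence much of the threshold pattern reported above, then follows from the levels of the resulting ear signals.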

Collaboration


Dive into Norbert Kopčo's collaborations.

Top Co-Authors

Aaron R. Seitz

University of California

Peter Sincak

Technical University of Košice
