Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Virginia Best is active.

Publication


Featured research published by Virginia Best.


Trends in Amplification | 2008

Selective Attention in Normal and Impaired Hearing

Barbara G. Shinn-Cunningham; Virginia Best

A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.


Proceedings of the National Academy of Sciences of the United States of America | 2008

Object continuity enhances selective auditory attention

Virginia Best; Erol J. Ozmeral; Norbert Kopčo; Barbara G. Shinn-Cunningham

In complex scenes, the identity of an auditory object can build up across seconds. Given that attention operates on perceptual objects, this perceptual buildup may alter the efficacy of selective auditory attention over time. Here, we measured identification of a sequence of spoken target digits presented with distracter digits from other directions to investigate the dynamics of selective attention. Performance was better when the target location was fixed rather than changing between digits, even when listeners were cued as much as 1 s in advance about the position of each subsequent digit. Spatial continuity not only avoided well known costs associated with switching the focus of spatial attention, but also produced refinements in the spatial selectivity of attention across time. Continuity of target voice further enhanced this buildup of selective attention. Results suggest that when attention is sustained on one auditory object within a complex scene, attentional selectivity improves over time. Similar effects may come into play when attention is sustained on an object in a complex visual scene, especially in cases where visual object formation requires sustained attention.


Journal of the Acoustical Society of America | 2005

The role of high frequencies in speech localization

Virginia Best; Simon Carlile; Craig Jin; André van Schaik

This study measured the accuracy with which human listeners can localize spoken words. A broadband (300 Hz-16 kHz) corpus of monosyllabic words was created and presented to listeners using a virtual auditory environment. Localization was examined for 76 locations on a sphere surrounding the listener. Experiment 1 showed that low-pass filtering the speech sounds at 8 kHz degraded performance, causing an increase in polar angle errors associated with the cone of confusion. Experiment 2 showed that performance varied systematically with the level of the signal above 8 kHz. Although the lower frequencies (below 8 kHz) are known to be sufficient for accurate speech recognition in most situations, these results demonstrate that natural speech contains information between 8 and 16 kHz that is essential for accurate localization.
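
The key manipulation in experiment 1 is a simple signal-processing step: removing energy above 8 kHz. A minimal sketch of that manipulation is below; the sample rate, filter type, and filter order are assumptions, since the abstract does not specify them.

```python
# Hypothetical sketch of the experiment-1 manipulation: attenuate speech
# content above 8 kHz. Sample rate, filter type, and order are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100       # assumed sample rate (Hz)
cutoff = 8000    # low-pass cutoff from the abstract (Hz)

# Placeholder "word": one second of noise standing in for a speech token.
rng = np.random.default_rng(0)
word = rng.standard_normal(fs)

# 8th-order Butterworth low-pass, applied forward-backward for zero phase.
sos = butter(8, cutoff, btype="lowpass", fs=fs, output="sos")
word_lowpassed = sosfiltfilt(sos, word)
```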


JARO: Journal of the Association for Research in Otolaryngology | 2007

Visually-guided Attention Enhances Target Identification in a Complex Auditory Scene

Virginia Best; Erol J. Ozmeral; Barbara G. Shinn-Cunningham

In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.


Nature Neuroscience | 2007

Cortical interference effects in the cocktail party problem

Rajiv Narayan; Virginia Best; Erol J. Ozmeral; Elizabeth M. McClaine; Micheal L. Dent; Barbara G. Shinn-Cunningham; Kamal Sen

Humans and animals must often discriminate between complex natural sounds in the presence of competing sounds (maskers). Although the auditory cortex is thought to be important in this task, the impact of maskers on cortical discrimination remains poorly understood. We examined neural responses in zebra finch (Taeniopygia guttata) field L (homologous to primary auditory cortex) to target birdsongs that were embedded in three different maskers (broadband noise, modulated noise and birdsong chorus). We found two distinct forms of interference in the neural responses: the addition of spurious spikes occurring primarily during the silent gaps between song syllables and the suppression of informative spikes occurring primarily during the syllables. Both effects systematically degraded neural discrimination as the target intensity decreased relative to that of the masker. The behavioral performance of songbirds degraded in a parallel manner. Our results identify neural interference that could explain the perceptual interference at the heart of the cocktail party problem.


Journal of the Acoustical Society of America | 2005

Spatial unmasking of birdsong in human listeners: Energetic and informational factors

Virginia Best; Erol J. Ozmeral; Frederick J. Gallun; Kamal Sen; Barbara G. Shinn-Cunningham

Spatial unmasking describes the improvement in the detection or identification of a target sound afforded by separating it spatially from simultaneous masking sounds. This effect has been studied extensively for speech intelligibility in the presence of interfering sounds. In the current study, listeners identified zebra finch song, which shares many acoustic properties with speech but lacks semantic and linguistic content. Three maskers with the same long-term spectral content but different short-term statistics were used: (1) chorus (combinations of unfamiliar zebra finch songs), (2) song-shaped noise (broadband noise with the average spectrum of chorus), and (3) chorus-modulated noise (song-shaped noise multiplied by the broadband envelope from a chorus masker). The amount of masking and spatial unmasking depended on the masker and there was evidence of release from both energetic and informational masking. Spatial unmasking was greatest for the statistically similar chorus masker. For the two noise maskers, there was less spatial unmasking and it was wholly accounted for by the relative target and masker levels at the acoustically better ear. The results share many features with analogous results using speech targets, suggesting that spatial separation aids in the segregation of complex natural sounds through mechanisms that are not specific to speech.
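
The masker construction lends itself to a short illustration. The sketch below builds the two noise maskers from a chorus waveform: song-shaped noise by imposing the chorus magnitude spectrum on random-phase noise, and chorus-modulated noise by multiplying that noise with a broadband chorus envelope. Details such as the Hilbert envelope and the single-excerpt spectrum are assumptions; the paper's exact procedure may differ.

```python
# Illustrative reconstruction of the two noise maskers; the paper's exact
# procedure may differ.
import numpy as np
from scipy.signal import hilbert

def song_shaped_noise(chorus, seed=0):
    """Noise with the magnitude spectrum of the chorus signal.

    Assumption: the spectrum of a single chorus excerpt stands in for the
    average chorus spectrum used in the study.
    """
    rng = np.random.default_rng(seed)
    spectrum = np.abs(np.fft.rfft(chorus))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.shape)
    return np.fft.irfft(spectrum * np.exp(1j * phases), n=len(chorus))

def chorus_modulated_noise(chorus, seed=1):
    """Song-shaped noise multiplied by the broadband chorus envelope."""
    noise = song_shaped_noise(chorus, seed)
    envelope = np.abs(hilbert(chorus))   # broadband Hilbert envelope (assumed)
    return noise * (envelope / envelope.max())
```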


Journal of the Acoustical Society of America | 2010

Stimulus factors influencing spatial release from speech-on-speech masking

Gerald Kidd; Christine R. Mason; Virginia Best; Nicole Marrone

This study examined spatial release from masking (SRM) when a target talker was masked by competing talkers or by other types of sounds. The focus was on the role of interaural time differences (ITDs) and time-varying interaural level differences (ILDs) under conditions varying in the strength of informational masking (IM). In the first experiment, a target talker was masked by two other talkers that were either colocated with the target or were symmetrically spatially separated from the target with the stimuli presented through loudspeakers. The sounds were filtered into different frequency regions to restrict the available interaural cues. The largest SRM occurred for the broadband condition followed by a low-pass condition. However, even the highest frequency bandpass-filtered condition (3-6 kHz) yielded a significant SRM. In the second experiment the stimuli were presented via earphones. The listeners identified the speech of a target talker masked by one or two other talkers or noises when the maskers were colocated with the target or were perceptually separated by ITDs. The results revealed a complex pattern of masking in which the factors affecting performance in colocated and spatially separated conditions are to a large degree independent.
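
In the second experiment the maskers are "perceptually separated by ITDs": over headphones, the same waveform is presented to both ears with one ear's copy delayed by a fraction of a millisecond. A minimal sketch of imposing an ITD is below; the 500-microsecond delay and the sample rate are assumptions, not values from the paper.

```python
import numpy as np

def apply_itd(signal, itd_seconds, fs=44100):
    """Return a (2, N) stereo array with the right-ear copy delayed.

    Delaying the right ear lateralizes the image toward the left. The delay
    is rounded to a whole sample for simplicity.
    """
    delay = int(round(itd_seconds * fs))
    left = signal
    right = np.concatenate([np.zeros(delay), signal])[: len(signal)]
    return np.stack([left, right])

# Example: a 500-us ITD on a 500-Hz tone (values are assumptions).
fs = 44100
tone = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
stereo = apply_itd(tone, itd_seconds=500e-6, fs=fs)
```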


Journal of the Acoustical Society of America | 2004

Separation of concurrent broadband sound sources by human listeners

Virginia Best; André van Schaik; Simon Carlile

The effect of spatial separation on the ability of human listeners to resolve a pair of concurrent broadband sounds was examined. Stimuli were presented in a virtual auditory environment using individualized outer ear filter functions. Subjects were presented with two simultaneous noise bursts that were either spatially coincident or separated (horizontally or vertically), and responded as to whether they perceived one or two source locations. Testing was carried out at five reference locations on the audiovisual horizon (0 degrees, 22.5 degrees, 45 degrees, 67.5 degrees, and 90 degrees azimuth). Results from experiment 1 showed that at more lateral locations, a larger horizontal separation was required for the perception of two sounds. The reverse was true for vertical separation. Furthermore, it was observed that subjects were unable to separate stimulus pairs if they delivered the same interaural differences in time (ITD) and level (ILD). These findings suggested that the auditory system exploited differences in one or both of the binaural cues to resolve the sources, and could not use monaural spectral cues effectively for the task. In experiments 2 and 3, separation of concurrent noise sources was examined upon removal of low-frequency content (and ITDs), onset/offset ITDs, both of these in conjunction, and all ITD information. While onset and offset ITDs did not appear to play a major role, differences in ongoing ITDs were robust cues for separation under these conditions, including those in the envelopes of high-frequency channels.
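
The observation that pairs delivering the same ITD and ILD could not be separated suggests a simple diagnostic for any binaural stimulus: estimate the broadband ITD from the peak of the interaural cross-correlation, and the ILD as an RMS level difference. The sketch below is a crude broadband version under the assumption of a single static source; analyses in this literature are typically done per frequency band.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_itd_ild(left, right, fs):
    """Crude broadband ITD (seconds) and ILD (dB) estimates."""
    xcorr = correlate(left, right, mode="full")
    lags = correlation_lags(len(left), len(right), mode="full")
    itd = lags[np.argmax(xcorr)] / fs              # peak cross-correlation lag
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    ild = 20.0 * np.log10(rms(left) / rms(right))  # RMS level difference
    return itd, ild
```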


Scientific Reports | 2015

Musical training, individual differences and the cocktail party problem

Jayaganesh Swaminathan; Christine R. Mason; Timothy Streeter; Virginia Best; Gerald Kidd; Aniruddh D. Patel

Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical ‘cocktail party problem’ in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of ‘informational masking’ (IM) while keeping the amount of ‘energetic masking’ (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker “cocktail party” environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies which explore the underlying sensory and cognitive mechanisms contributing to enhanced “speech-in-noise” perception by musicians.


Journal of the Acoustical Society of America | 2012

The influence of non-spatial factors on measures of spatial release from masking

Virginia Best; Nicole Marrone; Christine R. Mason; Gerald Kidd

This study tested the hypothesis that the reduction in spatial release from masking (SRM) resulting from sensorineural hearing loss in competing speech mixtures is influenced by the characteristics of the interfering speech. A frontal speech target was presented simultaneously with two intelligible or two time-reversed (unintelligible) speech maskers that were either colocated with the target or were symmetrically separated from the target in the horizontal plane. The difference in SRM between listeners with hearing impairment and listeners with normal hearing was substantially larger for the forward maskers (deficit of 5.8 dB) than for the reversed maskers (deficit of 1.6 dB). This was driven by the fact that all listeners, regardless of hearing abilities, performed similarly (and poorly) in the colocated condition with intelligible maskers. The same conditions were then tested in listeners with normal hearing using headphone stimuli that were degraded by noise vocoding. Reducing the number of available spectral channels systematically reduced the measured SRM, and again, more so for forward (reduction of 3.8 dB) than for reversed speech maskers (reduction of 1.8 dB). The results suggest that non-spatial factors can strongly influence both the magnitude of SRM and the apparent deficit in SRM for listeners with impaired hearing.
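
Noise vocoding, used here to degrade the headphone stimuli, divides speech into a small number of spectral channels, extracts each channel's temporal envelope, and uses it to modulate band-limited noise; fewer channels means coarser spectral resolution. A minimal sketch is below. The channel spacing, band edges, and envelope extraction are assumptions, as the vocoder parameters are not given in the abstract; reducing n_channels mimics the study's manipulation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Simple noise vocoder: log-spaced bands, Hilbert envelopes, noise carriers.

    Assumptions: logarithmic channel spacing, 4th-order Butterworth bands,
    Hilbert envelopes. Requires f_hi < fs / 2 and a float `speech` array.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)                # analysis band
        envelope = np.abs(hilbert(band))               # channel envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        out += envelope * carrier                      # envelope-modulated noise
    return out
```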

Collaboration


Dive into Virginia Best's collaborations.

Top Co-Authors

Gitte Keidser (Cooperative Research Centre)