
Publication


Featured research published by Barbara G. Shinn-Cunningham.


Proceedings of the National Academy of Sciences of the United States of America | 2006

Task-modulated “what” and “where” pathways in human auditory cortex

Jyrki Ahveninen; Iiro P. Jääskeläinen; Tommi Raij; Giorgio Bonmassar; Sasha Devore; Matti Hämäläinen; Sari Levänen; Fa-Hsuan Lin; Mikko Sams; Barbara G. Shinn-Cunningham; Thomas Witzel; John W. Belliveau

Human neuroimaging studies suggest that localization and identification of relevant auditory objects are accomplished via parallel parietal-to-lateral-prefrontal “where” and anterior-temporal-to-inferior-frontal “what” pathways, respectively. Using combined hemodynamic (functional MRI) and electromagnetic (magnetoencephalography) measurements, we investigated whether such dual pathways exist already in the human nonprimary auditory cortex, as suggested by animal models, and whether selective attention facilitates sound localization and identification by modulating these pathways in a feature-specific fashion. We found a double dissociation in response adaptation to sound pairs with phonetic vs. spatial sound changes, demonstrating that the human nonprimary auditory cortex indeed processes speech-sound identity and location in parallel anterior “what” (in anterolateral Heschl’s gyrus, anterior superior temporal gyrus, and posterior planum polare) and posterior “where” (in planum temporale and posterior superior temporal gyrus) pathways as early as ≈70–150 ms from stimulus onset. Our data further show that the “where” pathway is activated ≈30 ms earlier than the “what” pathway, possibly enabling the brain to use top-down spatial information in auditory object perception. Notably, selectively attending to phonetic content modulated response adaptation in the “what” pathway, whereas attending to sound location produced analogous effects in the “where” pathway. This finding suggests that selective-attention effects are feature-specific in the human nonprimary auditory cortex and that they arise from enhanced tuning of receptive fields of task-relevant neuronal populations.


Journal of the Acoustical Society of America | 2003

Note on informational masking (L)

Nathaniel I. Durlach; Christine R. Mason; Gerald Kidd; Tanya L. Arbogast; H. Steven Colburn; Barbara G. Shinn-Cunningham

Informational masking (IM) has a long history and is currently receiving considerable attention. Nevertheless, there is no clear and generally accepted picture of how IM should be defined, and once defined, explained. In this letter, consideration is given to the problems of defining IM and specifying research that is needed to better understand and model IM.


Trends in Amplification | 2008

Selective Attention in Normal and Impaired Hearing

Barbara G. Shinn-Cunningham; Virginia Best

A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.


Journal of the Acoustical Society of America | 2005

Localizing nearby sound sources in a classroom: Binaural room impulse responses

Barbara G. Shinn-Cunningham; Norbert Kopčo; Tara J. Martin

Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
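The abstract notes that the direct-sound interaural time difference (ITD) can still be recovered from the measured BRIRs. A minimal sketch of that idea, assuming a simple cross-correlation of the left- and right-ear impulse responses (not the paper's actual analysis pipeline; the sampling rate and toy impulse responses are illustrative):

```python
import numpy as np

def estimate_itd(brir_left, brir_right, fs):
    """Estimate the interaural time difference from a pair of binaural
    room impulse responses via cross-correlation.

    Returns t_left - t_right in seconds: negative when the left-ear
    response leads (arrives earlier than) the right-ear response."""
    # Full cross-correlation of the two ear signals (equal lengths assumed)
    xcorr = np.correlate(brir_left, brir_right, mode="full")
    # Lag (in samples) at the correlation peak; zero lag sits at index len-1
    lag = np.argmax(xcorr) - (len(brir_right) - 1)
    return lag / fs  # convert samples to seconds

# Toy example: identical impulses, right-ear arrival delayed by 10 samples,
# so the left ear leads and the estimate is -10 / fs seconds.
fs = 44100
left = np.zeros(256); left[50] = 1.0
right = np.zeros(256); right[60] = 1.0
itd = estimate_itd(left, right, fs)
```

In a real BRIR the reverberant tail adds spurious correlation peaks, so in practice one would window the responses around the direct sound before cross-correlating, which is consistent with the abstract's point that the direct-sound ITD survives even when later reverberant energy distorts the interaural cues.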


Cerebral Cortex | 2015

Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG

James A. O'Sullivan; Alan J. Power; Nima Mesgarani; Siddharth Rajaram; John J. Foxe; Barbara G. Shinn-Cunningham; Malcolm Slaney; Shihab A. Shamma; Edmund C. Lalor

How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus it would be extremely useful for research in many populations if stimulus-reconstruction was effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain-computer interfaces.
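The stimulus-reconstruction method described above can be sketched as a regularized linear decoder that maps time-lagged EEG channels back onto the speech amplitude envelope, with attention read out by correlating the reconstruction against each candidate talker's envelope. This is a minimal illustration of the general technique, assuming arbitrary channel counts, lag windows, and ridge values rather than the paper's settings:

```python
import numpy as np

def lag_matrix(eeg, max_lag):
    """Stack time-lagged copies of each EEG channel (samples x channels)
    into a design matrix. Stimulus reconstruction integrates EEG at
    latencies *after* each stimulus sample, so row t collects
    eeg[t], eeg[t+1], ..., eeg[t+max_lag]."""
    n, ch = eeg.shape
    X = np.zeros((n, ch * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[:n - lag, lag * ch:(lag + 1) * ch] = eeg[lag:]
    return X

def train_decoder(eeg, envelope, max_lag=16, ridge=1.0):
    """Fit the decoder by ridge regression (regularized least squares)
    from lagged EEG to the attended speech amplitude envelope."""
    X = lag_matrix(eeg, max_lag)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                           X.T @ envelope)

def decode_attention(eeg, w, env_a, env_b, max_lag=16):
    """Reconstruct the envelope from EEG, then report which of two
    candidate talker envelopes it correlates with more strongly."""
    recon = lag_matrix(eeg, max_lag) @ w
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return ("A" if r_a > r_b else "B"), (r_a, r_b)
```

In practice the decoder is trained on attended trials and evaluated on held-out single trials, and the correlation difference between the attended and ignored envelopes gives the attentional-selection measure.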


The Journal of Neuroscience | 2012

Robustness of cortical topography across fields, laminae, anesthetic states, and neurophysiological signal types

Wei Guo; Anna R. Chambers; Keith Darrow; Kenneth E. Hancock; Barbara G. Shinn-Cunningham; Daniel B. Polley

Topographically organized maps of the sensory receptor epithelia are regarded as cornerstones of cortical organization as well as valuable readouts of diverse biological processes ranging from evolution to neural plasticity. However, maps are most often derived from multiunit activity recorded in the thalamic input layers of anesthetized animals using near-threshold stimuli. Less distinct topography has been described by studies that deviated from the formula above, which brings into question the generality of the principle. Here, we explicitly compared the strength of tonotopic organization at various depths within core and belt regions of the auditory cortex using electrophysiological measurements ranging from single units to delta-band local field potentials (LFP) in the awake and anesthetized mouse. Unit recordings in the middle cortical layers revealed a precise tonotopic organization in core, but not belt, regions of auditory cortex that was similarly robust in awake and anesthetized conditions. In core fields, tonotopy was degraded outside the middle layers or when LFP signals were substituted for unit activity, due to an increasing proportion of recording sites with irregular tuning for pure tones. However, restricting our analysis to clearly defined receptive fields revealed an equivalent tonotopic organization in all layers of the cortical column and for LFP activity ranging from gamma to theta bands. Thus, core fields represent a transition between topographically organized simple receptive field arrangements that extend throughout all layers of the cortical column and the emergence of nontonotopic representations outside the input layers that are further elaborated in the belt fields.


Proceedings of the National Academy of Sciences of the United States of America | 2011

Normal hearing is not enough to guarantee robust encoding of suprathreshold features important in everyday communication

Dorea R. Ruggles; Hari Bharadwaj; Barbara G. Shinn-Cunningham

“Normal hearing” is typically defined by threshold audibility, even though everyday communication relies on extracting key features of easily audible sound, not on sound detection. Anecdotally, many normal-hearing listeners report difficulty communicating in settings where there are competing sound sources, but the reasons for such difficulties are debated: Do these difficulties originate from deficits in cognitive processing, or differences in peripheral, sensory encoding? Here we show that listeners with clinically normal thresholds exhibit very large individual differences on a task requiring them to focus spatial selective auditory attention to understand one speech stream when there are similar, competing speech streams coming from other directions. These individual differences in selective auditory attention ability are unrelated to age, reading span (a measure of cognitive function), and minor differences in absolute hearing threshold; however, selective attention ability correlates with the ability to detect simple frequency modulation in a clearly audible tone. Importantly, we also find that selective attention performance correlates with physiological measures of how well the periodic, temporal structure of sounds above the threshold of audibility is encoded in early, subcortical portions of the auditory pathway. These results suggest that the fidelity of early sensory encoding of the temporal structure in suprathreshold sounds influences the ability to communicate in challenging settings. Tests like these may help tease apart how peripheral and central deficits contribute to communication impairments, ultimately leading to new approaches to combat the social isolation that often ensues.


Frontiers in Systems Neuroscience | 2014

Cochlear neuropathy and the coding of supra-threshold sound

Hari Bharadwaj; Sarah Verhulst; Luke Abraham Shaheen; M. Charles Liberman; Barbara G. Shinn-Cunningham

Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses (SSSRs) in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds (NHTs), paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation (FM), reveal individual differences that correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers (ANFs) without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in SSSRs in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.


Journal of the Acoustical Society of America | 1993

Adjustment and discrimination measurements of the precedence effect

Barbara G. Shinn-Cunningham; Patrick M. Zurek; Nathaniel I. Durlach

A simple model to summarize the precedence effect is proposed that uses a single metric to quantify the relative dominance of the initial interaural delay over the trailing interaural delay in lateralization. This model is described and then used to relate new measurements of the precedence effect made with adjustment and discrimination paradigms. In the adjustment task, subjects matched the lateral position of an acoustic pointer to the position of a composite test stimulus made up of initial and trailing binaural noise bursts. In the discrimination procedure, subjects discriminated interaural time differences in a target noise burst in the presence of another burst either trailing or preceding the target. Experimental parameters were the delay between initial and trailing stimuli and the overall level of the stimulus. The model parameters (the metric c and the variability of lateral position judgments) were estimated from the results of the matching experiment and used to predict results of the discrimination task with good success. Finally, the observed values of the metric were compared to values derived from previous studies.
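Read literally from the abstract, the single-metric summary can be sketched as a weighted combination of the initial and trailing interaural delays, with the metric c quantifying how strongly the initial delay dominates lateralization. This is a hedged paraphrase of the description above, not the paper's exact formulation:

```python
def effective_delay(itd_lead, itd_lag, c):
    """Weighted-average summary of the precedence effect: the perceived
    interaural delay of a lead/lag pair. c = 1 means complete precedence
    (only the initial delay matters); c = 0.5 means the lead and lag
    delays contribute equally."""
    return c * itd_lead + (1.0 - c) * itd_lag

# Example: lead at +400 us, lag at -400 us, strong precedence (c = 0.85)
# gives a perceived delay of 280 us, pulled toward the lead.
delay = effective_delay(400e-6, -400e-6, 0.85)
```

Under this reading, estimating c and the variance of lateral-position judgments from the adjustment (pointer-matching) data is enough to predict discrimination thresholds, which is the cross-paradigm test the abstract reports.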


Journal of the Acoustical Society of America | 1998

Adapting to supernormal auditory localization cues. I. Bias and resolution

Barbara G. Shinn-Cunningham; Nathaniel I. Durlach; Richard Held

Head-related transfer functions (HRTFs) were used to create spatialized stimuli for presentation through earphones. Subjects performed forced-choice, identification tests during which allowed response directions were indicated visually. In each experimental session, subjects were first presented with auditory stimuli in which the stimulus HRTFs corresponded to the allowed response directions. The correspondence between the HRTFs used to generate the stimuli and the directions was then changed so that response directions no longer corresponded to the HRTFs in the natural way. Feedback was used to train subjects as to which spatial cues corresponded to which of the allowed responses. Finally, the normal correspondence between direction and HRTFs was reinstated. This basic experimental paradigm was used to explore the effects of the type of feedback provided, the complexity of the stimulated acoustic scene, the number of allowed response positions, and the magnitude of the HRTF transformation subjects had to learn. Data showed that (1) although subjects may not adapt completely to a new relationship between physical stimuli and direction, response bias decreases substantially with training, and (2) the ability to resolve different HRTFs depends both on the stimuli presented and on the state of adaptation of the subject.
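Earphone spatialization of the kind used in this paradigm amounts to convolving a mono source with a measured left/right head-related impulse-response pair; the "supernormal" transformation then corresponds to remapping which pair is presented for each allowed response direction. A minimal sketch, with hypothetical array shapes and HRIRs (the actual measured filters are not reproduced here):

```python
import numpy as np

def spatialize(source, hrir_left, hrir_right):
    """Render a mono source at the direction encoded by a measured
    head-related impulse-response pair, for earphone presentation.
    Returns a (samples, 2) binaural signal."""
    left = np.convolve(source, hrir_left)
    right = np.convolve(source, hrir_right)
    return np.stack([left, right], axis=1)
```

With the natural mapping, the HRIR pair for each response direction is the one measured at that direction; changing the correspondence, as in these experiments, simply selects a different pair for the same visual response position.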

Collaboration


Dive into Barbara G. Shinn-Cunningham's collaborations.

Top Co-Authors

Antje Ihlefeld

New Jersey Institute of Technology

Adrian Lee

University of Washington
