
Publication


Featured research published by Hari Bharadwaj.


Proceedings of the National Academy of Sciences of the United States of America | 2011

Normal hearing is not enough to guarantee robust encoding of suprathreshold features important in everyday communication

Dorea R. Ruggles; Hari Bharadwaj; Barbara G. Shinn-Cunningham

“Normal hearing” is typically defined by threshold audibility, even though everyday communication relies on extracting key features of easily audible sound, not on sound detection. Anecdotally, many normal-hearing listeners report difficulty communicating in settings where there are competing sound sources, but the reasons for such difficulties are debated: Do these difficulties originate from deficits in cognitive processing, or differences in peripheral, sensory encoding? Here we show that listeners with clinically normal thresholds exhibit very large individual differences on a task requiring them to focus spatial selective auditory attention to understand one speech stream when there are similar, competing speech streams coming from other directions. These individual differences in selective auditory attention ability are unrelated to age, reading span (a measure of cognitive function), and minor differences in absolute hearing threshold; however, selective attention ability correlates with the ability to detect simple frequency modulation in a clearly audible tone. Importantly, we also find that selective attention performance correlates with physiological measures of how well the periodic, temporal structure of sounds above the threshold of audibility is encoded in early, subcortical portions of the auditory pathway. These results suggest that the fidelity of early sensory encoding of the temporal structure in suprathreshold sounds influences the ability to communicate in challenging settings. Tests like these may help tease apart how peripheral and central deficits contribute to communication impairments, ultimately leading to new approaches to combat the social isolation that often ensues.


Frontiers in Systems Neuroscience | 2014

Cochlear neuropathy and the coding of supra-threshold sound.

Hari Bharadwaj; Sarah Verhulst; Luke Abraham Shaheen; M. Charles Liberman; Barbara G. Shinn-Cunningham

Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses (SSSRs) in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds (NHTs), paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation (FM), reveal individual differences that correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers (ANFs) without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in SSSRs in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.


The Journal of Neuroscience | 2015

Individual Differences Reveal Correlates of Hidden Hearing Deficits

Hari Bharadwaj; Salwa Masud; Golbarg Mehraei; Sarah Verhulst; Barbara G. Shinn-Cunningham

Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing.”


Frontiers in Neuroscience | 2016

Altered Onset Response Dynamics in Somatosensory Processing in Autism Spectrum Disorder

Sheraz Khan; Javeria A. Hashmi; Fahimeh Mamashli; Hari Bharadwaj; Santosh Ganesan; Konstantinos P. Michmizos; Manfred G. Kitzbichler; Manuel Zetino; Keri Lee A. Garel; Matti S. Hämäläinen; Tal Kenet

Abnormalities in cortical connectivity and evoked responses have been extensively documented in autism spectrum disorder (ASD). However, specific signatures of these cortical abnormalities remain elusive, with data pointing toward abnormal patterns of both increased and reduced response amplitudes and functional connectivity. We have previously proposed, using magnetoencephalography (MEG) data, that apparent inconsistencies in prior studies could be reconciled if functional connectivity in ASD was reduced in the feedback (top-down) direction, but increased in the feedforward (bottom-up) direction. Here, we continue this line of investigation by assessing abnormalities restricted to the onset component of the response to vibrotactile stimuli in somatosensory cortex in ASD, the component driven by feedforward inputs. Using a novel method that measures the spatio-temporal divergence of cortical activation, we found that relative to typically developing participants, the ASD group was characterized by an increase in the initial onset component of the cortical response, and a faster spread of local activity. Given the early time window, the results could be interpreted as increased thalamocortical feedforward connectivity in ASD, and offer a plausible mechanism for the previously observed increased response variability in ASD, as well as for the commonly observed behaviorally measured tactile processing abnormalities associated with the disorder.


Frontiers in Neuroscience | 2013

Auditory selective attention reveals preparatory activity in different cortical regions for selection based on source location and source pitch

Adrian Lee; Siddharth Rajaram; Jing Xia; Hari Bharadwaj; Eric Larson; Matti Hämäläinen; Barbara G. Shinn-Cunningham

In order to extract information in a rich environment, we focus on different features that allow us to direct attention to whatever source is of interest. The cortical network deployed during spatial attention, especially in vision, is well characterized. For example, visuospatial attention engages a frontoparietal network including the frontal eye fields (FEFs), which modulate activity in visual sensory areas to enhance the representation of an attended visual object. However, relatively little is known about the neural circuitry controlling attention directed to non-spatial features, or to auditory objects or features (either spatial or non-spatial). Here, using combined magnetoencephalography (MEG) and anatomical information obtained from MRI, we contrasted cortical activity when observers attended to different auditory features given the same acoustic mixture of two simultaneous spoken digits. Leveraging the fine temporal resolution of MEG, we establish that activity in left FEF is enhanced both prior to and throughout the auditory stimulus when listeners direct auditory attention to target location compared to when they focus on target pitch. In contrast, activity in the left posterior superior temporal sulcus (STS), a region previously associated with auditory pitch categorization, is greater when listeners direct attention to target pitch rather than target location. This differential enhancement is only significant after observers are instructed which cue to attend, but before the acoustic stimuli begin. We therefore argue that left FEF participates more strongly in directing auditory spatial attention, while the left STS aids auditory object selection based on the non-spatial acoustic feature of pitch.


Journal of the Acoustical Society of America | 2013

A comparison of spectral magnitude and phase-locking value analyses of the frequency-following response to complex tones

Li Zhu; Hari Bharadwaj; Jing Xia; Barbara G. Shinn-Cunningham

Two experiments, both presenting diotic, harmonic tone complexes (100 Hz fundamental), were conducted to explore the envelope-related component of the frequency-following response (FFRENV), a measure of synchronous, subcortical neural activity evoked by a periodic acoustic input. Experiment 1 directly compared two common analysis methods, computing the magnitude spectrum and the phase-locking value (PLV). Bootstrapping identified which FFRENV frequency components were statistically above the noise floor for each metric and quantified the statistical power of the approaches. Across listeners and conditions, the two methods produced highly correlated results. However, PLV analysis required fewer processing stages to produce readily interpretable results. Moreover, at the fundamental frequency of the input, PLVs were farther above the metric's noise floor than spectral magnitudes. Having established the advantages of PLV analysis, the efficacy of the approach was further demonstrated by investigating how different acoustic frequencies contribute to FFRENV, analyzing responses to complex tones composed of different acoustic harmonics of 100 Hz (Experiment 2). Results show that the FFRENV response is dominated by peripheral auditory channels responding to unresolved harmonics, although low-frequency channels driven by resolved harmonics also contribute. These results demonstrate the utility of the PLV for quantifying the strength of FFRENV across conditions.
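The two metrics compared in Experiment 1 can be sketched in a few lines of Python (a generic illustration; the function name and array layout here are assumptions, not the authors' code). Both start from per-trial Fourier spectra; the magnitude spectrum retains amplitude information, while the PLV normalizes each trial to a unit phasor and so measures only cross-trial phase consistency.

```python
import numpy as np

def plv_and_magnitude(trials, fs):
    """Compute PLV and mean spectral magnitude from single-channel epochs.

    trials : array of shape (n_trials, n_samples)
    fs     : sampling rate in Hz
    """
    spectra = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    # Spectral magnitude: magnitude of the across-trial mean spectrum
    magnitude = np.abs(spectra.mean(axis=0))
    # PLV: normalize each trial to unit magnitude, then average;
    # a value near 1 indicates tightly locked phase across trials
    plv = np.abs((spectra / np.abs(spectra)).mean(axis=0))
    return freqs, plv, magnitude
```

At a frequency with no phase-locked response, the PLV of N trials hovers on the order of 1/sqrt(N) rather than at zero, which is one reason the paper uses bootstrapping to estimate each metric's noise floor.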


Clinical Neurophysiology | 2014

Rapid acquisition of auditory subcortical steady state responses using multichannel recordings

Hari Bharadwaj; Barbara G. Shinn-Cunningham

OBJECTIVE: Auditory subcortical steady state responses (SSSRs), also known as frequency following responses (FFRs), provide a non-invasive measure of phase-locked neural responses to acoustic and cochlear-induced periodicities. SSSRs have been used both clinically and in basic neurophysiological investigation of auditory function. SSSR data acquisition typically involves thousands of presentations of each stimulus type, sometimes in two polarities, with acquisition times often exceeding an hour per subject. Here, we present a novel approach to reduce the data acquisition times significantly.

METHODS: Because the sources of the SSSR are deep compared to the primary noise sources, namely background spontaneous cortical activity, the SSSR varies more smoothly over the scalp than the noise. We exploit this property and extract SSSRs efficiently, using multichannel recordings and an eigendecomposition of the complex cross-channel spectral density matrix.

RESULTS: Our proposed method yields SNR improvement exceeding a factor of 3 compared to traditional single-channel methods.

CONCLUSIONS: It is possible to reduce data acquisition times for SSSRs significantly with our approach.

SIGNIFICANCE: The proposed method allows SSSRs to be recorded for several stimulus conditions within a single session and also makes it possible to acquire both SSSRs and cortical EEG responses without increasing the session length.
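The METHODS step can be illustrated with a simplified sketch (an assumption-laden reduction of the multichannel idea, not the published implementation): form the complex cross-channel spectral density matrix at the response frequency, take its dominant eigenvector as a spatial filter, and compute the phase-locking value of the filtered signal.

```python
import numpy as np

def csd_plv(epochs, fs, f_target):
    """Simplified multichannel SSSR extraction (illustrative only).

    epochs   : array of shape (n_trials, n_channels, n_samples)
    f_target : response frequency in Hz
    """
    n_trials, n_ch, n_samp = epochs.shape
    spectra = np.fft.rfft(epochs, axis=2)
    k = int(round(f_target * n_samp / fs))   # DFT bin of the response
    x = spectra[:, :, k]                     # complex value per trial, channel
    x = x / np.abs(x)                        # keep phase only (unit phasors)
    csd = (x.conj().T @ x) / n_trials        # Hermitian cross-spectral matrix
    _, evecs = np.linalg.eigh(csd)           # eigenvalues in ascending order
    w = evecs[:, -1]                         # dominant spatial filter
    combined = x @ w.conj()                  # weighted channel combination
    return np.abs(np.mean(combined / np.abs(combined)))
```

The intuition matches the abstract: because the SSSR varies smoothly over the scalp while cortical background noise does not, the dominant eigenvector weights channels in proportion to their phase-consistent response, pooling information that a single-channel analysis would discard.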


NeuroImage | 2012

Disconnectivity of the cortical ocular motor control network in autism spectrum disorders

Tal Kenet; Elena V. Orekhova; Hari Bharadwaj; Nandita R. Shetty; Emily Israeli; Adrian Lee; Yigal Agam; Mikael Elam; Robert M. Joseph; Matti Hämäläinen; Dara S. Manoach

Response inhibition, or the suppression of prepotent but contextually inappropriate behaviors, is essential to adaptive, flexible responding. Individuals with autism spectrum disorders (ASD) consistently show deficient response inhibition during antisaccades. In our prior functional MRI study, impaired antisaccade performance was accompanied by reduced functional connectivity between the frontal eye field (FEF) and dorsal anterior cingulate cortex (dACC), regions critical to volitional ocular motor control. Here we employed magnetoencephalography (MEG) to examine the spectral characteristics of this reduced connectivity. We focused on coherence between FEF and dACC during the preparatory period of antisaccade and prosaccade trials, which occurs after the presentation of the task cue and before the imperative stimulus. We found significant group differences in alpha band mediated coherence. Specifically, neurotypical participants showed significant alpha band coherence between the right inferior FEF and right dACC and between the left superior FEF and bilateral dACC across antisaccade, prosaccade, and fixation conditions. Relative to the neurotypical group, ASD participants showed reduced coherence between these regions in all three conditions. Moreover, while neurotypical participants showed increased coherence between the right inferior FEF and the right dACC in preparation for an antisaccade compared to a prosaccade or fixation, ASD participants failed to show a similar increase in preparation for the more demanding antisaccade. These findings demonstrate reduced long-range functional connectivity in ASD, specifically in the alpha band. The failure in the ASD group to increase alpha band coherence with increasing task demand may reflect deficient top-down recruitment of additional neural resources in preparation to perform a difficult task.
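As a rough illustration of the kind of measure involved (a hypothetical Welch-based sketch, not the study's MEG source-analysis pipeline), alpha-band coherence between two source time courses can be computed as:

```python
from scipy.signal import coherence

def alpha_coherence(x, y, fs, band=(8.0, 12.0)):
    """Mean magnitude-squared coherence between two signals in the alpha band.

    x, y : 1-D source time courses (e.g., FEF and dACC estimates)
    fs   : sampling rate in Hz
    """
    # Welch cross-spectral estimate with 1-second segments
    f, cxy = coherence(x, y, fs=fs, nperseg=int(fs))
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()
```

Coherence near 1 in a band indicates consistent phase and amplitude coupling between the two regions at those frequencies; the group comparison in the study asks whether this coupling increases with task demand (antisaccade versus prosaccade or fixation).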


Frontiers in Integrative Neuroscience | 2014

Measuring auditory selective attention using frequency tagging.

Hari Bharadwaj; Adrian Lee; Barbara G. Shinn-Cunningham

Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream. Results suggest that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help to explain why past ASSR studies of auditory spatial attention yield seemingly contradictory results.


Brain Research | 2015

Evidence against attentional state modulating scalp-recorded auditory brainstem steady-state responses

Leonard Varghese; Hari Bharadwaj; Barbara G. Shinn-Cunningham

Auditory brainstem responses (ABRs) and their steady-state counterpart (subcortical steady-state responses, SSSRs) are generally thought to be insensitive to cognitive demands. However, a handful of studies report that SSSRs are modulated depending on the subject's focus of attention, either towards or away from an auditory stimulus. Here, we explored whether attentional focus affects the envelope-following response (EFR), which is a particular kind of SSSR, and if so, whether the effects are specific to which sound elements in a sound mixture a subject is attending (selective auditory attentional modulation), specific to attended sensory input (inter-modal attentional modulation), or insensitive to attentional focus. We compared the strength of EFR-stimulus phase locking in human listeners under various tasks: listening to a monaural stimulus, selectively attending to a particular ear during dichotic stimulus presentation, and attending to visual stimuli while ignoring dichotic auditory inputs. We observed no systematic changes in the EFR across experimental manipulations, even though cortical EEG revealed attention-related modulations of alpha activity during the task. We conclude that attentional effects, if any, on human subcortical representation of sounds cannot be observed robustly using EFRs. This article is part of a Special Issue entitled SI: Prediction and Attention.

Collaboration


Dive into Hari Bharadwaj's collaborations.

Top Co-Authors

Golbarg Mehraei (Massachusetts Institute of Technology)
Adrian Lee (University of Washington)
Sarah Verhulst (Technical University of Denmark)