Publication


Featured research published by Samantha Huang.


Proceedings of the National Academy of Sciences of the United States of America | 2011

Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise

Jyrki Ahveninen; Matti Hämäläinen; Iiro P. Jääskeläinen; Seppo P. Ahlfors; Samantha Huang; Fa-Hsuan Lin; Tommi Raij; Mikko Sams; Christos Vasios; John W. Belliveau

How can we concentrate on relevant sounds in noisy environments? A “gain model” suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A “tuning model” suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for “frequency tagging” of attention effects on maskers. Noise masking reduced early (50–150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50–150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
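
As a rough illustration of the frequency-tagging analysis described above (a sketch, not the authors' pipeline), the steady-state response to each masker can be read out as the spectral amplitude at its tag frequency; the single-channel signal and sampling rate below are simulated.

```python
import numpy as np

def tag_amplitude(signal, fs, tag_freq):
    """Spectral amplitude at a frequency-tagging frequency (e.g., 39 or 41 Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]

# Simulated single-channel epoch: 2 s at 1000 Hz with a 39-Hz steady-state component.
fs = 1000
t = np.arange(0, 2, 1 / fs)
epoch = 0.5 * np.sin(2 * np.pi * 39 * t) + np.random.randn(t.size)

print("39 Hz:", tag_amplitude(epoch, fs, 39), "41 Hz:", tag_amplitude(epoch, fs, 41))
```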


PLOS ONE | 2012

Brain Networks of Novelty-Driven Involuntary and Cued Voluntary Auditory Attention Shifting

Samantha Huang; John W. Belliveau; Chinmayi Tengshe; Jyrki Ahveninen

In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory “cue” signaling the location where a subsequent “target” sound was likely to be presented. The target was occasionally replaced by an unexpected “novel” sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone “standard” trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intraparietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues.


NeuroImage | 2013

Distinct cortical networks activated by auditory attention and working memory load

Samantha Huang; Larry J. Seidman; Stephanie Rossi; Jyrki Ahveninen

Auditory attention and working memory (WM) allow for selection and maintenance of relevant sound information in our minds, respectively, thus underlying goal-directed functioning in everyday acoustic environments. It is still unclear whether these two closely coupled functions are based on a common neural circuit, or whether they involve genuinely distinct subfunctions with separate neuronal substrates. In a full factorial functional MRI (fMRI) design, we independently manipulated the levels of auditory-verbal WM load and attentional interference using modified Auditory Continuous Performance Tests. Although many frontoparietal regions were jointly activated by increases of WM load and interference, there was a double dissociation between prefrontal cortex (PFC) subareas associated selectively with either auditory attention or WM. Specifically, anterior dorsolateral PFC (DLPFC) and the right anterior insula were selectively activated by increasing WM load, whereas subregions of middle lateral PFC and inferior frontal cortex (IFC) were associated with interference only. Meanwhile, a superadditive interaction between interference and load was detected in left medial superior frontal cortex, suggesting that in this area, activations are not only overlapping, but reflect a common resource pool recruited by increased attentional and WM demands. Indices of WM-specific suppression of anterolateral non-primary auditory cortices (AC) and attention-specific suppression of primary AC were also found, possibly reflecting suppression/interruption of sound-object processing of irrelevant stimuli during continuous task performance. Our results suggest a double dissociation between auditory attention and working memory in subregions of anterior DLPFC vs. middle lateral PFC/IFC in humans, respectively, in the context of substantially overlapping circuits.
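
For readers unfamiliar with factorial fMRI analyses, the superadditive interaction mentioned above can be tested as a difference of differences across the four cells of the 2 x 2 design; the per-subject estimates below are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Simulated per-subject response estimates (e.g., GLM betas) for the four cells of the
# 2 (WM load: low/high) x 2 (interference: low/high) design. All values are fake.
rng = np.random.default_rng(0)
n_subjects = 20
lo_load_lo_int = rng.normal(1.0, 0.5, n_subjects)
lo_load_hi_int = rng.normal(1.3, 0.5, n_subjects)
hi_load_lo_int = rng.normal(1.4, 0.5, n_subjects)
hi_load_hi_int = rng.normal(2.2, 0.5, n_subjects)  # superadditive cell

# Interaction = difference of differences; superadditivity means it exceeds zero.
interaction = (hi_load_hi_int - hi_load_lo_int) - (lo_load_hi_int - lo_load_lo_int)
t, p = stats.ttest_1samp(interaction, 0.0)
print(f"interaction t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```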


Nature Communications | 2013

Evidence for distinct human auditory cortex regions for sound location versus identity processing

Jyrki Ahveninen; Samantha Huang; Aapo Nummenmaa; John W. Belliveau; An Yi Hung; Iiro P. Jääskeläinen; Josef P. Rauschecker; Stephanie Rossi; Hannu Tiitinen; Tommi Raij

Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound-identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55–145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC.
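
The reported double dissociation corresponds to a crossover interaction between TMS site and task on reaction times; below is a minimal sketch with simulated per-subject RTs, where the cell means are invented only to show the expected pattern.

```python
import numpy as np
from scipy import stats

# Simulated per-subject mean reaction times (ms) in the four TMS-site x task cells.
rng = np.random.default_rng(1)
n = 16
rt = {
    ("posterior", "location"): rng.normal(620, 40, n),  # slowed by posterior-AC TMS
    ("posterior", "identity"): rng.normal(580, 40, n),
    ("anterior", "location"): rng.normal(585, 40, n),
    ("anterior", "identity"): rng.normal(625, 40, n),   # slowed by anterior-AC TMS
}

# Crossover interaction: TMS-site cost for location minus TMS-site cost for identity.
location_cost = rt[("posterior", "location")] - rt[("anterior", "location")]
identity_cost = rt[("posterior", "identity")] - rt[("anterior", "identity")]
t, p = stats.ttest_rel(location_cost, identity_cost)
print(f"site x task interaction: t({n - 1}) = {t:.2f}, p = {p:.3g}")
```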


Journal of Cognitive Neuroscience | 2013

Dynamic Oscillatory Processes Governing Cued Orienting and Allocation of Auditory Attention

Jyrki Ahveninen; Samantha Huang; John W. Belliveau; Wei-Tang Chang; Matti Hämäläinen

In everyday listening situations, we need to constantly switch between alternative sound sources and engage attention according to cues that match our goals and expectations. The exact neuronal bases of these processes are poorly understood. We investigated oscillatory brain networks controlling auditory attention using cortically constrained fMRI-weighted magnetoencephalography/EEG source estimates. During consecutive trials, participants were instructed to shift attention based on a cue, presented in the ear where a target was likely to follow. To promote audiospatial attention effects, the targets were embedded in streams of dichotically presented standard tones. Occasionally, an unexpected novel sound occurred opposite to the cued ear to trigger involuntary orienting. According to our cortical power correlation analyses, increased frontoparietal/temporal 30–100 Hz gamma activity at 200–1400 msec after cued orienting predicted fast and accurate discrimination of subsequent targets. This sustained correlation effect, possibly reflecting voluntary engagement of attention after the initial cue-driven orienting, spread from the TPJ, anterior insula, and inferior frontal cortices to the right FEFs. Engagement of attention to one ear resulted in a significantly stronger increase of 7.5–15 Hz alpha in the ipsilateral than contralateral parieto-occipital cortices 200–600 msec after the cue onset, possibly reflecting cross-modal modulation of the dorsal visual pathway during audiospatial attention. Comparisons of cortical power patterns also revealed significant increases of sustained right medial frontal cortex theta power, right dorsolateral pFC and anterior insula/inferior frontal cortex beta power, and medial parietal cortex and posterior cingulate cortex gamma activity after cued versus novelty-triggered orienting (600–1400 msec). Our results reveal sustained oscillatory patterns associated with voluntary engagement of auditory spatial attention, with the frontoparietal and temporal gamma increases being best predictors of subsequent behavioral performance.
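
A simplified sketch of the kind of power-behavior analysis described above: band-limited gamma power in the 200-1400 msec post-cue window correlated with reaction times across trials. The sampling rate, filter design, and data are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from scipy import signal, stats

fs = 600                                   # assumed sampling rate (Hz)
n_trials, n_samples = 100, int(1.6 * fs)   # e.g., 0-1600 ms after cue onset
rng = np.random.default_rng(2)
source_ts = rng.standard_normal((n_trials, n_samples))  # simulated single-source time courses
rts = rng.normal(500, 80, n_trials)                     # simulated reaction times (ms)

# Gamma-band (30-100 Hz) power in the 200-1400 ms post-cue window.
sos = signal.butter(4, [30, 100], btype="bandpass", fs=fs, output="sos")
gamma = signal.sosfiltfilt(sos, source_ts, axis=1)
win = slice(int(0.2 * fs), int(1.4 * fs))
gamma_power = (gamma[:, win] ** 2).mean(axis=1)

# Faster responses should follow stronger post-cue gamma (negative correlation).
r, p = stats.spearmanr(gamma_power, rts)
print(f"Spearman r = {r:.2f}, p = {p:.3f}")
```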


Proceedings of the National Academy of Sciences of the United States of America | 2012

Neuronal representations of distance in human auditory cortex

Norbert Kopčo; Samantha Huang; John W. Belliveau; Tommi Raij; Chinmayi Tengshe; Jyrki Ahveninen

Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15–100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects’ discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.
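
For intuition, the two intensity-independent distance cues named above can be estimated from a binaural room impulse response roughly as follows; the split between direct and reverberant energy and the simulated response are assumptions made for illustration.

```python
import numpy as np

def ild_db(left, right):
    """Broadband interaural level difference (dB, right ear re left ear)."""
    return 10 * np.log10(np.sum(right ** 2) / np.sum(left ** 2))

def direct_to_reverberant_db(ir, fs, direct_ms=2.5):
    """Direct-to-reverberant energy ratio (dB) for one ear's impulse response,
    splitting at an assumed direct-sound window after the strongest peak."""
    split = np.argmax(np.abs(ir)) + int(direct_ms * 1e-3 * fs)
    return 10 * np.log10(np.sum(ir[:split] ** 2) / np.sum(ir[split:] ** 2))

# Simulated binaural impulse response: a direct peak plus an exponentially decaying tail.
fs = 44100
t = np.arange(int(0.3 * fs)) / fs
rng = np.random.default_rng(0)
brir_right = 0.05 * rng.standard_normal(t.size) * np.exp(-t / 0.05)
brir_right[100] += 1.0          # direct sound at the (nearer) right ear
brir_left = 0.6 * brir_right    # farther ear attenuated (crude assumption)

print(f"ILD = {ild_db(brir_left, brir_right):.1f} dB, "
      f"D/R = {direct_to_reverberant_db(brir_right, fs):.1f} dB")
```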


NeuroImage | 2013

Whole-head rapid fMRI acquisition using echo-shifted magnetic resonance inverse imaging

Wei-Tang Chang; Aapo Nummenmaa; Thomas Witzel; Jyrki Ahveninen; Samantha Huang; Kevin Wen-Kai Tsai; Ying-Hua Chu; Jonathan R. Polimeni; John W. Belliveau; Fa-Hsuan Lin

Acquiring BOLD-contrast functional MRI (fMRI) data with whole-brain coverage typically requires 1-3 s per volume. Although a volumetric sampling time of a few seconds is adequate for measuring the sluggish hemodynamic response (HDR) to neuronal activation, faster fMRI sampling might allow for monitoring of rapid physiological fluctuations and detection of subtle neuronal activation timing information embedded in BOLD signals. Previous studies utilizing a highly accelerated volumetric MR inverse imaging (InI) technique have provided a sampling rate of one volume per 100 ms with 5-mm spatial resolution. Here, we propose a novel modification of this technique, echo-shifted InI, which allows TE to be longer than TR, to measure BOLD fMRI at an even faster sampling rate of one volume per 25 ms with whole-brain coverage. Compared with conventional EPI, echo-shifted InI provided an 80-fold speedup with similar spatial resolution and less than 2-fold temporal SNR loss. The capability of echo-shifted InI to detect HDR timing differences was tested empirically. At the group level (n = 6), echo-shifted InI detected statistically significant HDR timing differences as small as 50 ms in visual stimulus presentation. At the level of individual subjects, significant differences in HDR timing were detected for 400-ms stimulus-onset differences. Our results also show that the 25-ms temporal resolution is necessary for maintaining this level of timing sensitivity. With the capability to distinguish timing differences on the millisecond scale, echo-shifted InI could be a useful fMRI tool for obtaining temporal information at a time scale closer to that of neuronal dynamics.
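
The sampling rates quoted above imply the following back-of-the-envelope arithmetic; only the 25-ms and 100-ms volume times come from the abstract, and the 2-s conventional-EPI volume time is an assumption chosen to reproduce the stated 80-fold speedup.

```python
# Sampling-rate arithmetic; the 2-s EPI volume time is an assumption.
epi_tr_s = 2.0        # typical whole-brain EPI volume time (assumed)
ini_tr_s = 0.100      # standard InI: one volume per 100 ms
es_ini_tr_s = 0.025   # echo-shifted InI: one volume per 25 ms

print(f"speedup vs. EPI:          {epi_tr_s / es_ini_tr_s:.0f}x")   # ~80x
print(f"speedup vs. standard InI: {ini_tr_s / es_ini_tr_s:.0f}x")   # 4x
print(f"samples across a 10-s HDR: {10.0 / es_ini_tr_s:.0f}")       # 400 volumes
```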


NeuroImage | 2015

Combined MEG and EEG show reliable patterns of electromagnetic brain activity during natural viewing

Wei-Tang Chang; Iiro P. Jääskeläinen; John W. Belliveau; Samantha Huang; An-Yi Hung; Stephanie Rossi; Jyrki Ahveninen

Naturalistic stimuli such as movies are increasingly used to engage cognitive and emotional processes during fMRI of brain hemodynamic activity. However, movies have been little utilized during magnetoencephalography (MEG) and EEG, which directly measure population-level neuronal activity at millisecond resolution. Here, subjects watched a 17-min segment from the movie Crash (Lionsgate Films, 2004) twice during simultaneous MEG/EEG recordings. Physiological noise components, including ocular and cardiac artifacts, were removed using the DRIFTER algorithm. Dynamic estimates of cortical activity were calculated using MRI-informed minimum-norm estimation. To improve the signal-to-noise ratio (SNR), principal component analysis (PCA) was employed to extract the prevailing temporal characteristics within each anatomical parcel of the FreeSurfer Desikan-Killiany cortical atlas. A variety of alternative inter-subject correlation (ISC) approaches were then utilized to investigate the reliability of inter-subject synchronization during natural viewing. In the first analysis, the ISCs of each anatomical region's time series over the full time period were calculated across all subject pairs and averaged. In the second, dynamic ISC (dISC) analysis, the correlation was calculated over a sliding window of 200 ms with 3.3-ms steps. Finally, in a between-run ISC analysis, the between-run correlation was calculated over the dynamic ISCs of the two different runs after Fisher z-transformation. Overall, the most reliable activations occurred in occipital/inferior temporal visual and superior temporal auditory cortices, as well as in the posterior cingulate, precuneus, pre- and post-central gyri, and right inferior and middle frontal gyri. Significant between-run ISCs were observed in superior temporal auditory cortices and inferior temporal visual cortices. Taken together, our results show that movies can be utilized as naturalistic stimuli in MEG/EEG much as they are in fMRI studies.
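
A minimal sketch of the sliding-window dynamic ISC computation described above (200-ms windows, Fisher z before averaging); the sampling rate, step size in samples, and parcel time courses are simulated stand-ins rather than the actual MEG/EEG source estimates.

```python
import numpy as np
from itertools import combinations

fs = 300                       # assumed sampling rate of the source time courses (Hz)
win = int(0.200 * fs)          # 200-ms sliding window
step = 1                       # ~3.3-ms steps at this rate
n_subjects, n_samples = 8, 5 * fs
rng = np.random.default_rng(0)
parcel_ts = rng.standard_normal((n_subjects, n_samples))  # simulated per-subject parcel time courses

def dynamic_isc(ts, win, step):
    """Mean Fisher-z pairwise inter-subject correlation in each sliding window."""
    pairs = list(combinations(range(ts.shape[0]), 2))
    starts = np.arange(0, ts.shape[1] - win + 1, step)
    disc = np.zeros(starts.size)
    for i, s in enumerate(starts):
        seg = ts[:, s:s + win]
        zs = [np.arctanh(np.corrcoef(seg[a], seg[b])[0, 1]) for a, b in pairs]
        disc[i] = np.mean(zs)
    return disc

disc_run1 = dynamic_isc(parcel_ts, win, step)
print(disc_run1.shape, disc_run1.mean())
# The between-run ISC in the abstract correlates two such traces (run 1 vs. run 2).
```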


NeuroImage | 2016

Intracortical depth analyses of frequency-sensitive regions of human auditory cortex using 7T fMRI

Jyrki Ahveninen; Wei-Tang Chang; Samantha Huang; Boris Keil; Norbert Kopčo; Stephanie Rossi; Giorgio Bonmassar; Thomas Witzel; Jonathan R. Polimeni

Despite recent advances in auditory neuroscience, the exact functional organization of human auditory cortex (AC) has been difficult to investigate. Here, using reversals of tonotopic gradients as the test case, we examined whether human ACs can be more precisely mapped by avoiding signals caused by large draining vessels near the pial surface, which bias blood-oxygen level dependent (BOLD) signals away from the actual sites of neuronal activity. Using ultra-high field (7T) fMRI and cortical depth analysis techniques previously applied in visual cortices, we sampled 1mm isotropic voxels from different depths of AC during narrow-band sound stimulation with biologically relevant temporal patterns. At the group level, analyses that considered voxels from all cortical depths, but excluded those intersecting the pial surface, showed (a) the greatest statistical sensitivity in contrasts between activations to high vs. low frequency sounds and (b) the highest inter-subject consistency of phase-encoded continuous tonotopy mapping. Analyses based solely on voxels intersecting the pial surface produced the least consistent group results, even when compared to analyses based solely on voxels intersecting the white-matter surface where both signal strength and within-subject statistical power are weakest. However, no evidence was found for reduced within-subject reliability in analyses considering the pial voxels only. Our group results could, thus, reflect improved inter-subject correspondence of high and low frequency gradients after the signals from voxels near the pial surface are excluded. Using tonotopy analyses as the test case, our results demonstrate that when the major physiological and anatomical biases imparted by the vasculature are controlled, functional mapping of human ACs becomes more consistent from subject to subject than previously thought.
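
The depth-resolved sampling strategy above amounts to partitioning voxels by their normalized position between the white-matter and pial surfaces; below is a sketch with simulated per-voxel depths and contrast statistics, in which the boundary thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_voxels = 5000
# Simulated normalized cortical depth per AC voxel (0 = white-matter surface, 1 = pial surface)
depth = rng.uniform(0.0, 1.0, n_voxels)
tonotopy_t = rng.standard_normal(n_voxels)  # simulated high-vs-low frequency contrast statistic

# Three voxel selections compared in the study (boundary thresholds here are assumptions):
pial_voxels = depth > 0.9    # intersecting the pial surface
wm_voxels = depth < 0.1      # intersecting the white-matter surface
all_but_pial = ~pial_voxels  # all depths except pial-intersecting voxels

for name, mask in [("pial only", pial_voxels), ("WM only", wm_voxels), ("all but pial", all_but_pial)]:
    print(f"{name:12s}: n = {mask.sum():4d}, mean |t| = {np.abs(tonotopy_t[mask]).mean():.2f}")
```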


NeuroImage | 2014

Lateralized parietotemporal oscillatory phase synchronization during auditory selective attention

Samantha Huang; Wei-Tang Chang; John W. Belliveau; Matti Hämäläinen; Jyrki Ahveninen

Based on the well-known left-lateralized neglect syndrome, one might hypothesize that the dominant right parietal cortex has a bilateral representation of space, whereas the left parietal cortex represents only the contralateral right hemispace. Whether this principle applies to human auditory attention is not yet fully clear. Here, we explicitly tested the differences in cross-hemispheric functional coupling between the intraparietal sulcus (IPS) and auditory cortex (AC) using combined magnetoencephalography (MEG), EEG, and functional MRI (fMRI). Inter-regional pairwise phase consistency (PPC) was analyzed from data obtained during a dichotic auditory selective attention task, in which subjects were cued in 10-s trials to attend to sounds presented to one ear and to ignore sounds presented in the opposite ear. Using MEG/EEG/fMRI source modeling, parietotemporal PPC patterns were (a) mapped between all AC locations vs. IPS seeds and (b) analyzed between four anatomically defined AC regions of interest (ROI) vs. IPS seeds. Consistent with our hypothesis, stronger cross-hemispheric PPC was observed between the right IPS and left AC for attended right-ear sounds than between the left IPS and right AC for attended left-ear sounds. In the mapping analyses, these differences emerged at 7-13 Hz, i.e., at the theta to alpha frequency bands, and peaked in Heschl's gyrus and lateral posterior non-primary ACs. The ROI analysis revealed similarly lateralized differences also in the beta and lower theta bands. Taken together, our results support the view that the right parietal cortex dominates auditory spatial attention.
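
For reference, inter-regional pairwise phase consistency at a single frequency can be sketched as the mean cosine of differences between trial-wise phase differences over all trial pairs; the IPS and AC phases below are simulated, not MEG/EEG estimates.

```python
import numpy as np
from itertools import combinations

def ppc(phase_a, phase_b):
    """Pairwise phase consistency between two regions' trial-wise phases (radians)."""
    dphi = phase_a - phase_b                 # per-trial inter-regional phase difference
    pairs = combinations(range(len(dphi)), 2)
    return np.mean([np.cos(dphi[j] - dphi[k]) for j, k in pairs])

rng = np.random.default_rng(5)
n_trials = 60
# Simulated alpha-band phases: IPS and AC weakly phase-locked with a constant lag.
phase_ips = rng.uniform(-np.pi, np.pi, n_trials)
phase_ac = phase_ips + 0.4 + rng.vonmises(0.0, 2.0, n_trials)  # noisy, lagged coupling

print(f"PPC (coupled)   = {ppc(phase_ips, phase_ac):.3f}")
print(f"PPC (unrelated) = {ppc(phase_ips, rng.uniform(-np.pi, np.pi, n_trials)):.3f}")
```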

Collaboration


Dive into Samantha Huang's collaboration.

Top Co-Authors

Wei-Tang Chang

National Taiwan University
