Publication


Featured research published by Xing Tian.


Nature Neuroscience | 2016

Cortical tracking of hierarchical linguistic structures in connected speech

Nai Ding; Lucia Melloni; Hang Zhang; Xing Tian; David Poeppel

The most critical attribute of human language is its unbounded combinatorial nature: smaller elements can be combined into larger structures on the basis of a grammatical system, resulting in a hierarchy of linguistic units, such as words, phrases and sentences. Mentally parsing and representing such structures, however, poses challenges for speech comprehension. In speech, hierarchical linguistic structures do not have boundaries that are clearly defined by acoustic cues and must therefore be internally and incrementally constructed during comprehension. We found that, during listening to connected speech, cortical activity of different timescales concurrently tracked the time course of abstract linguistic structures at different hierarchical levels, such as words, phrases and sentences. Notably, the neural tracking of hierarchical linguistic structures was dissociated from the encoding of acoustic cues and from the predictability of incoming words. Our results indicate that a hierarchy of neural processing timescales underlies grammar-based internal construction of hierarchical linguistic structure.
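The concurrent tracking described above is often assessed by frequency tagging: with isochronous speech, words, phrases, and sentences recur at fixed rates, so neural tracking shows up as spectral peaks at exactly those rates. A minimal sketch of that logic on a synthetic signal (illustrative only; the rates, sampling parameters, and function name are assumptions, not the study's pipeline):

```python
import math

def dft_power(x, fs, f):
    """Power of signal x (sampled at fs Hz) at frequency f, via a direct DFT sum."""
    n_samples = len(x)
    re = sum(x[n] * math.cos(2 * math.pi * f * n / fs) for n in range(n_samples))
    im = sum(x[n] * math.sin(2 * math.pi * f * n / fs) for n in range(n_samples))
    return (re * re + im * im) / n_samples ** 2

# Synthetic "neural" signal with energy at a sentence rate (1 Hz), phrase rate
# (2 Hz), and syllable rate (4 Hz) -- hypothetical values for illustration.
fs, duration = 32, 8  # sampling rate (Hz) and length (s)
t = [n / fs for n in range(fs * duration)]
signal = [math.sin(2 * math.pi * 1 * ti)
          + 0.6 * math.sin(2 * math.pi * 2 * ti)
          + 0.4 * math.sin(2 * math.pi * 4 * ti)
          for ti in t]

# Spectral peaks appear at the linguistic rates but not at a control frequency.
for f_peak in (1, 2, 4):
    assert dft_power(signal, fs, f_peak) > 10 * dft_power(signal, fs, 3)
```

The key point the sketch captures is that the syllable rate is present in the acoustics, whereas phrase- and sentence-rate peaks have no acoustic counterpart and must reflect internally constructed structure.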


Frontiers in Psychology | 2010

Mental imagery of speech and movement implicates the dynamics of internal forward models.

Xing Tian; David Poeppel

The classical concept of efference copies in the context of internal forward models has stimulated productive research in cognitive science and neuroscience. There are compelling reasons to argue for such a mechanism, but finding direct evidence in the human brain remains difficult. Here we investigate the dynamics of internal forward models from an unconventional angle: mental imagery, assessed while recording high temporal resolution neuronal activity using magnetoencephalography. We compare two overt and covert tasks; our covert, mental imagery tasks are unconfounded by overt input/output demands - but in turn necessitate the development of appropriate multi-dimensional topographic analyses. Finger tapping (studies 1 and 2) and speech experiments (studies 3-5) provide temporally constrained results that implicate the estimation of an efference copy. We suggest that one internal forward model over parietal cortex subserves the kinesthetic feeling in motor imagery. Secondly, observed auditory neural activity ~170 ms after motor estimation in speech experiments (studies 3-5) demonstrates the anticipated auditory consequences of planned motor commands in a second internal forward model in imagery of speech production. Our results provide neurophysiological evidence from the human brain in favor of internal forward models deploying efference copies in somatosensory and auditory cortex, in finger tapping and speech production tasks, respectively, and also suggest the dynamics and sequential updating structure of internal forward models.


Frontiers in Human Neuroscience | 2012

Mental imagery of speech: linking motor and perceptual systems through internal simulation and estimation

Xing Tian; David Poeppel

The neural basis of mental imagery has been investigated by localizing the underlying neural networks, mostly in motor and perceptual systems, separately. However, how modality-specific representations are top-down induced and how the action and perception systems interact in the context of mental imagery is not well understood. Imagined speech production (“articulation imagery”), which induces the kinesthetic feeling of articulator movement and its auditory consequences, provides a new angle because of the concurrent involvement of motor and perceptual systems. On the basis of previous findings in mental imagery of speech, we argue for the following regarding the induction mechanisms of mental imagery and the interaction between motor and perceptual systems: (1) Two distinct top-down mechanisms, memory retrieval and motor simulation, exist to induce estimation in perceptual systems. (2) Motor simulation is sufficient to internally induce the representation of perceptual changes that would be caused by actual movement (perceptual associations); however, this simulation process only has modulatory effects on the perception of external stimuli, which critically depends on context and task demands. Considering the proposed simulation-estimation processes as common mechanisms for interaction between motor and perceptual systems, we outline how mental imagery (of speech) relates to perception and production, and how these hypothesized mechanisms might underpin certain neural disorders.


Journal of Cognitive Neuroscience | 2013

The effect of imagination on stimulation: The functional specificity of efference copies in speech processing

Xing Tian; David Poeppel

The computational role of efference copies is widely appreciated in action and perception research, but their properties for speech processing remain murky. We tested the functional specificity of auditory efference copies using magnetoencephalography recordings in an unconventional pairing: We used a classical cognitive manipulation (mental imagery—to elicit internal simulation and estimation) with a well-established experimental paradigm (one-shot repetition—to assess neuronal specificity). Participants performed tasks that differentially implicated internal prediction of sensory consequences (overt speaking, imagined speaking, and imagined hearing), and their modulatory effects on the perception of an auditory (syllable) probe were assessed. Remarkably, the neural responses to overt syllable probes varied systematically, both in terms of directionality (suppression, enhancement) and temporal dynamics (early, late), as a function of the preceding covert mental imagery adaptor. We show, in the context of a dual-pathway model, that internal simulation shapes perception in a context-dependent manner.


Journal of Experimental Psychology: Human Perception and Performance | 2008

The Dynamics of Integration and Separation: ERP, MEG, and Neural Network Studies of Immediate Repetition Effects

David E. Huber; Xing Tian; Tim Curran; Randall C. O'Reilly; Brion Woroch

This article presents data and theory concerning the fundamental question of how the brain achieves a balance between integrating and separating perceptual information over time. This theory was tested in the domain of word reading by examining brain responses to briefly presented words that were either new or immediate repetitions. Critically, the prime that immediately preceded the target was presented either for 150 ms or 2,000 ms, thus examining a situation of perceptual integration versus one of perceptual separation. Electrophysiological responses during the first 200 ms following presentation of the target word were assessed using electroencephalography (EEG) and magnetoencephalography (MEG) recordings. As predicted by a dynamic neural network model with habituation, repeated words produced less of a perceptual response, and this effect diminished with increased prime duration. Using dynamics that best accounted for the behavioral transition from positive to negative priming with increasing prime duration, the model correctly predicted the time course of the event-related potential (ERP) repetition effects under the assumption that letter processing is the source of observed P100 repetition effects and word processing is the source of observed N170 repetition effects.


Current Biology | 2013

Neural Response Phase Tracks How Listeners Learn New Acoustic Representations

Huan Luo; Xing Tian; Kun Song; Ke Zhou; David Poeppel

Humans are remarkable at rapidly learning regularities through experience from a dynamic environment. For example, long-lasting memories are formed even for auditory noise patterns after short, repeated exposure in an unsupervised manner. Although animal neurophysiological and human studies demonstrate adaptive cortical plasticity after sensory learning and memory formation, the mechanisms by which the auditory system extracts and encodes holistic patterns from random noise, which contains neither semantic labels nor prominent acoustic features to facilitate encoding, remain unknown. Here we combined magnetoencephalography (MEG) with psychophysics to address the issue. We demonstrate that the establishment of a reliable neuronal phase pattern in low-frequency (3-8 Hz) auditory cortical responses mirrors the noise memory formation process. Specifically, with repeated exposure, originally novel noise patterns are memorized, as reflected in behavior, and gradually produce robust phase responses in auditory cortex. Moreover, different memorized noises elicit distinguishable phase responses, suggesting their specificity to noise structure. The results indicate that the gradual establishment of low-frequency oscillatory phase patterns in auditory neuronal responses mediates the implicit learning process by which originally undifferentiated noises become new auditory objects.
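Phase reliability across repeated presentations is commonly quantified as inter-trial phase coherence: the length of the mean unit phase vector across trials, ranging from 0 (random phases) to 1 (identical phases). A minimal sketch of that statistic (an illustrative reconstruction, not the paper's analysis code; the example phase values are invented):

```python
import cmath
import math

def itc(phases):
    """Inter-trial phase coherence: magnitude of the mean unit phase vector (0..1)."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

# A memorized noise would evoke nearly the same phase on every presentation,
# whereas an unfamiliar noise yields phases scattered around the circle.
locked = [0.1 + 0.01 * k for k in range(20)]          # nearly identical phases
unlocked = [2 * math.pi * k / 20 for k in range(20)]  # phases spread uniformly

assert itc(locked) > 0.95
assert itc(unlocked) < 0.01
```

Under this measure, memory formation appears as ITC rising with exposure for the learned noise patterns while remaining low for novel ones.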


Cognitive Psychology | 2010

Testing an Associative Account of Semantic Satiation.

Xing Tian; David E. Huber

How is the meaning of a word retrieved without interference from recently viewed words? The ROUSE theory of priming assumes a discounting process to reduce source confusion between subsequently presented words. As applied to semantic satiation, this theory predicted a loss of association between the lexical item and meaning. Four experiments tested this explanation in a speeded category-matching task. All experiments used lists of 20 trials that presented a cue word for 1s followed by a target word. Randomly mixed across the list, 10 trials used cues drawn from the same category whereas the other 10 trials used cues from 10 other categories. In Experiments 1a and 1b, the cues were repeated category labels (FRUIT-APPLE) and responses gradually slowed for the repeated category. In Experiment 2, the cues were nonrepeated exemplars (PEAR-APPLE) and responses remained faster for the repeated category. In Experiment 3, the cues were repeated exemplars in a word matching task (APPLE-APPLE) and responses again remained faster for the repeated category.


Journal of Cognitive Neuroscience | 2014

Dynamics of self-monitoring and error detection in speech production: Evidence from mental imagery and MEG

Xing Tian; David Poeppel

A critical subroutine of self-monitoring during speech production is to detect any deviance between expected and actual auditory feedback. Here we investigated the associated neural dynamics using MEG recording in mental-imagery-of-speech paradigms. Participants covertly articulated the vowel /a/; their own (individually recorded) speech was played back, with parametric manipulation using four levels of pitch shift, crossed with four levels of onset delay. A nonmonotonic function was observed in early auditory responses when the onset delay was shorter than 100 msec: Suppression was observed for normal playback, but enhancement for pitch-shifted playback; however, the magnitude of enhancement decreased at the largest level of pitch shift that was out of the pitch range of normal conversation, as suggested in two behavioral experiments. No difference was observed among different types of playback when the onset delay was longer than 100 msec. These results suggest that the prediction suppresses the response to normal feedback, which mediates source monitoring. When auditory feedback does not match the prediction, an “error term” is generated, which underlies deviance detection. We argue that, based on the observed nonmonotonic function, a frequency window (addressing spectral difference) and a time window (constraining temporal difference) jointly regulate the comparison between prediction and feedback in speech.


Cortex | 2016

Mental imagery of speech implicates two mechanisms of perceptual reactivation

Xing Tian; Jean Mary Zarate; David Poeppel

Sensory cortices can be activated without any external stimuli. Yet, it is still unclear how this perceptual reactivation occurs and which neural structures mediate this reconstruction process. In this study, we employed fMRI with mental imagery paradigms to investigate the neural networks involved in perceptual reactivation. Subjects performed two speech imagery tasks: articulation imagery (AI) and hearing imagery (HI). We found that AI induced greater activity in frontal-parietal sensorimotor systems, including sensorimotor cortex, subcentral (BA 43), middle frontal cortex (BA 46) and parietal operculum (PO), whereas HI showed stronger activation in regions that have been implicated in memory retrieval: middle frontal (BA 8), inferior parietal cortex and intraparietal sulcus. Moreover, posterior superior temporal sulcus (pSTS) and anterior superior temporal gyrus (aSTG) were activated more in AI compared with HI, suggesting that covert motor processes induced stronger perceptual reactivation in the auditory cortices. These results suggest that motor-to-perceptual transformation and memory retrieval act as two complementary mechanisms to internally reconstruct corresponding perceptual outcomes. These two mechanisms can serve as a neurocomputational foundation for predicting perceptual changes, either via a previously learned relationship between actions and their perceptual consequences or via stored perceptual experiences of stimulus and episodic or contextual regularity.


Brain Topography | 2008

Measures of Spatial Similarity and Response Magnitude in MEG and Scalp EEG

Xing Tian; David E. Huber

Sensor selection is typically used in magnetoencephalography (MEG) and scalp electroencephalography (EEG) studies, but this practice cannot differentiate between changes in the distribution of neural sources versus changes in the magnitude of neural sources. This problem is further complicated by (1) subject averaging despite sizable individual anatomical differences and (2) experimental designs that produce overlapping waveforms due to short latencies between stimuli. Using data from the entire spatial array of sensors, we present simple multivariate measures that (1) normalize against individual differences by comparison with each individual’s standard response; (2) compare the similarity of spatial patterns in different conditions (angle test) to ascertain whether the distribution of neural sources is different; and (3) compare the response magnitude between conditions which are sufficiently similar (projection test). These claims are supported by applying the reported techniques to a short-term word priming paradigm as measured with MEG, revealing more reliable results as compared to traditional sensor selection methodology. Although precise cortical localization remains intractable, these techniques are easy to calculate, relatively assumption free, and yield the important psychological measures of similarity and response magnitude.
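Treating each condition's response as a vector of values across the whole sensor array, the two measures reduce to familiar linear algebra: the angle test is the cosine similarity between two sensor patterns (sensitive to the distribution of neural sources), and the projection test is the length of one pattern projected onto a reference pattern (sensitive to response magnitude when the distributions match). A minimal sketch under those assumptions (illustrative, not the published implementation; function names are invented):

```python
import math

def norm(u):
    """Euclidean length of a sensor-array pattern."""
    return math.sqrt(sum(a * a for a in u))

def angle_test(u, v):
    """Cosine similarity between two sensor patterns: 1 = same source
    distribution (up to scale), 0 = orthogonal patterns."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (norm(u) * norm(v))

def projection_test(u, ref):
    """Signed magnitude of pattern u along the reference pattern ref;
    meaningful when the angle test says the patterns are similar."""
    return sum(a * b for a, b in zip(u, ref)) / norm(ref)

# Toy 3-sensor patterns: "scaled" has the same source distribution as
# "pattern" but twice the response magnitude.
pattern = [1.0, 2.0, 3.0]
scaled = [2.0, 4.0, 6.0]
different = [3.0, 0.0, -1.0]  # orthogonal to "pattern"

assert abs(angle_test(pattern, scaled) - 1.0) < 1e-9       # same distribution
assert abs(angle_test(pattern, different)) < 1e-9          # different distribution
assert abs(projection_test(scaled, pattern) - 2 * norm(pattern)) < 1e-9
```

Normalizing each subject's conditions against that subject's own standard response, as the article describes, would precede these comparisons and absorbs individual anatomical differences before averaging.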

Collaboration


Dive into Xing Tian's collaborations.

Top Co-Authors

David E. Huber

University of California

Brion Woroch

University of Colorado Boulder