Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yuanning Li is active.

Publication


Featured research published by Yuanning Li.


Nature Communications | 2014

Dynamic encoding of face information in the human fusiform gyrus

Avniel Singh Ghuman; Nicolas M. Brunet; Yuanning Li; Roma O. Konecky; John A. Pyles; Shawn Walls; Vincent J. DeStefino; Wei Wang; R. Mark Richardson

Humans’ ability to rapidly and accurately detect, identify, and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, the temporal dynamics of face information processing in the FFA remain unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly upon the FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200 and 500 ms contained expression-invariant information about which of 70 faces participants were viewing, along with the individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role the FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses.
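The time-resolved decoding described in this abstract follows a common pattern: a classifier is trained and tested independently at each time point, so above-chance accuracy at a given latency indicates that the spatial pattern across electrodes carries face information at that time. Below is a minimal sketch of that general approach on simulated data with scikit-learn; the array shapes, the linear SVM, and the cross-validation scheme are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Simulated data: trials x channels x time samples
# (e.g. 120 samples could span roughly 0-600 ms at 200 Hz).
n_trials, n_channels, n_times = 200, 16, 120
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)          # e.g. face vs. non-face labels

# Inject a weak class-dependent signal in an early window so the
# decoder has something to find.
X[y == 1, :, 10:15] += 0.5

clf = make_pipeline(StandardScaler(), LinearSVC())

# Train and test independently at each time point; above-chance accuracy
# at time t means the channel pattern carries class information then.
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(f"peak accuracy {accuracy.max():.2f} at sample {accuracy.argmax()}")
```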


Proceedings of the National Academy of Sciences of the United States of America | 2016

Decoding and disrupting left midfusiform gyrus activity during word reading

Elizabeth A. Hirshorn; Yuanning Li; Michael Ward; R. Mark Richardson; Julie A. Fiez; Avniel Singh Ghuman

Significance: A central issue in the neurobiology of reading is a debate regarding the visual representation of words, particularly in the left midfusiform gyrus (lmFG). Direct neural recordings, electrical brain stimulation, and pre-/postsurgical neuropsychological testing provided strong evidence that the lmFG supports an orthographically specific “visual word form” system that becomes specialized for the representation of orthographic knowledge. Machine learning elucidated the dynamic role the lmFG plays, with an early processing stage organized by orthographic similarity and a later stage supporting individuation of single words. The results suggest that there is a dynamic shift from gist-level to individuated orthographic representation in the lmFG in service of visual word recognition.

The nature of the visual representation for words has been fiercely debated for over 150 years. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing: an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation.
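The claim that early lmFG activity was "consistent with an orthographic similarity space" is the kind of hypothesis representational similarity analysis (RSA) can test: compute pairwise dissimilarities between neural response patterns, compute pairwise orthographic dissimilarities between the corresponding words, and correlate the two structures. Below is a minimal, self-contained sketch of that logic on simulated patterns; the word set, edit-distance metric, and correlation-distance choice are illustrative assumptions rather than the paper's exact analysis.

```python
import numpy as np
from itertools import combinations
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def levenshtein(a: str, b: str) -> int:
    """Single-row dynamic-programming edit distance."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

words = ["hint", "lint", "mint", "tent", "dome", "home", "come", "cone"]

# Orthographic RDM: edit distance for every word pair, in the same
# (i < j) order that scipy's pdist uses for its condensed form.
ortho_rdm = np.array([levenshtein(a, b) for a, b in combinations(words, 2)])

# Neural RDM: correlation distance between one response pattern per word
# (simulated here as random 64-dimensional vectors; a real analysis would
# use trial-averaged electrode patterns).
rng = np.random.default_rng(0)
patterns = rng.standard_normal((len(words), 64))
neural_rdm = pdist(patterns, metric="correlation")

# RSA statistic: rank correlation between the two dissimilarity structures.
rho, p = spearmanr(ortho_rdm, neural_rdm)
print(f"RSA Spearman rho = {rho:.2f} (p = {p:.2f})")
```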


NeuroImage | 2017

Multi-Connection Pattern Analysis: Decoding the representational content of neural communication

Yuanning Li; Robert Mark Richardson; Avniel Singh Ghuman

The lack of multivariate methods for decoding the representational content of interregional neural communication has made it difficult to know what information is represented in distributed brain circuit interactions. Here we present Multi-Connection Pattern Analysis (MCPA), which works by learning mappings between the activity patterns of the populations as a factor of the information being processed. These maps are used to predict the activity of one neural population based on the activity of the other population. Successful MCPA-based decoding indicates the involvement of distributed computational processing and provides a framework for probing the representational structure of the interaction. Simulations demonstrate the efficacy of MCPA in realistic circumstances. In addition, we demonstrate that MCPA can be applied to different signal modalities to evaluate a variety of hypotheses associated with information coding in neural communication. We apply MCPA to fMRI and human intracranial electrophysiological data to provide a proof of concept of the utility of this method for decoding individual natural images and faces in functional connectivity data. We further use an MCPA-based representational similarity analysis to illustrate how MCPA may be used to test computational models of information transfer among regions of the visual processing stream. Thus, MCPA can be used to assess the information represented in the coupled activity of interacting neural circuits and probe the underlying principles of information transformation between regions.

Highlights:
- MCPA allows for multivariate single-trial classification of functional connectivity.
- Decodes the representational content of interregional neural communication.
- Extracts the discriminant information in the shared activity between populations.
- A general framework that can be extended and applied to different signal modalities.
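As described, MCPA learns mappings between the activity patterns of two populations separately for each stimulus condition, then decodes a trial by asking which condition's mapping best predicts one population's activity from the other's. The sketch below implements that logic on simulated data, using ridge regression as a stand-in for the mapping model; the population sizes, noise level, and regression choice are assumptions for illustration, not the published method.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, dim_a, dim_b, n_classes = 300, 20, 15, 2

# The stimulus class determines the *coupling* between populations A and B,
# not their mean activity levels.
W = [rng.standard_normal((dim_a, dim_b)) for _ in range(n_classes)]
y = rng.integers(0, n_classes, n_trials)
A = rng.standard_normal((n_trials, dim_a))
B = np.stack([A[i] @ W[y[i]] for i in range(n_trials)])
B += 0.5 * rng.standard_normal(B.shape)            # observation noise

train = np.arange(n_trials) < 200                  # simple holdout split

# One mapping per class, fit only on that class's training trials.
maps = [Ridge(alpha=1.0).fit(A[train & (y == c)], B[train & (y == c)])
        for c in range(n_classes)]

# Classify each test trial by which class's mapping predicts B best.
errors = np.stack([((m.predict(A[~train]) - B[~train]) ** 2).sum(axis=1)
                   for m in maps])
pred = errors.argmin(axis=0)
print("test accuracy: %.2f" % (pred == y[~train]).mean())
```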


Cortex | 2016

Associative hallucinations result from stimulating left ventromedial temporal cortex

Elissa Aminoff; Yuanning Li; John A. Pyles; Michael Ward; R. Mark Richardson; Avniel Singh Ghuman

Visual recognition requires connecting perceptual information with contextual information and existing knowledge. The ventromedial temporal cortex (VTC), including the medial fusiform, has been linked with object recognition, paired-associate learning, contextual processing, and episodic memory, suggesting that this area may be critical in connecting visual processing, context, knowledge, and experience. However, evidence for the link between associative processing, episodic memory, and visual recognition in the VTC is currently lacking. Using electrocorticography (ECoG) in a single human patient, medial regions of the left VTC were found to be sensitive to the contextual associations of objects. Electrical brain stimulation (EBS) of this part of the patient's left VTC, functionally defined as sensitive to associative processing, caused memory-related, associative experiential visual phenomena. This provides evidence of a relationship between visual recognition, associative processing, and episodic memory. These results suggest a potential role for abnormalities of these processes as part of a mechanism that gives rise to some visual hallucinations.


design automation conference | 2014

Computer-Aided Design of Machine Learning Algorithm: Training Fixed-Point Classifier for On-Chip Low-Power Implementation

Hassan Albalawi; Yuanning Li; Xin Li

In this paper, we propose a novel linear discriminant analysis algorithm, referred to as LDA-FP, to train on-chip classifiers that can be implemented with low-power fixed-point arithmetic with extremely small word length. LDA-FP incorporates the non-idealities (i.e., rounding and overflow) associated with fixed-point arithmetic into the training process so that the resulting classifiers are robust to these non-idealities. Mathematically, LDA-FP is formulated as a mixed integer programming problem that can be efficiently solved by a novel branch-and-bound method proposed in this paper. Our numerical experiments demonstrate that LDA-FP substantially outperforms the conventional approach for the emerging biomedical application of brain-computer interfaces.
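To see why the fixed-point non-idealities matter, the sketch below trains an ordinary LDA classifier and then rounds its weights to a signed fixed-point grid of decreasing word length, with saturation on overflow. This is deliberately not the paper's approach (LDA-FP folds quantization into a mixed-integer training problem solved by branch-and-bound); it only illustrates the accuracy loss that naive post-training quantization can incur, which is the problem LDA-FP addresses. The dataset and word lengths are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic two-class data standing in for biomedical features.
X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
X_tr, y_tr, X_te, y_te = X[:700], y[:700], X[700:], y[700:]

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
w, b = lda.coef_.ravel(), lda.intercept_[0]

def quantize(v, bits):
    """Round to a signed fixed-point grid of the given word length,
    saturating on overflow (the two non-idealities the abstract names)."""
    scale = (2 ** (bits - 1) - 1) / np.abs(v).max()
    q = np.clip(np.round(v * scale), -2 ** (bits - 1), 2 ** (bits - 1) - 1)
    return q / scale

for bits in (16, 8, 6, 4, 3, 2):
    wq = quantize(w, bits)
    bq = quantize(np.array([b]), bits)[0]
    acc = ((X_te @ wq + bq > 0).astype(int) == y_te).mean()
    print(f"{bits:2d}-bit weights: test accuracy {acc:.3f}")
```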


bioRxiv | 2018

Posterior and Mid-Fusiform Contribute to Distinct Stages of Facial Expression Processing

Yuanning Li; R. Mark Richardson; Avniel Singh Ghuman

Though the fusiform is well established as a key node in the face perception network, its role in facial expression processing remains unclear, due to competing models and discrepant findings. To help resolve this debate, we recorded from 17 subjects with intracranial electrodes implanted in face-sensitive patches of the fusiform. Multivariate classification analysis showed that facial expression information is represented in fusiform activity, in the same regions that represent identity, though with a smaller effect size. Examination of the spatiotemporal dynamics revealed a functional distinction between posterior and mid-fusiform expression coding, with posterior fusiform showing an early peak of facial expression sensitivity at around 180 ms after subjects viewed a face and mid-fusiform showing a later and extended peak between 230-460 ms. These results support the hypothesis that the fusiform plays a role in facial expression perception and highlight a qualitative functional distinction between processing in posterior and mid-fusiform, with each contributing to temporally segregated stages of expression perception.


bioRxiv | 2018

The left midfusiform gyrus interacts with early visual cortex and the anterior temporal lobe to support word individuation

Matthew Boring; Elizabeth A. Hirshorn; Yuanning Li; Michael Ward; R. Mark Richardson; Julie A. Fiez; Avniel Singh Ghuman

The left mid-ventral temporal cortex (lmVTC) plays a dynamic role in reading. In this study we investigated the neural interactions that influence lmVTC dynamics and the lexical information on which these interactions depend. We monitored activity with either intracranial electroencephalography or magnetoencephalography while participants viewed real words, pseudowords, consonant strings, and false fonts. A coarse-level representation in early lmVTC activity allowed for decoding of visually dissimilar real words, pseudowords, and false fonts. Functional interactions between anterior ventral temporal regions, possibly containing stored knowledge about words, and low-order visual regions occurred after this initial stage of processing and were followed by the individuation of orthographically similar real words in lmVTC, but not of similar pseudowords, letter strings, or false fonts. These results suggest that the individuation of real-word representations in lmVTC is catalyzed by stored knowledge about word forms that emerges from network-level interactions with anterior regions of the temporal lobe.


ACM Transactions on Design Automation of Electronic Systems | 2017

Training Fixed-Point Classifiers for On-Chip Low-Power Implementation

Hassan Albalawi; Yuanning Li; Xin Li

In this article, we develop several novel algorithms to train classifiers that can be implemented on chip with low-power fixed-point arithmetic with extremely small word length. These algorithms are based on Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), and Logistic Regression (LR), and are referred to as LDA-FP, SVM-FP, and LR-FP, respectively. They incorporate the nonidealities (i.e., rounding and overflow) associated with fixed-point arithmetic into the offline training process so that the resulting classifiers are robust to these nonidealities. Mathematically, LDA-FP, SVM-FP, and LR-FP are formulated as mixed integer programming problems that can be robustly solved by the branch-and-bound methods described in this article. Our numerical experiments demonstrate that LDA-FP, SVM-FP, and LR-FP substantially outperform the conventional approaches for the emerging biomedical applications of brain decoding.


Journal of Vision | 2018

Interdigitation of words and faces in the ventral visual stream: reevaluating the spatial organization of category selective cortex using intracranial EEG

Matthew Boring; Edward Silson; Yuanning Li; Michael Ward; Chris I. Baker; Mark Richardson; Avniel Singh Ghuman


Journal of Vision | 2017

Neurodynamics of expression coding in human fusiform

Yuanning Li; Michael Ward; Witold J. Lipski; Robert Mark Richardson; Avniel Singh Ghuman

Collaboration


Dive into Yuanning Li's collaborations.

Top Co-Authors

Michael Ward, University of Pittsburgh
Elizabeth A. Hirshorn, State University of New York System
Hassan Albalawi, Carnegie Mellon University
John A. Pyles, Carnegie Mellon University
Julie A. Fiez, University of Pittsburgh
Matthew Boring, University of Pittsburgh
Xin Li, Carnegie Mellon University