Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yixuan Yuan is active.

Publication


Featured research published by Yixuan Yuan.


Cerebral Cortex | 2013

DICCCOL: Dense Individualized and Common Connectivity-Based Cortical Landmarks

Dajiang Zhu; Kaiming Li; Lei Guo; Xi Jiang; Tuo Zhang; Degang Zhang; Hanbo Chen; Fan Deng; Carlos Faraco; Changfeng Jin; Chong Yaw Wee; Yixuan Yuan; Peili Lv; Yan Yin; Xiaolei Hu; Lian Duan; Xintao Hu; Junwei Han; Lihong Wang; Dinggang Shen; L. Stephen Miller; Lingjiang Li; Tianming Liu

Is there a common structural and functional cortical architecture that can be quantitatively encoded and precisely reproduced across individuals and populations? This question is still largely unanswered due to the vast complexity, variability, and nonlinearity of the cerebral cortex. Here, we hypothesize that the common cortical architecture can be effectively represented by group-wise consistent structural fiber connections and take a novel data-driven approach to explore the cortical architecture. We report a dense and consistent map of 358 cortical landmarks, named Dense Individualized and Common Connectivity-based Cortical Landmarks (DICCCOLs). Each DICCCOL is defined by group-wise consistent white-matter fiber connection patterns derived from diffusion tensor imaging (DTI) data. Our results have shown that these 358 landmarks are remarkably reproducible over more than one hundred human brains and possess accurate intrinsically established structural and functional cross-subject correspondences validated by large-scale functional magnetic resonance imaging data. In particular, these 358 cortical landmarks can be accurately and efficiently predicted in a new single brain with DTI data. Thus, this set of 358 DICCCOL landmarks comprehensively encodes the common structural and functional cortical architectures, providing opportunities for many applications in brain science including mapping human brain connectomes, as demonstrated in this work.


Cerebral Cortex | 2012

Axonal Fiber Terminations Concentrate on Gyri

Jingxin Nie; Lei Guo; Kaiming Li; Yonghua Wang; Guojun Chen; Longchuan Li; Hanbo Chen; Fan Deng; Xi Jiang; Tuo Zhang; Ling Huang; Carlos Faraco; Degang Zhang; Cong Guo; Pew Thian Yap; Xintao Hu; Gang Li; Jinglei Lv; Yixuan Yuan; Dajiang Zhu; Junwei Han; Dean Sabatinelli; Qun Zhao; L. Stephen Miller; Bingqian Xu; Ping Shen; Simon R. Platt; Dinggang Shen; Xiaoping Hu; Tianming Liu

Convoluted cortical folding and neuronal wiring are 2 prominent attributes of the mammalian brain. However, the macroscale intrinsic relationship between these 2 general cross-species attributes, as well as the underlying principles that sculpt the architecture of the cerebral cortex, remains unclear. Here, we show that the axonal fibers connected to gyri are significantly denser than those connected to sulci. In human, chimpanzee, and macaque brains, a dominant fraction of axonal fibers were found to be connected to the gyri. This finding has been replicated in a range of mammalian brains via diffusion tensor imaging and high-angular resolution diffusion imaging. These results may shed some light on the fundamental mechanisms underlying the development and organization of the cerebral cortex, suggesting that axonal pushing is a mechanism of cortical folding.


IEEE Journal of Biomedical and Health Informatics | 2016

Bleeding Frame and Region Detection in the Wireless Capsule Endoscopy Video

Yixuan Yuan; Baopu Li; Max Q.-H. Meng

Wireless capsule endoscopy (WCE) enables noninvasive and painless direct visual inspection of a patient's whole digestive tract, but at the price of clinicians spending a long time reviewing a large number of images. Thus, an automatic computer-aided technique to reduce the burden on physicians is in high demand. In this paper, we propose a novel color feature extraction method to discriminate the bleeding frames from the normal ones, with further localization of the bleeding regions. Our proposal is based on a twofold system. First, we make full use of the color information of WCE images and apply the K-means clustering method to the pixel-represented images to obtain the cluster centers, with which we characterize WCE images as words-based color histograms. Then, we judge the status of a WCE frame by applying the support vector machine (SVM) and K-nearest neighbor methods. Comprehensive experimental results reveal that the best classification performance is obtained with the YCbCr color space, a cluster number of 80, and the SVM. The achieved classification performance reaches 95.75% in accuracy and 0.9771 in AUC, validating that the proposed scheme provides excellent performance for bleeding classification. Second, we propose a two-stage saliency map extraction method to highlight bleeding regions, where the first-stage saliency map is created by mixing different color channels and the second-stage saliency map is obtained from the visual contrast. After an appropriate fusion strategy and thresholding, we localize the bleeding areas. Quantitative as well as qualitative results show that our methods can differentiate the bleeding areas from their neighborhoods correctly.
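The abstract above outlines a concrete pipeline: cluster YCbCr pixel colors into 80 visual words, describe each frame as a word histogram, and classify with an SVM. The sketch below follows that outline; the array shapes and helper names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the words-based color histogram + SVM pipeline described
# above. The YCbCr color space, 80 cluster centers, and SVM classifier follow
# the abstract; everything else is an illustrative assumption.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

N_WORDS = 80  # cluster number reported to work best in the paper

def color_words(images_bgr, n_words=N_WORDS):
    """Cluster all pixel colors (YCbCr) into a visual-word vocabulary."""
    pixels = np.vstack([
        cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb).reshape(-1, 3)
        for img in images_bgr
    ]).astype(np.float32)
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(pixels)

def histogram(img_bgr, kmeans):
    """Represent one WCE frame as a normalized histogram of color words."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb).reshape(-1, 3)
    words = kmeans.predict(ycrcb.astype(np.float32))
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Usage sketch: train_imgs is a list of BGR frames, train_labels marks bleeding frames.
# kmeans = color_words(train_imgs)
# X = np.array([histogram(img, kmeans) for img in train_imgs])
# clf = SVC(kernel="rbf").fit(X, train_labels)
# prediction = clf.predict([histogram(test_img, kmeans)])
```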


Neuroinformatics | 2013

Meta-analysis of Functional Roles of DICCCOLs

Yixuan Yuan; Xi Jiang; Dajiang Zhu; Hanbo Chen; Kaiming Li; Peili Lv; Xiang Yu; Xiaojin Li; Shu Zhang; Tuo Zhang; Xintao Hu; Junwei Han; Lei Guo; Tianming Liu

DICCCOL (Dense Individualized and Common Connectivity-based Cortical Landmarks) is a recently published system composed of 358 cortical landmarks that possess consistent correspondences across individuals and populations. Meanwhile, each DICCCOL landmark is localized in an individual brain's unique morphological profile, and therefore the DICCCOL system offers a universal and individualized brain reference and localization framework. However, of the current 358 diffusion tensor imaging (DTI)-derived DICCCOLs, only 95 have been functionally annotated via task-based or resting-state fMRI datasets, and the functional roles of the other DICCCOLs remain unknown. This work takes advantage of the fMRI studies in the existing literature (1,110 publications) reported and aggregated in the BrainMap database to examine the possible functional roles of the 358 DICCCOLs via meta-analysis. Our experimental results demonstrate that a majority of the 358 DICCCOLs can be functionally annotated by the BrainMap database, and many DICCCOLs have rich and diverse functional roles in multiple behavioral domains. This study provides novel insights into the functional regularity and diversity of the 358 DICCCOLs, and offers a starting point for future elucidation of fine-grained functional roles of cortical landmarks.


NeuroImage | 2012

Visual analytics of brain networks

Kaiming Li; Lei Guo; Carlos Faraco; Dajiang Zhu; Hanbo Chen; Yixuan Yuan; Jinglei Lv; Fan Deng; Xi Jiang; Tuo Zhang; Xintao Hu; Degang Zhang; L. Stephen Miller; Tianming Liu

Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging.


IEEE Transactions on Medical Imaging | 2015

Saliency Based Ulcer Detection for Wireless Capsule Endoscopy Diagnosis

Yixuan Yuan; Jiaole Wang; Baopu Li; Max Q.-H. Meng

Ulcers are among the most common symptoms of many serious diseases in the human digestive tract. Especially for ulcers in the small bowel, which other procedures cannot adequately visualize, wireless capsule endoscopy (WCE) is increasingly being used in diagnosis and clinical management. Because WCE generates a large number of images over the whole inspection, computer-aided detection of ulcers is considered an indispensable relief for clinicians. In this paper, a two-stage, fully automated computer-aided detection system is proposed to detect ulcers in WCE images. In the first stage, we propose an effective saliency detection method based on multi-level superpixel representation to outline the ulcer candidates. To find the perceptually and semantically meaningful salient regions, we first segment the image into superpixels at multiple levels, where each level corresponds to a different initial superpixel size. Then we evaluate the corresponding saliency according to the color and texture features of the superpixel regions at each level. In the end, we fuse the saliency maps from all levels together to obtain the final saliency map. In the second stage, we apply the obtained saliency map to better encode the image features for the ulcer image recognition task. Because the ulcer mainly corresponds to the salient region, we propose a saliency max-pooling method integrated with the Locality-constrained Linear Coding (LLC) method to characterize the images. Experimental results achieve a promising 92.65% accuracy and 94.12% sensitivity, validating the effectiveness of the proposed method. Moreover, the comparison results show that our detection system outperforms the state-of-the-art methods on the ulcer classification task.
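To make the first-stage idea concrete, the sketch below builds superpixels at several granularities, scores each superpixel by a simple color-contrast saliency, and averages the per-level maps. The contrast measure, level settings, and fusion by averaging are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch of multi-level superpixel saliency: SLIC at several
# granularities, per-superpixel color contrast, average fusion across levels.
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2lab

def level_saliency(lab, segments):
    """Saliency of each superpixel = mean color distance to all other superpixels."""
    labels = np.unique(segments)
    means = np.array([lab[segments == l].mean(axis=0) for l in labels])
    contrast = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2).sum(axis=1)
    contrast = (contrast - contrast.min()) / (np.ptp(contrast) + 1e-8)
    sal = np.zeros(segments.shape, dtype=float)
    for l, c in zip(labels, contrast):
        sal[segments == l] = c
    return sal

def multi_level_saliency(rgb, n_segments_levels=(50, 150, 400)):
    """Fuse saliency maps computed at several superpixel granularities."""
    lab = rgb2lab(rgb)
    maps = [level_saliency(lab, slic(rgb, n_segments=n, start_label=0))
            for n in n_segments_levels]
    return np.mean(maps, axis=0)
```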


IEEE Transactions on Medical Imaging | 2013

Inferring Group-Wise Consistent Multimodal Brain Networks via Multi-View Spectral Clustering

Hanbo Chen; Kaiming Li; Dajiang Zhu; Xi Jiang; Yixuan Yuan; Peili Lv; Tuo Zhang; Lei Guo; Dinggang Shen; Tianming Liu

Quantitative modeling and analysis of structural and functional brain networks based on diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) data have received extensive interest recently. However, the regularity of these structural and functional brain networks across multiple neuroimaging modalities and also across different individuals is largely unknown. This paper presents a novel approach to inferring group-wise consistent brain subnetworks from multimodal DTI/resting-state fMRI datasets via multi-view spectral clustering of cortical networks, which were constructed upon our recently developed and validated large-scale cortical landmarks, DICCCOL (dense individualized and common connectivity-based cortical landmarks). We applied the algorithms to DTI data of 100 healthy young females and 50 healthy young males, obtained consistent multimodal brain networks within and across multiple groups, and further examined the functional roles of these networks. Our experimental results demonstrated that the derived brain networks have substantially improved inter-modality and inter-subject consistency.
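The paper's method is a proper multi-view spectral clustering over structural and functional connectivity views; the sketch below is a deliberately minimal stand-in that fuses the two affinity matrices by averaging before a single spectral clustering. The subnetwork count and variable names are assumptions for illustration.

```python
# Simplified stand-in for multi-view clustering of DICCCOL-based networks:
# one affinity per modality, fused by averaging, then spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

def fuse_and_cluster(structural_affinity, functional_affinity, n_subnetworks=10):
    """Cluster landmarks into subnetworks using both connectivity views."""
    fused = 0.5 * (structural_affinity + functional_affinity)
    fused = 0.5 * (fused + fused.T)  # keep the fused affinity symmetric
    model = SpectralClustering(n_clusters=n_subnetworks,
                               affinity="precomputed",
                               random_state=0)
    return model.fit_predict(fused)

# structural_affinity, functional_affinity: 358 x 358 nonnegative matrices
# indexed by DICCCOL landmarks; the returned labels assign each landmark
# to one group-wise subnetwork.
```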


IEEE Transactions on Automation Science and Engineering | 2016

Improved Bag of Feature for Automatic Polyp Detection in Wireless Capsule Endoscopy Images

Yixuan Yuan; Baopu Li; Max Q.-H. Meng

Wireless capsule endoscopy (WCE) needs computerized methods to reduce the review time for its large volume of image data. In this paper, we propose an improved bag of feature (BoF) method to assist the classification of polyps in WCE images. Instead of utilizing a single scale-invariant feature transform (SIFT) feature as in the traditional BoF method, we extract different textural features from the neighborhoods of the key points and integrate them together as synthetic descriptors to carry out the classification task. Specifically, we study the influence of the number of visual words, the patch size, and different classification methods on classification performance. Comprehensive experimental results reveal that the best classification performance is obtained with the integrated feature strategy using the SIFT and complete local binary pattern (CLBP) features, a visual vocabulary of 120 words, a patch size of 8×8, and the support vector machine (SVM). The achieved classification accuracy reaches 93.2%, confirming that the proposed scheme is promising for the classification of polyps in WCE images.
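A minimal sketch of the integrated-descriptor idea follows: each SIFT keypoint is described by its SIFT vector concatenated with a texture histogram from an 8×8 neighborhood, a 120-word codebook is learned, and word histograms are classified with an SVM. The standard local binary pattern used here is only a stand-in for the complete LBP (CLBP) of the paper, and the pipeline details are assumptions.

```python
# Hedged sketch of the improved BoF pipeline: SIFT + local texture descriptor,
# 120-word codebook, SVM on word histograms.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans
from sklearn.svm import SVC

PATCH, N_WORDS = 8, 120
sift = cv2.SIFT_create()

def keypoint_descriptors(gray):
    """SIFT descriptor concatenated with an LBP histogram of the keypoint patch."""
    kps, sift_desc = sift.detectAndCompute(gray, None)
    descs = []
    for kp, d in zip(kps, sift_desc if sift_desc is not None else []):
        x, y = int(kp.pt[0]), int(kp.pt[1])
        patch = gray[max(y - PATCH // 2, 0):y + PATCH // 2,
                     max(x - PATCH // 2, 0):x + PATCH // 2]
        if patch.shape[0] < 3 or patch.shape[1] < 3:
            continue  # skip keypoints too close to the image border
        lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        descs.append(np.hstack([d, lbp_hist]))
    return np.array(descs)

# Usage sketch (train_grays: grayscale frames, labels: polyp / normal):
# all_desc = np.vstack([keypoint_descriptors(g) for g in train_grays])
# codebook = KMeans(n_clusters=N_WORDS, n_init=10, random_state=0).fit(all_desc)
# X = [np.bincount(codebook.predict(keypoint_descriptors(g)), minlength=N_WORDS)
#      for g in train_grays]
# clf = SVC(kernel="rbf").fit(np.array(X, dtype=float), labels)
```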


Medical Physics | 2017

Deep learning for polyp recognition in wireless capsule endoscopy images

Yixuan Yuan; Max Q.-H. Meng

Purpose: Wireless capsule endoscopy (WCE) enables physicians to examine the digestive tract without any surgical operations, at the cost of a large volume of images to be analyzed. In the computer-aided diagnosis of WCE images, the main challenge arises from the difficulty of robust characterization of images. This study aims to provide a discriminative description of WCE images and assist physicians in recognizing polyp images automatically. Methods: We propose a novel deep feature learning method, named stacked sparse autoencoder with image manifold constraint (SSAEIM), to recognize polyps in WCE images. Our SSAEIM differs from the traditional sparse autoencoder (SAE) by introducing an image manifold constraint, which is constructed by a nearest neighbor graph and represents intrinsic structures of images. The image manifold constraint enforces that images within the same category share similar learned features and images in different categories are kept far apart. Thus, the learned features preserve large inter-class variances and small intra-class variances among images. Results: The average overall recognition accuracy (ORA) of our method for WCE images is 98.00%. The accuracies for polyps, bubbles, turbid images, and clear images are 98.00%, 99.50%, 99.00%, and 95.50%, respectively. Moreover, the comparison results show that our SSAEIM outperforms existing polyp recognition methods with a relatively higher ORA. Conclusion: The comprehensive results have demonstrated that the proposed SSAEIM can provide descriptive characterization for WCE images and recognize polyps in a WCE video accurately. This method could be further utilized in clinical trials to relieve physicians of tedious image reading work.
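The core idea, reconstruction learning regularized by a nearest-neighbor image graph, can be sketched for a single layer of the stack as below. The layer size, neighbor count, regularization weights, and the L1 activation penalty (standing in for a KL sparsity term) are assumptions; the paper's SSAEIM stacks several such layers.

```python
# Hedged sketch of one autoencoder layer with an image-manifold constraint:
# reconstruction loss plus a graph term pulling codes of neighboring images together.
import torch
import torch.nn as nn
from sklearn.neighbors import kneighbors_graph

class SparseAE(nn.Module):
    def __init__(self, n_in, n_hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.enc(x)
        return h, self.dec(h)

def train_layer(X, n_hidden=256, k=5, lam=1e-3, epochs=200):
    """X: (n_images, n_features) float32 tensor of flattened WCE images."""
    # k-nearest-neighbor graph over images defines the manifold constraint
    W = torch.tensor(kneighbors_graph(X.numpy(), k, mode="connectivity").toarray(),
                     dtype=torch.float32)
    model = SparseAE(X.shape[1], n_hidden)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        h, x_rec = model(X)
        recon = ((x_rec - X) ** 2).mean()
        sparsity = h.abs().mean()  # L1 stand-in for the KL sparsity penalty
        # manifold term: graph-connected images should have similar codes
        manifold = (W * torch.cdist(h, h) ** 2).sum() / W.sum()
        loss = recon + 1e-4 * sparsity + lam * manifold
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Stacking sketch: encode X with the trained layer, then train the next layer
# on the resulting codes, repeating for each layer of the stack.
```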


International Conference on Mechatronics and Automation | 2013

Hierarchical key frames extraction for WCE video

Yixuan Yuan; Max Q.-H. Meng

Wireless capsule endoscopy (WCE) is an advanced, patient-friendly imaging technique that enables close examination of the entire small intestine. Since it usually takes hours even for a professional clinician to review all the video data, automatic computer-aided techniques are in high demand. This paper presents a hierarchical methodology for detecting key frames in WCE videos. In the first stage, we choose key frames at the local maxima of the change in information entropy, selected by an automatic threshold, and use them to cut the video into several segments. Then the affinity propagation (AP) clustering method is applied within each segment to extract the second-stage key frames. Our method maintains the temporal information and maximizes the content distance. Experimental results demonstrate that the proposed technique achieves promising performance, with an average fidelity of 0.9206 and a compression ratio of 0.9125.
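The two stages map naturally onto a short sketch: compute per-frame gray-level entropy, cut the sequence where the frame-to-frame entropy change exceeds an automatic threshold, then run affinity propagation within each segment and keep the exemplar frames. The entropy measure, threshold rule, and clustering feature below are illustrative assumptions.

```python
# Hedged sketch of hierarchical key frame extraction: entropy-change cuts,
# then affinity propagation within each segment.
import numpy as np
from sklearn.cluster import AffinityPropagation

def gray_entropy(gray):
    """Shannon entropy of the 8-bit gray-level histogram of one frame."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))

def key_frames(frames_gray):
    """frames_gray: list of 2-D uint8 arrays (one WCE frame each)."""
    ent = np.array([gray_entropy(f) for f in frames_gray])
    diff = np.abs(np.diff(ent))
    cuts = np.where(diff > diff.mean() + diff.std())[0] + 1  # automatic threshold
    keys = []
    for seg in np.split(np.arange(len(frames_gray)), cuts):
        if len(seg) < 2:
            keys.extend(seg)  # a lone frame is its own key frame
            continue
        feats = np.array([frames_gray[i].astype(float).ravel() for i in seg])
        ap = AffinityPropagation(random_state=0).fit(feats)
        keys.extend(seg[ap.cluster_centers_indices_])
    return sorted(keys)
```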

Collaboration


Dive into Yixuan Yuan's collaborations.

Top Co-Authors

Max Q.-H. Meng
The Chinese University of Hong Kong

Lei Guo
Northwestern Polytechnical University

Tuo Zhang
Northwestern Polytechnical University

Xintao Hu
Northwestern Polytechnical University

Junwei Han
Northwestern Polytechnical University

Kaiming Li
Northwestern Polytechnical University

Xi Jiang
University of Georgia