
Publication


Featured research published by Siqi Liu.


IEEE Transactions on Biomedical Engineering | 2015

Multimodal Neuroimaging Feature Learning for Multiclass Diagnosis of Alzheimer's Disease

Siqi Liu; Sidong Liu; Weidong Cai; Hangyu Che; Sonia Pujol; Ron Kikinis; Dagan Feng; Michael J. Fulham; ADNI

The accurate diagnosis of Alzheimer's disease (AD) is essential for patient care and will become increasingly important as disease-modifying agents become available early in the course of the disease. Although studies have applied machine learning methods to the computer-aided diagnosis of AD, previous methods showed a bottleneck in diagnostic performance due to the lack of efficient strategies for representing neuroimaging biomarkers. In this study, we designed a novel diagnostic framework with a deep learning architecture to aid the diagnosis of AD. The framework uses a zero-masking strategy for data fusion to extract complementary information from multiple data modalities. Compared to previous state-of-the-art workflows, our method can fuse multimodal neuroimaging features in one setting and has the potential to require less labeled data. A performance gain was achieved in both binary and multiclass classification of AD. The advantages and limitations of the proposed framework are discussed.
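As a rough illustration of the zero-masking idea (the array shapes, `p_mask` parameter, and modality feature counts below are made up for this sketch, not taken from the paper): during training, one modality's block of the concatenated feature vector is randomly zeroed while the reconstruction target keeps both blocks, so the network is pushed to recover one modality from the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_mask_batch(mri, pet, p_mask=0.5):
    """With probability p_mask, zero out one modality's block of each
    concatenated sample; the reconstruction target keeps both blocks,
    so an autoencoder must learn to recover one modality from the other."""
    x = np.concatenate([mri, pet], axis=1)
    target = x.copy()                       # unmasked reconstruction target
    for i in range(x.shape[0]):
        if rng.random() < p_mask:
            if rng.random() < 0.5:
                x[i, :mri.shape[1]] = 0.0   # drop the MRI block
            else:
                x[i, mri.shape[1]:] = 0.0   # drop the PET block
    return x, target

mri = rng.normal(size=(4, 3))   # 4 subjects, 3 hypothetical MRI features
pet = rng.normal(size=(4, 2))   # 2 hypothetical PET features
masked, target = zero_mask_batch(mri, pet, p_mask=1.0)
```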


international symposium on biomedical imaging | 2014

Early diagnosis of Alzheimer's disease with deep learning

Siqi Liu; Sidong Liu; Weidong Cai; Sonia Pujol; Ron Kikinis; Dagan Feng

The accurate diagnosis of Alzheimer's disease (AD) plays a significant role in patient care, especially at the early stage, because awareness of the severity and progression risks allows patients to take preventive measures before irreversible brain damage occurs. Although many studies have recently applied machine learning methods to the computer-aided diagnosis (CAD) of AD, most showed a bottleneck in diagnostic performance, mainly due to the inherent limitations of the chosen learning models. In this study, we design a deep learning architecture, comprising stacked auto-encoders and a softmax output layer, to overcome this bottleneck and aid the diagnosis of AD and its prodromal stage, mild cognitive impairment (MCI). Compared to previous workflows, our method can analyze multiple classes in one setting, and requires fewer labeled training samples and minimal domain prior knowledge. A significant performance gain was achieved on classification of all diagnosis groups in our experiments.
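The overall shape of the architecture can be sketched as follows; this is an untrained forward pass with random weights, and the layer sizes and three-class output are hypothetical, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # numerically stable
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical layer sizes; in a stacked auto-encoder each encoder
# layer would be pre-trained to reconstruct its own input.
layer_dims = [100, 50, 20]
weights = [rng.normal(scale=0.1, size=(a, b))
           for a, b in zip(layer_dims[:-1], layer_dims[1:])]
w_out = rng.normal(scale=0.1, size=(layer_dims[-1], 3))  # e.g. NC / MCI / AD

def predict_proba(x):
    h = x
    for w in weights:              # stacked encoding layers
        h = sigmoid(h @ w)
    return softmax(h @ w_out)      # softmax output layer

proba = predict_proba(rng.normal(size=(5, 100)))   # 5 subjects
```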


Brain Informatics | 2015

Multimodal neuroimaging computing: a review of the applications in neuropsychiatric disorders

Sidong Liu; Weidong Cai; Siqi Liu; Fan Zhang; Michael J. Fulham; Dagan Feng; Sonia Pujol; Ron Kikinis

Multimodal neuroimaging is increasingly used in neuroscience research, as it overcomes the limitations of individual modalities. One of the most important applications of multimodal neuroimaging is the provision of vital diagnostic data for neuropsychiatric disorders. Multimodal neuroimaging computing enables the visualization and quantitative analysis of alterations in brain structure and function, and has reshaped how neuroscience research is carried out. Research in this area is growing exponentially, so it is an appropriate time to review the current and future development of this emerging field. Hence, in this paper, we review recent advances in multimodal neuroimaging (MRI, PET) and electrophysiological (EEG, MEG) technologies and their applications to neuropsychiatric disorders. We also outline future directions for multimodal neuroimaging, where researchers will design more advanced methods and models for neuropsychiatric research.


Brain Informatics | 2015

Multimodal neuroimaging computing: the workflows, methods, and platforms.

Sidong Liu; Weidong Cai; Siqi Liu; Fan Zhang; Michael J. Fulham; Dagan Feng; Sonia Pujol; Ron Kikinis

The last two decades have witnessed explosive growth in the development and use of noninvasive neuroimaging technologies that advance research on the human brain under normal and pathological conditions. Multimodal neuroimaging has become a major driver of current neuroimaging research, owing to the recognition of the clinical benefits of multimodal data and better access to hybrid devices. Multimodal neuroimaging computing is very challenging, requiring sophisticated methods to address the variations in spatiotemporal resolution and to merge the biophysical/biochemical information. We review the current workflows and methods for multimodal neuroimaging computing, and also demonstrate how to conduct research using established neuroimaging computing packages and platforms.


Neuroinformatics | 2016

Rivulet: 3D Neuron Morphology Tracing with Iterative Back-Tracking

Siqi Liu; Donghao Zhang; Sidong Liu; Dagan Feng; Hanchuan Peng; Weidong Cai

The digital reconstruction of single neurons from 3D confocal microscopic images is an important tool for understanding neuron morphology and function. However, accurate automatic neuron reconstruction remains a challenging task due to varying image quality and the complexity of the neuronal arborisation. Targeting the common challenges of neuron tracing, we propose a novel automatic 3D neuron reconstruction algorithm, named Rivulet, based on multi-stencils fast marching and iterative back-tracking. The proposed Rivulet algorithm is capable of tracing discontinuous areas without being interrupted by densely distributed noise. Evaluated on the data provided by the DIADEM challenge and the recent BigNeuron project, Rivulet is shown to be robust to challenging microscopic image stacks. We discuss the algorithm design in technical detail, including the relationships between the proposed algorithm and other state-of-the-art neuron tracing algorithms.
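The back-tracking step can be pictured on a toy 2D time map. This greedy neighbour descent is a simplified stand-in for back-tracking on a fast-marching time map (the paper's algorithm works on 3D image stacks with sub-voxel gradient descent); the grid and seed below are invented for illustration.

```python
import numpy as np

def back_track(T, start):
    """Greedy back-tracking on a time map T: from `start`, repeatedly
    step to the 4-neighbour with the smallest T until the seed
    (T == 0 here) is reached. A toy stand-in for gradient-descent
    back-tracking on a fast-marching map."""
    path = [start]
    r, c = start
    while T[r, c] > 0:
        neighbours = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < T.shape[0] and 0 <= c + dc < T.shape[1]]
        r, c = min(neighbours, key=lambda p: T[p])  # steepest descent step
        path.append((r, c))
    return path

# Toy "time map": Manhattan distance from a seed at (0, 0).
n = 5
T = np.fromfunction(lambda i, j: i + j, (n, n))
path = back_track(T, start=(4, 4))
```

In the full algorithm, tracing is repeated from the remaining farthest unexplored point, which is what lets Rivulet bridge discontinuous areas.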


international conference on control, automation, robotics and vision | 2014

Propagation graph fusion for multi-modal medical content-based retrieval

Sidong Liu; Siqi Liu; Sonia Pujol; Ron Kikinis; Dagan Feng; Weidong Cai

Medical content-based retrieval (MCBR) plays an important role in computer-aided diagnosis and clinical decision support. Multi-modal imaging data have been increasingly used in MCBR, as they provide more insight into diseases and complement the deficiencies of single-modal data. However, fusing data from different modalities is very challenging, since the modalities have different physical fundamentals and large variations in value range. In this study, we propose a novel Propagation Graph Fusion (PGF) framework for multi-modal medical data retrieval. PGF models the relationships between subjects in each single modality using directed propagation graphs, and then fuses the graphs into a single graph by summing the edge weights. The proposed PGF method reduces the large inter-modality and inter-subject variations, and can be solved efficiently using the PageRank algorithm. We test the proposed method on a public medical database of 331 subjects using features extracted from two imaging modalities, PET and MRI. The preliminary results show that our PGF method enhances multi-modal retrieval and modestly outperforms the state-of-the-art single-modal and multi-modal retrieval methods.
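The fuse-then-rank idea can be sketched as follows. The per-modality affinity graphs below are random stand-ins (the paper's propagation-graph construction is not reproduced here), and ranking uses plain power-iteration PageRank on the fused graph.

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on a weighted adjacency matrix
    (adj[i, j] = weight of the directed edge i -> j)."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                  # guard against dangling nodes
    P = adj / out                        # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (r @ P)
    return r

# Hypothetical per-modality graphs over 4 subjects (edge weight =
# directed affinity of one subject toward another in that modality).
rng = np.random.default_rng(1)
g_pet = rng.random((4, 4))
g_mri = rng.random((4, 4))
np.fill_diagonal(g_pet, 0.0)
np.fill_diagonal(g_mri, 0.0)

g_fused = g_pet + g_mri                  # fuse by summing edge weights
ranks = pagerank(g_fused)                # retrieval scores over subjects
```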


international symposium on biomedical imaging | 2015

Longitudinal brain MR retrieval with diffeomorphic demons registration: What happened to those patients with similar changes?

Siqi Liu; Sidong Liu; Fan Zhang; Weidong Cai; Sonia Pujol; Ron Kikinis; Dagan Feng

Current medical content-based retrieval (MCBR) systems for neuroimaging data mainly focus on retrieving cross-sectional neuroimaging data with similar regional or global measurements. The longitudinal pathological changes across different time points are usually neglected in such MCBR systems. We propose cross-registration-based retrieval for longitudinal MR data, which retrieves patients with similar structural changes, as an extension to existing MCBR systems. Diffeomorphic demons registration is used to extract the tissue deformation between two adjacent MR volumes. An asymmetric square dissimilarity matrix is designed for indexing patient changes within a specific interval. A visual demonstration shows the registration displacement fields of the query compared to the simulated results. The experimental performance, measured by mean average precision (mAP) and average top-K accuracy (aACC), is reported for evaluation.
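One way to picture an asymmetric change-dissimilarity matrix is below. The measure (comparing displacement magnitudes only inside the region where the first patient deforms), the 0.3 threshold, and the toy displacement fields are all assumptions for illustration, not the paper's definition.

```python
import numpy as np

def change_dissimilarity(fields):
    """Asymmetric dissimilarity between patients' longitudinal changes.
    fields[k]: displacement field (H, W, 2) of patient k between two
    time points. D[a, b] compares magnitudes only inside the region
    where patient a deforms, so in general D[a, b] != D[b, a]."""
    mags = [np.linalg.norm(f, axis=-1) for f in fields]
    n = len(fields)
    D = np.zeros((n, n))
    for a in range(n):
        region = mags[a] > 0.3               # voxels where patient a changes
        for b in range(n):
            if a != b and region.any():
                D[a, b] = np.abs(mags[a][region] - mags[b][region]).mean()
    return D

# Toy fields: three patients with increasingly large deformations.
rng = np.random.default_rng(7)
fields = [rng.normal(scale=s, size=(8, 8, 2)) for s in (0.5, 1.0, 2.0)]
D = change_dissimilarity(fields)
```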


IEEE Transactions on Biomedical Engineering | 2016

Pairwise Latent Semantic Association for Similarity Computation in Medical Imaging

Fan Zhang; Yang Song; Weidong Cai; Sidong Liu; Siqi Liu; Sonia Pujol; Ron Kikinis; Yong Xia; Michael J. Fulham; David Dagan Feng; Alzheimer's Disease Neuroimaging Initiative

Retrieving medical images that present similar diseases is an active research area for diagnostics and therapy. However, it can be problematic given the visual variations between anatomical structures. In this paper, we propose a new feature extraction method for similarity computation in medical imaging. Instead of the low-level visual appearance, we design a CCA-PairLDA feature representation method to capture the similarity between images with high-level semantics. First, we extract the PairLDA topics to represent an image as a mixture of latent semantic topics in an image pair context. Second, we generate a CCA-correlation model to represent the semantic association between an image pair for similarity computation. While PairLDA adjusts the latent topics for all image pairs, CCA-correlation helps to associate an individual image pair. In this way, the semantic descriptions of an image pair are closely correlated, and naturally correspond to similarity computation between images. We evaluated our method on two public medical imaging datasets for image retrieval and showed improved performance.
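CCA, one of the two ingredients, can be sketched with a QR-based computation of the first canonical correlation. The data below are synthetic, and this is generic CCA between two feature views, not the paper's CCA-PairLDA pipeline.

```python
import numpy as np

def first_canonical_corr(X, Y):
    """First canonical correlation between paired views X and Y
    (rows are paired samples). After centering and QR-orthonormalising
    each view, the singular values of Qx.T @ Qy are the canonical
    correlations."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(Xc)
    qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return min(s[0], 1.0)        # clamp float round-off above 1

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
rho_linked = first_canonical_corr(X, X @ rng.normal(size=(3, 2)))  # related views
rho_noise = first_canonical_corr(X, rng.normal(size=(100, 2)))     # unrelated views
```

Linearly related views reach a canonical correlation of essentially 1, while unrelated views score much lower; the paper uses such correlations to associate the two members of an image pair.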


Australasian Conference on Artificial Life and Computational Intelligence | 2015

Ranking-based vocabulary pruning in bag-of-features for image retrieval

Fan Zhang; Yang Song; Weidong Cai; Alexander G. Hauptmann; Sidong Liu; Siqi Liu; David Dagan Feng; Mei Chen

Content-based image retrieval (CBIR) has been applied to a variety of medical applications, e.g., pathology research and clinical decision support, and the bag-of-features (BOF) model is one of the most widely used techniques. In this study, we address the problem of vocabulary pruning to reduce the influence of redundant and noisy visual words. The conditional probability of each word given the hidden topics, extracted using probabilistic latent semantic analysis (pLSA), is first calculated. A ranking method is then proposed to compute the significance of the words based on the relationship between the words and the topics. Experiments on the publicly available Early Lung Cancer Action Program (ELCAP) database show that the method reduces the number of words required while improving retrieval performance. The proposed method is applicable to general image retrieval, since it is independent of the problem domain.
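The ranking-and-pruning step might look like the minimal sketch below, which scores each word by its peak conditional probability over the topics. Both the scoring rule and the toy P(word | topic) table are assumptions for illustration, not the paper's exact significance measure.

```python
import numpy as np

def prune_vocabulary(p_w_given_z, keep):
    """Rank visual words by their peak conditional probability over the
    pLSA topics and keep the `keep` most significant ones. Words that
    are weak under every topic (redundant / noisy) are discarded."""
    scores = p_w_given_z.max(axis=1)      # (n_words,) significance scores
    order = np.argsort(scores)[::-1]      # most significant first
    return np.sort(order[:keep])          # indices of retained words

# Hypothetical P(word | topic): 6 words, 2 topics; each column sums to 1.
p_w_given_z = np.array([
    [0.50, 0.05],
    [0.05, 0.45],
    [0.30, 0.05],
    [0.05, 0.30],
    [0.05, 0.10],
    [0.05, 0.05],
])
kept = prune_vocabulary(p_w_given_z, keep=4)
```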


Australasian Conference on Artificial Life and Computational Intelligence | 2015

Multi-Phase Feature Representation Learning for Neurodegenerative Disease Diagnosis

Siqi Liu; Sidong Liu; Weidong Cai; Sonia Pujol; Ron Kikinis; David Dagan Feng

Feature learning with high-dimensional neuroimaging features has been explored for applications on neurodegenerative diseases. Low-dimensional biomarkers, such as mental status test scores and cerebrospinal fluid levels, are essential in the clinical diagnosis of neurological disorders, because they give clinicians simple and effective measures of a disorder's progression and severity. Rather than only using the low-dimensional biomarkers as inputs to decision-making systems, we believe such biomarkers can be used to enhance the feature learning pipeline. In this study, we propose a novel feature representation learning framework, Multi-Phase Feature Representation (MPFR), with low-dimensional biomarkers embedded. MPFR learns high-level neuroimaging features by extracting the associations between the low-dimensional biomarkers and the high-dimensional neuroimaging features with a deep neural network. We validated the proposed framework using the Mini-Mental State Examination (MMSE) score as the low-dimensional biomarker and multi-modal neuroimaging data as the high-dimensional features, on the ADNI baseline cohort. The proposed approach outperformed the original neural network in both binary and ternary Alzheimer's disease classification tasks.
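The core idea of guiding feature learning with a low-dimensional biomarker can be caricatured with a linear stand-in for the deep network. The synthetic data, the least-squares "phase", and the coefficient values are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: 50 subjects, 10 neuroimaging features, and an
# MMSE-like score that (by construction) depends on two of the features.
X = rng.normal(size=(50, 10))
mmse = 30.0 - 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=50)

# Phase 1 (linear stand-in for the deep network): learn the association
# between the high-dimensional imaging features and the low-dimensional
# biomarker by least squares.
A = np.column_stack([X, np.ones(len(X))])   # add an intercept column
w, *_ = np.linalg.lstsq(A, mmse, rcond=None)

# Phase 2: the biomarker-guided projection is a learned scalar feature
# that can be combined with others and fed to a classifier.
guided = A @ w
```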

Collaboration


Dive into Siqi Liu's collaborations.

Top Co-Authors


Ron Kikinis

Brigham and Women's Hospital


Sonia Pujol

Brigham and Women's Hospital


Fan Zhang

Brigham and Women's Hospital


Michael J. Fulham

Royal Prince Alfred Hospital


Hanchuan Peng

Allen Institute for Brain Science
