Publication


Featured research published by Mingxia Liu.


IEEE Transactions on Biomedical Engineering | 2015

Domain Transfer Learning for MCI Conversion Prediction

Bo Cheng; Mingxia Liu; Daoqiang Zhang; Brent C. Munsell; Dinggang Shen

Machine learning methods have successfully been used to predict the conversion of mild cognitive impairment (MCI) to Alzheimer's disease (AD), by classifying MCI converters (MCI-C) from MCI nonconverters (MCI-NC). However, most existing methods construct classifiers using data from one particular target domain (e.g., MCI), and ignore data in other related domains (e.g., AD and normal control (NC)) that may provide valuable information to improve MCI conversion prediction performance. To address this limitation, we develop a novel domain transfer learning method for MCI conversion prediction, which can use data from both the target domain (i.e., MCI) and auxiliary domains (i.e., AD and NC). Specifically, the proposed method consists of three key components: 1) a domain transfer feature selection component that selects the most informative feature subset from both the target domain and auxiliary domains across different imaging modalities; 2) a domain transfer sample selection component that selects the most informative sample subset from the same target and auxiliary domains across different data modalities; and 3) a domain transfer support vector machine classification component that fuses the selected features and samples to separate MCI-C and MCI-NC patients. We evaluate our method on 202 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) who have MRI, FDG-PET, and CSF data. The experimental results show that the proposed method can classify MCI-C patients from MCI-NC patients with an accuracy of 79.4%, with the aid of additional domain knowledge learned from AD and NC.
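
As a rough illustration of the general transfer-learning idea behind this work, the sketch below pools auxiliary-domain subjects with target-domain subjects and down-weights them when training an SVM. All data, weights, and the model choice are synthetic placeholders; the paper's actual feature and sample selection components are not reproduced here.

```python
# Hypothetical illustration: augmenting a target-domain (MCI) classifier with
# auxiliary-domain (AD/NC) samples via instance weighting. This is a generic
# transfer-learning sketch, not the authors' exact algorithm.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy feature matrices (rows = subjects, columns = imaging features).
X_target, y_target = rng.normal(size=(60, 20)), rng.integers(0, 2, 60)   # MCI-C vs MCI-NC
X_aux, y_aux = rng.normal(size=(120, 20)), rng.integers(0, 2, 120)       # AD (1) vs NC (0)

# Pool target and auxiliary subjects, down-weighting the auxiliary domain so
# it guides but does not dominate the decision boundary.
X = np.vstack([X_target, X_aux])
y = np.concatenate([y_target, y_aux])
sample_weight = np.concatenate([np.ones(len(y_target)), 0.3 * np.ones(len(y_aux))])

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y, sample_weight=sample_weight)
print(clf.score(X_target, y_target))
```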


Human Brain Mapping | 2015

View-centralized multi-atlas classification for Alzheimer's disease diagnosis

Mingxia Liu; Daoqiang Zhang; Dinggang Shen

Multi-atlas based methods have been recently used for classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Compared with traditional single-atlas based methods, multi-atlas based methods adopt multiple predefined atlases and thus are less biased by a certain atlas. However, most existing multi-atlas based methods simply average or concatenate the features from multiple atlases, which may ignore the potentially important diagnostic information related to the anatomical differences among different atlases. In this paper, we propose a novel view (i.e., atlas) centralized multi-atlas classification method, which can better exploit useful information in multiple feature representations from different atlases. Specifically, all brain images are registered onto multiple atlases individually, to extract feature representations in each atlas space. Then, the proposed view-centralized multi-atlas feature selection method is used to select the most discriminative features from each atlas with extra guidance from other atlases. Next, we design a support vector machine (SVM) classifier using the selected features in each atlas space. Finally, we combine multiple SVM classifiers for multiple atlases through a classifier ensemble strategy for making a final decision. We have evaluated our method on 459 subjects [including 97 AD, 117 progressive MCI (p-MCI), 117 stable MCI (s-MCI), and 128 normal controls (NC)] from the Alzheimer's Disease Neuroimaging Initiative database, and achieved an accuracy of 92.51% for AD versus NC classification and an accuracy of 78.88% for p-MCI versus s-MCI classification. These results demonstrate that the proposed method can significantly outperform previous multi-atlas based classification methods. Hum Brain Mapp 36:1847-1865, 2015.
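
The final two steps of the pipeline, training one SVM per atlas space and combining them by an ensemble rule, can be sketched as below. This is only a minimal illustration with synthetic features and a simple majority vote; the view-centralized feature selection that precedes classification in the paper is omitted.

```python
# Illustrative sketch of a multi-atlas ensemble: one SVM per atlas-specific
# feature representation, combined by majority vote over the atlas predictions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects, n_atlases, n_features = 100, 3, 50
y = rng.integers(0, 2, n_subjects)                         # e.g., AD vs NC labels
views = [rng.normal(size=(n_subjects, n_features)) for _ in range(n_atlases)]

# Train one linear SVM in each atlas space.
classifiers = [SVC(kernel="linear").fit(X_view, y) for X_view in views]

# Ensemble: majority vote over atlas-specific predictions.
votes = np.stack([clf.predict(X_view) for clf, X_view in zip(classifiers, views)])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("training accuracy:", (y_pred == y).mean())
```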


IEEE Transactions on Medical Imaging | 2016

Relationship Induced Multi-Template Learning for Diagnosis of Alzheimer’s Disease and Mild Cognitive Impairment

Mingxia Liu; Daoqiang Zhang; Dinggang Shen

As shown in the literature, methods based on multiple templates usually achieve better performance, compared with those using only a single template for processing medical images. However, most existing multi-template based methods simply average or concatenate multiple sets of features extracted from different templates, which potentially ignores important structural information contained in the multi-template data. Accordingly, in this paper, we propose a novel relationship induced multi-template learning method for automatic diagnosis of Alzheimers disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI), by explicitly modeling structural information in the multi-template data. Specifically, we first nonlinearly register each brains magnetic resonance (MR) image separately onto multiple pre-selected templates, and then extract multiple sets of features for this MR image. Next, we develop a novel feature selection algorithm by introducing two regularization terms to model the relationships among templates and among individual subjects. Using these selected features corresponding to multiple templates, we then construct multiple support vector machine (SVM) classifiers. Finally, an ensemble classification is used to combine outputs of all SVM classifiers, for achieving the final result. We evaluate our proposed method on 459 subjects from the Alzheimers Disease Neuroimaging Initiative (ADNI) database, including 97 AD patients, 128 normal controls (NC), 117 progressive MCI (pMCI) patients, and 117 stable MCI (sMCI) patients. The experimental results demonstrate promising classification performance, compared with several state-of-the-art methods for multi-template based AD/MCI classification.
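
The abstract does not spell out the two regularization terms. One common way to encode such relationships, given here only as an illustrative formulation and not necessarily the paper's exact objective, is to couple the template-specific weight vectors through a similarity-weighted smoothness penalty on top of a sparse fitting term:

$$\min_{\{\mathbf{w}_k\}} \; \sum_{k=1}^{K} \left\| \mathbf{y} - \mathbf{X}_k \mathbf{w}_k \right\|_2^2 \;+\; \lambda_1 \sum_{k=1}^{K} \|\mathbf{w}_k\|_1 \;+\; \lambda_2 \sum_{k,l=1}^{K} S_{kl} \, \|\mathbf{w}_k - \mathbf{w}_l\|_2^2$$

Here $\mathbf{X}_k$ collects the features extracted in the $k$-th template space, $\mathbf{y}$ holds the diagnostic labels, and $S_{kl}$ is a (hypothetical) similarity between templates $k$ and $l$; an analogous term over subjects would encode inter-subject relationships.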


IEEE Transactions on Reliability | 2014

Two-Stage Cost-Sensitive Learning for Software Defect Prediction

Mingxia Liu; Linsong Miao; Daoqiang Zhang

Software defect prediction (SDP), which classifies software modules into defect-prone and not-defect-prone categories, provides an effective way to maintain high-quality software systems. Most existing SDP models attempt to attain lower classification error rates rather than lower misclassification costs. However, in many real-world applications, misclassifying defect-prone modules as not-defect-prone ones usually leads to higher costs than misclassifying not-defect-prone modules as defect-prone ones. In this paper, we first propose a new two-stage cost-sensitive learning (TSCS) method for SDP, which utilizes cost information not only in the classification stage but also in the feature selection stage. Then, specifically for the feature selection stage, we develop three novel cost-sensitive feature selection algorithms, namely Cost-Sensitive Variance Score (CSVS), Cost-Sensitive Laplacian Score (CSLS), and Cost-Sensitive Constraint Score (CSCS), by incorporating cost information into traditional feature selection algorithms. The proposed methods are evaluated on seven real data sets from NASA projects. Experimental results suggest that our TSCS method achieves better performance in software defect prediction than existing single-stage cost-sensitive classifiers. Also, our experiments show that the proposed cost-sensitive feature selection methods outperform traditional cost-blind feature selection methods, validating the efficacy of using cost information in the feature selection stage.
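
A minimal sketch of cost-sensitive classification in this setting is shown below, assuming (purely for illustration) that missing a defect-prone module costs five times as much as a false alarm. The paper's distinctive contribution, injecting costs into the feature selection stage via CSVS/CSLS/CSCS, is not reproduced.

```python
# Cost-sensitive defect classification sketch: asymmetric misclassification
# costs are encoded as class weights during SVM training. Data are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 15))                 # software metrics per module
y = rng.integers(0, 2, 200)                    # 1 = defect-prone, 0 = not defect-prone

# class_weight encodes asymmetric costs: errors on the defect-prone class are
# penalized 5 times as heavily during training (assumed cost ratio).
clf = SVC(kernel="linear", class_weight={0: 1.0, 1: 5.0})
clf.fit(X, y)
```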


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

Joint Binary Classifier Learning for ECOC-Based Multi-Class Classification

Mingxia Liu; Daoqiang Zhang; Songcan Chen; Hui Xue

Error-correcting output coding (ECOC) is one of the most widely used strategies for dealing with multi-class problems, by decomposing the original multi-class problem into a series of binary sub-problems. In traditional ECOC-based methods, the binary classifiers corresponding to those sub-problems are usually trained separately, without considering the relationships among these classifiers. However, as these classifiers are established on the same training data, there may be some inherent relationships among them. Exploiting such relationships can potentially improve the generalization performance of individual classifiers and, thus, boost ECOC learning algorithms. In this paper, we mine and utilize such relationships through a joint classifier learning method, by integrating the training of binary classifiers and the learning of the relationships among them into a unified objective function. We also develop an efficient alternating optimization algorithm to solve the objective function. To evaluate the proposed method, we perform a series of experiments on eleven datasets from the UCI machine learning repository as well as two datasets from real-world image recognition tasks. The experimental results demonstrate the efficacy of the proposed method, compared with state-of-the-art methods for ECOC-based multi-class classification.
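
For context, the baseline that this work improves upon, ECOC with independently trained binary classifiers, can be run in a few lines with scikit-learn, as sketched below. The paper's joint classifier learning and learned inter-classifier relationships are not captured by this baseline.

```python
# Baseline ECOC sketch: each binary sub-problem gets its own classifier,
# trained independently (no relationship modeling, unlike the paper).
from sklearn.datasets import load_iris
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
ecoc = OutputCodeClassifier(LinearSVC(max_iter=10000), code_size=2, random_state=0)
ecoc.fit(X, y)
print(ecoc.score(X, y))
```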


IEEE Transactions on Systems, Man, and Cybernetics | 2016

Pairwise Constraint-Guided Sparse Learning for Feature Selection

Mingxia Liu; Daoqiang Zhang

Feature selection aims to identify the most informative features for a compact and accurate data representation. As typical supervised feature selection methods, Lasso and its variants using L1-norm-based regularization terms have received much attention in recent studies, most of which use class labels as supervised information. Besides class labels, there are other types of supervised information, e.g., pairwise constraints that specify whether a pair of data samples belongs to the same class (must-link constraint) or to different classes (cannot-link constraint). However, most existing L1-norm-based sparse learning methods do not take advantage of pairwise constraints, which provide weaker but more general supervised information. To address this problem, we propose a pairwise constraint-guided sparse (CGS) learning method for feature selection, where the must-link and cannot-link constraints are used as discriminative regularization terms that directly capture the local discriminative structure of the data. Furthermore, we develop two variants of CGS: 1) semi-supervised CGS, which utilizes labeled data, pairwise constraints, and unlabeled data; and 2) ensemble CGS, which uses an ensemble of pairwise constraint sets. We conduct a series of experiments on a number of data sets from the University of California, Irvine (UCI) machine learning repository, a gene expression data set, two real-world neuroimaging-based classification tasks, and two large-scale attribute classification tasks. Experimental results demonstrate the efficacy of our proposed methods, compared with several established feature selection methods.
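
To make the supervision type concrete, the short sketch below shows how must-link and cannot-link constraint sets can be derived from a handful of labeled samples; the constraint-guided sparse regularizer itself is not reproduced, and the labels are made up.

```python
# Deriving pairwise constraints from a few labeled samples: same class pairs
# become must-link constraints, different class pairs become cannot-link.
import itertools
import numpy as np

labels = np.array([0, 0, 1, 1, 2])            # labels for a few annotated samples

must_link, cannot_link = [], []
for i, j in itertools.combinations(range(len(labels)), 2):
    if labels[i] == labels[j]:
        must_link.append((i, j))              # same class: must-link constraint
    else:
        cannot_link.append((i, j))            # different classes: cannot-link constraint

print("must-link:", must_link)
print("cannot-link:", cannot_link)
```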


Medical Image Analysis | 2017

View-aligned hypergraph learning for Alzheimer’s disease diagnosis with incomplete multi-modality data

Mingxia Liu; Jun Zhang; Pew Thian Yap; Dinggang Shen

Highlights: We developed a new hypergraph learning model to capture the coherence among views. We proposed a sparse representation based hypergraph construction method. We designed a multi-view label fusion method for making classification decisions.

Abstract: Effectively utilizing incomplete multi-modality data for the diagnosis of Alzheimer's disease (AD) and its prodrome (i.e., mild cognitive impairment, MCI) remains an active area of research. Several multi-view learning methods have been recently developed for AD/MCI diagnosis using incomplete multi-modality data, with each view corresponding to a specific modality or a combination of several modalities. However, existing methods usually ignore the underlying coherence among views, which may lead to sub-optimal learning performance. In this paper, we propose a view-aligned hypergraph learning (VAHL) method to explicitly model the coherence among views. Specifically, we first divide the original data into several views based on the availability of different modalities and then construct a hypergraph in each view space based on sparse representation. A view-aligned hypergraph classification (VAHC) model is then proposed, using a view-aligned regularizer to capture coherence among views. We further assemble the class probability scores generated from VAHC via a multi-view label fusion method to make a final classification decision. We evaluate our method on the baseline ADNI-1 database with 807 subjects and three modalities (i.e., MRI, PET, and CSF). Experimental results demonstrate that our method outperforms state-of-the-art methods that use incomplete multi-modality data for AD/MCI diagnosis.
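
The first step described above, splitting subjects into views according to which modalities are available, can be sketched as follows. Subject IDs and the availability table are made-up placeholders, and the hypergraph construction and classification steps are not included.

```python
# Grouping subjects into "views" by modality-availability pattern.
import pandas as pd

availability = pd.DataFrame(
    {"MRI": [1, 1, 1, 1], "PET": [1, 1, 0, 0], "CSF": [1, 0, 1, 0]},
    index=["subj01", "subj02", "subj03", "subj04"],
)

# Each distinct availability pattern defines one view.
views = {
    pattern: list(group.index)
    for pattern, group in availability.groupby(["MRI", "PET", "CSF"])
}
print(views)
```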


Neurocomputing | 2014

Attribute relation learning for zero-shot classification

Mingxia Liu; Daoqiang Zhang; Songcan Chen

In the computer vision and pattern recognition communities, an often-encountered problem is that the limited labeled training data do not cover all the classes; this is also called the zero-shot learning problem. To address this challenging problem, visual and semantic attributes are usually used as a mid-level representation to transfer knowledge from training classes to unseen test classes. Recently, several studies have investigated exploiting the relation between attributes to aid attribute-based learning methods. However, such attribute relations are commonly predefined by means of external linguistic knowledge bases, preprocessed in advance of the learning of attribute classifiers. In this paper, we propose a unified framework that learns the attribute-attribute relation and the attribute classifiers jointly to boost the performance of attribute predictors. Specifically, we unify the attribute relation learning and the attribute classifier design into a common objective function, through which we can not only predict attributes, but also automatically discover the relation between attributes from data. Furthermore, based on the learnt attribute relation and classifiers, we develop two types of learning schemes for zero-shot classification. Experimental results on a series of real benchmark data sets suggest that mining the relation between attributes does enhance the performance of attribute prediction and zero-shot classification, compared with state-of-the-art methods.
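
As background, the standard attribute-based zero-shot pipeline (direct attribute prediction) that this work builds on can be sketched as below: per-attribute classifiers are trained on seen classes, and unseen test images are assigned to the unseen class whose attribute signature best matches the predicted attributes. The joint attribute-relation learning proposed in the paper is not included, and all data, class names, and signatures are synthetic.

```python
# Direct attribute prediction sketch for zero-shot classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_train, n_test, n_feat, n_attr = 300, 20, 40, 5

X_train = rng.normal(size=(n_train, n_feat))
A_train = rng.integers(0, 2, (n_train, n_attr))           # attribute labels of seen classes
X_test = rng.normal(size=(n_test, n_feat))
unseen_signatures = {"zebra": [1, 0, 1, 1, 0], "whale": [0, 1, 0, 0, 1]}

# One binary classifier per attribute.
attr_clfs = [LogisticRegression(max_iter=1000).fit(X_train, A_train[:, a]) for a in range(n_attr)]
A_pred = np.column_stack([clf.predict_proba(X_test)[:, 1] for clf in attr_clfs])

# Assign each test image to the unseen class with the closest attribute signature.
names = list(unseen_signatures)
sigs = np.array([unseen_signatures[c] for c in names])
pred_classes = [names[np.argmin(((sigs - a) ** 2).sum(axis=1))] for a in A_pred]
print(pred_classes)
```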


Brain Imaging and Behavior | 2016

Label-aligned multi-task feature learning for multimodal classification of Alzheimer’s disease and mild cognitive impairment

Chen Zu; Biao Jie; Mingxia Liu; Songcan Chen; Dinggang Shen; Daoqiang Zhang

Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer's disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI, by fully exploring the relationships across both modalities and subjects. Specifically, our proposed method includes two subsequent components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from the multiple modalities is treated as a set of related learning tasks, and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added to the objective function of standard multi-task feature selection, where label alignment means that all multi-modality subjects with the same class labels should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from the multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance compared with several state-of-the-art methods for multimodal classification of AD/MCI.
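
The multi-kernel SVM fusion used in the second step can be sketched as below: one kernel per modality, combined with fixed weights and fed to an SVM with a precomputed kernel. The features, kernel choice, and the 0.6/0.4 weights are illustrative assumptions; in practice such weights would be tuned, e.g., by cross-validation.

```python
# Multi-kernel SVM fusion sketch: weighted combination of per-modality kernels.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 80
X_mri, X_pet = rng.normal(size=(n, 90)), rng.normal(size=(n, 90))   # synthetic features
y = rng.integers(0, 2, n)

K = 0.6 * rbf_kernel(X_mri) + 0.4 * rbf_kernel(X_pet)   # weighted kernel combination
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))
```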


IEEE Transactions on Image Processing | 2017

Detecting Anatomical Landmarks From Limited Medical Imaging Data Using Two-Stage Task-Oriented Deep Neural Networks

Jun Zhang; Mingxia Liu; Dinggang Shen

One of the major challenges in anatomical landmark detection based on deep neural networks is the limited availability of medical imaging data for network learning. To address this problem, we present a two-stage task-oriented deep learning method to detect large-scale anatomical landmarks simultaneously in real time, using limited training data. Our method consists of two deep convolutional neural networks (CNNs), each focusing on one specific task. In the first stage, to alleviate the problem of limited training data, we propose a CNN-based regression model using millions of image patches as input, aiming to learn inherent associations between local image patches and target anatomical landmarks. To further model the correlations among image patches, in the second stage, we develop another CNN model, which includes a) a fully convolutional network that shares the same architecture and network weights as the CNN used in the first stage and b) several extra layers to jointly predict the coordinates of multiple anatomical landmarks. Importantly, our method can jointly detect large-scale sets of (e.g., thousands of) landmarks in real time. We have conducted various experiments on detecting 1200 brain landmarks from the 3D T1-weighted magnetic resonance images of 700 subjects, and 7 prostate landmarks from the 3D computed tomography images of 73 subjects. The experimental results show the effectiveness of our method in terms of both accuracy and efficiency in anatomical landmark detection.
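
A toy sketch of the first-stage idea, a small 3D CNN that regresses, from a local image patch, the displacement from the patch center to a landmark, is given below. The architecture, patch size, and landmark count are illustrative assumptions and not the network described in the paper.

```python
# Patch-to-displacement regression sketch in PyTorch (first-stage idea only).
import torch
import torch.nn as nn

class PatchToLandmarkCNN(nn.Module):
    def __init__(self, n_landmarks: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        # Predict a 3D displacement (dx, dy, dz) for each landmark.
        self.regressor = nn.Linear(32, 3 * n_landmarks)

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

model = PatchToLandmarkCNN(n_landmarks=2)
patches = torch.randn(8, 1, 24, 24, 24)         # batch of synthetic 3D patches
displacements = model(patches)                   # shape: (8, 6)
loss = nn.MSELoss()(displacements, torch.zeros_like(displacements))
```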

Collaboration


Dive into Mingxia Liu's collaboration.

Top Co-Authors

Dinggang Shen (University of North Carolina at Chapel Hill)
Daoqiang Zhang (Nanjing University of Aeronautics and Astronautics)
Jun Zhang (University of North Carolina at Chapel Hill)
Ehsan Adeli (University of North Carolina at Chapel Hill)
Bo Cheng (Nanjing University of Aeronautics and Astronautics)
Biao Jie (Nanjing University of Aeronautics and Astronautics)
Songcan Chen (Nanjing University of Aeronautics and Astronautics)
Le An (University of North Carolina at Chapel Hill)
Li Wang (University of North Carolina at Chapel Hill)
Weili Lin (University of North Carolina at Chapel Hill)