Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ehsan Adeli is active.

Publication


Featured research published by Ehsan Adeli.


IEEE Transactions on Biomedical Engineering | 2016

Inherent Structure-Based Multiview Learning With Multitemplate Feature Representation for Alzheimer's Disease Diagnosis

Mingxia Liu; Daoqiang Zhang; Ehsan Adeli; Dinggang Shen

Multitemplate-based brain morphometric pattern analysis using magnetic resonance imaging has recently been proposed for automatic diagnosis of Alzheimer's disease (AD) and its prodromal stage (i.e., mild cognitive impairment, or MCI). In such methods, multiview morphological patterns generated from multiple templates are used as feature representations for brain images. However, existing multitemplate-based methods often simply assume that each class is represented by a specific type of data distribution (i.e., a single cluster), while in reality the underlying data distribution is not known a priori. In this paper, we propose an inherent structure-based multiview learning method using multiple templates for AD/MCI classification. Specifically, we first extract multiview feature representations for subjects using multiple selected templates and then cluster subjects within a specific class into several subclasses (i.e., clusters) in each view space. Then, we encode those subclasses with unique codes by considering both their original class information and their own distribution information, followed by a multitask feature selection model. Finally, we learn an ensemble of view-specific support vector machine classifiers based on their respectively selected features in each view and fuse their results to draw the final decision. Experimental results on the Alzheimer's Disease Neuroimaging Initiative database demonstrate that our method achieves promising results for AD/MCI classification compared to state-of-the-art multitemplate-based methods.
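As an illustration of the pipeline described in the abstract, here is a minimal Python sketch. It is not the authors' implementation: k-means for subclass discovery, univariate selection as a stand-in for the multitask feature selection model, and probability averaging for fusion are all simplifying assumptions.

```python
# Minimal sketch of the multiview pipeline (simplified stand-ins throughout).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

def subclass_labels(X, y, n_sub=2, seed=0):
    """Cluster subjects within each class into subclasses (one view).
    Assumes integer class labels 0..C-1."""
    labels = np.empty(len(y), dtype=int)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        km = KMeans(n_clusters=n_sub, n_init=10, random_state=seed).fit(X[idx])
        labels[idx] = c * n_sub + km.labels_  # unique code per (class, cluster)
    return labels

def fit_multiview(views, y, k=50):
    """One feature selector + SVM per view; selection uses subclass codes."""
    models = []
    for X in views:
        sub = subclass_labels(X, y)
        sel = SelectKBest(f_classif, k=min(k, X.shape[1])).fit(X, sub)
        clf = SVC(kernel="linear", probability=True).fit(sel.transform(X), y)
        models.append((sel, clf))
    return models

def predict_fused(models, views):
    """Fuse view-specific classifiers by averaging class probabilities."""
    probs = [clf.predict_proba(sel.transform(X))
             for (sel, clf), X in zip(models, views)]
    return models[0][1].classes_[np.mean(probs, axis=0).argmax(axis=1)]
```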


medical image computing and computer assisted intervention | 2016

3D Deep Learning for Multi-modal Imaging-Guided Survival Time Prediction of Brain Tumor Patients

Dong Nie; Han Zhang; Ehsan Adeli; Luyan Liu; Dinggang Shen

High-grade glioma is the most aggressive and severe brain tumor, leading to death in almost 50% of patients within 1-2 years. Thus, accurate prognosis for glioma patients would provide essential guidelines for their treatment planning. Conventional survival prediction generally utilizes clinical information and limited handcrafted features from magnetic resonance images (MRI), which is often time-consuming, laborious, and subjective. In this paper, we propose using deep learning frameworks to automatically extract features from multi-modal preoperative brain images (i.e., T1 MRI, fMRI, and DTI) of high-grade glioma patients. Specifically, we adopt 3D convolutional neural networks (CNNs) and also propose a new network architecture for using multi-channel data and learning supervised features. Along with the pivotal clinical features, we finally train a support vector machine to predict whether the patient has a long or short overall survival (OS) time. Experimental results demonstrate that our methods can achieve an accuracy as high as 89.9%. We also find that the learned features from fMRI and DTI play more important roles in accurately predicting the OS time, which provides valuable insights into functional neuro-oncological applications.
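A toy 3D CNN in PyTorch gives the flavor of the multi-channel design; the layer sizes and the two-class head below are assumptions, not the published architecture. The extracted features would then be concatenated with clinical variables and passed to an SVM, as the abstract describes.

```python
# Sketch: one small 3D-CNN branch per modality, features concatenated.
import torch
import torch.nn as nn

class Branch3D(nn.Module):
    """Maps one modality volume to a fixed-size feature vector."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16, out_dim)

    def forward(self, x):              # x: (B, 1, D, H, W)
        return self.fc(self.conv(x).flatten(1))

class MultiModalNet(nn.Module):
    """One branch per modality (e.g., T1, fMRI, DTI)."""
    def __init__(self, n_modalities=3, out_dim=64):
        super().__init__()
        self.branches = nn.ModuleList(Branch3D(out_dim) for _ in range(n_modalities))
        self.head = nn.Linear(n_modalities * out_dim, 2)  # long vs. short OS

    def forward(self, volumes):        # list of (B, 1, D, H, W) tensors
        feats = torch.cat([b(v) for b, v in zip(self.branches, volumes)], dim=1)
        return self.head(feats), feats  # logits for training, feats for the SVM

net = MultiModalNet()
vols = [torch.randn(2, 1, 16, 16, 16) for _ in range(3)]
logits, feats = net(vols)              # feats: (2, 192) -> input to the SVM stage
```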


asian conference on computer vision | 2016

Deep Relative Attributes

Yaser Souri; Erfan Noury; Ehsan Adeli

Visual attributes are a great means of describing images or scenes, in a way that both humans and computers understand. Relative attributes were introduced to establish correspondences between images and to compare the strength of each property across images. However, since their introduction, hand-crafted and engineered features have been used to learn increasingly complex models for the relative attributes problem, which limits the applicability of those methods to more realistic cases. We introduce a deep neural network architecture for the task of relative attribute prediction. A convolutional neural network (ConvNet) is adopted to learn the features, with an additional layer (the ranking layer) that learns to rank the images based on these features. We adopt an appropriate ranking loss to train the whole network in an end-to-end fashion. Our proposed method outperforms the baseline and state-of-the-art methods in relative attribute prediction on various coarse and fine-grained datasets. Our qualitative results, along with the visualization of the saliency maps, show that the network is able to learn effective features for each specific attribute. Source code of the proposed method is available at this https URL.
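The ranking-layer idea can be sketched in a few lines of PyTorch. The tiny backbone and the RankNet-style pairwise loss below are illustrative assumptions, not necessarily the paper's exact architecture or loss.

```python
# Sketch: scalar attribute score per image, trained on ordered pairs.
import torch
import torch.nn as nn

class RankingNet(nn.Module):
    """ConvNet backbone + a linear 'ranking layer' producing a scalar score."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.rank = nn.Linear(16 * 4 * 4, 1)  # the ranking layer

    def score(self, x):
        return self.rank(self.backbone(x)).squeeze(1)

def pairwise_rank_loss(s_i, s_j, target):
    """RankNet-style loss: target=1 if image i shows the attribute more
    strongly than image j, 0 if less, 0.5 if the pair is unordered."""
    return nn.functional.binary_cross_entropy_with_logits(s_i - s_j, target)

net = RankingNet()
xi, xj = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
t = torch.full((8,), 1.0)  # toy targets: xi stronger than xj
loss = pairwise_rank_loss(net.score(xi), net.score(xj), t)
loss.backward()
```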


NeuroImage | 2016

Joint feature-sample selection and robust diagnosis of Parkinson's disease from MRI data

Ehsan Adeli; Feng Shi; Le An; Chong Yaw Wee; Guorong Wu; Tao Wang; Dinggang Shen

Parkinson's disease (PD) is a devastating neurodegenerative disorder caused by the deterioration of a neurotransmitter known as dopamine. Lack of this chemical messenger impairs several brain regions and yields various motor and non-motor symptoms. The incidence of PD is predicted to double in the next two decades, which urges more research into its early diagnosis and treatment. In this paper, we propose an approach to diagnose PD using magnetic resonance imaging (MRI) data. Specifically, we first introduce a joint feature-sample selection (JFSS) method for selecting an optimal subset of samples and features, to learn a reliable diagnosis model. The proposed JFSS model effectively discards poor samples and irrelevant features. As a result, the selected features play an important role in PD characterization, which will help identify the most relevant and critical imaging biomarkers for PD. Then, a robust classification framework is proposed to simultaneously de-noise the selected subset of features and samples and learn a classification model. Our model can also de-noise testing samples based on the cleaned training data. Unlike many previous works that perform de-noising in an unsupervised manner, we perform supervised de-noising for both training and testing data, thus boosting the diagnostic accuracy. Experimental results on both synthetic and publicly available PD datasets show promising results. To evaluate the proposed method, we use the popular Parkinson's Progression Markers Initiative (PPMI) database. Our results indicate that the proposed method can differentiate between PD and normal control (NC) subjects and outperforms the competing methods by a relatively large margin. Notably, our proposed framework can also be used for diagnosis of other brain disorders. To show this, we have also conducted experiments on the widely used ADNI database. The obtained results indicate that our proposed method can identify the imaging biomarkers and diagnose the disease with favorable accuracy compared to the baseline methods.
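The joint selection can be mimicked, very roughly, by alternating sparse feature selection with discarding the worst-fitting samples. This proxy (Lasso plus residual-based sample pruning) is our simplification, not the published JFSS objective.

```python
# Rough proxy for joint feature-sample selection (not the paper's method).
import numpy as np
from sklearn.linear_model import Lasso

def jfss_proxy(X, y, alpha=0.05, drop_frac=0.1, n_iter=3):
    """Alternate sparse feature selection with residual-based sample pruning."""
    keep = np.arange(len(y))
    model = Lasso(alpha=alpha)
    for _ in range(n_iter):
        model.fit(X[keep], y[keep])
        resid = np.abs(model.predict(X[keep]) - y[keep])
        n_drop = int(drop_frac * len(keep))
        if n_drop:
            keep = keep[np.argsort(resid)[:len(keep) - n_drop]]  # drop worst fits
    features = np.flatnonzero(model.coef_)  # non-zero weights = kept features
    return keep, features

# Toy usage: 100 subjects, 50 features, labels driven by the first 3 features.
X = np.random.randn(100, 50)
y = (X[:, :3].sum(axis=1) > 0).astype(float)
samples, feats = jfss_proxy(X, y)
```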


Medical Image Analysis | 2018

Landmark-based deep multi-instance learning for brain disease diagnosis

Mingxia Liu; Jun Zhang; Ehsan Adeli; Dinggang Shen

Highlights:
- We developed a novel deep multi-instance convolutional neural network to automatically learn both local and global representations for MR images.
- We proposed a landmark-based image patch extraction approach based on a data-driven landmark discovery algorithm.
- We trained the model on ADNI-1 and tested it on two independent datasets (i.e., ADNI-2 and MIRIAD).

In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis: 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit the performance of the diagnosis, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this paper, we propose a landmark-based deep multi-instance learning (LDMIL) framework for brain disease diagnosis. Specifically, we first adopt a data-driven learning approach to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. Then, our LDMIL framework learns an end-to-end MR image classifier for capturing both the local structural information conveyed by image patches located by landmarks and the global structural information derived from all detected landmarks. We have evaluated our proposed framework on 1526 subjects from three public datasets (i.e., ADNI-1, ADNI-2, and MIRIAD), and the experimental results show that our framework can achieve superior performance over state-of-the-art approaches.
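The local/global split maps naturally onto a multi-instance network. The sketch below (our assumed architecture, not LDMIL itself) embeds each landmark-centered patch with a shared 3D CNN and max-pools instance embeddings into a bag-level representation.

```python
# Multi-instance sketch: shared patch encoder + bag-level pooling.
import torch
import torch.nn as nn

class PatchNet(nn.Module):
    """Embeds one landmark-centered 3D patch (local structural information)."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            nn.Flatten(), nn.Linear(8, dim),
        )
    def forward(self, x):              # x: (B*L, 1, d, d, d) patches
        return self.net(x)

class LandmarkMIL(nn.Module):
    def __init__(self, dim=32, n_classes=2):
        super().__init__()
        self.patch_net = PatchNet(dim)
        self.cls = nn.Linear(dim, n_classes)

    def forward(self, patches):        # patches: (B, L, 1, d, d, d)
        B, L = patches.shape[:2]
        inst = self.patch_net(patches.flatten(0, 1)).view(B, L, -1)
        bag = inst.max(dim=1).values   # global info: pool over all landmarks
        return self.cls(bag)

logits = LandmarkMIL()(torch.randn(2, 40, 1, 8, 8, 8))  # 40 landmarks/subject
```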


Scientific Reports | 2017

Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

Ehsan Adeli; Guorong Wu; Behrouz Saghafi; Le An; Feng Shi; Dinggang Shen

Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which the performance heavily depends on the feature selection technique, like the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting the quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.
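Selecting features by how well they behave in the kernel space, rather than the input space, can be approximated with kernel-target alignment. The scoring rule and RBF kernel below are our stand-ins for the paper's joint optimization and its purpose-built kernels.

```python
# Proxy: rank features by kernel-target alignment, then train a kernel SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def kernel_alignment(K, y):
    """Alignment between a kernel matrix and the ideal label kernel yy^T."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def select_in_kernel_space(X, y, k=10, gamma=1.0):
    y = np.where(y == y[0], 1.0, -1.0)  # +/-1 labels
    scores = [kernel_alignment(rbf_kernel(X[:, [j]], gamma=gamma), y)
              for j in range(X.shape[1])]
    return np.argsort(scores)[-k:]      # indices of top-k aligned features

X = np.abs(np.random.randn(80, 30))     # non-negative features, as in the paper
y = (X[:, 0] + X[:, 1] > 1.5).astype(int)
feats = select_in_kernel_space(X, y, k=5)
clf = SVC(kernel="rbf").fit(X[:, feats], y)
```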


medical image computing and computer assisted intervention | 2016

Stability-Weighted Matrix Completion of Incomplete Multi-modal Data for Disease Diagnosis

Kim Han Thung; Ehsan Adeli; Pew Thian Yap; Dinggang Shen

Effective utilization of heterogeneous multi-modal data for Alzheimer's disease (AD) diagnosis and prognosis has always been hampered by incomplete data. One way to deal with this is low-rank matrix completion (LRMC), which simultaneously imputes missing data features and target values of interest. Although LRMC yields reasonable results, it implicitly weights features from all the modalities equally, ignoring the differences in discriminative power of features from different modalities. In this paper, we propose stability-weighted LRMC (swLRMC), an LRMC improvement that weights features and modalities according to their importance and reliability. We introduce a method, called stability weighting, that utilizes subsampling techniques and outcomes from a range of hyper-parameters of sparse feature learning to obtain a stable set of weights. Incorporating these weights into LRMC, swLRMC can better account for differences in features and modalities for improving diagnosis. Experimental results confirm that the proposed method outperforms conventional LRMC, feature-selection based LRMC, and other state-of-the-art methods.
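A minimal sketch of the two ingredients: stability weighting approximated by how often Lasso selects each feature over subsamples, and weighted completion via iterative soft-thresholded SVD. Both are simplified stand-ins for the paper's formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_weights(X, y, n_rounds=20, alpha=0.1, frac=0.7, seed=0):
    """Feature weights = how often Lasso selects each feature over subsamples."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        idx = rng.choice(len(y), int(frac * len(y)), replace=False)
        counts += Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_ != 0
    return counts / n_rounds           # selection frequency in [0, 1]

def weighted_lrmc(M, mask, w, tau=1.0, n_iter=100):
    """Complete M (entries with mask==False are missing), columns scaled by w."""
    w = np.maximum(w, 1e-3)            # avoid dividing by zero weights later
    Z = np.where(mask, np.nan_to_num(M) * w, 0.0)
    X = Z.copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0)) @ Vt   # singular-value soft threshold
        X[mask] = Z[mask]                        # re-impose observed entries
    return X / w                       # undo the column scaling
```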


IEEE Transactions on Image Processing | 2016

Multi-Level Canonical Correlation Analysis for Standard-Dose PET Image Estimation

Le An; Pei Zhang; Ehsan Adeli; Yan Wang; Guangkai Ma; Feng Shi; David S. Lalush; Weili Lin; Dinggang Shen

Positron emission tomography (PET) images are widely used in many clinical applications, such as tumor detection and brain disorder diagnosis. To obtain PET images of diagnostic quality, a sufficient amount of radioactive tracer has to be injected into a living body, which will inevitably increase the risk of radiation exposure. On the other hand, if the tracer dose is considerably reduced, the quality of the resulting images would be significantly degraded. It is of great interest to estimate a standard-dose PET (S-PET) image from a low-dose one in order to reduce the risk of radiation exposure and preserve image quality. This may be achieved through mapping both S-PET and low-dose PET data into a common space and then performing patch-based sparse representation. However, a one-size-fits-all common space built from all training patches is unlikely to be optimal for each target S-PET patch, which limits the estimation accuracy. In this paper, we propose a data-driven multi-level canonical correlation analysis scheme to solve this problem. In particular, a subset of training data that is most useful in estimating a target S-PET patch is identified in each level, and then used in the next level to update common space and improve estimation. In addition, we also use multi-modal magnetic resonance images to help improve the estimation with complementary information. Validations on phantom and real human brain data sets show that our method effectively estimates S-PET images and well preserves critical clinical quantification measures, such as standard uptake value.
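The multi-level refinement can be sketched with scikit-learn's CCA: at each level, fit CCA on the current training subset, keep the training patches closest to the target low-dose patch in the common space, and refit. The subset sizes, component count, and final averaging step are our simplifications.

```python
# Sketch of level-wise common-space refinement for patch estimation.
import numpy as np
from sklearn.cross_decomposition import CCA

def multilevel_cca_estimate(low_train, std_train, low_target, levels=3, keep=0.5):
    idx = np.arange(len(low_train))
    for _ in range(levels):
        cca = CCA(n_components=2).fit(low_train[idx], std_train[idx])
        Lc, _ = cca.transform(low_train[idx], std_train[idx])
        tc = cca.transform(low_target[None])[0]          # target in common space
        d = np.linalg.norm(Lc - tc, axis=1)
        idx = idx[np.argsort(d)[: max(4, int(keep * len(idx)))]]  # shrink subset
    # Final estimate: average of standard-dose patches in the last subset.
    return std_train[idx].mean(axis=0)

low = np.random.randn(200, 27)                 # toy 3x3x3 low-dose patches
std = (low @ np.random.randn(27, 27)) * 0.1    # toy paired standard-dose patches
est = multilevel_cca_estimate(low, std, low[0])
```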


Medical Image Analysis | 2017

Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning

Zhengxia Wang; Xiaofeng Zhu; Ehsan Adeli; Yingying Zhu; Feiping Nie; Brent C. Munsell; Guorong Wu

Highlights:
- Learn an intrinsic data representation for optimal classification.
- Flexible to integrate with multi-modal imaging data.
- Progressive graph-based transductive learning for classification of neurodegenerative disease.
- Our proposed transductive learning framework is more efficient than supervised learning approaches in dealing with issues such as small sample size and large data heterogeneity.

Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain and, primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer's disease and Parkinson's disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show that pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets.
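A simplified sketch of the progressive idea (our proxy, not the pGTL optimization): propagate labels on a feature graph, then rebuild the graph using the propagated label posteriors and repeat, letting the label domain refine the inter-subject relationships.

```python
# Proxy for progressive graph-based transductive learning.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def progressive_propagation(X, y, n_rounds=3, beta=1.0):
    """y uses -1 for unlabeled subjects (scikit-learn convention)."""
    Z = X.copy()
    for _ in range(n_rounds):
        model = LabelSpreading(kernel="knn", n_neighbors=7).fit(Z, y)
        post = model.label_distributions_   # (n, n_classes) label posteriors
        Z = np.hstack([X, beta * post])     # augment features with label domain
    return model.transduction_

X = np.random.randn(60, 10)                 # 20 labeled + 40 unlabeled subjects
y = np.r_[np.zeros(10), np.ones(10), -np.ones(40)].astype(int)
pred = progressive_propagation(X, y)
```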


medical image computing and computer assisted intervention | 2016

Progressive Graph-Based Transductive Learning for Multi-modal Classification of Brain Disorder Disease

Zhengxia Wang; Xiaofeng Zhu; Ehsan Adeli; Yingying Zhu; Chen Zu; Feiping Nie; Dinggang Shen; Guorong Wu

Graph-based Transductive Learning (GTL) is a powerful tool in computer-assisted diagnosis, especially when the training data is not sufficient to build reliable classifiers. Conventional GTL approaches first construct a fixed subject-wise graph based on the similarities of observed features (i.e., extracted from imaging data) in the feature domain, and then follow the established graph to propagate the existing labels from training to testing data in the label domain. However, such a graph is exclusively learned in the feature domain and may not necessarily be optimal in the label domain. This may eventually undermine the classification accuracy. To address this issue, we propose a progressive GTL (pGTL) method to progressively find an intrinsic data representation. To achieve this, our pGTL method iteratively (1) refines the subject-wise relationships observed in the feature domain using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined subject-wise relationships, and (3) verifies the intrinsic data representation on the training data, in order to guarantee an optimal classification on the new testing data. Furthermore, we extend our pGTL to incorporate multi-modal imaging data, to improve the classification accuracy and robustness as multi-modal imaging data can provide complementary information. Promising classification results in identifying Alzheimer's disease (AD), Mild Cognitive Impairment (MCI), and Normal Control (NC) subjects are achieved using MRI and PET data.
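The multi-modal extension hinges on combining per-modality graphs. One simple fusion choice (ours, not necessarily the paper's) is to average kNN affinity graphs across modalities before propagation, so that modalities with complementary information each shape the subject-wise relationships.

```python
# Sketch: fuse per-modality kNN graphs into one affinity matrix.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def fused_graph(modalities, k=7):
    graphs = [kneighbors_graph(X, k, mode="connectivity") for X in modalities]
    W = sum(g.toarray() for g in graphs) / len(graphs)  # average across modalities
    return np.maximum(W, W.T)                           # symmetrize

mods = [np.random.randn(30, 5) for _ in range(3)]       # e.g., MRI + PET features
W = fused_graph(mods)                                   # (30, 30) affinity matrix
```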

Collaboration


Dive into Ehsan Adeli's collaborations.

Top Co-Authors

Dinggang Shen, University of North Carolina at Chapel Hill
Mingxia Liu, University of North Carolina at Chapel Hill
Le An, University of North Carolina at Chapel Hill
Guorong Wu, University of North Carolina at Chapel Hill
Jun Zhang, University of North Carolina at Chapel Hill
Feng Shi, University of North Carolina at Chapel Hill
Han Zhang, University of North Carolina at Chapel Hill
Weili Lin, University of North Carolina at Chapel Hill
Dong Nie, University of North Carolina at Chapel Hill
Gang Li, University of North Carolina at Chapel Hill