
Publication


Featured research published by Juan Eugenio Iglesias.


IEEE Transactions on Medical Imaging | 2011

Robust Brain Extraction Across Datasets and Comparison With Publicly Available Methods

Juan Eugenio Iglesias; Cheng-Yi Liu; Paul M. Thompson; Zhuowen Tu

Automatic whole-brain extraction from magnetic resonance images (MRI), also known as skull stripping, is a key component in most neuroimage pipelines. As the first element in the chain, its robustness is critical for the overall performance of the system. Many skull stripping methods have been proposed, but the problem is not considered to be completely solved yet. Many systems in the literature have good performance on certain datasets (mostly the datasets they were trained/tuned on), but fail to produce satisfactory results when the acquisition conditions or study populations are different. In this paper we introduce a robust, learning-based brain extraction system (ROBEX). The method combines a discriminative and a generative model to achieve the final result. The discriminative model is a Random Forest classifier trained to detect the brain boundary; the generative model is a point distribution model that ensures that the result is plausible. When a new image is presented to the system, the generative model is explored to find the contour with highest likelihood according to the discriminative model. Because the target shape is in general not perfectly represented by the generative model, the contour is refined using graph cuts to obtain the final segmentation. Both models were trained using 92 scans from a proprietary dataset but they achieve a high degree of robustness on a variety of other datasets. ROBEX was compared with six other popular, publicly available methods (BET, BSE, FreeSurfer, AFNI, BridgeBurner, and GCUT) on three publicly available datasets (IBSR, LPBA40, and OASIS, 137 scans in total) that include a wide range of acquisition hardware and a highly variable population (different age groups, healthy/diseased). The results show that ROBEX provides significantly improved performance measures for almost every method/dataset combination.
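
The discriminative half of a ROBEX-style pipeline can be sketched as a random forest that classifies voxels as brain or non-brain from simple intensity features. This is a toy illustration only: the features, data, and thresholding below are invented, and the paper's point distribution model and graph-cut refinement are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def voxel_features(volume):
    """Per-voxel intensity plus a crude local-mean feature (assumed choices)."""
    local_mean = (np.roll(volume, 1, axis=0) + np.roll(volume, -1, axis=0)) / 2.0
    return np.stack([volume.ravel(), local_mean.ravel()], axis=1)

# Synthetic training volume: a bright "brain" blob on a dark background.
vol = rng.normal(0.2, 0.05, size=(16, 16, 16))
vol[4:12, 4:12, 4:12] += 0.8
labels = np.zeros(vol.shape, dtype=int)
labels[4:12, 4:12, 4:12] = 1

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(voxel_features(vol), labels.ravel())

# Posterior brain probabilities; in ROBEX these would drive the shape-model
# search rather than a hard threshold.
prob_brain = clf.predict_proba(voxel_features(vol))[:, 1].reshape(vol.shape)
mask = prob_brain > 0.5
```

In the actual method, the forest scores candidate contours proposed by the generative shape model, which is what makes the result robust across datasets.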


NeuroImage | 2015

A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: Application to adaptive segmentation of in vivo MRI.

Juan Eugenio Iglesias; Jean C. Augustinack; Khoa Nguyen; Christopher M. Player; Allison Player; Michelle Wright; Nicole Roy; Matthew P. Frosch; Ann C. McKee; Lawrence L. Wald; Bruce Fischl; Koen Van Leemput

Automated analysis of MRI data of the subregions of the hippocampus requires computational atlases built at a higher resolution than those that are typically used in current neuroimaging studies. Here we describe the construction of a statistical atlas of the hippocampal formation at the subregion level using ultra-high resolution, ex vivo MRI. Fifteen autopsy samples were scanned at 0.13 mm isotropic resolution (on average) using customized hardware. The images were manually segmented into 13 different hippocampal substructures using a protocol specifically designed for this study; precise delineations were made possible by the extraordinary resolution of the scans. In addition to the subregions, manual annotations for neighboring structures (e.g., amygdala, cortex) were obtained from a separate dataset of in vivo, T1-weighted MRI scans of the whole brain (1mm resolution). The manual labels from the in vivo and ex vivo data were combined into a single computational atlas of the hippocampal formation with a novel atlas building algorithm based on Bayesian inference. The resulting atlas can be used to automatically segment the hippocampal subregions in structural MRI images, using an algorithm that can analyze multimodal data and adapt to variations in MRI contrast due to differences in acquisition hardware or pulse sequences. The applicability of the atlas, which we are releasing as part of FreeSurfer (version 6.0), is demonstrated with experiments on three different publicly available datasets with different types of MRI contrast. 
The results show that the atlas and companion segmentation method: 1) can segment T1 and T2 images, as well as their combination, 2) can replicate findings on mild cognitive impairment based on high-resolution T2 data, and 3) can discriminate between Alzheimer's disease subjects and elderly controls with 88% accuracy in standard resolution (1mm) T1 data, significantly outperforming the atlas in FreeSurfer version 5.3 (86% accuracy) and classification based on whole hippocampal volume (82% accuracy).
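
At its core, atlas-based Bayesian segmentation of this kind combines, per voxel, an atlas prior over labels with an intensity likelihood whose parameters are re-estimated for each target scan, which is what makes the atlas adapt to different MRI contrasts. A one-voxel toy sketch (label set, priors, and Gaussian parameters are all invented for illustration):

```python
import numpy as np
from scipy.stats import norm

labels = ["CA1", "CA3", "background"]
atlas_prior = np.array([0.5, 0.3, 0.2])  # atlas label prior at this voxel

# Gaussian intensity model per label; in the real method the means and
# variances are estimated from the target scan itself (values invented).
means = np.array([80.0, 95.0, 30.0])
stds = np.array([10.0, 10.0, 15.0])

intensity = 78.0
likelihood = norm.pdf(intensity, means, stds)

# Posterior over labels: prior times likelihood, normalized.
posterior = atlas_prior * likelihood
posterior /= posterior.sum()

best = labels[int(posterior.argmax())]
```

Because only the likelihood parameters depend on the scan, the same atlas can segment T1, T2, or multimodal data without retraining.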


Information Processing in Medical Imaging | 2011

Entangled decision forests and their application for semantic segmentation of CT images

Albert Montillo; Jamie Shotton; John Winn; Juan Eugenio Iglesias; Dimitris N. Metaxas; Antonio Criminisi

This work addresses the challenging problem of simultaneously segmenting multiple anatomical structures in highly varied CT scans. We propose the entangled decision forest (EDF) as a new discriminative classifier which augments the state of the art decision forest, resulting in higher prediction accuracy and shortened decision time. Our main contribution is two-fold. First, we propose entangling the binary tests applied at each tree node in the forest, such that the test result can depend on the result of tests applied earlier in the same tree and at image points offset from the voxel to be classified. This is demonstrated to improve accuracy and capture long-range semantic context. Second, during training, we propose injecting randomness in a guided way, in which node feature types and parameters are randomly drawn from a learned (nonuniform) distribution. This further improves classification accuracy. We assess our probabilistic anatomy segmentation technique using a labeled database of CT image volumes of 250 different patients from various scan protocols and scanner vendors. In each volume, 12 anatomical structures have been manually segmented. The database comprises highly varied body shapes and sizes, a wide array of pathologies, scan resolutions, and diverse contrast agents. Quantitative comparisons with state of the art algorithms demonstrate both superior test accuracy and computational efficiency.
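
The "guided randomness" idea can be sketched in a few lines: instead of drawing candidate node feature types uniformly at training time, they are drawn from a learned, nonuniform distribution over feature types. The feature names and probabilities below are invented for illustration, and the entanglement of node tests is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
feature_types = ["intensity", "offset_box_mean", "gradient", "entangled_map"]

# A "learned" proposal distribution favouring feature types that proved
# useful earlier in training (values are assumptions, not from the paper).
learned_probs = np.array([0.1, 0.4, 0.2, 0.3])

def propose_node_features(n_candidates):
    """Draw candidate feature types for one tree node from the learned prior."""
    return rng.choice(feature_types, size=n_candidates, p=learned_probs)

candidates = propose_node_features(1000)
```

Relative to uniform sampling, this biases each node's candidate pool toward historically informative feature families while preserving the randomness the forest needs.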


Medical Image Computing and Computer-Assisted Intervention | 2013

Is Synthesizing MRI Contrast Useful for Inter-modality Analysis?

Juan Eugenio Iglesias; Ender Konukoglu; Darko Zikic; Ben Glocker; Koen Van Leemput; Bruce Fischl

Availability of multi-modal magnetic resonance imaging (MRI) databases opens up the opportunity to synthesize different MRI contrasts without actually acquiring the images. In theory such synthetic images have the potential to reduce the amount of acquisitions to perform certain analyses. However, to what extent they can substitute real acquisitions in the respective analyses is an open question. In this study, we used a synthesis method based on patch matching to test whether synthetic images can be useful in segmentation and inter-modality cross-subject registration of brain MRI. Thirty-nine T1 scans with 36 manually labeled structures of interest were used in the registration and segmentation of eight proton density (PD) scans, for which ground truth T1 data were also available. The results show that synthesized T1 contrast can considerably enhance the quality of non-linear registration compared with using the original PD data, and it is only marginally worse than using the original T1 scans. In segmentation, the relative improvement with respect to using the PD is smaller, but still statistically significant.
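
A minimal sketch of patch-matching contrast synthesis, in the spirit of the method tested here: each patch of a PD image is matched to its nearest PD patch in a paired (PD, T1) atlas, and the central T1 intensity of the match is copied over. Patch size, matching metric, and the toy images are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def extract_patches(img, r=1):
    """All (2r+1)x(2r+1) patches of a 2D image, one flattened row each."""
    h, w = img.shape
    out = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            out.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
    return np.array(out)

rng = np.random.default_rng(1)
# Paired atlas: for this toy, T1 is an inverted-contrast copy of PD.
pd_atlas = rng.random((20, 20))
t1_atlas = 1.0 - pd_atlas

r = 1
atlas_patches = extract_patches(pd_atlas, r)
atlas_centers_t1 = t1_atlas[r:-r, r:-r].ravel()
nn = NearestNeighbors(n_neighbors=1).fit(atlas_patches)

# Synthesize T1 contrast for a new PD image, patch by patch.
pd_target = rng.random((20, 20))
idx = nn.kneighbors(extract_patches(pd_target, r), return_distance=False)[:, 0]
t1_synth = atlas_centers_t1[idx].reshape(20 - 2 * r, 20 - 2 * r)
```

The synthesized image can then stand in for a real T1 acquisition in downstream registration or segmentation, which is exactly the substitution the paper evaluates.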


Information Processing in Medical Imaging | 2011

Combining generative and discriminative models for semantic segmentation of CT scans via active learning

Juan Eugenio Iglesias; Ender Konukoglu; Albert Montillo; Zhuowen Tu; Antonio Criminisi

This paper presents a new supervised learning framework for the efficient recognition and segmentation of anatomical structures in 3D computed tomography (CT), with as little training data as possible. Training supervised classifiers to recognize organs within CT scans requires a large number of manually delineated exemplar 3D images, which are very expensive to obtain. In this study, we borrow ideas from the field of active learning to optimally select a minimum subset of such images that yields accurate anatomy segmentation. The main contribution of this work is in designing a combined generative-discriminative model which: i) drives optimal selection of training data; and ii) increases segmentation accuracy. The optimal training set is constructed by finding unlabeled scans which maximize the disagreement between our two complementary probabilistic models, as measured by a modified version of the Jensen-Shannon divergence. Our algorithm is assessed on a database of 196 labeled clinical CT scans with high variability in resolution, anatomy, pathologies, etc. Quantitative evaluation shows that, compared with randomly selecting the scans to annotate, our method decreases the number of training images by up to 45%. Moreover, our generative model of body shape substantially increases segmentation accuracy when compared to either using the discriminative model alone or a generic smoothness prior (e.g. via a Markov Random Field).
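
The selection rule can be sketched directly: unlabeled scans are ranked by the disagreement between the two models' per-voxel label posteriors, measured with the Jensen-Shannon divergence, and the most-disputed scan is sent for annotation. The stand-in probability maps below are invented, and the paper's modified JS variant is not reproduced.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions (nats)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def disagreement(post_a, post_b):
    """Mean per-voxel JS divergence between two models' posteriors."""
    return np.mean([js_divergence(pa, pb) for pa, pb in zip(post_a, post_b)])

# Two candidate "scans", each with per-voxel posteriors over 3 labels.
agree = ([[0.9, 0.05, 0.05]] * 4, [[0.88, 0.07, 0.05]] * 4)
clash = ([[0.9, 0.05, 0.05]] * 4, [[0.05, 0.9, 0.05]] * 4)

scores = {"agree": disagreement(*agree), "clash": disagreement(*clash)}
pick = max(scores, key=scores.get)  # scan chosen for manual labeling
```

Because JS divergence is bounded (by ln 2 in nats), the disagreement scores of different candidate scans are directly comparable.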


NeuroImage | 2016

Heritability and reliability of automatically segmented human hippocampal formation subregions.

Christopher D. Whelan; Derrek P. Hibar; Laura S. van Velzen; Anthony S. Zannas; Tania Carrillo-Roa; Katie L. McMahon; Gautam Prasad; Sinead Kelly; Joshua Faskowitz; Greig deZubiracay; Juan Eugenio Iglesias; Theo G.M. van Erp; Thomas Frodl; Nicholas G. Martin; Margaret J. Wright; Neda Jahanshad; Lianne Schmaal; Philipp G. Sämann; Paul M. Thompson

The human hippocampal formation can be divided into a set of cytoarchitecturally and functionally distinct subregions, involved in different aspects of memory formation. Neuroanatomical disruptions within these subregions are associated with several debilitating brain disorders including Alzheimer’s disease, major depression, schizophrenia, and bipolar disorder. Multi-center brain imaging consortia, such as the Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) consortium, are interested in studying disease effects on these subregions, and in the genetic factors that affect them. For large-scale studies, automated extraction and subsequent genomic association studies of these hippocampal subregion measures may provide additional insight. Here, we evaluated the test–retest reliability and transplatform reliability (1.5 T versus 3 T) of the subregion segmentation module in the FreeSurfer software package using three independent cohorts of healthy adults, one young (Queensland Twins Imaging Study, N = 39), another elderly (Alzheimer’s Disease Neuroimaging Initiative, ADNI-2, N = 163) and another mixed cohort of healthy and depressed participants (Max Planck Institute, MPIP, N = 598). We also investigated agreement between the most recent version of this algorithm (v6.0) and an older version (v5.3), again using the ADNI-2 and MPIP cohorts in addition to a sample from the Netherlands Study for Depression and Anxiety (NESDA) (N = 221). Finally, we estimated the heritability (h2) of the segmented subregion volumes using the full sample of young, healthy QTIM twins (N = 728). Test–retest reliability was high for all twelve subregions in the 3 T ADNI-2 sample (intraclass correlation coefficient (ICC) = 0.70–0.97) and moderate-to-high in the 4 T QTIM sample (ICC = 0.5–0.89). 
Transplatform reliability was strong for eleven of the twelve subregions (ICC = 0.66–0.96); however, the hippocampal fissure was not consistently reconstructed across 1.5 T and 3 T field strengths (ICC = 0.47–0.57). Between-version agreement was moderate for the hippocampal tail, subiculum and presubiculum (ICC = 0.78–0.84; Dice Similarity Coefficient (DSC) = 0.55–0.70), and poor for all other subregions (ICC = 0.34–0.81; DSC = 0.28–0.51). All hippocampal subregion volumes were highly heritable (h2 = 0.67–0.91). Our findings indicate that eleven of the twelve human hippocampal subregions segmented using FreeSurfer version 6.0 may serve as reliable and informative quantitative phenotypes for future multi-site imaging genetics initiatives such as those of the ENIGMA consortium.
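
The reliability figures quoted above are intraclass correlation coefficients; a minimal implementation of ICC(2,1) (two-way random effects, absolute agreement, single measurement), a common choice for test-retest volume data, can be sketched as follows. The toy volumes are invented.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) for an (n_subjects, k_sessions) measurement matrix."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy test-retest subiculum volumes (mm^3) for five subjects.
volumes = np.array([[420.0, 428.0],
                    [515.0, 509.0],
                    [388.0, 395.0],
                    [610.0, 602.0],
                    [450.0, 455.0]])
icc = icc_2_1(volumes)
```

High between-subject variance combined with small session-to-session differences yields an ICC near 1, which is the pattern the study reports for most subregions.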


Medical Image Analysis | 2013

A unified framework for cross-modality multi-atlas segmentation of brain MRI.

Juan Eugenio Iglesias; Mert R. Sabuncu; Koen Van Leemput

Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging – in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.
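
The baseline this paper improves on is easy to state in code: once each atlas has been registered to the target, its propagated labels are fused by per-voxel majority vote. The registration step is skipped in this sketch (labels are assumed already in target space), and the paper replaces this independent voting with a joint generative model.

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Fuse (n_atlases, *volume_shape) integer label maps by per-voxel vote."""
    stack = np.asarray(propagated_labels)
    n_labels = stack.max() + 1
    # Per-voxel vote count for each label, then the winning label.
    votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy "atlas segmentations" of a 4x4 target, labels {0, 1}.
a1 = np.array([[0, 0, 1, 1]] * 4)
a2 = np.array([[0, 1, 1, 1]] * 4)
a3 = np.array([[0, 0, 0, 1]] * 4)

fused = majority_vote_fusion([a1, a2, a3])
```

Majority voting treats each atlas-to-target registration as fixed and independent; the framework above instead couples the registrations and the fusion in one probabilistic model.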


Molecular Psychiatry | 2017

Progression from selective to general involvement of hippocampal subfields in schizophrenia

New Fei Ho; Juan Eugenio Iglesias; M Y Sum; C N Kuswanto; Y Y Sitoh; J De Souza; Zhaoping Hong; Bruce Fischl; Joshua L. Roffman; Juan Zhou; Kang Sim; Daphne J. Holt

Volume deficits of the hippocampus in schizophrenia have been consistently reported. However, the hippocampus is anatomically heterogeneous; it remains unclear whether certain portions of the hippocampus are affected more than others in schizophrenia. In this study, we aimed to determine whether volume deficits in schizophrenia are confined to specific subfields of the hippocampus and to measure the subfield volume trajectories over the course of the illness. Magnetic resonance imaging scans were obtained from Data set 1: 155 patients with schizophrenia (mean duration of illness of 7 years) and 79 healthy controls, and Data set 2: an independent cohort of 46 schizophrenia patients (mean duration of illness of 18 years) and 46 healthy controls. In addition, follow-up scans were collected for a subset of Data set 1. A novel, automated method based on an atlas constructed from ultra-high resolution, post-mortem hippocampal tissue was used to label seven hippocampal subfields. Significant cross-sectional volume deficits in the CA1, but not of the other subfields, were found in the schizophrenia patients of Data set 1. However, diffuse cross-sectional volume deficits across all subfields were found in the more chronic and ill schizophrenia patients of Data set 2. Consistent with this pattern, the longitudinal analysis of Data set 1 revealed progressive illness-related volume loss (~2–6% per year) that extended beyond CA1 to all of the other subfields. This decline in volume correlated with symptomatic worsening. Overall, these findings provide converging evidence for early atrophy of CA1 in schizophrenia, with extension to other hippocampal subfields and accompanying clinical sequelae over time.


NeuroImage | 2015

Bayesian segmentation of brainstem structures in MRI

Juan Eugenio Iglesias; Koen Van Leemput; Priyanka Bhatt; Christen Casillas; Shubir Dutt; Norbert Schuff; Diana Truran-Sacrey; Adam L. Boxer; Bruce Fischl

In this paper we present a method to segment four brainstem structures (midbrain, pons, medulla oblongata and superior cerebellar peduncle) from 3D brain MRI scans. The segmentation method relies on a probabilistic atlas of the brainstem and its neighboring brain structures. To build the atlas, we combined a dataset of 39 scans with already existing manual delineations of the whole brainstem and a dataset of 10 scans in which the brainstem structures were manually labeled with a protocol that was specifically designed for this study. The resulting atlas can be used in a Bayesian framework to segment the brainstem structures in novel scans. Thanks to the generative nature of the scheme, the segmentation method is robust to changes in MRI contrast or acquisition hardware. Using cross validation, we show that the algorithm can segment the structures in previously unseen T1 and FLAIR scans with great accuracy (mean error under 1mm) and robustness (no failures in 383 scans including 168 AD cases). We also indirectly evaluate the algorithm with an experiment in which we study the atrophy of the brainstem in aging. The results show that, when used simultaneously, the volumes of the midbrain, pons and medulla are significantly more predictive of age than the volume of the entire brainstem, estimated as their sum. The results also demonstrate that the method can detect atrophy patterns in the brainstem structures that have been previously described in the literature. Finally, we demonstrate that the proposed algorithm is able to detect differential effects of AD on the brainstem structures. The method will be implemented as part of the popular neuroimaging package FreeSurfer.
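
The age-prediction comparison described above can be illustrated with a toy regression: when structures atrophy at different rates, their individual volumes predict age better than their sum does. The atrophy rates and noise levels below are invented, and the result is shown on training data only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 200
age = rng.uniform(40, 90, n)

# Toy structure volumes with different (invented) atrophy rates plus noise.
midbrain = 6.0 - 0.010 * age + rng.normal(0, 0.1, n)
pons = 15.0 - 0.060 * age + rng.normal(0, 0.1, n)
medulla = 4.5 - 0.002 * age + rng.normal(0, 0.1, n)

separate = np.column_stack([midbrain, pons, medulla])
total = separate.sum(axis=1, keepdims=True)  # "whole brainstem" volume

r2_separate = LinearRegression().fit(separate, age).score(separate, age)
r2_total = LinearRegression().fit(total, age).score(total, age)
```

Because the sum is a fixed linear combination of the three columns, the three-volume model can never fit worse than the single-sum model; the interesting empirical finding is how much better it does on real data.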


IEEE Transactions on Medical Imaging | 2009

Robust Initial Detection of Landmarks in Film-Screen Mammograms Using Multiple FFDM Atlases

Juan Eugenio Iglesias; Nico Karssemeijer

Automated analysis of mammograms requires robust methods for pectoralis segmentation and nipple detection. Locating the nipple is especially important in multiview computer aided detection systems, in which findings are matched across images using the nipple-to-finding distance. Segmenting the pectoralis is a key preprocessing step to avoid false positives when detecting masses due to the similarity of the texture of mammographic parenchyma and the pectoral muscle. A multi-atlas algorithm capable of providing very robust initial estimates of the nipple position and pectoral region in digitized mammograms is presented here. Ten full-field digital mammograms, which are easily annotated owing to their excellent contrast, are robustly registered to the target digitized film-screen mammogram. The annotations are then propagated and fused into a final nipple position and pectoralis segmentation. Compared to other nipple detection methods in the literature, the system proposed here has the advantages that it is more robust and can provide a reliable estimate when the nipple is located outside the image. Our results show that the change in the correlation between nipple-to-finding distances in craniocaudal and mediolateral oblique views is not significant when the detected nipple positions replace the manual annotations. Moreover, the pectoralis segmentation is acceptable and can be used as initialization for a more complex algorithm to optimize the outline locally. A novel aspect of the method is that it is also capable of detecting and segmenting the pectoralis in craniocaudal views.
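
The fusion step for the nipple position can be sketched simply: each registered atlas proposes a landmark position in the target image, and the proposals are fused with a robust estimator so that an occasional failed registration does not corrupt the result. The coordinates below are invented, and the coordinate-wise median is a stand-in for whatever fusion rule the paper uses.

```python
import numpy as np

# Propagated nipple positions (x, y) from ten registered atlases;
# the last proposal comes from a badly failed registration (an outlier).
proposals = np.array([[101, 250], [98, 248], [103, 252], [100, 251],
                      [99, 249], [102, 250], [97, 247], [104, 253],
                      [100, 250], [400, 30]])

# Coordinate-wise median: robust to a minority of gross outliers.
fused = np.median(proposals, axis=0)
```

A mean would be dragged far off by the failed registration; the median stays with the consistent majority, which is the point of multi-atlas fusion for initialization.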

Collaboration


Dive into Juan Eugenio Iglesias's collaborations. Top co-authors:

Koen Van Leemput (Technical University of Denmark)
Zhuowen Tu (University of California)
Cheng-Yi Liu (University of California)
Marc Modat (University College London)
Tom Vercauteren (University College London)
Paul M. Thompson (University of Southern California)