Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Gerard Sanroma is active.

Publications


Featured research published by Gerard Sanroma.


NeuroImage | 2015

Hierarchical multi-atlas label fusion with multi-scale feature representation and label-specific patch partition.

Guorong Wu; Minjeong Kim; Gerard Sanroma; Qian Wang; Brent C. Munsell; Dinggang Shen

Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images. After registration, a label is assigned to each point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during label fusion, it is critical for the chosen patch similarity measurement to accurately capture the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely degrade the fidelity of the patch similarity measurement, which in turn may fail to adequately capture the complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is characterized by a multi-scale feature representation that encodes both local and semi-local image information, which increases the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these label-specific atlas patches make the label fusion process more specific and flexible. Lastly, to correct target points that are mislabeled during label fusion, a hierarchical, coarse-to-fine iterative approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 T MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than those of several well-known state-of-the-art label fusion methods.
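As background for the patch-based weighting described above, the following is a minimal sketch of similarity-weighted label voting, the baseline that the hierarchical, multi-scale method builds on. The Gaussian similarity, the bandwidth `sigma`, and all variable names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, sigma=0.5):
    """Similarity-weighted voting: a minimal sketch of patch-based
    label fusion (not the paper's hierarchical multi-scale method).

    target_patch  : (d,) flattened intensity patch from the target image
    atlas_patches : (n, d) aligned patches from n registered atlases
    atlas_labels  : (n,) label of each atlas patch's centre voxel
    """
    # Gaussian similarity between the target patch and each atlas patch
    dists = np.sum((atlas_patches - target_patch) ** 2, axis=1)
    weights = np.exp(-dists / (2 * sigma ** 2))
    weights /= weights.sum()
    # Accumulate the weights as votes for each candidate label
    votes = {}
    for w, lab in zip(weights, atlas_labels):
        votes[lab] = votes.get(lab, 0.0) + w
    return max(votes, key=votes.get)
```

The fixed-size patch limitation discussed in the abstract is visible here: `target_patch` has a single dimensionality `d`, whereas the proposed method varies the effective patch scale across iterations.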


IEEE Transactions on Medical Imaging | 2014

Learning to Rank Atlases for Multiple-Atlas Segmentation

Gerard Sanroma; Guorong Wu; Yaozong Gao; Dinggang Shen

Recently, multiple-atlas segmentation (MAS) has achieved great success in the medical imaging area. The key assumption is that multiple atlases have a greater chance of correctly labeling a target image than a single atlas. However, the problem of atlas selection still remains underexplored. Traditionally, image similarity is used to select a set of atlases. Unfortunately, this heuristic criterion is not necessarily related to the final segmentation performance. To solve this seemingly simple but critical problem, we propose a learning-based atlas selection method to pick the best atlases that would lead to a more accurate segmentation. Our main idea is to learn the relationship between the pairwise appearance of observed instances (i.e., a pair of atlas and target images) and their final labeling performance (e.g., measured by the Dice ratio). In this way, we select the best atlases based on their expected labeling accuracy. Our atlas selection method is general enough to be integrated with any existing MAS method. We show the advantages of our atlas selection method in an extensive experimental evaluation on the ADNI, SATA, IXI, and LONI LPBA40 datasets. As shown in the experiments, our method can boost the performance of three widely used MAS methods, outperforming other learning-based and image-similarity-based atlas selection methods.
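The idea of regressing expected labeling accuracy from pairwise appearance features can be sketched as follows. The ridge-regression learner and the feature and variable names below are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

def train_atlas_ranker(pair_features, dice_scores, lam=1e-3):
    """Fit a ridge regressor from (atlas, target) pairwise features to
    observed Dice scores; a simplified sketch of learning-based atlas
    selection, not the paper's learner.

    pair_features : (m, k) features of m (atlas, target) training pairs
    dice_scores   : (m,)  segmentation overlap each pair produced
    """
    # Append a bias column and solve the regularized normal equations
    X = np.hstack([pair_features, np.ones((pair_features.shape[0], 1))])
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ dice_scores)

def select_atlases(w, candidate_features, n_select):
    """Rank candidate atlases by predicted Dice and keep the top n."""
    X = np.hstack([candidate_features,
                   np.ones((candidate_features.shape[0], 1))])
    predicted = X @ w
    return np.argsort(predicted)[::-1][:n_select]
```

The point of the design is that ranking uses predicted segmentation accuracy rather than raw image similarity, which is the distinction the abstract draws.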


Medical Image Analysis | 2015

Building dynamic population graph for accurate correspondence detection

Shaoyi Du; Yanrong Guo; Gerard Sanroma; Dong Ni; Guorong Wu; Dinggang Shen

In medical imaging studies, there is an increasing trend toward discovering the intrinsic anatomical differences across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually placed landmarks. However, the large anatomical variability across individual subjects can easily compromise this pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects by propagating all manually placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently published sparse point matching method. To correct inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method, using a dynamic graph construction approach, achieves much higher accuracy and robustness than state-of-the-art pair-wise correspondence detection methods, as well as a similar method using a static population graph.


Computer Vision and Pattern Recognition | 2014

Learning-Based Atlas Selection for Multiple-Atlas Segmentation

Gerard Sanroma; Guorong Wu; Yaozong Gao; Dinggang Shen

Recently, multi-atlas segmentation (MAS) has achieved great success in the medical imaging area. The key assumption of MAS is that multiple atlases encompass richer anatomical variability than a single atlas. Therefore, we can label the target image more accurately by mapping the label information from the appropriate atlas images that have the most similar structures. The problem of atlas selection, however, still remains underexplored. Current state-of-the-art MAS methods rely on image similarity to select a set of atlases. Unfortunately, this heuristic criterion is not necessarily related to segmentation performance and thus may undermine the segmentation results. To solve this simple but critical problem, we propose a learning-based atlas selection method to pick the best atlases that would eventually lead to more accurate image segmentation. Our idea is to learn the relationship between the pairwise appearance of observed instances (a pair of atlas and target images) and their final labeling performance (in terms of the Dice ratio). In this way, we can select the best atlases according to their expected labeling accuracy. It is worth noting that our atlas selection method is general enough to be integrated with existing MAS methods. As shown in the experiments, we achieve significant improvement after integrating our method with three widely used MAS methods on the ADNI and LONI LPBA40 datasets.


Medical Image Analysis | 2015

A transversal approach for patch-based label fusion via matrix completion

Gerard Sanroma; Guorong Wu; Yaozong Gao; Kim-Han Thung; Yanrong Guo; Dinggang Shen

Recently, multi-atlas patch-based label fusion has received increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge.


Image and Vision Computing | 2014

A Unified Approach to the Recognition of Complex Actions from Sequences of Zone-Crossings

Gerard Sanroma; Luis Patino; Gertjan J. Burghouts; Klamer Schutte; James M. Ferryman

We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition, which divides recognition into two stages, our method performs recognition of simple and complex actions in a unified way. This is achieved by encoding simple-action HMMs within the stochastic grammar that models complex actions. This unified approach enables a more effective influence of the higher activity layers on the recognition of simple actions, which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks along a set of zones, which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious, and threat behavior in a parking lot. Experiments show an improvement of ~30% in the recognition of both high-level scenarios and their composing simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method experiences only a limited decrease in performance when moderate amounts of noise are added.


International Workshop on Machine Learning in Medical Imaging | 2016

Building an Ensemble of Complementary Segmentation Methods by Exploiting Probabilistic Estimates

Gerard Sanroma; Oualid M. Benkarim; Gemma Piella; Miguel Ángel González Ballester

Two common ways of approaching atlas-based segmentation of brain MRI are (1) intensity-based modelling and (2) multi-atlas label fusion. Intensity-based methods are robust to registration errors but need distinctive image appearances. Multi-atlas label fusion can identify anatomical correspondences with faint appearance cues, but needs a reasonable registration. We propose an ensemble segmentation method that combines the complementary features of both types of approaches. Our method uses the probabilistic estimates of the base methods to compute their optimal combination weights in a spatially varying way. We also propose an intensity-based method (to be used as a base method) that offers a trade-off between invariance to registration errors and dependence on distinct appearances. Results show that sacrificing invariance to registration errors (up to a certain degree) improves the performance of our intensity-based method. Our proposed ensemble method outperforms the other participating methods in most of the structures of the NeoBrainS12 Challenge on neonatal brain segmentation. We achieve up to ~10% of improvement in some structures.
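The spatially varying combination of probabilistic estimates can be sketched as a per-voxel weighted average of the base methods' class probabilities. The weight normalization and the array layout below are illustrative assumptions, not the challenge implementation:

```python
import numpy as np

def ensemble_probabilities(prob_maps, weight_maps):
    """Spatially varying ensemble: combine per-voxel class probabilities
    from several base segmenters with per-voxel weights; a sketch only.

    prob_maps   : (m, v, c) class probabilities of m methods over v voxels
    weight_maps : (m, v)   non-negative per-voxel weight of each method
    """
    # Normalise the weights over methods so they sum to 1 at each voxel
    w = weight_maps / weight_maps.sum(axis=0, keepdims=True)
    # Weighted average of the probability maps, voxel by voxel
    fused = np.einsum('mv,mvc->vc', w, prob_maps)
    return fused.argmax(axis=1)            # hard labels per voxel
```

Because the weights vary per voxel, a registration-robust base method can dominate where appearance cues are weak, while an appearance-driven method dominates elsewhere, which is the complementarity the abstract describes.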


International Workshop on Machine Learning in Medical Imaging | 2014

Novel Multi-Atlas Segmentation by Matrix Completion

Gerard Sanroma; Guorong Wu; Kim-Han Thung; Yanrong Guo; Dinggang Shen

The goal of multi-atlas segmentation is to estimate the anatomical labels at each target image point by combining the labels from a set of registered atlas images via label fusion. Typically, label fusion can be formulated either as a reconstruction or as a classification problem. Reconstruction-based methods compute the target labels as a weighted average of the atlas labels. Such weights are derived from the representation of the target image patches as a linear combination of the atlas image patches. However, a related issue is that the optimal weights in the image domain do not necessarily correspond to those in the label domain. Classification-based methods can avoid this issue by directly learning the relationship between the image and label domains. However, the learned relationships, which describe the common characteristics of all the training atlas patches, might not be representative of a particular target image patch, and can thus undermine the labeling results. To overcome the limitations of both types of methods, we formulate the patch-based label fusion problem as a matrix completion problem. By doing so, we can jointly utilize (1) the relationships between atlas and target image patches (thus taking advantage of the reconstruction-based methods), and (2) the relationships between the image and label domains (taking advantage of the classification-based methods). In this way, our generalized paradigm can improve the label fusion accuracy in segmenting challenging structures, e.g., the hippocampus, compared to the state-of-the-art methods.


Human Brain Mapping | 2017

Toward the automatic quantification of in utero brain development in 3D structural MRI: A review

Oualid M. Benkarim; Gerard Sanroma; Veronika A. M. Zimmer; Emma Muñoz-Moreno; N.M. Hahner; Elisenda Eixarch; Oscar Camara; Miguel Ángel González Ballester; Gemma Piella

Investigating the human brain in utero is important for researchers and clinicians seeking to understand early neurodevelopmental processes. With the advent of fast magnetic resonance imaging (MRI) techniques and the development of motion correction algorithms to obtain high-quality 3D images of the fetal brain, it is now possible to gain more insight into the ongoing maturational processes in the brain. In this article, we present a review of the major building blocks of the pipeline toward performing quantitative analysis of in vivo MRI of the developing brain and its potential applications in clinical settings. The review focuses on T1- and T2-weighted modalities, and covers state-of-the-art methodologies involved in each step of the pipeline, in particular 3D volume reconstruction, spatio-temporal modeling of the developing brain, segmentation, quantification techniques, and clinical applications. Hum Brain Mapp 38:2772–2787, 2017.


Iberoamerican Congress on Pattern Recognition | 2007

A new algorithm to compute the distance between multi-dimensional histograms

Francesc Serratosa; Gerard Sanroma; Alberto Sanfeliu

The aim of this paper is to present a new algorithm to compute the distance between n-dimensional histograms. Some domains, such as pattern recognition or image retrieval, use the distance between histograms at some step of the classification process. For this reason, several algorithms that compute the distance between histograms have been proposed in the literature. Nevertheless, most of this research has been applied to one-dimensional histograms, because computing a distance between multi-dimensional histograms is very expensive. In this paper, we present an efficient method to compare multi-dimensional histograms in O(z²), where z represents the number of bins.
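For context, a classical cross-bin baseline with the same O(z²) cost in the number of bins z is the quadratic-form distance. This sketch is purely illustrative and is not the algorithm proposed in the paper; the bin-similarity kernel and all names are assumptions:

```python
import numpy as np

def quadratic_form_distance(h1, h2, A):
    """Cross-bin histogram distance d = sqrt((h1 - h2)^T A (h1 - h2)),
    a classical O(z^2) baseline for z bins (not the paper's algorithm).
    A is a z-by-z bin-similarity matrix."""
    d = h1 - h2
    return float(np.sqrt(d @ A @ d))

def similarity_matrix(bin_centres, alpha=1.0):
    """Build A from pairwise distances between (possibly
    multi-dimensional) bin centres: A_ij = exp(-alpha * ||c_i - c_j||)."""
    c = np.asarray(bin_centres, dtype=float)
    diff = c[:, None, :] - c[None, :, :]
    return np.exp(-alpha * np.linalg.norm(diff, axis=2))
```

The z² term comes from the dense matrix-vector product `A @ d`, which compares every pair of bins; bin-to-bin distances such as L1 are O(z) but ignore cross-bin similarity.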

Collaboration


Dive into Gerard Sanroma's collaborations.

Top Co-Authors

Gemma Piella (Pompeu Fabra University)

Dinggang Shen (University of North Carolina at Chapel Hill)

Francesc Serratosa (Rovira i Virgili University)

Guorong Wu (University of North Carolina at Chapel Hill)

René Alquézar (Spanish National Research Council)

Oscar Camara (Pompeu Fabra University)

Yanrong Guo (University of North Carolina at Chapel Hill)