Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Albert Gubern-Mérida is active.

Publications


Featured research published by Albert Gubern-Mérida.


Medical Image Analysis | 2017

Large scale deep learning for computer aided detection of mammographic lesions

Thijs Kooi; Geert J. S. Litjens; Bram van Ginneken; Albert Gubern-Mérida; Clara I. Sánchez; Ritse M. Mann; Ard den Heeten; Nico Karssemeijer

Recent advances in machine learning have yielded new techniques to train deep neural networks, which have resulted in highly successful applications in many pattern-recognition tasks such as object detection and speech recognition. In this paper we provide a head-to-head comparison between a state-of-the-art mammography CAD system, relying on a manually designed feature set, and a convolutional neural network (CNN), aiming for a system that can ultimately read mammograms independently. Both systems are trained on a large dataset of around 45,000 images. Results show the CNN outperforms the traditional CAD system at low sensitivity and performs comparably at high sensitivity. We subsequently investigate to what extent features such as location and patient information, as well as commonly used manual features, can still complement the network, and see improvements at high specificity over the CNN, especially with location and context features, which contain information not available to the CNN. Additionally, a reader study was performed in which the network was compared to certified screening radiologists at the patch level; we found no significant difference between the network and the readers. Highlights: A system based on deep learning is shown to outperform a state-of-the-art CAD system. Adding complementary handcrafted features to the CNN is shown to increase performance. The system based on deep learning is shown to perform at the level of a radiologist.
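The feature-fusion experiment described above can be illustrated with a minimal sketch: a per-candidate CNN score is concatenated with a handcrafted feature and a simple logistic-regression combiner is trained on top. Everything here (the feature names, the synthetic data, the combiner) is an invented stand-in for illustration, not the paper's actual system:

```python
import numpy as np

# Illustrative late fusion: a CNN score per candidate is concatenated with
# a handcrafted feature (here a synthetic "location prior") and a small
# logistic-regression combiner is trained by plain gradient descent.
# All names and data are invented for this sketch.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_combiner(X, y, lr=0.5, steps=2000):
    """Fit logistic-regression weights on feature matrix X of shape (n, d)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

rng = np.random.default_rng(0)
n = 200
cnn_score = rng.uniform(0, 1, n)
location_prior = rng.uniform(0, 1, n)
# Toy labels: true lesions tend to score high on both features.
y = ((cnn_score + location_prior) / 2 + rng.normal(0, 0.1, n) > 0.6).astype(float)

X = np.column_stack([cnn_score, location_prior])
w, b = fit_combiner(X, y)
fused = sigmoid(X @ w + b)
accuracy = np.mean((fused > 0.5) == y)
```

A real system would replace the synthetic columns with CNN posteriors and the paper's location and context descriptors.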


PLOS ONE | 2014

Volumetric breast density estimation from Full-Field Digital Mammograms: A validation study

Albert Gubern-Mérida; Michiel Kallenberg; Bram Platel; Ritse M. Mann; Robert Martí; Nico Karssemeijer

A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma. Effective linear attenuation coefficients of these tissues are derived from empirical data as a function of tube voltage (kVp), anode material, filtration, and compressed breast thickness. By employing these, tissue composition at a given pixel is computed after performing breast thickness compensation, using a reference value for fatty tissue determined by the maximum pixel value in the breast tissue projection. Validation has been performed using 22 FFDM cases acquired with a GE Senographe 2000D by comparing the volume estimates with volumes obtained by semi-automatic segmentation of breast magnetic resonance imaging (MRI) data. The correlation between MRI and mammography volumes was 0.94 on a per image basis and 0.97 on a per patient basis. Using the dense tissue volumes from MRI data as the gold standard, the average relative error of the volume estimates was 13.6%.
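The two-compartment acquisition model lends itself to a short worked sketch. Assuming pixel values proportional to transmitted intensity, a fully-fatty reference value, and illustrative (not calibrated) effective attenuation coefficients, the dense-tissue thickness behind a pixel follows from a log-ratio:

```python
import numpy as np

# Minimal sketch of the two-compartment attenuation model described above.
# The attenuation coefficients and the fatty reference value are assumed
# for illustration; the paper derives effective coefficients from kVp,
# anode material, filtration, and compressed breast thickness.

def dense_thickness(pixel, fat_reference, mu_fat, mu_dense):
    """Dense-tissue thickness (cm) mapping to a pixel.

    Assumes pixel values proportional to transmitted intensity and that
    fat_reference is the intensity of an entirely fatty tissue column.
    """
    return np.log(fat_reference / pixel) / (mu_dense - mu_fat)

# Synthetic round trip: build a pixel from a known composition, recover it.
mu_fat, mu_dense = 0.46, 0.80   # cm^-1, plausible effective values (assumed)
H = 5.0                          # compressed breast thickness (cm)
h_dense_true = 1.5               # dense tissue in the column (cm)
I0 = 1000.0                      # unattenuated intensity
fat_reference = I0 * np.exp(-mu_fat * H)
pixel = I0 * np.exp(-(mu_fat * (H - h_dense_true) + mu_dense * h_dense_true))
h_est = dense_thickness(pixel, fat_reference, mu_fat, mu_dense)
```

Summing the per-pixel thicknesses times the pixel area then gives the dense-tissue volume the study validates against MRI.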


Medical Image Analysis | 2015

Automated localization of breast cancer in DCE-MRI

Albert Gubern-Mérida; Robert Martí; Jaime Melendez; Jakob L. Hauth; Ritse M. Mann; Nico Karssemeijer; Bram Platel

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly being used for the detection and diagnosis of breast cancer. Compared to mammography, DCE-MRI provides higher sensitivity; however, its specificity is variable. Moreover, DCE-MRI data analysis is time consuming and depends on reader expertise. The aim of this work is to propose a novel automated breast cancer localization system for DCE-MRI. Such a system can be used to support radiologists in DCE-MRI analysis by marking suspicious areas. The proposed method initially corrects for motion artifacts and segments the breast. Subsequently, blob and relative enhancement voxel features are used to locate lesion candidates. Finally, a malignancy score for each lesion candidate is obtained using region-based morphological and kinetic features computed on the segmented lesion candidate. We performed experiments to compare the use of different classifiers in the region classification stage and to study the effect of motion correction in the presented system. The performance of the algorithm was assessed using free-response receiver operating characteristic (FROC) analysis. For this purpose, a dataset of 209 DCE-MRI studies was collected. It is composed of 95 DCE-MRI studies with 105 breast cancers (55 mass-like and 50 non-mass-like malignant lesions) and 114 DCE-MRI studies from women participating in a screening program that were diagnosed as normal. At 4 false positives per normal case, 89% of the breast cancers (91% and 86% for mass-like and non-mass-like malignant lesions, respectively) were correctly detected.
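The relative-enhancement voxel feature used to locate lesion candidates is simple to sketch; the epsilon guard and the 100% enhancement threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sketch of the relative-enhancement voxel feature: voxels that enhance
# strongly after contrast injection become lesion candidates. The epsilon
# guard and the candidate threshold are assumptions for this sketch.

def relative_enhancement(pre, post, eps=1e-6):
    """(post - pre) / pre per voxel, guarded against division by zero."""
    return (post - pre) / np.maximum(pre, eps)

# Tiny synthetic pre- and post-contrast slices.
pre = np.array([[100.0, 200.0],
                [50.0,  80.0]])
post = np.array([[250.0, 210.0],
                 [55.0,  240.0]])
re = relative_enhancement(pre, post)
candidates = re > 1.0   # voxels enhancing by more than 100%
```

In the full pipeline this map is combined with blob features before the region-based classification stage.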


IEEE Journal of Biomedical and Health Informatics | 2015

Breast Segmentation and Density Estimation in Breast MRI: A Fully Automatic Framework

Albert Gubern-Mérida; Michiel Kallenberg; Ritse M. Mann; Robert Martí; Nico Karssemeijer

Breast density measurement is an important aspect in breast cancer diagnosis, as dense tissue has been related to the risk of breast cancer development. The purpose of this study is to develop a method to automatically compute breast density in breast MRI. The framework is a combination of image processing techniques to segment breast and fibroglandular tissue. Intra- and interpatient signal intensity variability is initially corrected. The breast is segmented by automatically detecting body-breast and air-breast surfaces. Subsequently, fibroglandular tissue is segmented in the breast area using expectation-maximization. A dataset of 50 cases with manual segmentations was used for evaluation. Dice similarity coefficient (DSC), total overlap, false negative fraction (FNF), and false positive fraction (FPF) are used to report similarity between automatic and manual segmentations. For breast segmentation, the proposed approach obtained DSC, total overlap, FNF, and FPF values of 0.94, 0.96, 0.04, and 0.07, respectively. For fibroglandular tissue segmentation, we obtained DSC, total overlap, FNF, and FPF values of 0.80, 0.85, 0.15, and 0.22, respectively. The method is relevant for researchers investigating breast density as a risk factor for breast cancer, and all the described steps can also be applied in computer-aided diagnosis systems.
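The overlap measures reported above can be computed from binary masks as follows. The conventions shown (FNF relative to the manual reference, FPF relative to the automatic segmentation) are common ones and are consistent with the reported numbers, but the paper's methods section remains the authority:

```python
import numpy as np

# Overlap measures between an automatic and a manual (reference) binary
# mask. Conventions are assumed: FNF is measured against the reference,
# FPF against the automatic segmentation.

def overlap_measures(auto, ref):
    auto, ref = auto.astype(bool), ref.astype(bool)
    inter = np.logical_and(auto, ref).sum()
    dsc = 2.0 * inter / (auto.sum() + ref.sum())
    total_overlap = inter / ref.sum()         # fraction of reference covered
    fnf = (ref & ~auto).sum() / ref.sum()     # reference voxels missed
    fpf = (auto & ~ref).sum() / auto.sum()    # automatic voxels outside reference
    return dsc, total_overlap, fnf, fpf

# Two offset 6x6 squares on a 10x10 grid (36 voxels each, 25 overlapping).
ref = np.zeros((10, 10), dtype=bool); ref[2:8, 2:8] = True
auto = np.zeros((10, 10), dtype=bool); auto[3:9, 3:9] = True
dsc, to, fnf, fpf = overlap_measures(auto, ref)
```

Note that under these conventions total overlap and FNF always sum to 1, which matches the reported value pairs (0.96/0.04 and 0.85/0.15).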


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2012

Segmentation of the pectoral muscle in breast MRI using atlas-based approaches

Albert Gubern-Mérida; Michiel Kallenberg; Robert Martí; Nico Karssemeijer

Pectoral muscle segmentation is an important step in automatic breast image analysis methods and crucial for multi-modal image registration. In breast MRI, accurate delineation of the pectoral muscle is important for volumetric breast density estimation and for pharmacokinetic analysis of dynamic contrast enhancement. In this paper we propose and study the performance of atlas-based segmentation methods, evaluating two fully automatic breast MRI dedicated strategies on a set of 27 manually segmented MR volumes. One uses a probabilistic model and the other is a multi-atlas registration based approach. The multi-atlas approach performed slightly better, with an average Dice similarity coefficient (DSC) of 0.74, while the much faster probabilistic method obtained a DSC of 0.72.
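The label-fusion step of a multi-atlas approach can be sketched as per-voxel majority voting over propagated atlas labels; the registration step, which is the expensive part, is omitted and the label maps below are synthetic:

```python
import numpy as np

# Sketch of multi-atlas label fusion: after each atlas is registered to the
# target volume, its propagated pectoral-muscle label map is fused with the
# others by per-voxel majority voting. Labels here are synthetic.

def majority_vote(label_maps):
    """Fuse a list of binary label volumes by per-voxel majority."""
    stacked = np.stack(label_maps).astype(int)
    return stacked.sum(axis=0) > (len(label_maps) / 2)

atlas_labels = [
    np.array([[1, 1, 0], [0, 1, 0]]),
    np.array([[1, 0, 0], [0, 1, 1]]),
    np.array([[1, 1, 0], [1, 1, 0]]),
]
fused = majority_vote(atlas_labels)
```

The probabilistic-model alternative replaces the vote with a per-voxel prior learned from the co-registered atlases.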


Medical Physics | 2017

Using deep learning to segment breast and fibroglandular tissue in MRI volumes

Mehmet Ufuk Dalmış; Geert J. S. Litjens; Katharina Holland; Arnaud Arindra Adiyoso Setio; Ritse M. Mann; Nico Karssemeijer; Albert Gubern-Mérida

Purpose: Automated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such as atlas-based methods, template matching, or edge and surface detection, have been applied to this task. However, the applicability of these methods is usually limited by the characteristics of the images in the study datasets, while breast MRI varies across MRI acquisition protocols, in addition to the variability in breast shapes. All this variability, together with various MRI artifacts, makes developing a robust breast and FGT segmentation method with traditional approaches a challenging task. Therefore, in this study, we investigated the use of a deep-learning approach known as "U-net." Materials and methods: We used a dataset of 66 breast MRIs randomly selected from our scientific archive, which includes five different MRI acquisition protocols and breasts from four breast density categories in a balanced distribution. To prepare reference segmentations, we manually segmented breast and FGT for all images using an in-house developed workstation. We experimented with applying U-net in two different ways for breast and FGT segmentation. In the first method, following the pipeline used in traditional approaches, we trained two consecutive (2C) U-nets: the first segments the breast in the whole MRI volume and the second segments FGT inside the segmented breast. In the second method, we used a single 3-class (3C) U-net, which performs both tasks simultaneously by segmenting the volume into three regions: non-breast, fat inside the breast, and FGT inside the breast. For comparison, we applied two existing published methods to our dataset: an atlas-based method and a sheetness-based method. We used the Dice similarity coefficient (DSC) to measure the performance of the automated methods with respect to the manual segmentations. Additionally, we computed Pearson's correlation between breast density values based on manual and automated segmentations. Results: The average DSC values for breast segmentation were 0.933, 0.944, 0.863, and 0.848 for the 3C U-net, 2C U-nets, atlas-based method, and sheetness-based method, respectively. The average DSC values for FGT segmentation obtained from the 3C U-net, 2C U-nets, and atlas-based method were 0.850, 0.811, and 0.671, respectively. The correlation between breast density values based on 3C U-net and manual segmentations was 0.974. This value was significantly higher than the 0.957 obtained from the 2C U-nets (P < 0.0001, Steiger's Z-test with Bonferroni correction) and the 0.938 obtained from the atlas-based method (P = 0.0016). Conclusions: We applied a deep-learning method, U-net, to segment breast and FGT in MRI in a dataset that includes a variety of MRI protocols and breast densities. Our results show that U-net-based methods significantly outperformed the existing algorithms and yielded significantly more accurate breast density computation.
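Once a 3-class segmentation is available, the breast-density value being correlated above is a simple volume ratio. The label coding below (0 = non-breast, 1 = fat inside the breast, 2 = FGT) is an assumption for the sketch:

```python
import numpy as np

# Breast density from a 3-class label volume. Assumed label coding:
# 0 = non-breast, 1 = fat inside the breast, 2 = FGT inside the breast.

def breast_density(labels):
    """FGT volume as a fraction of total breast volume."""
    fgt = (labels == 2).sum()
    breast = ((labels == 1) | (labels == 2)).sum()
    return fgt / breast

# Tiny synthetic label slice: 3 FGT voxels, 5 fat voxels -> density 3/8.
labels = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 1, 1],
])
density = breast_density(labels)
```

With anisotropic voxels, multiply the counts by the voxel volume; the ratio is unchanged as long as the spacing is uniform.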


Medical Physics | 2015

Computer-aided detection of breast cancers using Haar-like features in automated 3D breast ultrasound.

Tao Tan; Jan-Jurre Mordang; Jan van Zelst; André Grivegnée; Albert Gubern-Mérida; Jaime Melendez; Ritse M. Mann; Wei Zhang; Bram Platel; Nico Karssemeijer

PURPOSE Automated 3D breast ultrasound (ABUS) has gained interest in breast imaging. Especially for screening women with dense breasts, ABUS appears to be beneficial. However, since the amount of data generated is large, the risk of oversight errors is substantial. Computer-aided detection (CADe) may be used as a second reader to prevent oversight errors. When CADe is used in this fashion, it is essential that small cancers are detected while the number of false-positive findings remains acceptable. In this work, the authors improve the initial candidate-detection stage of their previously developed CADe system. METHODS The authors use a large number of 2D Haar-like features to differentiate lesion structures from false positives. Using a cascade of GentleBoost classifiers that combines these features, a likelihood score, highly specific for small cancers, can be computed efficiently. The likelihood scores are added to the previously developed voxel features to improve detection. RESULTS The method was tested on a dataset of 414 ABUS volumes with 211 cancers. Cancers had a mean size of 14.72 mm. Free-response receiver operating characteristic analysis was performed to evaluate the performance of the algorithm with and without the Haar-like feature likelihood scores. After the initial detection stage, the number of missed cancers was reduced by 18.8% by adding the Haar-like feature likelihood scores. CONCLUSIONS The proposed technique significantly improves the authors' previously developed CADe system in the initial candidate-detection stage.
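Haar-like features are cheap to evaluate because any rectangle sum comes from an integral image in constant time, which is what makes scoring many candidates with a boosted cascade tractable. A minimal two-rectangle example follows; the specific feature geometry is illustrative, not the paper's feature set:

```python
import numpy as np

# A 2D Haar-like feature evaluated via an integral image. The particular
# left-minus-right two-rectangle feature is an illustrative example.

def integral_image(img):
    """Cumulative sums along both axes: ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] (exclusive ends) from integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect(ii, r, c, h, w):
    """Left-minus-right two-rectangle feature at (r, c) of size h x 2w."""
    left = rect_sum(ii, r, c, r + h, c + w)
    right = rect_sum(ii, r, c + w, r + h, c + 2 * w)
    return left - right

# Dark-left / bright-right test patch: strong negative feature response.
img = np.array([
    [1.0, 1.0, 5.0, 5.0],
    [1.0, 1.0, 5.0, 5.0],
])
ii = integral_image(img)
feature = haar_two_rect(ii, 0, 0, 2, 2)
```

After the one-time integral-image pass, each feature costs a handful of lookups regardless of rectangle size, so a cascade can reject easy false positives with very few evaluations.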


Proceedings of the 13th International Workshop on Breast Imaging (IWDM 2016), Volume 9699 | 2016

A Comparison Between a Deep Convolutional Neural Network and Radiologists for Classifying Regions of Interest in Mammography

Thijs Kooi; Albert Gubern-Mérida; Jan-Jurre Mordang; Ritse M. Mann; Ruud M. Pijnappel; Klaas H. Schuur; Ard den Heeten; Nico Karssemeijer

In this paper, we employ a deep convolutional neural network (CNN) for the classification of regions of interest of malignant soft-tissue lesions in mammography and show that it performs on par with experienced radiologists. The CNN was applied to 398 regions of 5


Iberian Conference on Pattern Recognition and Image Analysis | 2011

Multi-class probabilistic atlas-based segmentation method in breast MRI

Albert Gubern-Mérida; Michiel Kallenberg; Robert Martí; Nico Karssemeijer


European Journal of Radiology | 2016

Automated detection of breast cancer in false-negative screening MRI studies from women at increased risk

Albert Gubern-Mérida; Suzan Vreemann; Robert Martí; Jaime Melendez; Susanne Lardenoije; Ritse M. Mann; Nico Karssemeijer; Bram Platel


Collaboration


Dive into Albert Gubern-Mérida's collaborations.

Top Co-Authors

Nico Karssemeijer, Radboud University Nijmegen
Ritse M. Mann, Radboud University Nijmegen
Bram Platel, Radboud University Nijmegen
Suzan Vreemann, Radboud University Nijmegen
Jan van Zelst, Radboud University Nijmegen
Jan-Jurre Mordang, Radboud University Nijmegen
Michiel Kallenberg, Radboud University Nijmegen Medical Centre