Annegreet van Opbroek
Erasmus University Rotterdam
Publications
Featured research published by Annegreet van Opbroek.
Computational Intelligence and Neuroscience | 2015
Adriënne M. Mendrik; Koen L. Vincken; Hugo J. Kuijf; Marcel Breeuwer; Willem H. Bouvy; Jeroen de Bresser; Amir Alansary; Marleen de Bruijne; Aaron Carass; Ayman El-Baz; Amod Jog; Ranveer Katyal; Ali R. Khan; Fedde van der Lijn; Qaiser Mahmood; Ryan Mukherjee; Annegreet van Opbroek; Sahil Paneri; Sérgio Pereira; Mikael Persson; Martin Rajchl; Duygu Sarikaya; Örjan Smedby; Carlos A. Silva; Henri A. Vrooman; Saurabh Vyas; Chunliang Wang; Liang Zhao; Geert Jan Biessels; Max A. Viergever
Many methods have been proposed for tissue segmentation in brain MRI scans, and this multitude of methods complicates the choice of one method over another. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65–80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and are used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked by their overall performance in segmenting GM, WM, and CSF, measured with three evaluation metrics (the Dice coefficient, the 95th-percentile Hausdorff distance (H95), and the absolute volume difference (AVD)); the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and of three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best-performing method for the segmentation goal at hand.
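The ranking rests on three complementary metrics. As a minimal sketch of how they could be computed for one binary tissue mask, the snippet below implements the Dice overlap, a symmetric 95th-percentile Hausdorff distance, and the absolute volume difference; the surface extraction by morphological erosion and the function names are illustrative choices, not the MRBrainS evaluation code.

```python
# Hedged sketch of the three evaluation metrics for one binary tissue mask.
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice(seg, ref):
    """Dice overlap between two boolean masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

def surface_distances(seg, ref, spacing):
    """Distances from the surface voxels of seg to the surface of ref."""
    seg_surf = seg & ~binary_erosion(seg)
    ref_surf = ref & ~binary_erosion(ref)
    # Distance map to the reference surface, evaluated at seg's surface voxels.
    dist_to_ref = distance_transform_edt(~ref_surf, sampling=spacing)
    return dist_to_ref[seg_surf]

def h95(seg, ref, spacing=(1.0, 1.0, 1.0)):
    """Symmetric 95th-percentile Hausdorff distance (in mm)."""
    d1 = surface_distances(seg.astype(bool), ref.astype(bool), spacing)
    d2 = surface_distances(ref.astype(bool), seg.astype(bool), spacing)
    return max(np.percentile(d1, 95), np.percentile(d2, 95))

def avd(seg, ref):
    """Absolute volume difference as a percentage of the reference volume."""
    return 100.0 * abs(int(seg.sum()) - int(ref.sum())) / ref.sum()
```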
IEEE Transactions on Medical Imaging | 2015
Annegreet van Opbroek; M. Arfan Ikram; Meike W. Vernooij; Marleen de Bruijne
The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques, which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and may therefore improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white matter, gray matter, and cerebrospinal fluid segmentation; and white-matter-lesion/MS-lesion segmentation. The experiments showed that when only a small amount of representative training data is available, transfer learning can greatly outperform common supervised-learning approaches, reducing classification errors by up to 60%.
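A common building block of such transfer classifiers is instance weighting: the few samples that are representative of the target data contribute more to training than the larger pool of differently distributed samples. The sketch below illustrates this idea with a weighted SVM; the weight ratio, kernel, and function name are illustrative assumptions rather than any of the four classifiers from the paper.

```python
# Hedged sketch: pool both training sources, but give the small set of
# target-representative samples a larger instance weight in a single SVM.
import numpy as np
from sklearn.svm import SVC

def weighted_transfer_svm(X_other, y_other, X_same, y_same, same_weight=10.0):
    """SVM trained on pooled data with up-weighted same-distribution samples."""
    X = np.vstack([X_other, X_same])
    y = np.concatenate([y_other, y_same])
    w = np.concatenate([np.ones(len(y_other)),
                        same_weight * np.ones(len(y_same))])
    clf = SVC(kernel="rbf", gamma="scale", C=1.0)
    clf.fit(X, y, sample_weight=w)
    return clf
```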
IEEE Transactions on Medical Imaging | 2015
Arna van Engelen; Anouk C. van Dijk; Martine T.B. Truijman; Ronald van’t Klooster; Annegreet van Opbroek; Aad van der Lugt; Wiro J. Niessen; M. Eline Kooi; Marleen de Bruijne
Automated segmentation of plaque components in carotid artery magnetic resonance imaging (MRI) is important to enable large studies on plaque vulnerability, and to incorporate plaque composition as an imaging biomarker in clinical practice. Supervised classification techniques in particular, which learn from labeled examples, have shown good performance. However, a disadvantage of supervised methods is their reduced performance on data different from the training data, for example on images acquired with different scanners. Reducing the amount of manual annotation required for each new dataset will facilitate widespread implementation of supervised methods. In this paper we segment carotid plaque components of clinical interest (fibrous tissue, lipid tissue, calcification, and intraplaque hemorrhage) in a multi-center MRI study. We perform voxelwise tissue classification with traditional same-center training, and compare the results with two approaches that use little or no annotated same-center data but additionally use an annotated set of different-center data. We evaluate 1) a nonlinear feature-normalization approach, and 2) two transfer-learning algorithms that weight same-center and different-center data differently. The best results were obtained for a combination of feature normalization and transfer learning: while the other approaches showed significant differences in voxelwise or mean volume errors compared with the reference same-center training, the proposed approach did not differ significantly from that reference. We conclude that both extensive feature normalization and transfer learning can be valuable for the development of supervised methods that perform well on different types of datasets.
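The abstract does not spell out the feature normalization, so as a hedged illustration of what a nonlinear, per-feature normalization between centers can look like, the sketch below matches each feature's percentiles in one center to those of a reference center; the quantile-matching scheme and function name are assumptions, not the method used in the paper.

```python
# Hedged sketch: map each feature of one center onto a reference center's
# distribution by piecewise-linear percentile matching.
import numpy as np

def quantile_normalize(X_center, X_reference, n_quantiles=100):
    """Map every feature column of X_center onto X_reference's distribution."""
    q = np.linspace(0, 100, n_quantiles)
    X_out = np.empty_like(X_center, dtype=float)
    for j in range(X_center.shape[1]):
        src_q = np.percentile(X_center[:, j], q)
        ref_q = np.percentile(X_reference[:, j], q)
        # Piecewise-linear mapping from source quantiles to reference quantiles.
        X_out[:, j] = np.interp(X_center[:, j], src_q, ref_q)
    return X_out
```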
Medical Image Analysis | 2015
Annegreet van Opbroek; Meike W. Vernooij; M. Arfan Ikram; Marleen de Bruijne
Many automatic segmentation methods are based on supervised machine learning. Such methods have proven to perform well, on the condition that they are trained on a sufficiently large, manually labeled training set that is representative of the images to segment. However, due to differences between scanners, scanning parameters, and patients, such a training set may be difficult to obtain. We present a transfer-learning approach to segmentation by multi-feature voxelwise classification. The presented method can be trained on a heterogeneous set of training images that may be obtained with different scanners than the target image. In our approach each training image is given a weight based on the distribution of its voxels in the feature space. These image weights are chosen so as to minimize the difference between the weighted probability density function (PDF) of the voxels of the training images and the PDF of the voxels of the target image. The voxels and weights of the training images are then used to train a weighted classifier. We tested our method on three segmentation tasks: brain-tissue segmentation, skull stripping, and white-matter-lesion segmentation. For all three applications, the proposed weighted classifier significantly outperformed an unweighted classifier trained on all training images, reducing classification errors by up to 42%. For brain-tissue segmentation and skull stripping our method even significantly outperformed the traditional approach of training on representative training images from the same study as the target image.
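As a rough illustration of the weighting idea, the sketch below represents each training image and the target image by normalized feature histograms and solves a nonnegative least-squares problem so that the weighted combination of training histograms approximates the target histogram; the histogram representation, the least-squares criterion, and the function name are simplifying assumptions, not the paper's estimator.

```python
# Hedged sketch: choose per-image weights so that the weighted sum of
# training-image feature histograms matches the target image's histogram.
import numpy as np
from scipy.optimize import nnls

def image_weights(train_feature_sets, target_features, n_bins=20):
    """train_feature_sets: list of (n_voxels_i, n_features) arrays."""
    all_feats = np.vstack(train_feature_sets + [target_features])
    # Shared binning per feature; per-feature histograms are concatenated
    # into one vector as a coarse stand-in for the voxel PDF.
    edges = [np.linspace(all_feats[:, j].min(), all_feats[:, j].max(), n_bins + 1)
             for j in range(all_feats.shape[1])]

    def pdf(F):
        hists = [np.histogram(F[:, j], bins=edges[j], density=True)[0]
                 for j in range(F.shape[1])]
        return np.concatenate(hists)

    A = np.column_stack([pdf(F) for F in train_feature_sets])
    b = pdf(target_features)
    w, _ = nnls(A, b)              # nonnegative image weights
    return w / w.sum()             # normalize so the weights sum to one
```

Each voxel of training image i would then enter the weighted classifier with the weight assigned to its image.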
International Workshop on Machine Learning in Medical Imaging | 2012
Annegreet van Opbroek; M. Arfan Ikram; Meike W. Vernooij; Marleen de Bruijne
Supervised classification techniques are among the most powerful methods for automatic segmentation of medical images. A disadvantage of these methods is that they require a representative training set and thus run into problems when the training data are acquired, for example, with a different scanner protocol than the target segmentation data. We therefore propose a framework for supervised biomedical image segmentation across different scanner protocols by means of transfer learning. We establish a transfer-learning algorithm for classification that can exploit a large amount of labeled samples from different sources in addition to a small amount of samples from the target source. The algorithm iteratively re-weights the contribution of training samples from these different sources based on classification by a weighted SVM classifier. We evaluate this technique by performing tissue classification on MRI brain data from four substantially different scanning protocols. Given only a small number of labeled samples from a single image obtained with the same protocol, the proposed transfer-learning method outperforms both classification on all available training data and classification based on the labeled target samples only. The classification errors in these cases can be reduced by up to 40 percent compared to traditional classification techniques.
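A minimal sketch of such an iterative re-weighting loop is given below: a weighted SVM is trained on all labeled samples, and other-source samples that the current classifier misclassifies are down-weighted before the next iteration. The update rule, decay factor, and kernel settings are illustrative assumptions, not the published algorithm.

```python
# Hedged sketch of iterative re-weighting with a weighted SVM.
import numpy as np
from sklearn.svm import SVC

def iterative_reweighting(X_other, y_other, X_target, y_target,
                          n_iter=10, decay=0.5):
    X = np.vstack([X_other, X_target])
    y = np.concatenate([y_other, y_target])
    n_other = len(y_other)
    w = np.ones(len(y))
    clf = None
    for _ in range(n_iter):
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(X, y, sample_weight=w)
        # Down-weight other-source samples the current classifier gets wrong,
        # treating them as unrepresentative of the target protocol.
        wrong = np.zeros(len(y), dtype=bool)
        wrong[:n_other] = clf.predict(X[:n_other]) != y_other
        w[wrong] *= decay
    return clf, w
```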
International Conference on Machine Learning | 2013
Annegreet van Opbroek; M. Arfan Ikram; Meike W. Vernooij; Marleen de Bruijne
Many successful methods for biomedical image segmentation are based on supervised learning, where a segmentation algorithm is trained on manually labeled training data. For supervised-learning algorithms to perform well, this training data has to be representative of the target data. In practice, however, such representative training data is often not available due to differences between scanners. We therefore present a segmentation algorithm in which the labeled training data does not need to be representative of the target data, which allows the use of training data from different studies than the target data. The algorithm assigns an importance weight to each training image, in such a way that the Kullback-Leibler divergence between the resulting distribution of the training data and the distribution of the target data is minimized. In a set of experiments on MRI brain-tissue segmentation, with training and target data from four substantially different studies, our method reduced mean classification errors by up to 25% compared to common supervised-learning approaches.
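Read directly from the abstract, the weighting objective can be formalized as below; the direction of the KL divergence and the decomposition of the training distribution into per-image densities follow the abstract's wording, and the paper's precise formulation and density estimator may differ.

```latex
% Hedged formalization of the image-weighting objective described above.
\min_{w_1,\dots,w_N}\; D_{\mathrm{KL}}\!\left(p_w \,\middle\|\, p_T\right)
  = \int p_w(x)\,\log\frac{p_w(x)}{p_T(x)}\,\mathrm{d}x,
\qquad
p_w(x) = \sum_{i=1}^{N} w_i\, p_i(x),
\quad w_i \ge 0,\; \sum_{i=1}^{N} w_i = 1,
```

where $p_i$ denotes the feature-space density of the voxels of training image $i$ and $p_T$ that of the target image.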
International Conference on Machine Learning | 2015
Annegreet van Opbroek; Hakim C. Achterberg; Marleen de Bruijne
Image-segmentation techniques based on supervised classification generally perform well on the condition that training and test samples have the same feature distribution. However, if training and test images are acquired with different scanners or scanning parameters, their feature distributions can be very different, which can hurt the performance of such techniques. We propose a feature-space-transformation method to overcome these differences in feature distributions. Our method learns a mapping of the feature values of training voxels to the values observed in images from the test scanner. This transformation is learned from unlabeled images of subjects scanned on both the training scanner and the test scanner. We evaluated our method on hippocampus segmentation on 27 images of the Harmonized Hippocampal Protocol (HarP), a heterogeneous dataset consisting of 1.5T and 3T MR images. The results showed that our feature-space transformation improved the Dice overlap of segmentations obtained with an SVM classifier from 0.36 to 0.85 when only 10 atlases were used, and from 0.79 to 0.85 when around 100 atlases were used.
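As a hedged sketch of how such a mapping could be learned from paired, already-aligned scans of the same subjects, the snippet below fits a k-nearest-neighbour regressor from training-scanner feature vectors to the corresponding test-scanner feature vectors; the regressor, its parameters, and the function name are illustrative choices, not the transformation used in the paper.

```python
# Hedged sketch: learn a feature-value mapping from voxelwise correspondences
# between scans of the same subjects on both scanners (assumed aligned here).
from sklearn.neighbors import KNeighborsRegressor

def learn_fst(paired_src_feats, paired_tgt_feats, n_neighbors=10):
    """paired_*: (n_voxels, n_features) arrays of corresponding voxels."""
    fst = KNeighborsRegressor(n_neighbors=n_neighbors)
    fst.fit(paired_src_feats, paired_tgt_feats)
    return fst

# Usage: map labeled training voxels into the test scanner's feature space,
# then train any supervised classifier on the mapped features, e.g.:
# X_train_mapped = learn_fst(src, tgt).predict(X_train)
```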
NeuroImage: Clinical | 2018
Annegreet van Opbroek; Hakim C. Achterberg; Meike W. Vernooij; M. Arfan Ikram; Marleen de Bruijne; Alzheimer's Disease Neuroimaging Initiative
Many successful approaches in MR brain segmentation use supervised voxel classification, which requires manually labeled training images that are representative of the test images to segment. However, the performance of such methods often deteriorates if training and test images are acquired with different scanners or scanning parameters, since this leads to differences in feature representations between training and test data. In this paper we propose a feature-space transformation (FST) to overcome such differences in feature representations. The proposed FST is derived from unlabeled images of a subject that was scanned with both the source and the target scan protocol. After an affine registration, these images yield a mapping between source and target voxels in the feature space. This mapping is then used to map all training samples to the feature representation of the test samples. We evaluated the benefit of the proposed FST on hippocampus segmentation. Experiments were performed on two datasets: one with relatively small differences between training and test images and one with large differences. In both cases, the FST significantly improved performance compared to using only image normalization. Additionally, we showed that our FST can be used to improve the performance of a state-of-the-art patch-based atlas-fusion technique in the case of large differences between scanners.
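To show where such an FST would sit in a voxel-classification pipeline, the sketch below maps labeled source-protocol training voxels into the target protocol's feature representation and then trains and applies a standard classifier; the fitted mapping is assumed to expose a predict() interface like the regressor sketched for the 2015 conference paper above, and the classifier choice and settings are illustrative.

```python
# Hedged sketch: plug a fitted feature-space transformation into a standard
# voxel-classification pipeline.
from sklearn.svm import SVC

def segment_with_fst(fst, X_train, y_train, X_test_voxels):
    """fst: fitted mapping with .predict(); X_*: (n_voxels, n_features)."""
    X_train_mapped = fst.predict(X_train)   # move training data to target space
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X_train_mapped, y_train)
    return clf.predict(X_test_voxels)       # per-voxel labels for the test scan
```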
Stochastic Models | 2012
Michel Dekking; Derong Kong; Annegreet van Opbroek
We consider a discrete-time particle model for kinetic transport on the two-dimensional integer lattice. The particle can move due to advection in the x-direction and due to dispersion; this happens when the particle is free, but the particle can also be adsorbed, in which case it does not move. When the dispersion of the particle is modeled by a simple random walk, strange phenomena occur. In the second half of the paper we resolve these issues and give expressions for the shape of the plume formed by many particles.
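As a hedged illustration of the kind of model described, the sketch below simulates one particle that, while free, drifts one lattice step per time step in the x-direction and takes a simple-random-walk dispersion step, and that switches between the free and adsorbed states with fixed probabilities; all parameter values and the exact switching mechanism are assumptions for illustration, not the paper's specification.

```python
# Hedged Monte Carlo sketch of a free/adsorbed particle on the 2D lattice.
import numpy as np

rng = np.random.default_rng(0)
steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])  # simple-random-walk steps

def simulate(n_steps=1000, p_adsorb=0.1, p_desorb=0.3, drift=1):
    """Final position of one particle after n_steps time steps."""
    x, y, free = 0, 0, True
    for _ in range(n_steps):
        if free:
            x += drift                        # advection in the x-direction
            dx, dy = steps[rng.integers(4)]   # dispersion: one SRW step
            x, y = x + dx, y + dy
            if rng.random() < p_adsorb:       # particle may become adsorbed
                free = False
        elif rng.random() < p_desorb:         # adsorbed particle may be released
            free = True
    return x, y

# The plume is the empirical distribution of many independent particles, e.g.:
# positions = np.array([simulate() for _ in range(10_000)])
```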
Clinical Interventions in Aging | 2018
Lisanne Tap; Annegreet van Opbroek; Wiro J. Niessen; Marion Smits; Francesco Mattace-Raso