Publication


Featured research published by Mattias P. Heinrich.


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2011

Non-local shape descriptor: a new similarity metric for deformable multi-modal registration

Mattias P. Heinrich; Mark Jenkinson; Manav Bhushan; Tahreema N. Matin; Fergus V. Gleeson; J. Michael Brady; Julia A. Schnabel

Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this problem and proposes a new similarity metric for multi-modal registration, the non-local shape descriptor. It aims to extract the shape of anatomical features in a non-local region. By utilizing the dense evaluation of shape descriptors, this new measure bridges the gap between intensity-based and geometric feature-based similarity criteria. Our new metric allows for accurate and reliable registration of clinical multi-modal datasets and is robust against the most considerable differences between modalities, such as non-functional intensity relations, different amounts of noise and non-uniform bias fields. The measure has been implemented in a non-rigid diffusion-regularized registration framework. It has been applied to synthetic test images and challenging clinical MRI and CT chest scans. Experimental results demonstrate its advantages over the most commonly used similarity metric - mutual information, and show improved alignment of anatomical landmarks.


IEEE Transactions on Medical Imaging | 2011

Evaluation of Registration Methods on Thoracic CT: The EMPIRE10 Challenge

K. Murphy; B. van Ginneken; Joseph M. Reinhardt; Sven Kabus; Kai Ding; Xiang Deng; Kunlin Cao; Kaifang Du; Gary E. Christensen; V. Garcia; Tom Vercauteren; Nicholas Ayache; Olivier Commowick; Grégoire Malandain; Ben Glocker; Nikos Paragios; Nassir Navab; V. Gorbunova; Jon Sporring; M. de Bruijne; Xiao Han; Mattias P. Heinrich; Julia A. Schnabel; Mark Jenkinson; Cristian Lorenz; Marc Modat; Jamie R. McClelland; Sebastien Ourselin; S. E. A. Muenzing; Max A. Viergever

EMPIRE10 (Evaluation of Methods for Pulmonary Image REgistration 2010) is a public platform for fair and meaningful comparison of registration algorithms applied to a database of intra-patient thoracic CT image pairs. Evaluation of nonrigid registration techniques is a nontrivial task. This is compounded by the fact that researchers typically test only on their own data, which varies widely. For this reason, reliable assessment and comparison of different registration algorithms has been virtually impossible in the past. In this work we present the results of the launch phase of EMPIRE10, which comprised the comprehensive evaluation and comparison of 20 individual algorithms from leading academic and industrial research groups. All algorithms are applied to the same set of 30 thoracic CT pairs. Algorithm settings and parameters are chosen by researchers expert in the configuration of their own method, and the evaluation is independent, using the same criteria for all participants. All results are published on the EMPIRE10 website (http://empire10.isi.uu.nl). The challenge remains ongoing and open to new participants. Full results from 24 algorithms have been published at the time of writing. This paper details the organization of the challenge, the data and evaluation methods, and the outcome of the initial launch with 20 algorithms. The gain in knowledge and future work are discussed.


Medical Image Analysis | 2012

MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration

Mattias P. Heinrich; Mark Jenkinson; Manav Bhushan; Tahreema N. Matin; Fergus V. Gleeson; Sir Michael Brady; Julia A. Schnabel

Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor is based on the concept of image self-similarity, which has been introduced for non-local means filtering for image denoising. It is able to distinguish between different types of features such as corners, edges and homogeneously textured regions. MIND is robust to the most considerable differences between modalities: non-functional intensity relations, image noise and non-uniform bias fields. The multi-dimensional descriptor can be efficiently computed in a dense fashion across the whole image and provides point-wise local similarity across modalities based on the absolute or squared difference between descriptors, making it applicable for a wide range of transformation models and optimisation algorithms. We use the sum of squared differences of the MIND representations of the images as a similarity metric within a symmetric non-parametric Gauss-Newton registration framework. In principle, MIND would be applicable to the registration of arbitrary modalities. In this work, we apply and validate it for the registration of clinical 3D thoracic CT scans between inhale and exhale as well as the alignment of 3D CT and MRI scans. Experimental results show the advantages of MIND over state-of-the-art techniques such as conditional mutual information and entropy images, with respect to clinically annotated landmark locations.
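The construction described above can be sketched in a toy 2-D form. In the snippet below the search region is fixed to the 4-neighbourhood and the patch distance is a plain sum of squared differences; the function names, parameters and normalisation are illustrative simplifications, not the paper's implementation:

```python
import numpy as np

def mind_descriptor(img, radius=1):
    """Toy MIND-style descriptor: for each pixel, compute the patch
    sum-of-squared-differences to the four neighbouring patches,
    normalise by a local variance estimate, and map through exp(-D/V)
    to obtain a multi-channel descriptor."""
    h, w = img.shape
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # search region: 4-neighbourhood
    pad = radius + 1
    p = np.pad(img, pad, mode='edge')
    dists = []
    for dy, dx in offsets:
        d = np.zeros((h, w))
        for py in range(-radius, radius + 1):      # aggregate over the patch window
            for px in range(-radius, radius + 1):
                a = p[pad + py:pad + py + h, pad + px:pad + px + w]
                b = p[pad + dy + py:pad + dy + py + h, pad + dx + px:pad + dx + px + w]
                d += (a - b) ** 2
        dists.append(d)
    D = np.stack(dists)                  # (4, h, w) patch distances
    V = D.mean(axis=0) + 1e-8            # local variance estimate
    M = np.exp(-D / V)                   # descriptor channels in (0, 1]
    return M / M.max(axis=0, keepdims=True)

def mind_ssd(img1, img2):
    """Point-wise multi-modal similarity: SSD between descriptor stacks."""
    return np.sum((mind_descriptor(img1) - mind_descriptor(img2)) ** 2, axis=0)
```

Because patch distances are unchanged when both patches undergo the same intensity inversion, the descriptor of an image and of its intensity-inverted copy coincide, which illustrates the kind of robustness to non-functional intensity relations the paper targets.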


Medical Image Analysis | 2017

ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI

Oskar Maier; Bjoern H. Menze; Janina von der Gablentz; Levin Häni; Mattias P. Heinrich; Matthias Liebrand; Stefan Winzeck; Abdul W. Basit; Paul Bentley; Liang Chen; Daan Christiaens; Francis Dutil; Karl Egger; Chaolu Feng; Ben Glocker; Michael Götz; Tom Haeck; Hanna Leena Halme; Mohammad Havaei; Khan M. Iftekharuddin; Pierre-Marc Jodoin; Konstantinos Kamnitsas; Elias Kellner; Antti Korvenoja; Hugo Larochelle; Christian Ledig; Jia-Hong Lee; Frederik Maes; Qaiser Mahmood; Klaus H. Maier-Hein

Ischemic stroke is the most common cerebrovascular disease, and its diagnosis, treatment, and study rely on non-invasive imaging. Algorithms for stroke lesion segmentation from magnetic resonance imaging (MRI) volumes are intensely researched, but the reported results are largely incomparable due to different datasets and evaluation schemes. We approached this urgent problem of comparability with the Ischemic Stroke Lesion Segmentation (ISLES) challenge organized in conjunction with the MICCAI 2015 conference. In this paper we propose a common evaluation framework, describe the publicly available datasets, and present the results of the two sub-challenges: Sub-Acute Stroke Lesion Segmentation (SISS) and Stroke Perfusion Estimation (SPES). A total of 16 research groups participated with a wide range of state-of-the-art automatic segmentation algorithms. A thorough analysis of the obtained data enables a critical evaluation of the current state of the art, recommendations for further developments, and the identification of remaining challenges. The segmentation of acute perfusion lesions addressed in SPES was found to be feasible. However, algorithms applied to sub-acute lesion segmentation in SISS still lack accuracy. Overall, no algorithmic characteristic of any method was found to be superior to the others. Instead, the characteristics of stroke lesion appearances, their evolution, and the observed challenges should be studied in detail. The annotated ISLES image datasets continue to be publicly available through an online evaluation system to serve as an ongoing benchmarking resource (www.isles-challenge.org).

Highlights:
- Evaluation framework for automatic stroke lesion segmentation from MRI
- Public multi-center, multi-vendor, multi-protocol databases released
- Ongoing fair and automated benchmark with expert-created ground truth sets
- Comparison of 14+7 groups who responded to an open challenge at MICCAI
- Segmentation feasible in acute and unsolved in sub-acute cases
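The common evaluation framework above compares automatic segmentations against expert ground truth. The exact metrics are not named in this abstract, but a standard overlap measure in such segmentation benchmarks is the Dice coefficient, sketched here:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).
    Returns 1 for identical non-empty masks, 0 for disjoint ones."""
    a = a.astype(bool)
    b = b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)
```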


Medical Image Analysis | 2014

An implicit sliding-motion preserving regularisation via bilateral filtering for deformable image registration

Bartłomiej W. Papież; Mattias P. Heinrich; Jérôme Fehrenbach; Laurent Risser; Julia A. Schnabel

Several biomedical applications require accurate image registration that can cope effectively with complex organ deformations. This paper addresses this problem by introducing a generic deformable registration algorithm with a new regularization scheme, which is performed through bilateral filtering of the deformation field. The proposed approach is primarily designed to handle smooth deformations both between and within body structures, and also more challenging deformation discontinuities exhibited by sliding organs. The conventional Gaussian smoothing of deformation fields is replaced by a bilateral filtering procedure, which compromises between the spatial smoothness and local intensity similarity kernels, and is further supported by a deformation field similarity kernel. Moreover, the presented framework does not require any explicit prior knowledge about the organ motion properties (e.g. segmentation) and therefore forms a fully automated registration technique. Validation was performed using synthetic phantom data and publicly available clinical 4D CT lung data sets. In both cases, the quantitative analysis shows improved accuracy when compared to conventional Gaussian smoothing. In addition, we provide experimental evidence that masking the lungs in order to avoid the problem of sliding motion during registration performs similarly in terms of the target registration error when compared to the proposed approach, however it requires accurate lung segmentation. Finally, quantification of the level and location of detected sliding motion yields visually plausible results by demonstrating noticeable sliding at the pleural cavity boundaries.
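The core idea, replacing Gaussian smoothing of the deformation field with a filter whose weights combine spatial, image-intensity and deformation-similarity kernels, can be sketched in 1-D (function name and kernel widths are illustrative, not the paper's values):

```python
import numpy as np

def bilateral_smooth_field(field, intensity, sigma_s=2.0, sigma_i=0.1, sigma_d=0.5):
    """1-D sketch of bilateral regularisation of a deformation field:
    each smoothed value is a weighted average of the field, with weights
    from a spatial kernel, an image-intensity similarity kernel, and a
    deformation-field similarity kernel."""
    n = len(field)
    xs = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        w = (np.exp(-(xs - i) ** 2 / (2 * sigma_s ** 2))                      # spatial
             * np.exp(-(intensity - intensity[i]) ** 2 / (2 * sigma_i ** 2))  # intensity
             * np.exp(-(field - field[i]) ** 2 / (2 * sigma_d ** 2)))         # field similarity
        out[i] = np.sum(w * field) / np.sum(w)
    return out
```

In contrast to a plain Gaussian kernel, a step in the deformation field that coincides with an intensity boundary (a sliding interface) receives almost no cross-boundary weight and is therefore preserved.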


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2011

Motion correction and parameter estimation in dceMRI sequences: application to colorectal cancer

Manav Bhushan; Julia A. Schnabel; Laurent Risser; Mattias P. Heinrich; J. Michael Brady; Mark Jenkinson

We present a novel Bayesian framework for non-rigid motion correction and pharmacokinetic parameter estimation in dceMRI sequences which incorporates a physiological image formation model into the similarity measure used for motion correction. The similarity measure is based on the maximization of the joint posterior probability of the transformations which need to be applied to each image in the dataset to bring all images into alignment, and the physiological parameters which best explain the data. The deformation framework used to deform each image is based on the diffeomorphic logDemons algorithm. We then use this method to co-register images from simulated and real dceMRI datasets and show that the method leads to an improvement in the estimation of physiological parameters as well as improved alignment of the images.


IEEE Transactions on Medical Imaging | 2018

Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation

Ozan Oktay; Enzo Ferrante; Konstantinos Kamnitsas; Mattias P. Heinrich; Wenjia Bai; Jose Caballero; Stuart A. Cook; Antonio de Marvao; Timothy Dawes; Declan O'Regan; Bernhard Kainz; Ben Glocker; Daniel Rueckert

Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques such as CNN-based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac data sets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.
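The structure of the training objective, a pixel-wise data term plus a regulariser comparing predictions and ground truth in a learnt low-dimensional shape space, can be illustrated with a toy stand-in. Here a fixed random linear map replaces the paper's learnt non-linear encoder, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the learnt shape model: a fixed random linear
# projection of a flattened 8x8 segmentation map to a 16-dimensional code.
W_enc = rng.normal(size=(16, 64))

def encode(seg):
    """Project a segmentation map into the low-dimensional shape space."""
    return W_enc @ seg.ravel()

def acnn_style_loss(pred, target, lam=0.5):
    """Sketch of an ACNN-style objective: pixel-wise term plus a shape
    regulariser in the encoded representation, weighted by lam."""
    pixel_term = np.mean((pred - target) ** 2)
    shape_term = np.mean((encode(pred) - encode(target)) ** 2)
    return pixel_term + lam * shape_term
```

The second term is what pushes predictions toward globally plausible anatomy even when the pixel-wise term alone would tolerate scattered errors.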


Medical Image Analysis | 2016

Deformable image registration by combining uncertainty estimates from supervoxel belief propagation

Mattias P. Heinrich; Ivor J. A. Simpson; Bartłomiej W. Papież; Sir Michael Brady; Julia A. Schnabel

Discrete optimisation strategies have a number of advantages over their continuous counterparts for deformable registration of medical images. For example: it is not necessary to compute derivatives of the similarity term; dense sampling of the search space reduces the risk of becoming trapped in local optima; and (in principle) an optimum can be found without resorting to iterative coarse-to-fine warping strategies. However, the large complexity of high-dimensional medical data renders a direct voxel-wise estimation of deformation vectors impractical. For this reason, previous work on medical image registration using graphical models has largely relied on using a parameterised deformation model and on the use of iterative coarse-to-fine optimisation schemes. In this paper, we propose an approach that enables accurate voxel-wise deformable registration of high-resolution 3D images without the need for intermediate image warping or a multi-resolution scheme. This is achieved by representing the image domain as multiple comprehensive supervoxel layers and making use of the full marginal distribution of all probable displacement vectors after inferring regularity of the deformations using belief propagation. The optimisation acts on the coarse scale representation of supervoxels, which provides sufficient spatial context and is robust to noise in low contrast areas. Minimum spanning trees, which connect neighbouring supervoxels, are employed to model pair-wise deformation dependencies. The optimal displacement for each voxel is calculated by considering the probabilities for all displacements over all overlapping supervoxel graphs and subsequently seeking the mode of this distribution. We demonstrate the applicability of this concept for two challenging applications: first, for intra-patient motion estimation in lung CT scans; and second, for atlas-based segmentation propagation of MRI brain scans. 
For lung registration, the voxel-wise mode of displacements is found using the mean-shift algorithm, which enables us to determine continuous valued sub-voxel motion vectors. Finding the mode of brain segmentation labels is performed using a voxel-wise majority voting weighted by the displacement uncertainty estimates. Our experimental results show significant improvements in registration accuracy when using the additional information provided by the registration uncertainty estimates. The multi-layer approach enables fusion of multiple complementary proposals, extending the popular fusion approaches from multi-image registration to probabilistic one-to-one image registration.
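The mean-shift mode-finding step mentioned for lung registration can be sketched for a single voxel's candidate displacements (the bandwidth, iteration count and initialisation are illustrative assumptions):

```python
import numpy as np

def mean_shift_mode(displacements, probs, bandwidth=1.0, iters=100):
    """Given candidate displacement vectors for one voxel and their
    marginal probabilities, iterate a probability-weighted Gaussian
    mean shift to find a continuous-valued mode."""
    x = displacements[np.argmax(probs)].astype(float)  # start at most probable candidate
    for _ in range(iters):
        d2 = np.sum((displacements - x) ** 2, axis=1)
        w = probs * np.exp(-d2 / (2 * bandwidth ** 2))   # kernel-weighted probabilities
        x_new = (w[:, None] * displacements).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-6:             # converged
            break
        x = x_new
    return x
```

Because the mode is a weighted average of the discrete candidates, the returned displacement can take sub-voxel values even though the candidates lie on an integer grid.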


IEEE Transactions on Medical Imaging | 2016

Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms: VISCERAL Anatomy Benchmarks

Oscar Jimenez-del-Toro; Henning Müller; Markus Krenn; Katharina Gruenberg; Abdel Aziz Taha; Marianne Winterstein; Ivan Eggel; Antonio Foncubierta-Rodríguez; Orcun Goksel; András Jakab; Georgios Kontokotsios; Georg Langs; Bjoern H. Menze; Tomas Salas Fernandez; Roger Schaer; Anna Walleyo; Marc-André Weber; Yashin Dicente Cid; Tobias Gass; Mattias P. Heinrich; Fucang Jia; Fredrik Kahl; Razmig Kéchichian; Dominic Mai; Assaf B. Spanier; Graham Vincent; Chunliang Wang; Daniel Wyeth; Allan Hanbury

Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease, and automatic tools can help with parts of this otherwise manual assessment. This paper presents a cloud-based evaluation framework, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data; the benchmark administrators run the algorithms privately to compare their performance objectively on an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms on a larger set of non-manually-annotated medical images, are available to the research community.


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2012

Globally Optimal Deformable Registration on a Minimum Spanning Tree Using Dense Displacement Sampling

Mattias P. Heinrich; Mark Jenkinson; Sir Michael Brady; Julia A. Schnabel

Deformable image registration poses a highly non-convex optimisation problem. Conventionally, medical image registration techniques rely on continuous optimisation, which is prone to local minima. Recent advances in mathematics and new programming methods enable these disadvantages to be overcome using discrete optimisation. In this paper, we present a new technique, deeds, which employs dense displacement sampling for the deformable registration of high-resolution CT volumes. The image grid is represented as a minimum spanning tree. Given these constraints, a global optimum of the cost function can be found efficiently using dynamic programming, which enforces the smoothness of the deformations. Experimental results demonstrate the advantages of deeds: the registration error for the challenging registration of inhale and exhale pulmonary CT scans is significantly lower than for two state-of-the-art registration techniques, especially in the presence of large deformations and sliding motion at lung surfaces.
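Exact labelling on a tree via dynamic programming can be sketched with a two-pass algorithm (a toy version with a squared-difference pairwise term and an explicit child list; the paper's actual cost terms and problem scale are not reproduced here):

```python
import numpy as np

def tree_dp(children, unary, alpha=1.0):
    """Globally optimal labelling on a tree. Node 0 is the root,
    children[i] lists the children of node i, and unary[i, l] is the
    data cost of assigning displacement label l to node i. The pairwise
    term penalises squared label differences along tree edges."""
    n, L = unary.shape
    labels = np.arange(L)
    pair = alpha * (labels[:, None] - labels[None, :]) ** 2  # (parent label, own label)
    msg = np.zeros((n, L))              # message each node sends to its parent
    best = np.zeros((n, L), dtype=int)  # own best label given each parent label

    def up(i):
        # bottom-up pass: accumulate subtree costs from the leaves to the root
        cost = unary[i].copy()
        for c in children[i]:
            up(c)
            cost += msg[c]
        total = pair + cost[None, :]
        msg[i] = total.min(axis=1)      # minimise over own label per parent label
        best[i] = total.argmin(axis=1)
        return cost

    root_cost = up(0)
    assign = np.zeros(n, dtype=int)
    assign[0] = int(root_cost.argmin())

    def down(i):
        # top-down pass: read off the globally optimal labels
        for c in children[i]:
            assign[c] = best[c][assign[i]]
            down(c)

    down(0)
    return assign
```

On a tree this two-pass scheme is exact, which is what makes the minimum-spanning-tree representation of the image grid attractive for dense displacement sampling.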

Collaboration


Dive into Mattias P. Heinrich's collaborations.

Top Co-Authors

Ozan Oktay

Imperial College London

Ben Glocker

Imperial College London

Laurent Risser

Institut de Mathématiques de Toulouse
