Fabian Isensee
German Cancer Research Center
Publications
Featured research published by Fabian Isensee.
medical image computing and computer assisted intervention | 2017
Fabian Isensee; Philipp Kickingereder; Wolfgang Wick; Martin Bendszus; Klaus H. Maier-Hein
Quantitative analysis of brain tumors is critical for clinical decision making. While manual segmentation is tedious, time-consuming, and subjective, the task is at the same time very challenging for automatic segmentation methods. In this paper we present our most recent effort toward a robust segmentation algorithm in the form of a convolutional neural network. Our network architecture was inspired by the popular U-Net and has been carefully modified to maximize brain tumor segmentation performance. We use a Dice loss function to cope with class imbalance and extensive data augmentation to successfully prevent overfitting. Our method beats the previous state of the art on BraTS 2015, is one of the leading methods on the BraTS 2017 validation set (Dice scores of 0.896, 0.797, and 0.732 for whole tumor, tumor core, and enhancing tumor, respectively), and achieves very good Dice scores on the test set (0.858 for whole tumor, 0.775 for tumor core, and 0.647 for enhancing tumor). We furthermore take part in the survival prediction subchallenge by training an ensemble of a random forest regressor and multilayer perceptrons on shape features describing the tumor subregions. This approach achieves 52.6% accuracy, a Spearman correlation coefficient of 0.496, and a mean squared error of 209607 on the test set.
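The Dice loss mentioned in the abstract directly optimizes overlap between prediction and ground truth, which makes it robust to class imbalance (tumor voxels are rare compared to background). A minimal numpy sketch of a binary soft Dice loss is shown below; the paper's exact multi-class formulation may differ, and the function name is illustrative.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary segmentation map.

    pred   : float array of foreground probabilities in [0, 1]
    target : binary array of ground-truth labels (same shape)

    Because the loss is a ratio of overlap to total foreground mass,
    the huge background class does not dominate the objective.
    """
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

# A perfect prediction gives a loss near 0; a disjoint one, near 1.
perfect = soft_dice_loss(np.array([1.0, 0.0, 1.0]), np.array([1, 0, 1]))
disjoint = soft_dice_loss(np.array([1.0, 0.0, 0.0]), np.array([0, 0, 1]))
```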
Workshop on Image Processing for Medicine 2017 | 2017
Fabian Isensee; Philipp Kickingereder; David Bonekamp; Martin Bendszus; Wolfgang Wick; Heinz-Peter Schlemmer; Klaus H. Maier-Hein
Glioblastoma segmentation is an important challenge in medical image processing. State-of-the-art methods make use of convolutional neural networks but generally employ only a few layers and small receptive fields, which limits the amount and quality of contextual information available for segmentation. In this publication we use the well-known U-Net architecture to alleviate these shortcomings. We furthermore show that a sophisticated training scheme that uses dynamic sampling of training data, data augmentation, and a class-sensitive loss allows training such a complex architecture on relatively little data. A qualitative comparison with the state of the art shows favorable performance of our approach.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017
Eric Heim; Alexander Seitel; Christian Stock; Fabian Isensee; Lena Maier-Hein
With the rapidly increasing interest in machine learning based solutions for automatic image annotation, the availability of reference annotations for algorithm training is one of the major bottlenecks in the field. Crowdsourcing has evolved as a valuable option for low-cost and large-scale data annotation; however, quality control remains a major issue that needs to be addressed. To our knowledge, we are the first to analyze the annotation process to improve crowdsourced image segmentation. Our method involves training a regressor to estimate the quality of a segmentation from the annotator's clickstream data. The quality estimate can be used to identify spam and to weight individual annotations by their (estimated) quality when merging multiple segmentations of one image. Using a total of 29,000 crowd annotations performed on publicly available data of different object classes, we show that our method (1) is highly accurate in estimating segmentation quality from clickstream data and (2) outperforms state-of-the-art methods for merging multiple annotations. As the regressor does not need to be trained on the object class it is applied to, it can be regarded as a low-cost option for quality control and confidence analysis in the context of crowd-based image annotation.
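The core idea, regressing segmentation quality from clickstream behavior alone, can be sketched with a toy example. The features below (click count, drawing time, inter-click interval) and the synthetic quality signal are hypothetical stand-ins for the paper's actual feature set; in the paper the target quality comes from comparing crowd masks against reference masks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clickstream features per crowd annotation: number of
# clicks, total drawing time (s), and mean inter-click interval (s).
n = 200
n_clicks = rng.integers(5, 120, size=n).astype(float)
draw_time = rng.uniform(2.0, 90.0, size=n)
inter_click = draw_time / n_clicks
X = np.column_stack([np.ones(n), n_clicks, draw_time, inter_click])

# Synthetic "ground-truth" segmentation quality (Dice) for this toy:
# more clicks and more time loosely correlate with a better mask.
dice = np.clip(0.3 + 0.004 * n_clicks + 0.003 * draw_time
               + rng.normal(0.0, 0.05, size=n), 0.0, 1.0)

# Least-squares fit: estimate quality from clickstream features alone.
coef, *_ = np.linalg.lstsq(X, dice, rcond=None)
predicted = X @ coef

# The estimated quality can then down-weight or reject (spam)
# annotations when merging several crowd segmentations of one image.
correlation = np.corrcoef(predicted, dice)[0, 1]
```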
computer assisted radiology and surgery | 2018
Tobias Ross; David Zimmerer; Anant Vemuri; Fabian Isensee; Manuel Wiesenfarth; Sebastian Bodenstedt; Fabian Both; Philip Kessler; Martin Wagner; Beat Müller; Hannes Kenngott; Stefanie Speidel; Annette Kopp-Schneider; Klaus H. Maier-Hein; Lena Maier-Hein
Purpose: Surgical data science is a new research field that aims to observe all aspects of the patient treatment process in order to provide the right assistance at the right time. Due to the breakthrough successes of deep learning-based solutions for automatic image annotation, the availability of reference annotations for algorithm training is becoming a major bottleneck in the field. The purpose of this paper was to investigate the concept of self-supervised learning to address this issue. Methods: Our approach is guided by the hypothesis that unlabeled video data can be used to learn a representation of the target domain that boosts the performance of state-of-the-art machine learning algorithms when used for pre-training. The core of the method is an auxiliary task, based on raw endoscopic video data of the target domain, that is used to initialize the convolutional neural network (CNN) for the target task. In this paper, we propose the re-colorization of medical images with a conditional generative adversarial network (cGAN)-based architecture as the auxiliary task. A variant of the method involves a second pre-training step based on labeled data for the target task from a related domain. We validate both variants using medical instrument segmentation as the target task. Results: The proposed approach can be used to radically reduce the manual annotation effort involved in training CNNs. Compared to the baseline approach of generating annotated data from scratch, our method reduces the number of labeled images by up to 75% in exploratory experiments without sacrificing performance. Our method also outperforms alternative methods for CNN pre-training, such as pre-training on publicly available non-medical data (COCO) or medical data (MICCAI EndoVis2017 challenge) using the target task (in this instance: segmentation). Conclusion: As it makes efficient use of available public and non-public, labeled and unlabeled data, the approach has the potential to become a valuable tool for CNN (pre-)training.
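The appeal of the recolorization auxiliary task is that training pairs come for free from unlabeled video: the raw RGB frame is the target and its grayscale version is the input. A data-preparation sketch under that assumption is shown below; the cGAN architecture itself is not shown, and the function name is illustrative.

```python
import numpy as np

def make_recolorization_pair(rgb_frame):
    """Build an (input, target) pair for the recolorization task.

    The unlabeled endoscopic RGB frame itself is the target; its
    grayscale version (ITU-R BT.601 luma weights) is the network
    input. No manual annotation is needed for this auxiliary task.
    """
    rgb = rgb_frame.astype(np.float32) / 255.0
    gray = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return gray[..., None], rgb  # input (H, W, 1), target (H, W, 3)

# A random stand-in for one endoscopic video frame.
frame = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
x, y = make_recolorization_pair(frame)
```

A network pre-trained on many such pairs would then be fine-tuned on the (much smaller) labeled set for the target task, here instrument segmentation.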
Photons Plus Ultrasound: Imaging and Sensing 2018 | 2018
Dominik Waibel; Janek Gröhl; Fabian Isensee; Thomas Kirchner; Klaus Maier-Hein; Lena Maier-Hein
Quantification of tissue properties with photoacoustic (PA) imaging typically requires a highly accurate representation of the initial pressure distribution in tissue. Almost all PA scanners reconstruct the PA image from only a partial scan of the emitted sound waves. Handheld devices in particular, which have become increasingly popular due to their versatility and ease of use, provide only limited-view data because of their geometry. Owing to such hardware limitations as well as to acoustic attenuation in tissue, state-of-the-art reconstruction methods deliver only approximations of the initial pressure distribution. To overcome the limited-view problem, we present a machine learning-based approach to the reconstruction of initial pressure from limited-view PA data. Our method involves a fully convolutional deep neural network based on a U-Net-like architecture with a pixel-wise regression loss on the acquired PA images. It is trained and validated on in silico data generated with Monte Carlo simulations. In an initial study we found an increase in accuracy over the state of the art when reconstructing simulated linear-array scans of blood vessels.
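The training objective described here is pixel-wise regression: the network maps a limited-view input to the full initial pressure map, penalized per pixel. A minimal sketch of such a loss on toy pressure maps, with illustrative names, is:

```python
import numpy as np

def pixelwise_mse(pred_pressure, true_pressure):
    """Pixel-wise regression loss between a predicted and a reference
    initial-pressure map. A sketch of the kind of objective described
    in the abstract; the U-Net-like network itself is not shown."""
    diff = pred_pressure.astype(np.float64) - true_pressure.astype(np.float64)
    return float(np.mean(diff ** 2))

# Toy maps: the reference would come from a Monte Carlo simulation,
# the prediction from the network given a limited-view reconstruction.
true_map = np.zeros((32, 32))
true_map[10:20, 12:18] = 1.0        # a simulated "vessel" region
pred_map = true_map * 0.9           # slight underestimation everywhere
loss = pixelwise_mse(pred_map, true_map)
```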
arXiv: Computer Vision and Pattern Recognition | 2017
Fabian Isensee; Paul Jaeger; Peter M. Full; Ivo Wolf; Sandy Engelhardt; Klaus H. Maier-Hein
Cardiac magnetic resonance imaging improves the diagnosis of cardiovascular diseases by providing images at high spatiotemporal resolution. Manual evaluation of these time series, however, is expensive and prone to biased and non-reproducible outcomes. In this paper, we present a method that addresses these limitations by integrating segmentation and disease classification into a fully automatic processing pipeline. We use an ensemble of U-Net-inspired architectures for segmentation of cardiac structures such as the left and right ventricular cavity (LVC, RVC) and the left ventricular myocardium (LVM) at each time instance of the cardiac cycle. For the classification task, information is extracted from the segmented time series in the form of comprehensive features handcrafted to reflect diagnostic clinical procedures. Based on these features we train an ensemble of heavily regularized multilayer perceptrons (MLPs) and a random forest classifier to predict the pathologic target class. We evaluated our method on the ACDC dataset (4 pathology groups, 1 healthy group) and achieve Dice scores of 0.945 (LVC), 0.908 (RVC), and 0.905 (LVM) in a cross-validation over the training set (100 cases) and 0.950 (LVC), 0.923 (RVC), and 0.911 (LVM) on the test set (50 cases). We report a classification accuracy of 94% in the training set cross-validation and 92% on the test set. Our results underpin the potential of machine learning methods for accurate, fast, and reproducible segmentation and computer-assisted diagnosis (CAD).
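One example of a feature "handcrafted to reflect diagnostic clinical procedures" is the ejection fraction, computable directly from segmented LV cavity volumes over the cardiac cycle. The sketch below illustrates that single feature; the paper's full feature set is richer, and the numbers are made up.

```python
import numpy as np

def ejection_fraction(lv_volumes_ml):
    """Ejection fraction (%) from LV cavity volumes over one cycle.

    EF = (EDV - ESV) / EDV * 100, where the end-diastolic volume (EDV)
    is the maximum and the end-systolic volume (ESV) the minimum of
    the segmented volume curve.
    """
    edv = float(np.max(lv_volumes_ml))  # end-diastolic volume
    esv = float(np.min(lv_volumes_ml))  # end-systolic volume
    return 100.0 * (edv - esv) / edv

# Toy volume curve (ml) over one cardiac cycle of 10 segmented frames.
volumes = np.array([120, 118, 110, 95, 75, 60, 55, 62, 85, 110], dtype=float)
ef = ejection_fraction(volumes)
```

Features like this, stacked across structures and time points, would then feed the MLP/random-forest ensemble for the diagnosis step.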
Bildverarbeitung für die Medizin | 2018
Paul F. Jäger; Fabian Isensee; Jens Petersen; David Zimmerer; Jakob Wasserthal; Klaus H. Maier-Hein
The remarkable rise of deep learning has led to an overwhelming number of new papers appearing every week. This tutorial intends to filter out the research most relevant to the medical image computing (MIC) community and present it in a structured and understandable form. It is composed of five parts: Classification, Segmentation, Detection, Generative Models, and Semi-Supervised Learning.
Bildverarbeitung für die Medizin | 2018
Dominik Waibel; Janek Gröhl; Fabian Isensee; Klaus Maier-Hein; Lena Maier-Hein
The reconstruction of images from incomplete raw data is a fundamental challenge in medical imaging. This applies in particular to photoacoustics, a novel imaging technique based on the photoacoustic effect, in which sound waves are generated in tissue by the absorption of photons from laser pulses. Owing to the optical contrast of photoacoustics, functional parameters such as blood oxygen saturation can be measured at high resolution and deep within tissue.
IEEE Transactions on Medical Imaging | 2018
Olivier Bernard; Alain Lalande; Clement Zotti; Frederick Cervenansky; Xin Yang; Pheng-Ann Heng; Irem Cetin; Karim Lekadir; Oscar Camara; Miguel Ángel González Ballester; Gerard Sanroma; Sandy Napel; Steffen E. Petersen; Georgios Tziritas; Elias Grinias; Mahendra Khened; Varghese Alex Kollerathu; Ganapathy Krishnamurthi; Marc-Michel Rohé; Xavier Pennec; Maxime Sermesant; Fabian Isensee; Paul F. Jäger; Klaus H. Maier-Hein; Chrisitan F. Baumgartner; Lisa M. Koch; Jelmer M. Wolterink; Ivana Išgum; Yeonggul Jang; Yoonmi Hong
arXiv: Computer Vision and Pattern Recognition | 2018
Fabian Isensee; Philipp Kickingereder; Wolfgang Wick; Martin Bendszus; Klaus H. Maier-Hein