Publication


Featured research published by Michael Wels.


Medical Image Analysis | 2013

Spine detection in CT and MR using iterated marginal space learning

B. Michael Kelm; Michael Wels; S. Kevin Zhou; Sascha Seifert; Michael Suehling; Yefeng Zheng; Dorin Comaniciu

Examinations of the spinal column with both Magnetic Resonance (MR) imaging and Computed Tomography (CT) often require a precise three-dimensional positioning, angulation, and labeling of the spinal disks and the vertebrae. A fully automatic and robust approach is a prerequisite for an automated scan alignment as well as for the segmentation and analysis of spinal disks and vertebral bodies in Computer Aided Diagnosis (CAD) applications. In this article, we present a novel method that combines Marginal Space Learning (MSL), a recently introduced concept for efficient discriminative object detection, with a generative anatomical network that incorporates relative pose information for the detection of multiple objects. It is used to simultaneously detect and label the spinal disks. While a novel iterative version of MSL is used to quickly generate candidate detections comprising position, orientation, and scale of the disks with high sensitivity, the anatomical network selects the most likely candidates using a learned prior on the individual nine-dimensional transformation spaces. Finally, we propose an optional case-adaptive segmentation approach that segments the spinal disks and vertebrae in MR and CT, respectively. Since the proposed approaches are learning-based, they can be trained for MR or CT alike. Experimental results based on 42 MR and 30 CT volumes show that our system not only achieves superior accuracy but also is among the fastest systems of its kind in the literature. On the MR data set, the spinal disks of a whole spine are detected in 11.5 s on average with 98.6% sensitivity and 0.073 false positive detections per volume. On the CT data, a comparable sensitivity of 98.0% with 0.267 false positives is achieved. Detected disks are localized with an average position error of 2.4 mm/3.2 mm and angular error of 3.9°/4.5° in MR/CT, which is close to the employed hypothesis resolution of 2.1 mm and 3.3°.
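
The core marginal-space idea, searching first for position, then position plus orientation, then the full nine-dimensional similarity transform, can be sketched as a cascade that prunes a hypothesis list after each stage. The snippet below is a minimal illustration only, not the published implementation: `position_clf`, `orientation_clf`, and `scale_clf` are hypothetical classifier objects exposing a `score` method, and the iterated variant described in the paper would rerun such a search to produce candidates for every disk along the spine.

```python
import numpy as np

def sample_orientations(n):
    """Toy orientation sampling: n tilts about one axis (Euler angles)."""
    return [(a, 0.0, 0.0) for a in np.linspace(-np.pi / 6, np.pi / 6, n)]

def sample_scales(n):
    """Toy isotropic scale sampling (millimeters)."""
    return [(s, s, s) for s in np.linspace(10.0, 40.0, n)]

def msl_disk_candidates(volume, position_clf, orientation_clf, scale_clf,
                        step=4, n_keep=100):
    """Marginal-space search for one disk: position (3-D), then position +
    orientation (6-D), then the full 9-D pose.  Each stage scores the
    surviving hypotheses and keeps only the best n_keep."""
    # Stage 1: coarse position grid, scored by the position classifier.
    grid = np.stack(np.meshgrid(*[np.arange(0, s, step) for s in volume.shape],
                                indexing="ij"), axis=-1).reshape(-1, 3)
    scores = np.array([position_clf.score(volume, p) for p in grid])
    positions = grid[np.argsort(scores)[-n_keep:]]

    # Stage 2: augment surviving positions with candidate orientations.
    cands = [(p, o) for p in positions for o in sample_orientations(16)]
    scores = np.array([orientation_clf.score(volume, p, o) for p, o in cands])
    cands = [cands[i] for i in np.argsort(scores)[-n_keep:]]

    # Stage 3: augment with candidate scales and return the full poses.
    cands = [(p, o, s) for p, o in cands for s in sample_scales(8)]
    scores = np.array([scale_clf.score(volume, p, o, s) for p, o, s in cands])
    order = np.argsort(scores)[-n_keep:]
    return [cands[i] for i in order], scores[order]
```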


Medical Image Computing and Computer-Assisted Intervention | 2008

A Discriminative Model-Constrained Graph Cuts Approach to Fully Automated Pediatric Brain Tumor Segmentation in 3-D MRI

Michael Wels; Gustavo Carneiro; Alexander Aplas; Martin Huber; Joachim Hornegger; Dorin Comaniciu

In this paper, we present a fully automated approach to the segmentation of pediatric brain tumors in multi-spectral 3-D magnetic resonance images. It is a top-down segmentation approach based on a Markov random field (MRF) model that combines probabilistic boosting trees (PBT) and lower-level segmentation via graph cuts. The PBT algorithm provides a strong discriminative observation model that classifies tumor appearance, while a spatial prior takes into account the pair-wise homogeneity in terms of classification labels and multi-spectral voxel intensities. The discriminative model relies not only on observed local intensities but also on surrounding context for detecting candidate regions for pathology. A mathematically sound formulation for integrating the two approaches into a unified statistical framework is given. The proposed method is applied to the challenging task of detection and delineation of pediatric brain tumors. This segmentation task is characterized by a high non-uniformity of both the pathology and the surrounding non-pathologic brain tissue. A quantitative evaluation illustrates the robustness of the proposed method. Despite dealing with more complicated cases of pediatric brain tumors, the results obtained are mostly better than those reported for current state-of-the-art approaches to 3-D MR brain tumor segmentation in adult patients. The entire processing of one multi-spectral data set does not require any user interaction, and takes less time than previously proposed methods.
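
The combination of a discriminative observation model with a pairwise homogeneity prior can be written as a binary MRF energy. The sketch below assumes a voxel-wise tumor probability map (standing in for the PBT output) and minimizes the energy with a simple parallel ICM-style update; the paper itself computes the exact binary MAP solution with graph cuts.

```python
import numpy as np

def _fg_neighbor_count(labels):
    """Count 6-connected foreground neighbors (wrap-around at the borders,
    which is acceptable for a sketch)."""
    fg = labels.astype(float)
    count = np.zeros_like(fg)
    for ax in range(labels.ndim):
        for shift in (1, -1):
            count += np.roll(fg, shift, axis=ax)
    return count

def mrf_segment(prob_tumor, beta=1.0, n_iter=10):
    """Binary MRF labeling with a discriminative (PBT-style) unary term and a
    Potts homogeneity prior; a parallel ICM-style update stands in for the
    graph-cut solver used in the paper."""
    eps = 1e-6
    u_fg = -np.log(prob_tumor + eps)        # cost of labeling a voxel "tumor"
    u_bg = -np.log(1.0 - prob_tumor + eps)  # cost of labeling it "background"
    labels = prob_tumor > 0.5               # initialize from the unary term alone
    n_neighbors = 2 * prob_tumor.ndim
    for _ in range(n_iter):
        k_fg = _fg_neighbor_count(labels)
        cost_fg = u_fg + beta * (n_neighbors - k_fg)  # disagreement with neighbors
        cost_bg = u_bg + beta * k_fg
        labels = cost_fg < cost_bg
    return labels
```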


Computer Vision and Pattern Recognition | 2006

Ultrasound-Specific Segmentation via Decorrelation and Statistical Region-Based Active Contours

Gregory G. Slabaugh; Gozde Unal; Tong Fang; Michael Wels

Segmentation of ultrasound images is often a very challenging task due to speckle noise that contaminates the image. It is well known that speckle noise exhibits an asymmetric distribution as well as significant spatial correlation. Since these attributes can be difficult to model, many previous ultrasound segmentation methods oversimplify the problem by assuming that the noise is white and/or Gaussian, resulting in generic approaches that are actually more suitable to MR and X-ray segmentation than ultrasound. Unlike these methods, in this paper we present an ultrasound-specific segmentation approach that first decorrelates the image, and then performs segmentation on the whitened result using statistical region-based active contours. In particular, we design a gradient ascent flow that evolves the active contours to maximize a log likelihood functional based on the Fisher-Tippett distribution. We present experimental results that demonstrate the effectiveness of our method.
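
For log-compressed Rayleigh speckle, the intensity x = ln(envelope) follows a Fisher-Tippett (doubly exponential) law with log-density ln p(x) = 2x - ln(s/2) - exp(2x)/s, where s = 2*sigma^2 has the closed-form maximum-likelihood estimate s = mean(exp(2x)). The sketch below is a simplification of the region term rather than the paper's exact functional: it scores how much better a pixel is explained by the inside region statistics than by the outside ones, which is the kind of quantity a region-based active contour flow ascends.

```python
import numpy as np

def ft_param(region_pixels):
    """ML estimate of the Fisher-Tippett scale s = 2*sigma^2 for a region of
    log-compressed ultrasound intensities x = ln(envelope)."""
    return np.mean(np.exp(2.0 * region_pixels))

def ft_loglik(x, s):
    """Pointwise log-likelihood ln p(x) = 2x - ln(s/2) - exp(2x)/s."""
    return 2.0 * x - np.log(0.5 * s) - np.exp(2.0 * x) / s

def region_competition_speed(x, inside_mask):
    """Positive where a pixel is better explained by the inside statistics
    than by the outside statistics; usable as a region term in a contour flow."""
    s_in = ft_param(x[inside_mask])
    s_out = ft_param(x[~inside_mask])
    return ft_loglik(x, s_in) - ft_loglik(x, s_out)
```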


Medical Image Computing and Computer-Assisted Intervention | 2010

Detection of 3D spinal geometry using iterated marginal space learning

B. Michael Kelm; S. Kevin Zhou; Michael Suehling; Yefeng Zheng; Michael Wels; Dorin Comaniciu

Determining spinal geometry, and in particular the position and orientation of the intervertebral disks, is an integral part of nearly every spinal examination with Computed Tomography (CT) and Magnetic Resonance (MR) imaging. It is particularly important for the standardized alignment of the scan geometry with the spine. In this paper, we present a novel method that combines Marginal Space Learning (MSL), a recently introduced concept for efficient discriminative object detection, with a generative anatomical network that incorporates relative pose information for the detection of multiple objects. It is used to simultaneously detect and label the intervertebral disks in a given spinal image volume. While a novel iterative version of MSL is used to quickly generate candidate detections comprising position, orientation, and scale of the disks with high sensitivity, the anatomical network selects the most likely candidates using a learned prior on the individual nine-dimensional transformation spaces. Since the proposed approach is learning-based, it can be trained for MR or CT alike. Experimental results based on 42 MR volumes show that our system not only achieves superior accuracy but also is the fastest system of its kind in the literature: on average, the spinal disks of a whole spine are detected in 11.5 s with 98.6% sensitivity and 0.073 false positive detections per volume. An average position error of 2.4 mm and angular error of 3.9° are achieved.
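
The role of the anatomical network, selecting one candidate per disk so that neighboring disks have plausible relative poses, can be approximated by a chain model solved with dynamic programming. This is a simplified stand-in for the generative network described in the paper: `scores` and `pair_logprior` below are hypothetical inputs holding per-candidate detection scores and a learned log-prior on the relative pose of adjacent disks, and all disks are assumed to have the same number of candidates for brevity.

```python
import numpy as np

def select_candidates(scores, pair_logprior):
    """Pick one detection candidate per disk along the spine.

    scores[i] is a length-K array of detection scores (log-likelihoods) for
    the K candidates of disk i.  pair_logprior(i, a, b) is the log prior of
    candidate b for disk i+1 given candidate a for disk i (e.g. a Gaussian on
    the relative 9-D pose).  A Viterbi pass over the chain returns the most
    likely joint assignment.
    """
    n, K = len(scores), len(scores[0])
    best = np.array(scores[0], dtype=float)   # best cumulative score per candidate
    back = np.zeros((n, K), dtype=int)        # backpointers for the optimal path
    for i in range(1, n):
        trans = np.array([[pair_logprior(i - 1, a, b) for b in range(K)]
                          for a in range(K)])
        total = best[:, None] + trans                     # K x K combined scores
        back[i] = np.argmax(total, axis=0)
        best = total[back[i], np.arange(K)] + np.array(scores[i])
    path = [int(np.argmax(best))]
    for i in range(n - 1, 0, -1):                         # backtrack
        path.append(int(back[i][path[-1]]))
    return path[::-1]
```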


Proceedings of SPIE | 2012

Multi-stage osteolytic spinal bone lesion detection from CT data with internal sensitivity control

Michael Wels; B. M. Kelm; Alexey Tsymbal; Matthias Hammon; Grzegorz Soza; Michael Sühling; Alexander Cavallaro; Dorin Comaniciu

Spinal bone lesion detection is a challenging and important task in cancer diagnosis and treatment monitoring. In this paper, we present a method for fully automatic osteolytic spinal bone lesion detection from 3D CT data. It is a multi-stage approach that successively applies multiple discriminative models, i.e., multiple random forests, for lesion candidate detection and rejection to an input volume. For each detection stage, an internal control mechanism ensures that sensitivity on unseen true positive lesion candidates is maintained during training. This way, a pre-defined target sensitivity score of the overall system can be taken into account at the time of model generation. For each lesion, not only the center is detected but also, during post-processing, its spatial extent along the three axes defined by the surrounding vertebral body's local coordinate system. Our method achieves a cross-validated sensitivity score of 75% and a mean false positive rate of 3.0 per volume on a data collection consisting of 34 patients with 105 osteolytic spinal bone lesions. The median sensitivity score is 86% at 2.0 false positives per volume.
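
A minimal sketch of the sensitivity-controlled cascade idea, using scikit-learn random forests as stand-ins for the stage detectors: after each stage the rejection threshold is set from the score quantile of the true lesions so that a chosen per-stage sensitivity is (approximately) preserved. For brevity the threshold is derived from the training positives here, whereas the paper derives it from unseen true positive candidates.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_cascade(stages_features, labels, target_sensitivity=0.95):
    """Train a rejection cascade of random forests.

    stages_features: list of (n_samples, n_features) arrays, one feature set
    per stage.  labels: (n_samples,) array of {0, 1} lesion labels.  After
    each stage, the rejection threshold is the score quantile of the true
    lesions that keeps roughly `target_sensitivity` of them; the product over
    stages lower-bounds the overall (training-set) sensitivity.
    """
    keep = np.ones(len(labels), dtype=bool)        # candidates still in play
    cascade = []
    for X in stages_features:
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[keep], labels[keep])
        scores = clf.predict_proba(X)[:, 1]
        pos_scores = scores[keep & (labels == 1)]
        thr = np.quantile(pos_scores, 1.0 - target_sensitivity)
        cascade.append((clf, thr))
        keep &= scores >= thr                      # reject everything below threshold
    return cascade
```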


Physics in Medicine and Biology | 2011

A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

Michael Wels; Yefeng Zheng; Martin Huber; Joachim Hornegger; Dorin Comaniciu

We describe a fully automated method for tissue classification, i.e., the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebrospinal fluid (CSF), and for intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly, and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not use the observed intensities directly but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average Dice coefficients of 0.93 ± 0.03 (WM) and 0.90 ± 0.05 (GM) on simulated mono-spectral and 0.94 ± 0.02 (WM) and 0.92 ± 0.04 (GM) on simulated multi-spectral data from the BrainWeb repository. The scores are 0.81 ± 0.09 (WM) and 0.82 ± 0.06 (GM) and 0.87 ± 0.05 (WM) and 0.83 ± 0.12 (GM) for the two collections of real-world data sets (consisting of 20 and 18 volumes, respectively) provided by the Internet Brain Segmentation Repository.
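
The interplay between the discriminative model and the EM segmentation can be illustrated with a stripped-down version in which the per-voxel class probabilities of a classifier multiply Gaussian intensity likelihoods inside the E-step. The MRF clique potentials and the non-parametric INU field of the full method are deliberately omitted; this is an illustrative simplification, not the published algorithm.

```python
import numpy as np

def em_tissue_classification(intensities, disc_prior, n_iter=20):
    """Toy EM for CSF/GM/WM labeling of MRI intensities.

    intensities: (n_voxels,) observed values.  disc_prior: (n_voxels, 3)
    per-voxel class probabilities from a discriminative model (e.g. a PBT),
    acting as a spatially varying prior that multiplies the Gaussian
    likelihood in the E-step.
    """
    # Initialize class means/variances from quantiles of the intensity histogram.
    q = np.quantile(intensities, [0.2, 0.5, 0.8])
    mu, var = q.copy(), np.full(3, intensities.var() / 4.0)
    for _ in range(n_iter):
        # E-step: responsibilities proportional to prior * Gaussian likelihood.
        lik = (np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var)
               / np.sqrt(2.0 * np.pi * var))
        resp = disc_prior * lik
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: update the Gaussian parameters from the soft assignments.
        weight = resp.sum(axis=0) + 1e-12
        mu = (resp * intensities[:, None]).sum(axis=0) / weight
        var = (resp * (intensities[:, None] - mu) ** 2).sum(axis=0) / weight
        var = np.maximum(var, 1e-6)
    return resp.argmax(axis=1), mu, var
```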


Medical Image Computing and Computer-Assisted Intervention | 2009

Fast and Robust 3-D MRI Brain Structure Segmentation

Michael Wels; Yefeng Zheng; Gustavo Carneiro; Martin Huber; Joachim Hornegger; Dorin Comaniciu

We present a novel method for the automatic detection and segmentation of (sub-)cortical gray matter structures in 3-D magnetic resonance images of the human brain. Essentially, the method is a top-down segmentation approach based on the recently introduced concept of Marginal Space Learning (MSL). We show that MSL naturally decomposes the parameter space of anatomy shapes along decreasing levels of geometrical abstraction into subspaces of increasing dimensionality by exploiting parameter invariance. At each level of abstraction, i.e., in each subspace, we build strong discriminative models from annotated training data, and use these models to narrow the range of possible solutions until a final shape can be inferred. Contextual information is introduced into the system by representing candidate shape parameters with high-dimensional vectors of 3-D generalized Haar features and steerable features derived from the observed volume intensities. Our system allows us to detect and segment 8 (sub-)cortical gray matter structures in T1-weighted 3-D MR brain scans from a variety of different scanners in 13.9 s on average, which is faster than most of the approaches in the literature. In order to ensure comparability of the achieved results and to validate robustness, we evaluate our method on two publicly available gold standard databases consisting of several T1-weighted 3-D brain MR scans from different scanners and sites. The proposed method achieves an accuracy better than most state-of-the-art approaches using standardized distance and overlap metrics.
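
The 3-D generalized Haar-like features mentioned above are built from box sums that an integral volume makes available in constant time. The sketch below shows the integral volume, the inclusion-exclusion box sum, and one example two-box contrast feature; the actual feature layouts used by the system are selected during training and are not reproduced here.

```python
import numpy as np

def integral_volume(vol):
    """Summed volume table: cumulative sums along each axis, zero-padded so
    that box sums can be looked up with eight corner accesses."""
    iv = vol.cumsum(0).cumsum(1).cumsum(2)
    return np.pad(iv, ((1, 0), (1, 0), (1, 0)))

def box_sum(iv, lo, hi):
    """Sum of vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] in O(1) via
    inclusion-exclusion on the integral volume."""
    (x0, y0, z0), (x1, y1, z1) = lo, hi
    return (iv[x1, y1, z1] - iv[x0, y1, z1] - iv[x1, y0, z1] - iv[x1, y1, z0]
            + iv[x0, y0, z1] + iv[x0, y1, z0] + iv[x1, y0, z0] - iv[x0, y0, z0])

def haar_contrast(iv, center, half):
    """One Haar-like feature: contrast of two boxes stacked along z around
    `center`; the caller must keep both boxes inside the volume."""
    cx, cy, cz = center
    h = half
    upper = box_sum(iv, (cx - h, cy - h, cz - h), (cx + h, cy + h, cz))
    lower = box_sum(iv, (cx - h, cy - h, cz), (cx + h, cy + h, cz + h))
    return upper - lower
```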


IEEE Transactions on Medical Imaging | 2013

Image-based Co-Registration of Angiography and Intravascular Ultrasound Images

Peng Wang; Olivier Ecabert; Terrence Chen; Michael Wels; Johannes Rieber; Martin Ostermeier; Dorin Comaniciu

In image-guided cardiac interventions, X-ray imaging and intravascular ultrasound (IVUS) imaging are two often used modalities. Interventional X-ray images, including angiography and fluoroscopy, are used to assess the lumen of the coronary arteries and to monitor devices in real time. IVUS provides rich intravascular information, such as vessel wall composition, plaque, and stent expansions, but lacks spatial orientations. Since the two imaging modalities are complementary to each other, it is highly desirable to co-register the two modalities to provide a comprehensive picture of the coronaries for interventional cardiologists. In this paper, we present a solution for co-registering 2-D angiography and IVUS through image-based device tracking. The presented framework includes learning-based vessel detection and device detections, model-based tracking, and geodesic distance-based registration. The system first interactively detects the coronary branch under investigation in a reference angiography image. During the pullback of the IVUS transducers, the system acquires both ECG-triggered fluoroscopy and IVUS images, and automatically tracks the position of the medical devices in fluoroscopy. The localization of tracked IVUS transducers and guiding catheter tips is used to associate an IVUS imaging plane to a corresponding location on the vessel branch under investigation. The presented image-based solution can be conveniently integrated into existing cardiology workflow. The system is validated with a set of clinical cases, and achieves good accuracy and robustness.
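
Conceptually, the geodesic distance-based registration amounts to mapping each tracked transducer position onto an arc-length coordinate along the detected vessel centerline, so that every ECG-triggered fluoroscopy frame (and hence the simultaneously acquired IVUS frame) is assigned a location on the branch. The helper below is a rough illustration of that mapping for a 2-D polyline centerline, not the paper's implementation.

```python
import numpy as np

def arclength_position(centerline, point):
    """Project a tracked device location onto a polyline vessel centerline and
    return its geodesic (arc-length) distance from the proximal end.

    centerline: (N, 2) ordered 2-D points, proximal to distal.
    point: (2,) tracked transducer position in the same image coordinates.
    """
    seg_start, seg_end = centerline[:-1], centerline[1:]
    seg_vec = seg_end - seg_start
    seg_len = np.linalg.norm(seg_vec, axis=1)
    # Parameter of the closest point on each segment, clamped to [0, 1].
    t = np.einsum("ij,ij->i", point - seg_start, seg_vec) / np.maximum(seg_len ** 2, 1e-12)
    t = np.clip(t, 0.0, 1.0)
    foot = seg_start + t[:, None] * seg_vec
    d = np.linalg.norm(point - foot, axis=1)
    i = int(np.argmin(d))                        # closest centerline segment
    cumulative = np.concatenate(([0.0], np.cumsum(seg_len)))
    return cumulative[i] + t[i] * seg_len[i]     # geodesic distance along the vessel
```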


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

Fast and robust 3D vertebra segmentation using statistical shape models

Hengameh Mirzaalian; Michael Wels; Tobias Heimann; B. Michael Kelm; Michael Suehling

We propose a top-down fully automatic 3D vertebra segmentation algorithm using global shape-related as well as local appearance-related prior information. The former is brought into the system by a global statistical shape model built from annotated training data, i.e., annotated CT volumes. The latter is handled by a machine learning-based component, i.e., a boundary detector, providing a strong discriminative model for vertebra surface appearance by making use of local context-encoding features. This boundary detector, which is essentially a probabilistic boosting-tree classifier, is also learnt from annotated training data. Contextual information is taken into account by representing vertebra surface candidate voxels with high-dimensional vectors of 3D steerable features derived from the observed volume intensities. Our system considers not only the body of each individual vertebra but also the spinal processes. Before segmentation, the image parts depicting individual vertebrae are spatially normalized with respect to their bounding box information in terms of translation, orientation, and scale, leading to more accurate results. We evaluate segmentation accuracy on 7 CT volumes, each depicting 22 vertebrae. The results indicate a symmetric point-to-mesh surface error of 1.37 ± 0.37 mm, which matches the current state of the art.
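
A point-distribution-style statistical shape model of the kind referenced above can be built with PCA over pre-aligned training meshes and then used to keep a candidate surface within the learned shape space. The sketch below is a generic illustration under that assumption; the actual model construction and the boundary-detector-driven surface evolution of the paper are more involved.

```python
import numpy as np

def build_shape_model(training_shapes, n_modes=10):
    """training_shapes: (n_shapes, n_points*3) pre-aligned vertex coordinates.
    Returns the mean shape, the leading PCA modes, and their standard deviations."""
    mean = training_shapes.mean(axis=0)
    centered = training_shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]
    stddev = s[:n_modes] / np.sqrt(max(len(training_shapes) - 1, 1))
    return mean, modes, stddev

def constrain_shape(shape, mean, modes, stddev, n_sigma=3.0):
    """Project a candidate surface onto the model and clamp each mode weight
    to +/- n_sigma standard deviations, keeping the result plausible."""
    b = modes @ (shape - mean)
    b = np.clip(b, -n_sigma * stddev, n_sigma * stddev)
    return mean + modes.T @ b
```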


Medical Image Computing and Computer-Assisted Intervention | 2014

Estimating a Patient Surface Model for Optimizing the Medical Scanning Workflow

Vivek Kumar Singh; Yao-Jen Chang; Kai Ma; Michael Wels; Grzegorz Soza; Terrence Chen

In this paper, we present the idea of equipping a tomographic medical scanner with a range imaging device (e.g. a 3D camera) to improve the current scanning workflow. A novel technical approach is proposed to robustly estimate patient surface geometry by a single snapshot from the camera. Leveraging the information of the patient surface geometry can provide significant clinical benefits, including automation of the scan, motion compensation for better image quality, sanity check of patient movement, augmented reality for guidance, patient specific dose optimization, and more. Our approach overcomes the technical difficulties resulting from suboptimal camera placement due to practical considerations. Experimental results on more than 30 patients from a real CT scanner demonstrate the robustness of our approach.
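
A basic ingredient of such a system is back-projecting the single depth snapshot into a 3-D point cloud in the scanner or table coordinate frame, from which quantities like the scan range can be read off. The snippet below shows only this generic step with a pinhole camera model; the camera intrinsics and the camera-to-table transform are placeholders, and the learning-based surface model estimation of the paper is not reproduced.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, cam_to_table=np.eye(4)):
    """Back-project a depth image (meters) into 3-D points and express them in
    the table/scanner coordinate frame via a rigid transform."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts = pts[depth.reshape(-1) > 0]             # drop invalid (zero) depth readings
    return (cam_to_table @ pts.T).T[:, :3]

def surface_extent(points):
    """Axis-aligned extent of the reconstructed patient surface, a crude proxy
    for the scan-range information used to automate positioning."""
    return points.min(axis=0), points.max(axis=0)
```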
