Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Florin C. Ghesu is active.

Publication


Featured research published by Florin C. Ghesu.


IEEE Transactions on Medical Imaging | 2016

Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing

Florin C. Ghesu; Edward Krubasik; Bogdan Georgescu; Vivek Kumar Singh; Yefeng Zheng; Joachim Hornegger; Dorin Comaniciu

Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow, from diagnosis and patient stratification to therapy planning, intervention, and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: the efficiency of scanning high-dimensional parametric spaces and the need for representative image features, which otherwise require significant manual engineering. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, nine parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive number of scanning hypotheses, on the order of billions. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, our system learns sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary.
Experimental results are presented on the aortic valve in ultrasound, using an extensive dataset of 2891 volumes from 869 patients and showing significant improvements of up to 45.2% over the state of the art. To our knowledge, this is the first successful demonstration of the potential of DL for detection and segmentation in full 3D data with parametrized representations.
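The hierarchical marginal-space idea can be illustrated with a toy sketch (our own illustration, not the authors' code): a classifier prunes each marginal space to a few high-probability candidates before the parametrization is extended, so only a small fraction of hypotheses ever reach the full pose space. The `score` function here is a hypothetical stand-in for the learned classifiers.

```python
def marginal_space_search(score, translations, orientations, scales, top_k=2):
    """Toy marginal space search: prune each marginal space in turn,
    keeping only the top-k candidates before adding dimensions, instead
    of scoring the full translation x orientation x scale grid."""
    # Stage 1: position only.
    stage1 = sorted(translations, key=lambda t: score((t,)), reverse=True)[:top_k]
    # Stage 2: position + orientation, seeded from stage-1 survivors.
    stage2 = sorted(((t, r) for t in stage1 for r in orientations),
                    key=score, reverse=True)[:top_k]
    # Stage 3: full pose (position + orientation + scale).
    stage3 = sorted(((t, r, s) for (t, r) in stage2 for s in scales),
                    key=score, reverse=True)[:top_k]
    return stage3[0]

# Toy usage: the "classifier" scores a hypothesis by closeness to a
# hypothetical ground-truth pose (5, 90, 2.0).
target = (5, 90, 2.0)
score = lambda h: -sum(abs(a - b) for a, b in zip(h, target))
best = marginal_space_search(score, translations=range(10),
                             orientations=[0, 45, 90, 135],
                             scales=[1.0, 2.0, 3.0])
```

With `top_k=2`, stage 2 scores only 8 hypotheses and stage 3 only 6, rather than the 120 of an exhaustive grid; in the 9-D setting of the paper this pruning is what makes the search tractable.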


Medical Image Computing and Computer-Assisted Intervention | 2016

An Artificial Agent for Anatomical Landmark Detection in Medical Images

Florin C. Ghesu; Bogdan Georgescu; Tommaso Mansi; Dominik Neumann; Joachim Hornegger; Dorin Comaniciu

Fast and robust detection of anatomical structures or pathologies is a fundamental task in medical image analysis. Most current solutions are, however, suboptimal and unconstrained: they learn an appearance model and exhaustively scan the space of parameters to detect a specific anatomical structure. In addition, typical feature computation, as well as the estimation of meta-parameters related to the appearance model or the search strategy, is based on local criteria or predefined approximation schemes. We propose a new learning method following a fundamentally different paradigm, simultaneously modeling both the object appearance and the parameter search strategy as a unified behavioral task for an artificial agent. The method combines the advantages of behavior learning, achieved through reinforcement learning, with effective hierarchical feature extraction, achieved through deep learning. We show that, given only a sequence of annotated images, the agent can automatically and strategically learn optimal paths that converge to the sought anatomical landmark location, as opposed to exhaustively scanning the entire solution space. The method significantly outperforms state-of-the-art machine learning and deep learning approaches in both accuracy and speed on 2D magnetic resonance, 2D ultrasound, and 3D CT images, achieving average detection errors of 1-2 pixels while also recognizing the absence of an object from the image.
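The behavior-learning idea can be sketched on a toy problem. The paper trains a deep Q-network on image patches; as a hedged stand-in we use tabular Q-value iteration on a 1-D "scanline", with a hypothetical stop action rewarded only at the annotated landmark, to show a learned navigation policy replacing exhaustive scanning.

```python
def learn_navigation(landmark, size=16, gamma=0.9, sweeps=50):
    """Tabular Q-value iteration as a stand-in for the paper's deep
    Q-learning: states are positions on a 1-D scanline, actions are
    move left (-1), stop (0), move right (+1); stopping is rewarded
    only at the annotated landmark position."""
    actions = (-1, 0, 1)
    q = {(s, a): 0.0 for s in range(size) for a in actions}
    for _ in range(sweeps):
        for s in range(size):
            q[(s, 0)] = 1.0 if s == landmark else -1.0      # terminal stop
            for a in (-1, 1):
                s2 = min(max(s + a, 0), size - 1)
                r = abs(s - landmark) - abs(s2 - landmark)  # reward: moved closer?
                q[(s, a)] = r + gamma * max(q[(s2, b)] for b in actions)
    return q

def find_landmark(q, start, size=16):
    """At test time, follow the greedy policy instead of scanning
    every position; the agent walks to the landmark and stops."""
    s = start
    for _ in range(2 * size):
        a = max((-1, 0, 1), key=lambda a: q[(s, a)])
        if a == 0:
            return s
        s = min(max(s + a, 0), size - 1)
    return s
```

Starting from any position, the greedy rollout visits at most `size` states rather than all of them; the same argument, scaled to volumetric search spaces, is the source of the paper's speed-up.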


Medical Image Computing and Computer-Assisted Intervention | 2016

Deep Learning Computed Tomography

Tobias Würfl; Florin C. Ghesu; Vincent Christlein; Andreas K. Maier

In this paper, we demonstrate that image reconstruction can be expressed in terms of neural networks. We show that filtered back-projection can be mapped identically onto a deep neural network architecture. As in the case of iterative reconstruction, the straightforward realization as a matrix multiplication is not feasible. Thus, we propose to compute the back-projection layer efficiently as a fixed function and its gradient as a projection operation. This allows a data-driven approach for the joint optimization of correction steps in the projection domain and the image domain. As a proof of concept, we demonstrate that we are able to learn weightings and additional filter layers that consistently reduce the reconstruction error of a limited-angle reconstruction by a factor of two, while keeping the same computational complexity as filtered back-projection. We believe that this kind of learning approach can be extended to any common CT artifact compensation heuristic and will outperform hand-crafted artifact correction methods in the future.
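The key structural point, that the gradient of the back-projection layer is the projection operation, follows because back-projection is a linear operator whose adjoint is the forward projection. This can be checked numerically on a toy two-view parallel-beam projector (our own illustration, not the authors' implementation):

```python
import numpy as np

def project(img):
    """Toy forward projection: line integrals at 0 and 90 degrees
    (simply the column sums and row sums of the image)."""
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def backproject(sino, n):
    """Back-projection layer: smear each 1-D projection back across
    the n x n image along its viewing direction."""
    p0, p90 = sino[:n], sino[n:]
    return np.tile(p0, (n, 1)) + np.tile(p90[:, None], (1, n))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
y = rng.standard_normal(8)
# Adjoint identity <P x, y> == <x, B y>: back-propagating through the
# fixed back-projection layer applies the projection operator.
assert np.isclose(project(x) @ y, np.sum(x * backproject(y, 4)))
```

Because the pair satisfies this identity, the layer can sit inside a network and be trained through with standard automatic differentiation, which is what enables the paper's joint optimization of projection-domain and image-domain corrections.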


Medical Image Computing and Computer-Assisted Intervention | 2017

Robust non-rigid registration through agent-based action learning

Julian Krebs; Tommaso Mansi; Hervé Delingette; Li Zhang; Florin C. Ghesu; Shun Miao; Andreas K. Maier; Nicholas Ayache; Rui Liao; Ali Kamen

Robust image registration in medical imaging is essential for the comparison or fusion of images acquired from various perspectives, in different modalities, or at different times. Typically, an objective function is minimized, assuming specific a priori deformation models and predefined or learned similarity measures. However, these approaches have difficulty coping with large deformations or large variability in appearance. Using modern deep learning (DL) methods with automated feature design, these limitations could be resolved by learning the intrinsic mapping solely from experience. We investigate in this paper how DL can help organ-specific (ROI-specific) deformable registration, for instance to solve motion compensation or atlas-based segmentation problems in prostate diagnosis. An artificial agent is trained to solve the task of non-rigid registration by exploring the parametric space of a statistical deformation model built from training data. Since it is difficult to extract trustworthy ground-truth deformation fields, we present a training scheme based on a large number of synthetically deformed image pairs, requiring only a small number of real inter-subject pairs. Our approach was tested on inter-subject registration of prostate MR data and reached a median DICE score of 0.88 in 2D and 0.76 in 3D, showing improved results compared to state-of-the-art registration algorithms.
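A statistical deformation model of the kind the agent explores can be sketched as PCA on training deformation fields (an assumption about the construction, hedged as our own minimal illustration): the agent then acts on a handful of mode coefficients rather than a dense displacement field.

```python
import numpy as np

def build_deformation_model(fields, n_modes=2):
    """Sketch of a statistical deformation model: PCA (via SVD) on
    flattened training deformation fields. Returns the mean field and
    the leading principal modes of variation."""
    X = fields.reshape(len(fields), -1)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_modes]

def synthesize(mean, modes, coeffs, shape):
    """Map a point in the model's low-dimensional parametric space
    back to a dense deformation field."""
    return (mean + coeffs @ modes).reshape(shape)
```

Exploring a low-dimensional coefficient space, rather than a free-form field, is also what keeps the agent's action space small enough for reinforcement learning.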


Medical Image Computing and Computer-Assisted Intervention | 2017

Robust Multi-scale Anatomical Landmark Detection in Incomplete 3D-CT Data

Florin C. Ghesu; Bogdan Georgescu; Sasa Grbic; Andreas K. Maier; Joachim Hornegger; Dorin Comaniciu

Robust and fast detection of anatomical structures is an essential prerequisite for next-generation automated medical support tools. While machine learning techniques are most often applied to address this problem, the traditional object search scheme is typically driven by suboptimal and exhaustive strategies. Most importantly, these techniques do not effectively address cases of incomplete data, i.e., scans taken with a partial field-of-view. To address these limitations, we present a solution that unifies the anatomy appearance model and the search strategy by formulating a behavior-learning task. This is solved using the capabilities of deep reinforcement learning with multi-scale image analysis and robust statistical shape modeling. Using these mechanisms, artificial agents are taught optimal navigation paths in the image scale-space that can account for missing structures, ensuring the robust and spatially coherent detection of the observed anatomical landmarks. The identified landmarks are then used as robust guidance in estimating the extent of the body region. Experiments show that our solution outperforms a state-of-the-art deep learning method in detecting different anatomical structures, without any failure, on a dataset of over 2300 3D-CT volumes. In particular, we achieve 0% false-positive and 0% false-negative rates at detecting the landmarks or recognizing their absence from the field-of-view of the scan. In terms of runtime, we reduce the detection time of the reference method by 15-20 times to under 40 ms, an unmatched performance in the literature for high-resolution 3D-CT.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017

Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans

Florin C. Ghesu; Bogdan Georgescu; Yefeng Zheng; Sasa Grbic; Andreas K. Maier; Joachim Hornegger; Dorin Comaniciu

Robust and fast detection of anatomical structures is a prerequisite for both diagnostic and interventional medical image analysis. Current solutions for anatomy detection are typically based on machine learning techniques that exploit large annotated image databases in order to learn the appearance of the captured anatomy. These solutions are subject to several limitations, including the use of suboptimal feature engineering techniques and, most importantly, the use of computationally suboptimal search schemes for anatomy detection. To address these issues, we propose a method that follows a new paradigm, reformulating the detection problem as a behavior-learning task for an artificial agent. We couple the modeling of the anatomy appearance and the object search in a unified behavioral framework, using the capabilities of deep reinforcement learning and multi-scale image analysis. In other words, an artificial agent is trained not only to distinguish the target anatomical object from the rest of the body but also to find the object by learning and following an optimal navigation path to the target in the imaged volumetric space. We evaluated our approach on 1487 3D-CT volumes from 532 patients, totaling over 500,000 image slices, and show that it significantly outperforms state-of-the-art solutions in detecting several anatomical structures, with no failed cases from a clinical acceptance perspective, while also achieving a 20-30 percent higher detection accuracy. Most importantly, we improve the detection speed of the reference methods by 2-3 orders of magnitude, achieving unmatched real-time performance on large 3D-CT scans.
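The multi-scale scheme can be sketched on a 1-D toy problem (a hedged illustration of the coarse-to-fine idea only; the paper's learned agent is replaced here by a plain `score` function): localize on a coarse grid, then refine only around the current estimate at each finer scale, never scanning the full resolution exhaustively.

```python
def multi_scale_search(score, size, scales=(8, 4, 2, 1)):
    """Coarse-to-fine localization: a global scan at the coarsest
    scale, then local refinement windows at each finer scale."""
    est = max(range(0, size, scales[0]), key=score)       # coarse global scan
    for step in scales[1:]:
        lo = max(est - 2 * step, 0)
        hi = min(est + 2 * step, size - 1)
        est = max(range(lo, hi + 1, step), key=score)     # local refinement
    return est
```

For a 64-sample line this evaluates roughly a dozen positions instead of 64; in 3D the ratio grows with the cube of the resolution, which is where the orders-of-magnitude speed-up reported above comes from.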


International MICCAI Workshop on Medical Computer Vision | 2013

Pectoral Muscle Detection in Digital Breast Tomosynthesis and Mammography

Florin C. Ghesu; Michael Wels; Anna Jerebko; Michael Sühling; Joachim Hornegger; B. Michael Kelm

Screening and diagnosis of breast cancer with Digital Breast Tomosynthesis (DBT) and Mammography are increasingly supported by algorithms for automatic post-processing. The pectoral muscle, which dorsally delineates the breast tissue towards the chest wall, is an important anatomical structure for navigation. Along with the nipple and the skin, the pectoral muscle boundary is often used for reporting the location of breast lesions. It is visible in mediolateral oblique (MLO) views where it is well approximated by a straight line. Here, we propose two machine learning-based algorithms to robustly detect the pectoral muscle in MLO views from DBT and mammography. Embedded into the Marginal Space Learning framework, the algorithms involve the evaluation of multiple candidate boundaries in a hierarchical manner. To this end, we propose a novel method for candidate generation using a Hough-based approach. Experiments were performed on a set of 100 DBT volumes and 95 mammograms from different clinical cases. Our novel combined approach achieves competitive accuracy and robustness. In particular, for the DBT data, we achieve significantly lower deviation angle error and mean distance error than the standard approach. The proposed algorithms run within a few seconds.
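Since the pectoral boundary in MLO views is well approximated by a straight line, Hough-based candidate generation reduces to voting in a (theta, rho) accumulator. The sketch below is our own minimal illustration of that step, not the paper's implementation; edge points and grid resolutions are hypothetical.

```python
import numpy as np

def hough_line(points, thetas=np.deg2rad(np.arange(0, 180, 1)), rho_max=64):
    """Toy Hough-based line candidate generation: each edge point votes
    for every line rho = x*cos(theta) + y*sin(theta) passing through it;
    the accumulator peak is the strongest straight-line candidate."""
    rhos = np.arange(-rho_max, rho_max)
    acc = np.zeros((len(thetas), len(rhos)), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round(r).astype(int) + rho_max
        ok = (idx >= 0) & (idx < len(rhos))
        acc[np.arange(len(thetas))[ok], idx[ok]] += 1   # cast votes
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return np.rad2deg(thetas[ti]), rhos[ri]
```

In the paper's hierarchical setting, several high-vote peaks (not just the maximum) would be kept as candidate boundaries and passed to the learned classifiers for evaluation.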


Medical Image Analysis | 2018

Towards intelligent robust detection of anatomical structures in incomplete volumetric data

Florin C. Ghesu; Bogdan Georgescu; Sasa Grbic; Andreas K. Maier; Joachim Hornegger; Dorin Comaniciu

Highlights:
- Multi-scale DRL with robust statistical shape modeling for anatomy detection.
- Multi-scale processing enables real-time speed and high detection accuracy.
- Robust and principled recognition of anatomy that is missing from the field-of-view.
- Extensive experiments on up to 50 anatomical landmarks and over 5000 3D-CT scans.

Robust and fast detection of anatomical structures represents an important component of medical image analysis technologies. Current solutions for anatomy detection are based on machine learning and are generally driven by suboptimal and exhaustive search strategies. In particular, these techniques do not effectively address cases of incomplete data, i.e., scans acquired with a partial field-of-view. We address these challenges by following a new paradigm, which reformulates the detection task as teaching an intelligent artificial agent how to actively search for an anatomical structure. Using the principles of deep reinforcement learning with multi-scale image analysis, artificial agents are taught optimal navigation paths in the scale-space representation of an image, while accounting for structures that are missing from the field-of-view. The spatial coherence of the observed anatomical landmarks is ensured using elements from statistical shape modeling and robust estimation theory. Experiments show that our solution outperforms marginal space deep learning, a powerful deep learning method, at detecting different anatomical structures without any failure. The dataset contains 5043 3D-CT volumes from over 2000 patients, totaling over 2,500,000 image slices. In particular, our solution achieves 0% false-positive and 0% false-negative rates at detecting whether the landmarks are captured in the field-of-view of the scan (excluding all border cases), with an average detection accuracy of 2.78 mm.
In terms of runtime, we reduce the detection time of the marginal space deep learning method by 20-30 times to under 40 ms, an unmatched performance for high-resolution incomplete 3D-CT data.
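The shape-model consistency check can be sketched in a few lines (a hedged illustration with a simple median-based estimator, not the paper's exact robust estimation machinery): align detections to a mean shape robustly, then flag landmarks whose residuals are implausibly large as spurious or missing.

```python
import numpy as np

def robust_shape_check(detected, mean_shape, tol=5.0):
    """Toy spatial-coherence check: estimate the alignment of detected
    landmarks to the mean shape with a robust median translation, then
    flag landmarks whose residual exceeds a tolerance as outliers
    (mis-detections or structures outside the field-of-view)."""
    diff = detected - mean_shape
    t = np.median(diff, axis=0)                 # robust translation estimate
    residuals = np.linalg.norm(diff - t, axis=1)
    return t, residuals > tol                   # per-landmark outlier flags
```

The median keeps the alignment estimate unaffected by a minority of bad detections, which is the essential property a robust estimator contributes here; the paper's full model also accounts for rotation, scale, and learned shape variation.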


Bildverarbeitung für die Medizin | 2018

Abstract: Robust Multi-Scale Anatomical Landmark Detection in Incomplete 3D-CT Data

Florin C. Ghesu; Bogdan Georgescu; Sasa Grbic; Andreas K. Maier; Joachim Hornegger; Dorin Comaniciu

An essential prerequisite for comprehensive medical image analysis is the robust and fast detection of anatomical structures in the human body. To this end, machine learning techniques are most often applied, exploiting large annotated image databases to estimate parametric models of anatomy appearance. However, the performance of these methods is generally limited due to the suboptimal and exhaustive search strategies applied to large volumetric image data, e.g., 3D-CT scans.


Deep Learning for Medical Image Analysis | 2017

Efficient Medical Image Parsing

Florin C. Ghesu; Bogdan Georgescu; Joachim Hornegger

Fast and robust detection, segmentation and tracking of anatomical structures or pathologies support the entire clinical workflow, enabling real-time guidance, quantification, and processing in the operating room. Most state-of-the-art solutions for parsing medical images are based on machine learning methods. While this enables the effective use of large annotated image databases, such techniques typically suffer from inherent limitations related to the efficiency of scanning high-dimensional parametric spaces and the learning of representative features for modeling the object appearance. In this context we present Marginal Space Deep Learning, a novel framework for volumetric image parsing which exploits both the strengths of efficient object parametrization in hierarchical marginal spaces and the representational power of state-of-the-art deep learning architectures. The system learns classifiers in clustered, high-probability regions of the parameter space, capturing the appearance of the object under the considered pose transformations and shape variations, and gradually increasing the dimensionality of the exploration space from translation (3D) and translation-orientation (6D) to also incorporating anisotropic scaling (9D) and shape variability (ND). At runtime the system uses the learned classifiers to exhaustively scan these spaces to select the most probable transformation parameters. As this implies a significant computational effort, on the order of billions of scanning hypotheses, we propose cascaded sparse adaptive neural networks that learn to focus the data sampling patterns of the networks on sparse, context-rich parts of the input, thereby considerably reducing the runtime and increasing the robustness of the system.
While we show that this method significantly increases the performance over the state of the art, we highlight its main limitation: the learning of the appearance model and the parameter scanning are completely decoupled as independent algorithmic steps. To address this, we take a step toward human-like intelligent parsing, presenting an extension of the system that models the object appearance and the parameter search as a unified behavioral task for an artificial agent. As opposed to exhaustively scanning the parameter space, the system uses reinforcement learning to discover optimal navigation paths guiding the search to the optimal location. We show the initial performance of this approach on the detection of arbitrary landmarks in ultrasound, magnetic resonance, and computed tomography data, with considerable improvement over the state of the art. Our future work is focused on extending this framework to generic image parsing.

Collaboration


Dive into Florin C. Ghesu's collaborations.

Top Co-Authors


Joachim Hornegger

University of Erlangen-Nuremberg


Andreas K. Maier

University of Erlangen-Nuremberg
