Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Francesco Ciompi is active.

Publication


Featured research published by Francesco Ciompi.


Medical Image Analysis | 2017

A survey on deep learning in medical image analysis

Geert J. S. Litjens; Thijs Kooi; Babak Ehteshami Bejnordi; Arnaud Arindra Adiyoso Setio; Francesco Ciompi; Mohsen Ghafoorian; Jeroen van der Laak; Bram van Ginneken; Clara I. Sánchez

Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research.


IEEE Transactions on Medical Imaging | 2016

Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks

Arnaud Arindra Adiyoso Setio; Francesco Ciompi; Geert J. S. Litjens; Paul K. Gerke; Colin Jacobs; Sarah J. van Riel; Mathilde M. W. Wille; Matiullah Naqibullah; Clara I. Sánchez; Bram van Ginneken

We propose a novel Computer-Aided Detection (CAD) system for pulmonary nodules using multi-view convolutional networks (ConvNets), for which discriminative features are automatically learnt from the training data. The network is fed with nodule candidates obtained by combining three candidate detectors specifically designed for solid, subsolid, and large nodules. For each candidate, a set of 2-D patches from differently oriented planes is extracted. The proposed architecture comprises multiple streams of 2-D ConvNets, whose outputs are combined using a dedicated fusion method to obtain the final classification. Data augmentation and dropout are applied to avoid overfitting. On 888 scans of the publicly available LIDC-IDRI dataset, our method reaches high detection sensitivities of 85.4% and 90.1% at 1 and 4 false positives per scan, respectively. An additional evaluation on independent datasets from the ANODE09 challenge and DLCST is performed. We show that the proposed multi-view ConvNets are highly suited for false positive reduction in a CAD system.
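A minimal sketch of the multi-stream idea described above, assuming PyTorch, nine 2-D views per candidate, and a simple late-fusion head; patch size, layer widths and the number of views are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a multi-stream 2-D ConvNet with late fusion.
# Not the authors' implementation; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class Stream2D(nn.Module):
    """One 2-D ConvNet stream operating on a single oriented view."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 24, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(24, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 48, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):                    # x: (B, 1, 64, 64)
        return self.features(x).flatten(1)   # -> (B, 48)

class MultiViewNet(nn.Module):
    """Late fusion: concatenate per-view descriptors, then classify."""
    def __init__(self, n_views=9):
        super().__init__()
        self.streams = nn.ModuleList(Stream2D() for _ in range(n_views))
        self.classifier = nn.Linear(48 * n_views, 2)  # nodule vs. false positive

    def forward(self, views):                # views: (B, n_views, 1, 64, 64)
        feats = [s(views[:, i]) for i, s in enumerate(self.streams)]
        return self.classifier(torch.cat(feats, dim=1))

logits = MultiViewNet()(torch.randn(4, 9, 1, 64, 64))  # toy forward pass
```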


international symposium on biomedical imaging | 2015

Off-the-shelf convolutional neural network features for pulmonary nodule detection in computed tomography scans

Bram van Ginneken; Arnaud Arindra Adiyoso Setio; Colin Jacobs; Francesco Ciompi

Convolutional neural networks (CNNs) have emerged as the most powerful technique for a range of different tasks in computer vision. Recent work suggested that CNN features are generic and can be used for classification tasks outside the exact domain for which the networks were trained. In this work we use the features from one such network, OverFeat, trained for object detection in natural images, for nodule detection in computed tomography scans. We use 865 scans from the publicly available LIDC data set, read by four thoracic radiologists. Nodule candidates are generated by a state-of-the-art nodule detection system. We extract 2D sagittal, coronal and axial patches for each nodule candidate, extract 4096 features from the penultimate layer of OverFeat, and classify these with linear support vector machines. We show for various configurations that the off-the-shelf CNN features perform surprisingly well, but not as well as the dedicated detection system. When both approaches are combined, significantly better results are obtained than with either approach alone. We conclude that CNN features have great potential to be used for detection tasks in volumetric medical data.
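A rough sketch of the off-the-shelf strategy. OverFeat weights are not assumed to be available, so a torchvision ResNet-18 stands in as the pretrained feature extractor; the patches, labels and preprocessing below are placeholders, not the paper's data or pipeline.

```python
# Sketch: pretrained-CNN features + linear SVM for candidate classification.
# A torchvision ResNet-18 is used as a stand-in for OverFeat; inputs are
# random placeholders for the sagittal/coronal/axial candidate patches
# (ImageNet-style intensity normalisation is omitted for brevity).
import numpy as np
import torch
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.svm import LinearSVC

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # keep the penultimate-layer features
backbone.eval()

def cnn_features(patches: np.ndarray) -> np.ndarray:
    """patches: (N, 3, 224, 224) float array -> (N, 512) feature matrix."""
    with torch.no_grad():
        return backbone(torch.from_numpy(patches).float()).numpy()

X_train = cnn_features(np.random.rand(32, 3, 224, 224).astype("float32"))
y_train = np.random.randint(0, 2, size=32)   # 1 = nodule, 0 = false positive

svm = LinearSVC(C=1.0).fit(X_train, y_train)
scores = svm.decision_function(X_train)       # candidate-level scores
```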


Medical Image Analysis | 2015

Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2D views and a convolutional neural network out-of-the-box

Francesco Ciompi; Bartjan de Hoop; Sarah J. van Riel; Kaman Chung; Ernst Th. Scholten; Matthijs Oudkerk; Pim A. de Jong; Mathias Prokop; Bram van Ginneken

In this paper, we tackle the problem of automatic classification of pulmonary peri-fissural nodules (PFNs). The classification problem is formulated as a machine learning approach, where detected nodule candidates are classified as PFNs or non-PFNs. Supervised learning is used, where a classifier is trained to label the detected nodule. The classification of the nodule in 3D is formulated as an ensemble of classifiers trained to recognize PFNs based on 2D views of the nodule. In order to describe nodule morphology in 2D views, we use the output of a pre-trained convolutional neural network known as OverFeat. We compare our approach with a recently presented descriptor of pulmonary nodule morphology, namely Bag of Frequencies, and illustrate the advantages offered by the two strategies, achieving an AUC of 0.868, which is close to that of human experts.
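A minimal sketch of the ensembling step only, assuming each 2-D view has already been scored by a per-view classifier; simple probability averaging is used here as one plausible aggregation rule, not necessarily the paper's exact fusion.

```python
# Sketch: combine per-view PFN probabilities into a single nodule-level score.
# Averaging is an illustrative aggregation rule, not necessarily the paper's.
import numpy as np

def nodule_score(view_probabilities: np.ndarray) -> float:
    """view_probabilities: (n_views,) PFN probability for each 2-D view."""
    return float(np.mean(view_probabilities))

print(nodule_score(np.array([0.91, 0.85, 0.78])))  # one score per nodule
```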


IEEE Transactions on Biomedical Engineering | 2011

Rayleigh Mixture Model for Plaque Characterization in Intravascular Ultrasound

José Seabra; Francesco Ciompi; Oriol Pujol; Josepa Mauri; Petia Radeva; João M. Sanches

Vulnerable plaques are the major cause of carotid and coronary vascular problems, such as heart attack or stroke. A correct modeling of plaque echomorphology and composition can help the identification of such lesions. The Rayleigh distribution is widely used to describe (nearly) homogeneous areas in ultrasound images. Since plaques may contain tissues with heterogeneous regions, more complex distributions depending on multiple parameters are usually needed, such as Rice, K or Nakagami distributions. In such cases, the problem formulation becomes more complex, and the optimization procedure to estimate the plaque echomorphology is more difficult. Here, we propose to model the tissue echomorphology by means of a mixture of Rayleigh distributions, known as the Rayleigh mixture model (RMM). The problem formulation is still simple, but its ability to describe complex textural patterns is very powerful. In this paper, we present a method for the automatic estimation of the RMM mixture parameters by means of the expectation maximization algorithm, which aims at characterizing tissue echomorphology in ultrasound (US). The performance of the proposed model is evaluated with a database of in vitro intravascular US cases. We show that the mixture coefficients and Rayleigh parameters explicitly derived from the mixture model are able to accurately describe different plaque types and to significantly improve the characterization performance of an already existing methodology.
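The RMM lends itself to a compact worked example. The sketch below fits mixture weights and Rayleigh scale parameters to 1-D amplitude samples with expectation maximization; initialisation, stopping rule and the toy data are illustrative assumptions, not the paper's implementation.

```python
# Sketch: EM for a Rayleigh mixture model (RMM) on ultrasound amplitudes.
import numpy as np

def rayleigh_pdf(x, sigma2):
    """Rayleigh density with squared scale parameter sigma2 = sigma**2."""
    return (x / sigma2) * np.exp(-x**2 / (2.0 * sigma2))

def fit_rmm(x, K=3, n_iter=100, seed=0):
    """Fit mixture weights pi_k and scales sigma_k to 1-D samples x."""
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)
    sigma2 = rng.uniform(0.5, 2.0, K) * np.mean(x**2) / 2.0   # rough init
    for _ in range(n_iter):
        # E-step: responsibilities gamma[i, k]
        dens = np.stack([pi[k] * rayleigh_pdf(x, sigma2[k]) for k in range(K)], axis=1)
        gamma = dens / np.clip(dens.sum(axis=1, keepdims=True), 1e-12, None)
        # M-step: closed-form weighted Rayleigh maximum-likelihood updates
        Nk = gamma.sum(axis=0)
        pi = Nk / len(x)
        sigma2 = (gamma * x[:, None]**2).sum(axis=0) / (2.0 * np.clip(Nk, 1e-12, None))
    return pi, np.sqrt(sigma2)

# Toy usage: samples drawn from two Rayleigh components
x = np.concatenate([np.random.default_rng(1).rayleigh(1.0, 500),
                    np.random.default_rng(2).rayleigh(3.0, 500)])
pi, sigma = fit_rmm(x, K=2)
```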


Computerized Medical Imaging and Graphics | 2014

Standardized evaluation methodology and reference database for evaluating IVUS image segmentation

Simone Balocco; Carlo Gatta; Francesco Ciompi; Andreas Wahle; Petia Radeva; Stéphane G. Carlier; Gözde B. Ünal; Elias Sanidas; Josepa Mauri; Xavier Carillo; Tomas Kovarnik; Ching-Wei Wang; Hsiang-Chou Chen; Themis P. Exarchos; Dimitrios I. Fotiadis; François Destrempes; Guy Cloutier; Oriol Pujol; Marina Alberti; E. Gerardo Mendizabal-Ruiz; Mariano Rivera; Timur Aksoy; Richard Downe; Ioannis A. Kakadiaris

This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. This framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight teams that participated. We describe the available database, comprising multi-center, multi-vendor and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis are proposed. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and that encouraging results can also be obtained with fully automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.
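For intuition, a sketch of two representative area-based agreement measures between an automatic and a manual segmentation mask; the Jaccard index and percentage of area difference shown here are common choices and are not claimed to be the framework's exact definitions.

```python
# Sketch: area-based agreement measures between an automatic and a manual
# segmentation mask (boolean arrays). Metric choices are illustrative.
import numpy as np

def jaccard(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    inter = np.logical_and(auto_mask, manual_mask).sum()
    union = np.logical_or(auto_mask, manual_mask).sum()
    return inter / union if union else 1.0

def percentage_area_difference(auto_mask, manual_mask) -> float:
    manual_area = manual_mask.sum()
    return abs(int(auto_mask.sum()) - int(manual_area)) / manual_area

a = np.zeros((64, 64), bool); a[10:40, 10:40] = True   # toy automatic mask
m = np.zeros((64, 64), bool); m[12:42, 12:42] = True   # toy manual mask
print(jaccard(a, m), percentage_area_difference(a, m))
```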


Medical Image Analysis | 2012

HoliMAb: A holistic approach for Media–Adventitia border detection in intravascular ultrasound

Francesco Ciompi; Oriol Pujol; Carlo Gatta; Marina Alberti; Simone Balocco; Xavier Carrillo; Josepa Mauri-Ferré; Petia Radeva

We present a fully automatic methodology for the detection of the Media-Adventitia border (MAb) in human coronary arteries in Intravascular Ultrasound (IVUS) images. Robust border detection is achieved by means of a holistic interpretation of the detection problem, where the target object, i.e. the media layer, is considered as part of the whole vessel in the image and all the relationships between tissues are learnt. A fairly general framework exploiting multi-class tissue characterization as well as contextual information on the morphology and the appearance of the tissues is presented. The methodology is (i) validated through an exhaustive comparison with inter-observer variability on two challenging databases and (ii) compared with state-of-the-art methods for the detection of the MAb in IVUS. The obtained average values for the mean radial distance and the percentage of area difference are 0.211 mm and 10.1%, respectively. The applicability of the proposed methodology to clinical practice is also discussed.
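The two reported measures lend themselves to a short sketch. Assuming the automatic and reference MAb contours are sampled as radii at the same equally spaced angles around the catheter center (an assumed discretisation, not the paper's exact implementation), mean radial distance and percentage of area difference can be computed as follows.

```python
# Sketch: mean radial distance (MRD) and percentage of area difference (PAD)
# between an automatic and a reference contour, each given as radii sampled
# at N equally spaced angles around the catheter center. Assumed conventions.
import numpy as np

def mean_radial_distance(r_auto: np.ndarray, r_ref: np.ndarray) -> float:
    """Average absolute radial gap (same units as the radii, e.g. mm)."""
    return float(np.mean(np.abs(r_auto - r_ref)))

def percentage_area_difference(r_auto: np.ndarray, r_ref: np.ndarray) -> float:
    """Relative difference of the areas enclosed by the two contours."""
    area = lambda r: 0.5 * np.sum(r**2) * (2 * np.pi / len(r))  # polar area
    return float(abs(area(r_auto) - area(r_ref)) / area(r_ref))

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r_ref = 2.0 + 0.2 * np.sin(3 * theta)          # toy reference MAb radius (mm)
r_auto = r_ref + 0.05 * np.cos(theta)          # toy automatic detection
print(mean_radial_distance(r_auto, r_ref), percentage_area_difference(r_auto, r_ref))
```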


IEEE Transactions on Biomedical Engineering | 2012

Automatic Bifurcation Detection in Coronary IVUS Sequences

Marina Alberti; Simone Balocco; Carlo Gatta; Francesco Ciompi; Oriol Pujol; Joana Silva; Xavier Carrillo; Petia Radeva

In this paper, we present a fully automatic method which identifies every bifurcation in an intravascular ultrasound (IVUS) sequence, the corresponding frames, the angular orientation with respect to the IVUS acquisition, and the extension. This goal is reached using a two-level classification scheme: first, a classifier is applied to a set of textural features extracted from each image of a sequence. A comparison among three state-of-the-art discriminative classifiers (AdaBoost, random forest, and support vector machine) is performed to identify the most suitable method for the branching detection task. Second, the results are improved by exploiting contextual information using a multiscale stacked sequential learning scheme. The results are then successively refined using a-priori information about branching dimensions and geometry. The proposed approach provides a robust tool for the quick review of pullback sequences, facilitating the evaluation of the lesion at bifurcation sites. The proposed method reaches an F-measure score of 86.35%, while the F-measure scores for inter- and intra-observer variability are 71.63% and 76.18%, respectively. The obtained results are positive, especially considering that the branching detection task is very challenging due to the high variability in bifurcation dimensions and appearance.
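A rough sketch of the two-level idea, assuming scikit-learn and synthetic per-frame features: a first classifier scores each frame, multiscale moving averages of those scores are appended as context, and a second classifier refines the frame labels. This illustrates stacked sequential learning in general, not the authors' pipeline, features, or refinement rules.

```python
# Sketch: two-level (stacked sequential) classification of IVUS frames.
# Level 1 scores each frame from textural features; level 2 re-classifies
# using multiscale moving averages of those scores as context. Illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_frames, n_feats = 400, 16
X = rng.normal(size=(n_frames, n_feats))            # placeholder textural features
y = (rng.random(n_frames) < 0.15).astype(int)       # placeholder bifurcation labels

level1 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
scores = level1.predict_proba(X)[:, 1]              # per-frame bifurcation score
# (in practice the level-1 scores fed to level 2 would come from held-out folds)

def multiscale_context(s: np.ndarray, windows=(3, 9, 27)) -> np.ndarray:
    """Moving averages of the score sequence at several temporal scales."""
    cols = [np.convolve(s, np.ones(w) / w, mode="same") for w in windows]
    return np.stack(cols, axis=1)

X2 = np.hstack([X, scores[:, None], multiscale_context(scores)])
level2 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X2, y)
frame_labels = level2.predict(X2)                    # refined per-frame decision
```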


International Journal of Cardiovascular Imaging | 2010

Fusing in-vitro and in-vivo intravascular ultrasound data for plaque characterization

Francesco Ciompi; Oriol Pujol; Carlo Gatta; Oriol Rodriguez-Leor; Josepa Mauri-Ferré; Petia Radeva

Accurate detection of in-vivo vulnerable plaque in coronary arteries is still an open problem. Recent studies show that it is highly related to tissue structure and composition. Intravascular Ultrasound (IVUS) is a powerful imaging technique that gives a detailed cross-sectional image of the vessel, making it possible to explore artery morphology. IVUS data validation is usually performed by comparing post-mortem (in-vitro) IVUS data with the corresponding histological analysis of the tissue. The main drawback of this method is the small number of available case studies and validated data, due to the complexity of the histological analysis procedure. On the other hand, IVUS data from in-vivo cases is easy to obtain but cannot be histologically validated. In this work, we propose to enhance the in-vitro training data set by selectively including examples from in-vivo plaques. For this purpose, a Sequential Floating Forward Selection method is reformulated in the context of plaque characterization. The enhanced classifier performance is validated on the in-vitro data set, yielding an overall accuracy of 91.59% in discriminating among fibrotic, lipidic and calcified plaques, while reducing the gap between in-vivo and in-vitro data analysis. Experimental results suggest that the obtained classifier could be properly applied to in-vivo plaque characterization and also demonstrate that the common hypothesis that the difference between in-vivo and in-vitro data is negligible is incorrect.
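A hedged sketch of sequential floating forward selection over candidate in-vivo examples. The `score` function is a placeholder for whatever criterion is optimised, e.g. cross-validated accuracy of a plaque classifier trained on the in-vitro set plus the selected examples; this simplified version does not track the best subset per size, as a full SFFS implementation would.

```python
# Sketch: Sequential Floating Forward Selection (SFFS) over candidate items.
# `score(selected)` is a placeholder criterion; illustrative, not the
# authors' implementation.
from typing import Callable, Iterable, Set

def sffs(candidates: Iterable[int], score: Callable[[Set[int]], float],
         target_size: int) -> Set[int]:
    candidates, selected = set(candidates), set()
    while len(selected) < target_size:
        # Forward step: add the single candidate that helps most.
        best = max(candidates - selected, key=lambda c: score(selected | {c}))
        selected.add(best)
        # Floating (backward) steps: drop items while that improves the score.
        improved = True
        while improved and len(selected) > 1:
            worst = max(selected, key=lambda c: score(selected - {c}))
            if score(selected - {worst}) > score(selected):
                selected.remove(worst)
            else:
                improved = False
    return selected

# Toy usage: a criterion that prefers even-numbered "examples".
chosen = sffs(range(10),
              score=lambda s: sum(1 for i in s if i % 2 == 0) - 0.01 * len(s),
              target_size=3)
```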


Scientific Reports | 2017

Towards automatic pulmonary nodule management in lung cancer screening with deep learning

Francesco Ciompi; Kaman Chung; Sarah J. van Riel; Arnaud Arindra Adiyoso Setio; Paul K. Gerke; Colin Jacobs; Ernst Th. Scholten; Cornelia Schaefer-Prokop; Mathilde M. W. Wille; Alfonso Marchianò; Ugo Pastorino; Mathias Prokop; Bram van Ginneken

The introduction of lung cancer screening programs will produce an unprecedented amount of chest CT scans in the near future, which radiologists will have to read in order to decide on a patient follow-up strategy. According to the current guidelines, the workup of screen-detected nodules strongly relies on nodule size and nodule type. In this paper, we present a deep learning system based on multi-stream multi-scale convolutional networks, which automatically classifies all nodule types relevant for nodule workup. The system processes raw CT data containing a nodule without the need for any additional information such as nodule segmentation or nodule size, and learns a representation of 3D data by analyzing an arbitrary number of 2D views of a given nodule. The deep learning system was trained with data from the Italian MILD screening trial and validated on an independent set of data from the Danish DLCST screening trial. We analyze the advantage of processing nodules at multiple scales with a multi-stream convolutional network architecture, and we show that the proposed deep learning system achieves performance at classifying nodule type that surpasses that of classical machine learning approaches and is within the inter-observer variability among four experienced human observers.
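A sketch of the kind of multi-scale view extraction such a system takes as input, assuming an isotropically resampled CT cube centred on the nodule; the crop sizes, the three orthogonal views and the resizing step are illustrative assumptions, not the paper's exact preprocessing.

```python
# Sketch: extract axial/coronal/sagittal 2-D views of a nodule at several
# physical scales from an isotropic CT cube centred on the nodule.
# Crop sizes and output resolution are illustrative assumptions.
import numpy as np
from scipy.ndimage import zoom

def multiscale_views(volume: np.ndarray, crop_sizes=(16, 32, 64), out_size=64):
    """volume: (D, H, W) cube centred on the nodule -> (n_scales, 3, out, out)."""
    cz, cy, cx = (s // 2 for s in volume.shape)
    views = []
    for c in crop_sizes:
        h = c // 2
        cube = volume[cz - h:cz + h, cy - h:cy + h, cx - h:cx + h]
        planes = [cube[h, :, :], cube[:, h, :], cube[:, :, h]]  # axial, coronal, sagittal
        views.append([zoom(p, out_size / c, order=1) for p in planes])
    return np.asarray(views, dtype=np.float32)

vol = np.random.rand(96, 96, 96).astype(np.float32)   # placeholder CT cube
patches = multiscale_views(vol)                        # shape (3, 3, 64, 64)
```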

Collaboration


Dive into Francesco Ciompi's collaborations.

Top Co-Authors

Petia Radeva
University of Barcelona

Oriol Pujol
University of Barcelona

Bram van Ginneken
Radboud University Nijmegen

Carlo Gatta
University of Barcelona

Colin Jacobs
Radboud University Nijmegen

Ernst Th. Scholten
Radboud University Nijmegen

Josepa Mauri
Autonomous University of Barcelona

Xavier Carrillo
Autonomous University of Barcelona