Fons van der Sommen
Eindhoven University of Technology
Publications
Featured research published by Fons van der Sommen.
Endoscopy | 2016
Fons van der Sommen; Sveta Zinger; Wouter L. Curvers; Raf Bisschops; Oliver Pech; Bas L. Weusten; Jacques J. Bergman; Erik J. Schoon
BACKGROUND AND STUDY AIMS Early neoplasia in Barrett's esophagus is difficult to detect and often overlooked during Barrett's surveillance. An automatic detection system could be beneficial by assisting endoscopists with the detection of early neoplastic lesions. The aim of this study was to assess the feasibility of a computer system to detect early neoplasia in Barrett's esophagus. PATIENTS AND METHODS Based on 100 images from 44 patients with Barrett's esophagus, a computer algorithm employing specific texture and color filters and machine learning was developed for the detection of early neoplastic lesions in Barrett's esophagus. The evaluation by one endoscopist, who extensively imaged and endoscopically removed all early neoplastic lesions and was not blinded to the histological outcome, was considered the gold standard. For external validation, four international experts in Barrett's neoplasia, who were blinded to the pathology results, reviewed all images. RESULTS The system identified early neoplastic lesions on a per-image analysis with a sensitivity and specificity of 0.83. At the patient level, the system achieved a sensitivity and specificity of 0.86 and 0.87, respectively. A trade-off between the two performance metrics could be made by varying the percentage of training samples showing neoplastic tissue. CONCLUSION The automated computer algorithm developed in this study was able to identify early neoplastic lesions with reasonable accuracy, suggesting that automated detection of early neoplasia in Barrett's esophagus is feasible. Further research is required to improve the accuracy of the system and prepare it for real-time operation before it can be applied in clinical practice.
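As a rough illustration of the reported evaluation, the sketch below computes per-image sensitivity and specificity from binary predictions; the label and prediction arrays are hypothetical and not taken from the study.

```python
# Minimal sketch (not the authors' code): per-image sensitivity and specificity
# from binary predictions, using hypothetical label/prediction arrays.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 0, 0, 1, 0])   # 1 = early neoplasia present (hypothetical)
y_pred = np.array([1, 0, 0, 0, 1, 1])   # classifier output per image (hypothetical)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```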
Proceedings of SPIE | 2013
Fons van der Sommen; Sveta Zinger; Erik J. Schoon
Esophageal cancer is the fastest-rising type of cancer in the Western world. The recent development of High-Definition (HD) endoscopy has enabled the specialist physician to identify cancer at an early stage. Nevertheless, it still requires considerable effort and training to recognize the irregularities associated with early cancer. As a first step towards a Computer-Aided Detection (CAD) system that supports the physician in finding these early stages of cancer, we propose an algorithm that automatically identifies irregularities in the esophagus, based on HD endoscopic images. The concept employs tile-based processing, so our system not only identifies that an endoscopic image contains early cancer, but can also locate it. The identification is based on the following steps: (1) preprocessing, (2) feature extraction with dimensionality reduction, (3) classification. We evaluate the detection performance in the RGB, HSI, and YCbCr color spaces using Color Histogram (CH) and Gabor features, and compare these with other well-known texture descriptors. For classification, we employ a Support Vector Machine (SVM) and evaluate its performance using different parameters and kernel functions. In experiments, our system achieves a classification accuracy of 95.9% on 50×50-pixel tiles of tumorous and normal tissue and reaches an Area Under the Curve (AUC) of 0.990. In 22 clinical examples, our algorithm identified all (pre-)cancerous regions and annotated those regions reasonably well. The experimental and clinical validation are considered promising for a CAD system that supports the physician in finding early-stage cancer.
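The sketch below illustrates the general tile-based pipeline described here: color-histogram and Gabor features per tile, followed by an SVM. Apart from the 50×50 tile size, all parameter values and helper names are assumptions rather than the published configuration.

```python
# Illustrative sketch of tile-based classification with color-histogram and Gabor
# features and an SVM; parameters are assumptions, not the published settings.
import numpy as np
from skimage.filters import gabor
from skimage.util import view_as_blocks
from sklearn.svm import SVC

def tile_features(tile_rgb):
    """Concatenate a coarse RGB color histogram with Gabor filter energies."""
    hist, _ = np.histogramdd(tile_rgb.reshape(-1, 3), bins=(8, 8, 8),
                             range=((0, 256),) * 3)          # assumes uint8 input
    gray = tile_rgb.mean(axis=2)
    gabor_energy = [np.abs(gabor(gray, frequency=f)[0]).mean()
                    for f in (0.1, 0.2, 0.4)]
    return np.concatenate([hist.ravel() / hist.sum(), gabor_energy])

def image_to_tiles(img_rgb, tile=50):
    h, w, _ = img_rgb.shape
    img_rgb = img_rgb[: h - h % tile, : w - w % tile]         # crop to multiple of tile
    blocks = view_as_blocks(img_rgb, (tile, tile, 3))
    return blocks.reshape(-1, tile, tile, 3)

# X = np.stack([tile_features(t) for t in labelled_tiles])    # hypothetical data
# clf = SVC(kernel="rbf", probability=True).fit(X, y)         # y: 1 = tumorous, 0 = normal
```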
Proceedings of SPIE | 2017
Sander R. Klomp; Fons van der Sommen; Anne-Fré Swager; Sveta Zinger; Erik J. Schoon; Wouter L. Curvers; Jacques J. Bergman
Volumetric Laser Endomicroscopy (VLE) is a promising technique for the detection of early neoplasia in Barrett's Esophagus (BE). VLE generates hundreds of high-resolution, grayscale, cross-sectional images of the esophagus. However, at present, classifying these images is a time-consuming and cumbersome effort performed by an expert using a clinical prediction model. This paper explores the feasibility of using computer vision techniques to accurately predict the presence of dysplastic tissue in VLE BE images. Our contribution is threefold. First, widely applied machine learning techniques and feature extraction methods are benchmarked. Second, three new features based on the clinical detection model are proposed, offering superior classification accuracy and speed compared to earlier work. Third, we evaluate automated parameter tuning by applying a simple grid search and feature selection methods. The results are evaluated on a clinically validated dataset of 30 dysplastic and 30 non-dysplastic VLE images. Optimal classification accuracy is obtained by applying a support vector machine with our modified Haralick features and optimal image cropping, reaching an area under the receiver operating characteristic curve of 0.95, compared to 0.81 for the clinical prediction model. Optimal execution time is achieved using a proposed mean and median feature, which is extracted at least a factor of 2.5 faster than alternative features with comparable performance.
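As a hedged sketch of this kind of pipeline, the snippet below computes GLCM-based (Haralick-style) texture features and tunes an SVM with a simple grid search; the property list, parameter grid, and data handling are assumptions and do not reproduce the paper's modified Haralick features.

```python
# Sketch (assumed, not the authors' implementation): GLCM texture features for a
# grayscale VLE crop, with an SVM tuned by a simple grid search.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def glcm_features(gray_uint8):
    glcm = graycomatrix(gray_uint8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# X = np.stack([glcm_features(img) for img in cropped_images]); y = labels (0/1)
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]}, cv=5)
# grid.fit(X, y); print(grid.best_params_, grid.best_score_)
```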
Proceedings of SPIE | 2016
Markus H. A. Janse; Fons van der Sommen; Sveta Zinger; Erik J. Schoon
Esophageal cancer is one of the fastest-rising forms of cancer in the Western world. Using High-Definition (HD) endoscopy, gastroenterology experts can identify esophageal cancer at an early stage. Recent research shows that early cancer can be found using a state-of-the-art computer-aided detection (CADe) system based on analyzing static HD endoscopic images. Our research aims to extend this system by applying Random Forest (RF) classification, which introduces a confidence measure for detected cancer regions. To visualize this data, we propose a novel automated annotation system that employs the characteristics of this confidence measure. This approach allows reliable modeling of multi-expert knowledge and provides essential data for real-time video processing, enabling future use of the system in a clinical setting. The performance of the CADe system is evaluated on a 39-patient dataset containing 100 images annotated by 5 expert gastroenterologists. The proposed system reaches a precision of 75% and a recall of 90%, thereby improving on the state-of-the-art results by 11 and 6 percentage points, respectively.
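A minimal sketch of the core idea, using a Random Forest's class probabilities as a per-region confidence measure, is given below; feature extraction, the decision threshold, and variable names are assumptions.

```python
# Minimal sketch: Random Forest class probabilities as a confidence measure.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200, random_state=0)
# rf.fit(X_train, y_train)                      # region features, 0/1 labels (hypothetical)
# confidence = rf.predict_proba(X_test)[:, 1]   # fraction of trees voting "cancer"
# annotate = confidence > 0.5                   # regions to annotate, weighted by confidence
```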
international conference on image processing | 2015
Martin Pieck; Fons van der Sommen; Sveta Zinger
The use of context information in a scene is an important aid for full semantic scene understanding in security and surveillance applications. To this end, this paper presents an innovative semantic context-labeling algorithm for three context classes, trading off quality and real-time execution. Our system consists of three consecutive stages: image segmentation, region-based feature extraction, and classification. We propose the joint use of color features in HSV space, texture features from Gabor filters, and spatial context, in combination with the Directional Nearest Neighbor (DNN) method for constructing the undirected graph for segmentation. Compared to recent literature, this combination is over 35 times faster and achieves a coverability rate that is 65% higher.
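The sketch below illustrates only the region-based feature extraction stage (HSV color, Gabor texture, and a simple spatial cue per segment); the Directional Nearest Neighbor graph construction is specific to the paper and not reproduced, and all parameter choices are assumptions.

```python
# Illustrative sketch: per-region HSV color, Gabor texture, and spatial features
# over a precomputed segmentation (label image). Parameters are assumptions.
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import gabor

def region_features(img_rgb, labels, region_id):
    mask = labels == region_id
    hsv = rgb2hsv(img_rgb)
    color = [hsv[..., c][mask].mean() for c in range(3)]
    gray = img_rgb.mean(axis=2)
    texture = [np.abs(gabor(gray, frequency=f)[0])[mask].mean() for f in (0.1, 0.3)]
    ys, xs = np.nonzero(mask)                      # spatial context: normalized centroid
    spatial = [ys.mean() / labels.shape[0], xs.mean() / labels.shape[1]]
    return np.array(color + texture + spatial)
```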
medical image computing and computer assisted intervention | 2018
Annika Reinke; Matthias Eisenmann; Sinan Onogur; Marko Stankovic; Patrick Scholz; Peter M. Full; Hrvoje Bogunovic; Bennett A. Landman; Oskar Maier; Bjoern H. Menze; G Sharp; Korsuk Sirinukunwattana; Stefanie Speidel; Fons van der Sommen; Guoyan Zheng; Henning Müller; Michal Kozubek; Tal Arbel; Andrew P. Bradley; Pierre Jannin; Annette Kopp-Schneider; Lena Maier-Hein
Since the first MICCAI grand challenge organized in 2007 in Brisbane, challenges have become an integral part of MICCAI conferences. In the meantime, challenge datasets have become widely recognized as international benchmarking datasets and thus have a great influence on the research community and on individual careers. In this paper, we show several ways in which weaknesses related to current challenge design and organization can potentially be exploited. Our experimental analysis, based on MICCAI segmentation challenges organized in 2015, demonstrates that both challenge organizers and participants can potentially undertake measures to substantially tune rankings. To overcome these problems, we present best-practice recommendations for improving challenge design and organization.
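As a purely hypothetical illustration of how sensitive rankings can be to design choices, the snippet below ranks two made-up algorithms under an "aggregate then rank" scheme (mean score) versus a "rank then aggregate" scheme (mean of per-case ranks) and obtains opposite winners; neither the numbers nor the schemes are taken from the paper.

```python
# Hypothetical example: the winner flips depending on the ranking scheme.
import numpy as np

scores = {                                   # per-case Dice scores (made up)
    "Algorithm A": np.array([0.95, 0.94, 0.30]),
    "Algorithm B": np.array([0.90, 0.89, 0.88]),
}

# Scheme 1: aggregate then rank (mean score) -> B wins (0.89 vs 0.73).
mean_score = {name: s.mean() for name, s in scores.items()}

# Scheme 2: rank then aggregate (mean of per-case ranks, 1 = best) -> A wins (1.33 vs 1.67).
cases = np.vstack(list(scores.values()))
per_case_rank = cases.shape[0] - cases.argsort(axis=0).argsort(axis=0)
mean_rank = dict(zip(scores, per_case_rank.mean(axis=1)))

print(mean_score, mean_rank)
```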
international conference on distributed smart cameras | 2018
Joost van der Putten; Fons van der Sommen; Sveta Zinger; Daniel M. de Bruin; Guido Kamphuis
Non-Muscle Invasive Bladder Cancer (NMIBC) has a high incidence, and close follow-up with cystoscopy is necessary due to its high recurrence rate after initial treatment, estimated to be as high as 75%. Because of this high recurrence rate, it is vital that the detection of bladder cancer is improved. Computer-aided detection algorithms have been shown to be highly effective in achieving this goal. This paper presents the first automated segmentation algorithm for bladder cancer in endoscopic images. The second purpose of this study is to determine which modality is best suited for computer-aided segmentation of bladder cancer. Gabor and color features are extracted from 20 patients in four different modalities using a block-based strategy. Three different classifiers are used to classify the blocks, and post-processing is applied to obtain a segmented region. The best classification results were obtained using a support vector machine and the Spectrum B modality. Additionally, color features were found to be effective for obtaining segmentations comparable to those of experts.
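A rough sketch of the block-based segmentation idea is given below: an SVM labels fixed-size blocks and the block decisions are merged into a pixel mask with simple morphological post-processing. The block size, classifier settings, and post-processing steps are assumptions, not the paper's configuration.

```python
# Sketch of block-based segmentation: classify blocks, then merge into a mask.
import numpy as np
from scipy.ndimage import binary_closing, binary_opening
from sklearn.svm import SVC

def blocks_to_mask(block_preds, grid_shape, block=32):
    """Upsample per-block predictions to a pixel mask and clean it up."""
    mask = block_preds.reshape(grid_shape).repeat(block, axis=0).repeat(block, axis=1)
    return binary_closing(binary_opening(mask.astype(bool), iterations=1), iterations=2)

# clf = SVC(kernel="rbf").fit(train_features, train_labels)   # Gabor + color features
# preds = clf.predict(test_features)                          # one label per block
# segmentation = blocks_to_mask(preds, grid_shape=(rows, cols))
```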
Medical Imaging 2018: Computer-Aided Diagnosis | 2018
Joost van der Putten; Sveta Zinger; Fons van der Sommen; Mathias Prokop; John Hermans
In current clinical practice, the resectability of pancreatic ductal adenocarcinoma (PDA) is determined subjectively by a physician, which is an error-prone procedure. In this paper, we present a method for automated determination of the resectability of PDA from a routine abdominal CT, to reduce such decision errors. The tumor features are extracted from a group of patients with both hypo- and iso-attenuating tumors, of which 29 were resectable and 21 were not. The tumor contours are supplied by a medical expert. We present an approach that uses intensity, shape, and texture features to determine tumor resectability. The best classification results are obtained with a fine Gaussian SVM and the L0 Feature Selection algorithm. Compared to expert predictions made on the same dataset, our method achieves better classification results. We obtain significantly better results on correctly predicting non-resectability (+17%) compared to an expert, which is essential for patient treatment (negative predictive value). Moreover, our predictions of resectability exceed expert predictions by approximately 3% (positive predictive value).
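The pipeline below is a sketch under assumptions: an RBF ("fine Gaussian") SVM on intensity/shape/texture feature vectors, with a generic univariate selector standing in for the paper's L0 feature selection; the number of selected features and the kernel scale are illustrative only.

```python
# Sketch: scaled features, a stand-in feature selector, and an RBF SVM.
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),          # stand-in for L0 feature selection
    SVC(kernel="rbf", gamma=1.0),          # small kernel scale ~ "fine Gaussian"
)
# model.fit(X_train, y_resectable)         # 50 patients: 29 resectable, 21 not
# print(model.predict(X_test))
```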
Computerized Medical Imaging and Graphics | 2018
Fons van der Sommen; Sander R. Klomp; Anne-Fré Swager; Sveta Zinger; Wouter L. Curvers; Jacques J. Bergman; Erik J. Schoon
The incidence of Barrett cancer is increasing rapidly and current screening protocols often miss the disease at an early, treatable stage. Volumetric Laser Endomicroscopy (VLE) is a promising new tool for finding this type of cancer early, capturing a full circumferential scan of Barrett's Esophagus (BE) up to 3-mm depth. However, the interpretation of these VLE scans can be complicated, due to the large number of cross-sectional images and the subtle grayscale variations. Therefore, algorithms for automated analysis of VLE data can offer a valuable contribution to its overall interpretation. In this study, we broadly investigate the potential of Computer-Aided Detection (CADe) for the identification of early Barrett's cancer using VLE. We employ a histopathologically validated set of ex-vivo VLE images for evaluating and comparing a considerable set of widely used image features and machine learning algorithms. In addition, we show that incorporating clinical knowledge in feature design leads to superior classification performance and additional benefits, such as low complexity and fast computation time. Furthermore, we identify an optimal tissue depth for classification of 0.5-1.0 mm and propose an extension to the evaluated features that exploits this phenomenon, improving their predictive properties for cancer detection in VLE data. Finally, we compare the performance of the CADe methods with the classification accuracy of two VLE experts. With a maximum Area Under the Curve (AUC) in the range of 0.90-0.93 for the evaluated features and machine learning methods, versus an AUC of 0.81 for the medical experts, our experiments show that computer-aided methods can achieve considerably better performance than trained human observers in the analysis of VLE data.
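One concrete element of this work, restricting analysis to the reported 0.5-1.0 mm depth band, is easy to illustrate; the sketch below crops a VLE cross-section to that band before feature extraction, with the pixel spacing and helper names being assumptions.

```python
# Sketch: keep only the rows of a VLE frame in the informative depth band.
import numpy as np

def crop_depth_band(vle_image, mm_per_pixel, top_mm=0.5, bottom_mm=1.0):
    """Restrict a cross-sectional frame to the given tissue-depth range."""
    top = int(round(top_mm / mm_per_pixel))
    bottom = int(round(bottom_mm / mm_per_pixel))
    return vle_image[top:bottom, :]

# band = crop_depth_band(frame, mm_per_pixel=0.01)   # hypothetical pixel spacing
# features = glcm_features(band)                     # e.g., the GLCM sketch above
```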
computer-based medical systems | 2017
Alexandros Rikos; Fons van der Sommen; Anne-Fré Swager; Sveta Zinger; Erik J. Schoon; Wouter L. Curvers; Jacques J. Bergman
This paper explores the feasibility of using multi-frame analysis to increase the classification performance of machine learning methods for cancer detection in Volumetric Laser Endomicroscopy (VLE). VLE is a novel and promising modality for the detection of neoplasia in patients with Barrett's Esophagus (BE). It produces hundreds of high-resolution, cross-sectional images of the esophagus and offers considerable advantages compared to current methods. While some recent studies have proposed cancer detection algorithms for single VLE frames, the study described in this paper is the first to make use of VLE volumes for the differentiation between dysplastic and non-dysplastic tissue. We explore the use of various voting schemes for a broad range of features and classification methods. Our results demonstrate that multi-frame analysis leads to superior performance, irrespective of the chosen feature-classifier combination. By using multi-frame analysis with straightforward voting methods, the Area Under the receiver operating Curve (AUC) is increased by an average of over 12% compared to using single VLE frames. When only considering methods that achieve expert performance or higher (AUC ≥ 0.81), an even larger performance improvement of up to 16.9% is observed. Furthermore, with many feature/classifier combinations showing AUC values ranging from 0.90 to 0.98, our experiments indicate that computer-aided methods can considerably outperform medical experts, who demonstrate an AUC of 0.81 using a recently proposed clinical prediction model.
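The sketch below shows two of the simplest possible voting schemes over neighbouring frames, hard majority voting and averaged probabilities, with a hypothetical array of frame-level scores; the paper's actual voting schemes may differ.

```python
# Sketch of multi-frame voting with hypothetical per-frame probabilities.
import numpy as np

frame_probs = np.array([0.62, 0.71, 0.44, 0.80, 0.58])   # P(dysplasia) per frame

majority_vote = int((frame_probs > 0.5).mean() > 0.5)    # hard voting over frames
mean_prob_vote = int(frame_probs.mean() > 0.5)           # soft voting over frames
print(majority_vote, mean_prob_vote)
```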