Maurice Samulski
Radboud University Nijmegen Medical Centre
Publications
Featured research published by Maurice Samulski.
IEEE Transactions on Medical Imaging | 2011
Maurice Samulski; Nico Karssemeijer
When reading mammograms, radiologists combine information from multiple views to detect abnormalities. Most computer-aided detection (CAD) systems, however, use primitive methods for inclusion of multiview context or analyze each view independently. Previous research found that the lesion-based detection performance of mammography CAD systems can be improved when correspondences between MLO and CC views are taken into account; case-based detection, however, did not improve. In this paper, we propose a new learning method for multiview CAD systems that is aimed at optimizing case-based detection performance. The method builds on a single-view lesion detection system and a correspondence classifier. The latter provides class probabilities for the various types of region pairs and correspondence features. The correspondence classifier output is used to bias the selection of training patterns for a multiview CAD system. In this way, training can be forced to focus on optimization of case-based detection performance. The method is applied to the problem of detecting malignant masses and architectural distortions. Experiments involve 454 mammograms, each consisting of four views with a malignant region visible in at least one of the views. To evaluate performance, five-fold cross-validation and FROC analysis were performed, with bootstrapping used for statistical analysis. A significant increase in case-based detection performance was found when the proposed method was used: mean sensitivity increased by 4.7% in the range of 0.01-0.5 false positives per image.
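The case-based FROC evaluation described above can be sketched as follows. The data layout (`tp_scores`, `fp_scores`) and the numbers are hypothetical, not from the paper; the key idea is that a case counts as detected if any true-positive region in any of its views scores at or above the threshold, while false positives are counted per image:

```python
import numpy as np

def case_froc_point(cases, threshold):
    """One operating point of a case-based FROC curve:
    (mean false positives per image, fraction of cases detected).
    A case counts as detected if any TP region in any view scores
    at or above the threshold."""
    n_images = sum(len(c['fp_scores']) for c in cases)
    fps = sum(int(np.sum(np.asarray(img) >= threshold))
              for c in cases for img in c['fp_scores'])
    detected = sum(any(s >= threshold for s in c['tp_scores']) for c in cases)
    return fps / n_images, detected / len(cases)

# Two toy four-view cases; scores are illustrative only.
cases = [
    {'tp_scores': [0.9],      'fp_scores': [[0.2, 0.4], [0.1], [0.3], [0.05]]},
    {'tp_scores': [0.3, 0.6], 'fp_scores': [[0.7], [0.2], [0.1], [0.4]]},
]
print(case_froc_point(cases, 0.5))  # → (0.125, 1.0)
print(case_froc_point(cases, 0.8))  # → (0.0, 0.5)
```

Sweeping the threshold traces out the FROC curve over which the mean sensitivity in an interval of false-positive rates can be averaged.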
Physics in Medicine and Biology | 2009
Marina Velikova; Maurice Samulski; Peter J. F. Lucas; Nico Karssemeijer
Mammographic reading by radiologists requires the comparison of at least two breast projections (views) for the detection and diagnosis of breast abnormalities. Despite their reported potential to support radiologists, most mammographic computer-aided detection (CAD) systems have a major limitation: in contrast to radiologists' practice, computerized systems analyze each view independently. To tackle this problem, we propose a Bayesian network framework for multi-view mammographic analysis, with a main focus on breast cancer detection at the patient level. We use causal-independence models and context modeling over the whole breast, represented as links between the regions detected by a single-view CAD system in the two breast projections. The proposed approach is implemented and tested on screening mammograms of 1063 patients, of whom 385 had breast cancer. The single-view CAD system is used as a benchmark for comparison. The results show that our multi-view modeling leads to significantly better performance in discriminating between normal and cancerous patients. We also demonstrate the potential of our multi-view system for selecting the most suspicious cases.
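A minimal sketch of the causal-independence idea is the noisy-OR gate, one common member of that model class: the breast-level effect is present unless every detected region independently fails to indicate it. The per-region probabilities below are illustrative, not taken from the paper:

```python
def noisy_or(p_causes, leak=0.0):
    """Noisy-OR causal-independence gate: the effect (cancer at breast or
    patient level) is present unless every cause independently fails."""
    p_absent = 1.0 - leak
    for p in p_causes:
        p_absent *= 1.0 - p
    return 1.0 - p_absent

# Per-region suspiciousness from a single-view CAD system in the MLO and
# CC projections (illustrative numbers only)
mlo_regions = [0.12, 0.55]
cc_regions = [0.48]
print(round(noisy_or(mlo_regions + cc_regions), 3))  # → 0.794
```

The attraction of such gates is that the combined probability is computed in linear time in the number of regions, which keeps inference over many candidate links tractable.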
Radiology | 2013
Rianne Hupse; Maurice Samulski; Marc Lobbes; Ritse M. Mann; Roel Mus; Gerard J. den Heeten; David Beijerinck; Ruud M. Pijnappel; Carla Boetes; Nico Karssemeijer
PURPOSE To compare the effectiveness of an interactive computer-aided detection (CAD) system, in which CAD marks and their associated suspiciousness scores remain hidden unless their location is queried by the reader, with that of traditional CAD prompts used in current clinical practice for the detection of malignant masses on full-field digital mammograms. MATERIALS AND METHODS The requirement for institutional review board approval was waived for this retrospective observer study. Nine certified screening radiologists and three residents who were trained in breast imaging read 200 studies (63 studies containing at least one screen-detected mass, 17 false-negative studies, 20 false-positive studies, and 100 normal studies) twice, once with CAD prompts and once with interactive CAD. Localized findings were reported and scored by the readers. In the prompted mode, findings were recorded before and after activation of CAD. The partial area under the location receiver operating characteristic (ROC) curve for an interval of low false-positive fractions typical for screening, from 0 to 0.2, was computed for each reader and each mode. Differences in reader performance were analyzed with statistical software. RESULTS The average partial area under the location ROC curve with unaided reading was 0.57, and it increased to 0.62 with interactive CAD, while it remained unaffected by prompts. The difference in reader performance for unaided reading versus interactive CAD was statistically significant (P = .009). CONCLUSION When used as decision support, interactive use of CAD for malignant masses on mammograms may be more effective than the current use of CAD, which is aimed at the prevention of perceptual oversights.
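The partial-area figure of merit used in the study can be illustrated with a simplified, non-localized ROC version. This sketch assumes plain binary labels and scores rather than the location-ROC data of the actual experiment, and does not group tied scores:

```python
import numpy as np

def partial_auc(labels, scores, max_fpr=0.2):
    """Trapezoidal partial area under the ROC curve for FPR in [0, max_fpr].
    Score ties are not grouped, which is acceptable for a sketch."""
    order = np.argsort(-np.asarray(scores, float))
    y = np.asarray(labels)[order]
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / (len(y) - y.sum())))
    tpr_at_max = np.interp(max_fpr, fpr, tpr)  # clip the curve at max_fpr
    keep = fpr <= max_fpr
    fpr_c = np.append(fpr[keep], max_fpr)
    tpr_c = np.append(tpr[keep], tpr_at_max)
    # trapezoidal rule over the clipped curve
    return float(np.sum(np.diff(fpr_c) * (tpr_c[1:] + tpr_c[:-1]) / 2.0))

# Perfect separation: the partial area reaches its maximum, max_fpr * 1.0
print(partial_auc([1, 1, 0, 0], [0.9, 0.8, 0.7, 0.6]))  # → 0.2
```

Restricting the area to the low false-positive interval focuses the comparison on the operating region that matters in screening, where readers recall only a small fraction of normal cases.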
Medical Image Analysis | 2012
Marina Velikova; Peter J. F. Lucas; Maurice Samulski; Nico Karssemeijer
The recent increased interest in information fusion methods for solving complex problems, such as those in image analysis, is motivated by the wish to better exploit the multitude of information available from different sources to enhance decision-making. In this paper, we propose a novel method that advances the state of the art in fusing image information from different views, based on a special class of probabilistic graphical models called causal independence models. The strength of this method is its ability to systematically and naturally capture uncertain domain knowledge while performing information fusion in a computationally efficient way. We examine the value of the method for mammographic analysis and demonstrate its advantages in terms of explicit knowledge representation and accuracy (increases of at least 6.3% and 5.2% in true positive detection rates at 5% and 10% false positive rates) in comparison with previous single-view and multi-view systems, and with benchmark fusion methods such as naïve Bayes and logistic regression.
Artificial Intelligence in Medicine | 2013
Marina Velikova; Peter J. F. Lucas; Maurice Samulski; Nico Karssemeijer
OBJECTIVES To obtain a balanced view on the role and place of expert knowledge and learning methods in building Bayesian networks for medical image interpretation. METHODS AND MATERIALS The interpretation of mammograms was selected as the example medical image interpretation problem. Medical image interpretation has its own common standards and procedures, and the impact of these on two complementary methods for Bayesian network construction was explored. First, methods for the discretisation of continuous features were investigated, yielding multinomial distributions that were compared to the original Gaussian probabilistic parameters of the network. Second, the structure of a manually constructed Bayesian network was tested by structure learning from image data. The image data used for the research came from screening mammographic examinations of 795 patients, of whom 344 had cancer. RESULTS The experimental results show an interesting interplay between machine learning results and background knowledge in medical image interpretation. Networks with discretised data led to better classification performance (an increase of up to 11.7% in detected cancers), easier interpretation, and a better fit to the data in comparison with the expert-based Bayesian network with Gaussian probabilistic parameters; Gaussian distributions are often used in medical image interpretation because of the continuous nature of many image features. The learnt structures supported many of the expert-originated relationships but also revealed some novel relationships between the mammographic features. Using discretised features and performing structure learning on the mammographic data further improved cancer detection performance, by up to 17%, compared to the manually constructed Bayesian network model.
CONCLUSION Finding the right balance between expert knowledge and data-derived knowledge, at the level of both network structure and parameters, is key to using Bayesian networks for medical image interpretation. A balanced approach to building Bayesian networks for image interpretation yields more accurate and more understandable models.
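Equal-frequency (quantile) binning is one standard way to obtain the multinomial, discretised features discussed above. The function and toy data here are illustrative; the paper's actual discretisation method may differ:

```python
import numpy as np

def quantile_bins(x, n_bins=4):
    """Equal-frequency discretisation: place bin edges at quantiles of the
    data and return integer bin labels, giving multinomial-valued features."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

feature = np.array([0.1, 0.2, 0.3, 0.4, 1.0, 1.1, 1.2, 5.0])  # toy values
print(quantile_bins(feature))  # → [0 0 1 1 2 2 3 3]
```

Unlike equal-width binning, quantile binning keeps the bin counts balanced even for skewed features such as the 5.0 outlier above, which is one reason discretised networks can fit heavy-tailed image features better than a single Gaussian.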
Physics in Medicine and Biology | 2011
Guido van Schie; Christine Tanner; Peter R. Snoeren; Maurice Samulski; Karin Leifland; Matthew G. Wallis; Nico Karssemeijer
To improve cancer detection in mammography, breast examinations usually consist of two views per breast. In order to combine information from both views, corresponding regions in the views need to be matched. In 3D digital breast tomosynthesis (DBT), this may be a difficult and time-consuming task for radiologists, because many slices have to be inspected individually. For multiview computer-aided detection (CAD) systems, matching corresponding regions is an essential step that needs to be automated. In this study, we developed an automatic method to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a spatial transformation. First we match a model of a compressed breast to the tomosynthesis view containing a point of interest. Then we estimate the location of the corresponding point in the ipsilateral view by assuming that this model was decompressed, rotated and compressed again. In this study, we use a relatively simple, elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT case. We investigate three different methods to match the compression model to the data by using automatic segmentation of the pectoral muscle, breast tissue and nipple. For validation, we annotated 208 landmarks in both views of a total of 146 imaged breasts of 109 different patients and applied our method to each location. The best results are obtained by using the centre of gravity of the breast to define the central axis of the model, around which the breast is assumed to rotate between views. Results show a median 3D distance between the actual location and the estimated location of 14.6 mm, a good starting point for a registration method or a feature-based local search method to link suspicious regions in a multiview CAD system. 
Approximately half of the estimated locations are at most one slice away from the actual location, which makes the method useful as a mammographic workstation tool for radiologists to interactively find corresponding locations in ipsilateral tomosynthesis views.
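The rotation step of the decompress-rotate-compress transformation can be sketched as rotating a 3D point about the model's central axis. This uses Rodrigues' rotation formula and ignores the compression and decompression of the deformable sphere model, so it is only a crude stand-in for the method in the paper; axis and angle here are hypothetical inputs:

```python
import numpy as np

def transfer_point(p, axis_point, axis_dir, angle_deg):
    """Rotate a 3D point about a central axis through `axis_point`
    (Rodrigues' rotation formula)."""
    c = np.asarray(axis_point, float)
    k = np.asarray(axis_dir, float)
    k /= np.linalg.norm(k)
    v = np.asarray(p, float) - c
    a = np.radians(angle_deg)
    v_rot = (v * np.cos(a)
             + np.cross(k, v) * np.sin(a)
             + k * np.dot(k, v) * (1.0 - np.cos(a)))
    return c + v_rot

# A 90-degree rotation about the z-axis maps (1, 0, 0) to (0, 1, 0)
print(np.round(transfer_point([1, 0, 0], [0, 0, 0], [0, 0, 1], 90), 6))  # → [0. 1. 0.]
```

In the paper's best-performing variant the axis would pass through the breast's centre of gravity, with the angle determined by the difference between the MLO and CC compression directions.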
International Journal of Remote Sensing | 2007
Maurice Samulski; Nico Karssemeijer; Peter J. F. Lucas; Perry Groot
In this paper, we compare two state-of-the-art classification techniques for characterizing masses as either benign or malignant, using a dataset of 271 cases (131 benign and 140 malignant), each containing both an MLO and a CC view. For suspect regions in a digitized mammogram, 12 of 81 calculated image features were selected for investigating the classification accuracy of support vector machines (SVMs) and Bayesian networks (BNs). Additional techniques for improving their performance were included in the comparison: the Manly transformation, for achieving a normal distribution of image features, and principal component analysis (PCA), for reducing the dimensionality of the data. The performance of the classifiers was evaluated with receiver operating characteristic (ROC) analysis. The classifiers were trained and tested using k-fold cross-validation (k=10). It was found that the area under the ROC curve (Az) of the BN increased significantly (p=0.0002) with the Manly transformation, from Az = 0.767 to Az = 0.795. The Manly transformation did not result in a significant change for SVMs, and the difference between SVMs and BNs on the transformed dataset was not statistically significant (p=0.78). Applying PCA improved the classification accuracy of the naive Bayesian classifier from Az = 0.767 to Az = 0.786. The difference in classification performance between BNs and SVMs after applying PCA was small and not statistically significant (p=0.11).
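The Manly transformation mentioned above is the exponential transformation y = (e^(λx) − 1)/λ, proposed by Manly as an alternative to the Box-Cox family that also handles negative feature values. A minimal implementation, where the choice of λ (typically fitted per feature by maximizing a normality criterion) is left to the caller:

```python
import numpy as np

def manly(x, lam):
    """Manly's exponential transformation, y = (exp(lam * x) - 1) / lam,
    used to make skewed features more normally distributed.
    lam = 0 reduces to the identity transformation."""
    x = np.asarray(x, float)
    if lam == 0:
        return x.copy()
    return (np.exp(lam * x) - 1.0) / lam
```

A negative λ compresses the right tail of positively skewed features, which is why the transformation helps classifiers, such as the Gaussian-parameterized BN here, that assume approximately normal inputs.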
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Maurice Samulski; Nico Karssemeijer
Most current CAD systems detect suspicious mass regions independently in single views. In this paper we present a method to match corresponding regions in mediolateral oblique (MLO) and craniocaudal (CC) mammographic views of the breast. For every possible combination of mass regions in the MLO and CC views, a number of features are computed, such as the difference in distance of a region to the nipple, a texture similarity measure, the gray-scale correlation, and the likelihood of malignancy of both regions computed by single-view analysis. In previous research, linear discriminant analysis was used to discriminate between correct and incorrect links. In this paper we investigate whether the performance can be improved by employing a statistical method in which four classes are distinguished, defined by the combinations of view (MLO/CC) and pathology (TP/FP) labels. We use distance-weighted k-nearest-neighbor density estimation to estimate the likelihood of a region combination. Next, a correspondence score is calculated as the likelihood that the region combination is a TP-TP link. The method was tested on 412 cases with a malignant lesion visible in at least one of the views. In 82.4% of the cases a correct link could be established between the TP detections in both views. In future work, we will use the framework presented here to develop a context-dependent region matching scheme that takes the number and likelihood of possible alternatives into account. We expect that more accurate determination of matching probabilities will lead to improved CAD performance.
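Distance-weighted k-NN class-probability estimation of the kind described above can be sketched as follows. The 1/(d+ε) weighting scheme and the toy link features are assumptions for illustration, not the paper's exact estimator:

```python
import numpy as np

def knn_class_probs(query, X, y, k=5, eps=1e-6):
    """Distance-weighted k-NN estimate of class probabilities: each of the
    k nearest training links votes with weight 1 / (distance + eps)."""
    d = np.linalg.norm(X - query, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + eps)
    classes = np.unique(y)
    probs = np.array([w[y[nn] == c].sum() for c in classes])
    return classes, probs / probs.sum()

# Toy 2D link features, with label 1 standing in for a TP-TP link
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 1, 1])
classes, probs = knn_class_probs(np.array([0.0, 0.05]), X, y, k=2)
print(classes, probs)  # the query lies entirely in the class-0 cluster
```

In a matching scheme like the one in the paper, the correspondence score of a candidate MLO-CC link would then be the estimated probability of the TP-TP class.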
IWDM '08: Proceedings of the 9th International Workshop on Digital Mammography | 2008
Nico Karssemeijer; Andrea Hupse; Maurice Samulski; Michiel Kallenberg; Carla Boetes; Gerard den Heeten
A mammographic screening workstation has been developed in which CAD results for mass detection are presented in a fundamentally different way than in current practice. Instead of displaying all CAD findings as prompts, the reader can probe image regions for the presence of CAD information. The aim of the system is to help radiologists with decision making rather than to avoid oversight errors. In a preliminary observer study we evaluated the effect of using the interactive CAD system. Four non-radiologists and two radiologists participated. Each observer read 60 cases twice, once with and once without CAD. The set included 20 cases with subtle cancers that were missed at screening. It was found that reader performance increased significantly with interactive use of CAD.
Knowledge Representation for Health Care | 2010
Niels Radstake; Peter J. F. Lucas; Marina Velikova; Maurice Samulski
Medical image interpretation is a difficult problem for which human interpreters, in this case radiologists, are normally better equipped than computers. However, there are many clinical situations where radiologists' performance is suboptimal, creating a need for computer-based interpretation as assistance. A typical example of such a problem is the interpretation of mammograms for breast-cancer detection. For this paper, we investigated the use of Bayesian networks as a knowledge-representation formalism, where the structure was drafted by hand and the probabilistic parameters were learnt from image data. Although this method allowed expert knowledge from radiologists to be taken into account explicitly, the performance was suboptimal. We subsequently carried out extensive experiments with Bayesian-network structure learning to critique the Bayesian network. Through these experiments we gained much insight into the problem of knowledge representation and concluded that structure-learning results can be conceptually clear and of help in designing a Bayesian network for medical image interpretation.