Publications
Featured research published by Sharbell Y. Hashoul.
Computer methods in biomechanics and biomedical engineering. Imaging & visualization | 2018
Anastasia Dubrovina; Pavel Kisilev; Boris Ginsburg; Sharbell Y. Hashoul; Ron Kimmel
Automatic tissue classification from medical images is an important step in pathology detection and diagnosis. Here, we deal with mammography images and present a novel supervised deep learning-based framework for region classification into semantically coherent tissues. The proposed method uses a Convolutional Neural Network (CNN) to learn discriminative features automatically. We overcome the difficulty posed by a medium-sized database by training the CNN in an overlapping patch-wise manner. To accelerate pixel-wise automatic class prediction, we use convolutional layers instead of the classical fully connected layers. This approach results in significantly faster computation while preserving classification accuracy. The proposed method was tested on annotated mammography images and demonstrates promising image segmentation and tissue classification results.
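The key trick in that abstract, replacing fully connected layers with convolutional ones for dense prediction, rests on an equivalence: an FC layer applied to every patch in a sliding window computes exactly what the same weights compute when reshaped into a convolution kernel. A minimal numpy sketch of that equivalence (toy single-layer classifier, not the paper's network):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def patchwise_scores(img, w, b, p):
    """Naive sliding window: score every p x p patch with an FC layer."""
    H, W = img.shape
    out = np.zeros((H - p + 1, W - p + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + p, j:j + p].ravel() @ w + b
    return out

def convolutional_scores(img, w, b, p):
    """The same FC weights reshaped into a conv kernel: one dense pass."""
    windows = sliding_window_view(img, (p, p))   # (H-p+1, W-p+1, p, p)
    return np.tensordot(windows, w.reshape(p, p)) + b
```

Both functions return identical score maps; in a real framework the second form maps onto a single optimized convolution call, which is where the reported speedup comes from.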
medical image computing and computer assisted intervention | 2015
Miri Erihov; Sharon Alpert; Pavel Kisilev; Sharbell Y. Hashoul
Identifying regions that break the symmetry within an organ or between paired organs is widely used to detect tumors across various modalities. Algorithms for detecting these regions often align the images and compare the corresponding regions using some set of features. This makes them prone to misalignment errors and to inaccuracies in the estimation of the symmetry axis. Moreover, they are susceptible to errors induced by both inter- and intra-image correlations. We build on recent advances in image saliency and extend them to handle pairs of images, introducing an algorithm for image cross-saliency. Our cross-saliency approach can estimate regions that break symmetry both within an organ and between paired organs without using image alignment, and can handle large errors in symmetry-axis estimation. Since our approach is based on internal patch distributions, it returns only statistically informative regions and can be applied as-is to various modalities. We demonstrate the application of our approach on brain MRI and breast mammograms.
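The patch-distribution idea can be illustrated with a brute-force toy: score each patch of one image by its distance to the nearest patch of the paired image, so patches with no good match anywhere in the counterpart come out cross-salient, with no alignment or symmetry axis involved. This is a minimal sketch of the concept, not the paper's algorithm:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def cross_saliency(img_a, img_b, p=3):
    """Score each p x p patch of img_a by the distance to its nearest
    patch anywhere in img_b; unmatched patches score high."""
    pa = sliding_window_view(img_a, (p, p)).reshape(-1, p * p)
    pb = sliding_window_view(img_b, (p, p)).reshape(-1, p * p)
    # brute-force nearest neighbour (fine for toy images; real use
    # would need approximate nearest-neighbour search)
    d2 = ((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1)
    h = img_a.shape[0] - p + 1
    w = img_a.shape[1] - p + 1
    return np.sqrt(d2.min(axis=1)).reshape(h, w)
```

For identical inputs the map is zero everywhere; an anomaly present in only one image lights up regardless of where in the other image the search looks.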
International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis | 2016
Pavel Kisilev; Eli Sason; Ella Barkan; Sharbell Y. Hashoul
Automatic detection and classification of lesions in medical images remains one of the most important and challenging problems. In this paper, we present a new multi-task convolutional neural network (CNN) approach for detection and semantic description of lesions in diagnostic images. The proposed CNN-based architecture is trained to generate and rank rectangular regions of interest (ROIs) surrounding suspicious areas. The highest-scoring candidates are fed into the subsequent network layers, which are trained to generate a semantic description of the remaining ROIs.
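The rank-then-describe handoff between the two stages amounts to keeping only the top-scoring candidates. A trivial sketch of that selection step (function and argument names are illustrative, not the paper's API):

```python
def top_k_rois(candidates, scores, k=5):
    """Rank candidate ROIs by score and keep the k best
    for the semantic-description stage."""
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return [candidates[i] for i in order[:k]]
```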
medical image computing and computer assisted intervention | 2017
Guy Amit; Omer Hadad; Sharon Alpert; Tal Tlusty; Yaniv Gur; Rami Ben-Ari; Sharbell Y. Hashoul
To interpret a breast MRI study, a radiologist has to examine over 1000 images, and integrate spatial and temporal information from multiple sequences. The automated detection and classification of suspicious lesions can help reduce the workload and improve accuracy. We describe a hybrid mass-detection algorithm that combines unsupervised candidate detection with deep learning-based classification. The detection algorithm first identifies image-salient regions, as well as regions that are cross-salient with respect to the contralateral breast image. We then use a convolutional neural network (CNN) to classify the detected candidates into true-positive and false-positive masses. The network uses a novel multi-channel image representation; this representation encompasses information from the anatomical and kinetic image features, as well as saliency maps. We evaluated our algorithm on a dataset of MRI studies from 171 patients, with 1957 annotated slices of malignant (59%) and benign (41%) masses. Unsupervised saliency-based detection provided a sensitivity of 0.96 with 9.7 false-positive detections per slice. Combined with CNN classification, the number of false positive detections dropped to 0.7 per slice, with 0.85 sensitivity. The multi-channel representation achieved higher classification performance compared to single-channel images. The combination of domain-specific unsupervised methods and general-purpose supervised learning offers advantages for medical imaging applications, and may improve the ability of automated algorithms to assist radiologists.
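At a high level, the hybrid pipeline described above is a high-sensitivity proposal stage followed by a learned pruning stage. A minimal sketch of that composition, where `propose` and `score` are hypothetical stand-ins for the saliency-based detector and the CNN classifier:

```python
def detect_masses(image, propose, score, threshold=0.5):
    """Two-stage sketch: an unsupervised detector proposes candidate
    regions, then a classifier score prunes false positives."""
    candidates = propose(image)
    return [c for c in candidates if score(c) >= threshold]
```

The division of labour mirrors the reported numbers: the proposal stage is tuned for sensitivity (0.96, at 9.7 false positives per slice), and the classifier absorbs the specificity burden (down to 0.7 false positives per slice at 0.85 sensitivity).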
international symposium on biomedical imaging | 2017
Omer Hadad; Ran Bakalo; Rami Ben-Ari; Sharbell Y. Hashoul; Guy Amit
Automatic detection and classification of lesions in medical images is a desirable goal, with numerous clinical applications. In breast imaging, multiple modalities such as X-ray, ultrasound and MRI are often used in the diagnostic workflow. Training robust classifiers for each modality is challenging due to the typically small size of the available datasets. We propose to use cross-modal transfer learning to improve the robustness of the classifiers. We demonstrate the potential of this approach on a problem of identifying masses in breast MRI images, using a network that was trained on mammography images. Comparison between cross-modal and cross-domain transfer learning showed that the former improved the classification performance, with overall accuracy of 0.93 versus 0.90, while the accuracy of de-novo training was 0.94. Using transfer learning within the medical imaging domain may help to produce standard pre-trained shared models, which can be utilized to solve a variety of specific clinical problems.
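Cross-modal transfer of the kind described here typically keeps the pretrained feature extractor frozen and retrains only the classification head on the new modality. A toy numpy sketch of that split, where the "frozen" extractor is a fixed projection standing in for the transferred mammography-trained layers (everything here is illustrative, not the paper's implementation):

```python
import numpy as np

def extract_features(x, w_frozen):
    """Frozen feature extractor standing in for transferred conv layers."""
    return np.tanh(x @ w_frozen)

def train_head(feats, labels, lr=0.5, steps=1000):
    """Retrain only a logistic-regression head on the new modality."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
        g = p - labels                               # gradient of log-loss
        w -= lr * feats.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b
```

Because only the small head is trained, far less target-modality data is needed than for the de-novo training the abstract compares against.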
Proceedings of SPIE | 2017
Guy Amit; Rami Ben-Ari; Omer Hadad; Einat Monovich; Noa Granot; Sharbell Y. Hashoul
Diagnostic interpretation of breast MRI studies requires meticulous work and a high level of expertise. Computerized algorithms can assist radiologists by automatically characterizing the detected lesions. Deep learning approaches have shown promising results in natural image classification, but their applicability to medical imaging is limited by the shortage of large annotated training sets. In this work, we address automatic classification of breast MRI lesions using two different deep learning approaches. We propose a novel image representation for dynamic contrast enhanced (DCE) breast MRI lesions, which combines the morphological and kinetics information in a single multi-channel image. We compare two classification approaches for discriminating between benign and malignant lesions: training a designated convolutional neural network and using a pre-trained deep network to extract features for a shallow classifier. The domain-specific trained network provided higher classification accuracy, compared to the pre-trained model, with an area under the ROC curve of 0.91 versus 0.81, and an accuracy of 0.83 versus 0.71. Similar accuracy was achieved in classifying benign lesions, malignant lesions, and normal tissue images. The trained network was able to improve accuracy by using the multi-channel image representation, and was more robust to reductions in the size of the training set. A small-size convolutional neural network can learn to accurately classify findings in medical images using only a few hundred images from a few dozen patients. With sufficient data augmentation, such a network can be trained to outperform a pre-trained out-of-domain classifier. Developing domain-specific deep-learning models for medical imaging can facilitate technological advancements in computer-aided diagnosis.
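One way to realize a morphology-plus-kinetics multi-channel representation, in the spirit of what the abstract describes, is to stack a pre-contrast frame with wash-in and wash-out difference images so a standard 2D network sees both shape and enhancement dynamics in a single input. A hedged sketch (channel choices and normalization are assumptions, not the paper's exact encoding):

```python
import numpy as np

def dce_to_multichannel(pre, post_early, post_late):
    """Encode a DCE-MRI series as one 3-channel image:
    ch0 morphology (pre-contrast), ch1 wash-in, ch2 wash-out."""
    wash_in = post_early - pre
    wash_out = post_late - post_early
    img = np.stack([pre, wash_in, wash_out], axis=-1)
    # scale each channel to [0, 1] for the network input
    mn = img.min(axis=(0, 1), keepdims=True)
    mx = img.max(axis=(0, 1), keepdims=True)
    return (img - mn) / (mx - mn + 1e-8)
```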
international symposium on biomedical imaging | 2016
Rami Ben-Ari; Aviad Zlotnick; Sharbell Y. Hashoul
Breast tissue segmentation is a fundamental task in digital mammography. Commonly, this segmentation is applied prior to breast density estimation. However, observations show a strong correlation between the segmentation parameters and the breast density, resulting in a chicken-and-egg problem. This paper presents a new method for breast segmentation, based on training with weakly labeled data, namely breast density categories. To this end, a fuzzy-logic module is employed to compute an adaptive parameter for segmentation. The suggested scheme consists of a feedback stage, where a preliminary segmentation is used to extract domain-specific features from an early estimate of the tissue regions. Selected features are then fed into the fuzzy-logic module to yield an updated threshold for segmentation. Our evaluation is based on 50 fibroglandular-delineated images and on breast density classification, obtained on a large data set of 1243 full-field digital mammograms. The data set contained images from different devices. The proposed analysis provided an average Jaccard spatial similarity coefficient of 0.4, with improvement of this measure in 70% of the cases where the suggested module was applied. In breast density classification, an average classification accuracy of 75% was obtained, which significantly improved on the baseline method (67.4%). The major improvement is obtained in low breast densities, where higher threshold levels reject false-positive regions. These results show promise for the clinical application of this method in breast segmentation, without the need for laborious tissue annotation.
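The feedback idea, segment once, measure a density feature, then let fuzzy rules nudge the threshold, can be sketched with triangular membership functions. The breakpoints, the single feature, and the rule consequents below are made up for illustration; the paper's module uses its own learned features and rules:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-8),
                              (c - x) / (c - b + 1e-8)), 0.0, 1.0)

def adaptive_threshold(img, base_thresh=0.5):
    """Feedback sketch: a preliminary segmentation yields a density
    feature, and fuzzy rules adjust the segmentation threshold."""
    dense_frac = (img > base_thresh).mean()  # feature from preliminary pass
    # memberships of "low", "medium", "high" density (invented breakpoints)
    low = tri(dense_frac, -0.1, 0.0, 0.4)
    med = tri(dense_frac, 0.2, 0.5, 0.8)
    high = tri(dense_frac, 0.6, 1.0, 1.1)
    # rule consequents: lower / keep / raise; defuzzify by weighted mean
    adj = (low * -0.1 + med * 0.0 + high * 0.1) / (low + med + high + 1e-8)
    return base_thresh + adj
```

The qualitative behaviour matches the abstract's observation: dense (bright) breasts get a higher threshold, which rejects false-positive regions.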
medical image computing and computer assisted intervention | 2015
Guy Amit; Sharbell Y. Hashoul; Pavel Kisilev; Boaz Ophir; Eugene Walach; Aviad Zlotnick
Mammography is the first-line modality for screening and diagnosis of breast cancer. Following the common practice of radiologists to examine two mammography views, we propose a fully automated dual-view analysis framework for breast mass detection in mammograms. The framework combines unsupervised segmentation and random-forest classification to detect and rank candidate masses in cranial-caudal (CC) and mediolateral-oblique (MLO) views. Subsequently, it estimates correspondences between pairs of candidates in the two views. The performance of the method was evaluated using a publicly available full-field digital mammography database (INbreast). Dual-view analysis provided area under the ROC curve of 0.94, with detection sensitivity of 87% at specificity of 90%, which significantly improved single-view performance (72% sensitivity at 90% specificity, 78% specificity at 87% sensitivity, P<0.05). One-to-one mapping of candidate masses from two views facilitated correct estimation of the breast quadrant in 77% of the cases. The proposed method may assist radiologists to efficiently identify and classify breast masses.
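Estimating correspondences between CC and MLO candidates is often done by comparing each candidate's radial distance from the nipple, which is roughly preserved across the two views. A greedy one-to-one matching sketch under that assumption (the distance feature and tolerance are illustrative, not the paper's correspondence model):

```python
def match_candidates(cc_dists, mlo_dists, tol=0.1):
    """Greedy one-to-one pairing of CC and MLO candidates by their
    normalized radial distance from the nipple."""
    pairs, used = [], set()
    for i, d_cc in enumerate(cc_dists):
        best, best_gap = None, tol
        for j, d_mlo in enumerate(mlo_dists):
            gap = abs(d_cc - d_mlo)
            if j not in used and gap <= best_gap:
                best, best_gap = j, gap
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```

A candidate that matches in both views is far more likely to be a true mass, which is how the dual-view analysis lifts performance over either view alone.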
international symposium on biomedical imaging | 2017
Rami Ben-Ari; Ayelet Akselrod-Ballin; Leonid Karlinsky; Sharbell Y. Hashoul
Detection of architectural distortion (AD) is important for ruling out possible pre-malignant lesions in the breast, but due to its subtlety it is often missed on screening mammograms. In this work we suggest a novel AD detection method based on region-proposal convolutional neural networks (R-CNN). When data is scarce, as is typically the case in the medical domain, R-CNN yields poor results. In this study, we address this shortcoming with a new R-CNN method that applies a pretrained network to candidate regions guided by clinical observations. We test our method on the publicly available DDSM data set, with comparison to the latest Faster R-CNN and previous works. Our detection accuracy allows binary image classification (normal vs. containing AD) with over 80% sensitivity and specificity, and yields 0.46 false positives per image at an 83% true-positive rate for localization. These measures significantly improve on the best results in the literature.
Computer methods in biomechanics and biomedical engineering. Imaging & visualization | 2017
Ayelet Akselrod-Ballin; Leonid Karlinsky; Sharon Alpert; Sharbell Y. Hashoul; Rami Ben-Ari; Ella Barkan
A novel system for detection and classification of masses in breast mammography is introduced. The system integrates a breast segmentation module together with a modified region-based convolutional network to obtain detection and classification of masses according to BI-RADS score. While most previous work on mass identification in breast mammography has focused on classification, this study proposes to solve both the detection and the classification problems. The method is evaluated on a large multi-centre clinical data set and compared to ground truth annotated by expert radiologists. Preliminary experimental results show the high accuracy and efficiency obtained by the suggested network structure. As the volume and complexity of data in health care continue to grow, generalising such an approach may have a profound impact on patient care in many applications.