Assaf Hoogi
Stanford University
Publications
Featured research published by Assaf Hoogi.
Medical Image Analysis | 2016
Jocelyn Barker; Assaf Hoogi; Adrien Depeursinge; Daniel L. Rubin
Computerized analysis of digital pathology images offers the potential of improving clinical care (e.g. automated diagnosis) and catalyzing research (e.g. discovering disease subtypes). There are two key challenges thwarting computerized analysis of digital pathology images: first, whole slide pathology images are massive, making computerized analysis inefficient, and second, diverse tissue regions in whole slide images that are not directly relevant to the disease may mislead computerized diagnosis algorithms. We propose a method to overcome both of these challenges that utilizes a coarse-to-fine analysis of the localized characteristics in pathology images. An initial surveying stage analyzes the diversity of coarse regions in the whole slide image. This includes extraction of spatially localized features of shape, color and texture from tiled regions covering the slide. Dimensionality reduction of the features assesses the image diversity in the tiled regions and clustering creates representative groups. A second stage provides a detailed analysis of a single representative tile from each group. An Elastic Net classifier produces a diagnostic decision value for each representative tile. A weighted voting scheme aggregates the decision values from these tiles to obtain a diagnosis at the whole slide level. We evaluated our method by automatically classifying 302 brain cancer cases into two possible diagnoses (glioblastoma multiforme (N = 182) versus lower grade glioma (N = 120)) with an accuracy of 93.1% (p << 0.001). We also evaluated our method in the dataset provided for the 2014 MICCAI Pathology Classification Challenge, in which our method, trained and tested using 5-fold cross validation, produced a classification accuracy of 100% (p << 0.001). Our method showed high stability and robustness to parameter variation, with accuracy varying between 95.5% and 100% when evaluated for a wide range of parameters. 
Our approach may be useful to automatically differentiate between the two cancer subtypes.
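The two-stage pipeline above (tile the slide, extract localized features, reduce dimensionality, cluster into representative groups) can be sketched as follows. This is a minimal illustration, not the authors' code: the toy mean/std features, tile size, and cluster count are all stand-ins for the richer shape, color, and texture features described in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def tile_features(slide, tile=64):
    """Split a 2-D slide array into tiles and compute toy
    per-tile statistics (mean, std) as stand-in features."""
    h, w = slide.shape
    feats = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            t = slide[y:y + tile, x:x + tile]
            feats.append([t.mean(), t.std()])
    return np.asarray(feats)

def representative_groups(feats, n_groups=3, seed=0):
    """Reduce features with PCA, cluster tiles into groups, and
    return the index of the tile closest to each cluster centre."""
    z = PCA(n_components=min(2, feats.shape[1])).fit_transform(feats)
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed).fit(z)
    reps = []
    for k in range(n_groups):
        members = np.where(km.labels_ == k)[0]
        d = np.linalg.norm(z[members] - km.cluster_centers_[k], axis=1)
        reps.append(int(members[np.argmin(d)]))
    return reps

rng = np.random.default_rng(0)
slide = rng.random((256, 256))       # stand-in whole-slide image
feats = tile_features(slide)
reps = representative_groups(feats)  # one representative tile per group
```

In the full method, each representative tile would then be scored by an Elastic Net classifier and the decision values aggregated by weighted voting at the slide level.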
Journal of Digital Imaging | 2017
Zeynettin Akkus; Alfiia Galimzianova; Assaf Hoogi; Daniel L. Rubin; Bradley J. Erickson
Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
Medical Image Analysis | 2017
Assaf Hoogi; Christopher F. Beaulieu; Guilherme Moura da Cunha; Elhamy Heba; Claude B. Sirlin; Sandy Napel; Daniel L. Rubin
Highlights:
- We improve local level set segmentation with a novel method for estimating the adaptive local window size surrounding each contour point.
- The local window size is re-estimated at each point by an iterative process that considers the object scale, local and global texture statistics, and minimization of the cost function, generating an adaptive local window.
- The proposed method estimates the size of the local window directly from the image, not by testing specific scales from a predefined range, and thus requires no pyramid of pre-defined scales as input, removing any sensitivity to user input regarding scale sizes.
- The results indicate that our proposed method outperforms state-of-the-art methods in agreement with the manual marking and in segmentation robustness to contour initialization and to the energy model used.

Abstract: We propose a novel method, the adaptive local window, for improving the level set segmentation technique. The window is estimated separately for each contour point, over iterations of the segmentation process, and for each individual object. Our method considers the object scale, the spatial texture, and the changes of the energy functional over iterations. Global and local statistics are considered by calculating several gray-level co-occurrence matrices. We demonstrate the capabilities of the method in the domain of medical imaging by segmenting 233 images with liver lesions. To illustrate the strength of our method, those lesions were screened by either computed tomography or magnetic resonance imaging. Moreover, we analyzed images using three different energy models. We compared our method to a global level set segmentation, to a local framework that uses predefined fixed-size square windows, and to a local region-scalable fitting model. The results indicate that our proposed method outperforms the other methods in agreement with the manual marking and in robustness to contour initialization and to the energy model used. For complex lesions, such as low-contrast lesions, heterogeneous lesions, or lesions with a noisy background, our method shows significantly better segmentation, with an improvement of 0.25 ± 0.13 in Dice similarity coefficient compared with state-of-the-art fixed-size local windows (Wilcoxon, p < 0.001).
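The core idea of comparing local and global gray-level co-occurrence (GLCM) statistics to pick a per-point window size can be sketched as below. This is an illustrative simplification, not the published algorithm: the hand-rolled horizontal-neighbour GLCM, the contrast statistic, the candidate sizes, and the matching tolerance are all assumptions.

```python
import numpy as np

def glcm_contrast(patch, levels=8):
    """Contrast of a horizontal-neighbour gray-level co-occurrence
    matrix computed on a quantized patch (values assumed in [0, 1])."""
    q = np.minimum((patch * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= max(glcm.sum(), 1)
    idx = np.arange(levels)
    return float(((idx[:, None] - idx[None, :]) ** 2 * glcm).sum())

def adaptive_window(image, y, x, sizes=(5, 9, 15, 21)):
    """Return the smallest candidate half-window around (y, x) whose
    local GLCM contrast is close to the global contrast; the tolerance
    is an arbitrary illustrative choice."""
    global_c = glcm_contrast(image)
    for r in sizes:
        patch = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        if abs(glcm_contrast(patch) - global_c) < 0.5 * global_c + 1e-9:
            return r
    return sizes[-1]

rng = np.random.default_rng(1)
img = rng.random((64, 64))           # stand-in image
r = adaptive_window(img, 32, 32)     # half-window size for one contour point
```

In the actual method this estimate is refined iteratively alongside the level set evolution, rather than chosen once from a fixed candidate list.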
IEEE Journal of Biomedical and Health Informatics | 2016
Idit Diamant; Assaf Hoogi; Christopher F. Beaulieu; Mustafa Safdari; Eyal Klang; Michal Amitai; Hayit Greenspan; Daniel L. Rubin
The bag-of-visual-words (BoVW) method with construction of a single dictionary of visual words has been used previously for a variety of classification tasks in medical imaging, including the diagnosis of liver lesions. In this paper, we describe a novel method for automated diagnosis of liver lesions in portal-phase computed tomography (CT) images that improves over single-dictionary BoVW methods by using an image patch representation of the interior and boundary regions of the lesions. Our approach captures characteristics of the lesion margin and of the lesion interior by creating two separate dictionaries for the margin and the interior regions of lesions (“dual dictionaries” of visual words). Based on these dictionaries, visual word histograms are generated for each region of interest within the lesion and its margin. For validation of our approach, we used two datasets from two different institutions, containing CT images of 194 liver lesions (61 cysts, 80 metastasis, and 53 hemangiomas). The final diagnosis of each lesion was established by radiologists. The classification accuracy for the images from the two institutions was 99% and 88%, respectively, and 93% for a combined dataset. Our new BoVW approach that uses dual dictionaries shows promising results. We believe the benefits of our approach may generalize to other application domains within radiology.
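The dual-dictionary idea (separate visual-word dictionaries for the lesion interior and margin, with the two word histograms concatenated into one descriptor) can be sketched as below. This is a toy illustration, not the authors' implementation: the random patch vectors, dictionary sizes, and helper names are all stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(patches, n_words=4, seed=0):
    """Cluster flattened patches into a small visual-word dictionary."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(patches)

def word_histogram(patches, dictionary):
    """Normalized histogram of visual-word assignments for a patch set."""
    words = dictionary.predict(patches)
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
interior = rng.random((200, 16))   # stand-in flattened interior patches
margin = rng.random((120, 16))     # stand-in flattened margin patches

dict_in = build_dictionary(interior)   # interior dictionary
dict_mg = build_dictionary(margin)     # margin dictionary

# Descriptor for one lesion: concatenated interior + margin histograms,
# which would then feed a downstream classifier.
descriptor = np.concatenate([
    word_histogram(interior[:50], dict_in),
    word_histogram(margin[:30], dict_mg),
])
```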
IEEE Transactions on Medical Imaging | 2017
Assaf Hoogi; Arjun Subramaniam; Rishi Veerapaneni; Daniel L. Rubin
In this paper, we propose a generalization of the level set segmentation approach by supplying a novel method for adaptive estimation of active contour parameters. The presented segmentation method is fully automatic once the lesion has been detected. First, the location of the level set contour relative to the lesion is estimated using a convolutional neural network (CNN). The CNN has two convolutional layers for feature extraction, which lead into dense layers for classification. Second, the output CNN probabilities are used to adaptively calculate the parameters of the active contour functional during the segmentation process. Finally, the adaptive window size surrounding each contour point is re-estimated by an iterative process that considers lesion size and spatial texture. We demonstrate the capabilities of our method on a dataset of 164 MRI and 112 CT images of liver lesions that includes low-contrast and heterogeneous lesions as well as noisy images. To illustrate the strength of our method, we evaluated it against state-of-the-art CNN-based and active contour techniques. For all cases, our method, as assessed by Dice similarity coefficients, performed significantly better than currently available methods. An average Dice improvement of 0.27 was found across the entire dataset over all comparisons. We also analyzed two challenging subsets of lesions and obtained a significant Dice improvement of 0.24 with our method (p < 0.001, Wilcoxon).
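The second step, mapping the CNN's contour-location probabilities to adaptive active-contour parameters, can be sketched as below. This is an illustrative stand-in, not the paper's formulation: the probabilities are placeholders for a trained classifier's output, and the linear mapping to the two region weights is an arbitrary choice.

```python
import numpy as np

def adaptive_parameters(p_inside, p_on, p_outside):
    """Turn contour-location probabilities into (lambda_in, lambda_out)
    weights of a region-based active-contour energy: if the contour is
    likely inside the lesion, favour outward expansion, and vice versa."""
    probs = np.array([p_inside, p_on, p_outside], dtype=float)
    probs /= probs.sum()
    lam_in = 1.0 + probs[2]   # stronger inward force when contour is outside
    lam_out = 1.0 + probs[0]  # stronger outward force when contour is inside
    return lam_in, lam_out

# Contour estimated to be inside the lesion -> push outward harder.
lam_in, lam_out = adaptive_parameters(0.7, 0.2, 0.1)
```

In the full method these parameters are recomputed as segmentation proceeds, so the contour's force balance adapts at every iteration.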
Scientific Data | 2017
Rebecca Sawyer Lee; Francisco Gimenez; Assaf Hoogi; Kanae Kawai Miyake; Mia Gorovoy; Daniel L. Rubin
Published research results are difficult to replicate due to the lack of a standard evaluation data set in the area of decision support systems in mammography; most computer-aided diagnosis (CADx) and detection (CADe) algorithms for breast cancer in mammography are evaluated on private data sets or on unspecified subsets of public databases. This causes an inability to directly compare the performance of methods or to replicate prior results. We seek to resolve this substantial challenge by releasing an updated and standardized version of the Digital Database for Screening Mammography (DDSM) for evaluation of future CADx and CADe systems (sometimes referred to generally as CAD) research in mammography. Our data set, the CBIS-DDSM (Curated Breast Imaging Subset of DDSM), includes decompressed images, data selection and curation by trained mammographers, updated mass segmentation and bounding boxes, and pathologic diagnosis for training data, formatted similarly to modern computer vision data sets. The data set contains 753 calcification cases and 891 mass cases, providing a data-set size capable of analyzing decision support systems in mammography.
Journal of Neuroscience Methods | 2018
Xuerong Xiao; Maja Djurisic; Assaf Hoogi; Richard W. Sapp; Carla J. Shatz; Daniel L. Rubin
BACKGROUND: Dendritic spines are structural correlates of excitatory synapses in the brain. Their density and structure are shaped by experience, pointing to their role in memory encoding. Dendritic spine imaging, followed by manual analysis, is a primary way to study spines. However, an approach that analyses dendritic spine images in an automated and unbiased manner is needed to fully capture how spines change with normal experience, as well as in disease.
NEW METHOD: We propose an approach based on fully convolutional neural networks (FCNs) to detect dendritic spines in two-dimensional maximum-intensity projected images from confocal fluorescent micrographs. We experiment on both fractionally strided convolution and efficient sub-pixel convolutions. Dendritic spines far from the dendritic shaft are pruned by extraction of the shaft to reduce false positives. Performance of the proposed method is evaluated by comparing predicted spine positions to those manually marked by experts.
RESULTS: The averaged distance between predicted and manually annotated spines is 2.81 ± 2.63 pixels (0.082 ± 0.076 microns) and 2.87 ± 2.33 pixels (0.084 ± 0.068 microns) based on two different experts. FCN-based detection achieves F scores > 0.80 for both sets of expert annotations.
COMPARISON WITH EXISTING METHODS: Our method significantly outperforms two well-known software packages, NeuronStudio and Neurolucida (p < 0.02).
CONCLUSIONS: The FCN architectures used in this work allow for automated dendritic spine detection. Superior outcomes are possible even with small training datasets. The proposed method may generalize to other datasets on larger scales.
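The evaluation protocol above (comparing predicted spine positions to expert annotations by distance, then reporting an F score) can be sketched as below. The greedy nearest-neighbour matching and the distance threshold are our assumptions; the abstract does not specify the matching rule.

```python
import numpy as np

def match_and_score(pred, gt, max_dist=4.0):
    """Greedily match each predicted spine to the nearest unmatched
    annotation within max_dist pixels, then report precision/recall/F."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    unmatched = set(range(len(gt)))
    tp = 0
    for p in pred:
        if not unmatched:
            break
        nearest = min(unmatched, key=lambda i: np.linalg.norm(p - gt[i]))
        if np.linalg.norm(p - gt[nearest]) <= max_dist:
            unmatched.remove(nearest)
            tp += 1
    precision = tp / max(len(pred), 1)
    recall = tp / max(len(gt), 1)
    f = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f

# Three predictions, two annotations: two matches, one false positive.
p, r, f = match_and_score([[1, 1], [10, 10], [30, 30]],
                          [[2, 2], [11, 9]])
```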
International Ultrasonics Symposium | 2017
Ahmed El Kaffas; Assaf Hoogi; Albert Tseng; Jianhua Zhou; Huaijun Wang; Hersh Sagreiya; Dimitre Hristov; Daniel L. Rubin; Juergen K. Willmann
Volumetric dynamic contrast-enhanced ultrasound (DCE-US) can be used to yield 3D parametric maps to assess spatial changes in tumor perfusion heterogeneity during cancer treatment. Here, quantitative image features (texture and histogram-based features) extracted from 3D parametric maps were evaluated as surrogates of treatment response, and compared to conventional perfusion parameters.
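The histogram-based features mentioned above can be sketched as below. The exact feature set used in the study is not specified here, so these summary statistics of a 3-D parametric map inside a tumour mask are illustrative placeholders.

```python
import numpy as np

def histogram_features(param_map, mask):
    """Simple histogram statistics of a 3-D parametric perfusion map
    restricted to a tumour mask (stand-in feature set)."""
    vals = param_map[mask]
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "p10": float(np.percentile(vals, 10)),
        "p90": float(np.percentile(vals, 90)),
        "skewness": float(((vals - vals.mean()) ** 3).mean()
                          / max(vals.std() ** 3, 1e-12)),
    }

rng = np.random.default_rng(0)
vol = rng.random((8, 8, 8))   # stand-in 3-D parametric map
mask = vol > 0.2              # stand-in tumour mask
feats = histogram_features(vol, mask)
```

Texture features (e.g. co-occurrence statistics over the same masked region) would complement these first-order measures, as the abstract indicates.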
International Conference of the IEEE Engineering in Medicine and Biology Society | 2015
Yixuan Yuan; Assaf Hoogi; Christopher F. Beaulieu; Max Q.-H. Meng; Daniel L. Rubin
Computed tomography is a popular imaging modality for detecting abnormalities associated with abdominal organs such as the liver, kidney, and uterus. In this paper, we propose a novel weighted locality-constrained linear coding (LLC) method followed by a weighted max-pooling method to classify liver lesions into three classes: cysts, metastases, and hemangiomas. We first divide the lesions into same-size patches. Then, we extract the raw features from all patches, apply Principal Component Analysis (PCA), and use K-means to obtain a single LLC dictionary. Since the interior lesion patches and the boundary patches contribute different information to the image, we assign different weights to these two types of patches when computing the LLC codes. Moreover, a weighted max-pooling approach is also proposed to further account for the importance of these two types of patches in feature pooling. Experiments on 109 images of liver lesions were carried out to validate the proposed method. The proposed method achieves a best lesion classification accuracy of 96.33%, outperforming traditional image coding methods (the standard LLC method and the bag-of-words (BoW) method) and traditional features (local binary pattern (LBP) features, uniform LBP, and complete LBP), demonstrating that the proposed method provides better classification.
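The weighted coding and weighted max-pooling steps can be sketched as below. This is a simplified illustration, not the paper's method: the LLC solver is reduced to k-nearest-neighbour soft assignment, and the interior/boundary weights are arbitrary stand-ins for the learned values.

```python
import numpy as np

def nn_code(patch, dictionary, k=2):
    """Soft-assign a patch to its k nearest dictionary atoms
    (a simplified stand-in for locality-constrained linear coding)."""
    d = np.linalg.norm(dictionary - patch, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    code = np.zeros(len(dictionary))
    code[idx] = w / w.sum()
    return code

def weighted_max_pool(codes, weights):
    """Element-wise max over patch codes, each scaled by its
    interior/boundary weight before pooling."""
    return np.max(np.asarray(codes) * np.asarray(weights)[:, None], axis=0)

rng = np.random.default_rng(0)
dictionary = rng.random((6, 8))            # 6 atoms, 8-dim flattened patches
patches = rng.random((5, 8))
is_boundary = np.array([0, 0, 1, 1, 0])    # which patches lie on the margin
weights = np.where(is_boundary, 1.5, 1.0)  # heavier weight on boundary patches

codes = [nn_code(p, dictionary) for p in patches]
pooled = weighted_max_pool(codes, weights)  # lesion-level feature vector
```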
arXiv: Machine Learning | 2018
Haque Ishfaq; Assaf Hoogi; Daniel L. Rubin