
Publication


Featured research published by Hannah Gilmore.


IEEE Transactions on Medical Imaging | 2016

Stacked Sparse Autoencoder (SSAE) for Nuclei Detection on Breast Cancer Histopathology Images

Jun Xu; Lei Xiang; Qingshan Liu; Hannah Gilmore; Jianzhong Wu; Jinghai Tang; Anant Madabhushi

Automated nuclear detection is a critical step for a number of computer-assisted, pathology-related image analysis algorithms, such as automated grading of breast cancer tissue specimens. The Nottingham Histologic Score system is highly correlated with the shape and appearance of breast cancer nuclei in histopathological images. However, automated nucleus detection is complicated by 1) the large number of nuclei and the size of high resolution digitized pathology images, and 2) the variability in size, shape, appearance, and texture of the individual nuclei. Recently, there has been interest in the application of "Deep Learning" strategies for classification and analysis of big image data. Histopathology, given its size and complexity, represents an excellent use case for application of deep learning strategies. In this paper, a Stacked Sparse Autoencoder (SSAE), an instance of a deep learning strategy, is presented for efficient nuclei detection on high-resolution histopathological images of breast cancer. The SSAE learns high-level features from pixel intensities alone in order to identify distinguishing features of nuclei. A sliding window operation is applied to each image in order to represent image patches via high-level features obtained from the autoencoder, which are then fed to a classifier that categorizes each image patch as nuclear or non-nuclear. Across a cohort of 500 histopathological images (2200 × 2200) and approximately 3500 manually segmented individual nuclei serving as the ground truth, the SSAE was shown to achieve an F-measure of 84.49% and an average area under the precision-recall curve (AveP) of 78.83%. The SSAE approach also outperformed nine other state-of-the-art nuclear detection strategies.
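The pipeline described above (sliding window over the image, stacked encoder features per patch, then a nuclear/non-nuclear classifier) can be sketched as follows. This is a minimal illustration, not the paper's trained model: the 34×34 patch size, layer widths, and random weights are assumptions; a real SSAE would pretrain each layer with a reconstruction-plus-sparsity objective before classification.

```python
import numpy as np

def sliding_window_patches(image, patch=34, stride=17):
    """Collect flattened square patches from a grayscale image."""
    h, w = image.shape
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch].ravel())
            coords.append((y, x))
    return np.array(patches), coords

def encode(patches, layers):
    """Pass patches through a stack of sigmoid encoder layers (the 'SSAE')."""
    a = patches
    for W, b in layers:
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))
    return a

rng = np.random.default_rng(0)
image = rng.random((100, 100))            # stand-in for a histopathology tile
X, coords = sliding_window_patches(image)

# Two encoder layers, 1156 -> 400 -> 225 (illustrative sizes, random weights).
dims = [34 * 34, 400, 225]
layers = [(rng.normal(0, 0.01, (dims[i], dims[i + 1])), np.zeros(dims[i + 1]))
          for i in range(len(dims) - 1)]
features = encode(X, layers)

# A downstream classifier (e.g. softmax) would label each patch as
# nuclear vs. non-nuclear from these high-level features.
```

Each window position yields one feature vector, so the classifier's patch-level decisions can be mapped back to image coordinates via `coords` to localize nuclei.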


Neurocomputing | 2016

A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images

Jun Xu; Xiaofei Luo; Guanhao Wang; Hannah Gilmore; Anant Madabhushi

Epithelial (EP) and stromal (ST) tissues are two major tissue types in histological images. Automated segmentation or classification of EP and ST tissues is important when developing computerized systems for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Network (DCNN) based feature learning approach is presented to automatically segment or classify EP and ST regions from digitized tumor tissue microarrays (TMAs). Current approaches are based on handcrafted feature representations, such as color, texture, and Local Binary Patterns (LBP), for classifying the two regions. Compared to handcrafted-feature-based approaches, which involve task-dependent representations, DCNN is an end-to-end feature extractor that may be directly learned from the raw pixel intensity values of EP and ST tissues in a data-driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two types of tissues. In this work we compare DCNN-based models with three handcrafted-feature-extraction-based approaches on two different datasets, which consist of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistochemically (IHC) stained images of colorectal cancer, respectively. The DCNN-based feature learning approach was shown to have an F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on the two H&E stained (NKI and VGH) and IHC stained datasets, respectively. Our DCNN-based approach was shown to outperform the three handcrafted-feature-extraction-based approaches in terms of the classification of EP and ST regions.
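The three summary statistics quoted above (F1, ACC, MCC) can all be computed from the binary confusion matrix; the labels below are a toy example, not the paper's data.

```python
import math

def classification_metrics(y_true, y_pred):
    """F1 score, accuracy, and Matthews Correlation Coefficient
    from binary (0/1) labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == 1 and p == 1 for t, p in pairs)
    tn = sum(t == 0 and p == 0 for t, p in pairs)
    fp = sum(t == 0 and p == 1 for t, p in pairs)
    fn = sum(t == 1 and p == 0 for t, p in pairs)
    f1 = 2 * tp / (2 * tp + fp + fn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return f1, acc, mcc

# Toy example: 6 image regions, two misclassified
f1, acc, mcc = classification_metrics([1, 1, 1, 0, 0, 0],
                                      [1, 1, 0, 0, 0, 1])
```

Unlike accuracy, MCC stays informative when the two tissue classes are imbalanced, which is common in TMA region classification.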


Journal of Medical Imaging | 2014

Mitosis detection in breast cancer pathology images by combining handcrafted and convolutional neural network features

Haibo Wang; Angel Cruz-Roa; Ajay Basavanhally; Hannah Gilmore; Natalie Shih; Michael Feldman; John E. Tomaszewski; Fabio A. González; Anant Madabhushi

Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is the mitotic count, which involves quantifying the number of cells in the process of dividing (i.e., undergoing mitosis) at a specific point in time. Currently, mitosis counting is done manually by a pathologist looking at multiple high power fields (HPFs) on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical, or textural attributes of mitoses or features learned with convolutional neural networks (CNN). Although handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely supervised feature generation methods, there is an appeal in attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. We present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color, and texture features).
By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing the performance by leveraging the disconnected feature sets. Evaluation on the public ICPR12 mitosis dataset, which has 226 mitoses annotated on 35 HPFs (400× magnification) by several pathologists and 15 testing HPFs, yielded an F-measure of 0.7345. Our approach is accurate, fast, and requires fewer computing resources than existing methods, making it feasible for clinical use.
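The cascade's final step, fusing the light CNN's probability for each candidate patch with a probability from the handcrafted-feature classifier, can be sketched as a weighted score combination. The weight and decision threshold below are illustrative assumptions, not the values learned in the paper.

```python
import numpy as np

def cascaded_score(p_cnn, p_hand, w=0.6, threshold=0.58):
    """Weighted fusion of CNN-derived and handcrafted-feature probabilities
    for candidate mitosis patches (weight and threshold are illustrative)."""
    p = w * np.asarray(p_cnn) + (1 - w) * np.asarray(p_hand)
    return p, p >= threshold

# Hypothetical probabilities for four candidate patches
p_cnn = np.array([0.9, 0.2, 0.7, 0.4])
p_hand = np.array([0.8, 0.1, 0.3, 0.9])
scores, is_mitosis = cascaded_score(p_cnn, p_hand)
```

The fusion only needs to run on candidates the first-stage CNN does not confidently resolve, which is what keeps the cascade computationally light.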


Proceedings of SPIE | 2014

Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks

Angel Cruz-Roa; Ajay Basavanhally; Fabio A. González; Hannah Gilmore; Michael Feldman; Shridar Ganesan; Natalie Shih; John E. Tomaszewski; Anant Madabhushi

This paper presents a deep learning (DL) approach for automatic detection and visual analysis of invasive ductal carcinoma (IDC) tissue regions in whole slide images (WSI) of breast cancer (BCa). Deep learning approaches are learn-from-data methods involving computational modeling of the learning process. This is similar to how the human brain works: different interpretation levels or layers extract the most representative and useful features, resulting in a hierarchical learned representation. These methods have been shown to outpace traditional approaches on some of the most challenging problems in areas such as speech recognition and object detection. Invasive breast cancer detection is a time-consuming and challenging task, primarily because it involves a pathologist scanning large swathes of benign regions to ultimately identify the areas of malignancy. Precise delineation of IDC in WSI is crucial to the subsequent estimation of tumor aggressiveness (grading) and prediction of patient outcome. DL approaches are particularly adept at handling these types of problems, especially if a large number of samples are available for training, which also ensures the generalizability of the learned features and classifier. The DL framework in this paper extends a number of convolutional neural networks (CNN) for visual semantic analysis of tumor regions for diagnosis support. The CNN is trained over a large number of image patches (tissue regions) from WSI to learn a hierarchical part-based representation. The method was evaluated on a WSI dataset from 162 patients diagnosed with IDC; 113 slides were selected for training and 49 slides were held out for independent testing. Ground truth for quantitative evaluation was provided via delineation of the cancerous regions by an expert pathologist on the digitized slides. The experimental evaluation was designed to measure classifier accuracy in detecting IDC tissue regions in WSI.
Our method yielded the best quantitative results for automatic detection of IDC regions in WSI in terms of F-measure and balanced accuracy (71.80% and 84.23%, respectively), in comparison with an approach using handcrafted image features (color, texture, edges, and nuclear texture and architecture) and a Random Forest classifier for invasive tumor classification. The best-performing handcrafted features were the fuzzy color histogram (67.53%, 78.74%) and the RGB histogram (66.64%, 77.24%). Our results also suggest that at least some of the tissue classification mistakes (false positives and false negatives) were due less to any fundamental problem with the approach than to the inherent limitations in obtaining a very highly granular annotation of the diseased area of interest from an expert pathologist.
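The two evaluation numbers reported above come from standard definitions: F-measure is the harmonic mean of precision and recall, and balanced accuracy averages sensitivity and specificity. A minimal sketch from confusion-matrix counts (the counts below are a toy example):

```python
def f_measure_and_balanced_accuracy(tp, fp, tn, fn):
    """F-measure (harmonic mean of precision and recall) and
    balanced accuracy (mean of sensitivity and specificity)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # sensitivity
    specificity = tn / (tn + fp)
    f = 2 * precision * recall / (precision + recall)
    bac = (recall + specificity) / 2
    return f, bac

# Toy counts for illustration
f, bac = f_measure_and_balanced_accuracy(tp=8, fp=2, tn=6, fn=4)
```

Balanced accuracy is the natural companion metric here because benign tissue vastly outnumbers invasive tissue in a WSI, so plain accuracy would be dominated by the majority class.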


Genome Research | 2015

Optimizing sparse sequencing of single cells for highly multiplex copy number profiling

Timour Baslan; Jude Kendall; Brian Ward; Hilary Cox; Anthony Leotta; Linda Rodgers; Michael Riggs; Sean D'Italia; Guoli Sun; Mao Yong; Kristy Miskimen; Hannah Gilmore; Michael Saborowski; Nevenka Dimitrova; Alexander Krasnitz; Lyndsay Harris; Michael Wigler; James Hicks

Genome-wide analysis at the level of single cells has recently emerged as a powerful tool to dissect genome heterogeneity in cancer, neurobiology, and development. To be truly transformative, single-cell approaches must affordably accommodate large numbers of single cells. This is feasible in the case of copy number variation (CNV), because CNV determination requires only sparse sequence coverage. We have used a combination of bioinformatic and molecular approaches to optimize single-cell DNA amplification and library preparation for highly multiplexed sequencing, yielding a method that can produce genome-wide CNV profiles of up to a hundred individual cells on a single lane of an Illumina HiSeq instrument. We apply the method to human cancer cell lines and biopsied cancer tissue, thereby illustrating its efficiency, reproducibility, and power to reveal underlying genetic heterogeneity and clonal phylogeny. The capacity of the method to facilitate the rapid profiling of hundreds to thousands of single-cell genomes represents a key step in making single-cell profiling an easily accessible tool for studying cell lineage.
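The core computation that makes sparse coverage sufficient for CNV calling is simple: bin mapped read positions along the genome, then normalize bin counts to the genome-wide median to get a copy-number ratio. The sketch below is a deliberate simplification with fixed-width bins and an assumed diploid baseline; real pipelines (including the one optimized here) use variable-width bins corrected for mappability and GC content, followed by segmentation.

```python
import numpy as np

def cnv_profile(read_positions, genome_length, n_bins=500, ploidy=2):
    """Sparse-coverage copy-number profile: bin read start positions,
    normalize to the genome-wide median bin count, and scale to an
    assumed diploid baseline (all parameter choices are illustrative)."""
    counts, _ = np.histogram(read_positions, bins=n_bins,
                             range=(0, genome_length))
    return ploidy * counts / np.median(counts)

# Simulated uniform reads over a 1 Mb "genome": ~100 reads per bin,
# roughly the sparse coverage regime the method targets.
rng = np.random.default_rng(1)
reads = rng.uniform(0, 1e6, size=50_000)
profile = cnv_profile(reads, genome_length=1e6)
```

Because only the relative counts matter, even ~100 reads per bin give a usable profile, which is what allows a hundred single-cell libraries to share one sequencing lane.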


Proceedings of SPIE | 2014

Cascaded ensemble of convolutional neural networks and handcrafted features for mitosis detection

Haibo Wang; Angel Cruz-Roa; Ajay Basavanhally; Hannah Gilmore; Natalie Shih; Michael Feldman; John E. Tomaszewski; Fabio A. González; Anant Madabhushi

Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is the mitotic count, which involves quantifying the number of cells in the process of dividing (i.e., undergoing mitosis) at a specific point in time. Currently, mitosis counting is done manually by a pathologist looking at multiple high power fields on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical, or textural attributes of mitoses or features learned with convolutional neural networks (CNN). While handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely supervised feature generation methods, there is an appeal in attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. In this paper, we present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color, and texture features).
By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing performance by leveraging the disconnected feature sets. Evaluation on the public ICPR12 mitosis dataset, which has 226 mitoses annotated on 35 High Power Fields (HPFs, 400× magnification) by several pathologists and 15 testing HPFs, yielded an F-measure of 0.7345. Apart from this being the second-best performance ever recorded for this MITOS dataset, our approach is faster and requires fewer computing resources than existing methods, making it feasible for clinical use.


Scientific Reports | 2017

Accurate and reproducible invasive breast cancer detection in whole-slide images: A Deep Learning approach for quantifying tumor extent

Angel Cruz-Roa; Hannah Gilmore; Ajay Basavanhally; Michael Feldman; Shridar Ganesan; Natalie Shih; John E. Tomaszewski; Fabio A. González; Anant Madabhushi

With the increasing ability to routinely and rapidly digitize whole slide images with slide scanners, there has been interest in developing computerized image analysis algorithms for automated detection of disease extent from digital pathology images. The manual identification of the presence and extent of breast cancer by a pathologist is critical for patient management, tumor staging, and assessing treatment response. However, this process is tedious and subject to inter- and intra-reader variability. For computerized methods to be useful as decision support tools, they need to be resilient to data acquired from different sources, different staining and cutting protocols, and different scanners. The objective of this study was to evaluate the accuracy and robustness of a deep learning-based method to automatically identify the extent of invasive tumor on digitized images. Here, we present a new method that employs a convolutional neural network for detecting the presence of invasive tumor on whole slide images. Our approach involves training the classifier on nearly 400 exemplars from multiple sites and scanners, and then independently validating on almost 200 cases from The Cancer Genome Atlas. Our approach yielded a Dice coefficient of 75.86%, a positive predictive value of 71.62%, and a negative predictive value of 96.77% in pixel-by-pixel evaluation against manually annotated regions of invasive ductal carcinoma.
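The three pixel-by-pixel agreement measures quoted above (Dice coefficient, PPV, NPV) compare a predicted tumor mask against the pathologist's annotation. A minimal sketch on toy binary masks:

```python
import numpy as np

def pixelwise_metrics(pred, truth):
    """Dice coefficient, positive predictive value, and negative predictive
    value between two binary masks, evaluated pixel by pixel."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    dice = 2 * tp / (2 * tp + fp + fn)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return dice, ppv, npv

# Toy 2x2 masks: one true-positive pixel, one false-positive pixel
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
dice, ppv, npv = pixelwise_metrics(pred, truth)
```

Note that Dice ignores true negatives entirely, which is why it is preferred over accuracy for tumor masks, where most of the slide is background.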


Breast Cancer Research | 2017

Intratumoral and peritumoral radiomics for the pretreatment prediction of pathological complete response to neoadjuvant chemotherapy based on breast DCE-MRI

Nathaniel Braman; Maryam Etesami; Prateek Prasanna; Christina Dubchuk; Hannah Gilmore; Pallavi Tiwari; Donna Plecha; Anant Madabhushi

Background: In this study, we evaluated the ability of radiomic textural analysis of intratumoral and peritumoral regions on pretreatment breast cancer dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to predict pathological complete response (pCR) to neoadjuvant chemotherapy (NAC).

Methods: A total of 117 patients who had received NAC were retrospectively analyzed. Within the intratumoral and peritumoral regions of T1-weighted contrast-enhanced MRI scans, a total of 99 radiomic textural features were computed at multiple phases. Feature selection was used to identify a set of top pCR-associated features from within a training set (n = 78), which were then used to train multiple machine learning classifiers to predict the likelihood of pCR for a given patient. Classifiers were then independently tested on 39 patients. Experiments were repeated separately among hormone receptor-positive and human epidermal growth factor receptor 2-negative (HR+, HER2−) and triple-negative or HER2+ (TN/HER2+) tumors via threefold cross-validation to determine whether receptor status-specific analysis could improve classification performance.

Results: Among all patients, a combined intratumoral and peritumoral radiomic feature set yielded a maximum AUC of 0.78 ± 0.030 within the training set and 0.74 within the independent testing set using a diagonal linear discriminant analysis (DLDA) classifier. Receptor status-specific feature discovery and classification enabled improved prediction of pCR, yielding maximum AUCs of 0.83 ± 0.025 within the HR+, HER2− group using DLDA and 0.93 ± 0.018 within the TN/HER2+ group using a naive Bayes classifier. In HR+, HER2− breast cancers, non-pCR was characterized by elevated peritumoral heterogeneity during initial contrast enhancement. However, TN/HER2+ tumors were best characterized by a speckled enhancement pattern within the peritumoral region of nonresponders. Radiomic features were found to strongly predict pCR independent of choice of classifier, suggesting their robustness as response predictors.

Conclusions: Through a combined intratumoral and peritumoral radiomics approach, we could successfully predict pCR to NAC from pretreatment breast DCE-MRI, both with and without a priori knowledge of receptor status. Further, our findings suggest that the radiomic features most predictive of response vary across different receptor subtypes.
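The diagonal linear discriminant analysis (DLDA) classifier named above assigns each case to the nearest class mean under a diagonal covariance, i.e., correlations between radiomic features are ignored, which keeps the model stable when features outnumber patients. Below is a simplified sketch (a production DLDA would weight the pooled per-feature variance by class sizes; the toy "radiomic" data here are synthetic).

```python
import numpy as np

class DiagonalLDA:
    """Minimal diagonal LDA: nearest class mean under a shared
    per-feature variance (a simplification for illustration)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # pooled per-feature variance (unweighted class average, simplified)
        self.var_ = np.mean([X[y == c].var(axis=0) for c in self.classes_],
                            axis=0) + 1e-9      # guard against zero variance
        return self

    def predict(self, X):
        # squared distance to each class mean, scaled per feature
        d = ((X[:, None, :] - self.means_[None, :, :]) ** 2
             / self.var_).sum(axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Synthetic stand-ins for pCR (1) vs. non-pCR (0) radiomic feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 5)), rng.normal(1.0, 0.1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
model = DiagonalLDA().fit(X, y)
```

With only 78 training patients and 99 candidate features, the diagonal restriction is what makes the covariance estimate tractable; full LDA would require inverting a 99×99 covariance from fewer samples than features.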


Archives of Pathology & Laboratory Medicine | 2015

Comparison of Oncotype DX Recurrence Score by Histologic Types of Breast Carcinoma

Philip E. Bomeisl; Cheryl L. Thompson; Lyndsay Harris; Hannah Gilmore

CONTEXT Oncotype DX (ODX) is a widely used commercial assay that estimates the risk of distant recurrence and may predict the benefit of chemotherapy in a subset of breast cancers. Some studies have shown the ability to predict the Oncotype DX recurrence score (ODXRS) based on routinely reported pathologic features; however, there are limited data correlating specific histologic types of breast cancer with the ODXRS. OBJECTIVE To compare the ODXRS across specific histologic types of breast cancer. DESIGN One hundred eighty-four cases were sent for ODXRS testing, and the results were compared with histologic type and grade. RESULTS The highest average ODXRS was seen in invasive ductal carcinoma with micropapillary features (29), followed by invasive ductal carcinoma not otherwise specified (mean = 19.4, SD = 11.6), invasive mucinous carcinoma (mean = 17.2, SD = 5.9), invasive lobular carcinoma (mean = 15.7, SD = 7.2), mixed ductal and lobular carcinoma (mean = 14.1, SD = 7.7), tubular carcinoma (10.0), and mixed ductal and mucinous carcinoma (mean = 8.0, SD = 4.2). Most tumors with a high ODXRS were grade 3 invasive ductal carcinomas, representing 13 of 20 cases (65%). Interestingly, 3 of the 4 cases of pure invasive mucinous carcinoma had an intermediate ODXRS. CONCLUSIONS Although the numbers are small, our findings raise further awareness of the relationship between histologic type, grade, and ODXRS in breast cancer. In some special histologic types of breast cancer, particularly those considered to follow either an excellent or a poor clinical course on histology alone, it is unclear whether ODXRS results are as meaningful as in carcinomas of no special type. Further investigation with larger numbers and outcome data is needed.


Scientific Reports | 2016

A Radio-genomics Approach for Identifying High Risk Estrogen Receptor-positive Breast Cancers on DCE-MRI: Preliminary Results in Predicting OncotypeDX Risk Scores

Tao Wan; B. Nicolas Bloch; Donna Plecha; Chery I L Thompson; Hannah Gilmore; C. Carl Jaffe; Lyndsay Harris; Anant Madabhushi

To identify computer-extracted imaging features for estrogen receptor (ER)-positive breast cancers on dynamic contrast-enhanced (DCE)-MRI that are correlated with the low and high OncotypeDX risk categories, we collected 96 ER-positive breast lesions with low (<18, N = 55) and high (>30, N = 41) OncotypeDX recurrence scores. Each lesion was quantitatively characterized via 6 shape features, 3 pharmacokinetic features, 4 enhancement kinetics features, 4 intensity kinetics features, 148 textural kinetics features, 5 dynamic histogram of oriented gradients (DHoG) features, and 6 dynamic local binary pattern (DLBP) features. The extracted features were evaluated by a linear discriminant analysis (LDA) classifier in terms of their ability to distinguish the low and high OncotypeDX risk categories. Classification performance was evaluated by the area under the receiver operating characteristic curve (Az). The DHoG and DLBP features achieved Az values of 0.84 and 0.80, respectively. The 6 top features identified via feature selection were subsequently combined with the LDA classifier to yield an Az of 0.87. Correlation analysis showed that DHoG (ρ = 0.85, P < 0.001) and DLBP (ρ = 0.83, P < 0.01) were significantly associated with the low and high risk classifications from the OncotypeDX assay. Our results indicate that computer-extracted texture features on DCE-MRI are highly correlated with the high and low OncotypeDX risk categories for ER-positive cancers.
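The Az statistic used throughout this study is the area under the ROC curve, which equals the probability that a randomly chosen high-risk case scores above a randomly chosen low-risk case (the Mann-Whitney formulation). A minimal sketch, with hypothetical classifier scores:

```python
def az(scores_pos, scores_neg):
    """Area under the ROC curve (Az) via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs where the positive case
    scores higher, counting ties as half a win."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier outputs for high-risk and low-risk lesions
high_risk_scores = [0.8, 0.3]
low_risk_scores = [0.5, 0.1]
result = az(high_risk_scores, low_risk_scores)
```

This pairwise formulation makes Az threshold-free: it depends only on the ranking of scores, not on any particular operating point of the classifier.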

Collaboration


Dive into Hannah Gilmore's collaboration.

Top Co-Authors

Anant Madabhushi (Case Western Reserve University)
Lyndsay Harris (Case Western Reserve University)
Vinay Varadan (Case Western Reserve University)
Kristy Miskimen (Case Western Reserve University)
Michael Feldman (University of Pennsylvania)
Donna Plecha (Case Western Reserve University)
Natalie Shih (University of Pennsylvania)