Andrew Janowczyk
Case Western Reserve University
Publications
Featured research published by Andrew Janowczyk.
Medical Image Analysis | 2011
Jun Xu; Andrew Janowczyk; Sharat Chandran; Anant Madabhushi
In this paper, a minimally interactive high-throughput system that employs a color gradient based active contour model for rapid and accurate segmentation of multiple target objects in very large images is presented. While geodesic active contours (GAC) have become very popular tools for image segmentation, they tend to be sensitive to model initialization. A second limitation of GAC models is that the edge detector function typically involves use of gray scale gradients, with color images usually being converted to gray scale prior to gradient computation. For color images, however, the gray scale gradient image results in broken edges and weak boundaries, since the other channels are not exploited in the gradient computation. To cope with these limitations, we present a new GAC model that is driven by an accurate and rapid object initialization scheme: hierarchical normalized cuts (HNCut). HNCut draws its strength from the integration of two powerful segmentation strategies: mean shift clustering and normalized cuts. HNCut involves first defining a color swatch (typically a few pixels) from the object of interest. A multi-scale, mean shift coupled normalized cuts algorithm then rapidly yields an initial accurate detection of all objects in the scene corresponding to the colors in the swatch. This detection result provides the initial contour for a GAC model. The edge-detector function of the GAC model employs a local structure tensor based color gradient, obtained by calculating the local min/max variations contributed from each color channel. We show that the color gradient based edge-detector function results in more prominent boundaries compared to the classical gray scale gradient based function.
By integrating the HNCut initialization scheme with color gradient based GAC (CGAC), HNCut-CGAC embodies five unique and novel attributes: (1) efficiency in segmenting multiple target structures; (2) the ability to segment multiple objects from very large images; (3) minimal human interaction; (4) accuracy; and (5) reproducibility. A quantitative and qualitative comparison of the HNCut-CGAC model against other state of the art active contour schemes (including a Hybrid Active Contour model (Paragios-Deriche) and a region-based AC model (Rousson-Deriche)), across 196 digitized prostate histopathology images, suggests that HNCut-CGAC is able to outperform state of the art hybrid and region based AC techniques. Our results show that HNCut-CGAC is computationally efficient and may be easily applied to a variety of different problems and applications.
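The structure-tensor color gradient described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the per-pixel 2x2 tensor sums the outer products of each channel's gradient, and its eigenvalues give the local max/min color variations. The function name and the choice of sqrt(lam_max - lam_min) as the edge strength are assumptions for illustration.

```python
import numpy as np

def color_structure_tensor_edges(img):
    """Edge strength from a local structure tensor summed over color channels.

    img: H x W x C float array. Per pixel, J = sum_c grad(I_c) grad(I_c)^T;
    its eigenvalues lam_max / lam_min are the local max/min color variations.
    Returns sqrt(lam_max - lam_min), one common color edge-strength choice.
    """
    gy = np.gradient(img, axis=0)  # per-channel vertical gradients
    gx = np.gradient(img, axis=1)  # per-channel horizontal gradients
    # Structure tensor entries, accumulated over all color channels
    jxx = (gx * gx).sum(axis=2)
    jyy = (gy * gy).sum(axis=2)
    jxy = (gx * gy).sum(axis=2)
    # Closed-form eigenvalues of the 2x2 symmetric tensor
    tr = jxx + jyy
    disc = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    lam_max = 0.5 * (tr + disc)
    lam_min = 0.5 * (tr - disc)
    return np.sqrt(np.maximum(lam_max - lam_min, 0.0))
```

Because every channel contributes to the tensor, an edge present in only one channel (e.g. a red step on a gray background) still produces a strong response, which is exactly where a gray-scale gradient tends to break.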
IEEE Transactions on Biomedical Engineering | 2012
Andrew Janowczyk; Sharat Chandran; Rajendra Singh; Dimitra Sasaroli; George Coukos; Michael Feldman; Anant Madabhushi
We present a system for accurately quantifying the presence and extent of stain on account of a vascular biomarker on tissue microarrays. We demonstrate our flexible, robust, accurate, and high-throughput minimally supervised segmentation algorithm, termed hierarchical normalized cuts (HNCut), for the specific problem of quantifying the extent of vascular staining on ovarian cancer tissue microarrays. The high-throughput aspect of HNCut is driven by the use of a hierarchically represented data structure that allows us to merge two powerful image segmentation algorithms: a frequency-weighted mean shift and the normalized cuts algorithm. HNCut rapidly traverses a hierarchical pyramid, generated from the input image at various color resolutions, enabling the rapid analysis of large images (e.g., a 1500 × 1500 image in under 6 s on a standard 2.8-GHz desktop PC). HNCut is easily generalizable to other problem domains and only requires specification of a few representative pixels (swatch) from the object of interest in order to segment the target class. Across ten runs, the HNCut algorithm was found to have average true positive, false positive, and false negative rates (on a per pixel basis) of 82%, 34%, and 18%, in terms of overlap, when evaluated with respect to a pathologist annotated ground truth of the target region of interest. By comparison, a popular supervised classifier (probabilistic boosting trees, PBT) was only able to marginally improve on the true positive and false negative rates (84% and 14%) at the expense of a higher false positive rate (73%), with an additional computation time of 62% compared to HNCut. We also compared our scheme against a k-means clustering approach, which both the HNCut and PBT schemes were able to outperform.
Our success in accurately quantifying the extent of vascular stain on ovarian cancer TMAs suggests that HNCut could be a very powerful tool in digital pathology and bioinformatics applications where it could be used to facilitate computer-assisted prognostic predictions of disease outcome.
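The frequency-weighted mean shift that HNCut merges with normalized cuts can be sketched in one dimension. This is only an illustrative stand-in (the real algorithm operates on full color vectors inside a hierarchical pyramid): each query value is iteratively moved to the frequency-weighted mean of its bandwidth neighborhood, so values collapse onto the modes of the weighted color distribution.

```python
def weighted_mean_shift(points, weights, bandwidth, iters=30):
    """Frequency-weighted mean shift on scalar values (illustration only).

    points: data values; weights: how often each value occurs (its frequency);
    bandwidth: flat-kernel radius. Returns the mode each point converges to.
    """
    shifted = list(points)
    for _ in range(iters):
        nxt = []
        for p in shifted:
            num = den = 0.0
            # Weighted mean of all original points within the bandwidth
            for q, w in zip(points, weights):
                if abs(q - p) <= bandwidth:
                    num += w * q
                    den += w
            nxt.append(num / den)
        shifted = nxt
    return shifted
```

Weighting by frequency means the quantization step only needs one kernel evaluation per distinct color value rather than per pixel, which is where the speedup on large images comes from.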
Proceedings of SPIE | 2010
Jun Xu; Andrew Janowczyk; Sharat Chandran; Anant Madabhushi
While geodesic active contours (GAC) have become very popular tools for image segmentation, they are sensitive to model initialization. In order to obtain an accurate segmentation, the model typically needs to be initialized very close to the true object boundary. Apart from accuracy, automated initialization of the objects of interest is an important pre-requisite to being able to run the active contour model on very large images (such as those found in digitized histopathology). A second limitation of the GAC model is that the edge detector function is based on gray scale gradients, with color images typically being converted to gray scale prior to computing the gradient. For color images, however, the gray scale gradient results in broken edges and weak boundaries, since the other channels are not exploited in the gradient determination. In this paper we present a new geodesic active contour model that is driven by an accurate and rapid object initialization scheme: weighted mean shift normalized cuts (WNCut). WNCut draws its strength from the integration of two powerful segmentation strategies: mean shift clustering and normalized cuts. WNCut involves first defining a color swatch (typically a few pixels) from the object of interest. A multi-scale mean shift coupled normalized cuts algorithm then rapidly yields an initial accurate detection of all objects in the scene corresponding to the colors in the swatch. This detection result provides the initial boundary for the GAC model. The edge-detector function of the GAC model employs a local structure tensor based color gradient, obtained by calculating the local min/max variations contributed from each color channel (e.g., R,G,B or H,S,V). Our color gradient based edge-detector function results in more prominent boundaries compared to the classical gray scale gradient based function.
We evaluate segmentation results of our new WNCut initialized color gradient based GAC (WNCut-CGAC) model against a popular region-based model (Chan & Vese) on a total of 60 digitized histopathology images. Across these 60 images, the WNCut-CGAC model yielded an average overlap, sensitivity, specificity, and positive predictive value of 73%, 83%, 97%, and 84%, compared to the Chan & Vese model, which had corresponding values of 64%, 75%, 95%, and 72%. The rapid and accurate object initialization scheme (WNCut) and the color gradient make the WNCut-CGAC scheme an ideal segmentation tool for very large color imagery.
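The four figures reported above follow from per-pixel confusion counts. A minimal sketch (the function name is illustrative; "overlap" is taken here to mean the Jaccard index, a common convention, though the paper does not spell out its definition):

```python
def segmentation_scores(tp, fp, tn, fn):
    """Overlap (Jaccard), sensitivity, specificity and PPV from per-pixel
    true/false positive and negative counts against a ground-truth mask."""
    overlap = tp / (tp + fp + fn)        # intersection over union
    sensitivity = tp / (tp + fn)         # fraction of object pixels found
    specificity = tn / (tn + fp)         # fraction of background kept clean
    ppv = tp / (tp + fp)                 # fraction of detections that are real
    return overlap, sensitivity, specificity, ppv
```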
Scientific Reports | 2016
David Romo-Bucheli; Andrew Janowczyk; Hannah Gilmore; Eduardo Romero; Anant Madabhushi
Early stage estrogen receptor positive (ER+) breast cancer (BCa) treatment is based on the presumed aggressiveness and likelihood of cancer recurrence. Oncotype DX (ODX) and other gene expression tests have allowed for distinguishing the more aggressive ER+ BCa requiring adjuvant chemotherapy from the less aggressive cancers benefiting from hormonal therapy alone. However, these tests are expensive, tissue destructive, and require specialized facilities. Interestingly, BCa grade has been shown to be correlated with the ODX risk score. Unfortunately, Bloom-Richardson (BR) grade determined by pathologists can be variable. A constituent category in BR grading is tubule formation. This study aims to develop a deep learning classifier to automatically identify tubule nuclei from whole slide images (WSI) of ER+ BCa, the hypothesis being that the ratio of tubule nuclei to the overall number of nuclei (a tubule formation indicator, TFI) correlates with the corresponding ODX risk categories. This correlation was assessed in 7513 fields extracted from 174 WSI. The results suggest that low ODX/BR cases have a larger TFI than high ODX/BR cases (p < 0.01). The low ODX/BR cases also presented a larger TFI than that obtained for the rest of the cases (p < 0.05). Finally, the high ODX/BR cases have a significantly smaller TFI than that obtained for the rest of the cases (p < 0.01).
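The TFI itself is a simple ratio, and the group comparison can be illustrated with a nonparametric test. The paper does not specify which statistical test produced its p-values, so the permutation test below is only an illustrative stand-in; function names are not from the paper.

```python
import random

def tfi(tubule_nuclei, total_nuclei):
    """Tubule formation indicator: fraction of nuclei belonging to tubules."""
    return tubule_nuclei / total_nuclei

def permutation_pvalue(group_a, group_b, n_perm=2000, seed=0):
    """Two-sided permutation test on the difference of group means.

    Illustrative stand-in for the paper's (unspecified) significance test:
    shuffle the pooled TFIs and count how often a random split produces a
    mean difference at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n))
        if diff >= observed:
            hits += 1
    return hits / n_perm
```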
medical image computing and computer assisted intervention | 2009
Andrew Janowczyk; Sharat Chandran; Rajendra Singh; Dimitra Sasaroli; George Coukos; Michael Feldman; Anant Madabhushi
Research has shown that tumor vascular markers (TVMs) may serve as potential OCa biomarkers for prognosis prediction. One such TVM is ESM-1, which can be visualized by staining ovarian Tissue Microarrays (TMA) with an antibody to ESM-1. The ability to quickly and quantitatively estimate vascular stained regions may yield an image based metric linked to disease survival and outcome. Automated segmentation of the vascular stained regions on the TMAs, however, is hindered by the presence of spuriously stained false positive regions. In this paper, we present a general, robust and efficient unsupervised segmentation algorithm, termed Hierarchical Normalized Cuts (HNCut), and show its application in precisely quantifying the presence and extent of a TVM on OCa TMAs. The strength of HNCut is in the use of a hierarchically represented data structure that bridges the mean shift (MS) and the normalized cuts (NCut) algorithms. This allows HNCut to efficiently traverse a pyramid of the input image at various color resolutions, efficiently and accurately segmenting the object class of interest (in this case ESM-1 vascular stained regions) by simply annotating half a dozen pixels belonging to the target class. Quantitative and qualitative analysis of our results, using 100 pathologist annotated samples across multiple studies, demonstrates the superiority of our method (sensitivity 81%, positive predictive value (PPV) 80%) versus a popular supervised learning technique, Probabilistic Boosting Trees (sensitivity and PPV of 76% and 66%, respectively).
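The "half a dozen annotated pixels" interaction reduces, in essence, to selecting the quantized color clusters that match the swatch. A minimal sketch (scalar colors and a fixed tolerance are simplifications; HNCut works on full color vectors across pyramid levels):

```python
def swatch_select(pixel_values, swatch, tol):
    """Flag pixels whose (quantized) color lies within tol of any swatch color.

    pixel_values: per-pixel color values; swatch: the handful of user-annotated
    target colors; tol: matching tolerance. Returns a boolean mask as a list.
    """
    return [any(abs(v - s) <= tol for s in swatch) for v in pixel_values]
```

Because selection is by color proximity rather than by a trained model, no labeled training set is needed, which is what makes the scheme minimally supervised.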
Cytometry Part A | 2017
David Romo-Bucheli; Andrew Janowczyk; Hannah Gilmore; Eduardo Romero; Anant Madabhushi
The treatment and management of early stage estrogen receptor positive (ER+) breast cancer is hindered by the difficulty in identifying patients who require adjuvant chemotherapy in contrast to those that will respond to hormonal therapy. To distinguish between the more and less aggressive breast tumors, which is a fundamental criterion for the selection of an appropriate treatment plan, Oncotype DX (ODX) and other gene expression tests are typically employed. While informative, these gene expression tests are expensive, tissue destructive, and require specialized facilities. Bloom‐Richardson (BR) grade, the common scheme employed in breast cancer grading, has been shown to be correlated with the Oncotype DX risk score. Unfortunately, studies have also shown that the BR grade thus determined suffers from notable inter‐observer variability. One of the constituent categories in BR grading is the mitotic index. The goal of this study was to develop a deep learning (DL) classifier to identify mitotic figures from whole slide images of ER+ breast cancer, the hypothesis being that the number of mitoses identified by the DL classifier would correlate with the corresponding Oncotype DX risk categories. The mitosis detector yielded an average F‐score of 0.556 in the AMIDA mitosis dataset using a 6‐fold validation setup. For a cohort of 174 whole slide images with early stage ER+ breast cancer for which the corresponding Oncotype DX score was available, the distributions of the number of mitoses identified by the DL classifier were found to be significantly different between the high vs low Oncotype DX risk groups (P < 0.01). Comparisons of other risk groups, using both ODX score and histological grade, were also found to present significantly different automated mitoses distributions.
Additionally, a support vector machine classifier trained to separate low/high Oncotype DX risk categories using the mitotic count determined by the DL classifier yielded an 83.19% classification accuracy.
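With a single feature (the mitotic count), a linear classifier of the kind described above effectively reduces to a learned cutoff. The paper uses an SVM; the dependency-free stand-in below simply searches for the cutoff that maximizes training accuracy, and its function name and interface are illustrative only.

```python
def best_threshold(counts, labels):
    """Find the mitotic-count cutoff best separating low (0) from high (1) risk.

    counts: per-slide mitotic counts; labels: 0 = low risk, 1 = high risk.
    Exhaustively tries each observed count as the decision boundary and
    returns (cutoff, training accuracy).
    """
    best_t, best_acc = None, -1.0
    for t in sorted(set(counts)):
        # Predict high risk whenever the count reaches the candidate cutoff
        acc = sum((c >= t) == bool(y) for c, y in zip(counts, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```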
Computerized Medical Imaging and Graphics | 2017
Andrew Janowczyk; Ajay Basavanhally; Anant Madabhushi
Digital histopathology slides have many sources of variance, and while pathologists typically do not struggle with them, computer aided diagnostic algorithms can perform erratically. This manuscript presents Stain Normalization using Sparse AutoEncoders (StaNoSA) for use in standardizing the color distributions of a test image to that of a single template image. We show how sparse autoencoders can be leveraged to partition images into tissue sub-types, so that color standardization for each can be performed independently. StaNoSA was validated on three experiments and compared against five other color standardization approaches and shown to have either comparable or superior results.
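Once the sparse autoencoder has partitioned an image into tissue sub-types, the per-partition color standardization can be sketched as matching each partition's channel statistics to the template. This mean/variance matching is a simplified stand-in for the standardization StaNoSA performs; the autoencoder itself is not reproduced here, and the function name is illustrative.

```python
def standardize_partition(values, template_values):
    """Map one tissue partition's channel values onto the template's distribution.

    values: channel intensities of one partition in the test image;
    template_values: intensities of the matching partition in the template.
    Rescales so the output has the template partition's mean and std.
    """
    m = sum(values) / len(values)
    s = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    tm = sum(template_values) / len(template_values)
    ts = (sum((v - tm) ** 2 for v in template_values) / len(template_values)) ** 0.5
    return [(v - m) / s * ts + tm for v in values]
```

Standardizing each tissue sub-type independently is the key design point: a global color transform would drag nuclei and stroma toward a single compromise distribution, while per-partition matching preserves their distinct appearances.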
Computer methods in biomechanics and biomedical engineering. Imaging & visualization | 2018
Andrew Janowczyk; Scott Doyle; Hannah Gilmore; Anant Madabhushi
Deep learning (DL) has recently been successfully applied to a number of image analysis problems. However, DL approaches tend to be inefficient for segmentation on large image data, such as high-resolution digital pathology slide images. For example, typical breast biopsy images scanned at 40× magnification contain billions of pixels, of which usually only a small percentage belong to the class of interest. For a typical naïve deep learning scheme, parsing through and interrogating all the image pixels would represent hundreds if not thousands of hours of compute time using high performance computing environments. In this paper, we present a resolution adaptive deep hierarchical (RADHicaL) learning scheme wherein DL networks at lower resolutions are leveraged to determine if higher levels of magnification, and thus computation, are necessary to provide precise results. We evaluate our approach on a nuclear segmentation task with a cohort of 141 ER+ breast cancer images and show we can reduce computation time on average by about 85%. Expert annotations of 12,000 nuclei across these 141 images were employed for quantitative evaluation of RADHicaL. A head-to-head comparison with a naïve DL approach, operating solely at the highest magnification, yielded the following performance metrics: 0.9407 vs 0.9854 detection rate, 0.8218 vs 0.8489 F-score, 0.8061 vs 0.8364 true positive rate, and 0.8822 vs 0.8932 positive predictive value. Our performance indices compare favourably with state of the art nuclear segmentation approaches for digital pathology images.
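The escalation logic of such a resolution-adaptive cascade can be sketched as follows. This captures the spirit of RADHicaL rather than its implementation: here patches at different magnifications share indices for simplicity, whereas the real method maps spatial regions across magnifications, and the 0.5 cutoff is an assumed confidence threshold.

```python
def hierarchical_segment(classify, patch_pyramid):
    """Run a cheap low-resolution classifier first; escalate only uncertain or
    positive patches to the next, more expensive magnification.

    classify(level, patch) -> probability of the class of interest.
    patch_pyramid: list of magnification levels, each a list of patches
    (index-aligned across levels for this sketch). Returns {index: score}.
    """
    active = list(range(len(patch_pyramid[0])))
    results = {}
    for level, patches in enumerate(patch_pyramid):
        last = level == len(patch_pyramid) - 1
        still_active = []
        for i in active:
            p = classify(level, patches[i])
            if p < 0.5:
                results[i] = 0.0        # confidently negative: stop early
            elif last:
                results[i] = p          # final call at highest magnification
            else:
                still_active.append(i)  # escalate to the next magnification
        active = still_active
    return results
```

Because the vast majority of a slide is background, most patches are rejected at the cheapest level, which is where the reported ~85% compute saving comes from.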
Journal of Pathology Informatics | 2013
Andrew Janowczyk; Sharat Chandran; Anant Madabhushi
Introduction: The notion of local scale was introduced to characterize varying levels of image detail so that localized image processing tasks could be performed while simultaneously yielding a globally optimal result. In this paper, we have presented the methodological framework for a novel locally adaptive scale definition, morphologic scale (MS), which is different from extant local scale definitions in that it attempts to characterize local heterogeneity as opposed to local homogeneity. Methods: At every point of interest, the MS is determined as a series of radial paths extending outward in the direction of least resistance, navigating around obstructions. Each pixel can then be directly compared to other points of interest via a rotationally invariant quantitative feature descriptor, determined by the application of Fourier descriptors to the collection of these paths. Results: Our goal is to distinguish tumor and stromal tissue classes in the context of four different digitized pathology datasets: prostate tissue microarrays (TMAs) stained with hematoxylin and eosin (HE) (44 images) and TMAs stained with only hematoxylin (H) (44 images), slide mounts of ovarian H (60 images), and HE breast cancer (51 images) histology images. Classification performance over 50 cross-validation runs using a Bayesian classifier produced mean areas under the curve of 0.88 ± 0.01 (prostate HE), 0.87 ± 0.02 (prostate H), 0.88 ± 0.01 (ovarian H), and 0.80 ± 0.01 (breast HE). Conclusion: For each dataset listed in [Table 3], we randomly selected 100 points per image, and using the procedure described in Experiment 1, we attempted to separate them as belonging to stroma or epithelium.
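The rotational invariance of the descriptor comes from a standard property of Fourier descriptors: rotating the neighborhood circularly shifts the radial-path signature, which changes only the phases of its Fourier coefficients, not their magnitudes. A minimal sketch of that final step (the radial path-tracing itself is not reproduced; the function name is illustrative):

```python
import cmath

def fd_magnitudes(radial_lengths):
    """Rotation-invariant feature from a morphologic-scale signature.

    radial_lengths: path length in each angular direction around a pixel.
    Returns the magnitudes of the discrete Fourier coefficients; a circular
    shift of the input (i.e., a rotation) leaves these magnitudes unchanged.
    """
    n = len(radial_lengths)
    return [abs(sum(radial_lengths[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]
```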
Scientific Reports | 2016
Gregory Penzias; Andrew Janowczyk; Asha Singanamalli; Mirabela Rusu; Natalie Shih; Michael Feldman; Warick Delprado; Sarita Tiwari; Maret Böhm; Anne Maree Haynes; Lee E. Ponsky; Satish Viswanath; Anant Madabhushi
In applications involving large tissue specimens that have been sectioned into smaller tissue fragments, manual reconstruction of a “pseudo whole-mount” histological section (PWMHS) can facilitate (a) pathological disease annotation, and (b) image registration and correlation with radiological images. We have previously presented a program called HistoStitcher, which allows for more efficient manual reconstruction than general purpose image editing tools (such as Photoshop). However, HistoStitcher is still manual and hence can be laborious and subjective, especially in large cohort studies. In this work we present AutoStitcher, a novel automated algorithm for reconstructing PWMHSs from digitized tissue fragments. AutoStitcher reconstructs (“stitches”) a PWMHS from a set of 4 fragments by optimizing a novel cost function that is domain-inspired to ensure (i) alignment of similar tissue regions, and (ii) contiguity of the prostate boundary. The algorithm achieves computational efficiency by performing the reconstruction in a multi-resolution hierarchy. Automated PWMHS reconstruction results (via AutoStitcher) were quantitatively and qualitatively compared to manual reconstructions obtained via HistoStitcher for 113 prostate pathology sections. Distances between corresponding fiducials placed on the automated and manual reconstruction results were between 2.7% and 3.2%, reflecting their excellent visual similarity.
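The multi-resolution hierarchy can be illustrated with a one-dimensional coarse-to-fine search over an alignment parameter. This is a toy reduction of AutoStitcher's strategy, not its actual optimizer (which aligns fragments over several transform parameters at once); the cost function is supplied by the caller and the step schedule is an assumption.

```python
def coarse_to_fine_offset(cost, lo, hi, levels=4):
    """Coarse-to-fine search for the integer offset minimizing a cost function.

    Evaluates the cost on a coarse grid over [lo, hi], then repeatedly halves
    the step and re-searches a narrow window around the current best candidate,
    mirroring how a multi-resolution hierarchy avoids exhaustive search.
    """
    step = max((hi - lo) // 8, 1)
    best = min(range(lo, hi + 1, step), key=cost)  # coarse pass
    for _ in range(levels):
        if step == 1:
            break
        step = max(step // 2, 1)
        # Refine within two steps of the current best candidate
        best = min(range(best - 2 * step, best + 2 * step + 1, step), key=cost)
    return best
```

The coarse pass only needs the low-resolution images, so the expensive full-resolution cost evaluations are confined to a small neighborhood of the final answer.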