Publications


Featured research published by Chukka Srinivas.


Proceedings of SPIE | 2015

Automatic glandular and tubule region segmentation in histological grading of breast cancer

Kien Nguyen; Michael Barnes; Chukka Srinivas; Christophe Chefd'hotel

In the popular Nottingham histologic score system for breast cancer grading, the pathologist analyzes the H&E tissue slides and assigns a score, in the range of 1-3, for tubule formation, nuclear pleomorphism and mitotic activity in the tumor regions. The scores from these three factors are added to give a final score, ranging from 3 to 9, to grade the cancer. The tubule score (TS), which reflects tubule formation, is a value from 1 to 3 given by manually estimating the percentage of glandular regions in the tumor that form tubules. In this paper, given an H&E tissue image representing a tumor region, we propose an automated algorithm to detect glandular regions and detect the presence of tubules in these regions. The algorithm first detects all nuclei and lumen candidates in the input image, then identifies tumor nuclei among the detected nuclei and true lumina among the lumen candidates using a random forest classifier. Finally, it forms the glandular regions by grouping closely located tumor nuclei and lumina using a graph-cut-based method. Glandular regions containing true lumina are considered to be the ones that form tubules (tubule regions). To evaluate the proposed method, we calculate the tubule percentage (TP), i.e., the ratio of the tubule area to the total glandular area, for 353 H&E images of the three TSs and plot the distribution of these TP values. The plot shows clear separation among the three scores, suggesting that the proposed algorithm is useful in distinguishing images of these TSs.
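
The tubule percentage (TP) used for evaluation is a simple area ratio. Below is a minimal sketch of that computation, assuming the glandular and tubule masks have already been produced by the segmentation pipeline; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def tubule_percentage(glandular_mask, tubule_mask):
    """Ratio of tubule area to total glandular area.

    glandular_mask: boolean mask of all detected glandular regions.
    tubule_mask:    boolean mask of glandular regions containing true lumina.
    """
    glandular_area = np.count_nonzero(glandular_mask)
    if glandular_area == 0:
        return 0.0
    tubule_area = np.count_nonzero(tubule_mask & glandular_mask)
    return tubule_area / glandular_area

# Synthetic example: 40% of the glandular area forms tubules.
gland = np.zeros((100, 100), dtype=bool)
gland[20:70, 20:70] = True               # 2500 glandular pixels
tubule = np.zeros_like(gland)
tubule[20:40, 20:70] = True              # 1000 of them belong to tubule regions
print(tubule_percentage(gland, tubule))  # -> 0.4
```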


Computerized Medical Imaging and Graphics | 2015

Group sparsity model for stain unmixing in brightfield multiplex immunohistochemistry images

Ting Chen; Chukka Srinivas

Multiplex immunohistochemistry (IHC) staining is an emerging technique for the detection of multiple biomarkers within a single tissue section, and it has become popular due to its efficiency and the rich diagnostic information it provides. Stain unmixing, the initial key step in multiplex IHC image analysis in digital pathology, is of tremendous clinical importance because it separates the IHC image into the contributions of the individual stains. Unmixing an RGB image acquired with a three-channel CCD color camera into more than three colors is very challenging and, to the best of our knowledge, hardly studied in the academic literature. This paper presents a novel stain unmixing algorithm for brightfield multiplex IHC images based on a group sparsity model. The proposed framework achieves robust unmixing for more than three chromogenic dyes while preserving the biological constraints of the biomarkers. Typically, several biomarkers co-localize in the same cellular compartments, and these co-localization patterns are known a priori. With this biological information, the number of stains at one pixel has a fixed upper bound equal to the number of co-localized biomarkers. By leveraging the group sparsity model, the fractions of stain contributions from the co-localized biomarkers are explicitly modeled into one group to yield the least-squares solution within the group. A sparse solution is obtained among the groups, since ideally only one group of biomarkers is present at each pixel. The algorithm is evaluated on both synthetic and clinical data sets and demonstrates better unmixing results than existing strategies.
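
As an illustration of the group-sparsity idea (one active group of co-localized stains per pixel), the sketch below greedily solves a non-negative least-squares problem within each candidate group and keeps the group with the smallest residual. It is a simplified stand-in for the published optimization; the stain matrix and group definitions are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(od_pixel, stain_matrix, groups):
    """od_pixel: (3,) optical-density vector of one RGB pixel.
    stain_matrix: (3, K) reference OD vectors, one column per stain.
    groups: list of index lists, each a set of co-localized stains.
    Returns a (K,) vector of stain contributions with one active group."""
    best_coeffs, best_residual = None, np.inf
    for group in groups:
        sub = stain_matrix[:, group]            # columns of this group only
        coeffs, residual = nnls(sub, od_pixel)  # least squares within the group
        if residual < best_residual:
            best_residual = residual
            best_coeffs = np.zeros(stain_matrix.shape[1])
            best_coeffs[group] = coeffs
    return best_coeffs

# Toy example: 4 stains, two groups of co-localized biomarkers.
S = np.array([[0.65, 0.27, 0.10, 0.50],
              [0.70, 0.57, 0.80, 0.55],
              [0.29, 0.78, 0.59, 0.67]])
groups = [[0, 1], [2, 3]]
y = 0.6 * S[:, 0] + 0.3 * S[:, 1]               # a pixel mixing only group 0
print(unmix_pixel(y, S, groups).round(2))       # ~[0.6, 0.3, 0.0, 0.0]
```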


Proceedings of SPIE | 2015

Structure preserving color deconvolution for immunohistochemistry images

Ting Chen; Chukka Srinivas

Immunohistochemistry (IHC) staining is an important technique for the detection of one or more biomarkers within a single tissue section. In digital pathology applications, correctly unmixing the tissue image into the individual constituent dye for each biomarker is a prerequisite for accurate detection and identification of the underlying cellular structures. A popular technique thus far is the color deconvolution method proposed by Ruifrok et al. However, Ruifrok's method estimates the individual dye contributions at each pixel independently, which can lead to “holes and cracks” in the cells in the unmixed images. This is clearly inadequate, since strong spatial dependencies exist in tissue images, which contain rich cellular structures. In this paper, we formulate the unmixing algorithm as a least-squares problem over image patches and propose a novel color deconvolution method that explicitly incorporates spatial smoothness and structure continuity constraints through a neighborhood-graph regularizer. An analytical closed-form solution to the cost function is derived for fast implementation. The algorithm is evaluated on a clinical data set containing a number of 3,3'-diaminobenzidine (DAB) and hematoxylin (HTX) stained IHC slides and demonstrates better unmixing results than the existing strategy.
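
For intuition, the sketch below shows one way a spatial smoothness term can be folded into patch-wise least-squares deconvolution under the Beer-Lambert model, using a graph-Laplacian penalty whose minimizer solves a Sylvester equation. This is an assumed, simplified formulation for illustration, not the paper's exact patch and regularizer construction; the stain matrix values are hypothetical.

```python
# Minimize ||Y - S C||_F^2 + lam * tr(C L C^T) over concentrations C, which has
# the closed-form Sylvester solution (S^T S) C + lam * C L = S^T Y.
import numpy as np
from scipy.linalg import solve_sylvester

def regularized_deconvolution(od_patch, stain_matrix, laplacian, lam=0.1):
    """od_patch: (3, N) optical densities of N patch pixels.
    stain_matrix: (3, K) reference OD vectors (e.g. HTX and DAB columns).
    laplacian: (N, N) graph Laplacian of the pixel neighborhood graph.
    Returns (K, N) stain concentrations."""
    A = stain_matrix.T @ stain_matrix   # (K, K)
    B = lam * laplacian                 # (N, N)
    Q = stain_matrix.T @ od_patch       # (K, N)
    return solve_sylvester(A, B, Q)

# Toy example: 2 stains (HTX, DAB) over a chain of 5 pixels.
S = np.array([[0.65, 0.27],
              [0.70, 0.57],
              [0.29, 0.78]])
N = 5
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # chain-graph Laplacian
L[0, 0] = L[-1, -1] = 1
true_C = np.vstack([np.linspace(0.2, 1.0, N), np.linspace(1.0, 0.2, N)])
Y = S @ true_C
print(regularized_deconvolution(Y, S, L, lam=0.05).round(2))
```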


international symposium on biomedical imaging | 2014

A robust method for inter-marker whole slide registration of digital pathology images using lines based features

Anindya Sarkar; Quan Yuan; Chukka Srinivas

An automated image registration method is proposed for registering digital whole-slide scans of adjacent tissue sections. In an example workflow, a pathologist annotates a tumor region in a primary (H&E) stained image, and the corresponding regions in nearby sections stained with other biomarkers are identified and analyzed to aid clinical diagnosis. The different markers convey different clinical information, leading to a more comprehensive analysis. This registration process, if done manually, is tedious and time-consuming. We propose a two-pass algorithm. A lines-based approach, computed from tissue boundary regions, returns the global transformation. Then normalized correlation, based on gradient-magnitude images, is used for more precise local matching in a multi-resolution approach. The algorithm is robust to flips, large rotation angles, tissue wear-and-tear, and area-of-interest (AOI) mismatch, and outperforms state-of-the-art registration methods for digital pathology images.
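
As a rough illustration of the second pass only, the sketch below performs normalized-correlation matching of gradient-magnitude patches over a small search window. It assumes the global line-based transform has already been applied and omits the multi-resolution scheme; all names are illustrative.

```python
import numpy as np

def gradient_magnitude(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def refine_offset(fixed_grad, template, center, search=10):
    """Find the offset around `center` in `fixed_grad` (a gradient-magnitude
    image) that maximizes normalized correlation with `template`."""
    h, w = template.shape
    cy, cx = center
    best_score, best_offset = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            window = fixed_grad[cy + dy:cy + dy + h, cx + dx:cx + dx + w]
            if window.shape != template.shape:
                continue
            score = ncc(window, template)
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset, best_score

# Toy check: a patch cut from the image itself matches best at offset (0, 0).
rng = np.random.default_rng(0)
fixed = gradient_magnitude(rng.random((120, 120)))
template = fixed[50:70, 50:70].copy()
print(refine_offset(fixed, template, (50, 50)))   # ~((0, 0), 1.0)
```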


Laboratory Investigation | 2017

Whole tumor section quantitative image analysis maximizes between-pathologists’ reproducibility for clinical immunohistochemistry-based biomarkers

Michael Barnes; Chukka Srinivas; Isaac Bai; Judith Frederick; Wendy Liu; Anindya Sarkar; Xiuzhong Wang; Yao Nie; Bryce Portier; Monesh Kapadia; Olcay Sertel; Elizabeth Little; Bikash Sabata; Jim Ranger-Moore

Pathologists have had increasing responsibility for quantitating immunohistochemistry (IHC) biomarkers, with an expectation of high between-reader reproducibility because the results inform clinical decision-making, especially for patient therapy. Digital imaging-based quantitation of IHC clinical slides offers a potential aid for improvement; however, its clinical adoption is limited, potentially due to the conventional field-of-view annotation approach. In this study, we implemented a novel, solely morphology-based whole tumor section annotation strategy to maximize the reproducibility of image analysis quantitation results between readers. We first compare the field-of-view image analysis annotation approach to digital and manual modalities across multiple clinical studies (~120 cases per study) and biomarkers (ER, PR, HER2, Ki-67, and p53 IHC), and then compare a subset of the same cases (~40 cases each from the ER, PR, HER2, and Ki-67 studies) using the whole tumor section annotation approach to understand the incremental value of all modalities. Between-reader results for each biomarker in relation to conventional scoring modalities showed concordance similar to manual read: ER field-of-view image analysis: 95.3% (95% CI 92.0-98.2%) vs digital read: 92.0% (87.8-95.8%) vs manual read: 94.9% (91.4-97.8%); PR field-of-view image analysis: 94.1% (90.3-97.2%) vs digital read: 94.0% (90.2-97.1%) vs manual read: 94.4% (90.9-97.2%); Ki-67 field-of-view image analysis: 86.8% (82.1-91.4%) vs digital read: 76.6% (70.9-82.2%) vs manual read: 85.6% (80.4-90.4%); p53 field-of-view image analysis: 81.7% (76.4-86.8%) vs digital read: 80.6% (75.0-86.0%) vs manual read: 78.8% (72.2-83.3%); and HER2 field-of-view image analysis: 93.8% (90.0-97.2%) vs digital read: 91.0% (86.6-94.9%) vs manual read: 87.2% (82.1-91.9%). Subset implementation and analysis on the same cases using the whole tumor section image analysis approach showed significant improvement between pathologists over field-of-view image analysis and manual read (HER2 100% (97-100%), P=0.013 vs field-of-view image analysis and P=0.013 vs manual read; Ki-67 100% (96.9-100%), P=0.040 and P=0.012; ER 98.3% (94.1-99.5%), P=0.232 and P=0.181; and PR 96.6% (91.5-98.7%), P=0.012 and P=0.257). Overall, whole tumor section image analysis significantly improves between-pathologist reproducibility and is the optimal approach for clinical image analysis algorithms.
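
For readers unfamiliar with the concordance figures quoted above, the toy sketch below computes a plain between-reader percent agreement on categorical scores. It is illustrative only and does not reproduce the study's statistical methodology or confidence-interval calculation.

```python
import numpy as np

def percent_agreement(reader_a, reader_b):
    """Overall percent agreement between two readers' categorical scores."""
    a = np.asarray(reader_a)
    b = np.asarray(reader_b)
    return 100.0 * np.mean(a == b)

# Hypothetical HER2 score bins (0/1+/2+/3+) from two pathologists on 10 cases.
reader1 = [0, 1, 2, 3, 3, 2, 1, 0, 2, 3]
reader2 = [0, 1, 2, 3, 2, 2, 1, 0, 2, 3]
print(percent_agreement(reader1, reader2))   # -> 90.0
```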


Proceedings of SPIE | 2016

Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

Xiuzhong Wang; Chukka Srinivas

This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied to the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate a saliency image and, based on that, a refined edge map. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. Saliency-weighted foreground and background histograms are then created and used to calculate a threshold that minimizes the foreground and background pixel classification error. The block-wise thresholds are then interpolated to produce a threshold for each pixel. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which in turn leads to robust nucleus segmentation.
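
The block-wise threshold selection and pixel-level interpolation steps can be illustrated with the simplified sketch below: per block, choose the intensity that minimizes the total misclassification of the sampled foreground and background pixels, then interpolate the block thresholds across the image. This is an assumed simplification (it ignores the saliency weighting and assumes a bright foreground); names are illustrative.

```python
import numpy as np

def block_threshold(fg_samples, bg_samples, levels=256):
    """Threshold minimizing misclassification of the sampled pixels in a block.
    Assumes the foreground is brighter than the background; for a dark
    foreground (e.g. hematoxylin-stained nuclei), swap the two cumulative sums."""
    fg_hist, _ = np.histogram(fg_samples, bins=levels, range=(0, levels))
    bg_hist, _ = np.histogram(bg_samples, bins=levels, range=(0, levels))
    fg_below = np.cumsum(fg_hist)                  # foreground misclassified at threshold t
    bg_above = bg_hist.sum() - np.cumsum(bg_hist)  # background misclassified at threshold t
    return int(np.argmin(fg_below + bg_above))

def pixelwise_thresholds(block_thresholds, image_shape):
    """Bilinearly interpolate a coarse grid of block thresholds to pixel level."""
    by, bx = block_thresholds.shape
    ys = np.linspace(0, by - 1, image_shape[0])
    xs = np.linspace(0, bx - 1, image_shape[1])
    rows = np.array([np.interp(ys, np.arange(by), block_thresholds[:, j])
                     for j in range(bx)]).T
    return np.array([np.interp(xs, np.arange(bx), rows[i, :])
                     for i in range(image_shape[0])])

# Toy example: a block threshold from sampled intensities, then a 2x2 grid of
# block thresholds interpolated over a 4x4 image.
rng = np.random.default_rng(0)
fg = rng.normal(180, 15, 500).clip(0, 255)
bg = rng.normal(90, 20, 500).clip(0, 255)
print(block_threshold(fg, bg))                 # a value between the two intensity modes
grid = np.array([[100.0, 120.0], [140.0, 160.0]])
print(pixelwise_thresholds(grid, (4, 4)).round(1))
```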


international symposium on biomedical imaging | 2015

A method for generating context-aware features for object classification and its application to IHC stained image analysis

Yao Nie; Chukka Srinivas

Tissue object classification, such as nuclei classification, is often a prerequisite for quantitative analysis of histopathological tissue slide images. However, inter-image variations, including biological and stain variations, pose great challenges for robust classification. While biological variations have yet to be addressed explicitly, stain variation issues are mainly handled by aligning color distributions to improve global stain appearance consistency. Such methods are risky when classification must distinguish objects that differ only by subtle color differences, because image color distributions are also affected by object prevalence in a given image. In this paper, we present a simple yet effective method that incorporates object and context features to implicitly compensate for inter-image variations. The basic idea is to first identify object features that characterize the objects well within an image. Then, for each object feature, a set of associated context features is identified that characterizes the feature's variation between images. Finally, each object feature and its associated context features are used to train a base classifier, which generates a numeric value representing the degree to which an object belongs to a class. This value is called the “context-aware” feature and serves as the input to the end classifier. The proposed method is applied to the classification of lymphocytes and negatively expressed tumor cells in immunohistochemical (IHC) stained breast cancer images. Experiments show a significant gain in the descriptive power of the features, which also boosts the end classifier's performance.
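
A conceptual sketch of the described stacking-style construction is given below: each object feature is paired with its associated context features, a base classifier is trained on that small group, and its out-of-fold class probability becomes one "context-aware" feature for the end classifier. The data, feature split, and classifier choices are hypothetical and only illustrate the idea.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 500
X_object = rng.normal(size=(n, 3))    # per-object features (e.g. color, size)
X_context = rng.normal(size=(n, 2))   # per-image context features
y = (X_object[:, 0] + 0.5 * X_context[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# One context-aware feature per object feature: a base classifier trained on
# the object feature plus its context features, using out-of-fold predictions
# to avoid leaking labels into the end classifier's training data.
context_aware = []
for j in range(X_object.shape[1]):
    Xj = np.column_stack([X_object[:, j], X_context])
    proba = cross_val_predict(LogisticRegression(), Xj, y, cv=5,
                              method="predict_proba")[:, 1]
    context_aware.append(proba)
X_ca = np.column_stack(context_aware)

end_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_ca, y)
print("training accuracy:", end_clf.score(X_ca, y))
```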


Proceedings of SPIE | 2015

Adaptive whole slide tissue segmentation to handle inter-slide tissue variability

Kien Nguyen; Ting Chen; Joerg Bredno; Chukka Srinivas; Christophe Chefd'hotel; Solange Romagnoli; Astrid Heller; Oliver Grimm; Fabien Gaire

Automatic whole slide (WS) tissue image segmentation is an important problem in digital pathology. A conventional classification-based method (referred to as the CCb method) to tackle this problem is to train a classifier on a pre-built training database (pre-built DB) obtained from a set of training WS images and use it to classify all image pixels or image patches (test samples) in a test WS image into different tissue types. This method suffers from a major challenge in WS image analysis: strong inter-slide tissue variability (ISTV), i.e., the variability of tissue appearance from slide to slide. Due to this ISTV, the test samples are often very different from the training data, which is a source of misclassification. To address the ISTV, we propose a novel method, called slide-adapted classification (SAC), that extends the CCb method. We assume that the test WS image contains, besides regions with high variation from the pre-built DB, regions with lower variation from this DB. The SAC method therefore performs a two-stage classification: it first classifies all test samples in the WS image (as in the CCb method) and computes their classification confidence scores. Next, the samples classified with high confidence scores (samples reliably classified due to their low variation from the pre-built DB) are combined with the pre-built DB to generate an adaptive training DB, which is used to reclassify the low-confidence samples. The method is motivated by the large size of the test WS image (a large number of high-confidence samples is obtained) and by the lower variability between the low- and high-confidence samples (both belonging to the same WS image) compared to the ISTV. Using the proposed SAC method to segment a large dataset of 24 WS images, we improve the accuracy over the CCb method.
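
A minimal sketch of the two-stage SAC idea, under assumptions (a generic random forest classifier, a fixed confidence cutoff, and synthetic data), is shown below; it is not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def slide_adapted_classify(X_train, y_train, X_test, confidence_threshold=0.9):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    proba = clf.predict_proba(X_test)
    labels = clf.classes_[np.argmax(proba, axis=1)]
    confidence = proba.max(axis=1)

    high = confidence >= confidence_threshold
    low = ~high
    if high.any() and low.any():
        # adaptive training DB: pre-built DB plus high-confidence test samples
        X_adapt = np.vstack([X_train, X_test[high]])
        y_adapt = np.concatenate([y_train, labels[high]])
        adapted = RandomForestClassifier(n_estimators=100, random_state=0)
        adapted.fit(X_adapt, y_adapt)
        labels[low] = adapted.predict(X_test[low])   # reclassify low-confidence samples
    return labels

# Toy usage with synthetic data and a simulated inter-slide shift.
rng = np.random.default_rng(0)
X_tr = rng.normal(size=(200, 5)); y_tr = (X_tr[:, 0] > 0).astype(int)
X_te = rng.normal(loc=0.3, size=(100, 5))
print(slide_adapted_classify(X_tr, y_tr, X_te)[:10])
```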


international symposium on biomedical imaging | 2014

Adaptive spectral unmixing for histopathology fluorescent images

Ting Chen; Anindya Sarkar; Chukka Srinivas

Accurate spectral unmixing of fluorescent images is clinically important because it is one of the key steps in multiplex histopathology image analysis. The narrow-band reference spectra for quantum dot biomarkers are often precisely known a priori, while the broad-band DAPI (nuclear biomarker) and tissue autofluorescence reference spectra are tissue dependent and vary from image to image. This paper presents a novel spectral unmixing algorithm based on data-adaptive refinement of the broad-band reference spectra, yielding accurate reference spectra estimates for each image. The algorithm detects nuclear and tissue regions from the DAPI channel of the unmixed images and estimates new reference spectra for the biomarkers. A nuclear ranking algorithm is proposed for nuclear region segmentation to achieve more robust and accurate reference spectra estimates for the given image. The proposed framework iteratively updates the broad-band reference spectra and unmixes the fluorescent image until convergence. The algorithm was tested on a clinical data set containing a large number of multiplex fluorescent slides and demonstrates better unmixing results than existing spectral unmixing strategies.
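
The alternating structure of the method (unmix, re-estimate the broad-band spectra, repeat until convergence) can be sketched as below. This is a heavily simplified, hypothetical version: the broad-band spectra are re-estimated from pixels where their unmixed contribution dominates, rather than from the paper's DAPI-channel nuclear and tissue segmentation, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixels, spectra):
    """pixels: (N, B) measured spectra; spectra: (B, K) reference spectra.
    Returns (N, K) non-negative fluorophore abundances."""
    return np.array([nnls(spectra, p)[0] for p in pixels])

def refine_broadband(pixels, spectra, broadband_idx, n_iter=10, tol=1e-4):
    """Alternate NNLS unmixing with re-estimation of the broad-band columns."""
    spectra = spectra.copy()
    for _ in range(n_iter):
        abundances = unmix(pixels, spectra)
        updated = spectra.copy()
        for k in broadband_idx:
            # pixels where this broad-band component dominates the mixture
            dominant = abundances[:, k] > 0.8 * abundances.sum(axis=1)
            if dominant.any():
                s = pixels[dominant].mean(axis=0)
                updated[:, k] = s / np.linalg.norm(s)
        converged = np.linalg.norm(updated - spectra) < tol
        spectra = updated
        if converged:
            break
    return spectra, unmix(pixels, spectra)

# Synthetic 8-band example: a fixed narrow-band spectrum plus a broad-band
# spectrum whose initial guess is deliberately shifted by one band.
B = 8
narrow = np.eye(B)[3]
broad = np.exp(-0.5 * ((np.arange(B) - 4.0) / 2.0) ** 2)
broad /= np.linalg.norm(broad)
mix = np.random.default_rng(0).uniform(0, 1, (300, 2)) @ np.column_stack([narrow, broad]).T
S_init = np.column_stack([narrow, np.roll(broad, 1)])
S_est, _ = refine_broadband(mix, S_init, broadband_idx=[1])
print(np.abs(S_est[:, 1] - broad).max())   # compare refined vs. true broad-band spectrum
```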


Archive | 2017

Methods and systems for assessing risk of breast cancer recurrence

Michael Barnes; Chukka Srinivas; David Knowles
