Zhaoxuan Ma
Cedars-Sinai Medical Center
Publication
Featured research published by Zhaoxuan Ma.
Computerized Medical Imaging and Graphics | 2015
Arkadiusz Gertych; Nathan Ing; Zhaoxuan Ma; Thomas J. Fuchs; Sadri Salman; Sambit K. Mohanty; Sanica Bhele; Adriana Velásquez-Vacca; Mahul B. Amin; Beatrice Knudsen
Computerized evaluation of histological preparations of prostate tissues involves identification of tissue components such as stroma (ST), benign/normal epithelium (BN) and prostate cancer (PCa). Image classification approaches have been developed to identify and classify glandular regions in digital images of prostate tissues; however, their success has been limited by difficulties in cellular segmentation and tissue heterogeneity. We hypothesized that utilizing image pixels to generate intensity histograms of hematoxylin (H) and eosin (E) stains deconvoluted from H&E images numerically captures the architectural difference between glands and stroma. In addition, we postulated that joint histograms of local binary patterns and local variance (LBPxVAR) can be used as sensitive textural features to differentiate benign/normal tissue from cancer. Here we utilized a machine learning approach comprising a support vector machine (SVM) followed by a random forest (RF) classifier to digitally stratify prostate tissue into ST, BN and PCa areas. Two pathologists manually annotated 210 images of low- and high-grade tumors from slides that were selected from 20 radical prostatectomies and digitized at high resolution. The 210 images were split into training (n=19) and test (n=191) sets. Local intensity histograms of H and E were used to train an SVM classifier to separate ST from epithelium (BN+PCa). The performance of the SVM prediction was evaluated by measuring the accuracy of delineating epithelial areas. The Jaccard (J=59.5 ± 14.6) and Rand (Ri=62.0 ± 7.5) indices indicated significantly better prediction compared to a reference method (Chen et al., Clinical Proteomics 2013, 10:18), based on averaged values from the test set.
To distinguish BN from PCa we trained an RF classifier with LBPxVAR and local intensity histograms, obtaining separate performance values for BN and PCa in the test set: J_BN=35.2 ± 24.9, O_BN=49.6 ± 32, J_PCa=49.5 ± 18.5, O_PCa=72.7 ± 14.8 and Ri=60.6 ± 7.6. Our pixel-based classification does not rely on the detection of lumens, which is error-prone and limited in high-grade cancers, and it has the potential to aid clinical studies in which the quantification of tumor content is necessary to prognosticate the course of the disease. The image data set with ground truth annotation is available for public use to stimulate further research in this area.
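The first stage of the pipeline (stain deconvolution of an H&E tile into hematoxylin and eosin channels, intensity histograms as features, SVM classification) can be sketched as follows. The paper's own implementation is not reproduced here; the stain vectors follow the commonly cited Ruifrok & Johnston values, and the tile size, histogram range, and training data are illustrative stand-ins.

```python
import numpy as np
from sklearn.svm import SVC

# Ruifrok & Johnston optical-density stain vectors for H&E (illustrative).
STAINS = np.array([
    [0.65, 0.70, 0.29],   # hematoxylin
    [0.07, 0.99, 0.11],   # eosin
    [0.27, 0.57, 0.78],   # residual channel
])
STAINS /= np.linalg.norm(STAINS, axis=1, keepdims=True)
DECONV = np.linalg.inv(STAINS)

def stain_histograms(rgb_tile, bins=16):
    """Deconvolve an RGB H&E tile into per-pixel stain amounts and return
    the concatenated intensity histograms of the H and E channels."""
    od = -np.log10(np.clip(rgb_tile, 1e-6, 1.0))   # optical density
    hed = od.reshape(-1, 3) @ DECONV               # per-pixel H, E, residual
    h_hist, _ = np.histogram(hed[:, 0], bins=bins, range=(-1, 2), density=True)
    e_hist, _ = np.histogram(hed[:, 1], bins=bins, range=(-1, 2), density=True)
    return np.concatenate([h_hist, e_hist])

# Hypothetical labels: 0 = stroma (ST), 1 = epithelium (BN + PCa).
rng = np.random.default_rng(0)
tiles = rng.random((40, 32, 32, 3))                # stand-in RGB tiles
labels = rng.integers(0, 2, size=40)
X = np.stack([stain_histograms(t) for t in tiles])
svm = SVC(kernel="rbf").fit(X, labels)
```

In the published pipeline the SVM output would then feed the RF stage, with LBPxVAR texture histograms appended to the feature vector.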
Computers in Biology and Medicine | 2016
Arkadiusz Gertych; Zhaoxuan Ma; Jian Tajbakhsh; Adriana Velásquez-Vacca; Beatrice Knudsen
High-resolution three-dimensional (3-D) microscopy combined with multiplexing of fluorescent labels allows high-content analysis of large numbers of cell nuclei. The full automation of 3-D screening platforms necessitates image processing algorithms that can accurately and robustly delineate nuclei in images with little to no human intervention. Imaging-based high-content screening was originally developed as a powerful tool for drug discovery. However, cell confluency, complexity of nuclear staining and poor contrast between nuclei and background result in slow and unreliable 3-D image processing, and therefore negatively affect the performance of drug response studies. Here, we propose a new method, 3D-RSD, to delineate nuclei by means of 3-D radial symmetries and test it on high-resolution image data of human cancer cells treated by drugs. Nuclei detection performance was evaluated against manually generated ground truth of 2351 nuclei (27 confocal stacks). When compared to three other nuclei segmentation methods, 3D-RSD achieved a higher true positive rate of 83.3% and an F-score of 0.895±0.045 (p-value=0.047). Altogether, 3D-RSD is a method with very good overall segmentation performance. Furthermore, the implementation of radial symmetries offers good processing speed and makes 3D-RSD less sensitive to staining patterns. In particular, the 3D-RSD method performs well in cell lines, which are often used in imaging-based HCS platforms and are afflicted by nuclear crowding and overlaps that hinder feature extraction.
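3D-RSD itself is not reproduced here; as a generic stand-in for blob-style nuclei detection in a confocal stack, the sketch below uses a scale-normalized 3-D Laplacian of Gaussian followed by local-maximum selection. All parameters (sigma, threshold, window size) and the synthetic stack are illustrative.

```python
import numpy as np
from scipy import ndimage

def detect_nuclei_3d(stack, sigma=3.0, threshold=0.1):
    """Candidate nucleus centers in a 3-D stack via scale-normalized
    Laplacian-of-Gaussian responses and local-maximum selection."""
    # Bright blobs yield negative LoG responses; negate so peaks are maxima.
    response = -(sigma ** 2) * ndimage.gaussian_laplace(stack.astype(float), sigma)
    local_max = ndimage.maximum_filter(response, size=5) == response
    peaks = local_max & (response > threshold)
    return np.argwhere(peaks)          # (z, y, x) coordinates

# Synthetic stack with two Gaussian "nuclei" at (10,10,10) and (22,22,22).
zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
stack = (np.exp(-((zz-10)**2 + (yy-10)**2 + (xx-10)**2) / 18.0)
         + np.exp(-((zz-22)**2 + (yy-22)**2 + (xx-22)**2) / 18.0))
centers = detect_nuclei_3d(stack)
```

A radial-symmetry detector replaces the LoG response with gradient-based voting toward blob centers, which is what gives 3D-RSD its robustness to staining patterns.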
The Journal of Pathology: Clinical Research | 2016
Fangjin Huang; Zhaoxuan Ma; Sara Pollan; Xiaopu Yuan; Steven Swartwood; Arkadiusz Gertych; Maria Rodriguez; Jayati Mallick; Sanica Bhele; Maha Guindi; Deepti Dhall; Ann E. Walts; Shikha Bose; Mariza de Peralta Venturina; Alberto M. Marchevsky; Daniel Luthringer; Stephan M. Feller; Benjamin P. Berman; Michael R. Freeman; W. Gregory Alvord; George F. Vande Woude; Mahul B. Amin; Beatrice Knudsen
The limited clinical success of anti-HGF/MET drugs can be attributed to the lack of predictive biomarkers that adequately select patients for treatment. We demonstrate here that quantitative digital imaging of formalin-fixed, paraffin-embedded tissues stained by immunohistochemistry can be used to measure signals from weakly staining antibodies and provides new opportunities to develop assays for detection of MET receptor activity. To establish a biomarker panel of MET activation, we employed seven antibodies measuring protein expression in the HGF/MET pathway in 20 cases and up to 80 cores from 18 human cancer types. The antibodies bind to epitopes in the extracellular (EC) and intracellular (IC) domains of MET (MET4EC, SP44_METIC, D1C2_METIC), to MET-pY1234/pY1235, a marker of MET kinase activation, as well as to HGF, pSFK or pMAPK. Expression of HGF was determined in tumour cells (T_HGF) as well as in stroma surrounding cancer (St_HGF). Remarkably, MET4EC correlated more strongly with pMET (r = 0.47) than SP44_METIC (r = 0.21) or D1C2_METIC (r = 0.08) across the 18 cancer types. In addition, the correlation coefficients of pMET and T_HGF (r = 0.38) and of pMET and pSFK (r = 0.56) were high. Prediction models of MET activation reveal cancer-type-specific differences in the performance of MET4EC, SP44_METIC and anti-HGF antibodies. Thus, we conclude that assays to predict the response to HGF/MET inhibitors require a cancer-type-specific antibody selection and should be developed in those cancer types in which they are employed clinically.
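The reported r values are pairwise Pearson correlations between per-core marker scores. A minimal sketch of that comparison, with synthetic scores standing in for the digital-imaging readouts (the variable names echo the markers but the data are fabricated for illustration only):

```python
import numpy as np

# Hypothetical per-core marker scores; in the study these were quantitative
# image analysis readouts for MET4EC, D1C2_METIC, pMET, etc.
rng = np.random.default_rng(1)
pmet = rng.random(80)
met4_ec = 0.6 * pmet + 0.4 * rng.random(80)   # constructed to track pMET
d1c2_ic = rng.random(80)                      # constructed to be unrelated

def pearson_r(a, b):
    """Pearson correlation coefficient between two marker score vectors."""
    return float(np.corrcoef(a, b)[0, 1])

r_ec = pearson_r(met4_ec, pmet)   # analogous to MET4EC vs pMET
r_ic = pearson_r(d1c2_ic, pmet)   # analogous to D1C2_METIC vs pMET
```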
Archive | 2014
Sadri Salman; Zhaoxuan Ma; Sambit K. Mohanty; Sanica Bhele; Yung-Tien Chu; Beatrice Knudsen; Arkadiusz Gertych
Separating benign glands and cancer areas from stroma is one of the vital steps towards automated grading of prostate cancer in digital images of H&E preparations. In this work we present a novel tool that utilizes supervised classification of histograms of staining components in hematoxylin and eosin images to delineate areas of benign and cancer glands. Using high-resolution images of whole-slide prostatectomies, we compared several image classification schemes, including intensity histograms, histograms of oriented gradients, and their concatenations, against a pathologist's manual tissue annotations, and showed that joint intensity histograms of the hematoxylin and eosin components performed with the highest accuracy.
Conference of Information Technologies in Biomedicine | 2016
Nathan Ing; Sadri Salman; Zhaoxuan Ma; Ann E. Walts; Beatrice Knudsen; Arkadiusz Gertych
The World Health Organization recommends subclassification of lung cancer according to the percentages of histologic subtypes within a tumor. Manual quantification of lung tumor composition is very time consuming, but it can potentially be aided by a machine learning application. We have updated our previously developed methodology to segment and distinguish solid and micropapillary lung tumor subtypes. Binary tumor masks delineated by machine learning were characterized by the mean area of binary objects and by the number of objects found in an image frame. These two features distinguished solid (n = 31) and micropapillary (n = 61) histologic subtypes with excellent performance (p < 4.04e-19) for three different frame sizes. Our method to quantify tumor growth patterns, applied to histological images of lung adenocarcinoma, demonstrates for the first time that it is feasible to quantify the composition of histological subtypes in individual lung cancers.
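The two mask-level features (mean object area and object count) can be computed with connected-component labeling. A minimal sketch with toy masks; the mask shapes and frame size are illustrative, not from the study:

```python
import numpy as np
from scipy import ndimage

def mask_features(binary_mask):
    """Mean object area and object count of a binary tumor mask -- the two
    features used to separate solid from micropapillary growth patterns."""
    labeled, n_objects = ndimage.label(binary_mask)
    if n_objects == 0:
        return 0.0, 0
    areas = np.bincount(labeled.ravel())[1:]   # drop the background bin
    return float(areas.mean()), int(n_objects)

# Toy frames: one large region (solid-like) vs many small ones
# (micropapillary-like).
solid = np.zeros((64, 64), dtype=bool)
solid[8:56, 8:56] = True
micro = np.zeros((64, 64), dtype=bool)
for i in range(4, 64, 16):
    micro[i:i+3, i:i+3] = True

mean_area_s, n_s = mask_features(solid)   # few objects, large mean area
mean_area_m, n_m = mask_features(micro)   # many objects, small mean area
```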
Medical Imaging 2018: Digital Pathology | 2018
Nathan Ing; Zhaoxuan Ma; Jiayun Li; Hootan Salemi; Corey W. Arnold; Beatrice Knudsen; Arkadiusz Gertych
Certain pathology workflows, such as classification and grading of prostate adenocarcinoma according to the Gleason grade scheme, stand to gain speed and objectivity by incorporating contemporary digital image analysis methods. We compiled a dataset of 513 high resolution image tiles from primary prostate adenocarcinoma wherein individual glands and stroma were demarcated and graded by hand. With this unique dataset, we tested four Convolutional Neural Network architectures including FCN-8s, two SegNet variants, and multi-scale U-Net for performance in semantic segmentation of high- and low-grade tumors. In a 5-fold cross-validation experiment, the FCN-8s architecture achieved a mIOU of 0.759 and an accuracy of 0.87, while the less complex U-Net architecture achieved a mIOU of 0.738 and accuracy of 0.885. The FCN-8s architecture applied to whole slide images not used for training achieved a mIOU of 0.857 in annotated tumor foci with a multiresolution processing time averaging 11 minutes per slide. The three architectures tested on whole slides all achieved areas under the Receiver Operating Characteristic curve near 1, strongly demonstrating the suitability of semantic segmentation Convolutional Neural Networks for detecting and grading prostate cancer foci in radical prostatectomies.
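The mIOU figures quoted above are the mean intersection-over-union across classes. A minimal sketch of the metric on a toy label map (the 4x4 example is illustrative, not from the dataset):

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean intersection-over-union across classes for a pair of
    segmentation label maps."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 2, 2],
                   [2, 2, 2, 2]])
pred = target.copy()
pred[0, 2] = 0                         # one mislabeled pixel
miou = mean_iou(pred, target, n_classes=3)
```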
International Conference on Information Technologies in Biomedicine | 2018
Zhaoxuan Ma; Zaneta Swiderska-Chadaj; Nathan Ing; Hootan Salemi; Dermot P. McGovern; Beatrice Knudsen; Arkadiusz Gertych
Robust delineation of tissue components in hematoxylin and eosin (H&E) stained slides is a critical step in quantifying tissue morphology. Fully convolutional neural networks (FCNs) are ideally suited for automatic and efficient segmentation of tissue components in H&E slides. However, their performance relies on the network architecture and the quality and depth of training. Here we introduce a set of 802 image tiles of colon biopsies from 2 subjects with inflammatory bowel disease (IBD), annotated for glandular epithelium (EP), gland lumen together with goblet cells (LG), and stroma (ST). We either trained the FCN-8s de novo on our images (DN-FCN-8s) or pre-trained it on the ImageNet dataset and fine-tuned it on our images (FT-FCN-8s). For comparison, we used the U-Net trained de novo. The training involved 700/802 images, leaving 102 images as a testing set. Ultimately, each model was validated on an independent digital biopsy slide. We also determined how the number of images used for training affects model performance and observed a plateau in trainability at 700 images. In the testing set, U-Net and FT-FCN-8s achieved accuracies of 92.30% and 92.26%, respectively. In the independent biopsy slide, U-Net demonstrated a segmentation accuracy of 88.64%, with F1-scores of 0.74 (EP), 0.92 (LG), and 0.93 (ST). The performance of the FT-FCN-8s was slightly worse, but the model required fewer images to reach high classification performance. Our data demonstrate that all three networks are appropriate for segmentation of glands in biopsies from patients with IBD and open the door to quantification of IBD-associated pathologies.
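The per-class F1-scores quoted for EP, LG and ST can be computed directly from a flattened segmentation map. A minimal sketch on toy labels (the counts are illustrative, not the study's pixel data):

```python
import numpy as np

def per_class_f1(pred, target, classes):
    """Per-class F1 scores between flattened segmentation label arrays."""
    scores = {}
    for c in classes:
        tp = np.sum((pred == c) & (target == c))
        fp = np.sum((pred == c) & (target != c))
        fn = np.sum((pred != c) & (target == c))
        denom = 2 * tp + fp + fn
        scores[c] = 2 * tp / denom if denom else 0.0
    return scores

target = np.array(["EP"] * 6 + ["LG"] * 6 + ["ST"] * 8)
pred = target.copy()
pred[0] = "LG"                          # one EP pixel mislabeled as LG
f1 = per_class_f1(pred, target, ["EP", "LG", "ST"])
```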
Scientific Reports | 2017
Nathan Ing; Fangjin Huang; Andrew Conley; Sungyong You; Zhaoxuan Ma; Sergey Klimov; Chisato Ohe; Xiaopu Yuan; Mahul B. Amin; Robert A. Figlin; Arkadiusz Gertych; Beatrice Knudsen
Gene expression signatures are commonly used as predictive biomarkers, but do not capture structural features within the tissue architecture. Here we apply a 2-step machine learning framework for quantitative imaging of tumor vasculature to derive a spatially informed, prognostic gene signature. The trained algorithms classify endothelial cells and generate a vascular area mask (VAM) in H&E micrographs of clear cell renal cell carcinoma (ccRCC) cases from The Cancer Genome Atlas (TCGA). Quantification of VAMs led to the discovery of 9 vascular features (9VF) that predicted disease-free survival in a discovery cohort (n = 64, HR = 2.3). Correlation analysis and information gain identified a 14-gene expression signature related to the 9VF. Two generalized linear models with elastic net regularization (14VF and 14GT), based on the 14 genes, separated independent cohorts of up to 301 cases into good and poor disease-free survival groups (14VF HR = 2.4, 14GT HR = 3.33). For the first time, we successfully applied digital image analysis and targeted machine learning to develop prognostic, morphology-based gene expression signatures from the vascular architecture. This novel morphogenomic approach has the potential to improve on previous methods for biomarker development.
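A generalized linear model with elastic net regularization of the kind used for 14VF/14GT can be sketched with scikit-learn. The expression matrix, outcome labels, and hyperparameters below are illustrative stand-ins; the published models were fit to the 14-gene signature against disease-free survival:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for the 14-gene signature: expression matrix X and a
# binarized outcome y (1 = poor disease-free survival).
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 14))
signal = X[:, 0] - X[:, 1]                    # two informative "genes"
y = (signal + 0.5 * rng.normal(size=120) > 0).astype(int)

# Logistic GLM with elastic net penalty (mix of L1 and L2 regularization);
# l1_ratio and C are illustrative hyperparameters.
glm = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
glm.fit(X, y)
risk = glm.predict_proba(X)[:, 1]             # per-case risk score
```

Thresholding the risk score splits a cohort into good and poor outcome groups, which is how the hazard ratios above would be evaluated.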
Diagnostic Pathology | 2017
Zhaoxuan Ma; Stephen L. Shiao; Emi J. Yoshida; Steven Swartwood; Fangjin Huang; Michael E. Doche; Alice P. Chung; Beatrice Knudsen; Arkadiusz Gertych
Cancer Research | 2018
Yiwu Yan; Bo Zhou; Xiaopu Yuan; Michael E. Doche; Zhaoxuan Ma; Colm Morrissey; Arkadiusz Gertych; Sungyong You; Beatrice Knudsen; Michael R. Freeman; Wei Yang