Publication


Featured research published by Ken Chang.


Neuro-oncology | 2017

Multimodal MRI features predict isocitrate dehydrogenase genotype in high-grade gliomas

Biqi Zhang; Ken Chang; Shakti Ramkissoon; Shyam K. Tanguturi; Wenya Linda Bi; David A. Reardon; Keith L. Ligon; Brian M. Alexander; Patrick Y. Wen; Raymond Huang

Background. High-grade gliomas with mutations in the isocitrate dehydrogenase (IDH) gene family confer longer overall survival relative to their IDH-wild-type counterparts. Accurate determination of the IDH genotype preoperatively may have both prognostic and diagnostic value. The current study used a machine-learning algorithm to generate a model predictive of IDH genotype in high-grade gliomas based on clinical variables and multimodal features extracted from conventional MRI. Methods. Preoperative MRIs were obtained for 120 patients with primary grades III (n = 35) and IV (n = 85) glioma in this retrospective study. IDH genotype was confirmed for grade III (32/35, 91%) and IV (22/85, 26%) tumors by immunohistochemistry, spectrometry-based mutation genotyping (OncoMap), or multiplex exome sequencing (OncoPanel). IDH1 and IDH2 mutations were mutually exclusive, and all mutated tumors were collapsed into one IDH-mutated cohort. Cases were randomly assigned to either the training (n = 90) or validation cohort (n = 30). A total of 2970 imaging features were extracted from pre- and postcontrast T1-weighted, T2-weighted, and apparent diffusion coefficient maps. Using a random forest algorithm, nonredundant features were integrated with clinical data to generate a model predictive of IDH genotype. Results. Our model achieved accuracies of 86% (area under the curve [AUC] = 0.8830) in the training cohort and 89% (AUC = 0.9231) in the validation cohort. Features with the highest predictive value included patient age as well as parametric intensity, texture, and shape features. Conclusion. Using a machine-learning algorithm, we achieved accurate prediction of IDH genotype in high-grade gliomas with preoperative clinical and MRI features.
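The random-forest step described above can be sketched in a few lines. This is a hedged illustration, not the study's code: the feature values, class labels, and separation are synthetic stand-ins for the extracted MRI features, and scikit-learn's `RandomForestClassifier` replaces whatever implementation the authors used.

```python
# Sketch: binary genotype prediction from tabular imaging features with a
# random forest, then AUC on a held-out cohort (sizes from the abstract).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_train, n_val, n_features = 90, 30, 20

# Synthetic features: the positive class is shifted so a signal is learnable
y_train = rng.integers(0, 2, n_train)
X_train = rng.normal(size=(n_train, n_features)) + y_train[:, None] * 0.8
y_val = rng.integers(0, 2, n_val)
X_val = rng.normal(size=(n_val, n_features)) + y_val[:, None] * 0.8

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
importances = clf.feature_importances_   # ranks features by predictive value
```

The `feature_importances_` vector plays the role of the abstract's ranking of predictive features (age, intensity, texture, shape).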


Clinical Cancer Research | 2017

Residual Convolutional Neural Network for Determination of IDH Status in Low- and High-grade Gliomas from MR Imaging

Ken Chang; Harrison X. Bai; Hao Zhou; Chang Su; Wenya Linda Bi; Ena Agbodza; Vasileios K. Kavouridis; Joeky T. Senders; Alessandro Boaro; Andrew Beers; Biqi Zhang; Alexandra Capellini; Weihua Liao; Qin Shen; Xuejun Li; Bo Xiao; Jane Cryan; Shakti Ramkissoon; Lori A. Ramkissoon; Keith L. Ligon; Patrick Y. Wen; Ranjit S. Bindra; John H. Woo; Omar Arnaout; Elizabeth R. Gerstner; Paul J. Zhang; Bruce R. Rosen; Li Yang; Raymond Huang; Jayashree Kalpathy-Cramer

Purpose: Isocitrate dehydrogenase (IDH) mutations in glioma patients confer longer survival and may guide treatment decision making. We aimed to predict the IDH status of gliomas from MR imaging by applying a residual convolutional neural network to preoperative radiographic data. Experimental Design: Preoperative imaging was acquired for 201 patients from the Hospital of the University of Pennsylvania (HUP), 157 patients from Brigham and Women's Hospital (BWH), and 138 patients from The Cancer Imaging Archive (TCIA) and divided into training, validation, and testing sets. We trained a residual convolutional neural network for each MR sequence (FLAIR, T2, T1 precontrast, and T1 postcontrast) and built a predictive model from the outputs. To increase the size of the training set and prevent overfitting, we augmented the training set images by introducing random rotations, translations, flips, shearing, and zooming. Results: With our neural network model, we achieved IDH prediction accuracies of 82.8% (AUC = 0.90), 83.0% (AUC = 0.93), and 85.7% (AUC = 0.94) within the training, validation, and testing sets, respectively. When age at diagnosis was incorporated into the model, the training, validation, and testing accuracies increased to 87.3% (AUC = 0.93), 87.6% (AUC = 0.95), and 89.1% (AUC = 0.95), respectively. Conclusions: We developed a deep learning technique to noninvasively predict IDH genotype in grade II–IV glioma from conventional MR imaging using a multi-institutional data set. Clin Cancer Res; 24(5); 1073–81. ©2017 AACR.
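The augmentation scheme described (random rotations, translations, flips) can be illustrated with a minimal NumPy sketch. This is only a simplified stand-in: the study applied full affine transforms, including shearing and zooming, to MR slices, which this version omits.

```python
# Minimal sketch of training-set augmentation on a 2D image array.
import numpy as np

def augment(img, rng):
    """Return a randomly rotated, flipped, and shifted copy of `img`."""
    img = np.rot90(img, k=int(rng.integers(0, 4)))   # rotate by 0/90/180/270 deg
    if rng.random() < 0.5:
        img = np.fliplr(img)                         # random horizontal flip
    shift = rng.integers(-3, 4, size=2)
    img = np.roll(img, shift, axis=(0, 1))           # small circular translation
    return img

rng = np.random.default_rng(1)
image = np.arange(64 * 64, dtype=float).reshape(64, 64)
batch = [augment(image, rng) for _ in range(8)]      # 8 augmented variants
```

Each variant preserves the image's shape and content while changing its geometry, which is what lets a small training set behave like a larger one.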


Neuro-oncology | 2017

Quantitative imaging biomarkers for risk stratification of patients with recurrent glioblastoma treated with bevacizumab

Patrick Grossmann; Vivek Narayan; Ken Chang; Rifaquat Rahman; Lauren E. Abrey; David A. Reardon; Lawrence H. Schwartz; Patrick Y. Wen; Brian M. Alexander; Raymond Huang; Hugo J.W.L. Aerts

Background Anti-angiogenic therapy with bevacizumab is the most widely used treatment option for recurrent glioblastoma, but therapeutic response varies substantially and effective biomarkers for patient selection are not available. To this end, we determine whether novel quantitative radiomic strategies on the basis of MRI have the potential to noninvasively stratify survival and progression in this patient population. Methods In an initial cohort of 126 patients, we identified a distinct set of features representative of the radiographic phenotype on baseline (pretreatment) MRI. These selected features were evaluated on a second cohort of 165 patients from the multicenter BRAIN trial with prospectively acquired clinical and imaging data. Features were evaluated in terms of prognostic value for overall survival (OS), progression-free survival (PFS), and progression within 3, 6, and 9 months using baseline imaging and first follow-up imaging at 6 weeks posttreatment initiation. Results Multivariable analysis of features derived at baseline imaging resulted in significant stratification of OS (hazard ratio [HR] = 2.5; log-rank P = 0.001) and PFS (HR = 4.5; log-rank P = 2.1 × 10⁻⁵) in validation data. These stratifications were stronger compared with clinical or volumetric covariates (permutation test false discovery rate [FDR] < 0.05). Univariable analysis of a prognostic textural heterogeneity feature (information correlation) derived from postcontrast T1-weighted imaging revealed significantly higher scores for patients who progressed within 3 months (Wilcoxon test P = 8.8 × 10⁻⁸). Generally, features derived from postcontrast T1-weighted imaging yielded higher prognostic power compared with precontrast enhancing T2-weighted imaging. Conclusion Radiomics provides prognostic value for survival and progression in patients with recurrent glioblastoma receiving bevacizumab treatment. These results could lead to the development of quantitative pretreatment biomarkers to predict benefit from bevacizumab using standard-of-care imaging.


JAMA Ophthalmology | 2018

Automated Diagnosis of Plus Disease in Retinopathy of Prematurity Using Deep Convolutional Neural Networks

James M. Brown; J. Peter Campbell; Andrew Beers; Ken Chang; Susan Ostmo; R.V. Paul Chan; Jennifer G. Dy; Deniz Erdogmus; Stratis Ioannidis; Jayashree Kalpathy-Cramer; Michael F. Chiang

Importance Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide. The decision to treat is primarily based on the presence of plus disease, defined as dilation and tortuosity of retinal vessels. However, clinical diagnosis of plus disease is highly subjective and variable. Objective To implement and validate an algorithm based on deep learning to automatically diagnose plus disease from retinal photographs. Design, Setting, and Participants A deep convolutional neural network was trained using a data set of 5511 retinal photographs. Each image was previously assigned a reference standard diagnosis (RSD) based on consensus of image grading by 3 experts and clinical diagnosis by 1 expert (ie, normal, pre-plus disease, or plus disease). The algorithm was evaluated by 5-fold cross-validation and tested on an independent set of 100 images. Images were collected from 8 academic institutions participating in the Imaging and Informatics in ROP (i-ROP) cohort study. The deep learning algorithm was tested against 8 ROP experts, each of whom had more than 10 years of clinical experience and more than 5 peer-reviewed publications about ROP. Data were collected from July 2011 to December 2016. Data were analyzed from December 2016 to September 2017. Exposures A deep learning algorithm trained on retinal photographs. Main Outcomes and Measures Receiver operating characteristic analysis was performed to evaluate performance of the algorithm against the RSD. Quadratic-weighted κ coefficients were calculated for ternary classification (ie, normal, pre-plus disease, and plus disease) to measure agreement with the RSD and 8 independent experts. Results Of the 5511 included retinal photographs, 4535 (82.3%) were graded as normal, 805 (14.6%) as pre-plus disease, and 172 (3.1%) as plus disease, based on the RSD. Mean (SD) area under the receiver operating characteristic curve statistics were 0.94 (0.01) for the diagnosis of normal (vs pre-plus disease or plus disease) and 0.98 (0.01) for the diagnosis of plus disease (vs normal or pre-plus disease). For diagnosis of plus disease in an independent test set of 100 retinal images, the algorithm achieved a sensitivity of 93% with 94% specificity. For detection of pre-plus disease or worse, the sensitivity and specificity were 100% and 94%, respectively. On the same test set, the algorithm achieved a quadratic-weighted κ coefficient of 0.92 compared with the RSD, outperforming 6 of 8 ROP experts. Conclusions and Relevance This fully automated algorithm diagnosed plus disease in ROP with comparable or better accuracy than human experts. This has potential applications in disease detection, monitoring, and prognosis in infants at risk of ROP.
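The quadratic-weighted κ used to score agreement above can be computed directly from two raters' ternary labels (0 = normal, 1 = pre-plus disease, 2 = plus disease). The function below is a small self-contained illustration of the metric itself, not the study's evaluation code, and the example label vectors are made up.

```python
# Quadratic-weighted Cohen's kappa for k-class ordinal ratings.
import numpy as np

def quadratic_weighted_kappa(a, b, n_classes=3):
    a, b = np.asarray(a), np.asarray(b)
    # Observed joint rating counts
    observed = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        observed[i, j] += 1
    # Expected counts under rater independence (outer product of marginals)
    expected = np.outer(np.bincount(a, minlength=n_classes),
                        np.bincount(b, minlength=n_classes)) / len(a)
    # Quadratic disagreement weights: zero on the diagonal, growing off it
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

algorithm = [0, 0, 1, 2, 2, 0, 1, 2]   # hypothetical algorithm labels
reference = [0, 0, 1, 2, 1, 0, 1, 2]   # hypothetical reference standard
kappa = quadratic_weighted_kappa(algorithm, reference)
```

Perfect agreement yields κ = 1, and the quadratic weights penalize a normal-vs-plus confusion four times as heavily as an adjacent-grade one, which is why the metric suits ordinal disease grades.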


Journal of the American Medical Informatics Association | 2018

Distributed deep learning networks among institutions for medical imaging

Ken Chang; Niranjan Balachandar; Carson Lam; Darvin Yi; James M. Brown; Andrew Beers; Bruce R. Rosen; Daniel L. Rubin; Jayashree Kalpathy-Cramer

Objective Deep learning has become a promising approach for automated support for clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. Methods We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in 3 independent image collections (retinal fundus photos, mammography, and ImageNet). Results We find that cyclical weight transfer resulted in performance comparable to that of centrally hosted patient data. We also found that the performance of the cyclical weight transfer heuristic improved with a higher frequency of weight transfer. Conclusions We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.
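The cyclical-weight-transfer heuristic has a simple shape: one model's weights travel from institution to institution, each site trains briefly on its local data, and raw patient data never leaves a site. Everything below is a toy stand-in (synthetic data, logistic regression in place of the deep networks) meant only to show that loop, not the paper's implementation.

```python
# Toy cyclical weight transfer across 4 "institutions" sharing a common task.
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)

def make_site(n=200):
    """Synthetic local dataset drawn from the shared underlying task."""
    X = rng.normal(size=(n, 5))
    y = (X @ true_w + rng.normal(scale=0.1, size=n) > 0).astype(float)
    return X, y

sites = [make_site() for _ in range(4)]   # 4 institutions; data stays local

def local_epochs(w, X, y, lr=0.1, epochs=20):
    """A short burst of gradient descent on one site's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))     # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)    # logistic-loss gradient step
    return w

w = np.zeros(5)
for cycle in range(5):          # weights cycle among sites; only w is shared
    for X, y in sites:
        w = local_epochs(w, X, y)

# Pooled evaluation is for illustration only; pooling defeats the purpose
X_all = np.vstack([X for X, _ in sites])
y_all = np.concatenate([y for _, y in sites])
acc = np.mean(((X_all @ w) > 0) == y_all)
```

The key design point the loop makes visible: the only artifact crossing institutional boundaries is the weight vector `w`, which is why the approach sidesteps patient-data sharing.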


Medical Imaging 2018: Imaging Informatics for Healthcare, Research, and Applications | 2018

Fully automated disease severity assessment and treatment monitoring in retinopathy of prematurity using deep learning

James M. Brown; J. Peter Campbell; Andrew Beers; Ken Chang; Kyra Donohue; Susan Ostmo; R.V. Paul Chan; Jennifer G. Dy; Deniz Erdogmus; Stratis Ioannidis; Michael F. Chiang; Jayashree Kalpathy-Cramer

Retinopathy of prematurity (ROP) is a disease that affects premature infants, where abnormal growth of the retinal blood vessels can lead to blindness unless treated accordingly. Infants considered at risk of severe ROP are monitored for symptoms of plus disease, characterized by arterial tortuosity and venous dilation at the posterior pole, with a standard photographic definition. Disagreement among ROP experts in diagnosing plus disease has driven the development of computer-based methods that classify images based on hand-crafted features extracted from the vasculature. However, most of these approaches are semi-automated, making them time-consuming and subject to variability. In contrast, deep learning is a fully automated approach that has shown great promise in a wide variety of domains, including medical genetics, informatics, and imaging. Convolutional neural networks (CNNs) are deep networks that learn rich representations of disease features that are highly robust to variations in acquisition and image quality. In this study, we utilized a U-Net architecture to perform vessel segmentation and then a GoogLeNet to perform disease classification. The classifier was trained on 3,000 retinal images and validated on an independent test set of patients with different observed progressions and treatments. We show that our fully automated algorithm can be used to monitor the progression of plus disease over multiple patient visits with results that are consistent with the experts' consensus diagnosis. Future work will aim to further validate the method on larger cohorts of patients to assess its applicability within the clinic as a treatment monitoring tool.


Medical Imaging 2018: Physics of Medical Imaging | 2018

Anatomical DCE-MRI phantoms generated from glioma patient data

Andrew Beers; Ken Chang; James M. Brown; Xia Zhu; Dipanjan Sengupta; Theodore L. Willke; Elizabeth R. Gerstner; Bruce R. Rosen; Jayashree Kalpathy-Cramer

Several digital reference objects (DROs) for DCE-MRI have been created to test the accuracy of pharmacokinetic modeling software under a variety of different noise conditions. However, there are few DROs that mimic the anatomical distribution of voxels found in real data, and similarly few DROs that are based on both malignant and normal tissue. We propose a series of DROs for modeling Ktrans and Ve derived from a publicly available RIDER DCE-MRI dataset of 19 patients with gliomas. For each patient's DCE-MRI data, we generate Ktrans and Ve parameter maps using an algorithm validated on the QIBA Tofts model phantoms. These parameter maps are denoised, and then used to generate noiseless time-intensity curves for each of the original voxels. This is accomplished by reversing the Tofts model to generate concentration-time curves from Ktrans and Ve inputs, and subsequently converting those curves into intensity values by normalizing to each patient's average pre-bolus image intensity. The result is a noiseless DRO in the shape of the original patient data with known ground-truth Ktrans and Ve values. We make this dataset publicly available for download for all 19 patients of the original RIDER dataset.
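The generation step described above rests on the standard Tofts model, C_t(t) = Ktrans ∫₀ᵗ Cp(τ) exp(−(Ktrans/Ve)(t−τ)) dτ, which maps a Ktrans/Ve pair and an arterial input function Cp to a noiseless tissue concentration curve. A numerical sketch follows; the biexponential input function and parameter values are illustrative assumptions, not those of the RIDER data or the authors' software.

```python
# Forward Tofts model: concentration-time curve from Ktrans and Ve, computed
# as a discrete convolution of the arterial input with an exponential kernel.
import numpy as np

t = np.linspace(0, 5, 301)        # time axis in minutes
dt = t[1] - t[0]
ktrans, ve = 0.25, 0.30           # example parameter-map values (1/min, unitless)

# Illustrative biexponential arterial input function (zero at t = 0)
cp = 5.0 * (np.exp(-0.5 * t) - np.exp(-4.0 * t))

kernel = np.exp(-(ktrans / ve) * t)
ct = ktrans * np.convolve(cp, kernel)[:len(t)] * dt   # discretized integral
```

Converting `ct` into image intensities (the normalization to pre-bolus intensity mentioned in the abstract) would be a separate scaling step on top of this curve.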


Medical Imaging 2018: Image Processing | 2018

Sequential neural networks for biologically informed glioma segmentation

Andrew Beers; Ken Chang; James M. Brown; Elizabeth R. Gerstner; Bruce R. Rosen; Jayashree Kalpathy-Cramer

In the last five years, advances in processing power and computational efficiency in graphical processing units have catalyzed dozens of deep neural network segmentation algorithms for a variety of target tissues and malignancies. However, few of these algorithms preconfigure any biological context of their chosen segmentation tissues, instead relying on the neural network’s optimizer to develop such associations de novo. We present a novel method for applying deep neural networks to the problem of glioma tissue segmentation that takes into account the structured nature of gliomas – edematous tissue surrounding mutually-exclusive regions of enhancing and non-enhancing tumor. We trained separate deep neural networks with a 3D U-Net architecture in a tree structure to create segmentations for edema, non-enhancing tumor, and enhancing tumor regions. Specifically, training was configured such that the whole tumor region including edema was predicted first, and its output segmentation was fed as input into separate models to predict enhancing and non-enhancing tumor. We trained our model on publicly available pre- and post-contrast T1 images, T2 images, and FLAIR images, and validated our trained model on patient data from an ongoing clinical trial.
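The tree-structured cascade can be miniaturized to show the gating idea: the stage-1 whole-tumor prediction constrains where the stage-2 models may predict enhancing versus non-enhancing tissue. Simple thresholds on a synthetic image stand in for the trained 3D U-Nets here; this sketches the data flow only, not the paper's pipeline.

```python
# Two-stage segmentation cascade on a synthetic "MR slice".
import numpy as np

image = np.zeros((64, 64))
image[20:40, 20:40] = 1.0        # tumor region (including edema)
image[25:35, 25:35] = 2.0        # brighter "enhancing" core

whole_tumor = image > 0.5                    # stage 1: whole-tumor mask
gated = np.where(whole_tumor, image, 0.0)    # stage-1 output gates stage 2
enhancing = gated > 1.5                      # stage 2a: enhancing tumor
non_enhancing = whole_tumor & ~enhancing     # stage 2b: remaining tumor
```

Because stage 2 only sees voxels inside the stage-1 mask, the enhancing and non-enhancing predictions are mutually exclusive and confined to tumor, which is the biological structure the sequential design encodes.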


Journal of Neuro-oncology | 2018

Reduced expression of DNA repair genes and chemosensitivity in 1p19q codeleted lower-grade gliomas

Lei Tang; Lu Deng; Harrison X. Bai; James Sun; Natalie Neale; Jing Wu; Yinyan Wang; Ken Chang; Raymond Huang; Paul J. Zhang; Xuejun Li; Bo Xiao; Ya Cao; Yongguang Tao; Li Yang

Background Lower-grade gliomas (LGGs, defined as WHO grades II and III) with 1p19q codeletion have increased chemosensitivity when compared to LGGs without 1p19q codeletion, but the mechanism is currently unknown. Methods RNAseq data from 515 LGG patients in The Cancer Genome Atlas (TCGA) were analyzed to compare the effect of expression of the 9 DNA repair genes located on chromosome arms 1p and 19q on progression-free survival (PFS) and overall survival (OS) between patients who received chemotherapy and those who did not. Chemosensitivity of cells with DNA repair genes knocked down was tested using the MTS cell proliferation assay in the HS683 and U251 cell lines. Results The expression of 9 DNA repair genes on 1p and 19q was significantly lower in 1p19q-codeleted tumors (n = 175) than in tumors without the codeletion (n = 337) (p < 0.001). In LGG patients who received chemotherapy, lower expression of LIG1, POLD1, PNKP, RAD54L, and MUTYH was associated with longer PFS and OS. This difference between chemotherapy and non-chemotherapy groups in the association of gene expression with survival was not observed in non-DNA repair genes located on chromosome arms 1p and 19q. MTS assays showed that knockdown of the DNA repair genes LIG1, POLD1, PNKP, RAD54L, and MUTYH significantly inhibited recovery in response to temozolomide when compared with the control group (p < 0.001). Conclusions Our results suggest that reduced expression of DNA repair genes on chromosome arms 1p and 19q may account for the increased chemosensitivity of LGGs with 1p19q codeletion.


Neuro-oncology | 2016

Multimodal imaging patterns predict survival in recurrent glioblastoma patients treated with bevacizumab

Ken Chang; Biqi Zhang; Xiaotao Guo; Min Zong; Rifaquat Rahman; David Sanchez; Nicolette Winder; David A. Reardon; Binsheng Zhao; Patrick Y. Wen; Raymond Huang

Collaboration


Dive into Ken Chang's collaborations.

Top Co-Authors

Raymond Huang

Brigham and Women's Hospital


Brian M. Alexander

Brigham and Women's Hospital
