Publication


Featured research published by Andrew Beers.


Clinical Cancer Research | 2017

Residual Convolutional Neural Network for Determination of IDH Status in Low- and High-grade Gliomas from MR Imaging

Ken Chang; Harrison X. Bai; Hao Zhou; Chang Su; Wenya Linda Bi; Ena Agbodza; Vasileios K. Kavouridis; Joeky T. Senders; Alessandro Boaro; Andrew Beers; Biqi Zhang; Alexandra Capellini; Weihua Liao; Qin Shen; Xuejun Li; Bo Xiao; Jane Cryan; Shakti Ramkissoon; Lori A. Ramkissoon; Keith L. Ligon; Patrick Y. Wen; Ranjit S. Bindra; John H. Woo; Omar Arnaout; Elizabeth R. Gerstner; Paul J. Zhang; Bruce R. Rosen; Li Yang; Raymond Huang; Jayashree Kalpathy-Cramer

Purpose: Isocitrate dehydrogenase (IDH) mutations in glioma patients confer longer survival and may guide treatment decision making. We aimed to predict the IDH status of gliomas from MR imaging by applying a residual convolutional neural network to preoperative radiographic data. Experimental Design: Preoperative imaging was acquired for 201 patients from the Hospital of the University of Pennsylvania (HUP), 157 patients from Brigham and Women's Hospital (BWH), and 138 patients from The Cancer Imaging Archive (TCIA) and divided into training, validation, and testing sets. We trained a residual convolutional neural network for each MR sequence (FLAIR, T2, T1 precontrast, and T1 postcontrast) and built a predictive model from the outputs. To increase the size of the training set and prevent overfitting, we augmented the training set images by introducing random rotations, translations, flips, shearing, and zooming. Results: With our neural network model, we achieved IDH prediction accuracies of 82.8% (AUC = 0.90), 83.0% (AUC = 0.93), and 85.7% (AUC = 0.94) within training, validation, and testing sets, respectively. When age at diagnosis was incorporated into the model, the training, validation, and testing accuracies increased to 87.3% (AUC = 0.93), 87.6% (AUC = 0.95), and 89.1% (AUC = 0.95), respectively. Conclusions: We developed a deep learning technique to noninvasively predict IDH genotype in grade II–IV glioma from conventional MR imaging using a multi-institutional data set. Clin Cancer Res; 24(5); 1073–81. ©2017 AACR.
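The augmentation scheme described above is standard practice; the following is a minimal sketch of what random rotations, translations, flips, shearing, and zooming can look like for a single 2D MR slice. All parameter ranges are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from scipy import ndimage

def augment(slice2d: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply one random rotation/translation/flip/shear/zoom to a 2D slice."""
    out = ndimage.rotate(slice2d, angle=rng.uniform(-15, 15), reshape=False, order=1)
    out = ndimage.shift(out, shift=rng.uniform(-5, 5, size=2), order=1)
    if rng.random() < 0.5:
        out = np.fliplr(out)  # random horizontal flip
    shear = rng.uniform(-0.1, 0.1)
    out = ndimage.affine_transform(out, [[1.0, shear], [0.0, 1.0]], order=1)  # shear
    out = ndimage.zoom(out, rng.uniform(0.9, 1.1), order=1)  # zoom; crop/pad to restore shape
    return out

augmented = augment(np.random.rand(128, 128), np.random.default_rng(0))
```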


Scientific Reports | 2017

A Multi-Institutional Comparison of Dynamic Contrast-Enhanced Magnetic Resonance Imaging Parameter Calculations

Rachel B. Ger; Abdallah S.R. Mohamed; Musaddiq J. Awan; Yao Ding; Kimberly Li; Xenia Fave; Andrew Beers; Brandon Driscoll; Hesham Elhalawani; David A. Hormuth; Petra J. van Houdt; Renjie He; Shouhao Zhou; Kelsey B. Mathieu; Heng Li; C. Coolens; Caroline Chung; James A. Bankson; Wei Huang; Jihong Wang; Vlad C. Sandulache; Stephen Y. Lai; Rebecca M. Howell; R. Jason Stafford; Thomas E. Yankeelov; Uulke A. van der Heide; Steven J. Frank; Daniel P. Barboriak; John D. Hazle; L Court

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) provides quantitative metrics (e.g. Ktrans, ve) via pharmacokinetic models. We tested inter-algorithm variability in these quantitative metrics with 11 published DCE-MRI algorithms, all implementing Tofts-Kermode or extended Tofts pharmacokinetic models. Digital reference objects (DROs) with known Ktrans and ve values were used to assess performance at varying noise levels. Additionally, DCE-MRI data from 15 head and neck squamous cell carcinoma patients over 3 time-points during chemoradiotherapy were used to ascertain Ktrans and ve kinetic trends across algorithms. Algorithms performed well (less than 3% average error) when no noise was present in the DRO. With noise, 87% of Ktrans and 84% of ve algorithm-DRO combinations were generally in the correct order. Low Krippendorff's alpha values showed that algorithms could not consistently classify patients as above or below the median for a given algorithm at each time point or for differences in values between time points. A majority of the algorithms produced a significant Spearman correlation in ve of the primary gross tumor volume with time. Algorithmic differences in Ktrans and ve values over time indicate limitations in combining or comparing data from distinct DCE-MRI model implementations. Careful cross-algorithm quality assurance must be employed, as DCE-MRI results may not be interpretable across differing software.
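For reference, the two pharmacokinetic models compared above can be written in standard notation, where C_t is the tissue contrast-agent concentration and C_p is the arterial input function (these are the published Tofts formulas, reproduced here for orientation, not material from the study itself):

```latex
% Tofts-Kermode model
C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\,
         e^{-(K^{\mathrm{trans}}/v_e)\,(t-\tau)}\, \mathrm{d}\tau
% Extended Tofts model (adds a plasma-volume fraction v_p)
C_t(t) = v_p\, C_p(t) + K^{\mathrm{trans}} \int_0^t C_p(\tau)\,
         e^{-(K^{\mathrm{trans}}/v_e)\,(t-\tau)}\, \mathrm{d}\tau
```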


JAMA Ophthalmology | 2018

Automated Diagnosis of Plus Disease in Retinopathy of Prematurity Using Deep Convolutional Neural Networks

James M. Brown; J. Peter Campbell; Andrew Beers; Ken Chang; Susan Ostmo; R.V. Paul Chan; Jennifer G. Dy; Deniz Erdogmus; Stratis Ioannidis; Jayashree Kalpathy-Cramer; Michael F. Chiang

Importance Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide. The decision to treat is primarily based on the presence of plus disease, defined as dilation and tortuosity of retinal vessels. However, clinical diagnosis of plus disease is highly subjective and variable. Objective To implement and validate an algorithm based on deep learning to automatically diagnose plus disease from retinal photographs. Design, Setting, and Participants A deep convolutional neural network was trained using a data set of 5511 retinal photographs. Each image was previously assigned a reference standard diagnosis (RSD) based on consensus of image grading by 3 experts and clinical diagnosis by 1 expert (ie, normal, pre–plus disease, or plus disease). The algorithm was evaluated by 5-fold cross-validation and tested on an independent set of 100 images. Images were collected from 8 academic institutions participating in the Imaging and Informatics in ROP (i-ROP) cohort study. The deep learning algorithm was tested against 8 ROP experts, each of whom had more than 10 years of clinical experience and more than 5 peer-reviewed publications about ROP. Data were collected from July 2011 to December 2016. Data were analyzed from December 2016 to September 2017. Exposures A deep learning algorithm trained on retinal photographs. Main Outcomes and Measures Receiver operating characteristic analysis was performed to evaluate performance of the algorithm against the RSD. Quadratic-weighted κ coefficients were calculated for ternary classification (ie, normal, pre–plus disease, and plus disease) to measure agreement with the RSD and 8 independent experts. Results Of the 5511 included retinal photographs, 4535 (82.3%) were graded as normal, 805 (14.6%) as pre–plus disease, and 172 (3.1%) as plus disease, based on the RSD. Mean (SD) area under the receiver operating characteristic curve statistics were 0.94 (0.01) for the diagnosis of normal (vs pre–plus disease or plus disease) and 0.98 (0.01) for the diagnosis of plus disease (vs normal or pre–plus disease). For diagnosis of plus disease in an independent test set of 100 retinal images, the algorithm achieved a sensitivity of 93% with 94% specificity. For detection of pre–plus disease or worse, the sensitivity and specificity were 100% and 94%, respectively. On the same test set, the algorithm achieved a quadratic-weighted κ coefficient of 0.92 compared with the RSD, outperforming 6 of 8 ROP experts. Conclusions and Relevance This fully automated algorithm diagnosed plus disease in ROP with comparable or better accuracy than human experts. This has potential applications in disease detection, monitoring, and prognosis in infants at risk of ROP.
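The quadratic-weighted κ used above penalizes disagreements more heavily the further apart two ordinal grades are. A minimal sketch with scikit-learn, using made-up labels (0 = normal, 1 = pre-plus, 2 = plus), not data from the study:

```python
from sklearn.metrics import cohen_kappa_score

rsd = [0, 0, 1, 2, 1, 0, 2, 0, 1, 0]        # reference standard diagnosis (illustrative)
algorithm = [0, 0, 1, 2, 2, 0, 2, 0, 1, 1]  # model's predicted grades (illustrative)

# Quadratic weighting: a normal-vs-plus disagreement costs more than
# a normal-vs-pre-plus disagreement.
kappa = cohen_kappa_score(rsd, algorithm, weights="quadratic")
print(f"quadratic-weighted kappa = {kappa:.2f}")
```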


Journal of the American Medical Informatics Association | 2018

Distributed deep learning networks among institutions for medical imaging

Ken Chang; Niranjan Balachandar; Carson Lam; Darvin Yi; James M. Brown; Andrew Beers; Bruce R. Rosen; Daniel L. Rubin; Jayashree Kalpathy-Cramer

Abstract Objective Deep learning has become a promising approach for automated support for clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. Methods We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single-institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in 3 independent image collections (retinal fundus photos, mammography, and ImageNet). Results We found that cyclical weight transfer resulted in performance comparable to that of centrally hosted patient data. We also found that the performance of the cyclical weight transfer heuristic improved with a higher frequency of weight transfer. Conclusions We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.
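A minimal sketch of cyclical weight transfer as described above: a single model's weights travel from institution to institution, training on each local data set in turn, so patient data never leaves its home site. The model, loaders, optimizer, and cycle count are placeholder assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

def cyclical_weight_transfer(model: nn.Module, institution_loaders, cycles: int = 10):
    """Train one shared model by cycling its weights across institutions."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(cycles):                  # more frequent transfer helped in the study
        for loader in institution_loaders:   # visit each institution in turn
            for images, labels in loader:    # local training; data stays on site
                opt.zero_grad()
                loss_fn(model(images), labels).backward()
                opt.step()
    return model
```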


Medical Imaging 2018: Imaging Informatics for Healthcare, Research, and Applications | 2018

Fully automated disease severity assessment and treatment monitoring in retinopathy of prematurity using deep learning

James M. Brown; J. Peter Campbell; Andrew Beers; Ken Chang; Kyra Donohue; Susan Ostmo; R.V. Paul Chan; Jennifer G. Dy; Deniz Erdogmus; Stratis Ioannidis; Michael F. Chiang; Jayashree Kalpathy-Cramer

Retinopathy of prematurity (ROP) is a disease that affects premature infants, where abnormal growth of the retinal blood vessels can lead to blindness unless treated accordingly. Infants considered at risk of severe ROP are monitored for symptoms of plus disease, characterized by arterial tortuosity and venous dilation at the posterior pole, with a standard photographic definition. Disagreement among ROP experts in diagnosing plus disease has driven the development of computer-based methods that classify images based on hand-crafted features extracted from the vasculature. However, most of these approaches are semi-automated, making them time-consuming and subject to variability. In contrast, deep learning is a fully automated approach that has shown great promise in a wide variety of domains, including medical genetics, informatics, and imaging. Convolutional neural networks (CNNs) are deep networks that learn rich representations of disease features that are highly robust to variations in acquisition and image quality. In this study, we utilized a U-Net architecture to perform vessel segmentation and then a GoogLeNet to perform disease classification. The classifier was trained on 3,000 retinal images and validated on an independent test set of patients with different observed progressions and treatments. We show that our fully automated algorithm can be used to monitor the progression of plus disease over multiple patient visits with results that are consistent with the experts' consensus diagnosis. Future work will aim to further validate the method on larger cohorts of patients to assess its applicability within the clinic as a treatment monitoring tool.
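A minimal sketch of the two-stage design (segment vessels, then classify the vessel map): the tiny encoder-decoder below is a toy stand-in for the paper's U-Net, and torchvision's GoogLeNet plays the classifier. Shapes and layers are illustrative assumptions only, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import googlenet

class TinyUNet(nn.Module):
    """Toy encoder-decoder standing in for the vessel-segmentation U-Net."""
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.up = nn.Sequential(nn.ConvTranspose2d(16, 16, 2, stride=2), nn.ReLU(),
                                nn.Conv2d(16, 1, 1))
    def forward(self, x):
        return torch.sigmoid(self.up(self.down(x)))  # vessel probability map

segmenter = TinyUNet().eval()
classifier = googlenet(num_classes=3, aux_logits=False, init_weights=True).eval()

with torch.no_grad():
    fundus = torch.rand(1, 3, 224, 224)              # placeholder retinal photograph
    vessels = segmenter(fundus)                      # stage 1: vessel segmentation
    logits = classifier(vessels.repeat(1, 3, 1, 1))  # stage 2: 3-way diagnosis
    probs = logits.softmax(dim=1)                    # normal / pre-plus / plus
```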


Proceedings of SPIE | 2017

Making sense of large data sets without annotations: Analyzing age-related correlations from lung CT scans

Yashin Dicente Cid; Artem Mamonov; Andrew Beers; Armin Thomas; Vassili Kovalev; Jayashree Kalpathy-Cramer; Henning Müller

The analysis of large data sets can help to gain knowledge about specific organs or specific diseases, just as big data analysis does in many non-medical areas. This article aims to gain information from 3D volumes, namely the visual content of lung CT scans from a large number of patients. For the described data set, only limited annotation is available: the patients were all part of an ongoing screening program, and besides age and gender no information on the patients or the findings was available for this work. This is a common scenario, as image data sets are produced and become available in increasingly large quantities, but manual annotations are often unavailable, and clinical data such as text reports are often harder to share. We extracted a set of visual features from 12,414 CT scans of 9,348 patients who had CT scans of the lung taken in the context of a national lung screening program in Belarus. Lung fields were segmented by two segmentation algorithms, and only cases where both algorithms found the left and right lungs and agreed with a Dice coefficient above 0.95 were analyzed. This ensures that only segmentations of good quality were used to extract features of the lung. Patients ranged in age from 0 to 106 years. Data analysis shows that age can be predicted with fairly high accuracy for persons under 15 years. Relatively good results were also obtained between 30 and 65 years, where a steady trend is seen. For young adults and older people the results are not as good, as variability is very high in these groups. Several visualizations of the data show the evolution patterns of the lung texture, size, and density with age. The experiments allow us to learn about the evolution of the lung, and the results show that even with limited metadata we can extract interesting information from large-scale visual data. These age-related changes (for example of the lung volume, or the density histogram of the tissue) can also be taken into account for the interpretation of new cases. The database includes patients who had suspicious findings on a chest X-ray, so it is not a group of healthy people; only tendencies, not a model of a healthy lung at a specific age, can be derived.
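The Dice-based quality gate described above is easy to reproduce; a minimal sketch with hypothetical binary lung masks (the 0.95 threshold comes from the abstract, the masks do not):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

mask_a = np.random.rand(64, 64, 64) > 0.5  # placeholder masks from two algorithms
mask_b = mask_a.copy()
if dice(mask_a, mask_b) > 0.95:
    pass  # segmentation quality is sufficient; extract lung features from this scan
```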


Scientific Reports | 2018

Publisher Correction: A Multi-Institutional Comparison of Dynamic Contrast-Enhanced Magnetic Resonance Imaging Parameter Calculations (Scientific Reports (2017) DOI: 10.1038/s41598-017-11554-w)

Rachel B. Ger; Abdallah S.R. Mohamed; Musaddiq J. Awan; Yao Ding; Kimberly Li; Xenia Fave; Andrew Beers; Brandon Driscoll; Hesham Elhalawani; David A. Hormuth; Petra J. van Houdt; Renjie He; Shouhao Zhou; Kelsey B. Mathieu; Heng Li; C. Coolens; Caroline Chung; James A. Bankson; Wei Huang; Jihong Wang; Vlad C. Sandulache; Stephen Y. Lai; Rebecca M. Howell; R. Jason Stafford; Thomas E. Yankeelov; Uulke A. van der Heide; Steven J. Frank; Daniel P. Barboriak; John D. Hazle; L Court

A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has been fixed in the paper.


Scientific Data | 2018

Erratum: Dynamic contrast-enhanced magnetic resonance imaging for head and neck cancers

Joint Head and Neck Radiotherapy-MRI Development Cooperative; Hesham Elhalawani; Rachel B. Ger; Abdallah S.R. Mohamed; Musaddiq J. Awan; Yao Ding; Kimberly Li; Xenia Fave; Andrew Beers; Brandon Driscoll; David A. Hormuth; Petra J. van Houdt; Renjie He; Shouhao Zhou; Kelsey B. Mathieu; Heng Li; C. Coolens; Caroline Chung; James A. Bankson; Wei Huang; Jihong Wang; Vlad C. Sandulache; Stephen Y. Lai; Rebecca M. Howell; R. Jason Stafford; Thomas E. Yankeelov; Uulke A. van der Heide; Steven J. Frank; Daniel P. Barboriak; John D. Hazle

This corrects the article DOI: 10.1038/sdata.2018.8.


Scientific Data | 2018

Data descriptor: Dynamic contrast-enhanced magnetic resonance imaging for head and neck cancers

Hesham Elhalawani; Rachel B. Ger; Abdallah S.R. Mohamed; Musaddiq J. Awan; Yao Ding; Kimberly Li; Xenia Fave; Andrew Beers; Brandon Driscoll; David A. Hormuth; Petra J. van Houdt; Renjie He; Shouhao Zhou; Kelsey B. Mathieu; Heng Li; C. Coolens; Caroline Chung; James A. Bankson; Wei Huang; Jihong Wang; Vlad C. Sandulache; Stephen Y. Lai; Rebecca M. Howell; R. Jason Stafford; Thomas E. Yankeelov; Uulke A. van der Heide; Steven J. Frank; Daniel P. Barboriak; John D. Hazle; L Court

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has been correlated with prognosis in head and neck squamous cell carcinoma as well as with changes in normal tissues. These studies implement different software, either commercial or in-house, and different scan protocols; thus, the generalizability of the results is not confirmed. To assist in the standardization of quantitative metrics and confirm the generalizability of these previous studies, this data descriptor delineates in detail the DCE-MRI digital imaging and communications in medicine (DICOM) files with DICOM radiation therapy (RT) structure sets and digital reference objects (DROs), as well as relevant clinical data, which together encompass a data set that can be used by all software for comparing quantitative metrics. Variable flip angle (VFA) scans with six flip angles and DCE-MRI scans with a temporal resolution of 5.5 s were acquired in the axial direction on a 3T MR scanner with a field of view of 25.6 cm, slice thickness of 4 mm, and 256×256 matrix size.
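VFA acquisitions like the one described are typically used to estimate a baseline T1 map before pharmacokinetic modeling. Below is a minimal sketch of the standard linearized VFA fit on synthetic spoiled gradient-echo data; TR, flip angles, and tissue values are illustrative assumptions, not the acquisition parameters of this data set.

```python
import numpy as np

TR = 0.005                                  # repetition time in seconds (assumed)
alpha = np.deg2rad([2, 5, 10, 15, 20, 25])  # six flip angles (illustrative values)

# Synthesize noiseless SPGR signals from a known T1 and M0.
T1_true, M0 = 1.2, 1000.0
E1 = np.exp(-TR / T1_true)
signal = M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

# Linearized fit: S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), so the slope is E1.
slope, _ = np.polyfit(signal / np.tan(alpha), signal / np.sin(alpha), 1)
T1_est = -TR / np.log(slope)
print(f"estimated T1 = {T1_est:.3f} s")     # recovers ~1.2 s on noiseless data
```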


Medical Imaging 2018: Physics of Medical Imaging | 2018

Anatomical DCE-MRI phantoms generated from glioma patient data

Andrew Beers; Ken Chang; James A. Brown; Xia Zhu; Dipanjan Sengupta; Theodore L. Willke; Elizabeth R. Gerstner; Bruce R. Rosen; Jayashree Kalpathy-Cramer

Several digital reference objects (DROs) for DCE-MRI have been created to test the accuracy of pharmacokinetic modeling software under a variety of different noise conditions. However, there are few DROs that mimic the anatomical distribution of voxels found in real data, and similarly few DROs that are based on both malignant and normal tissue. We propose a series of DROs for modeling Ktrans and ve derived from a publicly available RIDER DCE-MRI dataset of 19 patients with gliomas. For each patient's DCE-MRI data, we generate Ktrans and ve parameter maps using an algorithm validated on the QIBA Tofts model phantoms. These parameter maps are denoised and then used to generate noiseless time-intensity curves for each of the original voxels. This is accomplished by reversing the Tofts model to generate concentration-time curves from Ktrans and ve inputs, and subsequently converting those curves into intensity values by normalizing to each patient's average pre-bolus image intensity. The result is a noiseless DRO in the shape of the original patient data with known ground-truth Ktrans and ve values. We make this dataset publicly available for download for all 19 patients of the original RIDER dataset.
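A minimal per-voxel sketch of the reversal described above: run the standard Tofts model forward from known Ktrans and ve to get a noiseless concentration-time curve, then map it to intensity relative to the pre-bolus baseline. The toy arterial input function and the linear concentration-to-intensity mapping are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

t = np.arange(0, 300, 5.5)  # seconds; illustrative 5.5 s temporal resolution
aif = np.where(t < 30, 0.0, 5.0 * np.exp(-(t - 30) / 80.0))  # toy arterial input function

def tofts_concentration(ktrans: float, ve: float, t: np.ndarray, cp: np.ndarray) -> np.ndarray:
    """C_t(t) = Ktrans * integral of Cp(tau) * exp(-(Ktrans/ve)(t - tau)) dtau."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

ct = tofts_concentration(ktrans=0.25, ve=0.3, t=t, cp=aif)  # known ground-truth parameters
baseline = 500.0                         # average pre-bolus image intensity (assumed)
intensity = baseline * (1.0 + 0.8 * ct)  # assumed linear signal model for illustration
```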

Collaboration


Dive into Andrew Beers's collaborations.

Top Co-Authors

Abdallah S.R. Mohamed

University of Texas MD Anderson Cancer Center


David A. Hormuth

University of Texas at Austin


Heng Li

University of Texas MD Anderson Cancer Center


Hesham Elhalawani

University of Texas MD Anderson Cancer Center
