
Publication


Featured research published by Maryellen L. Giger.


14th International Workshop on Breast Imaging (IWBI 2018) | 2018

Effect of biopsy on the MRI radiomics classification of benign lesions and luminal A cancers.

Heather M. Whitney; Karen Drukker; Alexandra Edwards; John Papaioannou; Maryellen L. Giger

Radiomic features extracted from breast magnetic resonance (MR) images have demonstrated potential in the diagnosis and prognosis of breast cancer. However, the presentation of lesions on MRI may be affected by a biopsy event. We investigated, relative to biopsy condition, the difference in radiomic features and in performance (the area under the receiver operating characteristic curve (AUC)) for the task of distinguishing between benign lesions and luminal A cancers. Dynamic contrast-enhanced MR images were collected retrospectively under IRB/HIPAA compliance. The 361-case dataset included 92 benign and 30 luminal A lesions imaged pre-biopsy and 40 benign and 199 luminal A lesions imaged post-biopsy. Thirty-four non-size radiomic features were extracted, and their values were compared for each group of lesions by biopsy condition using the Kolmogorov-Smirnov test to determine whether the two groups were drawn from the same distribution. For each feature by biopsy condition, the median AUC and the confidence interval for the difference in AUC were determined (2,000 bootstrap iterations). In all analyses, after correction for multiple comparisons, a difference was considered significant when the p-value was less than 0.0015 (0.05/34) or, equivalently, when the 99.85% confidence interval for the difference excluded zero. Features for which the comparisons failed to meet p < 0.0015 were considered potentially robust. While, as expected, the morphological feature of irregularity differed significantly for benign lesions (p = 0.0003), reflecting the increase in lesion irregularity caused by biopsy, most features were potentially robust across biopsy conditions. While the features performed well in distinguishing between luminal A cancers and benign lesions, none demonstrated a significant difference in AUC between biopsy conditions.
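
As a rough illustration of the statistical machinery described above, the sketch below bootstraps the difference in AUC between pre- and post-biopsy groups for a single feature and applies a two-sample Kolmogorov-Smirnov test. The feature arrays and labels are hypothetical placeholders; only the Bonferroni-corrected threshold (0.05/34) and the 2,000 bootstrap iterations follow the abstract.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auc_difference(feat_pre, y_pre, feat_post, y_post,
                             n_boot=2000, alpha=0.0015):
    """Bootstrap the difference in AUC (pre-biopsy minus post-biopsy) for one feature."""
    diffs = []
    for _ in range(n_boot):
        i = rng.integers(0, len(y_pre), len(y_pre))    # resample pre-biopsy cases
        j = rng.integers(0, len(y_post), len(y_post))  # resample post-biopsy cases
        try:
            diffs.append(roc_auc_score(y_pre[i], feat_pre[i]) -
                         roc_auc_score(y_post[j], feat_post[j]))
        except ValueError:  # a resample may contain only one class
            continue
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return np.median(diffs), (lo, hi)

# Hypothetical feature values (e.g., a texture feature) and labels (1 = luminal A).
feat_pre, y_pre = rng.normal(0, 1, 122), rng.integers(0, 2, 122)
feat_post, y_post = rng.normal(0, 1, 239), rng.integers(0, 2, 239)

ks_stat, p_value = ks_2samp(feat_pre, feat_post)      # same-distribution test
median_diff, ci = bootstrap_auc_difference(feat_pre, y_pre, feat_post, y_post)
print(f"KS p = {p_value:.4f}, median dAUC = {median_diff:.3f}, 99.85% CI = {ci}")
```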


Medical Imaging 2018: Physics of Medical Imaging | 2018

Assessment of diagnostic image quality of computed tomography (CT) images of the lung using deep learning

Byron R. Grant; Jonathan H. Chung; Ingrid Reiser; Maryellen L. Giger; John H. Lee

For computed tomography (CT) imaging, it is important that imaging protocols be optimized so that the scan is performed at the lowest dose that yields diagnostic images, in order to minimize patients’ exposure to ionizing radiation. To accomplish this, it is important to verify that the image quality of the acquired scan is sufficient for the diagnostic task at hand. Since image quality depends strongly on the characteristics of both the patient and the imaging system, both of which are highly variable, using simplistic parameters such as noise to determine the quality threshold is challenging. In this work, we apply deep learning using a convolutional neural network (CNN) to predict whether CT scans meet the minimal image quality threshold for diagnosis. The dataset consists of 74 cases of high-resolution axial CT scans acquired for the diagnosis of interstitial lung disease. The quality of the images was rated by a radiologist. While the number of cases is relatively small for deep learning tasks, each case consists of more than 200 slices, for a total of 21,257 images. The deep learning involves fine-tuning a pre-trained VGG19 network, which results in an accuracy of 0.76 (95% CI: 0.748-0.773) and an AUC of 0.78 (SE: 0.01). While the total number of images is relatively large, the result is still significantly limited by the small number of cases. Despite this limitation, this work demonstrates the potential of using deep learning to characterize the diagnostic quality of CT scans.
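
A minimal sketch of how such fine-tuning of a pre-trained VGG19 is commonly set up in Keras, assuming CT slices resized to 224x224, replicated to three channels, and given binary quality labels; the layer choices and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Load VGG19 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional layers; train only the new head

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # diagnostic vs. non-diagnostic quality
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

# x_train: (N, 224, 224, 3) CT slices; y_train: (N,) radiologist quality labels.
# model.fit(x_train, y_train, validation_split=0.2, epochs=10, batch_size=32)
```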


Medical Imaging 2018: Computer-Aided Diagnosis | 2018

Transfer learning with convolutional neural networks for lesion classification on clinical breast tomosynthesis.

Kayla R. Mendel; Hui Li; Deepa Sheth; Maryellen L. Giger

With the growing adoption of digital breast tomosynthesis (DBT) in breast cancer screening protocols, it is important to compare the performance of computer-aided diagnosis (CAD) for breast lesions on DBT images with that on conventional full-field digital mammography (FFDM). In this study, we retrospectively collected FFDM and DBT images of 78 lesions from 76 patients, each lesion biopsy-proven as either malignant or benign. A square region of interest (ROI) was placed to fully cover the lesion on each FFDM image, DBT synthesized 2D image, and DBT key-slice image in the craniocaudal (CC) and mediolateral oblique (MLO) views. Features were extracted from each ROI using a pre-trained convolutional neural network (CNN). These features were then input to a support vector machine (SVM) classifier, and the area under the ROC curve (AUC) was used as the figure of merit. We found that in both the CC and MLO views, the synthesized 2D images performed best in the task of lesion characterization (AUC = 0.814 and AUC = 0.881, respectively). The small database size was a key limitation of this study and could lead to overfitting in the application of the SVM classifier. In future work, we plan to expand this dataset and to explore more robust deep learning methodology such as fine-tuning.
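
A minimal sketch of the transfer learning pattern described above, in which a pre-trained CNN serves as a fixed feature extractor feeding an SVM evaluated by AUC; the network, kernel, cross-validation scheme, and array names are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Pre-trained CNN (here VGG19) used only as a fixed feature extractor.
extractor = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                        pooling="avg", input_shape=(224, 224, 3))

def extract_features(rois):
    """rois: (N, 224, 224, 3) lesion ROIs scaled to [0, 255]."""
    x = tf.keras.applications.vgg19.preprocess_input(rois.astype("float32"))
    return extractor.predict(x, verbose=0)  # (N, 512) pooled CNN features

# Hypothetical data: ROIs cropped around each lesion and biopsy-proven labels.
rois = np.random.rand(78, 224, 224, 3) * 255.0
labels = np.random.randint(0, 2, 78)  # 1 = malignant, 0 = benign

features = extract_features(rois)
svm = SVC(kernel="linear", probability=True)
scores = cross_val_predict(svm, features, labels, cv=5,
                           method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(labels, scores))
```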


Medical Imaging 2018: Computer-Aided Diagnosis | 2018

Deep learning in breast cancer risk assessment: evaluation of fine-tuned convolutional neural networks on a clinical dataset of FFDMs.

Hui Li; Kayla R. Mendel; John H. Lee; Li Lan; Maryellen L. Giger

We evaluated the potential of deep learning in the assessment of breast cancer risk using convolutional neural networks (CNNs) fine-tuned on full-field digital mammographic (FFDM) images. This study included 456 clinical FFDM cases from two high-risk datasets: BRCA1/2 gene-mutation carriers (53 cases) and unilateral cancer patients (75 cases), and a low-risk dataset as the control group (328 cases). All FFDM images (12-bit quantization and 100 micron pixel) were acquired with a GE Senographe 2000D system and were retrospectively collected under an IRB-approved, HIPAA-compliant protocol. Regions of interest of 256x256 pixels were selected from the central breast region behind the nipple in the craniocaudal projection. VGG19 pre-trained on the ImageNet dataset was used to classify the images either as high-risk or as low-risk subjects. The last fully-connected layer of pre-trained VGG19 was fine-tuned on FFDM images for breast cancer risk assessment. Performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC) in the task of distinguishing between high-risk and low-risk subjects. AUC values of 0.84 (SE=0.05) and 0.72 (SE=0.06) were obtained in the task of distinguishing between the BRCA1/2 gene-mutation carriers and low-risk women and between unilateral cancer patients and low-risk women, respectively. Deep learning with CNNs appears to be able to extract parenchymal characteristics directly from FFDMs which are relevant to the task of distinguishing between cancer risk populations, and therefore has potential to aid clinicians in assessing mammographic parenchymal patterns for cancer risk assessment.
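
A minimal sketch of fine-tuning only the last fully connected layer of a pre-trained VGG19 while freezing all other layers, assuming 256x256 ROIs resized to the network's 224x224x3 input; the layer indexing, optimizer, and hyperparameters are illustrative assumptions, not the study's exact setup.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# VGG19 with its original fully connected layers (include_top=True) but a new
# binary output attached to the last hidden FC layer (fc2).
base = tf.keras.applications.VGG19(weights="imagenet", include_top=True)
x = base.layers[-2].output                     # fc2 activations (4096-dim)
out = layers.Dense(1, activation="sigmoid", name="risk")(x)
model = models.Model(inputs=base.input, outputs=out)

for layer in model.layers[:-1]:
    layer.trainable = False                    # only the new final layer is trained

model.compile(optimizer=tf.keras.optimizers.SGD(1e-3, momentum=0.9),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# rois: (N, 224, 224, 3) ROIs resized from 256x256; labels: 1 = high-risk subject.
# model.fit(rois, labels, validation_split=0.2, epochs=20, batch_size=16)
```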


Medical Imaging 2018: Computer-Aided Diagnosis | 2018

Variations in algorithm implementation among quantitative texture analysis software packages.

Joseph J. Foy; Prerana Mitta; Lauren R. Nowosatka; Kayla R. Mendel; Hui Li; Maryellen L. Giger; Hania A. Al-Hallaq; Samuel G. Armato

Open-source texture analysis software allows for the advancement of radiomics research; variations in texture feature values, however, can result from discrepancies in algorithm implementation. Anatomically matched regions of interest (ROIs) that captured normal breast parenchyma were placed in magnetic resonance (MR) images of 20 patients at two time points. Six first-order features and six gray-level co-occurrence matrix (GLCM) features were calculated for each ROI using four texture analysis packages. Features were extracted using package-specific default GLCM parameters and using GLCM parameters modified to yield the greatest consistency among packages. The relative change in the value of each feature between time points was calculated for each ROI. Distributions of relative feature-value differences were compared across packages. Absolute agreement among feature values was quantified by the intra-class correlation coefficient. Among first-order features, significant differences were found for maximum, range, and mean, and only kurtosis showed poor agreement. All six second-order features showed significant differences using package-specific default GLCM parameters, and five second-order features showed poor agreement; with modified GLCM parameters, no significant differences among second-order features were found, and all second-order features showed poor agreement. While discrepancies in relative texture change existed across packages, these differences were not significant when consistent parameters were used.
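
For concreteness, a short sketch of how GLCM (second-order) features depend on the parameters that packages often default differently (gray-level quantization, distance, angles), using scikit-image's graycomatrix/graycoprops; the parameter values shown are illustrative and do not correspond to any package compared in the study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi, levels=64, distance=1,
                  angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Compute a few GLCM features for one ROI with explicit parameters."""
    # Quantize the ROI to a fixed number of gray levels (a common source of
    # disagreement between packages when left at different defaults).
    edges = np.linspace(roi.min(), roi.max(), levels + 1)[1:-1]
    q = np.digitize(roi, edges).astype(np.uint8)
    glcm = graycomatrix(q, distances=[distance], angles=list(angles),
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())   # average over angles
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Hypothetical anatomically matched parenchymal ROI (e.g., from a breast MR image).
roi = np.random.rand(64, 64)
print(glcm_features(roi, levels=64))
print(glcm_features(roi, levels=32))   # a different quantization changes the values
```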


14th International Workshop on Breast Imaging (IWBI 2018) | 2018

Deep learning in computer-aided diagnosis incorporating mammographic characteristics of both tumor and parenchyma stroma.

Hui Li; Deepa Sheth; Kayla R. Mendel; Li Lan; Maryellen L. Giger

We investigated the additive role of breast parenchyma stroma in the computer-aided diagnosis (CADx) of tumors on full-field digital mammograms (FFDM) by combining images of the tumor and of the contralateral normal parenchyma via deep learning. The study included 182 breast lesions, of which 106 were malignant and 76 were benign. All FFDM images were acquired using a GE 2000D Senographe system and retrospectively collected under an Institutional Review Board (IRB)-approved, Health Insurance Portability and Accountability Act (HIPAA)-compliant protocol. Convolutional neural networks (CNNs) with transfer learning were used to extract image-based characteristics of the lesions and of the parenchymal patterns (on the contralateral breast) directly from the FFDM images. Classification performance in the task of distinguishing between malignant and benign cases was evaluated and compared between the analysis of tumors alone and the combined analysis of tumor and parenchymal patterns, with the area under the receiver operating characteristic (ROC) curve (AUC) used as the figure of merit. Using only lesion image data, the transfer learning method yielded an AUC value of 0.871 (SE = 0.025); using combined information from both the lesion and parenchyma analyses, an AUC value of 0.911 (SE = 0.021) was observed. This improvement was statistically significant (p-value = 0.0362). Thus, we conclude that using CNNs with transfer learning to combine extracted image information of both tumor and parenchyma may improve breast cancer diagnosis.
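
One common way to realize the kind of feature-level combination described above is to concatenate CNN features from the tumor ROI and the contralateral parenchymal ROI before a single classifier. The sketch below illustrates that pattern under stated assumptions (the extractor, classifier, and array names are hypothetical); it is not the paper's implementation.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Shared pre-trained CNN used as a fixed feature extractor for both ROI types.
extractor = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                        pooling="avg", input_shape=(224, 224, 3))

def cnn_features(rois):
    x = tf.keras.applications.vgg19.preprocess_input(rois.astype("float32"))
    return extractor.predict(x, verbose=0)

# Hypothetical arrays: tumor ROIs, contralateral parenchymal ROIs, and labels.
tumor_rois = np.random.rand(182, 224, 224, 3) * 255.0
parenchyma_rois = np.random.rand(182, 224, 224, 3) * 255.0
labels = np.random.randint(0, 2, 182)          # 1 = malignant, 0 = benign

# Feature-level fusion: concatenate the two feature vectors for each case.
fused = np.hstack([cnn_features(tumor_rois), cnn_features(parenchyma_rois)])

clf = SVC(kernel="linear", probability=True)
scores = cross_val_predict(clf, fused, labels, cv=5, method="predict_proba")[:, 1]
print("Fused-feature AUC:", roc_auc_score(labels, scores))
```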


Medical Physics | 2015

TU-AB-BRA-07: Radiomics of Breast Cancer: A Robustness Study

Natalia Antropova; Maryellen L. Giger; Hui Li; Karen Drukker; Li Lan

Purpose: Computer-extracted image phenotypes (CEIPs) are being investigated as complementary attributes in the characterization of breast cancer in radiomics/radiogenomics research. To be useful, CEIPs need to be robust across data obtained with different manufacturers' MRI scanners and imaging protocols. Methods: Our research involved two HIPAA-compliant, retrospectively collected MRI datasets: Database 1 included 91 imaged breast cancers from the National Cancer Institute repository (imaged using General Electric equipment) and Database 2 included 117 breast cancers (imaged at our site using Philips equipment). For each case, information on clinical lymph node status and histopathology on ER, PR, and HER2 receptor status was available. Each lesion underwent quantitative radiomics analysis yielding CEIPs characterizing tumor size, shape, morphology, enhancement texture, kinetic curve assessment, and enhancement variance kinetics. The robustness of the CEIPs was assessed through statistical comparisons across the two datasets in terms of average CEIP values, t-test results on the subgroups of interest, and non-inferiority testing of performance in the prognostic tasks of distinguishing ER, PR, and HER2 receptor status and lymph node status using the area under the receiver operating characteristic curve (AUC). Results: We failed to find any statistically significant differences in the average values of the CEIP distributions across the two scanners for subgroups possessing enough cases. We found greater variation in average feature values for the clinical subgroups having fewer than 20 cases. Non-inferiority analysis demonstrated varying degrees of robustness for different MRI phenotypes. The most enhancing volume and total rate variation phenotypes showed the best agreement, with the absolute value of the lower bound of the 90% confidence interval for the difference in AUC being less than 0.02. Conclusion: MRI phenotypes appeared robust in their average values across MRI scanners given large enough datasets. Statistical analysis revealed which phenotypes were robust for cancer subtype classification. In future work, larger datasets will be collected and the robustness of CEIPs further investigated. Supported, in part, by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under grant number T32 EB002103.
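
As a rough illustration of the per-feature robustness comparison (not the authors' full analysis), the sketch below applies Welch's t-test to a single hypothetical CEIP measured on the two scanner populations and flags the feature as potentially robust when the test is non-significant; the values and significance threshold are placeholders.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Hypothetical values of one CEIP (e.g., an enhancement-texture feature) for the
# same clinical subgroup imaged on two different manufacturers' scanners.
feature_scanner_a = rng.normal(loc=0.52, scale=0.10, size=91)   # GE-imaged cases
feature_scanner_b = rng.normal(loc=0.54, scale=0.11, size=117)  # Philips-imaged cases

# Welch's t-test (unequal variances) comparing average feature values across scanners.
t_stat, p_value = ttest_ind(feature_scanner_a, feature_scanner_b, equal_var=False)

alpha = 0.05  # placeholder threshold; a real study would correct for multiple features
status = "potentially robust" if p_value >= alpha else "scanner-dependent"
print(f"t = {t_stat:.2f}, p = {p_value:.3f} -> {status}")
```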


Medical Physics | 2010

WE-B-201B-03: Fractal Dimension Analysis of Kinetic Feature Maps in Contrast-Enhanced Breast MRI

J Bancroft Brown; Maryellen L. Giger

Purpose: To investigate whether CADx performance on breast DCE-MRI can be improved by estimating the spatial complexity of lesion kinetic feature maps using generalized fractal dimension lesion descriptors (FDLDs). Method and Materials: A database of 181 histologically classified breast lesions visible on DCE-MRI was analyzed as follows. Initially, each lesion was segmented from the parenchyma using a previously developed and validated fuzzy C-means clustering technique. A kinetic curve was obtained from each lesion voxel, and kinetic features were extracted from each kinetic curve. These features were used to generate 3D kinetic feature maps for each lesion, and generalized FDLDs were calculated for each kinetic feature map. The diagnostic efficacy of the individual FDLDs was then evaluated using ROC analysis. Next, to explore whether the FDLDs could improve the performance of previous CADx methods, a conventional set of kinetic and morphological lesion features was compared with a feature set containing the conventional features and the FDLDs. Each feature set was merged using linear discriminant analysis (LDA) and evaluated using ROC analysis, together with a leave-one-case-out method to minimize database bias. Finally, the area under the ROC curve (Az) values of the two feature sets were statistically compared using ROCKIT software. Results: The individual FDLDs obtained a maximum performance of Az = 0.85 ± 0.03. The conventional features achieved Az = 0.87 ± 0.03, and the FDLDs combined with the conventional features gave Az = 0.90 ± 0.02. The Az value of the conventional features plus FDLDs was significantly higher than the Az value of the conventional features alone (p = 0.023). Conclusion: This work suggests that generalized FDLDs could potentially be beneficial to a clinical CADx system for breast DCE-MRI in the future. Conflict of Interest: Stockholder: R2 Technology/Hologic; royalties: Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba.
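
For intuition, the sketch below estimates a box-counting fractal dimension for a 2D binary map. This is only a common, simplified way to estimate fractal dimension and is not the generalized FDLD formulation used in the work; the feature map shown is hypothetical.

```python
import numpy as np

def box_count(mask, size):
    """Count boxes of a given side length that contain at least one 'on' pixel."""
    h, w = mask.shape
    ph, pw = int(np.ceil(h / size)) * size, int(np.ceil(w / size)) * size
    padded = np.zeros((ph, pw), dtype=bool)
    padded[:h, :w] = mask
    boxes = padded.reshape(ph // size, size, pw // size, size)
    return int(boxes.any(axis=(1, 3)).sum())

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Slope of log N(s) vs. log(1/s) estimates the box-counting dimension."""
    counts = [box_count(mask, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Hypothetical binary map, e.g., a thresholded kinetic feature map slice.
feature_map = np.random.rand(128, 128) > 0.6
print("Estimated fractal dimension:", round(box_counting_dimension(feature_map), 2))
```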


Radiology | 1996

Malignant and benign clustered microcalcifications: automated feature analysis and classification.

Yulei Jiang; Robert M. Nishikawa; Dulcy E. Wolverton; Charles E. Metz; Maryellen L. Giger; Robert A. Schmidt; Carl J. Vyborny; Kunio Doi


Archive | 2001

Method, system and computer readable medium for an intelligent search workstation for computer assisted interpretation of medical images

Maryellen L. Giger; Carl J. Vyborny; Zhimin Huo; Li Lan

Collaboration


Dive into Maryellen L. Giger's collaborations.

Top Co-Authors

Kunio Doi

University of Chicago


Li Lan

University of Chicago


Hui Li

University of Chicago
