Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Theresa C. Thai is active.

Publication


Featured research published by Theresa C. Thai.


Computer Methods and Programs in Biomedicine | 2017

A two-step convolutional neural network based computer-aided detection scheme for automatically segmenting adipose tissue volume depicting on CT images

Yunzhi Wang; Yuchen Qiu; Theresa C. Thai; Kathleen N. Moore; Hong Liu; Bin Zheng

Accurate assessment of the adipose tissue volume inside a human body plays an important role in predicting disease or cancer risk, diagnosis, and prognosis. To overcome the limitation of using only one subjectively selected CT image slice to estimate the size of fat areas, this study aims to develop and test a computer-aided detection (CAD) scheme based on deep learning to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) depicted on volumetric CT images. A retrospectively collected CT image dataset was divided into two independent training and testing groups. The proposed CAD framework consisted of two steps with two convolutional neural networks (CNNs), namely a Selection-CNN and a Segmentation-CNN. The first CNN was trained using 2,240 CT slices to select abdominal CT slices depicting SFA and VFA. The second CNN was trained with 84,000 pixel patches and applied to the selected CT slices to identify fat-related pixels and assign them to the SFA and VFA classes. Compared with manual CT slice selection and fat pixel segmentation, the Selection-CNN achieved a slice selection accuracy of 95.8%, while the Segmentation-CNN achieved a fat pixel segmentation accuracy of 96.8%. This study demonstrated the feasibility of applying a new deep-learning-based CAD scheme to automatically recognize the abdominal section of the human body from CT scans and to segment SFA and VFA from volumetric CT data with high accuracy and agreement with the manual segmentation results.
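The two-step design described above can be sketched as follows. This is a minimal illustration only: the layer configurations, patch size, and dummy data are assumptions, not the published Selection-CNN/Segmentation-CNN architectures.

```python
# A minimal sketch of the two-step scheme (slice selection, then pixel-patch
# classification). All layer sizes, the patch size, and the dummy data are
# illustrative assumptions, not the published network configuration.
import torch
import torch.nn as nn

class SelectionCNN(nn.Module):
    """Step 1: classify whole CT slices as abdominal (depicting SFA/VFA) or not."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)   # non-abdominal vs. abdominal

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class SegmentationCNN(nn.Module):
    """Step 2: classify small pixel patches into background / SFA / VFA."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 3)   # background, SFA, VFA

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage: run the Selection-CNN over the slices of a CT volume, then apply the
# Segmentation-CNN to pixel patches extracted from the selected slices.
slices = torch.randn(20, 1, 512, 512)             # dummy CT volume (20 slices)
selector, segmenter = SelectionCNN(), SegmentationCNN()
is_abdominal = selector(slices).argmax(1) == 1    # which slices to keep
patches = torch.randn(64, 1, 32, 32)              # patches from a selected slice
fat_labels = segmenter(patches).argmax(1)         # 0=background, 1=SFA, 2=VFA
print(int(is_abdominal.sum()), "slices selected;", fat_labels.bincount())
```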


BMC Medical Imaging | 2016

Applying a computer-aided scheme to detect a new radiographic image marker for prediction of chemotherapy outcome

Yunzhi Wang; Yuchen Qiu; Theresa C. Thai; Kathleen N. Moore; Hong Liu; Bin Zheng

Background: To investigate the feasibility of automatically segmenting visceral and subcutaneous fat areas from computed tomography (CT) images of ovarian cancer patients and applying the computed adiposity-related image features to predict chemotherapy outcome. Methods: A computerized image processing scheme was developed to segment visceral and subcutaneous fat areas and compute adiposity-related image features. Logistic regression models were then applied to analyze the association between the scheme-generated assessment scores and the progression-free survival (PFS) of patients, using a leave-one-case-out cross-validation method and a dataset of 32 patients. Results: The correlation coefficients between automated and radiologist's manual segmentation of visceral and subcutaneous fat areas were 0.76 and 0.89, respectively. The scheme-generated prediction scores based on adiposity-related radiographic image features were significantly associated with patients' PFS (p < 0.01). Conclusion: Using a computerized scheme enables more efficient and robust segmentation of visceral and subcutaneous fat areas. The computed adiposity-related image features also have potential to improve the accuracy of predicting chemotherapy outcome.
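A minimal sketch of the evaluation strategy described above (scheme-generated scores tested with leave-one-case-out logistic regression), using synthetic placeholders for the adiposity-related features:

```python
# Leave-one-case-out logistic regression linking adiposity-related features
# to a binary PFS outcome. Feature names and data are synthetic placeholders,
# not the study's actual measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))          # e.g. VFA, SFA, VFA/SFA ratio, mean fat density
y = rng.integers(0, 2, size=32)       # 1 = PFS above threshold, 0 = below

scores = np.zeros(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]

print("leave-one-case-out AUC:", roc_auc_score(y, scores))
```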


Acta Radiologica | 2016

Early prediction of clinical benefit of treating ovarian cancer using quantitative CT image feature analysis.

Yuchen Qiu; Maxine Tan; Scott McMeekin; Theresa C. Thai; Kai Ding; Kathleen N. Moore; Hong Liu; Bin Zheng

Background: In current clinical trials of treating ovarian cancer patients, accurately predicting patients' response to chemotherapy at an early stage remains an important and unsolved challenge. Purpose: To investigate the feasibility of applying a new quantitative image analysis method to predict the early response of ovarian cancer patients to chemotherapy in clinical trials. Material and Methods: A dataset of 30 patients was retrospectively selected for this study, among which 12 were responders with 6-month progression-free survival (PFS) and 18 were non-responders. A computer-aided detection scheme was developed to segment tumors depicted on two sets of CT images acquired pre-treatment and 4–6 weeks post-treatment. The scheme computed the changes of three image features related to tumor volume, density, and density variance. We analyzed the performance of each image feature, and of a decision tree combining them, in predicting patients' 6-month PFS. The prediction accuracy of the quantitative image features was also compared with the clinical record based on the Response Evaluation Criteria in Solid Tumors (RECIST) guideline. Results: The areas under the receiver operating characteristic curve (AUC) were 0.773 ± 0.086, 0.680 ± 0.109, and 0.668 ± 0.101 when using each of the three features, respectively. The AUC increased to 0.831 ± 0.078 when combining these features. The decision-tree classifier achieved a higher prediction accuracy (76.7%) than the RECIST guideline (60.0%). Conclusion: This study demonstrated the potential of a quantitative image feature analysis method to improve the accuracy of predicting the early response of ovarian cancer patients to chemotherapy in clinical trials.
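The prediction step can be illustrated with the sketch below, where synthetic values stand in for the three feature changes and a small decision tree is evaluated with leave-one-case-out cross-validation; the tree depth and data are illustrative assumptions.

```python
# Decision-tree prediction of 6-month PFS from changes in tumor volume,
# density, and density variance. All data below are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Rows: patients; columns: relative change in volume, density, density variance.
X = rng.normal(size=(30, 3))
y = np.r_[np.ones(12), np.zeros(18)].astype(int)   # 12 responders, 18 non-responders

preds = np.zeros(len(y))
for tr, te in LeaveOneOut().split(X):
    clf = DecisionTreeClassifier(max_depth=2, random_state=0)
    clf.fit(X[tr], y[tr])
    preds[te] = clf.predict(X[te])

print("leave-one-case-out accuracy:", accuracy_score(y, preds))
```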


Proceedings of SPIE | 2017

Improving efficacy of metastatic tumor segmentation to facilitate early prediction of ovarian cancer patients' response to chemotherapy

Gopichandh Danala; Yunzhi Wang; Theresa C. Thai; Camille C. Gunderson; Katherine Moxley; Kathleen N. Moore; Robert S. Mannel; Samuel Cheng; Hong Liu; Bin Zheng; Yuchen Qiu

Accurate tumor segmentation is a critical step in developing a computer-aided detection (CAD) based quantitative image analysis scheme for early-stage prognostic evaluation of ovarian cancer patients. The purpose of this investigation is to assess the efficacy of several different methods for segmenting metastatic tumors occurring in different organs of ovarian cancer patients. In this study, we developed a segmentation scheme consisting of eight different algorithms, which can be divided into three groups: (1) region-growing based methods; (2) Canny operator based methods; and (3) partial differential equation (PDE) based methods. A total of 138 tumors acquired from 30 ovarian cancer patients were used to test the performance of these eight segmentation algorithms. The results demonstrate that each of the tested tumors can be successfully segmented by at least one of the eight algorithms without manual boundary correction. Furthermore, the modified region growing, classical Canny detector, fast marching, and threshold level set algorithms are recommended for the future development of ovarian cancer related CAD schemes. This study may provide a meaningful reference for developing novel quantitative image feature analysis schemes to more accurately predict the response of ovarian cancer patients to chemotherapy at an early stage.
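A minimal sketch of the three algorithm groups, using scikit-image stand-ins (flood fill for region growing, the Canny detector plus hole filling, and morphological Chan-Vese as a PDE-based level set); the paper's eight specific algorithms and their parameters are not reproduced here.

```python
# Three representative segmentation approaches applied to a synthetic "tumor":
# region growing, Canny-based, and a PDE-based (level set) method. Seeds and
# parameters are illustrative only.
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, segmentation, draw

# Synthetic tumor: a bright disk on a noisy background.
image = np.random.default_rng(2).normal(0.1, 0.05, (128, 128))
rr, cc = draw.disk((64, 64), 20)
image[rr, cc] += 0.8

# 1) Region-growing group: flood fill from a seed placed inside the tumor.
region_mask = segmentation.flood(image, (64, 64), tolerance=0.3)

# 2) Canny-based group: detect edges, then fill the enclosed region.
edges = feature.canny(image, sigma=2.0)
canny_mask = ndi.binary_fill_holes(edges)

# 3) PDE-based group: morphological Chan-Vese, a level-set method.
pde_mask = segmentation.morphological_chan_vese(image, 50)

for name, mask in [("region growing", region_mask),
                   ("canny + fill", canny_mask),
                   ("chan-vese", pde_mask)]:
    print(f"{name}: {int(mask.sum())} foreground pixels")
```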


Journal of Pediatric and Adolescent Gynecology | 2017

Uterine Didelphys with Bilateral Cervical Agenesis in a 15-Year-Old Girl

Kate C. Arnold; Theresa C. Thai; La Tasha B. Craig

BACKGROUND Isolated uterine didelphys requires no treatment, in contrast to cervical agenesis, which requires a hysterectomy. Because of this, correct diagnosis of Müllerian anomalies is paramount when making recommendations for patient care. CASE A 15-year-old girl presented to the clinic with pelvic pain and primary amenorrhea. Uterine didelphys with bilateral cervical agenesis was diagnosed using imaging. Hysterectomy was recommended, and the diagnosis was confirmed at surgery and by anatomic pathology. SUMMARY AND CONCLUSION Our patient with uterine didelphys and bilateral cervical agenesis presented a diagnostic challenge because, to our knowledge, this combination has never been reported in the literature. Her pattern of anomalies had significant implications for future fertility. Radiologic examination was vital to confirming the diagnosis in a young, virginal patient.


Physics in Medicine and Biology | 2018

Prediction of chemotherapy response in ovarian cancer patients using a new clustered quantitative image marker

Abolfazl Zargari; Yue Du; Morteza Heidari; Theresa C. Thai; Camille C. Gunderson; Kathleen N. Moore; Robert S. Mannel; Hong Liu; Bin Zheng; Yuchen Qiu

This study aimed to investigate the feasibility of integrating image features computed from both the spatial and frequency domains to better describe tumor heterogeneity for precise prediction of tumor response to postsurgical chemotherapy in patients with advanced-stage ovarian cancer. A computer-aided scheme was applied to first compute 133 features from five categories, namely shape and density, fast Fourier transform, discrete cosine transform (DCT), wavelet, and gray level difference method features. An optimal feature cluster was then determined by the scheme using a particle swarm optimization algorithm, aiming to achieve a discrimination power unattainable with single features. The scheme was tested using a balanced dataset (responders and non-responders defined using 6-month PFS) retrospectively collected from 120 ovarian cancer patients. When evaluating the performance of individual features among the five categories, the DCT features achieved higher prediction accuracy than the features in the other groups. By comparison, a quantitative image marker generated from the optimal feature cluster yielded an area under the ROC curve (AUC) of 0.86, while the top-performing single feature had an AUC of only 0.74. Furthermore, the features computed from the frequency domain were observed to be as important as those computed from the spatial domain. In conclusion, this study demonstrates the potential of the proposed quantitative image marker, which fuses features computed from both the spatial and frequency domains, for reliable prediction of tumor response to postsurgical chemotherapy.
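The feature-cluster search can be sketched with a simple binary particle swarm optimization over synthetic data, as below; the swarm size, inertia and acceleration constants, and the logistic-regression fitness function are illustrative assumptions rather than the scheme's actual configuration.

```python
# Particle swarm optimization (PSO) for selecting a feature subset, with a
# cross-validated AUC as the fitness. The 20 synthetic features stand in for
# the study's 133 real features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 20))                      # placeholder feature matrix
y = np.r_[np.ones(60), np.zeros(60)].astype(int)    # balanced responders / non-responders

def fitness(mask):
    """AUC of a logistic regression using only the selected features."""
    if mask.sum() == 0:
        return 0.0
    probs = cross_val_predict(LogisticRegression(max_iter=1000),
                              X[:, mask], y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, probs)

n_particles, n_feat, n_iter = 10, X.shape[1], 20
pos = rng.random((n_particles, n_feat))             # continuous positions in [0, 1]
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fits = np.array([fitness(p > 0.5) for p in pos])
    improved = fits > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fits[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest > 0.5), "AUC:", pbest_fit.max())
```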


Medical Imaging 2018: Digital Pathology | 2018

A performance comparison of low- and high-level features learned by deep convolutional neural networks in epithelium and stroma classification.

Yue Du; Roy Zhang; Abolfazl Zargari; Theresa C. Thai; Camille C. Gunderson; Katherine Moxley; Hong Liu; Bin Zheng; Yuchen Qiu

Transfer learning based on deep convolutional neural networks (CNNs) is an effective tool to reduce the dependence on hand-crafted features when handling medical classification problems, and may mitigate insufficient training caused by limited sample sizes. In this study, we investigated the discriminative power of features at different CNN levels for the task of classifying epithelial and stromal regions on digitized pathology slides prepared from breast cancer tissue. We extracted low-level and high-level features from four deep CNN architectures, namely AlexNet, Places365-AlexNet, VGG, and GoogLeNet. These features were used as input to train and optimize different classifiers, including a support vector machine (SVM), random forest (RF), and k-nearest neighbors (KNN). A total of 15,000 regions of interest (ROIs) acquired from a public database were employed in this study. We observed that the low-level features of AlexNet, Places365-AlexNet, and VGG outperformed the high-level ones, whereas the opposite held for GoogLeNet. Moreover, the best accuracy, 89.7%, was achieved by the relatively deep max-pool-4 layer of GoogLeNet. In summary, our extensive empirical evaluation suggests that it is viable to extend the use of transfer learning to the development of high-performance detection and diagnosis systems for medical imaging tasks.
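A minimal sketch of this feature-extraction strategy, assuming AlexNet as the backbone: low-level activations (first convolutional block) and high-level activations (up to fc7) are fed to a linear SVM. The layer choices, classifier settings, and dummy data are assumptions for illustration.

```python
# Use a pretrained CNN as a fixed feature extractor and compare low-level vs.
# high-level features with a classical classifier. Layer cut points are
# illustrative, not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

low_level = nn.Sequential(*list(alexnet.features.children())[:3])   # conv1 + relu + pool
high_level = nn.Sequential(alexnet.features, alexnet.avgpool, nn.Flatten(),
                           *list(alexnet.classifier.children())[:5])  # up to fc7

def extract(extractor, images):
    with torch.no_grad():
        return extractor(images).flatten(1).numpy()

# Placeholder ROIs standing in for 224x224 RGB patches of H&E slides.
images = torch.randn(40, 3, 224, 224)
labels = [0] * 20 + [1] * 20          # 0 = stroma, 1 = epithelium

for name, extractor in [("low-level", low_level), ("high-level", high_level)]:
    feats = extract(extractor, images)
    acc = cross_val_score(SVC(kernel="linear"), feats, labels, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```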


Biophotonics and Immune Responses XIII | 2018

Assessing the performance of quantitative image features on early stage prediction of treatment effectiveness for ovary cancer patients: a preliminary investigation

Yuchen Qiu; Yue Du; Theresa C. Thai; Camille C. Gunderson; Kathleen M. Moore; Robert S. Mannel; Hong Liu; Bin Zheng; Abolfazl Zargari Khuzani

The objective of this study is to investigate the performance of global and local features for better estimating the characteristics of highly heterogeneous metastatic tumors and accurately predicting treatment effectiveness in advanced-stage ovarian cancer patients. To achieve this, a quantitative image analysis scheme was developed to compute a total of 103 features from three different groups: shape and density, wavelet, and gray level difference method (GLDM) features. Shape and density features are global features, which are applied directly to the entire target image; wavelet and GLDM features are local features, which are computed on divided blocks of the target image. To assess performance, the new scheme was applied to a retrospective dataset of 120 patients with recurrent, high-grade ovarian cancer. The results indicate that the three best-performing features are skewness, root mean square (RMS), and the mean of the local GLDM texture, indicating the importance of integrating local features. In addition, the average prediction performance is comparable among the three categories. This investigation concluded that local features contain at least as much tumor heterogeneity information as global features, which may help improve the prediction performance of quantitative image markers for the diagnosis and prognosis of ovarian cancer patients.
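The global-versus-local distinction can be sketched as below, where skewness and RMS are computed over the whole ROI while a crude gray-level difference statistic is computed per block and averaged; the block size and the simplified GLDM stand-in are assumptions.

```python
# Global features (whole ROI) vs. local features (per-block, then aggregated).
# The ROI and the simple gray-level difference statistic are placeholders.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(4)
roi = rng.normal(100.0, 25.0, (64, 64))      # placeholder tumor ROI (CT values)

# Global features: applied directly to the entire ROI.
global_feats = {
    "skewness": float(skew(roi.ravel())),
    "rms": float(np.sqrt(np.mean(roi ** 2))),
}

# Local features: split the ROI into blocks, compute a texture statistic per
# block, then summarize across blocks (mean absolute horizontal gray-level
# difference, a crude GLDM-like measure).
def block_texture(block, d=1):
    return float(np.mean(np.abs(block[:, d:] - block[:, :-d])))

b = 16                                       # assumed block size
blocks = [roi[i:i + b, j:j + b]
          for i in range(0, roi.shape[0], b)
          for j in range(0, roi.shape[1], b)]
local_feats = {"mean_local_gld": float(np.mean([block_texture(bl) for bl in blocks]))}

print(global_feats, local_feats)
```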


Annals of Biomedical Engineering | 2018

Classification of Tumor Epithelium and Stroma by Exploiting Image Features Learned by Deep Convolutional Neural Networks

Yue Du; Roy Zhang; Abolfazl Zargari; Theresa C. Thai; Camille C. Gunderson; Katherine Moxley; Hong Liu; Bin Zheng; Yuchen Qiu

The tumor–stroma ratio (TSR) reflected on hematoxylin and eosin (H&E)-stained histological images is a potential prognostic factor for survival. Automatic image processing techniques that allow for high-throughput and precise discrimination of tumor epithelium and stroma are required to elevate the prognostic significance of the TSR. As a variant of deep learning techniques, transfer learning leverages natural-image features learned by deep convolutional neural networks (CNNs) to relieve the requirement for immense sample sizes when handling biomedical classification problems. Herein we studied different transfer learning strategies for accurately distinguishing epithelial and stromal regions of H&E-stained histological images acquired from either breast or ovarian cancer tissue. We compared the performance of several important deep CNNs used either as feature extractors or as architectures for fine-tuning with target images. Moreover, we addressed the currently contradictory issue of whether higher-level features generalize worse than lower-level ones because they are more specific to the source-image domain. Under our experimental setting, the transfer learning approach achieved an accuracy of 90.2% (vs. 91.1% for fine-tuning) with GoogLeNet, suggesting the feasibility of using it to assist pathology-based binary classification problems. Our results also show that whether the lower-level or the higher-level features were superior was determined by the architecture of the deep CNN.
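For contrast with the feature-extraction sketch earlier in this list, a single fine-tuning step with a pretrained GoogLeNet might look like the following; the optimizer settings, freezing policy, and dummy patches are assumptions, not the study's training protocol.

```python
# Fine-tuning sketch: replace the final layer of a pretrained GoogLeNet for
# the two-class epithelium/stroma task and run one training step on dummy data.
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)    # epithelium vs. stroma head

for p in model.parameters():                     # fine-tune everything here;
    p.requires_grad = True                       # freezing early layers is another option

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)             # placeholder H&E patches
labels = torch.randint(0, 2, (8,))

model.train()
out = model(images)
logits = out.logits if hasattr(out, "logits") else out   # guard for aux-head outputs
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("one fine-tuning step, loss =", float(loss))
```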


Proceedings of SPIE | 2017

Apply radiomics approach for early stage prognostic evaluation of ovarian cancer patients: A preliminary study

Gopichandh Danala; Yunzhi Wang; Theresa C. Thai; Camille C. Gunderson; Katherine Moxley; Kathleen N. Moore; Robert S. Mannel; Hong Liu; Bin Zheng; Yuchen Qiu

Predicting metastatic tumor response to chemotherapy at an early stage is critically important for improving the efficacy of clinical trials testing new chemotherapy drugs. However, the current Response Evaluation Criteria in Solid Tumors (RECIST) guidelines yield only limited accuracy in predicting tumor response. To address this clinical challenge, we applied a radiomics approach to develop a new quantitative image analysis scheme aiming to accurately assess the tumor response of advanced ovarian cancer patients to new chemotherapy treatment. A retrospective dataset containing 57 patients was assembled, each of whom had two sets of CT images: pre-therapy and 4-6 week follow-up scans. A radiomics-based image analysis scheme composed of three steps was then applied to these images. First, the tumors depicted on the CT images were segmented by a hybrid tumor segmentation scheme. Then, a total of 115 features were computed from the segmented tumors, which can be grouped as (1) volume-based features, (2) density-based features, and (3) wavelet features. Finally, an optimal feature cluster was selected based on single-feature performance, and an equal-weighted fusion rule was applied to generate the final prediction score. The results demonstrated that the single feature achieved an area under the receiver operating characteristic curve (AUC) of 0.838 ± 0.053. This investigation demonstrates that the radiomics approach may have potential for developing a high-accuracy prediction model for early-stage prognostic assessment of ovarian cancer patients.
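A minimal sketch of the final selection-and-fusion step (rank features by individual AUC, keep a small cluster, equal-weighted average of normalized values); the cluster size and synthetic data are assumptions.

```python
# Single-feature AUC ranking followed by equal-weighted fusion of the selected
# features into one prediction score. All data are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n_patients, n_features = 57, 115
X = rng.normal(size=(n_patients, n_features))        # placeholder radiomic features
y = np.r_[np.ones(28), np.zeros(29)].astype(int)     # 1 = responder, 0 = non-responder

# Score every feature by its individual AUC, flipping inversely ranked features.
single_auc = np.array([max(roc_auc_score(y, X[:, j]), roc_auc_score(y, -X[:, j]))
                       for j in range(n_features)])

k = 8                                                # assumed cluster size
top = np.argsort(single_auc)[-k:]

# Orient each selected feature so that larger values indicate response.
signs = np.array([1.0 if roc_auc_score(y, X[:, j]) >= 0.5 else -1.0 for j in top])
Z = X[:, top] * signs

# Equal-weighted fusion: min-max normalize each feature, then average.
Z = (Z - Z.min(axis=0)) / (Z.max(axis=0) - Z.min(axis=0) + 1e-12)
fused_score = Z.mean(axis=1)
print("fused-marker AUC:", roc_auc_score(y, fused_score))
```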

Collaboration


Dive into Theresa C. Thai's collaborations.

Top Co-Authors

Bin Zheng | University of Oklahoma
Hong Liu | University of Oklahoma
Yuchen Qiu | University of Oklahoma
Camille C. Gunderson | University of Oklahoma Health Sciences Center
Kai Ding | University of Oklahoma
Yunzhi Wang | University of Oklahoma
Katherine Moxley | University of Oklahoma Health Sciences Center
Maxine Tan | University of Oklahoma