Publications

Featured research published by Yuchen Qiu.


IEEE Transactions on Biomedical Engineering | 2016

Fusion of Quantitative Image and Genomic Biomarkers to Improve Prognosis Assessment of Early Stage Lung Cancer Patients

Nastaran Emaminejad; Wei Qian; Yubao Guan; Maxine Tan; Yuchen Qiu; Hong Liu; Bin Zheng

Objective: This study aims to develop a new quantitative image feature analysis scheme and investigate its role, along with two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 (ERCC1) gene and a regulatory subunit of ribonucleotide reductase (RRM1), in predicting the cancer recurrence risk of stage I non-small-cell lung cancer (NSCLC) patients after surgery. Methods: Using chest computed tomography images, we developed a computer-aided detection scheme to segment lung tumors and computed tumor-related image features. After feature selection, we trained a naïve Bayesian network-based classifier using eight image features and a multilayer perceptron classifier using the two genomic biomarkers to predict cancer recurrence risk, respectively. The two classifiers were trained and tested on a dataset of 79 stage I NSCLC cases using a synthetic minority oversampling technique and a leave-one-case-out validation method. A fusion method was also applied to combine the prediction scores of the two classifiers. Results: The areas under the ROC curve (AUC) were 0.78 ± 0.06 and 0.68 ± 0.07 for the image feature and genomic biomarker-based classifiers, respectively. The AUC value increased significantly to 0.84 ± 0.05 (p < 0.05) when the two classifier-generated prediction scores were fused using an equal weighting factor. Conclusion: The quantitative image feature-based classifier yielded significantly higher discriminatory power than the genomic biomarker-based classifier in predicting cancer recurrence risk. Fusing the prediction scores generated by the two classifiers further improved prediction performance. Significance: We demonstrated a new approach with the potential to assist clinicians in more effectively managing stage I NSCLC patients to reduce cancer recurrence risk.
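The equal-weight score fusion evaluated in the abstract above can be sketched as follows. This is a toy illustration on made-up numbers, not the study's data; the pairwise-comparison AUC function is a standard formulation, not code from the paper.

```python
import numpy as np

def fuse_scores(img_scores, gen_scores, w=0.5):
    """Combine two classifiers' prediction scores with an equal weighting factor."""
    return w * np.asarray(img_scores, float) + (1.0 - w) * np.asarray(gen_scores, float)

def auc(scores, labels):
    """Area under the ROC curve via pairwise comparison of positive vs. negative scores."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

# Toy illustration (not the study's data): 1 = recurrence, 0 = no recurrence.
labels = np.array([1, 1, 0, 0, 1, 0])
img = np.array([0.9, 0.6, 0.4, 0.2, 0.7, 0.5])   # image-feature classifier scores
gen = np.array([0.7, 0.8, 0.3, 0.4, 0.5, 0.1])   # genomic-biomarker classifier scores

fused = fuse_scores(img, gen)
```

With an equal weighting factor, each fused score is simply the average of the two classifiers' scores for that case, which is then fed into the ROC analysis.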


Proceedings of SPIE | 2016

An initial investigation on developing a new method to predict short-term breast cancer risk based on deep learning technology

Yuchen Qiu; Yunzhi Wang; Shiju Yan; Maxine Tan; Samuel Cheng; Hong Liu; Bin Zheng

In order to establish a new personalized breast cancer screening paradigm, it is critically important to accurately predict the short-term risk of a woman having image-detectable cancer after a negative mammographic screening. In this study, we developed and tested a novel short-term risk assessment model based on a deep learning method. For the experiment, 270 "prior" negative screening cases were assembled. In the next sequential ("current") screening mammography, 135 cases were positive and 135 cases remained negative. These cases were randomly divided into a training set of 200 cases and a testing set of 70 cases. A deep learning based computer-aided diagnosis (CAD) scheme was then developed for risk assessment, consisting of two modules: an adaptive feature identification module and a risk prediction module. The adaptive feature identification module is composed of three pairs of convolution-max-pooling layers, containing 20, 10, and 5 feature maps, respectively. The risk prediction module is implemented by a multilayer perceptron (MLP) classifier, which produces a risk score predicting the likelihood of the woman developing short-term mammography-detectable cancer. The results show that the new CAD-based risk model yielded a positive predictive value of 69.2% and a negative predictive value of 74.2%, with a total prediction accuracy of 71.4%. This study demonstrates that applying new deep learning technology may have significant potential for developing a short-term risk prediction scheme with improved performance in detecting early abnormal signs in negative mammograms.
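The feature identification module described above stacks three convolution-max-pooling pairs. A single such pair can be sketched in plain NumPy as a toy forward pass; this is an illustration only, not the authors' implementation, and the random kernels and ReLU activation are assumptions not stated in the abstract.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (strictly cross-correlation, as in most CNN code)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling; trims edges that do not fill a full window."""
    h = fmap.shape[0] // size
    w = fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def conv_pool_pair(image, kernels):
    """One convolution-max-pooling pair producing len(kernels) feature maps."""
    # ReLU nonlinearity between convolution and pooling is an assumed detail.
    return [max_pool(np.maximum(conv2d_valid(image, k), 0.0)) for k in kernels]

rng = np.random.default_rng(0)
image = rng.random((28, 28))                                # stand-in for a mammographic ROI
kernels = [rng.standard_normal((5, 5)) for _ in range(20)]  # 20 feature maps, as in the first pair
maps = conv_pool_pair(image, kernels)
```

Stacking three such pairs with 20, 10, and 5 kernels, and flattening the final maps into an MLP, reproduces the overall shape of the described architecture.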


Medical Physics | 2015

Assessment of performance and reproducibility of applying a content-based image retrieval scheme for classification of breast lesions

Rohith Reddy Gundreddy; Maxine Tan; Yuchen Qiu; Samuel Cheng; Hong Liu; Bin Zheng

PURPOSE To develop a new computer-aided diagnosis (CAD) scheme using a content-based image retrieval (CBIR) approach for classification between malignant and benign breast lesions depicted on digital mammograms, and to assess CAD performance and reproducibility. METHODS An image dataset including 820 regions of interest (ROIs) was used. Among them, 431 ROIs depict malignant lesions and 389 depict benign lesions. After applying an image preprocessing step to define the lesion center, two image features were computed from each ROI. The first feature is the average pixel value of a mapped region generated using a watershed algorithm. The second feature is the average pixel value difference between an ROI's center region and the rest of the image. A two-step CBIR approach uses these two features sequentially to search for the ten most similar reference ROIs for each queried ROI. A similarity-based classification score was then computed to predict the likelihood of the queried ROI depicting a malignant lesion. To assess the reproducibility of the CAD scheme, we selected an independent testing dataset of 100 ROIs. For each ROI in the testing dataset, we added four randomly selected lesion center seed pixels and examined the variation of the classification scores. RESULTS An area under the ROC curve (AUC) of 0.962 ± 0.006 was obtained when applying a leave-one-out validation method to the 820 ROIs. On the independent testing dataset, the initial AUC value was 0.832 ± 0.040; using the median classification score of each ROI over five queried seeds, the AUC value increased to 0.878 ± 0.035.
CONCLUSIONS The authors demonstrated that (1) a simple and efficient CBIR scheme using two lesion density distribution related features achieved high performance in classifying breast lesions without actual lesion segmentation, and (2) as with conventional CAD schemes using global optimization approaches, improving reproducibility remains one of the challenges in developing CAD schemes using a CBIR-based regional optimization approach.
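The retrieve-and-score idea above can be illustrated with a minimal sketch: each ROI is reduced to the two features named in the abstract, the ten nearest reference ROIs are retrieved, and the query is scored by its retrieved neighbors. The distance-weighted scoring formula and the synthetic feature clusters below are assumptions for illustration; the paper's exact similarity metric is not given in the abstract.

```python
import numpy as np

def cbir_score(query_feat, ref_feats, ref_labels, k=10):
    """Retrieve the k most similar reference ROIs and score the queried ROI.

    query_feat : (2,) vector [watershed-region mean, center-vs-rest difference]
    ref_feats  : (n, 2) reference feature vectors
    ref_labels : (n,) 1 = malignant, 0 = benign
    Returns an assumed malignancy likelihood in [0, 1].
    """
    d = np.linalg.norm(ref_feats - query_feat, axis=1)
    idx = np.argsort(d)[:k]          # the k most similar reference ROIs
    w = 1.0 / (d[idx] + 1e-6)        # similarity weight (assumed form)
    return float(np.sum(w * ref_labels[idx]) / np.sum(w))

rng = np.random.default_rng(1)
# Synthetic reference library: malignant features near (0.7, 0.6), benign near (0.3, 0.2).
mal = rng.normal([0.7, 0.6], 0.05, size=(40, 2))
ben = rng.normal([0.3, 0.2], 0.05, size=(40, 2))
feats = np.vstack([mal, ben])
labels = np.array([1] * 40 + [0] * 40)

score = cbir_score(np.array([0.68, 0.58]), feats, labels)
```

A query lying inside the malignant cluster retrieves mostly malignant neighbors and therefore receives a score near 1.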


Computer Methods and Programs in Biomedicine | 2017

A two-step convolutional neural network based computer-aided detection scheme for automatically segmenting adipose tissue volume depicting on CT images

Yunzhi Wang; Yuchen Qiu; Theresa C. Thai; Kathleen N. Moore; Hong Liu; Bin Zheng

Accurate assessment of the adipose tissue volume inside a human body plays an important role in predicting disease or cancer risk, diagnosis, and prognosis. To overcome the limitation of using only one subjectively selected CT image slice to estimate the size of fat areas, this study aims to develop and test a computer-aided detection (CAD) scheme based on deep learning to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) depicted on volumetric CT images. A retrospectively collected CT image dataset was divided into two independent training and testing groups. The proposed CAD framework consists of two steps with two convolutional neural networks (CNNs), namely a Selection-CNN and a Segmentation-CNN. The first CNN was trained using 2,240 CT slices to select abdominal CT slices depicting SFA and VFA. The second CNN was trained with 84,000 pixel patches and applied to the selected CT slices to identify fat-related pixels and assign them to the SFA and VFA classes. Compared with the manual CT slice selection and fat pixel segmentation results, the accuracy of CT slice selection using the Selection-CNN was 95.8%, while the accuracy of fat pixel segmentation using the Segmentation-CNN was 96.8%. This study demonstrated the feasibility of applying a new deep learning based CAD scheme to automatically recognize the abdominal section of the human body from CT scans and segment SFA and VFA from volumetric CT data with high accuracy and agreement with the manual segmentation results.
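The segmentation accuracy quoted above is a pixel-level agreement between the CNN label map and the manual one. A minimal sketch of that comparison, on synthetic label maps with an assumed class coding (0 = background, 1 = SFA, 2 = VFA):

```python
import numpy as np

def pixel_accuracy(predicted, manual):
    """Fraction of pixels where the CNN label map matches the manual label map."""
    predicted = np.asarray(predicted)
    manual = np.asarray(manual)
    return float(np.mean(predicted == manual))

rng = np.random.default_rng(6)
manual = rng.integers(0, 3, size=(64, 64))      # synthetic manual SFA/VFA/background map
predicted = manual.copy()
flip = rng.random(manual.shape) < 0.032         # corrupt roughly 3.2% of pixels
predicted[flip] = (predicted[flip] + 1) % 3
acc = pixel_accuracy(predicted, manual)
```

Corrupting about 3.2% of the pixels yields an agreement near the 96.8% figure reported for the Segmentation-CNN.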


Analytical Cellular Pathology | 2013

Evaluations of auto-focusing methods under a microscopic imaging modality for metaphase chromosome image analysis

Yuchen Qiu; Xiaodong Chen; Yuhua Li; Wei R. Chen; Bin Zheng; Shibo Li; Hong Liu

Background: Auto-focusing is an important operation in high-throughput image scanning. Although many auto-focusing methods have been developed and tested for a variety of imaging modalities, few investigations have examined the selection of an optimal auto-focusing method suitable for pathological metaphase chromosome analysis under a high-resolution scanning microscopic system. Objective: The purpose of this study is to investigate and identify an optimal auto-focusing method for pathological metaphase chromosome analysis. Methods: Five auto-focusing methods were applied and tested using metaphase chromosome images acquired from bone marrow and blood specimens. These methods were assessed by measuring a number of indices, including execution time, accuracy, number of false maxima, and full width at half maximum (FWHM). Results: For the specific conditions investigated in this study, the results showed that the Brenner gradient and threshold pixel counting methods were the optimal methods for acquiring high-quality metaphase chromosome images from the bone marrow and blood specimens, respectively. Conclusions: Selecting an optimal auto-focusing method depends on the specific clinical task. This study also provides useful information for the design and implementation of high-throughput microscopic image scanning systems in future digital pathology.
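The two winning focus measures named in the results can be sketched as below: a sharper image should score higher on either measure. The Brenner gradient form is standard; the threshold in the pixel-counting measure is an assumed parameter, and the box blur is only a crude stand-in for defocus.

```python
import numpy as np

def brenner_gradient(img):
    """Brenner focus measure: sum of squared differences between pixels two apart."""
    img = np.asarray(img, dtype=float)
    return float(np.sum((img[:, 2:] - img[:, :-2]) ** 2))

def threshold_pixel_count(img, threshold):
    """Focus measure counting pixels brighter than a fixed threshold."""
    return int(np.count_nonzero(np.asarray(img, dtype=float) > threshold))

# Toy check: a blurred copy of an image should score lower on both measures.
rng = np.random.default_rng(2)
sharp = rng.random((64, 64))
# Crude 4-neighbor box blur as a stand-in for defocus.
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 0)) / 4.0
```

An auto-focusing routine evaluates such a measure over a stack of stage positions and moves to the position that maximizes it; the FWHM and false-maxima indices in the abstract characterize the shape of that measure-versus-position curve.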


BMC Medical Imaging | 2016

Applying a computer-aided scheme to detect a new radiographic image marker for prediction of chemotherapy outcome

Yunzhi Wang; Yuchen Qiu; Theresa C. Thai; Kathleen N. Moore; Hong Liu; Bin Zheng

Background: To investigate the feasibility of automated segmentation of visceral and subcutaneous fat areas from computed tomography (CT) images of ovarian cancer patients and of applying the computed adiposity-related image features to predict chemotherapy outcome. Methods: A computerized image processing scheme was developed to segment visceral and subcutaneous fat areas and compute adiposity-related image features. Logistic regression models were then applied to analyze the association between the scheme-generated assessment scores and progression-free survival (PFS) of patients, using a leave-one-case-out cross-validation method and a dataset of 32 patients. Results: The correlation coefficients between automated and radiologist's manual segmentation of visceral and subcutaneous fat areas were 0.76 and 0.89, respectively. The scheme-generated prediction scores using adiposity-related radiographic image features were significantly associated with patients' PFS (p < 0.01). Conclusion: A computerized scheme enables more efficient and robust segmentation of visceral and subcutaneous fat areas. The computed adiposity-related image features also have the potential to improve accuracy in predicting chemotherapy outcome.
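The leave-one-case-out protocol described above can be sketched with scikit-learn's standard `LeaveOneOut` and `LogisticRegression` classes. The features and outcome labels below are synthetic stand-ins for the adiposity-related measures and PFS status; this is not the authors' code or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(3)
n = 32                                      # the study used a dataset of 32 patients
X = rng.normal(size=(n, 3))                 # stand-in adiposity-related image features
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # stand-in PFS outcome label

# Each patient is scored by a model trained on the other 31 patients.
scores = np.empty(n)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]
```

The resulting per-patient scores are what the study then tested for association with progression-free survival.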


Acta Radiologica | 2016

Early prediction of clinical benefit of treating ovarian cancer using quantitative CT image feature analysis.

Yuchen Qiu; Maxine Tan; Scott McMeekin; Theresa C. Thai; Kai Ding; Kathleen N. Moore; Hong Liu; Bin Zheng

Background: In current clinical trials treating ovarian cancer patients, accurately predicting patients' response to chemotherapy at an early stage remains an important, unsolved challenge. Purpose: To investigate the feasibility of applying a new quantitative image analysis method for predicting the early response of ovarian cancer patients to chemotherapy in clinical trials. Material and Methods: A dataset of 30 patients was retrospectively selected for this study, among which 12 were responders with 6-month progression-free survival (PFS) and 18 were non-responders. A computer-aided detection scheme was developed to segment tumors depicted on two sets of CT images acquired pre-treatment and 4–6 weeks post-treatment. The scheme computed the changes of three image features related to tumor volume, density, and density variance. We analyzed the performance of using each image feature, and of applying a decision tree, to predict patients' 6-month PFS. The prediction accuracy of using quantitative image features was also compared with the clinical record based on the Response Evaluation Criteria in Solid Tumors (RECIST) guideline. Results: The areas under the receiver operating characteristic curve (AUC) were 0.773 ± 0.086, 0.680 ± 0.109, and 0.668 ± 0.101 when using each of the three features, respectively. The AUC value increased to 0.831 ± 0.078 when combining these features. The decision-tree classifier achieved a higher prediction accuracy (76.7%) than the RECIST guideline (60.0%). Conclusion: This study demonstrated the potential of a quantitative image feature analysis method to improve the accuracy of predicting the early response of ovarian cancer patients to chemotherapy in clinical trials.
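The decision-tree step can be sketched with scikit-learn's `DecisionTreeClassifier` on synthetic stand-ins for the three feature changes (tumor volume, density, density variance). The feature distributions and the rule linking volume shrinkage to response are invented for illustration; the real study fit the tree on 30 patients' measured changes.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
n = 30                                     # the study's cohort size
# Synthetic changes between the pre-treatment and 4-6 weeks post-treatment scans.
delta_volume = rng.normal(-0.2, 0.3, n)    # relative tumor volume change
delta_density = rng.normal(0.0, 0.2, n)    # tumor density change
delta_var = rng.normal(0.0, 0.2, n)        # density variance change
X = np.column_stack([delta_volume, delta_density, delta_var])
# Stand-in 6-month PFS label: here responders are defined by larger tumor shrinkage.
y = (delta_volume < -0.2).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
pred = tree.predict(X)
```

A shallow tree like this combines the three feature changes into a single response prediction, which is the role the decision-tree classifier plays in the abstract.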


Applied Optics | 2011

New method for determining the depth of field of microscope systems

Xiaodong Chen; Liqiang Ren; Yuchen Qiu; Hong Liu

This paper presents new formulas to determine the depth of field (DOF) of optical and digital microscope systems. Unlike the conventional DOF formula, the new methods consider the interplay of geometric and diffraction optics for infinite and finite optical microscopes and for corresponding digital microscope systems. It is shown that in addition to the well understood parameters such as numerical apertures, focal length, and light wavelength, system components such as aperture stops also affect the DOF. For the same objective lens, the DOF is inversely proportional to the size of the aperture stop, and it is proportional to the focal length of the ocular lens. It is also shown that under optimal viewing and operating conditions, the visual accommodation of human observers has no meaningful impact on DOF. The new formulas reported are useful for accurately calculating the DOF of microscopes.
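For orientation, the conventional combined diffraction-plus-geometric expression for microscope depth of field, DOF = λn/NA² + n·e/(M·NA), can be computed as below. This is the widely cited textbook formula that the paper improves upon, not the new formulas derived in the paper, and the example parameter values are illustrative only (λ wavelength, n refractive index, NA numerical aperture, M total magnification, e detector resolving-element size).

```python
def depth_of_field(wavelength_um, n, na, magnification, e_um):
    """Conventional combined DOF in micrometers: diffraction term + geometric term."""
    diffraction = wavelength_um * n / na ** 2      # wave-optics (axial diffraction) term
    geometric = n * e_um / (magnification * na)    # geometric (detector resolution) term
    return diffraction + geometric

# Illustrative example: green light, dry 60x objective (NA 0.95), 6.45 um camera pixel.
dof = depth_of_field(wavelength_um=0.55, n=1.0, na=0.95, magnification=60, e_um=6.45)
```

The formula makes the paper's qualitative findings visible: a larger NA (set in part by the aperture stop) shrinks both terms, while a coarser detector element enlarges only the geometric term.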


Proceedings of SPIE | 2016

Computer-aided classification of mammographic masses using the deep learning technology: a preliminary study

Yuchen Qiu; Shiju Yan; Maxine Tan; Samuel Cheng; Hong Liu; Bin Zheng

Although mammography is the only clinically accepted imaging modality used in population-based breast cancer screening, its efficacy remains controversial. One of the major challenges is how to help radiologists more accurately classify benign and malignant lesions. The purpose of this study is to investigate a new mammographic mass classification scheme based on a deep learning method. We used an image dataset of 560 regions of interest (ROIs) extracted from digital mammograms, including 280 malignant and 280 benign mass ROIs. An eight-layer deep learning network was applied, which employs three pairs of convolution-max-pooling layers for automatic feature extraction and a multilayer perceptron (MLP) classifier for feature categorization. To improve the robustness of the selected features, each convolution layer is connected to a max-pooling layer. The 1st, 2nd, and 3rd convolution layers used 20, 10, and 5 feature maps, respectively. The convolution networks are followed by the MLP classifier, which generates a classification score predicting the likelihood of an ROI depicting a malignant mass. Among the 560 ROIs, 420 were used as a training dataset and the remaining 140 as a validation dataset. The results show that the new deep learning based classifier yielded an area under the receiver operating characteristic curve (AUC) of 0.810 ± 0.036. This study demonstrated the potential superiority of using a deep learning based classifier to distinguish malignant from benign breast masses without segmenting the lesions or extracting pre-defined image features.
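The 420/140 train-validation protocol and the MLP scoring stage can be sketched with scikit-learn. The synthetic features below stand in for the convolution-extracted ones, and `MLPClassifier` replaces the paper's own network; this is a protocol sketch, not a reproduction of the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 560                                    # 280 malignant + 280 benign mass ROIs
y = np.array([1] * 280 + [0] * 280)
# Synthetic 5-dim features standing in for the final convolution layer's 5 feature maps.
X = rng.normal(size=(n, 5)) + y[:, None] * 0.8

perm = rng.permutation(n)
train, valid = perm[:420], perm[420:]      # 420 training ROIs / 140 validation ROIs

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
mlp.fit(X[train], y[train])
auc = roc_auc_score(y[valid], mlp.predict_proba(X[valid])[:, 1])
```

The validation-set AUC computed this way corresponds to the 0.810 ± 0.036 figure reported in the abstract, though on synthetic data the value is of course arbitrary.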


Journal of Biomedical Optics | 2012

Impact of the optical depth of field on cytogenetic image quality

Yuchen Qiu; Xiaodong Chen; Yuhua Li; Bin Zheng; Shibo Li; Wei R. Chen; Hong Liu

Abstract. In digital pathology, clinical specimen slides are converted into digital images by microscopic image scanners. Since random vibration and mechanical drift are unavoidable even on high-precision moving stages, the optical depth of field (DOF) of microscopic systems may affect image quality, in particular when using an objective lens with high magnification power. The DOF of a microscopic system was theoretically analyzed and experimentally validated using standard resolution targets under 60× dry and 100× oil objective lenses, respectively. Cytogenetic samples were then imaged in in-focus and out-of-focus states to analyze the impact of the DOF on the acquired image quality. For the investigated system equipped with the 60× dry and 100× oil objective lenses, the theoretical estimates of the DOF are 0.855 μm and 0.703 μm, and the measured DOF are 3.0 μm and 1.8 μm, respectively. The observations reveal that the chromosomal bands of metaphase cells remain distinguishable when images are acquired up to approximately 1.5 μm or 1 μm out of focus using the 60× dry and 100× oil objective lenses, respectively. The results of this investigation provide important design trade-off parameters for optimizing digital microscopic image scanning systems in the future.

Collaboration

Top co-authors of Yuchen Qiu:

Bin Zheng, University of Oklahoma
Hong Liu, University of Oklahoma
Yunzhi Wang, University of Oklahoma
Maxine Tan, University of Oklahoma
Shibo Li, University of Oklahoma
Yuhua Li, University of Oklahoma
Camille C. Gunderson, University of Oklahoma Health Sciences Center