Publication


Featured research published by Kenny H. Cha.


Medical Physics | 2016

Urinary bladder segmentation in CT urography using deep‐learning convolutional neural network and level sets

Kenny H. Cha; Lubomir M. Hadjiiski; Ravi K. Samala; Heang Ping Chan; Elaine M. Caoili; Richard H. Cohan

PURPOSE The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. METHODS A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. RESULTS With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. 
CONCLUSIONS The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder.
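The thresholding and hole-filling step that turns the DL-CNN likelihood map into an initial contour for the level sets can be sketched as follows (a minimal 2D illustration, not the authors' code; `fill_holes` and `initial_contour` are hypothetical names):

```python
import numpy as np

def fill_holes(mask):
    """Fill holes in a 2D binary mask by flood-filling the background
    from the image border; background pixels the flood cannot reach
    are holes and get switched on."""
    h, w = mask.shape
    outside = np.zeros_like(mask, dtype=bool)
    stack = [(i, j) for i in range(h) for j in range(w)
             if (i in (0, h - 1) or j in (0, w - 1)) and not mask[i, j]]
    for p in stack:
        outside[p] = True
    while stack:
        i, j = stack.pop()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj] and not outside[ni, nj]:
                outside[ni, nj] = True
                stack.append((ni, nj))
    # A pixel is inside the filled mask if it was foreground or unreachable.
    return mask | ~outside

def initial_contour(likelihood_map, threshold=0.5):
    """Threshold the likelihood map and fill holes to obtain the
    initial bladder mask handed to the 3D and 2D level sets."""
    return fill_holes(likelihood_map >= threshold)
```

In the paper this mask only initializes the segmentation; the refinement to the final bladder boundary is done by the level sets.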


Medical Physics | 2016

Mass detection in digital breast tomosynthesis: Deep convolutional neural network with transfer learning from mammography

Ravi K. Samala; Heang Ping Chan; Lubomir M. Hadjiiski; Mark A. Helvie; Jun Wei; Kenny H. Cha

PURPOSE Develop a computer-aided detection (CAD) system for masses in digital breast tomosynthesis (DBT) volume using a deep convolutional neural network (DCNN) with transfer learning from mammograms. METHODS A data set containing 2282 digitized film and digital mammograms and 324 DBT volumes was collected with IRB approval. The mass of interest on the images was marked by an experienced breast radiologist as reference standard. The data set was partitioned into a training set (2282 mammograms with 2461 masses and 230 DBT views with 228 masses) and an independent test set (94 DBT views with 89 masses). For DCNN training, the region of interest (ROI) containing the mass (true positive) was extracted from each image. False positive (FP) ROIs were identified at prescreening by their previously developed CAD systems. After data augmentation, a total of 45 072 mammographic ROIs and 37 450 DBT ROIs were obtained. Data normalization and reduction of non-uniformity in the ROIs across heterogeneous data were achieved using a background correction method applied to each ROI. A DCNN with four convolutional layers and three fully connected (FC) layers was first trained on the mammography data. Jittering and dropout techniques were used to reduce overfitting. After training with the mammographic ROIs, all weights in the first three convolutional layers were frozen, and only the last convolution layer and the FC layers were randomly initialized again and trained using the DBT training ROIs. The authors compared the performances of two CAD systems for mass detection in DBT: one used the DCNN-based approach and the other used their previously developed feature-based approach for FP reduction. The prescreening stage was identical in both systems, passing the same set of mass candidates to the FP reduction stage.
For the feature-based CAD system, 3D clustering and active contour method was used for segmentation; morphological, gray level, and texture features were extracted and merged with a linear discriminant classifier to score the detected masses. For the DCNN-based CAD system, ROIs from five consecutive slices centered at each candidate were passed through the trained DCNN and a mass likelihood score was generated. The performances of the CAD systems were evaluated using free-response ROC curves and the performance difference was analyzed using a non-parametric method. RESULTS Before transfer learning, the DCNN trained only on mammograms with an AUC of 0.99 classified DBT masses with an AUC of 0.81 in the DBT training set. After transfer learning with DBT, the AUC improved to 0.90. For breast-based CAD detection in the test set, the sensitivity for the feature-based and the DCNN-based CAD systems was 83% and 91%, respectively, at 1 FP/DBT volume. The difference between the performances for the two systems was statistically significant (p-value < 0.05). CONCLUSIONS The image patterns learned from the mammograms were transferred to the mass detection on DBT slices through the DCNN. This study demonstrated that large data sets collected from mammography are useful for developing new CAD systems for DBT, alleviating the problem and effort of collecting entirely new large data sets for the new modality.
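The transfer-learning scheme described above (freeze the first three convolutional layers, re-initialize and retrain the last convolution layer and the FC layers) can be illustrated with a toy stand-in for the DCNN; the layer names and dict layout here are hypothetical, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: four conv layers and three FC layers, each represented
# by a weight array plus a trainable flag.
layers = {name: {"w": rng.standard_normal((3, 3)), "trainable": True}
          for name in ["conv1", "conv2", "conv3", "conv4",
                       "fc1", "fc2", "fc3"]}

def prepare_for_transfer(layers):
    """Freeze the first three conv layers (keeping the weights learned
    on mammograms) and randomly re-initialize the remaining layers
    before training on the DBT ROIs."""
    for name, layer in layers.items():
        if name in ("conv1", "conv2", "conv3"):
            layer["trainable"] = False                 # weights kept fixed
        else:
            layer["w"] = rng.standard_normal(layer["w"].shape)  # re-init
            layer["trainable"] = True
    return layers
```

The design intuition is that early convolutional filters learn generic mass patterns that transfer across modalities, while the later layers must adapt to DBT-specific appearance.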


Proceedings of SPIE | 2016

Deep-learning convolution neural network for computer-aided detection of microcalcifications in digital breast tomosynthesis

Ravi K. Samala; Heang Ping Chan; Lubomir M. Hadjiiski; Kenny H. Cha; Mark A. Helvie

A deep learning convolution neural network (DLCNN) was designed to differentiate microcalcification candidates detected during the prescreening stage as true calcifications or false positives in a computer-aided detection (CAD) system for clustered microcalcifications. The microcalcification candidates were extracted from the planar projection image generated from the digital breast tomosynthesis volume reconstructed by a multiscale bilateral filtering regularized simultaneous algebraic reconstruction technique. For training and testing of the DLCNN, true microcalcifications were manually labeled for the data sets and false positives were obtained from the candidate objects identified by the CAD system at prescreening after exclusion of the true microcalcifications. The DLCNN architecture was selected by varying the number of filters, filter kernel sizes, and gradient computation parameter in the convolution layers, resulting in a parameter space of 216 combinations. The exhaustive grid search method was used to select an optimal architecture within the parameter space studied, guided by the area under the receiver operating characteristic curve (AUC) as a figure-of-merit. The effects of varying different categories of the parameter space were analyzed. The selected DLCNN was compared with our previously designed CNN architecture for the test set. The AUCs of the CNN and DLCNN were 0.89 and 0.93, respectively. The improvement was statistically significant (p < 0.05).
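The exhaustive grid search over the architecture space can be sketched as below. The grid values are purely illustrative (the paper does not list them here); a 6 x 6 x 6 grid is used only so the total matches the 216 combinations mentioned:

```python
import itertools

# Hypothetical grid over the three varied categories: number of
# filters, filter kernel sizes, and a gradient computation parameter.
n_filters = [8, 12, 16, 20, 24, 32]
kernel_sizes = [3, 5, 7, 9, 11, 13]
grad_params = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]

def grid_search(evaluate_auc):
    """Score every architecture in the grid with a validation AUC
    (evaluate_auc stands in for training and evaluating a DLCNN)
    and return the best (architecture, AUC) pair."""
    grid = itertools.product(n_filters, kernel_sizes, grad_params)
    return max(((arch, evaluate_auc(arch)) for arch in grid),
               key=lambda pair: pair[1])
```

In practice each `evaluate_auc` call is a full network training run, which is why the paper restricts the search to 216 combinations.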


Physics in Medicine and Biology | 2014

CT urography: segmentation of urinary bladder using CLASS with local contour refinement

Kenny H. Cha; Lubomir M. Hadjiiski; Heang Ping Chan; Elaine M. Caoili; Richard H. Cohan; Chuan Zhou

We are developing a computerized system for bladder segmentation on CT urography (CTU), as a critical component for computer-aided detection of bladder cancer. The presence of regions filled with intravenous contrast and without contrast presents a challenge for bladder segmentation. Previously, we proposed a conjoint level set analysis and segmentation system (CLASS). In case the bladder is partially filled with contrast, CLASS segments the non-contrast (NC) region and the contrast-filled (C) region separately and automatically conjoins the NC and C region contours; however, inaccuracies in the NC and C region contours may cause the conjoint contour to exclude portions of the bladder. To alleviate this problem, we implemented a local contour refinement (LCR) method that exploits model-guided refinement (MGR) and energy-driven wavefront propagation (EDWP). MGR propagates the C region contours if the level set propagation in the C region stops prematurely due to substantial non-uniformity of the contrast. EDWP with regularized energies further propagates the conjoint contours to the correct bladder boundary. EDWP uses changes in energies, smoothness criteria of the contour, and previous slice contour to determine when to stop the propagation, following decision rules derived from training. A data set of 173 cases was collected for this study: 81 cases in the training set (42 lesions, 21 wall thickenings, 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, 13 normal bladders). For all cases, 3D hand segmented contours were obtained as reference standard and used for the evaluation of the computerized segmentation accuracy. 
For CLASS with LCR, the average volume intersection ratio, average volume error, absolute average volume error, average minimum distance and Jaccard index were 84.2 ± 11.4%, 8.2 ± 17.4%, 13.0 ± 14.1%, 3.5 ± 1.9 mm, 78.8 ± 11.6%, respectively, for the training set and 78.0 ± 14.7%, 16.4 ± 16.9%, 18.2 ± 15.0%, 3.8 ± 2.3 mm, 73.8 ± 13.4% respectively, for the test set. With CLASS only, the corresponding values were 75.1 ± 13.2%, 18.7 ± 19.5%, 22.5 ± 14.9%, 4.3 ± 2.2 mm, 71.0 ± 12.6%, respectively, for the training set and 67.3 ± 14.3%, 29.3 ± 15.9%, 29.4 ± 15.6%, 4.9 ± 2.6 mm, 65.0 ± 13.3%, respectively, for the test set. The differences between the two methods for all five measures were statistically significant (p < 0.001) for both the training and test sets. The results demonstrate the potential of CLASS with LCR for segmentation of the bladder.
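The overlap measures reported above can be computed from binary volumes as follows, assuming the usual definitions (intersection over reference volume, signed volume difference over reference, intersection over union); values are returned as fractions, whereas the paper reports percentages, and the minimum-distance measure needs surface points and is omitted:

```python
import numpy as np

def segmentation_metrics(seg, ref):
    """Compare a computerized segmentation against the 3D
    hand-segmented reference, both given as boolean arrays."""
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    v_seg, v_ref = seg.sum(), ref.sum()
    return {
        "intersection_ratio": inter / v_ref,          # overlap vs reference
        "pct_volume_error": (v_ref - v_seg) / v_ref,  # signed volume error
        "jaccard": inter / union,
    }
```

For example, a segmentation covering 6 of 9 reference voxels and nothing else yields an intersection ratio and Jaccard index of 2/3 and a volume error of 1/3.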


Scientific Reports | 2017

Bladder Cancer Treatment Response Assessment in CT using Radiomics with Deep-Learning

Kenny H. Cha; Lubomir M. Hadjiiski; Heang Ping Chan; Alon Z. Weizer; Ajjai Alva; Richard H. Cohan; Elaine M. Caoili; Chintana Paramagul; Ravi K. Samala

Cross-sectional X-ray imaging has become the standard for staging most solid organ malignancies. However, for some malignancies such as urinary bladder cancer, the ability to accurately assess local extent of the disease and understand response to systemic chemotherapy is limited with current imaging approaches. In this study, we explored whether radiomics-based predictive models using pre- and post-treatment computed tomography (CT) images could distinguish between bladder cancers with and without complete chemotherapy responses. We assessed three unique radiomics-based predictive models, each of which employed different fundamental design principles: a pattern recognition method using a deep-learning convolution neural network (DL-CNN), a more deterministic radiomics feature-based approach, and a bridging method between the two that extracts radiomics features from the image patterns. Our study indicates that the computerized assessment using radiomics information from the pre- and post-treatment CT of bladder cancer patients has the potential to assist in assessment of treatment response.


Tomography: A Journal for Imaging Research | 2016

Bladder Cancer Segmentation in CT for Treatment Response Assessment: Application of Deep-Learning Convolution Neural Network - A Pilot Study

Kenny H. Cha; Lubomir M. Hadjiiski; Ravi K. Samala; Heang Ping Chan; Richard H. Cohan; Elaine M. Caoili; Chintana Paramagul; Ajjai Alva; Alon Z. Weizer

Assessing the response of bladder cancer to neoadjuvant chemotherapy is crucial for reducing morbidity and increasing quality of life of patients. Changes in tumor volume during treatment are generally used to predict treatment outcome. We are developing a method for bladder cancer segmentation in CT using a pilot data set of 62 cases. A total of 65 000 regions of interest were extracted from pre-treatment CT images to train a deep-learning convolution neural network (DL-CNN) for tumor boundary detection using leave-one-case-out cross-validation. The results were compared to our previous AI-CALS method. For all lesions in the data set, the longest diameter and its perpendicular were measured by two radiologists, and 3D manual segmentation was obtained from one radiologist. The World Health Organization (WHO) criteria and the Response Evaluation Criteria In Solid Tumors (RECIST) were calculated, and the prediction accuracy of complete response to chemotherapy was estimated by the area under the receiver operating characteristic curve (AUC). The AUCs were 0.73 ± 0.06, 0.70 ± 0.07, and 0.70 ± 0.06, respectively, for the volume change calculated using DL-CNN segmentation, the AI-CALS and the manual contours. The differences did not achieve statistical significance. The AUCs using the WHO criteria were 0.63 ± 0.07 and 0.61 ± 0.06, while the AUCs using RECIST were 0.65 ± 0.07 and 0.63 ± 0.06 for the two radiologists, respectively. Our results indicate that DL-CNN can produce accurate bladder cancer segmentation for calculation of tumor size change in response to treatment. The volume change performed better than the estimations from the WHO criteria and RECIST for the prediction of complete response.
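The diameter-based size changes behind the WHO and RECIST criteria can be sketched as below (only the size-change computation; the response-category cutoffs, such as a 30% diameter decrease for RECIST partial response, are applied downstream):

```python
def recist_change(pre_longest, post_longest):
    """RECIST tracks the longest diameter of the lesion; return the
    percent change from the pre-treatment to the post-treatment scan."""
    return (post_longest - pre_longest) / pre_longest * 100.0

def who_change(pre_longest, pre_perp, post_longest, post_perp):
    """WHO criteria track the product of the longest diameter and its
    perpendicular; return the percent change of that product."""
    pre = pre_longest * pre_perp
    post = post_longest * post_perp
    return (post - pre) / pre * 100.0
```

A 3D volume change, by contrast, uses the full segmented tumor, which is why it can be more robust for irregularly shaped lesions.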


Medical Physics | 2017

Urinary bladder cancer staging in CT urography using machine learning

Sankeerth S. Garapati; Lubomir M. Hadjiiski; Kenny H. Cha; Heang Ping Chan; Elaine M. Caoili; Richard H. Cohan; Alon Z. Weizer; Ajjai Alva; Chintana Paramagul; Jun Wei; Chuan Zhou

Purpose: To evaluate the feasibility of using an objective computer‐aided system to assess bladder cancer stage in CT Urography (CTU). Materials and methods: A dataset consisting of 84 bladder cancer lesions from 76 CTU cases was used to develop the computerized system for bladder cancer staging based on machine learning approaches. The cases were grouped into two classes based on pathological stage ≥ T2 or below T2, which is the decision threshold for neoadjuvant chemotherapy treatment clinically. There were 43 cancers below stage T2 and 41 cancers at stage T2 or above. All 84 lesions were automatically segmented using our previously developed auto‐initialized cascaded level sets (AI‐CALS) method. Morphological and texture features were extracted. The features were divided into subspaces of morphological features only, texture features only, and a combined set of both morphological and texture features. The dataset was split into Set 1 and Set 2 for two‐fold cross‐validation. Stepwise feature selection was used to select the most effective features. A linear discriminant analysis (LDA), a neural network (NN), a support vector machine (SVM), and a random forest (RAF) classifier were used to combine the features into a single score. The classification accuracy of the four classifiers was compared using the area under the receiver operating characteristic (ROC) curve (Az). Results: Based on the texture features only, the LDA classifier achieved a test Az of 0.91 on Set 1 and a test Az of 0.88 on Set 2. The test Az of the NN classifier for Set 1 and Set 2 were 0.89 and 0.92, respectively. The SVM classifier achieved test Az of 0.91 on Set 1 and test Az of 0.89 on Set 2. The test Az of the RAF classifier for Set 1 and Set 2 was 0.89 and 0.97, respectively. The morphological features alone, the texture features alone, and the combined feature set achieved comparable classification performance. 
Conclusion: The predictive model developed in this study shows promise as a classification tool for stratifying bladder cancer into two staging categories: greater than or equal to stage T2 and below stage T2.
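The two-fold cross-validation and ROC evaluation used above can be sketched as follows; `train_fn` and `score_fn` are hypothetical stand-ins for any of the four classifiers (LDA, NN, SVM, RAF), and the AUC is computed via the rank-sum (Mann-Whitney) identity:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve: the fraction of (positive, negative)
    pairs ranked correctly, with ties counting 0.5."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

def two_fold_cv(features, labels, train_fn, score_fn):
    """Split the cases into Set 1 and Set 2, train on one set and
    test on the other, then swap; return the two test AUCs."""
    idx = np.arange(len(labels))
    set1, set2 = idx[: len(labels) // 2], idx[len(labels) // 2:]
    aucs = []
    for tr, te in ((set1, set2), (set2, set1)):
        model = train_fn(features[tr], labels[tr])
        aucs.append(auc(score_fn(model, features[te]), labels[te]))
    return aucs
```

The split into halves here is by position purely for illustration; in practice the partition would be randomized or stratified by class.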


Proceedings of SPIE | 2017

Segmentation of inner and outer bladder wall using deep-learning convolutional neural network in CT urography

Marshall N. Gordon; Lubomir M. Hadjiiski; Kenny H. Cha; Heang Ping Chan; Ravi K. Samala; Richard H. Cohan; Elaine M. Caoili

We are developing a computerized system for detection of bladder cancer in CT urography. In this study, we used a deep-learning convolutional neural network (DL-CNN) to segment the bladder wall. This task is challenging due to differences in the wall between the contrast and non-contrast-filled regions, significant variations in appearance, size, and shape of the bladder among cases, overlap of the prostate with the bladder wall, and the wall being extremely thin compared to the overall size of the bladder. We trained a DL-CNN to estimate the likelihood that a given pixel would be inside the wall of the bladder using neighborhood information. A segmented bladder wall was then obtained using level sets with this likelihood map as a term in the level set energy formulation to obtain contours of the inner and outer bladder walls. The accuracy of the segmentation was evaluated by comparing the segmented wall outlines to hand outlines for a set of 79 training cases and 15 test cases using the average volume intersection % as the metric. For the training set, the inner wall achieved an average volume intersection of 90.0±8.7% and the outer wall achieved 93.7±3.9%. For the test set, the inner wall achieved an average volume intersection of 87.6±7.6% and the outer wall achieved 87.2±9.3%. The results show that the DL-CNN with level sets was effective in segmenting the inner and outer bladder walls.


Proceedings of SPIE | 2017

Bladder cancer treatment response assessment using deep learning in CT with transfer learning

Kenny H. Cha; Lubomir M. Hadjiiski; Heang Ping Chan; Ravi K. Samala; Richard H. Cohan; Elaine M. Caoili; Chintana Paramagul; Ajjai Alva; Alon Z. Weizer

We are developing a CAD system for bladder cancer treatment response assessment in CT. We compared the performance of the deep-learning convolution neural network (DL-CNN) using different network sizes, and with and without transfer learning using natural scene images or regions of interest (ROIs) inside and outside the bladder. The DL-CNN was trained to identify responders (T0 disease) and non-responders to chemotherapy. ROIs were extracted from segmented lesions in pre- and post-treatment scans of a patient and paired to generate hybrid pre-post-treatment paired ROIs. The 87 lesions from 82 patients generated 104 temporal lesion pairs and 6,700 pre-post-treatment paired ROIs. Two-fold cross-validation and receiver operating characteristic analysis were performed and the area under the curve (AUC) was calculated for the DL-CNN estimates. The AUCs for prediction of T0 disease after treatment were 0.77±0.08 and 0.75±0.08, respectively, for the two partitions using DL-CNN without transfer learning and a small network, and were 0.74±0.07 and 0.74±0.08 with a large network. The AUCs were 0.73±0.08 and 0.62±0.08 with transfer learning using a small network pre-trained with bladder ROIs. The AUC values were 0.77±0.08 and 0.73±0.07 using the large network pre-trained with the same bladder ROIs. With transfer learning using the large network pretrained with the Canadian Institute for Advanced Research (CIFAR-10) data set, the AUCs were 0.72±0.06 and 0.64±0.09, respectively, for the two partitions. None of the differences in the methods reached statistical significance. Our study demonstrated the feasibility of using DL-CNN for the estimation of treatment response in CT. Transfer learning did not improve the treatment response estimation. The DL-CNN performed better when transfer learning with bladder images was used instead of natural scene images.
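The hybrid pre-post-treatment pairing step can be illustrated as below; whether the authors pair ROIs exhaustively per lesion is not stated in the abstract, so exhaustive pairing is an assumption and `hybrid_pairs` is a hypothetical name:

```python
import itertools

def hybrid_pairs(pre_rois, post_rois):
    """Pair every pre-treatment ROI of a lesion with every
    post-treatment ROI to form hybrid pre-post training samples
    (pairing step only; ROI extraction from the segmented
    lesions is omitted)."""
    return list(itertools.product(pre_rois, post_rois))
```

Pairing multiplies the number of training samples, which helps explain how 104 temporal lesion pairs can yield 6,700 paired ROIs.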


American Journal of Roentgenology | 2015

Treatment Response Assessment for Bladder Cancer on CT Based on Computerized Volume Analysis, World Health Organization Criteria, and RECIST

Lubomir M. Hadjiiski; Alon Z. Weizer; Ajjai Alva; Elaine M. Caoili; Richard H. Cohan; Kenny H. Cha; Heang Ping Chan

OBJECTIVE The purpose of this study was to evaluate the accuracy of our autoinitialized cascaded level set 3D segmentation system as compared with the World Health Organization (WHO) criteria and the Response Evaluation Criteria In Solid Tumors (RECIST) for estimation of treatment response of bladder cancer in CT urography. MATERIALS AND METHODS CT urograms before and after neoadjuvant chemotherapy treatment were collected from 18 patients with muscle-invasive localized or locally advanced bladder cancers. The disease stage as determined on pathologic samples at cystectomy after chemotherapy was considered as reference standard of treatment response. Two radiologists measured the longest diameter and its perpendicular on the pre- and posttreatment scans. Full 3D contours for all tumors were manually outlined by one radiologist. The autoinitialized cascaded level set method was used to automatically extract the 3D tumor boundary. The prediction accuracy of pT0 disease (complete response) at cystectomy was estimated by the manual, autoinitialized cascaded level set, WHO, and RECIST methods on the basis of the AUC. RESULTS The AUC for prediction of pT0 disease at cystectomy was 0.78 ± 0.11 for autoinitialized cascaded level set compared with 0.82 ± 0.10 for manual segmentation. The difference did not reach statistical significance (p = 0.67). The AUCs using RECIST criteria were 0.62 ± 0.16 and 0.71 ± 0.12 for the two radiologists, both lower than those of the two 3D methods. The AUCs using WHO criteria were 0.56 ± 0.15 and 0.60 ± 0.13 and thus were lower than all other methods. CONCLUSION The pre- and posttreatment 3D volume change estimates obtained by the radiologists' manual outlines and the autoinitialized cascaded level set segmentation were more accurate for irregularly shaped tumors than were those based on RECIST and WHO criteria.

Collaboration


Dive into Kenny H. Cha's collaboration.

Top Co-Authors

Chuan Zhou

University of Michigan


Ajjai Alva

University of Michigan


Jun Wei

University of Michigan
