Hoo-Chang Shin
National Institutes of Health
Publications
Featured research published by Hoo-Chang Shin.
IEEE Transactions on Medical Imaging | 2016
Hoo-Chang Shin; Holger R. Roth; Mingchen Gao; Le Lu; Ziyue Xu; Isabella Nogues; Jianhua Yao; Daniel J. Mollura; Ronald M. Summers
Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs for medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on a natural image dataset for medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks for computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis, and valuable insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.
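The transfer learning strategy examined in this paper is straightforward to prototype. Below is a minimal sketch of ImageNet fine-tuning for a CADe task, assuming PyTorch/torchvision rather than the authors' original tooling; the two-class lymph-node label space and the learning rates are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of ImageNet transfer learning via fine-tuning.
# Assumes PyTorch/torchvision; the 2-class CADe label space
# (e.g., LN candidate vs. background) is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # hypothetical: LN candidate vs. non-LN

# Start from ImageNet-pretrained weights rather than random initialization.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Replace the 1000-way ImageNet classifier with a task-specific head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

# Fine-tune all layers, but give the transferred layers a smaller learning
# rate than the freshly initialized head.
optimizer = torch.optim.SGD(
    [
        {"params": model.features.parameters(), "lr": 1e-4},
        {"params": model.classifier.parameters(), "lr": 1e-3},
    ],
    momentum=0.9,
)
criterion = nn.CrossEntropyLoss()
```

Giving the transferred layers a smaller learning rate than the new head is a common fine-tuning heuristic, consistent with the paper's finding that pretrained ImageNet features are a useful starting point for medical tasks.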
Computer Vision and Pattern Recognition | 2016
Hoo-Chang Shin; Kirk Roberts; Le Lu; Dina Demner-Fushman; Jianhua Yao; Ronald M. Summers
Despite the recent advances in automatically describing image contents, their applications have been mostly limited to image caption datasets containing natural images (e.g., Flickr 30k, MSCOCO). In this paper, we present a deep learning model to efficiently detect a disease from an image and annotate its contexts (e.g., location, severity and the affected organs). We employ a publicly available radiology dataset of chest x-rays and their reports, and use its image annotations to mine disease names to train convolutional neural networks (CNNs). In doing so, we adopt various regularization techniques to circumvent the large normal-vs-diseased case bias. Recurrent neural networks (RNNs) are then trained to describe the contexts of a detected disease, based on the deep CNN features. Moreover, we introduce a novel approach to use the weights of the already trained pair of CNN/RNN on the domain-specific image/text dataset to infer the joint image/text contexts for composite image labeling. Significantly improved image annotation results are demonstrated using the recurrent neural cascade model by taking the joint image/text contexts into account.
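As a rough illustration of the CNN-to-RNN cascade described above, the sketch below seeds an LSTM decoder with deep CNN image features so it can emit context words (location, severity, affected organ). It assumes PyTorch with a torchvision backbone; the vocabulary size, dimensions, and backbone choice are hypothetical, and the paper's joint image/text weight-transfer step is not shown.

```python
# Schematic CNN -> RNN cascade: a CNN encodes the chest x-ray, and an LSTM
# decodes context words. Illustrative only; sizes and backbone are assumptions.
import torch
import torch.nn as nn
from torchvision import models

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 1000, 256, 512  # hypothetical sizes

class DiseaseContextRNN(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()              # keep the 512-d pooled features
        self.encoder = backbone
        self.project = nn.Linear(512, HIDDEN_DIM)
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, images, tokens):
        # Initialize the LSTM state from deep CNN features of the image.
        feats = self.project(self.encoder(images))          # (B, HIDDEN_DIM)
        h0 = feats.unsqueeze(0)                             # (1, B, HIDDEN_DIM)
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(hidden)                             # per-step word logits
```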
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization | 2018
Mingchen Gao; Ulas Bagci; Le Lu; Aaron Wu; Mario Buty; Hoo-Chang Shin; Holger R. Roth; Georgios Z. Papadakis; Adrien Depeursinge; Ronald M. Summers; Ziyue Xu; Daniel J. Mollura
Interstitial lung diseases (ILD) involve several abnormal imaging patterns observed in computed tomography (CT) images. Accurate classification of these patterns plays a significant role in making precise clinical decisions about the extent and nature of the disease, and is therefore important for developing automated pulmonary computer-aided detection systems. Conventionally, this task relies on experts' manual identification of regions of interest (ROIs) as a prerequisite to diagnosing potential diseases. This protocol is time-consuming and inhibits fully automatic assessment. In this paper, we present a new method to classify ILD imaging patterns on CT images. The main difference is that the proposed algorithm uses the entire image as a holistic input. By circumventing the prerequisite of manually input ROIs, our problem set-up is significantly more difficult than previous work but better addresses the clinical workflow. Qualitative and quantitative results using a publicly available ILD database demonstrate state-of-the-art classification accuracy under the patch-based protocol and show the potential of predicting ILD types from holistic images.
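The holistic set-up can be mimicked by feeding the entire axial slice, rather than a manually selected ROI, to the classifier. A minimal input-pipeline sketch follows, assuming PyTorch; the backbone and the six-way pattern label space are illustrative stand-ins for the models and the publicly available ILD database used in the paper.

```python
# Holistic ILD classification sketch: the whole 512x512 CT slice, not a
# cropped ROI, is the network input. Backbone and class list are assumptions.
import torch.nn as nn
from torchvision import models, transforms

ILD_CLASSES = ["normal", "emphysema", "ground_glass",
               "fibrosis", "micronodules", "consolidation"]

# Map a full axial slice to the CNN input size.
holistic_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224), antialias=True),
    # CT slices are single-channel; replicate to 3 channels for an
    # ImageNet-pretrained backbone.
    transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
])

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(ILD_CLASSES))
```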
International Symposium on Biomedical Imaging | 2015
Holger R. Roth; Christopher T. Lee; Hoo-Chang Shin; Ari Seff; Lauren Kim; Jianhua Yao; Le Lu; Ronald M. Summers
Automated classification of human anatomy is an important prerequisite for many computer-aided diagnosis systems. The spatial complexity and variability of anatomy throughout the human body make classification difficult. “Deep learning” methods such as convolutional networks (ConvNets) outperform other state-of-the-art methods in image classification tasks. In this work, we present a method for organ- or body-part-specific anatomical classification of medical images acquired using computed tomography (CT) with ConvNets. We train a ConvNet, using 4,298 separate axial 2D key-images, to learn 5 anatomical classes. Key-images were mined from a hospital PACS archive, using a set of 1,675 patients. We show that a data augmentation approach can help to enrich the data set and improve classification performance. Using ConvNets and data augmentation, we achieve an anatomy-specific classification error of 5.9% and an average area-under-the-curve (AUC) value of 0.998 in testing. We demonstrate that deep learning can be used to train very reliable and accurate classifiers that could initialize further computer-aided diagnosis.
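A sketch of comparable data augmentation is shown below, assuming torchvision; the specific rotations, shifts, and crops, and their parameters, are illustrative and not the exact transformations applied to the mined key-images in the paper.

```python
# Illustrative data augmentation for the mined CT key-images. The transform
# parameters are assumptions, not the paper's settings.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),                      # small rotations
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),   # random shifts
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),        # random crops
    transforms.ToTensor(),
])
```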
Computer Vision and Pattern Recognition | 2015
Hoo-Chang Shin; Le Lu; Lauren Kim; Ari Seff; Jianhua Yao; Ronald M. Summers
Despite tremendous progress in computer vision, effective learning on very large-scale (>100K patients) medical image databases has been vastly hindered. We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's picture archiving and communication system (PACS). Instead of using full 3D medical volumes, we focus on a collection of ~216K representative 2D key images/slices (selected by clinicians for diagnostic reference) with text-driven scalar and vector labels. Our system interleaves unsupervised learning (e.g., latent Dirichlet allocation, recurrent neural net language models) on document- and sentence-level texts to generate semantic labels with supervised learning via deep convolutional neural networks (CNNs) to map from images to label spaces. Disease-related key words can be predicted for radiology images in a retrieval manner. We have demonstrated promising quantitative and qualitative results. The large-scale datasets of extracted key images and their categorization, embedded vector labels, and sentence descriptions can be harnessed to alleviate the deep learning “data-hungry” obstacle in the medical domain.
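The unsupervised text side of this pipeline can be sketched with off-the-shelf tools: latent Dirichlet allocation assigns each report a dominant topic, which then serves as a scalar training label for the image CNN. The snippet below assumes scikit-learn; the toy reports and topic count are illustrative only.

```python
# LDA over report text as an unsupervised source of scalar image labels.
# Toy data; real reports and topic counts would be far larger.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reports = [
    "small sclerotic lesion in the right iliac bone",
    "stable bilateral pulmonary nodules, no new consolidation",
    "mild degenerative changes of the lumbar spine",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(reports)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)   # per-document topic distribution
labels = doc_topics.argmax(axis=1)       # dominant topic = CNN training label
```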
Medical Image Computing and Computer Assisted Intervention | 2015
Ari Seff; Le Lu; Adrian Barbu; Holger R. Roth; Hoo-Chang Shin; Ronald M. Summers
Histograms of oriented gradients (HOG) are widely employed image descriptors in modern computer-aided diagnosis systems. Built upon a set of local, robust statistics of low-level image gradients, HOG features are usually computed on raw intensity images. In this paper, we explore a learned image transformation scheme for producing higher-level inputs to HOG. Leveraging semantic object boundary cues, our methods compute data-driven image feature maps via a supervised boundary detector. Compared with the raw image map, boundary cues offer mid-level, more object-specific visual responses that are well suited for subsequent HOG encoding. We validate the integration of several image transformation maps in an application of computer-aided detection of lymph nodes on thoracoabdominal CT images. Our experiments demonstrate that HOG descriptors based on semantic boundary cues complement and enrich those computed on raw intensity alone. We observe an overall system with substantially improved results (~78% versus 60% recall at 3 FP/volume for two target regions). The proposed system also moderately outperforms the state-of-the-art deep convolutional neural network (CNN) system in the mediastinum region, without relying on data augmentation and while requiring significantly fewer training samples.
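The core idea, computing HOG on a learned boundary map instead of raw intensity, can be sketched as follows. skimage's HOG stands in for the descriptor, and a simple Sobel gradient map is a placeholder for the supervised boundary detector used in the paper.

```python
# HOG over a boundary map rather than raw CT intensity. The Sobel filter is
# a placeholder for the paper's learned, supervised boundary detector.
import numpy as np
from skimage.feature import hog
from skimage.filters import sobel

def boundary_hog(ct_patch: np.ndarray) -> np.ndarray:
    boundary_map = sobel(ct_patch)      # placeholder for a learned detector
    return hog(
        boundary_map,
        orientations=9,
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
    )

patch = np.random.rand(64, 64)          # stand-in for a CT candidate patch
descriptor = boundary_hog(patch)        # HOG over mid-level boundary cues
```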
Deep Learning for Medical Image Analysis | 2017
Hoo-Chang Shin; Le Lu; Ronald M. Summers
Recent advances in deep learning enable us to analyze large numbers of images efficiently; however, assembling such large datasets has been mostly hindered by the slow rate of manual annotation. Nonetheless, medical images are usually stored with accompanying radiology reports, and harnessing this natural language information for image analysis has great potential. For example, data collection can be automated to leverage the large volume of data available in Picture Archiving and Communication Systems (PACS). Additionally, image annotation can be automated by drawing on the human annotations already present in the radiology reports. Medical datasets are usually much smaller than the natural image datasets for which advanced deep learning technology has been developed. By automating data collection and annotation, we can unleash the full capacity of deep learning for analyzing large volumes of medical images. Moreover, a sustainable system can be developed even when the data are continuously being updated, shared, and integrated. This chapter reviews some fundamentals of natural language processing (NLP) and covers various NLP techniques that help automate medical image collection and annotation.
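As a toy illustration of the report-mining theme this chapter reviews, the sketch below scans report sentences for disease mentions with naive negation handling to produce image labels. Real systems rely on much richer NLP (e.g., negation algorithms such as NegEx); the word lists here are illustrative only.

```python
# Toy report-mining sketch: keyword matching with naive negation handling.
# Disease and negation vocabularies are illustrative assumptions.
import re

DISEASES = {"nodule", "opacity", "effusion", "cardiomegaly"}
NEGATIONS = {"no", "without", "negative"}

def mine_labels(report: str) -> set[str]:
    labels = set()
    for sentence in re.split(r"[.;]", report.lower()):
        words = sentence.split()
        negated = any(w in NEGATIONS for w in words)
        for disease in DISEASES:
            if disease in words and not negated:
                labels.add(disease)
    return labels

print(mine_labels("Small nodule in left lobe. No pleural effusion."))
# {'nodule'}
```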
Deep Learning and Convolutional Neural Networks for Medical Image Computing | 2017
Hoo-Chang Shin; Holger R. Roth; Mingchen Gao; Le Lu; Ziyue Xu; Isabella Nogues; Jianhua Yao; Daniel J. Mollura; Ronald M. Summers
Deep convolutional neural networks (CNNs) enable learning highly representative, hierarchical image features from sufficient training data, which has made rapid progress in computer vision possible. There are currently three major techniques that successfully employ CNNs for medical image classification: training the CNN from scratch, using off-the-shelf pretrained CNN features, and transfer learning, i.e., fine-tuning CNN models pretrained on a natural image dataset (such as the large-scale annotated ImageNet database) for medical image tasks. In this chapter, we exploit three important factors of employing deep convolutional neural networks for computer-aided detection problems. First, we explore and evaluate several CNN architectures, ranging from shallower to deeper: the classical CifarNet, the more recent AlexNet, and the state-of-the-art GoogLeNet, along with their variants. The studied models contain five thousand to 160 million parameters and vary in their numbers of layers. Second, we explore the influence of dataset scale and spatial image context configurations on medical image classification performance. Third, we carefully examine when and why transfer learning from pretrained ImageNet CNN models (via fine-tuning) can be useful for medical imaging tasks. We study two specific computer-aided detection (CADe) problems, namely thoracoabdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection and report the first fivefold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive quantitative evaluation, CNN model analysis, and empirical insights can be helpful to the design of high-performance CAD systems for other medical imaging tasks, without loss of generality.
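The chapter's first factor, architecture choice across a wide parameter range, can be made concrete by comparing parameter counts. The sketch below uses torchvision's AlexNet and GoogLeNet as stand-ins (CifarNet is not bundled with torchvision); it illustrates that depth and parameter count are decoupled.

```python
# Compare parameter counts of two architectures from the studied range.
# torchvision stand-ins; CifarNet would need a custom definition.
from torchvision import models

def count_params(model) -> int:
    return sum(p.numel() for p in model.parameters())

alexnet = models.alexnet(weights=None)
googlenet = models.googlenet(weights=None, init_weights=True)

print(f"alexnet:   {count_params(alexnet):,} parameters")    # ~61M
print(f"googlenet: {count_params(googlenet):,} parameters")  # ~13M (with aux)
# GoogLeNet is far deeper than AlexNet yet has a fraction of its parameters.
```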
Medical Image Computing and Computer Assisted Intervention | 2015
Holger R. Roth; Le Lu; Amal Farag; Hoo-Chang Shin; Jiamin Liu; Evrim B. Turkbey; Ronald M. Summers
Journal of Machine Learning Research | 2016
Hoo-Chang Shin; Le Lu; Lauren Kim; Ari Seff; Jianhua Yao; Ronald M. Summers