
Publication


Featured research published by Lauren Kim.


IEEE Transactions on Medical Imaging | 2016

Improving Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation

Holger R. Roth; Le Lu; Jiamin Liu; Jianhua Yao; Ari Seff; Kevin M. Cherry; Lauren Kim; Ronald M. Summers

Automated computer-aided detection (CADe) has been an important tool in clinical practice and research. State-of-the-art methods often show high sensitivities at the cost of high false-positive (FP) rates per patient. We design a two-tiered coarse-to-fine cascade framework that first operates a candidate generation system at sensitivities of ~100%, but at high FP levels. By leveraging existing CADe systems, coordinates of regions or volumes of interest (ROI or VOI) are generated and function as input for a second tier, which is our focus in this study. In this second stage, we generate 2D (two-dimensional) or 2.5D views via sampling through scale transformations, random translations and rotations. These random views are used to train deep convolutional neural network (ConvNet) classifiers. In testing, the ConvNets assign class (e.g., lesion, pathology) probabilities for a new set of random views that are then averaged to compute a final per-candidate classification probability. This second tier behaves as a highly selective process that rejects difficult false positives while preserving high sensitivities. The methods are evaluated on three data sets: 59 patients for sclerotic metastasis detection, 176 patients for lymph node detection, and 1,186 patients for colonic polyp detection. Experimental results show the ability of ConvNets to generalize well to different medical imaging CADe applications and scale elegantly to various data sets. Our proposed methods improve performance markedly in all cases. Sensitivities improved from 57% to 70%, 43% to 77%, and 58% to 75% at 3 FPs per patient for sclerotic metastases, lymph nodes and colonic polyps, respectively.
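The second-tier aggregation step described above, averaging a ConvNet's probabilities over a candidate's random views into one per-candidate score, can be sketched in a few lines. The function name and toy probabilities below are illustrative, not taken from the paper:

```python
import numpy as np

def aggregate_random_views(view_probs):
    """Average per-view ConvNet probabilities into one score per candidate.

    view_probs: array of shape (n_candidates, n_views) holding the
    classifier's lesion probability for each randomly sampled 2D/2.5D view.
    """
    view_probs = np.asarray(view_probs, dtype=float)
    return view_probs.mean(axis=1)

# Toy example: two candidates, four random views each.
rng = np.random.default_rng(0)
probs = rng.uniform(size=(2, 4))
scores = aggregate_random_views(probs)
```

Averaging over many randomized views makes the final score robust to any single unlucky view of a candidate, which is what lets this stage reject hard false positives without sacrificing sensitivity.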


International Symposium on Biomedical Imaging | 2015

Anatomy-specific classification of medical images using deep convolutional nets

Holger R. Roth; Christopher T. Lee; Hoo-Chang Shin; Ari Seff; Lauren Kim; Jianhua Yao; Le Lu; Ronald M. Summers

Automated classification of human anatomy is an important prerequisite for many computer-aided diagnosis systems. The spatial complexity and variability of anatomy throughout the human body make classification difficult. “Deep learning” methods such as convolutional networks (ConvNets) outperform other state-of-the-art methods in image classification tasks. In this work, we present a method for organ- or body-part-specific anatomical classification of medical images acquired using computed tomography (CT) with ConvNets. We train a ConvNet, using 4,298 separate axial 2D key-images, to learn 5 anatomical classes. Key-images were mined from a hospital PACS archive, using a set of 1,675 patients. We show that a data augmentation approach can help to enrich the data set and improve classification performance. Using ConvNets and data augmentation, we achieve an anatomy-specific classification error of 5.9% and an average area-under-the-curve (AUC) value of 0.998 in testing. We demonstrate that deep learning can be used to train very reliable and accurate classifiers that could initialize further computer-aided diagnosis.
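The augmentation idea, generating many randomly perturbed copies of each 2D key-image, can be sketched as below. This toy version uses integer shifts and 90-degree rotations; the actual transformations in the paper may differ, and all names here are invented for illustration:

```python
import numpy as np

def augment(image, rng, max_shift=5):
    """Create one augmented copy of a 2D key-image by applying a random
    integer translation followed by a random 90-degree rotation."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
    return np.rot90(shifted, k=int(rng.integers(0, 4)))

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))           # stand-in for one CT key-image
batch = np.stack([augment(img, rng) for _ in range(8)])
```

Each training image thus yields several distinct-looking samples, which enriches the 4,298-image data set without new annotation effort.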


Computer Vision and Pattern Recognition | 2015

Interleaved text/image deep mining on a large-scale radiology database

Hoo-Chang Shin; Le Lu; Lauren Kim; Ari Seff; Jianhua Yao; Ronald M. Summers

Despite tremendous progress in computer vision, effective learning on very large-scale (>100K patients) medical image databases has been severely hindered. We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's picture archiving and communication system. Instead of using full 3D medical volumes, we focus on a collection of ~216K representative 2D key images/slices (selected by clinicians for diagnostic reference) with text-driven scalar and vector labels. Our system interleaves between unsupervised learning (e.g., latent Dirichlet allocation, recurrent neural net language models) on document- and sentence-level texts to generate semantic labels, and supervised learning via deep convolutional neural networks (CNNs) to map from images to label spaces. Disease-related key words can be predicted for radiology images in a retrieval manner. We have demonstrated promising quantitative and qualitative results. The large-scale datasets of extracted key images and their categorization, embedded vector labels and sentence descriptions can be harnessed to alleviate the deep learning “data-hungry” obstacle in the medical domain.
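The text-driven labeling side can be illustrated with a deliberately simplified stand-in: a bag-of-keywords vector derived from a report's text. The real system uses LDA topics and recurrent language models rather than raw keyword counts, and the `vocab` terms and sample report below are invented:

```python
import re

def text_driven_label(report, vocab):
    """Turn free-text radiology report content into a vector label for
    the associated key image by counting disease-related keywords
    (a toy stand-in for the paper's unsupervised topic labeling)."""
    words = re.findall(r"[a-z]+", report.lower())
    return [words.count(term) for term in vocab]

vocab = ["nodule", "adenopathy", "cyst"]
vec = text_driven_label("Stable pulmonary nodule. No adenopathy.", vocab)
```

The key point is the interleaving: labels mined from text supervise the image CNN, so no manual image annotation is needed.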


Medical Image Analysis | 2015

Sequential Monte Carlo tracking of the marginal artery by multiple cue fusion and random forest regression

Kevin M. Cherry; Brandon Peplinski; Lauren Kim; Shijun Wang; Le Lu; Weidong Zhang; Jianfei Liu; Zhuoshi Wei; Ronald M. Summers

Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filter tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making, which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision over a baseline Hessian detection method (75.2% versus 2.7%, p<0.001). This method also showed a statistically significantly improved recall rate over a 2-cue baseline method that used fewer vessel cues (67.7% versus 30.7%, p<0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning, as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse.
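The core loop of sequential Monte Carlo (particle filter) tracking can be sketched generically: propagate particles, re-weight them by a fused cue score, and resample. The Gaussian motion model, the toy cue function, and all names below are illustrative assumptions, not the paper's actual cues:

```python
import numpy as np

def particle_filter_step(particles, weights, cue_score, rng, noise=1.0):
    """One sequential Monte Carlo step: propagate particles with random
    motion, re-weight by a fused cue likelihood, then resample.

    cue_score: callable mapping an (n, 2) array of 2D positions to
    non-negative likelihoods (standing in for fused intensity,
    vesselness, and other cues).
    """
    n = len(particles)
    particles = particles + rng.normal(scale=noise, size=particles.shape)
    weights = weights * cue_score(particles)
    weights = weights / weights.sum()
    idx = rng.choice(n, size=n, p=weights)   # multinomial resampling
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(2)
parts = rng.normal(size=(100, 2))
w = np.full(100, 0.01)
score = lambda p: np.exp(-np.sum(p**2, axis=1))   # toy cue: prefer the origin
parts, w = particle_filter_step(parts, w, score, rng)
```

In the paper's setting, the cue score is where the random forest enters: it fuses the individual cues into a single likelihood instead of a hand-tuned product.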


Medical Physics | 2016

Mediastinal lymph node detection and station mapping on chest CT using spatial priors and random forest

Jiamin Liu; Joanne Hoffman; Jocelyn Zhao; Jianhua Yao; Le Lu; Lauren Kim; Evrim B. Turkbey; Ronald M. Summers

PURPOSE To develop an automated system for mediastinal lymph node detection and station mapping for chest CT. METHODS The contextual organs (trachea, lungs, and spine) are first automatically identified to locate the region of interest (ROI), the mediastinum. The authors employ shape features derived from Hessian analysis, local object scale, and circular transformation that are computed per voxel in the ROI. Eight more anatomical structures are simultaneously segmented by multiatlas label fusion. Spatial priors are defined as the relative multidimensional distance vectors corresponding to each structure. Intensity, shape, and spatial prior features are integrated and parsed by a random forest classifier for lymph node detection. The detected candidates are then segmented by a subsequent curve evolution process. Texture features are computed on the segmented lymph nodes and a support vector machine committee is used for final classification. For lymph node station labeling, based on the segmentation results of the above anatomical structures, the textual definitions of the mediastinal lymph node map according to the International Association for the Study of Lung Cancer are converted into a patient-specific color-coded CT image, where the lymph node station can be automatically assigned for each detected node. RESULTS The chest CT volumes from 70 patients with 316 enlarged mediastinal lymph nodes are used for validation. For lymph node detection, their system achieves 88% sensitivity at eight false positives per patient. For lymph node station labeling, 84.5% of lymph nodes are correctly assigned to their stations. CONCLUSIONS Multiple-channel shape, intensity, and spatial prior features aggregated by a random forest classifier improve mediastinal lymph node detection on chest CT. Using the location information of segmented anatomic structures from the multiatlas formulation enables accurate identification of lymph node stations.
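The spatial priors, relative multidimensional distance vectors from a voxel to each segmented structure, can be sketched as offsets to structure centroids. Reducing each structure to a centroid is an illustrative simplification; the coordinates below are invented:

```python
import numpy as np

def spatial_prior_features(voxel, structure_centroids):
    """Concatenate relative offset vectors from a voxel to each
    segmented anatomical structure (trachea, lungs, spine, ...),
    giving the classifier a rough anatomical coordinate system."""
    voxel = np.asarray(voxel, dtype=float)
    return np.concatenate([np.asarray(c, dtype=float) - voxel
                           for c in structure_centroids])

# Toy example: two structures in a 3D volume.
centroids = [(10, 20, 30), (40, 50, 60)]
feat = spatial_prior_features((5, 5, 5), centroids)
```

These offsets are then stacked with the intensity and shape features before the random forest sees them, so candidates in anatomically implausible locations score poorly.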


Proceedings of SPIE | 2016

Colitis detection on abdominal CT scans by rich feature hierarchies

Jiamin Liu; Nathan Lay; Zhuoshi Wei; Le Lu; Lauren Kim; Evrim B. Turkbey; Ronald M. Summers

Colitis is inflammation of the colon due to neutropenia, inflammatory bowel disease (such as Crohn disease), infection and immune compromise. Colitis is often associated with thickening of the colon wall. The wall of a colon afflicted with colitis is much thicker than normal. For example, the mean wall thickness in Crohn disease is 11-13 mm compared to the wall of the normal colon that should measure less than 3 mm. Colitis can be debilitating or life threatening, and early detection is essential to initiate proper treatment. In this work, we apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals to detect potential colitis on CT scans. Our method first generates around 3000 category-independent region proposals for each slice of the input CT scan using selective search. Then, a fixed-length feature vector is extracted from each region proposal using a CNN. Finally, each region proposal is classified and assigned a confidence score with linear SVMs. We applied the detection method to 260 images from 26 CT scans of patients with colitis for evaluation. The detection system can achieve 0.85 sensitivity at 1 false positive per image.
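The final scoring stage described above, a linear SVM over each proposal's fixed-length CNN feature vector, reduces to a dot product plus threshold. The weights and features below are random placeholders, not a trained model:

```python
import numpy as np

def score_proposals(features, w, b):
    """Score each region proposal's fixed-length CNN feature vector
    with a linear SVM decision function w.x + b."""
    return features @ w + b

def detect(features, w, b, threshold=0.0):
    """Keep the proposals whose SVM confidence exceeds the threshold."""
    scores = score_proposals(features, w, b)
    return np.flatnonzero(scores > threshold), scores

rng = np.random.default_rng(3)
feats = rng.normal(size=(3000, 4096))   # ~3000 proposals, 4096-D CNN features
w = rng.normal(size=4096) / 64.0        # placeholder for trained SVM weights
kept, scores = detect(feats, w, b=0.0)
```

Sweeping the threshold is what traces out the sensitivity/false-positive trade-off reported in the abstract (0.85 sensitivity at 1 FP per image).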


Medical Physics | 2017

Detection and diagnosis of colitis on computed tomography using deep convolutional neural networks

Jiamin Liu; David H. Wang; Le Lu; Zhuoshi Wei; Lauren Kim; Evrim B. Turkbey; Berkman Sahiner; Nicholas Petrick; Ronald M. Summers

Purpose Colitis refers to inflammation of the inner lining of the colon that is frequently associated with infection and allergic reactions. In this paper, we propose deep convolutional neural network methods for lesion‐level colitis detection and a support vector machine (SVM) classifier for patient‐level colitis diagnosis on routine abdominal CT scans. Methods The recently developed Faster Region‐based Convolutional Neural Network (Faster RCNN) is utilized for lesion‐level colitis detection. For each 2D slice, rectangular region proposals are generated by region proposal networks (RPN). Then, each region proposal is jointly classified and refined by a softmax classifier and bounding‐box regressor. Two convolutional neural networks, the eight-layer ZF net and the 16-layer VGG net, are compared for colitis detection. Finally, for each patient, the detections on all 2D slices are collected and an SVM classifier is applied to develop a patient‐level diagnosis. We trained and evaluated our method with 80 colitis patients and 80 normal cases using 4 × 4‐fold cross validation. Results For lesion‐level colitis detection, with ZF net, the mean of average precisions (mAP) were 48.7% and 50.9% for RCNN and Faster RCNN, respectively. The detection system achieved sensitivities of 51.4% and 54.0% at two false positives per patient for RCNN and Faster RCNN, respectively. With VGG net, Faster RCNN increased the mAP to 56.9% and increased the sensitivity to 58.4% at two false positives per patient. For patient‐level colitis diagnosis, with ZF net, the average areas under the ROC curve (AUC) were 0.978 ± 0.009 and 0.984 ± 0.008 for the RCNN and Faster RCNN methods, respectively. The difference was not statistically significant with P = 0.18. At the optimal operating point, the RCNN method correctly identified 90.4% (72.3/80) of the colitis patients and 94.0% (75.2/80) of normal cases. The sensitivity improved to 91.6% (73.3/80) and the specificity improved to 95.0% (76.0/80) for the Faster RCNN method. With VGG net, Faster RCNN increased the AUC to 0.986 ± 0.007 and increased the diagnosis sensitivity to 93.7% (75.0/80), while specificity was unchanged at 95.0% (76.0/80). Conclusion Colitis detection and diagnosis by deep convolutional neural networks is accurate and promising for future clinical application.
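The patient-level step, collecting all slice detections into one diagnosis, can be sketched with a hand-picked linear rule standing in for the trained SVM. The feature summary and weights below are invented for illustration:

```python
import numpy as np

def patient_features(slice_scores):
    """Summarize all lesion-level detection scores of one patient into a
    small feature vector for the patient-level classifier."""
    s = np.asarray(slice_scores, dtype=float)
    return np.array([s.max(), s.mean(), float((s > 0.5).sum())])

def diagnose(slice_scores, w, b):
    """Linear decision on the aggregated features (stand-in for the SVM)."""
    return float(patient_features(slice_scores) @ w + b) > 0.0

w = np.array([1.0, 1.0, 0.1])                       # illustrative weights
is_colitis = diagnose([0.9, 0.2, 0.7], w, b=-1.0)   # several confident slices
```

Aggregating across slices before deciding is what lifts the noisy per-lesion sensitivities (50-60%) to the much higher patient-level AUCs reported above.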


Deep Learning and Convolutional Neural Networks for Medical Image Computing | 2017

Efficient False Positive Reduction in Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation

Holger R. Roth; Le Lu; Jiamin Liu; Jianhua Yao; Ari Seff; Kevin M. Cherry; Lauren Kim; Ronald M. Summers

In clinical practice and medical imaging research, automated computer-aided detection (CADe) is an important tool. While many methods can achieve high sensitivities, they typically suffer from high false positives (FP) per patient. In this study, we describe a two-stage coarse-to-fine approach using CADe candidate generation systems that operate at high sensitivity rates (close to 100% recall). In a second stage, we reduce false positive numbers using state-of-the-art machine learning methods, namely deep convolutional neural networks (ConvNets). The ConvNets are trained to differentiate hard false positives from true positives utilizing a set of 2D (two-dimensional) or 2.5D re-sampled views comprising random translations, rotations, and multi-scale observations around a candidate’s center coordinate. During the test phase, we apply the ConvNets on unseen patient data and aggregate all probability scores for lesions (or pathology). We found that this second stage is a highly selective classifier that is able to reject difficult false positives while retaining good sensitivity rates. The method was evaluated on three data sets (sclerotic metastases, lymph nodes, colonic polyps) with varying numbers of patients (59, 176, and 1,186, respectively). Experiments show that the method is able to generalize to different applications and increasing data set sizes. Marked improvements are observed in all cases: sensitivities increased from 57 to 70%, from 43 to 77% and from 58 to 75% for sclerotic metastases, lymph nodes and colonic polyps, respectively, at low FP rates per patient (3 FPs/patient).


International Symposium on Biomedical Imaging | 2016

Colitis detection on computed tomography using regional convolutional neural networks

Jiamin Liu; David H. Wang; Zhuoshi Wei; Le Lu; Lauren Kim; Evrim B. Turkbey; Ronald M. Summers

Colitis is inflammation of the colon that is frequently associated with infection and immune compromise. The wall of a colon afflicted with colitis is much thicker than normal. Colitis can be debilitating or life threatening, and early detection is essential to initiate proper treatment. In this work, we apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals to detect potential colitis on CT scans. Our method first generates around 3000 category-independent region proposals for each slice of the input CT scan using selective search. Then, a fixed-length feature vector is extracted from each region proposal using a CNN. Finally, each region proposal is classified and assigned a confidence score with a linear SVM. We applied the detection method to 448 images from 56 CT scans of patients with colitis for evaluation. The detection system achieved 85% sensitivity at 1 false positive per image.


International Symposium on Biomedical Imaging | 2015

Automated segmentation of the thyroid gland on CT using multi-atlas label fusion and random forest

Jiamin Liu; Divya Narayanan; Kevin W. Chang; Lauren Kim; Evrim B. Turkbey; Le Lu; Jianhua Yao; Ronald M. Summers

The thyroid gland is an important endocrine organ. For a variety of clinical applications, a system for automated segmentation of the thyroid is desirable. Thyroid segmentation is challenging due to the inhomogeneous nature of the thyroid and the surrounding structures, which have similar intensities. In this paper, we propose a fully automated method for thyroid detection and segmentation on CT scans. The thyroid gland is initially estimated by a multi-atlas segmentation with joint label fusion algorithm. The segmentation is then corrected by supervised statistical learning-based voxel labeling with a random forest algorithm. Multi-atlas label fusion transfers expert-labeled thyroids from atlases to a target image using deformable registration. Errors produced by label transfer are reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Then, the random forest employs an ensemble of decision trees that are trained on labeled thyroids to recognize various features. The trained forest classifier is then applied to the estimated thyroid by voxel scanning to assign the class-conditional probability. Voxels from the expert-labeled thyroids in CT volumes are treated as the positive class and background non-thyroid voxels as negatives. We applied our method to 73 patients, using 5 of them as atlases. The system achieved an overall Dice Similarity Coefficient (DSC) of 0.70 using the multi-atlas label fusion alone, which improved to 0.75 DSC after the random forest correction.
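The label-fusion step can be illustrated with a per-voxel majority vote over the atlas-transferred masks, a simplified stand-in for the weighted joint label fusion algorithm the paper actually uses:

```python
import numpy as np

def label_fusion(atlas_labels):
    """Fuse binary thyroid masks transferred from several registered
    atlases into a consensus segmentation by per-voxel majority vote
    (a simplified stand-in for weighted joint label fusion)."""
    stack = np.asarray(atlas_labels)
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

# Toy 2x2 masks from three atlases; only the voxel all atlases agree
# on (top-left) survives the vote.
a1 = np.array([[1, 1], [0, 0]])
a2 = np.array([[1, 0], [0, 0]])
a3 = np.array([[1, 0], [1, 0]])
consensus = label_fusion([a1, a2, a3])
```

The consensus mask then seeds the random forest correction stage, which relabels voxels the vote got wrong.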

Collaboration


Dive into Lauren Kim's collaborations.

Top Co-Authors (all at the National Institutes of Health)

Ronald M. Summers
Le Lu
Jianhua Yao
Jiamin Liu
Evrim B. Turkbey
Ari Seff
Hoo-Chang Shin
Zhuoshi Wei
Kevin M. Cherry