
Publication


Featured research published by Nathan Lay.


Medical Image Computing and Computer-Assisted Intervention | 2016

Automatic Lymph Node Cluster Segmentation Using Holistically-Nested Neural Networks and Structured Optimization in CT Images

Isabella Nogues; Le Lu; Xiaosong Wang; Holger R. Roth; Gedas Bertasius; Nathan Lay; Jianbo Shi; Yohannes Tsehay; Ronald M. Summers

Lymph node segmentation is an important yet challenging problem in medical image analysis. The presence of enlarged lymph nodes (LNs) signals the onset or progression of a malignant disease or infection. In the thoracoabdominal (TA) body region, neighboring enlarged LNs often spatially collapse into “swollen” lymph node clusters (LNCs) (up to 9 LNs in our dataset). Accurate segmentation of TA LNCs is complicated by the noticeably poor intensity and texture contrast among neighboring LNs and surrounding tissues, and has not been addressed in previous work. This paper presents a novel approach to TA LNC segmentation that combines holistically-nested neural networks (HNNs) and structured optimization (SO). Two HNNs, built upon recent fully convolutional networks (FCNs) and deeply supervised networks (DSNs), are trained to learn the LNC appearance (HNN-A) and contour (HNN-C) probabilistic output maps, respectively. Like an FCN, each HNN first produces class label maps at the same resolution as the input image. Afterwards, the HNN predictions for LNC appearance and contour cues are formulated into the unary and pairwise terms of conditional random fields (CRFs), which are subsequently solved using one of three different SO methods: dense CRF, graph cuts, and boundary neural fields (BNF). BNF yields the highest quantitative results. Its mean Dice coefficient between segmented and ground-truth LN volumes is 82.1% ± 9.6%, compared to 73.0% ± 17.6% for HNN-A alone. The LNC relative volume (cm³) difference is 13.7% ± 13.1%, a promising result for the development of LN imaging biomarkers based on volumetric measurements.
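The CRF construction in this approach can be illustrated compactly. Below is a minimal numpy sketch, not the paper's implementation: HNN-A probabilities feed the unary term, HNN-C contour probabilities weaken the pairwise smoothness term near likely boundaries, and a simple iterated-conditional-modes loop stands in for the dense CRF, graph cuts, or BNF solvers. All names and parameter values are illustrative.

```python
import numpy as np

def segment_lnc(p_app, p_con, beta=2.0, n_iters=5):
    """Binary CRF-style segmentation from HNN probability maps (one 2D slice).

    p_app: HNN-A appearance probabilities, shape (H, W), values in [0, 1].
    p_con: HNN-C contour probabilities, shape (H, W); high values discount
           the pairwise smoothness term so labels may change across edges.
    """
    eps = 1e-6
    # Unary term: negative log-likelihood of each label under HNN-A.
    unary = np.stack([-np.log(1 - p_app + eps),   # cost of label 0
                      -np.log(p_app + eps)])      # cost of label 1
    labels = (p_app > 0.5).astype(int)            # initialize from appearance

    # Iterated conditional modes: greedily relabel each pixel given its
    # 4-neighborhood, with contrast-sensitive pairwise weights from HNN-C.
    H, W = p_app.shape
    for _ in range(n_iters):
        for y in range(H):
            for x in range(W):
                cost = unary[:, y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Penalize label disagreement, discounted near contours.
                        w = beta * (1 - p_con[y, x])
                        cost += w * (np.arange(2) != labels[ny, nx])
                labels[y, x] = cost.argmin()
    return labels
```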


Radiology | 2017

Validation of the Dominant Sequence Paradigm and Role of Dynamic Contrast-enhanced Imaging in PI-RADS Version 2

Matthew D. Greer; Joanna H. Shih; Nathan Lay; Tristan Barrett; Leonardo Kayat Bittencourt; Samuel Borofsky; Ismail M. Kabakus; Yan Mee Law; Jamie Marko; Haytham Shebel; Francesca Mertan; Maria J. Merino; Bradford J. Wood; Peter A. Pinto; Ronald M. Summers; Peter L. Choyke; Baris Turkbey

Purpose To validate the dominant pulse sequence paradigm and limited role of dynamic contrast material-enhanced magnetic resonance (MR) imaging in the Prostate Imaging Reporting and Data System (PI-RADS) version 2 for prostate multiparametric MR imaging by using data from a multireader study. Materials and Methods This HIPAA-compliant retrospective interpretation of prospectively acquired data was approved by the local ethics committee. Patients were treatment-naïve and underwent endorectal coil 3-T multiparametric MR imaging. A total of 163 patients were evaluated, 110 with prostatectomy after multiparametric MR imaging and 53 with negative multiparametric MR imaging and systematic biopsy findings. Nine radiologists participated in this study and interpreted images in 58 patients, on average (range, 56-60 patients). Lesions were detected with PI-RADS version 2 and were compared with whole-mount prostatectomy findings. Probability of cancer detection for overall, T2-weighted, and diffusion-weighted (DW) imaging PI-RADS scores was calculated in the peripheral zone (PZ) and transition zone (TZ) by using generalized estimating equations. To determine dominant pulse sequence and benefit of dynamic contrast-enhanced (DCE) imaging, odds ratios (ORs) were calculated as the ratio of odds of cancer of two consecutive scores by logistic regression. Results A total of 654 lesions (420 in the PZ) were detected. The probability of cancer detection for PI-RADS category 2, 3, 4, and 5 lesions was 15.7%, 33.1%, 70.5%, and 90.7%, respectively. DW imaging outperformed T2-weighted imaging in the PZ (OR, 3.49 vs 2.45; P = .008). T2-weighted imaging performed better but did not clearly outperform DW imaging in the TZ (OR, 4.79 vs 3.77; P = .494). Lesions classified as PI-RADS category 3 at DW MR imaging and as positive at DCE imaging in the PZ showed a higher probability of cancer detection than did DCE-negative PI-RADS category 3 lesions (67.8% vs 40.0%, P = .02). The addition of DCE imaging to DW imaging in the PZ was beneficial (OR, 2.0; P = .027), with an increase in the probability of cancer detection of 15.7%, 16.0%, and 9.2% for PI-RADS category 2, 3, and 4 lesions, respectively. Conclusion DW imaging outperforms T2-weighted imaging in the PZ; T2-weighted imaging did not show a significant difference when compared with DW imaging in the TZ by PI-RADS version 2 criteria. The addition of DCE imaging to DW imaging scores in the PZ yields meaningful improvements in probability of cancer detection.
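The odds-ratio analysis described above can be sketched with statsmodels. The snippet below uses synthetic stand-in data (not the study's data) and a logistic GEE clustered by patient; the exponentiated score coefficient approximates the odds ratio between consecutive PI-RADS categories. The variable names and the exchangeable working correlation are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Illustrative stand-in data: one row per lesion, clustered by patient.
n = 654
patient = rng.integers(0, 163, size=n)             # clustering variable
pirads = rng.integers(2, 6, size=n)                # PI-RADS 2..5 scores
p_true = 1 / (1 + np.exp(-(pirads - 3.5) * 1.3))   # synthetic cancer risk
cancer = rng.binomial(1, p_true)

# Logistic GEE with an exchangeable working correlation per patient;
# the coefficient on the score is the log odds ratio between
# consecutive PI-RADS categories.
X = sm.add_constant(pirads.astype(float))
model = sm.GEE(cancer, X, groups=patient,
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
fit = model.fit()
print("OR per one-step increase in PI-RADS:", np.exp(fit.params[1]))
```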


Medical Image Analysis | 2018

Spatial aggregation of holistically-nested convolutional neural networks for automated pancreas localization and segmentation

Holger R. Roth; Le Lu; Nathan Lay; Adam P. Harrison; Amal Farag; Andrew Sohn; Ronald M. Summers

Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart, or kidneys. To fill this gap, we present an automated system, operating on 3D computed tomography (CT) volumes, that is based on a two-stage cascaded approach: pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27% ± 6.27% in validation, which significantly outperforms both a previous state-of-the-art method and a preliminary version of this work, which report DSCs of 71.80% ± 10.70% and 78.01% ± 8.20%, respectively, using the same dataset.
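A rough sketch of the localization stage's fusion step, under stated assumptions: the three per-view HNN probability volumes are fused by voxel-wise max pooling (the paper says "pooling"; max is chosen here because it favors recall), thresholded, and reduced to a padded 3D bounding box. The threshold and margin values are illustrative.

```python
import numpy as np

def pancreas_bbox(p_ax, p_sa, p_co, thresh=0.5, margin=5):
    """Fuse per-view HNN probability volumes into one 3D bounding box.

    p_ax, p_sa, p_co: probability volumes from the axial, sagittal, and
    coronal HNNs, each already resampled to the same (Z, Y, X) grid.
    """
    # Max pooling across views keeps any voxel that any view believes in,
    # which favors recall of the true pancreas extent.
    fused = np.maximum(np.maximum(p_ax, p_sa), p_co)
    z, y, x = np.nonzero(fused > thresh)
    if z.size == 0:
        return None  # nothing detected
    lo = np.maximum(np.array([z.min(), y.min(), x.min()]) - margin, 0)
    hi = np.minimum(np.array([z.max(), y.max(), x.max()]) + margin,
                    np.array(fused.shape) - 1)
    return tuple(lo), tuple(hi)
```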


Journal of Medical Imaging | 2017

Detection of prostate cancer in multiparametric MRI using random forest with instance weighting

Nathan Lay; Yohannes Tsehay; Matthew D. Greer; Baris Turkbey; Jin Tae Kwak; Peter L. Choyke; Peter A. Pinto; Bradford J. Wood; Ronald M. Summers

A prostate computer-aided diagnosis (CAD) system based on random forest is proposed to detect prostate cancer using a combination of spatial, intensity, and texture features extracted from three sequences: T2W, ADC, and B2000 images. The random forest training considers instance-level weighting for equal treatment of small and large cancerous lesions as well as small and large prostate backgrounds. Two other approaches, based on an AutoContext pipeline intended to make better use of sequence-specific patterns, were also considered. One pipeline uses a random forest on individual sequences, while the other uses an image filter designed to produce probability-map-like images. These were compared to a previously published CAD approach based on a support vector machine (SVM) evaluated on the same data. The random forest, features, sampling strategy, and instance-level weighting improve prostate cancer detection performance [area under the curve (AUC) 0.93] in comparison to the SVM (AUC 0.86) on the same test data. Using a simple image filtering technique as a first-stage detector to highlight likely regions of prostate cancer improves learning stability compared with a learning-based first stage, owing to the varying visibility and ambiguity of annotations across sequences.
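Instance-level weighting of this kind maps naturally onto scikit-learn's sample_weight mechanism. Below is a hedged sketch with synthetic data, not the paper's code: each voxel is weighted inversely to the size of the lesion or background region containing it, so regions contribute equally regardless of size. The helper name and data shapes are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def instance_weights(region_id):
    """Weight each voxel inversely to the size of the lesion (or
    background region) it belongs to, so small and large regions
    contribute equally during training."""
    w = np.empty(len(region_id), dtype=float)
    for rid in np.unique(region_id):
        mask = region_id == rid
        w[mask] = 1.0 / mask.sum()
    return w

# X: per-voxel spatial/intensity/texture features from T2W, ADC, B2000;
# y: cancer labels; region_id: the lesion/background region of each voxel.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = rng.integers(0, 2, size=1000)
region_id = rng.integers(0, 30, size=1000)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y, sample_weight=instance_weights(region_id))
scores = clf.predict_proba(X)[:, 1]  # per-voxel cancer probability map
```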


Proceedings of SPIE | 2017

Convolutional neural network based deep-learning architecture for prostate cancer detection on multiparametric magnetic resonance images

Yohannes Tsehay; Nathan Lay; Holger R. Roth; Xiaosong Wang; Jin Tae Kwak; Baris Turkbey; Peter A. Pinto; Bradford J. Wood; Ronald M. Summers

Prostate cancer (PCa) is the second most common cause of cancer-related deaths in men. Multiparametric MRI (mpMRI) is the most accurate imaging method for PCa detection; however, it requires the expertise of experienced radiologists, leading to inconsistency across readers of varying experience. To increase inter-reader agreement and sensitivity, we developed a computer-aided detection (CAD) system that can automatically detect lesions on mpMRI that readers can use as a reference. We investigated a deep convolutional neural network (DCNN) architecture to find an improved solution for PCa detection on mpMRI. We adopted a network architecture from a state-of-the-art edge detector that takes an image as input and produces an image probability map. Two-fold cross-validation along with receiver operating characteristic (ROC) analysis and free-response ROC (FROC) analysis were used to determine the performance of our deep-learning-based prostate CAD (CADDL). Its efficacy was compared to an existing prostate CAD system based on hand-crafted features, evaluated on the same test set. CADDL had an 86% detection rate at a 20% false-positive rate, while the top-down learning CAD had an 80% detection rate at the same false-positive rate, which translated to 94% and 85% detection rates at 10 false positives per patient on the FROC. A CNN-based CAD is able to detect cancerous lesions on mpMRI of the prostate with results comparable to an existing prostate CAD, showing potential for further development.
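For readers unfamiliar with FROC analysis, the sketch below shows how one operating point (sensitivity versus false positives per patient) is computed from scored candidate detections. This is a generic illustration, not the paper's evaluation code.

```python
def froc_point(detections, threshold, n_patients, n_lesions):
    """One FROC operating point from scored candidate detections.

    detections: list of (score, is_true_positive) pairs for every
    candidate across all patients; duplicate hits on the same lesion
    should already be collapsed to a single true positive.
    """
    kept = [(s, tp) for s, tp in detections if s >= threshold]
    tps = sum(1 for _, tp in kept if tp)
    fps = sum(1 for _, tp in kept if not tp)
    sensitivity = tps / n_lesions
    fp_per_patient = fps / n_patients
    return fp_per_patient, sensitivity

# Sweeping the threshold traces the FROC curve; the detection rate at,
# e.g., 10 false positives per patient is read off that curve.
```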


Proceedings of SPIE | 2016

Colitis detection on abdominal CT scans by rich feature hierarchies

Jiamin Liu; Nathan Lay; Zhuoshi Wei; Le Lu; Lauren Kim; Evrim B. Turkbey; Ronald M. Summers

Colitis is inflammation of the colon due to neutropenia, inflammatory bowel disease (such as Crohn disease), infection, or immune compromise, and is often associated with thickening of the colon wall: the mean wall thickness in Crohn disease is 11-13 mm, for example, compared with less than 3 mm for a normal colon. Colitis can be debilitating or life-threatening, and early detection is essential to initiate proper treatment. In this work, we apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals to detect potential colitis on CT scans. Our method first generates around 3000 category-independent region proposals for each slice of the input CT scan using selective search. Then, a fixed-length feature vector is extracted from each region proposal using a CNN. Finally, each region proposal is classified and assigned a confidence score with linear SVMs. We applied the detection method to 260 images from 26 CT scans of patients with colitis for evaluation. The detection system achieves 0.85 sensitivity at 1 false positive per image.
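This pipeline is essentially the R-CNN recipe applied to CT slices. A minimal sketch follows, assuming a recent torchvision, a frozen ImageNet backbone as a stand-in feature extractor (the paper uses its own CNN), and that selective-search proposals are already available as (x0, y0, x1, y1) boxes; every name here is illustrative.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

# Frozen ImageNet backbone standing in for the paper's CNN.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # expose the 512-d pooled features
backbone.eval()
prep = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def region_features(ct_slice, boxes):
    """Crop each region proposal, warp it to a fixed size, and extract a
    fixed-length CNN feature vector (the R-CNN recipe)."""
    feats = []
    with torch.no_grad():
        for x0, y0, x1, y1 in boxes:
            crop = ct_slice[y0:y1, x0:x1]          # 2D uint8 crop
            rgb = np.stack([crop] * 3, axis=-1)    # grayscale -> 3 channels
            feats.append(backbone(prep(rgb).unsqueeze(0))[0].numpy())
    return np.stack(feats)

# Training: proposals come from selective search (not shown); a linear SVM
# then scores each proposal's feature vector:
#   X = region_features(ct_slice, boxes); svm = LinearSVC().fit(X, labels)
```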


Journal of Medical Imaging | 2017

Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks

Ruida Cheng; Holger R. Roth; Nathan Lay; Le Lu; Baris Turkbey; William Gandler; Evan S. McCreedy; Thomas J. Pohida; Peter A. Pinto; Peter L. Choyke; Matthew J. McAuliffe; Ronald M. Summers

Accurate automatic segmentation of the prostate in magnetic resonance images (MRI) is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and the similar signal intensity of tissues around the prostate boundary inhibit traditional segmentation methods from achieving high accuracy. We investigate both patch-based and holistic (image-to-image) deep-learning methods for segmentation of the prostate. First, we introduce a patch-based convolutional network that refines the prostate contour given an initialization. Second, we propose a method for end-to-end prostate segmentation by integrating holistically nested edge detection with fully convolutional networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 250 patients in fivefold cross-validation. The proposed enhanced HNN model achieves a Dice similarity coefficient (DSC) of 89.77% ± 3.29% and a Jaccard similarity coefficient (IoU) of 81.59% ± 5.18% (mean ± standard deviation), computed without trimming any end slices. The proposed holistic model significantly (p<0.001) outperforms a patch-based AlexNet model by 9% in DSC and 13% in IoU. Overall, the method achieves state-of-the-art performance compared with other MRI prostate segmentation methods in the literature.
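The two reported metrics are computed as below. Note that for a single scan Dice and IoU are linked by Dice = 2·IoU / (1 + IoU), although the identity need not hold exactly for means over 250 cases. A minimal numpy sketch:

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice similarity coefficient and Jaccard index (IoU) for binary
    segmentation masks; assumes at least one mask is non-empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou

# Per-case sanity check of the Dice/IoU relation:
# an IoU of 0.8159 corresponds to a Dice of 2*0.8159/1.8159 ~ 0.8986.
```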


Proceedings of SPIE | 2017

Automatic MR prostate segmentation by deep learning with holistically-nested networks

Ruida Cheng; Holger R. Roth; Nathan Lay; Le Lu; Baris Turkbey; William Gandler; Evan S. McCreedy; Peter L. Choyke; Ronald M. Summers; Matthew J. McAuliffe

Accurate automatic prostate magnetic resonance image (MRI) segmentation is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and tissues with similar signal intensity around the prostate boundary inhibit traditional segmentation methods from achieving high accuracy. The proposed method performs end-to-end segmentation by integrating holistically nested edge detection with fully convolutional neural networks. Holistically-nested networks (HNN) automatically learn the hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 247 patients in 5-fold cross-validation. We achieve a mean Dice similarity coefficient of 88.70% and a mean Jaccard similarity coefficient of 80.29% without trimming any erroneous contours at the apex and base.


Workshop on Applications of Computer Vision | 2016

Accurate 3D bone segmentation in challenging CT images: Bottom-up parsing and contextualized optimization

Le Lu; Dijia Wu; Nathan Lay; David Liu; Isabella Nogues; Ronald M. Summers

In full or arbitrary field-of-view (FOV) 3D CT imaging, obtaining an accurate per-voxel segmentation for complete large and small bones remains an unsolved and challenging problem. The difficulty lies in the notable variation in appearance and position observed among cortical bones, marrow and pathologies. To approach this problem, several studies have employed active shape models and atlas models. In this paper, we argue that a bottom-up approach, defined by classifying and grouping supervoxels, is another viable technique. Moreover, it can be integrated into a conditional random field (CRF) representation. Our approach consists of the following steps: first, an input CT volume is decomposed into supervoxels, in order to ensure very high bone boundary recall. Supervoxels are generated via a robust process of conservative region partitioning and recursive region merging. In order to maximize sparsity and classification efficiency, we use a Bayesian sparse linear classifier to compute and optimize middle-level image features. Next, we disambiguate the CRF unary potentials via contextualized optimization by pooling over selective supervoxel pairs. Finally, we adopt a pairwise support vector machine (SVM) model to learn the CRF pairwise potential in a fully supervised manner. We evaluate our method quantitatively on 137 low-resolution, low-contrast CT volumes with severe imaging noise, among which various bone pathologies are represented. Our system proves to be efficient; it achieves a clinically significant segmentation accuracy level (Dice Coefficient 98.2%).
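A sketch of the bottom-up first step, assuming a recent scikit-image and using SLIC as a stand-in for the paper's conservative region partitioning plus recursive merging (an assumption, not their method); simple per-supervoxel intensity statistics stand in for the middle-level features fed to the classifier.

```python
import numpy as np
from skimage.segmentation import slic

def bone_supervoxels(ct_volume, n_segments=5000):
    """Over-segment a CT volume into supervoxels and pool simple
    per-supervoxel intensity statistics as middle-level features."""
    # Normalize to [0, 1] so SLIC's compactness behaves predictably.
    ct = (ct_volume - ct_volume.min()) / (np.ptp(ct_volume) + 1e-6)
    labels = slic(ct, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)      # channel_axis=None -> 3D grayscale
    feats = []
    for sv in np.unique(labels):
        v = ct_volume[labels == sv]
        feats.append([v.mean(), v.std(),
                      np.percentile(v, 10), np.percentile(v, 90)])
    return labels, np.asarray(feats)
```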


Proceedings of SPIE | 2016

Detection of benign prostatic hyperplasia nodules in T2W MR images using fuzzy decision forest

Nathan Lay; Sabrina Freeman; Baris Turkbey; Ronald M. Summers

Prostate cancer is the second leading cause of cancer-related death in men. MRI has proven useful for detecting prostate cancer, and CAD may further improve detection. One source of false positives in prostate computer-aided diagnosis (CAD) is the presence of benign prostatic hyperplasia (BPH) nodules. These nodules have a distinct appearance with a pseudo-capsule on T2-weighted MR images but can also resemble cancerous lesions in other sequences, such as ADC or high B-value images. Describing their appearance with hand-crafted heuristics (features) that also exclude the appearance of cancerous lesions is challenging. This work develops a method based on fuzzy decision forests to automatically learn discriminative features for BPH nodule detection in T2-weighted images, with the goal of improving prostate CAD systems.
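Fuzzy decision forests replace the hard threshold at each split with a soft membership, so a sample descends both children with sigmoid-weighted probability and leaf predictions are blended. The sketch below is a generic illustration of that idea, not the paper's formulation; the tree structure and parameter names are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuzzy_tree_predict(x, node):
    """Evaluate one sample through a fuzzy decision tree: each split
    routes the sample to BOTH children, with memberships given by a
    sigmoid of the (feature - threshold) gap, and leaf probabilities
    blended by accumulated membership."""
    if "prob" in node:                       # leaf: class probability
        return node["prob"]
    m = sigmoid((x[node["feature"]] - node["threshold"]) / node["softness"])
    return (1 - m) * fuzzy_tree_predict(x, node["left"]) \
           + m * fuzzy_tree_predict(x, node["right"])

# Tiny hand-built tree over two features:
tree = {"feature": 0, "threshold": 0.5, "softness": 0.1,
        "left": {"prob": 0.1},
        "right": {"feature": 1, "threshold": 0.0, "softness": 0.2,
                  "left": {"prob": 0.4}, "right": {"prob": 0.9}}}
print(fuzzy_tree_predict(np.array([0.7, 0.3]), tree))
```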

Collaboration


Dive into Nathan Lay's collaboration.

Top Co-Authors

Ronald M. Summers (National Institutes of Health)
Baris Turkbey (National Institutes of Health)
Peter L. Choyke (National Institutes of Health)
Peter A. Pinto (National Institutes of Health)
Bradford J. Wood (National Institutes of Health)
Le Lu (National Institutes of Health)
Yohannes Tsehay (National Institutes of Health)
Joanna H. Shih (National Institutes of Health)
Ruida Cheng (National Institutes of Health)