
Publications


Featured research published by Adam P. Harrison.


Medical Image Computing and Computer-Assisted Intervention | 2017

Progressive and Multi-path Holistically Nested Neural Networks for Pathological Lung Segmentation from CT Images

Adam P. Harrison; Ziyue Xu; Kevin George; Le Lu; Ronald M. Summers; Daniel J. Mollura

Pathological lung segmentation (PLS) is an important, yet challenging, medical image application due to the wide variability of pathological lung appearance and shape. Because PLS is often a pre-requisite for other imaging analytics, methodological simplicity and generality are key factors in usability. Along those lines, we present a bottom-up deep-learning based approach that is expressive enough to handle variations in appearance, while remaining unaffected by any variations in shape. We incorporate the deeply supervised learning framework, but enhance it with a simple, yet effective, progressive multi-path scheme, which more reliably merges outputs from different network stages. The result is a deep model able to produce finer detailed masks, which we call progressive holistically-nested networks (P-HNNs). Using extensive cross-validation, our method is tested on multi-institutional datasets comprising 929 CT scans (848 publicly available) of pathological lungs, reporting mean Dice scores of 0.985 and demonstrating significant qualitative and quantitative improvements over state-of-the-art approaches.
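As a rough sketch of the progressive multi-path idea, the side-output logits from successive network stages can be kept as a running sum, so that each deeper stage refines the merged prediction rather than replacing it. The PyTorch snippet below illustrates only that merging step; the 2D setting, stage widths, and module names are assumptions for illustration, not the published architecture.

```python
# Minimal sketch of progressive side-output merging, in the spirit of
# P-HNNs. Stage channel widths and the 2D setting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveSideOutputs(nn.Module):
    def __init__(self, stage_channels=(64, 128, 256, 512)):
        super().__init__()
        # 1x1 convolutions map each stage's features to a 1-channel logit map.
        self.score = nn.ModuleList(
            nn.Conv2d(c, 1, kernel_size=1) for c in stage_channels
        )

    def forward(self, stage_feats, out_size):
        # stage_feats: feature maps from progressively deeper stages.
        merged = None
        side_outputs = []
        for feat, score in zip(stage_feats, self.score):
            logit = F.interpolate(score(feat), size=out_size,
                                  mode="bilinear", align_corners=False)
            # Progressive multi-path merge: each deeper side output is
            # added to the running sum before activation, so later stages
            # refine rather than replace earlier predictions.
            merged = logit if merged is None else merged + logit
            side_outputs.append(torch.sigmoid(merged))
        # Deep supervision: a loss is applied to every progressively
        # merged side output; the last one is the final mask prediction.
        return side_outputs
```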


Medical Image Analysis | 2018

Spatial aggregation of holistically-nested convolutional neural networks for automated pancreas localization and segmentation

Holger R. Roth; Le Lu; Nathan Lay; Adam P. Harrison; Amal Farag; Andrew Sohn; Ronald M. Summers

Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart, or kidneys. To fill this gap, we present an automated system for 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach: pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the-art method and a preliminary version of this work, which report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.
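To make the localization step concrete, here is a minimal sketch of fusing the three orthogonal-view probability maps into a recall-oriented 3D bounding box. The max-fusion rule and fixed threshold are illustrative assumptions; the paper describes fusing via pooling so as to maximize recall.

```python
# Minimal sketch: fuse per-view HNN probability volumes and extract a
# recall-oriented 3D bounding box. Fusion rule and threshold are assumptions.
import numpy as np

def fuse_and_box(p_axial, p_sagittal, p_coronal, threshold=0.5):
    """Each input is a per-voxel probability volume of equal shape,
    obtained by stacking the 2D HNN outputs along one scan direction."""
    # Max pooling across views keeps a voxel if ANY view is confident,
    # which favors recall over precision, as the localizer intends.
    fused = np.maximum(np.maximum(p_axial, p_sagittal), p_coronal)
    mask = fused >= threshold
    if not mask.any():
        return None
    idx = np.argwhere(mask)
    lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
    # The box (as slices) feeds the subsequent segmentation stage.
    return tuple(slice(a, b) for a, b in zip(lo, hi))
```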


Medical Image Computing and Computer-Assisted Intervention | 2013

IntellEditS: Intelligent Learning-Based Editor of Segmentations

Adam P. Harrison; Neil Birkbeck; Michal Sofka

Automatic segmentation techniques, despite demonstrating excellent overall accuracy, can often produce inaccuracies in local regions. As a result, correcting segmentations remains an important task that is often laborious, especially when done manually for 3D datasets. This work presents a powerful tool called Intelligent Learning-Based Editor of Segmentations (IntellEditS) that minimizes user effort and further improves segmentation accuracy. The tool partners interactive learning with an energy-minimization approach to editing. Based on interactive user input, a discriminative classifier is trained and applied to the edited 3D region to produce soft voxel labeling. The labels are integrated into a novel energy functional along with the existing segmentation and image data. Unlike the state of the art, IntellEditS is designed to correct segmentation results represented not only as masks but also as meshes. In addition, IntellEditS accepts intuitive boundary-based user interactions. The versatility and performance of IntellEditS are demonstrated on both MRI and CT datasets consisting of varied anatomical structures and resolutions.
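As a loose illustration of how the pieces named above could combine, the sketch below forms per-voxel unary energies from the classifier's soft labels and the existing segmentation. The weights, the negative-log-likelihood form, and the function name are hypothetical; the paper's actual energy functional is not reproduced here.

```python
# Hypothetical sketch of unary energy terms combining interactive
# classifier output with the existing segmentation. Weights lam_* and
# the log-likelihood form are illustrative assumptions.
import numpy as np

def unary_energy(p_classifier, prior_mask, lam_cls=1.0, lam_prior=0.5,
                 eps=1e-6):
    """p_classifier: per-voxel foreground probability from the
    interactively trained classifier; prior_mask: existing segmentation
    (0/1). Lower energy means a label is more likely."""
    prior = prior_mask.astype(float)
    e_fg = (-lam_cls * np.log(p_classifier + eps)
            - lam_prior * np.log(np.clip(prior, eps, 1.0)))
    e_bg = (-lam_cls * np.log(1.0 - p_classifier + eps)
            - lam_prior * np.log(np.clip(1.0 - prior, eps, 1.0)))
    # These unaries would be paired with an image-gradient-based pairwise
    # term and minimized over the edited 3D region.
    return e_fg, e_bg
```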


International Workshop on Machine Learning in Medical Imaging | 2016

Multi-label Deep Regression and Unordered Pooling for Holistic Interstitial Lung Disease Pattern Detection

Mingchen Gao; Ziyue Xu; Le Lu; Adam P. Harrison; Ronald M. Summers; Daniel J. Mollura

Holistically detecting interstitial lung disease (ILD) patterns from CT images is challenging yet clinically important. Unfortunately, most existing solutions rely on manually provided regions of interest, limiting their clinical usefulness. In addition, no work has yet focused on predicting more than one ILD from the same CT slice, despite the frequency of such occurrences. To address these limitations, we propose two variations of multi-label deep convolutional neural networks (CNNs). The first uses a deep CNN to detect the presence of multiple ILDs using a regression-based loss function. Our second variant further improves performance, using spatially invariant Fisher Vector encoding of the CNN feature activations. We test our algorithms on a dataset of 533 patients using five-fold cross-validation, achieving high area-under-curve (AUC) scores of 0.982, 0.972, 0.893 and 0.993 for Ground Glass, Reticular, Honeycomb and Emphysema, respectively. As such, our work represents an important step forward in providing clinically effective ILD detection.
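A minimal sketch of the first variant's regression-based multi-label objective follows, under the assumption that each of the K pattern labels is an independent target in [0, 1]; the exact loss used in the paper is not reproduced here.

```python
# Minimal sketch of a regression-style multi-label loss for per-slice
# ILD detection. The independent-target formulation is an assumption.
import torch
import torch.nn as nn

class MultiLabelRegressionLoss(nn.Module):
    def forward(self, logits, targets):
        # logits: (batch, K) raw CNN outputs.
        # targets: (batch, K), 1 for each ILD pattern present in the
        # slice, else 0; several ones per row encode co-occurring ILDs,
        # the case the abstract highlights.
        return torch.mean((torch.sigmoid(logits) - targets) ** 2)
```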


Medical Image Computing and Computer-Assisted Intervention | 2018

CT-Realistic Lung Nodule Simulation from 3D Conditional Generative Adversarial Networks for Robust Lung Segmentation

Dakai Jin; Ziyue Xu; Youbao Tang; Adam P. Harrison; Daniel J. Mollura

Data availability plays a critical role in the performance of deep learning systems. This challenge is especially acute within the medical image domain, particularly when pathologies are involved, due to two factors: 1) limited number of cases, and 2) large variations in location, scale, and appearance. In this work, we investigate whether augmenting a dataset with artificially generated lung nodules can improve the robustness of the progressive holistically nested network (P-HNN) model for pathological lung segmentation of CT scans. To achieve this goal, we develop a 3D generative adversarial network (GAN) that effectively learns lung nodule property distributions in 3D space. In order to embed the nodules within their background context, we condition the GAN based on a volume of interest whose central part containing the nodule has been erased. To further improve realism and blending with the background, we propose a novel multi-mask reconstruction loss. We train our method on over 1000 nodules from the LIDC dataset. Qualitative results demonstrate the effectiveness of our method compared to the state of the art. We then use our GAN to generate simulated training images where nodules lie on the lung border, which are cases where the published P-HNN model struggles. Qualitative and quantitative results demonstrate that armed with these simulated images, the P-HNN model learns to better segment lung regions under these challenging situations. As a result, our system provides a promising means to help overcome the data paucity that commonly afflicts medical imaging.
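The conditioning input described above can be sketched as a volume of interest whose central region is erased before being fed to the generator; the cube-shaped erasure, fill value, and fraction below are illustrative assumptions.

```python
# Minimal sketch of building the GAN's conditioning volume: erase the
# central region (where the nodule will be synthesized), keeping the
# surrounding lung context. Shape and fill value are assumptions.
import numpy as np

def erase_center(voi, frac=0.5, fill=0.0):
    """voi: 3D array. Zeroes out the central `frac` of each dimension."""
    out = voi.copy()
    slices = []
    for d in voi.shape:
        half = int(d * frac) // 2
        c = d // 2
        slices.append(slice(c - half, c + half))
    out[tuple(slices)] = fill
    return out
```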


Medical Physics | 2017

A multichannel block-matching denoising algorithm for spectral photon-counting CT images

Adam P. Harrison; Ziyue Xu; Amir Pourmorteza; David A. Bluemke; Daniel J. Mollura

Purpose: We present a denoising algorithm designed for a whole-body prototype photon-counting computed tomography (PCCT) scanner with up to 4 energy thresholds and associated energy-binned images.

Methods: Spectral PCCT images can exhibit low signal-to-noise ratios (SNRs) due to the limited photon counts in each simultaneously-acquired energy bin. To help address this, our denoising method exploits the correlation and exact alignment between energy bins, adapting the highly-effective block-matching 3D (BM3D) denoising algorithm for PCCT. The original single-channel BM3D algorithm operates patch-by-patch. For each small patch in the image, a patch grouping action collects similar patches from the rest of the image, which are then collaboratively filtered together. The resulting performance hinges on accurate patch grouping. Our improved multi-channel version, called BM3D_PCCT, incorporates two improvements. First, BM3D_PCCT uses a more accurate shared patch grouping based on the image reconstructed from photons detected in all 4 energy bins. Second, BM3D_PCCT performs a cross-channel decorrelation, adding a further dimension to the collaborative filtering process. These two improvements produce a more effective algorithm for PCCT denoising.

Results: Preliminary results compare BM3D_PCCT against BM3D_Naive, which denoises each energy bin independently. Experiments use a three-contrast PCCT image of a canine abdomen. Within five regions of interest, selected from paraspinal muscle, liver, and visceral fat, BM3D_PCCT reduces the noise standard deviation by 65.0%, compared to 40.4% for BM3D_Naive. Attenuation values of the contrast agents in calibration vials also cluster much tighter to their respective lines of best fit. Mean angular differences (in degrees) for the original, BM3D_Naive, and BM3D_PCCT images, respectively, were 15.61, 7.34, and 4.45 (iodine); 12.17, 7.17, and 4.39 (gadolinium); and 12.86, 6.33, and 3.96 (bismuth).

Conclusion: We outline a multi-channel denoising algorithm tailored for spectral PCCT images, demonstrating improved performance over an independent, yet state-of-the-art, single-channel approach.
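A minimal sketch of the cross-channel decorrelation step follows, assuming a PCA-style transform over the energy-bin dimension (the paper's exact transform is not reproduced here); the decorrelated channels would be collaboratively filtered and then transformed back.

```python
# Minimal sketch: decorrelate energy-bin images across the bin
# dimension before filtering, then invert. PCA is an assumed choice
# of decorrelating transform.
import numpy as np

def decorrelate_bins(bins):
    """bins: array of shape (n_bins, H, W). Returns decorrelated
    channels plus what is needed to invert the transform."""
    n, h, w = bins.shape
    x = bins.reshape(n, -1)
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    # Eigenvectors of the small n_bins x n_bins covariance matrix
    # define the cross-channel transform.
    cov = xc @ xc.T / xc.shape[1]
    _, vecs = np.linalg.eigh(cov)
    y = vecs.T @ xc  # decorrelated channels, ready for filtering
    return y.reshape(n, h, w), vecs, mean

def recorrelate_bins(y, vecs, mean):
    """Invert the transform after collaborative filtering."""
    n, h, w = y.shape
    x = vecs @ y.reshape(n, -1) + mean
    return x.reshape(n, h, w)
```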


International Workshop on Machine Learning in Medical Imaging (MLMI@MICCAI) | 2018

Attention-Guided Curriculum Learning for Weakly Supervised Classification and Localization of Thoracic Diseases on Chest Radiographs

Yuxing Tang; Xiaosong Wang; Adam P. Harrison; Le Lu; Jing Xiao; Ronald M. Summers

In this work, we exploit the task of joint classification and weakly supervised localization of thoracic diseases from chest radiographs, with only image-level disease labels coupled with disease severity-level (DSL) information of a subset. A convolutional neural network (CNN) based attention-guided curriculum learning (AGCL) framework is presented, which leverages the severity-level attributes mined from radiology reports. Images in order of difficulty (grouped by different severity-levels) are fed to the CNN to boost the learning gradually. In addition, highly confident samples (measured by classification probabilities) and their corresponding class-conditional heatmaps (generated by the CNN) are extracted and further fed into the AGCL framework to guide the learning of more distinctive convolutional features in the next iteration. A two-path network architecture is designed to regress the heatmaps from selected seed samples in addition to the original classification task. The joint learning scheme can improve the classification and localization performance along with more seed samples for the next iteration. We demonstrate the effectiveness of this iterative refinement framework via extensive experimental evaluations on the publicly available ChestX-ray14 dataset. AGCL achieves over 5.7% (averaged over 14 diseases) increase in classification AUC and 7%/11% increases in Recall/Precision for the localization task compared to the state of the art.
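The curriculum itself can be sketched as feeding training images in stages grouped by severity level; the cumulative easy-to-hard schedule below (more severe, i.e., more radiographically obvious, findings first) is an illustrative assumption.

```python
# Minimal sketch of curriculum ordering by mined severity level.
# Levels run 1..n_stages, with higher = more obvious (easier); the
# cumulative schedule is an assumed design choice.
def curriculum_batches(samples, severity_of, n_stages=3):
    """Yield one training pool per curriculum stage: start with the
    most obvious findings, then progressively mix in harder ones."""
    for stage in range(n_stages, 0, -1):
        yield [s for s in samples if severity_of(s) >= stage]
```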


International Workshop on Machine Learning in Medical Imaging | 2017

3D Convolutional Neural Networks with Graph Refinement for Airway Segmentation Using Incomplete Data Labels

Dakai Jin; Ziyue Xu; Adam P. Harrison; Kevin George; Daniel J. Mollura

Intrathoracic airway segmentation from computed tomography images is a frequent prerequisite for further quantitative lung analyses. Due to low contrast and noise, especially at peripheral branches, it is often challenging for automatic methods to strike a balance between extracting deeper airway branches and avoiding leakage to the surrounding parenchyma. Meanwhile, manual annotations are extremely time consuming for the airway tree, which inhibits automated methods requiring training data. To address this, we introduce a 3D deep learning-based workflow able to produce high-quality airway segmentation from incompletely labeled training data generated without manual intervention. We first train a 3D fully convolutional network (FCN) based on the fact that 3D spatial information is crucial for small highly anisotropic tubular structures such as airways. For training the 3D FCN, we develop a domain-specific sampling scheme that strategically uses incomplete labels from a previous highly specific segmentation method, aiming to retain similar specificity while boosting sensitivity. Finally, to address local discontinuities of the coarse 3D FCN output, we apply a graph-based refinement incorporating fuzzy connectedness segmentation and robust curve skeletonization. Evaluations on the EXACT’09 and LTRC datasets demonstrate considerable improvements in airway extraction while maintaining reasonable leakage compared with a state-of-the-art method and the dataset reference standard.
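A hypothetical sketch of a sampling scheme in this spirit: labeled voxels are trusted positives (the previous method is highly specific), while negatives are drawn only outside a dilated exclusion zone around the labeled tree, so unlabeled peripheral airways are less likely to be wrongly treated as background. The exclusion radius and function names are assumptions.

```python
# Hypothetical sketch of patch-center sampling from incomplete airway
# labels. The exclusion radius is an illustrative parameter.
import numpy as np
from scipy import ndimage

def sample_centers(label, n_pos, n_neg, exclusion_radius=5, rng=None):
    """label: 3D array, nonzero where the (incomplete) airway tree is
    marked. Returns positive and negative patch-center coordinates."""
    if rng is None:
        rng = np.random.default_rng()
    pos = np.argwhere(label > 0)
    # Exclusion zone: dilate the labeled tree so near-miss voxels
    # (possibly unlabeled airway) are never sampled as negatives.
    zone = ndimage.binary_dilation(label > 0, iterations=exclusion_radius)
    neg = np.argwhere(~zone)
    return (pos[rng.choice(len(pos), n_pos)],
            neg[rng.choice(len(neg), n_neg)])
```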


Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (DLMIA/ML-CDS@MICCAI) | 2017

Pathological Pulmonary Lobe Segmentation from CT Images Using Progressive Holistically Nested Neural Networks and Random Walker

Kevin George; Adam P. Harrison; Dakai Jin; Ziyue Xu; Daniel J. Mollura

Automatic pathological pulmonary lobe segmentation (PPLS) enables regional analyses of lung disease, a clinically important capability. Due to often incomplete lobe boundaries, PPLS is difficult even for experts, and most prior art requires inference from contextual information. To address this, we propose a novel PPLS method that couples deep learning with the random walker (RW) algorithm. We first employ the recent progressive holistically-nested network (P-HNN) model to identify potential lobar boundaries, then generate final segmentations using a RW that is seeded and weighted by the P-HNN output. We are the first to apply deep learning to PPLS. The advantages are independence from prior airway/vessel segmentations, increased robustness in diseased lungs, and methodological simplicity that does not sacrifice accuracy. Our method posts a high mean Jaccard score of 0.888.
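A minimal sketch of the coupling, using scikit-image's random_walker: seeds come from high-confidence P-HNN lobe predictions, and the walker is weighted by the P-HNN boundary map (passed as the data term, so crossing a predicted boundary is costly). The confidence threshold and beta value are assumptions.

```python
# Minimal sketch of P-HNN + random walker lobe segmentation. The
# thresholds and the choice to use the boundary map as the walker's
# data term are illustrative assumptions.
import numpy as np
from skimage.segmentation import random_walker

def lobe_segmentation(lobe_probs, boundary_prob, conf=0.9, beta=130):
    """lobe_probs: (n_lobes, *vol) per-lobe probabilities from P-HNN;
    boundary_prob: (*vol) fissure/boundary probability from P-HNN."""
    labels = np.zeros(boundary_prob.shape, dtype=np.int32)
    best = lobe_probs.argmax(axis=0)
    confident = lobe_probs.max(axis=0) >= conf
    # Seeds take values 1..n_lobes; 0 marks voxels left for the walker.
    labels[confident] = best[confident] + 1
    # Edge weights derive from gradients of boundary_prob, so the walk
    # is reluctant to cross predicted lobar boundaries.
    return random_walker(boundary_prob, labels, beta=beta)
```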


International Workshop on Machine Learning in Medical Imaging | 2018

CT Image Enhancement Using Stacked Generative Adversarial Networks and Transfer Learning for Lesion Segmentation Improvement

Youbao Tang; Jinzheng Cai; Le Lu; Adam P. Harrison; Ke Yan; Jing Xiao; Lin Yang; Ronald M. Summers


Collaboration


Top co-authors of Adam P. Harrison.

Le Lu (National Institutes of Health)
Ronald M. Summers (National Institutes of Health)
Daniel J. Mollura (National Institutes of Health)
Ziyue Xu (National Institutes of Health)
Youbao Tang (National Institutes of Health)
Dakai Jin (National Institutes of Health)
Ke Yan (National Institutes of Health)
Lin Yang (University of Florida)
Kevin George (National Institutes of Health)