Publication


Featured research published by Andrew J. Plassard.


Medical Image Analysis | 2015

Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.

Andrew J. Asman; Yuankai Huo; Andrew J. Plassard; Bennett A. Landman

We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework, based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min (a 270× speedup) by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result, with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information.
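As a rough illustration of the two MLF stages (a low-dimensional projection for selecting locally appropriate learners, and boosted learners that map a weak segmentation to the multi-atlas result), the sketch below uses scikit-learn's PCA and AdaBoostClassifier on synthetic data. All array shapes, features, and the single-learner simplification are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the MLF idea using scikit-learn (not the authors' code).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: per-voxel feature vectors (e.g., intensity
# context plus a weak initial label) and the multi-atlas "gold" labels.
X_train = rng.normal(size=(5000, 20))      # 5000 voxels, 20 features each
y_train = rng.integers(0, 3, size=5000)    # multi-atlas labels (3 classes)

# (1) Low-dimensional representation of whole images for learner selection.
images = rng.normal(size=(100, 4096))      # 100 training images, flattened
pca = PCA(n_components=10).fit(images)

# (2) An AdaBoost learner mapping weak-segmentation features to the
# multi-atlas segmentation result.
learner = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)

# To segment a new target: project it into the low-dimensional space, pick
# locally appropriate learners (only one exists here), and predict labels.
target_image = rng.normal(size=(1, 4096))
coords = pca.transform(target_image)       # locate target in manifold space
X_target = rng.normal(size=(5000, 20))     # target's per-voxel features
labels = learner.predict(X_target)         # fused segmentation estimate
print(coords.shape, labels.shape)
```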


Human Brain Mapping | 2017

Simultaneous total intracranial volume and posterior fossa volume estimation using multi-atlas label fusion

Yuankai Huo; Andrew J. Asman; Andrew J. Plassard; Bennett A. Landman

Total intracranial volume (TICV) is an essential covariate in brain volumetric analyses. The prevalent brain imaging software packages provide automatic TICV estimates. FreeSurfer and FSL estimate TICV using a scaling factor, while SPM12 accumulates probabilities of brain tissues. None of the three provides an explicit skull/CSF boundary (SCB), since it is challenging to distinguish these dark structures in a T1-weighted image. However, an explicit SCB not only leads to a natural way of obtaining TICV (i.e., counting voxels inside the skull) but also allows sub-definition of TICV, for example, the posterior fossa volume (PFV). In this article, we propose to use multi-atlas label fusion to obtain TICV and PFV simultaneously. The main contributions are: (1) TICV and PFV are simultaneously obtained with explicit SCB from a single T1-weighted image. (2) TICV and PFV labels are added to the widely used BrainCOLOR atlases. (3) A detailed mathematical derivation of non-local spatial STAPLE (NLSS) label fusion is presented. As the skull is clearly distinguished in CT images, we use a semi-manual procedure to obtain atlases with TICV and PFV labels using 20 subjects who have both MR and CT scans. The proposed method provides simultaneous TICV and PFV estimation while achieving more accurate TICV estimation compared with FreeSurfer, FSL, SPM12, and the previously proposed STAPLE-based approach. The newly developed TICV and PFV labels for the OASIS BrainCOLOR atlases provide acceptable performance, which enables simultaneous TICV and PFV estimation during whole brain segmentation. The NLSS method and the new atlases have been made freely available. Hum Brain Mapp 38:599–616, 2017.
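The paper's NLSS derivation is too long to reproduce here, so the sketch below substitutes plain per-voxel majority voting to show the general shape of multi-atlas label fusion and the "count voxels inside the skull" TICV computation. The toy volumes and the 1 mm isotropic voxel size are assumptions.

```python
# A simplified illustration of multi-atlas label fusion and TICV computation.
# The paper uses non-local spatial STAPLE (NLSS); plain majority voting is
# substituted here purely to show the shape of the computation.
import numpy as np

def majority_vote(registered_atlas_labels):
    """Fuse registered atlas label volumes by per-voxel vote."""
    stacked = np.stack(registered_atlas_labels)   # (N, X, Y, Z)
    n_labels = stacked.max() + 1
    # Count occurrences of each label along the atlas axis, take the mode.
    counts = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return counts.argmax(axis=0)

# Three toy "atlases" already registered to the target (label 1 = inside skull).
rng = np.random.default_rng(1)
atlases = [rng.integers(0, 2, size=(32, 32, 32)) for _ in range(3)]
fused = majority_vote(atlases)

voxel_volume_mm3 = 1.0 * 1.0 * 1.0      # assumed isotropic 1 mm voxels
ticv_mm3 = (fused == 1).sum() * voxel_volume_mm3
print(f"TICV estimate: {ticv_mm3:.0f} mm^3")
```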


Proceedings of SPIE | 2017

Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

Shunxing Bao; Frederick D. Weitendorf; Andrew J. Plassard; Yuankai Huo; Aniruddha S. Gokhale; Bennett A. Landman

The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large-scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., with “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a secure, shared university web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will and will not be relevant for medical imaging.
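In the spirit of the paper's wall-clock models (their exact terms are not reproduced here), a toy comparison might weigh per-job network transfer under an NFS-backed cluster against cheap local reads under a co-located Hadoop deployment. Every constant below is an invented placeholder, not a measured value from the study.

```python
# A back-of-the-envelope wall-clock model: NFS-backed cluster vs. co-located
# Hadoop storage. All numbers are made-up placeholders for illustration.
def wall_clock_nfs(n_jobs, cores, compute_s, transfer_s):
    """Cluster + NFS: every job pays a data-transfer cost over the network."""
    waves = -(-n_jobs // cores)              # ceiling division: job waves
    return waves * (compute_s + transfer_s)

def wall_clock_hadoop(n_jobs, cores, compute_s, local_io_s):
    """Hadoop/HBase: computation co-located with storage, cheap local reads."""
    waves = -(-n_jobs // cores)
    return waves * (compute_s + local_io_s)

for compute_s in (30, 300, 3000):            # "short" vs. "long" jobs
    nfs = wall_clock_nfs(10_000, 209, compute_s, transfer_s=60)
    hdp = wall_clock_hadoop(10_000, 209, compute_s, local_io_s=5)
    print(f"job={compute_s:5d}s  NFS={nfs:8.0f}s  Hadoop={hdp:8.0f}s")
# Transfer overhead dominates for short jobs and amortizes away for long ones.
```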


NeuroImage | 2017

Tests of cortical parcellation based on white matter connectivity using diffusion tensor imaging

Yurui Gao; Kurt G. Schilling; Iwona Stepniewska; Andrew J. Plassard; Ann S. Choe; Xia Li; Bennett A. Landman; Adam W. Anderson

The cerebral cortex is conventionally divided into a number of domains based on cytoarchitectural features. Diffusion tensor imaging (DTI) enables noninvasive parcellation of the cortex based on white matter connectivity patterns. However, the correspondence between DTI-connectivity-based and cytoarchitectural parcellation has not been systematically established. In this study, we compared histological parcellation of New World monkey neocortex to DTI-connectivity-based classification and clustering in the same brains. First, we used supervised classification to parcellate parieto-frontal cortex based on DTI tractograms and the cytoarchitectural prior (obtained using Nissl staining). We performed both within-sample and across-sample classification, showing reasonable classification performance in both conditions. Second, we used unsupervised clustering to parcellate the cortex and compared the clusters to the cytoarchitectonic standard. We then explored the similarities and differences with several post-hoc analyses, highlighting underlying principles that drive the DTI-connectivity-based parcellation. The differences in parcellation between DTI connectivity and Nissl histology probably represent both DTI's bias toward easily-tracked bundles and true differences between cytoarchitectural and connectivity-defined domains. DTI tractograms appear to cluster more according to functional networks, rather than mapping directly onto cytoarchitectonic domains. Our results show that caution should be used when DTI-tractography classification, based on data from another brain, is used as a surrogate for cytoarchitectural parcellation.

Highlights:
- DTI-connectivity-based parcellation and Nissl histology are compared.
- Intra-subject supervised DTI parcellation has up to 87% agreement with Nissl histology.
- Inter-subject supervised DTI parcellation has up to 71% agreement with Nissl histology.
- Unsupervised DTI parcellation has 39% agreement with Nissl histology.
- Differences stem from DTI errors and true differences between connectivity-defined and cytoarchitectural domains.
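For the unsupervised arm of such a study, connectivity-based parcellation can be sketched as clustering per-voxel connectivity profiles. The snippet below uses scikit-learn's KMeans on a random surrogate connectivity matrix; the parcel count and feature layout are assumed purely for illustration.

```python
# A minimal sketch of unsupervised connectivity-based parcellation.
# Real input would be per-voxel tractography connectivity profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Rows: cortical seed voxels; columns: connection strength to target regions.
connectivity = rng.random(size=(2000, 64))

# Cluster seeds with similar white-matter connectivity into putative areas.
k = 8                                        # assumed number of parcels
parcels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(connectivity)
print(np.bincount(parcels))                  # voxels per parcel
```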


Proceedings of SPIE | 2015

Toward content-based image retrieval with deep convolutional neural networks

Judah E. S. Sklan; Andrew J. Plassard; Daniel Fabbri; Bennett A. Landman

Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep Convolutional Neural Networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing the dimensionality of an input scaled to 128x128 to an output encoded layer of 4x384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques.
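The retrieval step of a CBIR pipeline of this kind can be sketched as encoding images into a low-dimensional space and returning nearest neighbors. The small PyTorch encoder below is an illustrative stand-in, not the paper's four-hidden-layer network or its 4x384 encoding.

```python
# A minimal CBIR sketch: encode images with a small CNN and retrieve the
# nearest neighbor in the encoded space.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.AdaptiveAvgPool2d(4),                              # 32 -> 4x4
            nn.Flatten(),                                         # 16*4*4 = 256
        )

    def forward(self, x):
        return self.net(x)

encoder = TinyEncoder().eval()
database = torch.randn(100, 1, 128, 128)     # stand-in MR/CT slices
query = torch.randn(1, 1, 128, 128)

with torch.no_grad():
    db_codes = encoder(database)             # (100, 256) manifold codes
    q_code = encoder(query)                  # (1, 256)

# Retrieve the most similar stored image by Euclidean distance in code space.
dists = torch.cdist(q_code, db_codes)        # (1, 100)
print(f"nearest database image: index {dists.argmin().item()}")
```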


arXiv: Computer Vision and Pattern Recognition | 2018

Splenomegaly segmentation using global convolutional kernels and conditional generative adversarial networks.

Yuankai Huo; Zhoubing Xu; Shunxing Bao; Camilo Bermudez; Andrew J. Plassard; Jiaqi Liu; Yuang Yao; Albert Assad; Richard G. Abramson; Bennett A. Landman

Spleen volume estimation using automated image segmentation techniques may be used to detect splenomegaly (abnormally enlarged spleen) on Magnetic Resonance Imaging (MRI) scans. In recent years, Deep Convolutional Neural Network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both size and shape of the spleen on MR images may result in large false positive and false negative labeling when deploying DCNN-based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1 weighted and T2 weighted) from patients with splenomegaly was used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 for SSNet on independently tested MRI volumes of patients with splenomegaly.
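A Markovian (PatchGAN) discriminator judges realness per local patch rather than per image, which is how it penalizes spurious regions in a candidate mask. Below is a minimal PyTorch sketch of that component with illustrative layer widths, not SSNet's exact architecture.

```python
# A minimal PatchGAN-style (Markovian) discriminator for image + mask pairs.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=2):             # image channel + candidate mask
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake map
        )

    def forward(self, image, mask):
        # Each output element judges one receptive-field patch, so the
        # adversarial loss is applied locally rather than to the whole image.
        return self.net(torch.cat([image, mask], dim=1))

disc = PatchDiscriminator()
img = torch.randn(1, 1, 128, 128)
mask = torch.rand(1, 1, 128, 128)
print(disc(img, mask).shape)                  # torch.Size([1, 1, 31, 31])
```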


Proceedings of SPIE | 2016

On the Fallacy of Quantitative Segmentation for T1-Weighted MRI

Andrew J. Plassard; Robert L. Harrigan; Allen T. Newton; Swati Rane; Srivatsan Pallavaram; Pierre-François D'Haese; Benoit M. Dawant; Daniel O. Claassen; Bennett A. Landman

T1-weighted magnetic resonance imaging (MRI) generates contrasts with primary sensitivity to local T1 properties (with lesser T2 and PD contributions). The observed signal intensity is determined by these local properties and the sequence parameters of the acquisition. In common practice, a range of acceptable parameters is used to ensure “similar” contrast across scanners used for any particular study (e.g., the ADNI standard MPRAGE). However, different studies may use different ranges of parameters and report the derived data as simply “T1-weighted”. Physics and imaging authors pay strong heed to the specifics of the imaging sequences, but image processing authors have historically been more lax. Herein, we consider three T1-weighted sequences acquired with the same underlying protocol (MPRAGE) and vendor (Philips), but with “normal study-to-study variation” in parameters. We show that the gray matter/white matter/cerebrospinal fluid contrast is subtly but systematically different between these images and yields systematically different measurements of brain volume. The problem derives from visually apparent boundary shifts, which would also be seen by a human rater. We present and evaluate two solutions to produce consistent segmentation results across imaging protocols. First, we propose to acquire multiple sequences on a subset of the data and use the multi-modal imaging as atlases to segment target images acquired with any of the available sequences. Second (if additional imaging is not available), we propose to synthesize atlases of the target imaging sequence and use the synthesized atlases in place of atlas imaging data. Both approaches significantly improve consistency of target labeling.
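The kind of cross-protocol disagreement described here is typically quantified with overlap metrics such as the Dice coefficient. The snippet below computes Dice between two synthetic segmentations with a simulated systematic boundary shift; it illustrates the metric, not the paper's actual evaluation.

```python
# Quantify disagreement between segmentations of the same subject derived
# from two differently parameterized T1-weighted scans (synthetic stand-ins).
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(3)
seg_protocol_a = rng.random((64, 64, 64)) > 0.5
# Simulate a subtle, systematic boundary shift between protocols.
seg_protocol_b = np.roll(seg_protocol_a, shift=1, axis=0)

print(f"cross-protocol Dice: {dice(seg_protocol_a, seg_protocol_b):.3f}")
```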


Proceedings of SPIE | 2015

Bootstrapping white matter segmentation, Eve++

Andrew J. Plassard; Kendra E. Hinton; Vijay K. Venkatraman; Christopher E. Gonzalez; Susan M. Resnick; Bennett A. Landman

Multi-atlas labeling has come into widespread use for whole-brain labeling on magnetic resonance imaging. Recent challenges have shown that leading techniques are near (or at) human expert reproducibility for cortical gray matter labels. However, these approaches tend to treat white matter as essentially homogeneous (as white matter exhibits isointense signal on structural MRI). The state-of-the-art white matter atlas is the single-subject Johns Hopkins Eve atlas. Numerous approaches have attempted to use tractography and/or orientation information to identify homologous white matter structures across subjects. Despite success with large tracts, these approaches have been plagued by difficulties with subtle differences in course, low signal-to-noise ratio, and complex structural relationships for smaller tracts. Here, we investigate the use of atlas-based labeling to propagate the Eve atlas to unlabeled datasets. We evaluate single-atlas labeling and multi-atlas labeling using synthetic atlases derived from the single manually labeled atlas. On 5 representative tracts for 10 subjects, we demonstrate that (1) single-atlas labeling generally provides segmentations within 2 mm mean surface distance, (2) morphologically constraining DTI labels within structural MRI white matter reduces variability, and (3) multi-atlas labeling did not improve accuracy. These efforts present a preliminary indication that single-atlas labeling with correction is reasonable, but caution should be applied. To pursue multi-atlas labeling and more fully characterize overall performance, more labeled datasets would be necessary.
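The mean surface distance metric used in point (1) can be computed from distance transforms of the two label boundaries. The sketch below uses SciPy on toy masks, with the voxel spacing assumed isotropic; real use would pass registered binary tract masks with physical spacing.

```python
# A sketch of the mean surface distance metric via SciPy distance transforms.
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def surface(mask):
    """Boundary voxels of a binary mask."""
    return mask & ~binary_erosion(mask)

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    sa, sb = surface(a), surface(b)
    # Distance from every voxel to the other mask's surface.
    dist_to_b = distance_transform_edt(~sb, sampling=spacing)
    dist_to_a = distance_transform_edt(~sa, sampling=spacing)
    return (dist_to_b[sa].mean() + dist_to_a[sb].mean()) / 2.0

tract = np.zeros((48, 48, 48), dtype=bool)
tract[10:30, 20:28, 15:35] = True            # toy "tract" label
shifted = np.roll(tract, shift=2, axis=0)    # toy propagated label

print(f"mean surface distance: {mean_surface_distance(tract, shifted):.2f} mm")
```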


Medical Imaging 2018: Image Processing | 2018

Learning implicit brain MRI manifolds with deep learning.

Camilo Bermudez; Andrew J. Plassard; Larry T. Davis; Allen T. Newton; Susan M. Resnick; Bennett A. Landman

An important task in image processing and neuroimaging is to extract quantitative information from the acquired images in order to make observations about the presence of disease or markers of development in populations. A low-dimensional manifold representation of images allows for easier statistical comparisons between groups and the synthesis of group representatives. Previous studies have sought to identify the best mapping of brain MRI to a low-dimensional manifold, but have been limited by assumptions of explicit similarity measures. In this work, we use deep learning techniques to investigate implicit manifolds of normal brains and generate new, high-quality images. We explore implicit manifolds by addressing the problems of image synthesis and image denoising as important tools in manifold learning. First, we propose the unsupervised synthesis of T1-weighted brain MRI using a Generative Adversarial Network (GAN) by learning from 528 examples of 2D axial slices of brain MRI. Synthesized images were first shown to be unique by performing a cross-correlation with the training set. Real and synthesized images were then assessed in a blinded manner by two imaging experts providing an image quality score of 1-5. The quality score of the synthetic images showed substantial overlap with that of the real images. Moreover, we use an autoencoder with skip connections for image denoising, showing that the proposed method results in higher PSNR than FSL SUSAN after denoising. This work shows the power of artificial neural networks to synthesize realistic imaging data, which can be used to improve image processing techniques and provide a quantitative framework for studying structural changes in the brain.
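The denoising half of the study relies on an autoencoder with skip connections. A minimal PyTorch sketch of that architectural idea follows, with layer sizes chosen for illustration rather than taken from the paper.

```python
# A minimal denoising autoencoder with a skip connection around the bottleneck.
import torch
import torch.nn as nn

class SkipDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        self.out = nn.Conv2d(32, 1, 3, padding=1)  # 16 up + 16 skip channels

    def forward(self, x):
        e1 = self.enc1(x)
        d = self.up(self.down(e1))
        # Skip connection: concatenate encoder features with decoder features,
        # letting fine anatomical detail bypass the bottleneck.
        return self.out(torch.cat([d, e1], dim=1))

model = SkipDenoiser()
noisy = torch.randn(1, 1, 128, 128)
print(model(noisy).shape)                    # torch.Size([1, 1, 128, 128])
```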


Proceedings of SPIE | 2017

DAX - the next generation: towards one million processes on commodity hardware

Stephen M. Damon; Brian D. Boyd; Andrew J. Plassard; Warren D. Taylor; Bennett A. Landman

Large-scale image processing demands a standardized way of not only storage but also job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with more than 100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by sharply dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65,040 seconds to 229 seconds (a decrease of over 270-fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds, which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.
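The DISKQ idea (tracking job status on local disk instead of issuing an API call per state change) can be illustrated with a toy flag-file queue. The file layout and status strings below are invented for illustration and do not reflect DAX's actual on-disk format.

```python
# A toy disk-backed status queue: job state lives in local flag files,
# so launching many jobs requires no per-job API call.
import os
import tempfile

QUEUE_DIR = tempfile.mkdtemp(prefix="diskq_")

def set_status(assessor, status):
    """Record an assessor's processing status as a small flag file."""
    with open(os.path.join(QUEUE_DIR, assessor), "w") as f:
        f.write(status)

def get_status(assessor):
    with open(os.path.join(QUEUE_DIR, assessor)) as f:
        return f.read()

# Launching many jobs touches only the local filesystem.
for i in range(400):
    set_status(f"assessor_{i:05d}", "LAUNCHED")

print(get_status("assessor_00007"))          # -> LAUNCHED
```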

Collaboration


Dive into Andrew J. Plassard's collaboration.

Top Co-Authors

Daniel O. Claassen

Vanderbilt University Medical Center
