Publications


Featured research published by Carolyn B. Lauzon.


PLOS ONE | 2013

Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

Carolyn B. Lauzon; Andrew J. Asman; Michael L. Esparza; Scott S. Burns; Qiuyun Fan; Yurui Gao; Adam W. Anderson; Nicole Davis; Laurie E. Cutting; Bennett A. Landman

Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g., DTIPrep, DTI-Studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low-dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible.
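
For illustration, here is a minimal sketch of the kind of automated outlier detection the abstract suggests is feasible (not the paper's pipeline): per-scan QA metric vectors are projected to a low-dimensional space with PCA, and scans far from the bulk are flagged with a robust z-score. The metric names in the docstring are hypothetical.

```python
import numpy as np

def flag_outliers(qa_metrics: np.ndarray, n_components: int = 2,
                  z_thresh: float = 3.5) -> np.ndarray:
    """qa_metrics: (n_scans, n_metrics) array, e.g. columns for noise level,
    artifact score, tensor-fit residual, and variance/bias estimates."""
    # Standardize metrics so no single scale dominates the projection.
    X = (qa_metrics - qa_metrics.mean(axis=0)) / (qa_metrics.std(axis=0) + 1e-12)
    # PCA via SVD; the leading components stand in for the low-dim manifold.
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    # Robust z-score (median/MAD) on distance from the center of the cloud.
    d = np.linalg.norm(scores - np.median(scores, axis=0), axis=1)
    mad = np.median(np.abs(d - np.median(d))) or 1e-12
    robust_z = 0.6745 * (d - np.median(d)) / mad
    return robust_z > z_thresh  # boolean mask of flagged scans
```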


Proceedings of SPIE | 2013

Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis

Yurui Gao; Scott S. Burns; Carolyn B. Lauzon; Andrew E. Fong; Terry A. James; Joel F. Lubar; Robert W. Thatcher; David A. Twillie; Michael D. Wirt; Marc A. Zola; Bret W. Logan; Adam W. Anderson; Bennett A. Landman

Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.
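
For context, XNAT exposes a REST interface, so a backbone like the one described can be scripted against directly. Below is a minimal sketch using the `requests` library; the server URL, credentials, and project ID are placeholders, and only the standard XNAT listing endpoints are assumed.

```python
import requests

XNAT = "https://xnat.example.org"        # placeholder server
session = requests.Session()
session.auth = ("username", "password")  # placeholder credentials

# List all projects visible to this account, as JSON.
resp = session.get(f"{XNAT}/data/projects", params={"format": "json"})
resp.raise_for_status()
for proj in resp.json()["ResultSet"]["Result"]:
    print(proj["ID"], proj["name"])

# List MR sessions within one (hypothetical) project.
resp = session.get(
    f"{XNAT}/data/projects/TBI_STUDY/experiments",
    params={"format": "json", "xsiType": "xnat:mrSessionData"},
)
resp.raise_for_status()
for exp in resp.json()["ResultSet"]["Result"]:
    print(exp["ID"], exp.get("label", ""))
```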


Magnetic Resonance in Medicine | 2013

Assessment of bias in experimentally measured diffusion tensor imaging parameters using SIMEX

Carolyn B. Lauzon; Ciprian M. Crainiceanu; Brian C. Caffo; Bennett A. Landman

Diffusion tensor imaging enables in vivo investigation of tissue cytoarchitecture through parameter contrasts sensitive to water diffusion barriers at the micrometer level. Parameters are derived through an estimation process that is susceptible to noise and artifacts. Estimated parameters (e.g., fractional anisotropy) exhibit both variability and bias relative to the true parameter value estimated from a hypothetical noise-free acquisition. Herein, we present the use of the simulation and extrapolation (SIMEX) approach for post hoc assessment of bias in a massively univariate imaging setting and evaluate the potential of a SIMEX-based bias correction. Using simulated data with known truth models, spatially varying fractional anisotropy bias error maps are evaluated on two independent and highly differentiated case studies. The stability of SIMEX and its distributional properties are further evaluated on 42 empirical diffusion tensor imaging datasets. Using gradient subsampling, an empirical experiment with a known true outcome is designed and SIMEX performance is compared to the original estimator. With this approach, we find SIMEX bias estimates to be highly accurate, offering significant reductions in parameter bias for individual datasets and greater accuracy in averaged population-based estimates.
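
The core SIMEX idea is compact enough to sketch: re-estimate the parameter on copies of the data with extra noise added at several levels lambda, fit the estimate-versus-lambda trend, and extrapolate back to lambda = -1, the hypothetical noise-free acquisition. Below is a toy sketch on a deliberately biased estimator (the mean of y^2, biased upward by sigma^2); it illustrates the algorithm only, not the paper's DTI implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimator(y):
    # A deliberately biased nonlinear statistic: E[y^2] = mu^2 + sigma^2.
    return np.mean(y ** 2)

def simex(y, sigma, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200):
    lam = np.array([0.0, *lambdas])
    est = [estimator(y)]  # lambda = 0 is the original noisy estimate
    for l in lambdas:
        # Average the estimator over many replicates with added noise lam*sigma^2.
        sims = [estimator(y + rng.normal(0.0, np.sqrt(l) * sigma, y.shape))
                for _ in range(n_sim)]
        est.append(np.mean(sims))
    # Fit a quadratic in lambda, then extrapolate to lambda = -1.
    return np.polyval(np.polyfit(lam, est, deg=2), -1.0)

true_mu, sigma = 2.0, 0.5
y = true_mu + rng.normal(0.0, sigma, size=1000)  # noisy measurements
print("naive:", estimator(y))     # biased upward by about sigma^2 = 0.25
print("SIMEX:", simex(y, sigma))  # close to true_mu**2 = 4.0
```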


Magnetic Resonance Imaging | 2013

Correcting power and p-value calculations for bias in diffusion tensor imaging

Carolyn B. Lauzon; Bennett A. Landman

Diffusion tensor imaging (DTI) provides quantitative parametric maps sensitive to tissue microarchitecture (e.g., fractional anisotropy, FA). These maps are estimated through computational processes and subject to random distortions including variance and bias. Traditional statistical procedures commonly used for study planning (including power analyses and p-value/alpha-rate thresholds) specifically model variability but neglect potential impacts of bias. Herein, we quantitatively investigate the impacts of bias in DTI on hypothesis test properties (power and alpha rate) using a two-sided hypothesis testing framework. We present a theoretical evaluation of bias on hypothesis test properties, evaluate the bias estimation technique SIMEX for DTI hypothesis testing using simulated data, and evaluate the impacts of bias on spatially varying power and alpha rates in an empirical study of 21 subjects. Bias is shown to inflate alpha rates, distort the power curve, and cause significant power loss even in empirical settings where the expected difference in bias between groups is zero. These adverse effects can be attenuated by properly accounting for bias in the calculation of power and p-values.
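
The qualitative effect is easy to reproduce for a simple two-sided z-test: a bias b shifts the sampling distribution of the estimate, inflating the alpha rate under the null and distorting the power curve. The numbers below are illustrative only, not taken from the paper.

```python
from scipy.stats import norm

def two_sided_reject_rate(effect, bias, se, alpha=0.05):
    """P(reject) when the estimate is distributed N(effect + bias, se^2)."""
    z = norm.ppf(1 - alpha / 2)
    shift = (effect + bias) / se
    return norm.cdf(-z + shift) + norm.cdf(-z - shift)

se = 0.02  # illustrative standard error of a group FA difference
print(two_sided_reject_rate(0.00,  0.00, se))  # nominal alpha: 0.05
print(two_sided_reject_rate(0.00,  0.03, se))  # null + bias: alpha inflated
print(two_sided_reject_rate(0.05,  0.00, se))  # power without bias
print(two_sided_reject_rate(0.05, -0.03, se))  # bias toward null: power loss
```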


Proceedings of SPIE | 2012

Towards automatic quantitative quality control for MRI

Carolyn B. Lauzon; Brian C. Caffo; Bennett A. Landman

Quality and consistency of clinical and research data collected from Magnetic Resonance Imaging (MRI) scanners may become suspect due to a wide variety of common factors including experimental changes, hardware degradation, hardware replacement, software updates, personnel changes, and observed imaging artifacts. Standard practice limits quality analysis to visual assessment by a researcher/clinician or quantitative quality control based upon phantoms, which may not be timely, cannot account for differing experimental protocols (e.g., gradient timings and strengths), and may not be pertinent to the data or experimental question at hand. This paper presents a parallel processing pipeline developed towards experiment-specific automatic quantitative quality control of MRI data, using diffusion tensor imaging (DTI) as an experimental test case. The pipeline consists of automatic identification of DTI scans run on the MRI scanner, calculation of DTI contrasts from the data, implementation of modern statistical methods (wild bootstrap and SIMEX) to assess variance and bias in DTI contrasts, and quality assessment via power calculations and normative values. For this pipeline, a DTI-specific power calculation analysis is developed, along with the first incorporation of bias estimates in DTI data to improve statistical analysis.
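
For reference, the wild bootstrap mentioned above perturbs fitted residuals with random sign flips (Rademacher weights) to estimate coefficient variability without resampling the design. Below is a minimal sketch on a generic linear model; in DTI, the design matrix would come from the gradient scheme (b-matrix) and the response from log-transformed signals.

```python
import numpy as np

rng = np.random.default_rng(0)

def wild_bootstrap_se(X, y, n_boot=1000):
    """Wild-bootstrap standard errors for ordinary least squares coefficients."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=y.shape)  # Rademacher weights
        # Refit on responses rebuilt from the fit plus sign-flipped residuals.
        boots[b], *_ = np.linalg.lstsq(X, X @ beta + resid * v, rcond=None)
    return boots.std(axis=0)

# Toy usage: 30 noisy observations of a line.
X = np.column_stack([np.ones(30), np.linspace(0.0, 1.0, 30)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 0.1, size=30)
print(wild_bootstrap_se(X, y))
```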


Medical Image Computing and Computer-Assisted Intervention | 2011

Assessment of bias for MRI diffusion tensor imaging using SIMEX

Carolyn B. Lauzon; Andrew J. Asman; Ciprian M. Crainiceanu; Brian C. Caffo; Bennett A. Landman

Diffusion Tensor Imaging (DTI) is a Magnetic Resonance Imaging method for measuring water diffusion in vivo. One powerful DTI contrast is fractional anisotropy (FA). FA reflects the strength of water's diffusion directional preference and is a primary metric for neuronal fiber tracking. As with other DTI contrasts, FA measurements are obscured by the well-established presence of bias. DTI bias has been challenging to assess because it is a multivariable problem including SNR, six tensor parameters, and the DTI collection and processing method used. SIMEX is a modern statistical technique that estimates bias by tracking measurement error as a function of added noise. Here, we use SIMEX to assess bias in FA measurements and show the method provides: (i) accurate FA bias estimates, (ii) a representation of FA bias that is dataset-specific and accessible to non-statisticians, and (iii) a first-time possibility for incorporating bias into DTI data analysis.
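
For concreteness, FA is computed from the three diffusion tensor eigenvalues with a standard formula (general DTI background, not specific to this paper):

```python
import numpy as np

def fractional_anisotropy(evals) -> float:
    """FA from the diffusion tensor eigenvalues (lambda1, lambda2, lambda3)."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()  # mean diffusivity
    denom = np.sqrt(np.sum(l ** 2))
    # FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||, in [0, 1].
    return float(np.sqrt(1.5 * np.sum((l - md) ** 2)) / denom) if denom > 0 else 0.0

print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))  # anisotropic: ~0.80
print(fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3]))  # isotropic: 0.0
```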


The American Statistician | 2013

Correction to “Easy Multiplicity Control in Equivalence Testing Using Two One-Sided Tests”

Brian Caffo; Carolyn B. Lauzon; Joachim Röhmel

Lauzon and Caffo (2009) proposed a simple correction for all pairwise comparisons for equivalence testing of independent groups. The procedure only controls the family-wise error rate (FWER) under settings equivalent to when the null hypothesis is true for all pairs of means. Röhmel (2011) suggested that the procedure should control the error rate regardless of the configuration of null or alternative status for the collection of means. Here, we give the appropriate correction for all numbers of groups (K) using the logic from the original article. Let μ1 ≤ μ2 ≤ … ≤ μK be the true means. Using the notation and arguments of Lauzon and Caffo (2009), the primary contributor to the error rate is comparisons where θ ≤ μj. We define the worst-case (WC) scenario as the configuration of means that has the maximum number of null comparisons with θ ≤ μj < 2θ (labeled meaningful null comparisons, MNCs). The WC scenario yields the divisor for the multiple comparisons procedure. We show that the WC scenario involves K²/4 comparisons, as conjectured in Röhmel (2011).
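
A minimal sketch of how such a correction could be applied, assuming the per-comparison alpha is divided by the worst-case count of meaningful null comparisons, K²/4 (floor(K²/4) for odd K), with each pairwise comparison tested by two one-sided t-tests against an equivalence margin theta:

```python
from itertools import combinations
import numpy as np
from scipy import stats

def tost_equivalent(a, b, theta, alpha):
    """Two one-sided tests: declare equivalence if -theta < mu_a - mu_b < theta."""
    p_upper = stats.ttest_ind(a, b + theta, alternative="less").pvalue
    p_lower = stats.ttest_ind(a, b - theta, alternative="greater").pvalue
    return max(p_upper, p_lower) < alpha

def all_pairwise_tost(groups, theta, alpha=0.05):
    k = len(groups)
    alpha_adj = alpha / (k * k // 4)  # worst-case divisor from the correction
    return {(i, j): tost_equivalent(gi, gj, theta, alpha_adj)
            for (i, gi), (j, gj) in combinations(enumerate(groups), 2)}

rng = np.random.default_rng(0)
groups = [rng.normal(mu, 1.0, size=50) for mu in (0.0, 0.05, 1.0)]
print(all_pairwise_tost(groups, theta=0.5))
```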


Proceedings of SPIE--the International Society for Optical Engineering | 2013

Robust Inter-Modality Multi-Atlas Segmentation for PACS-based DTI Quality Control

Andrew J. Asman; Carolyn B. Lauzon; Bennett A. Landman

Anatomical contexts (spatial labels) are critical for interpretation of medical imaging content. Numerous approaches have been devised for segmentation, query, and retrieval within the Picture Archive and Communication System (PACS) framework. To date, application-based methods for anatomical localization and tissue classification have yielded the most successful results, but these approaches typically rely upon the availability of standardized imaging sequences. With the ever-expanding scope of PACS archives (including multiple imaging modalities, multiple image types within a modality, and multi-site efforts), it is becoming increasingly burdensome to devise a specific method for each data type. To address the challenge of generalizing segmentations from one modality to another, we consider multi-atlas segmentation to transfer label information from labeled T1-weighted MRI data to unlabeled data collected in a diffusion tensor imaging (DTI) experiment. The label transfer approach is fully automated and enables a generalizable cross-modality segmentation method. Herein, we propose a multi-tier multi-atlas segmentation framework for the segmentation of previously unlabeled imaging modalities (e.g., B0 images for DTI analysis). We show that this approach can be used to construct informed structure-wise noise estimates for fractional anisotropy (FA) measurements of DTI. Although this label transfer methodology is demonstrated in the context of quality control of DTI images, the proposed framework is applicable to any application where the segmentation of unlabeled modalities is limited by the current collection of available atlases.
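
A minimal sketch of the label-transfer baseline: once each atlas label map has been registered to the target image (registration itself is omitted here), the propagated labels can be fused by majority vote. The paper's multi-tier statistical fusion is more sophisticated; this shows only the baseline idea.

```python
import numpy as np

def majority_vote_fusion(propagated_labels: np.ndarray) -> np.ndarray:
    """propagated_labels: (n_atlases, X, Y, Z) integer label volumes already
    registered to the target; returns the fused (X, Y, Z) label volume."""
    n_labels = int(propagated_labels.max()) + 1
    # Count the votes for each label at every voxel, then take the argmax.
    votes = np.stack([(propagated_labels == lab).sum(axis=0)
                      for lab in range(n_labels)])
    return votes.argmax(axis=0)

# Toy usage: 5 atlases, a tiny 2x2x1 target, labels {0, 1}.
atlases = np.random.default_rng(0).integers(0, 2, size=(5, 2, 2, 1))
print(majority_vote_fusion(atlases))
```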


Collaboration


Top co-authors of Carolyn B. Lauzon:

Brian C. Caffo, Johns Hopkins University
Yurui Gao, Vanderbilt University
Susan M. Resnick, National Institutes of Health
Xue Yang, Vanderbilt University