Brian E. Chapman
University of Utah
Publications
Featured research published by Brian E. Chapman.
Academic Radiology | 2003
Joseph K. Leader; Bin Zheng; Robert M. Rogers; Frank C. Sciurba; Andrew Perez; Brian E. Chapman; Sanjay R. Patel; Carl R. Fuhrman; David Gur
RATIONALE AND OBJECTIVES To develop and evaluate a reliable, fully-automated lung segmentation scheme for application in X-ray computed tomography. MATERIALS AND METHODS The automated scheme was heuristically developed using a slice-based, pixel-value threshold and two sets of classification rules. Features used in the rules include size, circularity, and location. The segmentation scheme operates slice-by-slice and performs three key operations: (1) image preprocessing to remove background pixels, (2) computation and application of a pixel-value threshold to identify lung tissue, and (3) refinement of the initial segmented regions to prune incorrectly detected airways and separate fused right and left lungs. RESULTS The performance of the automated segmentation scheme was evaluated using 101 computed tomography cases (91 thick slice, 10 thin slice scans). The 91 thick cases were pre- and post-surgery from 50 patients and were not independent. The automated scheme successfully segmented 94.0% of the 2,969 thick slice images and 97.6% of the 1,161 thin slice images. The mean difference of the total lung volumes calculated by the automated scheme and functional residual capacity plus 60% inspiratory capacity was -24.7 +/- 508.1 mL. The mean differences of the total lung volumes calculated by the automated scheme and an established, commonly used semi-automated scheme were 95.2 +/- 52.5 mL and -27.7 +/- 66.9 mL for the thick and thin slice cases, respectively. CONCLUSION This simple, fully-automated lung segmentation scheme provides an objective tool to facilitate lung segmentation from computed tomography scans.
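The slice-by-slice pipeline described above (threshold to find air-like pixels, then rule-based refinement by size and location) can be sketched in Python. This is a minimal illustration, not the authors' implementation: the -400 HU threshold, the border-connected background removal, and the 100-pixel size rule are assumed values chosen for the example.

```python
import numpy as np
from scipy import ndimage

def segment_lung_slice(ct_slice, threshold=-400):
    """Sketch of slice-based lung segmentation: threshold, label,
    then keep regions by a simple size rule. The -400 HU threshold
    and minimum region size are illustrative, not the paper's values."""
    # 1) treat air-like pixels as candidate lung tissue
    candidate = ct_slice < threshold
    # 2) remove background regions connected to the image border
    labels, _ = ndimage.label(candidate)
    border_labels = (set(labels[0, :]) | set(labels[-1, :]) |
                     set(labels[:, 0]) | set(labels[:, -1]))
    for lab in border_labels:
        candidate[labels == lab] = False
    # 3) keep only sufficiently large remaining regions (size rule)
    labels, n = ndimage.label(candidate)
    sizes = ndimage.sum(candidate, labels, range(1, n + 1))
    mask = np.zeros_like(candidate)
    for lab, size in enumerate(sizes, start=1):
        if size > 100:  # illustrative minimum region size
            mask |= labels == lab
    return mask
```

In practice the paper's refinement stage also prunes airways and separates fused right and left lungs, which a size rule alone does not capture.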
Journal of the American Medical Informatics Association | 2012
Lucila Ohno-Machado; Vineet Bafna; Aziz A. Boxwala; Brian E. Chapman; Wendy W. Chapman; Kamalika Chaudhuri; Michele E. Day; Claudiu Farcas; Nathaniel D. Heintzman; Xiaoqian Jiang; Hyeoneui Kim; Jihoon Kim; Michael E. Matheny; Frederic S. Resnic; Staal A. Vinterbo
iDASH (integrating data for analysis, anonymization, and sharing) is the newest National Center for Biomedical Computing funded by the NIH. It focuses on algorithms and tools for sharing data in a privacy-preserving manner. Foundational privacy technology research performed within iDASH is coupled with innovative engineering for collaborative tool development and data-sharing capabilities in a private Health Insurance Portability and Accountability Act (HIPAA)-certified cloud. Driving Biological Projects, which span different biological levels (from molecules to individuals to populations) and focus on various health conditions, help guide research and development within this Center. Furthermore, training and dissemination efforts connect the Center with its stakeholders and educate data owners and data consumers on how to share and use clinical and biological data. Through these various mechanisms, iDASH implements its goal of providing biomedical and behavioral researchers with access to data, software, and a high-performance computing environment, thus enabling them to generate and test new hypotheses.
Journal of Biomedical Informatics | 2001
Wendy W. Chapman; Marcelo Fizman; Brian E. Chapman; Peter J. Haug
We compared the performance of expert-crafted rules, a Bayesian network, and a decision tree at automatically identifying chest X-ray reports that support acute bacterial pneumonia. We randomly selected 292 chest X-ray reports, 75 (25%) of which were from patients with a hospital discharge diagnosis of bacterial pneumonia. The reports were encoded by our natural language processor and then manually corrected for mistakes. The encoded observations were analyzed by three expert systems to determine whether the reports supported pneumonia. The reference standard for radiologic support of pneumonia was the majority vote of three physicians. We compared (a) the performance of the expert systems against each other and (b) the performance of the expert systems against that of four physicians who were not part of the gold standard. Output from the expert systems and the physicians was transformed so that comparisons could be made with both binary and probabilistic output. Metrics of comparison for binary output were sensitivity (sens), precision (prec), and specificity (spec). The metric of comparison for probabilistic output was the area under the receiver operating characteristic (ROC) curve. We used McNemar's test to determine statistical significance for binary output and univariate z-tests for probabilistic output. Measures of performance of the expert systems for binary (probabilistic) output were as follows: Rules--sens, 0.92; prec, 0.80; spec, 0.86 (Az, 0.960); Bayesian network--sens, 0.90; prec, 0.72; spec, 0.78 (Az, 0.945); decision tree--sens, 0.86; prec, 0.85; spec, 0.91 (Az, 0.940). Comparisons of the expert systems against each other using binary output showed a significant difference between the rules and the Bayesian network and between the decision tree and the Bayesian network. Comparisons of expert systems using probabilistic output showed no significant differences.
Comparisons of binary output against physicians showed differences between the Bayesian network and two physicians. Comparisons of probabilistic output against physicians showed a difference between the decision tree and one physician. The expert systems performed similarly for the probabilistic output but differed in measures of sensitivity, precision, and specificity produced by the binary output. All three expert systems performed similarly to physicians.
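The binary-output comparison above rests on standard classification metrics plus McNemar's test on the discordant pairs of two classifiers. A minimal sketch of both (illustrative, not the study's code):

```python
import numpy as np
from scipy.stats import chi2

def binary_metrics(pred, truth):
    """Sensitivity, precision, and specificity for binary output."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    tn = np.sum(~pred & ~truth)
    fn = np.sum(~pred & truth)
    return tp / (tp + fn), tp / (tp + fp), tn / (tn + fp)

def mcnemar(pred_a, pred_b, truth):
    """McNemar's chi-square test on the discordant pairs of two
    classifiers judged against the same reference standard."""
    a_right = np.asarray(pred_a) == np.asarray(truth)
    b_right = np.asarray(pred_b) == np.asarray(truth)
    b01 = np.sum(a_right & ~b_right)  # A right, B wrong
    b10 = np.sum(~a_right & b_right)  # A wrong, B right
    stat = (abs(b01 - b10) - 1) ** 2 / (b01 + b10)  # continuity-corrected
    return 1 - chi2.cdf(stat, df=1)   # two-sided p-value
```

McNemar's test conditions only on the cases where the two classifiers disagree, which is why it suits paired comparisons on a shared test set.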
Journal of the American Medical Informatics Association | 2003
Wendy W. Chapman; Gregory F. Cooper; Paul Hanbury; Brian E. Chapman; Lee H. Harrison; Michael M. Wagner
OBJECTIVE The aim of this study was to create a classifier for automatic detection of chest radiograph reports consistent with the mediastinal findings of inhalational anthrax. DESIGN The authors used the Identify Patient Sets (IPS) system to create a key word classifier for detecting reports describing mediastinal findings consistent with anthrax and evaluated its performance on a test set of 79,032 chest radiograph reports. MEASUREMENTS Area under the ROC curve was the main outcome measure of the IPS classifier. Sensitivity and specificity of an initial IPS model were calculated based on an existing key word search and were compared against a Boolean version of the IPS classifier. RESULTS The IPS classifier achieved an area under the ROC curve of 0.677 (90% CI = 0.628 to 0.772) with a specificity of 0.99 and maximum sensitivity of 0.35. The initial IPS model attained a specificity of 1.0 and a sensitivity of 0.04. CONCLUSION The IPS system is a useful tool for helping domain experts create a statistical key word classifier for textual reports that is a potentially useful component in surveillance of radiographic findings suspicious for anthrax.
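The core of a key word classifier like the one described can be sketched as a weighted term lookup. The phrases and weights below are hypothetical placeholders, not the IPS model's actual terms or weights:

```python
def keyword_score(report, weights):
    """Score a report by summing the weights of key phrases it contains.
    The phrase list and weights are illustrative assumptions only."""
    text = report.lower()
    return sum(w for kw, w in weights.items() if kw in text)

# Hypothetical phrase weights; the real IPS terms are not shown here.
WEIGHTS = {"mediastinal widening": 2.0, "widened mediastinum": 2.0,
           "pleural effusion": 1.0, "hilar": 0.5}

def classify(report, threshold=1.0):
    """Binary decision: does the report's score exceed the threshold?
    Sweeping the threshold traces out the classifier's ROC curve."""
    return keyword_score(report, WEIGHTS) >= threshold
```

Varying the decision threshold over the scored test set is what yields the sensitivity/specificity trade-off summarized by the area under the ROC curve.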
Medical Image Analysis | 2004
Brian E. Chapman; Janet O. Stapelton; Dennis L. Parker
We evaluate the accuracy of a vascular segmentation algorithm which uses continuity in the maximum intensity projection (MIP) depth Z-buffer as a pre-processing step to generate a list of 3D seed points for further segmentation. We refer to the algorithm as Z-buffer segmentation (ZBS). The pre-processing of the MIP Z-buffer is based on smoothness measured using the minimum chi-square value of a least-squares fit. Points in the Z-buffer with chi-square values below a selected threshold are used as seed points for 3D region growing. The ZBS algorithm couples spatial continuity information with intensity information to create a simple yet accurate segmentation algorithm. We examine the dependence of the segmentation on various parameters of the algorithm. Performance is assessed in terms of the inclusion/exclusion of vessel/background voxels in the segmentation of intracranial time-of-flight MRA images. The evaluation is based on 490,256 voxels from 14 patients that were classified by an observer. ZBS performance was compared to simple thresholding and to segmentation based on vessel enhancement filtering. The ZBS segmentation was only weakly dependent on the parameters of the initial MIP image generation, indicating the robustness of this approach. Region growing based on Z-buffer generated seeds was advantageous compared to simple thresholding. The ZBS algorithm provided segmentation accuracies similar to those obtained with the vessel enhancement filter. The ZBS performance was notably better than the filter-based segmentation for aneurysms where the assumptions of the filter were violated. As currently implemented, the algorithm slightly under-segments the intracranial vasculature.
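The two stages of ZBS can be sketched as follows. This is a simplified illustration: the smoothness test is shown on a 1D depth profile rather than a 2D Z-buffer neighborhood, and the window size and chi-square threshold are assumed values:

```python
import numpy as np
from scipy import ndimage

def zbuffer_seeds(zbuf, window=5, chi2_thresh=1.0):
    """Sketch of the Z-buffer smoothness test: fit a least-squares line
    to the depth values in a sliding window and keep points whose
    chi-square residual falls below a threshold (smooth depth suggests
    a continuous vessel surface). Window and threshold are illustrative."""
    half = window // 2
    x = np.arange(window)
    seeds = []
    for i in range(half, len(zbuf) - half):
        y = zbuf[i - half:i + half + 1]
        coeffs = np.polyfit(x, y, 1)           # least-squares line
        resid = y - np.polyval(coeffs, x)
        if np.sum(resid ** 2) < chi2_thresh:   # smooth -> seed point
            seeds.append(i)
    return seeds

def region_grow(volume, seed, thresh):
    """3D region growing: keep the above-threshold connected component
    that contains the seed voxel."""
    labels, _ = ndimage.label(volume > thresh)
    return labels == labels[seed]
```

Coupling the depth-continuity seeds with intensity-based growing is what distinguishes this approach from thresholding alone.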
Information Processing in Medical Imaging | 2005
Prashanthi Vemuri; Eugene Kholmovski; Dennis L. Parker; Brian E. Chapman
Magnetic resonance (MR) images can be acquired by multiple receiver coil systems to improve signal-to-noise ratio (SNR) and to decrease acquisition time. The optimal SNR images can be reconstructed from the coil data when the coil sensitivities are known. In typical MR imaging studies, the information about coil sensitivity profiles is not available. In such cases the sum-of-squares (SoS) reconstruction algorithm is usually applied. The intensity of the SoS reconstructed image is modulated by a spatially variable function due to the non-uniformity of coil sensitivities. Additionally, the SoS images also have sub-optimal SNR and bias in image intensity. All these effects might introduce errors when quantitative analysis and/or tissue segmentation are performed on the SoS reconstructed images. In this paper, we present an iterative algorithm for coil sensitivity estimation and demonstrate its applicability for optimal SNR reconstruction and intensity inhomogeneity correction in phased array MR imaging.
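The contrast between the two reconstructions can be shown in a few lines. This sketch assumes uniform noise across coils and uses the standard matched-filter form of the sensitivity-weighted combination; it is not the paper's iterative estimation algorithm, which recovers the sensitivities when they are unknown:

```python
import numpy as np

def sos_reconstruct(coil_images):
    """Sum-of-squares combination of multi-coil images: voxelwise
    square root of the summed squared magnitudes. Its intensity is
    modulated by the (unknown) coil sensitivity profiles."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

def sensitivity_weighted(coil_images, sensitivities):
    """Optimal-SNR combination when coil sensitivities are known
    (matched filter, assuming uniform uncorrelated noise)."""
    num = np.sum(np.conj(sensitivities) * coil_images, axis=0)
    den = np.sum(np.abs(sensitivities) ** 2, axis=0)
    return num / np.maximum(den, 1e-12)
```

For a uniform object, the sensitivity-weighted result recovers the true intensity while the SoS image is scaled by the root-sum-square of the sensitivities, which is precisely the inhomogeneity the paper's algorithm corrects.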
Medical Image Analysis | 2005
Brian E. Chapman; Dennis L. Parker
In this paper we evaluate the use of voxel intensity curvature measurements to enhance vessels in 3D MRA images. We compare a multi-scale discrete kernel filter (MaxCurve) to the Hessian matrix based filter proposed by Frangi and co-workers. The MaxCurve filter is based on the maximum difference between the negative curvature computed along orthogonal lines defined by a 3x3x3 kernel. Filter performance is assessed using measures of vessel and background separation (contrast and the area under the ROC curve). Filter parameters are optimized using a training set of four typical time-of-flight MRA images and tested on a separate set of ten MRA images with the same acquisition parameters. The filters tended to provide good MIP image contrast enhancement. The filters are applied to MRA images acquired with different parameters and field strengths indicating potential usefulness for a variety of images. Overall the discrete kernel and Hessian matrix filter performed quite similarly.
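The discrete-kernel curvature idea can be sketched with second differences inside a 3x3x3 neighborhood. This simplified version samples only the three axis-aligned lines; the published MaxCurve filter considers additional line orientations through the kernel:

```python
import numpy as np

def maxcurve_response(vol):
    """Sketch of a discrete curvature filter: second differences along
    the three axis directions of each interior voxel's 3x3x3
    neighborhood. A bright tubular structure has strongly negative
    curvature across the vessel, giving a large positive response."""
    resp = np.zeros_like(vol, dtype=float)
    core = vol[1:-1, 1:-1, 1:-1]
    curvs = []
    for axis in range(3):
        fwd = np.roll(vol, -1, axis)[1:-1, 1:-1, 1:-1]
        bwd = np.roll(vol, 1, axis)[1:-1, 1:-1, 1:-1]
        curvs.append(fwd + bwd - 2 * core)  # discrete second difference
    # response: magnitude of the most negative curvature over directions
    resp[1:-1, 1:-1, 1:-1] = np.maximum(0, -np.min(curvs, axis=0))
    return resp
```

A voxel on a bright line responds strongly in the two transverse directions and not at all along the vessel axis, which is the geometric cue both this filter and the Hessian-based filter exploit.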
Computer Methods and Programs in Biomedicine | 2005
Wen-Chi Christina Lee; Mitchell E. Tublin; Brian E. Chapman
The purpose of this work was to determine the feasibility and efficacy of retrospective registration of MR and CT images of the liver. The open-source ITK Insight Software package developed by the National Library of Medicine (USA) contains a multi-resolution, voxel-similarity-based registration algorithm which we selected as our baseline registration method. For comparison we implemented a multi-scale surface fitting technique based on the head-and-hat algorithm. Registration accuracy was assessed using the mean displacement of automatically selected point landmarks. The ITK voxel-similarity-based registration algorithm performed better than the surface-based approach with mean misregistration in the range of 7.7-8.4 mm for CT-CT registration, 8.2 mm for MR-MR registration, and 14.0-18.9 mm for MR-CT registration compared to mean misregistration from the surface-based technique in the range of 9.6-11.1 mm for CT-CT registration, 9.2-12.4 mm for MR-MR registration, and 15.2-19.0 mm for MR-CT registration.
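The accuracy measure used above, mean displacement of corresponding point landmarks, is straightforward to state in code (a generic sketch, not the study's landmark-selection procedure):

```python
import numpy as np

def mean_landmark_displacement(points_fixed, points_moving, transform):
    """Registration accuracy as the mean Euclidean distance between
    fixed-image landmarks and the transformed moving-image landmarks.
    Units follow the inputs (mm if landmarks are in mm)."""
    mapped = np.array([transform(p) for p in points_moving])
    return np.mean(np.linalg.norm(np.asarray(points_fixed) - mapped, axis=1))
```

A perfect registration drives this value to zero; the 7.7-18.9 mm figures above are this quantity averaged over the automatically selected landmarks.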
Journal of Magnetic Resonance Imaging | 2000
J. Rock Hadley; Brian E. Chapman; John A. Roberts; David C. Chapman; K. Craig Goodrich; Henry R. Buswell; Andrew L. Alexander; Jay S. Tsuruda; Dennis L. Parker
The purpose of this work was to compare intracranial magnetic resonance angiography (MRA) image quality using three different radiofrequency coils. The three coil types included a reduced volume quadrature birdcage coil with endcap, a commercially available quadrature birdcage head coil, and a four‐element phased‐array coil. Signal‐to‐noise ratio (SNR) measurements were obtained from comparison studies performed on a uniform cylindrical phantom. MRA comparisons were performed using data acquired from 15 volunteers and applying a thick‐slab three‐dimensional time‐of‐flight sequence. Analysis was performed using the signal difference‐to‐noise ratio, a quantitative measure of the relative vascular signal. The reduced‐volume endcap and phased‐array coils, which were designed specifically for imaging the intracranial volume of the head, improved the image SNR and vascular detail considerably over that obtained using the commercially available head coil. The endcap coil configuration provided the best vascular signal overall, while the phased‐array coil provided the best results for arteries close to the coil elements. J. Magn. Reson. Imaging 2000;11:458–468.
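A common way to measure SNR from a uniform-phantom magnitude image is the mean signal in a central region divided by the corrected noise standard deviation in a background region; the sketch below uses the standard Rayleigh correction for magnitude-image background noise and is a generic illustration, not the coil study's exact protocol:

```python
import numpy as np

def phantom_snr(image, signal_roi, noise_roi, rayleigh_correction=0.655):
    """SNR from a uniform-phantom magnitude image: mean signal in a
    central ROI over the background noise standard deviation. The
    0.655 factor maps the Rayleigh-distributed background standard
    deviation of a magnitude image back to the Gaussian noise sigma."""
    signal = image[signal_roi].mean()
    sigma = image[noise_roi].std() / rayleigh_correction
    return signal / sigma
```

Comparing this figure across coils on the same phantom and sequence isolates the contribution of the coil geometry to image quality.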
Medical Imaging 2003: Image Processing | 2003
Bin Zheng; J. Ken Leader; Glenn S. Maitz; Brian E. Chapman; Carl R. Fuhrman; Robert M. Rogers; Frank C. Sciurba; Andrew Perez; Paul P. Thompson; Walter F. Good; David Gur
We developed and tested an automated scheme to segment lung areas depicted in CT images. The scheme includes a series of six steps. 1) Filtering and removing pixels outside the scanned anatomic structures. 2) Segmenting the potential lung areas using an adaptive threshold based on pixel value distribution in each CT slice. 3) Labeling all selected pixels into segmented regions and deleting isolated regions in non-lung areas. 4) Labeling and filling interior cavities (e.g., pleural nodules, airway walls, and major blood vessels) inside lung areas. 5) Detecting and deleting the main airways (e.g., trachea and central bronchi) connected to the segmented lung areas. 6) Detecting and separating possible anterior or posterior junctions between the lungs. Five lung CT cases (7-10 mm in slice thickness) with a variety of disease patterns were used to train or set up the classification rules in the scheme. Fifty examinations of emphysema patients were then used to test the scheme. The results were compared with the results generated from a semi-automated method with manual interaction by an expert observer. The experimental results showed that the average difference in estimated lung volumes between the automated scheme and the manually corrected approach was 2.91%±0.88%. Visual examination of segmentation results indicated that the difference between the two methods was larger in the areas near the apices and the diaphragm. This preliminary study demonstrated that a simple multi-stage scheme has the potential to eliminate the need for manual interaction during lung segmentation. Hence, it can ultimately be integrated into computer schemes for quantitative analysis and diagnosis of lung diseases.
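Step 2's per-slice adaptive threshold, derived from the pixel-value distribution, admits several realizations. One plausible sketch is the classic iterative-midpoint (Ridler-Calvard style) threshold shown below; the abstract does not specify which method the authors used, so this is an assumption for illustration:

```python
import numpy as np

def adaptive_threshold(pixels, tol=0.5):
    """Iterative-midpoint threshold from a slice's pixel-value
    distribution: split at the current threshold, then re-center the
    threshold between the two class means until it stabilizes. One
    plausible realization of a per-slice adaptive threshold."""
    t = pixels.mean()
    while True:
        lo = pixels[pixels <= t].mean()   # mean of the darker class
        hi = pixels[pixels > t].mean()    # mean of the brighter class
        t_new = (lo + hi) / 2
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

On a bimodal slice histogram (air-filled lung versus soft tissue), the iteration settles between the two modes, adapting the cut to each slice rather than using one fixed Hounsfield value.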