Evan S. McCreedy
National Institutes of Health
Publications
Featured research published by Evan S. McCreedy.
Nature Protocols | 2014
Abhishek Kumar; Yicong Wu; Ryan Christensen; Panagiotis Chandris; William Gandler; Evan S. McCreedy; Alexandra Bokinsky; Daniel A. Colón-Ramos; Zhirong Bao; Matthew J. McAuliffe; Gary Rondeau; Hari Shroff
We describe the construction and use of a compact dual-view inverted selective plane illumination microscope (diSPIM) for time-lapse volumetric (4D) imaging of living samples at subcellular resolution. Our protocol enables a biologist with some prior microscopy experience to assemble a diSPIM from commercially available parts, to align optics and test system performance, to prepare samples, and to control hardware and data processing with our software. Unlike existing light sheet microscopy protocols, our method does not require the sample to be embedded in agarose; instead, samples are prepared conventionally on glass coverslips. Tissue culture cells and Caenorhabditis elegans embryos are used as examples in this protocol; successful implementation of the protocol results in isotropic resolution and acquisition speeds of up to several volumes per second on these samples. Assembling and verifying diSPIM performance takes ∼6 d, sample preparation and data acquisition take up to 5 d, and postprocessing takes 3–8 h, depending on the size of the data.
Journal of the American Medical Informatics Association | 2010
Stephen B. Johnson; Glen Whitney; Matthew J. McAuliffe; Hailong Wang; Evan S. McCreedy; Leon Rozenblit; Clark C. Evans
OBJECTIVE To propose a centralized method for generating global unique identifiers to link collections of research data and specimens. DESIGN The work is a collaboration between the Simons Foundation Autism Research Initiative and the National Database for Autism Research. The system is implemented as a web service: an investigator inputs identifying information about a participant into a client application and sends encrypted information to a server application, which returns a generated global unique identifier. The authors evaluated the system using a volume test of one million simulated individuals and a field test on 2000 families (over 8000 individual participants) in an autism study. MEASUREMENTS Inverse probability of hash codes; rate of false identity of two individuals; rate of false split of a single individual; percentage of subjects for whom identifying information could be collected; percentage of hash codes generated successfully. RESULTS The large-volume simulation generated no false splits or false identities. Field testing in the Simons Foundation Autism Research Initiative Simplex Collection produced identifiers for 96% of children in the study and 77% of parents. On average, four out of five hash codes per subject were generated perfectly (only one perfect hash is required for subsequent matching). DISCUSSION The system must balance the competing goals of distinguishing individuals, collecting accurate information for matching, and protecting confidentiality. Considerable effort is required to obtain approval from institutional review boards, to obtain consent from participants, and to achieve compliance from sites during a multicenter study. CONCLUSION Global unique identifiers have the potential to link collections of research data, augment the amount and types of data available for individuals, support detection of overlap between collections, and facilitate replication of research findings.
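A rough Java sketch of the hashing idea described above: identifying fields are normalized and hashed on the client so that only a one-way hash reaches the server, which maps each new hash to a stable identifier. The class and method names are hypothetical, and the real service uses several hash codes per subject and encrypted web-service transport rather than this single-hash simplification.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

/** Toy GUID service: identifying fields are hashed client-side, only the hash
 *  travels to the server, and the server maps each hash to a stable GUID. */
public class ToyGuidService {
    private final Map<String, String> hashToGuid = new HashMap<>();

    /** Client-side step: derive a one-way hash from normalized identifying fields. */
    public static String hashIdentity(String firstName, String lastName,
                                      String dateOfBirth, String sex) throws Exception {
        String normalized = (firstName + "|" + lastName + "|" + dateOfBirth + "|" + sex)
                .toUpperCase().replaceAll("\\s+", "");
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(normalized.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b & 0xff));
        return hex.toString();
    }

    /** Server-side step: return the existing GUID for a known hash, or mint a new one. */
    public synchronized String getOrCreateGuid(String identityHash) {
        return hashToGuid.computeIfAbsent(identityHash, h -> UUID.randomUUID().toString());
    }
}
```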
International Conference of the IEEE Engineering in Medicine and Biology Society | 2006
Evan S. McCreedy; Ruida Cheng; Paul F. Hemler; Anand Viswanathan; Bradford J. Wood; Matthew J. McAuliffe
The radio frequency ablation segmentation tool (RFAST) is a software application developed using the National Institutes of Health's Medical Image Processing, Analysis, and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize, and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented three-dimensional (3-D) surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. This paper describes the software systems contained in RFAST that address the needs of clinicians in planning, evaluating, and simulating RFA treatments of malignant hepatic tissue.
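As an illustration of the sphere-packing simulation mentioned above (not RFAST's actual planning code; the class names are made up), the fraction of a segmented tumor covered by a set of simulated spherical burns can be estimated by simple voxel counting:

```java
/** Minimal sketch: estimate how much of a segmented tumor mask is covered
 *  by simulated spherical ablation burns, in voxel coordinates. */
public class BurnCoverage {

    /** A simulated ablation burn modeled as a sphere. */
    public static class Burn {
        final double cx, cy, cz, radius;
        Burn(double cx, double cy, double cz, double radius) {
            this.cx = cx; this.cy = cy; this.cz = cz; this.radius = radius;
        }
    }

    /** Fraction of tumor voxels that fall inside at least one burn sphere. */
    public static double coveredFraction(boolean[][][] tumorMask, java.util.List<Burn> burns) {
        long tumorVoxels = 0, coveredVoxels = 0;
        for (int z = 0; z < tumorMask.length; z++)
            for (int y = 0; y < tumorMask[z].length; y++)
                for (int x = 0; x < tumorMask[z][y].length; x++) {
                    if (!tumorMask[z][y][x]) continue;   // only count tumor voxels
                    tumorVoxels++;
                    for (Burn b : burns) {
                        double dx = x - b.cx, dy = y - b.cy, dz = z - b.cz;
                        if (dx * dx + dy * dy + dz * dz <= b.radius * b.radius) {
                            coveredVoxels++;
                            break;                        // covered by this burn; stop checking
                        }
                    }
                }
        return tumorVoxels == 0 ? 0.0 : (double) coveredVoxels / tumorVoxels;
    }
}
```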
Proceedings of SPIE | 2016
Ruida Cheng; Holger R. Roth; Le Lu; Shijun Wang; Baris Turkbey; William Gandler; Evan S. McCreedy; Harsh K. Agarwal; Peter L. Choyke; Ronald M. Summers; Matthew J. McAuliffe
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient variability in prostate shape and texture, and the lack of a clear prostate boundary, specifically at the apex and base levels. We propose a supervised machine learning model that combines an atlas-based Active Appearance Model (AAM) with a deep learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and deep learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The model thus uses the adaptive atlas-based AAM together with deep learning to achieve high segmentation accuracy.
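The Dice Similarity Coefficient used here measures voxel overlap between a predicted mask and a reference mask, DSC = 2|A∩B| / (|A| + |B|). A minimal Java sketch over flattened binary masks (illustrative only, not MIPAV code):

```java
/** Dice similarity coefficient between two binary masks of equal length. */
public class DiceMetric {
    public static double dice(boolean[] prediction, boolean[] groundTruth) {
        if (prediction.length != groundTruth.length)
            throw new IllegalArgumentException("Masks must have the same number of voxels");
        long intersection = 0, predCount = 0, truthCount = 0;
        for (int i = 0; i < prediction.length; i++) {
            if (prediction[i]) predCount++;
            if (groundTruth[i]) truthCount++;
            if (prediction[i] && groundTruth[i]) intersection++;
        }
        // Define the DSC of two empty masks as perfect agreement.
        return (predCount + truthCount) == 0 ? 1.0
                : 2.0 * intersection / (predCount + truthCount);
    }
}
```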
International Conference of the IEEE Engineering in Medicine and Biology Society | 2014
Ruida Cheng; Baris Turkbey; William Gandler; Harsh K. Agarwal; Vijay P. Shah; Alexandra Bokinsky; Evan S. McCreedy; Shijun Wang; Sandeep Sankineni; Marcelino Bernardo; Thomas J. Pohida; Peter L. Choyke; Matthew J. McAuliffe
Automatic prostate segmentation in MR images is a challenging task due to inter-patient variability in prostate shape and texture and the lack of a clear prostate boundary. We propose a supervised learning framework that combines an atlas-based AAM with an SVM model to achieve accurate segmentation of the prostate boundary. The performance of the segmentation is evaluated with cross-validation on 40 MR image datasets, yielding an average segmentation accuracy near 90%.
arXiv: Computer Vision and Pattern Recognition | 2016
Naji Khosravan; Haydar Celik; Baris Turkbey; Ruida Cheng; Evan S. McCreedy; Matthew J. McAuliffe; Sandra Bednarova; Elizabeth Jones; Xinjian Chen; Peter L. Choyke; Bradford J. Wood; Ulas Bagci
In this study, we developed a novel system, called Gaze2Segment, integrating biological and computer vision techniques to support radiologists’ reading experience with an automatic image segmentation task. During diagnostic assessment of lung CT scans, the radiologists’ gaze information was used to create a visual attention map. Next, this map was combined with a computer-derived saliency map extracted from the gray-scale CT images. The visual attention map was used as an input for roughly indicating the location of a region of interest. With the computer-derived saliency information, on the other hand, we aimed at finding foreground and background cues for the object of interest found in the previous step. These cues were then used to initiate a seed-based delineation process. The proposed Gaze2Segment achieved a Dice similarity coefficient of 86% and a Hausdorff distance of 1.45 mm as segmentation accuracy. To the best of our knowledge, Gaze2Segment is the first true integration of eye-tracking technology into a medical image segmentation task without the need for any further user interaction.
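A minimal sketch of the seed-selection step described above, assuming attention and saliency maps already normalized to [0, 1]; the product combination, thresholds, and class names are illustrative assumptions rather than the Gaze2Segment implementation:

```java
/** Sketch: combine a gaze attention map with an image saliency map and
 *  threshold the result into foreground/background seeds for delineation. */
public class GazeSeeds {
    /** Returns a label map: 1 = foreground seed, 0 = background seed, -1 = unlabeled. */
    public static int[][] seedLabels(double[][] attention, double[][] saliency,
                                     double fgThreshold, double bgThreshold) {
        int rows = attention.length, cols = attention[0].length;
        int[][] labels = new int[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++) {
                double combined = attention[r][c] * saliency[r][c]; // both assumed in [0, 1]
                if (combined >= fgThreshold) labels[r][c] = 1;      // strong evidence: foreground
                else if (combined <= bgThreshold) labels[r][c] = 0; // weak everywhere: background
                else labels[r][c] = -1;                             // leave for the delineation step
            }
        return labels;
    }
}
```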
Proceedings of SPIE | 2015
Ryan Christensen; Alexandra Bokinsky; Anthony Santella; Yicong Wu; Javier Marquina; Ismar Kovacevic; Abhishek Kumar; Peter W. Winter; Evan S. McCreedy; William A. Mohler; Zhirong Bao; Daniel A. Colón-Ramos; Hari Shroff
How an entire nervous system develops remains mysterious. We have developed a light-sheet microscope system to examine neurodevelopment in C. elegans embryos. Our system creates overlapping light sheets from two orthogonally positioned objectives, enabling imaging from the first cell division to hatching (~14 hours) with 350 nm isotropic resolution. We have also developed computer algorithms to computationally straighten nematode embryos, facilitating data comparison and combination from multiple animals. We plan to use these tools to create an atlas showing the position and morphology of all neurons in the developing embryo.
Journal of Medical Imaging | 2017
Ruida Cheng; Holger R. Roth; Nathan Lay; Le Lu; Baris Turkbey; William Gandler; Evan S. McCreedy; Thomas J. Pohida; Peter A. Pinto; Peter L. Choyke; Matthew J. McAuliffe; Ronald M. Summers
Accurate automatic segmentation of the prostate in magnetic resonance images (MRI) is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and similar signal intensity of tissues around the prostate boundary inhibit traditional segmentation methods from achieving high accuracy. We investigate both patch-based and holistic (image-to-image) deep-learning methods for segmentation of the prostate. First, we introduce a patch-based convolutional network that aims to refine the prostate contour given an initialization. Second, we propose a method for end-to-end prostate segmentation by integrating holistically nested edge detection with fully convolutional networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 250 patients in fivefold cross-validation. The proposed enhanced HNN model achieves a mean ± standard deviation Dice similarity coefficient (DSC) of 89.77% ± 3.29% and a mean Jaccard similarity coefficient (IoU) of 81.59% ± 5.18%, computed without trimming any end slices. The proposed holistic model significantly (p < 0.001) outperforms a patch-based AlexNet model by 9% in DSC and 13% in IoU. Overall, the method achieves state-of-the-art performance compared with other MRI prostate segmentation methods in the literature.
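For any single pair of masks, the two overlap metrics reported here are related by an exact identity, IoU = DSC / (2 − DSC); the identity holds per case, not for the reported means, since the mean of a ratio is not the ratio of means. A small helper illustrating the conversion:

```java
/** Per-case conversion between Dice (DSC) and Jaccard (IoU) overlap scores. */
public class OverlapMetrics {
    public static double iouFromDice(double dice) { return dice / (2.0 - dice); }
    public static double diceFromIou(double iou)  { return 2.0 * iou / (1.0 + iou); }
}
```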
2010 Biomedical Sciences and Engineering Conference | 2010
Kelsie Covington; Evan S. McCreedy; Min Chen; Aaron Carass; Nicole Aucoin; Bennett A. Landman
Clinical research with medical imaging typically involves large-scale data analysis with interdependent software toolsets tied together in a processing workflow. Numerous, complementary platforms are available, but these are not readily compatible in terms of workflows or data formats. Both image scientists and clinical investigators could benefit from using the framework that is the most natural fit for the specific problem at hand, but pragmatic choices often dictate that a compromise platform is used for collaboration. Manual merging of platforms through carefully tuned scripts has been effective, but it is exceptionally time-consuming and is not feasible for large-scale integration efforts. Hence, the benefits of innovation are constrained by platform dependence. Removing this constraint via integration of algorithms from one framework into another is the focus of this work. We propose and demonstrate a light-weight interface system to expose parameters across platforms and provide seamless integration. In this initial effort, we focus on four platforms: Medical Image Processing, Analysis, and Visualization (MIPAV), the Java Image Science Toolkit (JIST), command-line tools, and 3D Slicer. We explore three case studies: (1) providing a system for MIPAV to expose internal algorithms and utilize these algorithms within JIST, (2) exposing JIST modules through a self-documenting command-line interface for inclusion in scripting environments, and (3) detecting and using JIST modules in 3D Slicer. We review the challenges and opportunities for light-weight software integration both within a development language (e.g., Java in MIPAV and JIST) and across languages (e.g., C/C++ in 3D Slicer and shell in command-line tools).
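As a rough illustration of what a light-weight parameter-exposure contract could look like (the interface and method names are hypothetical, not the actual MIPAV/JIST plumbing), a wrapped algorithm could advertise its parameters as plain strings so another platform can discover, populate, and run it without compile-time coupling:

```java
import java.util.Map;

/** Hypothetical minimal contract for exposing an algorithm across platforms. */
public interface ExposedModule {
    /** Human-readable module name, e.g. for a self-documenting command line. */
    String name();

    /** Parameter names mapped to default values, all passed as strings across platforms. */
    Map<String, String> defaultParameters();

    /** Runs the wrapped algorithm with the supplied parameter values. */
    void run(Map<String, String> parameters) throws Exception;
}
```

A command-line front end could print usage text from defaultParameters(), while a host such as JIST or 3D Slicer could build its parameter GUI from the same map.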
Medical Imaging 2008: Visualization, Image-Guided Procedures, and Modeling | 2008
Ruida Cheng; Alexandra Bokinsky; Paul F. Hemler; Evan S. McCreedy; Matthew J. McAuliffe
In recent years, the number and utility of 3-D rendering frameworks have grown substantially. A quantitative and qualitative evaluation of the capabilities of a subset of these systems is important to determine the applicability of these methods to typical medical visualization tasks. The libraries evaluated in this paper include the Java3D Application Programming Interface (API), the Java OpenGL (Jogl) API, a multi-histogram software-based rendering method, and the WildMagic API. Volume renderer implementations using each of these frameworks were developed using the platform-independent Java programming language. Quantitative performance measurements (frames per second, memory usage) were used to evaluate the strengths and weaknesses of each implementation.
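A sketch of the kind of measurement harness such a comparison requires, with a hypothetical Renderer interface standing in for each evaluated API; frame rate is timed over repeated draws and heap growth is sampled from the JVM runtime:

```java
/** Sketch of a renderer benchmark: time repeated frame draws and sample heap usage. */
public class RenderBenchmark {
    /** Hypothetical stand-in for a volume renderer built on any of the evaluated APIs. */
    public interface Renderer { void renderFrame(); }

    public static void benchmark(Renderer renderer, int frames) {
        Runtime rt = Runtime.getRuntime();
        long startBytes = rt.totalMemory() - rt.freeMemory();
        long startNanos = System.nanoTime();
        for (int i = 0; i < frames; i++) {
            renderer.renderFrame();                       // draw one full volume-rendered frame
        }
        long elapsedNanos = System.nanoTime() - startNanos;
        long usedBytes = (rt.totalMemory() - rt.freeMemory()) - startBytes;
        double fps = frames / (elapsedNanos / 1e9);
        System.out.printf("frames per second: %.1f, approx. heap growth: %.1f MB%n",
                fps, usedBytes / (1024.0 * 1024.0));
    }
}
```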