Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where William Gandler is active.

Publication


Featured research published by William Gandler.


Computer-Based Medical Systems | 2001

Medical Image Processing, Analysis and Visualization in clinical research

Matthew J. McAuliffe; Francois M. Lalonde; Delia P. McGarry; William Gandler; Karl Csaky; Benes L. Trus

Imaging has become an essential component in many fields of medical and laboratory research and clinical practice. Biologists study cells and generate 3D confocal microscopy data sets; virologists generate 3D reconstructions of viruses from micrographs; radiologists identify and quantify tumors from MRI and CT scans; and neuroscientists detect regional metabolic brain activity from PET and functional MRI scans. Analysis of these diverse image types requires sophisticated computerized quantification and visualization tools. Until recently, 3D visualization of images and quantitative analysis could only be performed using expensive UNIX workstations and customized software. Today, much of the visualization and analysis can be performed on an inexpensive desktop computer equipped with the appropriate graphics hardware and software. This paper introduces an extensible, platform-independent, general-purpose image processing and visualization program specifically designed to meet the needs of an Internet-linked medical research community. The application, named MIPAV (Medical Image Processing, Analysis and Visualization), enables clinical and quantitative analysis of medical images over the Internet. Using MIPAV's standard user interface and analysis tools, researchers and clinicians at remote sites can easily share research data and analyses, thereby enhancing their ability to study, diagnose, monitor and treat medical disorders.


Journal of Neuroscience Methods | 2007

Volumetric Neuroimage Analysis Extensions for the MIPAV Software Package

Pierre Louis Bazin; Jennifer L. Cuzzocreo; Michael A. Yassa; William Gandler; Matthew J. McAuliffe; Susan Spear Bassett; Dzung L. Pham

We describe a new collection of publicly available software tools for performing quantitative neuroimage analysis. The tools perform semi-automatic brain extraction, tissue classification, Talairach alignment, and atlas-based measurements within a user-friendly graphical environment. They are implemented as plug-ins for MIPAV, a freely available medical image processing software package from the National Institutes of Health. Because the plug-ins and MIPAV are implemented in Java, both can be utilized on nearly any operating system platform. In addition to the software plug-ins, we have also released a digital version of the Talairach atlas that can be used to perform regional volumetric analyses. Several studies are conducted applying the new tools to simulated and real neuroimaging data sets.
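The atlas-based regional measurement described above reduces, at its core, to counting the voxels assigned to each atlas label and converting counts to physical volume. The sketch below illustrates that general technique in pure Python; it is not MIPAV's actual plug-in API, and the label values are hypothetical.

```python
# Minimal sketch of atlas-based volumetry: count voxels per atlas label
# and convert counts to physical volume (voxel count x voxel size).
from collections import Counter

def regional_volumes(label_voxels, voxel_volume_mm3):
    """label_voxels: iterable of integer atlas labels, one per voxel
    (label 0 = background). Returns {label: volume in mm^3}."""
    counts = Counter(v for v in label_voxels if v != 0)
    return {label: n * voxel_volume_mm3 for label, n in counts.items()}

# Example: a tiny flattened "volume" with two labeled regions
# and 1 mm isotropic voxels.
labels = [0, 0, 7, 7, 7, 12, 12, 0]
print(regional_volumes(labels, 1.0))  # {7: 3.0, 12: 2.0}
```

In practice the label image comes from registering the digital Talairach atlas to the subject scan, and voxel size is read from the image header rather than assumed.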


Nature Protocols | 2014

Dual-view plane illumination microscopy for rapid and spatially isotropic imaging

Abhishek Kumar; Yicong Wu; Ryan Christensen; Panagiotis Chandris; William Gandler; Evan S. McCreedy; Alexandra Bokinsky; Daniel A. Colón-Ramos; Zhirong Bao; Matthew J. McAuliffe; Gary Rondeau; Hari Shroff

We describe the construction and use of a compact dual-view inverted selective plane illumination microscope (diSPIM) for time-lapse volumetric (4D) imaging of living samples at subcellular resolution. Our protocol enables a biologist with some prior microscopy experience to assemble a diSPIM from commercially available parts, to align optics and test system performance, to prepare samples, and to control hardware and data processing with our software. Unlike existing light sheet microscopy protocols, our method does not require the sample to be embedded in agarose; instead, samples are prepared conventionally on glass coverslips. Tissue culture cells and Caenorhabditis elegans embryos are used as examples in this protocol; successful implementation of the protocol results in isotropic resolution and acquisition speeds of up to several volumes per second on these samples. Assembling and verifying diSPIM performance takes ∼6 d, sample preparation and data acquisition take up to 5 d, and postprocessing takes 3–8 h, depending on the size of the data.


Medical Imaging 2005: Image Processing | 2005

Free Software Tools for Atlas-based Volumetric Neuroimage Analysis

Pierre-Louis Bazin; Dzung L. Pham; William Gandler; Matthew J. McAuliffe

We describe new and freely available software tools for measuring volumes in subregions of the brain. The method is fast, flexible, and employs well-studied techniques based on the Talairach-Tournoux atlas. The software tools are released as plug-ins for MIPAV, a freely available and user-friendly image analysis software package developed by the National Institutes of Health. Our software tools include a digital Talairach atlas that consists of labels for 148 different substructures of the brain at various scales.


Proceedings of SPIE | 2016

Active appearance model and deep learning for more accurate prostate segmentation on MRI

Ruida Cheng; Holger R. Roth; Le Lu; Shijun Wang; Baris Turkbey; William Gandler; Evan S. McCreedy; Harsh K. Agarwal; Peter L. Choyke; Ronald M. Summers; Matthew J. McAuliffe

Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient variability in prostate shape and texture, and the lack of a clear prostate boundary, particularly at the apex and base. We propose a supervised machine learning model that combines an adaptive atlas-based Active Appearance Model (AAM) with a deep learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The combined AAM and deep learning method achieves a mean Dice Similarity Coefficient (DSC) of 0.925 on whole 3D MR images of the prostate using axial cross-sections.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2014

Atlas based AAM and SVM model for fully automatic MRI prostate segmentation

Ruida Cheng; Baris Turkbey; William Gandler; Harsh K. Agarwal; Vijay P. Shah; Alexandra Bokinsky; Evan S. McCreedy; Shijun Wang; Sandeep Sankineni; Marcelino Bernardo; Thomas J. Pohida; Peter L. Choyke; Matthew J. McAuliffe

Automatic prostate segmentation in MR images is a challenging task due to inter-patient prostate shape and texture variability, and the lack of a clear prostate boundary. We propose a supervised learning framework that combines the atlas based AAM and SVM model to achieve a relatively high segmentation result of the prostate boundary. The performance of the segmentation is evaluated with cross validation on 40 MR image datasets, yielding an average segmentation accuracy near 90%.


Journal of Medical Imaging | 2017

Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks

Ruida Cheng; Holger R. Roth; Nathan Lay; Le Lu; Baris Turkbey; William Gandler; Evan S. McCreedy; Thomas J. Pohida; Peter A. Pinto; Peter L. Choyke; Matthew J. McAuliffe; Ronald M. Summers

Accurate automatic segmentation of the prostate in magnetic resonance images (MRI) is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and the similar signal intensity of tissues around the prostate boundary inhibit traditional segmentation methods from achieving high accuracy. We investigate both patch-based and holistic (image-to-image) deep-learning methods for segmentation of the prostate. First, we introduce a patch-based convolutional network that refines an initial prostate contour. Second, we propose a method for end-to-end prostate segmentation by integrating holistically nested edge detection with fully convolutional networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 250 patients in fivefold cross-validation. The proposed enhanced HNN model achieves a mean ± standard deviation Dice similarity coefficient (DSC) of 89.77% ± 3.29% and a mean Jaccard similarity coefficient (IoU) of 81.59% ± 5.18%, computed without trimming any end slices. The proposed holistic model significantly (p<0.001) outperforms a patch-based AlexNet model by 9% in DSC and 13% in IoU. Overall, the method achieves state-of-the-art performance as compared with other MRI prostate segmentation methods in the literature.
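The Dice and Jaccard scores reported throughout these segmentation papers are standard overlap metrics between an automatic and a manual (reference) mask: Dice = 2|A∩B| / (|A| + |B|) and Jaccard (IoU) = |A∩B| / |A∪B|. The sketch below computes them on toy masks represented as sets of voxel coordinates; it illustrates the metric definitions, not the authors' evaluation code.

```python
# Overlap metrics between two binary segmentation masks,
# represented as sets of voxel coordinates.

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard similarity coefficient (IoU): |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b)

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}    # automatic segmentation
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}  # expert manual reference
print(dice(auto, manual))     # 0.75
print(jaccard(auto, manual))  # 0.6
```

Dice is always at least as large as Jaccard for the same pair of masks, which is why the DSC figures above (≈89%) sit higher than the corresponding IoU figures (≈81%).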


Proceedings of SPIE | 2017

Automatic MR prostate segmentation by deep learning with holistically-nested networks

Ruida Cheng; Holger R. Roth; Nathan Lay; Le Lu; Baris Turkbey; William Gandler; Evan S. McCreedy; Peter L. Choyke; Ronald M. Summers; Matthew J. McAuliffe

Accurate automatic prostate magnetic resonance image (MRI) segmentation is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and similar signal intensity tissues around the prostate boundary inhibit traditional segmentation methods from achieving high accuracy. The proposed method performs end-to-end segmentation by integrating holistically nested edge detection with fully convolutional neural networks. Holistically-nested networks (HNN) automatically learn the hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 247 patients in 5-fold cross-validation. We achieve a mean Dice Similarity Coefficient of 88.70% and a mean Jaccard Similarity Coefficient of 80.29% without trimming any erroneous contours at apex and base.


Proceedings of SPIE | 2013

2D registration guided models for semi-automatic MRI prostate segmentation

Ruida Cheng; Baris Turkbey; Justin Senseney; Marcelino Bernardo; Alexandra Bokinsky; William Gandler; Evan S. McCreedy; Thomas J. Pohida; Peter L. Choyke; Matthew J. McAuliffe

Accurate segmentation of prostate magnetic resonance images (MRI) is a challenging task due to the variable anatomical structure of the prostate. In this work, two semi-automatic techniques for segmentation of T2-weighted MRI images of the prostate are presented. Both models are based on 2D registration that changes shape to fit the prostate boundary between adjacent slices. The first model relies entirely on registration to segment the prostate. The second model applies Fuzzy C-means and morphology filters on top of the registration in order to refine the prostate boundary. Key to the success of the two models is the careful initialization of the prostate contours, which requires specifying three Volume of Interest (VOI) contours, one on each of the axial, sagittal, and coronal images. A fully automatic segmentation algorithm then generates the final results from the three images. The algorithm performance is evaluated with 45 MR image datasets. VOI volume, 3D surface volume, and VOI boundary masks are used to quantify the segmentation accuracy between the semi-automatic and expert manual segmentations. Both models achieve an average segmentation accuracy of 90%. The proposed registration-guided segmentation model has been generalized to segment a wide range of T2-weighted MRI prostate images.


International Symposium on Biomedical Imaging | 2017

Deep learning with orthogonal volumetric HED segmentation and 3D surface reconstruction model of prostate MRI

Ruida Cheng; Nathan Lay; Francesca Mertan; Baris Turkbey; Holger R. Roth; Le Lu; William Gandler; Evan S. McCreedy; Thomas J. Pohida; Peter L. Choyke; Matthew J. McAuliffe; Ronald M. Summers

Automatic MR whole prostate segmentation is a challenging task. Recent approaches have attempted to harness the capabilities of deep learning for MR prostate segmentation to tackle pixel-level labeling tasks. Patch-based and hierarchical features-based deep CNN models were used to delineate the prostate boundary. To further investigate this problem, we introduce a Holistically-Nested Edge Detector (HED) MRI prostate deep learning segmentation and 3D surface reconstruction model that facilitates the registration of multi-parametric MRI with histopathology slides from radical prostatectomy specimens and targeted biopsy specimens. This technique combines deep learning and computer-aided design to provide a generalized solution for constructing a high-resolution 3D prostate surface from MRI images in three orthogonal views. The performance of the segmentation is evaluated with MRI scans of 100 patients in 4-fold cross-validation. We achieve a mean Dice Similarity of 88.6%.
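Several of the evaluations above rely on k-fold cross-validation (fivefold on 250 patients, 5-fold on 247, 4-fold on 100): the patients are split into k disjoint folds, and each fold in turn serves as the unseen test set while the remaining folds train the model. A minimal pure-Python sketch of that splitting scheme, with made-up patient IDs:

```python
# k-fold cross-validation splitter: yields (train, test) ID lists,
# where each ID appears in exactly one test fold.

def kfold(ids, k):
    folds = [ids[i::k] for i in range(k)]  # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

patients = list(range(8))  # stand-ins for patient scan IDs
for train, test in kfold(patients, 4):
    print(sorted(test), "held out; trained on", len(train), "patients")
```

Reporting the mean metric over all k held-out folds, as these papers do, gives every patient exactly one appearance in a test set and makes the score an estimate of performance on unseen data.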

Collaboration


Dive into William Gandler's collaborations.

Top Co-Authors

Matthew J. McAuliffe | National Institutes of Health
Evan S. McCreedy | National Institutes of Health
Ruida Cheng | National Institutes of Health
Baris Turkbey | Science Applications International Corporation
Peter L. Choyke | National Institutes of Health
Alexandra Bokinsky | Center for Information Technology
Thomas J. Pohida | Center for Information Technology
Le Lu | National Institutes of Health
Ronald M. Summers | National Institutes of Health