
Publication


Featured research published by William M. Wells.


International Journal of Computer Vision | 1997

Alignment by Maximization of Mutual Information

Paul A. Viola; William M. Wells

A new information-theoretic approach is presented for finding the pose of an object in an image. The technique does not require information about the surface properties of the object, besides its shape, and is robust with respect to variations of illumination. In our derivation few assumptions are made about the nature of the imaging process. As a result the algorithms are quite general and may foreseeably be used in a wide variety of imaging situations. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images, aligning a complex 3D object model to real scenes including clutter and occlusion, tracking a human head in a video sequence and aligning a view-based 2D object model to real images. The method is based on a formulation of the mutual information between the model and the image. As applied here the technique is intensity-based, rather than feature-based. It works well in domains where edge or gradient-magnitude based methods have difficulty, yet it is more robust than traditional correlation. Additionally, it has an efficient implementation that is based on stochastic approximation.
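The mutual information objective can be sketched with a simple joint-histogram estimator. Note this is only an illustration: the paper itself uses Parzen-window density estimates and stochastic approximation rather than histograms, and the `mutual_information` helper below is a hypothetical name.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based estimate of mutual information between two
    intensity images: I(A;B) = sum p(a,b) log[p(a,b) / (p(a)p(b))]."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()              # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of a
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of b
    nz = p_ab > 0                           # avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# An image shares more information with itself than with independent noise,
# which is the signal a registration algorithm climbs toward.
print(mutual_information(img, img), mutual_information(img, rng.random((64, 64))))
```

A histogram estimator is the crudest density model that makes the idea concrete; the Parzen-window estimator in the paper additionally yields the smooth gradients needed for stochastic gradient ascent.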


Medical Image Analysis | 1996

Multi-modal volume registration by maximization of mutual information

William M. Wells; Paul A. Viola; Hideki Atsumi; Shin Nakajima; Ron Kikinis

A new information-theoretic approach is presented for finding the registration of volumetric medical images of differing modalities. Registration is achieved by adjustment of the relative position and orientation until the mutual information between the images is maximized. In our derivation of the registration procedure, few assumptions are made about the nature of the imaging process. As a result the algorithms are quite general and can foreseeably be used with a wide variety of imaging devices. This approach works directly with image data; no pre-processing or segmentation is required. This technique is, however, more flexible and robust than other intensity-based techniques like correlation. Additionally, it has an efficient implementation that is based on stochastic approximation. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images with computed tomography (CT) images, and with positron-emission tomography (PET) images. Surgical applications of the registration method are described.
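As a toy illustration of "adjusting the relative position until mutual information is maximized," the sketch below brute-forces integer translations. The paper instead optimizes continuous position and orientation by stochastic approximation; `register_shift` and the exhaustive search are illustrative stand-ins.

```python
import numpy as np

def mi(a, b, bins=32):
    """Histogram-based mutual information (illustrative estimator only)."""
    j, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = j / j.sum()
    pa, pb = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())

def register_shift(fixed, moving, max_shift=5):
    """Search integer translations, keeping the one that maximizes MI."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            m = mi(fixed, np.roll(moving, (dy, dx), axis=(0, 1)))
            if m > best_mi:
                best, best_mi = (dy, dx), m
    return best

rng = np.random.default_rng(1)
vol = rng.random((48, 48))
moved = np.roll(vol, (3, -2), axis=(0, 1))   # simulate a known misalignment
print(register_shift(vol, moved))            # recovers the inverse shift
```

The key property, and the reason MI works across modalities, is that the objective never compares intensities directly; it only rewards statistical dependence between the two images.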


International Conference on Computer Vision | 1995

Alignment by maximization of mutual information

Paul A. Viola; William M. Wells

A new information-theoretic approach is presented for finding the pose of an object in an image. The technique does not require information about the surface properties of the object, besides its shape, and is robust with respect to variations of illumination. In our derivation, few assumptions are made about the nature of the imaging process. As a result, the algorithms are quite general and can foreseeably be used in a wide variety of imaging situations. Experiments are presented that demonstrate the approach in registering magnetic resonance images, aligning a complex 3D object model to real scenes including clutter and occlusion, tracking a human head in a video sequence and aligning a view-based 2D object model to real images. The method is based on a formulation of the mutual information between the model and the image. As applied in this paper, the technique is intensity-based, rather than feature-based. It works well in domains where edge or gradient-magnitude based methods have difficulty, yet it is more robust than traditional correlation. Additionally, it has an efficient implementation that is based on stochastic approximation.


IEEE Transactions on Medical Imaging | 2004

Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation

Simon K. Warfield; Kelly H. Zou; William M. Wells

Characterizing the performance of image segmentation approaches has been a persistent challenge. Performance analysis is important since segmentation algorithms often have limited accuracy and precision. Interactive drawing of the desired segmentation by human raters has often been the only acceptable approach, and yet suffers from intra-rater and inter-rater variability. Automated algorithms have been sought in order to remove the variability introduced by raters, but such algorithms must be assessed to ensure they are suitable for the task. The performance of raters (human or algorithmic) generating segmentations of medical images has been difficult to quantify because of the difficulty of obtaining or estimating a known true segmentation for clinical data. Although physical and digital phantoms can be constructed for which ground truth is known or readily estimated, such phantoms do not fully reflect clinical images due to the difficulty of constructing phantoms which reproduce the full range of imaging characteristics and normal and pathological anatomical variability observed in clinical data. Comparison to a collection of segmentations by raters is an attractive alternative since it can be carried out directly on the relevant clinical imaging data. However, the most appropriate measure or set of measures with which to compare such segmentations has not been clarified and several measures are used in practice. We present here an expectation-maximization algorithm for simultaneous truth and performance level estimation (STAPLE). The algorithm considers a collection of segmentations and computes a probabilistic estimate of the true segmentation and a measure of the performance level represented by each segmentation. The source of each segmentation in the collection may be an appropriately trained human rater or raters, or may be an automated segmentation algorithm. The probabilistic estimate of the true segmentation is formed by estimating an optimal combination of the segmentations, weighting each segmentation depending upon the estimated performance level, and incorporating a prior model for the spatial distribution of structures being segmented as well as spatial homogeneity constraints. STAPLE is straightforward to apply to clinical imaging data, it readily enables assessment of the performance of an automated image segmentation algorithm, and enables direct comparison of human rater and algorithm performance.
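The core EM iteration can be sketched for binary segmentations. This minimal version omits the spatial prior and homogeneity constraints described in the abstract, and all names (`staple`, the 0.9 initializations) are illustrative choices rather than the paper's.

```python
import numpy as np

def staple(D, prior=0.5, iters=50):
    """EM estimate of a hidden true binary segmentation and rater performance.

    D: (n_raters, n_voxels) array of 0/1 decisions.
    Returns (W, p, q): voxelwise foreground probabilities, plus each
    rater's estimated sensitivity p and specificity q.
    """
    p = np.full(D.shape[0], 0.9)   # initial sensitivities
    q = np.full(D.shape[0], 0.9)   # initial specificities
    for _ in range(iters):
        # E-step: posterior probability that each voxel is truly foreground,
        # combining every rater's vote weighted by its current performance.
        a = np.where(D == 1, p[:, None], 1 - p[:, None]).prod(axis=0)
        b = np.where(D == 1, 1 - q[:, None], q[:, None]).prod(axis=0)
        W = prior * a / (prior * a + (1 - prior) * b)
        # M-step: re-estimate each rater's sensitivity and specificity
        # against the current probabilistic truth.
        p = (W * D).sum(axis=1) / W.sum()
        q = ((1 - W) * (1 - D)).sum(axis=1) / (1 - W).sum()
    return W, p, q

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
raters = np.array([truth,                      # perfect rater
                   [1, 1, 1, 0, 0, 0, 0, 0],   # misses one foreground voxel
                   [1, 1, 1, 1, 1, 0, 0, 0]])  # one false positive
W, p, q = staple(raters)
print((W > 0.5).astype(int))   # consensus estimate of the true segmentation
```

Note how the two estimates bootstrap each other: a rater who disagrees with the emerging consensus is down-weighted, which in turn sharpens the consensus.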


International Conference on Computer Vision | 1996

Adaptive segmentation of MRI data

William M. Wells; W.E.L. Grimson; Ron Kikinis; Ferenc A. Jolesz

Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intra-scan and inter-scan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intra-scan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the EM algorithm leads to a fully automatic method that allows for more accurate segmentation of tissue types as well as better visualization of MRI data; the method has proven effective in a study that includes more than 1000 brain scans.
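The EM classification step can be illustrated with a plain two-class Gaussian mixture on voxel intensities. The adaptive segmentation described above additionally interleaves estimation of a smooth bias (inhomogeneity) field with this classification, which the sketch below omits; `em_two_class` is a hypothetical helper name.

```python
import numpy as np

def em_two_class(x, iters=50):
    """Two-class Gaussian mixture EM on a 1-D array of intensities."""
    mu = np.array([x.min(), x.max()])   # crude initialization at the extremes
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior tissue-class probability for every voxel
        lik = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
                / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update class weights, means and variances
        n = r.sum(axis=0)
        w = n / n.sum()
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return r.argmax(axis=1), mu

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(30, 2, 500),   # e.g. one tissue class
                    rng.normal(70, 2, 500)])  # e.g. a brighter tissue class
labels, mu = em_two_class(x)
print(np.sort(mu))   # recovered class means
```

In the full method, the residual between each voxel and its soft class mean drives the bias-field update, and the corrected intensities feed the next E-step; that coupling is what makes the segmentation "adaptive."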


IEEE Transactions on Medical Imaging | 2003

A shape-based approach to the segmentation of medical imagery using level sets

Andy Tsai; Anthony J. Yezzi; William M. Wells; Clare M. Tempany; D. Tucker; Ayres Fan; W.E.L. Grimson; Alan S. Willsky

We propose a shape-based approach to curve evolution for the segmentation of medical images containing known object types. In particular, motivated by the work of Leventon, Grimson, and Faugeras (2000), we derive a parametric model for an implicit representation of the segmenting curve by applying principal component analysis to a collection of signed distance representations of the training data. The parameters of this representation are then manipulated to minimize an objective function for segmentation. The resulting algorithm is able to handle multidimensional data, can deal with topological changes of the curve, is robust to noise and initial contour placements, and is computationally efficient. At the same time, it avoids the need for point correspondences during the training phase of the algorithm. We demonstrate this technique by applying it to two medical applications: two-dimensional segmentation of cardiac magnetic resonance imaging (MRI) and three-dimensional segmentation of prostate MRI.
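The implicit shape model can be sketched as PCA over signed distance maps. This toy example uses analytic circle distance maps in place of real, aligned training segmentations; everything here is illustrative, not the paper's pipeline.

```python
import numpy as np

yy, xx = np.mgrid[:32, :32]

def circle_sdf(r):
    """Analytic signed distance map of a circle centered at (16, 16):
    negative inside the shape, positive outside."""
    return np.sqrt((xx - 16.0) ** 2 + (yy - 16.0) ** 2) - r

# Training shapes embedded as signed distance maps (one row per shape)
X = np.stack([circle_sdf(r).ravel() for r in (6, 8, 10, 12)])
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
modes = Vt[:2]   # principal eigen-shapes of the training set

# A candidate segmentation is the zero level set of phi = mean + alpha @ modes;
# varying the low-dimensional coefficients alpha sweeps the curve through
# the learned shape space, so the optimizer never touches point correspondences.
phi = (mean + 10.0 * modes[0]).reshape(32, 32)
print((phi < 0).sum(), "voxels inside the candidate curve")
```

Because the curve is carried implicitly by `phi`, topological changes come for free, which is one of the advantages the abstract lists.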


Academic Radiology | 2004

Statistical validation of image segmentation quality based on a spatial overlap index.

Kelly H. Zou; Simon K. Warfield; Aditya Bharatha; Clare M. Tempany; Michael Kaus; Steven Haker; William M. Wells; Ferenc A. Jolesz; Ron Kikinis

RATIONALE AND OBJECTIVES: To examine a statistical validation method based on the spatial overlap between two sets of segmentations of the same anatomy.

MATERIALS AND METHODS: The Dice similarity coefficient (DSC) was used as a statistical validation metric to evaluate the performance of both the reproducibility of manual segmentations and the spatial overlap accuracy of automated probabilistic fractional segmentation of MR images, illustrated on two clinical examples. Example 1: 10 consecutive cases of prostate brachytherapy patients underwent both preoperative 1.5T and intraoperative 0.5T MR imaging. For each case, 5 repeated manual segmentations of the prostate peripheral zone were performed separately on preoperative and on intraoperative images. Example 2: A semi-automated probabilistic fractional segmentation algorithm was applied to MR imaging of 9 cases with 3 types of brain tumors. DSC values were computed and logit-transformed values were compared in the mean with the analysis of variance (ANOVA).

RESULTS: Example 1: The mean DSCs of 0.883 (range, 0.876-0.893) with 1.5T preoperative MRI and 0.838 (range, 0.819-0.852) with 0.5T intraoperative MRI (P < .001) were within and at the margin of the range of good reproducibility, respectively. Example 2: Wide ranges of DSC were observed in brain tumor segmentations: meningiomas (0.519-0.893), astrocytomas (0.487-0.972), and other mixed gliomas (0.490-0.899).

CONCLUSION: The DSC value is a simple and useful summary measure of spatial overlap, which can be applied to studies of reproducibility and accuracy in image segmentation. We observed generally satisfactory but variable validation results in two clinical applications. This metric may be adapted for similar validation tasks.
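The DSC itself is straightforward to compute; a minimal sketch follows, including the logit transform the study applies before comparing means (the `dice` helper name is illustrative).

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentations:
    DSC = 2|A ∩ B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def logit(d):
    """Logit transform used before ANOVA, mapping (0, 1) to the real line."""
    return np.log(d / (1.0 - d))

seg1 = np.array([0, 1, 1, 1, 0, 0])   # 3 foreground voxels
seg2 = np.array([0, 1, 1, 0, 0, 1])   # overlaps seg1 in 2 voxels
print(dice(seg1, seg2))               # 2*2 / (3+3) = 0.666...
```

The transform matters because DSC is bounded on [0, 1]; comparing logit-DSC in the mean keeps the ANOVA assumptions more plausible than comparing raw overlap fractions.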


Neurosurgery | 2001

Serial intraoperative magnetic resonance imaging of brain shift.

Arya Nabavi; Peter McL. Black; David T. Gering; Carl-Fredrik Westin; Vivek Mehta; Richard S. Pergolizzi; Mathieu Ferrant; Simon K. Warfield; Nobuhiko Hata; Richard B. Schwartz; William M. Wells; Ron Kikinis; Ferenc A. Jolesz

OBJECTIVE: A major shortcoming of image-guided navigational systems is the use of preoperatively acquired image data, which does not account for intraoperative changes in brain morphology. The occurrence of these surgically induced volumetric deformations ("brain shift") has been well established. Maximal measurements for surface and midline shifts have been reported. There has been no detailed analysis, however, of the changes that occur during surgery. The use of intraoperative magnetic resonance imaging provides a unique opportunity to obtain serial image data and characterize the time course of brain deformations during surgery.

METHODS: The vertically open intraoperative magnetic resonance imaging system (SignaSP, 0.5 T; GE Medical Systems, Milwaukee, WI) permits access to the surgical field and allows multiple intraoperative image updates without the need to move the patient. We developed volumetric display software (the 3D Slicer) that allows quantitative analysis of the degree and direction of brain shift. For 25 patients, four or more intraoperative volumetric image acquisitions were extensively evaluated.

RESULTS: Serial acquisitions allow comprehensive sequential descriptions of the direction and magnitude of intraoperative deformations. Brain shift occurs at various surgical stages and in different regions. Surface shift occurs throughout surgery and is mainly attributable to gravity. Subsurface shift occurs during resection and involves collapse of the resection cavity and intraparenchymal changes that are difficult to model.

CONCLUSION: Brain shift is a continuous dynamic process that evolves differently in distinct brain regions. Therefore, only serial imaging or continuous data acquisition can provide consistently accurate image guidance. Furthermore, only serial intraoperative magnetic resonance imaging provides an accurate basis for the computational analysis of brain deformations, which might lead to an understanding and eventual simulation of brain shift for intraoperative guidance.


IEEE Transactions on Medical Imaging | 1996

An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization

W.E.L. Grimson; Gil J. Ettinger; Steven J. White; Tomás Lozano-Pérez; William M. Wells; Ron Kikinis

There is a need for frameless guidance systems to help surgeons plan the exact location for incisions, to define the margins of tumors, and to precisely identify locations of neighboring critical structures. The authors have developed an automatic technique for registering clinical data, such as segmented magnetic resonance imaging (MRI) or computed tomography (CT) reconstructions, with any view of the patient on the operating table. The authors demonstrate the method on the specific example of neurosurgery. The method enables a visual mix of live video of the patient and the segmented three-dimensional (3-D) MRI or CT model. This supports enhanced reality techniques for planning and guiding neurosurgical procedures and allows clinicians to interactively view extracranial or intracranial structures nonintrusively. Extensions of the method include image guided biopsies, focused therapeutic procedures, and clinical studies involving change detection over time sequences of images.


Journal of Magnetic Resonance Imaging | 2001

An Integrated Visualization System for Surgical Planning and Guidance Using Image Fusion and an Open MR

David T. Gering; Arya Nabavi; Ron Kikinis; Noby Hata; Lauren J. O'Donnell; W. Eric L. Grimson; Ferenc A. Jolesz; Peter McL. Black; William M. Wells

A surgical guidance and visualization system is presented, which uniquely integrates capabilities for data analysis and on‐line interventional guidance into the setting of interventional MRI. Various pre‐operative scans (T1‐ and T2‐weighted MRI, MR angiography, and functional MRI (fMRI)) are fused and automatically aligned with the operating field of the interventional MR system. Both pre‐surgical and intra‐operative data may be segmented to generate three‐dimensional surface models of key anatomical and functional structures. Models are combined in a three‐dimensional scene along with reformatted slices that are driven by a tracked surgical device. Thus, pre‐operative data augments interventional imaging to expedite tissue characterization and precise localization and targeting. As the surgery progresses, and anatomical changes subsequently reduce the relevance of pre‐operative data, interventional data is refreshed for software navigation in true real time. The system has been applied in 45 neurosurgical cases and found to have beneficial utility for planning and guidance. J. Magn. Reson. Imaging 2001;13:967–975.

Collaboration


Dive into William M. Wells's collaborations.

Top Co-Authors

Ron Kikinis | Brigham and Women's Hospital
Simon K. Warfield | Boston Children's Hospital
Ferenc A. Jolesz | Brigham and Women's Hospital
W. Eric L. Grimson | Brigham and Women's Hospital
Alexandra J. Golby | Brigham and Women's Hospital
Matthew Toews | École de technologie supérieure
Clare M. Tempany | Brigham and Women's Hospital
Tina Kapur | Brigham and Women's Hospital
Andriy Fedorov | Brigham and Women's Hospital
Kilian M. Pohl | Massachusetts Institute of Technology