
Publication


Featured research published by Philip J. Edwards.


IEEE Transactions on Medical Imaging | 2003

Registration and tracking to integrate X-ray and MR images in an XMR Facility

Kawal S. Rhode; Derek L. G. Hill; Philip J. Edwards; John H. Hipwell; Daniel Rueckert; Gerardo I. Sanchez-Ortiz; Sanjeet Hegde; Vithuran Rahunathan; Reza Razavi

We describe a registration and tracking technique to integrate cardiac X-ray images and cardiac magnetic resonance (MR) images acquired from a combined X-ray and MR interventional suite (XMR). Optical tracking is used to determine the transformation matrices relating MR image coordinates and X-ray image coordinates. Calibration of X-ray projection geometry and tracking of the X-ray C-arm and table enable three-dimensional (3-D) reconstruction of vessel centerlines and catheters from bi-plane X-ray views. We can, therefore, combine single X-ray projection images with registered projection MR images from a volume acquisition, and we can also display 3-D reconstructions of catheters within a 3-D or multi-slice MR volume. Registration errors were assessed using phantom experiments. Errors in the combined projection images (two-dimensional target registration error - TRE) were found to be 2.4 to 4.2 mm, and the errors in the integrated volume representation (3-D TRE) were found to be 4.6 to 5.1 mm. These errors are clinically acceptable for alignment of images of the great vessels and the chambers of the heart. Results are shown for two patients. The first involves overlay of a catheter used for invasive pressure measurements on an MR volume that provides anatomical context. The second involves overlay of invasive electrode catheters (including a basket catheter) on a tagged MR volume in order to relate electrophysiology to myocardial motion in a patient with an arrhythmia. Visual assessment of these results suggests the errors were of a similar magnitude to those obtained in the phantom measurements.
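As a rough illustration of the coordinate chaining the paper describes, the Python sketch below composes tracked homogeneous transforms to map an MR-space point into the C-arm frame and projects it with a 3x4 matrix. All matrix values, frame names, and the target point are hypothetical placeholders, not the paper's calibration.

```python
# Minimal sketch: chain optically tracked transforms to relate MR and X-ray
# coordinates, then project into the X-ray image. All values are placeholders.
import numpy as np

def project(P, x):
    """Apply a 3x4 projection matrix to a 3-D point; return 2-D image coords."""
    u = P @ np.append(x, 1.0)
    return u[:2] / u[2]

T_carm_from_tracker = np.eye(4)   # from C-arm and table tracking (hypothetical)
T_tracker_from_mr = np.eye(4)     # from patient-to-MR registration (hypothetical)
T_carm_from_mr = T_carm_from_tracker @ T_tracker_from_mr

P = np.hstack([np.eye(3), np.zeros((3, 1))])   # toy projection geometry
p_mr = np.array([10.0, 5.0, 120.0])            # a target point in MR space
p_img = project(P, (T_carm_from_mr @ np.append(p_mr, 1.0))[:3])

# 2-D TRE: distance between the projected point and its known image location.
tre_2d = np.linalg.norm(p_img - np.array([0.0833, 0.0417]))
```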


Medical Image Analysis | 2008

Instantiation and registration of statistical shape models of the femur and pelvis using 3D ultrasound imaging

Dean C. Barratt; Carolyn S. K. Chan; Philip J. Edwards; Graeme P. Penney; Mike Slomczykowski; Timothy J. Carter; David J. Hawkes

Statistical shape modelling potentially provides a powerful tool for generating patient-specific, 3D representations of bony anatomy for computer-aided orthopaedic surgery (CAOS) without the need for a preoperative CT scan. Furthermore, freehand 3D ultrasound (US) provides a non-invasive method for digitising bone surfaces in the operating theatre that enables a much greater region to be sampled compared with conventional direct-contact (i.e., pointer-based) digitisation techniques. In this paper, we describe how these approaches can be combined to simultaneously generate and register a patient-specific model of the femur and pelvis to the patient during surgery. In our implementation, a statistical deformation model (SDM) was constructed for the femur and pelvis by performing a principal component analysis on the B-spline control points that parameterise the free-form deformations required to non-rigidly register a training set of CT scans to a carefully segmented template CT scan. The segmented template bone surface, represented by a triangulated surface mesh, is instantiated and registered to a cloud of US-derived surface points using an iterative scheme in which the weights corresponding to the first five principal modes of variation of the SDM are optimised in addition to the rigid-body parameters. The accuracy of the method was evaluated using clinically realistic data obtained on three intact human cadavers (three whole pelves and six femurs). For each bone, a high-resolution CT scan and rigid-body registration transformation, calculated using bone-implanted fiducial markers, served as the gold standard bone geometry and registration transformation, respectively. After aligning the final instantiated model and CT-derived surfaces using the iterative closest point (ICP) algorithm, the average root-mean-square distance between the surfaces was 3.5 mm over the whole bone and 3.7 mm in the region of surgical interest. The corresponding distances after aligning the surfaces using the marker-based registration transformation were 4.6 and 4.5 mm, respectively. We conclude that despite limitations on the regions of bone accessible using US imaging, this technique has potential as a cost-effective and non-invasive method to enable surgical navigation during CAOS procedures, without the additional radiation dose associated with performing a preoperative CT scan or intraoperative fluoroscopic imaging. However, further development is required to investigate errors using error measures relevant to specific surgical procedures.
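The instantiation step lends itself to a compact sketch: a shape is generated as the mean plus a weighted sum of principal modes, and its fit to the ultrasound points is scored by an RMS point-to-surface distance. This is a generic PCA point-distribution model with a nearest-vertex distance approximation, assumed here for illustration rather than taken from the paper's B-spline formulation.

```python
# Minimal sketch: instantiate a PCA shape model and score its fit against
# ultrasound-derived surface points. Nearest-vertex distance approximates the
# true point-to-surface distance; names and array shapes are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def instantiate(mean_shape, modes, weights):
    """mean_shape: (N, 3); modes: (K, N, 3); weights: (K,) -> shape (N, 3)."""
    return mean_shape + np.tensordot(weights, modes, axes=1)

def rms_point_to_surface(us_points, model_points):
    """RMS of distances from each US point to its nearest model vertex."""
    d, _ = cKDTree(model_points).query(us_points)
    return float(np.sqrt(np.mean(d ** 2)))

# In the scheme described above, the first five mode weights are optimised
# together with the six rigid-body parameters until this distance is minimal.
```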


IEEE Transactions on Medical Imaging | 2006

Self-calibrating 3D-ultrasound-based bone registration for minimally invasive orthopedic surgery

Dean C. Barratt; Graeme P. Penney; Carolyn S. K. Chan; Mike Slomczykowski; Timothy J. Carter; Philip J. Edwards; David J. Hawkes

Intraoperative freehand three-dimensional (3-D) ultrasound (3D-US) has been proposed as a noninvasive method for registering bones to a preoperative computed tomography image or computer-generated bone model during computer-aided orthopedic surgery (CAOS). In this technique, a US probe is tracked by a 3-D position sensor and acts as a percutaneous device for localizing the bone surface. However, variations in the acoustic properties of soft tissue, such as the average speed of sound, can introduce significant errors in the bone depth estimated from US images, which limits registration accuracy. We describe a new self-calibrating approach to US-based bone registration that addresses this problem, and demonstrate its application within a standard registration scheme. Using realistic US image data acquired from 6 femurs and 3 pelves of intact human cadavers, and accurate gold standard registration transformations calculated using bone-implanted fiducial markers, we show that self-calibrating registration is significantly more accurate than a standard method, yielding an average root mean squared target registration error of 1.6 mm. We conclude that self-calibrating registration results in significant improvements in registration accuracy for CAOS applications over conventional approaches where calibration parameters of the 3D-US system remain fixed to values determined using a preoperative phantom-based calibration.
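The core idea admits a short sketch: fold an ultrasound calibration parameter into the registration cost so it is optimised alongside the pose. Reducing the speed-of-sound correction to a single beam-axis scale factor is an assumption made here for brevity, not the paper's exact parameterisation.

```python
# Minimal sketch of self-calibrating registration: a speed-of-sound scale
# factor is optimised jointly with the six rigid-body parameters.
import numpy as np
from scipy.spatial.transform import Rotation

def cost(params, points_probe, dist_to_bone):
    """params = [tx, ty, tz, rx, ry, rz, scale]; dist_to_bone(pts) -> (N,)."""
    t, angles, scale = params[:3], params[3:6], params[6]
    pts = points_probe.copy()
    pts[:, 2] *= scale                          # rescale the depth (beam) axis
    R = Rotation.from_euler("xyz", angles).as_matrix()
    world = pts @ R.T + t                       # probe -> world coordinates
    return np.mean(dist_to_bone(world) ** 2)    # mean squared surface distance
```

A generic optimiser (e.g. scipy.optimize.minimize) can then search all seven parameters at once.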


Medical Image Analysis | 2014

Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge

Geert J. S. Litjens; Robert Toth; Wendy J. M. van de Ven; C.M.A. Hoeks; Sjoerd Kerkstra; Bram van Ginneken; Graham Vincent; Gwenael Guillard; Neil Birbeck; Jindang Zhang; Robin Strand; Filip Malmberg; Yangming Ou; Christos Davatzikos; Matthias Kirschner; Florian Jung; Jing Yuan; Wu Qiu; Qinquan Gao; Philip J. Edwards; Bianca Maan; Ferdinand van der Heijden; Soumya Ghose; Jhimli Mitra; Jason Dowling; Dean C. Barratt; Henkjan J. Huisman; Anant Madabhushi

Prostate MR image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. This is especially true for MR images, whose appearance, resolution and artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI 2012 conference. The challenge included 100 prostate MR cases from 4 different centers, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety of methods and implementations, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary- and volume-based metrics, which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with overall scores of 85.72 and 84.29. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations, with run times of 8 min and 3 s per case, respectively. Overall, approaches based on active appearance models seemed to outperform other approaches such as multi-atlas registration, in both accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation has not yet been obtained. All results are available online at http://promise12.grand-challenge.org/.
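For a flavour of how boundary and volume metrics can be folded into a single expert-referenced score, the sketch below computes a Dice overlap and rescales any metric so that a hypothetical perfect result maps to 100 and the human expert to 85; the exact PROMISE12 scoring formula may differ, so treat this as illustrative.

```python
# Minimal sketch: a volume-overlap metric plus an expert-referenced rescaling
# into a single score. The 85/100 anchoring is an illustrative convention.
import numpy as np

def dice(a, b):
    """Dice overlap of two boolean masks of equal shape."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def metric_to_score(value, expert_value, perfect_value):
    """Linearly map a metric so the expert scores 85 and perfection 100."""
    return 100.0 - 15.0 * (perfect_value - value) / (perfect_value - expert_value)

# Works for higher-is-better metrics (Dice, perfect = 1.0) and
# lower-is-better ones (boundary distance in mm, perfect = 0.0) alike.
```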


Journal of Image Guided Surgery | 1995

Augmentation of Reality Using an Operating Microscope for Otolaryngology and Neurosurgical Guidance

Philip J. Edwards; David J. Hawkes; Derek L. G. Hill; D. Jewell; R. Spink; Anthony J. Strong; Michael Gleeson

The operating microscope is an integral part of many neurosurgery and otolaryngology procedures; the surgeon often uses the microscopic view for a large portion of the operation. Information from preoperative radiological images is often viewed only on X-ray films. The surgeon then has the difficult task of relating this information to the appearance of the surgical view. Image guidance techniques attempt to relate these two sets of information by registering the patient in the operating room to preoperative images using locating devices. Conventionally, image data are presented on a computer monitor, which requires the surgeon to look away from the operative scene. We describe a guidance system, for procedures in which the operating microscope is used, which superimposes image-derived data upon the operative scene. We create a model of relevant structures (e.g., tumor volume, blood vessels, and nerves) from multimodality preoperative images. By calibrating microscope optics, registering the patient to image coordinates, and tracking the microscope and patient intraoperatively, we can generate stereo projections of the three-dimensional model and project them into the microscope eyepieces, allowing critical structures to be overlaid on the operative scene in the correct position. Measurements with a head phantom gave a root mean square (RMS) error of 1.08 mm, and the estimated error for a human volunteer is between 2 and 3 mm. Initial evaluation in the operating room was very promising.
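The stereo overlay step reduces to projecting each model point through separately calibrated left and right camera models. The pinhole intrinsics and baseline below are invented values for illustration; the paper's microscope calibration is more involved.

```python
# Minimal sketch: project one 3-D model point into left and right eyepiece
# views with pinhole camera models. Intrinsics and baseline are invented.
import numpy as np

def pinhole_project(K, T_cam_from_world, x_world):
    """K: 3x3 intrinsics; T: 4x4 world->camera; returns 2-D pixel coords."""
    xc = (T_cam_from_world @ np.append(x_world, 1.0))[:3]
    u = K @ xc
    return u[:2] / u[2]

K = np.array([[2000.0, 0.0, 512.0],
              [0.0, 2000.0, 384.0],
              [0.0, 0.0, 1.0]])
T_left = np.eye(4)
T_right = np.eye(4)
T_right[0, 3] = -24.0                    # hypothetical ~24 mm stereo baseline
p = np.array([0.0, 0.0, 250.0])          # a model point (e.g. tumour margin), mm
left_px = pinhole_project(K, T_left, p)
right_px = pinhole_project(K, T_right, p)
# The horizontal disparity between the two projections is what gives the
# surgeon stereoscopic depth for structures beneath the operative surface.
```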


Presence: Teleoperators & Virtual Environments | 2000

Stereo Augmented Reality in the Surgical Microscope

Andrew P. King; Philip J. Edwards; Calvin R. Maurer; Darryl A. de Cunha; Ronald P. Gaston; Matthew J. Clarkson; Derek L. G. Hill; David J. Hawkes; Michael R. Fenlon; Anthony J. Strong; Tim C. S. Cox; Michael Gleeson

This paper describes the MAGI (microscope-assisted guided interventions) augmented-reality system, which allows surgeons to view virtual features segmented from preoperative radiological images accurately overlaid in stereo in the optical path of a surgical microscope. The aim of the system is to enable the surgeon to see in the correct 3-D position the structures that are beneath the physical surface. The technical challenges involved are calibration, segmentation, registration, tracking, and visualization. This paper details our solutions to these problems. As it is difficult to make reliable quantitative assessments of the accuracy of augmented-reality systems, results are presented from a numerical simulation, and these show that the system has a theoretical overlay accuracy of better than 1 mm at the focal plane of the microscope. Implementations of the system have been tested on volunteers, phantoms, and seven patients in the operating room. Observations are consistent with this accuracy prediction.
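A numerical simulation of overlay accuracy can be as simple as Monte Carlo propagation of the individual error sources; the noise magnitudes below are invented for illustration and are not the values used in the paper.

```python
# Minimal sketch: Monte Carlo propagation of registration and tracking noise
# to a predicted overlay error at the focal plane. Magnitudes are invented.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.0, 0.0, 250.0])        # a point at the focal plane, mm

errors = []
for _ in range(10_000):
    reg_noise = rng.normal(0.0, 0.3, 3)     # registration error, mm (assumed)
    trk_noise = rng.normal(0.0, 0.2, 3)     # tracking jitter, mm (assumed)
    displaced = target + reg_noise + trk_noise
    errors.append(np.linalg.norm(displaced - target))

print(f"mean simulated overlay error: {np.mean(errors):.2f} mm")
```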


Medical Image Computing and Computer-Assisted Intervention | 1999

AcouStick: A Tracked A-Mode Ultrasonography System for Registration in Image-Guided Surgery

Calvin R. Maurer; Ronald P. Gaston; Derek L. G. Hill; Michael Gleeson; M. Graeme Taylor; Michael R. Fenlon; Philip J. Edwards; David J. Hawkes

In this paper, we describe a system for noninvasively determining bone surface points using an optically tracked A-mode ultrasound transducer. We develop and validate a calibration method; acquire cranial surface points for a skull phantom, three volunteers, and one patient; and register these points to surfaces extracted from CT images of the phantom and patient. Our results suggest that the bone surface point localization error of this system is less than 0.5 mm. The target registration error (TRE) of the cranial surface-based registration for the skull phantom was computed by using as a reference gold standard the point-based registration obtained with eight bone-implanted markers. The mean TRE for a 150-surface-point registration is 1.0 mm, and ranges between 1.0 and 1.7 mm for six 25-surface-point registrations. Our preliminary results suggest that accurate, noninvasive, image-to-physical registration of head images may be possible using an A-mode ultrasound-based system.
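The basic A-mode localisation step is compact enough to sketch: convert the round-trip echo time to a depth along the beam and map that point through the tracked probe pose. The beam-along-z convention and the calibration transform are assumptions for illustration.

```python
# Minimal sketch: A-mode time of flight -> bone-surface point in tracker
# coordinates. Beam direction and calibration are illustrative assumptions.
import numpy as np

SPEED_OF_SOUND = 1540.0    # m/s, a typical soft-tissue average (assumed)

def surface_point(tof_s, T_tracker_from_probe):
    """tof_s: round-trip echo time (s) -> 3-D point in tracker frame (mm)."""
    depth_mm = 0.5 * SPEED_OF_SOUND * tof_s * 1000.0   # halve: there and back
    p_probe = np.array([0.0, 0.0, depth_mm, 1.0])      # beam along probe z
    return (T_tracker_from_probe @ p_probe)[:3]
```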


IEEE Transactions on Medical Imaging | 2012

A Comprehensive Cardiac Motion Estimation Framework Using Both Untagged and 3-D Tagged MR Images Based on Nonrigid Registration

Wenzhe Shi; Xiahai Zhuang; Haiyan Wang; Simon G. Duckett; Duy V. N. Luong; Catalina Tobon-Gomez; Kai-Pin Tung; Philip J. Edwards; Kawal S. Rhode; Reza Razavi; Sebastien Ourselin; Daniel Rueckert

In this paper, we present a novel technique based on nonrigid image registration for myocardial motion estimation using both untagged and 3-D tagged MR images. The novel aspect of our technique is its simultaneous usage of complementary information from both untagged and 3-D tagged MR images. To estimate the motion within the myocardium, we register a sequence of tagged and untagged MR images during the cardiac cycle to a set of reference tagged and untagged MR images at end-diastole. The similarity measure is spatially weighted to maximize the utility of information from both images. In addition, the proposed approach integrates a valve plane tracker and adaptive incompressibility into the framework. We have evaluated the proposed approach on 12 subjects. Our results show a clear improvement in terms of accuracy compared to approaches that use either 3-D tagged or untagged MR image information alone. The relative error compared to manually tracked landmarks is less than 15% throughout the cardiac cycle. Finally, we demonstrate the automatic analysis of cardiac function from the myocardial deformation fields.
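The spatially weighted similarity measure can be summarised in one line: a voxelwise weight map decides how much the tagged and untagged terms each contribute. The formulation below is a generic stand-in, not the paper's exact measure.

```python
# Minimal sketch: blend tagged and untagged similarity terms with a voxelwise
# weight map (near 1 at the myocardium, near 0 elsewhere). A generic stand-in.
import numpy as np

def weighted_similarity(sim_tagged, sim_untagged, weight_map):
    """All inputs are arrays of identical shape; weight_map lies in [0, 1]."""
    return float(np.sum(weight_map * sim_tagged +
                        (1.0 - weight_map) * sim_untagged))
```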


Medical Imaging 2003: Image Processing | 2003

Integration of ultrasound-based registration with statistical shape models for computer-assisted orthopaedic surgery

Carolyn S. K. Chan; Philip J. Edwards; David J. Hawkes

We present the first use of ultrasound to instantiate and register a statistical shape model of bony structures. Our aim is to provide accurate image-guided total hip replacement without the need for a preoperative computed tomography (CT) scan. We propose novel methods to determine the location of the bone surface intraoperatively using percutaneous ultrasound and, with the aid of a statistical shape model, reconstruct a complete three-dimensional (3D) model of relevant anatomy. The centre of the femoral head is used as a further constraint to improve accuracy in regions not accessible to ultrasound. CT scans of the femur from a database were aligned to one target CT scan using a non-rigid registration algorithm. The femur surface from the target scan was then propagated to each of the subjects and used to produce a statistical shape model. A cadaveric femur not used in the shape model construction was scanned using freehand 3D ultrasound. The iterative closest point (ICP) algorithm was used to match points corresponding to the bone surface derived from ultrasound with the statistical bone surface model. We used the mean shape and the first five modes of variation of the shape model. The resulting root mean square (RMS) point-to-surface distance from ICP was minimised to provide the best fit of the model to the ultrasound data.
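Complementing the instantiation sketch given for the 2008 Medical Image Analysis paper above, one rigid ICP iteration has a closed-form solution; the Kabsch/SVD solver below is a standard textbook method, shown for illustration rather than quoted from the paper.

```python
# Minimal sketch: one ICP step. Match each US point to its nearest model
# point, then solve the rigid update in closed form (Kabsch via SVD).
import numpy as np
from scipy.spatial import cKDTree

def icp_step(us_points, model_points):
    """Return (R, t) best aligning us_points to their closest model points."""
    _, idx = cKDTree(model_points).query(us_points)
    matched = model_points[idx]
    mu_p, mu_m = us_points.mean(axis=0), matched.mean(axis=0)
    H = (us_points - mu_p).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_m - R @ mu_p
```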


Stereotactic and Functional Neurosurgery | 1999

A System for Microscope-Assisted Guided Interventions

Andrew P. King; Philip J. Edwards; Calvin R. Maurer; Darryl A. de Cunha; David J. Hawkes; Derek L. G. Hill; Ronald P. Gaston; Michael R. Fenlon; Anthony J. Strong; C.L. Chandler; Aurelia Richards; Michael Gleeson

We present a system for surgical navigation using stereo overlays in the operating microscope aligned to the operative scene. This augmented reality system provides 3D information about nearby structures and offers a significant advancement over pointer-based guidance, which provides only the location of one point and requires the surgeon to look away from the operative scene. With a previous version of this system, we demonstrated feasibility, but it became clear that to achieve convincing guidance through the magnified microscope view, a very high alignment accuracy was required. We have made progress with several aspects of the system, including automated calibration, error simulation, bone-implanted fiducials and a dental attachment for tracking. We have performed experiments to establish the visual display parameters required to perceive overlaid structures beneath the operative surface. Easy perception of real and virtual structures with the correct transparency has been demonstrated in a laboratory and through the microscope. The result is a system with a predicted accuracy of 0.9 mm and phantom errors of 0.5 mm. In clinical practice errors are 0.5–1.5 mm, rising to 2–4 mm when brain deformation occurs.
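The transparency finding lends itself to a one-function sketch: alpha-compositing the rendered virtual structure over the live view, with the alpha value standing in for the display parameters the paper investigates. This is an illustrative simplification.

```python
# Minimal sketch: alpha-composite a rendered virtual structure over the live
# operative view. The alpha value is an illustrative display parameter.
import numpy as np

def composite(scene_rgb, overlay_rgb, alpha=0.35):
    """Blend overlay into scene; both are HxWx3 float arrays in [0, 1]."""
    return (1.0 - alpha) * scene_rgb + alpha * overlay_rgb
```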

Collaboration


Dive into Philip J. Edwards's collaborations.

Top Co-Authors


David J. Hawkes

University College London


Michael Figl

Medical University of Vienna


Dean C. Barratt

University College London


Mingxing Hu

University College London
