Cliff F. Ruff
Guy's Hospital
Publications
Featured research published by Cliff F. Ruff.
Information Processing in Medical Imaging | 1993
Derek L. G. Hill; David J. Hawkes; Neil A. Harrison; Cliff F. Ruff
This paper describes two methods for automating registration of 3D medical images acquired from different modalities. One uses dispersion in an intensity based feature space as a measure of mis-registration, together with knowledge of imager characteristics. The other uses anatomical knowledge of proximity and containment between associated structures to modify a distance transform for registration. Pre-registered training images are used to customise the algorithms for specific applications. Using stochastic optimisation techniques, we automatically registered MR and CT images of the head from three patients using one training set. In each case, the accuracy of registration was comparable to that obtained by point landmark registration. We present initial results for the modified distance transform in the same clinical application, and in a new application to combine angiographic data with the surface of the brain derived from MR.
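The feature-space idea lends itself to a compact illustration. The sketch below, assuming two pre-resampled NumPy volumes, scores misalignment by the spread (second moment) of the joint intensity histogram; it is a simplified proxy for the dispersion measure the paper describes, not the authors' implementation.

```python
import numpy as np

def joint_histogram(a, b, bins=64):
    """Normalised 2D histogram of co-occurring intensities from two volumes."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    return h / h.sum()

def dispersion(a, b, bins=64):
    """Spread of the joint intensity distribution around its centroid.

    When the volumes are well aligned, co-occurring intensities cluster
    tightly and the dispersion is low; misregistration smears the clusters
    and raises it, so it can drive a stochastic optimiser.
    """
    p = joint_histogram(a, b, bins)
    i, j = np.nonzero(p)
    w = p[i, j]
    mi, mj = np.sum(w * i), np.sum(w * j)
    return np.sum(w * ((i - mi) ** 2 + (j - mj) ** 2))
```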
Computerized Medical Imaging and Graphics | 1993
Derek L. G. Hill; David J. Hawkes; Z. Hussain; S. E. M. Green; Cliff F. Ruff; Glynn P. Robinson
A method is presented for the accurate combination of magnetic resonance (MR) and computed tomographic (CT) images of the head. Our technique is based on user-identified 3D landmarks followed by data combination and display as adjacent slices, a single fused slice representation, colour overlay and three-dimensional (3D) rendered scenes. Validation with a point phantom and computer simulation has established the relationship of registration accuracy with point location accuracy, the number of points used and their spatial distribution. The technique is in clinical use in the planning of skull base surgery, transferring MR images acquired without a stereotaxic frame to stereotaxic space, and staging and planning therapy of nasopharyngeal tumours.
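The landmark step can be illustrated with the standard SVD (Procrustes) solution for a least-squares rigid transform, e.g. Arun et al. (1987). The sketch below assumes the corresponding 3D points are already paired; it is illustrative rather than the authors' exact implementation.

```python
import numpy as np

def rigid_from_landmarks(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding 3D landmark positions.
    """
    src_c = src - src.mean(axis=0)          # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t
```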
Image and Vision Computing | 1999
Cliff F. Ruff; S. W. Hughes; David J. Hawkes
In this article we present Point Distribution Models (PDMs) constructed from magnetic resonance (MR) scans of foetal livers and investigate their use in reconstructing 3D shapes from sparse data, as an aid to volume estimation. A solution to the model-to-data matching problem is presented, based on a hybrid Genetic Algorithm (GA). Amongst its genetic operators, the GA has elements that extend the general Iterative Closest Point (ICP) algorithm to include deformable shape parameters. Results from using the GA to estimate volumes under two sparse sampling schemes are presented. We show that the algorithm can estimate liver volumes in the range 10.26 to 28.84 cc with an accuracy of 0.17 +/- 4.44% when using only three sections through the liver volume.
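A minimal PDM sketch, assuming the training shapes are already aligned landmark sets: the model is the mean shape plus the leading eigenvectors of the covariance, and new shapes are instantiated from a small parameter vector. The GA/ICP matching stage is omitted.

```python
import numpy as np

def build_pdm(training_shapes, n_modes=5):
    """Build a Point Distribution Model from aligned training shapes.

    training_shapes: (S, 3N) array, each row a flattened set of N 3D
    landmarks already aligned to a common frame (an assumption here;
    the alignment step is not shown).
    """
    mean = training_shapes.mean(axis=0)
    cov = np.cov(training_shapes, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_modes]   # keep the largest modes
    return mean, vecs[:, order], vals[order]

def shape_from_params(mean, modes, b):
    """Instantiate a shape x = mean + P @ b from shape parameters b."""
    return mean + modes @ b
```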
International Conference on Functional Imaging and Modeling of the Heart | 2005
Irving Dindoyal; Tryphon Lambrou; J. Deng; Cliff F. Ruff; Alf D. Linney; Charles H. Rodeck; Andrew Todd-Pokropek
Segmentation of the fetal heart can facilitate 3D assessment of cardiac function and structure. Ultrasound acquisition typically results in drop-out artifacts of the chamber walls. This paper presents a level set deformable model that simultaneously segments all four cardiac chambers using region-based information. The segmented boundaries are automatically penalized against intersecting at walls with signal dropout. Root mean square errors of the perpendicular distances between the algorithm's delineation and manual tracings are within 7 pixels (<2 mm) in 2D and under 3 voxels (<4.5 mm) in 3D. The ejection fraction was determined from the 3D dataset. Future work will include further testing on additional datasets and validation on a phantom.
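The region-based evolution at the core of the method can be sketched with scikit-image's morphological Chan-Vese variant on a single slice; the coupled four-chamber formulation and the wall-dropout penalty are the paper's contribution and are not reproduced here.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_region(slice_2d, n_iter=200):
    """Single-region, region-based level set on one 2D ultrasound slice.

    Returns a binary mask. Normalisation to [0, 1] and the default
    checkerboard initialisation are simplifying choices of this sketch.
    """
    rng = slice_2d.max() - slice_2d.min()
    img = (slice_2d - slice_2d.min()) / (rng + 1e-8)
    return morphological_chan_vese(img, n_iter, smoothing=2)
```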
International Conference on Computer Vision | 1995
Jason Zhao; Alan C. F. Colchester; Christopher J. Henri; David J. Hawkes; Cliff F. Ruff
This paper describes two new methods of rendering multimodal images developed for a neurosurgical planning and guidance system (VISLAN). In our volume rendering technique we introduce a colour-dependent filtering mechanism that enhances the representation of objects and improves the visualisation of spatial relationships. To achieve a good compromise between rendering speed and image quality, surface rendering is divided into two processes, a fast surface voxel projection and a surface refining and shading process. By considering the reflections from voxels both near to and on a surface in shading calculations, renderings become less sensitive to small surface extraction errors. A scheme which intermixes the volume rendering for some objects and surface rendering for others in the same scene is also presented. We show examples to illustrate each method in the context of preoperative surgical planning and intraoperative guidance.
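Front-to-back compositing underlies the volume-rendering half of the scheme. A minimal sketch, assuming the samples along the ray have already been taken and the transfer function is supplied by the caller; the colour-dependent filtering described above would act inside that transfer function.

```python
import numpy as np

def composite_ray(samples, transfer):
    """Front-to-back alpha compositing of one ray through a volume.

    samples: 1D array of scalar values along the ray, front first.
    transfer: callable mapping a sample value to (rgb, alpha).
    """
    colour = np.zeros(3)
    alpha = 0.0
    for s in samples:
        rgb, a = transfer(s)
        colour += (1.0 - alpha) * a * np.asarray(rgb)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:          # early ray termination: ray is opaque
            break
    return colour, alpha
```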
Visualization in Biomedical Computing '92 | 1992
Derek L. G. Hill; S. M. Green; John E. Crossman; David J. Hawkes; Glynn P. Robinson; Cliff F. Ruff; Tim C. S. Cox; Anthony J. Strong; Michael Gleeson
This paper describes our methodology for combining and visualizing registered MR, CT, and angiographic images of the head. We register the individual datasets using the location of a number of user-identified anatomical point landmarks to derive the rigid body transformation between datasets. The combined images are displayed either as 2-D slices or as 3-D rendered scenes. Three independent observers performed a detailed assessment of the usefulness of the combined images in the planning of resection of skull base lesions in seven patients. We have shown that in all patients studied at least one of our observers obtained significant extra clinical information from the combined images, while all observers showed significantly increased confidence in the pre-operative surgical plan in all but one patient. Initial evaluation of the 3-D rendered displays showed that the size, shape, and extent of the tumors were better visualized, 3-D spatial relationships between structures were clarified, viewing the resection site in 3-D was very useful, and movie loops provided a very strong 3-D cue. An improved method of registering information from multiple imaging modalities is described and future directions for image combination and visualization are suggested.
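Registration accuracy of this kind is often summarised as a target registration error over points not used in the fit. A hypothetical helper, reusing the (R, t) convention from the earlier landmark sketch; this is not the authors' published protocol.

```python
import numpy as np

def target_registration_error(r, t, targets_src, targets_dst):
    """RMS distance between mapped and true target positions.

    r, t: rigid transform estimated from the fiducial landmarks.
    targets_src, targets_dst: (N, 3) independent target points that
    were held out of the transform estimation.
    """
    mapped = targets_src @ r.T + t
    return np.sqrt(np.mean(np.sum((mapped - targets_dst) ** 2, axis=1)))
```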
Archive | 1993
Cliff F. Ruff; Derek L. G. Hill; Glynn P. Robinson; David J. Hawkes
In this paper we present an application in which multimodal images are used to generate pseudo-3D scenes to assist in the planning and execution of skull base surgery. Image data from CT, MRI and MR Angiography are registered such that the relationship between their image coordinates is known. The images are then transformed into occupancy maps for the structures of interest. Commercially available rendering algorithms then require that either the data from the multiple imaging modalities be combined into a single volumetric dataset prior to rendering, or that the data from each modality is rendered separately and subsequently overlaid. We present an algorithm that allows us to bypass this last stage and leave the objects in their original datasets. The 3D scenes are rendered by casting a ray simultaneously through the multi-dimensional space represented by the individual datasets.
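The key idea, casting one ray through several registered datasets at once, can be sketched as a per-sample lookup across volumes. The 4x4 world-to-voxel matrices and nearest-neighbour sampling are simplifying assumptions of this sketch, not the paper's renderer.

```python
import numpy as np

def sample_multivolume(point_world, volumes):
    """Sample every registered dataset at one world-space ray position.

    volumes: list of (data, world_to_voxel) pairs, where world_to_voxel
    is a 4x4 homogeneous matrix obtained from the registration step.
    The renderer can then composite whichever objects' occupancy maps
    respond, without merging the datasets beforehand.
    """
    values = []
    p = np.append(point_world, 1.0)           # homogeneous coordinates
    for data, world_to_voxel in volumes:
        i, j, k = np.round((world_to_voxel @ p)[:3]).astype(int)
        inside = (0 <= i < data.shape[0] and 0 <= j < data.shape[1]
                  and 0 <= k < data.shape[2])
        values.append(data[i, j, k] if inside else 0.0)
    return values
```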
Radiology | 1994
Derek L. G. Hill; David J. Hawkes; Michael Gleeson; Tim C. S. Cox; Anthony J. Strong; W. L. Wong; Cliff F. Ruff; Neil Kitchen; D. G. T. Thomas; A. Sofat
Archive | 1995
David J. Hawkes; Cliff F. Ruff; Derek L. G. Hill; Colin Studholme; Philip J. Edwards; William Wong
Seminars in Interventional Radiology | 1995
David J. Hawkes; Cliff F. Ruff; Derek L. G. Hill; Colin Studholme; Philip J. Edwards; W. L. Wong; Anwar R. Padhani