Publication


Featured research published by Peter Mountney.


IEEE Signal Processing Magazine | 2010

Three-Dimensional Tissue Deformation Recovery and Tracking

Peter Mountney; Danail Stoyanov; Guang-Zhong Yang

Recent advances in surgical robotics have provided a platform for extending the current capabilities of minimally invasive surgery by incorporating both preoperative and intraoperative imaging data. In this tutorial article, we introduce techniques for in vivo three-dimensional (3-D) tissue deformation recovery and tracking based on laparoscopic or endoscopic images. These optically based techniques provide a unique opportunity for recovering surface deformation of the soft tissue without the need of additional instrumentation. They can therefore be easily incorporated into the existing surgical workflow. Technically, the problem formulation is challenging due to nonrigid deformation of the tissue and instrument interaction. Current approaches and future research directions in terms of intraoperative planning and adaptive surgical navigation are explained in detail.
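The stereo route to the surface recovery described above can be sketched as a standard disparity-to-depth triangulation for a rectified laparoscope pair. This is a generic pinhole-stereo sketch, not the paper's full deformation-tracking pipeline; the focal length, baseline, and principal point below are illustrative assumptions.

```python
import numpy as np

def triangulate(pts_left, pts_right, f, baseline, cx, cy):
    """Recover 3D points from a rectified stereo laparoscope pair.

    pts_left/pts_right: (N, 2) pixel coordinates of the same tissue
    features in the left and right images (rectified, so rows match).
    f: focal length in pixels; baseline: camera separation in mm;
    (cx, cy): principal point. Returns (N, 3) points in mm.
    """
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    disparity = pts_left[:, 0] - pts_right[:, 0]   # horizontal shift
    Z = f * baseline / disparity                   # depth from disparity
    X = (pts_left[:, 0] - cx) * Z / f
    Y = (pts_left[:, 1] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)

# A feature 40 px right of centre with 8 px disparity, f = 800 px,
# baseline = 5 mm:
pts3d = triangulate([[360, 240]], [[352, 240]], f=800, baseline=5.0,
                    cx=320, cy=240)   # → [[25., 0., 500.]]
```

Tracking such points over time, rather than triangulating once, is what turns this geometry into the deformation recovery the abstract describes.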


Medical Image Computing and Computer-Assisted Intervention | 2006

Simultaneous stereoscope localization and soft-tissue mapping for minimal invasive surgery

Peter Mountney; Danail Stoyanov; Andrew J. Davison; Guang-Zhong Yang

Minimally Invasive Surgery (MIS) has recognized benefits of reduced patient trauma and recovery time. In practice, MIS procedures present a number of challenges due to the loss of 3D vision and the narrow field-of-view provided by the camera. The restricted vision can make navigation and localization within the human body a challenging task. This paper presents a robust technique for building a repeatable long term 3D map of the scene whilst recovering the camera movement based on Simultaneous Localization and Mapping (SLAM). A sequential vision only approach is adopted which provides 6 DOF camera movement that exploits the available textured surfaces and reduces reliance on strong planar structures required for range finders. The method has been validated with a simulated data set using real MIS textures, as well as in vivo MIS video sequences. The results indicate the strength of the proposed algorithm under the complex reflectance properties of the scene, and the potential for real-time application for integrating with the existing MIS hardware.
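The core SLAM idea used here is that camera pose and landmark positions live in one joint state whose cross-correlations let landmark observations correct the camera estimate. A minimal 1-D Kalman-filter illustration of that joint state is sketched below; the numbers and the 1-D motion/measurement models are illustrative assumptions, not the paper's 6 DOF formulation.

```python
import numpy as np

x = np.array([0.0, 5.0])     # joint state: [camera position, landmark position]
P = np.diag([1.0, 4.0])      # joint covariance (correlations start at zero)

F = np.eye(2)                # landmark is static; camera moves via odometry
Q = np.diag([0.1, 0.0])      # process noise: only the camera is uncertain
H = np.array([[-1.0, 1.0]])  # measurement: camera-to-landmark range z = x_l - x_c
R = np.array([[0.05]])       # measurement noise

def slam_step(x, P, odom, z):
    # Predict: apply camera odometry and inflate camera uncertainty.
    x = x + np.array([odom, 0.0])
    P = F @ P @ F.T + Q
    # Update: fuse the observed range, correcting BOTH camera and landmark.
    y = z - (H @ x)                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + (K @ y)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Camera reports moving 1.0 unit; measured range (3.8) is shorter than
# predicted (4.0), so the filter nudges the estimates toward consistency.
x, P = slam_step(x, P, odom=1.0, z=np.array([3.8]))
```

The same predict/update cycle, run sequentially over frames with many landmarks, is what lets a vision-only system build a persistent map while tracking the stereoscope.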


Medical Image Computing and Computer-Assisted Intervention | 2010

Motion compensated SLAM for image guided surgery

Peter Mountney; Guang-Zhong Yang

The effectiveness and clinical benefits of image guided surgery are well established for procedures where there is manageable tissue motion. In minimally invasive cardiac, gastrointestinal, or abdominal surgery, large scale tissue deformation prohibits accurate registration and fusion of pre- and intraoperative data. Vision based techniques such as structure from motion and simultaneous localization and mapping are capable of recovering 3D structure and laparoscope motion. Current research in the area generally assumes the environment is static, which is difficult to satisfy in most surgical procedures. In this paper, a novel framework for simultaneous online estimation of laparoscopic camera motion and tissue deformation in a dynamic environment is proposed. The method only relies on images captured by the laparoscope to sequentially and incrementally generate a dynamic 3D map of tissue motion that can be co-registered with pre-operative data. The theoretical contribution of this paper is validated with both simulated and ex vivo data. The practical application of the technique is further demonstrated on in vivo procedures.
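One way to see what "motion compensated" adds over static SLAM is to model each tissue landmark as quasi-periodically moving (respiration, heartbeat) rather than fixed. As an illustrative sketch only, and not the paper's actual model, the displacement of one landmark can be fitted to a sinusoid at an assumed breathing frequency by linear least squares:

```python
import numpy as np

# Fit d(t) = a*sin(w t) + b*cos(w t) + c to a landmark's observed 1-D
# displacement, with the respiratory frequency w assumed known.
# All numbers below are synthetic and illustrative.
w = 2 * np.pi * 0.25                       # 15 breaths per minute
t = np.linspace(0, 8, 80)                  # 8 s of samples
d = 3.0 * np.sin(w * t) + 5.0              # synthetic displacement, mm

A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
a, b, c = np.linalg.lstsq(A, d, rcond=None)[0]
amplitude = np.hypot(a, b)                 # recovered motion amplitude, ~3.0 mm
```

Once a landmark's periodic component is estimated, its predicted position at any instant can feed back into the SLAM prediction step, which is the sense in which the map becomes dynamic.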


Medical Image Computing and Computer-Assisted Intervention | 2008

Soft Tissue Tracking for Minimally Invasive Surgery: Learning Local Deformation Online

Peter Mountney; Guang-Zhong Yang

Accurate estimation and tracking of dynamic tissue deformation is important for motion compensation, intra-operative surgical guidance, and navigation in minimally invasive surgery. Current approaches to tissue deformation tracking are generally based on machine vision techniques developed for natural scenes, which are not well suited to MIS because tissue deformation cannot be easily modeled with ad hoc representations. Such techniques do not deal well with inter-reflection changes and may be susceptible to instrument occlusion. The purpose of this paper is to present an online learning based feature tracking method suitable for in vivo applications. It makes no assumptions about the type of image transformations and visual characteristics, and is updated continuously as the tracking progresses. The performance of the algorithm is compared with existing tracking algorithms and validated on simulated, as well as in vivo cardiovascular and abdominal MIS data. The strength of the algorithm in dealing with drift and occlusion is validated, and the practical value of the method is demonstrated by decoupling cardiac and respiratory motion in robotic assisted surgery.
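The "learning local deformation online" idea can be illustrated with a much simpler mechanism than the paper's: a template tracker whose template is blended with each newly matched appearance, so it adapts as the tissue deforms. This sketch uses plain normalized cross-correlation and an assumed learning rate; all names and parameters are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def track_online(frames, init_pos, size=8, search=4, alpha=0.15):
    """Track a patch over frames, folding each matched appearance into
    the template (learning rate alpha) so it adapts to deformation and
    lighting change. Returns the (row, col) path."""
    y, x = init_pos
    template = frames[0][y:y+size, x:x+size].astype(float)
    path = [(y, x)]
    for frame in frames[1:]:
        best, best_pos = -2.0, (y, x)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                ny, nx = y + dy, x + dx
                patch = frame[ny:ny+size, nx:nx+size].astype(float)
                if patch.shape != template.shape:
                    continue
                score = ncc(template, patch)
                if score > best:
                    best, best_pos = score, (ny, nx)
        y, x = best_pos
        # Online update: blend the matched patch into the template.
        template = (1 - alpha) * template + alpha * frame[y:y+size, x:x+size]
        path.append((y, x))
    return path

# Synthetic test: a textured patch drifting diagonally across 3 frames.
rng = np.random.default_rng(0)
tex = rng.random((8, 8)) * 255
frames = []
for pos in (10, 11, 12):
    img = np.zeros((32, 32))
    img[pos:pos+8, pos:pos+8] = tex
    frames.append(img)

path = track_online(frames, (10, 10))   # → [(10, 10), (11, 11), (12, 12)]
```

The paper's contribution is far richer (it learns which transformations the patch actually undergoes), but the template-update loop captures why online adaptation resists drift under slow appearance change.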


Medical Image Computing and Computer-Assisted Intervention | 2007

A probabilistic framework for tracking deformable soft tissue in minimally invasive surgery

Peter Mountney; Benny Lo; Surapa Thiemjarus; Danail Stoyanov; Guang-Zhong Yang

The use of vision based algorithms in minimally invasive surgery has attracted significant attention in recent years due to its potential in providing in situ 3D tissue deformation recovery for intra-operative surgical guidance and robotic navigation. Thus far, a large number of feature descriptors have been proposed in computer vision but direct application of these techniques to minimally invasive surgery has shown significant problems due to free-form tissue deformation and varying visual appearances of surgical scenes. This paper evaluates the current state-of-the-art feature descriptors in computer vision and outlines their respective performance issues when used for deformation tracking. A novel probabilistic framework for selecting the most discriminative descriptors is presented and a Bayesian fusion method is used to boost the accuracy and temporal persistency of soft-tissue deformation tracking. The performance of the proposed method is evaluated with both simulated data with known ground truth, as well as in vivo video sequences recorded from robotic assisted MIS procedures.
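The Bayesian fusion step can be sketched as a naive-Bayes combination: each descriptor reports a likelihood that a candidate patch is the true match, and the product of likelihoods (times a prior) gives a posterior over candidates. The descriptor count and likelihood numbers below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse(likelihoods, prior=None):
    """likelihoods: (n_descriptors, n_candidates) array of
    P(observation_d | candidate is the true match). Returns the
    posterior over candidates, assuming the descriptors' observations
    are conditionally independent (naive Bayes)."""
    likelihoods = np.asarray(likelihoods, dtype=float)
    n = likelihoods.shape[1]
    prior = np.full(n, 1.0 / n) if prior is None else np.asarray(prior)
    post = prior * likelihoods.prod(axis=0)   # naive-Bayes product
    return post / post.sum()                  # normalize to a distribution

# Three candidate patches scored by two descriptors; each row holds one
# descriptor's likelihoods for the three candidates.
post = fuse([[0.7, 0.2, 0.1],
             [0.6, 0.3, 0.1]])
# Candidate 0 is favoured by both descriptors, so fusion sharpens the
# posterior toward it.
```

Selecting the most discriminative descriptors, as the paper proposes, amounts to choosing which rows of this likelihood matrix to trust for a given patch.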


International Conference on Robotics and Automation | 2009

A stereoscopic fibroscope for camera motion and 3D depth recovery during Minimally Invasive Surgery

David P. Noonan; Peter Mountney; Daniel S. Elson; Ara Darzi; Guang-Zhong Yang

This paper introduces a stereoscopic fibroscope imaging system for Minimally Invasive Surgery (MIS) and examines the feasibility of utilizing images transmitted from the distal fibroscope tip to a proximally mounted CCD camera to recover both camera motion and 3D scene information. Fibre image guides facilitate instrument miniaturization and have the advantage of being more easily integrated with articulated robotic instruments. In this paper, twin 10,000 pixel coherent fibre bundles (590µm diameter) have been integrated into a bespoke laparoscopic imaging instrument. Images captured by the system have been used to build a 3D map of the environment and reconstruct the laparoscope's 3D pose and motion using a SLAM algorithm. Detailed phantom validation of the system demonstrates its practical value and potential for flexible MIS instrument integration due to the small footprint and flexible nature of the fibre image guides.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2009

Dynamic view expansion for minimally invasive surgery using simultaneous localization and mapping

Peter Mountney; Guang-Zhong Yang

Navigation during Minimally Invasive Surgery (MIS) has recognized difficulties due to limited field-of-view, off-axis visualization and loss of direct 3D vision. This can cause visual-spatial disorientation when exploring complex in vivo structures. In this paper, we present an approach to dynamic view expansion which builds a 3D textured model of the MIS environment to facilitate in vivo navigation. With the proposed technique, no prior knowledge of the environment is required and the model is built sequentially while the laparoscope is moved. The method is validated on simulated data with known ground truth. Its potential clinical value is also demonstrated with in vivo experiments.
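A small geometric piece of view expansion is deciding how large the expanded canvas must be: map the corners of each past frame into the current view and take the bounding box of everything. The sketch below does exactly that with planar homographies; it is a 2D illustration under assumed numbers, whereas the paper builds a textured 3D model.

```python
import numpy as np

def expanded_canvas(w, h, homographies):
    """Given homographies mapping past frames into the current view,
    return the bounding box (x_min, y_min, x_max, y_max), in current
    image coordinates, that an expanded mosaic view must cover."""
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]],
                       dtype=float).T                 # 3x4, homogeneous
    pts = [corners[:2]]                               # current frame itself
    for H in homographies:
        warped = H @ corners
        pts.append(warped[:2] / warped[2])            # dehomogenize
    pts = np.hstack(pts)
    return (pts[0].min(), pts[1].min(), pts[0].max(), pts[1].max())

# A past frame translated 50 px left of the current 640x480 view
# enlarges the canvas on that side:
H_past = np.array([[1.0, 0, -50], [0, 1.0, 0], [0, 0, 1.0]])
box = expanded_canvas(640, 480, [H_past])   # → (-50.0, 0.0, 640.0, 480.0)
```

Rendering past imagery into the out-of-view part of that canvas, updated as the laparoscope moves, is what produces the expanded view without prior knowledge of the environment.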


Medical Image Computing and Computer-Assisted Intervention | 2011

Dense surface reconstruction for enhanced navigation in MIS

Johannes Totz; Peter Mountney; Danail Stoyanov; Guang-Zhong Yang

Recent introduction of dynamic view expansion has led to the development of computer vision methods for minimally invasive surgery to artificially expand the intra-operative field-of-view of the laparoscope. This provides improved awareness of the surrounding anatomical structures and minimises the effect of disorientation during surgical navigation. It permits the augmentation of live laparoscope images with information from previously captured views. Current approaches, however, can only represent the tissue geometry as planar surfaces or sparse 3D models, thus introducing noticeable visual artefacts in the final rendering results. This paper proposes a high-fidelity tissue geometry mapping by combining a sparse SLAM map with semi-dense surface reconstruction. The method is validated on phantom data with known ground truth, as well as in-vivo data captured during a robotic assisted MIS procedure. The derived results have shown that the method is able to effectively increase the coverage of the expanded surgical view without compromising mapping accuracy.


Medical Image Computing and Computer-Assisted Intervention | 2009

Optical Biopsy Mapping for Minimally Invasive Cancer Screening

Peter Mountney; Stamatia Giannarou; Daniel S. Elson; Guang-Zhong Yang

The quest for providing tissue characterization and functional mapping during minimally invasive surgery (MIS) has motivated the development of new surgical tools that extend the current functional capabilities of MIS. Miniaturized optical probes can be inserted into the instrument channel of standard endoscopes to reveal tissue cellular and subcellular microstructures, allowing excision-free optical biopsy. One of the limitations of such a point based imaging and tissue characterization technique is the difficulty of tracking probed sites in vivo. This prohibits large area surveillance and integrated functional mapping. The purpose of this paper is to present an image-based tracking framework by combining a semi model-based instrument tracking method with vision-based simultaneous localization and mapping. This allows the mapping of all spatio-temporally tracked biopsy sites, which can then be re-projected back onto the endoscopic video to provide a live augmented view in vivo, thus facilitating re-targeting and serial examination of potential lesions. The proposed method has been validated on phantom data with known ground truth and the accuracy derived demonstrates the strength and clinical value of the technique. The method facilitates a move from the current point based optical biopsy towards large area multi-scale image integration in a routine clinical environment.
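The re-projection step at the end of that pipeline is standard pinhole geometry: once SLAM gives the current camera pose, a stored 3D biopsy site is transformed into the camera frame and projected through the intrinsics. The intrinsic values and the toy pose below are illustrative assumptions.

```python
import numpy as np

def reproject(site_3d, R, t, K):
    """Re-project a stored 3D biopsy site (world frame, mm) into the
    current endoscope image, given camera rotation R, translation t,
    and intrinsic matrix K. Returns pixel coordinates (u, v)."""
    p_cam = R @ np.asarray(site_3d, dtype=float) + t   # world -> camera
    uvw = K @ p_cam                                    # camera -> image plane
    return uvw[0] / uvw[2], uvw[1] / uvw[2]            # perspective divide

K = np.array([[800.0,   0.0, 320.0],   # illustrative intrinsics: f = 800 px,
              [  0.0, 800.0, 240.0],   # principal point (320, 240)
              [  0.0,   0.0,   1.0]])

# Camera at the world origin looking down +Z; site 100 mm ahead and
# 10 mm to the right:
u, v = reproject([10.0, 0.0, 100.0], np.eye(3), np.zeros(3), K)
# → (400.0, 240.0)
```

Overlaying these (u, v) positions on the live video is what provides the augmented view for re-targeting previously probed sites.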


Computer Assisted Radiology and Surgery | 2012

Horizon Stabilized—Dynamic View Expansion for Robotic Assisted Surgery (HS-DVE)

Alexander Warren; Peter Mountney; David P. Noonan; Guang-Zhong Yang

Purpose: New surgical approaches based on natural orifice transluminal surgery (NOTES) have the potential to further decrease morbidity and hospital stay. However, a number of key challenges have been identified preventing its clinical adoption, including inadequate instrument design and spatial disorientation. Furthermore, retroflexion, missing fixed anatomical references, and limited field-of-view are key factors contributing to disorientation in NOTES.

Methods: A hybrid approach of integrated orientation sensing and real-time vision processing is proposed to restore orientation cues for improved surgical navigation. The distal tip of an articulated robotic endoscope is equipped with an inertial measurement unit (IMU), enabling video images to be reoriented and stabilized with respect to the horizon. This is performed by measuring the direction of gravity in relation to the cameras. Dynamic view expansion is used to increase the field-of-view of the endoscope. The method registers past video images to the current image and creates an enlarged visualization of the anatomy through simultaneous localization and mapping (SLAM).

Results: The clinical potential of the system is demonstrated on a NOTES appendectomy procedure performed on the NOSsE phantom. This involves an articulated robotic endoscope navigating to visualize the appendix while retroflexed. The horizon stabilization is additionally evaluated quantitatively against known ground truth.

Conclusions: The combination of horizon stabilization and dynamic view expansion presents a realistic approach for reintroducing orientation and navigation cues during NOTES. The platform allows real-time implementation, which is an important prerequisite for further clinical evaluation.
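The horizon-stabilization step reduces to estimating the camera's roll about its optical axis from the IMU's gravity vector, then counter-rotating the image. The sketch below shows that computation on image points; the axis convention (x right, y down, z along the optical axis) is an assumption, and a real system would warp the whole image rather than individual points.

```python
import numpy as np

def horizon_roll(accel):
    """Roll of the endoscope camera about its optical axis, estimated
    from the gravity vector measured by a distal accelerometer.
    accel = (ax, ay, az) in the camera frame (assumed convention:
    x right, y down, z along the optical axis). Returns radians."""
    ax, ay, _ = accel
    return np.arctan2(ax, ay)

def level_points(points, roll):
    """Counter-rotate 2D image points (about the image centre, taken
    as the origin) to undo the measured roll."""
    c, s = np.cos(-roll), np.sin(-roll)
    R = np.array([[c, -s], [s, c]])
    return (R @ np.asarray(points, dtype=float).T).T

# Camera rolled 90 degrees: gravity appears along +x instead of +y.
roll = horizon_roll((9.81, 0.0, 0.0))      # → pi/2
pts = level_points([[0.0, 1.0]], roll)     # point restored to the level frame
```

Running this per frame, with the corrected images then fed to the SLAM-based view expansion, yields the combined HS-DVE visualization described above.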

Collaboration

Top co-authors of Peter Mountney:

Danail Stoyanov (University College London)
Ara Darzi (Imperial College London)
Benny Lo (Imperial College London)