
Publications


Featured research published by George P. Mylonas.


Medical Image Computing and Computer-Assisted Intervention | 2005

Soft-tissue motion tracking and structure estimation for robotic assisted MIS procedures

Danail Stoyanov; George P. Mylonas; Ara Darzi; Guang-Zhong Yang

In robotically assisted laparoscopic surgery, soft-tissue motion tracking and structure recovery are important for intraoperative surgical guidance, motion compensation and delivering active constraints. In this paper, we present a novel method for feature-based motion tracking of deformable soft-tissue surfaces in totally endoscopic coronary artery bypass graft (TECAB) surgery. We combine two feature detectors to recover distinct regions on the epicardial surface for which the sparse 3D surface geometry may be computed using a pre-calibrated stereo laparoscope. The movement of the 3D points is then tracked in the stereo images under stereo-temporal constraints by using an iterative registration algorithm. The practical value of the technique is demonstrated on both a deformable phantom model with tomographically derived surface geometry and in vivo robotic assisted minimally invasive surgery (MIS) image sequences.
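
The sparse-geometry step above rests on standard two-view triangulation. Below is a minimal sketch, assuming a pre-calibrated stereo laparoscope whose 3x4 projection matrices are known; this is generic linear (DLT) triangulation, not the authors' full tracking pipeline.

```python
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P_left, P_right : 3x4 camera projection matrices
    x_left, x_right : (u, v) pixel coordinates of the matched feature
    Returns the 3D point in the calibration's world frame.
    """
    u1, v1 = x_left
    u2, v2 = x_right
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    # Solve A X = 0 via SVD; the solution is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise
```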


Computer Aided Surgery | 2007

HMM assessment of quality of movement trajectory in laparoscopic surgery

Julian J. H. Leong; Marios Nicolaou; Louis Atallah; George P. Mylonas; Ara Darzi; Guang-Zhong Yang

Laparoscopic surgery poses many different constraints for the operating surgeon, resulting in a slow uptake of advanced laparoscopic procedures. Traditional approaches to the assessment of surgical performance rely on prior classification of a cohort of surgeons' technical skills for validation, which may introduce subjective bias to the outcome. In this study, Hidden Markov Models (HMMs) are used to learn surgical maneuvers from 11 subjects with mixed abilities. By using the leave-one-out method, the HMMs are trained without prior clustering of subjects into different skill levels, and the output likelihood indicates the similarity of a particular subject's motion trajectories to those of the group. The results show that after a short period of training, the novices become more similar to the group when compared to the initial pre-training assessment. The study demonstrates the strength of the proposed method in ranking the quality of trajectories of the subjects, highlighting its value in minimizing the subjective bias in skills assessment for minimally invasive surgery.
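
The leave-one-out scoring idea lends itself to a compact sketch. The following assumes the hmmlearn library and per-subject trajectories stored as (T, D) feature arrays; the number of hidden states and all data handling are illustrative, not the study's settings.

```python
import numpy as np
from hmmlearn import hmm

def loo_likelihoods(trajectories, n_states=5):
    """For each subject, train a Gaussian HMM on everyone else's motion
    trajectories and score the held-out subject's trajectory.

    trajectories : list of (T_i, D) arrays, one per subject
    Returns per-sample log-likelihoods (higher = more similar to the group).
    """
    scores = []
    for i, held_out in enumerate(trajectories):
        rest = [t for j, t in enumerate(trajectories) if j != i]
        X = np.concatenate(rest)
        lengths = [len(t) for t in rest]
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        # Normalise by trajectory length so long and short trials compare.
        scores.append(model.score(held_out) / len(held_out))
    return scores
```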


Medical Image Computing and Computer-Assisted Intervention | 2007

pq-space based non-photorealistic rendering for augmented reality

Mirna Lerotic; Adrian James Chung; George P. Mylonas; Guang-Zhong Yang

The increasing use of robotic assisted minimally invasive surgery (MIS) provides an ideal environment for using Augmented Reality (AR) for performing image guided surgery. Seamless synthesis of AR depends on a number of factors relating to the way in which virtual objects appear and visually interact with a real environment. Traditional overlaid AR approaches generally suffer from a loss of depth perception. This paper presents a new AR method for robotic assisted MIS, which uses a novel pq-space based non-photorealistic rendering technique for providing see-through vision of the embedded virtual object whilst maintaining salient anatomical details of the exposed anatomical surface. Experimental results with both phantom and in vivo lung lobectomy data demonstrate the visual realism achieved by the proposed method and its accuracy in providing high fidelity AR depth perception.
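
The pq-space of a surface is its gradient space: p = dZ/dx, q = dZ/dy for a depth map Z(x, y). Below is a minimal sketch of the see-through idea under one plausible reading, not the paper's exact formulation: keep the exposed surface opaque where its pq-derived shading changes sharply (salient ridges and folds) and let the virtual object show through elsewhere. All inputs and constants are illustrative placeholders.

```python
import numpy as np

def pq_salience_blend(depth, live_rgb, virtual_rgb, light=(0.3, 0.3, 0.9)):
    """Blend a virtual overlay into the live image, modulated by pq-space
    shading of the tissue surface recovered from `depth`."""
    q, p = np.gradient(depth.astype(float))          # q = dZ/dy, p = dZ/dx
    # Surface normal from (p, q): n ~ (-p, -q, 1), normalised.
    n = np.dstack([-p, -q, np.ones_like(depth, dtype=float)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light, float)
    l /= np.linalg.norm(l)
    shading = np.clip(n @ l, 0.0, 1.0)               # Lambertian term
    # Salience: strong shading variation marks anatomical detail to preserve.
    gy, gx = np.gradient(shading)
    salience = np.clip(np.hypot(gx, gy) * 10.0, 0.0, 1.0)[..., None]
    # The virtual object shows through where the surface is smooth.
    return salience * live_rgb + (1.0 - salience) * virtual_rgb
```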


Intelligent Robots and Systems | 2008

Gaze contingent articulated robot control for robot assisted minimally invasive surgery

David P. Noonan; George P. Mylonas; Ara Darzi; Guang-Zhong Yang

This paper introduces a novel technique for controlling an articulated robotic device through the eyes of the surgeon during minimally invasive surgery. The system consists of a binocular eye-tracking unit and a robotic instrument featuring a long, rigid shaft with an articulated distal tip for minimally invasive interventions. They have been integrated into a da Vinci surgical robot to provide a seamless and non-invasive localization of the eye fixations of the surgeon. By using a gaze contingent framework, the surgeon's fixations in 3D are converted into commands that direct the robotic probe to the desired location. Experimental results illustrate the ability of the system to perform real-time gaze contingent robot control and open up a new avenue for improving current human-robot interfaces.
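
A minimal sketch of such a gaze-contingent control loop: the binocularly resolved 3D fixation point becomes a position setpoint for the articulated tip, tracked with a proportional controller and a dead zone so ordinary fixational jitter does not move the instrument. All gains, limits and interfaces below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

DEAD_ZONE_MM = 2.0   # ignore fixation error smaller than this
GAIN = 0.5           # proportional gain (1/s)
MAX_SPEED = 10.0     # tip speed limit (mm/s)

def gaze_servo_step(fixation_3d, tip_pose_3d, dt):
    """One control tick: return the commanded tip displacement (mm)."""
    error = np.asarray(fixation_3d, float) - np.asarray(tip_pose_3d, float)
    dist = np.linalg.norm(error)
    if dist < DEAD_ZONE_MM:
        return np.zeros(3)               # hold position on stable fixations
    velocity = GAIN * error              # proportional command
    speed = np.linalg.norm(velocity)
    if speed > MAX_SPEED:                # saturate for safety
        velocity *= MAX_SPEED / speed
    return velocity * dt
```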


Surgical Endoscopy and Other Interventional Techniques | 2012

Collaborative eye tracking: a potential training tool in laparoscopic surgery

Andrew S A Chetwood; Ka-Wai Kwok; Loi-Wah Sun; George P. Mylonas; James Clark; Ara Darzi; Guang-Zhong Yang

Background: Eye-tracking technology has been shown to improve trainee performance in the aircraft industry, radiology, and surgery. The ability to track the point-of-regard of a supervisor and reflect it onto a subject's laparoscopic screen to aid instruction of a simulated task is attractive, particularly given the multilingual make-up of modern surgical teams and the development of collaborative surgical techniques. We aimed to develop a bespoke interface to project a supervisor's point-of-regard onto a subject's laparoscopic screen, and to investigate whether the supervisor's eye-gaze could be used as a tool to aid identification of a target during a simulated surgical task.

Methods: We developed software to project a supervisor's point-of-regard onto a subject's screen whilst undertaking surgically related laparoscopic tasks. Twenty-eight subjects with varying levels of operative experience and proficiency in English undertook a series of surgically minded laparoscopic tasks. Subjects were instructed with verbal cues (V), a cursor reflecting the supervisor's eye-gaze (E), or both (VE). Performance metrics included time to complete tasks, eye-gaze latency, and number of errors.

Results: Completion times and number of errors were significantly reduced when eye-gaze instruction was employed (VE, E). In addition, the time taken for the subject to correctly focus on the target (latency) was significantly reduced.

Conclusions: We have successfully demonstrated the effectiveness of a novel framework that enables a supervisor's eye-gaze to be projected onto a trainee's laparoscopic screen. Furthermore, we have shown that utilizing eye-tracking technology to provide visual instruction improves completion times and reduces errors in a simulated environment. Although this technology requires significant development, the potential applications are wide-ranging.
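
Of the metrics above, eye-gaze latency is the most tracker-specific. Here is a minimal sketch of one way to compute it, assuming timestamped 2D gaze samples and a circular target region; the radius and dwell criterion are illustrative, not the study's definitions.

```python
import numpy as np

def gaze_latency(timestamps, gaze_xy, cue_time, target_xy,
                 radius_px=40.0, dwell_s=0.1):
    """Return seconds from cue onset to the first dwell on target, or None."""
    t = np.asarray(timestamps, float)
    d = np.linalg.norm(np.asarray(gaze_xy, float) - np.asarray(target_xy, float),
                       axis=1)
    on_target = (t >= cue_time) & (d <= radius_px)
    start = None
    for i, hit in enumerate(on_target):
        if hit and start is None:
            start = i                    # candidate dwell begins
        elif not hit:
            start = None                 # gaze left the target; reset
        # Require gaze to stay on target for `dwell_s` before counting it.
        if start is not None and t[i] - t[start] >= dwell_s:
            return t[start] - cue_time
    return None
```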


Computer Aided Surgery | 2006

Gaze-contingent control for minimally invasive robotic surgery

George P. Mylonas; Ara Darzi; Guang-Zhong Yang

Objective: Recovering tissue depth and deformation during robotically assisted minimally invasive procedures is an important step towards motion compensation, stabilization and co-registration with preoperative data. This work demonstrates that eye gaze derived from binocular eye tracking can be effectively used to recover 3D motion and deformation of the soft tissue.

Methods: A binocular eye-tracking device was integrated into the stereoscopic surgical console. After calibration, the 3D fixation point of the participating subjects could be accurately resolved in real time. A CT-scanned phantom heart model was used to demonstrate the accuracy of gaze-contingent depth extraction and motion stabilization of the soft tissue. The dynamic response of the oculomotor system was assessed with the proposed framework by using autoregressive modeling techniques. In vivo data were also used to perform gaze-contingent decoupling of cardiac and respiratory motion.

Results: Depth reconstruction, deformation tracking, and motion stabilization of the soft tissue were possible with binocular eye tracking. The dynamic response of the oculomotor system was able to cope with frequencies likely to occur under most routine minimally invasive surgical operations.

Conclusion: The proposed framework presents a novel approach towards the tight integration of a human and a surgical robot where interaction in response to sensing is required to be under the control of the operating surgeon.
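
The depth-recovery step rests on triangulating the two gaze rays, which rarely intersect exactly. A minimal sketch that takes the fixation point as the midpoint of the shortest segment between the rays; eye positions and gaze directions are assumed to be already calibrated into a common frame.

```python
import numpy as np

def fixation_from_gaze_rays(o_l, d_l, o_r, d_r):
    """3D fixation point from two gaze rays x = o + t*d (d need not be unit)."""
    o_l = np.asarray(o_l, float)
    d_l = np.asarray(d_l, float)
    o_r = np.asarray(o_r, float)
    d_r = np.asarray(d_r, float)
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b          # ~0 when the rays are (near-)parallel
    if abs(denom) < 1e-9:
        return None                # no stable vergence estimate
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    p_l = o_l + t_l * d_l          # closest point on the left ray
    p_r = o_r + t_r * d_r          # closest point on the right ray
    return 0.5 * (p_l + p_r)       # midpoint = estimated fixation point
```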


IEEE International Conference on Biomedical Robotics and Biomechatronics | 2010

Gaze contingent control for an articulated mechatronic laparoscope

David P. Noonan; George P. Mylonas; Jianzhong Shang; Christopher J. Payne; Ara Darzi; Guang-Zhong Yang

This paper introduces two techniques for controlling an articulated mechatronic laparoscope through the eyes of the surgeon during minimally invasive surgery. The system consists of a 2D eye tracking unit interfaced with a mechatronic laparoscope that has five controllable degrees-of-freedom (DoF) located at the distal end of a rigid shaft. Through the use of image feedback from a tip mounted camera, a closed-loop gaze contingent framework featuring two separate control techniques ("Individual Joint Selection" and "Automatic Joint Selection") was developed. Under this framework, the location of a surgeon's 2D fixation point is converted into commands that servo the laparoscope. Experimental results illustrate the ability of both techniques to perform real-time gaze contingent laparoscope control. A key advantage of the proposed system is the ability to provide the operator with sufficient distal dexterity to achieve stable off-axis visualisation in an intuitive, hands-free manner, thus allowing other handheld instruments to be controlled simultaneously. Potential applications include Single Incision Laparoscopic Surgery (SILS) or Natural Orifice Trans-Endoluminal Surgery (NOTES), where the use of multiple instruments passing through a single incision presents both visualization and ergonomic challenges.
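
The abstract does not spell out its selection rule, but one plausible sketch of an "Automatic Joint Selection" strategy is to servo only the joint whose image-space motion direction best aligns with the current gaze error. Everything below (the image Jacobian, gain and sign conventions) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def select_and_command(J_image, gaze_error_px, gain=0.02):
    """J_image       : 2xN image Jacobian (pixel motion per unit joint motion)
    gaze_error_px : 2-vector from the image centre to the fixation point
    Returns (selected joint index, N-vector of joint velocity commands)."""
    e = np.asarray(gaze_error_px, float)
    norms = np.linalg.norm(J_image, axis=0) + 1e-9
    # Alignment of each joint's image-motion direction with the gaze error.
    alignment = np.abs((J_image.T @ e) / (norms * (np.linalg.norm(e) + 1e-9)))
    best = int(np.argmax(alignment))
    col = J_image[:, best]
    q_dot = np.zeros(J_image.shape[1])
    # Least-squares command along the single selected joint that moves the
    # fixation point toward the image centre.
    q_dot[best] = gain * (col @ e) / (col @ col)
    return best, q_dot
```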


International Conference on Medical Imaging and Augmented Reality | 2008

Perceptual Docking for Robotic Control

Guang-Zhong Yang; George P. Mylonas; Ka-Wai Kwok; Adrian James Chung

In current robotic surgery, dexterity is enhanced by microprocessor controlled mechanical wrists which allow motion scaling for reduced gross hand movements and improved performance of micro-scale tasks. The continuing evolution of the technology, including force feedback and virtual immobilization through real-time motion adaptation, will permit complex procedures such as beating heart surgery to be carried out under a static frame-of-reference. In pursuing more adaptive and intelligent robotic designs, the regulatory, ethical and legal barriers imposed on interventional surgical robots have given rise to the need for tightly integrated control between the operator and the robot when autonomy is considered. This paper outlines the general concept of perceptual docking for robotic control and how it can be used for learning and knowledge acquisition in robotic assisted minimally invasive surgery, such that operator specific motor and perceptual/cognitive behaviour is acquired through in situ sensing. A gaze contingent framework is presented in this paper as an example to illustrate how saccadic eye movements and ocular vergence can be used for attention selection, recovering 3D tissue deformation and motor channelling during minimally invasive surgical procedures.
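
Attention selection of this kind presupposes separating saccades from fixations in the raw gaze stream. Below is a minimal sketch of the standard velocity-threshold (I-VT) classifier, a common way to do this; the sampling rate and threshold are illustrative, and the authors' own classifier may differ.

```python
import numpy as np

def classify_ivt(gaze_deg, rate_hz=250.0, threshold_deg_s=30.0):
    """Label each gaze sample 'fixation' or 'saccade' by angular velocity.

    gaze_deg : (T, 2) gaze positions in degrees of visual angle
    """
    g = np.asarray(gaze_deg, float)
    # Sample-to-sample angular speed in deg/s.
    speed = np.linalg.norm(np.diff(g, axis=0), axis=1) * rate_hz
    speed = np.concatenate([[0.0], speed])       # pad the first sample
    return np.where(speed > threshold_deg_s, "saccade", "fixation")
```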


International Workshop on Medical Imaging and Virtual Reality | 2004

Gaze Contingent Depth Recovery and Motion Stabilisation for Minimally Invasive Robotic Surgery

George P. Mylonas; Ara Darzi; Guang-Zhong Yang

The introduction of surgical robots in minimally invasive surgery has allowed enhanced manual dexterity through the use of microprocessor controlled mechanical wrists. They permit the use of motion scaling for reducing gross hand movements and the performance of micro-scale tasks that are otherwise not possible. The high degree of freedom offered by robotic surgery, however, can introduce the problems of complex instrument control and hand-eye coordination. The purpose of this project is to investigate the use of real-time binocular eye tracking for empowering the robots with human vision using knowledge acquired in situ, thus simplifying, as well as enhancing, robotic control in surgery. By utilizing the close relationship between horizontal disparity and depth perception, which varies with viewing distance, we demonstrate how vergence can be effectively used for recovering 3D depth at the fixation points and further be used for adaptive motion stabilization during surgery. A dedicated stereo viewer and eye tracking system has been developed, and experimental results involving normal subjects viewing real as well as synthetic scenes are presented. Detailed quantitative analysis demonstrates the strength and potential value of the method.
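
The disparity-depth relationship the paragraph invokes has a simple closed form. In generic notation (not the paper's), for a symmetric fixation on the midline:

```latex
% Vergence--depth geometry (generic symbols, not the paper's notation):
% a = interocular separation, \theta = vergence angle between the gaze rays.
\[
  Z \,=\, \frac{a/2}{\tan(\theta/2)}
\]
% Equivalently, for a calibrated stereo pair with focal length f, baseline b
% and horizontal disparity d:  Z = f b / d.  Small changes in vergence or
% disparity therefore map directly to changes in fixated depth.
```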


Medical Image Computing and Computer-Assisted Intervention | 2005

Gaze-contingent soft tissue deformation tracking for minimally invasive robotic surgery

George P. Mylonas; Danail Stoyanov; Ara Darzi; Guang-Zhong Yang

The introduction of surgical robots in Minimally Invasive Surgery (MIS) has allowed enhanced manual dexterity through the use of microprocessor controlled mechanical wrists. Although fully autonomous robots are attractive, both ethical and legal barriers can prohibit their practical use in surgery. The purpose of this paper is to demonstrate that it is possible to use real-time binocular eye tracking for empowering robots with human vision by using knowledge acquired in situ. By utilizing the close relationship between the horizontal disparity and the depth perception varying with the viewing distance, it is possible to use ocular vergence for recovering 3D motion and deformation of the soft tissue during MIS procedures. Both phantom and in vivo experiments were carried out to assess the potential frequency limit of the system and its intrinsic depth recovery accuracy. The potential applications of the technique include motion stabilization and intra-operative planning in the presence of large tissue deformation.

Collaboration


Dive into George P. Mylonas's collaborations.

Top Co-Authors

Ara Darzi (Imperial College London)
Ka-Wai Kwok (University of Hong Kong)
Daniel Leff (Imperial College London)
Danail Stoyanov (University College London)
Loi-Wah Sun (Imperial College London)