Gauthier Gras
Imperial College London
Publications
Featured research published by Gauthier Gras.
Annals of Surgery | 2016
Hani J. Marcus; Christopher J. Payne; Archie Hughes-Hallett; Gauthier Gras; Konrad Leibrandt; Dipankar Nandi; Guang-Zhong Yang
Objective: To determine the rate and extent of translation of innovative surgical devices from the laboratory to first-in-human studies, and to evaluate the factors influencing such translation. Summary Background Data: Innovative surgical devices have preceded many of the major advances in surgical practice. However, the process by which devices arising from academia find their way to translation remains poorly understood. Methods: All biomedical engineering journals, and the 5 basic science journals with the highest impact factor, were searched between January 1993 and January 2000 using the Boolean search term “surgery OR surgeon OR surgical”. Articles were included if they described the development of a new device and its surgical application. A recursive search of all citations to each article was performed using the Web of Science (Thomson Reuters, New York, NY) to identify any associated first-in-human studies published by January 2015. Kaplan-Meier curves were constructed for the time to first-in-human studies. Factors influencing translation were evaluated using log-rank tests and Cox proportional hazards models. Results: A total of 8297 articles were screened, and 205 publications describing unique devices were identified. The probability of a first-in-human study at 10 years was 9.8%. Clinical involvement was a significant predictor of a first-in-human study (P = 0.02); devices developed with early clinical collaboration were over 6 times more likely to be translated than those without [RR 6.5 (95% confidence interval 0.9–48)]. Conclusions: These findings support initiatives to increase clinical translation through improved interactions between basic, translational, and clinical researchers.
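The time-to-first-in-human analysis above rests on the Kaplan-Meier estimator for right-censored data: devices whose follow-up ends without a first-in-human study are censored rather than counted as failures. A minimal sketch in plain Python, using an illustrative cohort rather than the study's data:

```python
# Sketch of the Kaplan-Meier survival estimator for right-censored data.
# The "event" is a first-in-human study; devices with no such study by the
# end of follow-up are censored. The cohort below is illustrative only.

def kaplan_meier(times, events):
    """Return a list of (event_time, survival_probability) pairs.

    times  -- follow-up time for each device (e.g. years)
    events -- 1 if a first-in-human study occurred at that time, 0 if censored
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = 0  # events at time t
        n = 0  # observations leaving the risk set at time t
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]
            n += 1
            i += 1
        if d > 0:
            surv *= 1.0 - d / at_risk  # product-limit update
            curve.append((t, surv))
        at_risk -= n
    return curve

# Illustrative cohort of 5 devices: 2 translated, 3 censored
curve = kaplan_meier([2, 4, 4, 7, 10], [1, 0, 1, 0, 0])
```

The log-rank test and Cox model then compare such curves between groups (e.g. with vs. without early clinical collaboration).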
Computer Assisted Radiology and Surgery | 2016
Stamatia Giannarou; Menglong Ye; Gauthier Gras; Konrad Leibrandt; Hani J. Marcus; Guang-Zhong Yang
Purpose: In microsurgery, accurate recovery of the deformation of the surgical environment is important for mitigating the risk of inadvertent tissue damage and avoiding instrument maneuvers that may cause injury. The analysis of intraoperative microscopic data can allow the estimation of tissue deformation and provide the surgeon with useful feedback on the instrument forces exerted on the tissue. In practice, vision-based recovery of tissue deformation during tool–tissue interaction can be challenging due to tissue elasticity and unpredictable motion. Methods: The aim of this work is to propose an approach for deformation recovery based on quasi-dense 3D stereo reconstruction. The proposed framework incorporates a new stereo correspondence method for estimating the underlying 3D structure. Probabilistic tracking and surface mapping are used to estimate 3D point correspondences across time and recover localized tissue deformations in the surgical site. Results: We demonstrate the application of this method to estimating forces exerted on tissue surfaces. A clinically relevant experimental setup was used to validate the proposed framework on phantom data. The quantitative and qualitative performance evaluation results show that the proposed 3D stereo reconstruction and deformation recovery methods achieve submillimeter accuracy. The force–displacement model also provides accurate estimates of the exerted forces. Conclusions: A novel approach for tissue deformation recovery has been proposed based on reliable quasi-dense stereo correspondences. The proposed framework does not rely on additional equipment, allowing seamless integration with the existing surgical workflow. The performance evaluation analysis shows the potential clinical value of the technique.
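The submillimeter claim above depends on the basic geometry of stereo reconstruction: for a rectified stereo pair, depth follows from pixel disparity by similar triangles. A minimal sketch with illustrative camera parameters (not the paper's rig):

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# Focal length (pixels) and baseline (mm) below are illustrative values.

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth (mm) of a point from its left/right image disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

def depth_resolution(focal_px, baseline_mm, depth_mm):
    """Depth change (mm) caused by a one-pixel disparity change at depth_mm."""
    d = focal_px * baseline_mm / depth_mm  # disparity at this depth
    return depth_mm - focal_px * baseline_mm / (d + 1.0)

# With f = 1000 px and a 5 mm baseline, a point at 50 mm has 100 px disparity,
# and a one-pixel disparity error shifts the depth by well under a millimetre.
z = depth_from_disparity(1000.0, 5.0, 100.0)
res = depth_resolution(1000.0, 5.0, 50.0)
```

Sub-pixel stereo correspondence, as in the quasi-dense matching described above, pushes the effective depth resolution below this one-pixel bound.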
Medical Image Computing and Computer Assisted Intervention | 2015
Gauthier Gras; Hani J. Marcus; Christopher J. Payne; Philip Pratt; Guang-Zhong Yang
Microsurgery is technically challenging, demanding both rigorous precision under the operating microscope and great care when handling tissue. Applying excessive force can result in irreversible tissue injury, but sufficient force must be exerted to carry out manoeuvres in an efficient manner. Technological advances in hand-held instruments have allowed the integration of force sensing capabilities into surgical tools, resulting in the possibility of force feedback during an operation. This paper presents a novel method of graduated online visual force-feedback for hand-held microsurgical instruments. Unlike existing visual force-feedback techniques, the force information is integrated into the surgical scene by highlighting the area around the point of contact while preserving salient anatomical features. We demonstrate that the proposed technique can be integrated seamlessly with image guidance techniques. Critical anatomy beyond the exposed tissue surface is revealed using an augmented reality overlay when the user is exerting large forces within their proximity. The force information is further used to improve the quality of the augmented reality by displacing the overlay based on the forces exerted. Detailed user studies were performed to assess the efficacy of the proposed method.
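The graduated visual feedback described above can be sketched as a mapping from measured tip force to the colour and opacity of the highlight drawn around the contact point. The safe and limit thresholds below are illustrative placeholders, not tissue-specific values from the paper:

```python
# Toy force-to-highlight mapping: invisible at safe forces, blending from
# green to red (and becoming more opaque) as force approaches the limit.
# The 0.1 N / 0.5 N thresholds are illustrative assumptions.

def highlight_colour(force_n, safe_n=0.1, limit_n=0.5):
    """Return (r, g, b, alpha) for the contact highlight, components in [0, 1]."""
    frac = min(max((force_n - safe_n) / (limit_n - safe_n), 0.0), 1.0)
    return (frac, 1.0 - frac, 0.0, frac)  # alpha tracks the force fraction
```

Rendering this as a localized overlay around the contact point, rather than a screen-wide indicator, is what preserves the salient anatomical features mentioned above.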
International Conference on Robotics and Automation | 2017
Konrad Leibrandt; Piyamate Wisanuvej; Gauthier Gras; Jianzhong Shang; Carlo A. Seneci; Petros Giataganas; Valentina Vitiello; Ara Darzi; Guang-Zhong Yang
The field of robotic surgery is increasingly advancing towards highly articulated and continuum robots, requiring new kinematic strategies to enable users to perform dexterous manipulation in confined workspaces. This development is driven by surgical interventions that access the surgical workspace through natural orifices such as the mouth or the anus. Due to the long and narrow nature of these access pathways, external triangulation at the fulcrum point is very limited or absent, which makes it necessary to introduce multiple degrees of freedom at the distal end of the instrument. Additionally, high force and miniaturization requirements make the control of such instruments particularly challenging. This letter presents the kinematic considerations needed to effectively manipulate these novel instruments and allow their dexterous control in confined spaces. A nonlinear calibration model is further used to map joint space to actuator space and significantly improve the precision of the instruments' motion. The effectiveness of the presented approach is quantified with bench tests, and the usability of the system is assessed by three user studies simulating the requirements of a realistic surgical task.
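The joint-to-actuator calibration idea can be sketched generically: command a sweep of joint angles, measure the corresponding actuator positions, and fit a low-order polynomial map. The paper's actual calibration model is not specified here; this is a minimal least-squares stand-in with synthetic data:

```python
# Generic nonlinear joint-to-actuator calibration via polynomial least squares.
# The cubic "cable stretch" term in the synthetic data is an illustrative
# assumption, not a measured property of the instrument.
import numpy as np

def fit_calibration(joint_angles, actuator_pos, order=3):
    """Least-squares polynomial map from joint space to actuator space."""
    return np.polyfit(joint_angles, actuator_pos, order)

def joint_to_actuator(coeffs, q):
    """Evaluate the calibrated map at joint angle q."""
    return np.polyval(coeffs, q)

# Synthetic ground truth: linear drive plus a cubic nonlinearity
q = np.linspace(-1.0, 1.0, 21)
a = 2.0 * q + 0.3 * q**3
coeffs = fit_calibration(q, a)
err = abs(joint_to_actuator(coeffs, 0.5) - (2.0 * 0.5 + 0.3 * 0.5**3))
```

In practice the fitted map (or its inverse) is applied inside the control loop, so that commanded joint angles compensate for transmission nonlinearities.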
Artificial Intelligence in Medicine | 2017
Michele Tonutti; Gauthier Gras; Guang-Zhong Yang
OBJECTIVES Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. METHOD A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms, artificial neural networks (ANNs) and support vector regression (SVR), are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. RESULTS The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, within the general threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR models reaching under 0.2 mm. CONCLUSIONS The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue.
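The "pre-compute expensive simulations offline, regress a cheap surrogate for run time" idea can be sketched in a few lines. The paper uses ANNs and SVR; as a self-contained stand-in, the toy below trains a kernel ridge regressor (a close cousin of SVR) on a 1-D load/displacement relation standing in for FEM results. All data and hyperparameters are illustrative:

```python
# Toy surrogate model: kernel ridge regression on synthetic "FEM" data.
# This substitutes for the paper's SVR/ANN models; the saturating
# load-displacement curve below is an illustrative assumption.
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian (RBF) kernel matrix between two 1-D sample arrays."""
    return np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)

def train(loads, displacements, lam=1e-4):
    """Fit kernel ridge regression; returns the dual coefficients."""
    K = rbf_kernel(loads, loads)
    return np.linalg.solve(K + lam * np.eye(len(loads)), displacements)

def predict(loads_train, alpha, load_new):
    """Instantaneous prediction at run time: one kernel row times alpha."""
    return rbf_kernel(np.atleast_1d(load_new), loads_train) @ alpha

# Offline phase: 30 "simulations" of a saturating load response
loads = np.linspace(0.0, 1.0, 30)
disp = 0.5 * loads / (1.0 + loads)
alpha = train(loads, disp)

# Online phase: predict displacement for a new load
pred = predict(loads, alpha, 0.5)[0]
truth = 0.5 * 0.5 / 1.5
```

The expensive work (the FEM sweep and the fit) happens once per patient; the online prediction is a single kernel evaluation, which is what makes real-time AR use feasible.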
Intelligent Robots and Systems | 2016
Gauthier Gras; Guang-Zhong Yang
Eye tracking technology has shown promising results for allowing hands-free control of robotically-mounted cameras and tools. However, existing systems present only limited capabilities in allowing the full range of camera motions in a safe, intuitive manner. This paper introduces a framework for the recognition of surgeon intention, allowing activation and control of the camera through natural gaze behaviour. The system is resistant to noise such as blinking, while allowing the surgeon to look away safely at any time. Furthermore, this paper presents a novel approach to controlling the translation of the camera along its optical axis using a combination of eye tracking and stereo reconstruction. Combining eye tracking and stereo reconstruction allows the system to determine which point in 3D space the user is fixating on, enabling a translation of the camera to achieve the optimal viewing distance. In addition, the eye tracking information is used to perform automatic laser targeting for laser ablation. The desired target point of the laser, mounted on a separate robotic arm, is determined with the eye tracking, thus removing the need to manually adjust the laser's target point before starting each new ablation. The calibration methodology used to obtain millimetre precision for the laser targeting without the aid of visual servoing is described. Finally, a user study validating the system is presented, showing a clear improvement, with median task times under half those of a manually controlled robotic system.
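Estimating the 3D fixation point from binocular gaze can be sketched as ray triangulation: each eye defines a ray, and the fixation point is taken as the midpoint of the shortest segment between the two rays. The geometry below (eye positions, target) is illustrative, not the paper's calibration:

```python
# Triangulating a 3D fixation point from two gaze rays using the standard
# closest-points-between-lines construction. All coordinates are in mm and
# purely illustrative.
import numpy as np

def fixation_point(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + s*d1 and o2 + t*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("gaze rays are parallel; no unique fixation point")
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Two eyes 60 mm apart, both directed at the point (0, 0, 400)
left = np.array([-30.0, 0.0, 0.0])
right = np.array([30.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 400.0])
p = fixation_point(left, target - left, right, target - right)
```

With noisy gaze directions the two rays rarely intersect exactly, which is why the midpoint of the closest segment (rather than an intersection) is the usual estimate; the depth of this point is what drives the camera translation along its optical axis.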
Intelligent Robots and Systems | 2015
Christopher J. Payne; Gauthier Gras; Michael D. Hughes; Dinesh Nathwani; Guang-Zhong Yang
The surgical robotics community has developed many different flexible robot designs to address the access problems of minimally invasive surgery. In this paper, we present a hand-held mechatronic tool with a miniaturized distal flexible manipulator incorporating a microscopy probe, a camera, and a light source for diagnostic arthroscopy. Extensive characterization of the flexible manipulator is provided, including an optimization of the manipulator workspace, its hysteresis characteristics, and the repeatability of the instrument. The lateral stiffness of the flexible manipulator for different bending conditions is assessed along with the overall robustness of the platform. A cadaveric study was performed to demonstrate the potential clinical value of the device.
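Workspace analyses of distal flexible manipulators like the one above are commonly based on the constant-curvature model, in which a bend of angle theta over arc length L traces a circular arc. This is a textbook approximation, not necessarily the model used in the paper:

```python
# Planar constant-curvature kinematics of a flexible segment: given a bend
# angle theta (rad) and arc length L, the tip lies on a circle of radius
# r = L / theta. Dimensions are illustrative.
import math

def tip_position(theta, length):
    """Planar tip position (x, z) of a constant-curvature segment."""
    if abs(theta) < 1e-9:
        return (0.0, length)  # straight segment: tip directly ahead
    radius = length / theta
    return (radius * (1.0 - math.cos(theta)), radius * math.sin(theta))

# A 20 mm segment bent to 90 degrees ends at (r, r) with r = 2L/pi
x, z = tip_position(math.pi / 2, 20.0)
```

Sweeping theta over the achievable bend range (and the tool's roll axis) traces out the reachable workspace, which is the quantity the characterization above optimizes.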
International Conference on Robotics and Automation | 2017
Jianzhong Shang; Konrad Leibrandt; Petros Giataganas; Valentina Vitiello; Carlo A. Seneci; Piyamate Wisanuvej; Jindong Liu; Gauthier Gras; James Clark; Ara Darzi; Guang-Zhong Yang
This letter introduces a single-port robotic platform for transanal endoscopic microsurgery (TEMS). Two robotically controlled articulated surgical instruments are inserted via a transanal approach to perform submucosal or full-thickness dissection. This system is intended to replace the conventional TEMS approach that uses manual laparoscopic instruments. The new system is based on master–slave robotically controlled tele-manipulation. The slave robot comprises a support arm that is mounted on the operating table, supporting a surgical port and a robotic platform that drives the surgical instruments. The master console includes a pair of haptic devices, as well as a three-dimensional display showing the live video stream of a stereo endoscope inserted through the surgical port. The surgical instrumentation consists of energy delivery devices, graspers, and needle drivers allowing a full TEMS procedure to be performed. Results from benchtop tests, ex vivo animal tissue evaluation, and in vivo studies demonstrate the clinical advantage of the proposed system.
International Conference on Robotics and Automation | 2014
Gauthier Gras; Valentina Vitiello; Guang-Zhong Yang
In recent years, robotic systems have been playing an increasingly important role in physiotherapy. The aim of these platforms is to aid the recovery process from strokes or muscular damage by assisting patients to perform a number of controlled tasks, thus effectively complementing the role of the physiotherapist. In this paper, we present a novel learning from demonstration framework for cooperative control in robotic-assisted physiotherapy. Unlike other approaches, the aim of the proposed system is to guide the patients to optimally execute a task based on previously learned demonstrations. This allows the generation of patient-specific gestures under the supervision of the expert physiotherapist. The guidance is performed through stiffness control of a compliant manipulator, where the stiffness profile of the generalized trajectory is determined according to the relative importance of each section of the task. In contrast with the traditional learning approach, where the execution of the generalized trajectory by the robot is automated, this cooperative control architecture allows the patients to perform the task at their own pace, while ensuring the movements are executed correctly. Increased performance of the learning framework is accomplished through a novel fast, low-cost multi-demonstration dynamic time warping algorithm used to build the model. Experimental validation of the framework is carried out using an interactive setup designed to provide further guidance through additional visual and sensory feedback based on the task model. The results demonstrate the potential of the proposed framework, showing a significant improvement in the performance of guided tasks compared to unguided ones.
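The alignment step underlying the multi-demonstration model can be sketched with classic dynamic time warping (DTW), which aligns two trajectories executed at different speeds. This is the textbook dynamic-programming algorithm, not the fast multi-demonstration variant proposed in the paper:

```python
# Classic DTW distance between two 1-D trajectories via dynamic programming.
# Multi-dimensional trajectories work the same way with a vector distance.

def dtw_distance(a, b):
    """DTW distance between two trajectories given as lists of floats."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # advance both
    return D[n][m]

# The same gesture executed at two speeds aligns with zero cost
slow = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0]
fast = [0.0, 1.0, 2.0]
d = dtw_distance(slow, fast)
```

Aligning all demonstrations to a common time base is what allows a generalized trajectory (and a per-section stiffness profile) to be computed across demonstrations of varying speed.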
International Conference on Robotics and Automation | 2017
Gauthier Gras; Konrad Leibrandt; Piyamate Wisanuvej; Petros Giataganas; Carlo A. Seneci; Menglong Ye; Jianzhong Shang; Guang-Zhong Yang
Traditional robotic surgical systems rely entirely on robotic arms to triangulate articulated instruments inside the human anatomy. This configuration can be ill-suited for working in tight spaces or during single-access approaches, where little to no triangulation between the instrument shafts is possible. The control of these instruments is further hindered by ergonomic issues: the presence of motion scaling imposes the use of clutching mechanics to avoid the workspace limitations of master devices, and forces the user to choose between slow, precise movements or fast, less accurate ones. This paper presents a bi-manual system using novel self-triangulating 6-degree-of-freedom (DoF) tools through a flexible elbow, which are mounted on robotic arms. The control scheme for the resulting 9-DoF system is detailed, with particular emphasis placed on retaining maximum dexterity close to joint limits. Furthermore, this paper introduces the concept of gaze-assisted adaptive motion scaling. By combining eye tracking with hand motion and instrument information, the system is capable of inferring the user's destination and modifying the motion scaling accordingly. This safe, novel approach allows the user to quickly reach distant locations while retaining full precision for delicate manoeuvres. The performance and usability of this adaptive motion scaling is evaluated in a user study, showing a clear improvement in task completion speed and a reduced need for clutching.
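The adaptive scaling idea can be sketched as a scaling factor that grows with the distance between the instrument tip and the gaze-inferred destination: coarse, fast motion far from the target, fine, precise motion up close. The bounds and ramp distance below are illustrative placeholders, not the paper's tuned values:

```python
# Toy gaze-assisted motion scaling: master-to-slave scale as a function of
# the tip-to-target distance. "fine", "coarse", and "ramp_mm" are assumed
# illustrative constants.

def motion_scale(distance_mm, fine=0.2, coarse=1.0, ramp_mm=50.0):
    """Scaling applied to master motion, interpolated over the ramp distance."""
    frac = min(max(distance_mm / ramp_mm, 0.0), 1.0)
    return fine + (coarse - fine) * frac  # fine near target, coarse far away

# Near the inferred target, motion is scaled down 5:1 for precision;
# beyond the ramp distance it passes through unscaled.
near, far = motion_scale(0.0), motion_scale(200.0)
```

Because the scale varies continuously with the inferred destination, the user gets fast gross motion and fine terminal precision without explicitly clutching or switching modes, which is the reduction in clutching reported above.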