Francisco Vasconcelos
University College London
Publications
Featured research published by Francisco Vasconcelos.
International Conference on Medical Imaging and Augmented Reality | 2016
Evangelos B. Mazomenos; Francisco Vasconcelos; Jeremy Smelt; Henry Prescott; Marjan Jahangiri; Bruce Martin; Andrew Smith; Susan Wright; Danail Stoyanov
This paper presents a novel approach for evaluating technical skills in Transoesophageal Echocardiography (TEE). Our core assumption is that operational competency can be objectively expressed by specific motion-based measures. TEE experiments were carried out with an augmented reality simulation platform involving both novice trainees and expert radiologists. Probe motion data were collected and used to formulate various kinematic parameters. Subsequent analysis showed that statistically significant differences exist between the two groups for the majority of the metrics investigated. Experts exhibited lower completion times and higher average velocity and acceleration, attributed to their refined ability for efficient and economical probe manipulation. In addition, their navigation pattern is characterised by increased smoothness and fluidity, evaluated through the measures of dimensionless jerk and spectral arc length. Utilised as inputs to well-known clustering algorithms, the derived metrics are capable of discriminating experience levels with high accuracy (>84%).
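To make the smoothness metric concrete, the sketch below computes one common variant of dimensionless jerk (duration and path-length normalised squared jerk) from sampled probe positions. It is an illustrative reconstruction, not the paper's exact implementation, and assumes a uniformly sampled trajectory.

```python
import numpy as np

def dimensionless_jerk(positions, dt):
    """Dimensionless squared jerk of a 3D probe trajectory.

    positions : (N, 3) array of probe positions sampled at interval dt (s).
    Smaller values correspond to smoother motion; this is one common
    normalisation and may differ from the paper's exact definition.
    """
    vel = np.gradient(positions, dt, axis=0)                       # velocity
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)   # third derivative
    duration = dt * (len(positions) - 1)
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    jerk_sq_integral = np.trapz(np.sum(jerk ** 2, axis=1), dx=dt)
    return (duration ** 5 / path_length ** 2) * jerk_sq_integral
```

Per-trial values of this kind, together with completion time, average velocity, and spectral arc length, are the features one would feed into a clustering algorithm to separate novices from experts.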
European Conference on Computer Vision | 2016
Francisco Vasconcelos; Donald Peebles; Sebastien Ourselin; Danail Stoyanov
We propose a minimal solution for the similarity registration (rigid pose and scale) between two sets of 3D lines, and also between a set of co-planar points and a set of 3D lines. The first problem yields up to 8 discrete solutions from a minimum of 2 line-line correspondences, while the second yields up to 4 discrete solutions from 4 point-line correspondences. We use these algorithms to perform the extrinsic calibration between a pose tracking sensor and a 2D/3D ultrasound (US) curvilinear probe using a tracked needle as the calibration target. The needle is tracked as a 3D line, and is scanned by the ultrasound as either a 3D line (3D US) or as a 2D point (2D US). Since the scale factor that converts US scan units to metric coordinates is unknown, the calibration is formulated as a similarity registration problem. We present results with both synthetic and real data and show that the minimal solutions outperform the corresponding non-minimal linear formulations.
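For readers unfamiliar with the term, a similarity registration recovers a scale s, rotation R, and translation t in one step. The sketch below is the standard Umeyama-style closed form for paired 3D points, shown only to illustrate the parameters being estimated; the paper's contribution is minimal solvers for line and point-line correspondences, which this sketch does not reproduce.

```python
import numpy as np

def similarity_registration(src, dst):
    """Umeyama-style similarity alignment: dst ≈ s * R @ src + t.

    src, dst : (N, 3) arrays of corresponding 3D points.
    Illustrative baseline only; the paper's solvers use line correspondences.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)              # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # enforce a proper rotation
        D[2, 2] = -1
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src        # scale (US units to metric)
    t = mu_d - s * R @ mu_s
    return s, R, t
```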
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018
Francisco Vasconcelos; João Pedro Barreto; Edmond Boyer
We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with two other calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.
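The key structural change to RANSAC is that each hypothesis is built from a minimal sample drawn jointly from two correspondence sets rather than one. The generic loop below sketches that idea; `solve_minimal` and `residuals` are hypothetical placeholders for a problem-specific minimal solver and its error function (not the paper's actual 11-point solver), and the 6/5 split of the sample is illustrative.

```python
import numpy as np

def multi_set_ransac(corr_a, corr_b, solve_minimal, residuals,
                     n_a=6, n_b=5, threshold=2.0, iters=2000, rng=None):
    """RANSAC with minimal samples drawn jointly from two datasets
    (e.g. pairwise matches with two different calibrated cameras)."""
    rng = rng or np.random.default_rng()
    best_model = None
    best_count = -1
    for _ in range(iters):
        sample_a = corr_a[rng.choice(len(corr_a), n_a, replace=False)]
        sample_b = corr_b[rng.choice(len(corr_b), n_b, replace=False)]
        for model in solve_minimal(sample_a, sample_b):   # solver may return several roots
            in_a = residuals(model, corr_a) < threshold
            in_b = residuals(model, corr_b) < threshold
            if in_a.sum() + in_b.sum() > best_count:      # consensus over both sets
                best_model, best_count = model, in_a.sum() + in_b.sum()
    return best_model
```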
arXiv: Computer Vision and Pattern Recognition | 2018
Timur Kuzhagaliyev; Neil T. Clancy; Mirek Janatka; Kevin Tchaka; Francisco Vasconcelos; Matthew J. Clarkson; Kurinchi Selvan Gurusamy; David J. Hawkes; Brian R. Davidson; Danail Stoyanov
Irreversible electroporation (IRE) is a soft tissue ablation technique suitable for treatment of inoperable tumours in the pancreas. The process involves applying a high voltage electric field to the tissue containing the mass using needle electrodes, leaving cancerous cells irreversibly damaged and vulnerable to apoptosis. Efficacy of the treatment depends heavily on the accuracy of needle placement and requires a high degree of skill from the operator. In this paper, we describe an Augmented Reality (AR) system designed to overcome the challenges associated with planning and guiding the needle insertion process. Our solution, based on the HoloLens (Microsoft, USA) platform, tracks the position of the headset, needle electrodes and ultrasound (US) probe in space. The proof-of-concept implementation of the system uses this tracking data to render real-time holographic guides on the HoloLens, giving the user insight into the current progress of needle insertion and an indication of the target needle trajectory. The operator's field of view is augmented using visual guides and a real-time US feed rendered on a holographic plane, eliminating the need to consult external monitors. Based on these early prototypes, we are aiming to develop a system that will lower the skill level required for IRE while increasing the overall accuracy of needle insertion and, hence, the likelihood of successful treatment.
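At the core of such a guidance system is chaining coordinate frames so that tracked instruments can be rendered in the headset's frame. The snippet below is a minimal sketch of that frame composition with 4x4 homogeneous transforms; the frame names and function are illustrative, not the system's actual API.

```python
import numpy as np

def needle_tip_in_headset(T_world_headset, T_world_tracker, tip_in_tracker):
    """Express a tracked needle tip in the headset's rendering frame.

    T_world_headset, T_world_tracker : 4x4 poses of the headset and the
    optical tracker in a shared world frame (illustrative names).
    tip_in_tracker : (3,) needle tip position reported by the tracker.
    """
    T_headset_world = np.linalg.inv(T_world_headset)
    tip_h = np.append(tip_in_tracker, 1.0)              # homogeneous point
    return (T_headset_world @ T_world_tracker @ tip_h)[:3]
```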
Knee | 2018
Vincent V.G. An; Yusuf Mirza; Evangelos B. Mazomenos; Francisco Vasconcelos; Danail Stoyanov; Sam Oussedik
PURPOSE This study aimed to determine the effect of a simulation course on the gaze fixation strategies of participants performing arthroscopy. METHODS Participants (n = 16) were recruited from two one-day simulation-based knee arthroscopy courses, and were asked to undergo a task before and after the course, which involved identifying a series of arthroscopic landmarks. The gaze fixation of the participants was recorded with a wearable eye-tracking system. The time taken to complete the task and the proportion of time participants spent with their gaze fixated on the arthroscopic stack, the knee model, and away from the stack or knee model were recorded. RESULTS Participants demonstrated a statistically significant decrease in completion time in their second attempt compared with the first (P = 0.001). In their second attempt, they also demonstrated improved gaze fixation strategies, with a significantly increased amount (P = 0.008) and proportion of time (P = 0.003) spent fixated on the screen versus the knee model. CONCLUSION Simulation improved arthroscopic skills in orthopaedic surgeons, specifically by improving their gaze control strategies and decreasing the amount of time taken to identify and mark landmarks in an arthroscopic task.
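The dwell-time proportions reported in the results can be computed directly from labelled fixation events. The helper below assumes a simple (area-of-interest, duration) input format, which is illustrative rather than the eye tracker's actual output schema.

```python
from collections import defaultdict

def fixation_proportions(fixations):
    """Proportion of task time spent fixating each area of interest.

    fixations : iterable of (aoi_label, duration_seconds) pairs, e.g.
    [("screen", 1.2), ("knee_model", 0.4), ("elsewhere", 0.1), ...].
    """
    totals = defaultdict(float)
    for aoi, duration in fixations:
        totals[aoi] += duration
    task_time = sum(totals.values())
    return {aoi: t / task_time for aoi, t in totals.items()}
```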
Annals of Biomedical Engineering | 2018
Krittin Pachtrachai; Francisco Vasconcelos; François Chadebecq; Max Allan; Stephen Hailes; Vijay Pawar; Danail Stoyanov
Hand–eye calibration aims at determining the unknown rigid transformation between the coordinate systems of a robot arm and a camera. Existing hand–eye algorithms using closed-form solutions followed by iterative non-linear refinement provide accurate calibration results within a broad range of robotic applications. However, in the context of surgical robotics hand–eye calibration is still a challenging problem due to the required accuracy within the millimetre range, coupled with a large displacement between endoscopic cameras and the robot end-effector. This paper presents a new method for hand–eye calibration based on the adjoint transformation of twist motions that solves the problem iteratively through alternating estimations of rotation and translation. We show that this approach converges to a solution with higher accuracy than closed-form initialisations within a broad range of synthetic and real experiments. We also propose a stereo hand–eye formulation that can be used in the context of both our proposed method and previous state-of-the-art closed-form solutions. Experiments with real data are conducted with a stereo laparoscope, the KUKA robot arm manipulator, and the da Vinci surgical robot, showing that both our new alternating solution and the explicit representation of stereo camera hand–eye relations contribute to a higher calibration accuracy.
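For context, the classical two-stage closed-form estimate that methods like this one refine solves AX = XB by recovering rotation from the motion axes and then translation from a linear system. The sketch below is that textbook baseline (Park/Martin-style), not the paper's adjoint-based alternating method.

```python
import numpy as np

def rotation_log(R):
    """Axis-angle vector of a rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-9:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def hand_eye_closed_form(A_list, B_list):
    """Closed-form hand-eye estimate X from matched relative motions A_i X = X B_i.

    A_list, B_list : lists of 4x4 relative motions of the robot arm and camera.
    """
    # Rotation: a_i = R_X b_i for the motion axes, solved by Kabsch/SVD.
    a = np.array([rotation_log(A[:3, :3]) for A in A_list])
    b = np.array([rotation_log(B[:3, :3]) for B in B_list])
    U, _, Vt = np.linalg.svd(b.T @ a)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rx = Vt.T @ D @ U.T
    # Translation: stack (R_Ai - I) t_X = R_X t_Bi - t_Ai, solve least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    d = np.hstack([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(A_list, B_list)])
    tx, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

An alternating scheme of the kind the paper describes would iterate between re-estimating the rotation and the translation rather than fixing each in a single pass.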
Medical Image Computing and Computer Assisted Intervention | 2017
Rene M. Lacher; Francisco Vasconcelos; David C. Bishop; Norman R. Williams; Mohammed Keshtgar; David J. Hawkes; John H. Hipwell; Danail Stoyanov
Breast cancer is the most prevalent cancer type in women, and while its survival rate is generally high, the aesthetic outcome is an increasingly important factor when evaluating different treatment alternatives. 3D scanning and reconstruction techniques offer a flexible tool for building detailed and accurate 3D breast models that can be used both pre-operatively for surgical planning and post-operatively for aesthetic evaluation. This paper compares the accuracy of low-cost 3D scanning technologies with the significantly more expensive state-of-the-art 3D commercial scanners in the context of breast 3D reconstruction. We present results from 28 synthetic and clinical RGBD sequences, including 12 unique patients and an anthropomorphic phantom, demonstrating the applicability of low-cost RGBD sensors to real clinical cases. Body deformation and homogeneous skin texture pose challenges to the studied reconstruction systems. Although these should be addressed appropriately if higher model quality is warranted, we observe that low-cost sensors are able to obtain valuable reconstructions comparable to the state-of-the-art within an error margin of 3 mm.
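Accuracy comparisons of this kind typically reduce to surface-to-surface distances between a low-cost reconstruction and a reference scan. The sketch below is a generic way to compute such statistics with a KD-tree; it assumes both point clouds are already rigidly aligned (e.g. by ICP) and is not the paper's specific evaluation pipeline. Distances inherit the units of the input points (millimetres if the scans are in mm).

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_error(reconstructed_pts, reference_pts):
    """Nearest-neighbour distance statistics from a reconstruction to a reference scan.

    reconstructed_pts, reference_pts : (N, 3) arrays of already-aligned 3D points.
    """
    dists, _ = cKDTree(reference_pts).query(reconstructed_pts)
    return {
        "rms": float(np.sqrt(np.mean(dists ** 2))),
        "mean": float(dists.mean()),
        "p95": float(np.percentile(dists, 95)),
    }
```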
Computer Assisted Radiology and Surgery | 2016
Francisco Vasconcelos; Donald Peebles; Sebastien Ourselin; Danail Stoyanov
International Conference on Robotics and Automation | 2018
Krittin Pachtrachai; Francisco Vasconcelos; George Dwyer; Vijay Pawar; Stephen Hailes; Danail Stoyanov
International Conference on Robotics and Automation | 2018
Francisco Vasconcelos; Evangelos Mazomenos; John D. Kelly; Sebastien Ourselin; Danail Stoyanov