Christos Bergeles
University College London
Publications
Featured research published by Christos Bergeles.
Intelligent Robots and Systems | 2016
Konrad Leibrandt; Christos Bergeles; Guang-Zhong Yang
Safe and effective telemanipulation of concentric tube robots is hindered by their complex, non-intuitive kinematics. Guidance schemes in the form of attractive and repulsive constraints can simplify task execution and facilitate natural operation of the robot by clinicians. The real-time seamless calculation and application of guidance, however, requires computationally efficient algorithms that solve the non-linear inverse kinematics of the robot and guarantee that the commanded robot configuration is stable and sufficiently far from the anatomy. This paper presents a multi-processor framework that allows on-the-fly calculation of optimal safe paths based on rapid workspace and roadmap pre-computation. The real-time nature of the developed software enables complex guidance constraints to be implemented with minimal computational overhead. A user study on a simulated challenging clinical problem demonstrated that the incorporated guiding constraints are highly beneficial for fast and accurate navigation with concentric tube robots.
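The roadmap idea can be illustrated with a minimal sketch: configurations are sampled offline, their tip positions cached, and at run-time the nearest configuration that keeps a safety clearance from the anatomy is retrieved. The forward-kinematics function, the 4-DoF parameterisation, and all dimensions below are hypothetical stand-ins, not the paper's CTR model.

```python
import numpy as np

# Hypothetical forward kinematics: maps a 4-DoF tube configuration
# (two rotations, two translations) to a tip position in 3-D. The
# real CTR kinematics is far more complex; this stands in for it.
def forward_kinematics(q):
    a1, a2, t1, t2 = q
    x = (t1 + t2) * np.cos(a1) * 0.5
    y = (t1 + t2) * np.sin(a1) * 0.5
    z = t1 + 0.5 * t2 * np.cos(a2)
    return np.array([x, y, z])

def precompute_roadmap(n_samples, rng):
    # Densely sample the configuration space offline ...
    qs = rng.uniform([0, 0, 0.01, 0.01], [2 * np.pi, 2 * np.pi, 0.1, 0.05],
                     size=(n_samples, 4))
    # ... and cache the resulting tip positions (the reachable workspace).
    tips = np.array([forward_kinematics(q) for q in qs])
    return qs, tips

def nearest_safe_configuration(target, qs, tips, anatomy_center, clearance):
    # Discard roadmap nodes closer than `clearance` to the anatomy,
    # then return the configuration whose tip is nearest the target.
    safe = np.linalg.norm(tips - anatomy_center, axis=1) > clearance
    d = np.linalg.norm(tips[safe] - target, axis=1)
    return qs[safe][np.argmin(d)]

rng = np.random.default_rng(0)
qs, tips = precompute_roadmap(2000, rng)
q = nearest_safe_configuration(np.array([0.02, 0.0, 0.08]),
                               qs, tips, np.array([0.0, 0.0, 0.0]), 0.03)
```

At run-time only the masked nearest-neighbour query remains, which is what makes interactive-rate guidance feasible.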
International Conference on Robotics and Automation | 2017
George Dwyer; François Chadebecq; Marcel Tella Amo; Christos Bergeles; Efthymios Maneas; Vijay Pawar; Emmanuel Vander Poorten; Jan Deprest; Sebastien Ourselin; Paolo De Coppi; Tom Vercauteren; Danail Stoyanov
Twin–twin transfusion syndrome requires interventional treatment using a fetoscopically introduced laser to sever the shared blood supply between the fetuses. This is a delicate procedure relying on small instrumentation with limited articulation to guide the laser tip and a narrow field of view to visualize all relevant vascular connections. In this letter, we report on a mechatronic design for a comanipulated instrument that combines concentric tube actuation with a larger manipulator constrained by a remote centre of motion. A stereoscopic camera is mounted at the distal tip and used for imaging. Our mechanism provides enhanced dexterity and stability of the imaging device. We demonstrate that the imaging system can be used for computing geometry and enhancing the view at the operating site. Results using electromagnetic sensors for verification and comparison to visual odometry from the distal sensor show that our system is promising and can be developed further for multiple clinical needs in fetoscopic procedures.
Biomedical Optics Express | 2017
Christos Bergeles; B Davidson; Melissa Kasilian; Angelos Kalitzeos; Joseph Carroll; Alfredo Dubra; Michel Michaelides; Sebastien Ourselin
Precise measurements of photoreceptor numerosity and spatial arrangement are promising biomarkers for the early detection of retinal pathologies and may be valuable in the evaluation of retinal therapies. Adaptive optics scanning light ophthalmoscopy (AOSLO) is a method of imaging that corrects for aberrations of the eye to acquire high-resolution images that reveal the photoreceptor mosaic. These images are typically graded manually by experienced observers, precluding robust, large-scale use of the technology. This paper addresses unsupervised automated detection of cones in non-confocal, split-detection AOSLO images. Our algorithm leverages the appearance of split-detection images to create a cone model that is used for classification. Results show that it compares favorably to the state-of-the-art, both for images of healthy retinas and for images from patients affected by Stargardt disease. The algorithm presented also compares well to manual annotation while excelling in speed.
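The idea of a cone model matched against split-detection images can be sketched with plain normalised cross-correlation: cones appear as dark-to-bright dipoles, so a synthetic dipole template scanned over the image yields candidate cone locations. The template shape and threshold below are illustrative assumptions, not the paper's learned model.

```python
import numpy as np

def dipole_template(r=3):
    # Split-detection cones show a dark-to-bright horizontal dipole;
    # this synthetic template is a stand-in for a model learned from data.
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return np.sign(x) * np.exp(-(x**2 + y**2) / (2 * (r / 2)**2))

def match_cones(img, tmpl, thresh=0.6):
    # Slide the template over the image and keep positions whose
    # normalised cross-correlation with the template exceeds `thresh`.
    r = tmpl.shape[0] // 2
    t = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
    h, w = img.shape
    hits = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            p = img[i - r:i + r + 1, j - r:j + r + 1]
            p = (p - p.mean()) / (p.std() + 1e-9)
            if (p * t).mean() > thresh:
                hits.append((i, j))
    return hits

# Synthetic image with one dipole 'cone' centred at (10, 10).
img = np.zeros((21, 21))
img[7:14, 7:14] += dipole_template(3)
cones = match_cones(img, dipole_template(3))
```

A real detector would add non-maximum suppression to collapse neighbouring hits into single cone coordinates.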
IEEE Robotics & Automation Magazine | 2017
Konrad Leibrandt; Christos Bergeles; Guang-Zhong Yang
The complex, nonintuitive kinematics of concentric tube robots (CTRs) can make their telemanipulation challenging. Collaborative control schemes that guide the operating clinician via repulsive and attractive force feedback based on intraoperative path planning can simplify this task. Computationally efficient algorithms, however, are required to perform rapid path planning and solve the inverse kinematics of the robot at interactive rates. Until now, ensuring stable and collision-free robot configurations required long periods of precomputation to establish kinematic look-up tables. This article presents a high-performance robot kinematics software architecture, which is used together with a multinode computational framework to rapidly calculate dense path plans for safe telemanipulation of unstable CTRs. The proposed software architecture enables on-the-fly, incremental, inverse-kinematics estimation at interactive rates, and it is tailored to modern computing architectures with efficient multicore central processing units (CPUs). The effectiveness of the architecture is quantified with computational-complexity metrics and a clinically demanding simulation inspired by neurosurgery for hydrocephalus treatment. By achieving real-time path planning, active constraints (ACs) can be generated on the fly to support the operator in faster and more reliable execution of telemanipulation tasks.
Scientific Reports | 2018
B Davidson; Angelos Kalitzeos; Joseph Carroll; Alfredo Dubra; Sebastien Ourselin; Michel Michaelides; Christos Bergeles
We present a robust deep learning framework for the automatic localisation of cone photoreceptor cells in Adaptive Optics Scanning Light Ophthalmoscope (AOSLO) split-detection images. Monitoring cone photoreceptors with AOSLO imaging grants an excellent view into retinal structure and health, provides new perspectives into well-known pathologies, and allows clinicians to monitor the effectiveness of experimental treatments. The Multi-Dimensional Recurrent Neural Network (MDRNN) approach developed in this paper is the first method capable of reliably and automatically identifying cones in both healthy retinas and retinas afflicted with Stargardt disease. Therefore, it represents a leap forward in the computational image processing of AOSLO images, and can provide clinical support in ongoing longitudinal studies of disease progression and therapy. We validate our method using images from healthy subjects and subjects with the inherited retinal pathology Stargardt disease, which significantly alters image quality and cone density. We conduct a thorough comparison of our method with current state-of-the-art methods, and demonstrate that the proposed approach is both more accurate and appreciably faster in localising cones. As further validation of the method's robustness, we demonstrate it can be successfully applied to images of retinas with pathologies not present in the training data: achromatopsia and retinitis pigmentosa.
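To show the structure (not the trained network), here is a single forward sweep of a multi-dimensional RNN over an image: each pixel's hidden state depends on its own input and on the hidden states of the pixels above and to the left. A full MDRNN combines sweeps from all four corners with learned weights; the toy weights below are random and purely illustrative.

```python
import numpy as np

def mdrnn_sweep(img, Wx, Wh, Wv, b):
    # One top-left-to-bottom-right sweep: hidden state at (i, j) mixes
    # the pixel input with the states of the pixel above and to the left.
    h_dim = Wh.shape[0]
    H, W = img.shape
    h = np.zeros((H, W, h_dim))
    for i in range(H):
        for j in range(W):
            up = h[i - 1, j] if i > 0 else np.zeros(h_dim)
            left = h[i, j - 1] if j > 0 else np.zeros(h_dim)
            h[i, j] = np.tanh(Wx * img[i, j] + Wh @ up + Wv @ left + b)
    return h

# Random (untrained) weights, small image: structure only.
rng = np.random.default_rng(3)
h_dim = 4
Wx = rng.normal(size=h_dim) * 0.5
Wh = rng.normal(size=(h_dim, h_dim)) * 0.3
Wv = rng.normal(size=(h_dim, h_dim)) * 0.3
b = np.zeros(h_dim)
features = mdrnn_sweep(rng.random((12, 12)), Wx, Wh, Wv, b)
```

The per-pixel hidden states would feed a small classifier head that outputs a cone-probability map; training all weights end-to-end is what the paper's framework provides.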
Ophthalmic Technologies XXVIII | 2018
Brice Thurin; Edward Bloch; Sotiris Nousias; Sebastien Ourselin; Pearse A. Keane; Christos Bergeles
Vitreoretinal surgery is moving towards 3D visualization of the surgical field, which requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, where an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, providing enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is covered by an array of micro-lenses that projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of it, or display an extended depth-of-field image where everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
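Digital refocusing from a light-field snapshot reduces to shift-and-sum: each sub-aperture view is shifted in proportion to its micro-lens offset and the views are averaged, so a single parameter selects the focal depth. The sketch below uses a synthetic 3x3 view array; it illustrates the principle, not the device's processing pipeline.

```python
import numpy as np

def refocus(views, alpha):
    # Shift-and-sum refocusing: views[u, v] is the sub-aperture image at
    # micro-lens offset (u, v); `alpha` selects the refocused depth plane.
    nu, nv, h, w = views.shape
    out = np.zeros((h, w))
    for u in range(nu):
        for v in range(nv):
            du = int(round(alpha * (u - nu // 2)))
            dv = int(round(alpha * (v - nv // 2)))
            out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (nu * nv)

# Synthetic light field: a bright dot whose position shifts with
# viewpoint, i.e. it lies off the nominal focal plane.
views = np.zeros((3, 3, 16, 16))
for u in range(3):
    for v in range(3):
        views[u, v, 8 + (u - 1), 8 + (v - 1)] = 1.0

sharp = refocus(views, alpha=-1.0)   # shifts cancel the parallax
blurred = refocus(views, alpha=0.0)  # plain average: dot is smeared
```

Sweeping `alpha` produces the focal stack mentioned above, and taking the per-pixel sharpest slice yields the extended depth-of-field image.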
Biomedical Optics Express | 2018
B Davidson; Angelos Kalitzeos; Joseph Carroll; Alfredo Dubra; Sebastien Ourselin; Michel Michaelides; Christos Bergeles
The field of view of high-resolution ophthalmoscopes that require the use of adaptive optics (AO) wavefront correction is limited by the isoplanatic patch of the eye, which varies across individual eyes and with the portion of the pupil used for illumination and/or imaging. Therefore, all current AO ophthalmoscopes have small fields of view comparable to, or smaller than, the isoplanatic patch, and the resulting images have to be stitched off-line to create larger montages. These montages are currently assembled either manually, by expert human graders, or automatically, often requiring several hours per montage. This arguably limits the applicability of AO ophthalmoscopy to studies with small cohorts and, moreover, prevents reviewing a real-time captured montage of all locations during image acquisition to further direct targeted imaging. In this work, we propose stitching the images with our novel algorithm, which uses Oriented FAST and Rotated BRIEF (ORB) descriptors and locality-sensitive hashing, and searches for a 'good enough' transformation, rather than the best possible, to achieve processing times of 1–2 minutes per montage of 250 images. Moreover, the proposed method produces montages which are as accurate as previous methods, when considering the image similarity metrics normalised mutual information (NMI) and normalised cross-correlation (NCC).
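The 'good enough' strategy can be illustrated with a simpler matcher than ORB plus hashing: estimate the inter-image translation from the peak of the FFT cross-power spectrum, and accept the first estimate whose peak clears a threshold instead of refining further. Phase correlation here is a stand-in for the paper's descriptor matching, and the threshold is an assumed value.

```python
import numpy as np

def phase_correlation(ref, img):
    # Returns (dy, dx) such that img ~= np.roll(ref, (dy, dx), axis=(0, 1)),
    # taken from the peak of the normalised cross-power spectrum.
    F = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    dy = dy - h if dy > h // 2 else dy   # wrap to signed shifts
    dx = dx - w if dx > w // 2 else dx
    return (int(dy), int(dx)), corr.max()

def good_enough_shift(ref, img, accept=0.3):
    # 'Good enough' acceptance: take the first estimate whose correlation
    # peak clears the threshold rather than searching for the optimum.
    shift, score = phase_correlation(ref, img)
    return shift if score >= accept else None

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(5, -3), axis=(0, 1))
shift = good_enough_shift(ref, img)
```

For a montage, each new image would be registered against its overlapping neighbours this way and placed as soon as an acceptable transform is found.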
IEEE Transactions on Robotics | 2017
Alessandro Vandini; Christos Bergeles; Ben Glocker; Petros Giataganas; Guang-Zhong Yang
Tracking and shape estimation of flexible robots that navigate through the human anatomy are prerequisites to safe intracorporeal control. Despite extensive research in kinematic and dynamic modeling, inaccuracies and shape deformation of the robot due to unknown loads and collisions with the anatomy make shape sensing important for intraoperative navigation. To address this issue, vision-based solutions have been explored. The task of 2-D tracking and 3-D shape reconstruction of flexible robots as they reach deep-seated anatomical locations is challenging, since the image acquisition techniques usually suffer from low signal-to-noise ratio or slow temporal responses. Moreover, tracking and shape estimation are thus far treated independently despite their coupled relationship. This paper aims to address tracking and shape estimation in a unified framework based on Markov random fields. By using concentric tube robots as an example, the proposed algorithm fuses information extracted from standard monoplane X-ray fluoroscopy with the kinematics model to achieve joint 2-D tracking and 3-D shape estimation in realistic clinical scenarios. Detailed performance analyses of the results demonstrate the accuracy of the method for both tracking and shape reconstruction.
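A 1-D analogue conveys the MRF formulation: treat each image column as a chain node, let a unary cost favour bright (instrument-like) pixels and a pairwise cost penalise row jumps, and find the minimum-cost curve exactly with dynamic programming. This sketches only the tracking term; the paper's model additionally couples the kinematics model and 3-D shape.

```python
import numpy as np

def track_curve(img, smooth=1.0):
    # Chain-MRF curve tracking solved by dynamic programming:
    # unary = -intensity (bright is cheap), pairwise = squared row jump.
    H, W = img.shape
    unary = -img
    rows = np.arange(H)
    jump = smooth * (rows[:, None] - rows[None, :]) ** 2  # (prev, cur)
    cost = unary[:, 0].copy()
    back = np.zeros((H, W), dtype=int)
    for j in range(1, W):
        total = cost[:, None] + jump
        best = np.argmin(total, axis=0)
        back[:, j] = best
        cost = total[best, rows] + unary[:, j]
    path = np.zeros(W, dtype=int)
    path[-1] = int(np.argmin(cost))
    for j in range(W - 1, 0, -1):       # backtrack the optimal labels
        path[j - 1] = back[path[j], j]
    return path

# Synthetic fluoroscopy-like frame: a bright diagonal curve in noise.
rng = np.random.default_rng(4)
img = 0.2 * rng.random((20, 15))
truth = np.arange(15) + 3
img[truth, np.arange(15)] = 1.0
path = track_curve(img, smooth=0.1)
```

The same unary-plus-pairwise structure, extended to 2-D labels and fused with the kinematic prior, is what the paper optimises jointly.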
International Conference on Robotics and Automation | 2016
Ning Liu; Christos Bergeles; Guang-Zhong Yang
Bronchoscopic interventions are widely performed for the diagnosis and treatment of lung diseases. However, most endobronchial devices lack a bendable tip, which restricts their ability to access distal bronchi with complex bifurcations. This paper presents the design of a new wire-driven continuum manipulator to help guide these devices. The proposed manipulator is built by assembling miniaturized blocks featuring interlocking circular joints. It maintains its integrity when the lengths of the actuation wires change due to shaft flex. It provides a relatively large central cavity for passing other instruments and enables two rotational degrees of freedom. These features make it suitable for procedures in which tubular anatomies are involved and the flexible shaft must be considerably bent in use, such as bronchoscopic interventions. A kinematic model is built to estimate the relationship between the translations of the actuation wires and the manipulator tip position. A scaled-up model is produced for evaluation experiments, and the results validate the performance of the proposed mechanism.
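Under the common constant-curvature assumption, the mapping from wire translations to tip position admits a compact closed form. The sketch below is generic continuum-robot kinematics, not the paper's model; the segment length and wire offset are illustrative values, not the prototype's dimensions.

```python
import numpy as np

def tip_position(dl1, dl2, L=0.06, r=0.004):
    # Constant-curvature kinematics for a 2-DoF wire-driven segment:
    # antagonistic wire translations dl1, dl2 (m) bend the segment about
    # two orthogonal axes. L = segment length, r = wire offset (assumed).
    tx, ty = dl1 / r, dl2 / r          # bending angle per axis
    theta = np.hypot(tx, ty)           # total bending angle
    if theta < 1e-9:
        return np.array([0.0, 0.0, L])  # straight segment
    phi = np.arctan2(ty, tx)           # direction of the bending plane
    rho = L / theta                    # radius of curvature
    return np.array([rho * (1 - np.cos(theta)) * np.cos(phi),
                     rho * (1 - np.cos(theta)) * np.sin(phi),
                     rho * np.sin(theta)])

straight = tip_position(0.0, 0.0)
bent = tip_position(0.004, 0.0)   # 1 rad bend in the x-z plane
```

Inverting this map (wire translations from a desired tip position) is what the evaluation experiments on the scaled-up model would exercise.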
intelligent robots and systems | 2016
Georgios Fagogenis; Christos Bergeles; Pierre E. Dupont
Concentric tube robots comprise telescopic precurved elastic tubes. The robot's tip and shape are controlled via relative tube motions, i.e. tube rotations and translations. Non-linear interactions between the tubes, e.g. friction and torsion, as well as uncertainty in the physical properties of the tubes themselves, e.g. Young's modulus, curvature, or stiffness, hinder accurate kinematic modelling. In this paper, we present a machine-learning-based methodology for kinematic modelling of concentric tube robots and in situ model adaptation. Our approach is based on Locally Weighted Projection Regression (LWPR). The model comprises an ensemble of linear models, each of which locally approximates the original complex kinematic relation. LWPR can accommodate model deviations by adjusting the respective local models at run-time, resulting in an adaptive kinematics framework. We evaluated our approach on data gathered from a three-tube robot, and report high accuracy across the robot's configuration space.
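A much-simplified stand-in for LWPR shows the core mechanism: a fixed set of Gaussian-weighted local linear models is blended by activation, and each incoming sample nudges the nearby models, giving run-time adaptation. Real LWPR also adds models and adapts receptive fields online; the toy target function and all hyperparameters below are assumptions.

```python
import numpy as np

class LocalLinearEnsemble:
    # Simplified LWPR-style model: Gaussian-activated local linear
    # models, blended by normalised activation, updated incrementally.
    def __init__(self, centers, width, dim_out):
        self.centers = centers                       # (K, dim_in)
        self.width = width
        K, dim_in = centers.shape
        self.W = np.zeros((K, dim_out, dim_in + 1))  # local linear params

    def _activations(self, x):
        d2 = ((self.centers - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * self.width ** 2))
        return w / (w.sum() + 1e-12)

    def predict(self, x):
        xb = np.append(x, 1.0)                       # homogeneous input
        w = self._activations(x)
        return sum(wk * Wk @ xb for wk, Wk in zip(w, self.W))

    def update(self, x, y, lr=0.05):
        # In situ adaptation: nudge each local model towards the new
        # sample, weighted by how strongly it is activated.
        xb = np.append(x, 1.0)
        w = self._activations(x)
        err = y - self.predict(x)
        for k in range(len(self.W)):
            self.W[k] += lr * w[k] * np.outer(err, xb)

# Learn a toy 1-D 'kinematics' y = sin(x) from streaming samples.
rng = np.random.default_rng(2)
model = LocalLinearEnsemble(np.linspace(0, np.pi, 8)[:, None], 0.4, 1)
for _ in range(4000):
    x = rng.uniform(0, np.pi, size=1)
    model.update(x, np.sin(x))
```

Because only the activated local models change, the same update rule can track slow drifts such as tube wear or friction changes without refitting the whole model.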