Menglong Ye
Imperial College London
Publications
Featured research published by Menglong Ye.
Medical Image Computing and Computer-Assisted Intervention | 2013
Menglong Ye; Stamatia Giannarou; Nisha Patel; Julian Teare; Guang-Zhong Yang
Recent advances in microscopic detection techniques include fluorescence spectroscopy, fibred confocal microscopy and optical coherence tomography. These methods can be integrated with miniaturised probes to assist endoscopy, thus enabling diseases to be detected at an early and pre-invasive stage, forgoing the need for histopathological samples and off-line analysis. Since optical biopsy does not leave visible marks after sampling, it is important to track the biopsy sites to enable accurate retargeting and subsequent serial examination. In this paper, a novel approach is proposed for pathological site retargeting in gastroscopic examinations. The proposed method is based on affine deformation modelling with geometrical association combined with cascaded online learning and tracking. It provides online in vivo retargeting, and is able to track pathological sites in the presence of tissue deformation. It is also robust to partial occlusions and can be applied to a range of imaging probes including confocal laser endomicroscopy.
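As an illustration of the retargeting idea only (not the paper's implementation), the sketch below fits a 2D affine transform to hypothetical tracked-landmark correspondences and uses it to map a stored biopsy site into the current frame; the fit_affine and retarget helpers and the toy coordinates are assumptions made for the example.

    # Illustrative sketch: retarget a stored biopsy site by fitting a 2D affine
    # transform to tracked landmark correspondences (toy data, not the paper's code).
    import numpy as np

    def fit_affine(ref_pts, cur_pts):
        """Least-squares affine transform mapping ref_pts -> cur_pts (N >= 3 points)."""
        ref_h = np.hstack([ref_pts, np.ones((len(ref_pts), 1))])   # homogeneous (N, 3)
        A_t, *_ = np.linalg.lstsq(ref_h, cur_pts, rcond=None)      # solve ref_h @ A.T = cur
        return A_t.T                                               # 2x3 affine matrix

    def retarget(site_xy, affine):
        """Map a biopsy site (x, y) from the reference frame into the current frame."""
        return affine @ np.array([site_xy[0], site_xy[1], 1.0])

    # Hypothetical landmark tracks: the current frame is a warped copy of the reference.
    ref = np.array([[10.0, 20.0], [80.0, 25.0], [50.0, 90.0], [15.0, 70.0]])
    cur = ref @ np.array([[1.05, 0.02], [-0.03, 0.98]]).T + np.array([4.0, -2.0])
    A = fit_affine(ref, cur)
    print(retarget((40.0, 50.0), A))   # biopsy site location in the current frame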
Medical Image Analysis | 2016
Menglong Ye; Stamatia Giannarou; Alexander Meining; Guang-Zhong Yang
With recent advances in biophotonics, techniques such as narrow band imaging, confocal laser endomicroscopy, fluorescence spectroscopy, and optical coherence tomography can be combined with normal white-light endoscopes to provide in vivo microscopic tissue characterisation, potentially avoiding the need for offline histological analysis. Despite the advantages of these techniques to provide online optical biopsy in situ, it is challenging for gastroenterologists to retarget the optical biopsy sites during endoscopic examinations. This is because optical biopsy does not leave any mark on the tissue. Furthermore, typical endoscopic cameras only have a limited field-of-view and the biopsy sites often enter or exit the camera view as the endoscope moves. In this paper, a framework for online tracking and retargeting is proposed based on the concept of tracking-by-detection. An online detection cascade is proposed in which a random forest classifier operates on random binary descriptors computed from Haar-like features. For robust retargeting, we have also proposed a RANSAC-based location verification component that incorporates shape context. The proposed detection cascade can be readily integrated with other temporal trackers. Detailed performance evaluation on in vivo gastrointestinal video sequences demonstrates the performance advantage of the proposed method over the current state-of-the-art.
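The sketch below conveys the general flavour of such a detection stage under assumed settings; the patch size, descriptor length, training-patch generation, and the use of scikit-learn's RandomForestClassifier are illustrative choices, not the published cascade.

    # Illustrative sketch: random binary descriptors from pixel-pair comparisons,
    # classified by a random forest (assumed setup, not the published cascade).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    PATCH, N_BITS = 32, 128                            # assumed patch size / code length
    PAIRS = rng.integers(0, PATCH, size=(N_BITS, 4))   # fixed random (x1, y1, x2, y2) per bit

    def binary_descriptor(patch):
        """Grey patch (PATCH x PATCH) -> binary vector of pairwise intensity comparisons."""
        x1, y1, x2, y2 = PAIRS.T
        return (patch[y1, x1] > patch[y2, x2]).astype(np.uint8)

    # Hypothetical training data: jittered views of the target site vs. random background.
    target = rng.random((PATCH, PATCH))
    pos = [target + 0.02 * rng.random((PATCH, PATCH)) for _ in range(20)]
    neg = [rng.random((PATCH, PATCH)) for _ in range(20)]
    X = np.array([binary_descriptor(p) for p in pos + neg])
    y = np.array([1] * len(pos) + [0] * len(neg))

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    query = target + 0.02 * rng.random((PATCH, PATCH))
    print(clf.predict_proba([binary_descriptor(query)])[0, 1])   # probability of target site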
International Conference on Robotics and Automation | 2017
Lin Zhang; Menglong Ye; Petros Giataganas; Michael D. Hughes; Guang-Zhong Yang
Robot-assisted minimally invasive surgery can benefit from the automation of common, repetitive or well-defined but ergonomically difficult tasks. One such task is the scanning of a pick-up endomicroscopy probe over a complex, undulating tissue surface to enhance the effective field-of-view through video mosaicing. In this paper, the da Vinci® surgical robot, through the dVRK framework, is used for autonomous scanning and 2D mosaicing over a user-defined region of interest. To achieve the level of precision required for high quality mosaic generation, which relies on sufficient overlap between consecutive image frames, visual servoing is performed using a combination of a tracking marker attached to the probe and the endomicroscopy images themselves. The resulting sub-millimetre accuracy of the probe motion allows for the generation of large mosaics with minimal intervention from the surgeon. Images are streamed from the endomicroscope and overlaid live onto the surgeon's view, while 2D mosaics are generated in real-time and fused into a 3D stereo reconstruction of the surgical scene, thus providing intuitive visualisation and fusion of the multi-scale images. The system therefore offers significant potential to enhance surgical procedures, by providing the operator with cellular-scale information over a larger area than could typically be achieved by manual scanning.
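A minimal sketch of the scan-path side of such a system is given below, assuming a flat rectangular region of interest; the raster_path helper, the field-of-view and overlap values, and the waypoint spacing are hypothetical stand-ins for the robot-specific trajectory generation.

    # Illustrative sketch: boustrophedon scan path over a rectangular region, with row
    # spacing chosen so consecutive sweeps overlap for mosaicing (assumed geometry).
    import numpy as np

    def raster_path(width_mm, height_mm, probe_fov_mm, overlap=0.3, step_mm=0.1):
        """Return an (N, 2) array of x-y waypoints covering the region of interest."""
        row_spacing = probe_fov_mm * (1.0 - overlap)   # keep adjacent rows overlapping
        waypoints = []
        for i, y in enumerate(np.arange(0.0, height_mm + 1e-9, row_spacing)):
            xs = np.arange(0.0, width_mm + 1e-9, step_mm)
            if i % 2 == 1:
                xs = xs[::-1]                          # reverse every other row
            waypoints.extend((x, y) for x in xs)
        return np.array(waypoints)

    path = raster_path(width_mm=10.0, height_mm=6.0, probe_fov_mm=0.5)
    print(path.shape, path[:3])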
Medical Image Computing and Computer-Assisted Intervention | 2016
Menglong Ye; Lin Zhang; Stamatia Giannarou; Guang-Zhong Yang
In robotic surgery, tool tracking is important for providing safe tool-tissue interaction and facilitating surgical skills assessment. Despite recent advances in tool tracking, existing approaches are faced with major difficulties in real-time tracking of articulated tools. Most algorithms are tailored for offline processing with pre-recorded videos. In this paper, we propose a real-time 3D tracking method for articulated tools in robotic surgery. The proposed method is based on the CAD model of the tools as well as robot kinematics to generate online part-based templates for efficient 2D matching and 3D pose estimation. A robust verification approach is incorporated to reject outliers in 2D detections, which is then followed by fusing inliers with robot kinematic readings for 3D pose estimation of the tool. The proposed method has been validated with phantom data, as well as ex vivo and in vivo experiments. The results derived clearly demonstrate the performance advantage of the proposed method when compared to the state-of-the-art.
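The sketch below illustrates the outlier-rejection-then-fusion idea in its simplest form, reduced to a single tool-tip position; the is_inlier and fuse_position helpers, the confidence weight, and the distance threshold are illustrative assumptions rather than the estimator described in the paper.

    # Illustrative sketch: reject vision detections far from the kinematic prediction,
    # then blend the remaining estimate with the kinematic reading (assumed weights).
    import numpy as np

    def is_inlier(p_vis, p_kin, max_offset_m=0.01):
        """Accept a detection only if it lies close to the kinematics-predicted tip."""
        return np.linalg.norm(np.asarray(p_vis) - np.asarray(p_kin)) < max_offset_m

    def fuse_position(p_kin, p_vis, w_vis=0.6):
        """Weighted blend of kinematic and vision-based tool-tip positions (metres)."""
        return (1.0 - w_vis) * np.asarray(p_kin) + w_vis * np.asarray(p_vis)

    p_kin = [0.102, -0.043, 0.150]   # hypothetical forward-kinematics tip position
    p_vis = [0.105, -0.041, 0.149]   # hypothetical vision-based detection
    if is_inlier(p_vis, p_kin):
        print(fuse_position(p_kin, p_vis))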
Computer Assisted Radiology and Surgery | 2016
Stamatia Giannarou; Menglong Ye; Gauthier Gras; Konrad Leibrandt; Hani J. Marcus; Guang-Zhong Yang
Purpose: In microsurgery, accurate recovery of the deformation of the surgical environment is important for mitigating the risk of inadvertent tissue damage and avoiding instrument maneuvers that may cause injury. The analysis of intraoperative microscopic data can allow the estimation of tissue deformation and provide the surgeon with useful feedback on the instrument forces exerted on the tissue. In practice, vision-based recovery of tissue deformation during tool–tissue interaction can be challenging due to tissue elasticity and unpredictable motion. Methods: The aim of this work is to propose an approach for deformation recovery based on quasi-dense 3D stereo reconstruction. The proposed framework incorporates a new stereo correspondence method for estimating the underlying 3D structure. Probabilistic tracking and surface mapping are used to estimate 3D point correspondences across time and recover localized tissue deformations in the surgical site. Results: We demonstrate the application of this method to estimating forces exerted on tissue surfaces. A clinically relevant experimental setup was used to validate the proposed framework on phantom data. The quantitative and qualitative performance evaluation results show that the proposed 3D stereo reconstruction and deformation recovery methods achieve submillimeter accuracy. The force–displacement model also provides accurate estimates of the exerted forces. Conclusions: A novel approach for tissue deformation recovery has been proposed based on reliable quasi-dense stereo correspondences. The proposed framework does not rely on additional equipment, allowing seamless integration with the existing surgical workflow. The performance evaluation analysis shows the potential clinical value of the technique.
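As a rough illustration of the geometry involved (standard rectified-stereo back-projection rather than the paper's quasi-dense matcher), the sketch below converts pixel correspondences to 3D points and turns a surface displacement into a force estimate through an assumed linear stiffness; the intrinsics, disparities, and stiffness value are hypothetical.

    # Illustrative sketch: rectified-stereo back-projection and a linear
    # force-displacement estimate (hypothetical intrinsics and stiffness).
    import numpy as np

    def triangulate(u, v, disparity, fx, fy, cx, cy, baseline_m):
        """Back-project rectified-stereo correspondences to 3D camera coordinates."""
        z = fx * baseline_m / disparity
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)

    fx = fy = 700.0
    cx, cy = 320.0, 240.0
    baseline_m = 0.005
    # Matched surface point before and after tool contact (hypothetical pixels/disparity).
    p_before = triangulate(np.array([300.0]), np.array([250.0]), np.array([35.0]),
                           fx, fy, cx, cy, baseline_m)
    p_after = triangulate(np.array([302.0]), np.array([255.0]), np.array([38.0]),
                          fx, fy, cx, cy, baseline_m)
    displacement_m = np.linalg.norm(p_after - p_before, axis=-1)
    stiffness_n_per_m = 120.0                    # assumed calibrated tissue stiffness
    print(stiffness_n_per_m * displacement_m)    # force estimate in newtons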
Medical Image Computing and Computer-Assisted Intervention | 2014
Menglong Ye; Edward Johns; Stamatia Giannarou; Guang-Zhong Yang
Endoscopic surveillance is a widely used method for monitoring abnormal changes in the gastrointestinal tract such as Barrett's esophagus. Direct visual assessment, however, is both time consuming and error prone, as it involves manual labelling of abnormalities on a large set of images. To assist surveillance, this paper proposes an online scene association scheme to summarise an endoscopic video into scenes, on the fly. This provides scene clustering based on visual contents, and also facilitates topological localisation during navigation. The proposed method is based on tracking and detection of visual landmarks on the tissue surface. A generative model is proposed for online learning of pairwise geometrical relationships between landmarks. This enables robust detection of landmarks and scene association under tissue deformation. Detailed experimental comparison and validation have been conducted on in vivo endoscopic videos to demonstrate the practical value of our approach.
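A simple online clustering stand-in, sketched below with assumed inputs, conveys the scene-association idea: each frame's feature vector is assigned to the most similar existing scene or opens a new one. The associate function, cosine threshold, and toy feature vectors are illustrative and not the paper's generative landmark model.

    # Illustrative sketch: assign each frame to the most similar existing scene, or
    # open a new scene when cosine similarity drops below a threshold (toy features).
    import numpy as np

    def associate(frames, threshold=0.9):
        """frames: iterable of 1-D feature vectors; returns one scene label per frame."""
        centres, labels = [], []
        for f in frames:
            f = np.asarray(f, dtype=float)
            f = f / (np.linalg.norm(f) + 1e-12)
            sims = [float(c @ f) for c in centres]
            if sims and max(sims) >= threshold:
                k = int(np.argmax(sims))
                centres[k] = (centres[k] + f) / np.linalg.norm(centres[k] + f)
            else:
                k = len(centres)
                centres.append(f)
            labels.append(k)
        return labels

    rng = np.random.default_rng(1)
    base_a, base_b = rng.random(64), rng.random(64)     # two hypothetical scenes
    frames = [base_a + 0.01 * rng.random(64) for _ in range(3)] + \
             [base_b + 0.01 * rng.random(64) for _ in range(3)]
    print(associate(frames))   # frames built from the two bases fall into two scenes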
International Conference on Robotics and Automation | 2017
Gauthier Gras; Konrad Leibrandt; Piyamate Wisanuvej; Petros Giataganas; Carlo A. Seneci; Menglong Ye; Jianzhong Shang; Guang-Zhong Yang
Traditional robotic surgical systems rely entirely on robotic arms to triangulate articulated instruments inside the human anatomy. This configuration can be ill-suited for working in tight spaces or during single access approaches, where little to no triangulation between the instrument shafts is possible. The control of these instruments is further obstructed by ergonomic issues: the presence of motion scaling imposes the use of clutching mechanics to avoid the workspace limitations of master devices, and forces the user to choose between slow, precise movements and fast, less accurate ones. This paper presents a bi-manual system using novel self-triangulating 6-degrees-of-freedom (DoF) tools through a flexible elbow, which are mounted on robotic arms. The control scheme for the resulting 9-DoF system is detailed, with particular emphasis placed on retaining maximum dexterity close to joint limits. Furthermore, this paper introduces the concept of gaze-assisted adaptive motion scaling. By combining eye tracking with hand motion and instrument information, the system is capable of inferring the user's destination and modifying the motion scaling accordingly. This safe, novel approach allows the user to quickly reach distant locations while retaining full precision for delicate manoeuvres. The performance and usability of this adaptive motion scaling are evaluated in a user study, showing a clear improvement in task completion speed and a reduced need for clutching.
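The sketch below shows one plausible form of such adaptive scaling, assuming a simple linear law between tip-to-target distance and the scaling factor; the adaptive_scale function, its bounds, and the example positions are assumptions for illustration, not the controller evaluated in the paper.

    # Illustrative sketch: scale master hand motion with the distance between the
    # instrument tip and the gaze-inferred target (assumed linear law and bounds).
    import numpy as np

    def adaptive_scale(tip_pos, gaze_target, s_min=0.2, s_max=1.0, d_max_m=0.05):
        """Motion-scaling factor: large for distant targets, small for fine work nearby."""
        d = np.linalg.norm(np.asarray(gaze_target) - np.asarray(tip_pos))
        return s_min + (s_max - s_min) * min(d / d_max_m, 1.0)

    tip = [0.000, 0.000, 0.100]
    target_far = [0.040, 0.010, 0.120]     # hypothetical gaze-estimated destination
    target_near = [0.002, 0.001, 0.101]
    print(adaptive_scale(tip, target_far), adaptive_scale(tip, target_near))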
Medical Image Computing and Computer-Assisted Intervention | 2016
Menglong Ye; Edward Johns; B Walter; Alexander Meining; Guang-Zhong Yang
For early diagnosis of malignancies in the gastrointestinal tract, surveillance endoscopy is increasingly used to monitor abnormal tissue changes in serial examinations of the same patient. Despite successes with optical biopsy for in vivo and in situ tissue characterisation, biopsy retargeting for serial examinations is challenging because tissue may change in appearance between examinations. In this paper, we propose an inter-examination retargeting framework for optical biopsy, based on an image descriptor designed for matching between endoscopic scenes over significant time intervals. Each scene is described by a hierarchy of regional intensity comparisons at various scales, offering tolerance to long-term change in tissue appearance whilst remaining discriminative. Binary coding is then used to compress the descriptor via a novel random forests approach, providing fast comparisons in Hamming space and real-time retargeting. Extensive validation conducted on 13 in vivo gastrointestinal videos, collected from six patients, shows that our approach outperforms state-of-the-art methods.
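A rough sketch of a multi-scale regional comparison descriptor is given below under assumed grid sizes; the multiscale_descriptor helper and its cell-pair comparisons are an illustrative variant, not the published descriptor or its random-forest binary coding.

    # Illustrative sketch: a multi-scale descriptor from comparisons of grid-cell mean
    # intensities; coarse grids tolerate appearance drift, fine grids stay discriminative.
    import numpy as np

    def multiscale_descriptor(grey, scales=(2, 4, 8)):
        """grey: 2-D intensity array; returns a boolean vector of cell-pair comparisons."""
        bits = []
        for s in scales:
            h, w = grey.shape[0] // s, grey.shape[1] // s
            cells = grey[: h * s, : w * s].reshape(s, h, s, w).mean(axis=(1, 3))
            bits.append((cells[:, :-1] > cells[:, 1:]).ravel())   # horizontal neighbours
            bits.append((cells[:-1, :] > cells[1:, :]).ravel())   # vertical neighbours
        return np.concatenate(bits)

    rng = np.random.default_rng(2)
    frame = rng.random((240, 320))            # stand-in for a grey endoscopic frame
    print(multiscale_descriptor(frame).shape)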
Computer Assisted Radiology and Surgery | 2017
Menglong Ye; Edward Johns; B Walter; Alexander Meining; Guang-Zhong Yang
Purpose: Serial endoscopic examinations of a patient are important for early diagnosis of malignancies in the gastrointestinal tract. However, retargeting for optical biopsy is challenging due to extensive tissue variations between examinations, requiring the method to be tolerant to these changes whilst enabling real-time retargeting. Method: This work presents an image retrieval framework for inter-examination retargeting. We propose both a novel image descriptor tolerant of long-term tissue changes and a novel descriptor matching method for real-time use. The descriptor is based on histograms generated from regional intensity comparisons over multiple scales, offering stability over long-term appearance changes at the higher levels, whilst remaining discriminative at the lower levels. The matching method then learns a hashing function using random forests, to compress the string and allow for fast image comparison by a simple Hamming distance metric. Results: A dataset containing 13 in vivo gastrointestinal videos was collected from six patients, representing serial examinations of each patient and including videos captured at significant time intervals. Precision-recall for retargeting shows that our new descriptor outperforms a number of alternative descriptors, whilst our hashing method outperforms a number of alternative hashing approaches. Conclusion: We have proposed a novel framework for optical biopsy in serial endoscopic examinations. A new descriptor, combined with a novel hashing method, achieves state-of-the-art retargeting, with validation on in vivo videos from six patients. Real-time performance also allows for practical integration without disturbing the existing clinical workflow.
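To convey the hashing-and-Hamming-matching step, the sketch below uses random-projection hashing as a stand-in for the forest-learned hash; the descriptor dimension, code length, and toy descriptors are assumptions for illustration.

    # Illustrative sketch: compress a real-valued descriptor to a short binary code with
    # random-projection hashing, then compare codes by Hamming distance (toy data).
    import numpy as np

    rng = np.random.default_rng(3)
    DESC_DIM, CODE_BITS = 512, 64
    HYPERPLANES = rng.standard_normal((CODE_BITS, DESC_DIM))   # fixed "learned" projections

    def hash_code(descriptor):
        """Keep the signs of the projections, packed into bytes."""
        return np.packbits((HYPERPLANES @ descriptor) > 0)

    def hamming(code_a, code_b):
        return int(np.unpackbits(np.bitwise_xor(code_a, code_b)).sum())

    d1 = rng.standard_normal(DESC_DIM)
    d2 = d1 + 0.1 * rng.standard_normal(DESC_DIM)   # same site seen in a later examination
    d3 = rng.standard_normal(DESC_DIM)              # a different site
    print(hamming(hash_code(d1), hash_code(d2)), hamming(hash_code(d1), hash_code(d3)))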
IEEE Robotics & Automation Magazine | 2017
Lin Zhang; Menglong Ye; Petros Giataganas; Michael D. Hughes; Adrian Bradu; Adrian Gh. Podoleanu; Guang-Zhong Yang
Minimally invasive surgery (MIS), performed through a small number of keyhole incisions, has become the standard of care for many general surgical procedures, reducing trauma, blood loss, and other complications and offering patients the prospect of a faster recovery with less postoperative pain. These improvements for the patient, however, require higher dexterity and complex instrument control by the surgeons. Keyhole incisions constrain the motion of surgical instruments, while the loss of stereovision when using a laparoscope or endoscope means that depth perception is much poorer than in traditional open surgery. The desire to tackle these issues has been the main driver behind the development of robotic MIS systems with stereovision. In particular, the da Vinci robot (Intuitive Surgical, Inc., Sunnyvale, California) is a successful surgical platform, used widely in the treatment of gynecological and urological cancers. While human guidance is essential for MIS, recent studies [1] have suggested that automation of some surgical subtasks, particularly those that are tedious and repetitive or require high precision, can be beneficial in improving accuracy and reducing the cognitive load of the surgeon. For example, several studies have investigated automation of surgical suturing subtasks, including using a suturing tool under fluorescence guidance [2], and other studies have explored areas such as autonomous tissue dissection [3].