
Publications


Featured research published by Uditha L. Jayarathne.


Medical Image Computing and Computer-Assisted Intervention | 2013

Robust Intraoperative US Probe Tracking Using a Monocular Endoscopic Camera

Uditha L. Jayarathne; A. Jonathan McLeod; Terry M. Peters; Elvis C. S. Chen

In the context of minimally invasive procedures involving both endoscopic video and ultrasound, we present a vision-based method to track the ultrasound probe using a standard monocular laparoscopic camera. This approach requires only cosmetic modification to the ultrasound probe and obviates the need for magnetic tracking of either instrument. We describe an Extended Kalman Filter framework that solves simultaneously for feature correspondence and pose estimation, and is able to track a 3D pattern on the surface of the ultrasound probe in near real-time. The tracking capability is demonstrated by performing an ultrasound calibration of a visually tracked ultrasound probe, using a standard endoscopic video camera. Ultrasound calibration resulted in a mean target registration error (TRE) of 2.3 mm, and comparison with an external optical tracker demonstrated a mean fiducial registration error (FRE) of 4.4 mm between the two tracking systems.
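The abstract does not give implementation details, but the predict/update cycle at the heart of such an Extended Kalman Filter can be sketched as follows. This is a minimal sketch, assuming a generic motion model f, a measurement model h that projects the probe's 3D pattern into the image for a given pose, and finite-difference Jacobians; the filter's joint handling of feature correspondence is omitted.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func at x."""
    y0 = func(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - y0) / eps
    return J

def ekf_step(x, P, z, f, h, Q, R):
    """One EKF predict/update cycle.
    x: pose state, P: state covariance, z: stacked 2D feature positions,
    f: motion model, h: projects the probe pattern into the image."""
    # Predict: propagate the state and covariance through the motion model.
    F = numerical_jacobian(f, x)
    x = f(x)
    P = F @ P @ F.T + Q
    # Update: correct with the innovation between measured and predicted features.
    H = numerical_jacobian(h, x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(x.size) - K @ H) @ P
    return x, P
```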


Computer Assisted Radiology and Surgery | 2017

Hand-eye calibration for surgical cameras: a Procrustean Perspective-n-Point solution

Isabella Morgan; Uditha L. Jayarathne; Adam Rankin; Terry M. Peters; Elvis C. S. Chen

Purpose: Surgical cameras are prevalent in modern operating theatres and are often used as surrogates for direct vision. A surgical navigation system is a useful adjunct, but requires an accurate "hand-eye" calibration to determine the geometrical relationship between the surgical camera and its tracking markers.

Methods: Using a tracked ball-tip stylus, we formulated hand-eye calibration as a Perspective-n-Point problem, which can be solved efficiently and accurately using as few as 15 measurements.

Results: The proposed hand-eye calibration algorithm was applied to three types of camera and validated against five other widely used methods. Using projection error as the accuracy metric, our proposed algorithm compared favourably with existing methods.

Conclusion: We present a fully automated hand-eye calibration technique, based on Procrustean point-to-line registration, which provides superior results for calibrating surgical cameras when compared to existing methods.
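As a rough illustration of casting hand-eye calibration as a Perspective-n-Point problem, the sketch below feeds stylus-tip measurements to OpenCV's generic iterative PnP solver; this stands in for the paper's Procrustean point-to-line solution, and the variable names and data-collection step are assumptions.

```python
import cv2
import numpy as np

def hand_eye_pnp(tip_in_marker, tip_in_image, K, dist_coeffs):
    """Estimate the marker-to-camera ("hand-eye") transform.
    tip_in_marker: (N, 3) stylus-tip positions expressed in the frame of
    the tracking marker attached to the camera; tip_in_image: (N, 2)
    detected projections of the ball tip. ~15 measurements may suffice."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(tip_in_marker, dtype=np.float64),
        np.asarray(tip_in_image, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    assert ok, "PnP failed"
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T  # maps marker-frame points into the camera frame
```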


Proceedings of SPIE | 2016

Towards disparity joint upsampling for robust stereoscopic endoscopic scene reconstruction in robotic prostatectomy

Xiongbiao Luo; A. Jonathan McLeod; Uditha L. Jayarathne; Stephen E. Pautler; Christopher M. Schlacta; Terry M. Peters

Three-dimensional (3-D) scene reconstruction from binocular stereoscopic laparoscopic video is an effective way to expand the limited surgical field and augment visualization of the structure of the organ being operated on in minimally invasive surgery. However, currently available reconstruction approaches are limited by image noise, occlusions, and textureless or blurred structures. In particular, an endoscope inside the body carries only a limited light source, resulting in illumination non-uniformities across the visualized field. These limitations unavoidably degrade stereo image quality and lead to low-resolution, inaccurate disparity maps, which in turn produce blurred edge structures in the 3-D scene reconstruction. This paper proposes an improved stereo correspondence framework that integrates cost-volume filtering with joint upsampling for robust disparity estimation. Joint bilateral upsampling, joint geodesic upsampling, and tree-filtering upsampling were compared in terms of the disparity accuracy they achieve. The experimental results demonstrate that joint upsampling provides an effective way to boost disparity estimation and hence to improve 3-D reconstruction of the surgical endoscopic scene. Moreover, joint bilateral upsampling generally outperforms the other two upsampling methods in disparity estimation.
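Of the three schemes compared, joint bilateral upsampling is the easiest to sketch: each high-resolution disparity becomes a weighted average of nearby low-resolution disparities, with weights combining spatial distance and radiometric similarity in the high-resolution guide image. A didactic sketch assuming a grayscale guide; parameter values are illustrative.

```python
import numpy as np

def joint_bilateral_upsample(disp_lo, guide_hi, scale,
                             radius=2, sigma_s=1.0, sigma_r=0.05):
    """disp_lo: (h, w) low-res disparity; guide_hi: (H, W) grayscale
    high-res guide with H = h * scale and W = w * scale."""
    H, W = guide_hi.shape
    disp_hi = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale      # position on the low-res grid
            num, den = 0.0, 0.0
            for j in range(-radius, radius + 1):
                for i in range(-radius, radius + 1):
                    yy, xx = int(yl) + j, int(xl) + i
                    if not (0 <= yy < disp_lo.shape[0]
                            and 0 <= xx < disp_lo.shape[1]):
                        continue
                    # Spatial weight, measured on the low-res grid.
                    ws = np.exp(-((yl - yy) ** 2 + (xl - xx) ** 2)
                                / (2 * sigma_s ** 2))
                    # Range weight, measured in the high-res guide.
                    gy = min(int(yy * scale), H - 1)
                    gx = min(int(xx * scale), W - 1)
                    dI = guide_hi[y, x] - guide_hi[gy, gx]
                    wr = np.exp(-dI * dI / (2 * sigma_r ** 2))
                    num += ws * wr * disp_lo[yy, xx]
                    den += ws * wr
            disp_hi[y, x] = num / den if den > 0 else 0.0
    return disp_hi
```

The quadruple loop is for clarity only; a practical implementation vectorizes the neighbourhood gather or uses a separable approximation.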


International Conference on Image and Graphics | 2015

Binocular Endoscopic 3-D Scene Reconstruction Using Color and Gradient-Boosted Aggregation Stereo Matching for Robotic Surgery

Xiongbiao Luo; Uditha L. Jayarathne; Stephen Pautler; Terry M. Peters

This paper seeks to develop fast and accurate endoscopic stereo 3-D scene reconstruction for image-guided robotic surgery. Although stereo 3-D reconstruction techniques have been widely discussed over the last few decades, they remain challenging for endoscopic stereo images with photometric variations, noise, and specularities. To address these limitations, we propose a robust stereo matching framework that constructs its cost function from image gradients and three-channel color information for endoscopic stereo scene 3-D reconstruction. Color information is powerful for textureless stereo pairs, while gradients are robust to textured structures under noise and illumination change. We evaluate our stereo matching framework on clinical stereoscopic endoscopic sequences acquired from patients. Experimental results demonstrate that our approach significantly outperforms currently available methods. In particular, our framework reconstructed 99.5 % of the stereo image density, compared with at most 87.6 % of the scene for the other matching strategies evaluated.
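The cost construction described here can be sketched as a blend of truncated color and gradient absolute differences, a common form for such cost volumes; the blending weight, truncation thresholds, and gradient operator below are illustrative assumptions rather than the paper's published values.

```python
import numpy as np

def matching_cost(left, right, d, alpha=0.1, tau_c=21.0, tau_g=7.0):
    """Per-pixel cost of matching left[y, x] to right[y, x - d].
    left/right: float RGB images. Note that np.roll wraps at the image
    border; a real implementation masks the first d columns."""
    shifted = np.roll(right, d, axis=1)     # shifted[y, x] = right[y, x - d]
    cost_color = np.minimum(np.abs(left - shifted).mean(axis=2), tau_c)
    grad_l = np.gradient(left.mean(axis=2), axis=1)
    grad_r = np.gradient(shifted.mean(axis=2), axis=1)
    cost_grad = np.minimum(np.abs(grad_l - grad_r), tau_g)
    return alpha * cost_color + (1.0 - alpha) * cost_grad

# The cost volume is one such slice per candidate disparity, e.g.:
# volume = np.stack([matching_cost(L, R, d) for d in range(d_max)])
```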


Workshop on Augmented Environments for Computer-Assisted Interventions | 2015

Simultaneous Estimation of Feature Correspondence and Stereo Object Pose with Application to Ultrasound Augmented Robotic Laparoscopy

Uditha L. Jayarathne; Xiongbiao Luo; Elvis C. S. Chen; Terry M. Peters

In-situ visualization of ultrasound in robot-assisted surgery requires robust, real-time computation of the pose of the intra-corporeal ultrasound (US) probe with respect to the stereo laparoscopic camera. Image-based, intrinsic methods of computing this relative pose must overcome challenges posed by the irregular illumination, partial feature occlusion, and clutter that are unavoidable in practical robotic laparoscopy. In this paper, we extend a state-of-the-art simultaneous monocular pose and correspondence estimation framework to a stereo imaging model. The method is robust to partial feature occlusion and clutter, and does not require explicit feature matching. Through extensive experiments, we demonstrate that, in terms of accuracy, the proposed method outperforms both the conventional stereo pose estimation approach and the state-of-the-art monocular camera-based method. Both quantitative and qualitative results are presented.
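The joint pose-and-correspondence estimator itself is not reproduced here, but the stereo measurement model underlying it can be sketched: the probe's pattern points are projected into both cameras of a calibrated rig, and the stacked residuals drive the estimator. The pinhole model and all names below are illustrative assumptions.

```python
import numpy as np

def stereo_project(points_probe, T_cam_probe, K_left, K_right, T_right_left):
    """Project probe-frame 3D pattern points into both images of a rig.
    T_cam_probe: 4x4 probe-to-left-camera pose being estimated;
    T_right_left: fixed 4x4 left-to-right extrinsic from rig calibration."""
    def project(K, T):
        p = (T[:3, :3] @ points_probe.T).T + T[:3, 3]   # to camera frame
        uv = (K @ p.T).T
        return uv[:, :2] / uv[:, 2:3]                   # perspective divide
    uv_left = project(K_left, T_cam_probe)
    uv_right = project(K_right, T_right_left @ T_cam_probe)
    return np.hstack([uv_left, uv_right])  # one stacked measurement per point
```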


Revised Selected Papers of the Second International Workshop on Computer-Assisted and Robotic Endoscopy - Volume 9515 | 2015

Stereoscopic Motion Magnification in Minimally-Invasive Robotic Prostatectomy

A. Jonathan McLeod; John S. H. Baxter; Uditha L. Jayarathne; Stephen E. Pautler; Terry M. Peters; Xiongbiao Luo

The removal of the prostate is a common treatment option for localized prostate cancer. Robotic prostatectomy uses endoscopic cameras to provide the surgeon with a stereoscopic view of the surgical scene. This scene is often difficult to interpret because of anatomical variation: critical structures such as the neurovascular bundles alongside the prostate are affected by variations in the size and shape of the prostate. The objective of this article is to develop a real-time stereoscopic video processing framework that improves the perceptibility of the surgical scene, using Eulerian motion magnification to exaggerate the subtle pulsatile behavior of the neurovascular bundles. This framework has been validated both on digital phantoms and through retrospective analysis of robotic prostatectomy video.
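A minimal sketch of the Eulerian idea, assuming frames normalized to [0, 1]: temporally band-pass every pixel around the cardiac frequency and add the amplified band back to the input. Full Eulerian video magnification filters a spatial pyramid decomposition rather than raw pixels, and the band limits and gain below are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_pulsation(frames, fps, f_lo=0.8, f_hi=2.0, alpha=20.0):
    """frames: (T, H, W) float array in [0, 1]. The 0.8-2.0 Hz band
    covers roughly 50-120 beats per minute."""
    b, a = butter(2, [f_lo / (fps / 2), f_hi / (fps / 2)], btype="band")
    band = filtfilt(b, a, frames, axis=0)   # per-pixel temporal band-pass
    return np.clip(frames + alpha * band, 0.0, 1.0)
```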


Medical Image Computing and Computer-Assisted Intervention | 2014

Enhanced Differential Evolution to Combine Optical Mouse Sensor with Image Structural Patches for Robust Endoscopic Navigation

Xiongbiao Luo; Uditha L. Jayarathne; A. Jonathan McLeod; Kensaku Mori

Endoscopic navigation generally integrates different modalities of sensory information in order to continuously locate an endoscope relative to suspicious tissues in the body during interventions. Current electromagnetic tracking techniques for endoscopic navigation have limited accuracy due to tissue deformation and magnetic field distortion. To avoid these limitations and improve the endoscopic localization accuracy, this paper proposes a new endoscopic navigation framework that uses an optical mouse sensor to measure the endoscope movements along its viewing direction. We then enhance the differential evolution algorithm by modifying its mutation operation. Based on the enhanced differential evolution method, these movement measurements and image structural patches in endoscopic videos are fused to accurately determine the endoscope position. An evaluation on a dynamic phantom demonstrated that our method provides a more accurate navigation framework. Compared to state-of-the-art methods, it improved the navigation accuracy from 2.4 to 1.6 mm and reduced the processing time from 2.8 to 0.9 seconds.
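For reference, the classic DE/rand/1/bin generation step that an enhanced mutation operation would build on is sketched below; the paper's modified mutation itself is not reproduced, and all names and parameter values are illustrative.

```python
import numpy as np

def de_generation(pop, fitness, F=0.8, CR=0.9, rng=None):
    """One generation of classic differential evolution (DE/rand/1/bin).
    pop: (n, d) candidate parameter vectors (e.g. endoscope poses);
    fitness: callable scoring one vector, lower is better."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        others = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(others, size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])  # rand/1 mutation
        cross = rng.random(d) < CR                  # binomial crossover mask
        cross[rng.integers(d)] = True               # guarantee one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) < fitness(pop[i]):        # greedy selection
            new_pop[i] = trial
    return new_pop
```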


Proceedings of SPIE | 2014

Solving for free-hand and real-time 3D ultrasound calibration with anisotropic orthogonal Procrustes analysis

Elvis C. S. Chen; A. Jonathan McLeod; Uditha L. Jayarathne; Terry M. Peters

Real-time 3D ultrasound is an emerging imaging modality, offering a full volumetric view of the anatomy without ionizing radiation. A spatial tracking system facilitates its integration with image-guided interventions, but such integration requires an accurate calibration between the spatial tracker and the ultrasound volume. In this paper, a rapid calibration technique for real-time 3D ultrasound is presented, comprising a plane-based calibration phantom, an algorithm for automatic fiducial extraction from ultrasound volumes, and a numerical solution for deriving calibration parameters involving anisotropic scaling. Using a magnetic tracking system and a commercial transesophageal echocardiography real-time 3D ultrasound probe, this technique achieved a mean target registration error of 2.9 mm in a laboratory setting.
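One way to realize the anisotropic-scaling solve is to alternate an SVD-based orthogonal Procrustes rotation update with a per-axis least-squares scale update, in the style of Dosse and ten Berge; the paper's exact numerical scheme may differ, and the sketch below assumes paired fiducial points have already been extracted.

```python
import numpy as np

def anisotropic_procrustes(X, Y, iters=100):
    """Solve Y ≈ R @ diag(s) @ X + t for rotation R, per-axis scales s,
    and translation t. X, Y: (N, 3) corresponding fiducial points."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    s = np.ones(3)
    for _ in range(iters):
        # Rotation update: orthogonal Procrustes on the scaled points.
        U, _, Vt = np.linalg.svd(Yc.T @ (Xc * s))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt                       # nearest proper rotation
        # Scale update: per-axis least squares after undoing the rotation.
        Z = Yc @ R                           # rows are R.T @ y_i
        s = (Z * Xc).sum(axis=0) / (Xc ** 2).sum(axis=0)
    t = my - R @ (s * mx)
    return R, s, t
```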


Medical Image Computing and Computer-Assisted Intervention | 2018

Endoscopic Laser Surface Scanner for Minimally Invasive Abdominal Surgeries

Jordan Geurten; Wenyao Xia; Uditha L. Jayarathne; Terry M. Peters; Elvis C. S. Chen

Minimally invasive surgery performed under endoscopic video is a viable alternative to several types of open abdominal surgery. Advanced visualization techniques require accurate patient registration, often facilitated by reconstruction of the organ surface in situ. We present an active system for intraoperative surface reconstruction of internal organs, comprising a single-plane laser as the structured light source and a surgical endoscope camera as the imaging system. Both surgical instruments are spatially calibrated and tracked, after which surface reconstruction is formulated as the intersection problem between line-of-sight rays (from the surgical camera) and the laser plane. We report the surface target registration error after a rigid-body registration between the scanned 3D points and the ground truth obtained via CT. When tested on an ex vivo porcine liver and kidney, a root-mean-squared surface target registration error of 1.28 mm was achieved. Accurate endoscopic surface reconstruction is thus possible using two separately calibrated and tracked surgical instruments, where the triangulation geometry between the structured light, the imaging system, and the organ surface can be optimized. Our novel contributions are an accurate calibration technique for the tracked laser beam, and the design and construction of a laser apparatus suited to robot-assisted surgery.
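The core reconstruction step reduces to intersecting camera line-of-sight rays with the tracked laser plane. A minimal sketch, assuming every quantity has already been transformed into a common tracker frame and K is the calibrated camera intrinsic matrix:

```python
import numpy as np

def pixel_ray(K, u, v):
    """Unit ray direction through pixel (u, v), in the camera frame."""
    d = np.linalg.solve(K, np.array([u, v, 1.0]))
    return d / np.linalg.norm(d)

def ray_plane_intersect(origin, direction, plane_point, plane_normal):
    """3D point where a line-of-sight ray meets the laser plane, or None
    if the ray is (nearly) parallel to the plane."""
    denom = float(np.dot(plane_normal, direction))
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    return origin + t * direction
```

Each pixel detected on the laser stripe yields one surface point, and sweeping the laser across the organ accumulates the scanned point cloud.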


Medical Image Computing and Computer-Assisted Intervention | 2017

Real-Time 3D Ultrasound Reconstruction and Visualization in the Context of Laparoscopy

Uditha L. Jayarathne; John Moore; Elvis C. S. Chen; Stephen E. Pautler; Terry M. Peters

In the context of laparoscopic interventions involving intracorporeal ultrasound, we present a method to visualize hidden targets in 3D. As the surgeon scans the organ surface, we stitch tracked 2D ultrasound images into a 3D volume in real-time. This volume, registered in space with the surface view provided by the laparoscope, is visualized through a transparent window in the surface image. The efficacy of the proposed method is demonstrated by conducting a psychophysical study with phantoms, involving experienced ultrasound users and laparoscopic surgeons. The results reveal that the proposed method demands significantly less cognitive and physical effort compared to the 2D ultrasound visualization method conventionally used in the operating room.
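The stitching step can be sketched as scattering each tracked 2D frame into a voxel grid through the combined tracking and calibration transform; the function below is a minimal sketch with assumed names, using hold-last-value compounding to keep the update real-time.

```python
import numpy as np

def insert_slice(volume, image, T_world_image, origin, spacing):
    """Scatter one tracked 2D US frame into a voxel volume.
    T_world_image: 4x4 transform (tracking x calibration) taking pixel
    coordinates (u, v, 0, 1) to world millimetres; origin and spacing
    define the voxel grid in the same world frame."""
    origin = np.asarray(origin, dtype=float)
    spacing = np.asarray(spacing, dtype=float)
    H, W = image.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(),
                    np.zeros(u.size), np.ones(u.size)])
    world = (T_world_image @ pix)[:3]                 # (3, N) points in mm
    idx = np.round((world - origin[:, None]) / spacing[:, None]).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]),
                    axis=0)
    # Hold-last-value compounding: the newest frame overwrites the voxel.
    volume[idx[0, inside], idx[1, inside], idx[2, inside]] = \
        image.ravel()[inside]
    return volume
```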

Collaboration


An overview of Uditha L. Jayarathne's collaborations.

Top Co-Authors

Terry M. Peters (University of Western Ontario)

Elvis C. S. Chen (Robarts Research Institute)

Stephen E. Pautler (University of Western Ontario)

John Moore (Robarts Research Institute)