
Publication


Featured research published by Ping-Lin Chang.


IEEE Transactions on Medical Imaging | 2014

Comparative Validation of Single-Shot Optical Techniques for Laparoscopic 3-D Surface Reconstruction

Lena Maier-Hein; Anja Groch; A. Bartoli; Sebastian Bodenstedt; G. Boissonnat; Ping-Lin Chang; Neil T. Clancy; Daniel S. Elson; S. Haase; E. Heim; Joachim Hornegger; Pierre Jannin; Hannes Kenngott; Thomas Kilgus; B. Muller-Stich; D. Oladokun; Sebastian Röhl; T. R. Dos Santos; Heinz Peter Schlemmer; Alexander Seitel; Stefanie Speidel; Martin Wagner; Danail Stoyanov

Intra-operative imaging techniques for obtaining the shape and morphology of soft-tissue surfaces in vivo are a key enabling technology for advanced surgical systems. Different optical techniques for 3-D surface reconstruction in laparoscopy have been proposed; however, so far no quantitative and comparative validation has been performed. Furthermore, robustness of the methods to clinically important factors like smoke or bleeding has not yet been assessed. To address these issues, we have formed a joint international initiative with the aim of validating different state-of-the-art passive and active reconstruction methods in a comparative manner. In this comprehensive in vitro study, we investigated reconstruction accuracy using different organs with various shapes and textures, and also tested reconstruction robustness with respect to a number of factors like the pose of the endoscope as well as the amount of blood or smoke present in the scene. The study suggests complementary advantages of the different techniques with respect to accuracy, robustness, point density, hardware complexity and computation time. While reconstruction accuracy under ideal conditions was generally high, robustness is a remaining issue to be addressed. Future work should include sensor fusion and in vivo validation studies in a specific clinical context. To trigger further research in surface reconstruction, stereoscopic data of the study will be made publicly available at www.open-CAS.com upon publication of the paper.


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2015

Image Based Surgical Instrument Pose Estimation with Multi-class Labelling and Optical Flow

Max Allan; Ping-Lin Chang; Sebastien Ourselin; David J. Hawkes; Ashwin Sridhar; John D. Kelly; Danail Stoyanov

Image-based detection, tracking and pose estimation of surgical instruments in minimally invasive surgery have a number of potential applications for computer-assisted interventions. Recent developments in the field have resulted in advanced techniques for 2D instrument detection in laparoscopic images; however, full 3D pose estimation remains a challenging and unsolved problem. In this paper, we present a novel method for estimating the 3D pose of robotic instruments, including axial rotation, by fusing information from large homogeneous regions and local optical flow features. We demonstrate the accuracy and robustness of this approach on ex vivo data with calibrated ground truth given by surgical robot kinematics, which we will also make available to the community. Qualitative validation on in vivo data from robotic-assisted prostatectomy further demonstrates that the technique can function in clinical scenarios.
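The local optical-flow component of such a pipeline can be illustrated with a minimal single-patch Lucas-Kanade step (pure translation). This is a generic sketch under stated assumptions, not the authors' implementation; all function names are hypothetical and only numpy is assumed:

```python
import numpy as np

def lucas_kanade_patch(prev, curr):
    """One Lucas-Kanade step for a single patch (pure translation).

    Solves the least-squares system [Ix Iy] @ (u, v) = -It over all pixels.
    """
    ix = np.gradient(prev, axis=1).ravel()  # horizontal image gradient
    iy = np.gradient(prev, axis=0).ravel()  # vertical image gradient
    it = (curr - prev).ravel()              # temporal difference
    A = np.stack([ix, iy], axis=1)
    flow, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return flow  # (u, v) displacement in pixels

# Smooth synthetic patch shifted right by 0.05 pixel.
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 32), np.linspace(0, 2 * np.pi, 32))
step = 2 * np.pi / 31                  # radians per pixel
prev = np.sin(x) + np.cos(y)
curr = np.sin(x - 0.05 * step) + np.cos(y)
u, v = lucas_kanade_patch(prev, curr)
```

Because the patch is smooth and the motion is subpixel, a single linearised step recovers the shift; real instrument tracking would combine many such local estimates with the region-based term.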


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2013

Real-Time Dense Stereo Reconstruction Using Convex Optimisation with a Cost-Volume for Image-Guided Robotic Surgery

Ping-Lin Chang; Danail Stoyanov; Andrew J. Davison; Philip “Eddie” Edwards

Reconstructing the depth of stereo-endoscopic scenes is an important step in providing accurate guidance in robotic-assisted minimally invasive surgery. Stereo reconstruction has been studied for decades but remains a challenge in endoscopic imaging. Current approaches can easily fail to reconstruct an accurate and smooth 3D model due to textureless tissue appearance in the real surgical scene and occlusion by instruments. To tackle these problems, we propose a dense stereo reconstruction algorithm using convex optimisation with a cost-volume to efficiently and effectively reconstruct a smooth model while maintaining depth discontinuity. The proposed approach has been validated by quantitative evaluation using simulation and real phantom data with known ground truth. We also report qualitative results from real surgical images. The algorithm outperforms state-of-the-art methods and can be easily parallelised to run in real time on recent graphics hardware.
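A much-simplified sketch of the cost-volume idea: build a matching-cost volume over disparity levels and, in place of the paper's convex optimisation with a smoothness term, take a per-pixel winner-take-all minimum. Function names are hypothetical:

```python
import numpy as np

def sad_cost_volume(left, right, max_disp):
    """Absolute-difference cost volume for a rectified stereo pair.

    left, right: 2-D grayscale arrays. Returns an (H, W, max_disp) array;
    pixels with no valid match at a given disparity keep an infinite cost.
    """
    h, w = left.shape
    volume = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        # A left pixel at column x matches right column x - d where both are in view.
        volume[:, d:, d] = np.abs(left[:, d:] - right[:, : w - d])
    return volume

def winner_take_all(volume):
    """Per-pixel lowest-cost disparity (no smoothness or discontinuity handling)."""
    return np.argmin(volume, axis=2)

# Tiny synthetic pair: the whole scene sits at disparity 2.
left = np.tile(np.arange(16, dtype=float), (8, 1))
right = np.roll(left, -2, axis=1)
disp = winner_take_all(sad_cost_volume(left, right, max_disp=4))
```

The paper's contribution is precisely what this sketch omits: regularising the cost volume so textureless regions stay smooth while depth discontinuities are preserved.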


Computerized Medical Imaging and Graphics | 2013

Modeling of the bony pelvis from MRI using a multi-atlas AE-SDM for registration and tracking in image-guided robotic prostatectomy

Qinquan Gao; Ping-Lin Chang; Daniel Rueckert; S. Mohammed Ali; Daniel Cohen; Philip Pratt; Erik Mayer; Guang-Zhong Yang; Ara Darzi; Philip “Eddie” Edwards

A fundamental challenge in the development of image-guided surgical systems is alignment of the preoperative model to the operative view of the patient. This is achieved by finding corresponding structures in the preoperative scans and on the live surgical scene. In robot-assisted laparoscopic prostatectomy (RALP), the most readily visible structure is the bone of the pelvic rim. Magnetic resonance imaging (MRI) is the modality of choice for prostate cancer detection and staging, but extraction of bone from MRI is difficult and very time consuming to achieve manually. We present a robust and fully automated multi-atlas pipeline for bony pelvis segmentation from MRI, using an MRI appearance embedding statistical deformation model (AE-SDM). The statistical deformation model is built using the node positions of deformations obtained from hierarchical registrations of full pelvis CT images. For datasets with corresponding CT and MRI images, we can transform the MRI into CT SDM space. MRI appearance can then be used to improve the combined MRI/CT atlas to MRI registration using SDM constraints. We can use this model to segment the bony pelvis in a new MRI image where there is no CT available. A multi-atlas segmentation algorithm is introduced which incorporates MRI AE-SDM guidance. We evaluated the method on 19 subjects with corresponding MRI and manually segmented CT datasets by performing a leave-one-out study. Several metrics are used to quantify the overlap between the automatic and manual segmentations. Compared to the manual gold standard segmentations, our robust segmentation method produced an average surface distance of 1.24 ± 0.27 mm, which outperforms state-of-the-art algorithms for MRI bony pelvis segmentation. We also show that the resulting surface can be tracked in the endoscopic view in near real time using dense visual tracking methods. Results are presented on a simulation and a real clinical RALP case. Tracking is accurate to 0.13 mm over 700 frames compared to a manually segmented surface. Our method provides a realistic and robust framework for intraoperative alignment of a bony pelvis model from diagnostic quality MRI images to the endoscopic view.
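The AE-SDM pipeline itself is involved, but the generic multi-atlas step it builds on, fusing labels propagated from several registered atlases, can be sketched with simple majority voting plus a Dice overlap metric. These are hypothetical names and a textbook fusion rule, not the paper's guided algorithm:

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Fuse binary label maps from several registered atlases by majority vote.

    propagated_labels: (n_atlases, ...) boolean array of warped atlas labels.
    Returns voxels labelled foreground by more than half of the atlases.
    """
    votes = propagated_labels.sum(axis=0)
    return votes * 2 > propagated_labels.shape[0]

def dice(a, b):
    """Dice overlap between two binary segmentations."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Three toy "atlases" whose propagated labels are slightly misaligned copies
# of a square region; the vote recovers the consensus shape.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
atlases = np.stack([np.roll(truth, s, axis=1) for s in (-1, 0, 1)])
fused = majority_vote_fusion(atlases)
```

In the paper, the contribution is how each atlas is registered (MRI appearance plus SDM constraints) before any such fusion takes place.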


Information Processing in Computer-Assisted Interventions (IPCAI) | 2014

Robust Real-Time Visual Odometry for Stereo Endoscopy Using Dense Quadrifocal Tracking

Ping-Lin Chang; Ankur Handa; Andrew J. Davison; Danail Stoyanov; Philip “Eddie” Edwards

Visual tracking in endoscopic scenes is known to be a difficult task due to the lack of texture, tissue deformation and specular reflection. In this paper, we devise a real-time visual odometry framework to robustly track the 6-DoF stereo laparoscope pose using the quadrifocal relationship. The instant motion of a stereo camera creates four views which can be constrained by the quadrifocal geometry. Using the previous stereo pair as a reference frame, the current pair can be warped back by minimising a photometric error function with respect to a camera pose constrained by the quadrifocal geometry. A robust estimator further removes outliers caused by occlusion, deformation and specular highlights during the optimisation. Since the optimisation uses all pixel data in the images, the result is a very robust pose estimation even for a textureless scene. The quadrifocal geometry is initialised using a real-time stereo reconstruction algorithm which can be efficiently parallelised and run on the GPU together with the proposed tracking framework. Our system is evaluated using a ground-truth synthetic sequence with a known model, and we also demonstrate the accuracy and robustness of the approach using phantom and real examples of endoscopic augmented reality.
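The core of such direct tracking is a robust photometric error: warp the current frame back by a candidate motion and score the residual with a robust cost so specular highlights are down-weighted. Below is a hypothetical 1-D sketch using a Huber cost and exhaustive search over integer shifts; the paper instead optimises a full 6-DoF pose under the quadrifocal constraint:

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber cost: quadratic for small residuals, linear for outliers."""
    a = np.abs(residual)
    return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))

def photometric_cost(reference, current, shift, delta=1.0):
    """Robust photometric error after warping `current` back by `shift` pixels."""
    warped = np.roll(current, -shift)
    return huber(reference - warped, delta).sum()

# Toy 1-D "image": the current frame is the reference shifted by 3 pixels,
# corrupted by one bright specular outlier the robust cost should tolerate.
rng = np.random.default_rng(0)
reference = rng.random(256)
current = np.roll(reference, 3)
current[10] = 5.0
costs = [photometric_cost(reference, current, s) for s in range(6)]
best = int(np.argmin(costs))
```

Because the Huber cost grows only linearly for large residuals, the single specular outlier cannot outweigh the dense agreement of the remaining pixels, which is the intuition behind whole-image direct methods working on textureless tissue.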


International Conference on Robotics and Automation (ICRA) | 2016

Robust Catheter and Guidewire Tracking Using B-Spline Tube Model and Pixel-Wise Posteriors

Ping-Lin Chang; Alexander Rolls; Herbert De Praetere; Emmanuel Vander Poorten; Celia V. Riga; Colin Bicknell; Danail Stoyanov

In endovascular surgery and cardiology, robotic catheters are emerging as a promising technology for enhanced catheter manipulation and navigation while reducing radiation exposure. For robotic catheter systems, especially with tendon actuation, a key challenge is the localisation of the catheter shape and position within the anatomy. An effective approach is through image-based catheter/guidewire detection and tracking. However, these are difficult problems due to the thin appearance of the instruments in the image and the low signal-to-noise ratio of fluoroscopy. In this letter, we propose a deformable B-spline tube model, which can effectively represent the shape of a catheter and guidewire. The model allows fitting using a region-based probabilistic algorithm, which does not rely on intensity gradients but exploits a signed distance function and the nonparametric distributions of measurements. Unlike previous B-spline fitting approaches, which optimise the spline with respect to control points, we propose a knot-driven scheme with an equidistance prior to better fit complex curves. Our probabilistic framework shows promising results for catheter and guidewire tracking in different procedures, even when handling overlapping instrument segments. We present empirical studies using phantom model data and in vivo fluoroscopic sequences with annotated ground truth. Our results indicate that the proposed approach can precisely model the catheter and guidewire contours in near real time, and this information can be embedded in a robotic catheter control loop or utilised for image guidance.
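The underlying curve representation is a standard cubic B-spline; a minimal Cox-de Boor evaluation of a clamped 2-D spline is sketched below. This illustrates only the representation, not the paper's knot-driven, pixel-wise posterior fitting; names and control points are hypothetical:

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: B-spline basis function i of degree k at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    if knots[i + k] != knots[i]:  # guard against repeated (clamped) knots
        value += (t - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] != knots[i + 1]:
        value += (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, t, knots)
    return value

def bspline_curve(control, knots, degree, t):
    """Evaluate a 2-D B-spline centreline at parameter t in [0, 1)."""
    point = np.zeros(2)
    for i, c in enumerate(control):
        point += bspline_basis(i, degree, t, knots) * np.asarray(c, dtype=float)
    return point

# Cubic spline with a clamped knot vector: a rough catheter-like centreline.
degree = 3
control = [(0, 0), (1, 2), (2, -1), (3, 3), (4, 0)]
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]  # n + degree + 1 = 9 knots
p0 = bspline_curve(control, knots, degree, 0.0)  # clamped: equals control[0]
```

A knot-driven fitting scheme, as in the paper, would adjust the interior knot placement rather than (only) the control points when matching a complex guidewire curve.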


Computer Assisted Radiology and Surgery | 2016

Catheter manipulation analysis for objective performance and technical skills assessment in transcatheter aortic valve implantation

Evangelos B. Mazomenos; Ping-Lin Chang; Radoslaw A. Rippel; Alexander Rolls; David J. Hawkes; Colin Bicknell; Adrien E. Desjardins; Celia V. Riga; Danail Stoyanov

Purpose: Transcatheter aortic valve implantation (TAVI) demands precise and efficient handling of surgical instruments within the confines of the aortic anatomy. Operational performance and dexterous skills are critical for patient safety and are assessed objectively with a number of manipulation features derived from kinematic analysis of the catheter/guidewire in fluoroscopy video sequences.
Methods: A silicon phantom model of a type I aortic arch was used for this study. Twelve endovascular surgeons, divided into two experience groups, experts (n = 6)
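Typical kinematic manipulation features of this kind include the path length and mean speed of the tracked tip. A minimal sketch with hypothetical names, not necessarily the exact features used in the study:

```python
import numpy as np

def path_length(tip_positions):
    """Total distance travelled by the tracked tip (sum of frame-to-frame steps)."""
    steps = np.diff(tip_positions, axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

def mean_speed(tip_positions, fps):
    """Average tip speed, given the fluoroscopy frame rate."""
    duration = (len(tip_positions) - 1) / fps
    return path_length(tip_positions) / duration

# Toy trajectory: the tip advances one unit per frame along x at 25 fps.
tip = np.array([[i, 0.0] for i in range(11)])
length = path_length(tip)          # 10 unit-length steps
speed = mean_speed(tip, fps=25.0)  # 10 units in 0.4 s
```

Skill-assessment studies then compare such per-trial features between experience groups; smoother, shorter tip paths typically indicate greater expertise.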


AE-CAI'11: Proceedings of the 6th International Conference on Augmented Environments for Computer-Assisted Interventions | 2011

2D/3D registration of a preoperative model with endoscopic video using colour-consistency

Ping-Lin Chang; Dongbin Chen; Daniel Cohen; Philip “Eddie” Edwards


Journal of Medical Robotics Research, 1(3), Article 1640010 | 2016

A Survey on the Current Status and Future Challenges Towards Objective Skills Assessment in Endovascular Surgery

Evangelos B. Mazomenos; Ping-Lin Chang; Alexander Rolls; David J. Hawkes; Colin Bicknell; Emmanuel Vander Poorten; Celia V. Riga; Adrien E. Desjardins; Danail Stoyanov



IEEE Transactions on Medical Imaging | 2018

Articulated Multi-Instrument 2-D Pose Estimation Using Fully Convolutional Networks

Xiaofei Du; Thomas Kevin Kurmann; Ping-Lin Chang; Maximillian Allan; Sebastian Ourselin; Raphael Sznitman; John D. Kelly; Danail Stoyanov

Collaboration


Dive into Ping-Lin Chang's collaboration.

Top Co-Authors

Danail Stoyanov, University College London
Herbert De Praetere, Katholieke Universiteit Leuven
Phuong Toan Tran, Katholieke Universiteit Leuven
David J. Hawkes, University College London