Xiongbiao Luo
Nagoya University
Publication
Featured research published by Xiongbiao Luo.
Proceedings of SPIE | 2010
Xiongbiao Luo; Marco Feuerstein; Takamasa Sugiura; Takayuki Kitasaka; Kazuyoshi Imaizumi; Yoshinori Hasegawa; Kensaku Mori
This paper presents a hybrid camera tracking method that uses electromagnetic (EM) tracking and intensity-based image registration, and its evaluation on a dynamic motion phantom. Because respiratory motion can significantly affect rigid registration between the EM tracking and CT coordinate systems, a standard tracking approach that initializes intensity-based image registration with absolute pose data acquired by EM tracking will fail when the initial camera pose is too far from the actual pose. We propose two new schemes to address this problem. Both schemes combine absolute pose data from EM tracking with relative motion data derived from EM tracking and intensity-based image registration, significantly improving overall camera tracking performance. We constructed a dynamic phantom simulating the respiratory motion of the airways to evaluate these schemes. Our experimental results demonstrate that these schemes track a bronchoscope more accurately and robustly than our previously proposed method, even when the maximum simulated respiratory motion reaches 24 mm.
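The core idea of the relative-motion schemes can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, pose conventions, and 4x4 homogeneous-transform representation are assumptions:

```python
import numpy as np

def relative_motion_init(prev_registered_pose, em_pose_prev, em_pose_curr):
    """Predict the current camera pose for registration initialization.

    Instead of initializing with the absolute EM pose (sensitive to
    respiratory motion in the EM-to-CT registration), apply the
    *relative* EM motion between frames to the previously registered
    pose. All poses are 4x4 homogeneous transforms.
    """
    # Relative motion measured by EM tracking between frames t-1 and t
    relative = np.linalg.inv(em_pose_prev) @ em_pose_curr
    # Propagate the last image-registration result by that relative motion
    return prev_registered_pose @ relative
```

The prediction then seeds intensity-based image registration, which refines it to the final pose for the frame.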
Asian Conference on Computer Vision | 2010
Xiongbiao Luo; Tobias Reichl; Marco Feuerstein; Takayuki Kitasaka; Kensaku Mori
This paper presents a new hybrid bronchoscope tracking method that uses an electromagnetic position sensor and a sequential Monte Carlo sampler, and its evaluation on a dynamic motion phantom. Since airway deformation resulting from patient movement, respiratory motion, and coughing can significantly affect the rigid registration between the electromagnetic tracking and computed tomography (CT) coordinate systems, a standard hybrid tracking approach that initializes intensity-based image registration with absolute pose data acquired by electromagnetic tracking fails when the initial camera pose is too far from the actual pose. We propose a new solution that combines electromagnetic tracking and a sequential Monte Carlo sampler to address this problem. In our solution, sequential Monte Carlo sampling recursively approximates the posterior probability distributions of the bronchoscope camera motion parameters in accordance with an observation model based on electromagnetic tracking. We constructed a dynamic phantom that simulates airway deformation to evaluate the proposed solution. Experimental results demonstrate that the challenging problem of airway deformation can be robustly modeled and effectively addressed with our approach compared to a previous hybrid method, even when the maximum simulated airway deformation reaches 23 mm.
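A minimal particle-filter recursion of the kind the abstract describes might look like this. The Gaussian random-walk motion model and Gaussian observation likelihood are toy assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_step(particles, weights, observation, motion_std=1.0, obs_std=2.0):
    """One sequential Monte Carlo (particle filter) update.

    particles:   (N, d) samples of camera motion parameters.
    observation: (d,) measurement, e.g. from electromagnetic tracking.
    """
    # Predict: diffuse particles with a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight by a Gaussian observation likelihood around the measurement
    sq_dist = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * sq_dist / obs_std**2)
    weights /= weights.sum()
    # Resample to avoid weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

Iterating this step concentrates the particle cloud around the measured pose while the diffusion term keeps it able to track deformation-induced drift.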
IEEE Transactions on Medical Imaging | 2016
Yi Wang; Jie-Zhi Cheng; Dong Ni; Muqing Lin; Jing Qin; Xiongbiao Luo; Ming Xu; Xiaoyan Xie; Pheng-Ann Heng
Registration and fusion of magnetic resonance (MR) and 3D transrectal ultrasound (TRUS) images of the prostate gland can provide high-quality guidance for prostate interventions. However, accurate MR-TRUS registration remains a challenging task, due to the great intensity variation between the two modalities, the lack of intrinsic fiducials within the prostate, the large gland deformation caused by TRUS probe insertion, and the distinctive biomechanical properties across patients and prostate zones. To address these challenges, a personalized model-to-surface registration approach is proposed in this study. The main contributions of this paper are threefold. First, a new personalized statistical deformable model (PSDM) is proposed, built with finite element analysis and patient-specific tissue parameters measured from ultrasound elastography. Second, a hybrid point matching method is developed that introduces the modality independent neighborhood descriptor (MIND) to weight the Euclidean distance between points and establish reliable surface point correspondence. Third, the hybrid point matching is further guided by the PSDM for more physically plausible deformation estimation. Eighteen sets of patient data are included to test the efficacy of the proposed method. The experimental results demonstrate that our approach provides more accurate and robust MR-TRUS registration than state-of-the-art methods do. The average target registration error is 1.44 mm, which meets the clinical requirement of 1.9 mm for accurate tumor volume detection. It can be concluded that the presented method effectively fuses the heterogeneous image information in elastography, MR, and TRUS to attain satisfactory image alignment.
IEEE Transactions on Medical Imaging | 2014
Xiongbiao Luo; Kensaku Mori
Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by pre-operative images. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. Applied to endoscope tracking, it was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, and normalized sum of squared differences. On clinical data, the tracking error was reduced significantly from at least 14.6 mm to 4.5 mm. Processing was accelerated to more than 30 frames per second using a graphics processing unit.
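The luminance/contrast/structure decomposition the measure builds on is that of plain SSIM, which can be sketched as follows. The paper's discriminative variant differs; this baseline, with assumed stabilizing constants, is only illustrative:

```python
import numpy as np

def structural_similarity(x, y, c1=1e-4, c2=9e-4):
    """SSIM-style similarity between two image patches (2-D arrays).

    Combines a luminance term (mean comparison) with a joint
    contrast/structure term (variance and covariance comparison).
    Returns 1.0 for identical patches.
    """
    x = x.astype(float)
    y = y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    luminance = (2 * mx * my + c1) / (mx**2 + my**2 + c1)
    contrast_structure = (2 * cov + c2) / (vx + vy + c2)
    return luminance * contrast_structure
```

In video-volume registration such a measure would be evaluated between each endoscopic frame and candidate volume renderings, and the camera pose optimized to maximize it.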
Medical Image Computing and Computer-Assisted Intervention | 2011
Xiongbiao Luo; Takayuki Kitasaka; Kensaku Mori
A novel bronchoscope tracking prototype was designed and validated for bronchoscopic navigation. We construct a novel mouth- or nose-piece bronchoscope model to directly measure the movement of a bronchoscope outside of a patient's body. Fusing the measured movement information with a sequential Monte Carlo (SMC) sampler, we achieve accurate and robust intra-operative alignment between the pre- and intra-operative image data for augmenting surgical bronchoscopy. We validate our new prototype on phantom datasets. The experimental results demonstrate that our proposed prototype is a promising approach to navigating a bronchoscope without electromagnetic tracking (EMT) systems.
IEEE Transactions on Biomedical Engineering | 2014
Xiongbiao Luo; Kensaku Mori
Electromagnetically navigated endoscopy, which is increasingly applied in endoscopic interventions, utilizes an electromagnetic sensor attached at the endoscope tip to measure the endoscope movements and to navigate the endoscope in the region of interest in the body. Due to patient motion and magnetic field distortion, sensor measurement inaccuracy and dynamic jitter errors remain challenging for electromagnetic tracking (EMT) based navigation. This paper proposes a new tracking framework, an animated particle filter, that integrates adaptive particle swarm optimization into a generic particle filter to significantly boost electromagnetic trackers. We validate our method on a dynamic phantom and compare it to state-of-the-art EMT methods. Our experimental results demonstrate the effectiveness and robustness of our method, which provides position and orientation accuracy of 2.48 mm and 4.69°, significantly outperforming previous methods whose tracking errors were at least 4.19 mm and 7.75°. Tracking smoothness was improved from 4.09 mm and 3.37° to 1.84 mm and 2.52°. Our method also tackles particle impoverishment better than standard particle filters.
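The idea of refining a particle set with particle swarm optimization can be sketched generically. The inertia and acceleration parameters, the fitness interface, and the maximization convention are assumptions; the paper's adaptive variant embedded in the particle filter differs:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_refine(particles, fitness, iters=30, w=0.7, c1=1.5, c2=1.5):
    """Refine a particle set with basic particle swarm optimization.

    Each particle keeps a personal best; the swarm shares a global
    best. `fitness` maps a parameter vector (e.g. a pose) to a score
    to maximize, such as agreement with sensor measurements.
    """
    vel = np.zeros_like(particles)
    pbest = particles.copy()
    pbest_val = np.array([fitness(p) for p in particles])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1 = rng.random(particles.shape)
        r2 = rng.random(particles.shape)
        # Velocity blends inertia, attraction to personal best, and to global best
        vel = w * vel + c1 * r1 * (pbest - particles) + c2 * r2 * (gbest - particles)
        particles = particles + vel
        vals = np.array([fitness(p) for p in particles])
        improved = vals > pbest_val
        pbest[improved] = particles[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest
```

Moving particles toward promising regions in this way, rather than relying on resampling alone, is one standard remedy for particle impoverishment.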
Medical Image Computing and Computer-Assisted Intervention | 2011
Tobias Reichl; Xiongbiao Luo; Manuela Menzel; Hubert Hautmann; Kensaku Mori; Nassir Navab
We present a novel approach to tracking of flexible bronchoscopes by modeling the output as spatially continuous over time. Bronchoscopy is a widespread clinical procedure for diagnosis and treatment of lung diseases, and navigation systems for it are in high demand. Tracking of the bronchoscope can be regarded as a deformable registration problem. In our approach we use hybrid image-based and electromagnetic tracking, and the bronchoscope pose relative to CT data is interpolated using Catmull-Rom splines for position and SLERP for orientation. We evaluate the method using ground truth poses manually selected by experts, where mean inter-expert agreement was 1.26 mm. For four dynamic phantom data sets, the accuracy of our method is between 4.13 and 5.93 mm and shown to be equivalent to previous methods. We significantly improve inter-frame smoothness from 2.35-3.08 mm to 1.08-1.51 mm. Our method provides a more realistic and physically plausible solution with significantly less jitter. This quantitative result is confirmed by video output, which is much more consistent and robust, with fewer occasions of tracking loss or unexpected movement.
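The two interpolation schemes named above are standard and can be sketched directly; the (w, x, y, z) quaternion convention and uniform spline parameterization are assumptions:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0, q1.

    Interpolates orientation along the shortest great-circle arc,
    giving constant angular velocity in t.
    """
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the short arc
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom spline position between control points p1 and p2.

    t in [0, 1]; the curve passes through p1 (t=0) and p2 (t=1), with
    tangents derived from the neighboring points p0 and p3.
    """
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)
```

Interpolating pose this way between tracked keyframes is what makes the output spatially continuous over time, suppressing frame-to-frame jitter.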
IEEE Transactions on Medical Imaging | 2013
Xiongbiao Luo; Takayuki Kitasaka; Kensaku Mori
The paper presents a new endoscope motion tracking method based on a novel external endoscope tracking device and a modified stochastic optimization method for boosting endoscopy navigation. We designed a novel tracking prototype in which a 2-D motion sensor directly measures the insertion-retreat linear motion and the rotation of the endoscope. With our stochastic optimization method, which embeds traceable particle swarm optimization in the Condensation algorithm, a full six-degrees-of-freedom endoscope pose (position and orientation) can be recovered from the 2-D motion sensor measurements. Experiments were performed on a dynamic bronchial phantom with maximum simulated respiratory motion around 24.0 mm. The experimental results demonstrate that our proposed method provides more effective and robust endoscope motion tracking than several currently available techniques. The average position tracking accuracy improved from 6.5 to 3.3 mm, which further approaches the clinical requirement of 2.0 mm in practice.
International Conference on Medical Imaging and Augmented Reality | 2010
Xiongbiao Luo; Marco Feuerstein; Tobias Reichl; Takayuki Kitasaka; Kensaku Mori
This paper compares Kanade-Lucas-Tomasi (KLT), speeded-up robust features (SURF), and scale-invariant feature transform (SIFT) features applied to bronchoscope tracking. In our study, we first use KLT, SURF, or SIFT features and epipolar constraints to obtain inter-frame translation (up to scale) and orientation displacements, and Kalman filtering to recover an estimate of the magnitude of the motion (scale factor determination). We then multiply the inter-frame motion parameters onto the previous pose of the bronchoscope camera to obtain the predicted pose, which is used to initialize intensity-based image registration that refines the current pose. We evaluate the KLT-, SURF-, and SIFT-based bronchoscope camera motion tracking methods on patient datasets. According to the experimental results, SIFT features are more robust than KLT and SURF features at predicting the bronchoscope motion, and all motion-prediction methods show a significant performance boost compared to intensity-based image registration alone, without requiring an additional position sensor.
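Epipolar geometry yields the translation direction only up to scale, so the magnitude must be estimated separately. A scalar Kalman filter for that magnitude can be sketched as follows; the random-walk model and noise values are assumptions, not the paper's settings:

```python
class ScalarKalman:
    """1-D Kalman filter smoothing the inter-frame motion magnitude.

    State x is the estimated translation magnitude; p its variance;
    q the process noise (how fast the magnitude may drift); r the
    measurement noise of each per-frame magnitude observation z.
    """

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.25):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # Predict (random-walk model): state unchanged, uncertainty grows
        self.p += self.q
        # Correct with the new magnitude measurement z
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Scaling the unit translation direction from the epipolar constraint by this filtered magnitude gives the full inter-frame motion used to predict the next camera pose.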
Medical Image Computing and Computer-Assisted Intervention | 2011
Xiongbiao Luo; Takayuki Kitasaka; Kensaku Mori
This paper presents a new bronchoscope motion tracking method that utilizes manifold modeling and a sequential Monte Carlo (SMC) sampler to boost navigated bronchoscopy. Our strategy for estimating bronchoscope motion comprises two main stages: (1) bronchoscopic scene identification and (2) SMC sampling. We extend a local and global regressive mapping (LGRM) method to Spatial-LGRM to learn bronchoscopic video sequences and construct their manifolds. With these manifolds, we can classify bronchoscopic scenes into the bronchial branches where the bronchoscope is located. Next, we employ an SMC sampler based on a selective image similarity measure to integrate the estimates from stage (1) and refine the position and orientation of the bronchoscope. Our proposed method was validated on patient datasets. Experimental results demonstrate the effectiveness and robustness of our method for bronchoscopic navigation without an additional position sensor.