Publication


Featured research published by Liangjing Yang.


IEEE Transactions on Biomedical Engineering | 2014

Augmented Reality Navigation With Automatic Marker-Free Image Registration Using 3-D Image Overlay for Dental Surgery

Junchen Wang; Hideyuki Suenaga; Kazuto Hoshi; Liangjing Yang; Etsuko Kobayashi; Ichiro Sakuma; Hongen Liao

Computer-assisted oral and maxillofacial surgery (OMS) has been evolving rapidly over the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and the instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial or reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration, guiding the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid on the real one for an augmented display. The 3-D images present both stereo and motion parallax, from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.
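
The marker-free registration above hinges on rigid 3-D contour matching between the tracked patient contour and the preoperative model. As a rough illustration of that idea only, here is a minimal ICP-style alignment sketch with hypothetical point-array inputs; the paper's real-time matcher is not detailed here and is certainly more sophisticated.

```python
# Minimal ICP-style rigid registration sketch (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(contour_ct, contour_cam, iters=30):
    """Align the preoperative CT contour to the camera-tracked contour."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(contour_cam)
    src = contour_ct.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                 # nearest-neighbour pairing
        R_d, t_d = best_rigid_transform(src, contour_cam[idx])
        src = src @ R_d.T + t_d
        R, t = R_d @ R, R_d @ t + t_d            # accumulate CT -> camera
    return R, t
```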


Computerized Medical Imaging and Graphics | 2015

Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation

Junchen Wang; Hideyuki Suenaga; Hongen Liao; Kazuto Hoshi; Liangjing Yang; Etsuko Kobayashi; Ichiro Sakuma

Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been widely studied and reported. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, neither the generation of a 3D image with high geometric fidelity nor the quantitative evaluation of a 3D image's geometric accuracy has been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. Evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models and 5-8 fps for large medical volumes. Evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.
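
The closed-loop calibration can be pictured as display-measure-correct iterations. Below is a rough sketch under stated assumptions: render() and measure_3d_points() are hypothetical stand-ins for the integral-imaging display and the stereo read-back, and the correction is modeled as a simple affine warp rather than the paper's actual distortion model.

```python
# Closed-loop calibration sketch: display, measure, correct, repeat.
# render() and measure_3d_points() are hypothetical stand-ins; the
# correction is modeled as a simple affine warp for illustration.
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map src -> dst (returns matrix M and offset T)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X[:3].T, X[3]

def calibrate(render, measure_3d_points, targets, tol_mm=0.5, max_iter=10):
    correction = np.eye(4)                 # homogeneous warp, starts as identity
    for _ in range(max_iter):
        render(targets, correction)        # show 3D image with current warp
        measured = measure_3d_points()     # stereo camera reads back the points
        if np.linalg.norm(measured - targets, axis=1).max() < tol_mm:
            break
        M, T = fit_affine(measured, targets)   # undo the remaining distortion
        update = np.eye(4)
        update[:3, :3], update[:3, 3] = M, T
        correction = update @ correction
    return correction
```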


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

Ultrasound image-based endoscope localization for minimally invasive fetoscopic surgery

Liangjing Yang; Junchen Wang; Etsuko Kobayashi; Hongen Liao; Hiromasa Yamashita; Ichiro Sakuma; Toshio Chiba

The purpose of this work is to introduce an ultrasound image-based intraoperative scheme for rigid endoscope localization during minimally invasive fetoscopic surgery. Positional information of surgical instruments with respect to anatomical features is important for the development of computer-aided surgery applications. While most surgical navigation systems use optical tracking systems with satisfactory accuracy, such systems have several operational limitations. We propose a framework for intraoperative instrument localization that requires no external tracking system, using instead an ultrasound imaging system and a computation scheme based on the constrained kinematics of minimally invasive fetoscopic surgery. Our algorithm simultaneously estimates endoscope and port positions in an online sequential fashion, with a standard deviation of 1.28 mm for port estimation. Robustness of the port estimation algorithm against external disturbance was demonstrated by intentionally introducing artificial errors into the measurement data; the estimation converges within eight iterations under a disturbance magnitude of 30 mm.
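
The constrained kinematics exploit the fact that every endoscope pose must pass through the fixed insertion port. A minimal batch sketch of port estimation as the point closest to the segmented endoscope axes follows; the paper's estimator is sequential and online, which this sketch is not.

```python
# Port position as the least-squares point closest to the endoscope axes.
import numpy as np

def estimate_port(points, directions):
    """points[i] lies on axis i; directions[i] is its unit direction.
    Returns the 3D point minimizing summed squared distance to all axes."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for p, d in zip(points, directions):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the axis
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```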


International Journal of Medical Robotics and Computer Assisted Surgery | 2015

Image mapping of untracked free-hand endoscopic views to an ultrasound image-constructed 3D placenta model

Liangjing Yang; Junchen Wang; Etsuko Kobayashi; Takehiro Ando; Hiromasa Yamashita; Ichiro Sakuma; Toshio Chiba

This study presents a tracker-less image-mapping framework for surgical navigation, motivated by the clinical need for intuitive visual guidance during minimally invasive fetoscopic surgery.


Computerized Medical Imaging and Graphics | 2015

Vision-based endoscope tracking for 3D ultrasound image-guided surgical navigation

Liangjing Yang; Junchen Wang; Takehiro Ando; Akihiro Kubota; Hiromasa Yamashita; Ichiro Sakuma; Toshio Chiba; Etsuko Kobayashi

This work introduces a self-contained framework for endoscopic camera tracking that combines 3D ultrasonography with endoscopy. The approach can be readily incorporated into surgical workflows without installing external tracking devices. By fusing the ultrasound-constructed scene geometry with endoscopic vision, this integrated approach addresses issues of initialization, scale ambiguity, and interest-point inadequacy that conventional vision-based approaches may face when applied to fetoscopic procedures. Vision-based pose estimation was demonstrated on phantom and ex vivo monkey placenta imaging. The potential contribution of this method may extend beyond fetoscopic procedures to general augmented reality applications in minimally invasive procedures.
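
With scene geometry supplied by 3D ultrasound, camera pose reduces to a 2D-3D registration problem with known scale. A hedged sketch using OpenCV's generic PnP solver is shown below; the correspondence extraction and the authors' actual pipeline are not covered.

```python
# Camera pose from 2D-3D correspondences with OpenCV's generic PnP solver.
import cv2
import numpy as np

def estimate_camera_pose(pts3d_us, pts2d_img, K, dist=None):
    """pts3d_us: Nx3 points in the ultrasound frame; pts2d_img: Nx2 pixel
    coordinates of the same features; K: 3x3 intrinsics.
    Returns rotation matrix and translation (ultrasound -> camera)."""
    ok, rvec, tvec = cv2.solvePnP(pts3d_us.astype(np.float64),
                                  pts2d_img.astype(np.float64),
                                  K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.ravel()
```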


AE-CAI | 2013

Ultrasound Image-Guided Mapping of Endoscopic Views on a 3D Placenta Model: A Tracker-Less Approach

Liangjing Yang; Junchen Wang; Etsuko Kobayashi; Hongen Liao; Ichiro Sakuma; Hiromasa Yamashita; Toshio Chiba

This work presents a framework for mapping free-hand endoscopic views onto a 3D anatomical model constructed from ultrasound images, without the use of external trackers. It does not disrupt the existing surgical workflow, as surgeons need not accommodate the operational constraints associated with additionally installed motion sensors or tracking systems. A passive fiducial marker attached to the tip of the endoscope creates a geometric eccentricity that encodes the position and orientation of the camera. The relative position between the endoscope and the anatomical model in the ultrasound image reference frame is used to establish a texture map that overlays endoscopic views onto the surface of the model. This addresses operational challenges of minimally invasive procedures, including the limited field of view (FOV) and the lack of 3D perspective. Experimental results show average tool position and orientation errors of 1.32 mm and 1.6° respectively. The RMS error of the overall image mapping, obtained by comparing landmark dimensions, is 3.30 mm with a standard deviation of 2.14 mm. The feasibility of the framework is also demonstrated through implementation on a phantom model.
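
Once the endoscope pose is known in the ultrasound frame, building the texture map amounts to projecting model vertices into the endoscopic image. A simplified sketch, assuming a pinhole camera with intrinsics K and a pose (R, t) recovered from the tip fiducial:

```python
# Project model vertices into the located endoscopic view to get texture
# coordinates; pose (R, t) is assumed known from the tip fiducial.
import numpy as np

def texture_coords(vertices_us, R, t, K, image_size):
    """Map vertices (ultrasound frame) to normalized (u, v) coordinates in
    the endoscopic image; points behind the camera or off-frame get NaN."""
    cam = vertices_us @ R.T + t                # ultrasound -> camera frame
    uvw = cam @ K.T                            # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    w, h = image_size
    valid = (cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv[~valid] = np.nan
    return uv / np.array([w, h])               # normalize to [0, 1]
```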


AE-CAI | 2013

Real-Time Marker-Free Patient Registration and Image-Based Navigation Using Stereovision for Dental Surgery

Junchen Wang; Hideyuki Suenaga; Liangjing Yang; Hongen Liao; Etsuko Kobayashi; Tsuyoshi Takato; Ichiro Sakuma

Surgical navigation techniques have been evolving rapidly in the field of oral and maxillofacial surgery (OMS). However, challenges remain in the current state of the art of computer-assisted OMS, especially from the viewpoint of dental surgery. They include the invasive patient registration procedure, the difficulty of attaching a reference marker, navigation error caused by patient movement, the bulky optical markers and line-of-sight requirement of commercial optical tracking devices, and the inaccuracy and susceptibility to magnetic interference of electromagnetic (EM) tracking devices. In this paper, a new solution is proposed to overcome these challenges. A stereo camera, customized for the limited surgical space of dental surgery, is designed as the tracking device for both the instrument and the patient. A small dot pattern mounted on the surgical tool provides instrument tracking and remains visible to the camera throughout the operation. Patient registration is achieved by patient tracking and 3D contour matching against the preoperative patient model, requiring neither fiducial nor reference markers. In addition, the registration is updated in real time. Experiments were performed to evaluate our method, and an average overall error of 0.71 mm was achieved.
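
Tracking the tool-mounted dot pattern with the custom stereo camera reduces to matching dots across the two views and triangulating. A minimal sketch with OpenCV, assuming projection matrices P1 and P2 from a prior stereo calibration:

```python
# Triangulate the matched dots seen by the two stereo views.
import cv2
import numpy as np

def triangulate_dots(pts_left, pts_right, P1, P2):
    """pts_left/right: Nx2 matched pixel coordinates; P1, P2: 3x4 projection
    matrices from stereo calibration. Returns Nx3 camera-frame points."""
    X_h = cv2.triangulatePoints(P1, P2,
                                pts_left.T.astype(np.float64),
                                pts_right.T.astype(np.float64))
    return (X_h[:3] / X_h[3]).T                # dehomogenize
```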


International Journal of Medical Robotics and Computer Assisted Surgery | 2017

Video see-through augmented reality for oral and maxillofacial surgery

Junchen Wang; Hideyuki Suenaga; Liangjing Yang; Etsuko Kobayashi; Ichiro Sakuma

Oral and maxillofacial surgery has yet to benefit from image-guidance techniques, owing to limitations in image registration.


International Journal of Medical Robotics and Computer Assisted Surgery | 2016

Towards scene adaptive image correspondence for placental vasculature mosaic in computer assisted fetoscopic procedures

Liangjing Yang; Junchen Wang; Takehiro Ando; Akihiro Kubota; Hiromasa Yamashita; Ichiro Sakuma; Toshio Chiba; Etsuko Kobayashi

Visualization of the vast placental vasculature is crucial in fetoscopic laser photocoagulation for twin-to-twin transfusion syndrome treatment. However, vasculature mosaicking is challenging due to the fluctuating imaging conditions during fetoscopic surgery.
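
A generic baseline for such a mosaic is pairwise feature matching followed by a RANSAC homography; the sketch below shows that baseline with ORB features, not the scene-adaptive correspondence scheme the paper proposes.

```python
# Generic pairwise mosaic step: ORB features + RANSAC homography.
import cv2
import numpy as np

def pairwise_homography(img_a, img_b, n_features=1000):
    """Estimate the homography warping img_b into img_a's frame."""
    orb = cv2.ORB_create(n_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_b, des_a)      # query = b, train = a
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H     # stitch with cv2.warpPerspective(img_b, H, canvas_size)
```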


Workshop on Augmented Environments for Computer-Assisted Interventions | 2015

3D Surgical Overlay with Markerless Image Registration Using a Single Camera

Junchen Wang; Hideyuki Suenaga; Liangjing Yang; Hongen Liao; Takehiro Ando; Etsuko Kobayashi; Ichiro Sakuma

Minimally invasive surgery can benefit from surgical visualization, which is achieved by either virtual reality or augmented reality. We previously proposed an integrated 3D image overlay based surgical visualization solution including 3D image rendering, distortion correction, and spatial projection. For correct spatial projection of the 3D image, image registration is necessary. In this paper we present a 3D image overlay based augmented reality surgical navigation system with markerless image registration using a single camera. The innovation compared with our previous work lies in the single-camera-based image registration method for 3D image overlay. The 3D mesh model of the patient's teeth, created from preoperative CT data, is matched with the intraoperative image captured by a single optical camera to determine the six-degree-of-freedom pose of the model with respect to the camera. The obtained pose is used to superimpose the 3D image of critical hidden tissues directly on the patient's body via a translucent mirror for surgical visualization. The image registration runs automatically within approximately 0.2 s, enabling real-time updates that accommodate patient movement. Experimental results show that the registration accuracy is about 1 mm and confirm the feasibility of the 3D surgical overlay system.
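
One common way to match a mesh to a single image for a 6-DoF pose is edge-based refinement: project mesh contour points and pull them toward image edges. An illustrative sketch under that assumption follows; the authors' actual registration method may differ.

```python
# Edge-based 6-DoF pose refinement: pull projected mesh contour points
# toward image edges via a distance transform. Illustrative only.
import cv2
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(edge_pts_3d, edge_image, K, pose0):
    """edge_pts_3d: Nx3 mesh contour points; edge_image: binary edge map
    (nonzero = edge); pose0: initial [rotvec(3), t(3)] guess."""
    # per-pixel distance to the nearest edge pixel
    dist = cv2.distanceTransform((edge_image == 0).astype(np.uint8),
                                 cv2.DIST_L2, 3)
    h, w = dist.shape

    def residuals(pose):
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        cam = edge_pts_3d @ R.T + pose[3:]
        uvw = cam @ K.T
        uv = uvw[:, :2] / uvw[:, 2:3]
        u = np.clip(uv[:, 0], 0, w - 1).astype(int)
        v = np.clip(uv[:, 1], 0, h - 1).astype(int)
        return dist[v, u]          # how far each projection is from an edge

    return least_squares(residuals, pose0).x
```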

Collaboration


Dive into Liangjing Yang's collaboration.

Top Co-Authors

Toshio Chiba

Tokyo University of Agriculture and Technology
