Junchen Wang
University of Tokyo
Publications
Featured research published by Junchen Wang.
IEEE Transactions on Biomedical Engineering | 2014
Junchen Wang; Hideyuki Suenaga; Kazuto Hoshi; Liangjing Yang; Etsuko Kobayashi; Ichiro Sakuma; Hongen Liao
Computer-assisted oral and maxillofacial surgery (OMS) has been evolving rapidly over the past decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and the instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial or reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration, guiding the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid on the real one for an augmented display. The 3-D images present both stereo and motion parallax, from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.
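As a rough illustration of the marker-free registration idea described above, the sketch below rigidly aligns an intraoperatively measured 3-D contour point cloud to a preoperative patient model with an iterative closest point (ICP) loop. The function names and the use of NumPy/SciPy are illustrative assumptions, not the authors' implementation.

    # Minimal ICP sketch: align measured contour points to a preoperative model
    # surface (both given as Nx3 point arrays). Illustrative only.
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = c_dst - R @ c_src
        return R, t

    def icp(contour, model_pts, iters=50, tol=1e-6):
        """Iteratively match contour points to their nearest model points."""
        tree = cKDTree(model_pts)
        R_total, t_total = np.eye(3), np.zeros(3)
        pts = contour.copy()
        prev_err = np.inf
        for _ in range(iters):
            _, idx = tree.query(pts)                  # nearest-neighbour correspondences
            R, t = best_rigid_transform(pts, model_pts[idx])
            pts = pts @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
            err = np.linalg.norm(pts - model_pts[idx], axis=1).mean()
            if abs(prev_err - err) < tol:
                break
            prev_err = err
        return R_total, t_total, err

In the paper's setting the same alignment would be recomputed as the tracked patient contour updates, which is what keeps the registration valid under patient movement.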
Computerized Medical Imaging and Graphics | 2015
Junchen Wang; Hideyuki Suenaga; Hongen Liao; Kazuto Hoshi; Liangjing Yang; Etsuko Kobayashi; Ichiro Sakuma
Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported extensively. For surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate its geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of 3D image rendering performance at 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effect of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.
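As a toy illustration of computer-generated integral imaging (not the paper's GPU pipeline), the sketch below assigns each display pixel to its covering lenslet and derives the ray that pixel reproduces under a simple pinhole-lenslet model; the pitch, gap, and pixel-size values are made-up assumptions.

    # Toy computer-generated integral imaging: for each display pixel, find its
    # lenslet and the ray (origin, direction) the pixel reproduces.
    import numpy as np

    pitch = 1.0        # lenslet pitch (mm), assumed
    gap = 3.0          # lens array to display gap (mm), assumed
    pixel = 0.05       # display pixel size (mm), assumed
    W, H = 2560, 1600  # elemental image resolution used in the evaluation

    def pixel_ray(ix, iy):
        """Ray emitted by display pixel (ix, iy) through its covering lenslet."""
        # pixel centre on the display plane (z = 0)
        px = (ix + 0.5) * pixel
        py = (iy + 0.5) * pixel
        # centre of the lenslet covering this pixel (lens plane at z = gap)
        lx = (np.floor(px / pitch) + 0.5) * pitch
        ly = (np.floor(py / pitch) + 0.5) * pitch
        origin = np.array([lx, ly, gap])
        direction = origin - np.array([px, py, 0.0])
        return origin, direction / np.linalg.norm(direction)

    # Rendering the elemental image array amounts to colouring every pixel with the
    # scene radiance along pixel_ray(ix, iy); doing this per pixel is what the
    # paper accelerates on the GPU.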
International Journal of Medical Robotics and Computer Assisted Surgery | 2011
Junchen Wang; Takashi Ohya; Hongen Liao; Ichiro Sakuma; Tianmiao Wang; Iwai Tohnai; Toshinori Iwai
Positioning a flexible catheter in a target vessel branch within complex-shaped vessels is tedious and difficult owing to the lack of real-time visual feedback. Digital subtraction angiography and fluoroscopic guidance are currently used for catheter placement.
Medical Image Computing and Computer-Assisted Intervention | 2013
Taizan Yonetsuji; Takehiro Ando; Junchen Wang; Keisuke Fujiwara; Kazunori Itani; Takashi Azuma; Kiyoshi Yoshinaka; Akira Sasaki; Shu Takagi; Etsuko Kobayashi; Hongen Liao; Yoichiro Matsumoto; Ichiro Sakuma
High-intensity focused ultrasound (HIFU) is a promising technique for cancer treatment owing to its minimal invasiveness and safety. However, skin burns, long treatment times, and incomplete ablation are the main shortcomings of this method. This paper presents a novel HIFU robotic system for breast cancer treatment. The robot has four rotational degrees of freedom, with its workspace located in a water tank for HIFU beam imaging and ablation treatment. The HIFU transducer, combined with a diagnostic 2D linear ultrasound probe, is mounted on the robot end-effector, which is rotated around the HIFU focus when ablating the tumor. The HIFU beams are visualized by the 2D probe using beam imaging. Skin burns can be prevented or alleviated by avoiding prolonged insonification of the same skin area, and treatment time can be significantly reduced because there is no need to interrupt the ablation procedure to cool the skin. In addition, the proposed robot control strategies can avoid incomplete ablation. Experiments were carried out, and the results demonstrated the effectiveness of the proposed system.
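The geometric idea behind rotating the transducer about a fixed focus can be sketched as poses on a spherical cap: the focal point stays put while the beam enters through a different skin area at each step. The standoff distance, cone angle, and step count below are illustrative assumptions, not the robot's actual parameters.

    # Sketch: transducer poses on a spherical cap about a fixed HIFU focus, so the
    # focus stays fixed while the beam path through the skin changes each step.
    import numpy as np

    focus = np.array([0.0, 0.0, 0.0])   # HIFU focal point (mm), assumed frame
    standoff = 80.0                      # transducer-to-focus distance (mm), assumed
    tilt = np.deg2rad(20.0)              # cone half-angle of the scan, assumed

    def transducer_poses(n_steps=12):
        """Positions and beam directions that all point at the fixed focus."""
        poses = []
        for k in range(n_steps):
            phi = 2.0 * np.pi * k / n_steps          # rotation about the vertical axis
            # transducer centre on a cone of half-angle `tilt` below the focus
            offset = standoff * np.array([np.sin(tilt) * np.cos(phi),
                                          np.sin(tilt) * np.sin(phi),
                                          -np.cos(tilt)])
            position = focus + offset
            beam_dir = (focus - position) / standoff  # unit vector towards the focus
            poses.append((position, beam_dir))
        return poses

Stepping through such poses spreads the acoustic path over the skin, which is the mechanism the paper relies on to avoid burns without cooling pauses.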
International Conference of the IEEE Engineering in Medicine and Biology Society | 2013
Liangjing Yang; Junchen Wang; Etsuko Kobayashi; Hongen Liao; Hiromasa Yamashita; Ichiro Sakuma; Toshio Chiba
The purpose of this work is to introduce an ultrasound image-based intraoperative scheme for rigid endoscope localization during minimally invasive fetoscopic surgery. Positional information of surgical instruments with respect to anatomical features is important for the development of computer-aided surgery applications. While most surgical navigation systems use optical tracking systems with satisfactory accuracy, such systems have several operational limitations. We propose a framework for intraoperative instrument localization that requires no external tracking system, relying instead on an ultrasound imaging system and a computation scheme based on the constrained kinematics of minimally invasive fetoscopic surgery. The proposed algorithm simultaneously estimates the endoscope and port positions in an online sequential fashion, with a standard deviation of 1.28 mm for the port estimate. The robustness of the port estimation algorithm against external disturbance was demonstrated by intentionally introducing artificial errors into the measurement data; the estimate converges within eight iterations under a disturbance magnitude of 30 mm.
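One way to picture the constrained-kinematics port estimation (a plausible reading, not necessarily the paper's exact algorithm): each observation gives a 3-D line along the instrument axis, and the fixed entry port is the point closest to all such lines in the least-squares sense.

    # Sketch: estimate a fixed entry port as the least-squares closest point to a
    # set of 3-D instrument-axis lines (point a_i, unit direction d_i).
    import numpy as np

    def estimate_port(axis_points, axis_dirs):
        """Solve sum_i || (I - d_i d_i^T)(p - a_i) ||^2 -> min for the port p."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for a, d in zip(axis_points, axis_dirs):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
            A += P
            b += P @ a
        return np.linalg.solve(A, b)

    # Each new instrument pose contributes one (a_i, d_i) pair, so A and b can be
    # accumulated online and the port re-estimated sequentially as data arrive.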
International Journal of Medical Robotics and Computer Assisted Surgery | 2015
Liangjing Yang; Junchen Wang; Etsuko Kobayashi; Takehiro Ando; Hiromasa Yamashita; Ichiro Sakuma; Toshio Chiba
This study presents a tracker‐less image‐mapping framework for surgical navigation motivated by the clinical need for intuitive visual guidance during minimally invasive fetoscopic surgery.
Computerized Medical Imaging and Graphics | 2015
Liangjing Yang; Junchen Wang; Takehiro Ando; Akihiro Kubota; Hiromasa Yamashita; Ichiro Sakuma; Toshio Chiba; Etsuko Kobayashi
This work introduces a self-contained framework for endoscopic camera tracking that combines 3D ultrasonography with endoscopy. The approach can be readily incorporated into surgical workflows without installing external tracking devices. By fusing the ultrasound-constructed scene geometry with endoscopic vision, this integrated approach addresses the issues of initialization, scale ambiguity, and interest-point inadequacy that conventional vision-based approaches may face when applied to fetoscopic procedures. Vision-based pose estimation was demonstrated with phantom and ex vivo monkey placenta imaging. The potential contribution of this method may extend beyond fetoscopic procedures to general augmented reality applications in minimally invasive procedures.
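A hedged sketch of how ultrasound-derived scene geometry can anchor endoscopic pose estimation: given 3-D points known in the ultrasound frame and their 2-D projections in the endoscopic image, the camera pose follows from a standard perspective-n-point (PnP) solve. The OpenCV call is a generic choice, and all correspondences and calibration values below are placeholder assumptions rather than the authors' data.

    # Sketch: recover the endoscope pose from 3-D points in the ultrasound frame
    # and their 2-D detections in the endoscopic image via PnP.
    import numpy as np
    import cv2

    object_pts = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0],
                           [0, 0, 10], [10, 10, 0], [10, 0, 10]],
                          dtype=np.float64)                       # mm, ultrasound frame
    image_pts = np.array([[320, 240], [400, 238], [318, 170],
                          [300, 260], [398, 168], [380, 258]],
                         dtype=np.float64)                        # pixels (placeholder)
    K = np.array([[800, 0, 320],
                  [0, 800, 240],
                  [0,   0,   1]], dtype=np.float64)               # intrinsics (assumed)
    dist = np.zeros(5)                                            # no lens distortion

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)      # camera pose: X_cam = R @ X_us + tvec

Fixing the 3-D points in a metric ultrasound frame is what removes the scale ambiguity that a purely monocular vision approach would face.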
Archive | 2016
Xinran Zhang; Zhencheng Fan; Junchen Wang; Hongen Liao
Augmented reality (AR) techniques, which merge virtual computer-generated guidance information into real medical interventions, help surgeons obtain dynamic "see-through" scenes during orthopaedic interventions. Among the various AR techniques, 3D integral videography (IV) image overlay is a promising solution because of its simplicity of implementation and its ability to produce a full-parallax augmented natural view for multiple observers and to improve surgeons' hand-eye coordination. To obtain a precisely fused result, patient-3D image registration is a vital technique in IV overlay based orthopaedic interventions. Marker-based or marker-less registration techniques can be chosen depending on the particular clinical application. With accurate AR guidance, minimally invasive therapy, including cutting, drilling, implantation, and other related operations, can be performed more easily and safely. This chapter reviews related augmented reality techniques for image-guided surgery and analyses several examples of clinical applications. Finally, we discuss the future development of 3D AR based orthopaedic interventions.
AE-CAI | 2013
Liangjing Yang; Junchen Wang; Etsuko Kobayashi; Hongen Liao; Ichiro Sakuma; Hiromasa Yamashita; Toshio Chiba
This work presents a framework for mapping free-hand endoscopic views onto a 3D anatomical model constructed from ultrasound images, without the use of external trackers. It is non-disruptive to the existing surgical workflow, as surgeons do not need to accommodate the operational constraints associated with additionally installed motion sensors or tracking systems. A passive fiducial marker is attached to the tip of the endoscope to create a geometric eccentricity that encodes the position and orientation of the camera. The relative position between the endoscope and the anatomical model in the ultrasound image reference frame is used to establish a texture map that overlays endoscopic views onto the surface of the model. This addresses operational challenges of minimally invasive procedures, including the limited field of view (FOV) and the lack of 3D perspective. Experimental results show average tool position and orientation errors of 1.32 mm and 1.6° respectively. The RMS error of the overall image mapping, obtained by comparing landmark dimensions, is 3.30 mm with a standard deviation of 2.14 mm. The feasibility of the framework is also demonstrated through implementation on a phantom model.
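A small sketch of the texture-mapping step described above, written as a generic pinhole projection: each model vertex, expressed in the camera frame via the estimated endoscope pose, is projected into the endoscopic image to look up its colour. The pose, intrinsics, and image are assumed inputs from the tracking step, not the authors' calibration.

    # Sketch: paint model vertices with endoscopic image colours by projecting them
    # through a pinhole camera model. (R, t) and K come from the tracking step.
    import numpy as np

    def texture_lookup(vertices, R, t, K, image):
        """Return an (N, 3) colour per vertex, or NaN when it falls outside the view."""
        cam = vertices @ R.T + t                  # model frame -> camera frame
        colours = np.full((len(vertices), 3), np.nan)
        in_front = cam[:, 2] > 0                  # keep points in front of the camera
        uvw = cam[in_front] @ K.T
        uv = uvw[:, :2] / uvw[:, 2:3]             # perspective division -> pixels
        h, w = image.shape[:2]
        u = np.floor(uv[:, 0]).astype(int)
        v = np.floor(uv[:, 1]).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        idx = np.where(in_front)[0][inside]
        colours[idx] = image[v[inside], u[inside]]
        return colours

Accumulating such lookups over many free-hand views is what extends the effective field of view beyond a single endoscopic frame.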
AE-CAI | 2013
Junchen Wang; Hideyuki Suenaga; Liangjing Yang; Hongen Liao; Etsuko Kobayashi; Tsuyoshi Takato; Ichiro Sakuma
Surgical navigation techniques have been evolving rapidly in the field of oral and maxillofacial surgery (OMS). However, challenges remain in the current state of the art of computer-assisted OMS, especially from the viewpoint of dental surgery. These challenges include the invasive patient registration procedure, the difficulty of reference marker attachment, navigation error caused by patient movement, the bulky optical markers and line-of-sight requirement of commercial optical tracking devices, and, for electromagnetic (EM) tracking devices, the inaccuracy and susceptibility of EM sensors to magnetic interference. In this paper, a new solution is proposed to overcome these challenges. A stereo camera, optimally customized for the limited surgical space of dental surgery, is designed as a tracking device for both instrument and patient tracking. A small dot pattern is mounted on the surgical tool for instrument tracking so that it can be seen by the camera at all times during the operation. Patient registration is achieved by patient tracking and 3D contour matching with the preoperative patient model, requiring no fiducial or reference markers; in addition, the registration is updated in real time. Experiments were performed to evaluate our method, and an average overall error of 0.71 mm was achieved.
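To make the stereo-tracking step concrete, the sketch below triangulates one tracked dot from a calibrated stereo pair by linear (DLT) triangulation; the projection matrices and pixel measurements are placeholder assumptions, not the system's calibration.

    # Sketch: linear (DLT) triangulation of one tracked dot from a calibrated
    # stereo pair. P1, P2 are 3x4 projection matrices; uv1, uv2 are pixel
    # detections of the same dot in the two views.
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Return the 3-D point minimising the algebraic reprojection error."""
        A = np.vstack([
            uv1[0] * P1[2] - P1[0],
            uv1[1] * P1[2] - P1[1],
            uv2[0] * P2[2] - P2[0],
            uv2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]             # dehomogenise

    # Triangulating every dot of the pattern and fitting a rigid transform to the
    # known pattern geometry yields the instrument pose in the camera frame.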