
Publications


Featured research published by Sing Chun Lee.


Computer Assisted Radiology and Surgery | 2016

Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

Sing Chun Lee; Bernhard Fuerst; Javad Fotouhi; Marius Fischer; Greg Osgood; Nassir Navab

Purpose: This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient’s surface without the need to move the C-arm. Methods: An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Results: Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. Conclusion: To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
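The surface-to-CBCT alignment rests on rigid point-cloud registration; the sketch below is a minimal point-to-point ICP refinement in NumPy/SciPy, not the authors' implementation, and it assumes the FPFH-based coarse alignment has already brought the two clouds roughly together.

```python
# Minimal point-to-point ICP sketch (illustrative only, not the paper's code).
# Assumes `source` (RGBD surface points) and `target` (CBCT surface points)
# are Nx3 / Mx3 NumPy arrays expressed in millimeters and roughly pre-aligned.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, iters=50, tol=1e-6):
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)            # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total                    # camera-to-CBCT rotation/translation
```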


Computer Assisted Radiology and Surgery | 2016

Preclinical usability study of multiple augmented reality concepts for K-wire placement

Marius Fischer; Bernhard Fuerst; Sing Chun Lee; Javad Fotouhi; Séverine Habert; Simon Weidert; Ekkehard Euler; Greg Osgood; Nassir Navab

Purpose: In many orthopedic surgeries, there is a demand for correctly placing medical instruments (e.g., K-wire or drill) to perform bone fracture repairs. The main challenge is the mental alignment of X-ray images acquired using a C-arm, the medical instruments, and the patient, which dramatically increases in complexity during pelvic surgeries. Current solutions include the continuous acquisition of many intra-operative X-ray images from various views, which results in high radiation exposure, long surgical durations, and significant effort and frustration for the surgical staff. This work conducts a preclinical usability study to test and evaluate mixed reality visualization techniques using intra-operative X-ray, optical, and RGBD imaging to augment the surgeon’s view and assist accurate placement of tools. Method: We design and perform a usability study to compare the performance of surgeons and their task load using three different mixed reality systems during K-wire placements. The three systems are interventional X-ray imaging, X-ray augmentation on 2D video, and 3D surface reconstruction augmented by digitally reconstructed radiographs and live tool visualization. Results: The evaluation criteria include duration, number of X-ray images acquired, placement accuracy, and the surgical task load, which are observed during 21 clinically relevant interventions performed by surgeons on phantoms. Finally, we test for statistically significant improvements and show that the mixed reality visualization leads to significantly improved efficiency. Conclusion: The 3D visualization of patient, tool, and DRR shows clear advantages over conventional X-ray imaging and provides intuitive feedback to place the medical tools correctly and efficiently.
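The abstract does not name the significance test used; as a hedged illustration, a repeated-measures comparison of task duration across the three visualization conditions could look like the sketch below. The data are synthetic and the choice of Friedman/Wilcoxon tests is an assumption, not the paper's analysis.

```python
# Hypothetical analysis sketch: compare task duration (seconds) across the three
# visualization conditions for the same set of K-wire placements. The paper does
# not specify its test; a Friedman test is one common choice for repeated measures.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
n_trials = 7                                   # illustrative, not the study's data
xray_only  = rng.normal(300, 40, n_trials)     # conventional fluoroscopy
xray_on_2d = rng.normal(250, 35, n_trials)     # X-ray overlay on 2D video
mixed_3d   = rng.normal(200, 30, n_trials)     # 3D surface + DRR + tool view

stat, p = friedmanchisquare(xray_only, xray_on_2d, mixed_3d)
print(f"Friedman chi2={stat:.2f}, p={p:.3f}")

# Post-hoc pairwise comparison (e.g., conventional vs. full mixed reality).
w_stat, w_p = wilcoxon(xray_only, mixed_3d)
print(f"Wilcoxon W={w_stat:.2f}, p={w_p:.3f}")
```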


Computer Assisted Radiology and Surgery | 2016

Dual-robot ultrasound-guided needle placement: closing the planning-imaging-action loop.

Risto Kojcev; Bernhard Fuerst; Oliver Zettinig; Javad Fotouhi; Sing Chun Lee; Benjamin Frisch; Russell H. Taylor; Edoardo Sinibaldi; Nassir Navab

Purpose: Precise needle placement is an important task during several medical procedures. Ultrasound imaging is often used to guide the needle toward the target region in soft tissue. This task remains challenging due to the dependence of the user on image quality, the limited field of view, a moving target, and a moving needle. In this paper, we present a novel dual-robot framework for robotic needle insertion under robotic ultrasound guidance. Method: We integrated force-controlled ultrasound image acquisition, registration of preoperative and intraoperative images, vision-based robot control, and target localization, in combination with a novel needle tracking algorithm. The framework allows robotic needle insertion to target a preoperatively defined region of interest while enabling real-time visualization and adaptive trajectory planning to provide safe and quick interactions. We assessed the framework by considering both static and moving targets embedded in water and tissue-mimicking gelatin. Results: The presented dual-robot tracking algorithms allow for accurate needle placement, reaching the region of interest with an error of around 1 mm. Conclusion: To the best of our knowledge, this is the first use of two independent robots, one for imaging and the other for needle insertion, that are simultaneously controlled using image processing algorithms. Experimental results show the feasibility of the approach and demonstrate the accuracy and robustness of the process.
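As a rough illustration of the closed planning-imaging-action loop, the sketch below steps a tracked needle tip toward an ultrasound-localized target with a simple proportional controller; the controller, gains, and tolerances are assumptions for illustration and not the paper's control scheme.

```python
# Minimal sketch of a closed-loop needle-insertion controller (not the paper's
# framework): a proportional controller drives the tracked needle tip toward a
# target localized in the (simulated) ultrasound frame. All quantities in mm.
import numpy as np

def control_step(tip, target, gain=0.3, max_step=2.0):
    """Return a bounded Cartesian correction for the insertion robot."""
    error = target - tip
    step = gain * error
    norm = np.linalg.norm(step)
    if norm > max_step:                     # limit per-cycle motion for safety
        step *= max_step / norm
    return step

tip = np.array([0.0, 0.0, 0.0])
target = np.array([5.0, -3.0, 40.0])        # illustrative target in tissue
for _ in range(100):
    # In the real system, `tip` and `target` would come from needle tracking
    # and ultrasound-based target localization; here we just integrate the step.
    tip = tip + control_step(tip, target)
    if np.linalg.norm(target - tip) < 1.0:  # ~1 mm placement tolerance
        break
print("final error (mm):", np.linalg.norm(target - tip))
```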


Medical Imaging 2018: Imaging Informatics for Healthcare, Research, and Applications | 2018

Towards clinical translation of augmented orthopedic surgery: from pre-op CT to intra-op x-ray via RGBD sensing

Emerson Tucker; Javad Fotouhi; Mathias Unberath; Sing Chun Lee; Bernhard Fuerst; Alex Johnson; Mehran Armand; Greg Osgood; Nassir Navab

Pre-operative CT data is available for several orthopedic and trauma interventions and is mainly used to identify injuries and plan the surgical procedure. In this work, we propose an intuitive augmented reality environment that allows visualization of pre-operative data during the intervention, with an overlay of the optical information from the surgical site. The pre-operative CT volume is first registered to the patient by acquiring a single C-arm X-ray image and using 3D/2D intensity-based registration. Next, we use an RGBD sensor on the C-arm to fuse the optical information of the surgical site with the patient's pre-operative medical data and provide an augmented reality environment. The 3D/2D registration of the pre- and intra-operative data allows us to maintain a correct visualization each time the C-arm is repositioned or the patient moves. An overall mean target registration error (mTRE) and standard deviation of 5.24 ± 3.09 mm was measured, averaged over 19 C-arm poses. The proposed solution enables the surgeon to visualize pre-operative data overlaid with information from the surgical site (e.g., the surgeon's hands, surgical tools) for any C-arm pose, and avoids the line-of-sight and long setup-time issues present in commercially available systems.
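To illustrate the idea behind 3D/2D intensity-based registration, the toy sketch below renders a parallel-beam DRR from a synthetic volume and recovers a single rotation angle by maximizing normalized cross-correlation against a target projection. The real system optimizes a full 6-DoF C-arm pose with a proper cone-beam DRR, so the phantom, beam model, and one-parameter search here are simplifying assumptions.

```python
# Toy sketch of 3D/2D intensity-based registration (not the authors' method):
# render a parallel-beam DRR for a candidate rotation angle and pick the angle
# whose DRR best matches the target "X-ray" by normalized cross-correlation.
import numpy as np
from scipy.ndimage import rotate

def render_drr(volume, angle_deg):
    """Simplified parallel-beam DRR: rotate about one axis, then integrate."""
    rotated = rotate(volume, angle_deg, axes=(0, 1), reshape=False, order=1)
    return rotated.sum(axis=0)

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

# Synthetic "CT": an asymmetric two-block phantom; the "X-ray" is its DRR at 12 deg.
volume = np.zeros((64, 64, 64))
volume[20:44, 24:40, 16:48] = 1.0
volume[10:18, 40:52, 16:48] = 0.5
target_xray = render_drr(volume, 12.0)

# Coarse exhaustive search; a real system would run a 6-DoF optimizer instead.
angles = np.linspace(-30, 30, 121)
scores = [ncc(render_drr(volume, a), target_xray) for a in angles]
print("recovered angle (deg):", angles[int(np.argmax(scores))])
```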


Medical Image Computing and Computer-Assisted Intervention | 2018

DeepDRR – A Catalyst for Machine Learning in Fluoroscopy-Guided Procedures

Mathias Unberath; Jan-Nico Zaech; Sing Chun Lee; Bastian Bier; Javad Fotouhi; Mehran Armand; Nassir Navab

Machine learning-based approaches outperform competing methods in most disciplines relevant to diagnostic radiology. Interventional radiology, however, has not yet benefited substantially from the advent of deep learning, mainly for two reasons: (1) most images acquired during the procedure are never archived and are thus not available for learning, and (2) even if they were available, annotation would be a severe challenge due to the vast amounts of data. When considering fluoroscopy-guided procedures, an interesting alternative to true interventional fluoroscopy is in silico simulation of the procedure from 3D diagnostic CT. In this case, labeling is comparably easy and potentially readily available, yet the appropriateness of the resulting synthetic data depends on the forward model. In this work, we propose DeepDRR, a framework for fast and realistic simulation of fluoroscopy and digital radiography from CT scans, tightly integrated with the software platforms native to deep learning. We use machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, combined with analytic forward projection and noise injection to achieve the required performance. On the example of anatomical landmark detection in X-ray images of the pelvis, we demonstrate that machine learning models trained on DeepDRRs generalize to unseen clinically acquired data without the need for re-training or domain adaptation. Our results are promising and promote the establishment of machine learning in fluoroscopy-guided procedures.
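As a hedged illustration of the noise-injection step, the sketch below converts a noiseless line-integral image into photon counts via the Beer-Lambert law and samples Poisson noise. The photon budget and the omission of DeepDRR's learned scatter estimation and spectral model are simplifying assumptions.

```python
# Hedged sketch of the noise-injection idea (not DeepDRR's implementation):
# convert a noiseless line-integral image into expected photon counts, then
# sample Poisson noise to mimic a low-dose fluoroscopic frame.
import numpy as np

def inject_noise(line_integrals, photons_per_pixel=5_000, rng=None):
    """line_integrals: integral of attenuation along each ray (unitless)."""
    rng = rng or np.random.default_rng()
    expected = photons_per_pixel * np.exp(-line_integrals)   # Beer-Lambert law
    noisy_counts = rng.poisson(expected)
    noisy_counts = np.maximum(noisy_counts, 1)                # avoid log(0)
    return -np.log(noisy_counts / photons_per_pixel)          # back to log domain

# Example: a synthetic projection with values in a plausible attenuation range.
ideal = np.random.default_rng(0).random((256, 256)) * 3.0
noisy = inject_noise(ideal)
print("mean absolute deviation:", float(np.abs(noisy - ideal).mean()))
```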


Healthcare Technology Letters | 2017

Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

Sing Chun Lee; Bernhard Fuerst; Keisuke Tateno; Alex Johnson; Javad Fotouhi; Greg Osgood; Federico Tombari; Nassir Navab

Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows tracking of the surgical tools even when occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
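Once the calibration is known, the overlay itself reduces to projecting CBCT-space geometry into the camera image; the following is a minimal pinhole-projection sketch with placeholder intrinsics and calibration values (assumptions for illustration, not the system's parameters).

```python
# Minimal sketch of the overlay step (illustrative, not the paper's code):
# project 3D points given in CBCT coordinates into the RGBD color image using
# the rigid calibration T_cam_from_cbct and pinhole intrinsics K.
# The numeric values below are assumptions, not calibration results.
import numpy as np

K = np.array([[615.0,   0.0, 320.0],      # assumed RGBD color intrinsics (px)
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])

T_cam_from_cbct = np.eye(4)               # placeholder calibration (4x4, mm)
T_cam_from_cbct[:3, 3] = [10.0, -5.0, 400.0]

def project(points_cbct, T, K):
    """points_cbct: Nx3 array in CBCT space -> Nx2 pixel coordinates."""
    homog = np.hstack([points_cbct, np.ones((len(points_cbct), 1))])
    cam = (T @ homog.T).T[:, :3]                      # into camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                     # perspective division

screw_axis = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 50.0]])   # example points
print(project(screw_axis, T_cam_from_cbct, K))
```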


Journal of Medical Imaging | 2018

Plan in 2-D, execute in 3-D: An augmented reality solution for cup placement in total hip arthroplasty

Javad Fotouhi; Clayton P. Alexander; Mathias Unberath; Giacomo Taylor; Sing Chun Lee; Bernhard Fuerst; Alex Johnson; Greg Osgood; Russell H. Taylor; Harpal S. Khanuja; Mehran Armand; Nassir Navab

Reproducibly achieving proper implant alignment is a critical step in total hip arthroplasty procedures that has been shown to substantially affect patient outcome. In current practice, correct alignment of the acetabular cup is verified in C-arm x-ray images that are acquired in an anterior–posterior (AP) view. Favorable surgical outcome is, therefore, heavily dependent on the surgeon’s experience in understanding the 3-D orientation of a hemispheric implant from 2-D AP projection images. This work proposes an easy to use intraoperative component planning system based on two C-arm x-ray images that are combined with 3-D augmented reality (AR) visualization that simplifies impactor and cup placement according to the planning by providing a real-time RGBD data overlay. We evaluate the feasibility of our system in a user study comprising four orthopedic surgeons at the Johns Hopkins Hospital and report errors in translation, anteversion, and abduction as low as 1.98 mm, 1.10 deg, and 0.53 deg, respectively. The promising performance of this AR solution shows that deploying this system could eliminate the need for excessive radiation, simplify the intervention, and enable reproducibly accurate placement of acetabular implants.
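For readers unfamiliar with the reported angles, the sketch below converts a cup axis into abduction (radiographic inclination) and anteversion. The anatomical frame (x lateral, y anterior, z superior) and the radiographic definitions used here are assumptions and may differ from the paper's exact conventions.

```python
# Hedged sketch: recover radiographic abduction (inclination) and anteversion
# from an acetabular cup axis in an assumed anatomical frame with
# x = lateral, y = anterior, z = superior. Frame and definitions are assumptions.
import numpy as np

def cup_angles(axis):
    """axis: 3-vector along the cup axis (need not be unit length)."""
    n = np.asarray(axis, dtype=float)
    n /= np.linalg.norm(n)
    anteversion = np.degrees(np.arcsin(np.clip(n[1], -1.0, 1.0)))  # tilt out of coronal plane
    abduction = np.degrees(np.arctan2(n[0], n[2]))                 # projected angle from superior axis
    return abduction, anteversion

# A planned orientation of 40 deg abduction / 15 deg anteversion, and an
# illustrative "achieved" axis measurement.
planned = [np.sin(np.radians(40)) * np.cos(np.radians(15)),
           np.sin(np.radians(15)),
           np.cos(np.radians(40)) * np.cos(np.radians(15))]
achieved = [0.66, 0.28, 0.70]
for name, ax in [("planned", planned), ("achieved", achieved)]:
    ab, av = cup_angles(ax)
    print(f"{name}: abduction={ab:.1f} deg, anteversion={av:.1f} deg")
```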


Medical Physics | 2018

Automatic intraoperative stitching of nonoverlapping cone‐beam CT acquisitions

Javad Fotouhi; Bernhard Fuerst; Mathias Unberath; Stefan Reichenstein; Sing Chun Lee; Alex Johnson; Greg Osgood; Mehran Armand; Nassir Navab

Purpose: Cone-beam computed tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is bounded by a limited imaging volume, resulting in reduced effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and nonoverlapping CBCT volumes to enable 3D measurements on large anatomical structures. Methods: A CBCT-capable mobile C-arm is augmented with a red-green-blue-depth (RGBD) camera. An offline cocalibration of the two imaging modalities results in coregistered video, infrared, and x-ray views of the surgical scene. Then, automatic stitching of multiple small, nonoverlapping CBCT volumes is possible by recovering the relative motion of the C-arm with respect to the patient based on the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers that are placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only depth data provided by the RGBD sensor. Results: On an animal cadaver, we show stitching errors as low as 0.33, 0.91, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. Conclusions: The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures.
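Conceptually, stitching reduces to composing the fixed camera-to-CBCT calibration with the tracked camera motion between the two acquisitions; the sketch below shows that composition with placeholder transforms (the matrices are assumptions, not measured poses).

```python
# Hedged sketch of the stitching transform (not the paper's implementation):
# given the fixed camera-to-CBCT calibration and the camera poses estimated by
# the tracker (e.g., SLAM) at the two scan positions, compose the transform
# that maps voxels of the second CBCT volume into the first volume's frame.
# All matrices are 4x4 homogeneous transforms; values below are placeholders.
import numpy as np

def inv_T(T):
    """Invert a rigid homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

T_cam_from_cbct = np.eye(4)          # offline co-calibration (assumed known)
T_world_from_cam1 = np.eye(4)        # tracker pose at first CBCT acquisition
T_world_from_cam2 = np.eye(4)        # tracker pose at second acquisition
T_world_from_cam2[:3, 3] = [0.0, 120.0, 0.0]   # e.g., C-arm translated 120 mm

T_cbct1_from_cbct2 = (inv_T(T_cam_from_cbct) @ inv_T(T_world_from_cam1)
                      @ T_world_from_cam2 @ T_cam_from_cbct)
print(T_cbct1_from_cbct2)            # place volume 2 in volume 1's coordinates
```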


Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling | 2018

Technical note: an augmented reality system for total hip arthroplasty

Javad Fotouhi; Clayton P. Alexander; Mathias Unberath; Giacomo Taylor; Sing Chun Lee; Bernhard Fuerst; Alex Johnson; Greg Osgood; Russell H. Taylor; Harpal S. Khanuja; Nassir Navab; Mehran Armand

Proper implant alignment is a critical step in total hip arthroplasty (THA) procedures. In current practice, correct alignment of the acetabular cup is verified in C-arm X-ray images that are acquired in an anterior-posterior (AP) view. Favorable surgical outcome is, therefore, heavily dependent on the surgeon’s experience in understanding the 3D orientation of a hemispheric implant from 2D AP projection images. This work proposes an easy to use intra-operative component planning system based on two C-arm X-ray images that is combined with 3D augmented reality (AR) visualization that simplifies impactor and cup placement according to the planning by providing a real-time RGBD data overlay. We evaluate the feasibility of our system in a user study comprising four orthopedic surgeons at the Johns Hopkins Hospital, and report errors in translation, anteversion, and abduction as low as 1.98 mm, 1.10°, and 0.53°, respectively. The promising performance of this AR solution shows that deploying this system could eliminate the need for excessive radiation, simplify the intervention, and enable reproducibly accurate placement of acetabular implants.
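As a hedged illustration of planning from two C-arm views, the sketch below triangulates a 3D landmark from its pixel coordinates in two images with known projection matrices using linear least squares (DLT); the intrinsics, poses, and landmark are synthetic, and this is not necessarily the planning algorithm used in the paper.

```python
# Hedged sketch of two-view geometry (not the paper's algorithm): triangulate a
# 3D landmark (e.g., a planned cup center) from its annotations in two C-arm
# images with known 3x4 projection matrices. Values are illustrative.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 projection matrices; uv1, uv2: pixel coords in each view."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                       # inhomogeneous 3D point

K = np.array([[1200.0, 0.0, 256.0], [0.0, 1200.0, 256.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                     # first C-arm pose
R2 = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])   # ~90 deg orbit
P2 = K @ np.hstack([R2, -R2 @ np.array([[800.0], [0.0], [0.0]])])

X_true = np.array([30.0, -10.0, 800.0])                               # synthetic landmark
uv1 = P1 @ np.append(X_true, 1.0); uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ np.append(X_true, 1.0); uv2 = uv2[:2] / uv2[2]
print(triangulate(P1, P2, uv1, uv2))                                  # ~= X_true
```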


International Symposium on Mixed and Augmented Reality | 2017

[POSTER] Mixed Reality Support for Orthopaedic Surgery

Sing Chun Lee; Keisuke Tateno; Bernhard Fuerst; Federico Tombari; Javad Fotouhi; Greg Osgood; Alex Johnson; Nassir Navab

This work presents a mixed reality environment for orthopaedic interventions that provides a 3D overlay of cone-beam CT (CBCT) images, the surgical site, and real-time tool tracking. The system uses an RGBD camera attached to the detector plane of a mobile C-arm, a typical device for acquiring X-ray images during surgery. Calibration of the two devices is done by acquiring simultaneous CBCT and RGBD scans of a calibration phantom and computing the rigid transformation between them. Markerless tracking of the surgical tool is computed in the RGBD view using real-time segmentation and Simultaneous Localization And Mapping. The RGBD view is then overlaid onto the CBCT data with real-time point clouds of the surgical site. This visualization provides multiple desired views of the medical data, the surgical site, and the tracked surgical tools, which could provide intuitive visualization for orthopaedic procedures to place instrumentation and assist surgeons with localization and coordination. The proposed opto-X-ray system can lead to X-ray radiation dose reduction as well as improved safety in minimally invasive orthopaedic procedures.
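The real-time point cloud of the surgical site comes from back-projecting the depth image through the camera intrinsics; a minimal sketch follows, with assumed intrinsics and a synthetic depth frame (not the system's calibration).

```python
# Minimal sketch (illustrative assumptions, not the paper's code): back-project
# an RGBD depth frame into a 3D point cloud of the surgical site using pinhole
# intrinsics, the step that feeds the point-cloud overlay onto the CBCT data.
import numpy as np

fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0      # assumed depth intrinsics (px)

def depth_to_points(depth_m, stride=4):
    """depth_m: HxW depth image in meters -> Nx3 points in the camera frame."""
    h, w = depth_m.shape
    vs, us = np.mgrid[0:h:stride, 0:w:stride]
    z = depth_m[vs, us]
    valid = z > 0                                # drop missing depth readings
    z, us, vs = z[valid], us[valid], vs[valid]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.column_stack([x, y, z])            # meters, camera coordinates

# Example with a synthetic flat surface 0.6 m from the camera.
depth = np.full((480, 640), 0.6)
cloud = depth_to_points(depth)
print(cloud.shape, cloud[:2])
```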

Collaboration


Dive into Sing Chun Lee's collaborations.

Top Co-Authors

Javad Fotouhi (Johns Hopkins University)
Greg Osgood (Johns Hopkins University)
Alex Johnson (Johns Hopkins University)
Mehran Armand (Johns Hopkins University)
Giacomo Taylor (Johns Hopkins University)