Alexis Cheng
Johns Hopkins University
Publications
Featured research published by Alexis Cheng.
IEEE Transactions on Biomedical Engineering | 2013
Hussam Al-Deen Ashab; Victoria A. Lessoway; Siavash Khallaghi; Alexis Cheng; Robert Rohling; Purang Abolmaesumi
We propose an augmented reality system to identify lumbar vertebral levels and assist with spinal needle insertion for epidural anesthesia. These procedures require careful placement of a needle to ensure effective delivery of anesthetics and to avoid damaging sensitive tissue such as nerves. In this system, a trinocular camera tracks an ultrasound transducer during the acquisition of a sequence of B-mode images. The system generates an ultrasound panorama image of the lumbar spine, automatically identifies the lumbar levels in the panorama image, and overlays the identified levels on a live camera view of the patient's back. Validation is performed to test the accuracy of panorama generation, lumbar level identification, overall system accuracy, and the effect of changes in the curvature of the spine during the examination. Results from 17 subjects demonstrate the feasibility of achieving an error within the clinically acceptable range for epidural anesthesia.
international conference on robotics and automation | 2014
Martin Kendal Ackerman; Alexis Cheng; Emad M. Boctor; Gregory S. Chirikjian
Ultrasound imaging can be an advantageous imaging modality for image-guided surgery. When using ultrasound imaging (or any imaging modality), calibration is important when more advanced forms of guidance, such as augmented reality systems, are used. There are many different methods of calibration, but the goal of each is to recover the rigid-body transformation relating the pose of the probe to the ultrasound image frame. This paper presents a unified algorithm that can solve the ultrasound calibration problem for various calibration methodologies. The algorithm uses gradient-descent optimization on the Euclidean group and can run in real time, serving as a way to update the calibration parameters online. We also show how filtering, based on the theory of invariants, can further improve the online results. Focusing on two specific calibration methodologies, the AX = XB problem and the BX⁻¹p problem, we demonstrate the efficacy of the algorithm in both simulation and experiment.
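As a rough illustration of the AX = XB calibration problem mentioned above, the sketch below solves a batch of exactly corresponding (A, B) transform pairs with a Park-Martin-style closed form. This is not the paper's on-line gradient-descent algorithm; the function names and the NumPy-based structure are illustrative assumptions.

```python
import numpy as np

def rot_log(R):
    # Axis-angle (log-map) vector of a rotation matrix.
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2.0 * np.sin(theta))

def solve_ax_xb(As, Bs):
    """Closed-form batch solution of A_i X = X B_i (4x4 homogeneous transforms)."""
    # Rotation: the log-map axes satisfy alpha_i = R_X beta_i,
    # an orthogonal Procrustes problem solved via SVD.
    alphas = np.array([rot_log(A[:3, :3]) for A in As])
    betas = np.array([rot_log(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(alphas.T @ betas)
    R_X = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

At least two pairs with non-parallel rotation axes are needed for a unique solution; an on-line variant would instead update the current estimate of X as each new (A, B) pair arrives.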
Journal of Biomedical Optics | 2013
Alexis Cheng; Jin U. Kang; Russell H. Taylor; Emad M. Boctor
Modern surgical procedures often fuse video with other imaging modalities to provide the surgeon with information support. This requires interventional guidance equipment and surgical navigation systems to register different tools and devices together, such as stereoscopic endoscopes and ultrasound (US) transducers. In this work, the focus is specifically on the registration between these two devices. Electromagnetic and optical trackers are typically used to acquire this registration, but they have various drawbacks that typically lead to target registration errors (TRE) of approximately 3 mm. We introduce photoacoustic markers for direct three-dimensional (3-D) US-to-video registration. The feasibility of this method was demonstrated on synthetic and ex vivo porcine liver, kidney, and fat phantoms with an air-coupled laser and a motorized 3-D US probe. The resulting TRE for each experiment ranged from 380 to 850 μm, with standard deviations ranging from 150 to 450 μm. We also discuss a roadmap for bringing this system into the surgical setting and possible challenges along the way.
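Once the photoacoustic markers have been localized in both the 3-D US volume and the camera view, the registration itself reduces to a point-based rigid alignment, with TRE measured at a held-out target. The sketch below is a simplified stand-in for the paper's pipeline, using Arun's classic SVD method; all names are assumptions.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q (Arun's SVD method)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def target_registration_error(R, t, target_src, target_dst):
    # Distance between a held-out target mapped through (R, t) and its true position.
    return np.linalg.norm(R @ target_src + t - target_dst)
```

Here P would hold the marker positions in US coordinates and Q the same markers in camera coordinates; TRE is then evaluated at targets that were not used to compute the registration.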
medical image computing and computer assisted intervention | 2014
Xiaoyu Guo; Alexis Cheng; Haichong K. Zhang; Hyun Jae Kang; Ralph Etienne-Cummings; Emad M. Boctor
In ultrasound-guided medical procedures, accurate tracking of interventional tools with respect to the US probe is crucial to patient safety and clinical outcome. US probe tracking requires an unavoidable calibration procedure to recover the rigid-body transformation between the US image and the tracking coordinate system. In the literature, almost all calibration methods have been performed on passive phantoms. These calibration methods face several challenges, including dependency on ultrasound image quality and on parameters such as frequency, depth, and beam thickness. In this work, we introduce, for the first time, an active echo (AE) phantom for US calibration. The phantom actively detects and responds to the US beams from the imaging probe. This active approach allows reliable and accurate identification of the ultrasound image mid-plane independent of image quality, and enables automatic point segmentation; because both target localization and segmentation can be done automatically, user dependency is minimized. The AE phantom is compared with a gold-standard crosswire (CW) phantom in a robotic US experimental setup. The results indicate that the AE calibration phantom provides a localization precision of 223 μm and an overall reconstruction error of 850 μm. Automatic segmentation was also tested and shown to perform comparably to manual segmentation.
international conference on robotics and automation | 2015
Sungmin Kim; Hyun Jae Kang; Alexis Cheng; Muyinatu A. Lediju Bell; Emad M. Boctor; Peter Kazanzides
We are investigating the use of photoacoustic (PA) imaging to detect critical structures, such as the carotid artery, that may be located behind the bone being drilled during robot-assisted endonasal transsphenoidal surgery. In this system, the laser is mounted on the drill (via an optical fiber) and the 2D ultrasound (US) probe is placed elsewhere on the skull. Both the drill and the US probe are tracked relative to the patient reference frame. PA imaging provides two advantages compared to conventional B-mode US: (1) the laser penetrates thin layers of bone, and (2) the PA image displays targets that are in the laser path. Thus, the laser can be used to (non-invasively) extend the drill axis, thereby enabling reliable detection of critical structures that may reside in the drill path. This setup creates a challenging alignment problem, however, because the US probe must be placed so that its image plane intersects the laser line in the neighborhood of the target anatomy (as estimated from preoperative images). This paper reports on a navigation system developed to assist with this task, and the results of phantom experiments that demonstrate that a critical structure can be detected with an accuracy of approximately 1 mm relative to the drill tip.
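The alignment problem described above, checking whether the US image plane intersects the extended drill/laser axis near the target anatomy, comes down to a line-plane intersection. The minimal sketch below assumes the line and plane have already been expressed in a common (patient) frame via the tracker; function and parameter names are illustrative.

```python
import numpy as np

def laser_plane_intersection(p0, v, q0, n, eps=1e-9):
    """Point where the extended laser line meets the US image plane.

    p0, v : a point on the laser line and the line's direction vector
    q0, n : a point on the image plane and the plane's normal vector
    Returns None when the line is (numerically) parallel to the plane.
    """
    denom = np.dot(n, v)
    if abs(denom) < eps:
        return None
    t = np.dot(n, q0 - p0) / denom
    return p0 + t * v
```

A navigation system can evaluate this intersection continuously and guide the operator to reposition the probe until the intersection point falls within the imaged region around the estimated target.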
Journal of medical imaging | 2016
Haichong K. Zhang; Alexis Cheng; Nick Bottenus; Xiaoyu Guo; Gregg E. Trahey; Emad M. Boctor
Ultrasonography is a widely used imaging modality for visualizing anatomical structures due to its low cost and ease of use; however, it is challenging to acquire acceptable image quality in deep tissue. Synthetic aperture (SA) is a technique used to increase image resolution by synthesizing information from multiple subapertures, but the resolution improvement is limited by the physical size of the array transducer. With a large F-number, it is difficult to achieve high resolution in deep regions without extending the effective aperture size. We propose a method to extend the available aperture size for SA, called synthetic tracked aperture ultrasound (STRATUS) imaging, by sweeping an ultrasound transducer while tracking its orientation and location. Tracking information of the ultrasound probe is used to synthesize the signals received at different positions. Considering the practical implementation, we estimated through simulation the effect of tracking and ultrasound calibration errors on the quality of the final beamformed image. In addition, to experimentally validate this approach, a 6-degree-of-freedom robot arm was used as a mechanical tracker to hold an ultrasound transducer and apply in-plane lateral translational motion. Results indicate that STRATUS imaging with robotic tracking has the potential to improve ultrasound image quality.
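The core of synthesizing signals received at tracked positions is coherent delay-and-sum beamforming: each receive trace is time-aligned to every image point using the element's tracked (and calibrated) position. The toy 2-D sketch below assumes idealized single-sample echoes, an assumed speed of sound of 1540 m/s, and hypothetical names; it is not the authors' beamformer.

```python
import numpy as np

C = 1540.0  # assumed speed of sound in tissue, m/s

def delay_and_sum(rf, elem_xy, fs, pixels):
    """Coherently sum echoes over all tracked element positions.

    rf      : (n_elem, n_samp) receive traces, one per tracked element position
    elem_xy : (n_elem, 2) element positions after applying tracking + calibration, in m
    fs      : sampling rate in Hz
    pixels  : (n_pix, 2) image points (lateral, depth) in m
    """
    image = np.zeros(len(pixels))
    for px_idx, p in enumerate(pixels):
        dist = np.linalg.norm(elem_xy - p, axis=1)        # one-way distances
        idx = np.round(2.0 * dist / C * fs).astype(int)   # two-way delay -> sample index
        valid = idx < rf.shape[1]
        image[px_idx] = rf[np.arange(len(elem_xy))[valid], idx[valid]].sum()
    return image
```

Sweeping the probe enlarges the span of `elem_xy` beyond the physical array, which is what lowers the effective F-number; errors in the tracked positions translate directly into delay errors and loss of coherence, which is why calibration accuracy matters.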
medical image computing and computer assisted intervention | 2012
Alexis Cheng; Jin U. Kang; Russell H. Taylor; Emad M. Boctor
Interventional guidance systems require surgical navigation systems to register different tools and devices together. Standard navigation systems have various drawbacks, leading to target registration errors (TRE) of around 3 mm. The aim of this work is to introduce the photoacoustic (PA) effect as a direct 3D ultrasound (US)-to-video registration method. We present our experimental setup and demonstrate its feasibility on both a synthetic phantom and an ex vivo tissue phantom. We achieve an average TRE of 560 μm with a standard deviation of 280 μm on the synthetic phantom, and an average TRE of 420 μm with a standard deviation of 150 μm on the ex vivo tissue phantom. We describe a roadmap to bring this system into the surgical setting and highlight possible challenges along the way.
international conference on robotics and automation | 2014
Martin Kendal Ackerman; Alexis Cheng; Gregory S. Chirikjian
For the case of an exact set of compatible As and Bs with known correspondence, the AX = XB problem was solved decades ago. However, in many applications, the data streams containing the As and Bs will have different sampling rates or will be asynchronous. For these reasons, and because each stream may contain gaps in information, methods that require minimal a priori knowledge of the correspondence between As and Bs would be superior to existing algorithms that require exact correspondence. We present an information-theoretic algorithm for recovering X from a set of As and a set of Bs that does not require a priori knowledge of correspondences. The algorithm views the problem in terms of probability distributions on the group SE(3) and minimizes the Kullback-Leibler divergence of these distributions with respect to the unknown X. This minimization is performed by an efficient numerical procedure that reliably recovers the unknown X.
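To convey the distribution-matching idea in a deliberately reduced form: fit a distribution to each unpaired sample set and choose the unknown transformation that minimizes the KL divergence between them. The real algorithm works with distributions on SE(3); the toy below works only with a scalar shift between 1-D Gaussians, and all names are illustrative.

```python
import numpy as np

def gaussian_kl(m1, v1, m2, v2):
    # KL( N(m1, v1) || N(m2, v2) ) for 1-D Gaussians.
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def recover_shift(a_samples, b_samples, grid):
    """Correspondence-free recovery of an unknown shift x with a_i ~ b_j + x.

    Fits a Gaussian to each sample set (no pairing needed) and picks the
    grid value of x minimizing the KL divergence between the A distribution
    and the shifted B distribution.
    """
    ma, va = a_samples.mean(), a_samples.var()
    mb, vb = b_samples.mean(), b_samples.var()
    kls = [gaussian_kl(ma, va, mb + x, vb) for x in grid]
    return grid[int(np.argmin(kls))]
```

Note that the A samples can be arbitrarily permuted relative to the B samples; only the fitted distributions matter, which is the property that removes the need for a priori correspondences.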
Frontiers in Robotics and AI | 2016
Hasan Tutkun Şen; Alexis Cheng; Kai Ding; Emad M. Boctor; John Wong; Iulian Iordachita; Peter Kazanzides
Radiation therapy typically begins with the acquisition of a CT scan of the patient for planning, followed by multiple days where radiation is delivered according to the plan. This requires that the patient be reproducibly positioned (set up) on the radiation therapy device (linear accelerator) such that the radiation beams pass through the target. Modern linear accelerators provide cone-beam computed tomography (CBCT) imaging, but this does not provide sufficient contrast to discriminate many abdominal soft-tissue targets and therefore patient setup is often done by aligning bony anatomy or implanted fiducials. Ultrasound (US) can be used to both assist with patient setup and to provide real-time monitoring of soft-tissue targets. One challenge, however, is that the ultrasound probe contact pressure can deform the target area and cause discrepancies with the treatment plan. Another challenge is that radiation therapists typically do not have ultrasound experience and therefore cannot easily find the target in the US image. We propose cooperative control strategies to address both challenges. First, we use cooperative control with virtual fixtures (VFs) to enable acquisition of a planning CT that includes the soft-tissue deformation. Then, for the patient setup during the treatment sessions, we propose to use real-time US image feedback to dynamically update the VFs; this co-manipulation strategy provides haptic cues that guide the therapist to correctly place the US probe. A phantom study is performed to demonstrate that the co-manipulation strategy enables inexperienced operators to quickly and accurately place the probe on a phantom to reproduce a desired reference image. This is a necessary step for patient setup and, by reproducing the reference image, creates soft-tissue deformations that are consistent with the treatment plan, thereby enabling real-time monitoring during treatment delivery.
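One common way to realize a guidance virtual fixture in cooperative control is an anisotropic admittance law: operator force along the fixture-preferred direction (e.g., the motion that would reproduce the reference US image) passes through at full gain, while off-axis force is attenuated. The sketch below is a toy version of that idea; the gains, structure, and names are illustrative assumptions, not the authors' controller.

```python
import numpy as np

def vf_admittance(force, preferred_dir, k_free=1.0, k_hard=0.1):
    """Commanded velocity from operator force under a guidance virtual fixture.

    Splits the measured force into the component along the fixture-preferred
    direction and the remainder, then attenuates off-axis motion (toy gains).
    """
    d = preferred_dir / np.linalg.norm(preferred_dir)
    f_par = np.dot(force, d) * d
    f_perp = force - f_par
    return k_free * f_par + k_hard * f_perp
```

Updating `preferred_dir` in real time from US image feedback, as the abstract describes, turns this into a haptic cue: the probe feels "easy" to move toward the pose that reproduces the reference image and "stiff" in other directions.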
international conference of the ieee engineering in medicine and biology society | 2012
Hussam Al-Deen Ashab; Victoria A. Lessoway; Siavash Khallaghi; Alexis Cheng; Robert Rohling; Purang Abolmaesumi
Purpose: Spinal needle injection procedures are used for anesthesia and analgesia, such as lumbar epidurals. These procedures require careful placement of a needle, both to ensure effective therapy delivery and to avoid damaging sensitive tissue such as the spinal cord. An important step in such procedures is the accurate identification of the vertebral levels, which is currently performed using manual palpation, with a reported 30% success rate for correct identification. Methods: An augmented reality system was developed to help identify the lumbar vertebral levels. The system consists of an ultrasound transducer tracked in real time by a trinocular camera system, an automatic ultrasound panorama generation module that provides an extended view of the lumbar vertebrae, an image processing technique that automatically identifies the vertebral levels in the panorama image, and a graphical interface that overlays the identified levels on a live camera view of the patient's back. Results: Validation was performed on ultrasound data obtained from 10 subjects with differing degrees of spinal arching. The average success rate for segmentation of the vertebrae was 85%. Automatic level identification had an average accuracy of 6.6 mm. Conclusion: The prototype system demonstrates better accuracy for identifying the vertebrae than traditional manual methods.