Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Elvis C. S. Chen is active.

Publication


Featured research published by Elvis C. S. Chen.


Medical Image Analysis | 2001

A computational model of postoperative knee kinematics

Elvis C. S. Chen; Randy E. Ellis; J.T. Bryant; John F. Rudan

A mathematical model for studying the passive kinematics of total knee prostheses can be useful in computer-aided planning and guidance of total joint replacement. If the insertion locations and neutral lengths of the knee ligaments are known, the passive kinematics of the knee can be calculated by minimizing the strain energy stored in the ligaments at any angular configuration of the knee. Insertions may be found intraoperatively, or may come from preoperative 3D medical images. The model considered here accounts for both the geometry of the prosthesis and patient-specific information, and can be used to study the kinematics of a patient's knee joint after total joint replacement. It may be useful in preoperative planning, computer-aided intraoperative guidance, and the design of new prosthetic joints.
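The strain-energy-minimization idea in this abstract can be sketched in a few lines. The sketch below is purely illustrative and is not the published model: the two ligaments, their insertion points, neutral lengths, and stiffnesses are invented, ligaments are treated as tension-only linear springs, and only the tibial translation at a fixed flexion angle is optimized.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical ligament data: femoral insertion, tibial insertion (in the
# tibial frame), neutral length L0, and stiffness k. All values invented.
LIGAMENTS = [
    (np.array([0.0, 1.0, 0.0]), np.array([0.0, -2.0, 0.0]), 3.0, 10.0),
    (np.array([1.0, 1.0, 0.0]), np.array([1.0, -2.0, 0.0]), 3.0, 10.0),
]

def strain_energy(t, flexion_rad):
    """Total ligament strain energy for a tibial translation t at a fixed
    flexion angle; slack ligaments (length <= L0) store no energy."""
    c, s = np.cos(flexion_rad), np.sin(flexion_rad)
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])  # flexion about x-axis
    E = 0.0
    for fem, tib, L0, k in LIGAMENTS:
        length = np.linalg.norm(fem - (R @ tib + t))
        stretch = max(0.0, length - L0)  # ligaments resist only tension
        E += 0.5 * k * stretch ** 2
    return E

def passive_pose(flexion_rad):
    """Tibial translation that minimizes ligament strain energy at a
    given flexion angle -- the 'passive kinematics' at that angle."""
    res = minimize(strain_energy, x0=np.zeros(3), args=(flexion_rad,))
    return res.x
```

Sweeping `passive_pose` over a range of flexion angles would trace out a passive flexion path; the real model additionally incorporates prosthesis surface geometry and contact constraints.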


medical image computing and computer assisted intervention | 2006

Using registration uncertainty visualization in a user study of a simple surgical task

Amber L. Simpson; Burton Ma; Elvis C. S. Chen; Randy E. Ellis; A. James Stewart

We present a novel method to visualize registration uncertainty and a simple study to motivate the use of uncertainty visualization in computer-assisted surgery. Our visualization method resulted in a statistically significant reduction in the number of attempts required to localize a target, and a statistically significant reduction in the number of targets that our subjects failed to localize. Most notably, our work addresses the existence of uncertainty in guidance and offers a first step towards helping surgeons make informed decisions in the presence of imperfect data.


medical image computing and computer assisted intervention | 2013

Robust Intraoperative US Probe Tracking Using a Monocular Endoscopic Camera

Uditha L. Jayarathne; A. Jonathan McLeod; Terry M. Peters; Elvis C. S. Chen

In the context of minimally invasive procedures involving both endoscopic video and ultrasound, we present a vision-based method to track an ultrasound probe using a standard monocular laparoscopic camera. This approach requires only a cosmetic modification to the ultrasound probe and obviates the need for magnetic tracking of either instrument. We describe an Extended Kalman Filter framework that jointly solves for feature correspondence and pose estimation, and is able to track a 3D pattern on the surface of the ultrasound probe in near real-time. The tracking capability is demonstrated by performing an ultrasound calibration of the visually tracked probe using a standard endoscopic video camera. Ultrasound calibration resulted in a mean TRE of 2.3 mm, and comparison with an external optical tracker demonstrated a mean FRE of 4.4 mm between the two tracking systems.


medical image computing and computer assisted intervention | 2009

Biomechanically Constrained Groupwise US to CT Registration of the Lumbar Spine

Sean Gill; Parvin Mousavi; Gabor Fichtinger; Elvis C. S. Chen; Jonathan Boisvert; David R. Pichora; Purang Abolmaesumi

Registration of intraoperative ultrasound (US) with preoperative computed tomography (CT) data for interventional guidance is a subject of immense interest, particularly for percutaneous spinal injections. We propose a biomechanically constrained groupwise registration of US to CT images of the lumbar spine. Each vertebra in CT is treated as a sub-volume and transformed individually; the sub-volumes are then reconstructed into a single volume. At each iteration of the registration, the algorithm simulates an US image from the CT data, and this simulated US image is used to calculate an intensity-based similarity metric with the real US image. A biomechanical model constrains the displacement of the vertebrae relative to one another, and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is used as the optimizer. Validation is performed on CT and US images from a phantom designed to preserve realistic curvatures of the spine. The technique is able to register initial misalignments of up to 20 mm with a success rate of 82%, and those of up to 10 mm with a success rate of 98.6%.
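The constrained groupwise optimization described above can be sketched with a toy cost function and a bare-bones evolution strategy. Everything below is illustrative: the quadratic "data" term stands in for the intensity-based similarity metric, the spring term between neighbouring vertebrae stands in for the biomechanical constraint, and a simple elitist (1, λ) evolution strategy stands in for CMA-ES.

```python
import numpy as np

def cost(params, targets, stiffness=1.0):
    """Toy groupwise cost: a data term pulls each vertebra's 3D offset
    toward its (hypothetical) similarity optimum, while a spring-like
    penalty couples neighbouring vertebrae, mimicking the biomechanical
    constraint on relative vertebral displacement."""
    offsets = params.reshape(-1, 3)            # one 3D offset per vertebra
    data = ((offsets - targets) ** 2).sum()    # stand-in for image similarity
    springs = (np.diff(offsets, axis=0) ** 2).sum()  # neighbour coupling
    return data + stiffness * springs

def simple_es(f, x0, sigma=0.5, lam=20, iters=200, seed=0):
    """Elitist (1, lambda) evolution strategy with decaying step size;
    a crude stand-in for CMA-ES (no covariance adaptation)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        cand = x + sigma * rng.normal(size=(lam, x.size))
        best = cand[np.array([f(c) for c in cand]).argmin()]
        if f(best) < f(x):
            x = best
        sigma *= 0.98
    return x
```

Real CMA-ES additionally adapts a full covariance matrix of the search distribution, which is what makes it effective on the ill-conditioned, non-separable cost surfaces that arise in multi-body registration.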


computer assisted radiology and surgery | 2015

Registration of 3D shapes under anisotropic scaling

Elvis C. S. Chen; A. Jonathan McLeod; John S. H. Baxter; Terry M. Peters

Purpose: Several medical imaging modalities exhibit inherent scaling among the acquired data: the scale in an ultrasound image varies with the speed of sound, and the scale of the range data used to reconstruct organ surfaces depends on the scanner distance. In the context of surface-based registration, these scaling factors are often assumed to be isotropic or known a priori. Accounting for such anisotropies in scale can dramatically improve the registration and calibration procedures that are essential for robust image-guided interventions.

Methods: We introduce an extension to the ordinary iterative closest point (ICP) algorithm that solves for the similarity transformation between point-sets comprising anisotropic scaling followed by rotation and translation. The proposed anisotropic-scaled ICP (ASICP) incorporates a novel use of the Mahalanobis distance to establish correspondence and a new solution to the underlying registration problem. The derivation and convergence properties of ASICP are presented, and practical implementation details are discussed. Because the ASICP algorithm is independent of shape representation and feature extraction, it generalizes to any registration involving scaling.

Results: Experiments involving ultrasound calibration and the registration of partially overlapping range data, whole surfaces, and multi-modality surface data (intraoperative ultrasound to preoperative MR) show dramatic improvement in fiducial registration error.

Conclusion: We present a generalization of the ICP algorithm that solves for a similarity transform between two point-sets comprising anisotropic scaling followed by rotation and translation. Our anisotropic-scaled ICP algorithm shares many traits with ordinary ICP, including guaranteed convergence, independence of shape representation, and general applicability.
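The alternating structure of such an algorithm (match closest points, then update the transform) can be sketched as follows. This is a simplified stand-in, not the published ASICP method: it uses plain Euclidean nearest neighbours instead of the paper's Mahalanobis correspondence, and it alternates a Kabsch rotation fit with a per-axis least-squares scale update instead of the paper's solution.

```python
import numpy as np

def asicp_sketch(src, dst, iters=60):
    """Toy anisotropic-scaled ICP: alternately match nearest neighbours and
    update diagonal scale A, rotation R, translation t so R @ A @ src ~ dst."""
    A, R, t = np.eye(3), np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = (R @ A @ src.T).T + t
        # Correspondence: Euclidean nearest neighbour in dst for each point
        # (the actual paper uses a Mahalanobis distance here).
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Update R, t by the Kabsch/SVD fit on the scaled source points.
        ss = (A @ src.T).T
        mu_s, mu_d = ss.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((ss - mu_s).T @ (matched - mu_d))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T               # proper rotation (det = +1)
        t = mu_d - R @ mu_s
        # Update per-axis scales by least squares in the source frame.
        back = (R.T @ (matched - t).T).T
        A = np.diag((back * src).sum(0) / np.maximum((src ** 2).sum(0), 1e-12))
    return A, R, t
```

Like ordinary ICP, this alternation only finds a local optimum, so it needs a reasonable initial alignment; the correspondence metric and the closed-form transform update are where the published algorithm differs from this sketch.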


Journal of Endourology | 2012

First Prize: A Phantom Model as a Teaching Modality for Laparoscopic Partial Nephrectomy

Alfonso Fernandez; Elvis C. S. Chen; John Moore; Carling L. Cheung; Petar Erdeljan; Andrew Fuller; Elspeth M. McDougall; Terry M. Peters; Stephen E. Pautler

PURPOSE: To evaluate a materials model for laparoscopic ultrasound identification and partial nephrectomy of kidney tumors.

METHODS: Five urology fellows performed a laparoscopic ultrasonography (LUS) examination of the tumor model, and the time to identification was recorded. After identifying the tumor, they performed a laparoscopic partial nephrectomy (LPN) on the target tumor while operative parameters were measured. They then completed a questionnaire rating the quality of the renal tumor model on a 5-point Likert scale.

RESULTS: The participants were able to identify 49 tumors by LUS (98%). The mean time to identify the renal tumors by LUS was 1.12 ± 0.93 (SD) minutes. A partial nephrectomy was successfully completed on 49 tumor models (98%). The mean resection time was 7.69 ± 3.8 (SD) minutes. All of the participants considered the model helpful for practicing LPN, and the fellows would recommend it as a teaching tool for residents and fellows to practice tumor imaging by LUS and LPN in a simulated environment.

CONCLUSION: We have developed a unique model that simulates small kidney tumors and can be used to train surgeons in the clinical skills of laparoscopic partial nephrectomy.


Proceedings of SPIE | 2012

Towards Real-time 3D US-CT Registration on the Beating Heart for Guidance of Minimally Invasive Cardiac Interventions

Feng Li; Pencilla Lang; Martin Rajchl; Elvis C. S. Chen; Gerard M. Guiraudon; Terry M. Peters

Compared to conventional open-heart surgeries, minimally invasive cardiac interventions cause less trauma and fewer side effects for patients. However, a direct view of surgical targets and tools is usually not available in minimally invasive procedures, which makes image-guided navigation systems essential. The choice of imaging modalities used in such navigation systems must consider the capability of imaging soft tissues, spatial and temporal resolution, compatibility and flexibility in the OR, and financial cost. In this paper, we propose a new means of guidance for minimally invasive cardiac interventions, using 3D real-time ultrasound images to show the intra-operative heart motion together with pre-operative CT image(s) that provide high-quality 3D anatomical context. We also develop a method to register intra-operative ultrasound and pre-operative CT images in close to real time. The registration method has two stages. In the first stage, anatomical features are segmented from the first frame of the ultrasound images and from the CT image(s), and a feature-based registration is used to align them. The result serves as the initialization for the second stage, in which a mutual-information-based registration aligns every ultrasound frame to the CT image(s). A GPU-based implementation is used to accelerate the registration.
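The mutual-information metric used in the second registration stage can be estimated from a joint intensity histogram of the two images. A minimal sketch of that estimator (the histogram bin count is arbitrary, and the transform optimization and GPU acceleration are omitted):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized images, estimated from
    their joint intensity histogram: MI = sum p(x,y) log(p(x,y)/(p(x)p(y)))."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                     # joint probability estimate
    px = p.sum(axis=1, keepdims=True)         # marginal over a's intensities
    py = p.sum(axis=0, keepdims=True)         # marginal over b's intensities
    nz = p > 0                                # empty bins contribute nothing
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

In a registration loop, the moving image is resampled under a candidate transform and the transform parameters are adjusted to maximize this value; MI is attractive for US-to-CT alignment because it rewards any consistent intensity relationship rather than requiring the modalities to have similar appearance.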


Archive | 2012

Augmented Environments for Computer-Assisted Interventions

Cristian A. Linte; Elvis C. S. Chen; Marie-Odile Berger; John Moore; David R. Holmes

This book constitutes the refereed proceedings of the International Workshop on Augmented Environments for Computer-Assisted Interventions, held in conjunction with MICCAI 2012 in Nice, France, in September 2012. The 16 revised full papers presented were carefully reviewed and selected from 22 submissions. The papers cover the topics of image registration and fusion, calibration, visualization and 3D perception, hardware and optical design, real-time implementation, as well as validation, clinical applications, and clinical evaluation.


AE-CAI | 2013

The Role of Augmented Reality in Training the Planning of Brain Tumor Resection

Kamyar Abhari; John S. H. Baxter; Elvis C. S. Chen; Ali R. Khan; Chris Wedlake; Terry M. Peters; Roy Eagleson; Sandrine de Ribaupierre

The environment in which surgeons are trained profoundly affects their preferred method for visualizing patient images. While classical 2D viewing might be preferred by some older experts, the new generation of residents and novices has been raised navigating 3D environments in video games and is accustomed to seeing 3D reconstructions of human anatomy. In this study, we evaluate the performance of different groups of users across four visualization modalities (2D planes, orthogonal planes, 3D reconstruction, and augmented reality). We hypothesize that this system will facilitate the spatio-visual abilities of individuals in assessing patient-specific data, an essential requirement of many neurosurgical applications such as tumour resection. We also hypothesize that the difference between AR and the other modalities will be greater in the novice group. Our preliminary results indicate that AR performs as well as, or better than, the other modalities.


Gastrointestinal Endoscopy | 2011

A prospective, randomized assessment of a spatial orientation device in natural orifice transluminal endoscopic surgery

Sharyle Fowler; Mohamed S. Hefny; Elvis C. S. Chen; Randy E. Ellis; Dale Mercer; Diederick Jalink; Andrew Samis; Lawrence Hookey

BACKGROUND: One of the challenges in natural orifice transluminal endoscopic surgery (NOTES) is spatial orientation. The Queen's NOTES group has devised a novel method of orientation using a magnetic device that passes within an endoscope channel, allowing 3-dimensional imaging of the shape and orientation of the endoscope.

OBJECTIVE: To assess the feasibility and utility of a novel orientation device.

DESIGN: Randomized, controlled trial.

SETTING: Animal research laboratory study on four 25-kg pigs.

INTERVENTION: The device was tested by 6 endoscopists and 6 laparoscopic surgeons. Starting at the gastrotomy, the time to identify 4 targets was recorded: participants were required to identify and touch the gallbladder, the fallopian tube, a clip on the abdominal wall, and the liver edge. Use of the orientation device was randomized for each session.

MAIN OUTCOME MEASUREMENTS: Time to identify targets with and without the device. Secondary analysis assessed differences between medical specialties and levels of training.

RESULTS: The mean time to identify all 4 targets was 75.08 ± 42.68 seconds with the device versus 100.20 ± 60.70 seconds without it (P < .001). The mean time to identify all 4 targets on the first attempt was 102.29 ± 61.36 seconds versus 72.99 ± 40.19 seconds on the second attempt (P < .001). No differences based on specialty or level of training were identified.

LIMITATIONS: Small sample size and simplicity of tasks.

CONCLUSION: Regardless of randomization order, both groups were faster with the device. These encouraging results warrant further study using more complex scenarios.

Collaboration


Dive into Elvis C. S. Chen's collaboration.

Top Co-Authors

Terry M. Peters (University of Western Ontario)
John Moore (Robarts Research Institute)
John S. H. Baxter (University of Western Ontario)
Golafsoun Ameri (Robarts Research Institute)
Uditha L. Jayarathne (University of Western Ontario)
Daniel Bainbridge (University of Western Ontario)
Cristian A. Linte (Rochester Institute of Technology)