Publication


Featured research published by Sonny Chan.


Neurosurgery | 2013

Virtual reality simulation in neurosurgery: technologies and evolution.

Sonny Chan; Francois Conti; Kenneth Salisbury; Nikolas H. Blevins

Neurosurgeons are faced with the challenge of learning, planning, and performing increasingly complex surgical procedures in which there is little room for error. With improvements in computational power and advances in visual and haptic display technologies, virtual surgical environments can now offer potential benefits for surgical training, planning, and rehearsal in a safe, simulated setting. This article introduces the various classes of surgical simulators and their respective purposes through a brief survey of representative simulation systems in the context of neurosurgery. Many technical challenges currently limit the application of virtual surgical environments. Although we cannot yet expect a digital patient to be indistinguishable from reality, new developments in computational methods and related technology bring us closer every day. We recognize that the design and implementation of an immersive virtual reality surgical simulator require expert knowledge from many disciplines. This article highlights a selection of recent developments in research areas related to virtual reality simulation, including anatomic modeling, computer graphics and visualization, haptics, and physics simulation, and discusses their implications for the simulation of neurosurgery.


Hearing Research | 2010

Reconstruction and exploration of virtual middle-ear models derived from micro-CT datasets

Dong H. Lee; Sonny Chan; Curt Salisbury; Namkeun Kim; Kenneth Salisbury; Sunil Puria; Nikolas H. Blevins

BACKGROUND: Middle-ear anatomy is integrally linked to both its normal function and its response to disease processes. Micro-CT imaging provides an opportunity to capture high-resolution anatomical data in a relatively quick and non-destructive manner. However, to optimally extract functionally relevant details, an intuitive means of reconstructing and interacting with these data is needed.

MATERIALS AND METHODS: A micro-CT scanner was used to obtain high-resolution scans of freshly explanted human temporal bones. An advanced volume renderer was adapted to enable real-time reconstruction, display, and manipulation of these volumetric datasets. A custom-designed user interface provided for semi-automated threshold segmentation. A 6-degrees-of-freedom navigation device was designed and fabricated to enable exploration of the 3D space in a manner intuitive to those comfortable with the use of a surgical microscope. Standard haptic devices were also incorporated to assist in navigation and exploration.

RESULTS: Our visualization workstation could be adapted to allow for the effective exploration of middle-ear micro-CT datasets. Functionally significant anatomical details could be recognized and objective data could be extracted.

CONCLUSIONS: We have developed an intuitive, rapid, and effective means of exploring otological micro-CT datasets. This system may provide a foundation for additional work based on middle-ear anatomical data.
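
The semi-automated threshold segmentation described above can be illustrated in a few lines. This is a minimal sketch, not the authors' implementation: the array, threshold, and voxel size below are hypothetical stand-ins for a real micro-CT dataset.

import numpy as np

def threshold_segment(volume: np.ndarray, threshold: float) -> np.ndarray:
    """Return a binary mask of voxels at or above the chosen intensity.

    For micro-CT of temporal bone, a single global threshold often
    separates bone (high attenuation) from soft tissue and air.
    """
    return volume >= threshold

# Hypothetical usage: isolate bone, then report its total voxel volume.
volume = np.random.rand(128, 128, 128).astype(np.float32)  # stand-in scan
bone_mask = threshold_segment(volume, threshold=0.8)
voxel_size_mm3 = 0.02 ** 3  # e.g., 20-micron isotropic voxels
print("bone volume (mm^3):", bone_mask.sum() * voxel_size_mm3)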


American Journal of Rhinology & Allergy | 2009

Integration of patient-specific paranasal sinus computed tomographic data into a virtual surgical environment.

Sachin Parikh; Sonny Chan; Sumit Agrawal; Peter H. Hwang; Curt Salisbury; Benjamin Y. Rafii; Gaurav Varma; Kenneth Salisbury; Nikolas H. Blevins

Background: The advent of both high-resolution computed tomographic (CT) imaging and minimally invasive endoscopic techniques has led to revolutionary advances in sinus surgery. However, the rhinologist is left to make the conceptual jump between static cross-sectional images and the anatomy encountered intraoperatively. A three-dimensional (3D) visuo-haptic representation of the patient's anatomy may allow for enhanced preoperative planning and rehearsal, with the goal of improving outcomes, decreasing complications, and enhancing technical skills.

Methods: We developed a novel method of automatically constructing 3D visuo-haptic models of patients' anatomy from preoperative CT scans for placement in a virtual surgical environment (VSE). State-of-the-art techniques were used to create a high-fidelity representation of salient bone and soft tissue anatomy and to enable manipulation of the virtual patient in a surgically meaningful manner. A modified haptic interface device drives a virtual endoscope that mimics the surgical configuration.

Results: The creation and manipulation of sinus anatomy from CT data appeared to provide a relevant means of exploring patient-specific anatomy. Unlike more traditional methods of interacting with multiplanar imaging data, our VSE provides the potential for a more intuitive experience that can replicate the views and access expected at surgery. The inclusion of tactile (haptic) feedback provides an additional dimension of realism.

Conclusion: The incorporation of patient-specific clinical CT data into a virtual surgical environment holds the potential to offer the surgeon a novel means to prepare for rhinologic procedures and offer training to residents. An automated pathway for segmentation, reconstruction, and an intuitive interface for manipulation may enable rehearsal of planned procedures.
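
The reconstruction step, building a 3D surface model from CT data, can be sketched with a standard marching cubes extraction. This is illustrative only, assuming scikit-image is available; the iso level and voxel spacing are placeholders, not values from the paper.

import numpy as np
from skimage import measure

ct = np.random.rand(64, 64, 64).astype(np.float32)       # stand-in CT volume
verts, faces, normals, values = measure.marching_cubes(
    ct,
    level=0.5,                 # intensity separating bone from air/soft tissue
    spacing=(0.5, 0.5, 0.5),   # voxel size in mm, taken from the scan header
)
# verts and faces can then be loaded into a renderer or haptic scene graph.
print(verts.shape, faces.shape)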


International Conference on Robotics and Automation | 2012

Point clouds can be represented as implicit surfaces for constraint-based haptic rendering

Adam Leeper; Sonny Chan; Kenneth Salisbury

We present a constraint-based strategy for haptic rendering of arbitrary point cloud data. With the recent proliferation of low-cost range sensors, dense 3D point cloud data is readily available at high update rates. Taking a cue from the graphics literature, we propose that point data should be represented as an implicit surface, which can be formulated to be mathematically smooth and efficient for computing interaction forces, and for which haptic constraint algorithms are already well-known. This method is resistant to sensor noise, makes no assumptions about surface connectivity or orientation, and data pre-processing is fast enough for use with streaming data. We compare the performance of two different implicit representations and discuss our strategy for handling time-varying point clouds from a depth camera. Applications of haptic point cloud rendering to remote sensing, as in robot telemanipulation, are also discussed.
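
One simple implicit representation of the kind the abstract describes is a point-to-plane signed distance built from the nearest point and its normal. The sketch below is a hedged illustration, assuming normals have been estimated beforehand; it is not the paper's specific formulation.

import numpy as np
from scipy.spatial import cKDTree

class PointCloudImplicit:
    def __init__(self, points: np.ndarray, normals: np.ndarray):
        self.points = points
        self.normals = normals
        self.tree = cKDTree(points)   # fast nearest-neighbor queries

    def value(self, x: np.ndarray) -> float:
        """Approximate signed distance from x to the implied surface."""
        _, i = self.tree.query(x)
        return float(np.dot(x - self.points[i], self.normals[i]))

# A haptic loop would constrain the proxy to value(x) >= 0 and derive
# the interaction force from penetration depth along the local normal.
pts = np.random.rand(1000, 3)
nrm = np.tile([0.0, 0.0, 1.0], (1000, 1))   # stand-in normals
surf = PointCloudImplicit(pts, nrm)
print(surf.value(np.array([0.5, 0.5, 0.5])))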


World Haptics Conference | 2011

Constraint-based six degree-of-freedom haptic rendering of volume-embedded isosurfaces

Sonny Chan; Francois Conti; Nikolas H. Blevins; Kenneth Salisbury

A method for 6-DOF haptic rendering of isosurface geometry embedded within sampled volume data is presented. The algorithm uses a quasi-static formulation of motion constrained by multiple contacts to simulate rigid-body interaction between a haptically controlled virtual tool (proxy), represented as a point-sampled surface, and volumetric isosurfaces. Unmodified volume data, such as computed tomography or magnetic resonance images, can be rendered directly with this approach, making it particularly suitable for applications in medical or surgical simulation. The algorithm was implemented and tested on a variety of volume data sets using several virtual tools with different geometry. As the constraint-based algorithm permits simulation of a massless proxy, no artificial mass or inertia were needed nor observed. The speed and transparency of the algorithm allowed motion to be responsive to extremely stiff contacts with complex virtualized geometry. Despite rendering stiffnesses that approach the physical limits of the interfaces used, the simulation remained stable through haptic interactions that typically present a challenge to other rendering methods, including wedging, prying, and hooking.
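
The paper's algorithm is 6-DOF; the sketch below shows only the classic 3-DOF version of the underlying constraint idea, as an assumption-laden illustration: the proxy chases the device position but is pushed back along the volume gradient whenever it would cross the isosurface.

import numpy as np

def sample(volume, p):
    """Nearest-voxel lookup standing in for trilinear interpolation."""
    i, j, k = np.clip(np.round(p).astype(int), 0, np.array(volume.shape) - 1)
    return volume[i, j, k]

def gradient(volume, p, h=1.0):
    g = np.zeros(3)
    for a in range(3):
        e = np.zeros(3); e[a] = h
        g[a] = (sample(volume, p + e) - sample(volume, p - e)) / (2 * h)
    return g

def update_proxy(volume, proxy, device, iso, step=0.25):
    d = device - proxy
    n = np.linalg.norm(d)
    if n < 1e-9:
        return proxy
    candidate = proxy + min(step, n) * d / n
    if sample(volume, candidate) > iso:        # would penetrate the isosurface
        g = gradient(volume, candidate)
        candidate -= step * g / (np.linalg.norm(g) + 1e-9)   # project back out
    return candidate

# Hypothetical demo: a flat "bone" slab; the proxy stops near its surface.
vol = np.zeros((32, 32, 32)); vol[:, :, 16:] = 1.0
p = np.array([16.0, 16.0, 10.0])
for _ in range(100):
    p = update_proxy(vol, p, device=np.array([16.0, 16.0, 20.0]), iso=0.5)
print(p)
# The rendered force is then stiffness * (device - proxy).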


IEEE VGTC Conference on Visualization | 2006

Real-time super resolution contextual close-up of clinical volumetric data

Torin Arni Taerum; Mario Costa Sousa; Faramarz F. Samavati; Sonny Chan; Joseph Ross Mitchell

We present an illustrative visualization system for real-time and high quality rendering of clinical volumetric medical data. Our technique is inspired by a medical illustration technique for depicting contextual close-up views of selected regions of interest where internal anatomical features are rendered in high detail. Our method integrates four important components: decimation of original volume for interactivity, B-spline subdivision for super-resolution rendering, fast gradient quantization technique for feature extraction and GPU fragment shaders for gradient dependent rendering and transfer functions. Examples with clinical CT and MRI data demonstrate the capabilities of our system.
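
Of the four components, gradient quantization is easy to show in miniature: gradients are normalized and snapped to a small codebook of unit directions so that shading can be precomputed per codebook entry. The codebook construction below (random unit vectors) is an assumption for illustration; the paper's actual scheme may differ.

import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 3))                     # hypothetical codebook
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def quantize_gradient(g: np.ndarray) -> int:
    """Index of the codebook direction closest to gradient g."""
    n = np.linalg.norm(g)
    if n < 1e-12:
        return 0
    return int(np.argmax(codebook @ (g / n)))            # max cosine similarity

# Shading is then looked up per codebook index rather than per raw gradient.
print(quantize_gradient(np.array([0.0, 0.0, 1.0])))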


World Haptics Conference | 2013

Deformable haptic rendering for volumetric medical image data

Sonny Chan; Nikolas H. Blevins; Kenneth Salisbury

Virtual-reality-based surgical simulation is one of the most notable and practical applications of kinesthetic haptic rendering. With recent advances in volume visualization technology, simulators can now incorporate pre-operative medical image data for surgical planning or rehearsal. For a truly immersive, patient-specific simulation experience, versatile haptic rendering of volume data is needed. This article presents a method for haptic rendering of deformable isosurfaces embedded within volumetric data. The proxy-based algorithm operates on the original data volume, rather than on an alternate or derived representation, to preserve geometric accuracy. Real-time deformation is driven by a coarser mesh generated from the volume. We show that this approach has very favorable properties for an application in surgical simulation.
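
The coarse-representation-drives-fine-volume idea can be illustrated with a simplified stand-in: here a coarse grid of displacement vectors (rather than the paper's coarse mesh) is trilinearly interpolated at any fine sample point, so the original volume is queried through an inverse warp instead of being resampled. Names and spacing are hypothetical.

import numpy as np

def interp_displacement(disp_grid: np.ndarray, p: np.ndarray, h: float) -> np.ndarray:
    """Trilinearly interpolate a coarse displacement field at point p.

    disp_grid has shape (nx, ny, nz, 3); h is the coarse cell size.
    """
    q = p / h
    i0 = np.clip(np.floor(q).astype(int), 0, np.array(disp_grid.shape[:3]) - 2)
    t = q - i0
    out = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                out += w * disp_grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out

# Hypothetical usage: a uniform unit displacement in z is recovered exactly.
grid = np.zeros((4, 4, 4, 3)); grid[..., 2] = 1.0
print(interp_displacement(grid, np.array([1.3, 2.1, 0.7]), h=10.0))
# A haptic query at p samples the undeformed volume at p - interp_displacement(p).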


Computer Assisted Surgery (Abingdon, England) | 2016

High-fidelity haptic and visual rendering for patient-specific simulation of temporal bone surgery

Sonny Chan; Peter Li; Garrett D. Locketz; Kenneth Salisbury; Nikolas H. Blevins

Medical imaging techniques provide a wealth of information for surgical preparation, but it is still often the case that surgeons are examining three-dimensional pre-operative image data as a series of two-dimensional images. With recent advances in visual computing and interactive technologies, there is much opportunity to provide surgeons an ability to actively manipulate and interpret digital image data in a surgically meaningful way. This article describes the design and initial evaluation of a virtual surgical environment that supports patient-specific simulation of temporal bone surgery using pre-operative medical image data. Computational methods are presented that enable six degree-of-freedom haptic feedback during manipulation, and that simulate virtual dissection according to the mechanical principles of orthogonal cutting and abrasive wear. A highly efficient direct volume renderer simultaneously provides high-fidelity visual feedback during surgical manipulation of the virtual anatomy. The resulting virtual surgical environment was assessed by evaluating its ability to replicate findings in the operating room, using pre-operative imaging of the same patient. Correspondences between surgical exposure, anatomical features, and the locations of pathology were readily observed when comparing intra-operative video with the simulation, indicating the predictive ability of the virtual surgical environment.
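
The abrasive-wear dissection model can be caricatured with an Archard-style update, where removed material is proportional to normal force, sliding speed, and time. This sketch is not the paper's calibrated model; the constants and the spherical burr footprint are assumptions.

import numpy as np

def apply_wear(volume, tool_center, tool_radius, force_n, speed, dt, k=1e-3):
    """Reduce voxel density inside the burr footprint by a wear increment."""
    lo = np.maximum(np.floor(tool_center - tool_radius).astype(int), 0)
    hi = np.minimum(np.ceil(tool_center + tool_radius).astype(int) + 1,
                    np.array(volume.shape))
    removal = k * force_n * speed * dt        # Archard-style wear increment
    for i in range(lo[0], hi[0]):
        for j in range(lo[1], hi[1]):
            for m in range(lo[2], hi[2]):
                if np.linalg.norm(np.array([i, j, m]) - tool_center) <= tool_radius:
                    volume[i, j, m] = max(0.0, volume[i, j, m] - removal)

# Hypothetical usage; a renderer would re-upload the modified region so the
# visual and haptic representations stay consistent.
vol = np.ones((32, 32, 32), dtype=np.float32)
apply_wear(vol, np.array([16.0, 16.0, 16.0]), tool_radius=3.0,
           force_n=1.5, speed=20.0, dt=0.001)
print(vol[16, 16, 16])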


IEEE Haptics Symposium | 2012

Constraint-based haptic rendering of point data for teleoperated robot grasping

Adam Leeper; Sonny Chan; Kaijen Hsiao; Matei T. Ciocarlie; Kenneth Salisbury

We present an efficient 6-DOF haptic algorithm for rendering interaction forces between a rigid proxy object and a set of unordered point data. We further explore the use of haptic feedback for remotely supervised robots performing grasping tasks. The robot captures the geometry of a remote environment (as a cloud of 3D points) at run-time using a depth camera or laser scanner. An operator then uses a haptic device to position a virtual model of the robot gripper (the haptic proxy), specifying a desired grasp pose to be executed by the robot. The haptic algorithm enforces a proxy pose that is non-colliding with the observable environment, and provides both force and torque feedback to the operator. Once the operator confirms the desired gripper pose, the robot computes a collision-free arm trajectory and executes the specified grasp. We apply this method for grasping a wide range of objects, previously unseen by the robot, from highly cluttered scenes typical of human environments. Our user experiment (N=20) shows that people with no prior experience using the visualization system on which our interfaces are based are able to successfully grasp more objects with a haptic device providing force-feedback than with just a mouse.
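
The non-colliding-proxy constraint reduces, at its core, to a clearance test between a point-sampled gripper model and the observed cloud. The sketch below is a hedged stand-in, not the paper's algorithm; the clearance value and sample counts are arbitrary.

import numpy as np
from scipy.spatial import cKDTree

def pose_collides(gripper_pts, R, t, cloud_tree, clearance=0.005):
    """True if the gripper samples, posed by (R, t), touch the point cloud."""
    world_pts = gripper_pts @ R.T + t        # transform samples into the world
    dists, _ = cloud_tree.query(world_pts)
    return bool(np.any(dists < clearance))

# Hypothetical usage: a gripper posed far from the cloud does not collide.
cloud = np.random.rand(5000, 3)
tree = cKDTree(cloud)
grip = np.random.rand(200, 3) * 0.05         # stand-in gripper surface samples
print(pose_collides(grip, np.eye(3), np.array([2.0, 2.0, 2.0]), tree))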


International Conference on Computer Graphics and Interactive Techniques | 2003

Sound synthesis for the Web, games, and virtual reality

J. R. Parker; Sonny Chan

Here we describe a new technique for synthesizing sounds from samples, using the block spectral Gaussian pyramid method originally devised for texture synthesis in computer graphics.

Computer games and digital animations often impress the viewer with the quality of both the graphics and the audio. Sometimes real sounds are recorded and played back when needed, often as a loop. Other times the sounds are created in a studio using synthesizers and other devices. It should be possible to create realistic sounds from small samples using computer techniques, and thus create as much of any given sound as is needed. Sound synthesis is frequently not yet sufficiently realistic; one solution is to use small samples of a desired sound and to reconstitute them to form a new, longer, and non-repeating sample. This is the subject that we wish to explore.

There are many reasons for pursuing sound synthesis. Sound quality reflects more than frequency content or noise: repetitive sounds can be quite irritating. Gathering real sounds can be very expensive, even for large studios. Reusing existing samples is a cost-effective option if high quality can be assured.

The basic method proposed has been borrowed from work on graphical textures. We have based methods on three different texture synthesis algorithms; for instance, Efros starts with purely random pixels and grows the texture, one pixel at a time - a very similar method will work for audio.
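
The closing analogy, Efros-style growth one sample at a time, translates to audio as shown below. This is a toy illustration of that simpler variant, not the paper's block spectral Gaussian pyramid method; the window length and match rule are arbitrary choices.

import numpy as np

def grow_audio(source: np.ndarray, out_len: int, window: int = 32) -> np.ndarray:
    """Grow a longer signal by repeatedly copying the best-matching continuation."""
    out = list(source[:window])               # seed with a source excerpt
    while len(out) < out_len:
        recent = np.array(out[-window:])
        best_i, best_err = 0, np.inf
        for i in range(len(source) - window - 1):
            err = np.sum((source[i:i + window] - recent) ** 2)
            if err < best_err:
                best_i, best_err = i, err
        out.append(source[best_i + window])   # copy the sample that followed
    return np.array(out)

# Hypothetical usage on a noisy sine "recording".
src = np.sin(np.linspace(0, 40 * np.pi, 600)) + 0.05 * np.random.randn(600)
print(grow_audio(src, 700).shape)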

Collaboration


Dive into Sonny Chan's collaboration.

Top Co-Authors

Gail Kopp

University of Calgary

Jonas Forsslund

Royal Institute of Technology
