Publication


Featured research published by Damion Shelton.


ACM Transactions on Applied Perception | 2008

Effectiveness of augmented-reality visualization versus cognitive mediation for learning actions in near space

Roberta L. Klatzky; Bing Wu; Damion Shelton; George D. Stetten

The present study examined the impact of augmented-reality visualization, in comparison to conventional ultrasound (CUS), on the learning of ultrasound-guided needle insertion. Whereas CUS requires cognitive processes for localizing targets, our augmented-reality device, called the “sonic flashlight” (SF), enables direct perceptual guidance. Participants guided a needle to an ultrasound-localized target within opaque fluid. In three experiments, the SF showed higher accuracy and lower variability in aiming and endpoint placements than did CUS. The SF, but not CUS, readily transferred to new targets and starting points for action. These effects were evident in visually guided action (needle and target continuously visible) and visually directed action (target alone visible). The results have application to learning to visualize surgical targets through ultrasound.


Journal of Ultrasound in Medicine | 2002

Guidance of Retrobulbar Injection With Real-time Tomographic Reflection

Wilson M. Chang; George D. Stetten; Louis A. Lobes; Damion Shelton; Robert J. Tamburo

Objective. Retrobulbar and peribulbar injections are common ophthalmologic procedures used to deliver anesthetics and other medications for ophthalmic therapy and surgery. These injections, typically performed without any type of guidance, can lead to complications that are rare but visually devastating. The needle may penetrate the optic nerve, perforate the globe, or disperse toxic quantities of drugs intraocularly, causing major visual loss. Sonographic guidance may increase the accuracy of the needle placement, thereby decreasing the incidence of complications. However, difficulties arise in coordinating the relative location of the image, the needle, and the patient. Real‐time tomographic reflection is a new method for in situ visualization of sonographic images, permitting direct hand‐eye coordination to guide invasive instruments beneath the surface of the skin. Methods. In this preliminary study, real‐time tomographic reflection was used to visualize the eye and surrounding anatomic structures in a cadaver during a simulated retrobulbar injection. Results. The needle tip was easily followed as it was advanced into the retrobulbar space. Conclusions. The images presented in this preliminary study show the use of real‐time tomographic reflection to visualize insertion of an invasive instrument into the human body.


Medical Image Computing and Computer-Assisted Intervention | 2003

C-Mode Real Time Tomographic Reflection for a Matrix Array Ultrasound Sonic Flashlight

George D. Stetten; Aaron Cois; Wilson Chang; Damion Shelton; Robert Tamburo; John Castellucci; Olaf T. von Ramm

Rationale and Objectives. Real-time tomographic reflection (RTTR) permits in situ visualization of tomographic images so that natural hand-eye coordination can be used directly during invasive procedures. The method uses a half-silvered mirror to merge the visual outer surface of the patient with a simultaneous scan of the patient's interior, without requiring a head-mounted display or tracking. A viewpoint-independent virtual image is reflected precisely into its actual location. When applied to ultrasound, we call the resulting RTTR device the sonic flashlight. We previously implemented the sonic flashlight using conventional two-dimensional ultrasound scanners that produce B-mode slices. Real-time three-dimensional (RT3D) ultrasound scanners have recently been developed that permit RTTR to be applied to slices with other orientations, including C-mode (parallel to the face of the transducer). Such a slice orientation may offer advantages for image-guided intervention. Materials and Methods. Using a prototype scanner developed at Duke University (Durham, NC) with a matrix array that electronically steers an ultrasound beam at high speed in 3D, we implemented a sonic flashlight capable of displaying C-mode images in situ in real time. Results. We present the first images from the C-mode sonic flashlight, showing bones in the hand and the cardiac ventricles. Conclusion. The extension of RTTR to matrix array RT3D ultrasound offers the ability to visualize in situ slices other than the conventional B-mode slice, including C-mode slices parallel to the face of the transducer. This orientation may provide a broader target, facilitating certain interventional procedures. Future work is discussed, including display of slices with arbitrary orientation and the use of a holographic optical element instead of a mirror.
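As background on the RTTR geometry (a minimal sketch in our own notation, not reproduced from the paper): if the half-silvered mirror lies in a plane through a point $m$ with unit normal $\hat{n}$, a point $p$ on the display monitor is seen reflected at

$$p' = p - 2\,\big((p - m)\cdot\hat{n}\big)\,\hat{n}.$$

The display is mounted so that the set of reflected points $p'$ coincides with the scanned slice inside the patient, which is why the virtual image appears at its true location regardless of viewpoint.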


Experimental Brain Research | 2008

Mental concatenation of perceptually and cognitively specified depth to represent locations in near space.

Bing Wu; Roberta L. Klatzky; Damion Shelton; George D. Stetten

The purpose of this study was to examine how discrete segments of contiguous space arising from perceptual or cognitive channels are mentally concatenated. We induced and measured errors in each channel separately, then summed the psychophysical functions to accurately predict pointing to a depth specified by both together. In Experiment 1, subjects drew a line to match the visible indentation of a probe into a compressible surface. Systematic perceptual errors were induced by manipulating surface stiffness. Subjects in Experiment 2 placed the probe against a rigid surface and viewed the depth of a hidden target below it from a remote image with a metric scale. This cognitively mediated depth judgment produces systematic underestimation (Wu et al. in IEEE Trans Vis Comput Graph 11(6):684–693, 2005; confirmed here). In Experiment 3, subjects pointed to a target location detected by the indented probe and displayed remotely, requiring mental concatenation of the depth components. The model derived from the data indicated that the errors in the components were passed through the integration process without additional systematic error. Experiment 4 further demonstrated that this error-free concatenation was intrinsically spatial, rather than numerical.
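A minimal formalization of the additive prediction described above (our notation, not the authors'): if $f_P(d_P)$ denotes the measured pointing response to a perceptually specified depth $d_P$ (Experiment 1) and $f_C(d_C)$ the response to a cognitively specified depth $d_C$ (Experiment 2), the concatenation model predicts the response to the combined stimulus as

$$\hat{D} = f_P(d_P) + f_C(d_C),$$

so the component errors pass through the integration step with no additional systematic error, as observed in Experiment 3.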


International Conference on Computer Graphics and Interactive Techniques | 2002

Ultrasound visualization with the sonic flashlight

Damion Shelton; George D. Stetten; Wilson M. Chang

Since the discovery of X-rays over a century ago, clinicians have been presented with a wide assortment of imaging modalities yielding maps of localized structure and function within the patient. Some imaging modalities are tomographic, meaning that the data are localized into voxels rather than projected along lines of sight as with conventional X-ray images. Tomographic modalities include magnetic resonance (MR), computerized tomography (CT), ultrasound, and others. Tomographic images, with their spatially distinct voxels, are essential to our present work.


Medical Imaging 2004: Visualization, Image-Guided Procedures, and Display | 2004

A novel machine interface for scaled telesurgery

Samuel T. Clanton; David C. Wang; Yoky Matsuoka; Damion Shelton; George D. Stetten

We have developed a system architecture that allows a surgeon to employ direct hand-eye coordination to conduct medical procedures in a remote microscopic environment. In this system, a scaled real-time video image of the workspace of a small robotic arm, taken from a surgical microscope camera, is visually superimposed on the natural workspace of the surgeon via a half-silvered mirror. The robot arm holds a small tool, such as a microsurgical needle holder or microsurgical forceps, and the surgeon grasps a second tool connected to a position encoder, in this case a second robot arm. The views of the local and remote environments are superimposed such that the tools in the two environments are visually merged. The position encoder and small robot arm are linked such that movement of the tool by the operator produces scaled-down movement of the small robot tool. To the surgeon, it appears that his or her hands and the tool being held are moving and interacting with the remote environment, which is actually microscopic and at a distance. Our current work uses a position-controlled master-slave linkage of two 3-degree-of-freedom haptic devices, and we are pursuing a 6- or 7-degree-of-freedom master-slave linkage to produce more realistic interaction.
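The core of such a position-controlled master-slave linkage is a scaled displacement mapping. The sketch below is illustrative only, assuming a fixed isotropic scale factor and hypothetical frame origins; it is not the authors' implementation.

import numpy as np

# Illustrative sketch of a scaled master-slave position mapping:
# the slave (microscope-side) tool tracks the master (hand-side) tool's
# displacement, scaled down so large hand motions become small tool motions.

SCALE = 0.1  # assumed scale factor: 10 mm of hand motion -> 1 mm of tool motion

def slave_target(master_pos, master_origin, slave_origin, scale=SCALE):
    """Map a master position to a slave target position (all in mm).

    The master's displacement from its reference origin is scaled and
    applied to the slave's reference origin.
    """
    displacement = np.asarray(master_pos, dtype=float) - np.asarray(master_origin, dtype=float)
    return np.asarray(slave_origin, dtype=float) + scale * displacement

# Example: the surgeon moves the master tool 20 mm along x;
# the slave tool is commanded to move 2 mm along x from its origin.
print(slave_target([20.0, 0.0, 0.0], [0.0, 0.0, 0.0], [5.0, 5.0, 0.0]))  # [7. 5. 0.]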


International Symposium on Biomedical Imaging | 2002

Towards a clinically useful sonic flashlight

George D. Stetten; Damion Shelton; Wilson M. Chang; Vikram S. Chib; Robert J. Tamburo; Daniel Hildebrand; L. Lobes; Jules H. Sumkin

We have previously shown a new method of merging a direct view of the patient with an ultrasound image displayed in situ within the patient, using a half-silvered mirror. We call this method Real Time Tomographic Reflection (RTTR). This paper reviews our progress to date in developing an embodiment of RTTR that we call the sonic flashlight™. The clinical utility of the sonic flashlight for guiding invasive procedures will depend on a number of factors, and we have explored these factors through a series of prototypes. Responding to feedback from our clinical collaborators, we have upgraded various elements of our original apparatus and implemented a new generation of the system that is smaller, lighter, and more easily manipulated. We have improved performance as we have gained a better understanding of the optical parameters of the system. Our results demonstrate in situ visualization of the vasculature of the neck in a human volunteer and the anatomy of the eye in a cadaver.


Journal of Vision | 2010

Efficacy of image-guided action is controlled by perception

Roberta L. Klatzky; Bing Wu; Damion Shelton; George D. Stetten

Researchers studying human perception have devised a number of methods for measuring perceived location and using it to assess perceptually guided action. Such work has primarily been performed in space accessible by reaching and walking. Here we use the same approach to assess perceptually guided action in very near space, specifically in the applied context of ultrasound-guided surgical manipulation. Our approach measured the ultrasound user's perception of the location of a target independently from assessing the action employed to reach it. Experiments were conducted with the Sonic Flashlight (SF), a visualization device that creates a virtual in situ image, and conventional ultrasound (CUS), which displays the image on a screen displaced from the target. Two studies determined subjects' perception of target location with a triangulation-by-pointing task. Depth perception with the SF was comparable to direct vision, while the CUS caused considerable underestimation of target depth. Binocular depth information in the SF was shown to contribute significantly to its superiority. A third experiment tested subjects in an ultrasound-guided needle insertion task. With direct visualization of the target, subjects performed insertions faster and more accurately using the SF than CUS. Furthermore, trajectory analysis showed that insertions with the SF generally went directly to the target along the desired path, while the CUS led to an arc-shaped deviation from the ideal path, as predicted by the previously measured underestimation of target depth. Ongoing research is further examining the time course of learning with the two devices, measuring precise trajectories for needle insertion. This work extends the demonstration of the perception/action linkage to near space and provides a very practical application for such research. In particular, different imaging methods, which lead to different percepts, will lead to actions with differential efficacy.


Journal of Vision | 2010

Interaction of visual and haptic cues in the image-based perception of depth

Bing Wu; Roberta L. Klatzky; Damion Shelton; George D. Stetten

Many medical applications attempt to locate targets by using imaging techniques such as ultrasound. If the target is located in a compressible medium (e.g., human tissue), however, its position in the ultrasound image will shift as the medium is compressed. We investigated whether users can accommodate to such displacements by using visual and haptic cues and accurately judge target depth. Subjects were asked to locate targets underneath a soft rubber surface. Visual cues to the amount of compression were provided by a grid on the surface that deformed under pressure and by the visible displacement of the tip of the ultrasound probe. The first experiment tested whether these visual cues are sufficient for judging surface deformation and compensating so as to accurately locate the target. Subjects acquired ultrasound images of targets at different depths and localized them with a triangulation-by-pointing procedure. Using conventional ultrasound with a remote display, subjects consistently underestimated surface deformation and thus target depth. In a second experiment, haptic feedback was added so that resisting force increased with surface deformation. We found that the stiffer the surface, the less the underestimation of target depth due to compression. A third experiment used a different imaging display, the Sonic Flashlight, an augmented-reality tool that enables users to directly see the target in 3D space. The perception of target location with this device was accurate despite the surface compliance. An ongoing experiment is further examining the learning and transfer of skills to correct the compliance effect.


Medical Imaging 2004: Image Processing | 2004

Novel method to automatically identify medial node correspondences between two images

Robert J. Tamburo; C. Aaron Cois; Damion Shelton; George D. Stetten

Many modern forms of segmentation and registration require manual input, making them tedious and time-consuming processes. There have been some successes with automating these methods, but these tend to be unreliable due to inherent variations in anatomical shape and image quality. It is toward this goal that we have developed methods of identifying correspondences between medial nodes, image features related to anatomical structures, in two images. Medial-based image features are used because they have proven robust against image noise and shape variation, and they provide rotationally invariant properties of dimensionality and scale while preserving orientation information independently. We have introduced several novel metrics for comparing the medial and geometric relationships between medial nodes and between different cliques of medial nodes (a clique is a set of multiple medial nodes). These metrics overcome problems introduced by symmetry between cliques and provide increasing discriminability with the size of the clique. In this paper, we demonstrate medial-based correspondences and validate their specificity with standard Receiver Operating Characteristic (ROC) analysis. We believe that our method of locating corresponding medial features may be useful for automatically locating anatomical structures or generating landmarks for registration.
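To convey the flavor of clique-based comparison (a hypothetical sketch under our own assumptions; the paper's actual metrics are not reproduced here), one rotation- and translation-invariant way to compare two cliques of medial nodes is through their scale-normalized pairwise distances, sorted to avoid the ordering ambiguity that symmetry introduces:

import numpy as np
from itertools import combinations

# Hypothetical illustration: each medial node is (center, scale).
# A clique's signature is its sorted, scale-normalized pairwise distances,
# which do not change under rotation or translation of the image.

def clique_signature(clique):
    sig = []
    for (p1, s1), (p2, s2) in combinations(clique, 2):
        d = np.linalg.norm(np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float))
        sig.append(d / ((s1 + s2) / 2.0))
    return np.sort(np.array(sig))

def clique_dissimilarity(clique_a, clique_b):
    # Smaller values suggest a better geometric correspondence between cliques.
    return float(np.linalg.norm(clique_signature(clique_a) - clique_signature(clique_b)))

# Example: two triples of medial nodes detected in different images.
a = [((0, 0, 0), 2.0), ((10, 0, 0), 2.0), ((0, 10, 0), 2.0)]
b = [((5, 5, 0), 2.1), ((15, 5, 0), 1.9), ((5, 15, 0), 2.0)]
print(clique_dissimilarity(a, b))  # close to zero for similar configurations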

Collaboration


Dive into Damion Shelton's collaboration.

Top Co-Authors

Bing Wu (Carnegie Mellon University)

Aaron Cois (University of Pittsburgh)

David C. Wang (University of Pittsburgh)

John M. Galeotti (Carnegie Mellon University)

L. Lobes (University of Pittsburgh)