Publications


Featured research published by Samantha Horvath.


IEEE Journal of Translational Engineering in Health and Medicine | 2014

FingerSight: Fingertip Haptic Sensing of the Visual Environment

Samantha Horvath; John M. Galeotti; Bing Wu; Roberta L. Klatzky; Mel Siegel; George D. Stetten

We present a novel device mounted on the fingertip for acquiring and transmitting visual information through haptic channels. In contrast to previous systems in which the user interrogates an intermediate representation of visual information, such as a tactile display representing a camera-generated image, our device uses a fingertip-mounted camera and haptic stimulator to allow the user to feel visual features directly from the environment. Visual features, ranging from simple intensity or oriented edges to more complex information identified automatically about objects in the environment, may be translated in this manner into haptic stimulation of the finger. Experiments using an initial prototype to trace a continuous straight edge have quantified the user's ability to discriminate the angle of the edge, a potentially useful feature for higher-level analysis of the visual scene.
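
As one concrete illustration of the feature-to-stimulation mapping described above, the sketch below (not the authors' implementation; the normalization constant and function names are assumptions) converts the edge content of a small grayscale camera patch into a vibration amplitude and a dominant gradient angle:

```python
# Illustrative sketch (not the authors' implementation): convert the edge content
# of a small grayscale camera patch into a vibration amplitude and the dominant
# intensity-gradient angle. The 255 normalization constant is an arbitrary assumption.
import numpy as np
import cv2

def edge_to_haptics(patch: np.ndarray) -> tuple[float, float]:
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    amplitude = float(np.clip(magnitude.mean() / 255.0, 0.0, 1.0))
    angle = float(np.degrees(np.arctan2(np.abs(gy).mean(), np.abs(gx).mean())))
    return amplitude, angle   # amplitude would drive the fingertip stimulator

# Synthetic test: a vertical edge yields a nonzero amplitude and a gradient angle near 0 degrees.
patch = np.zeros((32, 32), dtype=np.uint8)
patch[:, 16:] = 255
print(edge_to_haptics(patch))
```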


7th International Workshop on Augmented Environments for Computer-Assisted Interventions, AE-CAI 2012, Held in Conjunction with MICCAI 2012 | 2012

Hand-Held Force Magnifier for Surgical Instruments: Evolution toward a Clinical Device

Randy Lee; Bing Wu; Roberta L. Klatzky; Vikas Shivaprabhu; John M. Galeotti; Samantha Horvath; Mel Siegel; Joel S. Schuman; Ralph L. Hollis; George D. Stetten

We have developed a novel and relatively simple method for magnifying forces perceived by an operator using a surgical tool. A sensor measures force between the tip of a tool and its handle, and a proportionally greater force is created by an actuator between the handle and a brace attached to the operator’s hand, providing an enhanced perception of forces at the tip of the tool. Magnifying forces in this manner may provide an improved ability to perform delicate surgical procedures. The device is completely hand-held and can thus be easily manipulated to a wide variety of locations and orientations. We have previously developed a prototype capable of amplifying forces only in the push direction, and which had a number of other limiting factors. We now present second-generation and third-generation devices, capable of both push and pull, and describe some of the engineering concerns in their design, as well as our future directions.
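
The core idea lends itself to a very small control loop. The sketch below is a hypothetical illustration, with placeholder read_tip_force and set_handle_force functions standing in for the real sensor and actuator, and an example gain that is not taken from the paper:

```python
# Minimal control-loop sketch of the force-magnification idea, assuming hypothetical
# read_tip_force() and set_handle_force() hardware interfaces.
GAIN = 5.0  # perceived force = GAIN x actual tip force (example value, not from the paper)

def read_tip_force() -> float:
    """Placeholder for the sensor between the tool tip and the handle (newtons)."""
    return 0.02  # e.g., a 20 mN tip force during a delicate maneuver

def set_handle_force(force_n: float) -> None:
    """Placeholder for the actuator between the handle and the hand brace."""
    print(f"actuator command: {force_n:.3f} N")

def control_step() -> None:
    tip = read_tip_force()
    # Press the handle against the brace with a proportionally larger force,
    # so the operator feels an amplified version of the tip interaction.
    set_handle_force(GAIN * tip)

control_step()
```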


AE-CAI'11 Proceedings of the 6th International Conference on Augmented Environments for Computer-Assisted Interventions | 2011

Towards an ultrasound probe with vision: structured light to determine surface orientation

Samantha Horvath; John M. Galeotti; Bo Wang; Matt Perich; Jihang Wang; Mel Siegel; Patrick Vescovi; George D. Stetten

Over the past decade, we have developed an augmented reality system called the Sonic Flashlight (SF), which merges ultrasound with the operator's vision using a half-silvered mirror and a miniature display attached to the ultrasound probe. We now add a small video camera and a structured laser light source so that computer vision algorithms can determine the location of the surface of the patient being scanned, to aid in analysis of the ultrasound data. In particular, we intend to determine the angle of the ultrasound probe relative to the surface to disambiguate Doppler information from arteries and veins running parallel to, and beneath, that surface. The initial demonstration presented here finds the orientation of a flat-surfaced ultrasound phantom. This is a first step towards integrating more sophisticated computer vision methods into automated ultrasound analysis, with the ultimate goal of creating a symbiotic human/machine system that shares both ultrasound and visual data.
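
The orientation step can be illustrated in isolation. Assuming the structured laser line has already been triangulated into 3D points in probe coordinates (the triangulation itself is omitted), a plane fit by SVD yields the surface normal and hence the probe-to-surface angle. This is a sketch of the geometry only, not the authors' code:

```python
# Fit a plane to triangulated laser-line points and report the angle between the
# surface normal and the probe axis. Illustrative only; values are synthetic.
import numpy as np

def surface_angle(points: np.ndarray, probe_axis: np.ndarray) -> float:
    """points: (N, 3) laser-line points in probe coordinates; probe_axis: unit vector."""
    centered = points - points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    cos_theta = abs(np.dot(normal, probe_axis)) / np.linalg.norm(probe_axis)
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Example: a flat phantom surface tilted 20 degrees relative to the probe axis (z).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
tilt = np.radians(20)
pts = np.column_stack([xy[:, 0], xy[:, 1] * np.cos(tilt), xy[:, 1] * np.sin(tilt)])
print(surface_angle(pts, np.array([0.0, 0.0, 1.0])))  # ~20 degrees
```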


International Conference on Computer Graphics and Interactive Techniques | 2008

FingerSight™: fingertip control and haptic sensing of the visual environment

John M. Galeotti; Samantha Horvath; Roberta L. Klatzky; Brock Nichol; Mel Siegel; George D. Stetten

Many devices that transfer input from the visual environment to another sense have been developed. The primary assistive technologies in current use are white canes, guide dogs, and GPS-based technologies. All of these facilitate safe travel in a wide variety of environments, but none of them are useful for straightening a picture frame on the wall or finding a cup of coffee on a counter-top. Tactile display screens and direct nerve stimulation are two existing camera-based technologies that seek to replace the more general capabilities of vision. Notably, these methods preserve a predetermined map between the image captured by a camera and a spatially fixed grid of sensory stimulators. Other technologies use fixed cameras and tracking devices to record and interpret movements and gestures. These, however, operate in a limited space and focus on the subject, rather than the subject's interrogation of the environment. With regard to haptic feedback devices for the hand, most existing devices aim to simulate tactile exploration of virtual objects.


Workshop on Augmented Environments for Computer-Assisted Interventions | 2014

Towards Video Guidance for Ultrasound, Using a Prior High-Resolution 3D Surface Map of the External Anatomy

Jihang Wang; Vikas Shivaprabhu; John M. Galeotti; Samantha Horvath; Vijay S. Gorantla; George D. Stetten

We are developing techniques for guiding ultrasound probes and other clinical tools with respect to the exterior of the patient, using one or more video camera(s) mounted directly on the probe or tool. This paper reports on a new method of matching the real-time video image of the patient’s exterior against a prior high-resolution surface map acquired with a multiple-camera imaging device used in reconstructive surgery. This surface map is rendered from multiple viewpoints in real-time to find the viewpoint that best matches the probe-mounted camera image, thus establishing the camera’s pose relative to the anatomy. For ultrasound, this will permit the compilation of 3D ultrasound data as the probe is moved, as well as the comparison of a real-time ultrasound scan with previous scans from the same anatomical location, all without using external tracking devices. In a broader sense, tools that know where they are by looking at the patient’s exterior could have an important beneficial impact on clinical medicine.
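
The matching step can be sketched as a search over pre-rendered viewpoints of the surface map, scored against the live camera frame. The example below uses normalized cross-correlation as one plausible similarity measure and random images as stand-ins for actual renderings; it is illustrative only, not the authors' pipeline:

```python
# Pick the pre-rendered viewpoint of the prior surface map that best matches the
# live camera frame, using normalized cross-correlation as the score.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two grayscale images of equal size."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def estimate_pose(camera_frame, rendered_views):
    """rendered_views: list of (pose, image) pairs rendered from the surface map."""
    scores = [ncc(camera_frame, img) for _, img in rendered_views]
    return rendered_views[int(np.argmax(scores))][0]

# Toy demonstration with random images standing in for renderings of the anatomy.
rng = np.random.default_rng(1)
views = [((0, 0, i * 10), rng.random((64, 64))) for i in range(5)]
frame = views[3][1] + 0.05 * rng.random((64, 64))   # camera frame resembles view 3
print(estimate_pose(frame, views))                   # -> (0, 0, 30)
```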


Proceedings of SPIE | 2012

Real-Time Registration of Video with Ultrasound using Stereo Disparity

Jihang Wang; Samantha Horvath; George D. Stetten; Mel Siegel; John M. Galeotti

Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator, the Sonic Flashlight, which uses a half-silvered mirror and miniature display mounted on an ultrasound probe to produce a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does today.
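
A minimal version of the disparity-to-depth computation can be written with OpenCV's block matcher. The focal length and baseline below are placeholder values, not parameters from the paper:

```python
# Stereo-disparity sketch: compute a depth map from rectified 8-bit grayscale stereo
# images using OpenCV's block matcher and the pinhole relation Z = f * B / d.
import numpy as np
import cv2

def depth_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray,
                      focal_px: float = 700.0, baseline_m: float = 0.02) -> np.ndarray:
    """left_gray, right_gray: rectified 8-bit grayscale images from the probe-mounted cameras."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disparity.shape, np.nan, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]   # meters
    return depth

# Usage sketch: depth = depth_from_stereo(cv2.imread("left.png", 0), cv2.imread("right.png", 0))
```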


Displays | 2017

Generating an image that affords slant perception from stereo, without pictorial cues

John M. Galeotti; Kori Macdonald; Jihang Wang; Samantha Horvath; Ada Zhang; Roberta L. Klatzky

This paper describes an algorithm for generating a planar image that when tilted provides stereo cues to slant, without contamination from pictorial gradients. As the stimuli derived from this image are ultimately intended for use in studies of slant perception under magnification, a further requirement is that the generated image be suitable for high-definition printing or display on a monitor. A first stage generates an image consisting of overlapping edges with sufficient density that when zoomed, edges that nearly span the original scale are replaced with newly emergent content that leaves the visible edge statistics unchanged. A second stage reduces intensity clumping while preserving edges by enforcing a broad dynamic range across the image. Spectral analyses demonstrate that the low-frequency content of the resulting image, which would correspond to the pictorial cue of texture gradient changes under slant, (a) has a power fall-off deviating from 1/f noise (to which the visual system is particularly sensitive), and (b) does not offer systematic cues under changes in scale or slant. Two behavioral experiments tested whether the algorithm generates stimuli that offer cues to slant under stereo viewing only, and not when disparities are eliminated. With a particular adjustment of dynamic range (and nearly so with the other version that was tested), participants viewing without stereo cues were essentially unable to discriminate slanted from flat (frontal) stimuli, and when slant was reported, they failed to discriminate its direction. In contrast, non-stereo viewing of a control stimulus with pictorial cues, as well as stereoscopic observation, consistently allowed participants to perceive slant correctly. Experiment 2 further showed that these results generalized across a population of different stimuli from the same generation process and demonstrated that the process did not substitute biased slant cues.
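
The first stage can be loosely illustrated by filling an image with overlapping edges whose lengths are drawn log-uniformly, so that zooming replaces long edges with newly emergent short ones while leaving the visible edge statistics roughly unchanged. This is a rough sketch of the idea, not the authors' generator:

```python
# Rough illustration of the first stage only: overlapping edges at log-uniformly
# distributed lengths, drawn as line segments of random orientation and shade.
import numpy as np
import cv2

def edge_texture(size: int = 1024, n_edges: int = 5000, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    img = np.full((size, size), 128, dtype=np.uint8)
    lengths = np.exp(rng.uniform(np.log(4), np.log(size / 2), n_edges))  # log-uniform lengths
    for length in lengths:
        x, y = rng.uniform(0, size, 2)
        theta = rng.uniform(0, np.pi)
        dx, dy = length * np.cos(theta) / 2, length * np.sin(theta) / 2
        shade = int(rng.uniform(0, 255))
        cv2.line(img, (int(x - dx), int(y - dy)), (int(x + dx), int(y + dy)), shade, 1)
    return img

# cv2.imwrite("edge_texture.png", edge_texture())
```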


International Workshop on Pattern Recognition in Neuroimaging | 2013

Descending Variance Graphs for Segmenting Neurological Structures

George D. Stetten; Cynthia Wong; Vikas Shivaprabhu; Ada Zhang; Samantha Horvath; Jihang Wang; John M. Galeotti; Vijay S. Gorantla; Howard J. Aizenstein

We present a novel and relatively simple method for clustering pixels into homogeneous patches using a directed graph of edges between neighboring pixels. For a 2D image, the mean and variance of image intensity are computed within a circular region centered at each pixel. Each pixel stores its circle's mean and variance, and forms a node in a graph, with possible edges to its 4 immediate neighbors. If at least one of those neighbors has a lower variance than itself, a directed edge is formed, pointing to the neighbor with the lowest variance. Local minima in variance thus form the roots of disjoint trees, representing patches of relative homogeneity. The method works in n dimensions and requires only a single parameter: the radius of the circular (spherical, or hyperspherical) regions used to compute variance around each pixel. Setting the intensity of all pixels within a given patch to the mean at its root pixel significantly reduces image noise while preserving anatomical structure, including the location of boundaries. The patches may themselves be clustered using techniques that would be computationally too expensive if applied to the raw pixels. We demonstrate such clustering to identify fascicles in the median nerve in high-resolution 2D ultrasound images, as well as white matter hyperintensities in 3D magnetic resonance images.
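
A compact 2D sketch of this procedure is given below, with a square window standing in for the circular region used in the paper; it is an illustration rather than the authors' implementation:

```python
# Descending-variance patches, 2D sketch: each pixel points to its lowest-variance
# 4-neighbor (if lower than its own), pointer chains are followed to their roots,
# and every pixel in a patch takes the mean intensity stored at its root.
import numpy as np
from scipy.ndimage import uniform_filter

def descending_variance_patches(image: np.ndarray, radius: int = 3) -> np.ndarray:
    img = np.asarray(image, dtype=np.float64)
    size = 2 * radius + 1                      # square window replaces the circular region
    mean = uniform_filter(img, size)
    var = uniform_filter(img ** 2, size) - mean ** 2

    h, w = img.shape
    parent = {}
    for y in range(h):
        for x in range(w):
            neighbors = [(y + dy, x + dx) for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= y + dy < h and 0 <= x + dx < w]
            best = min(neighbors, key=lambda p: var[p])
            # Directed edge only if some neighbor has strictly lower variance.
            parent[(y, x)] = best if var[best] < var[y, x] else (y, x)

    def root(p):
        while parent[p] != p:                  # paths strictly descend in variance, so this terminates
            p = parent[p]
        return p

    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = mean[root((y, x))]     # patch intensity = mean at its root
    return out

# Example usage: smoothed = descending_variance_patches(noisy_slice, radius=3)
```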


Proceedings of SPIE | 2010

Image segmentation using the student's t-test and the divergence of direction on spherical regions

George D. Stetten; Samantha Horvath; John M. Galeotti; Gaurav Shukla; Bo Wang; Brian E. Chapman

We have developed a new framework for analyzing images called Shells and Spheres (SaS), based on a set of spheres with adjustable radii, with exactly one sphere centered at each image pixel. This set of spheres is considered optimized when each sphere reaches, but does not cross, the nearest boundary of an image object. Statistical calculations at varying scale are performed on populations of pixels within spheres, as well as populations of adjacent spheres, in order to determine the proper radius of each sphere. In the present work, we explore the use of a classical statistical method, the Student's t-test, within the SaS framework, to compare adjacent spherical populations of pixels. We present results from various techniques based on this approach, including a comparison with classical gradient and variance measures at the boundary. A number of optimization strategies are proposed and tested based on pairs of adjacent spheres whose sizes are controlled in a methodical manner. A properly positioned sphere pair lies on opposite sides of an object boundary, yielding a direction function from the center of each sphere to the boundary point between them. Finally, we develop a method for extracting medial points based on the divergence of that direction function as it changes across medial ridges, reporting not only the presence of a medial point but also the angle between the directions from that medial point to the two respective boundary points that make it medial. Although demonstrated here only in 2D, these methods are all inherently n-dimensional.
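
The central statistical comparison can be illustrated directly: draw the pixel populations from two adjacent disks and apply a two-sample t-test (Welch's variant is used here as one reasonable choice). This sketch is not the authors' Shells and Spheres code:

```python
# Compare the pixel populations inside two adjacent disks (2D spheres) with a
# two-sample t-test; a strongly significant difference suggests the pair straddles
# an object boundary.
import numpy as np
from scipy.stats import ttest_ind

def disk_pixels(image: np.ndarray, center: tuple, radius: int) -> np.ndarray:
    cy, cx = center
    ys, xs = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
    return image[mask]

def boundary_evidence(image, center_a, center_b, radius):
    a = disk_pixels(image, center_a, radius)
    b = disk_pixels(image, center_b, radius)
    return ttest_ind(a, b, equal_var=False)   # (t statistic, p-value)

# Toy image: left half dark, right half bright, plus noise.
rng = np.random.default_rng(0)
img = np.hstack([np.full((64, 32), 40.0), np.full((64, 32), 90.0)]) + rng.normal(0, 5, (64, 64))
print(boundary_evidence(img, (32, 24), (32, 40), radius=6))  # large |t|, tiny p across the edge
```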


Workshop on Applications of Computer Vision | 2017

Ultrasound Tracking Using ProbeSight: Camera Pose Estimation Relative to External Anatomy by Inverse Rendering of a Prior High-Resolution 3D Surface Map

Jihang Wang; Chengqian Che; John M. Galeotti; Samantha Horvath; Vijay S. Gorantla; George D. Stetten

This paper addresses the problem of freehand ultrasound probe tracking without requiring an external tracking device, by mounting a video camera on the probe to identify location relative to the patient's external anatomy. By pre-acquiring a high-resolution 3D surface map as an atlas of the anatomy, we eliminate the need for artificial skin markers. We use an OpenDR pipeline for inverse rendering and pose estimation via matching the real-time camera image with the 3D surface map. We have addressed the problem of distinguishing rotation from translation by including an inertial navigation system to accurately measure rotation. Experiments on both a phantom containing an image of human skin (palm) as well as actual human skin (fingers, palm, and wrist) validate the effectiveness of our approach. For ultrasound, this will permit the compilation of 3D ultrasound data as the probe is moved, as well as comparison of real-time ultrasound scans registered with previous scans from the same anatomical location. In a broader sense, tools that know where they are by looking at the patient's exterior could have broad beneficial impact on clinical medicine.

Collaboration


Dive into Samantha Horvath's collaborations.

Top Co-Authors

John M. Galeotti (Carnegie Mellon University)
Jihang Wang (University of Pittsburgh)
Mel Siegel (Carnegie Mellon University)
Ada Zhang (University of Pittsburgh)
Kori Macdonald (University of Pittsburgh)
Bing Wu (Carnegie Mellon University)