Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John M. Galeotti is active.

Publication


Featured research published by John M. Galeotti.


Robotics and Autonomous Systems | 2004

Maze exploration behaviors using an integrated evolutionary robotics environment

Andrew L. Nelson; Edward Grant; John M. Galeotti; Stacey Rhody

This paper presents results generated with a new evolutionary robotics (ER) simulation environment and its complementary real mobile robot colony research test-bed. Neural controllers producing mobile robot maze searching and exploration behaviors using binary tactile sensors as inputs were evolved in a simulated environment and subsequently transferred to and tested on real robots in a physical environment. There has been a considerable amount of proof-of-concept and demonstration research done in the field of ER control in recent years, most of which has focused on elementary behaviors such as object avoidance and homing. Artificial neural networks (ANN) are the most commonly used evolvable controller paradigm found in current ER literature. Much of the research reported to date has been restricted to the implementation of very simple behaviors using small ANN controllers. In order to move beyond the proof-of-concept stage, our ER research was designed to train larger, more complicated ANN controllers, and to implement those controllers on real robots quickly and efficiently. To achieve this, a physical robot test-bed that includes a colony of eight real robots with advanced computing and communication abilities was designed and built. The real robot platform has been coupled to a simulation environment that facilitates the direct wireless transfer of evolved neural controllers from simulation to real robots (and vice versa). We believe that it is the simultaneous development of ER computing systems in both the simulated and the physical worlds that will produce advances in mobile robot colony research. Our simulation and training environment development focuses on the definition and training of our new class of ANNs, networks that include multiple hidden layers, and time-delayed and recurrent connections. Our physical mobile robot design focuses on maximizing computing and communications power while minimizing robot size, weight, and energy usage. The simulation and ANN-evolution environment was developed using MATLAB. To allow for efficient control software portability, our physical evolutionary robots (EvBots) are equipped with a PC-104-based computer running a custom distribution of Linux and connected to the Internet via a wireless network connection. In addition to other high-level computing applications, the mobile robots run a condensed version of MATLAB, enabling ANN controllers evolved in simulation to be transferred directly onto physical robots without any alteration to the code. This is the first paper in a series to be published cataloging our results in this field.
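
The evolve-in-simulation, transfer-to-hardware loop described above can be sketched independently of the MATLAB tooling. The Python fragment below illustrates a minimal (mu + lambda) evolution of ANN weight vectors for a tactile-sensor controller; the network sizes, mutation parameters, and placeholder fitness function are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Controller: a small feed-forward ANN mapping binary tactile inputs
# to two wheel commands. Sizes are illustrative, not those of the paper.
N_IN, N_HID, N_OUT = 4, 6, 2
N_WEIGHTS = N_IN * N_HID + N_HID * N_OUT

def controller(weights, tactile):
    """Evaluate the ANN for one time step."""
    w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    hidden = np.tanh(tactile @ w1)
    return np.tanh(hidden @ w2)          # left/right wheel commands

def fitness(weights, steps=200):
    """Placeholder fitness: reward forward motion and penalize turning,
    a crude stand-in for the paper's maze-exploration objective."""
    score = 0.0
    tactile = np.zeros(N_IN)
    for _ in range(steps):
        left, right = controller(weights, tactile)
        score += (left + right) / 2 - 0.1 * abs(left - right)
        tactile = (rng.random(N_IN) < 0.1).astype(float)  # toy sensor model
    return score

# Simple (mu + lambda) evolution of the weight vectors.
POP, GENS, SIGMA = 30, 50, 0.2
population = rng.normal(0, 1, size=(POP, N_WEIGHTS))
for gen in range(GENS):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-POP // 2:]]   # keep the best half
    children = parents + rng.normal(0, SIGMA, size=parents.shape)
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
```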


international conference on information processing in computer-assisted interventions | 2011

Hand-held force magnifier for surgical instruments

George D. Stetten; Bing Wu; Roberta L. Klatzky; John M. Galeotti; Mel Siegel; Randy Lee; Francis S. Mah; Andrew W. Eller; Joel S. Schuman; Ralph L. Hollis

We present a novel and relatively simple method for magnifying forces perceived by an operator using a tool. A sensor measures the force between the tip of a tool and its handle held by the operator's fingers. These measurements are used to create a proportionally greater force between the handle and a brace attached to the operator's hand, providing an enhanced perception of forces between the tip of the tool and a target. We have designed and tested a prototype that is completely hand-held and thus can be easily manipulated to a wide variety of locations and orientations. Preliminary psychophysical evaluation demonstrates that the device improves the ability to detect and differentiate between small forces at the tip of the tool. Magnifying forces in this manner may provide an improved ability to perform delicate surgical procedures, while preserving the flexibility of a hand-held instrument.
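
The control idea is compact enough to state as a sketch: the actuator force between handle and brace is a scaled copy of the sensed tip force, F_brace = G * F_tip. The Python below only illustrates that proportional law; the gain, saturation limit, loop rate, and I/O functions are assumed placeholders, not taken from the actual device.

```python
import time

GAIN = 10.0          # perceived-force magnification factor (assumed)
MAX_FORCE_N = 2.0    # actuator saturation limit (assumed)

def read_tip_force_newtons() -> float:
    """Placeholder for the sensor between tool tip and handle."""
    return 0.05

def command_brace_force_newtons(force: float) -> None:
    """Placeholder for the actuator between handle and hand brace."""
    print(f"commanding {force:.3f} N")

def control_step() -> None:
    f_tip = read_tip_force_newtons()
    f_out = max(-MAX_FORCE_N, min(MAX_FORCE_N, GAIN * f_tip))
    command_brace_force_newtons(f_out)

if __name__ == "__main__":
    for _ in range(5):          # run a few iterations of the loop
        control_step()
        time.sleep(0.001)       # ~1 kHz loop rate (assumed)
```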


international symposium on biomedical imaging | 2004

Enhanced snake based segmentation of vocal folds

Sonya Allin; John M. Galeotti; George D. Stetten; Seth Dailey

We present a system to segment the medial edges of the vocal folds from stroboscopic video. The system has two components. The first learns a color transformation that optimally discriminates, according to the Fisher linear criterion, between the trachea and vocal folds. Using this transformation, it is able to make a coarse segmentation of vocal fold boundaries. The second component uses an active contour formulation recently developed for the Insight Toolkit to refine detected contours. Rather than tune the internal energy of our active contours to bias for specific shapes, we optimize image energy so as to highlight boundaries of interest. This transformation of image energy simplifies the contour extraction process and suppresses noisy artifacts, which may confound standard implementations. We evaluate our system on stroboscopic video of sustained phonation. Our evaluation compares points on automatically extracted contours with manually supplied points at perceived vocal fold edges. Mean deviations for points located on the minor axes of the vocal folds averaged 2.2 pixels across all subjects, with a standard deviation of 3.6 pixels.
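
The first component, the learned color transformation, follows the standard two-class Fisher linear discriminant. A minimal numpy sketch, with synthetic training pixels standing in for manually labeled stroboscopic frames, might look like this:

```python
import numpy as np

def fisher_color_direction(fold_rgb: np.ndarray, trachea_rgb: np.ndarray) -> np.ndarray:
    """Return the 3-vector w maximizing the Fisher criterion (between-class
    over within-class scatter) for two sets of RGB samples. Projecting pixels
    onto w gives a 1-D image in which the two tissue classes are maximally
    separated."""
    mu_f, mu_t = fold_rgb.mean(axis=0), trachea_rgb.mean(axis=0)
    s_w = np.cov(fold_rgb, rowvar=False) + np.cov(trachea_rgb, rowvar=False)
    w = np.linalg.solve(s_w, mu_f - mu_t)   # S_w^{-1} (mu_1 - mu_2)
    return w / np.linalg.norm(w)

# Toy usage with synthetic training pixels (real pixels would come from
# manually labeled stroboscopic frames).
rng = np.random.default_rng(1)
folds = rng.normal([180, 90, 90], 10, size=(500, 3))
trachea = rng.normal([150, 110, 100], 10, size=(500, 3))
w = fisher_color_direction(folds, trachea)
projected = folds @ w        # coarse per-pixel "fold-ness" score
```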


IEEE Journal of Translational Engineering in Health and Medicine | 2014

FingerSight: Fingertip Haptic Sensing of the Visual Environment

Samantha Horvath; John M. Galeotti; Bing Wu; Roberta L. Klatzky; Mel Siegel; George D. Stetten

We present a novel device mounted on the fingertip for acquiring and transmitting visual information through haptic channels. In contrast to previous systems in which the user interrogates an intermediate representation of visual information, such as a tactile display representing a camera-generated image, our device uses a fingertip-mounted camera and haptic stimulator to allow the user to feel visual features directly from the environment. Visual features ranging from simple intensity or oriented edges to more complex information identified automatically about objects in the environment may be translated in this manner into haptic stimulation of the finger. Experiments using an initial prototype to trace a continuous straight edge have quantified the user's ability to discriminate the angle of the edge, a potentially useful feature for higher-level analysis of the visual scene.
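
As a concrete, purely illustrative example of translating a simple visual feature into haptic stimulation, the sketch below finds the strongest oriented edge in a small grayscale frame and converts its strength and angle into a vibration command; the specific mapping is an assumption, not the device's actual encoding.

```python
import numpy as np

def edge_to_vibration(frame: np.ndarray) -> tuple[float, float]:
    """Return (amplitude, angle_deg) for the strongest edge in the frame.
    Amplitude drives the fingertip stimulator; angle could modulate the
    pulse pattern. Both mappings are illustrative assumptions."""
    gy, gx = np.gradient(frame.astype(float))      # image gradients
    magnitude = np.hypot(gx, gy)
    i, j = np.unravel_index(np.argmax(magnitude), magnitude.shape)
    angle = float(np.degrees(np.arctan2(gy[i, j], gx[i, j])))
    amplitude = float(min(1.0, magnitude[i, j] / 255.0))   # normalized drive
    return amplitude, angle

# Toy frame: a diagonal step edge.
frame = np.tri(32, 32) * 255.0
amp, ang = edge_to_vibration(frame)
```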


7th International Workshop on Augmented Environments for Computer-Assisted Interventions, AE-CAI 2012, Held in Conjunction with MICCAI 2012 | 2012

Hand-Held Force Magnifier for Surgical Instruments: Evolution toward a Clinical Device

Randy Lee; Bing Wu; Roberta L. Klatzky; Vikas Shivaprabhu; John M. Galeotti; Samantha Horvath; Mel Siegel; Joel S. Schuman; Ralph L. Hollis; George D. Stetten

We have developed a novel and relatively simple method for magnifying forces perceived by an operator using a surgical tool. A sensor measures force between the tip of a tool and its handle, and a proportionally greater force is created by an actuator between the handle and a brace attached to the operator’s hand, providing an enhanced perception of forces at the tip of the tool. Magnifying forces in this manner may provide an improved ability to perform delicate surgical procedures. The device is completely hand-held and can thus be easily manipulated to a wide variety of locations and orientations. We have previously developed a prototype capable of amplifying forces only in the push direction, and which had a number of other limiting factors. We now present second-generation and third-generation devices, capable of both push and pull, and describe some of the engineering concerns in their design, as well as our future directions.


Journal of Pathology Informatics | 2011

A fully automated approach to prostate biopsy segmentation based on level-set and mean filtering

Juan Vidal; Gloria Bueno; John M. Galeotti; Marcial García-Rojo; Fernanda Relea; Oscar Déniz

With modern automated microscopes and digital cameras, pathologists no longer have to examine samples looking through microscope binoculars. Instead, the slide is digitized to an image, which can then be examined on a screen. This creates the possibility for computers to analyze the image. In this work, a fully automated approach to region of interest (ROI) segmentation in prostate biopsy images is proposed. This will allow the pathologists to focus on the most important areas of the image. The method proposed is based on level-set and mean filtering techniques for lumen-centered expansion and cell density localization, respectively. The novelty of the technique lies in the ability to detect complete ROIs, where a ROI is composed of the conjunction of three different structures, that is, lumen, cytoplasm, and cells, as well as regions with a high density of cells. The method is capable of dealing with full biopsies digitized at different magnifications. In this paper, results are shown with a set of 100 H&E slides, digitized at 5×, and ranging from 12 MB to 500 MB. The tests carried out show an average specificity above 99% across the board and average sensitivities of 95% and 80%, respectively, for the lumen-centered expansion and cell density localization. The algorithms were also tested with images at 10× magnification (up to 1228 MB) obtaining similar results.
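
Of the two techniques named above, the cell-density localization step is straightforward to sketch: mean filtering a binary nucleus mask gives, at every pixel, the fraction of nearby pixels that are nuclei, and thresholding that density map yields candidate ROIs. The window size and threshold below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy import ndimage

def high_density_regions(nucleus_mask: np.ndarray,
                         window: int = 51,
                         threshold: float = 0.3) -> np.ndarray:
    """Localize regions with a high density of cells by mean filtering a
    binary nucleus mask: the filter output at each pixel is the fraction of
    nucleus pixels in the surrounding window; thresholding gives a ROI mask."""
    density = ndimage.uniform_filter(nucleus_mask.astype(float), size=window)
    return density > threshold

# Toy usage: a synthetic mask with one dense cluster of "nuclei".
rng = np.random.default_rng(2)
mask = rng.random((512, 512)) < 0.02                       # sparse background
mask[200:300, 200:300] |= rng.random((100, 100)) < 0.4     # dense cluster
roi = high_density_regions(mask)
```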


AE-CAI'11 Proceedings of the 6th international conference on Augmented Environments for Computer-Assisted Interventions | 2011

Towards an ultrasound probe with vision: structured light to determine surface orientation

Samantha Horvath; John M. Galeotti; Bo Wang; Matt Perich; Jihang Wang; Mel Siegel; Patrick Vescovi; George D. Stetten

Over the past decade, we have developed an augmented reality system called the Sonic Flashlight (SF), which merges ultrasound with the operator's vision using a half-silvered mirror and a miniature display attached to the ultrasound probe. We now add a small video camera and a structured laser light source so that computer vision algorithms can determine the location of the surface of the patient being scanned, to aid in analysis of the ultrasound data. In particular, we intend to determine the angle of the ultrasound probe relative to the surface to disambiguate Doppler information from arteries and veins running parallel to, and beneath, that surface. The initial demonstration presented here finds the orientation of a flat-surfaced ultrasound phantom. This is a first step towards integrating more sophisticated computer vision methods into automated ultrasound analysis, with the ultimate goal of creating a symbiotic human/machine system that shares both ultrasound and visual data.
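
The geometric core of the demonstration, finding the flat phantom's orientation relative to the probe, reduces to fitting a plane to the 3-D points recovered from the structured-light line and comparing its normal with the probe axis. The sketch below assumes those 3-D points are already available; the camera/laser triangulation step is not shown.

```python
import numpy as np

def probe_surface_angle(surface_pts: np.ndarray, probe_axis: np.ndarray) -> float:
    """Fit a plane to 3-D surface points (least squares via SVD) and return
    the angle in degrees between the probe axis and the surface normal."""
    centered = surface_pts - surface_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                              # smallest-variance direction
    probe_axis = probe_axis / np.linalg.norm(probe_axis)
    cosang = abs(normal @ probe_axis)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Toy usage: points on a plane tilted 20 degrees about the x-axis,
# scanned with a vertically held probe.
t = np.radians(20)
xy = np.random.default_rng(3).uniform(-1, 1, size=(200, 2))
pts = np.c_[xy[:, 0], xy[:, 1] * np.cos(t), xy[:, 1] * np.sin(t)]
print(probe_surface_angle(pts, np.array([0.0, 0.0, 1.0])))   # about 20 degrees
```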


international conference on computer graphics and interactive techniques | 2008

FingerSight™: fingertip control and haptic sensing of the visual environment

John M. Galeotti; Samantha Horvath; Roberta L. Klatzky; Brock Nichol; Mel Siegel; George D. Stetten

Many devices that transfer input from the visual environment to another sense have been developed. The primary assistive technologies in current use are white canes, guide dogs, and GPS-based technologies. All of these facilitate safe travel in a wide variety of environments, but none of them are useful for straightening a picture frame on the wall or finding a cup of coffee on a counter-top. Tactile display screens and direct nerve stimulation are two existing camera-based technologies that seek to replace the more general capabilities of vision. Notably, these methods preserve a predetermined map between the image captured by a camera and a spatially fixed grid of sensory stimulators. Other technologies use fixed cameras and tracking devices to record and interpret movements and gestures. These, however, operate in a limited space and focus on the subject, rather than the subject's interrogation of his environment. With regard to haptic feedback devices for the hand, most existing devices aim to simulate tactile exploration of virtual objects.


Human Factors | 2015

Psychophysical Evaluation of Haptic Perception Under Augmentation by a Handheld Device

Bing Wu; Roberta L. Klatzky; Randy Lee; Vikas Shivaprabhu; John M. Galeotti; Mel Siegel; Joel S. Schuman; Ralph L. Hollis; George D. Stetten

Objective: This study investigated the effectiveness of force augmentation in haptic perception tasks. Background: Considerable engineering effort has been devoted to developing force augmented reality (AR) systems to assist users in delicate procedures like microsurgery. In contrast, far less has been done to characterize the behavioral outcomes of these systems, and no research has systematically examined the impact of sensory and perceptual processes on force augmentation effectiveness. Method: Using a handheld force magnifier as an exemplar haptic AR system, we conducted three experiments to characterize its utility in the perception of force and stiffness. Experiments 1 and 2 measured, respectively, the user's ability to detect and differentiate weak force (<0.5 N) with or without the assistance of the device and compared it to direct perception. Experiment 3 examined the perception of stiffness through the force augmentation. Results: The user's ability to detect and differentiate small forces was significantly improved by augmentation at both threshold and suprathreshold levels. The augmentation also enhanced stiffness perception. However, although perception of augmented forces matches that of the physical equivalent for weak forces, it falls off with increasing intensity. Conclusion: The loss in effectiveness reflects the nature of sensory and perceptual processing. Such perceptual limitations should be taken into consideration in the design and development of haptic AR systems to maximize utility. Application: The findings provide useful information for building effective haptic AR systems, particularly for use in microsurgery.


international symposium on biomedical imaging | 2007

Automated segmentation of the right heart using an optimized shells and spheres algorithm

C.A. Cow; K. Rockot; John M. Galeotti; Robert J. Tamburo; D. Gottlieb; J.E. Mayer; A. Powell; Michael H. Sacks; George D. Stetten

We have developed a novel framework for medical image analysis, known as shells and spheres. This framework utilizes spherical operators of variable radius centered at each image pixel and sized to reach, but not cross, the nearest object boundary. Statistical population tests are performed on adjacent spheres to compare image regions across boundaries. Previously, our framework was applied to segmentation of cardiac CT data with promising results. In this paper, we present a more accurate and versatile system by optimizing algorithm parameters for a particular data set to maximize agreement with manual segmentations. We perform parameter optimization on a selected 2D slice from a 3D image data set, generating effective parameters for 3D segmentation in practical computational time. Details of this approach are given, along with a validated application to cardiac MR data.
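
The central idea, per-pixel spherical regions grown to reach but not cross the nearest boundary, can be illustrated in 2-D with a simple grow-until-the-shell-differs loop. The stopping statistic and threshold below are crude stand-ins for the paper's statistical population tests and optimized parameters.

```python
import numpy as np

def grow_radius(image: np.ndarray, center: tuple[int, int],
                max_radius: int = 20, t_thresh: float = 3.0) -> int:
    """Grow a circular region around `center` one pixel of radius at a time,
    stopping when the newly added shell differs from the interior by a simple
    t-like statistic, i.e. when the region is about to cross a boundary."""
    yy, xx = np.indices(image.shape)
    dist = np.hypot(yy - center[0], xx - center[1])
    for r in range(2, max_radius + 1):
        inside = image[dist < r]
        shell = image[(dist >= r) & (dist < r + 1)]
        if shell.size == 0:
            break
        se = np.sqrt(inside.var() / inside.size + shell.var() / shell.size) + 1e-9
        if abs(shell.mean() - inside.mean()) / se > t_thresh:
            return r              # boundary reached: stop before crossing it
    return max_radius

# Toy image: two flat noisy regions separated by a vertical edge at column 32.
rng = np.random.default_rng(4)
img = rng.normal(0.0, 1.0, (64, 64))
img[:, 32:] += 10.0
print(grow_radius(img, center=(32, 20)))   # should stop near the edge, ~12 pixels away
```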

Collaboration


Dive into John M. Galeotti's collaborations.

Top Co-Authors

Samantha Horvath (Carnegie Mellon University)
Mel Siegel (Carnegie Mellon University)
Jihang Wang (University of Pittsburgh)
Bing Wu (Arizona State University at the Polytechnic campus)
Bo Wang (University of Pittsburgh)