Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Sharon A. Stansfield is active.

Publication


Featured research published by Sharon A. Stansfield.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1986

ANGY: A Rule-Based Expert System for Automatic Segmentation of Coronary Vessels From Digital Subtracted Angiograms

Sharon A. Stansfield

This paper details the design and implementation of ANGY, a rule-based expert system in the domain of medical image processing. Given a subtracted digital angiogram of the chest, ANGY identifies and isolates the coronary vessels, while ignoring any nonvessel structures which may have arisen from noise, variations in background contrast, imperfect subtraction, and irrelevant anatomical detail. The overall system is modularized into three stages: the preprocessing stage and the two stages embodied in the expert itself. In the preprocessing stage, low-level image processing routines written in C are applied sequentially to create a segmented representation of the input image. The expert system is rule-based and is written in OPS5 and LISP. It is separated into two stages: The low-level image processing stage embodies a domain-independent knowledge of segmentation, grouping, and shape analysis. Working with both edges and regions, it determines such relations as parallel and adjacent and attempts to refine the segmentation begun by the preprocessing. The high-level medical stage embodies a domain-dependent knowledge of cardiac anatomy and physiology. Applying this knowledge to the objects and relations determined in the preceding two stages, it identifies those objects which are vessels and eliminates all others.
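The flavor of such a rule-based refinement stage can be sketched in a few lines. This is an illustrative toy only: the facts, thresholds, and labels below are invented for this sketch and are not ANGY's actual OPS5 rule base.

```python
# Toy sketch of a rule-based region-classification step: each rule fires on
# simple region "facts" and either labels the region a vessel or discards it.
# All feature names and thresholds here are hypothetical.

def classify_regions(regions):
    vessels = []
    for r in regions:
        elongated = r["length"] / max(r["width"], 1e-6) > 4.0
        dark = r["mean_intensity"] < 100          # contrast-filled vessels image dark
        connected = r.get("adjacent_to_vessel", False)
        if elongated and dark:
            r["label"] = "vessel"
            vessels.append(r)
        elif connected and dark:
            r["label"] = "vessel-branch"
            vessels.append(r)
        else:
            r["label"] = "noise"                  # background or subtraction artifact
    return vessels
```

A real production system would chain many such rules, with the high-level anatomical stage re-examining the surviving regions against a model of the coronary tree.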


The International Journal of Robotics Research | 1991

Robotic grasping of unknown objects: a knowledge-based approach

Sharon A. Stansfield

In this article we describe a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but many of these manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. Experimental results are presented to illustrate the functioning of the system.
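The two-stage flow described above can be sketched as a small pipeline: visual features feed a heuristic rule set that proposes grasps, and each grasp drives a reach followed by a preshape. The feature names and rules below are invented for illustration and are not the paper's actual knowledge base.

```python
# Hypothetical sketch of a knowledge-based grasp pipeline in the spirit of the
# two-stage model: features -> rule-based proposals -> reach + preshape.

def propose_grasps(features):
    """Map coarse visual features to candidate grasp types (illustrative rules)."""
    grasps = []
    if features.get("has_handle"):
        grasps.append(("hook", features["handle_axis"]))
    if features.get("width", 1.0) < 0.08:          # fits between fingertips
        grasps.append(("pinch", features.get("major_axis", 0.0)))
    grasps.append(("wrap", features.get("major_axis", 0.0)))  # default power grasp
    return grasps

def execute(grasp):
    kind, axis = grasp
    # Stage 1: orient the wrist along the chosen axis, ballistic reach.
    # Stage 2: preshape the hand for the grasp type, close until contact.
    return f"reach(axis={axis:.2f}) -> preshape({kind}) -> close_until_contact()"
```

The point of the heuristic stage is that no object model is needed: a handful of rules over generic features covers the relatively simple manipulation tasks the target domains demand.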


The International Journal of Robotics Research | 1988

A robotic perceptual system utilizing passive vision and active touch

Sharon A. Stansfield

This paper presents a robotic perceptual system which utilizes passive vision and active touch. The task is one-fingered exploration of a single unmodeled object for apprehension: the determination of the features of the object and the relations among them. A two-stage exploration is utilized. Vision is first used in a feedforward manner to segment the object and to obtain its position. Touch is then used in a feedback mode to further explore the object. In designing this system, we have addressed several issues. The first concerns the way in which the robotic perceptual system should be structured. The model which we propose here is based upon theories of human perception. It consists of a highly modularized set of knowledge-based modules, each of which is domain specific and informationally encapsulated. Within the framework of this model, we have designed both vision and touch subsystems. In each case, we have defined the primitives, features, and representations extracted and created by the system. The visual system is passive and relatively simple. The touch system is active and is an attempt to systematically structure a robotic haptic perception system. Finally, we have addressed the issues of how these two subsystems interact during active exploration and how the information from each is integrated. The work described has been implemented and tested on a robot system consisting of a combination tactile array/force-torque sensor, a PUMA robot arm, a pair of CCD cameras, and a Vax 750.


Presence: Teleoperators & Virtual Environments | 2000

Design and Implementation of a Virtual Reality System and Its Application to Training Medical First Responders

Sharon A. Stansfield; Daniel Shawver; Annette L. Sobel; Monica Prasad; Lydia Tapia

This paper presents the design and implementation of a distributed virtual reality (VR) platform that was developed to support the training of multiple users who must perform complex tasks in which situation assessment and critical thinking are the primary components of success. The system is fully immersive and multimodal, and users are represented as tracked, full-body figures. The system supports the manipulation of virtual objects, allowing users to act upon the environment in a natural manner. The underlying intelligent simulation component creates an interactive, responsive world in which the consequences of such actions are presented within a realistic, time-critical scenario. The focus of this work has been on the training of medical emergency-response personnel. BioSimMER, an application of the system to training first responders to an act of bio-terrorism, has been implemented and is presented throughout the paper as a concrete example of how the underlying platform architecture supports complex training tasks. Finally, a preliminary field study was performed at the Texas Engineering Extension Service Fire Protection Training Division. The study focused on individual, rather than team, interaction with the system and was designed to gauge user acceptance of VR as a training tool. The results of this study are presented.


International Conference on Robotics and Automation | 1986

Primitives, features, and exploratory procedures: Building a robot tactile perception system

Sharon A. Stansfield

Contrary to previously held notions, recent psychological experiments suggest that the human tactile system is both a fast and accurate recognition device for real objects. It seems reasonable, then, to build a robotic tactile perception system for the purpose of object identification. Toward this end, we present a set of low level tactile primitives and the exploratory and analytic procedures used to identify and extract them. We then discuss how these primitives may be combined into tactile features and, finally, how these features might be utilized by the robot perceptual system as a whole.


Robotica | 1992

Haptic Perception with an Articulated, Sensate Robot Hand

Sharon A. Stansfield

In this paper we present a series of haptic exploratory procedures, or EPs, implemented for a multi-fingered, articulated, sensate robot hand. These EPs are designed to extract specific tactile and kinesthetic information from an object via their purposive invocation by an intelligent robotic system. Taken together, they form an active robotic touch perception system to be used both in extracting information about the environment for internal representation and in acquiring grasps for manipulation. The theory and structure of this robotic haptic system is based upon models of human haptic exploration and information processing. The haptic system presented utilizes an integrated robotic system consisting of a PUMA 560 robot arm, a JPL/Stanford robot hand, with joint torque sensing in the fingers, a wrist force/torque sensor, and a 256 element, spatially-resolved fingertip tactile array. We describe the EPs implemented for this system and provide experimental results which illustrate how they function and how the information which they extract may be used. In addition to the sensate hand and arm, the robot also contains structured-lighting vision and a Prolog-based reasoning system capable of grasp generation and object categorization. We present a set of simple tasks which show how both grasping and recognition may be enhanced by the addition of active touch perception.


Presence: Teleoperators & Virtual Environments | 1998

Mapping Algorithms for Real-Time Control of an Avatar Using Eight Sensors

Sudhanshu Kumar Semwal; Ron R. Hightower; Sharon A. Stansfield

In a virtual environment for small groups of interacting participants, it is important that the physical motion of each participant be replicated by synthetic human forms in real time. Sensors on a user's body are used to drive an inverse kinematics algorithm. Iterative algorithms for solving the general inverse kinematics problem are too slow for a real-time interactive environment. In this paper we present analytic, constant-time methods to solve the inverse kinematics problem and drive an avatar figure. Our sensor configuration has only eight sensors per participant, so the sensor data is augmented with information about natural body postures. The algorithm is fast, and the resulting avatar motion approximates the actions of the participant quite well. This new analytic solution resolves a problem with an earlier iterative algorithm that had a tendency to position the knees and elbows of the avatar in awkward and unnatural positions.
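The advantage of an analytic over an iterative solver is easy to see on the classic two-link case: the joint angles fall out of the law of cosines in constant time. This sketch is a generic planar two-link solution for illustration, not the paper's eight-sensor limb solver; the link lengths are arbitrary.

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.4):
    """Closed-form IK for a planar two-link limb (elbow-down branch)."""
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle directly.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    # Shoulder angle: direction to target minus the offset due to the bent elbow.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Forward-kinematics check that the solution reaches the target.
t1, t2 = two_link_ik(0.5, 0.3)
fx = 0.4 * math.cos(t1) + 0.4 * math.cos(t1 + t2)
fy = 0.4 * math.sin(t1) + 0.4 * math.sin(t1 + t2)
```

A 3D limb with posture constraints (as in the paper) adds branch selection and swivel-angle heuristics, but the per-frame cost stays constant, which is what makes real-time avatar animation feasible.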


Prehospital and Disaster Medicine | 2001

A Virtual Reality Patient Simulation System for Teaching Emergency Response Skills to U.S. Navy Medical Providers

Karen Freeman; Scott F. Thompson; Eric B. Allely; Annette L. Sobel; Sharon A. Stansfield; William M. Pugh

Rapid and effective medical intervention in response to civil and military-related disasters is crucial for saving lives and limiting long-term disability. The performance of inexperienced providers may suffer when they are faced with limited supplies and the demands of stabilizing casualties not generally encountered in the comparatively resource-rich hospital setting. Head trauma and multiple-injury cases are particularly complex to diagnose and treat, requiring the integration and processing of complex multimodal data. In this project, collaborators adapted and merged existing technologies to produce a flexible, modular patient simulation system with both three-dimensional virtual reality and two-dimensional flat screen user interfaces for teaching cognitive assessment and treatment skills. This experiential, problem-based training approach engages the user in a stress-filled, high-fidelity world, providing multiple learning opportunities within a compressed period of time and without risk. The system simulates both the dynamic state of the patient and the results of user intervention, enabling trainees to watch the virtual patient deteriorate or stabilize as a result of their decision-making speed and accuracy. Systems can be deployed to the field, enabling trainees to practice repeatedly until their skills are mastered and to maintain those skills once acquired. This paper describes the technologies and the process used to develop the trainers, the clinical algorithms, and the incorporation of teaching points. We also characterize aspects of the actual simulation exercise through the lens of the trainee.


IEEE Virtual Reality Conference | 1995

An application of shared virtual reality to situational training

Sharon A. Stansfield; Daniel Shawver; Nadine E. Miner; David M. Rogers

This paper presents current research being undertaken at Sandia National Laboratories to develop a distributed, shared virtual reality simulation system. The architecture of the system is presented within the framework of an initial application: situational training of inspectors and escorts under programs to verify compliance with nuclear non-proliferation treaties.


International Conference on Robotics and Automation | 1994

An interactive virtual reality simulation system for robot control and operator training

Nadine E. Miner; Sharon A. Stansfield

Robotic systems are often very complex and difficult to operate, especially as multiple robots are integrated to accomplish difficult tasks. In addition, training the operators of these complex robotic systems is time-consuming and costly. In this paper a virtual reality based robotic control system is presented. The virtual reality system provides a means by which operators can operate, and be trained to operate, complex robotic systems in an intuitive, cost-effective way. Operator interaction with the robotic system is at a high, task-oriented level. Continuous state monitoring prevents illegal robot actions and provides interactive feedback to the operator and real-time training for novice users.
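The continuous state-monitoring idea amounts to vetting every high-level command against the robot's current state before it is executed. A minimal sketch, assuming invented joint names and limits (not the paper's actual system):

```python
# Hypothetical state monitor: reject task-level commands that would drive a
# joint past its limit, and report why, so the operator gets interactive feedback.

JOINT_LIMITS = {"shoulder": (-2.0, 2.0), "elbow": (0.0, 2.5)}  # radians, illustrative

def check_command(state, command):
    """Return a list of violations; an empty list means the command is legal."""
    errors = []
    for joint, delta in command.items():
        lo, hi = JOINT_LIMITS[joint]
        target = state[joint] + delta
        if not lo <= target <= hi:
            errors.append(f"{joint}: target {target:.2f} outside [{lo}, {hi}]")
    return errors
```

In a training context the same check doubles as tutoring: instead of silently clamping the motion, the monitor can surface the violation message to the novice operator in real time.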

Collaboration


Dive into Sharon A. Stansfield's collaborations.

Top Co-Authors

Daniel Shawver | Sandia National Laboratories
Hélène Larin | American Physical Therapy Association
Annette L. Sobel | Sandia National Laboratories
Nadine E. Miner | Sandia National Laboratories
Ron R. Hightower | Sandia National Laboratories
Christopher Cooke | Massachusetts Institute of Technology
David M. Rogers | Sandia National Laboratories