Publications

Featured research published by Robin R. Murphy.


international conference on robotics and automation | 1990

Autonomous navigation in a manufacturing environment

Ronald C. Arkin; Robin R. Murphy

Current approaches towards achieving mobility in the workplace are reviewed. The role of automatic guided vehicles (AGVs) and some of the preliminary work of other groups in autonomous vehicles are described. An overview is presented of the autonomous robot architecture (AuRA), a general-purpose system designed for experimentation in the domain of intelligent mobility. The means by which navigation is accomplished within this framework is specifically addressed. A description is given of the changes made to AuRA to adapt it to a flexible manufacturing environment, the types of knowledge that need to be incorporated, and the new motor behaviors required for this domain. Simulations of both navigational planning and reactive/reflexive motor schema-based navigation in a flexible manufacturing systems environment, followed by actual navigational experiments using the mobile vehicle, are presented.


intelligent robots and systems | 1992

Sfx: An Architecture For Action-oriented Sensor Fusion

Robin R. Murphy; Ronald C. Arkin

Sensor fusion has an important role in the navigation of autonomous mobile robots. Our research has generated a generic and robust process model based on the action-oriented perception paradigm. The autonomous execution and exception handling abilities of this model have been implemented as the Sensor Fusion Effects (SFX) architecture. The key aspects of this implementation are the sensing plan, the uncertainty management mechanism, the application of feedback from the sensing process to individual sensors, the detection of exceptions to the sensing plan, and handling of those exceptions. This paper gives an overview of the SFX architecture, concentrating on the sensing plan as the central control structure guiding autonomous execution. This paper also reports on experiments using sensor data collected from our mobile robot which demonstrate the use of the sensing plan representation, the execution sequence, the application of feedback, and how feedback improves the overall sensing capabilities of the robot.


intelligent robots and systems | 1989

Mobile Robot Docking Operations in a Manufacturing Environment: Progress in Visual Perceptual Strategies

Ronald C. Arkin; Robin R. Murphy; Mark P. Pearson; David Vaughn

This paper presents four different visual strategies for use in docking operations for a mobile robot in a manufacturing environment. The algorithms developed include temporal activity (motion) detection, Hough transform-based recognition, adaptive fast region segmentation, and edge-based texture methods. These algorithms are to be sequenced in a manner that is consistent with our robot's motor behavior (schema) for docking, exploiting aspects of ballistic and controlled motion as the robot moves towards a workstation.


systems man and cybernetics | 1995

Cooperative assistance for remote robot supervision

Erika Rogers; Robin R. Murphy; A. Stewart; Nazir A. Warsi

This paper describes current work on the design of a computer system which provides cooperative assistance for the supervision of remote semi-autonomous robots. It consists of a blackboard-based framework which allows communication between the remote robot, the local human supervisor, and an intelligent mediating system, which aids interactive exception handling when the remote robot requires the assistance of the local operator.


visual communications and image processing | 1990

A Strategy for the Fine Positioning of a Mobile Robot using Texture

Robin R. Murphy

This paper presents a low-level visual strategy for positioning a mobile robot over short distances (6 feet) using the texture of an artificial landmark. The relative depth of the robot can be recovered from the number of texture generated edges detected in the landmark region. This technique can be extended to recover orientation as well as depth. In that application, the ratio of the number of edges per unit area in one side of the region to the other determines the orientation. The orientation taken with total number of edges determines the depth. The use of the number of edges per unit area as the metric enables this strategy to work well under variations in the shape and size of the region, including mild obscurations. Experiments show that depth can be recovered from an appropriate texture with an average error of 5.7% over a range of 73 to 10 inches. If the landmark is not perpendicular to the camera, the orientation can be recovered with an average error of 9.0° and depth with 8.0% over a range of 84 to 60 inches. Motivation and experiments are discussed, including the issues in designing an appropriate texture for an application. Results with our mobile robot using a motor control strategy similar to the controlled movement of the docking behavior are presented.
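
The edge-density idea above can be sketched roughly as follows. This is an illustrative assumption, not the paper's code: the function names, the calibration pairs, and the interpolation scheme are all hypothetical, and the real calibration would come from experiments like those described.

```python
# Illustrative sketch (not the paper's implementation): depth from the
# total number of texture-generated edges via a calibration table, and
# orientation from the ratio of edge density between the region's halves.

def edge_density_ratio(left_edges, right_edges, left_area, right_area):
    """Edges per unit area on one side of the landmark region divided by
    the other; a ratio near 1.0 suggests the landmark is viewed head-on."""
    return (left_edges / left_area) / (right_edges / right_area)

def depth_from_edges(total_edges, calibration):
    """Piecewise-linear lookup of depth from the total edge count.
    calibration: (edge_count, depth_inches) pairs sorted by edge count,
    clamped at both ends of the calibrated range."""
    counts = [c for c, _ in calibration]
    depths = [d for _, d in calibration]
    if total_edges <= counts[0]:
        return depths[0]
    if total_edges >= counts[-1]:
        return depths[-1]
    for (c0, d0), (c1, d1) in zip(calibration, calibration[1:]):
        if c0 <= total_edges <= c1:
            t = (total_edges - c0) / (c1 - c0)
            return d0 + t * (d1 - d0)

# Hypothetical calibration data, chosen only to exercise the lookup.
cal = [(10, 73.0), (50, 40.0), (90, 10.0)]
```

Using edges per unit area rather than raw counts is what makes the metric tolerant of variations in region shape and size, including mild obscurations.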


Sensor Fusion III: 3D Perception and Recognition | 1991

Control scheme for sensor fusion for navigation of autonomous mobile robots

Robin R. Murphy

Sensor fusion in robotics, particularly for navigation of autonomous mobile robots, has typically been addressed as a “bottom-up” or data driven process. This has led to a variety of systems that, although somewhat successful, have been difficult to expand to include additional sensors or extend to other domains. The approach taken here is to specify and develop a control scheme which considers the sensor fusion process in the context of the intended actions of the robot, knowledge of the environment, and the available sensor suite. The resulting control scheme exploits environmental knowledge in three ways in order to reduce processing. First, the control structure supports adaptation of the sensor fusion process to the environment and intended action. An appropriate set of candidate features is selected from the feature extraction library during the investigatory phase. Fusion occurs during the performatory phase in one of three global states: complete sensor fusion; fusion with the possibility of discordance and resultant recalibration of dependent perceptual sources; and fusion with the possibility of discordance and resultant suppression of discordant perceptual sources. Second, the states themselves use environmental knowledge to improve the fusion results as well as the sensing quality. Knowledge about how a sensor behaves under certain environmental conditions can lead to the exclusion of suspect readings from the fusion process. Third, the control scheme allows the system to respond to unexpected or catastrophic changes in the environment or sensors by permitting transitions between states. When an unacceptable discordance is detected between features, the investigatory phase is re-invoked, the system reconfigured, and instantiated in a new state.


international symposium on intelligent control | 1990

Adaptive tracking for a mobile robot

Robin R. Murphy; Ronald C. Arkin

A novel technique for adaptive tracking in indoor lighting environments based on Chebyshev's theorem is presented. The technique is used to recover the region corresponding to an artificial landmark accurately and efficiently through a sequence of images. Accurate region segmentation is the first step in determining the position of a mobile robot relative to a landmark. Nonadaptive region tracking techniques are susceptible to even small variations in indoor illumination and as a result may return degraded regions. An adaptive feedforward technique is necessary to combat this degradation, which is measured in terms of preservation of the centroid, region size, and visual erosion. This technique has been tested successfully using black and white images acquired from a mobile robot. Demonstrations of the adaptive tracking technique working in conjunction with the move-to-goal and follow-the-leader behaviors on the mobile robot are presented.
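
A minimal sketch of the Chebyshev-based, feedforward idea, assuming a grayscale frame held in a NumPy array; the function names and the choice of k are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def chebyshev_band(frame, mean, std, k=2.0):
    """Keep pixels within k standard deviations of the tracked region's
    intensity statistics. Chebyshev's theorem guarantees that at least
    1 - 1/k**2 of the region's pixels fall inside this band, with no
    assumption about the shape of the intensity distribution."""
    return (frame >= mean - k * std) & (frame <= mean + k * std)

def feed_forward(frame, mask):
    """Recompute the region statistics from the recovered region and feed
    them forward to the next frame, so the band adapts to gradual
    illumination changes instead of letting the segmentation degrade."""
    region = frame[mask]
    if region.size == 0:
        return None  # landmark lost; caller should re-acquire
    return float(region.mean()), float(region.std())

# Synthetic example: a bright 10x10 landmark on a dark background.
frame = np.full((40, 40), 30.0)
frame[10:20, 10:20] = 100.0
mask = chebyshev_band(frame, mean=100.0, std=5.0)
mean, std = feed_forward(frame, mask)  # statistics for the next frame
```

The distribution-free guarantee is what makes this attractive indoors, where landmark intensities under mixed lighting need not be anywhere near Gaussian.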


Sensor Fusion IV: Control Paradigms and Data Structures | 1992

State-based sensor fusion for surveillance

Robin R. Murphy

This paper presents a state-based control scheme for sensor fusion in autonomous mobile robots. States specify the sensing strategy for each sensor; the feedback rule to be applied to the sensors; and a set of failure conditions, which signal abnormal or inconsistent evidence. Experiments were conducted in the surveillance domain, where the robot was to determine if three different areas in a cluttered tool room remained unchanged after each visit. The data collected from four sensors (a Sony Hi8 color camcorder, a Pulnix black and white camera, an Inframetrics true infrared camera, and Polaroid ultrasonic transducers) and fused using the sensor fusion effects architecture (SFX) support the claims that the state-based control scheme produces percepts which are consistent with the scene being viewed, can improve the global belief in a percept, can improve the sensing quality of the robot, and is robust under a variety of conditions.
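
The notion of a state bundling a sensing strategy, a feedback rule, and failure conditions can be sketched as follows; the class and its interfaces are assumptions made for illustration, not SFX's actual design.

```python
class FusionState:
    """One state of a hypothetical state-based fusion controller: a
    sensing strategy, a feedback rule applied to the sensors, and a set
    of failure conditions signalling abnormal or inconsistent evidence."""

    def __init__(self, name, sense, feedback, failure_conditions):
        self.name = name
        self.sense = sense                            # () -> percepts
        self.feedback = feedback                      # percepts -> None
        self.failure_conditions = failure_conditions  # [percepts -> bool]

    def step(self):
        """Run one sensing cycle; return ok=False when a failure
        condition fires, signalling a transition to another state."""
        percepts = self.sense()
        if any(failed(percepts) for failed in self.failure_conditions):
            return False, percepts
        self.feedback(percepts)
        return True, percepts

# Toy usage: beliefs from two sensors; fail if either drops below 0.5.
state = FusionState(
    name="complete-fusion",
    sense=lambda: [0.9, 0.8],
    feedback=lambda p: None,
    failure_conditions=[lambda p: min(p) < 0.5],
)
ok, percepts = state.step()
```

Packaging the failure conditions inside each state is what lets the controller detect inconsistent evidence locally and decide on a transition, rather than threading those checks through the fusion code itself.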


Proceedings of SPIE | 1991

Application of Dempster-Shafer theory to a novel control scheme for sensor fusion

Robin R. Murphy

The combination of imperfect evidence contributed by different sensors is a basic problem for sensor fusion in autonomous mobile robots. Current implementations of sensor fusion systems are restricted to fusing only certain classes of evidence because of the lack of a general framework for the combination of evidence. The author's approach to this problem is to first develop a model of the sensor fusion without committing to a particular theory of evidence, then to formulate a combination of evidence framework based on the requirements of the model. Previous work by the author has proposed such a model. This paper discusses the evidential demands of the model and one possible implementation using Dempster-Shafer theory. Three drawbacks of DS theory (computational intractability, weak assumptions of statistical independence, and counterintuitive averaging of strongly biased evidence) are eliminated by applying DS theory within the constraints of the model. An example based on simulated sensor data illustrates this application of Dempster-Shafer theory.
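
Dempster's rule of combination, the core operation the paper applies within the model's constraints, can be written compactly; the doorway example below is purely illustrative and not drawn from the paper's data.

```python
def combine_ds(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements to mass.
    Returns the normalized combined mass function, or None under
    total conflict, where the combination is undefined.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to disjoint sets
    if conflict >= 1.0:
        return None
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

# Illustrative example: two sensors judging whether a doorway is clear.
theta = frozenset({"clear", "blocked"})  # frame of discernment
m_sonar = {frozenset({"clear"}): 0.6, theta: 0.4}
m_vision = {frozenset({"clear"}): 0.7, frozenset({"blocked"}): 0.1, theta: 0.2}
fused = combine_ds(m_sonar, m_vision)
```

Mass left on the whole frame theta represents ignorance rather than belief in either outcome, which is the feature that distinguishes DS combination from a Bayesian update.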


Archive | 1996

Knowledge-Based Image Enhancement for Cooperative Tele-Assistance

Erika Rogers; Versonya Dupont; Robin R. Murphy; Nazir A. Warsi

There is an increasing need in complex environments for computerized assistance, both for the effective filtering and display of pertinent information or data, and also for the decision-making task itself. The combination of artificial intelligence techniques with image processing and graphics capabilities provides the foundation for building intelligent systems which act as intermediaries between the human and the task domain. In the field of tele-assistance, this type of system enables cooperative problem-solving between a remote semi-autonomous robot and a local human supervisor. This paper describes current work on such a system, with an emphasis on the development of knowledge-based image enhancement capabilities. These allow the intelligent assistant to request particular images related to a failure state, and to automatically enhance those images in such a manner that the local supervisor may quickly and effectively make a decision.

Collaboration

Robin R. Murphy's top co-authors and their affiliations:

Ronald C. Arkin (Georgia Institute of Technology)
Erika Rogers (Clark Atlanta University)
Nazir A. Warsi (Clark Atlanta University)
David Vaughn (Georgia Institute of Technology)
Kenneth F. Hughes (University of South Florida)
Mark P. Pearson (Georgia Institute of Technology)