Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Richard Alan Peters is active.

Publication


Featured research published by Richard Alan Peters.


IEEE Transactions on Image Processing | 2002

Topological median filters

Hakan Guray Senel; Richard Alan Peters; Benoit M. Dawant

This paper describes the definition and testing of a new type of median filter for images. The topological median filter implements some existing ideas and some new ideas on fuzzy connectedness to improve, over a conventional median filter, the extraction of edges in noise. The concept of alpha-connectivity is defined and used to create an algorithm for computing the degree of connectedness of a pixel to all the other pixels in an arbitrary neighborhood. The resulting connectivity map of the neighborhood effectively disconnects peaks in the neighborhood that are separated from the center pixel by a valley in the brightness topology. The median of the connectivity map is an estimate of the median of the peak or plateau to which the center pixel belongs. Unlike the conventional median filter, the topological median is relatively unaffected by disconnected features in the neighborhood of the center pixel. Four topological median filters are defined. Qualitative and statistical analyses of the four filters are presented. It is demonstrated that edge detection can be more accurate on topologically median filtered images than on conventionally median filtered images.
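The neighborhood connectivity computation described above can be sketched in Python. This is a hypothetical illustration that uses the standard max-min (fuzzy-connectedness) path criterion as a stand-in for the paper's alpha-connectivity; it is not the authors' algorithm:

```python
import heapq
import numpy as np

def connectivity_map(nbhd, ci, cj):
    """Degree of connectedness of each pixel to the center pixel:
    the best (max over paths) of the worst (min along the path)
    brightness encountered. Computed with a Dijkstra-like
    propagation on the max-min criterion."""
    h, w = nbhd.shape
    conn = np.full((h, w), -np.inf)
    conn[ci, cj] = nbhd[ci, cj]
    heap = [(-conn[ci, cj], ci, cj)]
    while heap:
        negc, i, j = heapq.heappop(heap)
        c = -negc
        if c < conn[i, j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                cand = min(c, nbhd[ni, nj])  # path min through this step
                if cand > conn[ni, nj]:
                    conn[ni, nj] = cand
                    heapq.heappush(heap, (-cand, ni, nj))
    return conn

def topological_median(nbhd):
    """Median of the connectivity map rather than of the raw values."""
    ci, cj = nbhd.shape[0] // 2, nbhd.shape[1] // 2
    return np.median(connectivity_map(nbhd, ci, cj))
```

A bright feature separated from the center pixel by a darker valley is clipped to the valley level in the connectivity map, so it cannot pull the median the way it would in a conventional median filter.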


Ad Hoc Networks | 2010

A distributed intrusion detection system for resource-constrained devices in ad-hoc networks

Adrian P. Lauf; Richard Alan Peters; William H. Robinson

This paper describes the design and implementation of a two-stage intrusion detection system (IDS) for use with mobile ad-hoc networks. Our anomaly-based intrusion detection is provided by analyzing the context from the application-level interactions of networked nodes; each interaction corresponds to a specific function or behavior within the operational scenario of the network. A static set of behaviors is determined offline, and these behaviors are tracked dynamically during the operation of the network. During the first stage of the IDS, our detection strategy employs the analysis of global and local maxima in the probability density functions of the behaviors to isolate deviance at the granularity of a single node. This stage is used to capture the typical behavior of the network. The first stage also provides tuning and calibration for the second stage. During the second stage, a cross-correlative component is used to detect multiple threats simultaneously. Our approach distributes the IDS among all connected network nodes, allowing each node to identify potential threats individually. The combined result can detect deviant nodes in a scalable manner and can operate in the presence of a density of deviant nodes approaching 22%. Computational requirements are reduced to adapt optimally to embedded devices on an ad-hoc network.
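The first-stage idea — isolating deviance at the granularity of a single node by comparing the maxima of behavior probability density functions — might be sketched as follows. All function names, the L1 divergence measure, and the threshold are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

def behavior_pdf(events, behaviors):
    """Empirical probability distribution over a fixed behavior set."""
    counts = Counter(events)
    total = max(len(events), 1)
    return {b: counts[b] / total for b in behaviors}

def deviant_nodes(node_events, behaviors, threshold=0.3):
    """Flag a node when the global maximum (mode) of its behavior PDF
    differs from the network-wide mode AND its PDF diverges from the
    network-wide PDF by more than `threshold` in L1 distance."""
    all_events = [e for ev in node_events.values() for e in ev]
    global_pdf = behavior_pdf(all_events, behaviors)
    global_mode = max(global_pdf, key=global_pdf.get)
    flagged = []
    for node, events in node_events.items():
        pdf = behavior_pdf(events, behaviors)
        l1 = sum(abs(pdf[b] - global_pdf[b]) for b in behaviors)
        if max(pdf, key=pdf.get) != global_mode and l1 > threshold:
            flagged.append(node)
    return flagged
```

Because each node can evaluate this check over the traffic it observes, the detection distributes naturally across the network, in the spirit of the paper's design.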


IEEE Intelligent Systems & Their Applications | 2000

ISAC: foundations in human-humanoid interaction

Kazuhiko Kawamura; Richard Alan Peters; D.M. Wilkes; W.A. Alford; Tamara Rogers

The authors describe their humanoid robotic system, ISAC (Intelligent Soft-Arm Control), and their approach to human-humanoid interaction (HHI). They present a software architecture called the Intelligent Machine Architecture (IMA) and two high-level agents (the Human agent and the Self agent) within their HHI framework.


International Conference on Robotics and Automation | 2003

Robonaut task learning through teleoperation

Richard Alan Peters; Christina Louise Campbell; William Bluethmann; Eric Huber

This paper addresses the problem of automatic skill acquisition by a robot. It reports that six trials of a reach-grasp-release-retract skill are sufficient for learning a canonical description of the task under the following circumstances: The robot is Robonaut, NASA's space-capable, dexterous humanoid. Robonaut was teleoperated by a person using full-immersion virtual reality technology that transforms the operator's arm and hand motions into those of the robot. The operator's sole source of real-time feedback was visual. During the six trials, all of the robot's sensory inputs and motor control parameters were recorded as time series. Later, the time series from each trial was partitioned into the same number of episodes as a function of changes in the motor parameter sequence. The episodes were time-normalized and averaged across trials. The resultant motor parameter sequence and sensor signals were used to control the robot without the teleoperator. The robot was able to perform the task autonomously with robot starting positions and object locations both similar to, and different from, the original trials.
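The segment/time-normalize/average pipeline can be sketched as follows. This is illustrative only: the change-point segmentation and linear resampling are simplifying assumptions, and real trials would carry many sensory and motor channels rather than a single sequence:

```python
import numpy as np

def segment_episodes(motor, n_episodes):
    """Split a 1-D motor-parameter sequence at its largest changes so
    that every trial yields the same number of episodes (a simplified
    stand-in for the paper's segmentation)."""
    change = np.abs(np.diff(motor))
    cuts = np.sort(np.argsort(change)[-(n_episodes - 1):] + 1)
    return np.split(np.asarray(motor), cuts)

def canonical_trajectory(trials, n_episodes, samples_per_episode=20):
    """Time-normalize each episode to a fixed length, then average the
    aligned episodes across trials to get a canonical sequence."""
    normed = []
    for trial in trials:
        eps = segment_episodes(trial, n_episodes)
        resampled = [np.interp(np.linspace(0, 1, samples_per_episode),
                               np.linspace(0, 1, len(ep)), ep)
                     for ep in eps]
        normed.append(np.concatenate(resampled))
    return np.mean(normed, axis=0)
```

Time normalization is what makes averaging meaningful: trials of different durations are aligned episode-by-episode before their samples are combined.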


Intelligent Robots and Systems | 1998

A visual attention network for a humanoid robot

Joseph Andrew Driscoll; Richard Alan Peters; Kyle R. Cave

For a humanoid robot to interact easily with a person, the robot should have human-like sensory capabilities and attentional mechanisms. Particularly useful is an active vision head controlled by a visual attention system that selects viewpoints in the environment as a function of the robot's task. This paper describes a model of human visual attention called FeatureGate, which is a locally connected, pyramidal, artificial neural network that operates on 2D feature maps of the environment. Given a set of feature maps and the description of a specific target, FeatureGate finds the location whose features most closely match those of the target. The paper describes the network, its implementation, a series of tests that characterize its performance with respect to a person's performance on a similar task, and its use in the control of an active vision system.
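The core selection criterion — finding the location whose features best match a target across a set of feature maps — can be illustrated in a few lines. This simplification omits FeatureGate's pyramidal, locally connected structure and its inhibition mechanisms; the names are hypothetical:

```python
import numpy as np

def feature_match(feature_maps, target):
    """Score each image location by the Euclidean distance between its
    feature vector (one value per map) and the target's feature
    vector, and return the best-matching (row, col) location."""
    stack = np.stack(feature_maps, axis=-1)            # H x W x F
    dist = np.linalg.norm(stack - np.asarray(target), axis=-1)
    return np.unravel_index(np.argmin(dist), dist.shape)
```

In the full model, this matching happens level by level in a pyramid, with poorly matching locations gated out before their activity can propagate upward.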


IEEE Transactions on Aerospace and Electronic Systems | 1993

Image texture synthesis-by-analysis using moving-average models

J.A. Cadzow; D. M. Wilkes; Richard Alan Peters; Xingkang Li

A synthesis-by-analysis model for texture replication or simulation is presented. This model can closely replicate a given textured image or produce another image that, although distinct from the original, has the same general visual characteristics and the same first- and second-order gray-level statistics as the original image. The proposed texture synthesis algorithm contains three distinct components: a moving-average (MA) filter, a filter excitation function, and a gray-level histogram. The analysis portion of the algorithm derives the three from a given image. The synthesis portion convolves the MA filter kernel with the excitation function, adds noise, and modifies the histogram of the result. The advantages of this texture model over others include conceptually and computationally simple and robust parameter estimation, inherent stability, parsimony in the number of parameters, and synthesis through convolution. The authors describe a procedure for deriving the correct MA kernel using a signal enhancement algorithm, demonstrate the effectiveness of the model by using it to mimic several diverse textured images, discuss its applicability to the problem of infrared background simulation, and include detailed algorithms for the implementation of the model.
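The synthesis portion maps directly onto code. The sketch below is hypothetical: FFT-based circular convolution and rank-order histogram specification stand in for details the abstract does not specify:

```python
import numpy as np

def synthesize_texture(kernel, excitation, target_gray_levels,
                       noise_sigma=0.0, rng=None):
    """Convolve the MA kernel with the excitation, optionally add
    Gaussian noise, then impose the target gray-level histogram by
    giving the i-th ranked pixel the i-th ranked target level."""
    rng = np.random.default_rng(0) if rng is None else rng
    # 2-D circular convolution via the FFT (kernel zero-padded to the
    # excitation's size; boundary handling is a simplification here)
    H = np.fft.fft2(kernel, s=excitation.shape)
    img = np.real(np.fft.ifft2(H * np.fft.fft2(excitation)))
    img += noise_sigma * rng.standard_normal(img.shape)
    # histogram specification by rank-order matching
    flat = img.ravel()
    order = np.argsort(flat)
    out = np.empty_like(flat)
    out[order] = np.sort(np.asarray(target_gray_levels, float).ravel())
    return out.reshape(img.shape)
```

The final rank-order step is what guarantees the synthesized image reproduces the analyzed image's first-order gray-level statistics exactly.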


Applied Artificial Intelligence | 1998

Toward socially intelligent service robots

D.M. Wilkes; A. Alford; Robert T. Pack; Tamara Rogers; Richard Alan Peters; Kazuhiko Kawamura

In the Intelligent Robotics Laboratory at Vanderbilt University, we seek to develop service robots with a high level of social intelligence and interactivity. In order to achieve this goal, we have identified two main issues for research. The first issue is how to achieve a high level of interaction between the human and the robot. This has led to the formulation of our philosophy of Human Directed Local Autonomy (HuDL), a guiding principle for research, design, and implementation of service robots. The motivation for integrating humans into a service robot system is to take advantage of human intelligence and skill. Human intelligence can be used to interpret robot sensor data, eliminating computationally expensive and possibly error-prone automated analyses. Human skill is a valuable resource for trajectory and path planning as well as for simplifying the search process. In this article, we present our plans for integrating humans into a service robot system. We present our paradigm for human-robot int...


IEEE-RAS International Conference on Humanoid Robots | 2004

Building an autonomous humanoid tool user

William Bluethmann; Robert O. Ambrose; Myron A. Diftler; Eric Huber; Andrew H. Fagg; Michael T. Rosenstein; Robert Platt; Roderic A. Grupen; Cynthia Breazeal; Andrew G. Brooks; Andrea Lockerd; Richard Alan Peters; Odest Chadwicke Jenkins; Maja J. Matarić; Magdalena D. Bugajska

To make the transition from a technological curiosity to productive tools, humanoid robots will require key advances in many areas, including mechanical design, sensing, embedded avionics, power, and navigation. Using the NASA Johnson Space Center's Robonaut as a testbed, the DARPA Mobile Autonomous Robot Software (MARS) humanoids team is investigating technologies that will enable humanoid robots to work effectively with humans and autonomously work with tools. A novel learning approach is being applied that enables the robot to learn both from a remote human teleoperating the robot and from an adjacent human giving instruction. When the remote human performs tasks teleoperatively, the robot learns the sensory-motor features salient to executing the task. Once learned, the task may be carried out autonomously by fusing the required skills, guided by on-board sensing. The adjacent human takes advantage of previously learned skills by sequencing their execution. Preliminary results from initial experiments using a drill to tighten lug nuts on a wheel are discussed.


IEEE Transactions on Robotics | 2006

Superpositioning of behaviors learned through teleoperation

Christina Louise Campbell; Richard Alan Peters; Robert E. Bodenheimer; William Bluethmann; Eric Huber; Robert O. Ambrose

This paper reports that the superposition of a small set of behaviors, learned via teleoperation, can lead to robust completion of an articulated reach-and-grasp task. The results support the hypothesis that a robot can learn to interact purposefully with its environment through a developmental acquisition of sensory-motor coordination. Teleoperation can bootstrap the process by enabling the robot to observe its own sensory responses to actions that lead to specific outcomes within an environment. It is shown that a reach-and-grasp task, learned by an articulated robot through a small number of teleoperated trials, can be performed autonomously with success in the face of significant variations in the environment and perturbations of the goal. In particular, teleoperation of the robot to reach and grasp an object at nine different locations in its workspace enabled robust autonomous performance of the task anywhere within the workspace. Superpositioning was performed using the Verbs and Adverbs algorithm that was developed originally for the graphical animation of articulated characters. The work was performed on Robonaut, the NASA space-capable humanoid at Johnson Space Center, Houston, TX.
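The blending step can be caricatured as a weighted combination of time-aligned example trajectories, with weights derived from the new goal's position relative to the example goals. Inverse-distance weights here are an illustrative stand-in for the radial-basis-function interpolation used by the actual Verbs and Adverbs algorithm:

```python
import numpy as np

def adverb_weights(example_goals, new_goal, eps=1e-9):
    """Inverse-distance weights over the example goal positions
    (hypothetical; the real algorithm uses RBF interpolation)."""
    d = np.linalg.norm(np.asarray(example_goals, float)
                       - np.asarray(new_goal, float), axis=1)
    w = 1.0 / (d + eps)
    return w / w.sum()

def blend_trajectories(examples, weights):
    """Superpose time-aligned example motions (all assumed already
    normalized to the same length) as a convex combination."""
    w = np.asarray(weights, float)
    return np.tensordot(w / w.sum(), np.asarray(examples, float), axes=1)
```

The point of the construction is the one the abstract makes: a small set of example reaches, acquired by teleoperation at a few workspace locations, spans the whole workspace once new goals are expressed as blends of the examples.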


Computational Intelligence in Robotics and Automation | 2007

Reinforcement Learning with a Supervisor for a Mobile Robot in a Real-world Environment

Karla Conn; Richard Alan Peters

This paper describes two experiments with supervised reinforcement learning (RL) on a real, mobile robot. Two types of experiments were performed. One tests the robot's reliability in implementing a navigation task it has been taught by a supervisor. The other, in which new obstacles are placed along the previously learned path to the goal, measures the robot's robustness to changes in its environment. Supervision consisted of human-guided, remote-controlled runs through a navigation task during the initial stages of reinforcement learning. The RL algorithms deployed enabled the robot to learn a path to a goal yet retain the ability to explore different solutions when confronted with a new obstacle. Experimental analysis was based on measurements of the average time to reach the goal, the number of failed states encountered during an episode, and how closely the RL learner matched the supervisor's actions.
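The supervised-RL scheme — human-guided action selection during early episodes, epsilon-greedy Q-learning afterwards — might look like this in tabular form. This is a minimal sketch; the state/action names, learning rate, and exploration schedule are assumptions, not the paper's parameters:

```python
import random

def q_update(Q, s, a, r, s2, actions, alpha=0.5, gamma=0.9):
    """One tabular Q-learning backup on a dict-backed Q table."""
    best_next = max(Q.get((s2, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
        r + gamma * best_next - Q.get((s, a), 0.0))

def choose_action(Q, s, actions, supervisor=None, epsilon=0.1, rng=random):
    """During the supervised phase the human's action overrides
    exploration; afterwards the robot acts epsilon-greedily on its
    learned values, retaining the ability to explore around new
    obstacles."""
    if supervisor is not None:
        return supervisor(s)           # remote-controlled guidance
    if rng.random() < epsilon:
        return rng.choice(actions)     # exploration
    return max(actions, key=lambda a: Q.get((s, a), 0.0))
```

Because the supervisor only chooses actions while the value table is still updated from observed rewards, the demonstrations seed the Q-values rather than replace the learning, which matches the division of labor the abstract describes.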

Collaboration


Dive into Richard Alan Peters's collaboration.

Top Co-Authors

Tamara Rogers

Tennessee State University
