Publication


Featured research published by R. Alan Peters.


International Journal of Humanoid Robotics | 2004

A PARALLEL DISTRIBUTED COGNITIVE CONTROL SYSTEM FOR A HUMANOID ROBOT

Kazuhiko Kawamura; R. Alan Peters; Robert E. Bodenheimer; Nilanjan Sarkar; Juyi Park; Charles A. Clifton; Albert William Spratley; Kimberly A. Hambuchen

During the last decade, researchers at Vanderbilt have been developing a humanoid robot called the Intelligent Soft Arm Control (ISAC). This paper describes ISAC in terms of its software components and with respect to the design philosophy that has evolved over the course of its development. Central to the control system is a parallel, distributed software architecture comprising a set of independent software objects, or agents, that execute as needed on standard PCs linked via Ethernet. Fundamental to the design philosophy is the direct physical interaction of the robot with people. Initially, this philosophy guided application development. Yet over time it became apparent that such interaction may be necessary for the acquisition of intelligent behaviors by an agent in a human-centered environment. Concurrent with that evolution was a shift from a programmer's high-level specification of action toward the robot's own acquisition of primitive behaviors through sensory-motor coordination (SMC) and task learning through cognitive control and working memory. Described are the parallel distributed cognitive control architecture and the advantages and limitations that have guided its development. Primary structures for sensing, memory, and cognition are described, as are motion learning through teleoperation and fault diagnosis through system health monitoring. The generality of the control system is discussed in terms of its applicability to physically heterogeneous robots and multi-robot systems.


Intelligent Systems and Advanced Manufacturing | 2002

Toward perception-based navigation using EgoSphere

Kazuhiko Kawamura; R. Alan Peters; D.M. Wilkes; Ahmet Bugra Koku; Ali Sekman

A method for perception-based egocentric navigation of mobile robots is described. Each robot has a local short-term memory structure called the Sensory EgoSphere (SES), which is indexed by azimuth, elevation, and time. Directional sensory processing modules write information on the SES at the location corresponding to the source direction. Each robot has a partial map of its operational area, received a priori. The map is populated with landmarks and is not necessarily metrically accurate. Each robot is given a goal location and a route plan. The route plan is a set of via-points that are not used directly. Instead, a robot uses each point to construct a Landmark EgoSphere (LES), a circular projection of the landmarks from the map onto an EgoSphere centered at the via-point. Under normal circumstances, the LES will be mostly unaffected by slight variations in the via-point location. Thus, the route plan is transformed into a set of via-regions, each described by an LES. A robot navigates by comparing the next LES in its route plan to the current contents of its SES. It heads toward the indicated landmarks until its SES matches the LES sufficiently to indicate that the robot is near the suggested via-point. The proposed method is particularly useful for enabling the exchange of robust route information between robots under low-data-rate communications constraints. An example of such an exchange is given.
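The LES-to-SES comparison described above can be pictured with a minimal sketch. All names here are hypothetical, and the geodesic EgoSphere is simplified to flat azimuth bins; this is an illustration of the matching idea, not the authors' implementation:

```python
import math

N_BINS = 36  # 10-degree azimuth bins (a flat simplification of the geodesic SES)

def azimuth_bin(dx, dy):
    """Map a direction vector to an azimuth bin index."""
    ang = math.degrees(math.atan2(dy, dx)) % 360.0
    return int(ang // (360.0 / N_BINS))

def landmark_egosphere(landmarks, via_point):
    """Project map landmarks onto azimuth bins around a via-point (the LES)."""
    vx, vy = via_point
    return {azimuth_bin(x - vx, y - vy): name for name, (x, y) in landmarks.items()}

def match_score(ses, les, tolerance=1):
    """Fraction of LES landmarks found in the SES within +/- tolerance bins."""
    hits = 0
    for b, name in les.items():
        if any(ses.get((b + d) % N_BINS) == name for d in range(-tolerance, tolerance + 1)):
            hits += 1
    return hits / len(les) if les else 0.0

landmarks = {"tower": (10.0, 0.0), "gate": (0.0, 8.0), "tree": (-6.0, -6.0)}
les = landmark_egosphere(landmarks, via_point=(0.0, 0.0))
ses = dict(les)  # a robot sensing exactly at the via-point sees the same directions
print(match_score(ses, les))  # 1.0 when the robot is at the via-point
```

A robot would head toward the LES landmarks and declare arrival at the via-region once the score crosses a threshold; the bin tolerance is what makes the match robust to small offsets from the via-point.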


International Conference on Robotics and Automation | 2002

Enhancing a human-robot interface using Sensory EgoSphere

Carlotta Johnson; A. Bugra Koku; Kazuhiko Kawamura; R. Alan Peters

This paper shows how a Sensory EgoSphere (SES), a robot-centric geodesic dome that represents the short-term memory of a mobile robot, can enhance a human-robot interface. It is proposed that adding this visual representation of the sensor data on a mobile robot enhances the effectiveness of a human-robot interface. The SES migrates information presentation to the user from the sensing level to the perception level. The composition of vision with other sensors on the SES surrounding the robot gives clarity and ease of interpretation, enabling the user to better visualize the present circumstances of the robot. The human-robot interface (HRI) is implemented through a graphical user interface (GUI) containing the SES, a command prompt, a compass, an environment map, and sonar and laser displays. This paper proposes that the SES increases situational awareness and allows the human supervisor to accurately ascertain the present perception (sensory input) of the robot and to use this information to assist the robot in getting out of difficult situations.


Journal of Electronic Imaging | 1996

Morphological pseudo bandpass image decompositions

R. Alan Peters

A morphological bandpass filter would, ideally, strictly limit the sizes of all features in an image to lie between the sizes of two similarly shaped but differently scaled structuring elements. A morphological bandpass decomposition of an image would be a disjoint set of morphological bandpass images with features of increasing size such that the set sums to the original image. Such strict bandpass limitations in size are not possible in general for arbitrary structuring element families. Hence, a true bandpass decomposition is not generally possible. Pseudo bandpass decompositions, in which intraband size limitations are relaxed, are possible and can be useful image analysis tools. Four pseudo bandpass image decompositions are described, one of which, the opening spectrum, is relatively well known and three of which are new. They are a decomposition derived from iteration of the top-hat transform, a morphological reconstruction of a Euclidean (quasi) granulometry, and a reconstruction of the opening spectrum. Properties of the opening spectrum and the top-hat transform are reviewed. The top-hat spectrum is defined, some of its properties are deduced, and it is compared to the opening spectrum. The reconstruction-based decompositions are defined and compared to the others. Comparative examples are given and a practical use described.
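The opening spectrum referred to above can be sketched in a few lines. This is a generic 1D illustration with flat structuring elements, not the paper's implementation; the key property it demonstrates is that the bands plus the residual telescope back to the original signal:

```python
import numpy as np

def opening_1d(f, k):
    """Flat grayscale opening of a 1D signal by a structuring element of k samples."""
    n = len(f)
    pad = np.pad(f, (0, k - 1), mode="edge")
    ero = np.min(np.stack([pad[j:j + n] for j in range(k)]), axis=0)    # erosion
    pad2 = np.pad(ero, (k - 1, 0), mode="edge")
    return np.max(np.stack([pad2[j:j + n] for j in range(k)]), axis=0)  # dilation

def opening_spectrum(f, max_k):
    """Pseudo-bandpass bands S_k = opening_k(f) - opening_{k+1}(f), plus a residual."""
    opens = [opening_1d(f, k) for k in range(1, max_k + 2)]  # opening_1 is the identity
    bands = [opens[i] - opens[i + 1] for i in range(max_k)]
    return bands, opens[-1]

f = np.array([0., 2., 1., 5., 5., 1., 0., 3., 0.])
bands, residual = opening_spectrum(f, max_k=3)
print(np.allclose(residual + sum(bands), f))  # True: the bands sum back to the original
```

Each band S_k holds (approximately) the features removed when the structuring element grows from k to k+1 samples, which is the "pseudo bandpass" relaxation the abstract describes.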


SPIE's 1994 International Symposium on Optics, Imaging, and Instrumentation | 1994

Colored-object detection for a mobile robot

Koji Fujiwara; R. Alan Peters; Kazuhiko Kawamura

This paper documents a target acquisition and retrieval problem for a mobile robot. The robot's goal is to grasp a known object whose location is unknown. The robot achieves its goal through a visual search followed by physical movement toward the object. To avoid the computationally expensive, general problem of finding a specific object in a cluttered scene, the robot restricts its visual search area to those places that exhibit colors found on the object. When the object is outside grasping range, precise world coordinates of the object are unnecessary for the robot to approach it. Rough coordinate estimates are sufficient if they are quickly computable and improve with decreasing range. As an example of this problem, a robot is programmed to find and approach a specific soda can at an unknown location in a cluttered environment. Color, in this situation, is a more reliable cue to the location of the can than other features. This paper presents a focus-of-attention algorithm using color, which provides rough estimates of object position. The algorithm described here is related to the histogram backprojection algorithm of Ballard and Swain, but it does not require the object image size a priori.
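The focus-of-attention idea can be illustrated with a toy histogram-backprojection sketch. The scene, color values, and function names here are invented for illustration; the paper's own algorithm is related to but differs from plain backprojection (notably, it avoids needing the object's image size a priori):

```python
import numpy as np

def color_hist(img, bins=8):
    """Normalized 3D color histogram; img is an (H, W, 3) uint8 array."""
    idx = (img // (256 // bins)).reshape(-1, 3)
    h = np.zeros((bins, bins, bins))
    np.add.at(h, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    return h / h.sum()

def backproject(scene, target_hist, bins=8):
    """Score each scene pixel by the target histogram value of its color bin."""
    idx = scene // (256 // bins)
    return target_hist[idx[..., 0], idx[..., 1], idx[..., 2]]

# toy scene: mostly gray, with a red patch standing in for the soda can
scene = np.full((20, 20, 3), 128, dtype=np.uint8)
scene[5:9, 12:16] = (200, 30, 30)
target = np.full((4, 4, 3), (200, 30, 30), dtype=np.uint8)  # model of the can's color

scores = backproject(scene, color_hist(target))
cy, cx = np.unravel_index(np.argmax(scores), scores.shape)
print(cy, cx)  # the rough location estimate lands inside the red patch
```

The score map is exactly the kind of rough, quickly computable position estimate the abstract calls for: it needs no segmentation, and it sharpens as the robot closes range and the object fills more of the image.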


IS&T/SPIE 1994 International Symposium on Electronic Imaging: Science and Technology | 1994

Morphological bandpass decomposition of images

R. Alan Peters; James A. Nichols

The concept of a morphological size distribution is well known. It can be envisioned as a sequence of progressively more highly smoothed images which is a nonlinear analogue of scale space. Whereas the differences between Gaussian lowpass filtered images in scale space form a sequence of approximately Laplacian bandpass filtered images, the difference image sequence from a morphological size distribution is not bandpass in any usual sense for most images. This paper presents a proof that a strictly size band limited sequence can be created along one dimension in an n dimensional image. This result is used to show how an image time sequence can be decomposed into a set of sequences each of which contains only events of a specific limited duration. It is shown that this decomposition can be used for noise reduction. This paper also presents two algorithms which create from morphological size distributions, (pseudo) size bandpass decompositions in more than one dimension. One algorithm uses Vincent grayscale reconstruction on the size distribution. The other reconstructs the difference image sequence.


Characterization, Propagation, and Simulation of Sources and Backgrounds | 1991

Simulation of infrared backgrounds using two-dimensional models

James A. Cadzow; D.M. Wilkes; R. Alan Peters; Xingkang Li; Jamshed N. Patel

Analysis and simulation of smart munitions requires imagery for the munition's sensor to view. The imagery is usually infrared and depicts a target embedded in a background. Mathematical models of such imagery are useful to munitions researchers. A mathematical model can synthesize a test scenario at a cost much less than that of actual data acquisition. To date, most research has focused on the modeling of targets. It is essential, however, to test a munition's target acquisition algorithms on images containing targets superimposed on a wide variety of backgrounds. Consequently, there is a need for accurate models of infrared backgrounds. Useful models are difficult to create because of the complexity and diversity of imagery viewed by smart munition sensors. A model of IR backgrounds is presented that will, given a textured image, generate another image. The synthetic image, although distinctly different from the original, has the general visual characteristics and the first- and second-order statistics of the original image. In effect, the synthetic image looks like a continuation of the original scene, as if another picture of the scene were taken adjacent to the original. The model is an FIR kernel convolved with an excitation function, with noise added to the result, followed by histogram modification. The paper describes a procedure for deriving the correct FIR kernel using a signal enhancement algorithm, and reports a demonstration of the model in which it is used to mimic several diverse textured images.
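The synthesis pipeline described in the abstract (FIR kernel convolved with an excitation, noise added, then histogram modification) might be sketched as follows. The kernel here is a placeholder rather than one derived by the paper's signal-enhancement procedure, and all names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(kernel, original, noise_sigma=0.05):
    """Sketch of the model: FIR kernel * white-noise excitation, plus noise,
    followed by histogram modification toward the original image."""
    h, w = original.shape
    excitation = rng.standard_normal((h, w))
    # circular convolution of the excitation with the FIR kernel (via the FFT)
    k = np.zeros((h, w))
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    field = np.real(np.fft.ifft2(np.fft.fft2(excitation) * np.fft.fft2(k)))
    field += noise_sigma * rng.standard_normal((h, w))
    # histogram modification by rank matching: impose the original's gray levels
    out = np.empty(h * w)
    out[np.argsort(field.ravel())] = np.sort(original.ravel())
    return out.reshape(h, w)

original = rng.random((32, 32))   # stand-in for a textured IR background
kernel = np.ones((3, 3)) / 9.0    # placeholder for the derived FIR kernel
synth = synthesize(kernel, original)
# after histogram modification the synthetic image has the original's gray-level
# distribution exactly, while its spatial structure comes from the filtered noise
print(np.allclose(np.sort(synth.ravel()), np.sort(original.ravel())))  # True
```

The rank-matching step is one common way to implement histogram modification; the FIR kernel is what carries the second-order (correlation) statistics from the original into the synthetic field.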


Human Vision and Electronic Imaging Conference | 2001

Effects of surface microstructure on macroscopic image shading

Duje Tadin; Richard F. Haglund; Joseph S. Lappin; R. Alan Peters

The reflectance characteristics of materials are known to have visible effects on image shading functions. The present study quantified the light-scattering distributions from roughened glass and correlated these with the microscopic surface topography. Samples were 11 silica microscope slides, roughened with commercial diamond pastes with particle sizes ranging from 4 to 67 μm. In-plane and out-of-plane scattering measurements were made with laser light incident at 45 degrees. In-plane scattering distributions for samples illuminated at 633 nm varied significantly in their shape, but all curves were roughly centered on the specular angles. For 514 nm illumination, disappearance of the specular components was observed for samples whose RMS roughness just exceeded the wavelength of the light. The results indicate that small changes in microscopic surface roughness can produce large changes in scattering. Evidently, surface microstructure has a pronounced effect on macroscopic appearance.


Intelligent Robots and Computer Vision XVI: Algorithms, Techniques, Active Vision, and Materials Handling | 1997

Hand-eye coordination with an active camera head

Magued Bishay; R. Alan Peters; D.M. Wilkes; Kazuhiko Kawamura

Hand-eye coordination is the coupling between vision and manipulation. Visual servoing is the term applied to hand-eye coordination in robots. In recent years, research has demonstrated that active vision -- active control of camera position and camera parameters -- facilitates a robot's interaction with the world. One aspect of active vision is centering an object in an image, known as gaze stabilization or fixation. This paper presents a new algorithm that applies target fixation to image-based visual servoing. This algorithm, called fixation point servoing (FPS), uses target fixation to eliminate the need for Jacobian computation. Additionally, FPS requires only the rotation relationship between the camera-head and gripper frames and does not require accurate tracking of the gripper. FPS was tested on a robotic system called ISAC, and experimental results are shown. FPS was also compared to a classical Jacobian-based technique using simulations of both algorithms.


SPIE's 1994 International Symposium on Optics, Imaging, and Instrumentation | 1994

Properties of image sequences generated through opening residuals

R. Alan Peters; James A. Nichols

This paper states and proves a number of properties of the tophat and the tophat spectrum. These include: the tophat is antiextensive and idempotent (but not increasing); each image in the tophat spectrum is size-limited and open; the structuring element family need not be mutually open to generate a tophat spectrum; if the SE family is mutually open, and the original image is binary, each image in the tophat spectrum includes the open part of the corresponding image from the opening spectrum; and, the tophat spectrum is identical to the opening spectrum created with a family of flat, 1D structuring elements.
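The first two properties stated above (antiextensivity and idempotence of the top-hat) can be checked numerically with a small flat-structuring-element sketch. This is a generic 1D illustration, not the paper's notation or proof:

```python
import numpy as np

def opening_1d(f, k):
    """Flat grayscale opening (erosion then dilation) by an SE of k samples."""
    n = len(f)
    pad = np.pad(f, (0, k - 1), mode="edge")
    ero = np.min(np.stack([pad[j:j + n] for j in range(k)]), axis=0)
    pad2 = np.pad(ero, (k - 1, 0), mode="edge")
    return np.max(np.stack([pad2[j:j + n] for j in range(k)]), axis=0)

def tophat(f, k):
    """White top-hat: the opening residual f - opening(f)."""
    return f - opening_1d(f, k)

f = np.array([0., 2., 1., 5., 5., 1., 0., 3., 0.])
t = tophat(f, 3)
print(np.all(t <= f))                # True: antiextensive
print(np.allclose(tophat(t, 3), t))  # True: idempotent
```

Idempotence holds because the residual contains only features the opening removed, so opening it again with the same structuring element leaves nothing, which is exactly what the check on the last line exercises.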

Collaboration


Dive into R. Alan Peters's collaborations.

Top Co-Authors

Ali Sekman

Tennessee State University
