
Publication


Featured research published by Anthony J. Barbera.


Annual Reviews in Control | 2004

RCS: A cognitive architecture for intelligent multi-agent systems

James S. Albus; Anthony J. Barbera

RCS (Real-time Control System) is a cognitive architecture designed to enable any level of intelligent behavior, up to and including human levels of performance. RCS was inspired 30 years ago by a theoretical model of the cerebellum, the portion of the brain responsible for fine motor coordination and control of conscious motions. It was originally designed for sensory-interactive goal-directed control of laboratory manipulators. Over three decades, it has evolved into a real-time control architecture for intelligent machine tools, factory automation systems, and intelligent autonomous vehicles. RCS consists of a multi-layered multi-resolutional hierarchy of computational agents each containing elements of sensory processing (SP), world modeling (WM), value judgment (VJ), behavior generation (BG), and a knowledge database (KD). At the lower levels, these agents generate goal-seeking reactive behavior. At higher levels, they enable decision making, planning, and deliberative behavior. This paper is a product of U. S. Government employees in the course of their assigned duties, and therefore not subject to copyright.
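The node structure described in the abstract can be sketched in miniature. This is a hedged sketch only: the class and method names below are illustrative, not the NIST implementation, and the behaviors are toy stand-ins for SP, WM, VJ, BG, and KD.

```python
# Toy sketch of one RCS computational agent; names are illustrative,
# not the NIST code. Each node cycles through sensory processing (SP),
# world modeling (WM), value judgment (VJ), and behavior generation (BG),
# backed by a knowledge database (KD).
from dataclasses import dataclass, field

@dataclass
class RcsNode:
    """One agent in the multi-layered, multi-resolutional hierarchy."""
    level: int
    knowledge: dict = field(default_factory=dict)   # KD

    def sensory_processing(self, observation):      # SP
        return {"observed": observation}

    def world_modeling(self, percept):              # WM
        self.knowledge.update(percept)
        return self.knowledge

    def value_judgment(self, state):                # VJ
        return state.get("observed") is not None

    def behavior_generation(self, goal, ok):        # BG
        # Lower levels react; higher levels would plan and deliberate.
        return f"pursue:{goal}" if ok else "replan"

    def cycle(self, goal, observation):
        percept = self.sensory_processing(observation)
        state = self.world_modeling(percept)
        return self.behavior_generation(goal, self.value_judgment(state))

node = RcsNode(level=1)
print(node.cycle("grasp", {"object": "part"}))  # pursue:grasp
```

In the full architecture, many such nodes are stacked so that higher levels plan over longer horizons while lower levels produce reactive, goal-seeking behavior.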


Industrial Optical Robotic Systems Technology & Applications | 2004

Status report on next-generation LADAR for driving unmanned ground vehicles

Maris Juberts; Anthony J. Barbera

The U.S. Department of Defense has initiated plans for the deployment of autonomous robotic vehicles in various tactical military operations starting in about seven years. Most of these missions will require the vehicles to drive autonomously over open terrain and on roads which may contain traffic, obstacles, military personnel, and pedestrians. Unmanned Ground Vehicles (UGVs) must therefore be able to detect, recognize, and track objects and terrain features in very cluttered environments. Although several LADAR sensors exist today which have been successfully implemented and demonstrated to provide somewhat reliable obstacle detection and can be used for path planning and selection, they tend to be limited in performance, are affected by obscurants, and are quite large and expensive. In addition, even though considerable effort and funding have been provided by the DOD R&D community, nearly all of the development has been for target detection (ATR) and tracking from various flying platforms. Participation in the Army- and DARPA-sponsored UGV programs has helped NIST to identify requirement specifications for LADAR to be used for on- and off-road autonomous driving. This paper describes the expected requirements for a next-generation LADAR for driving UGVs and presents an overview of proposed LADAR design concepts, along with a status report on current developments in scannerless Focal Plane Array (FPA) LADAR and advanced scanning LADAR which may be able to achieve the stated requirements. Examples of real-time range images taken with existing LADAR prototypes are presented.


Information Control Problems in Manufacturing Technology 1982: Proceedings of the 4th IFAC/IFIP Symposium, Maryland, USA, 26–28 October 1982 | 1983

An architecture for real-time sensory-interactive control of robots in a manufacturing facility

James S. Albus; Charles R. McLean; Anthony J. Barbera; M.L. Fitzgerald

A hierarchical architecture is described for a robot integrated into a real-time sensory-interactive factory control system. In this architecture, high-level goals are decomposed through a succession of levels, each producing strings of simpler commands to the next lower level. The bottom level generates the drive signals to the robot actuators. Each control level is a separate process with a limited scope of responsibility. Each performs the generic control function of sampling its input and generating appropriate outputs. The input is characterized by three types of data: a command from the next higher level, processed sensory data, and status feedback from the next lower level. The outputs are of three types: a command to the next lower level, a request for sensory information to the processing module at the same level, and status feedback to the next higher level. This paper describes this generic control structure and its implementation in a real-time sensory-interactive control system for a manufacturing facility.
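The generic control level described above, with its three inputs and three outputs, can be sketched as follows. All names and the trivial decomposition rule are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the generic control level: three inputs (command from the
# level above, processed sensory data, status from the level below) and
# three outputs (command to the level below, a sensory request, status
# to the level above). The decomposition rule here is a placeholder.
def control_level(command, sensory_data, status_below):
    # Decompose the incoming command into a simpler command string
    # for the next lower level.
    sub_command = f"{command}:step1"
    # Ask the sensory processing module at this level for the data
    # the current task needs.
    sensory_request = f"observe for {command}"
    # Report status upward, folding in feedback from below.
    status_up = "executing" if status_below == "ok" else "waiting"
    return sub_command, sensory_request, status_up

out = control_level("assemble", {"part_seen": True}, "ok")
print(out)  # ('assemble:step1', 'observe for assemble', 'executing')
```

Chaining such levels, with the bottom one emitting actuator drive signals, reproduces the hierarchy the abstract describes.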


Ai Magazine | 2006

Using 4D/RCS to address AI knowledge integration

Craig I. Schlenoff; James S. Albus; Elena R. Messina; Anthony J. Barbera; Raj Madhavan; Stephen B. Balakirsky

In this article, we show how 4D/RCS incorporates and integrates multiple types of disparate knowledge representation techniques into a common, unifying architecture. The 4D/RCS architecture is based on the supposition that different knowledge representation techniques offer different advantages, and it is designed to combine the strengths of these techniques so as to exploit the advantages of each. In the context of applying the architecture to the control of autonomous vehicles, we describe the procedural and declarative types of knowledge that have been developed and applied, and the value that each brings to achieving the ultimate goal of autonomous navigation. We also look at symbolic versus iconic knowledge representation and show how 4D/RCS accommodates both types of representations, using the strengths of each to strive toward human-level intelligence in autonomous systems.


Proceedings of SPIE, the International Society for Optical Engineering | 2005

Trajectory generation for an on-road autonomous vehicle

John A. Horst; Anthony J. Barbera

We describe an algorithm that generates a smooth trajectory (position, velocity, and acceleration at uniformly sampled instants of time) for a car-like vehicle autonomously navigating within the constraints of lanes in a road. The technique models both vehicle paths and lane segments as straight line segments and circular arcs for mathematical simplicity and elegance, which we contrast with cubic spline approaches. We develop the path in an idealized space, warp the path into real space and compute path length, generate a one-dimensional trajectory along the path length that achieves target speeds and positions, and finally, warp, translate, and rotate the one-dimensional trajectory points onto the path in real space. The algorithm moves a vehicle in lane safely and efficiently within speed and acceleration maximums. The algorithm functions in the context of other autonomous driving functions within a carefully designed vehicle control hierarchy.
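The one-dimensional stage of the trajectory algorithm, generating speeds along the path length under speed and acceleration caps, can be illustrated with a simple trapezoidal profile. This is a hedged sketch, not the paper's exact method, and the limits are illustrative.

```python
# Hedged sketch of the 1-D trajectory stage: given a path length,
# produce uniformly sampled (distance, speed) points that respect a
# speed maximum and an acceleration maximum. A trapezoidal profile
# stands in for the paper's method; constants are illustrative.
def trajectory_1d(path_length, v_max=10.0, a_max=2.0, dt=0.1):
    samples, s, v = [], 0.0, 0.0
    while s < path_length:
        # Distance needed to brake to rest from the current speed.
        brake_dist = v * v / (2.0 * a_max)
        if path_length - s <= brake_dist:
            v = max(0.0, v - a_max * dt)    # decelerate near the end
        else:
            v = min(v_max, v + a_max * dt)  # accelerate toward v_max
        s += v * dt
        samples.append((s, v))
    return samples

pts = trajectory_1d(20.0)
print(f"{len(pts)} samples, final s = {pts[-1][0]:.2f}")
```

In the full algorithm these one-dimensional points would then be warped, translated, and rotated onto the line-segment-and-arc path in real space.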


Proceedings of SPIE, the International Society for Optical Engineering | 2005

Collaborative tactical behaviors for autonomous ground and air vehicles

James S. Albus; Anthony J. Barbera; Harry A. Scott; Stephen B. Balakirsky

Tactical behaviors for autonomous ground and air vehicles are an area of high interest to the Army. They are critical for the inclusion of robots in the Future Combat System (FCS). Tactical behaviors can be defined at multiple levels: at the Company, Platoon, Section, and Vehicle echelons. They are currently being defined by the Army for the FCS Unit of Action. At all of these echelons, unmanned ground vehicles, unmanned air vehicles, and unattended ground sensors must collaborate with each other and with manned systems. Research being conducted at the National Institute of Standards and Technology (NIST) and sponsored by the Army Research Lab is focused on defining the Four Dimensional Real-time Controls System (4D/RCS) reference model architecture for intelligent systems and developing a software engineering methodology for system design, integration, test and evaluation. This methodology generates detailed design requirements for perception, knowledge representation, decision making, and behavior generation processes that enable complex military tactics to be planned and executed by unmanned ground and air vehicles working in collaboration with manned systems.


Proceedings of SPIE--the International Society for Optical Engineering | 2004

Task analysis of autonomous on-road driving

Anthony J. Barbera; John A. Horst; Craig I. Schlenoff; David W. Aha

The Real-time Control System (RCS) Methodology has evolved over a number of years as a technique to capture task knowledge and organize it into a framework conducive to implementation in computer control systems. The fundamental premise of this methodology is that the present state of the task activities sets the context that identifies the requirements for all of the support processing. In particular, the task context at any time determines what is to be sensed in the world, what world model states are to be evaluated, which situations are to be analyzed, what plans should be invoked, and which behavior generation knowledge is to be accessed. This methodology concentrates on the task behaviors explored through scenario examples to define a task decomposition tree that clearly represents the branching of tasks into layers of simpler and simpler subtask activities. There is a named branching condition/situation identified for every fork of this task tree. These become the input conditions of the if-then rules of the knowledge set that define how the task is to respond to input state changes. Detailed analysis of each branching condition/situation is used to identify antecedent world states, and these, in turn, are further analyzed to identify all of the entities, objects, and attributes that have to be sensed to determine if any of these world states exist. This paper explores the use of this 4D/RCS methodology in some detail for the particular task of autonomous on-road driving. This work was funded under the Defense Advanced Research Projects Agency (DARPA) Mobile Autonomous Robot Software (MARS) effort (Doug Gage, Program Manager).
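The mapping from named branching conditions to if-then rules that the methodology produces can be sketched as follows. The driving conditions and subtask names here are invented examples, not the paper's actual rule set.

```python
# Sketch of the methodology's output form: each named branching
# condition at a fork of the task tree becomes the antecedent of an
# if-then rule that selects the next subtask. Conditions and subtask
# names are illustrative examples only.
RULES = [
    # (branching condition over the world state, subtask to invoke)
    (lambda w: w["light"] == "red" and w["dist_to_intersection"] < 30.0,
     "StopAtIntersection"),
    (lambda w: w["lead_vehicle_gap"] < 2.0,
     "FollowAtSafeGap"),
    (lambda w: True,                        # default branch
     "DriveInLane"),
]

def select_subtask(world_state):
    """First matching rule wins, as in a simple production system."""
    for condition, subtask in RULES:
        if condition(world_state):
            return subtask

world = {"light": "green", "dist_to_intersection": 100.0,
         "lead_vehicle_gap": 1.5}
print(select_subtask(world))  # FollowAtSafeGap
```

Each antecedent here names exactly the world states that, per the methodology, must be traced back to sensed entities and attributes.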


Proceedings of SPIE--the International Society for Optical Engineering | 2004

Achieving intelligent performance in autonomous on-road driving

Craig I. Schlenoff; John Evans; Anthony J. Barbera; James S. Albus; Elena R. Messina; Stephen B. Balakirsky

This paper describes NIST's efforts in evaluating what it will take, in terms of time and funding, to achieve autonomous human-level driving skills. NIST has approached this problem from several perspectives: considering the current state of the art in autonomous navigation and extrapolating from there, decomposing the tasks identified by the Department of Transportation for on-road driving and comparing them with accomplishments to date, analyzing computing power requirements by comparison with the human brain, and conducting a Delphi Forecast using expert researchers in the field of autonomous driving. A detailed description of each of these approaches is provided, along with the major findings from each approach and an overall picture of what it will take to achieve human-level driving skills in autonomous vehicles.


Mobile Robots Conference | 2004

Identifying sensory processing requirements for an on-road driving application of 4D/RCS

John A. Horst; Anthony J. Barbera; Craig I. Schlenoff; David W. Aha

Sensory processing for real-time, complex, and intelligent control systems is costly, so it is important to perform only the sensory processing required by the task. In this paper, we describe a straightforward metric for precisely defining sensory processing requirements. We then apply that metric to a complex, real-world control problem: autonomous on-road driving. To determine these requirements, the system designer must precisely and completely define 1) the system behaviors, 2) the world model situations that the system behaviors require, 3) the world model entities needed to generate all those situations, and 4) the resolutions, accuracy tolerances, detection timing, and detection distances required of all world model entities.
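The four-part requirements definition can be captured in a small data structure together with a check against a candidate sensor specification. All field names and numbers below are illustrative assumptions, not values from the paper.

```python
# Sketch of one entry in the sensory-processing requirements metric:
# for each world-model entity a behavior needs, record the resolution,
# accuracy tolerance, detection timing, and detection distance it
# requires, then check a sensor spec against that entry.
requirement = {
    "entity": "oncoming vehicle",
    "resolution_m": 0.5,            # finest spatial detail needed
    "accuracy_tolerance_m": 1.0,    # allowed position error
    "detection_time_s": 0.2,        # how quickly it must be reported
    "detection_distance_m": 150.0,  # how far away it must be seen
}

def sensor_meets(req, sensor):
    """True if a sensor specification satisfies one entity requirement."""
    return (sensor["resolution_m"] <= req["resolution_m"]
            and sensor["accuracy_m"] <= req["accuracy_tolerance_m"]
            and sensor["latency_s"] <= req["detection_time_s"]
            and sensor["range_m"] >= req["detection_distance_m"])

ladar = {"resolution_m": 0.2, "accuracy_m": 0.5,
         "latency_s": 0.1, "range_m": 200.0}
print(sensor_meets(requirement, ladar))  # True
```

Enumerating such entries across all behaviors and situations yields the complete sensory processing budget the paper argues for.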


Robotics and Industrial Inspection | 1983

Sensor Robotics in the National Bureau of Standards

James S. Albus; Anthony J. Barbera; M. L. Fitzgerald

For robots to operate effectively in the partially unconstrained environment of manufacturing, they must be equipped with control systems that have sensory capabilities. This paper describes a control system that consists of three parallel cross-coupled hierarchies. First is a control hierarchy which decomposes high-level tasks into primitive actions. Second is a sensory processing hierarchy that analyzes data from the environment. Third is a world model hierarchy which generates expectations. These are compared against the sensory data at each level of the sensory processing hierarchy. Deviations between expected and observed data are used by the control hierarchy to modify its task decomposition strategies so as to generate sensory-interactive goal-directed behavior. This system has been implemented on a research robot, using a network of microcomputers and a real-time vision system mounted on the robot wrist.
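The cross-coupling between the three hierarchies can be sketched in a single step: the world model supplies an expectation, the sensory hierarchy supplies an observation, and the deviation between them drives the control hierarchy's choice. The threshold and names are illustrative, not from the paper.

```python
# Sketch of the expectation/observation comparison described above:
# a large deviation between the world model's prediction and the
# sensed value triggers a change of task decomposition strategy.
# The tolerance value is an illustrative placeholder.
def compare(expected_pos, observed_pos, tol=0.01):
    deviation = abs(expected_pos - observed_pos)
    # Within tolerance: keep executing; outside it: adjust strategy.
    return "replan" if deviation > tol else "continue"

print(compare(0.50, 0.55))  # replan
```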

Collaboration


Dive into Anthony J. Barbera's collaborations.

Top Co-Authors

James S. Albus
National Institute of Standards and Technology

Craig I. Schlenoff
National Institute of Standards and Technology

John A. Horst
National Institute of Standards and Technology

David W. Aha
United States Naval Research Laboratory

Stephen B. Balakirsky
Georgia Tech Research Institute

Elena R. Messina
National Institute of Standards and Technology

John Evans
National Institute of Standards and Technology

M. L. Fitzgerald
National Institute of Standards and Technology

Charles R. McLean
National Institute of Standards and Technology

Harry A. Scott
National Institute of Standards and Technology