David A. Handelman
Princeton University
Publications
Featured research published by David A. Handelman.
IEEE Control Systems Magazine | 1992
Stephen H. Lane; David A. Handelman; Jack Gelfand
The cerebellar model articulation controller (CMAC) neural network is capable of learning nonlinear functions extremely quickly due to the local nature of its weight updating. The rectangular shape of CMAC receptive field functions, however, produces discontinuous (staircase) function approximations without inherent analytical derivatives. The ability to learn both functions and function derivatives is important for the development of many online adaptive filter, estimation, and control algorithms. It is shown that use of B-spline receptive field functions in conjunction with more general CMAC weight addressing schemes allows higher-order CMAC neural networks to be developed that can learn both functions and function derivatives. This also allows hierarchical and multilayer CMAC network architectures to be constructed that can be trained using standard error back-propagation learning techniques.
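The local-update property, and the distinction between rectangular and B-spline receptive fields, are easy to see in a toy implementation. The sketch below is a minimal one-dimensional CMAC with conventional binary (rectangular) receptive fields; the class layout and parameters are illustrative assumptions rather than the paper's formulation, and the B-spline extension is only noted in the comments.

```python
# Minimal 1-D CMAC sketch (illustrative; parameter names and sizes are assumptions,
# not the authors' implementation). Each of the n_layers overlapping tilings
# activates one weight per input, so each training sample updates only a handful
# of weights -- the source of CMAC's fast local learning.
import numpy as np

class CMAC1D:
    def __init__(self, n_layers=8, n_cells=32, x_min=0.0, x_max=1.0, lr=0.2):
        self.n_layers = n_layers
        self.cell_width = (x_max - x_min) / n_cells
        self.x_min = x_min
        self.lr = lr
        # one weight row per tiling layer (sized generously to avoid hashing)
        self.w = np.zeros((n_layers, n_cells + n_layers))

    def _indices(self, x):
        # each layer's tiling is offset by a fraction of one cell width
        offsets = np.arange(self.n_layers) / self.n_layers
        return ((x - self.x_min) / self.cell_width + offsets).astype(int)

    def predict(self, x):
        idx = self._indices(x)
        return self.w[np.arange(self.n_layers), idx].sum()

    def train(self, x, target):
        err = target - self.predict(x)
        idx = self._indices(x)
        # local update: only the n_layers active weights change
        self.w[np.arange(self.n_layers), idx] += self.lr * err / self.n_layers

# With binary (rectangular) receptive fields as above, the learned function is
# piecewise constant. The paper's extension replaces the binary activations with
# B-spline basis functions over the same tilings, giving smooth outputs whose
# analytical derivatives follow from the B-spline derivatives.
net = CMAC1D()
for _ in range(2000):
    x = np.random.rand()
    net.train(x, np.sin(2 * np.pi * x))
print(round(net.predict(0.3), 3), "vs", round(np.sin(2 * np.pi * 0.3), 3))
```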
IEEE Control Systems Magazine | 1990
David A. Handelman; Stephen H. Lane; Jack Gelfand
A methodology is presented for integrating artificial neural networks and knowledge-based systems for the purpose of robotic control. The integration is patterned after models of human motor skill acquisition. The initial control task chosen to demonstrate the integration technique involves teaching a two-link manipulator how to make a specific type of swing. A three-level task hierarchy is defined consisting of low-level reflexes, reflex modulators, and an execution monitor. The rule-based execution monitor first determines how to make a successful swing using rules alone. It then teaches cerebellar model articulation controller (CMAC) neural networks how to accomplish the task by having them observe rule-based task execution. Following initial training, the execution monitor continuously evaluates neural network performance and re-engages swing-maneuver rules whenever changes in the manipulator or its operating environment necessitate retraining of the networks. Simulation results show the interaction between rule-based and network-based system components during various phases of training and supervision.
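The supervision scheme lends itself to a compact sketch. The toy plant, thresholds, and table-lookup stand-in for the CMAC networks below are assumptions made for illustration, not the authors' manipulator setup; the point is the monitor's observe, hand-over, and re-engage cycle.

```python
# Minimal, self-contained sketch of the supervision loop described above: the
# execution monitor lets a network learn by observing rule-based control, hands
# over once performance is acceptable, and re-engages the rules (and retraining)
# when an environment change exposes untrained situations. All details here are
# illustrative stand-ins, not the authors' two-link manipulator or CMAC code.
import numpy as np

class TableNet:
    """Crude CMAC stand-in: a coarse lookup table with purely local updates."""
    def __init__(self, n_bins=32, lo=-2.0, hi=2.0):
        self.w, self.lo, self.hi, self.n = np.zeros(n_bins), lo, hi, n_bins
    def _bin(self, x):
        return int(np.clip((x - self.lo) / (self.hi - self.lo) * self.n, 0, self.n - 1))
    def predict(self, x):
        return self.w[self._bin(x)]
    def train(self, x, u, lr=0.5):
        b = self._bin(x)
        self.w[b] += lr * (u - self.w[b])

def rules(x, setpoint=1.0):
    # declarative layer: crude bang-bang swing toward the setpoint
    return 0.5 if x < setpoint else -0.5

net, use_net = TableNet(), False
for episode in range(6):
    x = 0.0 if episode < 3 else -1.5        # operating-environment change at episode 3
    errs, mode = [], "network" if use_net else "rules"
    for _ in range(200):
        u = net.predict(x) if use_net else rules(x)
        if not use_net:
            net.train(x, u)                 # network learns by observing the rules
        x += 0.05 * u                       # toy first-order plant
        errs.append(abs(1.0 - x))
    settled = float(np.mean(errs[-50:]))
    use_net = settled < 0.2                 # execution monitor: trust or re-engage
    print(f"episode {episode} ({mode}): settling error = {settled:.3f}")
```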
Journal of Guidance, Control, and Dynamics | 1989
Robert F. Stengel; David A. Handelman
A technique for rule-based fault-tolerant flight control is presented. The objective is to define methods for designing control systems capable of accommodating a wide range of aircraft failures, including sensor, control, and structural failures. A software architecture that integrates quantitative analytical redundancy techniques and heuristic expert system concepts for the purpose of in-flight, real-time fault tolerance is described. The resultant controller uses a rule-based expert system approach to transform the problem of failure accommodation task scheduling and selection into a problem of search. Control system performance under sensor and control failures is demonstrated using linear discrete-time deterministic simulations of a tandem-rotor helicopter's dynamics. It is found that the rule-based control technique enhances existing redundancy management systems, providing smooth integration of symbolic and numeric computation, a search-based decision-making mechanism, straightforward system organization and debugging, an incremental growth capability, and inherent parallelism for computational speed.
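As a rough illustration of casting failure accommodation as rule application, the sketch below fires accommodation tasks from a tiny hypothetical rule base. The failures, rules, and task names are invented for the example; the paper's controller performs a genuine search over such rules for scheduling and selection, rather than the single forward-chaining pass shown here.

```python
# Illustrative rule-based failure-accommodation sketch (hypothetical rules and
# task names, not drawn from the paper). Detected failures are facts; each rule
# maps a pattern of facts to accommodation tasks to be scheduled.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    conditions: frozenset      # facts that must all hold
    actions: tuple             # accommodation tasks to schedule

RULES = [
    Rule("airspeed-sensor-fail", frozenset({"airspeed_invalid"}),
         ("switch_to_estimated_airspeed", "flag_sensor")),
    Rule("actuator-jam", frozenset({"elevator_unresponsive"}),
         ("redistribute_control_authority", "limit_maneuver_envelope")),
    Rule("dual-failure", frozenset({"airspeed_invalid", "elevator_unresponsive"}),
         ("engage_degraded_mode",)),
]

def accommodate(facts):
    """Single forward-chaining pass: fire every rule whose conditions hold and
    return the ordered list of accommodation tasks (the paper's controller would
    instead search over which rules to apply and in what order)."""
    schedule = []
    for rule in RULES:
        if rule.conditions <= facts:
            schedule.extend(rule.actions)
    return schedule

print(accommodate({"airspeed_invalid", "elevator_unresponsive"}))
```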
American Control Conference | 1987
David A. Handelman; Robert F. Stengel
A method for control employing rule-based search is reviewed, and a Rule-Based Controller achieving economical real-time performance is described. Code optimization, in the form of LISP-to-Pascal knowledge base translation, provides real-time search execution speed and a processing environment enabling highly integrated symbolic and numeric computation. With a multiprocessor software architecture specifying rule-based protocol for control task communication, and a hardware architecture providing concurrent implementation within a multi-microprocessor system, the controller realizes a set of cooperating real-time expert systems. Based on experience gained through the design and implementation of a Rule-Based Flight Control System, the proposed approach appears applicable to a large class of complex control problems.
International Conference on Robotics and Automation | 1989
David A. Handelman; Stephen H. Lane; Jack Gelfand
The authors address the issue of integrating knowledge-based systems and artificial neural networks for the purpose of robotic manipulation. The control task chosen to demonstrate the integration technique involves teaching a two-link manipulator how to make a tennis-like swing. A three-level task hierarchy is defined consisting of low-level reflexes, reflex modulators, and an execution monitor. The rule-based execution monitor first determines how to make a successful swing using rules alone. It then teaches a neural network how to accomplish the task by having it observe rule-based task execution. Following initial training, the execution monitor continuously evaluates neural network performance and re-engages swing-maneuver rules whenever changes in the manipulator or its operating environment necessitate retraining of the network. Simulation results show the interaction between rule-based and network-based system components during various phases of training and supervision.
American Control Conference | 1988
David A. Handelman; Robert F. Stengel
This paper investigates how certain aspects of human learning can be used to characterize learning in intelligent adaptive control systems. Reflexive and declarative memory and learning are described. It is shown that model-based systems-theoretic adaptive control methods exhibit attributes of reflexive learning, whereas the problem-solving capabilities of knowledge-based systems of artificial intelligence are naturally suited for implementing declarative learning. Issues related to learning in knowledge-based control systems are addressed, with particular attention given to rule-based systems. A mechanism for real-time rule-based knowledge acquisition is suggested, and utilization of this mechanism within the context of failure diagnosis for fault-tolerant flight control is demonstrated.
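A minimal sketch of what run-time rule acquisition for failure diagnosis could look like is given below; the symptom names, promotion threshold, and rule format are hypothetical illustrations, not the mechanism proposed in the paper.

```python
# Hypothetical sketch of real-time rule-based knowledge acquisition: when repeated
# diagnoses link a symptom pattern to a confirmed failure, a new diagnostic rule
# is added at run time so future diagnosis is immediate (declarative knowledge
# acquired from experience). All names and thresholds are assumptions.
from collections import Counter

rule_base = {}                 # symptom pattern -> diagnosed failure
evidence = Counter()           # co-occurrence counts gathered during operation

def observe(symptoms, confirmed_failure, promote_after=3):
    """Record a symptom/failure co-occurrence and acquire a rule once the pairing
    has been confirmed often enough."""
    key = (frozenset(symptoms), confirmed_failure)
    evidence[key] += 1
    if evidence[key] >= promote_after and frozenset(symptoms) not in rule_base:
        rule_base[frozenset(symptoms)] = confirmed_failure

def diagnose(symptoms):
    return rule_base.get(frozenset(symptoms), "unknown: run full analytic redundancy check")

for _ in range(3):             # three occurrences of the same symptom/failure pairing
    observe({"pitch_rate_bias", "accel_mismatch"}, "pitch_gyro_failure")
print(diagnose({"pitch_rate_bias", "accel_mismatch"}))   # -> pitch_gyro_failure
```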
Archive | 1993
David A. Handelman; Stephen H. Lane
Complex, yet efficient, sensorimotor responses can be learned by an individual if given verbal explanations of how to accomplish a task, examples of typical motions involved, and time to practice. As designers of robot control systems, we aim to emulate characteristics of human-to-human skill transfer, and subsequent improvements in computational efficiency, in order to maximize ultimate robot capability while minimizing the amount of design effort required to obtain it. Our approach to robotic skill acquisition involves transitioning between declarative and reflexive forms of processing, implemented using knowledge-based systems and neural networks. A pole-balancing problem demonstrates how a rule-based control law can be used to train neural networks for fast and dramatic improvements in system performance.
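The pole-balancing demonstration translates naturally into a short sketch: a coarse rule-based law balances a simplified inverted pendulum while a linear stand-in for the neural network learns the mapping by observation and then takes over. The dynamics, rules, and learning rule below are simplifications assumed for illustration, not the authors' formulation.

```python
# Declarative-to-reflexive handoff on a simplified inverted pendulum (illustrative
# assumptions throughout). Coarse rules keep the pole up; a linear "network"
# learns the observed state-to-torque mapping and is then given control.
import numpy as np

G, L, DT = 9.81, 1.0, 0.02          # gravity, pole length, time step

def pendulum_step(theta, omega, u):
    # inverted pendulum on a fixed pivot; theta = 0 is upright, u is pivot torque
    omega += DT * ((G / L) * np.sin(theta) + u)
    theta += DT * omega
    return theta, omega

def rule_law(theta, omega):
    # declarative layer: a couple of coarse hand-written rules
    if abs(theta) > 0.2:
        return -40.0 * np.sign(theta)       # falling fast: push back hard
    return -20.0 * theta - 6.0 * omega      # near upright: gentle correction

w = np.zeros(2)                             # reflexive layer: u = w . [theta, omega]
use_network = False
for episode in range(4):
    theta, omega, worst = 0.1, 0.0, 0.0
    for _ in range(400):
        x = np.array([theta, omega])
        if use_network:
            u = float(w @ x)                # learned reflex in control
        else:
            u = rule_law(theta, omega)      # rules in control
            w += 0.5 * (u - w @ x) * x / (x @ x + 1e-6)   # learn by observation (NLMS)
        theta, omega = pendulum_step(theta, omega, u)
        worst = max(worst, abs(theta))
    print(f"episode {episode} ({'network' if use_network else 'rules'}): max |theta| = {worst:.3f}")
    use_network = episode >= 1              # hand control to the network after observing the rules
```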
International Symposium on Intelligent Control | 1988
Stephen H. Lane; David A. Handelman; Jack Gelfand
Using the neural basis of human motor control as a guide, it was possible to develop a control strategy based on the localized structure of reflex arcs, antagonistic actuation, and the encoding of movement by neuronal populations. Starting with a dynamic joint model consisting of an agonist-antagonist pair of actuators with muscle-like properties, it is shown that transitions from one posture to another can be accomplished by adjusting the steady-state open-loop stiffness of the opposing muscle pair and modulating the reflex gains as functions of the system state to shape the transient response. A computational map neural network paradigm is used to calculate time-varying reflex gains that move the system towards the direction of minimum error. Simulation results show that desired phase-plane trajectories can be tracked fairly accurately using a reasonable number of repetitions to learn the motion.
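The joint model and reflex-gain modulation can be illustrated with a few lines of simulation. In the sketch below, two opposing muscle-like springs set the equilibrium posture through their open-loop stiffnesses, and a state-dependent reflex gain damps the transient; the constants and the hand-written gain schedule are assumptions standing in for the learned computational map.

```python
# Antagonistic joint model with reflex-gain modulation (illustrative constants).
# The open-loop stiffness balance of the two "muscles" sets the new posture,
# and a state-dependent reflex gain shapes the transient response.
import numpy as np

I, DT = 0.05, 0.002                     # joint inertia (kg m^2), time step (s)

def muscle_torque(theta, k_ag, k_ant, lam_ag=1.0, lam_ant=-1.0):
    # agonist pulls toward lam_ag, antagonist toward lam_ant; spring-like tensions
    return k_ag * (lam_ag - theta) + k_ant * (lam_ant - theta)

def reflex_gain(theta, theta_des):
    # crude stand-in for the learned reflex-gain map: damp harder near the target
    return 0.4 + 2.0 * np.exp(-20.0 * (theta - theta_des) ** 2)

theta, omega, theta_des = -0.5, 0.0, 0.5
k_ag, k_ant = 3.0, 1.0                  # stiffness balance -> equilibrium at +0.5 rad
for step in range(2000):
    tau = muscle_torque(theta, k_ag, k_ant) - reflex_gain(theta, theta_des) * omega
    omega += DT * tau / I
    theta += DT * omega
    if step % 400 == 0:
        print(f"t = {step * DT:.1f} s  theta = {theta:+.3f} rad")
# equilibrium check: k_ag*(1 - theta) = k_ant*(theta + 1) -> theta = (k_ag - k_ant)/(k_ag + k_ant) = 0.5
```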
IFAC Proceedings Volumes | 1988
David A. Handelman; Robert F. Stengel
This paper addresses issues regarding the application of artificial intelligence techniques to real-time control. Advantages associated with knowledge-based programming are discussed. A proposed rule-based control technique is summarized and applied to the problem of automated aircraft emergency procedure execution. Although emergency procedures are by definition predominantly procedural, their numerous evaluation and decision points make a declarative representation of the knowledge they encode highly attractive, resulting in an organized and easily maintained software hierarchy. Simulation results demonstrate that real-time performance can be obtained using a microprocessor-based controller. It is concluded that a rule-based control system design approach may prove more useful than conventional methods under certain circumstances, and that declarative rules with embedded procedural code provide a sound basis for the construction of complex, yet economical, control systems.
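The phrase "declarative rules with embedded procedural code" can be made concrete with a small hypothetical emergency procedure: each rule states its applicability conditions declaratively, while its action is an embedded procedural step. The procedure steps, rule names, and aircraft-state fields below are invented for illustration, not taken from the paper.

```python
# Illustrative engine-fire procedure as declarative rules with embedded procedural
# actions (all fields and steps hypothetical). Each control cycle fires the first
# applicable rule; the conditions encode the procedure's decision points.
rules = [
    {
        "name": "throttle-affected-engine-to-idle",
        "when": lambda ac: ac["engine_fire"] and ac["throttle"] > 0.0,
        "do":   lambda ac: ac.update(throttle=0.0),
    },
    {
        "name": "shut-off-fuel",
        "when": lambda ac: ac["engine_fire"] and ac["throttle"] == 0.0 and ac["fuel_valve_open"],
        "do":   lambda ac: ac.update(fuel_valve_open=False),
    },
    {
        "name": "discharge-extinguisher",
        "when": lambda ac: ac["engine_fire"] and not ac["fuel_valve_open"] and not ac["bottle_fired"],
        "do":   lambda ac: ac.update(bottle_fired=True, engine_fire=False),
    },
]

aircraft = {"engine_fire": True, "throttle": 0.8, "fuel_valve_open": True, "bottle_fired": False}
while any(r["when"](aircraft) for r in rules):
    rule = next(r for r in rules if r["when"](aircraft))
    rule["do"](aircraft)           # embedded procedural code executes the step
    print("executed:", rule["name"])
```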
Archive | 1993
Stephen H. Lane; David A. Handelman; Jack Gelfand
It is thought that the brain produces coordinated action by recruiting, suppressing, and/or modulating appropriate sets of coordinative structures in the spinal cord. The work presented in this paper examines the ability of robotic systems to produce similar behavior through the modulation of motor synergy strengths using central pattern generator neural networks and reinforcement learning optimization. The motor synergies employed are forward kinematic approximations based on the Berkinblitt model of the spinal frog wiping reflex. The object of the reinforcement learning optimization is to modulate the synergy coefficient strengths in order to produce skilled motions that can be generalized across space and time. Simulation results demonstrate the acquisition of robotic skills associated with minimum energy cost functions.
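A toy version of synergy-strength modulation is sketched below: a joint trajectory is formed as a weighted superposition of fixed synergy patterns, and a simple reinforcement-style random search adjusts the coefficients toward a low-energy motion that still reaches the goal. The basis shapes, cost function, and search rule are illustrative assumptions, not the paper's CPG formulation.

```python
# Illustrative synergy-modulation sketch: the motion is a weighted sum of fixed
# coordinated patterns, and reinforcement-style random search tunes the weights
# to reach a goal with low effort. All specifics here are assumptions.
import numpy as np

t = np.linspace(0.0, 1.0, 100)
# two fixed "synergies": coordinated joint-angle patterns over the movement
synergies = np.stack([np.sin(np.pi * t),            # smooth reach-and-return
                      np.sin(2 * np.pi * t)])       # faster corrective pattern

def rollout(coeffs, goal=0.8):
    theta = coeffs @ synergies                       # superpose weighted synergies
    effort = np.sum(np.diff(theta) ** 2)             # crude energy proxy
    goal_err = (theta[50] - goal) ** 2               # pass through the goal mid-movement
    return 100.0 * goal_err + effort                 # cost to be minimized

rng = np.random.default_rng(0)
coeffs, best = np.array([0.0, 0.0]), np.inf
for trial in range(300):
    candidate = coeffs + 0.1 * rng.standard_normal(2)   # perturb synergy strengths
    cost = rollout(candidate)
    if cost < best:                                  # reinforcement: keep what improved
        coeffs, best = candidate, cost
print("learned coefficients:", np.round(coeffs, 2), " cost:", round(best, 4))
```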