Publications


Featured research published by Stephen H. Lane.


Automatica | 1988

Flight control design using non-linear inverse dynamics

Stephen H. Lane; Robert F. Stengel

Aircraft in extreme flight conditions such as stalls and spins experience nonlinear forces and moments generated from high angles of attack and high angular rates. Flight control systems based upon nonlinear inverse dynamics offer the potential for providing improved levels of safety and performance in these flight conditions over the competing designs developed using linearizing assumptions. Inverse dynamics are generated for specific command variable sets of a 12-state nonlinear aircraft model to develop a control system that provides satisfactory response over the entire flight envelope. Detailed descriptions of the inertial dynamic and aerodynamic models are given, and it is shown how the command variable sets are altered as functions of the system state to add stall prevention features to the system. Simulation results are presented for various mission objectives over a range of flight conditions to confirm the effectiveness of the design.
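
As a rough illustration of the inverse-dynamics idea (not the paper's 12-state aircraft model), the sketch below cancels an assumed control-affine nonlinearity and imposes linear error dynamics on a hypothetical two-state system; the functions f and g and the gains kp, kd are illustrative assumptions.

```python
# Minimal sketch of a nonlinear-inverse-dynamics (feedback linearization)
# control law for a toy control-affine system xdot = f(x) + g(x)*u.
# f, g and the gains are assumptions for illustration, not the aircraft model.
import numpy as np

def f(x):
    # assumed nonlinear drift dynamics
    return np.array([x[1], -np.sin(x[0]) - 0.2 * x[1]])

def g(x):
    # assumed control effectiveness (must be invertible along the input path)
    return np.array([[0.0], [1.0 + 0.1 * np.cos(x[0])]])

def nid_control(x, x_cmd, kp=4.0, kd=2.0):
    """Make the command variable x[0] follow x_cmd by specifying desired
    linear error dynamics and cancelling the plant nonlinearity."""
    v = kp * (x_cmd - x[0]) - kd * x[1]   # desired acceleration
    return (v - f(x)[1]) / g(x)[1, 0]     # invert the input path

# forward-Euler simulation of the closed loop
x, dt = np.array([1.0, 0.0]), 0.01
for _ in range(500):
    u = nid_control(x, x_cmd=0.0)
    x = x + dt * (f(x) + g(x).flatten() * u)
print("final state:", x)                  # settles near the commanded value
```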


IEEE Control Systems Magazine | 1992

Theory and development of higher-order CMAC neural networks

Stephen H. Lane; David A. Handelman; Jack Gelfand

The cerebellar model articulation controller (CMAC) neural network is capable of learning nonlinear functions extremely quickly due to the local nature of its weight updating. The rectangular shape of CMAC receptive field functions, however, produces discontinuous (staircase) function approximations without inherent analytical derivatives. The ability to learn both functions and function derivatives is important for the development of many online adaptive filter, estimation, and control algorithms. It is shown that use of B-spline receptive field functions in conjunction with more general CMAC weight addressing schemes allows higher-order CMAC neural networks to be developed that can learn both functions and function derivatives. This also allows hierarchical and multilayer CMAC network architectures to be constructed that can be trained using standard error back-propagation learning techniques.
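
A minimal sketch of the higher-order CMAC idea, assuming a one-dimensional input, eight overlapping layers, and triangular (order-2 B-spline) receptive fields rather than the paper's exact weight-addressing scheme; the network learns a function and returns an approximate derivative alongside it.

```python
# Sketch of a 1-D CMAC with smooth (B-spline-like) receptive fields instead
# of rectangular ones.  Layer count, offsets and the triangular basis are
# illustrative assumptions, not the paper's exact addressing scheme.
import numpy as np

class SplineCMAC:
    def __init__(self, n_layers=8, n_cells=20, lo=0.0, hi=2 * np.pi):
        self.L, self.lo = n_layers, lo
        self.width = (hi - lo) / n_cells              # cell width
        self.offsets = np.arange(n_layers) / n_layers # layer shifts
        self.w = np.zeros((n_layers, n_cells + 2))    # local weight table

    def _basis(self, x):
        """Yield (layer, cell index, activation, d activation / dx)."""
        acts = []
        for l in range(self.L):
            u = (x - self.lo) / self.width + self.offsets[l]
            i = int(np.floor(u))
            r = u - i                                  # position within cell
            acts.append((l, i, 1.0 - abs(2 * r - 1.0),
                         -np.sign(2 * r - 1.0) * 2.0 / self.width))
        return acts

    def predict(self, x):
        acts = self._basis(x)
        y  = sum(self.w[l, i] * a  for l, i, a, _  in acts) / self.L
        dy = sum(self.w[l, i] * da for l, i, _, da in acts) / self.L
        return y, dy                                   # function and slope

    def train(self, x, target, lr=0.2):
        err = target - self.predict(x)[0]
        for l, i, a, _ in self._basis(x):
            self.w[l, i] += lr * err * a / self.L      # local LMS update

net = SplineCMAC()
for _ in range(3000):
    x = np.random.uniform(0.0, 2.0 * np.pi)
    net.train(x, np.sin(x))
y, dy = net.predict(1.0)
print(y, dy)   # approximate value and slope of sin at x = 1.0 (slope is noisier)
```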


IEEE Control Systems Magazine | 1990

Integrating neural networks and knowledge-based systems for intelligent robotic control

David A. Handelman; Stephen H. Lane; Jack Gelfand

A methodology is presented for integrating artificial neural networks and knowledge-based systems for the purpose of robotic control. The integration is patterned after models of human motor skill acquisition. The initial control task chosen to demonstrate the integration technique involves teaching a two-link manipulator how to make a specific type of swing. A three-level task hierarchy is defined consisting of low-level reflexes, reflex modulators, and an execution monitor. The rule-based execution monitor first determines how to make a successful swing using rules alone. It then teaches cerebellar model articulation controller (CMAC) neural networks how to accomplish the task by having them observe rule-based task execution. Following initial training, the execution monitor continuously evaluates neural network performance and re-engages swing-maneuver rules whenever changes in the manipulator or its operating environment necessitate retraining of the networks. Simulation results show the interaction between rule-based and network-based system components during various phases of training and supervision.
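
The supervise-observe-hand-off pattern might be pictured roughly as follows, with a hypothetical one-dimensional plant and a single adaptable gain standing in for the two-link manipulator and the CMAC networks; the thresholds are also assumptions.

```python
# Hedged sketch of the execution-monitor pattern: rules act first, a
# "network" learns by watching them, and the rules are re-engaged whenever
# tracking error grows.  Plant, network and thresholds are illustrative.
import numpy as np

class Plant1D:
    """Toy plant: state moves with the commanded action plus mild drift."""
    def __init__(self):
        self.x = 2.0
    def step(self, u):
        self.x += 0.1 * u - 0.02 * self.x
        return self.x, abs(self.x)          # next state, tracking error

class RuleController:
    def act(self, x):
        return -np.sign(x)                  # rule: push the state toward zero

class TinyNet:
    """Stand-in for a CMAC: one adaptable gain trained by observation."""
    def __init__(self):
        self.k, self.samples = 0.0, 0
    def learn(self, x, u_teacher):
        self.k += 0.1 * (u_teacher - self.k * x) * x   # LMS on (x, u) pairs
        self.samples += 1
    def act(self, x):
        return self.k * x

plant, rules, net = Plant1D(), RuleController(), TinyNet()
use_rules = True
for _ in range(200):
    u = rules.act(plant.x) if use_rules else net.act(plant.x)
    if use_rules:
        net.learn(plant.x, u)               # network observes rule-based control
    _, err = plant.step(u)
    # hand off once the network has seen enough examples; re-engage the
    # rules whenever tracking error grows too large
    use_rules = net.samples < 50 or err > 1.0
print("final state:", plant.x)
```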


international conference on robotics and automation | 1996

Synergy-based learning of hybrid position/force control for redundant manipulators

Vijaykumar Gullapalli; Jack Gelfand; Stephen H. Lane; Wade W. Wilson

This paper describes an intelligent control architecture designed to endow robots with human-like capabilities and reports experimental results that demonstrate the utility of this architecture in controlling a redundant dynamic manipulator in a hybrid position/force control task. Motor synergies, which arise when control of a subset of the available degrees of freedom is coupled and coordinated to accomplish specific task sub-goals, are used to simplify the problem of controlling redundant systems by reducing the dimensionality of the control space. Using synergies as a basis control set gives the controller the general ability to execute novel tasks in unstructured environments. In addition, the rapid learning capabilities of the controller permit control to be refined into skilled performance with practice.
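
The dimensionality-reduction role of synergies can be sketched in a few lines; the four-joint arm and the particular synergy matrix below are illustrative assumptions, not the experimental manipulator.

```python
# Minimal sketch of motor synergies as a reduced basis for a redundant arm:
# joint-space commands are formed as a weighted sum of a few fixed synergy
# vectors, so the learner only tunes the low-dimensional weights.
import numpy as np

# each column couples a subset of joints into one coordinated pattern
S = np.array([[ 1.0,  0.0],
              [ 0.5,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -0.5]])            # shape (n_joints=4, n_synergies=2)

def joint_command(c):
    """Map low-dimensional synergy activations c to joint velocities."""
    return S @ c                        # the control space is 2-D, not 4-D

# the controller/learner searches only over the 2 synergy weights
c = np.array([0.3, -0.1])
print(joint_command(c))                 # coordinated 4-joint command
```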


international conference on robotics and automation | 1989

Integrating neural networks and knowledge-based systems for robotic control

David A. Handelman; Stephen H. Lane; Jack Gelfand

The authors address the issue of integrating neural networks and knowledge-based systems for the purpose of robotic manipulation. The control task chosen to demonstrate the integration technique involves teaching a two-link manipulator how to make a tennis-like swing. A three-level task hierarchy is defined consisting of low-level reflexes, reflex modulators, and an execution monitor. The rule-based execution monitor first determines how to make a successful swing using rules alone. It then teaches a neural network how to accomplish the task by having it observe rule-based task execution. Following initial training, the execution monitor continuously evaluates neural network performance and re-engages swing-maneuver rules whenever changes in the manipulator or its operating environment necessitate retraining of the network. Simulation results show the interaction between rule-based and network-based system components during various phases of training and supervision.


Archive | 1993

Fast Sensorimotor Skill Acquisition based on Rule-Based Training of Neural Networks

David A. Handelman; Stephen H. Lane

Complex, yet efficient, sensorimotor responses can be learned by an individual if given verbal explanations of how to accomplish a task, examples of typical motions involved, and time to practice. As designers of robot control systems, we aim to emulate characteristics of human-to-human skill transfer, and subsequent improvements in computational efficiency, in order to maximize ultimate robot capability while minimizing the amount of design effort required to obtain it. Our approach to robotic skill acquisition involves transitioning between declarative and reflexive forms of processing, implemented using knowledge-based systems and neural networks. A pole-balancing problem demonstrates how a rule-based control law can be used to train neural networks for fast and dramatic improvements in system performance.
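
A hedged sketch of the kind of rule-based pole-balancing law that could serve as the teacher in such a scheme; the switching rule, force levels, and simplified single-angle dynamics are assumptions, not the chapter's setup.

```python
# Toy rule-based pole-balancing law: bang-bang switching on a combination of
# pole angle and angular rate, with a small deadband near upright.  The
# thresholds, force magnitude and single-angle model are illustrative.
import math

def rule_based_push(theta, theta_dot):
    """Bang-bang rules: apply a corrective push against the fall."""
    if abs(theta) < 0.02 and abs(theta_dot) < 0.05:
        return 0.0                                   # close enough: coast
    return 15.0 if (theta + 0.5 * theta_dot) > 0 else -15.0

# crude single-angle pole model (assumed): gravity vs. corrective push
theta, theta_dot, dt = 0.2, 0.0, 0.02
for _ in range(500):
    force = rule_based_push(theta, theta_dot)
    theta_ddot = 9.81 * math.sin(theta) - 0.2 * force
    theta_dot += dt * theta_ddot
    theta += dt * theta_dot
print(round(theta, 3))                               # pole held near upright
```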


international symposium on intelligent control | 1988

A neural network computational map approach to reflexive motor control

Stephen H. Lane; David A. Handelman; Jack Gelfand

Using the neural basis of human motor control as a guide, it was possible to develop a control strategy based on the localized structure of reflex arcs, antagonistic actuation, and the encoding of movement by neuronal populations. Starting with a dynamic joint model consisting of an agonist-antagonist pair of actuators with muscle-like properties, it is shown that transitions from one posture to another can be accomplished by adjusting the steady-state open-loop stiffness of the opposing muscle pair and modulating the reflex gains as functions of the system state to shape the transient response. A computational map neural network paradigm is used to calculate time-varying reflex gains that move the system towards the direction of minimum error. Simulation results show that desired phase-plane trajectories can be tracked fairly accurately using a reasonable number of repetitions to learn the motion.
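
The agonist-antagonist stiffness idea can be sketched as below (the computational-map modulation of reflex gains is omitted); the spring-like muscle model, gains, damping, and unit inertia are illustrative assumptions.

```python
# Sketch of an agonist-antagonist joint with muscle-like spring properties:
# the activation difference sets the equilibrium posture and co-contraction
# sets the joint stiffness.  All parameters here are assumed for illustration.
def muscle_torque(theta, a_ag, a_ant, k=5.0):
    """Opposing spring-like actuators pulling the joint in opposite
    directions; net torque vanishes at the activation-weighted equilibrium."""
    return k * a_ag * (1.0 - theta) - k * a_ant * (1.0 + theta)

# posture transition: set new open-loop activations and let the joint settle
theta, theta_dot, dt = -0.5, 0.0, 0.01
a_ag, a_ant = 0.8, 0.2                        # net flexion command
for _ in range(1000):
    tau = muscle_torque(theta, a_ag, a_ant)
    theta_dot += dt * (tau - 0.8 * theta_dot) # unit inertia, viscous damping
    theta += dt * theta_dot
# torques balance where theta* = (a_ag - a_ant) / (a_ag + a_ant)
print(round(theta, 3), "vs predicted", (a_ag - a_ant) / (a_ag + a_ant))
```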


Archive | 1993

Modulation of Robotic Motor Synergies Using Reinforcement Learning Optimization

Stephen H. Lane; David A. Handelman; Jack Gelfand

It is thought that the brain produces coordinated action by recruiting, suppressing, and/or modulating appropriate sets of coordinative structures in the spinal cord. The work presented in this paper examines the ability of robotic systems to produce similar behavior through the modulation of motor synergy strengths using central pattern generator neural networks and reinforcement learning optimization. The motor synergies employed are forward kinematic approximations based on the Berkinblitt model of the spinal frog wiping reflex. The objective of the reinforcement learning optimization is to modulate the synergy coefficient strengths in order to produce skilled motions that can be generalized across space and time. Simulation results demonstrate the acquisition of robotic skills associated with minimum energy cost functions.
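
A toy sketch of the modulation idea, assuming a two-output oscillator, a fixed three-joint synergy matrix, a contrived energy-plus-goal cost, and a simple perturb-and-keep search standing in for the paper's reinforcement learning formulation.

```python
# Sketch of modulating synergy strengths with a reinforcement-style search:
# a central-pattern-generator oscillator drives fixed synergies, and the
# synergy coefficients are perturbed and kept only when a toy cost improves.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
cpg = np.stack([np.sin(t), np.cos(t)])              # 2 rhythmic CPG outputs
S = np.array([[1.0, 0.2], [0.4, 1.0], [0.0, 0.6]])  # 3 joints, 2 synergies

def cost(c):
    """Toy 'energy' cost: squared joint velocities of the generated motion
    plus a penalty for missing an assumed end posture of 0.5 on joint 1."""
    q = S @ (c[:, None] * cpg)                      # joint trajectories
    effort = np.sum(np.diff(q, axis=1) ** 2)
    goal = (q[0, -1] - 0.5) ** 2
    return effort + 10.0 * goal

c = np.array([1.0, 1.0])                            # synergy strengths
best = cost(c)
for _ in range(500):                                # stochastic hill climb
    trial = c + 0.05 * rng.standard_normal(2)       # perturb coefficients
    if cost(trial) < best:                          # keep only improvements
        c, best = trial, cost(trial)
print(c, best)
```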


Applications of Artificial Neural Networks | 1990

Can robots learn like people do?

Stephen H. Lane; David A. Handelman; Jack Gelfand

This paper describes an approach to robotic control patterned after models of human skill acquisition and the organization of the human motor control system. The intent of the approach is to develop autonomous robots capable of learning complex tasks in unstructured environments through rule-based inference and self-induced practice. Features of the human motor control system emulated include a hierarchical and modular organization, antagonistic actuation, and multi-joint motor synergies. Human skill acquisition is emulated using declarative and reflexive representations of knowledge, feedback and feedforward implementations of control, and attentional mechanisms. Rule-based systems acquire rough-cut task execution and supervise the training of neural networks during the learning process. After the neural networks become capable of controlling system operation, reinforcement learning is used to further refine the system performance. The research described is interdisciplinary and addresses fundamental issues in learning and adaptive control, dexterous manipulation, redundancy management, knowledge-based system and neural network applications to control, and the computational modelling of cognitive and motor skill acquisition.


american control conference | 1987

Nonlinear Inverse Dynamics Control Laws - A Sampled Data Approach

Stephen H. Lane; Robert F. Stengel

A sampled-data approach for the implementation of Nonlinear Inverse Dynamics (NID) control laws in real time is presented. The control laws developed place the same number of poles as their continuous-time counterparts, take into account the system dynamics in between the sample points, and embed the computational delays associated with the inverse calculations directly into their design.
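
One way to picture the sampled-data flavour, for a hypothetical scalar plant: the inverse computation for the next sample is started one period early from a one-step state prediction, so the computational delay is embedded in the loop rather than ignored. The plant model, prediction, and gain below are assumptions for illustration.

```python
# Sketch of a sampled-data inverse-dynamics loop with the one-sample
# computational delay absorbed by a state prediction.  Scalar toy plant.
import numpy as np

dt = 0.05

def f(x):            # assumed nonlinear plant: xdot = f(x) + u
    return -np.sin(x)

def predict(x, u):   # one-step model used to absorb the compute delay
    return x + dt * (f(x) + u)

def inverse_law(x_pred, x_cmd, k=2.0):
    """Cancel the predicted nonlinearity and impose first-order error
    dynamics at the *next* sample instant."""
    return -f(x_pred) + k * (x_cmd - x_pred)

x, x_cmd = 1.5, 0.0
u_next = inverse_law(x, x_cmd)                 # computed during the previous period
for _ in range(200):
    u = u_next                                 # apply last period's result
    x = x + dt * (f(x) + u)                    # plant advances one sample
    u_next = inverse_law(predict(x, u), x_cmd) # start the next inverse calculation now
print(round(x, 4))                             # settles near the command
```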
