

Publication


Featured research published by Gintaras Vincent Puskorius.


IEEE Transactions on Neural Networks | 1994

Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks

Gintaras Vincent Puskorius; Lee A. Feldkamp

Although the potential of the powerful mapping and representational capabilities of recurrent network architectures is generally recognized by the neural network research community, recurrent neural networks have not been widely used for the control of nonlinear dynamical systems, possibly due to the relative ineffectiveness of simple gradient descent training algorithms. Developments in the use of parameter-based extended Kalman filter algorithms for training recurrent networks may provide a mechanism by which these architectures will prove to be of practical value. This paper presents a decoupled extended Kalman filter (DEKF) algorithm for training of recurrent networks with special emphasis on application to control problems. We demonstrate in simulation the application of the DEKF algorithm to a series of example control problems ranging from the well-known cart-pole and bioreactor benchmark problems to an automotive subsystem, engine idle speed control. These simulations suggest that recurrent controller networks trained by Kalman filter methods can combine the traditional features of state-space controllers and observers in a homogeneous architecture for nonlinear dynamical systems, while simultaneously exhibiting less sensitivity than do purely feedforward controller networks to changes in plant parameters and measurement noise.
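The Kalman-filter training described here treats the network's weights as the state to be estimated. A minimal sketch of a single global-EKF training step, with names, shapes, and the interface invented for illustration rather than taken from the paper:

```python
import numpy as np

def ekf_weight_update(w, P, H, err, R):
    """One global-EKF training step for a network's weight vector.

    w   : (n,) current weights
    P   : (n, n) approximate error covariance of the weights
    H   : (m, n) Jacobian of network outputs w.r.t. weights
    err : (m,) target minus network output
    R   : (m, m) measurement-noise covariance
    """
    # Innovation covariance and Kalman gain
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    # Weight update and covariance update
    w_new = w + K @ err
    P_new = P - K @ H @ P
    return w_new, P_new
```

The decoupled (DEKF) variant of the paper additionally partitions the weights into groups so that only block-diagonal pieces of `P` are maintained.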


International Symposium on Neural Networks | 1991

Decoupled extended Kalman filter training of feedforward layered networks

Gintaras Vincent Puskorius; Lee A. Feldkamp

Presents a training algorithm for feedforward layered networks based on a decoupled extended Kalman filter (DEKF). The authors present an artificial process noise extension to DEKF that increases its convergence rate and assists in the avoidance of local minima. Computationally efficient formulations for two particularly natural and useful cases of DEKF are given. Through a series of pattern classification and function approximation experiments, three members of the DEKF family are compared with one another and with standard backpropagation (SBP). These studies demonstrate that the judicious grouping of weights, along with the use of artificial process noise in DEKF, results in input-output mapping performance that is comparable to that of the global extended Kalman filter algorithm and often superior to SBP, while requiring significantly fewer presentations of training data than SBP and less overall training time than either of these procedures.
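The decoupling idea can be sketched as follows: weights are partitioned into groups, each group carries its own covariance block, cross-group covariances are dropped, and artificial process noise enters as a small diagonal term added back into each block. This is a hedged illustration under assumed shapes and names (`dekf_update`, `Q_scale`, and the group interface are invented for the sketch), not the paper's implementation:

```python
import numpy as np

def dekf_update(groups, H_list, err, R, Q_scale=1e-4):
    """One decoupled-EKF step: each weight group keeps its own covariance
    block; cross-group covariances are ignored.

    groups  : list of (w_i, P_i) pairs, one per weight group
    H_list  : list of (m, n_i) Jacobian blocks, one per group
    err     : (m,) output error vector
    R       : (m, m) measurement-noise covariance
    Q_scale : artificial process noise added to each P_i (aids convergence
              and helps avoid local minima, per the abstract)
    """
    # Global innovation covariance assembled from the decoupled blocks
    S = R.copy()
    for (w, P), H in zip(groups, H_list):
        S += H @ P @ H.T
    S_inv = np.linalg.inv(S)
    updated = []
    for (w, P), H in zip(groups, H_list):
        K = P @ H.T @ S_inv                               # group Kalman gain
        w_new = w + K @ err
        P_new = P - K @ H @ P + Q_scale * np.eye(len(w))  # artificial process noise
        updated.append((w_new, P_new))
    return updated
```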


Proceedings of the IEEE | 1998

A signal processing framework based on dynamic neural networks with application to problems in adaptation, filtering, and classification

Lee A. Feldkamp; Gintaras Vincent Puskorius

We present a coherent neural-net-based framework for solving various signal processing problems. It relies on the assertion that time-lagged recurrent networks possess the necessary representational capabilities to act as universal approximators of nonlinear dynamical systems. This applies to system identification, time-series prediction, nonlinear filtering, adaptive filtering, and temporal pattern classification. We address the development of models of nonlinear dynamical systems, in the form of time-lagged recurrent neural nets, which can be used without further training. We employ a weight update procedure based on the extended Kalman filter (EKF). Against the tendency for a net to forget earlier learning as it processes new examples, we develop a technique called multistream training. We demonstrate our framework by applying it to four problems. First, we show that a single time-lagged recurrent net can be trained to produce excellent one-time-step predictions for two different time series and also to be robust to severe errors in the input sequence. Second, we stably model a complex system containing significant process noise. The remaining two problems are drawn from real-world automotive applications. One involves input-output modeling of the dynamic behavior of a catalyst-sensor system which is exposed to an operating engine's exhaust stream; the other involves the real-time and continuous detection of engine misfire.
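A time-lagged recurrent network of the kind described feeds its hidden state back through a unit delay, giving the net internal memory for modeling dynamical systems. A minimal one-step sketch; the single-layer structure, the tanh nonlinearity, and all names are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def tlrn_step(x_t, h_prev, W_in, W_rec, W_out):
    """One step of a time-lagged recurrent network: the hidden state from
    the previous time step re-enters through recurrent weights W_rec."""
    h_t = np.tanh(W_in @ x_t + W_rec @ h_prev)  # recurrent hidden-state update
    y_t = W_out @ h_t                           # network output
    return y_t, h_t
```

Running the step over a sequence while carrying `h_t` forward yields the one-time-step predictions the abstract refers to.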


Proceedings of the IEEE | 1996

Dynamic neural network methods applied to on-vehicle idle speed control

Gintaras Vincent Puskorius; Lee A. Feldkamp; Leighton Ira Davis

The application of neural network techniques to the control of nonlinear dynamical systems has been the subject of substantial interest and research in recent years. In our own work, we have concentrated on extending the dynamic gradient formalism as established by Narendra and Parthasarathy (1990, 1991), and on employing it for applications in the control of nonlinear systems, with specific emphasis on automotive subsystems. The results we have reported to date, however, have been based exclusively upon simulation studies. In this paper, we establish that dynamic gradient training methods can be successfully used for synthesizing neural network controllers directly on instances of real systems. In particular we describe the application of dynamic gradient methods for training a time-lagged recurrent neural network feedback controller for the problem of engine idle speed control on an actual vehicle, discuss hardware and software issues, and provide representative experimental results.


International Conference on Robotics and Automation | 1987

Global calibration of a robot/vision system

Gintaras Vincent Puskorius; Lee A. Feldkamp

The success of industrial robotic applications involving off-line programming and sensory-based guidance will depend significantly upon the positioning accuracy of robots, the accuracy of sensing devices, and the software coupling between the robot controller and the sensors. These accuracy and coupling issues are addressed by a methodology developed for the automatic global calibration of a robot/vision system. The methodology employs a stereo pair of CCD array cameras, which are mounted to the end-effector of a six-axis revolute robot arm. With an automatic procedure, three-dimensional coordinate measurements are made, relative to the robot's base frame, of a single spherical point in space at numerous and widely varying joint-angle configurations. Based upon a modified Denavit-Hartenberg robot kinematic model, both geometric and nongeometric robotic errors are inferred simultaneously with the geometric errors of the vision system using an iterative least-squares algorithm. Preliminary results indicate an approximately threefold improvement in positioning accuracy of the robot arm.
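The iterative least-squares step in the abstract can be sketched as a generic Gauss-Newton iteration over the combined robot/camera parameters. The interface below is hypothetical and omits the kinematic and camera models themselves; it only shows the iteration structure:

```python
import numpy as np

def gauss_newton(residual, jacobian, params, iters=20):
    """Generic iterative least-squares (Gauss-Newton) loop: adjust the
    model parameters to minimize the squared residuals between predicted
    and measured point positions.

    residual : params -> (m,) measurement residuals
    jacobian : params -> (m, n) d(residual)/d(params)
    """
    for _ in range(iters):
        r = residual(params)
        J = jacobian(params)
        # Normal-equations solve for the parameter correction
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        params = params + delta
    return params
```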


International Symposium on Neural Networks | 1994

Training controllers for robustness: multi-stream DEKF

Lee A. Feldkamp; Gintaras Vincent Puskorius

Kalman-filter-based training has been shown to be advantageous in many training applications. By its nature, extended Kalman filter (EKF) training is realized with instance-by-instance updates, rather than by performing updates at the end of a batch of training instances or patterns. Motivated originally by the desire to be able to base an update on a collection of instances, rather than just one, we recognized that the simple construct of multiple streams of training examples allows a batch-like update to be performed without violating an underlying principle of Kalman training, viz. that the approximate error covariance matrix remain consistent with the updates that have actually been performed. In this paper, we present this construct and show how it may be used to train robust controllers, i.e. controllers that perform well for a range of plants.
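The multi-stream construct can be sketched as stacking the errors and Jacobians from several streams into one tall measurement, so that a single Kalman update covers all of them at once. This is an illustrative sketch; `forward` and `jacobian` stand for a hypothetical network interface, not the paper's code:

```python
import numpy as np

def multistream_errors(streams, forward, jacobian, w):
    """Stack errors and Jacobians from several training streams so a
    single EKF step performs a batch-like update while the error
    covariance stays consistent with the update actually performed.

    streams  : list of (x, target) pairs, one per stream
    forward  : forward(w, x) -> network output
    jacobian : jacobian(w, x) -> d(output)/d(w)
    """
    errs, jacs = [], []
    for x, target in streams:
        errs.append(target - forward(w, x))
        jacs.append(jacobian(w, x))
    # One tall error vector and Jacobian feed a single Kalman update
    return np.concatenate(errs), np.vstack(jacs)
```

For robustness training, the streams would be drawn from different plants, so one update reflects performance across the whole range.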


International Symposium on Neural Networks | 1992

Model reference adaptive control with recurrent networks trained by the dynamic DEKF algorithm

Gintaras Vincent Puskorius; Lee A. Feldkamp

Two fundamental extensions of the dynamic backpropagation (DBP) gradient descent procedure which generally result in faster convergence times and higher quality solutions are presented. The decoupled extended Kalman filter training algorithm (DEKF) for feedforward layered networks is extended to the training of neural controllers in a dynamic indirect adaptive control scheme; the resulting algorithm is called dynamic DEKF (or DDEKF). The DDEKF neural controller training algorithm is extended to include control network architectures with explicit internal feedback connections. It is demonstrated that the DDEKF algorithm has computational complexity and requirements that are similar to those of DBP for control networks with a large number of recurrent connections. The use of these extensions for a model reference adaptive control (MRAC) problem in which the example dynamical system is highly nonlinear and does not possess a unique inverse is presented.


Intelligent Vehicles Symposium | 1992

Neural network modeling and control of an anti-lock brake system

Leighton Ira Davis; Gintaras Vincent Puskorius; F. Yuan; Lee A. Feldkamp

The authors have previously described neural-network-based methods for modeling automotive systems and training near-optimal controllers. These methods are based on the premise that the physical system can be sufficiently instrumented during network training so that accurate evaluation of the effect of control actions is possible. In certain systems, such as automotive anti-lock braking (ABS), it may be costly to obtain the detailed data that would be required to exploit the full capabilities of neural methods. The present paper reports an initial simulation-based study to determine the performance potential of controllers designed with these methods. Such studies will help determine whether the cost of carrying out neural training methods on actual systems is justified.


International Symposium on Neural Networks | 1992

Neural control systems trained by dynamic gradient methods for automotive applications

Lee A. Feldkamp; Gintaras Vincent Puskorius; Leighton Ira Davis; F. Yuan

The use of dynamic gradient-based training of neural controllers for automotive systems is illustrated. The authors use a recurrent structure that embeds an identification network and a neural controller and that properly treats both short- and long-term effects of controller weight changes. This results in an approximately optimal control strategy. Feedforward and hybrid feedforward-feedback neural controllers trained by dynamic backpropagation and a dynamic decoupled extended Kalman filter (DDEKF) are investigated. A quarter-car active suspension model is considered in both linear and nonlinear forms, and representative results are presented. Methods using higher-order information, e.g., DDEKF, are very effective in comparison to methods based exclusively upon gradient descent, e.g., dynamic backpropagation (DBP). The use of a recurrent structure for obtaining derivatives for controller training is illustrated.


Proceedings of SPIE | 1992

Recurrent network training with the decoupled-extended-Kalman-filter algorithm

Gintaras Vincent Puskorius; Lee A. Feldkamp

In this paper we describe the extension of our decoupled extended Kalman filter (DEKF) training algorithm to networks with internal recurrent (or feedback) connections; we call the resulting algorithm dynamic DEKF (or DDEKF for short). Analysis of DDEKF's computational complexity and empirical evidence suggest significant computational and performance advantages in comparison to training algorithms based exclusively upon gradient descent. We demonstrate DDEKF's effectiveness by training networks with recurrent connections for four different classes of problems. First, DDEKF is used to train a recurrent network that produces as its output a delayed copy of its input. Second, recurrent networks are trained by DDEKF to recognize sequences of events with arbitrarily long time delays between the events. Third, DDEKF is applied to the training of identification networks to act as models of the input-output behavior for nonlinear dynamical systems. We conclude the paper with a brief discussion of the extension of DDEKF to the training of neural controllers with internal feedback connections.

