Publication


Featured research published by Lee A. Feldkamp.


Journal of Biomechanics | 1994

The relationship between the structural and orthogonal compressive properties of trabecular bone

Robert W. Goulet; Steven A. Goldstein; Michael J. Ciarelli; Janet L. Kuhn; M. B. Brown; Lee A. Feldkamp

In this study, cubes of trabecular bone with a wide range of structural properties were scanned on a micro-computed tomography system to produce complete three-dimensional digitizations from which morphological and architectural parameters could be measured in a nondestructive manner. The cubes were then mechanically tested in uniaxial compression in three orthogonal directions and to failure in one direction to find the orthogonal tangent elastic moduli and ultimate strengths. After testing, the cubes were weighed and ashed to determine the apparent and ash densities. A high correlation between the basic stereologic measurements was found, indicating that there is a relationship between the amount of bone and number of trabeculae in cancellous bone. Regression analysis was used to estimate the modulus and ultimate strength; these regressions accounted for 68-90% of the variance in these measures. These relationships were dependent on the metaphyseal type and donor, with the modulus also dependent on the direction of testing. This indicates that the properties of the individual trabeculae, as well as their amount and organization, may be important in predicting the mechanical properties of cancellous bone.
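As a rough illustration of the regression analysis described in the abstract (not the paper's actual predictors or coefficients; all data and names below are synthetic), one can fit a power-law relation between modulus and apparent density by linear least squares on log-transformed data and report the fraction of variance explained:

```python
import numpy as np

# Synthetic illustration only: fit a made-up power law E = a * rho^b
# by linear least squares on log-transformed data and compute R^2.
# The paper's regressions explained 68-90% of variance; nothing here
# reproduces its actual measurements or coefficients.
rng = np.random.default_rng(0)
rho = rng.uniform(0.1, 0.6, size=50)                 # apparent density
E = 2000.0 * rho**1.8 * np.exp(rng.normal(0.0, 0.1, size=50))  # modulus

X = np.column_stack([np.ones_like(rho), np.log(rho)])
coef, *_ = np.linalg.lstsq(X, np.log(E), rcond=None)
a, b = np.exp(coef[0]), coef[1]                      # fitted prefactor and exponent

resid = np.log(E) - X @ coef
r2 = 1.0 - resid.var() / np.log(E).var()             # variance explained
```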


IEEE Transactions on Neural Networks | 1994

Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks

Gintaras Vincent Puskorius; Lee A. Feldkamp

Although the potential of the powerful mapping and representational capabilities of recurrent network architectures is generally recognized by the neural network research community, recurrent neural networks have not been widely used for the control of nonlinear dynamical systems, possibly due to the relative ineffectiveness of simple gradient descent training algorithms. Developments in the use of parameter-based extended Kalman filter algorithms for training recurrent networks may provide a mechanism by which these architectures will prove to be of practical value. This paper presents a decoupled extended Kalman filter (DEKF) algorithm for training of recurrent networks with special emphasis on application to control problems. We demonstrate in simulation the application of the DEKF algorithm to a series of example control problems ranging from the well-known cart-pole and bioreactor benchmark problems to an automotive subsystem, engine idle speed control. These simulations suggest that recurrent controller networks trained by Kalman filter methods can combine the traditional features of state-space controllers and observers in a homogeneous architecture for nonlinear dynamical systems, while simultaneously exhibiting less sensitivity than do purely feedforward controller networks to changes in plant parameters and measurement noise.


International Symposium on Neural Networks | 1991

Decoupled extended Kalman filter training of feedforward layered networks

Gintaras Vincent Puskorius; Lee A. Feldkamp

Presents a training algorithm for feedforward layered networks based on a decoupled extended Kalman filter (DEKF). The authors present an artificial process noise extension to DEKF that increases its convergence rate and assists in the avoidance of local minima. Computationally efficient formulations for two particularly natural and useful cases of DEKF are given. Through a series of pattern classification and function approximation experiments, three members of the DEKF family are compared with one another and with standard backpropagation (SBP). These studies demonstrate that the judicious grouping of weights, along with the use of artificial process noise in DEKF, results in input-output mapping performance that is comparable to that of the global extended Kalman algorithm and often superior to SBP, while requiring significantly fewer presentations of training data than SBP and less overall training time than either procedure.
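A minimal sketch of the decoupled update structure described above (groups of weights with independent error-covariance blocks coupled through a shared scalar innovation, plus artificial process noise). This is not the authors' implementation; for clarity the "network" is just a linear model y = w · x, so the derivative of the output with respect to each weight group is simply the corresponding slice of the input, and all names and sizes are ours:

```python
import numpy as np

# Hedged DEKF sketch: each group keeps its own weights w_i and
# covariance P_i; a shared scalar A couples the groups; a small
# artificial process noise q is added to each P_i every step.
def dekf_step(groups, x, y, R=1.0, q=1e-4):
    """groups: list of (w_i, P_i, idx_i); updates weights in place."""
    y_hat = sum(w @ x[idx] for w, P, idx in groups)
    e = y - y_hat                                # scalar innovation
    # global scaling factor shared by all groups
    A = 1.0 / (R + sum(x[idx] @ P @ x[idx] for w, P, idx in groups))
    for w, P, idx in groups:
        h = x[idx]                               # d y_hat / d w_i
        K = P @ h * A                            # group Kalman gain
        w += K * e
        P -= np.outer(K, h @ P)
        P += q * np.eye(len(w))                  # artificial process noise
    return e

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5, 3.0])
groups = [(np.zeros(2), np.eye(2), slice(0, 2)),   # two weight groups
          (np.zeros(2), np.eye(2), slice(2, 4))]
for _ in range(500):
    x = rng.normal(size=4)
    y = w_true @ x + rng.normal(scale=0.05)
    dekf_step(groups, x, y, R=0.05**2)
w_est = np.concatenate([g[0] for g in groups])
```

With whitened inputs the block-diagonal covariance approximation is benign, which is one intuition for why decoupling can approach global-EKF performance at lower cost.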


Proceedings of the IEEE | 1998

A signal processing framework based on dynamic neural networks with application to problems in adaptation, filtering, and classification

Lee A. Feldkamp; Gintaras Vincent Puskorius

We present a coherent neural net based framework for solving various signal processing problems. It relies on the assertion that time-lagged recurrent networks possess the necessary representational capabilities to act as universal approximators of nonlinear dynamical systems. This applies to system identification, time-series prediction, nonlinear filtering, adaptive filtering, and temporal pattern classification. We address the development of models of nonlinear dynamical systems, in the form of time-lagged recurrent neural nets, which can be used without further training. We employ a weight update procedure based on the extended Kalman filter (EKF). Against the tendency for a net to forget earlier learning as it processes new examples, we develop a technique called multistream training. We demonstrate our framework by applying it to four problems. First, we show that a single time-lagged recurrent net can be trained to produce excellent one-time-step predictions for two different time series and also to be robust to severe errors in the input sequence. Second, we model stably a complex system containing significant process noise. The remaining two problems are drawn from real-world automotive applications. One involves input-output modeling of the dynamic behavior of a catalyst-sensor system exposed to an operating engine's exhaust stream; the other, the real-time and continuous detection of engine misfire.
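The time-lagged recurrent structure central to this framework can be sketched as an Elman-style network: the hidden state feeds back through a unit delay, so the output at time t depends on the whole input history rather than only the current sample. This is an illustrative toy with arbitrary random weights, not the paper's trained models:

```python
import numpy as np

# Hedged sketch of a time-lagged (Elman-style) recurrent network:
# h_t = tanh(W_in x_t + W_rec h_{t-1}),  y_t = W_out h_t.
# Sizes and weights are arbitrary; no training is performed here.
class TimeLaggedRNN:
    def __init__(self, n_in, n_hid, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_hid, n_in))
        self.W_rec = rng.normal(0.0, 0.5, (n_hid, n_hid))
        self.W_out = rng.normal(0.0, 0.5, (n_out, n_hid))
        self.h = np.zeros(n_hid)                 # delayed recurrent state

    def step(self, x):
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return self.W_out @ self.h

net = TimeLaggedRNN(1, 8, 1)
y_a = [net.step(np.array([u])) for u in [0.0, 1.0, 0.5]]
net.h[:] = 0.0                                   # reset state
y_b = [net.step(np.array([u])) for u in [1.0, 0.0, 0.5]]
# same final input (0.5), different histories: the outputs differ,
# which is the memory property the abstract relies on
```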


Proceedings of the IEEE | 1996

Dynamic neural network methods applied to on-vehicle idle speed control

Gintaras Vincent Puskorius; Lee A. Feldkamp; Leighton Ira Davis

The application of neural network techniques to the control of nonlinear dynamical systems has been the subject of substantial interest and research in recent years. In our own work, we have concentrated on extending the dynamic gradient formalism as established by Narendra and Parthasarathy (1990, 1991), and on employing it for applications in the control of nonlinear systems, with specific emphasis on automotive subsystems. The results we have reported to date, however, have been based exclusively upon simulation studies. In this paper, we establish that dynamic gradient training methods can be successfully used for synthesizing neural network controllers directly on instances of real systems. In particular we describe the application of dynamic gradient methods for training a time-lagged recurrent neural network feedback controller for the problem of engine idle speed control on an actual vehicle, discuss hardware and software issues, and provide representative experimental results.


International Symposium on Neural Networks | 2003

Simple and conditioned adaptive behavior from Kalman filter trained recurrent networks

Lee A. Feldkamp; Danil V. Prokhorov; Timothy Mark Feldkamp

We illustrate the ability of a fixed-weight neural network, trained with Kalman filter methods, to perform tasks that are usually entrusted to an explicitly adaptive system. Following a simple example, we demonstrate that such a network can be trained to exhibit input-output behavior that depends on which of two conditioning tasks was performed a substantial number of time steps in the past. This behavior can also be made to survive an intervening interference task.


Applied Intelligence | 2004

Neural Learning from Unbalanced Data

Yi Lu Murphey; Hong Guo; Lee A. Feldkamp

This paper describes the result of our study on neural learning to solve classification problems in which the data are unbalanced and noisy. We conducted the study on three different neural network architectures, multi-layered Back Propagation, Radial Basis Function, and Fuzzy ARTMAP, using three different training methods: duplicating minority class examples, the Snowball technique, and multidimensional Gaussian modeling of data noise. Three major issues are addressed: neural learning from unbalanced data examples, neural learning from noisy data, and making intentionally biased decisions. We argue that by properly generating extra training data examples around the noise densities, we can train a neural network that has a stronger capability of generalization and better control of classification error. In particular, we focus on problems that require a neural network to make favorable classifications for a particular class, such as classifying normal (pass) / abnormal (fail) vehicles in an assembly plant. In addition, we present three methods that quantitatively measure the noise level of a given data set. All experiments were conducted using data examples downloaded directly from test sites of an automobile assembly plant. The experimental results showed that the proposed multidimensional Gaussian noise modeling algorithm was very effective in generating extra data examples that can be used to train a neural network to make favorable decisions for the minority class and to have increased generalization capability.
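The general idea of generating extra minority-class examples via multidimensional Gaussian noise modeling can be sketched as follows. This is our own simplified reading, not the paper's algorithm: estimate a covariance from the minority samples and draw synthetic points around randomly chosen minority examples; the function name and the shrinkage parameter are ours:

```python
import numpy as np

# Hedged sketch: oversample the minority class by adding draws from a
# multivariate Gaussian fitted to the minority data. `scale` shrinks
# the sample covariance so synthetic points stay near the real cloud.
def augment_minority(X_min, n_new, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.cov(X_min, rowvar=False) * scale
    centers = X_min[rng.integers(0, len(X_min), n_new)]
    noise = rng.multivariate_normal(np.zeros(X_min.shape[1]), cov, n_new)
    return centers + noise

# Tiny 2-D minority class; 20 synthetic examples generated around it.
X_min = np.array([[0.0, 1.0], [0.2, 1.1], [-0.1, 0.9], [0.1, 1.2]])
X_new = augment_minority(X_min, 20)
```

Training on the union of real and synthetic minority examples biases the decision boundary in favor of the minority class, which matches the pass/fail use case described above.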


International Conference on Robotics and Automation | 1987

Global calibration of a robot/vision system

Gintaras Vincent Puskorius; Lee A. Feldkamp

The success of industrial robotic applications involving off-line programming and sensory-based guidance will depend significantly upon the positioning accuracy of robots, the accuracy of sensing devices, and the software coupling between the robot controller and the sensors. These accuracy and coupling issues are addressed by a methodology developed for the automatic global calibration of a robot/vision system. The methodology employs a stereo pair of CCD array cameras mounted to the end-effector of a six-axis revolute robot arm. With an automatic procedure, three-dimensional coordinate measurements are made, relative to the robot's base frame, of a single spherical point in space at numerous and widely varying joint-angle configurations. Based upon a modified Denavit-Hartenberg robot kinematic model, both geometric and nongeometric robotic errors are inferred simultaneously with the geometric errors of the vision system using an iterative least-squares algorithm. Preliminary results indicate an approximately threefold improvement in the positioning accuracy of the robot arm.
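The iterative least-squares step can be illustrated on a toy analogue: infer kinematic error parameters (here a link length and a joint-angle offset for a one-link planar arm) from measured tip positions via Gauss-Newton with a numerical Jacobian. The real system used a modified Denavit-Hartenberg model with many more parameters; everything below is a simplified stand-in:

```python
import numpy as np

# Toy calibration: the "robot" is a one-link planar arm with unknown
# length L and joint-angle offset d; we recover (L, d) from measured
# tip positions by iterative least squares (Gauss-Newton).
def tip(params, theta):
    L, d = params
    return np.array([L * np.cos(theta + d), L * np.sin(theta + d)])

def residuals(params, thetas, meas):
    return np.concatenate([tip(params, t) - m for t, m in zip(thetas, meas)])

def gauss_newton(params, thetas, meas, iters=10, eps=1e-6):
    p = np.asarray(params, float)
    for _ in range(iters):
        r = residuals(p, thetas, meas)
        J = np.empty((len(r), len(p)))
        for j in range(len(p)):                  # forward-difference Jacobian
            dp = np.zeros_like(p); dp[j] = eps
            J[:, j] = (residuals(p + dp, thetas, meas) - r) / eps
        p -= np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
    return p

true = (1.02, 0.03)                              # actual length and offset
thetas = np.linspace(0.0, 2.0, 8)                # measurement configurations
meas = [tip(true, t) for t in thetas]            # noiseless "camera" data
est = gauss_newton([1.0, 0.0], thetas, meas)     # start from nominal model
```

Measuring one point at widely varying joint configurations, as in the paper, is what makes the parameters observable in the least-squares fit.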


International Symposium on Neural Networks | 1994

Training controllers for robustness: multi-stream DEKF

Lee A. Feldkamp; Gintaras Vincent Puskorius

Kalman-filter-based training has been shown to be advantageous in many training applications. By its nature, extended Kalman filter (EKF) training is realized with instance-by-instance updates, rather than by performing updates at the end of a batch of training instances or patterns. Motivated originally by the desire to be able to base an update on a collection of instances, rather than just one, we recognized that the simple construct of multiple streams of training examples allows a batch-like update to be performed without violating an underlying principle of Kalman training, viz. that the approximate error covariance matrix remain consistent with the updates that have actually been performed. In this paper, we present this construct and show how it may be used to train robust controllers, i.e., controllers that perform well for a range of plants.
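The multi-stream construct can be sketched by stacking the errors and derivatives from several streams into one joint Kalman update, so the covariance stays consistent with the update actually performed. For clarity the model below is linear (its derivative matrix is just the stacked inputs); all names are ours, not the authors':

```python
import numpy as np

# Hedged multi-stream sketch: each row of X is one stream's derivative
# vector and each entry of y its target; one joint Kalman update uses
# all streams at once instead of applying them one by one.
def multistream_update(w, P, X, y, R=0.01):
    e = y - X @ w                                # one error per stream
    S = X @ P @ X.T + R * np.eye(len(y))         # joint innovation covariance
    K = P @ X.T @ np.linalg.inv(S)               # gain for all streams at once
    return w + K @ e, P - K @ X @ P

rng = np.random.default_rng(2)
w_true = np.array([0.5, -1.5, 2.0])
w, P = np.zeros(3), np.eye(3)
for _ in range(100):
    X = rng.normal(size=(4, 3))                  # 4 streams per update
    y = X @ w_true + rng.normal(scale=0.1, size=4)
    w, P = multistream_update(w, P, X, y, R=0.1**2)
```

Drawing the streams from different plants is what the robustness argument above relies on: every update reflects all plants simultaneously.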


Archive | 1998

Enhanced Multi-Stream Kalman Filter Training for Recurrent Networks

Lee A. Feldkamp; Danil V. Prokhorov; Charles F. Eagen; F. Yuan

We present a framework for the training of time-lagged recurrent networks that has been used for a wide variety of both abstract problems and practical applications. Our method is based on rigorous computation of dynamic derivatives, using various forms of backpropagation through time (BPTT), a second-order weight update scheme that uses the extended Kalman filter, and data delivery mechanics designed for sequential weight updates with broad coverage of the available data. We extend our previous discussions of this framework by discussing various alternative forms of BPTT. In addition, we consider explicitly the issue of dealing with and optimizing network initial states. We discuss the initial state problem from the standpoint of making time-series predictions.
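The dynamic-derivative computation at the core of this framework can be sketched on the simplest possible recurrent model. This is a didactic stand-in, not the paper's method: for h_t = w·h_{t-1} + x_t with a loss on the final state, BPTT unrolls the recurrence forward, then accumulates dL/dw backward through the stored states, and the result can be checked against a finite-difference gradient:

```python
import numpy as np

# Hedged BPTT sketch on a scalar linear recurrence h_t = w*h_{t-1} + x_t
# with loss L = 0.5*(h_T - target)^2. Backward pass: dh_t/dw = h_{t-1}
# and dh_t/dh_{t-1} = w, accumulated from t = T down to 1.
def bptt_grad(w, xs, target):
    hs = [0.0]
    for x in xs:                                 # forward unroll, store states
        hs.append(w * hs[-1] + x)
    dL_dh = hs[-1] - target                      # dL/dh_T
    g = 0.0
    for t in range(len(xs), 0, -1):              # backward through time
        g += dL_dh * hs[t - 1]                   # contribution of w at step t
        dL_dh *= w                               # propagate to earlier state
    return g

def loss(w, xs, target):
    h = 0.0
    for x in xs:
        h = w * h + x
    return 0.5 * (h - target) ** 2

xs, target, w = [0.3, -0.5, 1.0, 0.2], 0.7, 0.9
g = bptt_grad(w, xs, target)
fd = (loss(w + 1e-6, xs, target) - loss(w - 1e-6, xs, target)) / 2e-6
```

In the paper's framework such dynamic derivatives feed the EKF update rather than plain gradient descent; the derivative computation itself is the part sketched here.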
