
Publication


Featured research published by James David Keeler.


Neural Computation | 1990

Layered neural networks with Gaussian hidden units as universal approximations

Eric Hartman; James David Keeler; Jacek M. Kowalski

A neural network with a single layer of hidden units of Gaussian type is proved to be a universal approximator for real-valued maps defined on convex, compact sets of R^n.
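A rough sketch of the network family analyzed here, a single hidden layer of Gaussian units feeding a linear output; the centers, widths, and weights below are arbitrary placeholders, not values from the paper:

import numpy as np

def gaussian_layer_net(x, centers, widths, out_weights, bias=0.0):
    # y = bias + sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2))
    sq_dists = np.sum((centers - x) ** 2, axis=1)       # squared distance to each hidden-unit center
    hidden = np.exp(-sq_dists / (2.0 * widths ** 2))    # Gaussian hidden activations
    return bias + out_weights @ hidden                  # linear output layer

# Example: 5 Gaussian hidden units approximating a real-valued map on R^3
rng = np.random.default_rng(0)
x = rng.uniform(size=3)
centers = rng.uniform(size=(5, 3))
widths = np.full(5, 0.5)
out_weights = rng.normal(size=5)
print(gaussian_layer_net(x, centers, widths, out_weights))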


Neural Computation | 1991

Predicting the future: Advantages of semilocal units

Eric Hartman; James David Keeler

In investigating Gaussian radial basis function (RBF) networks for their ability to model nonlinear time series, we have found that while RBF networks are much faster than standard sigmoid unit backpropagation for low-dimensional problems, their advantages diminish in high-dimensional input spaces. This is particularly troublesome if the input space contains irrelevant variables. We suggest that this limitation is due to the localized nature of RBFs. To gain the advantages of the highly nonlocal sigmoids and the speed advantages of RBFs, we propose a particular class of semilocal activation functions that is a natural interpolation between these two families. We present evidence that networks using these Gaussian bar units avoid the slow learning problem of sigmoid unit networks, and, very importantly, are more accurate than RBF networks in the presence of irrelevant inputs. On the Mackey-Glass and Coupled Lattice Map problems, the speedup over sigmoid networks is so dramatic that the difference in training time between RBF and Gaussian bar networks is minor. Gaussian bar architectures that superpose composed Gaussians (Gaussians-of-Gaussians) to approximate the unknown function have the best performance. We postulate that an interesting behavior displayed by Gaussian bar functions under gradient descent dynamics, which we call automatic connection pruning, is an important factor in the success of this representation.
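To make the contrast concrete, a minimal sketch (my own, not the authors' code) of a standard RBF unit next to a Gaussian bar unit: the RBF responds to a joint Gaussian over all inputs, while the Gaussian bar sums an independent one-dimensional Gaussian per input dimension, so the weight on an irrelevant dimension can shrink toward zero (the automatic connection pruning mentioned above):

import numpy as np

def rbf_unit(x, center, width):
    # Localized: the response depends on the joint distance across all dimensions.
    return np.exp(-np.sum((x - center) ** 2) / (2.0 * width ** 2))

def gaussian_bar_unit(x, centers, widths, weights):
    # Semilocal: a weighted sum of one-dimensional Gaussian responses, one per input.
    return np.sum(weights * np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2)))

x = np.array([0.2, 1.5, -0.3])
print(rbf_unit(x, center=np.zeros(3), width=1.0))
# A zero weight on the second input illustrates pruning of an irrelevant dimension.
print(gaussian_bar_unit(x, centers=np.zeros(3), widths=np.ones(3), weights=np.array([1.0, 0.0, 1.0])))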


Optical Engineering | 1990

Optoelectronic implementation of multilayer neural networks in a single photorefractive crystal

Carsten Peterson; Stephen Redfield; James David Keeler; Eric Hartman

We present a novel, versatile optoelectronic neural network architecture for implementing supervised learning algorithms in photorefractive materials. The system is based on spatial multiplexing rather than the more commonly used angular multiplexing of the interconnect gratings. This simple, single-crystal architecture implements a variety of multilayer supervised learning algorithms including mean field theory, backpropagation, and Marr-Albus-Kanerva style algorithms. Extensive simulations show how beam depletion, rescattering, absorption, and decay effects of the crystal are compensated for by suitably modified supervised learning algorithms.


ISA Transactions | 1998

Process modeling and optimization using focused attention neural networks

James David Keeler; Eric Hartman; Stephen Piche

Neural networks have been shown to be very useful for modeling and optimization of nonlinear and even chaotic processes. However, in using standard neural network approaches to modeling and optimization of processes in the presence of unmeasured disturbances, a dilemma arises between achieving the accurate predictions needed for modeling and computing the correct gains required for optimization. As shown in this paper, the Focused Attention Neural Network (FANN) provides a solution to this dilemma. Unmeasured disturbances are prevalent in process industry plants and frequently have significant effects on process outputs. In such cases, process outputs often cannot be accurately predicted from the independent process input variables alone. To enhance prediction accuracy, a common neural network modeling practice is to include other dependent process output variables as model inputs. The inclusion of such variables almost invariably benefits prediction accuracy, and is benign if the model is used for prediction alone. However, the process gains, necessary for optimization, sensitivity analysis and other process characterizations, are almost always incorrect in such models. We describe a neural network architecture, the FANN, which obtains accuracy in both predictions and gains in the presence of unmeasured disturbances. The FANN architecture uses dependent process variables to perform feed-forward estimation of unmeasured disturbances, and uses these estimates together with the independent variables as model inputs. Process gains are then calculated correctly as a function of the estimated disturbances and the independent variables. Steady-state optimization solutions thus include compensation for unmeasured disturbances. The effectiveness of the FANN architecture is illustrated using a model of a process with two unmeasured disturbances and using a model of the chaotic Belousov–Zhabotinski chemical reaction.
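A conceptual sketch of the two-stage structure the abstract describes (placeholder names and shapes, not the published implementation): one network estimates the unmeasured disturbances from the dependent process outputs, and a second network predicts the target from the independent inputs together with those estimates, so gains with respect to the independent inputs are evaluated at fixed estimated-disturbance values:

import numpy as np

def mlp(inputs, w1, w2):
    # Minimal one-hidden-layer network used for both stages of the sketch.
    return np.tanh(inputs @ w1) @ w2

def fann_predict(independent_vars, dependent_vars, disturbance_net, output_net):
    # Stage 1: feed-forward estimate of the unmeasured disturbances.
    est_disturbances = mlp(dependent_vars, *disturbance_net)
    # Stage 2: predict the output from the independent variables plus the estimates.
    model_inputs = np.concatenate([independent_vars, est_disturbances])
    return mlp(model_inputs, *output_net)

rng = np.random.default_rng(1)
independent = rng.normal(size=4)    # measured, manipulable process inputs
dependent = rng.normal(size=3)      # other measured process outputs
disturbance_net = (rng.normal(size=(3, 6)), rng.normal(size=(6, 2)))    # estimates 2 disturbances
output_net = (rng.normal(size=(6, 8)), rng.normal(size=(8, 1)))         # 4 independents + 2 estimates in
print(fann_predict(independent, dependent, disturbance_net, output_net))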


International Symposium on Neural Networks | 1991

Semi-local units for prediction

Eric Hartman; James David Keeler

The authors consider a class of semilocal activation functions, which respond to more localized regions of input space than sigmoid functions but less localized regions than radial basis functions (RBFs). In particular, they examine Gaussian bar functions, which sum the Gaussian responses from each input dimension. They present evidence that Gaussian bar networks avoid the slow learning problems of sigmoid networks and deal more robustly with irrelevant inputs than RBF networks. On the Mackey-Glass problem, the speedup over sigmoid networks is so dramatic that the difference in training time between RBF and Gaussian bar networks is minor. Architectures that superpose composed Gaussians (Gaussians-of-Gaussians) to approximate the unknown function have the best performance. An automatic connection pruning mechanism inherent in the Gaussian bar function is very likely a key factor in the success of this representation.


Neural Computation | 1990

An optoelectronic architecture for multilayer learning in a single photorefractive crystal

Carsten Peterson; Stephen Redfield; James David Keeler; Eric Hartman

We propose a simple architecture for implementing supervised neural network models optically with photorefractive technology. The architecture is very versatile: a wide range of supervised learning algorithms can be implemented including mean-field-theory, backpropagation, and Kanerva-style networks. Our architecture is based on a single crystal with spatial multiplexing rather than the more commonly used angular multiplexing. It handles hidden units and places no restrictions on connectivity. Associated with spatial multiplexing are certain physical phenomena, rescattering and beam depletion, which tend to degrade the matrix multiplications. Detailed simulations including beam absorption and grating decay show that the supervised learning algorithms (slightly modified) compensate for these degradations.


Archive | 1993

Virtual continuous emission monitoring system with sensor validation

James David Keeler; John P. Havener; Devendra Godbole; Ralph Bruce Ferguson


Archive | 1994

Virtual emissions monitor for automobile

James David Keeler; John P. Havener; Devendra Godbole; Ralph B. Ferguson II


Archive | 1992

Residual activation neural network

James David Keeler; Eric Hartman; Kadir Liano; Ralph Bruce Ferguson


Archive | 1995

Virtual continuous emission monitoring system

James David Keeler; John P. Havener; Devendra Godbole; Ralph Bruce Ferguson
