Publication


Featured research published by Eric Hartman.


Neural Computation | 1990

Layered neural networks with Gaussian hidden units as universal approximations

Eric Hartman; James David Keeler; Jacek M. Kowalski

A neural network with a single layer of hidden units of Gaussian type is proved to be a universal approximator for real-valued maps defined on convex, compact sets of R^n.
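
As a concrete illustration of the architecture in this result (not the paper's proof construction), the sketch below fits a single layer of Gaussian hidden units to a smooth map on a compact subset of R^2, with the output layer solved by least squares; the centers, width, and sizes are arbitrary choices for the example.

```python
import numpy as np

# Illustrative sketch: one hidden layer of Gaussian units approximating a smooth
# real-valued map on [0, 1]^2. Centers are fixed at random and only the linear
# output weights are fit; all sizes and widths are arbitrary for the example.
rng = np.random.default_rng(0)

def target(x):                                   # example map on the compact set [0, 1]^2
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

def gaussian_layer(X, centers, sigma):
    # Response of each hidden unit: exp(-||x - c||^2 / (2 sigma^2))
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

X = rng.uniform(0.0, 1.0, size=(500, 2))
centers = rng.uniform(0.0, 1.0, size=(50, 2))
sigma = 0.2

H = gaussian_layer(X, centers, sigma)
w, *_ = np.linalg.lstsq(H, target(X), rcond=None)    # linear output layer

X_test = rng.uniform(0.0, 1.0, size=(200, 2))
err = gaussian_layer(X_test, centers, sigma) @ w - target(X_test)
print("test RMSE:", np.sqrt(np.mean(err ** 2)))
```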


Neural Networks | 1989

Explorations of the Mean Field Theory Learning Algorithm

Carsten Peterson; Eric Hartman

The mean field theory (MFT) learning algorithm is elaborated and explored with respect to a variety of tasks. MFT is benchmarked against the back-propagation learning algorithm (BP) on two different feature recognition problems: two-dimensional mirror symmetry and multidimensional statistical pattern classification. We find that while the two algorithms are very similar with respect to generalization properties, MFT normally requires a substantially smaller number of training epochs than BP. Since the MFT model is bidirectional, rather than feed-forward, its use can be extended naturally from purely functional mappings to a content addressable memory. A network with N visible and N hidden units can store up to approximately 4N patterns with good content-addressability. We stress an implementational advantage of MFT: it lends itself naturally to VLSI circuitry.
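
For readers unfamiliar with MFT, the sketch below shows the deterministic mean-field fixed-point iteration together with a single clamped-versus-free weight update in the Boltzmann-machine style; the network size, temperature, and learning rate are illustrative, and this is not the authors' benchmark code.

```python
import numpy as np

# Minimal sketch of the mean-field iteration that replaces stochastic sampling:
# V_i <- tanh((1/T) * sum_j W_ij V_j), run once with the visible units clamped to
# a data pattern and once free, followed by a correlation-difference weight update.
rng = np.random.default_rng(1)
n = 8                                            # visible + hidden units (illustrative)
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2                                # symmetric weights, zero diagonal
np.fill_diagonal(W, 0.0)

def mean_field(W, V0, T=1.0, iters=50, clamp_idx=None, clamp_val=None):
    V = V0.copy()
    for _ in range(iters):
        V = np.tanh(W @ V / T)
        if clamp_idx is not None:                # hold clamped units at their data values
            V[clamp_idx] = clamp_val
    return V

V0 = rng.uniform(-0.1, 0.1, size=n)
pattern = np.array([1.0, -1.0, 1.0, -1.0])       # data on the 4 visible units
V_clamped = mean_field(W, V0, clamp_idx=np.arange(4), clamp_val=pattern)
V_free = mean_field(W, V0)

eta = 0.05                                       # learning rate (illustrative)
W += eta * (np.outer(V_clamped, V_clamped) - np.outer(V_free, V_free))
np.fill_diagonal(W, 0.0)
```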


Neural Computation | 1991

Predicting the future: Advantages of semilocal units

Eric Hartman; James David Keeler

In investigating gaussian radial basis function (RBF) networks for their ability to model nonlinear time series, we have found that while RBF networks are much faster than standard sigmoid unit backpropagation for low-dimensional problems, their advantages diminish in high-dimensional input spaces. This is particularly troublesome if the input space contains irrelevant variables. We suggest that this limitation is due to the localized nature of RBFs. To gain the advantages of the highly nonlocal sigmoids and the speed advantages of RBFs, we propose a particular class of semilocal activation functions that is a natural interpolation between these two families. We present evidence that networks using these gaussian bar units avoid the slow learning problem of sigmoid unit networks, and, very importantly, are more accurate than RBF networks in the presence of irrelevant inputs. On the Mackey-Glass and Coupled Lattice Map problems, the speedup over sigmoid networks is so dramatic that the difference in training time between RBF and gaussian bar networks is minor. Gaussian bar architectures that superpose composed gaussians (gaussians-of-gaussians) to approximate the unknown function have the best performance. We postulate that an interesting behavior displayed by gaussian bar functions under gradient descent dynamics, which we call automatic connection pruning, is an important factor in the success of this representation.
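
The difference between local RBFs and semilocal gaussian bar units is easy to see in code. In the sketch below (all parameter values are arbitrary), a gaussian bar unit sums an independent one-dimensional Gaussian response per input dimension, so a single far-off or irrelevant input does not silence it the way it silences a standard RBF.

```python
import numpy as np

def gaussian_bar(x, centers, widths, weights):
    # Semilocal unit: a weighted sum of 1-D Gaussian responses, one per dimension.
    return np.sum(weights * np.exp(-(x - centers) ** 2 / (2 * widths ** 2)))

def rbf(x, center, width):
    # Standard Gaussian RBF: local in the joint input space (product of 1-D responses).
    return np.exp(-np.sum((x - center) ** 2) / (2 * width ** 2))

x = np.array([0.2, 5.0, -0.1])                     # second input is far from the center
c = np.zeros(3)
print(gaussian_bar(x, c, np.ones(3), np.ones(3)))  # ~2.0: dimensions 1 and 3 still respond
print(rbf(x, c, 1.0))                              # ~4e-6: killed by the one far dimension
```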


Neural Computation | 2000

Training Feedforward Neural Networks with Gain Constraints

Eric Hartman

Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
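
A minimal sketch of the penalty mechanism follows: a small one-hidden-layer network is fit while a hinge penalty discourages violations of a lower bound on the gain dy/dx. The bound, the penalty weight, and the use of an off-the-shelf optimizer (rather than the paper's gradient descent with adaptive balancing of the objective terms) are simplifications for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Fit y ~ f(x) with a one-hidden-layer tanh network while penalizing any point
# where the analytic gain dy/dx falls below an (arbitrary) lower bound g_min.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 1))
y = 0.5 * X[:, 0] + 0.05 * rng.standard_normal(100)      # noisy data, true gain 0.5

H = 5                                                    # hidden units (illustrative)
def unpack(p):
    return p[:H].reshape(H, 1), p[H:2*H], p[2*H:3*H], p[3*H]

def forward_and_gain(p, X):
    W1, b1, W2, b2 = unpack(p)
    a = np.tanh(X @ W1.T + b1)                           # (N, H) hidden activations
    yhat = a @ W2 + b2
    gain = (1 - a ** 2) @ (W2 * W1[:, 0])                # dy/dx at each sample
    return yhat, gain

def objective(p, lam=10.0, g_min=0.3):
    yhat, gain = forward_and_gain(p, X)
    mse = np.mean((yhat - y) ** 2)
    penalty = np.mean(np.maximum(0.0, g_min - gain) ** 2)   # hinge on the lower bound
    return mse + lam * penalty

p0 = 0.1 * rng.standard_normal(3 * H + 1)
res = minimize(objective, p0, method="L-BFGS-B")
print("minimum gain over the data:", forward_and_gain(res.x, X)[1].min())
```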


Optical Engineering | 1990

Optoelectronic implementation of multilayer neural networks in a single photorefractive crystal

Carsten Peterson; Stephen Redfield; James David Keeler; Eric Hartman

We present a novel, versatile optoelectronic neural network architecture for implementing supervised learning algorithms in photorefractive materials. The system is based on spatial multiplexing rather than the more commonly used angular multiplexing of the interconnect gratings. This simple, single-crystal architecture implements a variety of multilayer supervised learning algorithms including mean field theory, backpropagation, and Marr-Albus-Kanerva style algorithms. Extensive simulations show how beam depletion, rescattering, absorption, and decay effects of the crystal are compensated for by suitably modified supervised learning algorithms.


ISA Transactions | 1998

Process modeling and optimization using focused attention neural networks

James David Keeler; Eric Hartman; Stephen Piche

Neural networks have been shown to be very useful for modeling and optimization of nonlinear and even chaotic processes. However, in using standard neural network approaches to modeling and optimization of processes in the presence of unmeasured disturbances, a dilemma arises between achieving the accurate predictions needed for modeling and computing the correct gains required for optimization. As shown in this paper, the Focused Attention Neural Network (FANN) provides a solution to this dilemma. Unmeasured disturbances are prevalent in process industry plants and frequently have significant effects on process outputs. In such cases, process outputs often cannot be accurately predicted from the independent process input variables alone. To enhance prediction accuracy, a common neural network modeling practice is to include other dependent process output variables as model inputs. The inclusion of such variables almost invariably benefits prediction accuracy, and is benign if the model is used for prediction alone. However, the process gains, necessary for optimization, sensitivity analysis and other process characterizations, are almost always incorrect in such models. We describe a neural network architecture, the FANN, which obtains accuracy in both predictions and gains in the presence of unmeasured disturbances. The FANN architecture uses dependent process variables to perform feed-forward estimation of unmeasured disturbances, and uses these estimates together with the independent variables as model inputs. Process gains are then calculated correctly as a function of the estimated disturbances and the independent variables. Steady-state optimization solutions thus include compensation for unmeasured disturbances. The effectiveness of the FANN architecture is illustrated using a model of a process with two unmeasured disturbances and using a model of the chaotic Belousov–Zhabotinski chemical reaction.
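
The data flow described above can be sketched with linear stand-ins for the two networks. The toy process, the variable names, and the residual-based proxy used to train the disturbance estimator are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy process: output y depends on two independent inputs u and an unmeasured
# disturbance d; a dependent process variable y_dep reflects both u and d.
rng = np.random.default_rng(3)
N = 400
u = rng.uniform(-1, 1, size=(N, 2))                          # independent inputs
d = rng.standard_normal(N)                                   # unmeasured disturbance
y_dep = 0.5 * u[:, 0] + 0.8 * d + 0.05 * rng.standard_normal(N)
y = u[:, 0] + 2.0 * u[:, 1] + d                              # true gains: [1.0, 2.0]

# Common practice: feed the dependent variable in directly. Prediction improves,
# but the fitted gain on u[:, 0] is corrupted.
C = np.column_stack([u, y_dep, np.ones(N)])
print("gains, y_dep as input:", np.linalg.lstsq(C, y, rcond=None)[0][:2])

# FANN-style flow. Step 1 (toy proxy): take the residual of a model built on the
# independent inputs alone as a stand-in for the unmeasured disturbance.
U1 = np.column_stack([u, np.ones(N)])
resid = y - U1 @ np.linalg.lstsq(U1, y, rcond=None)[0]

# Step 2: feed-forward estimator of the disturbance from the dependent variable
# (and the independent inputs, so their contribution to y_dep can be removed).
A = np.column_stack([y_dep, u, np.ones(N)])
d_hat = A @ np.linalg.lstsq(A, resid, rcond=None)[0]

# Step 3: model the output from the independent inputs plus the disturbance estimate.
B = np.column_stack([u, d_hat, np.ones(N)])
print("gains, FANN-style:", np.linalg.lstsq(B, y, rcond=None)[0][:2])   # near [1.0, 2.0]
```

The first print illustrates the dilemma: with the dependent variable as a direct input, the fitted gain on the first independent input falls well below its true value, while the estimate-then-model flow recovers gains close to [1.0, 2.0].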


International Symposium on Neural Networks | 1991

Semi-local units for prediction

Eric Hartman; James David Keeler

The authors consider a class of semilocal activation functions, which respond to more localized regions of input space than sigmoid functions but less localized regions than radial basis functions (RBFs). In particular, they examine Gaussian bar functions, which sum the Gaussian responses from each input dimension. They present evidence that Gaussian bar networks avoid the slow learning problems of sigmoid networks and deal more robustly with irrelevant inputs than RBF networks. On the Mackey-Glass problem, the speedup over sigmoid networks is so dramatic that the difference in training time between RBF and Gaussian bar networks is minor. Architectures that superpose composed Gaussians (Gaussians-of-Gaussians) to approximate the unknown function have the best performance. An automatic connection pruning mechanism inherent in the Gaussian bar function is very likely a key factor in the success of this representation.


Neural Computation | 1990

An optoelectronic architecture for multilayer learning in a single photorefractive crystal

Carsten Peterson; Stephen Redfield; James David Keeler; Eric Hartman

We propose a simple architecture for implementing supervised neural network models optically with photorefractive technology. The architecture is very versatile: a wide range of supervised learning algorithms can be implemented including mean-field-theory, backpropagation, and Kanerva-style networks. Our architecture is based on a single crystal with spatial multiplexing rather than the more commonly used angular multiplexing. It handles hidden units and places no restrictions on connectivity. Associated with spatial multiplexing are certain physical phenomena, rescattering and beam depletion, which tend to degrade the matrix multiplications. Detailed simulations including beam absorption and grating decay show that the supervised learning algorithms (slightly modified) compensate for these degradations.


Archive | 2007

Development of PUNDA (Parametric Universal Nonlinear Dynamics Approximator) Models for Self-Validating Knowledge-Guided Modelling of Nonlinear Processes in Particle Accelerators & Industry

Bijan Sayyarrodsari; Carl Schweiger; Eric Hartman

The difficult problems being tackled in the accelerator community are those that are nonlinear, substantially unmodeled, and vary over time. Such problems are ideal candidates for model-based optimization and control if representative models of the problem can be developed that capture the necessary mathematical relations and remain valid throughout the operation region of the system, and through variations in system dynamics. The goal of this proposal is to develop the methodology and the algorithms for building high-fidelity mathematical representations of complex nonlinear systems via constrained training of combined first-principles and neural network models.
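
One way to read "combined first-principles and neural network models" is a parametric physical model providing the backbone with a constrained neural correction on top. The sketch below is only a guess at that structure under stated assumptions (a known linear first-principles term plus a penalized nonlinear correction); it is not the PUNDA formulation itself.

```python
import numpy as np
from scipy.optimize import minimize

# Hybrid model: y ~ theta * x (first-principles part, parameter theta to identify)
# plus a small tanh-network correction whose magnitude is penalized so the
# physical parameter retains its meaning. Structure and numbers are assumptions.
rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, size=(200, 1))
y = 1.5 * x[:, 0] + 0.3 * np.sin(3 * x[:, 0]) + 0.05 * rng.standard_normal(200)

H = 6
def split(p):
    return p[0], p[1:1+H], p[1+H:1+2*H], p[1+2*H:1+3*H], p[1+3*H]

def predict(p, x):
    theta, W1, b1, W2, b2 = split(p)
    nn = np.tanh(x * W1 + b1) @ W2 + b2              # neural correction term
    return theta * x[:, 0] + nn, nn

def objective(p, lam=0.1):
    yhat, nn = predict(p, x)
    return np.mean((yhat - y) ** 2) + lam * np.mean(nn ** 2)   # keep the correction small

p0 = 0.1 * rng.standard_normal(3 * H + 2)
res = minimize(objective, p0, method="L-BFGS-B")
print("identified first-principles gain:", split(res.x)[0])    # close to the true 1.5
```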


Archive | 1992

Residual activation neural network

James David Keeler; Eric Hartman; Kadir Liano; Ralph Bruce Ferguson
