Publication


Featured research published by Edward W. Page.


IEEE Transactions on Computers | 1991

Optimization using neural networks

Gene A. Tagliarini; J.F. Christ; Edward W. Page

The design of feedback (or recurrent) neural networks to produce good solutions to complex optimization problems is discussed. The theoretical basis for applying neural networks to optimization problems is reviewed, and a design rule that serves as a primitive for constructing a wide class of constraints is introduced. The use of the design rule is illustrated by developing a neural network for producing high-quality solutions to a probabilistic resource allocation task. The resulting neural network has been simulated on a high-performance parallel processor that has been optimized for neural network simulation.
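
The feedback dynamics at the heart of this approach can be illustrated with a short simulation. The sketch below assumes a discrete Hopfield-style network with symmetric weights, updated asynchronously until it settles into a low-energy state; the particular weight matrix and bias values are illustrative assumptions, not the network constructed in the paper.

```python
import numpy as np

# Minimal sketch of discrete feedback (Hopfield-style) dynamics; the
# weight matrix T and bias vector are illustrative assumptions.

def energy(v, T, bias):
    """Energy the network descends: E = -1/2 v'Tv - bias'v."""
    return -0.5 * v @ T @ v - bias @ v

rng = np.random.default_rng(0)
n = 8
T = -2.0 * (np.ones((n, n)) - np.eye(n))   # symmetric mutual inhibition
bias = np.full(n, 1.0)                     # favors exactly one active neuron
v = rng.integers(0, 2, n).astype(float)

for _ in range(200):                       # asynchronous updates
    j = rng.integers(n)
    v[j] = 1.0 if T[j] @ v + bias[j] > 0 else 0.0

print("stable state:", v, "energy:", energy(v, T, bias))
```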


1988 Los Angeles Symposium: O-E/LASE '88 | 1988

Algorithm Development For Neural Networks

Edward W. Page; Gene A. Tagliarini

This paper is concerned with the development of algorithms for solving optimization problems with a network of artificial neurons. Although there is no notion of step-by-step sequencing in a neural network, it is possible to develop tools and techniques for interconnecting a network of neurons so that it will achieve stable states corresponding to possible solutions of a problem. An approach to implementing an important class of constraints in a network of artificial neurons is presented and illustrated by developing a solution to a resource allocation problem.
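
One widely cited form of this constraint primitive is the k-out-of-n rule associated with Tagliarini and Page's work: stable states of the network activate exactly k of n mutually inhibiting neurons. The sketch below assumes the commonly quoted parameter choice (off-diagonal weights of -2 and a bias of 2k - 1) and should be read as an illustration rather than the paper's exact construction.

```python
import numpy as np

# Sketch of a k-out-of-n constraint primitive: stable states activate
# exactly k of n mutually connected neurons. The values used (-2
# off-diagonal weights, bias 2k - 1) are an assumption here.

def k_of_n_constraint(n, k):
    """Return (weights, biases) enforcing 'exactly k of n neurons on'."""
    T = -2.0 * (np.ones((n, n)) - np.eye(n))   # pairwise inhibition
    bias = np.full(n, 2.0 * k - 1.0)           # excitation favoring k winners
    return T, bias

rng = np.random.default_rng(1)
n, k = 10, 3
T, bias = k_of_n_constraint(n, k)
v = rng.integers(0, 2, n).astype(float)
for _ in range(500):                           # asynchronous relaxation
    j = rng.integers(n)
    v[j] = 1.0 if T[j] @ v + bias[j] > 0 else 0.0
print("active neurons:", int(v.sum()))         # settles at exactly k
```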


IEEE Transactions on Systems, Man, and Cybernetics | 1996

Enhancing MLP networks using a distributed data representation

Sridhar Narayan; Gene Tagliarini; Edward W. Page

Multilayer perceptron (MLP) networks trained using backpropagation can be slow to converge in many instances. The primary reason for slow learning is the global nature of backpropagation. Another reason is the fact that a neuron in an MLP network functions as a hyperplane separator and is therefore inefficient when applied to classification problems in which decision boundaries are nonlinear. This paper presents a data representational approach that addresses these problems while operating within the framework of the familiar backpropagation model. We examine the use of receptors with overlapping receptive fields as a preprocessing technique for encoding inputs to MLP networks. The proposed data representation scheme, termed ensemble encoding, is shown to promote local learning and to provide enhanced nonlinear separability. Simulation results for well-known problems in classification and time-series prediction indicate that the use of ensemble encoding can significantly reduce the time required to train MLP networks. Since the choice of representation for input data is independent of the learning algorithm and the functional form employed in the MLP model, nonlinear preprocessing of network inputs may be an attractive alternative for many MLP network applications.
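
A minimal sketch of the encoding idea follows: each scalar network input is replaced by the responses of several overlapping receptive fields, producing a distributed, local representation. Gaussian receptors with evenly spaced centers are an assumption made here for illustration; the paper does not prescribe this exact form.

```python
import numpy as np

# Sketch of ensemble encoding: a scalar input becomes the activations
# of overlapping receptive fields. Gaussian receptors on an even grid
# are an illustrative assumption.

def ensemble_encode(x, n_receptors=5, lo=0.0, hi=1.0):
    """Encode scalar x as activations of overlapping Gaussian receptors."""
    centers = np.linspace(lo, hi, n_receptors)
    width = (hi - lo) / (n_receptors - 1)      # adjacent fields overlap
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

print(ensemble_encode(0.3))   # distributed, local representation of 0.3
```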


Applications and Science of Artificial Neural Networks Conference | 1997

Ensemble encoding for time series forecasting with MLP networks

Naveen Aerrabotu; Gene A. Tagliarini; Edward W. Page

Neural networks represent a promising approach to time series forecasting; however, the problem of obtaining good network generalization continues to present a challenge. As a means of improving network generalization ability for time series forecasting applications, this paper investigates the utility of a biologically inspired scheme that employs receptive fields for encoding network inputs. Both single- and multi-step forecasting performances are studied in the context of the sunspot series. Additionally, a heuristic for selecting the placement and dilations of the receptive field functions is presented. The performance of multi-layered perceptron networks trained using the data arising from the encoding scheme is assessed. The heuristic for placing and dilating the receptive fields yielded networks that learn rapidly and have consistently good multi-step prediction capability as compared to other published results.
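
The multi-step setting evaluated here can be illustrated with an iterated forecasting loop in which each one-step prediction is fed back as an input for the next step. In the sketch below, the MeanModel class and the window length are stand-ins for a trained MLP and its encoded inputs, introduced only for illustration.

```python
import numpy as np

# Iterated multi-step forecasting: each one-step prediction is fed back
# as input for the next step. MeanModel and the window length are
# illustrative stand-ins for a trained MLP and its input encoding.

class MeanModel:
    """Trivial stand-in for a trained MLP forecaster (illustration only)."""
    def predict(self, x):
        return x.mean()

def multi_step_forecast(model, history, horizon, window=12):
    """Forecast `horizon` steps ahead by feeding predictions back in."""
    series = list(history)
    for _ in range(horizon):
        x = np.array(series[-window:])   # most recent input window
        series.append(float(model.predict(x)))
    return series[len(history):]

history = np.sin(np.arange(40) / 3.0)    # toy series in place of sunspots
print(multi_step_forecast(MeanModel(), history, horizon=5))
```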


Proceedings of SPIE | 1996

Signal classification using wavelets and neural networks

Chris Johnson; Edward W. Page; Gene A. Tagliarini

The ability of wavelet decomposition to reduce signals to a relatively small number of components can be exploited in pattern recognition applications. Several recent studies have shown that wavelet decomposition extracts salient signal features which can lead to improved pattern classification by a neural network. The performance of the neural network classifier is heavily dependent upon the ability of wavelet processing to yield discriminatory features. This paper considers the combination of wavelet and neural processing for classifying 1-dimensional signals embedded in noise. Noisy signals were decomposed using the Haar wavelet basis and feedforward neural networks were trained on wavelet series coefficients at various scales. The experiment was repeated using the 4-coefficient Daubechies wavelet basis. The classification accuracy for both wavelet bases is compared over multiple scales, several signal-to-noise ratios, and varying numbers of training epochs.
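
The Haar analysis stage is simple enough to sketch directly. The code below implements the standard two-band Haar split (local averages and differences, with orthonormal sqrt(2) scaling) and repeats it to obtain coefficients at several scales; treating those coefficients as classifier features follows the setup the abstract describes, while the scaling convention is an assumption about the authors' setup.

```python
import numpy as np

# One stage of the Haar wavelet decomposition, repeated to produce
# coefficients at coarser scales; sqrt(2) normalization is the usual
# orthonormal convention (assumed here).

def haar_step(signal):
    """Split a length-2n signal into approximation and detail halves."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # local averages
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # local differences
    return approx, detail

def haar_decompose(signal, levels):
    """Multi-scale Haar series: detail coefficients per level + final approx."""
    coeffs, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        coeffs.append(detail)
    return approx, coeffs

x = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * np.random.randn(64)
approx, details = haar_decompose(x, levels=3)
print([d.shape for d in details], approx.shape)  # features for the classifier
```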


International Symposium on Neural Networks | 1994

Optimizing locality of data representation in MLP networks

S. Narayan; Edward W. Page

Local learning techniques associated with multilayer perceptron (MLP) networks typically rely on integrating receptive fields into the network model. However, data representation schemes that employ multiple, overlapping receptive fields to preprocess network inputs can be another source of local learning in MLP networks. This paper demonstrates a preprocessing scheme in which a genetic algorithm is used to determine the form and placement of receptive fields in order to optimize locality of representation of MLP network inputs. A performance metric for comparing MLP networks with disparate degrees of freedom is introduced. Both the preprocessing scheme and the proposed metric are demonstrated by using them in the context of predicting the Mackey-Glass chaotic time series.
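
A toy version of the search conveys the idea: a genetic algorithm evolves receptive field centers and widths under a fitness function that rewards local responses. The sparsity-style fitness used below is an illustrative assumption, not the locality metric proposed in the paper.

```python
import numpy as np

# Toy genetic algorithm over receptive field centers and widths. The
# fitness (high peak response, low spread across other fields) is an
# illustrative locality proxy, not the paper's metric.

rng = np.random.default_rng(0)
N_FIELDS, POP, GENS = 5, 20, 50
samples = rng.uniform(0.0, 1.0, 200)

def responses(genome, x):
    centers = genome[:N_FIELDS]
    widths = np.abs(genome[N_FIELDS:]) + 1e-3
    return np.exp(-((x[:, None] - centers) ** 2) / (2.0 * widths ** 2))

def fitness(genome):
    r = responses(genome, samples)
    peak = r.max(axis=1)            # strongest receptor per sample
    rest = r.sum(axis=1) - peak     # activity spread over the others
    return np.mean(peak - rest)     # local codes score high

pop = rng.uniform(0.0, 1.0, (POP, 2 * N_FIELDS))
for _ in range(GENS):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]              # fitter half
    children = parents + rng.normal(0.0, 0.05, parents.shape)  # mutation
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("evolved centers:", np.sort(best[:N_FIELDS]))
```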


IEEE Transactions on Computers | 1980

Minimally Testable Reed-Muller Canonical Forms

Edward W. Page

Arbitrary switching function realizations based upon Reed-Muller canonical (RMC) expansions have been shown to possess many of the desirable properties of easily testable networks. While realizations based upon each of the 2^n possible RMC expansions of a given switching function can be tested for permanent stuck-at-0 and stuck-at-1 faults with a small set of input vectors, certain expansions lead to an even smaller test set because of the resulting network topology. In particular, the selection of an RMC expansion that has a minimal number of literals appearing in an even number of product terms will give rise to switching function realizations requiring still fewer tests. This correspondence presents a solution to the problem of selecting the RMC expansion of a given switching function possessing the smallest test set.
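
The fixed positive-polarity member of this family of expansions is easy to compute, which helps make the setting concrete. The sketch below derives the positive-polarity RMC coefficients of a truth table via the standard GF(2) butterfly transform; the paper's actual contribution, selecting among all 2^n polarities for minimal testability, is not reproduced here.

```python
# Positive-polarity Reed-Muller coefficients of a Boolean function,
# computed from its truth table with the standard GF(2) butterfly
# transform. Polarity selection (the paper's topic) is not shown.

def reed_muller_coeffs(truth_table):
    """Coefficient c[m] multiplies the AND of the variables set in m."""
    a = list(truth_table)
    step = 1
    while step < len(a):
        for i in range(0, len(a), 2 * step):
            for j in range(i, i + step):
                a[j + step] ^= a[j]   # XOR butterfly over GF(2)
        step *= 2
    return a

# f(x1, x0) = x0 XOR x1 has truth table [0, 1, 1, 0] and RMC expansion
# x0 (+) x1, i.e. coefficients [0, 1, 1, 0].
print(reed_muller_coeffs([0, 1, 1, 0]))
```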


Proceedings of SPIE | 1993

Accelerating learning in a neural network for sonar signal classification

Sridhar Narayan; Gene A. Tagliarini; Edward W. Page

Ensemble encoding employs multiple, overlapping receptive fields to yield a distributed representation of analog signals. The effect of ensemble encoding on learning in multi-layer perceptron (MLP) networks is examined by applying it to a neural learning benchmark, sonar signal classification. Results suggest that, when used to encode input patterns, ensemble encoding can accelerate learning and improve classification accuracy in MLP networks.


IEEE Southeastcon | 1993

Enhancing neural network functionality with ensemble encoding

Sridhar Narayan; Gene A. Tagliarini; Edward W. Page

The authors discuss ensemble encoding, a biologically motivated data representation scheme that uses multiple receptors with overlapping receptive fields to encode analog inputs to multilayer perceptron (MLP) networks. By generating a distributed representation of input data, ensemble encoding enhances the node connection function for hidden-layer neurons. The added flexibility in constructing nonlinear internal mappings afforded by ensemble encoding is demonstrated through a function approximation example.


Proceedings of SPIE | 1993

Performance evaluation of a neural network for weapon-to-target assignment

J. F. Christ; Edward W. Page; Gene A. Tagliarini

This paper describes a neural network for assigning weapons to targets and compares its execution time on four distinct machines. The network employs more than 46,000 neural elements and more than 49 million connections. It has produced high-quality assignments for a realistic test scenario, and the neural approach can potentially deliver results in real time. The machines employed to evaluate the execution speed of the neural algorithm were: a DEC VAX 8810, a Neural Emulation Tool (NET) neural network accelerator from Loral Corporation, an Intel iPSC/2 Hypercube, and a Cray Y-MP4/464.
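
The assignment formulation can be connected to the constraint primitives from the group's earlier papers: a grid of neurons in which each row obeys a 1-out-of-n rule, biased by engagement payoffs. The miniature sketch below assumes random payoffs and a 4x3 grid; the actual network's scale (over 46,000 elements) and cost terms are far beyond this illustration.

```python
import numpy as np

# Toy weapon-to-target assignment as a constraint network: v[w, t] = 1
# means weapon w engages target t, with a 1-out-of-T constraint on each
# row. Payoffs and grid size are illustrative assumptions.

rng = np.random.default_rng(2)
W, T = 4, 3
value = rng.uniform(0, 1, (W, T))         # assumed engagement payoffs

v = rng.integers(0, 2, (W, T)).astype(float)
for _ in range(1000):                     # asynchronous relaxation
    w, t = rng.integers(W), rng.integers(T)
    others = v[w].sum() - v[w, t]         # activity in the same row
    net = -2.0 * others + 1.0 + value[w, t]   # 1-out-of-T rule + payoff bias
    v[w, t] = 1.0 if net > 0 else 0.0

print(v)                                  # one target per weapon
```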

Collaboration


Dive into Edward W. Page's collaborations.

Top Co-Authors

Gene Tagliarini

University of North Carolina at Wilmington
