Gordon R. Little
University of Dayton Research Institute
Publication
Featured research published by Gordon R. Little.
Applied Optics | 1990
Gordon R. Little; Steven C. Gustafson; Robert A. Senn
The backpropagation neural network learning algorithm is generalized to include complex-valued interconnections for possible optical implementations.
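The core idea, gradient descent with complex-valued weights, can be sketched for a single linear neuron (this is an illustrative toy, not the paper's network; the conjugate-gradient update rule shown is the standard Wirtinger-calculus form):

```python
import numpy as np

# Hedged sketch: a single complex-valued "neuron" y = w @ x trained by
# gradient descent on the squared error |t - y|^2. For complex weights,
# the descent direction is the conjugate (Wirtinger) gradient:
# dL/dw* = -(t - y) * conj(x).

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # complex input
w_true = rng.standard_normal(4) + 1j * rng.standard_normal(4)
t = w_true @ x                                             # scalar target

w = np.zeros(4, dtype=complex)
lr = 0.5 / np.linalg.norm(x) ** 2    # step size normalized for stability
for _ in range(200):
    e = t - w @ x                    # complex error
    w += lr * e * np.conj(x)         # conjugate-gradient update

print(abs(t - w @ x))                # residual error, near zero
```

Each update multiplies the scalar error by a fixed factor below one, so the residual shrinks geometrically.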
Applied Optics | 1986
Gordon R. Little
A modification of MacQuigg's method: a simple phase-control grating is fabricated by superposing the two beams from the two sources and recording a hologram.
Proceedings of SPIE | 1992
Steven C. Gustafson; Gordon R. Little; Mark August Manzardo; Todd Steven Puterbaugh
Stretch and hammer neural networks use radial basis function methods to achieve advantages in generalizing training examples. These advantages include (1) exact learning, (2) maximally smooth modeling of Gaussian deviations from linear relationships, (3) identical outputs for arbitrary linear combination of inputs, and (4) training without adjustable parameters in a predeterminable number of steps. Stretch and hammer neural networks are feedforward architectures that have separate hidden neuron layers for stretching and hammering in accordance with an easily visualized physical model. Training consists of (1) transforming the inputs to principal component coordinates, (2) finding the least squares hyperplane through the training points, (3) finding the Gaussian radial basis function variances at the column diagonal dominance limit, and (4) finding the Gaussian radial basis function coefficients. The Gaussian radial basis function variances are chosen to be as large as possible consistent with maintaining diagonal dominance for the simultaneous linear equations that must be solved to obtain the basis function coefficients. This choice insures that training example generalization is maximally smooth consistent with unique training in a predeterminable number of steps. Stretch and hammer neural networks have been used successfully in several practical applications.
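The four training steps above can be sketched as follows (a hedged illustration; the function name, the shrink-until-dominant search, and all details are assumptions, not the authors' code):

```python
import numpy as np

def stretch_and_hammer_fit(X, y):
    # (1) transform inputs to principal component coordinates
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Z = (X - mu) @ Vt.T

    # (2) least-squares hyperplane through the training points
    A = np.hstack([Z, np.ones((len(Z), 1))])
    plane, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ plane

    # pairwise squared distances in principal component coordinates
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)

    # (3) largest Gaussian variance that keeps the RBF matrix
    #     column diagonally dominant (illustrative search, not the
    #     paper's exact limit computation)
    def dominant(s2):
        G = np.exp(-d2 / (2.0 * s2))
        off = G.sum(axis=0) - np.diag(G)   # column off-diagonal sums
        return np.all(np.diag(G) >= off)

    s2 = d2[d2 > 0].mean()
    while not dominant(s2):
        s2 *= 0.7                          # shrink until dominance holds

    # (4) Gaussian RBF coefficients: exact interpolation of the
    #     hyperplane residuals (diagonal dominance guarantees a solution)
    G = np.exp(-d2 / (2.0 * s2))
    coeffs = np.linalg.solve(G, resid)
    return Vt, mu, plane, s2, coeffs

# "Exact learning": the fitted model reproduces every training target.
rng = np.random.default_rng(1)
X = rng.standard_normal((10, 2))
y = rng.standard_normal(10)
Vt, mu, plane, s2, coeffs = stretch_and_hammer_fit(X, y)
Z = (X - mu) @ Vt.T
A = np.hstack([Z, np.ones((10, 1))])
G = np.exp(-(((Z[:, None] - Z[None]) ** 2).sum(-1)) / (2 * s2))
pred = A @ plane + G @ coeffs
print(np.allclose(pred, y))   # True
```

Because the variance search only shrinks until diagonal dominance holds, the interpolation system is guaranteed solvable while keeping the basis functions as broad, and hence the fit as smooth, as that condition allows.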
Optical and Hybrid Computing | 1986
Steven C. Gustafson; Steven L. Cartwright; David L. Flannery; Gordon R. Little; John S. Loomis; L. Maugh Vail
The potential of optics-based technology for performing the fundamental decision and interconnection operations required in any computing system is reviewed. Examples in which only interconnection operations are performed optically and in which both interconnection and decision operations are performed optically are discussed.
Applications of Artificial Neural Networks II | 1991
Steven C. Gustafson; Gordon R. Little; Eugene G. Olczak
Locally linear neural networks that were developed to process image data using optical correlator outputs are described. These networks extend well-known nearest neighbor techniques and have the desirable properties of coordinate invariance, data interpolation, linear representation, and data bootstrapping. Simplified locally linear networks are successfully used to estimate the rotation of objects in images for objects located by simulated optical correlation.
OE/LASE '90, 14-19 Jan., Los Angeles, CA | 1990
Darren M. Simon; Steven C. Gustafson; Gordon R. Little
The convergence of a neural network model based on optical resonator designs is examined for Boolean logic operations. Computer simulations are performed to investigate convergence performance and to assess possible optical implementations. The model is a simple and general mathematical formulation obtained using standard methods in which plane wave amplitudes and phases are specified at discrete times separated by the resonator period. The model is trained and tested as an associative memory neural network using an input state vector and a hologram matrix that evolves in time according to a set of coupled nonlinear difference equations. In general, these equations represent a high-order threshold logic, and the hologram matrix is a function of the outer product matrix of the evolving complex-element state vector. Model parameters are explored to provide insight on convergence mechanisms, robustness to input perturbations, and optimization of convergence times for both training and testing. The model is of interest for optical resonator designs that incorporate (1) dynamic holograms for massively parallel interconnection and storage functions and (2) nonlinear components such as phase conjugate mirrors (with thresholding and gain) for decision operations. These components are often incorporated into resonator loops to provide feedback and adaptation interactions.
Neural Network Models for Optical Computing | 1988
Steven C. Gustafson; Gordon R. Little
Binary pattern classification that may be implemented using optical hardware and neural network algorithms is considered. Pattern classification problems that have no concise description (as in classifying handwritten characters) or no concise computation (as in NP-complete problems) are expected to be particularly amenable to this approach. For example, optical processors that efficiently classify binary patterns in accordance with their Boolean function complexity might be designed. As a candidate for such a design, an optical neural network model is discussed that is designed for binary pattern classification and that consists of an optical resonator with a dynamic multiplex-recorded reflection hologram and a phase conjugate mirror with thresholding and gain. In this model, learning or training examples of binary patterns may be recorded on the hologram such that one bit in each pattern marks the pattern class. Any input pattern, including one with an unknown class or marker bit, will be modified by a large number of parallel interactions with the reflection hologram and nonlinear mirror. After perhaps several seconds and 100 billion interactions, a steady-state pattern may develop with a marker bit that represents a minimum-Boolean-complexity classification of the input pattern. Computer simulations are presented that illustrate progress in understanding the behavior of this model and in developing a processor design that could have commanding and enduring performance advantages compared to current pattern classification techniques.
Archive | 1992
Peter G. Raeth; Steven C. Gustafson; Gordon R. Little; Todd S. Puterbaugh
Archive | 1991
Steven C. Gustafson; Gordon R. Little
Archive | 1997
Steven C. Gustafson; Gordon R. Little; Troy A. Rhoadarmer; Theresa A. Tuthill