Steven K. Rogers
Battelle Memorial Institute
Publications
Featured research published by Steven K. Rogers.
Pattern Analysis and Applications | 1999
John G. Keller; Steven K. Rogers; Matthew Kabrisky; Mark E. Oxley
Abstract: The automated recognition of targets in complex backgrounds is a difficult problem, yet humans perform such tasks with ease. We therefore propose a recognition model based on behavioural and physiological aspects of the human visual system. Emulating saccadic behaviour, an object is first memorised as a sequence of fixations. At each fixation an artificial visual field is constructed using a multi-resolution/orientation Gabor filterbank, edge features are extracted, and a new saccadic location is automatically selected. When a new image is scanned and a ‘familiar’ field of view encountered, the memorised saccadic sequence is executed over the new image. If the expected visual field is found around each fixation point, the memorised object is recognised. Results are presented from trials in which individual objects were first memorised and then searched for in collages of similar objects acting as distractors. In the different collages, entries of the memorised objects were subjected to various combinations of rotation, translation and noise corruption. The model successfully detected the memorised object in over 93% of the ‘object present’ trials, and correctly rejected collages in over 98% of the trials in which the object was not present in the collage. These results are compared with those obtained using a correlation-based recogniser, and the behavioural model is found to provide superior performance.
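The Gabor filterbank at the heart of this model can be illustrated with a short sketch. The paper does not state its filter parameters, so the kernel size, wavelengths and orientations below are illustrative assumptions; the code builds the real part of a small multi-resolution/orientation bank of Gabor kernels.

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor kernel: a cosine carrier modulated by a
    Gaussian envelope (one filter of the multi-resolution bank)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate the coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

# two scales x four orientations: a small multi-resolution/orientation bank
bank = [gabor_kernel(9, wl, th, sigma=wl / 2)
        for wl in (4.0, 8.0)
        for th in (0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4)]
```

Convolving an image patch with each kernel in the bank yields the feature vector attached to a fixation point.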
Applications and science of artificial neural networks. Conference | 1997
Thomas F. Rathbun; Steven K. Rogers; Martin P. DeSimio; Mark E. Oxley
The MLP Iterative Construction Algorithm (MICA) designs a Multi-Layer Perceptron (MLP) neural network as it trains. MICA adds Hidden Layer Nodes one at a time, separating classes on a pair-wise basis, until the data is projected into a linearly separable space by class. Then MICA trains the Output Layer Nodes, which results in an MLP that achieves 100% accuracy on the training data. MICA, like Backprop, produces an MLP that is a minimum mean squared error approximation of the Bayes optimal discriminant function. Moreover, MICA's training technique yields a novel feature selection technique and a hidden node pruning technique.
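The pair-wise construction idea can be sketched in a few lines. The abstract does not specify how each hidden node's discriminant is fitted, so the mean-difference hyperplane below is an assumed stand-in; the point is that each class pair contributes a threshold node, and inputs are projected into the resulting binary code space for the output layer to separate linearly.

```python
from itertools import combinations

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def pairwise_hidden_nodes(data_by_class):
    """One threshold hidden node per class pair; the mean-difference
    direction stands in for whatever discriminant MICA actually fits."""
    nodes = []
    for a, b in combinations(sorted(data_by_class), 2):
        ma, mb = mean(data_by_class[a]), mean(data_by_class[b])
        w = [x - y for x, y in zip(ma, mb)]           # normal to the boundary
        mid = [(x + y) / 2 for x, y in zip(ma, mb)]   # midpoint between means
        bias = -sum(wi * mi for wi, mi in zip(w, mid))
        nodes.append((w, bias))
    return nodes

def hidden_layer(nodes, x):
    """Project an input into the binary hidden-node space; output-layer
    training then only has to separate these codes."""
    return [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            for w, b in nodes]

data = {0: [[0.0, 0.0], [0.1, 0.2]],
        1: [[1.0, 1.0], [0.9, 1.1]],
        2: [[1.0, 0.0], [1.1, 0.1]]}
nodes = pairwise_hidden_nodes(data)
```

With three classes, three pair-wise nodes are created, and inputs from different classes land on distinct binary codes.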
Hybrid Image and Signal Processing II | 1990
Steven K. Rogers; John D. Cline; Matthew Kabrisky; James P. Mills
In this paper, the application of Joint Transform Correlator (JTC) techniques for use in automatic target recognition using actual sensor data is addressed. The problem of interest is the detection and classification of objects in forward looking infrared (FLIR) images. A Joint Transform Correlator architecture using a single magnetooptical spatial light modulator (MOSLM) and a charge coupled device (CCD) camera is designed and tested. A unique technique for binarizing the fringe pattern, commonly called the joint transform power spectrum (JTPS), that enhances the application to actual sensor images is presented. The modification of the architecture to allow for scale and rotation invariant target recognition is also presented.
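The binarized joint transform power spectrum can be simulated digitally. The sketch below is a 1-D, pure-Python stand-in for the optical 2-D architecture: it joins reference and target signals, forms the JTPS, binarizes it (the median threshold here is an assumption, not the paper's technique), and transforms again to obtain correlation output.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for tiny demos)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def jtc_1d(reference, target, binarize=True):
    """Digital 1-D joint transform correlation: transform the joined
    signals, form the joint transform power spectrum (JTPS), optionally
    binarize it, then transform again; off-axis peaks mark a match."""
    joint = reference + target
    jtps = [abs(v) ** 2 for v in dft(joint)]
    if binarize:
        threshold = sorted(jtps)[len(jtps) // 2]   # median threshold (assumed)
        jtps = [1.0 if v >= threshold else 0.0 for v in jtps]
    return [abs(v) for v in dft(jtps)]

reference = [0.0, 1.0, 1.0, 0.0]
target = [0.0, 1.0, 1.0, 0.0]          # identical object in the input scene
corr = jtc_1d(reference, target)
```

The zero-frequency term dominates the correlation plane; in the optical system it is the off-axis peaks, at offsets set by the separation of reference and target, that signal a detection.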
Proceedings of SPIE | 1998
Lemuel R. Myers; John G. Keller; Steven K. Rogers; Matthew Kabrisky
In this paper, it is shown that Evolution Programs can be used to search the weight space for Bayesian training of a Neural Network. Bayesian Analysis is an integration problem (as opposed to an optimization problem) over weight space. The first application of the Bayesian method primarily focused on using a Gaussian approximation of the posterior distribution in an area of high probability in the weight space instead of using formal integration. More recently, training a neural network in a Bayesian fashion has been accomplished by searching weight space for areas of high probability density which obviates the need for the Gaussian assumption. In particular, a hybrid Monte-Carlo method was used to search weight space in a logical manner to obtain an arbitrarily close approximation of the integration involved in a Bayesian analysis. Genetic Algorithms have been used in the past to determine the weights in an ANN, and (with some slight modifications) are ideally suited for searching the weight space to approximate the Bayesian integration. In this respect, the Bayesian framework provides a simple and elegant way to apply Evolution Programs to the ANN training problem. While this paper concentrates on using ANNs as classifiers, the generalization to regression problems is straightforward.
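As an illustration of the idea, the toy evolution program below searches the weight space of a one-input logistic "network" for regions of high posterior density, then averages predictions over the final population. The fitness function, truncation selection, mutation scheme and all parameters are assumptions made for this sketch, not the authors' method.

```python
import math, random

random.seed(0)

def log_posterior(w, data, alpha=1.0):
    """Unnormalized log posterior for a one-input logistic unit:
    Bernoulli log likelihood plus a Gaussian (weight-decay) log prior."""
    lp = -alpha * sum(wi * wi for wi in w) / 2
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x + w[1])))
        p = min(max(p, 1e-12), 1 - 1e-12)
        lp += y * math.log(p) + (1 - y) * math.log(1 - p)
    return lp

def evolve(data, pop_size=40, gens=60, sigma=0.3):
    """Evolution program: keep the better half by posterior density and
    refill with mutated copies; the final population concentrates in
    high-posterior regions of weight space."""
    pop = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: log_posterior(w, data), reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [[wi + random.gauss(0, sigma) for wi in w]
                           for w in survivors]
    return pop

data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
pop = evolve(data)

def predict(pop, x):
    """Bayesian-style prediction: average the output over the population."""
    return sum(1.0 / (1.0 + math.exp(-(w[0] * x + w[1])))
               for w in pop) / len(pop)
```

Averaging over the population, rather than committing to a single weight vector, is what approximates the Bayesian integration the abstract describes.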
Proceedings of SPIE | 1998
John G. Keller; Lemuel R. Myers; Steven K. Rogers; Matthew Kabrisky; Mark E. Oxley; Martin P. DeSimio
We introduce an object recognition system using Gabor filters to model the biological visual field and a saccadic behavioral model to emulate biological active vision. A high resolution image containing an object of interest is first processed by an ensemble of multi-resolution, multi-orientation Gabor filters. The object can then be described by an alternating sequence of fixation coordinates and the Gabor responses at pixel locations surrounding that fixation point. Once this sequence is memorized, a complex image can be searched for the same location/feature sequence, indicating the presence of the memorized object. The model is suitable for memorization of arbitrary objects, and a simple example is presented using a human face as the object of interest.
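The memorize-then-match loop can be sketched minimally. Here the mean intensity of a 3x3 window stands in for the Gabor responses of the full model (an assumption to keep the example short), and the feature tolerance is illustrative.

```python
def memorize(image, fixations):
    """Store the object as (saccade offset, local feature) pairs relative
    to the first fixation; mean 3x3 intensity stands in for the Gabor
    responses used by the full model."""
    r0, c0 = fixations[0]
    def feature(r, c):
        vals = [image[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
        return sum(vals) / 9.0
    return [((r - r0, c - c0), feature(r, c)) for r, c in fixations]

def match_at(image, memory, r0, c0, tol=0.1):
    """Replay the memorized saccade sequence at (r0, c0); the object is
    declared present only if every expected feature is found."""
    for (dr, dc), expected in memory:
        r, c = r0 + dr, c0 + dc
        if not (1 <= r < len(image) - 1 and 1 <= c < len(image[0]) - 1):
            return False
        vals = [image[r + a][c + b] for a in (-1, 0, 1) for b in (-1, 0, 1)]
        if abs(sum(vals) / 9.0 - expected) > tol:
            return False
    return True

# a 10x10 image with one bright 3x3 "object"
image = [[0.0] * 10 for _ in range(10)]
for r in range(2, 5):
    for c in range(2, 5):
        image[r][c] = 1.0
memory = memorize(image, [(3, 3), (2, 2), (4, 4)])
```

Replaying the sequence at the memorized location succeeds; replaying it over empty background fails at the first fixation.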
Proceedings of SPIE | 1998
Terry A. Wilson; Steven K. Rogers; Mark E. Oxley; Thomas F. Rathbun; Martin P. DeSimio; Matthew Kabrisky
A Radial Basis Function (RBF) Iterative Construction Algorithm (RICA) is presented that autonomously determines the size of the network architecture needed to perform classification on a given data set. The algorithm uses a combination of a Gaussian goodness-of-fit measure and Mahalanobis distance clustering to calculate the number of hidden nodes needed and to estimate the parameters of the hidden node basis functions. An iterative minimum squared error reduction method is used to optimize the output layer weights. RICA is compared to several neural network algorithms, including a fixed architecture multilayer perceptron (MLP), a fixed architecture RBF, and an adaptive architecture MLP, using optical character recognition and infrared image data.
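The output-layer step of an RBF network, a minimum squared error fit with the hidden layer held fixed, can be sketched directly. The normal-equations solve below is an illustrative stand-in for the paper's iterative reduction method, and the centers and widths are assumed rather than produced by the goodness-of-fit/Mahalanobis clustering step.

```python
import math

def rbf(x, center, width):
    """Gaussian radial basis function (one hidden node)."""
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mv - f * pv for mv, pv in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_output_weights(xs, ys, centers, width):
    """Minimum squared error output weights: solve the normal equations
    (Phi^T Phi) w = Phi^T y for the fixed hidden layer."""
    phi = [[rbf(x, c, width) for c in centers] for x in xs]
    n = len(centers)
    ata = [[sum(phi[i][a] * phi[i][b] for i in range(len(xs)))
            for b in range(n)] for a in range(n)]
    atb = [sum(phi[i][a] * ys[i] for i in range(len(xs))) for a in range(n)]
    return solve(ata, atb)

xs = [0.0, 5.0]
ys = [1.0, -1.0]
weights = fit_output_weights(xs, ys, centers=[0.0, 5.0], width=1.0)
```

With one well-separated center per sample the design matrix is nearly the identity, so the fitted weights essentially reproduce the targets.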
IFAC Proceedings Volumes | 1998
Claudia Kropas Hughes; Steven K. Rogers; Matthew Kabrisky; Mark E. Oxley
Abstract The digital computer is a wonderful tool for numeric calculations at incredible speeds, but humans are still vastly superior in pattern recognition and classification of complex visual image patterns. An understanding of how our visual system operates has the potential to inspire us to better, more efficient means of automatic, computer-aided pattern analysis of images. Scale, shift and rotation are easily compensated for within the Human Visual System, and the scene is understood even when presentation is made for the first time. A study of the models for the Human Visual System provides insight into pattern analysis techniques for automatic pattern recognition and classification.
Applications and science of artificial neural networks. Conference | 1997
Jeffrey L. Blackmon; Steven K. Rogers
This paper demonstrates the usefulness of neural networks in classifying environmental samples from compound mixture data. This problem was solved by careful determination of neural network learning parameters and forward sequential selection of input features. Finally, the fundamental limit of any classifier on this data was determined using Bayes error bounding.
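Forward sequential selection can be sketched in a few lines: repeatedly add the single feature that most improves a classification criterion. The nearest-class-mean accuracy used below is an assumed stand-in for the neural-network criterion of the paper.

```python
def accuracy(features, train, labels):
    """Training accuracy of a nearest-class-mean classifier restricted
    to the chosen feature subset (stand-in selection criterion)."""
    classes = sorted(set(labels))
    means = {}
    for cls in classes:
        rows = [r for r, l in zip(train, labels) if l == cls]
        means[cls] = [sum(r[f] for r in rows) / len(rows) for f in features]
    correct = 0
    for row, label in zip(train, labels):
        pred = min(classes, key=lambda cls: sum(
            (row[f] - m) ** 2 for f, m in zip(features, means[cls])))
        correct += pred == label
    return correct / len(train)

def forward_select(train, labels, n_features):
    """Greedy forward sequential selection: add the feature that most
    improves the criterion, stopping at n_features or no improvement."""
    chosen, remaining = [], list(range(len(train[0])))
    while remaining:
        best = max(remaining, key=lambda f: accuracy(chosen + [f], train, labels))
        if chosen and accuracy(chosen + [best], train, labels) <= accuracy(chosen, train, labels):
            break
        chosen.append(best)
        remaining.remove(best)
        if len(chosen) >= n_features:
            break
    return chosen

# feature 0 is noisy, feature 1 separates the classes
train = [[0.5, 0.0], [0.4, 0.1], [0.6, 1.0], [0.5, 0.9]]
labels = [0, 0, 1, 1]
chosen = forward_select(train, labels, 1)
```

On this toy set the discriminative feature is selected first, which is exactly the behaviour the selection procedure relies on.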
Applications of Artificial Neural Networks | 1990
Bruce A. Conway; Matthew Kabrisky; Steven K. Rogers; Gary B. Lamont
This report details the implementation of the Kohonen Self-Organizing Net on a 32-node Intel iPSC/1 HyperCube and the 25% performance improvement gained by increasing the dimensionality of the net without increasing processing requirements.

1. KOHONEN SELF-ORGANIZING MAP IMPLEMENTED ON THE INTEL iPSC HYPERCUBE

This report examines the implementation of a Kohonen net on the Intel iPSC/1 HyperCube and explores the performance improvement gained by increasing the dimensionality of the Kohonen net from the conventional two-dimensional case to the n-dimensional case, where n is the number of inputs to the Kohonen net. In this example the Kohonen net's performance is improved by increasing the dimensionality of the net without increasing the number of weights or nodes in the net and without increasing processing requirements. Kohonen, in his Tutorial/ICCN, states that the dimensionality of the grid is not restricted to two, but that maps in the biological brain tend to be two-dimensional. It is proposed that this is not a particularly severe restriction in the brain, where not all inputs are connected to all nodes and specific maps can be formed for specific functions; but in the case of the massively connected Kohonen net, reducing all problems to two dimensions places an unnecessary burden on the learning process and necessarily causes the loss of information regarding the interrelationship of inputs and corresponding output clusters. Indeed, reducing the dimension is a projection
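A minimal serial Kohonen net, here on the conventional two-dimensional grid the report argues against restricting to, shows the training loop under discussion. Grid size, learning-rate and radius schedules are illustrative assumptions, not the report's parallel implementation.

```python
import math, random

random.seed(1)

def train_som(data, grid_w, grid_h, iters=200, lr0=0.5, radius0=1.5):
    """Kohonen self-organizing map on a 2-D grid: find the best-matching
    unit for each sample, then pull it and its grid neighbours toward the
    input, with learning rate and radius decaying over time."""
    dim = len(data[0])
    weights = {(i, j): [random.random() for _ in range(dim)]
               for i in range(grid_w) for j in range(grid_h)}
    for t in range(iters):
        x = random.choice(data)
        frac = t / iters
        lr = lr0 * (1 - frac)
        radius = radius0 * (1 - frac) + 0.5
        # best-matching unit: the node whose weights are closest to x
        bmu = min(weights, key=lambda n: sum(
            (w - v) ** 2 for w, v in zip(weights[n], x)))
        for node, w in weights.items():
            d2 = (node[0] - bmu[0]) ** 2 + (node[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2 * radius * radius))   # neighbourhood factor
            weights[node] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return weights

# two clusters; the trained map should devote different nodes to each
data = [[0.0, 0.0], [0.0, 0.1], [1.0, 1.0], [0.9, 1.0]]
som = train_som(data, 3, 3)
```

The n-dimensional variant discussed in the report changes only the grid indexing, not this update rule.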
Applications of Artificial Neural Networks | 1990
Arthur L. Sumner; Steven K. Rogers; Gregory L. Tarr; Matthew Kabrisky; David Norman
Spectral analysis involving the determination of atomic and molecular species present in a spectrum of multi-spectral data is a very time-consuming task, especially considering the fact that there are typically thousands of spectra collected during each experiment. Due to the overwhelming amount of available spectral data and the time required to analyze these data, a robust automatic method for doing at least some preliminary spectral analysis is needed. This research focused on the development of a supervised artificial neural network with error-correction learning, specifically a three-layer feed-forward backpropagation perceptron. The objective was to develop a neural network which would do the preliminary spectral analysis and save the analysts from the task of reviewing thousands of spectral frames. The input to the network is raw spectral data, with the output consisting of the classification of both atomic and molecular species in the source.
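A three-layer feed-forward perceptron with error-correction (backpropagation) learning can be sketched as follows. The two-channel toy "spectra", network size and learning rate are all assumptions for illustration; real inputs would be full spectral frames.

```python
import math, random

random.seed(2)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class ThreeLayerNet:
    """Three-layer feed-forward perceptron trained by backpropagation."""
    def __init__(self, n_in, n_hidden, n_out):
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
                   for _ in range(n_out)]

    def forward(self, x):
        xb = x + [1.0]                                   # input plus bias
        h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        hb = h + [1.0]
        o = [sigmoid(sum(w * v for w, v in zip(row, hb))) for row in self.w2]
        return h, o

    def train_step(self, x, target, lr=0.5):
        h, o = self.forward(x)
        xb, hb = x + [1.0], h + [1.0]
        # output-layer deltas, then propagate the error back to the hidden layer
        d_out = [(t - oi) * oi * (1 - oi) for t, oi in zip(target, o)]
        d_hid = [hi * (1 - hi) * sum(d * self.w2[k][j]
                                     for k, d in enumerate(d_out))
                 for j, hi in enumerate(h)]
        for k, d in enumerate(d_out):
            self.w2[k] = [w + lr * d * v for w, v in zip(self.w2[k], hb)]
        for j, d in enumerate(d_hid):
            self.w1[j] = [w + lr * d * v for w, v in zip(self.w1[j], xb)]
        return sum((t - oi) ** 2 for t, oi in zip(target, o))

# toy "spectra": two channels, class determined by which channel dominates
samples = [([1.0, 0.1], [1.0, 0.0]), ([0.2, 0.9], [0.0, 1.0]),
           ([0.8, 0.0], [1.0, 0.0]), ([0.1, 1.0], [0.0, 1.0])]
net = ThreeLayerNet(2, 4, 2)
losses = [sum(net.train_step(x, t) for x, t in samples) for _ in range(300)]
```

The squared error falls over the epochs, the error-correction behaviour the abstract relies on.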