
Publication


Featured research published by William J. B. Oldham.


IEEE Transactions on Neural Networks | 1994

An iterative method for training multilayer networks with threshold functions

Edward M. Corwin; Antonette M. Logar; William J. B. Oldham

Concerns the problem of finding weights for feed-forward networks in which threshold functions replace the more common logistic node output function. The advantage of such weights is that the complexity of the hardware implementation of such networks is greatly reduced. If the task to be learned does not change over time, it may be sufficient to find the correct weights for a threshold function network off-line and to transfer these weights to the hardware implementation. This paper provides a mathematical foundation for training a network with standard logistic function nodes and gradually altering the function to allow a mapping to a threshold unit network. The procedure is analogous to taking the limit of the logistic function as the gain parameter goes to infinity. It is demonstrated that, if the error in a trained network is small, a small change in the gain parameter will cause a small change in the network error. The result is that a network that must be implemented with threshold functions can first be trained as a traditional back-propagation network using gradient descent, and further trained with progressively steeper logistic functions. In theory, this process could require many repetitions. In simulations, however, the weights have been successfully mapped to a true threshold network after a modest number of slope changes. It is important to emphasize that this method is only applicable to situations for which off-line learning is appropriate.
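
The following is a minimal sketch of the gain-annealing idea on XOR, not the paper's exact procedure: train a one-hidden-layer network with a logistic activation 1/(1 + exp(-gain * x)), retrain at progressively steeper gains, then evaluate with hard thresholds (the infinite-gain limit). The network size, gain schedule, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def logistic(x, gain):
    return 1.0 / (1.0 + np.exp(-gain * x))

for gain in [1.0, 2.0, 4.0, 8.0, 16.0]:        # progressively steeper slopes
    lr = 0.5 / gain                            # damp steps as gradients steepen
    for _ in range(5000):                      # plain gradient descent at this gain
        h = logistic(X @ W1 + b1, gain)
        y = logistic(h @ W2 + b2, gain)
        dy = (y - T) * gain * y * (1 - y)      # d(squared error)/d(pre-activation)
        dh = (dy @ W2.T) * gain * h * (1 - h)
        W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(0)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

# Evaluate with true threshold units (the infinite-gain limit of the logistic).
h = (X @ W1 + b1 > 0).astype(float)
y = (h @ W2 + b2 > 0).astype(float)
print(y.ravel())   # ideally matches the XOR targets [0, 1, 1, 0]
```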


International Symposium on Neural Networks | 1993

A comparison of recurrent neural network learning algorithms

Antonette M. Logar; Edward M. Corwin; William J. B. Oldham

Selected recurrent network training algorithms are described, and their performances are compared with respect to speed and accuracy for a given problem. Detailed complexity analyses are presented to allow more accurate comparison between training algorithms for networks with few nodes. Network performance for predicting the Mackey-Glass equation is reported for each of the recurrent networks, as well as for a backpropagation network. Using networks of comparable size, the recurrent networks produce significantly better prediction accuracy. The accuracy of the backpropagation network is improved by increasing the size of the network, but the recurrent networks continue to produce better results for the large prediction distances. Of the recurrent networks considered, Pearlmutter's off-line training algorithm produces the best results.
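
For reference, below is a small sketch generating the Mackey-Glass series used as the prediction benchmark above: dx/dt = a*x(t-tau)/(1 + x(t-tau)^10) - b*x(t). The parameters a=0.2, b=0.1, tau=17 give the chaotic regime commonly used in this literature; the Euler step size and constant initial history are illustrative choices, not taken from the paper.

```python
import numpy as np

def mackey_glass(n_samples, a=0.2, b=0.1, tau=17, dt=1.0, x0=1.2):
    history = int(tau / dt)                 # delayed samples needed at the start
    x = np.full(n_samples + history, x0)    # constant initial history
    for t in range(history, n_samples + history - 1):
        x_tau = x[t - history]              # delayed value x(t - tau)
        x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau**10) - b * x[t])
    return x[history:]

series = mackey_glass(1000)
print(series[:5])
```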


Computers in Industry | 1995

A neural network approach for datum selection in computer-aided process planning

Jiannan Mei; Hong-C. Zhang; William J. B. Oldham

The goal of process planning is to convert design specifications into manufacturing instructions to make products within the specifications at the lowest cost. Therefore, for a computer-aided process planning (CAPP) system to generate a feasible and economical process plan, the tolerance information from design and manufacturing processes must be carefully studied. Geometric tolerances are usually specified in design only when higher accuracy of a feature (such as flatness, roundness, etc.) or a relationship (such as parallelism, perpendicularity, etc.) is required. For relationships with dimensional tolerances, or geometric tolerances with specified design datum(s), the selection of manufacturing datum and setup in process planning plays a very important role in making parts precisely and economically. This paper presents a neural network approach for CAPP to automatically select manufacturing datums for rotational parts on the basis of the shape of the parts and tolerance constraints. A back-propagation algorithm is used and some experiments are conducted. The results are analyzed and further research is proposed.
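
A heavily hedged sketch of the general scheme described above: encode a rotational part's shape and tolerance constraints as a fixed-length feature vector and let a trained back-propagation network score candidate manufacturing datums. The feature layout, network sizes, and weights here are hypothetical placeholders, not the paper's actual encoding.

```python
import numpy as np

def select_datum(features, W1, b1, W2, b2):
    """Forward pass of a small MLP; returns the index of the highest-scoring datum."""
    h = np.tanh(features @ W1 + b1)
    scores = h @ W2 + b2
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
n_features, n_hidden, n_datums = 12, 8, 4        # illustrative sizes only
W1 = rng.normal(0, 0.5, (n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_datums));   b2 = np.zeros(n_datums)

part = rng.random(n_features)    # stand-in for shape + tolerance features
print("selected datum surface:", select_datum(part, W1, b1, W2, b2))
```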


International Symposium on Neural Networks | 1992

A fractal dimension feature extraction technique for detecting flaws in silicon wafers

G.T. Stubbendieck; William J. B. Oldham

The authors present a feature extraction method for detecting flaws in silicon wafers based on the idea of fractal dimension. They begin by discussing why fractal dimension is a good way to model wafer surface images. They then describe how to calculate the fractal dimension of a computer image of a wafer and how the results of such a calculation can be used for fault detection and image segmentation. The results of the application of this process to some wafer images are included.
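
Below is a minimal box-counting sketch of fractal dimension estimation for a binary image: count occupied boxes N(s) at several box sizes s and fit the slope of log N(s) against log(1/s). The paper's exact estimator for grey-level wafer surfaces may differ; this only illustrates the basic idea, and the test image is a random stand-in.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s   # crop to multiples of s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()              # boxes containing any pixel
        counts.append(max(occupied, 1))
    # N(s) ~ s^(-D), so D is the slope of log N(s) vs log(1/s)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(2)
img = rng.random((128, 128)) > 0.7        # stand-in for a thresholded wafer image
print("estimated dimension:", box_counting_dimension(img))
```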


ACM Symposium on Applied Computing | 1992

Predicting acid concentrations in processing plant effluent: an application of time series prediction using neural networks

Antonette M. Logar; Edward M. Corwin; William J. B. Oldham

This paper explores the application of neural networks, specifically back-propagation networks, to the problem of predicting the acid concentration of wastewater. Experiments were conducted to determine the effects of varying network parameters such as the size of the lag, the normalization technique, and the number of steps forward in time the network is predicting. Improved prediction results were obtained by incorporating knowledge about the previous error, the difference between the desired and actual network responses, into the current network response. The reduction in the total prediction error for the acid concentration data, as measured by the average relative variance, ranged from 25% to a factor of six. Similar results were obtained when predicting the Mackey-Glass equations, with error rate reductions averaging a factor of ten on one, two, four and eight step predictions. An unexpected, but important, payoff was a dramatic reduction in the number of required training iterations. For the acid concentration data, the number of training iterations required to achieve a reasonable level of performance was reduced from several thousand to a few hundred. The ability to improve performance for multistep prediction while reducing the required number of training iterations makes this technique particularly applicable to real-time systems.
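
Here is a hedged sketch of the error-feedback idea: adjust the network's current prediction by the error it made on the previous (already observed) step, and score predictions with the average relative variance (ARV), i.e. mean squared error normalized by the variance of the target series. The exact correction used in the paper may differ; this shows one plausible reading, on synthetic stand-in data.

```python
import numpy as np

def arv(predicted, actual):
    """Average relative variance: MSE normalized by the target's variance."""
    return np.mean((predicted - actual) ** 2) / np.var(actual)

def error_feedback(raw_predictions, actual):
    """Add the previous step's observed error to the current prediction."""
    corrected = np.copy(raw_predictions)
    corrected[1:] += actual[:-1] - raw_predictions[:-1]
    return corrected

t = np.linspace(0, 20, 200)
actual = np.sin(t)
raw = np.sin(t) + 0.1                      # stand-in for a biased network output
print("ARV raw:      ", arv(raw, actual))
print("ARV corrected:", arv(error_feedback(raw, actual), actual))
```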


International Symposium on Neural Networks | 1996

Embedding coupled oscillators into a feedforward architecture for improved time series prediction

Edward M. Corwin; Antonette M. Logar; William J. B. Oldham

The network defined by Hayashi (1994), like many purely recurrent networks, has proven very difficult to train to arbitrary time series. Many recurrent architectures are best suited for producing specific cyclic behaviors. As a result, a hybrid network has been developed to allow for training to more general sequences. The network used here is a combination of standard feedforward nodes and Hayashi oscillator pairs. A learning rule, developed using a discrete mathematics approach, is presented for the hybrid network. Significant improvements in prediction accuracy were produced compared to a pure Hayashi network and a backpropagation network. Data sets used for testing the effectiveness of this approach include Mackey-Glass, sunspot, and ECG data. The hybrid models reduced training and testing error in each case by at least 34%.
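
The sketch below illustrates only the hybrid idea in the abstract, heavily hedged: a discrete-time oscillator pair (here a simple cross-coupled rotation, not Hayashi's actual equations) runs alongside standard feedforward nodes, and a readout combines the oscillator state with the ordinary hidden activations. All sizes, weights, and the oscillator form are illustrative assumptions.

```python
import numpy as np

def hybrid_step(u, state, params, omega=0.3):
    x, y = state
    x_new = x + omega * y            # cross-coupled pair: rotates, supplying
    y_new = y - omega * x_new        # a built-in cyclic signal to the readout
    W_in, b_in, W_out, b_out = params
    h = np.tanh(u @ W_in + b_in)                 # ordinary feedforward nodes
    z = np.concatenate([h, [x_new, y_new]])      # oscillator state joins the layer
    return z @ W_out + b_out, (x_new, y_new)

rng = np.random.default_rng(3)
n_in, n_hidden = 4, 6
params = (rng.normal(0, 0.5, (n_in, n_hidden)), np.zeros(n_hidden),
          rng.normal(0, 0.5, (n_hidden + 2,)), 0.0)

state = (1.0, 0.0)
for t in range(5):
    out, state = hybrid_step(rng.random(n_in), state, params)
    print(f"t={t} prediction={out:.3f}")
```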


International Symposium on Neural Networks | 1996

An extension to the Hayashi coupled oscillator network training rule

Edward M. Corwin; Antonette M. Logar; William J. B. Oldham

A variety of recurrent network architectures have been developed and applied to the problem of time series prediction. One particularly interesting network was developed by Hayashi (1994). Hayashi presented a network of coupled oscillators and a training rule for the network. His derivation was based on continuous mathematics and provided a mechanism for updating the weights into the output nodes. The work presented here gives an alternative derivation of Hayashi's learning rule based on discrete mathematics, as well as an extension to the learning rule which allows for updating of all weights in the network.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1994

Performance comparisons of classification techniques for multi-font character recognition

Antonette M. Logar; Edward M. Corwin; William J. B. Oldham

This paper reports the performance of several neural network models on the problem of multi-font character recognition. The networks are trained on machine generated, upper-case English letters in selected fonts. The task is to recognize the same letters in different fonts. The results presented here were produced by back-propagation networks, radial basis networks and a new hybrid algorithm which is a combination of the two. These results are compared to those of the Hogg-Huberman model as well as to those of nearest neighbor and maximum likelihood classifiers. The effects of varying the number of nodes in the hidden layer, the initial conditions, and the number of iterations in a back-propagation network were studied. The experimental results indicate that the number of nodes is an important factor in the recognition rate and that over-training is a significant problem. Different initial conditions also had a measurable effect on performance. The radial basis experiments used different numbers of centers and differing techniques for selecting the means and standard deviations. The best results were obtained with one center per training vector in which the standard deviation for each center was set to the same small number. Finally, a new hybrid technique is discussed in which a radial basis network is used to determine a starting point for a back-propagation network. The back-propagation network refines the radial basis means and standard deviations, which are then placed back into the radial basis network and used for another iteration. All three networks out-performed the Hogg-Huberman network as well as the maximum likelihood classifiers.
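
A small sketch of the radial basis configuration the paper found best: one Gaussian center per training vector, all with the same small standard deviation, followed by a linear output layer, here solved by least squares. The data are random stand-ins and sigma is an illustrative value; the paper's training details may differ.

```python
import numpy as np

def rbf_features(X, centers, sigma):
    """Gaussian activations of each input against each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(4)
X_train = rng.random((30, 8))                # stand-in feature vectors
labels = rng.integers(0, 26, 30)             # stand-in letter classes A..Z
Y = np.eye(26)[labels]                       # one-hot targets

sigma = 0.5                                  # same small sigma for every center
Phi = rbf_features(X_train, X_train, sigma)  # one center per training vector
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)  # linear output weights

pred = rbf_features(X_train, X_train, sigma) @ W
print("training accuracy:", (pred.argmax(1) == labels).mean())
```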


Neural Network Models for Optical Computing | 1988

Hardware Implementation Of An Artificial Neural Network

Ross A McClain; Charles H Rogers; William J. B. Oldham

A hardware implementation of a lightly connected artificial neural network known as the Hogg-Huberman model (1) (2) is described. The hardware is built around NCR's Geometric Arithmetic Parallel Processor (GAPP) chip. A large performance gain is shown between this implementation and a simulation done in FORTRAN on a VAX 11/780. Even though the direct processor to processor communications are limited to nearest neighbors, models which require other connections can be implemented with this hardware.


International Symposium on Neural Networks | 1993

A comparison of neural network and nearest-neighbor classifiers of handwritten lower-case letters

T.M. English; M.d.P. Gomez-Gil; William J. B. Oldham

The authors apply k-nearest-neighbor classifiers, fully-connected networks, and networks of an architecture devised by LeCun to the problem of recognizing handwritten (cursive) lower-case letters. Results reported differ from those of studies involving hand-printed characters. LeCun networks give higher accuracy (77%) than fully-connected networks (74%), which in turn give higher accuracy than k-nearest neighbor classifiers (71%). It is observed that training with an error criterion based on the L^10 norm allows LeCun networks to avoid some local minima encountered when the squared error (L^2) criterion is used.
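
A minimal sketch of the L^p error criteria compared above: the squared-error (L^2) criterion versus an L^10 criterion, whose gradient is dominated by the largest per-output errors and can therefore push training out of some flat regions where the L^2 criterion stalls. The values below are illustrative, not from the paper.

```python
import numpy as np

def lp_loss(output, target, p):
    """Mean L^p error criterion."""
    return np.mean(np.abs(output - target) ** p)

def lp_grad(output, target, p):
    """Gradient of the mean L^p criterion with respect to the output."""
    e = output - target
    return p * np.abs(e) ** (p - 1) * np.sign(e) / e.size

output = np.array([0.9, 0.5, 0.1])
target = np.array([1.0, 0.0, 0.0])
for p in (2, 10):
    print(f"L^{p}: loss={lp_loss(output, target, p):.4f}",
          f"grad={np.round(lp_grad(output, target, p), 4)}")
# For p=10 the gradient is almost entirely driven by the largest error (0.5).
```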

Collaboration


Dive into William J. B. Oldham's collaborations.

Top Co-Authors

Antonette M. Logar
South Dakota School of Mines and Technology

Edward M. Corwin
South Dakota School of Mines and Technology

Vir V. Phoha
Northeastern State University

Pilar Gomez-Gil
National Institute of Astrophysics