Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Edward M. Corwin is active.

Publication


Featured research published by Edward M. Corwin.


Applied Soft Computing | 2011

The use of neural network and discrete Fourier transform for real-time evaluation of friction stir welding

Enkhsaikhan Boldsaikhan; Edward M. Corwin; Antonette M. Logar; William J. Arbegast

This paper introduces a novel real-time approach to detecting wormhole defects in friction stir welding in a nondestructive manner. The approach is to evaluate feedback forces provided by the welding process using the discrete Fourier transform and a multilayer neural network. It is asserted here that the oscillations of the feedback forces are related to the dynamics of the plasticized material flow, so that the frequency spectra of the feedback forces can be used for detecting wormhole defects. A one-hidden-layer neural network trained with the backpropagation algorithm is used for classifying the frequency patterns of the feedback forces. The neural network is trained and optimized with a data set of forge-load control welds, and its generality is tested with a novel data set of position control welds. Overall, about 95% classification accuracy is achieved, with no bad welds classified as good. Accordingly, the present paper demonstrates an approach for providing important feedback information about weld quality in real time to a control system for friction stir welding.
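
A minimal sketch of the pipeline described above, assuming NumPy and scikit-learn, with synthetic force traces standing in for measured welding forces and MLPClassifier standing in for the paper's one-hidden-layer backpropagation network:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def force_spectrum(force_signal, n_bins=32):
        # Magnitude spectrum of a sampled feedback-force trace via the DFT.
        spectrum = np.abs(np.fft.rfft(force_signal))[:n_bins]
        return spectrum / (np.linalg.norm(spectrum) + 1e-12)  # scale-invariant

    # Synthetic stand-in data: "good" welds oscillate near one frequency,
    # "bad" (wormhole) welds near another. Real inputs would be measured forces.
    t = np.arange(256)
    good = [np.sin(0.2 * t) + 0.1 * rng.standard_normal(256) for _ in range(40)]
    bad  = [np.sin(0.6 * t) + 0.1 * rng.standard_normal(256) for _ in range(40)]
    X = np.array([force_spectrum(s) for s in good + bad])
    y = np.array([1] * 40 + [0] * 40)   # 1 = good weld, 0 = wormhole defect

    # One-hidden-layer network trained by backpropagation, as in the paper.
    clf = MLPClassifier(hidden_layer_sizes=(16,), activation='logistic',
                        solver='sgd', max_iter=5000, random_state=0)
    clf.fit(X, y)
    print(clf.score(X, y))              # training accuracy on the toy data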


IEEE Transactions on Neural Networks | 1994

An iterative method for training multilayer networks with threshold functions

Edward M. Corwin; Antonette M. Logar; William J. B. Oldham

Concerns the problem of finding weights for feed-forward networks in which threshold functions replace the more common logistic node output function. The advantage of such weights is that the complexity of the hardware implementation of such networks is greatly reduced. If the task to be learned does not change over time, it may be sufficient to find the correct weights for a threshold-function network off-line and to transfer these weights to the hardware implementation. This paper provides a mathematical foundation for training a network with standard logistic-function nodes and gradually altering the function to allow a mapping to a threshold-unit network. The procedure is analogous to taking the limit of the logistic function as the gain parameter goes to infinity. It is demonstrated that, if the error in a trained network is small, a small change in the gain parameter will cause a small change in the network error. The result is that a network that must be implemented with threshold functions can first be trained with traditional backpropagation using gradient descent, and then trained further with progressively steeper logistic functions. In theory, this process could require many repetitions. In simulations, however, the weights have been successfully mapped to a true threshold network after a modest number of slope changes. It is important to emphasize that this method is only applicable to situations for which off-line learning is appropriate.
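
A small numerical sketch of the limiting argument, assuming the logistic with gain k is 1/(1 + e^(-kx)); away from the decision boundary it converges to the threshold function as the gain grows, which is what justifies transferring weights trained at moderate gain to progressively steeper gains and finally to threshold units:

    import numpy as np

    def logistic(x, gain=1.0):
        # Logistic node output; tends to a 0/1 threshold as gain -> infinity.
        z = np.clip(gain * x, -60.0, 60.0)   # avoid overflow in exp
        return 1.0 / (1.0 + np.exp(-z))

    def threshold(x):
        # Hard threshold unit targeted by the hardware implementation.
        return (x >= 0).astype(float)

    # Away from the boundary (|x| > 0.05), the gap to the threshold function
    # shrinks as the gain increases -- the limit the training schedule exploits.
    x = np.linspace(-1.0, 1.0, 201)
    mask = np.abs(x) > 0.05
    for gain in [1, 4, 16, 64, 256]:
        gap = np.max(np.abs(logistic(x, gain) - threshold(x))[mask])
        print(f"gain={gain:4d}  max gap off the boundary: {gap:.4f}")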


International Symposium on Neural Networks | 1993

A comparison of recurrent neural network learning algorithms

Antonette M. Logar; Edward M. Corwin; William J. B. Oldham

Selected recurrent network training algorithms are described, and their performances are compared with respect to speed and accuracy for a given problem. Detailed complexity analyses are presented to allow more accurate comparison between training algorithms for networks with few nodes. Network performance for predicting the Mackey-Glass equation is reported for each of the recurrent networks, as well as for a backpropagation network. Using networks of comparable size, the recurrent networks produce significantly better prediction accuracy. The accuracy of the backpropagation network is improved by increasing the size of the network, but the recurrent networks continue to produce better results for large prediction distances. Of the recurrent networks considered, Pearlmutter's off-line training algorithm produces the best results.
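
For context, the Mackey-Glass benchmark series comes from the delay differential equation dx/dt = beta*x(t - tau)/(1 + x(t - tau)^n) - gamma*x(t). The parameters below (tau = 17, beta = 0.2, gamma = 0.1, n = 10) are the commonly used chaotic setting; the abstract does not state the paper's exact values, so they are an assumption. A simple Euler-integration sketch:

    import numpy as np

    def mackey_glass(n_steps, tau=17.0, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
        # Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t).
        delay = int(round(tau / dt))
        x = np.full(n_steps + delay, x0)
        for t in range(delay, n_steps + delay - 1):
            x_tau = x[t - delay]
            x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau**n) - gamma * x[t])
        return x[delay:]

    series = mackey_glass(1000)     # chaotic target series for the predictors
    print(series[:5])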


International Symposium on Neural Networks | 1996

Embedding coupled oscillators into a feedforward architecture for improved time series prediction

Edward M. Corwin; Antonette M. Logar; William J. B. Oldham

The network defined by Hayashi (1994), like many purely recurrent networks, has proven very difficult to train to arbitrary time series. Many recurrent architectures are best suited for producing specific cyclic behaviors. As a result, a hybrid network has been developed to allow for training to more general sequences. The network used here is a combination of standard feedforward nodes and Hayashi oscillator pairs. A learning rule, developed using a discrete mathematics approach, is presented for the hybrid network. Significant improvements in prediction accuracy were produced compared to a pure Hayashi network and a backpropagation network. Data sets used for testing the effectiveness of this approach include Mackey-Glass, sunspot, and ECG data. The hybrid models reduced training and testing error in each case by at least 34%.
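
A rough structural sketch of the hybrid idea, not the Hayashi (1994) equations themselves, which the abstract does not reproduce; a generic driven oscillator pair stands in for each Hayashi pair, sharing a hidden layer with ordinary sigmoid feedforward nodes:

    import numpy as np

    def step_oscillator(state, drive, omega=1.0, dt=0.1):
        # Generic coupled pair (a driven 2-D rotation): an illustrative
        # stand-in for a Hayashi oscillator pair, not the paper's equations.
        u, v = state
        return (u + dt * (-omega * v + drive), v + dt * (omega * u))

    def hybrid_forward(x, W_ff, W_out, osc_states, omega=1.0):
        # Feedforward sigmoid nodes and oscillator pairs share one hidden layer.
        ff = 1.0 / (1.0 + np.exp(-W_ff @ x))
        osc_states = [step_oscillator(s, x[0], omega) for s in osc_states]
        osc = np.array([u for (u, _) in osc_states])
        hidden = np.concatenate([ff, osc])
        return W_out @ hidden, osc_states   # linear output, updated pair states

    # Toy usage: 3 inputs, 4 sigmoid nodes, 2 oscillator pairs, 1 output.
    rng = np.random.default_rng(0)
    W_ff, W_out = rng.standard_normal((4, 3)), rng.standard_normal((1, 6))
    states = [(0.1, 0.0), (0.0, 0.1)]
    y, states = hybrid_forward(rng.standard_normal(3), W_ff, W_out, states)
    print(y)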


International Symposium on Neural Networks | 1996

An extension to the Hayashi coupled oscillator network training rule

Edward M. Corwin; Antonette M. Logar; William J. B. Oldham

A variety of recurrent network architectures have been developed and applied to the problem of time series prediction. One particularly interesting network was developed by Hayashi (1994). Hayashi presented a network of coupled oscillators and a training rule for the network. His derivation was based on continuous mathematics and provided a mechanism for updating the weights into the output nodes. The work presented here gives an alternative derivation of Hayashi's learning rule based on discrete mathematics, as well as an extension to the learning rule which allows for updating of all weights in the network.


Journal of Computing Sciences in Colleges | 2004

Sorting in linear time - variations on the bucket sort

Edward M. Corwin; Antonette M. Logar
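
No abstract is available for this entry. As a general illustration of the technique the title names (not necessarily the paper's particular variations), bucket sort runs in expected linear time when keys are spread roughly uniformly over a known range:

    def bucket_sort(values, n_buckets=None):
        # Expected O(n) for keys roughly uniform on [min, max]; the worst
        # case degrades to the per-bucket sort's cost if everything collides.
        if not values:
            return []
        n_buckets = n_buckets or len(values)
        lo, hi = min(values), max(values)
        width = (hi - lo) / n_buckets or 1
        buckets = [[] for _ in range(n_buckets)]
        for v in values:
            idx = min(int((v - lo) / width), n_buckets - 1)
            buckets[idx].append(v)
        out = []
        for b in buckets:
            out.extend(sorted(b))   # tiny buckets, so this stays cheap on average
        return out

    print(bucket_sort([0.42, 0.17, 0.93, 0.05, 0.61]))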


ACM Symposium on Applied Computing | 1994

A don't care back propagation algorithm applied to satellite image recognition

Antonette M. Logar; Edward M. Corwin; Samuel Watters; Ronald C. Weger; Ronald M. Welch
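
No abstract is included for this entry either. A hedged reading of the title, using a common construction rather than the paper's confirmed method, is to zero the output error wherever the target is marked don't-care, so those outputs exert no gradient pressure during backpropagation:

    import numpy as np

    DONT_CARE = -1.0   # hypothetical sentinel marking unlabeled outputs

    def masked_output_delta(output, target):
        # Standard backprop output delta for sigmoid units, except that
        # "don't care" targets are masked out and push no gradient at all.
        mask = (target != DONT_CARE).astype(float)
        return mask * (output - target) * output * (1.0 - output)

    out = np.array([0.9, 0.2, 0.7])
    tgt = np.array([1.0, DONT_CARE, 0.0])
    print(masked_output_delta(out, tgt))   # middle unit contributes zero error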


Satellite Remote Sensing of Clouds and the Atmosphere II | 1997

Development of advanced global cloud classification schemes

Chris Konvalin; Antonette M. Logar; David Lloyd; Edward M. Corwin; Manuel Penaloza; Rand E. Feind; Ronald M. Welch


Archive | 2010

Real-Time Quality Monitoring in Friction Stir Welding

Enkhsaikhan Boldsaikhan; Antonette M. Logar; Edward M. Corwin


Journal of Computing Sciences in Colleges | 2004

Counting and automata

Edward M. Corwin; Antonette M. Logar

Collaboration


Dive into Edward M. Corwin's collaboration.

Top Co-Authors

Antonette M. Logar
South Dakota School of Mines and Technology

Ronald M. Welch
University of Alabama in Huntsville

Chris Konvalin
South Dakota School of Mines and Technology

David Lloyd
South Dakota School of Mines and Technology

Manuel Penaloza
South Dakota School of Mines and Technology

Rand E. Feind
South Dakota School of Mines and Technology

Ronald C. Weger
South Dakota School of Mines and Technology

Samuel Watters
South Dakota School of Mines and Technology