Network


Recent external collaborations at the country level.

Hotspot


Dive into the research topics where Don R. Hush is active.

Publication


Featured research published by Don R. Hush.


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1986

An adaptive IIR structure for sinusoidal enhancement, frequency estimation, and detection

Don R. Hush; N. Ahmed; Ruth A. David; Samuel D. Stearns

An adaptive IIR structure for processing a sinusoidal signal in broad-band noise is introduced. The structure contains three adaptive processors, each of which is computationally very simple. Useful features of the structure include enhancement, frequency estimation, and detection.
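The abstract does not spell out the three-processor structure. As a rough illustration of the same idea, the sketch below implements a standard constrained second-order adaptive IIR notch filter, a well-known relative of this line of work rather than the paper's structure; the pole radius rho, step size mu, and simplified-gradient update are assumptions for the demo.

```python
import numpy as np

def adaptive_notch(x, rho=0.95, mu=1e-3):
    """Constrained second-order adaptive IIR notch filter (illustrative sketch).

    H(z) = (1 + a z^-1 + z^-2) / (1 + rho*a z^-1 + rho^2 z^-2), with
    a = -2*cos(w0). An LMS-style update that minimizes the notch output
    power E[y^2] drives the notch frequency w0 toward the input sinusoid.
    """
    a = 0.0                                   # initial notch at pi/2
    y1 = y2 = 0.0                             # output delay line
    freqs = np.full(len(x), np.pi / 2)        # frequency estimates (rad/sample)
    for n in range(2, len(x)):
        y = x[n] + a * x[n-1] + x[n-2] - rho * a * y1 - rho**2 * y2
        beta = x[n-1] - rho * y1              # simplified gradient dy/da
        a = np.clip(a - mu * y * beta, -2.0, 2.0)
        y2, y1 = y1, y
        freqs[n] = np.arccos(-a / 2.0)        # current estimate of w0
    return freqs

# demo: sinusoid at 0.3*pi rad/sample in unit-variance white noise
rng = np.random.default_rng(0)
n = np.arange(20000)
x = np.sin(0.3 * np.pi * n) + rng.standard_normal(n.size)
est = adaptive_notch(x)
print(f"estimated frequency: {est[-1]:.4f} rad/sample (true: {0.3 * np.pi:.4f})")
```

The estimate comes directly from the notch parameter, which is the sense in which a single adaptive structure yields enhancement (the notch complement), frequency estimation, and a detection statistic (output power) at once.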


Neural Networks | 1996

Network constraints and multi-objective optimization for one-class classification

Mary M. Moya; Don R. Hush

This paper introduces a constrained second-order network with a multiple-objective learning algorithm that forms closed hyperellipsoidal decision boundaries for one-class classification. The network architecture has uncoupled constraints that give independent control over each decision boundary's size, shape, position, and orientation. The architecture, together with the learning algorithm, guarantees positive definite eigenvalues and hence closed hyperellipsoidal decision boundaries. The learning algorithm incorporates two criteria: one that seeks to minimize classification mapping error and another that seeks to minimize the size of the decision boundaries. We consider both additive and multiplicative combinations of the individual criteria, and we present empirical evidence for selecting functional forms of the individual objectives that are bounded and normalized. The resulting multiple-objective criterion allows the decision boundaries to grow or shrink as necessary to achieve both within-class and out-of-class generalization without requiring non-target patterns in the training set. The resulting network learns compact closed decision boundaries when trained with target data only. We show results of applying the network to the Iris data set (Fisher (1936), Annals of Eugenics, 7(2), 179–188). Advantages of this approach include its inherent capacity for one-class generalization, freedom from characterizing the non-target class, and the ability to form closed decision boundaries for multi-modal classes more complex than hyperspheres without requiring inversion of large matrices.
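As a much-simplified illustration of closed hyperellipsoidal boundaries learned from target data only (not the paper's constrained second-order network or its multi-objective learning rule), the sketch below fits a single Mahalanobis ellipsoid to synthetic target data and thresholds the distance; the synthetic blob and the 95th-percentile radius are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# target class only: a correlated 2-D Gaussian blob (stand-in for real data)
target = rng.multivariate_normal([2.0, 1.0], [[1.0, 0.6], [0.6, 0.5]], size=300)

# ellipsoid parameters: center and inverse covariance define the boundary
mu = target.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(target, rowvar=False))

def mahalanobis_sq(x):
    d = x - mu
    return np.einsum('...i,ij,...j->...', d, cov_inv, d)

# size the boundary from target data alone: no non-target patterns needed
radius_sq = np.quantile(mahalanobis_sq(target), 0.95)

def is_target(x):
    return mahalanobis_sq(x) <= radius_sq   # inside the closed hyperellipsoid

outliers = rng.uniform(-4, 8, size=(300, 2))
print("target acceptance:", is_target(target).mean())
print("outlier acceptance:", is_target(outliers).mean())
```

A single global ellipsoid like this is exactly what the paper moves beyond: its network composes several independently sized and oriented ellipsoids, which is what handles multi-modal target classes.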


Neural Networks | 1996

Bounds on the complexity of recurrent neural network implementations of finite state machines

Bill G. Horne; Don R. Hush

In this paper the efficiency of recurrent neural network implementations of m-state finite state machines will be explored. Specifically, it will be shown that the node complexity for the unrestricted case can be bounded above by O(√m). It will also be shown that the node complexity is O(√(m log m)) when the weights and thresholds are restricted to the set {−1, 1}, and O(m) when the fan-in is restricted to two. Matching lower bounds will be provided for each of these upper bounds, assuming that the state of the FSM can be encoded in a subset of the nodes of size ⌈log m⌉.
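In display form, the three regimes read as follows; note the placement of the log factor under the radical in the restricted-weight bound is an inferred reading, consistent with the companion node-complexity results below:

```latex
\begin{align*}
\text{unrestricted weights:}\quad & O\!\left(\sqrt{m}\right) \\
\text{weights, thresholds} \in \{-1,1\}:\quad & O\!\left(\sqrt{m \log m}\right) \\
\text{fan-in} \le 2:\quad & O(m)
\end{align*}
```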


IEEE Transactions on Systems, Man, and Cybernetics | 1992

Error surfaces for multilayer perceptrons

Don R. Hush; Bill G. Horne; John M. Salas

Characteristics of error surfaces for the multilayer perceptron neural network are examined; these characteristics help explain why learning techniques that use hill climbing are so slow in these networks, and they also provide insights into techniques to speed learning. First, the surface has a stair-step appearance with many very flat and very steep regions. When the number of training samples is small, there is often a one-to-one correspondence between individual training samples and the steps on the surface. As the number of samples increases, the surface becomes smoother. In addition, the surface has flat regions that extend to infinity in all directions, making it dangerous to apply learning algorithms that perform line searches. The magnitude of the gradients on the surface strongly supports the need for floating-point representations during learning. The consequences of various weight initialization techniques are also discussed.
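The stair-step behavior is easy to see directly by evaluating the error of a tiny network over a grid in weight space. The sketch below does this for a single sigmoidal unit on four samples; the toy data and grid ranges are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# a few 1-D training samples with binary targets (the small-sample regime)
x = np.array([-2.0, -0.5, 0.8, 2.5])
t = np.array([0.0, 0.0, 1.0, 1.0])

# sum-of-squares error of y = sigmoid(w*x + b), evaluated over a (w, b) grid
w = np.linspace(-20, 20, 201)
b = np.linspace(-20, 20, 201)
W, B = np.meshgrid(w, b)
E = sum((sigmoid(W * xi + B) - ti) ** 2 for xi, ti in zip(x, t))

# steep steps vs. flat plateaus: compare gradient magnitudes across the grid
gb, gw = np.gradient(E, b, w)
mag = np.hypot(gw, gb)
print(f"error range: [{E.min():.3f}, {E.max():.3f}]")
print(f"gradient magnitude: median {np.median(mag):.2e}, max {mag.max():.2e}")
# with only 4 samples, E is nearly piecewise constant: a few steep steps
# (roughly one per sample) separated by large regions where gradients vanish
```

The gap of several orders of magnitude between the median and maximum gradient is the numerical fingerprint of the flat-and-steep structure the abstract describes, and it is why the authors argue for floating-point weight representations during learning.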


IEEE Transactions on Neural Networks | 1998

Efficient algorithms for function approximation with piecewise linear sigmoidal networks

Don R. Hush; Bill G. Horne

This paper presents a computationally efficient algorithm for function approximation with piecewise linear sigmoidal nodes. A one hidden layer network is constructed one node at a time using the well-known method of fitting the residual. The task of fitting an individual node is accomplished using a new algorithm that searches for the best fit by solving a sequence of quadratic programming problems. This approach offers significant advantages over derivative-based search algorithms (e.g., backpropagation and its extensions). Unique characteristics of this algorithm include: finite step convergence, a simple stopping criterion, solutions that are independent of initial conditions, good scaling properties and a robust numerical implementation. Empirical results are included to illustrate these characteristics.
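The construction can be mimicked at toy scale: add one piecewise linear sigmoidal node at a time, each fit to the current residual. The paper fits each node by solving a sequence of quadratic programs; the sketch below swaps in a coarse random search over node parameters (an assumption made for brevity) while keeping the one-node-at-a-time residual-fitting structure.

```python
import numpy as np

def plin_sigmoid(z):
    # piecewise linear sigmoid: a linear ramp clipped to [-1, 1]
    return np.clip(z, -1.0, 1.0)

def fit_node(x, r, trials=2000, rng=None):
    """Fit one node a*plin_sigmoid(w*x + b) to the residual r.

    Coarse random search over (w, b); for each candidate, the output
    weight a is the closed-form least-squares optimum.
    """
    if rng is None:
        rng = np.random.default_rng()
    best_err, best_params = np.inf, None
    for _ in range(trials):
        w, b = rng.uniform(-10, 10, size=2)
        h = plin_sigmoid(w * x + b)
        denom = h @ h
        if denom < 1e-12:
            continue
        a = (h @ r) / denom                  # least-squares output weight
        err = np.sum((r - a * h) ** 2)
        if err < best_err:
            best_err, best_params = err, (w, b, a)
    return best_params

# target function to approximate on [0, 1]
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.5 * np.cos(5 * np.pi * x)

residual, nodes = y.copy(), []
for k in range(8):                           # grow the hidden layer node by node
    w, b, a = fit_node(x, residual, rng=rng)
    nodes.append((w, b, a))
    residual = residual - a * plin_sigmoid(w * x + b)
    print(f"node {k + 1}: residual SSE = {np.sum(residual ** 2):.4f}")
```

Because each node is fit to whatever error remains, the sum-of-squares residual is non-increasing by construction, which also yields the simple stopping criterion the abstract mentions: stop when adding a node no longer reduces the residual appreciably.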


Neural Networks | 1994

On the node complexity of neural networks

Bill G. Horne; Don R. Hush

New results are presented for the node complexity of threshold logic circuit implementations of n-input, m-output logic functions. When no restrictions are imposed on the network, the node complexity is shown to be Θ(√(m·2^n/(n − log m))). One-hidden-layer networks are shown to have a node complexity of O(√(2^n) + m) and Θ(√(m·2^n/[n(n + m)])). Networks with weights restricted to the set {−1, 0, 1} are shown to have a node complexity of Θ(√(m·2^n)), and networks for which fan-in is restricted to two are shown to have a node complexity of Θ(m·2^n/(n + log m)).
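In display form (the groupings under the radicals and fraction bars are inferred readings of the bounds as stated above; as a sanity check, at m = 1 the unrestricted bound reduces to the classical Θ(√(2ⁿ/n)) threshold-circuit bound and the fan-in-two bound to the classical Θ(2ⁿ/n) circuit-size bound):

```latex
\begin{align*}
\text{unrestricted:}\quad & \Theta\!\left(\sqrt{\frac{m\,2^n}{n-\log m}}\right) \\
\text{one hidden layer:}\quad & O\!\left(\sqrt{2^n}+m\right)
  \quad\text{and}\quad \Theta\!\left(\sqrt{\frac{m\,2^n}{n(n+m)}}\right) \\
\text{weights} \in \{-1,0,1\}:\quad & \Theta\!\left(\sqrt{m\,2^n}\right) \\
\text{fan-in} \le 2:\quad & \Theta\!\left(\frac{m\,2^n}{n+\log m}\right)
\end{align*}
```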


IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 1984

Detection of multiple sinusoids using an adaptive cascaded structure

Nasir Ahmed; Don R. Hush; G. R. Elliott; Robert Joseph Fogler

The purpose of this paper is to present a simple adaptive algorithm for detecting and tracking a sinusoid in broadband noise, while at the same time improving its signal-to-noise ratio (SNR).


International Symposium on Neural Networks | 1992

An adaptive algorithm for modifying hyperellipsoidal decision surfaces

Patrick M. Kelly; Don R. Hush; James M. White

The learning vector quantization (LVQ) algorithm is a common method that allows a set of reference vectors for a distance classifier to adapt to a given training set. The authors have developed a similar learning algorithm, LVQ using the Mahalanobis distance metric (LVQ-MM), which manipulates hyperellipsoidal cluster boundaries rather than reference vectors. Regions of the input feature space are first enclosed by ellipsoidal decision boundaries, and these boundaries are then iteratively modified to reduce classification error. Results obtained by classifying the Iris data set are provided.
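The abstract does not give LVQ-MM's update rules. As an illustrative stand-in, the sketch below classifies by Mahalanobis distance to per-class ellipsoids (mean plus covariance) and applies an LVQ1-style correction that moves only the ellipsoid centers; the update rule, learning rate, and synthetic data are assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# two synthetic classes (stand-ins for the Iris features used in the paper)
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], 200)
X1 = rng.multivariate_normal([3, 1], [[0.5, -0.2], [-0.2, 0.8]], 200)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# one hyperellipsoid per class: center + inverse covariance
centers = np.array([X0.mean(0), X1.mean(0)])
inv_covs = np.array([np.linalg.inv(np.cov(X0, rowvar=False)),
                     np.linalg.inv(np.cov(X1, rowvar=False))])

def predict(x):
    d = [float((x - c) @ ic @ (x - c)) for c, ic in zip(centers, inv_covs)]
    return int(np.argmin(d))              # nearest ellipsoid (Mahalanobis)

# LVQ1-style pass: on an error, pull the correct center in, push the wrong one away
lr = 0.05
for xi, yi in zip(X, y):
    pred = predict(xi)
    if pred != yi:
        centers[yi] += lr * (xi - centers[yi])
        centers[pred] -= lr * (xi - centers[pred])

acc = np.mean([predict(xi) == yi for xi, yi in zip(X, y)])
print(f"training accuracy after one adaptation pass: {acc:.3f}")
```

LVQ-MM goes further than this sketch by adapting the boundary shapes themselves, not just their centers, which is what the "modifying hyperellipsoidal decision surfaces" in the title refers to.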


american control conference | 1997

Neural networks in fault detection: a case study

Don R. Hush; Chaouki T. Abdallah; Gregory L. Heileman; D. Docampo

We study the application of neural networks to fault detection in real vibrational data. The study is one of the first to include a large set of real vibrational data and to illustrate both the potential and the limitations of neural networks for fault detection.


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1987

A new technique for adaptive frequency estimation and tracking

Delores M. Etter; Don R. Hush

A new technique is presented for estimating the frequency of a sinusoid in noise. Standard techniques typically estimate the frequency of a sinusoid from an estimate of the autocorrelation function or from a filter model. These techniques require a large number of samples for an accurate estimate of the frequency. Furthermore, if the frequency is varying with time, recomputation of the autocorrelation or filter model is necessary for each estimate update. In this correspondence, we present a new technique that is based on a variable delay element. We will show that the corresponding error surface is sinusoidal, and that the first maximum of the error function occurs at one-half the period of the unknown sinusoid. We will develop an algorithm that is based on gradient techniques to find this maximum, and from this maximum we directly compute the frequency estimate. The new algorithm works in the time domain with a simple adaptive delay update computation. The technique has been tested in simulations with signal-to-noise ratios as low as 1 : 10, with excellent performance. The speed of the algorithm depends on a convergence factor that is computed from an initial estimate of the power of the input signal. Since the technique is adaptive, it can also be applied to tracking a time-varying frequency.
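The key identity is easy to check: for x(n) = A sin(ωn + φ), the delay-difference error e(n) = x(n) − x(n − d) has mean power E[e²] = A²(1 − cos(ωd)), whose first maximum over d sits at d = π/ω, half the period. The paper adapts a continuous delay with a gradient update; the sketch below swaps in a direct search over integer delays (an assumption made for simplicity) and reads the frequency off the first local maximum.

```python
import numpy as np

rng = np.random.default_rng(4)

# sinusoid at w = 2*pi*0.05 rad/sample, buried in white noise
w_true = 2 * np.pi * 0.05
n = np.arange(50000)
x = np.sin(w_true * n) + rng.standard_normal(n.size)

# delay-difference error power J(d) = mean((x(n) - x(n-d))^2);
# for a sinusoid, J(d) ~ A^2 * (1 - cos(w*d)) plus a flat noise floor,
# so the first local maximum of J sits at d = pi / w (half a period)
max_d = 60
J = np.array([np.mean((x[d:] - x[:-d]) ** 2) for d in range(1, max_d + 1)])

# first interior local maximum over integer delays (heavier noise would
# call for averaging or the paper's adaptive gradient update instead)
d_star = next(d for d in range(1, max_d - 1)
              if J[d] > J[d - 1] and J[d] > J[d + 1]) + 1
w_hat = np.pi / d_star
print(f"estimated w = {w_hat:.4f} rad/sample (true {w_true:.4f})")
```

Working with a delay rather than an autocorrelation or filter model is what makes the method cheap to update: each new sample adjusts a single delay parameter, which is also why the adaptive version can track a time-varying frequency.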

Collaboration


Dive into Don R. Hush's collaborations.

Top Co-Authors

N. Ahmed, University of New Mexico
Nasir Ahmed, University of New Mexico
Samuel D. Stearns, Sandia National Laboratories
Ruth A. David, Sandia National Laboratories
D. Docampo, University of New Mexico
Daechul Park, Electronics and Telecommunications Research Institute