Publication


Featured research published by Bernard Widrow.


Proceedings of the IEEE | 1990

30 years of adaptive neural networks: perceptron, Madaline, and backpropagation

Bernard Widrow; Michael A. Lehr

Fundamental developments in feedforward artificial neural networks from the past thirty years are reviewed. The history, origination, operating characteristics, and basic theory of several supervised neural-network training algorithms (including the perceptron rule, the least-mean-square algorithm, three Madaline rules, and the backpropagation technique) are described. The concept underlying these iterative adaptation algorithms is the minimal disturbance principle, which suggests that during training it is advisable to inject new information into a network in a manner that disturbs stored information to the smallest extent possible. The two principal kinds of online rules that have been developed for altering the weights of a network are examined for both single-threshold elements and multielement networks. They are error-correction rules, which alter the weights of a network to correct error in the output response to the present input pattern, and gradient rules, which alter the weights of a network during each pattern presentation by gradient descent with the objective of reducing mean-square error (averaged over all training patterns).
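
A minimal sketch of the gradient-rule family the abstract describes, using the LMS (Widrow-Hoff) update on a single adaptive linear element. The toy identification task, sizes, and step size mu below are illustrative choices, not from the paper.

import numpy as np

# LMS (Widrow-Hoff) rule for one linear element:
# w <- w + 2*mu*e*x, where e = d - w.x is the error on the present pattern.

rng = np.random.default_rng(0)
n_weights, mu = 4, 0.01
w = np.zeros(n_weights)
w_true = rng.normal(size=n_weights)          # hypothetical target weights

for _ in range(5000):
    x = rng.normal(size=n_weights)           # present input pattern
    d = w_true @ x                           # desired response
    e = d - w @ x                            # present error
    w += 2 * mu * e * x                      # gradient step on the squared error

print(np.allclose(w, w_true, atol=1e-3))     # True: weights converge to the target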


International Symposium on Neural Networks | 1990

Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights

Derrick H. Nguyen; Bernard Widrow

The authors describe how a two-layer neural network can approximate any nonlinear function by forming a union of piecewise linear segments. A method is given for picking initial weights for the network to decrease training time. The authors have used the method to initialize adaptive weights over a large number of different training problems and have achieved major improvements in learning speed in every case. The improvement is greatest when a large number of hidden units is used with a complicated desired response. The authors have used the method to train the truck-backer-upper and were able to decrease the training time from about two days to four hours.
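
A sketch of the initialization idea under the commonly stated Nguyen-Widrow rule beta = 0.7 * H^(1/n), assuming inputs normalized to [-1, 1] in each dimension; the function name and layer sizes below are my own.

import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, rng=np.random.default_rng(0)):
    # Rescale each hidden unit's weight vector so the units' linear
    # regions are spread across the (normalized) input space.
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)              # scale factor
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    w *= beta / np.linalg.norm(w, axis=1, keepdims=True)   # set each |w_i| = beta
    b = rng.uniform(-beta, beta, size=n_hidden)            # spread the offsets
    return w, b

w, b = nguyen_widrow_init(n_inputs=2, n_hidden=10)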


IFAC Proceedings Volumes | 1987

Adaptive Inverse Control

Bernard Widrow

Adaptive control is seen as a two-part problem: control of plant dynamics and control of plant noise. The parts are treated separately. An unknown plant will track an input command signal if the plant is driven by a controller whose transfer function approximates the inverse of the plant transfer function. An adaptive inverse identification process can be used to obtain a stable controller, even if the plant is nonminimum phase. A model-reference version of this idea allows system dynamics to closely approximate desired reference model dynamics. No direct feedback is used, except that the plant output is monitored and utilized in order to adjust the parameters of the controller. Control of internal plant noise is accomplished with an optimal adaptive noise canceller. The canceller does not affect plant dynamics, but feeds back plant noise in a way that minimizes plant output noise power.
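
A toy illustration of adaptive inverse identification: an LMS-adapted FIR filter is trained on the plant output to reproduce a delayed copy of the plant input, so the cascade of controller and plant approximates a pure delay. The plant, filter length, delay, and step size below are hypothetical choices, not from the paper.

import numpy as np

rng = np.random.default_rng(0)
plant = np.array([1.0, 0.5, 0.25])     # hypothetical (minimum-phase) plant
L, delay, mu = 16, 8, 0.005
c = np.zeros(L)                        # adaptive inverse (controller) weights

u = rng.normal(size=20000)             # excitation signal
y = np.convolve(u, plant)[: len(u)]    # plant output

for k in range(L, len(u)):
    x = y[k - L + 1 : k + 1][::-1]     # recent plant outputs, newest first
    d = u[k - delay]                   # desired: delayed plant input
    e = d - c @ x
    c += 2 * mu * e * x                # LMS update of the inverse model

# The cascade c * plant should now approximate a pure delay of 8 samples.
print(np.round(np.convolve(c, plant)[:12], 2))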


Proceedings of the IEEE | 1975

The complex LMS algorithm

Bernard Widrow; J. McCool; M. Ball

A least-mean-square (LMS) adaptive algorithm for complex signals is derived. The original Widrow-Hoff LMS algorithm is W_{j+1} = W_{j} + 2 \mu \epsilon_{j} X_{j}. The complex form is shown to be W_{j+1} = W_{j} + 2 \mu \epsilon_{j} \bar{X}_{j}, where the boldfaced terms represent complex (phasor) signals and the bar above X_{j} designates the complex conjugate.
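
A sketch of the complex LMS update on a toy system-identification problem: apart from the conjugated input vector in the weight update, everything below (sizes, step size, data) is my choice.

import numpy as np

rng = np.random.default_rng(0)
n, mu = 8, 0.01
w_true = rng.normal(size=n) + 1j * rng.normal(size=n)
w = np.zeros(n, dtype=complex)

for _ in range(10000):
    x = rng.normal(size=n) + 1j * rng.normal(size=n)   # complex (phasor) input
    d = w_true @ x                                     # desired response
    e = d - w @ x                                      # complex error
    w += 2 * mu * e * np.conj(x)                       # note the conjugated input

print(np.allclose(w, w_true, atol=1e-3))               # True: converges as in real LMS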


IEEE Computer | 1988

Neural nets for adaptive filtering and adaptive pattern recognition

Bernard Widrow; Rodney Winter

The adaptive linear combiner (ALC) is described, and practical applications of the ALC in signal processing and pattern recognition are presented. Six signal processing examples are given: system modeling, statistical prediction, noise canceling, echo canceling, inverse modeling, and channel equalization. Adaptive pattern recognition using neural nets is then discussed. The concept involves the use of an invariance net followed by a trainable classifier. It makes use of a multilayer adaptation algorithm that descrambles the output and reproduces the original patterns.
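
Of the six examples, noise canceling is easy to show compactly: the ALC adapts on a reference noise input so that its output cancels the noise component of the primary input, leaving the error as the cleaned signal. The signal, noise path, and parameters below are illustrative, not from the article.

import numpy as np

rng = np.random.default_rng(0)
n_samples, L, mu = 20000, 8, 0.002
t = np.arange(n_samples)
signal = np.sin(2 * np.pi * t / 50)                    # wanted signal
noise = rng.normal(size=n_samples)                     # interference source
noise_path = np.array([0.8, -0.4, 0.2])                # unknown path to primary sensor
primary = signal + np.convolve(noise, noise_path)[:n_samples]

w = np.zeros(L)
out = np.zeros(n_samples)
for k in range(L, n_samples):
    x = noise[k - L + 1 : k + 1][::-1]                 # reference taps
    out[k] = primary[k] - w @ x                        # error output = cleaned signal
    w += 2 * mu * out[k] * x                           # LMS update of the ALC

print(np.mean((out[-1000:] - signal[-1000:]) ** 2))    # residual noise power, small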


Communications of the ACM | 1994

Neural networks: applications in industry, business and science

Bernard Widrow; David E. Rumelhart; Michael A. Lehr

Just four years ago, the only widely reported commercial application of neural network technology outside the financial industry was the airport baggage explosive detection system [27] developed at Science Applications International Corporation (SAIC). Since that time, scores of industrial and commercial applications have come into use, but the details of most of these systems are proprietary and remain closely held. This accelerating trend is due in part to the availability of an increasingly wide array of dedicated neural network hardware, either in the form of accelerator cards for PCs and workstations or of integrated circuits implementing digital and analog neural networks, either currently available or in the final stages of design.


IEEE Transactions on Information Theory | 1984

The least mean fourth (LMF) adaptive algorithm and its family

Eugene Walach; Bernard Widrow

New steepest descent algorithms for adaptive filtering have been devised which allow error minimization in the mean fourth and mean sixth, etc., sense. During adaptation, the weights undergo exponential relaxation toward their optimal solutions. Time constants have been derived, and surprisingly they turn out to be proportional to the time constants that would have been obtained if the steepest descent least mean square (LMS) algorithm of Widrow and Hoff had been used. The new gradient algorithms are insignificantly more complicated to program and to compute than the LMS algorithm. Their general form is W_{j+1} = W_{j} + 2 \mu K \epsilon_{j}^{2K-1} X_{j}, where W_{j} is the present weight vector, W_{j+1} is the next weight vector, \epsilon_{j} is the present error, X_{j} is the present input vector, \mu is a constant controlling stability and rate of convergence, and 2K is the exponent of the error being minimized. Conditions have been derived for weight-vector convergence of the mean and of the variance for the new gradient algorithms. The behavior of the least mean fourth (LMF) algorithm is of special interest. In comparing this algorithm to the LMS algorithm, when both are set to have exactly the same time constants for the weight relaxation process, the LMF algorithm, under some circumstances, will have a substantially lower weight noise than the LMS algorithm. It is possible, therefore, that a minimum mean fourth error algorithm can do a better job of least squares estimation than a mean square error algorithm. This intriguing concept has implications for all forms of adaptive algorithms, whether they are based on steepest descent or otherwise.
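
The general update from the abstract with K = 2 gives the least-mean-fourth rule. Below is a toy identification run with bounded (uniform) measurement noise, the kind of regime where the abstract reports lower weight noise than LMS; sizes, step size, and noise level are my choices, and the cubed error forces a small step size.

import numpy as np

rng = np.random.default_rng(0)
n, mu, K = 4, 1e-4, 2
w_true = rng.normal(size=n)
w = np.zeros(n)

for _ in range(200_000):
    x = rng.normal(size=n)
    d = w_true @ x + rng.uniform(-0.5, 0.5)    # plant output plus bounded noise
    e = d - w @ x
    w += 2 * mu * K * e ** (2 * K - 1) * x     # LMF: the error enters cubed

print(np.round(w - w_true, 2))                 # residual weight error, near zero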


IEEE Transactions on Instrumentation and Measurement | 1996

Statistical theory of quantization

Bernard Widrow; István Kollár; Ming-Chang Liu

The effect of uniform quantization can often be modeled by an additive noise that is uniformly distributed, uncorrelated with the input signal, and has a white spectrum. This paper surveys the theory behind this model, and discusses the conditions of its validity. The application of the model to floating-point quantization is demonstrated.
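
The model is easy to check numerically: quantize a signal whose amplitude spans many quantization steps and compare the error statistics with the additive-noise prediction (zero mean, variance q^2/12, negligible correlation with the input). The Gaussian test signal and step size below are my choices.

import numpy as np

rng = np.random.default_rng(0)
q = 0.1                                     # quantization step
x = rng.normal(0, 1, size=100000)           # input spread over many steps
xq = q * np.round(x / q)                    # uniform (mid-tread) quantizer
e = xq - x                                  # quantization error

print(e.mean())                             # ~0
print(e.var(), q**2 / 12)                   # both ~8.3e-4
print(np.corrcoef(x, e)[0, 1])              # ~0: nearly uncorrelated with input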


Communications of the ACM | 1994

The basic ideas in neural networks

David E. Rumelhart; Bernard Widrow; Michael A. Lehr

Interest in the study of neural networks has grown remarkably in the last several years. This effort has been characterized in a variety of ways: as the study of brain-style computation, connectionist architectures, parallel distributed-processing systems, neuromorphic computation, and artificial neural systems. The common theme of these efforts has been an interest in looking at the brain as a model of a parallel computational device very different from a traditional serial computer.


Proceedings of the IEEE | 1978

Adaptive filtering in the frequency domain

M. Dentino; J. McCool; Bernard Widrow

Adaptive filtering in the frequency domain can be accomplished by Fourier transformation of the input signal and independent weighting of the contents of each frequency bin. The frequency-domain filter performs similarly to a conventional adaptive transversal filter but promises a significant reduction in computation when the number of weights equals or exceeds 16.
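
A block frequency-domain LMS sketch showing the per-bin structure the abstract describes: each frequency bin gets one complex weight, adapted independently. This toy version identifies a circular channel and skips the overlap-save machinery a practical implementation would need; all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
N, mu, n_blocks = 32, 0.05, 2000
h = rng.normal(size=4)                       # unknown channel, shorter than the block
H = np.fft.fft(h, N)                         # its frequency response
W = np.zeros(N, dtype=complex)               # one adaptive weight per frequency bin

for _ in range(n_blocks):
    x = rng.normal(size=N)
    X = np.fft.fft(x)                        # transform the input block
    D = H * X                                # desired block (circular channel)
    E = D - W * X                            # per-bin error
    W += 2 * mu * E * np.conj(X) / N         # independent complex LMS in each bin

print(np.allclose(W, H, atol=1e-2))          # True: weights match the channel response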

Collaboration


Dive into Bernard Widrow's collaborations.

Top Co-Authors

István Kollár

Budapest University of Technology and Economics