David E. van den Bout
North Carolina State University
Publication
Featured research published by David E. van den Bout.
Neural Computation | 1992
Griff L. Bilbro; David E. van den Bout
We derive the learning theory recently reported by Tishby, Levin, and Solla (TLS) directly from the principle of maximum entropy instead of statistical mechanics. The theory applies generally to any problem of modeling data. We analyze an elementary example for which we find the predictions consistent with intuition and with conventional statistical results, and we numerically examine the more realistic problem of training a competitive net to learn a one-dimensional probability density from samples. The TLS theory is useful for predicting average training behavior.
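A minimal sketch of the kind of experiment the abstract describes, training a small competitive net on samples from a one-dimensional density; the Gaussian target, the number of prototypes, and the learning-rate schedule are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch of 1-D competitive learning on samples from a density.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=10_000)   # assumed 1-D target density
prototypes = rng.uniform(-3.0, 3.0, size=16)             # assumed 16 competitive units

for t, x in enumerate(samples):
    eta = 0.5 / (1.0 + 0.01 * t)                          # assumed decaying learning rate
    winner = np.argmin(np.abs(prototypes - x))            # winner-take-all competition
    prototypes[winner] += eta * (x - prototypes[winner])  # move the winner toward the sample

# After training, the sorted prototypes tile the input range so that regions of
# higher probability receive more units, giving a crude density estimate.
print(np.sort(prototypes))
```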
Advanced Neural Computers | 1990
David E. van den Bout; Wesley E. Snyder; Thomas K. Miller
A massively parallel, all-digital, stochastic architecture, TInMANN, is described that performs competitive and Kohonen learning at rates as high as 145,000 training examples per second. TInMANN is composed of very simple neurons and is highly amenable to VLSI implementation, but rapid advances in IC technology and neural network theory reduce the rewards of such an endeavor. As an alternative, we discuss the rapid prototyping of a bit-serial version of TInMANN using commercially available logic cell arrays and RAMs.
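A brief sketch of the Kohonen-style update that an architecture like TInMANN accelerates in hardware; the map size, neighborhood radius, and learning rate below are illustrative assumptions rather than parameters from the paper:

```python
# Sketch of a 1-D Kohonen map update: find the best-matching unit, then pull
# it and its grid neighbors toward the input vector.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.uniform(0.0, 1.0, size=(10, 2))   # assumed 10-neuron map over 2-D inputs

def kohonen_step(weights, x, eta=0.1, radius=2):
    bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))   # best-matching unit
    for i in range(len(weights)):
        if abs(i - bmu) <= radius:                         # units inside the neighborhood
            weights[i] += eta * (x - weights[i])           # move toward the input
    return weights

for x in rng.uniform(0.0, 1.0, size=(5000, 2)):
    weights = kohonen_step(weights, x)
```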
Microprocessors and Microsystems | 1992
D.A. Thomae; David E. van den Bout
The Anyboard rapid prototyping system is described. The Anyboard circuit partitioner is discussed, and experimental results are presented that characterize its ability to find good partitions. Under some conditions, an algorithm generally regarded as weak was found to produce good partitions in less time than an algorithm generally regarded as more powerful.
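For context, a hedged sketch of the kind of simple greedy min-cut bipartitioner that such comparisons often use as the "weak" baseline; the netlist representation and move rule are assumptions for illustration and are not the Anyboard partitioner itself:

```python
# Greedy bipartitioning sketch: repeatedly move single cells between the two
# sides whenever the move reduces the number of cut nets.
import random

def cut_size(nets, side):
    # A net is cut if its pins span both sides of the partition.
    return sum(1 for net in nets if len({side[p] for p in net}) > 1)

def greedy_partition(cells, nets, passes=10, seed=0):
    rng = random.Random(seed)
    side = {c: rng.randint(0, 1) for c in cells}   # random initial partition
    for _ in range(passes):
        improved = False
        for c in cells:
            before = cut_size(nets, side)
            side[c] ^= 1                           # tentatively move the cell
            if cut_size(nets, side) < before:
                improved = True                    # keep moves that reduce the cut
            else:
                side[c] ^= 1                       # otherwise undo the move
        if not improved:
            break
    return side

cells = ["a", "b", "c", "d"]
nets = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
print(greedy_partition(cells, nets))
```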
Signal Processing Systems | 1990
David E. van den Bout; Paul D. Franzon; John J. Paulos; Thomas K. Miller; Wesley E. Snyder; T. Nagle; Wentai Liu
This paper discusses research on scalable VLSI implementations of feed-forward and recurrent neural networks. These two families of networks are useful in a wide variety of important applications (classification tasks for feed-forward nets and optimization problems for recurrent nets), but their differences affect the way they should be built. We find that analog computation with digitally programmable weights works best for feed-forward networks, while stochastic processing takes advantage of the integrative nature of recurrent networks. We have shown early prototypes of these networks that compute at rates of 1–2 billion connections per second. These general-purpose neural building blocks can be coupled with an overall data transmission framework that is electronically reconfigured in a local manner to produce arbitrarily large, fault-tolerant networks.
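To illustrate the stochastic-processing idea mentioned above, a small sketch of stochastic computing, in which a value in [0, 1] is encoded as a random bit stream and an AND gate multiplies two such values; the stream length and input values are assumptions for illustration:

```python
# Stochastic-computing sketch: probabilities become Bernoulli bit streams,
# and bitwise AND of independent streams approximates multiplication.
import numpy as np

rng = np.random.default_rng(2)

def to_stream(p, n=4096):
    # Encode probability p as a length-n Bernoulli bit stream.
    return rng.random(n) < p

a, b = 0.6, 0.25
product_stream = to_stream(a) & to_stream(b)   # AND multiplies the probabilities
print(product_stream.mean())                   # approximately a * b = 0.15
```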
Neural Information Processing Systems | 1988
Griff L. Bilbro; Reinhold C. Mann; Thomas K. Miller; Wesley E. Snyder; David E. van den Bout; Mark W. White
Neural Information Processing Systems | 1989
Griff L. Bilbro; Reinhold C. Mann; Wesley E. Snyder; David E. van den Bout; Matthew White
Conference on Advanced Research in VLSI | 1991
D.A. Thomae; Thomas A. Peterson; David E. van den Bout
Neural Information Processing Systems | 1990
Wesley E. Snyder; Daniel Nissman; David E. van den Bout; Griff L. Bilbro
Simulation | 1987
David E. van den Bout
Neural Information Processing Systems | 1990
Matthew S. Melton; Tan Phan; Douglas S. Reeves; David E. van den Bout