Publication


Featured research published by John S. Denker.


Neural Computation | 1989

Backpropagation applied to handwritten zip code recognition

Yann LeCun; Bernhard E. Boser; John S. Denker; D. Henderson; R. E. Howard; W. Hubbard; Lawrence D. Jackel

The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.
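The key architectural constraint described here, local receptive fields with shared weights, can be illustrated with a minimal sketch (illustrative NumPy code, not the authors' implementation): a single feature map in which every position of the image is processed by the same small kernel, so the number of free parameters stays fixed regardless of image size.

# Minimal sketch (not the authors' code): one shared-weight feature map, i.e. a
# convolution. Every 5x5 receptive field of the image is processed by the same
# kernel, so there are only 5*5 + 1 = 26 trainable parameters here.
import numpy as np

rng = np.random.default_rng(0)
image  = rng.standard_normal((16, 16))      # size-normalized input image
kernel = rng.standard_normal((5, 5)) * 0.1  # one shared 5x5 receptive field
bias   = 0.0

out = np.zeros((12, 12))                    # valid convolution: 16 - 5 + 1 = 12
for i in range(12):
    for j in range(12):
        patch = image[i:i + 5, j:j + 5]
        out[i, j] = np.tanh(np.sum(patch * kernel) + bias)

print(out.shape)                            # (12, 12) feature map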


International Conference on Pattern Recognition | 1994

Comparison of classifier methods: a case study in handwritten digit recognition

Léon Bottou; Corinna Cortes; John S. Denker; Harris Drucker; Isabelle Guyon; Larry D. Jackel; Yann LeCun; Urs Muller; Eduard Sackinger; Patrice Y. Simard; Vladimir Vapnik

This paper compares the performance of several classifier algorithms on a standard database of handwritten digits. We consider not only raw accuracy, but also training time, recognition time, and memory requirements. When available, we report measurements of the fraction of patterns that must be rejected so that the remaining patterns have misclassification rates less than a given threshold.
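The rejection figure of merit mentioned above can be made concrete with a short sketch (hypothetical scores, not the paper's measurements): sort test patterns by classifier confidence and find the smallest fraction that must be rejected so that the error rate on the accepted patterns falls below a target.

# Sketch (hypothetical scores, not the paper's data): smallest fraction of
# patterns that must be rejected, least confident first, so that the error rate
# on the accepted patterns is at most target_error.
import numpy as np

def reject_fraction(confidence, correct, target_error=0.005):
    order = np.argsort(confidence)                     # least confident first
    wrong = 1.0 - np.asarray(correct, dtype=float)[order]
    n = len(wrong)
    # wrong_left[k] = errors remaining if the k least-confident are rejected
    wrong_left = np.concatenate([wrong[::-1].cumsum()[::-1], [0.0]])
    kept = np.arange(n, -1, -1)                        # patterns kept for k = 0..n
    ok = wrong_left <= target_error * np.maximum(kept, 1)
    return int(np.argmax(ok)) / n                      # first k meeting the target

rng = np.random.default_rng(1)
conf = rng.random(10_000)                              # hypothetical confidences
corr = rng.random(10_000) < (0.9 + 0.1 * conf)         # higher confidence, fewer errors
print(reject_fraction(conf, corr, target_error=0.01))  # fraction rejected in this toy model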


Neural Information Processing Systems | 1998

Transformation Invariance in Pattern Recognition - Tangent Distance and Tangent Propagation

Patrice Y. Simard; Yann LeCun; John S. Denker; Bernard Victorri

In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting the performance. If the amount of data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. However, in the practical case, given limited data and other resources, satisfactory performance requires sophisticated methods to regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. In this chapter, we introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, “tangent distance” and “tangent propagation”, which make use of these invariances to improve performance.
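A minimal numerical sketch of one-sided tangent distance (illustrative code, not the chapter's implementation): the tangent vectors of a pattern x span a linear approximation of its manifold of transformed versions, and the distance from another pattern y is minimized over that plane.

# Sketch (illustrative, not the chapter's implementation): one-sided tangent
# distance. The rows of `tangents` span a linear approximation of the manifold
# of transformed versions of x; the distance to y is minimized over that plane.
import numpy as np

def tangent_distance(x, y, tangents):
    T = np.asarray(tangents)                           # (k, d) tangent vectors of x
    a, *_ = np.linalg.lstsq(T.T, y - x, rcond=None)    # best coefficients a
    return np.linalg.norm(x + T.T @ a - y)

# Toy example: a horizontal-shift tangent obtained by finite differences.
rng = np.random.default_rng(2)
img = rng.random((8, 8))
shift_tangent = (np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1)).ravel() / 2.0

x = img.ravel()
y = np.roll(img, 1, axis=1).ravel()                    # a shifted version of the image
print(np.linalg.norm(x - y))                           # plain Euclidean distance
print(tangent_distance(x, y, [shift_tangent]))         # smaller: the shift is largely explained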


International Symposium on Low Power Electronics and Design | 1995

2nd order adiabatic computation with 2N-2P and 2N-2N2P logic circuits

Alan Kramer; John S. Denker; B. Flower; J. Moroney

Recent advances in compact, practical adiabatic computing circuits which demonstrate significant energy savings have renewed interest in using such techniques in low-power systems. Several recently introduced circuits for adiabatic computing make use of diodes in a way which reduces switching energy from O(C·Vdd²) in the nonadiabatic (i.e., standard CMOS) case to O(C·Vdd·Vt). These circuits provide an energy savings of at most one order of Vdd/Vt. This paper introduces a new class of adiabatic computing circuits which offer several advantages over existing approaches, the primary one being that, because no diodes are used, switching energy can be reduced to an energy floor of O(C·Vt²). These second-order adiabatic computing circuits provide an energy savings of as much as O((Vdd/Vt)²) over conventional CMOS. An additional advantage of the proposed circuits is that, in comparison to most compact adiabatic circuits, which have floating output levels over the entire data-valid time, these new circuits have non-floating output levels over most of the data-valid time; this is important for restoring logic levels and minimizing problems with crosstalk. The proposed circuits have been simulated and demonstrate adiabatic power savings of as much as a factor of 3 compared to standard CMOS circuits over an operating frequency range from 1 MHz to 100 MHz. One circuit topology has been fabricated and tested, and it operates properly at up to 100 MHz, the maximum speed that could be tested. Power measurements on the functioning circuit are in progress, and preliminary results demonstrate adiabatic power-vs-frequency behavior. These second-order adiabatic computing circuits provide an attractive alternative for achieving adiabatic power savings without suffering from many of the limitations of alternative approaches and without costing much more in terms of either complexity or size.
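For a sense of scale, a back-of-the-envelope sketch of the three switching-energy regimes described above, with illustrative component values that are assumptions rather than figures from the paper:

# Illustrative arithmetic (assumed values, not the paper's measurements):
# switching energy per event in the three regimes described above.
C   = 100e-15    # node capacitance, 100 fF (assumed)
Vdd = 3.3        # supply voltage, volts (assumed)
Vt  = 0.7        # device threshold, volts (assumed)

E_cmos  = C * Vdd * Vdd   # conventional CMOS:            O(C*Vdd^2)
E_diode = C * Vdd * Vt    # diode-based adiabatic:        O(C*Vdd*Vt), saves ~Vdd/Vt
E_2nd   = C * Vt * Vt     # second-order adiabatic floor: O(C*Vt^2),   saves ~(Vdd/Vt)^2

for name, e in [("standard CMOS", E_cmos), ("diode adiabatic", E_diode), ("2nd order", E_2nd)]:
    print(f"{name:16s} {e * 1e15:8.2f} fJ   savings x{E_cmos / e:5.1f}")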


IEEE Symposium on Low Power Electronics | 1994

A review of adiabatic computing

John S. Denker

We explain (a) why people want a low-energy computer; (b) under what conditions there is, or is not, an irreducible energy per computation for CMOS circuits; (c) partial versus full adiabatic computation, and their relationship to logically reversible computation; and (d) various schemes for achieving adiabatic operation.
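One textbook relation behind point (b), stated here as background rather than quoted from the paper: charging a capacitance C to voltage V through a resistance R with a ramp of duration T much longer than RC dissipates roughly (RC/T)·C·V², versus ½·C·V² for abrupt charging, so slowing the transitions trades speed for energy. A small sketch with assumed values:

# Background sketch (textbook relation with assumed values, not taken from the
# paper): energy dissipated charging a capacitor C to voltage V through a
# resistance R, abruptly versus with a slow ramp of duration T >> RC.
R = 1e3          # effective switch resistance, ohms (assumed)
C = 100e-15      # load capacitance, farads (assumed)
V = 3.3          # voltage swing, volts (assumed)

E_abrupt = 0.5 * C * V * V                    # step charging: (1/2) C V^2
for T in (1e-9, 10e-9, 100e-9):               # ramp times: 1 ns, 10 ns, 100 ns
    E_ramp = (R * C / T) * C * V * V          # ~ (RC/T) C V^2 when T >> RC
    print(f"T = {T * 1e9:5.0f} ns: ramp {E_ramp * 1e15:8.3f} fJ vs abrupt {E_abrupt * 1e15:8.3f} fJ")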


Pattern Recognition | 1991

Design of a neural network character recognizer for a touch terminal

Isabelle Guyon; P. Albrecht; Y. Le Cun; John S. Denker; W. Hubbard

We describe a system which can recognize digits and uppercase letters handprinted on a touch terminal. A character is input as a sequence of [x(t), y(t)] coordinates, subjected to very simple preprocessing, and then classified by a trainable neural network. The classifier is analogous to “time delay neural networks” previously applied to speech recognition. The network was trained on a set of 12,000 digits and uppercase letters from approximately 250 different writers, and tested on 2,500 such characters from other writers. Classification accuracy exceeded 96% on the test examples.
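The time-delay idea can be sketched in a few lines (toy shapes and parameters, not the authors' network): the same weights are applied to every window of the resampled pen trajectory, giving features that are shift-invariant in time, analogous to shared receptive fields over an image.

# Toy sketch (assumed shapes and sizes, not the authors' network): a time-delay
# layer over a resampled pen trajectory. The same weights see every window of
# consecutive (x, y) points, so the features are shift-invariant in time.
import numpy as np

rng = np.random.default_rng(3)
trajectory = rng.random((40, 2))           # 40 resampled pen points, (x, y)
trajectory -= trajectory.mean(axis=0)      # very simple preprocessing: center

window, n_features = 5, 8
W = rng.standard_normal((n_features, window * 2)) * 0.1
b = np.zeros(n_features)

frames = []
for t in range(len(trajectory) - window + 1):      # slide the time-delay window
    segment = trajectory[t:t + window].ravel()     # 5 points -> 10 inputs
    frames.append(np.tanh(W @ segment + b))
features = np.array(frames)                        # (36, 8) feature sequence
print(features.shape)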


IEEE Communications Magazine | 1989

Handwritten digit recognition: applications of neural network chips and automatic learning

Y. Le Cun; Lawrence D. Jackel; Bernhard E. Boser; John S. Denker; Hans Peter Graf; I. Guyon; D. Henderson; R. E. Howard; W. Hubbard

Two novel methods for achieving handwritten digit recognition are described. The first method is based on a neural network chip that performs line thinning and feature extraction using local template matching. The second method is implemented on a digital signal processor and makes extensive use of constrained automatic learning. Experimental results obtained using isolated handwritten digits taken from postal zip codes, a rather difficult data set, are reported and discussed.


international conference on pattern recognition | 1990

Handwritten zip code recognition with multilayer networks

Y. Le Cun; Ofer Matan; Bernhard E. Boser; John S. Denker; D. Henderson; R. E. Howard; W. Hubbard; L. D. Jackel; Henry S. Baird

An application of back-propagation networks to handwritten zip code recognition is presented. Minimal preprocessing of the data is required, but the architecture of the network is highly constrained and specifically designed for the task. The input of the network consists of size-normalized images of isolated digits. The performance on zip code digits provided by the US Postal Service is 92% recognition, 1% substitution, and 7% rejects. Structured neural networks can be viewed as statistical methods with structure which bridge the gap between purely statistical and purely structural methods.


Neural Computation | 1990

Exhaustive learning

Daniel B. Schwartz; V. K. Samalam; Sara A. Solla; John S. Denker

Exhaustive exploration of an ensemble of networks is used to model learning and generalization in layered neural networks. A simple Boolean learning problem involving networks with binary weights is numerically solved to obtain the entropy S_m and the average generalization ability G_m as a function of the size m of the training set. Learning curves G_m versus m are shown to depend solely on the distribution of generalization abilities over the ensemble of networks. This distribution is determined prior to learning, and it provides a novel theoretical tool for predicting network performance on a specific task.
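The exhaustive approach can be mimicked on a toy problem (an illustrative setup, not the paper's exact model): enumerate every binary-weight perceptron on d inputs, take the target from the same ensemble, and estimate G_m as the average generalization ability of the networks still consistent with m random training examples.

# Toy sketch (illustrative setup, not the paper's exact model): exhaustively
# enumerate binary-weight perceptrons on d inputs and estimate G_m, the average
# generalization ability of the networks consistent with m training examples.
import itertools
import numpy as np

d = 8
inputs  = np.array(list(itertools.product([-1, 1], repeat=d)))   # all 2^d patterns
weights = np.array(list(itertools.product([-1, 1], repeat=d)))   # all 2^d networks
outputs = np.sign(weights @ inputs.T + 0.5)                      # +0.5 breaks ties

target = outputs[0]                              # target drawn from the ensemble
g = (outputs == target).mean(axis=1)             # generalization ability of each net

rng = np.random.default_rng(4)
for m in (0, 2, 4, 8, 16, 32):
    G = []
    for _ in range(50):                          # average over random training sets
        idx = rng.choice(len(inputs), size=m, replace=False)
        consistent = np.all(outputs[:, idx] == target[idx], axis=1)
        G.append(g[consistent].mean())
    print(f"m = {m:2d}   G_m ~ {np.mean(G):.3f}")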


Physica D: Nonlinear Phenomena | 1986

Neural network models of learning and adaptation

John S. Denker

Recent work has applied ideas from many fields, including biology, physics, and computer science, in order to understand how a highly interconnected network of simple processing elements can perform useful computation. Such networks can be used as associative memories, or as analog computers to solve optimization problems. This article reviews the workings of a standard model, with particular emphasis on various schemes for learning and adaptation.
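A minimal sketch of the associative-memory use mentioned above, using the standard Hopfield-style model with Hebbian storage (illustrative code, not taken from the article): store a few patterns as an outer-product weight matrix, then recover one from a corrupted cue by iterating the update rule.

# Minimal sketch (standard Hopfield-style model, illustrative rather than taken
# from the article): Hebbian storage plus repeated threshold updates recover a
# stored pattern from a corrupted cue.
import numpy as np

rng = np.random.default_rng(5)
N, P = 64, 3
patterns = rng.choice([-1, 1], size=(P, N))      # memories to store

W = (patterns.T @ patterns) / N                  # Hebbian outer-product rule
np.fill_diagonal(W, 0.0)                         # no self-connections

cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)     # corrupt 10 of the 64 units
cue[flip] *= -1

state = cue.astype(float)
for _ in range(20):                              # synchronous updates (a simplification
    state = np.where(W @ state >= 0, 1.0, -1.0)  # of the usual one-unit-at-a-time rule)
print("overlap with the stored pattern:", int(state @ patterns[0]), "out of", N)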
