Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Y. Le Cun is active.

Publication


Featured research published by Y. Le Cun.


Pattern Recognition | 1991

Design of a neural network character recognizer for a touch terminal

Isabelle Guyon; P. Albrecht; Y. Le Cun; John S. Denker; W. Hubbard

We describe a system which can recognize digits and uppercase letters handprinted on a touch terminal. A character is input as a sequence of [x(t), y(t)] coordinates, subjected to very simple preprocessing, and then classified by a trainable neural network. The classifier is analogous to “time-delay neural networks” previously applied to speech recognition. The network was trained on a set of 12,000 digits and uppercase letters from approximately 250 different writers, and tested on 2500 such characters from other writers. Classification accuracy exceeded 96% on the test examples.
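
The core operation of a time-delay network is a shared set of weights slid along the time axis of the input sequence, i.e. a 1-D convolution over the pen trajectory. The sketch below shows that operation in numpy; the window size, feature count, and shapes are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of a time-delay (1-D convolutional) layer over a pen
# trajectory, in the spirit of the TDNN classifier described above.
# Kernel width and feature count are illustrative, not from the paper.
import numpy as np

def time_delay_layer(traj, weights, bias):
    """traj: (T, 2) array of (x, y) pen coordinates.
    weights: (n_features, window, 2); bias: (n_features,).
    Returns a (T - window + 1, n_features) feature sequence."""
    T = traj.shape[0]
    n_feat, window, _ = weights.shape
    out = np.empty((T - window + 1, n_feat))
    for t in range(T - window + 1):
        patch = traj[t:t + window]  # local time window, shared weights
        out[t] = np.tanh(np.tensordot(weights, patch,
                                      axes=([1, 2], [0, 1])) + bias)
    return out

rng = np.random.default_rng(0)
traj = rng.standard_normal((50, 2))        # a fake 50-point stroke
w = rng.standard_normal((8, 5, 2)) * 0.1   # 8 features, window of 5 samples
feats = time_delay_layer(traj, w, np.zeros(8))
print(feats.shape)                         # (46, 8)
```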


IEEE Communications Magazine | 1989

Handwritten digit recognition: applications of neural network chips and automatic learning

Y. Le Cun; Lawrence D. Jackel; Bernhard E. Boser; John S. Denker; Hans Peter Graf; I. Guyon; D. Henderson; R. E. Howard; W. Hubbard

Two novel methods for achieving handwritten digit recognition are described. The first method is based on a neural network chip that performs line thinning and feature extraction using local template matching. The second method is implemented on a digital signal processor and makes extensive use of constrained automatic learning. Experimental results obtained using isolated handwritten digits taken from postal zip codes, a rather difficult data set, are reported and discussed.
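
Local template matching, the operation the first method's chip performs in hardware, amounts to sliding a small binary template over the image and marking locations where enough pixels agree. A software sketch, with a made-up template and threshold:

```python
# Illustrative sketch of local template matching for feature extraction
# on a binary image. The template and threshold are fabricated for the
# example; the chip's actual templates are not described here.
import numpy as np

def match_template(img, template, thresh):
    """Slide a small binary template over img; mark locations where the
    number of agreeing pixels meets the threshold."""
    H, W = img.shape
    h, w = template.shape
    hits = np.zeros((H - h + 1, W - w + 1), dtype=bool)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            agree = np.sum(img[i:i + h, j:j + w] == template)
            hits[i, j] = agree >= thresh
    return hits

img = np.zeros((8, 8), dtype=int)
img[2:6, 3] = 1                            # a vertical stroke
vertical = np.array([[1], [1], [1]])       # 3x1 "vertical line" template
print(match_template(img, vertical, thresh=3).sum())  # 2 hits on the stroke
```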


international conference on pattern recognition | 1990

Handwritten zip code recognition with multilayer networks

Y. Le Cun; Ofer Matan; Bernhard E. Boser; John S. Denker; D. Henderson; R. E. Howard; W. Hubbard; Lawrence D. Jackel; Henry S. Baird

An application of back-propagation networks to handwritten zip code recognition is presented. Minimal preprocessing of the data is required, but the architecture of the network is highly constrained and specifically designed for the task. The input of the network consists of size-normalized images of isolated digits. The performance on zip code digits provided by the US Postal Service is 92% recognition, 1% substitution, and 7% rejects. Structured neural networks can be viewed as statistical methods with built-in structure that bridge the gap between purely statistical and purely structural methods.
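
The recognition/substitution/reject split arises from a confidence threshold on the classifier's outputs: answers with a small margin between the top two class scores are rejected rather than risked. A hedged sketch of that accounting, with fabricated scores and threshold:

```python
# Sketch of how recognition / substitution / reject rates arise from a
# confidence threshold. Scores, labels, and the margin are fabricated.
import numpy as np

def score_with_reject(scores, labels, margin):
    """Reject when the gap between the top two class scores is small;
    otherwise count a correct answer (recognition) or wrong (substitution)."""
    top2 = np.sort(scores, axis=1)[:, -2:]
    confident = (top2[:, 1] - top2[:, 0]) >= margin
    pred = scores.argmax(axis=1)
    recognized = confident & (pred == labels)
    substituted = confident & (pred != labels)
    return recognized.mean(), substituted.mean(), 1.0 - confident.mean()

rng = np.random.default_rng(1)
scores = rng.random((1000, 10))            # fake 10-class outputs
labels = scores.argmax(axis=1)
labels[:30] = (labels[:30] + 1) % 10       # plant some errors
rec, sub, rej = score_with_reject(scores, labels, margin=0.05)
print(f"recognition {rec:.1%}, substitution {sub:.1%}, reject {rej:.1%}")
```

Raising the margin trades substitutions for rejects, which is how a system tuned for postal use keeps the substitution rate near 1%.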


IEEE Journal of Solid-state Circuits | 1991

An analog neural network processor with programmable topology

Bernhard E. Boser; Eduard Sackinger; Jane Bromley; Y. Le Cun; Lawrence D. Jackel

The architecture, implementation, and applications of a special-purpose neural network processor are described. The chip performs over 2000 multiplications and additions simultaneously. Its data path is particularly suitable for the convolutional topologies that are typical in classification networks, but it can also be configured for fully connected or feedback topologies. Resources can be multiplexed to permit implementation of networks with several hundred thousand connections on a single chip. Computations are performed with 6-bit accuracy for the weights and 3-bit accuracy for the neuron states. Analog processing is used internally for reduced power dissipation and higher density, but all input/output is digital to simplify system integration. The practicality of the chip is demonstrated with an implementation of a neural network for optical character recognition. This network contains over 130,000 connections and was evaluated in 1 ms.


international symposium on neural networks | 1992

Shortest path segmentation: a method for training a neural network to recognize character strings

Christopher J. C. Burges; Ofer Matan; Y. Le Cun; John S. Denker; Lawrence D. Jackel; Charles E. Stenard; Craig R. Nohl; Jan Ben

The authors describe a method which combines dynamic programming and a neural network recognizer for segmenting and recognizing character strings. The method selects the optimal consistent combination of cuts from a set of candidate cuts generated using heuristics. The optimal segmentation is found by representing the image, the candidate segments, and their scores as a graph in which the shortest path corresponds to the optimal interpretation. The scores are given by neural net outputs for each segment. A significant advantage of the method is that the labor required to segment images manually is eliminated. The system was trained on approximately 7000 unsegmented handwritten zip codes provided by the United States Postal Service. The system has achieved a per-zip-code raw recognition rate of 81% on a 2368 handwritten zip-code test set.
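
The graph construction is the key idea: candidate cut points become nodes, the segment between two cuts becomes an edge whose cost is the negative log of the recognizer's score for that segment, and the cheapest path from the first cut to the last is the best consistent segmentation. A minimal sketch with a stub scorer (the real system used a neural network):

```python
# Sketch of shortest-path segmentation over candidate cut points.
# segment_score is a stub standing in for the neural net recognizer.
import math

def best_segmentation(n_cuts, segment_score):
    """Dynamic program over cut points 0..n_cuts-1.
    segment_score(i, j) -> probability-like score for segment [i, j)."""
    INF = math.inf
    cost = [INF] * n_cuts
    back = [0] * n_cuts
    cost[0] = 0.0
    for j in range(1, n_cuts):
        for i in range(j):
            c = cost[i] - math.log(max(segment_score(i, j), 1e-12))
            if c < cost[j]:
                cost[j], back[j] = c, i
    # Recover the chosen cut points by walking the back-pointers.
    path, j = [n_cuts - 1], n_cuts - 1
    while j != 0:
        j = back[j]
        path.append(j)
    return path[::-1]

# Stub scorer: prefers segments two cut points wide.
score = lambda i, j: 0.9 if j - i == 2 else 0.1
print(best_segmentation(7, score))   # [0, 2, 4, 6]
```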


IEEE Computer | 1992

Reading handwritten digits: a ZIP code recognition system

Ofer Matan; Henry S. Baird; Jane Bromley; Christopher J. C. Burges; John S. Denker; Lawrence D. Jackel; Y. Le Cun; Edwin P. D. Pednault; W.D. Satterfield; Charles E. Stenard; T.J. Thompson

A neural network algorithm-based system that reads handwritten ZIP codes appearing on real US mail is described. The system uses a recognition-based segmenter that is a hybrid of connected-components analysis (CCA), vertical cuts, and a neural network recognizer. Connected components that are single digits are handled by CCA. CCs that are combined or dissected digits are handled by the vertical-cut segmenter. The four main stages of processing are preprocessing, in which noise is removed and the digits are deslanted; CCA segmentation and recognition; vertical-cut-point estimation and segmentation; and directory lookup. The system was trained and tested on approximately 10,000 images of five- and nine-digit ZIP code fields taken from real mail.
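
The routing decision between CCA and the vertical-cut segmenter can be illustrated with off-the-shelf connected-components labeling: blobs about the width of one digit go straight to the recognizer, wider blobs are assumed to be touching digits and go to the cut estimator. A sketch assuming scipy is available; the image and the width heuristic are fabricated:

```python
# Sketch of the connected-components routing stage. Assumes scipy;
# the test image and the expected digit width are made up.
import numpy as np
from scipy import ndimage

field = np.zeros((10, 30), dtype=int)
field[2:8, 2:6] = 1                     # a lone digit-sized blob
field[2:8, 10:22] = 1                   # a wide blob: likely touching digits

labels, n = ndimage.label(field)
expected_width = 6                      # assumed single-digit width
for k in range(1, n + 1):
    cols = np.nonzero(labels == k)[1]
    width = cols.max() - cols.min() + 1
    if width <= expected_width:
        print(f"component {k}: width {width}, send to recognizer")
    else:
        print(f"component {k}: width {width}, send to vertical-cut segmenter")
```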


international conference on pattern recognition | 1994

Word normalization for online handwritten word recognition

Yoshua Bengio; Y. Le Cun

We introduce a new approach to normalizing words written with an electronic stylus that applies to all styles of handwriting (upper case, lower case, printed, cursive, or mixed). A geometrical model of the word's spatial structure is fitted to the pen trajectory using the expectation-maximization algorithm. The fitting process maximizes the likelihood of the trajectory given the model and a set of priors on its parameters. The method was evaluated and integrated into a recognition system that combines neural networks and hidden Markov models.
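
The paper's geometrical word model is richer than anything shown here, but the EM machinery it relies on can be illustrated with a deliberately simplified stand-in: fitting a two-component Gaussian mixture to the y-coordinates of the trajectory, which could loosely separate core-region points from ascender points. Everything below is that stand-in, not the paper's model:

```python
# Hedged stand-in for the paper's EM fit: a two-component 1-D Gaussian
# mixture over trajectory y-coordinates. The real model and its priors
# are considerably richer.
import numpy as np

def em_1d(y, n_iter=50):
    mu = np.array([y.min(), y.max()], dtype=float)   # crude init
    var = np.full(2, y.var() + 1e-6)
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        d = y[:, None] - mu[None, :]
        p = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / len(y)
        mu = (r * y[:, None]).sum(axis=0) / nk
        var = (r * (y[:, None] - mu[None, :])**2).sum(axis=0) / nk + 1e-6
    return mu, var, pi

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0.0, 0.2, 300),    # "core-region" points
                    rng.normal(1.5, 0.2, 60)])    # "ascender" points
mu, var, pi = em_1d(y)
print(np.round(mu, 2))                            # roughly [0.0, 1.5]
```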


Archive | 1990

Optical Character Recognition and Neural-Net Chips

Y. Le Cun; Lawrence D. Jackel; Hans Peter Graf; Bernhard E. Boser; John S. Denker; Isabelle Guyon; D. Henderson; R. E. Howard; W. Hubbard; Sara A. Solla

Neural network research has always interested hardware designers, theoreticians, and application engineers. But until recently, the common ground between these groups was limited: the neural-net chips were too small to implement any full-size application, and the algorithms were too complicated (or the applications not interesting enough) to be implemented on a chip. The merging of these efforts is now made possible by the simultaneous emergence of powerful chips and successful, real-world applications of neural networks. Here, we discuss how the compute-intensive part of a handwritten digit recognizer, based on a highly structured backpropagation network, can be implemented on a general-purpose neural-network chip containing 32k binary synapses. Using techniques based on the second-order properties of the error function, we show that very little accuracy on the weights and states is required in the first layers of the network. Interestingly, the best digit-recognition network is also the easiest to implement on a chip.
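
The precision question can be probed directly: quantize a weight matrix to a given bit width and measure how much a layer's output moves. The sketch below does that with uniform quantization on a toy layer; the paper instead chose precisions using second-order properties of the error function, so this is only an illustration of the effect, not their method:

```python
# Sketch of weight precision vs. output error: uniformly quantize a
# toy layer's weights to k bits and compare against full precision.
# Layer sizes and bit widths are illustrative.
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of w to 2**(bits-1)-1 levels per sign."""
    scale = np.abs(w).max()
    levels = 2 ** (bits - 1) - 1
    return np.round(w / scale * levels) / levels * scale

rng = np.random.default_rng(3)
w = rng.standard_normal((16, 64)) * 0.5
x = rng.standard_normal(64)
ref = np.tanh(w @ x)
for bits in (8, 6, 4, 2):
    out = np.tanh(quantize(w, bits) @ x)
    print(bits, "bits: max output error", float(np.abs(out - ref).max()))
```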


international conference on pattern recognition | 1992

An efficient algorithm for learning invariance in adaptive classifiers

Patrice Y. Simard; Y. Le Cun; John S. Denker; B. Victorri

In many machine learning applications, one has not only training data but also some high-level information about certain invariances that the system should exhibit. In character recognition, for example, the answer should be invariant with respect to small spatial distortions in the input images (translations, rotations, scale changes, etc.). The authors have implemented a scheme that minimizes the derivative of the classifier outputs with respect to distortion operators. This not only produces tremendous speed advantages, but also provides a powerful language for specifying what generalizations the network can perform.
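
Concretely, each distortion defines a tangent vector t at an input x (the pixel-space direction the image moves when distorted infinitesimally), and the penalty is the squared directional derivative of the classifier output along t. The sketch below approximates that derivative by a central finite difference on a toy classifier; the paper differentiates analytically, and both the network and the tangent here are stand-ins:

```python
# Sketch of the invariance penalty: penalize the directional derivative
# of the classifier output along a distortion tangent vector. Finite
# differences stand in for the paper's analytic derivative.
import numpy as np

rng = np.random.default_rng(4)
W = rng.standard_normal((10, 256)) * 0.05        # toy linear classifier

def f(x):
    return np.tanh(W @ x)

def tangent_penalty(x, t, eps=1e-3):
    """|| d f(x + a*t) / da ||^2 at a = 0, by central differences."""
    d = (f(x + eps * t) - f(x - eps * t)) / (2 * eps)
    return float(d @ d)

img = rng.standard_normal(256)                   # flattened 16x16 "image"
shift = np.roll(img, 1) - img                    # tangent of a 1-pixel shift
print(tangent_penalty(img, shift))               # add this term to the loss
```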


international symposium on neural networks | 1991

Double backpropagation increasing generalization performance

H. Drucker; Y. Le Cun

One test of a training algorithm is how well the algorithm generalizes from the training data to the test data. It is shown that a training algorithm termed double back-propagation improves generalization by simultaneously minimizing the normal energy term found in back-propagation and an additional energy term that is related to the sum of the squares of the input derivatives (gradients). In normal back-propagation training, minimizing the energy function tends to push the input gradient to zero. However, this is not always possible. Double back-propagation explicitly pushes the input gradients to zero, making the minimum broader, and increases the generalization on the test data. The authors show the improvement over normal back-propagation on four candidate architectures with a training set of 320 handwritten numbers and a test set of size 180.
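
For a one-layer network with squared error, the combined objective can be written out in closed form: the usual energy plus the squared norm of its gradient with respect to the input. A minimal sketch, with illustrative sizes and penalty weight:

```python
# Sketch of the double-backprop objective for a one-layer tanh net:
# E(x) = 0.5 * ||tanh(Wx) - y||^2, penalized by ||dE/dx||^2.
# Sizes and the penalty weight lam are illustrative.
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((10, 64)) * 0.1

def double_backprop_loss(x, target, lam=0.1):
    h = np.tanh(W @ x)
    err = h - target
    energy = 0.5 * float(err @ err)              # normal backprop term
    dE_dx = W.T @ (err * (1.0 - h ** 2))         # gradient w.r.t. the input
    return energy + lam * float(dE_dx @ dE_dx)   # extra energy term

x = rng.standard_normal(64)
target = np.zeros(10); target[3] = 1.0
print(double_backprop_loss(x, target))
```

Pushing the input gradient toward zero flattens the error surface around each training point, which is what makes the minimum broader.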

Collaboration


Dive into Y. Le Cun's collaborations.

Top Co-Authors

Yoshua Bengio

Université de Montréal
