Eduard Sackinger
Bell Labs
Publications
Featured research published by Eduard Sackinger.
international conference on pattern recognition | 1994
Léon Bottou; Corinna Cortes; John S. Denker; Harris Drucker; Isabelle Guyon; Larry D. Jackel; Yann LeCun; Urs Muller; Eduard Sackinger; Patrice Y. Simard; Vladimir Vapnik
This paper compares the performance of several classifier algorithms on a standard database of handwritten digits. We consider not only raw accuracy, but also training time, recognition time, and memory requirements. When available, we report measurements of the fraction of patterns that must be rejected so that the remaining patterns have misclassification rates less than a given threshold.
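The rejection metric described above can be made concrete with a small sketch. This is an illustrative reconstruction, not the paper's actual evaluation code: given each test pattern's classifier confidence and whether it was classified correctly, it finds the fraction of patterns that must be rejected (least confident first) so that the error rate on the remaining patterns falls below a threshold. The function name and signature are hypothetical.

```python
import numpy as np

def reject_rate(confidences, correct, max_error=0.01):
    """Fraction of patterns to reject (lowest confidence first) so that
    the misclassification rate on the kept patterns is <= max_error.
    Hypothetical sketch of the metric, not the paper's implementation."""
    order = np.argsort(confidences)                # least confident first
    correct_sorted = np.asarray(correct, dtype=float)[order]
    n = len(correct_sorted)
    for k in range(n + 1):                         # reject the k least confident
        kept = correct_sorted[k:]
        if kept.size == 0 or 1.0 - kept.mean() <= max_error:
            return k / n
    return 1.0
```

For example, with four patterns of which the least confident one is wrong, rejecting that single pattern (25% of the set) already brings the remaining error rate to zero.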
international solid-state circuits conference | 2000
Eduard Sackinger; W.C. Fischer
A CMOS limiting amplifier with a bandwidth of 3 GHz, a gain of 32 dB, and a noise figure of 16 dB is described. The amplifier is fabricated in a standard 2.5-V 0.25-μm CMOS technology and consumes 53 mW. Inversely scaled amplifier stages and active inductors with a low voltage drop are used to achieve this performance. The amplifier is targeted for use in 2.5-Gb/s (OC-48) SONET systems.
IEEE Journal of Solid-state Circuits | 1991
Bernhard E. Boser; Eduard Sackinger; Jane Bromley; Y. Le Cun; Lawrence D. Jackel
The architecture, implementation, and applications of a special-purpose neural network processor are described. The chip performs over 2000 multiplications and additions simultaneously. Its data path is particularly suitable for the convolutional topologies that are typical in classification networks, but can also be configured for fully connected or feedback topologies. Resources can be multiplexed to permit implementation of networks with several hundreds of thousands of connections on a single chip. Computations are performed with 6 b accuracy for the weights and 3 b for the neuron states. Analog processing is used internally for reduced power dissipation and higher density, but all input/output is digital to simplify system integration. The practicality of the chip is demonstrated with an implementation of a neural network for optical character recognition. This network contains over 130000 connections and was evaluated in 1 ms.
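The reduced-precision arithmetic mentioned above (6-bit weights, 3-bit neuron states) can be simulated with a simple uniform quantizer. This is an illustrative sketch only, not a model of the chip's analog circuitry; the value range and function name are assumptions.

```python
import numpy as np

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Snap values in [lo, hi] onto a uniform grid of 2**bits levels.
    Illustrative sketch of reduced-precision storage, not the chip's circuits."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    q = np.round((np.clip(x, lo, hi) - lo) / step)
    return lo + q * step

w = np.array([0.33, -0.71])
print(quantize(w, 6))   # weights on a 64-level grid
print(quantize(w, 3))   # states on a much coarser 8-level grid
```

Running a trained network through such a quantizer is a common way to estimate how much accuracy is lost to limited-precision hardware before committing to silicon.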
IEEE Transactions on Neural Networks | 1992
Eduard Sackinger; Bernhard E. Boser; Jane Bromley; Yann LeCun; Lawrence D. Jackel
A neural network with 136000 connections for recognition of handwritten digits has been implemented using a mixed analog/digital neural network chip. The neural network chip is capable of processing 1000 characters/s. The recognition system achieves essentially the same error rate (5%) as a simulation of the network with 32-b floating-point precision.
international solid-state circuits conference | 1999
Eduard Sackinger; Yusuke Ota; Thaddeus J. Gabara; Wilhelm C. Fischer
A burst-mode laser driver for passive optical networks (ATM-PON/FSAN, N-PON, π-PON) uses mixed-signal design techniques in digital 0.5-μm CMOS. Power consumption is 15 mW, which is about an order of magnitude less than previous designs. The laser driver features automatic power control and laser end-of-life detection. These features are implemented with a novel peak comparator, which operates on a packet-by-packet basis.
international symposium on microarchitecture | 1992
Bernhard E. Boser; Eduard Sackinger; Jane Bromley; Yann LeCun; Lawrence D. Jackel
A special-purpose chip, optimized for computational needs of neural networks and performing over 2000 multiplications and additions simultaneously, is described. Its data path is particularly suitable for the convolutional architectures typical in pattern classification networks but can also be configured for fully connected or feedback topologies. A development system permits rapid prototyping of new applications and analysis of the impact of the specialized hardware on system performance. The power and flexibility of the processor are demonstrated with a neural network for handwritten character recognition containing over 133000 connections.
signal processing systems | 1993
Hans Peter Graf; Eduard Sackinger; Lawrence D. Jackel
We present a survey of recent electronic implementations of neural nets in the US and Canada with an emphasis on integrated circuits. Well over 50 different circuits were built during the last two years, representing a remarkable variety of designs. They range from digital emulators to fully analog CMOS networks operating in the subthreshold region. A majority of these circuits, over 40 designs, use analog computation to some extent. Several neural net chips are now commercially available, and many companies are working on the development of products for an introduction in the near future. Most of the neural net circuits have been built in standard CMOS technology, except for a few designs in CCD technology. EEPROM cells are investigated by several researchers for building compact, analog storage elements for the weights. While a large number of circuits have been built, there are still only a few reports of applications of any of these chips to large, real-world problems. In fact, system integration and applications with neural net chips are just beginning to be explored. We describe some experiences gained with applications of analog neural net chips to machine vision from our laboratory.
international symposium on neural networks | 1991
Bernhard E. Boser; Eduard Sackinger; Jane Bromley; Yann LeCun; R. E. Howard; Lawrence D. Jackel
A high-speed programmable neural network chip and its application to character recognition are described. A network with over 130000 connections has been implemented on a single chip and operates at a rate of over 1000 classifications per second. The chip performs up to 2000 multiplications and additions simultaneously. Its datapath is suitable for the convolutional architectures that are typical in pattern classification networks, but can also be configured for fully connected or feedback topologies. Computations were performed with 6 bits accuracy for the weights and 3 bits for the states. The chip uses analog processing internally for higher density and reduced power dissipation, but all input/output is digital to simplify system integration.
IEEE Transactions on Neural Networks | 1996
Eduard Sackinger; Hans Peter Graf
Two ANNA neural-network chips are integrated on a 6U VME board, to serve as a high-speed platform for a wide variety of algorithms used in neural-network applications as well as in image analysis. The system can implement neural networks of variable sizes and architectures, but can also be used for filtering and feature extraction tasks that are based on convolutions. The board contains a controller implemented with field programmable gate arrays (FPGAs), memory, and bus interfaces, all designed to support the high compute power of the ANNA chips. This new system is designed for maximum speed and is roughly 10 times faster than a previous board. The system has been tested for such tasks as text location, character recognition, and noise removal as well as for emulating cellular neural networks (CNNs). A sustained speed of up to two billion connections per second (GC/s) and a recognition speed of 1000 characters per second has been measured.
IEEE Transactions on Circuits and Systems | 1990
Eduard Sackinger; Linda Fornera
The question of how to arrange devices (e.g., MOS transistors) in analog integrated circuits so that on-chip process variations affect the circuit's performance as little as possible is addressed. The answer to this question, a generalization of the common-centroid technique, is outlined, and the solution for the specific input-transistor layout of a differential difference amplifier is presented. The method can be applied to other precision analog circuits, such as multipliers. The placement of components such as capacitors in switched-capacitor circuits can be treated analogously.
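The common-centroid idea in the abstract above can be illustrated numerically. In this toy sketch (positions and gradient values are invented for illustration, not taken from the paper), a parameter varies linearly across the die; an ABBA arrangement of the two matched devices cancels the gradient-induced mismatch, while a naive AABB arrangement does not.

```python
import numpy as np

# Four unit-device sites along one axis of the die, and a process
# parameter that drifts linearly across them (made-up numbers).
positions = np.array([0.0, 1.0, 2.0, 3.0])
gradient = 1.0 + 0.05 * positions

def mismatch(layout):
    """Difference between the summed parameter of the 'A' devices and the
    'B' devices for a given placement string, e.g. 'ABBA'."""
    a = gradient[[i for i, d in enumerate(layout) if d == "A"]].sum()
    b = gradient[[i for i, d in enumerate(layout) if d == "B"]].sum()
    return a - b

print(mismatch("AABB"))   # naive placement: leftover offset
print(mismatch("ABBA"))   # common centroid: gradient cancels
```

Because both halves of the ABBA pair share the same centroid, any linear gradient contributes equally to A and B and drops out of the difference; the paper's contribution is generalizing this cancellation to more complex multi-device layouts.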