Marco Sabatini
STMicroelectronics
Publication
Featured research published by Marco Sabatini.
international electron devices meeting | 1994
Alan Kramer; Marco Sabatini; Roberto Canegallo; Mauro Chinosi; Pierluigi Rolandi; P. Zabberoni
The use of flash devices for both analog storage and analog computation can result in highly efficient switched-capacitor implementations of neural networks. The standard flash device suffers from severe limitations in this application due to relatively large parasitic overlap capacitances. This paper introduces the computational concept, circuit, and architecture we are exploring, as well as a novel flash-based programmable nonlinear capacitor with much improved charge-domain characteristics for our application. These devices are demonstrated in a novel circuit consisting of only two devices and capable of computing a 5-bit absolute-value-of-difference at an energy consumption of less than 1 pJ.
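For orientation, the following is a minimal Python sketch of the ideal function the two-device cell approximates: a 5-bit absolute-value-of-difference, with the sub-1 pJ energy figure from the abstract carried along as a constant. The function name and the energy bookkeeping are illustrative assumptions, not part of the paper.

```python
# Illustrative sketch (not from the paper) of the ideal operation the
# two-device flash cell approximates: a 5-bit absolute-value-of-difference.

ENERGY_PER_OP_PJ = 1.0  # upper bound quoted in the abstract (< 1 pJ per operation)

def abs_diff_5bit(a: int, b: int) -> int:
    """Ideal |a - b| for 5-bit operands in [0, 31]."""
    if not (0 <= a < 32 and 0 <= b < 32):
        raise ValueError("operands must be 5-bit values in [0, 31]")
    return abs(a - b)

print(abs_diff_5bit(17, 5))                      # -> 12
print(f"energy bound: < {ENERGY_PER_OP_PJ} pJ")  # per operation
```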
international symposium on low power electronics and design | 1995
Alan Kramer; Roberto Canegallo; Mauro Chinosi; D. Doise; Giovanni Gozzini; Pier Luigi Rolandi; Marco Sabatini; P. Zabberoni
Analog techniques can lead to ultra-efficient computational systems when applied to the right applications. The problem of associative memory is well suited to array-based analog implementation, and the architectures which result can be ultra-efficient both in terms of high density and low power consumption. We have implemented a small (16x512) analog associative memory array which uses programmable nonlinear capacitors based on flash EEPROM technology for both analog storage and analog Manhattan distance computation. The core circuit involved is based on only two of these novel devices. Preliminary results from this test circuit indicate that we can achieve a computing precision of more than 8 digital-equivalent bits in a chip which is capable of performing 128 Giga absolute-value-of-difference-accumulate operations per second at a power consumption of less than 150 mW. Performance of this level is more than an order of magnitude more efficient than the best low-power digital techniques and demonstrates the potential advantages analog implementation has to offer when applied to certain applications.

Introduction

Associative Memory. The function of an associative memory, or content-addressable memory, is more or less the inverse of that of a random-access memory: when presented with a partial or complete data vector, the memory should return the row address of the internally stored data vector which best "matches" the input data vector. The matching function is typically a distance function; in standard digital implementations, Hamming distance is usually used. Associative memory lends itself to array-based parallel implementation. A typical architecture consists of a 2-dimensional distance-computing / memory array and several 1-dimensional arrays, including an accumulator array for accumulating distances, a comparator array for finding the smallest distance, a priority encoder array for selecting rows one at a time, and a ROM array for presenting outputs [5].

Analog Associative Memory. We are exploring an analog implementation of this type of architecture for associative memory. The result is an analog associative memory in which both stored memory rows and inputs consist of analog-valued vectors (5-bit equivalent precision). The goal is to achieve an ultra-efficient design in terms of both density and power consumption. Our target is an associative memory containing 4K lines of 64-dimensional memory vectors, capable of performing a nearest-neighbor match based on Manhattan distance in less than 2 µs at a power consumption of less than 150 mW. Computation of 4K 64-dimensional Manhattan distances requires 256K 5-bit absolute-value-of-difference-accumulate computations; achieving a cycle time of 2 µs therefore requires performing 128 G of these operations per second. Performing this much computation on a single chip at a power consumption of less than 150 mW represents an increase in efficiency, both in terms of density and power consumption, of more than an order of magnitude over the best low-power digital techniques [1]. Practical realization of computing systems based on analog techniques may provide a viable alternative for ultra-efficient system design if the design generality lost can be justified by the added efficiency gained.

* This work has been partially sponsored by U. C. Berkeley, where Mr. Kramer is completing a Ph.D.
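To make the array's operation concrete, here is a purely functional NumPy sketch of the nearest-neighbor match by Manhattan distance, together with the throughput arithmetic from the abstract. This models the behavior only, not the analog circuit; the array sizes follow the stated 4K x 64, 5-bit target, and all names are illustrative assumptions.

```python
import numpy as np

# Functional model of the associative-memory operation described above:
# nearest-neighbor match by Manhattan distance over an array of stored
# 5-bit-equivalent vectors. Illustrative sketch, not the chip's implementation.

ROWS, DIM, LEVELS = 4096, 64, 32  # 4K lines, 64 dimensions, 5-bit values

rng = np.random.default_rng(0)
memory = rng.integers(0, LEVELS, size=(ROWS, DIM))  # stored analog-valued vectors
query = rng.integers(0, LEVELS, size=DIM)           # input vector

# One Manhattan distance per row: 64 absolute-value-of-difference-accumulate ops
distances = np.abs(memory - query).sum(axis=1)
best_row = int(np.argmin(distances))                # address of the best match

# Throughput arithmetic from the abstract: 4K x 64 ≈ 256K operations per match,
# so a 2 µs cycle time implies roughly 128 Giga operations per second.
ops_per_match = ROWS * DIM                          # 262,144 (~256K)
ops_per_second = ops_per_match / 2e-6               # ~1.3e11 (~128 G/s)
print(best_row, ops_per_match, f"{ops_per_second:.3g} ops/s")
```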
Archive | 2000
Marco Sabatini; Frederic Raynal; Bhusan Gupta
Archive | 2000
James Chester Meador; Giovanni Gozzini; Marco Sabatini
Archive | 1998
Alexander Kalnitsky; Alan Kramer; Vito Fabbrizio; Giovanni Gozzini; Bhusan Gupta; Marco Sabatini
Archive | 1997
Alan Kramer; Roberto Canegallo; Mauro Chinosi; Giovanni Gozzini; Philip Heng Wai Leong; Pier Luigi Rolandi; Marco Sabatini
Archive | 1999
Alexander Kalnitsky; Alan Kramer; Vito Fabbrizio; Giovanni Gozzini; Bhusan Gupta; Marco Sabatini
Archive | 1998
Alexander Kalnitsky; Frank Randolph Bryant; Marco Sabatini
Archive | 1997
Alan Kramer; Roberto Canegallo; Mauro Chinosi; Giovanni Gozzini; Pier Luigi Rolandi; Marco Sabatini
Archive | 1997
Alan Kramer; Roberto Canegallo; Mauro Chinosi; Giovanni Gozzini; Philip Heng Wai Leong; Marco Onorato; Pier Luigi Rolandi; Marco Sabatini