Pentti Kanerva
University of California, Berkeley
Publications
Featured research published by Pentti Kanerva.
Cognitive Computation | 2009
Pentti Kanerva
The 1990s saw the emergence of cognitive models that depend on very high dimensionality and randomness. They include Holographic Reduced Representations, Spatter Code, Semantic Vectors, Latent Semantic Analysis, Context-Dependent Thinning, and Vector-Symbolic Architecture. They represent things in high-dimensional vectors that are manipulated by operations that produce new high-dimensional vectors in the style of traditional computing, in what is called here hyperdimensional computing on account of the very high dimensionality. The paper presents the main ideas behind these models, written as a tutorial essay in hopes of making the ideas accessible and even provocative. A sketch of how we have arrived at these models, with references and pointers to further reading, is given at the end. The thesis of the paper is that hyperdimensional representation has much to offer to students of cognitive science, theoretical neuroscience, computer science and engineering, and mathematics.
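As a hedged illustration of the style of computing the essay describes (none of this code is from the paper; the bipolar vectors and the multiply/majority operators are common conventions in this family of models), the basic operations can be sketched in a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # dimensionality in the thousands, per the essay

def hypervector():
    """Random bipolar (+1/-1) vector with i.i.d. components."""
    return rng.choice([-1, 1], size=N)

def bind(a, b):
    """Binding: elementwise multiplication; the result is dissimilar to both inputs."""
    return a * b

def bundle(*vs):
    """Superposition: componentwise majority; the result is similar to each input."""
    return np.sign(np.sum(vs, axis=0))

def cosine(a, b):
    return a @ b / N

a, b, c = hypervector(), hypervector(), hypervector()
s = bundle(a, b, c)                    # odd number of inputs avoids ties
print(cosine(s, a))                    # ~0.5: the sum resembles its inputs
print(cosine(bind(a, b), a))           # ~0.0: binding yields a new, unrelated vector
print(cosine(bind(bind(a, b), b), a))  # 1.0: multiplication is its own inverse
```

These two operators, plus permutation, are the arithmetic out of which the models listed above build their representations.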
International Conference on Artificial Neural Networks | 1996
Pentti Kanerva
Information with structure is traditionally organized into records with fields. For example, a medical record consisting of name, sex, age, and weight might look like (Joe, male, 66, 77). What 77 stands for is determined by its location in the record, making this an example of local representation. The brain's wiring, and its robustness under local damage, speak for the importance of distributed representations. The Holographic Reduced Representation (HRR) of Plate is a prime example, based on real or complex vectors. This paper describes how spatter coding leads to binary HRRs, how the fields of a record are encoded into a long binary word without fields, and how they are extracted from such a word.
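A minimal sketch of such a field-less record (the roles and fillers are the abstract's example; the XOR binding and bitwise majority are the standard binary-HRR choices, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
rand = lambda: rng.integers(0, 2, N, dtype=np.uint8)

roles = {r: rand() for r in ("name", "sex", "age", "weight")}
items = {v: rand() for v in ("Joe", "male", "66", "77")}

pairs = [roles[r] ^ items[v]  # bind each role to its filler with XOR
         for r, v in [("name", "Joe"), ("sex", "male"), ("age", "66"), ("weight", "77")]]
# Bitwise majority over the bound pairs; ties (2 of 4) resolve to 0.
record = (np.sum(pairs, axis=0) * 2 > len(pairs)).astype(np.uint8)

# Extract the "weight" field: unbind with the role, then find the nearest stored item.
probe = record ^ roles["weight"]
best = min(items, key=lambda v: np.count_nonzero(probe ^ items[v]))
print(best)  # -> "77"
```

The record is a single long word with no fields; which filler a probe recovers is determined by the role vector used for unbinding, not by position.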
Archive | 1994
Pentti Kanerva
The Spatter Code is a high-dimensional (e.g., N = 10,000), random code that encodes “high-level concepts” in terms of their “low-level attributes” so that concepts at different levels can be mixed freely. The binary spatter code is the simplest. It has two N-bit codewords for each concept or item: a “high-level,” or dense, word with many randomly placed 1s, and a “low-level,” or sparse, word with a few (that are contained in the many). The dense codewords can be used as inputs to an associative memory. The sparse codewords are used in encoding new concepts. When several items (attributes, concepts, chunks) are combined to form a new item, the two codewords for the new item are made from the sparse codewords of its constituents as follows: the new dense word is the logical OR of the constituents (i.e., their sum thresholded at 0.5), and the new sparse word has 1s where the constituent words overlap (i.e., their sum thresholded at 1.5). When the parameters for the code are chosen properly, the number of 1s in the codewords is maintained as new items are encoded from combinations of old ones.
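The combination rule reads directly as thresholded sums; a minimal sketch (the density below is illustrative, not the tuned value the abstract says is needed to keep the number of 1s stable):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000
p = 0.02  # illustrative sparse density; the paper tunes this parameter

def sparse_word():
    return (rng.random(N) < p).astype(np.uint8)

constituents = [sparse_word() for _ in range(3)]
s = np.sum(constituents, axis=0)
dense_new  = (s > 0.5).astype(np.uint8)  # logical OR of the sparse words
sparse_new = (s > 1.5).astype(np.uint8)  # 1s where at least two words overlap

print(dense_new.mean(), sparse_new.mean())  # densities of the two new codewords
```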
IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2014
Eero Lehtonen; Jussi H. Poikonen; Mika Laiho; Pentti Kanerva
Associative memories, in contrast to conventional address-based memories, are inherently fault-tolerant and allow retrieval of data based on partial search information. This paper considers the possibility of implementing large-scale associative memories through memristive devices jointly with CMOS circuitry. An advantage of a memristive associative memory is that the memory elements are located physically above the CMOS layer, which yields more die area for the processing elements realized in CMOS. This allows for high-capacity memories even while using an older CMOS technology, as the capacity of the memory depends more on the feature size of the memristive crossbar than on that of the CMOS components. We propose memristive implementations of, and present simulations and error analyses for, the autoassociative content-addressable memory, the Willshaw memory, and the sparse distributed memory. Furthermore, we present a CMOS cell that can be used to implement the proposed memory architectures.
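Of the three architectures, the Willshaw memory is the simplest to sketch in software, independent of the memristive/CMOS realization the paper is actually about (sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, PAIRS = 1000, 20, 50  # units, active bits per pattern, stored pairs

def sparse_pattern():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, K, replace=False)] = 1
    return v

xs = [sparse_pattern() for _ in range(PAIRS)]
ys = [sparse_pattern() for _ in range(PAIRS)]

W = np.zeros((N, N), dtype=np.uint8)
for x, y in zip(xs, ys):
    W |= np.outer(y, x)  # clipped (binary OR) Hebbian learning

def recall(x):
    # Threshold the synaptic sums at the input's own activity level.
    return (W.astype(int) @ x >= x.sum()).astype(np.uint8)

print(np.array_equal(recall(xs[0]), ys[0]))  # True while well below capacity
```

In the memristive version, the binary weight matrix W lives in the crossbar above the CMOS layer, which is what frees die area for the CMOS processing elements.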
Neural Information Processing Systems | 1998
Pentti Kanerva
We look at a distributed representation of structure with variable binding that is natural for neural nets and that allows traditional symbolic representation and processing. The representation supports learning from example. This is demonstrated by taking several instances of the mother-of relation implying the parent-of relation, encoding them into a mapping vector, and showing that the mapping vector maps new instances of mother-of into parent-of. Possible implications for AI are considered.
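A hedged reconstruction of the demonstration (the bipolar vectors and the particular role/relation encodings below are illustrative assumptions; the paper's exact encoding may differ):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 10_000
hv = lambda: rng.choice([-1, 1], size=N)
bind = lambda a, b: a * b
bundle = lambda *vs: np.sign(np.sum(vs, axis=0))

MREL, PREL, AG, PT = hv(), hv(), hv(), hv()  # relation and role vectors

def mother_of(x, y): return bundle(MREL, bind(AG, x), bind(PT, y))
def parent_of(x, y): return bundle(PREL, bind(AG, x), bind(PT, y))

# Learn a mapping vector from three example pairs of propositions.
people = [(hv(), hv()) for _ in range(3)]
M = bundle(*[bind(mother_of(x, y), parent_of(x, y)) for x, y in people])

# Apply it to a brand-new instance of mother-of.
u, v = hv(), hv()
mapped = bind(M, mother_of(u, v))
print(mapped @ parent_of(u, v) / N)        # clearly positive (~0.5)
print(mapped @ parent_of(hv(), hv()) / N)  # much smaller for unrelated fillers
```

The point of the demonstration is that M, built only from examples, maps an unseen mother-of proposition toward the corresponding parent-of proposition.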
2016 IEEE International Conference on Rebooting Computing (ICRC) | 2016
Abbas Rahimi; Simone Benatti; Pentti Kanerva; Luca Benini; Jan M. Rabaey
The mathematical properties of high-dimensional spaces seem remarkably suited for describing behaviors produced by brains. Brain-inspired hyperdimensional computing (HDC) explores the emulation of cognition by computing with hypervectors as an alternative to computing with numbers. Hypervectors are high-dimensional, holographic, and (pseudo)random with independent and identically distributed (i.i.d.) components. These features provide an opportunity for energy-efficient computing applied to cyberbiological and cybernetic systems. We describe the use of HDC in a smart prosthetic application, namely hand-gesture recognition from a stream of electromyography (EMG) signals. Our algorithm encodes a stream of analog EMG signals, generated simultaneously from four channels, into a single hypervector. The proposed encoding effectively captures spatial and temporal relations across and within the channels to represent a gesture. This HDC encoder achieves a high level of classification accuracy (97.8%) with only one third of the training data required by a state-of-the-art SVM on the same task. HDC exhibits fast and accurate learning, explicitly allowing online and continuous learning. We further enhance the encoder to adaptively mitigate the effect of gesture-timing uncertainties across different subjects endogenously; moreover, the encoder inherently maintains the same accuracy when there is up to 30% overlap between two consecutive gestures in a classification window.
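A sketch of this style of encoder (the channel count matches the abstract; the quantization levels, n-gram size, and the roll-based permutation are assumptions for illustration, not the paper's exact design):

```python
import numpy as np

rng = np.random.default_rng(5)
N, LEVELS, NGRAM = 10_000, 8, 3
hv = lambda: rng.choice([-1, 1], size=N)

channel = [hv() for _ in range(4)]       # one ID hypervector per EMG channel
level   = [hv() for _ in range(LEVELS)]  # one hypervector per quantized amplitude

def encode_sample(x4):
    """Spatial encoding: bind each channel ID to its amplitude level, then bundle."""
    q = np.clip((np.asarray(x4) * LEVELS).astype(int), 0, LEVELS - 1)
    s = sum(channel[c] * level[q[c]] for c in range(4))
    return np.where(s >= 0, 1, -1)  # deterministic tie-break for even sums

def encode_window(samples):
    """Temporal encoding: bind an n-gram of positionally permuted sample vectors."""
    out = np.ones(N, dtype=int)
    for i, s in enumerate(samples[-NGRAM:]):
        out *= np.roll(encode_sample(s), NGRAM - 1 - i)  # permutation marks position
    return out

window = rng.random((10, 4))        # 10 samples from 4 channels, scaled to [0, 1)
gesture_hv = encode_window(window)  # one hypervector summarizing the window
```

Classification then reduces to comparing the window hypervector against one prototype hypervector per gesture, which is what makes one-shot and continuous learning straightforward.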
Computational Intelligence and Neuroscience | 2015
Gabriel Recchia; Magnus Sahlgren; Pentti Kanerva; Michael N. Jones
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
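The two binding operators under comparison can be sketched as follows (the dimensionality and the recovery test are illustrative, not the study's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 1024
vec = lambda: rng.normal(0, 1 / np.sqrt(N), N)  # HRR-style random vectors

def cconv(a, b):
    """Circular convolution via FFT (the HRR binding operator)."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), N)

def ccorr(a, b):
    """Circular correlation: approximate inverse of cconv."""
    return np.fft.irfft(np.fft.rfft(a).conj() * np.fft.rfft(b), N)

perm = rng.permutation(N)

a, b = vec(), vec()
trace_conv = cconv(a, b)  # pair bound by convolution
trace_perm = a[perm] + b  # 'a' marked as first element by permutation

print(np.dot(ccorr(a, trace_conv), b))          # ~1: recovers b from the conv trace
print(np.dot(trace_perm[np.argsort(perm)], a))  # ~1: inverse permutation recovers a
```

Permutation needs only index remapping rather than an FFT, which is one reason its scalability to large corpora matters in the comparison above.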
IEEE Transactions on Circuits and Systems | 2017
Abbas Rahimi; Sohum Datta; Denis Kleyko; Edward Paxon Frady; Bruno A. Olshausen; Pentti Kanerva; Jan M. Rabaey
We outline a model of computing with high-dimensional (HD) vectors—where the dimensionality is in the thousands. It is built on ideas from traditional (symbolic) computing and artificial neural nets/deep learning, and complements them with ideas from probability theory, statistics, and abstract algebra. Key properties of HD computing include a well-defined set of arithmetic operations on vectors, generality, scalability, robustness, fast learning, and ubiquitous parallel operation, making it possible to develop efficient algorithms for large-scale real-world tasks. We present a 2-D architecture and demonstrate its functionality with examples from text analysis, pattern recognition, and biosignal processing, while achieving high levels of classification accuracy (close to or above conventional machine-learning methods), energy efficiency, and robustness with simple algorithms that learn fast. HD computing is ideally suited for 3-D nanometer circuit technology, vastly increasing circuit density and energy efficiency, and paving the way to systems capable of advanced cognitive tasks.
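As a hedged illustration of the text-analysis style of demonstration mentioned (the trigram encoding and the test strings here are assumptions for illustration, not the paper's experiments):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(7)
N = 10_000
letter = defaultdict(lambda: rng.choice([-1, 1], size=N))  # one hypervector per character

def text_vector(text):
    """Bundle of permuted letter-trigram hypervectors: a profile of the text."""
    v = np.zeros(N)
    for i in range(len(text) - 2):
        a, b, c = (letter[ch] for ch in text[i:i + 3])
        v += np.roll(a, 2) * np.roll(b, 1) * c  # permutation encodes position in the trigram
    return v

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

proto_en = text_vector("the quick brown fox jumps over the lazy dog")
proto_de = text_vector("der schnelle braune fuchs springt ueber den faulen hund")
query = text_vector("lazy dogs jump over foxes")
print(cos(query, proto_en) > cos(query, proto_de))  # True: closer to the English prototype
```

Learning here is a single pass of vector addition, and classification is a nearest-prototype comparison, which is what the abstract means by simple algorithms that learn fast.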
International Conference on Artificial Neural Networks | 1998
Pentti Kanerva
Distributed representation of recursive structure with variable binding is explored, using binary vectors and only one composition function natural to neural nets. The system is modeled after Holographic Reduced Representation, but both of its composition operators are thresholded sums. Binding is done by ANDing the vectors for a role and a filler, and bound pairs are combined or merged by ORing. The thresholds for realizing AND and OR are different, but it is suggested that under certain conditions a single threshold might approximate them closely enough to allow its use for both binding and merging. It therefore appears that recursive structure building that employs variable binding can be achieved with simple mechanisms suitable for neural nets.
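The two thresholded sums read directly as code; a minimal sketch (the densities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
N = 10_000
rand = lambda p: (rng.random(N) < p).astype(np.uint8)

role, filler = rand(0.5), rand(0.5)
pair = ((role + filler) > 1.5).astype(np.uint8)  # AND binding: sum thresholded at 1.5

pairs = [rand(0.25) for _ in range(3)]  # stand-ins for already-bound pairs
merged = (np.sum(pairs, axis=0) > 0.5).astype(np.uint8)  # OR merging: thresholded at 0.5

print(pair.mean(), merged.mean())  # densities after binding and after merging
```

The paper's suggestion is that, with suitable densities, a single intermediate threshold could serve in place of both 1.5 and 0.5.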
Biomedical Circuits and Systems Conference | 2015
Mika Laiho; Jussi H. Poikonen; Pentti Kanerva; Eero Lehtonen
Computing with high-dimensional vectors in a manner that resembles computing with numbers is based on Plate's Holographic Reduced Representation (HRR) and is used to model human cognition. Here we examine its hardware realization under constraints suggested by the properties of the brain's circuits. The sparseness of neural firing suggests that the vectors should be sparse. We show that the HRR operations of addition, multiplication, and permutation can be realized with sparse vectors, making an energy-efficient implementation possible. Furthermore, we propose a processor that has both data and instructions embedded in the same high-dimensional vector. The operation is highlighted with a sequence memory example.
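One way such sparse operations can be realized is with block-structured sparse vectors, one active unit per block; treat this sketch as an assumption-laden illustration rather than the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(9)
BLOCKS, BLOCK = 100, 100  # 100 blocks of 100 units: N = 10,000 at 1% density

def sparse_hv():
    """Store only the index of the single active unit in each block."""
    return rng.integers(0, BLOCK, BLOCKS)

def bind(a, b):    # multiplication: blockwise modular sum of active indices
    return (a + b) % BLOCK

def unbind(c, a):  # exact inverse of bind
    return (c - a) % BLOCK

def permute(a):    # permutation: rotate the block order
    return np.roll(a, 1)

def bundle(vs):    # addition: blockwise vote, winner takes the block
    votes = np.zeros((BLOCKS, BLOCK))
    for v in vs:
        votes[np.arange(BLOCKS), v] += 1
    return votes.argmax(axis=1)

x, y = sparse_hv(), sparse_hv()
assert np.array_equal(unbind(bind(x, y), x), y)  # binding is exactly invertible
print((bind(x, y) == x).mean())     # ~0.01: bound vector is dissimilar to x
print((bundle([x, y]) == x).mean()) # ~0.5: the sum resembles its inputs
```

Because every vector activates only one unit per block, operations touch 1% of the units, which is the source of the energy savings the abstract points to.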