Network


Latest external collaborations of Giacomo Indiveri at the country level.

Hotspot


Dive into the research topics where Giacomo Indiveri is active.

Publication


Featured research published by Giacomo Indiveri.


Frontiers in Neuroscience | 2011

Neuromorphic silicon neuron circuits

Giacomo Indiveri; Bernabé Linares-Barranco; Tara Julia Hamilton; André van Schaik; Ralph Etienne-Cummings; Tobi Delbruck; Shih-Chii Liu; Piotr Dudek; Philipp Häfliger; Sylvie Renaud; Johannes Schemmel; Gert Cauwenberghs; John V. Arthur; Kai Hynna; Fopefolu Folowosele; Sylvain Saïghi; Teresa Serrano-Gotarredona; Jayawan H. B. Wijekoon; Yingxue Wang; Kwabena Boahen

Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain–machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.
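
For readers unfamiliar with the integrate-and-fire family of models surveyed above, the following minimal leaky integrate-and-fire simulation is a generic software sketch with illustrative parameter values; it is not taken from the paper or from any of the described chips, but shows the basic fire-and-reset dynamics that the simplest of these silicon neurons emulate.

import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, forward-Euler discretization.
# Illustrative parameters only; not the circuit equations from the paper.
dt = 1e-4          # time step (s)
tau_m = 20e-3      # membrane time constant (s)
R_m = 100e6        # membrane resistance (ohm)
v_rest = -70e-3    # resting potential (V)
v_thresh = -50e-3  # spiking threshold (V)
v_reset = -65e-3   # reset potential (V)
I_in = 0.3e-9      # constant input current (A)

v = v_rest
spike_times = []
t_sim = np.arange(0, 0.5, dt)
for t in t_sim:
    # dv/dt = (-(v - v_rest) + R_m * I_in) / tau_m
    v += dt * (-(v - v_rest) + R_m * I_in) / tau_m
    if v >= v_thresh:
        spike_times.append(t)   # emit a spike
        v = v_reset             # fire-and-reset
print(f"{len(spike_times)} spikes in {t_sim[-1]:.2f} s")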


Nanotechnology | 2013

Integration of nanoscale memristor synapses in neuromorphic computing architectures

Giacomo Indiveri; Bernabé Linares-Barranco; Robert A. Legenstein; George Deligeorgis; Themistoklis Prodromakis

Conventional neuro-computing architectures and artificial neural networks have often been developed with no or only loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low power consumption or their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nanotechnologies are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the capabilities of existing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element, and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit that represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and in the hybrid memristor-CMOS circuit proposed, and argue that this circuit is an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.
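
To make the idea of a memristive synapse concrete, the sketch below uses the simple linear ionic-drift memristor model as an analog weight whose conductance is incrementally programmed by voltage pulses. It is a toy software model under strong simplifying assumptions; all parameter values are illustrative, and it is not the hybrid memristor-CMOS circuit proposed in the paper.

# Toy linear-drift memristor used as an analog synaptic weight.
# Assumes linear ionic drift and no boundary nonlinearity; values are illustrative.
R_on, R_off = 100.0, 16e3   # fully doped / undoped resistance (ohm)
D = 10e-9                   # device thickness (m)
mu_v = 1e-14                # ion mobility (m^2 s^-1 V^-1)
dt = 1e-3                   # duration of each programming pulse (s)
v_pulse = 1.0               # amplitude of each positive "write" pulse (V)

def conductance(w):
    return 1.0 / (R_on * (w / D) + R_off * (1 - w / D))

w = 0.1 * D                 # state variable: width of the doped (low-resistance) region
print(f"initial conductance: {conductance(w):.2e} S")
for _ in range(300):
    i = v_pulse * conductance(w)        # current through the device during the pulse
    w += dt * mu_v * (R_on / D) * i     # linear ionic drift moves the doped boundary
    w = min(max(w, 0.0), D)             # the boundary stays inside the device
print(f"conductance after 300 pulses: {conductance(w):.2e} S")
# Repeated positive pulses gradually increase the conductance, i.e. potentiate
# the synaptic weight stored in the device.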


Neural Computation | 2007

Synaptic Dynamics in Analog VLSI

Chiara Bartolozzi; Giacomo Indiveri

Synapses are crucial elements for computation and information transfer in both real and artificial neural systems. Recent experimental findings and theoretical models of pulse-based neural networks suggest that synaptic dynamics can play a crucial role for learning neural codes and encoding spatiotemporal spike patterns. Within the context of hardware implementations of pulse-based neural networks, several analog VLSI circuits modeling synaptic functionality have been proposed. We present an overview of previously proposed circuits and describe a novel analog VLSI synaptic circuit suitable for integration in large VLSI spike-based neural systems. The circuit proposed is based on a computational model that fits the real postsynaptic currents with exponentials. We present experimental data showing how the circuit exhibits realistic dynamics and show how it can be connected to additional modules for implementing a wide range of synaptic properties.
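
A software sketch of the kind of computational model referred to above: each presynaptic spike injects a fixed amount of current, and the resulting excitatory postsynaptic current (EPSC) then decays exponentially, so closely spaced spikes summate. Parameter values are illustrative; the code is a generic first-order synapse model, not derived from the VLSI circuit itself.

import numpy as np

# First-order (exponential) synapse model.
dt = 1e-4              # time step (s)
tau_syn = 10e-3        # synaptic time constant (s)
i_w = 50e-12           # current increment per spike (A), i.e. the synaptic "weight"

spike_train = np.zeros(int(0.2 / dt))
spike_train[[200, 400, 450, 500]] = 1   # presynaptic spike times (in steps)

i_syn = 0.0
trace = []
for s in spike_train:
    i_syn += i_w * s                    # jump on each presynaptic spike
    i_syn -= dt * i_syn / tau_syn       # exponential decay between spikes
    trace.append(i_syn)

# Closely spaced spikes summate, so the peak exceeds a single-spike EPSC.
print("peak EPSC (pA):", max(trace) * 1e12)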


Neural Networks | 2013

2013 Special Issue: Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition

Nikola Kasabov; Kshitij Dhoble; Nuttapod Nuntalid; Giacomo Indiveri

On-line learning and recognition of spatio- and spectro-temporal data (SSTD) is a very challenging task and an important one for the future development of autonomous machine learning systems with broad applications. Models based on spiking neural networks (SNN) have already proved their potential in capturing spatial and temporal data. One class of them, the evolving SNN (eSNN), uses a one-pass rank-order learning mechanism and a strategy to evolve a new spiking neuron and new connections to learn new patterns from incoming data. So far these networks have been used mainly for fast, frame-based image and speech recognition. Alternative spike-time learning methods, such as Spike-Timing Dependent Plasticity (STDP) and its variant Spike Driven Synaptic Plasticity (SDSP), can also be used to learn spatio-temporal representations, but they usually require many iterations in an unsupervised or semi-supervised mode of learning. This paper introduces a new class of eSNN, dynamic eSNN, that utilise both rank-order learning and dynamic synapses to learn SSTD in a fast, on-line mode. The paper also introduces a new model, called deSNN, that utilises rank-order learning and SDSP spike-time learning in unsupervised, supervised, or semi-supervised modes. The SDSP learning is used to dynamically evolve the network, changing the connection weights that capture spatio-temporal spike data clusters both during training and during recall. The new deSNN model is first illustrated on simple examples and then applied to two case-study applications: (1) moving-object recognition using address-event representation (AER) data collected with a silicon retina device; (2) EEG SSTD recognition for brain-computer interfaces. The deSNN models achieved superior accuracy and speed compared with other SNN models that use either rank-order or STDP learning. The reason is that the deSNN makes use both of the information contained in the order of the first input spikes (which is explicitly present in the input data streams and can be crucial in some tasks) and of the information contained in the timing of the subsequent spikes, which is learned by the dynamic synapses as a whole spatio-temporal pattern.
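
The rank-order component of such models can be summarized in a few lines: a newly recruited neuron's weights are initialized according to the order in which the first input spikes arrive, w_j = mod ** order(j). The sketch below shows this generic rule with made-up spike times; it is not the exact deSNN implementation, and the subsequent SDSP-style updates are only indicated in a comment.

import numpy as np

# Rank-order weight initialization (generic eSNN-style rule).
mod = 0.8                                             # modulation factor, 0 < mod < 1
first_spike_times = np.array([3.0, 1.0, 7.0, 2.0])    # ms, first spike of each input
order = np.argsort(np.argsort(first_spike_times))     # rank 0 = earliest spike
w_init = mod ** order

print(dict(zip(range(len(w_init)), np.round(w_init, 3))))
# Earlier spikes get larger initial weights; SDSP-style spike-time learning would
# then adjust these weights using the timing of the subsequent spikes.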


Frontiers in Neuroscience | 2013

STDP and STDP variations with memristors for spiking neuromorphic learning systems

Teresa Serrano-Gotarredona; Timothée Masquelier; Themistoklis Prodromakis; Giacomo Indiveri; Bernabé Linares-Barranco

In this paper we review several ways of realizing asynchronous Spike-Timing-Dependent Plasticity (STDP) using memristors as synapses. Our focus is on how to use individual memristors to implement synaptic weight multiplications, in such a way that it is not necessary to (a) introduce global synchronization or (b) separate memristor learning phases from memristor operation phases. In the approaches described, neurons fire spikes asynchronously when they wish and memristive synapses perform computation and learn at their own pace, as happens in biological neural systems. We distinguish between two different types of memristor physics, depending on whether the devices follow the original “moving wall” model or the “filament creation and annihilation” model. Independently of the memristor physics, we discuss two different types of STDP rules that can be implemented with memristors: either a pure timing-based rule that takes into account the arrival times of the spikes from the pre- and post-synaptic neurons, or a hybrid rule that takes into account only the timing of pre-synaptic spikes together with the membrane potential and other state variables of the post-synaptic neuron. We show how to implement these rules in crossbar architectures that comprise massive arrays of memristors, and we discuss applications for artificial vision.
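
For reference, the pure timing-based pair-wise STDP rule discussed above has the familiar exponential form sketched below. This is the generic textbook rule with illustrative constants, not the memristor-level derivation given in the paper.

import numpy as np

# Pair-wise, timing-based STDP: the weight change depends only on the time
# difference between pre- and post-synaptic spikes. Constants are illustrative.
A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants (ms)

def stdp_dw(delta_t):
    """delta_t = t_post - t_pre (ms). Positive -> pre before post -> potentiation."""
    if delta_t > 0:
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)

for dt_ms in (-40, -10, -1, 1, 10, 40):
    print(f"t_post - t_pre = {dt_ms:+4d} ms -> dw = {stdp_dw(dt_ms):+.4f}")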


IEEE Transactions on Biomedical Circuits and Systems | 2009

Real-Time Classification of Complex Patterns Using Spike-Based Learning in Neuromorphic VLSI

Srinjoy Mitra; Stefano Fusi; Giacomo Indiveri

Real-time classification of patterns of spike trains is a difficult computational problem that both natural and artificial networks of spiking neurons are confronted with. The solution to this problem could not only contribute to understanding the fundamental mechanisms of computation used in the biological brain, but could also lead to efficient hardware implementations of applications ranging from autonomous sensory-motor systems to brain-machine interfaces. Here we demonstrate real-time classification of complex patterns of mean firing rates, using a VLSI network of spiking neurons and dynamic synapses which implement a robust spike-driven plasticity mechanism. The learning rule implemented is a supervised one: a teacher signal provides the output neuron with an extra input spike train during training, in parallel to the spike trains that represent the input pattern. The teacher signal simply indicates whether the neuron should respond to the input pattern with a high rate or with a low one. The learning mechanism modifies the synaptic weights only as long as the current generated by all the stimulated plastic synapses does not match the output desired by the teacher, as in the perceptron learning rule. We describe the implementation of this learning mechanism and present experimental data demonstrating how the VLSI neural network can learn to classify patterns of neural activities, even when they are highly correlated.
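
The teacher-gated, perceptron-like character of the learning rule can be illustrated with a simple rate-based sketch: the weights of the stimulated synapses are updated only while the neuron's output class disagrees with the teacher signal. The rate-based abstraction and all values below are assumptions for illustration, not the stochastic spike-driven VLSI implementation.

import numpy as np

# Teacher-gated, perceptron-like weight update on the stimulated synapses.
rng = np.random.default_rng(0)
n_syn = 100
w = rng.uniform(0.0, 1.0, n_syn)            # plastic synaptic weights
eta = 0.05                                  # learning rate
theta = 40.0                                # drive above which the output rate is "high"

pattern = rng.integers(0, 2, n_syn)         # which synapses the input pattern stimulates
teacher_high = True                         # teacher asks for a high output rate

for _ in range(50):
    drive = np.dot(w, pattern)              # total current from the stimulated synapses
    if (drive > theta) == teacher_high:
        break                               # output already matches the teacher: stop updating
    # potentiate the active synapses if the output should be high, depress them otherwise
    w += (eta if teacher_high else -eta) * pattern
    w = np.clip(w, 0.0, 1.0)                # bounded weights

final = np.dot(w, pattern)
print("final drive:", round(final, 2), "matches teacher:", (final > theta) == teacher_high)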


international symposium on circuits and systems | 2003

A low-power adaptive integrate-and-fire neuron circuit

Giacomo Indiveri

We present a low-power analog circuit implementing a model of a leaky integrate-and-fire neuron. In addition to being optimized for low power consumption, the proposed circuit includes elements for implementing spike-frequency adaptation, for setting an arbitrary refractory period, and for modulating the neuron's threshold voltage. We present experimental data from a prototype chip, implemented using a standard 1.5 μm CMOS process.
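
A behavioural sketch of the three features listed above (spike-frequency adaptation, an explicit refractory period, and a tunable threshold) added to a leaky integrate-and-fire model. It is a generic software model with illustrative parameters, not the transistor-level circuit described in the paper; the printed inter-spike intervals lengthen as the adaptation current builds up.

import numpy as np

# Leaky integrate-and-fire with spike-frequency adaptation, refractory period
# and tunable threshold. Illustrative parameters only.
dt = 1e-4
tau_m, tau_adapt = 20e-3, 200e-3     # membrane / adaptation time constants (s)
R_m = 100e6                          # membrane resistance (ohm)
v_rest, v_reset = -70e-3, -65e-3     # resting / reset potentials (V)
v_thresh = -50e-3                    # tunable threshold voltage (V)
t_refr = 5e-3                        # refractory period (s)
b = 0.05e-9                          # adaptation current increment per spike (A)
i_in = 0.4e-9                        # constant input current (A)

v, i_adapt, refr_left = v_rest, 0.0, 0.0
spikes = []
for step in range(int(1.0 / dt)):
    t = step * dt
    i_adapt -= dt * i_adapt / tau_adapt            # adaptation current decays
    if refr_left > 0:
        refr_left -= dt                            # membrane clamped while refractory
        v = v_reset
        continue
    v += dt * (-(v - v_rest) + R_m * (i_in - i_adapt)) / tau_m
    if v >= v_thresh:
        spikes.append(t)
        v = v_reset
        refr_left = t_refr
        i_adapt += b                               # each spike strengthens adaptation

isis = np.diff(spikes) * 1e3
print("first ISIs (ms):", np.round(isis[:5], 1))   # inter-spike intervals lengthen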


Proceedings of the IEEE | 2014

Neuromorphic electronic circuits for building autonomous cognitive systems

Elisabetta Chicca; Fabio Stefanini; Chiara Bartolozzi; Giacomo Indiveri

Several analog and digital brain-inspired electronic systems have been recently proposed as dedicated solutions for fast simulations of spiking neural networks. While these architectures are useful for exploring the computational properties of large-scale models of the nervous system, the challenge of building low-power compact physical artifacts that can behave intelligently in the real world and exhibit cognitive abilities still remains open. In this paper, we propose a set of neuromorphic engineering solutions to address this challenge. In particular, we review neuromorphic circuits for emulating neural and synaptic dynamics in real time and discuss the role of biophysically realistic temporal dynamics in hardware neural processing architectures; we review the challenges of realizing spike-based plasticity mechanisms in real physical systems and present examples of analog electronic circuits that implement them; we describe the computational properties of recurrent neural networks and show how neuromorphic winner-take-all circuits can implement working-memory and decision-making mechanisms. We validate the proposed neuromorphic approach with experimental results obtained from our own circuits and systems, and argue how the circuits and networks presented in this work represent a useful set of components for efficiently and elegantly implementing neuromorphic cognition.
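
As an illustration of the winner-take-all computation mentioned above, the following rate-based soft-WTA sketch combines self-excitation with a shared global inhibitory signal, so that the most strongly driven unit ends up dominating. The gains and inputs are made up for illustration; the paper's circuits are spiking analog hardware, not rate-based software.

import numpy as np

# Minimal rate-based soft winner-take-all network.
n = 5
dt, tau = 1e-3, 20e-3
w_self, w_inh = 1.2, 0.9                      # self-excitation and global-inhibition gains
ext = np.array([1.0, 1.3, 0.8, 1.25, 0.9])    # external input to each unit

r = np.zeros(n)                               # unit activities (firing rates)
for _ in range(2000):
    inh = w_inh * r.sum()                     # global inhibition from total activity
    drive = ext + w_self * r - inh
    r += dt * (-r + np.maximum(drive, 0.0)) / tau
print(np.round(r, 3))                         # the unit with the largest input (index 1) dominates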


Neural Networks | 2001

Orientation-selective aVLSI spiking neurons

Shih-Chii Liu; Jörg Kramer; Giacomo Indiveri; Tobias Delbrück; Thomas P. Burg; Rodney J. Douglas

We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
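
The feedforward (Hubel-Wiesel style) model of orientation tuning amounts to summing inputs from retina pixels aligned along the neuron's preferred orientation, as in the toy sketch below. The stimulus and receptive field here are synthetic images and illustrative parameters, not the chip's virtual wiring.

import numpy as np

# Feedforward orientation tuning: sum activity from pixels along the preferred axis.
size = 15

def bar_image(angle_deg):
    """Binary image of a thin bar through the center at the given angle."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    a = np.deg2rad(angle_deg)
    dist = np.abs(-np.sin(a) * x + np.cos(a) * y)   # distance from the bar axis
    return (dist < 0.7).astype(float)

preferred = bar_image(0.0)      # receptive field: pixels along a horizontal line
for angle in (0, 30, 60, 90):
    response = np.sum(preferred * bar_image(angle))
    print(f"bar at {angle:2d} deg -> response {response:.0f}")
# The response is largest at the preferred orientation and falls off with angle.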


IEEE Transactions on Circuits and Systems I-regular Papers | 2007

A Multichip Pulse-Based Neuromorphic Infrastructure and Its Application to a Model of Orientation Selectivity

Elisabetta Chicca; Adrian M. Whatley; Patrick Lichtsteiner; V. Dante; Tobias Delbrück; P. Del Giudice; Rodney J. Douglas; Giacomo Indiveri

The growing interest in pulse-mode processing by neural networks is encouraging the development of hardware implementations of massively parallel networks of integrate-and-fire neurons distributed over multiple chips. Address-event representation (AER) has long been considered a convenient transmission protocol for spike-based neuromorphic devices. One missing, long-needed feature of AER-based systems is the ability to acquire data from complex neuromorphic systems and to stimulate them using suitable data. We have implemented a general-purpose solution in the form of a peripheral component interconnect (PCI) board (the PCI-AER board) supported by software. We describe the main characteristics of the PCI-AER board and of the related supporting software. To show the functionality of the PCI-AER infrastructure, we demonstrate a reconfigurable multichip neuromorphic system for feature selectivity which models the orientation tuning properties of cortical neurons.
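
In its simplest software form, the address-event representation mentioned above encodes each spike as the address of the neuron that fired together with a timestamp, and events from several senders are multiplexed onto one time-ordered stream. The sketch below illustrates only this concept; the actual PCI-AER board uses its own hardware handshaking and driver interface, which are not reproduced here, and all addresses shown are hypothetical.

from dataclasses import dataclass
from typing import List

# Conceptual address-event representation (AER) of spikes.
@dataclass
class AddressEvent:
    timestamp_us: int   # time of the spike in microseconds
    address: int        # identifies the sending neuron (e.g. chip id + neuron id)

def merge_streams(*streams: List[AddressEvent]) -> List[AddressEvent]:
    """Multiplex events from several sender chips onto one time-ordered bus."""
    merged = [ev for s in streams for ev in s]
    return sorted(merged, key=lambda ev: ev.timestamp_us)

retina = [AddressEvent(10, 0x0101), AddressEvent(42, 0x0107)]
neurons = [AddressEvent(25, 0x0203), AddressEvent(40, 0x0210)]
for ev in merge_streams(retina, neurons):
    print(f"t = {ev.timestamp_us:3d} us  address = 0x{ev.address:04x}")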

Collaboration


Dive into Giacomo Indiveri's collaborations.

Top Co-Authors

Jörg Kramer

California Institute of Technology

Chiara Bartolozzi

Istituto Italiano di Tecnologia

Luigi Raffo

University of Cagliari
