Publication


Featured research published by Bruno U. Pedroni.


Frontiers in Neuroscience | 2014

Event-driven contrastive divergence for spiking neuromorphic systems

Emre Neftci; Srinjoy Das; Bruno U. Pedroni; Kenneth Kreutz-Delgado; Gert Cauwenberghs

Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate-and-Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike-Timing-Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
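The discrete CD steps that the paper replaces with recurrent spiking activity can be sketched in plain NumPy. This is an illustrative sketch, not the paper's event-driven rule: the function names, learning rate, and layer sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_layer(inputs, W, b):
    # Sigmoid activation probabilities followed by a Bernoulli draw --
    # the stochastic behavior that neural sampling recovers from the
    # firing of IF neurons.
    p = 1.0 / (1.0 + np.exp(-(inputs @ W + b)))
    return (rng.random(p.shape) < p).astype(float)

def cd1_update(v0, W, b_h, b_v, lr=0.05):
    # One CD-1 step: positive phase, reconstruction, negative phase.
    # Event-driven CD replaces these discrete steps with recurrent
    # network activity and STDP, but the weight-update structure
    # (data correlations minus model correlations) is the same.
    h0 = sample_layer(v0, W, b_h)
    v1 = sample_layer(h0, W.T, b_v)
    h1 = sample_layer(v1, W, b_h)
    W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
    return W
```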


Frontiers in Neuroscience | 2016

Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

Emre Neftci; Bruno U. Pedroni; Siddharth Joshi; Maruan Al-Shedivat; Gert Cauwenberghs

Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that use synaptic stochasticity as a mechanism for Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling and of a regularizer during learning, akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an online fashion. S2Ms perform equally well using discrete-time artificial units (as in Hopfield networks) or continuous-time leaky integrate-and-fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and to synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for online learning in brain-inspired hardware.
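As a rough illustration of the synaptic-sampling idea, masking each connection independently at every presentation acts both as the sampling mechanism and as a DropConnect-style regularizer. The transmission probability and pruning fraction below are illustrative values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_synapse_drive(x, W, p_transmit=0.5):
    # Each synapse transmits independently with probability p_transmit:
    # a fresh random mask over the connections on every presentation.
    mask = rng.random(W.shape) < p_transmit
    return x @ (W * mask)

def prune_weakest(W, frac=0.75):
    # Remove the weakest `frac` of connections by magnitude, mimicking
    # the pruning experiment described in the abstract.
    thresh = np.quantile(np.abs(W), frac)
    return np.where(np.abs(W) >= thresh, W, 0.0)
```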


arXiv: Neural and Evolutionary Computing | 2016

Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware

Peter U. Diehl; Guido Zarrella; Andrew S. Cassidy; Bruno U. Pedroni; Emre Neftci

In recent years the field of neuromorphic low-power systems has gained significant momentum, spurring brain-inspired hardware systems which operate on principles that are fundamentally different from standard digital computers and thereby consume orders of magnitude less power. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, the efficient processing of temporal sequences or variable-length inputs remains difficult, partly due to challenges in representing and configuring the dynamics of spiking neural networks. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a train-and-constrain methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons, while being compatible with the capabilities of current and near-future neuromorphic systems. This “train-and-constrain” method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of artificial neurons with those of the spiking neurons. We demonstrate our approach on a natural language processing task (question classification), mapping the entire recurrent layer of the network onto IBM's Neurosynaptic System TrueNorth, a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize the synaptic weights to 16 levels, discretize the neural activities to 16 levels, and limit fan-in to 64 inputs. Surprisingly, we find that short synaptic delays are sufficient to implement the dynamic (temporal) aspect of the RNN in the question classification task.
Furthermore, we observed that the discretization of the neural activities is beneficial to our train-and-constrain approach. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of ≈17 μW.
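The 16-level weight discretization step of train-and-constrain can be sketched as uniform quantization over the weight range. This is a stand-in for illustration only; the paper's full constraining procedure also matches neuron responses and handles fan-in limits, which are omitted here.

```python
import numpy as np

def discretize(w, levels=16):
    # Uniformly quantize values onto `levels` evenly spaced points
    # between the minimum and maximum weight, mimicking the 16-level
    # synaptic discretization imposed by TrueNorth-style constraints.
    lo, hi = float(w.min()), float(w.max())
    if hi == lo:
        return w.copy()  # constant array: nothing to quantize
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step
```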


International Symposium on Neural Networks | 2016

TrueHappiness: Neuromorphic Emotion Recognition on TrueNorth

Peter U. Diehl; Bruno U. Pedroni; Andrew S. Cassidy; Paul A. Merolla; Emre Neftci; Guido Zarrella

We present an approach to constructing a neuromorphic device that responds to language input by producing neuron spikes in proportion to the strength of the appropriate positive or negative emotional response. Specifically, we perform a fine-grained sentiment analysis task with implementations on two different systems: one using conventional spiking neural network (SNN) simulators and the other using IBM's Neurosynaptic System TrueNorth. Input words are projected into a high-dimensional semantic space and processed through a fully-connected neural network (FCNN) containing rectified linear units (ReLUs) trained via backpropagation. After training, this FCNN is converted to an SNN by substituting the ReLUs with integrate-and-fire neurons. We show that there is practically no performance loss due to conversion to a spiking network on a sentiment analysis test set, i.e., correlations with human annotations differ by less than 0.02 between the original DNN and its spiking equivalent. Additionally, we show that the SNN generated with this technique can be mapped to existing neuromorphic hardware, in our case the TrueNorth chip. Mapping to the chip involves 4-bit synaptic weight discretization and adjustment of the neuron thresholds. The resulting end-to-end system can take a user input, i.e., a word in a vocabulary of over 300,000 words, and estimate its sentiment on TrueNorth with a power consumption of approximately 50 µW.
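The ReLU-to-IF substitution works because a non-leaky integrate-and-fire neuron driven by a constant input current fires at a rate proportional to max(0, current). A minimal simulation with illustrative parameters (not the paper's neuron model or thresholds):

```python
def if_rate(current, threshold=1.0, steps=1000):
    # Simulate a non-leaky integrate-and-fire neuron with constant input.
    # Its firing rate approximates max(0, current / threshold), i.e. the
    # ReLU activation it replaces after conversion.
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += current
        if v >= threshold:
            spikes += 1
            v -= threshold  # reset by subtraction keeps residual charge
    return spikes / steps
```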


International Symposium on Circuits and Systems | 2015

Gibbs sampling with low-power spiking digital neurons

Srinjoy Das; Bruno U. Pedroni; Paul A. Merolla; John V. Arthur; Andrew S. Cassidy; Bryan L. Jackson; Dharmendra S. Modha; Gert Cauwenberghs; Kenneth Kreutz-Delgado

Restricted Boltzmann Machines and Deep Belief Networks have been successfully used in a wide variety of applications including image classification and speech recognition. Inference and learning in these algorithms use a Markov Chain Monte Carlo procedure called Gibbs sampling. A sigmoidal function forms the kernel of this sampler, which can be realized from the firing statistics of noisy integrate-and-fire neurons on a neuromorphic VLSI substrate. This paper demonstrates such an implementation on an array of digital spiking neurons with stochastic leak and threshold properties for inference tasks, and presents some key performance metrics for such a hardware-based sampler in both the generative and discriminative contexts.
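The sigmoidal kernel can indeed be obtained from firing statistics alone: a threshold unit with additive standard-logistic noise fires with probability exactly 1/(1+exp(-u)). Standard logistic noise is an assumption for illustration here; the paper derives the kernel from stochastic leak and threshold in digital neurons.

```python
import numpy as np

rng = np.random.default_rng(2)

def spike_probability(u, trials=20000):
    # Empirical firing probability of a unit that spikes when its
    # membrane input u plus standard-logistic noise crosses zero.
    # Analytically, P(spike) = 1 / (1 + exp(-u)) -- the Gibbs kernel.
    noise = rng.logistic(0.0, 1.0, trials)
    return float(np.mean(u + noise > 0))
```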


International Symposium on Neural Networks | 2013

Neuromorphic adaptations of restricted Boltzmann machines and deep belief networks

Bruno U. Pedroni; Srinjoy Das; Emre Neftci; Kenneth Kreutz-Delgado; Gert Cauwenberghs

Restricted Boltzmann Machines (RBMs) and Deep Belief Networks (DBNs) have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction and classification. Implementation of RBMs on neuromorphic platforms, which emulate large-scale networks of spiking neurons, has significant advantages from concurrency and low-power perspectives. This work outlines a neuromorphic adaptation of the RBM, which uses a recently proposed neural sampling algorithm (Buesing et al. 2011), and examines its algorithmic efficiency. Results show the feasibility of such alterations, which will serve as a guide for future implementation of such algorithms in neuromorphic very large scale integration (VLSI) platforms.


Biomedical Circuits and Systems Conference | 2016

Forward table-based presynaptic event-triggered spike-timing-dependent plasticity

Bruno U. Pedroni; Sadique Sheik; Siddharth Joshi; Georgios Detorakis; Somnath Paul; Charles Augustine; Emre Neftci; Gert Cauwenberghs

Spike-timing-dependent plasticity (STDP) incurs both causal and acausal synaptic weight updates, for negative and positive time differences between presynaptic and postsynaptic spike events. For realizing such updates in neuromorphic hardware, current implementations either require forward and reverse lookup access to the synaptic connectivity table, or rely on memory-intensive architectures such as crossbar arrays. We present a novel method for realizing both causal and acausal weight updates using only forward lookup access of the synaptic connectivity table, permitting memory-efficient implementation. A simplified implementation in FPGA, using a single timer variable for each neuron, closely approximates exact STDP cumulative weight updates, and reduces to exact STDP for refractory periods greater than the STDP time window. Compared to a conventional crossbar implementation, the forward table-based implementation using run-length encoding leads to substantial memory savings for sparsely connected networks, supporting scalable neuromorphic systems with fully reconfigurable synaptic connectivity and plasticity.
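A toy version of the presynaptic-event-triggered idea can be written using only the forward row of the connectivity table and a single last-spike timer per neuron: the acausal update for this presynaptic spike and the deferred causal update for the previous one are both applied at the presynaptic event. This nearest-neighbor sketch with illustrative parameters is an assumption-laden simplification, not the exact hardware rule in the paper.

```python
import math

def on_pre_spike(pre, t, W, targets, last_post, last_pre,
                 a_plus=0.01, a_minus=0.012, tau=20.0):
    # Apply both STDP updates at a presynaptic spike using only the
    # forward connectivity row of `pre` (targets[pre]) and one
    # last-spike timer per neuron.  last_post[n] holds neuron n's most
    # recent spike time, or -inf if it has never fired.
    t_prev = last_pre[pre]          # previous presynaptic spike time
    for post in targets[pre]:       # forward lookup only
        t_post = last_post[post]
        if t_prev < t_post <= t:
            # post fired after the previous pre spike: deferred causal
            # (potentiating) update for that earlier pairing
            W[(pre, post)] += a_plus * math.exp(-(t_post - t_prev) / tau)
        if float("-inf") < t_post <= t:
            # post fired before this pre spike: acausal (depressing) update
            W[(pre, post)] -= a_minus * math.exp(-(t - t_post) / tau)
    last_pre[pre] = t               # advance this neuron's timer
```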


Frontiers in Neuroscience | 2015

Event-driven contrastive divergence: neural sampling foundations

Emre Neftci; Srinjoy Das; Bruno U. Pedroni; Kenneth Kreutz-Delgado; Gert Cauwenberghs

In a recent Frontiers in Neuroscience paper (Neftci et al., 2014) we contributed an online learning rule, driven by spike events in an Integrate-and-Fire (IF) neural network, that emulates the learning performance of Contrastive Divergence (CD) in an equivalent Restricted Boltzmann Machine (RBM) amenable to real-time implementation in spike-based neuromorphic systems. The event-driven CD framework assumes the foundations of neural sampling (Buesing et al., 2011; Maass, 2014) in mapping spike rates of a deterministic IF network onto probabilities of a corresponding stochastic neural network. In Neftci et al. (2014), we used a particular form of neural sampling previously analyzed in Petrovici et al. (2013), although this connection was not made sufficiently clear in the published article. The purpose of this letter is to clarify this connection, and to raise readers' awareness of the existence of various forms of neural sampling. We highlight the differences as well as strong connections across these various forms, and suggest applications of event-driven CD in a more general setting enabled by the broader interpretations of neural sampling.


International Symposium on Circuits and Systems | 2017

Fast classification using sparsely active spiking networks

Hesham Mostafa; Bruno U. Pedroni; Sadique Sheik; Gert Cauwenberghs

Spike generation and routing is typically the most energy-demanding operation in neuromorphic hardware built using spiking neurons. Spiking neural networks running on neuromorphic hardware, however, often use rate coding, where the neuron's spike rate is treated as the information-carrying quantity. Rate coding is a highly inefficient coding scheme with minimal information content in each spike, and it requires the transmission of a large number of spikes. In this paper, we describe an alternative type of spiking network based on temporal coding, where neuron spiking activity is very sparse and information is encoded in the time of each spike. We implemented the proposed networks on an FPGA platform and used these sparsely active spiking networks to classify MNIST digits. The network's FPGA implementation produces the classification output using only a few tens of spikes from the hidden layer, and the classification result is obtained very quickly, typically within 1–3 synaptic time constants. We describe the idealized network dynamics and how these dynamics are adapted to allow an efficient implementation on digital hardware. Our results illustrate the importance of making use of the temporal dynamics in spiking networks in order to maximize the information content of each spike, which ultimately leads to reduced spike counts, improved energy efficiency, and faster response times.
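The temporal-coding idea can be illustrated with the simplest case: a non-leaky IF neuron with constant input crosses threshold at t = threshold/current, so larger inputs produce earlier spikes, and the first spike among the output neurons decides the class. This is an illustrative sketch, not the paper's actual network dynamics.

```python
import numpy as np

def first_spike_times(currents, threshold=1.0):
    # A non-leaky IF neuron with constant input current crosses threshold
    # at t = threshold / current, so stronger inputs spike earlier.
    # Inputs that never cross threshold get an infinite spike time.
    currents = np.asarray(currents, dtype=float)
    t = np.full(currents.shape, np.inf)
    pos = currents > 0
    t[pos] = threshold / currents[pos]
    return t

def classify_by_first_spike(times):
    # Temporal-code readout: the earliest-spiking output neuron wins,
    # so a single spike carries the classification result.
    return int(np.argmin(times))
```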


IEEE Transactions on Biomedical Circuits and Systems | 2016

Mapping Generative Models onto a Network of Digital Spiking Neurons

Bruno U. Pedroni; Srinjoy Das; John V. Arthur; Paul A. Merolla; Bryan L. Jackson; Dharmendra S. Modha; Kenneth Kreutz-Delgado; Gert Cauwenberghs

Stochastic neural networks such as Restricted Boltzmann Machines (RBMs) have been successfully used in applications ranging from speech recognition to image classification, and are particularly interesting because of their potential for generative tasks. Inference and learning in these algorithms use a Markov Chain Monte Carlo procedure called Gibbs sampling, where a logistic function forms the kernel of this sampler. On the other side of the spectrum, neuromorphic systems have shown great promise for low-power and parallelized cognitive computing, but lack well-suited applications and automation procedures. In this work, we propose a systematic method for bridging the RBM algorithm and digital neuromorphic systems, with a generative pattern completion task as proof of concept. For this, we first propose a method of producing the Gibbs sampler using bio-inspired digital noisy integrate-and-fire neurons. Next, we describe the process of mapping generative RBMs trained offline onto the IBM TrueNorth neurosynaptic processor, a low-power digital neuromorphic VLSI substrate. Mapping these algorithms onto neuromorphic hardware presents unique challenges in network connectivity and in weight and bias quantization, which, in turn, require architectural and design strategies for the physical realization. Generative performance is analyzed to validate the neuromorphic requirements and to best select the neuron parameters for the model. Lastly, we describe a design automation procedure which achieves optimal resource usage, accounting for the novel hardware adaptations. This work represents the first implementation of generative RBM inference on a neuromorphic VLSI substrate.
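The generative pattern completion task can be sketched in software as clamped Gibbs sampling on an RBM: the observed visible units are held fixed while sampling fills in the missing ones. This is a plain software sketch of the proof-of-concept task; the TrueNorth mapping, quantization, and neuron model from the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def complete_pattern(v_known, known_mask, W, b_v, b_h, steps=50):
    # Clamped Gibbs sampling on an RBM: keep the observed visible units
    # fixed and let alternating visible/hidden sampling reconstruct the
    # unobserved ones from the learned generative model.
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    v = np.where(known_mask, v_known,
                 rng.random(v_known.shape) < 0.5).astype(float)
    for _ in range(steps):
        h = (rng.random(b_h.shape) < sigmoid(v @ W + b_h)).astype(float)
        v = (rng.random(b_v.shape) < sigmoid(h @ W.T + b_v)).astype(float)
        v[known_mask] = v_known[known_mask]  # re-clamp observed units
    return v
```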

Collaboration


Dive into Bruno U. Pedroni's collaborations.

Top Co-Authors

Srinjoy Das

University of California


Sadique Sheik

University of California
