Publication


Featured research published by Emre Neftci.


Frontiers in Neuroscience | 2016

Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

Emre Neftci; Bruno U. Pedroni; Siddharth Joshi; Maruan Al-Shedivat; Gert Cauwenberghs

Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means of Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling and a regularizer during learning, akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an online fashion. S2Ms perform equally well using discrete-time artificial units (as in Hopfield networks) or continuous-time leaky integrate-and-fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and to synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for online learning in brain-inspired hardware.
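
To make the mechanism concrete, here is a minimal sketch of a blank-out stochastic synapse: each pass samples a Bernoulli mask over the weight matrix, so noise is injected at the synapse rather than the neuron. The function name and the keep-probability p_keep are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

def stochastic_synapse_pass(x, W, p_keep=0.5):
    # Blank-out (DropConnect-like) synapses: each connection transmits
    # independently with probability p_keep, so repeated passes with the
    # same input are Monte Carlo samples of the postsynaptic drive.
    mask = rng.random(W.shape) < p_keep
    return (W * mask) @ x / p_keep        # rescale to preserve the mean

W = rng.normal(size=(4, 8))
x = rng.random(8)
samples = np.stack([stochastic_synapse_pass(x, W) for _ in range(1000)])
print(samples.mean(axis=0))               # close to the deterministic W @ x
print(samples.std(axis=0))                # the spread is the synaptic noise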


Frontiers in Neuroscience | 2017

Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines

Emre Neftci; Charles Augustine; Somnath Paul; Georgios Detorakis

An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent back-propagation (BP) rule, often relies on the immediate availability of network-wide information stored in high-precision memory during learning, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment leaky integrate-and-fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
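
A hedged sketch of the per-synapse arithmetic the abstract quantifies (one addition, two comparisons). The fixed random feedback matrix G, the voltage-gate bounds, and the learning rate below are illustrative assumptions, not the paper's exact parameters.

import numpy as np

rng = np.random.default_rng(1)

n_in, n_hid, n_out = 784, 100, 10
W = rng.normal(scale=0.1, size=(n_in, n_hid))   # learned input-to-hidden weights
G = rng.normal(size=(n_out, n_hid))             # fixed random feedback, never learned

def erbp_update(W, pre_spikes, error, V_hid, lr=1e-3, v_lo=-1.0, v_hi=1.0):
    # pre_spikes : bool (n_in,)   which input neurons just spiked
    # error      : float (n_out,) output error, e.g. target minus prediction
    # V_hid      : float (n_hid,) hidden membrane state, used as the gate
    err_hid = G.T @ error                        # error reaches hidden units
                                                 # through fixed random weights
    gate = (V_hid > v_lo) & (V_hid < v_hi)       # two comparisons per neuron
    W[pre_spikes] += lr * (err_hid * gate)       # one addition per active synapse
    return W

# Toy usage with random spikes and a random error signal
W = erbp_update(W, rng.random(n_in) < 0.05,
                rng.normal(size=n_out), rng.normal(size=n_hid))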


PLOS Computational Biology | 2015

Learning of Chunking Sequences in Cognition and Behavior

Jordi Fonollosa; Emre Neftci; Mikhail I. Rabinovich

We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks, but the dynamical principles of how this is achieved remain unknown. Here, we study the temporal dynamics of chunking for learning cognitive sequences in a chunking representation, using a dynamical model of competing modes arranged to evoke hierarchical Winnerless Competition (WLC) dynamics. Sequential memory is represented as trajectories along a chain of metastable fixed points at each level of the hierarchy, and bistable Hebbian dynamics enables the learning of such trajectories in an unsupervised fashion. Using computer simulations, we demonstrate the learning of a chunking representation of sequences and their robust recall. During learning, the dynamics associates a set of modes with each information-carrying item in the sequence and encodes their relative order. During recall, hierarchical WLC guarantees the robustness of the sequence order when the sequence is not too long. The resulting patterns of activity share several features observed in behavioral experiments, such as the pauses at chunk boundaries and the chunks' size and duration. Failures in learning chunking sequences provide new insights into the dynamical causes of neurological disorders such as Parkinson's disease and schizophrenia.
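
Winnerless competition of this kind is commonly modeled with generalized Lotka-Volterra dynamics, dx_i/dt = x_i (sigma_i - sum_j rho_ij x_j), where asymmetric inhibition rho makes the active mode hand off to its successor in a fixed order. The sketch below shows a four-mode heteroclinic sequence; the coupling values are my own illustrative choices, not the paper's.

import numpy as np

rng = np.random.default_rng(2)

def wlc_step(x, sigma, rho, dt=0.01, noise=1e-6):
    # Euler step of generalized Lotka-Volterra (winnerless competition);
    # tiny additive noise keeps the state from sticking at a saddle.
    dx = x * (sigma - rho @ x)
    return np.clip(x + dt * dx + noise * rng.random(len(x)), 0.0, None)

n = 4
sigma = np.ones(n)
rho = np.full((n, n), 1.5)            # default: strong mutual inhibition
np.fill_diagonal(rho, 1.0)            # self-inhibition sets the saturation level
for i in range(n):
    rho[(i + 1) % n, i] = 0.5         # each mode's successor is only weakly
                                      # inhibited, so it takes over next

x = np.full(n, 0.25)
winners = []
for step in range(20000):
    x = wlc_step(x, sigma, rho)
    if step % 2000 == 0:
        winners.append(int(np.argmax(x)))
print(winners)   # the dominant mode visits the items in a fixed cyclic order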


International Symposium on Quality Electronic Design | 2016

Neuromorphic architectures with electronic synapses

Sukru Burc Eryilmaz; Siddharth Joshi; Emre Neftci; Weier Wan; Gert Cauwenberghs; H.-S. Philip Wong

This paper gives an overview of recent progress on (1) online learning algorithms with spiking neurons and (2) neuromorphic platforms that efficiently run these algorithms, with a focus on implementations using analog non-volatile memory (aNVM) as electronic synapses. Design considerations and challenges for using aNVM synapses, such as requirements on device variability, multilevel states, programming energy, array-level connectivity, wire energy, fan-in/fan-out, and IR drop, are presented. Future research directions and integration challenges are summarized. Algorithms based on spiking neural networks are promising for energy-efficient real-time learning, but cycle-to-cycle device variations can significantly impact learning performance. Our analysis suggests that wires are increasingly important for energy considerations, especially for large systems.
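
To make the variability concern concrete, here is a toy model, my own illustration rather than the paper's, of a weight update applied through an aNVM device with a limited number of conductance levels and cycle-to-cycle programming noise.

import numpy as np

rng = np.random.default_rng(3)

def anvm_program(w, dw, n_levels=32, cv=0.2, w_min=0.0, w_max=1.0):
    # n_levels : number of distinguishable conductance levels
    # cv       : cycle-to-cycle coefficient of variation of each pulse
    # The intended update dw is corrupted by multiplicative programming
    # noise, then snapped to the nearest device level.
    step = (w_max - w_min) / (n_levels - 1)
    noisy_dw = dw * (1 + cv * rng.standard_normal(np.shape(w)))
    w_new = np.clip(w + noisy_dw, w_min, w_max)
    return np.round((w_new - w_min) / step) * step + w_min

w = np.full(1000, 0.5)
w = anvm_program(w, dw=0.05)
print(w.mean(), w.std())   # the spread is pure cycle-to-cycle variation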


IEEE Transactions on Electron Devices | 2016

Training a Probabilistic Graphical Model With Resistive Switching Electronic Synapses

Sukru Burc Eryilmaz; Emre Neftci; Siddharth Joshi; SangBum Kim; M. BrightSky; Hsiang-Lan Lung; Chung H. Lam; Gert Cauwenberghs; H.-S.P. Wong

Current large-scale implementations of deep learning and data mining require thousands of processors, massive amounts of off-chip memory, and consume gigajoules of energy. New memory technologies, such as nanoscale two-terminal resistive switching memory devices, offer a compact, scalable, and low-power alternative that permits on-chip colocated processing and memory in a fine-grained, distributed, parallel architecture. Here, we report the first use of resistive memory devices for implementing and training a restricted Boltzmann machine (RBM), a generative probabilistic graphical model that is a key component for unsupervised learning in deep networks. We experimentally demonstrate a 45-synapse RBM realized with 90 resistive phase change memory (PCM) elements trained with a bioinspired variant of the contrastive divergence algorithm, implementing Hebbian and anti-Hebbian weight updates. The resistive PCM devices show a twofold to tenfold reduction in error rate in a missing-pixel pattern completion task trained over 30 epochs, compared with the untrained case. Measured programming energy consumption is 6.1 nJ per epoch with the PCM devices, a factor of ~150 lower than in conventional processor-memory systems. We analyze and discuss the dependence of learning performance on cycle-to-cycle variations and the number of gradual levels in the PCM analog memory devices.
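
A hedged sketch of the training scheme described above: the CD-1 weight gradient splits into a Hebbian term (data phase) and an anti-Hebbian term (reconstruction phase), and quantizing each update to a fixed pulse size stands in for the gradual PCM levels. Dimensions match the 45-synapse RBM (9 visible x 5 hidden), but the learning rate, pulse size, and sampling details are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd1_pcm_update(W, v0, lr=0.05, pulse=0.01):
    # W  : (n_vis, n_hid) effective weights, e.g. G_plus - G_minus of a
    #      differential pair of PCM devices
    # v0 : (n_vis,) binary data vector
    h0 = (sigmoid(v0 @ W) > rng.random(W.shape[1])).astype(float)  # sample hidden
    v1 = (sigmoid(W @ h0) > rng.random(W.shape[0])).astype(float)  # reconstruct
    h1 = sigmoid(v1 @ W)
    grad = np.outer(v0, h0) - np.outer(v1, h1)      # Hebbian minus anti-Hebbian
    return W + pulse * np.round(lr * grad / pulse)  # quantized "pulses"

W = rng.normal(scale=0.1, size=(9, 5))              # 45 synapses, as in the paper
for _ in range(30):                                 # 30 epochs, as in the abstract
    W = cd1_pcm_update(W, (rng.random(9) > 0.5).astype(float))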


Biomedical Circuits and Systems Conference | 2016

Forward table-based presynaptic event-triggered spike-timing-dependent plasticity

Bruno U. Pedroni; Sadique Sheik; Siddharth Joshi; Georgios Detorakis; Somnath Paul; Charles Augustine; Emre Neftci; Gert Cauwenberghs

Spike-timing-dependent plasticity (STDP) incurs both causal and acausal synaptic weight updates, for negative and positive time differences between presynaptic and postsynaptic spike events. For realizing such updates in neuromorphic hardware, current implementations either require forward and reverse lookup access to the synaptic connectivity table, or rely on memory-intensive architectures such as crossbar arrays. We present a novel method for realizing both causal and acausal weight updates using only forward lookup access to the synaptic connectivity table, permitting a memory-efficient implementation. A simplified FPGA implementation, using a single timer variable for each neuron, closely approximates exact STDP cumulative weight updates, and reduces to exact STDP for refractory periods greater than the STDP time window. Compared to a conventional crossbar implementation, the forward table-based implementation using run-length encoding leads to substantial memory savings for sparsely connected networks, supporting scalable neuromorphic systems with fully reconfigurable synaptic connectivity and plasticity.
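
A rough sketch, under my own simplifying assumptions, of how both update types can be driven from presynaptic events alone: acausal depression is applied immediately when a presynaptic spike arrives, while the causal update for that spike is deferred to the neuron's next presynaptic event, by which time any intervening postsynaptic spikes are recorded in the per-neuron timers. The class layout and parameter values are hypothetical.

import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    # Pair-based STDP curve with dt = t_post - t_pre:
    # potentiation for causal pairs (dt >= 0), depression otherwise.
    return a_plus * np.exp(-dt / tau) if dt >= 0 else -a_minus * np.exp(dt / tau)

class ForwardSTDP:
    # fwd_table[i] lists the postsynaptic targets of neuron i, so only
    # forward connectivity is ever accessed; last_spike[j] is the single
    # per-neuron timer mentioned in the abstract.
    def __init__(self, fwd_table, weights):
        self.fwd = fwd_table      # dict: pre id -> list of post ids
        self.w = weights          # dict: (pre, post) -> weight
        self.last_spike = {}      # per-neuron last spike time
        self.prev_pre = {}        # each pre neuron's previous spike time

    def on_pre_spike(self, i, t):
        for j in self.fwd[i]:
            t_post = self.last_spike.get(j)
            if t_post is None:
                continue
            t_prev = self.prev_pre.get(i)
            if t_prev is not None and t_post > t_prev:
                self.w[i, j] += stdp(t_post - t_prev)  # deferred causal update
            self.w[i, j] += stdp(t_post - t)           # immediate acausal update
        self.prev_pre[i] = t
        self.last_spike[i] = t

    def on_post_spike(self, j, t):
        self.last_spike[j] = t    # no reverse lookup is ever needed

net = ForwardSTDP({0: [1]}, {(0, 1): 0.5})
net.on_pre_spike(0, 10.0)
net.on_post_spike(1, 15.0)
net.on_pre_spike(0, 40.0)
print(net.w[0, 1])   # net potentiation: the post spike followed the pre spike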


iScience | 2018

Data and Power Efficient Intelligence with Neuromorphic Learning Machines

Emre Neftci

The success of deep networks and recent industry involvement in brain-inspired computing is igniting widespread interest in neuromorphic hardware that emulates the biological processes of the brain on an electronic substrate. This review explores interdisciplinary approaches anchored in machine learning theory that enable the applicability of neuromorphic technologies to real-world, human-centric tasks. We find that (1) recent work on binary deep networks and approximate gradient-descent learning is strikingly compatible with a neuromorphic substrate; (2) where real-time adaptability and autonomy are necessary, neuromorphic technologies can achieve significant advantages over mainstream ones; and (3) challenges in memory technologies, compounded by a tradition of bottom-up approaches in the field, block the road to major breakthroughs. We suggest that a neuromorphic learning framework, tuned specifically for the spatial and temporal constraints of the neuromorphic substrate, will help guide hardware-algorithm co-design and the deployment of neuromorphic hardware for proactive learning of real-world data.


International Symposium on Circuits and Systems | 2016

Synaptic sampling in hardware spiking neural networks

Sadique Sheik; Somnath Paul; Charles Augustine; Chinnikrishna Kothapalli; Muhammad M. Khellah; Gert Cauwenberghs; Emre Neftci

Using a neural sampling approach, networks of stochastic spiking neurons, interconnected with plastic synapses, have been used to construct computational machines such as Restricted Boltzmann Machines (RBMs). Previous work towards building such networks achieved lower performance than traditional RBMs. More recently, Synaptic Sampling Machines (SSMs) were shown to outperform equivalent RBMs. In SSMs, the stochasticity for the sampling is generated at the synapse. Stochastic synapses play the dual role of a regularizer during learning and an efficient mechanism for implementing stochasticity in neural networks over a wide dynamic range. In this paper we show that SSMs with stochastic synapses, implemented in FPGA-based spiking neural networks, can obtain high accuracy in classifying the MNIST handwritten-digit database. We compare classification accuracy at different bit precisions for stochastic and non-stochastic synapses, and further argue that stochastic synapses have the same effect as synapses with higher bit precision while requiring significantly fewer computational resources.
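
One way to see why stochastic synapses can stand in for extra bits is a dithering argument: deterministic rounding to few levels leaves a fixed bias, while stochastic rounding leaves an unbiased error that averages out over many passes. The experiment below is my own illustration of this effect, using stochastic rounding as a stand-in, not the paper's FPGA implementation.

import numpy as np

rng = np.random.default_rng(5)

def det_round(w, bits):
    # Deterministic quantization to 2**bits - 1 uniform levels in [0, 1].
    levels = 2 ** bits - 1
    return np.round(w * levels) / levels

def stoch_round(w, bits):
    # Stochastic quantization: round up with probability equal to the
    # fractional part, so the quantized weight is unbiased in expectation.
    levels = 2 ** bits - 1
    lo = np.floor(w * levels)
    return (lo + (rng.random(w.shape) < w * levels - lo)) / levels

W = rng.random((16, 64))      # "true" high-precision weights in [0, 1]
x = rng.random(64)
target = W @ x

det = det_round(W, 2) @ x                                         # fixed bias
sto = np.mean([stoch_round(W, 2) @ x for _ in range(2000)], 0)    # bias dithers out
print(np.abs(det - target).mean(), np.abs(sto - target).mean())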


International Symposium on Circuits and Systems | 2016

Stochastic synaptic plasticity with memristor crossbar arrays

Rawan Naous; Maruan Al-Shedivat; Emre Neftci; Gert Cauwenberghs; Khaled N. Salama

Memristive devices have been shown to exhibit slow and stochastic resistive switching behavior under low-voltage, low-current operating conditions. Here we explore such mechanisms to emulate stochastic plasticity in memristor crossbar synapse arrays. Interfaced with integrate-and-fire spiking neurons, the memristive synapse arrays are capable of implementing stochastic forms of spike-timing-dependent plasticity which parallel mean-rate models of stochastic learning with binary synapses. We present theory and experiments with spike-based stochastic learning in memristor crossbar arrays, including simplified modeling as well as detailed physical simulation of memristor stochastic resistive switching characteristics due to voltage- and current-induced filament formation and collapse.
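
A minimal model, my own illustration, of the stochastic plasticity described: each memristive synapse is binary, and a plasticity event only switches its state with some probability, so the expected weight follows a mean-rate learning rule even though every device is two-state. The switching probabilities below are hypothetical.

import numpy as np

rng = np.random.default_rng(6)

def stochastic_stdp_step(w, pre, post, p_set=0.1, p_reset=0.05):
    # w    : binary synapse states (0 = high-resistance, 1 = low-resistance)
    # pre  : bool array, presynaptic spike this step
    # post : bool, postsynaptic spike this step
    # A coincident pre/post pair attempts SET with probability p_set;
    # a lone pre spike attempts RESET with probability p_reset.
    r = rng.random(w.shape)
    if post:
        w = np.where(pre & (r < p_set), 1, w)       # stochastic potentiation
    else:
        w = np.where(pre & (r < p_reset), 0, w)     # stochastic depression
    return w

w = rng.integers(0, 2, size=10000)
for _ in range(200):
    w = stochastic_stdp_step(w, rng.random(10000) < 0.5, bool(rng.random() < 0.5))
print(w.mean())   # relaxes toward p_set / (p_set + p_reset) = 2/3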


International Symposium on Circuits and Systems | 2017

Event-driven random backpropagation: Enabling neuromorphic deep learning machines

Emre Neftci; Charles Augustine; Somnath Paul; Georgios Detorakis

An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. The gradient-descent back-propagation rule is a powerful algorithm that is ubiquitous in deep learning, but it relies on the immediate availability of network-wide information stored in high-precision memory. However, recent work shows that exact backpropagated gradients are not essential for learning deep representations. Here, we demonstrate an event-driven random backpropagation (eRBP) rule that uses an error-modulated synaptic plasticity rule for learning deep representations in neuromorphic computing hardware. The rule is very suitable for implementation in neuromorphic hardware using a two-compartment leaky integrate-and-fire neuron and a membrane-voltage-modulated, spike-driven plasticity rule. Our results show that using eRBP, deep representations are rapidly learned without using backpropagated gradients, achieving nearly identical classification accuracies compared to artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.

Collaboration


Dive into Emre Neftci's collaborations.

Top co-authors include Sadique Sheik (University of California).