Sadique Sheik
University of California, San Diego
Publications
Featured research published by Sadique Sheik.
Biomedical Circuits and Systems Conference | 2016
Bruno U. Pedroni; Sadique Sheik; Siddharth Joshi; Georgios Detorakis; Somnath Paul; Charles Augustine; Emre Neftci; Gert Cauwenberghs
Spike-timing-dependent plasticity (STDP) incurs both causal and acausal synaptic weight updates, for negative and positive time differences between presynaptic and postsynaptic spike events. To realize such updates in neuromorphic hardware, current implementations either require forward and reverse lookup access to the synaptic connectivity table, or rely on memory-intensive architectures such as crossbar arrays. We present a novel method for realizing both causal and acausal weight updates using only forward lookup access to the synaptic connectivity table, permitting memory-efficient implementation. A simplified FPGA implementation, using a single timer variable for each neuron, closely approximates exact STDP cumulative weight updates, and reduces to exact STDP for refractory periods greater than the STDP time window. Compared to a conventional crossbar implementation, the forward table-based implementation using run-length encoding yields substantial memory savings for sparsely connected networks, supporting scalable neuromorphic systems with fully reconfigurable synaptic connectivity and plasticity.
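A minimal Python sketch may make the forward-lookup scheme concrete. It is our own illustration, not the paper's implementation: amplitude and time-constant values are assumed, and the paper's deferred-update schedule differs in detail. Each neuron carries a single last-spike timer; when a presynaptic spike triggers a forward traversal of the connectivity table, the acausal update is read directly off the postsynaptic timer, and the causal update owed to the previous presynaptic spike is credited in the same pass, so no reverse lookup is ever needed.

```python
import math

# Sketch of presynaptic-event-triggered STDP using only forward lookup of
# the connectivity table. Amplitudes and time constant are assumptions for
# illustration; the paper's exact update schedule differs in detail.

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes (assumed)
TAU = 20.0                      # STDP time constant in ms (assumed)

class Neuron:
    def __init__(self):
        self.last_spike = -math.inf   # the single timer variable per neuron

def on_pre_spike(pre, t_now, t_pre_prev, fwd_table, weights):
    """Handle a presynaptic spike of neuron `pre` at time t_now.

    fwd_table[pre] lists postsynaptic targets (forward lookup only);
    t_pre_prev is `pre`'s previous spike time.
    """
    for post in fwd_table[pre]:
        t_post = post.last_spike
        if t_post == -math.inf:
            continue  # post has never spiked: no pairing to update
        # Acausal update: post spiked before this presynaptic spike.
        weights[(pre, post)] -= A_MINUS * math.exp(-(t_now - t_post) / TAU)
        # Deferred causal update: if post spiked after the *previous*
        # presynaptic spike, credit that pairing now, in the same pass.
        if t_post > t_pre_prev:
            weights[(pre, post)] += A_PLUS * math.exp(-(t_post - t_pre_prev) / TAU)

# Tiny usage example
pre, post = Neuron(), Neuron()
post.last_spike = 5.0
w = {(pre, post): 0.5}
on_pre_spike(pre, t_now=12.0, t_pre_prev=2.0, fwd_table={pre: [post]}, weights=w)
```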
International Symposium on Circuits and Systems | 2016
Sadique Sheik; Somnath Paul; Charles Augustine; Chinnikrishna Kothapalli; Muhammad M. Khellah; Gert Cauwenberghs; Emre Neftci
Using a neural sampling approach, networks of stochastic spiking neurons interconnected with plastic synapses have been used to construct computational machines such as Restricted Boltzmann Machines (RBMs). Previous work towards building such networks achieved lower performance than traditional RBMs. More recently, Synaptic Sampling Machines (SSMs) were shown to outperform equivalent RBMs. In SSMs, the stochasticity for sampling is generated at the synapse. Stochastic synapses play a dual role: a regularizer during learning and an efficient mechanism for implementing stochasticity in neural networks over a wide dynamic range. In this paper we show that SSMs with stochastic synapses, implemented in FPGA-based spiking neural networks, can achieve high accuracy in classifying the MNIST handwritten digit database. We compare classification accuracy across bit precisions for stochastic and non-stochastic synapses, and further argue that stochastic synapses have the same effect as synapses with higher bit precision while requiring significantly fewer computational resources.
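The core mechanism, a stochastic "blank-out" synapse that transmits each presynaptic event only with some probability, can be sketched in a few lines of Python (a hedged illustration with assumed names and parameters, not the FPGA implementation):

```python
import numpy as np

# Sketch of stochastic ("blank-out") synapses: each active synapse delivers
# its weight only with probability p. Names and values are illustrative.

rng = np.random.default_rng(0)

def stochastic_synaptic_input(weights, pre_spikes, p=0.5):
    """weights: (n_post, n_pre) array; pre_spikes: binary (n_pre,) vector.

    Each synapse is independently masked with probability 1 - p, which both
    injects the sampling noise and acts as a DropConnect-style regularizer
    during learning.
    """
    mask = rng.random(weights.shape) < p      # Bernoulli gating per synapse
    return (weights * mask) @ pre_spikes      # masked weighted sum

# Tiny usage example: 4 postsynaptic neurons, 6 presynaptic channels
w = rng.normal(size=(4, 6))
s = (rng.random(6) < 0.5).astype(float)
print(stochastic_synaptic_input(w, s))
```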
Biomedical Circuits and Systems Conference | 2016
Sadique Sheik; Somnath Paul; Charles Augustine; Gert Cauwenberghs
Several learning rules for synaptic plasticity that depend on either spike timing or internal state variables have been proposed in the past, imparting varying computational capabilities to Spiking Neural Networks (SNNs). Due to design complications, these learning rules are typically not implemented on neuromorphic devices, leaving the devices capable of inference only. In this work we propose a unidirectional, post-synaptic-potential-dependent learning rule that is triggered only by pre-synaptic spikes and is easy to implement in hardware. We demonstrate that such a learning rule is functionally capable of replicating the computational capabilities of pairwise STDP. Furthermore, we demonstrate that this learning rule can be used to learn and classify spatio-temporal spike patterns in an unsupervised manner using individual neurons. We argue that this learning rule is computationally powerful and also ideal for hardware implementation due to its unidirectional memory access.
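A minimal sketch of the flavor of such a rule, under our own assumptions (the threshold and learning-rate values are illustrative, not the paper's): on every presynaptic spike, each outgoing weight is nudged up or down depending only on the current postsynaptic membrane potential, requiring nothing but forward memory access.

```python
# Sketch of a presynaptic-spike-triggered, membrane-potential-dependent
# update. THETA and ETA are assumptions for illustration.

THETA = 0.5   # membrane-potential threshold separating potentiation from depression
ETA = 0.005   # learning rate

def on_pre_spike(pre_idx, v_post, weights):
    """On a presynaptic spike of neuron pre_idx, update each outgoing weight
    using only the current postsynaptic membrane potential (forward access only)."""
    for j, v in enumerate(v_post):
        if v > THETA:
            weights[pre_idx][j] += ETA   # post strongly depolarized: potentiate
        else:
            weights[pre_idx][j] -= ETA   # post near rest: depress

# Tiny usage example
w = [[0.2, 0.2, 0.2]]
on_pre_spike(0, v_post=[0.7, 0.1, 0.6], weights=w)
print(w)   # [[0.205, 0.195, 0.205]]
```

Because depolarization above threshold correlates with recent or imminent postsynaptic firing, such a rule potentiates roughly where pairwise STDP would, without ever consulting postsynaptic spike times.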
International Symposium on Circuits and Systems | 2017
Hesham Mostafa; Bruno U. Pedroni; Sadique Sheik; Gert Cauwenberghs
Spike generation and routing is typically the most energy-demanding operation in neuromorphic hardware built using spiking neurons. Spiking neural networks running on neuromorphic hardware, however, often use rate coding, where a neuron's spike rate is treated as the information-carrying quantity. Rate coding is a highly inefficient coding scheme with minimal information content per spike, which requires the transmission of a large number of spikes. In this paper, we describe an alternative type of spiking network based on temporal coding, where neuron spiking activity is very sparse and information is encoded in the time of each spike. We implemented the proposed networks on an FPGA platform and used these sparsely active spiking networks to classify MNIST digits. The FPGA implementation produces the classification output using only a few tens of spikes from the hidden layer, and the classification result is obtained very quickly, typically within 1–3 synaptic time constants. We describe the idealized network dynamics and how these dynamics are adapted to allow an efficient implementation on digital hardware. Our results illustrate the importance of exploiting the temporal dynamics of spiking networks to maximize the information content of each spike, which ultimately leads to reduced spike counts, improved energy efficiency, and faster response times.
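For a non-leaky integrate-and-fire neuron with exponentially decaying synaptic current, the idealized dynamics admit a closed-form first-spike time, which is what makes such a temporal code tractable in digital hardware. The Python sketch below illustrates that relation under our own conventions (threshold normalized to 1, time in units of the synaptic time constant; variable names are ours, not the paper's):

```python
import math

# Sketch of the idealized first-spike-time relation for a non-leaky
# integrate-and-fire neuron with exponentially decaying synaptic current.
# Threshold normalized to 1; time in units of the synaptic time constant.

def first_spike_time(in_times, in_weights):
    """Return the output spike time, or None if the threshold is never reached."""
    order = sorted(range(len(in_times)), key=lambda i: in_times[i])
    w_sum, z_sum = 0.0, 0.0   # running sums over the causal input set
    for k, i in enumerate(order):
        w_sum += in_weights[i]
        z_sum += in_weights[i] * math.exp(in_times[i])
        if w_sum > 1.0 and z_sum > 0.0:
            # Threshold crossing: exp(t_out) = z_sum / (w_sum - 1)
            t_out = math.log(z_sum / (w_sum - 1.0))
            # Valid only if the spike lands before the next input arrives.
            nxt = in_times[order[k + 1]] if k + 1 < len(order) else math.inf
            if in_times[i] <= t_out < nxt:
                return t_out
    return None

print(first_spike_time([0.0, 0.5, 1.0], [0.8, 0.7, -0.2]))   # ~1.55
```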
Frontiers in Neuroscience | 2017
Hesham Mostafa; Bruno U. Pedroni; Sadique Sheik; Gert Cauwenberghs
Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations, owing to the sparsity in the forward neural and backpropagating error signal paths, contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan-6 FPGA interfacing with an external 1 Gb DDR2 DRAM, showing small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks.
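The arithmetic saving can be illustrated with a small Python sketch (our own, with assumed shapes, threshold, and step size): when activations are binary and backpropagated errors are truncated to ternary values, the outer-product weight update collapses to sign-gated additions.

```python
import numpy as np

# Sketch of why binary activations plus ternary errors remove multiplications:
# with activations a in {0, 1} and errors e in {-1, 0, +1}, the outer-product
# update eta * e * a reduces to adding or subtracting a fixed step size.

def ternarize(err, thresh=0.05):
    """Truncate a real-valued error vector to {-1, 0, +1}."""
    return np.sign(err) * (np.abs(err) > thresh)

def weight_update(W, a_pre, e_post, eta=2**-8):
    """Apply the step eta only where a_pre == 1 and e_post != 0.
    In hardware each touched weight needs one add or subtract, no multiplier."""
    active = a_pre == 1
    for j, e in enumerate(e_post):
        if e > 0:
            W[j, active] += eta    # error +1: add the step
        elif e < 0:
            W[j, active] -= eta    # error -1: subtract the step
    return W

# Tiny usage example
W = np.zeros((2, 4))
a = np.array([1, 0, 1, 1])
e = ternarize(np.array([0.3, -0.01]))   # -> [1., -0.]
weight_update(W, a, e)
```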
Frontiers in Neuroscience | 2016
Christian Mayr; Sadique Sheik; Chiara Bartolozzi; Elisabetta Chicca
Brain plasticity serves animals in a wide range of vital functions. It assists them in adapting their behavior to their surroundings, in learning new strategies for optimizing a reward-seeking policy for their survival, or in adjusting motor activity through sensory feedback. Plasticity is thus an essential ingredient for building artificial autonomous systems that can cope with the real world. To build such systems, neuromorphic design labs actively investigate and develop various circuit implementations of plasticity, and this Research Topic collects a comprehensive snapshot of that work. A number of the manuscripts published in this topic study the interplay between stochasticity and plasticity (Afshar et al.; Bill and Legenstein; Lagorce et al.; Qiao et al.): plasticity here acts in a stochastic fashion or extracts features from stochastic sensor data. The current push toward higher complexity and scale in neuromorphic devices can also be observed in plasticity implementations (Qiao et al.; Wang et al.; Noack et al.). Owing to advantageous technology scaling and reproducibility, digital implementations of neuromorphic plasticity are gaining popularity (Galluppi et al.; Vogginger et al.). The collection of articles is rounded out by articles on plasticity in novel nano-scale technologies (Saighi et al.; Thomas et al.; Wang et al.; Bill and Legenstein).
International Symposium on Circuits and Systems | 2017
Bruno U. Pedroni; Sadique Sheik; Gert Cauwenberghs
In this paper we propose a method for continuously processing and learning from data in Restricted Boltzmann Machines (RBMs). Traditionally, RBMs are trained using Contrastive Divergence (CD), an algorithm consisting of two phases, only one of which is driven by data. This not only prohibits training RBMs in conjunction with continuous-time data streams, especially in event-based real-time systems, but also limits the training speed of RBMs in large-scale machine learning systems. The model we propose trades space for time and, by pipelining information propagation through the network, is capable of processing both phases of the CD learning algorithm simultaneously. Simulation results of our model on generative and discriminative tasks show convergence to the original CD algorithm. We conclude with a discussion of applying our method to other deep neural networks, enabling continuous learning and reduced training time.
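For reference, a minimal Python sketch of the standard, sequential CD-1 update makes the two phases explicit (our own illustration, with biases omitted and names assumed); the paper's contribution is to pipeline these phases so both proceed simultaneously on streaming data.

```python
import numpy as np

# Sketch of standard CD-1 for an RBM: the positive phase is data-driven,
# the negative phase is model-driven. This reference version is sequential;
# the paper pipelines the two phases. Biases omitted for brevity.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v_data, eta=0.01):
    """One CD-1 step for an RBM with weight matrix W of shape (n_hid, n_vis)."""
    # Positive phase: hidden samples driven by the data.
    h_data = (rng.random(W.shape[0]) < sigmoid(W @ v_data)).astype(float)
    # Negative phase: one step of Gibbs sampling (the "reconstruction").
    v_model = (rng.random(W.shape[1]) < sigmoid(W.T @ h_data)).astype(float)
    h_model = sigmoid(W @ v_model)
    # Contrastive update: positive statistics minus negative statistics.
    W += eta * (np.outer(h_data, v_data) - np.outer(h_model, v_model))
    return W

# Tiny usage example: 16 visible, 8 hidden units
W = rng.normal(scale=0.1, size=(8, 16))
v = (rng.random(16) < 0.5).astype(float)
cd1_update(W, v)
```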
International Conference on Applications in Nonlinear Dynamics | 2016
Sadique Sheik
Spiking neural networks are seen as the third generation of neural networks and the closest emulators of their biological counterparts. These networks use spikes as the means of transmitting information between neurons. We study the merits and capacity of information transfer using spikes across different encoding and decoding schemes, and show that a spatio-temporal encoding scheme provides very high efficiency in information transfer. We then explore various learning rules based on neural dynamics that enable learning of spatio-temporal spike patterns.
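A back-of-the-envelope calculation illustrates the efficiency claim (our own construction, not the chapter's exact analysis): over a window of T time bins, a rate code on a single axon distinguishes at most T + 1 spike counts, while a spatio-temporal code distinguishes every placement of k spikes among the T bins.

```python
from math import comb, log2

# Bits per window for a rate code vs. a spatio-temporal (timing) code on
# one axon over T time bins. T and k are illustrative values.

T, k = 100, 5
rate_bits = log2(T + 1)            # at most T + 1 distinguishable spike counts
temporal_bits = log2(comb(T, k))   # every placement of k spikes in T bins
print(f"rate: {rate_bits:.1f} bits, temporal: {temporal_bits:.1f} bits")
# rate: 6.7 bits, temporal: 26.2 bits per window
```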
Sensors and Actuators B: Chemical | 2015
Jordi Fonollosa; Sadique Sheik; Ramón Huerta; S. Marco
Procedia Engineering | 2014
Sadique Sheik; S. Marco; Ramón Huerta; Jordi Fonollosa