Severin Sidler
IBM
Publications
Featured research published by Severin Sidler.
IEEE Transactions on Electron Devices | 2015
Geoffrey W. Burr; Robert M. Shelby; Severin Sidler; Carmelo di Nolfo; Jun-Woo Jang; Irem Boybat; Rohit S. Shenoy; Pritish Narayanan; Kumar Virwani; Emanuele U. Giacometti; B. N. Kurdi; Hyunsang Hwang
Using 2 phase-change memory (PCM) devices per synapse, a 3-layer perceptron network with 164,885 synapses is trained on a subset (5000 examples) of the MNIST database of handwritten digits using a backpropagation variant suitable for NVM+selector crossbar arrays, obtaining a training (generalization) accuracy of 82.2% (82.9%). Using a neural network (NN) simulator matched to the experimental demonstrator, extensive tolerancing is performed with respect to NVM variability, yield, and the stochasticity, linearity and asymmetry of NVM-conductance response.
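The two-devices-per-synapse scheme described above stores each weight as the difference of two conductances, W = G+ - G-, so weights can be raised or lowered using only the gradual partial-SET pulses the devices support. Below is a minimal NumPy sketch of that idea; the class name, conductance bounds, and update step are illustrative assumptions, not the paper's actual simulator.

```python
import numpy as np

G_MIN, G_MAX = 0.1, 1.0   # assumed conductance bounds (arbitrary units)

class DifferentialSynapses:
    """Each weight is a PCM pair: W = G_plus - G_minus."""

    def __init__(self, n_in, n_out, rng):
        self.Gp = rng.uniform(G_MIN, G_MAX, (n_in, n_out))
        self.Gm = rng.uniform(G_MIN, G_MAX, (n_in, n_out))

    def weights(self):
        return self.Gp - self.Gm

    def update(self, dW, step=0.02):
        # Crossbar-friendly rule: apply only gradual partial-SET pulses
        # (conductance increases). To raise a weight, potentiate G_plus;
        # to lower it, potentiate G_minus.
        self.Gp += step * np.clip(dW, 0.0, None)
        self.Gm += step * np.clip(-dW, 0.0, None)
        np.clip(self.Gp, G_MIN, G_MAX, out=self.Gp)
        np.clip(self.Gm, G_MIN, G_MAX, out=self.Gm)

rng = np.random.default_rng(0)
syn = DifferentialSynapses(784, 250, rng)   # e.g. one perceptron layer
dW = rng.normal(size=(784, 250))            # stand-in backprop gradient
syn.update(dW)
print(syn.weights().shape)                  # (784, 250)
```

Because this rule only ever increases conductances, both devices in a pair drift toward saturation, which is one reason the bounded range, nonlinearity, and asymmetry of real conductance responses dominate the tolerancing results described above.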
Advances in Physics: X | 2017
Geoffrey W. Burr; Robert M. Shelby; Abu Sebastian; SangBum Kim; Seyoung Kim; Severin Sidler; Kumar Virwani; Masatoshi Ishii; Pritish Narayanan; Alessandro Fumarola; Lucas L. Sanches; Irem Boybat; Manuel Le Gallo; Kibong Moon; Jiyoo Woo; Hyunsang Hwang; Yusuf Leblebici
Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path for implementing massively-parallel and highly energy-efficient neuromorphic computing systems. We first review recent advances in the application of NVM devices to three computing paradigms: spiking neural networks (SNNs), deep neural networks (DNNs), and ‘Memcomputing’. In SNNs, NVM synaptic connections are updated by a local learning rule such as spike-timing-dependent-plasticity, a computational approach directly inspired by biology. For DNNs, NVM arrays can represent matrices of synaptic weights, implementing the matrix–vector multiplication needed for algorithms such as backpropagation in an analog yet massively-parallel fashion. This approach could provide significant improvements in power and speed compared to GPU-based DNN training, for applications of commercial significance. We then survey recent research in which different types of NVM devices – including phase change memory, conductive-bridging RAM, filamentary and non-filamentary RRAM, and other NVMs – have been proposed, either as a synapse or as a neuron, for use within a neuromorphic computing application. The relevant virtues and limitations of these devices are assessed, in terms of properties such as conductance dynamic range, (non)linearity and (a)symmetry of conductance response, retention, endurance, required switching power, and device variability.
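The matrix–vector multiplication mentioned for DNNs is where crossbar arrays shine: with weights stored as conductances, Ohm's law and Kirchhoff's current law compute all the multiply-accumulates in a single parallel read. A toy NumPy sketch of the operation being emulated (all values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(0.1, 1.0, size=(4, 3))   # crossbar conductances (siemens)
v = np.array([0.2, 0.0, 0.5, 0.1])       # voltages applied to the 4 rows

# Ohm's law gives the per-device current I_ij = G_ij * v_i, and Kirchhoff's
# current law sums these along each column, so the three column currents
# are exactly the matrix-vector product G^T v -- computed in one step.
i_columns = G.T @ v
print(i_columns)
```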
International Electron Devices Meeting | 2015
Geoffrey W. Burr; Pritish Narayanan; Robert M. Shelby; Severin Sidler; Irem Boybat; C. di Nolfo; Yusuf Leblebici
We review our work towards achieving competitive performance (classification accuracies) for on-chip machine learning (ML) of large-scale artificial neural networks (ANN) using Non-Volatile Memory (NVM)-based synapses, despite the inherent random and deterministic imperfections of such devices. We then show that such systems could potentially offer faster (up to 25×) and lower-power (120–2850×) ML training than GPU-based hardware.
European Solid-State Device Research Conference | 2016
Severin Sidler; Irem Boybat; Robert M. Shelby; Pritish Narayanan; Junwoo Jang; Alessandro Fumarola; Kibong Moon; Yusuf Leblebici; Hyunsang Hwang; Geoffrey W. Burr
We assess the impact of the conductance response of Non-Volatile Memory (NVM) devices employed as the synaptic weight element for on-chip acceleration of the training of large-scale artificial neural networks (ANN). We briefly review our previous work towards achieving competitive performance (classification accuracies) for such ANN with both Phase-Change Memory (PCM) [1], [2] and non-filamentary ReRAM based on Pr1-xCaxMnO3 (PCMO) [3], and towards assessing the potential advantages for ML training over GPU-based hardware in terms of speed (up to 25× faster) and power (120–2850× lower power) [4]. We then discuss the “jump-table” concept, previously introduced to model real-world NVM such as PCM [1] or PCMO, to describe the full cumulative distribution function (CDF) of conductance-change at each device conductance value, for both potentiation (SET) and depression (RESET). Using several types of artificially-constructed jump-tables, we assess the relative importance of deviations from an ideal NVM with perfectly linear conductance response.
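A jump-table, as used here, tabulates for each conductance state the full CDF of the conductance change that one programming pulse produces, separately for SET and RESET. Below is a minimal sketch of drawing stochastic updates from such a table; the table contents are synthetic stand-ins for measured device data, and only the SET direction is shown.

```python
import numpy as np

rng = np.random.default_rng(2)

N_G, N_DG = 64, 32                        # conductance bins x delta-G bins
dg_values = np.linspace(-0.05, 0.15, N_DG)  # mostly positive: a SET table

# Synthetic SET jump-table: each row is the CDF of conductance change at
# one conductance value. A real table would come from repeated
# pulse-and-read measurements on hardware devices.
pdf = rng.random((N_G, N_DG))
cdf = np.cumsum(pdf, axis=1)
cdf /= cdf[:, -1:]

def apply_pulse(g, g_min=0.0, g_max=1.0):
    """Sample the conductance change for one SET pulse at conductance g."""
    row = int((g - g_min) / (g_max - g_min) * (N_G - 1))
    u = rng.random()
    dg = dg_values[np.searchsorted(cdf[row], u)]
    return np.clip(g + dg, g_min, g_max)

g = 0.3
for _ in range(5):
    g = apply_pulse(g)
print(g)
```

Replacing the synthetic rows with artificially constructed shapes (perfectly linear, saturating, noisy, and so on) is exactly what lets the paper isolate which deviations from an ideal response actually hurt training accuracy.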
IEEE International Conference on Rebooting Computing (ICRC) | 2016
Alessandro Fumarola; Pritish Narayanan; Lucas L. Sanches; Severin Sidler; Junwoo Jang; Kibong Moon; Robert M. Shelby; Hyunsang Hwang; Geoffrey W. Burr
Large arrays of the same nonvolatile memories (NVM) being developed for Storage-Class Memory (SCM) - such as Phase Change Memory (PCM) and Resistance RAM (ReRAM) - can also be used in non-Von Neumann neuromorphic computational schemes, with device conductance serving as synaptic “weight.” This allows the all-important multiply-accumulate operation within these algorithms to be performed efficiently at the location of the weight data.
IEEE Transactions on Circuits and Systems II: Express Briefs | 2017
Stanislaw Wozniak; Angeliki Pantazi; Severin Sidler; Nikolaos Papandreou; Yusuf Leblebici; Evangelos Eleftheriou
Neuromorphic computing takes inspiration from the brain to build highly parallel, energy- and area-efficient architectures. Recently, hardware realizations of neurons and synapses using memristive devices were proposed and applied for the task of correlation detection. However, for weakly correlated signals, this task becomes challenging because of the variability and the asymmetric conductance response of the memristive devices. In this brief, we propose a high-density memristive system realized using nanodevices based on phase-change technology. We present a noise-robust phase-change implementation of a neuron and a synaptic learning rule that is capable of capturing patterns of weakly correlated inputs. We experimentally demonstrate the operation with a correlation coefficient as low as 0.2 using a record number of 1M phase-change synapses.
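The correlation-detection task above can be pictured with a toy model: one integrate-and-fire neuron receives many spike trains, a small subset of which weakly follows a hidden reference process, and a coincidence-based plasticity rule gradually strengthens exactly that subset. The sketch below assumes idealized synapses and illustrative parameters; it is not the paper's phase-change implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_syn, n_corr, T = 1000, 100, 20000
rate, c = 0.05, 0.2                     # input spike rate, correlation coeff.

w = np.full(n_syn, 0.3)                 # synaptic weights ("conductances")
theta = 19.0                            # firing threshold (illustrative)

for t in range(T):
    ref = rng.random() < rate           # hidden reference spike process
    x = rng.random(n_syn) < rate        # independent background spikes
    # the first n_corr inputs additionally follow the reference, weakly
    x[:n_corr] |= ref & (rng.random(n_corr) < c)

    if w @ x > theta:                   # neuron fires on strong coincidence
        w += np.where(x, 0.01, -0.002)  # potentiate coincident synapses,
        np.clip(w, 0.0, 1.0, out=w)     # depress the silent ones

# the weakly correlated subset should stand out after learning
print(w[:n_corr].mean(), w[n_corr:].mean())
```

At c = 0.2 the reference-driven drive barely rises above the background fluctuations, which is why device variability and asymmetric conductance response make the hardware version of this task so challenging.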
IEEE Journal of the Electron Devices Society | 2018
Kibong Moon; Alessandro Fumarola; Severin Sidler; Junwoo Jang; Pritish Narayanan; Robert M. Shelby; Geoffrey W. Burr; Hyunsang Hwang
We report on material improvements to non-filamentary RRAM devices based on Pr0.7Ca0.3MnO3 by introducing an MoOx buffer layer together with a reactive Al electrode, and on device measurements designed to help gauge the performance of these devices as bidirectional analog synapses for on-chip acceleration of the backpropagation algorithm. Previous Al/PCMO devices exhibited degraded LRS retention due to the low activation energy for oxidation of the Al electrode, and Mo/PCMO devices showed low conductance contrast. To control the redox reaction at the metal/PCMO interface, we introduce a 4-nm interfacial layer of conducting MoOx as an oxygen buffer layer. Due to the controlled redox reaction within this Al/Mo/PCMO device, we observed improvements in both retention and conductance on/off ratio. We confirm bidirectional analog synapse characteristics and measure “jump-tables” suitable for large scale neural network simulations that attempt to capture complex and stochastic device behavior [see companion paper]. Finally, switching energy measurements are shown, illustrating a path for future device research toward smaller devices, shorter pulses and lower programming voltages.
International Conference on Artificial Neural Networks | 2017
Severin Sidler; Angeliki Pantazi; Stanislaw Wozniak; Yusuf Leblebici; Evangelos Eleftheriou
Neuromorphic systems using memristive devices provide a brain-inspired alternative to the classical von Neumann processor architecture. In this work, a spiking neural network (SNN) implemented using phase-change synapses is studied. The network is equipped with a winner-take-all (WTA) mechanism and a spike-timing-dependent synaptic plasticity rule realized using crystal-growth dynamics of phase-change memristors. We explore various configurations of the synapse implementation and we demonstrate the capabilities of the phase-change-based SNN as a pattern classifier using unsupervised learning. Furthermore, we enhance the performance of the SNN by introducing an input encoding scheme that encodes information from both the original and the complementary pattern. Simulation and experimental results demonstrate the learning accuracy of the phase-change-based SNN on the MNIST handwritten-digit benchmark.
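The complementary encoding scheme mentioned above can be stated in a couple of lines: each pattern is presented alongside its inverse, so that inactive pixels also contribute spikes and patterns that differ only in their "off" regions remain distinguishable. A minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def complementary_encode(pattern):
    """Concatenate a binary pattern with its complement, doubling the
    input channels so that 'off' pixels also carry spike information."""
    p = np.asarray(pattern, dtype=float)
    return np.concatenate([p, 1.0 - p])

digit = np.array([0, 1, 1, 0, 1])        # toy 5-pixel pattern
print(complementary_encode(digit))       # [0. 1. 1. 0. 1. 1. 0. 0. 1. 0.]
```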
Archive | 2017
Lucas L. Sanches; Alessandro Fumarola; Severin Sidler; Pritish Narayanan; Irem Boybat; Junwoo Jang; Kibong Moon; Robert M. Shelby; Yusuf Leblebici; Hyunsang Hwang; Geoffrey W. Burr
Large arrays of the same nonvolatile memories (NVMs) being developed for storage-class memory (SCM) – such as phase-change memory (PCM) and resistive RAM (RRAM) – can also be used in non-Von Neumann neuromorphic computational schemes, with device conductance serving as synaptic “weight.” This allows the all-important multiply-accumulate operation within these algorithms to be performed efficiently at the location of the weight data.
Advances in Neuromorphic Hardware Exploiting Emerging Nanoscale Devices | 2017
Severin Sidler; Jun-Woo Jang; Geoffrey W. Burr; Robert M. Shelby; Irem Boybat; Carmelo di Nolfo; Pritish Narayanan; Kumar Virwani; Hyunsang Hwang
In the conventional von Neumann (VN) architecture, data—both operands and operations to be performed on those operands—makes its way from memory to a dedicated central processor. With the end of Dennard scaling and the resulting slowdown in Moore’s law, the IT industry is turning its attention to non-Von Neumann (non-VN) architectures, and in particular, to computing architectures motivated by the human brain. One family of such non-VN computing architectures is artificial neural networks (ANNs). To be competitive with conventional architectures, such ANNs will need to be massively parallel, with many neurons interconnected using a vast number of synapses, working together efficiently to compute problems of significant interest. Emerging nonvolatile memories, such as phase-change memory (PCM) or resistive memory (RRAM), could prove very helpful for this, by providing inherently analog synaptic behavior in densely packed crossbar arrays suitable for on-chip learning. We discuss our recent research investigating the characteristics needed from such nonvolatile memory elements for implementation of high-performance ANNs. We describe experiments on a 3-layer perceptron network with 164,885 synapses, each implemented using 2 NVM devices. A variant of the backpropagation weight update rule suitable for NVM+selector crossbar arrays is shown and implemented in a mixed hardware–software experiment using an available, non-crossbar PCM array. Extensive tolerancing results are enabled by precise matching of our NN simulator to the conditions of the hardware experiment. This tolerancing shows clearly that NVM-based neural networks are highly resilient to random effects (NVM variability, yield, and stochasticity), but highly sensitive to gradient effects that act to steer all synaptic weights. Simulations of ANNs with both PCM and non-filamentary bipolar RRAM based on Pr1-xCaxMnO3 (PCMO) are also discussed. PCM exhibits smooth, slightly nonlinear partial-SET (conductance increase) behavior, but the asymmetry of its abrupt RESET introduces difficulties; in contrast, PCMO offers continuous conductance change in both directions, but exhibits significant nonlinearities (degree of conductance change depends strongly on absolute conductance). The quantitative impacts of these issues on ANN performance (classification accuracy) are discussed.
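The PCM-versus-PCMO contrast in the final paragraph can be captured with two toy update models: PCM as smooth, bounded increments with an abrupt reset, PCMO as bidirectional but strongly conductance-dependent steps. A minimal sketch under those assumptions, with purely illustrative parameter values:

```python
G_MIN, G_MAX = 0.0, 1.0

def pcm_update(g, potentiate, step=0.05):
    """PCM-like: smooth partial-SET increments, but depression is abrupt -
    a single RESET pulse collapses the conductance (the asymmetry)."""
    if potentiate:
        return min(g + step * (G_MAX - g), G_MAX)    # slightly nonlinear SET
    return G_MIN                                      # abrupt RESET

def pcmo_update(g, potentiate, step=0.08):
    """PCMO-like: gradual in both directions, but the step size depends
    strongly on the current conductance (the nonlinearity)."""
    if potentiate:
        return min(g + step * (G_MAX - g), G_MAX)     # saturates near G_MAX
    return max(g - step * (g - G_MIN), G_MIN)         # saturates near G_MIN

g_pcm = pcm_update(0.5, potentiate=False)    # -> 0.0, weight is wiped out
g_pcmo = pcmo_update(0.5, potentiate=False)  # -> 0.46, gradual decrease
print(g_pcm, g_pcmo)
```

Plugging response models like these into a network simulator is what allows the chapter to quantify how each non-ideality, asymmetry for PCM and nonlinearity for PCMO, translates into lost classification accuracy.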