Irem Boybat
IBM
Publications
Featured research published by Irem Boybat.
Advances in Physics: X | 2017
Geoffrey W. Burr; Robert M. Shelby; Abu Sebastian; SangBum Kim; Seyoung Kim; Severin Sidler; Kumar Virwani; Masatoshi Ishii; Pritish Narayanan; Alessandro Fumarola; Lucas L. Sanches; Irem Boybat; Manuel Le Gallo; Kibong Moon; Jiyoo Woo; Hyunsang Hwang; Yusuf Leblebici
Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path for implementing massively-parallel and highly energy-efficient neuromorphic computing systems. We first review recent advances in the application of NVM devices to three computing paradigms: spiking neural networks (SNNs), deep neural networks (DNNs), and ‘Memcomputing’. In SNNs, NVM synaptic connections are updated by a local learning rule such as spike-timing-dependent-plasticity, a computational approach directly inspired by biology. For DNNs, NVM arrays can represent matrices of synaptic weights, implementing the matrix–vector multiplication needed for algorithms such as backpropagation in an analog yet massively-parallel fashion. This approach could provide significant improvements in power and speed compared to GPU-based DNN training, for applications of commercial significance. We then survey recent research in which different types of NVM devices – including phase change memory, conductive-bridging RAM, filamentary and non-filamentary RRAM, and other NVMs – have been proposed, either as a synapse or as a neuron, for use within a neuromorphic computing application. The relevant virtues and limitations of these devices are assessed, in terms of properties such as conductance dynamic range, (non)linearity and (a)symmetry of conductance response, retention, endurance, required switching power, and device variability.
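A minimal sketch of the crossbar idea described above, not code from the paper: a synaptic weight is encoded as the difference of two device conductances, and the matrix–vector product needed by backpropagation follows directly from Ohm's and Kirchhoff's laws when the array is read out. Names such as G_plus and G_minus are illustrative assumptions.

```python
# Sketch: analog matrix-vector multiplication with differential NVM conductance pairs.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
g_min, g_max = 0.1, 1.0                      # arbitrary conductance bounds (uS)

# Each weight W[i, j] is stored as the difference of two device conductances.
G_plus = rng.uniform(g_min, g_max, (n_out, n_in))
G_minus = rng.uniform(g_min, g_max, (n_out, n_in))
W = G_plus - G_minus

x = rng.uniform(0.0, 1.0, n_in)              # input activations applied as read voltages

# Column currents sum in parallel, so the full matrix-vector product is
# obtained in a single read step of the crossbar.
y = G_plus @ x - G_minus @ x
assert np.allclose(y, W @ x)
```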
International Electron Devices Meeting | 2015
Geoffrey W. Burr; Pritish Narayanan; Robert M. Shelby; Severin Sidler; Irem Boybat; C. di Nolfo; Yusuf Leblebici
We review our work towards achieving competitive performance (classification accuracies) for on-chip machine learning (ML) of large-scale artificial neural networks (ANN) using Non-Volatile Memory (NVM)-based synapses, despite the inherent random and deterministic imperfections of such devices. We then show that such systems could potentially offer ML training that is faster (up to 25×) and lower-power (by 120-2850×) than GPU-based hardware.
European Solid-State Device Research Conference | 2016
Severin Sidler; Irem Boybat; Robert M. Shelby; Pritish Narayanan; Junwoo Jang; Alessandro Fumarola; Kibong Moon; Yusuf Leblebici; Hyunsang Hwang; Geoffrey W. Burr
We assess the impact of the conductance response of Non-Volatile Memory (NVM) devices employed as the synaptic weight element for on-chip acceleration of the training of large-scale artificial neural networks (ANN). We briefly review our previous work towards achieving competitive performance (classification accuracies) for such ANN with both Phase-Change Memory (PCM) [1], [2] and non-filamentary ReRAM based on PrCaMnO (PCMO) [3], and towards assessing the potential advantages for ML training over GPU-based hardware in terms of speed (up to 25× faster) and power (from 120-2850× lower power) [4]. We then discuss the “jump-table” concept, previously introduced to model real-world NVM such as PCM [1] or PCMO, to describe the full cumulative distribution function (CDF) of conductance-change at each device conductance value, for both potentiation (SET) and depression (RESET). Using several types of artificially-constructed jump-tables, we assess the relative importance of deviations from an ideal NVM with perfectly linear conductance response.
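An illustrative sketch of the "jump-table" concept mentioned above (shapes, parameter values, and function names are assumptions, not the authors' code): for each quantized conductance level, the table stores the cumulative distribution of the conductance change produced by one programming pulse, and stochastic updates are drawn from it during training.

```python
# Sketch: sampling stochastic conductance updates from a jump-table (CDF per conductance level).
import numpy as np

rng = np.random.default_rng(1)

n_levels = 64                                  # quantized conductance states
delta_g = np.linspace(-0.05, 0.25, 128)        # candidate conductance changes

# Toy jump-table: mean update shrinks as the device saturates (non-linear response).
means = 0.1 * (1.0 - np.arange(n_levels) / n_levels)
pdfs = np.exp(-0.5 * ((delta_g[None, :] - means[:, None]) / 0.03) ** 2)
jump_cdf = np.cumsum(pdfs, axis=1)
jump_cdf /= jump_cdf[:, -1:]                   # normalize each row into a CDF

def apply_set_pulse(g, g_min=0.0, g_max=1.0):
    """Sample a stochastic conductance change for one SET pulse from the jump-table."""
    level = int((g - g_min) / (g_max - g_min) * (n_levels - 1))
    u = rng.uniform()
    dg = delta_g[np.searchsorted(jump_cdf[level], u)]
    return np.clip(g + dg, g_min, g_max)

g = 0.2
for _ in range(5):
    g = apply_set_pulse(g)
    print(f"conductance after pulse: {g:.3f}")
```

Artificially constructed tables like this one are what allow an "ideal" linear response to be compared against measured, non-linear device behavior.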
Great Lakes Symposium on VLSI | 2013
Abdulkadir Akin; Ipek Baz; Baris Atakan; Irem Boybat; Alexandre Schmid; Yusuf Leblebici
The computational complexity of disparity estimation algorithms and the need for large external and internal memory size and bandwidth make real-time disparity estimation challenging, especially for High Resolution (HR) images. This paper proposes a hardware-oriented adaptive window size disparity estimation (AWDE) algorithm and its real-time reconfigurable hardware implementation, targeting HR video with high-quality disparity results. The proposed algorithm is a hybrid solution involving the Sum of Absolute Differences and the Census cost computation methods to vote for and select the best-suited disparity candidates. It utilizes a pixel-intensity-based refinement step to remove faulty disparity computations. The AWDE algorithm dynamically adapts the window size to the local texture of the image to increase disparity estimation quality. The proposed reconfigurable hardware of the AWDE algorithm handles 60 frames per second on a Virtex-5 FPGA at 1024×768 XGA video resolution for a 120-pixel disparity range.
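A minimal sketch (an assumption, not the paper's hardware implementation) of the two matching costs that the AWDE algorithm combines: the Sum of Absolute Differences (SAD) over a window and the Hamming distance between Census transforms. The window size itself would be adapted to local texture in the full algorithm.

```python
# Sketch: SAD and Census matching costs for one pair of candidate windows.
import numpy as np

def sad_cost(left_win, right_win):
    """Sum of absolute intensity differences over the matching window."""
    return int(np.abs(left_win.astype(int) - right_win.astype(int)).sum())

def census(win):
    """Census transform: bit vector comparing each pixel to the window center."""
    return (win > win[win.shape[0] // 2, win.shape[1] // 2]).ravel()

def census_cost(left_win, right_win):
    """Hamming distance between the Census bit vectors of the two windows."""
    return int(np.count_nonzero(census(left_win) != census(right_win)))

# Tiny usage example on random 7x7 windows.
rng = np.random.default_rng(2)
lw = rng.integers(0, 256, (7, 7))
rw = rng.integers(0, 256, (7, 7))
print(sad_cost(lw, rw), census_cost(lw, rw))
```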
Conference on Ph.D. Research in Microelectronics and Electronics | 2017
Irem Boybat; Manuel Le Gallo; Timoleon Moraitis; Yusuf Leblebici; Abu Sebastian; Evangelos Eleftheriou
Artificial neural networks (ANN) have become a powerful tool for machine learning. Resistive memory devices can be used for the realization of a non-von Neumann computational platform for ANN training in an area-efficient way. For instance, the conductance values of phase-change memory (PCM) devices can be used to represent synaptic weights and can be updated in-situ according to learning rules. However, non-ideal device characteristics pose challenges to reach competitive classification accuracies. In this paper, we investigate the impact of granularity and stochasticity associated with the conductance changes on ANN performance. Using a PCM prototype chip fabricated in the 90 nm technology node, we present a detailed experimental characterization of the conductance changes. Simulations are done in order to quantify the effect of the experimentally observed conductance change granularity and stochasticity on classification accuracies in a fully connected ANN trained with backpropagation.
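An illustrative sketch, not the chip-level experiment described above, of how conductance-change granularity and stochasticity can be injected into a backpropagation-style weight update: the requested update is quantized into a whole number of fixed-size pulses, and each pulse carries random variation. All parameter values are assumptions.

```python
# Sketch: weight update with granular, stochastic conductance changes.
import numpy as np

rng = np.random.default_rng(3)

def device_update(w, ideal_dw, granularity=0.01, noise_std=0.005,
                  w_min=-1.0, w_max=1.0):
    """Quantize the requested update into pulses of fixed granularity and
    add per-pulse stochastic variation before clipping to the device range."""
    n_pulses = int(round(ideal_dw / granularity))
    actual_dw = (n_pulses * granularity
                 + noise_std * np.sqrt(abs(n_pulses)) * rng.standard_normal())
    return float(np.clip(w + actual_dw, w_min, w_max))

w = 0.0
for step in range(5):
    w = device_update(w, ideal_dw=0.023)   # requested update vs. what the device delivers
    print(f"step {step}: weight = {w:.4f}")
```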
Nature Communications | 2018
Irem Boybat; Manuel Le Gallo; S. R. Nandakumar; Timoleon Moraitis; Thomas Parnell; Tomas Tuma; Bipin Rajendran; Yusuf Leblebici; Abu Sebastian; Evangelos Eleftheriou
Neuromorphic computing has emerged as a promising avenue towards building the next generation of intelligent computing systems. It has been proposed that memristive devices, which exhibit history-dependent conductivity modulation, could efficiently represent the synaptic weights in artificial neural networks. However, precise modulation of the device conductance over a wide dynamic range, necessary to maintain high network accuracy, is proving to be challenging. To address this, we present a multi-memristive synaptic architecture with an efficient global counter-based arbitration scheme. We focus on phase change memory devices, develop a comprehensive model and demonstrate via simulations the effectiveness of the concept for both spiking and non-spiking neural networks. Moreover, we present experimental results involving over a million phase change memory devices for unsupervised learning of temporal correlations using a spiking neural network. The work presents a significant step towards the realization of large-scale and energy-efficient neuromorphic computing systems.

Memristive technology is a promising avenue towards realizing efficient non-von Neumann neuromorphic hardware. Boybat et al. propose a multi-memristive synaptic architecture with a counter-based global arbitration scheme to address challenges associated with non-ideal memristive device behavior.
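A minimal sketch of the multi-memristive synapse idea described above: each synapse is composed of several devices, the effective weight is the sum of their conductances, and a single global counter selects which device receives the next programming pulse. Class and variable names are illustrative assumptions, not the authors' code.

```python
# Sketch: multi-memristive synapse with global counter-based arbitration.
import numpy as np

rng = np.random.default_rng(4)

class MultiMemristiveSynapse:
    def __init__(self, n_devices=4, g_min=0.0, g_max=1.0):
        self.g = np.full(n_devices, g_min)
        self.g_min, self.g_max = g_min, g_max

    def weight(self):
        return float(self.g.sum())

    def potentiate(self, counter, dg_mean=0.1, dg_std=0.03):
        """Apply one stochastic SET pulse to the device picked by the global counter."""
        idx = counter % len(self.g)
        dg = max(0.0, rng.normal(dg_mean, dg_std))
        self.g[idx] = min(self.g[idx] + dg, self.g_max)

# One global counter arbitrates across all synapses, so no per-synapse
# selection state needs to be stored.
global_counter = 0
synapses = [MultiMemristiveSynapse() for _ in range(3)]
for _ in range(8):
    for s in synapses:
        s.potentiate(global_counter)
    global_counter += 1
print([round(s.weight(), 3) for s in synapses])
```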
International Reliability Physics Symposium | 2015
Robert M. Shelby; Geoffrey W. Burr; Irem Boybat; Carmelo di Nolfo
A large-scale artificial neural network, a three-layer perceptron, is implemented using two phase-change memory (PCM) devices to encode the weight of each of 164,885 synapses. The PCM conductances are programmed using a crossbar-compatible pulse scheme, and the network is trained to recognize a 5000-example subset of the MNIST handwritten digit database, achieving 82.2% accuracy during training and 82.9% generalization accuracy on unseen test examples. A simulation of the network performance is developed that incorporates a statistical model of the PCM response, allowing quantitative estimation of the tolerance of the network to device variation, defects, and conductance response.
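A hedged sketch of the two-device weight encoding used in this line of work: each synaptic weight is G_plus minus G_minus, potentiation applies a partial-SET pulse to G_plus and depression applies one to G_minus, avoiding abrupt RESET pulses during training. The saturation-handling refresh shown here is an illustrative assumption, not the paper's exact procedure.

```python
# Sketch: a synaptic weight stored on a pair of PCM devices, W = G_plus - G_minus.
G_MAX = 1.0
SET_STEP = 0.05

class PCMPairSynapse:
    def __init__(self):
        self.g_plus = 0.0
        self.g_minus = 0.0

    def weight(self):
        return self.g_plus - self.g_minus

    def update(self, sign):
        """+1: potentiate (SET pulse on G_plus); -1: depress (SET pulse on G_minus)."""
        if sign > 0:
            self.g_plus = min(self.g_plus + SET_STEP, G_MAX)
        else:
            self.g_minus = min(self.g_minus + SET_STEP, G_MAX)
        # When both devices approach saturation, re-program the net weight
        # onto a single device (an occasional "refresh", assumed here).
        if self.g_plus > 0.9 * G_MAX and self.g_minus > 0.9 * G_MAX:
            w = self.weight()
            self.g_plus, self.g_minus = max(w, 0.0), max(-w, 0.0)

s = PCMPairSynapse()
for sign in [+1, +1, -1, +1, -1, -1]:
    s.update(sign)
print(round(s.weight(), 2))
```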
Nature Communications | 2018
N. Gong; T. Idé; SangBum Kim; Irem Boybat; Abu Sebastian; V. Narayanan; T. Ando
Dense crossbar arrays of non-volatile memory (NVM) can potentially enable massively parallel and highly energy-efficient neuromorphic computing systems. The key requirements for the NVM elements are continuous (analog-like) conductance tuning capability and switching symmetry with acceptable noise levels. However, most NVM devices show non-linear and asymmetric switching behaviors. Such non-linear behaviors render separation of signal and noise extremely difficult with conventional characterization techniques. In this study, we establish a practical methodology based on Gaussian process regression to address this issue. The methodology is agnostic to switching mechanisms and applicable to various NVM devices. We show the tradeoff between switching symmetry and signal-to-noise ratio for HfO2-based resistive random access memory. Then, we characterize 1000 phase-change memory devices based on Ge2Sb2Te5 and separate the total variability into device-to-device variability and inherent randomness from individual devices. These results highlight the usefulness of our methodology for realizing ideal NVM devices for neuromorphic computing.

The application of resistive and phase-change memories in neuromorphic computation will require efficient methods to quantify device-to-device and switching variability. Here, the authors assess the impact of a broad range of device switching mechanisms using machine learning regression techniques.
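A minimal sketch, under assumed synthetic data and kernel choices, of the core idea of using Gaussian process regression to separate signal from noise in switching data: the fitted mean captures the (possibly non-linear) deterministic response, and the residuals estimate the noise level.

```python
# Sketch: Gaussian process regression separating switching response from noise.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

# Synthetic measurements: conductance change per pulse vs. current conductance,
# with a saturating (non-linear) mean response plus device noise.
g = rng.uniform(0.0, 1.0, 200)
dg = 0.1 * np.exp(-3.0 * g) + rng.normal(0.0, 0.01, g.shape)

kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(g.reshape(-1, 1), dg)

mean_response = gpr.predict(g.reshape(-1, 1))   # deterministic "signal"
noise_std = np.std(dg - mean_response)          # residual "noise"
print(f"estimated noise std: {noise_std:.4f}")
```

Because the regression makes no mechanistic assumptions about the device physics, the same procedure can be applied to RRAM and PCM data alike, which is the sense in which the methodology is switching-mechanism agnostic.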
Device Research Conference | 2017
S. R. Nandakumar; Irem Boybat; M. Le Gallo; Abu Sebastian; Bipin Rajendran; Evangelos Eleftheriou
We demonstrate, for the first time, the feasibility of supervised learning in third-generation Spiking Neural Networks (SNNs) using multi-level cell (MLC) phase change memory (PCM) synapses [1]. We highlight two key novel contributions: (i) As opposed to second-generation neural networks used in machine learning algorithms [2], or spike-timing-dependent plasticity based unsupervised learning in SNNs [3], we use a spike-triggered supervised learning algorithm (NormAD [4]) for the weight updates. (ii) SNN learning capability is demonstrated using a comprehensive phenomenological model of MLC PCM that accurately captures the statistics of inter-cell and intra-cell programming variability. This work is a harbinger of efficient supervised SNN learning systems.
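A heavily simplified sketch in the spirit of a spike-triggered supervised update (the exact kernels and normalization of the published NormAD algorithm are not reproduced here, and all signals are stand-ins): the weight change is driven by the error between desired and observed output spikes, gated by a normalized trace of recent presynaptic activity.

```python
# Sketch: spike-triggered supervised weight update driven by an output error signal.
import numpy as np

rng = np.random.default_rng(6)

T, n_in = 100, 16                                # time steps, input neurons
lr, tau = 0.05, 10.0

pre_spikes = (rng.random((T, n_in)) < 0.05).astype(float)
desired = (rng.random(T) < 0.02).astype(float)
observed = (rng.random(T) < 0.02).astype(float)  # stand-in for the neuron's actual output

w = np.zeros(n_in)
trace = np.zeros(n_in)                           # low-pass filtered presynaptic activity
for t in range(T):
    trace = trace * np.exp(-1.0 / tau) + pre_spikes[t]
    err = desired[t] - observed[t]               # +1: missing spike, -1: extra spike
    if err != 0.0:
        d = trace / (np.linalg.norm(trace) + 1e-9)   # normalized update direction
        w += lr * err * d                        # update only at error spike times
print(np.round(w, 3))
```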
Journal of Applied Physics | 2018
S. R. Nandakumar; Manuel Le Gallo; Irem Boybat; Bipin Rajendran; Abu Sebastian; Evangelos Eleftheriou
Phase-change memory (PCM) is an emerging non-volatile memory technology that is based on the reversible and rapid phase transition between the amorphous and crystalline phases of certain phase-change materials. The ability to alter the conductance levels in a controllable way makes PCM devices particularly well-suited for synaptic realizations in neuromorphic computing. A key attribute that enables this application is the progressive crystallization of the phase-change material and subsequent increase in device conductance by the successive application of appropriate electrical pulses. There is significant inter- and intra-device randomness associated with this cumulative conductance evolution, and it is essential to develop a statistical model to capture this. PCM also exhibits a temporal evolution of the conductance values (drift), which could also influence applications in neuromorphic computing. In this paper, we have developed a statistical model that describes both the cumulative conductance evolution and conductance drift. This model is based on extensive characterization work on 10 000 memory devices. Finally, the model is used to simulate the supervised training of both spiking and non-spiking artificial neuronal networks.
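An illustrative sketch combining the two ingredients the abstract describes: a state-dependent, stochastic conductance increment per SET pulse and a power-law conductance drift between pulses. Parameter values are assumptions for illustration, not the fitted model from the 10 000-device characterization.

```python
# Sketch: cumulative conductance evolution with stochastic increments and power-law drift.
import numpy as np

rng = np.random.default_rng(7)

def pulse_increment(g, g_max=25.0):
    """Mean increment shrinks as the device crystallizes; the spread stands in
    for inter- and intra-device randomness."""
    mu = 2.0 * (1.0 - g / g_max)
    sigma = 0.5 + 0.05 * g
    return max(0.0, rng.normal(mu, sigma))

def drift(g, t, t0=1.0, nu=0.05):
    """Power-law conductance drift: G(t) = G(t0) * (t / t0) ** (-nu)."""
    return g * (t / t0) ** (-nu)

g = 1.0
for pulse in range(10):
    g = min(g + pulse_increment(g), 25.0)        # apply one SET pulse
    g = drift(g, t=10.0)                         # 10 s of drift before the next pulse
print(f"conductance after 10 pulses with drift: {g:.2f} uS")
```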