A recipe for creating ideal hybrid memristive-CMOS neuromorphic computing systems
Memristive Neuromorphic Systems
E. Chicca a) and G. Indiveri b)
Faculty of Technology and Cognitive Interaction Technology - Center of Excellence (CITEC), Bielefeld University, Bielefeld, Germany
Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland
(Dated: 13 December 2019)
The development of memristive device technologies has reached a level of maturity to enable the design of complex and large-scale hybrid memristive-CMOS neural processing systems. These systems offer promising solutions for implementing novel in-memory computing architectures for machine learning and data analysis problems. We argue that they are also ideal building blocks for integration in neuromorphic electronic circuits suitable for ultra-low power brain-inspired sensory processing systems, therefore leading to innovative solutions for always-on edge-computing and Internet-of-Things (IoT) applications. Here we present a recipe for creating such systems based on design strategies and computing principles inspired by those used in mammalian brains. We enumerate the specifications and properties of memristive devices required to support always-on learning in neuromorphic computing systems and to minimize their power consumption. Finally, we discuss in which cases such neuromorphic systems can complement conventional processing ones and highlight the importance of exploiting the physics of both the memristive devices and of the CMOS circuits interfaced to them.

Neuromorphic computing has recently received considerable attention as a discipline that can offer promising technological solutions for implementing power- and size-efficient sensory-processing, learning, and Artificial Intelligence (AI) applications, especially in cases in which the computing system has to operate autonomously “at the edge”, i.e., without having to connect to powerful (but power hungry) server farms in the “cloud”. The term “neuromorphic” was originally coined in the early 90’s by Carver Mead to refer to mixed-signal analog/digital Very Large Scale Integration (VLSI) computing systems based on the organizing principles used by biological nervous systems.
In that context, “neuromorphic engineering” emerged as an interdisciplinary research field deeply rooted in biology that focused on building electronic neural processing systems by exploiting the physics of silicon to directly “emulate” the bio-physics of real neurons and synapses. More recently, the definition of the term “neuromorphic” has been extended in two additional directions: on one hand, to describe more generic spike-based processing systems engineered to “simulate” spiking neural networks for the exploration of large-scale computational neuroscience models; and on the other hand, to describe dedicated electronic neural architectures that make use of both electronic Complementary Metal-Oxide Semiconductor (CMOS) circuits and memristive devices to implement neuron and synapse circuits.

Another recent and very promising trend in developing dedicated hardware architectures for building accelerated simulators of artificial neural networks is related to the field of machine learning and AI. The types of neural networks being proposed within this context are only loosely inspired by biology, are aimed at high-accuracy pattern recognition based on large data-sets, and require large amounts of memory for

a) Electronic mail: [email protected]
b) Electronic mail: [email protected]
FIG. 1. The ideal memristive neuromorphic computing system requires the right mix of CMOS circuits and memristive devices, as well as the proper use of spatial resources and temporal dynamics, that need to be well matched to the system’s signal-processing applications and use-cases.

storing network states and parameters. While this approach is producing amazing results in a wide range of application areas, the computing systems used to simulate these networks use significant amounts of compute resources and power, especially for the training phase: the learning algorithms rely on high-precision digital representations for calculating high-accuracy gradients, and they typically require the storage (and transfer from peripheral memory to central processing areas) of very large data-sets. Furthermore, they often separate the training from the inference phase, dismissing the ability to adapt to novel stimuli and changing environmental conditions, typical of biological systems.

While there are examples of hybrid memristive-CMOS hardware architectures being developed to provide support for AI deep network accelerators, it is important to clarify that many of the hybrid memristive-CMOS neuromorphic circuits proposed in the literature, as well as the original neuromorphic approach of emulating biological neural systems proposed by Mead, are distinct and complementary to the machine learning one. While the machine learning approach is based on software algorithms developed to minimize the recognition error in very specific pattern recognition tasks, the original neuromorphic approach is based on brain-inspired electronic circuits and hardware architectures designed for reproducing the function of cortical and biological neural circuits.
As a consequence, this approach aims at understanding how to build robust and low-power neural processing systems using inhomogeneous and highly variable components, fault-tolerant massively parallel arrays of computational elements, and in-memory computing (non von Neumann) information processing architectures. In the following, when discussing “hybrid CMOS-memristive neuromorphic computing systems”, we will refer to this specific approach.

Our recipe (Fig. 1) for optimally building neuromorphic systems by co-integrating memristive devices with CMOS circuits is based on the following considerations.

a. Lay out the ingredients in parallel on the worktop. To minimize power consumption and maximize robustness to variability, it is important to use physically distinct instantiations of neuron and synapse circuits, distributed across the silicon substrate. This strategy is very different from the one used to build classical computing systems based on the von Neumann architecture. In classical processors there is a single or a small number of computing blocks that are time-multiplexed at very high clock rates to execute calculations, or to simulate many “parallel” neural processes. The continuous transfer of data between memory and the time-multiplexed processing unit(s) required to carry out computation is limited by the infamous von Neumann bottleneck, and is the major cause of high energy consumption. In contrast, the amazing energy efficiency of biological systems, and of the neuromorphic ones that emulate them, arises from the in-memory computing nature of their architectures: there are multiple instances of neuron and synapse elements that carry out computation and at the same time store the network state. The disadvantage of having distributed state-full neuron and synapse circuits is that it can require a significant amount of silicon real-estate for integrating all their memory structures (e.g., see the IBM TrueNorth chip).
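As a toy illustration of this in-memory, non-von-Neumann organization (a sketch with invented names and parameters, not a model of any of the chips cited here), each computing element can be thought of as storing its own state and updating it in place, with no central processor fetching that state over a shared bus:

```python
# Illustrative sketch only: a stateful leaky integrate-and-fire element in
# which memory (the membrane state) and computation are co-located, so no
# state is shuttled to a central processing unit. All parameter values are
# assumptions chosen for readability.

class Neuron:
    """A state-full element: it stores and updates its own membrane state."""
    def __init__(self, leak=0.9, threshold=1.0):
        self.v = 0.0              # membrane state stored *in* the element
        self.leak = leak
        self.threshold = threshold

    def step(self, input_current):
        # The update happens in place; nothing moves to a shared processor.
        self.v = self.leak * self.v + input_current
        if self.v >= self.threshold:
            self.v = 0.0          # reset after emitting a spike
            return 1
        return 0

# A "parallel array" of such elements: conceptually they all update at once,
# each holding its own slice of the network state.
population = [Neuron() for _ in range(4)]
spikes = [n.step(0.6) for n in population]  # first step: sub-threshold
spikes = [n.step(0.6) for n in population]  # accumulated state crosses threshold
```

The point of the sketch is the absence of any central memory: the network state exists only inside the distributed elements themselves, which is what makes the biological (and neuromorphic) organization escape the von Neumann bottleneck.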
However, the progress in CMOS fabrication technologies, the emergence of monolithic 3D integration technologies, and the possibility to co-integrate nano-scale memristive devices with mixed-signal analog/digital CMOS circuits in advanced node processes can substantially mitigate this problem.

b. Take your time. By eliminating the need to use time-multiplexed processing elements, these neuromorphic processing architectures can be designed to run in real physical time (time represents itself), as it happens in real biological neural networks. This is a radical departure from the classical way of implementing computation, which has decoupled computer simulation time from physical time since the very early designs of both computing systems and artificial neural networks. For sensory-motor processing systems and edge-computing applications that need to measure and process natural signals, this is a tremendous advantage. Allowing time to represent itself removes the need for complicated clock or synchronizing structures that would otherwise be required to track the passage of simulated time. All computing elements in such neuromorphic systems are then coupled through the common variable of real time (e.g., for implementing binding by synchronization). To build sensory-processing systems that are best tuned to the signals they are required to process (or that can learn to extract information from them), it is necessary to use neural processing and learning circuits that have the same time-constants and dynamics of their input signals (e.g., to create “matched filters” that can naturally resonate with their inputs). In the case of natural signals typically processed by humans, such as voice or gestures, these time constants should range from milliseconds to minutes or longer. These time constants are extremely long, compared to the typical processing rates of digital circuits.
This allows neuromorphic systems to reduce power consumption even more and to have very large bandwidths for seamlessly transmitting signals across the network and via I/O pathways in shared buses. However, such long time constants can be very difficult to achieve using pure CMOS circuits. Memristive devices offer an ideal solution to this limitation. Although such devices are usually treated as non-volatile memories, certain material systems exhibit a rather volatile resistance change after electrical biasing, with temporal scales that can be tuned and matched to biological neural and synaptic dynamics. Recent demonstrations of volatile memristive devices used to model neural dynamics include the emulation of nociceptors (i.e., sensory neuron receptors able to detect noxious stimuli) and the implementation of spike-timing dependent learning rules with tunable forgetting rates. In addition to exploiting the physics of the memristive devices to tune their volatility properties, it is possible to co-design more complex hybrid memristive-CMOS neuromorphic circuits to implement the wide range of time constants needed to model the multiple plasticity phenomena observed in biology (ranging from milliseconds in synaptic short-term depression to hours and more in structural plasticity) and crucial for artificial neural processing systems.

c. Don’t worry about density. Memristive devices are often praised for their small (nano-scale) size, which can be exploited to develop very high density cross-bars in which the memristive devices are used as learning synapses. Nevertheless, current high-density approaches are not able to produce learning dynamics sufficiently complex for solving real-world tasks (e.g., with matched temporal scales, or suitable for life-long learning requirements). The achievement of such dynamics in a single device requires sophisticated material engineering efforts which are still beyond the current state-of-the-art.
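The volatile device behavior described above can be captured by a minimal numerical sketch (our own illustrative model with assumed parameters, not a fitted device model from the literature): the conductance relaxes exponentially toward a resting value with a tunable retention time constant, so short-term “forgetting” dynamics emerge from the device state itself:

```python
# Minimal sketch (illustrative assumptions, not device data): a volatile
# memristive conductance that decays toward a resting value g_rest with a
# tunable retention time constant tau, and is incremented by programming pulses.
import math

def volatile_conductance(pulse_times, t_end, tau=0.05, dg=1.0,
                         g_rest=0.0, dt=0.001):
    """Return the discrete-time conductance trace of a volatile device."""
    g, trace = g_rest, []
    pulses = {round(t / dt) for t in pulse_times}
    for k in range(int(t_end / dt)):
        g = g_rest + (g - g_rest) * math.exp(-dt / tau)  # relaxation to rest
        if k in pulses:
            g += dg                                      # pulse-induced increment
        trace.append(g)
    return trace

# Short tau -> fast forgetting; long tau -> long, biologically relevant retention.
fast = volatile_conductance([0.0], t_end=0.2, tau=0.01)
slow = volatile_conductance([0.0], t_end=0.2, tau=1.0)
```

Tuning `tau` here stands in for the material-level engineering of volatility discussed above: the same update mechanism yields either short-term depression-like forgetting or much longer retention.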
Conversely, by dismissing the chimera of high-density synaptic arrays and co-integrating nano-scale memory elements with mixed-signal analog/digital neuromorphic circuits, it is possible to implement sophisticated learning mechanisms that can exploit many features of memristive devices, besides their compact footprint, such as non-volatility, stochasticity, or state-dependent conductance changes. Furthermore, combining multiple transistors with one or more memristive devices enables the design of complex synapse circuits that can reduce the effect of variability, enable the control of stochastic switching behaviors, and produce linear or non-linear state-dependent weight-updates.

d. Play it by ear: variability and randomness. Memristive devices are affected by both device-to-device and cycle-to-cycle variability. Significant material science and device technology research efforts are being made to minimize such variability. However, rather than fighting these variability effects with different materials or device technologies, neuromorphic systems can be designed to embrace and exploit them. Examples of theoretical neural processing frameworks that require variability can be found in the domains of ensemble learning, reservoir computing, and liquid state machines. Current efforts in neuromorphic engineering for implementing such frameworks to solve spatio-temporal pattern recognition problems rely on the variability provided by transistor device-mismatch effects. Integration of memristive devices with inhomogeneous properties in such architectures can provide a richer set of distributions useful for enhancing the computational abilities of these networks. Indeed, multiple circuit solutions have already been proposed to better control the shape and parameters of such distributions.

One important source of variability in the operational parameters of memristive devices is in their switching mechanism.
In filamentary memristive devices, this mechanism exhibits stochastic behavior which stems from the underlying filament formation process. This intrinsic probabilistic property of memristive devices can be exploited for implementing stochastic learning in neuromorphic architectures, which in turn can be used to implement faithful models of biological cortical microcircuits, solve memory capacity and classification problems in artificial neural network applications, and reduce the network sensitivity to their variability. Recent results on stochastic learning modulated by regularization mechanisms, such as homeostasis or intrinsic plasticity, present an excellent potential for exploiting the features of memristive devices, even when restricted to binary values.

e. Don’t (hard) limit your devices. In the context of deploying always-on learning systems (both artificial and biological) in real-world applications, a critical feature is their memory storage capacity. When designing hardware neuromorphic learning systems that have practical physical restrictions or limitations on the available resources (such as the number of memory devices integrated in the system, their resolution, precision, or dynamic range), it is important to be aware of the theoretical limits that set the bounds of achievable memory capacity and learning performance, independent of the device properties.

The thorough theoretical analysis on the limits of memory capacity in neural processing systems presented by Fusi and Abbott in 2007 provides essential guiding principles for the construction of artificial learning memristive systems. In this analysis, learning models are subdivided into four main categories, according to two key features: the synaptic weight bounds (hard or soft) and the (im)balance of potentiation and depression. Hard bounds are limits on the synaptic weight values that cannot be exceeded. Soft bounds are limits that can only be reached in the asymptotic limit.
Typically, in neural network models with hard bounds, the weight update step size is constant and therefore independent of the weight value itself. Conversely, soft bounds are introduced by allowing weight updates to depend on synaptic strength and to decrease as they approach the bound itself.

Even though it is clear that in real physical systems hard bounds are unavoidable (e.g., the supply rails in an electronic system), there is evidence that memristive devices exhibit soft bounds. Therefore, by combining CMOS circuits with memristive devices, it is possible to design hybrid circuits that can implement and control the devices’ soft bounds for improving learning at the network level and for improving the overall system performance, e.g., in terms of reduced power consumption and increased memory capacity. In contrast, it is impossible to precisely balance positive changes of synaptic weights with negative ones in hybrid memristive-CMOS neuromorphic computing systems. Given this unbalanced potentiation and depression property, the longest memory lifetime is achieved thanks to soft bounds, independently of the specific model chosen among those investigated by Fusi and Abbott.

To best implement the recipe we proposed, it is necessary to use the right list of ingredients: a combination of memristive devices with multiple complementary features. The recipe shopping-list should comprise devices with different properties of retention, endurance, variability, switching currents, and on-off ratios, that can be interfaced to analog and digital electronic CMOS circuits.
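The hard- versus soft-bound distinction discussed above can be sketched in a few lines (a simplified rendering with illustrative learning-rate values, not the exact formulation analyzed by Fusi and Abbott):

```python
# Sketch of hard- vs soft-bounded synaptic weight updates.
# All rate/step values are illustrative assumptions.

def hard_bound_update(w, potentiate, step=0.1, w_min=0.0, w_max=1.0):
    """Constant step size, clipped at the hard bounds."""
    w += step if potentiate else -step
    return min(max(w, w_min), w_max)

def soft_bound_update(w, potentiate, rate=0.1, w_min=0.0, w_max=1.0):
    """Step size shrinks as the weight approaches the bound,
    so the bound is only reached asymptotically."""
    if potentiate:
        return w + rate * (w_max - w)
    return w - rate * (w - w_min)

# Near the upper bound the soft-bounded increment becomes vanishingly small,
# while the hard-bounded one stays constant until it saturates at w_max.
mid_step = soft_bound_update(0.5, True) - 0.5    # ~0.05
edge_step = soft_bound_update(0.99, True) - 0.99  # ~0.001
```

In this toy form, the state-dependent (soft-bound) rule is exactly the kind of conductance-dependent update that memristive devices tend to exhibit naturally, which is why hybrid circuits can harness it for longer memory lifetimes.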
However, even before attempting to bake the final hardware neural processing system, it is important to have access to realistic and faithful device models, so that during the design phase it will be possible to specify the characteristics of both the CMOS and memristive components and understand how to best exploit their processing features for properly modeling the different aspects of plasticity and neural information processing systems.

Once fabricated, these neuromorphic processing systems should implement always-on life-long learning features so that they can adapt to changes in their input signals and keep a proper operating regime. This implies that the hybrid CMOS-memristive neuromorphic system would be updating its synaptic weights continuously, with every learning event. This requires the use of memristive devices that support small gradual conductance changes, and very small (sub-µA) currents, to minimize power consumption. In this case, the retention rate of such devices does not need to be extremely long, but should be compatible with the rate of weight update (which can be seen as a “refresh” operation) in the system. For example, in typical “edge” sensory-processing applications (wearable devices, home automation, surveillance, environmental monitoring, etc.) this could range from milliseconds to seconds or minutes.

On the other hand, once the learning process has terminated, or if there is a long pause in the rate of input signals (e.g., during the night in ambient monitoring tasks), then it will be useful to be able to consolidate the memories formed in non-volatile memristive devices with high on-off ratios and long retention rates.
In this case, since this operation would not be as frequent as the weight-update one for the on-line learning case, it would be acceptable to use devices that require larger switching currents, and that have a small number (even two) of stable states.

To match the time constants of the neural processing system to the dynamics of its input signals, to maintain a stable operating region over long time scales, and to optimize the learning of complex spatio-temporal patterns, it is necessary to implement both fast (short-term depression, long-term potentiation, long-term depression, etc.) and slow (intrinsic, homeostatic, structural) plasticity mechanisms, “orchestrating” multiple time-scales in the learning circuits. For this it is crucial to be able to use volatile memristive devices that span a wide range of retention rates (e.g., from milliseconds to hours).

In addition, to increase the memory capacity of such a system by introducing soft bounds for the synaptic weights, it is necessary to provide a mechanism that can realize the desired state dependence in the synaptic weight-update transfer function. This can be achieved by engineering the conductance change properties of the single memristive device, or by designing hybrid memristive-CMOS neuromorphic circuits interfaced with one or more memristive devices. Alternatively, one can use multiple binary memristive devices with probabilistic switching in combination with an analog circuit designed to properly control their switching probability.

As evident from the list of ingredients and recipe provided, it is now possible to build ultra-low power massively parallel arrays of processing elements that implement “beyond-von Neumann”, “in-memory computing” mixed-signal hybrid memristive-CMOS neural processing systems.

It is important to realize that for data-intense processing applications these neuromorphic systems should be used to complement, rather than replace, traditional von Neumann architectures.
They could be considered as the cherry on the cake of a complex AI inference engine, enabling always-on neural processing with life-long learning abilities. In this scenario, the hybrid memristive-CMOS neuromorphic computing system would carry out low-power computation, acting as a low-accuracy predictive “watch-dog” to quickly activate more powerful von Neumann architectures for high-accuracy recognition as soon as events of interest are detected.

On the other hand, there are many applications where these hybrid neuromorphic systems would represent both the cherry and the cake together: these are IoT, edge-computing, and perception-action tasks that are solved efficiently by biological systems but have been proven to be “difficult” for artificial intelligence algorithms. This difficulty could be measured with different performance metrics, ranging from physical size and energy consumption requirements to latency, adaptation, and the ability to learn in continuous-time closed-loop setups. By appropriately mixing all the ingredients and integrating them with mixed-signal analog/digital neuromorphic systems, it will be possible to produce computing systems that can directly emulate their biological counterparts. This emulation feature, which derives from the exploitation of the physics of the new materials and memory technologies being developed, is the key element for building efficient computing devices that can interact with the environment to solve artificial intelligence tasks in the real physical world, rather than simulating these solutions with general purpose computers. In other words, it is not very useful to simulate the bee brain on a supercomputer because it will never fly.

ACKNOWLEDGMENTS
We wish to acknowledge Melika Payvand and Regina Dittmann for the constructive comments on the manuscript. The illustration of Fig. 1 was kindly provided by University of Zurich, Information Technology, MELS/SIVIC, Sarah Steinbacher. This work is supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme, grant agreement No 724295 (NeuroAgents).

E. O. Neftci, “Data and power efficient intelligence with neuromorphic learning machines,” iScience, 52–68 (2018).
C. S. Thakur, J. L. Molin, G. Cauwenberghs, G. Indiveri, K. Kumar, N. Qiao, J. Schemmel, R. Wang, E. Chicca, J. Olson Hasler, J.-s. Seo, S. Yu, Y. Cao, A. van Schaik, and R. Etienne-Cummings, “Large-scale neuromorphic spiking array processors: A quest to mimic the brain,” Frontiers in Neuroscience, 891 (2018).
Y. Li, Z. Wang, R. Midya, Q. Xia, and J. J. Yang, “Review of memristor devices in neuromorphic computing: materials sciences and device challenges,” Journal of Physics D: Applied Physics, 503002 (2018).
Y. van de Burgt, A. Melianas, S. T. Keene, G. Malliaras, and A. Salleo, “Organic electronics for neuromorphic computing,” Nature Electronics, 386–397 (2018).
G. W. Burr, R. M. Shelby, A. Sebastian, S. Kim, S. Kim, S. Sidler, K. Virwani, M. Ishii, P. Narayanan, A. Fumarola, et al., “Neuromorphic computing using non-volatile memory,” Advances in Physics: X, 89–124 (2017).
C. Mead, “Neuromorphic electronic systems,” Proceedings of the IEEE, 1629–36 (1990).
S. Furber, F. Galluppi, S. Temple, and L. Plana, “The SpiNNaker project,” Proceedings of the IEEE, 652–665 (2014).
F. Akopyan, J. Sawada, A. Cassidy, R. Alvarez-Icaza, J. Arthur, P. Merolla, N. Imam, Y. Nakamura, P. Datta, G.-J. Nam, et al., “TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1537–1557 (2015).
M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y. Cao, S. H.
Choday, G. Dimou, P. Joshi, N. Imam, S. Jain, et al., “Loihi: A neuromorphic manycore processor with on-chip learning,” IEEE Micro, 82–99 (2018).
D. Ielmini and R. Waser,
Resistive Switching: From Fundamentals of Nanoionic Redox Processes to Memristive Device Applications (John Wiley & Sons, 2015).
I. Boybat, M. L. Gallo, T. Moraitis, T. Parnell, T. Tuma, B. Rajendran, Y. Leblebici, A. Sebastian, E. Eleftheriou, et al., “Neuromorphic computing with multi-memristive synapses,” Nature Communications, 2514 (2018).
Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, 436–444 (2015).
J. Schmidhuber, “Deep learning in neural networks: An overview,” Neural Networks, 85–117 (2015).
A. Sebastian, M. L. Gallo, and E. Eleftheriou, “Computational phase-change memory: beyond von Neumann computing,” Journal of Physics D: Applied Physics, 443002 (2019).
S. Ambrogio, P. Narayanan, H. Tsai, R. M. Shelby, I. Boybat, C. di Nolfo, S. Sidler, M. Giordano, M. Bodini, N. C. P. Farinha, B. Killeen, C. Cheng, Y. Jaoudi, and G. W. Burr, “Equivalent-accuracy accelerated neural-network training using analogue memory,” Nature, 60–67 (2018).
S. Dai, Y. Zhao, Y. Wang, J. Zhang, L. Fang, S. Jin, Y. Shao, and J. Huang, “Recent advances in transistor-based artificial synapses,” Advanced Functional Materials, 1903700 (2019).
E. Covi, S. Brivio, A. Serb, T. Prodromakis, M. Fanciulli, and S. Spiga, “Analog memristive synapse in spiking networks implementing unsupervised learning,” Frontiers in Neuroscience, 1–13 (2016).
S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, and W. Lu, “Nanoscale memristor device as synapse in neuromorphic systems,” Nano Letters, 1297–1301 (2010).
J. J. Yang and Q. Xia, “Organic electronics: Battery-like artificial synapses,” Nature Materials, 396 (2017).
R. Berdan, E. Vasilaki, A. Khiat, G. Indiveri, A. Serb, and T. Prodromakis, “Emulating short-term synaptic dynamics with memristive devices,” Scientific Reports, 1–9 (2016).
E. Chicca, F. Stefanini, C. Bartolozzi, and G.
Indiveri, “Neuromorphic electronic circuits for building autonomous cognitive systems,” Proceedings of the IEEE, 1367–1388 (2014).
G. Indiveri and S.-C. Liu, “Memory and information processing in neuromorphic systems,” Proceedings of the IEEE, 1379–1397 (2015).
G. Indiveri and Y. Sandamirskaya, “The importance of space and time for signal processing in neuromorphic agents,” IEEE Signal Processing Magazine, 16–28 (2019).
P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura, B. Brezzo, I. Vo, S. K. Esser, R. Appuswamy, B. Taba, A. Amir, M. D. Flickner, W. P. Risk, R. Manohar, and D. S. Modha, “A million spiking-neuron integrated circuit with a scalable communication network and interface,” Science, 668–673 (2014).
J. Backus, “Can programming be liberated from the von Neumann style?: a functional style and its algebra of programs,” Communications of the ACM, 613–641 (1978).
G. Indiveri, B. Linares-Barranco, R. Legenstein, G. Deligeorgis, and T. Prodromakis, “Integration of nanoscale memristor synapses in neuromorphic computing architectures,” Nanotechnology, 384010 (2013).
J. von Neumann, “First draft of a report on the EDVAC,” IEEE Annals of the History of Computing, 27–75 (1993).
W. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bull. Math. Biophys., 115–133 (1943).
M. Shadlen and J. Movshon, “Synchrony unbound: a critical evaluation of the temporal binding hypothesis,” Neuron, 67–77 (1999).
K. Boahen, “Communicating neuronal ensembles between neuromorphic chips,” in
Neuromorphic Systems Engineering, edited by T. Lande (Kluwer Academic, Norwell, MA, 1998) pp. 229–259.
S. Moradi, N. Qiao, F. Stefanini, and G. Indiveri, “A scalable multicore architecture with heterogeneous memory structures for dynamic neuromorphic asynchronous processors (DYNAPs),” IEEE Transactions on Biomedical Circuits and Systems, 106–122 (2018).
N. Qiao, C. Bartolozzi, and G. Indiveri, “An ultralow leakage synaptic scaling homeostatic plasticity circuit with configurable time scales up to 100 ks,” IEEE Transactions on Biomedical Circuits and Systems (2017), 10.1109/TBCAS.2017.2754383.
X. Zhang, S. Liu, X. Zhao, F. Wu, Q. Wu, W. Wang, R. Cao, Y. Fang, H. Lv, S. Long, Q. Liu, and M. Liu, “Emulating short-term and long-term plasticity of bio-synapse based on Cu/a-Si/Pt memristor,” IEEE Electron Device Letters, 1208–1211 (2017).
T. Ohno, T. Hasegawa, T. Tsuruoka, K. Terabe, J. Gimzewski, and M. Aono, “Short-term plasticity and long-term potentiation mimicked in single inorganic synapses,” Nature Materials, 591–595 (2011).
T. Werner, E. Vianello, O. Bichler, A. Grossi, E. Nowak, J.-F. Nodin, B. Yvert, B. D. Salvo, and L. Perniola, “Experimental demonstration of short and long term synaptic plasticity using OxRAM multi k-bit arrays for reliable detection in highly noisy input data,” in (IEEE, 2016) pp. 16–6.
J. Yoon, H. Jung, Z. Wang, K. M. Kim, H. Wu, V. Ravichandran, Q. Xia, C. S. Hwang, and J. J. Yang, “An artificial nociceptor based on a diffusive memristor,” Nature Communications, 417 (2018).
Z. Wang, S. Joshi, S. E. Savel’ev, H. Jiang, R. Midya, P. Lin, M. Hu, N. Ge, J. P. Strachan, Z. Li, Q. Wu, M. Barnell, G.-L. Li, H. L. Xin, R. S. Williams, Q. Xia, and J. J. Yang, “Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing,” Nature Materials, 101 (2017).
J. Xiong, R. Yang, J. Shaibo, H. M. Huang, H. K. He, W. Zhou, and X.
Guo, “Bienenstock, Cooper, and Munro learning rules realized in second-order memristors with tunable forgetting rate,” Advanced Functional Materials, 1807316 (2019).
M. Payvand, M. V. Nair, L. K. Müller, and G. Indiveri, “A neuromorphic systems approach to in-memory computing with non-ideal memristive devices: From mitigation to exploitation,” Faraday Discussions, 487–510 (2019).
S. Pi, C. Li, H. Jiang, W. Xia, H. Xin, J. J. Yang, and Q. Xia, “Memristor crossbar arrays with 6-nm half-pitch and 2-nm critical dimension,” Nature Nanotechnology, 35 (2019).
Q. Xia and J. J. Yang, “Memristive crossbar arrays for brain-inspired computing,” Nature Materials, 309 (2019).
M. V. Nair and G. Indiveri, “A differential memristive current-mode circuit,” European patent application EP 17183461.7 (2017), filed 27.07.2017.
M. Payvand, L. K. Muller, and G. Indiveri, “Event-based circuits for controlling stochastic learning with memristive devices in neuromorphic architectures,” in
Circuits and Systems (ISCAS), 2018 IEEE International Symposium on (IEEE, 2018) pp. 1–5.
E. O. Neftci, B. U. Pedroni, S. Joshi, M. Al-Shedivat, and G. Cauwenberghs, “Stochastic synapses enable efficient brain-inspired learning machines,” Frontiers in Neuroscience, 241 (2016).
S. Brivio, D. Conti, M. V. Nair, J. Frascaroli, E. Covi, C. Ricciardi, G. Indiveri, and S. Spiga, “Extended memory lifetime in spiking neural networks employing memristive synapses with nonlinear conductance dynamics,” Nanotechnology, 015102 (2019).
N. Diederich, T. Bartsch, H. Kohlstedt, and M. Ziegler, “A memristive plasticity model of voltage-based STDP suitable for recurrent bidirectional neural networks in the hippocampus,” Scientific Reports, 1–12 (2018).
A. Fantini, L. Goux, R. Degraeve, D. J. Wouters, N. Raghavan, G. Kar, A. Belmonte, Y. Chen, B. Govoreanu, and M. Jurczak, “Intrinsic switching variability in HfO2 RRAM,” in (IEEE, 2013) pp. 30–33.
M. Suri and V. Parmar, “Exploiting intrinsic variability of filamentary resistive memory for extreme learning machine architectures,” IEEE Transactions on Nanotechnology, 963–968 (2015).
A. Schönhals, R. Waser, and D. J. Wouters, “Improvement of SET variability in TaOx based resistive RAM devices,” Nanotechnology, 465203 (2017).
A. Prakash, D. Deleruyelle, J. Song, M. Bocquet, and H. Hwang, “Resistance controllability and variability improvement in a TaOx-based resistive memory for multilevel storage application,” Applied Physics Letters, 233104 (2015).
B. Govoreanu, D. Crotti, S. Subhechha, L. Zhang, Y. Y. Chen, S. Clima, V. Paraschiv, H. Hody, C. Adelmann, M. Popovici, O. Richard, and M. Jurczak, “A-VMCO: A novel forming-free, self-rectifying, analog memory cell with low-current operation, nonfilamentary switching and excellent variability,” in (IEEE, 2015) pp. T132–T133.
X. Sheng, C. E. Graves, S. Kumar, X. Li, B. Buchanan, L. Zheng, S. Lam, C. Li, and J. P.
Strachan, "Low-conductance and multilevel CMOS-integrated nanoscale oxide memristors," Advanced Electronic Materials, 1800876 (2019).
Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, 119–139 (1997).
H. Jaeger and H. Haas, "Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication," Science, 78–80 (2004).
W. Maass, T. Natschläger, and H. Markram, "Real-time computing without stable states: A new framework for neural computation based on perturbations," Neural Computation, 2531–2560 (2002).
S. Sheik, M. Coath, G. Indiveri, S. Denham, T. Wennekers, and E. Chicca, "Emergent auditory feature tuning in a real-time neuromorphic VLSI system," Frontiers in Neuroscience (2012), 10.3389/fnins.2012.00017.
O. Richter, R. F. Reinhard, S. Nease, J. Steil, and E. Chicca, "Device mismatch in a neuromorphic system implements random features for regression," in (IEEE, 2015) pp. 1–4.
A. Das, P. Pradhapan, W. Groenendaal, P. Adiraju, R. T. Rajan, F. Catthoor, S. Schaafsma, J. L. Krichmar, N. D. Dutt, and C. V. Hoof, "Unsupervised heart-rate estimation in wearables with liquid states and a probabilistic readout," Neural Networks, 134–147 (2018).
E. Donati, M. Payvand, N. Risi, R. Krause, K. Burelo, T. Dalgaty, E. Vianello, and G. Indiveri, "Processing EMG signals using reservoir computing on an event-based neuromorphic system," in
Biomedical Circuits and Systems Conference (BioCAS), 2018 (IEEE, 2018) pp. 1–4.
F. Bauer, D. Muir, and G. Indiveri, "Real-time ultra-low power ECG anomaly detection using an event-driven neuromorphic processor," IEEE Transactions on Biomedical Circuits and Systems (2019), (in press).
S. Gaba, P. Sheridan, J. Zhou, S. Choi, and W. Lu, "Stochastic memristive devices for computing and neuromorphic applications," Nanoscale, 5872–5878 (2013).
S. Ambrogio, S. Balatti, V. Milo, R. Carboni, Z.-Q. Wang, A. Calderoni, N. Ramaswamy, and D. Ielmini, "Neuromorphic learning and recognition with one-transistor-one-resistor synapses and bistable metal oxide RRAM," IEEE Transactions on Electron Devices, 1508–1515 (2016).
J. J. Yang, D. B. Strukov, and D. R. Stewart, "Memristive devices for computing," Nature Nanotechnology, 13–24 (2013).
M. Al-Shedivat, R. Naous, G. Cauwenberghs, and K. N. Salama, "Memristors empower spiking neurons with stochasticity," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 242–253 (2015).
M. Suri, D. Querlioz, O. Bichler, G. Palma, E. Vianello, D. Vuillaume, C. Gamrat, and B. DeSalvo, "Bio-inspired stochastic computing using binary CBRAM synapses," IEEE Transactions on Electron Devices, 2402–2409 (2013).
S. Balatti, S. Ambrogio, R. Carboni, V. Milo, Z. Wang, A. Calderoni, N. Ramaswamy, and D. Ielmini, "Physical unbiased generation of random numbers with coupled resistive switching devices," IEEE Transactions on Electron Devices, 2029–2035 (2016).
S. Habenschuss, Z. Jonke, and W. Maass, "Stochastic computations in cortical microcircuit models," PLoS Computational Biology, e1003311 (2013).
A. Destexhe and D. Contreras, "Neuronal computations with stochastic network states," Science, 85–90 (2006).
S. Fusi and W. Senn, "Eluding oblivion with smart stochastic selection of synaptic updates," Chaos: An Interdisciplinary Journal of Nonlinear Science, 1–11 (2006).
I. Ginzburg and H.
Sompolinsky, "Theory of correlations in stochastic neural networks," Physical Review E, 3171–91 (1994).
T. Dalgaty, M. Payvand, B. De Salvo, J. Casaz, G. Lama, E. Nowak, G. Indiveri, and E. Vianello, "Hybrid CMOS-RRAM neurons with intrinsic plasticity," in
International Symposium on Circuits and Systems (ISCAS), 2019 (IEEE, 2019).
A. Yousefzadeh, E. Stromatias, M. Soto, T. Serrano-Gotarredona, and B. Linares-Barranco, "On practical issues for stochastic STDP hardware with 1-bit synaptic weights," Frontiers in Neuroscience (2018).
J. Leugering and G. Pipa, "A unifying framework of synaptic and intrinsic plasticity in neural populations," Neural Computation, 945–986 (2018).
S. Fusi and L. Abbott, "Limits on the memory storage capacity of bounded synapses," Nature Neuroscience, 485–493 (2007).
S. Ganguli, D. Huh, and H. Sompolinsky, "Memory traces in dynamical systems," Proceedings of the National Academy of Sciences, 18970–18975 (2008).
J. Frascaroli, S. Brivio, E. Covi, and S. Spiga, "Evidence of soft bound behaviour in analogue memristive devices for neuromorphic computing," Scientific Reports, 71–78 (2018).
P. Del Giudice, S. Fusi, and M. Mattia, "Modeling the formation of working memory with networks of integrate-and-fire neurons connected by plastic synapses," Journal of Physiology Paris 97, 659–681 (2003).
F. Zenke, E. J. Agnes, and W. Gerstner, "Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks," Nature Communications, 1–13 (2015).
J. Bill and R. Legenstein, "A compound memristive synapse model for statistical learning through STDP in spiking neural networks," Frontiers in Neuroscience, 1–18 (2014).
G. Plastiras, M. Terzi, C. Kyrkou, and T. Theocharides, "Edge intelligence: Challenges and opportunities of near-sensor machine learning applications," in 2018 IEEE 29th International Conference on Application-specific Systems, Architectures and Processors (ASAP)