Neuromorphic Electronic Systems for Reservoir Computing
Department of Computer Science and Electrical Engineering, Jacobs University Bremen, 28759 Bremen, Germany
Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
[email protected]
16 April 2019
This chapter provides a comprehensive survey of the research on, and motivations for, hardware implementation of reservoir computing (RC) on neuromorphic electronic systems. Due to its computational efficiency and the fact that training amounts to a simple linear regression, both spiking and non-spiking implementations of reservoir computing on neuromorphic hardware have been developed. Here, a review of these experimental studies is provided to illustrate the progress in this area and to address the technical challenges which arise from this specific hardware implementation. Moreover, to deal with the challenges of computation on such unconventional substrates, several lines of potential solutions are presented based on advances in other computational approaches in machine learning.
Analog Microchips, FPGA, Memristors, Neuromorphic Architectures, Reservoir Computing
The term “neuromorphic computing” refers to a variety of brain-inspired computers, architectures, devices, and models that are used in the endeavor to mimic biological neural networks . In contrast to von Neumann architectures, biologically inspired neuromorphic computing systems are promising for being highly connected and parallel, incorporating learning and adaptation, collocating memory and processing, and requiring low power. By creating parallel arrays of connected synthetic neurons that are asynchronous, real-time, and data- or event-driven, neuromorphic devices offer an expedient substrate for modeling neuroscience theories as well as implementing computational paradigms to solve challenging machine learning problems.

The accustomed growth rates of digital computing performance levels (Moore's Law) are showing signs of flattening out . Furthermore, the exploding energy demand of digital computing devices and algorithms is approaching the limits of what is socially and environmentally tolerable . Neuromorphic technologies suggest escape routes from both predicaments due to their potential for unclocked parallelism and the minimal energy consumption of spiking dynamics. Moreover, neuromorphic systems have also received increased attention due to their scalability and small device footprint .

Significant keys to such advancements are remarkable progress in material science and nanotechnology, low-voltage analog CMOS design techniques, and theoretical and computational neuroscience. At the device level, using new materials and nanotechnologies to build extremely compact and low-power solid-state nanoscale devices has paved the way towards on-chip synapses with characteristic properties observed in biological synapses.
For instance, the “memory resistor”, or memristor, a non-linear nanoscale electronic element with volatile and non-volatile modes, is ubiquitously used in neuromorphic circuits to store multiple bits of information and to emulate dynamic weights with intrinsic plasticity features (e.g., spike-time-dependent plasticity (STDP)) . It has been argued that hybrid memristor-CMOS neuromorphic circuits may represent a proper building block for implementing biologically inspired probabilistic/stochastic/approximate computing paradigms that are robust to memristor device variability and fault-tolerant by design [6–8]. Similarly, conductive-bridging RAM (CBRAM) , which is a non-volatile memory technology, and atomic switches  – nanodevices implemented using metal-oxide-based memristors or memristive materials – have also been fabricated to implement both short-term plasticity (STP) and long-term plasticity (LTP) in neuromorphic systems [11,12]. Both atomic switches and CBRAM have nano dimensions, are fast, and consume little energy . Spike-time-dependent depression and potentiation observed in biological synapses have also been emulated by phase change memory (PCM) elements in hybrid neuromorphic architectures where CMOS “neuronal” circuits are integrated with nanoscale “synaptic” devices. PCM elements are commonly used to achieve high density and scalability; they are compatible with CMOS circuits and show good endurance [13,14]. Programmable metallization cells (PMCs)  and oxide-resistive memory (OXRAM)  are other types of resistive memory technologies that have been demonstrated to exhibit STDP-like characteristics.
Beyond the CMOS technologies for neuromorphic computing, spintronic devices and optical (photonic) components have also been considered for neuromorphic implementation [4,17].

At the hardware level, a transition away from purely digital systems to mixed analog/digital implementations and on to purely analog, unclocked, spiking neuromorphic microchips has led to the emergence of more biological-like models of neurons and synapses, together with a collection of more biologically plausible adaptation and learning mechanisms. Digital systems are usually synchronous or clock-based, rely on Boolean logic-based gates and discrete values for computation, and tend to need more power. Analog systems, in contrast, tend to rely more on continuous values and the inherent physical characteristics of electronic devices for computation, and more closely resemble the biological brain, which takes advantage of physical properties rather than Boolean logic for computation . Analog systems, however, are significantly vulnerable to various types of thermal and electrical noise and artifacts. It is, therefore, argued that only computational paradigms that are robust to noise and faults may be proper candidates for analog implementation. Hence, among the wide diversity of computational models, a leading role is emerging for mixed analog/digital or purely analog implementations of artificial neural networks (ANNs). Two main trends in this arena are the exploitation of low-energy, spiking neural dynamics for “deep learning” solutions , and “reservoir computing” methods .

At the network and system level, new discoveries in neuroscience research are being gradually incorporated into hardware models, and mathematical theory frameworks are being developed to guide the algorithm and hardware developments.
A constructive paradigm shift has also occurred from strict structural replication of neuronal systems towards the hardware implementation of systemic/functional models of the biological brain . Consequently, over the past decade, a wide variety of model types have been implemented in electronic multi-neuron computing platforms to solve pattern recognition and machine learning tasks [18,22].

As mentioned above, among these computational frameworks, reservoir computing (RC) has been variously considered as a strategy to implement useful computations on such unconventional hardware platforms. An RC architecture comprises three major parts: the input layer feeds the input signal into a random, large, fixed recurrent neural network that constitutes the reservoir, from which the neurons in the output layer read out a desired output signal. In contrast to traditional (and “deep”) recurrent neural network (RNN) training methods, the input-to-reservoir and the recurrent reservoir-to-reservoir weights in an RC system are left unchanged after a random initialization, and only the reservoir-to-output weights are optimized during training. Within computational neuroscience, RC is best known in the form of liquid state machines (LSMs) , whereas the approach is known as echo state networks (ESNs)  in machine learning. Reflecting the different objectives in these fields, LSM models are typically built around more or less detailed spiking neuron models with biologically plausible parametrizations, while ESNs mostly use highly abstracted rate models for their neurons. Due to its computational efficiency, simplicity and lenient requirements, both spiking and non-spiking implementations of reservoir computing on neuromorphic hardware exist. However, proper solutions are still lacking for a variety of technological and information processing problems.
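The fixed-reservoir, trainable-readout scheme just described can be sketched in a few lines of NumPy. Everything below (network sizes, weight scalings, the one-step sine prediction task) is an illustrative assumption, not a detail taken from any of the cited hardware systems:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and scalings (hypothetical, not from the chapter).
N_IN, N_RES, T, WASHOUT = 1, 100, 500, 100

# Input and recurrent weights are random and stay fixed after initialization.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 ("echo state")

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, N_IN)."""
    x = np.zeros(N_RES)
    states = np.empty((len(u), N_RES))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ u_t + W @ x)  # simple tanh rate neurons
        states[t] = x
    return states

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0.0, 20.0 * np.pi, T + 1))[:, None]
X = run_reservoir(u[:-1])[WASHOUT:]   # discard the initial transient
y = u[1:][WASHOUT:]

# Training touches only the readout: ridge (regularized linear) regression.
reg = 1e-6
W_out = np.linalg.solve(X.T @ X + reg * np.eye(N_RES), X.T @ y)
mse = float(np.mean((X @ W_out - y) ** 2))
```

Note that only `W_out` is learned; `W_in` and `W` stay exactly as randomly initialized, which is what makes the approach attractive for hardware with fixed or imprecise synapses.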
For instance, regarding the choice of hardware, variations of stochasticity due to device mismatch, temporal drift, and aging must be taken into account; in the case of exploiting spiking neurons, an appropriate encoding scheme needs to be developed to transform the input signal into spike trains; the speed of physical processes may require further adjustment to achieve online learning and real-time information processing; and, depending on the available technology, compatible local learning rules must be developed for on-chip learning.

Here, a review of recent experimental studies is provided to illustrate the progress in neuromorphic electronic systems for RC and to address the above-mentioned technical challenges which arise from such hardware implementations. Moreover, to deal with the challenges of computation on such unconventional substrates, several lines of potential solutions are presented based on advances in other computational approaches in machine learning. In the remaining part of this chapter, we present an overview of the current approaches to implementing the reservoir computing model on digital neuromorphic processors, purely analog neuromorphic microchips and mixed digital/analog neuromorphic systems. Since neuromorphic computing attempts to implement more biological-like models of neurons and synapses, spiking implementations of RC will be highlighted.
Field programmable gate arrays (FPGAs) and application-specific integrated circuit (ASIC) chips – two categories of digital systems – have been very commonly utilized for neuromorphic implementations. To facilitate the application of computational frameworks in both embedded and stand-alone systems, FPGAs and ASICs offer considerable flexibility, reconfigurability, robustness, and fast prototyping .

As illustrated in Figure 1, an FPGA-based reservoir computer consists of neuronal units, a neuronal arithmetic section, input/output encoding/decoding components, an on-chip learning algorithm, memory control, numerous memory blocks such as RAMs and/or memristors wherein the neuronal and synaptic information is stored, and a Read/Write interface to access these memory blocks. Realizing spiking reservoir computing on such substrates entails tackling a number of critical issues related to spike coding, memory organization, parallel processing, on-chip learning and trade-offs between area, precision and power overheads. Moreover, in real-world applications such as speech recognition and biosignal processing, spiking dynamics might be inherently faster than real-time performance. In this section, an overview of experimental studies is provided to illustrate how these challenges have been addressed so far in the literature.

In the first digital implementation of spiking RC,  suggested storing the neuronal information in RAM, exploiting non-plastic exponential synapses and performing neuronal and synaptic operations serially. In this clock-based simulation, each time step consists of several operations, such as adding weights to the membrane or synapse accumulators, adding the synapse accumulators to the membrane's, decaying these accumulators, threshold detection and membrane reset.
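A minimal software sketch of this serial, clock-based update may clarify the sequence of operations; the decay constants, threshold and vector sizes are hypothetical placeholders, not values from the cited design:

```python
import numpy as np

# Illustrative constants (not taken from the cited implementation).
DECAY_SYN, DECAY_MEM, V_TH = 0.9, 0.95, 1.0

def lif_step(v, syn, spikes_in, weights):
    """One clocked time step mirroring the operations listed above.

    v:         (N,) membrane accumulators
    syn:       (N,) synapse accumulators
    spikes_in: (M,) binary input spikes this step
    weights:   (M, N) input-to-neuron weights
    """
    # 1) add the weights of incoming spikes to the synapse accumulators
    syn = syn + weights.T @ spikes_in
    # 2) add the synapse accumulators to the membrane accumulators
    v = v + syn
    # 3) decay both accumulators
    syn = syn * DECAY_SYN
    v = v * DECAY_MEM
    # 4) threshold detection and membrane reset
    spikes_out = (v >= V_TH).astype(float)
    v = np.where(spikes_out > 0, 0.0, v)
    return v, syn, spikes_out
```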
On a real-time speech processing task, this study shows that, in contrast to “serial processing, parallel arithmetic”  and “parallel processing, serial arithmetic” , serial implementation of both the arithmetic operations that define the dynamics of the neuron model and the processing operations associated with synaptic dynamics yields slower operation but low hardware costs. Although this architecture can be considered the first compact digital neuromorphic RC system to solve a speech recognition task, the advantage of distributed computing is not explored in this framework. Moreover, for larger networks and in more sophisticated tasks, a sequential processor – which calculates every neuronal unit separately to simulate a single time step of the network – does not seem to be efficient. Stochastic arithmetic was, therefore, exploited to parallelize the calculations and obtain a considerable speedup .

Fig. 1. A top-level schematic of an FPGA-based neuromorphic reservoir computer.

More biologically plausible models of spiking neurons (e.g., the Hodgkin–Huxley, Morris–Lecar and Izhikevich models ) are too sophisticated to be efficiently implemented on hardware and have many parameters which need to be tuned. On the other hand, simple hardware-friendly spiking neuron models, such as leaky integrate-and-fire (LIF) , often have a hard thresholding that makes supervised training difficult in spiking neural networks (SNNs). In spiking reservoir computing, however, the training of the readout mechanism amounts only to solving a linear regression problem, where the target output is a trainable linear combination of the neural signals within the reservoir. Linear regression is easily solved by standard linear algebra algorithms when arbitrary real-valued combination weights are admitted. However, for on-chip learning, the weights will be physically realized by states of electronic synapses, which currently can be reliably set only to a very small number of discrete values. It has recently been proved that computing optimal discrete readout weights in reservoir computing is NP-hard, and approximate (or heuristic) methods must be exploited to obtain high-quality solutions in reasonable time for practical uses . The spike-based learning algorithm proposed by  is an example of such approximate solutions for FPGA implementations. In contrast to offline learning methods, the proposed online learning rule avoids any intermediate data storage. Besides, through this abstract learning process, each synaptic weight is adjusted based only on the firing activities of the corresponding pre- and post-synaptic neurons, independent of the global communications across the neural network. In a speech recognition task, it has been shown that due to this locality, the overhead of hardware implementation (e.g., synaptic and membrane voltage precision) can be reduced without drastic effects on performance . This biologically inspired supervised learning algorithm was later exploited in developing an RC-based general-purpose neuromorphic architecture where the training of task-specific output neurons is integrated into the reconfigurable FPGA platform . The effects of catastrophic failures such as broken synapses, dead neurons and random errors, as well as errors in arithmetic operations (e.g., comparison, adding and shifting), in a similar architecture were addressed in , where the reservoir consists of excitatory and inhibitory LIF neurons that are randomly connected with non-plastic synapses governed by second-order response functions. The simulation results suggest that at least 8-bit resolution is needed for the efficacy of plastic synapses if the spike-based learning algorithm proposed by  is applied for training the readout weights. The recognition performance only degrades slightly when the precision of the fixed synaptic weights and membrane voltages for reservoir neurons is reduced down to 6 bits.

To realize on-chip learning on digital systems with extremely low-bit precision (binary values),  suggested an online pruning algorithm based on variances of firing activities to sparsify the readout connections. The cores of this reconfiguration scheme are an STDP learning rule, firing activity monitoring, and variance estimation. The readout synapses projected from low-variance reservoir neurons are then powered off to save energy.
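To make the discrete-weight issue concrete, a naive baseline heuristic (deliberately simpler than the spike-based algorithm cited above) is to solve the continuous ridge regression first and then round every readout weight to the nearest realizable synapse state. The number of levels and the toy data below are assumptions for illustration only:

```python
import numpy as np

def quantize_readout(w, levels):
    """Round each continuous readout weight to the nearest allowed level."""
    levels = np.asarray(levels, dtype=float)
    idx = np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)
    return levels[idx]

# Toy demonstration with made-up reservoir states and a linear target.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))     # 200 time steps, 20 reservoir neurons
w_true = rng.standard_normal(20)
y = X @ w_true

# Continuous ridge solution, then quantization to 9 synapse states.
w_cont = np.linalg.solve(X.T @ X + 1e-6 * np.eye(20), X.T @ y)
w_disc = quantize_readout(w_cont, levels=np.linspace(-2.0, 2.0, 9))

err_cont = float(np.mean((X @ w_cont - y) ** 2))
err_disc = float(np.mean((X @ w_disc - y) ** 2))
```

Rounding is not optimal (that is exactly the NP-hard problem), but it shows the typical accuracy gap between continuous and discretized readouts.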
To reduce the energy consumption of the hardware in FPGAs, the reservoir computer developed by [24,35] utilizes firing-activity-based power gating, turning off the neurons that seldom fire for a particular benchmark, and applies approximate arithmetic computing  to speed up the runtime in a speech recognition task.

Exploiting the spatial locality in a reservoir consisting of excitatory and inhibitory LIF neurons, a fully parallel design approach was also presented for real-time biomedical signal processing in [37,38]. In this design, the memory blocks were replaced by distributed memory to circumvent the long access times due to wire delay. Inspired by new discoveries in neuroscience,  developed a spiking temporal processing unit (STPU) to efficiently implement more biologically plausible synaptic response functions in digital architectures. The STPU offers a local temporal memory buffer including an arbitrary number of memory cells to model delayed synaptic transmission between neurons. This allows multiple connections between neurons with different synaptic latencies. Utilizing excitatory and inhibitory LIF neurons with different time scales and exploiting a second-order response function to model synaptic transmission, the effects of the input signal in a spoken digit recognition task only slowly “wash out” over time, enabling the reservoir to provide sufficient dynamical short-term memory capacity. In order to create a similar short-term memory on a reconfigurable digital platform with extremely low bit resolutions, an STDP mechanism was proposed by  for on-chip reservoir tuning. Applying this local learning rule to the reservoir synapses, together with a data-driven binary quantization algorithm, creates sparse connectivity within the reservoir in a self-organized fashion, leading to significant energy reduction.
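The firing-variance-based sparsification of readout connections mentioned above can be sketched as follows; the quantile criterion, keep fraction and synthetic activity traces are illustrative assumptions rather than details of the published algorithm:

```python
import numpy as np

def prune_readout(activity, w_out, keep_fraction=0.5):
    """Zero the readout weights fed by the least-variable reservoir neurons.

    activity: (T, N) matrix of recorded firing activities.
    w_out:    (N,) trained readout weights.
    """
    variances = activity.var(axis=0)
    cutoff = np.quantile(variances, 1.0 - keep_fraction)
    mask = variances >= cutoff          # keep only high-variance neurons
    return np.where(mask, w_out, 0.0), mask

rng = np.random.default_rng(2)
# Made-up activity: ten neurons barely fire, ten are highly active.
quiet = rng.binomial(1, 0.01, (1000, 10))
busy = rng.binomial(1, 0.5, (1000, 10))
activity = np.hstack([quiet, busy]).astype(float)

w = rng.standard_normal(20)
w_pruned, mask = prune_readout(activity, w, keep_fraction=0.5)
```

Synapses whose weights are zeroed this way can then be powered off, which is where the energy saving comes from.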
Stochastic activity-based STDP approaches , structural plasticity-based mechanisms , and correlation-based neuron gating rules  have also been introduced for efficient low-resolution tuning of the reservoir in hardware. Neural processes, however, require a large number of memory resources for storing various parameters, such as synaptic weights and internal neural states, and lead to a heavy clock distribution load and significant power dissipation. To tackle this problem,  suggested partitioning the memory elements inside each neuron that are activated at different phases of neural processing. This leads to an activity-based clock gating mechanism with the granularity of a partitioned memory group inside each neuron.

Another technique for power reduction is incorporating memristive crossbars into digital neuromorphic hardware to perform synaptic operations in on-chip/online learning and to store the synaptic weights in offline learning . In general, a two-terminal memristor device can be programmed by applying a large enough potential difference across its terminals, where the state change of the device depends on the magnitude, duration, and polarity of the potential difference. The device resistance states, representing synaptic weights, vary between a high resistance state (HRS) and a low resistance state (LRS), depending on the polarity of the voltage applied. At the system level, various sources of noise (e.g., random telegraph noise, thermal noise, and 1/f noise) arise from non-ideal behavior in memristive devices and distort the process of reading from the synapses and updating the synaptic weights. Given theoretical models for these stochastic noise processes, the effects of different manifestations of memristor read and write noise on the accuracy of neuromorphic RC in a classification task were investigated in . A hardware-feasible RC structure with a memristor double crossbar array has also been proposed in  and was tested on a real-time time series prediction task.
These two studies not only confirm that reservoir computing can properly cope with low-precision environments and noisy data, but they also experimentally demonstrate how RC benefits from memristor device variation to secure a more random, heterogeneous weight distribution, leading to more diverse responses for similar inputs. Although memristive crossbars are area- and energy-efficient, the digital circuitry to control the read/write logic to the crossbars is extremely power hungry, thus preventing the use of memristors in large-scale memory systems.

In the RC literature, it has also been shown that in time series prediction tasks, a cyclic reservoir – where neurons are distributed on a ring and connected with the same synaptic weights to produce cyclic rotations of the input vector – is able to operate with an efficiency comparable to the standard RC models [45,46]. In order to minimize the required hardware resources, therefore, a single cyclic reservoir with stochastic spiking neurons was implemented for a real-time time series prediction task . The architecture shows a high level of scalability and a prediction performance comparable to software simulations. In non-spiking reservoir computing implementations on reconfigurable digital systems, the reservoir has sometimes been configured as a ring topology and the readout
weights are trained through a gradient descent algorithm or ridge regression [48–50].
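Such a cyclic reservoir is attractive in hardware precisely because its weight matrix is trivial to construct: every neuron feeds its ring neighbor with one shared weight. A minimal sketch (size and weight value arbitrary):

```python
import numpy as np

def cyclic_reservoir(n, r):
    """Weight matrix of a ring reservoir: neuron i feeds neuron (i + 1) % n
    with the single shared weight r; all other entries are zero."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r
    return W

W = cyclic_reservoir(5, 0.8)
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
x1 = W @ x   # the state rotates one position per step, scaled by r
```

Only one weight value and n connections need to be stored, which is what makes the topology so cheap on FPGAs and crossbars.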
Digital platforms, tools and simulators offer robust and practical solutions to a wide range of engineering problems and provide convenient approaches to explore the quantitative behavior of neural networks. However, they do not seem to be ideal substrates for simulating detailed large-scale models of neural systems where density, energy efficiency, and resilience are important. Besides, the observation that the brain operates on analog principles and relies on the inherent physical characteristics of the neuronal system for computation motivated investigations in the field of neuromorphic engineering. Following the pioneering work conducted by Carver Mead  – wherein biologically inspired electronic sensors were integrated with analog circuits and an address-event-based asynchronous, continuous-time communication protocol was introduced – over the last decade, mixed analog/digital and purely analog very large scale integration (VLSI) circuits have been fabricated to emulate the electrophysiological behavior of biological neurons and synapses and to offer a medium in which neuronal networks can be realized directly in hardware rather than simply simulated on general-purpose computers . However, the fact that such platforms provide only a qualitative approximation to the exact performance of digitally simulated neural systems may preclude them from being the ideal substrate for solving engineering problems and achieving machine learning tasks where detailed quantitative investigations are essential. In addition to the low resolution, realizing global asynchrony and dealing with noisy and unreliable components seem to be formidable technical hurdles. An architectural solution can be partly provided by reservoir computing due to its bio-inspired principle, its robustness to an imperfect substrate, and the fact that only readout weights need to be trained . Similar to other analog and digital computing chips, transistors are the building blocks of analog neuromorphic devices.
It has been experimentally shown that in the “subthreshold” region of operation, the current-voltage characteristic curve of the transistor is exponential and analogous to the exponential dependence of active populations of voltage-gated ionic channels as a function of the potential across the membrane of a neuron. This similarity has paved the way towards the fabrication of compact analog circuits that implement electronic models of voltage-sensitive conductance-based neurons and conductance-based synapses, as well as computational circuitry to perform logarithmic functions, amplification, thresholding, multiplication, inhibition, and winner-take-all selection [53,54]. An example of a biophysically realistic neural electronic circuit is depicted in Figure 2. This differential pair integrator (DPI) neuron circuit consists of four components: 1) the input DPI filter (the M_L transistors), including the integrating membrane capacitor C_mem, models the neuron's leak conductance, which produces exponential sub-threshold dynamics in response to constant input currents; 2) a spike event generating amplifier (the M_A transistors) together with a positive-feedback circuit represents both sodium channel activation and inactivation dynamics; 3) a spike reset circuit with address event representation (AER) handshaking signals and refractory period functionality (the M_R transistors) emulates the potassium conductance functionality; and 4) a spike-frequency adaptation mechanism implemented by an additional DPI filter (the M_G transistors) produces an after-hyperpolarizing current proportional to the neuron's mean firing rate. See  for an insightful review of different design methodologies used for silicon neuron fabrication.

Fig. 2. A schematic of the DPI neuron circuit (reproduced from ).

In conjunction with circuitry that operates in subthreshold mode, exploiting memristive devices to model synaptic dynamics is a common approach in analog neuromorphic systems for power efficiency and performance boosting purposes [11,18]. In connection with the fully analog implementation of RC, it has been shown that by creating a functional reservoir consisting of numerous neurons connected through atomic nano-switches, the functional characteristics required for implementing biologically inspired computational methodologies in a synthetic experimental system can be displayed . A memristor crossbar structure has also been fabricated to connect the analog non-spiking reservoir neurons – which are distributed on a ring topology – to the output node [54,55]. In this CMOS-compatible crossbar, multiple weight states are achieved using multiple bi-stable memristors (with only two addressable resistance states). The structural simplicity of this architecture paves the way to independent control of each synaptic element. However, the discrete nature of memristive weights places a limit on the accuracy of reservoir computing; thus the number of weight states per synapse that are required for satisfactory accuracy must be determined beforehand.
Besides, appropriate learning algorithms for on-chip training purposes have not yet been implemented for fully analog reservoir computing. Another significant design challenge associated with memristor devices is the cycle-to-cycle variability in resistance values, even within a single memristor. With the same reservoir structure, therefore,  showed that by connecting memristor devices in series or parallel, a staircase memristor model could be constructed which not only has a delayed-switching effect between several somewhat stable resistance levels, but can also provide more reliable state values if a specific resistance level is required. This model of synaptic delay is particularly relevant for time delay reservoir methodologies .

From the information processing point of view, the ultimate aim of neuromorphic systems is to carry out neural computation in an energy-efficient way. However, the quantities relevant to the computation first have to be expressed in terms of the spikes that spiking neurons communicate with. One of the key components in both digital and analog neuromorphic reservoir computers is, therefore, a neural encoder which transforms the input signals into spike trains. Although the nature of the neural code (or neural codes) is an unresolved topic of research in neuroscience, based on what is known from biology, a number of neural information encodings have been proposed . Among them, hardware prototypes of rate encoding, temporal encoding, inter-spike interval encoding, and latency (or rank order) encoding schemes have been implemented, mostly on digital platforms [49,56,58,59]. Digital configurations, however, require large hardware overheads associated with analog-to-digital converters, operational amplifiers, digital buffers, and electronic synchronizers, and will increase the cost of implementation.
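Among the encoding schemes listed above, rate encoding is the simplest to express in software: each normalized input sample sets a per-time-step Bernoulli spike probability, a discrete-time approximation of Poisson rate coding. The input range and maximum rate below are assumptions, not parameters of any of the cited hardware encoders:

```python
import numpy as np

def rate_encode(signal, n_steps, max_rate=0.9, rng=None):
    """Encode normalized samples in [0, 1] as Bernoulli spike trains.

    Each sample value sets the probability of a spike per time step, so the
    empirical firing rate of a train tracks the amplitude of its sample.
    """
    if rng is None:
        rng = np.random.default_rng()
    signal = np.clip(np.asarray(signal, dtype=float), 0.0, 1.0)
    p = max_rate * signal[:, None]                 # spike probability per step
    return (rng.random((len(signal), n_steps)) < p).astype(np.uint8)

spikes = rate_encode([0.0, 0.5, 1.0], n_steps=1000, rng=np.random.default_rng(3))
rates = spikes.mean(axis=1)   # empirical firing rates track the input values
```

Temporal, inter-spike interval and latency codes trade this simplicity for fewer spikes or faster readout, which is why hardware encoders differ so much in cost.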
Particularly in analog reservoir computers, fabricating a fully analog encoding spike generator is of crucial importance, both to speed the process up and to optimize the required energy and hardware costs. Examples of such generators have been proposed, mainly, for analog implementations of delayed feedback reservoirs [59,60].

The majority of analog neuromorphic systems designed to solve machine learning/signal processing problems tend to rely on digital components, for instance, for pre-processing, encoding spike generation, storing the synaptic weight values and on-chip programmability/learning [61–63]. Inter-chip and chip-to-chip communications are also primarily digital in some analog neuromorphic platforms [61,64,65]. Mixed analog/digital architectures are, therefore, very common in neuromorphic systems. In the context of hardware reservoir computing, and inspired by the nonlinear properties of dendrites in biological neurons,  proposed a readout learning mechanism which returns binary values for synaptic weights, such that the “choice” of connectivity can be implemented in a mixed analog/digital platform with address event representation (AER) protocols where the connection matrix is stored in digital memory. Relying on the same communication protocol and exploiting the reconfigurable online learning spiking neuromorphic processor (ROLLS chip) introduced in ,  designed and fabricated a reservoir computer to detect spike patterns in bio-potential and local field potential (LFP) recordings. The ROLLS neuromorphic processor (Figure 3) contains 256 adaptive exponential integrate-and-fire (spiking) neurons implemented with mixed-signal analog/digital circuits. The neurons are connected to an array of 256 × 256 learning synapse circuits for modeling long-term plasticity mechanisms, an array of 256 × 256 programmable synapses with short-term plasticity circuits, and additional rows of synapse circuits that convey information about the state of the whole reservoir to each neuron. To fabricate non-spiking silicon neurons, current-mode differential amplifiers operating in their subthreshold regime have been used to emulate the hyperbolic tangent rate model behavior. For epileptic seizure detection and prosthetic finger control, it has been shown that a random distribution of weights in input-to-reservoir and reservoir synapses can be obtained by employing mismatches in transistor threshold voltages to design subthreshold bipolar synapses.

Fig. 3. Architecture of the ROLLS neuromorphic processor (reproduced from ).

Recently, in order to create a functional spiking reservoir on the Dynap-se board , a “reservoir transfer method” was proposed to transfer a functional ESN-type reservoir into a reservoir with analog, unclocked, spiking neurons . The Dynap-se board is a multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics, together with asynchronous digital circuits for managing the address-event traffic. It offers 4000 adaptive exponential integrate-and-fire (spiking) neurons implemented with mixed-signal analog/digital circuits. From the computational point of view, implementing efficient algorithms on this neuromorphic system encounters general problems such as bit resolution, device mismatch, uncharacterized neural models, unavailable state variables, and physical system noise. Realizing reservoir computing on this substrate additionally leads to an important problem associated with fine-tuning the reservoir to obtain the memory span required for a specific machine learning problem. The reservoir transfer method proposes a theoretical framework to learn ternary weights in a reservoir of spiking neurons by transferring features from a well-tuned echo state network simulated in software .
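The cited method learns its ternary weights by transferring features from a trained ESN; as a far simpler stand-in that only illustrates the ternary constraint itself, one can threshold a continuous weight matrix into the three levels {-a, 0, +a} and fit a single shared scale a by least squares. The threshold rule below is an assumption for illustration, not the published transfer algorithm:

```python
import numpy as np

def ternarize(W, threshold=0.5):
    """Map continuous weights to scaled ternary values {-a, 0, +a}.

    Entries with magnitude below `threshold * mean|W|` are dropped; survivors
    keep their sign and share one scale a minimizing the squared error.
    """
    t = threshold * np.mean(np.abs(W))
    T = np.sign(W) * (np.abs(W) > t)          # ternary pattern in {-1, 0, +1}
    nz = np.count_nonzero(T)
    # For a fixed pattern T, the optimal shared scale is a = <W, T> / <T, T>.
    a = (W * T).sum() / nz if nz else 0.0
    return a * T

rng = np.random.default_rng(4)
W = rng.standard_normal((50, 50))
W_t = ternarize(W)
err = float(np.linalg.norm(W - W_t) / np.linalg.norm(W))
```

Only three weight states (plus one scale) then need to be realized in the synapse hardware, at the cost of the approximation error `err`.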
Empirical results from an ECG signal monitoring task showed that this reservoir with ternary weights not only can integrate information over a time span longer than the timescale of individual neurons, but also functions as an information processing medium with performance close to that of a standard, high-precision, deterministic, non-spiking ESN.

Reservoir computing appears to be a particularly widespread and versatile approach for harnessing unconventional nonlinear physical phenomena for useful computations. Spike-based neuromorphic microchips, on the other hand, promise one or two orders of magnitude less energy consumption than traditional digital microchips. Implementing reservoir computing methodologies on neuromorphic hardware has therefore been an attractive practice in neuromorphic system design. Here, a review of experimental studies was provided to illustrate the progress in this area and to address the computational bottlenecks that arise from specific hardware implementations. Moreover, to deal with the challenges of computation on such unconventional substrates, several lines of potential solutions were presented, based on advances in other computational approaches in machine learning.
Acknowledgments

This work was supported by the European H2020 collaborative project NeuRAM3 [grant number 687299]. I would also like to thank Herbert Jaeger, who provided insight and expertise that greatly assisted this research.
References

1. Carver Mead and Mohammed Ismail. Analog VLSI Implementation of Neural Systems, volume 80. Springer Science & Business Media, 2012.
2. Laszlo B Kish. End of Moore's law: thermal (noise) death of integration in micro and nano electronics. Physics Letters A, 305(3-4):144–149, 2002.
3. Ronald G Dreslinski, Michael Wieckowski, David Blaauw, Dennis Sylvester, and Trevor Mudge. Near-threshold computing: Reclaiming Moore's law through energy efficient integrated circuits. Proceedings of the IEEE, 98(2):253–266, 2010.
4. Catherine D Schuman, Thomas E Potok, Robert M Patton, J Douglas Birdwell, Mark E Dean, Garrett S Rose, and James S Plank. A survey of neuromorphic computing and neural networks in hardware. arXiv preprint arXiv:1705.06963, 2017.
5. J Joshua Yang, Dmitri B Strukov, and Duncan R Stewart. Memristive devices for computing. Nature Nanotechnology, 8(1):13, 2013.
6. Giacomo Indiveri, Bernabé Linares-Barranco, Robert Legenstein, George Deligeorgis, and Themistoklis Prodromakis. Integration of nanoscale memristor synapses in neuromorphic computing architectures. Nanotechnology, 24(38):384010, 2013.
7. Gina C Adam, Brian D Hoskins, Mirko Prezioso, Farnood Merrikh-Bayat, Bhaswar Chakrabarti, and Dmitri B Strukov. 3-D memristor crossbars for analog and neuromorphic computing applications. IEEE Transactions on Electron Devices, 64(1):312–318, 2017.
8. Sung Hyun Jo, Ting Chang, Idongesit Ebong, Bhavitavya B Bhadviya, Pinaki Mazumder, and Wei Lu. Nanoscale memristor device as synapse in neuromorphic systems. Nano Letters, 10(4):1297–1301, 2010.
9. M Suri, O Bichler, D Querlioz, G Palma, E Vianello, D Vuillaume, C Gamrat, and B DeSalvo. CBRAM devices as binary synapses for low-power stochastic neuromorphic systems: auditory (cochlea) and visual (retina) cognitive processing applications. In , pages 10–3. IEEE, 2012.
10. Masakazu Aono and Tsuyoshi Hasegawa. The atomic switch. Proceedings of the IEEE, 98(12):2228–2236, 2010.
11. Audrius V Avizienis, Henry O Sillin, Cristina Martin-Olmos, Hsien Hang Shieh, Masakazu Aono, Adam Z Stieg, and James K Gimzewski. Neuromorphic atomic switch networks. PLoS ONE, 7(8):e42772, 2012.
12. Henry O Sillin, Renato Aguilera, Hsien-Hang Shieh, Audrius V Avizienis, Masakazu Aono, Adam Z Stieg, and James K Gimzewski. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing. Nanotechnology, 24(38):384004, 2013.
13. Manan Suri, Olivier Bichler, Damien Querlioz, Boubacar Traoré, Olga Cueto, Luca Perniola, Veronique Sousa, Dominique Vuillaume, Christian Gamrat, and Barbara DeSalvo. Physical aspects of low power synapses based on phase change memory devices. Journal of Applied Physics, 112(5):054904, 2012.
14. Stefano Ambrogio, Nicola Ciocchini, Mario Laudato, Valerio Milo, Agostino Pirovano, Paolo Fantini, and Daniele Ielmini. Unsupervised learning by spike timing dependent plasticity in phase change memory (PCM) synapses. Frontiers in Neuroscience, 10:56, 2016.
15. Shimeng Yu and H-S Philip Wong. Modeling the switching dynamics of programmable-metallization-cell (PMC) memory and its application as synapse device for a neuromorphic computation system. In , pages 22–1. IEEE, 2010.
16. Shimeng Yu, Yi Wu, Rakesh Jeyasingh, Duygu Kuzum, and H-S Philip Wong. An electronic synapse device based on metal oxide resistive switching memory for neuromorphic computation. IEEE Transactions on Electron Devices, 58(8):2729–2737, 2011.
17. Gouhei Tanaka, Toshiyuki Yamane, Jean Benoit Héroux, Ryosho Nakane, Naoki Kanazawa, Seiji Takeda, Hidetoshi Numata, Daiju Nakano, and Akira Hirose. Recent advances in physical reservoir computing: a review. Neural Networks, 2019.
18. Giacomo Indiveri and Shih-Chii Liu. Memory and information processing in neuromorphic systems. Proceedings of the IEEE, 103(8):1379–1397, 2015.
19. Paul A Merolla, John V Arthur, Rodrigo Alvarez-Icaza, Andrew S Cassidy, Jun Sawada, Filipp Akopyan, Bryan L Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197):668–673, 2014.
20. Herbert Jaeger and Harald Haas. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, 304(5667):78–80, 2004.
21. Conrad D James, James B Aimone, Nadine E Miner, Craig M Vineyard, Fredrick H Rothganger, Kristofor D Carlson, Samuel A Mulder, Timothy J Draelos, Aleksandra Faust, Matthew J Marinella, et al. A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications. Biologically Inspired Cognitive Architectures, 19:49–64, 2017.
22. Janardan Misra and Indranil Saha. Artificial neural networks in hardware: A survey of two decades of progress. Neurocomputing, 74(1-3):239–255, 2010.
23. Wolfgang Maass, Thomas Natschläger, and Henry Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560, 2002.
24. Qian Wang, Youjie Li, Botang Shao, Siddhartha Dey, and Peng Li. Energy efficient parallel neuromorphic architectures with approximate arithmetic on FPGA. Neurocomputing, 221:146–158, 2017.
25. Benjamin Schrauwen, Michiel D'Haene, David Verstraeten, and Jan Van Campenhout. Compact hardware for real-time speech recognition using a liquid state machine. In , pages 1097–1102. IEEE, 2007.
26. Andres Upegui, Carlos Andrés Pena-Reyes, and Eduardo Sanchez. An FPGA platform for on-line topology exploration of spiking neural networks. Microprocessors and Microsystems, 29(5):211–223, 2005.
27. Benjamin Schrauwen and Jan Van Campenhout. Parallel hardware implementation of a broad class of spiking neurons using serial arithmetic. In Proceedings of the 14th European Symposium on Artificial Neural Networks, pages 623–628. d-side publications, 2006.
28. David Verstraeten, Benjamin Schrauwen, and Dirk Stroobandt. Reservoir computing with stochastic bitstream neurons. In Proceedings of the 16th Annual ProRISC Workshop, pages 454–459, 2005.
29. Wulfram Gerstner and Werner M Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
30. Fatemeh Hadaeghi and Herbert Jaeger. Computing optimal discrete readout weights in reservoir computing is NP-hard. Neurocomputing, 338:233–236, 2019.
31. Yong Zhang, Peng Li, Yingyezhe Jin, and Yoonsuck Choe. A digital liquid state machine with biologically inspired learning and its application to speech recognition. IEEE Transactions on Neural Networks and Learning Systems, 26(11):2635–2649, 2015.
32. Qian Wang, Yingyezhe Jin, and Peng Li. General-purpose LSM learning processor architecture and theoretically guided design space exploration. In , pages 1–4. IEEE, 2015.
33. Yingyezhe Jin and Peng Li. Performance and robustness of bio-inspired digital liquid state machines: A case study of speech recognition. Neurocomputing, 226:145–160, 2017.
34. Yingyezhe Jin, Yu Liu, and Peng Li. SSO-LSM: A sparse and self-organizing architecture for liquid state machine based neural processors. In , pages 55–60. IEEE, 2016.
35. Qian Wang, Youjie Li, and Peng Li. Liquid state machine based pattern recognition on FPGA with firing-activity dependent power gating and approximate computing. In , pages 361–364. IEEE, 2016.
36. Botang Shao and Peng Li. Array-based approximate arithmetic computing: A general model and applications to multiplier and squarer design. IEEE Transactions on Circuits and Systems I: Regular Papers, 62(4):1081–1090, 2015.
37. Anvesh Polepalli, Nicholas Soures, and Dhireesha Kudithipudi. Digital neuromorphic design of a liquid state machine for real-time processing. In , pages 1–8. IEEE, 2016.
38. Anvesh Polepalli, Nicholas Soures, and Dhireesha Kudithipudi. Reconfigurable digital design of a liquid state machine for spatio-temporal data. In Proceedings of the 3rd ACM International Conference on Nanoscale Computing and Communication, page 15. ACM, 2016.
39. Michael R Smith, Aaron J Hill, Kristofor D Carlson, Craig M Vineyard, Jonathon Donaldson, David R Follett, Pamela L Follett, John H Naegle, Conrad D James, and James B Aimone. A novel digital neuromorphic architecture efficiently facilitating complex synaptic response functions applied to liquid state machines. In , pages 2421–2428. IEEE, 2017.
40. Yingyezhe Jin and Peng Li. AP-STDP: A novel self-organizing mechanism for efficient reservoir computing. In , pages 1158–1165. IEEE, 2016.
41. Subhrajit Roy and Arindam Basu. An online structural plasticity rule for generating better reservoirs. Neural Computation, 28(11):2557–2584, 2016.
42. Yu Liu, Yingyezhe Jin, and Peng Li. Online adaptation and energy minimization for hardware recurrent spiking neural networks. ACM Journal on Emerging Technologies in Computing Systems (JETC), 14(1):11, 2018.
43. Nicholas Soures, Lydia Hays, and Dhireesha Kudithipudi. Robustness of a memristor based liquid state machine. In , pages 2414–2420. IEEE, 2017.
44. Amr M Hassan, Hai Helen Li, and Yiran Chen. Hardware implementation of echo state networks using memristor double crossbar arrays. In , pages 2171–2177. IEEE, 2017.
45. Ali Rodan and Peter Tino. Minimum complexity echo state network. IEEE Transactions on Neural Networks, 22(1):131–144, 2011.
46. Lennert Appeltant, Miguel Cornelles Soriano, Guy Van der Sande, Jan Danckaert, Serge Massar, Joni Dambre, Benjamin Schrauwen, Claudio R Mirasso, and Ingo Fischer. Information processing using a single dynamical node as complex system. Nature Communications, 2:468, 2011.
47. Miquel L Alomar, Vincent Canals, Antoni Morro, Antoni Oliver, and Josep L Rossello. Stochastic hardware implementation of liquid state machines. In , pages 1128–1133. IEEE, 2016.
48. Piotr Antonik, Anteo Smerieri, François Duport, Marc Haelterman, and Serge Massar. FPGA implementation of reservoir computing with online learning. In , 2015.
49. Yang Yi, Yongbo Liao, Bin Wang, Xin Fu, Fangyang Shen, Hongyan Hou, and Lingjia Liu. FPGA based spike-time dependent encoder and reservoir design in neuromorphic computing processors. Microprocessors and Microsystems, 46:175–183, 2016.
50. Miquel L Alomar, Vincent Canals, Nicolas Perez-Mora, Víctor Martínez-Moll, and Josep L Rosselló. FPGA-based stochastic echo state networks for time-series forecasting. Computational Intelligence and Neuroscience, 2016:15, 2016.
51. Giacomo Indiveri, Bernabé Linares-Barranco, Tara Julia Hamilton, André Van Schaik, Ralph Etienne-Cummings, Tobi Delbruck, Shih-Chii Liu, Piotr Dudek, Philipp Häfliger, Sylvie Renaud, et al. Neuromorphic silicon neuron circuits. Frontiers in Neuroscience, 5:73, 2011.
52. Felix Schürmann, Karlheinz Meier, and Johannes Schemmel. Edge of chaos computation in mixed-mode VLSI - a hard liquid. In Advances in Neural Information Processing Systems, pages 1201–1208, 2005.
53. Shih-Chii Liu and Tobi Delbruck. Neuromorphic sensory systems. Current Opinion in Neurobiology, 20(3):288–295, 2010.
54. Colin Donahue, Cory Merkel, Qutaiba Saleh, Levs Dolgovs, Yu Kee Ooi, Dhireesha Kudithipudi, and Bryant Wysocki. Design and analysis of neuromemristive echo state networks with limited-precision synapses. In , pages 1–6. IEEE, 2015.
55. Cory Merkel, Qutaiba Saleh, Colin Donahue, and Dhireesha Kudithipudi. Memristive reservoir computing architecture for epileptic seizure detection. Procedia Computer Science, 41:249–254, 2014.
56. Xiao Yang, Wanlong Chen, and Frank Z Wang. Investigations of the staircase memristor model and applications of memristor-based local connections. Analog Integrated Circuits and Signal Processing, 87(2):263–273, 2016.
57. André Grüning and Sander M Bohte. Spiking neural networks: Principles and challenges. In ESANN, 2014.
58. Olivier Bichler, Damien Querlioz, Simon J Thorpe, Jean-Philippe Bourgoin, and Christian Gamrat. Extraction of temporally correlated features from dynamic vision sensors with spike-timing-dependent plasticity. Neural Networks, 32:339–348, 2012.
59. Chenyuan Zhao, Bryant T Wysocki, Clare D Thiem, Nathan R McDonald, Jialing Li, Lingjia Liu, and Yang Yi. Energy efficient spiking temporal encoder design for neuromorphic computing systems. IEEE Transactions on Multi-Scale Computing Systems, 2(4):265–276, 2016.
60. Jialing Li, Chenyuan Zhao, Kian Hamedani, and Yang Yi. Analog hardware implementation of spike-based delayed feedback reservoir computing system. In , pages 3439–3446. IEEE, 2017.
61. Alan F Murray and Anthony VW Smith. Asynchronous VLSI neural networks using pulse-stream arithmetic. IEEE Journal of Solid-State Circuits, 23(3):688–697, 1988.
62. Mostafa Rahimi Azghadi, Saber Moradi, and Giacomo Indiveri. Programmable neuromorphic circuits for spike-based neural dynamics. In , pages 1–4. IEEE, 2013.
63. Daniel Brüderle, Johannes Bill, Bernhard Kaplan, Jens Kremkow, Karlheinz Meier, Eric Müller, and Johannes Schemmel. Simulator-like exploration of cortical network architectures with a mixed-signal VLSI system. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems, pages 2784–2787. IEEE, 2010.
64. Syed Ahmed Aamir, Paul Müller, Andreas Hartel, Johannes Schemmel, and Karlheinz Meier. A highly tunable 65-nm CMOS LIF neuron for a large scale neuromorphic system. In ESSCIRC Conference 2016: 42nd European Solid-State Circuits Conference, pages 71–74. IEEE, 2016.
65. Elisabetta Chicca, Patrick Lichtsteiner, Tobias Delbruck, Giacomo Indiveri, and Rodney J Douglas. Modeling orientation selectivity using a neuromorphic multi-chip system. In , 4 pp. IEEE, 2006.
66. Subhrajit Roy, Amitava Banerjee, and Arindam Basu. Liquid state machine with dendritically enhanced readout for low-power, neuromorphic VLSI implementations. IEEE Transactions on Biomedical Circuits and Systems, 8(5):681–695, 2014.
67. Ning Qiao, Hesham Mostafa, Federico Corradi, Marc Osswald, Fabio Stefanini, Dora Sumislawska, and Giacomo Indiveri. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128k synapses. Frontiers in Neuroscience, 9:141, 2015.
68. Federico Corradi and Giacomo Indiveri. A neuromorphic event-based neural recording system for smart brain-machine-interfaces. IEEE Transactions on Biomedical Circuits and Systems, 9(5):699–709, 2015.
69. Joseph M Brader, Walter Senn, and Stefano Fusi. Learning real-world stimuli in a neural network with spike-driven synaptic dynamics. Neural Computation, 19(11):2881–2912, 2007.
70. Dhireesha Kudithipudi, Qutaiba Saleh, Cory Merkel, James Thesing, and Bryant Wysocki. Design and analysis of a neuromemristive reservoir computing architecture for biosignal processing. Frontiers in Neuroscience, 9:502, 2016.
71. Saber Moradi, Ning Qiao, Fabio Stefanini, and Giacomo Indiveri. A scalable multicore architecture with heterogeneous memory structures for dynamic neuromorphic asynchronous processors (DYNAPs). IEEE Transactions on Biomedical Circuits and Systems, 12(1):106–122, 2018.
72. Xu He, Tianlin Liu, Fatemeh Hadaeghi, and Herbert Jaeger. Reservoir transfer on analog neuromorphic hardware. In 9th International IEEE EMBS Neural Engineering Conference