PeleNet: A Reservoir Computing Framework for Loihi
Carlo Michaelis
24 November 2020
Note: This is a draft
Abstract
High-level frameworks for spiking neural networks are a key factor for fast prototyping and efficient development of complex algorithms. Such frameworks have emerged in recent years for conventional computers, but programming neuromorphic hardware is still a challenge. Often, low-level programming with knowledge about the hardware of the neuromorphic chip is required. The PeleNet framework aims to simplify reservoir computing for the neuromorphic hardware Loihi. It is built on top of Intel's NxSDK and is written in Python. The framework manages weight matrices, parameters and probes. In particular, it provides an automatic and efficient distribution of networks over several cores and chips. With this, the user is not confronted with technical details and can concentrate on experiments.
Introduction
Several different neuromorphic hardware chips have been developed in recent years (reviewed by Schuman et al., 2017; Young et al., 2019; Rajendran et al., 2019). All of them promise to be a key factor in future neuroscientific research as well as in technological developments in artificial intelligence. The main benefits of neuromorphic systems are their low power consumption and speed (Rajendran et al., 2019), shown for Loihi, for example, by Tang et al. (2019). This advantage of brain-inspired hardware comes with a solution to the von Neumann bottleneck (Backus, 1978). While novel neuromorphic hardware becomes more and more powerful, algorithms for such hardware systems are still at an early stage. A specific class of spiking neural network algorithms that can be used on neuromorphic hardware is reservoir computing. For details about reservoir computing I refer the reader to the literature (Jaeger, 2001; Maass et al., 2002; Jaeger, 2007; Schrauwen et al., 2007; Lukoševičius et al., 2012; Goodfellow et al., 2016).

Here, I focus on the neuromorphic hardware chip Loihi (Davies et al., 2018). The chip is digital and implements a current-based (CUBA) leaky integrate-and-fire (LIF) neuron model. A chip contains 128 cores and each core time-multiplexes 1024 compartments. In addition, every chip contains three conventional x86 CPUs. Several chips can be used in parallel on one board. The parameters of the synapses and the compartments can be adapted, but the neuron model itself is fixed. A single-compartment neuron can be extended to a multi-compartment neuron, which comes at the cost of fewer available neurons. Intel provides a software development kit, the so-called
NxSDK, which is written in Python and already allows higher-level programming of the chip (Lin et al., 2018). In addition, C scripts can be used to run code on the x86 CPUs of the chip. The NxSDK allows the definition of compartment prototypes and compartment groups, which can be combined into neuron prototypes and neurons. Using connection prototypes and connections, these compartments and neurons can be interconnected via a connection matrix. Spikes can be injected
Figure 1: (A) The reservoir network consists of a pool of excitatory neurons (red) and a pool of inhibitory neurons (blue). These pools are connected within and between each other. The excitatory neurons can be stimulated by an input (orange) and/or noise. Optionally, a pool of output neurons (purple) can be defined. The spiking data can be read out from the reservoir itself and/or from the output neurons. PeleNet supports configuring a learning rule for the connections of the excitatory neurons (dotted arrow). (B) The example shows 12 neurons distributed over three cores, where each core contains four neurons. Each core is color coded (grey, green, purple). Every triangle symbolizes a potential connection between two neurons. The color of the triangle indicates which cores need to be interconnected to connect the related neurons. In this case, neurons spread over three cores require 9 connection matrices.

using spike generators, and several different types of probes can be defined. Finally, learning rules can be used to make connections in the reservoir plastic.

However, the
NxSDK defines compartments for each core separately. If bigger networks are used, it is necessary to split the connections manually between the Loihi cores. Compartments on different cores then need to be interconnected manually, which results in n_conn = n_core² connection matrices if all potential connections should be possible. For example, if we create 3 compartment groups, distributed over n_core = 3 cores, and we want to interconnect all of them, we need to define n_conn = 9 connection weight matrices, as illustrated in Figure 1B. Note that this number of matrices is necessary even for sparse reservoir networks, since we do not want to exclude any possibility a priori. In addition to handling these connection matrices, probes can also only be taken for each core individually. Probing the whole network from the example above requires defining and handling 3 probes for the compartment groups and 9 probes for the connection weight matrices. Note that this problem is only difficult in recurrent structures, especially in a reservoir where all neurons can potentially connect to each other. In feed-forward structures, even in deep spiking neural networks (already applied to several neuromorphic hardware systems, see e.g. Diehl et al., 2016; Schmitt et al., 2017; Patino-Saucedo et al., 2020; Massa et al., 2020), this problem is less predominant, since the layers are connected in series. Multiple probes still need to be defined, but the number of connection matrices scales linearly with the number of neuron groups and not quadratically as in reservoirs.

The PeleNet framework was developed to solve the distribution of connections efficiently and to make it easy for the user. In the framework, the experimenter only needs to define one connection matrix for every part of the network (e.g. for the reservoir or for the output layer). After the simulation, the user gets usable probes for every part of the network. In addition, PeleNet provides different distributions for initializing the connection weights, defining learning rules, creating standard plots, logging relevant computation steps and a collection of utils for calculating statistics and handling data. Moreover, the framework is a whole new abstraction layer on top of the NxSDK. Compartments, connections and probes are defined implicitly and are controlled via parameters. Due to its modular and object-oriented architecture, the framework can easily be extended with additional functionality. Here, I give a brief overview of the code structure and the main features. The code is available on Github under the MIT license.

Design and implementation
Pele is the goddess of volcanoes and fire in the Hawaiian religion (Nimmo, 1986; Emerson, 2013). She has control over lava and volcanoes and is, inter alia, in control of the volcano Loihi. The name of the PeleNet framework is an eponym of the goddess Pele.

The framework is built to allow experiments with reservoir networks on Loihi. As shown in Figure 1A, PeleNet currently supports reservoir networks that follow Dale's law. The reservoir contains a pool of excitatory and a pool of inhibitory neurons. These neuron pools are connected within and between each other. Additionally, an input, noise and an output are available. The spiking data can be read out from the excitatory, inhibitory and output neurons. Every experiment can contain one or multiple trials. Figure 3 shows an example with 10 trials. The spiking activity can optionally be reset after every trial. With this, it is possible to simulate much faster, since it is not necessary to initialize the network again after every trial.

Programmatically,
PeleNet consists of two main parts. One part contains some helper functions and external libraries which are not available as a package. This part is imported by the PeleNet framework internally; the user does not need to import modules from this part. The other part consists of the PeleNet code itself, which is imported and used by the user.
Libraries
The lib folder currently contains code to generate an anisotropic connectivity matrix and some helper functions and classes. The code for initializing an anisotropic connectivity matrix is public on Github. The underlying principle was introduced in Spreizer et al. (2019) and used in Michaelis et al. (2020) for generating robust robotic trajectories with the PeleNet framework. The helper folder contains custom exception functions for invalid parameters or invalid function arguments and a Singleton class to decorate classes in the PeleNet framework to make use of the singleton design pattern.
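The singleton design pattern mentioned above can be realized as a small class decorator. The following is a minimal sketch of the idea, not PeleNet's actual Singleton class; the System class here is only a stand-in:

```python
def singleton(cls):
    """Class decorator: every instantiation returns the same instance."""
    instances = {}

    def get_instance(*args, **kwargs):
        # Create the instance on first call, reuse it afterwards
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return get_instance


@singleton
class System:
    """Stand-in for a globally shared object, e.g. a logger."""
    def __init__(self):
        self.log = []


# Both calls yield the identical object, so state is shared globally
assert System() is System()
```

With such a decorator, objects like a datalog can be accessed from anywhere in the framework without passing references around explicitly.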
PeleNet structure
The central entity of PeleNet is the experiment, which inherits from an abstract experiment. In an experiment, one or several networks can be defined and used. In the experiment, a parameter set is defined that overwrites values of the default parameter object. The parameters are passed to every network when it is initialized by the experiment. In addition, every network contains a plot object, which has access to the data sets and probes of the simulation. Passing data to one of the plot methods is therefore in most cases not necessary. Instead, the arguments of the plot methods shape the plots appropriately in size, limits, labels, colors, etc. Two singleton objects are globally available to all other objects. The system singleton contains a datalog object which logs all important steps in a log file. It also logs the parameter set which was used for a particular experiment, basic plots and optionally data sets from this experiment. The utils singleton provides a bunch of methods to handle and evaluate

PeleNet on Github: https://github.com/sagacitysite/pelenet
Code for generating anisotropic connection weights on Github: https://github.com/babsey/spatio-temporal-activity-sequence/tree/6d4ab597c98c01a2a9aa037834a0115faee62587
Figure 2: The code structure of the PeleNet framework. Classes have a green background, singletons are in purple and collections of methods for a class are yellow.

data, like dimensionality reduction, smoothing, calculating the spectral radius of the weight matrix and different kinds of statistics. Figure 2 gives an overview of the dependencies between the classes in PeleNet.

Network
The major component of the PeleNet framework is the network. It contains a collection of methods, distributed over several files. Since this is the core of the framework, I will give some more details about its behavior in the following.
weights

Methods in the weights file initialize the weight matrices for all parts of the network. The basic weight matrices connect the excitatory and inhibitory parts of the reservoir. Weights can be initialized using constant values or a log-normal or normal distribution. In addition, a 2D topological anisotropic weight matrix can be initialized. All weight matrices are stored sparsely in compressed sparse row (CSR) format.
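As an illustration of this kind of initialization, the following sketch builds a sparse log-normal weight matrix in CSR format with a fixed number of incoming connections per neuron. The function name and arguments are hypothetical and do not reflect PeleNet's actual API:

```python
import numpy as np
from scipy.sparse import csr_matrix


def init_lognormal_weights(n, conn_per_neuron, mu=0.0, sigma=1.0, seed=1):
    """Sparse n x n weight matrix: every neuron receives a fixed number of
    log-normally distributed incoming weights, stored in CSR format."""
    rng = np.random.default_rng(seed)
    rows, cols, vals = [], [], []
    for post in range(n):
        # Choose distinct presynaptic partners for this neuron
        pre = rng.choice(n, size=conn_per_neuron, replace=False)
        rows.extend([post] * conn_per_neuron)
        cols.extend(pre)
        vals.extend(rng.lognormal(mu, sigma, conn_per_neuron))
    return csr_matrix((vals, (rows, cols)), shape=(n, n))


# 400 excitatory neurons with 35 incoming connections each,
# matching the parameter values used in Code Listing 1
w = init_lognormal_weights(400, 35)
```

The CSR representation keeps memory usage low for sparse reservoirs and matches the format PeleNet reports from its weight probes.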
connect

Takes a weight matrix for every part of the network (e.g. one weight matrix for the whole reservoir), splits it into parts (called chunks in the framework), distributes these parts to the cores and interconnects them with each other. It is currently still necessary to define the number of neurons that should be used per core as a parameter. For an efficient distribution we need to consider two aspects. First, every core time-multiplexes up to 1024 neurons; the fewer neurons we simulate on every core, the faster the simulation will be. Second, the more cores are used, the more connection matrices are required, which slows down the initialization of the network. For optimal run-time performance we should therefore reduce the number of neurons per core as much as possible, and for optimal initialization performance we should use as many neurons per core as possible. It is currently up to the user to choose the best fitting values for this trade-off, but in most cases a fast simulation time is probably what is desired. Later versions of the framework may allow the user to choose a preferred method and handle the distribution of neurons automatically.
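The chunking idea can be sketched as follows: one full weight matrix is cut into one block per pair of cores, so n_core cores yield n_core² connection matrices, as in Figure 1B. All names here are illustrative and not PeleNet's internals:

```python
import numpy as np


def split_into_core_chunks(w, neurons_per_core):
    """Split a full n x n weight matrix into one block (chunk) per
    ordered pair of cores: n_core cores -> n_core**2 chunks."""
    n = w.shape[0]
    n_cores = -(-n // neurons_per_core)  # ceiling division
    chunks = {}
    for i in range(n_cores):          # core holding the target neurons
        for j in range(n_cores):      # core holding the source neurons
            rows = slice(i * neurons_per_core, (i + 1) * neurons_per_core)
            cols = slice(j * neurons_per_core, (j + 1) * neurons_per_core)
            chunks[(i, j)] = w[rows, cols]
    return chunks


# 12 neurons spread over 3 cores with 4 neurons each, as in Figure 1B
w = np.arange(144).reshape(12, 12)
chunks = split_into_core_chunks(w, neurons_per_core=4)
print(len(chunks))  # 9
```

The quadratic growth of the chunk count is exactly why using fewer, fuller cores speeds up initialization while using more, emptier cores speeds up the simulation itself.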
input

Adds different types of inputs to the network. All of them are based on the spike generators from the NxSDK. Currently, topological inputs (in case of a 2D network), noisy inputs (leave n neurons out in every trial), sequences of inputs and varying input positions per trial are supported. Topological inputs define a square of stimulated neurons in the excitatory layer of the reservoir at a defined position. Noisy inputs stimulate a specific number of reservoir neurons, but in every trial a few neurons are left out such that the input differs slightly in every trial. A sequence of inputs consists of multiple input regions which are stimulated in a row within one trial, such that relations between them can be learned (e.g. with spike-timing-dependent plasticity). An example of an input sequence is shown in Figure 3. Finally, it is possible to define separate input regions, where one input region is randomly chosen in every trial.
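The leave-n-out scheme behind noisy inputs can be sketched like this; the helper below is hypothetical and not PeleNet's API:

```python
import numpy as np


def noisy_input_targets(target_neurons, n_leave_out, n_trials, seed=1):
    """For every trial, drop a few randomly chosen neurons from the
    stimulated set, so the input varies slightly from trial to trial."""
    rng = np.random.default_rng(seed)
    targets = np.asarray(target_neurons)
    trials = []
    for _ in range(n_trials):
        # Indices of the neurons left out in this trial
        out = rng.choice(len(targets), size=n_leave_out, replace=False)
        trials.append(np.delete(targets, out))
    return trials


# 40 target neurons, 3 left out per trial, over 10 trials
trials = noisy_input_targets(range(40), n_leave_out=3, n_trials=10)
```

Each trial then stimulates a slightly different subset, which makes the reservoir response robust against small input variations.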
noise

Noise is currently generated by random inputs from spike generators which are connected to randomly chosen neurons in the network. Future implementations will probably make use of random changes in the current of a synapse or the membrane voltage of the neuron.
output

Adds output neurons to the reservoir. Currently the only available output is a pooling layer, which was used in Michaelis et al. (2020). The pooling layer was used for a faster read-out and performed regularization for the anisotropic network.
probes

Contains several methods to define and process probes. First, probes are defined for every core and every connection matrix chunk. Second, after a successful simulation, the probe data are post-processed and stacked together into complete and useful data sets. Note that the output of a connection weight probe is again in CSR format, such that this matrix can directly be used as an initial connection matrix for another simulation.
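The stacking of per-core probe data into one network-wide data set can be sketched as follows (illustrative only; the real post-processing in PeleNet covers more probe types):

```python
import numpy as np


def stack_spike_probes(core_probes):
    """Concatenate per-core spike probes, each of shape
    (timesteps, neurons_on_core), into one (timesteps, n_neurons) array."""
    return np.concatenate(core_probes, axis=1)


# Three cores with 4 neurons each, probed over 60 timesteps
a = np.zeros((60, 4))
b = np.ones((60, 4))
c = np.zeros((60, 4))
spikes = stack_spike_probes([a, b, c])
print(spikes.shape)  # (60, 12)
```

After stacking, the user works with one array indexed by global neuron id instead of twelve separate per-core probes.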
snips

Handles small C scripts that run on the x86 cores of the Loihi chips (so-called SNIPs). These scripts are located in the pelenet/snips folder. Currently, a reset SNIP is available that resets the membrane voltages after a trial. It is important to note that plasticity is currently not stopped while resetting the membrane voltages, which can cause problems when the uk variables are used. But this feature is under development and will probably be added soon.

Experiments

An experiment inherits from an abstract experiment and is created in the pelenet/experiments folder. The abstract experiment inherits again from the ABC package, which allows defining abstract methods. The defineParameters method in the abstract experiment class is implemented as an abstract method and is therefore required in every experiment; otherwise an exception is thrown. The abstract experiment also provides some default functionality, which can optionally be overwritten. It initializes all necessary objects for the experiment and contains a default build process. The execution of the simulation also follows a default behavior. In both cases, it is preferred to control the behavior of the experiment via parameters instead of overwriting methods. If the parameters do not cover the wanted behavior, it is suggested to make use of the available lifecycle methods. Available lifecycle methods are:

• onInit: Called after the experiment was initialized.
• afterBuild: Called after all network parts are connected (i.e. weight matrix, inputs, outputs, noise, probes).
• afterRun: Called after the simulation has finished and all data are post-processed.

Only if the parameters and the lifecycle methods are not sufficient to achieve the intended behavior is it suggested to overwrite the build and run methods of the abstract experiment.

In practice, Jupyter notebooks are used to quickly evaluate the results of an experiment. It is suggested to use Jupyter notebooks only for visualization of the results and for prototyping. Code for the experiment should be included in the defined experiment. An example of a simple experiment is shown in Code Listing 1. In Code Listing 2 the experiment is used. The spike train plotted in Code Listing 2 is shown in Figure 3.

```python
from ._abstract import Experiment

"""
@desc: An experiment with a sequential input, trained over several trials
"""
class SequenceExperiment(Experiment):

    """
    @desc: Define parameters for this experiment
    """
    def defineParameters(self):
        return {
            'seed': 1,
            'trials': 10,
            'stepsPerTrial': 60,
            'refractoryDelay': 2,
            'voltageTau': 100,
            'currentTau': 5,
            'thresholdMant': 1200,
            'reservoirExSize': 400,
            'reservoirConnPerNeuron': 35,
            'isLearningRule': True,
            'learningRule': '2^-2*x1*y0 - 2^-2*y1*x0 + 2^-4*x1*y1*y0 - 2^-3*y0*w*w',
            'inputIsSequence': True,
            'inputSequenceSize': 3,
            'inputSteps': 20,
            'inputGenSpikeProb': 0.8,
            'inputNumTargetNeurons': 40,
            'isExSpikeProbe': True,
            'isInSpikeProbe': True,
            'isWeightProbe': True
        }
```

Code Listing 1: Defining an experiment in the pelenet/experiments folder. The defineParameters method is required. Lifecycle methods are optional (not shown). In addition, the experiment can be extended by custom methods, e.g. for data evaluation or visualization.

```python
from pelenet.experiments.sequence import SequenceExperiment

parameters = {
    'seed': 2,
}

exp = SequenceExperiment(
    name='random-network-sequence-learning',
    parameters=parameters
)

exp.build()
exp.run()

exp.net.plot.reservoirSpikeTrain(figsize=(12, 6))
```

Code Listing 2: An example of running the experiment, e.g. from a Jupyter notebook. Parameters of the experiment can be overwritten to allow fast experimentation. At the end of the script an included plotting method is called to show spike trains. They are directly plotted in Jupyter and stored in the log folder related to this simulation.
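Putting these pieces together, the abstract experiment with parameter merging and lifecycle hooks might look roughly like the following sketch. The DEFAULTS dictionary, the build body and the SequenceSketch class are hypothetical; only defineParameters and the hook names come from the text above:

```python
from abc import ABC, abstractmethod

# Hypothetical default parameter set, for illustration only
DEFAULTS = {'seed': 1, 'trials': 5, 'stepsPerTrial': 60}


class AbstractExperiment(ABC):
    """Sketch: defineParameters is required, lifecycle hooks are
    optional no-ops that subclasses may override."""

    def __init__(self):
        # Values from defineParameters overwrite the default set,
        # derived parameters are calculated afterwards
        self.p = {**DEFAULTS, **self.defineParameters()}
        self.p['totalSteps'] = self.p['trials'] * self.p['stepsPerTrial']
        self.onInit()

    @abstractmethod
    def defineParameters(self):
        ...

    def onInit(self): pass       # called after the experiment is initialized
    def afterBuild(self): pass   # called after all network parts are connected
    def afterRun(self): pass     # called after simulation and post-processing

    def build(self):
        # ... default build process: weights, inputs, noise, probes ...
        self.afterBuild()


class SequenceSketch(AbstractExperiment):
    def defineParameters(self):
        return {'trials': 10}

    def afterBuild(self):
        self.built = True


exp = SequenceSketch()
exp.build()
print(exp.p['trials'], exp.p['totalSteps'])  # 10 600
```

Because defineParameters is abstract, forgetting to implement it raises a TypeError at instantiation, which is the exception behavior described above.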
Parameters
The parameter system is a powerful tool of PeleNet for defining an experiment. All NxSDK functionalities which are included in PeleNet are covered by the parameter set. It is not required to know anything about the NxSDK at all if the functionality provided by PeleNet is sufficient for the user. The default parameters are split into three parts: parameters for the experiment, for the system, and derived parameters. The system parameters cover information about e.g. the used Loihi board, settings for the matplotlib library, logging and paths. The experiment parameters cover e.g. neurons, connections, inputs, outputs, noise, probes and also a learning rule for the reservoir. Finally, the derived parameters are calculated from the system and experiment parameters and are useful for the framework or for the user. Some parameters are sanity checked to avoid serious issues caused by invalid parameters. These checks are constantly extended. Note that the parameters are well documented in the pelenet/parameters file to clarify their meaning, but it is not suggested to overwrite the parameters there. For overwriting parameters, the defineParameters method is available in the experiment. All parameter values which are defined in this method overwrite the default parameter set. Derived parameters are calculated after the parameters are defined in an experiment.

Discussion
Figure 3: Example of a spike raster plot that shows inputs and trials. The experiment was performed with 10 trials and an input sequence with three inputs.

Currently the neuromorphic hardware community grows fast, and it is probably only a matter of time until new or updated hardware systems emerge. The field of applications for such hardware is very broad, including the estimation of linear models like LASSO (Shapero et al., 2014; Davies et al., 2018), non-parametric classification with k-nearest neighbors (Frady et al., 2020), deep spiking neural networks (Massa et al., 2020) or reservoir networks (Michaelis et al., 2020). High-level libraries and frameworks are needed to cover these different specialized application areas. While for deep spiking neural networks the
SNN Toolbox (Rueckauer et al., 2017; Rueckauer and Liu, 2018) is available for ANN-to-SNN conversions (also for Loihi) and SLAYER is available for training SNNs via backpropagation, reservoir network frameworks are still rare in neuromorphic computing.

The PeleNet framework presented here simplifies the implementation of reservoir networks on the neuromorphic hardware Loihi. The framework is an abstraction layer on top of the NxSDK from Intel. PeleNet allows an efficient distribution of the network over an arbitrary number of Loihi cores and chips. Probes are combined into data sets which can directly be used for further evaluations. Parameters define the experiments. This "parameter approach" allows an easy initialization process and keeps the experiments clear. Finally, the plot system already includes a bunch of default plots for reservoir computing.

Despite its already existing features, the framework is still at an early stage. Some functionalities of the Loihi chip are not provided yet. This includes, inter alia, current and voltage noise, usage of the tag for learning rules, and synaptic delays. The learning rule covers the excitatory neurons in the reservoir; future versions will also be able to apply a learning rule to the output neurons. In addition, the framework is not yet available as a pip Python package and still requires a manual installation of dependencies. Therefore, one of the next steps will be to provide a package release of PeleNet. It is also intended to add an optimization functionality which runs several experiments in order to find optimal parameters according to a criterion. The parameter system is already designed to support future optimization scripts. Finally, it is planned to add unit and integration tests to make sure that simulations are performed correctly. Currently, four Jupyter notebooks are available as functional tests.

The aim of PeleNet is to speed up the implementation of reservoir computing experiments on Loihi. In a broader perspective, it even has the potential to push the field of reservoir computing implementations on neuromorphic hardware in general. Despite its early development stage, I am confident that the framework is already useful for computational studies.
License and code availability
The PeleNet framework is published under the MIT License and is therefore freely available without warranty. The code is available on Github. Contributions are highly appreciated.

Acknowledgement

I sincerely thank Dr. Christian Tetzlaff for providing me with funding for my doctoral thesis and for his supervision. In addition, I want to thank Intel for providing the Kapoho Bay to our lab and for their ongoing support. Finally, I want to thank Andrew B. Lehr, since he is always willing to discuss problems constructively and influenced the implementation of the anisotropic network significantly.
References
Backus, J. (1978). Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. Commun. ACM, 21(8):613–641.

Davies, M., Srinivasa, N., Lin, T.-H., Chinya, G., Cao, Y., Choday, S. H., Dimou, G., Joshi, P., Imam, N., Jain, S., et al. (2018). Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1):82–99.

Diehl, P. U., Zarrella, G., Cassidy, A., Pedroni, B. U., and Neftci, E. (2016). Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware. In 2016 IEEE International Conference on Rebooting Computing (ICRC), pages 1–8. IEEE.

Emerson, N. B. (2013). Pele and Hiiaka: A myth from Hawaii. Tuttle Publishing.

Frady, E. P., Orchard, G., Florey, D., Imam, N., Liu, R., Mishra, J., Tse, J., Wild, A., Sommer, F. T., and Davies, M. (2020). Neuromorphic nearest neighbor search using Intel's Pohoiki Springs. In Proceedings of the Neuro-Inspired Computational Elements Workshop, NICE '20, New York, NY, USA. Association for Computing Machinery.

Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. MIT Press.

Jaeger, H. (2001). The "echo state" approach to analysing and training recurrent neural networks - with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148(34):13.

Jaeger, H. (2007). Echo state network. Scholarpedia, 2(9):2330.

Lin, C.-K., Wild, A., Chinya, G. N., Cao, Y., Davies, M., Lavery, D. M., and Wang, H. (2018). Programming spiking neural networks on Intel's Loihi. Computer, 51(3):52–61.

Lukoševičius, M., Jaeger, H., and Schrauwen, B. (2012). Reservoir computing trends. KI - Künstliche Intelligenz, 26(4):365–371.

Maass, W., Natschläger, T., and Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560.

Massa, R., Marchisio, A., Martina, M., and Shafique, M. (2020). An efficient spiking neural network for recognizing gestures with a DVS camera on the Loihi neuromorphic processor. arXiv preprint arXiv:2006.09985.

Michaelis, C., Lehr, A. B., and Tetzlaff, C. (2020). Robust robotic control on the neuromorphic research chip Loihi. arXiv preprint arXiv:2008.11642.

Nimmo, H. A. (1986). Pele, ancient goddess of contemporary Hawaii. Pacific Studies, 9(2):121.

Patino-Saucedo, A., Rostro-Gonzalez, H., Serrano-Gotarredona, T., and Linares-Barranco, B. (2020). Event-driven implementation of deep spiking convolutional neural networks for supervised classification using the SpiNNaker neuromorphic platform. Neural Networks, 121:319–328.

Rajendran, B., Sebastian, A., Schmuker, M., Srinivasa, N., and Eleftheriou, E. (2019). Low-power neuromorphic hardware for signal processing applications: A review of architectural and system-level design approaches. IEEE Signal Processing Magazine, 36(6):97–110.

Rueckauer, B. and Liu, S.-C. (2018). Conversion of analog to spiking neural networks using sparse temporal coding. In 2018 IEEE International Symposium on Circuits and Systems (ISCAS), pages 1–5. IEEE.

Rueckauer, B., Lungu, I.-A., Hu, Y., Pfeiffer, M., and Liu, S.-C. (2017). Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience, 11:682.

Schmitt, S., Klähn, J., Bellec, G., Grübl, A., Guettler, M., Hartel, A., Hartmann, S., Husmann, D., Husmann, K., Jeltsch, S., et al. (2017). Neuromorphic hardware in the loop: Training a deep spiking network on the BrainScaleS wafer-scale system. In 2017 International Joint Conference on Neural Networks (IJCNN), pages 2227–2234. IEEE.

Schrauwen, B., Verstraeten, D., and Van Campenhout, J. (2007). An overview of reservoir computing: Theory, applications and implementations. In Proceedings of the 15th European Symposium on Artificial Neural Networks, pages 471–482.

Schuman, C. D., Potok, T. E., Patton, R. M., Birdwell, J. D., Dean, M. E., Rose, G. S., and Plank, J. S. (2017). A survey of neuromorphic computing and neural networks in hardware. arXiv preprint arXiv:1705.06963.

Shapero, S., Zhu, M., Hasler, J., and Rozell, C. (2014). Optimal sparse approximation with integrate and fire neurons. International Journal of Neural Systems, 24(05):1440001.

Spreizer, S., Aertsen, A., and Kumar, A. (2019). From space to time: Spatial inhomogeneities lead to the emergence of spatiotemporal sequences in spiking neuronal networks. PLoS Computational Biology, 15(10):e1007432.

Tang, G., Shah, A., and Michmizos, K. P. (2019). Spiking neural network on neuromorphic hardware for energy-efficient unidimensional SLAM. arXiv preprint arXiv:1903.02504.

Young, A. R., Dean, M. E., Plank, J. S., and Rose, G. S. (2019). A review of spiking neuromorphic hardware communication systems. IEEE Access, 7.