A multi-agent model for growing spiking neural networks
University of Southern Denmark
Master Thesis
M.Sc. Robot Systems
A multi-agent model for growing spiking neural networks
Author
Javier López Randulfe
Supervisor
Leon Bonde Larsen
June 3, 2019

Abstract

Artificial Intelligence has looked into biological systems as a source of inspiration. Although there are many aspects of the brain yet to be discovered, neuroscience has found evidence that the connections between neurons continuously grow and reshape as part of the learning process. This differs from the design of Artificial Neural Networks, which achieve learning by evolving the weights of the synapses between neurons while their topology stays unaltered through time.

This project has explored rules for growing the connections between the neurons in Spiking Neural Networks as a learning mechanism. These rules have been implemented on a multi-agent system for creating simple logic functions, establishing a base for building up more complex systems and architectures. Results in a simulation environment showed that for a given set of parameters it is possible to reach topologies that reproduce the tested functions.

This project also opens the door to the usage of techniques like genetic algorithms for obtaining the best suited values for the model parameters, and hence creating neural networks that can adapt to different functions.

Acknowledgments
I would like to express my most sincere gratitude to my supervisor Leon Bonde Larsen, who has always been a source of support, ideas, and resources. His dedication motivates the people around him and unquestionably increased the quality of my work. I would also like to thank the other students and researchers at the MMMI institute, whose feedback during this project is much appreciated.

Thanks to my classmates Katrine, Job, Anne, and Sergi for joining in the work sessions and making the duty of the thesis more pleasing. Special thanks to Job and Katrine for listening and proofreading my work repeatedly, and offering very precious tips. Last but not least, I would also like to thank Leo, as her company during those days turned invaluable.

Finally, I would like to thank my friends and family back home.
Chapter 1

Introduction

Artificial intelligence (AI) has proven to be a very successful field during the last decades, providing techniques and algorithms able to give solutions to complex problems that traditional methods were not able to cope with. One of the most representative AI systems is the Artificial Neural Network (ANN). Convolutional Neural Networks, which are derived from ANNs, have shown remarkable achievements in the field of image recognition (Krizhevsky et al., 2012), being able to improve on human performance for recognizing objects in images containing different sorts of complex contexts. Also, Recurrent Neural Networks have produced promising results in fields such as speech and handwriting recognition (Graves et al., 2013).

Although the mathematical models developed for creating ANNs take some inspiration from biological neural circuits, both systems define two wholly different paradigms. One major difference is the absence of the time dimension when assessing the inputs of an ANN. ANNs take discrete "snapshots" of the input values, and return output values based on the former. On the contrary, the output of a neuron in a biological neural circuit depends on the time evolution of the input signals.

Another big difference between both systems is the way learning is achieved. One of the ways of achieving short-term learning in biological systems is the fast growth of the neuron circuit structures, i.e. neurons in neural circuits break and create new connections based on the correlation between their activity. On the contrary, learning in ANNs is only achieved by evolving the weights of the connections between neurons.

Despite their success in several applications, ANNs present some drawbacks that limit their performance and have not been observed in biological neural circuits. For instance, ANNs entail a very big computational cost and a high response time to certain stimuli due to their time-driven and synchronous nature.
On the contrary, the energy consumption in neural circuits is considerably lower (see Fig. 1.2 in the next section) and their event-driven nature makes them excel in terms of reaction time. Although these benefits encourage the use of neural circuits, replicating them is not a trivial task, due to their vast complexity. It is also important to take into account the crucial role of genetics in the shaping of neural circuits, as it encodes the continuous learning that has been taking place for thousands of years. Learning is hence not reduced to the learning mechanisms used during the lifetime of the individual, and this makes the replication of neural circuits more challenging.

With all this in mind, neuromorphic engineering originated a few decades ago as a field of study that looks into biology for models giving solutions to the biggest flaws of technology. Neuromorphic engineers have focused their research on most of the elements present in a typical control engineering problem, including the way sensors work, how the processing of their information is carried out, and how actuators react to the decisions made. To do so, the behaviours of biological systems have been partially replicated, resulting in new hardware and software solutions for crucial problems observed in traditional engineering.

Spiking Neural Networks (SNNs) are a completely new family of neural networks developed within the context of neuromorphic engineering. Therefore, they are deeply inspired by biological brains, and they reproduce some of their behaviours. Their most important features are their event-driven nature, as well as the way neurons assess inputs in order to obtain the value of their outputs, similar to how biological neurons work. Neurons are at rest in the absence of stimuli, and they switch to an excited state when the recent activity in their inputs is higher than a specific threshold.
When a neuron shifts to an excited state, it produces a pulse called a spike, which propagates through its axon to connected neurons. Thus, its output is the result of decoding the activity of the input signals within a time interval.

SNNs are already being used for solving certain problems, and they outperform ANNs when the application is highly dependent on the timing of the signals, as well as being more energy efficient. However, their development is still far from complete; there are some flaws that have to be fixed before they can reach a big scope of applications, like ANNs do. One of their main issues is that the learning methods traditionally employed in ANNs for supervised learning cannot be applied to SNNs (Ghosh-Dastidar and Adeli, 2009). Another is the need for specific hardware for their development, to deal with a continuous assessment of the input values and their asynchronous nature.

Due to the aforementioned disadvantages of using ANNs and the flaws present in state-of-the-art SNNs, this project has explored alternative learning methods. To do so, it has taken inspiration from the fundamentals of biological neurons. Therefore, the project included an analysis of the current knowledge in neuroscience regarding neuron spiking and fast structural growth, and applied that knowledge to designing a new model for learning based on the growth of the network topology.

In order to create and implement the designed model, an inverted-blackbox approach was followed, i.e. some of the rules and events happening outside the designed system have been purposely omitted, so the outside "world" has been reduced to a set of incoming pulses that follow arbitrary patterns. This way, the complexity that the environment of a brain presents was simplified, and most of the events and interactions happening around a neuron have been drastically reduced.

The proposed SNN model works with a set of parameters, which can take a big range of values.
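The spiking behaviour just described (rest, integration of recent input activity, threshold, spike) can be sketched as a minimal leaky integrate-and-fire style unit. All names and constants below are illustrative, not the actual parameters of the thesis model:

```python
# Minimal leaky integrate-and-fire style neuron, illustrating the spiking
# behaviour described above. All names and constants are illustrative.

class SpikingNeuron:
    def __init__(self, threshold=1.0, leak=0.9, delta_u=0.4):
        self.u = 0.0                # membrane potential (rest = 0)
        self.threshold = threshold  # potential at which the neuron spikes
        self.leak = leak            # decay factor applied each time step
        self.delta_u = delta_u      # potential added per incoming pulse

    def step(self, incoming_pulses):
        """Advance one time step; return True if the neuron spikes."""
        self.u *= self.leak                       # potential decays over time
        self.u += incoming_pulses * self.delta_u  # integrate recent activity
        if self.u >= self.threshold:
            self.u = 0.0                          # reset after a spike
            return True
        return False

n = SpikingNeuron()
# A single input pulse is not enough to cross the threshold,
# while two near-simultaneous pulses are.
print(n.step(1))  # False
print(n.step(2))  # True
```

The key property is that the output depends on the recent history of the inputs, not on a single snapshot of their values.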
This project paves the way for the application of Machine Learning algorithms for tuning the mentioned parameters. Namely, genetic algorithms are believed to be a promising technique for optimizing the model and adapting it to different problems and contexts.

In the rest of this chapter, firstly, an extended introduction to neuromorphic engineering is given. Secondly, the fundamentals of the biological neuron are briefly discussed. After that, the motivations and initial considerations driving the project are explained, and finally an outline of the rest of this document is given.

Figure 1.1: Block diagram representing the different stages in the implementation of the designed model, the ones in the green box being the main loop for learning. The stages which have not been developed yet and are left for future work are depicted in red. Hence, the next steps in the project involve closing the loop and being able to tune the system parameters autonomously by using a machine learning algorithm.
In order to understand the aim and motivation of this project, it is paramount to understand what neuromorphic engineering is, as the project is highly influenced and motivated by the ideas that build up this field.

Neuromorphic engineering is a field that has been continuously growing in the last few decades. Its main goal is the replication of biological brains with technology, under the assumption that those systems have been evolving and adapting to the real world for thousands or millions of years and are therefore highly reliable and optimised (Liu and Wang, 2001).

It is true that existing digital systems and computer science have already overtaken the biological brain in several aspects, especially those concerning speed in solving arithmetic and logical tasks. However, artificial computing systems are still very far from biological brains regarding energy efficiency, and their performance is very poor when dealing with the whole extent of the real world (Indiveri and Horiuchi, 2011). It is actually nowadays a big struggle for engineering to develop systems that can solve problems requiring a big amount of computation without spending large amounts of energy (a comparison is offered in Fig. 1.2). Also, the task turns unfeasible if it requires interacting with most of the information available in a real environment.

This motivation has led to the exploration of the idea of replicating some of the concepts that rule biological systems.
Thus, neuromorphic engineering has put its focus on the development of new hardware systems that are closer to the physical structure of biological systems, as well as software solutions that implement algorithms resembling the behaviours of the biological brain (Soman et al., 2016).

Regarding the introduction of new computing paradigms, perhaps the most noticeable breakthrough has been the appearance of Spiking Neural Networks (SNNs). Whereas traditional Artificial Neural Networks (ANNs) calculate the value of the system outputs based on the value of the inputs at the same discrete time, SNNs assess the time evolution of the inputs, i.e. the value of an output not only depends on the current value of the inputs, but also on their activity in the past. This new paradigm is closer to biological postulates, and the mathematical models describing it are based on the theories that explain the behaviour of biological neurons.

Figure 1.2: Comparison of the power density and clock frequency of different processors developed during the last 40 years, as well as the human brain. Image obtained from (Merolla et al., 2014).

SNNs allow, on the one hand, a drastic reduction of the number of computational units (neurons), due to the fact that one neuron can theoretically encode a much bigger amount of information, as the value at each time step is relevant for the outcome of the system. On the other hand, the processing times can be considerably accelerated, as fewer neurons need to be assessed in order to determine the value of the outputs. This feature is enhanced too by the asynchronous nature of these networks, as they do not need to wait for a synchronization clock.
This means that neurons do not have to be evaluated synchronously at the same discrete time. Actually, all of them can work as independent units that react to incoming pulses, no matter when these happen.

Regarding the development of hardware solutions closer to biological structures, some initiatives have appeared recently, such as the TrueNorth circuit developed by IBM (Merolla et al., 2014), or the EU Human Brain Project. The latter involves, among others, the development of the SpiNNaker board (Furber et al., 2014). All of them have in common the implementation of systems composed of small computing units that work asynchronously and independently of each other, introducing a big degree of parallelism between different cores, and where the concept of memory is represented by the current state of the internal neurons. A schematic of the structure followed by these systems is depicted in Fig. 1.3.

Figure 1.3: Conceptual structure of the TrueNorth circuit, which resembles the SpiNNaker board. They are formed by multiple processing systems that work asynchronously and in parallel. Image obtained from (Merolla et al., 2014).

Their structure is opposed to the traditional and predominant Von Neumann architecture for electronic and digital systems, where the different computing entities share a common system memory that they need to access synchronously. The main benefit of biological structures is a high optimization of the resources needed for making calculations or operations, as they consume only the minimum amount of computational units and reduce the read/write operations to the minimum extent. The hope is to drastically reduce the energy consumption from the current order of megawatts for supercomputing systems to the few tens of watts used by the human brain.
Neurons are specialized cells which carry out a central role in the nervous systems of animals. They are electrically excitable, a feature that they use for receiving, processing, and transmitting information, mostly to other neurons. They are grouped in large populations, forming clusters that are able to originate what is known as intelligence and, in the case of mammals, most of the neurons are allocated in the brain, which is the centre of the nervous system.

In this section, a brief description of biological neurons is offered, as well as some of their most relevant details. A more detailed description of the working of biological neurons and neural circuits is offered in section 2.1. In any case, a deep study of the biological neuron is out of the scope of this project, and since the study of biological neurons has been a very relevant topic in science for more than a century, the reader may look for further information in more specialized literature if interested (Gerstner et al., 2014; Dayan and Abbott, 2001).

Neuron structure:
Physically, the structure of a neuron is typically decomposed into three main components, which play different roles (see Fig. 1.4):

• Soma: It is the main body of the cell, and it contains the main organelles. It is covered by a membrane that can be charged with electric potential. When this potential is high enough the neuron gets excited, and generates what is known as a spike.

• Dendrites: These are filaments organized in structures known as dendritic trees. They are sensitive to incoming signals. Thus, their role is to receive information and propagate it to the soma of the neuron.

• Axon: A neuron typically has only one axon, and its function is to propagate the electric pulses generated in the neuron. The stem of the axon is denominated the axon shaft, and at its tip is located the growth cone, which leads the elongation and movement of the axon through the neural circuit. It can travel as far as 1 meter in the case of humans.

Figure 1.4: Schematic of the biological neuron, with its main components. Image obtained from Wikipedia (https://en.wikipedia.org/wiki/Neuron).

Neuron synapse and synaptic plasticity:
The synapse is the connection between two neurons, where one of them transmits information and the other receives it. The synapse is driven by chemical and electrical forces, as the information is transmitted by a change in the chemical composition of the junction between both neurons, which leads to a modification of the electrical potential of the receptor neuron.

The chemical processes involved in the synapse are complex. Activity in a neuron leads to a modification of its state, which leads some of its organelles to start releasing chemical components that travel through the axon shaft to the synapse and eventually reach the membrane of the receptor neuron. These components are denominated neurotransmitters, and the process by which they flow within the axon to the synapse and activate the connection is called neurotransmission.

Traditionally, it was believed that the wiring between neurons and the strength of their synapses were formed during early stages of animal life (i.e. during childhood) and stayed mostly unchanged afterwards. This idea actually reinforced the conception that learning is a process that happens mostly during the first years of an animal's life, being drastically reduced during adulthood. However, research found evidence against this idea, raising the concept of synaptic plasticity. Synaptic plasticity is a property of neurons which makes their synapses dynamic. This implies that the synaptic weight evolves through time, as well as the dendritic structure of the neurons (Feldman, 2009). Hence, it is nowadays assumed that learning takes place throughout the whole life of the animal (although less intensely from adulthood) due to the aforementioned synaptic plasticity.
Hebbian learning:
The previous paragraph introduces the concept of synaptic plasticity, and how the synaptic weights evolve through time, provoking as well the creation and destruction of synapses between neurons. In order to create a neuronal model replicating this behaviour, it is crucial to have an equation or set of equations that provides quantitative values, and Hebb's rule follows this purpose.

Hebbian learning is a concept introduced half a century ago which states that the synaptic strength between two neurons is modified based on the correlation between the firing times of the two neurons involved. It offers a simple approach for explaining how synapses between neurons get stronger, based on the time differences between their spikes. A simple way of summarizing the rule would be as follows:
If neuron A and neuron B are connected by a synapse, and neuron A spikes repeatedly before neuron B spikes, the synapse from neuron A towards neuron B becomes stronger. Conversely, if neuron A spikes repeatedly after neuron B spikes, the synapse from neuron A towards neuron B becomes weaker.
Moreover, neuroscience has found evidence showing that the short-term fast growth of neuron spines is led by a sort of Hebbian learning process (Feldman, 2009). This means that the spines of a neuron tend to grow towards other neurons that tend to spike before the actual neuron in a correlated way.
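The rule summarized above can be sketched as a pair-based spike-timing update. The exponential form and the constants below are illustrative choices (one common way of quantifying Hebb's rule), not the specific equations of the thesis model:

```python
import math

def hebbian_update(w, t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based spike-timing update of a synaptic strength w.

    If the presynaptic neuron fires before the postsynaptic one
    (t_pre < t_post), the synapse is strengthened; otherwise it is
    weakened. a_plus, a_minus and tau are illustrative constants.
    """
    dt = t_post - t_pre
    if dt > 0:
        return w + a_plus * math.exp(-dt / tau)   # pre before post: potentiate
    return w - a_minus * math.exp(dt / tau)       # post before pre: depress

w = 0.5
print(hebbian_update(w, t_pre=10.0, t_post=15.0) > w)  # True: strengthened
print(hebbian_update(w, t_pre=15.0, t_post=10.0) < w)  # True: weakened
```

The exponential factor makes closely correlated spike pairs change the synapse more than distant ones, matching the correlation-based description above.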
Neurotaxis:
Neurotaxis is the travelling of neurons' dendritic spines and axons through the neural circuits in order to reorganize the neural structure, learning being one of its main purposes (the other main purpose is the restoration of damaged areas of the nervous system after injuries). Biology explains neurotaxis mostly by the appearance of chemical potentials which provoke the attraction or repulsion of the neuron filaments. It has been proven that the growth cone at the tip of the axon shaft is sensitive to the proteins present in the neuron environment, reacting in different ways depending on the type of protein. Thus, the concentration of these proteins creates the moving behaviour leading to the desired structure (Dickson, 2002).

However, neurons are sensitive as well to electric potentials, and the movement of filaments towards their destination could also be explained by the appearance of electric forces acting on them (galvanotaxis). Although the literature supports that chemotaxis is the main process behind neurotaxis, there are plenty of experiments showing that dendrites are sensitive to electric potentials, and neurons also emit electric pulses when they get excited (Patel and Poo, 1982). Thus, although neurotaxis cannot be fully explained without taking chemical mechanisms into account, considering only electric forces for explaining the movement of dendrites offers a simple representation of the movement of the spines within the neural circuits.
One of the main motivations of this project was the idea that ANNs have drawbacks that can hardly be solved without approaching their development with a wholly different paradigm. This was the main inspiration leading to the inception and development of SNNs, led by the knowledge gained in neuroscience.
Traditional neural networks suffer from intrinsic limitations, mainly for process-ing large amount of data or for fast adaptation to a changing environment. Severalcharacteristics [...] are strongly restrictive compared with biological processing innatural neural networks. (Paugam-Moisy and Bohte, 2012)
Neuroscience has found evidence showing that neurons follow a Hebbian-like learning pattern. Moreover, Hebb's rule also explains how dendrites grow and create new connections. This phenomenon is believed to be paramount in short-term learning in animal brains (Feldman, 2009). In fact, fast spine growth performs a fine tuning of the dendritic trees and thus makes the layout of the circuits adapt to external events. This happens after the main structure of a neural network has already been created.

With this in mind, the aim of this project was to establish a set of mathematical rules for modeling SNNs whose neuron connections are dynamic. As mentioned in section 1.1, neuromorphic engineering follows the idea of creating asynchronous independent computing units (i.e. neurons). Therefore, the implementation and testing of the neuron model has been done using multi-agent systems, where a whole network can be distributed over several units which follow the same rules with a big degree of independence from each other.

All this led to the formulation of the main premise for the current project:
There is a set of rules that defines the evolution of the structure of SNNs andmakes them produce intelligent systems.
There are many ways of defining intelligence, depending on the perspective, scope, or field in which the definition is formulated. In this project, intelligence has been considered to be "the capacity of a system to solve a logical problem". It is a simple definition, and lacks many details that have to be taken into consideration for a complete and broad application of the concept. In any case, it is enough for formulating the set of statements which give shape to the hypotheses that this project has tried to validate or reject.

The hypotheses proposed to be validated or rejected in this project were the following:

1. A set of rules exists for making neurons grow in an SNN.

(a) The established growth rules are determined by a set of parameters.

(b) Growth can be obtained by applying Hebbian learning.

(c) Growth depends on the spatial distribution of the neurons.

(d) Neurons can also show behaviours inhibiting growth.

2. There are logical problems that can be solved by an SNN with a certain topology.

(a) It is possible to propose a logical problem that has at least one solution.

(b) There is a quantitative value for determining how successful a solution is. It can range from a true-false boolean to a percentage score.

3. The growth rules can be optimized by using a Machine Learning algorithm.

4. For a given function y = f(x_1(t), x_2(t), ..., x_n(t)), an SNN exists with a minimum number of neurons, N, that is able to grow to a topology able to solve the function.

5. The topology of an SNN can change from one structure to another based on its environment and the problem it has to solve.

This project has mainly focused on the first two points in the previous list (although points 1.c, 1.d, and 2.b have only been partially tested). Point 3 was initially included in the scope of this project, but the goals were narrowed afterwards, and research in this area has been left for future work.
The last two points are very relevant, as their veracity could allow these networks to be reused for different purposes without going through the design stage, i.e. a designer would not be required to tune the network until the correct setup was reached. In any case, work in this direction is still far off, and more effort is needed on the previous points first.

Based on the aforementioned motivation, the main goals that drove the efforts made during the project are the following:
In-depth study of neurotaxis, and review of the current knowledge:
As the final task is to implement SNNs that can grow dynamically, and it isknown that biological brains present this feature, it was paramount to investigateand understand the current knowledge in neuroscience about neurotaxis.
Establishment of a mathematical model for the growth of neuron spines:
Once it was understood how neurotaxis works in biological brains, a set of rules simulating this behaviour was proposed. These rules were designed with a trade-off between creating a faithful model of the biological neurons and avoiding the huge complexity that they present.

Testing of the previous model in SNNs by adapting it to existing spiking models:
The created set of rules implementing Hebb's rule for simulating neurotaxis cannot be put into practice without a mathematical model representing the firing of the neuron. This is due to the fact that Hebb's rule establishes the growth of neuron connections based on the firing times of the neurons involved. Therefore, it can only exist if neurons fire along time.

Once this and the previous points were implemented, the design of a system capable of being tested autonomously, without the need of other algorithms, was obtained.
Design of a multi-agent model for implementing the aforementioned rules:
Due to the nature of neural networks, the designed model was implemented by creating a multi-agent system. This way, the system could be split into independent computational units, which makes it easier to handle networks with a big number of neurons.

A multi-agent system was prepared, and the simulation environment RANA was chosen. Chapter 3 explains the design and the main features of the aforementioned environment.
Establish the grounds for the application of genetic algorithms for tuningthe growth model:
The project involves the design of a model for replicating the behaviour of biological neural networks, and it was tested by implementing simple topologies and functions. As will be explained later in this report, the designed model contains several parameters that can drastically alter the performance of the implemented neural networks. Setting the parameters to different values can lead to a wide variety of topologies and logic functions.

Therefore, this is presented as a potential system to be optimized by the usage of a complementary learning mechanism responsible for the tuning of the mentioned parameters. Namely, it is believed that evolutionary algorithms can be used for obtaining an adequate set of parameters that leads to the implementation of a desired function.
As mentioned in the previous sections, the goal of this project was to establish growth rules for the dendrite spines of the neurons in Spiking Neural Networks (SNNs). Moreover, these rules were based on the knowledge provided by neuroscience and, more specifically, inspired by Hebbian learning.

Starting from this motivation and the initial hypotheses, a set of considerations regarding the scope and implications of the project was made, and they are summarized within this section.

1.4.1 Problem statement
The problem that this project dealt with was finding a set of rules that creates intelligence in SNNs by dynamically modifying their topologies.

Let us formalize the problem by setting an SNN formed by k neurons, so the set of all the neurons in the system is N = {N_1, N_2, ..., N_k}. If neuron i has j input connections I = {I_1, I_2, ..., I_j}, whose values vary over time following the set of functions F_i = {f_i1(t), f_i2(t), ..., f_ij(t)}, there is a function G dependent on I that defines the value of the neuron's output O_i. Moreover, its value depends on how the signals have evolved through time:

O_i = G(I, t)    (1.1)

Specifications:
The previously mentioned system has to comply with the following specifications:

• The system is formed by 1 to n neurons, where n ∈ N*.

• The connections of the neurons may change during their lifetime, i.e. the number j of input connections does not need to stay constant.

• Every neuron follows the same rules, although they may be tuned with different parameter values.

• The neurons work independently of each other, in parallel, and asynchronously.

• The state of the neurons depends on the evolution through time of their different inputs (eq. 1.1).
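The specifications above can be sketched as a neuron agent interface. Everything here (class name, members, the particular form chosen for G) is a hypothetical illustration of the requirements, not the actual implementation described later in the thesis:

```python
# Hypothetical sketch of a neuron agent satisfying the specification:
# connections may change over time, and the output depends on the time
# evolution of the inputs (eq. 1.1).

class NeuronAgent:
    def __init__(self, params):
        self.params = params     # every neuron follows the same rules,
                                 # possibly with different parameter values
        self.inputs = set()      # j may change during the neuron's lifetime
        self.history = []        # (source, timestamp) of received pulses

    def connect(self, source):
        self.inputs.add(source)      # grow a new input connection

    def disconnect(self, source):
        self.inputs.discard(source)  # break an existing connection

    def output(self, t):
        """O_i = G(I, t): a function of how the inputs evolved over time.

        Here G is chosen, purely for illustration, as "enough pulses
        arrived within a recent time window"."""
        recent = [(src, ts) for src, ts in self.history
                  if t - ts <= self.params["window"]]
        return len(recent) >= self.params["threshold"]

a = NeuronAgent({"window": 5.0, "threshold": 2})
a.connect("n1")
a.history = [("n1", 1.0), ("n1", 3.0)]
print(a.output(t=4.0))   # True: two pulses inside the window
print(a.output(t=20.0))  # False: the pulses are too old
```

Because each agent holds only its own state and reacts to incoming pulses, a population of such agents can run in parallel and asynchronously, as the specification requires.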
This project deals with the design of a set of rules able to reproduce some of the behaviours observed in biological neural circuits. The final goal is to produce intelligence by creating SNNs that follow the aforementioned rules.

In order to implement these rules, a multi-agent system (MAS) was designed. A MAS is a system where the computation is distributed among a certain number of units called agents. These agents possess some sort of intelligence, and they can operate autonomously or they may require some degree of cooperation between themselves.

According to the literature (Ferber and Weiss, 1999), the applications of MAS cover different purposes, among which this project puts focus on two:

• Multi-Agent Simulation: This application of MAS creates simulations of observed behaviours (mostly in natural systems), with the goal of reproducing and validating the theories and inferences made about those systems. In the case of this project, systems reproducing neural circuits were simulated by implementing some of the rules established in the field of neuroscience. With the application of MAS it is possible to modify and simplify deductions made in neuroscience, and therefore test if the reduced set of rules is able to produce intelligence and reproduce the behaviours observed in biological neural circuits.
• Problem solving: This other application of MAS focuses on splitting the computational effort required for a task and distributing it among several agents. This way, it is possible to obtain a better organisation of the tasks, and to optimize their solution.
Traditional non-spiking Artificial Neural Networks (ANNs) typically implement logic functions of different complexities, depending on the size of the network. This means simple ANNs can implement basic logic functions such as OR or AND gates.

Following a reductionist approach, the initial idea for developing and testing the neuron growth in SNNs was to develop rules that, once applied to a given network, were able to create a network which could implement simple logic functions. Furthermore, it was also among the initial goals of the project that the obtained rules could be applied in a general way to any kind of network. Then, after the learning process, the network would end up with a certain topology implementing the desired function based on the values of the input and output signals during the learning period. This means that the same growth rules applied to the same initial network conditions would end up generating different topologies based on the signal values during the learning process.

However, the neuron model implementing growth rules designed so far cannot meet this last requirement for one of the most basic networks, i.e. a 3-neuron network (see Fig. 1.5). According to the previous statement, the developed rules should be able to generate a network implementing both an OR or an AND function, depending on how the signals behaved during the learning process.

Figure 1.5: Spiking neural network formed by 2 input neurons connected to a third neuron. The blue dots represent the neurons' somas, whereas the red ones represent the dendrites' segments. The arrows indicate the information direction flow.

First of all, for a given static set of parameters, the previous network can only implement one specific function once it gets wired as shown, due to the absence of synaptic weights. For example, for a certain set of parameters the network provokes the output neuron to spike when at least 2 synaptic pulses are detected within a short period of time, as shown in Fig. ??.
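This dependence of the implemented function on the parameters can be sketched as follows: the size of the per-pulse potential increase relative to the firing threshold decides whether one pulse or two near-simultaneous pulses are needed to trigger the output neuron. All constants are illustrative, not the thesis model's parameters:

```python
# Sketch of how the per-pulse potential increase (delta_u) relative to the
# firing threshold decides the logic function implemented by the output
# neuron, when the time dimension is ignored. Constants are illustrative.

def output_spikes(pulses_per_step, delta_u, threshold=1.0, leak=0.5):
    """Simulate the output neuron over a train of input steps; return the
    list of spike/no-spike decisions per step."""
    u, spikes = 0.0, []
    for n_pulses in pulses_per_step:
        u = u * leak + n_pulses * delta_u  # decay, then integrate pulses
        if u >= threshold:
            spikes.append(True)
            u = 0.0                        # reset after spiking
        else:
            spikes.append(False)
    return spikes

# delta_u = 0.6: one pulse is not enough, two simultaneous pulses are
# -> AND-like behaviour.
print(output_spikes([1, 2], delta_u=0.6))  # [False, True]

# delta_u = 1.0: a single pulse already reaches the threshold
# -> OR-like behaviour.
print(output_spikes([1, 2], delta_u=1.0))  # [True, True]
```

Note that the neuron only counts recent pulses, not which input produced them, which is exactly the "activity detector" limitation discussed in the text.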
Furthermore, if the rate of the train of input spikes is lower, more input pulses are necessary to reach the threshold voltage. In any case, the plotted function corresponds to an activity detector which, without considering the time dimension, is equivalent to an AND gate, i.e. the output neuron only spikes when the 2 input synapses trigger simultaneously.

On the other hand, if the ∆u parameter is made big enough for the neuron to spike after one single input synaptic pulse, then the network becomes an OR gate if the time dimension is ignored.

The absence of specific weights for each synapse, plus the fact that one single synapse firing several consecutive times has the same effect as several synapses firing once at similar times, makes the neurons agnostic to which input is sending pulses. In effect, these neurons behave as activity detectors, as their outputs are functions of how many synaptic pulses have been received in the short term, independently of which inputs provided those pulses.

Perhaps it is not valid to assume that SNNs can work as logic units at single time instants. In fact, the time dimension is crucial to understand their behaviour and, as spiking neurons implement functions that depend on the activity of their inputs through time, trying to implement classic logic functions that ignore the time evolution of signals is somewhat preposterous.

Several spines connecting the same 2 neurons:
The designed model allows spines from a neuron to grow towards any other soma in its neighbourhood based on the Hebbian growth rules. Moreover, once a spine joins 2 neurons, new spines are restricted from growing towards the same neuron again.

However, the drawback of not having synaptic weights may be solved by allowing several spines to connect to the same destination neuron. This way, when the input neuron spikes, the membrane potential of the destination neuron is increased by N_I · ∆u, where N_I is the number of input spines from the same neuron.

Therefore, the rule limiting spines from growing towards the same neuron can be modified so that the ANN concept of synaptic weight applies, i.e. more spines connecting the neurons increase the weight between both neurons.

Synaptic weight based on Hebbian-based synaptic strength:
Another alternative that can eliminate this problem is the usage of Hebbian learning for establishing the strength of the synapse once the neurons are connected. If this alternative were implemented, the effect of an electric pulse on the receptor would be diminished if the pulses of both neurons follow an anti-Hebbian pattern. On the other hand, neurons following a Hebbian pattern would imply a stronger synapse, and therefore a higher increment of the membrane potential after incoming pulses in the synapse.
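Both alternatives above amount to scaling the fixed membrane increment of a single pulse. A minimal sketch (all names and the value of ∆u are illustrative assumptions, not the thesis model):

```python
# Fixed contribution of a single spine/synapse pulse (assumed value).
DELTA_U = 0.25

def increment_by_spine_count(n_spines):
    """Alternative 1: several spines to the same destination neuron act as
    a larger effective weight, N_I * delta_u."""
    return n_spines * DELTA_U

def increment_by_hebbian_strength(strength):
    """Alternative 2: one connection whose strength in [0, 1] follows the
    Hebbian correlation; anti-Hebbian patterns drive it toward 0."""
    return strength * DELTA_U

print(increment_by_spine_count(3))          # -> 0.75
print(increment_by_hebbian_strength(0.4))   # -> 0.1
```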
One main question that emerged during the early stages of the project was which direction the development of the research should focus on. Whereas the design and implementation of a model of neuron growth was the main target of the project, 2 directions for testing and validating the model were outlined.

On the one hand, the project could focus on testing the robustness of a single neuron in different situations and environment conditions. Since the outcome of the Hebbian-based growth is neuron spines growing towards other neurons with correlated firing sequences, this axiom can be validated by setting up scenarios with pairs of neurons with different correlation levels in their spikes. Moreover, different levels of environment noise can be tried out, as well as strong disturbances coming from specific spatial points.

On the other hand, the neuron model can be implemented in large networks by preparing tests with several neurons. The performance of the model can then be evaluated in different network layouts, and its behaviour assessed. Some benchmark networks with known outcomes can be prepared, so that the success rate of the model in the different scenarios can be measured.

In the end, a trade-off between both approaches was followed, though closer to the first option. Hence, the robustness of the Hebbian-based growth was mostly assessed by setting up single pairs of neurons and small pre-defined layouts. Moreover, its performance was also evaluated within networks of bigger complexity.

The implementation of a neuron reservoir for testing the performance of the introduced rules is a very interesting test-bench. If successful, it can be a first step towards a tool that could be used in engineering for implementing intelligent systems able to solve logic tasks.
Although this may be an interesting later stage of the project, the current project focuses on the creation of a neuron model and on testing the mathematical properties of the model, as well as its robustness in different situations. Moreover, this project also tries to gain insight into how such a model can benefit from the application of a MAS for deploying the growth rules.

1.5 Related work
A self-organizing map (SOM), also named Kohonen map after its inventor, is a variant of unsupervised learning in ANNs. It consists in the mapping of an input vector onto an N-dimensional output, typically a 2-D map. Through an iterative process, random input vectors are chosen, and the weights closest to the input, as well as those of their neighbours, are modified (Kohonen, 1990). Fig. 1.6 shows a graphical explanation of how this algorithm works. The weights between the input and the output layers evolve based on the similarity between different elements in the input. In the end, inputs with similar values are clustered together and mapped to close output neurons.

Figure 1.6: Graphical example of a SOM. The n input variables are fed into the system, and these are mapped onto the output layer. Image obtained from .

Therefore, this learning creates ANNs where the outputs form clusters with their closest inputs. The distance between the weights and the inputs is normally calculated with the Euclidean distance.

The weights are not a means of deciding how to activate output neurons, but a feature defining the output neuron itself, i.e. they are the value identifying the output neuron. That is, they define its relative distance to the inputs in a Euclidean space.

This family of ANNs is used for classification, as it allows clustering inputs without former knowledge about them.

There are popular modifications to this algorithm that typically include a traditional ANN setup before the output layer for optimizing the values fed to the SOM (Thurau et al., 2003).

The main similarity between SOMs and the proposed SNNs is that SOMs follow to some degree the postulate of Hebbian learning: although the concept of spiking does not apply to SOMs, the connections between their neurons evolve based on the similarity between their values, which is represented by the weights.
In this specific ANN, the weights are a way of representing the characteristic feature of the output neurons, opposed to the traditional role of representing the connection strength.

However, there are several relevant differences between SOMs and the proposed networks:

• Although SOMs draw some inspiration from Hebbian learning, neurons in a SOM do not spike, and therefore the original definition of Hebbian learning cannot be applied to them.
• SOMs map input to output neurons by evolving the weights between these two layers, whereas the proposed design modifies the structure of the network itself, creating and destroying connections between the neurons.
• A SOM is mainly focused on the task of classification, whereas the proposed design serves as a learning mechanism for general-purpose SNNs.
• SOMs are time-driven systems, and they do not take into account the time dimension, opposed to the networks implemented in this project, which are event-driven. In any case, this is one of the main differences between SNNs and ANNs, and it is already covered in the introduction of this document.
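The iterative SOM update summarized above can be sketched in a few lines (a minimal illustration following Kohonen's rule; the grid size, learning rate, and hard-cutoff neighbourhood function are illustrative choices, not from the cited works):

```python
import math
import random

def som_step(weights, x, lr=0.5, radius=1.0):
    """One iteration: find the best-matching unit (BMU) by Euclidean distance,
    then pull the BMU and its grid neighbours toward the input x."""
    bmu = min(weights, key=lambda pos: sum((wi - xi) ** 2
                                           for wi, xi in zip(weights[pos], x)))
    for pos, w in weights.items():
        if math.dist(pos, bmu) <= radius:  # neighbourhood on the output grid
            weights[pos] = [wi + lr * (xi - wi) for wi, xi in zip(w, x)]
    return bmu

# 2x2 output map with 2-D inputs; weights are indexed by grid coordinates.
random.seed(0)
weights = {(i, j): [random.random(), random.random()]
           for i in range(2) for j in range(2)}
for _ in range(50):
    som_step(weights, [1.0, 0.0])
# After repeated presentations, the BMU's weights approach the input vector.
```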
Traditional ANNs obtain output values based on the values of the input vector at a specific discrete time. This approach turns disadvantageous when the solution to a problem depends on the recent states of the system. For example, when analysing the meaning of a sentence in speech recognition, assessing the previous words makes the problem easier than evaluating the meaning of each word separately.

This shortcoming of ANNs led to the design and usage of Recurrent Neural Networks (RNNs). They are neural networks very similar to vanilla ANNs, their most relevant feature being the existence of neurons that also feed their output back into their own input vector (Graves et al., 2008). Therefore, when a neuron in an RNN evaluates its new state, it takes into account its former state (see Fig. 1.7). This is a way of giving memory to the neurons, making their outputs dependent not only on the current input vector, but also on the states they were in during previous steps. One widely used architecture of this kind is the long short-term memory (LSTM).

Figure 1.7: Representation of a neuron feeding back its state in an RNN. It can be observed that on each step, a neuron takes as inputs both the input vector and its former value. Image taken from https://machinelearning-blog.com.

Although this approach may look similar to SNNs, they are very different approaches for solving a problem. It is important to realize that an RNN feeds back the former states of some of its neurons, and uses that information as an additional input to the system. On the contrary, a neuron in an SNN does not feed back its output, but calculates its state based on the recent activity on all its inputs, i.e. an SNN analyses the history of its inputs for calculating its output, whereas an RNN only uses the current value of the inputs, as well as the history of its previous outputs.
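The feedback described above can be shown with a toy recurrent unit (the weights and the tanh activation are arbitrary illustrative choices): each step combines the current input with the unit's previous state.

```python
import math

def rnn_step(x, h_prev, w_x=1.0, w_h=0.5, b=0.0):
    """New hidden state depends on the current input AND the former state."""
    return math.tanh(w_x * x + w_h * h_prev + b)

h = 0.0
for x in [1.0, 0.0, 0.0]:   # a single input pulse, then silence
    h = rnn_step(x, h)
    print(round(h, 3))      # the state decays but keeps a trace of the pulse
```

The state after the pulse is non-zero for the following steps, which is exactly the memory effect the recurrent connection provides.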
The rest of this document is divided in 4 chapters covering the different aspects relevant to the project.

Chapter 2 offers a review of the background of the project. It is divided in 3 main fields: on the one hand, it summarizes some of the knowledge acquired in the last century in the field of neuroscience related to the growth and learning mechanisms of neurons. The discoveries regarding spine mechanics and neurotaxis are relatively recent, whereas neural learning is a topic that has been generating relevant literature since the early 20th century. On the other hand, there is a survey of SNNs, covering the main trends of the last decades. Finally, the last section offers a brief overview of state-of-the-art multi-agent systems.

Chapter 3 offers a thorough explanation of the proposed set of rules for neuron growth in SNNs, as well as an adapted spiking model compatible with those rules. The mathematical model is designed on the grounds of Hebbian learning, which is further developed by translating its results into actual spine movement. Fundamental mechanics are used for creating the kinematics of the spine, driven by the forces applied by the environment. This set of rules depends mostly on the firing times of the neurons, which is why a spiking model was needed. The Leaky integrate-and-fire model was chosen and modified so it could be adapted to the designed growth rules. Finally, an overview of the application of multi-agent systems to the project is included, explaining the different types of agents implemented and how they interact with each other.

Chapter 4 introduces the different experiments that have been performed with the designed model, which basically consist in different network layouts that allow assessing different aspects of the model's performance.
Afterwards, an analysis of the different parameters of the model is offered, showing their impact on the system performance, and how they could be altered to obtain different results.

Finally, chapter 5 offers a final conclusion of the project, the further work that can be done in the future, and what can be expected from the designed model.

Chapter 2
Background and state of the art
Hebbian learning was originally a hypothesis stating that the synaptic connections between neurons get stronger or weaker depending on the timing correlation between their spikes. Furthermore, in recent years new in-vitro and in-vivo experiments have given evidence for the existence of a Hebbian-like rule involving the growth of neurons' spines based on their activity (Feldman and Brecht, 2005). This process is often also named structural plasticity.

Structural plasticity in the mammalian brain is dominated by different mechanisms that operate on different time scales (from minutes to weeks and months), and that span from single spines and axon boutons to entire dendritic arbors. This process is believed to be tightly related to the formation of memories and the process of learning (Holtmaat and Svoboda, 2009).

The long structural processes involving the creation and shaping of axons and complex dendritic arbors are related to the activity that takes place during neurogenesis (the process of creation of neurons in animals) and injury recovery. Despite their obvious relevance for understanding the operation of the brain, rapid experience-dependent structural plasticity is more relevant for establishing growth rules applicable to Spiking Neural Networks. These rapid dynamics typically correspond to dendritic spines growing towards axon boutons (Holtmaat and Svoboda, 2009).
Fundamentals of Hebbian learning in spiking neurons:
There are two main models for the neuron's assessment of incoming pulses: rate-based or time-based. The first establishes that the connection between two neurons becomes stronger when the spiking rates of both are similar, whereas the second evaluates each individual spike based on its precise timing.

The first model is easier to describe and implement in a computational system. However, it is considered to have limitations that make it insufficient for explaining learning in some situations, e.g. the visual response time for many animals is less than 200 ms, which is incompatible with rate-based input analysis, as neurons would not have enough information for evaluating the input spike rate.

The literature supports the idea that biological neurons are closer to the time-based model, as they transfer information through spikes, and the strength of the connections between them depends on the time correlations between their spikes (Kempter et al., 1999). Eq. 2.1 is proposed for calculating the efficacy J_i of a synapse of a neuron with a presynaptic neuron i,

\Delta J_i(t) = \eta \Big[ \sum_{t_i^f} w^{\mathrm{in}} + \sum_{t^n} w^{\mathrm{out}} + \sum_{t_i^f, t^n} W(t_i^f - t^n) \Big]   (2.1)

where t_i^f is the firing time of neuron i, t^n is the firing time of the actual neuron, and W is the learning-window function. η is a very small parameter that makes the learning evolve much more slowly than the actual network dynamics. The parameters w^in and w^out depend on J_i.

Opposite to what is implemented in a typical ANN, this equation models neurons with binary outputs, i.e. as long as the neurons are connected, the individual incoming pulses always have the same effect on the postsynaptic neuron.

Spike-timing-dependent synaptic modification induced by spike trains:
In (Kempter et al., 1999) it is assumed that the contribution of each pulse in a train of pulses is independent, and based on a general strength-variation rule, i.e. a formal representation of the Hebbian rule. However, the authors of (Froemke and Dan, 2002) prove with in-vivo experiments that the influence of a pulse is strongly determined by the presence of previous pulses in the same neuron.

Hence, if two pulses happen within a short time of each other, the effect of the second one is considerably diminished, and the final change in the synapse strength is mostly determined by the first one. This is formally presented by applying an exponential suppression to the effect of a pulse depending on the time difference with the preceding pulse. The neuron efficacy ε_i is therefore introduced, calculated using eq. 2.2,

\epsilon_i = 1 - e^{-(t_i - t_{i-1})/\tau_s}   (2.2)

where t_i and t_{i-1} are the timings of the i-th and (i−1)-th pulses of the neuron, and τ_s is a time constant.

The efficacy is applied in the general equation eq. 2.3 for calculating the variation of the synapse strength between neurons i and j,

\Delta w_{ij} = \epsilon_i \epsilon_j F(\Delta t)   (2.3)

where F(Δt) is the function calculating the spike-timing-dependent plasticity (STDP) between both neurons, given by eq. 2.4,

F(\Delta t) = A_+ e^{-|\Delta t|/\tau_+} \;\text{if } \Delta t > 0, \qquad F(\Delta t) = -A_- e^{-|\Delta t|/\tau_-} \;\text{if } \Delta t < 0   (2.4)

A_+ and A_- are scaling factors, and τ_+ and τ_- are time constants.

2.1.3 An overview of the forces driving neurotaxis

In (Dickson, 2002) a review is offered of the mechanisms of axon guidance known at the time. On the one hand, it reviews the different guidance chemical cues present in the axon environment, stressing the fact that single cues may have more than one role in axon guidance. There are four known families of guidance cues:

• Netrins: They have the ability to both attract and repel the axon, depending on the receptors present on it.
Their effect can span from short range to millimetres.
• Slits: They are large proteins that act as repellents for certain receptors (Roundabout receptors, or Robo); hence they establish the borders of the axon growth. They also work as stimulants of sensory axon branching and elongation.
• Semaphorins: These molecules are divided in 8 classes. They work as short-range inhibitors of the growth, although it seems they may also work as attraction cues for some receptors. Their main role seems to be the avoidance of inappropriate cell contacts.
• Ephrins: They form molecular gradients that lead to the topographic order of an axon, though not its precise end. Thus, they signal the direction of the growth.

The growth cone is formed by parallel oriented actin filaments (filopodia) and an intervening network of filaments, whereas the growth is directed by the extension and contraction of the microtubules. The turning of the growth cone can be explained by the signaling produced by different proteins contained in the cone structure. Moreover, gradients of Calcium ions have been observed in the filopodia that can lead to the turning of the cone.

One of the key features of the growth cone is the plasticity of its properties, which allows it to react in a different way to the chemical cues, depending on the stage of the growth. This plasticity is obtained by at least three mechanisms:

• Modulation by cyclic nucleotides: Inhibiting, lowering, or raising the levels of certain molecules (cAMP or cGMP, or the kinases PKA or PKG) provokes attraction or repulsion to certain cues.
• Local translation in the growth cone: The translation of certain molecules through the axon provokes the synthesis of certain proteins. Blocking this translation, and hence inhibiting the synthesis, also inhibits the turning of the cone, although not the growth.
• Switching responses at the midline: The axons may change their sensitivity to the cues after reaching and passing through them.

"The ultimate challenge, after all, is to find out how a comparatively small number of guidance molecules generate such astonishingly complex patterns of neuronal wiring." (Dickson, 2002)

2.1.4 Mechanics of the growth cone

The trip of the tip: Understanding the growth cone machinery
In (Lowery and Van Vactor, 2009) a review is offered of the chemical mechanisms and factors that influence the movement of the growth cone, which is the tip of the axon in a neuron. Using the metaphor of a road trip for explaining the whole system (see Fig. 2.1), the growth cone is considered a vehicle that has to drive along a roadway (adhesive substrate-bound cues), delimited by guard rails (repellent substrate-bound cues), and follows road signs for deciding the path (diffusible chemotropic cues). They also distinguish two main functions in the system, i.e. the vehicle, which deals with the motion mechanisms that keep the growth cone growing; and the navigator, which is responsible for deciding which path to take.

Figure 2.1: Representation of the main chemical cues interacting with the growth cone, and their function. Image obtained from (Lowery and Van Vactor, 2009).

Regarding the mechanisms behind the growth cone (the vehicle), there are three stages that are repeated continuously and make the growth cone progress on its trip. First, during the protrusion stage, the filopodia and lamellipodia extend forward. Secondly, during the engorgement stage, the main body of the growth cone moves forward following the filopodia. At last, during the consolidation stage, the shape of the axon shaft is formed again. This is all achieved by a molecular clutch model that allows the cytoskeleton to get anchored to the adhesive substrate, thanks to the properties of a family of proteins called actin, which comprise the cytoskeleton of the cells. During this process, filopodia are considered to work both as exploration sensors and points of attachment. Fully understanding the clutch mechanism is paramount for understanding the overall logic governing the progression of the growth cone (Lowery and Van Vactor, 2009).

Regarding the navigator, there are two main components ruling the process of steering the growth cone and finding the adequate path.
First of all, the aforementioned interaction of the actin structures with the chemical cues working as road signs. Depending on the chemical properties of the axon (for example, the presence of different kinds of neurotransmitters), the actin will be attracted or repelled by the different kinds of chemotropic cues (the road signs). Secondly, the microtubules present along the growth cone seem to have a very important role in the steering of the axon. Their polymer structure makes them show a dynamic instability. Thanks to this property, they act as sensors during the protrusion stage, interacting with actin cues and steering the growth cone towards the correct direction; but they also act as inhibitors, granting stabilization against the guidance cues and acting as a scaffold for guidance-cue signaling.

The growth cone structure and mechanisms offer several alternatives to be considered when developing the rules for a model of the evolution of neuron topology. From the perspective of a multi-agent system, and given the importance of different elements in the growth and steering of the axon, some of these elements may be modeled as separate agents: a multi-agent model of the neuron may include soma agents (perhaps another type for the dendrites), one or several axon segment agents, a growth cone agent, several filopodium agents, several microtubule agents, and several actin bundle agents.

Moreover, it is clear that the environment has a very important effect on the growth cone, and some of its properties could be included in the model of the MAS map. At least 4 different types of chemotropic cues are mentioned, which are crucial for determining the route the growth cone will take, due to their interaction with the growth cone elements.

However, several questions without an easy answer appear, mostly related to the how. How are the location and intensity of the chemical cues determined?
How are the attraction and repulsion rules between the different chemical elements quantitatively decided? According to (Lowery and Van Vactor, 2009), the overall logic that governs this process is still emerging.

2.2 Spiking neural networks
Spiking Neural Networks (SNNs), often called 3rd-generation neural networks, were developed under the idea of creating a system able to reproduce the behaviour of neural circuits. Although ANNs were also created as an attempt to replicate some of this behaviour, advances in neuroscience soon proved that the assumptions made for ANNs were far from the reality of biological neural circuits (Paugam-Moisy and Bohte, 2012).

The most innovative idea behind SNNs is taking into account the time evolution of input spikes when calculating the state of the neurons. This means that SNNs are based on the evolution of the inputs in the time dimension, and they get excited when enough spikes have recently been received at their inputs. This idea also implies that the neurons in SNNs are event-driven computing units, i.e. the computation is performed when certain events occur, opposed to time-driven processing, where the computation is performed at constant time intervals.

Although there are several models for spiking neurons, most of them make use of the concept of the membrane potential. When spikes arrive at a neuron, its membrane potential increases and, depending on the model, the neuron will reach an excitation state after some time as a function of this potential. Finally, this excitation state entails the generation of a spike in the neuron itself, which is transmitted by its axon to other neurons connected to it. Moreover, this process involves time delays, and many models include stochastic processes as well. These are a way of representing a plethora of phenomena occurring in the neuron and its environment, such as the amount of and sensitivity to neurotransmitters, as well as the amount of them that can travel to the synapses; or the presence of chemical cues around the neuron, which can boost or inhibit the synapse.
In any case, the modification of the membrane potential after an incoming spike is normally referred to as the postsynaptic potential, which can be excitatory (EPSP) or inhibitory (IPSP) (Paugam-Moisy and Bohte, 2012).

Regardless of the model, biological neurons get electrically charged after receiving spikes, and produce new spikes asynchronously based on the amount of received spikes. Neurons in an SNN work in an analogous way, where the timing between spikes is the most important way of transmitting information.
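The event-driven, membrane-potential idea described above can be sketched as follows (a toy illustration; the class name, decay constant, and PSP magnitudes are assumptions, not a specific model from the literature): the neuron computes only when a spike event arrives, first decaying its potential for the elapsed time and then adding an excitatory (EPSP) or inhibitory (IPSP) contribution.

```python
import math

class EventDrivenNeuron:
    def __init__(self, u_rest=0.0, threshold=1.0, tau=0.020):
        self.u, self.t = u_rest, 0.0
        self.u_rest, self.threshold, self.tau = u_rest, threshold, tau

    def on_spike(self, t, psp):
        """psp > 0 models an EPSP, psp < 0 an IPSP. Returns True on firing."""
        # Exponential decay toward rest for the time elapsed since the last event.
        self.u = self.u_rest + (self.u - self.u_rest) * math.exp(-(t - self.t) / self.tau)
        self.t = t
        self.u += psp
        if self.u >= self.threshold:
            self.u = self.u_rest   # reset after the output spike
            return True
        return False

n = EventDrivenNeuron()
print(n.on_spike(0.000, 0.6))   # EPSP, below threshold -> False
print(n.on_spike(0.002, -0.2))  # IPSP pushes the potential down -> False
print(n.on_spike(0.004, 0.8))   # EPSP crosses the threshold -> True
```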
The firing model is a crucial part of an SNN design, as it defines what is required for neurons to spike, and how they behave after a spike occurs. The two main features evaluated in a firing model are its similarity to biological neurons and its computational simplicity. In the rest of this section, the most popular firing models are introduced. Many more have been designed, most of which derive from the following ones or combine some of their features.
Hodgkin-Huxley model:
The Hodgkin-Huxley model represents the spiking of the neurons by an electrical circuit consisting of a capacitor, representing the capacitance of the neuron membrane, and three parallel conductances representing the different ion channels (potassium, sodium, etc.). Moreover, these are in series with batteries representing the equilibrium potentials of the channels. It was initially developed for modeling the behaviour of the squid nervous system, after a series of experiments during the 1940s. A generalized differential equation proved to be highly accurate for describing the properties of neurons' action potentials (Nelson, 2004).

Despite being very faithful to the biological neuron, this model is very complex and rather complicated to implement in an SNN with a big number of neurons. Fig. 2.2 depicts the electrical schematic of this model.

An important feature of this model is the existence of a temporary refractory time after a spike, during which the occurrence of a second spike is very unlikely. This time typically lasts a few milliseconds.

Figure 2.2: Electrical schematic of the Hodgkin-Huxley model. Image obtained from (Nelson, 2004).
Leaky integrate and fire model:
The Leaky I-F model consists in the representation of the soma membrane as an R-C electrical circuit (see Fig. 2.3). It is derived from the Hodgkin-Huxley model; the incoming electric pulses contribute to charging the capacitor, which then slowly discharges over time until reaching the rest voltage.

Figure 2.3: Electrical circuit of the Leaky I-F model and shape of the current and voltage plots when a pulse is received. Image obtained from (Gerstner et al., 2014).

The occurrence of a spike is determined by a threshold value. Whenever the membrane potential rises over that value, the neuron produces a spike. Immediately after that event, the voltage drops back to the rest value. The existence of a refractory time is optional in this model.

Izhikevich model:
The Izhikevich model (Izhikevich, 2003) is one of the most suitable models for neuromorphic engineering, as it grants a remarkable trade-off between computational complexity and resemblance to actual neurons.

By using the differential equations shown in eqs. 2.5 and 2.6, this model is able to reproduce several different spiking patterns, which makes it possible to adjust to the different dynamic behaviours present in biological neurons. This is achieved by changing the values of the parameters a, b, c, and d,

\frac{dv}{dt} = 0.04\,v(t)^2 + 5\,v(t) + 140 - u(t) + I(t), \qquad \frac{du}{dt} = a\,(b\,v(t) - u(t))   (2.5)

\text{if } v \geq 30\ \text{mV, then } v \leftarrow c, \; u \leftarrow u + d   (2.6)

This model adds complexity that is not needed in this project, and some of its most relevant features, such as the adaptation to several different types of neuron spike trains, would not be used for testing the neuron growth. Fig. 2.4 contains four different spiking patterns that can be obtained with eqs. 2.5 and 2.6.

Figure 2.4: Example of four different spiking patterns that can be obtained with the Izhikevich model by varying the parameters a, b, c, and d. Image obtained from (Izhikevich, 2003).

Chapter 3
System design

Throughout this chapter, the neuron model that was designed in this project is thoroughly described. The chapter starts with a description of the multi-agent system used for implementing the designed model of the neuron and creating SNNs that take the space dimensions into account. Then, the equation defining spine growth is introduced in section 3.2. Later on, the dynamics of the spines' movement throughout the network environment are described in section 3.3. Finally, section 3.4 explains the model used for representing the neuron firing, based on the Leaky integrate-and-fire model.

All along this chapter, the reader will find figures depicting software simulations that have been done to show behaviours and properties of neural networks following the design rules being introduced.
These simulations have been done in a software framework named RANA, developed by the University of Southern Denmark and improved in some minor details during the development of this project. In the simulations, the red dots represent soma agents, growth cones are represented by green dots, and blue dots represent the spine agents. Section 4.1 offers a more detailed explanation of this tool and of the functionalities that have been added to it during this project.
A MAS model has been designed in order to build a system that implements the behaviours and the designed set of rules explained in detail in the following sections. The computation is therefore distributed over several agents, and the final goal of this system is to distribute the intelligence between several computational units.

A neuron model has been developed with dynamic dendrite spines that grow towards neighbouring neurons based on the correlation between their firing times. This model has been implemented and tested using a multi-agent approach, where the entity representing a neuron is formed by 3 different types of agents.
Soma agent:
There is one soma agent per neuron, and it is the top agent in the neuron's hierarchy tree, i.e. it is the parent of the rest of the neuron's agents. The spiking rule, i.e. the Leaky I-F model, is implemented in this agent, as well as the stochastic process for deciding whether a spike occurs or not.

The spiking algorithm requires gathering the data of the incoming input pulses, which are received through the neuron's dendrites. Both the intensity and the time at which these events happen are needed. They are fed to the Leaky I-F algorithm in order to obtain the soma membrane potential, which also takes the noise into account (see eq. 3.18). This value is used for calculating the probability of a spike event happening, following the next steps:

1. Normalization of the membrane potential, with 0 being the rest voltage U_rest and 1 the threshold voltage U_threshold (eq. 3.13).
2. Calculation of the spiking probability, by feeding the normalized voltage to the Sigmoid function (eq. 3.12).
3. Decision of whether the neuron spikes or not, by using the previously calculated probability in a Bernoulli trial (eq. 3.11).

Moreover, this agent spawns a growth cone agent at the start of the simulation, and also after the current growth cone connects to another neuron. However, in some networks this functionality has been limited, for the sake of avoiding undesired complex structures, i.e. some neurons can only spawn a limited amount of growth cones and thus dendrite spines.

This agent communicates certain information to other agents by emitting asynchronous events:

• excited neuron: The targets of this event are the growth cone agents of the same neuron. It is emitted when a spike happens in the soma, and it is required by the Hebb's rule implemented in those agents.
• electric pulse: The targets of this event are the growth cone agents from different neurons in the neighbourhood.
This event represents the propagation of an electric pulse through the environment when the current neuron gets excited.
• assign group: This event is used for informing the children agents of the soma of the identity of their parent soma.
• firing time and stop growth: These are auxiliary events used for recording the times at which the firings occur for later analysis, and for preventing the neuron's cone from growing indefinitely.
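The three spike-decision steps listed above can be sketched as follows. This is an illustrative Python sketch, not the Lua implementation; the parameter values are the ones used in section 3.4 (U_rest = -70 mV, U_threshold = -54 mV, sigmoid growth rate k = 14.58 with middle point x0 = 0.5):

```python
import math
import random

U_REST = -70.0       # mV, rest potential (section 3.4)
U_THRESHOLD = -54.0  # mV, threshold potential (section 3.4)
K = 14.58            # sigmoid growth rate (tuned in section 3.4.2)
X0 = 0.5             # sigmoid middle point

def normalize(u):
    # eq. 3.13: 0 at U_rest, 1 at U_threshold
    return (u - U_REST) / (U_THRESHOLD - U_REST)

def sigmoid(x):
    # eq. 3.12: spiking probability from the normalized potential
    return 1.0 / (1.0 + math.exp(-K * (x - X0)))

def spikes(u, rng=random):
    # eq. 3.11: Bernoulli trial with success probability p = S(x)
    return rng.random() < sigmoid(normalize(u))
```

At U_rest the normalized potential is 0 and the per-iteration spiking probability is S(0), on the order of 7e-4, matching the tuning of section 3.4.2; near U_threshold the probability approaches 1.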
Growth cone agent:
This agent represents the tip of a neuron spine, and it contains the logic dealing with the Hebbian-based spine growth. In order to apply Hebbian learning, the next steps are followed:
1. Record the time at which the neuron's soma is excited. Only the last excited neuron event is relevant for the algorithm.
2. Detect electric pulse events occurring in the environment, and record the time when they happen.
3. When each of the previous events happens, calculate the time difference between both.
4. Calculate the STDP (eq. 3.1) in the neuron, i.e. the increase in the membrane potential.
5. If the growth cone is not connected, calculate its acceleration and velocity by using the designed kinematics algorithm (see eqs. 3.5 and 3.6).
Therefore, it is sensitive to electric pulses travelling in the environment which, together with the firing state of the current neuron, are paramount for establishing the spine growth direction.
This agent communicates relevant information by emitting the following asynchronous events:
• synapse: Its destination is the parent soma agent of the growth cone. It is used for transmitting received synaptic pulses to the soma for further processing.
• cone init: Event indicating that the growth cone has been correctly spawned and will start its normal operation.
• cone connected: Event for informing the parent soma agent that the cone has reached a destination. The soma will normally spawn a new growth cone after receiving this event.
• cone parent and cone kinematics: Auxiliary events used for recording the reached destination neuron and the historic velocity and acceleration values for further data analysis.
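Steps 3 and 4 above reduce to evaluating the STDP window and scaling it by the distance between the growth cone and the other soma. A sketch, assuming the reconstruction of eqs. 3.1 and 3.2 given in the next section (A = 1 and tau = 10 are the example values of Fig. 3.1b; the inverse-square distance scaling follows the electric-field analogy in the text):

```python
import math

def stdp(dt, A=1.0, tau=10.0):
    # eq. 3.1: STDP window F(dt), dt in ms
    # (positive branch for dt > 0, negative branch for dt < 0)
    if dt > 0:
        return A * math.exp(-abs(dt) / tau)
    elif dt < 0:
        return -A * math.exp(-abs(dt) / tau)
    return 0.0

def attraction_force(dt, cone_pos, soma_pos):
    # eq. 3.2: F(dt) scaled by the squared distance between the growth cone
    # of neuron i and the soma of neuron j, along the unit vector u_ij
    dx = soma_pos[0] - cone_pos[0]
    dy = soma_pos[1] - cone_pos[1]
    d = math.hypot(dx, dy)
    f = stdp(dt) / d**2
    return (f * dx / d, f * dy / d)
```

For a positive time difference the force points from the growth cone towards the other soma; a negative time difference flips its sign.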
Spine agent:
This agent represents a link in the spine of the neuron, and a new one is generated when the spine grows a certain length. Its purpose so far is merely graphical, as there are no functions associated with this agent.
However, further improvements of the model shall give intelligence to these agents, so they can react to electric pulses present in the environment, and eventually allow a dendrite to fork in more than one direction.
As explained in the introductory chapter of this document, the main goal of this project was to create intelligence by using a set of rules that make the structure of SNNs evolve through time. Therefore, the first step for creating such a model was to establish a rule for the growth of neuron spines.
In order to establish a rule for making neurons grow towards each other, the Spike Time-Dependent Potential (STDP) process has been used, which is a reformulation of Hebbian learning, and it is described in (Froemke and Dan, 2002). According to them, F(Δt) is the function for calculating the STDP between 2 neurons, and it is calculated by using eq. 3.1,

F(Δt) = A·e^(−|Δt|/τ)     if Δt > 0
F(Δt) = −A·e^(−|Δt|/τ)    if Δt < 0        (3.1)

where Δt is the time difference between the spiking of both neurons involved, A is a scaling factor, and τ a time constant.
In Fig. 3.1a are depicted the results of an in-vivo experiment regarding the change in the excitatory postsynaptic currents (EPSC) of a neuron in relation to the spiking time difference between the input pulse and the trigger of the neuron, and in Fig. 3.1b is depicted the plot of eq. 3.1. It can be observed that the shape of the implemented equation highly resembles the results obtained with real neurons.
Figure 3.1: a) Results of the in-vivo experiments obtained in (Bi and Poo, 1998), for the change in EPSC amplitude (in %), plotted against the time difference between spikes, and b) plot of the mathematical equation representing the STDP (eq. 3.1), for A = 1 and τ = 10.

Furthermore, eq. 3.1 was modified so it takes into account the distance vector between neurons. This way they are more influenced by close neighbours than by distant ones. This modification has been inspired by the electric field equation, as electric forces are one of the main phenomena explaining the interaction between neurons. The resulting force vector is represented in eq. 3.2 for assessing the attraction force of one neuron towards a second one when both are triggering at similar times. Therefore, the attraction force F_ij of neuron j over neuron i is determined by

F_ij = ( F(Δt_ij) / d_ij² ) · u_ij        (3.2)

where F(Δt_ij) is the EPSC amplitude calculated according to eq. 3.1, d_ij is the absolute distance between the growth cone of neuron i and the soma of neuron j, and u_ij is the unit vector joining both elements.

Implementing the negative side of the Hebbian rule

In the previous section the equation modeling Hebbian learning was introduced, which results in the variation of the synaptic strength between two neurons based on their firing times. This model is relatively simple to implement for modeling the positive growth of the neurons' spines.
However, in order to have a full model of the Hebbian structural plasticity, the negative side of the Hebbian rule has to be implemented as well. Eq. 3.1 can be used for obtaining the negative STDP value corresponding to a presynaptic neuron firing after the postsynaptic one.
Obtaining the final repulsion force is not straightforward, and different approaches can be taken into consideration (see Fig. 3.2).
The problem is that all of the shown approaches present at least one major drawback:
• The first approach would provoke the growth cone to travel in an unpredictable and unrealistic direction, moving away from both its soma and the second neuron.
• The second approach would create a very long spine forming a loop, although this could be fixed by, for example, making the spine decay and eventually disappear.
• In both the second and third approaches the growth cone would not necessarily take a direction opposite to the second neuron. It could actually get closer to the neuron if it is located between the growth cone and the soma of the first one.
Figure 3.2: Basic approaches for the direction that the growth cone can take when it suffers a repulsion force from a second neuron. In a) it moves in the opposite direction to the second neuron, in b) it moves back towards the neuron's soma, whereas in c) the cone goes backwards in the same direction it came from.

A fourth approach would be to implement a decay function which makes the attraction force between two neurons get weaker. Therefore, for a system with two neurons, if a repulsion is detected, the force between them would start to decay until reaching zero. From that point, the spine would start to get weaker until it disappears.
Still, there would be gaps in this model, as the behaviour when the growth cone is influenced by more than one neuron would not be realistic, and thus it needs to be further developed.
In any case, this was left as future work, and hence the negative side of Hebb's rule has not been included in the design.

3.3 Spine growth dynamics
In order to be able to implement complex network topologies, the neuron model needed changes in some important aspects. Otherwise, it would not be possible to create the necessary connections for generating many networks.
First of all, the model so far describes neurons whose spines follow irregular trajectories, as they are considerably affected by random noise and stray incoming pulses (experiments showing these effects are shown in chapter 4). Therefore, it is paramount to implement more stable dynamics for the spines in order to get robust connections between neurons. This problem is addressed in subsection 3.3.1, where the concept of the spine acceleration is introduced for creating inertia in the agents.
Second of all, when the velocity is obtained from a constant acceleration, the obtained velocity has to be bounded, as otherwise it would grow to infinity, being a source of instability in the system. Hence, a drag force opposing the movement of the spine is implemented in the model. It is described in subsection 3.3.2.
A drawback observed in the experiments was the high sensitivity of the spine growth to incoming pulses. Although incoming white noise did not considerably affect the success of reaching the final destination, some disturbances provoked big spikes in the trajectory of the spine, e.g. stray pulse sources provoked sudden changes in the path of the spine.
In other words, it is desired for the spine to follow smooth trajectories, and to be robust against noise and undesired incoming pulses. The following are some alternatives that could achieve this:
• Use a correction function that reduces the resulting attraction force if its direction deviates from the current spine velocity. For example, using the cosine of half of the angle between the velocity and the force would reduce the effect of the force the more the force deviates from the existing trajectory:

F'_ij = F_ij · cos(α/2)        (3.3)

• Use the attraction force for obtaining the second derivative of the position, instead of the first derivative. Said in other words, use the force for calculating the acceleration instead of the velocity.
• Use the second-order derivative equation typical of spring-mass systems for calculating the new position.
The first option presents the shortcoming that forces would tend to be ignored the more they deviate from the current velocity.
A solution based on the second option has been included in the design, and is explained in the following subsection.

3.3.1 Movement based on the second derivative
By using this approach, the spine acceleration is obtained from the incoming electric force. This makes sense from the physics point of view, as the acceleration of an object is directly affected by the force that is applied to it, according to Newton's second law of motion (eq. 3.4):

F = m·a        (3.4)

Thus, this equation was used for calculating the spine acceleration from the electric force value obtained from eq. 3.1.
As explained in the previous section, the current velocity is used for calculating the new position of the agent by applying simple kinematics. The model has been modified by calculating first the acceleration with the previous equations, and from there the instantaneous velocity of the agent (see eq. 3.5):

p_t = p_{t−1} + v_{t−1}·Δt
v_t = v_{t−1} + a_{t−1}·Δt        (3.5)

These equations were tested on a delay line (for details, see section 4.3). Fig. 3.3 shows the resulting layout. It can be seen that it is satisfactory, as the spines follow smooth trajectories. The experiment also shows that this model is robust against the presence of a stray point source emitting electric pulses with a given frequency.

Figure 3.3: RANA simulation of a delay line initialized with 8 neuron somas (red dots) using the neuron model defined by eqs. 3.5. The experiment was performed without environment noise or trigger Poisson noise, but a stray pulse generator was included in the bottom right corner. The kinematics of the marked neuron are plotted in Fig. 3.4. The experiment is pseudo-deterministic, and thus the result is always the same.

In Fig. 3.4 are shown the plots of the acceleration and velocity of one of the neurons in the experiment. It can be observed that the pulse generator creates single-pulse disturbances in the spine acceleration at 2 different points in the plot (around iteration 800 and iteration 5800).
However, the effect of these disturbances cannot be observed in the plot of the velocity of the agent.
Furthermore, the result also shows that, in the sole presence of one attractive source, a spine will tend to constantly accelerate towards it, which will provoke the velocity to grow towards infinity. This is an undesired behaviour, which was dealt with by introducing a drag force. This is addressed in subsection 3.3.2.
Figure 3.4: X and Y components of the velocity (red) and acceleration (blue) of the agent marked in Fig. 3.3. The plotted time spans from the simulation start until the dendrite reaches the destination soma.
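The update rule of eqs. 3.4 and 3.5 amounts to a forward Euler integration of the force. A minimal sketch, with an example constant force and illustrative values for the time step and mass:

```python
def step(pos, vel, force, dt=0.1, mass=1.0):
    # eq. 3.4: a = F / m; eq. 3.5: forward Euler update of position and velocity
    acc = tuple(f / mass for f in force)
    new_pos = tuple(p + v * dt for p, v in zip(pos, vel))
    new_vel = tuple(v + a * dt for v, a in zip(vel, acc))
    return new_pos, new_vel

# Under a constant attractive force the velocity grows linearly without bound,
# which is exactly the behaviour that motivates the drag force of subsection 3.3.2
pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(100):
    pos, vel = step(pos, vel, force=(1.0, 0.0))
```

After 100 steps with a unit force along x, the x velocity has grown to 10 units and keeps growing linearly with time.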
Robustness against noise:
The new kinematic model was tested under different noise parameters in order to assess its robustness. Environment noise was added to the system, following the same approach as in the former model, i.e. the noise has an intensity that follows a Gaussian distribution with zero mean, and the direction of the noise follows a uniform distribution between 0 and 360 degrees.
The spines are still able to reach the desired target, even with a considerable amount of noise (see Fig. 3.5). The velocity and acceleration plots of one of the agents show that the noise is considerably big compared to the attraction of the other neurons, but the agent is still capable of reaching the target without any undesired deviations from its path being observed, i.e. the spine has a certain inertia that prevents it from deviating from the current path.
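The environment noise described above can be sampled as in the following sketch (illustrative Python; σ is a free parameter):

```python
import math
import random

def noise_force(sigma, rng=random):
    # Intensity: Gaussian with zero mean and standard deviation sigma;
    # direction: uniform between 0 and 360 degrees
    intensity = rng.gauss(0.0, sigma)
    angle = rng.uniform(0.0, 2.0 * math.pi)
    return (intensity * math.cos(angle), intensity * math.sin(angle))
```

Because both the intensity and the direction are symmetric around zero, the noise force averages out over many iterations, which is why the inertia introduced above is enough to keep the trajectories smooth.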
Figure 3.5: Outcome of the bottom spine agent in the delay detector shown in Fig. 3.3, with an environment Gaussian noise of µ = 0 and σ = 2. Plot of a) the instantaneous acceleration of the agent in blue, and its average value in a window of 70 neighbouring samples in green; and b) the value of the instantaneous velocity in red, together with the filtered value of the acceleration. The acceleration was filtered only for visualization purposes.

3.3.2 Drag force

As mentioned in the previous section, the spine agents in the model show a constant and limitless increase in their velocities. This leads to cases where the spines travel too fast and cannot manage to correct their trajectory, and thus in some cases cannot reach their desired goals. This can be observed in Fig. 3.6, where half of the spines grow past their target somas.
In order to minimize this issue, the calculation of the resulting attraction force over a spine growth cone now includes a new component opposing the current movement, i.e. a drag force, whose modulus is proportional to the current velocity and which has opposite direction.
The drag force is a concept used in physics for describing the opposition of a fluid to the movement of an object through it. Its value is proportional to the object velocity, the fluid density, the cross-section area, and the drag coefficient. For the sake of simplicity, the last 3 terms have been grouped under one single term, as all of them are considered constant in the current context. It will be named drag coefficient, C_D, from now on. Thus, the drag force is obtained by using eq. 3.6,

F_D = v·C_D        (3.6)

Due to the lack of resemblance with typical fluid mechanics problems, the value of C_D has been tuned empirically based on the desired kinematics of the spines, instead of using typical values from fluid mechanics.
In fact, this model is a simplification of the biological neuron, so the used values are the ones that fit best for achieving the desired networks.
Biological neurons do not freely navigate through a fluid inside the brain. Actually, their movement depends on the mechanical adhesion to the substrate, the forces created by the attraction and repulsion of surrounding chemical cues, and the mechanisms of the spines for growing. These are based on the contraction and extension of the filopodia inside the neuron, as well as the alteration of the protein distribution inside it (Lowery and Van Vactor, 2009).

Figure 3.6: RANA simulation of a delay line where four out of the eight neurons overgrow and do not reach their target goals due to the high inertia they have when they approach their destination.

Based on the previous experiments, it has been attempted to stabilize the speed of the spine growth around 0. µm/s when it is attracted by another neuron. Looking at the results plotted in Fig. 3.4, when the neuron has a velocity of 0. µm/s the acceleration has an average value of 0. µm/s². Thus, in order to stabilize the acceleration around this point, the drag force has to cancel out the attraction force. A drag coefficient of C_D = 0.8 is then obtained (setting m = 1 kg and a = 0. µm/s² in eq. 3.4, and solving eq. 3.6).
To sum up, the force acting on a spine growth cone is calculated using eq. 3.7, and entails the addition of 3 different components: the attraction forces from the rest of the neurons in the environment, the force provoked by the different sources of noise, and the drag force.

F_T,i = Σ_{j=1, j≠i}^{N} F_j + Σ F_noise + F_D        (3.7)

The previous equation was implemented for the same experiment (the delay line). In Fig. 3.7 are depicted the plots of the obtained velocity and acceleration after implementing the drag force. It can be observed that the magnitude of the velocity gets limited, because the acceleration gets reduced when the velocity increases.
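The total force of eq. 3.7 with the drag term of eq. 3.6 can be sketched as follows (illustrative Python; C_D = 0.8 as in the experiments, while the attraction magnitude, noise, and time step are placeholder values):

```python
C_D = 0.8  # drag coefficient tuned empirically in the experiments

def total_force(attractions, noise, velocity):
    # eq. 3.7: sum of attraction forces, noise force, and drag force;
    # eq. 3.6: the drag opposes the current velocity, with modulus v * C_D
    fx = sum(a[0] for a in attractions) + noise[0] - C_D * velocity[0]
    fy = sum(a[1] for a in attractions) + noise[1] - C_D * velocity[1]
    return (fx, fy)

# With a constant attraction and no noise, the velocity settles where the
# drag cancels the attraction: v_eq = F / C_D (unit mass, Euler step as in eq. 3.5)
vel = (0.0, 0.0)
for _ in range(2000):
    f = total_force([(0.04, 0.0)], (0.0, 0.0), vel)
    vel = (vel[0] + f[0] * 0.1, vel[1] + f[1] * 0.1)
```

The velocity converges to 0.04 / 0.8 = 0.05, illustrating how the drag bounds the otherwise limitless growth of the velocity.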
The acceleration plot has been zoomed in compared to the previous experiment due to its smaller values. The stray electric pulses can be clearly observed in the acceleration, and their effect on the velocity is small. In general, it can be observed that the introduced changes make the velocity more robust towards fast changes in the acceleration.
Figure 3.7: Result of the delay detector network with the second-derivative-based model and the presence of a drag force, calculated with eq. 3.6 for a drag coefficient of C_D = 0.8. The experiment was performed with a stray pulse generator, but without environment noise or trigger Poisson noise. In b) are plotted the velocity and acceleration of the marked agent. The result of the simulation is deterministic.

The introduced changes have increased the robustness of the model against noise. In order to prove this, an experiment with the same delay line and an environment noise with variance σ = 10 has been conducted. In Fig. 3.8 is depicted the result of one of the simulation runs, where the spine agents perform very irregular trajectories along the map.

Figure 3.8: Simulation of the delay line with white Gaussian noise with a variance of σ = 10. The drag force was calculated using eq. 3.6 for a drag coefficient of C_D = 0.8. The success rate after 1000 seconds of simulation is 0.57.

The success rate was 0.57 (i.e. 57% of the spines connected to the desired destination soma). It can also be observed that an unconnected spine was very close to reaching its destination, which would have increased the success rate to 0.71. From the previous experiment, it is obtained that the average acceleration of a spine for reaching another soma oscillates around 0. µm/s². Therefore, the noise variance is 10 times bigger than the attraction force for reaching the desired soma. It is noticeable that the network obtained a high success rate given such a big difference in the orders of magnitude of the noise and the attraction force.

The design so far deals with neurons that consist of one soma agent from which only one spine agent can grow. This is a big limitation for the development of meaningful networks, as the implementation of logic functions and complex tasks involves a complex tree of connections between neurons.
The model described up to this point has been applied to the delay line and the coincidence detector introduced in sections 4.3 and 4.4 in the next chapter (the result is depicted in Fig. 3.9). It can be observed that the rightmost neuron (which corresponds to the output neuron) connects to one of the input neurons, whereas the spines of the 2 input neurons wander around, as they are basically ruled by the environment noise.
As the current model does not allow neurons to develop more than one spine, the network cannot be completed.
Figure 3.9: Result of the coincidence detector network with the second-order derivative model and the presence of a drag force, calculated with eq. 3.6 for a drag coefficient of C_D = 0.8. The experiment was performed with environment noise with σ = 1, but without trigger Poisson noise. In b) are plotted the velocity and acceleration of the rightmost spine. The acceleration has been filtered by a mean filter with a kernel size of 2000, only for visualization purposes.

Hence, the model was modified in order to allow neurons to grow more than one spine, so they can have multiple input signals. The inception and growth of more than one spine has been modeled by the following rules:
• A soma can generate a new spine if and only if all of the existing spines are already connected.
• Two spines of the same neuron cannot grow towards the same destination.
• A soma can only generate a finite number of spines. Although a function for limiting their number shall be included, this goal can also be achieved naturally due to the properties of the environment.
These rules have been implemented in the existing neuron model, except for the last one. This is due to the fact that the spawning of new spines gets naturally limited by the amount of neighbouring neurons that are actually affecting the current neuron. This shall be revisited in future work, but it proved to be good enough for the current problem.
In Fig. 3.10 is depicted the result of these changes in the model for the coincidence detector. It can be observed that the result is satisfactory, as the output neuron connects two spines to both input neurons.

Figure 3.10: Result of the coincidence detector network after using the rule for spawning more than one spine per neuron, with an environment Gaussian noise of σ = 0.5. The rightmost neuron thus spawns a second spine after the first one gets connected. The input neurons' spines wander around due to the environment noise.

3.4 Neuron firing model
A fundamental topic regarding SNNs is determining the rule that decides when neurons spike. This has been a topic of interest for neuroscience since the beginning of the 20th century, and a plethora of models have been proposed so far (Gerstner et al., 2014). It is important to realize that while biology prioritizes the resemblance with real neurons, computer science and neuromorphic engineering place much emphasis on the computational cost of the implementation.
Contrary to the Izhikevich model, the Leaky integrate-and-fire (I-F) model is a simple approach to the behaviour of the neuron membrane potential, where it is represented as an R-C electrical circuit (see Fig. 3.11). The neuron membrane behaves as an electrical capacitance with a leak resistance towards its body, which has a characteristic rest voltage. External electric pulses are represented as current functions that charge the capacitor. Once the system voltage reaches a certain threshold, the neuron triggers a spike and the voltage is reset.

Figure 3.11: Leaky I-F model, based on an electric RC circuit modeling the neuron membrane. On the right is shown the shape of the membrane current and voltage. Image obtained from (Gerstner et al., 2014).

The instantaneous voltage of the neuron membrane is calculated by using eq. 3.8, which corresponds to the Leaky integrate-and-fire rule (Gerstner et al., 2014):

U(t) = U_rest + ΔU·e^(−(t − t₀)/τ_m)        (3.8)

if U(t) ≥ U_threshold  ⟹  U(t) = U_rest, fire = true        (3.9)

In the previous equations, U(t) is the instantaneous membrane potential, U_rest is its steady-state potential in the absence of external pulses, ΔU is the potential applied to the membrane due to an external pulse, t − t₀ the time since the pulse arrived, and τ_m is the RC circuit time constant, which is calculated with eq. 3.10:

τ_m = R·C_m        (3.10)

The values of the model parameters have been initially chosen based on measurements done on real neurons.
Namely, for a cortical regular-spiking pyramidal neuron, C_m = 0.5 nF, R = 40 MΩ, U_rest = −70 mV, and U_threshold = −54 mV (Liu and Wang, 2001). From the values of C_m and R, a value of τ_m = 20 ms is obtained by using eq. 3.10.
In Fig. 3.12 is shown the result of a test where the Leaky I-F model is applied. The central neuron stops getting externally excited once it gets wired to the 2 input neurons. Moreover, the rightmost neuron is inactive until such an event happens. Therefore, the central and right neurons get connected solely due to the spiking rule represented in eq. 3.8.
For an incoming electric pulse, the voltage increment in the membrane has been arbitrarily decided to be ΔU = 10 mV. Therefore, in order to reach the threshold voltage, the neuron has to receive at least 2 incoming pulses not very separate in time, thus allowing the behaviour of a coincidence detector to be replicated.

Figure 3.12: Result of a coincidence detector network when using the Leaky I-F model for making the central neuron spike once it is connected to the input neurons. Contrary to the other connections, the connection between the central and output neuron is achieved by using the I-F model, as their excitation is not forced by external events.

3.4.2 Stochastic model for triggering the neuron

The Leaky integrate-and-fire model has been introduced in the previous section. It models the neuron as an RC circuit, where the instantaneous value of the membrane potential determines whether the neuron will trigger or not.
However, literature in neuroscience supports the idea that the triggering of the neurons is actually led by a stochastic process (Dayan and Abbott, 2001). According to this idea, the triggering times of the neurons are non-deterministic, and the membrane potential only contributes to increasing the probability of spiking.
Therefore, a probabilistic function dependent on the neuron membrane potential has been introduced into the triggering model.
As the outcome of this probabilistic function can only be 1 or 0, the Bernoulli distribution has been used for modeling this behaviour (eq. 3.11),

P(b | p) = p        if b = 1
P(b | p) = 1 − p    if b = 0        (3.11)

where b indicates the existence of the firing event, and the expected value E[b] = p is calculated from the neuron's membrane potential.
When the membrane potential U(t) has values close to the rest potential U_rest, the probability of the neuron firing is low. On the other hand, when the membrane potential reaches values close to the threshold potential U_threshold, the probability of the neuron firing is high, the spike being very likely to happen within a few iterations. In order to represent this behaviour, the Sigmoid function has been used. It is a continuous function which asymptotically grows towards 0 and 1 without the need to introduce artificial boundaries, and it covers the intermediate points by describing an S shape. It is represented by eq. 3.12, and its plot can be observed in Fig. 3.13,

S(x) = 1 / (1 + e^(−k(x − x₀)))        (3.12)

where k is the growth rate of the Sigmoid function, and x₀ is its middle point. By modifying these 2 values, the function can be drifted towards one of the sides, and it can be made to grow faster towards the asymptotes.
Moreover, the function was fed with normalized values of the membrane potential, by using the conversion formula shown in eq. 3.13. The values used for U_min and U_max are the membrane rest potential and the membrane threshold:

x = (U(t) − U_min) / (U_max − U_min)        (3.13)

The result of the Sigmoid function (eq. 3.12) is the probability of the neuron spiking. Hence, it is the probability of obtaining a 1 in the Bernoulli process represented by eq. 3.11.

Figure 3.13: Plot of the Sigmoid function S(x), compared to the membrane potential values.
S(x) was plotted after normalizing U(t) between U_rest and U_threshold; it was centered around the middle point of both values, and its growth rate was set to k = 5. The red vertical lines represent the voltage potentials U_rest = −70 mV and U_threshold = −54 mV.

Tuning the stochastic model:
In order to implement the aforementioned stochastic process, a value for the probability p of the neuron triggering at a discrete time step is required. For tuning this value, the better-known value of the natural frequency of the neuron has been used, together with the properties of the geometric distribution for setting p.
The geometric distribution can be used for representing the number of Bernoulli trials needed for obtaining one success with a constant probability. It is formally represented by eq. 3.14, whose result is the probability of obtaining one success after k trials, given the success probability p of one trial:

P(N = k) = (1 − p)^(k−1) · p        (3.14)

Given that the natural frequency f of the neuron is known, the typical number of iterations until a spike occurs can be obtained by using eq. 3.15,

k = 1 / (f·Δt)        (3.15)

where Δt is the time step of the simulation environment.
Assuming the membrane potential is constant and equal to U_rest, the Bernoulli probability p_rest can be calculated by using the cumulative distribution function (CDF) of the geometric distribution (eq. 3.16):

CDF = 1 − (1 − p)^k  ⟹  p_rest = 1 − (1 − CDF)^(f·Δt)        (3.16)

Therefore, in order to centre the firing rate around the neuron's natural rate when no external pulses are received (i.e. the membrane potential is always U_rest), the previous equation is solved for CDF = 0.5 and k = 1000 [ms] / 1 [ms], resulting in p_rest ≈ 0.0007. The Sigmoid growth rate that satisfies the previous condition is obtained by solving eq. 3.12 for k, with S(x_rest) = p_rest, x_rest = 0, and x − x₀ = −0.5. The obtained result is k = 14.58:

S(x) = 1 / (1 + e^(−k(x − x₀)))
1/S(x) − 1 = e^(−k(x − x₀))
Ln( (1 − S(x)) / S(x) ) = −k(x − x₀)
k = ( Ln[S(x)] − Ln[1 − S(x)] ) / (x − x₀)        (3.17)

The previous stochastic model for determining the firing time of the neuron proves to be realistic when the membrane potential is close to the rest potential, and also for values close to the threshold potential. However, intermediate values provoke the neuron to trigger after 1 to 3 iterations, which is a similar behaviour to that of the neuron at high voltage values.
This issue is addressed in the next section, and a modification for solving it is introduced.

The introduced stochastic model consists of consecutive Bernoulli events for deciding whether a neuron spikes or not on each iteration. The probability is determined by the membrane potential. Therefore, if a neuron is isolated from the surrounding ones, its membrane potential is constant, and then the probability of spiking is constant too. The process can thus be modeled by a geometric distribution, and hence the probability density function will be monotonic and decreasing. This behaviour can be observed in Fig. 3.14.
Although in these results the expected firing time of the neuron is close to its natural period, real neurons tend to show a behaviour resembling a Poisson distribution, where the firing is centered around the expected value, instead of being spread out as in Fig. 3.14. As the process consists of a series of identical and independent Bernoulli trials, the distribution is drifted to the left side, and hence trials will be more likely to occur close to the starting time.
In order to obtain a behaviour closer to the one of real neurons, white noise has been added to the membrane potential on each iteration, so the voltage tends to grow and the probability of spiking in a single trial increases with time. Therefore the distribution of the firing times is squeezed into a narrower area, following a bell-like shape.
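The tuning procedure of eqs. 3.14 to 3.17 can be reproduced numerically. A sketch, assuming a natural period of 1000 ms and a simulation time step of 1 ms as in the text:

```python
import math

# eq. 3.15: typical number of iterations until a spike (1 Hz neuron, 1 ms step)
k_trials = 1000.0 / 1.0

# eq. 3.16: per-iteration Bernoulli probability so that the neuron has a 50%
# chance (CDF = 0.5) of having fired after k_trials iterations at U_rest
p_rest = 1.0 - (1.0 - 0.5) ** (1.0 / k_trials)

# eq. 3.17: Sigmoid growth rate so that S(x_rest) = p_rest, with x - x0 = -0.5
k_sigmoid = (math.log(p_rest) - math.log(1.0 - p_rest)) / (-0.5)
```

This yields p_rest on the order of 7e-4 and a growth rate of roughly 14.5, consistent with the k = 14.58 used in the model.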
Figure 3.14: Normalized histogram of the firing times of an isolated neuron, with a constant membrane potential U(t) = U_rest. The mean of the triggering times is t̄ = 1448 [ms], and the median is 1002 [ms]. It was calculated over a set of 5000 samples. Note that only the range from 0 to 3000 [ms] is shown, although theoretically the values can span until infinity.

Since the added noise follows a Gaussian distribution with mean µ_noise, eq. 3.8 is transformed into eq. 3.18:

U_k = U_rest + ΔU·e^(−(t − t₀)/τ_m) + U_noise        (3.18)

In Fig. 3.15 the evolution of the membrane potential is depicted after adding white Gaussian noise to the membrane potential on each iteration. In the second figure is depicted the histogram of the firing times. The neuron spikes at a fast pace, and this is due to the fact that the parameters of the Sigmoid function were kept unaltered from the previous model, i.e. it was tuned in a way that the neuron would spike on average within the first 1000 ms under the assumption that the membrane potential would stay at U = U_rest, which is not the case anymore.
Since there is a voltage addition at each iteration due to the noise, the leaky time difference parameter in the previous equation is the same at each iteration, and equal to the time step. In the absence of other disturbances, the membrane potential grows monotonically towards an asymptote that is calculated as follows. With U_k = U_{k−1}, λ = e^(−Δt/τ_m), and U_noise ∼ µ_noise:

U_k = U_rest + (U_{k−1} − U_rest)·λ + µ_noise
U_k·(1 − λ) = U_rest·(1 − λ) + µ_noise
U_k = U_rest + µ_noise / (1 − λ)        (3.19)

Therefore, with eq. 3.19 the steady-state voltage can be calculated, i.e. the voltage at which the neuron will stabilize in the absence of inputs.
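The recursion and its asymptote (eq. 3.19) can be checked with a short deterministic simulation in which the noise is fixed at its mean value. A sketch, with τ_m = 20 ms and Δt = 1 ms as example values:

```python
import math

U_REST = -70.0   # mV, rest potential
TAU_M = 20.0     # ms (example value for the membrane time constant)
DT = 1.0         # ms, simulation time step
MU_NOISE = 0.02  # mV added per iteration (mean of the Gaussian noise)

lam = math.exp(-DT / TAU_M)  # leak factor per iteration
u = U_REST
for _ in range(5000):
    # eq. 3.18 with the noise at its mean: leak towards U_rest, then add noise
    u = U_REST + (u - U_REST) * lam + MU_NOISE

u_ss = U_REST + MU_NOISE / (1.0 - lam)  # eq. 3.19 closed form
```

The simulated potential converges to the closed-form steady state of eq. 3.19, confirming that the noise shifts the asymptote above U_rest by µ_noise / (1 − λ).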
Figure 3.15: a) Evolution of the membrane potential during 2000 [ms], with R = 65 MΩ, C = 8 nF, and noise µ = 0.02. The average firing time is t̄ ≈ 173 [ms]. b) Normalized histogram of the firing times of an isolated neuron excited by Gaussian noise with µ = 0.02. The Sigmoid function has k = 14.58 and x_0 = 0.5. It was calculated over a set of 10000 samples.

In order to slow down the firing rate of the neuron, the Sigmoid parameters were modified, following a graphical rule of thumb: as a reference, the voltage of the membrane after 75% of the natural period (-62 mV) is taken. That voltage is used for solving the Sigmoid function (eq. 3.12), setting S(x) to the probability of the Bernoulli process that gives a 50% chance of triggering within the following 1000 trials. Moreover, x_0 was set to the threshold potential U_thres.

Fig. 3.16 depicts the ideal and simulated behaviour of a neuron with τ = 520 [ms]; the corresponding λ and steady-state voltage U_{s-s} follow from eq. 3.19. The figure also shows the histogram of the firing times of a neuron when Gaussian noise is added on each iteration. It can be observed that the histogram resembles the shape of a Poisson distribution, and its sample mean is near the desired natural period.

Figure 3.16: a) Behaviour of the membrane potential for τ = 520 [ms], when Gaussian noise with µ = 0.02 is added on each iteration (R = 65 MΩ, C = 8 nF). In blue, the ideal evolution of the membrane potential; in green, the behaviour of a neuron during a simulation. b) Normalized histogram of the firing times of an isolated neuron. The average firing time is t̄ ≈ 1057 [ms]. It was calculated over a set of 10000 samples.

Chapter 4: Implementation and results

This chapter deals with the simulations that have been done for testing and validating the designed rules, as well as the results obtained from them. First, the software tool used for implementing the system is introduced. Later in the chapter, the different neural structures that have been simulated in this software are explained (a summary is offered in Table 4.1). They are defined by the number of neurons, their layout, and how the input signals trigger through time. Moreover, for each of these structures there is an expected function that it should implement. Each of them has proven useful for testing different aspects of the designed system, as well as for validating its performance. Finally, the results obtained from some of these simulations are summarized and discussed in the final section of the chapter.

This project includes a public repository where the code for implementing the MAS simulations and analysing the data is stored and maintained. The MAS scripts are coded in Lua, and are importable with the RANA framework, whereas the scripts for assessing the stored data are written in Python.

In order to implement the designed system, a simulation tool for multi-agent systems was used, namely the RANA software framework (Jørgensen et al., 2015).
This tool is a software project aimed at executing multi-agent simulations able to replicate behaviours in real time. In terms of software structure, RANA is divided into two main blocks:

• The simulation core, written in C++, contains the code for running the graphical interface, iterating and rendering the simulation under real-time constraints, parsing the agents' behaviours, and processing the communication between them.

• The agent scope, where the behaviours of the different agents are defined. It is written in Lua, and requires a master agent as the entry point to the agent system description. The code defining this master agent can specify the spawning of new agents with different behaviours.

Repository: https://github.com/jlrandulfe/hebbian_learning

Besides the division of the tasks into the agents explained in section 3.1, the implementation in the RANA framework required the creation of a master agent. It is the entry point of the simulation, so RANA will spawn this agent and execute its routines as soon as it is created. This is due to the fact that RANA only allows one type of agent to be specified for creating a simulation. Therefore, its main purpose is to create the agent structure and its distribution in a 2-D space, as well as to feed the agents with some crucial data. This main goal can be subdivided into the following tasks:

• Spawning the neuron somas, according to a specified initial layout.

• Emitting electric pulses to the different neurons, in order to force their spiking during the learning period. The time sequence of the firings follows a pattern based on the type of function that has to be implemented.

• Commanding the neurons not to grow, for those whose growth is not desired.

• Gathering data that will be used afterwards for analyzing the performance of the system.
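The master agent's responsibilities can be sketched structurally as follows. This is a Python illustration only: the real master agent is a Lua script executed by RANA, and the class and field names below are illustrative, not RANA's API:

```python
# Structural sketch of the master agent's tasks (illustrative names,
# not RANA's API; the actual implementation is a Lua agent script).
class MasterAgent:
    def __init__(self, layout, frozen_ids, schedule):
        # Spawn the neuron somas at the specified initial layout,
        # and mark the neurons whose growth is inhibited.
        self.neurons = {i: {"pos": p, "grow": i not in frozen_ids}
                        for i, p in enumerate(layout)}
        self.schedule = schedule   # {time_ms: [ids of neurons to excite]}
        self.log = []              # data gathered for later analysis

    def step(self, t):
        # Emit forced electric pulses according to the firing pattern,
        # and record each excitation for the .csv data collection.
        for nid in self.schedule.get(t, []):
            self.log.append((t, nid))

# Example: two input neurons forced simultaneously, the output 20 ms later.
master = MasterAgent(layout=[(0, 0), (0, 10), (20, 5)],
                     frozen_ids={0, 1},
                     schedule={0: [0, 1], 20: [2]})
for t in range(200):
    master.step(t)
print(master.log)   # [(0, 0), (0, 1), (20, 2)]
```

Each of the four bullet points above maps to one piece of this sketch: spawning (constructor), pulse emission (step), growth inhibition (the grow flag), and data gathering (the log).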
In order to implement the described MAS, two new functionalities were added to the RANA core: one function for getting numbers following a Gaussian distribution, and one function for getting numbers following a Poisson distribution. The next steps were followed:

1. Calculate values following Gaussian and Poisson distributions in two different class functions of the Phys class in the RANA core. They are calculated by using the corresponding functions of the C++ standard library.

2. In the AgentLuaInterface class, two functions were added for interfacing each of the functions created in the previous step with the Lua environment.

3. In the Lua modules' statistic library, a call to each of the functions was added, so they are accessible from the Lua environment.

References:
https://github.com/jlrandulfe/learning_RANA/wiki
https://en.cppreference.com/w/cpp/numeric/random/normal_distribution
https://en.cppreference.com/w/cpp/numeric/random/poisson_distribution

Auxiliary tools used for data analysis and experiment set-up

During this project, the next two RANA functionalities were used in order to ease the debugging and data analysis:
Log results into a .csv file:
In order to centralize the data collection, the master agent now handles certain incoming events in order to gather the desired data. Also, in the clean-up function of the master agent, all the collected data is written to a local .csv file. This is performed by using the Lua IO library.

Further work focused on the data analysis entailed developing scripts for parsing the obtained data and extracting meaningful results. The data analysis was performed in an independent environment using the Python programming language.

Set automatic experiments:
This second functionality creates automatic experiments by specifying the desired configuration of the experiment in a Lua script. So far, this configuration file commands experiments where the master agent is spawned and run for a specific amount of seconds. Moreover, it also specifies how many runs of this configuration will be executed. In order to use this feature, the experiments are run from the command line, and the GUI is not executed. This is done with the following command: /
This is a powerful tool that, together with the previous feature, allows automatically executing and recording the results of a given number of experiments. Furthermore, different parameters of the agents can be configured, so their influence on the result of the simulation can easily be analyzed quantitatively. The RANA wiki offers a good introduction and baseline on the topic.

References:
http://lua-users.org/wiki/IoLibraryTutorial
https://github.com/sojoe02/RANA/wiki/Setup-of-mass-experiments

• Neuron pair: neuron's individual firing histogram; Hebbian-learning-based dendrite growth.
• Delay line: Hebbian-learning-based dendrite growth; noise tolerance; spine growth dynamics.
• Coincidence detector: dendrites growing towards multiple goals; spine growth dynamics.
• Extended coincidence detector: 2-D firing histogram; timing behaviour; Leaky I-F model.
• Neuron reservoir: capacity to generalize the application; ability to generate multiple functions.

Table 4.1: Summary of the experiments that have been performed for testing the designed model, as well as the different features that have been tested in each of the experiments.

4.2 Neuron pair
This is the simplest network that has been implemented. It consists of 2 neurons N = {N_1, N_2}, which receive input pulses following different functions of time, i.e. I = {I_1 = f_1(t), I_2 = f_2(t)}.

First of all, this layout was very useful for testing Hebb's rule: the time difference between the spikes of both neurons was varied, so it could easily be analyzed how sensitive the implemented equations are to that parameter. Moreover, the effect of other parameters could be tested, such as the noise and the distance between neurons.

Secondly, this was the chosen layout for studying the behaviour of single neurons. For example, by blocking the spiking of one of the neurons, the probabilistic firing histogram of an isolated neuron was tested.

This layout was used for the initial stages of the design, aimed at testing the Hebbian rule as a simplified way of modeling the mechanism of neuron growth and synapse formation. It was chosen because it is a simple network which could be used for implementing certain features in the first versions of the model.

The equation representing Hebbian learning (eq. 2.1) was implemented in the neurons' growth cones, and it defined the growth behaviour. Basically, once a neuron triggers, it looks for incoming pulses and gets the corresponding excitation level from each of them. Moreover, Hebb's rule was applied in a Euclidean space by deriving a vector representing the growth cone velocity in a 2-D plane.

Finally, a stochastic model was added to the environment and to the firing times of the neurons. The final results show the influence that the noise has on the final structure.
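The growth-cone update can be sketched as follows. The exponential window below stands in for the Hebbian curve of eq. 2.1, whose exact form is not reproduced here; the window constants and learning rate are illustrative assumptions:

```python
import math

def hebbian_window(dt_ms, A=1.0, tau=20.0):
    # Illustrative STDP-like window; the thesis' eq. 2.1 may differ.
    return A * math.exp(-abs(dt_ms) / tau)

def growth_step(cone_pos, source_pos, dt_ms, eta=0.01):
    """Move a growth cone towards a pulse source, scaled by the Hebbian
    excitation and a slow learning rate eta (learning << dynamics)."""
    dx, dy = source_pos[0] - cone_pos[0], source_pos[1] - cone_pos[1]
    dist = math.hypot(dx, dy)
    f = eta * hebbian_window(dt_ms) / dist   # attraction per unit length
    return (cone_pos[0] + f * dx, cone_pos[1] + f * dy)

# Near-coincident spikes (dt = 2 ms) pull the cone towards the source.
pos = (0.0, 0.0)
for _ in range(1000):
    pos = growth_step(pos, (10.0, 0.0), dt_ms=2.0)
print(pos)   # the cone has advanced towards the source along the x axis
```

Because the attraction is scaled by the spike-time difference through the window, neurons firing far apart in time produce almost no growth, which is the behaviour the neuron pair experiment was designed to verify.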
This experiment was intended to be a baseline for further research on the application of the Hebbian rule for making SNNs grow dynamically. Therefore, the choice prioritized a network that is both well known and simple. Hence, the delay lines used by the mammalian brain for locating the source of sounds were replicated. Initially proposed by L. Jeffress (Jeffress, 1948), this theory is widely accepted, and in-vivo experiments have given empirical evidence of its existence (Konishi, 1993).

Fig. 4.1 depicts the schematic of the biological neural network that has been partially reproduced. When a sound reaches both ears of the animal, its information travels in opposite directions through 2 delay lines composed of several shared coincidence detectors, which are sub-networks of 3 neurons. When the sound is registered by both inputs of a coincidence detector, its output spikes, and further layers of the brain can determine the spatial location of the perceived sound.

One important difference between the implemented network and the ones described in the literature (Konishi, 1993) is that the delay line of the biological network is believed to be achieved by a single wire with a specific time delay for the signals that it transmits. On the contrary, the designed network achieves the delay line by connecting neurons sequentially and making use of their trigger delay.

Figure 4.1: Schematic of the neural model for the location of sound sources in mammals and other animals. Obtained from (Konishi, 1993).

Despite not being a faithful reproduction of the biological model, it mimics the same behaviour. Moreover, the main purpose of the experiment was to test the Hebbian rule, in a setting where the evolution of the network is known and simple.
Environment set-up
In order to simplify the aforementioned network, the initial experiment focused on reproducing the delay line connected to only one of the ears. This delay line is composed of several neurons that trigger with a certain time delay from each other. The network starts with the neurons unconnected; if the implementation is successful, the neurons end up connected to each other sequentially, so that when the first neuron registers an incoming pulse, the following ones trigger after their corresponding delays. Fig. 4.2 depicts the distribution of the neurons in RANA for this initial layout, before and after the growth process takes place.
Figure 4.2: Initial layout of the first experiment, before and after the growth process. The red dots are the somas of the neurons, the blue dots are the axon/dendrite links, and the green dots represent their growth cones.

The shown layout has the main inconvenience that a neuron can only reach its two adjacent neighbours, as it would have to jump over one of them in order to reach any other neuron. This would limit the neurons to growing only towards their immediate neighbours.

Therefore, the layout was modified into a circular pattern in order to minimize this drawback. Fig. 4.3 depicts this layout both before and after the growth process takes place.
Figure 4.3: Initial layout of the first experiment after modifying the layout from a linear to a circular distribution of the agents, before and after the growth process. The red dots are the somas of the neurons, the blue dots are the axon/dendrite links, and the green dots represent their growth cones.

Once the geometrical layout of the network was set, the following assumptions and specifications were applied to the designed model:

• The excitation of a neuron only needs one incoming pulse to happen. Therefore, the triggering is deterministic, and the stochastic spiking model is not tested in this experiment.

• The electric pulses of all neurons have the same intensity. Hence, the attraction force solely depends on the distance between agents and on the spiking time difference.

• The learning process is much slower than the neuron dynamics, i.e. F_i = ηF(Δt), where η << 1.

• Once a neuron's input is connected to a second neuron, the learning process for that neuron stops.

• The neuron's soma is the agent getting excited through the RANA events called synapse. Once excited, it propagates the excitation state to the rest of the neuron's agents.
• The neuron's growth cone is the agent managing the incoming electric pulses, through the RANA events called electric pulse. Hebb's rule (eq. 3.1) is applied by merging this information with the timing of the soma's excitation.

• Signal propagation speed is neglected.

• Once a neuron is excited, it emits an electric pulse after a certain time delay.
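Under these assumptions, the target behaviour of the learned delay line can be sketched in a few lines (Python for illustration; the 20 ms delay matches the value used in later experiments of this chapter):

```python
# Target behaviour of the learned delay line: once the neurons are
# chained, each one re-emits the pulse after its trigger delay, while
# signal propagation time itself is neglected (as specified above).
def delay_line(n_neurons, trigger_delay_ms, t_input=0):
    firing_times = []
    t = t_input
    for _ in range(n_neurons):
        t += trigger_delay_ms     # each neuron fires one delay later
        firing_times.append(t)
    return firing_times

print(delay_line(5, 20))          # [20, 40, 60, 80, 100]
```

A successful growth process should therefore produce a chain whose firing times form this arithmetic sequence after a single input pulse at t = 0.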
Environment and process noise
In order to make the model more realistic, noise was added to the system, so that the robustness of the network could be tested. Moreover, according to the literature, learning is highly influenced by stochastic processes taking place in the neurons and neural circuits (Dayan and Abbott, 2001).

Namely, 2 different sources of noise have been added to the model:

• Environment noise: disturbances caused by a plethora of unknown sources, or sources that cannot be controlled or monitored, such as other neurons in the brain that are unconnected to the current system. By applying the central limit theorem, the intensity of this noise can be modeled as a normal distribution. Moreover, the direction of the noise follows a uniform distribution, from 0 to 360 degrees.

• Process noise: the uncertainty of the triggering time of the neuron. Although it is likely that a neuron triggers right after being excited, the precise time of its triggering also follows a stochastic model. A Poisson distribution has been applied for modeling this noise.

Moreover, the experiment also tested the robustness of the system towards an intense noise source at a specific location, i.e. there may be a group of very active neurons close to the current ones that induces attraction on them. However, the current neurons should not grow towards the others if they belong to an independent process, which is assessed to be true if the triggering frequencies of both groups of neurons are different.

This second group of neurons has been modeled as a single electric pulse generator agent, which sends an electric signal with a frequency uncoupled from the current group of neurons. Even if this external group of neurons generates a very intense field, the number of coincidences in time inside the Hebbian equation curve should be small enough to avoid attraction between them.
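The environment noise as described above can be sketched as follows (a Python illustration; the thesis implements this in Lua through the RANA core, and σ = 50 is the value used in the experiments of the next subsection):

```python
import math
import random

random.seed(0)

# Environment noise: intensity drawn from a normal distribution,
# direction drawn from a uniform distribution over 0-360 degrees.
def environment_noise(sigma=50.0):
    r = random.gauss(0.0, sigma)
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (r * math.cos(theta), r * math.sin(theta))

samples = [environment_noise() for _ in range(5000)]
mean_x = sum(v[0] for v in samples) / len(samples)
mean_y = sum(v[1] for v in samples) / len(samples)
print(round(mean_x, 1), round(mean_y, 1))   # both near 0: no net drift
```

Because the direction is uniform, the noise has zero mean in both axes, which is why it perturbs the growth trajectories without introducing a systematic pull in any direction.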
Environment Gaussian noise:
Fig. 4.4 depicts the results of two simulations where the growth cones are affected by additive Gaussian noise, corresponding to the environment noise. In both cases the noise follows a normal distribution N(0, σ = 50). Whereas the first image shows the pure effect of the noise, the second one shows the effect of the noise when the neurons are also following the Hebbian rule (eq. 3.2).

It is remarkable that the noise level (σ = 50) is very high compared to the attraction value obtained by eq. 3.1 (F(Δt) < ...).

Figure 4.4: Effect of additive white noise on the neuron growth, with µ = 0 and σ = 50. a) depicts the behaviour of the growth cone when it is only driven by the Gaussian pattern, and b) shows the addition of the noise over the Hebbian rule behaviour.

Process noise:
This noise corresponds to the delay in the triggering of the neurons after the input synapse gets excited. In order to model it, three different alternatives have been tried:

• Add Poisson patterns with the same mean to all of the neurons. Even though the obtained noise values are randomly determined for each neuron, and hence different, they all have the same average value.

• The mean of the Poisson pattern of each neuron is randomly decided at the beginning of the execution, i.e. when a neuron is created, the Poisson distribution determining its process noise is characterized by λ = U(t_1, t_2), where U(t_1, t_2) is a uniform distribution between time values t_1 and t_2.

• The noise level of each neuron is manually determined, based on its ID number. Thus, the effect of the process noise on a specific neuron can be more easily assessed.

It has been observed that if all of the neurons are disturbed with the same Poisson distribution, the effect is almost negligible. This is due to the fact that for a given Poisson distribution P(λ), the relative delay in the triggering between neurons is kept static, as all of them tend to suffer the same amount of delay. Thus, the aforementioned second and third alternatives have been tried out on the neuron model.

The behaviour of the system when the 3 different approaches are applied is depicted in Fig. 4.5. It can be observed that the effect is almost none when every neuron is affected by the same distribution. When the neurons' triggering times follow Poisson distributions with randomly selected expected values, the results are unpredictable, as the neurons grow towards different directions depending on the randomly obtained parameters. Finally, as expected, adding a trigger delay to only one neuron affects the direction of that neuron and its closest neighbours.
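The observation that identical process-noise distributions nearly cancel out can be checked numerically. This is an illustrative sketch, not the thesis code; the Poisson sampler and the λ values are assumptions loosely based on the figure captions:

```python
import math
import random

random.seed(2)

def poisson(lam):
    # Knuth's sampler; adequate for the small lambdas used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Average relative delay between two neurons over many trials: identical
# Poisson means cancel out on average, while different means introduce a
# systematic drift between the firing times.
def mean_relative_delay(lam_a, lam_b, trials=3000):
    return sum(poisson(lam_a) - poisson(lam_b) for _ in range(trials)) / trials

print(round(mean_relative_delay(30, 30), 1))   # near 0: effect cancels
print(round(mean_relative_delay(30, 10), 1))   # near 20: systematic drift
```

Since the Hebbian rule reacts to relative spike timing, only the second and third alternatives, where the means differ between neurons, produce a visible effect on the growth.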
Figure 4.5: Effect of Poisson noise P(λ) on the soma triggering time, when a) the triggering time of each soma is affected by a Poisson distribution with λ = 30, b) the triggering time of each soma is affected by a Poisson distribution with λ = U(0, ...), where U(t_1, t_2) is a uniform distribution between t_1 and t_2, and c) only the triggering time of one soma is affected by a Poisson distribution with λ = 20.

Disturbances at specific locations:
As mentioned before, the neurons may be affected by two different noise sources from the environment. The first one, already introduced, relates to white noise due to all of the processes taking place in the brain. The second one, however, corresponds to intense neural activity taking place at a location close to the current neurons, so it has a specific direction and intensity. Moreover, it spikes at a characteristic frequency, which cannot be the same as the current network's; otherwise, they should end up connected, according to Hebb's rule.

Therefore, a pulse generator agent was implemented in the simulation, replicating a neighbouring cluster of neurons uncoupled from the current set of neurons but generating a very intense field.

The simulation is shown in Fig. 4.6. It can be observed that the growth cones tend to follow the desired path. However, they experience sudden changes corresponding to the spikes of the pulse generator that happen when the neurons are excited.

Figure 4.6: Effect on the growth of an agent (near the bottom right corner) emitting pulses with a period of 1020 ms, whereas the rest of the neurons have a period of 1000 ms. The intensity of the emitted pulses is 10 (the intensity of the other neurons oscillates between 0.1 and 1). The environment noise was set to N(0, ...), and process noise P(λ) was not included. Neither the 2nd-derivative model nor the drag force were included in this simulation.

The effect of this noise is highly dependent on how the Hebbian rule is applied. Namely, it is very relevant how the influence of a neuron on a second one decays over time when they do not fire at similar times. The implemented model considers that the incoming intensity is reduced by a factor of 0.01 when the incoming pulse is beyond the time limits of eq. 3.2. Changing the decay factor when a neuron triggers out of its range has a big impact on the reaction of a neuron towards incoming uncoupled pulses.
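The role of the decay factor can be sketched as follows. Only the 0.01 factor comes from the text; the window width and its exponential shape are assumptions for illustration:

```python
import math

def attraction(dt_ms, window_ms=50.0, decay_factor=0.01):
    """Hebbian attraction of an incoming pulse. Inside the (assumed)
    time window of eq. 3.2 the full intensity applies; outside it, the
    intensity is reduced by the 0.01 decay factor mentioned in the text."""
    base = math.exp(-abs(dt_ms) / 20.0)     # assumed window shape
    return base if abs(dt_ms) <= window_ms else decay_factor * base

# A near-coincident pulse attracts far more than an uncoupled one.
print(attraction(10.0) > 100 * attraction(80.0))   # True
```

With this attenuation, the uncoupled pulse generator of Fig. 4.6 can only nudge the growth cones momentarily, instead of capturing them.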
The delay line described in the previous section served as an initial experiment for testing the ground knowledge of Hebbian learning. In this section, a different neural network topology is introduced in order to delve deeper into the growth of the networks and observe how they are affected by other factors.

Applying again the initial versions of the neuron model, a new topology has been implemented. The implemented function is a coincidence detector, which means that the output signal spikes when a spike is detected in both of the input signals. Fig. 4.7 depicts the time evolution of an ideal network implementing a coincidence detector.

Furthermore, more features of the designed model were tested with this layout. As before, all the simulations and experiments have been performed using the RANA framework, where the neurons are represented by agents modeled in Lua.

4.4.1 Implemented network
The goal of the developed neuron model is to be able to reproduce a coincidence detector with two inputs. Therefore, a successful implementation of this network will generate spikes in the output signal whenever two spikes are detected at the same time in both inputs. Fig. 4.7 depicts the value of the output signal in relation to the input signals.

Figure 4.7: Schematic of the signals of an ideal coincidence detector. The two top graphs represent the pulse trains received at the inputs of the neuron, whereas the bottom one represents the pulses emitted by the neuron at its output. A neuron acting as a coincidence detector should only emit spikes when spikes arrive simultaneously at both of its inputs.
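The ideal behaviour of Fig. 4.7 can be sketched on spike-time lists as follows (Python for illustration; the 1 ms coincidence tolerance is an assumption, not a value from the text):

```python
def coincidence_detector(input_a, input_b, tol_ms=1):
    """Output spike times where both inputs spike within a tolerance.
    Sketch of the ideal behaviour of Fig. 4.7, not the learned network."""
    return [t for t in input_a
            if any(abs(t - u) <= tol_ms for u in input_b)]

a = [10, 50, 90, 130]
b = [10, 70, 130]
print(coincidence_detector(a, b))   # [10, 130]
```

Only the spikes at t = 10 and t = 130 appear in both inputs, so only those produce output spikes, which is exactly the reference behaviour the learned network is compared against.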
Assessment of the temporal values of input neuron spikes
In the model introduced in the previous section, a neuron gets excited every time an electric pulse event reaches its soma. Furthermore, the excitation of the neuron is handled by the master agent, which sends electric pulses to the different neurons according to a given pattern.

Therefore, that experiment does not take into account the spiking of interconnected neurons. In fact, their outputs solely depend on the pattern of electric pulses sent by the master agent. Although that experiment proved useful for testing a rule based on Hebbian learning for making neurons' spines grow towards other neurons, it does not assess the firing rule of the neurons, which means that the effect of connected neurons is not analysed. The main purpose of this experiment was to implement and test the growth rule of the dendrite spines of the neurons.
Initial conditions
It is important to take into account that Hebbian learning and rapid structure growth is only one of the multiple mechanisms behind learning in neural circuits. Actually, the rapid growth of spines for connecting different neurons is a fine-tuning of the network, and the main structure of the network is built up by longer and more complex processes taking place during brain development (Feldman, 2009). Therefore, it is assumed that the initial setup of a network is formed by neurons getting excited by previously connected sources, which include many other neuronal layers in the brain, as well as previously established connections with external sensory signals. Hence, the modeled network receives inputs from the environment, which is assumed to be a black box, so the implemented model reacts to incoming signals which follow unknown rules.

This experiment has been set up with 3 neurons, where the 2 leftmost ones represent the input neurons, and the rightmost one represents the output neuron.
A network with the shape of a coincidence detector was obtained after simulating the described setup in RANA. The result for the basic experiment is shown in Fig. 4.8. The neurons start completely unwired, and after running the simulation for a certain amount of time, the two input neurons end up connected to the output neuron. Moreover, a stranger pulse was added in the bottom right corner of the map, which slightly leans the dendrites but is not able to alter the final result.

Figure 4.8: Evolution of a set of 3 agents that trigger following the behaviour of a coincidence detector, i.e. the leftmost agents fire at the same time, whereas the rightmost agent fires 20 ms later. The environment noise was set to N(0, ...), and process noise P(λ) was not included. The result is pseudo-deterministic (although the trajectories are subject to random deviations, the result is always the same).

Final consideration about the geometrical layout

If a set of three neurons must behave in a certain way, their behaviour should be robust enough to keep working even if they are part of a larger network where other neurons trigger with undetermined frequency and intensity.

Furthermore, the specific neurons that need to be wired together in order to implement the desired function should not need to be initially determined. The way the network is connected is irrelevant, as long as it accomplishes its function. Therefore, for a given network there may be an undetermined number of layouts that can be considered successful.
In the previous sections, two different logic functions involving the time dimension have been introduced, namely a delay line and a coincidence detector. The former was used for testing the feasibility of using Hebb's rule for generating the growth of neurons' spines, whereas the latter explored the performance of the spine dynamics designed in this project and introduced in section 3.3. In order to test the designed rules in a more complicated scenario, a network that combines the features of the two aforementioned networks has been developed, i.e. a coincidence detector where at least one of the inputs goes through a delay line before reaching the output neuron.

The main purpose of using this layout was to be able to test the firing rule of the neurons (recall that in the previous experiments the firing of the neurons was forced by the master agent). Moreover, this layout could be used for testing the whole integration of the designed rules, as it requires all of them to work in order to obtain the desired outcome. Therefore, it could be used as a gold standard for testing further modifications of the system while keeping the results comparable.

To sum up, with this layout it is possible to have a network that starts with an unconnected set of neurons that get wired by using Hebb's rule and fire following the proposed modification of the Leaky I-F model (see section 3.4). Furthermore, the growth of the spines through the network environment is achieved by implementing the designed spine dynamics.

With all this in mind, an extended version of the coincidence detector introduced in section 4.4 has been designed.
Fig. 4.9 shows the schematic of the desired network after the learning process. The two input signals are fed to N_1 and N_2 respectively, and once these neurons get excited, they propagate electric pulses to the output neuron. It can be observed that there is an additional neuron between N_2 and N_O, whose purpose is to add a delay to the spiking of N_O with regard to the spiking time of N_2.

Figure 4.9: 2-neuron coincidence detector, with one of the inputs delayed by using an intermediate neuron. The typical delay of each neuron is 20 ms, and the spiking period is 200 ms.

Learning process
When the execution of the system starts, the neurons are not connected to each other. During a learning period of 500 s, the firing of the neurons is forced: N_3 triggers 20 ms on average later than N_1 and N_2, which fire simultaneously. Furthermore, the excitation of N_O is forced 40 ms later than N_1 and N_2, and 20 ms later than N_3.

The described network has been implemented and simulated in RANA. The simulation result is depicted in Fig. 4.10. The initial layout consists of two input neurons on the left side, an output neuron on the right side, and an intermediate neuron for provoking a time delay in the bottom branch.

The neurons start unconnected, and during the simulation the first input neuron gets connected to the output neuron, whereas the second one gets connected to the intermediate neuron, which afterwards gets connected to the output neuron too. Despite the randomness in the process, the default values of the parameter set make the network always converge to the same topology. The required learning window oscillates between 380 ms and 450 ms.

Figure 4.10: Simulation result in RANA of the 4-neuron extended coincidence detector. The output soma is the rightmost agent (red dot), and the input neurons are the two leftmost somas. There is an extra neuron in the bottom branch for delaying the signal of one of the inputs. The delay in the firing of neurons was set to 20 ms.
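The forced firing schedule of the learning period can be sketched as follows (Python for illustration; the 20/40 ms delays and the 200 ms period come from the descriptions above, while treating them as exact rather than average values is a simplification):

```python
def learning_schedule(period_ms=200, duration_ms=500_000):
    """Forced firing times during the 500 s learning period: N1 and N2
    fire together, N3 fires 20 ms later, and the output neuron NO fires
    40 ms after N1/N2 (i.e. 20 ms after N3)."""
    times = {"N1": [], "N2": [], "N3": [], "NO": []}
    for t in range(0, duration_ms, period_ms):
        times["N1"].append(t)
        times["N2"].append(t)
        times["N3"].append(t + 20)
        times["NO"].append(t + 40)
    return times

s = learning_schedule(duration_ms=1000)
print(s["N3"][:3], s["NO"][:3])   # [20, 220, 420] [40, 240, 440]
```

Repeating this pattern throughout the learning period is what drives N_1 towards N_O directly, and N_2 towards N_O through the intermediate delay neuron.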
This network consists of a neuron reservoir with random initial connections, where the input signals are applied to some of the neurons, and another one is connected to the desired output signal during the learning period (see Fig. 4.11).

Figure 4.11: Sketch of a possible initial state of a neuron reservoir with the inputs and output of a coincidence detector.

As neurons are highly dominated by stochastic processes, the given reservoir may end up connected in a plethora of different ways. If the model is robust enough, the great majority of the time the final layout should reproduce the desired function in a satisfactory way. Moreover, the number of neurons in the reservoir will probably affect the success rate of the final layout.

If this layout were to be successful, further versions could deal with more than one function at the same time, i.e. if the network implemented 2 coincidence detectors at the same time, it would have their 4 inputs and 2 outputs in the same reservoir, or could even share the inputs.
Specifications:

• The output neuron is initially isolated, i.e. it starts without any input connections.

• The rest of the neurons are initially randomly connected (or disconnected) to each other.

• System inputs are initially connected to random neurons (at least 1 neuron per input).

• The system environment contains noise (Gaussian + specific disturbances) and process noise (Poisson-based delay).
Model requirements:

• The LTP is modeled by the positive side of the Hebbian rule (eq. 3.1).

• The LTD is modeled by the negative side of the Hebbian rule (eq. 3.1).

• Three noise models are needed: the environment white noise, the neuron trigger delay, and the presence of local disturbances.

• The synapses between neurons decay over time.

• The output of the neurons is correlated with the connected inputs (O_i = f(I_1, I_2, ..., I_N)).

Controller parameters:
In order to do a quantitative assessment of the model, the influence of the following parameters on the final outcome was evaluated:
• Number of neurons in the reservoir
• Number of neurons connected to the inputs
• Number of initial random connections
• Noise intensity

4.6.1 Simulation results
The neuron reservoir network has been simulated in RANA for a network with 21 neurons. Fig. 4.12 depicts the network layout at the beginning and at the end of the simulation. The network was fed with a function equivalent to a coincidence detector with three inputs, i.e. there is an output neuron whose spiking is forced a given time after the simultaneous spikes of three input neurons.
It is observed that the neurons in the system are able to grow their spines and connect to each other. More importantly, both the input and output neurons are connected to the main body of the network, and therefore the input spikes can be propagated to the rest of the network.
Doing a quantitative and meaningful analysis of this network is not trivial, as there are many signals and intermediate neurons involved. There is also a high degree of randomness, and it was difficult to predict the topology that was created.
Figure 4.12: Initial and final states of a neuron reservoir network simulated in RANA. The network contains 21 neurons, and the learning took place for 400 s. The three leftmost neurons are selected as input neurons: their spiking is forced, and the growth of their dendrites is inhibited. The rightmost neuron is considered the output neuron of the system, and spikes with a certain delay relative to the input neurons.

4.7 Final results and discussion
In the previous sections of this chapter, a couple of layouts implementing the designed model have been introduced. Each of them has been tested in the RANA environment, and in most cases the results have proven satisfactory. However, when running those simulations the design parameters were set to values that lead to the desired results, without a thorough assessment of their impact.
This section presents the results of experiments carried out to analyse the effect of some of the parameters of the designed model on its performance.
Unless stated otherwise, the parameters used in the following experiments are fixed to the values shown in Table 4.2. Moreover, the initial neuron layout is crucial for determining the final outcome of the network.
Table 4.2: Default values of the model parameters.

Parameter name              Value
U_rest                      -70 mV
U_threshold                 -54 mV
Pulse amplitude             10 mV
Leaky τ = R·C_m             520 ms
k_sigmoid
x_sigmoid
µ_trigger noise
σ_env. noise

In order to assess a specific set of parameters, it is relevant to look at its performance on a single neuron, when it does not have any sort of interaction with other neurons. As the introduced design consists of neurons following a stochastic model, they trigger even without the presence of external pulses.
If this first assessment turns out unfruitful, it is unlikely that those parameters can be used for producing useful networks. In order to perform the analysis, the neuron pair network introduced in section 4.2 was used, as it only contains two neurons and it is straightforward to remove the interaction between the two of them.
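The behaviour of such an isolated stochastic neuron can be sketched with the parameter values that survive in the text (U_rest = -70 mV, U_threshold = -54 mV, τ = 520 ms, and the sigmoid steepness k = 14.58 reported with Fig. 4.13). The sigmoid midpoint and the noise level are assumed values, and the update scheme below is an illustration rather than the thesis implementation.

```python
import math
import random

U_REST, U_THRESH = -70.0, -54.0   # mV (Table 4.2)
TAU = 520.0                       # ms, leaky time constant (Table 4.2)
K_SIGMOID, X0 = 14.58, 0.5        # sigmoid steepness / midpoint (X0 assumed)
DT = 1.0                          # ms, simulation step (assumed)

def fire_prob(u):
    """Sigmoid spiking probability as the membrane approaches threshold."""
    x = (u - U_REST) / (U_THRESH - U_REST)   # normalised membrane state
    return 1.0 / (1.0 + math.exp(-K_SIGMOID * (x - X0)))

def time_to_first_spike(noise_mu=0.02, seed=None):
    """Leaky integration of white noise until the stochastic trigger fires."""
    rng = random.Random(seed)
    u, t = U_REST, 0.0
    while True:
        # Leak towards U_rest plus white-noise excitation (sigma assumed).
        u += (-(u - U_REST) / TAU) * DT + rng.gauss(noise_mu, 1.0)
        t += DT
        if rng.random() < fire_prob(u):
            return t
```

Even with a constant membrane potential the neuron fires eventually, which is the behaviour exploited in the firing-time histograms below.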
Characterization of the 1-D firing histogram of a neuron
Fig. 4.13 shows the histograms of a neuron which follows all the rules introduced in chapter 3, for two different values of the characteristic time constant of the Leaky I-F model. Namely, these results were used in section 3.4 for validating the designed probabilistic model for making neurons spike.
It is observed that the bigger the time constant, the more the histogram gets shifted to the right, keeping a shape similar to a Gaussian bell. This makes sense, as that parameter defines how slowly the charge of the membrane capacitance returns to its original value (U_rest).

Figure 4.13: Normalized histogram of the firing times of an isolated neuron getting excited by white noise with µ = 0.02, when a) τ = 20 ms (average firing time t̄ ≈ 173 ms) and b) τ = 520 ms (t̄ ≈ 1057 ms). The parameters of the Sigmoid function are k = 14.58 and x = 0.…

By using the extended coincidence detector introduced in section 4.5, it was evaluated how different parameters affect the correlation between the time differences of the firing of the output neuron relative to the two input neurons.
Fig. 4.14 shows the correlation of the firing times for a successful set of parameters. It can be observed that most of the firings concentrate at ∆t ≈ … ms and ∆t ≈ … ms.

Effect of the voltage intensity of the incoming pulses
The next figure shows how the amplitude of the incoming pulses affects the timing performance of the network. Fig. 4.15 depicts the simulation when the electric pulses have an amplitude of 1 mV (whereas in the simulation shown in Fig. 4.14 they had an amplitude of 6 mV).

Figure 4.14: Histogram of the firing times of the output neuron in the extended coincidence detector in relation to the input neurons. The electric pulses have an amplitude of 3 mV. The times on the X and Y axes represent the time difference between the triggering of the output neuron and each of the two input neurons, respectively. The white area implies that no triggers at all happened with those time difference values. The relative firing times of the input and output neurons are recorded for 10000 spikes of the output neuron.

It is observed that the dispersion of the histogram is bigger, and that it tends to repeat at time intervals which correspond to the period of the neurons (200 ms).
It is also interesting to observe the results when one of the connections in the network was not successful, i.e. when two of the neurons did not get connected during the simulation. This phenomenon is depicted in Fig. 4.16, for an electric impulse amplitude of 1 mV. The dispersion is greater than in the same experiment when all the connections are correctly achieved.
In general, the correlation of the firing times tends to become more precise as the impulse amplitude grows. For a specific circuit and function, this leads to more reliable results, always giving the correct output when the corresponding inputs spike.
However, this is actually an undesired behaviour, as it would imply an ad-hoc network which could only work for one very specific function. In the extreme situation where the amplitude reaches very high values, the neuron will tend to always spike within the same time difference, which would be a behaviour closer to that of an ANN.
Figure 4.15: Histogram of the firing times of the output neuron in the extended coincidence detector after the learning process, when the pulse intensity is 1 mV. The network learns during 500 ms. After that, the relative firing times of the input and output neurons are recorded for 10000 spikes of the output neuron.

Figure 4.16: Histogram of the firing times of the output neuron in the extended coincidence detector, when the pulse intensity is 1 mV and two of the neurons did not connect during the learning process. The relative firing times of the input and output neurons are recorded for 10000 spikes of the output neuron.

4.8 Discussion

The previous sections of this chapter summarized the different simulations that have been run for testing the proposed model. The analytic results of the simulations show the non-deterministic nature of the model.
The results also show that the multi-agent approach and the implemented growing rules are able to evolve the topology of a small set of neurons and reproduce the function that was initially fed to the system.
The 2D histogram of the extended coincidence detector shows the relationship between the relative firing times of the input and output neurons. The network starts fully unconnected, and after a learning period the connections between the neurons evolve until reaching a certain topology. The graph shows a distribution of the firing times of the neurons centered around the time delays considered during the learning process.
In any case, the proposed simulations are simple, and the resulting networks require a deeper analysis in order to build a more solid model.
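The learning dynamics summarized above rest on the two sides of the Hebbian rule (eq. 3.1) plus the synaptic decay listed in the model requirements. A generic exponential window can sketch this update; the constants below are assumptions for illustration, not the values used in eq. 3.1.

```python
import math

# Generic exponential Hebbian window standing in for eq. 3.1.
# A_PLUS, A_MINUS, TAU and DECAY are assumed constants, not thesis values.
A_PLUS, A_MINUS, TAU = 0.10, 0.12, 20.0   # amplitudes and window width (ms)
DECAY = 0.001                             # passive synapse decay per ms

def hebbian_dw(dt_ms):
    """Weight change for a spike-time difference dt = t_post - t_pre.

    dt >= 0 (pre before post) -> LTP, the positive side of the rule;
    dt <  0 (post before pre) -> LTD, the negative side.
    """
    if dt_ms >= 0:
        return A_PLUS * math.exp(-dt_ms / TAU)
    return -A_MINUS * math.exp(dt_ms / TAU)

def step_weight(w, dt_ms, elapsed_ms):
    """One Hebbian update plus the passive decay accumulated over elapsed_ms.

    Weights are clamped at zero, mimicking the pruning of dead synapses.
    """
    return max(0.0, w + hebbian_dw(dt_ms) - DECAY * elapsed_ms)
```

Correlated firing (small positive dt) strengthens a synapse faster than the decay weakens it, while uncorrelated or reversed firing lets the connection fade away.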
Usage of a 2D map
Due to the nature of the RANA simulation environment, the tests have been conducted in geometrical spaces with two dimensions. This differs from biological neural circuits, where the neurons are distributed in a 3D domain.
Although the simulations were implemented in 2D and the obtained results are satisfactory, this condition limits the flexibility of the model. Future networks will probably require complex structures that cannot be reached without a third dimension. Otherwise, it is likely that the dendrite spines of different neurons collide and block each other's growth.
Crossing of dendrites in the geometric space
The larger a network is, the bigger the amount of dendrites, which occupy physical space (as do the neuron somas). In this project, possible collisions between dendrites have been omitted, and dendrites were allowed to ignore and cross other agents in their way. When a dendrite collides with a soma that was not its goal, the implemented rules force the establishment of a connection between the dendrite and the soma. This is particularly undesired in the case of three aligned neurons, where it is necessary to get a connection between the two at the borders. Although the noise may let the connection be reached, there is a high chance of the undesired connection happening, no matter whether the space is 2D or 3D.
Some modifications of the model are proposed for overcoming this issue:
• Adding a repulsion force to the model that keeps the growth cone away from other somas when their firing times are not correlated, i.e. when Hebb's rule does not apply to them. This would be a short-range function, so it would only have a noticeable effect when the growth cone is close to the soma. The ideal behaviour would be the growth cone describing a circular trajectory around the undesired soma, until it is able to resume its trajectory towards the target one.
• The studies covering the function of chemical cues in biological neural circuits suggest that some proteins present in the neural circuit accomplish this function. Therefore, a solution could be adding more agents to the MAS design which perform the function of such proteins. This would be a long line of work, and it is not clear that its results would present a high performance.
• The combination of growth noise and connection pruning can probably make the network converge to the desired structure, i.e. undesired connections would appear on many occasions, but the presence of noise would allow the dendrite to avoid the undesired soma and continue its path towards the destination in some of the tries. Moreover, adding a pruning process to the network would make the undesired connections disappear.
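The first proposed modification, a short-range repulsion force, could be sketched as a steering rule for the growth cone. This is a sketch under assumed range and gain values, not part of the thesis model.

```python
import math

# Assumed illustrative constants for the proposed short-range repulsion.
REPULSION_RANGE = 15.0   # units of the simulation map
REPULSION_GAIN = 5.0

def unit(vx, vy):
    """Normalise a 2D vector (degenerate zero vectors are left untouched)."""
    n = math.hypot(vx, vy) or 1.0
    return vx / n, vy / n

def growth_direction(cone, target, uncorrelated_somas):
    """Direction of growth for the cone: attraction towards the target
    soma plus short-range repulsion from somas whose firing times are
    not correlated, i.e. to which Hebb's rule does not apply."""
    dx, dy = unit(target[0] - cone[0], target[1] - cone[1])
    for sx, sy in uncorrelated_somas:
        rx, ry = cone[0] - sx, cone[1] - sy
        d = math.hypot(rx, ry)
        if 0 < d < REPULSION_RANGE:
            # Repulsion falls off with distance and vanishes beyond range,
            # so only nearby uncorrelated somas deflect the growth cone.
            w = REPULSION_GAIN * (REPULSION_RANGE - d) / (REPULSION_RANGE * d)
            dx += w * rx
            dy += w * ry
    return unit(dx, dy)
```

Far from any uncorrelated soma the cone heads straight for its target; a nearby uncorrelated soma deflects it, letting it skirt around the obstacle instead of connecting to it.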
Computational cost of the MAS model and RANA
An issue observed during the execution of the neuron reservoir experiment was that the simulations were very computationally expensive. Moreover, the speed of the simulation decreased exponentially over time, so an improvement in the processing hardware would still lead to very long simulation times after a certain execution time.
A thorough computational analysis of the designed MAS has not been done. However, it may be crucial for exploring the alternatives of this simulation framework, especially for implementing networks with more complexity.

Chapter 5
Conclusion
A new model for growing SNNs has been designed. The new model takes inspiration from biology and makes use of Hebb's rule for defining the growth of neuron connections in an SNN. Moreover, the designed model has been implemented in a multi-agent system design, where the intelligence is distributed into different agents. Besides representing the concept of a neuron as an independent agent, the logic driving each neuron has been divided into different agent types. Namely, each neuron contains a soma agent that processes the incoming electric pulses and evaluates whether the neuron shifts towards an excitation state, several spine agents whose purpose is merely graphical right now, and a growth cone agent per dendrite that contains the logic for deciding the growth direction and transfers incoming pulses to the soma.
In order to test and validate the developed model, five different network topologies have been implemented to assess different aspects of the model performance. The results of running the neuron pair network showed that the implemented neurons spike following a Gauss-like spike probability distribution when their membranes are at a constant potential. Moreover, the obtained 2-D time histogram showed that there is a correlation between the spiking times of the input neurons and the output neuron in the extended coincidence detector.
The results of the neuron reservoir are more difficult to analyse, due to the increased complexity of that network. The simulations done so far show that neurons following the designed model tend to grow their spines towards other neighbouring neurons even if they are only following a purely stochastic firing process. Furthermore, the neurons that follow specific firing patterns also wire towards other neighbouring neurons, and a path is established between the input and output neurons.
Further work on this type of network should start by finding meaningful quantitative measurements of the behaviour that neurons currently show with the used set of parameters. Once a set of measurements is established as a standard, this network could offer a wide range of opportunities for exploring the performance of this model and improving it.
If such standard measurements are devised, Machine Learning algorithms could be used as a complementary tool for optimising the performance of the growth model, e.g. a genetic algorithm could be implemented for evolving the system parameters to values that improve the performance of the network.
The project also observed an incomplete area in the field of neuroscience. Hebbian learning is a process that was initially proposed for explaining the evolution of the synapse strength between neurons. Furthermore, some literature in the field has explored in the last decades the idea of Hebbian-based structural learning in neural circuits. However, there is no consensus around that hypothesis. Although more research and experiments are needed in that direction, there are already some well-established ideas that support its veracity, e.g. it is accepted that learning is a process that does not happen exclusively during childhood, that learning is achieved by the establishment of new connections between neurons and the modification of their synaptic strength, and that learning can happen in very short periods of time (less than 20 minutes). Hebbian-based structural plasticity is therefore a good candidate for explaining these phenomena.
An alternative explanation for the learning process in the animal brain is the existence of a continuous chaotic growth of the dendritic arbors, leading to the creation of random connections between neurons.
Later on, the connections between neurons with unrelated activity would be destroyed in a process known as pruning, with only those connections that fulfill Hebb's rule, and therefore exchange useful information, remaining over time. This idea reinforces the hypothesis that neuronal processes are highly determined by randomness. However, one pitfall is the difficulty of creating connections between neurons that are far away (although those connections could be achieved by the presence of auxiliary structures such as glial cells). Moreover, this hypothesis would require a big amount of energy for creating a massive chaotic grid of dendrites, of which only a small part would end up being useful.
It is also feasible that the two hypotheses are partially true, and both explain part of a bigger and more complex model of neurons and neural circuits. In any case, it is clear that there are still many things to discover about the animal brain, and this knowledge may be paramount for developing artificial systems able to improve the performance of current state-of-the-art technology, by raising new computing paradigms and learning models.

Bibliography
Bi, G.-q. and Poo, M.-m. (1998), ‘Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type’, Journal of Neuroscience (24), 10464–10472.
Dayan, P. and Abbott, L. F. (2001), ‘Theoretical neuroscience: computational and mathematical modeling of neural systems’.
Dickson, B. J. (2002), ‘Molecular mechanisms of axon guidance’, Science (5600), 1959–1964.
Feldman, D. E. (2009), ‘Synaptic mechanisms for plasticity in neocortex’, Annual Review of Neuroscience, 33–55.
Feldman, D. E. and Brecht, M. (2005), ‘Map plasticity in somatosensory cortex’, Science (5749), 810–815.
Ferber, J. and Weiss, G. (1999), Multi-agent Systems: An Introduction to Distributed Artificial Intelligence, Vol. 1, Addison-Wesley Reading.
Froemke, R. C. and Dan, Y. (2002), ‘Spike-timing-dependent synaptic modification induced by natural spike trains’, Nature (6879), 433.
Furber, S. B., Galluppi, F., Temple, S. and Plana, L. A. (2014), ‘The SpiNNaker project’, Proceedings of the IEEE (5), 652–665.
Gerstner, W., Kistler, W. M., Naud, R. and Paninski, L. (2014), Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition, Cambridge University Press.
Ghosh-Dastidar, S. and Adeli, H. (2009), ‘A new supervised learning algorithm for multiple spiking neural networks with application in epilepsy and seizure detection’, Neural Networks (10), 1419–1431.
Graves, A., Liwicki, M., Fernández, S., Bertolami, R., Bunke, H. and Schmidhuber, J. (2008), ‘A novel connectionist system for unconstrained handwriting recognition’, IEEE Transactions on Pattern Analysis and Machine Intelligence (5), 855–868.
Graves, A., Mohamed, A.-r. and Hinton, G. (2013), Speech recognition with deep recurrent neural networks, in ‘2013 IEEE International Conference on Acoustics, Speech and Signal Processing’, IEEE, pp. 6645–6649.
Holtmaat, A. and Svoboda, K. (2009), ‘Experience-dependent structural synaptic plasticity in the mammalian brain’, Nature Reviews Neuroscience (9), 647.
Indiveri, G. and Horiuchi, T. K. (2011), ‘Frontiers in neuromorphic engineering’, Frontiers in Neuroscience, 118.
Izhikevich, E. M. (2003), ‘Simple model of spiking neurons’, IEEE Transactions on Neural Networks (6), 1569–1572.
Jeffress, L. A. (1948), ‘A place theory of sound localization’, Journal of Comparative and Physiological Psychology (1), 35.
Jørgensen, S. V., Demazeau, Y. and Hallam, J. (2015), Rana, a real-time multi-agent system simulator, in ‘2015 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)’, Vol. 2, IEEE, pp. 92–95.
Kempter, R., Gerstner, W. and Van Hemmen, J. L. (1999), ‘Hebbian learning and spiking neurons’, Physical Review E (4), 4498.
Kohonen, T. (1990), ‘The self-organizing map’, Proceedings of the IEEE (9), 1464–1480.
Konishi, M. (1993), ‘Listening with two ears’, Scientific American (4), 66–73.
Krizhevsky, A., Sutskever, I. and Hinton, G. E. (2012), ImageNet classification with deep convolutional neural networks, in ‘Advances in Neural Information Processing Systems’, pp. 1097–1105.
Liu, Y.-H. and Wang, X.-J. (2001), ‘Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron’, Journal of Computational Neuroscience (1), 25–45.
Lowery, L. A. and Van Vactor, D. (2009), ‘The trip of the tip: understanding the growth cone machinery’, Nature Reviews Molecular Cell Biology (5), 332.
Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., Jackson, B. L., Imam, N., Guo, C., Nakamura, Y. et al. (2014), ‘A million spiking-neuron integrated circuit with a scalable communication network and interface’, Science (6197), 668–673.
Nelson, M. E. (2004), ‘Electrophysiological models’, Databasing the Brain: From Data to Knowledge, pp. 285–301.
Patel, N. and Poo, M.-M. (1982), ‘Orientation of neurite growth by extracellular electric fields’, Journal of Neuroscience (4), 483–496.
Paugam-Moisy, H. and Bohte, S. (2012), ‘Computing with spiking neuron networks’, Handbook of Natural Computing, pp. 335–376.
Soman, S., Suri, M. et al. (2016), ‘Recent trends in neuromorphic engineering’, Big Data Analytics (1), 15.
Thurau, C., Bauckhage, C. and Sagerer, G. (2003), Combining self organizing maps and multilayer perceptrons to learn bot-behaviour for a commercial game, in ‘GAME-ON’, Citeseer, p. 119.