A Deep 2-Dimensional Dynamical Spiking Neuronal Network for Temporal Encoding trained with STDP
Matthew S. Evanusa
Dept. of Computer Science, University of Maryland
[email protected]

Yiannis Aloimonos
Dept. of Computer Science, University of Maryland
[email protected]

Cornelia Fermuller
Dept. of Computer Science, University of Maryland
[email protected]
Abstract
The brain is known to be a highly complex, asynchronous dynamical system that is highly tailored to encode temporal information. However, recent deep learning approaches do not take advantage of this temporal coding. Spiking Neural Networks (SNNs) can be trained using biologically realistic learning mechanisms and can have neuronal activation rules that are biologically relevant. This type of network is also structured fundamentally around accepting temporal information through a time-decaying voltage update, a kind of input that current rate-encoding networks have difficulty with. Here we show that a large, deep, layered SNN with dynamical, chaotic activity mimicking the mammalian cortex, trained with biologically inspired learning rules such as STDP, is capable of encoding information from temporal data. We argue that the randomness inherent in the network weights allows the neurons to form groups that encode the temporal data being inputted after self-organizing with STDP. We aim to show that precise timing of the input stimulus is critical in forming synchronous neural groups in a layered network. We analyze the network in terms of network entropy as a metric of information transfer. We hope to tackle two problems at once: the creation of artificial temporal neural systems for artificial intelligence, as well as the uncovering of coding mechanisms in the brain.
Introduction

Many of the major advances in A.I. have been made by reverse engineering biological brains, including Deep Neural Networks (DNNs) and Reinforcement Learning (RL) ([33]). Research into A.I. has been given a boost in recent decades by success in reverse engineering the neural connections and functions of the mammalian cortex, and in reverse, machine learning approaches have helped computational neuroscientists learn spike patterns from neural data. So-called "deep learning" frameworks ([24]) adopt the feed-forward, layered aspects of the connections in the brain, while some (convolutional networks) go further to mimic the convolutional connections that are present in many pathways. These approaches have demonstrated human-level accuracy in some tasks, such as face detection from static images ([38]). However, because they rely on non-biological learning mechanisms, it is difficult to use these networks to uncover biological mechanisms of neural information encoding. In addition, these networks still struggle to classify temporal data; this will be a critical next step for A.I. systems of the future. In the neuroscience community, there is an ongoing debate as to whether information is encoded in the firing rate of the neuron, or in the precise timing of incoming spikes ([32, 41, 42]). It has been shown that unsupervised learning rules, such as STDP (discussed below), are capable of encoding this spike-timing information into the synaptic weights ([14, 22]). It has also been argued that, through these learning rules, neurons compete with one another to form groups in a selection process ([11]). Here we investigate whether biologically inspired neural plasticity rules combine with the precise timing of incoming firing rates to encode information about novel stimuli, supplied by a biologically inspired camera, in a multi-layered network.

Current ANNs, including deep learning frameworks, are what computational neuroscientists refer to as "rate-encoding models," in that the real-valued output of a unit (roughly approximating an integrating neuron) represents a firing rate, or a probability of firing, over time. These networks accept their inputs all at once and are also referred to as "linear non-linear" models ([34]), in that they combine a linear operation (summation of incoming connections) with a non-linear operation called the "activation function" (some sort of sigmoidal function, inverse tangent function, or rectified linear unit). The rate R of a neuron's firing is, algebraically,

R = N / T    (1)

for N spikes in a given time-scale window T. As we can see from this interpretation, these rate-encoding networks divide time out as a factor in their design; they are fundamentally time-less. Any rate-encoding network that attempts to encode a sequence of images as a video is not encoding 2-dimensional images in time, but rather the entire sequence of 2-dimensional frames at once, as one 3-dimensional feature block.
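As a small concrete illustration of how Eq. 1 discards timing (our own example, not taken from the paper), two spike trains with the same N and T have identical rates no matter how their spikes are arranged in time:

```python
import numpy as np

# Two spike trains with identical rates R = N / T but different timing.
T = 1.0                                       # observation window (s)
train_a = np.array([0.10, 0.20, 0.30, 0.40])  # regularly spaced spikes
train_b = np.array([0.38, 0.39, 0.40, 0.41])  # a tight synchronous burst

rate_a = len(train_a) / T  # R = N / T = 4 Hz
rate_b = len(train_b) / T  # also 4 Hz

# A rate code treats the two trains as identical; a temporal code does not.
assert rate_a == rate_b
```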
Our proposed network is closer to what is referred to as "reservoir computing" ([30]), in that it acts as a chaotic, dynamic system due to the neuron update, lateral connections, and randomness built into the network. Reservoir computers are a type of recurrent neural network that do not train the random recurrent connections themselves, but only a read-out layer. Reservoir computers have demonstrated high success in classifying difficult temporal patterns due to the integration of time information in each artificial neuron's update rule. They have been shown to encode very complicated temporal input, even input that exhibits very chaotic patterns such as the Lorenz attractor. Although reservoir computers require a strong supervised signal to drive them towards a target, they still demonstrate that random weights and connections are a powerful factor in a network's ability to encode temporal data. As such, they have been successfully used to analyze complicated recurrent neuronal dynamics in decision making ([7, 13]).
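To make the contrast concrete, the following is a minimal echo-state-network sketch in the standard reservoir-computing formulation surveyed in [30]; it is a generic illustration, not the spiking network of this paper, and the reservoir size, spectral radius, leak rate, and ridge penalty are all illustrative assumptions. Only the readout W_out is trained; the random input and recurrent weights stay fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 200  # reservoir neurons (assumption)

# Fixed random weights: only W_out is ever trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def run(u_seq, leak=0.3):
    """Drive the reservoir with a scalar input sequence; collect states."""
    x, states = np.zeros(n_res), []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_in @ [u] + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Readout trained by ridge regression against a supervised target signal.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
y = np.roll(u, -1)                       # task: predict the next input
X = run(u)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
```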
The so-called "third generation" of neural networks, SNNs offer a promising way of encoding time information, although their dynamics are hardly new ([17]); what constitutes a "spiking neural network" is broad, as is the desired goal, whether biological plausibility or classifying power. SNNs remain at this point a loose collection of different neural-like networks that share the common feature that individual neurons operate as dynamical systems, each accumulating voltage and "spiking" an output at a given threshold, but that differ in their layout and learning rules. SNN activation functions vary widely, from neurons that simply integrate over time without any voltage leak ([40]) to those that model ion channels at individual compartments along the neuron's dendrite ([35]). Analysis of the interconnection of these spiking units in larger networks, however, is a more recent undertaking ([15]). Efforts have been made to show that backpropagation can be implemented with SNNs as well ([3]), although this departs from biological plausibility and the principles of self-organization. Here we analyze the information-encoding potential of a semi-recurrent, feed-forward, cortical-like deep network of spiking neurons, and investigate whether information is stored in assemblies as Hebb postulated. We take the approach of using a network that has a solid biological basis while keeping the operations as simple as possible.
Donald Hebb postulated that, given a neuron's spiking behavior, which is exhibited in SNNs, neurons organize into groups (what he called cell assemblies) that encode information ([16]). Hebb formulated a learning rule that has been boiled down to the phrase "neurons that fire together, wire together," formalized for continuous (rate-encoding) networks as:

δw_{i,j} = α N_i N_j    (2)

for neurons N_i and N_j, with learning rate α; the weight w changes only when both neurons fire. Because there is no error term, this rule allows a network's neurons, either rate-encoding or spiking, to reorganize their connections to match the input in an unsupervised manner. This simple rule has been extensively studied and shown to be able to self-classify input data, for example in Hopfield Networks, although these have issues such as a limited pattern storage capacity ([1]). Recent discoveries showed that neurons in the brain modify their connections via a temporally asymmetric learning rule, dubbed Spike Time Dependent Plasticity (STDP) ([2, 37]), a timing-based modification of Hebb's original learning rule that takes advantage of (and requires) spiking dynamics. STDP combines two phenomena that occur in neurons in vivo: Long Term Potentiation (LTP), which strengthens the connection (weight) between two neurons, and Long Term Depression (LTD), which weakens it. The rule is implemented here using a common trace mechanism for STDP:

δw_{pre,post} = +A_+ · w_{pre,post} · T_pre    if N_post fires
δw_{pre,post} = −A_− · w_{pre,post} · T_post   if N_pre fires    (3)

for presynaptic neuron N_pre and postsynaptic neuron N_post. T is a trace that is set to some maximum value (this paper uses 2) when the corresponding neuron fires and then exponentially decays with time constant τ. A_+ and A_− are scaling factors that set the maximum LTP and LTD, respectively, and can be weighted. The maximum excitatory and inhibitory weights were bounded; this is a natural balancing mechanism inspired by the fact that a given neuron can only place a maximum number of transmitter receptors physically on its membrane, and by studies showing that LTP works more slowly in high-firing regimes ([43]). STDP increases the weight of a connection between a given pre- and post-synaptic pair only when the presynaptic neuron fires followed closely in time by a firing of the postsynaptic neuron. If the postsynaptic neuron fires before the presynaptic neuron, that connection is weakened ([2]). Because some synapses are weakened, this results in a competitive process whereby only the connections that fire in lock-step with the input are strengthened. It has also been shown that STDP could be the fundamental mechanism that causes neurons, both in the brain and in SNNs, to organize into Hebbian assemblies ([22]). It has already been shown that STDP is a powerful enough learning rule to allow a network to categorize temporal data in the form of EEG ([23]), although that network's setup was quite different from what will be described in Methods. Previous work also showed a 2-dimensional layered network that was able to classify static images of handwritten digits using STDP on a more simplified network ([20]).
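The following sketch shows how the trace mechanism of Eq. 3 can be stepped in discrete time for one synapse. The trace ceiling of 2 and the +7 excitatory cap come from the text; A_plus, A_minus, the time constant, the time step, and the initial weight are illustrative assumptions.

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012            # LTP/LTD scaling (assumed values)
tau, trace_max, dt = 20.0, 2.0, 1.0      # trace time constant, ceiling, step (ms)
w, w_max = 3.5, 7.0                      # weight and excitatory cap from the text

t_pre = t_post = 0.0                     # exponentially decaying spike traces
decay = np.exp(-dt / tau)

def stdp_step(pre_fired, post_fired):
    """Advance one time step: decay the traces, then apply Eq. 3."""
    global w, t_pre, t_post
    t_pre *= decay
    t_post *= decay
    if post_fired:                       # pre-then-post pairing -> LTP
        w = min(w + A_plus * w * t_pre, w_max)
        t_post = trace_max
    if pre_fired:                        # post-then-pre pairing -> LTD
        w = max(w - A_minus * w * t_post, 0.0)
        t_pre = trace_max
```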
However, it has also been argued that STDP alone is not enough to maintain a balanced network, as a repeated high-frequency stimulus could trigger a continuous chain of LTP that increases the weights of the network without bound ([43]). In order to counter this, we introduced both inhibitory neurons and inhibitory plasticity (iSTDP), inspired by the implementation in [28]. The idea is that an inhibitory neuron should increase its firing to a post-synaptic excitatory neuron if that excitatory neuron fires too often.

Figure 1: A typical STDP curve. The Y axis is the change in weight, the X axis the time between post- and pre-spike. A negative time difference (post fires before pre) decreases the weight, and a positive one increases it. (From Bi and Poo, 2001.)

As a consequence of STDP, certain "pathways" in the neural substrate begin to strengthen; these are the cell assemblies that Hebb postulated. Because they are more tightly connected, a smaller input is required to reignite the group. In addition, because all the neurons are tightly bound, a reactivation of a small subset of the neurons will cause chain firing, ending in the activation of the entire group ([18]). A consequence of this is that if information can be successfully encoded in a neuronal group, it is extremely robust to noise and missing data.

The temporal input being investigated comes in the form of the biologically inspired DVS camera ([9]). The DVS sensor sends data in packets, called "events," rather than frames, and is a good candidate to take advantage of the independent nature of each neuron's input in an SNN. The DVS camera is inspired by certain retinal ganglion cells that spike when a change in intensity is detected ([36]). Similarly, rather than sending all pixels at the same time, as with RGB cameras, the DVS camera sends only events that correspond to large changes in pixel intensity. Because of the current instruction-based hardware, this network batches the events into discrete frames, although it could be transplanted onto future neuromorphic chip hardware that receives input asynchronously. Recent work has been devoted to discovering good encoding mechanisms for DVS data for use in rate-encoding networks, whereas we argue that the raw DVS events themselves work well with the SNN network structure.

Figure 2: Example DVS frame of a moving hand used as input. The events capture the major shift in intensity at the edges of the hand as it moves across the background.
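A minimal sketch of the event-batching step described above, binning DVS events into binary frames that drive layer 1. The (t, x, y, polarity) event layout is the common DVS convention; the 128 × 128 resolution and 1 ms bin width are assumptions.

```python
import numpy as np

H = W = 128        # sensor resolution (assumption)
bin_ms = 1.0       # one simulation time step per batch (assumption)

def events_to_frames(events, duration_ms):
    """events: iterable of (t_ms, x, y, polarity) -> (T, H, W) binary frames."""
    n_bins = int(np.ceil(duration_ms / bin_ms))
    frames = np.zeros((n_bins, H, W), dtype=np.uint8)
    for t, x, y, _pol in events:
        k = min(int(t / bin_ms), n_bins - 1)
        frames[k, int(y), int(x)] = 1   # any event -> input current of 1, as in the text
    return frames

# Example: two events in the first millisecond light up two input pixels.
demo = events_to_frames([(0.2, 10, 12, 1), (0.9, 11, 12, -1)], duration_ms=5)
assert demo[0, 12, 10] == 1 and demo[0, 12, 11] == 1
```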
Methods

The neuron activation is based on the Izhikevich neuron ([21]), a variant of the quadratic integrate-and-fire model; it is able to reproduce many behaviors seen in neurons in vivo, while being more computationally efficient than more complex models such as Hodgkin-Huxley ([17]). Specific neurotransmitter conductances are not modeled explicitly. At each time step, the voltages of each neuron in each layer are updated according to the following equations:

v′ = 0.04v² + 5v + 140 − u + I_syn    (4)
u′ = a(bv − u)    (5)

with the voltage being reset to the reset parameter c, and the recovery variable u being incremented by d, when the voltage exceeds the threshold of 30 mV, which indicates that the neuron spiked. We used the same parameters as in [21] to model a regular-spiking cortical neuron, with a = 0.02, b = 0.2, c = −65 mV, and d = 2. In order to add randomness to the network, c is modified by a random term proportional to r and d by one proportional to −r, where r is a random number on the interval [0, 1]. We did not model inhibitory neurons differently.

At each time step, every neuron updates its voltage as in Eq. 4, after calculating the incoming synaptic current. For layer 1, this current is equal to 0 or 1, depending on whether an event occurred in the DVS frame. If the neuron is in layer 2 or later, it calculates its current via the following:

I_syn(i) = Σ_{j=0}^{k} w_{j,i} s_j + Σ_{l=0}^{m} w_{l,i} s_l    (6)

where I_syn(i) is the incoming synaptic current into neuron i, k is the side length of the kernel for incoming connections and m the side length for horizontal connections (see Figure 3); s_x is 1 if neuron x spiked and 0 otherwise. Indices j and l range over neurons that have synaptic connections to neuron i from feed-forward and lateral connections, respectively. If a neuron is inhibitory, its weight is multiplied by −1. The neurons' weights are kept within bounds: [0, w_max] for excitatory and [−w_min, 0] for inhibitory.
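A vectorized sketch of the layer update in Eqs. 4-5. The regular-spiking constants a = 0.02, b = 0.2, and c = −65 mV follow [21]; the layer size, the 1 ms Euler step, and the exact scale of the random jitter on c and d are assumptions, as they are not fully specified in the text.

```python
import numpy as np

n = 10_000                               # neurons in one layer (assumption)
r = np.random.rand(n)                    # per-neuron randomness, as in the text
a, b = 0.02, 0.2
c = -65.0 + 15.0 * r**2                  # jittered reset, scale assumed from [21]
d = 2.0 - 1.0 * r**2                     # jittered recovery increment (assumed scale)

v = np.full(n, -65.0)                    # membrane potential (mV)
u = b * v                                # recovery variable

def step(I_syn, dt=1.0):
    """Advance the layer one Euler step; return the boolean spike vector."""
    global v, u
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I_syn)   # Eq. 4
    u += dt * (a * (b * v - u))                             # Eq. 5
    fired = v >= 30.0                    # spike threshold
    v[fired] = c[fired]                  # reset voltage to c
    u[fired] += d[fired]                 # bump recovery by d
    return fired
```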
The network consists of a set of connected "layers" of spiking neurons, each layer a 2-dimensional sheet of neurons of the same N × N dimensions (see Figure 3 for a schematic). The code is completely custom-written, expressly for NVIDIA GPUs using the CUDA programming language ([29]) to maximize speedup. The advent of GPU technology allows large-scale networks like this one to run in real time, even while training: a 5-layered network of 612,000 total neurons with 122 million total synapses runs on an NVIDIA Titan Black at 38.5 Hz without STDP updates, and at 20 Hz while training with STDP.

The connections are inspired by how feed-forward connections in the mammalian brain are highly overlapping and convolutional ([12, 25]) and hierarchical ([44]), which was the inspiration for Convolutional Neural Networks (CNNs) ([26]). However, unlike in CNNs, the kernels do not share weights, both to remain biologically realistic and to investigate whether the non-shared quality of the weights is an important factor in a spiking network. The connection schema is loosely based on a mix of [31] and modern CNN architectures. Each neuron has feed-forward connections to a small square block of neurons in the post-layer, exactly as in a CNN. This is done intentionally because the network is not learning a kernel using backpropagation, and needs as much variation as possible to aid the STDP process. In addition, in attempting to reverse-engineer the fundamental principles of spiking, it is biologically implausible that neurons communicate some sort of shared weight across the cortex; weight updates are local by nature. The weights are initialized with a uniform random distribution and given an equally high starting weight that allows spikes to occur so that STDP can begin reshaping the weights, consistent with theories stating that intrinsically high starting activation is a major part of neuronal group formation ([18]). Excitatory weights are capped at +7 and inhibitory at −30, and STDP is only performed on feed-forward connections. In addition to the feed-forward connections, each neuron also connects horizontally to a square of neurons in the same layer. This provides a degree of recurrence to the system, implementing a version of the "re-entrant" connections described by Edelman ([11]), although more long-distance reentry is left for future work. We refer to the system as "semi-recurrent" rather than fully recurrent because it does not contain backward connections to previous layers.

The network features both excitatory and inhibitory neurons, to maintain balance. As in prior work ([22, 28]), the ratio of excitatory to inhibitory neurons is set at 4:1, mirroring the ratio seen in the mammalian brain. Only 20% of feed-forward connections and 30% of lateral connections were kept non-zero, per estimates that the probability of connection between neurons in the cortex is between 0.1 and 0.3 ([5, 27]).

Figure 3: The network architecture. Each layer is a 2-dimensional sheet of Izhikevich neurons of dimension N × N. A given neuron, represented in red, has both horizontal connections to neurons in an m × m box around it in the same layer, and non-shared feed-forward connections to a k × k box in the next layer. The input DVS data is fed into the first layer, and the categorization is read out from groups formed in the last layer. Layer numbers are indicated in the bottom right corner.
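A sketch of how the non-shared, sparse kernels could be initialized: every neuron owns a private k × k feed-forward kernel (no weight sharing, unlike a CNN) and an m × m lateral kernel. The grid size, kernel sizes, and initialization range are illustrative assumptions; the +7 cap and the 20%/30% connection probabilities follow the text.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, m = 64, 5, 3                       # layer side, kernel sides (assumptions)
w_max, p_ff, p_lat = 7.0, 0.2, 0.3       # cap and sparsity from the text

# One private feed-forward kernel per presynaptic neuron: shape (N, N, k, k).
W_ff = rng.uniform(0.5 * w_max, w_max, (N, N, k, k))   # high starting weights
W_ff *= rng.random((N, N, k, k)) < p_ff                # keep ~20% non-zero

# Lateral (same-layer) kernels, m x m around each neuron.
W_lat = rng.uniform(0.5 * w_max, w_max, (N, N, m, m))
W_lat *= rng.random((N, N, m, m)) < p_lat              # keep ~30% non-zero
```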
Here, we use a metric, network entropy, that has been used in networksanalyzing cancer genome alterations ([39]), that analyzes the local entropy of the flux around eachnode. To calculate the entropy for one neuron, we calculate an averaged entropy over all the outgoingconnections, one entropy for inhibitory and one for excitatory: E i = − /log ( K ) (cid:88) j ∈ N ( i ) p i,j log ( p i,j ) (7)Where E i is the entropy for a given neuron i , j is a neuron with a non-zero weight connected toneuron i , and K , the degree of i , is the number of non-zero connections to neuron i . The mutualinformation given by an input is analogous to a decrease in entropy after a stimulus is presented([8]). Fig. 4 shows some results after training the network continuously. The top row shows the individualneurons spiking; the second row shows a spike count over a 300ms time window; the third and fourthrow show the network entropy for each neuron. When the network starts running, the entropy plot iscompletely black; all the weights are uniformly random, so the entropy is at maximum. As STDPbegins to strengthen the input patterns, and convolve itself forward into new patterns, it changes theweight distribution around a given neuron, thus altering the entropy.Due to the effects of the convolutional connections and STDP, rapid pulses of activity propagatethrough the network that do not occur without learning; this activity is similar to activity demonstratedin cortical-like networks in the brain ([10]). Because the weights are capped, and due to inhibitorysignals and the voltage decay, this means that these neurons must be activated by concurrent,synchronous firing at precise times in order to cause the postsynaptic neuron to fire. Unlike in[28], where uniformly low rates recruited depression, because the stimulus was driving the neurons,potentiation was dominant. The network exhibited local swells of activity in specific clusters, thatwould indicate a local synchrony, rather than a global synchrony ([19]). Because of the lateral6onnections, these clusters were able to move horizontally across the same layer. The groups alsoseem to fire at higher rates, which would give evidence that they are in fact Hebbian groups.One point to note was that changing the parameters of the overlap size (the feed-forward convolution)had drastic effects on the activity of the network. In addition, the maximum and minimum weightcutoffs were highly dependent on the size of the kernel: a larger kernel would overflow with activitywith a much lower cutoff.One assumption is that the network entropy carries information about what the weights are learning.The intution is that if a neuron were to have equal-weighted outgoing connections, it would offerno processing power, as all of the outputs would be the same and indistinguishable. Because thenetwork is learning a temporal signal, rather than a static one, we suppose that the network is learning"temporal features", in addition to traditional features such as lines, edges, and corners. Putting thesetogether, we can see in Fig. 4 that as the spikes propagate through the network, they progress frombeing more local temporal-features, to more and more disperse; this would seem to correlate withthe hypothesis that information is stored in a distributive fashion in the brain. 
Results

Fig. 4 shows some results after training the network continuously. The top row shows the individual neurons spiking; the second row shows a spike count over a 300 ms time window; the third and fourth rows show the network entropy of each neuron. When the network starts running, the entropy plot is completely black: all the weights are uniformly random, so the entropy is at its maximum. As STDP begins to strengthen the input patterns, and convolve them forward into new patterns, it changes the weight distribution around a given neuron, thus altering the entropy.

Due to the effects of the convolutional connections and STDP, rapid pulses of activity propagate through the network that do not occur without learning; this activity is similar to activity demonstrated in cortical-like networks in the brain ([10]). Because the weights are capped, and due to inhibitory signals and the voltage decay, these neurons must be activated by concurrent, synchronous firing at precise times in order for the postsynaptic neuron to fire. Unlike in [28], where uniformly low rates recruited depression, potentiation was dominant here because the stimulus was driving the neurons. The network exhibited local swells of activity in specific clusters, indicating a local synchrony rather than a global one ([19]). Because of the lateral connections, these clusters were able to move horizontally across the same layer. The groups also seem to fire at higher rates, which is evidence that they are in fact Hebbian groups.

One point to note is that changing the parameters of the overlap size (the feed-forward convolution) had drastic effects on the activity of the network. In addition, the maximum and minimum weight cutoffs were highly dependent on the size of the kernel: a larger kernel would overflow with activity at a much lower cutoff.

One assumption is that the network entropy carries information about what the weights are learning. The intuition is that if a neuron had equal-weighted outgoing connections, it would offer no processing power, as all of its outputs would be the same and indistinguishable. Because the network is learning a temporal signal rather than a static one, we suppose that the network is learning "temporal features," in addition to traditional features such as lines, edges, and corners. Putting these together, we can see in Fig. 4 that as the spikes propagate through the network, they progress from being more local temporal features to more and more disperse ones; this would seem to correlate with the hypothesis that information is stored in a distributed fashion in the brain. These results lend credence to the hypothesis that temporally precise firing can encode information about an input stimulus in a two-dimensional multi-layered network, because the synaptic weights encode the input stimulus.

When inhibitory STDP is added, the entropy appears inverted, at least in the second row: where the entropy is high for excitatory neurons, it is low for inhibitory ones. We did not notice any difference in the firing of neural groups with the introduction of iSTDP, which is reflected in the relative similarity of the excitatory entropy between Figures 4 and 5; this could be because of the weight cutoff, or because inhibition works less in later layers. Both observations will be an interesting avenue of investigation for future work.
Future Work

A realistic first step would be to apply simple machine learning techniques to the last layer of the network, such as a linear SVM, to see whether the features encoded in the last layer can predict the class of the input. Knowing that the network organizes its weights around the input stimulus, the next step is to integrate these networks into larger systems that use the output of the final layers for further processing. For example, the last layer could be connected to a 1-D layer of lateral-inhibition neurons, which would act as a winner-take-all classification layer. Such neurons could be linked to motor commands, for example moving a hand, and the network could be used to explore and validate any role STDP has in motor learning during development, both as an investigation into humans and as a means to train dexterous robots for quick and smooth maneuvers.
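A sketch of this proposed readout using scikit-learn's LinearSVC: treat last-layer spike counts over a stimulus presentation as a feature vector and fit a linear SVM. The spike-count features and labels below are random placeholders standing in for data the trained network would actually produce.

```python
import numpy as np
from sklearn.svm import LinearSVC

n_trials, n_last = 200, 64 * 64          # trials and last-layer size (assumptions)

# Placeholder features: per-neuron spike counts for each stimulus presentation.
X_counts = np.random.poisson(2.0, (n_trials, n_last)).astype(float)
labels = np.random.randint(0, 2, n_trials)   # e.g., two gesture classes

clf = LinearSVC(C=1.0, max_iter=10_000).fit(X_counts, labels)
print("training accuracy:", clf.score(X_counts, labels))
```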
Conclusion

Here we have demonstrated a novel semi-recurrent, feed-forward dynamical network and provided evidence that it encodes information about temporal stimuli. Our hope is that this will lead to future work that takes advantage of the potential of temporal coding in spiking networks.
Acknowledgments
This work was supported in part by the University of Maryland COMBINE program and NSF award DGE-1632976. The authors would also like to thank Mrs. Greg Davis and Jesse Milzman at the University of Maryland for thoughtful conversation and advice.

Figure 4: A display of the network running. The network consists of 96,774 neurons and 7.5 million synapses. Here iSTDP is turned off; just excitatory STDP is running. The top row displays the firing; the second, an average spike count analogous to a PSTH that is refreshed every 300 ms (red = high rate, blue = low); and the last row displays the network entropy, with darker colors representing higher entropy for a given neuron. Also visible is a feed-forward propagation of activity from lower layers.

Figure 5: A display of the network running with iSTDP; inhibitory entropy is in the bottom row.

References
[1] Y. Abu-Mostafa and J. S. Jacques. Information capacity of the Hopfield model. IEEE Transactions on Information Theory, 31(4):461–464, 1985.
[2] G.-q. Bi and M.-m. Poo. Synaptic modification by correlated activity: Hebb's postulate revisited. Annual Review of Neuroscience, 24(1):139–166, 2001.
[3] S. M. Bohte, J. N. Kok, and H. La Poutre. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing, 48(1-4):17–37, 2002.
[4] A. Borst and F. E. Theunissen. Information theory and neural coding. Nature Neuroscience, 2(11):947, 1999.
[5] V. Braitenberg and A. Schüz. Anatomy of the Cortex: Statistics and Geometry, volume 18. Springer Science & Business Media, 2013.
[6] D. A. Butts, C. Weng, J. Jin, C.-I. Yeh, N. A. Lesica, J.-M. Alonso, and G. B. Stanley. Temporal precision in the neural code and the timescales of natural vision. Nature, 449(7158):92, 2007.
[7] W. Chaisangmongkon, S. K. Swaminathan, D. J. Freedman, and X.-J. Wang. Computing by robust transience: how the fronto-parietal network performs sequential, category-based decisions. Neuron, 93(6):1504–1517, 2017.
[8] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 2012.
[9] T. Delbruck. Frame-free dynamic digital vision. In Proceedings of Intl. Symp. on Secure-Life Electronics, Advanced Electronics for Quality Life and Society, pages 21–26, 2008.
[10] M. Diesmann, M.-O. Gewaltig, and A. Aertsen. Stable propagation of synchronous spiking in cortical neural networks. Nature, 402(6761):529, 1999.
[11] G. M. Edelman. Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, 1987.
[12] M. Eickenberg, A. Gramfort, G. Varoquaux, and B. Thirion. Seeing it all: Convolutional network layers map the function of the human visual system. NeuroImage, 152:184–194, 2017.
[13] P. Enel, E. Procyk, R. Quilodran, and P. F. Dominey. Reservoir computing properties of neural dynamics in prefrontal cortex. PLoS Computational Biology, 12(6):e1004967, 2016.
[14] W. Gerstner, A. K. Kreiter, H. Markram, and A. V. Herz. Neural codes: firing rates and beyond. Proceedings of the National Academy of Sciences, 94(24):12740–12741, 1997.
[15] S. Ghosh-Dastidar and H. Adeli. Spiking neural networks. International Journal of Neural Systems, 19(04):295–308, 2009.
[16] D. O. Hebb. The Organization of Behavior: A Neuropsychological Theory. Psychology Press, 2005.
[17] A. L. Hodgkin and A. F. Huxley. Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo. The Journal of Physiology, 116(4):449–472, 1952.
[18] A. Holtmaat and P. Caroni. Functional and structural underpinnings of neuronal assembly formation in learning. Nature Neuroscience, 19(12):1553, 2016.
[19] R. Hosaka, O. Araki, and T. Ikeguchi. STDP provides the substrate for igniting synfire chains by spatiotemporal input patterns. Neural Computation, 20(2):415–435, 2008.
[20] T. Iakymchuk, A. Rosado-Muñoz, J. F. Guerrero-Martínez, M. Bataller-Mompeán, and J. V. Francés-Víllora. Simplified spiking neural network architecture and STDP learning algorithm applied to image classification. EURASIP Journal on Image and Video Processing, 2015(1):4, 2015.
[21] E. M. Izhikevich. Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6):1569–1572, 2003.
[22] E. M. Izhikevich, J. A. Gally, and G. M. Edelman. Spike-timing dynamics of neuronal groups. Cerebral Cortex, 14(8):933–944, 2004.
[23] N. Kasabov. Evolving spiking neural networks and neurogenetic systems for spatio- and spectro-temporal data modelling and pattern recognition. In Advances in Computational Intelligence, pages 234–260. Springer, 2012.
[24] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[25] V. A. Lamme and P. R. Roelfsema. The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11):571–579, 2000.
[26] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[27] D. T. Liley and J. J. Wright. Intracortical connectivity of pyramidal and stellate cells: estimates of synaptic densities and coupling symmetry. Network: Computation in Neural Systems, 5(2):175–189, 1994.
[28] A. Litwin-Kumar and B. Doiron. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nature Communications, 5:5319, 2014.
[29] D. Luebke. CUDA: Scalable parallel programming for high-performance scientific computing. In 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI 2008), pages 836–838. IEEE, 2008.
[30] M. Lukoševičius and H. Jaeger. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149, 2009.
[31] E. D. Lumer, G. M. Edelman, and G. Tononi. Neural dynamics in a model of the thalamocortical system. I. Layers, loops and the emergence of fast synchronous rhythms. Cerebral Cortex, 7(3):207–227, 1997.
[32] Z. F. Mainen and T. J. Sejnowski. Reliability of spike timing in neocortical neurons. Science, 268(5216):1503–1506, 1995.
[33] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
[34] S. Ostojic and N. Brunel. From spiking neuron models to linear-nonlinear models. PLoS Computational Biology, 7(1):e1001056, 2011.
[35] I. Segev, R. E. Burke, and M. Hines. Compartmental models of complex neurons. Methods in Neuronal Modeling, 63, 1989.
[36] T. Sejnowski and T. Delbruck. The language of the brain: The brain makes sense of our experiences by focusing closely on the timing of the impulses that flow through billions of nerve cells. Scientific American, 307(4):54, 2012.
[37] T. J. Sejnowski. The book of Hebb. Neuron, 24(4):773–776, 1999.
[38] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1701–1708, 2014.
[39] A. E. Teschendorff and S. Severini. Increased entropy of signal transduction in the cancer metastasis phenotype. BMC Systems Biology, 4(1):104, 2010.
[40] R. Urbanczik and W. Senn. Similar nonleaky integrate-and-fire neurons with instantaneous couplings always synchronize. SIAM Journal on Applied Mathematics, 61(4):1143–1155, 2001.
[41] R. R. d. R. van Steveninck, G. D. Lewen, S. P. Strong, R. Koberle, and W. Bialek. Reproducibility and variability in neural spike trains. Science, 275(5307):1805–1808, 1997.
[42] R. VanRullen, R. Guyonneau, and S. J. Thorpe. Spike times make sense. Trends in Neurosciences, 28(1):1–4, 2005.
[43] A. J. Watt and N. S. Desai. Homeostatic plasticity and STDP: keeping a neuron's cool in a fluctuating world. Frontiers in Synaptic Neuroscience, 2:5, 2010.
[44] S. Zeki and S. Shipp. The functional logic of cortical connections.