Evolving Neuronal Plasticity Rules using Cartesian Genetic Programming
Henrik D. Mettler, Maximilian Schmidt, Walter Senn, Mihai A. Petrovici, Jakob Jordan
Henrik D. Mettler ∗ Department of Physiology, University of Bern
Maximilian Schmidt
RIKEN Center for Brain Science, Tokyo, Japan
Walter Senn
Department of Physiology, University of Bern
Mihai A. Petrovici † Department of Physiology, University of Bern; Kirchhoff Institute for Physics, Heidelberg University
Jakob Jordan † Department of Physiology, University of Bern
ABSTRACT
We formulate the search for phenomenological models of synaptic plasticity as an optimization problem. We employ Cartesian genetic programming to evolve biologically plausible, human-interpretable plasticity rules that allow a given network to successfully solve tasks from specific task families. While our evolving-to-learn approach can be applied to various learning paradigms, here we illustrate its power by evolving plasticity rules that allow a network to efficiently determine the first principal component of its input distribution. We demonstrate that the evolved rules perform competitively with known hand-designed solutions. We explore how the statistical properties of the datasets used during the evolutionary search influence the form of the plasticity rules and discover new rules which are adapted to the structure of the corresponding datasets.
KEYWORDS
Synaptic plasticity, metalearning, genetic programming
Changes in coupling strength between neurons in the central nervous system are believed to be central for the acquisition of new skills and memories in humans and other animals. While the microscopic biochemical processes are extraordinarily complex, phenomenological models which describe changes in the postsynaptic response to presynaptic activity have long been explored and successfully related to experimental data [2]. Furthermore, modern approaches often provide a normative view on neuron and synapse dynamics [5]. Despite these successes, the construction of new phenomenological models remains a laborious, manual process. Here we pursue an automated approach to constructing phenomenological models of synaptic plasticity by employing genetic programming to evolve rules for synaptic plasticity that learn efficiently. We refer to this approach as "evolving to learn" (E2L).

A simple but useful abstraction of information processing in cortical neurons is obtained by describing a neuron's output $y_i$ as a linear, weighted sum of presynaptic activities $x_j$, followed by the application of an activation function $\rho$: $y_i = \rho\left(\sum_{j=1}^{n} w_{ij} x_j\right)$. We consider plasticity rules $f$ that determine changes in the coupling strength $w_{ij}$ from neuron $j$ to $i$: $\Delta w_{ij} \propto f(X_{ij})$. Here $X_{ij}$ represents a set of local variables, such as pre- and postsynaptic activity traces or synaptic weights. We formulate the search for synaptic plasticity rules as an optimization problem [1]:

$f^* = \mathrm{argmax}_f\, \mathcal{F}(f, \Omega).$  (1)

∗ Correspondence: [email protected]
† Shared senior authorship
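The neuron model and the generic form of the plasticity update above can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation; the function names and the example Hebbian rule are our own.

```python
import numpy as np

def neuron_output(w, x, rho=lambda a: a):
    """Output y_i = rho(sum_j w_ij * x_j); the activation rho is linear by default."""
    return rho(np.dot(w, x))

def weight_update(f, w, x, y, eta):
    """Elementwise update Delta w_j = eta * f(y, x_j, w_j) for a rule f."""
    return w + eta * np.array([f(y, xj, wj) for xj, wj in zip(x, w)])

# Illustrative example with a simple Hebbian rule f(y, x_j, w_j) = y * x_j
w = np.array([0.5, -0.2])
x = np.array([1.0, 2.0])
y = neuron_output(w, x)  # 0.5*1.0 + (-0.2)*2.0 = 0.1
w_new = weight_update(lambda y, xj, wj: y * xj, w, x, y, eta=0.1)
```

Any rule $f$ expressible over the local variables can be plugged into `weight_update`, which is what makes the search over symbolic expressions for $f$ well-defined.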
Here $\mathcal{F}$ represents the fitness of rule $f$, and $\Omega$ represents the specific experimental conditions, for example the network model and task family. The fitness measures how well a given network with plasticity rule $f$ solves tasks from the considered task family.

Recent work has defined $f$ as a parametric function, using evolutionary strategies to optimize parameter values [3]. While this approach allows the use of continuous optimization methods, the choice of the parametric form severely constrains the search space. Other authors have encoded plasticity rules using artificial neural networks [8]. While this allows the plasticity rule to take, in principle, any computable form, the macroscopic computation performed by ANNs is notoriously difficult to understand, limiting the interpretability of the discovered rules. In contrast, we aim to discover interpretable synaptic plasticity rules in large search spaces. We employ Cartesian genetic programming (CGP) [6] to represent and evolve plasticity rules as compact symbolic expressions. Previous work has successfully demonstrated this approach on various learning paradigms for spiking neuronal networks [4]. Here we explore the application to rate-based models. As an example, we aim to discover plasticity rules that extract the first principal component of an input data set. We use the hand-designed "Oja's rule" [7] as a competitive baseline.

The neuronal network consists of $n$ input units and a single output unit. Like previous work [7] we consider linear activation functions $\rho(x) = x$, hence $y = \sum_{j=1}^{n} w_j x_j$. A task is defined by a set $\mathcal{D}$ of $M$ input vectors $\mathbf{x}$ sampled from a multi-dimensional Gaussian with zero mean and covariance matrix $\Sigma$. In every trial $i$ we sample (without replacement) an input vector $\mathbf{x}^{(i)}$ from $\mathcal{D}$, compute the output activity $y$ and update synaptic weights elementwise according to $f$: $\Delta w_j^{(i)} = \eta\, f(y^{(i)}, x_j^{(i)}, w_j^{(i-1)})$, where $\eta$ is a fixed learning rate.
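The trial loop just described can be simulated directly; with Oja's rule in the role of $f$, the weight vector converges toward the first principal component of the data. The covariance matrix, learning rate, and number of samples below are hypothetical placeholder values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task instance: 2-D zero-mean Gaussian data with a dominant direction
Sigma = np.array([[3.0, 1.0],
                  [1.0, 1.0]])
X = rng.multivariate_normal(np.zeros(2), Sigma, size=5000)

def oja(y, x_j, w_j):
    # Oja's rule as the plasticity rule f(y, x_j, w_j)
    return y * (x_j - w_j * y)

eta = 0.01                 # fixed learning rate (placeholder value)
w = rng.normal(size=2)
w /= np.linalg.norm(w)     # start on the unit circle

for x in X:                # one trial per input vector
    y = w @ x              # linear activation: y = sum_j w_j x_j
    w = w + eta * np.array([oja(y, xj, wj) for xj, wj in zip(x, w)])

# First principal component of Sigma (eigenvector of the largest eigenvalue)
pc1 = np.linalg.eigh(Sigma)[1][:, -1]
print(abs(w @ pc1) / np.linalg.norm(w))  # close to 1: w is aligned with PC1
```

This baseline behavior, alignment with the principal component at approximately unit norm, is exactly what the fitness function defined next rewards.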
Our goal is to discover rules which align the synaptic weight vector $\mathbf{w}$ with the first principal component of the dataset ($\mathrm{PC}_1$). The set of all possible covariance matrices $\{\Sigma\}$ defines a task family $\mathcal{T}$. We further consider two additional task families: $\mathcal{T}_1$, where the components of $\mathrm{PC}_1$ are of approximately equal amplitude, and $\mathcal{T}_2$, where $\mathrm{PC}_1$ is aligned with one of the axes. We define the fitness of a plasticity rule $f$ for a dataset $\mathcal{D}$ as a sum of two terms, measuring the deviation of the weight vector from $\mathrm{PC}_1$ and a regularizer for its length, respectively, averaged over $M$ trials:

$\mathcal{F}(f, \mathcal{D}) = \frac{1}{M} \sum_{i=1}^{M} \left|\cos\left(\angle(\mathbf{w}_i, \mathrm{PC}_1)\right)\right| - \alpha \left|\, \lVert\mathbf{w}_i\rVert - 1 \,\right|.$  (2)

Here $\angle(\cdot, \cdot)$ denotes the angle between two vectors, and $\alpha > 0$ is a hyperparameter controlling the strength of the regularizer. To avoid overfitting plasticity rules to a single dataset, we define the fitness of a plasticity rule $f$ for a task family $\mathcal{T}$ as the sampled average over $K$ datasets from this family: $\mathcal{F}(f) = \mathbb{E}_\mathcal{T}[\mathcal{F}(f, \mathcal{D})]$.

When trained with tasks sampled from $\mathcal{T}$, multiple evolutionary runs with different initial conditions evolve plasticity rules which allow the network to approximate $\mathrm{PC}_1$ of the respective dataset as well as or even slightly better than Oja's rule (Fig. 1a, b; $\Delta w_j^{\mathrm{Oja}} = \eta y (x_j - w_j y)$, $\Delta w_j^{\mathrm{lr}_1} = \eta (y + 1 + w_j)(x_j - w_j y)$, $\Delta w_j^{\mathrm{lr}_2} = \eta\, 2y (x_j - w_j y)$). These learning rules typically contain Oja's rule as a subexpression. Similarly to Oja's rule, learning rules evolved on datasets with random principal components generalize well to datasets with statistical structure (Fig. 1c, d). $\mathrm{lr}_2$ slightly outperforms Oja's rule across the investigated datasets due to a constant scaling factor which effectively increases its learning rate.
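Equation (2) translates directly into code. The function below is our own sketch; the value of $\alpha$ is an arbitrary placeholder, since the paper's setting is not reproduced here.

```python
import numpy as np

def fitness(ws, pc1, alpha=0.1):
    """F(f, D): mean over trials of |cos(angle(w_i, PC1))| - alpha * | ||w_i|| - 1 |."""
    total = 0.0
    for w in ws:
        cos = np.dot(w, pc1) / (np.linalg.norm(w) * np.linalg.norm(pc1))
        total += abs(cos) - alpha * abs(np.linalg.norm(w) - 1.0)
    return total / len(ws)

# A unit-norm weight vector aligned (or anti-aligned) with PC1 attains the maximum of 1
pc1 = np.array([1.0, 1.0]) / np.sqrt(2)
print(fitness([pc1], pc1))  # 1.0
```

Note that the absolute value around the cosine makes the fitness invariant to the sign of $\mathbf{w}$, as only the direction of the principal component is meaningful.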
These results demonstrate that our approach is able to robustly recover efficient hand-designed plasticity rules.

When evolved on structured data (task families $\mathcal{T}_1$, $\mathcal{T}_2$), learning rules tend to specialize and outperform their more general counterparts (Fig. 1c, $\Delta w_j^{\mathrm{lr}_3} = \eta (-x_j)(x_j - w_j y)$; Fig. 1d, $\Delta w_j^{\mathrm{lr}_4} = \eta (y + w_j x_j)(x_j - w_j y)$). However, evolved rules vary in their generalizability. For example, $\mathrm{lr}_3$ does not generalize well to datasets with different statistical structure. The availability of plasticity rules as closed-form expressions helps us understand why. It is straightforward to derive the expected weight changes under $\mathrm{lr}_3$ as $\mathbb{E}_\mathcal{D}\left[\Delta w_j^{\mathrm{lr}_3}\right] = \eta \left( (w_j^2 - 1)\,\mathrm{Var}[x_j] + w_j \sum_{i \neq j} w_i\, \mathrm{Cov}[x_i, x_j] \right)$. In two dimensions, this system of equations has only one stable fixed point, with a wide basin of attraction that fully covers our assumed initialization space ($\lVert\mathbf{w}\rVert = 1$) (Fig. 1e). For $\mathcal{D}$ from $\mathcal{T}_1$, the fixed point is close to $(-1, -1)$, thus approximately maximizing the fitness. For $\mathcal{D}$ from $\mathcal{T}_2$, the fixed point remains close to the diagonal, which is no longer aligned with $\mathrm{PC}_1$, thus prohibiting high fitness values (green dots in Fig. 1c, d). In contrast, learning rules evolved on datasets from $\mathcal{T}$ perform well on tasks from all task families (Fig. 1b, c, d), similar to Oja's rule.

We demonstrated that E2L can successfully discover interpretable, biophysically plausible plasticity rules allowing a neuronal network to solve a well-defined task. Not only did we recover Oja's rule, but by evolving rules on datasets with specific structure we obtained variations which are adapted to the corresponding task families. This adaptation can be viewed as an example of "overfitting" that should be avoided.
However, we believe this to be an important feature of our approach: evolving to learn from data with specific statistical structure and thus embedding empirical priors into plasticity rules could potentially explain some of the fascinating aspects of few-shot learning and quick adaptation to novel situations displayed by biological agents. For example, it seems reasonable to expect that plasticity mechanisms driving the organization of sensory cortices are adapted to the statistical structure of their inputs, reflecting an evolutionary specialization to the ecological niche of organisms.
ACKNOWLEDGMENTS
This research has received funding from the European Union Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).

Figure 1: E2L discovers plasticity rules which perform PCA. a) Fitness of the best-performing individual over generations for multiple evolutionary runs with different initial conditions, with covariance matrix $\Sigma$ sampled from $\mathcal{T}$. Random initial weights for each dataset, held constant across generations to make individuals from different generations comparable. b-d) Fitness per dataset for $n = $ datasets not used in the evolutionary run, with covariance $\Sigma$ sampled from $\mathcal{T}$ (b), $\mathcal{T}_1$ (c) and $\mathcal{T}_2$ (d). Parameters: $n = $, $K = $, $M = $, $\eta = $. $f$ is constructed from the operator set $\{+, -, *\}$, with the input set $X_{ij} = \{w_{ij}, x_j, y\}$. For implementation details see [9]. e) Phase plane of $\mathrm{lr}_3$, trained on a dataset with $\mathrm{Var}[x_1] = $, $\mathrm{Var}[x_2] = $, $\mathrm{Cov}[x_1, x_2] = $, with two sample trajectories converging to the fixed point. Gray indicates possible initial weights.

REFERENCES
[1] Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. 1992. On the optimization of a synaptic learning rule. Vol. 2. Univ. of Texas.
[2] Guo-qiang Bi and Mu-ming Poo. 1998. Synaptic Modifications in Cultured Hippocampal Neurons: Dependence on Spike Timing, Synaptic Strength, and Postsynaptic Cell Type. Journal of Neuroscience 18, 24 (1998), 10464–10472.
[3] Basile Confavreux, Everton J. Agnes, Friedemann Zenke, Timothy Lillicrap, and Tim P. Vogels. 2020. A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network. bioRxiv (2020).
[4] Jakob Jordan, Maximilian Schmidt, Walter Senn, and Mihai A. Petrovici. 2020. Evolving to learn: discovering interpretable plasticity rules for spiking networks. arXiv:q-bio.NC/2005.14149
[5] Timothy P. Lillicrap, Adam Santoro, Luke Marris, Colin J. Akerman, and Geoffrey Hinton. 2020. Backpropagation and the brain. Nature Reviews Neuroscience 21, 6 (2020), 335–346.
[6] Julian Miller. 2019. Cartesian genetic programming: its status and future. Genetic Programming and Evolvable Machines 21 (2019).
[7] Erkki Oja. 1982. Simplified neuron model as a principal component analyzer. Journal of Mathematical Biology 15, 3 (1982), 267–273.
[8] Sebastian Risi and Kenneth O. Stanley. 2010. Indirectly encoding neural plasticity as a pattern of local rules. In International Conference on Simulation of Adaptive Behavior. Springer, 533–543.
[9] Maximilian Schmidt and Jakob Jordan. 2020. hal-cgp: Cartesian genetic programming in pure Python.