Fate of Duplicated Neural Structures
Luís F. Seoane
1 Departamento de Biología de Sistemas, Centro Nacional de Biotecnología (CSIC), C/ Darwin 3, 28049 Madrid, Spain. 2 Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC), CSIC-UIB, Palma de Mallorca, Spain.
Statistical physics determines the abundance of different arrangements of matter depending on cost-benefit balances. Its formalism and phenomenology percolate throughout biological processes and set limits to effective computation. Under specific conditions, self-replicating and computationally complex patterns become favored, yielding life, cognition, and Darwinian evolution. Neurons and neural circuits sit at a crossroads between statistical physics, computation, and (through their role in cognition) natural selection. Can we establish a statistical physics of neural circuits? Such a theory would tell us what kinds of brains to expect under set energetic, evolutionary, and computational conditions. With this big picture in mind, we focus on the fate of duplicated neural circuits. We look at examples from central nervous systems, with a stress on computational thresholds that might prompt this redundancy. We also study a naive cost-benefit balance for duplicated circuits implementing complex phenotypes. From this we derive phase diagrams and (phase-like) transitions between single and duplicated circuits, which constrain evolutionary paths to complex cognition. Back to the big picture, similar phase diagrams and transitions might constrain I/O and internal connectivity patterns of neural circuits at large. The formalism of statistical physics seems a natural framework for this worthy line of research.
Keywords: duplicated neural circuits, brain symmetry, brain asymmetry, lateralization, statistical physics of neural circuits
I. INTRODUCTION
Statistical mechanics determines the abundance of different patterns of matter according to cost-benefit calculations. In the simplest cases, structures that minimize their internal energy and maximize their entropy are more likely observed than other kinds of matter arrangements. More complex scenarios require additional chemical potentials that must be optimized as well – by combining concentrations of different molecules across space, resulting in richer patterns. As external parameters (pressure, temperature, etc.) are varied, the most likely arrangements of matter might change smoothly or radically – in what we know as phase transitions. By mapping optimal structures over such relevant dimensions we obtain phase diagrams that help us understand what arrangements of matter to expect under distinct circumstances.

The appearance of life has been (qualitatively) described as a phase transition [1, 2]. It would take place as relationships between thermodynamic potentials overcame a complexity threshold. After this, a preferred pattern of matter consists of self-replicating entities (Figure 1a) that can jump-start Darwinian evolution [3]. Then, natural selection raises the bar of the thermodynamic cost-benefit requirements: stable patterns of matter in the biosphere not only need to be thermodynamically favored, they also need to win a fitness contest (Figure 1b). As a consequence, structures within organisms often operate close to computational thermodynamic limits (e.g. of effective information processing and work extraction [2, 4, 5]). Despite the added layer of complexity, statistical physics remains a very apt language to describe some biological processes [6–10]: cell cycles can be studied as thermodynamic cycles [11–13], aspects of organisms might stem from an effective free energy minimization (i.e. yet another cost-benefit balance) [14], and variation along relevant dimensions (e.g.
organism size vs metabolic load) determines the viability of key living structures, which can be terminated abruptly as in phase transitions [15, 16]. An organism’s computational complexity, which enables its cognition, is another such key dimension. Many essential aspects of life rely on information processing mechanisms such as memory, error correction, information transfer, etc., resulting in a central role for computation in biology and Darwinism [2, 17–30]. And, once again, statistical physics plays a paramount role in computation, e.g., by setting limits to efficient implementation of algorithms [31–36].

Cognition is possible in organisms without neurons [37–41]; however, nerve cells are the cornerstone of cognitive complexity. Within the previous framework we wonder how to elaborate a biologically grounded statistical physics of neural circuits (Figure 1c). This should bring known thermodynamic aspects of computation into a Darwinian framework where cognition serves biological function [29, 42–46] (eventually, to balance metabolic costs and extract free energy from an environment for an organism’s advantage). Such a theory would dictate the abundance of different kinds of circuits; or which kinds of brains to expect under fixed thermodynamic, computational, and Darwinian conditions. For example, it could tell whether “brains” are likely to have a solid substrate (such as the cortex or laptops [37]) or a liquid substrate (such as ants, termites, or the immune system [37, 46, 47]). We could also derive
FIG. 1. Towards a statistical physics of neural circuits. a
Some thermodynamic engines can use free energy to shape matter as a replica of themselves. b Such self-replicating patterns can kick-start Darwinian evolution, deeply altering the optimality of different phases of matter. Efficient replicators are favored and their internal arrangement becomes subject to detailed optimization. c Neural circuits enable complex cognition in self-replicating patterns. Accordingly, they become subjected to evolutionary-thermodynamic pressures eventually measured by their performance in a computational landscape. Varying external parameters (e.g. computational task, energetic constraints...) renders phase diagrams for neural architectures. d Phase diagram derived from [55]. Neural circuits recover from injuries by i) using costly redundant connections or ii) paying a metabolic cost for regeneration. Three phases emerge: two where either one of the two strategies is preferred, and one where the computational phenotype never emerges. Transitions between phases constrain evolutionary paths. e Grid cells span the available room with shorter (black) or larger (gray) periods to create exhaustive spatial representations. Place cells (blue) encode specific locations. The spiking of each cell type is shown along the mouse’s trajectory. f Reconstruction of archetypal cortical columns by the methods of Oberlaender et al. in [167].

phase diagrams saying when computing units behaving as reservoirs (as in Reservoir Computing [48–53]) are more efficient [54] or, in a different context, whether redundancy or regeneration is a favored strategy to overcome neural damage [55] (Figure 1d). Matter samples that undergo phase transitions often lose or gain some symmetry [56, 57] (e.g. as a crystal’s lattice invariance fades upon melting).
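The kind of cost-benefit balance behind the redundancy-versus-regeneration phase diagram of [55] (Figure 1d) can be illustrated with a toy calculation. The sketch below is our own minimal construction, not the actual model of [55]; all parameters (damage probability `p`, regeneration cost `r`, per-connection wiring cost `w`, fitness cost `f`) are hypothetical:

```python
def best_strategy(f, w, r=2.0, p=0.3, n_links=10):
    """Cheapest way to secure a computation under toy costs (all hypothetical).

    f: fitness cost of a wrong/missing computation,
    w: metabolic cost per redundant connection (n_links of them),
    r: metabolic cost of regenerating the circuit, paid when damage
       (probability p) strikes.
    """
    costs = {
        "no computation": f,               # forgo the phenotype, pay fitness cost
        "regeneration": p * r,             # regrow circuitry after damage
        "neural redundancy": w * n_links,  # maintain spare wiring upfront
    }
    return min(costs, key=costs.get)

# Sweeping the two axes of the toy diagram recovers three phases:
for f in (0.1, 1.0, 10.0):
    print([best_strategy(f, w) for w in (0.01, 0.1, 1.0)])
```

Cheap computations (low f) are simply dropped; valuable ones are secured by redundancy when wiring is cheap, and by regeneration otherwise. Transition lines between phases follow from equating pairs of costs (e.g. w·n_links = p·r).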
Symmetry (or lack thereof) is a prominent topic for the brain [58], as it concerns redundant computations or lateralization of cognitive functions (a kind of broken symmetry). Statistical Physics has an array of tools ready to characterize symmetry in neural systems.

Developing a Statistical Physics of neural circuitry is a big task but some efforts are underway [5, 35–37, 46, 47, 54, 55, 59–61]. To facilitate this, we can come up with prominent dimensions that capture essential aspects of partial problems. For example (besides the metabolic cost of neural circuits), the complexity of the organism’s interaction with its environment as reflected by i) richness of input signals and ii) generated output behavior; as well as iii) internal organizing principles needed to carry out the relevant computations. The physical scale of the investigated neural structures might be another deciding feature: a broad cortical area or a cortical column might deal more easily with excess input or output complexity than single neurons – as we will discuss. Our hope is that effective thermodynamic-like potentials might emerge associated with such relevant dimensions. Thus, we might reduce the problem to simple cost-benefit calculations again. Such coarse-graining of lesser details into effective causal tokens is a current hot topic in Statistical Physics and Information Theory [62–71]. And this is somehow the task of the brain as well, as it builds efficient causal representations of its environment, or as it passes these representations around different processing centers. Perhaps, information-theoretical limits to efficient symbolic encoding constrain neural wiring and communication between brain structures.

Building a statistical physics of neural circuits at large is out of our scope here, but that framework guides and inspires our research.
We focus on much narrower questions: As we wonder in the title, what is the evolutionary fate of duplicated neural circuits as they confront fixed thermodynamic and computational conditions? Given some metabolic constraints and information-processing needs, is a duplicated neural structure stable? Or is it so redundant that it becomes energetically unjustified? Can two duplicated circuits interact with each other to alter the available computational landscape? How does this affect the evolutionary path further taken by each symmetric counterpart? How is this affected by the scale of the duplicated units (e.g. are the evolutionary fates available the same for redundant neurons, ganglia, cortical columns or layers, etc.)? Can we capture some of these aspects with simple cost-benefit tradeoffs, thus building phase diagrams to reveal evolutionary transitions?

To better ground the problem in actual biology, in section II we collect examples of neural duplicities in Central Nervous Systems (CNS). The list is not exhaustive. The examples were chosen because of their prominence, because they pose open neuroscientific problems, or because we wish to explore interesting ideas about them. Sometimes the duplicity is explicit (the same circuit appears twice in our brains), sometimes it is subtler (a task is implemented twice by different structures, perhaps in different ways – thus, why this phenotypic redundancy and how is it achieved?). Throughout, we wonder what conditions (usually, of computational complexity) might support this duplicity or trigger its evolution in a phase-like transition. In section III we model a simple case through naive cost-benefit tradeoffs. These result in actual phase diagrams with transitions that we can calculate explicitly. These diagrams inform us about possible evolutionary paths towards computationally complex phenotypes. These are novel results further investigated elsewhere [72].
In section IV we bring models, results, and examples together into the big picture exposed in this introduction.

II. A SHOWCASE OF DUPLICATED NEURAL STRUCTURES

A. The two hemispheres
The two brain hemispheres come readily to mind as duplicated neural structures. Broadly, the brain presents bilaterality with two large, mirror-symmetric halves [73, 74]. Their symmetry likely stems from the body plan of bilaterians [75] – non-bilaterian ‘brains’ present other symmetries [76, 77]. Part of the correspondence between the body plan and the brain is still explicit in the somatosensory and motor cortices, which contain explicit representations of body parts ordered contiguously in the brain roughly as in the body [78]. But evidence from lesions and hemispherectomies shows that the brain’s bilateral symmetry might be redundant. Individuals who lose a hemisphere (especially young ones) can often regain control of both bilateral body sides with the halved brain alone, as well as implement other critical tasks [79–86].

A more careful examination further dismantles the appearance of bilateral symmetry in the brain [58, 73, 87, 88]. One hemisphere motor-dominates the other, resulting in a more skilled contralateral part of the body [88]. Usually, the dominant hemisphere also houses all prominent language centers [87, 89–95]. The areas where these centers could dwell (see next subsection) are thicker in the language-dominant side, resulting in macroscopically visible asymmetries. Primary visual cortices are fairly symmetrical, but visual processing higher up in the hierarchy is somehow lateralized [73, 96, 97]. The visual system is usually larger in the motor-dominated hemisphere [73].

In humans, more often the left hemisphere motor-dominates, resulting in right-handedness. The typical brain presents developed language centers at the left as well, and enlarged visual cortices at the right [73]. When the dominance is reversed, the brain is mirror-symmetric to the typical one – i.e. a swap of dominance does not result in major disturbances of the neural architecture [73, 87]. But excessively symmetric brains correlate with pathologies such as aphasia [87, 98].
What are the reasons behind this lateralization of certain tasks? Some unsettled discussion exists as to whether more complex cognitive function correlates with lateralized brains [99]. Is lateralization a prerequisite for the emergence of certain traits, or does it follow from their evolution? How might an excess of symmetry (i.e. faithfully duplicated neural circuits) prove detrimental [100]? These questions appear related to symmetry breaking phenomena, which are thoroughly characterized by statistical physics tools [56, 57]. Can we incorporate this formalism into a computational and Darwinian framework to describe symmetry breaking in the brain?
B. The perisylvian network for human language
The discovery of Broca’s area was a milestone in the understanding of the brain [73]. It offered definitive proof that different cortical parts take care of distinct functions, and that both hemispheres are not equal. Human language is lateralized, typically with the most prominent language-specific circuitry at the motor-dominant hemisphere. This circuitry is located around the Sylvian fissure [89–95]. It includes the Broca and Wernicke areas, among others, and interfaces abundantly with motor and auditory cortices. The language-dominated hemisphere lacks these structures and the thick wiring connecting them. Both hemispheres are more symmetric at birth [87, 95, 101]. Evidence suggests that similar circuits exist at either side in newborns, and that both react to speech as soon as day 2 (even with a preponderant reaction on the right side) [101]. Presumably, only the circuits at the dominant side mature into the fully lateralized perisylvian language network. Some adult brains are less lateralized, developing seemingly symmetric circuitry at both sides. Such brains more often present language pathologies [87, 98]. This strongly suggests that a duplicity of language circuits is counterproductive. Might this be due to a conflicting interference between two candidates for language production [100]?

Normal language development thus suggests that it is convenient to lose some of the innate duplicity. But clinical cases of language recovery after hemispherectomy or injury of the dominant side suggest that enough redundancy can persist in the dominated hemisphere. These studies show that children who lost their matured language centers can grow them anew in the opposite side, and that this is easier the earlier that the intervention or injury takes place [79, 82–86, 102]. A mainstream explanation of this capability posits that the brain is more plastic at younger ages, thus a potential to reconstruct language remains.
This plasticity would be gradually lost, eventually preventing fully functional language from regrowing in the dominated hemisphere. This would suggest that the duplicity of neural circuitry related to language is not realized, but potential. However, when fully functional language develops after hemispherectomy or injury, the corresponding centers do not establish themselves in arbitrary places, but in the corresponding Wernicke, Broca, etc. territories of the dominated side. Some duplicity, a blueprint of the missing circuits, must exist to guide this process. How is this latent duplicity balanced to prevent unhealthy interferences in healthy brains?
C. Internalizing the control of movement
Rodolfo Llinás suggested that the evolutionary history of the CNS is that of the progressive “encephalization” of motor rhythms [103, 104]. We can find living fossils of some stages in a range of species, including ours. The earliest circuits for motor control, still present in Cnidarians and others, dwelt right under the skin [76]. Some arthropods have decentralized ganglia to coordinate their motion [105, 106]. Much simpler ganglia still take care of fast reflexes in most other species [107]. As we progress towards more complex brains, movement coordination is centralized and layers of control are added. In reptiles, birds, and mammals, the brain stem coordinates several stereotyped behaviors – e.g. gait, chewing, breathing, or digesting [108]. The voluntary aspect of these and other, more complex tasks originates at subcortical centers or motor cortices.

In this process, new structures take over tasks previously managed by simpler neural centers. Whenever this happens, some overlap in function exists. Eventually, the older control structures become controlled by the newer ones. They might lose some of their complexity (e.g. ganglia controlling reflexes in mammals). In any case, they enable a hierarchical control exerted from the top. This requires that a dialog be successfully established across levels. The brain stem plays a paramount role in this sense: it works as a
Central Pattern Generator [104, 108, 109] that translates signals from higher-up centers into salient electric waveforms that activate motor neurons in an orderly manner – thus establishing, e.g., gait rhythms. Transition between different gaits (walk, trot, gallop, etc.) is discrete, as is the transition between the patterns generated at the brain stem.

Throughout the motor control hierarchy, the resolution of simpler tasks coarse-grains them so that they become building blocks for the next level. Similar phenomenology is currently under active research in information theory, through the study of coarse-grained symbolic dynamics [62–71]. What triggers the emergence of new layers of control in this hierarchy? Is this externally motivated – e.g. because a new range of behaviors becomes available and more complex control is needed? Or internally – e.g. because an increased computational power of the CNS prompts a reorganization [110]? The statistical physics of coarse-grained symbolic dynamics might shed some light on these questions, as well as research on robot control [108].
D. Place and grid cells – a twofold representation of space?
Both grid and place cells represent space [111–113]. Grid cells are located in the medial entorhinal cortex. They build an exhaustive representation of the available room [113–115]. Each grid cell encodes space periodically such that its receptive fields are the nodes of a grid with fixed spatial period (Figure 1e). Different grid cells use different periodicity and phase such that each point is uniquely encoded by the spiking of confluent nodes. Place cells are located in the hippocampus. Instead of responding periodically over space, they code specific, individual spots – a single place. They can also code for salient elements in a landscape or more extended areas – e.g. the length along a boundary.

Why this duplicitous space representation? Is it needed, convenient, or redundant? Did it emerge as each structure specialized in some specific aspect of spatial coding? In this sense, grid cells have been associated with path integration [113–119], indispensable to keep track when external cues are missing. And place cells relate to episodic memory, memory retrieval, and consolidation of trajectories [118, 120, 121]. They also integrate non-spatial modal information [122].

We entertain a complementary possible origin for the necessity of a twofold space representation. Our hypothesis is compatible with other roles for both cell types. In [123], movement planning is solved as a constraint satisfaction problem. Networks of spiking neurons are extremely apt at quickly finding great solutions to such problems [124, 125], but only if we manage to encode the constraints in the neural network topology. Different neurons in the network become the embodiment of causal variables of the problem. Their firing represents different combinations of the problem’s variables – i.e. candidate solutions for the constraint satisfaction. As neurons repress or activate each other (as dictated by the network’s wiring), they test candidate solutions against the problem’s constraints.
To encode movement planning like this, a twofold representation is needed [123]: first, an exhaustive representation of the available room; second, an encoding of relevant elements that act as constraints (e.g. walls that cannot be traversed, goals that offer a reward when reached). The first representation acts as a virtual space upon which external constraints or internalized goals can be uploaded. Both representations interact to generate ordered spiking patterns representing candidate trajectories. Optimal paths satisfy the problem’s constraints, avoiding obstacles and reaching goals.

It is tempting to assign grid cells the role of a virtual, all-encompassing space representation. Place cells could then impose constraints derived from internal goals or external cues. In [123], the virtual space is a square grid with one neuron per grid position. This requires ∼N neurons to encode the N discrete locations. It is more efficient to represent the N sites with a binary code, which uses around ∼log(N) neurons. This is achieved by neurons coding position periodically, just as grid cells do. Thus, grid cells build efficient representations of the available room [113] – regardless of their role in our constraint satisfaction scheme.

Regarding place cells: they respond to landscapes, to stimuli in the environment, to non-spatial features such as odors, directionality, etc. [122, 126, 127]. Their firing can change as a response to environment manipulations [121, 126, 128]. This remapping can happen due to the change of a location’s relevance in an ongoing task [129]. Firing can also be modulated by specific behaviors (e.g. sniffing) at a location [130]. Place cells have also been assigned a predictive role, as they engage in mechanisms relevant for reinforcement learning [131, 132]. For example, consecutive place cells representing a path are known to fire sequentially before that path is taken – thus preplaying it [118, 133, 134].
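The ∼log(N) coding argument above can be made concrete with a small sketch (our illustration; the unit counts and the binary decoding are simplifying assumptions, not a claim about actual grid-cell periods):

```python
from math import ceil, log2

def one_hot_code(n_sites):
    """Place-cell-like scheme: one dedicated unit per location (~N units)."""
    return [[int(i == j) for j in range(n_sites)] for i in range(n_sites)]

def periodic_code(n_sites):
    """Grid-cell-like scheme: unit k fires periodically over positions (bit k,
    period 2**(k+1)), so ~log2(N) units label every site uniquely."""
    n_units = ceil(log2(n_sites))
    return [[(pos >> k) & 1 for k in range(n_units)] for pos in range(n_sites)]

n = 64
assert len(one_hot_code(n)[0]) == 64        # one unit per site
assert len(periodic_code(n)[0]) == 6        # log2(64) periodic units suffice
assert len({tuple(c) for c in periodic_code(n)}) == n  # codewords stay unique
```

Real grid cells use a small set of incommensurate spatial periods rather than powers of two, but the compression logic is the same: periodic units jointly single out each location with exponentially fewer cells than a one-per-place code.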
An already traveled path is also replayed once a destination is reached, potentially allocating rewards [118]. The hippocampus contains populations explicitly encoding goals and rewards [135, 136]. All these features are ideal for hippocampal cells (and specifically place cells) to represent constraints (objectives, destinations, places to avoid, etc.) to be uploaded onto the virtual room representation of grid cells, just as in [123]. This hypothesis is a beautiful way forward to understand our navigation system, and how its different parts come together as a distributed algorithm. Some ongoing research seems to point in this direction [114, 131, 132].

E. Somatosensory and motor cortices
Evidence from extant animals indicates that the earliest neocortical structures dealt mostly with sensory input [137, 138], with prominent roles for visual, auditory, and olfactory modes. Somatosensory cortices evolved earlier than their motor counterparts. The most evolutionarily ancient, extant mammal species (e.g. opossums) seem to lack a motor cortex [139]. The corresponding (rather poorly developed) motor functions are handled by the somatosensory cortex itself [137–139] or by thalamic centers [137, 140]. A primary motor cortex appears with placental mammals [137, 138, 141]. The number and complexity of devoted motor cortical areas grows as we approach our closest relatives [137, 138] – notably, first, due to the motor-visual integration demanded by tasks such as reaching or grasping [137, 138, 142, 143].

If we want to build an effective controller of a complex system (e.g. a cortex to handle a body with many parts), it is a mathematical necessity that the controller be a good model of the system itself [144]. We might think of somatosensory areas as the model within the larger (brain-wide) controller. This suggests that the evolution of somatosensory and motor areas must be intertwined, and that somatosensory maps should predate complex motor control. But why does a dedicated motor cortex appear eventually if motor control could already be handled by somatosensory centers [137–139]? What evolutionary pressures prompted this emancipation of the motor areas? Does the motor cortex rely on the somatosensory cortex for the modeling needed for control? Or do motor areas develop their own modeling? If so, how does this differ from the representation in somatosensory areas? How much modeling redundancy exists in both these physically separated cortices?

We have chosen the somatosensory–motor axis, but we might ask similar questions about redundancy in somatosensory areas as well. They accommodate coexisting parallel representations of the same body parts [137, 138, 145].
What prompted this abundance of duplicated circuits? How did they emerge? Is there something about the computational nature of the task (body representation and motor control) that favors such coexistence of redundant circuits?
F. Reactive versus predictive brains
In [146] Andy Clark defends the predictive brain hypothesis – also studied as predictive coding [147–150]. According to this view, complex brains do not stay idle, awaiting external inputs. Instead, they embody generative models that continuously put forward theories about what external environments look like. Representations built by those generative models would flow from high in a cognitive hierarchy towards the sensory cortices. There, they would be confronted with external signals captured by the senses. These signals would correct mismatches and further tune those aspects of the generated representation that were broadly correct. In predictive brains, input signals would still be necessary, but not as the main actors that build complex percepts. Instead, they would merely serve as an error correction mechanism to contrast the generated models. What flows upwards in the hierarchy are the discriminating errors that need to be discounted in the generated representations [148–154].

Advanced visual systems are a favorite example [147–149]. Again, higher cognitive centers elaborate hypotheses about a current scene. The generated representation flows from cortices that suggest shapes and locations for distinct objects, down into primary visual cortices that hypothesize about the location and orientation of these objects’ edges. There, the input stream is discounted, making no correction if the hypotheses flowing top-down are correct. Otherwise, discriminant errors (signaling, e.g., misaligned edges, mismatching 3-D shapes, etc.) make their way bottom-up until they tighten the right screws across the visual hierarchy. A similar confrontation would happen at each interface between levels in this hierarchy. Some empirical evidence supports this view of complex visual systems [148, 149, 152]. Some Machine Learning approaches draw inspiration from the predictive brain hypothesis [155–157].
However, historically, most mainstream Artificial Neural Networks for vision or scene recognition work in a reverse fashion [158, 159]. Visual representations are generated from scratch as the input (processed sequentially from the bottom up) reaches the distinct layers. We term this a reactive brain. The wiring of reactive brains is simpler than that of predictive brains. The former only need to convey information in one direction, while the latter need to allow two: generative model outputs flow top-down and correcting errors climb bottom-up. If cognitively advanced brains are predictive, rather than reactive, they need to harbor duplicated circuitry to support the bidirectional flow. Interesting questions arise:

• Is there a threshold of cognitive complexity beyond which predictive brains always pay off? Protozoa reacting to gradients for chemotaxis hardly need a model of possible distributions of chemicals. The ability to dream in complex brains reveals the existence of generative models. When did they become favored?

• What environments are more likely to select for predictive brains? Picture a scene that varies wildly over time, often showing new inputs that a brain could hardly imagine based on previous experience. The cost of correcting hypotheses from generative models might be unbearable. Building representations anew from each incoming input (just as in deep neural networks) might be cheaper. Similarly, a pressure to anticipate the environment (e.g. to react to looming dangers) must be important – otherwise, reactive brains would do just as well and are cheaper.

• How did the duplicated circuitry needed for predictive brains emerge? Was a single system duplicated and reversed? Did, instead, a structure dedicated to error propagation grow over a previously existing scaffold, reversing the flow of information as it expanded? Is there a site in a cognitive hierarchy in which the predictive stance is more easily adopted, and hence the needed circuitry expands from there?
Or did both flow directions of predictive brains evolve simultaneously from early on? This latter possibility demands that even simple brains allow the predictive way of working, which might be possible [160–163]. Note that complex brains might also work in a reactive manner if needed – as reflexes do in advanced nervous systems [107].

We suspect that some answers to these questions might be constrained by information theory limits to channel capacity and error correction. Numerical evaluations of the performance of reactive versus predictive circuits under different computational conditions seem affordable. This would help us build phase diagrams and characterize transitions between these brain classes.
G. The cortical column
Rather than a duplicated neural structure, we look here at a multiplicated one. The Cortical Column (CC) [167] (Figure 1f) has been proposed as a basic computing unit of the mammalian brain [164, 165]. Hypothetically, evolution stumbled upon this complex circuit, so apt for advanced cognition that it was “manufactured” en masse, eventually populating the whole neocortex [164, 165]. This story is appealing, but there is no consensus around it [166].

Single CCs have been used as a model reservoir since the inception of Reservoir Computing [48–53]. A reservoir is a highly non-linear dynamical system that projects input signals into large-dimensional spaces. In them, different input features become easy to separate from one another. Within this framework, a CC would not be task-specific, but rather work as a generic reservoir that separates potentially meaningful features of arbitrary inputs. Other, dedicated circuits would be tasked with selecting the relevant information from among all the insights that a CC makes available. This would make CCs very versatile computing units, as they can be adapted to different purposes and even implement different functions simultaneously [48].

Evidence of computing principles compatible with reservoir-like behavior in CCs is scarce [54, 168–171]. In contrast, there is abundant evidence of task-specificity. CCs’ morphology and abundance vary across the cortex, likely influenced by their computational idiosyncrasies. The mouse whisker barrel cortex is an extreme example. Individual CCs of this cortex have grown and thickened through evolution to take care of the somatosensory processing of each whisker [166, 172].
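The reservoir scheme just described can be sketched with a minimal Echo State Network, a standard Reservoir Computing setup [48–53]; the sizes, spectral radius, and the delayed-recall task are our own illustrative choices, not a model of an actual CC:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 200                                    # reservoir units
W_in = rng.uniform(-0.5, 0.5, size=N)      # input weights (fixed, random)
W = rng.normal(size=(N, N))                # recurrent weights (fixed, random)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state)

def run_reservoir(u):
    """Drive the fixed random circuit with input u; return the state history."""
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)    # nonlinear high-dimensional echo
        states.append(x.copy())
    return np.array(states)

# Task: read out the input from one step ago. Only the linear readout is
# trained; the reservoir itself stays generic.
u = rng.uniform(-1, 1, size=500)
X = run_reservoir(u)[10:]                  # drop a short washout period
y = u[9:-1]                                # target: input delayed by one step
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.mean((X @ W_out - y) ** 2) < 0.05  # features were linearly separable
```

Several readouts can be trained on the same state history X, which is the sense in which a single generic reservoir can serve different purposes simultaneously.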
The purported versatility of CCs (including this capacity to commit to a task and change morphology over evolutionary time) makes them an interesting candidate, as a computing unit, to test the ideas exposed in the Introduction [54]: What is the steady shape and dynamics of a CC under fixed computational and thermodynamic (energetic, input entropy, etc.) constraints? Under which conditions does a CC remain reservoir-like – thus task-versatile? What conditions, instead, prompt their evolution towards task-specific forms? What might trigger their multiplicity across the neocortex? Theoretical advances seem plausible through relatively simple computer simulations. From the empirical side, CCs across species and cortical areas seem real-life versions of the proposed experiments.

III. SIMPLE MODELS FOR A COMPLEX RESEARCH LINE
The examples just reviewed show promising intersections between statistical physics, information theory, and the evolutionary origin and fate of duplicated neural structures. In this section we develop specific models that capture tradeoffs of circuit topology and computational complexity. Through these, we build phase diagrams – explicitly in subsection III.A and qualitatively in subsection III.B.
A. A naive cost-efficiency model of duplicated circuitry for complex tasks
We aim at building the simplest model that still captures some aspects which, we think, determine the stability of duplicated versus single neural structures. Therefore, we make mathematical choices that simplify our calculations, then look at what such a minimal model can teach us. Alternative, less simple models will be discussed elsewhere [72]. The essential aspects that we wish to capture are:

1. A neural structure garners a fitness advantage by successfully implementing some computation. This results in a cognitive phenotype that makes the organism more apt at navigating its environment, mating, obtaining food, etc.; thus securing more energy to sustain its metabolism and, eventually, increasing progeny. A duplicated neural structure can result in computational robustness (i.e. more reliable cognition) even with faulty components [173, 174].

2. Computation is costly. A circuit's physical structure (neurons, synapses, etc.) imposes a material and metabolic burden. Signaling between neurons has a high energetic toll [175], thus a circuit's cost grows with its wiring complexity, or if connections become too long [138]. Duplicated structures would pay these costs twice, resulting in a pressure against redundancy [176, 177].

3. Coordinating duplicates is also costly. It often requires additional structures (with their associated costs) to integrate the multiple outputs. If such integration is missing, the failure to coordinate can become pathological [178]. If the duplicated structures are far apart (e.g. at bilaterally-symmetric, distant positions in each hemisphere), output integration would pay the cost of lengthy connections as well. This results in further pressures against duplicated circuits [177, 179, 180].

Assume a complex neural structure composed of a number of subcircuits or submodules. We will study whether it pays off to duplicate such a structure in two different scenarios:

• In the first (uncooperative) scenario, each subcircuit implements a task different and independent from the tasks of other submodules.
The implementation of each task results in a fitness benefit, independently of whether the other subcircuits function correctly.

• In the second (cooperative) scenario, the neural structure has been presented a chance to evolve a more complex cognitive phenotype. Its successful implementation reports a large fitness benefit – but only if all tasks are correctly implemented. Thus, while each subcircuit still performs different computations, they are no longer independent.

These two scenarios roughly model i) evolutionary preconditions that are independent of each other and ii) the process that integrates them into a new, emerging cognitive phenotype.

If duplicated, each copy of the neural structure contains the same number of submodules. Equivalent subcircuits at either structure solve the same task, so a coordination cost ensues. In return, as remarked above, computation might be more robust [173, 174]. These potentially duplicated structures might be bilateral counterparts or other, arbitrary, duplicated circuits not obeying bilateral symmetry. For simplicity, let us assume bilaterality and label these structures S_L and S_R (for left and right). We will say that a phenotype has "bilateral symmetry" if it favors structure duplicity, and that it is "lateralized" if only one structure is preferred. This is just a convenient notation – our analysis remains valid for arbitrary (non-bilateral) duplicates.

Let K be the number of tasks available – so that S_L/R consist of K submodules. Let us assume that each structure has an active number of submodules 0 ≤ K_L/R ≤ K. Active modules will enter the cost-benefit calculation; inactive ones will not. The probability of selecting a module at random and finding it active is κ_L/R ≡ K_L/R / K. Furthermore, subcircuits are unreliable – each of them computes incorrectly with probability ε.

With this, let us write costs and benefits in the uncooperative scenario. In it, a benefit b is cashed in by each independent, successfully implemented subtask.
Thus a global benefit reads:

B = b [ (1 − ε) κ_L (1 − κ_R) + (1 − ε) (1 − κ_L) κ_R + (1 − ε²) κ_L κ_R ] K.    (1)

Within square brackets we have, for each subtask, the probability that only the left submodule is active and computes correctly, the probability that only the right submodule is active and computes correctly, and the probability that both submodules are active and at least one computes correctly. The activation of both submodules for a same task entails a coordination cost:

C = c κ_L κ_R K.    (2)

We further assume a fixed cost for each active module, independent of coordination:

Ĉ = ĉ (1 − ε) (K_L + K_R) = ĉ (1 − ε) (κ_L + κ_R) K.    (3)

We made this cost grow with the module's efficiency (i.e. fall with ε). For simplicity, we assume a linear dependency. Other, non-linear alternatives will be discussed in [72]. The resulting utility function (normalized by K) reads:

ρ = b (1 − ε) [ κ_L (1 − κ_R) + (1 − κ_L) κ_R + (1 + ε) κ_L κ_R ] − c κ_L κ_R − ĉ (1 − ε) (κ_L + κ_R).    (4)

We now study the cooperative scenario. In it, the different tasks need to become interdependent to earn the fitness reward. Let this reward be bK, so it grows with the complexity (in number of modules involved) of the phenotype. Multiplying by the likelihood that all tasks are successfully implemented:

B̃ = b K (1 − ε)^K [ κ_L (1 − κ_R) + (1 − κ_L) κ_R + (1 + ε) κ_L κ_R ]^K.    (5)

The costs remain the same, so the second utility function (normalized by K) reads:

ρ̃ = b (1 − ε)^K [ κ_L (1 − κ_R) + (1 − κ_L) κ_R + (1 + ε) κ_L κ_R ]^K − c κ_L κ_R − ĉ (1 − ε) (κ_L + κ_R).    (6)

Always seeking simplicity, we assumed a linear weighting of costs and benefits. Hence, parameters b, c, and ĉ act as external biases and correcting factors to homogenize units.
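The utility functions above are easy to explore numerically. Below is a minimal sketch (our own; it takes b = 1 and ĉ = 0.1 as in the text, and the function and phase names are our choices). Since no graded optima appear, it suffices to compare the three pure configurations κ_{L/R} ∈ {0, 1}:

```python
# Corner configurations of (kappa_L, kappa_R): no structure, one, or both.
CONFIGS = {"none": (0.0, 0.0), "lateralized": (1.0, 0.0), "bilateral": (1.0, 1.0)}

def rho(kL, kR, eps, c, b=1.0, c_hat=0.1):
    """Equation 4: utility per task in the uncooperative scenario."""
    benefit = b * (1 - eps) * (kL * (1 - kR) + (1 - kL) * kR + (1 + eps) * kL * kR)
    return benefit - c * kL * kR - c_hat * (1 - eps) * (kL + kR)

def rho_coop(kL, kR, eps, c, K, b=1.0, c_hat=0.1):
    """Equation 6: utility once all K tasks must succeed jointly."""
    success = kL * (1 - kR) + (1 - kL) * kR + (1 + eps) * kL * kR
    return b * (1 - eps) ** K * success ** K - c * kL * kR - c_hat * (1 - eps) * (kL + kR)

def phase(utility):
    """Return the corner configuration maximizing the given utility."""
    return max(CONFIGS, key=lambda name: utility(*CONFIGS[name]))
```

Sweeping ε and c maps out the phases. For instance, `phase(lambda kL, kR: rho(kL, kR, eps=0.1, c=0.3))` yields a lateralized optimum, while the cooperative utility at the same point (`rho_coop` with K = 10) flips it to bilateral; larger ε favors redundancy, and larger coordination costs c favor a single structure.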
More rigorously, we should optimize costs and benefits independently, as in Pareto optimization [181–183]. But this formalism maps back to statistical physics, and phase diagrams result from the utility functions above [183–187].

Take b = 1 without loss of generality, and a fixed cost ĉ = 0.1. Optimizing Equations 4 and 6 we obtain the phase diagrams of Figure 2. They tell us what structures to expect depending on a series of metabolic (energetic) costs linked to successful information processing. A first important result is that neither equation admits graded solutions: either both structures are kept, or one, or none. This naive account does not support a backup structure that takes care of partial computations. If each individual subtask contributes a fitness independent of all others (Equation 4), it always pays off to implement either one or both structures (Figure 2a). This is not the case for the emergent
phenotype (Equation 6): in its phase diagram (Figure 2b), for a broad region of parameters it never pays off to build any structure at all.

Superposing both phase diagrams (Figure 2c) reveals evolutionary paths for transitions into the novel, complex phenotype. For a broad region, the complex phenotype cannot be accessed (labeled "No emergence" in Figure 2c). For the rest of the diagram there is a direct evolutionary path towards the emergent phenotype. This includes cases in which bilateral structures remain in place ("Bilateral to bilateral" in Figure 2c) and cases in which a single structure (already optimal for the subtasks) implements the emergent phenotype on its own ("Lateral to lateral" in Figure 2c). Notably, this naive tradeoff never shows a bilateral structure that becomes lateralized as the complex phenotype emerges. To observe such a transition we would need to move around the phase diagram – i.e. the new emergent phenotype must change ε or c (blue curves in Figure 2c). This might happen, e.g., if the complexity of the emergent phenotype imposes a higher reliability (lower ε) on all the parts, or if coordination becomes more costly (e.g. because fewer discrepancies are tolerated).

FIG. 2: Phases of single and duplicated neural structures. a Phase diagram showing parameter combinations where a single (darker) or a duplicated (lighter) structure is preferred when subcircuits are not cooperating towards an emerging phenotype. b Phase diagram when subcircuits cooperate towards an emergent phenotype. A new phase exists in which no circuit is implemented at all (white). c Superposing both phase diagrams reveals evolutionary paths for the emergence of the complex phenotype from pre-adaptations. Depending on metabolic costs and fitness contributions, its emergence might be blocked (white) or demand that a single circuit gets duplicated. Blue paths indicate necessary changes in external (metabolic and efficiency) conditions for an evolutionary path that takes us from a bilateral to a lateralized structure.

B. The garden of forking neural tissues
In an outstanding, broad region of the phase diagram ("Lateral to bilateral" in Figure 2c), it is favored that a single structure becomes duplicated when a high fitness can be gained by the complex, emergent phenotype. Recall that 'lateral' and 'bilateral' are just convenient labels – our results concern any neural structure afforded such an evolutionary chance. Based on our analysis, the duplication of extant structures appears as a reliable evolutionary path. Perhaps it was followed by some of the cases reviewed in section II. Empirical observations have indeed recently suggested that pathway duplication is a key mechanism for the unfolding of brain complexity [188], and that it offers buffering opportunities similar to those presented by duplicated genes [189, 190]. In the mechanism of gene duplication, one copy remains faithful and functional while the other explores the phenotypic landscape, often uncovering new functions. Might duplicated neural structures wander off in a similar manner? What might their evolutionary fate become as learning or Darwinian selection press on? How does this fate depend on the task at hand? For example, how does it change with the complexity of the input signals or of the desired output? And with the complexity of the input-output mapping? In this section we introduce a Gedankenexperiment to gain some insights about the further evolution of duplicated neural structures. We will also attempt to fit examples from section II into this qualitative framework. Necessarily, our conclusions are speculative.

Take a single neuron that implements a specific function. It receives weighted inputs from some sources (potentially including itself; thus recurrence is allowed). Its output feeds into a set of 'actuators' to produce some function, which results from a weighted sum of the neuron's output as well.
Now let us duplicate this neuron precisely (same weights at the in-, self-, and out-synapses) and let us add a few random links between the (now two) neurons (Figure 3a). Next, let us feed them a stream of inputs while implementing some plasticity rule (e.g. spike-timing dependent plasticity [191]). Of course, let this plasticity be influenced by the fitness of the resulting behavior (via the actuators). Correct implementation (e.g. of the previous, or of a new, emerging cognitive phenotype) should reinforce the connections that produced it. Wrong implementations should penalize the corresponding weights.
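A toy version of this Gedankenexperiment fits in a few lines. The sketch below is our own set of simplifying assumptions (rate neurons instead of spiking ones, and fitness-gated random perturbations standing in for reward-modulated plasticity; the target mapping and all parameters are hypothetical): duplicate a neuron's input weight exactly, then let plasticity act on both copies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Desired behavior: a fixed input-output mapping the circuit should implement.
w_true = 1.5
def target(u):
    return np.tanh(w_true * u)

# Duplicate the neuron: both copies start with identical input weights,
# and a fixed readout (the 'actuators') sums their outputs.
w = np.array([0.4, 0.4])   # input weights of the two copies
a = np.array([0.5, 0.5])   # fixed output weights

def output(u, w):
    return a @ np.tanh(w * u)

def fitness(w, inputs):
    """Negative mean squared error of the produced behavior."""
    return -np.mean([(output(u, w) - target(u)) ** 2 for u in inputs])

# Fitness-gated plasticity: keep a random perturbation only if behavior improves.
inputs = rng.uniform(-1, 1, 50)
f_cur = fitness(w, inputs)
for _ in range(2000):
    trial = w + rng.normal(scale=0.05, size=2)
    f_trial = fitness(trial, inputs)
    if f_trial > f_cur:
        w, f_cur = trial, f_trial

final_err = -f_cur               # residual error of the evolved pair
divergence = abs(w[0] - w[1])    # how far the two copies drifted apart
```

Tracking `divergence` tells whether the copies stay redundant or drift apart; repeating the experiment for richer target mappings is one way to probe the questions below.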
FIG. 3: The garden of forking neural structures. a Two neurons become duplicated: how do they evolve further? b Might one become specialized in input processing and the other in output control? c Might one take care of the I/O interface and the other become a controller that allows for subtler responses? d Might they split ways, effectively dividing the original task, or perhaps exploring new functions? e The answers might depend on broad aspects of the task at hand, such as its input or output richness, or its intrinsic mathematical complexity. In this speculative morphospace we attempt to locate some cases reviewed in section II. The separation of reactive vs. predictive brains is based on their dependence on input and output richness respectively. Predictive brains need to contain generative models (gray balls), circuits capable of generating context-dependent representations richer than the input that elicited them.
What happens to the duplicated neurons?

• Is one of them lost, thus reverting to the original configuration? Then duplicating this structure was never favorable in the first place. Do they help each other instead, achieving a more robust computation?

• Do they become respectively specialized in pre-processing the input and producing elaborated outputs (Figure 3b)? This reminds us of the specialization of somatosensory and motor cortices, or of Wernicke's and Broca's areas – rather processing input and producing output syntax respectively.

• As the neuron is duplicated, the fitness landscape of computational possibilities changes. For example, it might become feasible to implement the original function in a hierarchical fashion, as we have seen in motor control. Might the neurons arrange themselves in a 'controller-controlled' architecture (Figure 3c)? Might they, instead, take care of different subsets of the function – becoming effectively uncoupled? Or might they unlock previously unavailable phenotypes, thus expanding the computational landscape (Figure 3d)?

Two duplicated neurons might be too simple a system, and some options might be locked. What happens if, instead, the duplicated neural structure is a small ganglion, a complete cortical column, or a whole patch of cortex? What if it is a large, complex structure functioning as a phenotype module (e.g. the whole fusiform face area)? Do new evolutionary paths become available beyond some complexity threshold of the duplicated substrate? Are some configurations easier to achieve for simpler circuits, and others favored for more complex structures? Or does the landscape of possibilities remain roughly similar to the one for the duplicated spiking neuron?

Numerical experiments to shed some light on these questions are underway. Meanwhile, we single out dimensions that are relevant for the problem.
Final evolutionary paths will depend on the specific task at hand, but three quantifiable aspects might stand out: i) the complexity, or intricacy, of input signals (K_IN); ii) the complexity, or richness, of the sought behavior (K_OUT); and iii) the complexity of the input-output mapping (K_MAP). These quantities suggest a morphospace where we can locate some of the neural structures discussed above (Figure 3e). Morphospaces remind us of phase diagrams. They are less rigorous, but still useful, tools to contemplate possible morphologies or designs [192–199]. They sort out real data or architectures produced by models with respect to some magnitudes, thus helping us compare structures to each other.

While speculative, we think that our example is a worthy exercise. We separated two large volumes of morphospace for reactive vs. predictive brains. The rationale for these locations is that: i) reactive brains should be mainly guided by input complexity, as their output (a reaction) lags behind; ii) predictive brains should be able to produce a wide array of representations so that minor corrections suffice to tune them. Let us elaborate on these reasons.

Reactive brains often reduce the input richness into categorical responses. Some neural circuits fall in this region: i) the brain stem, working as a Central Pattern Generator, takes as input the motor behavior planned in higher cortical areas and reduces it to sequences of stereotypical patterns that motor neurons easily understand; ii) most animal communication systems (far from the human faculty of language) also reduce a range of possible scenarios into categorical responses that (opposed to human language) must be communicated without ambiguity [199–202].

Rather predictive brains require little input complexity to prompt varied cognitive responses.
Some extremely simple circuits implement predictive dynamics [160–163], but we assume that, in general, the I/O mapping (K_MAP) of predictive brains is more complex, as it can be context-dependent. At the core of a predictive brain there are generative models that produce a range of possibilities being constantly evaluated and corrected [146]. They must present some recursivity, thus the interaction of the model's state with itself would add to K_MAP.

For three more examples we postulate similar and fairly high input and output complexity (K_IN ∼ K_OUT) and increasing mapping complexity (K_MAP). These are the representation of geometric space (with smaller K_MAP), sensory-motor maps (intermediate K_MAP), and human language (largest K_MAP). Most likely, the navigation and sensory-motor systems of different species present a range of K_MAP. In any case, the sensory-motor and language systems suggest that a balanced K_IN ∼ K_OUT more easily leads to duplicated structures specializing, respectively, in input pre-processing and rich output generation – perhaps above a certain K_MAP threshold.
IV. DISCUSSION
Thermodynamics dictates the abundance of distinct matter phases through straightforward cost-benefit calculations [183, 184]. In the simplest case, those patterns that minimize internal energy and maximize entropy are more probable. These calculations become more complex as other (e.g. chemical) potentials become relevant. A transition to life and Darwinian evolution [1–3] raises the cost-benefit stakes: matter patterns need to be thermodynamically favorable and win an evolutionary contest. Interestingly, the formalism of statistical physics remains effective to describe phenomenology across biology and cognition [6–14, 31–36]. The latter is enabled by expanded computational complexity and allows organisms to increasingly integrate and predict environmental information [2, 17–30].

A thermodynamically viable and evolutionarily successful organism must: first, optimize its interface with environmental inputs [203, 204] as well as its responding behavior; second, optimize its internal organization so that the input can be mapped into the output as cheaply as possible. These optimizations entail minimizing metabolic costs, e.g. from heat dissipation or entropy production. Some of these costs depend on information-theoretical input-output relationships, and are detached from the material implementation of a circuit [35, 36]. Ideally, we would write down all the thermodynamic ingredients involved to calculate detailed balances of the computations happening in candidate neural structures. This would reveal what wiring patterns are likely (i.e. more optimal in the eventual cost-benefit balance) given a fixed set of computations to be implemented (i.e. the sought cognitive phenotype), energetic affordances and demands, and entropic losses.

Performing such a comprehensive calculation is unrealistic even for the simple duplicated structures on which this paper focuses.
Our showcase of examples from real brains illustrates that some relevant aspects of the problem depend tightly on the specific behavior implemented by each circuit. Hence actual thermodynamic potentials for this problem may rely largely on the circuit's computational ends. But clever abstractions might reveal outstanding dimensions common to diverse phenotypes. Finding such coarse-grained dimensions would allow us to build effective cost-benefit balances and phase diagrams. The mathematics of arbitrary, emergent cost-benefit tradeoffs maps back to the formalism of statistical mechanics [183–187]. Thus, neural structures might still be captured by such phase diagrams, and "phase" transitions (akin to thermodynamic ones) might be uncovered.

We work out a simple case explicitly to study subcircuits within a neural structure before and after they cooperate towards a complex cognitive phenotype. Our cost-benefit calculation accounts for: i) the fitness benefit garnered by the implemented computations, ii) expenses to coordinate redundant circuits, and iii) other costs associated with normal (i.e. non-redundant) computing. Ultimately, all costs originate in thermodynamics (e.g. metabolism or material expenses). In the resulting diagrams:

• Two phases exist for uncooperative preconditions: one with a single structure and another with a duplicated structure. Large coordination costs (e.g. because the structures are far apart) result in a single structure. This supports the idea that functions must lateralize in enlarged brains [138].

• An additional, third phase appears for cooperating preconditions, in which the complex phenotype cannot emerge. Furthermore, the diagram is distorted, so each phase happens for different parameters than before.

• Superposing both diagrams shows evolutionary paths for the emerging phenotype, including:

– A reliable path that results in the duplication of single neural structures. This has been proposed as a frequent mechanism for unfolding cognitive complexity [188].
– The absence of a direct route from duplicated to single structures. This suggests that the emergence of novel function cannot prompt lateralization (e.g., as in language) with the elements in our cost-benefit study alone.

In a second example we speculate about how duplicated neural structures might further evolve once they are in place. This may depend on the specific phenotype that the structures implement and on how it expands the cognitive landscape. We propose three salient dimensions that might constrain evolutionary paths: i) complexity of input signals (K_IN), ii) complexity of the sought output behavior (K_OUT), and iii) complexity of the input-output mapping (K_MAP). Guided by them, we elaborate a tentative morphospace where we attempt to locate some examples reviewed in section II. We expect that reactive and predictive brains are dominated by high input and output complexity respectively. We also note that structures with high, yet balanced, K_IN and K_OUT (namely language and sensory-motor centers) have evolved separated regions devoted specifically to input processing and output generation. But evolutionary evidence from early mammals also suggests that motor control was originally handled by somatosensory centers [137–139], potentially indicating that the input-output segregation did not happen until motor control exceeded some complexity threshold (which might be captured by K_MAP reaching a critical value in our morphospace). While these conjectures are speculative, the morphospace can guide numerical simulations to clarify these points.

Both examples presented have been inspired by concepts from Statistical Physics and Information Theory. We think that Statistical Physics (with its phase transitions and diagrams, criticality, susceptibilities to external parameters, etc.) is a very apt language to study the optimality and abundance of neural structures. Perhaps it is an unavoidable language.
The task is big, but efforts are building up [5, 35–37, 46, 47, 54, 55, 59–61]. We hope to see important developments soon, hopefully alongside empirical records of neural circuitry falling within the resulting phase diagrams.
Acknowledgments
Seoane would like to thank Profs. Ricard Solé and Claudio Mirasso, Dr. Aina Ollé-Vila, and David Encinas for useful discussion. This work has been funded by the Spanish National Research Council (CSIC) and the Spanish Ministry for Science and Innovation (MICINN) through the Juan de la Cierva fellowship number IJC2018-036694-I, and by the Institute of Interdisciplinary Physics and Complex Systems (IFISC) of the University of the Balearic Islands (UIB) through the María de Maeztu Program for Units of Excellence in R&D (grant number MDM-2017-0711).
References [1] Kauffman, S.A.
The origins of order: Self-organization and selection in evolution . Oxford University Press: Oxford, UK,1993.[2] Smith E.; Morowitz H.J.
The origin and nature of life on earth: the emergence of the fourth geosphere . CambridgeUniversity Press: Cambridge, UK, 2016.[3] Eigen, M. Natural selection: a phase transition?.
Biophysical chemistry , (2-3), pp.101-123.[4] Kempes, C.P.; Wolpert, D.; Cohen, Z.; P´erez-Mercader, J. The thermodynamic efficiency of computations made in cellsacross the range of life. Philos. T. R. Soc. A , (2109), p.20160343.[5] Wolpert, D.; Kempes, C.; Stadler, P.F.; Grochow, J.A. The energetics of computing in life and machines . Santa FeInstitute Press: Santa Fe, NM, USA, 2019.[6] Drossel, B. Biological evolution and statistical physics.
Adv. Phys. , (2), pp.209-295.[7] Goldenfeld, N.; Woese, C. Life is physics: evolution as a collective phenomenon far from equilibrium. Annu. Rev. Condens.Ma. P. , (1), pp.375-399.[8] England, J.L. Statistical physics of self-replication. J. Chem. Phys. , (12), p.09B623 1.[9] Perunov, N.; Marsland, R.A.; England, J.L. Statistical physics of adaptation. Phys. Rev. X , (2), p.021036.[10] Wolpert, D.H. The free energy requirements of biological organisms; implications for evolution. Entropy , (4),p.138.[11] Fellermann, H.; Corominas-Murtra, B.; Hansen, P.L.; Ipsen, J.H.; Sol´e, R.; Rasmussen, S. Non-equilibrium thermody-namics of self-replicating protocells. arXiv arXiv:1503.04683.[12] Corominas-Murtra, B. Thermodynamics of duplication thresholds in synthetic protocell systems. Life , (1), p.9.[13] Corominas-Murtra, B.; Fellermann, H.; Sol´e, R. Protocell cycles as thermodynamic cycles. In The energetics of computingin life and machines ; Wolpert, D.H.; Kempes, C.; Grochow, J.A.; Stadler, P.F.; Eds.; Santa Fe Institute Press: Santa Fe,NM, USA, 2019.[14] Friston, K. Life as we know it.
J. Roy. Soc. Interface , (86), p.20130475. [15] Kempes, C.P.; Dutkiewicz, S.; Follows, M.J. Growth, metabolic partitioning, and the size of microorganisms. Proc. Nat.Acad. Sci. , (2), pp.495-500.[16] Kempes, C.P.; Wang, L.; Amend, J.P.; Doyle, J.; Hoehler, T. Evolutionary tradeoffs in cellular composition across diversebacteria. ISME J. , (9), pp.2145-2157.[17] Hopfield, J.J. Physics, computation, and why biology looks so different. J. Theor. Biol. , (1), pp.53-60.[18] Smith, J.M. The concept of information in biology. Philos. Sci. , (2), pp.177-194.[19] Joyce, G.F. Booting up life. Nature , (6913), pp.278-279.[20] Nurse, P. Life, logic and information. Nature , (7203), pp.424-426.[21] Krakauer, D.C. Darwinian demons, evolutionary complexity, and information maximization. Chaos , (3), p.037110.[22] Joyce, G.F. Bit by bit: the Darwinian basis of life. PLoS Biol. , (5), p.e1001323.[23] Mehta, P.; Schwab, D.J. Energetic costs of cellular computation. Proc. Nat. Acad. Sci. , (44), pp.17978-17982.[24] Walker, S.I.; Davies, P.C. The algorithmic origins of life. J. R. Soc. Interface , (79), p.20120869.[25] Lang, A.H.; Fisher, C.K.; Mora, T.; Mehta, P. Thermodynamics of statistical inference by cells. Phys. Rev. Lett. , (14), p.148103.[26] Hidalgo, J.; Grilli, J.; Suweis, S.; Munoz, M.A.; Banavar, J.R.; Maritan, A. Information-based fitness and the emergenceof criticality in living systems. Proc. Nat. Acad. Sci. , (28), pp.10095-10100.[27] Mehta, P.; Lang, A.H.; Schwab, D.J. Landauer in the age of synthetic biology: energy consumption and informationprocessing in biochemical networks. J. Stat. Phys. , (5), pp.1153-1166.[28] Tkaˇcik, G.; Bialek, W. Information processing in living systems. Annu. Rev. Conden. Ma. P. , , pp.89-117.[29] Seoane, L.F.; Sol´e, R.V. Information theory, predictability and the emergence of complex life. Roy. Soc. Open Sci. , (2), p.172221.[30] Seoane, L.F.; Sol´e, R. How Turing parasites expand the computational landscape of life. 
arXiv arXiv:1910.14339.[31] Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. , (4), p.620.[32] Jaynes, E.T. Information theory and statistical mechanics. II. Phys. Rev. , (2), p.171.[33] Landauer, R. Irreversibility and heat generation in the computing process. IBM J Res. Dev. , (3), pp.183-191.[34] Parrondo, J.M.; Horowitz, J.M.; Sagawa, T. Thermodynamics of information. Nat. Phys. , (2), pp.131-139.[35] Wolpert, D.H. The stochastic thermodynamics of computation. J. Phys. A , (19), p.193001.[36] Wolpert, D.; Kolchinsky, A. The thermodynamics of computing with circuits. New J. Phys. , (2020) 063047.[37] Sol´e, R.; Moses, M.; Forrest, S. Liquid brains, solid brains. Philos. T. R. Soc. B , (1774), p.20190040.[38] Mart´ınez-Corral, R.; Liu, J.; Prindle, A.; S¨uel, G.M.; Garc´ıa-Ojalvo, J. Metabolic basis of brain-like electrical signallingin bacterial communities. Philos. T. R. Soc. B , (1774), p.20180382.[39] Boussard, A.; Delescluse, J.; P´erez-Escudero, A.; Dussutour, A. Memory inception and preservation in slime moulds: thequest for a common mechanism. Philos. T. R. Soc. B , (1774), p.20180368.[40] Duran-Nebreda, S.; Bassel, G.W. Plant behaviour in response to the environment: information processing in the solidstate. Philos. T. R. Soc. B , (1774), p.20180370.[41] Oborny, B. The plant body as a network of semi-autonomous agents: a review. Philos. T. R. Soc. B , (1774),p.20180371.[42] Friston K, Kilner J, Harrison L. A free energy principle for the brain. J. Phys.-Paris , (1-3), pp.70-87.[43] Friston KJ, Stephan KE. Free-energy and the brain. Synthese , (3), pp.417-458.[44] Friston, K. The free-energy principle: a rough guide to the brain?. Trends Cogn. Sci. , (7), pp.293-301.[45] Friston, K. The free-energy principle: a unified brain theory?. Nat. Rev. Neurosci. , (2), pp.127-138.[46] Pi˜nero, J.; Sol´e, R. Statistical physics of liquid brains. Philos. T. R. Soc. B , (1774), p.20180376.[47] Vining, W.F.; Esponda, F.; Moses, M.E.; Forrest, S. 
How does mobility help distributed systems compute? Philos. T. R. Soc. B, (1774), p.20180375.
[48] Maass, W.; Natschläger, T.; Markram, H. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput., (11), pp.2531-2560.
[49] Maass, W.; Natschläger, T.; Markram, H. Fading memory and kernel properties of generic cortical microcircuit models. J. Physiol.-Paris, (4-6), pp.315-330.
[50] Burgsteiner, H. Training networks of biologically realistic spiking neurons for real-time robot control. In Proc. of the 9th Inter. Conf. on Engineering Applications of Neural Networks, pp.129-136.
[51] Legenstein, R.; Maass, W. Edge of chaos and prediction of computational performance for neural circuit models. Neural Networks, (3), pp.323-334.
[52] Maass, W.; Joshi, P.; Sontag, E.D. Computational aspects of feedback in neural circuits. PLoS Comput. Biol., (1), p.e165.
[53] Maass, W. Searching for principles of brain computation. Curr. Opin. Behav. Sci., pp.81-92.
[54] Seoane, L.F. Evolutionary aspects of reservoir computing. Philos. T. R. Soc. B, (1774), p.20180377.
[55] Ollé-Vila, A.; Seoane, L.F.; Solé, R. Ageing, computation and the evolution of neural regeneration processes. J. R. Soc. Interface, (168), p.20200181.
[56] Solé, R. Phase transitions. Santa Fe Institute Press: Santa Fe, NM, USA, 2011.
[57] Goldenfeld, N. Lectures on phase transitions and the renormalization group. CRC Press: Cleveland, OH, USA, 2018.
[58] Davidson, R.J.; Hugdahl, K., eds. Brain asymmetry. MIT Press: Cambridge, USA, 1995.
[59] Sompolinsky, H. Statistical mechanics of neural networks. Phys. Today, (21), pp.70-80.
[60] Clune, J.; Mouret, J.B.; Lipson, H. The evolutionary origins of modularity. P. Roy. Soc. B, (1755), p.20122863.
[61] Mengistu, H.; Huizinga, J.; Mouret, J.B.; Clune, J. The evolutionary origins of hierarchy. PLoS Comput. Biol., (6), p.e1004829.
[62] Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 27, pp.379-423.
[63] Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication. Univ. of Illinois Press: Champaign, IL, USA, 1949.
[64] Crutchfield, J.P. The calculi of emergence: computation, dynamics and induction. Physica D, (1-3), pp.11-54.
[65] Tishby, N.; Pereira, F.C.; Bialek, W. The information bottleneck method. arXiv physics/0004057.
[66] Shalizi, C.R.; Moore, C. What is a macrostate? Subjective observations and objective dynamics. arXiv cond-mat/0303625.
[67] Israeli, N.; Goldenfeld, N. Coarse-graining of cellular automata, emergence, and the predictability of complex systems. Phys. Rev. E, (2), p.026203.
[68] Still, S.; Crutchfield, J.P.; Ellison, C.J. Optimal causal inference: Estimating stored information and approximating causal architecture. Chaos, (3), p.037111.
[69] Wolpert, D.H.; Grochow, J.A.; Libby, E.; DeDeo, S. Optimal high-level descriptions of dynamical systems. arXiv:1409.7403.
[70] Marzen, S.E.; Crutchfield, J.P. Predictive rate-distortion for infinite-order Markov processes. J. Stat. Phys., (6), pp.1312-1338.
[71] Seoane, L.F.; Solé, R. Criticality in Pareto Optimal Grammars? Entropy, (2), p.165.
[72] Seoane, L.F. Evolutionary paths to lateralization of complex functions. In preparation.
[73] Harrington, A. Unfinished business: models of laterality in the nineteenth century. In Brain asymmetry; Davidson, R.J.; Hugdahl, K.; MIT Press: Cambridge, USA, 1995; pp.3-27.
[74] Swanson, L.W. What is the brain?
Trends Neurosci., (11), pp.519-527.
[75] Northcutt, R.G. Understanding vertebrate brain evolution. Integr. Comp. Biol., (4), pp.743-756.
[76] Holland, N.D. Early central nervous system evolution: an era of skin brains? Nat. Rev. Neurosci., (8), pp.617-627.
[77] Watanabe, H.; Fujisawa, T.; Holstein, T.W. Cnidarians and the evolutionary origin of the nervous system. Dev. Growth Differ., (3), pp.167-183.
[78] Penfield, W.; Boldrey, E. Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, (4), pp.389-443.
[79] Danelli, L.; Cossu, G.; Berlingeri, M.; Bottini, G.; Sberna, M.; Paulesu, E. Is a lone right hemisphere enough? Neurolinguistic architecture in a case with a very early left hemispherectomy. Neurocase, (3), pp.209-231.
[80] Kliemann, D.; Adolphs, R.; Tyszka, J.M.; Fischl, B.; Yeo, B.T.; Nair, R.; Dubois, J.; Paul, L.K. Intrinsic functional connectivity of the brain in adults with a single cerebral hemisphere. Cell Reports, (8), pp.2398-2407.
[81] White, R.J.; Schreiner, L.H.; Hughes, R.A.; MacCarty, C.S.; Grindlay, J.H. Physiologic consequences of total hemispherectomy in the monkey. Neurology, pp.149-159.
[82] Ivanova, A.; Zaidel, E.; Salamon, N.; Bookheimer, S.; Uddin, L.Q.; de Bode, S. Intrinsic functional organization of putative language networks in the brain following left cerebral hemispherectomy. Brain Struct. Funct., (8), pp.3795-3805.
[83] Hertz-Pannier, L.; Chiron, C.; Jambaqué, I.; Renaux-Kieffer, V.; Moortele, P.F.V.D.; Delalande, O.; Fohlen, M.; Brunelle, F.; Bihan, D.L. Late plasticity for language in a child’s non-dominant hemisphere: A pre- and post-surgery fMRI study. Brain, (2), pp.361-372.
[84] Liégeois, F.; Connelly, A.; Baldeweg, T.; Vargha-Khadem, F. Speaking with a single cerebral hemisphere: fMRI language organization after hemispherectomy in childhood. Brain Lang., (3), pp.195-203.
[85] Piattelli-Palmarini, M. Normal language in abnormal brains. Neurosci. Biobeh. R., pp.188-193.
[86] Smith, A.
Speech and other functions after left (dominant) hemispherectomy. J. Neurol. Neurosur. Ps., (5), p.467.
[87] Galaburda, A.M. Anatomic basis of cerebral dominance. In Brain asymmetry; Davidson, R.J.; Hugdahl, K.; MIT Press: Cambridge, USA, 1995; pp.51-73.
[88] Peters, M. Handedness and its relation to other indices of cerebral lateralization. In Brain asymmetry; Davidson, R.J.; Hugdahl, K.; MIT Press: Cambridge, USA, 1995; pp.183-214.
[89] Geschwind, N. Language and the brain. Sci. Am., (4), pp.76-83.
[90] Catani, M.; Jones, D.K.; Ffytche, D.H. Perisylvian language networks of the human brain. Ann. Neurol., (1), pp.8-16.
[91] Fedorenko, E.; Kanwisher, N. Neuroimaging of language: why hasn’t a clearer picture emerged? Lang. Linguist., (4), pp.839-865.
[92] Fedorenko, E.; Nieto-Castanon, A.; Kanwisher, N. Lexical and syntactic representations in the brain: an fMRI investigation with multi-voxel pattern analyses. Neuropsychologia, (4), pp.499-513.
[93] Fedorenko, E.; Thompson-Schill, S.L. Reworking the language network. Trends Cogn. Sci., (3), pp.120-126.
[94] Blank, I.; Balewski, Z.; Mahowald, K.; Fedorenko, E. Syntactic processing is distributed across the language system. Neuroimage, pp.307-323.
[95] Berwick, R.C.; Chomsky, N. Why only us: Language and evolution. MIT Press: Cambridge, USA, 2016.
[96] Brown, H.D.; Kosslyn, S.M. Hemispheric differences in visual object processing: Structural versus allocation theories. In Brain asymmetry; Davidson, R.J.; Hugdahl, K.; MIT Press: Cambridge, USA, 1995; pp.77-97.
[97] Hellige, J.B. Hemispheric asymmetry for components of visual information processing. In Brain asymmetry; Davidson, R.J.; Hugdahl, K.; MIT Press: Cambridge, USA, 1995; pp.99-121.
[98] Bishop, D.V. Cerebral asymmetry and language development: cause, correlate, or consequence? Science, (6138).
[99] Hiscock, M.; Kinsbourne, M. Phylogeny and ontogeny of cerebral lateralization. In Brain asymmetry; Davidson, R.J.; Hugdahl, K.; MIT Press: Cambridge, USA, 1995; pp.535-578.
[100] Seoane, L.F.; Solé, R. Simplest model of brain reorganization after hemispherectomy. In preparation.
[101] Perani, D.; Saccuman, M.C.; Scifo, P.; Anwander, A.; Spada, D.; Baldoli, C.; Poloniato, A.; Lohmann, G.; Friederici, A.D. Neural language networks at birth.
Proc. Nat. Acad. Sci., (38), pp.16056-16061.
[102] Barceló-Coblijn, L.; Serna Salazar, D.; Isaza, G.; Castillo Ossa, L.F.; Bedia, M.G. Netlang: A software for the linguistic analysis of corpora by means of complex networks. PLoS ONE, (8), p.e0181341.
[103] Llinás, R.R. I of the vortex: From neurons to self. MIT Press: Cambridge, USA, 1995.
[104] Yuste, R.; MacLean, J.N.; Smith, J.; Lansner, A. The cortex as a central pattern generator. Nat. Rev. Neurosci., (6), pp.477-483.
[105] Cruse, H.; Dürr, V.; Schmitz, J. Insect walking is based on a decentralized architecture revealing a simple and robust controller. Philos. T. R. Soc. A, (1850), pp.221-250.
[106] Schilling, M.; Cruse, H. Decentralized control of insect walking: A simple neural network explains a wide range of behavioral and neurophysiological results. PLoS Comput. Biol., (4), p.e1007804.
[107] Sherrington, C. The integrative action of the nervous system. Yale Univ. Press: New York, USA, 1948.
[108] Ijspeert, A.J. Central pattern generators for locomotion control in animals and robots: a review. Neural Networks, (4), pp.642-653.
[109] Marder, E.; Calabrese, R.L. Principles of rhythmic motor pattern generation. Physiol. Rev., (3), pp.687-717.
[110] Herculano-Houzel, S.; Kaas, J.H.; de Oliveira-Souza, R. Corticalization of motor control in humans is a consequence of brain scaling in primate evolution. J. Comp. Neurol., (3), pp.448-455.
[111] O’Keefe, J.; Dostrovsky, J. The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely-moving rat. Brain Res., (1), pp.171-175.
[112] O’Keefe, J.; Nadel, L. The hippocampus as a cognitive map. Clarendon Press: Oxford, UK, 1978.
[113] Hafting, T.; Fyhn, M.; Molden, S.; Moser, M.B.; Moser, E.I. Microstructure of a spatial map in the entorhinal cortex. Nature, (7052), pp.801-806.
[114] Bush, D.; Barry, C.; Manson, D.; Burgess, N. Using grid cells for navigation. Neuron, (3), pp.507-520.
[115] Stemmler, M.; Mathis, A.; Herz, A.V. Connecting multiple spatial scales to decode the population activity of grid cells. Sci. Adv., (11), p.e1500816.
[116] McNaughton, B.L.; Battaglia, F.P.; Jensen, O.; Moser, E.I.; Moser, M.B. Path integration and the neural basis of the ‘cognitive map’. Nat. Rev. Neurosci., (8), pp.663-678.
[117] Fyhn, M.; Hafting, T.; Treves, A.; Moser, M.B.; Moser, E.I. Hippocampal remapping and grid realignment in entorhinal cortex. Nature, (7132), pp.190-194.
[118] Moser, E.I.; Kropff, E.; Moser, M.B. Place cells, grid cells, and the brain’s spatial representation system. Annu. Rev. Neurosci., pp.69-89.
[119] Bush, D.; Barry, C.; Burgess, N. What do grid cells contribute to place cell firing? Trends Neurosci., (3), pp.136-145.
[120] Foster, D.J.; Wilson, M.A. Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature, (7084), pp.680-683.
[121] Colgin, L.L.; Moser, E.I.; Moser, M.B. Understanding memory through hippocampal remapping. Trends Neurosci., (9), pp.469-477.
[122] Aronov, D.; Nevers, R.; Tank, D.W. Mapping of a non-spatial dimension by the hippocampal/entorhinal circuit. Nature, (7647), pp.719-722.
[123] Rueckert, E.; Kappel, D.; Tanneberg, D.; Pecevski, D.; Peters, J. Recurrent spiking networks solve planning tasks. Sci. Rep., (1), pp.1-10.
[124] Maass, W. Noise as a resource for computation and learning in networks of spiking neurons. Proc. IEEE, (5), pp.860-880.
[125] Maass, W. To spike or not to spike: that is the question. Proc. IEEE, (12), pp.2219-2224.
[126] Jeffery, K.J. Integration of the sensory inputs to place cells: what, where, why, and how? Hippocampus, (9), pp.775-785.
[127] Lew, A.R. Looking beyond the boundaries: time to put landmarks back on the cognitive map? Psychol. Bull., (3), p.484.
[128] Muller, R.U.; Kubie, J.L.
The effects of changes in the environment on the spatial firing of hippocampal complex-spike cells. J. Neurosci., (7), pp.1951-1968.
[129] Deadwyler, S.A.; Breese, C.R.; Hampson, R.E. Control of place-cell activity in an open field. Psychobiology, (3), pp.221-227.
[130] O’Keefe, J. Place units in the hippocampus of the freely moving rat. Exp. Neurol., (1), pp.78-109.
[131] Stachenfeld, K.L.; Botvinick, M.M.; Gershman, S.J. The hippocampus as a predictive map. Nat. Neurosci., (11), p.1643.
[132] Behrens, T.E.; Muller, T.H.; Whittington, J.C.; Mark, S.; Baram, A.B.; Stachenfeld, K.L.; Kurth-Nelson, Z. What is a cognitive map? Organizing knowledge for flexible behavior. Neuron, (2), pp.490-509.
[133] Ferbinteanu, J.; Shapiro, M.L. Prospective and retrospective memory coding in the hippocampus. Neuron, (6), pp.1227-1239.
[134] Johnson, A.; Redish, A.D. Neural ensembles in CA3 transiently encode paths forward of the animal at a decision point. J. Neurosci., (45), pp.12176-12189.
[135] Sarel, A.; Finkelstein, A.; Las, L.; Ulanovsky, N. Vectorial representation of spatial goals in the hippocampus of bats. Science, (6321), pp.176-180.
[136] Gauthier, J.L.; Tank, D.W. A dedicated population for reward coding in the hippocampus. Neuron, (1), pp.179-193.
[137] Kaas, J.H. Evolution of somatosensory and motor cortex in primates. Anat. Rec. Part A, (1), pp.1148-1156.
[138] Kaas, J.H. The evolution of the complex sensory and motor systems of the human brain. Brain Res. Bull., (2-4), pp.384-390.
[139] Beck, P.D.; Pospichal, M.W.; Kaas, J.H. Topography, architecture, and connections of somatosensory cortex in opossums: evidence for five somatosensory areas. J. Comp. Neurol., (1), pp.109-133.
[140] Walsh, T.M.; Ebner, F.F. Distribution of cerebellar and somatic lemniscal projections in the ventral nuclear complex of the Virginia opossum. J. Comp. Neurol., (4), pp.427-445.
[141] Krubitzer, L.; Künzle, H.; Kaas, J. Organization of sensory cortex in a Madagascan insectivore, the tenrec (Echinops telfairi). J. Comp. Neurol., (3), pp.399-414.
[142] Wu, C.W.H.; Kaas, J.H. Somatosensory cortex of prosimian Galagos: physiological recording, cytoarchitecture, and corticocortical connections of anterior parietal cortex and cortex of the lateral sulcus. J. Comp. Neurol., (3), pp.263-292.
[143] Fang, P.C.; Stepniewska, I.; Kaas, J.H. Ipsilateral cortical connections of motor, premotor, frontal eye, and posterior parietal fields in a prosimian primate, Otolemur garnetti. J. Comp. Neurol., (3), pp.305-333.
[144] Conant, R.C.; Ross Ashby, W. Every good regulator of a system must be a model of that system. Int. J. Syst. Sci., (2), pp.89-97.
[145] Kaas, J.H. What, if anything, is SI? Organization of first somatosensory area of cortex. Physiol. Rev., (1), pp.206-231.
[146] Clark, A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci., (3), pp.181-204.
[147] Yuille, A.; Kersten, D. Vision as Bayesian inference: analysis by synthesis? Trends Cogn. Sci., (7), pp.301-308.
[148] Rao, R.P.; Ballard, D.H. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neurosci., (1), pp.79-87.
[149] Lee, T.S.; Mumford, D. Hierarchical Bayesian inference in the visual cortex. J. Opt. Soc. Am. A, (7), pp.1434-1448.
[150] Friston, K. A theory of cortical responses. Philos. T. R. Soc. B, (1456), pp.815-836.
[151] Brown, H.; Friston, K.J.; Bestmann, S. Active inference, attention, and motor preparation. Front. Psychol., p.218.
[152] Jehee, J.F.; Ballard, D.H. Predictive feedback can account for biphasic responses in the lateral geniculate nucleus. PLoS Comput. Biol., (5), p.e1000373.
[153] Huang, Y.; Rao, R.P. Predictive coding. WIREs Cogn. Sci., (5), pp.580-593.
[154] Hohwy, J. Functional integration and the mind. Synthese, (3), pp.315-328.
[155] Dayan, P.; Hinton, G.E.; Neal, R.M.; Zemel, R.S. The Helmholtz machine. Neural Comput., (5), pp.889-904.
[156] Dayan, P.; Hinton, G.E.
Varieties of Helmholtz machine. Neural Networks, (8), pp.1385-1403.
[157] Hinton, G.E. Learning multiple layers of representation. Trends Cogn. Sci., (10), pp.428-434.
[158] Fukushima, K.; Miyake, S. Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition. In Competition and cooperation in neural nets; Springer: Berlin, Germany; pp.267-285.
[159] Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; Neural Information Processing Systems Foundation, Inc.: Lake Tahoe, NV; pp.1097-1105.
[160] Voss, H.U. Anticipating chaotic synchronization. Phys. Rev. E, (5), p.5115.
[161] Voss, H.U. Dynamic long-term anticipation of chaotic states. Phys. Rev. Lett., (1), p.014102.
[162] Matias, F.S.; Carelli, P.V.; Mirasso, C.R.; Copelli, M. Anticipated synchronization in a biologically plausible model of neuronal motifs. Phys. Rev. E, (2), p.021922.
[163] Ciszak, M.; Mayol, C.; Mirasso, C.R.; Toral, R. Anticipated synchronization in coupled complex Ginzburg-Landau systems. Phys. Rev. E, (3), p.032911.
[164] Markram, H. The blue brain project. Nat. Rev. Neurosci., (2), pp.153-160.
[165] Hawkins, J.; Blakeslee, S. On intelligence: How a new understanding of the brain will lead to the creation of truly intelligent machines. Macmillan: New York, NY, USA, 2007.
[166] Meyer, H.S.; Egger, R.; Guest, J.M.; Foerster, R.; Reissl, S.; Oberlaender, M. Cellular organization of cortical barrel columns is whisker-specific. Proc. Nat. Acad. Sci., (47), pp.19113-19118.
[167] Oberlaender, M.; de Kock, C.P.; Bruno, R.M.; Ramirez, A.; Meyer, H.S.; Dercksen, V.J.; Helmstaedter, M.; Sakmann, B. Cell type-specific three-dimensional structure of thalamocortical circuits in a column of rat vibrissal cortex. Cereb. Cortex, (10), pp.2375-2391.
[168] Haeusler, S.; Maass, W. A statistical analysis of information-processing properties of lamina-specific cortical microcircuit models. Cereb. Cortex, (1), pp.149-162.
[169] Nikolić, D.; Häusler, S.; Singer, W.; Maass, W. Distributed fading memory for stimulus properties in the primary visual cortex. PLoS Biol., p.e1000260.
[170] Bernacchia, A.; Seo, H.; Lee, D.; Wang, X.J. A reservoir of time constants for memory traces in cortical neurons. Nat. Neurosci., p.366.
[171] Chen, J.L.; Carta, S.; Soldado-Magraner, J.; Schneider, B.L.; Helmchen, F. Behaviour-dependent recruitment of long-range projection neurons in somatosensory cortex. Nature, (7458), pp.336-340.
[172] Diamond, M.E.; Von Heimendahl, M.; Knutsen, P.M.; Kleinfeld, D.; Ahissar, E. ‘Where’ and ‘what’ in the whisker sensorimotor system. Nat. Rev. Neurosci., p.601.
[173] Von Neumann, J. Probabilistic logics and the synthesis of reliable organisms from unreliable components. Automata Studies, pp.43-98.
[174] Winograd, S.; Cowan, J.D. Reliable computation in the presence of noise. MIT Press: Cambridge, USA, 1963.
[175] Attwell, D.; Laughlin, S.B. An energy budget for signaling in the grey matter of the brain. J. Cerebr. Blood F. Met., (10), pp.1133-1145.
[176] Levy, J. The mammalian brain and the adaptive advantage of cerebral asymmetry. Ann. N.Y. Acad. Sci., (1), pp.264-272.
[177] Ghirlanda, S.; Vallortigara, G. The evolution of brain lateralization: a game-theoretical analysis of population structure. Proc. Roy. Soc. Lond. B Bio., (1541), pp.853-857.
[178] Gazzaniga, M.S. Forty-five years of split-brain research and still going strong. Nat. Rev. Neurosci., (8), pp.653-659.
[179] Rogers, L.J. Evolution of hemispheric specialization: advantages and disadvantages. Brain Lang., (2), pp.236-253.
[180] Güntürkün, O.; Diekamp, B.; Manns, M.; Nottelmann, F.; Prior, H.; Schwarz, A.; Skiba, M. Asymmetry pays: visual lateralization improves discrimination success in pigeons. Curr. Biol., (17), pp.1079-1081.
[181] Coello, C.C. Evolutionary multi-objective optimization: a historical view of the field. IEEE Comput. Intell. M., (1), pp.28-36.
[182] Schuster, P. Optimization of multiple criteria: Pareto efficiency and fast heuristics should be more popular than they are. Complexity, (2), pp.5-7.
[183] Seoane, L.F. Multiobjective optimization in models of synthetic and natural living systems. PhD thesis, Universitat Pompeu Fabra: Barcelona, Spain, 2016.
[184] Seoane, L.F.; Solé, R.V. A multiobjective optimization approach to statistical mechanics. arXiv:1310.6372.
[185] Seoane, L.F.; Solé, R. Systems poised to criticality through Pareto selective forces. arXiv:1510.08697.
[186] Seoane, L.F.; Solé, R. Multiobjective optimization and phase transitions. In Proceedings of ECCS 2014; Springer: Cham, 2016; pp.259-270.
[187] Seoane, L.F.; Solé, R. Phase transitions in Pareto optimal complex networks.
Phys. Rev. E, (3), p.032807.
[188] Chakraborty, M.; Jarvis, E.D. Brain evolution by brain pathway duplication. Philos. T. R. Soc. B, (1684), p.20150056.
[189] Hurley, I.; Hale, M.E.; Prince, V.E. Duplication events and the evolution of segmental identity. Evol. Develop., (6), pp.556-567.
[190] Oakley, T.H.; Rivera, A.S. Genomics and the evolutionary origins of nervous system complexity. Curr. Opin. Genet. Dev., (6), pp.479-492.
[191] Caporale, N.; Dan, Y. Spike timing-dependent plasticity: a Hebbian learning rule. Annu. Rev. Neurosci., pp.25-46.
[192] Raup, D.M. Geometric analysis of shell coiling: general problems. J. Paleontol., pp.1178-1190.
[193] Niklas, K.J. The evolutionary biology of plants. University of Chicago Press: Chicago, USA, 1997.
[194] McGhee, G.R. Theoretical morphology: the concept and its applications. Columbia University Press: New York, USA, 1999.
[195] Niklas, K.J. Computer models of early land plant evolution. Annu. Rev. Earth Planet. Sci., pp.47-66.
[196] Corominas-Murtra, B.; Goñi, J.; Solé, R.V.; Rodríguez-Caso, C. On the origins of hierarchy in complex networks. Proc. Nat. Acad. Sci., (33), pp.13316-13321.
[197] Goñi, J.; Avena-Koenigsberger, A.; Velez de Mendizabal, N.; van den Heuvel, M.P.; Betzel, R.F.; Sporns, O. Exploring the morphospace of communication efficiency in complex networks. PLoS ONE, (3), p.e58070.
[198] Avena-Koenigsberger, A.; Goñi, J.; Solé, R.; Sporns, O. Network morphospace. J. R. Soc. Interface, (103), p.20140881.
[199] Seoane, L.F.; Solé, R. The morphospace of language networks. Sci. Rep., (1), pp.1-14.
[200] Bickerton, D. Language and species. University of Chicago Press: Chicago, USA, 1992.
[201] Ferrer i Cancho, R.; Solé, R.V. Least effort and the origins of scaling in human language. Proc. Nat. Acad. Sci., (3), pp.788-791.
[202] Solé, R.V.; Seoane, L.F. Ambiguity in language networks. Linguist. Rev., (1), pp.5-35.
[203] Marzen, S.; DeDeo, S. Weak universality in sensory tradeoffs. Phys. Rev. E, (6), p.060101.
[204] Marzen, S.E.; DeDeo, S. The evolution of lossy compression. J. R. Soc. Interface, 14