On Artificial Life and Emergent Computation in Physical Substrates
Kristine Heiney
Department of Computer Science, Oslo Metropolitan University, Oslo, Norway
[email protected]

Gunnar Tufte
Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
[email protected]

Stefano Nichele
Department of Computer Science, Oslo Metropolitan University; Department of Holistic Systems, Simula Metropolitan, Oslo, Norway
[email protected]
Abstract—In living systems, we often see the emergence of the ingredients necessary for computation—the capacity for information transmission, storage, and modification—begging the question of how we may exploit or imitate such biological systems in unconventional computing applications. What can we gain from artificial life in the advancement of computing technology? Artificial life provides us with powerful tools for understanding the dynamic behavior of biological systems and capturing this behavior in manmade substrates. With this approach, we can move towards a new computing paradigm concerned with harnessing emergent computation in physical substrates not governed by the constraints of Moore's law and ultimately realize massively parallel and distributed computing technology. In this paper, we argue that the lens of artificial life offers valuable perspectives for the advancement of high-performance computing technology. We first present a brief foundational background on artificial life and some relevant tools that may be applicable to unconventional computing. Two specific substrates are then discussed in detail: biological neurons and ensembles of nanomagnets. These substrates are the focus of the authors' ongoing work, and they are illustrative of the two sides of the approach outlined here—the close study of living systems and the construction of artificial systems to produce life-like behaviors. We conclude with a philosophical discussion on what we can learn from approaching computation with the curiosity inherent to the study of artificial life. The main contribution of this paper is to present the great potential of using artificial life methodologies to uncover and harness the inherent computational power of physical substrates toward applications in unconventional high-performance computing.
Index Terms—bio-inspired computation, biological neural networks, nanomagnetic ensembles, artificial life, philosophy of computation
This work was conducted as part of the SOCRATES project, which is partially funded by the Norwegian Research Council (NFR) through their IKTPLUSS research and innovation action on information and communication technologies under project agreement 270961.

I. INTRODUCTION
The field of artificial life concerns itself with how to produce complex macroscopic behaviors from the interaction of many simple components. Where biology works to understand existing organisms and the complicated machineries that underlie observed physiological behaviors using an approach of deconstruction and element-by-element description, artificial life seeks to construct systems displaying
interesting emergent behaviors by aggregating many simple objects governed by basic rules [1]. Here, "emergent" refers to some feature of the entire system that cannot be described by the constituent parts of the system. For example, the physical concept of pressure has no meaning when considering only one or a few individual gas molecules; it is only when a large volume of gas is considered that this characteristic emerges as a meaningful descriptor of the system. Similarly, the movement and behavior of a single ant is qualitatively distinct from that of an entire colony. In many ways, these emergent behaviors may be seen as a form of computation, with the systems or organisms providing the machinery by which computations are performed.

The tools of artificial life allow us to capture these emergent behaviors without explicitly encoding them into the system. Rather, by creating simple sets of rules to describe the behavior of individual agents within the system as they move, connect, and interact, complex behaviors emerge of their own accord. This approach to modeling and engineering dynamical systems can offer new perspectives on how to perform computation in substrates showing emergent properties, positioning us to answer the question posed by Langton [2] for targeted physical systems: under what conditions might the capacity to perform computation emerge in a physical system? Ongoing work being conducted by the authors involves the study of two physical substrates—networks of biological neurons [3, 4] and nanomagnetic ensembles [5]—along with models of these systems at different levels of abstraction [6], and we argue that this inquisitive and bottom-up approach to understanding, mimicking, and constructing dynamical systems will prove fruitful in the advancement of bio-inspired parallel and distributed computing technology.

Sipper [7] highlights the three cornerstones of this computational paradigm, which he terms "cellular computing": simplicity, vast parallelism, and locality. In this paradigm, computation is performed with a vast number of very simple fundamental units whose connections are sparse and most often in the immediate vicinity. Thanks to this local connectivity, these machines perform without any centralized control, and their function is resilient against faults in the system. Currently, much exploration into cellular computing is confined to simulation, though the ultimate aim is to construct actual machines, be they biological [4, 8] or manmade [5], that can realize this behavior. In comparison with classic von Neumann computing technologies, machines based on cellular computation principles will be more scalable, energy efficient, and resilient to failures of single elements [7, 9].

Fig. 1. Schematic of Grey Walter's electronic tortoises, Elsie and Elmer. Reproduced from Walter [10].

This paper first explores basic concepts and motivations in artificial life and cellular computing. A selection of approaches to capturing in models components of biological processes we see in nature is then presented.
Finally, the two abovementioned physical substrates—neuronal networks and nanomagnetic arrays—are explored in greater detail, and a philosophical discussion on how the lens of artificial life may inform our investigation of these substrates is presented.

II. AN ARTIFICIAL LIFE PERSPECTIVE ON COMPLEXITY
Human curiosity has long driven us to uncover how organisms function—what drives their behaviors on a macroscopic scale and what microscopic processes control physiological function. This has in turn driven many to explore means to recreate lifelike behaviors using mechanical components—these are the machineries of artificial life.

Early efforts in the field of artificial life focused on imitating behaviors observed in natural systems. For example, in 1950, William Grey Walter built a pair of artificial electronic "tortoises" named Elsie and Elmer that moved toward dim light but away from bright light [Fig. 1; 10]. When he attached lights to the tortoises themselves, their interaction resulted in complex and interesting behavior, which he described as giving "an eerie expression of purposefulness, independence and spontaneity" [10]. Grey Walter took great delight in learning from these tortoises, named for the Mock Turtle's teacher in
Alice's Adventures in Wonderland, who, aptly, was also not exactly a tortoise: when Alice asked about the choice of name, the Mock Turtle replied, "We called him Tortoise because he taught us." Although Grey Walter also predicted that our captivation with the "marvelous processes" of life may ebb as our aptitude to imitate it grows, that prediction has not been borne out. Rather, it seems that the better able we are to capture complex dynamics in artificial systems, the greater our appreciation for the natural systems we strive to understand and emulate.

But what makes a system "artificial"? It is not simply the mechanics or the fact that it is inorganic, nor is it the behavior the system shows, as this behavior is meant to be as close as possible to that of the natural systems that inspire its construction. Herbert Simon gave an elegant definition of the artificial in his book The Sciences of the Artificial [11]:

"Artificiality connotes perceptual similarity but essential difference, resemblance from without rather than within. ... [W]e may say that the artificial object imitates the real by turning the same face to the outer system, by adapting, relative to the same goals, to comparable ranges of external tasks."

Simon's discussion in this section centers on the distinction between the task fulfilled by a designed system and the capability of the system itself, and what can be accomplished by artificial and simulated systems. We may create artificial systems that are perceptually similar to natural systems, systems that produce precisely the behaviors we wish to see on the scale at which we wish to see them, despite being inexorably different from within; this, indeed, is the situation we strive for in the study of artificial life.

What then do we do with such systems once we have captured their behavior? In some cases, the developed systems represent a means to understanding the system they mimic; in others, the system is the final product and gives us some capability that would be otherwise unattainable in conventional man-made systems. Models and simulations may provide insight into systems that would be unattainable by mere inspection of the assumptions and laws employed to govern the simulated system. This is useful when the precise mechanisms governing the behavior of a system are known at a certain scale but the system dynamics become more difficult to describe when the system is scaled up by the addition of more such simple components. Along the same lines, this complex larger-scale behavior can be useful in decoding the response of the system to different types of inputs, enabling the use of a dynamical system as a computational system.

However, the pursuits of researchers interested in artificial life have often been motivated not by some end goal but simply by curiosity. Much of the burgeoning research that has recently been conducted in the realm of artificial intelligence has been preoccupied with building better classifiers, better computer vision, better autonomous systems—summarily, the optimization of tools used as a means for computational tasks. Artificial life research is not necessarily oriented to these aims but serves to explore how we can harness the dynamics of behaviors of interest we observe in nature. As Langton put it in his paper on artificial life, artificial intelligence "has focused primarily on the production of intelligent solutions rather than on the production of intelligent behavior. There is a world of difference between these two possible foci" [1].
Thus, in line with this curious, process-driven approach, this paper does not consider in great detail what we hope ultimately to do with the behaviors we observe as a motivation for studying artificial life but instead engages with the behaviors themselves as a point of interest, with the assumption that applications and advancements will fall out as a natural product of the knowledge we gain with the approaches described here.
III. ARTIFICIAL LIFE MODELS AND TOOLS FOR UNCONVENTIONAL COMPUTING
When considering the use of complex systems for computation, a system should be capable of three basic operations [2]: the transmission, storage, and modification of information. To find systems that support these behaviors, we must be able to construct models of complex systems, develop metrics to quantify their performance, and search the space of all possible models to target the desired behaviors.

This section presents a general approach to capturing dynamic systems in models and characterizing their behavior. The concepts presented here represent some of the tools that are commonly used to explore the smaller-scale mechanisms underlying larger-scale complex dynamics. Dynamic systems are often modeled using an approach in which the system is considered to comprise a number of discrete components that interact according to a set of predefined rules. Each of these components can be in a given state and can influence or be influenced by the states of a number of other components to which it is connected. Some examples of such systems—graphs and cellular automata (CAs)—will be presented in the following subsections.

The manner in which these dynamic models behave is governed by a set of parameters and rules defining the connections in the system, the states each component can take on, and how the state of each component influences the states of the components to which it is connected. If we wish to find the set of parameters and rules that produces a desired behavior in the system, we have a vast space in which to search, and that space is often largely occupied by many "boring" or undesirable models that fail to produce the behavior we seek. Thus, algorithms have been developed based on the biological principle of evolution to allow for a more targeted search of the space of possibilities in model design. This approach will be discussed in more detail at the end of this section.
A. Complex system models
Complex systems are generally modeled using ensembles of discrete elements that interact with each other according to a given set of rules. These elements can take on discrete or continuous states and can be connected to each other in a regular, irregular, or random fashion, or they can be agents allowed to move freely in space. This section presents two illustrative examples of different types of models used to represent the dynamical behavior of complex systems: graphs and CAs.

These two types of models were selected for their relevance to the question of computing in physical substrates. Graphs represent the connections between elements in a system and how those connections mediate the dynamics of the system; this type of model is very relevant in the field of neuroscience [12] and can capture the intricate arrangements of individual cells in a network or connections among brain regions. CAs, on the other hand, are one of the simplest representations of a dynamical system, and they offer insight into the vast range of dynamical behavior that can be achieved by tuning
the parameters of the system, even in the case of a very simple elementary CA.

Fig. 2. Examples of graph theoretical measures commonly used in the study of complex systems. Reproduced from Sporns et al. [13].
1) Graphs:
A graph is composed of nodes, representing the discrete components of the system, and edges, representing the connections between the nodes. The state of each node affects the states of all of its neighbors in a manner defined by a set of rules or equations, much like the flight of a bird is affected by its fellow flock members. Graphs may also be constructed to have static or dynamic structures, with connections remaining fixed or evolving over time. The variation in the states of the nodes is referred to as dynamics on the network, whereas the change in the connections between the nodes is referred to as dynamics of the network. Systems that show both types of dynamics are referred to as dynamic systems with dynamic structures ((DS)²) or adaptive networks. The brain is a prime example of an adaptive network, with numerous plasticity mechanisms constantly dynamically tweaking the weights of neural connections.

A number of useful measures can be extracted from graph maps of a given structure to give some insight into the dynamics on the network. Examples highlighted by Sporns [13] as applicable in the study of brain connectivity are shown in Fig. 2, and further details on graph theoretical measures can be found in Sporns et al. [12].

Two network structures that represent opposite extremes are regular and random networks. Regular networks consist of nodes with the same number of neighbors and range from strongly regular, where every two adjacent nodes have the same number of neighbors in common and every two non-adjacent nodes likewise share a fixed number of neighbors in common, to randomly regular, where every node has the same degree but the connections are randomly distributed. At the other extreme, random networks have a binomial degree distribution (or Poisson in the limit of a large number of nodes), meaning the nodes can be well described by the average degree.

Many real systems tend to show neither regular nor random connectivity but lie somewhere in between these two extremes. Two types of commonly discussed models representative of real-world behaviors are scale-free and small-world networks. Scale-free networks have degree distributions that follow a power law: P(k) ∝ k^−γ, where P(k) is the fraction of nodes with degree k and γ is a constant. This means that many nodes have a low degree and few nodes have a high degree. Although many real networks have been reported to show scale-free connectivity, it is notoriously difficult to rigorously confirm power-law scaling from empirical data of finite systems [14].

Scale-free networks are characterized by the presence of hubs and modularity. They tend to show local clustering and long-range integration and are robust against random removal or failure of nodes, as the vast majority of the nodes have few connections. However, if the hubs in the network are targeted, the system breaks down [see, e.g., 15]. In comparison to a random network of the same size and average degree, the mean path length of a scale-free network is smaller whereas the clustering coefficient is much larger, demonstrating that the modularity of the structure lends itself to efficient network communication.

Graph-based simulations can give insight into the dynamics of a number of different types of systems and how the interactions of their components can produce emergent macroscale behavior. Crucially, information flow through a system of interconnected elements can be readily represented with this modeling approach.
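As a rough illustration of these contrasting measures, the following minimal sketch (assuming the networkx library; the network size, attachment parameter, and seeds are arbitrary choices of ours) builds a scale-free network by preferential attachment and a random network with a matching average degree, then compares their clustering coefficients and mean path lengths.

```python
import networkx as nx

N, m = 1000, 3  # illustrative network size and attachment parameter

# Scale-free network via preferential attachment (Barabasi-Albert)
sf = nx.barabasi_albert_graph(N, m, seed=1)
# Random (Erdos-Renyi) network with a matching average degree
avg_deg = 2 * sf.number_of_edges() / N
er = nx.gnp_random_graph(N, avg_deg / (N - 1), seed=1)

for name, g in [("scale-free", sf), ("random", er)]:
    # Use the largest connected component so path lengths are defined
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(name,
          "clustering:", round(nx.average_clustering(g), 4),
          "mean path length:", round(nx.average_shortest_path_length(giant), 2))
```

At this size, the scale-free graph typically shows a clearly higher clustering coefficient at a comparable or shorter mean path length, consistent with the comparison described above.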
2) Cellular automata:
A CA is classically defined as a regular n-dimensional lattice structure (or regularly connected graph) composed of discrete elements called cells that can take on discrete states. The state of each cell in the network progresses in discrete time steps according to a lookup table of rules that gives the state at time step t + 1 based on the states of the cells in the neighborhood at time step t. Although CAs are actually a type of graph, the extra simplifying constraints placed on them make them a useful case to consider in the realm of computation.

The binary (K = 2) one-dimensional CA with a neighborhood of size N = 3 is one of the simplest possible dynamical system models to show complex behavior, and the range of behaviors that can be achieved by this model has been investigated in great depth. Such a model is an excellent exemplar of a complex system, as it shows a wide range of dynamic behavior that can serve as an analogue for the behavior of more complicated systems. In his seminal paper on the dynamical behavior of simple CAs, Langton [2] explored the different behaviors that are attainable with this type of system and focused on how physical systems may show an emergent capacity for computation. This section will briefly explain the aims and achievements of his study.

As Langton [2] stated, the focus of his paper was to determine "the conditions under which [the] capacity to support computation itself might emerge in physical systems" by considering CAs as an exemplary simple model system. To this end, he qualitatively and quantitatively characterized a number of one-dimensional CAs with different rules and developed a new quantitative measure, the λ parameter, that can be used to identify the qualitative class of behavior of a CA. This parameter is defined such that the cases where λ = 0.0 and λ = 1.0 correspond to the most homogeneous and heterogeneous rulesets, respectively.

Wolfram [16] had earlier defined four classes of CA behavior, with classes I and II corresponding to fixed and periodic behavior, respectively, and class III showing aperiodic patterns with no identifiable structure, analogous to chaotic behavior. Finally, class IV CAs yield "complex patterns of localized structures," effectively having "very long transients" [2]. These types of CAs show rich and interesting patterns of behavior, with complex fractal-like structures emerging and propagating over space and time. Langton's survey of the possible rulesets for systems with K = 4 and N = 5 revealed an interesting correspondence between the qualitative behavior observed and the λ parameter. Some examples of the observed behavior and the relationship between λ and the class of behavior are shown in Fig. 3.

Fig. 3. Examples of CAs from the four different classes with K = 4 states and N = 5 neighbors. Images reproduced from [2].
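To make these definitions concrete, here is a minimal sketch (a toy example of ours, with binary states and state 0 taken as the quiescent state) that draws a random rule table for an elementary (K = 2, N = 3) CA, computes λ as the fraction of transitions that do not map to the quiescent state, and prints a short space-time diagram.

```python
import random

K, N = 2, 3            # binary states, neighborhood of three cells
random.seed(0)

# Random rule table: one output state per neighborhood configuration
rule = {cfg: random.randrange(K) for cfg in range(K ** N)}

# Langton's lambda: fraction of transitions not mapping to the quiescent state (0)
lam = sum(1 for out in rule.values() if out != 0) / len(rule)
print("lambda =", lam)

# Run the CA with periodic boundary conditions
width, steps = 64, 32
state = [random.randrange(K) for _ in range(width)]
for _ in range(steps):
    state = [rule[(state[i - 1] << 2) | (state[i] << 1) | state[(i + 1) % width]]
             for i in range(width)]
    print("".join("#" if s else "." for s in state))
```

Biasing the fraction of non-quiescent entries in the table and rerunning the loop gives a crude feel for how the qualitative behavior changes with λ.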
Shifting his focus to a larger two-dimensional CA, Langton [2] also explored the relationships among λ, the average single-cell entropy H, and the mutual information (MI) between a cell and itself at the next time step. The relationships between these parameters revealed the presence of a sharp phase transition, corresponding to the transition between classes II and III. Langton's results reveal high clustering of many CA rulesets in two distinct regions corresponding to these classes, with classes I and II occupying a region of low H and low MI and class III a region of high H and low MI. However, in the wide gap between these two regions lie a few sparse points representing the class IV CAs in the transitional regime poised delicately between order and chaos; here, at a point of intermediate entropy, the MI is maximized in a sharp peak between the low-MI regions on either side.

Langton's delving into the dynamics at the "edge of chaos" is a remarkably—and somewhat overwhelmingly—thorough exploration of how dynamics at the phase transitional regime may give insight into the nature of computation in the physical world. But what is most crucial to take away here? First, there is the more conceptual and more challenging lesson: computation can be accomplished by striking a precarious balance between information storage, which requires a lowering of the entropy, and information transmission, which requires raising the entropy. This lesson is applicable to any dynamical system, not only CAs. At the transitional regime between periodic and chaotic dynamics, we have the behavior needed for long-term and long-range correlations, allowing information to propagate arbitrarily far and remain for arbitrarily long periods of time. Second, Langton's parameter λ gives us a way to survey the vast space of all possibilities to home in on the systems that can show the behaviors we wish to see in computational systems. Considering how rapidly the number of possible systems can expand as a modeling framework is adjusted to represent ever more nuanced features of actual physical systems, this surveying ability is highly valuable. The following section will also address a more general approach to surveying the space of possible systems to find desired behavior.
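Before moving on, here is a minimal sketch (our own simplified estimators, using plug-in frequency estimates) of the two quantities related above: the average single-cell entropy H and the mutual information between a cell's state at time t and its state at t + 1, computed from the space-time history of a CA run such as the one produced in the previous sketch.

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of a sequence of discrete symbols."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in counts.values())

def single_cell_stats(history):
    """Average per-cell entropy H and average mutual information
    MI(s_t; s_{t+1}) over all cells; `history` is a list of state
    lists, one per time step."""
    width = len(history[0])
    H, MI = 0.0, 0.0
    for i in range(width):
        series = [row[i] for row in history]
        pairs = list(zip(series, series[1:]))
        # MI(X; Y) = H(X) + H(Y) - H(X, Y)
        MI += entropy(series[:-1]) + entropy(series[1:]) - entropy(pairs)
        H += entropy(series)
    return H / width, MI / width

# Example: a tiny toy history of a four-cell CA over three time steps
print(single_cell_stats([[0, 1, 0, 1], [1, 1, 0, 0], [0, 1, 1, 0]]))
```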
B. Evolutionary Algorithms

As may have been apparent from the explanations of the two example model systems in the previous section, even relatively small systems with simple rules governing their local dynamics can often show very complex behavior that cannot be predicted by examining their structure and rulesets alone; rather, the system must be run to observe its behavior. For any interesting system, there will certainly exist a vast number of possible configurations and rulesets, to the extent that it would not be reasonable to brute-force our way through checking the behavior of each one. Furthermore, as discussed by Langton [2] and exemplified by his CA survey, the number of configurations showing interesting behavior becomes vanishingly small with respect to the space of all possible configurations as the system size scales up. All of these factors make it challenging to select for systems that show a targeted type of behavior.

One approach to tackling this issue is the use of evolutionary algorithms, which take inspiration from the process of evolution by natural selection to iteratively improve generations of machines to produce the desired behavioral outcome [1]. In this approach, the rules that govern local interactions are encoded into a simple representation of a possible system configuration, and the behavior that is produced when the system is run is evaluated to determine how well it performs based on a desired metric called the fitness function. The representation and output behavior are conceptually similar to the genotype (genetic makeup) and phenotype (observable characteristics) of an organism.

The process of computational evolution then follows a path analogous to that in nature. An initial population of individual machines is created and run. Their fitness is evaluated based on a fitness function quantifying how closely their behavior resembles the target behavior. The descriptions of those that perform the best from the population (parent machines) are then used to generate new descriptions for a new generation of machines (offspring machines). For this purpose, genetic algorithms are commonly used. These involve selecting pairs of machines from the parent generation that show high fitness and performing genetics-inspired operations like crossover and mutation. This process is iterated over many generations and allows for a more intelligent search of the space of possible machines. In the realm of computation in physical substrates, a technique known as evolution-in-materio is commonly used [17], where genetic algorithms are used to search for physical signals that can be applied to a physical system to configure its properties to a desired state, heightening its capacity for computation.
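The loop just described can be sketched in a few lines. In this toy version (entirely illustrative: the genome is a binary rule table, and the fitness function simply rewards closeness to a target λ, standing in for the far more costly evaluation of actually running each machine), tournament selection, one-point crossover, and point mutation are iterated over generations.

```python
import random

random.seed(1)
GENOME_LEN, POP, GENS = 32, 40, 50   # rule-table size and GA parameters
TARGET_LAMBDA = 0.5                   # illustrative target

def fitness(genome):
    # Stand-in fitness: closeness of the ruleset's lambda to a target value
    lam = sum(genome) / len(genome)
    return -abs(lam - TARGET_LAMBDA)

def tournament(pop, k=3):
    # Pick the fittest of k randomly sampled individuals
    return max(random.sample(pop, k), key=fitness)

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for gen in range(GENS):
    offspring = []
    while len(offspring) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, GENOME_LEN)          # one-point crossover
        child = p1[:cut] + p2[cut:]
        for i in range(GENOME_LEN):                    # point mutation
            if random.random() < 0.02:
                child[i] ^= 1
        offspring.append(child)
    pop = offspring

best = max(pop, key=fitness)
print("best lambda:", sum(best) / len(best))
```

In an evolution-in-materio setting, the same loop would apply, but the genome would instead encode the configuration signals applied to the material, and fitness would be measured from the material's response.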
IV. WORKING WITH PHYSICAL SUBSTRATES
To better understand the behaviors driving complex dynamics in actual physical systems, data-driven modeling approaches can be employed, where data obtained from actual physical substrates is analyzed and used to recapture targeted behaviors in models. In the case of engineerable substrates, targeted behaviors may even be translatable from natural biological substrates. Data-driven modeling serves a dual purpose: (1) providing insight into the behavior of the studied system, including through simulations of situations that may not be easily achievable experimentally, and (2) enabling the emulation of natural systems in artificial systems.

Furthermore, increasing attention has turned to the exploitation of physical nonlinear systems for computation [18]. Current computing substrates are inflexible and power-hungry; the use of complex nonlinear systems in computing hardware would open up the possibility of more efficient and powerful hardware with the capacity to learn and could pave the way for rapid advancement in artificial intelligence. One example of a computing paradigm exploiting the nonlinearity of certain systems is reservoir computing [19, 20], one great benefit of which is that the system acting as the reservoir does not need to be trained or modified; rather, the connections and dynamical behaviors it shows can be harnessed by finding the appropriate way to encode inputs to apply to the system and decode the output behavior it produces. This computing paradigm has been exploited in artificial intelligence applications [20], and artificial life approaches to computational reservoirs may offer tools for the advancement of such applications. A minimal sketch of the scheme is given below.

This section gives an overview of the dynamics of two physical substrates investigated by the authors—networks of neurons and nanomagnetic arrays—and some approaches that have been taken to capture their behavior in computational models. These two substrates both show the necessary nonlinear behavior to be well suited for computation.
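Here is a minimal echo state network sketch of the reservoir computing idea (in the style of [20]; assuming numpy, with the sizes, scaling factors, and delay-recall task all being illustrative choices of ours): a fixed random reservoir is driven by an input signal, and only a linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 100, 1000

# Fixed random reservoir: only the readout below is ever trained
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

u = rng.uniform(-1, 1, (T, n_in))     # random input signal
y = np.roll(u[:, 0], 5)               # illustrative task: recall the input 5 steps back

# Drive the reservoir and collect its states
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Train the linear readout by ridge regression on the collected states
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ y)
pred = states @ W_out
print("train MSE:", float(np.mean((pred[100:] - y[100:]) ** 2)))
```

The reservoir itself is never modified, which is precisely what makes it plausible to substitute a physical system, such as a neuronal culture or a nanomagnetic array, in its place.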
A. Neuronal networks

The human brain is arguably the most complicated machine we know of. With very low power consumption, it can make sense of an impressive range of inputs and control widely varied bodily responses to these inputs, and it can store vast amounts of information. Neurons in the brain encode and transmit information in stereotyped electrical signals called action potentials, or spikes, which are produced by a carefully synchronized flow of ions across the cell membrane and can induce spikes in other neurons via connections called synapses. Although the mechanisms of spike generation, propagation, and transmission are fairly well characterized at the cellular level—though, admittedly, even on a single-cell level, what is known is far from simple and there remains much to be learned—larger-scale behaviors cannot be wholly explained by simply combining many of these smaller-scale models.

Much research has been devoted to understanding the immense computational capabilities of the brain and how networks of neurons process, transmit, and store information, and advances in recording technology and data handling techniques have opened the door to new lines of investigation that were previously inaccessible. The electrical behavior of neurons allows them to be studied at many scales using different techniques, from single-cell recording by measuring the voltage across the cell membrane to electroencephalography (EEG) at the whole-brain scale. To limit the scope of the discussion here, we focus on the study of the electrophysiological behavior of neurons at the network or population level using microelectrode array (MEA) technology [21].

An MEA is a set of electrodes embedded in a substrate, such as glass, on which neurons can be grown. MEAs enable the long-term nondestructive recording of populations of neurons as well as controlled network stimulation by electrical pulses. An example of a 60-electrode MEA is shown in Fig. 4, along with an example of the voltage signals recorded by the MEA, from which the spiking behavior of the network can be extracted.

The advent and recent advancement of MEA technology, including accessibility to commercial recording setups and analysis tools (e.g., Multi Channel Systems MEA2100 systems and MEAs, along with their corresponding software suite; see Fig. 4), have made it possible for researchers to perform long-term observation of the behavior of populations of neurons in vitro. This capacity to record from whole populations of neurons, both in living animals and in disembodied cultures, using MEAs and related technology has brought about a shift in focus from the spiking of single neurons to network-level dynamics and how neuronal assemblies within the brain can collectively drive the dynamics and function of the entire system [22]. Indeed, although complex and inarguably worthy of the attention it has received, the behavior of a single neuron can only tell us so much when decoupled from its "neighborhood" of other interconnected neurons.

This shift in focus has brought about the necessity to inquire into the organization of such populations of neurons,
raising a number of questions. What physical feature of the network constitutes a connection between two neurons? How can we capture these connections in the network through our observations of it? And what measures can we use to characterize the network connections?

Fig. 4. Microscope image of an MEA (Multi Channel Systems GmbH, Germany) with a network of neurons cultured on top of it [4] (left) along with a screen capture from the Multi Channel Suite of software (Multi Channel Systems GmbH, Germany).
1) Connectivity in neuronal networks:
The organization of networks of neurons is typically described in terms of three types of connectivity: structural, functional, and effective connectivity [22]. A structural connection indicates an anatomical feature that mediates a physical interaction between two neurons, namely a synapse. Structural connectivity is extracted from imaging data of morphological features or synaptic markers. Although methods exist for extracting such features in relatively low-density networks [e.g., 23, 24], where the morphology of individual neurons can be observed, in higher-density networks on MEAs, such features can be difficult to extract from images.

Functional connectivity represents the temporal correlation between the spiking patterns of pairs of neurons, obtained using, for example, the cross-correlation between pairs of spike trains. In this type of connectivity, two neurons are said to be connected if the spiking of one can be predicted from the spiking of the other; however, this does not necessarily indicate a causal relationship between the spiking of the two. The effective connectivity, in contrast, refers to connections where the activity of one neuron can be said to directly cause that of another neuron.

In one noteworthy study [23], the structural and functional connectivity were obtained from imaging and electrophysiological data from a high-density MEA and then combined to yield a refined functional connectivity map that may be said to better indicate the actual organization and activity of the network (Fig. 5). It should be noted that all of these types of connectivity are subject to change as a result of different types of plasticity mechanisms operating on a wide range of time scales. This plasticity means that neuronal systems show both dynamics on the network, in the form of spiking activity traveling through the network, and dynamics of the network, with the connections between neurons subject to change as a result of their activity.

Fig. 5. Combined structural and functional connectivity to capture a more accurate representation of the organization and activity of an in vitro neuronal network. Adapted from Ullo et al. [23].
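A minimal sketch of a functional connectivity estimate of the kind described above (our own simplified version, assuming numpy and synthetic binned spike trains; real pipelines add normalization against surrogates and significance testing): the weight of each directed connection is taken as the peak normalized cross-correlation between the two spike trains over a short range of positive lags.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 20, 5000

# Synthetic binned spike trains; neuron j loosely follows neuron j-1 one bin later
spikes = (rng.random((n_neurons, n_bins)) < 0.02).astype(float)
for j in range(1, n_neurons):
    spikes[j, 1:] += spikes[j - 1, :-1] * (rng.random(n_bins - 1) < 0.5)
spikes = np.clip(spikes, 0, 1)

max_lag = 5
fc = np.zeros((n_neurons, n_neurons))
z = (spikes - spikes.mean(axis=1, keepdims=True)) / spikes.std(axis=1, keepdims=True)
for i in range(n_neurons):
    for j in range(n_neurons):
        if i == j:
            continue
        # Peak normalized cross-correlation over lags 1..max_lag (i leading j)
        corrs = [np.mean(z[i, :-lag] * z[j, lag:]) for lag in range(1, max_lag + 1)]
        fc[i, j] = max(corrs)

print("strongest putative connections:", np.argwhere(fc > 0.3)[:5])
```

Thresholding fc then yields a putative directed graph to which the measures of Fig. 2 can be applied.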
2) Connectivity and dynamics: How does information flow?:
These types of connectivity give us an idea of what may be considered to constitute a connection and how to capture such connections—that is, either visual observation of an anatomical connection or temporal correlations in the electrophysiological spiking data recorded from pairs of neurons. What conclusions can then be drawn from these connectivity maps, and what tools can we use to get there?

A first clear step once the connectivity is obtained is to apply graph theory measures (see Fig. 2) to characterize the organization of the connectivity map. These measures can tell us if, for example, the network is modular and contains many hubs or if it is more randomly connected, with many nodes having roughly the same degree. The brain is known to strike a balance between functional segregation and functional integration, allowing information to flow between spatially distant parts of the brain while allowing the generation of coherent brain states [12]. Mapping the connectivity of neuronal populations may give us insight into how this balance is struck. Additionally, as mentioned previously, the connectivity of a network changes over time, both as a consequence of maturation and in response to stimuli. Connectivity analysis can provide us a deeper understanding of how neurons organize themselves over time and what may drive these organizational structures, in both healthy networks and networks that mimic diseases or other abnormal states, and can show how the connectivity is affected by the adaptive or maladaptive responses the networks have to external stimuli or perturbations.

The connectivity of a network is also intimately tied to the dynamics that happen on the network and the manner in which elements in the network communicate and process information. One approach to capturing the dynamic state of a network is to study the distribution of the size of network-wide cascades of activity called "neuronal avalanches" [25]. It has been theorized that the brain lies in the critical state, a state analogous to the "edge of chaos" explored by Langton [2] in which information processing is optimized, and in this state, the size distribution of neuronal avalanches follows a power law. Massobrio et al. [26] have shown that a model with scale-free connectivity is able to reproduce the power-law avalanche scaling we expect to see in networks at criticality, and Shew et al. [27, 28] have shown optimized dynamic range, information capacity, and information transmission in networks showing power-law avalanche scaling. Additionally, preliminary results indicate it may be possible to manipulate supercritical networks into the critical state by increasing network inhibition [4], enabling comparative studies between their behavior in different states and their exploitation as computational reservoirs. These results demonstrate the link between connectivity, dynamic state, and information processing in neuronal networks (a minimal sketch of the avalanche analysis is given at the end of this subsection).

Future work into these three perspectives on the dynamical behavior of neurons may give us invaluable insight into how we can construct self-organizing systems to show the same kind of capacity for efficient information processing—how we can organize connections between elements of the system and construct rules for how they affect each other's behavior. Building models like this can in turn give us a deeper understanding of the brain as well, as we target specific behaviors to emulate and see how simplified models can produce behaviors analogous to those observed in the original system.
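The following is a minimal sketch of the avalanche analysis referenced above (assuming numpy; the Poisson activity is a stand-in of ours for real binned MEA spike counts, which near criticality would yield a power-law size distribution rather than the exponential tail this surrogate produces).

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic population spike counts per time bin (stand-in for MEA data)
activity = rng.poisson(0.8, size=100_000)

# An avalanche is a run of consecutive nonempty bins bounded by empty bins;
# its size is the total number of spikes within the run.
sizes, current = [], 0
for count in activity:
    if count > 0:
        current += count
    elif current > 0:
        sizes.append(current)
        current = 0

sizes = np.array(sizes)
print("number of avalanches:", len(sizes))

# Empirical size distribution on logarithmic bins
bins = np.logspace(0, np.log10(sizes.max()), 20)
hist, edges = np.histogram(sizes, bins=bins, density=True)
for lo, p in zip(edges, hist):
    print(f"size ~{lo:8.1f}: P = {p:.2e}")
```

On real recordings, the estimated distribution would then be tested for power-law scaling, e.g., with the maximum-likelihood methods of Clauset et al. [14].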
B. Magnetic substrates for computation
A number of substrates based on ensembles of different types of magnetic materials show interesting dynamic interplay between the elements arising from various physical phenomena, and these substrates are promising candidates for reservoir computing applications. There are a number of recent examples of studies exploring the possibility of exploiting magnetic substrates for computation [e.g., 29, 30]; however, we focus here on a specific paper by Jensen et al. [5] concerned with exploring how to tune the dynamic behavior of artificial spin ice (ASI) by varying the parameters of an external driving field. For a recent review of the use of physical substrates for reservoir computing, see Tanaka et al. [19].

ASI [31] consists of an array of coupled nanomagnets arranged on a two-dimensional lattice. Each nanomagnet can be viewed as a single bit because of its dipolar state. The state behavior of these nanomagnetic islands arises from the small scale of their dimensions and their shape. Each nanomagnet consists of a single magnetic domain, meaning the magnetization does not vary across the magnet. Oftentimes, the nanomagnets are fabricated with an elongated rectangular shape, causing there to be two states that are energetically favorable, termed spin up and spin down and corresponding to magnetization along the longitudinal axis in the positive or negative direction. This shape of nanomagnet was used in the square ASI array studied by Jensen et al. [5] (Fig. 6).

Jensen et al. [5] perturbed this simulated nanomagnetic array with a time-varying external magnetic field, B(t) = A sin ωt, applied at a fixed angle from the horizontal, and the field parameters A and ω were tuned to achieve different types of dynamical behavior. Because each nanomagnet in the system can take on one of two states, the overall state of the system can be represented by 40 bits, yielding 2^40 possible states. To quantify the complexity of the behavior of the system, 100 cycles of the external field were applied to the system with different strengths A and frequencies ω, and the number S of unique states observed at the end of every cycle was counted (1 ≤ S ≤ 100). The results are shown in the right-hand panel of Fig. 6.

Fig. 6. Schematic showing the layout of the ASI studied by Jensen et al. [5] (left). The number of unique states visited by the array was counted for different external field parameters (right). Reproduced from [5].

For weak fields, none of the magnets switch their state (S = 1), and for very strong fields, all of the magnets switch at the half-cycle point and then switch back (S = 1 again). Additionally, at low frequencies, any transient behavior has died out by the end of a single cycle, so the number of states remains low, whereas very high frequencies produce chaotic behavior, with the nanomagnets not having sufficient time to "keep up" with the switching of the oscillating field direction. At the intermediate frequency of 100 MHz, long transients were observed, in a manner analogous to the complex CA behavior seen in the right-hand panel of Fig. 3.

This type of substrate cannot show the same kind of long-range connectivity that is observed in neuronal assemblies, as a single element of a magnetic system can only directly affect other elements within a certain local radius, whereas neurons can grow axons that can extend over very long distances within the network. Thus, the "connectivity" of ASI is governed by quite a different set of physical rules.
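The state-counting protocol can be caricatured in a few lines (a toy stand-in of ours, not the micromagnetic simulation used in [5]: 40 binary spins, a sinusoidal drive, and a crude threshold switching rule with weak nearest-neighbor coupling), counting the unique end-of-cycle states S.

```python
import math, random

random.seed(0)
n_magnets, cycles, steps_per_cycle = 40, 100, 50
A, h_c = 1.2, 1.0          # illustrative field amplitude and switching threshold

spins = [random.choice([-1, 1]) for _ in range(n_magnets)]
seen = set()
for c in range(cycles):
    for s in range(steps_per_cycle):
        B = A * math.sin(2 * math.pi * s / steps_per_cycle)
        for i in range(n_magnets):
            # Crude caricature of switching: a neighbor-biased threshold rule
            coupling = 0.1 * (spins[i - 1] + spins[(i + 1) % n_magnets])
            if (B + coupling) * spins[i] < -h_c:
                spins[i] = -spins[i]
    seen.add(tuple(spins))   # record the state at the end of each cycle

print("unique end-of-cycle states S =", len(seen))   # 1 <= S <= 100
```

At both field extremes this toy gives S = 1, matching the limits described above; the rich intermediate behavior, however, requires the real coupled magnetization dynamics.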
However, the study described here demonstrates that complex behaviors can be captured in this substrate and that information can propagate through the local interactions that occur between pairs of nanomagnets. By modeling the essence of the behaviors of neural systems that allow them to achieve the kinds of dynamics and optimal information processing behaviors described in the previous section, we can drive the development of magnetic substrates such as these towards a realm of greater efficiency and power, with the possibility of capturing some of the information processing capacity of the brain in engineerable computational substrates.

V. DISCUSSION
The fields of complexity and artificial life offer a great many tools to study emergent behaviors and complex dynamics in different types of dynamical systems. Close inspection of such systems—be they model systems like cellular automata, natural systems like the brain, or fabricated systems like artificial spin ice—reveals rich and varied behaviors that are challenging to capture in simple metrics but beautiful to watch unfold.

With the use of models like graphs and CAs, we can capture the wide range of possible dynamical behaviors observed in various physical systems and target those behaviors that align with the hallmarks of computational power and edge-of-chaos dynamics. This vanishingly small space of critical systems described by Langton [2] and Wolfram [16] is accessible to us if we know how or where to search, and tools like evolutionary algorithms put this possibility at our fingertips. Furthermore, with the connection between neuronal avalanches and criticality [25], we may also work backward from the product—the neuronal system identified as complex—to the description of its connectivity and response to inputs. From this understanding, our focus may then turn to eliciting from manmade substrates the same capacity for computation we see in natural systems. However, it is important to remember that these models and neuro-inspired substrates are not the brain, nor do they behave precisely as the brain does—remember again Simon's statement [11]: "the artificial object ... turn[s] the same face to the outer system." Interpretations of the behavior we capture in our models must be tempered with this understanding: that as we mimic, we do not precisely recreate, and there may be just as much to gain from the differences as the similarities.

An artificial life approach can provide solutions to a wide range of practical questions. The focus here has been on bio-inspired parallel and distributed computation, with potential applications spanning from the implementation of biologically plausible models for computation to the development of bio-inspired computational substrates with the ability to learn and adapt, offering a physical environment better suited for artificial intelligence applications than conventional hardware. In addition, teaming up with biologists and neuroscientists to better characterize the dynamics of the brain may open the door to previously unconsidered diagnostic or clinical tools.

But apart from the practical, these systems spark in many researchers an innate and powerful curiosity needing no pragmatic outlet. We argue that this drive to understand how complex behaviors emerge from tauntingly simple components and rulesets describing their interaction—coupled with a mind open to the possibilities of what an exploration of these systems will reveal—is what will ultimately prove fruitful in future research, giving us fodder for the practical applications where such answers were not at first sought. There still remains much for the tortoises to teach us.
REFERENCES

[1] C. G. Langton, "Artificial life," in Proceedings of the Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems. Addison-Wesley Publishing Company, 1987, pp. 1-48.
[2] C. Langton, "Computation at the edge of chaos: Phase transitions and emergent computation," Physica D: Nonlinear Phenomena, vol. 42, no. 1-3, pp. 12-37, 1990.
[3] P. Aaser, M. Knudsen, O. H. Ramstad, R. van de Wijdeven, S. Nichele, I. Sandvig, G. Tufte, U. S. Bauer, Ø. Halaas, S. Hendseth, A. Sandvig, and V. D. Valderhaug, "Towards making a cyborg: A closed-loop reservoir-neuro system," in ECAL, 2017.
[4] K. Heiney, O. H. Ramstad, I. Sandvig, A. Sandvig, and S. Nichele, "Assessment and manipulation of the computational capacity of in vitro neuronal networks through criticality in neuronal avalanches," 2019, pp. 247-254.
[5] J. H. Jensen, E. Folven, and G. Tufte, "Computation in artificial spin ice," in The 2018 Conference on Artificial Life. Cambridge, MA: MIT Press, 2018, pp. 15-22.
[6] S. Pontes-Filho, P. Lind, A. Yazidi, J. Zhang, H. Hammer, G. Mello, I. Sandvig, G. Tufte, and S. Nichele, "EvoDynamic: A framework for the evolution of generally represented dynamical systems and its application to criticality," in EvoApplications 2020, Held as Part of EvoStar 2020, 2020.
[7] M. Sipper, "The emergence of cellular computing," Computer, vol. 32, no. 7, pp. 18-26, 1999.
[8] Y. Benenson, "Biomolecular computing systems: principles, progress and potential," Nature Reviews Genetics, vol. 13, no. 7, pp. 455-468, 2012.
[9] A. Das, R. Dasgupta, and A. Bagchi, "Overview of cellular computing: basic principles and applications," in Handbook of Research on Natural Computing for Optimization Problems, J. K. Mandal, S. Mukhopadhyay, and T. Pal, Eds. Hershey, PA, USA: IGI Global, 2016, pp. 637-662.
[10] W. Grey Walter, "An imitation of life," Scientific American, pp. 42-45, 1950.
[11] H. A. Simon, The Sciences of the Artificial, 3rd ed. Cambridge, MA: MIT Press, 1996.
[12] O. Sporns, G. Tononi, and G. M. Edelman, "Connectivity and complexity: The relationship between neuroanatomy and brain dynamics," Neural Networks, vol. 13, no. 8-9, pp. 909-922, 2000.
[13] O. Sporns, "Structure and function of complex brain networks," Dialogues in Clinical Neuroscience, vol. 15, no. 3, pp. 247-262, 2013.
[14] A. Clauset, C. R. Shalizi, and M. E. J. Newman, "Power-law distributions in empirical data," SIAM Review, vol. 51, no. 4, pp. 661-703, 2009.
[15] G. Del Ferraro, A. Moreno, B. Min, F. Morone, Ú. Pérez-Ramírez, L. Pérez-Cervera, L. C. Parra, A. Holodny, S. Canals, and H. A. Makse, "Finding influential nodes for integration in brain networks using optimal percolation theory," Nature Communications, vol. 9, no. 1, p. 2274, 2018.
[16] S. Wolfram, "Universality and complexity in cellular automata," Physica D: Nonlinear Phenomena, vol. 10, pp. 1-35, 1984.
[17] H. J. Broersma, J. F. Miller, and S. Nichele, Computational Matter: Evolving Computational Functions in Nanoscale Materials, ser. Emergence, Complexity and Computation. Springer, 2016, no. 23, pp. 397-428.
[18] S. Stepney, "The neglected pillar of material computation," Physica D: Nonlinear Phenomena, vol. 237, no. 9, pp. 1157-1164, 2008.
[19] G. Tanaka, T. Yamane, J. B. Héroux, R. Nakane, N. Kanazawa, S. Takeda, H. Numata, D. Nakano, and A. Hirose, "Recent advances in physical reservoir computing: A review," Neural Networks, vol. 115, pp. 100-123, 2019.
[20] M. Lukoševičius and H. Jaeger, "Reservoir computing approaches to recurrent neural network training," Computer Science Review, vol. 3, no. 3, pp. 127-149, 2009.
[21] M. E. J. Obien, K. Deligkaris, T. Bullmann, and D. J. Bakkum, "Revealing neuronal function through microelectrode array recordings," Frontiers in Neuroscience, vol. 8, pp. 1-30, 2015.
[22] D. Poli, V. P. Pastore, and P. Massobrio, "Functional connectivity in in vitro neuronal assemblies," Frontiers in Neural Circuits, vol. 9, pp. 1-14, 2015.
[23] S. Ullo, T. R. Nieus, D. Sona, A. Maccione, L. Berdondini, and V. Murino, "Functional connectivity estimation over large networks at cellular resolution based on electrophysiological recordings and structural prior," Frontiers in Neuroanatomy, vol. 8, pp. 1-15, 2014.
[24] G. B. Moreno e Mello, S. Pontes-Filho, I. Sandvig, V. D. Valderhaug, E. Zouganeli, O. H. Ramstad, A. Sandvig, and S. Nichele, "Method to obtain neuromorphic reservoir networks from images of in vitro cortical networks," IEEE, 2019, pp. 2360-2366.
[25] J. M. Beggs and D. Plenz, "Neuronal avalanches in neocortical circuits," Journal of Neuroscience, vol. 23, no. 35, pp. 11167-11177, 2003.
[26] P. Massobrio, V. Pasquale, and S. Martinoia, "Self-organized criticality in cortical assemblies occurs in concurrent scale-free and small-world networks," Scientific Reports, vol. 5, 2015.
[27] W. L. Shew, H. Yang, T. Petermann, R. Roy, and D. Plenz, "Neuronal avalanches imply maximum dynamic range in cortical networks at criticality," Journal of Neuroscience, vol. 29, no. 49, pp. 15595-15600, 2009.
[28] W. L. Shew, H. Yang, S. Yu, R. Roy, and D. Plenz, "Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches," Journal of Neuroscience, vol. 31, no. 1, pp. 55-63, 2011.
[29] G. Bourianoff, D. Pinna, M. Sitte, and K. Everschor-Sitte, "Potential implementation of reservoir computing models based on magnetic skyrmions," AIP Advances, vol. 8, no. 5, p. 055602, 2018.
[30] J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. D. Stiles, and J. Grollier, "Neuromorphic computing with nanoscale spintronic oscillators," Nature, vol. 547, no. 7664, pp. 428-431, 2017.
[31] S. H. Skjærvø, C. H. Marrows, R. L. Stamps, and L. J. Heyderman, "Advances in artificial spin ice," Nature Reviews Physics, vol. 2, pp. 13-28, 2020.