Causal Emergence in Discrete and Continuous Dynamical Systems
Thomas F. Varley, March 31, 2020. Complex Networks & Systems, School of Informatics, Computing & Engineering, Indiana University, Bloomington, IN 47401, USA. Psychological & Brain Sciences, Indiana University, Bloomington, IN 47401, USA.
Abstract
Emergence, the phenomenon whereby a system's micro-scale dynamics facilitate the development of non-trivial, informative higher scales, has become a foundational concept in modern sciences, tying together fields as diverse as physics, biology, economics, and ecology. Despite its apparent universality and the considerable interest it attracts, researchers have historically struggled to provide a rigorous, formal definition of emergence that is applicable across fields. Recent theoretical work using information theory and network science to formalize emergence in state-transition networks (causal emergence) has provided a promising way forward; however, the relationship between this new framework and other well-studied system dynamics is unknown. In this study, we apply causal emergence analysis to two well-described dynamical systems: the 88 unique elementary cellular automata and the continuous Rössler system in periodic, critical, and chaotic regimes. We find that emergence, as well as its component elements (determinism, degeneracy, and effectiveness), varies dramatically in different dynamical regimes, in sometimes unexpected ways. We conclude that the causal emergence framework provides a rich new area of research to explore, for both theoreticians and natural scientists in many fields.
The emergence of informative high scales in complex systems is a foundational concept in modern sciences. Fields from physics to the life sciences have explored how higher-order structures can emerge from lower-order dynamics; however, complexity science as a whole still lacks a coherent, agreed-upon formalism that describes emergence and, more crucially, how much emergence a system is capable of displaying. Recently, Hoel et al. proposed an information-theoretic formalization of emergence based on the information encoded in a system's state-transition network (Hoel et al., 2013; Hoel et al., 2016). Called causal emergence (CE), this framework provides a method to calculate the amount of information a system encodes in its state-transition structure at the micro-scale and then, after renormalization, at the macro-scale. If a system has a more informative macro-scale than its micro-scale, then it can be said to display "emergent structure." If the micro-scale is the most informative, then we say it displays "causal reduction."

The majority of work using the causal emergence framework has been done using simple Boolean networks (Hoel et al., 2013; Hoel et al., 2016), which can be solved through brute-force search. Klein & Hoel (Klein and Hoel, 2019) applied CE analysis to a series of larger real and synthetic complex networks and found that the topology and structure of a given network can have a significant impact on its capacity to encode information and emergence. However, a significant limitation of this work is that the majority of real networks explored are not readily understood as state-transition networks, instead often being network representations of interacting components (e.g. gene networks, PGP web-of-trust) or routing networks (e.g. airline networks, power grids).
We say this not to diminish the significance or validity of the work, but rather to draw attention to an outstanding question: given a discrete or continuous system with a well-defined state-transition network, what does CE analysis of the state-transition network tell us about the behaviour and dynamics of our system? Without a paired system and its state-transition graph, CE analysis provides limited intuitive understanding of what "emergence" might actually mean, beyond a kind of optimal community-detection schema.

To address this, we applied CE analysis (using the spectral-clustering algorithm proposed by Griebenow, Klein, & Hoel (Griebenow et al., 2019)) to two foundational classes of dynamical systems, each of which displays a rich range of behaviours. The first is the set of 88 unique discrete elementary cellular automata (ECA) (Wolfram, 2002). The ECA are an excellent starting point for this analysis, as the different rules generate a wide range of behaviours that are easily visualized, including static, periodic, chaotic, and fractal-like regimes (described in more detail in Section 2.2). In addition to this rich repertoire, for an ECA of a given size, all possible states the system can take on can be enumerated, and the transitions from state to state easily represented as a directed state-transition graph. Finally, as the ECA are deterministic in nature (a given state's immediate future can be predicted with total certainty), the information encoded in the causal structure is driven only by the degeneracy (determinism is constant). Previous work has found that high degeneracy is a key component of causal emergence (Hoel et al., 2016), and so the ECA allow us to explore that relationship without the added confound of varying determinism.

The second system we analysed was the Rössler system (Rössler, 1976).
A canonical model in chaos theory, the Rössler system is a set of coupled differential equations (described in detail in Section 2.3) which can display periodic, chaotic, or critical behaviour depending on the values of its parameters. By varying these values, we can explore the information-structure of the system as it passes through the phase transition from periodicity to chaos and back again. Unlike the ECA, however, the Rössler system is continuous and therefore doesn't naturally lend itself to this kind of discrete information-theoretic analysis. Constructing a state-transition network for such a system requires a method of discretizing the attractor to create a finite number of states and the probabilities of transition from one to another. To do this, we used the method of constructing ordinal partition networks (OPNs), described in detail in Section 2.3.1 (Small, 2013; McCullough et al., 2015; Myers et al., 2019). Briefly, an OPN represents the state-transition dynamics of a continuous time-series by representing discrete patterns of activity as nodes and then counting the number of transitions from one state to another, which are stored as directed edge weights and can be normalized into probabilities for CE analysis. Unlike the ECA, which are perfectly deterministic, the OPN can have variable determinism and degeneracy depending on the dynamics of the system producing the source time-series. In many respects, the OPN is quite similar to the notion of the ε-machine proposed by Crutchfield (Crutchfield, 1994; Crutchfield, 2012), which provides a provably optimal network representation of a continuous complex system. Once the OPN has been constructed, it can be used for the same analysis as the ECA state-transition network, although here determinism is variable, unlike in the ECA.
By sweeping through the various dynamical regimes of the Rössler system and reconstructing a discrete attractor at every step, we can map system dynamics to a network topology for information-theoretic analysis.

The ECA and the Rössler OPN represent distinctly different kinds of dynamical systems with many different behaviours and causal structures, but both are amenable to CE analysis. This provides an opportunity to relate well-understood dynamics such as chaos, periodicity, and complexity to the novel formalism of causal emergence, and to provide new insights into how dynamical systems encode information and why some may develop informative higher scales while others do not.
The causal structure of a state-transition graph is comprised of three fundamental elements: determinism, degeneracy, and effectiveness. Recall that for a weighted, directed state-transition graph G = (V, E) where |V| = N, out-going edges are weighted by the probability of transitioning to a given future based on the current state, and so satisfy a probability distribution. This allows us to define the determinism of the graph as a function of the average entropy of the distribution of out-going edges across all vertices:

Det(G) = (log(N) − ⟨H(W_i^out)⟩_{i ∈ V}) / log(N)

The determinism gives an average measure of how reliably you can predict the future knowing the present. If every vertex in the graph has a single output with p = 1, then the determinism of the whole network is 1. Dividing by log(N) introduces a normalizing factor which allows us to compare the determinism of networks with different numbers of vertices.

In contrast, the degeneracy is a function of the entropy of the "average" network distribution of out-going edges:

Deg(G) = (log(N) − H(⟨W_i^out⟩_{i ∈ V})) / log(N)

In this way, the degeneracy is the amount of information lost due to uncertainty about the past, when multiple states could lead to the same present state. If every state had an equal probability of being preceded by any other state, the degeneracy is maximal at 1. If every state only had one possible past, then the network has 0 degeneracy.

The difference between the determinism and the degeneracy is the effectiveness: how much information the network encodes in its causal structure.

Eff(G) = Det(G) − Deg(G)

In a network with a very high effectiveness, determinism would be high (it is easy to predict future trajectories) with a low degeneracy (paths rarely overlap or feed into each other). In a network with very low effectiveness, it is hard both to predict the future and to reconstruct the past.
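The three quantities above can be computed directly from a row-stochastic transition matrix. The following is a minimal NumPy sketch, not the paper's Cython implementation; it uses base-2 logarithms, although the normalization by log(N) makes the choice of base irrelevant.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def determinism(W):
    """Det(G) for a row-stochastic N x N matrix W (rows = out-edge distributions)."""
    N = W.shape[0]
    mean_H = np.mean([entropy(W[i]) for i in range(N)])
    return (np.log2(N) - mean_H) / np.log2(N)

def degeneracy(W):
    """Deg(G): based on the entropy of the vertex-averaged out-edge distribution."""
    N = W.shape[0]
    return (np.log2(N) - entropy(W.mean(axis=0))) / np.log2(N)

def effectiveness(W):
    """Eff(G) = Det(G) - Deg(G)."""
    return determinism(W) - degeneracy(W)
```

For the identity matrix (every state maps to itself with p = 1), determinism is 1 and degeneracy is 0, so effectiveness is maximal; if every state instead maps to a single common attractor state, determinism is still 1 but degeneracy is also 1, so effectiveness vanishes.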
The formalism of effectiveness allows us to define an intuitive sense of what it means for a system to display "emergence": are there higher scales that have a greater effectiveness than the micro-scale? In the context of a defined state-transition graph, higher scales can be explored through "renormalization," where the micro-scale is coarse-grained by combining individual vertices into "macro-vertices." Renormalization is essentially a form of Louvain-like community detection, where the communities get collapsed into single vertices.

To implement the renormalization, we followed the algorithm detailed by Griebenow et al. (Griebenow et al., 2019), which uses a modified form of spectral clustering to find causally similar vertices and assign them to a set. For the theoretical discussion of why this particular algorithm works, see the above-referenced paper. Briefly:

1. Calculate the eigendecomposition of the graph's adjacency matrix, resulting in a spectrum of typically complex eigenvectors (V = {v_i}) and their associated eigenvalues (Λ = {λ_i}).

2. Remove the kernel K of the spectrum, where K = {v_i | λ_i = 0}.

3. Rescale the remaining unit-length eigenvectors by the associated eigenvalue, s.t. V′ = {v′_i = λ_i v_i}, creating a set of N-dimensional vectors.

4. We can associate the i-th vertex in the network with a vector comprised of the i-th elements of all the vectors in V′. This creates a set of N vectors (one for each vertex in the network) that can be readily embedded in a space of dimension N − |K|.

5. We then create a distance matrix for our N vectors. If two vectors map to vertices that are within each other's Markov blankets, we define the distance between them as the cosine distance between their associated vectors. If two vertices are not within each other's Markov blankets, we define the distance to be ∞, as they should never be clustered.

6.
Finally, using our pre-computed distance matrix, we cluster the vertices using the OPTICS clustering algorithm (Ankerst et al., 1999). The output is a vector of assignments for each vertex. Nodes can either be mapped to a cluster (which gets collapsed into a macro-vertex) or treated as outliers, which remain independent in the renormalized network.

Once the macro-vertices have been assigned, the question remains: how do they fit into the rest of the network? We chose a relatively simple schema that erases information about the internal community structure while conserving as much information about the transition probabilities as possible. Briefly:

• The macro-vertex has one self-loop: its weight is given by the average probability that a walker on any vertex within the community would transition to another vertex within the same community.

• All in-coming edges incident on vertices within the macro-community terminate on the new macro-vertex, with the same probability. If multiple edges originate from the same source vertex and terminate on separate vertices in the macro-community, those edges are collapsed and their weights summed.
• All out-going edges from vertices within the community originate from the new macro-vertex. The weight of this edge is the average probability that a random walker on any vertex in the community would transition to the target vertex.

The result of this procedure is that all information about the structure of the communities is lost after renormalization; however, random walkers placed on the micro- and macro-scale networks would follow similar paths. This method also ensures that the weights of all the out-going edges for each vertex still sum to one.
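Steps 1-4 of the clustering algorithm above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the Markov-blanket masking and the final OPTICS step (e.g. sklearn.cluster.OPTICS on the precomputed distance matrix) are omitted for brevity.

```python
import numpy as np

def spectral_embedding(A, tol=1e-10):
    """Associate each vertex of the adjacency matrix A with a vector built
    from the eigenvalue-scaled, non-kernel eigenvectors (steps 1-4)."""
    evals, evecs = np.linalg.eig(A)        # step 1: eigenvectors are the columns of evecs
    keep = np.abs(evals) > tol             # step 2: discard the kernel (eigenvalue ~ 0)
    scaled = evecs[:, keep] * evals[keep]  # step 3: rescale each eigenvector by its eigenvalue
    return scaled                          # step 4: row i is the vector for vertex i

def cosine_distance(u, v):
    """Cosine distance between two (possibly complex) vertex vectors (step 5)."""
    den = np.linalg.norm(u) * np.linalg.norm(v)
    return 1.0 if den == 0 else 1.0 - abs(np.vdot(u, v)) / den
```

Applied to an N-vertex graph, the embedding returns one (N − |K|)-dimensional vector per vertex; pairwise cosine distances (with ∞ substituted for non-Markov-blanket pairs) then feed the clustering step.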
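The bulleted coarse-graining scheme can be sketched as follows, assuming a single community is collapsed at a time; the function name and interface are ours, not the paper's.

```python
import numpy as np

def coarse_grain(W, community):
    """Collapse the vertices in `community` (a list of indices) of the
    row-stochastic matrix W into one macro-vertex (the last row/column)."""
    N = W.shape[0]
    C = sorted(community)
    rest = [i for i in range(N) if i not in C]
    M = len(rest) + 1
    Wm = np.zeros((M, M))
    for a, i in enumerate(rest):
        # micro -> micro edges are copied unchanged
        for b, j in enumerate(rest):
            Wm[a, b] = W[i, j]
        # in-coming edges: weights into separate community members are summed
        Wm[a, M - 1] = W[i, C].sum()
    # out-going edges: average probability over community members
    Wm[M - 1, :len(rest)] = W[np.ix_(C, rest)].mean(axis=0)
    # self-loop: average probability of staying inside the community
    Wm[M - 1, M - 1] = W[np.ix_(C, C)].sum(axis=1).mean()
    return Wm
```

Because the macro-vertex's out-going row is the community-average of row-stochastic rows, every row of the renormalized matrix still sums to one, as the text requires.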
Arguably the simplest cellular automata to explore are the 256 "Elementary Cellular Automata" (ECA), described in excruciating detail by Stephen Wolfram in his book "A New Kind of Science" (Wolfram, 2002). Each ECA consists of a one-dimensional array of cells, each of which can be in an on or off state. At every moment, each cell updates according to its current state and the states of its immediate left and right neighbours. The constraint that each cell's immediate future is totally specified by its present environment makes these systems highly amenable to research. Considerable work has been done on the basic ECA (again, see Wolfram's 1,000-page tome on the subject), and a number of intriguing findings have emerged, including the discovery that at least one rule is a universal computer.

For our purposes, the ECA are useful because they display a wide range of behaviours. Some, such as Rules 0 and 255 (the universal off and on rules respectively), produce trivial futures regardless of their initial conditions, while others, like Rule 60, show highly non-trivial and complex behaviours. This allows us to ask: what kinds of behaviours are associated with causal emergence? At the outset, we were unsure: on one hand, the complex, fractal patterns that rules like Rule 60 produce are often used as examples of unexpected emergence, but on the other hand, each state in that case is highly individuated, and it may not be easy to find a way to aggregate states in such variable systems. In contrast, simple patterns may be more compressible, but also less interesting. This analysis gives us an opportunity to test whether causal emergence in state-transition graphs corresponds with our own intuitions about what constitutes interesting or "non-trivial" emergence. Wolfram classified all of the ECA into four broad categories:
Classes of Elementary Cellular Automata
1. Those ECA that converge to a uniform state.

2. Those ECA that converge to a stable, repeating state.

3. Those ECA that remain in a random state.

4. Those ECA that combine elements of randomness and predictability ("complex").

While these are qualitative designations as opposed to quantitative ones, how emergence is distributed over representatives of these four classes will go a long way to providing intuitive insight into what kinds of behaviours support emergence and which do not.

The ECA can also be explored at a variety of scales, corresponding to the number of cells. For each rule, we created eight state-transition graphs, one for each system size from five cells (with a 32-vertex state-transition graph) to 12 cells (corresponding to a 4096-vertex state-transition graph). This provides a unique opportunity to explore how the size of a system contributes to its determinism, degeneracy, and capacity to support emergence while holding the generating dynamics constant. While there are 256 possible rules, quite a few of them are identical: there are in fact only 88 "unique" rules (all others being symmetrical to at least one other rule), so for the results here, we picked only those 88 distinct rules (the numbers for each rule can be found in Supplementary Datasets).
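As a concrete sketch of this construction, each configuration of a width-cell ECA can be encoded as an integer and its (deterministic) successor computed from the rule number; we assume periodic boundary conditions here, which the text leaves unspecified.

```python
def eca_step(state, rule, width):
    """One update of an ECA configuration encoded as a `width`-bit integer,
    assuming periodic boundary conditions."""
    bits = [(state >> i) & 1 for i in range(width)]
    new = 0
    for i in range(width):
        left, centre, right = bits[(i + 1) % width], bits[i], bits[(i - 1) % width]
        neighbourhood = (left << 2) | (centre << 1) | right  # Wolfram's 3-bit index
        if (rule >> neighbourhood) & 1:
            new |= 1 << i
    return new

def state_transition_graph(rule, width):
    """Deterministic state-transition map over all 2**width configurations:
    every state has exactly one out-going edge (with p = 1)."""
    return {s: eca_step(s, rule, width) for s in range(2 ** width)}
```

Rule 0 sends every state to the all-off state, rule 255 sends every state to the all-on state, and rule 204 (whose output always copies the centre cell) is the identity map, which makes for easy sanity checks.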
The Rössler attractor is a three-dimensional dynamical system commonly used as a toy model when exploring chaotic dynamics. It is defined by:

dx/dt = −y − z
dy/dt = x + ay
dz/dt = b + z(x − c)

where a, b, and c are free parameters that control the dynamics of the system. For this study, we held the values of b and c constant at 2 and 4 respectively and varied the value of a within a range of 0.37 - 0.43 in increments of 0.001, following Myers et al. (2019). From the attractor we took only the x-series, giving us a large set of time-series corresponding to dynamics in periodic, critical, and chaotic regimes.

The ordinal partition network (OPN) represents a discretized approximation of a continuous attractor. Based on the notion of permutation embedding, each node represents a pattern of activity, and the weights between the nodes represent the probabilities that, given some current state S_i, at the next timestep the system evolves into S_j. Constructing the OPN is reasonably simple and requires choosing only two free parameters: the embedding dimension d and the temporal lag τ. Given some real-valued time-series X = [x_1, x_2, x_3, ..., x_n], we construct a series of d-dimensional vectors: V_σ = [x_σ, x_{σ+τ}, x_{σ+2τ}, ..., x_{σ+(d−1)τ}]. We then find the ordinal permutation of V, which is the permutation of indices that would sort the elements of V. For example, if V = (0.4, −0.2, 0.1), then π(V) = (2, 3, 1): the second element is the smallest, followed by the third, then the first.

The primary benefit of the permutation embedding is that it allows us to map every possible real-valued vector V into one of only d! possible ordinal partitions. To construct the state-transition network, we count how many times π(V_i) is followed by π(V_j) and calculate P(π(V_j) | π(V_i)). The result is a Markovian network with a finite number of nodes, for which the out-going edge weights of each node define a probability distribution.
A random walk on this network returns a plausible series of ordinal permutations, which could conceivably be reconstructed into a probable continuous time-series.

The question of the optimal values of d and τ is contentious in the literature. In the absence of a "best practice" for the Rössler attractor, we used the same embedding as Myers et al. (2019): d = 6, τ = 40.

All the relevant code can be found in the associated GitHub repository: https://github.com/thosvarley/causal emergence. Hosted therein is the source code for the ECA, the Rössler system, and the causal emergence package (written in Cython), as well as the relevant analysis scripts. The actual data (the state-transition graphs for all the ECA and the Rössler system) are not included due to hosting limits. All analyses were done in Python 3.7 and Cython (Behnel et al., 2011). Other packages used include the NumPy library (version 1.15.4) (Walt et al., 2011), SciPy (version 1.3.1) (Jones and Oliphant, 2001), Scikit-Learn (version 0.20.0) (Pedregosa et al., 2011), Matplotlib (version 2.2.2) (Hunter, 2007), Spyder (version 3.2.3), and iGraph (version 0.7.1) (Csardi and Nepusz, 2006). Analysis was done in the Anaconda Python Environment (version 5.0.0).
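Putting the pieces together, a minimal end-to-end sketch of the Rössler-to-OPN pipeline might look like the following; the RK4 integrator, step size, and function names are our illustrative choices, not the authors' exact code.

```python
import numpy as np
from collections import Counter

def rossler(state, a=0.41, b=2.0, c=4.0):
    """Right-hand side of the Rossler equations with the paper's b and c."""
    x, y, z = state
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def integrate(f, state, dt=0.01, n=20000):
    """Plain RK4 integration; returns the x-coordinate time series."""
    xs = []
    for _ in range(n):
        k1 = f(state)
        k2 = f(state + dt / 2 * k1)
        k3 = f(state + dt / 2 * k2)
        k4 = f(state + dt * k3)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(state[0])
    return np.array(xs)

def ordinal_pattern(v):
    """Permutation of (1-based) indices that would sort v ascending."""
    return tuple(int(i) + 1 for i in np.argsort(v, kind="stable"))

def opn_edges(x, d=6, tau=40):
    """Ordinal partition network as {(pattern_i, pattern_j): P(j | i)}."""
    pats = [ordinal_pattern(x[i:i + d * tau:tau])
            for i in range(len(x) - (d - 1) * tau)]
    counts = Counter(zip(pats, pats[1:]))
    totals = Counter()
    for (p, _), c in counts.items():
        totals[p] += c
    return {(p, q): c / totals[p] for (p, q), c in counts.items()}
```

With the paper's parameters (d = 6, τ = 40), opn_edges(integrate(rossler, np.array([1.0, 1.0, 1.0]))) yields the weighted edge list of the ordinal partition network, with each node's out-going weights summing to one by construction.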
Approximately 8% of the rules had state-transition graphs that collapsed down to a single point (where causal emergence is undefined) across all scales. This set of rules shows a range of behaviours: rule 45 is one of the canonically "interesting" rules, showing complex, chaotic behaviour, while rule 8 evolves to a uniform steady state, and the rest fall into oscillatory modes. This suggests that this kind of collapse is not restricted to a single long-term behaviour.

Figure 1: How the components of causal emergence (determinism, degeneracy, and effectiveness, at micro- and macro-scales) relate to the size of the state-transition graph for each of the 88 unique ECA at all 7 scales. Note that as the systems get larger, macro-networks become more deterministic, while degeneracy seems to be largely unaffected. There may be a subtle trend that systems with a larger number of states are capable of displaying slightly higher emergence as well.

Figure 2: How the components of causal emergence (determinism, degeneracy, and effectiveness, at micro- and macro-scales) relate to each other. Note that as micro-scale degeneracy grows, capacity for causal emergence climbs, while as the macro-degeneracy grows, capacity for causal emergence falls. Macro-scale determinism and degeneracy do not appear to be completely independent either.

Figure 3: The five rules that displayed the highest causal emergence over the whole range of scales, using the same random initial condition for each rule. None of these rules are particularly exciting in terms of long-term behaviour.

Figure 4: The state-transition graphs for the three rules that showed the greatest causal emergence: rule 1 (top row), rule 2 (middle row), and rule 4 (bottom row), when the system is comprised of 6 elements (left column), 8 elements (middle column), and 10 elements (right column).
When visualizing the state-transition graphs, it is clear that all of them contain a large number of star-like motifs that are easily collapsed under renormalization.

Approximately 43% of the rules displayed true causal emergence on average across the range of all scales, while approximately 49% showed causal reduction on average. As with the rules that collapsed, there was not an obvious relationship between rule class and causal emergence. Of the 17 rules typically described as being class 3 or class 4 (showing complex, non-oscillatory behaviour), approximately 30% showed true causal emergence when averaged across scales, while 70% showed causal reduction (for a list of the 17 "interesting" rules, see Supplementary Datasets). An interesting example of these differences is to compare rules 30, 60, and 90, all of which are commonly cited as ECA exhibiting nontrivial behaviour. Rule 90 had a mean causal emergence across scales of 1.05, while rules 30 and 60 showed causal reduction, with mean values of 0.89 and 0.94 respectively. Rule 110, another nontrivial ECA with behaviour described as on the critical boundary between periodicity and chaos, also showed causal reduction, with a mean value of 0.94 across scales.

Interestingly, the rules that showed the highest causal emergence generally had fairly trivial long-term behaviours. The five rules with the highest average causal emergence had mean values ranging from 1.95 for rule 1 down to 1.34 for rule 12 (for visualizations of the space-time diagrams for these five rules, see Figure 3). By examining the state-transition graphs of these rules, we can see that high CE seems to be associated with a large number of communities arranged in star, or hub-and-spoke, motifs. This is consistent with the intuition: all spokes on a star motif lead to the same attractor state and, as a result, are causally equivalent and can be collapsed without loss of information about the future of the system.
In contrast, the systems that displayed the least CE (but did not collapse to single points) had tree-like state-transition graphs (data not shown).

In general, causal emergence was largely consistent across scales, which we take as a promising sign that the algorithm proposed by Griebenow et al. (Griebenow et al., 2019) is robust: the same system at different sizes returns largely the same results. Unexpectedly, however, despite the relative constancy of emergence, the macro-scale determinism grew as the size of the network increased, while degeneracy at the macro-scale was generally constant or trending towards zero as the network size increased (with a few notable exceptions).

In the ECA, we were also able to see how the components of causal emergence (micro-scale determinism and degeneracy, as well as macro-scale components) were related to each other (see Fig. 2). Unsurprisingly, as micro-scale degeneracy increased, the capacity of the system to support CE increased. This is consistent with the notion that degeneracy at the micro-scale allows for efficient coarse-graining and the emergence of informationally rich higher scales. There was also a positive relationship between the macro-scale determinism and CE, which was mirrored by a negative relationship between macro-scale degeneracy and CE. This too is consistent with the notion that CE is an increase in effectiveness at the macro-scale relative to the micro-scale: as effectiveness is highest when determinism is high and degeneracy is low, an increase in macro-scale determinism and a decrease in macro-scale degeneracy corresponds to greater CE.
This is also reflected in a strong negative correlation between micro-scale effectiveness and causal emergence.

These results provide an empirical verification of the theory of causal emergence and detail how different dynamical systems (corresponding to the 88 unique elementary cellular automata), with different long-term behaviours, display unique combinations of determinism, degeneracy, and effectiveness at macro- and micro-scales.
The ordinal partition network (OPN) of the Rössler system time series provides a natural way to bring information-theoretic analysis to bear on a continuous dynamical system, as detailed in (McCullough et al., 2015; Myers et al., 2019). By fixing b = 2, c = 4 and sweeping a through the range 0.37 - 0.43, we can observe how CE changes as the system undergoes a bifurcation cascade leading up to a critical phase transition. We found that determinism underwent a brief climb, followed by a significant drop immediately following the first bifurcation, and then generally climbed until the onset of the period-doubling cascade, at which point it collapsed precipitously. In general, chaotic dynamics were associated with lower levels of determinism. Upon brief transitions back into periodic dynamics, the determinism climbed dramatically, resulting in "deterministic plateaus" surrounded by low-determinism, chaotic dynamics. For visualization, see Figure 5.

Degeneracy displayed a similar pattern to determinism. At the onset of the period-doubling cascade, the degeneracy spiked before falling. As with determinism, the degeneracy seemed to increase any time the system transitioned into a periodic regime, collapsing following the transition back to chaos. Interestingly, the degeneracy appears to "anticipate" (at the risk of anthropomorphizing a mathematical construct) the transition back to determinism and begins to rise prior to the significant leap that occurs at the onset of determinism. At this point, we do not know exactly what dynamical process is producing this effect.

The effectiveness, being the difference between determinism and degeneracy, displayed largely similar patterns to determinism: a spike upon the onset of the period-doubling cascade, followed by a dramatic decrease. As both determinism and degeneracy spiked during periodic regimes, the effectiveness was necessarily lower, although the transitions from chaos to periodicity were still marked by increases in value.
For visualization, see Figure 6. Causal emergence was arguably the least interesting of the measures reported here. It remained noisy around a relatively constant value, with the notable exception that it plunged upon the onset of the period-doubling cascade. We might have expected that emergence would climb at the onset of the phase transition, as many have proposed that complex, emergent phenomena emerge at the "Edge of Chaos."

Figure 5: How the determinism (left), degeneracy (middle), and effectiveness (right) change as the Rössler attractor is swept across the period-doubling cascade and the onset of deterministic chaos. In the Rössler equations, b = 2, c = 4, and a is the independent variable. It is clear that all three components of causal emergence are sensitive to changes in the dynamical regime of the system.

In this paper, we present how the causal emergence framework, first introduced by Hoel et al. (Hoel et al., 2013; Hoel et al., 2016), performs when analysing toy models of discrete and continuous dynamical systems. The discrete systems we used were the 88 unique elementary cellular automata, and we found that the set of rules displayed a wide distribution of determinism, degeneracy, and emergence. Interestingly, the rules that displayed the consistently highest values of emergence were not the ones that displayed the most visually interesting long-term dynamics. To explore emergence in continuous dynamical systems, we constructed discretized state-transition networks for a Rössler system that we swept through a period-doubling cascade. We found that micro-scale determinism, degeneracy, and effectiveness changed significantly depending on the dynamical regime and seemed to be sensitive to changes in dynamics prior to critical phase transitions.
Upon the onset of the period-doubling cascade, the capacity for the system to support higher-level causal emergence decreased dramatically.

Figure 6: The change in causal emergence calculated from ordinal partition networks as the Rössler system is swept through the onset of chaos. Left: notice the significant decrease in emergence that occurs almost exactly at the onset of the period-doubling cascade. Right, top: the same as the bottom left figure, but zoomed in on the period-doubling cascade. Notice that the drop includes 4-5 distinct points, suggesting that it is not an artefact. Right, bottom: to test whether the drop was robust, we re-ran the analysis with a higher sampling rate covering just the period-doubling cascade. The drop, reaching its minimum at a = 0.384, persists, although it is not nearly as significant in terms of absolute magnitude.

Acknowledgements
I would like to thank Dr. YY Ahn, Dr. Randall Beer, Dr. Olaf Sporns, and Dr. Alice Pata-nia for their thoughtful insights and discussions over the course of these projects. I wouldalso like to thank Ross Griebenow for assistance coding the spectral clustering algorithm,and Dr. Eric Hoel for helping me understand the intuition behind causal emergence.
References
Ankerst, M., Breunig, M. M., Kriegel, H.-P., and Sander, J. (1999). OPTICS: Ordering Points To Identify the Clustering Structure. In Proceedings of the ACM SIGMOD International Conference on Management of Data, pages 49-60. ACM Press.

Behnel, S., Bradshaw, R., Citro, C., Dalcin, L., Seljebotn, D. S., and Smith, K. (2011). Cython: The Best of Both Worlds. Computing in Science & Engineering, 13(2):31-39.

Crutchfield, J. P. (1994). The calculi of emergence: computation, dynamics and induction. Physica D: Nonlinear Phenomena, 75(1):11-54.

Crutchfield, J. P. (2012). Between order and chaos. Nature Physics, 8(1):17-24.

Csardi, G. and Nepusz, T. (2006). The igraph software package for complex network research. InterJournal, Complex Systems:1695.

Griebenow, R., Klein, B., and Hoel, E. (2019). Finding the right scale of a network: Efficient identification of causal emergence through spectral clustering. arXiv:1908.07565 [physics].

Hoel, E. P., Albantakis, L., Marshall, W., and Tononi, G. (2016). Can the macro beat the micro? Integrated information across spatiotemporal scales. Neuroscience of Consciousness, 2016(1).

Hoel, E. P., Albantakis, L., and Tononi, G. (2013). Quantifying causal emergence shows that macro can beat micro. Proceedings of the National Academy of Sciences, 110(49):19790-19795.

Hunter, J. D. (2007). Matplotlib: A 2D Graphics Environment. Computing in Science & Engineering, 9(3):90-95.

Jones, E. and Oliphant, T. (2001). SciPy: Open source scientific tools for Python.

Klein, B. and Hoel, E. (2019). Uncertainty and causal emergence in complex networks. arXiv:1907.03902 [physics].

McCullough, M., Small, M., Stemler, T., and Iu, H. H.-C. (2015). Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems. Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(5):053101.

Myers, A., Munch, E., and Khasawneh, F. A. (2019). Persistent Homology of Complex Networks for Dynamic State Detection. arXiv:1904.07403 [nlin, physics].

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, É. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825-2830.

Rössler, O. E. (1976). An equation for continuous chaos. Physics Letters A, 57(5):397-398.

Small, M. (2013). Complex networks from time series: Capturing dynamics. In 2013 IEEE International Symposium on Circuits and Systems (ISCAS), pages 2509-2512.

Walt, S. van der, Colbert, S. C., and Varoquaux, G. (2011). The NumPy Array: A Structure for Efficient Numerical Computation. Computing in Science & Engineering, 13(2):22-30.

Wolfram, S. (2002). A New Kind of Science. Wolfram Media.