Computational capabilities at the edge of chaos for one dimensional systems undergoing continuous transitions
E. Estevez-Rams, D. Estevez-Moya, K. Garcia-Medina, R. Lora-Serrano
Facultad de Física, Instituto de Ciencias y Tecnología de Materiales (IMRE), University of Havana, San Lazaro y L, CP 10400, La Habana, Cuba.
Universidade Federal de Uberlândia, Av. João Naves de Ávila 2121, Campus Santa Mónica, CEP 38408-144, Minas Gerais, Brasil.
(Dated: 15 March 2019)
While there has been a keen interest in studying computation at the edge of chaos for dynamical systems undergoing a phase transition, this has come under question for cellular automata. We show that for continuously deformed cellular automata there is an enhancement of computation capabilities as the system moves towards cellular automata with chaotic spatiotemporal behavior. The computation capabilities are followed by looking into the Shannon entropy rate and the excess entropy, which allow identifying the balance between unpredictability and complexity. Enhanced computation power shows as an increase of excess entropy while the system entropy density has a sudden jump to values near one. The analysis is extended to a system of non-linear locally coupled oscillators that have been reported to exhibit spatiotemporal diagrams similar to cellular automata.

a) Electronic mail: estevez@fisica.uh.cu

I. INTRODUCTION

The transition from regular towards chaotic behavior is one of the defining characteristics of complex non-linear dynamical systems. There has been a keen interest in the behavior of a dynamical system as it undergoes such a transition, which comes from the fact that it is precisely in such a region where enhanced or improved computational capabilities of the system are claimed to be found; this is known as the edge of chaos (EOC) hypothesis. In this context, computation in a dynamical system can have different meanings. The most common interpretation of computational capacity is related to the ability of the system to perform some "meaningful" task. Meaningful can be understood, given some initial condition, as the system transforming the input in some prescribed way into some output data. More broadly, it can be understood as the system's capability to behave as a universal Turing machine given some proper input and "programming".
A related but different point of view is to associate computation with the generic production, storage, transmission, and manipulation of information. It is in this last sense that any dynamical system can be seen as some device amenable to be characterized by its intrinsic computational abilities. Measures of the computational capabilities of a system have been developed that aim at quantifying the efficiency, in terms of resources, while characterizing the balance between irreducible randomness and pattern production.

In favor of the EOC hypothesis, several systems modeling natural and artificial phenomena exhibit interesting, if not some type of critical, behavior precisely at the edge of chaos. It is not clear, for some one-dimensional systems, if the EOC criticality plays a role in the enhancement of computational capabilities, for example in cellular automata. Cellular automata (CA) are discrete-space dynamical systems that act over a spatially extended grid taking values over a finite alphabet, resulting in the evolution of the spatial configuration in discrete time steps. There has been a discussion whether meaningful computation abilities in CA are found and should be sought at EOC regions.

Langton used the density of quiescent states in the cellular automaton rule, usually denoted by the letter λ, as the control parameter to explore such questions. The quiescent state is chosen as some arbitrary value in the CA alphabet, and the λ parameter is the ratio between the number of times the quiescent state does not appear in the CA rule table and the number of entries in the same table. For a binary alphabet CA with nearest neighbor rules (elementary cellular automata or ECA), λ = 1 − n × 2^−3 = 1 − n/8, where n is the number of times the quiescent state (say value 1) appears in the rule table. Monte Carlo simulations were performed over the CA space to conclude that, on average, as λ increases from 0, the rule behavior changes from a fixed point regime, to periodic, to complex, and finally to chaotic, each regime corresponding to one of Wolfram's classification classes. Packard further developed these ideas to conclude that CA capable of meaningful computations happen precisely at those critical values of λ where a transition to chaotic behavior occurs. Packard's results were questioned by Crutchfield et al., who performed Monte Carlo simulations similar to the ones made by Packard, failing to find EOC enhanced computation capabilities.

In this paper, we show that improvement of computational capabilities indeed occurs at the edge of chaos for some continuously extended CA rules, but not following Langton's λ parameter and, therefore, not in the sense of Packard's simulations. We will not be seeking to discover useful computations of the CA but, instead, improvement of intrinsic computation as measured by entropic based measures of structure and disorder. To support this claim we follow the behavior of a CA as one entry in its rule table is continuously varied between 0 and 1 and use the value of the continuously varying entry as control parameter. After discussing CA, we turn our attention to a system of one-dimensional non-linear locally coupled oscillators that have been found to exhibit dynamical behavior similar to that of CA. The quest is to find if in this system, with a continuous control variable space, enhanced computation at the EOC can also be found.

II. CONTINUOUS STATES CELLULAR AUTOMATA
To start formalizing some of the expressed ideas, for our purposes it suffices to consider a one-dimensional CA as a tuple {A, r, φ} consisting of a set of states A, a radius r which defines a neighborhood structure from −r to r, and a local rule φ : A^(2r+1) → A. The local rule acts simultaneously over a numerable ordered set S of cells which can be bi-infinite but, in any case, each cell is properly labeled as in {…, s_−1, s_0, s_1, …}. At a given time step t each cell is in a state belonging to A, and the whole cell array configuration is denoted by S^(t). The evolution of the cell array from time t to time t + 1 is given by the global function induced by φ: ∀ S ∈ A^Z, S^(t+1) = Φ(S^(t)) such that s_i^(t+1) = φ(s_(i−r)^(t), …, s_i^(t), …, s_(i+r)^(t)). S^(t) is also called the spatial configuration of the cell array at time t. The set {S^(0), S^(1), …, S^(t)} is called the spatiotemporal diagram or evolution up to time t.

The spatiotemporal diagram of a CA can show a wide range of behaviors, including the emergence of complex patterns, fractal nature, chaotic behavior, periodic or random evolution, or evolution towards homogeneous spatial configurations. CA have been used as models for different natural phenomena as well as models for computing devices. At least one one-dimensional CA is known to behave as a universal Turing machine, the so-called rule 110.

Consider two-state (A = {0, 1}) cells with a rule defined over a neighborhood of radius r = 1 such that a cell s_i is updated in the next time step according to the current value of the cell and its two nearest neighbors: s_i^(t+1) = φ(s_(i−1)^(t), s_i^(t), s_(i+1)^(t)). Such cellular automata are named elementary cellular automata or ECA.
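As a concrete illustration of these definitions, the ECA update can be sketched in a few lines of Python (an illustrative sketch, not the code used for the paper's simulations); the rule table is indexed by the neighborhood code 4s_(i−1) + 2s_i + s_(i+1):

```python
def rule_table(rule_number):
    """Entry k of the table is the rule output for the neighborhood
    with code k = 4*s_left + 2*s_center + s_right (Wolfram numbering)."""
    return [(rule_number >> k) & 1 for k in range(8)]

def step(config, table):
    """One synchronous update of the whole array, cyclic boundary conditions."""
    n = len(config)
    return [table[4 * config[(i - 1) % n] + 2 * config[i] + config[(i + 1) % n]]
            for i in range(n)]

table110 = rule_table(110)                      # [0, 1, 1, 1, 0, 1, 1, 0]
print(step([0, 0, 0, 1, 0, 0, 0], table110))    # a single 1 grows leftward
# Langton's lambda with quiescent state 0: fraction of non-quiescent entries
print(1 - table110.count(0) / 8)                # 0.625 for rule 110
```

The same table representation is used below when a single entry is allowed to vary continuously.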
To specify a rule function φ, a lookup table can be given, for example as shown in Table I, listing s_i^(t+1) for each triplet (s_(i−1)^(t), s_i^(t), s_(i+1)^(t)). When ordered lexicographically, the rule has the binary representation 00101110, equivalent to the decimal number 46 (the reader should notice that, according to the definition given in the introduction, the λ parameter for this rule is 0.5). There are a total of 2^(2^3) = 256 such rules.

There is no apparent geometry in the CA discrete space, as there is no natural way to define a transition between CA rules in the hypercube allowing for the smooth change of one cellular automaton to a neighboring one while their behavior also changes gradually. One may think of ordering the CA rules as Gray codes, where successive CA differ by a change of only one entry in the rule table. However, Gray code ordering does not lead to a corresponding smooth transition in the CA regimes. There is not even any apparent relation in behavior between adjacent rules. Consider rules 238 and 110, with binary representations 11101110 and 01101110, respectively. Both rules differ in a single entry and yet the former always results in a homogeneous state where each cell in the array has the same value, while the latter, as already stated, can perform as a universal Turing machine.

The inability to define a natural ordering among the CA rules along which behavior changes gradually has hindered the analysis of transitions between CA rules. To overcome this limitation, Pedersen devised a procedure to continuously deform a CA rule into another at the expense of sacrificing the discrete nature of the cell states. Consider rule 46 and rule 110: a continuous transition from the former to the latter can be done if we define a CA rule as 0ξ101110, where ξ goes from 0 to 1 in a continuous manner (Figure 1), such that for ξ = 0 we get rule 46 and for ξ = 1 rule 110 is recovered.
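A minimal sketch of such a deformation follows. For simplicity it uses plain linear interpolation of the rule table between adjacent integer neighborhood codes, rather than the paper's exact interpolating function f of eq. (1); on binary configurations both reduce to the discrete rule:

```python
def deformed_table(xi):
    """Rule table for 0 xi 1 0 1 1 1 0: the entry for neighborhood 110
    (code 6) is the continuous parameter xi; xi = 0 gives rule 46,
    xi = 1 gives rule 110."""
    return [0.0, 1.0, 1.0, 1.0, 0.0, 1.0, xi, 0.0]

def continuous_step(config, table):
    """Cells hold real values in [0, 1]; the neighborhood code
    beta = 4*s[i-1] + 2*s[i] + s[i+1] is now real-valued, and the rule
    table is interpolated linearly between adjacent integer codes."""
    n = len(config)
    out = []
    for i in range(n):
        x = 4 * config[(i - 1) % n] + 2 * config[i] + config[(i + 1) % n]
        k = min(int(x), 6)        # lower integer code, clipped so k + 1 <= 7
        t = x - k
        out.append((1 - t) * table[k] + t * table[k + 1])
    return out

# On a binary configuration with xi = 0 this reduces exactly to rule 46:
print(continuous_step([0, 1, 1, 0, 1, 0], deformed_table(0.0)))
```

Iterating `continuous_step` and thresholding the real-valued states reproduces the kind of spatiotemporal diagrams discussed below.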
In order for the transition to work, first a real-valued function β must be defined, for example β(s_(i−1), s_i, s_(i+1)) = 4 s_(i−1) + 2 s_i + s_(i+1); next, an interpolating function f(x) : [0, 7] → [0, 1] is used over the values of β, in our case

f(x) = 2[f(n + 1) − f(n)](x − n) + f(n),  n ≤ x ≤ n + 1/2,
f(x) = 2[f(n) − f(n + 1)](x − n − 1) + f(n + 1),  n + 1/2 ≤ x ≤ n + 1,  (1)

where n (≡ ⌊x⌋) is the largest integer not exceeding x. As explained, if x is an integer number, f(x) is given by the discrete CA rule.

The price paid in Pedersen's extension is that the states of the cells are no longer binary values {0, 1} but can take any real value in the interval [0, 1]. Simulations were performed over arrays of 5 × 10^3 cells evolving during 5 × 10^3 time steps. Simulation results are then presented by averaging over ten simulations for each ξ parameter value, each with different initial configurations. The last ten spatial configurations were taken for each run, so the final averaging was over 100 spatial configurations. For testing the robustness of the results, calculations were occasionally performed for smaller and larger arrays, with correspondingly equal numbers of time steps, showing no change in the results. Cyclic boundary conditions are used. In all cases, and contrary to Packard's Monte Carlo simulations, the variance of the mean values in our simulations is too small even to be represented as error bars.

Before we proceed, we need some measures that can capture the computational capabilities of a system and distinguish the random portion of the output, on the one hand, from data structuring on the other.

III. ENTROPIC MEASURES
As a measure of unpredictability we use the Shannon entropy rate h_µ. When a bi-infinite sequence S = … s_−3 s_−2 s_−1 s_0 s_1 s_2 s_3 … of emitted symbols s_i has been observed, h_µ is the amount of information produced per symbol. If we consider the bi-infinite sequence as a time series, then it is the amount of new information in an observation of, let us say, cell s_i, considering the state of all previous cells s_j, j < i. In terms of Shannon entropy,

h_µ = H(s_i | …, s_(i−1)),  (2)

where H(X | Y) denotes the Shannon conditional entropy of random variable X given variable Y.

The block entropy of a bi-infinite string S measures the average uncertainty of finding blocks of a given length within the string. Block entropy is calculated by the equation

H_S(L) = − Σ_(S^L ∈ A^L) p(S^L) log p(S^L),  (3)

where the sum goes over all sequences S^L of length L drawn from the alphabet A, and p(S^L) is the probability of observing one particular string S^L in the bi-infinite string S. Entropy density, also known as entropy rate, can be understood as the unpredictability that remains when all that can be seen has been observed, and can also be defined through a limiting process of the normalized block entropies:

h_µ = lim_(L→∞) H_S(L)/L.  (4)

For necessarily finite data, the entropy rate has to be estimated. There is a body of literature dealing with the estimation of the entropy rate for finite sequences. Here h_µ will be estimated through the Lempel-Ziv factorization procedure, which has been extensively discussed in the literature and has been used before in the context of CA. In short, Lempel-Ziv factorization performs a sequential partition of a finite-length string, adding a new factor if and only if it has not been found in the prefix up to the current position of analysis.
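For intuition, the block-entropy route of eqs. (3) and (4) can be sketched as follows; the illustrative test string also shows why the naive finite-L estimate converges slowly, which motivates the Lempel-Ziv estimator used in the paper (a sketch, not production code):

```python
from collections import Counter
from math import log2

def block_entropy(s, L):
    """Empirical H_S(L) in bits: Shannon entropy of the distribution of
    overlapping length-L blocks of the string s (eq. (3))."""
    counts = Counter(s[i:i + L] for i in range(len(s) - L + 1))
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def h_block(s, L):
    """Finite-L approximation of the entropy rate, H_S(L) / L (eq. (4))."""
    return block_entropy(s, L) / L

s = "01" * 500                  # perfectly periodic string: true h_mu = 0
print(round(h_block(s, 4), 3))  # ~0.25: the estimate overshoots at small L
```

Only in the limit L → ∞ does the normalized block entropy reach the true entropy rate, here zero.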
In order to emphasize that a finite-size estimate of h_µ through the Lempel-Ziv procedure is being used, we change to the notation h_LZ instead.

Formally, let s(i, j) be the substring of consecutive symbols s_i s_(i+1) … s_j and π the "drop" operator defined as s(i, j)π = s(i, j − 1) and, consequently, s(i, j)π^l = s(i, j − l), where l is a positive number. The Lempel-Ziv factorization is a partition F(S^N) of a finite string S^N of length N,

F(S^N) = s(1, l_1) s(l_1 + 1, l_2) … s(l_(m−1) + 1, N),

in m factors such that each factor s(l_(k−1) + 1, l_k) complies with

1. s(l_(k−1) + 1, l_k)π ⊂ s(1, l_k)π,
2. s(l_(k−1) + 1, l_k) ⊄ s(1, l_k)π,

except, perhaps, for the last factor s(l_(m−1) + 1, N). The partition F(S^N) is unique for every string. For example, the sequence u = 101101110001 has the Lempel-Ziv factorization F(u) = 1.0.11.0111.00.01, where each factor is delimited by a dot.

For a data stream of length 1 ≪ N < ∞, h_LZ is defined by

h_LZ(N) = C_LZ(S) / (N / log N),  (5)

where C_LZ(S) = |F(S^N)| is the number of factors in the Lempel-Ziv factorization. The entropy rate for an ergodic source is then given by

h_µ = lim sup_(N→∞) h_LZ(N).  (6)

For a finite data stream of length N, where the above limit cannot be strictly realized, the entropy rate is estimated by h_LZ, where we drop the argument N when no ambiguity arises. The error in the estimation for a given value of N has been discussed before.

Excess entropy E, introduced by Grassberger as an effective complexity measure, has been used in several contexts, including the analysis of CA, as a measure of correlation along different scales in a process, and is related to the intrinsic memory of a system. For a bi-infinite string, the excess entropy measures the mutual information between the two infinite halves of the sequence and is related to the correlation of the symbols at all possible scales. Excess entropy can be defined in several equivalent ways, for example through the convergence of the entropy rate:

E(S) = I[…, s_−2, s_−1 : s_0, s_1, …] = Σ_(L=1)^∞ [h_L(S) − h_µ(S)] = Σ_(L=1)^∞ ΔH[L],  (7)

where I[X : Y] = H[X] + H[Y] − H[X, Y] is the mutual information between X and Y, h_L(S) = H_S(L) − H_S(L − 1), and H(X) is the Shannon entropy of the random variable X. As already explained, mutual information is a measure of the amount of information one variable carries regarding the other, and it is a symmetric measure: I[X : Y] = I[Y : X]. The information gain ΔH[L] = h_L(S) − h_µ(S) measures how much the entropy density is being overestimated as a result of measuring the probabilities of string blocks only up to length L; in other words, how much information still has to be learned in order to assert the true entropy of the source.
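The factorization and the estimator of eq. (5) can be sketched as follows (logarithm base 2 is assumed for the binary alphabet, and the substring search is the naive quadratic one):

```python
from math import log2

def lz_factorize(s):
    """LZ76 factorization: a factor grows while, with its last symbol
    dropped, it still occurs in the prefix preceding that last symbol;
    the final factor of the string is allowed to remain reproducible."""
    factors, i, n = [], 0, len(s)
    while i < n:
        l = 1
        while i + l < n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        factors.append(s[i:i + l])
        i += l
    return factors

def h_lz(s):
    """Entropy-rate estimate of eq. (5): C_LZ(S) / (N / log2 N)."""
    n = len(s)
    return len(lz_factorize(s)) / (n / log2(n))

print(lz_factorize("101101110001"))   # ['1', '0', '11', '0111', '00', '01']
```

The printed factorization reproduces the six-factor example given in the text; the h_lz value is only meaningful for long strings, where the finite-size bias of eq. (5) becomes small.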
As the information gain decreases with length L, if such decrease is fast enough, the sum (7) converges. E(S) is the intrinsic redundancy, or apparent memory, of the source. It measures structure in the sense of redundancy being a measure of pattern production. Feldman has proven that equation (7) is equivalent, in the ergodic case, to the mutual information between the two infinite halves of a bi-infinite string coming from the source output. Structure in this context then means how much information one half carries about the other and vice versa, a measure related, again, to pattern production and context preservation as redundancy.

Numerically, when dealing with finite data streams, the excess entropy has to be effectively estimated. Here we use a random-shuffle-based excess entropy estimate, built on the Lempel-Ziv estimation of the entropy rate and given by

E_LZ = Σ_(M=1)^(M_max) [h_LZ(S^(M)) − h_LZ(S)].  (8)

S^(M) is a surrogate string obtained by partitioning the string S into non-overlapping blocks of length M and performing a random shuffling of the blocks. The shuffling for a given block length M destroys all correlations between symbols at lengths larger than M while keeping the same symbol frequencies. M_max is chosen appropriately, given the sequence length, so as to avoid fluctuations. Although E_LZ is not strictly equivalent to the excess entropy as given by equation (7), it is expected to behave in a similar manner.

Finally, the information distance d(s, p) will be used. Information distance is derived from Kolmogorov randomness theory.
The Kolmogorov randomness K(s) of a sequence s is the length of the shortest program s* that can reproduce the sequence s (K(s) = |s*|); accordingly, K(s | p*), known as the Kolmogorov conditional randomness, is the length of the shortest program able to reproduce s if the program p*, reproducing the string p, is given. The information distance is defined by

d(s, p) = max{K(s | p*), K(p | s*)} / max{K(s), K(p)}.  (9)

d(s, p) is an information-based distance measuring, from an algorithmic perspective, how correlated two sequences are: if two sequences can be derived one from the other by a small-sized algorithm, then the corresponding d(s, p) is small. Following Estevez et al., we estimate d(s, p) via Lempel-Ziv by

d_LZ(s, p) = [C_LZ(sp) − min{C_LZ(s), C_LZ(p)}] / max{C_LZ(s), C_LZ(p)},  (10)

which has the same interpretation as d(s, p) to the extent that the normalized estimate (5) approximates the entropy density.

IV. RESULTS

A. Continuously deformed cellular automata
Having defined our entropic measures of randomness and complexity, we first explored the CA space. The logic followed was to focus on rule 110 and all rules that result from flipping a single entry in its rule table (flipping can happen as 0 → 1 or 1 → 0). As the flipped entry is varied continuously, the system goes, at some ξ value, to a disordered state where the entropy rate reaches the maximum value of 1 and then, at some other ξ value, the system transforms to a behavior similar to that of the final rule. For specific ending rules, a clear jump in the excess entropy was found at the verge of falling into the final rule. That is the case for the transformation to ending rule 46 (binary representation 00101110), so the behavior around this rule was studied modifying each entry one at a time.

Figure 2 shows the observed entropic measures for ξ values near zero in the fifth entry (001ξ1110), starting at rule 46 (ξ = 0) and moving towards rule 62 (ξ = 1). A small range of ξ values is shown to emphasize details of the transition. E_LZ has a clear peak-shaped maximum at a narrow window of ξ values, coincident with the jump in h_LZ. At the starting rule 46, the spatiotemporal diagram (Fig. 2 left a) shows a shift dynamic: the initial random condition rapidly settles into a dynamic where the site values are spatially shifted with respect to the previous time step. At the value of ξ = 0.05 (Fig. 2 left c), the spatiotemporal diagram is characteristic of cellular automata chaotic behavior, as has been reported before (compare, for example, with the spatiotemporal diagram of rule 101 of class 3 in Wolfram's automata classification). The spatiotemporal diagram at the transformation point, ξ = 0.018 (Fig. 2 left b), while largely keeping the shift behavior, shows the production of local structures that persist in time while traveling across the cell sites. Interactions between the structures are also seen. This narrow window of ξ, where the peaked maximum of E_LZ is observed, is, we believe, a region of enhancement of the computation capabilities of the system at the EOC.

Large E_LZ comes as a result of pattern prevalence over disorder: a high-disorder configuration has a large entropy density, estimated by h_LZ, and an almost zero excess entropy, estimated by E_LZ. This relation between entropy rate and excess entropy has been discussed numerous times in the past. In a random sequence, there is no correlation between site values at any length scale and, therefore, no mutual information can be found between the two halves of the sequence. An increase in excess entropy can come from the formation of patterns not involving symbol erasure. If we relate computational capability, as stated in the introduction, to the storage, transmission, and manipulation of information, then the excess entropy increase signals an enhancement of such capabilities. Indeed, an increase in excess entropy means larger mutual information between the two halves of a sequence, which is directly related to storage and, less directly, to transmission of information. As excess entropy is also related to pattern formation, it can also signal the manipulation of information. So the increase of E_LZ is related to the enhancement of computation. The second condition that must be tested is that the jump in entropy density witnesses a phase transition to a chaotic regime.

In order to test the hypothesis of a transition to a chaotic regime, we performed the following simulation: the initial configuration was perturbed by changing one cell value in the cell array, and the system was left to evolve for the same number of time steps. We then calculated the Lempel-Ziv distance d_LZ between the final configurations of the non-perturbed and perturbed systems. A high value of d_LZ shows high sensitivity to initial conditions; conversely, low values of the Lempel-Ziv distance are taken as showing low sensitivity to initial conditions. A similar procedure was used by Packard and by Crutchfield et al., using a different parameter, based on the Hamming distance, called the difference-pattern spreading rate γ. As discussed above, d_LZ differs from γ in that the former is not Hamming-distance related but, instead, is based on how innovative one sequence is with respect to the other concerning pattern production. In all our simulations, d_LZ does not have the fluctuations reported for γ.

In the two upper plots of figure 2, the excess entropy and d_LZ are shown as functions of the ξ parameter. At the same values of ξ where E_LZ reaches its maximum, d_LZ also jumps in a step-like manner. For the range of ξ beyond the transition window, the system shows high values of d_LZ. High sensitivity to initial conditions is a fingerprint of chaotic behavior. As soon as h_LZ jumps, so does d_LZ, so the high entropy region is also a region of chaotic behavior. The region of peaked E_LZ has intermediate values of entropy rate and d_LZ.

Low values of entropy rate can come as a result of the erasure of one symbol, say 0, at the expense of the other. In order to clarify how much of the entropy rate behavior could be due to the erasure (production) of 0s (1s), the entropy rate of random arrangements of cell values with different symbol densities was calculated over the whole range of possible values. In figure 2b, the curve labeled h_ρ is the entropy rate for the random arrangement having the same symbol density as the corresponding CA sequence with entropy density h_LZ for the given ξ value.
The difference between both curves is a measure of how much the lowering of the entropy rate is due to the structuring of the spatial cell values, independent of the overproduction of one symbol. Such a difference has been called multi-information in other contexts. It must be noted that, just before the entropy jump, in the region of the EOC, the multi-information increases significantly, as h_ρ jumps before h_LZ.

Besides the sensitivity to initial conditions, we looked for an additional criterion supporting the idea that, after the E_LZ peak, the system is indeed in a chaotic regime. A dynamical system is said to be chaotic if and only if it is transitive and its set of periodic points is dense in the phase space. For a system to be transitive, it is not hard to see that surjectivity is a necessary condition. Hedlund proved that a one-dimensional discrete CA is surjective if and only if it is k-balanced for all k ∈ N. We say that a discrete CA with radius r = 1 is k-balanced if ∀ y ∈ A^k, |{x ∈ A^(k+2) | φ(x) = y}| = |A|^2, where for a set V, |V| means cardinality. For the binary alphabet used here, 1-balance means that the rule table must have the same number of 1s and 0s, namely 4. To test k-balance, a random initial configuration is chosen such that the density of either symbol is 1/2; all spatial configurations up to time step k must then conserve the same density for both symbols. Rule 46 is 1-balanced but is not k-balanced for k > 1. For continuous CA, proving k-balance for all k ∈ N can be more involved, as the time evolution occurs over a continuous-state cell array while the output is the discrete binary projection of the continuous cell array. We use the following criterion: if, starting with a random initial binary configuration, after a sufficient number of time steps the spatial configurations of the discrete projection preserve the density of both symbols at 1/2, then the continuous rule is taken to be k-balanced. If we assume that Hedlund's condition is still necessary, we need to prove that the high entropy states in figure 2 are k-balanced, while the low entropy states are not. We performed such a test numerically and found it to be true. The compliance with Hedlund's rule, together with the sensitivity to initial conditions, points to the idea that the values of ξ where E_LZ has a peaked maximum certainly form a region at the EOC.

If k-balance of the CA is a necessary condition for chaotic behavior, then CA rules that are unbalanced in more than one entry will never achieve such balance by changing a single entry in the rule table. Of the neighboring CA to rule 110, that is the case for rules 238, 126 and 111, where, indeed, we did not find any chaotic regime upon continuous deformation of any entry in the rule table and, therefore, no EOC enhancement (see supplementary material). Furthermore, it can be hypothesized that non-chaotic rules that are 1-balanced, or can be brought to 1-balance by the deformation of a single entry in the rule table, are those where a transformation to a chaotic regime could be expected. Such is the case, in the set of ECA, for thirty-seven rules, including rule 46 but also rules 78 and 108, which are also neighbors of rule 110. We have found a chaotic transition for rule 78 at the rule table entry corresponding to the 101 triplet, and also enhancement at the EOC for this transition. Strong fluctuations in density were found in the EOC region, differing from rule 46. The chaotic transition was also found for rule 108, together with a smaller but clear peak of E_LZ at the EOC.

B. Non-linear locally coupled oscillators (NLCO)
Next, we set out to explore whether the same enhancement of computational capabilities at the EOC could be found in a system of non-linear coupled oscillators introduced by Alonso. Alonso has argued about the similarities in the behavior of the NLCO and CA. Coupled nonlinear oscillators have been extensively studied in relation to many natural phenomena such as chemical reactions, neural brain activity, and insect synchronization. The model used here consists of a system of locally coupled oscillators using, under the weak coupling assumption, a simply modified Adler equation. The system exhibits, for certain ranges of values of its control parameters, complex behavior. The local nature of the coupling, where the phase of one oscillator is directly related to that of two neighbors, and the rich set of behaviors resemble the phenomena found in CA. The array of coupled oscillators obeys, for the phase evolution, the system of equations

dθ_i/dt = ω + γ cos θ_i + (−1)^i [cos θ_(i−1) + cos θ_(i+1)].  (11)

ω is the proper frequency of the oscillators, and γ controls the self-feedback of the oscillator relative to the coupling given by the third term on the right-hand side of equation (11). The number of oscillators N is taken even, and the alternating factor in the coupling term guarantees balance in the interaction across the array of oscillators. Cyclic boundary conditions are enforced. The state of the system is given by the values, at a given time, of the phases θ_i of each oscillator, which will be observed indirectly through the activity sin θ_i. As done with the CA, a similar binary partition scheme is introduced: the mean value of the activity is measured for a given time step and used as a threshold to binarize the data.

Simulations were performed following the same criteria as in the CA case: 5000 coupled oscillators were left to evolve for 5000 discrete time steps while the activity of the oscillators was recorded.
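The numerical experiment of eq. (11), including the binarization of the activity, can be sketched as follows; the array size, step size, and (ω, γ) values here are illustrative assumptions, not the production settings of the paper:

```python
import math

def phase_derivatives(theta, omega, gamma):
    """Right-hand side of eq. (11) for a cyclic chain of N oscillators
    (N even): omega + gamma*cos(theta_i) + (-1)^i [cos(theta_(i-1)) + cos(theta_(i+1))]."""
    n = len(theta)
    return [omega + gamma * math.cos(theta[i])
            + (-1) ** i * (math.cos(theta[(i - 1) % n]) + math.cos(theta[(i + 1) % n]))
            for i in range(n)]

def rk4_step(theta, dt, omega, gamma):
    """One fixed-step 4th-order Runge-Kutta update of all phases."""
    def shifted(base, k, c):
        return [b + c * v for b, v in zip(base, k)]
    k1 = phase_derivatives(theta, omega, gamma)
    k2 = phase_derivatives(shifted(theta, k1, dt / 2), omega, gamma)
    k3 = phase_derivatives(shifted(theta, k2, dt / 2), omega, gamma)
    k4 = phase_derivatives(shifted(theta, k3, dt), omega, gamma)
    return [t + dt / 6 * (a + 2 * b + 2 * c + d)
            for t, a, b, c, d in zip(theta, k1, k2, k3, k4)]

def binarized_activity(theta):
    """Observable used in the text: activity sin(theta_i), thresholded
    at its spatial mean for the current time step."""
    act = [math.sin(t) for t in theta]
    mean = sum(act) / len(act)
    return [1 if a > mean else 0 for a in act]

# illustrative run: 8 oscillators, uniform initial phases perturbed at one site
theta = [0.1] * 8
theta[3] += 0.05
for _ in range(100):
    theta = rk4_step(theta, 0.01, 2.2, 1.5)
print(binarized_activity(theta))
```

Stacking the binarized rows over time yields the spatiotemporal diagrams analyzed with h_LZ, E_LZ, and d_LZ.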
Averaging was performed over 30 different random initial phase configurations. The system of equations (11) was solved numerically using a Runge-Kutta method of order 4.

In the case of the oscillators, the variables (ω, γ) vary continuously, and the control parameter space is therefore continuous. A detailed study of the "phase" diagram has been done before (Figure 3 shows the phase diagram taken from Ref. 36). There are several particular regions in the control parameter space. We focused on the needle-shaped region surrounded by a connected chaotic region (region labeled as needle in Figure 3). Previous studies have indicated that the needle region exhibits low values of entropy rate and is insensitive to perturbations in the initial configuration of the oscillators, while the surrounding chaotic region has a high entropy rate and high d_LZ values (see figures 3 and 7 in Estevez et al.). We chose a fixed γ value and let ω vary in an interval starting at 2.10. A peaked maximum of E_LZ is witnessed at the same ω values where the entropy rate jumps from low values to high disorder, attaining the maximum possible value of h_µ = 1. The peaked maximum of E_LZ also coincides with the jump in the d_LZ values. The spatiotemporal diagrams at the transition point (Fig. 4 left b, and supplementary material) show waves traveling across time and across the cell sites, together with some local traveling structures; for smaller values of ω (Fig. 4 left a, and supplementary material) the local structures have disappeared, and for larger values, ω = 2.35 (Fig. 4 left c, and supplementary material), a chaotic regime can be identified.

In the case of the NLCO, no significant change in the production of one symbol is found, and the density of 1s is roughly constant; therefore, the decrease of h_µ for the smaller values of ω is a result of correlations between the oscillator phases. Similar to the CA, enhancement of computation at the EOC is also found.

V. COMPLEXITY-ENTROPY MAPS

We finally look into the complexity-entropy map of both systems. The entropy map for cellular automaton rule 78 is shown in figure 5a. Rule 46 has a similar diagram. The map was calculated by continuously varying the sixth entry in the rule table. Feldman et al. discussed the interpretation of complexity-entropy diagrams; they measure the system's intrinsic computation capabilities in a parameter-free way. As already discussed, we also found enhanced computation at the EOC for rule 78 (see supplementary material). The complexity-entropy map of figure 5a is typical of systems transforming from a non-chaotic to a chaotic regime; the reader may compare our plot with the one reported by Crutchfield and Young for the logistic map exhibiting the well-known period-doubling cascade. For values of entropy rate near zero, the system is rather simple, and the E_LZ values are small. It is only with the introduction of some unpredictability that the system can accommodate more complex behavior. It is precisely at intermediate values of h_LZ that E_LZ attains the maximum that corresponds to the EOC. However, it is only up to a certain point that a system can accommodate randomness while attaining higher intrinsic computation capability. Further increase of the entropy rate results in a decrease of the system's intrinsic computation capability and, at h_LZ = 1, E_LZ is again zero. A similar interpretation can be made of the complexity-entropy diagram for the NLCO; the main difference is that the curve is smoother, pointing to a continuous first derivative.
The striking similarity between our plots and those reported by Crutchfield and Young for well-studied systems further points to the fact that the studied systems undergo a transition to chaotic behavior as the parameter ξ is varied.

VI. CONCLUSIONS
In conclusion, we have argued for the existence of enhanced computation at the edge of chaos for continuously deformed cellular automata, in contrast with the reports of Langton and Packard. By using a continuously varying parameter that allows transforming one CA rule into another, a geometry in the CA rule space was recovered. Our approach does not carry the limitations pointed out against Langton's studies. The evidence strongly points to the existence, for specific CA rules, of a transition towards a chaotic regime where enhanced intrinsic computation can be found at the EOC. We directly measured such enhancement by looking into a measure of structural complexity while following the behavior of the entropy rate of the system. k-balanced analysis seems to be a useful tool to identify other rules where the same phenomenon could be expected. The analysis was extended to a system of nonlinear coupled oscillators whose behavior has a strong resemblance to CA spatiotemporal patterns. For this system, improved intrinsic computation at the EOC was also identified. The complexity-entropy diagrams show a striking resemblance to the ones reported for well-studied non-linear systems where chaotic transitions are known to happen.

VII. ACKNOWLEDGMENTS
This work was partially financed by FAPEMIG under project BPV-00047-13. EER wishes to thank PVE/CAPES for financial support under grant 1149-14-8. Infrastructure support was given under project FAPEMIG APQ-02256-12.
REFERENCES

J. P. Crutchfield and K. Young. Computation at the onset of chaos. In W. H. Zurek, editor, Complexity, Entropy, and the Physics of Information, pages 223–269. Addison-Wesley, Redwood City, 1990.

C. G. Langton. Computation at the edge of chaos. Physica D, 42:12–37, 1990.

S. A. Kauffman and S. Johnson. Co-evolution to the edge of chaos: Coupled fitness landscapes, poised states, and co-evolutionary avalanches. In C. G. Langton, C. Taylor, J. Doyne Farmer, and S. Rasmussen, editors, Artificial Life II, pages 325–368. Addison-Wesley, Redwood City, 1992.

J. P. Crutchfield, P. T. Hraber, and M. Mitchell. Revisiting the edge of chaos: evolving cellular automata to perform computations. Complex Systems, 7:89–130, 1993.

J. P. Crutchfield. Between order and chaos. Nature Physics, 8:17–24, 2012.

Z. Su, R. W. Rollins, and E. R. Hunt. Universal properties at the onset of chaos in diode resonator systems. Phys. Rev. A, 40:2689–2697, 1989.

R. V. Sole, S. C. Manrubia, B. Luque, J. Delgado, and J. Bascompte. Phase transitions and complex systems. Complexity, 1:13–26, 1996.

N. Bertschinger and T. Natschläger. Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16:1413–1436, 2004.

J. M. Beggs. The criticality hypothesis: how local cortical networks might optimize information processing. Phil. Trans. R. Soc. A, 366:329–343, 2008.

J. Boedecker, O. Obst, J. T. Lizier, N. M. Mayer, and M. Asada. Information processing in echo state networks at the edge of chaos. Theory in Biosciences, 131:205–213, 2012.

S. Wolfram. A New Kind of Science. Wolfram Media Inc., Champaign, Illinois, 2002.

N. H. Packard. Adaptation towards the edge of chaos. In J. A. S. Kelso, A. J. Mandell, and M. F. Shlesinger, editors, Dynamic Patterns in Complex Systems, pages 293–301. World Scientific, Singapore, 1988.

L. M. Alonso. Complex behavior in chains of nonlinear oscillators. Chaos, DOI:10.1063/1.4984800, 2017.

M. Cook. Universality in elementary cellular automata. Complex Systems, 15:1–40, 2004.

C. Savage. A survey of combinatorial Gray codes. SIAM Rev., 39:605–629, 1997.

J. Pedersen. Continuous transitions of cellular automata. Complex Systems, 4:653–665, 1990.

J. P. Crutchfield and D. Feldman. Regularities unseen, randomness observed: Levels of entropy convergence. Chaos, 13:25–54, 2003.

T. M. Cover and J. A. Thomas. Elements of Information Theory, Second Edition. Wiley-Interscience, New Jersey, 2006.

T. Schürmann and P. Grassberger. Entropy estimation of symbol sequences. Chaos, 6:414–427, 1996.

P. E. Rapp, C. J. Cellucci, K. E. Korslund, T. A. A. Watanabe, and M. A. Jimenez-Montano. Effective normalization of complexity measurements for epoch length and sampling frequency. Phys. Rev. E, 64:016209–016217, 2001.

A. Lesne, J.-L. Blanc, and L. Pezard. Entropy estimation of very short symbolic sequences. Phys. Rev. E, 79:046208–046217, 2009.

F. Kaspar and H. G. Schuster. Easily calculable measure for the complexity of spatiotemporal patterns. Phys. Rev. A, 36:842–848, 1987.

E. Estevez-Rams, R. Lora-Serrano, C. A. J. Nunes, and B. Aragón-Fernández. Lempel-Ziv complexity analysis of one-dimensional cellular automata. Chaos, 25:123106–123116, 2015.

A. Lempel and J. Ziv. On the complexity of finite sequences. IEEE Trans. Inf. Theory, IT-22:75–81, 1976.

J. Ziv. Coding theorems for individual sequences. IEEE Trans. Inf. Theory, IT-24:405–412, 1978.

J. M. Amigó and M. B. Kennel. Variance estimators for the Lempel-Ziv entropy rate estimator. Chaos, 16:43102–43108, 2006.

P. Grassberger. Towards a quantitative theory of self-generated complexity. Int. J. Theor. Phys., 25:907–938, 1986.

O. Melchert and A. K. Hartmann. Analysis of the phase transition in the two-dimensional Ising ferromagnet using a Lempel-Ziv string-parsing scheme and black-box data-compression utilities. Phys. Rev. E, 91:023306–023317, 2015.

A. N. Kolmogorov. Three approaches to the concept of the amount of information. Probl. Inf. Transm. (English Transl.), 1:1–7, 1965.

M. Li, X. Chen, X. Li, B. Ma, and P. M. B. Vitányi. The similarity metric. IEEE Trans. Inf. Theory, 50:3250–3264, 2004.

D. P. Feldman, C. S. McTague, and J. P. Crutchfield. The organization of intrinsic computation: complexity-entropy diagrams and the diversity of natural information processing. Chaos, 18:043106–043121, 2008.

J.-C. Dubacq, B. Durand, and E. Formenti. Kolmogorov complexity and cellular automata classification. Theor. Comput. Sci., 259:271–285, 2001.

G. A. Hedlund. Endomorphisms and automorphisms of the shift dynamical system. Math. Systems Theory, 3:320–375, 1969.

E. Mosekilde, Y. Maistrenko, and D. Postnov. Chaotic Synchronization: Applications to Living Systems. World Scientific, Singapore, 2006.

R. Adler. A study of locking phenomena in oscillators. Proc. of the IEEE, 61:1380–1385, 1973.

E. Estevez-Rams, D. Estevez-Moya, and B. Aragón-Fernández. Phenomenology of coupled nonlinear oscillators. Chaos, 28:23110–23121, 2018.

J. P. Crutchfield and K. Young. Inferring statistical complexity. Phys. Rev. Lett., 63:105–108, 1989.
FIG. 1. Diagram of the continuous transition from rule 46 to rule 110 by changing the seventh entry.

FIG. 2. E_LZ (upper left plot) and Lempel-Ziv information distance d_LZ (middle left plot) for the cellular automaton rule 46 (ξ = 0) changing the fifth entry in the rule table towards rule 62 (ξ = 1). The plot shows ξ values in the range (0.0, 0.05) to zoom into the initial phase-transition region. Note the clear jump at around ξ ≈ 0.018 for the entropy rate and d_LZ, just before E_LZ reaches its maximum. The blue line labeled h_ρ corresponds to the entropy rate of a uniformly distributed random configuration of cells with the same density of ones as the cellular automaton cells for the same ξ value. Calculations were performed with 5000 cells for 5000 time steps. The reported values are the average over simulations for 10 different random initial cell configurations. For each run, the last ten spatial configurations were taken, making the effective averaging over 100 spatial configurations. On the right, the spatiotemporal diagrams at the points marked by arrows in the upper left plot.

FIG. 3. The entropy density estimates for the NLCO from the Lempel-Ziv complexity over the control-parameter (ω, γ) space. In all cases the number of oscillators is N = 10 and 10 time steps are taken. (above) Corresponds to the c_LZ value from the final configuration of the oscillators after 10 steps. Each point is the average of 100 runs. (below) A diagram of the different regions identified in the two maps above (absorbing, wedge, needle, spoon, chaotic, and turbulent regions). Taken from Ref. .

FIG. 4. The same measures as in Fig. 2 but for an array of 5000 non-linear locally coupled oscillators following an Adler-type equation. A partition over the phase activity is taken using the mean value of the phases. Notation follows that of Fig. 2. In this case, h_ρ is the entropy rate for the random arrangement having the same symbol density as the corresponding sequence of the NLCO for the given ω value. Again, note the clear jump in entropy rate and d_LZ around ω ≈ 2.23, just after E_LZ reaches a maximum. Averaging was performed over 30 random initial phase configurations of the oscillators. On the right, the spatiotemporal diagrams at the points marked by arrows in the upper left plot (high-resolution spatiotemporal diagrams can also be found in the supplementary material).
FIG. 5. Entropy-complexity diagrams for the cellular automaton rule 78 (upper plot) and the system of non-linear oscillators (lower plot). Rule 78 was continuously deformed using as ξ parameter the fifth entry in the rule table (010ξ1110). There is a threshold value h*_LZ ≈ 0.2; below that point the regular behavior corresponding to rule 78 prevails, which behaves as a type II automaton according to Wolfram's classification. Above h*_LZ, chaotic behavior is observed. The diagram corresponding to the oscillator system has a smoother curve, but still, only a limited amount of randomness is allowed in the system in order for E_LZ to achieve a maximum; above that level of unpredictability the system complexity starts falling towards zero.
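One simple way to realize the continuous deformation of a rule-table entry described in the captions is to read a fractional entry ξ as the probability that the corresponding neighborhood outputs 1. This is only a plausible reading sketched for illustration; the deformation scheme actually used in the paper may be different, and the function name and indexing convention below are ours. With this reading, deforming entry 4 interpolates between rule 46 (ξ = 0) and rule 62 (ξ = 1), the pair used in Fig. 2.

```python
import random

# Rule 46 lookup table: entry k holds the output bit for the
# neighborhood (left, center, right) with value k = 4*left + 2*center + right.
RULE_46 = [(46 >> k) & 1 for k in range(8)]

def deformed_step(cells, table, entry, xi, rng=random):
    """One synchronous update (periodic boundaries) of an elementary CA
    whose rule-table position `entry` is deformed to the fractional value
    xi: that neighborhood outputs 1 with probability xi, all others follow
    the table. xi = 0 and xi = 1 recover the two deterministic end rules."""
    n = len(cells)
    out = []
    for i in range(n):
        k = 4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]
        if k == entry:
            out.append(1 if rng.random() < xi else 0)
        else:
            out.append(table[k])
    return out
```

At ξ = 0 the update is exactly rule 46 (whose bit 4 is 0), at ξ = 1 it is exactly rule 62 = 46 + 2⁴, and intermediate ξ gives a stochastic mixture whose statistics vary continuously with ξ, which is what restores a geometry to the otherwise discrete rule space.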