Coarse Graining of Partitioned Cellular Automata
Pedro C.S. Costa and Fernando de Melo

Department of Physics and Astronomy, Macquarie University, Sydney, New South Wales 2109, Australia
Centro Brasileiro de Pesquisas Físicas - CBPF, Rua Dr. Xavier Sigaud 150, Urca, Rio de Janeiro, 22290-180, RJ, Brasil

(Dated: April 13, 2020)

Partitioned cellular automata are known to be a useful tool to simulate linear and nonlinear problems in physics, especially because they allow for a straightforward way to define conserved quantities and reversible dynamics. Here we show how to construct a local coarse-grained description of partitioned cellular automata. By making use of this tool we investigate the effective dynamics in this model of computation. All examples explored are in the scenario of lattice gases, so that the information lost after the coarse graining is related to the number of particles. It becomes apparent how difficult it is to retain a deterministic dynamics after coarse graining. Several examples are shown where an effective stochastic dynamics is obtained after a deterministic dynamics is coarse grained. These results suggest why random processes are so common in nature.
I. INTRODUCTION
The process of emergence in physics typically occurs when we move from a microscopic to a macroscopic description [1–3]. Frequently, because of the weak sensitivity of our detectors, combined with the lack of information about the complete system, the dynamics we observe does not unveil the totality of the microscopic system. For instance, an electrically neutral structure is, in general, built out of interactions between positive and negative charges. Often our detector cannot access the full description of the system, and as such it only gives us the information that the system is neutral. The same idea can be transposed to spin particles: very often our detectors cannot distinguish whether there are two neighboring particles with spins pointing in the same direction, and in the end they only register an effective spin. But this is exactly what we want in several cases, that is, to work with fewer degrees of freedom, thus demanding fewer resources, while still capturing all the essential information. In more general terms, emergent processes arise spontaneously because of the high number of interacting subsystems, with no central control [4]. Furthermore, even if we have a complete understanding of these individual parts, we cannot predict when and what will emerge, which makes the study of emergence a hard task [1].

In physics, a tool that is very often used to study emergence is known as coarse graining (CG). In statistical mechanics, the concept of CG appears when we deal with renormalization methods [1], and it also plays an important role in models for biomolecular dynamics [5]. Moreover, when a huge number of particles is considered in a microscopic system, one has to deal with several coupled differential equations.
In general, in realistic cases, there are several boundary conditions involved in these problems, so that, in the end, one is forced to rely on numerical methods for differential equations to describe systems with a large number of degrees of freedom, which is very difficult to manage [6]. In this situation, a good alternative is to figure out which are the relevant degrees of freedom for describing the system, i.e., the properties of interest in the simulation at issue. By doing that, fewer parameters can be employed, rendering the simulation more efficient in terms of the required resources. It is therefore clear why it is so important in physics to understand and predict the emergence of large-scale behavior in a system, starting from its microscopic description.

In the present work, cellular automata are employed to study emergence. A cellular automaton (CA) is a lattice of cells such that, at any moment in time, each cell is in one out of a finite set of discrete states. At each discrete time step the state of each and every cell is updated according to some local transition function. Cellular automata are paradigmatic models of complex systems, since their temporal dynamics is totally given by local operations, without any central control [7, 8]. Many systems in nature have these characteristics, such as ant colonies [9] and brains [10]; after all, despite the fact that the individual components of these systems are relatively well understood, as are the local interactions between them, it is often hard, if not impossible, to predict what will emerge in terms of the formation of complex colonies and brain functionalities.
Although CAs have a simple formulation (local rules uniformly acting on all cells in synchronous fashion), their dynamics is extremely rich, which renders them appealing for building computational models for a range of systems, as in biology [11], cryptography [12] and fluid dynamics [13]. Since the focus here is to study emergence in physics, where the properties of conservation and reversibility play an important role, we employ a cellular automata class known as partitioned cellular automata (PCA). Although the notions of reversibility and/or conservation are present in the context of CAs [14–17], these properties can be achieved more easily in the PCA, or block automaton, as proposed by Toffoli and Margolus [18] and further developed by Morita [14]. By employing a PCA, the concepts of reversibility and conservation become straightforward.

In tune with the results by Israeli and Goldenfeld [2] and by Oleg [3], our main goal is to develop a tool to study effective dynamics of classical systems. Just like they did, we developed a coarse-graining technique in order to allow us to explore emergent dynamics at different scales. But there are two noticeable differences between our work and theirs. First, while Oleg's work does not use any internal space structure, constrained by local rules of evolution and interaction, ours does, as it relies upon a description in terms of a PCA; see Fig. 1. The second difference that sets our work apart from the one in [3] comes from the fact that, while he developed a classical CG technique to explore only stochastic processes, our model can work with both deterministic and stochastic processes.
In comparison with the results established in [2], which employ Wolfram's CAs [8] and are more directed toward the computer science community, ours is more interesting for physics, as PCAs can describe many distinct dynamics in physics [13, 19–22]. Furthermore, the differences between our approach and that of [2] are also manifested in their structural differences: as we rely upon a PCA, the structure of its transition function allows us to establish not only temporal but also spatial CG. All these differences will be clarified later on in the text.

FIG. 1. PCA coarse-graining illustration. While E represents the transition function of some PCA, which maps its state Φ from time t to t′, ˜E represents the effective transition function that now maps the PCA state ˜Φ, obtained after the coarse graining, from t to t′.

This paper is organized as follows. In Section II we introduce a definition of partitioned cellular automata (PCAs), and also show their general behavior in one dimension in terms of permutation operators. In Section III the procedure for coarse graining the PCA is presented, which is then analysed and discussed in the subsequent section. Section V concludes by summarizing and commenting on the results achieved, as well as discussing perspectives for possible future efforts.
II. CHARACTERIZATION OF A PCA
Formally, a partitioned cellular automaton (PCA) can be defined as follows:
Definition 1 (PCA). A Partitioned Cellular Automaton is a 5-tuple (L, N, Σ, {T_i}, {σ_i}) consisting of:

1. A d-dimensional lattice of cells indexed by integers, L ⊆ Z^d;

2. A finite neighborhood scheme N ⊆ L;

3. Each cell is divided into n subcells, and to the i-th subcell we assign a copy Σ_i of a finite alphabet. The total alphabet associated to each cell is then Ξ = Σ_0 × ··· × Σ_{n−1};

4. A finite set of M tilings {T_i}_{i=0}^{M−1}. Each tiling is the union of identical non-overlapping tiles, T_i = ∪_j T_j^{(i)}, with each tile T_j^{(i)} containing only subcells of neighboring cells;

5. A set of local functions {σ_i}_{i=0}^{M−1}. The operator σ_i is applied to each tile T_j^{(i)} of the tiling T_i.

With this definition, the transition function E : Ξ^L → Ξ^L, which updates the global automaton state Φ_t ∈ Ξ^L from time t to t +
1, is given by

E = ∏_{i=0}^{M−1} ×_{T_j^{(i)} ∈ T_i} σ_i.   (1)

In this perspective, the state update from t to t + 1 is carried out tile by tile by the local operators σ_i. The number of local operators is defined by the number of tilings, i.e., uniform partitions of the set of subcells, used to define the PCA. This definition gives us freedom to access different dynamics and to apply our model to more complicated geometries.

In order to work with tilings more precisely, it is convenient to label each subcell. Given the cell at position x ∈ L, its subcells are denoted by x_i, with i ∈ {0, ..., n−1}. For instance, suppose we have a one-dimensional lattice, L = Z, where each cell has two subcells, and the neighborhood scheme is N_x = {x−1, x, x+1}. In this case two tilings are sufficient to evolve the automaton: the first one given by T_0 = ∪_{x∈Z} T_x^{(0)}, with each tile defined as T_x^{(0)} = {x_0, x_1}; the second tiling could be T_1 = ∪_{x∈Z} T_x^{(1)}, with each tile given by T_x^{(1)} = {x_1, (x+1)_0}. The first tiling is responsible for "reading" the state of each cell, while the second is responsible for the interaction between the neighboring cells. Now that the tilings' structure is established, the action of the operators is clear:

σ_0 : (Σ_0)_x × (Σ_1)_x → (Σ_0)_x × (Σ_1)_x,
σ_1 : (Σ_1)_x × (Σ_0)_{x+1} → (Σ_1)_x × (Σ_0)_{x+1},

for all x ∈ Z. Therefore, in this example, the transition function can be written explicitly as

E = ×_{T_x^{(1)} ∈ T_1} σ_1  ×_{T_x^{(0)} ∈ T_0} σ_0.   (2)

By choosing σ_i for i ∈ {
0, 1 } as permutation functions, which are reversible, the PCA becomes reversible. The sequence of steps leading to E in this example is illustrated in Fig. 2.

FIG. 2. Each cell is split into two subcells, and the operators σ_i are applied in accordance with the two tilings.

The operators σ_i can be either deterministic or stochastic. In the first case, the local functions are given by permutation matrices π^{(i)}, while in the stochastic case each one is given by a convex combination of permutations,

σ_i = ∑_{j=1}^{n!} p_j^{(i)} π^{(j)},  with p_j^{(i)} ≥ 0 and ∑_{j=1}^{n!} p_j^{(i)} = 1.
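To make the construction concrete, the following minimal Python sketch (our illustration, not part of the paper; the function and variable names are ours) implements the two-tiling, two-subcell example of Eq. (2) on a periodic lattice, with σ_0 a permutation acting inside each cell and σ_1 = Swap acting on the boundary tiles {x_1, (x+1)_0}:

```python
def pca_step(cells, sigma0):
    """One application of Eq. (2) on a ring: sigma0 acts on every tile of
    the first tiling (the cells themselves), then Swap acts on every tile
    {x_1, (x+1)_0} of the second tiling."""
    cells = [sigma0(c) for c in cells]       # first tiling: "read" the cell
    out = [list(c) for c in cells]
    n = len(cells)
    for x in range(n):                       # second tiling: boundary Swap
        out[x][1] = cells[(x + 1) % n][0]
        out[(x + 1) % n][0] = cells[x][1]
    return [tuple(c) for c in out]

pi0 = lambda c: c                  # identity permutation
pi1 = lambda c: (c[1], c[0])       # within-cell swap

state = [(0, 1), (0, 0), (0, 0), (0, 0)]   # one excitation on a 4-cell ring
print(pca_step(state, pi1))    # [(0, 0), (0, 0), (0, 0), (0, 1)]
```

With π^{(1)} the excitation propagates ballistically (here it moved one cell to the left), while with π^{(0)} it bounces back and forth between two neighboring cells; both rules are reversible, since each step is a permutation of the subcell contents.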
III. COARSE GRAINING THE PCA

Although the approach chosen here is quite general, in the sense that it can be applied to different geometries and with an arbitrary number of particles or excitations, for simplicity we focus on the one-dimensional case with a single excitation. Given that, we only need to employ subcells with two states, Σ_i = {
0, 1 }, with state 1 representing the presence of a particle (excitation), and state 0 the empty subcell. Thus, from now on, the one bit per subcell is referred to as (Z_2)_i, instead of Σ_i. Therefore, Z_2^n now stands for the finite set of cell states, given that there are n bits per cell. Although we restrict ourselves to one-dimensional PCAs, cases with more than two subcells per cell are explored. Interactions, however, will remain only between two subcells from different cells, which means the tiles of the second tiling have the structure T_x^{(1)} = {x_{n−1}, (x+1)_0}. Interaction between the cells thus happens only across boundary subcells.

Without loss of generality, the evolution will be restricted to the case of two tilings, since with this number of tilings all non-trivial dynamics of the one-dimensional PCA can be accessed. Moreover, in order to allow interaction between the cells, we will always employ the Swap as the second operator, σ_1 = Swap, the one related to the second tiling T_1.

Since the present context relies on two tilings, with the maps σ_0 and σ_1 related to the first and the second tiling, respectively, we will often write E(σ_1, σ_0) to indicate the transition function employed.

A. The coarse graining procedure
The first thing to be done in order to get the CG is to construct a supercell. The starting point is a PCA global state at time t, Φ_t, with |L| cells, each with n subcells,

Φ_t ∈ (Z_2^n × ··· × Z_2^n)^{|L|}.

As the next step, s cells are joined, s being an integer defining the supercell size. Thus, a PCA global state in terms of supercells is obtained,

Φ_t^s ∈ (Z_2^{sn} × ··· × Z_2^{sn})^{|L|/s},   (3)

with |L|/s supercells. We need to stress that the choice for |L| is such that |L|/s ∈ N. Furthermore, it is important to notice that while the number of cells is reduced when we move to the supercell representation, the number of subcells per cell is increased in such a way that by the end of the process the total number of subcells is kept constant. By doing that, the same transition function is still well defined in terms of supercells. That is, in the present work we are considering the equality E Φ_t^s = (E Φ_t)^s. Once |L|/s supercells are obtained, a CG map is constructed as follows:

Λ_CG : Z_2^{sn} → Z_2^{n′}.   (4)

To be considered a coarse-graining map, we demand n′ < sn. Some information about the full state is then lost after the action of Λ_CG. Here we will be restricted to the case n′ = n. This map is applied to all supercells, in order to obtain a possible CA candidate with |L|/s cells and with n subcells,

Λ_CG^{|L|/s} Φ_t^s = ˜Φ_T,   (5)

where Λ_CG^{|L|/s} = Λ_CG × ··· × Λ_CG (|L|/s times), and ˜Φ_T is a PCA global state, in the upper level, at time T, where in general T ≠ t, as we will see later. However, we do not yet know the transition function ˜E for ˜Φ_T. Moreover, as in [2], the interest here is to construct ˜E from the transition function in the lower level. With this goal in mind, an analogous procedure to that of [2] is proposed for the PCA.

The first step is to apply the transition function in the lower level h times, i.e.,

E^h Φ_t^s = Φ_{t+h}^s,  with h ≤ s.   (6)
This differs from the related result by Israeli and Goldenfeld, which requires h = s; here we can relax this constraint, leading to the cases we denote by temporal and spatial coarse graining. Subsequently, the CG map is applied to get a PCA state in the upper level at time T + 1, that is,

Λ_CG^{|L|/s} Φ_{t+h}^s = ˜Φ_{T+1}.   (7)

Then we say that a PCA in the upper level is emergent from the lower level as long as there exists a PCA transition function ˜E, satisfying the PCA definition for transition functions (i.e., composed of local operators), that connects these two PCA states. Mathematically speaking, we are looking for a transition function such that

˜E(˜Φ_T) = ˜Φ_{T+1}.   (8)

Besides being a valid PCA transition function, it must also be well defined: given any two distinct states Φ_t^s and Θ_t^s such that Λ_CG^{|L|/s}(Φ_t^s) = Λ_CG^{|L|/s}(Θ_t^s), then

˜E(Λ_CG^{|L|/s} Φ_t^s) = Λ_CG^{|L|/s}(E^h Φ_t^s)   (9)
                     = Λ_CG^{|L|/s}(E^h Θ_t^s) = ˜E(Λ_CG^{|L|/s} Θ_t^s).

So far we have described the CG procedure acting on the full PCA state, which includes all supercells, Eq. (3). However, from the PCA space homogeneity and from its time and space translation invariance, the procedure can be carried out just by analyzing the states within the neighborhood scheme.

Notice that h > s is not allowed, since in that case there would be enough time for the excitation to cross the neighborhood scheme in the upper level. This restriction can be better understood with a simple example. Let us choose s = 2 and N_x = {x−1, x, x+1}. If we were to choose h > 2, the excitation could leave the upper-level neighborhood N_{˜x} = {˜x−1, ˜x, ˜x+1}, where ˜x refers to the position in the upper level. Then, by allowing h > s, there is a chance of an emergent structure with non-local operators appearing, i.e., a transition function that makes the cells ˜x and ˜x ± 2 interact.

FIG. 3. Schematic diagram summarizing the general procedure.
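The well-definedness requirement of Eq. (9) can be checked exhaustively on a small periodic lattice. The sketch below is our own illustration (the names and the particular Λ_CG, which keeps the first and last subcell of each two-cell supercell, are our assumptions): it enumerates every lower-level state, and verifies that any two states identified by the CG map remain identified after one lower-level step followed by coarse graining, which is exactly the condition for a candidate ˜E to exist:

```python
from itertools import product

def step(cells):
    """Lower-level E: identity inside each cell, then boundary swaps
    x_1 <-> (x+1)_0 on a ring."""
    n = len(cells)
    out = [list(c) for c in cells]
    for x in range(n):
        out[x][1] = cells[(x + 1) % n][0]
        out[(x + 1) % n][0] = cells[x][1]
    return [tuple(c) for c in out]

def cg(cells):
    """Supercells of s = 2 cells; keep first and last subcell of each."""
    return tuple((cells[i][0], cells[i + 1][1])
                 for i in range(0, len(cells), 2))

induced = {}                                 # graph of the candidate ~E
for bits in product((0, 1), repeat=8):       # all 256 states of 4 cells
    state = [tuple(bits[2 * i:2 * i + 2]) for i in range(4)]
    key, image = cg(state), cg(step(state))  # h = 1
    # Eq. (9): same coarse state must imply the same coarse successor
    assert induced.setdefault(key, image) == image
print(len(induced))   # 16 distinct coarse states, each with a unique successor
```

Because the assertion never fires, the dictionary `induced` is itself the (well-defined) upper-level transition function for this choice of map.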
At this point it is important to discuss some characteristics of the Λ_CG employed in this model. From Eq. (4), and from the fact that the same number of subcells is kept in both levels (i.e., n′ = n), it turns out that Λ_CG is a map Λ_CG : Z_2^{sn} → Z_2^n. This means that in the deterministic cases Λ_CG belongs to the space of n × sn matrices with only 0 or 1 entries. This implies that the map is not injective; thus different states in the lower level may give the same state in the upper level. Physically speaking, there are different microscopic states that correspond to the same macroscopic state. Moreover, there is another important characteristic of the map, which is a consequence of the physical interpretation we are using in our investigations. Since the interpretation used here is that the value 1 in a subcell corresponds to the existence of one particle (or excitation), and 0 to an empty location, in order to preserve the number of particles during the evolution we only allow a single nonzero value in each column and each row of Λ_CG. If that were not the case, the maps could increase the number of particles after coarse graining, as we illustrate in Section IV, and could also lead to dynamics in the upper level that do not conserve the number of particles.

Another relevant quantity for further analysis is the number of possible CG maps, N_CG(n, s), given the supercell size s and the number of subcells n. There are three main points to be considered in order to account for the total number of maps: the size of the matrix, n × ns; the constraint on the number of nonzero entries; and the fact that maps with rows and columns containing only zeros are allowed. When these points are combined, the following number is established:

N_CG(n, s) = ∑_{i=0}^{n} [ns!/(ns − n + i)!] (n choose i).   (10)
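The counting just given can be cross-checked by brute force for small n and s. The script below is our own sketch (it assumes the matrix picture described above, and reads the "− 1" of the single-particle count as discarding the all-zero map); it also shows that the map count quickly outgrows (n!)^2, the number of pairs of permutation rules:

```python
from itertools import product
from math import comb, factorial

def n_cg(n, s):
    """Eq. (10): n x ns zero-one matrices with at most one 1 in every
    row and every column (all-zero rows and columns allowed)."""
    ns = n * s
    return sum(factorial(ns) // factorial(ns - n + i) * comb(n, i)
               for i in range(n + 1))

def brute_force(n, s, rows=True, cols=True):
    """Count the same matrices by direct enumeration."""
    ns = n * s
    total = 0
    for bits in product((0, 1), repeat=n * ns):
        m = [bits[r * ns:(r + 1) * ns] for r in range(n)]
        ok = (not rows or all(sum(r) <= 1 for r in m)) and \
             (not cols or all(sum(r[c] for r in m) <= 1 for c in range(ns)))
        total += ok
    return total

assert brute_force(2, 2) == n_cg(2, 2) == 21
# Single-particle relaxation: dropping the row constraint leaves
# (n + 1)^(ns) matrices, one of which is the all-zero map.
assert brute_force(2, 2, rows=False) == (2 + 1) ** 4
# Maps outgrow the (n!)^2 possible links: the ratio shrinks with n.
ratios = [factorial(n) ** 2 / ((n + 1) ** (2 * n) - 1) for n in range(1, 6)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))
```

For n = s = 2 the closed formula and the enumeration agree on 21 maps, and the ratio of links to maps decreases monotonically in the range tested.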
The restriction on the number of 1s in each row can be removed when we are restricted to a single-particle scenario, which is exactly the case we work with in what follows. By dropping this restriction, the number of possible CG maps changes to

N_CG(n, s) = (n + 1)^{ns} − 1.   (11)

Herein, only the results for the CG maps that take two and three cells (s = 2, 3) to one cell are reported. The extension to more dimensions and to different values of s can be done naturally.

The last point to be noticed when attempting to apply the CG to deterministic settings is the number of possible connections between the lower and upper levels. There are n! permutation matrices for n subcells. Moreover, the PCAs will keep the same structure in the lower and upper levels (i.e., the same neighborhood scheme and the same number of subcells). For each initial dynamics E(Swap, π^{(i)}), the n! permutation matrices give n! possible connections to deterministic dynamics in the upper level, ˜E(Swap, π^{(j)}). Now, taking into account the n! different initial dynamics in the lower level, we conclude that there are (n!)^2 possible links between the lower and the upper levels. The word "link" was adopted to emphasize the existence of connections between the lower and upper levels. In a case with more than one CG map connecting the same rules between these two levels, they are counted only once, i.e., just one link between these rules. The number of links, whose largest possible value is (n!)^2, gives us the number of different rules connecting the lower and upper levels. This fact will be important in the analysis of the results, for a better understanding of their quantitative and qualitative aspects.

IV. COARSE GRAINING RESULTS FOR ONE-DIMENSIONAL PCA
Throughout this section we assume a neighborhood scheme N_x = {x−1, x, x+1}.

A. Deterministic results
1. Spatial coarse-graining
In this first part we describe the results for the case where the temporal coarse graining is not applied, which means E^h with h = 1.

a. Two cells, s = 2, to one cell: Our starting point is n =
2, for a case where there is a map from two cells to one. The idea here is to use this simple example to explain how our procedure works and to check the consistency of the method illustrated in Fig. 3. Given the simplicity of this example, here we consider a one-dimensional lattice where the number of particles is arbitrary. In this case there are only two different permutation matrices,

π^{(0)} = [1 0; 0 1],  π^{(1)} = [0 1; 1 0].   (12)

Then, working only with σ_1 = Swap as the local interaction operator, only the two deterministic transition functions E(Swap, π^{(0)}) and E(Swap, π^{(1)}) are possible.

Despite the fact that there are four possible links connecting the lower to the upper level, only the connection E(Swap, π^{(0)}) to ˜E(Swap, π^{(0)}) is obtained, with the CG map given by

Λ_CG = [1 0 0 0; 0 0 0 1].   (13)

Now let us check whether these dynamics in the lower and upper levels, alongside Eq. (13), obey the constraints imposed by the CG procedure.

Starting with the dynamics generated by E(Swap, π^{(0)}), its consequence is to keep the particles confined between two neighboring cells, with a forward and backward movement from one to the other. This is represented as

··· (d^{(1)}, d^{(2)})_x (e^{(1)}, e^{(2)})_{x+1} ···  →_{E(Swap, π^{(0)})}  ··· (c^{(2)}, e^{(1)})_x (d^{(2)}, f^{(1)})_{x+1} ··· ,

where each lattice site is assigned Boolean variables, with superscripts labeling the subcell locations at the current time step. As we can see from the dynamics above, a single particle in the right-most subcell of cell x moves to the left-most subcell of cell x +
1, and vice-versa.

Now, let us compose supercells by putting cells x and x + 1 together, and similarly for their neighbors, and apply the CG map,

Λ_CG[{(b^{(1)}, b^{(2)})_{x−2} (c^{(1)}, c^{(2)})_{x−1}}, {(d^{(1)}, d^{(2)})_x (e^{(1)}, e^{(2)})_{x+1}}, {(f^{(1)}, f^{(2)})_{x+2} (g^{(1)}, g^{(2)})_{x+3}}],

which gives us

(b^{(1)}, c^{(2)})_{˜x−1}, (d^{(1)}, e^{(2)})_{˜x}, (f^{(1)}, g^{(2)})_{˜x+1}.   (14)

The transition function is then applied, before the CG map, to the same initial state,

E[{(b^{(1)}, b^{(2)})_{x−2} (c^{(1)}, c^{(2)})_{x−1}}, {(d^{(1)}, d^{(2)})_x (e^{(1)}, e^{(2)})_{x+1}}, {(f^{(1)}, f^{(2)})_{x+2} (g^{(1)}, g^{(2)})_{x+3}}]
= {(a^{(2)}, c^{(1)})_{x−2} (b^{(2)}, d^{(1)})_{x−1}}, {(c^{(2)}, e^{(1)})_x (d^{(2)}, f^{(1)})_{x+1}}, {(e^{(2)}, g^{(1)})_{x+2} (f^{(2)}, h^{(1)})_{x+3}},

and after the CG map,

(a^{(2)}, d^{(1)})_{˜x−1}, (c^{(2)}, f^{(1)})_{˜x}, (e^{(2)}, h^{(1)})_{˜x+1}.   (15)

Therefore, we can see that the results (14) and (15) are compatible with the upper transition rule ˜E(Swap, π^{(0)}).

Furthermore, within this simple example, by changing our CG map in Eq. (13) to

Λ_CG = [1 1 0 0; 0 0 0 1],   (16)

the requirement of a single nonzero value per row can be verified. Rather than Eq. (14) we would get

(b^{(1)} + b^{(2)}, c^{(2)})_{˜x−1}, (d^{(1)} + d^{(2)}, e^{(2)})_{˜x}, (f^{(1)} + f^{(2)}, g^{(2)})_{˜x+1},

which would create subcells with two particles, something we do not allow here. However, we can also notice that the map given in Eq. (16) is only problematic in the scenario of multiple particles.
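The compatibility just described can also be verified mechanically. In this sketch (our code, not the paper's), the same rule E(Swap, π^{(0)}) is applied on an 8-cell ring and on the 4-cell coarse ring obtained from the map of Eq. (13), and the two orders of operation agree for arbitrary states:

```python
import random

def step(cells):
    """E(Swap, pi0): identity within each cell, then boundary swaps
    x_1 <-> (x+1)_0 on a ring."""
    n = len(cells)
    out = [list(c) for c in cells]
    for x in range(n):
        out[x][1] = cells[(x + 1) % n][0]
        out[(x + 1) % n][0] = cells[x][1]
    return [tuple(c) for c in out]

def cg(cells):
    """Eq. (13), per supercell: (b1, b2, c1, c2) -> (b1, c2)."""
    return [(cells[i][0], cells[i + 1][1]) for i in range(0, len(cells), 2)]

rng = random.Random(1)
for _ in range(500):
    lower = [(rng.randint(0, 1), rng.randint(0, 1)) for _ in range(8)]
    # the emergent rule is E(Swap, pi0) itself, now acting on coarse cells
    assert cg(step(lower)) == step(cg(lower))
print("coarse graining commutes with E(Swap, pi0)")
```

Since the check passes for multi-particle states as well, the link does not depend on the single-excitation restriction, even though, as noted above, the map may lose track of some particles.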
Within the same example, by changing our CG map now to

Λ_CG = [1 0 0 0; 0 1 0 1],   (17)

we can also verify that the constraint on the rows must be respected for consistency. Rather than Eq. (14) we would now get

(b^{(1)}, b^{(2)} + c^{(2)})_{˜x−1}, (d^{(1)}, d^{(2)} + e^{(2)})_{˜x}, (f^{(1)}, f^{(2)} + g^{(2)})_{˜x+1}.

In this scenario problems appear even when starting with a single excitation, for instance by setting all bits to zero except one, say b^{(2)} = 1: lower-level states that are identified by the map evolve to states with different images, so that no deterministic rule in the upper level can satisfy Eq. (9).

Let us also verify the emergent rule explicitly. Beginning with a state Φ_t ∈ Z_2^4 × Z_2^4 × Z_2^4, i.e., three supercells in the case n = 2, containing a single excitation, we write

Φ_t = (0_4, φ, 0_4),

where the subscript 4 refers to the vector composed of four 0s, 0_4 = (0, 0, 0, 0), and φ holds the single excitation. As the next step, the CG map in Eq. (13), which is a 2 × 4 matrix, is applied blockwise to each supercell, yielding ˜Φ_T, the state with only three cells after coarse graining Φ_t. All matrices that act on states in the upper and lower levels are block diagonal. Returning to Φ_t and applying the transition function, built from blocks π^{(0)} and Swap, we obtain Φ_{t+1}; the 1s in the matrix that contains the Swap operator mean that no operator is applied at the boundaries, respecting the neighborhood scheme. Subsequently, the CG map is applied to the state Φ_{t+1}, producing ˜Φ_{T+1}.

From ˜Φ_T and ˜Φ_{T+1} we can start looking for a transition function in the upper level. Since the target is a permutation operator in the upper level connecting these two states, this operator can be parameterized as

π^{(x)} = [p q; q p],

where x = 0 (1) if p = 1 (0) and q = 0 (1). The transition function using π^{(x)} can now be applied, building a linear system for p and q. In this simple case the solution is p = 1 and q = 0, so that the upper level has ˜E(Swap, π^{(0)}) as its transition function.

One of the main characteristics of our results is that the CG maps do not necessarily preserve the number of particles, which can be noticed only for some initial conditions. For instance, applying the map of Eq. (13) either to {(0, 0)_{x_i}, (1, 0)_{x_i+1}} or to {(0, 1)_{x_i}, (0, 0)_{x_i+1}}, we get (0, 0)_{˜x_i}. This is a consequence of the mathematical structure of the CG maps, which are not bijective and always reduce the number of cells. Because of that, some information loss is to be expected, and here this loss is represented by the number of particles.

Moving on to the case with n =
3, there are six different permutation matrices, π^{(0)}, ..., π^{(5)}: the identity, the three transpositions of two subcells, and the two cyclic permutations of the three subcells.   (18)

In this scenario, 12 connections are achieved out of all 36 possible ones. These links are made by only eight distinct CG maps, which are listed in Appendix VI A 1. These results are summarized in Fig. 4.

FIG. 4. CG results with s = 2 and n = 3. In this illustration we have all possible transition functions when there are three subcells. The arrows connect the CA dynamics linked by the CG maps. For instance, beginning in the lower level with the dynamics driven by one of the permutation operators, it is possible to reach two other dynamics in the upper level; moreover, there is a map that allows the same dynamics to emerge in the upper level.

b. Three cells, s = 3, to one cell: We again start with n =
2. Our results in this case are just an extension of the previous one, since the transition functions in the lower and the upper levels yield the same dynamics. One link between the same transition functions, E(Swap, π^{(0)}) to ˜E(Swap, π^{(0)}), is established, and the CG map that establishes this link is the analogue of Eq. (13) in this larger space, namely

Λ_CG = [1 0 0 0 0 0; 0 0 0 0 0 1].   (19)

Now with n = 3, i.e., three subcells, 8 links are obtained out of the 36 possibilities. These connections are given by seven CG maps (see Section VI B 1). The results are illustrated in Fig. 5.
FIG. 5. CG results with s = 3 and n = 3.

2. Spatial and temporal coarse-graining
The purely spatial setting was defined as the case where h = 1. Here we open up more possibilities for the number of times the transition function can be applied, respecting the bound h ≤ s shown before. Since the time step in the lower level is set by the value of h, the immediate consequence is that time flows differently in the two levels, and because of that we call these cases temporal coarse-graining.

a. Two cells, s = 2, to one cell: From the previous bound, in this case we can only use h =
2, since the case with h = 1 was treated above. With n = 2, no link is possible between the lower and upper levels. But with n =
3, 8 links are possible, given by six different CG maps. These are fewer connections and maps than those previously established with h =
1. These results are summed up in Fig. 6.

FIG. 6. CG results with s = 2, h = 2, and n = 3.

b. Three cells, s = 3, to one cell: Now we can work with the two values h = 2 and h = 3, since s = 3. Beginning with h = 2, no result is obtained with n = 2, while with n = 3 four maps and their corresponding links are achieved, as illustrated in Fig. 7. For h = 3, with n = 2 the results are the same as with h = 1, since E^3 = E for the dynamics in question. With n = 3, again four maps are achieved (see Appendix VI B 3), but now only four links, one map for each link (Fig. 8).
3. Overview of deterministic results
FIG. 7. CG results for s = 3, h = 2, and n = 3.
FIG. 8. CG results for s = 3, h = 3, and n = 3.

During our investigations, the cases with n = 2 and n = 3 were analyzed in terms of relative links, i.e., the number of links achieved over the total number of possible links, as Figs. 9 and 10 display. From all the results on deterministic dynamics in the lower level, it is not possible to have all dynamics in the upper level as emergent ones. However, the number of relative links increases, as shown in Fig. 9 for s =
2, and in Fig. 10 for s =
3. Should we expect all links to appear for some number of subcells? In fact, this seems quite likely. Let us understand why by considering either Eq. (10) or Eq. (11), the total number of possible maps for given n and s. Since (n!)^2 is the number of possible links, the number of maps increases faster than the total number of links for a given s, i.e.,

lim_{n→∞} (n!)^2 / N_CG(n, s) = 0.

This reflects the growing number of microscopic dynamics that cannot be distinguished after coarse graining.

FIG. 9. Relative links from two cells to one cell (s = 2; h = 1, 2), after application of the CG map. These results indicate that more and more dynamics become accessible as we increase the number of subcells, in both scenarios: spatial and temporal coarse-graining.

FIG. 10. Relative links from three cells to one cell (s = 3; h = 1, 2, 3), after application of the CG map.

Another observation is that the number of links also depends on the value of h employed. The reason is that there are values of h (the number of times the transition function is applied in the lower level before the state is coarse grained) that may lead the particle to stay inside its initial supercell. In these cases the trivial dynamics is established in the upper level, i.e., particles that do not move to their neighbors. Once these possibilities are excluded, fewer links are available for these cases.

B. Stochastic CG results for one-dimensional PCA

Until now, a strict constraint was imposed on the dynamics after the coarse graining: only permutation operators were allowed in the upper level.
However, it might be that these constraints are too artificial for real physical systems, which can explain why it is so difficult to find CG maps linking two deterministic dynamics.

Since it is quite common in physics to deal with stochastic dynamics when we do not have access to the full information about the system, it seems more natural to search for convex combinations of permutations in the upper level, starting from some fixed CA dynamics in the lower level, which is our next step. This is possible as long as the specific constraint previously imposed on ˜E, Eq. (9), is relaxed. Without that constraint, if we have two or more initial states in the lower level leading to the same state after the coarse graining, they may evolve to different states in the upper level. Thus, at the end of this process, different transition functions in the upper level are possible. This idea can be visualized in Fig. 11.

FIG. 11. In this illustration we have two different states in the lower level, namely Φ_t^s and Θ_t^s, that represent the same state in the upper level, ˜Θ_T. These two states in the lower level then evolve to two different states as well. So far there is nothing new in this procedure compared with the construction made in the deterministic case. However, without the constraint given in Eq. (9), it is allowed that these two states at time t + h go to two different states in the upper level. Because of that, different transition functions in the upper level can appear.

Differently from what we saw previously, here there is the possibility of getting different maps from the same link, which allows for more maps in comparison with the previous results. In what follows, we only give the results for s = 2, with

σ_0 = p_0 π^{(0)} + p_1 π^{(1)},   (20)

where p_0, p_1 ≥ 0 and p_0 + p_1 =
1, and the swap operator for the second tiling. The interesting point about this dynamics is that it describes the Random Walk (RW) problem [23]. To see that, let us recall how this problem is described. In the RW problem, at every point, before its displacement on the one-dimensional lattice, the walker flips a coin. Here, the coin sets the probability for the walker to keep moving in the same direction or to reverse its direction of movement. Thus, we can see Eq. (20) playing the coin's role ($p_1$ is the probability that the walker does not change its direction and $p_2$ the probability that it does), and the shift operator giving the walker's displacement, in agreement with what we claimed above.

Therefore, this discrete equation of motion is the one whose continuum limit gives the stochastic partial differential equation

$$\partial_t \rho = D\,\partial_x^2 \rho,$$

where $\rho(x,t)$ is the local density of particles, $D$ is the diffusion constant given by

$$D = \frac{\lambda^2}{\tau}\left(\frac{p_1}{2(1-p_1)}\right),\qquad(21)$$

and $\lambda^2/\tau$ is a constant that comes from the dispersion relation of the problem.

V. CONCLUSION
Similarly to [2], in the present work we studied emergent dynamics, but in a different CA scenario. Differently from the previous results, with PCA we could obtain CG maps on different time scales. One advantage of this CA class is its strong connection with physical processes; for instance, the Navier-Stokes equation [13] and the Random Walk [19] can be simulated with this computation model. Moreover, we established two distinct results: links connecting deterministic CA to deterministic CA, and deterministic CA to stochastic CA.

Although the results in the deterministic cases suggest that all links between the lower and the upper levels will be achieved for some large number of subcells, we could see how difficult it is to obtain these emergent phenomena, since the total number of CG maps increases much faster than the number of possible links as the number of subcells increases. Another point that should be considered is that, while we did not observe different CG maps linking two different transition functions in the deterministic results, this happened very often in the stochastic cases. This can be interpreted as an indication of why stochastic processes in the macroscopic world naturally emerge from well-determined individual particle actions, in agreement with statistical mechanics.

By taking advantage of the PCA, the last section showed that the procedure introduced here can be easily translated to the case of multiple particles; it suffices to be more careful when there is interaction between them.

Going beyond classical CA, the CG prescription might be translated to its quantum counterpart. Instead of the CA explored in [2, 8], it is the PCA that should be quantized to obtain the partitioned unitary quantum cellular automata (PUQCA, [24]). A core feature that makes the PCA the classical analogue is the reversibility established when only permutation operators act, since the unitary evolution makes a QCA reversible at any time.
Therefore, rather than extending the method shown in [2] to the QCA, a natural choice is to take the prescription introduced here and extend it to its quantum version. In quantum theory this tool can be useful, for instance, to study the transition from the quantum to the classical world [25, 26].
ACKNOWLEDGEMENTS
We acknowledge financial support from the National Institute for Science and Technology of Quantum Information (INCT-IQ/CNPq) and CAPES, both from Brazil. We would also like to thank Pedro De Oliveira for a careful reading of the manuscript.
VI. APPENDICES

A. CG maps for s = and n =
1. h = 1

Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = .
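A CG map Λ_CG in these lists is valid when coarse graining commutes with the dynamics: evolving h steps in the lower level and then coarse graining must match coarse graining first and then applying the upper-level transition once. Below is a minimal sketch of that consistency check with toy stand-ins (a cyclic shift for both levels and a pair-OR coarse graining; these are our assumptions, not the omitted matrices):

```python
def lower_step(config):
    """Toy lower-level transition: cyclic shift of subcells by one."""
    return config[-1:] + config[:-1]

def upper_step(coarse):
    """Candidate upper-level transition: cyclic shift of supercells by one."""
    return coarse[-1:] + coarse[:-1]

def cg(config):
    """Toy CG map: OR of each pair of subcells (one supercell per pair)."""
    return tuple(1 if config[i] or config[i + 1] else 0
                 for i in range(0, len(config), 2))

def commutes(h, n_sub=8):
    """Check cg(lower_step^h(x)) == upper_step(cg(x)) for every
    one-particle configuration x of n_sub subcells."""
    for k in range(n_sub):
        x = tuple(1 if i == k else 0 for i in range(n_sub))
        y = x
        for _ in range(h):
            y = lower_step(y)
        if cg(y) != upper_step(cg(x)):
            return False
    return True
```

For this toy, commutes(2) succeeds while commutes(1) fails: after a single substep the particle may still sit inside its initial supercell while the upper-level shift forces it out, which is exactly the h-dependence of the available links noted in the text.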
2. h = 2

Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = .

B. CG maps for s = and n =
1. h = 1

Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = .
2. h = 2

Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = .
3. h = 3

Λ_CG = ; Λ_CG = ; Λ_CG = ; Λ_CG = .

C. Some maps and dynamics for stochastic CG results
1. Spatial coarse graining
As before, spatial CG means h = 1.

• π^(·): with π^(·) in the lower level we found seven maps, for example,

Λ_CG = ,

which yields in the upper level the following convex combination for the operator related with the first tiling,

$$\tilde{\sigma}_1 = p_1\,\pi^{(\cdot)} + p_2\,\pi^{(\cdot)},\qquad(22)$$

where $p_1, p_2 \ge 0$ and $p_1 + p_2 =$
1. We also got a convex combination for the operator related with the second tiling, $\tilde{\sigma}_2$:

$$\tilde{\sigma}_2 = q_1\,\mathbb{1} + q_2\,\mathrm{swap},\qquad(23)$$

where $q_1, q_2 \ge 0$ and $q_1 + q_2 =$
1, and $\mathbb{1}$ is the identity permutation. The latter means that, with probability $q_1$, the particle will stay in the same cell, and with probability $q_2$ the particle will leave the cell.

• π^(·): in this case only one stochastic evolution is achieved in the upper level,

Λ_CG = , (24)

which leads to the same evolution expressed in Eq. (20), except that now π^(·) remains the same.

• π^(·): as in the result for π^(·), there is only a single CG map,

Λ_CG = , (25)

leading to one dynamics in the upper level, again given by Eq. (20) for the first operator and the swap for the second one.

• π^(·): differently from the previous cases, the upper level has a deterministic operator for $\tilde{\sigma}_1$, namely π^(·), established by the CG map

Λ_CG = .

But the second operator has the form $\tilde{\sigma}_2 = \frac{1}{2}\,\mathbb{1} +$
12 swap,which entails probabilities of 1 / • π ( ) : in this case, a deterministic evolution for thefirst operator in the upper level is again achieved,but now the permutation is π ( ) . Coincidentally,with the result achieved for π ( ) both the CG mapand the σ operator are the same. In fact, by a care-ful analysis of these permutation operators ( π ( ) and π ( ) ) it is possible to see that they are relatedby a transposition transformation, i.e., ( π ( ) ) T = π ( ) , the same type of dynamics but for differentdirections. • π ( ) : finally, for the last permutation operator,there is only one dynamics in the upper level, thesame dynamics obtained for π ( ) , according toEqs. (20) and (23), achieved by three different CGmaps, for instance, Λ CG = ,
2. Spatial and temporal coarse graining
Now the results established for h = 2 follow.

• π^(·): the same dynamics as for π^(·) with h = 1, with the CG map

Λ_CG = .

• π^(·): the same dynamics achieved in the previous result is kept, but now there is only one map, the one given by Eq. (25).

• π^(·): as in the two previous cases, we achieved the same stochastic transition function in the upper level, with Eq. (24) as the CG map.

• π^(·): no dynamics is available in the upper level beginning with this deterministic PCA.

• π^(·): as discussed earlier, the dynamics generated by π^(·) and π^(·) are quite similar. Given that, the last result is replicated, that is, no dynamics is established in the upper level.

• π^(·): now we established three CG maps for only one dynamics in the upper level, the same we have seen for h = 1.

[1] L. P. Kadanoff, Physics Physique Fizika, 263 (1966).
[2] N. Israeli and N. Goldenfeld, Phys. Rev. E, 026203 (2006).
[3] O. Kabernik, Phys. Rev. A, 052130 (2018).
[4] T. Y. Choi, K. J. Dooley, and M. Rungtusanatham, Journal of Operations Management, 351 (2001).
[5] H. I. Ingólfsson, C. A. Lopez, J. J. Uusitalo, D. H. de Jong, S. M. Gopal, X. Periole, and S. J. Marrink, Wiley Interdisciplinary Reviews: Computational Molecular Science, 225 (2014).
[6] S. Olariu and A. Y. Zomaya, in Handbook of Bioinspired Algorithms and Applications (Chapman and Hall/CRC, 2005) pp. 291–302.
[7] J. von Neumann, A. W. Burks, et al., IEEE Transactions on Neural Networks, 3 (1966).
[8] S. Wolfram, A New Kind of Science, Vol. 5 (Wolfram Media, Champaign, IL, 2002).
[9] C. Detrain and J.-L. Deneubourg, Physics of Life Reviews, 162 (2006).
[10] Q. K. Telesford, S. L. Simpson, J. H. Burdette, S. Hayasaka, and P. J. Laurienti, Brain Connectivity, 295 (2011).
[11] G. B. Ermentrout and L. Edelstein-Keshet, Journal of Theoretical Biology, 97 (1993).
[12] S. Nandi, B. K. Kar, and P. Pal Chaudhuri, IEEE Transactions on Computers, 1346 (1994).
[13] U. Frisch, B. Hasslacher, and Y. Pomeau, Phys. Rev. Lett., 1505 (1986).
[14] K.
Morita, in Theory of Reversible Computing (Springer, 2017) pp. 299–329.
[15] A. Schranko and P. P. De Oliveira, Journal of Cellular Automata (2011).
[16] B. Wolnik, A. Dzedzej, J. M. Baetens, and B. De Baets, Journal of Physics A: Mathematical and Theoretical, 435101 (2017).
[17] J. Kari, New Generation Computing, 145 (2018).
[18] T. Toffoli and N. Margolus, Cellular Automata Machines: A New Environment for Modeling (MIT Press, 1987).
[19] B. Chopard,
Cellular Automata Modeling of Physical Systems (Springer, 2012).
[20] J. Hardy, O. de Pazzis, and Y. Pomeau, Phys. Rev. A, 1949 (1976).
[21] B. Chopard and M. Droz, Cellular Automata Modeling of Physical Systems, Collection Alea-Saclay: Monographs and Texts in Statistical Physics (Cambridge University Press, 1998).
[22] G. B. Ermentrout and L. Edelstein-Keshet, Journal of Theoretical Biology, 97 (1993).
[23] F. Spitzer,
Principles of Random Walk, Vol. 34 (Springer Science & Business Media, 2013).
[24] P. C. S. Costa, R. Portugal, and F. de Melo, Quantum Information Processing, 226 (2018).
[25] C. Duarte, G. D. Carvalho, N. K. Bernardes, and F. de Melo, Phys. Rev. A, 032113 (2017).
[26] W. H. Zurek, Rev. Mod. Phys., 715 (2003).