Resolution of sigma-fields for multiparticle finite-state action evolutions with infinite past
Yu Ito (1)(2), Toru Sera (3)(4) and Kouji Yano (5)(6)

(2) The research of this author was supported by JSPS KAKENHI Grant Number JP18K13431.
(3) Graduate School of Science, Kyoto University, Kyoto, JAPAN. Research Fellow of Japan Society for the Promotion of Science.
(4) The research of this author was supported by JSPS KAKENHI Grant Number JP19J11798.
(5) Graduate School of Science, Kyoto University, Kyoto, JAPAN.
(6) The research of this author was supported by JSPS KAKENHI grant nos. JP19H01791, JP19K21834 and JP18K03441, and by JSPS Open Partnership Joint Research Projects grant no. JPJSBP120209921.
August 31, 2020
Abstract
For multiparticle finite-state action evolutions, we prove that the observation σ-field admits a resolution involving a third noise which is generated by a random variable with uniform law. The Rees decomposition from the semigroup theory and the theory of infinite convolutions are utilized in our proofs.

1 Introduction

Let us consider the stochastic recursive equation

    X_k = N_k X_{k-1}   a.s. for k ∈ Z,   (1.1)

which we call the action evolution, where the observation X = (X_k)_{k∈Z}, taking values in a measurable space V, evolves from X_{k-1} to X_k at each time k by being acted on by a random map N_k of V. Here we mean by N_k X_{k-1} the evaluation N_k(X_{k-1}) of the random mapping N_k at X_{k-1}; we always abbreviate the parentheses and write fv simply for the evaluation f(v). As our processes are indexed by Z, the state X_k we observe at time k is a result after a long time has passed.

We would like to clarify the structure of the observation noise F^X_k = σ(X_j : j ≤ k). For families of events, we write A ∨ B := σ(A ∪ B). For σ-fields, we say F ⊂ G a.s. (resp. F = G a.s.) if F ⊂ G ∨ N (resp. F ∨ N = G ∨ N) with N being the family of null events.
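The recursion (1.1) is easy to experiment with numerically. The following Python sketch runs a forward-time analogue of the action evolution on a five-point state space; the two mappings are made-up placeholders (they are not the mappings studied in Section 2), and the simulation starts from a fixed time rather than from an infinite past.

```python
import random

# A forward-time sketch of the action evolution (1.1): X_k = N_k X_{k-1}
# with the N_k iid with law mu.  Maps on V = {1,...,5} are encoded as tuples
# (f(1),...,f(5)); f and g below are hypothetical placeholders.
f = (2, 1, 4, 3, 5)              # a made-up permutation of V
g = (1, 1, 3, 3, 5)              # a made-up idempotent mapping of rank 3

x = random.choice(range(1, 6))   # stand-in for the state inherited from the past
path = [x]
for _ in range(10):
    N = random.choice([f, g])    # N_k distributed as mu = (delta_f + delta_g)/2
    x = N[x - 1]                 # X_k = N_k X_{k-1}
    path.append(x)
print(path)
```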
By iterating the equation (1.1), we have X_k = N_k N_{k-1} ··· N_{j+1} X_j a.s. for j < k, so that F^X_k ⊂ ⋂_{j<k} (F^N_k ∨ F^X_j). One may then expect that, for any k ∈ Z,

    F^X_k ?⊂ F^N_k ∨ F^X_{-∞},   (1.2)

i.e., that F^X_k can be known by the driving noise F^N_k := σ(N_j : j ≤ k) together with the remote past noise F^X_{-∞} := ⋂_k F^X_k, which plays the role of the initial noise at time -∞. But the a.s. inclusion ?⊂ in (1.2) is false in general; see [10, (1) of Remark 1.4] for erroneous discussions by Kolmogorov and Wiener. We refer to [1, Section 2.5] for a careful treatment of exchanging the order of supremum and intersection of σ-fields.

We would like to reveal the extra noise hidden in the observation noise. To this end let us introduce some terminology.

Definition 1.1. Let µ be a probability on a measurable space Σ of mappings of V into itself and call it the mapping law.

• A µ-evolution is a pair (X, N) of a V-valued process X = (X_k)_{k∈Z} and an iid Σ-valued process N = (N_k)_{k∈Z} defined on a probability space (Ω, F, P) such that the following hold for each k ∈ Z:
(i) X_k = N_k X_{k-1} holds a.s.;
(ii) N_k is independent of F^{X,N}_{k-1} := σ(X_j, N_j : j ≤ k-1), and N_k has law µ.

• For a mapping f : V → V and a vector x = (x_1, ..., x_m) ∈ V^m, we understand that f operates on x componentwise, i.e., fx = (fx_1, ..., fx_m). An m-particle µ-evolution is a µ-evolution (X, N) with X = (X_k)_{k∈Z} taking values in V^m; precisely, the following hold for each k ∈ Z:
(i) X_k = N_k X_{k-1} holds a.s., i.e., X^i_k = N_k X^i_{k-1} holds a.s. for i = 1, ..., m;
(ii) N_k is independent of F^{X,N}_{k-1} := σ(X_j, N_j : j ≤ k-1), and N_k has law µ.

• For a µ-evolution, a third noise is a sequence of random variables (U_k)_{k∈Z} such that the following hold for each k ∈ Z:
(i) the inclusion F^X_k ⊂ F^N_k ∨ F^X_{-∞} ∨ σ(U_k) holds a.s.;
(ii) σ(U_k) ⊂ F^{X,N}_k holds a.s.;
(iii) the three σ-fields F^N_k, F^X_{-∞} and σ(U_k) are independent.

• For a µ-evolution, a reduced driving noise is a sequence of σ-fields (G^N_k)_{k∈Z}, together with a sequence of random variables (U_k)_{k∈Z}, such that the following hold for each k ∈ Z:
(i) the identity F^X_k = G^N_k ∨ F^X_{-∞} ∨ σ(U_k) holds a.s.;
(ii) G^N_k ⊂ F^N_k holds a.s.;
(iii) the three σ-fields F^N_k, F^X_{-∞} and σ(U_k) are independent.

The identity in Condition (i) will be called the resolution of the observation. Note that (U_k)_{k∈Z} is necessarily a third noise.

It is easy to see that (X, N) is a µ-evolution if and only if the Markov property

    P((X_k, N_k) ∈ B | F^{X,N}_{k-1}) = Q_µ(X_{k-1}; B),   k ∈ Z, B ⊂ V × Σ   (1.3)

holds with the joint transition probability

    Q_µ(x; B) = µ{f : (fx, f) ∈ B},   x ∈ V, B ⊂ V × Σ.   (1.4)
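For a finite state space, the kernels in (1.3) and in (1.5)-(1.6) just below are finite sums over the support of µ. Here is a minimal Python sketch, again with a hypothetical two-map law; Q_mu and P_mu follow the formulas (1.4) and (1.6).

```python
# A sketch of the joint kernel Q_mu of (1.4) and of the marginal kernel P_mu
# of (1.6) below.  The mapping law mu is a dict {map: weight} whose two maps
# are the same hypothetical placeholders as in the previous sketch.
mu = {(2, 1, 4, 3, 5): 0.5, (1, 1, 3, 3, 5): 0.5}

def Q_mu(x, B):
    # Q_mu(x; B) = mu{f : (f x, f) in B}, with B a set of (state, map) pairs
    return sum(w for f, w in mu.items() if (f[x - 1], f) in B)

def P_mu(x, A):
    # P_mu(x; A) = mu{f : f x in A}
    return sum(w for f, w in mu.items() if f[x - 1] in A)

print(P_mu(2, {1}))                        # 1.0: both maps send 2 to 1
print(Q_mu(2, {(1, (2, 1, 4, 3, 5))}))     # 0.5: only one map is recorded in B
```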
If (X, N) is a µ-evolution, then the marginal process X satisfies the Markov property

    P(X_k ∈ A | F^X_{k-1}) = P_µ(X_{k-1}; A),   k ∈ Z, A ⊂ V   (1.5)

with the marginal transition probability

    P_µ(x; A) = µ{f : fx ∈ A},   A ⊂ V.   (1.6)

It is also easy to see that, if two µ-evolutions (X, N) and (X′, N′) satisfy X_k d= X′_k for each k ∈ Z, then (X, N) d= (X′, N′).

In this paper, we shall give a general result of resolution of the observation for multiparticle action evolutions when the state space V is a finite set.

1.2 Preliminaries from the theory of semigroups

For our purpose we need several known facts from the theory of semigroups, which we recall without proofs. We may consult [5] for the details.

In what follows we assume that S is a finite semigroup and we denote the set of all idempotents in S by E(S) = {f ∈ S : f² = f}. For A, B ⊂ S and f ∈ S, we write AB = {ab : a ∈ A, b ∈ B} and Af = {af : a ∈ A}, etc. We say that S is completely simple if S has no proper ideal, i.e., ∅ ≠ IS ∪ SI ⊂ I ⊂ S implies I = S, and if there exists e ∈ E(S) which is primitive, i.e., ef = fe = f ∈ E(S) implies f = e.

Proposition 1.2. A finite semigroup S has a unique minimal ideal, which will be called the kernel of S. In addition, the kernel is completely simple.

The proof of Proposition 1.2 can be found, e.g., in [5, Proposition 1.7].

Proposition 1.3 (Rees decomposition). Suppose S is a completely simple finite semigroup with a primitive idempotent e. Set G = eSe, L = E(Se) and R = E(eS). Then the following hold:
(i) G is a group whose unit is e.
(ii) eL = Re = {e}.
(iii) S = LGR.
(iv) The product mapping ψ : L × G × R ∋ (f, g, h) ↦ fgh ∈ S is bijective and its inverse is given as

    ψ⁻¹(z) (=: (z_L, z_G, z_R)) = (ze(eze)⁻¹, eze, (eze)⁻¹ez).   (1.7)

The proof of Proposition 1.3 can be found, e.g., in [5, Theorem 1.1]. The product decomposition S = LGR will be called the Rees decomposition of S at e, and G will be called the group factor.

Note by definition that RL ⊂ (eS)(Se) = eSSe ⊂ eSe = G, and by the product bijectivity that ψ⁻¹((fgh)(f′g′h′)) = (f, g(hf′)g′, h′). It is obvious that the product z = fgh ∈ S is idempotent if and only if hf = g⁻¹. It is also obvious that all idempotents of S are primitive; in fact, if e′ = f′g′h′ ∈ E(S) and z = fgh ∈ E(S) satisfy e′z = ze′ = z, then we have f′ = f and h′ = h by the product bijectivity, and thus g′ = (h′f′)⁻¹ = (hf)⁻¹ = g, which shows e′ = z, so that e′ is also primitive.

Proposition 1.3 is fundamental in the theory of infinite convolutions. Let P(S) denote the set of probability measures on a finite semigroup S and write µν for the convolution of µ and ν in P(S):

    (µν)(A) = Σ_{f,g∈S} 1_A(fg) µ{f} ν{g},   A ⊂ S.   (1.8)

We write S(µ) = {f ∈ S : µ{f} > 0} for the support of µ. It is easy to see that S(µν) = S(µ)S(ν) for µ, ν ∈ P(S). We write ω_G for the normalized Haar measure of a finite group G, or the uniform law on G.
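For a concrete transformation semigroup all of these objects can be computed by brute force. The sketch below, with the same two hypothetical generators as in the earlier sketches, generates S, locates the kernel via the minimal-rank characterization (proved for transformation semigroups in Lemma 3.1 below), and checks the product bijectivity of Proposition 1.3 (iv) for the Rees decomposition of the kernel.

```python
from itertools import product

# Brute-force computation of the semigroup generated by two hypothetical
# maps, of its kernel K, and of the Rees decomposition K = LGR at an
# idempotent e of K.  Composition is (s t)(x) = s(t(x)).
def c(s, t):
    return tuple(s[t[x] - 1] for x in range(len(t)))

gens = [(2, 1, 4, 3, 5), (1, 1, 3, 3, 5)]
S = set(gens)
while True:                                   # closure under composition
    new = {c(s, t) for s, t in product(S, S)} - S
    if not new:
        break
    S |= new

m = min(len(set(s)) for s in S)               # minimal rank, cf. (1.16) below
K = {s for s in S if len(set(s)) == m}        # the kernel, cf. Lemma 3.1 below
e = next(s for s in K if c(s, s) == s)        # an idempotent of the kernel

L = {s for s in {c(k, e) for k in K} if c(s, s) == s}   # L = E(Ke)
G = {c(c(e, k), e) for k in K}                          # G = eKe
R = {s for s in {c(e, k) for k in K} if c(s, s) == s}   # R = E(eK)

# the product map L x G x R -> K is a bijection, Proposition 1.3 (iv)
assert {c(c(l, u), r) for l, u, r in product(L, G, R)} == K
assert len(L) * len(G) * len(R) == len(K)
print(len(S), len(K), len(L), len(G), len(R))
```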
Proposition 1.4 (Convolution idempotents). Suppose µ = µ² ∈ P(S). Then S(µ) is a completely simple subsemigroup of S and µ has a factorization

    µ = µ_L ω_G µ_R,   (1.9)

where we fix e ∈ E(S(µ)), take L = E(S(µ)e), G = eS(µ)e and R = E(eS(µ)) so that S(µ) = LGR gives the Rees decomposition of S(µ) at e, and write µ_L(·) = µ{z ∈ S(µ) : z_L ∈ ·} and µ_R(·) = µ{z : z_R ∈ ·}. Consequently, if Z is a random variable whose law is µ, then the projections Z_L, Z_G and Z_R are independent and Z_G is uniform on G.

The proof of Proposition 1.4 can be found, e.g., in [5, Theorem 2.2].

The following proposition plays a key role in our analysis.

Proposition 1.5 (Infinite convolutions). Let µ ∈ P(S) and suppose that S coincides with ⋃_{n=1}^∞ S(µ)^n, the semigroup generated by S(µ). Then the following hold:
(i) The set of subsequential limits of {µ^n} is a finite cyclic group of the form

    K := {η, µη, ..., µ^{p-1}η}   (1.10)

for some p ∈ N, where η is the unit of K (so that η² = η) and µ^p η = η. The support S(η) is a completely simple subsemigroup of S (but not in general an ideal of S).
(ii) It holds that (1/n) Σ_{k=1}^n µ^k → ν := (1/p) Σ_{k=0}^{p-1} µ^k η as n → ∞ (so that ν² = ν). The support S(ν) is the kernel of S.
(iii) Let e ∈ E(S(η)) be fixed. Then the Rees decompositions at e of S(ν) and of S(η) are given as

    S(ν) = LGR and S(η) = LHR,   (1.11)

respectively, where L = E(S(η)e), R = E(eS(η)), H = eS(η)e and G = eS(ν)e. Moreover, the group factor H of S(η) is a normal subgroup of the group factor G of S(ν), and the convolution factorizations of ν and η are given as

    ν = η_L ω_G η_R and η = η_L ω_H η_R,   (1.12)

respectively, where η_L(·) = η{z : z_L ∈ ·} and η_R(·) = η{z : z_R ∈ ·}.
(iv) There exists γ ∈ G such that G/H = {H, γH, ..., γ^{p-1}H}, γ^p = e and

    µ^k η = η_L γ^k ω_H η_R,   k = 0, 1, ..., p-1,   (1.13)

where we identify an element of S with the Dirac mass at it. (We write C = {e, γ, ..., γ^{p-1}}, so that CH = ⋃ G/H = G.)

The proof of Proposition 1.5 can be found, e.g., in [5, Theorem 2.7].

1.3 Main results

Let V be a non-empty finite set and let Σ denote the set of mappings of V into itself. Note that Σ is also a finite set. For µ ∈ P(Σ) and Λ ∈ P(V^m), we define µΛ ∈ P(V^m) as

    (µΛ)(A) = Σ_{f∈Σ} Σ_{x∈V^m} 1_A(fx) µ{f} Λ{x},   A ⊂ V^m,   (1.14)

where we understand fx = (fx_1, ..., fx_m) for x = (x_1, ..., x_m) ∈ V^m. Denote

    V^m_× = {x = (x_1, ..., x_m) ∈ V^m : x_1, ..., x_m are distinct}.   (1.15)

Proposition 1.6. Let µ ∈ P(Σ) and set S = ⋃_{n=1}^∞ S(µ)^n. We apply Proposition 1.5 and adopt its notation. Denote

    m_µ = min{#(gV) : g ∈ S},   (1.16)

where #(A) denotes the number of elements of a set A. Define

    W_µ = {x ∈ V^{m_µ}_× : fx ∈ V^{m_µ}_× for all f ∈ S}.   (1.17)

Then there exists a subset W of eW_µ such that the following hold:
(i) W_µ = LGW. Consequently, eW_µ = GW.
(ii) The product mapping L × G × W ∋ (f, g, w) ↦ fgw ∈ W_µ is bijective. Its inverse will be denoted by x ↦ (x_L, x_G, x_W).
(iii) Λ ∈ P(V^{m_µ}_×) is µ-invariant, i.e., Λ = µΛ, if and only if Λ = η_L ω_G Λ_W for some Λ_W ∈ P(W).

The proof of Proposition 1.6 will be given in Section 3.
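The limit behaviour described in Proposition 1.5 can be watched numerically. With the same hypothetical generators as in the earlier sketches (for which the cycle length p happens to be 1), the convolution powers µ^n converge, and the Cesàro averages approach an idempotent measure ν; the sketch below checks ν² ≈ ν up to truncation error.

```python
from itertools import product

# Convolution powers mu^n and their Cesaro average on the semigroup
# generated by the two hypothetical maps used above; cf. (1.8) and
# Proposition 1.5.
def c(s, t):
    return tuple(s[t[x] - 1] for x in range(len(t)))

def conv(a, b):
    out = {}
    for (s, ws), (t, wt) in product(a.items(), b.items()):
        out[c(s, t)] = out.get(c(s, t), 0.0) + ws * wt
    return out

mu = {(2, 1, 4, 3, 5): 0.5, (1, 1, 3, 3, 5): 0.5}
n_max = 400
cesaro, p = {}, dict(mu)
for n in range(n_max):                       # accumulate (1/n_max) sum of mu^k
    for s, w in p.items():
        cesaro[s] = cesaro.get(s, 0.0) + w / n_max
    p = conv(p, mu)

nu2 = conv(cesaro, cesaro)
err = max(abs(nu2.get(s, 0.0) - cesaro.get(s, 0.0)) for s in set(nu2) | set(cesaro))
print("||nu*nu - nu|| ~", err)               # small, up to Cesaro truncation
```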
If an m-particle µ-evolution (X, N) is stationary, i.e., (X_{·+1}, N_{·+1}) d= (X, N), then the sequence X has a common law which is µ-invariant. Conversely, if Λ ∈ P(V^m) is µ-invariant, then there exists a stationary m-particle µ-evolution (X, N) such that the sequence X has Λ as its common law. We now state our main theorem, which will be proved in Section 4.

Theorem 1.7. Suppose the same assumptions of Proposition 1.6 are satisfied. Suppose that Λ ∈ P(V^{m_µ}_×) is µ-invariant and let (X, N) be a stationary m_µ-particle µ-evolution such that the sequence X has Λ as its common law. Then the following hold:
(i) X_k ∈ LGW a.s. and X^L_k d= η_L for all k ∈ Z.
(ii) X^G_k = (γ^k Y^C) U^H_k a.s. for k ∈ Z for some C-valued random variable Y^C and some H-valued random variables U^H_k such that U^H_k is uniform on H.
(iii) X^W_k = Z^W a.s. for k ∈ Z for some W-valued random variable Z^W.
(iv) If we write M^G_j := X^G_j (X^G_{j-1})⁻¹ for j ∈ Z and M^G_{k,j} := X^G_k (X^G_j)⁻¹ = M^G_k M^G_{k-1} ··· M^G_{j+1} for j ≤ k, we have the following factorization:

    X_j = X^L_j (M^G_{k,j})⁻¹ (γ^k Y^C) U^H_k Z^W   a.s. for j ≤ k.   (1.18)

(v) A resolution of the observation holds in the sense that

    F^X_k = G^N_k ∨ F^X_{-∞} ∨ σ(U^H_k)   a.s.,   (1.19)

where

    G^N_k = σ(X^L_j, M^G_j : j ≤ k) ⊂ F^N_k   a.s.,   (1.20)

    the three σ-fields F^N_k (⊃ G^N_k), F^X_{-∞} and σ(U^H_k) are independent,   (1.21)

and

    F^X_{-∞} = σ(Y^C, Z^W)   a.s.   (1.22)

(vi) Y^C d= ω_C and Z^W d= Λ_W, where ω_C denotes the Haar probability on the cyclic group C. It holds that Y^C and Z^W are independent.

We shall show in Section 5 that the non-stationary case can be reduced to the stationary case and satisfies Properties (i)-(v) of Theorem 1.7.

1.4 Historical remarks

Inspired by Tsirelson's example [2] of a stochastic differential equation which has no strong solution, Yor [13] made a thorough study of the action evolution X_k = N_k X_{k-1} in the case where both X and N take values in the torus T = {z ∈ C : |z| = 1} and N is not necessarily iid, where N_k X_{k-1} is understood as the usual product of two complex numbers. He obtained a general result of the resolution of the observation. Hirayama and Yano [4] generalized Yor's results to the case where the state space is a compact group. In these results the third noise is generated by a random variable with uniform law on a subgroup of the state space group. See also [12] for a survey of this topic.

Yano [11] studied the mono-particle action evolution on a finite set. He proved the existence of a non-trivial third noise when m_µ ≥ 2. He utilized several notions from the road coloring theory; for the details, see Trahtman [9] and the references therein.

The theories of the Rees decomposition, of convolution idempotents and of infinite convolutions for finite semigroups are very old and have nowadays been generalized to topological semigroups; see the textbook [5, Chapters 1 and 2] for the details. In particular, Proposition 1.5, which is a fundamental tool for our results, dates back to Rosenblatt [7], Collins [3] and Schwarz [8].

1.5 Organization

The organization of this paper is as follows. In Section 2 we discuss an example. In Section 3 we prove Proposition 1.6 and discuss the characterization of stationary probabilities. Section 4 is devoted to the proof of our main theorem, Theorem 1.7. In Section 5 we discuss the non-stationary case.

2 An example

Let us investigate an example which was discussed in [11, Subsection 3.3] for the mono-particle µ-evolution. We look at it from the viewpoint of multiparticle µ-evolutions. See [6] for other examples.

Let V = {1, 2, 3, 4, 5}. We write f = [y_1, y_2, y_3, y_4, y_5] if f : V → V is such that f1 = y_1, ..., f5 = y_5. Consider the two mappings

    f = [2, 3, 4, 1, 5] and g = [2, 5, 5, 2, 4].   (2.1)

Let µ = (δ_f + δ_g)/2, so that S(µ) = {f, g}, where δ_f stands for the Dirac mass at f.
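Before carrying out the computations of this section by hand, they can be checked by brute force; the sketch below (with f and g of (2.1) encoded as tuples) confirms m_µ = 3, the idempotent e = g³, the element h = f²e, the group factor G and the two F-cliques appearing in (2.5)-(2.12) below.

```python
from itertools import product

# A brute-force check of the semigroup computations of this section, with f
# and g of (2.1) as tuples (f(1),...,f(5)).
def c(s, t):
    return tuple(s[t[x] - 1] for x in range(5))

f = (2, 3, 4, 1, 5)
g = (2, 5, 5, 2, 4)
S = {f, g}
while True:
    new = {c(s, t) for s, t in product(S, S)} - S
    if not new:
        break
    S |= new

m = min(len(set(s)) for s in S)
print(m)                                      # 3, i.e. m_mu = 3
e = c(g, c(g, g))                             # e = g^3
h = c(c(f, f), e)                             # h = f^2 e
print(e, c(e, e) == e)                        # (4, 2, 2, 4, 5) True
print(h)                                      # (2, 4, 4, 2, 5)
K = {s for s in S if len(set(s)) == m}        # the kernel S(nu)
G = {c(c(e, k), e) for k in K}                # group factor eKe
print(G == {e, g, c(g, g), h, c(g, h), c(c(g, g), h)})   # True
print({tuple(sorted(set(s))) for s in K})     # {(2, 4, 5), (1, 3, 5)}: the F-cliques
```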
The marginal transition probability P_µ of (1.6) is given as

    ( P_µ(i, {j}) )_{i,j=1,...,5} =
    [  0    1    0    0    0  ]
    [  0    0   1/2   0   1/2 ]
    [  0    0    0   1/2  1/2 ]
    [ 1/2  1/2   0    0    0  ]
    [  0    0    0   1/2  1/2 ].   (2.2)

It is obvious that µλ = λ if and only if λP_µ = λ, and it is easy to see that there exists a unique µ-invariant probability measure, given as

    λ = (1/9)δ_1 + (2/9)δ_2 + (1/9)δ_3 + (2/9)δ_4 + (3/9)δ_5.   (2.3)

In [11, Theorem 1], for a stationary mono-particle µ-evolution (X, N) with X having λ as its common law, it was proved that there exists a third noise (U_k)_{k∈Z} such that

    σ(U_k) ⊂ F^{X,N}_k a.s. for k ∈ Z and F^X_k ⊂ F^N_k ∨ σ(U_k) a.s. for k ∈ Z,   (2.4)

with F^X_{-∞} being trivial a.s. and σ(U_k) being independent of F^N_k.

Set S = ⋃_{n=1}^∞ {f, g}^n and apply Propositions 1.5 and 1.6. Let us prove that

    L = {e, fe}, G = {e, g, g², h, gh, g²h}, R = {e, ef},   (2.5)

where e := g³ = [4, 2, 2, 4, 5] ∈ E(S) and h := f²e = ef² = ef²e = [2, 4, 4, 2, 5]. Set L′ = {e, fe}, G′ = {e, g, g², h, gh, g²h}, R′ = {e, ef} and K = L′G′R′.

Since g³ = h² = e, hg = g²h and hg² = gh, we see that G′ is a group. Since efe = e, f²e = ef² = h and gfe = efg = g, we see that SK ∪ KS ⊂ K and hence that K is an ideal of S. For any k ∈ K, we have eke ∈ G′ since efe = e, and then we see that SkS ⊃ G′ ∋ e, so that K is the kernel of S, which shows K = S(ν). We now see that G = eKe = G′, L = E(Ke) = E(L′G′) = L′ and R = E(eK) = E(G′R′) = R′.

Let H = eS(η)e be the subgroup of G in Proposition 1.5. Then we have µη_L ω_H η_R = η_L γ ω_H η_R, so that µη_L ω_H = η_L γ ω_H, since η_R e = δ_e. Let η_L = pδ_e + qδ_{fe} for some p, q > 0 with p + q = 1. Since gfe = gefe = ge = g, we have

    µη_L = ((1/2)δ_f + (1/2)δ_g)(pδ_e + qδ_{fe}) = (p/2)δ_{fe} + (1/2)δ_g + (q/2)δ_h.   (2.6)

Since fe ∈ S(µη_L ω_H) = S(η_L γ ω_H) = LγH, we see that H = G; indeed, the G-factor of fe is e, so that e ∈ γH, γ ∈ H, and hence G = CH = H. We now have

    (pδ_e + qδ_{fe})ω_G = η_L ω_G = µη_L ω_G = ((p/2)δ_{fe} + ((1+q)/2)δ_e)ω_G,   (2.7)

which yields p = 2/3 and q = 1/3, that is,

    η_L = (2/3)δ_e + (1/3)δ_{fe}.   (2.8)

In the same way we have η_R = (2/3)δ_e + (1/3)δ_{ef}, and thus we have obtained that

    µ^n → η = ν = η_L ω_G η_R.   (2.9)

Note that fe = [1, 3, 3, 1, 5] and ef = [2, 2, 4, 4, 5]. For (a, b, c) ∈ L × G × R, we have

    (a = e ⟺ aV = {2, 4, 5}; a = fe ⟺ aV = {1, 3, 5}),
    (c = e ⟺ c2 = c3 and c1 = c4; c = ef ⟺ c1 = c2 and c3 = c4).   (2.10)

We note that elements of G act as permutations over {2, 4, 5}:

    e(2, 4, 5) = (2, 4, 5), g(2, 4, 5) = (5, 2, 4), h(2, 4, 5) = (4, 2, 5).   (2.11)

It is easy to see that m_µ = 3 and

    W_µ = {(x, y, z) : (x, y, z) is a permutation of (2, 4, 5) or of (1, 3, 5)}.   (2.12)

We may take as the set W of Proposition 1.6

    W = {(2, 4, 5)}.   (2.13)

For example, for x = (3, 5, 1) ∈ W_µ, we see that x_L = fe, x_G = gh and x_W = (2, 4, 5).

By Proposition 1.6, Λ := η_L ω_G (2, 4, 5) is the unique µ-invariant probability measure on V³_×, where we identify the point (2, 4, 5) with the Dirac mass at it. Let (X, N) be a stationary tri-particle µ-evolution such that X has Λ as its common law. Then we have the factorization

    X_j = X^L_j (M^G_{k,j})⁻¹ U^G_k (2, 4, 5)   a.s. for j ≤ k,   (2.14)

with U^G_k = X^G_k, M^G_j = X^G_j (X^G_{j-1})⁻¹ and M^G_{k,j} = M^G_k M^G_{k-1} ··· M^G_{j+1}, and consequently, since both C and W are singletons here and hence F^X_{-∞} is trivial a.s., we obtain the resolution

    F^X_k = G^N_k ∨ σ(U^G_k) a.s. with G^N_k = σ(X^L_j, M^G_j : j ≤ k),   (2.15)

where the two σ-fields F^N_k (⊃ G^N_k) and σ(U^G_k) are independent.

Note that the first component (X¹, N) is a mono-particle µ-evolution such that X¹ has the common law

    η_L ω_G 2 = ((2/3)δ_e + (1/3)δ_{fe}) ω_{{2,4,5}} = (2/3)ω_{{2,4,5}} + (1/3)ω_{{1,3,5}} = λ,   (2.16)

where ω_A stands for the uniform law on a finite set A. We note that X¹_k = X^L_k U^G_k 2 and

    {X¹_k = 1} = {X^L_k = fe} ∩ {U^G_k 2 = 4},   (2.17)
    {X¹_k = 2} = {X^L_k = e} ∩ {U^G_k 2 = 2},   (2.18)
    {X¹_k = 3} = {X^L_k = fe} ∩ {U^G_k 2 = 2},   (2.19)
    {X¹_k = 4} = {X^L_k = e} ∩ {U^G_k 2 = 4},   (2.20)
    {X¹_k = 5} = {U^G_k 2 = 5}.   (2.21)

This shows that σ(U^G_k 2) ⊂ σ(X¹_k) a.s. We thus conclude that (U^G_k 2)_{k∈Z} is a third noise for (X¹, N), since

    F^{X¹}_k ⊂ G^N_k ∨ σ(U^G_k 2) a.s. for k ∈ Z,   (2.22)

where σ(U^G_k 2) is independent of F^N_k (⊃ G^N_k).
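The identities (2.2)-(2.9) can be double-checked numerically: the following sketch verifies that λ is invariant for P_µ and that the convolution powers µ^n approach η = η_L ω_G η_R with the weights η_L = (2/3)δ_e + (1/3)δ_{fe} and η_R = (2/3)δ_e + (1/3)δ_{ef} found above.

```python
from itertools import product

# Numerical verification of (2.2)-(2.3) and of (2.8)-(2.9).
def c(s, t):
    return tuple(s[t[x] - 1] for x in range(5))

def conv(a, b):
    out = {}
    for (s, ws), (t, wt) in product(a.items(), b.items()):
        out[c(s, t)] = out.get(c(s, t), 0.0) + ws * wt
    return out

f, g = (2, 3, 4, 1, 5), (2, 5, 5, 2, 4)
mu = {f: 0.5, g: 0.5}

lam = {1: 1/9, 2: 2/9, 3: 1/9, 4: 2/9, 5: 3/9}
push = {y: sum(w * lam[x + 1] for s, w in mu.items()
               for x in range(5) if s[x] == y) for y in range(1, 6)}
print(all(abs(push[y] - lam[y]) < 1e-12 for y in lam))    # True: mu lambda = lambda

e = c(g, c(g, g)); fe = c(f, e); ef = c(e, f); h = c(c(f, f), e)
G = [e, g, c(g, g), h, c(g, h), c(c(g, g), h)]
eta = conv(conv({e: 2/3, fe: 1/3}, {u: 1/6 for u in G}), {e: 2/3, ef: 1/3})

power = dict(mu)
for _ in range(100):
    power = conv(power, mu)                                # mu^101
err = max(abs(power.get(s, 0.0) - eta.get(s, 0.0))
          for s in set(power) | set(eta))
print("||mu^n - eta|| ~", err)                             # numerically small
```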
3 F-cliques and stationary probabilities

Throughout this section we suppose that all the assumptions of Proposition 1.6 are satisfied. We borrow several notions from the road coloring theory. A pair {x, y} from V will be called a deadlock if gx ≠ gy for all g ∈ S := ⋃_{n=1}^∞ S(µ)^n, or in other words, f_n f_{n-1} ··· f_1 x ≠ f_n f_{n-1} ··· f_1 y for all n ∈ N and f_1, ..., f_n ∈ S(µ). A subset F of V will be called an F-clique if every pair from F is a deadlock and if every set F ∪ {x} with x ∉ F contains a pair which is not a deadlock. In other words, an F-clique F is a maximal subset of V every pair from which is a deadlock.

The F-cliques can be characterized as follows.

Lemma 3.1. For g ∈ S, the set gV is an F-clique if and only if #(gV) = m_µ. In addition, it holds that

    S(ν) = {g ∈ S : gV is an F-clique} = {g ∈ S : #(gV) = m_µ}.   (3.1)

Proof. If #(gV) = m_µ, then for any f ∈ S we have m_µ ≤ #(fgV) ≤ #(gV) = m_µ, so that #(fgV) = m_µ, which implies that every pair from gV is a deadlock. Moreover, for any x ∉ gV, taking f ∈ S with #(fV) = m_µ, we have fx ∈ fV = f(gV), so that fx = fv for some v ∈ gV and {x, v} fails to be a deadlock; hence gV is an F-clique. Conversely, if gV is an F-clique, then #(fV) ≥ #(fgV) = #(gV) ≥ m_µ for any f ∈ S, so that #(gV) = m_µ.

To prove (3.1), it suffices to show that K := {g ∈ S : #(gV) = m_µ} is a minimal ideal of S, because S(ν) is the unique minimal ideal of S. It is obvious by definition that K is an ideal. Suppose ∅ ≠ IS ∪ SI ⊂ I ⊂ K. Let f ∈ I and g ∈ K. Since gf|_{gV} : gV → gV is bijective, the mapping (gf|_{gV})^p is the identity for some p ∈ N, so that (gf)^p g = g. Hence g = (gf)^{p-1} gfg ∈ SIS ⊂ I, which shows I = K.

By Lemma 3.1, the set W_µ defined in (1.17) can be represented as

    W_µ = {x = (x_1, ..., x_{m_µ}) ∈ V^{m_µ} : {x_1, ..., x_{m_µ}} is an F-clique}.   (3.2)

Lemma 3.2. For any x, x′ ∈ eW_µ, the two measures η_L ω_G x and η_L ω_G x′ either coincide or have disjoint supports.

Proof. Suppose S(η_L ω_G x) (= LGx) and S(η_L ω_G x′) (= LGx′) have a common element fgx = f′g′x′ for some f, f′ ∈ L and g, g′ ∈ G. Since ef = ef′ = e, we have gx = g′x′. We thus obtain

    η_L ω_G x = η_L ω_G gx = η_L ω_G g′x′ = η_L ω_G x′.   (3.3)

The proof is complete.
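Deadlocks and F-cliques are finite combinatorial objects and can be enumerated directly; the sketch below does so in the example of Section 2 and confirms the characterization (3.1).

```python
from itertools import combinations, product

# Enumeration of deadlocks and F-cliques for the example of Section 2, and a
# check of the equivalence in Lemma 3.1.
def c(s, t):
    return tuple(s[t[x] - 1] for x in range(5))

f, g = (2, 3, 4, 1, 5), (2, 5, 5, 2, 4)
S = {f, g}
while True:
    new = {c(s, t) for s, t in product(S, S)} - S
    if not new:
        break
    S |= new

def deadlock(x, y):
    # {x, y} is a deadlock iff no element of S merges x and y
    return all(s[x - 1] != s[y - 1] for s in S)

def all_deadlocks(F):
    return all(deadlock(x, y) for x, y in combinations(F, 2))

cliques = [F for r in range(5, 0, -1) for F in combinations(range(1, 6), r)
           if all_deadlocks(F)
           and not any(all_deadlocks(F + (v,)) for v in range(1, 6) if v not in F)]
print(cliques)                                 # [(1, 3, 5), (2, 4, 5)]

m = min(len(set(s)) for s in S)
print(all((tuple(sorted(set(s))) in cliques) == (len(set(s)) == m) for s in S))
# True: gV is an F-clique exactly when #(gV) = m_mu, cf. (3.1)
```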
We now prove Proposition 1.6.

Proof of Proposition 1.6. (i) By Lemma 3.2, we can find a subset W of eW_µ such that the family {η_L ω_G w : w ∈ W} consists of measures with distinct supports and exhausts {η_L ω_G x : x ∈ eW_µ}. By (3.1) and (3.2), we have W_µ = S(ν)W_µ = LGRW_µ. It is easy to see that RW_µ = GW_µ = eW_µ and that LW_µ = W_µ. Hence we obtain

    W_µ = LGeW_µ = S(η_L ω_G)eW_µ = ⋃_{x∈eW_µ} S(η_L ω_G x) = ⋃_{w∈W} S(η_L ω_G w) = LGW.   (3.4)

(ii) We have only to prove the injectivity of the product mapping L × G × W ∋ (f, g, w) ↦ fgw ∈ W_µ. Suppose fgw = f′g′w′. Since eL = {e} ⊂ G, we have

    η_L ω_G fgw = η_L ω_G (ef)gw = η_L ω_G w.   (3.5)

We thus obtain η_L ω_G w = η_L ω_G fgw = η_L ω_G f′g′w′ = η_L ω_G w′, which implies w = w′ by the definition of W.

If we write w = (w_1, ..., w_{m_µ}), then {w_1, ..., w_{m_µ}} = eV, because #(eV) = m_µ by (3.1). Hence the identity fgw = f′g′w implies that fg = f′g′ on eV. Since g = ge and g′ = g′e, we see that fg = f′g′ on V, which implies f = f′ and g = g′ by the product bijectivity.

(iii) Since η_R e Λ_W = Λ_W, we see that µ(η_L ω_G Λ_W) = µνeΛ_W = νeΛ_W = η_L ω_G Λ_W, so that η_L ω_G Λ_W is µ-invariant.

Suppose Λ ∈ P(V^{m_µ}_×) is µ-invariant. Since Λ = µΛ, we have Λ = ηΛ and hence Λ = νΛ = η_L ω_G η_R Λ. By (3.2), we have S(Λ) = S(νΛ) ⊂ W_µ. We then have S(η_R Λ) = S(η_R νΛ) = S(ω_G η_R Λ) ⊂ GRW_µ = eW_µ = GW. Hence

    Λ = (η_L ω_G)(η_R Λ) = Σ_{x∈GW} (η_L ω_G x)(η_R Λ){x}   (3.6)
      = Σ_{x∈GW} (η_L ω_G x_W)(η_R Λ){x} = η_L ω_G Λ_W,   (3.7)

where we take Λ_W(·) := (η_R Λ){x ∈ GW : x_W ∈ ·}. The proof is now complete.

4 Proof of Theorem 1.7

Throughout this section we suppose that all the assumptions of Theorem 1.7 are satisfied. We divide the proof of Theorem 1.7 into several steps.

4.1 Decomposition of X_k into LG- and W-factors

By Proposition 1.6, we have X_k ∈ LGW a.s. and X^L_k d= η_L for all k ∈ Z, and so we have shown Claim (i) of Theorem 1.7. Let us focus on the factorization X_k = (X^L_k X^G_k) X^W_k for k ∈ Z.

Proposition 4.1. Set Y_k = X^L_k X^G_k for k ∈ Z and Y = (Y_k)_{k∈Z}. Then the following hold:
(i) (Y, N) is a µ-evolution such that the sequence Y has the common law η_L ω_G.
(ii) There exists a W-valued random variable Z^W such that X^W_k = Z^W a.s. for k ∈ Z.
(iii) (Y, N) and Z^W are independent.

Proof. Note that

    Y_k X^W_k = X_k = N_k X_{k-1} = (N_k Y_{k-1}) X^W_{k-1}   a.s.   (4.1)

Since S LG = S S(ν)e ⊂ S(ν)e = LG and by Proposition 1.6, we have

    Y_k = N_k Y_{k-1} and X^W_k = X^W_{k-1}   a.s.   (4.2)

We now obtain Claim (ii) of Proposition 4.1 (and consequently we have shown Claim (iii) of Theorem 1.7).

Since N_k is independent of F^{Y,N}_{k-1} (⊂ F^{X,N}_{k-1}), we see that (Y, N) is a µ-evolution. Since X_k d= Λ = η_L ω_G Λ_W and by Proposition 1.6, we see that, for each fixed k ∈ Z, the three random variables X^L_k, X^G_k and X^W_k are independent and have laws η_L, ω_G and Λ_W, respectively. We now obtain Claim (i).

Let k ∈ Z be fixed. By the above argument, we see that Y_k = X^L_k X^G_k is independent of Z^W. Since {N_j : j > k} is independent of {Y_k, Z^W} and since Y_j = N_j N_{j-1} ··· N_{k+1} Y_k for j > k, we see that {(Y_j, N_j) : j > k} is independent of Z^W. Since k ∈ Z is arbitrary, we obtain Claim (iii). The proof is complete.
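In the example of Section 2, where W = {(2, 4, 5)}, the factorization x ↦ (x_L, x_G, x_W) used above can be recovered by exhaustive search, as the following sketch shows for the point x = (3, 5, 1) discussed after (2.13).

```python
from itertools import product

# Exhaustive-search factorization x = x_L x_G x_W of Proposition 1.6 (ii)
# in the example of Section 2.
def c(s, t):
    return tuple(s[t[x] - 1] for x in range(5))

def act(s, x):                       # componentwise action on a tuple of states
    return tuple(s[v - 1] for v in x)

f, g = (2, 3, 4, 1, 5), (2, 5, 5, 2, 4)
e = c(g, c(g, g)); fe = c(f, e); h = c(c(f, f), e)
L = [e, fe]
G = [e, g, c(g, g), h, c(g, h), c(c(g, g), h)]
W = [(2, 4, 5)]

def factorize(x):
    hits = [(l, u, w) for l, u, w in product(L, G, W) if act(c(l, u), w) == x]
    assert len(hits) == 1            # bijectivity, Proposition 1.6 (ii)
    return hits[0]

xL, xG, xW = factorize((3, 5, 1))
print(xL == fe, xG == c(g, h), xW)   # True True (2, 4, 5)
```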
4.2 Decomposition of X^G_k into C- and H-factors

By the definition of C and H in Proposition 1.5, the product mapping C × H ∋ (γ^j, h) ↦ γ^j h ∈ G is bijective. Its inverse will be denoted by g ↦ (g^C, g^H). For f ∈ S(ν) = LGR, we write f^C = (f_G)^C and f^H = (f_G)^H. For x ∈ LGW, we write x^C = (x_G)^C and x^H = (x_G)^H.

Since H is a normal subgroup of G and since C is a group, we have

    (g_1 g_2)^C H = (g_1 g_2)H = (g_1 H)(g_2 H) = (g_1^C H)(g_2^C H) = (g_1^C g_2^C)H,   (4.3)

so that (g_1 g_2)^C = g_1^C g_2^C.

We proceed to prove a part of Theorem 1.7.

Proposition 4.2. Claim (1.20) of Theorem 1.7 holds, and it holds that

    X^C_k = γ^k Y^C a.s. for k ∈ Z for some C-valued random variable Y^C.   (4.4)

Proof. Set N_{k,k} := id, the identity mapping of V, for k ∈ Z, and set

    N_{k,l} := N_k N_{k-1} ··· N_{l+1},   k > l.   (4.5)

Since e ∈ S = ⋃_{n=1}^∞ S(µ)^n, we can find f_1, f_2, ..., f_n ∈ S(µ) such that f_n f_{n-1} ··· f_1 = e, and hence we have

    T^e_k := sup{l < k - n : N_{l+n,l} = e} > -∞   a.s.   (4.6)

Since Se ⊂ S LG ⊂ LG = LGe ⊂ Se, we see that N_{k,T^e_k} = N_{k,T^e_k+n} N_{T^e_k+n,T^e_k} = N_{k,T^e_k+n} e ∈ Se = LG.

Let us prove Claim (1.20). Since X_k = N_{k,T^e_k} X_{T^e_k}, we have

    X^L_k = (N_{k,T^e_k})_L,   X^G_k = (N_{k,T^e_k})_G X^G_{T^e_k}   a.s.   (4.7)

Hence we obtain X^L_k ∈ F^N_k a.s. Since X_k = N_k X^L_{k-1} X^G_{k-1} X^W_{k-1} and SL = SLe ⊂ Se = LG, we have

    X^L_k = (N_k X^L_{k-1})_L,   X^G_k = (N_k X^L_{k-1})_G X^G_{k-1}   a.s.   (4.8)

Hence we obtain M^G_k = X^G_k (X^G_{k-1})⁻¹ = (N_k X^L_{k-1})_G ∈ F^N_k a.s. We thus obtain Claim (1.20).

Let ξ be a random variable such that ξ d= ω_H and ξ is independent of (X, N). Let k ∈ Z. By N_k X^L_{k-1} ∈ LG, we have

    M^G_k ξ = (N_k X^L_{k-1})_G ξ = (N_k X^L_{k-1} ξ)_G.   (4.9)

Since

    N_k X^L_{k-1} ξ d= µη_L ω_H = µηe = η_L γ ω_H η_R e = η_L γ ω_H,   (4.10)

we have M^G_k ξ d= γ ω_H d= γξ, which shows (M^G_k)^C = γ a.s. for k ∈ Z. We now see that

    X^C_k = (X^G_k)^C = (M^G_k X^G_{k-1})^C = (M^G_k)^C (X^G_{k-1})^C = γ X^C_{k-1}   a.s. for k ∈ Z,   (4.11)

which yields (4.4). The proof is now complete.

4.3 The third noise

The following lemma plays a key role.

Lemma 4.3. For any deterministic sequences {f_n} and {h_n} from S(ν), it holds that

    (f_n N_1 N_2 ··· N_n h_n)^H d→ ω_H as n → ∞.   (4.12)

Proof. Let {n(m)} be a subsequence of N. We can extract a further subsequence {n′(m)} such that f_{n′(m)} → f_0 and h_{n′(m)} → h_0 for some f_0, h_0 ∈ S(ν) and

    µ^{n′(m)} → µ^k η   (4.13)

for some k = 0, 1, ..., p-1. Hence we have

    (f_{n′(m)} N_1 N_2 ··· N_{n′(m)} h_{n′(m)})^H d→ (f_0 η_L γ^k ω_H η_R h_0)^H.   (4.14)

Since RL ⊂ H and g⁻¹Hg = H for g ∈ G, we have

    (f_0 η_L γ^k ω_H η_R h_0)_G = f_0^C γ^k (γ^{-k} f_0^H ((f_0)_R η_L) γ^k) ω_H (η_R (h_0)_L)(h_0^C h_0^H (h_0^C)⁻¹) h_0^C   (4.15)
    = f_0^C γ^k ω_H h_0^C = f_0^C γ^k h_0^C ω_H,   (4.16)

which yields (f_0 η_L γ^k ω_H η_R h_0)^H = ω_H. We thus obtain (4.12).

We proceed to prove a further part of Theorem 1.7.

Proposition 4.4. For k ∈ Z, set U^H_k := X^H_k = Y^H_k. Then U^H_k d= ω_H and the three σ-fields F^N_k, F^X_{-∞} and σ(U^H_k) are independent. (Consequently, Claims (ii) and (1.21) of Theorem 1.7 hold.)

Proof. Set

    F^N_{k,l} = σ(N_k, N_{k-1}, ..., N_{l+1}),   k > l.   (4.17)

Let k ∈ Z and let ϕ : H → R be a test function. Let l < k, n ∈ N, A ∈ F^N_{k,l} and B ∈ F^X_{-∞}. Note that the three σ-fields σ(N_{k,T^e_l}, 1_A), σ(N_{T^e_l,T^e_l-n}) and σ(Y_{T^e_l-n}, 1_B) are independent, where T^e_k has been introduced in the proof of Proposition 4.2. We thus have

    E[ϕ(U^H_k) 1_A 1_B] = E[ϕ(Y^H_k) 1_A 1_B]   (4.18)
    = E[ϕ((N_{k,T^e_l} N_{T^e_l,T^e_l-n} Y_{T^e_l-n})^H) 1_A 1_B]   (4.19)
    = E E′[ϕ((N_{k,T^e_l} N′_1 N′_2 ··· N′_n Y_{T^e_l-n})^H) 1_A 1_B]   (4.20)
    = E[ E′[ϕ((f N′_1 N′_2 ··· N′_n h_n)^H)] |_{f = N_{k,T^e_l}, h_n = Y_{T^e_l-n}} 1_A 1_B ],   (4.21)

where {N′_1, N′_2, ...} is an iid sequence with the common law µ which is independent of (X, N), and E′ denotes the expectation with respect to {N′_1, N′_2, ...}. Noting that N_{k,T^e_l} ∈ S(ν) (see the proof of Proposition 4.2) and Y_{T^e_l-n} ∈ S(ν), we apply Lemma 4.3 to see that

    (4.21) → ∫ ϕ dω_H · E[1_A 1_B] = ∫ ϕ dω_H · P(A)P(B)   as n → ∞.   (4.22)

Since l < k is arbitrary, we obtain

    E[ϕ(U^H_k) 1_A 1_B] = ∫ ϕ dω_H · P(A)P(B),   A ∈ F^N_k, B ∈ F^X_{-∞},   (4.23)

which leads to the desired result.
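The convergence (4.12) is easy to observe by simulation. In the example of Section 2 one has H = G, and the following Monte Carlo sketch (with the deterministic factors chosen as f_n = h_n = e) shows the law of the G-factor of a long product approaching ω_H = ω_G.

```python
import random

# Monte Carlo illustration of Lemma 4.3 in the example of Section 2, where
# H = G: the product e N_1 ... N_n e lies in G = eSe, and its law approaches
# the uniform law on the six-element group G.
def c(s, t):
    return tuple(s[t[x] - 1] for x in range(5))

f, g = (2, 3, 4, 1, 5), (2, 5, 5, 2, 4)
e = c(g, c(g, g))
h = c(c(f, f), e)
G = [e, g, c(g, g), h, c(g, h), c(c(g, g), h)]

trials, n = 12000, 40
counts = dict.fromkeys(G, 0)
for _ in range(trials):
    s = e
    for _ in range(n):
        s = c(s, random.choice([f, g]))      # append one more iid factor
    counts[c(s, e)] += 1                     # e N_1 ... N_n e, an element of G
print([round(counts[u] / trials, 3) for u in G])   # each entry close to 1/6
```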
4.4 Determining the remote past noise

We need the following lemma.

Lemma 4.5. Let (Ω, F, P) be a probability space and let A, B and C be three sub-σ-fields of F. Suppose that A ⊂ B ∨ C a.s. and that A ∨ B is independent of C. Then A ⊂ B a.s.

The proof of Lemma 4.5 can be found in [1, Section 2.2], and so we omit it.

We shall now complete the proof of Theorem 1.7.

Proof of Theorem 1.7. What remains unproved are Claims (iv), (v) and (vi). We have shown that X^C_k = γ^k Y^C, X^H_k = U^H_k and X^W_k = Z^W. Let j ≤ k. Since X^G_k = M^G_k X^G_{k-1} by the definition of M^G_k, we have X^G_k = M^G_{k,j} X^G_j. Hence we obtain

    X_j = X^L_j X^G_j Z^W = X^L_j (M^G_{k,j})⁻¹ X^G_k Z^W = X^L_j (M^G_{k,j})⁻¹ γ^k Y^C U^H_k Z^W   a.s.,   (4.24)

which shows Claim (iv) and leads to

    F^X_k = G^N_k ∨ σ(Y^C, Z^W) ∨ σ(U^H_k)   a.s.   (4.25)

Since σ(Y^C, Z^W) ⊂ F^X_{-∞} a.s., which is obvious by definition, and by (1.21), we can apply Lemma 4.5 to A = F^X_{-∞}, B = σ(Y^C, Z^W) and C = F^N_k ∨ σ(U^H_k), and hence we obtain F^X_{-∞} ⊂ σ(Y^C, Z^W) a.s. We thus obtain (1.22). Combining (4.25) and (1.22), we obtain (1.19), which shows Claim (v).

Since X_k = X^L_k γ^k Y^C U^H_k Z^W and since Λ = η_L ω_G Λ_W, we see that γ^k Y^C U^H_k and Z^W are independent and that γ^k Y^C U^H_k d= ω_G and Z^W d= Λ_W. Since ω_G = ω_C ω_H, we see that Y^C and U^H_k are independent, Y^C d= ω_C and U^H_k d= ω_H. We now obtain Claim (vi). The proof of Theorem 1.7 is therefore complete.

5 The non-stationary case

Throughout this section we adopt the settings of Subsection 1.3.

Proposition 5.1. For a sequence (Λ_k)_{k∈Z} from P(V^{m_µ}_×), the following are equivalent:
(i) Λ_k = µΛ_{k-1}, k ∈ Z.
(ii) There exist Λ⁰_W, ..., Λ^{p-1}_W ∈ P(W) and constants c_0, ..., c_{p-1} ≥ 0 with c_0 + ··· + c_{p-1} = 1 such that

    Λ_k = Σ_{i=0}^{p-1} c_i η_L γ^{k+i} ω_H Λ^i_W,   k ∈ Z.   (5.1)

Proof. Since µ(η_L γ^{k+i} ω_H) = η_L γ^{k+i+1} ω_H, it is obvious that Condition (ii) implies Condition (i).

Suppose Condition (i) is satisfied. Take a subsequence {n(m)} of N such that µ^{n(m)} → η and Λ_{-n(m)} → Λ* for some Λ* ∈ P(V^{m_µ}_×). We then have Λ_0 = µ^{n(m)} Λ_{-n(m)} → ηΛ*, which shows that Λ_0 = ηΛ* = η_L ω_H η_R Λ*. Let X* be a random variable whose law is η_R Λ*. Since η_L ω_H X* ∈ P(V^{m_µ}_×) and X* ∈ RV^{m_µ}_× = eV^{m_µ}_× a.s., we have X* ∈ eW_µ a.s. Since eW_µ = GW = CHW, we have η_L ω_H X* = η_L ω_H X*^C X*^H X*^W d= η_L X*^C ω_H X*^W, where we wrote x^C = (x_G)^C and x^H = (x_G)^H. For i = 0, 1, ..., p-1, set

    c_i = P(X*^C = γ^i) and Λ^i_W(·) = P(X*^W ∈ · | X*^C = γ^i).   (5.2)

We then obtain (5.1) for k = 0. Since µ(η_L γ^{k+i} ω_H) = η_L γ^{k+i+1} ω_H, we have (5.1) also for k ≥ 1.

Now let k_0 ≤ -1. By the same argument as for Λ_0, we have

    Λ_{k_0-1} = Σ_{i=0}^{p-1} c̃_i η_L γ^{k_0-1+i} ω_H Λ̃^i_W   (5.3)

for some Λ̃⁰_W, ..., Λ̃^{p-1}_W ∈ P(W) and some constants c̃_0, ..., c̃_{p-1} ≥ 0 with c̃_0 + ··· + c̃_{p-1} = 1. We then have

    Λ_k = µ^{k-k_0+1} Λ_{k_0-1} = Σ_{i=0}^{p-1} c̃_i η_L γ^{k+i} ω_H Λ̃^i_W,   k ≥ k_0 - 1.   (5.4)

Comparing this identity at k = 0 with (5.1) and using Proposition 1.6, we obtain c_i = c̃_i and Λ^i_W = Λ̃^i_W for i = 0, 1, ..., p-1. We thus obtain (5.1) for k = k_0 - 1. Since k_0 ≤ -1 is arbitrary, we have proved (5.1) for all k ∈ Z.
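In the example of Section 2 we have p = 1 and W = {(2, 4, 5)}, so Proposition 5.1 forces every solution of Λ_k = µΛ_{k-1} on V³_× to be the constant sequence Λ = η_L ω_G (2, 4, 5). The sketch below supports this numerically: power iteration from two different starting points of W_µ converges to the same law, which puts mass 1/9 on each permutation of (2, 4, 5) and 1/18 on each permutation of (1, 3, 5).

```python
from itertools import permutations

# Power iteration for the tri-particle chain of Section 2, illustrating the
# uniqueness of the mu-invariant law on V^3_x (cf. Proposition 5.1 with
# p = 1).
def act(s, x):
    return tuple(s[v - 1] for v in x)

f, g = (2, 3, 4, 1, 5), (2, 5, 5, 2, 4)

def step(dist):
    out = {}
    for x, w in dist.items():
        for s in (f, g):
            out[act(s, x)] = out.get(act(s, x), 0.0) + 0.5 * w
    return out

target = {q: 1/9 for q in permutations((2, 4, 5))}
target.update({q: 1/18 for q in permutations((1, 3, 5))})

for start in [(2, 4, 5), (3, 5, 1)]:
    dist = {start: 1.0}
    for _ in range(300):
        dist = step(dist)
    err = max(abs(dist.get(x, 0.0) - target.get(x, 0.0))
              for x in set(dist) | set(target))
    print(err)                       # small for both starting points
```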
Theorem 5.2. Suppose the same assumptions of Proposition 1.6 are satisfied. Let (X, N) be an m_µ-particle µ-evolution such that the entries X¹_0, ..., X^{m_µ}_0 are distinct a.s. Set Λ_k(·) := P(X_k ∈ ·) for k ∈ Z. Then it holds that (Λ_k)_{k∈Z} satisfies the equivalent conditions of Proposition 5.1, that Claims (i)-(v) of Theorem 1.7 are satisfied, and that

    P(Y^C = γ^i, Z^W = w) = c_i Λ^i_W{w}   for i = 0, ..., p-1 and w ∈ W.   (5.5)

Proof. We have X_k = N_{k,T^e_k} X_{T^e_k} ∈ LGV^{m_µ}, where T^e_k has been introduced in the proof of Proposition 4.2. This shows that, for each k ∈ Z, every distinct pair from {X¹_k, ..., X^{m_µ}_k} is a deadlock. Hence we see that the number of distinct elements of {X¹_k, ..., X^{m_µ}_k} is a.s. constant in k ∈ Z. By the assumption that X¹_0, ..., X^{m_µ}_0 are distinct a.s., we see that X_k ∈ V^{m_µ}_× a.s., which shows Λ_k ∈ P(V^{m_µ}_×).

By the definition of a µ-evolution, we see that Condition (i) of Proposition 5.1 is satisfied. Hence we have the representation (5.1).

We write ω_W for the uniform probability on W and write Λ̃ = η_L ω_G ω_W, which is a µ-invariant probability whose support is W_µ. Let (X̃, Ñ) under P̃ be a stationary m_µ-particle µ-evolution such that X̃ has Λ̃ as its common law. By (vi) of Theorem 1.7, we know that

    P̃(Ỹ^C = γ^i, Z̃^W = w) = 1/(p · #(W)) > 0,   (5.6)

so that

    P̃_{γ^i,w}(·) := P̃(· | Ỹ^C = γ^i, Z̃^W = w)   (5.7)

is well-defined. We then see that (X̃, Ñ) under P̃_{γ^i,w} is a (non-stationary) m_µ-particle µ-evolution; in fact, since σ(Ỹ^C, Z̃^W) ⊂ F^X̃_{-∞} a.s., we can verify the Markov property (1.3). Note that, for each k ∈ Z, the law of X̃_k under P̃_{γ^i,w} equals η_L γ^{k+i} ω_H w. Moreover, by (1.18), we obtain the following factorization:

    X̃_j = X̃^L_j (M̃^G_{k,j})⁻¹ γ^{k+i} Ũ^H_k w,   P̃_{γ^i,w}-a.s. for j ≤ k,   (5.8)

where M̃^G_{k,j} and Ũ^H_k are defined in the same way as in Theorem 1.7. We then see that Claims (i)-(v) of Theorem 1.7 are satisfied for (X̃, Ñ) under P̃_{γ^i,w}.

Define

    Q̃ = Σ_{i=0}^{p-1} c_i Σ_{w∈W} Λ^i_W{w} P̃_{γ^i,w}.   (5.9)

We then see that the joint law of (X, N) under P equals that of (X̃, Ñ) under Q̃; in fact, both are µ-evolutions and

    Q̃(X̃_k ∈ ·) = Σ_{i=0}^{p-1} c_i Σ_{w∈W} Λ^i_W{w} (η_L γ^{k+i} ω_H w)   (5.10)
    = Σ_{i=0}^{p-1} c_i η_L γ^{k+i} ω_H Λ^i_W = Λ_k = P(X_k ∈ ·).   (5.11)

We thus derive from (5.8) the following factorization:

    X̃_j = X̃^L_j (M̃^G_{k,j})⁻¹ γ^k Ỹ^C Ũ^H_k Z̃^W,   Q̃-a.s. for j ≤ k,   (5.12)

where Ỹ^C and Z̃^W are defined in the same way as in Theorem 1.7. We therefore obtain the desired result.

References

[1] L. Chaumont and M. Yor. Exercises in probability. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, second edition, 2012. A guided tour from measure theory to random processes, via conditioning.
[2] B. S. Cirel′son. An example of a stochastic differential equation that has no strong solution. Teor. Verojatnost. i Primenen., 20(2):427–430, 1975.
[3] H. S. Collins. Convergence of convolution iterates of measures. Duke Math. J., 29:259–264, 1962.
[4] T. Hirayama and K. Yano. Extremal solutions for stochastic equations indexed by negative integers and taking values in compact groups. Stochastic Process. Appl., 120(8):1404–1423, 2010.
[5] G. Högnäs and A. Mukherjea. Probability measures on semigroups. Probability and its Applications (New York). Springer, New York, second edition, 2011. Convolution products, random walks, and random matrices.
[6] Y. Ito, T. Sera, and K. Yano. Examples of third noise problems for action evolutions with infinite past. RIMS Kôkyûroku, Proceedings of Research on the Theory of Random Dynamical Systems and Fractal Geometry in 2019, to appear.
[7] M. Rosenblatt. Limits of convolution sequences of measures on a compact topological semigroup. J. Math. Mech., 9:293–305, 1960.
[8] Š. Schwarz. Convolution semigroup of measures on compact noncommutative semigroups. Czechoslovak Math. J., 14(89):95–115, 1964.
[9] A. N. Trahtman. The road coloring problem. Israel J. Math., 172:51–60, 2009.
[10] R. van Handel. On the exchange of intersection and supremum of σ-fields in filtering theory. Israel J. Math., 192(2):763–784, 2012.
[11] K. Yano. Random walk in a finite directed graph subject to a road coloring. J. Theoret. Probab., 26(1):259–283, 2013.
[12] K. Yano and M. Yor. Around Tsirelson's equation, or: The evolution process may not explain everything. Probab. Surv., 12:1–12, 2015.
[13] M. Yor. Tsirel′son's equation in discrete time. Probab. Theory Related Fields, 91(2):135–152, 1992.