A central limit theorem for time-dependent dynamical systems
Péter Nándori, Domokos Szász and Tamás Varjú∗

July 4, 2018
Abstract
The work [8] established memory loss in the time-dependent (non-random) case of uniformly expanding maps of the interval. Here we find conditions under which we have convergence to the normal distribution of the appropriately scaled Birkhoff-like partial sums of appropriate test functions. A substantial part of the problem is to ensure that the variances of the partial sums tend to infinity (cf. the zero-cohomology condition in the autonomous case). In fact, the present paper is the first one where non-random, i.e. specific examples are also found, which are not small perturbations of a given map. Our approach uses the martingale approximation technique in the form of [9].
1 Introduction

Time-dependent dynamical systems appear in various applications. Recently, [8] could establish exponential loss of memory for expanding maps and, moreover, for one-dimensional piecewise expanding maps with slowly varying parameters. It also provided interesting motivations and examples for the problem. For us, beside their work, an additional incentive was the question of J. Lebowitz [6]: bound the correlation decay for a planar finite-horizon Lorentz process which is periodic apart from the 0-th cell; in it, the Lorentz particle encounters a particular scatterer of the 0-th cell moderately displaced at each of its subsequent returns to the 0-th cell. (Slightly similar is the situation in the Chernov-Dolgopyat model of Brownian Brownian motion, where, between subsequent collisions of the light particle with the heavy one, the heavy particle slightly moves away, cf. [3].)

The results of [8] say that, for sequences of uniformly expanding maps, the distances of images of a pair of different initial measures converge to 0 exponentially fast. In the same setup it is also natural to expect that the probability laws of the Birkhoff-type partial sums of some given function, scaled, of course, by the square roots of their variances, are approximately Gaussian. The main theorem of our paper provides a positive answer, though our conditions are surprisingly more restrictive than those of [8]. Let us explain the difficulty and some related results.

∗ The support of the Hungarian National Foundation for Scientific Research grant No. K 71693 is gratefully acknowledged.
In functional central limit theorems for functions of autonomous chaotic deterministic systems the zero-cohomology condition is, in quite a generality, known to be necessary and sufficient for the vanishing of the limiting variance (see [7] for instance). For time-dependent systems, however, such a condition is only known for almost all versions of random dynamical systems (see [1]), and for other models the situation can be, and definitely is, completely different. In fact, for time-dependent systems, [2] first proved a Gaussian approximation theorem in quite a generality; he, however, assumed that the variances of the Birkhoff-type partial sums tend to ∞ sufficiently fast; the paper, however, did not provide any example when this condition would hold. The more recent work [4] proves, under some reasonable conditions, a dichotomy: either the variances are bounded or the Gaussian approximation holds; the article also provides an example for the latter in the case when the time-dependent maps are smaller and smaller perturbations of a given map. But still there is no general method for ascertaining whether the variance is bounded or not. Finally we note that [5] has interesting results for higher order cohomologies but its setup is different.

The present work is, in fact, the first one where non-random, i.e. specific examples are also found that are not small perturbations of a given map. The proof of our main theorem uses the martingale approximation technique in the form introduced in [9] for treating additive functions of inhomogeneous Markov chains. The organization of our paper is simple: Section 2 contains our main theorem and provides examples when it is applicable. Section 3 is devoted to the proof of the theorem.

2 The main theorem with examples

Let A be a set of numbers and (X, F, µ) a probability space. For each a ∈ A define T_a : X → X. Suppose that µ is invariant for all the T_a's. Now consider a sequence of numbers from A, i.e.
a : N → A. Our aim is to prove some kind of central limit theorem for the sequence

    f ∘ T_{a_1}, f ∘ T_{a_2} ∘ T_{a_1}, . . .

with some nice function f : X → R. As usual,

    ˆT_a g(x) = g(T_a x),

and ˆT*_a is the L²(µ)-adjoint of ˆT_a (the so-called Perron-Frobenius operator). Further, introduce the notation

    ˆT_{[i..j]} = ˆT_{a_i} · · · ˆT_{a_j} if i ≤ j, and Id otherwise,

and for simplicity write ˆT_{[j]} = ˆT_{[1..j]}. Similarly, define

    ˆT*_{[i..j]} = ˆT*_{a_j} · · · ˆT*_{a_i} if i ≤ j, and Id otherwise,

and ˆT*_{[j]} = ˆT*_{[1..j]}. Further, let F_0 = F, F_i = (T_{a_1})^{-1} · · · (T_{a_i})^{-1} F, and assume that there is a Banach space B of functions on X such that ‖g‖ := ‖g‖_B ≥ ‖g‖_∞ for all g ∈ B. Finally, for the fixed function f, introduce the notation

    u_k = Σ_{i=1}^{k} ˆT*_{[i+1..k]} f.

With the above notation, our aim is to prove a limit theorem for S_n(x) = Σ_{k=1}^{n} ˆT_{[k]} f(x).

Theorem 1
Assume that f, a and T_b, b ∈ A, satisfy the following assumptions.

1. ∫ f dµ = 0.

2. T_b is onto but not invertible for all b ∈ A.

3. f ∈ B, and there exist K < ∞ and τ < 1 such that for all sequences b : N → A and for all k,

    ‖ˆT*_{b_1} · · · ˆT*_{b_k} f‖ ≤ K τ^k ‖f‖.

4. Let χ_k ∈ [0, π/2] denote the angle in L²(µ) between u_k and the subspace of (T_{a_{k+1}})^{-1}F-measurable functions (with χ_k := 0 if u_k = 0).

5. Σ_{k=1}^{∞} min_{j∈{k,k+1}} (1 − cos² χ_j) = ∞.

Then Var(S_n) → ∞ and S_n / √Var(S_n) converges weakly to the standard normal distribution as n → ∞.

Example 2 Define (X, F, µ) = (S¹, Borel, Leb), A = {2, 3, . . .}, T_a(x) = ax (mod 1),

    B = C¹ = C¹(S¹), ‖g‖ := sup_{x∈S¹} |g(x)| + sup_{x∈S¹} |g′(x)|.

Fix a non-constant function f ∈ C¹ satisfying ∫ f dx = 0. Then there exists some integer L = L(f) such that with all sequences a for which

    #{k : min{a_k, a_{k+1}, a_{k+2}} > L} = ∞,

the assumptions of Theorem 1 are fulfilled.

Proof of Example 2. It is easy to see that for all g ∈ C¹ with zero mean and for all b : N → A,

    ‖ˆT*_b g‖ ≤ 2 b^{-1} ‖g‖, and similarly, ‖ˆT*_{b_1} · · · ˆT*_{b_k} g‖ ≤ 2 · 2^{-k} ‖g‖. (1)

Hence Assumption 3 is fulfilled. In order to check Assumption 5, select x, y ∈ S¹ and ε, δ > 0 such that

    min_{z∈[x,x+ε]} f(z) > δ + max_{z∈[y,y+ε]} f(z).

This can be done since f is not constant. Now choose L > max{24‖f‖/δ, 2/ε}. By (1), ‖u_{k−1}‖ ≤ 3‖f‖, whence if a_k > L, then

    ‖Σ_{i=1}^{k−1} ˆT*_{[i+1..k]} f‖ = ‖ˆT*_{a_k} u_{k−1}‖ ≤ 2 L^{-1} · 3‖f‖ < δ/4,

irrespective of a_1, . . . , a_{k−1}. This yields

    min_{z∈[x,x+ε]} u_k(z) > δ/2 + max_{z∈[y,y+ε]} u_k(z).

Since L > 2/ε, every g which is (T_{a_{k+1}})^{-1}F-measurable with a_{k+1} > L is periodic with period 1/a_{k+1} < ε/2; hence one can find h : [0, ε/2] → R and ε₁ ∈ [0, ε/2) such that

    g(y + ε₁ + z) = g(x + z) = h(z) for all z ∈ [0, ε/2].

Therefore

    ‖u_k − g‖² ≥ ∫_x^{x+ε/2} (u_k(z) − g(z))² dz + ∫_{y+ε₁}^{y+ε₁+ε/2} (u_k(z) − g(z))² dz
              = ∫_0^{ε/2} (u_k(x+z) − h(z))² dz + ∫_0^{ε/2} (u_k(y+ε₁+z) − h(z))² dz
              ≥ (1/2) ∫_0^{ε/2} (u_k(x+z) − u_k(y+ε₁+z))² dz ≥ δ²ε/64. (2)

Since ‖u_k‖ ≤ 3‖f‖ is bounded, (2) implies that 1 − cos² χ_k is uniformly bounded away from zero if min{a_k, a_{k+1}} > L. Hence, Assumption 5 is fulfilled if there exist infinitely many indices k such that

    min{a_k, a_{k+1}, a_{k+2}} > L.

In Example 2, expanding maps with large derivative were needed in order to obtain the Gaussian approximation.
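To see the operators at work, the following sketch (an illustration, not part of the original argument; the test functions below are our choices) implements the Koopman operator ˆT_a and the Perron-Frobenius operator ˆT*_a g(x) = (1/a) Σ_{i=0}^{a−1} g((x+i)/a) for the maps T_a(x) = ax mod 1 of Example 2. It checks numerically that ˆT*_a ˆT_a = Id, that ˆT*_a is the L²(Leb)-adjoint of ˆT_a, and that iterating ˆT*_2 on the zero-mean sawtooth s(x) = x − 1/2 halves the sup norm at every step (s is an eigenfunction of ˆT*_2 with eigenvalue 1/2; it is not in C¹(S¹), but it makes the geometric contraction behind (1) visible).

```python
# Illustration: Koopman and Perron-Frobenius operators for T_a(x) = a*x mod 1.
from math import cos, sin, pi

def koopman(a, g):
    """(T^_a g)(x) = g(T_a x)."""
    return lambda x: g((a * x) % 1.0)

def transfer(a, g):
    """(T^*_a g)(x): average of g over the a preimages of x under T_a."""
    return lambda x: sum(g((x + i) / a) for i in range(a)) / a

grid = [(i + 0.5) / 4096 for i in range(4096)]       # midpoint grid on [0, 1)

# 1) T^*_a T^_a = Id: averaging g over the preimages of T_a x gives back g.
g = lambda x: cos(2 * pi * x) + 0.3 * sin(4 * pi * x)
err_projection = max(abs(transfer(3, koopman(3, g))(x) - g(x)) for x in grid)

# 2) Adjointness w.r.t. Lebesgue measure: <T^_2 g, h> = <g, T^*_2 h>.
h = lambda x: cos(4 * pi * x)
lhs = sum(koopman(2, g)(x) * h(x) for x in grid) / len(grid)
rhs = sum(g(x) * transfer(2, h)(x) for x in grid) / len(grid)
err_duality = abs(lhs - rhs)

# 3) Contraction: T^*_2 s = s/2 for the zero-mean sawtooth s(x) = x - 1/2,
#    so the sup norm decays geometrically with ratio 1/2 under iteration.
s = lambda x: x - 0.5
coarse = [(i + 0.5) / 256 for i in range(256)]
sups, cur = [], s
for _ in range(8):
    cur = transfer(2, cur)                           # one more application of T^*_2
    sups.append(max(abs(cur(x)) for x in coarse))
ratios = [sups[k + 1] / sups[k] for k in range(len(sups) - 1)]
```

Here err_projection and err_duality vanish up to floating-point and quadrature error, and every entry of ratios equals 1/2 up to rounding; one checks directly that ˆT*_a s = s/a, so replacing 2 by a larger a only speeds up the contraction, in line with Example 2's preference for large a_k.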
The question naturally arises as to what happens when one uses only finitely many dynamics, for instance only T_2 and T_3 of Example 2. That is why we discuss the following example.

Example 3 Define X, F, µ, A, T_b, B as in Example 2. If a is a sequence for which there is a b ∈ A such that for every integer K one can find a k for which

    a_k = a_{k+1} = . . . = a_{k+K−1} = b,

and f ∈ B, ∫ f = 0, is any function for which the equation f = ˆT_b u − u has no solution u, then the assumptions of Theorem 1 are fulfilled.

Proof of Example 3. It is enough to verify Assumption 5. To do so, for K ∈ Z⁺ pick k such that

    a_{k−K} = a_{k−K+1} = . . . = a_{k+2} = b. (3)

Then (1) implies that

    ‖u_j − Σ_{i=0}^{∞} (ˆT*_b)^i f‖ < C 2^{−K} (4)

holds for j = k, k+1 with some C uniformly in K. Now, if g := Σ_{i=0}^{∞} (ˆT*_b)^i f is not (T_b)^{-1}F-measurable, then necessarily its L²-angle with those functions is positive. Since (3) and (4) hold for infinitely many k's, min{χ_k, χ_{k+1}} has a positive lower bound infinitely many times, implying Assumption 5. On the other hand, if g is (T_b)^{-1}F-measurable, then g = ˆT_b ˆT*_b g and g − ˆT*_b g = f imply that for u = ˆT*_b g, ˆT_b u − u = f.

Note that in Example 3, Var(S_n) can grow arbitrarily slowly. Indeed, pick a C¹ function f for which f = ˆT_2 u − u has no solution u, but there is some v such that f = ˆT_3 v − v. Now pick a sequence of integers d_l, l ∈ N, with d_l → ∞ fast enough, and define

    a_k = 2 if d_l ≤ k < d_l + l for some l, and a_k = 3 otherwise.

By Assumption 3, |E(ˆT_{[i]} f · ˆT_{[j]} f)| ≤ K τ^{|i−j|} ‖f‖² (formally it follows from (14)), which in turn yields that Var(S_k) is bounded by some constant times k. Now, with the notation l_n := max{l : d_l ≤ n}, write

    Var(S_n) ≤ 4 Var(S_{d_{l_n−1}+l_n−1}) + 4 Var(S_{d_{l_n}} − S_{d_{l_n−1}+l_n−1}) + 4 Var(S_{d_{l_n}+l_n} − S_{d_{l_n}}) + 4 Var(S_n − S_{d_{l_n}+l_n}).

On the other hand, f = ˆT_3 v − v implies that ˆT_3 f + . . . + ˆT_3^m f is uniformly bounded in m.
Thus the second and the last terms in the above sum are bounded. Whence Var(S_n) is smaller than some constant times d_{l_n−1}. In particular, if the d_l grow fast enough, then Var(S_n)/n^α → 0 as n → ∞ for any prescribed positive α. Note that in this case the conditions of [2] for the Gaussian approximation are not met.

3 Proof of Theorem 1

This section is devoted to the proof of Theorem 1. As in [7], [9] and [4], the proof is based on martingale approximation. First, observe that

    ˆT*_{[n]} ˆT_{[n]} = Id,

and ˆT_{[n]} ˆT*_{[n]} is the orthogonal projection onto the F_n-measurable functions (for the proof of the latter, see [7]). Now we introduce our approximating martingale, which is analogous to the one of [9]:

    Z_k = Σ_{i=1}^{k} E[ˆT_{[i]} f | F_k] = Σ_{i=1}^{k} ˆT_{[k]} ˆT*_{[k]} ˆT_{[i]} f = Σ_{i=1}^{k} ˆT_{[k]} ˆT*_{[i+1..k]} f = ˆT_{[k]} u_k. (5)

Since

    ˆT_{[i]} f = Z_i − E[Z_{i−1} | F_i] (6)
             = (Z_i − E[Z_i | F_{i+1}]) + (E[Z_i | F_{i+1}] − E[Z_{i−1} | F_i]), (7)

one obtains

    S_n = Σ_{k=1}^{n−1} (Z_k − E[Z_k | F_{k+1}]) + Z_n.

Now,

    ξ^{(n)}_k = (Z_k − E[Z_k | F_{k+1}]) / √Var(S_n)

is a reverse martingale difference for the σ-algebras F_1, . . . , F_n. Thus, in particular,

    Var(S_n) = Σ_{k=1}^{n−1} Var(Z_k − E[Z_k | F_{k+1}]) + Var(Z_n). (8)

Using our martingale approximation and the well-known martingale CLT (see [9] for instance), it is enough to prove that the difference between the martingale approximant and S_n is negligible, and that

    max_{1≤i≤n} ‖ξ^{(n)}_i‖_∞ → 0, (9)

    ‖Σ_{i=1}^{n} E[(ξ^{(n)}_i)² | F_{i+1}] − 1‖ → 0. (10)

To prove (9) and (10), we adopt the ideas of [9]. To verify (9), observe that by Assumption 3,

    ‖Z_k‖_∞ ≤ Σ_{j=1}^{k} ‖ˆT_{[k]} ˆT*_{[j+1..k]} f‖_∞ ≤ Σ_{j=1}^{k} ‖ˆT*_{[j+1..k]} f‖_∞ ≤ Σ_{j=1}^{k} ‖ˆT*_{[j+1..k]} f‖ ≤ Σ_{j=1}^{k} K τ^{k−j} ‖f‖ ≤ C_f. (11)

Thus

    ‖E[Z_k | F_{k+1}]‖_∞ ≤ C_f. (12)

Now we prove that the variance of S_n converges to infinity:

    Var(S_n) = µ(S_n²) → ∞ (13)

as n → ∞.
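As a quick numerical illustration of (13), consider the setting of Example 2 with the alternating sequence a = (4, 5, 4, 5, . . .) and f(x) = cos 2πx (both choices are ours, made for convenience). For this f one has ˆT*_b f = 0 for every b ≥ 2, so u_k = f, the summands ˆT_{[k]} f are pairwise orthogonal, and Var(S_n) = n Var(f) = n/2 exactly. A direct float simulation of x ↦ ax mod 1 degenerates after a few dozen steps (multiplication by 4 only shifts mantissa bits out), so the sketch below follows exact orbits of dyadic rationals p/2^B with big-integer arithmetic:

```python
# Illustration: Monte Carlo estimate of Var(S_n) for T_a(x) = a*x mod 1,
# a_k alternating 4, 5, and f(x) = cos(2*pi*x), using exact dyadic orbits.
import math, random

random.seed(0)
B = 1024                 # orbit points are p / 2^B; x -> a*x mod 1 keeps this form exact
M = 1 << B
N_SAMPLES = 2000
N_STEPS = 200

def f(p):
    return math.cos(2 * math.pi * (p / M))   # f at x = p / 2^B (p / M is correctly rounded)

def birkhoff_sums(p):
    """S_1, ..., S_{N_STEPS} along the exact orbit of x0 = p / 2^B."""
    s, out = 0.0, []
    for k in range(N_STEPS):
        p = ((4 if k % 2 == 0 else 5) * p) % M   # exact: denominator stays 2^B
        s += f(p)
        out.append(s)
    return out

sums = [birkhoff_sums(random.getrandbits(B)) for _ in range(N_SAMPLES)]

def var_at(n):
    """Empirical variance of S_n over the sample."""
    vals = [row[n - 1] for row in sums]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

var_by_n = {n: var_at(n) for n in (20, 200)}
```

Up to sampling error (a few percent with 2000 samples), var_by_n[n] should sit near n/2, so the estimate at n = 200 is roughly ten times the one at n = 20: the variance does tend to infinity, as (13) requires.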
Since (11) implies that Var(Z_n) is bounded, (8) can be written as

    Var(S_n) = O(1) + Σ_{k=1}^{n−1} E(Z_k²) + E((E[Z_k | F_{k+1}])²) − 2 E(Z_k E[Z_k | F_{k+1}])
             = O(1) + Σ_{k=1}^{n−1} E(Z_k²) − E((E[Z_k | F_{k+1}])²)
             = O(1) + Σ_{k=1}^{n−1} ‖u_k‖² − ‖u_k‖² cos² χ_k.

Here we used (5) and the fact that ˆT_{[k]} is an L²(µ)-isometry. Now, since

    Var(f) = Var(ˆT_{[i]} f) ≤ 2 Var(Z_i) + 2 Var(E[Z_{i−1} | F_i]) ≤ 2‖u_i‖² + 2‖u_{i−1}‖²,

one obtains

    Var(S_n) ≥ O(1) + (1/4) Var(f) Σ_{k=1}^{n−1} min_{j∈{k,k+1}} (1 − cos² χ_j),

which converges to infinity as n → ∞ by Assumption 5. Thus we have verified (13). Now (11), (12) and (13) together imply (9), and that the difference between the martingale and S_n is negligible.

To verify (10), first observe that for i > j,

    ‖E[ˆT_{[j]} f | F_i]‖_∞ = ‖ˆT_{[i]} ˆT*_{[i]} ˆT_{[j]} f‖_∞ = ‖ˆT_{[i]} ˆT*_{[j+1..i]} f‖_∞ = ‖ˆT*_{[j+1..i]} f‖_∞ ≤ K τ^{i−j} ‖f‖. (14)

Then one can prove the assertion obtained from Lemma 4.4 in [9] by replacing v^{(n)}_l with E[(ξ^{(n)}_{n−l})² | F_{n−l+1}], the same way as it was done in [9], which yields (10).

Acknowledgements

The authors are highly indebted to Mikko Stenlund and Lai-Sang Young, first for explaining their result in October 2010, and second for a most valuable discussion in April 2011.

References

[1] Ayyer, A., Liverani, C., Stenlund, M.: Quenched CLT for random toral automorphisms. Discrete and Continuous Dynamical Systems 24, 331–348 (2009)

[2] Bakhtin, V. I.: Random processes generated by a hyperbolic sequence of mappings. I. Russian Acad. Sci. Izv. Math. 44, no. 2, 247–279 (1995); II. Russian Acad. Sci. Izv. Math. 44, no. 3, 617–627 (1995)

[3] Chernov, N., Dolgopyat, D.: Brownian Brownian Motion I. Memoirs AMS, No. 927, 193 pp.

[4] Conze, J.-P., Raugi, A.: Limit theorems for sequential expanding dynamical systems on [0,1]. Contemporary Mathematics 430, 89–121 (2007)

[5] Katok, A., Katok, S.: Higher cohomology for abelian groups of toral automorphisms. Ergod. Th. & Dynam. Sys. 15, 569–592 (1995); II. Ergod. Th. & Dynam. Sys. 25