Mixing rates and limit theorems for random intermittent maps
WAEL BAHSOUN† AND CHRISTOPHER BOSE∗

Abstract. We study random transformations built from intermittent maps on the unit interval that share a common neutral fixed point. We focus mainly on random selections of Pomeau–Manneville-type maps T_α using, in general, the full parameter range 0 < α < ∞. We derive a number of results around a common theme that illustrates in detail how the constituent map that is fastest mixing (i.e. the one with smallest α), combined with details of the randomizing process, determines the asymptotic properties of the random transformation. Our key result (Theorem 1.1) establishes sharp estimates on the position of return-time intervals for the quenched dynamics. The main applications of this estimate are to limit laws (in particular, CLT and stable laws, depending on the parameters chosen in the range 0 < α < 1) for the associated skew product; these are detailed in Theorem 3.2. Since our estimates in Theorem 1.1 also hold for 1 ≤ α < ∞, we study a piecewise affine version of our random transformations, prove existence of an infinite (σ-finite) invariant measure, and study the corresponding correlation asymptotics. To the best of our knowledge, this latter kind of result is completely new in the setting of random transformations.

1. Introduction
In recent years, a lot of attention has been given to examples of nonuniformly expanding (or nonuniformly hyperbolic) maps with neutral fixed points. It is well known that such models can exhibit a range of nonstandard dynamical/probabilistic behavior; they may be mixing, but display subexponential decay of correlations for Hölder observables, for example. Limit theorems such as CLT and stable laws can be derived within various classes depending on the strength of the intermittency around the fixed point.

The purpose of this paper is to investigate similar questions for random transformations whose constituent maps are drawn from an appropriate nonuniformly expanding family. In particular, we aim to understand how the behavior of the random transformation depends on properties of the maps and of the randomizing process. A brief synopsis of our findings is as follows. At the level of existence (or non-existence) of a finite invariant measure and the rate of correlation decay for sufficiently regular observables, the random dynamics are completely determined by the map with fastest relaxation, independent of the randomization. The same also holds for the dynamical CLT when the correlation decay is strong enough to be summable. However, we find that the randomizing process begins to play an explicit role at the next finer level of analysis, for example in sharp correlation asymptotics for regular observables supported away from the fixed points, and in limit theorems taking the form of stable laws for the associated skew product. Overall, this analysis gives a coherent picture that is consistent with our intuition about how randomness interacts with the intermittency.

Date: August 11, 2016.
1991 Mathematics Subject Classification. Primary 37A05, 37E05.
Key words and phrases. Interval maps with a neutral fixed point, intermittency, random dynamical systems, decay of correlations, Central Limit Theorem, stable laws.
The second author is supported by a research grant from the Natural Sciences and Engineering Research Council of Canada.

We will work in the following concrete setting. Let (I, B(I), m) denote the measure space consisting of the unit interval I = [0, 1] with Borel σ-algebra B(I) and m = Lebesgue measure on B(I). The first part of this paper will concentrate on randomized one-dimensional maps of Pomeau–Manneville type [15]. A well-known, simplified version of the PM maps is the family of so-called Liverani–Saussol–Vaienti maps [12]. Such systems have attracted the attention of both mathematicians and physicists (see [11] for a recent work in this area). To set our notation, given a parameter value 0 < α < ∞, define

    T_α(x) = { x(1 + 2^α x^α),  x ∈ [0, 1/2],
             { 2x − 1,          x ∈ (1/2, 1].

When α = 0, T_α is the doubling map. For α > 0, x = 0 is a neutral fixed point for the map T_α, which is consequently a nonuniformly expanding, piecewise C^∞, monotone map of the interval (on two pieces).

It is well known that T_α admits a finite ACIM with density h_α = O(x^{−α}) for x near zero when 0 < α < 1, and a σ-finite, infinite ACIM with similar asymptotics near zero when 1 ≤ α < ∞ (see Pianigiani [14] for this range). In fact, the argument in [12] shows that for 0 < α < 1, the density h_α is locally Lipschitz on (0, 1] as well as being continuous and integrable.

Now fix two parameters 0 < α < β < ∞ and consider the random LSV transformation defined as follows:

    T = {T_α(x), T_β(x); p₁, p₂},

where p₁, p₂ > 0, p₂ = 1 − p₁. The random transformation T may be viewed as a Markov process with transition function

    P(x, A) = p₁ 1_A(T_α(x)) + p₂ 1_A(T_β(x))

of a point x ∈ I into a set A ∈ B(I). The transition function induces an operator E_T acting on measures; i.e., if μ is a measure on (I, B),

    (E_T μ)(A) = p₁ μ(T_α^{−1}(A)) + p₂ μ(T_β^{−1}(A)).

A measure μ is said to be T-invariant if μ = E_T μ, and μ is said to be an absolutely continuous invariant measure if dμ = f* dm, f* ≥ 0. To study absolutely continuous invariant measures, we introduce the transfer (Perron–Frobenius) operator of the random transformation T:

    (P_T f)(x) = p₁ (P_{T_α} f)(x) + p₂ (P_{T_β} f)(x),

where P_{T_α}, P_{T_β} are the transfer operators associated with T_α, T_β respectively. Then it is a straightforward computation to show that a measure μ = f*·m is an absolutely continuous T-invariant measure if P_T f* = f*.
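To make the construction concrete, here is a minimal numerical sketch (ours, not from the paper) of the maps T_α and of sampling one realization of the random transformation T; the function names and parameter values are our own choices.

```python
import random

def T(x, a):
    # LSV-type map T_a: x(1 + 2^a x^a) = x(1 + (2x)^a) on [0, 1/2], 2x - 1 on (1/2, 1]
    if x <= 0.5:
        return x * (1.0 + (2.0 * x) ** a)
    return 2.0 * x - 1.0

def random_orbit(x0, alpha, beta, p1, n, rng):
    # one realization of the random transformation T = {T_alpha, T_beta; p1, p2}:
    # at each step apply T_alpha with probability p1, T_beta with probability p2 = 1 - p1
    xs = [x0]
    for _ in range(n):
        a = alpha if rng.random() < p1 else beta
        xs.append(T(xs[-1], a))
    return xs

orbit = random_orbit(0.3, 0.6, 1.4, 0.7, 500, random.Random(1))
```

Note that T(1/2) = 1 for every α, and that orbits linger near the neutral fixed point 0 after close visits; this lingering is the mechanism behind the polynomial rates studied in this paper.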
1.1. A skew product representation. Define the skew product transformation S(x, ω) : I × I → I × I by

(1.1)    S(x, ω) = (T_{α(ω)}(x), φ(ω)),

where

(1.2)    α(ω) = { α, ω ∈ [0, p₁),          φ(ω) = { ω/p₁,          ω ∈ [0, p₁),
                { β, ω ∈ [p₁, 1];                 { (ω − p₁)/p₂,   ω ∈ [p₁, 1].

The skew product representation in (1.1) is a version of the skew product representation studied in Bahsoun, Bose and Quas [5]. We denote the transfer operator associated with S by L_S: for g ∈ L¹(I × I) and measurable A ⊆ I × I,

    ∫_{S^{−1}A} g d(m × m) = ∫_A L_S g d(m × m).

Then a measure ν such that dν = g* d(m × m) and ∫_{I×I} g* d(m × m) = 1 is an absolutely continuous S-invariant probability measure if L_S g* = g*. In [5], Theorem 5.2, it is shown that if g ∈ L¹(I × I) and L_S g = λg with |λ| = 1, then

    g(x, ω) = f(x) · 1(ω)  and  P_T f = λf,

that is, g depends only on the spatial coordinate x and, as a function of x only, is also an eigenfunction for P_T. Setting λ = 1 we obtain L_S g* = g* if and only if g*(x, ω) = f*(x) with P_T f* = f*. Consequently there is a one-to-one correspondence between invariant densities for S and invariant densities for T. Moreover, dynamical properties such as ergodicity, the number of ergodic components, or weak-mixing, properties that are determined by peripheral eigenfunctions, can be determined via either system.

Our skew product construction is similar to a model constructed by Gouëzel [8]; however, in that paper the skew product samples continuously from the space of LSV maps, whereas we sample discretely. This allows us to simplify the analysis and to extend the range of parameters in which we can complete the analysis, compared to [8]. A more detailed discussion and comparison between the two models can be found in Bahsoun, Bose and Duan [4].

1.2. Inducing for the skew representation S. The method of inducing (equivalently, Markov extensions or Young towers) gives a systematic way to study maps like T_α having localized singularities, for example, as detailed in Young [17].
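The skew product (1.1)–(1.2) can be sketched numerically; the following illustrative script (our own naming and sample parameters, not code from the paper) also checks that iterating S in the first coordinate reproduces the random composition of the constituent maps.

```python
def T(x, a):
    # LSV-type map T_a on [0, 1]
    return x * (1.0 + (2.0 * x) ** a) if x <= 0.5 else 2.0 * x - 1.0

ALPHA, BETA, P1 = 0.4, 1.3, 0.6          # sample parameters, our choice
P2 = 1.0 - P1

def alpha_of(w):                          # alpha(omega) from (1.2)
    return ALPHA if w < P1 else BETA

def phi(w):                               # base (coding) map phi from (1.2)
    return w / P1 if w < P1 else (w - P1) / P2

def S(x, w):                              # skew product (1.1)
    return T(x, alpha_of(w)), phi(w)

def T_comp(x, w, n):                      # random composition driven by omega
    for _ in range(n):
        x, w = T(x, alpha_of(w)), phi(w)
    return x

# iterating S in the first coordinate agrees with the driven composition
x0, w0 = 0.3141, 0.2718
x, w = x0, w0
for _ in range(12):
    x, w = S(x, w)
assert abs(x - T_comp(x0, w0, 12)) < 1e-12
```

The base map φ expands each of [0, p₁) and [p₁, 1] onto [0, 1], so Lebesgue measure on the ω-coordinate codes an i.i.d. Bernoulli(p₁, p₂) choice of map at each step.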
We will begin by doing essentially the same thing with our skew product S, inducing on the right half of the square Δ₀ := (1/2, 1] × [0, 1]. Set

    T_ω^n(x) := T_{α(φ^{n−1}ω)} ∘ ⋯ ∘ T_{α(φω)} ∘ T_{α(ω)}(x).

(The results obtained in Bahsoun, Bose and Quas [5] are valid for any class of measurable non-singular maps on R^q, without any regularity assumptions. Moreover, in [5] the probability distribution on the noise space is allowed to be place-dependent.)
Then

    S^n(x, ω) = (T_ω^n(x), φ^n(ω)).

Also, set

    P_ω^n := p_{α(φ^{n−1}ω)} × ⋯ × p_{α(φω)} × p_{α(ω)},

where p_{α(ω)} = p₁ if α(ω) = α and p_{α(ω)} = p₂ if α(ω) = β. We define two sequences of random points {x_n(ω)} and {x'_n(ω)} in [0, 1] which will be used to construct the first return map of S to Δ₀. The points x_n(ω) lie in (0, 1/2]:

    x₀(ω) ≡ 1/2  and  x_n(ω) = T_{α(ω)}|_{[0,1/2]}^{−1}[x_{n−1}(φω)],  n ≥ 1.

Observe that with this notation,

    S(x_n(ω), ω) = (T_{α(ω)}(x_n(ω)), φω) = (x_{n−1}(φω), φω).

The points x'_n(ω) lie in (1/2, 1]:

    x'₀(ω) ≡ 1,  x'₁(ω) ≡ 3/4  and  x'_n(ω) = x_{n−1}(φω)/2 + 1/2,  n ≥ 1,

that is, the x'_n(ω) are the preimages of the x_{n−1}(φω) in (1/2, 1] under the right branch 2x − 1.

1.3. First return map of S to Δ₀. Let R : Δ₀ → Z⁺ be the first return time function and S^R : Δ₀ → Δ₀ the return map. For n ≥ 0, set I_n(ω) := (x_{n+1}(ω), x_n(ω)] and, for n ≥ 1, J_n(ω) := (x'_n(ω), x'_{n−1}(ω)]. Observe that every point in J_n(ω) will return to (1/2, 1] in n steps under the random iteration T_ω^n as follows:

    J_n(ω) → I_{n−2}(φω) → I_{n−3}(φ²ω) → ⋯ → I₀(φ^{n−1}ω) → (1/2, 1].

Next, we partition Δ₀ into subsets Δ_{0,i}, i = 1, 2, ..., where

    Δ_{0,i} := {(x, ω) | x ∈ J_i(ω)},

and then further partition each Δ_{0,i} into subsets Δ^j_{0,i}, j = 1, 2, ..., 2^i, according to the 2^i possible values of the string α(ω), α(φω), ..., α(φ^{i−1}ω). Defined this way, S^i maps each subset Δ^j_{0,i} bijectively to Δ₀. For example, in the case i = 2 there are four sets Δ^j_{0,2} on which R = 2 and such that S^R maps each set bijectively to Δ₀:

    Δ^j_{0,2} = { J₂(ω) × [0, p₁²),          if j = 1,
               { J₂(ω) × [p₁², p₁),          if j = 2,
               { J₂(ω) × [p₁, p₁ + p₂p₁),    if j = 3,
               { J₂(ω) × [p₁ + p₂p₁, 1],     if j = 4.

To summarize,

    Δ_{0,i} = ∪_{j=1}^{2^i} Δ^j_{0,i}  and  Δ₀ = ∪_{i=1}^∞ ∪_{j=1}^{2^i} Δ^j_{0,i},

where, for every i and j = 1, 2, ..., 2^i,

    R|_{Δ^j_{0,i}} = i.

For each n, the interval J_n(ω) depends on only the first n coordinates in ω, and moreover

    m × m{R = n} = Σ_{j=1}^{2^n} P^n_{ω_j} m(J_n(ω_j)) = E_ω(m(J_n(ω))),

where ω_j ranges across the 2^n possible configurations ω with distinct values for the string α(ω), α(φω), ..., α(φ^{n−1}ω), and E_ω(·) denotes expectation with respect to ω. Since m(J_n(ω)) = m(I_{n−2}(φω))/2, we also obtain

(1.5)    m × m{R > n} = E_ω(x'_n(ω) − 1/2) = (1/2) E_ω(x_{n−1}(ω)),

where in the last equality we have used the fact that φ preserves m.

Finally, we adopt the following (standard) notation throughout this paper. Given sequences a_n (respectively b_n) of nonnegative (respectively positive) real numbers, we write a_n ≍ b_n if there is a constant C ≥ 1 with C^{−1} b_n ≤ a_n ≤ C b_n, and a_n ∼ b_n if lim a_n/b_n = 1.
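The random inverse-branch points x_n(ω) and the return-time intervals J_n(ω) can be generated numerically. The sketch below (our construction; branch inversion by bisection, conventions of Subsection 1.3 hard-coded) checks that a point of J_n(ω) indeed returns to (1/2, 1] in exactly n steps.

```python
def T_left(x, a):                 # left branch of T_a on [0, 1/2]
    return x * (1.0 + (2.0 * x) ** a)

def inv_left(y, a):               # inverse of the left branch, by bisection
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if T_left(mid, a) < y else (lo, mid)
    return 0.5 * (lo + hi)

def x_n(digits):
    # x_n(omega) for n = len(digits), where digits[k] = alpha(phi^k omega);
    # recursion: x_n(omega) = (left branch of T_{alpha(omega)})^{-1}[x_{n-1}(phi omega)], x_0 = 1/2
    y = 0.5
    for a in reversed(digits):
        y = inv_left(y, a)
    return y

def T(x, a):
    return T_left(x, a) if x <= 0.5 else 2.0 * x - 1.0

digits = [0.5, 1.2, 0.5, 1.2, 0.5, 1.2, 0.5]   # one realization of the coding
n = 5
hi_pt = 0.5 * (x_n(digits[1:n - 1]) + 1.0)     # x'_{n-1}(omega) = (x_{n-2}(phi omega) + 1)/2
lo_pt = 0.5 * (x_n(digits[1:n]) + 1.0)         # x'_n(omega)     = (x_{n-1}(phi omega) + 1)/2
x = 0.5 * (lo_pt + hi_pt)                      # a point of J_n(omega)
steps = 0
while True:
    x = T(x, digits[steps])
    steps += 1
    if x > 0.5:
        break
assert steps == n                              # consistent with R restricted to J_n equal to n
```

The point first falls onto the left half, creeps up through the intervals I_k along the shifted noise sequences, and re-enters (1/2, 1] on step n, exactly as in the chain displayed above.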
1.4. Statement of the main result in this paper. There is now a range of studies (including Young [17], Zweimüller [18, 19], Sarig [16], Gouëzel [7] and Melbourne–Terhesiu [13], for example) that show how careful analysis of the asymptotics of m{R > n} can reveal deep statistical properties of the underlying map. In our case, we are interested in the skew product S acting on the square. The strength of our results, therefore, is likely to depend in a critical way on the sharpness of the estimates obtained on the measures of sets like J_n(ω) and I_n(ω).

For example, in [17] a key estimate for a single LSV map T_α reads as follows: if x_n is the sequence of points generated under the inverse of the leftmost branch of T_α, such that T_α x_{n+1} = x_n and x₀ = 1/2, then there exists c > 1 such that c^{−1} n^{−1/α} ≤ x_n ≤ c n^{−1/α}. If we introduce the notation x_n(α) := x_n for this sequence of deterministic points, we can record this observation as

(1.6)    x_n(α) ≍ n^{−1/α},

which upper bounds the size of the return-time sets and is sufficient for establishing existence of the invariant density h_α and bounds on the rate of correlation decay when 0 < α < 1. A sharper estimate is available: set c_n(α) := n^{1/α} x_n(α). Then x_n(α) = c_n(α) n^{−1/α} with lim_n c_n(α) = (2^α α)^{−1/α} =: c(α). That is, in our notation,

(1.7)    x_n(α) ∼ (2^α α)^{−1/α} n^{−1/α} = c(α) n^{−1/α}.

This sharper estimate is key for the analysis of limit theorems for maps like T_α. See, for example, Melbourne and Terhesiu [13] and Gouëzel [9].

Moving to similar estimates on our skew product S, the following rough estimate is obtained as Lemma 4.4 in Bahsoun, Bose and Duan [4] as a first step in their analysis: for all ω ∈ [0, 1] and n ≥ 1,

(1.8)    x_n(α) ≤ x_n(ω) ≤ x_n(β),

where x_n(β) denotes the sequence of deterministic points for T_β. The main result in this paper is a much sharpened estimate on the location of x_n(ω) compared to Equation (1.8). Keeping the bounds (1.8) in mind, and following the setup for Equation (1.7), for each n and ω define c_n(ω) := n^{1/α} x_n(ω) (so that x_n(ω) = c_n(ω) n^{−1/α}). We now state the main result of our paper:
Theorem 1.1. Let 0 < α < β < ∞. For almost every ω in [0, 1] we have

    lim_n c_n(ω) = (2^α α p₁)^{−1/α} = c(α) p₁^{−1/α}.

That is,

(1.9)    x_n(ω) ∼ c(α) p₁^{−1/α} n^{−1/α}.

Moreover, E_ω(|c_n(ω) − c(α) p₁^{−1/α}|) → 0; in other words, convergence of n^{1/α} x_n to c(α) p₁^{−1/α} also holds in the L¹-norm.

In the terminology of random dynamical systems, this is a quenched limit theorem (i.e. almost everywhere in ω) as opposed to an annealed one (averaged over ω). In general, quenched results are harder to obtain than annealed ones. Examples of other quenched limit theorems can be found in Ayyer, Liverani and Stenlund [3].

The significance of Theorem 1.1 is what it implies for the asymptotics of the random system. The main applications of this theorem will appear in Sections 3 and 4, where we derive limit theorems for the skew product S and study asymptotics for infinite measure preserving systems, respectively. However, to illustrate the flavour of our results in a simple context, we close this section by revisiting (and sharpening) the main conclusion from [4], which shows one way in which the fast system (T_α) dominates the asymptotic behavior of the skew product.
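Theorem 1.1 is easy to probe numerically. The sketch below (our code; x_n(ω) recomputed by bisection as in the construction of Subsection 1.3) verifies the rigorous sandwich (1.8) for a sampled ω and computes c_N(ω) = N^{1/α} x_N(ω) next to the predicted limit (2^α α p₁)^{−1/α}; since the convergence can be slow, only the provable bounds are asserted.

```python
import random

def T_left(x, a):
    return x * (1.0 + (2.0 * x) ** a)

def inv_left(y, a):               # invert the left branch by bisection
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if T_left(mid, a) < y else (lo, mid)
    return 0.5 * (lo + hi)

def x_n(digits):                  # x_n(omega), digits[k] = alpha(phi^k omega)
    y = 0.5
    for a in reversed(digits):
        y = inv_left(y, a)
    return y

ALPHA, BETA, P1, N = 0.6, 1.4, 0.7, 400
rng = random.Random(0)
digits = [ALPHA if rng.random() < P1 else BETA for _ in range(N)]

x_fast, x_omega, x_slow = x_n([ALPHA] * N), x_n(digits), x_n([BETA] * N)
assert x_fast <= x_omega <= x_slow                         # the sandwich (1.8)

c_N = N ** (1.0 / ALPHA) * x_omega                         # c_N(omega)
c_limit = (2.0 ** ALPHA * ALPHA * P1) ** (-1.0 / ALPHA)    # Theorem 1.1 prediction
```

Increasing N (and averaging over several noise realizations) lets one watch c_N(ω) settle toward c_limit, in line with both the almost-sure and the L¹ statements of the theorem.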
Theorem 1.2. Let 0 < α < β < 1 and let S be as defined in (1.1). Then

    m × m{R > n} ∼ (1/2) c(α) p₁^{−1/α} n^{−1/α} = (1/2)(2^α α p₁)^{−1/α} n^{−1/α}.

Moreover,
(1) S admits a unique absolutely continuous invariant probability measure ν with density dν = h d(m × m), where h is Lipschitz on compact subsets of (0, 1] × [0, 1];
(2) (S, ν) is mixing;
(3) for φ ∈ L^∞(I × I, m × m) and ψ a Hölder continuous function on I × I,

    |Cor(φ, ψ)| = O(n^{1−1/α}),  where  Cor(φ, ψ) = ∫ φ∘S^n · ψ dν − ∫ φ dν ∫ ψ dν.

With more assumptions on the observables φ and ψ we obtain the following stronger estimate:
(4) if φ ∈ L^∞(I × I, m × m) and ψ is Lipschitz on I × I, with ∫ φ dν ≠ 0 and ∫ ψ dν ≠ 0, and with both φ and ψ identically 0 in an open strip containing the line x = 0, then

    Cor(φ, ψ) ∼ (1/2) E_ω(h(1/2, ω)) (2^α α p₁)^{−1/α} (1/α − 1)^{−1} n^{1−1/α} ∫ φ dν ∫ ψ dν.

Statement (3) in Theorem 1.2 is essentially proved in [4]. The exact asymptotics of m × m{R > n}, however, are new, and the precise decay of correlations in (4) for functions supported away from the line x = 0 is also new.
Proof. The enumerated statements (1)–(3) follow by arguments identical to those in [4] once the claimed asymptotics on m × m{R > n} are derived. The latter are easily established: by Equation (1.5) we know m × m{R > n} = (1/2) E_ω(x_{n−1}(ω)), while Theorem 1.1 implies E_ω(c_n(ω)) → c(α) p₁^{−1/α} ∈ (0, ∞). It follows that m × m{R > n} ∼ (1/2) c(α) p₁^{−1/α} n^{−1/α}. The fact that the density h is Lipschitz on compact subsets of (0, 1] × I is proved in Lemma 3.1 in Section 3 of this paper.

To establish (4), we first assume that φ, ψ are supported on Δ₀ with the stated regularity. By Theorem 6.3 in Gouëzel [7] we have

    Cor(φ, ψ) ∼ Σ_{k>n} ν{R > k} ∫ φ dν ∫ ψ dν.

The invariant measure is dν = h dm × dm, with h Lipschitz on Δ₀. This leads to the following estimate (see Lemma 3.3 in Section 3) on the measure of the return time sets:

    ν{R > k} ∼ (1/2) E_ω(h(1/2, ω)) (2^α α p₁)^{−1/α} k^{−1/α}.

Summing over k > n gives the result. Now, using the argument from Gouëzel [7], Section 7, in our setting, we can extend the support of φ, ψ to Δ_N := {(x, ω) : x_N(ω) ≤ x ≤ 1}, with the same asymptotic return times. For sufficiently large N this picks up the support of φ and ψ. □

The rest of this paper is organized as follows. In Section 2 we prove Theorem 1.1. The computation depends on a classical result of Hoeffding [10] that gives exponential decay of large deviations for sums of bounded, independent random variables. In Section 3 we apply our estimates to derive central limit theorems and stable laws for the Birkhoff sums

(1.10)    S_n f(x, ω) := Σ_{k=0}^{n−1} f(S^k(x, ω)),

where S is the skew product and the parameters satisfy 0 < α < β < 1. In Section 4 we turn to the range 1 ≤ α < β, where a natural analogue of Theorem 1.1 applies and the invariant measure is bound to be infinite (σ-finite). We investigate correlation asymptotics for this case. To the best of our knowledge this is the first detailed analysis of asymptotics for a random system with an infinite invariant measure.

2. Proof of Theorem 1.1
We begin with a basic calculus estimate.
Lemma 2.1.
Let 0 < α < ∞ and 0 ≤ x ≤ 1. Then

    1 − αx ≤ [1 + x]^{−α} ≤ 1 − αx + (α(1 + α)/2) x².

Proof. Elementary. □
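Lemma 2.1 is a two-sided Taylor bound around x = 0; it can be checked mechanically (an illustrative verification on a grid, not a proof):

```python
def lower(x, a):
    # first-order Taylor lower bound for (1 + x)^(-a)
    return 1.0 - a * x

def upper(x, a):
    # second-order Taylor upper bound for (1 + x)^(-a)
    return 1.0 - a * x + 0.5 * a * (1.0 + a) * x * x

# check 1 - ax <= (1 + x)^(-a) <= 1 - ax + a(1+a)x^2/2 on [0, 1]
for a in (0.3, 0.8, 1.5, 3.0):
    for i in range(101):
        x = i / 100.0
        v = (1.0 + x) ** (-a)
        assert lower(x, a) - 1e-12 <= v <= upper(x, a) + 1e-12
```

The lower bound is the tangent line at 0 (convexity of (1 + x)^{−α}); the upper bound comes from the alternating sign of the third derivative.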
Recall our notation x_n(α) (resp. x_n(β)) for the sequence of points generated by the first-branch inverse of the deterministic map T_α (resp. T_β). Recall also the basic estimates in Equations (1.6), (1.7) and (1.8). Our goal is to obtain sharp estimates on n[x_n(ω)]^α, and for that we will need the following classical large deviations estimate.

Proposition 2.2 (Hoeffding [10], Theorem 1). Suppose that X_k = X_k(ω), k = 1, 2, ..., n, are independent random variables, uniformly bounded so that 0 ≤ X_k ≤ 1. Let X̄_n = n^{−1} Σ_{k=1}^n X_k and E(X̄_n) = n^{−1} Σ_{k=1}^n E(X_k). Then for every t > 0 we have

    P{|X̄_n − E(X̄_n)| ≥ t} ≤ 2 exp(−2nt²).

We proceed by a sequence of lemmas.

Lemma 2.3.
There is a set G₁ ⊆ [0, 1] of full measure such that for every ω ∈ G₁ we have

    lim sup_n n^{1/α} x_n(ω) ≤ [2^α α p₁]^{−1/α} = c(α) p₁^{−1/α}.

Proof. We begin with the standard expression derived directly from the definition of T_{α(ω)}:

(2.1)    1/[x_{n−1}(φω)]^α = (1/[x_n(ω)]^α) [1 + [2x_n(ω)]^{α(ω)}]^{−α}.

Using the upper bound contained in the right-hand side of Lemma 2.1, with x = [2x_n(ω)]^{α(ω)}, and reordering terms, we obtain

    1/[x_n(ω)]^α − 1/[x_{n−1}(φω)]^α ≥ α 2^α [2x_n(ω)]^{α(ω)−α} − (α(1 + α)/2) 2^α [2x_n(ω)]^{2α(ω)−α}.

Applying this inequality along the sequence x_k(φ^{n−k}ω) for k = 2 through n, and keeping in mind that x₀(ω) = 1/2 for every ω, gives the basic inequality

    1/[x_n(ω)]^α ≥ 2^α + α 2^α { Σ_{k=2}^n [2x_k(φ^{n−k}ω)]^{α(φ^{n−k}ω)−α} − ((1 + α)/2) Σ_{k=2}^n [2x_k(φ^{n−k}ω)]^{2α(φ^{n−k}ω)−α} }.

Next, we use the estimate contained in Equation (1.8), the notation from Equation (1.7), and division by n to obtain

    1/(n[x_n(ω)]^α) ≥ 2^α/n + ((n−1)/n) α 2^α { (n−1)^{−1} Σ_{k=2}^n [2c_k(α)k^{−1/α}]^{α(φ^{n−k}ω)−α} − ((1 + α)/2)(n−1)^{−1} Σ_{k=2}^n [2c_k(β)k^{−1/β}]^{2α(φ^{n−k}ω)−α} }.

Now consider the quantity

(2.2)    A_n(ω) := (n−1)^{−1} Σ_{k=2}^n [2c_k(α)k^{−1/α}]^{α(φ^{n−k}ω)−α} − ((1 + α)/2)(n−1)^{−1} Σ_{k=2}^n [2c_k(β)k^{−1/β}]^{2α(φ^{n−k}ω)−α}.

We estimate each sum in A_n independently using the large deviations estimate detailed in Proposition 2.2. For the first sum in Equation (2.2), we use the substitution

    X_k(ω) = [2c_k(α)k^{−1/α}]^{α(φ^{n−k}ω)−α},

from which we compute

    E_ω(X_k) = p₁ + p₂ [2c_k(α)k^{−1/α}]^{β−α},

and, using Proposition 2.2 with a positive value t = t_n > 0, we obtain

(2.3)    P{ |(n−1)^{−1} Σ_{k=2}^n [2c_k(α)k^{−1/α}]^{α(φ^{n−k}ω)−α} − (n−1)^{−1} Σ_{k=2}^n (p₁ + p₂[2c_k(α)k^{−1/α}]^{β−α})| ≥ t_n } ≤ 2 exp(−2(n−1)t_n²).

If we choose t_n ↓ 0 such that Σ_n exp(−2(n−1)t_n²) < ∞, then by Borel–Cantelli, keeping in mind that c_k(α) is bounded and that

    (n−1)^{−1} Σ_{k=2}^n (p₁ + p₂[2c_k(α)k^{−1/α}]^{β−α}) = p₁ + o(1),

we conclude that

(2.4)    (n−1)^{−1} Σ_{k=2}^n [2c_k(α)k^{−1/α}]^{α(φ^{n−k}ω)−α} → p₁

for almost every ω ∈ [0, 1]. For the second sum, a direct computation gives

    E_ω( ((1 + α)/2)(n−1)^{−1} Σ_{k=2}^n [2c_k(β)k^{−1/β}]^{2α(φ^{n−k}ω)−α} ) = O(n^{−γ}),

where γ = min{α/β, 2 − α/β} = α/β > 0 since α < β. Therefore, arguing as above, we conclude that for almost every ω

(2.5)    ((1 + α)/2)(n−1)^{−1} Σ_{k=2}^n [2c_k(β)k^{−1/β}]^{2α(φ^{n−k}ω)−α} → 0.

Combining Equations (2.4) and (2.5) shows that A_n → p₁ almost everywhere. It follows that almost surely (w.r.t. ω) lim inf_n 1/(n[x_n(ω)]^α) ≥ 2^α α p₁. The statement of the lemma follows. □

Lemma 2.4.
There is a set G₂ ⊆ [0, 1] of full measure such that for every ω ∈ G₂ we have

    lim inf_n n^{1/α} x_n(ω) ≥ [2^α α p₁]^{−1/α} = c(α) p₁^{−1/α}.

(In the choice of t_n required after (2.3), t_n = n^{−1/4} does the job, for example.)
Proof. Let G₁ be the set of full measure on which convergence is obtained in Lemma 2.3. In particular, for every ω ∈ G₁ there exists an N = N(ω) such that for all n > N(ω) we have

    x_n(ω) ≤ (c(α) p₁^{−1/α} + 1) n^{−1/α}.

Now, starting with Equation (2.1), using the lower bound in Lemma 2.1, dividing by n, and assuming ⌊√n⌋ ≥ N(ω), we get the following expression:

    1/(n[x_n(ω)]^α) ≤ 2^α/n + (α 2^α/n) Σ_{k=2}^n [2x_k(φ^{n−k}ω)]^{α(φ^{n−k}ω)−α}
                   ≤ 2^α/n + α 2^α ((⌊√n⌋−1)/n) (⌊√n⌋−1)^{−1} Σ_{k=2}^{⌊√n⌋} [2x_k(φ^{n−k}ω)]^{α(φ^{n−k}ω)−α}
                     + α 2^α ((n−⌊√n⌋)/n) (n−⌊√n⌋)^{−1} Σ_{k=⌊√n⌋+1}^n [2(c(α)p₁^{−1/α} + 1)k^{−1/α}]^{α(φ^{n−k}ω)−α}.

Now define, for any ω ∈ [0, 1],

    A'_n(ω) := (n−⌊√n⌋)^{−1} Σ_{k=⌊√n⌋+1}^n [2(c(α)p₁^{−1/α} + 1)k^{−1/α}]^{α(φ^{n−k}ω)−α}.

Once again, an application of the large deviations estimate in Proposition 2.2, combined with the direct calculation E_ω(A'_n) = p₁ + o(1), shows that A'_n → p₁ for almost every ω in a set G₂ of full measure.

Finally, fix an arbitrary ω ∈ G₁ ∩ G₂. Provided n is large enough that ⌊√n⌋ ≥ N(ω), we estimate (bounding each term of the first sum by 1)

    1/(n[x_n(ω)]^α) ≤ 2^α/n + α 2^α ⌊√n⌋/n + α 2^α ((n−⌊√n⌋)/n) A'_n(ω).

The right-hand side of this expression converges to 2^α α p₁. It follows that for all ω ∈ G₁ ∩ G₂, lim inf n[x_n(ω)]^α ≥ (2^α α p₁)^{−1}. The lemma now follows by taking α-th roots. □

Lemmas 2.3 and 2.4 together give the almost sure convergence claimed in Theorem 1.1. To obtain the L¹ convergence we first observe:

Lemma 2.5. lim sup_{n→∞} E_ω(n^{1/α} x_n(ω)) ≤ c(α) p₁^{−1/α}.
Proof. Let p ∈ (0, p₁). By Proposition 4.1 of [4] we have

(2.7)    E_ω(n^{1/α} x_n(ω)) ≤ n^{1/α} x_{⌊pn⌋}(α) + n^{1/α} exp(−2n(p₁ − p)²).

Let ε > 0. Since x_n(α) = c_n(α) n^{−1/α}, with inf c_n(α) > 0, for n large enough (2.7) implies

(2.8)    E_ω(n^{1/α} x_n(ω)) ≤ (1 + ε) n^{1/α} x_{⌊pn⌋}(α).

We also know from Equation (1.7) that

(2.9)    x_{⌊pn⌋}(α) = c_{⌊pn⌋}(α) ⌊pn⌋^{−1/α},

with lim_{n→∞} c_{⌊pn⌋}(α) = c(α). Consequently, by (2.8) and (2.9), we have

    E_ω(n^{1/α} x_n(ω)) ≤ (1 + ε) n^{1/α} c_{⌊pn⌋}(α) ⌊pn⌋^{−1/α} ≤ (1 + ε) n^{1/α} c_{⌊pn⌋}(α) p^{−1/α} (n − 1/p)^{−1/α}.

Thus,

(2.10)    lim sup_{n→∞} E_ω(n^{1/α} x_n(ω)) ≤ (1 + ε) c(α) p^{−1/α}.

Since p can be taken arbitrarily close to p₁, and ε > 0 is arbitrary, we conclude that lim sup_{n→∞} E_ω(n^{1/α} x_n(ω)) ≤ c(α) p₁^{−1/α}. □

Finally, Lemmas 2.3 and 2.4 combined with Lemma 2.5 give the required L¹ convergence, due to the following elementary result.

Lemma 2.6 (also Lemma 4.3 of Gouëzel [8]). Let f_n be a sequence of integrable functions on a probability space, with f_n ≥ 0 and f_n → f almost everywhere. Suppose that E(f) < ∞ and lim sup E(f_n) ≤ E(f). Then E(|f_n − f|) → 0 (i.e. f_n → f in L¹ norm).

We include the proof for completeness.
Proof.
Set g_n := min{f_n, f} = (1/2){f_n + f − |f_n − f|}. Then 0 ≤ g_n ≤ f and g_n → f almost everywhere, so by dominated convergence E(|g_n − f|) → 0. From this, we also see E(g_n) → E(f). Now |g_n − f_n| = f_n − g_n ≥ 0, so

    lim sup E(|g_n − f_n|) = lim sup E(f_n) − lim E(g_n) ≤ E(f) − E(f) = 0.

It follows that lim E(|g_n − f_n|) = 0, which, combined with E(|g_n − f|) → 0, completes the proof. □

3. Application to limit theorems
In this section we apply Theorem 1.1 to establish limit theorems for the Birkhoff sums S_n f(x, ω) in (1.10) when f : I × I → R is Hölder continuous and ∫ f dν = 0. Here, ν = h dm × dm is the absolutely continuous invariant measure for S. We first observe that for 0 < α < β < 1 the induced system (S^R, Δ₀) is Gibbs–Markov for the return-time partition Δ^j_{0,i} (see Aaronson [1] or Aaronson and Denker [2] for a definition of Gibbs–Markov). The required expansion and distortion estimates are derived in [4] with respect to the metric d(z₁, z₂) := θ^{s(z₁,z₂)}, for a suitable constant 0 < θ < 1, where z₁ = (x₁, ω₁), z₂ = (x₂, ω₂), and s(z₁, z₂) is the usual separation time of two points in Δ₀ with respect to the return partition.

Next, we establish local Lipschitz regularity for the density h away from the fixed point.

Lemma 3.1. For 0 < α < β < 1, h has a version that is Lipschitz on any compact subset of (0, 1] × [0, 1].

Proof. By Theorem 5.2 of [5], we know that the unique invariant density of S is of the form h = g × 1, where P_T g = g, P_T being the transfer operator associated with the random map T. Thus, it is enough to show that g is Lipschitz on compact subsets of (0, 1]. This follows from the invariance of a suitable cone under P_T. Let B denote the set of integrable and C¹ functions on (0, 1]. For a > 0, define a cone C_a by

    C_a = { f ∈ B | f ≥ 0, f decreasing, x^{β+1} f(x) increasing, f(x) ≤ a x^{−β} ∫ f dλ }.

For a ≥ a*, with a* sufficiently large (depending on β), it is well known that C_a is invariant under the action of P_{T_β}, the transfer operator associated with the map T_β [12]. Moreover, since α < β, the same is true for P_{T_α}. Since P_T is a convex combination of P_{T_α} and P_{T_β}, we conclude that C_a is invariant under the action of P_T. Consequently, P_T has a fixed point in C_a (an invariant density), which we denote by g. Note that g is Lipschitz on any compact subset of (0, 1] by the properties of C_a. □

From this point on we assume h is the (unique) Lipschitz version assured by this lemma, since this plays a key role in some of the estimates to follow. Set

    A := (1/2) c(α) p₁^{−1/α} E_ω(h(1/2, ω)).

Let N(0, σ²) denote the normal distribution with mean zero and variance σ².

Theorem 3.2.
Let 0 < α < β < 1. Let f : I × I → R be a Hölder continuous function satisfying ∫ f dν = 0. Set c := E_ω(f(0, ω)). Then:

(1) If α < 1/2, there exists σ² ≥ 0 such that (1/√n) S_n f → N(0, σ²).

(2) If 1/2 ≤ α < 1 and c = 0, suppose there exists a γ > β(2α − 1)/(2α) such that |f(x, ω) − f(0, ω)| ≤ C_f x^γ. Then there exists σ² ≥ 0 such that (1/√n) S_n f → N(0, σ²).

(3) If α = 1/2 and c ≠ 0, then S_n f / √(c²An ln n) → N(0, 1).

(4) If 1/2 < α < 1 and c ≠ 0, then S_n f / n^α → Z, where the random variable Z has characteristic function given by

    E(exp(itZ)) = exp{ −A|c|^{1/α} Γ(1 − 1/α) cos(π/(2α)) |t|^{1/α} (1 − i sgn(ct) tan(π/(2α))) }.

The proof depends on a number of careful estimates using Theorem 1.1 that can be proved in an analogous way to the corresponding calculations in Gouëzel [8].
Lemma 3.3.
We have ν(R > n) ∼ n^{−1/α} A.

Notation 3.4. For f : [0, 1] × [0, 1] → R and (x, ω) ∈ Δ₀ define

    f_Δ(x, ω) := Σ_{k=0}^{R(x,ω)−1} f(S^k(x, ω)).

Lemma 3.5. Let f be Hölder on [0, 1] × [0, 1]. If 0 < α < 1/2, then f_Δ ∈ L²(Δ₀, dν).

Lemma 3.6. Suppose f : I × I → R is Hölder continuous with E_ω(f(0, ω)) = 0. Suppose there are constants 0 < γ ≤ β and C_f < ∞ such that, uniformly in ω, for all x ∈ [0, 1], |f(x, ω) − f(0, ω)| ≤ C_f x^γ. Let 1 ≤ p < min{2/α, 1/(α(1 − γ/β))}. Then f_Δ ∈ L^p(Δ₀, ν).

We are going to work with a family of metrics based on separation time. For 2 ≤ λ < ∞ not exceeding the expansion constant for S^R, and any 0 < θ < 1, define

    d_{λ^{−θ}}(z₁, z₂) := λ^{−θ s(z₁,z₂)}.

It follows that with respect to d_{λ^{−θ}} the expansion constant of S^R is at least λ^θ > 1, and S^R is Gibbs–Markov for this metric and the partition Δ^s_{0,i}. Let 0 < θ < 1 not exceed the Hölder exponent of f. For each n, s, let Df_Δ(Δ^s_{0,n}) denote the Lipschitz constant of f_Δ restricted to the subset Δ^s_{0,n} and computed with respect to the metric d_{λ^{−θ}}.

Lemma 3.7. Σ_{n,s} ν(Δ^s_{0,n}) Df_Δ(Δ^s_{0,n}) ≤ C Σ_n ν{R = n} n < ∞.

Proof of Theorem 3.2.
We are going to use Theorem 3.1 of Gouëzel [8]. The basic finite expectation condition (Equation (18) in [8]) is given in our setting by Lemma 3.7.

Assume first that α < 1/2. Then by Lemma 3.5 we know that f_Δ ∈ L²(Δ₀, ν). Also, the return time function R ∈ L², since R = g_Δ with g ≡ 1, to which Lemma 3.5 also applies. This is the setting of the first case of Theorem 3.1 [8], so we obtain the central limit theorem in (1).

Next we consider 1/2 ≤ α < 1 and c = 0. Now Lemma 3.6, with the conditions given on γ, shows that f_Δ ∈ L², and the estimate in Lemma 3.3 shows that ν{R > n} = n^{−1/α} A(n) with A(n) → A. For z ∈ (0, ∞) we get ν{R > z} = ⌈z⌉^{−1/α} A(⌈z⌉). Set Λ(z) := (⌈z⌉/z)^{−1/α} A(⌈z⌉). Note that z → Λ(z) is slowly varying, Λ(z) ∼ A, and ν{R > z} = z^{−1/α} Λ(z). The second sub-condition in the first case of Theorem 3.1 [8] is therefore satisfied with L := Λ, and we again get a central limit theorem, proving (2).

The last two cases require a more detailed estimate. Assume 1/2 ≤ α < 1 and c ≠ 0. Set g ≡ c and note that on Δ₀ we have g_Δ(x, ω) = cn ⟺ R(x, ω) = n, so ν{|g_Δ| > z} ∼ |c|^{1/α} z^{−1/α} Λ(z) according to the previous calculation. The function j = f − g has the same regularity as f and satisfies E_ω(j(0, ω)) = 0. We next show that ν{|j_Δ| > z} = o(z^{−1/α}). Applying Lemma 3.6 to j, we obtain p > 1/α such that j_Δ ∈ L^p(Δ₀). It follows that

    ν{|j_Δ| > z} ≤ ∫ (|j_Δ|/z)^p dν = C z^{−p} = o(z^{−1/α}).

The elementary decomposition

    {g_Δ > z(1 + ε)} ∩ {|j_Δ| ≤ εz} ⊆ {f_Δ > z} ⊆ {g_Δ > z(1 − ε)} ∪ {|j_Δ| > εz}

implies

(3.1)    ν{g_Δ > z(1 + ε)} − ν{|j_Δ| > εz} ≤ ν{f_Δ > z} ≤ ν{g_Δ > z(1 − ε)} + ν{|j_Δ| > εz}.

Now consider the case α = 1/2 in (3). Assume c > 0. Then (3.1) gives ν{f_Δ > z} ∼ z^{−2}(c²A + o(1)). On the other hand, if c < 0 then g < 0 and {g_Δ > z(1 ± ε)} = ∅, so ν{f_Δ > z} ≤ ν{|j_Δ| > εz} ∼ z^{−2} o(1). Combining these two estimates yields ν{|f_Δ| > z} ∼ z^{−2}(c²A + o(1)) =: z^{−2} l(z), independent of the sign of c. The only difference is that for c > 0 the tail mass is carried by {f_Δ > z}, while for c < 0 it is carried by {f_Δ < −z}. We have already established that ν{R > z} ∼ z^{−2} A = z^{−2}(1/c²) l(z). We can now apply the third case of Theorem 3.1 [8] with L(z) = 2c²A ∫₁^z u^{−1} du = 2c²A ln z (unbounded and slowly varying) and B_n := √(c²An ln n), whereby nL(B_n) ∼ B_n² as required.

For the final case, when 1/2 < α < 1, assume first that c > 0, so that g > 0. For z > 0,

    ν{f_Δ > z} ∼ z^{−1/α}(c^{1/α} A + o(1)),

while we have already established that ν{R > z} ∼ z^{−1/α} A. On the other hand, ν{f_Δ < −z} ≤ ν{|j_Δ| > εz} = o(z^{−1/α}). We can therefore apply the last case in Theorem 3.1 [8], setting c₁ = c^{1/α} A, c₂ = 0, c₃ = A, L ≡ 1, B_n := n^α. In the case c < 0, the roles of c₁ and c₂ are interchanged. Putting this together, the theorem gives an asymptotic stable law with characteristic function

    exp[ −A|c|^{1/α} Γ(1 − 1/α) cos(π/(2α)) |t|^{1/α} (1 − i sgn(c) sgn(t) tan(π/(2α))) ],

which is case (4) in our theorem. This completes the proof. □

4. Application to correlation asymptotics for infinite measure preserving random systems
In this section we use arguments analogous to Theorem 1.1 to study the asymptotics of the transfer operator associated with the skew product of a piecewise affine version of the random model discussed in the previous sections. In particular, we will consider

    T̃_{α(ω)}(x) = { ((x_{n−2}(φω) − x_{n−1}(φω))/(x_{n−1}(ω) − x_n(ω))) (x − x_n(ω)) + x_{n−1}(φω),   for x ∈ (x_n(ω), x_{n−1}(ω)], n = 1, 2, ...,
                  { 2x − 1,   for x ∈ (1/2, 1]

(with the convention x_{−1}(ω) ≡ 1), so that T̃_{α(ω)} maps each interval (x_n(ω), x_{n−1}(ω)] affinely onto (x_{n−1}(φω), x_{n−2}(φω)]. Let

(4.1)    S(x, ω) := (T̃_{α(ω)}(x), φω)

denote the associated skew product. As in Subsection 1.2, we induce S on Δ₀. Thus, using the estimates of Theorem 1.1, we can apply Theorem 1.4 of [9] to obtain asymptotics of L^n_S. In particular, we obtain the following theorem.
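The piecewise affine map T̃_{α(ω)} can be assembled directly from the points x_n(ω). The sketch below (ours; x_n(ω) recomputed by bisection from the smooth model, the convention x_{−1} ≡ 1 hard-coded) builds the branch through a given x and checks that T̃ sends x_n(ω) to x_{n−1}(φω), just as the smooth map does.

```python
def T_left(x, a):
    return x * (1.0 + (2.0 * x) ** a)

def inv_left(y, a):               # invert the left branch by bisection
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if T_left(mid, a) < y else (lo, mid)
    return 0.5 * (lo + hi)

def x_pts(digits, n_max):
    # [x_0, x_1(w), ..., x_{n_max}(w)] for the realization encoded by digits
    pts = [0.5]
    for n in range(1, n_max + 1):
        y = 0.5
        for a in reversed(digits[:n]):
            y = inv_left(y, a)
        pts.append(y)
    return pts

def T_tilde(x, digits, n_max=30):
    # piecewise affine version: affine from (x_n(w), x_{n-1}(w)] onto
    # (x_{n-1}(phi w), x_{n-2}(phi w)], with x_{-1} = 1; right branch 2x - 1
    if x > 0.5:
        return 2.0 * x - 1.0
    xw = x_pts(digits, n_max)                 # x_n(omega)
    xpw = [1.0] + x_pts(digits[1:], n_max)    # index n holds x_{n-1}(phi omega)
    for n in range(1, n_max + 1):
        if xw[n] < x <= xw[n - 1]:
            slope = (xpw[n - 1] - xpw[n]) / (xw[n - 1] - xw[n])
            return xpw[n] + slope * (x - xw[n])
    raise ValueError("x below the resolved partition; increase n_max")

digits = [1.1, 1.6] * 16                      # a realization with 1 <= alpha < beta
xw = x_pts(digits, 5)
x1_phi = x_pts(digits[1:], 5)[1]              # x_1(phi omega)
assert abs(T_tilde(xw[2], digits) - x1_phi) < 1e-9
```

By construction T̃ shares the orbit structure of the points x_n(ω) with the smooth random model, which is why the return-time asymptotics of Theorem 1.1 transfer to this linearized system.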
Let S be the skew product defined using (4.1) with 1 ≤ α < β < ∞. The following hold:

(1) S admits a unique (subject to the normalizing condition ν(∆₀) = 1) absolutely continuous invariant infinite (σ-finite) measure ν.

(2) Let L_ν be the transfer operator associated with S with respect to the invariant measure ν. For α > 1, let f be a Lipschitz function supported on ∆₀. Then

lim_{n→∞} ||n^{1−1/α} 1_{∆₀} L_ν^n f − c₀ ∫_{∆₀} f dν||_∞ = 0,

where c₀ is a constant independent of f. In particular, if g ∈ L¹(∆₀), we have

lim_{n→∞} n^{1−1/α} ∫_{∆₀} f · g ∘ S^n dν = c₀ ∫_{∆₀} f dν ∫_{∆₀} g dν.

(3) For α = 1, and f a Lipschitz function supported on ∆₀, we obtain the same results as in (2) with normalizing sequence ln n instead of n^{1−1/α}.

(We consider a linearized version because for general random LSV transformations we were able to prove bounded distortion only when 0 < α ≤ β ≤ 1; see [4] for the result and a discussion of distortion. For more general observables, one can use the result of [13]; however, one would then lose uniform convergence for all parameters. See [13, 9] for a discussion.)

Proof. We first notice that the induced skew product S^R : ∆₀ → ∆₀ is piecewise affine and onto. In particular, it satisfies the assumptions of Aaronson–Denker [2]. Thus, it has a unique absolutely continuous invariant probability measure ν_{∆₀} whose density h_{∆₀} ∈ B, where B is the space of Lipschitz functions on ∆₀; the system (S^R, ν_{∆₀}) is mixing, and the associated transfer operator L_{S^R}, with respect to ν_{∆₀}, has a spectral gap on B. The S-invariant measure ν is defined using ν_{∆₀}. The fact that the measure ν is infinite (σ-finite) follows from Theorem 1.1 since α ≥ 1. Since ν|_{∆₀} = ν_{∆₀}, the normalization ν(∆₀) = 1 is automatically satisfied. To prove (2), we apply Theorem 1.4 of Gouëzel [9]. For f ∈ B define

(4.2)  R_n f := 1_{∆₀} L_ν^n (1_{{R = n}} f),

where L_ν is the transfer operator associated with S with respect to the invariant measure ν. Using (4.2), we get

L_{S^R}(f) = Σ_{n≥1} 1_{∆₀} L_ν^n (1_{{R = n}} f) = Σ_{n≥1} R_n f.

The spectral properties of L_{S^R} imply that (R_n)_{n≥1} is an aperiodic renewal sequence of operators (see Sarig [16] for the definition). We still need to check:

• ν({R > n}) ∼ n^{−1/α} l(n), where l is a slowly varying function;
• there exists C > 0 such that ||R_n||_Lip ≤ C n^{−1/α−1}.

The first condition follows from Lemma 3.3. For the second one, using (8) on page 649 of [16], there exists C > 0 such that

||R_n||_Lip ≤ C ν({R = n}).

Thus, Lemma 3.3 completes the proof of (2). For (3) the proof is essentially the same as in (2), but using part (a) of Theorem 1.1 in [13]. □
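The normalizing sequences in parts (2) and (3) can be read off from the return-time tails by a standard renewal heuristic. The following sketch (constants suppressed; not part of the proof) uses only the tail estimate ν(R > n) ∼ A n^{−1/α} from Lemma 3.3:

```latex
\[
\beta := 1/\alpha \in (0,1], \qquad \nu(R>n) \sim A\,n^{-\beta},
\qquad
w_n := \sum_{k=0}^{n-1} \nu(R>k) \sim
\begin{cases}
\dfrac{A}{1-\beta}\, n^{1-\beta}, & \beta<1 \ (\alpha>1),\\[1ex]
A \ln n, & \beta=1 \ (\alpha=1).
\end{cases}
\]
% By Karamata-type renewal asymptotics (cf. [1]), the expected number of
% returns to \Delta_0 by time n satisfies a_n \asymp n / w_n, so the local
% return density, and hence 1_{\Delta_0} L_\nu^n f, decays like
\[
1_{\Delta_0} L_\nu^n f \;\asymp\; a_n - a_{n-1} \;\asymp\;
\begin{cases}
n^{\beta-1} = n^{1/\alpha-1}, & \alpha>1,\\
1/\ln n, & \alpha=1,
\end{cases}
\]
% matching the normalizing sequences n^{1-1/\alpha} in part (2)
% and \ln n in part (3).
```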
Remark. When 0 < α < 1 and α < β < ∞, we can use the arguments from Theorem 1.2 to prove that the skew product of the linearized random transformation has a finite invariant measure over the full range 0 < β < ∞, even though the T_β maps have only infinite absolutely continuous invariant measures when 1 ≤ β < ∞. The required bounded distortion condition is automatically satisfied for the skew product (4.1). Asymptotic estimates as in Theorem 1.1 lead to decay of correlation results as in Theorem 1.2 and limit laws as in Theorem 3.2.
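The mechanism behind this remark — escape from the common neutral fixed point is governed by the smallest parameter — is easy to observe numerically. The following is a minimal sketch, assuming the standard LSV form T_α(x) = x(1 + 2^α x^α) on [0, 1/2] and 2x − 1 on (1/2, 1], with an i.i.d. Bernoulli draw of the parameter as a simple instance of the randomizing process (the paper's setting is more general); the function names are ours, not the paper's.

```python
import random

def lsv(x, alpha):
    """Standard LSV (Pomeau-Manneville) map: x(1 + 2^alpha x^alpha) on [0, 1/2],
    2x - 1 on (1/2, 1] (assumed form of the constituent maps T_alpha)."""
    if x <= 0.5:
        return x * (1.0 + (2.0 * x) ** alpha)
    return 2.0 * x - 1.0

def escape_time(x0, alpha, cap=10**6):
    """Number of steps for the orbit of x0 under the single map T_alpha
    to enter the base (1/2, 1]."""
    x, n = x0, 0
    while x <= 0.5 and n < cap:
        x = lsv(x, alpha)
        n += 1
    return n

def random_return_time(x0, alphas, probs, rng, cap=10**6):
    """First return time of x0 in (1/2, 1] to (1/2, 1] under an i.i.d.
    random composition of LSV maps."""
    x = lsv(x0, rng.choices(alphas, probs)[0])
    n = 1
    while x <= 0.5 and n < cap:
        x = lsv(x, rng.choices(alphas, probs)[0])
        n += 1
    return n

if __name__ == "__main__":
    # Escape from the neutral fixed point is far faster for the smaller alpha:
    print(escape_time(0.01, 0.5), "<<", escape_time(0.01, 3.0))
    # Empirical return times for a random mix of parameters 0.75 and 3.0;
    # the tail of R is governed by the smallest parameter.
    rng = random.Random(0)
    times = [random_return_time(1.0 - rng.random() / 2, [0.75, 3.0],
                                [0.5, 0.5], rng) for _ in range(2000)]
    print("fraction with R > 50:", sum(t > 50 for t in times) / len(times))
```

One can then compare the empirical survival function of R on a log–log plot against the n^{−1/α_min} prediction of Theorem 1.1; the slope is set by the smallest parameter present, independently of the mixing probabilities.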
Acknowledgment.
The authors thank Ian Melbourne for bringing the piecewise affine maps in reference [6] to their attention. The authors are indebted to Henk Bruin for raising a question about the linearized random system in Section 4. His question prompted us to correct the definition of the skew product in equation (4.1).
References
1. Aaronson, J. An introduction to infinite ergodic theory. Mathematical Surveys and Monographs, 50. Amer. Math. Soc., Providence, RI, 1997.
2. Aaronson, J. and Denker, M. Local limit theorems for partial sums of stationary sequences generated by Gibbs-Markov maps. Stoch. Dyn. 1 (2001), no. 2, 193–237.
3. Ayyer, A., Liverani, C. and Stenlund, M. Quenched CLT for random toral automorphisms. Discrete Contin. Dyn. Syst. 24 (2009), no. 2, 331–348.
4. Bahsoun, W., Bose, C. and Duan, Y. Decay of correlation for random intermittent maps. Nonlinearity 27 (2014), 1543–1554.
5. Bahsoun, W., Bose, C. and Quas, A. Deterministic representation for position dependent random maps. Discrete Contin. Dyn. Syst. 22 (2008), 529–540.
6. Gaspard, P. and Wang, X.-J. Sporadicity: between periodic and chaotic dynamical behaviors. Proc. Nat. Acad. Sci. U.S.A. 85 (1988), no. 13, 4591–4595.
7. Gouëzel, S. Sharp polynomial estimates for the decay of correlations. Israel J. Math. 139 (2004), 29–65.
8. Gouëzel, S. Statistical properties of a skew product with a curve of neutral points. Ergodic Theory Dynam. Systems 27 (2007), 123–151.
9. Gouëzel, S. Correlation asymptotics from large deviations in dynamical systems with infinite measure. Colloq. Math. 125 (2011), no. 2, 193–212.
10. Hoeffding, W. Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc. 58 (1963), 13–30.
11. Dobbs, N. and Stenlund, M. Quasistatic dynamics with intermittency. Available at http://arxiv.org/abs/1510.02748
12. Liverani, C., Saussol, B. and Vaienti, S.
A probabilistic approach to intermittency. Ergodic Theory Dynam. Systems 19 (1999), 671–685.
13. Melbourne, I. and Terhesiu, D. Operator renewal theory and mixing rates for dynamical systems with infinite measure. Invent. Math. 189 (2012), no. 1, 61–110.
14. Pianigiani, G. First return map and invariant measures. Israel J. Math. 35 (1980), 32–48.
15. Pomeau, Y. and Manneville, P. Intermittent transition to turbulence in dissipative dynamical systems. Comm. Math. Phys. 74 (1980), 189–197.
16. Sarig, O. Subexponential decay of correlations. Invent. Math. 150 (2002), 629–653.
17. Young, L.-S. Recurrence times and rates of mixing. Israel J. Math. 110 (1999), 153–188.
18. Zweimüller, R. Ergodic structures and invariant densities for non-Markovian interval maps with indifferent fixed points. Nonlinearity 11 (1998), 1263–1276.
19. Zweimüller, R. Mixing limit theorems for ergodic transformations. J. Theoret. Probab. 20 (2007), 1059–1071.
Department of Mathematical Sciences, Loughborough University, Loughborough,Leicestershire, LE11 3TU, UK
E-mail address : † [email protected]
Department of Mathematics and Statistics, University of Victoria, PO BOX 3045STN CSC, Victoria, B.C., V8W 3R4, Canada
E-mail address: ∗