Asymptotically linear iterated function systems on the real line
Gerold Alsmeyer, Sara Brofferio and Dariusz Buraczewski
Abstract
Given a sequence of i.i.d. random functions Ψ_n : R → R, n ∈ N, we consider the iterated function system and Markov chain which is recursively defined by X_0^x := x and X_n^x := Ψ_n(X_{n−1}^x) for x ∈ R and n ∈ N. Under the two basic assumptions that the Ψ_n are a.s. continuous at any point in R and asymptotically linear at the "endpoints" ±∞, we study the tail behavior of the stationary laws of such Markov chains by means of Markov renewal theory. Our approach provides an extension of Goldie's implicit renewal theory [20] and can also be viewed as an adaptation of Kesten's work on products of random matrices [24] to one-dimensional function systems as described. Our results have applications in quite different areas of applied probability like queuing theory, econometrics, mathematical finance and population dynamics, e.g. ARCH models and random logistic transforms.
AMS 2010 subject classifications:
Primary 60F05, 60G55. Secondary 60J10.
Keywords: iterated function system, asymptotically linear, stationary distribution, tail behavior, Markov renewal theory
Gerold Alsmeyer
Institute of Mathematical Stochastics, Department of Mathematics and Computer Science, University of Münster, Einsteinstrasse 62, D-48149 Münster, Germany. e-mail: [email protected]
Sara Brofferio
Université Paris-Saclay, CNRS, Laboratoire de Mathématiques d'Orsay, 91405 Orsay Cedex; Université Paris-Est, CNRS, LAMA, 94010 Créteil; and Université Gustave Eiffel, LAMA, 77447 Marne-la-Vallée, France. e-mail: [email protected]
Dariusz Buraczewski
Institute of Mathematics, University of Wroclaw, pl. Grunwaldzki 2/4, 50-384 Wroclaw, Poland. e-mail: [email protected]
Introduction

Let Ψ, Ψ_1, Ψ_2, ... : R → R be i.i.d. random functions, defined on a common probability space (Ω, A, P), such that Ψ is a.s. continuous at each x ∈ R, i.e.

    P[ω : Ψ(ω, ·) is continuous on R] = 1.  (1)

Then the associated iterated function system (IFS), recursively defined by

    X_n = Ψ_n(X_{n−1}) = Ψ_n ⋯ Ψ_1(X_0)  (2)

for n ≥ 1, where X_0 is independent of the sequence {Ψ_n}_{n∈N} and Ψ_n ⋯ Ψ_1 is used as shorthand for Ψ_n ∘ ... ∘ Ψ_1, forms a temporally homogeneous Markov chain on R which, by (1), has the Feller property. For the case when Ψ is asymptotically linear at ±∞ in the sense that

    sup_{x≤0} |Ψ(x) − (−A)x| ≤ B and sup_{x≥0} |Ψ(x) − (+A)x| ≤ B  (3)

for some real random variables +A, −A, B (the prefixed sign being a label distinguishing the slopes at +∞ and −∞, not a factor), the purpose of this article is to provide general conditions which

• ensure that (X_n)_{n≥0} possesses a stationary distribution ν and, a fortiori,
• allow to describe the tail behavior of ν at ±∞.

Instances of asymptotically linear IFS, shortly called ALIFS hereafter, appear in many contexts of applied probability and related fields like queueing models, econometrics, financial time series or population dynamics. The following known examples all fit into this class, at least after suitable conjugation Ψ ⇝ g⁻¹ ∘ Ψ ∘ g or extension of Ψ from the positive halfline to the whole real line.

(i) Random affine recursions: Ψ(x) = Ax + B.
(ii) Lindley recursions: Ψ(x) = (Ax + B)⁺.
(iii) ARCH(1) models: Ψ(x) = (β + λx²)^{1/2} Z with β, λ > 0.
(iv) AR(1) models with ARCH(1) errors: Ψ(x) = αx + (β + λx²)^{1/2} Z with β, λ > 0.
(v) Stochastic Beverton-Holt model: Ψ(x) = Ax/(1 + x/B), x > 0.
(vi) Random logistic transforms: Ψ(x) = Ax(1 − x), x ∈ (0, 1).

Here A, B, Z denote random variables, which in the last two examples are also supposed to be positive. In (vi), even 0 < A < 4 must hold so that Ψ forms a random self-map of (0, 1). Further examples of ALIFS can be found in the survey papers by Aldous and Bandyopadhyay [1] and by Diaconis and Freedman [18], and also in [10, Section 6].
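The forward iteration (2) is easy to simulate. The following minimal sketch (an illustration only, assuming a hypothetical lognormal A and Gaussian B, neither prescribed by the text) runs the Lindley recursion of example (ii); the choice E[log A] = −0.5 < 0 keeps the chain from drifting to +∞.

```python
import random

def iterate_ifs(draw_psi, x0, n, rng):
    """Run the forward iteration X_n = Psi_n(X_{n-1}) of eq. (2)."""
    x = x0
    for _ in range(n):
        x = draw_psi(rng)(x)
    return x

def lindley_draw(rng):
    # Example (ii): Psi(x) = (Ax + B)^+ with hypothetical random coefficients;
    # E[log A] = -0.5 < 0 makes the map contractive on average.
    a = rng.lognormvariate(-0.5, 0.5)
    b = rng.gauss(0.0, 1.0)
    return lambda x: max(a * x + b, 0.0)

rng = random.Random(1)
samples = [iterate_ifs(lindley_draw, 0.0, 200, rng) for _ in range(1000)]
print(min(samples) >= 0.0)                 # prints True: the Lindley chain lives on [0, oo)
print(0.0 < sum(samples) / len(samples))   # prints True: crude Monte Carlo stationary mean
```

The same driver works for any of the examples (i)-(vi) by swapping the function returned by `lindley_draw`.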
To put our work into context, we first mention Kesten's [24] seminal paper on random affine recursions X_n = A_n X_{n−1} + B_n on R^d (the multivariate version of (i) with i.i.d. d × d random matrices A_n and d-dimensional random vectors B_n), where it is shown, under conditions ensuring positive recurrence, that the tail behavior of the unique stationary law ν of (X_n)_{n≥0} can be determined by use of renewal theory (after a change of measure) for an associated Markov random walk (MRW). This walk is obtained upon approximating X_n by a linear IFS Z_n and then decomposing Z_n into its distal part, given by the Euclidean norm |Z_n|, and its directional part Z_n/|Z_n|, which forms a recurrent Markov chain on the sphere S^{d−1}. If d = 1, the latter reduces to the finite set S = {±1}. A renewal-theoretic approach was also taken by Goldie [20], who studied the tail behavior of ν for one-dimensional, asymptotically linear Ψ with +A = −A. We refer to the recent monograph [13] for an overview of random affine recursions.

One of the central questions to be answered in the present work concerns the impact of distinct +A, −A on the left and right tail of ν. This will be accomplished by employing Kesten's method in the one-dimensional setup, where it applies without various tedious technicalities that occur in higher dimensions. The reason for this simplification is that, as already mentioned, the directional part Z_n/|Z_n| of X_n takes values in {±1} only and thus reduces to a simple finite Markov chain. More precisely, we will compare the given ALIFS with an approximating LIFS (short for linear IFS) of random linear functions and apply Kesten's method to the latter. The comparison idea has already appeared in recent work by Mirek [27] and by the authors [5, 10]. Our approach may also be viewed as an extension of Goldie's implicit renewal theory, the extension being that the random walk in Goldie's approach is now Markov-modulated and thus a MRW. We will return to this point with more explanations later.
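For orientation: in Goldie's setting, where the two slopes coincide (+A = −A = A), the tail index of ν is the root κ > 0 of E|A|^κ = 1. The sketch below computes this root by bisection under the illustrative assumption A = √λ·Z with standard normal Z (as in the ARCH(1) example (iii)), using the closed form E|Z|^t = 2^{t/2} Γ((t+1)/2)/√π; the bisection relies on E log|A| < 0, which holds for the λ used here.

```python
import math

def moment(theta, lam):
    """m(theta) = E|A|^theta for A = sqrt(lam) * Z with Z standard normal,
    using E|Z|^t = 2^(t/2) * Gamma((t+1)/2) / sqrt(pi)."""
    return lam ** (theta / 2) * 2 ** (theta / 2) * math.gamma((theta + 1) / 2) / math.sqrt(math.pi)

def tail_index(lam, lo=1e-6, hi=50.0):
    """Root kappa > 0 of m(kappa) = 1 (the Goldie/Kesten condition), by bisection.
    m is log-convex with m(0) = 1 and m'(0) = E log|A| < 0, so the positive
    root is the unique upward crossing of level 1."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if moment(mid, lam) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(tail_index(1.0))   # E[Z^2] = 1, so kappa = 2 exactly when lam = 1
print(tail_index(0.5))   # smaller lam gives a larger kappa (lighter tail)
```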
Our standing assumption (3) on Ψ throughout this work can be expressed in the more compact form

    sup_{x∈R} |Ψ(x) − Λ(x)| ≤ B a.s.,  (4)

where

    Λ(x) := sign(x)A·x = { +A·x, if x > 0;  −A·x, if x < 0;  0, if x = 0 }  (5)

for some real-valued random variables −A, +A and B such that, without loss of generality, B ≥ 1. We further put 0A := 0 and sign(x) := 1_{R>}(x) − 1_{R<}(x) for x ∈ R, where R< := (−∞, 0) and R> := (0, ∞). In other words, we are given a sequence

    (Ψ_n, Λ_n, −A_n, +A_n, B_n), n = 1, 2, 3, ...,

of i.i.d. copies of (Ψ, Λ, −A, +A, B) satisfying (5) and consider the Markov chain defined by (2).

Fig. 1  The four possible shapes of Ψ. Slash-type / (top left): −A > 0 and +A > 0, thus −Ψ(−∞) = Ψ(+∞) = +∞. Backslash-type \ (top right): −A < 0 and +A < 0, thus Ψ(−∞) = −Ψ(+∞) = +∞. Vee-type ∨ (bottom left): −A < 0 < +A, thus Ψ(−∞) = Ψ(+∞) = +∞. Wedge-type ∧ (bottom right): +A < 0 < −A, thus Ψ(−∞) = Ψ(+∞) = −∞.

Provided that the observed values of −A, +A are both nonzero, the pertinent realization of Ψ as a function may, regarding its overall shape, exhibit one of four distinct types as illustrated in Fig. 1. These are denoted mnemonically as slash-type /, backslash-type \, vee-type ∨, and wedge-type ∧. In the simple affine model with +A = −A = A >
0, the function Ψ is always of slash-type. Goldie's implicit renewal theory [20] is designed for ALIFS with Ψ satisfying

    |Ψ(x) − Ax| ≤ B

for some random variables A, B. It therefore mixes functions of slash-type and backslash-type such that −A = +A. The AR(1) model with ARCH(1) errors provides an instance where functions of type /, ∨ and ∧ are mixed. If the function Ψ(x) is uniformly bounded for x > 0 (resp. x < 0), then +A = 0 (resp. −A = 0). This occurs, for instance, in the Beverton-Holt model.

In view of (4), it is natural to relate the ALIFS X_n = Ψ_n ⋯ Ψ_1(X_0) with the LIFS Λ_n ⋯ Λ_1(X_0), which in turn, following Kesten's approach in the present one-dimensional setting, can be studied with the help of a suitable temporally homogeneous Markov chain Ξ = (ξ_n)_{n≥0}. Namely, let ξ_0 := sign(X_0) and

    ξ_n := sign(Λ_n(ξ_{n−1})) = { sign(−A_n)·ξ_{n−1}, if ξ_{n−1} = −1;  sign(+A_n)·ξ_{n−1}, if ξ_{n−1} = 1;  0, if ξ_{n−1} = 0 }  (6)

for n ≥ 1. This chain has state space S̄ = S ∪ {0}, and the state 0, if it appears, is absorbing, in which case at least one of the states ±1 must be transient. S is identified with the set of signs {−, +} (e.g., in sub- or superscripts as in (5)) because ξ_n keeps track of the sign of Λ_n ⋯ Λ_1(X_0). Let

    p_{δε} := P[ξ_n = ε | ξ_{n−1} = δ] = P[sign(δA)·δ = ε]

for δ, ε ∈ {−1, +1}, whence the possibly reduced and therefore substochastic transition matrix of Ξ on S is given by

    P = ( p−−  p−+ )   ( P[−A > 0]  P[−A < 0] )
        ( p+−  p++ ) = ( P[+A < 0]  P[+A > 0] ).  (7)

As is common, we put P_δ := P[· | ξ_0 = δ] and P_χ := Σ_{δ∈S} χ_δ P_δ for any measure χ on S.

In order to state our main results on the tail behavior of any stationary distribution of the given ALIFS (X_n)_{n≥0}, we distinguish three cases regarding the transition structure of the chain Ξ (see Fig. 2).

Case 1 (irreducible case): p−+ > 0 and p+− >
0, that is, both +A and −A are negative with positive probability. We will see that the tails of an invariant distribution ν at +∞ and −∞ are of the same order in this case.

Case 2 (unilateral case): p−+ > 0 and p+− = 0, that is, −A is negative with positive probability but +A ≥
0. The functions Ψ are only of types / and ∨. In this case, the order of decay of ν at +∞ can depend on both coefficients +A and −A, while the behavior at −∞ depends only on −A. The corresponding case p+− > 0 and p−+ = 0 can be treated without further ado after conjugation by x ↦ −x.

Case 3 (separated case): p−+ = 0 and p+− = 0, that is, −A ≥ 0 and +A ≥
0. The functions Ψ are only of type /. In this case, the order of decay of the tail of ν at +∞ (resp. −∞) depends only on +A (resp. −A).

Fundamental tools in our study are the Cramér transform of P, defined by

    P(θ) := ( p−−(θ)  p−+(θ) )    ( E|−A|^θ 1{−A>0}  E|−A|^θ 1{−A<0} )
            ( p+−(θ)  p++(θ) ) := ( E|+A|^θ 1{+A<0}  E|+A|^θ 1{+A>0} )  (8)

for θ ∈ D := {ϑ ≥ 0 : E|−A|^ϑ + E|+A|^ϑ < ∞}, and its dominant eigenvalue (spectral radius) ρ(θ). They will be discussed in greater detail in Subsection 4.1.

Fig. 2  The transition structure of Ξ in the three cases 1-3. The dashed arrows indicate transitions that may have either positive or zero probability.

Case 1 (irreducible case): p−+ > 0 and p+− > 0. For θ ∈ D, ρ(θ) is associated with the left and right nonnegative eigenvectors u(θ) = (u−(θ), u+(θ))ᵀ and v(θ) = (v−(θ), v+(θ))ᵀ, respectively, uniquely determined by u(θ)ᵀ v(θ) = 1 and u+(θ) + u−(θ) = 1. This implies that

    π̂(θ) := ( u−(θ)v−(θ), u+(θ)v+(θ) )ᵀ  (9)

forms a probability distribution. For θ > 0, it will later be identified as the stationary law of (ξ_n)_{n≥0} after a suitable change of measure. For θ = 0, this is only true when P is stochastic or, equivalently,

    P[−A = 0] = P[+A = 0] = 0  (10)

holds, in which case

    π = ( p+−/(p−+ + p+−), p−+/(p−+ + p+−) )ᵀ = π̂(0)  (11)

equals the unique associated stationary distribution of Ξ.

The crucial assumption in the subsequent theorem is that

    ρ(κ) = 1 for some κ > 0.  (12)

We denote by C*(R) the space of bounded Lipschitz functions φ on R which vanish in a neighborhood of the origin and by C*−(R), C*+(R) the subspaces of those φ that also vanish on R≥ := [0, ∞) and R≤ := (−∞, 0], respectively.

Theorem 2.1
Assuming (12),

    E|±A|^κ log|±A| < ∞ and E B^κ < ∞,  (13)

and

    P_{π̂(κ)}[ log|ξ₀A| − a_{ξ₀} + a_{ξ₁} ∈ dZ ] < 1 for all d > 0 and a−, a+ ∈ R,  (14)

any stationary distribution ν of the ALIFS (X_n)_{n≥0} has power tails of order κ; more precisely,

    lim_{t→∞} t^κ ν((t, ∞)) = C+ and lim_{t→∞} t^κ ν((−∞, −t)) = C−  (15)

for constants C+, C− ≥ 0 which are explicitly defined in (79), (80) and satisfy

    u+(κ) C− = u−(κ) C+.  (16)

Furthermore,

    lim_{t→∞} t^κ ∫ φ(t⁻¹x) ν(dx) = κ ∫_0^∞ ( C− φ(−x) + C+ φ(x) ) x^{−(κ+1)} dx  (17)

for any φ ∈ C*(R).

Further information on the lattice-type Condition (14) will be provided later, see Subsect. 4.3.

Observe that the above result concerns the asymptotic behavior of the tail of the stationary measure. Our proof, which is based on a renewal theorem, does not entail the positivity of the limiting constants C+, C−. However, we will be able to verify this in Section 8 under the additional assumption that the stationary measure has unbounded support (see Proposition 8.1).

Case 2 (unilateral case): p−+ > 0 and p+− = 0.
In this case, the crucial numbers, if they exist, are κ−, κ+ > 0 satisfying

    p−−(κ−) = 1 and p++(κ+) = 1.

As a substitute for Condition (13), we need that either

    E|−A|^κ log|−A| < ∞ and E B^κ < ∞  (18)

or

    E|+A|^κ log|+A| < ∞ and E B^κ < ∞.
(19)

Theorem 2.2 (a) If κ− exists, Condition (18) holds for κ = κ−, and P[log|−A| ∈ · | −A > 0] is nonarithmetic, then any stationary distribution ν satisfies

    lim_{t→∞} t^{κ−} ν((−∞, −t)) = C−  (20)

as well as

    lim_{t→∞} t^{κ−} ∫ φ(t⁻¹x) ν(dx) = κ− ∫_0^∞ C− φ(−x) x^{−(κ−+1)} dx  (21)

for φ ∈ C*−(R), where C− is defined in (86).

(b) If κ+ exists, p−−(κ+) < 1 (thus κ−, if it exists, is greater than κ+), p−+(κ+) < ∞, Condition (19) holds for κ = κ+, and P[log|+A| ∈ · | +A > 0] is nonarithmetic, then any stationary distribution ν satisfies

    lim_{t→∞} t^{κ+} ν((t, ∞)) = C+  (22)

as well as

    lim_{t→∞} t^{κ+} ∫ φ(t⁻¹x) ν(dx) = κ+ ∫_0^∞ C+ φ(x) x^{−(κ++1)} dx  (23)

for φ ∈ C*(R), where C+ is defined in (85).

(c) If κ− exists, p++(κ−) < 1 (thus κ− < κ+ if the latter exists as well), p++(θ) < 1 and p−+(θ) < ∞ for some θ > κ−, Condition (18) holds for κ = κ−, and P−[log|−A| ∈ · | −A > 0] is nonarithmetic, then any stationary distribution ν satisfies

    lim_{t→∞} t^{κ−} ν((t, ∞)) = C+−  (24)

as well as

    lim_{t→∞} t^{κ−} ∫ φ(t⁻¹x) ν(dx) = κ− ∫_0^∞ ( C− φ(−x) + C+− φ(x) ) x^{−(κ−+1)} dx  (25)

for φ ∈ C*(R), where C−, C+− are defined in (86) and (87), respectively.

Observe that if both κ− and κ+ exist with κ− > κ+, then cases (a) and (b) entail that the stationary measure behaves regularly at +∞ and −∞, but with different tail decay rates.

The unilateral case shares several features with the study of the stationary distribution of the two-dimensional recursive Markov chain defined by the affine recursions Ψ_n(x) = A_n x + B_n in the case when the A_n are upper triangular matrices.
Such models have attracted some interest in recent years due to their relevance for some models in econometrics, see [15].

Regarding the case κ+ = κ−, we further remark that the methods used in the present paper are not strong enough and thus need to be refined. It seems reasonable to believe that in this case a first-order expansion of P(θ) at κ is required, with a possible extra term in the power tail of ν; see again [16] for similar considerations in the case of the afore-mentioned two-dimensional recursions.

Case 3 (separated case): p−+ = 0 and p+− = 0.
This is the easiest case and can be treated by Goldie's implicit renewal theory [20]. We state the result here for completeness and put

    C_δ := ( κ_δ p'_{δδ}(κ_δ) )⁻¹ E[ |Ψ(R)|^{κ_δ} 1{δΨ(R) > 0} − |δA·R|^{κ_δ} 1{δR > 0} ]

for δ ∈ {−, +}.

Theorem 2.3 (a) If κ− exists, Condition (18) holds for κ = κ−, and the law of log|−A| is nonarithmetic, then any stationary distribution ν satisfies

    lim_{t→∞} t^{κ−} ν((−∞, −t)) = C−,  (26)

where R has law ν and is independent of (Ψ, −A). Moreover, (21) holds for any φ ∈ C*−(R) and with C− as in (26).

(b) If κ+ exists, Condition (19) holds for κ = κ+, and the law of log|+A| is nonarithmetic, then any stationary distribution ν on R> satisfies

    lim_{t→∞} t^{κ+} ν((t, ∞)) = C+,  (27)

where R has law ν and is independent of (Ψ, +A). Moreover, (23) holds for any φ ∈ C*+(R) and with C+ as in (27).

In any of the three cases, the conditions (12) and (13) ensure the existence of at least one stationary distribution of (X_n)_{n≥0}. We refer to Section 9 for details.

Let us briefly discuss some examples of
ALIFS that have appeared in the literature and whose stationary distributions exhibit a tail behavior that, under appropriate conditions, can be read off from our main results. In order to keep this presentation short, we refrain from giving any technical details. Further applications with a more thorough discussion can be found in [10].
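Given samples of the coefficient pair (−A, +A), the relevant case is determined by the off-diagonal entries p−+ = P[−A < 0] and p+− = P[+A < 0] of P in (7). A small sketch with hypothetical coefficient laws, chosen for illustration only:

```python
import random

def classify(p_mp, p_pm):
    """Trichotomy of Section 2, read off the off-diagonal entries p-+, p+- of P in (7)."""
    if p_mp > 0 and p_pm > 0:
        return "Case 1 (irreducible)"
    if p_mp == 0 and p_pm == 0:
        return "Case 3 (separated)"
    return "Case 2 (unilateral)"

def off_diagonals(draw_pair, n=50000, seed=11):
    """Monte Carlo estimates of p-+ = P[-A < 0] and p+- = P[+A < 0]."""
    rng = random.Random(seed)
    mp = pm = 0
    for _ in range(n):
        a_minus, a_plus = draw_pair(rng)
        mp += a_minus < 0
        pm += a_plus < 0
    return mp / n, pm / n

# Hypothetical coefficient laws, for illustration only:
print(classify(*off_diagonals(lambda r: (r.gauss(0, 1), r.gauss(0, 1)))))        # Case 1
print(classify(*off_diagonals(lambda r: (r.gauss(0, 1), abs(r.gauss(0, 1))))))   # Case 2: +A >= 0
print(classify(*off_diagonals(lambda r: (abs(r.gauss(0, 1)), r.uniform(0, 2))))) # Case 3
```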
ARCH(1)
Our first example, the autoregressive conditional heteroskedasticity model of order one, is well known in econometrics and usually defined by the pair of recursive equations

    X_n = Σ_n Z_n and Σ_n² = β + λ X_{n−1}²,

where {Z_n}_{n≥1} denotes a sequence of i.i.d. random variables with mean zero and variance one (the noise) and β, λ are positive parameters. Simple inspection shows that this entails the recursive relation X_n = Ψ_n(X_{n−1}) for n ≥ 1 and i.i.d. copies Ψ_1, Ψ_2, ... of the random function

    Ψ(x) = Z (β + λx²)^{1/2}.

As one can also readily check, {X_n}_{n≥0} forms an ALIFS which satisfies (4) with ±A_n = ±Z_n √λ and is irreducible (Case 1). Unless Z has a symmetric law, the constants C+, C− defined in (15) are generally distinct. Let us also point out here that {X_n}_{n≥0} remains an ALIFS if the parameters β_n and λ_n are allowed to be random.

AR(1) models with ARCH(1) errors
This extension of the previous example is obtained by adding an extra linear term and is therefore defined as the
ALIFS generated by i.i.d. copies of the random function

    Ψ(x) = αx + Z (β + λx²)^{1/2}

for some α ∈ R, β, λ > 0 and a random variable Z as before. It satisfies (4) with ±A = α ± Z√λ. Depending on the parameters α, λ and the almost sure range of Z, all three cases introduced above may occur. We will return to this example in Section 10 at the end of this work.

IFS on the unit interval
Consider an
IFS generated by i.i.d. copies of a random continuous self-map Φ of the unit interval [0, 1] which further satisfies Φ((0, 1)) ⊆ (0, 1). If Φ is twice differentiable at 0 and 1, this IFS can be conjugated to obtain an ALIFS of the real line. Namely, by taking the diffeomorphism r of (0, 1) onto R, defined by

    r(u) := −1/u + 1/(1 − u),

the conjugated function Ψ = r ∘ Φ ∘ r⁻¹ satisfies (4) with

    −A = { 1/Φ'(0), if Φ(0) = 0 or 1;  0, if Φ(0) ∈ (0, 1) }

and

    +A = { 1/Φ'(1), if Φ(1) = 0 or 1;  0, if Φ(1) ∈ (0, 1) }

(see Section 6.3 in [10] for more details). Note further that ν is an invariant distribution for the IFS generated by Φ (i.e. Φ(X) =d ν if X =d ν, where =d means equality in law) iff r_*ν is invariant for the ALIFS generated by Ψ. Thus, under appropriate hypotheses, this system possesses a stationary distribution whose behaviour close to the boundaries of the interval can be deduced from our main results. As a particular instance which has received some attention in the literature, we mention here the random logistic transform Φ(x) = αx(1 − x) with 0 < α < 4, for which −A = 1/α and +A = −1/α, see e.g. [7].

The principal goal of this work is to describe the tail behavior of a stationary measure ν of an ALIFS at ±∞. In Sections 3 and 4, we provide the indispensable tools to prove our main results. Then we prove the existence of the limits in Theorems 2.1, 2.2 and 2.3 in Sections 5, 6 and 7, respectively. It will be seen that the nondegeneracy of the limits, i.e., the positivity of the limiting constants, requires different arguments and in fact forms a separate problem. It is postponed to Section 8 and there taken care of in Proposition 8.1. In Section 9, we give conditions for the existence of at least one stationary distribution ν (see Prop. 9.1) which are directly seen to be valid in our main theorems. Uniqueness of ν will not be discussed here, because this requires geometric arguments and a local analysis of the process which in such generality is beyond the scope of this work. The AR(1) model with GARCH errors is an example which has received some interest in the literature [5, 9, 21, 26] and to which our results can be applied, in fact with all three cases being possible.
It will therefore be discussed in greater detail in the final Section 10, followed by a short appendix containing a technical lemma about the maximal eigenvalue ρ(θ) in a right neighborhood of 0.

Defining S_0 := log|X_0| and S_n := log|Λ_n ⋯ Λ_1(X_0)| for n ≥ 1 (with log 0 := −∞), we see that, given Ξ, the increments

    ζ_n := S_n − S_{n−1} = log|Λ_n(ξ_{n−1})|, n = 1, 2, 3, ...,

are conditionally independent and

    P[ζ_n ∈ · | Ξ, ξ_{n−1} = δ, ξ_n = ε] = P[log|Λ_n(ξ_{n−1})| ∈ · | ξ_{n−1} = δ, ξ_n = ε] = P[log|δA| ∈ · | δ·sign(δA) = ε].

Equivalently, (ξ_n, ζ_n)_{n≥1} forms a Markov chain such that the conditional law of (ξ_n, ζ_n) given the past depends on ξ_{n−1} only. The transition kernel equals

    Q(δ, {ε} × B) = P[δ·sign(δA) = ε, log|δA| ∈ B]

for measurable B ⊂ R, δ ∈ S̄ and ε ∈ S̄. If δ = 0, then Q(δ, ·) equals the Dirac measure at † := (0, −∞). In other words, † is an absorbing state for (ξ_n, ζ_n)_{n≥1} and should be viewed as a grave. It follows that (ξ_n, S_n)_{n≥0} does indeed constitute a Markov random walk (MRW) with discrete driving chain Ξ, induced by (Λ_n ⋯ Λ_1(X_0))_{n≥1}. However, it may be absorbed at † in finite time (explosion of the additive part). On the other hand, the conditions in our main Theorems 2.1-2.3 ensure that, after a suitable change of measure P ⇝ P̂ to be described in Subsection 4.2, the driving chain has state space S and explosion no longer occurs. The relevant renewal-theoretic properties of the MRW after this measure change, which is essential for the analysis of the tails of the stationary distributions of (X_n)_{n≥0}, will be discussed in Subsection 4.3.

It is convenient to assume a standard model

    (Ω, A, (P_x)_{x∈R}, (ξ_n, S_n)_{n≥0}),

where (Ω, A) denotes the measurable space on which all occurring random variables are defined and P_x := P[· | X_0 = x], thus

    P_x[X_0 = x, ξ_0 = sign(x), S_0 = log|x|] = 1

for all x ∈ R.
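The decomposition above can be checked on a toy example: the sketch below draws hypothetical Gaussian coefficient pairs (−A_n, +A_n), iterates the LIFS, and verifies that ξ_n from (6) tracks sign(Λ_n ⋯ Λ_1(X_0)) while S_n = log|Λ_n ⋯ Λ_1(X_0)| accumulates the increments ζ_n.

```python
import math
import random

rng = random.Random(5)
x0 = 0.7
val = x0                                        # Lambda_n ... Lambda_1(X_0)
xi = 1 if x0 > 0 else (-1 if x0 < 0 else 0)     # xi_0 = sign(X_0)
s = math.log(abs(x0))                           # S_0 = log|X_0|
for _ in range(30):
    a_minus, a_plus = rng.gauss(0, 1), rng.gauss(0, 1)   # hypothetical pair (-A_n, +A_n)
    coeff = a_plus if val > 0 else a_minus               # Lambda_n(x) = +A_n x or -A_n x
    val = coeff * val
    xi = (1 if coeff > 0 else -1) * xi                   # driving chain, eq. (6)
    s += math.log(abs(coeff))                            # additive increment zeta_n = log|coeff|
    assert xi == (1 if val > 0 else -1)                  # xi_n = sign(Lambda_n...Lambda_1(X_0))
    assert abs(s - math.log(abs(val))) < 1e-9            # S_n = log|Lambda_n...Lambda_1(X_0)|
print("MRW decomposition verified over 30 steps")
```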
The definition extends the one given before in Thm. 2.1 in a compatible way because P_δ[ξ_0 = δ] = 1 if δ ∈ S. Moreover, we put P_χ := ∫ P_x χ(dx) for any measure χ on R and use P for probabilities that do not depend on initial conditions.

LIFS
The fact that Ψ_1, Ψ_2, ... are i.i.d. implies that, for each n ∈ N, the forward iteration X_n and the backward iteration X̂_n := Ψ_1 ⋯ Ψ_n(X_0) have the same distribution; more precisely,

    X_n = Ψ_n ⋯ Ψ_1(X_0) =d Ψ_1 ⋯ Ψ_n(X_0) = X̂_n under P_x  (28)

for each n ∈ N. By a similar argument and in analogy with (28),

    Λ_n ⋯ Λ_1(x) =d Λ_1 ⋯ Λ_n(x) under P_x

for all n ∈ N and x ∈ R.

The following simple but crucial lemma is a consequence of Condition (4). Given a Lipschitz continuous function f, let Lip(f) denote its Lipschitz constant. Note that Lip(Λ) = |−A| ∨ |+A|. Further putting Φ_n(x) := Lip(Λ_n)x + B_n for n ∈ N, we introduce the LIFS (Z_n)_{n≥0} and the associated "error term" (Y_n)_{n≥0} by setting Y_0 := 0, Z_0 := X_0,

    Z_n := Λ_n ⋯ Λ_1(Z_0) and Y_n := Σ_{k=1}^{n} Lip(Λ_n ⋯ Λ_{k+1}) B_k

for n ≥ 1. The corresponding backward iterations are

    Ẑ_n := Λ_1 ⋯ Λ_n(Z_0) and Ŷ_n := Σ_{k=1}^{n} Lip(Λ_1 ⋯ Λ_{k−1}) B_k.  (29)

Lemma 3.1
If Condition (4) holds true, then

    sup_{x∈R} |Ψ_n ⋯ Ψ_1(x) − Λ_n ⋯ Λ_1(x)| ≤ Y_n,  (30)
    sup_{x∈R} |Ψ_1 ⋯ Ψ_n(x) − Λ_1 ⋯ Λ_n(x)| ≤ Ŷ_n,  (31)

and in particular

    |X_n − Z_n| ≤ Y_n and |X̂_n − Ẑ_n| ≤ Ŷ_n  (32)

for all n ∈ N.

Proof. It suffices to prove (31), for which we use induction over n. Note that (4) provides the assertion for n = 1. Assuming the assertion is true for n − 1, we obtain

    sup_{x∈R} |Ψ_1 ⋯ Ψ_n(x) − Λ_1 ⋯ Λ_n(x)|
    ≤ sup_{x∈R} ( |Ψ_1 ⋯ Ψ_n(x) − Λ_1 ⋯ Λ_{n−1}(Ψ_n(x))| + |Λ_1 ⋯ Λ_{n−1}(Ψ_n(x)) − Λ_1 ⋯ Λ_{n−1}(Λ_n(x))| )
    ≤ Ŷ_{n−1} + Lip(Λ_1 ⋯ Λ_{n−1}) sup_{x∈R} |Ψ_n(x) − Λ_n(x)|  (33)
    ≤ Ŷ_{n−1} + Lip(Λ_1 ⋯ Λ_{n−1}) B_n.

Since, by (29), the last line equals Ŷ_n, the proof is complete. □

Aiming at the tail behavior of the stationary distributions of the given
ALIFS (X_n)_{n≥0} at ±∞, the Markov chain (ξ_n)_{n≥0} on the set S̄ and its possibly reduced transition matrix P will play an important role. Similar to the work by Goldie [20] and Kesten [24], our approach uses a linear approximation, here of X_n by the LIFS Z_n = Λ_n ⋯ Λ_1(Z_0), see Lemma 3.1, and renewal-theoretic arguments after a suitable change of measure. The latter means to find a harmonic transform under which S becomes the proper state space of (ξ_n)_{n≥0}, thus making absorption at 0 impossible if this state appears at all. For the case when log|Z_n| = S_n has i.i.d. increments and thus forms an ordinary random walk on R, this transform is usually obtained with the help of moment generating functions. The method has indeed been effectively employed in [20] and [27] in the study of asymptotically linear stochastic equations, see also [13]. In the present context, however, the sequence (S_n)_{n≥0} has increments whose distributions are modulated by a two-state Markov chain, and if this chain is irreducible, then (ξ_n, S_n)_{n≥0} constitutes a genuine MRW instead of an ordinary one. This in turn calls for the more advanced tool of so-called transfer operators, as in [24, 6, 12, 22] for the analysis of multidimensional problems and with a MRW whose driving chain has state space S^{d−1}, the unit sphere in R^d, for some d ≥ 2. Since S has only two elements, the transfer operators reduce here to fairly simple objects, namely 2 × 2 matrices.

Recall from (8) the definition of the Cramér transform P(θ) of the transition matrix P on its canonical domain D = {θ ≥ 0 : E|−A|^θ + E|+A|^θ < ∞}, in particular P(0) = P and

    p_{δε}(θ) = E|δA|^θ 1{sign(δA) = δε} = E_δ[ e^{θS_1} 1{ξ_1 = ε} ],  (34)

the latter being a log-convex function on D. Making the further assumption that

    θ_∞ := sup D > 0,  (35)

the matrix P(θ) has finite entries for any θ ∈ D, thus for any θ < θ_∞ and possibly even for θ = θ_∞. Let ρ(θ) be the dominant eigenvalue of P(θ) for θ ∈ D, explicitly given by

    ρ(θ) = ( p−−(θ) + p++(θ) )/2 + [ ( p−−(θ) − p++(θ) )²/4 + p−+(θ) p+−(θ) ]^{1/2}.  (36)

The matrix P(θ) and its spectral radius are strongly related to the product Λ_n ⋯ Λ_1, as confirmed by the subsequent lemma.

Lemma 4.1
For any θ ∈ D and ε, δ ∈ S,

    p^n_{δε}(θ) = E |Λ_n ⋯ Λ_1(δ)|^θ 1{sign(Λ_n ⋯ Λ_1(δ)) = ε}  (37)

and

    lim_{n→∞} (1/n) log E Lip(Λ_n ⋯ Λ_1)^θ = log ρ(θ).  (38)

Proof. The relation (37) can be proved by induction over n. For n = 1, it holds true by (34), and for the inductive step we note that

    p^{n+1}_{δε}(θ) = Σ_{s∈S} p_{δs}(θ) p^n_{sε}(θ)
    = Σ_{s∈S} E |Λ_1(δ)|^θ 1{sign(Λ_1(δ)) = s} · E |Λ_{n+1} ⋯ Λ_2(s)|^θ 1{sign(Λ_{n+1} ⋯ Λ_2(s)) = ε}
    = E |Λ_{n+1} ⋯ Λ_1(δ)|^θ 1{sign(Λ_{n+1} ⋯ Λ_1(δ)) = ε}.

In particular, the norm of P^n(θ) as an operator on (R², |·|_∞) equals

    ‖P^n(θ)‖_∞ = max_δ ( p^n_{δ+}(θ) + p^n_{δ−}(θ) ) = max_δ E[ |Λ_n ⋯ Λ_1(δ)|^θ ].

Hence

    ‖P^n(θ)‖_∞ ≤ E Lip(Λ_n ⋯ Λ_1)^θ ≤ 2 ‖P^n(θ)‖_∞,

and Gelfand's formula yields (38). □

For further discussion, we consider the three cases as introduced in Section 2 separately.
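Before turning to the cases, Lemma 4.1 lends itself to a numerical check. The sketch below (with hypothetical two-point laws for −A and +A, not taken from the text) assembles P(θ) from (8), computes ρ(θ) via (36), and confirms the Gelfand-type limit (38) along matrix powers.

```python
import math

def cramer_matrix(theta, law_minus, law_plus):
    """P(theta) from (8) for finitely supported laws of -A and +A,
    each given as a list of (value, probability) pairs."""
    def entry(law, value_sign):
        # E|A|^theta restricted to {A > 0} (value_sign = +1) or {A < 0} (value_sign = -1)
        return sum(p * abs(a) ** theta for a, p in law if a != 0 and (a > 0) == (value_sign > 0))
    return [[entry(law_minus, +1), entry(law_minus, -1)],   # row (p--, p-+)
            [entry(law_plus, -1), entry(law_plus, +1)]]     # row (p+-, p++)

def spectral_radius(m):
    """Dominant eigenvalue of a nonnegative 2x2 matrix, cf. (36)."""
    (a, b), (c, d) = m
    return 0.5 * (a + d) + math.sqrt(0.25 * (a - d) ** 2 + b * c)

def mat_mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

law_minus = [(2.0, 0.5), (-0.5, 0.5)]    # hypothetical law of -A
law_plus = [(1.5, 0.5), (-0.25, 0.5)]    # hypothetical law of +A
theta = 0.7

m = cramer_matrix(theta, law_minus, law_plus)
r = spectral_radius(m)

# Gelfand: (1/n) log ||P(theta)^n||_inf -> log rho(theta), in the spirit of (38)
p_n, n = m, 1
for _ in range(200):
    p_n, n = mat_mul(p_n, m), n + 1
norm = max(sum(row) for row in p_n)
print(abs(math.log(norm) / n - math.log(r)) < 1e-2)   # prints True
```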
Case 1: p−+ ∧ p+− > 0, i.e. P is irreducible.
Then there are uniquely determined left and right nonnegative eigenvectors u(θ), v(θ), respectively, satisfying

    u(θ)ᵀ v(θ) = 1 and u+(θ) + u−(θ) = 1.  (39)

Moreover, ρ(θ)⁻¹ P(θ) has dominant eigenvalue 1 with the same eigenvectors and is therefore a quasistochastic matrix in the sense of [4, p. 360]. As also shown there (see Section 2), it can be transformed into a proper stochastic matrix P̂(θ), namely

    P̂(θ) := ρ(θ)⁻¹ D(θ)⁻¹ P(θ) D(θ) = ( p_{δε}(θ) v_ε(θ) / (ρ(θ) v_δ(θ)) )_{δ,ε∈S}  (40)

with D(θ) := diag(v−(θ), v+(θ)). P̂(θ) is irreducible with unique stationary distribution

    π̂(θ) = D(θ) u(θ) = ( u−(θ) v−(θ), u+(θ) v+(θ) )ᵀ,  (41)

which may also be written as

    π̂(θ) = ( p+−(θ) v−(θ)², p−+(θ) v+(θ)² )ᵀ / ( p+−(θ) v−(θ)² + p−+(θ) v+(θ)² ).

Therefore

    u(θ) = ( p+−(θ) v−(θ), p−+(θ) v+(θ) )ᵀ / ( p+−(θ) v−(θ)² + p−+(θ) v+(θ)² ),

which in combination with u−(θ) + u+(θ) = 1 further entails

    p+−(θ) v−(θ)(1 − v−(θ)) + p−+(θ) v+(θ)(1 − v+(θ)) = 0.

By the ergodic theorem for positive recurrent Markov chains,

    lim_{n→∞} P̂(θ)ⁿ = ( π̂−(θ)  π̂+(θ) )   ( u−(θ)v−(θ)  u+(θ)v+(θ) )
                      ( π̂−(θ)  π̂+(θ) ) = ( u−(θ)v−(θ)  u+(θ)v+(θ) )  (42)

if P̂(θ) is aperiodic, while

    P̂(θ)ⁿ = I = ( 1 0 ; 0 1 ), if n is even, and P̂(θ)ⁿ = P̂(θ) = ( 0 1 ; 1 0 ), if n is odd,  (43)

for all n ≥ 1 if P̂(θ) has period 2 (that is, if p−− + p++ = 0), noting in passing that all P̂(θ) have the same period.
Now it follows that, with 1 := (1, 1)ᵀ,

    lim_{n→∞} (1/n) log( D(θ) P̂(θ)ⁿ D(θ)⁻¹ 1 ) = 0 (componentwise),

and this remains true with any other w = (w−, w+)ᵀ ∈ R²≥ \ {(0, 0)} instead of 1. Since ρ(θ)⁻ⁿ P(θ)ⁿ = D(θ) P̂(θ)ⁿ D(θ)⁻¹ by (40), we arrive at

    log ρ(θ) = lim_{n→∞} (1/n) log [P(θ)ⁿ w]_±  (44)

for each w = (w−, w+)ᵀ ∈ R²≥ \ {(0, 0)}, which will be utilized in the proof of the subsequent lemma.

Lemma 4.2
The function D ∋ θ ↦ log ρ(θ) is continuous, convex, and on (0, θ_∞) also smooth. Moreover,

    log ρ(θ) = lim_{n→∞} (1/n) log E_± |Λ_n ⋯ Λ_1(X_0)|^θ.  (45)

Proof.
Since the components of P(θ) are continuous functions on D and even smooth on the interior of this set, the same properties hold for log ρ(θ) because, by irreducibility, p+−(θ) p−+(θ) > 0 for all θ ∈ D and thus ρ(θ) > 0 by (36). The log-convexity of ρ(θ) is a direct consequence of (45) (see also [17, Prop. 1 and Cor. 2]). Finally, in order to obtain (45), we can argue as follows after the observation that

    P(θ) = ( p_{δε} E[ |δA|^θ | sign(δA)·δ = ε ] )_{δ,ε∈S}.

Let [B]_{i,j} denote the (i, j)-th component of a matrix B. Then, by Lemma 4.1,

    E_δ |Λ_n ⋯ Λ_1(X_0)|^θ = Σ_{s∈S} [P(θ)ⁿ]_{δ,s} = [P(θ)ⁿ 1]_δ = ρ(θ)ⁿ [D(θ) P̂(θ)ⁿ D(θ)⁻¹ 1]_δ

for any δ ∈ S which, by (44), yields (45). □

Case 2: p−+ > 0 and p+− = 0 [the case when p+− > 0 and p−+ = 0 can naturally be treated analogously].
Then P(θ) is upper triangular for any θ ∈ D with eigenvalues p−−(θ), p++(θ), giving ρ(θ) = p−−(θ) ∨ p++(θ). As a direct consequence, ρ(θ) is continuous and log-convex as the maximum of two such functions. It is also smooth at any θ with p−−(θ) ≠ p++(θ), but may not be so if p−−(θ) = p++(θ).

Case 2A: p++(θ) > p−−(θ).
Then the left and right eigenvectors u(θ), v(θ) satisfying (39) are

    u(θ) = e+ := (0, 1)ᵀ and v(θ) = ( p−+(θ)/(p++(θ) − p−−(θ)), 1 )ᵀ.  (46)

The matrix P̂(θ), defined by (40) and no longer irreducible, equals

    P̂(θ) = ( p−−(θ)/p++(θ)   1 − p−−(θ)/p++(θ) )
            ( 0                1                ),  (47)

with unique stationary distribution π̂(θ) = e+. Now it is readily checked that (42) as well as (45) from Lemma 4.2 remain valid.

Case 2B:
$p_{--}(\theta) > p_{++}(\theta)$.

Then the left and right eigenvectors $u(\theta), v(\theta)$ satisfying (39) are

$$u(\theta)\,=\,\bigg(\frac{p_{--}(\theta)-p_{++}(\theta)}{p_{--}(\theta)+p_{-+}(\theta)-p_{++}(\theta)}\,,\,\frac{p_{-+}(\theta)}{p_{--}(\theta)+p_{-+}(\theta)-p_{++}(\theta)}\bigg)^{\!\top}$$

and

$$v(\theta)\,=\,\bigg(\frac{p_{--}(\theta)+p_{-+}(\theta)-p_{++}(\theta)}{p_{--}(\theta)-p_{++}(\theta)}\,,\,0\bigg)^{\!\top},$$

but $\hat P(\theta)$ cannot be defined because $D(\theta)$ is not invertible. Furthermore, recalling (37),

$$\log\rho(\theta)\,=\,\lim_{n\to\infty}\frac1n\log p_{--}(\theta)^n\,=\,\lim_{n\to\infty}\frac1n\log E_{-}\big[|\Lambda_n\cdots\Lambda_1(X_0)|^{\theta}\,\mathbf{1}_{\{\xi_1=\cdots=\xi_n=-1\}}\big]\,=\,\lim_{n\to\infty}\frac1n\log E_{-}\big[|\Lambda_n\cdots\Lambda_1(X_0)|^{\theta}\,\mathbf{1}_{\{\xi_n=-1\}}\big] \quad (48)$$

holds instead of (45).

Case 2C. $p_{--}(\theta) = p_{++}(\theta)$.

This is the boundary case where the left and right eigenvectors equal $u(\theta) = e_+$ and $v(\theta) = e_- := (1,0)^{\top}$, respectively, and are thus orthogonal. Furthermore, (48) as well as

$$\log\rho(\theta)\,=\,\lim_{n\to\infty}\frac1n\log E_{+}\,|\Lambda_n\cdots\Lambda_1(X_0)|^{\theta} \quad (49)$$

hold true.

Let $\mathcal F$ be the $\sigma$-field generated by $(\xi_n, S_n)_{n\ge 0}$ and recall that $\zeta_1, \zeta_2,\ldots$ denote the increments of the $S_n$.

Case 1. $p_{-+}\wedge p_{+-} > 0$. Suppose that

$$\rho(\kappa)\,=\,1\quad\text{for some }\kappa\in(0,\theta_\infty] \quad (50)$$

and note that $\kappa$ is unique because $\rho(\theta)$ is convex, $\rho(0)\le 1$, and $\rho(\theta) < 1$ in a right neighborhood of $0$ (Lemma 11.1 in the Appendix). In particular, the monotonicity of $\rho'(\theta)$ for $\theta\in(0,\theta_\infty)$ entails $\rho'(\kappa) = \lim_{\theta\uparrow\kappa}\rho'(\theta) > 0$. The matrix $P(\kappa)$ is quasi-stochastic and the associated transformation $\hat P(\kappa)$, defined by

$$\hat P(\kappa)\,=\,\begin{pmatrix} p_{--}(\kappa) & p_{-+}(\kappa)\,\dfrac{v_+(\kappa)}{v_-(\kappa)}\\[1ex] p_{+-}(\kappa)\,\dfrac{v_-(\kappa)}{v_+(\kappa)} & p_{++}(\kappa)\end{pmatrix}\,=\,\bigg(\frac{v_\varepsilon(\kappa)}{v_\delta(\kappa)}\,E_{\delta}\big[e^{\kappa S_1}\mathbf{1}_{\{\xi_1=\varepsilon\}}\big]\bigg)_{\delta,\varepsilon\in S}$$

(see (40)), is an irreducible Markov transition matrix with unique stationary distribution $\hat\pi := \hat\pi(\kappa)$ given by (41). We can actually extend any $\hat P(\theta)$ as defined in (40) for $\theta\in D$ to a transition operator $Q_\theta$ on $S\times\mathbb{R}$ by setting

$$Q_\theta f(\delta,x)\,:=\,\frac{1}{v_\delta(\theta)\,\rho(\theta)}\,E_{\delta}\big[e^{\theta\zeta_1}\,v_{\xi_1}(\theta)\,f(\xi_1,\zeta_1)\big] \quad (51)$$

for bounded functions $f: S\times\mathbb{R}\to\mathbb{R}$. Hence, the conditional law of $(\xi_1,\zeta_1)$ depends only on $\delta$ but not on $x$.

Lemma 4.3
For $\theta\in D$ and $\delta\in S$, define the probability measure $P^{(\theta)}_\delta$ on $\mathcal F$ by

$$E^{(\theta)}_{\delta}\,f\big((\xi_1,\zeta_1),\ldots,(\xi_n,\zeta_n)\big)\,=\,\frac{E_{\delta}\big[e^{\theta S_n}\,v_{\xi_n}(\theta)\,f\big((\xi_1,\zeta_1),\ldots,(\xi_n,\zeta_n)\big)\big]}{v_\delta(\theta)\,\rho(\theta)^n}$$

for all $n\in\mathbb{N}$ and bounded measurable $f:(S\times\mathbb{R})^n\to\mathbb{R}$. Then the following holds true under each $P^{(\theta)}_\delta$:

(a) $(\xi_n,\zeta_n)_{n\ge1}$ is a Markov chain with transition operator $Q_\theta$ and $\xi_0=\delta$.
(b) $(\xi_n)_{n\ge0}$ is an irreducible Markov chain on $S$ with transition matrix $\hat P(\theta)$ and unique stationary distribution $\hat\pi(\theta)$.
(c) $(\xi_n,S_n)_{n\ge0}$ is a MRW with driving chain $(\xi_n)_{n\ge0}$ and $S_0=0$.

Proof. (a) It suffices to note that, for arbitrary bounded measurable $f, g$ with obvious domains and $n\in\mathbb{N}$ (writing $f(\ldots)$ for $f((\xi_1,\zeta_1),\ldots,(\xi_n,\zeta_n))$),

$$\begin{aligned}E^{(\theta)}_{\delta}\big[f(\ldots)\,g(\xi_{n+1},\zeta_{n+1})\big] &= \frac{E_{\delta}\big[e^{\theta S_n}f(\ldots)\,e^{\theta\zeta_{n+1}}\,v_{\xi_{n+1}}(\theta)\,g(\xi_{n+1},\zeta_{n+1})\big]}{v_\delta(\theta)\,\rho(\theta)^{n+1}}\\[0.5ex] &= \frac{E_{\delta}\big[e^{\theta S_n}f(\ldots)\,\rho(\theta)\,v_{\xi_n}(\theta)\,Q_\theta g(\xi_n,\zeta_n)\big]}{v_\delta(\theta)\,\rho(\theta)^{n+1}}\\[0.5ex]&=E^{(\theta)}_{\delta}\big[f(\ldots)\,Q_\theta g(\xi_n,\zeta_n)\big]\end{aligned}$$

holds true, implying

$$E^{(\theta)}_{\delta}\big[g(\xi_{n+1},\zeta_{n+1})\,\big|\,\mathcal F_n\big]\,=\,Q_\theta g(\xi_n,\zeta_n)\quad\text{a.s.}$$

for $\mathcal F_n := \sigma((\xi_k,\zeta_k),\,1\le k\le n)$. (b) is a direct consequence of (a). □

Recall that $\hat\pi(\theta) = \big(u_-(\theta)v_-(\theta),\,u_+(\theta)v_+(\theta)\big)^{\top}$ defined in (41) equals the stationary distribution of $\hat P(\theta)$ and thus of $(\xi_n)_{n\ge0}$ under $P^{(\theta)}_\delta$, $\delta\in S$.

Lemma 4.4
Under $P^{(\theta)}_{\hat\pi(\theta)}$, the MRW $(\xi_n,S_n)_{n\ge0}$ has drift

$$E^{(\theta)}_{\hat\pi(\theta)}S_1\,=\,\frac{\rho'(\theta)}{\rho(\theta)} \quad (52)$$

for any $\theta\in D$, and the drift is finite for $\theta\in(0,\theta_\infty)$, the interior of $D$. The drift under $P^{(\kappa)}_{\hat\pi(\kappa)}$ is positive, but possibly infinite if $\kappa=\theta_\infty$. In the latter case, (52) still holds with $\rho'(\kappa) := \lim_{\theta\uparrow\kappa}\rho'(\theta)$.

Proof. Recalling that $u(\theta)^{\top}v(\theta)=1$, thus $u(\theta)^{\top}P(\theta)v(\theta)=\rho(\theta)$ for $\theta\in D$, it follows by differentiation that, for $\theta\in(0,\theta_\infty)$,

$$\begin{aligned}\rho'(\theta)&=\frac{d}{d\theta}\big[u(\theta)^{\top}P(\theta)v(\theta)\big]\\&=u'(\theta)^{\top}P(\theta)v(\theta)+u(\theta)^{\top}P'(\theta)v(\theta)+u(\theta)^{\top}P(\theta)v'(\theta)\\&=\rho(\theta)\big(u'(\theta)^{\top}v(\theta)+u(\theta)^{\top}v'(\theta)\big)+u(\theta)^{\top}P'(\theta)v(\theta)\\&=\rho(\theta)\,\frac{d}{d\theta}\big[u(\theta)^{\top}v(\theta)\big]+u(\theta)^{\top}P'(\theta)v(\theta)\\&=u(\theta)^{\top}P'(\theta)v(\theta),\end{aligned}$$

where $P'(\theta)=(p'_{\delta\varepsilon}(\theta))_{\delta,\varepsilon\in S}$ denotes the (componentwise) derivative of $P(\theta)$. Since, on the other hand,

$$\begin{aligned}E^{(\theta)}_{\hat\pi(\theta)}S_1&=\frac{1}{\rho(\theta)}\sum_{\delta\in S}\frac{\hat\pi_\delta(\theta)}{v_\delta(\theta)}\,E_{\delta}\big[e^{\theta S_1}S_1\,v_{\xi_1}(\theta)\big]\\&=\frac{1}{\rho(\theta)}\sum_{\delta,\varepsilon\in S}u_\delta(\theta)\,E_{\delta}\big[e^{\theta S_1}S_1\,\mathbf{1}_{\{\xi_1=\varepsilon\}}\big]\,v_\varepsilon(\theta)\\&=\frac{1}{\rho(\theta)}\sum_{\delta,\varepsilon\in S}u_\delta(\theta)\,p'_{\delta\varepsilon}(\theta)\,v_\varepsilon(\theta)\\&=\frac{u(\theta)^{\top}P'(\theta)v(\theta)}{\rho(\theta)},\end{aligned}$$

we see that (52) holds. We have already pointed out above (see after (50)) that $\rho'(\kappa)=\lim_{\theta\uparrow\kappa}\rho'(\theta)>0$, and we add that $\rho'(\kappa)$ is finite if $\kappa<\theta_\infty$, but may be infinite if $\kappa$ equals the upper boundary of $D$. □

Remark 4.5
Note that $u(\kappa)=\hat\pi(\kappa)$ and $v(\kappa)=(1,1)^{\top}$ implies

$$\rho'(\kappa)\,=\,E^{(\kappa)}_{\hat\pi(\kappa)}S_1\,=\,\sum_{\delta,\varepsilon\in S}\hat\pi_\delta(\kappa)\,E_{\delta}\big[\log|{}_{\delta}A|\,|{}_{\delta}A|^{\kappa}\,\mathbf{1}_{\{\mathrm{sign}({}_{\delta}A)\delta=\varepsilon\}}\big]\,=\,\hat\pi_-(\kappa)\,E\,|{}_{-}A|^{\kappa}\log|{}_{-}A|\,+\,\hat\pi_+(\kappa)\,E\,|{}_{+}A|^{\kappa}\log|{}_{+}A| \quad (53)$$

and, since $\hat\pi(\kappa)$ has positive entries, we see that $E^{(\kappa)}_{\hat\pi(\kappa)}S_1<\infty$ holds iff

$$E\,|{}_{-}A|^{\kappa}\log|{}_{-}A|\,+\,E\,|{}_{+}A|^{\kappa}\log|{}_{+}A|\,<\,\infty. \quad (54)$$

Case 2. $p_{-+} > p_{+-} = 0$.

[Fig. 3: The two functions $p_{++}(\kappa)$ (blue) and $p_{--}(\kappa)$ (red) and the two possible constellations for $\kappa_-$ and $\kappa_+$ if both values exist.]

Recall that $P(\theta)$ is upper triangular and $\rho(\theta)=p_{--}(\theta)\vee p_{++}(\theta)$ for any positive $\theta\in D$.

Case 2A. $p_{++}(\theta) > p_{--}(\theta)$.

Then $\rho(\theta)=p_{++}(\theta)$ and the following lemma is almost identical with Lemma 4.3, the only difference being that $(\xi_n)_{n\ge0}$ is obviously no longer irreducible. It is therefore stated without proof.

Lemma 4.6
For positive $\theta\in D$ and $\delta\in S$, define the probability measure $P^{(\theta)}_\delta$ on $\mathcal F$ as in Lemma 4.3. Then the following holds under each $P^{(\theta)}_\delta$:

(a) Lemma 4.3(a) and (c) remain valid with $\hat P(\theta)$, $v(\theta)$ as stated in (46) and (47).
(b) State $-1$ is transient and $+1$ absorbing for $(\xi_n)_{n\ge0}$ on $S$.

The fact that state $+1$ is absorbing for $(\xi_n)_{n\ge0}$ entails that $(S_n)_{n\ge0}$ forms an ordinary random walk under $P^{(\theta)}_+$ and

$$E^{(\theta)}_{+}\,g(\zeta_1,\ldots,\zeta_n)\,=\,\frac{E_{+}\big[e^{\theta S_n}\,g(\zeta_1,\ldots,\zeta_n)\big]}{p_{++}(\theta)^n}$$

for any bounded measurable $g:\mathbb{R}^n\to\mathbb{R}$. Since $\hat\pi(\theta)=e_+$, the stationary drift of $(S_n)_{n\ge0}$ is given by

$$E^{(\theta)}_{\hat\pi(\theta)}S_1\,=\,E^{(\theta)}_{+}S_1\,=\,\frac{p'_{++}(\theta)}{p_{++}(\theta)} \quad (55)$$

and is thus positive if $p'_{++}(\theta)>0$.

Cases 2B and 2C. $p_{--}(\theta)\ge p_{++}(\theta)$.

In the remaining two subcases 2B and 2C, which can be treated together, the chain $\Xi$ is constant under each $P^{(\theta)}_\delta$ as defined below and thus $(S_n)_{n\ge0}$ is an ordinary random walk under these probability measures. The result is summarized in the subsequent lemma which we state again without proof.

Lemma 4.7
For positive $\theta\in D$ and $\delta\in S$, define $P^{(\theta)}_\delta$ on $\mathcal F$ by

$$E^{(\theta)}_{\delta}\,f\big((\xi_1,\zeta_1),\ldots,(\xi_n,\zeta_n)\big)\,=\,\frac{E_{\delta}\big[e^{\theta S_n}\,f\big((\xi_1,\zeta_1),\ldots,(\xi_n,\zeta_n)\big)\,\mathbf{1}_{\{\xi_n=\delta\}}\big]}{p_{\delta\delta}(\theta)^n}$$

for all $n\in\mathbb{N}$ and bounded measurable $f:(S\times\mathbb{R})^n\to\mathbb{R}$. Then the following holds true under each $P^{(\theta)}_\delta$:

(a) $\xi_n=\delta$ a.s. for all $n\ge0$.
(b) $(S_n)_{n\ge0}$ is an ordinary random walk with $S_0=0$ and drift

$$E^{(\theta)}_{\delta}S_1\,=\,\frac{p'_{\delta\delta}(\theta)}{p_{\delta\delta}(\theta)},$$

which equals $p'_{--}(\theta)$ and is positive if $\delta=-1$ and $p_{--}(\theta)=1$.

For the very last assertion, we note that $p_{--}(\theta)=1$, i.e. $\theta=\kappa_-$, does indeed entail $p'_{--}(\theta)>0$ because $p_{--}(0)<1$ and $p_{--}(\cdot)$ is a convex function on $D$. If $p_{--}(\theta)=p_{++}(\theta)=1$ (Case 2C), then $E^{(\theta)}_{+}S_1$ equals $p'_{++}(\theta)$ and is also positive. However, positivity may fail otherwise.

We are now ready to state the renewal theorems for the MRW $(\xi_n,S_n)_{n\ge0}$ with driving chain $\Xi=(\xi_n)_{n\ge0}$ needed to derive our main results stated in Section 2.

Case 1. $p_{-+}\wedge p_{+-}>0$ and $\rho(\kappa)=1$ for some $\kappa\in D$. By Lemma 4.3, $\Xi$ is an irreducible finite Markov chain under $P^{(\kappa)}_\delta$. Therefore, the subsequent lemma for $(\xi_n,S_n)_{n\ge0}$ follows from essentially any version of the Markov renewal theorem that has appeared in the literature, see e.g. [23, 28, 8, 2] and most recently [4], where it is derived probabilistically from the classical Blackwell theorem. Only the usual lattice-type assumption requires a little more care: Following Shurenkov [28], the MRW $(\xi_n,S_n)_{n\ge0}$ is called nonarithmetic with respect to the probability measures $P^{(\kappa)}_\delta$, $\delta\in S$, if

$$P^{(\kappa)}_{\hat\pi(\kappa)}\big(S_1\in g(\xi_0)-g(\xi_1)+d\,\mathbb{Z}\big)\,<\,1$$

for all $g:S\to\mathbb{R}$ and $d\in(0,\infty)$, where $d\,\mathbb{Z}:=\{0,\pm d,\pm 2d,\ldots\}$. Equivalently, see [2, Lemma 3.3], $P^{(\kappa)}_\delta(S_{\tau(\delta)}\in\cdot)$ is nonarithmetic in the usual sense for each $\delta\in S$, where

$$\tau(\delta)\,:=\,\inf\{n\ge1:\,\xi_n=\delta\}\quad\text{for }\delta\in S\cup\{0\}.$$

Since $S_1=\log|{}_{\xi_0}A|$, we see that, putting $a_\pm := g(\pm1)$, this amounts to a nonlattice condition on the conditional laws of $\log|{}_{\delta}A|$ given the sign transitions, which is the usual meaning of $(\xi_n,S_n)_{n\ge0}$ being nonarithmetic.

Lemma 4.8
Given the stated assumptions, suppose further that the MRW $(\xi_n,S_n)_{n\ge0}$ is nonarithmetic. Then

$$\sum_{n\ge0}E^{(\kappa)}_{\varepsilon}\,g(\xi_n,\,t-S_n)\;\xrightarrow{t\to\infty}\;\frac{1}{\rho'(\kappa)}\sum_{\delta\in S}\hat\pi_\delta(\kappa)\int_{\mathbb{R}}g(\delta,x)\,dx \quad (56)$$

for any $\varepsilon\in S$ and any measurable $g:S\times\mathbb{R}\to\mathbb{R}$ such that $x\mapsto g(\delta,x)$ is directly Riemann integrable (dRi) for each $\delta\in S$.

As a direct consequence of this lemma, we obtain a key renewal theorem for the MRW $(\xi_n,S_n)_{n\ge0}$ under the original probability measures $P_\varepsilon$.

Proposition 4.9
Under the assumptions of the previous lemma,

$$e^{\kappa t}\sum_{n\ge0}E_{\varepsilon}\,g(\xi_n,\,t-S_n)\;\xrightarrow{t\to\infty}\;\frac{v_\varepsilon(\kappa)}{\rho'(\kappa)}\sum_{\delta\in S}u_\delta(\kappa)\int_{\mathbb{R}}e^{\kappa x}g(\delta,x)\,dx \quad (57)$$

for any $\varepsilon\in S$ and any measurable $g:S\times\mathbb{R}\to\mathbb{R}$ such that $x\mapsto e^{\kappa x}g(\delta,x)$ is dRi for each $\delta\in S$.

Proof. In view of the previous lemma, (57) follows from

$$e^{\kappa t}\sum_{n\ge0}E_{\varepsilon}\,g(\xi_n,t-S_n)\,=\,\sum_{n\ge0}E_{\varepsilon}\bigg[\frac{e^{\kappa(t-S_n)}}{v_{\xi_n}(\kappa)}\,g(\xi_n,t-S_n)\,e^{\kappa S_n}v_{\xi_n}(\kappa)\bigg]\,=\,\sum_{n\ge0}v_\varepsilon(\kappa)\,E^{(\kappa)}_{\varepsilon}\bigg[\frac{e^{\kappa(t-S_n)}}{v_{\xi_n}(\kappa)}\,g(\xi_n,t-S_n)\bigg]\;\xrightarrow{t\to\infty}\;\frac{v_\varepsilon(\kappa)}{\rho'(\kappa)}\sum_{\delta\in S}\hat\pi_\delta(\kappa)\int_{\mathbb{R}}\frac{e^{\kappa x}}{v_\delta(\kappa)}\,g(\delta,x)\,dx$$

when recalling that $\hat\pi_\delta(\kappa)=u_\delta(\kappa)v_\delta(\kappa)$. □

Case 2. $p_{+-}=0<p_{-+}$.

Recall from before Thm. 2.2 that $\kappa_-$ and $\kappa_+$ are the unique positive numbers satisfying $p_{--}(\kappa_-)=1$ and $p_{++}(\kappa_+)=1$, respectively, provided that these numbers exist. Otherwise, we put $\kappa_\pm:=\infty$ but make the additional assumption that at least one of them is finite, thus

$$\kappa\,:=\,\kappa_-\wedge\kappa_+\,<\,\infty. \quad (58)$$

Case 2A. $p_{+-}=0<p_{-+}$, (58) holds and $p_{++}(\kappa)=1>p_{--}(\kappa)$.

In this case $\kappa=\kappa_+$ and $-1$ is a transient state for $\Xi$. Therefore, $\Xi$ will eventually reach $+1$ or $0$. The state $0$ is absorbing and $S_n=-\infty$ whenever $\xi_n=0$. Put $\tau_0:=\tau(0)\wedge\tau(1)$ and write $\tau$ as shorthand for $\tau(1)$. Defining

$$U_-(B)\,:=\,\sum_{n\ge0}P_-\big[S_n\in B,\,\xi_n=-1\big]\,=\,\sum_{n\ge0}p_{--}^{\,n}\,P_-\big[S_n\in B\,\big|\,\xi_n=-1\big]\,=\,E_{-}\Bigg[\sum_{n=0}^{\tau_0-1}\mathbf{1}_{\{S_n\in B\}}\Bigg]$$

for measurable $B\subset\mathbb{R}$, we obtain the renewal measure associated with the defective distribution $p_{--}\,P_-[S_1\in\cdot\,|\,\xi_1=-1]$. Moreover, $(S_{\tau+n})_{n\ge0}$ forms an ordinary random walk on $\mathbb{R}\cup\{-\infty\}$ under both $P_-$ and $P_+$, with initial value $S_\tau$ and increment distribution $P_+[\xi_1=1,\,S_1\in\cdot\,]=P_+[S_1\in\cdot\,]$. In particular,

$$\sum_{n\ge0}P_-\big[S_n\in B,\,\xi_n=+1\big]\,=\,E_{-}\Bigg[\sum_{n\ge\tau}\mathbf{1}_{\{S_n\in B\}}\Bigg]\,=\,E_{-}\big[U_+(B-S_\tau)\,\mathbf{1}_{\{\tau<\infty\}}\big],$$

where

$$U_+(B)\,:=\,E_{+}\Bigg[\sum_{n\ge0}\mathbf{1}_{\{S_n\in B\}}\Bigg].$$

After these observations, the subsequent result is proved by a standard combination of measure change with classical renewal theory. For an arbitrary function $g:\mathbb{R}\to\mathbb{R}$ and $\theta\in\mathbb{R}$, we put $g_\theta(x):=e^{\theta x}g(x)$ and stipulate as usual $e^{-\infty}:=0$. We further denote by $\mathcal R_\theta$ the space of functions $g:\mathbb{R}\to\mathbb{R}$ such that $g_\theta$ is dRi.

Proposition 4.10

Under the stated assumptions, the following assertions hold true (with $\kappa=\kappa_+$):

$$e^{\theta t}\,g*U_-(t)\,=\,e^{\theta t}\,E_{-}\Bigg[\sum_{n=0}^{\tau_0-1}g(t-S_n)\Bigg]\;\xrightarrow{|t|\to\infty}\;0 \quad (59)$$

for each $\theta>0$ such that $p_{--}(\theta)<1$ and $g\in\mathcal R_\theta$. If $\kappa_-<\infty$, $g\in\mathcal R_{\kappa_-}$ and $P_-[S_1\in\cdot\,|\,\xi_1=-1]$ is nonarithmetic, then in addition to (59)

$$e^{\kappa_- t}\,g*U_-(t)\;\xrightarrow{t\to\infty}\;\frac{1}{p'_{--}(\kappa_-)}\int_{\mathbb{R}}g_{\kappa_-}(x)\,dx. \quad (60)$$

Furthermore, if $S_1$ is nonarithmetic under $P_+$, then

$$e^{\kappa t}\,E_{-}\Bigg[\sum_{n\ge\tau}g(t-S_n)\Bigg]\;\xrightarrow{t\to\infty}\;\frac{p_{-+}(\kappa)}{(1-p_{--}(\kappa))\,p'_{++}(\kappa)}\int_{\mathbb{R}}g_\kappa(x)\,dx \quad (61)$$

and

$$e^{\kappa t}\,g*U_+(t)\,=\,e^{\kappa t}\,E_{+}\Bigg[\sum_{n\ge0}g(t-S_n)\Bigg]\;\xrightarrow{t\to\infty}\;\frac{1}{p'_{++}(\kappa)}\int_{\mathbb{R}}g_\kappa(x)\,dx \quad (62)$$

for any function $g\in\mathcal R_\kappa$.

Proof. Let $\theta>0$ be such that $p_{--}(\theta)<1$. With $P^{(\theta)}_-$ as defined in Lemma 4.6 and

$$U^{(\theta)}_-\,:=\,\sum_{n\ge0}p_{--}(\theta)^n\,P^{(\theta)}_-\big[S_1\in\cdot\,\big|\,\xi_1=-1\big]^{*n},$$

we obtain

$$e^{\theta t}\,g*U_-(t)\,=\,E_{-}\Bigg[\sum_{n\ge0}e^{\theta S_n}g_\theta(t-S_n)\,\mathbf{1}_{\{\xi_n=-1\}}\Bigg]\,=\,g_\theta*U^{(\theta)}_-(t),$$

and since $U^{(\theta)}_-$ is a defective and thus finite renewal measure, (59) follows for any $g\in\mathcal R_\theta$.

If $\kappa_-<\infty$, then $p_{--}(\kappa_-)=1$, thus $P^{(\kappa_-)}_-[S_1\in B]:=E_{-}\big[e^{\kappa_- S_1}\mathbf{1}_B(S_1)\,\mathbf{1}_{\{\xi_1=-1\}}\big]$ for measurable $B\subset\mathbb{R}$ defines a probability measure and

$$e^{\kappa_- t}\,g*U_-(t)\,=\,E_{-}\Bigg[\sum_{n=0}^{\infty}e^{\kappa_- S_n}g_{\kappa_-}(t-S_n)\,\mathbf{1}_{\{\xi_n=-1\}}\Bigg]\,=\,g_{\kappa_-}*U^{(\kappa_-)}_-(t),\quad\text{where}\quad U^{(\kappa_-)}_-:=\sum_{n\ge0}P^{(\kappa_-)}_-[S_1\in\cdot\,]^{*n}.$$

Hence, (60) follows by another appeal to the key renewal theorem.

Turning to (61) and (62), which can be shown together, we note that

$$e^{\kappa t}\,E_{-}\Bigg[\sum_{n\ge\tau}g(t-S_n)\Bigg]\,=\,E_{-}\Bigg[\sum_{n\ge\tau}e^{\kappa S_n}g_\kappa(t-S_n)\Bigg]\,=\,E_{-}\Big[e^{\kappa S_\tau}\,g_\kappa*U^{(\kappa)}_+(t-S_\tau)\,\mathbf{1}_{\{\tau<\infty\}}\Big],$$

where

$$U^{(\kappa)}_+\,:=\,\sum_{n\ge0}P^{(\kappa)}_+\big[S_n\in\cdot\,,\,\xi_n=1\big]\,=\,\sum_{n\ge0}P^{(\kappa)}_+\big[S_n\in\cdot\,\big]\,=\,\sum_{n\ge0}P^{(\kappa)}_+\big[S_1\in\cdot\,\big]^{*n}$$

is an ordinary renewal measure of a random walk with nonarithmetic increment distribution $P^{(\kappa)}_+[S_1\in\cdot\,]$ and positive drift $p'_{++}(\kappa)$. We also note that $g_\kappa*U^{(\kappa)}_+(t)=e^{\kappa t}\,g*U_+(t)$. Hence, if $g_\kappa$ is dRi, then $g_\kappa*U^{(\kappa)}_+(t)$ is bounded and converges to the limit stated in (62) by the key renewal theorem. Further, it then follows by the dominated convergence theorem that

$$E_{-}\Big[e^{\kappa S_\tau}\,g_\kappa*U^{(\kappa)}_+(t-S_\tau)\,\mathbf{1}_{\{\tau<\infty\}}\Big]\;\xrightarrow{t\to\infty}\;E_{-}\big[e^{\kappa S_\tau}\mathbf{1}_{\{\tau<\infty\}}\big]\cdot\frac{1}{p'_{++}(\kappa)}\int_{\mathbb{R}}g_\kappa(x)\,dx$$

and thereby (61) if

$$E_{-}\big[e^{\kappa S_\tau}\mathbf{1}_{\{\tau<\infty\}}\big]\,=\,\frac{p_{-+}(\kappa)}{1-p_{--}(\kappa)}.$$

But this follows from

$$E_{-}\big[e^{\kappa S_\tau}\mathbf{1}_{\{\tau<\infty\}}\big]\,=\,\sum_{n\ge1}E_{-}\big[e^{\kappa S_n}\mathbf{1}_{\{\tau=n\}}\big]\,=\,\sum_{n\ge1}p_{--}(\kappa)^{n-1}\,p_{-+}(\kappa)\,=\,\frac{p_{-+}(\kappa)}{1-p_{--}(\kappa)},$$

which completes the proof. □

Case 2B.
$p_{+-}=0<p_{-+}$, (58) holds and $p_{--}(\kappa)=1>p_{++}(\kappa)$.

In this case $\kappa=\kappa_-$ and $(S_n)_{n\ge0}$ forms an ordinary random walk under each $P^{(\kappa)}_\delta$ because $P^{(\kappa)}_\delta[\xi_n=\delta\text{ for all }n\ge0]=1$ (Lemma 4.7). With $U_-,U_+$ as defined before, the following result holds and is again shown by standard renewal-theoretic arguments. In order to state it, we need to define

$$U^{+}_-(B)\,:=\,\sum_{n\ge1}P_-\big[\xi_1=1,\,S_n\in B\big]\,=\,E_{-}\big[U_+(B-S_1)\,\mathbf{1}_{\{\xi_1=1\}}\big]$$

for measurable $B\subset\mathbb{R}$. For later use (see the proof of Thm. 2.2), we also point out that

$$\int_{\mathbb{R}}e^{\kappa x}\,U^{+}_-(dx)\,=\,\sum_{n\ge1}E_{-}\big[\mathbf{1}_{\{\xi_1=1\}}e^{\kappa S_n}\big]\,=\,E_{-}\big[\mathbf{1}_{\{\xi_1=1\}}e^{\kappa S_1}\big]\sum_{n\ge1}E_{+}\,e^{\kappa S_{n-1}}\,=\,\frac{p_{-+}(\kappa)}{1-p_{++}(\kappa)}. \quad (63)$$

Hence, it is finite iff $p_{-+}(\kappa)<\infty$, as $p_{++}(\kappa)<p_{--}(\kappa)=1$.

Proposition 4.11
Under the stated assumptions, the following assertions hold true for any function $g:\mathbb{R}\to\mathbb{R}$ such that $g\in\mathcal R_\kappa$ (with $\kappa=\kappa_-$): If $S_1$ is nonarithmetic under $P_-$, then

$$e^{\kappa t}\,g*U_-(t)\;\xrightarrow{t\to\infty}\;\frac{1}{p'_{--}(\kappa)}\int_{\mathbb{R}}g_\kappa(x)\,dx. \quad (64)$$

Moreover,

$$e^{\theta t}\,g*U_+(t)\,=\,e^{\theta t}\,E_{+}\Bigg[\sum_{n\ge0}g(t-S_n)\Bigg]\;\xrightarrow{|t|\to\infty}\;0 \quad (65)$$

for each $\theta>0$ such that $p_{++}(\theta)<1$, $p_{-+}(\theta)<\infty$ and $g\in\mathcal R_\theta$. If $p_{++}(\theta)<1$ and $p_{-+}(\theta)<\infty$ hold for some $\theta\in(\kappa_-,\kappa_+)$ and $g\in\mathcal R_\kappa$, then $g*U^{+}_-\in\mathcal R_\kappa$ and

$$e^{\kappa t}\,E_{-}\big[g*U_+(t-S_\tau)\,\mathbf{1}_{\{\tau<\infty\}}\big]\,=\,e^{\kappa t}\,E_{-}\Bigg[\sum_{n\ge0}g(t-S_{\tau+n})\,\mathbf{1}_{\{\tau<\infty\}}\Bigg]\;\xrightarrow{t\to\infty}\;\frac{1}{p'_{--}(\kappa)}\int_{\mathbb{R}}\big(g*U^{+}_-\big)_\kappa(x)\,dx. \quad (66)$$

Finally, if $\kappa_+<\infty$, $P_+[S_1\in\cdot\,|\,\xi_1=1]$ is nonarithmetic and $g\in\mathcal R_{\kappa_+}$, then

$$e^{\kappa_+ t}\,g*U_+(t)\;\xrightarrow{t\to\infty}\;\frac{1}{p'_{++}(\kappa_+)}\int_{\mathbb{R}}g_{\kappa_+}(x)\,dx \quad (67)$$

holds in addition to (65).

Proof. In view of the proof of Prop. 4.10, only (66) needs our attention. Without loss of generality, let $g$ be nonnegative. Note also that $g$ is $\lambda$-almost everywhere continuous ($\lambda$ = Lebesgue measure) because $g_\kappa$ is dRi. For each $t\in\mathbb{R}$, we have

$$\begin{aligned}e^{\kappa t}\,E_{-}\big[g*U_+(t-S_\tau)\,\mathbf{1}_{\{\tau<\infty\}}\big]&=e^{\kappa t}\,E_{-}\Bigg[\sum_{n\ge0}g(t-S_{\tau+n})\,\mathbf{1}_{\{\tau<\infty\}}\Bigg]\\&=e^{\kappa t}\sum_{k\ge0}E_{-}\big[\mathbf{1}_{\{\xi_k=-1,\,\xi_{k+1}=1\}}\,g*U_+(t-S_k-\zeta_{k+1})\big]\\&=e^{\kappa t}\sum_{k\ge0}E_{-}\big[\mathbf{1}_{\{\tau>k\}}\,g*U^{+}_-(t-S_k)\big]\\&=e^{\kappa t}\,E_{-}\Bigg[\sum_{k=0}^{\tau-1}g*U^{+}_-(t-S_k)\Bigg]\,=\,\big(g*U^{+}_-\big)_\kappa*U^{(\kappa)}_-(t).\end{aligned}$$

Hence, (66) follows by the key renewal theorem if we can verify that $(g*U^{+}_-)_\kappa$ is dRi. To this end, pick $\varepsilon>0$ such that $\kappa+\varepsilon\le\theta$, thus $p_{++}(\kappa+\varepsilon)<1$ and

$$p_{-+}(\kappa+\varepsilon)\,=\,E_{-}\big[\mathbf{1}_{\{\xi_1=1\}}\,e^{(\kappa+\varepsilon)S_1}\big]\,<\,\infty.$$

Since $g*U^{+}_-(t)=E_{-}\big[g*U_+(t-S_1)\,\mathbf{1}_{\{\xi_1=1\}}\big]$ for $t\in\mathbb{R}$, we infer with the help of (65) that

$$\big(g*U^{+}_-\big)_\kappa(t)\,=\,e^{\kappa t}\,g*U^{+}_-(t)\,\le\,C\,e^{\kappa t}\,E_{-}\big[e^{-(\kappa\pm\varepsilon)(t-S_1)}\,\mathbf{1}_{\{\xi_1=1\}}\big]\,\le\,C\,\big(p_{-+}(\kappa-\varepsilon)\vee p_{-+}(\kappa+\varepsilon)\big)\,e^{-\varepsilon|t|}$$

for some $C\in\mathbb{R}_{>0}$, and this in combination with the $\lambda$-almost everywhere continuity of $g$ mentioned above yields the desired result. □

For $\phi\in\mathcal C^*(\mathbb{R})$, let as usual $\|\phi\|_\infty$ be its supremum norm and $K_\phi$ the maximal positive value such that

$$|\phi(x)-\phi(y)|\,\le\,\mathrm{Lip}(\phi)\,|x-y|\,\mathbf{1}_{[K_\phi,\infty)}(|x|\vee|y|) \quad (68)$$

for all $x,y\in\mathbb{R}$. Note that, in addition to (68), we have

$$|\phi(x)-\phi(y)|\,\le\,2\,\|\phi\|_\infty\,\mathbf{1}_{[K_\phi,\infty)}(|x|\vee|y|) \quad (69)$$

for all $x,y\in\mathbb{R}$. Given a stationary distribution $\nu$ of $(X_n)_{n\ge0}$, let $R$ be a generic random variable with this law, independent of all other occurring random variables, notably $\Psi$, $\Lambda$, ${}_{-}A$, ${}_{+}A$ and $B$.

The following two lemmata do not require the assumption $p_{-+}\wedge p_{+-}>0$.

Lemma 5.1
Assuming $\rho(\theta)<1$ and $E B^{\theta}<\infty$ for all $\theta\in(0,\kappa)$, the random variable $R$ with law $\nu$ satisfies $E|R|^{\theta}<\infty$ for all $0<\theta<\kappa$.

Proof. With $\hat Y_n$ as defined in (29) for $n\in\mathbb{N}$ and

$$\hat Y_\infty\,:=\,B_1+\sum_{n\ge1}\mathrm{Lip}(\Lambda_1\cdots\Lambda_n)\,B_{n+1},$$

Lemma 3.1 provides us with

$$P[|R|>t]\,=\,P[|\Psi_1\cdots\Psi_n(R)|>t]\,\le\,P\big[\hat Y_n+|\Lambda_1\cdots\Lambda_n(R)|>t\big]\;\xrightarrow{n\to\infty}\;P[\hat Y_\infty>t]$$

for all $t\ge0$, where $\Lambda_1\cdots\Lambda_n(R)\to0$ a.s. because

$$\frac1n\log\big\|\mathrm{Lip}(\Lambda_1\cdots\Lambda_n)\big\|_\theta\;\xrightarrow{n\to\infty}\;\frac{\log\rho(\theta)}{\theta\vee1}\,<\,0 \quad (70)$$

for $\theta\in(0,\kappa)$. Hence, it suffices to show $E\hat Y_\infty^{\theta}<\infty$ for $\theta\in(0,\kappa)$. Putting $\|X\|_\theta:=E|X|^{\theta}$ for $0<\theta\le1$ and $\|X\|_\theta:=(E|X|^{\theta})^{1/\theta}$ for $\theta\ge1$, we find

$$\|\hat Y_\infty\|_\theta\,\le\,\|B\|_\theta\sum_{n\ge0}\Big(e^{\log\|\mathrm{Lip}(\Lambda_1\cdots\Lambda_n)\|_\theta/n}\Big)^{n}\,<\,\infty, \quad (71)$$

where (70) has once again been utilized. □

The following lemma about the direct Riemann integrability of certain functions appearing in the proofs of our main results is crucial and formulated in such a way that it can be used in any of these proofs, there for $\kappa=\kappa_-\wedge\kappa_+$ as one should expect. We also note that the moment condition on $R$ is guaranteed by Lemma 5.1.

Lemma 5.2
Let $R$ be as stated before Lemma 5.1 and $\kappa>0$ such that

$$E\,|{}_{\pm}A|^{\kappa}<\infty,\qquad E\,|B|^{\kappa}<\infty\qquad\text{and}\qquad E\,|R|^{\theta}<\infty\ \text{ for }\theta\in(0,\kappa). \quad (72)$$

Further defining

$$h^{(\pm)}_{\phi}(x)\,:=\,E\big[\phi\big(\delta e^{-x}|\Psi(R)|\big)\,\mathbf{1}_{\{\pm\Psi(R)>0\}}-\phi\big(\delta e^{-x}|\Lambda(R)|\big)\,\mathbf{1}_{\{\pm\Lambda(R)>0\}}\big]$$

for $x\in\mathbb{R}$ and any $\phi\in\mathcal C^*_\delta(\mathbb{R})$, $\delta\in S$, the function $\hat h^{(\varepsilon)}_{\phi}(x):=e^{\kappa x}h^{(\varepsilon)}_{\phi}(x)$ is dRi, i.e. $h^{(\varepsilon)}_{\phi}\in\mathcal R_\kappa$ for each $\varepsilon\in S$. Furthermore, the function

$$h_{\varphi}(x)\,:=\,E\big[\,\big|\varphi\big(e^{-x}\Psi(R)\big)-\varphi\big(e^{-x}\Lambda(R)\big)\big|\,\big]$$

is also in $\mathcal R_\kappa$ for any $\varphi\in\mathcal C^*(\mathbb{R})$.

Proof. Define $\varphi(t)=\phi(\delta|t|)\,\mathbf{1}_{\{\varepsilon t>0\}}$ for $\varepsilon=\pm1$ and

$$h_{\varphi}(x)\,:=\,E\big[\varphi\big(e^{-x}\Psi(R)\big)-\varphi\big(e^{-x}\Lambda(R)\big)\big].$$

Then $h_\varphi=h^{(\varepsilon)}_\phi$ because $\Psi(R)$ and $e^{-x}\Psi(R)$ (resp. $\Lambda(R)$ and $e^{-x}\Lambda(R)$) have the same sign, and it suffices to show that, for any Lipschitz function $\varphi$ in $\mathcal C^*(\mathbb{R})$,

$$\sum_{n\in\mathbb{Z}}\ \sup_{n\le x<n+1}e^{\kappa x}\,|h_\varphi(x)|\,<\,\infty.$$

If $\kappa<1$, then $E B^{\kappa}<\infty$ is guaranteed by (72). For the case $\kappa=1$, we point out that

$$E\big[B\log(M/B)\big]\,\le\,E\big[B\log\big(1+|\Lambda(R)|/B\big)\big]\,\le\,E\big[B\log\big(1+\mathrm{Lip}(\Lambda)\,|R|/B\big)\big]\,\le\,E\big[B\log\big(1+\mathrm{Lip}(\Lambda)/B\big)\big]+E B\cdot E\log(1+|R|),$$

where $E\log(1+|R|)<\infty$ by another appeal to (72). Left with the first expectation in the previous display, the inequality

$$x\log\Big(\frac{y}{x}\Big)\,=\,y\,\Big[\frac{x}{y}\log\Big(\frac{y}{x}\Big)\Big]\,\le\,y\quad\text{for }0<x<y,$$

combined with $\mathrm{Lip}(\Lambda)=|{}_{-}A|\vee|{}_{+}A|$ and (72), provides us with

$$E\big[B\log(1+\mathrm{Lip}(\Lambda)/B)\big]\,\le\,(\log2)\,E B+E\big[B\log(1+\mathrm{Lip}(\Lambda)/B)\,\mathbf{1}_{\{B<\mathrm{Lip}(\Lambda)\}}\big]\,\le\,(1+\log2)\,E B+E\,\mathrm{Lip}(\Lambda)\,<\,\infty.$$

Finally, if $\kappa>1$, note first that $E\big[B\,(M\vee1)^{\kappa-1}\big]$ is bounded by a constant times

$$E\big[B\,\mathrm{Lip}(\Lambda)^{\kappa-1}\big]\,E|R|^{\kappa-1}+E B^{\kappa}.$$

Use Hölder's inequality to infer

$$E\big[B\,\mathrm{Lip}(\Lambda)^{\kappa-1}\big]\,\le\,\big(E B^{\kappa}\big)^{1/\kappa}\,\big(E\,\mathrm{Lip}(\Lambda)^{\kappa}\big)^{(\kappa-1)/\kappa}.$$

Now $E\big[B\,(M\vee1)^{\kappa-1}\big]<\infty$ follows again by (72). □

Proof (of Thm. 2.1)
As (15) is an almost immediate consequence of (17), it suffices to prove the latter and identity (16).

Let $R$ be as in the previous lemma and independent of the i.i.d. random variables $(\Psi,\Lambda),(\Psi_1,\Lambda_1),(\Psi_2,\Lambda_2),\ldots$ Observe that $(\xi_n,S_n)_{n\ge0}$ and $\Psi(R),\Lambda(R)$ are conditionally independent given $\xi_0$, thus

$$\begin{aligned}P\big[\big(\mathrm{sign}(\Lambda_n\cdots\Lambda_1(x)),\,\log|\Lambda_n\cdots\Lambda_1(x)|\big)_{n\ge1}\in\cdot\,\big]&=P_{\mathrm{sign}(x)}\big[(\xi_n,\,S_n+\log|x|)_{n\ge1}\in\cdot\,\big]\\&=P\big[(\xi_n,\,S_n+\log|x|)_{n\ge1}\in\cdot\,\big|\,\xi_0=\mathrm{sign}(x),\,\Psi(R),\,\Lambda(R)\big]\end{aligned}$$

for each $x\ne0$. Defining

$$\phi\star\nu(t)\,:=\,\int\phi(e^{-t}x)\,\nu(dx)\qquad\text{and}\qquad\phi_\delta(t)\,:=\,\phi(t)\,\mathbf{1}_{\mathbb{R}_{>0}}(\delta t)\ \in\ \mathcal C^*_\delta(\mathbb{R})$$

for $\delta\in S$, we have

$$\phi\star\nu(t)\,=\,\phi_-\star\nu(t)+\phi_+\star\nu(t) \quad (76)$$

and will prove that $\rho(\theta)<1$ for $\theta\in(0,\kappa)$ and $E B^{\kappa}<\infty$ are enough to infer

$$\phi_\delta\star\nu(t)\,=\,\sum_{\varepsilon\in S}\sum_{n\ge0}E_{\varepsilon}\big[h^{(\varepsilon)}_{\phi_\delta}(t-S_n)\,\mathbf{1}_{\{\xi_n=\delta\}}\big] \quad (77)$$

for all $t\in\mathbb{R}$ and $\delta\in S$. In particular, irreducibility ($p_{+-}\wedge p_{-+}>0$) is not required, a fact we will take advantage of later when dealing with the other cases.

Note that, by Lemmata 4.1 and 5.1,

$$\big(E|\Lambda_n\cdots\Lambda_1(R)|^{\theta}\big)^{1/n}\,\le\,\big(E\,\mathrm{Lip}(\Lambda_n\cdots\Lambda_1)^{\theta}\,E|R|^{\theta}\big)^{1/n}\;\xrightarrow{n\to\infty}\;\rho(\theta)\,<\,1$$

for $\theta\in(0,\kappa)$. Hence, for $\phi\in\mathcal C^*(\mathbb{R})$ and with $C$ such that $\phi(x)\le C|x|^{\theta}$,

$$E\,\phi\big(e^{-t}\Lambda_n\cdots\Lambda_1(R)\big)\,\le\,C e^{-\theta t}\,E|\Lambda_n\cdots\Lambda_1(R)|^{\theta}\;\xrightarrow{n\to\infty}\;0,$$

and then

$$\begin{aligned}\phi_\delta\star\nu(t)&=\sum_{j=0}^{n-1}\Big[E\,\phi_\delta\big(e^{-t}\Lambda_j\cdots\Lambda_1(R)\big)-E\,\phi_\delta\big(e^{-t}\Lambda_{j+1}\cdots\Lambda_1(R)\big)\Big]+E\,\phi_\delta\big(e^{-t}\Lambda_n\cdots\Lambda_1(R)\big)\\&=\sum_{j=0}^{n-1}E\Big[\phi_\delta\big(e^{-t}\Lambda_j\cdots\Lambda_1(\Psi(R))\big)-\phi_\delta\big(e^{-t}\Lambda_j\cdots\Lambda_1(\Lambda(R))\big)\Big]+o(1) \quad (78)\end{aligned}$$

as $n\to\infty$. Moreover,

$$\begin{aligned}E\big[\phi_\delta\big(e^{-t}\Lambda_j\cdots\Lambda_1(x)\big)\big]&=E_{\mathrm{sign}(x)}\Big[\phi_\delta\big(\xi_j\,e^{-(t-S_j)}|x|\big)\Big]=E_{\mathrm{sign}(x)}\Big[\phi_\delta\big(\delta\,e^{-(t-S_j)}|x|\big)\,\mathbf{1}_{\{\xi_j=\delta\}}\Big]\\&=E_{+}\Big[\phi_\delta\big(\delta e^{-(t-S_j)}|x|\big)\,\mathbf{1}_{\{\xi_j=\delta\}}\Big]\,\mathbf{1}_{\{x>0\}}+E_{-}\Big[\phi_\delta\big(\delta e^{-(t-S_j)}|x|\big)\,\mathbf{1}_{\{\xi_j=\delta\}}\Big]\,\mathbf{1}_{\{x<0\}}\end{aligned}$$

for all $x\ne0$, hence

$$E\Big[\phi_\delta\big(e^{-t}\Lambda_j\cdots\Lambda_1(\Psi(R))\big)-\phi_\delta\big(e^{-t}\Lambda_j\cdots\Lambda_1(\Lambda(R))\big)\Big]\,=\,E_{+}\big[h^{(+)}_{\phi_\delta}(t-S_j)\,\mathbf{1}_{\{\xi_j=\delta\}}\big]+E_{-}\big[h^{(-)}_{\phi_\delta}(t-S_j)\,\mathbf{1}_{\{\xi_j=\delta\}}\big]$$

for each $\delta\in S$ and $j\in\mathbb{N}_0$. By combining this with (78), we obtain (77).

Since, by Lemma 5.2, the $\hat h^{(\varepsilon)}_{\phi_\delta}(t):=e^{\kappa t}h^{(\varepsilon)}_{\phi_\delta}(t)$ are dRi for $\delta,\varepsilon\in S$, we infer with the help of Prop. 4.9 that

$$e^{\kappa t}\,\phi\star\nu(t)\;\xrightarrow{t\to\infty}\;\Theta(\phi_+)+\Theta(\phi_-),$$

where

$$\Theta(\phi_\delta)\,:=\,\frac{u_\delta(\kappa)\sum_{\varepsilon\in S}v_\varepsilon(\kappa)\int_{-\infty}^{\infty}\hat h^{(\varepsilon)}_{\phi_\delta}(x)\,dx}{\hat\pi_-(\kappa)\,E|{}_{-}A|^{\kappa}\log|{}_{-}A|+\hat\pi_+(\kappa)\,E|{}_{+}A|^{\kappa}\log|{}_{+}A|}$$

and, using two substitutions and Fubini's theorem (absolute integrability is guaranteed by Lemma 5.2),

$$\begin{aligned}\int_{-\infty}^{\infty}\hat h^{(\varepsilon)}_{\phi_\delta}(x)\,dx&=\int_{-\infty}^{\infty}e^{\kappa x}\,E\big[\phi_\delta(\delta e^{-x}|\Psi(R)|)\,\mathbf{1}_{\{\varepsilon\Psi(R)>0\}}-\phi_\delta(\delta e^{-x}|\Lambda(R)|)\,\mathbf{1}_{\{\varepsilon\Lambda(R)>0\}}\big]\,dx\\&=\int_{0}^{\infty}\frac{1}{x^{\kappa+1}}\,E\big[\phi_\delta(\delta|\Psi(R)|x)\,\mathbf{1}_{\{\varepsilon\Psi(R)>0\}}-\phi_\delta(\delta|\Lambda(R)|x)\,\mathbf{1}_{\{\varepsilon\Lambda(R)>0\}}\big]\,dx\\&=E\Big[|\Psi(R)|^{\kappa}\mathbf{1}_{\{\varepsilon\Psi(R)>0\}}-|\Lambda(R)|^{\kappa}\mathbf{1}_{\{\varepsilon\Lambda(R)>0\}}\Big]\int_{0}^{\infty}\frac{\phi_\delta(\delta x)}{x^{\kappa+1}}\,dx.\end{aligned}$$

This yields $\Theta(\phi_+)=C_+\int_0^\infty \phi(x)\,x^{-(\kappa+1)}\,dx$ with

$$C_+\,=\,\frac{u_+(\kappa)\sum_{\varepsilon}v_\varepsilon(\kappa)\,E\big[|\Psi(R)|^{\kappa}\mathbf{1}_{\{\varepsilon\Psi(R)>0\}}-|\Lambda(R)|^{\kappa}\mathbf{1}_{\{\varepsilon\Lambda(R)>0\}}\big]}{\hat\pi_-(\kappa)\,E|{}_{-}A|^{\kappa}\log|{}_{-}A|+\hat\pi_+(\kappa)\,E|{}_{+}A|^{\kappa}\log|{}_{+}A|} \quad (79)$$

and accordingly $\Theta(\phi_-)=C_-\int_0^\infty\phi(-x)\,x^{-(\kappa+1)}\,dx$ with

$$C_-\,=\,\frac{u_-(\kappa)\sum_{\varepsilon}v_\varepsilon(\kappa)\,E\big[|\Psi(R)|^{\kappa}\mathbf{1}_{\{\varepsilon\Psi(R)>0\}}-|\Lambda(R)|^{\kappa}\mathbf{1}_{\{\varepsilon\Lambda(R)>0\}}\big]}{\hat\pi_-(\kappa)\,E|{}_{-}A|^{\kappa}\log|{}_{-}A|+\hat\pi_+(\kappa)\,E|{}_{+}A|^{\kappa}\log|{}_{+}A|}. \quad (80)$$

This completes the proof of (17), and relation (16) is now a direct consequence of the two formulae for $C_-$ and $C_+$. □

Let $\phi\in\mathcal C^*(\mathbb{R})$. Being in the unilateral case, $p_{+-}=0<p_{-+}$ entails that $P_+[\xi_n=-1]=0$ for all $n\ge1$, and the decomposition of $\phi\star\nu$ simplifies to

$$\phi\star\nu(t)\,=\,h^{(-)}_{\phi_-}*U_-(t)+E_{-}\Bigg[\sum_{n\ge\tau}h^{(-)}_{\phi_+}(t-S_n)\Bigg]+h^{(+)}_{\phi_+}*U_+(t) \quad (81)$$

with $\tau=\tau(1)$ and $U_-,U_+$ as defined before Prop. 4.10. In particular,

$$\phi\star\nu(t)\,=\,h^{(-)}_{\phi}*U_-(t) \quad (82)$$

if $\phi\in\mathcal C^*_-(\mathbb{R})$ and thus $\phi=\phi_-$. In order to prove Thm. 2.2, we will use the above decomposition (81) and determine asymptotics for its terms on the right-hand side with the help of Props. 4.10 and 4.11 in combination with Lemmata 5.1 and 5.2, which ensure the direct Riemann integrability of the functions $\hat h^{(\varepsilon)}_{\phi_\delta}(t)=e^{\kappa t}h^{(\varepsilon)}_{\phi_\delta}(t)$ for $\delta,\varepsilon\in S$. As usual, let $\kappa=\kappa_-\wedge\kappa_+$; in particular, $\rho(\kappa)=1$ and $\rho(\theta)<1$ for $\theta\in[0,\kappa)$.

(b) Here $\kappa=\kappa_+$. Given any $\phi\in\mathcal C^*(\mathbb{R})$, we use (81). By Prop. 4.10, the first term on the right-hand side is of the order $o(e^{-\kappa_+t})$ as $t\to\infty$, while $e^{\kappa_+t}$ times the second term converges to $C^{(1)}_+\int_0^\infty x^{-(\kappa_++1)}\phi(x)\,dx$ with

$$C^{(1)}_+\,:=\,\frac{p_{-+}(\kappa_+)\,E\big[|\Psi(R)|^{\kappa_+}\mathbf{1}_{\{\Psi(R)<0\}}-|\Lambda(R)|^{\kappa_+}\mathbf{1}_{\{\Lambda(R)<0\}}\big]}{(1-p_{--}(\kappa_+))\,p'_{++}(\kappa_+)}, \quad (83)$$

and $e^{\kappa_+t}$ times the last one converges to $C^{(2)}_+\int_0^\infty x^{-(\kappa_++1)}\phi(x)\,dx$ with

$$C^{(2)}_+\,:=\,\frac{E\big[|\Psi(R)|^{\kappa_+}\mathbf{1}_{\{\Psi(R)>0\}}-|\Lambda(R)|^{\kappa_+}\mathbf{1}_{\{\Lambda(R)>0\}}\big]}{p'_{++}(\kappa_+)}. \quad (84)$$

Consequently, (23) holds with

$$C_+\,=\,C^{(1)}_++C^{(2)}_+. \quad (85)$$

(c) Again, pick any $\phi\in\mathcal C^*(\mathbb{R})$ and note that $\kappa=\kappa_-$. In this case, $e^{\kappa t}$ times the first term on the right-hand side in (81) converges to $C_-\int_0^\infty x^{-(\kappa+1)}\phi(-x)\,dx$ with $C_-$ given by

$$C_-\,=\,\frac{E\big[|\Psi(R)|^{\kappa_-}\mathbf{1}_{\{\Psi(R)<0\}}-|\Lambda(R)|^{\kappa_-}\mathbf{1}_{\{\Lambda(R)<0\}}\big]}{p'_{--}(\kappa_-)}. \quad (86)$$

By (65) of Prop. 4.11, $e^{\kappa t}$ times the last term in (81) converges to $0$. Left with an inspection of the middle term multiplied by $e^{\kappa t}$, use (66) of Prop. 4.11 to infer

$$e^{\kappa t}\,E_{-}\Bigg[\sum_{n\ge\tau}h^{(-)}_{\phi_+}(t-S_n)\Bigg]\;\xrightarrow{t\to\infty}\;\frac{1}{p'_{--}(\kappa)}\int_{\mathbb{R}}\big(h^{(-)}_{\phi_+}*U^{+}_-\big)_\kappa(x)\,dx.$$

By proceeding in a similar manner as for the derivation of (79) and (80) (using partial integration and substitution), we find that

$$\int_{\mathbb{R}}\big(h^{(-)}_{\phi_+}*U^{+}_-\big)_\kappa(x)\,dx\,=\,\int_{\mathbb{R}}e^{\kappa x}\,h^{(-)}_{\phi_+}*U^{+}_-(x)\,dx,$$

and this is readily evaluated as $\int_0^\infty x^{-(\kappa+1)}\phi(x)\,dx$ times

$$E\big[|\Psi(R)|^{\kappa}\mathbf{1}_{\{\Psi(R)<0\}}-|\Lambda(R)|^{\kappa}\mathbf{1}_{\{\Lambda(R)<0\}}\big]\cdot\int_{\mathbb{R}}e^{\kappa x}\,U^{+}_-(dx).$$

Recalling (63) for the last factor, this shows that

$$e^{\kappa t}\,E_{-}\Bigg[\sum_{n\ge\tau}h^{(-)}_{\phi_+}(t-S_n)\Bigg]\;\xrightarrow{t\to\infty}\;C^{+}_-\int_0^\infty\frac{\phi(x)}{x^{\kappa+1}}\,dx$$

with

$$C^{+}_-\,=\,\frac{p_{-+}(\kappa)\,E\big[|\Psi(R)|^{\kappa}\mathbf{1}_{\{\Psi(R)<0\}}-|\Lambda(R)|^{\kappa}\mathbf{1}_{\{\Lambda(R)<0\}}\big]}{p'_{--}(\kappa)\,(1-p_{++}(\kappa))}. \quad (87)$$

A combination of the previous results yields (25). □

Part (a) of Thm. 2.2 and Thm. 2.3 can both be deduced from Goldie's implicit renewal theory (see [20, Thm. 2.3 and Cor. 2.4] and also [27] and [5]). We confine ourselves to details regarding Thm. 2.2(a) because those for Thm. 2.3 are similar.
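The tail asymptotics just derived can be observed numerically in the simplest asymptotically linear IFS, the affine recursion $\Psi(x)=Ax+B$ with $A>0$ (so that the sign chain is trivial). The following Monte Carlo sketch is an illustration only, not part of the paper: all distributional choices are assumptions. With $\log A\sim N(-0.3,0.5^2)$ and $B\equiv1$, the Cramér condition $E A^{\kappa}=1$ gives $\kappa=2\cdot0.3/0.5^2=2.4$, and the results above predict $P(X>t)\sim C_+\,t^{-\kappa}$ for the stationary law.

```python
import numpy as np

# Monte Carlo sketch (illustration only, not from the paper): iterate the
# affine recursion X <- A X + B with B = 1 and log A ~ N(-0.3, 0.5^2).
# Then E[A^kappa] = exp(-0.3*kappa + 0.125*kappa^2) = 1 at kappa = 2.4,
# and the stationary tail should satisfy P(X > t) ~ C_+ t^(-kappa).
rng = np.random.default_rng(0)
kappa = 2 * 0.3 / 0.5**2            # = 2.4, solves E[A^kappa] = 1

n_paths, n_steps = 200_000, 60
x = np.zeros(n_paths)
for _ in range(n_steps):            # burn in towards the stationary law nu
    a = np.exp(rng.normal(-0.3, 0.5, n_paths))
    x = a * x + 1.0

ts = np.array([5.0, 10.0, 20.0])
tail = np.array([(x > t).mean() for t in ts])   # empirical nu((t, infty))
scaled = ts**kappa * tail           # should be roughly constant (~ C_+)
```

The quantity `scaled` stabilizing across the grid of thresholds is the numerical counterpart of $e^{\kappa t}\phi\star\nu(t)$ converging to a positive constant.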
Proof (of Thm. 2.2(a)). The claimed left tail behavior of $\nu$ at $-\infty$ follows directly with the help of Cor. 2.4 in [20] after checking the following conditions: With $\tilde A:={}_{-}A\vee0$,

$$E\tilde A^{\kappa_-}=1,\qquad E\tilde A^{\kappa_-}\log\tilde A<\infty,\qquad\text{the law of }\tilde A\text{ is nonarithmetic},$$

and

$$E\,\Big|\,|\Psi(R)\wedge0|^{\kappa_-}-|\tilde A R\wedge0|^{\kappa_-}\Big|\,<\,\infty.$$

But the first three of them are immediate by the assumptions of Thm. 2.2, and the last condition follows from the observation that

$$\sup_{x\in\mathbb{R}}\big|\Psi(x)\wedge0-\tilde A x\wedge0\big|\,\le\,B\quad\text{a.s.}$$

and $E B^{\kappa_-}<\infty$ (see (18)). □

As already said, the proof of Thm. 2.3 follows along the same lines, for part (b) using a conjugation with a homeomorphism $r:\mathbb{R}\to[1,+\infty)$.

The purpose of this section is to provide conditions that entail positivity of the constants $C_+$, $C_-$ figuring in Thms. 2.1, 2.2 and 2.3. This does usually not follow from the existence of the limit and therefore requires additional arguments. Our approach here is based on the recent paper [11] and consists in proving that, if the support of the stationary measure $\nu$ is unbounded and some contraction property holds, then the constants are indeed positive. The results are stated in Props. 8.1, 8.5 and 8.6 below, but we confine ourselves to the proof in the irreducible case because the remaining ones can be either treated in an analogous way or reduced to Goldie's implicit renewal theory [20].

Case 1 (irreducible case). $p_{-+}>0$ and $p_{+-}>0$.

Proposition 8.1
In the situation of Thm. 2.1, the constants $C_+$ and $C_-$ in (15) are strictly positive if the stationary measure $\nu$ has unbounded support.

The function $s\mapsto\rho(s)$ is smooth, convex (Lemma 4.2) and satisfies $\rho(0)=\rho(\kappa)=1$ under our hypotheses. As a consequence, $\rho(s)<1$ for $s\in(0,\kappa)$. This fact follows from Lemma 11.1 in the Appendix and will be used for the proof of the proposition (see after (95)).

Recall from Section 3.3 that $\mathrm{Lip}(\Lambda_n)=|{}_{-}A_n|\vee|{}_{+}A_n|$ and $\hat Z_n=\Lambda_1\cdots\Lambda_n(X_0)$. Put

$$L^{\to}_n\,:=\,\mathrm{Lip}(\Lambda_1\cdots\Lambda_n)\qquad\text{and}\qquad L^{\leftarrow}_n\,:=\,\mathrm{Lip}(\Lambda_n\cdots\Lambda_1)$$

for $n\in\mathbb{N}$ and $L^{\to}_0=L^{\leftarrow}_0:=1$. Note that, with this notation,

$$\hat Y_\infty\,=\,\sum_{n\ge0}L^{\to}_n\,B_{n+1}.$$

Finally, let $X_0=\xi_0$ throughout this section and

$$T_t\,:=\,\inf\{n\ge0:\,\hat Z_n>t\}\qquad\text{for }t>0.$$

The proof of Prop. 8.1 is based on the subsequent lemma which does not require irreducibility.
Lemma 8.2
Let $\nu$ be a stationary distribution with support unbounded to the right. If
$$\mathbb{P}_+\big[T_t < \infty\big] \ \ge\ K_1\, t^{-\kappa} \tag{88}$$
and
$$\mathbb{P}\big[\widehat{Y}_\infty > t\big] \ \le\ K_2\, t^{-\kappa} \tag{89}$$
for suitable constants $K_1, K_2 > 0$ and all $t \ge 1$, then
$$\liminf_{t\to\infty} t^{\kappa}\,\nu((t,\infty)) > 0.$$

Proof.
We first show that
$$\nu((t,\infty)) \ \ge\ \Big(\mathbb{P}_+\big[T_t<\infty\big] - \mathbb{P}\big[\widehat{Y}_\infty > (K-1)t\big]\Big)\,\nu([K,\infty)) \tag{90}$$
for all $t > 0$ and $K > 1$. Fix $t$, write $T$ as shorthand for $T_t$, let $\mathcal{G}_n$ denote the $\sigma$-field generated by $\Psi_1,\ldots,\Psi_n$ for $n\ge1$, and put $\Psi_1\cdots\Psi_T(x) = \Lambda_1\cdots\Lambda_T(x) := 0$ for any $x\in\mathbb{R}$ if $T = \infty$. Observe that, by the $\Psi$-invariance of $\nu$ and the independence of $(\Psi_n)_{n\ge1}$ and $R$, the sequence
$$M_n := \mathbb{P}\big[\Psi_1\cdots\Psi_n(R) > t\,\big|\,\mathcal{G}_n\big] = \int \mathbf{1}_{(t,\infty)}\big(\Psi_1\cdots\Psi_n(x)\big)\,\nu(dx), \qquad n\ge0,$$
is a bounded martingale under $\mathbb{P}_+$. By combining (31) of Lemma 3.1, the identity $\Lambda_1\cdots\Lambda_T(x) = x\,\Lambda_1\cdots\Lambda_T(1) = x\,\widehat{Z}_T$ $\mathbb{P}_+$-a.s. for all $x > 0$, and the optional sampling theorem, we hence obtain for any $K > 1$
$$\begin{aligned}
\nu((t,\infty)) &= \mathbb{E}_+\bigg[\int \mathbf{1}_{\{\Psi_1\cdots\Psi_T(x) > t\}}\,\nu(dx)\bigg] \\
&\ge \mathbb{E}_+\bigg[\int_{[K,\infty)} \mathbf{1}_{\{\Lambda_1\cdots\Lambda_T(x) - \widehat{Y}_T > t\}}\,\nu(dx)\bigg] \\
&\ge \mathbb{P}_+\big[T<\infty,\ K\widehat{Z}_T - \widehat{Y}_T > t\big]\,\nu([K,\infty)) \\
&\ge \mathbb{P}_+\big[T<\infty,\ \widehat{Y}_T \le (K-1)t\big]\,\nu([K,\infty)) \\
&= \Big(\mathbb{P}_+\big[T<\infty\big] - \mathbb{P}_+\big[T<\infty,\ \widehat{Y}_T > (K-1)t\big]\Big)\,\nu([K,\infty)) \\
&\ge \Big(\mathbb{P}_+\big[T<\infty\big] - \mathbb{P}\big[\widehat{Y}_\infty > (K-1)t\big]\Big)\,\nu([K,\infty))
\end{aligned}$$
and thus (90). Now use (88), (89) and choose $K$ large enough to obtain the assertion of the lemma. $\square$

In view of this lemma, the proof of Prop. 8.1 reduces to a verification of (88) and (89). The first of these conditions is shown as part of the next lemma, the second one in Lemma 8.4.
Lemma 8.3
Under the hypotheses of Prop. 8.1, there exists a constant $K > 0$ such that, for all $t \ge 1$,
$$t^{\kappa}\,\mathbb{P}\Big[\sup_{n\ge1}L^{\to}_n > t\Big] \ \ge\ K^{-1} \tag{91}$$
and
$$\sup_{n\ge1}\, t^{\kappa}\,\mathbb{P}\big[L^{\to}_n > t\big] \ \le\ K. \tag{92}$$
Moreover, (88) holds; in fact,
$$t^{\kappa}\,\mathbb{P}_\delta\big[T_t < \infty\big] \ =\ t^{\kappa}\,\mathbb{P}\Big[\sup_{n\ge1}\Lambda_1\cdots\Lambda_n(\delta) > t\Big] \ \ge\ K_1 \tag{93}$$
for some $K_1 > 0$ and each $\delta \in S$.

Proof.
Recall that $(\xi_n, S_n)_{n\ge0}$ with $S_n = \log|\Lambda_n\cdots\Lambda_1(\xi_0)|$ constitutes a MRW, which is nonarithmetic under the hypotheses of Thm. 2.1. Put
$$N(t) := \inf\{n\ge1 : S_n > t\}, \qquad R(t) := (S_{N(t)}-t)\,\mathbf{1}_{\{N(t)<\infty\}} \qquad\text{and}\qquad Z(t) := \xi_{N(t)}\,\mathbf{1}_{\{N(t)<\infty\}}.$$
We first show that
$$C(\delta) := \lim_{t\to\infty} t^{\kappa}\,\mathbb{P}\Big[\sup_{n\ge1}|\Lambda_n\cdots\Lambda_1(\delta)| > t\Big] \tag{94}$$
is strictly positive for each $\delta\in S$. This implies (92) when combined with the fact that $L^{\leftarrow}_n \stackrel{d}{=} L^{\to}_n$ for each $n\ge1$. On the other hand, it does not directly imply (91) because the equality in law holds for fixed $n$ only, and the main difficulty in our proof is indeed to reverse the order of the random iterations.

To prove our claim, define $f(\delta,t) := e^{-\kappa t}\,v_\delta(\kappa)^{-1}\,\mathbf{1}_{[0,\infty)}(t)$ for $(\delta,t)\in S\times\mathbb{R}$. Then
$$\begin{aligned}
\frac{e^{\kappa t}}{v_\delta(\kappa)}\,\mathbb{P}\Big[\sup_{n\ge1}|\Lambda_n\cdots\Lambda_1(\delta)| > e^{t}\Big]
&= \frac{e^{\kappa t}}{v_\delta(\kappa)}\,\mathbb{P}_\delta\Big[\sup_{n\ge1} S_n > t\Big]
= \frac{e^{\kappa t}}{v_\delta(\kappa)}\,\mathbb{P}_\delta\big[N(t)<\infty\big] \\
&= \sum_{n\ge1}\mathbb{E}^{(\kappa)}_\delta\bigg[\mathbf{1}_{\{N(t)=n\}}\,\frac{e^{-\kappa(S_n-t)}}{v_{\xi_n}(\kappa)}\bigg]
= \sum_{n\ge1}\mathbb{E}^{(\kappa)}_\delta\big[\mathbf{1}_{\{N(t)=n\}}\,f(\xi_n, S_n-t)\big] \\
&= \mathbb{E}^{(\kappa)}_\delta\big[\mathbf{1}_{\{N(t)<\infty\}}\,f(Z(t), R(t))\big],
\end{aligned}$$
and the last expectation converges to a positive limit by an extension of the Markov renewal Lemma 4.8; see [25, Thm. 1] and [3, Cor. 3.2].

Turning to the proof of (91), we fix $m\in\mathbb{N}$, define the stopping time
$$\tau_{m,t} := \inf\{n\ge1 : t < L^{\to}_n \le mt\}$$
and point out that
$$\begin{aligned}
\sum_{n\ge1}\mathbb{P}\big[t < L^{\to}_n \le mt\big]
&= \sum_{n\ge0}\mathbb{P}\big[\tau_{m,t}<\infty,\ t < L^{\to}_{\tau_{m,t}+n} \le mt\big] \\
&\le \sum_{n\ge0}\mathbb{P}\big[\tau_{m,t}<\infty,\ L^{\to}_{\tau_{m,t}}\cdot\mathrm{Lip}(\Lambda_{\tau_{m,t}+1}\cdots\Lambda_{\tau_{m,t}+n}) > t\big] \\
&\le \sum_{n\ge0}\mathbb{P}\big[\tau_{m,t}<\infty,\ \mathrm{Lip}(\Lambda_{\tau_{m,t}+1}\cdots\Lambda_{\tau_{m,t}+n}) > 1/m\big] \\
&\le \mathbb{P}\big[\tau_{m,t}<\infty\big]\,\sum_{n\ge0}\mathbb{P}\big[L^{\to}_n > 1/m\big]. 
\end{aligned}\tag{95}$$
Since $\rho(\vartheta) < 1$ for each $\vartheta\in(0,\kappa)$, Lemma 4.1 entails that there exist constants $\rho < 1$ and $K < \infty$ such that
$$\mathbb{E}\big[(L^{\to}_n)^{\vartheta}\big] \ \le\ K\rho^{n} \tag{96}$$
for all $n\in\mathbb{N}$; in particular, the constant $\beta := \sum_{n\ge0}\mathbb{P}[L^{\to}_n > 1/m]$ is finite. Then (95) yields
$$\mathbb{P}\Big[\sup_{n\ge1}L^{\to}_n > t\Big] \ \ge\ \mathbb{P}\big[\tau_{m,t}<\infty\big] \ \ge\ \beta^{-1}\sum_{n\ge1}\mathbb{P}\big[t < L^{\to}_n \le mt\big].$$
Next, use $L^{\to}_n \stackrel{d}{=} L^{\leftarrow}_n$ and
$$L^{\leftarrow}_n = |\Lambda_n\cdots\Lambda_1(1)| \vee |\Lambda_n\cdots\Lambda_1(-1)| \tag{97}$$
for each $n\in\mathbb{N}$ to obtain
$$\begin{aligned}
\sum_{n\ge1}\mathbb{P}\big[t < L^{\to}_n \le mt\big] &= \sum_{n\ge1}\mathbb{P}\big[t < L^{\leftarrow}_n \le mt\big] \\
&\ge \sum_{n\ge1}\mathbb{P}\big[t < |\Lambda_n\cdots\Lambda_1(1)| \le mt,\ |\Lambda_n\cdots\Lambda_1(-1)| \le mt\big] \\
&\ge \sum_{n\ge1}\Big(\mathbb{P}\big[t < |\Lambda_n\cdots\Lambda_1(1)| \le mt\big] - \mathbb{P}\big[|\Lambda_n\cdots\Lambda_1(-1)| > mt\big]\Big) \\
&= \sum_{n\ge1}\Big(\mathbb{P}_+\big[0 \le S_n - \log t < \log m\big] - \mathbb{P}_-\big[S_n - \log t > \log m\big]\Big) \ =:\ I_1(t) + I_2(t).
\end{aligned}$$
Moreover, with $v^{*}(\kappa) := v_-(\kappa)\vee v_+(\kappa)$,
$$I_1(t) = t^{-\kappa}\,\frac{v_+(\kappa)}{v^{*}(\kappa)}\sum_{n\ge1}\mathbb{E}^{(\kappa)}_+\Big[e^{\kappa(\log t - S_n)}\,\mathbf{1}_{[0,\log m)}(S_n-\log t)\Big]$$
and, similarly,
$$I_2(t) = -\,t^{-\kappa}\,\frac{v_-(\kappa)}{v^{*}(\kappa)}\sum_{n\ge1}\mathbb{E}^{(\kappa)}_-\Big[e^{\kappa(\log t - S_n)}\,\mathbf{1}_{(\log m,\infty)}(S_n-\log t)\Big].$$
By another appeal to Lemma 4.8,
$$\lim_{t\to\infty} t^{\kappa} I_1(t) = \frac{v_+(\kappa)}{v^{*}(\kappa)\,\rho'(\kappa)}\int_0^{\log m} e^{-\kappa x}\,dx = \frac{v_+(\kappa)\,(1-m^{-\kappa})}{\kappa\, v^{*}(\kappa)\,\rho'(\kappa)}$$
and
$$\lim_{t\to\infty} t^{\kappa} I_2(t) = -\,\frac{v_-(\kappa)}{v^{*}(\kappa)\,\rho'(\kappa)}\int_{\log m}^{\infty} e^{-\kappa x}\,dx = -\,\frac{v_-(\kappa)\, m^{-\kappa}}{\kappa\, v^{*}(\kappa)\,\rho'(\kappa)}.$$
By putting the previous estimates together and fixing $m$ sufficiently large, we see that
$$\liminf_{t\to\infty}\, t^{\kappa}\,\mathbb{P}\Big[\sup_{n\ge1}L^{\to}_n > t\Big] \ \ge\ \liminf_{t\to\infty}\,\frac{t^{\kappa}}{\beta}\,\big(I_1(t)+I_2(t)\big) \ >\ 0,$$
and thus (91) holds true.

Left with the proof of (93), we first verify the weaker assertion
$$t^{\kappa}\,\mathbb{P}\Big[\sup_{n\ge1}|\Lambda_1\cdots\Lambda_n(\delta)| > t\Big] \ =\ t^{\kappa}\,\mathbb{P}_\delta\Big[\sup_{n\ge1}|\widehat{Z}_n| > t\Big] \ \ge\ K_2 \tag{98}$$
for some $K_2 > 0$, each $\delta\in S$ and all $t\ge1$. It is enough to consider $\delta = +1$. By irreducibility, we can fix $\eta\in(0,1)$ sufficiently small such that
$$p \ :=\ \mathbb{P}\big(\Lambda(+1) < -\eta\big) \wedge \mathbb{P}\big(\Lambda(-1) > \eta\big) \ >\ 0,$$
where $\Lambda$ denotes a generic copy of $\Lambda_1$. Next, put $\tau = \tau_{t/\eta} := \inf\{n\ge1 : L^{\to}_n > t/\eta\}$ with associated events
$$B_+ := \big\{L^{\to}_\tau = |\Lambda_1\cdots\Lambda_\tau(+1)|,\ \tau<\infty\big\}, \qquad B_- := \big\{L^{\to}_\tau = |\Lambda_1\cdots\Lambda_\tau(-1)|,\ \tau<\infty\big\}.$$
Notice that $|\Lambda_1\cdots\Lambda_\tau(+1)| > t/\eta > t$ on $B_+$, that
$$|\Lambda_1\cdots\Lambda_{\tau+1}(+1)| = |\Lambda_1\cdots\Lambda_\tau(-1)|\,|\Lambda_{\tau+1}(+1)| > \frac{t}{\eta}\cdot\eta = t$$
on $B_-\cap\{\Lambda_{\tau+1}(+1) < -\eta\}$, and that $\Lambda_{\tau+1}(+1)$ is independent of $B_-$ with the same law as $\Lambda_1(+1) = {}^{+}A$. Then it follows that
$$\begin{aligned}
\mathbb{P}\Big[\sup_{n\ge1}|\Lambda_1\cdots\Lambda_n(+1)| > t\Big]
&\ge \mathbb{P}\Big[\tau<\infty,\ \sup_{n\ge1}|\Lambda_1\cdots\Lambda_n(+1)| > t\Big] \\
&\ge \mathbb{P}[B_+] + \mathbb{P}\big[B_-\cap\{\Lambda_{\tau+1}(+1) < -\eta\}\big] \\
&= \mathbb{P}[B_+] + \mathbb{P}[B_-]\,\mathbb{P}\big[\Lambda_{\tau+1}(+1) < -\eta\big] \\
&\ge p\,\big(\mathbb{P}[B_+]+\mathbb{P}[B_-]\big) \ \ge\ p\,\mathbb{P}[\tau<\infty] \ \ge\ \frac{p\,\eta^{\kappa}}{K\, t^{\kappa}} \ =:\ \frac{K_2}{t^{\kappa}}
\end{aligned}$$
for all $t\ge1$, where the last inequality follows from (91). This proves (98).

In order to deduce (93), note that
$$\mathbb{P}\Big[\sup_{n\ge1}\Lambda_1\cdots\Lambda_n(+1) > t\Big] = \mathbb{P}\big[\exists\,n\ge1:\ \Lambda_1\cdots\Lambda_n(+1) > 0,\ |\Lambda_1\cdots\Lambda_n(+1)| > t\big] \ge \mathbb{P}\big[\exists\,n\ge1:\ \Lambda_1\cdots\Lambda_n(+1) > 0,\ |\Lambda_1\cdots\Lambda_n(+1)| > t/\eta\big]$$
and
$$\begin{aligned}
\mathbb{P}\Big[\sup_{n\ge1}\Lambda_1\cdots\Lambda_n(+1) > t\Big]
&\ge \mathbb{P}\Big[\Lambda_1(-1) > \eta,\ \sup_{n\ge2}\Lambda_1\cdots\Lambda_n(+1) > t\Big] \\
&\ge \mathbb{P}\big[\Lambda_1(-1) > \eta,\ \exists\,n\ge2:\ \Lambda_2\cdots\Lambda_n(+1) < 0,\ \Lambda_1(-1)\,|\Lambda_2\cdots\Lambda_n(+1)| > t\big] \\
&\ge p\,\mathbb{P}\big[\exists\,n\ge1:\ \Lambda_1\cdots\Lambda_n(+1) < 0,\ |\Lambda_1\cdots\Lambda_n(+1)| > t/\eta\big].
\end{aligned}$$
Hence, by adding these two bounds and using (98),
$$2\,\mathbb{P}\Big[\sup_{n\ge1}\Lambda_1\cdots\Lambda_n(+1) > t\Big] \ \ge\ p\,\mathbb{P}\Big[\sup_{n\ge1}|\Lambda_1\cdots\Lambda_n(+1)| > t/\eta\Big] \ \ge\ p\,K_2\,\eta^{\kappa}\, t^{-\kappa}$$
for all $t\ge1$, and (93) follows. $\square$

Lemma 8.4
Under the hypotheses of Prop. 8.1, Condition (89) holds.

Proof.
Recall that $\widehat{Y}_\infty = \sum_{n\ge1} L^{\to}_{n-1}B_{n}$. For (89), it therefore suffices to verify
$$\mathbb{P}\Big[\max_{n\ge1} L^{\to}_{n-1}B_{n} > t\Big] \ \le\ K\,t^{-\kappa} \tag{99}$$
and
$$\mathbb{P}\Big[\widehat{Y}_\infty > t,\ \max_{n\ge1} L^{\to}_{n-1}B_{n} \le t\Big] \ \le\ K\,t^{-\kappa}. \tag{100}$$
Here and in the following, $K$ denotes a generic positive constant that may differ from line to line. To prove (99), we define the two events
$$V^{i}_{n} := \big\{e^{i}t < L^{\to}_{n-1}B_{n} \le e^{i+1}t\big\}, \qquad U^{i}_{n} := \big\{e^{i}t < L^{\leftarrow}_{n-1}B \le e^{i+1}t\big\}, \tag{101}$$
of equal probability for all $i\ge0$ and $n\ge1$, where $B$ denotes a copy of $B_1$ independent of all other occurring random variables. Fix $\vartheta\in(0,\kappa)$ with associated $\rho$ and $K$ as in (96), and pick $M\in\mathbb{N}$ large enough such that $2e^{\vartheta}K\rho^{M} < 1-\rho^{M}$. Writing every $n\ge1$ as $n = kM+m$ with $k\ge0$ and $m\in\{1,\ldots,M\}$, we get
$$\mathbb{P}\Big[\max_{n\ge1}L^{\to}_{n-1}B_{n} > t\Big]
= \sum_{i\ge0}\mathbb{P}\Big[\bigcup_{m=1}^{M}\bigcup_{k\ge0}V^{i}_{kM+m}\Big]
\ \le\ \sum_{m=1}^{M}\sum_{i\ge0}\sum_{k\ge0}\mathbb{P}\big[V^{i}_{kM+m}\big]
\ =\ \sum_{m=1}^{M}\sum_{i\ge0}\sum_{k\ge0}\mathbb{P}\big[U^{i}_{kM+m}\big],$$
whence (99) follows if we prove that
$$\sum_{i\ge0}\sum_{k\ge0}\mathbb{P}\big[U^{i}_{kM+m}\big] \ \le\ K\,t^{-\kappa} \tag{102}$$
for $m = 1,\ldots,M$. We confine ourselves to the case $m = M$, i.e. to the indices $nM$, $n\ge1$, and note first that
$$\sum_{n\ge1}\mathbb{P}\big[U^{i}_{nM}\big] \ \le\ \mathbb{P}\Big[\bigcup_{n\ge1}U^{i}_{nM}\Big] + \sum_{n\ge1}\sum_{j>n}\mathbb{P}\big[U^{i}_{nM}\cap U^{i}_{jM}\big] \tag{103}$$
for any $i$. Then, by using how $\rho$ and $M$ have been chosen, we find for any fixed $n$ that
$$\begin{aligned}
\sum_{j>n}\mathbb{P}\big[U^{i}_{nM}\cap U^{i}_{jM}\big]
&\le \sum_{j>n}\mathbb{P}\big[L^{\leftarrow}_{nM-1}B \le e^{i+1}t,\ L^{\leftarrow}_{jM-1}B > e^{i}t\big] \\
&\le \mathbb{P}\big[U^{i}_{nM}\big]\sum_{j>n}\mathbb{P}\big[\mathrm{Lip}(\Lambda_{jM-1}\cdots\Lambda_{nM}) > e^{-1}\big] \\
&\le \mathbb{P}\big[U^{i}_{nM}\big]\sum_{j>n}e^{\vartheta}K\rho^{(j-n)M} \ \le\ \tfrac12\,\mathbb{P}\big[U^{i}_{nM}\big],
\end{aligned}$$
which in combination with (103) leads to
$$\sum_{n\ge1}\mathbb{P}\big[U^{i}_{nM}\big] \ \le\ 2\,\mathbb{P}\Big[\bigcup_{n\ge1}U^{i}_{nM}\Big]$$
and then finally to
$$\sum_{i\ge0}\sum_{n\ge1}\mathbb{P}\big[U^{i}_{nM}\big] \ \le\ 2\sum_{i\ge0}\mathbb{P}\Big[\bigcup_{n\ge1}U^{i}_{nM}\Big] \ \le\ 2\sum_{i\ge0}\mathbb{P}\Big[\sup_{n\ge1}L^{\leftarrow}_{n-1} > \frac{e^{i}t}{B}\Big] \ \le\ \Big(2K\,\mathbb{E}B^{\kappa}\sum_{i\ge0}e^{-i\kappa}\Big)\,t^{-\kappa},$$
where the penultimate inequality follows from the definition of the $U^{i}_{n}$ (see (101)) and the last one from (94) and the independence of $\sup_{n}L^{\leftarrow}_{n}$ and $B$. This completes the proof of (102) and thus also of (99).

Turning to inequality (100), we define the random index sets
$$W_j := \Big\{n\ge1 : \frac{t}{e^{j+1}} < L^{\to}_{n-1}B_{n} \le \frac{t}{e^{j}}\Big\}, \qquad j\ge0,$$
and prove that, for suitable $K$ and $\rho\in(0,1)$ and all $j, \ell\ge0$,
$$\mathbb{P}\big[\mathrm{card}(W_j) > \ell\big] \ \le\ K\,\rho^{\ell}\,e^{\kappa j}\,t^{-\kappa},$$
where $\mathrm{card}$ denotes the cardinality of a set. To verify this, pick $\vartheta\in(0,\kappa)$ and $\rho\in(0,1)$ in accordance with (96) and observe that
$$\mathbb{P}\Big[\sup_{m\ge\ell}L^{\to}_{m-1}B_{m} \ge s\Big] \ \le\ \sum_{m\ge\ell}\frac{\mathbb{E}(L^{\to}_{m-1})^{\vartheta}\,\mathbb{E}B^{\vartheta}}{s^{\vartheta}} \ \le\ \frac{K\rho^{\ell}\,\mathbb{E}B^{\vartheta}}{s^{\vartheta}}.$$
Let $\tau_i = \tau_i(j)$ for $i = 1, 2$ denote the first two elements of $W_j$, with $\tau_1 := \infty$ if $W_j$ is empty and $\tau_2 := \infty$ if $\mathrm{card}(W_j)\le1$. Put also $L^{k+m}_{k} := \mathrm{Lip}(\Lambda_k\cdots\Lambda_{k+m})$. Then
$$\begin{aligned}
\mathbb{P}\big[\mathrm{card}(W_j) > \ell+1\big]
&\le \mathbb{P}\Big[\tau_1<\infty,\ \exists\,m\ge\ell :\ \frac{t}{e^{j+1}} < L^{\to}_{\tau_1+m-1}B_{\tau_1+m} \le \frac{t}{e^{j}}\Big] \\
&\le \mathbb{P}\Big[\tau_1<\infty,\ \exists\,m\ge\ell :\ L^{\to}_{\tau_1-1}\,\mathrm{Lip}(\Lambda_{\tau_1})\,L^{\tau_1+m-1}_{\tau_1+1}B_{\tau_1+m} > \frac{t}{e^{j+1}}\Big] \\
&\le \mathbb{P}\Big[\tau_1<\infty,\ \exists\,m\ge\ell :\ L^{\tau_1+m-1}_{\tau_1+1}B_{\tau_1+m} > \frac{B_{\tau_1}}{e\,\mathrm{Lip}(\Lambda_{\tau_1})}\Big] \\
&\le K e^{\vartheta}\rho^{\ell}\,\mathbb{E}(B^{\vartheta})\,\mathbb{E}\bigg[\mathbf{1}_{\{\tau_1<\infty\}}\,\frac{\mathrm{Lip}(\Lambda_{\tau_1})^{\vartheta}}{B_{\tau_1}^{\vartheta}}\bigg] \\
&\le K e^{\vartheta}\rho^{\ell}\,\mathbb{E}(B^{\vartheta})\,\mathbb{E}\big[\mathrm{Lip}(\Lambda_{\tau_1})^{\vartheta}\big]\,\mathbb{P}\big[\tau_1<\infty\big] \qquad (\text{since } B_{\tau_1}\ge1) \\
&\le K e^{\vartheta}\rho^{\ell}\,\mathbb{E}(B^{\vartheta})\,\mathbb{E}\big[\mathrm{Lip}(\Lambda_{\tau_1})^{\vartheta}\big]\,\mathbb{P}\Big[\sup_{n\ge1}L^{\to}_{n-1}B_{n} > \frac{t}{e^{j+1}}\Big] \ \le\ K\rho^{\ell+1}\,e^{\kappa j}\,t^{-\kappa},
\end{aligned}$$
where the third inequality uses $L^{\to}_{\tau_1-1}B_{\tau_1} \le t e^{-j}$ on $\{\tau_1<\infty\}$ and the last one follows from (99).

Returning to the proof of (100), we note that the occurrence of $\widehat{Y}_\infty > t$ and $\max_{n\ge1}L^{\to}_{n-1}B_{n} \le t$ entails that at least one $W_j$ must be relatively large; more precisely, that a.s. $\mathrm{card}(W_j) > e^{j}/2(j+1)^{2}$ for some $j\ge0$. Indeed, if the latter fails, then
$$\widehat{Y}_\infty = \sum_{j\ge0}\sum_{n\in W_j}L^{\to}_{n-1}B_{n} \ \le\ \sum_{j\ge0}\mathrm{card}(W_j)\cdot\frac{t}{e^{j}} \ \le\ \sum_{j\ge0}\frac{e^{j}}{2(j+1)^{2}}\cdot\frac{t}{e^{j}} \ =\ \frac{\pi^{2}}{12}\,t \ <\ t.$$
Hence, we finally arrive at
$$\mathbb{P}\Big[\widehat{Y}_\infty > t,\ \max_{n\ge1}L^{\to}_{n-1}B_{n} \le t\Big] \ \le\ \sum_{j\ge0}\mathbb{P}\Big[\mathrm{card}(W_j) > \frac{e^{j}}{2(j+1)^{2}}\Big] \ \le\ K\sum_{j\ge0}\rho^{e^{j}/2(j+1)^{2}}\,e^{\kappa j}\, t^{-\kappa} \ \le\ K\,t^{-\kappa}$$
and thus at the desired conclusion. $\square$

Case 2 (unilateral case). $p_{-+} > 0$ and $p_{+-} = 0$.

Proposition 8.5
(a) If the hypotheses of Thm. 2.2(a) and $p_{--}'(0) < 0$ hold, then the constant $C_-$ in (20) is strictly positive for any stationary law $\nu$ of unbounded support at $-\infty$.
(b) If the hypotheses of Thm. 2.2(b) and $p_{++}'(0) < 0$ hold, then the constant $C_+$ in (22) is strictly positive for any stationary law $\nu$ of unbounded support at $+\infty$.
(c) If the hypotheses of Thm. 2.2(c) and $p_{--}'(0) < 0$ hold, then the constant $C_{+-}$ in (24) is strictly positive for any stationary law $\nu$ of unbounded support at both $-\infty$ and $+\infty$.

Case 3 (separated case). $p_{-+} = 0$ and $p_{+-} = 0$.

Proposition 8.6
(a) If the hypotheses of Thm. 2.3(a) and $p_{--}'(0) < 0$ hold, then the constant $C_-$ is strictly positive for any stationary law $\nu$ of unbounded support at $-\infty$.
(b) If the hypotheses of Thm. 2.3(b) and $p_{++}'(0) < 0$ hold, then the constant $C_+$ is strictly positive for any stationary law $\nu$ of unbounded support at $+\infty$.

9 Existence of stationary distributions

In order to wrap up our presentation, this very short section provides conditions which ensure the existence of at least one stationary distribution of the given ALIFS; these conditions are directly seen to hold in our main results. We do not strive for utmost generality here, nor do we address the uniqueness question. While existence of a stationary distribution depends on the behavior of the IFS at infinity and could be derived from weaker assumptions, uniqueness is a "local" property and needs "local" assumptions that are not imposed in the very general setting of this work. On the other hand, the subsequent result is tailored to our needs and is easily deduced by a tightness argument using (71).
Proposition 9.1
Suppose that there exists $\vartheta > 0$ such that $\rho(\vartheta) < 1$ and $\mathbb{E}B^{\vartheta} < \infty$. Then the ALIFS $(X_n)_{n\ge0}$ admits at least one stationary distribution.

As a particular consequence, the convexity of the spectral radius $\rho(\theta)$ yields the existence of an invariant law whenever (13) holds with $\kappa$ such that $\rho(\kappa) = 1$.

Proof. Suppose that $(X_n)_{n\ge0}$ has initial state $X_0 = 0$ and recall from Lemma 3.1 that $|X_n| = |X_n - \Lambda_n\cdots\Lambda_1(0)| \le Y_n$ for each $n\in\mathbb{N}$. Using also $Y_n \stackrel{d}{=} \widehat{Y}_n \uparrow \widehat{Y}_\infty$, we infer that
$$\mathbb{P}\big[|X_n| > K\big] \ \le\ \mathbb{P}\big[Y_n > K\big] \ =\ \mathbb{P}\big[\widehat{Y}_n > K\big] \ \le\ \mathbb{P}\big[\widehat{Y}_\infty > K\big]$$
for each $K > 0$, giving tightness of $(X_n)_{n\ge0}$ under $\mathbb{P}$ because, by (71), $\widehat{Y}_\infty$ is almost surely finite under the given assumptions. Since $(X_n)_{n\ge0}$ is also a Feller chain, the existence of a stationary distribution now follows by the Krylov-Bogoliubov theorem, see e.g. [14, Thm. 3.1.1]. $\square$
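The tightness mechanism behind Prop. 9.1 is easy to visualize by simulation. The following sketch is not from the paper: it uses a hypothetical affine ALIFS $\Psi(x) = ax + b$, for which $\mathrm{Lip}(\Psi) = |a|$ and $\rho(1) = \mathbb{E}|a| < 1$, and checks numerically that the partial sums of the backward series $\widehat{Y}_\infty = \sum_{n\ge0} L^{\to}_n B_{n+1}$ stabilize while the forward iterates from $X_0 = 0$ are dominated by the recursion $y \mapsto |a|y + |b|$, whose value at time $n$ has the law of $\widehat{Y}_n$.

```python
import random

random.seed(7)

# Hypothetical affine ALIFS (illustration only): Psi(x) = a*x + b with
# a ~ U(-0.9, 0.9) and b ~ U(0, 1), so Lip(Psi) = |a| and rho(1) = E|a| = 0.45 < 1.
def draw():
    return random.uniform(-0.9, 0.9), random.uniform(0.0, 1.0)

N = 400
pairs = [draw() for _ in range(N)]

# Partial sums of the backward series sum_n L_n * B_{n+1} with L_n = |a_1 ... a_n|.
L, s, partial = 1.0, 0.0, []
for a, b in pairs:
    s += L * abs(b)
    L *= abs(a)
    partial.append(s)

# The series converges a.s.: here the tail after 200 terms is at most 0.9**200 / 0.1.
assert abs(partial[-1] - partial[199]) < 1e-6

# Forward iterates from X_0 = 0 are dominated by y -> |a|*y + |b| (tightness),
# and y stays below the deterministic bound 1/(1 - 0.9).
x = y = 0.0
for a, b in pairs:
    x, y = a * x + b, abs(a) * y + abs(b)
assert abs(x) <= y <= 1.0 / (1.0 - 0.9)
```

With a slope law satisfying only $\mathbb{E}\log|a| < 0$ but $\mathbb{E}|a| \ge 1$, the same domination argument still applies once $\rho(\vartheta) < 1$ is checked for some smaller $\vartheta > 0$.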
10 The AR (1) model with ARCH errors revisited
This model has already been mentioned in Subsection 2.1. Defined as the
ALIFS generated by i.i.d. copies of the random function
$$\Psi(x) = \alpha x + Z\big(\beta + \lambda x^{2}\big)^{1/2}$$
for some $(\alpha,\beta,\lambda) \in \mathbb{R}\times\mathbb{R}_{>0}^{2}$ and a random variable $Z$, it provides an ideal example to illustrate our results because all three cases can occur, depending on how the parameters $\alpha, \beta, \lambda$ and (the range of) the random variable $Z$ are chosen. Since
$$0 \ \le\ \big(\beta + \lambda x^{2}\big)^{1/2} - \big(\lambda x^{2}\big)^{1/2} \ =\ \frac{\beta}{(\beta + \lambda x^{2})^{1/2} + (\lambda x^{2})^{1/2}} \ \le\ \beta^{1/2}$$
for all $x\in\mathbb{R}$, we see that Condition (4) holds with
$${}^{\pm}A = \alpha \pm \lambda^{1/2}Z \qquad\text{and}\qquad B = \beta^{1/2}|Z|,$$
so that
$$p_{-+} = \mathbb{P}\big[Z > \alpha/\lambda^{1/2}\big] \qquad\text{and}\qquad p_{+-} = \mathbb{P}\big[Z < -\alpha/\lambda^{1/2}\big],$$
and
$$P(\theta) = \begin{pmatrix} \mathbb{E}\big[|\alpha-\lambda^{1/2}Z|^{\theta}\,\mathbf{1}_{\{Z<\alpha/\lambda^{1/2}\}}\big] & \mathbb{E}\big[|\alpha-\lambda^{1/2}Z|^{\theta}\,\mathbf{1}_{\{Z>\alpha/\lambda^{1/2}\}}\big] \\[4pt] \mathbb{E}\big[|\alpha+\lambda^{1/2}Z|^{\theta}\,\mathbf{1}_{\{Z<-\alpha/\lambda^{1/2}\}}\big] & \mathbb{E}\big[|\alpha+\lambda^{1/2}Z|^{\theta}\,\mathbf{1}_{\{Z>-\alpha/\lambda^{1/2}\}}\big] \end{pmatrix}.$$
Now one can easily see that all three cases can occur, namely the
• irreducible case if $\mathbb{P}[Z > \alpha/\lambda^{1/2}]$ and $\mathbb{P}[Z < -\alpha/\lambda^{1/2}]$ are both positive,
• unilateral case if $Z > -\alpha/\lambda^{1/2}$ a.s. and $\mathbb{P}[Z > \alpha/\lambda^{1/2}] > 0$, and
• separated case if $|Z| \le \alpha/\lambda^{1/2}$ a.s.
By invoking our results, we conclude under the respective additional conditions imposed there, especially (in all three cases) $\mathbb{E}|Z|^{\kappa}\log|Z| < \infty$, that any stationary law of unbounded support has, in the
• irreducible case: left and right power tails of order $\kappa$, with $\kappa$ defined as the minimal positive value such that $\rho(\kappa) = 1$ and with constants $C_-, C_+ > 0$;
• unilateral case: left power tails of order $\kappa_-$ and/or right power tails of order $\kappa_-\wedge\kappa_+$, with $\kappa_-, \kappa_+$ being the unique positive numbers (if they exist and are distinct) such that
$$\mathbb{E}\big[|\alpha-\lambda^{1/2}Z|^{\kappa_-}\,\mathbf{1}_{\{Z<\alpha/\lambda^{1/2}\}}\big] = 1 \qquad\text{and}\qquad \mathbb{E}\big[|\alpha+\lambda^{1/2}Z|^{\kappa_+}\,\mathbf{1}_{\{Z>-\alpha/\lambda^{1/2}\}}\big] = 1,$$
and with constants $C_-, C_+, C_{+-} > 0$;
• separated case: left power tails of order $\kappa_-$ and/or right power tails of order $\kappa_+$, with $\kappa_-, \kappa_+$ as in the previous case (if they exist) and with constants $C_-, C_+ > 0$.
The case when $Z$ has a symmetric law, which rules out the unilateral case, has already been studied in [19, Sect. 8.4] for $\alpha = 0$ and Gaussian $Z$, in [9], and in [5, Subsect. 6.1] by showing that a stationary law must be symmetric as well and thus have left and right tails of the same order, which in fact allows one to resort to Goldie's implicit renewal theory.
11 Appendix
The following lemma confirms that $\rho(\theta) = 1$ in a right neighborhood of 0 can occur in the irreducible case only if the nonlattice assumption (14) is violated.

Lemma 11.1
Suppose that $P(\theta)$ exists and has spectral radius $\rho(\theta) = 1$ for all $\theta\in I = [0,\theta_0]$, $\theta_0 > 0$. Then one of the following alternatives holds:
(a) $p_{-+}\wedge p_{+-} = 0$ and thus $|{}^{-}A| \vee |{}^{+}A| = 1$ a.s.
(b) $p_{-+}(\theta)\,p_{+-}(\theta) \equiv \gamma > 0$ for all $\theta\in I$ and
$${}^{-}A = \begin{cases} 1, & \text{if } {}^{-}A > 0,\\ -a^{-1}, & \text{if } {}^{-}A < 0, \end{cases} \qquad\text{and}\qquad {}^{+}A = \begin{cases} 1, & \text{if } {}^{+}A > 0,\\ -a, & \text{if } {}^{+}A < 0, \end{cases} \qquad\text{a.s.}$$
for some $a > 0$.

Regarding (14), Alternative (b) indeed implies that it fails because
$$\mathbb{P}_{\widehat{\pi}(\kappa)}\big[\log|{}^{\xi_0}A| - a_{\xi_1} + a_{\xi_0} \in d\,\mathbb{Z}\big] = 1$$
when choosing $a_{\pm} = 0$ and $d = \log a$.

Proof.
Using Formula (36) for $\rho(\theta)$, one can readily check that $\rho(\theta) = 1$ for all $\theta\in I$ holds iff
$$(1-p_{--}(\theta))(1-p_{++}(\theta)) \ =\ p_{-+}(\theta)\,p_{+-}(\theta) \qquad\text{for all } \theta\in I. \tag{104}$$
Assuming $p_{-+}\wedge p_{+-} > 0$ and thus $p_{--}\vee p_{++} < 1$, we infer $p_{--}(\theta)\vee p_{++}(\theta) < 1$ for all $\theta\in I' = [0,\theta_1] \subseteq I$ and some $\theta_1 > 0$, w.l.o.g. $I' = I$. Observe also that $p_{-+}(\theta) = p_{-+}\,\mathbb{E}\big[|{}^{-}A|^{\theta}\,\big|\,{}^{-}A < 0\big]$ is a moment generating function modulo the scalar $p_{-+}$ and therefore log-convex on $I$. The same naturally holds for $p_{+-}(\theta)$. On the other hand, the functions
$$\log(1-p_{--}(\theta)) \qquad\text{and}\qquad \log(1-p_{++}(\theta))$$
are concave, being compositions of an increasing concave function with a concave function. Consequently, the logarithms of the products in (104) are both concave and convex and thus linear on $I$. This shows that, with $\gamma := p_{-+}p_{+-}$,
$$p_{-+}(\theta)\,p_{+-}(\theta) = \gamma e^{b\theta} \qquad\text{and}\qquad \frac{1}{(1-p_{--}(\theta))(1-p_{++}(\theta))} = \gamma^{-1}e^{-b\theta} \tag{105}$$
for all $\theta\in I$ and some $b\in\mathbb{R}$. Since $(1-p_{--}(\theta))^{-1}$ and $(1-p_{++}(\theta))^{-1}$ are the moment generating functions of the defective renewal measures
$$H_- = \delta_0 + \sum_{n\ge1}p_{--}^{n}\,\mathbb{P}\big[\log|{}^{-}A| \in \cdot\,\big|\,{}^{-}A > 0\big]^{*n} \qquad\text{and}\qquad H_+ = \delta_0 + \sum_{n\ge1}p_{++}^{n}\,\mathbb{P}\big[\log|{}^{+}A| \in \cdot\,\big|\,{}^{+}A > 0\big]^{*n},$$
respectively, where $\delta_0$ denotes the Dirac measure at 0, we infer that $H_-*H_+$ puts all its mass at $-b$, which is only possible if $b = 0$ and
$$\mathbb{P}\big[{}^{-}A = 1\,\big|\,{}^{-}A > 0\big] = \mathbb{P}\big[{}^{+}A = 1\,\big|\,{}^{+}A > 0\big] = 1.$$
Now the first identity of (105) provides that $\gamma^{-1}p_{-+}(\theta)\,p_{+-}(\theta)$ is the moment generating function of both $\delta_0$ and of the sum of two independent random variables with respective laws $\mathbb{P}[\log|{}^{-}A| \in \cdot\,|\,{}^{-}A < 0]$ and $\mathbb{P}[\log|{}^{+}A| \in \cdot\,|\,{}^{+}A < 0]$. This sum must therefore vanish a.s., giving
$$\mathbb{P}\big[{}^{-}A = -a^{-1}\,\big|\,{}^{-}A < 0\big] = \mathbb{P}\big[{}^{+}A = -a\,\big|\,{}^{+}A < 0\big] = 1$$
for some $a > 0$. This completes the proof. $\square$
Acknowledgements.
G. Alsmeyer was partially funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure.
D. Buraczewski was partially supported by the National Science Center, Poland (grant number 2019/33/B/ST1/00207).
References
1. D. J. Aldous and A. Bandyopadhyay. A survey of max-type recursive distributional equations. Ann. Appl. Probab., 15(2):1047–1110, 2005.
2. G. Alsmeyer. On the Markov renewal theorem. Stochastic Process. Appl., 50(1):37–56, 1994.
3. G. Alsmeyer. The Markov renewal theorem and related results. Markov Process. Related Fields, 3(1):103–127, 1997.
4. G. Alsmeyer. Quasistochastic matrices and Markov renewal theory. J. Appl. Probab., 51A (Celebrating 50 Years of The Applied Probability Trust):359–376, 2014.
5. G. Alsmeyer. On the stationary tail index of iterated random Lipschitz functions. Stochastic Process. Appl., 126(1):209–233, 2016.
6. G. Alsmeyer and S. Mentemeier. Tail behaviour of stationary solutions of random difference equations: the case of regular matrices. J. Difference Equ. Appl., 18(8):1305–1332, 2012.
7. K. B. Athreya and J. Dai. Random logistic maps. I. J. Theoret. Probab., 13(2):595–608, 2000.
8. K. B. Athreya, D. McDonald, and P. Ney. Limit theorems for semi-Markov processes and renewal theory for Markov chains. Ann. Probab., 6(5):788–797, 1978.
9. M. Borkovec and C. Klüppelberg. The tail of the stationary distribution of an autoregressive process with ARCH(1) errors. Ann. Appl. Probab., 11(4):1220–1241, 2001.
10. S. Brofferio and D. Buraczewski. On unbounded invariant measures of stochastic dynamical systems. Ann. Probab., 43(3):1456–1492, 2015.
11. D. Buraczewski and E. Damek. A simple proof of heavy tail estimates for affine type Lipschitz recursions. Stochastic Process. Appl., 127(2):657–668, 2017.
12. D. Buraczewski, E. Damek, Y. Guivarc'h, and S. Mentemeier. On multidimensional Mandelbrot cascades. J. Difference Equ. Appl., 20(11):1523–1567, 2014.
13. D. Buraczewski, E. Damek, and T. Mikosch. Stochastic models with power-law tails. Springer Series in Operations Research and Financial Engineering. Springer, Cham, 2016. The equation X = AX + B.
14. G. Da Prato and J. Zabczyk. Ergodicity for infinite-dimensional systems, volume 229 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 1996.
15. E. Damek, M. Matsui, and W. Świątkowski. Componentwise different tail solutions for bivariate stochastic recurrence equations with application to GARCH(1,1) processes. Colloq. Math., 155(2):227–254, 2019.
16. E. Damek and J. Zienkiewicz. Affine stochastic equation with triangular matrices. J. Difference Equ. Appl., 24(4):520–542, 2018.
17. B. de Saporta. Tail of the stationary solution of the stochastic equation Y_{n+1} = a_n Y_n + b_n with Markovian coefficients. Stochastic Process. Appl., 115(12):1954–1978, 2005.
18. P. Diaconis and D. Freedman. Iterated random functions. SIAM Rev., 41(1):45–76, 1999.
19. P. Embrechts, C. Klüppelberg, and T. Mikosch. Modelling extremal events, volume 33 of Applications of Mathematics (New York). Springer, Berlin, 1997. For insurance and finance.
20. C. M. Goldie. Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab., 1(1):126–166, 1991.
21. D. Guégan and J. Diebolt. Probabilistic properties of the β-ARCH-model. Statist. Sinica, 4(1):71–87, 1994.
22. Y. Guivarc'h and E. Le Page. Spectral gap properties for linear random walks and Pareto's asymptotics for affine stochastic recursions. Ann. Inst. Henri Poincaré Probab. Stat., 52(2):503–574, 2016.
23. J. Jacod. Théorème de renouvellement et classification pour les chaînes semi-markoviennes. Ann. Inst. H. Poincaré Sect. B (N.S.), 7:83–129, 1971.
24. H. Kesten. Random difference equations and renewal theory for products of random matrices. Acta Math., 131:207–248, 1973.
25. H. Kesten. Renewal theory for functionals of a Markov chain with general state space. Ann. Probability, 2:355–386, 1974.
26. G. Maercker. Statistical inference in conditional heteroscedastic autoregressive models. PhD thesis, Technische Universität Braunschweig, 1997.
27. M. Mirek. Heavy tail phenomenon and convergence to stable laws for iterated Lipschitz maps. Probab. Theory Related Fields, 151(3-4):705–734, 2011.
28. V. M. Shurenkov. On the theory of Markov renewal.