Slowly varying asymptotics for signed stochastic difference equations
Dmitry Korshunov

July 28, 2020
Abstract
For a stochastic difference equation $D_n = A_n D_{n-1} + B_n$ which stabilises in time we study the tail distribution asymptotics of $D_n$ under the assumption that the distribution of $\log(1+|A|+|B|)$ is heavy-tailed, that is, all of its positive exponential moments are infinite. The aim of the present paper is three-fold. Firstly, we identify the asymptotic behaviour not only of the stationary tail distribution but also of that of $D_n$. Secondly, we solve the problem in the general setting where $A$ takes both positive and negative values. Thirdly, we get rid of auxiliary conditions, like finiteness of higher moments, used in the literature before.

MSC: 60H25; 60J10

Let $(A,B)$ be a random vector in $\mathbb{R}^2$ such that $\mathbf{E}\log|A| = -a < 0$. Let $(A_k,B_k)$, $k\in\mathbb{Z}$, be independent copies of $(A,B)$. Consider the following stochastic difference equation
$$D_n = A_n D_{n-1} + B_n = \Pi_1^n D_0 + \sum_{k=1}^{n} \Pi_{k+1}^n B_k, \quad n\ge 1, \eqno(1)$$
where $D_0$ is independent of the $(A_k,B_k)$'s, $\Pi_k^n := A_k\cdot\ldots\cdot A_n$ for $k\le n$ and $\Pi_{n+1}^n := 1$. The process $D_n$ clearly constitutes a Markov chain and satisfies the following equality in distribution:
$$D_n =_{st} \Pi_{-n}^{-1} D_0 + \sum_{k=-n}^{-1} \Pi_{k+1}^{-1} B_k.$$
If $a < \infty$ then, by the strong law of large numbers applied to the logarithm of $|\Pi_1^n|$, with probability $1$, $e^{-2an} \le |\Pi_1^n| \le e^{-an/2}$ ultimately in $n$; hence the process $D_n$, $n\ge 0$, is stochastically bounded if and only if $\mathbf{E}\log(1+|B|) < \infty$. If $\mathbf{P}\{A=0\} > 0$, which implies $a=\infty$, then the process $D_n$ is always stochastically bounded.
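The identity in (1) is easy to sanity-check numerically. The following minimal Python sketch (our own illustration, not part of the paper; the helper name `Pi` for the partial product $\Pi_k^n$ is an assumption) compares the forward recursion with the explicit representation on one sample path:

```python
import random

# Sanity check (our own illustration): the forward recursion
# D_n = A_n D_{n-1} + B_n agrees with the explicit representation in (1),
# D_n = Pi_1^n D_0 + sum_{k=1}^n Pi_{k+1}^n B_k,
# where Pi_k^n = A_k * ... * A_n and the empty product Pi_{n+1}^n = 1.
random.seed(1)
n = 10
A = [random.uniform(-1.5, 1.5) for _ in range(n)]  # A_1, ..., A_n
B = [random.uniform(-1.0, 1.0) for _ in range(n)]  # B_1, ..., B_n
D0 = 0.7

# forward recursion
D = D0
for a, b in zip(A, B):
    D = a * D + b

def Pi(k, m):
    """Partial product A_k * ... * A_m (1-based indices); empty product is 1."""
    p = 1.0
    for j in range(k - 1, m):
        p *= A[j]
    return p

D_explicit = Pi(1, n) * D0 + sum(Pi(k + 1, n) * B[k - 1] for k in range(1, n + 1))
assert abs(D - D_explicit) < 1e-9
```

The same unrolled representation is what produces the backward (stationary) series $D_\infty$ below.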
In both cases, the Markov chain $D_n$ is stable; its stationary distribution is given by the random series
$$D_\infty := \sum_{k=-\infty}^{-1} \Pi_{k+1}^{-1} B_k =_{st} \sum_{k=1}^{\infty} \Pi_1^{k-1} B_k,$$
and $D_n$ weakly converges to the stationary distribution as $n\to\infty$; in the context of financial mathematics such random variables are called stochastic perpetuities. Stability results for $D_n$ are dealt with in [17], see also [2]; the case where $\mathbf{E}\log|A|$ is not necessarily finite is treated in [9].

Both perpetuities and stochastic difference equations have many important applications, among them life insurance and finance, nuclear technology, sociology, random walks and branching processes in random environments, extreme-value analysis, one-dimensional ARCH processes, etc. For particularities, we refer the reader to, for instance, Embrechts and Goldie [5], Rachev and Samorodnitsky [15] and Vervaat [17] for a comprehensive survey of the literature.

If $A\ge 0$ and $\mathbf{P}\{A>1\}>0$, then $\mathbf{E}A^\gamma\to\infty$ as $\gamma\to\infty$, so $\mathbf{E}A^\beta\ge 1$ for some $\beta<\infty$. If in addition $B\ge 0$ and $\mathbf{P}\{B>0\}>0$, then $\mathbf{E}D_\infty^\beta > \mathbf{E}D_\infty^\beta\,\mathbf{E}A^\beta \ge \mathbf{E}D_\infty^\beta$, which implies that $\mathbf{E}D_\infty^\beta = \infty$; in other words, necessarily not all moments of $D_\infty$ are finite; see [8] for a similar conclusion for signed $A$ and $B$. It was proven in the seminal paper by Kesten [11, Theorem 5], see also [7], that if $\mathbf{E}|A|^\beta = 1$ for some $\beta>0$, then a power tail asymptotics for the stationary distribution holds, $\mathbf{P}\{|D_\infty|>x\} \sim c/x^\beta$ as $x\to\infty$, for some $c>0$.

The problem we address in this paper is the tail asymptotic behaviour of $D_n$ and of its stationary version $D_\infty$ in the case where the distribution of $\log|A|$ is heavy-tailed, that is, all positive exponential moments of $\log|A|$ are infinite; in other words, $\mathbf{E}|A|^\gamma = \infty$ for all $\gamma>0$.
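To make the heavy-tailed regime concrete, here is a small numerical illustration of our own (the particular distribution is an assumption chosen for illustration, not taken from the paper): if $\mathbf{P}\{A>x\} = (1+\log(1+x))^{-2}$, then the tail of $A$ is slowly varying at infinity, $x^\gamma\,\mathbf{P}\{A>x\}\to\infty$ for every fixed $\gamma>0$, hence $\mathbf{E}A^\gamma=\infty$ and no Kesten-type power-tail asymptotics can hold:

```python
import math

# Illustration (not from the paper): a slowly varying tail
# P{A > x} = (1 + log(1 + x))**-2, so log(1 + A) has a Pareto-type tail
# and all positive power moments of A are infinite.
def tail(x):
    return (1 + math.log(1 + x)) ** -2

# x**gamma * P{A > x} grows without bound for any fixed gamma > 0
# (here gamma = 0.1, sampled far enough out that growth has set in):
gamma = 0.1
values = [x ** gamma * tail(x) for x in (1e10, 1e20, 1e40, 1e80)]
assert all(a < b for a, b in zip(values, values[1:]))  # strictly increasing

# while the tail itself is slowly varying: tail(c*x) / tail(x) -> 1
ratio = tail(2 * 1e150) / tail(1e150)
assert abs(ratio - 1) < 1e-2
```

This is exactly the regime studied below: $\mathbf{E}|A|^\gamma=\infty$ for all $\gamma>0$.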
This can only happen if the random variable $|A|$ has right-unbounded support. The only result in that direction we are aware of is that by Dyszewski [4] where, in the context of iterated random functions, it is proven that the stationary tail distribution is asymptotically equivalent to
$$\frac{1}{a}\int_x^\infty \mathbf{P}\{\log C > y\}\,dy \quad\text{as } x\to\infty,$$
where $C := \max(A,B)$, provided $A, B \ge 0$, the integrated tail distribution of $\log C$ is subexponential, and under an additional moment condition of the type $\mathbf{E}\log^{1+\gamma} C < \infty$ for some $\gamma > 0$. In the case of a signed $B$, only lower and upper asymptotic bounds are derived in [4]. An alternative approach to lower and upper bounds for the tail of $D_\infty$ is developed in [3] in the case of positive $A$ and $B$.

The aim of the present paper is three-fold. Firstly, we identify the asymptotic behaviour not only of the stationary tail distribution but also of that of $D_n$ in the heavy-tailed case. Secondly, we solve the problem in the general setting where $A$ takes both positive and negative values. Thirdly, we get rid of auxiliary conditions like finiteness of higher moments.

Our approach to the problem is based on a reduction of $D_n$ (roughly speaking, by taking its logarithm) to a Markov chain which is asymptotically homogeneous in space with heavy-tailed jumps, and on further analysis of such chains. Namely, we define a Markov chain $X_n$ on $\mathbb{R}$ as follows:
$$X_n := \begin{cases} \log(1+D_n) & \text{if } D_n\ge 0,\\ -\log(1+|D_n|) & \text{if } D_n<0, \end{cases} \eqno(2)$$
hence the distribution tail of $D_n$ may be computed as
$$\mathbf{P}\{D_n>x\} = \mathbf{P}\{X_n > \log(1+x)\} \quad\text{for } x>0. \eqno(3)$$
At any state $x\ge 0$, the jump of the Markov chain $X_n$ is a random variable distributed as
$$\xi(x) = \begin{cases} \log(1+A(e^x-1)+B)-x & \text{if } A(e^x-1)+B\ge 0,\\ -\log(1+|A(e^x-1)+B|)-x & \text{if } A(e^x-1)+B<0, \end{cases} \eqno(4)$$
and at any state $x\le 0$,
$$\xi(x) = \begin{cases} \log(1+A(1-e^{-x})+B)-x & \text{if } A(1-e^{-x})+B\ge 0,\\ -\log(1+|A(1-e^{-x})+B|)-x & \text{if } A(1-e^{-x})+B<0. \end{cases} \eqno(5)$$
Also define a sequence of independent random fields $\xi_n(x)$, $x\in\mathbb{R}$, which are independent copies of $\xi(x)$. Then the recursion (1) may be rewritten as $X_{n+1} = X_n + \xi_n(X_n)$.

The Markov chain $X_n$ is asymptotically homogeneous in space, that is, the distribution of its jump $\xi(x)$ weakly converges to that of $\xi := \log A$ as $x\to\infty$; this is particularly emphasised in [7, Section 2]. Let us underline that, in general, $\log(A + (1-A+B)e^{-x})$ may not converge to $\xi$ as $x\to\infty$ in the total variation norm.

Asymptotically homogeneous in space Markov chains are studied in detail in [1, 13] from the point of view of their asymptotic tail behaviour in the subexponential case. However, the results for general asymptotically homogeneous in space Markov chains are not directly applicable to stochastic difference equations, as it is formally assumed in [1, Theorem 3] that the distribution of the Markov chain $X_n$ converges to the invariant distribution in the total variation norm, which is not always true for stochastic difference equations. Secondly, stochastic difference equations possess some specific properties that allow us to find the tail asymptotics in a simpler way than it is done in [1, Theorem 3] or in [4, Theorem 3.1]; we explore that below, although our approach still follows some ideas of the proof for Markov chains in [1].

Let us recall some relevant classes of distributions needed in the heavy-tailed case.

Definition 1.
A distribution $H$ with right-unbounded support is called long-tailed, $H\in\mathcal{L}$, if, for each fixed $y$, $\overline{H}(x+y)\sim\overline{H}(x)$ as $x\to\infty$; hereinafter $\overline{H}(x) = H(x,\infty)$ denotes the tail of $H$.

The tail $\mathbf{P}\{A>x\}$ of a random variable $A>0$ is slowly varying at infinity if and only if the distribution of $\xi := \log A$ is long-tailed.

Definition 2.
A distribution $H$ on $\mathbb{R}^+$ with unbounded support is called subexponential, $H\in\mathcal{S}$, if $\overline{H*H}(x)\sim 2\overline{H}(x)$ as $x\to\infty$; equivalently, $\mathbf{P}\{\zeta_1+\zeta_2 > x\} \sim 2\mathbf{P}\{\zeta_1 > x\}$, where the random variables $\zeta_1$ and $\zeta_2$ are independent with common distribution $H$. A distribution $H$ of a random variable $\zeta$ on $\mathbb{R}$ with right-unbounded support is called subexponential if the distribution of $\zeta^+ := \max(\zeta, 0)$ is so.

As is well known (see, e.g., [6, Lemma 3.2]), the subexponentiality of $H$ on $\mathbb{R}^+$ implies the long-tailedness of $H$. In particular, if the distribution of a random variable $\zeta\ge 0$ is subexponential, then $\zeta$ is heavy-tailed.

For a distribution $H$ with finite mean, we define the integrated tail distribution $H_I$ generated by $H$ as follows:
$$\overline{H_I}(x) := \min\Big(1, \int_x^\infty \overline{H}(y)\,dy\Big).$$

Definition 3.
A distribution $H$ on $\mathbb{R}^+$ with unbounded support and finite mean is called strong subexponential, $H\in\mathcal{S}^*$, if
$$\int_0^x \overline{H}(x-y)\overline{H}(y)\,dy \sim 2m\overline{H}(x) \quad\text{as } x\to\infty,$$
where $m$ is the mean value of $H$. It is known that if $H\in\mathcal{S}^*$ then both $H$ and $H_I$ are subexponential distributions; see, e.g., [6, Theorem 3.27].

In what follows we use the following notation for distributions: we denote
(i) the distribution of $\log(1+|A|+|B|)$ by $H$;
(ii) the distribution of $\log(1+|A|)$ by $F$;
(iii) the distribution of $\log(1+|B|)$ by $G$;
(iv) the distribution of $\log(1+B^+)$ by $G^+$;
(v) the distribution of $\log(1+B^-)$ by $G^-$.

The paper is organised as follows. In Sections 2, 4 and 5 we assume that $\log|A|$ has finite negative mean and successively investigate three different cases in order of increasing difficulty: (i) both $A$ and $B$ are positive, see Theorem 1; (ii) $A$ is positive and $B$ is a signed random variable, see Theorem 6; (iii) both $A$ and $B$ are signed, see Theorem 7. In case (i) we also explain in Theorem 4 the most probable way by which large deviations of $D_n$ can occur; it is a version of the principle of a single big jump, which plays the key role in the theory of subexponential distributions. The aim of Section 3 is to explain what happens if the distribution of $A$ has an atom at zero; in that case the tail asymptotics of $D_n$ are essentially different from what we observe if $A$ has no atom at zero.

2 Positive $A$ and $B$

In this section we consider a positive $D_n$, so $A > 0$, $B \ge 0$; we exclude the case where $A$ has an atom at zero, as then the tail asymptotics of $D_n$ are essentially different, see the next section. Then the Markov chain $X_n := \log(1+D_n)$ is positive too. As above, we denote $\xi := \log A$ and the distribution of the random variable $\log(1+A+B)$ by $H$.

Theorem 1.
Suppose that $A > 0$, $B \ge 0$, $\mathbf{E}\xi = -a \in (-\infty, 0)$ and $\mathbf{E}\log(1+B) < \infty$, so that $D_n$ is positive recurrent.

If the integrated tail distribution $H_I$ is long-tailed, then
$$\mathbf{P}\{D_\infty > x\} \ge (a^{-1}+o(1))\,\overline{H_I}(\log x) \quad\text{as } x\to\infty. \eqno(6)$$
If, in addition, the distribution $H$ is long-tailed itself, then
$$\mathbf{P}\{D_n > x\} \ge \frac{1+o(1)}{a}\int_{\log x}^{\log x + na}\overline{H}(y)\,dy \quad\text{as } x\to\infty \text{ uniformly for all } n\ge 1. \eqno(7)$$
If the integrated tail distribution $H_I$ is subexponential, then
$$\mathbf{P}\{D_\infty > x\} \sim a^{-1}\overline{H_I}(\log x) \quad\text{as } x\to\infty. \eqno(8)$$
If moreover the distribution $H$ is strong subexponential, then
$$\mathbf{P}\{D_n > x\} \sim \frac{1}{a}\int_{\log x}^{\log x + na}\overline{H}(y)\,dy \quad\text{as } x\to\infty \text{ uniformly for all } n\ge 1. \eqno(9)$$

The main contribution of Theorem 1 is (9), which states uniform asymptotic behaviour for all $n\ge 1$. It is much stronger than the rather simple conclusion that (9) holds for a fixed $n$, proven earlier by Dyszewski in [4, Theorem 3.3] by an induction argument that clearly does not work if we want to describe the tail asymptotics for the entire range of $n\ge 1$.

In [4], a sufficient condition for the asymptotics (8) is formulated in terms of the distribution of $\log\max(A,B)$ instead of $H$. Let us show that these two approaches are equivalent. Indeed, for any two positive random variables $A$ and $B$, since
$$\max(\log(1+A), \log(1+B)) \le \log(1+A+B) < \log 2 + \max(\log(1+A), \log(1+B)),$$
it follows that
(i) the distribution $H$ is long-tailed/subexponential/strong subexponential if and only if the distribution of $\max(\log(1+A),\log(1+B))$ is long-tailed/subexponential/strong subexponential, respectively;
(ii) the distribution $H_I$ is subexponential if and only if the integrated tail distribution of $\max(\log(1+A),\log(1+B))$ is so.

Denote the distribution of $\log(1+A)$ by $F$ and that of $\log(1+B)$ by $G$. In the next result we discuss some sufficient conditions for subexponentiality and related properties of $H$.

Lemma 2.
Let $A$ and $B$ be any two positive random variables such that either of the following two conditions holds:
(i) the distribution $H$ of $\log(1+A+B)$ is long-tailed, or
(ii) the random variables $A$ and $B$ are independent.
Then, if the distribution $(F+G)/2$ is subexponential or strong subexponential, the distribution $H$ is subexponential or strong subexponential, respectively. If the integrated tail distribution $(F_I+G_I)/2$ is subexponential, then $H_I$ is subexponential too.

Proof. First assume that (i) holds. On the one hand,
$$\overline{H}(x) = \mathbf{P}\{\log(1+A+B) > x\} \ge \max\big(\mathbf{P}\{\log(1+A) > x\},\, \mathbf{P}\{\log(1+B) > x\}\big) \ge \big(\overline{F}(x)+\overline{G}(x)\big)/2, \eqno(10)$$
and thus, for all sufficiently large $x$,
$$\overline{H_I}(x) \ge \big(\overline{F_I}(x)+\overline{G_I}(x)\big)/2. \eqno(11)$$
On the other hand,
$$\overline{H}(x) \le \mathbf{P}\{\log(1+2A) > x\} + \mathbf{P}\{\log(1+2B) > x\} \le \overline{F}(x-\log 2) + \overline{G}(x-\log 2). \eqno(12)$$
If $(F+G)/2$ is subexponential, then it is long-tailed and hence
$$\overline{H}(x) \le (1+o(1))\big(\overline{F}(x)+\overline{G}(x)\big) \quad\text{as } x\to\infty. \eqno(13)$$
If $(F_I+G_I)/2$ is subexponential, then similarly
$$\overline{H_I}(x) \le (1+o(1))\big(\overline{F_I}(x)+\overline{G_I}(x)\big) \quad\text{as } x\to\infty. \eqno(14)$$
The two bounds (13) and (10), in the case of a long-tailed $H$, allow us to apply Theorem 3.11 or 3.25 from [6] and to conclude the subexponentiality or strong subexponentiality of $H$, respectively, provided $(F+G)/2$ is so. The two bounds (14) and (11), in the case of a long-tailed $H_I$, allow us to apply Theorem 3.11 from [6] and to conclude the subexponentiality of $H_I$ provided $(F_I+G_I)/2$ is so.

Now let us consider the case where $A$ and $B$ are independent, which yields the following improvement on the lower bound (10): for all $x > 0$,
$$\overline{H}(x) \ge \mathbf{P}\{\log(1+A) > x\} + \mathbf{P}\{\log(1+A)\le x\}\,\mathbf{P}\{\log(1+B) > x\} = \overline{F}(x) + F(x)\overline{G}(x) \sim \overline{F}(x)+\overline{G}(x)$$
as $x\to\infty$. Therefore, $H$ inherits the tail properties of the distribution $(F+G)/2$, and $H_I$ the tail properties of $(F_I+G_I)/2$.

Proof of Theorem 1.
At any state $x \ge 0$, the Markov chain $X_n$ has jump
$$\xi(x) = \log(1+A(e^x-1)+B) - x = \log(A + e^{-x}(1-A+B)) \ge \log(A - e^{-x}A),$$
as $B \ge 0$. Fix an $\varepsilon > 0$. Choose $x_0$ sufficiently large such that $\log(1-e^{-x_0}) \ge -\varepsilon/2$. Then the family of jumps $\xi(x)$, $x\ge x_0$, possesses an integrable minorant
$$\xi(x) \ge \xi + \log(1-e^{-x_0}) \ge \xi - \varepsilon/2 =: \eta. \eqno(15)$$
On the other hand, since $A > 0$ and $B \ge 0$, the family of jumps $\xi(x)$, $x\ge x_0$, possesses an integrable majorant $\zeta(x_0) := \log(A + e^{-x_0}(1+B))$. For a sufficiently large $x_0$,
$$\mathbf{E}\log(A + e^{-x_0}(1+B)) \le \mathbf{E}\xi + \varepsilon, \eqno(16)$$
owing to the dominated convergence theorem, which applies because, firstly, $\log(A + e^{-x}(1+B)) \to \log A = \xi$ a.s. as $x\to\infty$ and, secondly, by the concavity of the function $\log(1+z)$,
$$\log(A + e^{-x}(1+B)) < \log(1+A+e^{-x}(1+B)) \le \log(1+A) + \log(1+e^{-x}(1+B)),$$
which is integrable by the finiteness of $\mathbf{E}\xi$ and $\mathbf{E}\log(1+B)$.

Let us first prove the lower bound (6), following the single big jump technique known from the theory of subexponential distributions. Since $D_n$ is assumed to be convergent, the associated Markov chain $X_n$ is stable, so there exists a $c > 0$ such that
$$\mathbf{P}\{X_n \in (1/c, c]\} \ge 1-\varepsilon \quad\text{for all } n.$$
Let us consider the event
$$\Omega(k,n,c) := \{\eta_{k+1} + \ldots + \eta_{k+j} \ge -c - n(a+\varepsilon) \text{ for all } j\le n\}, \eqno(17)$$
where the $\eta_k$ are independent copies of $\eta$ defined in (15). By the strong law of large numbers, there exists a sufficiently large $c$ such that
$$\mathbf{P}\{\Omega(k,n,c)\} \ge 1-\varepsilon \quad\text{for all } k \text{ and } n. \eqno(18)$$
It follows from (15) that any of the events
$$\{X_{k-1}\le c,\ X_k > x+c+(n-k)(a+\varepsilon),\ \Omega(k, n-k, c)\} \eqno(19)$$
implies $X_n > x$, and they are pairwise disjoint. Therefore, by the Markov property and (18),
$$\mathbf{P}\{X_n > x\} \ge \sum_{k=1}^n \mathbf{P}\{X_{k-1}\le c,\ X_k > x+c+(n-k)(a+\varepsilon)\}\,\mathbf{P}\{\Omega(k,n-k,c)\}$$
$$\ge (1-\varepsilon)\sum_{k=1}^n \mathbf{P}\{X_{k-1}\in(1/c,c],\ X_k > x+c+(n-k)(a+\varepsilon)\}.$$
The $k$-th probability on the right-hand side equals
$$\int_{1/c}^{c} \mathbf{P}\{X_{k-1}\in dy\}\,\mathbf{P}\{y+\xi(y) > x+c+(n-k)(a+\varepsilon)\} = \int_{1/c}^{c} \mathbf{P}\{X_{k-1}\in dy\}\,\mathbf{P}\{\log(1+A(e^y-1)+B) > x+c+(n-k)(a+\varepsilon)\}.$$
For all $y > 1/c$,
$$\log(1+A(e^y-1)+B) \ge \log(1+A(e^{1/c}-1)+B) \ge \log(1+A+B) + \log(e^{1/c}-1),$$
because $e^{1/c}-1 < \sqrt{e}-1 < 1$. Therefore, the value of the last integral is not less than
$$\mathbf{P}\{X_{k-1}\in(1/c,c]\}\,\mathbf{P}\{\log(1+A+B) > x+c_1+(n-k)(a+\varepsilon)\},$$
where $c_1 := c - \log(e^{1/c}-1)$. Hence, due to the choice of $c$,
$$\mathbf{P}\{X_n > x\} \ge (1-\varepsilon)^2\sum_{k=1}^n \overline{H}(x+c_1+(n-k)(a+\varepsilon)).$$
Since the tail is a decreasing function, the last sum is not less than
$$\frac{1}{a+\varepsilon}\int_0^{n(a+\varepsilon)} \overline{H}(x+c_1+y)\,dy. \eqno(20)$$
Letting $n\to\infty$, we obtain that the tail at point $x$ of the stationary distribution of the Markov chain $X_n$ is not less than
$$\frac{(1-\varepsilon)^2}{a+\varepsilon}\int_0^\infty \overline{H}(x+c_1+y)\,dy = \frac{(1-\varepsilon)^2}{a+\varepsilon}\overline{H_I}(x+c_1) \sim \frac{(1-\varepsilon)^2}{a+\varepsilon}\overline{H_I}(x) \quad\text{as } x\to\infty,$$
due to the long-tailedness of the integrated tail distribution $H_I$. Summarising, we deduce that, for every fixed $\varepsilon > 0$,
$$\liminf_{x\to\infty} \frac{\mathbf{P}\{D_\infty > x\}}{\overline{H_I}(\log x)} \ge \frac{(1-\varepsilon)^2}{a+\varepsilon},$$
which implies the lower bound (6) due to the arbitrary choice of $\varepsilon > 0$.

If the distribution $H$ is long-tailed itself, then the integral in (20) is asymptotically equivalent to the integral
$$\int_x^{x+n(a+\varepsilon)} \overline{H}(y)\,dy \quad\text{as } x\to\infty \text{ uniformly for all } n\ge 1,$$
which implies the second lower bound (7).

Now let us turn to the asymptotic upper bound under the assumption that the integrated tail distribution $H_I$ is subexponential. Fix an $\varepsilon\in(0,a)$. Let $x_0$ be defined as in (16), so $\mathbf{E}\zeta(x_0) \le -a+\varepsilon$. Let $J$ be the distribution of $\zeta(x_0)$. Since
$$\log(1+A+B) - x_0 \le \zeta(x_0) \le \log(1+A+B),$$
we have $\overline{H}(x+x_0) \le \overline{J}(x) \le \overline{H}(x)$. Then the subexponentiality of $H_I$ yields the subexponentiality of the integrated tail distribution $J_I$, and $\overline{J_I}(x) \sim \overline{H_I}(x)$ as $x\to\infty$. By the construction of $\zeta(x_0)$,
$$x + \xi(x) \le y + \zeta(x_0) \quad\text{for all } y \ge x \ge x_0. \eqno(21)$$
Also, by the positivity of $A$,
$$x + \xi(x) = \log(1+A(e^x-1)+B) \le \log(1+A(e^{x_0}-1)+B) = x_0 + \xi(x_0) \le x_0 + \zeta(x_0) \quad\text{for all } x\le x_0. \eqno(22)$$
Consider a random walk $Z_n$ delayed at the origin with jumps $\zeta_n(x_0)$: $Z_0 := 0$, $Z_n := (Z_{n-1} + \zeta_n(x_0))^+$, where the $\zeta_n(x_0)$ are independent copies of $\zeta(x_0)$. The upper bounds (21) and (22) yield that the two chains $X_n$ and $Z_n$ can be constructed on a common probability space in such a way that, with probability $1$,
$$X_n \le x_0 + Z_n \quad\text{for all } n, \eqno(23)$$
so $X_n$ is dominated by a random walk on $[x_0,\infty)$ delayed at the point $x_0$. Since the integrated tail distribution $J_I$ is subexponential, the tail of the invariant measure of the chain $Z_n$ is asymptotically equivalent to $\overline{J_I}(x)/(a-\varepsilon) \sim \overline{H_I}(x)/(a-\varepsilon)$ as $x\to\infty$; see, for example, [6, Theorem 5.2]. Thus, the tail of the invariant measure of $X_n$ is asymptotically not greater than $\overline{H_I}(x-x_0)/(a-\varepsilon)$, which is equivalent to $\overline{H_I}(x)/(a-\varepsilon)$, since $H_I$ is long-tailed by subexponentiality. Hence,
$$\limsup_{x\to\infty} \frac{\mathbf{P}\{D_\infty > x\}}{\overline{H_I}(\log x)} \le \frac{1}{a-\varepsilon}.$$
Due to the arbitrary choice of $\varepsilon > 0$ and the lower bound proven above, this completes the proof of the first asymptotics (8).

The same arguments with the same majorant (23) allow us to conclude the finite time horizon asymptotics (9) for $D_n$ if we apply Theorem 5.3 from [6] instead of Theorem 5.2.

Theorem 1 makes it possible to identify a moment of time after which the tail distribution of $D_n$ is equivalent to that of $D_\infty$, in some particular strong subexponential cases.

Corollary 3.
Suppose that $\mathbf{E}\log A = -a < 0$, $B > 0$ and $\mathbf{E}\log(1+B) < \infty$.

If the distribution $H$ of $\log(1+A+B)$ is regularly varying at infinity with index $\alpha < -1$, then $\mathbf{P}\{D_n > x\} \sim \mathbf{P}\{D_\infty > x\}$ as $n, x\to\infty$ if and only if $n/\log x \to \infty$.

If $\overline{H}(x) \sim e^{-x^\beta}$ for some $\beta\in(0,1)$, then $\mathbf{P}\{D_n > x\} \sim \mathbf{P}\{D_\infty > x\}$ as $n, x\to\infty$ if and only if $n/\log^{1-\beta} x \to \infty$.

We conclude this section with a version of the principle of a single big jump for $D_n$. For any $c > 0$ and $\varepsilon > 0$ consider the events
$$\Omega_k := \big\{1/c < X_{k-1} \le c,\ X_k > \log x + c + (n-k)(a+\varepsilon),\ |X_{k+j} - X_k + aj| \le c + j\varepsilon \text{ for all } j\le n-k\big\}$$
or, in terms of $D_n$,
$$\Omega_k^D := \big\{1/c < D_{k-1} \le c,\ A_k/c + B_k > x e^{c+(n-k)(a+\varepsilon)},\ e^{-c-j(a+\varepsilon)} \le D_{k+j}/D_k \le e^{c-j(a-\varepsilon)} \text{ for all } j\le n-k\big\}.$$
Roughly speaking, this describes a trajectory such that, for large $x$, the value $D_{k-1}$ is neither too far away from zero nor too close to it, then a single big jump occurs, to which both $A_k$ and $B_k$ may contribute, and then the logarithm of $D_{k+j}$, $j\le n-k$, moves down according to the strong law of large numbers with drift $-a$. As stated in the next theorem, the union of all these events describes, more precisely than the lower bound of Theorem 1, the most probable way by which large deviations of $D_n$ occur.

Theorem 4.
Let the distribution $H$ of $\log(1+A+B)$ be strong subexponential. Then, for any fixed $\varepsilon > 0$,
$$\lim_{c\to\infty}\lim_{x\to\infty}\inf_{n\ge 1} \mathbf{P}\Big\{\bigcup_{k=0}^{n-1}\Omega_k \,\Big|\, D_n > x\Big\} = 1.$$

Proof.
The events $\Omega_k$, $k\le n$, are pairwise disjoint, and any of them implies $\{X_n > \log x\}$. Then arguments similar to those in the proof of the lower bound in Theorem 1 apply.

3 Impact of atom at zero
In this section we demonstrate what happens if the distribution of $A$ has an atom at zero. It turns out that then the tail asymptotics of $D_n$ are essentially different (they are proportional to the tail $\overline{H}$, which is lighter than the integrated tail $\overline{H_I}$ arising in the case where $A > 0$), because the chain satisfies Doeblin's condition, see e.g. [14, Ch. 16]. As above, we denote by $H$ the distribution of the random variable $\log(1+A+B)$. For simplicity, we assume that $B > 0$.

Theorem 5.
Suppose that $A \ge 0$, $B > 0$ and $p := \mathbf{P}\{A = 0\} \in (0, 1)$. If the distribution $H$ is long-tailed and $D_0 > 0$, then
$$\mathbf{P}\{D_n > x\} \ge \Big(\frac{1-(1-p)^n}{p} + o(1)\Big)\overline{H}(\log x) \eqno(24)$$
as $x\to\infty$ uniformly for all $n\ge 1$. In particular,
$$\mathbf{P}\{D_\infty > x\} \ge (p^{-1}+o(1))\overline{H}(\log x) \quad\text{as } x\to\infty. \eqno(25)$$
If the distribution $H$ is subexponential, $D_0 > 0$ and $\mathbf{P}\{D_0 > x\} = o(\overline{H}(\log x))$, then
$$\mathbf{P}\{D_n > x\} \sim \frac{1-(1-p)^n}{p}\overline{H}(\log x) \eqno(26)$$
as $x\to\infty$ uniformly for all $n\ge 1$. In particular,
$$\mathbf{P}\{D_\infty > x\} \sim p^{-1}\overline{H}(\log x) \quad\text{as } x\to\infty. \eqno(27)$$

Proof.
Let $\widetilde{H}$ be the distribution of $\log(1+A+B)$ conditioned on $A > 0$ and $\widetilde{G}$ be the distribution of $\log(1+B)$ conditioned on $A = 0$; then $H = p\widetilde{G} + (1-p)\widetilde{H}$.

Let us decompose the event $\{X_n > x\}$ according to the last zero value among the $A_k$, which gives the equality
$$\mathbf{P}\{X_n > x\} = \mathbf{P}\{A_1,\ldots,A_n > 0,\ X_n > x\} + \sum_{k=1}^n \mathbf{P}\{A_k = 0,\ A_{k+1} > 0,\ldots,A_n > 0,\ X_n > x\}$$
$$= (1-p)^n\,\mathbf{P}\{X_n > x \mid A_1,\ldots,A_n > 0\} + p\sum_{k=1}^n (1-p)^{n-k}\,\mathbf{P}\{X_n > x \mid A_k = 0,\ A_{k+1},\ldots,A_n > 0\}$$
$$= (1-p)^n\,\mathbf{P}\{X_n > x \mid A_1,\ldots,A_n > 0\} + p\sum_{k=0}^{n-1}(1-p)^k\,\mathbf{P}\{X_{k+1} > x \mid A_1 = 0,\ A_2,\ldots,A_{k+1} > 0\}, \eqno(28)$$
by the Markov property. In particular, the sum from $0$ to $n-1$ on the right-hand side is increasing as $n$ grows, as all of its terms are positive. For that reason, for the lower bounds for $\mathbf{P}\{D_n > x\}$ it suffices to prove by induction that, for any fixed $k\ge 0$ and $\gamma > 0$, there exists a $c < \infty$ such that
$$\mathbf{P}\{X_{k+1} > x \mid A_1 = 0,\ A_2,\ldots,A_{k+1} > 0\} \ge (1-\gamma)\big(\overline{\widetilde{G}}(x+c) + k\overline{\widetilde{H}}(x+c)\big), \eqno(29)$$
$$\mathbf{P}\{X_{k+1} > x \mid A_1,\ldots,A_{k+1} > 0\} \ge (1-\gamma)(k+1)\overline{\widetilde{H}}(x+c) \eqno(30)$$
for all sufficiently large $x$, because then
$$\mathbf{P}\{X_n > x\} \ge (1-\gamma)\Big((1-p)^n n\overline{\widetilde{H}}(x+c) + p\sum_{k=0}^{n-1}(1-p)^k\big(\overline{\widetilde{G}}(x+c) + k\overline{\widetilde{H}}(x+c)\big)\Big)$$
$$= (1-\gamma)\big(1-(1-p)^n\big)\Big(\overline{\widetilde{G}}(x+c) + \frac{1-p}{p}\overline{\widetilde{H}}(x+c)\Big) = (1-\gamma)\frac{1-(1-p)^n}{p}\overline{H}(x+c),$$
with a further application of the long-tailedness of $H$.

To prove (29), first note that the induction basis $k = 0$ is immediate, since the distribution of $X_1$ conditioned on $A_1 = 0$ is $\widetilde{G}$. Now assume that (29) is true for some $k$. Denote
$$G_k(dy) := \mathbf{P}\{X_{k+1}\in dy \mid A_1 = 0,\ A_2,\ldots,A_{k+1} > 0\}, \quad k\ge 0,$$
which is a distribution on $(0,\infty)$. Then
$$\overline{G_{k+1}}(x) = \int_0^\infty \mathbf{P}\{\log(1+A(e^y-1)+B) > x \mid A > 0\}\,G_k(dy)$$
$$\ge \int_\varepsilon^{1/\varepsilon} \mathbf{P}\{\log(1+A\delta+B) > x \mid A > 0\}\,G_k(dy) + \int_{x+1/\varepsilon}^\infty \mathbf{P}\{\log(A(e^y-1)) > x \mid A > 0\}\,G_k(dy) =: I_1 + I_2,$$
for any $\varepsilon\in(0,1/2)$, where $\delta := e^\varepsilon - 1 < \sqrt{e}-1 < 1$. Let us observe that then
$$\mathbf{P}\{\log(1+A\delta+B) > x \mid A > 0\} = \mathbf{P}\{\log(1/\delta+A+B/\delta) > x - \log\delta \mid A > 0\} \ge \overline{\widetilde{H}}(x-\log\delta),$$
so that $I_1 \ge \overline{\widetilde{H}}(x-\log\delta)\,G_k(\varepsilon, 1/\varepsilon]$. The second integral may be bounded below as follows:
$$I_2 \ge \mathbf{P}\{\log(A(e^{x+1/\varepsilon}-1)) > x \mid A > 0\}\,\overline{G_k}(x+1/\varepsilon) \ge \mathbf{P}\{\log(Ae^{x+1/2\varepsilon}) > x \mid A > 0\}\,\overline{G_k}(x+1/\varepsilon) = \mathbf{P}\{A > e^{-1/2\varepsilon} \mid A > 0\}\,\overline{G_k}(x+1/\varepsilon),$$
for all sufficiently large $x$. Letting $\varepsilon\to 0$, we obtain that, for any fixed $\gamma > 0$, there exists a $c < \infty$ such that the following lower bound holds:
$$\overline{G_{k+1}}(x) \ge (1-\gamma)\big(\overline{\widetilde{H}}(x+c) + \overline{G_k}(x+c)\big)$$
for all sufficiently large $x$, which implies the induction step. The second lower bound, (30), follows by similar arguments, provided $D_0 > 0$.

Let us now proceed with a matching upper bound under the assumption that $H$ is a subexponential distribution. Since $A, B \ge 0$,
$$\xi(x) = \log(A + e^{-x}(1-A+B)) \eqno(31)$$
$$\le \log(1+A+B) \quad\text{for all } x > 0. \eqno(32)$$
Let $\eta$ and $\zeta$ be random variables with the following tail distributions:
$$\mathbf{P}\{\eta > x\} = \min\Big(1, \frac{\mathbf{P}\{\log(1+A+B) > x\}}{\mathbf{P}\{A = 0\}}\Big), \quad \mathbf{P}\{\zeta > x\} = \min\Big(1, \frac{\mathbf{P}\{\log(1+A+B) > x\}}{\mathbf{P}\{A > 0\}}\Big), \quad x > 0.$$
Both are subexponential random variables provided $\log(1+A+B)$ is so; see e.g. [6, Corollary 3.13]. It follows from (31)-(32) that, for all $x > 0$,
$$\mathbf{P}\{\xi(x) > y \mid A = 0\} \le \mathbf{P}\{\eta > y\}, \quad \mathbf{P}\{\xi(x) > y \mid A > 0\} \le \mathbf{P}\{\zeta > y\},$$
which implies that
$$\mathbf{P}\{X_{k+1} > x \mid A_1 = 0,\ A_2,\ldots,A_{k+1} > 0\} \le \mathbf{P}\{\eta + \zeta_1 + \ldots + \zeta_k > x\},$$
where the $\zeta_i$'s are independent copies of $\zeta$, independent of $\eta$.
Then a standard technique based on Kesten's bound for convolutions of subexponential distributions, see e.g. Theorem 3.39 in [6], allows us to deduce from (28) that, for any fixed $\gamma > 0$,
$$\mathbf{P}\{X_n > x\} \le (1+\gamma)\Big((1-p)^n n\overline{\widetilde{H}}(x) + p\sum_{k=0}^{n-1}(1-p)^k\big(\overline{\widetilde{G}}(x) + k\overline{\widetilde{H}}(x)\big)\Big)$$
for all $n\ge 1$ and sufficiently large $x$. Therefore,
$$\mathbf{P}\{X_n > x\} \le (1+\gamma)\frac{1-(1-p)^n}{p}\overline{H}(x),$$
which together with the lower bound proves (26).

4 Positive $A$ and signed $B$

In this section we consider the case where $D_n$ takes both positive and negative values because of a signed $B$, while $A$ is still assumed positive, $A > 0$. The Markov chain $X_n$ is defined as in (2).

As $B$ is no longer assumed positive, the tail behaviour of $D_\infty$ becomes quite different if no further assumptions are made on the dependence between $A$ and $B$. For example, in the extreme case where $B = -cA$ for some $c > 0$, so that $D_{n+1} = A_n(D_n - c)$, the process $D_n$ is eventually negative: $D_\infty < 0$ with probability $1$. More generally, if $B = A\eta$ where $\eta$ is independent of $A$ and takes values of both signs, then we conclude, similarly to (6), that, as $x\to\infty$,
$$\mathbf{P}\{D_\infty > x\} \ge \Big(\frac{1}{a}\int_{\mathbb{R}} \mathbf{P}\{\eta > -c\}\,\mathbf{P}\{D_\infty\in dc\} + o(1)\Big)\overline{F_I}(\log x),$$
provided the distribution $F_I$ is long-tailed. However, the technique used in Section 2 for proving the matching upper bound does not work in such cases, as the Lindley majorant returns the coefficient $a^{-1}$, which is greater than the coefficient in the lower bound above. For that reason we restrict further considerations to the case where $A$ and $B$ are independent.

Theorem 6.
Suppose that $A > 0$, $A$ and $B$ are independent, $\mathbf{E}\xi = -a\in(-\infty,0)$ and $\mathbf{E}\log(1+|B|) < \infty$.

If the integrated tail distributions $F_I$ and $G^+_I$ are long-tailed, then
$$\mathbf{P}\{D_\infty > x\} \ge (a^{-1}+o(1))\big(\mathbf{P}\{D_\infty > 0\}\,\overline{F_I}(\log x) + \overline{G^+_I}(\log x)\big) \quad\text{as } x\to\infty. \eqno(33)$$
If, in addition, the distributions $F$ and $G^+$ are long-tailed themselves, then, as $x, n\to\infty$,
$$\mathbf{P}\{D_n > x\} \ge \frac{1+o(1)}{a}\Big(\mathbf{P}\{D_\infty > 0\}\int_{\log x}^{\log x + na}\overline{F}(y)\,dy + \int_{\log x}^{\log x + na}\overline{G^+}(y)\,dy\Big). \eqno(34)$$
If $\mathbf{P}\{D_\infty = 0\} = 0$, the integrated tail distributions $F_I$, $G^+_I$ and $G^-_I$ are long-tailed, $\overline{G^-_I}(z) = O(\overline{F_I}(z) + \overline{G^+_I}(z))$ and $H_I$ is subexponential, then
$$\mathbf{P}\{D_\infty > x\} \sim a^{-1}\big(\mathbf{P}\{D_\infty > 0\}\,\overline{F_I}(\log x) + \overline{G^+_I}(\log x)\big) \quad\text{as } x\to\infty. \eqno(35)$$
If moreover the distributions $F$, $G^+$ and $G^-$ are long-tailed, $\overline{G^-}(z) = O(\overline{F}(z) + \overline{G^+}(z))$ and $H$ is strong subexponential, then, as $x, n\to\infty$,
$$\mathbf{P}\{D_n > x\} \sim \frac{1}{a}\Big(\mathbf{P}\{D_\infty > 0\}\int_{\log x}^{\log x + na}\overline{F}(y)\,dy + \int_{\log x}^{\log x + na}\overline{G^+}(y)\,dy\Big). \eqno(36)$$

Proof.
Fix an $\varepsilon > 0$. As follows from (4), for $x\ge 0$,
$$\xi(x) \ge \begin{cases} \log(A(1-e^{-x}) - e^{-x}B^-) & \text{if } A(e^x-1)+B \ge 0,\\ \log A - 2\log(1+A+|B|) & \text{if } A(e^x-1)+B < 0, \end{cases}$$
where the second case follows since $A > 0$ (indeed, $A(e^x-1)+B < 0$ implies $|A(e^x-1)+B| \le |B|$ and $x < \log(1+|B|/A)$). The minorant on the right-hand side is stochastically increasing as $x$ grows; therefore, there exist a sufficiently large $x_0$ and a random variable $\eta$ such that
$$\xi(x) \ge \eta \quad\text{for all } x\ge x_0, \quad\text{and}\quad \mathbf{E}\eta > -a-\varepsilon/2. \eqno(37)$$
As in the last proof, we start with the lower bound (33), following the single big jump technique. Since $D_n$ is assumed to be convergent, the associated Markov chain $X_n$ is stable, so there exist $n_0$ and $c > 0$ such that
$$\mathbf{P}\{X_n\in(1/c,c]\} \ge (1-\varepsilon)\mathbf{P}\{D_\infty > 0\} \quad\text{for all } n\ge n_0, \qquad \mathbf{P}\{|X_n|\le c\} \ge 1-\varepsilon \quad\text{for all } n,$$
and also $\mathbf{P}\{A\le c\}\ge 1-\varepsilon$, $\mathbf{P}\{|B|\le c\}\ge 1-\varepsilon$. For all $k$, $n$ and $c$, consider the events $\Omega(k,n,c)$ defined in (17) and satisfying (18). It follows from (37) that any of the events (19) implies $X_n > x$, and they are pairwise disjoint. Therefore, by the Markov property and (18),
$$\mathbf{P}\{X_n > x\} \ge \sum_{k=1}^n \mathbf{P}\{X_{k-1}\le c,\ X_k > x+c+(n-k)(a+\varepsilon)\}\,\mathbf{P}\{\Omega(k,n-k,c)\}$$
$$\ge (1-\varepsilon)\sum_{k=1}^n \mathbf{P}\{X_{k-1}\le c,\ X_k > x+c+(n-k)(a+\varepsilon)\}. \eqno(38)$$
The $k$-th term of the sum is not less than
$$\Big(\int_{-c}^0 + \int_0^c\Big)\mathbf{P}\{X_{k-1}\in dy\}\,\mathbf{P}\{y+\xi(y) > z_{n-k}\}$$
$$= \int_{-c}^0 \mathbf{P}\{X_{k-1}\in dy\}\,\mathbf{P}\{\log(1+A(1-e^{-y})+B) > z_{n-k}\} + \int_0^c \mathbf{P}\{X_{k-1}\in dy\}\,\mathbf{P}\{\log(1+A(e^y-1)+B) > z_{n-k}\} =: I_1 + I_2,$$
where $z_k := x+c+k(a+\varepsilon)$. For all $y\in[-c,0]$ and $z > 0$, owing to the condition $A > 0$ and the independence of $A$ and $B$,
$$\mathbf{P}\{\log(1+A(1-e^{-y})+B) > z\} \ge \mathbf{P}\{\log(1-Ae^c+B) > z\} \ge \mathbf{P}\{A\le c\}\,\mathbf{P}\{\log(1-ce^c+B) > z\} \ge \mathbf{P}\{A\le c\}\,\overline{G^+}(z+1)$$
for all sufficiently large $z$, which yields that
$$I_1 \ge \mathbf{P}\{A\le c\}\,\mathbf{P}\{X_{k-1}\in[-c,0]\}\,\overline{G^+}(z_{n-k}+1) \ge (1-\varepsilon)\,\mathbf{P}\{X_{k-1}\in[-c,0]\}\,\overline{G^+}(z_{n-k}+1), \eqno(39)$$
due to the choice of $c$. For all $y > 0$,
$$\mathbf{P}\{\log(1+A(e^y-1)+B) > z\} \ge \mathbf{P}\{|B|\le c\}\,\mathbf{P}\{\log(1+A(e^y-1)-c) > z\} + \mathbf{P}\{\log(1+B) > z\},$$
which yields that
$$I_2 \ge \mathbf{P}\{|B|\le c\}\int_{1/c}^c \mathbf{P}\{\log(1+A(e^y-1)-c) > z_{n-k}\}\,\mathbf{P}\{X_{k-1}\in dy\} + \overline{G^+}(z_{n-k})\,\mathbf{P}\{X_{k-1}\in(0,c]\}$$
$$\ge (1-\varepsilon)\,\mathbf{P}\{\log(1+A(e^{1/c}-1)-c) > z_{n-k}\}\,\mathbf{P}\{X_{k-1}\in(1/c,c]\} + \overline{G^+}(z_{n-k})\,\mathbf{P}\{X_{k-1}\in(0,c]\}.$$
Hence, due to the choice of $c$, for all sufficiently large $x$ and $k > n_0$,
$$I_2 \ge (1-\varepsilon)^2\,\mathbf{P}\{D_\infty > 0\}\,\overline{F}(z_{n-k}+1) + \overline{G^+}(z_{n-k})\,\mathbf{P}\{X_{k-1}\in(0,c]\}. \eqno(40)$$
Substituting (39) and (40) into (38) we deduce that
$$\mathbf{P}\{X_n > x\} \ge (1-\varepsilon)^3\sum_{k=n_0+1}^n \Big(\mathbf{P}\{D_\infty > 0\}\,\overline{F}(x+c+1+(n-k)(a+\varepsilon)) + \overline{G^+}(x+c+1+(n-k)(a+\varepsilon))\Big).$$
Since the tail is a non-increasing function, the last sum is not less than
$$\frac{1}{a+\varepsilon}\int_0^{(n-n_0-1)(a+\varepsilon)}\Big(\mathbf{P}\{D_\infty > 0\}\,\overline{F}(x+c+1+y) + \overline{G^+}(x+c+1+y)\Big)\,dy. \eqno(41)$$
Letting $n\to\infty$, we obtain that the tail at point $x$ of the stationary distribution of the Markov chain $X_n$ is not less than
$$\frac{(1-\varepsilon)^3}{a+\varepsilon}\int_0^\infty\Big(\mathbf{P}\{D_\infty > 0\}\,\overline{F}(x+c+1+y) + \overline{G^+}(x+c+1+y)\Big)\,dy = \frac{(1-\varepsilon)^3}{a+\varepsilon}\Big(\mathbf{P}\{D_\infty > 0\}\,\overline{F_I}(x+c+1) + \overline{G^+_I}(x+c+1)\Big) \eqno(42)$$
$$\sim \frac{(1-\varepsilon)^3}{a+\varepsilon}\Big(\mathbf{P}\{D_\infty > 0\}\,\overline{F_I}(x) + \overline{G^+_I}(x)\Big) \quad\text{as } x\to\infty,$$
due to the long-tailedness of the integrated tail distributions $F_I$ and $G^+_I$. Summarising, we deduce that, for every fixed $\varepsilon > 0$,
$$\liminf_{x\to\infty}\frac{\mathbf{P}\{D_\infty > x\}}{\mathbf{P}\{D_\infty > 0\}\,\overline{F_I}(\log x) + \overline{G^+_I}(\log x)} \ge \frac{(1-\varepsilon)^3}{a+\varepsilon},$$
which implies the lower bound (33) due to the arbitrary choice of $\varepsilon > 0$.

If the distributions $F$ and $G^+$ are long-tailed themselves, then the integral in (41) is asymptotically equivalent to the integral
$$\int_x^{x+n(a+\varepsilon)}\Big(\mathbf{P}\{D_\infty > 0\}\,\overline{F}(y) + \overline{G^+}(y)\Big)\,dy \quad\text{as } x, n\to\infty,$$
and the second lower bound (34) follows too.

To prove the matching upper bounds, let us first observe that
$$|D_{n+1}| \le A_n|D_n| + |B_n| \quad\text{for all } n, \eqno(43)$$
where the right-hand side is increasing in $|D_n|$. Hence, $|D_n| \le \widetilde{D}_n$, where $\widetilde{D}_n$ is a positive stochastic difference recursion, $\widetilde{D}_{n+1} = A_n\widetilde{D}_n + |B_n|$. Since $H_I$ is subexponential, Theorem 1 applies to $\widetilde{D}_n$, so
$$\mathbf{P}\{\widetilde{D}_\infty > x\} \sim a^{-1}\overline{H_I}(\log x) \quad\text{as } x\to\infty,$$
and hence
$$\mathbf{P}\{|D_\infty| > x\} \le (a^{-1}+o(1))\overline{H_I}(\log x) \quad\text{as } x\to\infty.$$
It follows from (12) that
$$\overline{H}(x) \le \mathbf{P}\{\log(1+A) > x-1\} + \mathbf{P}\{\log(1+|B|) > x-1\}.$$
Integrating the last inequality we get an upper bound
$$\overline{H_I}(x) \le \overline{F_I}(x-1) + \overline{G^-_I}(x-1) + \overline{G^+_I}(x-1) \eqno(44)$$
$$\sim \overline{F_I}(x) + \overline{G^-_I}(x) + \overline{G^+_I}(x) \quad\text{as } x\to\infty,$$
because all three distributions $F_I$, $G^-_I$ and $G^+_I$ are assumed long-tailed. Hence the following upper bound holds for the tail of $D_\infty$, as $x\to\infty$:
$$\mathbf{P}\{|D_\infty| > x\} \le (a^{-1}+o(1))\big(\overline{F_I}(\log x) + \overline{G^-_I}(\log x) + \overline{G^+_I}(\log x)\big). \eqno(45)$$
The long-tailedness of $F_I$ and $G^-_I$, similarly to (33), implies that
$$\mathbf{P}\{D_\infty < -x\} \ge (a^{-1}+o(1))\big(\mathbf{P}\{D_\infty < 0\}\,\overline{F_I}(\log x) + \overline{G^-_I}(\log x)\big),$$
and the two lower bounds together imply that, as $x\to\infty$,
$$\mathbf{P}\{|D_\infty| > x\} \ge (a^{-1}+o(1))\big(\overline{F_I}(\log x) + \overline{G^+_I}(\log x) + \overline{G^-_I}(\log x)\big),$$
because $\mathbf{P}\{D_\infty = 0\} = 0$. Together with the upper bound (45) this yields that
$$\mathbf{P}\{D_\infty > x\} = a^{-1}\big(\mathbf{P}\{D_\infty > 0\}\,\overline{F_I}(\log x) + \overline{G^+_I}(\log x)\big) + o(\overline{H_I}(\log x)),$$
and the first asymptotics (35) follows by the condition $\overline{G^-_I}(z) = O(\overline{F_I}(z) + \overline{G^+_I}(z))$. The second asymptotics (36) follows along similar arguments.

5 Balance of negative and positive tails in the case of signed $A$

In this section we turn to the general case where $D_n$ takes both positive and negative values, with $A$ taking values of both signs. Denote $\xi := \log|A|$ and the distribution of $\log(1+|A|)$ by $F$. Recall that the distribution of $\log(1+|B|)$ is denoted by $G$ and the distribution of $\log(1+|A|+|B|)$ by $H$. The Markov chain $X_n$ is defined as above in (2).

Theorem 7.
Suppose that P{D_∞ = 0} = 0,

0 < P{A > 0} < 1,    (46)

A and B are independent, Eξ = −a ∈ (−∞, 0) and E log(1 + |B|) < ∞. If the integrated tail distribution H_I is long-tailed, then

P{D_∞ > x} ≥ (1/(2a) + o(1)) H_I(log x) as x → ∞.    (47)

If, in addition, the distribution H is long-tailed itself, then

P{D_n > x} ≥ ((1 + o(1))/(2a)) ∫_{log x}^{log x + na} H(y) dy as n, x → ∞.    (48)

If the integrated tail distribution H_I is subexponential, then

P{D_∞ > x} ∼ (1/(2a)) H_I(log x) as x → ∞.    (49)

If, moreover, the distribution H is strong subexponential, then

P{D_n > x} ∼ (1/(2a)) ∫_{log x}^{log x + na} H(y) dy as n, x → ∞.    (50)

Proof.
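Before the proof, the origin of the factor 1/2 can be illustrated numerically. A large value of X_n keeps its sign with asymptotic probability p⁺ = P{A > 0} and flips it with p⁻ = P{A < 0}, and the resulting two-state sign chain has stationary distribution (1/2, 1/2) whenever 0 < p⁺ < 1, i.e. under condition (46). A minimal power-iteration check (the values of p⁺ below are assumptions of the example):

```python
def stationary(p_plus, iters=200):
    """Stationary distribution of the two-state sign chain with transition
    matrix [[p+, p-], [p-, p+]], p- = 1 - p+, via power iteration."""
    p_minus = 1.0 - p_plus
    pi = (1.0, 0.0)  # start with the sign surely positive
    for _ in range(iters):
        pi = (pi[0] * p_plus + pi[1] * p_minus,
              pi[0] * p_minus + pi[1] * p_plus)
    return pi

# For any 0 < p+ < 1 the two signs equalise in the limit.
for p in (0.1, 0.5, 0.9):
    pi = stationary(p)
    assert abs(pi[0] - 0.5) < 1e-9 and abs(pi[1] - 0.5) < 1e-9
print(stationary(0.3))  # ≈ (0.5, 0.5)
```

The second eigenvalue of the matrix is 2p⁺ − 1, so the iteration converges geometrically for every p⁺ strictly between 0 and 1; at the boundary values p⁺ ∈ {0, 1} the chain is periodic or absorbing and no equalisation takes place, which is why (46) is needed.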
The same arguments based on the single big jump technique used in the last section for proving (42) show that, for any fixed ε > 0, there exists a c < ∞ such that

P{|X_∞| > x} ≥ ((1 − ε)/a) ( P{D_∞ ≠ 0} F_I(x + c + 1) + G_I(x + c + 1) )

for all sufficiently large x. Similarly to (44),

H_I(x) ≤ F_I(x − 1) + G_I(x − 1) for all x,

which together with the condition P{D_∞ = 0} = 0 implies that

P{|X_∞| > x} ≥ ((1 − ε)/a) H_I(x + c + 2) ∼ ((1 − ε)/a) H_I(x) as x → ∞,

due to the long-tailedness of the distribution H_I. Therefore,

P{|X_∞| > x} ≥ (1/a + o(1)) H_I(x) as x → ∞.    (51)

At any time, a large absolute value of X_n changes its sign with asymptotic (as x → ∞) probability p⁻ = P{A < 0} and keeps its sign with asymptotic probability p⁺ = P{A > 0}, so the sign changes may be asymptotically described by a Markov chain with transition probability matrix

( p⁺  p⁻ )
( p⁻  p⁺ ),

whose stationary distribution is (1/2, 1/2), owing to the condition (46). For that reason, the probability of a large positive value of X_n is asymptotically at least one half of the right hand side of (51), and the proof of (47) is complete. The proof of (48) follows the same lines.

To prove the upper bound in (49), similarly to (43) we first note that

|D_{n+1}| ≤ |A_n||D_n| + |B_n| for all n,

which allows us to conclude the proof as it was done in the last section.

References

[1] Borovkov, A. A., Korshunov, D. (2002) Large-deviation probabilities for one-dimensional Markov chains. Part 3: Prestationary distributions in the subexponential case. Theory Probab. Appl., 603–618.
[2] Buraczewski, D., Damek, E., Mikosch, T. (2016) Stochastic Models with Power-Law Tails. The Equation X = AX + B. Springer.
[3] Chen, B., Rhee, C.-H., Zwart, B. (2018) Importance sampling of heavy-tailed iterated random functions. Adv. Appl. Probab., 805–832.
[4] Dyszewski, P. (2016) Iterated random functions and slowly varying tails. Stochastic Process. Appl., 392–413.
[5] Embrechts, P., Goldie, C. M. (1994) Perpetuities and random equations. In Asymptotic Statistics (Prague, 1993). Contrib. Statist., 75–86. Heidelberg: Physica.
[6] Foss, S., Korshunov, D., Zachary, S. (2011) An Introduction to Heavy-Tailed and Subexponential Distributions. Springer, New York.
[7] Goldie, C. M. (1991) Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab., 126–166.
[8] Goldie, C. M., Grübel, R. (1996) Perpetuities with thin tails. Adv. Appl. Probab., 463–480.
[9] Goldie, C. M., Maller, R. A. (2000) Stability of perpetuities. Ann. Probab., 1195–1218.
[10] Grey, D. (1994) Regular variation in the tail behaviour of solutions of random difference equations. Ann. Appl. Probab., 169–183.
[11] Kesten, H. (1973) Random difference equations and renewal theory for products of random matrices. Acta Math., 207–248.
[12] Konstantinides, D. G., Mikosch, T. (2005) Large deviations and ruin probabilities for solutions to stochastic recurrence equations with heavy-tailed innovations. Ann. Probab., 1992–2035.
[13] Korshunov, D. (2002) Large-deviation probabilities for maxima of sums of independent random variables with negative mean and subexponential distribution. Theory Probab. Appl., 355–366.
[14] Meyn, S., Tweedie, R. (2009) Markov Chains and Stochastic Stability, 2nd Ed., Cambridge Univ. Press.
[15] Rachev, S. T., Samorodnitsky, G. (1995) Limit laws for a stochastic process and random recursion arising in probabilistic modeling. Adv. Appl. Probab., 185–202.
[16] de Saporta, B. (2005) Tail of the stationary solution of the stochastic equation Y_{n+1} = a_n Y_n + b_n with Markovian coefficients. Stochastic Process. Appl., 1954–1978.
[17] Vervaat, W. (1979) On a stochastic difference equation and a representation of nonnegative infinitely divisible random variables. Adv. Appl. Probab. 11.