The two-sided exit problem for a random walk on $\mathbb{Z}$ having infinite variance II
arXiv [math.PR]

Kôhei UCHIYAMA

Abstract
Let $F$ be a distribution function on the integer lattice $\mathbb{Z}$ and $S=(S_n)$ the random walk with step distribution $F$. Suppose $S$ is oscillatory and denote by $U_a(x)$ and $u_a(x)$ the renewal function and sequence, respectively, of the strictly ascending ladder height process associated with $S$. Putting $A(x)=\int_0^x[1-F(t)-F(-t)]\,dt$ and $H(x)=1-F(x)+F(-x)$, we suppose $A(x)/\bigl(xH(x)\bigr)\to-\infty$ $(x\to\infty)$. Under an additional regularity condition on the positive tail of $F$, we show that $u_a(x)\sim U_a(x)[1-F(x)]/|A(x)|$ as $x\to\infty$ and, uniformly for $0\le x\le\delta R$ for each $\delta<1$, as $R\to\infty$,
$$P\bigl[\,S \text{ leaves } [0,R] \text{ on its upper side}\mid S_0=x\,\bigr]\sim c\,|A(R)|\,u_a(R)\,V_d(x),$$
where $c=\sum_{n=1}^\infty P[S_n>S_0;\ S_k<S_0 \text{ for } 0<k<n]$, and the regularity condition is satisfied at least if $S$ is recurrent, $\limsup[1-F(x)]/F(-x)<1$, and $x[1-F(x)]/L(x)$ $(x\ge 1)$ is bounded away from zero and infinity for some slowly varying function $L$. We also give some asymptotic estimates of the probability that $S$ visits $R$ before entering the negative half-line for asymptotically stable walks, and obtain the asymptotic behaviour of the probability that $R$ is ever hit by $S$ conditioned to avoid the negative half-line forever.

Keywords: exits from interval; relatively stable; infinite variance; renewal sequences for ladder heights.
AMS MSC 2010 : Primary 60G50, Secondary 60J45.
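Before the formal setting is introduced, the basic object throughout is the event that the walk leaves $[0,R]$ on its upper side before entering the negative half-line. As a purely illustrative sketch, one can estimate this probability by a seeded Monte Carlo run for the simple symmetric walk; note that this toy walk has finite variance and therefore lies outside the paper's standing assumptions, and all parameters below are arbitrary choices for illustration. For this walk the event reduces to classical gambler's ruin between $-1$ and $R+1$, with probability $(x+1)/(R+2)$.

```python
import random

# Lambda_R = {sigma_(R,inf) < T}: the walk leaves [0, R] on the upper side
# (reaches a site > R) before entering Omega = (-inf, -1].  For the simple
# symmetric walk this is gambler's ruin between -1 and R+1, so
# P_x(Lambda_R) = (x+1)/(R+2).  Illustrative parameters only.

def exits_above(x, R, rng):
    s = x
    while 0 <= s <= R:
        s += rng.choice((-1, 1))
    return s > R

rng = random.Random(2024)  # fixed seed: reproducible sketch
x, R, trials = 4, 8, 20000
est = sum(exits_above(x, R, rng) for _ in range(trials)) / trials
exact = (x + 1) / (R + 2)
print(f"estimated P_x(Lambda_R) = {est:.3f}, gambler's-ruin value = {exact:.3f}")
assert abs(est - exact) < 0.02
```

For a walk in the paper's regime (infinite variance, unbalanced tails) no such closed form is available, which is exactly what the results below address.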
1 Introduction

This paper is a continuation of [27]. We use the same notation as in [27], which, together with the setting of the present work and of [27], we present below. Let $S=(S_n)_{n=0}^\infty$ be a random walk (r.w.) on the integer lattice $\mathbb{Z}$ with i.i.d. increments and an initial point $S_0$ which is an unspecified integer. Let $X$ be a generic random variable having the same law as the increment $S_1-S_0$. For $x\in\mathbb{Z}$ let $P_x$ denote the law of the r.w. $S$ started at $x$ and $E_x$ the expectation under $P_x$; the subscript $x$ is dropped from $P_x$ and $E_x$ if $x=0$. We suppose throughout the paper that $S$ is irreducible, oscillating, and $\sigma^2:=EX^2=\infty$. For a subset $B\subset(-\infty,\infty)$ such that $B\cap\mathbb{Z}\ne\emptyset$, denote by $\sigma_B$ the first time $S$ visits $B$ after time zero, namely $\sigma_B=\inf\{n\ge 1: S_n\in B\}$. For simplicity, we write $\sigma_x$ for $\sigma_{\{x\}}$. As in [27] we shall be primarily concerned with the $P_x$-probability of the event
$$\Lambda_R=\{\sigma_{(R,\infty)}<T\},$$
where $\Omega=(-\infty,-1]$, $T=\sigma_\Omega$ and $R$ is a positive integer.

(Department of Mathematics, Tokyo Institute of Technology, Japan)

Denote by $U_a(x)$ and $u_a(x)$ ($V_d(x)$ and $v_d(x)$) the renewal function and sequence of the strictly ascending (weakly descending) ladder height process associated with $S$. Put
$$A(x)=\int_0^x[1-F(t)-F(-t)]\,dt \quad\text{and}\quad H(x)=P[\,|X|>x\,].$$
In [27] we observed that $P_x(\Lambda_R)$ always admits the upper bound $P_x(\Lambda_R)\le V_d(x)/V_d(R)$ $(x\ge 0)$, and gave conditions under which
$$P_x(\Lambda_R)\sim V_d(x)/V_d(R) \ \text{ uniformly for } 0\le x<R \text{ as } R\to\infty. \tag{1.1}$$
One of them is

(PRS) $\quad A(x)/xH(x)\to\infty$ as $x\to\infty$,

so that (1.1) holds under (PRS), while in [27] we also showed that if

(NRS) $\quad A(x)/xH(x)\to-\infty$ as $x\to\infty$,

then $P_x(\Lambda_R)=o\bigl(V_d(x)/V_d(R)\bigr)$ uniformly for $0\le x<\delta R$. In this paper, we obtain the precise asymptotic form of $P_x(\Lambda_R)$ in case (NRS) under some additional regularity condition on the positive tail of $F$ that is satisfied at least when $F$ is in the domain of attraction of a stable law with exponent 1, $EX=0$ and $P[S_n>0]\to 0$. This result is accompanied by the exact asymptotic form of $u_a$.

The condition (PRS) holds if and only if the r.w. $S$ is positively relatively stable (abbreviated as p.r.s.), i.e., there exists a positive sequence $B_n$ such that $S_n/B_n\to 1$ in probability. We shall say $F$ to be p.r.s. or n.r.s. according as (PRS) or (NRS) holds. Similarly we shall say $F$ to be recurrent (or transient) if so is the r.w. $S$. If $F$ is p.r.s. (n.r.s.), both $u_a$ and $V_d$ (both $v_d$ and $U_a$) are s.v. at infinity (cf. Remark 1.1 of [27]).

We present the results of the paper in two subsections below. In the first subsection, we state our main results in Theorems 1 and 2 and some results complementary to them in Propositions 1.1 to 1.3. In the second one, we suppose $F$ to be attracted to a stable law and present our results as to asymptotic estimates of the probability that $S$ visits $R$ before entering the negative half-line, and obtain the asymptotic behaviour of the probability that $R$ is ever hit by $S$ conditioned to avoid the negative half-line forever.

1.1 Two-sided exit of relatively stable walks.
Let $a(x)$ be the potential kernel of $S$ when $F$ is recurrent and $G(x)$ the Green kernel when $F$ is transient:
$$a(x)=\sum_{n=0}^\infty\bigl[P[S_n=0]-P[S_n=-x]\bigr] \quad\text{and}\quad G(x)=\sum_{n=0}^\infty P[S_n=x].$$
Under the relative stability, $F$ is transient if and only if
$$\int^\infty\frac{H(t)}{A^2(t)}\,dt<\infty,$$
and in this case $A(x)\to\infty$ as $x\to\infty$ (cf. [24]).

We shall study the asymptotic estimate of $P_x[\sigma_R<T]$ or $P_x[\sigma_0<\sigma_{(R,\infty)}]$. This is not only of interest in itself but sometimes useful for the estimates of $P_x(\Lambda_R)$. In fact the comparison of $P_x[\sigma_0<\sigma_{(R,\infty)}]$ to $1-P_x(\Lambda_R)$ leads to the determination of the asymptotic form of the renewal sequence, as well as of these two probabilities, under (PRS) with some regularity condition on the negative tail of $F$. The following result is fundamental in this direction. Denote by $g_\Omega(x,y)$ the Green function of $S$ killed as it enters $\Omega$ (see (2.3) for the precise definition).

Theorem 1. If $F$ is recurrent and p.r.s., then $a(x)$, $x>0$, is s.v. at infinity and, as $y\to\infty$, $v_d(y)=o\bigl(a(y)/U_a(y)\bigr)$, and
$$g_\Omega(x,y)=a(x)-a(x-y)+o\bigl(a(y)\bigr)\quad\text{uniformly for } x>y/2,$$
in particular $g_\Omega(y,y)\sim a(y)$; and for each constant $\delta<1$,
$$g_\Omega(x,y)=\begin{cases}\dfrac{V_d(x)}{V_d(y)}\bigl[a(y)-a(-y)+o(a(y))\bigr] & \text{as } y\to\infty \text{ uniformly for } 0\le x\le\delta y,\\[4pt] o\bigl(a(x)U_a(y)/U_a(x)\bigr) & \text{as } x\to\infty \text{ uniformly for } 0\le y<\delta x.\end{cases}\tag{1.2}$$

Remark 1.1. (a) An intrinsic part of Theorem 1 will be proved under a condition weaker than (PRS). (See Proposition 3.1 and Remark 3.1(a).)

(b) Under (PRS) some asymptotic estimates of $a(x)$ and $G(x)$ as $x\to\pm\infty$ are obtained in [25] and [24], respectively; it especially follows that as $x\to\infty$
$$\frac{1}{A(x)}\sim\begin{cases}a(x)-a(-x) & \text{if } F \text{ is recurrent},\\ G(x)-G(-x) & \text{if } F \text{ is transient}.\end{cases}\tag{1.3}$$
(See (3.4) and (5.1) for more about $a$ and $G$, respectively.)

(c) In view of the identity $g_\Omega(x,y)=g_{[1,\infty)}(-y,-x)$, the dual statement of Theorem 1 may read as follows: if $F$ is recurrent and (NRS) holds, then as $y\to\infty$
$$g_\Omega(x,y)=a(-y)-a(x-y)+o\bigl(a(-x)\bigr)\quad\text{uniformly for } y>x/2;$$
and
$$g_\Omega(x,y)=\begin{cases}o\bigl(a(-x)V_d(x)/V_d(y)\bigr) & \text{as } y\to\infty \text{ uniformly for } 0\le x<\delta y,\\[4pt] \dfrac{U_a(y)}{U_a(x)}\bigl[a(-x)-a(x)+o(a(-x))\bigr] & \text{as } x\to\infty \text{ uniformly for } 0\le y\le\delta x.\end{cases}$$

(d) Under (C3) we have $g_\Omega(x,y)=V_d(y)U_a(x)/x\,\{1+o(1)\}$ for $0\le x\le\delta y$ (see Remark 3.1(b)). Comparing this to (1.2) (with $y=x/2$) one sees that under (PRS)
$$V_d(x)U_a(x)/x\sim a(x)-a(-x)\ \ \text{if}\ \limsup a(-x)/a(x)<1,\qquad V_d(x)U_a(x)/x=o(a(x))\ \ \text{if}\ \lim a(-x)/a(x)=1.$$

The result corresponding to Theorem 1 for the transient walk is much cheaper; we shall give it in Section 5 as Lemma 5.1 in the dual setting (i.e., for an n.r.s. walk), whereas the exact estimation of $u_a$ (given in Proposition 1.2 below for $v_d$ rather than $u_a$ because of the dual setting) is more costly than for the recurrent walk. Here we state the standard result that if $F$ is transient, then
$$0<G(0)=1/P[\sigma_0=\infty]<\infty \quad\text{and}\quad g_\Omega(x,x)\to G(0).\tag{1.4}$$
(See Appendix (B) for the proof of the latter assertion.)

If $F$ is n.r.s., then $P_x(\Lambda_R)\to 0$, and the exact estimation of $P_x(\Lambda_R)$ seems hard to perform in general. However, if the positive and negative tails are not balanced in the sense that
$$\limsup_{x\to\infty}a(x)/a(-x)<1 \ \text{ if } F \text{ is recurrent},\qquad \limsup_{x\to\infty}G(x)/G(-x)<1 \ \text{ if } F \text{ is transient},\tag{1.5}$$
and the positive tail of $F$ satisfies an appropriate regularity condition, then we can compute the precise asymptotic form of $g_\Omega(x,y)$ for $0\le x<\delta y$ (which is lacking in the second formula in (c) of Remark 1.1), and thereby obtain that of $P_x(\Lambda_R)$ for $F$ that is n.r.s. We need to assume that $1-F$ varies dominatedly and
$$\exists\lambda>1,\quad \limsup\frac{1-F(\lambda t)}{1-F(t)}<1.\tag{1.6}$$
[Here a non-increasing function $f$ is of dominated variation if $\liminf f(2x)/f(x)>0$.] This is satisfied if $1-F(x)\asymp L_+(x)/x$ $(x\to\infty)$ for some s.v. $L_+$; and (1.5) holds if $\limsup_{x\to\infty}\eta_-(x)/\eta_+(x)<1$ in case $E|X|<\infty$, or $\limsup_{x\to\infty}A_+(x)/A_-(x)<1$ in case $F$ is transient, where $\eta_\pm(x)=\int_x^\infty P[\pm X>t]\,dt$ and $A_\pm(x)=\int_0^x P[\pm X>t]\,dt$ $(x\ge 0)$. Put
$$\tilde u(x)=\frac{U_a(x)[1-F(x)]}{-A(x)}.$$
It holds (see (4.13) and (2.7)) that if $F$ is n.r.s. and (1.5) holds, then
$$U_a(x)\sim\int^x\tilde u(t)\,dt.\tag{1.7}$$

Proposition 1.1.
Suppose that (1.5), (1.6) and (NRS) hold. Then
$$u_a(x)\ge\tilde u(x)\{1+o(1)\},\tag{1.8}$$
$$1+o(1)\le\frac{P(\Lambda_R)}{v_d(0)\,U_a(R)[1-F(R)]}\le C,\tag{1.9}$$
$$\lim{}^*\,u_a(x)/\tilde u(x)=1,\qquad \lim{}^*\,P(\Lambda_R)\bigm/\bigl[v_d(0)\,U_a(R)(1-F(R))\bigr]=1,$$
and in case $EX=0$, $u_a(x)\le C\tilde u(x)$, where $\lim^*$ means the limit as $x$ tends to infinity avoiding some set of relative density zero (cf. [2]) and $C$ is a positive constant.

In general, for a renewal function $U$ and the associated sequence $u$, with $U$ regularly varying with index $\gamma$, we have $\lim^* xu(x)/U(x)=\pi^{-1}\sin\pi\gamma$ if $\gamma>0$, while the limit is $0$ if $\gamma=0$, so the result as to $u_a$ in Proposition 1.1 is of interest. In case $EX=0$, $\lim^*$ above can be replaced by $\lim$, if we further assume
$$\limsup_{x\to\infty}\frac{1-F(x/\lambda)}{1-F(x)}\to 1\quad(\lambda\downarrow 1);\tag{1.10}$$
this especially yields the strong renewal theorem for $U_a$ (in view of (1.7)).

Theorem 2. Let $EX=0$ and suppose that (1.10) holds in addition to (1.5), (1.6) and (NRS). Then
$$u_a(x)\sim\frac{U_a(x)[1-F(x)]}{-A(x)},\qquad P(\Lambda_R)/v_d(0)\sim U_a(R)[1-F(R)],\tag{1.11}$$
and for each $\delta<1$, uniformly for $0\le x<\delta y$ as $y\to\infty$,
$$g_\Omega(x,y)\sim\frac{V_d(x)\,U_a(y)}{|A(y)|\,(x+1)}\int_{y-x-1}^y[1-F(t)]\,dt;\tag{1.12}$$
and
$$\frac{P_x[\sigma_y<T]}{P_x[\Lambda_y]}\sim\frac{a(-y)-a(y)}{a(-y)}.\tag{1.13}$$

In case $E|X|=\infty$ one may expect the formulae parallel to those given in Theorem 2 to be true on an ad hoc basis, but the analysis is more delicate than in case $EX=0$. In the next proposition we give a partial result under the following condition, more restrictive than (1.10):
$$p(x)\le C[1-F(x)]/x\quad(x\ge 1),\tag{1.14}$$
where $p(x)=P[X=x]$.

Proposition 1.2.
Let $E|X|=\infty$. Suppose that (1.14) holds in addition to (1.5), (1.6) and (NRS), and that there exists an integer $n$ such that
$$\Bigl(\frac{x[1-F(x)]}{|A(x)|}\Bigr)^{\!n}\log|A(x)|\longrightarrow 0.\tag{1.15}$$
Then the formulae (1.11) and (1.12) hold, and instead of (1.13) it holds that
$$\frac{P_x[\sigma_R<T]}{P_x[\Lambda_R]}\sim\frac{G(-R)-G(R)}{G(0)}.\tag{1.16}$$

Remark 1.2. (a) Even if both tails of $F$ are regularly varying, condition (1.15) may be violated, although it is satisfied practically in most cases: for instance if $\limsup[1-F(x)]/F(-x)<1$ and $F(-x)\asymp x^{-1}\exp\{(\log x)^\gamma/(\log\log x)\}$ for $0\le\gamma\le 1$, then (1.15) holds if $\gamma<1$ but not if $\gamma=1$.

(b) By [10, Corollary 2], under (NRS) the assumption of $S$ being oscillating is equivalent to
$$\int^\infty\frac{1-F(t)}{|A(t)|}\,dt=\infty.$$

(c) In (1.15), $\log|A(x)|$ may be replaced by $\int^x[1-F(t)]\,dt/|A(t)|$, which, in case $[1-F(x)]/F(-x)\to 0$, is $o(\log|A(x)|)$, hence (1.15) is slightly relaxed.

Since $P_x[\sigma_R<T]=g_\Omega(x,R)/g_\Omega(R,R)$, combining (1.13) or (1.16) with (1.12), Theorem 1 and (1.4) leads to the following
Corollary 1.1.
If the assumption of Theorem 2 or of Proposition 1.2 holds according as $F$ is recurrent or transient, then uniformly for $0\le x<\delta R$,
$$P_x(\Lambda_R)\sim|A(R)|\,g_\Omega(x,R)\sim\frac{V_d(x)\,U_a(R)}{x+1}\int_{R-x-1}^R[1-F(t)]\,dt\quad\text{as } R\to\infty.\tag{1.17}$$

In the dual setting, Theorem 2 is paraphrased as follows. If (PRS) holds, (1.10) and (1.6) hold with $F(-\,\cdot\,)$ in place of $1-F$, and $\limsup a(-x)/a(x)<1$, then
$$1-P(\Lambda_y)\sim V_d(y)F(-y)\sim v_d(y)A(y)\quad(y\to\infty),$$
$$P_x[\sigma_0<\sigma_{(R,\infty)}]\sim g_\Omega(R,R-x)/a(R)\quad\text{uniformly for } 0\le x\le R,$$
and for each $\varepsilon>0$,
$$g_\Omega(x,y)\sim\frac{U_a(y)\,V_d(x)}{A(x)\,(y+1)}\int_{x-y-1}^x F(-t)\,dt\quad(x\to\infty)\ \text{uniformly for } 0\le y<(1-\varepsilon)x,\tag{1.18}$$
$$\frac{P_x[\sigma_0<\sigma_{(R,\infty)}]}{P_x[T<\sigma_{(R,\infty)}]}\sim\frac{a(R)-a(-R)}{a(R)}\quad(R\to\infty)\ \text{uniformly for } \varepsilon R<x\le R,$$
$$1-P_x(\Lambda_R)\sim A(R)\,g_\Omega(R,R-x)\sim\frac{U_a(R-x)\,V_d(R)}{R-x+1}\int_x^R F(-t)\,dt\quad(R\to\infty)\ \text{uniformly for } \varepsilon R<x\le R;\tag{1.19}$$
in particular, by (1.18),
$$P_x[\sigma_y<T]\asymp\frac{y\,V_d(x)\,F(-x)}{V_d(y)\,A(x)}=o\Bigl(\frac{V_d(x)/x}{V_d(y)/y}\Bigr)\quad(x\to\infty)\ \text{uniformly for } 0\le y<\delta x,$$
where for the last equality we have used $V_d(y)U_a(y)\asymp y/a(y)$ (see L(3.1) in Section 2 and (3.6)). The analogous dual results for Proposition 1.2 would be obvious and are omitted.

Let $Z$ (resp. $\hat Z$) be the first ladder height of the strictly ascending (resp. weakly descending) ladder process: $Z=S_{\sigma_{[1,\infty)}}$, $\hat Z=S_T$. We shall also be concerned with the overshoot, which we define by
$$Z(R)=S_{\sigma_{[R+1,\infty)}}-R.$$

Remark 1.3. In [26, Eq. (2.22)] it is shown that if $EZ<\infty$, then
$$1-P_x(\Lambda_R)\sim\bigl[V_d(R)-V_d(x)\bigr]\bigm/V_d(R)\quad\text{as } R-x\to\infty \text{ for } x\ge 0,$$
which, giving an exact asymptotics for $0\le x\le\varepsilon R$, partially complements (1.19).

From the estimates of $u_a$ and $g_\Omega(x,y)$ of Theorem 2 and Proposition 1.2 we can derive some exact asymptotic estimates of
$$P_x\bigl[S_{\sigma_{(R,\infty)}-1}=y\bigm|\Lambda_R\bigr],\tag{1.20}$$
the conditional probability of $S$ exiting the interval $[0,R]$ through $y$, given $\Lambda_R$. We shall carry out the derivation in Section 6. Here we state the following consequence of it as to $Z(R)$.

Proposition 1.3.
Suppose that either the assumption of Theorem 2 or that of Proposition 1.2 holds. Suppose, in addition, that $1-F(x)\sim L_+(x)/x$ for some s.v. function $L_+$. Then for each $\varepsilon>0$, uniformly for $y>\varepsilon R$ and $0\le x<(1-\varepsilon)R$, as $R\to\infty$,
$$P_x\bigl[Z(R)>y\bigm|\Lambda_R\bigr]\sim\frac{\log[1-(R+y)^{-1}(x+1)]}{\log[1-R^{-1}(1+x)]};$$
in particular $P_x\bigl[Z(R)\le y\bigm|\Lambda_R\bigr]\sim y/(R+y)$ as $x/R\to 0$.

1.2 Asymptotics of $P_x[\sigma_R<T\mid\Lambda_R]$ for asymptotically stable walks.

As in [27] we bring in the asymptotic stability condition (AS):

(a) $X$ is attracted to a stable law of exponent $0<\alpha\le 2$.
(b) $EX=0$ if $E|X|<\infty$.
(c) there exists $\rho:=\lim P[S_n>0]$.

Suppose condition (ASab), the conjunction of (ASa) and (ASb), to hold with $\alpha<2$. It then follows that
$$1-F(x)\sim pL(x)x^{-\alpha}\quad\text{and}\quad F(-x)\sim qL(x)x^{-\alpha}\tag{1.21}$$
for some s.v. function $L$ and constants $p=1-q\in[0,1]$. We are concerned with $P_x[\sigma_R<T\mid\Lambda_R]$, or what is the same, the ratio $P_x[\sigma_R<T]/P_x(\Lambda_R)$. Note that under (AS), uniformly for $0\le x\le R$,
$$P_x[\sigma_R<T]\sim P_x(\Lambda_R)\sim V_d(x)/V_d(R)\quad\text{if }\alpha=2.\tag{1.22}$$

Proposition 1.4.
Suppose (AS) to hold with $0<\alpha<2$ and let $\delta<1$ be as above.

(i) For $1<\alpha<2$, the following equivalences hold:
$$p=0\iff(1.22)\iff\lim\frac{P_x[\sigma_R<T]}{P_x(\Lambda_R)}=1\ \text{for some/all } x\in\mathbb{Z}\tag{1.23}$$
(in this case the last limit is uniform for $0\le x\le R$); and if $p>0$, then $P_x[\sigma_R<T]\le\theta P_x(\Lambda_R)$ $(0\le x<\delta R)$ for some constant $\theta\in(0,1)$, and
$$P_x[\sigma_R<T]\sim f\Bigl(\frac{x}{R}\Bigr)\frac{V_d(x)}{V_d(R)}\quad\text{uniformly for } 0\le x\le R \text{ as } R\to\infty$$
for some increasing and continuous function $f$ such that $f(1)=1$ and $f(0)=(\alpha-1)/(\alpha\hat\rho)$.

(ii) If $\alpha=1$, $\rho>0$ and $S$ is recurrent (necessarily $p\le q$), then
$$\frac{P_x[\sigma_y<T]}{P_x(\Lambda_y)}\longrightarrow\begin{cases}(q-p)/q & \text{as } y\to\infty \text{ uniformly for } 0\le x<\delta y,\\ 0 & \text{as } x\to\infty \text{ uniformly for } 0\le y<\delta x.\end{cases}$$

(iii) If $\alpha=1$, $EX=0$ and $p>q$ (entailing $\rho=0$), then
$$\frac{P_x[\sigma_y<T]}{P_x(\Lambda_y)}\sim\begin{cases}(p-q)/p & \text{as } y\to\infty \text{ uniformly for } 0\le x\le\delta y,\\[3pt] \dfrac{p-q}{p}\cdot\dfrac{U_a(y)}{U_a(x)} & \text{as } x\to\infty \text{ uniformly for } 0\le y\le\delta x.\end{cases}$$

(iv) If $\rho>0$ and $S$ is transient (necessarily $\alpha\le 1$), then uniformly for $0\le x<\delta R$, $P_x[\sigma_R<T]/P_x(\Lambda_R)\to 0$.

Remark 1.4. (iii) above is obtained as a special case of Theorem 2 and does not follow from (ii) by duality. Its proof, much more involved than that of (ii), crucially depends on the fact that if $p>q$, $P(\Lambda_R)$ is comparable with $P[\sigma_R<T]$ and the latter is expressed as $v_d(0)u_a(R)/g_\Omega(R,R)$. In case $\alpha=\hat\rho=2p=1$, one may reasonably expect that $P_x[\sigma_R<T]/P_x(\Lambda_R)$ converges to 0 (as $R\to\infty$) whether $S$ is recurrent or not. If $F$ is recurrent (transient), this would be true if we could show that $P_x[S_{\sigma_{(R,\infty)}}>\varepsilon^{-1}R\mid\Lambda_R]\to 0$ as $R\to\infty$ and then $\varepsilon\downarrow 0$.

Let $P_x^\Omega$, $x\ge 0$, be defined by
$$P_x^\Omega[S_1=x_1,\dots,S_n=x_n]=P_x\bigl[S_1=x_1,\dots,S_n=x_n,\ n<T\bigr]\frac{V_d(x_n)}{V_d(x)}\tag{1.24}$$
$(x,x_1,\dots,x_n\ge 0)$; $P_x^\Omega$ is the $h$-transform with $h=V_d$ of the law of $S$ killed as it enters $\Omega$; $P_x^\Omega$ may be considered to be the law of $S$ started at $x\ge 0$ and conditioned to avoid $\Omega$ forever. From the defining expression (1.24) one deduces that
$$P_x^\Omega[\sigma_y<\infty]=\frac{V_d(y)}{V_d(x)}P_x[\sigma_y<T]\quad(x\ge 0,\ y\ge 0).\tag{1.25}$$
Because of this identity, the estimates of $P_x[\sigma_R<T]$ obtained in [27] as well as in this paper lead to the following

Corollary 1.2.
Suppose (AS) to hold.

(i) If $1<\alpha\le 2$, then uniformly in $x\ge 0$, $P_x^\Omega[\sigma_R<\infty]\sim f^\Omega(x/R)$, where $f^\Omega(\xi)$ is a continuous function of $\xi\ge 0$ such that for $\xi\le 1$: $f^\Omega$ is identical to 1 if $\alpha=2$, and $f^\Omega$ equals the function $f$ appearing in Proposition 1.4 if $\alpha<2$; and for $\xi>1$:
$$f^\Omega(\xi)=(\alpha-1)\,\xi^{-\alpha\hat\rho}\int_0^1 t^{\alpha\rho-1}(\xi-t)^{\alpha\hat\rho-1}\,dt;$$
in particular $P_x^\Omega[\sigma_R<\infty]\sim[(\alpha-1)/(\alpha\rho)]\,R/x$ as $R/x\to 0$.

(ii) If $\alpha=1$ and $F$ is recurrent, then for each $\delta<1$, uniformly in $x\ge 0$ as $R\to\infty$,
$$P_x^\Omega[\sigma_R<\infty]\begin{cases}\to(q-p)/q & \text{if } q\ge p,\\ \asymp R[1-F(R)]/(-A(R))\to 0 & \text{if } q<p,\end{cases}\quad\text{for } x<\delta R,$$
$$P_x^\Omega[\sigma_R<\infty]\begin{cases}\to 1 & \text{if } q\ge p,\\[3pt] \sim\dfrac{p-q}{p}\cdot\dfrac{R/A(R)}{x/A(x)} & \text{if } q<p,\end{cases}\quad\text{for } x>R/\delta.$$

(iii) If $F$ is transient, then for each $\delta<1$, as $R\to\infty$, $P_x^\Omega[\sigma_R<\infty]\to 0$ uniformly for $0\le x<\delta R$ and as $x-R\to\infty$.

Proof.
In view of (1.25), (i) follows from Proposition 1.4(i) in case $x\le R$ and from Lemma 7.3 in case $x>R$. As for (ii), use (ii) and (iii) of Proposition 1.4 together with the estimate of $P_x(\Lambda_R)$ of Proposition 1.1 (in case $q<p$, $x<\delta R$). (iii) follows from Proposition 1.4(iv) in case $\rho>0$ and from L(3.9) of the next section in case $\rho=0$.

Remark 1.5. If $\sigma^2<\infty$, the relations given in (1.22) and Corollary 1.2 for $\alpha=2$ are valid. The asymptotic form of $P_x^\Omega[\sigma_R<\infty]$ for $\alpha=2$ as $R/x\to\infty$ follows from the invariance principle for a random walk conditioned to stay positive as established in [4], but the validity of the corresponding statement is not clear for the case $1<\alpha<2$.

The rest of the paper is organised as follows. In Section 3 we prove Proposition 3.1 and Theorem 1. In Section 4 we prove Proposition 1.1 in case $EX=0$ and Theorem 2, after showing miscellaneous lemmas in preparation for the proofs. Proposition 1.1 (in case $E|X|=\infty$) and Proposition 1.2 are proved in Section 5. In Section 6 we derive asymptotic estimates of the conditional probability in (1.20). In Section 7 we deal with asymptotically stable walks and prove Proposition 1.4; for the proof we compute, in Lemma 7.1, the exact asymptotic forms of the renewal sequences $v_d$ and $u_a$, which are of independent interest.

2 Preliminaries

By the fact that $V_d$ is harmonic for the r.w. killed as it enters $\Omega$ we have
$$P_x(\Lambda_R)\le V_d(x)/V_d(R)\tag{2.1}$$
(see the corresponding formula in [27]). If either $V_d$ or $\int_0^x P[-\hat Z>t]\,dt$ is regularly varying, it follows [2, Eq. (8.6.6)] that
$$\frac{V_d(x)}{x\,v_d(0)}\int_0^x P[-\hat Z>t]\,dt\longrightarrow\frac{1}{\Gamma(1+\alpha\hat\rho)\Gamma(2-\alpha\hat\rho)}.\tag{2.2}$$
[If $\alpha\hat\rho<1$, the integral on the LHS may be replaced by $xP[-\hat Z>x]/(1-\alpha\hat\rho)$.] By what is mentioned right above, this shows that $V_d(b(n))/n$ tends to a positive constant.

For a non-empty $B\subset\mathbb{Z}$ we define the Green function $g_B(x,y)$ of the r.w. killed as it hits $B$ by
$$g_B(x,y)=\sum_{n=0}^\infty P_x[S_n=y,\ n<\sigma_B].\tag{2.3}$$
(Thus if $x\in B$, $g_B(x,y)$ is equal to $\delta_{x,y}$ for $y\in B$ and to $E_x[g_B(S_1,y)]$ for $y\notin B$.) We shall repeatedly apply the formula
$$g_\Omega(x,y)=\sum_{k=0}^{x\wedge y}v_d(x-k)\,u_a(y-k)\quad\text{for } x,y\ge 0.\tag{2.4}$$

L(2.1)
For $0\le x\le R$,
$$\sum_{y=0}^R g_\Omega(x,y)\le V_d(x)\,U_a(R).$$

We have shown (1.1) under the following condition (among others):

(C3) both $V_d(x)$ and $x^{-1}U_a(x)$ are s.v. as $x\to\infty$.

We shall need the dual results of those valid under (C3), whose dual is given as:

(Ĉ3) both $x^{-1}V_d(x)$ and $U_a(x)$ are s.v. as $x\to\infty$.

The condition (C3) follows from (PRS) and (Ĉ3) from (NRS), as mentioned previously. Put for $t\ge 0$
$$\ell^*(t)=\int_0^t P[Z>s]\,ds\quad\text{and}\quad \hat\ell^*(t)=\frac{1}{v_d(0)}\int_0^t P[-\hat Z>s]\,ds\tag{2.5}$$
(as in [27]) and
$$\ell^\sharp(t)=\int_t^\infty\frac{F(-s)}{\ell^*(s)}\,ds\quad\text{and}\quad \hat\ell^\sharp(t)=\int_t^\infty\frac{1-F(s)}{\hat\ell^*(s)}\,ds\tag{2.6}$$
(slightly differently from [27] in case (C3) or (Ĉ3) fails: see (7.16)). It is known that $Z$ is r.s. if and only if $x^{-1}U_a(x)$ is s.v., which in turn is equivalent to the slow variation of $\ell^*$ [18], [25, Appendix (B)], [24].

L(3.1)
Under (C3), $\ell^*$ and $\ell^\sharp$ are s.v., $u_a(x)\sim 1/\ell^*(x)$ and $V_d(x)\sim 1/\ell^\sharp(x)$.

By the duality this entails that under (Ĉ3), $\hat\ell^*$ and $\hat\ell^\sharp$ are s.v.,
$$v_d(x)\sim 1/\hat\ell^*(x)\quad\text{and}\quad U_a(x)\sim 1/\hat\ell^\sharp(x).\tag{2.7}$$

L(3.3) If either (C3) or (Ĉ3) holds, then $V_d(x)\,U_a(x)\,H(x)\to 0$.

L(3.4) If (C3) holds, then for each $\varepsilon>0$, $P_x[Z(R)>\varepsilon R\mid\Lambda_R]\to 0$ $(R\to\infty)$ uniformly for $0\le x<R$.

L(3.9)
If either (Ĉ3) or (AS) with $\alpha<2$ and $\rho=0$ holds, then $P_x(\Lambda_R)\,V_d(R)/V_d(x)\to 0$ $(R\to\infty)$ uniformly for $0\le x<\delta R$.

These results follow from Lemmas 2.1, 3.1, 3.3, 3.4 and 3.9 of [27].

Here we shall suppose that $F$ is recurrent. For the present purpose it is convenient to consider the Green function of $S$ killed as it hits the origin rather than as it enters $\Omega=(-\infty,-1]$. Put
$$g(x,y)=a(x)+a(-y)-a(x-y).$$
Then, for $x\ge 1$,
$$E_x\bigl[a(S_{\sigma_{(-\infty,0]}})\bigr]=a(x)-V_d(x-1)/EZ,\tag{3.1}$$
and
$$g_{(-\infty,0]}(x,y)=g(x,y)-E_x\bigl[g(S_{\sigma_{(-\infty,0]}},y)\bigr],\tag{3.2}$$
which take less simple forms for $E_x[a(S_T)]$ and $g_\Omega(x,y)$. Here (3.1) follows from Corollary 1 of [26] and (3.2) from the identity $g_{\{0\}}(x,y)=g(x,y)$ $(x\ne 0)$ (cf. [21, P29.4]).

We bring in the following conditions:

(1) $a(x)$ is almost increasing and $a(-x)/a(x)$ is bounded as $x\to\infty$;
(2) $\displaystyle\sup_{x:\,-z\le x\le\delta z}\frac{|a(x-z)-a(-z)|}{a(z)}\longrightarrow 0$ as $z\to\infty$ for any $\delta<1$;  (3.3)
(3) $\displaystyle\lim_{\varepsilon\downarrow 0}\limsup_{x\to\infty}P_x[S_T>-\varepsilon x]=0$.

These conditions hold if $\alpha=1$ and $\rho>0$.

Proposition 3.1.
Suppose conditions (1) to (3) above to hold. Then for any $\varepsilon>0$, as $y\to\infty$,
$$g_\Omega(x,y)=a(x)-a(x-y)+o(a(y))\quad\text{uniformly for } x>\varepsilon y>0,$$
and
$$g_\Omega(x,y)=a(x)-a(-y)+o(a(y))\quad\text{uniformly for } \varepsilon y\le x\le(1-\varepsilon)y;$$
in particular $g_\Omega(x,y)\sim a(x)$ if $a(x-y)=o(a(x))$ and $x>y/2>0$.

Remark 3.1. (a) Let (PRS) hold. Then according to [25, Theorem 7], it holds that $a(x)$ is s.v.; $a(x)-a(-x)\sim 1/A(x)$;
$$a(x)\sim\int_0^x\frac{F(-t)}{A^2(t)}\,dt,\qquad a(-x)=\int_0^x\frac{1-F(t)}{A^2(t)}\,dt+o(a(x)),\tag{3.4}$$
as $x\to\infty$. Combined with (PRS) these yield that for $-z\le x<0$,
$$a(x-z)-a(-z)=\int_{z}^{z-x}\frac{1-F(t)}{A^2(t)}\,dt+o(a(z))=o\Bigl(\frac{|x|}{z\,A(z)}\Bigr)+o(a(z)),$$
and similarly for $0<x<z$, and one sees that (2) of (3.3) is satisfied. We also know that by L(3.1) $V_d(x)$ is s.v., so that for each $M\ge 1$, $\lim P_x[S_T\ge-Mx]=0$; in particular (3) of (3.3) is satisfied. (1) is obvious from (3.4).

(b) Let (PRS) hold. By virtue of Spitzer's formula (2.4) we know $g_\Omega(x,y)\sim V_d(x)/\ell^*(y)$ for $0\le x<\delta y$ ($\delta<1$). Since $V_d(x)\sim 1/\ell^\sharp(x)$, this and the second half of Proposition 3.1 lead to the equivalence relations
$$a(x)\asymp\frac{1}{A(x)}\iff\limsup\frac{a(-x)}{a(x)}<1\iff a(x)\asymp\frac{1}{\ell^*(x)\ell^\sharp(x)},\tag{3.5}$$
as well as $1/[\ell^*(x)\ell^\sharp(x)]=a(x)-a(-x)+o(a(x))$, so that each of the conditions in (3.5) implies
$$A(x)\sim\ell^*(x)\,\ell^\sharp(x);\tag{3.6}$$
and if $\lim a(-x)/a(x)=1$, then $\ell^*(x)\ell^\sharp(x)a(x)\to\infty$. We shall give a sufficient condition for (3.6) to hold under (C3) in Appendix (A).

(c) Suppose that $F$ satisfies (AS) with $\alpha=1$ and $\rho>0$. Then $g_\Omega(y,y)\sim a(y)$,
$$g_\Omega(x,y)=a(y)-a(-y)+o(a(y))\ \text{ uniformly for } \varepsilon y\le x\le(1-\varepsilon)y,\qquad g_\Omega(x,y)=o(a(y))\ (y\to\infty)\ \text{ uniformly for } x>(1+\varepsilon)y.\tag{3.7}$$
Indeed, for $\rho=1$, (PRS) is satisfied and the first two formulae follow, while for $0<\rho<1$, $a(x)\sim a(-x)\sim c_\rho\int_1^x dt/[tL(t)]$ with a certain positive constant $c_\rho$, according to [25, Proposition 6.1], entailing (1) and (2) of (3.3); moreover $V_d$ is regularly varying with index $1-\rho\in(0,1)$, which entails (3), and it remains to show $\sup_{x>(1+\varepsilon)y}P_x[\sigma_y<T]\to 0$. To this end, by (3) of (3.3) one observes that
$$P_x[\sigma_y<\sigma_\Omega]\le P_x\bigl[S_{\sigma_{[0,y]}}<\varepsilon_1 y\bigr]\sup_{z:\,0\le z<\varepsilon_1 y}P_z[\sigma_y<\sigma_\Omega]+o_{\varepsilon_1}(1)$$
with $o_{\varepsilon_1}(1)\to 0$ as $\varepsilon_1\downarrow 0$, but $P_z[\sigma_y<\sigma_\Omega]=g_\Omega(z,y)/g_\Omega(y,y)$ tends to zero uniformly in $z$ since $a(y)\sim a(-y)$. We shall derive in Section 6 essentially the same result as in Proposition 3.1 but under (AS) in the case $\alpha=1$ and $0<\rho\le 1/2$, so that the inclusion of that case is significant.

We state the following corollary, which follows immediately from Proposition 3.1 and Remark 3.1(c) because of
$$P_x[\sigma_y<T]=\frac{g_\Omega(x,y)}{g_\Omega(y,y)}\tag{3.8}$$
as well as the fact that if $V_d$ is s.v., then $P_x[\sigma_{(-\infty,R]}=T]\to 1$ as $x/R\to\infty$.

Corollary 3.1. Suppose that $F$ is recurrent and satisfies either (PRS) or (AS) with $\rho>0$. Then, for any $0<\varepsilon<\delta<1$, as $R\to\infty$,
$$P_x[\sigma_R<T]=\begin{cases}\dfrac{a(R)-a(-R)}{a(R)}+o(1) & \text{uniformly for } \varepsilon R\le x<\delta R,\\ o(1) & \text{uniformly for } x>R/\delta.\end{cases}$$

The proof of Proposition 3.1 and Theorem 1 will be given after showing two lemmas.
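For orientation, identity (3.8) together with the factorization (2.4) can be checked by hand in the toy case of the simple symmetric walk (finite variance, hence outside the paper's standing assumptions; the check is purely for bookkeeping). There the strict ascending ladder height is identically 1, so one may take $u_a\equiv 1$ and $v_d\equiv 2$ (a hypothetical normalization fixed by $g_\Omega(0,0)=2$), which gives $g_\Omega(x,y)=2(\min(x,y)+1)$; the ratio $g_\Omega(x,R)/g_\Omega(R,R)=(x+1)/(R+1)$ for $0\le x\le R$ is exactly the gambler's-ruin probability of hitting $R$ before entering $(-\infty,-1]$. A minimal sketch verifying the factorization against the first-step characterization of the killed Green function:

```python
# Sketch: check the ladder-height factorization (2.4) for the simple
# symmetric random walk on Z, killed on entering Omega = (-inf, -1].
# Assumed toy normalization: u_a(k) = 1 (ladder height is 1) and
# v_d(k) = 2 (fixed by g_Omega(0,0) = 2).

N = 60  # range of x, y values tested

def u_a(k): return 1.0
def v_d(k): return 2.0

def g_omega(x, y):
    # factorization (2.4): sum_{k=0}^{min(x,y)} v_d(x-k) * u_a(y-k)
    return sum(v_d(x - k) * u_a(y - k) for k in range(min(x, y) + 1))

# First-step characterization of the killed Green function:
#   g(x, y) = delta_{xy} + (g(x-1, y) + g(x+1, y))/2   for x >= 1,
#   g(0, y) = delta_{0y} + g(1, y)/2                   (a step to -1 is killed).
for y in range(N):
    for x in range(N):
        lhs = g_omega(x, y)
        if x == 0:
            rhs = (1.0 if x == y else 0.0) + 0.5 * g_omega(1, y)
        else:
            rhs = (1.0 if x == y else 0.0) + 0.5 * (g_omega(x - 1, y) + g_omega(x + 1, y))
        assert abs(lhs - rhs) < 1e-9, (x, y, lhs, rhs)

print("factorization (2.4) consistent with the killed-walk recurrence:",
      "g_Omega(x,y) = 2*(min(x,y)+1) for the simple walk")
```

In the paper's regime no such closed form exists, and it is precisely the asymptotics of $g_\Omega$ developed above that replace it.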
Lemma 3.1.
If (1) and (2) of (3.3) hold, then for any $0<\delta<1$ and $M\ge 1$,
$$\sup\Bigl\{\frac{|a(x-z)-a(-z)|}{a^\dagger(|x|)+a(-z)}:\ -Mz<x<\delta z\Bigr\}\longrightarrow 0\quad(z\to\infty),\tag{3.9}$$
where $a^\dagger(0)=1$ and $a^\dagger(x)=a(x)$ if $x\ne 0$.

Proof. From (1) and (2) it follows that
$$|a(x-z)-a(-z)|/a(|x|)\to 0\quad(z\to\infty)\ \text{uniformly for } -Mz<x<-z.\tag{3.10}$$
Indeed, if $-2z\le x<-z$, putting $x'=x/2$, $z'=-x'+z$ and writing
$$a(x-z)-a(-z)=[a(x'-z')-a(-z')]+[a(x'-z)-a(-z)],$$
one sees that $|a(x-z)-a(-z)|/a(|x|)\to 0$; for general $-Mz<x<-2z$, partition $[x,-z]$ at multiples of $-z$ and write $a(x-z)-a(-z)$ as a telescopic sum.

Pick $\varepsilon_1>0$ and $z_0$, possibly depending on (2), so that
$$|a(x-z)-a(-z)|<\varepsilon_1 a(z)\quad\text{whenever } z\ge z_0 \text{ and } -z\le x\le\delta z.\tag{3.11}$$
Let $-z\le x<0$ and $z>z_0$. On the one hand, if $a(-z)\le\varepsilon a(z)$, by the inequalities
$$-\frac{a(-z)}{a(z)}\,a(x)\le a(x-z)-a(-z)\le\frac{a(x-z)}{a(z-x)}\,a(-x)$$
(cf. [26, Lemma 3.2], [25, Section 7.1]), condition (1) entails that $|a(x-z)-a(-z)|\le\varepsilon Ca(|x|)$, where we have also used the bound (3.11) to get $a(x-z)/a(-x+z)\le C'[a(-z)+\varepsilon_1 a(z)]/a(z)<C'\varepsilon$. On the other hand, if $a(-z)>\varepsilon a(z)$, (2) entails $|a(x-z)-a(-z)|<\varepsilon_1 a(z)<(\varepsilon_1/\varepsilon)\,a(-z)$. Thus we have $|a(x-z)-a(-z)|\le\varepsilon C\,[a(|x|)+a(-z)]$, showing that the supremum in (3.9) restricted to $-z\le x\le\delta z$ tends to zero. For $x>0$ one can argue similarly, splitting according as $a(x-z)\le\varepsilon a(z-x)$ or $a(x-z)>\varepsilon a(z-x)$ and using $a(-z)=a(x-z)+o\bigl(a(z-x)\vee a(x)\bigr)$, valid under the conjunction of (1) and (2) (note (3.10)).

Lemma 3.2.
If (1) to (3) of (3.3) hold, then for any $\varepsilon>0$, as $y\to\infty$,
$$E_x[a(S_T-y)]=E_x[a(S_T)]+o(a(y))\quad\text{uniformly for } x>\varepsilon y.\tag{3.12}$$

Proof. By (3), for any $\varepsilon_1>0$ there exists $\delta_1>0$ such that
$$P_x[S_T\ge-\delta_1 y]<\varepsilon_1\quad\text{if } x>\varepsilon y.\tag{3.13}$$
Supposing (1) and (2) to hold we apply Lemma 3.1 to see that as $y\to\infty$,
$$a(-z-y)=a(-z)+o\bigl(a(-z)+a(y)\bigr)\quad\text{uniformly for } z>\delta_1 y,$$
hence
$$E_x[a(S_T-y);\,S_T<-\delta_1 y]=E_x[a(S_T);\,S_T<-\delta_1 y]\{1+o(1)\}+o(a(y))=E_x[a(S_T)]\{1+o(1)\}+O\bigl(\varepsilon_1 a(y)\bigr)+o(a(y)).$$
Here (3.13) as well as (1) is used for the second equality, and by the same reasoning the leftmost member is written as $E_x[a(S_T-y)]+O(\varepsilon_1 a(y))$. Noting $E_x[a(S_T)]\sim E_x[a(S_{\sigma_{(-\infty,0]}})]\le Ca^\dagger(x)$, we can conclude that $E_x[a(S_T-y)]=E_x[a(S_T)]+O(\varepsilon_1 a(y))$, which shows (3.12), $\varepsilon_1$ being arbitrary.

Proof of Proposition 3.1.
Note that for the asymptotic estimates under $P_x$, $T$ and $\sigma_{(-\infty,0]}$ may be interchanged as $x\to\infty$. Then applying (3.2) and Lemma 3.2 in turn one sees that as $y\to\infty$,
$$g_{(-\infty,0]}(x,y)=a(x)-a(x-y)-\bigl(E_x[a(S_{\sigma_{(-\infty,0]}})]-E_x[a(S_{\sigma_{(-\infty,0]}}-y)]\bigr)=a(x)-a(x-y)+o(a(y))$$
uniformly for $x>\varepsilon y$, showing the first formula of the proposition. Since (2) of (3.3) entails $a(x-y)-a(-y)=o(a(y))$ for $x<(1-\varepsilon)y$, the second formula follows.

Proof of Theorem 1.
Taking (1.1) into account we show that under (PRS), for any $0<\delta<1$,
$$g_\Omega(x,y)=\begin{cases}P_x(\Lambda_y)\bigl[a(y)-a(-y)+o(a(y))\bigr] & \text{uniformly for } 0\le x<\delta y,\\ o(a(y)) & \text{uniformly for } x>y/\delta.\end{cases}$$
We have only to consider the case $x=o(y)$, since the other cases readily follow from Proposition 3.1 because of the slow variation of $V_d$ and of $a$ (see (3.4) for the latter). However, uniformly for $0\le x<y$ we have
$$g_\Omega(x,y)=\sum_{y\le w<2y}P_x\bigl[S_{\sigma_{[y,\infty)}}=w\bigr]\,g_\Omega(w,y)+o\bigl(P_x(\Lambda_y)\,g_\Omega(y,y)\bigr)=P_x(\Lambda_y)\bigl[a(y)-a(-y)+o(a(y))\bigr],$$
where the first and second equalities are due to L(3.4) and Proposition 3.1, respectively.

With the help of $g_{-\Omega}(-x,-x)\sim a(x)$ (valid under (PRS)) and $g_\Omega(x,0)=v_d(x)$ $(x\ge 0)$, it follows from L(3.9) that
$$v_d(x)=o\bigl(a(x)/U_a(x)\bigr).\tag{3.14}$$
(The dual assertion, given by (4.2), is more naturally derived.) The second formula of (1.2) follows from (3.14) in view of Spitzer's formula (2.4) for $g_\Omega(x,y)$.

For later usage, here we state the dual result corresponding to Corollary 3.1 for the n.r.s. walk: if $A(x)/xH(x)\to-\infty$, then
$$P_x[\sigma_y<T]=\begin{cases}o\bigl(V_d(x)/V_d(y)\bigr) & \text{as } y\to\infty \text{ uniformly for } 0\le x<\delta y,\\[3pt] \dfrac{U_a(y)}{U_a(x)}\Bigl[\dfrac{a(-x)-a(x)}{a(-x)}+o(1)\Bigr] & \text{as } x\to\infty \text{ uniformly for } y<\delta x.\end{cases}\tag{3.15}$$

4 Proof of Proposition 1.1 (case $E|X|<\infty$) and Theorem 2

This section consists of three subsections. In the first one we obtain some basic estimates of $P(\Lambda_R)$ and $u_a(x)$ for negatively relatively stable walks and prove Proposition 1.1. In the second we give precise asymptotic forms of $P(\Lambda_R)$ and $u_a(x)$ asserted in Theorem 2, whose proof we give in the third one. We use the following notation:
$$H_+(x):=1-F(x)=P[X>x]\ (x\ge 0);\qquad B(R):=\mathbb{Z}\setminus[0,R];\qquad N(R)=\sigma_{B(R)}-1,$$
where $\sigma_{B(R)}$ is the first time the r.w. leaves $[0,R]$ after time zero. Throughout this section we suppose $A(x)/xH(x)\to-\infty$ $(x\to\infty)$. Recall that this entails $\rho=0$, $V_d(x)\sim x/\hat\ell^*(x)$ and $U_a(x)\sim 1/\hat\ell^\sharp(x)$.

4.1 Preliminary estimates of $u_a$ and proof of Proposition 1.1 in case $EX=0$.

In this subsection we are mainly concerned with the case of $F$ recurrent, but some of the results are valid also for $F$ transient. When the potential function $a(x)$ is involved in a statement, it is understood that $F$ is tacitly supposed to be recurrent even if that is not mentioned. Since $g_\Omega(0,y)=v_d(0)u_a(y)$, Theorem 1 shows that if (NRS) holds, then
$$P[\sigma_y<T]\sim g_\Omega(0,y)/a(-y)=v_d(0)\,u_a(y)/a(-y),\tag{4.1}$$
hence by the dual of L(3.9)
$$u_a(y)=o\bigl(a(-y)/V_d(y)\bigr).\tag{4.2}$$

Lemma 4.1.
Suppose that (NRS) holds and $\limsup a(x)/a(-x)<1$ (necessarily $EX=0$). Then
$$u_a(y)=o\Bigl(1\bigm/\bigl[V_d(y)\,|A(y)|\bigr]\Bigr).\tag{4.3}$$

Proof. The second condition of the supposition implies $a(-y)\asymp 1/|A(y)|$ owing to (3.4). Hence (4.3) is immediate from (4.2).

Under $\limsup a(x)/a(-x)<1$, by (3.6) in Remark 3.1(b) we have $1/|A(x)|\sim V_d(x)U_a(x)/x$, so that (4.3) is equivalently stated as
$$u_a(y)=o\bigl(U_a(y)/y\bigr),\tag{4.4}$$
which is an expected consequence, for $U_a$ is s.v.; we shall use this expression instead of (4.3).

Lemma 4.2. Suppose (NRS) to hold.

(i) $P(\Lambda_R)\ge U_a(R)\,H_+(R)\,\{v_d(0)+o(1)\}$.

(ii) If $1-F$ is of dominated variation, then for each $0<\varepsilon<1/2$,
$$P\bigl[\varepsilon R\le S_{N(R)}<(1-\varepsilon)R\bigm|\Lambda_R\bigr]\to 0\quad\text{and}\quad P\bigl[S_{N(R)}<\varepsilon R,\ \Lambda_R\bigr]\asymp U_a(R)\,H_+(R).$$

Proof. We need to find an appropriate upper bound of $g_{\mathbb{Z}\setminus[0,R)}(x,y)$. To this end we use the identity
$$g_{B(R)}(x,y)=g_\Omega(x,y)-E_x\bigl[g_\Omega(S_{\sigma_{(R,\infty)}},y);\,\Lambda_R\bigr]\quad(0\le x,y<R).\tag{4.5}$$
By Spitzer's representation (2.4) we see that for $z\ge R$, $1\le y<\delta R$ ($\delta<1$),
$$g_\Omega(z,y)\le\frac{U_a(y)}{\hat\ell^*(R)}\{1+o(1)\}.$$
It therefore follows that
$$\sum_{y=0}^{R/2}E_x\bigl[g_\Omega(S_{\sigma_{(R,\infty)}},y);\,\Lambda_R\bigr]\le\frac{R\,U_a(R)}{\hat\ell^*(R)}\{o(1)\}\sim V_d(R)\,U_a(R)\,\{o(1)\}$$
for $0<x<R$. Since $P(\Lambda_R)V_d(R)\to 0$ by L(3.9), using $g_\Omega(0,y)=v_d(0)u_a(y)$ we accordingly deduce from (4.5)
$$\sum_{y=0}^{R/2}g_{B(R)}(0,y)=U_a(R)\,\{v_d(0)+o(1)\}.\tag{4.6}$$
Hence
$$P(\Lambda_R)\ge\sum_{y=0}^{R/2}g_{B(R)}(0,y)\,H_+(R-y)\ge U_a(R)\,H_+(R)\,\{v_d(0)+o(1)\},\tag{4.7}$$
showing (i).

The first probability in (ii) is bounded above by a constant multiple of $\sum_{y=\varepsilon R}^{(1-\varepsilon)R}u_a(y)H_+(R-y)$. After summing by parts, this sum may be expressed as
$$\Bigl[U_a(y)\,H_+(R-y)\Bigr]_{y=\varepsilon R}^{(1-\varepsilon)R}-\int_{\varepsilon R}^{(1-\varepsilon)R}U_a(t)\,d_tH_+(R-t)\tag{4.8}$$
apart from an error term of smaller order of magnitude than $U_a(R)H_+(\varepsilon R)$. Because of the slow variation of $U_a$ the above difference is $o\bigl(U_a(R)H_+(\varepsilon R)\bigr)$. By (i) we therefore obtain the first relation of (ii), provided that $1-F$ is of dominated variation. The second relation of (ii) follows from the first and (i).

Lemma 4.3.
Suppose lim sup [1 − F(λx)]/[1 − F(x)] < 1 for some λ > 1. Then

P_x[Z(R) > MR | Λ_R] → 0 as M → ∞, uniformly for 0 ≤ x ≤ R, (4.9)

and if (NRS) holds and lim sup a(x)/a(−x) < 1 in addition, then for some positive constant c,

c P_x(Λ_R) ≤ P_x[σ_R < T] (0 ≤ x ≤ R). (4.10)

Proof.
For any integer M > 1, writing M′ = M + 1 we have

P_x[Z(R) ≥ MR | Λ_R] ≤ [Σ_{y=0}^{R−1} g_{B(R)}(x, y)H_+(M′R − y)] / [Σ_{y=0}^{R−1} g_{B(R)}(x, y)H_+(R − y)] ≤ H_+(MR)/H_+(R),

of which the last member approaches zero as M → ∞ uniformly in R under the supposition of the lemma. Suppose lim sup a(x)/a(−x) <
1. Then by Proposition 3.1 one can choose positiveconstants c so that for R < z < R , g Ω ( z, R ) ≥ c g Ω ( R, R ) , P z [ σ R < T ] > c . This together with (4.9) leads to P x [ σ R < T ] ≥ P x (Λ R ) c { / o (1) } . Thus we have (4.10).Lemma 4.2(ii) says that, given Λ R , the conditional law of S σ ( R, ∞ ) − /R , the position of de-parture of the scaled r.w. S n /R from the interval [0 , F we shall see that such a concen-tration should be expected to occur only about the lower boundary (see Lemma 4.5), otherwisethis may be not true. Proof of Proposition 1.1 (case EX = 0 ). Suppose the assumption of Proposition 1.1to hold, namely F is n.r.s.; EX = 0 and lim sup x →∞ a ( x ) /a ( − x ) < H + varies dominatedly and ∃ λ > , lim sup H + ( λt ) /H + ( t ) < . (4.11)Then Lemmas 4.2 and 4.3 are applicable. By Lemma 4.3(i) we have P (Λ R ) /v ≥ U a ( R ) H + ( R ) { o (1) } , the lower bound of P (Λ R ) asserted in Proposition 1.1. By (4.1) we have v u a ( R ) = a ( − R ) P [ σ R < T ] { o (1) } ≥ − P (Λ R ) /A ( R ) { o (1) } . (4.12)Here for the inequality we have employed the second condition of (4.11) in addition to (4.9),(3.15) and a ( − x ) − a ( x ) ∼ − /A ( x ). Thus the required lower bounds of u a and P (Λ R ) areobtained.Since ddt ℓ ♯ ( t ) = H + ( t )ˆ ℓ ♯ ( t )ˆ ℓ ∗ ( t ) ∼ U a ( t ) H + ( t ) − A ( t ) = ˜ u ( t ) , (4.13)by (2.7) we have U a ( x ) ∼ R xx ˜ u ( t ) dt . According to a standard result as to lim ∗ (cf. [2, Theorem2.9.1] which can be extended to be applicable under the last condition of (4.11), we havelim ∗ u a ( x ) / ˜ u ( x ) = 1, which entailslim ∗ P (Λ x ) / [ U a ( x ) H + ( x )] = v for the LHS is not less than v by (4.12) and the inequality in the opposite direction followsfrom the lower bound of P (Λ R ) in (1.9). The upper bound in (1.9) follows from the lim ∗ result above because P (Λ x ) is monotone and U a ( x ) H + ( x ) almost decreasing and of dominatedvariation owing to the last condition of (4.11). 
The upper bound of u_a follows from this upper bound of P(Λ_R) and (4.10), the latter entailing v° u_a(y)/a(−y) ∼ P[σ_y < T] ≤ C P(Λ_y). The following lemma is used in the next subsection.

Lemma 4.4.
Suppose (4.11) to hold. (i)
For any / ≤ δ < , g Ω ( w, y ) = V d ( w ) × o (cid:0) U d ( R ) (cid:14) R (cid:1) ≤ w < δy, ≤ a ( − y ) ≤ C/ | A ( y ) | δy ≤ w ≤ y/δ, ∼ U ( y ) / ˆ ℓ ∗ ( w ) w > y/δ > . (4.14)16ii) sup y ≥ ∞ X w =0 g Ω ( w, y )[1 − F ( w )] < ∞ .Proof. Let EX = 0. By (2.4), the Spitzer’s representation of g Ω ( w, y ), one can easily deduce(i), with the help of (4.3) and (4.21). For convenience of later citations we note that (i) entailsthat for some constant Cg Ω ( w, y ) ( ≤ CV d ( w ) U d ( y ) (cid:14) y ≤ w < y/δ, ≤ CU a ( y ) / ˆ ℓ ∗ ( w ) w ≥ y/δ > . (4.15)By (4.15) P ∞ w =0 g Ω ( w, y )[1 − F ( w )] is bounded above by a constant multiple of yU a ( y ) y X w =0 V d ( w ) H + ( w ) + U a ( y ) ∞ X w =2 y ℓ ∗ ( w ) H + ( w ) (4.16)for y ≥ x . The first term above approaches zero as y → ∞ , for by L(3.3) V d ( w ) H + ( w ) = o (1 /U a ( w )), while the second sum equals ˆ ℓ ♯ (2 y ) ∼ /U a ( y ). Thus (ii) follows. Asymptotic form of u a and P (Λ R ) . Throughout this subsection we suppose (4.11), the assumption of Proposition 1.1, to hold.Thus the results given in the preceding subsection are applicable; in particular (4.4) and (4.10)hold: u a ( y ) = o (cid:0) U d ( y ) (cid:14) y (cid:1) and P x (Λ y ) ≍ P x [ σ y < T ] . (4.17)Assuming the continuity condition (1.10) in addition we are going to refine these estimates to P (Λ y ) /v ∼ U a ( y ) H + ( y ) and u a ( y ) ∼ U a ( y ) H + ( y ) / | A ( y ) | . (4.18)Recalling B ( R ) = Z \ [0 , R ], one has P x (Λ R ) = R X w =0 g B ( R ) ( x, R − w ) H + ( w ) . For each r = 1 , , . . . , g B ( r ) ( x, r − y ) , x, y ∈ B ( r ) is symmetric: g B ( r ) ( x, r − y ) = g B ( r ) ( y, r − x ) , (4.19)for the both sides equal g − B ( r ) ( − r + y, − x ). Note that under (NRS), by duality, ˆ Z is r.s. and v d ( x ) ∼ ℓ ∗ ( x ) and U a ( x ) ∼ ℓ ♯ ( x ) where ˆ ℓ ♯ ( t ) = Z ∞ t H + ( t )ˆ ℓ ∗ ( t ) dt. 
(4.20)

By Theorem 1 and Remark 3.1(b) it follows that under (NRS) and (4.11)

g_Ω(x, x) ∼ a(−x) − a(x) ∼ −1/A(x) ∼ 1/(ℓ̂∗(x)ℓ̂♯(x)) ∼ V_d(x)U_a(x)/x. (4.21)

The next lemma is crucial for the proof of (4.18). Recall that N(R) = σ(R,∞) − 1 and that the continuity condition (1.10) reads H_+(x/λ)/H_+(x) → 1 as λ ↓ 1.

Lemma 4.5. If both (4.11) and (1.10) hold, then

P[S_{N(R)} ≥ R/2 | Λ_R] → 0. (4.22)

Proof.
We use the representation P [ S N ( R ) ≥ R, Λ R ] = X ≤ w ≤ R/ g B ( R ) (0 , R − w ) H + ( w ) . (4.23)Splitting the r.w. paths by the landing points, y say, when S started at the origin, exits[0 , R ], we obtain for 0 ≤ w < R/ g B ( R ) (0 , R − w ) = X R/ 1, the above triple sum is less than R X z =0 R X y =(1 − ε ) R p ( y − z ) u a ( z ) . (4.30)Since P Ry =(1 − ε ) R p ( R − y − z ) = F ( R − εR − z ) − F ( R − z − z ≤ R/ 2, the above repeated sum is of the smaller order of magnitudethan U a ( R ) H + ( R ), showing (4.29). Remark . The above proof directly (i.e. without recourse to Proposition 1.1) verifies thatunder (4.11) P (Λ R ) ≍ U a ( R ) H + ( R ) and u a ( x ) ≍ U a ( x ) H + ( x ) a ( − x ) . (4.31)To see this one has only to notice the obvious fact that the sum in (4.30) is dominated by CU a ( R ) H + ( R ).As a consequence of Lemmas 4.2 and 4.5 we obtain Lemma 4.6. Suppose that both (4.11) and (1.10) hold. Then (i) for each ε > , as R → ∞ P [ S N ( R ) ≥ εR | Λ R ] → , (4.32) and P [ Z ( R ) ≤ εR | Λ R ] → as ε ↓ uniformly in R > P (Λ R ) /v ∼ U a ( R ) H + ( R ) and P [ σ R < T ] P (Λ R ) ∼ a ( − R ) − a ( R ) a ( − R ) ;(iii) u a ( y ) ∼ U a ( y ) H + ( y ) − A ( y ) . roof. The first convergence of (i) follows from Lemmas 4.2(ii) and 4.5, and the second one of(i) from it—by virtue of (1.10). By (4.32) we have P (Λ R ) ∼ P [ S N ( R ) ≥ (1 − ε ) R, Λ R ] ∼ v Z εR u a ( t ) H + ( R − t ) dt. Integrating by parts transforms the integral on the RHS into U a ( εR ) H + ( R − εR ) − Z εR U a ( t ) dH + ( R − t ) , which may be written as U a ( R )[ H + ( R )] { o (1) } , since U a is s.v., showing the first relation of(ii). By the second of (i) P [ σ R < T ] ∼ P (Λ R ) (cid:18) o ε (1) + ∞ X y = εR P [ Z ( R ) = y | Λ R ] P y [ σ R < T ] (cid:19) , where o ε (1) → ε ↓ 0. 
By (3.15) the second probability under the summation sign isasymptotically equivalent to [ a ( − R ) − a ( R )] /a ( − R ), and we have the second relation of (ii).By (3.4) we have − /A ( y ) ∼ a ( − y ) − a ( y ), hence by the second equivalence of (ii) P [ σ y < T ] ∼ P (Λ y ) / [ A ( y ) a ( − y )] , while the probability on the LHS ∼ v u a ( y ) /a ( − y ). Combined with the first one of (ii) thisyields (iii). Proof of Theorem 2. Having Lemma 4.6 Theorem 2 follows if we can show the following Proposition 4.1. Suppose that (4.11) and (1.10) hold. Then for each δ < , uniformly for ≤ x < δy , as y → ∞ g Ω ( x, y ) ∼ V d ( x ) U a ( y ) | A ( y ) | ( x + 1) x X k =0 H + ( y − k ) , P x (Λ y ) ∼ g Ω ( x, y ) A ( y ) (4.33) and ∀ ε > , P x [ S N ( y ) ≥ x + εy | Λ y ] → . Remark . The continuity condition (1.10) is necessary for (4.18) to hold, as is seen fromthe identity P (Λ y ) /v = P Rz =0 u a ( z ) H + ( R − y ) where the contribution of the sum over z < εy always signifies for any ε > 0. Also P (Λ y ) /u a ( y ) is not asymptotic to v A ( y ) if (1.10) fails tohold. Lemma 4.7. Under the same assumption as in Proposition 4.1 for each δ < , as y → ∞ g Ω ( x, y ) ∼ U a ( y ) V d ( x ) | A ( y ) | ( x + 1) x X k =0 H + ( y − k ) uniformly for ≤ x < δy. (4.34)20 roof. Substituting the asymptotic form of u a of Lemma 4.6 into (2.4) one has g Ω ( x, y ) ∼ U a ( y ) − A ( y ) x X k =0 v d ( k ) H + ( y − x + k ) uniformly for 0 ≤ x < δy. For any ε > 0, the above sum restricted to k < εx is less than εxH + ( y ) / ˆ ℓ ∗ ( x ) which can bemade relatively small. The rest is easy to see if x → ∞ , while in case x/y ↓ 0, the formula of(4.34) is immediate because of (1.10).From (4.34) and g Ω ( x, x ) ∼ a ( − x ) it follows that uniformly for 0 ≤ x < δR , as R → ∞ P x [ σ R < T ] ∼ U a ( R ) V d ( x ) − a ( − R ) A ( R )( x + 1) Z RR − x − H + ( t ) dt. (4.35) Lemma 4.8. Suppose the same assumption as in Proposition 4.1 to hold. 
Then for each δ < ,uniformly for ≤ x < δR P x [ S N ( R ) ≥ (1 + δ ) R | Λ R ] → . Proof. Put R ′ = ⌊ δR ⌋ , R ′′ = R − R ′ and Q R ( x ) = P x [ S N ( R ) > R − R ′′ , Λ R ]. Then Q R ( x ) P x (Λ R ′ ) = R X k = R ′ P x (cid:2) Z ( R ′ ) = k (cid:12)(cid:12) Λ R ′ (cid:3) Q R ( k ) . Since for each ε > Q R ( k ) < P k (Λ R ) → R ′ ≤ k ≤ R − εR ′′ and P x (Λ R ′ ) ≍ P x [ σ R < T ] ≍ P x (Λ R ) according to the second half of Lemma 4.3, we have only to show thatuniformly for 0 ≤ x < R ′ , as ε ↓ R →∞ sup ≤ x 1, by (1.10) the abovesum is of the smaller order of magnitude than ≤ V d ( x ) U a ( R ) × H + ( R ) ≍ P x (Λ R ) as R → ∞ and ε ↓ Proof of Proposition 4.1 and Theorem 2. Since, by the asymptotic form of u a in Lemma4.6(iii), g Ω ( x, z ) ≤ CV d ( x ) u ( z ) for z ≥ x + εy , we see P x [ x + εy < S N ( y ) < (1 − ε ) y | Λ y ] → ≤ x < δy as in the proof of Lemma 4.2(ii), and the last formula of Proposition4.1 follows from Lemma 4.8.As before we infer from Lemma 4.3 and Lemma 4.8 that uniformly for 0 ≤ x < δyP x [ εy < Z ( y ) < y/ε | Λ y ] → y → ∞ and ε ↓ . (4.38)21his shows that as y → ∞ P x (Λ y ) ∼ − A ( y ) a ( − y ) P x [ σ y < T ] ∼ − A ( y ) g Ω ( x, y ) uniformly for 0 ≤ x < δy . (4.39)The first equivalence is the same as the asymptotic form of P x [ σ y < T ] asserted in Theorem2, and the second implies the asymptotic form of P x (Λ y ) in (4.33). This finishes proof ofProposition 4.1 (hence of Theorem 2), the other assertions being given in Lemma 4.6 and4.7. E | X | = ∞ ) and 1.2 Throughout this section we suppose that F is transient and n.r.s..[For the transient walks, considering under (NRS) is more convenient than under (PRS)) toobtain the result corresponding to Theorem 1 in the present setting.] According to [24] we have(a) A ( x ) → −∞ , G ( − x ) is s.v. and G ( − x ) − G ( x ) ∼ − /A ( x ) . (b) G ( − x ) ∼ Z ∞ x F ( − t ) A ( t ) dt, G ( x ) ∼ Z ∞ x − F ( t ) A ( t ) dt + o ( G ( − x )) (5.1)as x → ∞ . 
(The last result in (a) is not stated in [24], but actually proved in the proof ofTheorem of [24] [see Eq(57), Eq(30) and Section 3.3 of [24].) Lemma 5.1. If F is transient and n.r.s., then g Ω ( x, x ) → G (0) , and for any M > , as y → ∞ g Ω ( x, y ) = G ( y − x ) − G ( y ) + o ( G ( − y )) uniformly for ≤ x < M y. Proof. Under (NRS) ˆ Z is r.s., so that P x [ S T < − εx ] → ε > 0. Hence the result isimmediate from the identity g Ω ( x, y ) = G ( y − x ) − E x [ G ( y − S T )].From Lemma 5.1 one infers that as y → ∞ (a) g Ω ( x, y ) = G ( − y ) − G ( y ) + o ( G ( − y )) uniformly for y/δ < x < M y,G ( y − x ) { o (1) } if G ( y ) /G ( y − x ) → y < x < y,o ( G ( − y )) uniformly for 0 ≤ x < δy ; and(b) X y ≤ w ≤ y g Ω ( w, y ) ≤ O (cid:0) yG ( − y ) (cid:1) . (5.2)The estimate of g Ω ( x, y ) in (a) is not exact for x ≤ y/δ except incase x − y is sufficiently small,but for x/y ≍ Proof of Proposition 1.1 (case E | X | = ∞ ) . By Lemma 4.3 we have P [ Z ( R ) >M R | Λ R ] → R ∧ M → ∞ under (1.6), while by the first case of (5.2(a)) P z [ σ R < T ] ≥ [ G ( − R ) − G ( R )] /G (0) { o (1) } for R < z < M R , M > 1. It therefore follows that P [ σ R < T | Λ R ] ≥ [ G ( − R ) − G ( R )] /G (0) { o (1) } . (5.3)22ence v u a ( R ) = G (0) P x [ σ R < T ] { o (1) } ≥ − P (Λ R ) /A ( R ) { o (1) } . (5.4)The lower bound (1.9) of P (Λ R ) also is valid owing to Lemma 4.2, which together with (5.4)yields that of u a ( x ) in Proposition 1.1. The rest is the same as before.For the proof of Proposition 1.2 we need to obtain the bound P [ σ R < T | Λ R ] ≤ CG ( − R ) . (5.5)Using g Ω ( R + y, R ) ≤ G ( − y ) we see that P [ σ R < T | Λ R ] = ∞ X y =1 P [ Z ( R ) = y | Λ R ] g Ω ( R + y, R ) G (0) ≤ E [ G ( − Z ( R )) | Λ R ] G (0) . Therefore, it follows that for each ε > P [ σ R < T | Λ R ] < G ( − R ) { o (1) } + E (cid:2) G ( − Z ( R )); Z ( R ) < εR (cid:12)(cid:12) Λ R (cid:3) . (5.6) Lemma 5.2. 
If (1.14) and (1.15) hold in addition to (1.5), (1.6) and (NRS), then (i) E [ G ( − Z ( R )); Z ( R ) < R | Λ R ] = O (cid:0) G ( − R ) (cid:1) ( R → ∞ ) . (ii) u a ( y ) ≤ CP (Λ y ) G ( − y ) ≤ C ′ U a ( y ) H + ( y ) / | A ( y ) | . Note that by (5.6) (i) implies (5.5) , which in turn implies (ii). From the latter it followsthat u a ( y ) = O ( U a ( y ) /y )—which is what we actually need for our proof of Proposition 1.2 butseems hard to show without auxiliary conditions (1.14) and/or (1.15). Proof. On writing p ( x ) = P [ X − x ], the conditional expectation in (i) is represented as R X w =0 R X z =0 g B ( R ) (0 , R − w ) p ( w + z ) G ( − z ) . By virtue of (1.14) and the lower bound of P (Λ R ) provided by Proposition 1.1 the outer sumrestricted to w ≥ R/ U a ( R ) H + ( R ) R X w = R/ g B ( R ) (0 , R − w ) H + ( R ) R R X z =0 G ( − z ) ≤ G ( − R ) . (5.7)We show that the other sum is o (cid:0) G ( − R ) (cid:1) . To this end we proceed as in the proof pf Lemma4.5). Let I ε ( w ), II ε ( w ) and III ( w ) be as therein. For the present purpose we take ε = 1 / ε from I ε ( w ) and II ε ( w ). By the lower bound of P (Λ R ) obtained above,what is to be shown may be paraphrased as1 P (Λ R ) R/ X w =0 R/ X z =0 (cid:2) I ( w ) + II ( w ) + III ( w ) (cid:3) p ( w + z ) G ( − z ) = o (cid:0) G ( − R ) (cid:1) . (5.8)First of all we note that combining (1.6) and (1.14) leads to ∞ X z =0 p ( w + z ) G ( − z ) ≤ C H + ( w ) w w X z =0 G ( − z ) + H + ( w ) G ( − w ) ≤ C ′ H + ( w ) G ( − w ) ( w ≥ . (5.9)23ecall g B ( R ) (0 , R − w ) = X R/ Under the same assumption as Lemma 5.2, (i) E [ G ( − Z ( R )) , Z ( R ) < εR | Λ R ] /G ( − R ) → as R → ∞ and ε ↓ in this order; and (ii) P [ S N ( R ) > εR | Λ R ] → and P x [ Z ( R ) < εR | Λ R ] → for each ε > .Proof. Following the proof of Lemma 5.2 we see that1 U a ( R ) H + ( R ) hR X w =0 εR X z =0 g Ω ( x, R − w ) H + ( w + z ) w + z G ( − z ) = o (cid:0) G ( − R ) (cid:1) . 
[The argument beginning with (5.10) is much simplified because of the bound of u_a in Lemma 5.2(ii).] On the other hand, the trivial bound H_+(w + z)/(w + z) ≤ H_+(w)/w yields

(1/(U_a(R)H_+(R))) Σ_{w=hR}^{R} Σ_{z=0}^{εR} g_Ω(x, R − w)[H_+(w + z)/(w + z)]G(−z) ≤ Cε G(−R)V_d(x).

For the same reason as that by which the bound (5.7) was derived, these together show (i); (ii) is shown in a similar way (and is rather simpler).

By Lemmas 5.3(ii) and 4.3, P[εR < Z(R) < R/ε | Λ_R] → 1 as R → ∞ and ε ↓ 0, and for each δ < 1,

P_x[σ_R < T | Λ_R] ∼ [G(−R) − G(R)]/G(0) uniformly for 0 ≤ x ≤ δR.

With these relations together with (i) of Lemma 5.3, the arguments made in case EX = 0 lead to

P(Λ_y)/v° ∼ U_a(y)H_+(y) and u_a(y) ∼ U_a(y)H_+(y)/|A(y)|. (5.14)

Let 0 ≤ x < δR for δ < 1. By the upper bound of u_a in Lemma 5.2(ii) and the bound ∫₀^x H_+(t) dt = O(|A(x)|),

Σ_{w=0}^{εR} g_Ω(x, R − w)H_+(w) ≤ C V_d(x)U_a(R)H_+(R),

while by L(2.1)

Σ_{y=0}^{(1−ε)R} g_Ω(x, y)H_+(R − y) ≤ C V_d(x)U_a(R)H_+(R).

Hence

P_x(Λ_R) ≤ C′ V_d(x)U_a(R)H_+(R) (0 ≤ x < δR).

By the asymptotic form of u_a obtained from (5.14) we have the same asymptotic form of g_Ω(x, y) as given in Lemma 4.7, and from this together with the above bound of P_x(Λ_R) we derive the following lemma.

Lemma 5.4. Under the same assumption as Lemma 5.2, for each δ < 1, as R → ∞ and ε ↓ 0 in this order: (i) E_x[G(−Z(R)); Z(R) < εR | Λ_R]/G(−R) → 0; and (ii) P_x[S_{N(R)} > (1 + δ)R/2 | Λ_R] → 0 and P_x[Z(R) < εR | Λ_R] → 0; in both (i) and (ii) the convergence is uniform for 0 ≤ x < δR.

Proof. The proof is similar to that of Lemma 4.8.

Proof of Proposition 1.2. By virtue of Lemma 5.4 we obtain

P_x(Λ_R) ∼ G(0)P_x[σ_R < T]/[G(−R) − G(R)] ∼ −A(R) g_Ω(x, R) uniformly for 0 ≤ x < δR. (5.15)

With this as well as (5.14) we can follow the proof of Theorem 2 to show the rest of the results of Proposition 1.2.
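The summation by parts performed repeatedly in Sections 4 and 5 (e.g. in (4.8) and in the proof of Lemma 4.6) is the discrete Abel identity. As a quick sanity check, the following self-contained snippet verifies that identity numerically; the sequence `u` and tail `H` below are hypothetical stand-ins for u_a and H_+, not the objects of the paper:

```python
# Discrete Abel summation (summation by parts):
#   sum_{y=a}^{b} u(y) H(R - y)
#     = U(b) H(R - b) - U(a - 1) H(R - a)
#       - sum_{y=a}^{b-1} U(y) [H(R - y - 1) - H(R - y)],
# where U(n) = u(0) + ... + u(n).  This is the manipulation behind (4.8).

def abel_lhs(u, H, a, b, R):
    return sum(u[y] * H(R - y) for y in range(a, b + 1))

def abel_rhs(u, H, a, b, R):
    U, s = [0.0] * (b + 1), 0.0          # partial sums U(n)
    for n in range(b + 1):
        s += u[n]
        U[n] = s
    Ua1 = U[a - 1] if a >= 1 else 0.0
    boundary = U[b] * H(R - b) - Ua1 * H(R - a)
    correction = sum(U[y] * (H(R - y - 1) - H(R - y)) for y in range(a, b))
    return boundary - correction

# Hypothetical slowly growing "renewal-like" sequence and decreasing "tail".
u = [1.0 / (k + 1) for k in range(200)]   # stand-in for u_a
H = lambda t: 1.0 / (1.0 + t) ** 1.5      # stand-in for H_+
R = 150
print(abs(abel_lhs(u, H, 30, 120, R) - abel_rhs(u, H, 30, 120, R)) < 1e-12)  # → True
```

The identity is exact (the asymptotics in the text only enter when the boundary terms and the Stieltjes-type correction are estimated), so the check holds up to floating-point rounding.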
P[S_{N(R)} = y | Λ_R]

In this section, we suppose

(∗) either the assumption of Theorem 2 or that of Proposition 1.2 holds,

and compute the conditional probability, given Λ_R and S_0 = x ∈ [0, R], that S exits [0, R] from the site y ∈ [0, R]. Denote it by q_R(x, y):

q_R(x, y) = P_x[S_{N(R)} = y | Λ_R].

Let 1/2 < δ < 1. In Sections 4 and 5 we have shown that uniformly for 0 ≤ x < δR,

P_x[εR < Z(R) < R/ε | Λ_R] → 1 as R → ∞ and ε ↓ 0, (6.1)

and that uniformly for 0 ≤ x < δy, as y → ∞,

P(Λ_y)/v° ∼ −u_a(y)A(y) ∼ U_a(y)H_+(y), P_x(Λ_y) ∼ −A(y) g_Ω(x, y), (6.2)

g_Ω(x, y) ∼ [U_a(y)V_d(x)/(|A(y)|(x + 1))] Σ_{k=0}^{x} H_+(y − k); (6.3)

and for any 0 < ε < 1/2,

Σ_{y: |x−y|<εx} g_Ω(x, y) ≤ C εx/|A(x)|. (6.4)

Lemma 6.1. Under (∗), as R → ∞,

g_{B(R)}(x, y) = g_Ω(x, y) − [U_a(y)/U_a(R)] g_Ω(x, R){1 + o(1)}; (6.5)

in particular, uniformly for 0 ≤ y ≤ R, as R → ∞,

g_{B(R)}(0, y) = v° u_a(y)[1 − (u_a(R)/U_a(R))/(u_a(y)/U_a(y)) · {1 + o(1)}]. (6.6)

Proof. By using (4.14) or (5.2) (according as E|X| is finite or infinite) and (6.1) we deduce first that

E[g_Ω(S_{σ(R,∞)}, y) | Λ_R] ∼ [U_a(y)/ℓ̂∗(R)]{1 + o(1)} uniformly for 0 ≤ y < R,

and then, using −A(R)/ℓ̂∗(R) ∼ 1/U_a(R) and (6.2), that

E_x[g_Ω(S_{σ(R,∞)}, y); Λ_R] ∼ g_Ω(x, R)U_a(y)/U_a(R) uniformly for 0 ≤ y < R, 0 ≤ x < δR.

Thus (6.5) follows. By g_Ω(0, y) = v° u_a(y), (6.6) is immediate from (6.5).

By Lemma 6.1 and (6.2),

q_R(x, y) = g_{B(R)}(x, y)H_+(R − y)/P_x(Λ_R) = [g_Ω(x, y)/g_Ω(x, R) · {1 + o(1)} − U_a(y)/U_a(R)] H_+(R − y)/|A(R)|. (6.7)

We begin with showing

Lemma 6.2. Under (∗), as R → ∞,

P[S_{N(R)} = y | Λ_R] = [u_a(y)/U_a(R)]{1 + o(1)} as y/R ↓ 0, and = o(H_+(R − y)/|A(R)|) as y/R ↑ 1, (6.8)

and if xH_+(x) is s.v.
in addition, then for each ε > 0, the first formula of (6.8) holds uniformly for 0 ≤ y < (1 − ε)R.

Proof. The formula (6.8) follows from (6.6). Indeed, as y/R ↑ 1 we have g_{B(R)}(0, y) = o(u_a(R)), while u_a(R) ∼ P(Λ_R)/(v°|A(R)|) in view of (6.2). Hence the second case of (6.8) is immediate from the identity

P[S_{N(R)} = y | Λ_R] = g_{B(R)}(0, y)H_+(R − y)/P(Λ_R). (6.9)

In case y/R → 0, it follows from (6.5) that the RHS of the above identity is expressed as [u_a(y)/U_a(R)]{1 + o(1)}. If xH_+(x) is s.v., then for εR < y < (1 − ε)R, (6.6) entails

g_{B(R)}(0, y) = v° u_a(y)[1 − y/R + o(1)],

and hence, using (6.2) as well as (6.9), we see

P[S_{N(R)} = y | Λ_R] ∼ [g_{B(R)}(0, y)/(v° U_a(R))] · 1/(1 − y/R) ∼ u_a(y)/U_a(R).

Taking 0 < ε < 1/2, we put δ = 1 − ε and let x, y be such that 0 ≤ x ∧ y ≤ x ∨ y < δR throughout the sequel. Substituting (6.3) into (6.7) we obtain the following.

(a) If x ∨ y < 1/ε, then q_R(x, y) ∼ g_Ω(x, y)/(V_d(x)U_a(R)).

(b) Uniformly for y < δx, as x → ∞, g_{B(R)}(x, y) ∼ g_Ω(x, y) ∼ V_d(x)U_a(y)/x, and

q_R(x, y) ∼ U_a(y)H_+(R − y) / [U_a(R) ∫_{R−x−1}^{R} H_+(t) dt] ≍ U_a(y)/(U_a(R)x).

(c) Uniformly for y > (x + 1)/δ, as R → ∞,

q_R(x, y) ∼ [H_+(R − y)/|A(R)|] · ( ∫_{y−x−1}^{y} H_+(t) dt / ∫_{R−x−1}^{R} H_+(t) dt − 1 ), which is o(1/R) for y > εR, while

q_R(x, y) ∼ [u_a(y)/U_a(R)] · ∫_{y−x−1}^{y} H_+(t) dt / [(x + 1)H_+(y)] ≍ u_a(y)/U_a(R) as y/R → 0.
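The estimates (a)–(c) are all driven by the Green function g_{B(R)} and, via (6.7), by g_Ω. For a walk with bounded steps (unlike the heavy-tailed walks treated in this paper) the killed Green function can be computed exactly by solving a finite linear system, which makes structural identities such as the duality symmetry (4.19), g_{B(r)}(x, r − y) = g_{B(r)}(y, r − x), easy to verify numerically. A self-contained pure-Python sketch with a hypothetical asymmetric step distribution:

```python
# Exact Green function g_{B(r)}(x, y): expected number of visits to y before
# leaving [0, r], for the walk started at x.  For a finitely supported step
# law it equals (I - Q)^{-1}, where Q is the substochastic transition matrix
# restricted to {0, ..., r}.  We then check the symmetry (4.19):
#     g_{B(r)}(x, r - y) = g_{B(r)}(y, r - x).

# Hypothetical asymmetric step distribution on {-2, ..., 2} (illustration only).
step = {-2: 0.10, -1: 0.35, 0: 0.05, 1: 0.30, 2: 0.20}

def green(r):
    n = r + 1
    A = [[(1.0 if i == j else 0.0) - step.get(j - i, 0.0) for j in range(n)]
         for i in range(n)]                      # A = I - Q
    M = [row[:] + [1.0 if i == k else 0.0 for k in range(n)]
         for i, row in enumerate(A)]             # augment with identity
    for col in range(n):                         # Gauss-Jordan elimination
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for i in range(n):
            if i != col:
                f = M[i][col]
                M[i] = [v - f * w for v, w in zip(M[i], M[col])]
    return [row[n:] for row in M]                # g[x][y] = (I - Q)^{-1}[x][y]

r = 12
g = green(r)
err = max(abs(g[x][r - y] - g[y][r - x])
          for x in range(r + 1) for y in range(r + 1))
print(err < 1e-9)  # → True
```

The symmetry holds for any step law — it uses only time reversal and the reflection z ↦ r − z — so this is a structural sanity check, not evidence about the heavy-tail asymptotics above.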
Using these estimates and (6.4) we infer that as R → ∞ q R ( x, y ) = o (cid:0) /R (cid:1) for y > x/δ, y > εR, ∼ u a ( y ) U a ( R ) if x/y → , y/R → , ≥ U a ( y ) U a ( R ) x { o δ (1) } if δx ≤ y ≤ x = o ( R ) , ∼ U a ( y ) H + ( R − y ) U a ( R ) R RR − x − H + ( t ) dt if y < δx, x → ∞ , (6.10) X y : | x − y | <ε ′ R q R ( x, y ) ≤ Cε ′ xA ( x ) V d ( x ) U a ( R ) ∼ ε ′ CU a ( x ) U a ( R ) (0 < ε ′ < ε/ , (6.11)28et n R be any function such that U a ( n R ) ∼ U a ( R ) and n R /R → 0. Then, on employing (6.11), δR X y = x ∨ n R q R ( x, y ) ≤ x ∨ n R + n R X y = x ∨ n R q R ( x, y ) + C P δRy = n R u a ( y ) U a ( R ) −→ , (6.12)and we see that the mass of the conditional distribution P x (cid:2) S N ( R ) ∈ · (cid:12)(cid:12) Λ R (cid:3) concentrateson (1 + x ) /ε < y < n R if U a ( x ) /U a ( R ) → 0, andon o ( x ) < y < x, if U a ( x ) /U a ( R ) → , where o ( x ) is any function such that o ( x ) /x → o ( x ) → ∞ . This suggests that theconditional walk moves to the right in the former case and to the left in the latter up to theepoch of exiting B ( R ). If ε < U a ( x ) /U a ( R ) < δ , the mass may be possibly distributed on bothsides of x . Proof of Proposition 1.3. Let ε > n R be as above. Observe P x (cid:2) Z ( R ) = z (cid:12)(cid:12) Λ R (cid:3) = R X y =0 g B ( R ) ( x, y ) p ( z + R − y ) P x (Λ R ) = R X y =0 q R ( x, y ) p ( z + R − y ) H + ( R − y ) . Then by the first case of (6.10) and (6.12) we see that uniformly for x > n R and z > ε R , thelast sum restricted to y > x is negligible so that P x (cid:2) Z ( R ) ≤ z (cid:12)(cid:12) Λ R (cid:3) = x X y =0 g Ω ( x, y ) P x (Λ R ) P [ R − y < X ≤ R − y + z ] + o (1) . We substitute the asymptotic form of g Ω ( x, y ) and P x (Λ R ). Noting (6.11) which allows us toreplace g Ω ( x, y ) by U a ( y ) / ˆ ℓ ∗ ( x ), we infer that the above sum is asymptotically equivalent to1 U a ( R ) R RR − x H + ( t ) dt x X y =0 U a ( y ) (cid:2) H + ( R − y ) − H + ( R + z − y ) (cid:3) . 
Now assuming H + ( t ) ∼ L + ( t ) /t , we see that x X y =0 U a ( y ) H + ( R − y ) ∼ U a ( x ) Z RR − x H + ( t ) dt ∼ − U a ( R ) L + ( R ) log (cid:2) − R − x (cid:3) , and similarly P xy =0 U a ( z ) H + ( R + z − y ) ∼ − U a ( R ) L + ( R ) log[1 − ( R + z ) − x ] . Since the ratiolog[1 − ( R + z ) − x ] to log[1 − R − x ] is bounded away from 1 we obtain the formula of Proposition1.3 (with z in place of y ) in case x > n R .For x ∨ n R , by (6.10) and (6.12) again, we see P x (cid:2) Z ( R ) ≤ z (cid:12)(cid:12) Λ R (cid:3) = n R X y =0 g Ω ( x, y ) P x (Λ R ) P [ R − y < X ≤ R − y + z ] + o (1) . It is easy to see that P n R y =0 g Ω ( x, y ) ∼ V d ( x ) U a (2 n R ) (cf. [27, Lemma 2.1]). Since, by (1.10), H + ( R − y ) − H + ( R + z − y ) = H + ( R ) − H + ( R + z ) + o ( H + ( R )) for 0 ≤ y ≤ x , we obtain P x (cid:2) Z ( R ) ≤ z (cid:12)(cid:12) Λ R (cid:3) = 1 H + ( R ) (cid:2) H + ( R ) − H + ( R + z ) (cid:3) + o (1) . Now the asserted formula of the proposition follows immediately.29 emark . We can easily obtain the corresponding result of P x [ S N ( R ) = y | T < σ ( R, ∞ ) ] for thecase (NRS) with lim sup a ( x ) /a ( − x ) < 1. We consider it in the dual form. Let 0 < ε < / <δ < ≤ x < δR . By L(2.1) we have P δRy =0 g Ω ( x, y ) H + ( R − y ) ≤ V d ( x ) U a ( δR ) H + ((1 − δ ) R )and under (1.1) P x [ S N ( R ) < δR | Λ R ] ≤ P δRy =0 g Ω ( x, y ) H + ( R − y ) P x (Λ R ) ≤ V d ( R ) U a ( R ) H + ((1 − δ ) R ) . Owing to L(3.3) and Remark 3.2 of [27] it, therefore, follows that if (C3) holds, then P x [ S N ( R ) < δR | Λ R ] −→ , saying that the conditional distribution of S N ( R ) tends to concentrate in an interval containedin [ δR, R ] for any δ < 1. (According to [27, Theorem 1]) the same is true under the condition(AS) with α = 2 or m + ( x ) /m − ( x ) → m ± ( x ) = R x η ± ( t ) dt .)Suppose that (PRS) holds, EX = 0 and lim sup a ( − x ) /a ( x ) < 1. 
Then A ( x ) ∼ ℓ ∗ ( x ) ℓ ♯ ( x ), v d ( x ) = o (cid:0) V a ( x ) /x (cid:1) (by (the dual of) Lemma 4.1), and by Lemma 3.5 of [27] Z x V d ( t ) H + ( t ) dt ∼ ℓ ∗ ( x ) . By these relations we can easily verify that g B ( R ) ( w, y ) ∼ g Ω ( w, y ) uniformly for 0 ≤ w ≤ εR Estimates of g Ω ( x, y ) under (AS). In this subsection we suppose (AS) to hold. Under (AS), there exist s.v. functions ℓ and ˆ ℓ such that U a ( x ) ∼ x αρ /ℓ ( x ) and V d ( x ) ∼ x α ˆ ρ / ˆ ℓ ( x ) . (7.1)as mentioned at the beginning of Section 4 of [27], while it is shown in [27](Lemma 4.1) U a ( x ) V d ( x ) H ( x ) −→ (cid:2) qαρB ( αρ, α ˆ ρ ) (cid:3) − ( πα ˆ ρ ) − sin πα ˆ ρ, where B ( s, t ) = Γ( s + t ) / Γ( s )Γ( t ). By (2.2) we have that if α ˆ ρ < P [ − ˆ Z ≥ x ] /v ∼ ( πα ˆ ρ ) − (sin πα ˆ ρ ) /V d ( x ), and these together yields P [ − ˆ Z ≥ x ] ∼ qαρB ( αρ, α ˆ ρ ) x − α ˆ ρ L ( x ) /ℓ ( x ) . (7.2)30 emma 7.1. Suppose either α = 1 with ρ / ∈ { , , } (entailing p = q ) or < α < . Then (a) u a ( x ) ∼ αρ x αρ − /ℓ ( x ) and (b) v d ( x ) ∼ α ˆ ρ x α ˆ ρ − / ˆ ℓ ( x ) . (7.3) Proof. We prove only (b), (a) being dealt with in the same way. First of all we recall that if α ˆ ρ = 1, then ˆ Z is r.s. and the equivalence (b) follows (cf. [25, Appendix B], [24]). It is alsonoted that in case 1 / < α ˆ ρ < V d without any extraassumption (cf. e.g., [2]) so that (b) follows immediately from (7.1).The proof for α ˆ ρ ≤ / α ˆ ρ ≤ / ε ↓ lim sup x →∞ xP [ − ˆ Z ≥ x ] εx X z =1 P [ − ˆ Z = x − z ] z ( P [ − ˆ Z ≥ z ]) = 0 . (7.4)Note that (a)—as well as (7.2)—is applicable since αρ > / 2. Writing p ( · ) for P [ X = · ], wehave the identities (equivalent to each other); P [ − ˆ Z = x ] v = ∞ X z =0 u a ( z ) p ( − x − z ) , P [ − ˆ Z ≥ x ] v = ∞ X z =0 u a ( z ) F ( − x − z )(see e.g. [11, Eq(XII.3.6a)]). 
We accordingly deduce that the sum in (7.4) is dominated by aconstant multiple of J := εx X z =1 ∞ X y =1 y αρ − ℓ ( y ) p ( − x − y + z ) 1 z (cid:18) z α ˆ ρ ℓ ( z ) L − ( z ) (cid:19) , where L − ( xt ) = F ( − t ) /t α . We may suppose y αρ − /ℓ ( y ) to be decreasing. (If αρ = 1, one maytake R x P [ Z > t ] dt/v for ℓ ( x ).) Then we perform summation by parts for the inner sum and,after replacing F ( − t ) which thereby comes up by qL ( t ) /t α with L appropriately chosen, makesummation by parts back as before to obtain ∞ X y =1 y αρ − ℓ ( y ) p ( − x − y + z ) ∼ α ∞ X y =1 qL ( x + y − z ) y αρ − ℓ ( y )( x + y − z ) α +1 ≤ C L ( x ) x αρ − α − ℓ ( x ) . Hence J ≤ C L − ( x ) x αρ − α − ℓ ( x ) εx X z =1 z (cid:18) z α ˆ ρ ℓ ( z ) qL ( z ) (cid:19) ≤ C ′ ℓ ( x ) L ( x ) ε α ˆ ρ x α ˆ ρ − . Thus by (7.2) xP [ − ˆ Z ≥ x ] J ≤ C ′′ ε α ˆ ρ , verifying (7.4). Remark . (a) The proof above depends on the fact that if ( ρ ∨ ˆ ρ ) α > / 2, either Z orˆ Z admits the strong renewal theorem. For this reason the case α = 2 ρ = 1 is excluded fromLemma 7.3, while the case 1 / < α < ρ satisfies the above condition.Any way, taking the first formula of (1.11) in Theorem 2 into account, we have the strongrenewal theorem for U a and V d at least in case α ≥ α = 2 ρ = 1; (2) α = ρ ∨ ˆ ρ = 2 p = 1; (3) α = ρ ∨ ˆ ρ = 1 and E | X | = ∞ , of whichProposition 1.2 provides a partial result for the case (3).(b) In the proof of Lemma 7.3 the property of the positive tail of F is used only throughthose of the distributions of Z and ˆ Z . Since the regular variation of u a ( x ) and F ( − x ) impliesthat of P [ − ˆ Z > x ], it accordingly follows—whether (AS) is true or not—that31 f U a ( x ) ∼ x β /ℓ ( x ) , F ( − x ) ∼ L − ( x ) x − α with L − and ℓ s.v. and < α − β < ,and / < β ≤ , then (7.3b) holds with α ˆ ρ = β and ˆ ℓ ( x ) = Γ( α − β + 1)Γ( β + 1)Γ( α ) π − sin α ˆ ρπ · L − ( x ) ℓ ( x ) . If Z is r.s. 
in particular, then from the condition F ( − x ) ∼ L − ( x ) x − α , < α < ,it follows that v d ( x ) ∼ (cid:2) ( α − π − sin( α − π (cid:3) x α − ℓ ( x ) /L − ( x ) . Lemma 7.1 allows us to compute the precise asymptotic form of g Ω ( x, y ) for α ≥ ρ ∈ { , , } , which case however is covered by Proposition 3.1. Note that g Ω (0 , y ) = v u a ( y )and g Ω ( x, 0) = v d ( y ). Lemma 7.2. (i) If < α ≤ , g Ω ( x, y ) ∼ αρ V d ( x ) ℓ ( y ) x − αρ h α ˆ ρ ( y/x ) as y → ∞ uniformly for ≤ x ≤ y,α ˆ ρ U a ( y )ˆ ℓ ( x ) y − α ˆ ρ h αρ ( x/y ) as x → ∞ uniformly for ≤ y ≤ x, (7.5) where h λ ( ξ ) = λ Z t λ − ( ξ − t ) α − λ − dt (0 < λ ≤ , ξ ≥ . (ii) Let α = 1 and < ρ < . If ρ = 1 / , then for each < δ < the equivalence (7.5)holds uniformly both for ≤ x < δy and for ≤ y < δx , and as x → ∞ g Ω ( x, x ) ∼ ρ ˆ ρ Z x dtℓ ( t )ˆ ℓ ( t ) t ; (7.6) and f ρ = 1 / and F is recurrent, then g Ω ( x, x ) ∼ a ( x ) ∼ π Z x dtL ( t ) t . In either case, for each ε > g Ω ( x, y ) = o ( g Ω ( y, y )) as y → ∞ uniformly for x : | x − y | > εy. (7.7)(iii) If F is transient, then g Ω ( x, x ) → /P [ σ = ∞ ] and for x ≥ , g Ω ( x, y ) = o ( g Ω ( y, y )) as | y − x | ∧ y → ∞ . Note that h λ ≡ λ = 1, h λ (1) = λ/ ( α − h λ ( ξ ) ∼ ξ α − λ − as ξ → ∞ . Proof. Let 1 ≤ x ≤ y . Then g Ω ( x, y ) = x X k =0 v d ( k ) u a ( y − x + k ) . If x/y → 0, then g Ω ( x, y ) ∼ αρ V d ( x ) y αρ − /ℓ ( y ) , h α ˆ ρ ( ξ ) ∼ ξ αρ − as ξ → ∞ . For y ≍ x by Lemma7.1 the above sum divided by αρ is asymptotically equivalent to x X k =0 k α ˆ ρ − ( y − x + k ) αρ − ˆ ℓ ( k ) ℓ ( y − x + k ) ∼ x α − ˆ ℓ ( x ) ℓ ( y ) Z t α ˆ ρ − (cid:18) yx − t (cid:19) αρ − dt ∼ V d ( x ) h α ˆ ρ ( y/x ) ℓ ( y ) x − αρ , verifying the first formula of (7.5). The second one is dealt with in the same way. (i) has beenproved. (iii) is easy to see (cf. Appendix (B)).Let α = 1 and 0 < ρ < 1. Then for ρ = 1 / 2, by Lemma 7.1 v d ( k ) u a ( k ) ∼ ρ ˆ ρ/ [ kℓ ( k )ˆ ℓ ( k )]and (7.6) follows immediately. 
The first assertion of the case ρ = is verified in the same wayas for (i). If α = 2 ρ = 1 and F is recurrent, then a ( x ) ∼ a ( − x ) ∼ π − R x [ L ( t ) t ] − dt accordingto [25, Proposition 61(iv)]. (7.7) follows from Proposition 3.1 (see Remark 3.1(c) and Lemma4.3).Let 1 < α ≤ 2. Since h λ ( ξ ) ∼ ξ α − λ − as ξ → ∞ , Lemma 7.2(i) entails that g Ω ( x, y ) ≍ ( V d ( x ) U a ( y ) /y, for 0 ≤ x ≤ y,V d ( x ) U a ( y ) /x for 0 ≤ y ≤ x, (7.8)where the constants involved in ≍ depend only on αρ and ≍ can be replaced by ∼ in case y/x → ∞ or 0. By Lemma 4.1 of [27](i) that gives the asymptotics of V d ( y ) U a ( y ) H ( y ), it alsofollows that as y → ∞ g Ω ( y, y ) ∼ αρh α ˆ ρ (1) V d ( y ) U a ( y ) y ∼ ( λ α,ρ / [ yH ( y )] , < α < ,y/ R y tH ( t ) dt, α = 2 , (7.9)where λ α,ρ = κα ρ ˆ ρ/ ( α − > κ is explicitly given as a function of ρ and α only). Lemma 7.3. If < α ≤ , then as R → ∞ P x (cid:2) σ R < T (cid:3) ∼ (cid:20) ( R/x ) − αρ h α ˆ ρ ( R/x ) h α ˆ ρ (1) (cid:21) V d ( x ) V d ( R ) uniformly for ≤ x ≤ R, (cid:20) ( R/x ) α ˆ ρ h αρ ( x/R ) h αρ (1) (cid:21) V d ( x ) V d ( R ) uniformly for x ≥ R ; (7.10) in particular P x (cid:2) σ R < T (cid:3) → as x/R → , ∼ [( α − /αρ ]( R/x ) − α ˆ ρ ˆ ℓ ∗ ( R ) / ˆ ℓ ∗ ( x ) as x/R → ∞ , ∼ [( α − /α ˆ ρ ] V d ( x ) /V d ( R ) as x/R → . (7.11) Proof. Because of the identity P x (cid:2) σ R < T (cid:3) = g Ω ( x, R ) /g Ω ( R, R ) the first formula of (7.10)follows from Lemma 7.2. The derivation of the second one is similar. By lim ξ →∞ ξ αρ − h α ˆ ρ ( ξ ) = 1(7.11) follows from (7.10) together with h λ (1) = λ/ ( α − ξ − αρ h α ˆ ρ ( ξ ) decreasingly approaches unity as ξ → ∞ if αρ < h α ˆ ρ ( ξ ) ≡ αρ = 1. Combining (2.1) with Lemma 7.3 yields that if 1 < α < 2, for 0 ≤ x ≤ RV d ( x ) V d ( R ) ≥ P x (Λ R ) ≥ P x (cid:2) σ R < T (cid:3) ≥ α − α ˆ ρ · V d ( x ) V d ( R ) { o (1) } . (7.12)33 roof of Proposition 1.4. 
The statement (1.23) in (i) follows from Lemma 47 of [25]—it also follows directly from (7.10) on noting h α − ≡ α ˆ ρ = α − αρ = 1. Let p > αρ < 1. Since then h αρ ( ξ ) is decreasing, the second case of (7.10) implies thatlim inf R →∞ inf x ≥ (1+ ε ) R P x [ T < σ R ] > ε > 0. This shows the inequality of (i) owing to Lemma 4.4 of [27] that entails thatfor p > P x (cid:2) Z ( R ) > εR (cid:12)(cid:12) Λ R (cid:3) ≥ / x < δR ) . (7.13)The asymptotic equivalence stated last in (i) is a reduced form of the first formula in (7.10).(ii) follows from the dual of (3.15) (see also Corollary 3.1 and use (1.1)) with the help ofthe asymptotic form of a ( x ) and a ( − x ) given in (3.4).The case y → ∞ of (iii) follows from Lemma 4.6(ii). The other case is cheaper and immediatefrom (3.15).(iv) follows from Lemma 7.4 given in the next subsection if ρ = 1. In the other case0 < ρ < 1, with the help of Lemma 4.4 of [27] that says that if 0 < ( α ∨ ρ < P x (cid:2) Z ( R ) ≤ εR (cid:12)(cid:12) Λ R (cid:3) → R → ∞ and ε ↓ ≤ x < δR , and the required convergence follows. Transient walks. In [27] we have brought in the condition(C4) (AS) holds with α < ρ. If either (C3) or (C4) holds, then u a ( x ) ∼ /ℓ ∗ ( x ) so that P x [ σ R < T ] = g Ω ( x, R ) g Ω ( R, R ) ∼ V d ( x ) U a ( R ) Rg Ω ( R, R ) uniformly for 0 ≤ x < δR, (7.15)whether F is recurrent or transient. In the next lemma we show what is asserted in (iv) ofProposition 1.4 in case ρ = 1, when either (C3) or (C4) holds. Under (C4) ℓ ♯ should be definedby ℓ ♯ ( t ) = α Z ∞ t s α − F ( − s ) ℓ ( s ) ds ( t > . (7.16) Lemma 7.4. Let F be transient. Then g Ω ( x, x ) → /q ∞ ( < ∞ ) , (7.17) and if either (C3) or (C4) holds, for each δ < , as R → ∞ P x [ σ R < T | Λ R ] ∼ q ∞ /ℓ ∗ ( R ) ℓ ♯ ( R ) −→ uniformly for ≤ x ≤ δR. (7.18) Proof. (7.17) is a standard result for a general transient r.w. (cf. Appendix (B)). The equiv-alence in (7.18) follows from (7.17) in view of (7.15). 
If (C4) holds, this entails (7.18), for by Lemma 8.1 of Appendix (A), $\ell^*(x)\ell^\sharp(x) \sim A(x) \to \infty$.

Suppose (C3) to hold. By the transience of $F$ (entailing $E|X| = \infty$) the probability $P_x[\sigma_0 < \infty]$ tends to zero, hence $P_{x+w}[\sigma_R < \infty] \to 0$ as $w \to \infty$, and it accordingly suffices to show that for any constant $M > 0$,
\[
P_x\big[Z(R) < M \,\big|\, \Lambda_R\big] \to 0 \qquad (R \to \infty)
\tag{7.19}
\]
uniformly for $0 \le x < \delta R$. Put $B = (-\infty,-1] \cup [R+1,\infty)$. Then the conditional probability above is expressed as
\[
\frac{1}{P_x(\Lambda_R)} \sum_{w=0}^{R} g_B(x, R-w)\, P[0 < X - w < M].
\]
Let $N = \lfloor (1-\delta)R/2 \rfloor$, so that $R - x \ge N$ for $0 \le x \le \delta R$. We claim that
\[
\sum_{1 \le w \le N} g_B(x, R-w)\, P[0 < X - w < M] = o\big(P_x(\Lambda_R)\big) \qquad (x \le \delta R).
\tag{7.20}
\]
Since $g_B(x, R-w) \le g_{[1,\infty)}(x-R, -w) = g_\Omega(w, R-x) \sim V_d(w)/\ell^*(R)$ for $1 \le w \le N$ and, since $E[V_d(X); X \ge 0] = V_d(0)$, we obtain
\[
\sum_{1 \le w \le N} g_B(x, R-w)\, P[X = w + y] \le C/\ell^*(R) \qquad (0 \le y \le M).
\tag{7.21}
\]
Summing over $y$ yields that the sum on the LHS of (7.20) is at most a constant multiple of $M/\ell^*(R)$, which is $o(P_x(\Lambda_R))$ as $x \to \infty$, entailing (7.20), for $V_d(R)/\ell^*(R) \sim 1/[\ell^*(R)\ell^\sharp(R)]$ is bounded owing to the equivalence in (7.18) that we have already seen to be true. When $x$ remains in a bounded interval, (7.20) also follows from (7.21). Indeed, for each $x$ fixed, taking any $\varepsilon > 0$ and $r = r(\varepsilon, x)$ suitably, one obtains for $0 \le w \le N$,
\[
g_B(x, R-w) = \sum_{0 \le z \le R/2} P_x[Z(r) = z, \Lambda_r]\, g_B(z, R-w) + o\big(P_x(\Lambda_R)\big),
\]
for $P_x[Z(r) > R/2 \,|\, \Lambda_r] \le Cr\,[1 - F(R/2)] = o(P_x(\Lambda_R))$, and substituting this into (7.21) and taking the summation over $w$ first, we have
\[
\sum_{1 \le w \le N} g_B(x, R-w)\, P[X = w + y] \le C\varepsilon/\ell^*(R) \qquad (0 \le y \le M).
\]
Hence we obtain (7.20) as above, $\varepsilon$ being arbitrary. By $g_B(x,x) \le C$ it is easy to see that the corresponding sum over $N < w \le R$ is also $o(P_x(\Lambda_R))$.

(A) Lemma 8.1. Suppose that either (C3) or (C4) holds.
Then
\[
\ell^*(t)\,\ell^\sharp(t) = -\int_0^t F(-s)\,ds + \int_0^t P[Z > s]\,\ell^\sharp(s)\,ds = A(t) + o\bigg(\int_0^t [1 - F(s)]\,ds\bigg),
\tag{8.1}
\]
and in case $EX = 0$, both $\eta_-$ and $\eta$ are s.v. and
\[
\ell^*(t)\,\ell^\sharp(t) = \int_t^\infty \big[F(-s) - P[Z > s]\,\ell^\sharp(s)\big]\,ds = A(t) + o\big(\eta_+(t)\big).
\tag{8.2}
\]

Put $A_\pm(x) = \int_0^x P[\pm X > t]\,dt$ and suppose (C3) to hold. If the positive and negative tails of $F$ are not balanced in the sense that
\[
\limsup \frac{\eta_+(x)}{\eta_-(x)} < 1 \ \text{ if } EX = 0, \qquad \text{and} \qquad \limsup \frac{A_-(x)}{A_+(x)} < 1 \ \text{ if } E|X| = \infty,
\tag{8.3}
\]
then (8.1) and (8.2) together show (3.6), i.e., $\ell^*(x)\ell^\sharp(x) \sim A(x)$, or equivalently, in view of (3.1),
\[
V_d(x)\,U_a(x) \sim x/A(x).
\tag{8.4}
\]
Since $\eta$ is s.v. as noted in Lemma 8.1, the first condition of (8.3) implies $xH(x)/\eta(x) \to 0$, and $A(x) = \eta_-(x) - \eta_+(x) \asymp \eta_-(x) \asymp \eta(x)$ shows that $F$ is p.r.s.; in particular, if $F$ is recurrent, $a(x) \asymp \int_0^x \big[F(-t)/\eta_-^2(t)\big]\,dt \sim 1/\eta_-(x)$, hence $a(x) \asymp 1/A(x)$, or, what amounts to the same, $\limsup a(-x)/a(x) < 1$. Thus if $EX = 0$,
\[
\limsup \frac{\eta_+(x)}{\eta_-(x)} < 1 \;\Longrightarrow\; \limsup \frac{a(-x)}{a(x)} < 1 \;\Longrightarrow\; (8.4),
\tag{8.5}
\]
where the second implication is observed in Remark 3.1(b). We do not know whether the converse of the first implication in (8.5) is true or not. From (8.2), written as $\ell^*(x)\ell^\sharp(x) = \eta_-(x) - \eta_+(x) + o(\eta_+(x))$, we also infer that under $EX = 0$,
\[
\limsup \frac{\eta_-(x)}{A(x)} < \infty \iff \limsup \frac{\eta_+(x)}{\eta_-(x)} < 1 \iff \limsup \frac{\eta_-(x)}{\ell^*(x)\,\ell^\sharp(x)} < \infty,
\]
and combining this with (3.5) we see that $\limsup a(x)\eta(x) = \infty$ if $\limsup a(-x)/a(x) = 1$.

Suppose that $F$ is transient and p.r.s. Then by Proposition 1.2, $g_\Omega(x,x) = G(x) - G(-x) + o(G(x))$, and the same reasoning as for recurrent $F$ shows that $\limsup G(-x)/G(x) < 1$ implies (8.4). Since $G(x) \sim \int_x^\infty \big[H_+(t)/A^2(t)\big]\,dt$, in a similar way as above we see that
\[
\limsup \frac{A_-(x)}{A_+(x)} < 1 \;\Longrightarrow\; \limsup \frac{G(-x)}{G(x)} < 1 \;\Longrightarrow\; (8.4).
\]
(8.6)

(B) Let $F$ be transient, so that we have the Green kernel $G(x) := \sum_{n=0}^\infty P[S_n = x] < \infty$. For $y \ge 0$, $x \in \mathbb{Z}$,
\[
G(y-x) - g_\Omega(x,y) = \sum_{w=1}^\infty P_x[S_T = -w]\, G(y+w).
\tag{8.7}
\]
According to the Feller-Orey renewal theorem [11, Section XI.9], $\lim_{|x|\to\infty} G(x) = 0$ (under $E|X| = \infty$), showing that the RHS above tends to zero as $y \to \infty$ (uniformly in $x \in \mathbb{Z}$); in particular $\lim g_\Omega(x,x) = G(0) = 1/P[\sigma_0 = \infty]$. It also follows that $P_x[\sigma_0 < \infty] = G(-x)/G(0) \to 0$.

References

[1] J. Bertoin, Lévy Processes, Cambridge Univ. Press, Cambridge (1996).
[2] N.H. Bingham, C.M. Goldie and J.L. Teugels, Regular Variation, Cambridge Univ. Press, Cambridge, 1989.
[3] J. Bertoin and R.A. Doney, On conditioning a random walk to stay nonnegative, Ann. Probab., no. 4 (1994), 2152-2167.
[4] F. Caravenna and L. Chaumont, Invariance principles for random walks conditioned to stay positive, Ann. Inst. Henri Poincaré Probab. Statist. (2008), 170-190.
[5] F. Caravenna and R.A. Doney, Local large deviations and the strong renewal theorem, arXiv:1612.07635v1 [math.PR] (2016).
[6] R.A. Doney, Local behaviour of first passage probabilities, Probab. Theory Relat. Fields (2012), 559-588.
[7] R.A. Doney, Conditional limit theorems for asymptotically stable random walks, Z. Wahrsch. Verw. Gebiete (1985), 351-360.
[8] R.A. Doney, Fluctuation Theory for Lévy Processes, Lecture Notes in Math. 1897, Springer, Berlin (2007).
[9] K.B. Erickson, Strong renewal theorems with infinite mean, Trans. Amer. Math. Soc. (1970), 263-291.
[10] K.B. Erickson, The strong law of large numbers when the mean is undefined, Trans. Amer. Math. Soc. (1973), 371-381.
[11] W. Feller, An Introduction to Probability Theory and Its Applications, vol. 2, 2nd edn., John Wiley and Sons, New York (1971).
[12] J.B.G. Frenk, The behavior of the renewal sequence in case the tail of the time distribution is regularly varying with index -1, Advances in Applied Probability (1982), 870-884.
[13] P. Griffin and T.
McConnell, Gambler's ruin and the first exit position of random walk from large spheres, Ann. Probab. (1994), 1429-1472.
[14] H. Kesten, Random walks with absorbing barriers and Toeplitz forms, Illinois J. Math. (1961), 267-290.
[15] H. Kesten and R.A. Maller, Stability and other limit laws for exit times of random walks from a strip or a half line, Ann. Inst. Henri Poincaré (1999), 685-734.
[16] H. Kesten and R.A. Maller, Infinite limits and infinite limit points of random walks and trimmed sums, Ann. Probab. (1994), 1473-1513.
[17] R.A. Maller, Relative stability, characteristic functions and stochastic compactness, J. Austral. Math. Soc. (Series A) (1979), 499-509.
[18] B.A. Rogozin, On the distribution of the first ladder moment and height and fluctuations of a random walk, Theory Probab. Appl. (1971), 575-595.
[19] B.A. Rogozin, The distribution of the first hit for stable and asymptotically stable walks on an interval (in Russian), Theory Probab. Appl. (1972), 342-349.
[20] B.A. Rogozin, Relatively stable walks, Theory Probab. Appl. (1976), 375-379.
[21] F. Spitzer, Principles of Random Walk, Van Nostrand, Princeton, 1964.
[22] K. Uchiyama, On the ladder heights of random walks attracted to stable laws of exponent 1, Electron. Commun. Probab. (2018), no. 23, 1-12. doi.org/10.1214/18-ECP122
[23] K. Uchiyama, Asymptotically stable random walks of index 1 < α < 2, (2019), 5151-5199.
[24] K. Uchiyama, A renewal theorem for relatively stable variables, Bull. London Math. Soc. (2020), 1174-1190.
[25] K. Uchiyama, Estimates of potential functions of random walks on Z with zero mean and infinite variance and their applications, preprint, available at: http://arxiv.org/abs/1802.09832.
[26] K. Uchiyama, The potential function and ladder variables of a recurrent random walk on Z with infinite variance, Electron. J. Probab. (2020).
[27] K. Uchiyama, The two-sided exit problem for a random walk on Z with infinite variance I, preprint (2019). http://arxiv.org/abs/1908.00303
[28] V. A.
Vatutin and V. Wachtel, Local probabilities for random walks conditioned to stay positive, Probab. Theory Relat. Fields 143