Asymptotic behaviour of randomly reflecting billiards in unbounded tubular domains
M.V. Menshikov∗, M. Vachkovskaia†, A.R. Wade‡

∗ Department of Mathematical Sciences, University of Durham, South Road, Durham DH1 3LE, UK. E-mail: [email protected]
† Department of Statistics, Institute of Mathematics, Statistics and Scientific Computation, University of Campinas–UNICAMP, P.O. Box 6065, CEP 13083–970, Campinas, SP, Brazil. E-mail: [email protected]
‡ Department of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW, UK. Tel: +44 (0) 117 954 6954; Fax: +44 (0) 117 928 7999; E-mail:
Abstract
We study stochastic billiards in infinite planar domains with curvilinear boundaries: that is, piecewise deterministic motion with randomness introduced via random reflections at the domain boundary. Physical motivation for the process originates with ideal gas models in the Knudsen regime, with particles reflecting off microscopically rough surfaces. We classify the process into recurrent and transient cases. We also give almost-sure results on the long-term behaviour of the location of the particle, including a super-diffusive rate of escape in the transient case. A key step in obtaining our results is to relate our process to an instance of a one-dimensional stochastic process with asymptotically zero drift, for which we prove some new almost-sure bounds of independent interest. We obtain some of these bounds via an application of general semimartingale criteria, also of some independent interest.
Keywords: Stochastic billiards; rarefied gas dynamics; Knudsen random walk; random reflections; recurrence/transience; Lamperti problem; almost-sure bounds; birth-and-death chain.
AMS 2000 Subject Classifications:

∗ Partially supported by FAPESP (07/50459–9)
† Partially supported by CNPq (304561/2006–1, 471925/2006–3) and FAPESP (thematic grant 04/07276–02)
‡ Partially supported by the Heilbronn Institute for Mathematical Research

1 Introduction
We consider stochastic billiards (Knudsen random walk) in two-dimensional infinite domains ('tubes') of the form D = {(x, y) ∈ R² : x > A, |y| < g(x)}, where A ≥ 1 and g: [1, ∞) → (0, ∞) is a monotone smooth function. Generally speaking, billiards are dynamical systems describing the motion of a particle in a region with reflection rules at the boundary: such systems have been extensively studied in the mathematical and physical literature. When the reflection rule is randomized, we have a stochastic billiard process. The stochastic models have received much less attention; invariant distributions for stochastic billiards in general (mostly bounded) domains were studied in [5, 9]. In the present paper we study the qualitative (recurrence or transience) and quantitative (almost-sure bounds) behaviour of stochastic billiards in unbounded domains in the plane.

Physical motivation for the billiards model comes from the dynamics of ideal gas models in the so-called Knudsen regime, where intermolecular interactions are neglected. The random reflection law is motivated by the fact that the particle is small and the surface off which it reflects has a complicated (rough) microscopic structure. The behaviour of such Knudsen flows is of interest in several areas of physics, chemistry, and technology. For physical background see, for instance, [3, 16]. For more on the motivation of the model in the present paper, see [5] and references therein. We mention some related models at the end of this section.

Informally, the model that we study can be described as follows. A particle moves ballistically inside the domain D at constant velocity (of unit magnitude, say), and each time that the particle hits the boundary it is reflected at a random angle α ∈ (−π/2, π/2) relative to the interior normal to ∂D, where α does not depend on the initial direction of the particle. So, inside the domain the motion is deterministic: the stochasticity is introduced via the distribution of the random reflection angle α.
In this paper we take the distribution of α to be symmetric about 0.

We consider two types of problem: (i) the continuous-time motion of the particle in the tube; and (ii) the discrete-time embedded process obtained by observing the instances of collisions with the boundary (in other words, unit time elapses between reflections). The embedded process is, from our point of view, of interest in its own right (and behaves very differently to the continuous-time process), and is also a vital ingredient for the study of the continuous-time process. The asymptotic phenomena that we study for each process are also of two main types: (i) recurrence or transience, and existence of moments for recurrence times; and (ii) almost-sure bounds for the location of the particle at time t, as t → ∞.

What kind of regions D are of interest? In the case of g(x) ≡ 1, our model becomes symmetrically reflecting motion in a strip. Roughly speaking for the moment, the horizontal motion of the particle is then described by a random walk with zero drift, and so the model is null-recurrent. On the other hand, if g(x) = x our tube is a wedge and, with our reflection rule, at any reflection there is a positive chance that the particle will head off to infinity and never return to a bounded set: it is transient. Similarly if g(x) = βx for β ∈ (0, ∞). The interesting tubes therefore lie between the strip and the wedge; a representative case is g(x) = x^γ for some γ < 1, although we do consider more general forms for the function g. There are then two main cases: γ ∈ (0, 1) and γ < 0. See Figure 1.

We obtain criteria for recurrence and transience of our processes. Loosely speaking, these results imply, for example, that the processes are transient for γ ≥ 1/2, while for γ ∈ (0, 1/2) the classification is determined by a critical value γ_c ∈ (0, 1/2) depending on the reflection distribution. Also, for very shallow tubes such as g(x) = log x, the introduction of random reflections ensures that the process is recurrent, while the corresponding deterministic evolution along the normals is transient. (See Theorem 2.1 below.)

We also obtain almost-sure bounds for the horizontal position of the particle, in both discrete- and continuous-time settings. Thus we have information about the asymptotic speed of the processes. For example, when g(x) = x^γ for γ ∈ [1/2, 1), at time t the continuous-time process is at position about t^{1/(2−γ)}, ignoring logarithmic terms. In particular, the motion is super-diffusive.

In the case γ < 0, recurrence is evident. In this case we obtain a criterion for the finiteness of the mean recurrence-time (roughly speaking, 'ergodicity'). We also obtain polynomial estimates for the position of the particle; the recurrence has so-called 'heavy tails'.

A step in our proofs will involve relating the stochastic billiard process to an instance of the so-called Lamperti problem (after [17–19]) of a one-dimensional stochastic process with asymptotically zero mean drift. A crucial ingredient in our proofs for the stochastic billiard process is provided by some new results for general Lamperti-type processes. These results (Theorems 4.1, 4.2, 4.3) are of independent interest, and are in some cases apparently new even for the nearest-neighbour random walk (birth-and-death chain). To obtain our almost-sure bounds for Lamperti-type processes, we state and prove some general results on almost-sure bounds for stochastic processes via semimartingale-type criteria.

We conclude this section with some remarks upon related models. As previously mentioned, stochastic billiard models have received comparatively little attention. Classical (that is, non-stochastic) billiard models have been extensively studied, particularly in mathematical physics, and the literature is vast; see for instance [28]. A common approach in the classical setting is via dynamical systems for which the reflection rule is elastic. In this setting, we mention that billiards in certain unbounded domains resembling the tubes considered here (at least for γ ≤ 0) have been studied; see for instance [8, 20–22] and references therein. An infinite-tube billiard model with a stochastic component (cf. the γ = 0 case here) is analysed in [1]. In the dynamical systems setting, studying ergodicity is typically a primary goal. In the stochastic setting, the results of the present paper demonstrate a complete range of behaviours, from positive-recurrence (roughly speaking, 'ergodicity'), through null-recurrence, to transience.

In the next section we give a more formal definition of the model and state our main results. The statement of our auxiliary results on the Lamperti problem is deferred to Section 4.

Figure 1: Some examples of the tubes and the trajectories of the process.

2 The model and main results

2.1 Construction
The rigorous formulation of the stochastic billiard model that we consider is essentially given in [5]. We now describe the construction, which is modified slightly from that in [5] to fit our context. Let A ∈ [1, ∞), which will be large but fixed, to be specified later. For a monotone function g: [1, ∞) → (0, ∞), let

D := D(g; A) := {(x, y) ∈ R² : x > A, |y| < g(x)}.

While the continuous-time process is perhaps the most natural to describe, it is more convenient to construct the discrete-time process first, and then to construct the continuous-time process from that. Thus we define a discrete-time Markov chain, informally obtained by recording the locations of the successive hits of the particle on the boundary ∂D. This is a Markov chain with state-space ∂D ∪ {∞} that we denote by ξ = (ξ_n)_{n ∈ Z+} (Z+ := {0, 1, 2, ...}), where for ξ_n ∈ ∂D we write in coordinates ξ_n = (ξ_n^{(1)}, ξ_n^{(2)}) ∈ ∂D. Thus when ξ_n^{(1)} > A, we have ξ_n^{(2)} = ±g(ξ_n^{(1)}). We call ξ the collisions process. Informally, the state ∞ represents the absorbing state reached if the particle attains a trajectory that will never intersect ∂D again.

Figure 1 illustrates the idea of the construction. We construct ξ formally as follows. Suppose that ξ_0 ∈ ∂D, with ξ_0^{(1)} > A. Let α_0, α_1, α_2, ... be i.i.d. random variables with the distribution of a random angle α where P[|α| < π/2] = 1. For k ∈ Z+, given ξ_k ∈ ∂D ∪ {∞}, we perform a step of our process as follows:

(i) If ξ_k = ∞, set ξ_{k+1} = ∞.
(ii) Otherwise, α_k specifies a ray Γ_k starting at ξ_k ∈ ∂D with angle α_k to the interior normal to ∂D at ξ_k; we adopt the convention that positive values of the angle correspond to the right of the normal, negative values to the left.
(iii) If the ray Γ_k does not intersect ∂D \ {ξ_k}, set ξ_{k+1} = ∞. Otherwise, let (x_k, y_k) be the first point of intersection of the ray Γ_k with ∂D. Then, if x_k = A, set ξ_{k+1} = (2A, g(2A)) ∈ ∂D; else set ξ_{k+1} = (x_k, y_k) ∈ ∂D.

This defines the discrete-time process ξ. The randomness is introduced through the random draws from the reflection distribution α; if α is degenerate (i.e., equal to a constant almost surely), then ξ is deterministic. Note that the construction ensures that ξ_k^{(1)} > A whenever it is defined.

We need some modified form of 'reflection' away from the vertical boundary at x = A, such as that specified by (iii). This is for technical reasons, and in particular ensures that the process does not jump directly to ∞, since that case will not be of interest to us here. The particular form of this 'reflection' given by (iii) is rather arbitrary; any comparable rule will leave the behaviour unchanged. Moreover, if the distribution of α has no atom at 0, we could instead take the reflection rule to be the same on all boundaries without changing the characteristics of the process.

We obtain the continuous-time stochastic process X = (X_t)_{t≥0} with state-space D ∪ ∂D ∪ {∞}, which we call the stochastic billiard process, from the collisions process ξ, essentially by interpolation, as follows. Suppose X_0 = ξ_0 ∈ ∂D. Define the successive collision times of the particle by ν_0 := 0 and, if ξ_k ≠ ∞,

ν_k := Σ_{j=0}^{k−1} ‖ξ_{j+1} − ξ_j‖, (k ∈ N), (2.1)

where ‖·‖ is the Euclidean norm on R² and N := {1, 2, 3, ...}; if ξ_k = ∞, set ν_k := ∞.
Then, for t ≥ 0, set

n(t) := max{n ∈ Z+ : ν_n ≤ t}, (2.2)

so that ν_{n(t)} ≤ t < ν_{n(t)+1} for any t ≥ 0. Then for any t ∈ (0, ∞) we define X_t as follows:

X_t := ξ_n, if t = ν_n, n ∈ N;
X_t := ξ_{n(t)} + (t − ν_{n(t)}) (ξ_{n(t)+1} − ξ_{n(t)}) / ‖ξ_{n(t)+1} − ξ_{n(t)}‖, if t ∈ (ν_{n(t)}, ν_{n(t)+1}) and ξ_{n(t)+1} ≠ ∞;
X_t := ∞, if t ∈ (ν_{n(t)}, ν_{n(t)+1}) and ξ_{n(t)+1} = ∞.

This defines the process X, and in particular ξ is embedded in X via ξ_n = X_{ν_n}, n ∈ Z+ (where X_∞ := ∞). When X_t ∈ D ∪ ∂D, write X_t = (X_t^{(1)}, X_t^{(2)}) in coordinates. A realization of the sequence (α_0, α_1, ...) therefore specifies the processes ξ and X via the construction just described.

We now describe some further assumptions that we make on the function g that specifies the region D and on the distribution of the angle α. Suppose that the random variable α for the angle of reflection satisfies, for some α_0 ∈ (0, π/2),

P[|α| < α_0] = 1, and P[α ≥ x] = P[α ≤ −x] (x ∈ R), (2.3)

so that α is bounded strictly away from ±π/2 and the distribution of α is symmetric around 0. Of special nature is the degenerate case where P[α = 0] = 1; this leads to a deterministic evolution (i.e., reflection always occurs along the normals).

We will shortly introduce some assumptions on the function g defining the domain D. If g(x) = x, then D is a right cone, and at any reflection from ∂D there is positive probability (in fact, probability at least P[α ≥ 0] ≥ 1/2) that ξ (hence X) will jump to ∞: this is an obvious case of transience. Moreover, if α is degenerate, then it is clear that, for g strictly monotone, ξ_n^{(1)} → ∞ if and only if g(x) → ∞; so transience is possible even if g(x) grows arbitrarily slowly, at least in this deterministic case. On the other hand, if g(x) = c ∈ (0, ∞) is constant, then for non-degenerate α the process ξ_n^{(1)} performs a one-dimensional random walk on a half-line with zero drift and uniformly bounded jumps: so in this case ξ will be null-recurrent (loosely speaking for the moment).
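The construction above is straightforward to simulate. The sketch below is our own illustration, not code from the paper: the helper names (`step`, `unit`) and the marching-plus-bisection root search are assumptions of this sketch. It performs one step of the collisions process ξ by rotating the interior normal through the reflection angle and locating the ray's first return to the boundary |y| = g(x) numerically, with rule (iii) for the end-wall at x = A.

```python
import math
import random

def unit(v):
    """Normalize a 2-vector."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def step(g, gp, x0, sign, alpha, A):
    """One step of the collisions process from the boundary point
    (x0, sign*g(x0)) of the tube |y| < g(x), x > A (sign = +1 upper
    boundary, -1 lower): reflect at angle alpha to the interior normal
    and return the next collision point.  Illustrative sketch only."""
    n = unit((gp(x0), -float(sign)))   # interior normal at the start point
    d = (n[0] * math.cos(alpha) - n[1] * math.sin(alpha),
         n[0] * math.sin(alpha) + n[1] * math.cos(alpha))  # rotated ray direction
    px, py = x0, sign * g(x0)
    f = lambda t: abs(py + t * d[1]) - g(px + t * d[0])    # < 0 strictly inside
    t0, t1 = 0.0, 1e-6
    while px + t1 * d[0] > A and f(t1) < 0:   # march until the ray leaves the tube
        t0, t1 = t1, 2.0 * t1
    if px + t1 * d[0] <= A:                   # hit the end-wall x = A: rule (iii)
        return (2.0 * A, g(2.0 * A))
    for _ in range(80):                       # bisection for the crossing time
        tm = 0.5 * (t0 + t1)
        t0, t1 = (tm, t1) if f(tm) < 0 else (t0, tm)
    return (px + t1 * d[0], py + t1 * d[1])

# A few collisions in the tube g(x) = sqrt(x), with alpha uniform on (-1, 1):
random.seed(1)
g = lambda x: math.sqrt(x)
gp = lambda x: 0.5 / math.sqrt(x)
x, s = 10.0, 1
for _ in range(5):
    x, y = step(g, gp, x, s, random.uniform(-1.0, 1.0), 2.0)
    s = 1 if y >= 0 else -1
    print(round(x, 3), round(y, 3))
```

In the strip case g ≡ 1 the step is exact: from (x0, 1) a reflection at angle α lands at (x0 + 2 tg α, −1), which is the zero-mean horizontal increment behind the null-recurrence heuristic above.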
On the other hand, it is at least intuitively plausible that if g(x) decreases sufficiently fast, ξ will be positive-recurrent. In the present paper we interpolate between these three situations; formally, we assume:

(A1) g: [1, ∞) → (0, ∞) is monotonic and thrice-differentiable. Moreover, there exists γ ∈ (−∞, 1) for which, as x → ∞,

g(x) = x^{γ + o(1)}, g′(x) = [γ + o(1)] g(x)/x, g′′(x) = [γ(γ−1) + o(1)] g(x)/x², |g′′′(x)| = o(x^{−2}).

Examples of functions satisfying (A1) include x^γ or x^γ (log x)^β, where γ < 1. In particular we will be interested in the two cases where, in addition to (A1), either:

g(x) → ∞ and x^{−1} g(x) → 0 as x → ∞; (2.4)

or:

g(x) → 0 as x → ∞. (2.5)

Given (A1), a necessary condition for (2.4) is that γ ∈ [0, 1), and for (2.5) that γ ≤ 0. The conditions on the derivatives of g imposed by (A1) are natural smoothness constraints that are necessary for our purposes. A particular case that we will be interested in is where, for γ < 1, g(x) = x^γ for x ≥ 1. Then g satisfies (A1), and moreover satisfies (2.4) for γ ∈ (0, 1) and (2.5) for γ < 0.

When x^{−1} g(x) → 0, elementary geometry, using the fact that α is bounded strictly away from ±π/2, implies that ξ (and hence X) will almost surely never jump directly to ∞, provided A is large enough (see Lemma 5.1 below). Thus for the remainder of the paper we can take the state-space of ξ to be ∂D and that of X to be D ∪ ∂D.

In the next section we state our main results. Then in Section 2.3 we mention some open problems, and outline the remainder of the paper.

2.2 Main results

Recall the parameter A ∈ [1, ∞). Set

τ_A := inf{t > 0 : X_t^{(1)} ≤ A}; and σ_A := inf{n ∈ Z+ : ξ_n^{(1)} ≤ A};

here and throughout the paper we use the convention inf ∅ := +∞. We will say that the process X is recurrent if P[τ_A < ∞] = 1 and transient otherwise; if recurrent, then it is positive- or null-recurrent according to whether E[τ_A] < ∞ or not. Similarly for the process ξ, but with σ_A instead of τ_A. By the construction of our processes, we have that

τ_A < ∞ ⟺ σ_A < ∞; (2.6)

thus the classification into recurrence or transience transfers directly between ξ and X.

Since |α| < α_0 < π/2 a.s., the nonnegative random variable tg² α is bounded above by tg² α_0 < ∞. Thus E[tg² α] < ∞, and E[tg² α] = 0 if and only if P[α = 0] = 1. As an example, in the case where, for α_0 ∈ (0, π/2), α is uniformly distributed on the interval (−α_0, α_0), it is the case that E[tg² α] = (tg α_0)/α_0 − 1 > 0.

First we consider ξ and X for the case of a growing tube. Positive-recurrence is clearly ruled out in this case (as our processes dominate the zero-drift process in a strip). So here the first important issue is that of transience versus null-recurrence. Define

γ_c := E[tg² α] / (1 + 2 E[tg² α]), (2.7)

so that γ_c ∈ [0, 1/2) and γ_c = 0 if and only if P[α = 0] = 1.

Theorem 2.1
Suppose that the random variable α satisfies (2.3) and that, for γ ∈ [0, 1), g satisfies (A1) and (2.4). Then there exists A_0 ∈ (0, ∞) such that, for all A > A_0 and all ξ_0 = X_0 with ξ_0^{(1)} > A:

(i) ξ, X are transient if γ > γ_c; and
(ii) ξ, X are null-recurrent if γ < γ_c.

In particular, since γ_c < 1/2, Theorem 2.1 says that for γ ≥ 1/2 the processes ξ and X are always transient, regardless of the distribution of α.

A special case of Theorem 2.1 is given by g(x) = (log x)^K for some K > 0; then (A1) holds with γ = 0, and (2.4) holds. In this case it follows from Theorem 2.1 that ξ is transient if P[α = 0] = 1, and otherwise it is null-recurrent. That is, the introduction of randomness ensures that the process returns to a neighbourhood of the origin infinitely often (with probability 1), whereas in the deterministic case the process is transient. The same remark applies to other functions g that grow as x^{o(1)}.

Next we deal with the case of a narrowing tube, when g is decreasing. Now our processes are dominated by the zero-drift random walk in the strip, so transience is impossible. Thus recurrence is evident, and the question of interest in this case is which moments of τ_A, σ_A exist. The following result is for the collisions process ξ.

Theorem 2.2
Suppose that α satisfies (2.3) and that, for γ ≤ 0, g satisfies (A1) and (2.5). Then there exists A_0 ∈ (0, ∞) such that, for all A > A_0 and all ξ_0 with ξ_0^{(1)} > A:

(i) ξ is positive-recurrent if γ < −E[tg² α]; and
(ii) ξ is null-recurrent if γ > −E[tg² α].

It follows from Lemma 5.3(ii) below that the continuous-time process X is also positive-recurrent under the conditions of Theorem 2.2(i). Again, Theorem 2.2 shows that for a sub-polynomial function such as g(x) = 1/log x, a non-degenerate α leads to null-recurrence.

Theorem 2.3 below deals with the important special case of g(x) = x^γ, γ ∈ (0, 1) or γ < 0. In particular, when α is non-degenerate, it covers the two critical cases omitted in Theorems 2.1 and 2.2; they are null-recurrent.

Theorem 2.3 Suppose that α satisfies (2.3) and is not degenerate, and that g(x) = x^γ, γ < 1.

(i) Suppose γ = γ_c > 0. Then there exists A_0 ∈ (0, ∞) such that, for all A > A_0 and all ξ_0 with ξ_0^{(1)} > A, ξ is null-recurrent.
(ii) Suppose γ = −E[tg² α] < 0. Then there exists A_0 ∈ (0, ∞) such that, for all A > A_0 and all ξ_0 with ξ_0^{(1)} > A, ξ is null-recurrent.

We now turn to our results on almost-sure bounds for our processes, that is, how far from the origin the particle will typically be in both the discrete- and continuous-time processes. First we give a basic statement that says that such questions are non-trivial.
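As a quick numerical sanity check on the quantities appearing in Theorems 2.1–2.3, the sketch below (our own illustration; the function names are assumptions of this sketch) recovers E[tg² α] for α uniform on (−α_0, α_0) and evaluates γ_c from (2.7). The closed form E[tg² α] = (tg α_0)/α_0 − 1 used for comparison follows from ∫ tan² x dx = tan x − x.

```python
import math

def e_tan2_uniform(a0, n=200000):
    """E[tan^2(alpha)] for alpha ~ Uniform(-a0, a0), by a midpoint Riemann sum."""
    h = 2.0 * a0 / n
    s = sum(math.tan(-a0 + (k + 0.5) * h) ** 2 for k in range(n))
    return s * h / (2.0 * a0)

def gamma_c(e_tan2):
    """Critical exponent gamma_c of (2.7)."""
    return e_tan2 / (1.0 + 2.0 * e_tan2)

a0 = 1.0
numeric = e_tan2_uniform(a0)
closed = math.tan(a0) / a0 - 1.0          # closed form for the uniform law
print(numeric, closed, gamma_c(closed))   # gamma_c always lies in [0, 1/2)
```

Whatever the reflection law, gamma_c(e) increases from 0 towards 1/2 as E[tg² α] grows, matching the remark after Theorem 2.1 that γ ≥ 1/2 always gives transience.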
Proposition 2.1
Suppose that α satisfies (2.3). Suppose that g satisfies (A1) and either (i) g satisfies (2.4); or (ii) g satisfies (2.5) and α is non-degenerate. Then, for A sufficiently large and any ξ_0 = X_0 with ξ_0^{(1)} = X_0^{(1)} > A, a.s.,

lim sup_{n→∞} ξ_n^{(1)} = +∞, and lim sup_{t→∞} X_t^{(1)} = +∞.

If g satisfies (2.5) and ξ_0^{(1)} is not large enough in this last result, then it is clear that, for some distributions of α (with small bound α_0), the particle may get 'trapped' in a bounded set. This case is not of interest for us here.

We want to quantify the result of Proposition 2.1. Again we can proceed more generally, but our results are clearer if we take g(x) = x^γ (γ < 1) from now on. Under the more general assumption (A1), we believe that our techniques still apply, with some modifications, and that the polynomial exponents in the theorems below remain valid. Our first almost-sure bounds are for the collisions process in the case γ ∈ (0, 1).

Theorem 2.4
Suppose that α satisfies (2.3) and g(x) = x^γ with γ ∈ (0, 1). Suppose that X_0^{(1)} = ξ_0^{(1)} > A for A sufficiently large. For any ε > 0, a.s., for all but finitely many n ∈ Z+,

max_{0≤m≤n} ξ_m^{(1)} ≤ n^{1/(2(1−γ))} (log n)^{(1/(2(1−γ))) + ε}, and (2.8)
max_{0≤m≤n} ξ_m^{(1)} ≥ n^{1/(2(1−γ))} (log n)^{−(1/(2(1−γ))) − ε}. (2.9)

Moreover, if γ > γ_c (so that by Theorem 2.1 ξ is transient), then there exists D ∈ (0, ∞) such that, a.s., for all but finitely many n ∈ Z+,

ξ_n^{(1)} ≥ n^{1/(2(1−γ))} (log n)^{−D}. (2.10)

Theorem 2.4 shows that we have polynomial behaviour, but that on the time-scale of the collisions process ξ the speed n^{−1} ξ_n^{(1)} can be very large for γ close to 1. The next result gives the corresponding bounds for the stochastic billiard process X. On this time-scale the behaviour is very different, since the particle can spend much more time between collisions.

Theorem 2.5
Suppose that α satisfies (2.3) and g(x) = x^γ for γ ∈ (0, 1). Suppose that ξ_0^{(1)} > A for A sufficiently large. For any ε > 0, a.s., for all t > 0 large enough,

sup_{0≤s≤t} X_s^{(1)} ≥ t^{1/(2−γ)} (log t)^{−(1/((2(1−γ))(2−γ))) − ε}. (2.11)

Moreover, if γ > γ_c (so that by Theorem 2.1 X is transient), then there exists D ∈ (0, ∞) such that, a.s., for all t large enough,

t^{1/(2−γ)} (log t)^{−D} ≤ X_t^{(1)} ≤ t^{1/(2−γ)} (log t)^{D}. (2.12)

In particular, in the (perhaps most interesting) transient case, (2.12) shows that the asymptotic speed of the particle is zero, i.e., lim_{t→∞} t^{−1} X_t^{(1)} = 0 a.s., but the motion is super-diffusive. Moreover, as γ ↑ 1 we approach linear growth, while as γ ↓ 0 we approach diffusive growth, as expected in view of the remarks just above the statement of (A1). (2.11) also demonstrates super-diffusive growth, even in the recurrent case.

Now we consider the case where g(x) = x^γ for γ < 0. If γ < −E[tg² α], we use the notation

ρ(γ) := E[tg² α] / ((1−γ)(2 E[tg² α] − γ)),

so that 0 ≤ ρ(γ) < 1/(2(1−γ)). The next result deals with the collisions process ξ.

Theorem 2.6
Suppose that α satisfies (2.3) and is non-degenerate, and that g(x) = x^γ for γ < 0. Suppose that ξ_0^{(1)} > A for A sufficiently large.

(i) Suppose that γ > −E[tg² α]. Then, for any ε > 0, a.s., for all but finitely many n ∈ Z+, (2.8) and (2.9) hold.
(ii) Suppose that γ < −E[tg² α]. Then, for any ε > 0, a.s., for all but finitely many n ∈ Z+,

max_{0≤m≤n} ξ_m^{(1)} ≤ n^{ρ(γ)} (log n)^{ρ(γ) + ε}, and (2.13)
max_{0≤m≤n} ξ_m^{(1)} ≥ n^{ρ(γ)} (log n)^{−ρ(γ) − ε}. (2.14)

Now we state the corresponding result for the stochastic billiard process X.

Theorem 2.7 Suppose that α satisfies (2.3) and is non-degenerate, and that g(x) = x^γ for γ < 0. Suppose that X_0^{(1)} = ξ_0^{(1)} > A for A sufficiently large.

(i) Suppose that γ > −E[tg² α]. Then, for any ε > 0, a.s., for all t > 0 large enough, (2.11) holds.
(ii) Suppose that γ < −E[tg² α]. Then, for any ε > 0, a.s., for all t > 0 large enough,

sup_{0≤s≤t} X_s^{(1)} ≥ t^{ρ(γ)/(1+γρ(γ))} (log t)^{−(γρ(γ)² + 2ρ(γ))/(1+γρ(γ)) − ε}. (2.15)

Theorems 2.6(ii) and 2.7(ii) show that even in the positive-recurrent case the behaviour is essentially polynomial (i.e., heavy-tailed) in nature. Moreover, the exponents depend explicitly on the reflection distribution α, unlike in the other cases.

2.3 Open problems

We briefly mention some open problems for the model studied here.

Of interest is the weak (distributional) limiting behaviour of the horizontal components of ξ, X. We have from Lemma 5.4 and Corollary 5.1 below, together with Theorem 4.1 in [19], that Lamperti's invariance principle (4.11) below will hold with η_• = (ξ_•^{(1)})^{1−γ} and γ ∈ (−E[tg² α], 1), α non-degenerate, provided that Lamperti's 'condition (c)' from [19] is satisfied. If Lamperti's 'condition (c)' were verified, this would then imply that, for γ ∈ (−E[tg² α], 1) and α non-degenerate, we would have the weak invariance principle as n → ∞:

n^{−1/2} (ξ_{⌊n•⌋}^{(1)})^{1−γ} ⇒ Υ_•.

Here (Υ_t)_{t>0} is a diffusion process on [0, ∞) with Kolmogorov backwards equation u_t = (a/x) u_x + b u_{xx}, where

a = 2γ(1−γ)(1 + E[tg² α]), and b = 4(1−γ)² E[tg² α].

A more challenging problem would be to determine whether any corresponding weak limit theory holds for the continuous-time process X_t^{(1)} (for γ > −E[tg² α], say).

It may also be of interest to relax some of our assumptions, such as those on the reflection distribution. For instance, one might consider distributions for α with support on all of [−π/2, π/2], i.e., dropping our requirement that α be bounded away from ±π/2. Indeed, if α can take the value π/2, say, with positive probability, the particle will eventually jump to ∞ in the case of a tube of nondecreasing width. On the other hand, if α has distribution on [−π/2, π/2] with sufficiently light tails at the endpoints (such that in particular E[tg² α] < ∞, say), it may be possible to modify the techniques in the present paper to obtain similar results; one would need to extend various results from processes with jumps that are bounded to processes with jumps satisfying higher-order moment assumptions, for example.

The structure of the remainder of the paper is as follows. In Section 3 we present and prove general semimartingale criteria (of some independent interest) on almost-sure bounds for one-dimensional stochastic processes. In Section 4 we make use of the results of Section 3 and give some results for the so-called Lamperti problem, which are of interest in their own right as well as being a crucial element in the proofs of our main results. In Section 5 we prove the results stated in Section 2.2 on the processes ξ, X.

3 Semimartingale criteria

In this section we present and prove general semimartingale criteria for obtaining upper and lower almost-sure bounds for discrete-time stochastic processes on the half-line. These results will provide some of our main tools for the study of processes with asymptotically zero mean drifts in Section 4 below, but for the present section we work in some generality.

Let (F_n)_{n ∈ Z+} be a filtration on a probability space (Ω, F, P). Let Y = (Y_n)_{n ∈ Z+} be a discrete-time (F_n)-adapted stochastic process taking values in [0, ∞). Suppose that P[Y_0 = x_0] = 1 for some x_0 ∈ [0, ∞).

The types of process to which our criteria can be applied are quite general. For instance, due to the semimartingale nature of the results, we do not require that Y be a Markov process. This fact is particularly useful if Y is a process of norms ‖Z_t‖: even if (Z_t)_{t ∈ Z+} is Markov, (‖Z_t‖)_{t ∈ Z+} will not be, in general. We anticipate that the general results in this section are widely applicable. Moreover, continuous-time processes may be treated via embedded discrete-time processes.
One condition that we need for some of our results is that the jumps of the process Y are uniformly bounded above. Elements of the proofs in the present section extend ideas used in [4] and [25].

Our conditions will involve the existence of suitable functions f such that the process f(Y) satisfies an appropriate 'drift' condition. Our criteria involve only first-order conditions (i.e., expectations); no conditions on variances are required. Our results will yield almost-sure bounds for max_{0≤m≤n} Y_m (and hence Y_n) in terms of the function f and simple functions a, v that control our bounds. The functions a, v will belong to classes of eventually increasing functions defined as follows.

Definition 3.1
We say that a: [1, ∞) → [1, ∞) satisfies condition (C1), and that v: [1, ∞) → [1, ∞) satisfies condition (C2), if:

(C1) a(x) → ∞ as x → ∞; there exists n_a ∈ N such that a(x) is increasing for all x ≥ n_a; and Σ_{n=1}^∞ 1/a(n) < ∞.
(C2) v(x) → ∞ as x → ∞; there exists n_v ∈ N such that v(x) is increasing for all x ≥ n_v; and Σ_{n=1}^∞ 1/(n v(n)) < ∞.

Note that the summability condition in (C2) is equivalent to Σ_{n=1}^∞ 1/v(r^n) < ∞ for some (hence all) r > 1. Throughout this section we interpret log x as max{1, log x}. With this convention, condition (C1) is satisfied, for example, by a(x) = x^{1+ε} or x(log x)^{1+ε}, where ε > 0, and (C2) is satisfied, for example, by v(x) = (log x)^{1+ε} or (log x)(log log x)^{1+ε}, ε > 0.

Theorem 3.1
Let (Y_n)_{n ∈ Z+} be a discrete-time (F_n)-adapted stochastic process taking values in [0, ∞). Suppose that there exists a nondecreasing function f: [0, ∞) → [0, ∞) such that

E[f(Y_{n+1}) − f(Y_n) | F_n] ≥ 0 a.s., (3.1)

for all n ∈ Z+. Also suppose that there exists B ∈ (0, ∞) such that, for all n ∈ N,

E[f(Y_n)] ≤ Bn. (3.2)
Define the nondecreasing function f⁻ for x > 0 by

f⁻(x) := sup{y ≥ 0 : f(y) < x}. (3.3)

Let a satisfy (C1). Then, a.s., for all but finitely many n ∈ Z+,

max_{0≤m≤n} Y_m ≤ f⁻(a(2n)). (3.4)

A sufficient condition for (3.2) is clearly that there exists B′ ∈ (0, ∞) such that

E[f(Y_{n+1}) − f(Y_n) | F_n] ≤ B′ a.s., (3.5)

for all n ∈ Z+. We next present a variant of Theorem 3.1, which relaxes the condition (3.1) at the expense of this slightly stronger version of (3.2).

Theorem 3.2
Theorem 3.1 holds with conditions (3.1) and (3.2) replaced by the lone condition (3.5).
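To see Theorems 3.1 and 3.2 in action, here is a small sketch of our own (not code from the paper; the names `f_inverse`, `running_max` are assumptions of this illustration): the generalized inverse (3.3) computed by bisection, tried on a reflected simple random walk with f(x) = x², for which E[f(Y_{n+1}) − f(Y_n) | F_n] = 1, so both (3.1) and (3.5) hold with B′ = 1.

```python
import math
import random

def f_inverse(f, x, hi=1.0):
    """f^-(x) = sup{y >= 0 : f(y) < x} for nondecreasing f, by bisection."""
    while f(hi) < x:
        hi *= 2.0
    lo = 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < x else (lo, mid)
    return lo

f = lambda y: y * y                                  # drift of f(Y) is exactly 1
a = lambda x: x * math.log(max(math.e, x)) ** 2      # satisfies (C1)

random.seed(0)
Y, running_max, n = 0, 0, 10000
for _ in range(n):
    Y = abs(Y + random.choice((-1, 1)))              # reflected simple random walk
    running_max = max(running_max, Y)

bound = f_inverse(f, a(2 * n))                       # (3.4): about sqrt(2n) * log(2n)
print(running_max, bound)
```

The bound of (3.4) here is roughly sqrt(2n) log(2n), comfortably above the diffusive scale sqrt(n) at which the walk's maximum actually lives, which is what an almost-sure upper bound obtained from first-order conditions alone can be expected to give.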
For the proof of Theorem 3.2, we need the following:
Lemma 3.1
Let (Z_n)_{n ∈ Z+} be an (F_n)-adapted process on [0, ∞) and z ∈ [0, ∞) such that P[Z_0 = z] = 1 and, for some B ∈ (0, ∞) and all n ∈ Z+,

E[Z_{n+1} − Z_n | F_n] ≤ B a.s. (3.6)

Then, for any x > 0 and any n ∈ N,

P[max_{0≤m≤n} Z_m ≥ x] ≤ (Bn + z) x^{−1}. (3.7)

Proof. Similarly to Doob's decomposition (see e.g. [32], p. 120), set W_0 := Z_0, and for m ∈ N let

W_m := Z_m + A⁻_{m−1} + A⁻_{m−2} + ··· + A⁻_0,

where A_m = E[Z_{m+1} − Z_m | F_m], A⁻_m = max{−A_m, 0} ≥ 0, and A⁺_m = max{A_m, 0} ≥ 0. Then

E[W_{m+1} − W_m | F_m] = E[Z_{m+1} − Z_m + A⁻_m | F_m] = A_m + A⁻_m = A⁺_m ∈ [0, B],

so that (W_m) is a nonnegative (F_m)-submartingale with W_m ≥ Z_m for all m, and E[W_n] ≤ W_0 + Bn = z + Bn. Hence by Doob's submartingale inequality (see e.g. [32], p. 137),

P[max_{0≤m≤n} Z_m ≥ x] ≤ P[max_{0≤m≤n} W_m ≥ x] ≤ x^{−1} E[W_n] ≤ (Bn + z) x^{−1},

as required. □

Proof of Theorems 3.1 and 3.2.
First we prove Theorem 3.1. Since, by (3.1), (f(Y_n)) is a nonnegative submartingale, Doob's submartingale inequality implies that, for any n ∈ N,

P[max_{0≤m≤n} f(Y_m) ≥ a(n)] ≤ (a(n))^{−1} E[f(Y_n)] ≤ Bn (a(n))^{−1}, (3.8)

using (3.2). Also, for n ∈ N,

P[max_{0≤m≤n} f(Y_m) ≥ a(n)] = P[f(max_{0≤m≤n} Y_m) ≥ a(n)], (3.9)

since f is nondecreasing. With f⁻ defined by (3.3), let E_n denote the event

E_n := {max_{0≤m≤n} Y_m > f⁻(a(n))}.

Then, since z > f⁻(r) implies f(z) ≥ r, we obtain from (3.8) and (3.9) that, for all n ∈ N,

P[E_n] ≤ P[f(max_{0≤m≤n} Y_m) ≥ a(n)] ≤ Bn (a(n))^{−1}. (3.10)

Now

Σ_{ℓ=0}^∞ 2^ℓ / a(2^ℓ) < ∞ ⟺ Σ_{ℓ=1}^∞ 1/a(ℓ) < ∞. (3.11)

Hence by (3.10) and (3.11), along the subsequence n = 2^ℓ for ℓ = 0, 1, 2, ..., condition (C1) and the Borel–Cantelli lemma imply that, a.s., the event E_{2^ℓ} occurs for only finitely many ℓ; in particular, there exists ℓ_0 < ∞ such that, for all ℓ ≥ ℓ_0,

max_{0≤m≤2^ℓ} Y_m ≤ f⁻(a(2^ℓ)).

Every n ∈ N sufficiently large has 2^{ℓ_n} ≤ n < 2^{ℓ_n+1} for some ℓ_n ≥ ℓ_0; then, a.s.,

max_{0≤m≤n} Y_m ≤ max_{0≤m≤2^{ℓ_n+1}} Y_m ≤ f⁻(a(2^{ℓ_n+1})),

for all such n. Now since 2^{ℓ_n+1} ≤ 2n and f⁻ is nondecreasing, (3.4) follows.

To obtain Theorem 3.2, in the previous argument we replace (3.8) by an application of Lemma 3.1 with Z_n = f(Y_n) and x = a(n).

We now work towards obtaining a lower bound for max_{0≤m≤n} Y_m. For ℓ ∈ N let σ_ℓ denote the first passage time of level ℓ for Y, that is,

σ_ℓ := min{n ∈ Z+ : Y_n ≥ ℓ}.

Then σ_1, σ_2, ... is a nondecreasing sequence of stopping times for the process Y; under the condition lim sup_{n→∞} Y_n = +∞ a.s., we have σ_ℓ < ∞ a.s. for every ℓ.

For ℓ ∈ N and all n ∈ Z+, let Y_n^ℓ := Y_{n ∧ σ_ℓ} (where a ∧ b := min{a, b}), the process stopped at σ_ℓ; then Y_n^ℓ = Y_n if n < σ_ℓ, and Y_n^ℓ = Y_{σ_ℓ} ≥ ℓ for n ≥ σ_ℓ.
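The definitions just introduced are easy to mirror in code; the following toy sketch (our own illustration, with a hand-picked path, not an example from the text) computes the first-passage time σ_ℓ and the stopped process Y^ℓ_n = Y_{n ∧ σ_ℓ}.

```python
def sigma(path, ell):
    """First-passage time: min{n : path[n] >= ell}, or None if never reached."""
    for n, y in enumerate(path):
        if y >= ell:
            return n
    return None

def stopped(path, ell):
    """The stopped path Y^ell_n = Y_{min(n, sigma_ell)}."""
    s = sigma(path, ell)
    if s is None:
        return list(path)
    return [path[min(n, s)] for n in range(len(path))]

Y = [0, 1, 2, 1, 2, 3, 4, 3]        # a toy trajectory with jumps bounded by b = 1
print(sigma(Y, 3))                  # first time the path reaches level 3
print(stopped(Y, 3))                # frozen at Y_{sigma_3} from then on
```

The stopped path agrees with the original strictly before σ_ℓ and is constant, at a value in [ℓ, ℓ + b), from σ_ℓ onwards, which is exactly the property used in Lemma 3.2 below.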
We have the following result, which is a ‘reverse Foster’s criterion’ (compare Theorem 2.1.1 in [11]). Lemma 3.2
Let $(Y_n)_{n \in \mathbb{Z}_+}$ be a discrete-time $(\mathcal{F}_n)$-adapted stochastic process taking values in $[0,\infty)$, such that for some $b \in \mathbb{N}$,
\[ P[Y_{n+1} \le Y_n + b] = 1, \quad (3.12) \]
for all $n \in \mathbb{Z}_+$. Fix $\ell \in \mathbb{N}$ and $Y_0 = x \in [0,\ell)$. Suppose that there exists a nondecreasing function $f : [0,\infty) \to [0,\infty)$ for which, for some $\varepsilon > 0$,
\[ E[f(Y^\ell_{n+1}) - f(Y^\ell_n) \mid \mathcal{F}_n] \ge \varepsilon \mathbf{1}\{\sigma_\ell > n\} \quad \text{a.s.} \quad (3.13) \]
for all $n \in \mathbb{Z}_+$. Then $E[\sigma_\ell] \le \varepsilon^{-1} f(\ell + b)$. Proof.
Let $\ell \in \mathbb{N}$. Taking expectations in (3.13) we have, for all $m \in \mathbb{Z}_+$,
\[ E[f(Y^\ell_{m+1})] - E[f(Y^\ell_m)] \ge \varepsilon P[\sigma_\ell > m]. \]
Summing for $m$ from 0 up to $n$ we obtain
\[ E[f(Y^\ell_{n+1})] - f(Y^\ell_0) \ge \varepsilon \sum_{m=0}^{n} P[\sigma_\ell > m]. \]
Since $f(Y^\ell_0) = f(Y_0) \ge 0$,
\[ E[\sigma_\ell] = \lim_{n \to \infty} \sum_{m=0}^{n} P[\sigma_\ell > m] \le \varepsilon^{-1} \limsup_{n \to \infty} E[f(Y^\ell_{n+1})] \le \varepsilon^{-1} f(\ell + b), \]
using the fact that, by (3.12), $f(Y^\ell_n) \le f(\ell + b)$ for all $n \in \mathbb{Z}_+$ since $f$ is nondecreasing.
Now we have the following lower bound:

Theorem 3.3 Let $(Y_n)_{n \in \mathbb{Z}_+}$ be a discrete-time $(\mathcal{F}_n)$-adapted stochastic process taking values in $[0,\infty)$, such that condition (3.12) holds. Suppose that there exist a nondecreasing function $f : [0,\infty) \to [0,\infty)$ and $\varepsilon > 0$ for which
\[ E[f(Y_{n+1}) - f(Y_n) \mid \mathcal{F}_n] \ge \varepsilon \quad \text{a.s.} \quad (3.14) \]
for all $n \in \mathbb{Z}_+$. Let $v$ satisfy (C2). For $x \ge 0$ define the function $r_v$ by
\[ r_v(x) := \inf\{ y \ge 0 : \varepsilon^{-1} v(y) f(y + b) \ge x \}. \quad (3.15) \]
Then, a.s., for all but finitely many $n \in \mathbb{Z}_+$,
\[ \max_{0 \le m \le n} Y_m \ge r_v(n) - b. \quad (3.16) \]
Proof.
First note that by (C2) and the fact that f is nondecreasing, we have that x r v ( x )is nondecreasing for all x sufficiently large. Fix K >
1. By Markov’s inequality, P [ σ ℓ > v ( ℓ ) E [ σ ℓ ]] ≤ ( v ( ℓ )) − . (3.17)Given (3.17) and (C2), the Borel-Cantelli lemma implies that, a.s., σ ⌊ K ℓ ⌋ > v ( ⌊ K ℓ ⌋ ) E [ σ ⌊ K ℓ ⌋ ]for only finitely many ℓ ∈ Z + . Moreover, given that f satisfies (3.14), we have that (3.13)holds for all ℓ ∈ N and all n ∈ Z + . Then Lemma 3.2 with (3.12) in this case implies that E [ σ ℓ ] ≤ ε − f ( ℓ + b ) for all ℓ . Thus we have that a.s., for some ℓ < ∞ and all ℓ ≥ ℓ , σ ⌊ K ℓ ⌋ ≤ ε − v ( ⌊ K ℓ ⌋ ) f ( ⌊ K ℓ ⌋ + b ). Hence with the definition of r v at (3.15) we have, a.s., forall ℓ sufficiently large, r v ( σ ⌊ K ℓ ⌋ ) ≤ r v ( ε − v ( ⌊ K ℓ ⌋ ) f ( ⌊ K ℓ ⌋ + b )) ≤ ⌊ K ℓ ⌋ ≤ Y σ ⌊ Kℓ ⌋ . (3.18)Since E [ σ ℓ ] < ∞ for all ℓ , we have that σ ℓ < ∞ a.s. for all ℓ . Moreover, the jumps bound(3.12) implies that, for all ℓ ≥ Y , σ ℓ + b ≥ σ ℓ a.s., so that lim ℓ →∞ σ ℓ = ∞ a.s.. Thus,a.s., every n ∈ Z + satisfies σ ⌊ K ℓn − ⌋ ≤ n < σ ⌊ K ℓn ⌋ for some ℓ n ∈ N . Then, a.s.,max ≤ m ≤ n Y m ≥ max ≤ m ≤ σ ⌊ Kℓn − ⌋ Y m ≥ Y σ ⌊ Kℓn − ⌋ ≥ ⌊ K ℓ n − ⌋ ≥ ⌊ K ℓ n − ⌋⌊ K ℓ n ⌋ ( Y σ ⌊ Kℓn ⌋ − b ) , (3.19)for all n sufficiently large, since Y σ ⌊ Kℓn ⌋ ≤ ⌊ K ℓ n ⌋ + b . Then (3.19) and (3.18) imply that,a.s., for all but finitely many n ∈ Z + ,max ≤ m ≤ n Y m ≥ ⌊ K ℓ n − ⌋⌊ K ℓ n ⌋ (cid:0) r v ( σ ⌊ K ℓn ⌋ ) − b (cid:1) ≥ ⌊ K ℓ n − ⌋⌊ K ℓ n ⌋ ( r v ( n ) − b ) , since σ ⌊ K ℓn ⌋ > n and r v ( n ) is nondecreasing for n sufficiently large. Now taking a sequenceof values for K converging down to 1 we obtain (3.16).15 Almost-sure bounds for the Lamperti problem
In this section we give upper and lower almost-sure bounds for the so-called Lamperti problem of a stochastic process on the half-line with mean drift asymptotically zero. We will need these results for our almost-sure bounds for the stochastic billiard model; our results for the Lamperti problem appear to be new in the generality given here, and in some cases even for a nearest-neighbour random walk.
Let $(\mathcal{F}_n)_{n \in \mathbb{Z}_+}$ be a filtration on a probability space $(\Omega, \mathcal{F}, P)$. Let $\eta = (\eta_n)_{n \in \mathbb{Z}_+}$ be a discrete-time time-homogeneous stochastic process adapted to $(\mathcal{F}_n)_{n \in \mathbb{Z}_+}$ and taking values in an unbounded subset $S$ of $[0,\infty)$.
We suppose that the jumps of $\eta$ are uniformly bounded, that is, there exists $B \in (0,\infty)$ such that for all $n \in \mathbb{Z}_+$ and all $x \in S$,
\[ P[|\eta_{n+1} - \eta_n| > B \mid \mathcal{F}_n] = 0, \quad \text{a.s.} \quad (4.1) \]
Under the jumps condition (4.1), the jump-moment functions $\mu_1 : S \to [-B, B]$ and $\mu_2 : S \to [0, B^2]$ given for $k \in \{1, 2\}$ by
\[ \mu_k(x) := E[(\eta_{n+1} - \eta_n)^k \mid \eta_n = x], \quad (n \in \mathbb{Z}_+) \quad (4.2) \]
are well-defined; in particular $\mu_2(x)$ is bounded above. Our basic assumption for this section will be (A2) below.
(A2) Let $\eta$ be a discrete-time stochastic process on $[0,\infty)$ satisfying (4.1), with $\mu_1, \mu_2$ as given by (4.2).
For some of the results in this section we also assume that there exists $v > 0$ such that for all $x \in S$,
\[ \mu_2(x) \ge v. \quad (4.3) \]
We will sometimes make the further assumption that
\[ P\Big[ \limsup_{n \to \infty} \eta_n = +\infty \Big] = 1. \quad (4.4) \]
The model that we concentrate on is a particular case of the so-called Lamperti problem (see [17–19, 23]) of a stochastic process on $[0,\infty)$ with mean drift asymptotically zero: that is, $\mu_1(x) \to 0$ as $x \to \infty$. Our results do not actually assume $\mu_1(x) \to$
0, but it is in this case (in fact, in the case $|\mu_1(x)| = O(1/x)$) that they are of most interest. As mentioned by Lamperti [17], and as is also true for the results in this section, only the first two moments $\mu_1, \mu_2$ of the jump distribution are important: there is some form of ‘invariance principle’ at work. We will be mainly concerned with the case that, from the point of view of recurrence, turns out to be critical; that is, where $x|\mu_1(x)|$ remains bounded away from zero and from infinity.
In the particular case of a nearest-neighbour random walk, where $\eta$ is supported on $\mathbb{Z}_+$, the problem reduces to that of a simple random walk with asymptotically zero perturbation: that model was studied by Harris [13] and by Hodges and Rosenblatt [14], and is amenable to special methods for so-called birth-and-death chains. Thus many results are present in the literature for the nearest-neighbour case: in Section 4.2 below we briefly mention some of these results and their relation to the results given in this section. For the applications in the present paper, however, we cannot use these nearest-neighbour results. Thus we need to prove results in the general Lamperti setting. In fact, as we will point out below, some of the results that we give in the present section seem to be new even for the nearest-neighbour situation. Thus the results that we state in this section are of independent interest.
As well as being of interest in their own right, stochastic processes on the half-line with mean drift asymptotically zero are important for the study of multidimensional processes by the method of Lyapunov-type functions (see e.g. [11, 23]). For example, if $(Z_n)_{n \in \mathbb{Z}_+}$ is a zero-drift process with bounded jumps on $\mathbb{Z}^d$, $d \ge$
2, the process $(\|Z_n\|)_{n \in \mathbb{Z}_+}$ is supported on the half-line and has mean drift asymptotically zero. Clearly, the process $\|Z_n\|$ will not be nearest-neighbour; thus the generality of results like Lamperti’s [17, 19] and those in the present section is valuable. Fix $H >$
0, which will need to be large for some of our results. We say that $\eta$ is recurrent or transient according to whether the return time $\inf\{n \in \mathbb{Z}_+ : \eta_n \le H\}$ is almost surely finite or not; in the former case we distinguish positive- and null-recurrence according to whether the return time has finite or infinite expectation.
The recurrence and transience properties of $\eta$ were studied by Lamperti, who proved the following result (see Theorems 3.1 and 3.2 of [17] with Theorem 2.1 of [19]; also Theorem 3 of [23] for a finer result). Note that all of the conditions that we state in this section involving evaluating $\mu_1(x)$, $\mu_2(x)$ only need apply for $x \in S$.

Proposition 4.1 [17, 19] Suppose that (A2), (4.3), and (4.4) hold.
(i) If for all $x > H$, $2x|\mu_1(x)| \le \mu_2(x)$, then $\eta$ is null-recurrent for any $\eta_0 > H$.
(ii) Suppose that there exists $\delta > 0$ such that for all $x > H$,
\[ 2x\mu_1(x) - \mu_2(x) > \delta. \quad (4.5) \]
Then $\eta$ is transient for any $\eta_0 > H$.
(iii) Suppose that there exists $\delta > 0$ such that for all $x > H$, $2x\mu_1(x) + \mu_2(x) < -\delta$. Then $\eta$ is positive-recurrent for any $\eta_0 > H$.

For our almost-sure lower bounds, we impose an additional ‘reflection’ condition that ensures that we can avoid getting trapped in a bounded set. Condition (A3) below is the most appropriate way of doing this for our applications to the stochastic billiard model, but is clearly stronger than is necessary for the results in the present section.
(A3) Given $\eta_n = x > H$, if a jump would take $\eta_{n+1} < H$ we replace the jump with $\eta_{n+1} = 2H$ instead.
We now state our results on almost-sure bounds. In the general setting of the present section, the only almost-sure bound result that we could find in the literature is also due to Lamperti [18]; see Proposition 4.2 in our discussion of the literature below. Lamperti’s bound is a weaker version of our result Theorem 4.1(i) below, while Theorem 4.1(ii) gives a complementary lower bound. Theorem 4.1
Suppose that Assumption (A2) holds.
(i) Suppose that there exists $C \in (0,\infty)$ such that for all $x \ge 0$, $x\mu_1(x) \le C$. Then for any $\eta_0$, for any $\varepsilon > 0$, a.s., for all but finitely many $n \in \mathbb{Z}_+$,
\[ \max_{0 \le m \le n} \eta_m \le n^{1/2} (\log n)^{(1/2)+\varepsilon}. \]
(ii) Suppose that (A3) holds, and that there exists $\delta > 0$ such that for all $x > H$,
\[ 2x\mu_1(x) + \mu_2(x) \ge \delta. \]
Then for any $\eta_0 > H$, for any $\varepsilon > 0$, a.s., for all but finitely many $n \in \mathbb{Z}_+$,
\[ \max_{0 \le m \le n} \eta_m \ge n^{1/2} (\log n)^{-(1/2)-\varepsilon}. \]
In the transient case, we prove the following lower bound that strengthens, in some sense, Theorem 4.1(ii) in this case.
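The trichotomy of Proposition 4.1 is driven entirely by the sign of the combinations $2x\mu_1(x) \mp \mu_2(x)$ for large $x$. The following sketch simply transcribes those criteria into a small checker; the helper function, test points, and sample drift functions are our own illustrative assumptions, with $\delta$ fixed to a nominal margin.

```python
# A direct transcription of the drift criteria of Proposition 4.1:
# with mu1, mu2 the first two jump-moment functions of (4.2), the sign of
# 2*x*mu1(x) relative to mu2(x) (for all large x) separates the regimes.
# The sample chains below are illustrative; delta is a nominal margin.

def lamperti_regime(mu1, mu2, xs, delta=0.01):
    """Classify by checking the displayed criteria pointwise on xs."""
    if all(2 * x * mu1(x) - mu2(x) > delta for x in xs):
        return "transient"            # Proposition 4.1(ii)
    if all(2 * x * mu1(x) + mu2(x) < -delta for x in xs):
        return "positive-recurrent"   # Proposition 4.1(iii)
    if all(2 * x * abs(mu1(x)) <= mu2(x) for x in xs):
        return "null-recurrent"       # Proposition 4.1(i)
    return "indeterminate"

xs = [float(x) for x in range(10, 1000)]
mu2_unit = lambda x: 1.0              # e.g. a nearest-neighbour-type chain
outward  = lambda x: 1.0 / x          # 2x*mu1 - mu2 = 1 > 0: transient
inward   = lambda x: -1.0 / x         # 2x*mu1 + mu2 = -1 < 0: positive-recurrent
critical = lambda x: 0.4 / x          # 2x*|mu1| = 0.8 <= 1 = mu2: null-recurrent
```

This only checks the hypotheses on a finite window of test points, of course; the proposition itself requires them for all $x > H$.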
Theorem 4.2
Suppose that (A2) and (A3) hold. If (4.5) holds for some $\delta > 0$ and all $x > H$, then there exists $D \in (0,\infty)$ such that, for any $\eta_0 > H$, a.s., for all but finitely many $n \in \mathbb{Z}_+$,
\[ \eta_n \ge n^{1/2} (\log n)^{-D}. \]
We could find no reference for a result like Theorem 4.2, even in the nearest-neighbour case. We prove Theorem 4.2 in Section 4.4; it may be possible to extract bounds for the exponent $D$ in the logarithmic term in terms of the constants $B, \delta$ by keeping track of the constants in our proofs. Note that the transient critical Lamperti problem for which (A2) and (A3) hold, and
\[ \delta < 2x\mu_1(x) - \mu_2(x) < C \]
for some $0 < \delta < C < \infty$ and all $x$ large enough, satisfies the conditions of each of the results in Theorems 4.1 and 4.2.
The following discussion suggests that, without taking into account finer behaviour (such as the smallest value of $\delta > 0$ in (4.5)), Theorem 4.2 is close to best possible. Let $(S^d_n)_{n \in \mathbb{Z}_+}$ be the symmetric simple random walk on $\mathbb{Z}^d$, $d \ge$
2, so that for $x, y \in \mathbb{Z}^d$, $P[S^d_{n+1} = y \mid S^d_n = x] = (2d)^{-1}$ if and only if $\|y - x\| = 1$. Then elementary calculations show that
\[ 2\|x\| E[\|S^d_{n+1}\| - \|S^d_n\| \mid S^d_n = x] - E[(\|S^d_{n+1}\| - \|S^d_n\|)^2 \mid S^d_n = x] \to 1 - \frac{2}{d} \]
as $\|x\| \to \infty$; thus the process $(\|S^d_n\|)_{n \in \mathbb{Z}_+}$ is in precisely the critical Lamperti situation, and it is transient for $d > 2$ and null-recurrent for $d = 2$, consistent with Proposition 4.1. For $d > 2$, a classical result of Dvoretzky and Erdős [7] says that for any $\varepsilon >$
0, a.s.,
\[ \|S^d_n\| > n^{1/2} (\log n)^{-\frac{1}{d-2}-\varepsilon} \quad (4.6) \]
for all but finitely many $n \in \mathbb{Z}_+$, and that this bound is sharp in that for $\varepsilon = 0$ the inequality (4.6) fails infinitely often with probability 1. In particular, it is at least informally reasonable to argue that by letting d ↓ g ( x ) = x γ with γ <
0. Again these results seem to be new in the generalitygiven here.Theorem 4.3 below is further demonstration of the phenomenon of polynomial ergodicity for the critical Lamperti problem, as also evidenced by results of [24] in the context ofstationary measures.
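The simple random walk computation above can be made concrete: at a point $x$ far from the origin, the quantity $2\|x\|\mu_1(x) - \mu_2(x)$ for $(\|S^d_n\|)$ can be evaluated exactly by enumerating the $2d$ neighbours, and it approaches $1 - 2/d$. The sketch below does this at a point on a coordinate axis (an illustrative choice; the limit does not depend on the direction).

```python
import math

# Exact one-step computation for the simple random walk on Z^d, evaluated
# at x = (R, 0, ..., 0): the quantity 2*||x||*mu1(x) - mu2(x) should approach
# 1 - 2/d as R -> infinity, placing (||S_n||) in the critical Lamperti regime.
def lamperti_quantity(d, R):
    norm = float(R)
    # The 2 neighbours along the axis have norms R+1 and R-1; the remaining
    # 2(d-1) neighbours all have norm sqrt(R^2 + 1).
    delta_axis = [(R + 1) - R, (R - 1) - R]   # norm increments +1 and -1
    delta_perp = math.sqrt(R * R + 1) - R     # same for all 2(d-1) of them
    diffs = delta_axis + [delta_perp] * (2 * (d - 1))
    mu1 = sum(diffs) / (2 * d)                # conditional mean increment
    mu2 = sum(t * t for t in diffs) / (2 * d) # conditional second moment
    return 2 * norm * mu1 - mu2

# e.g. lamperti_quantity(3, 10**4) is close to 1 - 2/3
```

At $R = 10^4$ the finite-$R$ and floating-point corrections are far below $10^{-5}$, so the agreement with $1 - 2/d$ is essentially exact.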
Theorem 4.3
Suppose that (A2) and (4.3) hold.
(i) Suppose that there exists $\kappa > 0$ such that for all $x$ sufficiently large
\[ -\kappa\mu_2(x) + o(1) \le 2x\mu_1(x) \le -\kappa\mu_2(x) + o((\log x)^{-1}). \quad (4.7) \]
Then for any $\eta_0$, any $\varepsilon > 0$, a.s., for all but finitely many $n \in \mathbb{Z}_+$,
\[ \max_{0 \le m \le n} \eta_m \le n^{1/(1+\kappa)} (\log n)^{(2/(1+\kappa))+\varepsilon}. \]
(ii) Suppose that (A3) holds, and that there exists $\kappa \ge 0$ such that for all $x$ sufficiently large
\[ 2x\mu_1(x) + \kappa\mu_2(x) \ge o((\log x)^{-1}). \quad (4.8) \]
Then there exists $H \in (0,\infty)$ such that for any $\eta_0 > H$, for any $\varepsilon > 0$, a.s., for all but finitely many $n \in \mathbb{Z}_+$,
\[ \max_{0 \le m \le n} \eta_m \ge n^{1/(1+\kappa)} (\log n)^{-(2/(1+\kappa))-\varepsilon}. \]
Note that the $\kappa = 1$ case of Theorem 4.3(ii) gives a weaker form of the lower bound in Theorem 4.1(ii) under a somewhat weaker condition.
Before we prove the results stated in Section 4.1, we briefly discuss the existing literature related to the present section. This is done in Section 4.2 below. Then we give the proofs of Theorems 4.1 and 4.3 in Section 4.3 and Theorem 4.2 in Section 4.4.

4.2 Remarks on the literature

In the general setting of the so-called Lamperti problem, when (A2) holds, there seem to be few almost-sure bounds known. The next result, due to Lamperti himself (see Theorems 2.1, 2.2, 4.2, and 5.1 in [18]), includes an almost-sure upper bound in a particular case where the conditions of (i) or (ii) in Proposition 4.1 hold.
Proposition 4.2 [18] Suppose that (A2) holds, and that for any finite interval I ⊂ [0 , ∞ )lim n →∞ n n − X m =0 P [ η m ∈ I ] = 0 . (4.9) Suppose that for a, b ∈ R lim x →∞ µ ( x ) = b > , lim x →∞ ( xµ ( x )) = a > − ( b/ . Then for any η , for any ε > , a.s., for all but finitely many n ∈ Z + η n ≤ n (1 / ε . (4.10) In addition, suppose that (4.9) holds uniformly in η . Then as n → ∞ the followinginvariance principle applies: n − / η ⌊ n •⌋ ⇒ Υ • , (4.11) where (Υ t ) t> is a diffusion process on [0 , ∞ ) with Kolmogorov backwards equation u t =( a/x ) u x + ( b/ u xx (see Section 3 in [18] for details). Thus Theorem 4.1(i) improves upon the bound in (4.10). Note that the condition (4.9),which does not distinguish between null-recurrence and transience, implies (4.4).A special case of the Lamperti problem on [0 , ∞ ) is the case where the process η issupported on Z + and only nearest-neighbour jumps are allowed. This special case hasreceived much more attention than the general case described in Section 4.1 above; now webriefly describe known results in the nearest-neighbour case.When they exist, these nearest-neighbour results are sharper than the general resultsthat we give in Section 4.1. Thus it may be possible to sharpen the bounds in the resultsin Section 4.1.In the nearest-neighbour case, η is sometimes known as a birth-and-death chain (orbirth-and-death random walk). Precisely, suppose that there exists a sequence ( p x ) x ∈ Z + with p x ∈ (0 ,
1) such that for all x ∈ N and n ∈ Z + P [ η n +1 = x − | η n = x ] = 1 − P [ η n +1 = x + 1 | η n = x ] = p x , with reflection from 0 governed by P [ η n +1 = 0 | η n = 0] = 1 − P [ η n +1 = 1 | η n = 0] = p . (In the literature, the slightly more general model where the walk is allowed to stay in thecurrent position also appears. This introduces no new essential features however.)20uch processes have been extensively studied in various contexts. They are oftenamenable to explicit computation; one particularly fruitful approach is via orthogonal poly-nomials, dating back at least to Karlin and McGregor [15] and subsequently employed forinstance by Voit [29, 31]. A recent reference is [6], to which we are indebted for bringing toour attention several of the papers cited in this section.The one-step mean drift of this walk is for x ∈ N E [ η n +1 − η n | η n = x ] = 1 − p x . We are in the Lamperti situation if we assume lim x →∞ p x = 1 /
2. A case of particularinterest is when p x ∈ (0 ,
1) satisfies p x = 12 + κ x − α + o ( x − α ) , (4.12)for α > κ ∈ R \ { } . Then with the notation at (4.2), µ ( x ) = − ( κ/ x − α + o ( x − α )and µ ( x ) = 1.In the nearest-neighbour case, partial versions of the recurrence result Proposition 4.1were given in e.g. [13, 14]. When (4.12) holds, Proposition 4.1 implies that if α > η is null-recurrent, while if α ∈ (0 , η is transient, positive-recurrent according to κ < κ >
0. In the critical case α = 1, η is transient if κ < −
1, null-recurrent if | κ | <
1, andpositive-recurrent if κ > α ∈ (0 , Proposition 4.3 [30] Suppose that (4.12) holds for α ∈ (0 , and κ < . Then for any η , a.s. lim n →∞ η n n / (1+ α ) = (cid:18) | κ | α ) (cid:19) / (1+ α ) . Theorem 7.1 of Lamperti [18] gives a corresponding result for more general processes (inthe manner of Section 4.1), but with convergence in probability only.The following iterated logarithm result that applies in the κ < α = 1 case of (4.12)follows from Theorem 4(b) of [12] (compare also Theorem 1.3 of [29]). Proposition 4.4 [12] Suppose that there exist c, C with < c < C < ∞ such that for all x sufficiently large − Cx − ≤ p x ≤ − cx − . Then for any η , a.s. lim sup n →∞ η n √ n log log n = 1 . α = 1 and κ ∈ (0 , α = 1 and κ >
1. Thus Theorems 4.1, 4.2,and 4.3 add to known results even in the nearest-neighbour case.
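The polynomial growth rates in this regime are easy to see in simulation. The chain below uses our own illustrative parametrization $p_x = \frac{1}{2} - \frac{c}{2} x^{-\alpha}$ for the downward probability (so the mean drift is $c\,x^{-\alpha}$); the heuristic ODE $dx/dn = c\,x^{-\alpha}$ then predicts $\eta_n \approx ((1+\alpha) c\, n)^{1/(1+\alpha)}$, in the spirit of Proposition 4.3. All constants, the clamping near the origin, and the tolerance are illustrative assumptions, not taken from the text.

```python
import random

# Simulate a transient birth-and-death chain with mean drift c*x^(-alpha),
# alpha in (0,1), and compare eta_n with the heuristic ODE prediction
# ((1+alpha)*c*n)^(1/(1+alpha)).  The parametrization p_x = 1/2 - (c/2)x^(-alpha)
# is our own illustrative choice (probabilities clamped near the origin).
rng = random.Random(7)
alpha, c = 0.5, 1.0
n_steps, chains = 100_000, 20

def run_chain(n):
    eta = 1
    for _ in range(n):
        p_down = (0.5 - 0.5 * c * eta ** (-alpha)) if eta > 0 else 0.0
        p_down = min(max(p_down, 0.0), 1.0)   # clamp at small eta
        eta += -1 if rng.random() < p_down else 1
        eta = max(eta, 0)                     # reflect at the origin
    return eta

mean_final = sum(run_chain(n_steps) for _ in range(chains)) / chains
predicted = ((1 + alpha) * c * n_steps) ** (1 / (1 + alpha))
ratio = mean_final / predicted
```

Averaging over independent chains keeps the sampling fluctuations a few percent, so the ratio settles near 1, consistent with growth of order $n^{1/(1+\alpha)}$.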
In order to prove Theorems 4.1 and 4.3, we apply the general results on almost-sure bounds for discrete-time stochastic processes given in Section 3.
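As a quick numerical illustration of the Section 3 machinery before the proofs: for a walk with uniformly positive drift $\varepsilon$ and jumps bounded by $b$, Lemma 3.2 with $f$ the identity gives $E[\sigma_\ell] \le \varepsilon^{-1}(\ell + b)$. The chain and all constants below are illustrative choices.

```python
import random

# Numerical check of Lemma 3.2 with f the identity: for a +-1 random walk
# with P[up] = 0.75 (so drift epsilon = 0.5, jumps bound b = 1), the expected
# first passage time of level ell should satisfy E[sigma_ell] <= (ell+b)/epsilon.
rng = random.Random(1)
p_up, ell, b = 0.75, 40, 1
epsilon = 2 * p_up - 1          # one-step mean drift away from the origin
trials = 4000

def passage_time():
    y, t = 0, 0
    while y < ell:
        y += 1 if rng.random() < p_up else -1
        y = max(y, 0)           # reflect at 0; the drift bound still holds there
        t += 1
    return t

mean_sigma = sum(passage_time() for _ in range(trials)) / trials
foster_bound = (ell + b) / epsilon   # = 82.0 with these constants
```

Here Wald-type reasoning puts the true expectation just below $\ell/\varepsilon = 80$, so the bound $82$ is respected with a modest margin, which is the point: the lemma trades sharpness for generality.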
Proof of Theorem 4.1.
For $x \ge 0$ and $n \in \mathbb{Z}_+$ we have
\[ E[\eta_{n+1}^2 - \eta_n^2 \mid \eta_n = x] = 2x E[\eta_{n+1} - \eta_n \mid \eta_n = x] + E[(\eta_{n+1} - \eta_n)^2 \mid \eta_n = x] = 2x\mu_1(x) + \mu_2(x). \quad (4.13) \]
Under the conditions of part (i) of the theorem, the right-hand side of (4.13) is uniformly bounded above. Hence we can apply Theorem 3.2, with $Y_n = \eta_n$, taking $f(x) = x^2$ and $a(x) = x(\log x)^{1+2\varepsilon}$. This proves part (i).
Under the conditions of part (ii) of the theorem, we have that the right-hand side of (4.13) is strictly positive for all $x > H$. Under condition (A3) it suffices to consider the process on $[H, \infty)$. Hence we can apply Theorem 3.3, with $Y_n = \eta_n$, taking $f(x) = x^2$ and $v(x) = (\log x)^{1+2\varepsilon}$. This proves part (ii).
Now we prepare for the proof of Theorem 4.3. We need two more lemmas, to identify suitable functions $f$ with which to apply the criteria of Section 3. Lemma 4.1
Suppose that (A2) and (4.3) hold. Suppose that there exists $\kappa > 0$ such that (4.7) holds for all $x$ sufficiently large. Then there exists $C \in (0,\infty)$ such that for all $x \ge 0$,
\[ E[\eta_{n+1}^{1+\kappa}(\log(1+\eta_{n+1}))^{-1} - \eta_n^{1+\kappa}(\log(1+\eta_n))^{-1} \mid \eta_n = x] \le C. \]
Proof.
It follows from Taylor’s theorem applied to the function x x κ (log(1 + x )) − that for | θ ( x ) | = O (1) as x → ∞ ( x + θ ( x )) κ (log(1 + x + θ ( x ))) − − x κ (log(1 + x )) − = x κ log(1 + x ) (cid:20) κ x (2 xθ ( x ) + κθ ( x ) ) − x log(1 + x ) (2 xθ ( x ) + (2 κ + 1) θ ( x ) + o (1)) (cid:21) . Now conditioning on η n = x and setting θ ( x ) = η n +1 − η n , taking expectations in the lastdisplayed equation gives E [ η κn +1 (log(1 + η n +1 )) − − η κn (log(1 + η n )) − | η n = x ]= x κ log(1 + x ) (cid:20) κ x (2 xµ ( x ) + κµ ( x )) − x log(1 + x ) (2 xµ ( x ) + (2 κ + 1) µ ( x ) + o (1)) (cid:21) x κ − (log(1 + x )) (cid:20) − µ ( x ) + o (1) (cid:21) , using (4.7). Then with (4.3) this yields the result. Lemma 4.2
Suppose that (A2) and (4.3) hold. Suppose that there exists $\kappa \ge 0$ such that (4.8) holds for all $x$ sufficiently large. Then there exist $H \in (0,\infty)$ and $\varepsilon > 0$ such that for all $x > H$,
\[ E[\eta_{n+1}^{1+\kappa}\log(1+\eta_{n+1}) - \eta_n^{1+\kappa}\log(1+\eta_n) \mid \eta_n = x] \ge \varepsilon. \]
Proof.
This time, it follows from Taylor’s theorem that for | θ ( x ) | = O (1) as x → ∞ ( x + θ ( x )) κ log(1 + x + θ ( x )) − x κ log(1 + x )= x κ (cid:20) κ x (log(1 + x ))(2 xθ ( x ) + κθ ( x ) ) + 12 x (2 xθ ( x ) + (2 κ + 1) θ ( x ) + o (1)) (cid:21) . Now conditioning on η n = x and setting θ ( x ) = η n +1 − η n , taking expectations in the lastdisplayed equation gives E [ η κn +1 log(1 + η n +1 ) − η κn log(1 + η n ) | η n = x ]= x κ (cid:20) κ x (log(1 + x ))(2 xµ ( x ) + κµ ( x )) + 12 x (2 xµ ( x ) + (2 κ + 1) µ ( x ) + o (1)) (cid:21) ≥ x κ − (cid:20) κ + 12 µ ( x ) + o (1) (cid:21) ≥ ε > , for all x large enough, using (4.8), (4.3), and the fact that κ ≥ Proof of Theorem 4.3.
By Lemma 4.1, we can apply Theorem 3.2 with $f(x) = x^{1+\kappa}(\log(1+x))^{-1}$, $a(x) = x(\log x)^{1+\varepsilon}$, and $Y_n = \eta_n$ to obtain part (i) of the theorem. On the other hand, by Lemma 4.2 we can apply Theorem 3.3 with $f(x) = x^{1+\kappa}\log(1+x)$, $v(x) = (\log x)^{1+\varepsilon}$, and $Y_n = \eta_n$ to obtain part (ii) of the theorem.
Suppose that (4.5) holds for some $\delta >$
0. Our strategy for the proof of Theorem 4.2 is toconstruct a scale on which the process η is transient with mean drift that is positive andbounded uniformly away from 0. Fix β >
0. Denote I := ∅ and for r ∈ N define theintervals I r := [(1 + β ) r − B, (1 + β ) r + B ] , where B is as in the jumps bound (4.1); then for β sufficiently large, I , I , . . . do notoverlap.We look at the process η at the moments at which it enters an interval I r different fromthe last one visited. We define the random times ℓ , ℓ , . . . inductively as follows. Set ℓ :=min { n ∈ Z + : η n ∈ ∪ r I r } ; for k ∈ N , if η ℓ k ∈ I r , let ℓ k +1 := min { ℓ > ℓ k : η ℓ ∈ I r − ∪ I r +1 } .Consider the embedded process ˜ η = (˜ η k ) k ∈ N = ( η ℓ k ) k ∈ N . The conditions of Theorem 4.223mply those of Theorem 4.1(ii), so in particular (4.4) holds. This together with the jumpsbound (4.1) implies that P h [ m>n { η m ∈ I r − ∪ I r +1 } | η n ∈ I r i = 1for any r ∈ N , so that the process ˜ η is well-defined and the random times ℓ k are almostsurely finite for each k . Denote p r := P [ η ℓ k +1 ∈ I r +1 | η ℓ k ∈ I r ] , ( k ∈ N ) . (4.14)The next result says that the process ˜ η has uniformly positive mean drift. Lemma 4.3
Under the conditions of Theorem 4.2, there exist $\beta > 0$ and $\varepsilon > 0$ such that for all $r \in \mathbb{N}$, $p_r > \frac{1}{2} + \varepsilon$. Proof.
Let λ <
0. Taylor’s theorem implies that as x → ∞ E [ η λn +1 − η λn | η n = x ] = x λ − (cid:20) λxµ ( x ) + λ ( λ − µ ( x ) + O ( x − ) (cid:21) , using (4.1) and (4.2). Then by (4.5) and (4.1) again, we have E [ η λn +1 − η λn | η n = x ] ≤ x λ − (cid:2) ( λ/ δ + λB ) + O ( x − ) (cid:3) , where B is the jumps bound in (4.1). It follows that for λ ∈ ( − δ/B , E [ η λn +1 − η λn | η n = x ] ≤ , for all x sufficiently large. Hence for the stopping times ℓ k it is also the case that E [ η λℓ k +1 − η λℓ k | η ℓ k = x ] ≤ , (4.15)for all x sufficiently large. Then the supermartingale property (4.15) implies that for β large enough and all r ∈ N E [ η λℓ k +1 | η ℓ k ∈ I r ] ≤ [(1 + β ) r − B ] λ . (4.16)Also, we have that E [ η λℓ k +1 | η ℓ k ∈ I r ] = p r E [ η λℓ k +1 | η ℓ k ∈ I r , η ℓ k +1 ∈ I r +1 ]+ (1 − p r ) E [ η λℓ k +1 | η ℓ k ∈ I r , η ℓ k +1 ∈ I r − ] ≥ p r [(1 + β ) r +1 + B ] λ + (1 − p r )[(1 + β ) r − + B ] λ . (4.17)Combining (4.16) and (4.17) we see that one can choose β > ε > p r > (1 /
2) + ε for all r . 24onsider, for k ∈ N , Z k := X r ∈ N r { η ℓ k ∈ I r } . (4.18)Then the process Z = ( Z k ) k ∈ N is a stochastic process on N with nearest-neighbour tran-sitions that tracks which interval I r the embedded process ˜ η is in. With p r as defined by(4.14), we have P [ Z k +1 = r + 1 | Z k = r ] = 1 − P [ Z k +1 = r − | Z k = r ] = p r . (4.19)Let κ Zr denote the time of the first visit of Z to r ∈ N , i.e. κ Zr := min { k ∈ N : Z k = r } . (4.20)For r ∈ N , s ∈ Z + define γ ( r, s ) := P [ κ Zr < ∞ | Z = r + s ] . (4.21)In words, γ ( r, s ) is the probability that the process Z hits r in finite time, given that itstarts at r + s .The next result will enable us to show, loosely speaking, that the process Z leaves eachstate for good only shortly after its first visit. Lemma 4.4
Under the conditions of Theorem 4.2, there exists C ∈ (0 , ∞ ) such that X r ∈ N γ ( r, ⌊ C log r ⌋ ) < ∞ . Proof.
To estimate the required hitting probability we introduce an auxiliary process. Fix C ∈ (0 , ∞ ), which we will eventually take to be large. Let r ∈ N . Define the nonnegativeprocess ( W t ) t ∈ Z + for t ∈ Z + by W t := exp (cid:8) − C − ( Z t − r ) (cid:9) . Then we have for n ∈ N and t ∈ Z + E [ W t +1 − W t | Z t = n ] = exp {− C − ( n − r ) } E [exp {− C − ( Z t +1 − Z t ) } − | Z t = n ] . (4.22)We will make use of the fact that there exists C ∈ (0 , ∞ ) such that exp( − x ) − ≤ − x + C x for all x with | x | ≤
1. Since Z has nearest-neighbour jumps, it follows that E [exp {− C − ( Z t +1 − Z t ) } − | Z t = n ] ≤ − C − E [ Z t +1 − Z t | Z t = n ] + C C − E [( Z t +1 − Z t ) | Z t = n ] ≤ − C − (2 p n −
1) + C C − , (4.23)using (4.19) and the fact that Z has nearest-neighbour jumps again. It then follows from(4.22), (4.23), and Lemma 4.3 that we can choose C sufficiently large, not depending on r , so that for any n ∈ N E [ W t +1 − W t | Z t = n ] ≤ , (4.24)25or all t ∈ Z + .Now let s ∈ Z + and take Z = r + s . Recall the definition of the stopping time κ Zr from (4.20). From (4.24) we have that ( W t ) t ∈ Z + is a nonnegative supermartingale, so that W ∞ := lim t →∞ W t exists a.s., and E [ W ] ≥ E [ W κ Zr ]. It follows that for any r ∈ N , s ∈ Z + E [ W ] = exp {− C − s } ≥ E [ W κ Zr { κ Zr < ∞} ] = γ ( r, s ) , using (4.21). In particular, it follows that for some C ∈ (0 , ∞ ) large enough X r ∈ N γ ( r, ⌊ C log r ⌋ ) ≤ X r ∈ N exp {− C − ⌊ C log r ⌋} ≤ X r ∈ N r − < ∞ ;completing the proof of the lemma.To complete the proof of Theorem 4.2, we need to translate our results for the embeddedprocess ˜ η back to the underlying process η . Proof of Theorem 4.2.
Consider the nearest-neighbour process Z = ( Z k ) k ∈ N on N , asdefined by (4.18). Recall the definition of κ Zr from (4.20). Let ω Zr denote the time of the last visit of Z to r ∈ N , i.e. ω Zr := max { k ∈ N : Z k = r } . Similarly for the process η set for x > κ ηx := min { n ∈ Z + : η n ≥ x } , ω ηx := max { n ∈ Z + : η n ≤ x } . With Lemma 4.4, the Borel-Cantelli lemma implies that, a.s., for only finitely many r ∈ N does Z return to r after visiting r + ⌊ C log r ⌋ . So, a.s., for all but finitely many r ∈ N , ω Zr ≤ κ Zr + ⌊ C log r ⌋ . (4.25)Set r ( x ) := (cid:22) log x log(1 + β ) (cid:23) . Observe that by definition of the process Z we have that a.s. for some C ∈ (0 , ∞ ) and all x large enough ω ηx ≤ ω Zr ( x )+1 ≤ κ Zr ( x )+ ⌊ C log r ( x ) ⌋ , by (4.25). Again by the definition of Z , it now follows that a.s. for some C ′ ∈ (0 , ∞ ) ω ηx ≤ κ Zr ( x )+ ⌊ C log r ( x ) ⌋ ≤ κ η ⌊ x (log x ) C ′ ⌋ , (4.26)for all x large enough.The conditions of Theorem 4.2 imply those of Theorem 4.1(ii). Hence the lower boundin Theorem 4.1(ii) applies, and so for any ε >
0, a.s., for all but finitely many $n \in \mathbb{Z}_+$,
\[ \sup\{x \ge 0 : \kappa^\eta_x \le n\} \ge \max_{0 \le m \le n} \eta_m \ge n^{1/2}(\log n)^{-(1/2)-\varepsilon}. \]
It follows that for any $\varepsilon >$
0, a.s., for all but finitely many x ∈ Z + κ ηx ≤ x (log x ) ε . (4.27)So by (4.26) and (4.27) there exist C, x ∈ (0 , ∞ ) such that, a.s., for all x ≥ x ω ηx ≤ x (log x ) C . By the transience of η , we have that a.s. η n ≥ x for all but finitely many n ∈ Z + . Hencea.s., for all but finitely many n ∈ Z + we have n ≤ ω ηη n ≤ η n (log η n ) C ≤ η n (log n ) C ′ , using the jumps bound (4.1) for the final inequality. This proves the theorem. To prove our main theorems on the stochastic billiard model, we start by studying theproperties of the process between successive collisions; i.e., the jumps of the process ξ .Suppose that at time n ∈ Z + we have ξ n = ( ξ (1) n , ξ (2) n ) = ( x, ± g ( x )) for x > A , and then ξ n is reflected at angle α to the normal. Denote ∆( x, α ) := ξ (1) n +1 − ξ (1) n , the jump of thehorizontal component of ξ . Also set θ := arctg g ′ ( x ), so tg θ = g ′ ( x ).We now proceed to obtain estimates for ∆( x, α ) and its moments. The next lemmagives an upper bound on ∆( x, α ) that follows from the fact that for large enough x ourtube will be almost flat, while α is bounded strictly away from ± π/ Lemma 5.1
Let $\alpha_0 \in (0, \pi/2)$, and suppose that $g$ satisfies (A1), and also (2.4) or (2.5). Then there exist $A, C \in (0,\infty)$ such that for all $x > A$ and all $\alpha \in (-\alpha_0, \alpha_0)$, $|\Delta(x,\alpha)| \le C g(x)$. Proof.
Fix α ∈ (0 , π/ g ′ ( x ) → θ → x → ∞ ; in particular we can choose A large enough so that for all x ≥ A , | θ | < min { α , ( π/ − α } and tg( α + θ ) < c for some c < ∞ .By symmetry, it suffices to suppose that we start on the positive half of the curve,i.e., at ( x, g ( x )). We have that | ∆( x, α ) | is bounded by max {| ∆( x, α ) | , | ∆( x, − α ) |} . Firstsuppose (2.4) holds. Then g is nondecreasing, so θ ≥
0. Consider ∆( x, α ), represented as∆ + when α = α in Figure 2. The reflected ray at angle α to the normal from ( x, g ( x ))has equation in ( x , y ) given by x − x = − ( y − g ( x ))tg( α + θ ) . Take a ∈ (0 , /c ). As g ′ ( x ) →
0, we have | g ′ ( x ) | < a for all x large enough. Considerthe line y + g ( x ) = − a ( x − x ) . α α + θθ normal∆ + y + g ( x ) = − a ( x − x )∆ ′ α + θ Figure 2: Auxiliary construction for the proof of Lemma 5.1This line intersects ∂ D at ( x, − g ( x )) and it intersects the reflected ray at angle α to thenormal from ( x, g ( x )) at x ≥ x with x − x = 2 g ( x )tg( α + θ )1 − a tg( α + θ ) . (5.1)Since | g ′ ( x ) | < a for x ≥ x , the curve y = − g ( x ) remains above the line y + g ( x ) = − a ( x − x ) for all x ≥ x . Thus | ∆( x, α ) | is bounded by x − x as given by (5.1); this is∆ ′ in Figure 2.Thus, for some C ∈ (0 , ∞ ), | ∆( x, α ) | ≤ Cg ( x ), for all x large enough. A similarargument applies for ∆( x, − α ), and with slight modification when (2.5) holds.A related geometrical argument also provides the proof of Proposition 2.1: Proof of Proposition 2.1.
Under condition (2.4), so that g ( x ) is strictly increasing, thedeterministic path in the case P [ α = 0] = 1 tends to infinity. It is then clear that for anydistribution for α in this case that there exists ε > P [ ξ (1) n +1 − ξ (1) n > ε | ξ (1) n = x ] > ε (5.2)for all x large enough. The stated result for ξ then follows in this case, and hence the resultfor X also.Now suppose that condition (2.5) holds and that α is non-degenerate. Then thereexists ε > P [ α > ε ] > ε . Under (A1), g ′ ( x ) → x → ∞ , so wecan choose A big enough so that the angle θ to the normal satisfies | θ | < ε / x > A . From (A1), we have that g is monotone and g ( x ) >
0. Then it follows that forall x in any bounded interval ( A, C ), (5.2) holds for some ε >
0. Thus with positive28 α α + θg ( x ) tg( α + θ ) g ( x + ∆ + ) tg( α + θ ) θ normal Figure 3: ∆ + : α > ξ reaches any finite horizontal distance, and the result follows in this case also.The next lemma gives crucial estimates for the first two moments of ∆( x, α ). Theserequire fairly lengthy computations. Lemma 5.2
Suppose that $\alpha$ satisfies (2.3) and $g(x)$ satisfies (A1), and also (2.4) or (2.5). Then as $x \to \infty$,
\[ E[\Delta(x,\alpha)] = 2g'(x)g(x)\big(1 + 2E[\operatorname{tg}^2\alpha]\big) + O(g(x)^3/x^2); \quad (5.3) \]
and
\[ E[\Delta(x,\alpha)^2] = 4g(x)^2 E[\operatorname{tg}^2\alpha] + O(g(x)^3/x). \quad (5.4) \]
Proof.
First suppose that (2.4) holds (the case of an increasing tube). Then θ ≥
0. Take $x$ sufficiently large so that $\alpha + \theta < \pi/2$. Consider the jump $\Delta(x,\alpha)$. We need to consider three cases. In the first case, $\alpha > 0$,
\[ \Delta(x,\alpha) = \Delta_+ = \big(g(x) + g(x+\Delta_+)\big)\tan(\alpha+\theta). \]
In the second case (see Figure 4), $\alpha < 0$, $|\alpha| < \theta$,
\[ \Delta(x,\alpha) = \Delta'_+ = \big(g(x) + g(x+\Delta'_+)\big)\tan(\alpha+\theta). \]
In the third case (see Figure 5), $\alpha < 0$, $|\alpha| \geq \theta$,
\[ -\Delta(x,\alpha) = \Delta_- = \big(g(x) + g(x-\Delta_-)\big)\tan(-\alpha-\theta). \]

[Figure 4: $\Delta'_+$: $-\theta < \alpha < 0$. Figure 5: $\Delta_-$: $\alpha < -\theta$.]

We start with the first of the three cases. Using Taylor's theorem, we can write
\[ \Delta_+ = \tan(\alpha+\theta)\Big[ 2g(x) + g'(x)\Delta_+ + \frac{g''(x+\phi\Delta_+)}{2}\Delta_+^2 \Big], \]
for some $\phi \in [0,1]$. Writing $x_+ = x + \phi\Delta_+$, and solving for $\Delta_+$,
\begin{align*}
\Delta_+ &= \frac{2g(x)\tan(\alpha+\theta)}{1 - g'(x)\tan(\alpha+\theta)} + \frac{g''(x_+)\tan(\alpha+\theta)}{2(1 - g'(x)\tan(\alpha+\theta))}\Delta_+^2 \\
&= \frac{2g(x)(\tan\alpha + \tan\theta)}{1 - \tan\alpha\tan\theta - g'(x)(\tan\alpha+\tan\theta)} + \frac{g''(x_+)(\tan\alpha+\tan\theta)}{2(1 - \tan\alpha\tan\theta - g'(x)(\tan\alpha+\tan\theta))}\Delta_+^2 \\
&= \frac{2g(x)(g'(x)+\tan\alpha)}{1 - 2g'(x)\tan\alpha - g'(x)^2} + \frac{g''(x_+)(g'(x)+\tan\alpha)}{2(1 - 2g'(x)\tan\alpha - g'(x)^2)}\Delta_+^2,
\end{align*}
where we have used the fact that
\[ \tan(u \pm v) = \frac{\tan u \pm \tan v}{1 \mp \tan u \tan v}. \]
(Recall that $\tan\theta = g'(x)$.) Analogously, in the second case,
\begin{align*}
\Delta'_+ &= \frac{2g(x)\tan(\alpha+\theta)}{1 - g'(x)\tan(\alpha+\theta)} + \frac{g''(x'_+)\tan(\alpha+\theta)}{2(1 - g'(x)\tan(\alpha+\theta))}(\Delta'_+)^2 \\
&= \frac{2g(x)(g'(x)-\tan|\alpha|)}{1 + 2g'(x)\tan|\alpha| - g'(x)^2} + \frac{g''(x'_+)(g'(x)-\tan|\alpha|)}{2(1 + 2g'(x)\tan|\alpha| - g'(x)^2)}(\Delta'_+)^2,
\end{align*}
where $x'_+ = x + \phi\Delta'_+$, $\phi \in [0,1]$. In the third case,
\begin{align*}
\Delta_- &= \frac{2g(x)\tan(-\alpha-\theta)}{1 + g'(x)\tan(-\alpha-\theta)} + \frac{g''(x_-)\tan(-\alpha-\theta)}{2(1 + g'(x)\tan(-\alpha-\theta))}\Delta_-^2 \\
&= \frac{2g(x)(-g'(x)+\tan|\alpha|)}{1 + 2g'(x)\tan|\alpha| - g'(x)^2} + \frac{g''(x_-)(-g'(x)+\tan|\alpha|)}{2(1 + 2g'(x)\tan|\alpha| - g'(x)^2)}\Delta_-^2,
\end{align*}
where $x_- = x - \phi\Delta_-$, $\phi \in [0,1]$. By Lemma 5.1, $\max\{\Delta_+, \Delta'_+, \Delta_-\} = O(g(x))$ a.s., and so
\[ \max\{|x_+ - x|,\, |x'_+ - x|,\, |x_- - x|\} = O(g(x)) = o(x). \]
So in particular $g''(x_+) = O(g''(x))$, and similarly for $x'_+$, $x_-$. Thus,
\[ \Delta_+ = \frac{2g(x)(g'(x)+\tan\alpha)}{1 - 2g'(x)\tan\alpha - g'(x)^2} + O\big(g''(x)g(x)^2\big), \qquad
\Delta'_+ = \frac{2g(x)(g'(x)-\tan|\alpha|)}{1 + 2g'(x)\tan|\alpha| - g'(x)^2} + O\big(g''(x)g(x)^2\big), \]
and
\[ \Delta_- = \frac{2g(x)(-g'(x)+\tan|\alpha|)}{1 + 2g'(x)\tan|\alpha| - g'(x)^2} + O\big(g''(x)g(x)^2\big). \]
We can now compute the moments of $\Delta(x,\alpha)$. For convenience write $F(x) := P[\alpha \leq x]$. For the first moment we obtain, using the symmetry of $F$,
\begin{align*}
E[\Delta(x,\alpha)] &= \int_{\alpha>0} \Delta_+ \,\mathrm{d}F(\alpha) + \int_{-\theta<\alpha<0} \Delta'_+ \,\mathrm{d}F(\alpha) - \int_{\alpha<-\theta} \Delta_- \,\mathrm{d}F(\alpha) \\
&= \int_{0<\alpha<\theta} 2g(x)\Big[ \frac{g'(x)+\tan\alpha}{1-2g'(x)\tan\alpha-g'(x)^2} + \frac{g'(x)-\tan\alpha}{1+2g'(x)\tan\alpha-g'(x)^2} \Big] \mathrm{d}F(\alpha) \\
&\quad + \int_{\alpha>\theta} 2g(x)\Big[ \frac{g'(x)+\tan\alpha}{1-2g'(x)\tan\alpha-g'(x)^2} - \frac{-g'(x)+\tan\alpha}{1+2g'(x)\tan\alpha-g'(x)^2} \Big] \mathrm{d}F(\alpha) + O\big(g''(x)g(x)^2\big) \\
&= \int_{\alpha>0} 2g(x)\Big[ \frac{g'(x)+\tan\alpha}{1-2g'(x)\tan\alpha-g'(x)^2} + \frac{g'(x)-\tan\alpha}{1+2g'(x)\tan\alpha-g'(x)^2} \Big] \mathrm{d}F(\alpha) + O\big(g''(x)g(x)^2\big) \\
&= \int_{\alpha>0} 2g(x)\Big[ \frac{2g'(x) + 4g'(x)\tan^2\alpha - 2g'(x)^3}{(1-2g'(x)\tan\alpha-g'(x)^2)(1+2g'(x)\tan\alpha-g'(x)^2)} \Big] \mathrm{d}F(\alpha) + O\big(g''(x)g(x)^2\big).
\end{align*}
The denominator in the term in square brackets in the last integrand is $1 + O(g'(x)^2)$. Hence
\[ E[\Delta(x,\alpha)] = 2g(x)g'(x)\big(1 + 2E[\tan^2\alpha]\big) + O\big(g''(x)g(x)^2 + g'(x)^3 g(x)\big). \]
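The leading-order formula for $E[\Delta(x,\alpha)]$ can be checked numerically. The sketch below is illustrative only, not part of the proof: it takes $g(x) = x^{1/2}$ and $\alpha$ uniform on $(-\pi/4, \pi/4)$ as an example of a symmetric reflection law, and all helper names (`jump`, `mean_jump_mc`, `GAMMA`, `ALPHA_MAX`) are ours. Note that all three cases above satisfy the single implicit equation $\Delta = (g(x)+g(x+\Delta))\tan(\alpha+\theta)$, which the code solves by fixed-point iteration.

```python
import math
import random

GAMMA = 0.5              # tube half-width g(x) = x**GAMMA (illustrative choice)
ALPHA_MAX = math.pi / 4  # alpha ~ Uniform(-ALPHA_MAX, ALPHA_MAX): symmetric, bounded away from pi/2

def g(x):
    return x ** GAMMA

def g_prime(x):
    return GAMMA * x ** (GAMMA - 1.0)

def jump(x, alpha):
    """Horizontal displacement Delta(x, alpha): solve the implicit equation
    Delta = (g(x) + g(x + Delta)) * tan(alpha + theta), tan(theta) = g'(x),
    by fixed-point iteration (a contraction for large x, since
    |g'(x + Delta) * tan(alpha + theta)| is then small)."""
    t = math.tan(alpha + math.atan(g_prime(x)))
    delta = 2.0 * g(x) * t               # leading-order initial guess
    for _ in range(200):
        nxt = (g(x) + g(x + delta)) * t
        if abs(nxt - delta) < 1e-12:
            break
        delta = nxt
    return delta

def mean_jump_mc(x, n_pairs=20000, seed=12345):
    """Antithetic Monte Carlo estimate of E[Delta(x, alpha)]: averaging each
    pair (alpha, -alpha) mirrors the symmetry of F used above and cancels the
    O(g(x)) fluctuation, leaving the O(g(x) g'(x)) mean visible."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_pairs):
        a = rng.uniform(0.0, ALPHA_MAX)
        acc += 0.5 * (jump(x, a) + jump(x, -a))
    return acc / n_pairs

if __name__ == "__main__":
    x = 1.0e6
    e_tan2 = 4.0 / math.pi - 1.0         # E[tan^2 alpha] for Uniform(-pi/4, pi/4)
    predicted = 2.0 * g(x) * g_prime(x) * (1.0 + 2.0 * e_tan2)
    print("MC estimate: %.4f, leading-order prediction: %.4f"
          % (mean_jump_mc(x), predicted))
```

At $x = 10^6$ the Monte Carlo estimate and the prediction $2g(x)g'(x)(1+2E[\tan^2\alpha])$ should agree closely; the residual discrepancy is consistent with the $O(g''(x)g(x)^2)$ correction and the Monte Carlo error.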
Then using (A1) to bound the error terms, we obtain (5.3).

For the second moment, we have, in a similar fashion,
\begin{align*}
E[\Delta(x,\alpha)^2] &= \int_{\alpha>0} \Delta_+^2 \,\mathrm{d}F(\alpha) + \int_{-\theta<\alpha<0} (\Delta'_+)^2 \,\mathrm{d}F(\alpha) + \int_{\alpha<-\theta} \Delta_-^2 \,\mathrm{d}F(\alpha) \\
&= \int_{\alpha>0} \Big[ 4g(x)^2\Big(\frac{g'(x)+\tan\alpha}{1-2g'(x)\tan\alpha-g'(x)^2}\Big)^2 + O\big(g''(x)g(x)^3\big) \Big] \mathrm{d}F(\alpha) \\
&\quad + \int_{-\theta<\alpha<0} \Big[ 4g(x)^2\Big(\frac{g'(x)-\tan|\alpha|}{1+2g'(x)\tan|\alpha|-g'(x)^2}\Big)^2 + O\big(g''(x)g(x)^3\big) \Big] \mathrm{d}F(\alpha) \\
&\quad + \int_{\alpha<-\theta} \Big[ 4g(x)^2\Big(\frac{-g'(x)+\tan|\alpha|}{1+2g'(x)\tan|\alpha|-g'(x)^2}\Big)^2 + O\big(g''(x)g(x)^3\big) \Big] \mathrm{d}F(\alpha) \\
&= \int_{\alpha>0} \big[ 4g(x)^2\tan^2\alpha + O\big(g(x)^2 g'(x)\big) + O\big(g''(x)g(x)^3\big) \big] \mathrm{d}F(\alpha) \\
&\quad + \int_{\alpha<0} \big[ 4g(x)^2\tan^2\alpha + O\big(g(x)^2 g'(x)\big) + O\big(g''(x)g(x)^3\big) \big] \mathrm{d}F(\alpha) \\
&= 2\int_{\alpha>0} 4g(x)^2 \tan^2\alpha \,\mathrm{d}F(\alpha) + O\big(g'(x)g(x)^2\big) + O\big(g''(x)g(x)^3\big) \\
&= 4g(x)^2 E[\tan^2\alpha] + O\big(g'(x)g(x)^2\big) + O\big(g''(x)g(x)^3\big).
\end{align*}
Then (5.4) follows, again using (A1) to bound the error terms.

Now suppose that (2.5) holds (the case of a decreasing tube). In this case the argument follows similar lines to the previous one, and we only sketch the details. Now $\theta \leq 0$, and the three cases for the jump are $\Delta_+$, $\Delta_-$, $\Delta'_-$. For $\alpha > |\theta|$, we have
\[ \Delta(x,\alpha) = \Delta_+ = \big(g(x) + g(x+\Delta_+)\big)\tan(\alpha - |\theta|). \]
In the second case, when $0 \leq \alpha \leq |\theta|$,
\[ -\Delta(x,\alpha) = \Delta'_- = \big(g(x) + g(x-\Delta'_-)\big)\tan(|\theta| - \alpha). \]
In the third case, when $\alpha < 0$,
\[ -\Delta(x,\alpha) = \Delta_- = \big(g(x) + g(x-\Delta_-)\big)\tan(|\alpha| + |\theta|). \]
In the same way as before, we obtain
\[ \Delta_+ = \frac{2g(x)(g'(x)+\tan\alpha)}{1-2g'(x)\tan\alpha-g'(x)^2} + O\big(g''(x)g(x)^2\big), \qquad
\Delta'_- = -\frac{2g(x)(g'(x)+\tan\alpha)}{1-2g'(x)\tan\alpha-g'(x)^2} + O\big(g''(x)g(x)^2\big), \]
and
\[ \Delta_- = \frac{2g(x)(-g'(x)+\tan|\alpha|)}{1+2g'(x)\tan|\alpha|-g'(x)^2} + O\big(g''(x)g(x)^2\big). \]
Similar computations to before then yield the same expressions (5.3) and (5.4). Lemma 5.2 is proved.

The next result will allow us to compare the recurrence times $\sigma_A$ and $\tau_A$.

Lemma 5.3
Suppose that $\alpha$ satisfies (2.3) and $g$ satisfies (A1). (i) If $g$ satisfies (2.4) then for all $A$ sufficiently large, $\tau_A \geq \sigma_A$ a.s. (ii) If $g$ satisfies (2.5) then for all $A$ sufficiently large, $\tau_A \leq \sigma_A$ a.s.

Proof.
First we observe that when $\xi^{(1)}_n = x$ is large enough (such that $|g'(x)| < \tan(\frac{\pi}{2} - \alpha)$ for all $\alpha$ in the support of the reflection law), we have $\xi^{(2)}_{n+1}\xi^{(2)}_n < 0$, that is, successive collisions are a.s. on different sides of the tube. Thus, for $\xi^{(1)}_n = x \geq A$, a.s.,
\[ \|\xi_{n+1} - \xi_n\| \geq |\xi^{(2)}_{n+1} - \xi^{(2)}_n| \geq g(x). \quad (5.5) \]
Now suppose that (2.4) holds. Then for $x$ large enough, under (2.4), (5.5) implies $\|\xi_{n+1} - \xi_n\| \geq 1$. In other words, the time between collisions for the process $X$ is no less than that for the process $\xi$, and part (i) follows.

Now suppose that (2.5) holds. By the triangle inequality, we have
\[ \|\xi_{n+1} - \xi_n\| \leq |\xi^{(1)}_{n+1} - \xi^{(1)}_n| + |\xi^{(2)}_{n+1} - \xi^{(2)}_n|. \]
Thus Lemma 5.1 with (2.5) implies that for some $A, C \in (0,\infty)$, given $\xi^{(1)}_n = x \geq A$,
\[ \|\xi_{n+1} - \xi_n\| \leq Cg(x) \leq 1, \]
for all $x$ large enough. Then part (ii) follows.

Our approach to studying the horizontal component of the collisions process $\xi$ is to consider a rescaled version of the process in such a way that we get exactly an instance of the Lamperti problem. The key is to find a scale on which the process has uniformly bounded jumps.

Define the function $h : [1,\infty) \to (0,\infty)$ via $h(x) := x/g(x)$. Under assumption (A1) on $g$, it follows that
\begin{align*}
h'(x) &= \frac{1}{g(x)} - \frac{xg'(x)}{g(x)^2} = [1 - \gamma + o(1)]\frac{1}{g(x)}, \\
h''(x) &= -\frac{2g'(x)}{g(x)^2} - \frac{xg''(x)}{g(x)^2} + \frac{2xg'(x)^2}{g(x)^3} = [\gamma(\gamma-1) + o(1)]\frac{1}{xg(x)}, \\
h'''(x) &= o\Big(\frac{1}{xg(x)^2}\Big).
\end{align*}
Now for $n \in \mathbb{Z}_+$ set $\zeta_n := h(\xi^{(1)}_n)$. The process $\zeta = (\zeta_n)_{n \in \mathbb{Z}_+}$ is then covered by the Lamperti problem (cf. Section 4), as the following result shows.

Lemma 5.4
Suppose that (A1) holds. Suppose that $\alpha$ satisfies (2.3). Then there exists $B \in (0,\infty)$ such that for all $n \in \mathbb{Z}_+$ and all $y > 0$,
\[ P\big[|\zeta_{n+1} - \zeta_n| \leq B \mid \zeta_n = y\big] = 1. \quad (5.6) \]
Also, for all $n \in \mathbb{Z}_+$, as $y \to \infty$,
\[ m_1(y) := E[\zeta_{n+1} - \zeta_n \mid \zeta_n = y] = \frac{2\gamma(1-\gamma)(1 + E[\tan^2\alpha])}{y} + o(y^{-1}); \quad (5.7) \]
and
\[ m_2(y) := E[(\zeta_{n+1} - \zeta_n)^2 \mid \zeta_n = y] = 4(1-\gamma)^2 E[\tan^2\alpha] + o(1). \quad (5.8) \]

Proof.
Given $\xi^{(1)}_n = x$, denote $\zeta_n = y = h(x) > 0$. If the reflection is at angle $\alpha$, we have from Taylor's theorem that as $x \to \infty$,
\begin{align*}
\zeta_{n+1} - \zeta_n &= h(x + \Delta(x,\alpha)) - h(x) \\
&= h'(x)\Delta(x,\alpha) + \frac{h''(x)}{2}\Delta(x,\alpha)^2 + O\big(h'''(x)\Delta(x,\alpha)^3\big) \\
&= (1 - \gamma + o(1))\frac{\Delta(x,\alpha)}{g(x)} + \frac{\gamma(\gamma-1) + o(1)}{2}\cdot\frac{\Delta(x,\alpha)^2}{xg(x)} + o(g(x)/x), \quad (5.9)
\end{align*}
using Lemma 5.1. By Lemma 5.1 we have that $|\Delta(x,\alpha)| = O(g(x))$, and then (5.6) is immediate.

Taking expectations in (5.9) and using Lemma 5.2, we obtain
\begin{align*}
E[\zeta_{n+1} - \zeta_n \mid \zeta_n = h(x)] &= 2(1 - \gamma + o(1))(1 + 2E[\tan^2\alpha])g'(x) + (2\gamma(\gamma-1) + o(1))E[\tan^2\alpha]\frac{g(x)}{x} \\
&= 2\gamma(1-\gamma)(1 + E[\tan^2\alpha])\frac{g(x)}{x} + o(g(x)/x).
\end{align*}
This yields (5.7), since $g(x)/x = 1/h(x) = 1/y$. Similarly, squaring both sides of (5.9) and taking expectations gives (5.8).

This last result, together with Proposition 2.1, immediately implies the following:
Corollary 5.1
Suppose that $g$ satisfies (A1), and that $\alpha$ satisfies (2.3). Then $(\zeta_n)_{n \in \mathbb{Z}_+}$ is a Lamperti-type problem as discussed in Section 4, satisfying (A2) and (A3). Moreover, (4.4) holds if $g$ satisfies (2.4), and also, if $\alpha$ is non-degenerate, if $g$ satisfies (2.5). Finally, (4.3) holds provided that $\alpha$ is non-degenerate.

In the special case where $g(x) = x^\gamma$, $\gamma < 1$, so that $\zeta_n = (\xi^{(1)}_n)^{1-\gamma}$, we will need a more precise version of Lemma 5.4. This is Lemma 5.5 below. Not only will this enable us to deal with the critical case in the recurrence classification (Theorem 2.3), it will also be crucial for our proofs of the almost-sure bounds carried out in Section 5.3.

Lemma 5.5 Suppose that $g(x) = x^\gamma$ where $\gamma < 1$, and that $\alpha$ satisfies (2.3). Then for all $n \in \mathbb{Z}_+$, as $y \to \infty$,
\[ m_1(y) := E[\zeta_{n+1} - \zeta_n \mid \zeta_n = y] = \frac{2\gamma(1-\gamma)(1 + E[\tan^2\alpha])}{y} + o\big(y^{-1}(\log y)^{-1}\big); \quad (5.10) \]
and
\[ m_2(y) := E[(\zeta_{n+1} - \zeta_n)^2 \mid \zeta_n = y] = 4(1-\gamma)^2 E[\tan^2\alpha] + o\big((\log y)^{-1}\big). \quad (5.11) \]

Proof. We can apply Taylor's theorem to obtain, conditional on $\zeta_n = y = x^{1-\gamma} > 0$,
\[ \zeta_{n+1} - \zeta_n = (1-\gamma)x^{-\gamma}\Delta(x,\alpha) - \frac{\gamma(1-\gamma)}{2}x^{-\gamma-1}\Delta(x,\alpha)^2 + O(x^{2\gamma-2}). \]
Then taking expectations and using (5.3) and (5.4) we obtain (5.10). Similarly we obtain (5.11) after squaring the last displayed expression.
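The constants in (5.10) and (5.11) can also be checked numerically. The sketch below is again illustrative only (it takes $g(x) = x^{1/2}$ and $\alpha$ uniform on $(-\pi/4, \pi/4)$, for which $E[\tan^2\alpha] = 4/\pi - 1$; the helper names are ours): it estimates $m_1(y)$ and $m_2(y)$ by antithetic Monte Carlo at a large value of $y = x^{1-\gamma}$ and compares them with the leading terms, and it also evaluates the critical value $\gamma_c = E[\tan^2\alpha]/(1+2E[\tan^2\alpha])$, cf. (2.7).

```python
import math
import random

GAMMA = 0.5                   # g(x) = x**GAMMA, so zeta = xi**(1 - GAMMA) (illustrative)
ALPHA_MAX = math.pi / 4       # alpha ~ Uniform(-ALPHA_MAX, ALPHA_MAX) (illustrative reflection law)
E_TAN2 = 4.0 / math.pi - 1.0  # E[tan^2 alpha] for this law

def g(x): return x ** GAMMA
def g_prime(x): return GAMMA * x ** (GAMMA - 1.0)

def jump(x, alpha):
    # Solve Delta = (g(x) + g(x + Delta)) * tan(alpha + theta), tan(theta) = g'(x).
    t = math.tan(alpha + math.atan(g_prime(x)))
    delta = 2.0 * g(x) * t
    for _ in range(200):
        nxt = (g(x) + g(x + delta)) * t
        if abs(nxt - delta) < 1e-12:
            break
        delta = nxt
    return delta

def zeta_increment_moments(x, n_pairs=40000, seed=99):
    """Antithetic MC estimates of m1(y) and m2(y), where zeta = x**(1 - GAMMA);
    pairing alpha with -alpha exploits the symmetry of the reflection law."""
    rng = random.Random(seed)
    y = x ** (1.0 - GAMMA)
    s1 = s2 = 0.0
    for _ in range(n_pairs):
        a = rng.uniform(0.0, ALPHA_MAX)
        d_plus = (x + jump(x, a)) ** (1.0 - GAMMA) - y
        d_minus = (x + jump(x, -a)) ** (1.0 - GAMMA) - y
        s1 += 0.5 * (d_plus + d_minus)
        s2 += 0.5 * (d_plus * d_plus + d_minus * d_minus)
    return s1 / n_pairs, s2 / n_pairs

if __name__ == "__main__":
    x = 1.0e6
    y = x ** (1.0 - GAMMA)
    m1_mc, m2_mc = zeta_increment_moments(x)
    m1_th = 2.0 * GAMMA * (1.0 - GAMMA) * (1.0 + E_TAN2) / y   # leading term of (5.10)
    m2_th = 4.0 * (1.0 - GAMMA) ** 2 * E_TAN2                  # leading term of (5.11)
    gamma_c = E_TAN2 / (1.0 + 2.0 * E_TAN2)                    # critical value, cf. (2.7)
    print(m1_mc, m1_th, m2_mc, m2_th, gamma_c)
```

For this reflection law $\gamma_c \approx 0.177$, so the illustrative choice $\gamma = 1/2 > \gamma_c$ sits in the transient regime of Theorem 2.1(i).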
Proof of Theorem 2.1.
Under the conditions of Theorem 2.1, Corollary 5.1 holds. We apply Lamperti's result Proposition 4.1 to the process $\zeta$ described by Lemma 5.4, noting that $\zeta$ is null-recurrent, positive-recurrent, or transient exactly when $\xi$ is. From Lemma 5.4, we have that
\[ 2y m_1(y) + m_2(y) = 4(1-\gamma)\big(\gamma + E[\tan^2\alpha] + o(1)\big), \quad (5.12) \]
and also
\begin{align*}
2y m_1(y) - m_2(y) &= 4(1-\gamma)\big(\gamma(1 + 2E[\tan^2\alpha]) - E[\tan^2\alpha] + o(1)\big) \\
&= 4(1-\gamma)\big((\gamma - \gamma_c)(1 + 2E[\tan^2\alpha]) + o(1)\big), \quad (5.13)
\end{align*}
where $\gamma_c$ is given by (2.7).

For part (i) of the theorem, if $\gamma \in (\gamma_c, 1)$ we have from (5.13) that there exists $\delta > 0$ such that $2y m_1(y) - m_2(y) \geq \delta$ for all $y$ sufficiently large, and hence by Proposition 4.1(ii), $\zeta$ is transient.

For part (ii) of the theorem, it suffices to consider the case where $\alpha$ is non-degenerate, so $\gamma_c > 0$. Then from (5.13) we have that for $0 \leq \gamma < \gamma_c$, $2y m_1(y) \leq m_2(y)$ for all $y$ sufficiently large. Also, in this case $\gamma + E[\tan^2\alpha] > 0$, so we have from (5.12) that $2y m_1(y) \geq -m_2(y)$ for all $y$ sufficiently large. Hence by Proposition 4.1(i), $\zeta$ is null-recurrent.

This proves Theorem 2.1 for the process $\xi$, and the statement for the process $X$ follows from (2.6) and Lemma 5.3(i).

Proof of Theorem 2.2.
For part (i), if $\gamma + E[\tan^2\alpha] < 0$, we have from (5.12) that $2y m_1(y) + m_2(y) < -\delta$ for some $\delta > 0$ and all $y$ sufficiently large. Then (noting Corollary 5.1) it follows from Proposition 4.1(iii) that $\zeta$, and hence $\xi$, is positive-recurrent.

For part (ii), it suffices to suppose that $\alpha$ is non-degenerate. Then, since $\gamma \leq 0 < \gamma_c$, we have from (5.13) that $2y m_1(y) \leq m_2(y)$ for all $y$ large enough. On the other hand, if $\gamma + E[\tan^2\alpha] > 0$, we have from (5.12) that $2y m_1(y) + m_2(y) \geq 0$ for all $y$ sufficiently large. Then null-recurrence follows from Proposition 4.1(i).

In order to complete the proof of Theorem 2.3, we need a sharper form of Lamperti's recurrence classification result presented in Proposition 4.1. Fine results in this direction are given in [23]. We will only need the following consequence of Theorem 3 of [23].

Lemma 5.6 [23] For $\eta$ a Lamperti-type problem satisfying (A2), (4.3), and (4.4), $\eta$ is null-recurrent if, for all $x$ large enough,
\[ 2x|\mu_1(x)| \leq \Big(1 + \frac{1}{\log x}\Big)\mu_2(x). \]

Proof of Theorem 2.3. This now follows from Lemma 5.6 combined with Lemma 5.5.

5.3 Proofs for almost-sure bounds
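Before giving the proofs, the super-diffusive escape rate in the transient case can be illustrated by simulation. The sketch below is illustrative only, not part of the argument: it uses $g(x) = x^{1/2}$ and $\alpha$ uniform on $(-\pi/4, \pi/4)$ (so $\gamma = 1/2 > \gamma_c \approx 0.177$), a crude reflecting floor `x_min` near the origin that is not part of the model, and helper names of our own. In this regime the running maximum of $\xi^{(1)}_n$ should grow roughly like $n^{1/(2(1-\gamma))}$, i.e. close to linearly for $\gamma = 1/2$.

```python
import math
import random

GAMMA = 0.5              # g(x) = x**GAMMA; here GAMMA > gamma_c, the transient regime
ALPHA_MAX = math.pi / 4  # illustrative uniform reflection law

def g(x): return x ** GAMMA
def g_prime(x): return GAMMA * x ** (GAMMA - 1.0)

def jump(x, alpha):
    # Solve Delta = (g(x) + g(x + Delta)) * tan(alpha + theta), tan(theta) = g'(x).
    t = math.tan(alpha + math.atan(g_prime(x)))
    delta = 2.0 * g(x) * t
    for _ in range(200):
        nxt = (g(x) + g(x + delta)) * t
        if abs(nxt - delta) < 1e-12:
            break
        delta = nxt
    return delta

def run_chain(n_steps=40000, x0=100.0, x_min=50.0, seed=7):
    """Simulate the horizontal collision chain xi_n and record running maxima
    at two checkpoints.  x_min is a crude regularization near the origin (the
    behaviour for small x does not affect the growth rate); it is not part of
    the model described above."""
    rng = random.Random(seed)
    x = x0
    best = x0
    maxima = {}
    for n in range(1, n_steps + 1):
        x = max(x + jump(x, rng.uniform(-ALPHA_MAX, ALPHA_MAX)), x_min)
        best = max(best, x)
        if n in (1000, n_steps):
            maxima[n] = best
    return maxima

if __name__ == "__main__":
    m = run_chain()
    slope = math.log(m[40000] / m[1000]) / math.log(40.0)
    print("empirical growth exponent:", round(slope, 2),
          " predicted 1/(2(1-gamma)):", 1.0 / (2.0 * (1.0 - GAMMA)))
```

The fitted exponent fluctuates from run to run (the Lamperti regime has fluctuations of the same order as the mean), but for a typical seed it lands near the predicted value $1/(2(1-\gamma)) = 1$.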
The key to the proof of our almost-sure bounds for the stochastic billiard model is to apply our almost-sure bound results from Section 4 to the rescaled process $\zeta$ that we studied in Lemma 5.5. This will allow us to obtain Theorems 2.4 and 2.6. We will then derive the results for the continuous-time process $X$, Theorems 2.5 and 2.7, from the corresponding results for $\xi$. Recall that for $n \in \mathbb{Z}_+$, $\zeta_n := (\xi^{(1)}_n)^{1-\gamma}$.

Proof of Theorem 2.4.
The idea here is to apply Theorem 4.1 to the process $\zeta$. By Lemma 5.5, we have that (5.12) holds. Then since $\gamma > 0$, we can apply Theorem 4.1 to $\zeta$ (using Corollary 5.1). Thus for any $\varepsilon > 0$, a.s., for all but finitely many $n \in \mathbb{Z}_+$,
\[ n^{1/2}(\log n)^{-(1/2)-\varepsilon} \leq \max_{0 \leq m \leq n} \zeta_m \leq n^{1/2}(\log n)^{(1/2)+\varepsilon}, \]
and then (2.8) and (2.9) follow, since $\zeta_n = (\xi^{(1)}_n)^{1-\gamma}$.

Also by Lemma 5.5, we have that (5.13) holds. If $\gamma > \gamma_c$, it follows (using Corollary 5.1) that we can apply Theorem 4.2 with $\eta = \zeta$ to obtain that for some $D \in (0,\infty)$, a.s., for all but finitely many $n \in \mathbb{Z}_+$, $\zeta_n \geq n^{1/2}(\log n)^{-D}$. Then (2.10) follows.

Proof of Theorem 2.6.
This time we will apply Theorems 4.1 and 4.3. It follows from Lemma 5.5 that
\[ 2y m_1(y) = -\kappa m_2(y) + o\big((\log y)^{-1}\big), \quad \text{where} \quad \kappa = -\frac{\gamma(1 + E[\tan^2\alpha])}{(1-\gamma)E[\tan^2\alpha]}. \]
Hence for $\gamma < -E[\tan^2\alpha]$ we have $\kappa > 1$, and we can apply Theorem 4.3 to $\zeta$. Then part (ii) of the theorem follows. On the other hand, for $\gamma > -E[\tan^2\alpha]$, we have $\kappa < 1$, and part (i) follows from Theorem 4.1 applied to $\zeta$.

The next result enables us to obtain almost-sure bounds for $X^{(1)}_t$ from bounds for $\xi^{(1)}_n$.

Lemma 5.7
Suppose that $\gamma < 1$. Suppose that there exist $a, b > 0$ with $a\gamma > -1$, such that for any $\varepsilon > 0$, a.s., for all but finitely many $n \in \mathbb{Z}_+$,
\[ n^a(\log n)^{-b-\varepsilon} \leq \max_{0 \leq m \leq n} \xi^{(1)}_m \leq n^a(\log n)^{b+\varepsilon}. \quad (5.14) \]
Then for any $\varepsilon > 0$, a.s., for all $t$ sufficiently large,
\[ \sup_{0 \leq s \leq t} X^{(1)}_s \geq t^{\frac{a}{1+\gamma a}}(\log t)^{-\frac{2\gamma ab + b}{1+\gamma a} - \varepsilon}. \]

Proof. Recall the definition of the collision times $\nu_k$ from (2.1). We have from the triangle inequality and Lemma 5.1 that for some $C \in (0,\infty)$, for all $k \in \mathbb{N}$,
\[ \nu_k = \sum_{j=0}^{k-1} \|\xi_{j+1} - \xi_j\| \leq \sum_{j=0}^{k-1}\Big( |\xi^{(1)}_{j+1} - \xi^{(1)}_j| + |\xi^{(2)}_{j+1} - \xi^{(2)}_j| \Big) \leq C\sum_{j=0}^{k} (\xi^{(1)}_j)^\gamma. \]
Thus by the upper bound in (5.14), for any $\varepsilon > 0$, a.s., for some $C \in (0,\infty)$ and all $k \in \mathbb{Z}_+$,
\[ \nu_k \leq Ck^{1+\gamma a}(\log k)^{\gamma b + \varepsilon}. \quad (5.15) \]
Let $\varepsilon > 0$, and for $t > 1$, set
\[ k_\varepsilon(t) := \Big\lfloor t^{\frac{1}{1+\gamma a}}(\log t)^{-\frac{\gamma b}{1+\gamma a} - \varepsilon} \Big\rfloor. \]
Also recall the definition of $n(t)$ from (2.2). Then by (5.15) we have that for any $\varepsilon > 0$, for some $C \in (0,\infty)$ and $\varepsilon' > 0$, a.s., for all $t$ large enough,
\[ \nu_{k_\varepsilon(t)} \leq Ct(\log t)^{-\varepsilon'} \leq t; \]
hence for any $\varepsilon > 0$, a.s., for all $t$ sufficiently large,
\[ k_\varepsilon(t) \leq n(t), \quad \text{and} \quad \nu_{k_\varepsilon(t)} \leq \nu_{n(t)} \leq t < \nu_{n(t)+1}. \quad (5.16) \]
Now from (5.16) we have that, a.s., for all $t$ large enough,
\[ \sup_{0 \leq s \leq t} X^{(1)}_s \geq \max_{0 \leq m \leq k_\varepsilon(t)} X^{(1)}_{\nu_m} = \max_{0 \leq m \leq k_\varepsilon(t)} \xi^{(1)}_m. \quad (5.17) \]
Now applying the lower bound in (5.14) we obtain, for any $\varepsilon > 0$, a.s., for all $t$ large enough,
\[ \sup_{0 \leq s \leq t} X^{(1)}_s \geq (k_\varepsilon(t))^a(\log k_\varepsilon(t))^{-b-\varepsilon} \geq Ct^{\frac{a}{1+\gamma a}}(\log t)^{-\frac{\gamma ab}{1+\gamma a} - \varepsilon a}(\log t)^{-b-\varepsilon}, \]
using the definition of $k_\varepsilon(t)$. Simplifying leads to the desired result.

We will use Lemma 5.7 in the proofs of Theorems 2.5 and 2.7 below. We will apply the lemma taking either $a = 1/(2(1-\gamma))$ with $\gamma < 1$, or $a = \rho(\gamma)$ with $\gamma < 0$. Note that for $\gamma \leq 0$ we have $\gamma\rho(\gamma) \geq \gamma/(2(1-\gamma)) \geq -1/2$, so that the hypothesis $a\gamma > -1$ is satisfied in either case.

Proof of Theorem 2.5.
First of all, from Theorem 2.4 we have that (2.8) and (2.9) hold. Thus we can apply the $a = b = 1/(2(1-\gamma))$ case of Lemma 5.7, which yields (2.11). It remains to prove (2.12).

By the construction of the process, we have that a.s., for $t \geq 0$,
\[ X^{(1)}_t \in \Big[\min\big\{\xi^{(1)}_{n(t)}, \xi^{(1)}_{n(t)+1}\big\}, \max\big\{\xi^{(1)}_{n(t)}, \xi^{(1)}_{n(t)+1}\big\}\Big]. \quad (5.18) \]
Suppose that $\gamma > \gamma_c$, so that we have transience. First we prove the lower bound in (2.12). We have from (5.18) that a.s., for all $t$ large enough, $X^{(1)}_t \geq \min\{\xi^{(1)}_{n(t)}, \xi^{(1)}_{n(t)+1}\}$. Hence from (2.10) we have that for some $D \in (0,\infty)$, a.s., for all $t$ large enough,
\[ X^{(1)}_t \geq (n(t))^{\frac{1}{2(1-\gamma)}}(\log n(t))^{-D} \geq (k_\varepsilon(t))^{\frac{1}{2(1-\gamma)}}(\log k_\varepsilon(t))^{-D}, \]
the final inequality using (5.16) and the fact that the function $z \mapsto z^{\frac{1}{2(1-\gamma)}}(\log z)^{-D}$ is eventually increasing in $z$. Now (2.12) follows by the definition of $k_\varepsilon(t)$.

Now we prove the upper bound in (2.12). From (2.1) and (5.5) we have that $\nu_k \geq \sum_{j=0}^{k-1}(\xi^{(1)}_j)^\gamma$. Then using (2.10) we have that for some $D \in (0,\infty)$, a.s., for all but finitely many $k \in \mathbb{N}$,
\[ \nu_k \geq k^{\frac{2-\gamma}{2(1-\gamma)}}(\log k)^{-D}. \quad (5.19) \]
Let $D > 0$, and for $t > 1$ set
\[ k'_D(t) := \Big\lfloor t^{\frac{2(1-\gamma)}{2-\gamma}}(\log t)^D \Big\rfloor. \]
Then by (5.19), we have that for $D$ large enough, a.s., for all $t$ sufficiently large, $\nu_{k'_D(t)} \geq t$, and so $n(t) \leq k'_D(t)$. Now from (5.18) and (2.8) we have that for some $C \in (0,\infty)$, a.s., for all $t$ sufficiently large,
\[ X^{(1)}_t \leq \xi^{(1)}_{n(t)} + \xi^{(1)}_{n(t)+1} \leq (n(t))^{\frac{1}{2(1-\gamma)}}(\log n(t))^C. \]
Now using the fact that $n(t) \leq k'_D(t)$ a.s., and the definition of $k'_D(t)$, the result follows.

Proof of Theorem 2.7.
We apply Lemma 5.7 again. For part (i), we have from part (i) of Theorem 2.6 that (2.8) and (2.9) hold, so we can apply the $a = b = 1/(2(1-\gamma))$ case of Lemma 5.7 to obtain (2.11) in this case.

For part (ii), we have from part (ii) of Theorem 2.6 that (2.13) and (2.14) hold. So we can apply the $2a = b = 2\rho(\gamma)$ case of Lemma 5.7, which yields (2.15).

Acknowledgements
We are grateful to the anonymous referees for their diligence and their detailed commentson an earlier version of this paper, which have led to several improvements.
References

[1] H. Babovsky, On Knudsen flows within thin tubes, J. Statist. Phys. (1986) 865–878.

[2] H. Brézis, W. Rosenkrantz, and B. Singer, An extension of Khintchine's estimate for large deviations to a class of Markov chains converging to a singular diffusion, Comm. Pure Appl. Math. (1971) 705–726.

[3] C. Cercignani, The Boltzmann Equation and its Applications, Springer-Verlag, New York, 1988.

[4] F. Comets, M. Menshikov, and S. Popov, Lyapunov functions for random walks and strings in random environment, Ann. Probab. (1998) 1433–1445.

[5] F. Comets, S. Popov, G.M. Schütz, and M. Vachkovskaia (2007) Billiards in a general domain with random reflections. To appear in: Archive for Rational Mechanics and Analysis. Available at arXiv.org as math.PR/0612799.

[6] E. Csáki, A. Földes, and P. Révész (2007) Transient NN random walk on the line. Preprint. Available at arXiv.org as math.PR/0707.0734.

[7] A. Dvoretzky and P. Erdős, Some problems on random walk in space, in: Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, 1950, pp. 353–367. University of California Press, Berkeley and Los Angeles, 1951.

[8] M.D. Esposti, G.D. Magno, and M. Lenci, An infinite step billiard, Nonlinearity (1998) 991–1013.

[9] S.N. Evans, Stochastic billiards on general tables, Ann. Appl. Probab. (2001) 419–437.

[10] A.M. Fal′, Certain limit theorems for an elementary Markov random walk, Ukrainian Math. J. (1981) 433–435, translated from Ukrain. Mat. Zh.

[11] …

[12] …, Adv. Appl. Probab. (1984) 293–323 (in French).

[13] T.E. Harris, First passage and recurrence distributions, Trans. Amer. Math. Soc. (1952) 471–486.

[14] J.L. Hodges, Jr. and M. Rosenblatt, Recurrence-time moments in random walks, Pacific J. Math. (1953) 127–136.

[15] S. Karlin and J. McGregor, Random walks, Illinois J. Math. (1959) 66–81.

[16] M. Knudsen, Kinetic Theory of Gases: Some Modern Aspects, Methuen's Monographs on Physical Subjects, Methuen, London, 1952.

[17] J. Lamperti, Criteria for the recurrence and transience of stochastic processes I, J. Math. Anal. Appl. (1960) 314–330.

[18] J. Lamperti, A new class of probability limit theorems, J. Math. Mech. (1962) 749–772.

[19] J. Lamperti, Criteria for stochastic processes II: passage-time moments, J. Math. Anal. Appl. (1963) 127–145.

[20] M. Lenci, Escape orbits for non-compact flat billiards, Chaos (1996) 428–431.

[21] M. Lenci, Semi-dispersing billiards with an infinite cusp I, Comm. Math. Phys. (2002) 133–180.

[22] M. Lenci, Semidispersing billiards with an infinite cusp. II, Chaos (2003) 105–111.

[23] M.V. Menshikov, I.M. Asymont, and R. Iasnogorodskii, Markov processes with asymptotically zero drifts, Problems of Information Transmission (1995) 248–261, translated from Problemy Peredachi Informatsii.

[24] …, Markov Processes Relat. Fields (1995) 57–78.

[25] M.V. Menshikov and A.R. Wade, Logarithmic speeds for one-dimensional perturbed random walk in random environment, Stochastic Processes Appl. (2008) 389–416.

[26] W.A. Rosenkrantz, A local limit theorem for a certain class of random walks, Ann. Math. Statist. (1966) 855–859.

[27] G.J. Székely, On the asymptotic properties of diffusion processes, Ann. Univ. Sci. Budapest. Eötvös Sect. Math. (1974) 69–71.

[28] S. Tabachnikov, Billiards, Société Mathématique de France, Paris, 1995.

[29] M. Voit, A law of the iterated logarithm for a class of polynomial hypergroups, Monatsh. Math. (1990) 311–326.

[30] M. Voit, Strong laws of large numbers for random walks associated with a class of one-dimensional convolution structures, Monatsh. Math. (1992) 59–74.

[31] M. Voit, A law of the iterated logarithm for Markov chains on N associated with orthogonal polynomials, J. Theoret. Probab.