New insights on the reinforced Elephant Random Walk using a martingale approach
Lucile Laulin
Abstract
This paper is devoted to a direct martingale approach for one type of reinforced elephant random walk (RERW). The elephant random walk is a non-Markovian process which has a complete memory of its entire history. In the diffusive and critical regimes, we establish the almost sure convergence, the law of iterated logarithm and the quadratic strong law for the RERW. The distributional convergences of the RERW to some Gaussian processes are also provided. In the superdiffusive regime, we prove the distributional convergence as well as the mean square convergence of the RERW. All our analysis relies on asymptotic results for multi-dimensional martingales with matrix normalization.
The subject of reinforced random walks has been of interest over the last years, with the focus being mainly on graphs and edge- or vertex-reinforced random walks; see for example [19] or [22] for a comprehensive and extensive overview of the subject. See also [1, 8] for other recent contributions on reinforced random walks. In this paper, we investigate a special case of reinforced random walk that relies on the Elephant Random Walk (ERW), another subject of much attention since it was introduced by Schütz and Trimper [23] in the early 2000s. At first, the ERW was used to investigate how long-range memory affects the random walk and induces a crossover from a diffusive to a superdiffusive behavior. It was referred to as the ERW in allusion to the traditional saying that elephants can always remember anywhere they have been.

The elephant starts at the origin at time zero, S_0 = 0. At time n = 1, the elephant moves one step to the right with probability q and to the left with probability 1 − q, for some q in [0, 1]. Afterwards, at time n + 1, the elephant chooses uniformly at random an integer k among the previous times 1, . . . , n. Then, it moves exactly in the same direction as that of time k with probability p, or in the opposite direction with probability 1 − p, where the parameter p stands for the memory parameter of the ERW. The position of the elephant at time n + 1 is given by

S_{n+1} = S_n + X_{n+1}    (1.1)

where X_{n+1} is the (n + 1)-th increment of the random walk. The ERW shows three different regimes depending on the location of its memory parameter p with respect to the critical value p_c = 3/4 when d = 1. A strong law of large numbers and a central limit theorem for the position S_n, properly normalized, were established in the diffusive regime p < 3/4 and the critical regime p = 3/4, see [2], [12], [13], [23] and the more recent contributions [4], [11], [15], [16], [21], [26]. The superdiffusive regime p > 3/4 was investigated in [3], and [5] extended all the results of [3] to the multi-dimensional ERW (MERW). One can write the (n + 1)-th increment X_{n+1} under the form

X_{n+1} = α_{n+1} X_{β_{n+1}}.    (1.2)

In the case of the ERW we had α_{n+1} ∼ R(p) and β_{n+1} ∼ U{
1, . . . , n}. The only, but major, change for the RERW is the distribution of β_n.

This paper is organized as follows. The model of reinforced memory is presented in Section 2, while the main results are given in Section 3. We first investigate the diffusive regime a < (1 − c)/2, then the critical regime a = (1 − c)/2 and finally the superdiffusive regime a > (1 − c)/2. We assume in all the sequel that the parameter p ≠ 1/2, since the case p = 1/2 reduces to a classical random walk.

Let F_n = σ(X_1, . . . , X_n) and denote by ρ_n(k) the weight of the instant k after n steps. The ERW is associated with the special case where ρ_n(k) = 1 for k ≤ n and 0 elsewise. Adding a reinforcement of weight c, where c is a non-negative real number, implies that the weight ρ_n(k) of instant k is modified as follows

ρ_n(k) = 0 if k ≥ n + 1,
ρ_n(k) = 1 if k = n,
ρ_n(k) = ρ_{n−1}(k) + c 1_{β_n = k} if 1 ≤ k < n.

Consequently, it follows from the very definition of ρ_n(k) that the conditional distribution of β_{n+1} is given by, for 1 ≤ k ≤ n,

P(β_{n+1} = k | F_n) = ρ_n(k) / ∑_{j=1}^n ρ_n(j) = ρ_n(k) / ((c + 1)n − c).

The parameter c represents the intensity of the reinforcement. The reader can notice that in the case c = 0, β_{n+1} does not depend on F_n. Hereafter, let a = 2p − 1, such that −1 ≤ a ≤ 1. We have by the definition of X_n,

E[X_{n+1} | F_n] = E[α_{n+1}] E[X_{β_{n+1}} | F_n] = a E[ ∑_{k=1}^n X_k 1_{β_{n+1} = k} | F_n ] = (a / ((c + 1)n − c)) ∑_{k=1}^n X_k ρ_n(k).

Then, denote

Y_n = ∑_{k=1}^n X_k ρ_n(k)    (2.1)

such that

E[X_{n+1} | F_n] = (a / ((c + 1)n − c)) Y_n.    (2.2)

Hence, we immediately get

E[S_{n+1} | F_n] = S_n + E[X_{n+1} | F_n] = S_n + (a / ((c + 1)n − c)) Y_n.    (2.3)

Hereafter, noticing that

Y_{n+1} = ∑_{k=1}^{n+1} X_k ρ_{n+1}(k) = ∑_{k=1}^n X_k ( ρ_n(k) + c 1_{β_{n+1} = k} ) + X_{n+1} = Y_n + (α_{n+1} + c) X_{β_{n+1}},    (2.4)

we obtain

E[Y_{n+1} | F_n] = ( 1 + (a + c)/((c + 1)n − c) ) Y_n.    (2.5)

Finally, for any n ≥ 1, let

γ_n = 1 + (a + c)/((c + 1)n − c) = (n + aλ)/(n − cλ)  where  λ = 1/(c + 1),    (2.6)

and

a_n = ∏_{k=1}^{n−1} γ_k^{−1} = Γ(n − cλ) Γ(1 + aλ) / ( Γ(n + aλ) Γ(1 − cλ) ).    (2.7)

It follows from standard calculations on the Gamma function that

lim_{n→∞} n^{(a+c)λ} a_n = Γ(1 + aλ) / Γ(1 − cλ).    (2.8)

Our strategy for proving asymptotic results for the reinforced elephant random walk is as follows. On the one hand, the behavior of the position S_n is closely related to the one of the sequences (M_n) and (N_n) defined for all n ≥ 1 by

M_n = a_n Y_n  and  N_n = S_n − (a/(a + c)) Y_n.    (2.9)

We immediately get from (2.5) and (2.7) that (M_n) is a locally square-integrable martingale adapted to F_n. Moreover, we have from (2.2), (2.3) and (2.5) that

E[ S_{n+1} − (a/(a + c)) Y_{n+1} | F_n ] = S_n − (a/(a + c)) Y_n

which means that (N_n) is also a locally square-integrable martingale adapted to F_n. On the other hand, we can rewrite S_n as

S_n = N_n + (a/(a + c)) a_n^{−1} M_n    (2.10)

and equation (2.10) allows us to establish the asymptotic behavior of the RERW via an extensive use of the strong law of large numbers and the functional central limit theorem for multi-dimensional martingales [10], [14], [17], [25].
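The dynamics above — the weighted choice of β_{n+1}, the step X_{n+1} = α_{n+1} X_{β_{n+1}}, and the reinforcement of the chosen instant by c — can be sketched in a short simulation. This is an illustration of ours, not part of the paper; the function name and the inverse-transform sampling are our own choices.

```python
import random

def rerw(n_steps, p, q, c, rng=None):
    """Simulate a reinforced elephant random walk (RERW).

    p: memory parameter, q: first-step parameter, c >= 0: reinforcement.
    Returns the trajectory (S_0, ..., S_n) and the final weights rho_n(k).
    """
    rng = rng or random.Random()
    x = [1 if rng.random() < q else -1]   # X_1: right w.p. q, left w.p. 1 - q
    s = [0, x[0]]                         # S_0 = 0, S_1 = X_1
    rho = [1.0]                           # rho_1(1) = 1
    for n in range(1, n_steps):
        total = (c + 1) * n - c           # sum of the weights rho_n(k)
        # choose beta_{n+1} = k with probability rho_n(k) / total
        u, k, acc = rng.random() * total, 0, rho[0]
        while acc < u and k < n - 1:
            k += 1
            acc += rho[k]
        alpha = 1 if rng.random() < p else -1   # alpha_{n+1} ~ R(p)
        x.append(alpha * x[k])                  # X_{n+1} = alpha_{n+1} X_{beta_{n+1}}
        s.append(s[-1] + x[-1])
        rho[k] += c                             # reinforcement of the chosen instant
        rho.append(1.0)                         # weight 1 for the new instant n + 1
    return s, rho
```

Note that the normalizing constant (c + 1)n − c never has to be recomputed from the weights: each step adds exactly c + 1 to their sum.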
Our first result deals with the strong law of large numbers for the RERW in the diffusive regime where a < (1 − c)/2.

Theorem 3.1. We have the almost sure convergence

lim_{n→∞} S_n / n = 0  a.s.    (3.1)

The almost sure rate of convergence for the RERW is as follows.

Theorem 3.2. We have the quadratic strong law

lim_{n→∞} (1/log n) ∑_{k=1}^n S_k² / k² = (2ac + c − 1)/(2a + c − 1)  a.s.    (3.2)

Remark 3.3. In addition, we could also obtain an upper-bound for the law of iterated logarithm as it was done for the center of mass of the MERW in [6].

Hereafter, we are interested in the distributional convergence of the RERW, which holds in the Skorokhod space D([0, ∞[) of right-continuous functions with left-hand limits.

Theorem 3.4. The following convergence in distribution in D([0, ∞[) holds

( S_{⌊nt⌋}/√n, t ≥ 0 ) ⟹ ( W_t, t ≥ 0 )    (3.3)

where (W_t, t ≥ 0) is a real-valued centered Gaussian process starting from the origin with covariance

E[W_s W_t] = ( a(1 − c²)/((a + c)(1 − 2a − c)) ) s (t/s)^{λ(a+c)} + ( c(a + 1)/(a + c) ) s    (3.4)

for 0 < s ≤ t. In particular, we have

S_n/√n  −L→  N( 0, (2ac + c − 1)/(2a + c − 1) ).    (3.5)

Remark 3.5. When c = 0, we find again the results from [2] for the ERW

( S_{⌊nt⌋}/√n, t ≥ 0 ) ⟹ ( W_t, t ≥ 0 )

where (W_t, t ≥ 0) is a real-valued mean-zero Gaussian process starting from the origin and

E[W_s W_t] = (1/(1 − 2a)) s (t/s)^a.

In particular, we also obtain the asymptotic normality from [3, 12]

S_n/√n  −L→  N(0, 1/(1 − 2a)).

As it was done in [7], we also obtain the asymptotic normality for the center of mass of the RERW defined by

G_n = (1/n) ∑_{k=1}^n S_k.

Corollary 3.6. We have the asymptotic normality

G_n/√n  −L→  N( 0, (2 − c(1 + c + 3a + 3ac − 2a²)) / (3(2 + c − a)(1 − 2a − c)) ).    (3.6)

Remark 3.7. When c = 0, we find again the asymptotic normality established in [6, 7]

G_n/√n  −L→  N( 0, 2/(3(2 − a)(1 − 2a)) ).

[Figure 1: Asymptotic normality for the RERW in the diffusive regime.]

Hereafter, we investigate the critical regime where a = (1 − c)/2.
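As a numerical sanity check (ours, not the paper's), the limiting variance in (3.5) can be recovered by summing the entries of the matrix V of Lemma 6.1 below, i.e. uᵀVu with u = (1, 1)ᵀ, and it reduces to the ERW variance 1/(1 − 2a) when c = 0. The helper names below are hypothetical.

```python
from math import isclose

def diffusive_variance(a, c):
    """Limiting variance of S_n / sqrt(n) in the diffusive regime a < (1 - c)/2,
    assembled term by term as u^T V u with u = (1, 1)^T."""
    v11 = c**2 * (1 - a**2) / (a + c)**2
    v12 = a * c * (c + 1) * (1 + a) / (a + c)**2
    v22 = a**2 * (1 + 2*a*c + c**2) * (c + 1) / ((1 - 2*a - c) * (a + c)**2)
    return v11 + 2 * v12 + v22

def closed_form(a, c):
    """Closed form (2ac + c - 1)/(2a + c - 1) stated in (3.5)."""
    return (2*a*c + c - 1) / (2*a + c - 1)
```

For instance, diffusive_variance(0.2, 0.5) and closed_form(0.2, 0.5) agree.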
Theorem 3.8. We have the almost sure convergence

lim_{n→∞} S_n / (√n log n) = 0  a.s.    (3.7)

The almost sure rates of convergence for the RERW are as follows.

Theorem 3.9. We have the quadratic strong law

lim_{n→∞} (1/log log n) ∑_{k=2}^n S_k² / (k log k)² = (1 − c)²/(c + 1)  a.s.    (3.8)

In addition, we also have the law of iterated logarithm

lim sup_{n→∞} S_n² / (2n log n log log log n) = (1 − c)²/(c + 1)  a.s.    (3.9)

Once again, our next result concerns the functional convergence in distribution for the RERW.

Theorem 3.10. The following convergence in distribution in D([0, ∞[) holds

( S_{⌊n^t⌋}/√(n^t log n), t ≥ 0 ) ⟹ √((1 − c)²/(c + 1)) ( B_t, t ≥ 0 )    (3.10)

where (B_t, t ≥ 0) is a one-dimensional standard Brownian motion. In particular, we have

S_n/√(n log n)  −L→  N( 0, (1 − c)²/(c + 1) ).    (3.11)

Remark 3.11. When c = 0, we find again the results from [2] for the ERW

( S_{⌊n^t⌋}/√(n^t log n), t ≥ 0 ) ⟹ ( B_t, t ≥ 0 )

where (B_t, t ≥ 0) is a one-dimensional standard Brownian motion. In particular, we find once again the asymptotic normality from [2, 3, 12]

S_n/√(n log n)  −L→  N(0, 1).

[Figure 2: Asymptotic normality for the RERW in the critical regime.]

Finally, we focus our attention on the superdiffusive regime where a > (1 − c)/
2. The reader can notice that the following almost sure convergence of (S_n), properly normalized, is the only type of behavior for the RERW that still holds when c > 1.

Theorem 3.12. We have the following distributional convergence in D([0, ∞[)

( S_{⌊nt⌋}/n^{λ(c+a)}, t ≥ 0 ) ⟹ ( Λ_t, t ≥ 0 )    (3.12)

where the limiting process is Λ_t = t^{λ(c+a)} L_c, L_c being some non-degenerate random variable. In particular, we have

lim_{n→∞} S_n/n^{(a+c)λ} = L_c  a.s.    (3.13)

Moreover, we also have the mean square convergence

lim_{n→∞} E[ |S_n/n^{(a+c)λ} − L_c|² ] = 0.    (3.14)

Theorem 3.13. The expected value of L_c is

E[L_c] = a(2q − 1) Γ(1 − cλ) / ( (a + c) Γ(1 + aλ) )    (3.15)

while its second moment is given by

E[L_c²] = a²(1 + 2ac + c²) Γ(1 − cλ) / ( (a + c)² λ(2a + c − 1) Γ((2a + c)λ) ).    (3.16)

Remark 3.14. When c = 0, we find once again the moments of L established in [3],

E[L] = (2q − 1)/Γ(a + 1)  and  E[L²] = 1/((2a − 1)Γ(2a)).

In order to investigate the asymptotic behavior of (S_n), we introduce the two-dimensional martingale (M_n) defined by

M_n = (N_n, M_n)^T    (4.1)

where (M_n) and (N_n) are the two locally square-integrable martingales introduced in (2.9). As for the center of mass of the ERW [6], the main difficulty we face is that the predictable quadratic variations of (M_n) and (N_n) increase to infinity with two different speeds. A matrix normalization will again be necessary to establish the asymptotic behavior of the RERW. We will alternatively study (M_n), (M_n) or (N_n).

Let ε_{n+1} = Y_{n+1} − γ_n Y_n and ξ_{n+1} = (α_{n+1} − a) X_{β_{n+1}}. We have from equations (2.4), (2.7) and (2.9)

ΔM_{n+1} = M_{n+1} − M_n = ( S_{n+1} − S_n − (a/(a + c))(Y_{n+1} − Y_n), a_{n+1}Y_{n+1} − a_nY_n )^T
= ( α_{n+1}X_{β_{n+1}} − (a/(a + c))(α_{n+1} + c)X_{β_{n+1}}, a_{n+1}ε_{n+1} )^T
= a_{n+1}ε_{n+1} (0, 1)^T + (c/(a + c))ξ_{n+1} (1, 0)^T.    (4.2)

We also find from (2.4) that

E[ε_{n+1} | F_n] = E[Y_{n+1} | F_n] − γ_nY_n = 0  and  E[ε²_{n+1} | F_n] = 1 + 2ac + c² − (γ_n − 1)²Y_n².    (4.3)

In addition, we obtain once again from (2.4) that

E[ξ_{n+1} | F_n] = 0  and  E[ξ²_{n+1} | F_n] = 1 − a²    (4.4)

and finally

E[ε_{n+1}ξ_{n+1} | F_n] = E[ ( (α_{n+1} + c)X_{β_{n+1}} − (γ_n − 1)Y_n )(α_{n+1} − a)X_{β_{n+1}} | F_n ]
= E[ (α_{n+1} + c)(α_{n+1} − a) | F_n ] − (γ_n − 1)Y_n E[ (α_{n+1} − a)X_{β_{n+1}} | F_n ]
= 1 − a².    (4.5)
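Since ξ_{n+1} and ε_{n+1} are built from a single Rademacher R(p) variable α_{n+1}, and X²_{β_{n+1}} = 1, the conditional moments just computed reduce to expectations over α alone and can be checked by a two-point enumeration. The code below is our illustration, with hypothetical names.

```python
def rademacher_moment(p, f):
    """Expectation of f(alpha) for alpha ~ R(p): alpha = +1 w.p. p, -1 w.p. 1 - p."""
    return p * f(1) + (1 - p) * f(-1)

p, c = 0.65, 0.8
a = 2 * p - 1
m_xi2   = rademacher_moment(p, lambda x: (x - a) ** 2)       # E[(alpha - a)^2] = 1 - a^2
m_cross = rademacher_moment(p, lambda x: (x + c) * (x - a))  # E[(alpha + c)(alpha - a)] = 1 - a^2
m_eps2  = rademacher_moment(p, lambda x: (x + c) ** 2)       # E[(alpha + c)^2] = 1 + 2ac + c^2
```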
Hereafter, we deduce from (4.2), (4.3), (4.4) and (4.5) that

E[ (ΔM_{n+1})(ΔM_{n+1})^T | F_n ] = a²_{n+1}( 1 + 2ac + c² − (γ_n − 1)²Y_n² ) [[0, 0], [0, 1]] + a_{n+1}(c/(a + c))(1 − a²) [[0, 1], [1, 0]] + (c/(a + c))²(1 − a²) [[1, 0], [0, 0]].

We are now able to compute the quadratic variation of M_n, that is

⟨M⟩_n = ∑_{k=0}^{n−1} a²_{k+1}( 1 + 2ac + c² − (γ_k − 1)²Y_k² ) [[0, 0], [0, 1]] + ∑_{k=0}^{n−1} a_{k+1}(c/(a + c))(1 − a²) [[0, 1], [1, 0]] + n (c/(a + c))²(1 − a²) [[1, 0], [0, 0]].

Consequently,

⟨M⟩_n = v_n(1 + 2ac + c²) [[0, 0], [0, 1]] + w_n(c/(a + c))(1 − a²) [[0, 1], [1, 0]] + n (c/(a + c))²(1 − a²) [[1, 0], [0, 0]] − R_n [[0, 0], [0, 1]]    (4.6)

where

v_n = ∑_{k=1}^n a_k²,  w_n = ∑_{k=1}^n a_k  and  R_n = ∑_{k=0}^{n−1} a²_{k+1}(γ_k − 1)²Y_k².

Hereafter, we immediately deduce from (4.6) that

⟨M⟩_n = (1 + 2ac + c²) ∑_{k=1}^n a_k² − R_n    (4.7)

and that

⟨N⟩_n = (c/(a + c))²(1 − a²) n.    (4.8)

The asymptotic behavior of M_n is closely related to the one of (v_n), as one can observe that we always have ⟨M⟩_n ≤ (1 + 2ac + c²)v_n. Consequently to the definition of (a_n), we have three regimes of behavior for (M_n). In the diffusive regime where a < (1 − c)/2,

lim_{n→∞} v_n / n^{1−2(a+c)λ} = ℓ  where  ℓ = (1/(1 − 2(a + c)λ)) ( Γ(1 + aλ)/Γ(1 − cλ) )².    (4.9)

In the critical regime where a = (1 − c)/2,

lim_{n→∞} v_n / log n = ( Γ((c + 3)/(2(c + 1))) / Γ(1/(c + 1)) )².    (4.10)

In the superdiffusive regime where a > (1 − c)/2,

lim_{n→∞} v_n = ∑_{n=1}^∞ ( Γ(n − cλ) Γ(1 + aλ) / ( Γ(n + aλ) Γ(1 − cλ) ) )².    (4.11)

As it was done in [1, 2, 7], it is possible to use another approach based on Pólya-type urns and the results from [18].
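The closed Gamma form (2.7) of a_n and the diffusive growth rate of v_n in (4.9) can be checked numerically. This is our sketch, not part of the paper; tolerances are deliberately loose since (2.8) and (4.9) are only asymptotic statements.

```python
from math import lgamma, exp

def a_seq(n, a, c):
    """a_n computed as the product of the gamma_k^{-1}, see (2.7)."""
    lam = 1.0 / (c + 1)
    val = 1.0
    for k in range(1, n):
        val *= (k - c * lam) / (k + a * lam)
    return val

def a_gamma(n, a, c):
    """Closed Gamma-function form of a_n in (2.7), via log-gammas for stability."""
    lam = 1.0 / (c + 1)
    return exp(lgamma(n - c*lam) + lgamma(1 + a*lam)
               - lgamma(n + a*lam) - lgamma(1 - c*lam))

def v_sum(n, a, c):
    """v_n = sum of a_k^2, computed incrementally."""
    lam = 1.0 / (c + 1)
    ak, v = 1.0, 1.0
    for k in range(1, n):
        ak *= (k - c * lam) / (k + a * lam)
        v += ak * ak
    return v

a, c = 0.1, 0.2                      # diffusive: a < (1 - c)/2
lam = 1.0 / (c + 1)
limit = exp(lgamma(1 + a*lam) - lgamma(1 - c*lam))   # (2.8)
```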
Here, we have to consider an urn U_n = (G_n, B_n, R_n)^T for n ∈ N, with balls of three different types and with mean replacement matrix given by

A = [[c + p, 1 − p, 1 − p], [0, c, c], [1 − p, p, p]].    (5.1)

The coefficient a_{ij} of the matrix A represents the mean number of balls of type i which are added to the urn if a ball of type j is drawn, observed and then returned to the urn. Here, let us say we have three colors of balls: green, blue and red. The numbers of balls of each color at instant n ≥ 1 are G_n, B_n and R_n. In our configuration, the number of red balls corresponds to the number of steps towards the right direction, the number of blue balls corresponds to the additional weight of the right direction, and the number of green balls corresponds to the total weight of the left direction. For example, say a green ball is drawn: it is then returned to the urn together with c green balls (because a step to the left was remembered). Then, with probability p one other green ball is added, meaning a step to the left is performed, and with probability 1 − p one red ball is added, meaning a step to the right is performed. No blue balls are added because the instant remembered was a left one.

Hereafter, it follows from the dynamics of the urn that the number of steps to the right of the RERW until time n is distributed as R_n. Consequently, we have for the position S_n of the reinforced ERW at time n the equality in distribution

S_n =(L) 2R_n − n.    (5.2)

Hereafter, the eigenvalues associated with the mean replacement matrix A defined in (5.1) are λ_1 = c + 1, λ_2 = c + a and λ_3 = 0. The corresponding eigenvectors are

v_1^T = (1/(2(c + 1))) (c + 1, c, 1),  v_2^T = (1/(2(c + a))) (−(c + a), c, a),  v_3^T = (0, −1, 1).

Then, we denote by u_1, u_2 and u_3 the vectors of a corresponding dual basis, where

u_1^T = (1, 1, 1),  u_2^T = (−1, 1, 1),  u_3^T = (1/(2(c + 1)(c + a))) ( c(a − 1), −(2a + ac + c), c(2c + a + 1) ).

The study of the process (U_n) relies on the value of the ratio λ_2/λ_1. In particular, the case λ_2/λ_1 = 1/2 corresponds to a = (1 − c)/2, which is coherent with the previous trichotomy. This connection allows us to retrieve the results from Theorems 3.4 and 3.10 using Theorem 3.31 from [18]. We also find again the distributional convergence (3.12) from Theorem 3.12 using once again [18], Theorem 3.24.
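A minimal simulation of the urn described above (our sketch, with hypothetical names; the drawing probabilities are proportional to the ball counts, which need not be integers when c is not an integer). Since every draw returns the ball and adds exactly c reinforcement balls plus one step ball, the total number of balls grows deterministically, matching the fact that every column of A sums to c + 1.

```python
import random

def urn_step(G, B, R, p, c, rng):
    """One draw from the urn (G, B, R) = (green, blue, red), following the
    verbal dynamics above."""
    total = G + B + R
    if rng.random() * total < G:   # a left instant is remembered
        G += c                     # reinforcement of the left weight
        if rng.random() < p:
            G += 1                 # step to the left
        else:
            R += 1                 # step to the right
    else:                          # a right instant is remembered
        B += c                     # additional weight of the right direction
        if rng.random() < p:
            R += 1                 # step to the right
        else:
            G += 1                 # step to the left
    return G, B, R

def simulate(n, p, q, c, seed=0):
    rng = random.Random(seed)
    # first step: right with probability q, left with probability 1 - q
    G, B, R = (0, 0, 1) if rng.random() < q else (1, 0, 0)
    for _ in range(n - 1):
        G, B, R = urn_step(G, B, R, p, c, rng)
    return G, B, R
```

With R_n in hand, 2R_n − n gives a sample of the (distributional) position of the walk.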
Lemma 6.1. Let (V_n) be the sequence of positive definite diagonal matrices of order 2 given by

V_n = (1/√n) [[1, 0], [0, (a/(a + c)) a_n^{−1}]].    (6.1)

Then, the quadratic variation ⟨M⟩_n satisfies, in the diffusive regime where a < (1 − c)/2,

lim_{n→∞} V_n ⟨M⟩_n V_n^T = V  a.s.    (6.2)

where the matrix V is given by

V = (1/(a + c)²) [[c²(1 − a²), ac(c + 1)(1 + a)], [ac(c + 1)(1 + a), a²(1 + 2ac + c²)(c + 1)/(1 − 2a − c)]].    (6.3)

Remark 6.2. Following the same steps as in the proof of Lemma 6.1, we find that in the critical regime a = (1 − c)/2, the sequence of normalization matrices (V_n) has to be replaced by

W_n = (1/√(n log n)) [[1, 0], [0, (a/(a + c)) a_n^{−1}]].    (6.4)

The limit matrix V also needs to be replaced by

W = ((1 − c)²/(c + 1)) [[0, 0], [0, 1]].    (6.5)

Proof of Lemma 6.1. We immediately obtain from Theorem 3.1 and (2.8), (4.6), (4.9) that

lim_{n→∞} V_n ⟨M⟩_n V_n^T = (a/(a + c))²(1 + 2ac + c²)(c + 1)/(1 − 2a − c) [[0, 0], [0, 1]] + (ac/(a + c)²)(1 + a)(c + 1) [[0, 1], [1, 0]] + (c/(a + c))²(1 − a²) [[1, 0], [0, 0]]
= (1/(a + c)²) [[c²(1 − a²), ac(c + 1)(1 + a)], [ac(c + 1)(1 + a), a²(1 + 2ac + c²)(c + 1)/(1 − 2a − c)]]

which is exactly what we wanted to prove. □

Proof of Theorem 3.1. We shall make extensive use of the strong law of large numbers for martingales given, e.g., by Theorem 1.3.24 of [14]. First, we have for M_n that for any γ > 0,

M_n² = O( (log v_n)^{1+γ} v_n )  a.s.

which, by definition of M_n and as a_n is asymptotically equivalent to a constant times n^{−(a+c)λ} and v_n to a constant times n^{1−2(a+c)λ}, ensures that

Y_n²/n² = O( (log n)^{1+γ}/n )  a.s.

This implies that lim_{n→∞} Y_n/n = 0 a.s. We now turn to (N_n). By the same token as before, we have that for any γ > 0,

N_n² = O( (log n)^{1+γ} n )  a.s.

which by definition of N_n gives us

( S_n − (a/(a + c))Y_n )²/n² = O( (log n)^{1+γ}/n )  a.s.

and we conclude that

lim_{n→∞} ( S_n/n − (a/(a + c)) Y_n/n ) = 0  a.s.,

which, together with Y_n/n → 0 a.s., completes the proof of (3.1). □
Proof of Theorem 3.2. We need to check that all the hypotheses of Theorem A.2 in [6] are satisfied. Thanks to Lemma 6.1, hypothesis (H.1) holds almost surely. In order to verify that Lindeberg's condition (H.2) is satisfied, we have from (2.9) together with (4.1) and V_n given by (6.1) that for all 1 ≤ k ≤ n,

V_n ΔM_k = (1/((a + c)√n)) ( c ξ_k, a a_n^{−1} a_k ε_k )^T

which implies that

‖V_n ΔM_k‖² = (1/((a + c)² n)) ( c²ξ_k² + a² a_n^{−2} a_k² ε_k² )    (6.8)

and

‖V_n ΔM_k‖⁴ = (1/((a + c)⁴ n²)) ( c⁴ξ_k⁴ + 2a²c² a_n^{−2} a_k² ξ_k²ε_k² + a⁴ a_n^{−4} a_k⁴ ε_k⁴ ).    (6.9)

Consequently, we obtain that for all ε > 0,

∑_{k=1}^n E[ ‖V_nΔM_k‖² 1_{‖V_nΔM_k‖ > ε} | F_{k−1} ] ≤ (1/ε²) ∑_{k=1}^n E[ ‖V_nΔM_k‖⁴ | F_{k−1} ].    (6.10)

It follows from (2.8) that

a_n^{−2} ∑_{k=1}^n a_k² = O(n)  and  a_n^{−4} ∑_{k=1}^n a_k⁴ = O(n).

Hence, using that the sequences (ε_n) and (ξ_n) are uniformly bounded, with sup_{1≤k≤n} |ε_k| ≤ 2(c + 1), we obtain

∑_{k=1}^n E[ ‖V_nΔM_k‖⁴ | F_{k−1} ] = O(1/n)  a.s.    (6.11)

which ensures that Lindeberg's condition (H.2) holds almost surely, that is for all ε > 0,

lim_{n→∞} ∑_{k=1}^n E[ ‖V_nΔM_k‖² 1_{‖V_nΔM_k‖ > ε} | F_{k−1} ] = 0  a.s.    (6.12)

It only remains to show that (H.3) is satisfied in the special case β = 2, that is

∑_{n=1}^∞ (1/(log det V_n^{−1})²) E[ ‖V_nΔM_n‖⁴ | F_{n−1} ] < ∞  a.s.

We immediately have from (6.1) that

det V_n^{−1} = ((a + c)/a) n a_n.    (6.13)

Hence, we obtain from (2.8) and (6.13) that

lim_{n→∞} log(det V_n^{−1}) / log n = 1 − (a + c)λ.    (6.14)

Therefore, we can replace log(det V_n^{−1}) by log n in (H.3). Hereafter, we obtain from (6.9) and (6.11) that

∑_{n=1}^∞ (1/(log n)²) E[ ‖V_nΔM_n‖⁴ | F_{n−1} ] = O( ∑_{n=2}^∞ 1/(n log n)² ) < ∞  a.s.    (6.15)

Thus, (6.15) guarantees that (H.3) is verified. We are now going to apply the quadratic strong law given by Theorem A.2 in [6]. We get that

lim_{n→∞} (1/log n) ∑_{k=1}^n ( ((det V_k)² − (det V_{k+1})²)/(det V_k)² ) V_k M_k M_k^T V_k^T = 2(1 − (a + c)λ) V  a.s.    (6.16)

However, we obtain from (2.8) and (6.13) that

lim_{n→∞} n ( ((det V_n)² − (det V_{n+1})²)/(det V_n)² ) = 2(1 − (a + c)λ).    (6.17)

Finally, let u = (1, 1)^T. We have

u^T V_n M_n = S_n/√n    (6.18)

and we deduce from (6.16), (6.17) and (6.18) that

lim_{n→∞} (1/log n) ∑_{k=1}^n S_k²/k² = u^T V u  a.s.    (6.19)

which, together with u^T V u = (2ac + c − 1)/(2a + c − 1), completes the proof of Theorem 3.2. □

Proof of Theorem 3.8. Again, we shall make use of the strong law of large numbers for martingales given, e.g., by Theorem 1.3.24 of [14]. First, we have for M_n that for any γ > 0,

M_n² = O( (log v_n)^{1+γ} v_n )  a.s.

which, by definition of M_n and as a_n is asymptotically equivalent to a constant times n^{−1/2} and v_n to a constant times log n, ensures that

Y_n²/(√n log n)² = O( (log log n)^{1+γ}/log n )  a.s.

and finally that lim_{n→∞} Y_n/(√n log n) = 0 a.s. Next, for any γ > 0,

N_n² = O( (log n)^{1+γ} n )  a.s.

which by definition of N_n gives us

( S_n − (a/(a + c))Y_n )²/(√n log n)² = O( (log n)^{γ−1} )  a.s.

Taking e.g. γ = 1/2, we can conclude that

lim_{n→∞} ( S_n/(√n log n) − (a/(a + c)) Y_n/(√n log n) ) = 0  a.s. □
Proof of Theorem 3.9. The proof of the quadratic strong law (3.8) is left to the reader as it follows essentially the same lines as that of (3.2). The only minor change is that the matrix V_n has to be replaced by the matrix W_n defined in (6.4). We shall now proceed to the proof of the law of iterated logarithm given by (3.9). On the one hand, it follows from (2.8) and (4.10) that

∑_{n=1}^{+∞} a_n⁴ / v_n² < ∞.    (6.23)

Moreover, we have from (4.7) and (4.8) that

lim_{n→∞} ⟨M⟩_n / v_n = 1 + 2ac + c²  a.s.  and  lim_{n→∞} ⟨N⟩_n / n = (c/(a + c))²(1 − a²)  a.s.

Consequently, we deduce from the law of iterated logarithm for martingales due to Stout [24], see also Corollary 6.4.25 in [14], that (M_n) satisfies, when a = (1 − c)/2,

lim sup_{n→∞} M_n/(2 v_n log log v_n)^{1/2} = − lim inf_{n→∞} M_n/(2 v_n log log v_n)^{1/2} = √(1 + c)  a.s.

However, as a_n v_n^{−1/2} is asymptotically equivalent to (n log n)^{−1/2}, we immediately obtain from (4.10) that

lim sup_{n→∞} Y_n/(2 n log n log log log n)^{1/2} = − lim inf_{n→∞} Y_n/(2 n log n log log log n)^{1/2} = √(1 + c)  a.s.    (6.24)

The law of iterated logarithm for martingales also allows us to find that (N_n) satisfies

lim sup_{n→∞} N_n/(2 n log log n)^{1/2} = − lim inf_{n→∞} N_n/(2 n log log n)^{1/2} = (c/(a + c)) √(1 − a²)  a.s.

which ensures that

lim sup_{n→∞} N_n/(2 n log n log log log n)^{1/2} = 0  a.s.

Hence, as S_n = N_n + (a/(a + c)) a_n^{−1} M_n and a/(a + c) = (1 − c)/(1 + c) in the critical regime, we obtain

lim sup_{n→∞} S_n/(2 n log n log log log n)^{1/2} = lim sup_{n→∞} ((1 − c)/(1 + c)) Y_n/(2 n log n log log log n)^{1/2} = − lim inf_{n→∞} S_n/(2 n log n log log log n)^{1/2}.

Hence, we obtain that

lim sup_{n→∞} S_n²/(2 n log n log log log n) = ((1 − c)/(1 + c))² (1 + c) = (1 − c)²/(c + 1)

which immediately leads to (3.9), thus completing the proof of Theorem 3.9. □

Hereafter, we shall again make extensive use of the strong law of large numbers for martingales given, e.g., by Theorem 1.3.24 of [14] in order to prove (3.13). When a > (1 − c)/
2, we have from (4.11) that v_n converges. Hence, as ⟨M⟩_n ≤ (1 + 2ac + c²)v_n, we clearly have that ⟨M⟩_∞ < ∞ almost surely and we can conclude that

lim_{n→∞} M_n = M  a.s.

which, by definition of M_n and as a_n is asymptotically equivalent to (Γ(1 + aλ)/Γ(1 − cλ)) n^{−(a+c)λ}, ensures that

lim_{n→∞} Y_n/n^{(a+c)λ} = Y  a.s.  where  Y = (Γ(1 − cλ)/Γ(1 + aλ)) M.    (6.25)

Moreover, we still have that for any γ > 0,

N_n² = O( (log n)^{1+γ} n )  a.s.

which by definition of N_n gives us for all t ≥ 0,

( S_{⌊nt⌋} − (a/(a + c))Y_{⌊nt⌋} )² / ⌊nt⌋^{2(a+c)λ} = O( (log n)^{1+γ} n^{1−2(a+c)λ} )  a.s.

As a > (1 − c)/2, we have 2(a + c)λ > 1 and we obtain that for all t ≥ 0,

lim_{n→∞} ( S_{⌊nt⌋}/⌊nt⌋^{(a+c)λ} − (a/(a + c)) Y_{⌊nt⌋}/⌊nt⌋^{(a+c)λ} ) = 0  a.s.    (6.26)

Moreover, ⌊nt⌋ is asymptotically equivalent to nt, which implies

lim_{n→∞} S_{⌊nt⌋}/n^{(a+c)λ} = t^{(a+c)λ} L_c  a.s.    (6.27)

Finally, the fact that (6.27) holds almost surely ensures that it also holds for the finite-dimensional distributions, and we obtain (3.12) with Λ_t = t^{(a+c)λ} L_c and L_c = (a/(a + c)) Y.

We shall now proceed to the proof of the mean square convergence (3.14). On the one hand, as (M_n) is a locally square-integrable martingale,

E[M_n²] = E[⟨M⟩_n] ≤ (1 + 2ac + c²) v_n.

Hence, we obtain from (4.11) that sup_{n≥1} E[M_n²] < ∞, which ensures that the martingale (M_n) is bounded in L². Therefore, we have the mean square convergence

lim_{n→∞} E[ |M_n − M|² ] = 0,  that is  lim_{n→∞} E[ |Y_n/n^{(a+c)λ} − Y|² ] = 0.    (6.28)

On the other hand, for any n ≥ 1, the martingale (N_n) satisfies

E[N_n²] = E[⟨N⟩_n] ≤ (1 − a²)(c/(a + c))² n

and since 2(a + c)λ > 1 we obtain

lim_{n→∞} E[ |N_n/n^{(a+c)λ}|² ] = 0.    (6.29)

Finally, we obtain the mean square convergence (3.14) from (6.28) and (6.29), and we achieve the proof of Theorem 3.12. □

Proof of Theorem 3.13. We start by the calculation of the expectation (3.15). We immediately have from (2.5) that

E[Y_{n+1}] = γ_n E[Y_n] = ((n + aλ)/(n − cλ)) E[Y_n]

which leads to

E[Y_n] = ∏_{k=1}^{n−1} ((k + aλ)/(k − cλ)) E[Y_1] = a_n^{−1} E[X_1] = (2q − 1) a_n^{−1}.    (6.30)

Hence, we immediately get equation (3.15) from (6.30), that is

E[L_c] = (a/(a + c)) (Γ(1 − cλ)/Γ(1 + aλ)) E[M] = (a/(a + c)) (Γ(1 − cλ)/Γ(1 + aλ)) lim_{n→∞} E[M_n] = a(2q − 1)Γ(1 − cλ)/((a + c)Γ(1 + aλ)).

Hereafter, we obtain from (4.3) by taking expectation on both sides that

E[Y²_{n+1}] = 1 + 2ac + c² + (2γ_n − 1) E[Y_n²] = 1 + 2ac + c² + ((n + (2a + c)λ)/(n − cλ)) E[Y_n²]

and thanks to well-known recursive relation solutions and Lemma B.1 in [3], we get

E[Y_n²] = ((1 + 2ac + c²)/(λ(2a + c − 1))) (Γ(n + (2a + c)λ)/Γ(n − cλ)) ( Γ(1 − cλ)/Γ((2a + c)λ) − Γ(n + 1 − cλ)/Γ(n + (2a + c)λ) ).

Hence, we obtain from (2.8), (2.9) and (6.28) that

E[Y²] = lim_{n→∞} E[Y_n²]/n^{2(a+c)λ} = (1 + 2ac + c²)Γ(1 − cλ)/(λ(2a + c − 1)Γ((2a + c)λ)).    (6.31)

Together with L_c = (a/(a + c))Y, this leads to (3.16). □
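The computation of E[Y_n] above can be checked numerically: iterating E[Y_{n+1}] = γ_n E[Y_n] must reproduce the closed form (2q − 1) a_n⁻¹ of (6.30), and after normalization by n^{(a+c)λ} it approaches E[L_c] from (3.15). This is our sketch with hypothetical names.

```python
from math import lgamma, exp

def mean_Y(n, a, c, q):
    """E[Y_n] obtained by iterating E[Y_{n+1}] = gamma_n E[Y_n], E[Y_1] = 2q - 1."""
    lam = 1.0 / (c + 1)
    m = 2 * q - 1
    for k in range(1, n):
        m *= (k + a * lam) / (k - c * lam)
    return m

def mean_Y_closed(n, a, c, q):
    """Closed form (2q - 1) / a_n from (6.30), with a_n as in (2.7)."""
    lam = 1.0 / (c + 1)
    inv_a_n = exp(lgamma(n + a*lam) + lgamma(1 - c*lam)
                  - lgamma(n - c*lam) - lgamma(1 + a*lam))
    return (2 * q - 1) * inv_a_n

def mean_Lc(a, c, q):
    """Expected value of L_c as stated in (3.15)."""
    lam = 1.0 / (c + 1)
    return a * (2*q - 1) / (a + c) * exp(lgamma(1 - c*lam) - lgamma(1 + a*lam))

a, c, q = 0.6, 0.2, 0.7   # superdiffusive: a > (1 - c)/2
```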
Proof of Theorem 3.4. In order to apply Theorem A.1 in the Appendix, we must verify that (H.1), (H.2) and (H.3) are satisfied.

(H.1) We have from (6.2) and the fact that a_{⌊nt⌋} is asymptotically equivalent to t^{−(a+c)λ} a_n that

V_n ⟨M⟩_{⌊nt⌋} V_n^T → V_t  a.s.  as n → ∞,

where

V_t = (1/(a + c)²) [[c²(1 − a²) t, ac(c + 1)(1 + a) t^{1−(a+c)λ}], [ac(c + 1)(1 + a) t^{1−(a+c)λ}, a²(1 + 2ac + c²)(c + 1) t^{1−2(a+c)λ}/(1 − 2a − c)]].

(H.2) We also get that Lindeberg's condition is satisfied: we already know from (6.12) that for all ε > 0,

lim_{n→∞} ∑_{k=1}^n E[ ‖V_nΔM_k‖² 1_{‖V_nΔM_k‖ > ε} | F_{k−1} ] = 0  a.s.

and, since V_n V_{⌊nt⌋}^{−1} converges, we also have

lim_{n→∞} ∑_{k=1}^{⌊nt⌋} E[ ‖V_nΔM_k‖² 1_{‖V_nΔM_k‖ > ε} | F_{k−1} ] ≤ lim_{n→∞} (1/ε²) ∑_{k=1}^{⌊nt⌋} E[ ‖(V_nV_{⌊nt⌋}^{−1}) V_{⌊nt⌋}ΔM_k‖⁴ ] = 0.

(H.3) In this particular case, we have

V_t = t K_1 + t^{α_1} K_2 + t^{α_2} K_3

where α_1 = 1 − (a + c)λ > 0 and α_2 = 1 − 2(a + c)λ > 0 since a < (1 − c)/2, and the matrices are symmetric,

K_1 = (c²(1 − a²)/(a + c)²) [[1, 0], [0, 0]],  K_2 = (ac(c + 1)(a + 1)/(a + c)²) [[0, 1], [1, 0]],  K_3 = (a²(1 + 2ac + c²)(c + 1)/((1 − 2a − c)(a + c)²)) [[0, 0], [0, 1]].

Consequently, we obtain that

( V_n M_{⌊nt⌋}, t ≥ 0 ) ⟹ ( B_t, t ≥ 0 )

where B is defined as in (A.1). Finally, we conclude using that S_{⌊nt⌋} is asymptotically equivalent to N_{⌊nt⌋} + t^{(a+c)λ}(a/(a + c)) a_n^{−1} M_{⌊nt⌋}, so that, multiplying by u_t = (1, t^{(a+c)λ})^T, we obtain

( (1/√n) S_{⌊nt⌋}, t ≥ 0 ) ⟹ ( W_t, t ≥ 0 )    (7.1)

where W_t = u_t^T B_t. It only remains to compute the covariance function of W, that is, for 0 ≤ s ≤ t,

E[W_s W_t] = u_s^T E[B_s B_t^T] u_t = u_s^T V_s u_t
= ( c²(1 − a²)/(a + c)² + ac(c + 1)(a + 1)/(a + c)² ) s + ( ac(c + 1)(a + 1)/(a + c)² + a²(1 + 2ac + c²)(c + 1)/((1 − 2a − c)(a + c)²) ) s (t/s)^{(a+c)λ}
= ( c(a + 1)/(a + c) ) s + ( a(1 − c²)/((a + c)(1 − 2a − c)) ) s (t/s)^{(a+c)λ}

which is exactly (3.4). □

Proof of Corollary 3.6. As for Corollary 4.1 from [7], we observe that

G_n/√n = ∫₀¹ ( S_{⌊nt⌋}/√n ) dt.

Consequently, G_n/√n is a continuous functional of S_{⌊nt⌋}/√n in D([0, 1]). Hence, the functional convergence from Theorem 3.4 gives us that

G_n/√n = ∫₀¹ ( S_{⌊nt⌋}/√n ) dt  −L→  ∫₀¹ W_t dt.

The process (W_t, t ≥ 0) is a continuous real-valued centered Gaussian process starting from the origin, which implies that ∫₀¹ W_t dt is also a centered Gaussian random variable. Its variance is given by

E[ ( ∫₀¹ W_t dt )² ] = 2 ∫₀¹ ∫₀^t E[W_s W_t] ds dt
= ( 2a(1 − c²)/((a + c)(1 − 2a − c)) ) ∫₀¹ ∫₀^t s (t/s)^{λ(a+c)} ds dt + ( 2c(a + 1)/(a + c) ) ∫₀¹ ∫₀^t s ds dt
= 2a(1 − c²)(c + 1)/(3(2 + c − a)(a + c)(1 − 2a − c)) + c(a + 1)/(3(a + c))
= ( 2 − c(1 + c + 3a + 3ac − 2a²) )/( 3(2 + c − a)(1 − 2a − c) ). □

Proof of Theorem 3.10. First, we have from (4.8) that for all t ≥ 0,

⟨N⟩_{⌊n^t⌋}/(n^t log n) → 0  as n → ∞,

which implies from Theorem 1.3.24 of [14] that N_{⌊n^t⌋}/√(n^t log n) → 0 a.s. Hereafter, in order to study (M_n), we must once again verify that (H.1), (H.2) and (H.3) are satisfied.

(H.1) Let w_n = v_n^{−1/2}. We have from (4.7), Remark 6.2 and the fact that a_{⌊n^t⌋} is asymptotically equivalent to a constant times n^{−t/2} that

w_n² ⟨M⟩_{⌊n^t⌋} → (c + 1) t  a.s.  as n → ∞.
(H.2) We also get that Lindeberg's condition is satisfied, as v_n increases like log n and we have, for all ε > 0,

(1/v_n) ∑_{k=1}^{⌊n^t⌋} E[ ΔM_k² 1_{|ΔM_k| > ε√v_n} | F_{k−1} ] ≤ (1/(ε² v_n²)) ∑_{k=1}^{⌊n^t⌋} E[ΔM_k⁴] ≤ ( v_{⌊n^t⌋}/v_n )² (1/(ε² v²_{⌊n^t⌋})) ∑_{k=1}^{⌊n^t⌋} E[ΔM_k⁴].

Moreover, we have from the very definition of M_n that

∑_{k=1}^n E[ΔM_k⁴] = O( ∑_{k=1}^n a_k⁴ )

and, as a_n is asymptotically equivalent to a constant times n^{−1/2}, the series ∑ a_k⁴ converges. We can therefore conclude that, for all ε > 0,

lim_{n→∞} (1/v_n) ∑_{k=1}^{⌊n^t⌋} E[ ΔM_k² 1_{|ΔM_k| > ε√v_n} | F_{k−1} ] = 0  a.s.

(H.3) In this particular case, the limit in (H.1) is simply V_t = (c + 1) t. Hence, we obtain that

( w_n M_{⌊n^t⌋}, t ≥ 0 ) ⟹ ( W_t, t ≥ 0 )

where W is defined as in Theorem A.1, that is a centered Brownian motion with variance c + 1. Moreover, when a = (1 − c)/2, a_{⌊n^t⌋}^{−1} √v_n is asymptotically equivalent to √(n^t log n), so that

( S_{⌊n^t⌋}/√(n^t log n) − N_{⌊n^t⌋}/√(n^t log n), t ≥ 0 ) ⟹ ((1 − c)/(1 + c)) ( W_t, t ≥ 0 ).

Consequently, using that W is a centered Brownian motion with variance c + 1, we can conclude that

( S_{⌊n^t⌋}/√(n^t log n), t ≥ 0 ) ⟹ √((1 − c)²/(c + 1)) ( B_t, t ≥ 0 )

and this achieves the proof of Theorem 3.10. □

Appendix. A non-standard result on martingales
The proofs of our main results rely on a non-standard functional central limit theorem and a quadratic strong law for multi-dimensional martingales, as in the study of the center of mass of the elephant random walk [6]. A simplified version of Theorem 1, part 2) of Touati [25] is as follows.
Theorem A.1. Let $(M_n)$ be a locally square-integrable martingale of $\mathbb{R}^\delta$ adapted to a filtration $(\mathcal{F}_n)$, with predictable quadratic variation $\langle M\rangle_n$. Let $(V_n)$ be a sequence of non-random square matrices of order $\delta$ such that $\|V_n\|$ decreases to $0$ as $n$ goes to infinity. Moreover, let $\tau : \mathbb{R}_+ \to \mathbb{R}_+$ be a non-decreasing function going to infinity at infinity. Assume that there exists a deterministic, symmetric and positive semi-definite matrix $V_t$ such that, for all $t \geq 0$,
$$(\mathrm{H.1}) \qquad V_n \langle M\rangle_{\tau(nt)} V_n^T \xrightarrow[n\to\infty]{\mathbb{P}} V_t.$$
Moreover, assume that Lindeberg's condition is satisfied, that is, for all $t \geq 0$ and $\varepsilon > 0$,
$$(\mathrm{H.2}) \qquad \sum_{k=1}^{\tau(nt)} \mathbb{E}\bigl[\|V_n \Delta M_k\|^2\,\mathbb{1}_{\{\|V_n \Delta M_k\| > \varepsilon\}} \,\big|\, \mathcal{F}_{k-1}\bigr] \xrightarrow[n\to\infty]{\mathbb{P}} 0$$
where $\Delta M_n = M_n - M_{n-1}$. Finally, assume that
$$(\mathrm{H.3}) \qquad V_t = \sum_{j=1}^{q} t^{\alpha_j} K_j$$
where $\alpha_j > 0$ and $K_j$ is a symmetric matrix, for some $q \in \mathbb{N}^*$. Then, we have the distributional convergence in the Skorokhod space $D([0,\infty[)$ of right-continuous functions with left-hand limits,
$$\bigl(V_n M_{\tau(nt)},\, t \geq 0\bigr) \Longrightarrow \bigl(W_t,\, t \geq 0\bigr) \qquad (\mathrm{A.1})$$
where $W = (W_t,\, t \geq 0)$ is a continuous $\mathbb{R}^\delta$-valued centered Gaussian process starting at $0$ with covariance, for $0 \leq s \leq t$,
$$\mathbb{E}\bigl[W_s W_t^T\bigr] = V_s. \qquad (\mathrm{A.2})$$

References

[1] Baur, E. On a class of random walks with reinforced memory. J. Stat. Phys. (2020).
[2] Baur, E., and Bertoin, J. Elephant random walks and their connection to Pólya-type urns. Physical Review E 94, 052134 (2016).
[3] Bercu, B. A martingale approach for the elephant random walk. J. Phys. A 51, 1 (2018), 015201, 16.
[4] Bercu, B., Chabanol, M.-L., and Ruch, J.-J. Hypergeometric identities arising from the elephant random walk. J. Math. Anal. Appl. 480, 1 (2019), 123360, 12.
[5] Bercu, B., and Laulin, L. On the multi-dimensional elephant random walk. J. Stat. Phys. 175, 6 (2019), 1146–1163.
[6] Bercu, B., and Laulin, L. On the center of mass of the elephant random walk. Stochastic Process. Appl. 133 (2021), 111–128.
[7] Bertenghi, M. Functional limit theorems for the multi-dimensional elephant random walk. arXiv:2004.02004 (2020).
[8] Bertoin, J. Scaling exponents of step-reinforced random walks. hal-02480479 (2020).
[9] Businger, S. The shark random swim (Lévy flight with memory). J. Stat. Phys. 172, 3 (2018), 701–717.
[10] Chaabane, F., and Maaouia, F. Théorèmes limites avec poids pour les martingales vectorielles. ESAIM Probab. Statist. 4 (2000), 137–189.
[11] Coletti, C., and Papageorgiou, I. Asymptotic analysis of the elephant random walk. arXiv:1910.03142 (2019).
[12] Coletti, C. F., Gava, R., and Schütz, G. M. Central limit theorem and related results for the elephant random walk. J. Math. Phys. 58, 5 (2017), 053303, 8.
[13] Coletti, C. F., Gava, R., and Schütz, G. M. A strong invariance principle for the elephant random walk. J. Stat. Mech. Theory Exp., 12 (2017), 123207, 8.
[14] Duflo, M. Random iterative models, vol. 34 of Applications of Mathematics (New York). Springer-Verlag, Berlin, 1997.
[15] Fan, X., Hu, H., and Xiaohui, M. Some limit theorems for the elephant random walk. arXiv:2003.12937 (2020).
[16] González-Navarrete, M. Multidimensional walks with random tendency. arXiv:2004.04033 (2020).
[17] Hall, P., and Heyde, C. C. Martingale limit theory and its application. Academic Press, Inc., New York-London, 1980. Probability and Mathematical Statistics.
[18] Janson, S. Functional limit theorems for multitype branching processes and generalized Pólya urns. Stochastic Process. Appl. 110, 2 (2004), 177–245.
[19] Kozma, G. Reinforced random walk. In European Congress of Mathematics. Eur. Math. Soc., Zürich, 2013, pp. 429–443.
[20] Kubota, N., and Takei, M. Gaussian fluctuation for superdiffusive elephant random walks. J. Stat. Phys. 177, 6 (2019), 1157–1171.
[21] Miyazaki, T., and Takei, M. Limit theorems for the 'laziest' minimal random walk model of elephant type. J. Stat. Phys. 181, 2 (2020), 587–602.
[22] Pemantle, R. A survey of random processes with reinforcement. Probab. Surveys 4 (2007), 1–79.
[23] Schütz, G. M., and Trimper, S. Elephants can always remember: Exact long-range memory effects in a non-Markovian random walk. Physical Review E 70, 045101 (2004).
[24] Stout, W. F. Maximal inequalities and the law of the iterated logarithm. Ann. Probability 1 (1973), 322–328.
[25] Touati, A. Sur la convergence en loi fonctionnelle de suites de semimartingales vers un mélange de mouvements browniens. Teor. Veroyatnost. i Primenen. 36, 4 (1991), 744–763.
[26] Vázquez Guevara, V. H. On the almost sure central limit theorem for the elephant random walk. J. Phys. A 52, 1 (2019), 475201.

Université de Bordeaux, Institut de Mathématiques de Bordeaux, UMR 5251, 351 Cours de la Libération, 33405 Talence cedex, France.
E-mail address: [email protected]