Long Memory in a Linear Stochastic Volterra Differential Equation
JOHN A. D. APPLEBY AND KATJA KROL
Abstract.
In this paper we consider a linear stochastic Volterra equation which has a stationary solution. We show that when the kernel of the fundamental solution is regularly varying at infinity with a log-convex tail integral, then the autocovariance function of the stationary solution is also regularly varying at infinity and its exact pointwise rate of decay can be determined. Moreover, it can be shown that this stationary process either has long memory, in the sense that the autocovariance function is not integrable over the reals, or is subexponential. Under certain conditions on the kernel, even arbitrarily slow decay rates of the autocovariance function can be achieved. Analogous results are obtained for the corresponding discrete equation.

1. Introduction
In recent years, much attention in quantitative finance has centred on the question of whether financial markets are efficient, and whether there is a significant impact of past events on the current state of the system; see e.g. Cont [13]. A mathematical way in which this phenomenon can be captured is through the theory of long range dependence, or long memory. For continuous-time processes, this is measured by the autocovariance function of a stationary process being non-integrable and polynomially decaying, so that it decays more slowly than exponentially. Processes with long memory also arise in other areas of science, such as data network traffic or hydrology; see e.g. Doukhan et al. [14].

In this paper, we describe a class of processes, both in discrete and continuous time, which exhibit long range dependence through non-exponential convergence of their autocovariance functions. In the continuous case, these are solutions of scalar affine stochastic Volterra equations of the form

(1.1)  dX(t) = \Big( aX(t) + \int_0^t k(t-s) X(s)\,ds \Big)\,dt + \sigma\,dB(t), \quad t \ge 0,

where B is a standard Brownian motion and k is an integrable function.

Date: October 26, 2018.
1991 Mathematics Subject Classification. Primary: 34K20, 34K25, 39A11, 45D05, 60G10; Secondary: 60G15, 60H10, 60H20, 60K05.
Key words and phrases. Volterra integrodifferential equations, Volterra difference equations, Itô–Volterra integrodifferential equations, differential resolvent, asymptotic stability, infinite moment, stationary solutions, long memory, long-range dependency, renewal sequences with infinite mean, regular variation, subexponential.
JA is partly supported by Science Foundation Ireland under the Mathematics Initiative 2007 grant 07/MI/008 "Edgeworth Centre for Financial Mathematics". KK is supported by the Deutsche Telekom Stiftung.

Applications of such stochastic Volterra equations arise in physics and mathematical finance. In physics, for example, the behaviour of viscoelastic materials under external stochastic loads has been analysed using Itô–Volterra equations (cf. e.g. Drozdov and Kolmanovskiĭ [15]). In financial mathematics, the presence of inefficiency in real markets can be modelled by using stochastic functional differential equations. Anh et al. [1, 2] have posited models for the evolution of asset returns using stochastic Volterra equations with infinite memory.

For affine stochastic functional differential equations with bounded delay, it has been shown that stationary solutions always have an exponentially fading autocovariance function; see e.g. Gushchin and Küchler [21], Riedle [33]. This is a consequence of the fact that, if an autonomous linear differential equation with finite delay is stable, then its resolvent converges to zero at an exponentially fast rate; see Hale and Lunel [22].

In order to obtain polynomial convergence results for linear autonomous Volterra equations, it is necessary to consider kernels k which decay non-exponentially, both for deterministic and stochastic equations. While a substantial literature exists in the deterministic case (see e.g. [35, 19, 25, 7, 5, 6]), only a few results on non-exponential convergence phenomena for linear stochastic autonomous Volterra equations exist, and those that do concern the asymptotic stability of point equilibria. Examples of such papers include Appleby [3, 4] for pointwise convergence rates, Appleby and Riedle [9] for convergence rates in weighted L^p-spaces, and Mao and Riedle [29] for mean square convergence rates.
In particular, polynomial convergence rates for the autocovariance function of (1.1) have not been recorded. In this paper, we examine the asymptotic behaviour of the autocovariance function of asymptotically stationary solutions of (1.1). To do this, our first class of results concerns the exact rate of convergence to zero of the solution of the differential resolvent associated with (1.1), namely

(1.2)  r'(t) = a\,r(t) + \int_0^t k(t-s)\,r(s)\,ds, \quad t \ge 0, \qquad r(0) = 1.

We consider first equations for which the kernel k is positive and integrable with infinite first moment. In this case it is only known to date that the resolvent r converges to zero and is not integrable. In this paper we first show that if the kernel k additionally satisfies a + \int_0^\infty k(s)\,ds = 0, and the tail integral \lambda(t) := \int_t^\infty k(s)\,ds is a log-convex regularly varying function of index -\alpha with \alpha \in (0,1), then the solution r decays at a hyperbolic rate, according to

(1.3)  \lim_{t\to\infty} r(t)\, t^{1-\alpha}\, L(t) = \frac{\sin(\alpha\pi)}{\pi},

where L is a slowly varying function related to k. Corresponding asymptotic results are established in discrete time. The discrete analogue of equation (1.2) with positive summable kernel of infinite moment corresponds to the renewal sequence of a null-recurrent Markov chain [20], and under similar additional assumptions on the kernel, the hyperbolic decay of the sequence relies upon well-known results of Garsia and Lamperti [18] and Isaac [23].

Our second class of results in this paper employs the convergence rate of the resolvent r to investigate the long memory properties of the solution of the Itô–Volterra differential equation (1.1) and its discrete analogue. It turns out that, under the same conditions on the kernel k, the equation (1.1) possesses an asymptotically stationary solution for 0 < \alpha < 1/2.
There also exists a limiting equation which is stationary, and its autocovariance function c obeys

(1.4)  \lim_{t\to\infty} c(t)\, t^{1-2\alpha}\, L^2(t) = \sigma^2\, \frac{\Gamma(1-2\alpha)\,\Gamma(\alpha)}{\Gamma(1-\alpha)} \cdot \frac{\sin^2(\pi\alpha)}{\pi^2}.
Moreover, because c is non-integrable, the process has long memory. Again, corresponding results hold in discrete time. If \alpha > 1/2, no stationary solutions exist, and the case \alpha = 1/2 is critical. Finally, we consider kernels for which either a + \int_0^\infty k(s)\,ds < 0 or a + \int_0^\infty k(s)\,ds > 0. While in the latter case no stationary solution exists, we show in the first case that, under weaker assumptions on the kernel k, the autocovariance function of the stationary solution is integrable. Nevertheless, its decay is very slow: the rate of convergence to zero is the same as the decay rate of k, that is, hyperbolic.

Although we have mentioned discrete results only briefly in this introduction, there are many reasons to formulate the models (1.2) and (1.1) in discrete time. When modelling dynamic real-world phenomena, it is desirable that properties formulated in discrete or continuous time should be consistent. In this paper, our results demonstrate that long or subexponential memory is a general property of the Volterra model and does not depend on the continuity assumption. Secondly, by applying, for example, a constant step-size Euler–Maruyama scheme to the continuous equation (1.1), we obtain consistent estimates of the decay rate of the autocovariance function. These decay estimates stabilise appropriately to those obtained in the continuous case in the limit as the step size tends to zero.

2. Discrete and continuous stochastic Volterra equations
2.1. Mathematical preliminaries.
We denote the space of real-valued continuous functions on [0,\infty) by C([0,\infty);\mathbb{R}). Let L^p([0,\infty);\mathbb{R}) (resp. \ell^p), p \ge 1, denote the space of real-valued measurable functions f (resp. sequences (f_n)_{n\in\mathbb{N}}) satisfying

\int_0^\infty |f(t)|^p\,dt < \infty \qquad \Big( \sum_{n=0}^\infty |f_n|^p < \infty \Big).

We write f \sim g as x \to x_0, x_0 \in \mathbb{R} \cup \{\pm\infty\}, if \lim_{x\to x_0} f(x)/g(x) = 1. A function L : [0,\infty) \to (0,\infty) is slowly varying at infinity if for all x > 0,

(2.1)  \lim_{t\to\infty} \frac{L(xt)}{L(t)} = 1.

A function f varies regularly with index \alpha \in \mathbb{R}, written f \in RV_\infty(\alpha), if it is of the form

(2.2)  f(t) = t^\alpha L(t)

with L slowly varying; see e.g. Feller [17], Ch. VIII.8.

The definition of a regularly varying sequence is a counterpart of the continuous definition [12]: a sequence of positive numbers (c_n)_{n\in\mathbb{N}} is said to be regularly varying of index \rho \in \mathbb{R} (c is slowly varying if \rho = 0) if

\lim_{n\to\infty} \frac{c_{[\lambda n]}}{c_n} = \lambda^\rho \quad \text{for every } \lambda > 0,

where [x] denotes the integer part of x \in \mathbb{R}_+. A regularly varying sequence is embeddable as the integer values of a regularly varying function: the function c(\cdot), defined on [0,\infty) by c(x) := c_{[x]}, is regularly varying of index \rho.
2.2. Continuous-time Gaussian Volterra equations.
We first turn our attention to the deterministic Volterra equation in \mathbb{R}:

(2.3)  x'(t) = a\,x(t) + \int_0^t k(t-s)\,x(s)\,ds, \quad t \ge 0, \qquad x(0) = x_0.

For any x_0 \in \mathbb{R} there is a unique \mathbb{R}-valued function x which satisfies (2.3) on [0,\infty). The so-called fundamental solution or resolvent of (2.3) is the real-valued function r : [0,\infty) \to \mathbb{R} which is the unique solution of the equation (1.2).

Let (\Omega, \mathcal{F}, \mathbb{P}) be a probability space equipped with a filtration (\mathcal{F}_t)_{t\ge 0}, and let B = \{B(t) : t \ge 0\} be a one-dimensional Brownian motion on this probability space. We will consider the stochastic integro-differential equation of the form

(2.4)  dX(t) = \Big( a\,X(t) + \int_0^t k(t-s)\,X(s)\,ds \Big)\,dt + \sigma\,dB(t), \quad t \ge 0, \qquad X(0) = X_0,

where k is a continuous, integrable real-valued function, and \sigma is a non-zero real constant. The initial condition X_0 is a real-valued, \mathcal{F}_0-measurable random variable with \mathbb{E}|X_0|^2 < \infty which is independent of B. The existence and uniqueness of a continuous solution X of (2.4) with X(0) = X_0 \mathbb{P}-a.s. is covered in Berger and Mizel [10], for instance. Independently, the existence and uniqueness of solutions of stochastic functional equations was established in Itô and Nisio [24] and Mohammed [31]. In fact, X has the variation of constants representation

(2.5)  X(t) = r(t)\,X_0 + \int_0^t r(t-s)\,\sigma\,dB(s), \quad t \ge 0.

We first discuss the existence of asymptotically stationary solutions of (2.4). It transpires that the critical condition to guarantee stationarity is that the fundamental solution r of (2.3) is in L^2([0,\infty);\mathbb{R}).

Theorem 2.1.
Let k \in L^1([0,\infty);\mathbb{R}) \cap C([0,\infty);\mathbb{R}). Suppose the fundamental solution r of (2.3) obeys r \in L^2([0,\infty);\mathbb{R}). Let \sigma \in \mathbb{R}\setminus\{0\}, and let X be the solution of (2.4). Then for every t \ge 0 there exists a real-valued function c such that

(2.6)  c(t) := \lim_{s\to\infty} \mathrm{Cov}(X(s), X(s+t)) = \sigma^2 \int_0^\infty r(s)\,r(s+t)\,ds.

The result follows directly from (2.5) and the fact that X_0 is independent of B. The following theorem shows that (2.4) has a limiting equation which possesses a stationary, rather than an asymptotically stationary, solution. To this end, let B_1 = \{B_1(t) : t \ge 0\} and B_2 = \{B_2(t) : t \ge 0\} be independent standard Brownian motions, and consider the process B = \{B(t) : t \in \mathbb{R}\} defined by

(2.7)  B(t) = B_1(t) \text{ for } t > 0, \qquad B(t) = B_2(-t) \text{ for } t \le 0.

Then B is a standard Brownian motion defined on the whole line.

Theorem 2.2.
Let k \in L^1([0,\infty);\mathbb{R}) \cap C([0,\infty);\mathbb{R}). Suppose the fundamental solution r of (2.3) obeys r \in L^2([0,\infty);\mathbb{R}). Let \sigma \in \mathbb{R}\setminus\{0\}, and let B = \{B(t) : t \in \mathbb{R}\} be the standard one-dimensional Brownian motion defined by (2.7). Then the unique continuous adapted process which obeys

(2.8)  dX(t) = \Big( a\,X(t) + \int_0^\infty k(s)\,X(t-s)\,ds \Big)\,dt + \sigma\,dB(t), \quad t > 0; \qquad X(t) = \int_{-\infty}^t r(t-s)\,\sigma\,dB(s), \quad t \le 0,

is given by

(2.9)  X(t) = \int_{-\infty}^t r(t-s)\,\sigma\,dB(s), \quad t \in \mathbb{R}.

Moreover, X is a stationary zero-mean Gaussian process with autocovariance function given by

(2.10)  c(t) = \mathrm{Cov}(X(s), X(s+t)) = \sigma^2 \int_0^\infty r(s)\,r(s+t)\,ds.

It is clear that if r is in L^2([0,\infty);\mathbb{R}), then X defined by (2.9) is a stationary zero-mean Gaussian process with autocovariance function given by (2.10). To show that X satisfies (2.8) requires more work, and a proof is given in Section 6.

Theorem 2.2 provides direction for the investigations in this paper. It is readily seen that r \in L^1([0,\infty);\mathbb{R}) implies c \in L^1([0,\infty);\mathbb{R}). Therefore, in order to possess long memory but still have stationary solutions, we need to consider conditions on the kernel k in (2.3) such that the fundamental solution r of (2.3) obeys r \in L^2([0,\infty);\mathbb{R}) but r \notin L^1([0,\infty);\mathbb{R}).

Section 3 gives an example of how this can be achieved. The crucial hypotheses on k are that it is regularly varying and that its tail integral is log-convex: this enables us to prove that r is regularly varying and to determine the exact rate of decay of r. We then show how the asymptotic behaviour of c can be inferred from r when r is regularly varying in such a way that r \in L^2([0,\infty);\mathbb{R}) but r \notin L^1([0,\infty);\mathbb{R}). The results enable us to determine the exact rate of decay of the autocovariance function c in terms of the rate of decay of k.

2.3. Discrete-time Volterra equations.
Let (\Omega, \mathcal{F}, \mathbb{P}) be a probability space equipped with a filtration (\mathcal{F}_n)_{n\in\mathbb{N}}. We consider the discrete version of (2.4):

(2.11)  X_{n+1} - X_n = a\,X_n + \sum_{j=1}^n k_j X_{n-j} + \xi_{n+1}, \quad n \ge 0, \qquad X_0 = x_0,

where k is a positive summable kernel, a := -\sum_{j=1}^\infty k_j, and \xi = \{\xi_n : n \in \mathbb{N}\} is a sequence of independent, identically distributed random variables with \mathbb{E}(\xi_n) = 0 and \mathbb{E}(\xi_n^2) = \sigma^2 > 0 for all n \in \mathbb{N}. The initial condition x_0 is an \mathcal{F}_0-measurable random variable with \mathbb{E}(x_0^2) < \infty which is independent of \xi. Let r = \{r_n : n \in \mathbb{N}\} denote the fundamental solution of (2.11), i.e., the unique solution of

(2.12)  r_{n+1} - r_n = a\,r_n + \sum_{j=1}^n k_j r_{n-j}, \quad n \ge 0, \qquad r_0 = 1.

For more information on Volterra difference equations, the reader is referred to the book of Elaydi [16]. An analogous result to Theorem 2.2 holds for (2.11):
Theorem 2.3.
Suppose that k \in \ell^1 and the fundamental solution r of (2.12) obeys r \in \ell^2. Then there is a unique adapted process X which obeys

(2.13)  X_{n+1} - X_n = a\,X_n + \sum_{j=1}^\infty k_j X_{n-j} + \xi_{n+1}, \quad n \ge 0; \qquad X_n = \sum_{j=-\infty}^n r_{n-j}\,\xi_j, \quad n < 0,

where \xi is extended to n \in \mathbb{Z} by taking an independent copy \xi' of \xi (defined on the same probability space) and setting \xi_{-n} = \xi'_n, n \in \mathbb{N}. X is a stationary zero-mean process with autocovariance function given by

(2.14)  c(h) = \mathrm{Cov}(X_n, X_{n+h}) = \sigma^2 \sum_{n=0}^\infty r_n r_{n+h}, \quad h \in \mathbb{N}.

Again, we are able to show that if the tail of (k_n)_{n\in\mathbb{N}} forms a so-called Kaluza sequence, then r satisfies r \in \ell^2 but r \notin \ell^1, with the exact rate of decay specified. From (2.14) we can deduce the exact asymptotic behaviour of the autocovariance function of the stationary solution.

3. Long memory in the continuous equation
3.1. Asymptotic behaviour of the deterministic resolvent.
This section gives the exact rate of decay of the solution of a scalar linear Volterra differential equation with a non-integrable solution r which nonetheless obeys r(t) \to 0 as t \to \infty. Suppose that a + \int_0^\infty k(s)\,ds = 0 and let k satisfy the following conditions:

(C1) k \in L^1([0,\infty);(0,\infty)) \cap C([0,\infty);(0,\infty));

(C2) t \mapsto \log \lambda(t) is a convex function, where

(3.1)  \lambda(t) := \int_t^\infty k(s)\,ds;

(C3) \lambda(t) = L(t)\,t^{-\alpha} with \alpha \in (0,1) and a function L slowly varying at infinity.

Remark 3.1. The last two conditions are satisfied if k is a completely monotone function such that k \in RV_\infty(-1-\alpha). Condition (C2) is equivalent to

(C2*) t \mapsto \lambda(t)/\lambda(t+T) is non-increasing in t for all T > 0.

We study the asymptotic behaviour of r, which is the solution of the integro-differential equation (1.2). In particular, it follows from (C3) that k obeys

(3.2)  \int_0^\infty s\,k(s)\,ds = \infty.

In this case it is only known that the differential resolvent r satisfies

(3.3)  \lim_{t\to\infty} r(t) = 0, \qquad r \notin L^1((0,\infty);(0,\infty)).
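As a concrete instance of (C1)–(C3), the kernel k(t) = (1+t)^{-\alpha-1} has tail integral \lambda(t) = (1/\alpha)(1+t)^{-\alpha}, whose logarithm is convex. The snippet below is our own illustration (the kernel and step sizes are arbitrary choices): it checks log-convexity numerically through second differences of t \mapsto \log\lambda(t).

```python
import math

alpha = 0.4
# tail integral of k(t) = (1+t)^(-alpha-1)
lam = lambda t: (1.0 / alpha) * (1.0 + t) ** -alpha

h = 0.01
ts = [0.1 * i for i in range(1, 200)]
# (C2): second differences of log(lambda) should be non-negative (convexity)
second_diffs = [
    math.log(lam(t + h)) - 2.0 * math.log(lam(t)) + math.log(lam(t - h))
    for t in ts
]
```

Here \log\lambda(t) = -\log\alpha - \alpha\log(1+t), whose second derivative \alpha/(1+t)^2 is strictly positive, so the discrete check mirrors the exact computation.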
Theorem 3.2.
Suppose that k obeys (C1)–(C3). If r is the unique continuous solution of (1.2), then

(3.4)  \lim_{t\to\infty} r(t)\, t^{1-\alpha}\, L(t) = \frac{\sin(\alpha\pi)}{\pi}.

Hence for \alpha \in (0,1/2) we have r \in L^2([0,\infty);(0,\infty)) but r \notin L^1([0,\infty);(0,\infty)), due to r \in RV_\infty(\mu) for \mu = \alpha - 1 \in (-1,-1/2).

Proof.
We note that \lambda \in C((0,\infty);(0,\infty)). Evidently \lambda is positive, non-increasing, and satisfies \lambda(t) \to 0 as t \to \infty; though, by virtue of (C3), this happens so slowly that \lambda \notin L^1([0,\infty);\mathbb{R}). Since r \in C((0,\infty);(0,\infty)), we can also introduce the function \rho = -r'. By differentiating the function f(t) = r(t) + \int_0^t \lambda(t-s)\,r(s)\,ds and using (1.2), we see that f'(t) = 0. Since f(0) = r(0) = 1, we have

(3.5)  r(t) + \int_0^t \lambda(t-s)\,r(s)\,ds = 1, \quad t \ge 0.

Therefore,

\rho(t) = -r'(t) = \frac{d}{dt}\Big( \int_0^t \lambda(s)\,r(t-s)\,ds \Big) = \int_0^t \lambda(s)\,r'(t-s)\,ds + \lambda(t)\,r(0) = \lambda(t) - \int_0^t \lambda(t-s)\,\rho(s)\,ds.

Hence \rho is the integral resolvent of \lambda. Now, by (C2) and Theorem 1.2 in [28], it follows that

(3.6)  0 \le \rho(t) \le \lambda(t) \text{ for all } t > 0, \qquad \int_0^\infty \rho(t)\,dt = 1,

in particular implying 0 \le r(t) \le 1 for t \ge 0. Since \lambda(t) \ge 0, we may define a measure \Lambda by \Lambda([0,t]) = \int_0^t \lambda(s)\,ds. Then

\omega_\Lambda(z) := \int_0^\infty e^{-zt}\,\Lambda(dt) = \hat{\lambda}(z).

By (C3), it follows that \Lambda \in RV_\infty(1-\alpha), so as 1-\alpha > 0, we can apply Theorem XIII.5.1 in [17] to get

(3.7)  \hat{\lambda}(\tau) = \omega_\Lambda(\tau) \sim \Gamma(2-\alpha)\,\Lambda(1/\tau), \quad \text{as } \tau \to 0.

Next, as r(t) > 0 for t \ge 0, we may define the measure U by U([0,t]) = \int_0^t r(s)\,ds. Then u(t) := U'(t) = r(t) obeys u'(t) = r'(t) = -\rho(t) \le 0 for t \ge 0. Furthermore,

\omega_U(z) := \int_0^\infty e^{-zt}\,U(dt) = \int_0^\infty e^{-zt}\,r(t)\,dt = \hat{r}(z).

Since \lambda(t) \to 0 and r(t) \to 0 as t \to \infty, \hat{\lambda}(z) and \hat{r}(z) exist for \Re(z) > 0. Therefore, by (3.5), we have

\hat{r}(z) + \hat{\lambda}(z)\,\hat{r}(z) = \frac{1}{z}, \quad \Re(z) > 0.

Therefore, for \tau > 0,

\omega_U(\tau) = \hat{r}(\tau) = \frac{1}{\tau + \tau\,\hat{\lambda}(\tau)}.

Now, by (3.7),

\tau\,\hat{\lambda}(\tau) \sim \Gamma(2-\alpha)\,\tau\,\Lambda(1/\tau), \quad \text{as } \tau \to 0.

Because \Lambda \in RV_\infty(1-\alpha), \Lambda^*(\tau) := \tau\,\Lambda(1/\tau) obeys \Lambda^* \in RV_0(\alpha). Since \alpha \in (0,1),

\tau + \tau\,\hat{\lambda}(\tau) \sim \Gamma(2-\alpha)\,\Lambda^*(\tau) = \Gamma(2-\alpha)\,\tau\,\Lambda(1/\tau), \quad \text{as } \tau \to 0.

Thus

(3.8)  \omega_U(\tau) = \frac{1}{\tau + \tau\,\hat{\lambda}(\tau)} \sim \frac{1}{\Gamma(2-\alpha)\,\tau\,\Lambda(1/\tau)} = \frac{\tau^{-\alpha}}{L^*(1/\tau)}, \quad \text{as } \tau \to 0,

where L^*(1/\tau) = \Gamma(2-\alpha)\,\tau^{1-\alpha}\,\Lambda(1/\tau), which is a slowly varying function by virtue of the fact that \Lambda \in RV_\infty(1-\alpha). Then, as U has a monotone derivative u, and (3.8) holds, Theorem XIII.5.4 in [17] implies that

u(t) \sim \frac{1}{\Gamma(\alpha)}\,\frac{t^{\alpha-1}}{L^*(t)}, \quad \text{as } t \to \infty.

Since u(t) = r(t), by the definition of L^*,

r(t) \sim \frac{1}{\Gamma(\alpha)}\,\frac{t^{\alpha-1}}{\Gamma(2-\alpha)\,t^{\alpha-1}\,\Lambda(t)} = \frac{1}{\Gamma(\alpha)\,\Gamma(2-\alpha)}\,\frac{1}{\Lambda(t)}, \quad \text{as } t \to \infty.

Moreover, we have from Proposition 1.5.8 in [11] that

\Lambda(t) = \int_0^t s^{-\alpha}\,L(s)\,ds \sim \frac{1}{1-\alpha}\,t^{1-\alpha}\,L(t), \quad \text{as } t \to \infty.

Hence,

\lim_{t\to\infty} r(t)\,t^{1-\alpha}\,L(t) = \frac{1-\alpha}{\Gamma(\alpha)\,\Gamma(2-\alpha)} = \frac{\sin(\alpha\pi)}{\pi},

as required. □

For the sake of completeness, we also study the case where \lambda, defined as in (3.1), satisfies \lambda \in RV_\infty(-\alpha) with \alpha > 1. It turns out that in this case r converges to a positive limit and hence cannot be asymptotically stable.

Corollary 3.3.
Suppose that k satisfies (C1) and (C3) with \alpha > 1, and that a + \int_0^\infty k(s)\,ds = 0 holds true. Then \int_0^\infty s\,k(s)\,ds < \infty and

(3.9)  \lim_{t\to\infty} r(t) = \Big( 1 + \int_0^\infty s\,k(s)\,ds \Big)^{-1}.

Proof.
Since \lambda is continuous with \lambda(0) = \int_0^\infty k(s)\,ds < \infty and \lambda \in RV_\infty(-\alpha) with \alpha > 1, we also have \lambda \in L^1([0,\infty);(0,\infty)) \cap C([0,\infty);(0,\infty)). Moreover,

\int_0^\infty \lambda(s)\,ds = \int_0^\infty s\,k(s)\,ds < \infty.

Then Theorem 4.2 in [8] yields (3.9). □
3.2. Asymptotic behaviour of the autocovariance function.
In this section we state our second main result, Theorem 3.4, which completely characterizes the asymptotic rate of convergence of the autocovariance function c(t) of the solution of (2.8) in the case when a = -\int_0^\infty k(s)\,ds. In the case where 0 < \alpha < 1/2 and k satisfies (C1)–(C3), c(t) resembles the power-law function t^{2\alpha-1} for large values of t and hence exhibits long memory. The case where \alpha = 1/2 is critical: for suitable k we have r \notin L^2([0,\infty);\mathbb{R}). If r \in L^2([0,\infty);\mathbb{R}), it is still possible to determine the rate of decay of c, which continues to exhibit long memory. Perhaps the most interesting aspect of this result is that arbitrarily slow rates of decay of c in RV_\infty(0) can be obtained.
Theorem 3.4.
Suppose that k satisfies (C1)–(C3) with \alpha \in (0,1/2). Let r be the solution of (1.2). Let \sigma \in \mathbb{R}\setminus\{0\} and let B = \{B(t) : t \in \mathbb{R}\} be the standard one-dimensional Brownian motion defined by (2.7). Then there is a unique stationary Gaussian process X which obeys (2.8):

dX(t) = \Big( a\,X(t) + \int_0^\infty k(s)\,X(t-s)\,ds \Big)\,dt + \sigma\,dB(t), \quad t > 0; \qquad X(t) = \int_{-\infty}^t r(t-s)\,\sigma\,dB(s), \quad t \le 0.

The autocovariance function c(\cdot) = \mathrm{Cov}(X(s), X(s+\cdot)) satisfies

(3.10)  \lim_{t\to\infty} c(t)\, t^{1-2\alpha}\, L^2(t) = \sigma^2\, \frac{\Gamma(1-2\alpha)\,\Gamma(\alpha)}{\Gamma(1-\alpha)} \cdot \frac{\sin^2(\pi\alpha)}{\pi^2}.
The proof of the theorem can be found in Section 7. □
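The stationary regime described by Theorem 3.4 can also be explored by simulation. The sketch below is our own illustration and not part of the paper's argument: it applies a constant step-size Euler–Maruyama scheme to (2.4), with the memory integral approximated by a left-rectangle rule, using the kernel k(t) = (1+t)^{-\alpha-1} and a = -\int_0^\infty k(s)\,ds = -1/\alpha, which matches the hypotheses with \alpha \in (0,1/2).

```python
import math
import random

def simulate_volterra_em(a, kernel, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama scheme for dX(t) = (a X(t) + int_0^t k(t-s) X(s) ds) dt + sigma dB(t)."""
    x = [x0]
    for n in range(n_steps):
        # left-rectangle quadrature of the memory term over the stored path
        memory = dt * sum(kernel((n - j) * dt) * x[j] for j in range(n))
        x.append(x[n] + (a * x[n] + memory) * dt + sigma * rng.gauss(0.0, math.sqrt(dt)))
    return x

alpha = 0.3
k = lambda t: (1.0 + t) ** (-alpha - 1.0)
path = simulate_volterra_em(a=-1.0 / alpha, kernel=k, sigma=1.0, x0=0.0,
                            dt=0.02, n_steps=500, rng=random.Random(1))
```

The O(n^2) cost of the memory term is the price of the non-Markovian Volterra structure: every step revisits the entire past of the path. Averaging lagged products over many independent runs gives a (slowly converging) estimate of c(t).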
Example 3.5.
Let \alpha \in (0,1/2) and

(3.11)  k(t) = \frac{1}{(1+t)^{\alpha+1}}, \quad t \ge 0.

Then \lambda(t) = 1/(\alpha(1+t)^\alpha) for t \ge 0, and since L(t) = t^\alpha\,\lambda(t) \to 1/\alpha as t \to \infty, we obtain the following convergence rate for the autocovariance function:

\lim_{t\to\infty} c(t)\,t^{1-2\alpha} = \sigma^2\,\alpha^2\, \frac{\sin(\alpha\pi)\,\Gamma(1-2\alpha)}{\pi\,\Gamma(1-\alpha)^2}.

We now consider the interesting and critical case where \alpha = 1/2. Depending on the properties of the slowly varying function L, both r \notin L^2([0,\infty);\mathbb{R}) and r \in L^2([0,\infty);\mathbb{R}) are possible. We first determine the rate of convergence of the autocovariance function.

Theorem 3.6.
Suppose that k satisfies (C1), (C2), and k(t) = L(t)\,t^{-3/2}, t \ge 0, with a slowly varying function L. Then r \in L^2([0,\infty);\mathbb{R}) if and only if

(3.12)  \int_1^\infty \frac{1}{t\,L^2(t)}\,dt < \infty.

Moreover, if (3.12) holds true, then

c(t) \sim \frac{\sigma^2}{4\pi^2} \int_t^\infty \frac{1}{s\,L^2(s)}\,ds, \quad t \to \infty.

Proof.
Theorem 3.2 yields that

(3.13)  \lim_{t\to\infty} r(t)\,t^{1/2}\,L(t) = \lim_{t\to\infty} r(t)\,k(t)\,t^2 = \frac{1}{2\pi}.

Since r is continuous on [0,\infty), r \in L^2([0,\infty);\mathbb{R}) if and only if

\int_1^\infty \frac{1}{t^4\,k^2(t)}\,dt = \int_1^\infty \frac{1}{t\,L^2(t)}\,dt < \infty.

In this case we denote

f(t) := \frac{\sigma^2}{4\pi^2} \int_t^\infty \frac{1}{s\,L^2(s)}\,ds, \quad t \ge 1.

The integrand of f is regularly varying with index -1. Then, by Karamata's Theorem (see e.g. [11], Theorem 1.5.11), we obtain

(3.14)  \lim_{t\to\infty} \frac{1}{L^2(t)\,f(t)} = 0.

Moreover, with (3.13) and (3.14) it holds that

(3.15)  \lim_{t\to\infty} \frac{t\,r^2(t)}{f(t)} = \lim_{t\to\infty} 4\pi^2\,t\,L^2(t)\,r^2(t) \cdot \lim_{t\to\infty} \frac{1}{4\pi^2\,L^2(t)\,f(t)} = 0.

We write

\frac{c(t)}{f(t)} = \frac{\sigma^2}{f(t)} \int_0^t r(s)\,r(t+s)\,ds + \frac{\sigma^2}{f(t)} \int_t^\infty r(s)\,r(t+s)\,ds =: I_1(t) + I_2(t), \quad t \ge 1.

By (3.6), r is positive and non-increasing; hence we obtain the following upper bound for I_2(t):

(3.16)  I_2(t) \le \frac{\sigma^2}{f(t)} \int_t^\infty r^2(s)\,ds, \quad t \ge 1.

The denominator and the numerator in (3.16) tend to zero as t tends to infinity; therefore we may apply L'Hôpital's rule to obtain

(3.17)  \lim_{t\to\infty} \frac{\sigma^2}{f(t)} \int_t^\infty r^2(s)\,ds = \lim_{t\to\infty} 4\pi^2\,t\,L^2(t)\,r^2(t) = 1.

On the other hand,

(3.18)  I_2(t) \ge \frac{\sigma^2}{f(t)} \int_t^\infty r^2(s+t)\,ds = \frac{\sigma^2}{f(t)} \int_{2t}^\infty r^2(s)\,ds, \quad t \ge 1.

By (3.15) we have

(3.19)  \lim_{t\to\infty} \frac{\sigma^2}{f(t)} \int_t^{2t} r^2(s)\,ds \le \lim_{t\to\infty} \frac{\sigma^2\,t\,r^2(t)}{f(t)} = 0.

Combining (3.16), (3.17), (3.18), and (3.19), we obtain \lim_{t\to\infty} I_2(t) = 1. The term I_1(t) vanishes as t tends to infinity: applying Karamata's theorem to r \in RV_\infty(-1/2) and using (3.15), we obtain

\lim_{t\to\infty} \frac{I_1(t)}{\sigma^2} \le \lim_{t\to\infty} \frac{r(t)}{f(t)} \int_0^t r(s)\,ds = \lim_{t\to\infty} \frac{\int_0^t r(s)\,ds}{t\,r(t)} \cdot \frac{t\,r^2(t)}{f(t)} = 2 \lim_{t\to\infty} \frac{t\,r^2(t)}{f(t)} = 0.

This completes the proof. □
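To make the rate in Theorem 3.6 concrete, take L(t) = \log t, which satisfies (3.12) since \int^\infty dt/(t\log^2 t) < \infty. Then \int_t^\infty ds/(s\,L^2(s)) = 1/\log t exactly, so the autocovariance function decays at the logarithmic rate c(t) \sim (\sigma^2/4\pi^2)/\log t. The snippet below (our own illustration) verifies the integral identity by midpoint-rule quadrature after the substitution u = \log s.

```python
import math

def tail_integral(t, T, n=200000):
    """Midpoint rule for int_t^T ds / (s (log s)^2); with u = log s this is int du / u^2."""
    lo, hi = math.log(t), math.log(T)
    du = (hi - lo) / n
    return sum(du / (lo + (i + 0.5) * du) ** 2 for i in range(n))

t, T = 10.0, 1.0e12
approx = tail_integral(t, T)
exact = 1.0 / math.log(t) - 1.0 / math.log(T)   # closed form of the truncated integral
```

The substitution turns a slowly decaying integrand into a smooth one on a short interval, which is why a plain midpoint rule already reaches high accuracy here.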
To see that it is possible to obtain arbitrary rates of decay for c in the class of slowly varying functions which tend to zero, we consider such a function \gamma \in RV_\infty(0). We demonstrate this claim under a mild technical assumption on \gamma.

Corollary 3.7. Suppose that \gamma is in C^1((0,\infty);(0,\infty)), \gamma(t) \to 0 as t \to \infty, and that -\gamma' \in RV_\infty(-1). Then \gamma \in RV_\infty(0) and there exists L \in RV_\infty(0) which satisfies (3.12) and

(3.20)  \int_t^\infty \frac{1}{s\,L^2(s)}\,ds \sim \gamma(t), \quad \text{as } t \to \infty.

Proof.
For any T > t > 0, we have \gamma(T) - \gamma(t) = \int_t^T \gamma'(s)\,ds. Letting T \to \infty, we see that \gamma(t) = \int_t^\infty -\gamma'(s)\,ds; -\gamma' is integrable because \gamma(t) \to 0 as t \to \infty. The fact that -\gamma' \in RV_\infty(-1) and is integrable forces \gamma to be in RV_\infty(0). Define the function L : [1,\infty) \to (0,\infty) by

(3.21)  L(t) = \big( -t\,\gamma'(t) \big)^{-1/2}.

Clearly L \in RV_\infty(0). Moreover, for any T \ge 1,

\int_1^T \frac{1}{s\,L^2(s)}\,ds = \int_1^T -\gamma'(s)\,ds = \gamma(1) - \gamma(T).

Since \gamma(T) \to 0 as T \to \infty, it follows that L obeys (3.12). The asymptotic relation (3.20) is an obvious consequence of the construction of L. □

Remark 3.8. By applying Theorem 3.6, it can be seen that if k(t) \sim t^{-3/2} L(t) as t \to \infty, where L is given by (3.21), then c(t) \sim (\sigma^2/4\pi^2)\,\gamma(t) as t \to \infty. Therefore, functions k exist such that the rate of convergence of the autocovariance function is an (essentially) arbitrary function in RV_\infty(0). For example, c(t) can decay to zero at a rate asymptotic to (\log\log\log \cdots \log t)^{-1} as t \to \infty, where there are finitely but arbitrarily many compositions of logarithms.

4. Long memory in the discrete equation
In this section we study the discrete counterparts of equations (1.2) and (2.4) for some summable kernels k with infinite mean.

4.1. Asymptotic behaviour of the deterministic resolvent.
Let us first consider the deterministic equation (2.12) with a + \sum_{j=1}^\infty k_j = 0. If 1 + a > 0 and (k_n)_{n\ge 1} has infinite mean, the classical renewal theorem yields that r_n converges to zero as n tends to infinity. If (k_n)_{n\ge 1} has a regularly varying tail (Garsia and Lamperti [18], Theorem 1.1) and (r_n)_{n\in\mathbb{N}} is monotone non-increasing (Isaac [23], Theorem 3.1), the exact convergence rates are also known. In this section we prove that if the tail \big(\sum_{j=n}^\infty k_j\big)_{n\ge 1} is a so-called Kaluza sequence, which is a discrete analogue of log-convexity, then the sequence (r_n)_{n\in\mathbb{N}} is monotone non-increasing and we can apply the above-mentioned theorems.

Theorem 4.1.
Let (k_n)_{n\ge 1} be a positive sequence such that \sum_{j=1}^\infty k_j \le 1. Moreover, let \lambda_n := \sum_{j=n}^\infty k_j, n \ge 1, satisfy:

(C2') (\lambda_n)_{n\ge 1} is a Kaluza sequence, that is, \lambda_n^2 \le \lambda_{n-1}\,\lambda_{n+1} for all n \ge 2;

(C3') \lambda_n = L(n)\,n^{-\alpha}, where 0 < \alpha < 1 and L(n) is a slowly varying sequence.

Then

\lim_{n\to\infty} n^{1-\alpha}\,L(n)\,r_n = \frac{\sin(\alpha\pi)}{\pi}.

Proof.
Since (L(n))_{n\in\mathbb{N}} is slowly varying, so is the function x \mapsto L([x]). Since 1 + a \ge 0, we can apply Theorem 1.1 in [18] to obtain the result for 1/2 < \alpha < 1. For 0 < \alpha \le 1/2 it remains to show, by Theorem 3.1 in [23], that (r_n)_{n\ge 0} is monotone non-increasing. To show this, we define

a_n := r_n + \sum_{j=0}^{n-1} r_j\,\lambda_{n-j}, \quad n \ge 1,

to obtain

a_{n+1} - a_n = r_{n+1} - r_n + \sum_{j=0}^{n-1} (\lambda_{n+1-j} - \lambda_{n-j})\,r_j + r_n\,\lambda_1 = r_{n+1} - r_n - \sum_{j=0}^{n-1} k_{n-j}\,r_j + r_n\,\lambda_1 = 0,

where the last equality follows from (2.12) and \lambda_1 = -a. Hence (a_n)_{n\ge 1} is a constant sequence and equals a_1 = r_0 = 1. With \Delta_n := -(r_n - r_{n-1}) we have

0 = a_n - a_{n-1} = -\Delta_n + \sum_{j=0}^{n-1} r_j\,\lambda_{n-j} - \sum_{j=0}^{n-2} r_j\,\lambda_{n-1-j} = -\Delta_n + \sum_{j=1}^{n-1} \lambda_{n-j}\,(r_j - r_{j-1}) + \lambda_n = -\Delta_n - \sum_{j=1}^{n-1} \lambda_{n-j}\,\Delta_j + \lambda_n.

Therefore, (\Delta_n)_{n\ge 1} satisfies the recurrence relation

(4.1)  \Delta_n = \lambda_n - \sum_{j=1}^{n-1} \lambda_{n-j}\,\Delta_j.

Since (\lambda_n)_{n\ge 1} is a Kaluza sequence, it follows from [34] that \Delta_n is non-negative for all n \ge 1. Hence the sequence (r_n)_{n\ge 0} is non-increasing and the claim follows. □

4.2. Asymptotic behaviour of the autocovariance function.
Now we are able to state the discrete analogue of Theorem 3.4:
Theorem 4.2.
Suppose that k satisfies the assumptions of Theorem 4.1 with \alpha \in (0,1/2). Let r be the solution of (2.12), and let \xi = \{\xi_n : n \in \mathbb{Z}\} be a sequence of random variables defined as in Theorem 2.3. Then there is a unique stationary process X which obeys (2.13):

X_{n+1} - X_n = a\,X_n + \sum_{j=1}^\infty k_j X_{n-j} + \xi_{n+1}, \quad n \ge 0; \qquad X_n = \sum_{j=-\infty}^n r_{n-j}\,\xi_j, \quad n < 0.

The autocovariance function c(\cdot) = \mathrm{Cov}(X_n, X_{n+\cdot}) obeys

(4.2)  \lim_{h\to\infty} c(h)\, h^{1-2\alpha}\, L^2(h) = \sigma^2\, \frac{\Gamma(1-2\alpha)\,\Gamma(\alpha)}{\Gamma(1-\alpha)} \cdot \frac{\sin^2(\pi\alpha)}{\pi^2}.
The stationary solution is given by X_n = \sum_{j=-\infty}^n r_{n-j}\,\xi_j, n \in \mathbb{Z}, and its autocovariance function obviously satisfies (2.14). Since the sequence (L(n))_{n\in\mathbb{N}} is slowly varying, we obtain with Theorem 4.1, for all \lambda > 0,

\lim_{n\to\infty} \frac{r_{[\lambda n]}}{r_n} = \lim_{n\to\infty} \frac{L(n)}{L([\lambda n])} \cdot \Big( \frac{[\lambda n]}{n} \Big)^{\alpha-1} = \lim_{n\to\infty} \Big( \lambda + \frac{[\lambda n] - \lambda n}{n} \Big)^{\alpha-1} = \lambda^{\alpha-1}.

Hence the positive sequence (r_n)_{n\in\mathbb{N}} is regularly varying with index \alpha - 1. Therefore, as mentioned in Section 2.1, the function r(x) := r_{[x]}, x \ge 0, is also regularly varying, and we may write

c(h) = \sigma^2 \int_0^\infty r(x)\,r(x+h)\,dx, \quad h \in \mathbb{N}.

With Theorem 7.1 we obtain

\lim_{h\to\infty} \frac{c(h)}{\sigma^2\, h\, r_h^2} = \frac{\Gamma(\alpha)\,\Gamma(1-2\alpha)}{\Gamma(1-\alpha)}.
Following the steps of the proof of Theorem 3.4, we obtain (4.2). □

5. Subexponential decay of the autocovariance function
In this section we study the properties of the autocovariance function of the stationary solution of the main continuous- and discrete-time equations (1.1) and (2.11) when the kernel k is again regularly varying with index -1-\alpha, \alpha > 0, but now a + \int_0^\infty k(s)\,ds < 0 (respectively a + \sum_{n=1}^\infty k_n < 0), and k is a subexponential function or sequence in the sense of Appleby et al. [5, 6]. In this case, the fundamental solution in both discrete-time (Theorem 3.2 in [6]) and continuous-time (Theorem 15 in [5]) settings decays at the same rate as the kernel k. Since k is regularly varying with index -\alpha - 1 < -1, we have r \in L^1([0,\infty);\mathbb{R}) \cap L^2([0,\infty);\mathbb{R}). This implies that the autocovariance function of the stationary solution is integrable. The next results show that the autocovariance function nevertheless decays very slowly: it converges to zero at the same rate as the kernel k, that is, polynomially.

Remark 5.1. If a + \int_0^\infty k(s)\,ds > 0, then the fundamental solution grows exponentially. The characteristic function of r, the function h which satisfies \hat{r}(z) = 1/h(z) for \Re(z) \ge 0, is given by h(z) = z - a - \hat{k}(z), z \in \mathbb{C}, and satisfies h(0) = -a - \int_0^\infty k(s)\,ds < 0. Since k is positive, we obtain for x > 0

h(x) = x - a - \int_0^\infty e^{-xs} k(s)\,ds \ge x - a - \int_0^\infty k(s)\,ds,

which is positive if x > a + \int_0^\infty k(s)\,ds > 0. Therefore, by the intermediate value theorem, there exists a positive root of the characteristic function. By the standard theory of Volterra equations, this implies that the fundamental solution grows exponentially. Hence the case a + \int_0^\infty k(s)\,ds > 0 is not considered further.

5.1. Continuous-time stochastic equation with subexponentially decaying memory. Suppose k \in C([0,\infty);(0,\infty)) satisfies

(S1) k \in RV_\infty(-1-\alpha) for \alpha > 0;

(S2) a + \int_0^\infty k(s)\,ds < 0.

Then, by Theorem 15 in [5], the fundamental solution r of (1.2) converges to zero at the same rate as k:

(5.1)  \lim_{t\to\infty} \frac{r(t)}{k(t)} = \frac{1}{\big( a + \int_0^\infty k(s)\,ds \big)^2} =: L_c.

Moreover, r is also subexponential. Since r is also square integrable, the stationary solution of (2.8) exists and the exact rate of decay of the autocovariance function can be determined.

Theorem 5.2.
Suppose k satisfies (S1) and (S2). Let r be the solution of (1.2). Let \sigma \in \mathbb{R}\setminus\{0\} and let B be the Brownian motion defined by (2.7). Then the autocovariance function c(\cdot) = \mathrm{Cov}(X(s), X(s+\cdot)) of the stationary solution of (2.8) satisfies

(5.2)  \lim_{t\to\infty} \frac{c(t)}{k(t)} = \frac{\sigma^2}{\big( -a - \int_0^\infty k(s)\,ds \big)^3} > 0.
The autocovariance function of the stationary solution is again given by (2.10). Theorem 1.8.3 in [11] yields that there exists a decreasing function \lambda with k(t) \sim \lambda(t) as t \to \infty. Since r is integrable, for an arbitrary \epsilon > 0 we may choose T > 0 such that

2 L_c \int_T^\infty |r(s)|\,ds < \epsilon.

We now write

(5.3)  \int_0^\infty \frac{r(t+s)\,r(s)}{k(t)}\,ds = \int_0^T \frac{r(t+s)}{k(t+s)}\,\frac{k(t+s)}{k(t)}\,r(s)\,ds + \int_T^\infty \frac{\lambda(t)}{k(t)}\,\frac{r(t+s)}{\lambda(t+s)}\,\frac{\lambda(t+s)}{\lambda(t)}\,r(s)\,ds.

The second integral is negligible: since \lambda is decreasing and r(t)/\lambda(t) \to L_c as t \to \infty, the integrand is bounded for sufficiently large t by 2 L_c |r(s)|. Hence

\limsup_{t\to\infty} \Big| \int_T^\infty \frac{r(t+s)\,r(s)}{k(t)}\,ds \Big| \le 2 L_c \int_T^\infty |r(s)|\,ds < \epsilon.

Let us now consider the first integral in (5.3). With Potter's bound (cf. [11], Theorem 1.5.6) we obtain

(5.4)  \frac{k(t+s)}{k(t)} \to 1, \quad t \to \infty, \text{ uniformly in } s \text{ for all } s \le T.

Therefore, for all sufficiently large t,

(5.5)  \sup_{s \le T} \Big| \frac{k(t+s)}{k(t)} \Big| \le 2, \qquad \sup_{s > 0} \Big| \frac{r(t+s)}{k(t+s)} \Big| \le 2 L_c.

Using the dominated convergence theorem, we obtain

\lim_{t\to\infty} \int_0^T \frac{r(t+s)}{k(t+s)}\,\frac{k(t+s)}{k(t)}\,r(s)\,ds = L_c \int_0^T r(s)\,ds.

Hence the left-hand side of (5.2) converges to \sigma^2 L_c \int_0^\infty r(s)\,ds, and the claim follows from the fact that \int_0^\infty r(s)\,ds = -1/\big( a + \int_0^\infty k(s)\,ds \big). □

Example 5.3.
Let $\alpha > 0$ and
\[
k(t) = \frac{1}{(1+t)^{\alpha+1}}, \qquad t \ge 0.
\]
We obtain the following convergence rate of the autocovariance function:
\[
\lim_{t\to\infty} c(t)\,t^{\alpha+1} = \frac{\sigma^2}{(-a - 1/\alpha)^3}.
\]

Remark 5.4. Examples 3.5 and 5.3 make clear that the decay rate of the kernel $k$ has a very different impact on the rate of convergence of the autocovariance function according to whether we are in the long-memory or the subexponential case. In the latter case, the rate of decay of the autocovariance function $c$ is proportional to the rate of decay of the kernel $k$, so slow decay in the memory, as measured by the rate of decay of $k$, is reflected exactly in the statistical memory, as measured by $c$. By contrast, in the long-memory case, a faster rate of decay of the kernel $k$ results in a slower rate of decay of $c$.

5.2. Discrete-time stochastic equation with subexponentially decaying memory.
Let us now consider the equation (2.11) with a discrete kernel $k = \{k_n : n \ge 1\}$ satisfying

(S1') $k$ is a regularly varying sequence with index $-1-\alpha$ for some $\alpha > 0$;
(S2') $a + \sum_{j=1}^\infty k_j < 0$.

Then $k$ satisfies the assumptions of Theorem 3.2 in [6], and the fundamental solution of (2.12) converges to zero at the same rate as $k$:
\[
(5.7)\qquad \lim_{n\to\infty} \frac{r_n}{k_n} = \frac{1}{\bigl(a + \sum_{j=1}^\infty k_j\bigr)^2} =: L_d.
\]
Again, the stationary solution of (2.11) exists and the exact rate of decay of the autocovariance function can be determined.
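Before stating the theorem, the claimed rates can be made concrete with a small numerical sketch. The recursion below is an *assumed* discrete-time form of the resolvent equation (2.12), which is not reproduced in this excerpt, and all parameter values (the factor $0.2$, $a$, $\alpha$) are illustrative assumptions chosen so that (S1') and (S2') hold; the sketch compares $r_n/k_n$ with $L_d$ from (5.7), the total mass $\sum_n r_n$ with $-1/(a+\sum_j k_j)$, and the deterministically computed autocovariance $c(h)$ with the limit claimed in (5.8) below.

```python
# Numerical sanity check (illustrative only).  The resolvent recursion
#     r_{n+1} = (1 + a) r_n + sum_{j=1}^{n} k_j r_{n-j},   r_0 = 1,
# is an *assumed* reading of (2.12); the kernel k_j = 0.2 j^{-1-alpha} and the
# values of a, alpha are illustrative assumptions satisfying (S1') and (S2').
a, sigma, alpha = -0.5, 1.0, 1.5
N, h = 2500, 1200
k = [0.0] + [0.2 * j ** (-1.0 - alpha) for j in range(1, N + 1)]

r = [1.0]
for n in range(N):
    r.append((1.0 + a) * r[n] + sum(k[j] * r[n - j] for j in range(1, n + 1)))

K = 0.2 * sum(j ** (-1.0 - alpha) for j in range(1, 200000))   # ~ sum_{j>=1} k_j, so a + K < 0
L_d = 1.0 / (a + K) ** 2                                       # claimed limit of r_n / k_n in (5.7)
c_h = sigma ** 2 * sum(r[n + h] * r[n] for n in range(N - h))  # autocovariance at lag h, cf. (2.14)

print("sum of r_n :", sum(r), "  vs theory -1/(a+K)       :", -1.0 / (a + K))
print("r_N / k_N  :", r[-1] / k[-1], "  vs theory L_d            :", L_d)
print("c(h) / k_h :", c_h / k[h], "  vs theory sigma^2/(-a-K)^3:", sigma ** 2 / (-a - K) ** 3)
```

For these parameters the three printed quantities sit close to their theoretical values; the slowly decaying, regularly varying corrections are exactly why the limits in (5.7) and (5.8) are only attained asymptotically.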
Theorem 5.5.
Suppose $k$ satisfies (S1') and (S2'). Let $r$ be the solution of (2.12). Let $\xi = \{\xi_n : n \in \mathbb{Z}\}$ be a sequence of random variables defined as in Theorem 2.3. Then the autocovariance function $c(\cdot) = \operatorname{Cov}(X_n, X_{n+\cdot})$ of the stationary process defined in (2.13) satisfies
\[
(5.8)\qquad \lim_{h\to\infty} \frac{c(h)}{k_h} = \frac{\sigma^2}{\bigl(-a - \sum_{j=1}^\infty k_j\bigr)^3} > 0.
\]

Proof.
The autocovariance function of the stationary solution is again given by (2.14). Since $(k_n)_{n\in\mathbb{N}}$ is a regularly varying sequence, the function $\tilde{k}(x) := k_{[x]}$, $x \ge 1$, is a regularly varying function with index $-1-\alpha$. Hence we may choose the function $\lambda$ as in the proof of Theorem 5.2. Since $r$ is absolutely summable, for an arbitrary $\epsilon > 0$ we can choose $N$ such that $2 L_d \sum_{n=N+1}^\infty |r_n| < \epsilon$. Similarly to the continuous case, we split the sum and study each term separately:
\[
(5.9)\qquad
\sum_{n=0}^\infty \frac{r_{n+h}\,r_n}{k_h}
 = \sum_{n=0}^N \frac{r_{n+h}}{k_{n+h}}\,\frac{\tilde{k}(n+h)}{\tilde{k}(h)}\,r_n
 + \sum_{n=N+1}^\infty \frac{\lambda(h)}{\tilde{k}(h)}\,\frac{r_{n+h}}{\lambda(n+h)}\,\frac{\lambda(n+h)}{\lambda(h)}\,r_n.
\]
The sequence $r_h/\lambda(h)$ converges to $L_d$ as $h \to \infty$, so the terms of the second sum are bounded for sufficiently large $h$ by $2 L_d |r_n|$. Therefore
\[
\limsup_{h\to\infty} \Bigl| \sum_{n=N+1}^\infty \frac{r_{n+h}\,r_n}{k_h} \Bigr| \le 2 L_d \sum_{n=N+1}^\infty |r_n| < \epsilon.
\]
Let us now consider the first term in (5.9). Applying Potter's bound to the function $\tilde{k}(x)$ as in (5.4), we obtain the discrete version of (5.5). Thus
\[
\lim_{h\to\infty} \sum_{n=0}^N \frac{r_{n+h}\,r_n}{k_h} = L_d \sum_{n=0}^N r_n.
\]
Similarly, $\sum_{n=0}^\infty r_n = -1/\bigl(a + \sum_{j=1}^\infty k_j\bigr)$ and the claim follows. □

6. Proof of Theorem 2.2
First we show that the process defined by (2.9) has a continuous modification. Applying Itô's lemma, the Cauchy-Schwarz inequality and Fubini's theorem, we obtain
\[
\begin{aligned}
E\bigl((X(t) - X(u))^2\bigr)
&= \sigma^2 \int_{\mathbb{R}} \bigl( r(t-s)\,1_{\{s \le t\}} - r(u-s)\,1_{\{s \le u\}} \bigr)^2\,ds
 = \sigma^2 \int_{\mathbb{R}} \Bigl( \int_u^t r'(v-s)\,dv \Bigr)^2 ds \\
&\le \sigma^2 (t-u) \int_{\mathbb{R}} \int_u^t \bigl(r'(v-s)\bigr)^2\,dv\,ds
 = \sigma^2 (t-u) \int_u^t \int_{\mathbb{R}} \bigl(r'(s)\bigr)^2\,ds\,dv
 = \sigma^2 (t-u)^2 \int_{\mathbb{R}} \bigl(r'(s)\bigr)^2\,ds.
\end{aligned}
\]
Now $r' = a r + k * r$; since $r$ is square integrable and $\|k * r\|_{L^2} \le \|k\|_{L^1} \|r\|_{L^2}$, we have $r' \in L^2((0,\infty);\mathbb{R})$. Here $(k * r)(\cdot)$ denotes the convolution of $k$ and $r$, given by $\int_0^{\cdot} k(s)\,r(\cdot - s)\,ds$. The Kolmogorov-Chentsov theorem (see e.g. [27], Theorem 2.8) yields that $X$ has a continuous modification. It remains to show that the process defined by (2.9) solves (2.8). We write
\[
\begin{aligned}
X(t) - X(0)
&= \int_{-\infty}^0 \bigl(r(t-s) - r(-s)\bigr)\,\sigma\,dB(s) + \int_0^t r(t-s)\,\sigma\,dB(s) \\
&= \int_{-\infty}^t \int_{\max(s,0)}^t r'(u-s)\,du\,\sigma\,dB(s) + \sigma B(t) \\
&= \int_{-\infty}^t \int_{\max(s,0)}^t \Bigl( a\,r(u-s) + \int_0^{u-s} r(u-s-v)\,k(v)\,dv \Bigr)\,du\,\sigma\,dB(s) + \sigma B(t) \\
&= \int_0^t \int_{-\infty}^u a\,r(u-s)\,\sigma\,dB(s)\,du + \int_0^t \int_0^\infty k(v) \int_{-\infty}^{u-v} r(u-v-s)\,\sigma\,dB(s)\,dv\,du + \sigma B(t) \\
&= \int_0^t a X(u)\,du + \int_0^t \int_0^\infty k(v)\,X(u-v)\,dv\,du + \sigma B(t).
\end{aligned}
\]
Since $r$ and $B$ are continuous, we are able to apply the stochastic Fubini theorem (e.g. [32], Ch. IV.6, Thm. 65), provided that
\[
\int_{-\infty}^t \int_0^t r^2(u-s)\,du\,ds < \infty
\qquad\text{and}\qquad
\int_{-\infty}^t \int_0^t \Bigl( \int_0^{u-s} r(u-s-v)\,k(v)\,dv \Bigr)^2 du\,ds < \infty.
\]
The statement then follows from the classical Fubini theorem and the facts that $r \in L^2([0,\infty);\mathbb{R})$ and $\|k * r\|_{L^2} \le \|k\|_{L^1} \|r\|_{L^2}$.

7. Proof of Theorem 3.4
Suppose that $r \in C([0,\infty);(0,\infty))$ obeys
\[
(7.1)\qquad r \in \mathrm{RV}_\infty(\mu) \quad\text{for some } \mu \in (-1,-1/2).
\]
Since $r \in L^2([0,\infty);\mathbb{R})$, there exists $c : [0,\infty) \to (0,\infty)$ such that
\[
(7.2)\qquad c(t) = \int_0^\infty r(s)\,r(s+t)\,ds, \qquad t \ge 0.
\]
By assuming (7.1), we exclude the possibility that $r \in L^1([0,\infty);\mathbb{R})$. Our first result is the following rate of decay of $c$.

Theorem 7.1.
Suppose that $r$ is a positive continuous function which obeys (7.1) for some $\mu \in (-1,-1/2)$. Then the function $c$ in (7.2) is well-defined and moreover obeys
\[
(7.3)\qquad \lim_{t\to\infty} \frac{c(t)}{t\,r^2(t)} = \frac{\Gamma(-1-2\mu)\,\Gamma(1+\mu)}{\Gamma(-\mu)} =: L > 0.
\]

Proof.
For $\mu \in (-1,-1/2)$ we have
\[
\int_0^\infty x^\mu (x+1)^\mu\,dx = L.
\]
First we suppose that $r$ is decreasing. In this case we choose, for an arbitrary $0 < \epsilon < 1$, a $\delta = \delta(\epsilon) > 0$ such that $\int_0^\delta x^\mu (x+1)^\mu\,dx < \epsilon$. The Uniform Convergence Theorem ([11], Theorem 1.5.2) yields that $r(tx)/r(t) \to x^\mu$, uniformly in $x$, for all $x \ge \delta$. Hence there exists a $t_0 = t_0(\delta)$ such that
\[
\frac{r(tx)\,r(t(x+1))}{r^2(t)} \le 2\,x^\mu (x+1)^\mu \qquad\text{for all } t \ge t_0,\ x > \delta.
\]
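As a side calculation not spelled out in the text, the value of the constant $L$, i.e. of the integral $\int_0^\infty x^\mu(x+1)^\mu\,dx$, follows from the substitution $x = u/(1-u)$ together with the Beta integral $B(p,q) = \int_0^1 u^{p-1}(1-u)^{q-1}\,du = \Gamma(p)\Gamma(q)/\Gamma(p+q)$:

```latex
% Substituting x = u/(1-u), so that x+1 = 1/(1-u) and dx = (1-u)^{-2} du,
% maps (0,\infty) onto (0,1):
\[
\int_0^\infty x^{\mu}(x+1)^{\mu}\,dx
  = \int_0^1 u^{\mu}\,(1-u)^{-2\mu-2}\,du
  = B(1+\mu,\,-1-2\mu)
  = \frac{\Gamma(1+\mu)\,\Gamma(-1-2\mu)}{\Gamma(-\mu)} = L.
\]
```

Both Gamma arguments in the numerator are positive precisely when $\mu \in (-1,-1/2)$, which is why $L$ is finite and positive on this range.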
The function $2\,x^\mu(x+1)^\mu$ on the right-hand side is integrable; hence the dominated convergence theorem yields that
\[
\lim_{t\to\infty} \int_\delta^\infty \frac{r(tx)\,r(t(x+1))}{r^2(t)}\,dx
 = \int_\delta^\infty \lim_{t\to\infty} \frac{r(tx)\,r(t(x+1))}{r^2(t)}\,dx
 = \int_\delta^\infty x^\mu (x+1)^\mu\,dx.
\]
Hence there exists a $t_1 = t_1(\delta) > t_0$ such that
\[
(7.4)\qquad
\Bigl| L - \int_{\delta t}^\infty \frac{r(s)\,r(t+s)}{t\,r^2(t)}\,ds \Bigr|
 = \Bigl| L - \int_\delta^\infty \frac{r(tx)\,r(t(x+1))}{r^2(t)}\,dx \Bigr|
 \le \epsilon + \Bigl| \int_\delta^\infty x^\mu(x+1)^\mu\,dx - \int_\delta^\infty \frac{r(tx)\,r(t(x+1))}{r^2(t)}\,dx \Bigr|
 \le 2\epsilon
\]
for all $t \ge t_1$. On the other hand, using the monotonicity of $r$ we obtain
\[
\int_0^{\delta t} \frac{r(s)\,r(t+s)}{t\,r^2(t)}\,ds
 \le \int_0^{\delta t} \frac{r(s)}{t\,r(t)}\,ds
 = \frac{R(t)}{t\,r(t)} \cdot \frac{R(\delta t)}{R(t)},
\]
where $R(t) = \int_0^t r(s)\,ds \in \mathrm{RV}_\infty(\mu+1)$. It follows from Karamata's Theorem ([11], Theorem 1.5.11) that
\[
\frac{R(t)}{t\,r(t)} \to \frac{1}{\mu+1}.
\]
Choosing $\delta$ small enough and a $t_2(\delta) > t_1(\delta)$ large enough, we obtain
\[
(7.5)\qquad
\int_0^{\delta t} \frac{r(s)\,r(t+s)}{t\,r^2(t)}\,ds \le 2\,\frac{1}{\mu+1}\,\delta^{\mu+1} \le \epsilon
\qquad\text{for all } t \ge t_2.
\]
Hence, combining (7.5) and (7.4), we get for all $t \ge t_2$
\[
\Bigl| L - \int_0^\infty \frac{r(s)\,r(t+s)}{t\,r^2(t)}\,ds \Bigr|
 \le \Bigl| L - \int_{\delta t}^\infty \frac{r(s)\,r(t+s)}{t\,r^2(t)}\,ds \Bigr|
 + \int_0^{\delta t} \frac{r(s)\,r(t+s)}{t\,r^2(t)}\,ds
 \le 3\epsilon.
\]
Now, for arbitrary $r$ obeying (7.1), let $\rho(x) := \sup\{r(t) : t \ge x\}$. Then $\rho$ is a positive decreasing function, continuous on $[0,\infty)$ and satisfying $\rho(x) \sim r(x)$ as $x \to \infty$ ([11], Theorem 1.5.3). By the first part of the proof, applied to $\rho$, for an arbitrary $\epsilon > 0$ there exists $t_3 = t_3(\epsilon)$ such that for all $t > t_3$ we have
\[
\Bigl| \frac{1}{t\,\rho^2(t)} \int_0^\infty \rho(s)\,\rho(t+s)\,ds - L \Bigr| \le \epsilon.
\]
Since $r(t)/\rho(t) \to 1$ as $t \to \infty$, for every $\varepsilon \in (0,1)$ there exists $t_4 = t_4(\varepsilon) \ge t_3$ such that $1 - \varepsilon < r(t)/\rho(t) < 1 + \varepsilon$ for all $t \ge t_4$. Therefore
\[
(7.6)\qquad
(1-\varepsilon)^2 \le \frac{\int_{t_4}^\infty r(s)\,r(s+t)\,ds}{\int_{t_4}^\infty \rho(s)\,\rho(s+t)\,ds} \le (1+\varepsilon)^2.
\]
For $\varepsilon$ sufficiently small we obtain
\[
(7.7)\qquad
\Bigl| \frac{\int_{t_4}^\infty r(s)\,r(s+t)\,ds}{\int_{t_4}^\infty \rho(s)\,\rho(s+t)\,ds} - 1 \Bigr| \le \epsilon.
\]
Now, since $\rho$ is decreasing, we have for $t \ge t_4$
\[
(7.8)\qquad
\frac{1}{\rho(t)} \int_0^{t_4} r(s)\,r(s+t)\,ds
 = \int_0^{t_4} r(s)\,\frac{r(s+t)}{\rho(s+t)}\,\frac{\rho(s+t)}{\rho(t)}\,ds
 \le (1+\varepsilon) \int_0^{t_4} r(s)\,ds.
\]
Therefore, as $t\rho(t)$ is in $\mathrm{RV}_\infty(\mu+1)$ and $\mu+1 > 0$, we have $t\rho(t) \to \infty$ as $t \to \infty$, and so there exists a $t_5 = t_5(\epsilon) \ge t_4$ such that
\[
\Bigl| \frac{1}{t\,\rho^2(t)} \int_0^{t_4} r(s)\,r(s+t)\,ds \Bigr| \le \epsilon
\qquad\text{and}\qquad
\Bigl| \frac{1}{t\,\rho^2(t)} \int_0^{t_4} \rho(s)\,\rho(s+t)\,ds \Bigr| \le \epsilon
\]
for all $t \ge t_5$. Therefore,
\[
\begin{aligned}
\Bigl| \frac{1}{t\,\rho^2(t)} \int_0^\infty r(s)\,r(s+t)\,ds - L \Bigr|
&\le \Bigl| \frac{1}{t\,\rho^2(t)} \int_0^\infty \rho(s)\,\rho(s+t)\,ds - L \Bigr|
 + \Bigl| \frac{1}{t\,\rho^2(t)} \int_0^{t_4} r(s)\,r(s+t)\,ds \Bigr|
 + \Bigl| \frac{1}{t\,\rho^2(t)} \int_0^{t_4} \rho(s)\,\rho(s+t)\,ds \Bigr| \\
&\qquad + \frac{1}{t\,\rho^2(t)} \int_{t_4}^\infty \rho(s)\,\rho(s+t)\,ds \,
   \Bigl| \frac{\int_{t_4}^\infty r(s)\,r(s+t)\,ds}{\int_{t_4}^\infty \rho(s)\,\rho(s+t)\,ds} - 1 \Bigr|
 \le 3\epsilon + 3L\epsilon = (3 + 3L)\epsilon.
\end{aligned}
\]
Finally we note that
\[
\lim_{t\to\infty} \frac{c(t)}{t\,r^2(t)}
 = \lim_{t\to\infty} \frac{c(t)}{t\,\rho^2(t)} \cdot \frac{\rho^2(t)}{r^2(t)}
 = \lim_{t\to\infty} \frac{c(t)}{t\,\rho^2(t)} = L. \qquad\square
\]
We now explicitly connect the result of Theorem 7.1 to the autocovariance function of the stationary solution of (2.8), in the case when $a = -\int_0^\infty k(s)\,ds$, to prove our main result.

Proof of Theorem 3.4.
It follows from Theorem 3.2 that
\[
(7.9)\qquad \lim_{t\to\infty} t\,r(t) \cdot L(t)\,t^{-\alpha} = \frac{\sin(\alpha\pi)}{\pi}.
\]
Since $\alpha \in (0,1/2)$, we have that $r \in L^2([0,\infty);(0,\infty)) \cap C([0,\infty);(0,\infty))$ and $r \in \mathrm{RV}_\infty(\alpha-1)$ with $\mu := \alpha-1 \in (-1,-1/2)$. Hence Theorem 7.1, applied to $c(\cdot)/\sigma^2$, yields
\[
\lim_{t\to\infty} c(t)\,L^2(t)\,t^{1-2\alpha}
 = \lim_{t\to\infty} \frac{c(t)}{t\,r^2(t)} \cdot \bigl( t\,r(t)\,L(t)\,t^{-\alpha} \bigr)^2
 = \sigma^2\,\frac{\Gamma(1-2\alpha)\,\Gamma(\alpha)}{\Gamma(1-\alpha)} \cdot \frac{\sin^2(\pi\alpha)}{\pi^2},
\]
as claimed. □

References

[1] V. Anh and A. Inoue,
Financial markets with memory. I. Dynamic models, Stoch. Anal. Appl. (2005), no. 2, 275–300.
[2] V. Anh, A. Inoue and Y. Kasahara, Financial markets with memory. II. Innovation processes and expected utility maximization, Stoch. Anal. Appl. (2005), no. 2, 301–328.
[3] J. A. D. Appleby, Subexponential solutions of scalar linear Itô-Volterra equations with damped stochastic perturbations, Functional Differential Equations (2004), no. 1-2, 5–10.
[4] J. A. D. Appleby, Almost sure subexponential decay rates of scalar Itô-Volterra equations, Electron. J. Qual. Theory Differ. Equ., Proc. 7th Coll. QTDE, No. 1 (2004), 1–32.
[5] J. A. D. Appleby, I. Győri, and D. W. Reynolds, On exact rates of decay of solutions of linear systems of Volterra equations with delay, J. Math. Anal. Appl. (2006), no. 1, 56–77.
[6] J. A. D. Appleby, I. Győri, and D. W. Reynolds, On exact convergence rates for solutions of linear systems of Volterra difference equations, Journal of Difference Equations and Applications (2006), no. 12, 1257–1275.
[7] J. A. D. Appleby and D. W. Reynolds, Subexponential solutions of linear Volterra integro-differential equations and transient renewal equations, Proc. Roy. Soc. Edinburgh Sect. A (2002), 521–543.
[8] J. A. D. Appleby and D. W. Reynolds, Subexponential solutions of linear integro-differential equations, Proc. Dyn. Syst. Appl. IV (2004), 488–494.
[9] J. A. D. Appleby and M. Riedle, Stochastic Volterra differential equations in weighted spaces, J. Integral Equations Appl. (2010), no. 1, 1–17.
[10] M. A. Berger and V. J. Mizel, Volterra equations with Itô integrals I, J. Integral Equations (1980), no. 3, 187–245.
[11] N. H. Bingham, C. M. Goldie and J. L. Teugels, Regular Variation, Cambridge University Press, 1989.
[12] R. Bojanic and E. Seneta,
A unified theory of regularly varying sequences, Math. Z. (1973), 91–106.
[13] R. Cont, Long range dependence in financial markets, Fractals in Engineering (E. Lutton and J. Vehel, eds.), Springer (2005), 159–180.
[14] P. Doukhan, M. S. Taqqu and G. Oppenheim (eds.), Theory and Applications of Long-Range Dependence, Birkhäuser, 2001.
[15] A. D. Drozdov and V. B. Kolmanovskiĭ, Stochastic stability of viscoelastic bars, Stochastic Anal. Appl. (1992), no. 3, 265–276.
[16] S. Elaydi, An Introduction to Difference Equations (third ed.), Springer, New York, 2005.
[17] W. Feller, An Introduction to Probability Theory and its Applications, Volume II, Wiley, New York, 1971.
[18] A. Garsia and J. Lamperti, A discrete renewal theorem with infinite mean, Comment. Math. Helv. (1962/63), 221–234.
[19] I. M. Gelfand, D. A. Raikov, and G. E. Shilov, Commutative Normed Rings, Chelsea, New York, 1964.
[20] G. Giacomin,
Random Polymer Models, Imperial College Press, 2007.
[21] A. Gushchin and U. Küchler, On stationary solutions of delay differential equations driven by a Lévy process, Stochastic Processes and their Applications (2000), 195–211.
[22] J. K. Hale and S. M. Verduyn Lunel, Introduction to Functional Differential Equations, Springer-Verlag, New York, 1993.
[23] R. Isaac, Rates of convergence for renewal sequences in the null-recurrent case, J. Austral. Math. Soc. Ser. A 45 (1988), 381–388.
[24] K. Itô and M. Nisio, On stationary solutions of a stochastic differential equation, J. Math. Kyoto Univ. (1964), 1–75.
[25] G. S. Jordan and R. L. Wheeler, Weighted $L^1$-remainder theorems for resolvents of Volterra equations, SIAM J. Math. Anal. (1980), 885–900.
[26] G. S. Jordan, O. Staffans and R. L. Wheeler, Local analyticity in weighted $L^1$-spaces and applications to stability problems for Volterra equations, Trans. Amer. Math. Soc. (1982), 749–782.
[27] I. Karatzas and S. E. Shreve, Brownian Motion and Stochastic Calculus, Springer, 1991.
[28] J. J. Levin,
Resolvents and bounds for linear and nonlinear Volterra equations, Trans. Amer. Math. Soc. (1977), 207–222.
[29] X. Mao and M. Riedle, Mean square stability of stochastic Volterra integro-differential equations, Systems Control Lett. (2006), no. 6, 459–465.
[30] R. K. Miller, On Volterra integral equations with nonnegative integrable resolvents, J. Math. Anal. Appl. (1968), 319–340.
[31] S. E. A. Mohammed, Stochastic Functional Differential Equations, Pitman, Boston, Mass., 1984.
[32] P. E. Protter, Stochastic Integration and Differential Equations, Springer, New York, 2004.
[33] M. Riedle, Solutions of affine stochastic functional differential equations in the state space, Journal of Evolution Equations (2008), 71–97.
[34] D. N. Shanbhag, On renewal sequences, Bull. London Math. Soc. (1977), 79–80.
[35] D. F. Shea and S. Wainger, Variants of the Wiener–Lévy theorem, with applications to stability problems for some Volterra integral equations, Amer. J. Math. (1975), 312–343.

School of Mathematical Sciences, Dublin City University, Dublin 9, Ireland
E-mail address : [email protected] URL : http://webpages.dcu.ie/~applebyj Humboldt–Universit¨at zu Berlin, Institut f¨ur Mathematik, Unter den Linden 6, 10099Berlin, Germany
E-mail address: