Central limit theorems for stationary random fields under weak dependence with application to ambit and mixed moving average fields
Imma Valentina Curato, Robert Stelzer and Bennet Ströh ∗ July 22, 2020
Abstract
We obtain central limit theorems for stationary random fields which are based on the use of a novel measure of dependence called θ-lex weak dependence. We discuss hereditary properties for θ-lex and η-weak dependence and illustrate the possible applications of the weak dependence notions to the study of the asymptotic properties of stationary random fields. Our general results are applied to mixed moving average fields (MMAF in short) and ambit fields. We show general conditions such that MMAF and ambit fields, with the volatility field being an MMAF or a p-dependent random field, are weakly dependent. For all the aforementioned models, we give a complete characterization of their weak dependence coefficients and sufficient conditions to obtain asymptotic normality of their sample moments. Finally, we give explicit computations of the weak dependence coefficients in the case of MSTOU and CARMA fields.

MSC 2020: primary 60G10, 60G57, 60G60, 62M40; secondary 62F10, 62M30.
Keywords: stationary random fields, weak dependence, central limit theorems, mixed moving average fields, CARMA fields, ambit fields.
∗ Ulm University, Institute of Mathematical Finance, Helmholtzstraße 18, 89069 Ulm, Germany. Emails: [email protected], [email protected], [email protected].

Many modern statistical applications consider the modeling of phenomena evolving in time and/or space with either a countable or uncountable index set. To this end, we can employ random fields on $\mathbb{Z}^m$ or $\mathbb{R}^m$ which are defined, for example, as solutions of recurrence equations, e.g. in [34], or of stochastic partial differential equations [16, 25, 51]. Noticeable examples of the latter come from the class of ambit and mixed moving average fields.

Mixed moving average fields, MMAF in short, are defined as
$$X_t = \int_S \int_{\mathbb{R}^m} f(A, t-s)\, \Lambda(dA, ds), \quad t \in \mathbb{R}^m, \qquad (1.1)$$
where $A$ is a random parameter with values in a Polish space $S$, $f$ a deterministic function called kernel and $\Lambda$ a Lévy basis. The above model encompasses Gaussian and non-Gaussian random fields, depending on the choice of the Lévy basis $\Lambda$. Ambit fields are defined by considering an additional multiplicative random function in the integrand of (1.1), called the volatility or intermittency field. However, an ambit field is typically defined without allowing the presence of the random parameter $A$ in its kernel function. We refer the reader to [6] for a comprehensive introduction to ambit fields, which provide a rich class of spatio-temporal models on $\mathbb{R} \times \mathbb{R}^m$. Overall, MMAF and ambit fields are used in many applications throughout different disciplines, like geophysics [38], brain imaging [40], physics [11], biology [8, 10], economics and finance [3, 5, 22, 46, 47].

The generality and flexibility of these models motivate an in-depth analysis of their properties. If we consider purely temporal ambit fields, i.e. Lévy semistationary processes, the authors of [7, 9, 14] obtain infill asymptotic results for this class of processes, that is, under the assumption that the number of observations in a given interval approaches infinity.
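To fix ideas numerically, the following sketch (not from the paper; the kernel $f(A,u) = e^{-A\|u\|}$, the shifted exponential law of the mixing parameter $A$, the Gaussian choice of the Lévy basis and the truncation window are all illustrative assumptions) simulates a crude discretized analogue of the MMAF (1.1) on a grid in $\mathbb{R}^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mma_field(n=20, trunc=15, dx=0.5):
    """Crude discretization of X_t = int_S int_{R^2} f(A, t-s) Lambda(dA, ds).
    Kernel f(A, u) = exp(-A ||u||); A drawn per cell from a shifted exponential;
    Lambda approximated by iid centered Gaussian cell increments (illustrative)."""
    N = n + 2 * trunc                                 # enlarged grid for the truncated integral
    A = 0.1 + rng.exponential(1.0, size=(N, N))       # one draw of the mixing parameter per cell
    dL = rng.normal(0.0, dx, size=(N, N))             # Gaussian basis, Var = cell area dx^2
    k = np.arange(-trunc, trunc + 1) * dx             # relative lags inside the window
    U, V = np.meshgrid(k, k, indexing="ij")
    R = np.hypot(U, V)                                # ||t - s|| on the lag grid
    X = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            block_A = A[i:i + 2 * trunc + 1, j:j + 2 * trunc + 1]
            block_L = dL[i:i + 2 * trunc + 1, j:j + 2 * trunc + 1]
            X[i, j] = np.sum(np.exp(-block_A * R) * block_L)
    return X

X = simulate_mma_field()
print(X.shape)
```

The resulting field is stationary by construction up to discretization and truncation error; such simulations are useful only as intuition, since all results below are exact and purely analytical.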
For ambit fields on $\mathbb{R} \times \mathbb{R}^m$ with $m \geq 1$, and already for an MMAF on $\mathbb{R}$, i.e. a mixed moving average process, several difficulties arise in showing that the field is strongly mixing, see [26]. Usually, strong mixing is established by using a Markovian representation and showing geometric ergodicity of it. In turn, this often requires smoothness conditions on the driving random noise, and it is well known that even autoregressive processes of order one are not strongly mixing when the distribution of the noise is not sufficiently regular, see [1]. Since we are interested in central limit results which do not require heavy assumptions on $\Lambda$ or a Markovian representation, a different measure of dependence is required. Gaussian MMAF on $\mathbb{R}^m$ for $m \geq 1$ can be shown to be α-mixing; however, for general driving Lévy bases no results regarding the strong mixing of MMAF can be found in the literature. On the other hand, if we look at the concept of association, a powerful measure of dependence which allows sharp central limit results (see [23, 45] for a comprehensive introduction to this topic), we are capable of obtaining central limit theorems for MMAF only under restrictive conditions on the kernel function $f$ in (1.1), see e.g. Theorem 3.27 in [23]. Moreover, association is inherited only under monotone functions, which restricts the possible extension of its related asymptotic theory. At last, we mention the results in [15], where the author shows asymptotic normality of the sample mean and sample autocovariance of a moving average field, a sub-class of the MMAF. This line of proof is not directly applicable to the study of higher order sample moments and therefore we do not pursue it.

We are interested in studying asymptotics of the partial sums (and of higher sample moments) of MMAF and ambit fields in general, i.e. without imposing regularity conditions on the driving Lévy basis $\Lambda$ apart from moment conditions. To do so, we define a new notion of dependence called θ-lex weak dependence. This tool can overcome the bottlenecks identified in the literature above. We want to emphasize that, although all the examples of our theory will be taken from the aforementioned model classes, we present general central limit theorem results which can be applied to different stationary random fields.

In order to introduce θ-lex weak dependence, we first present the notions of η- and θ-weak dependence, introduced for stochastic processes in [32] and [29], respectively. η-weak dependence is typically associated with the study of non-causal processes, whereas θ-weak dependence is connected to the analysis of causal ones. Central limit theorems for θ-weakly dependent processes hold under weaker conditions compared to results for η-weakly dependent processes (different demands on the decay rate of the η- and θ-coefficients, as determined in Theorem 2.2 [35] and Theorem 2 [29]). The definitions of η- and θ-weak dependence can easily be extended to the field case by following Remark 2.1 [30]. However, only for η-weakly dependent random fields have the asymptotics of the partial sums of the field $X$ been analyzed so far, in [33]. We aim to determine a central limit theorem which improves the results obtained in [33].
This is achieved by defining the notion of θ-lex weak dependence, which is a modification of the original definition of θ-weak dependence. In fact, we can show that for θ-lex-weakly dependent random fields the sufficient conditions of a very powerful central limit theorem from Dedecker [27] hold. Moreover, we obtain hereditary properties for θ-lex- and η-weakly dependent random fields which allow us to easily extend the asymptotic results under weak dependence to the study of sample moment estimators.

We then look at the class of MMAF. We distinguish in our theory between influenced and non-influenced MMAF, see Definition 3.8. Influenced MMAF represent a possible extension of causal mixed moving average processes (Section 3.2 [26]) to random fields. Hence, we show that influenced MMAF are θ-lex-weakly dependent and that non-influenced MMAF are η-weakly dependent, with coefficients computable in terms of the kernel function $f$ and the characteristic quadruplet of the Lévy basis $\Lambda$. From this we notice that in the case of influenced MMAF the conditions ensuring asymptotic normality of the partial sums of $X$ are weaker, in terms of the decay rate of the weak dependence coefficients, in comparison with the ones obtained for non-influenced MMAF. We then observe a parallel between our results and the ones obtained for causal and non-causal mixed moving average processes [26]. Moreover, we exploit the hereditary properties of η- as well as θ-lex-weak dependence and obtain conditions for the asymptotic normality of the sample moments of order $p$ with $p \geq 2$. Finally, we analyze ambit fields whose volatility field is given by an MMAF or by a $p$-dependent random field which is independent of the Lévy basis $\Lambda$. Under these assumptions, we show that homogeneous and stationary ambit fields are θ-lex-weakly dependent and give sufficient conditions on the θ-lex-coefficients to ensure asymptotic normality of the sample moments.

The paper is structured as follows. In Section 2, we introduce η-weak dependence and the novel θ-lex weak dependence.
In Section 2.2 we state central limit theorems for θ-lex-weakly dependent random fields in an ergodic, non-ergodic and multivariate setting. Additionally, we provide some insight into possible functional extensions of the theorem. In Section 3, we discuss weak dependence properties of MMAF. We first give a comprehensive introduction to Lévy bases and the related integration theory, which leads to the formal definition of an MMAF. We then discuss conditions for an MMAF to be η- or θ-lex-weakly dependent and the related sample moment asymptotics. In Section 3.7, we apply the developed theory to MSTOU processes and give explicit conditions assuring the asymptotic normality of their sample moments under a Gamma distributed mean reversion parameter. We conclude Section 3 with an application of the theory to Lévy-driven CARMA fields. In Section 4, we discuss weak dependence properties and related limit theorems for ambit fields. Section 5 contains the detailed proofs of most of the results presented in the paper.

We assume that all random elements in this paper are defined on a given complete probability space $(\Omega, \mathcal{A}, P)$. By $\mathbb{N}$ we denote the set of non-negative integers, $\mathbb{N}^*$ the set of positive integers, and $\mathbb{R}_+$ the set of non-negative real numbers. For $x \in \mathbb{R}^d$ we define $|x| = \|x\|_\infty = \max_{j=1,\ldots,d} |x^{(j)}|$, and $\|x\|$ denotes the Euclidean norm of $x$. For a function $F: \mathbb{R}^d \to \mathbb{R}^k$ we define $\|F\|_\infty = \sup_{t \in \mathbb{R}^d} \|F(t)\|$, and $\|X\|_p$ for $p > 0$ denotes the $L^p$-norm of a random vector $X$. In the following, Lipschitz continuous is understood to mean globally Lipschitz. For a random field $X = (X_t)_{t \in \mathbb{R}^m}$ and a finite set $\Gamma \subset \mathbb{R}^m$ with $\Gamma = (i_1, \ldots, i_u)$ we define the vector $X_\Gamma = (X_{i_1}, \ldots, X_{i_u})$. Finally, $A \subset B$ denotes a not necessarily proper subset $A$ of a set $B$, and $|B|$ denotes the cardinality of $B$.
For $u, n \in \mathbb{N}^*$, let $\mathcal{F}^*_u$ be the class of bounded functions from $(\mathbb{R}^n)^u$ to $\mathbb{R}$ and $\mathcal{F}_u$ the class of bounded, Lipschitz continuous functions from $(\mathbb{R}^n)^u$ to $\mathbb{R}$ with respect to the distance $\delta$ on $(\mathbb{R}^n)^u$ defined by
$$\delta((x_1, \ldots, x_u), (y_1, \ldots, y_u)) = \sum_{i=1}^u \delta_1(x_i, y_i),$$
where $\delta_1$ is the Euclidean distance on $\mathbb{R}^n$, i.e. $\delta_1(x_i, y_i) = \|x_i - y_i\|$. Now define
$$\mathcal{F} = \bigcup_{u \in \mathbb{N}^*} \mathcal{F}_u, \qquad \mathcal{F}^* = \bigcup_{u \in \mathbb{N}^*} \mathcal{F}^*_u,$$
and for $G \in \mathcal{F}_u$
$$\mathrm{Lip}(G) = \sup_{x \neq y} \frac{|G(x) - G(y)|}{\|x_1 - y_1\| + \ldots + \|x_u - y_u\|}.$$

Definition 2.1 ([30, Definition 2.2 and Remark 2.1]). Let $X = (X_t)_{t \in \mathbb{R}^m}$ be an $\mathbb{R}^n$-valued random field. Then, $X$ is called η-weakly dependent if
$$\eta(h) = \sup_{u, v \in \mathbb{N}^*} \eta_{u,v}(h) \xrightarrow[h \to \infty]{} 0,$$
where
$$\eta_{u,v}(h) = \sup \left\{ \frac{|\mathrm{Cov}(F(X_\Gamma), G(X_{\tilde\Gamma}))|}{u \|G\|_\infty \mathrm{Lip}(F) + v \|F\|_\infty \mathrm{Lip}(G)} : F, G \in \mathcal{F},\ \Gamma, \tilde\Gamma \subset \mathbb{R}^m,\ |\Gamma| = u,\ |\tilde\Gamma| = v,\ \mathrm{dist}(\Gamma, \tilde\Gamma) \geq h \right\},$$
with $\mathrm{dist}(\Gamma, \tilde\Gamma) = \inf_{i \in \Gamma, j \in \tilde\Gamma} \|i - j\|_\infty$. We call $(\eta(h))_{h \in \mathbb{R}_+}$ the η-coefficients.

In the following, we consider the lexicographic order on $\mathbb{R}^m$: for distinct elements $y = (y_1, \ldots, y_m) \in \mathbb{R}^m$ and $z = (z_1, \ldots, z_m) \in \mathbb{R}^m$, we say $y <_{lex} z$ if and only if $y_1 < z_1$, or $y_p < z_p$ and $y_q = z_q$ for some $p \in \{2, \ldots, m\}$ and all $q = 1, \ldots, p-1$. Furthermore, we say $y \leq_{lex} z$ if $y <_{lex} z$ or $y = z$ holds. Let us define the sets
$$V_t = \{ s \in \mathbb{R}^m : s <_{lex} t \} \cup \{t\} \quad \text{and} \quad V_t^h = V_t \cap \{ s \in \mathbb{R}^m : \|t - s\|_\infty \geq h \}, \quad h > 0.$$
The same definitions of the sets $V_t$ and $V_t^h$ are used when referring to the lexicographic order on $\mathbb{Z}^m$.
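The lexicographic order and the sets $V_t^h$ can be made concrete in a few lines of code (an illustrative sketch, not part of the paper), here for points on $\mathbb{Z}^m$:

```python
import numpy as np

def lex_less(y, z):
    """Strict lexicographic order y <_lex z: the first differing coordinate decides."""
    for a, b in zip(y, z):
        if a != b:
            return a < b
    return False  # y == z is not strictly smaller

def in_V_h(s, t, h):
    """Membership in V_t^h = ({s : s <_lex t} U {t}) intersected with {||t - s||_inf >= h}."""
    in_V_t = lex_less(s, t) or tuple(s) == tuple(t)
    return in_V_t and np.max(np.abs(np.asarray(t) - np.asarray(s))) >= h

# (-3, 7) <_lex (0, 0) since -3 < 0, and its sup-distance from the origin is 7 >= 2
print(lex_less((-3, 7), (0, 0)), in_V_h((-3, 7), (0, 0), 2))
```

Note that for $h > 0$ the point $t$ itself never belongs to $V_t^h$, since its distance to $t$ is $0$; this matches the role of $V_j^h$ as the "lexicographic past at distance at least $h$" in Definition 2.2.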
Definition 2.2. Let $X = (X_t)_{t \in \mathbb{R}^m}$ be an $\mathbb{R}^n$-valued random field. Then, $X$ is called θ-lex-weakly dependent if
$$\theta(h) = \sup_{u \in \mathbb{N}^*} \theta_u(h) \xrightarrow[h \to \infty]{} 0,$$
where
$$\theta_u(h) = \sup \left\{ \frac{|\mathrm{Cov}(F(X_\Gamma), G(X_j))|}{\|F\|_\infty \mathrm{Lip}(G)} : F \in \mathcal{F}^*,\ G \in \mathcal{F},\ j \in \mathbb{R}^m,\ \Gamma \subset V_j^h,\ |\Gamma| = u \right\}.$$
We call $(\theta(h))_{h \in \mathbb{R}_+}$ the θ-lex-coefficients.

Remark 2.3.
Our definition of θ-lex-weak dependence differs from the θ-weak dependence definition given in Remark 2.1 [30]. In fact, instead of considering the covariance of two arbitrary finite dimensional samples $X_\Gamma$ and $X_{\tilde\Gamma}$, for $\Gamma, \tilde\Gamma \subset \mathbb{R}^m$, we control the covariance of a finite dimensional sample $X_\Gamma$ and an arbitrary one point sample $X_j$. Secondly, by assuming that all points in the sampling set $\Gamma$ are lexicographically smaller than $j$, we provide an order in the sampling scheme.

Let $X = (X_t)_{t \in \mathbb{R}^m}$ be θ-lex- or η-weakly dependent and $h: \mathbb{R}^n \to \mathbb{R}^k$ be an arbitrary Lipschitz function; then the field $(h(X_t))_{t \in \mathbb{R}^m}$ is also θ-lex- or η-weakly dependent, respectively. The latter can be readily checked based on Definitions 2.1 and 2.2. In the next proposition, we give conditions for hereditary properties under functions that are only locally Lipschitz continuous. The proof of the result below is analogous to Proposition 3.2 [26].

Proposition 2.4.
Let $X = (X_t)_{t \in \mathbb{R}^m}$ be an $\mathbb{R}^n$-valued stationary random field and assume that there exists a constant $C > 0$ such that $E[\|X_t\|^p] \leq C$ for some $p > 1$. Let $h: \mathbb{R}^n \to \mathbb{R}^k$ be a function such that $h(0) = 0$, $h(x) = (h_1(x), \ldots, h_k(x))$ and
$$\|h(x) - h(y)\| \leq c\, \|x - y\| \left( 1 + \|x\|^{a-1} + \|y\|^{a-1} \right),$$
for $x, y \in \mathbb{R}^n$, $c > 0$ and $1 \leq a < p$. Define $Y = (Y_t)_{t \in \mathbb{R}^m}$ by $Y_t = h(X_t)$. If $X$ is η- or θ-lex-weakly dependent, then $Y$ is η- or θ-lex-weakly dependent, respectively, with coefficients
$$\eta_Y(h) \leq \tilde C\, \eta_X(h)^{\frac{p-a}{p-1}} \quad \text{or} \quad \theta_Y(h) \leq \tilde C\, \theta_X(h)^{\frac{p-a}{p-1}}$$
for all $h > 0$ and a constant $\tilde C$ independent of $h$.

2.2 Central limit theorems for θ-lex-weakly dependent random fields

In the theory of stochastic processes, one of the typical ways to prove central limit type results is to approximate the process of interest by a sequence of martingale differences. This approach was first introduced by Gordin [36]. However, the latter does not apply to high-dimensional random fields as successfully as to processes. This unpleasant circumstance has been known among researchers for almost 40 years: Bolthausen [17] noted that martingale approximation appears to be a difficult concept to generalize to dimensions greater than or equal to two.

For stationary random fields $X = (X_t)_{t \in \mathbb{Z}^m}$, Dedecker derived a central limit result in [27] under the projective criterion
$$\sum_{k \in V_0^1} \left| X_k\, E[X_0 \mid \mathcal{F}_{\Gamma(k)}] \right| \in L^1, \quad \text{for } \mathcal{F}_{\Gamma(k)} = \sigma(X_j : j \in V_0^{|k|}). \qquad (2.1)$$
This condition is weaker than martingale-type assumptions and provides optimal results for mixing random fields. We show in this section that (2.1) is also fulfilled by appropriate θ-lex-weakly dependent random fields.

In the following, by stationarity we mean stationarity in the strict sense. Let $\Gamma$ be a subset of $\mathbb{Z}^m$. We define $\partial\Gamma = \{ i \in \Gamma : \exists\, j \notin \Gamma : \|i - j\|_\infty = 1 \}$. Let $(D_n)_{n \in \mathbb{N}}$ be a sequence of finite subsets of $\mathbb{Z}^m$ such that
$$\lim_{n \to \infty} |D_n| = \infty \quad \text{and} \quad \lim_{n \to \infty} \frac{|\partial D_n|}{|D_n|} = 0.$$

Theorem 2.5.
Let $X = (X_t)_{t \in \mathbb{Z}^m}$ be a stationary centered real-valued random field such that $E[|X_t|^{2+\delta}] < \infty$ for some $\delta > 0$. Additionally, assume that $\theta(h) \in O(h^{-\alpha})$ with $\alpha > m \frac{1+\delta}{\delta}$. Define
$$\sigma^2 = \sum_{k \in \mathbb{Z}^m} E[X_0 X_k \mid \mathcal{I}],$$
where $\mathcal{I}$ is the σ-algebra of shift invariant sets as defined in [27, Section 2] (see [41, Chapter 1] for an introduction to ergodic theory). Then, $\sigma^2$ is finite, non-negative and
$$\frac{1}{|D_n|^{1/2}} \sum_{j \in D_n} X_j \xrightarrow[n \to \infty]{d} \varepsilon \sigma, \qquad (2.2)$$
where $\varepsilon$ is a standard normally distributed random variable which is independent of $\sigma$.

Proof. See Section 5.1.

In the following we give an ergodic multivariate extension of the previous theorem.
Corollary 2.6.
Let $X = (X_t)_{t \in \mathbb{Z}^m}$ be a stationary ergodic centered $\mathbb{R}^n$-valued random field such that $E[\|X_t\|^{2+\delta}] < \infty$ for some $\delta > 0$. Additionally, let us assume that $\theta(h) \in O(h^{-\alpha})$ with $\alpha > m \frac{1+\delta}{\delta}$. Then
$$\Sigma = \sum_{k \in \mathbb{Z}^m} E[X_0 X_k'],$$
is finite, positive definite and
$$\frac{1}{|D_n|^{1/2}} \sum_{j \in D_n} X_j \xrightarrow[n \to \infty]{d} N(0, \Sigma),$$
where $N(0, \Sigma)$ denotes the multivariate normal distribution with mean $0$ and covariance matrix $\Sigma$.

Proof. First, the univariate result follows directly from Theorem 2.5, since $X$ is ergodic. Now let $X$ be multivariate. Since linear functions are Lipschitz, we note that for all $a \in \mathbb{R}^n$, $a' X_t$ is a θ-lex-weakly dependent field with θ-lex-coefficients smaller than or equal to those of $X_t$. Then
$$\frac{1}{|D_n|^{1/2}} \sum_{j \in D_n} a' X_j \xrightarrow[n \to \infty]{d} N(0, a' \Sigma a).$$
Applying the Cramér–Wold device, the asymptotic normality of the sample mean follows immediately.
Remark 2.7. It is natural to ask for conditions for a functional extension of Theorem 2.5. As a matter of fact, results of this kind are strongly related to the following $L^p$-projective criterion
$$\sum_{k \in V_0^1} E\big[ \big| X_k\, E[X_0 \mid \mathcal{F}_{V_0^{|k|}}] \big|^p \big] < \infty, \quad p \in [1, \infty], \qquad (2.3)$$
where $\mathcal{F}_\Gamma = \sigma(X_k, k \in \Gamma)$. If (2.3) holds for $p = 1$, then [27, Theorem 1] provides a non-functional central limit theorem for stationary random fields which yields Theorem 2.5. Now, possible functional extensions depend on the dimension of the domain of the random field. When $m = 1$, Dedecker and Rio showed in [31, Theorem] that if (2.3) holds for $p = 1$, then a functional central limit theorem holds. In the general case $m > 1$, Dedecker proved in [28, Theorem 1] a functional central limit theorem if (2.3) holds for $p > 1$. Since we can establish the connection between the $L^p$-projective criterion (2.3) and the summability condition on the θ-lex-coefficients of $X$ just for $p = 1$, there is no functional extension of Theorem 2.5 readily obtainable, except for $m = 1$ (see [26, Remark 4.2]).

In this section, we first introduce MMAF driven by a Lévy basis. Then, we discuss weak dependence properties of such MMAF and derive sufficient conditions such that the asymptotic results of Section 2.2 apply.
Let $S$ denote a non-empty Polish space, $\mathcal{B}(S)$ the Borel σ-algebra on $S$, $\pi$ some probability measure on $(S, \mathcal{B}(S))$ and $\mathcal{B}_b(S \times \mathbb{R}^m)$ the bounded Borel sets of $S \times \mathbb{R}^m$.
Consider a family
$\Lambda = \{\Lambda(B), B \in \mathcal{B}_b(S \times \mathbb{R}^m)\}$ of $\mathbb{R}^d$-valued random variables. Then $\Lambda$ is called an $\mathbb{R}^d$-valued Lévy basis or infinitely divisible independently scattered random measure on $S \times \mathbb{R}^m$ if
(i) the distribution of $\Lambda(B)$ is infinitely divisible (ID) for all $B \in \mathcal{B}_b(S \times \mathbb{R}^m)$,
(ii) for arbitrary $n \in \mathbb{N}$ and pairwise disjoint sets $B_1, \ldots, B_n \in \mathcal{B}_b(S \times \mathbb{R}^m)$ the random variables $\Lambda(B_1), \ldots, \Lambda(B_n)$ are independent, and
(iii) for any pairwise disjoint sets $B_1, B_2, \ldots \in \mathcal{B}_b(S \times \mathbb{R}^m)$ with $\bigcup_{n \in \mathbb{N}} B_n \in \mathcal{B}_b(S \times \mathbb{R}^m)$ we have, almost surely, $\Lambda\big(\bigcup_{n \in \mathbb{N}} B_n\big) = \sum_{n \in \mathbb{N}} \Lambda(B_n)$.

In the following we will restrict ourselves to Lévy bases which are homogeneous in space and time and factorisable, i.e. Lévy bases with characteristic function
$$\varphi_{\Lambda(B)}(u) = E\big[ e^{i \langle u, \Lambda(B) \rangle} \big] = e^{\Phi(u)\, \Pi(B)}, \qquad (3.1)$$
for all $u \in \mathbb{R}^d$ and $B \in \mathcal{B}_b(S \times \mathbb{R}^m)$, where $\Pi = \pi \times \lambda$ is the product of the probability measure $\pi$ on $S$ and the Lebesgue measure $\lambda$ on $\mathbb{R}^m$. Furthermore,
$$\Phi(u) = i \langle \gamma, u \rangle - \frac{1}{2} \langle u, \Sigma u \rangle + \int_{\mathbb{R}^d} \left( e^{i \langle u, x \rangle} - 1 - i \langle u, x \rangle\, \mathbf{1}_{[0,1]}(\|x\|) \right) \nu(dx), \qquad (3.2)$$
is the cumulant transform of an ID distribution with characteristic triplet $(\gamma, \Sigma, \nu)$, where $\gamma \in \mathbb{R}^d$, $\Sigma \in M_{d \times d}(\mathbb{R})$ is a symmetric positive semidefinite matrix and $\nu$ is a Lévy measure on $\mathbb{R}^d$, i.e. $\nu(\{0\}) = 0$ and $\int_{\mathbb{R}^d} (1 \wedge \|x\|^2)\, \nu(dx) < \infty$. The quadruplet $(\gamma, \Sigma, \nu, \pi)$ determines the distribution of the Lévy basis completely and therefore it is called the characteristic quadruplet. Following [50], it can be shown that a Lévy basis has a Lévy–Itô decomposition.

Theorem 3.2.
Let $\{\Lambda(B), B \in \mathcal{B}_b(S \times \mathbb{R}^m)\}$ be an $\mathbb{R}^d$-valued Lévy basis on $S \times \mathbb{R}^m$ with characteristic quadruplet $(\gamma, \Sigma, \nu, \pi)$. Then there exists a modification $\tilde\Lambda$ of $\Lambda$ which is also a Lévy basis with characteristic quadruplet $(\gamma, \Sigma, \nu, \pi)$, such that there exists an $\mathbb{R}^d$-valued Lévy basis $\tilde\Lambda_G$ on $S \times \mathbb{R}^m$ with characteristic quadruplet $(0, \Sigma, 0, \pi)$ and an independent Poisson random measure $\mu$ on $(\mathbb{R}^d \times S \times \mathbb{R}^m, \mathcal{B}(\mathbb{R}^d \times S \times \mathbb{R}^m))$ with intensity measure $\nu \times \pi \times \lambda$ such that
$$\tilde\Lambda(B) = \gamma\, (\pi \times \lambda)(B) + \tilde\Lambda_G(B) + \int_{\|x\| \leq 1} \int_B x\, (\mu(dx, dA, ds) - ds\, \pi(dA)\, \nu(dx)) + \int_{\|x\| > 1} \int_B x\, \mu(dx, dA, ds), \qquad (3.3)$$
for all $B \in \mathcal{B}_b(S \times \mathbb{R}^m)$.

If the Lévy measure additionally fulfills $\int_{\|x\| \leq 1} \|x\|\, \nu(dx) < \infty$, it holds that
$$\tilde\Lambda(B) = \gamma_0\, (\pi \times \lambda)(B) + \tilde\Lambda_G(B) + \int_{\mathbb{R}^d} \int_B x\, \mu(dx, dA, ds), \qquad (3.4)$$
for all $B \in \mathcal{B}_b(S \times \mathbb{R}^m)$, with
$$\gamma_0 := \gamma - \int_{\|x\| \leq 1} x\, \nu(dx). \qquad (3.5)$$
Note that the integral with respect to $\mu$ exists ω-wise as a Lebesgue integral.

Proof. Analogous to [12, Theorem 2.2].

We refer the reader to [39, Section 2.1] for further details on integration with respect to Poisson random measures. From now on, we assume that any Lévy basis has a decomposition (3.3). Let us recall the following multivariate extension of [52, Theorem 2.7]. We denote by $A'$ the transpose of a matrix $A$ in what follows.

Theorem 3.3. Let
$\Lambda = \{\Lambda(B), B \in \mathcal{B}_b(S \times \mathbb{R}^m)\}$ be an $\mathbb{R}^d$-valued Lévy basis with characteristic quadruplet $(\gamma, \Sigma, \nu, \pi)$ and $f: S \times \mathbb{R}^m \to M_{n \times d}(\mathbb{R})$ be a $\mathcal{B}(S \times \mathbb{R}^m)$-measurable function. Then $f$ is $\Lambda$-integrable in the sense of [52] if and only if
$$\int_S \int_{\mathbb{R}^m} \left\| f(A, s)\, \gamma + \int_{\mathbb{R}^d} f(A, s)\, x \left( \mathbf{1}_{[0,1]}(\|f(A, s) x\|) - \mathbf{1}_{[0,1]}(\|x\|) \right) \nu(dx) \right\| ds\, \pi(dA) < \infty, \qquad (3.6)$$
$$\int_S \int_{\mathbb{R}^m} \| f(A, s)\, \Sigma\, f(A, s)' \|\, ds\, \pi(dA) < \infty \quad \text{and} \qquad (3.7)$$
$$\int_S \int_{\mathbb{R}^m} \int_{\mathbb{R}^d} \left( 1 \wedge \|f(A, s)\, x\|^2 \right) \nu(dx)\, ds\, \pi(dA) < \infty. \qquad (3.8)$$
If $f$ is $\Lambda$-integrable, the distribution of the stochastic integral $\int_S \int_{\mathbb{R}^m} f(A, s)\, \Lambda(dA, ds)$ is ID with characteristic triplet $(\gamma_{int}, \Sigma_{int}, \nu_{int})$ given by
$$\gamma_{int} = \int_S \int_{\mathbb{R}^m} \left( f(A, s)\, \gamma + \int_{\mathbb{R}^d} f(A, s)\, x \left( \mathbf{1}_{[0,1]}(\|f(A, s) x\|) - \mathbf{1}_{[0,1]}(\|x\|) \right) \nu(dx) \right) ds\, \pi(dA),$$
$$\Sigma_{int} = \int_S \int_{\mathbb{R}^m} f(A, s)\, \Sigma\, f(A, s)'\, ds\, \pi(dA) \quad \text{and}$$
$$\nu_{int}(B) = \int_S \int_{\mathbb{R}^m} \int_{\mathbb{R}^d} \mathbf{1}_B(f(A, s)\, x)\, \nu(dx)\, ds\, \pi(dA),$$
for all Borel sets $B \subset \mathbb{R}^n \setminus \{0\}$.

Proof. Analogous to [12, Proposition 2.3].

Implicitly, we always assume that $\Sigma_{int}$ or $\nu_{int}$ is different from zero throughout the paper, to rule out the deterministic case. For $m = 1$ it is known that the Lévy–Itô decomposition simplifies if the underlying Lévy process $L_t = \Lambda(S \times (0, t])$ is of finite variation (which holds if and only if $\Sigma = 0$ and $\int_{|x| \leq 1} |x|\, \nu(dx) < \infty$). Extending this one-dimensional notion, we speak of the finite variation case whenever $\Sigma = 0$ and $\int_{\|x\| \leq 1} \|x\|\, \nu(dx) < \infty$.

Corollary 3.4.
Let
$\Lambda = \{\Lambda(B), B \in \mathcal{B}_b(S \times \mathbb{R}^m)\}$ be an $\mathbb{R}^d$-valued Lévy basis with characteristic quadruplet $(\gamma, 0, \nu, \pi)$ satisfying $\int_{\|x\| \leq 1} \|x\|\, \nu(dx) < \infty$, and define $\gamma_0$ as in (3.5), such that for $\Phi(u)$ in (3.1) we have $\Phi(u) = i \langle \gamma_0, u \rangle + \int_{\mathbb{R}^d} \big( e^{i \langle u, x \rangle} - 1 \big)\, \nu(dx)$. Furthermore, let $f: S \times \mathbb{R}^m \to M_{n \times d}(\mathbb{R})$ be a $\mathcal{B}(S \times \mathbb{R}^m)$-measurable function satisfying
$$\int_S \int_{\mathbb{R}^m} \| f(A, s)\, \gamma_0 \|\, ds\, \pi(dA) < \infty \quad \text{and} \qquad (3.9)$$
$$\int_S \int_{\mathbb{R}^m} \int_{\mathbb{R}^d} \left( 1 \wedge \|f(A, s)\, x\| \right) \nu(dx)\, ds\, \pi(dA) < \infty. \qquad (3.10)$$
Then,
$$\int_S \int_{\mathbb{R}^m} f(A, s)\, \Lambda(dA, ds) = \int_S \int_{\mathbb{R}^m} f(A, s)\, \gamma_0\, ds\, \pi(dA) + \int_{\mathbb{R}^d} \int_S \int_{\mathbb{R}^m} f(A, s)\, x\, \mu(dx, dA, ds),$$
where the right hand side denotes an ω-wise Lebesgue integral. Additionally, the distribution of the stochastic integral $\int_S \int_{\mathbb{R}^m} f(A, s)\, \Lambda(dA, ds)$ is ID with characteristic function
$$E\big[ e^{i \langle u, \int_S \int_{\mathbb{R}^m} f(A, s)\, \Lambda(dA, ds) \rangle} \big] = e^{i \langle u, \gamma_{int,0} \rangle + \int_{\mathbb{R}^d} ( e^{i \langle u, x \rangle} - 1 )\, \nu_{int}(dx)}, \quad u \in \mathbb{R}^n,$$
where
$$\gamma_{int,0} = \int_S \int_{\mathbb{R}^m} f(A, s)\, \gamma_0\, ds\, \pi(dA), \qquad \nu_{int}(B) = \int_S \int_{\mathbb{R}^m} \int_{\mathbb{R}^d} \mathbf{1}_B(f(A, s)\, x)\, \nu(dx)\, ds\, \pi(dA).$$

Definition 3.5.
Let
$\Lambda = \{\Lambda(B), B \in \mathcal{B}_b(S \times \mathbb{R}^m)\}$ be an $\mathbb{R}^d$-valued Lévy basis and let $f: S \times \mathbb{R}^m \to M_{n \times d}(\mathbb{R})$ be a $\mathcal{B}(S \times \mathbb{R}^m)$-measurable function satisfying the conditions (3.6), (3.7) and (3.8). Then the stochastic integral
$$X_t := \int_S \int_{\mathbb{R}^m} f(A, t-s)\, \Lambda(dA, ds) \qquad (3.11)$$
is stationary and well-defined for all $t \in \mathbb{R}^m$, and its distribution is ID. The random field $X$ is called an $\mathbb{R}^n$-valued mixed moving average field (MMAF) and $f$ its kernel function.

In the following result we give conditions ensuring finite moments of an MMAF and explicit formulas for the first- and second-order moments.
Proposition 3.6.
Let $X$ be an $\mathbb{R}^n$-valued MMAF driven by an $\mathbb{R}^d$-valued Lévy basis with characteristic quadruplet $(\gamma, \Sigma, \nu, \pi)$ and with $\Lambda$-integrable kernel function $f: S \times \mathbb{R}^m \to M_{n \times d}(\mathbb{R})$.
(i) If $\int_{\|x\|>1} \|x\|^r\, \nu(dx) < \infty$ and $f \in L^r(S \times \mathbb{R}^m, \pi \otimes \lambda)$ for $r \in [2, \infty)$, then $E[\|X_t\|^r] < \infty$ for all $t \in \mathbb{R}^m$.
(ii) If $\int_{\|x\|>1} \|x\|^r\, \nu(dx) < \infty$ and $f \in L^r(S \times \mathbb{R}^m, \pi \otimes \lambda) \cap L^2(S \times \mathbb{R}^m, \pi \otimes \lambda)$ for $r \in (0, 2]$, then $E[\|X_t\|^r] < \infty$ for all $t \in \mathbb{R}^m$.
Consider the finite variation case, i.e. $\Sigma = 0$ and $\int_{\|x\| \leq 1} \|x\|\, \nu(dx) < \infty$. Then the following holds:
(i) If $\int_{\|x\|>1} \|x\|^r\, \nu(dx) < \infty$ and $f \in L^r(S \times \mathbb{R}^m, \pi \otimes \lambda)$ for $r \in [1, \infty)$, then $E[\|X_t\|^r] < \infty$.
(ii) If $\int_{\|x\|>1} \|x\|^r\, \nu(dx) < \infty$ and $f \in L^r(S \times \mathbb{R}^m, \pi \otimes \lambda) \cap L^1(S \times \mathbb{R}^m, \pi \otimes \lambda)$ for $r \in (0, 1]$, then $E[\|X_t\|^r] < \infty$.

Proof. Analogous to [26, Proposition 2.6].
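For intuition, the finite variation case admits an exact pathwise interpretation: for a compound-Poisson-type Lévy basis with drift $\gamma_0 = 0$, the atoms $(A_i, s_i)$ of the underlying Poisson random measure carry iid jumps $x_i$, and (3.11) reduces to the sum $X_t = \sum_i f(A_i, t - s_i)\, x_i$. The sketch below (all concrete choices — kernel, intensity, mixing law, jump law, simulation window — are illustrative assumptions, not from the paper) evaluates such a field at two points:

```python
import numpy as np

rng = np.random.default_rng(2)

# Atoms (A_i, s_i) of a Poisson random measure with intensity c * (pi x lambda) on
# S x [-w, w]^2 and iid Gaussian jump sizes x_i; with gamma_0 = 0, the stochastic
# integral (3.11) is the pathwise sum over atoms.
c, w = 2.0, 20.0
n_atoms = rng.poisson(c * (2 * w) ** 2)
A_i = 0.5 + rng.exponential(1.0, size=n_atoms)   # mixing parameter, pi = shifted Exp(1)
s_i = rng.uniform(-w, w, size=(n_atoms, 2))      # atom locations in space
x_i = rng.normal(size=n_atoms)                   # jump sizes with all moments finite

def X(t):
    """Evaluate the simulated MMAF at t; kernel f(A, u) = exp(-A ||u||) lies in L^1 and L^2."""
    weights = np.exp(-A_i * np.linalg.norm(t - s_i, axis=1))
    return np.sum(weights * x_i)

print(X(np.zeros(2)), X(np.ones(2)))
```

Since the jump law here has moments of all orders and the kernel is in every $L^r$, the moment conditions of Proposition 3.6 are trivially satisfied in this toy example.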
Proposition 3.7.
Let $X$ be an $\mathbb{R}^n$-valued MMAF driven by an $\mathbb{R}^d$-valued Lévy basis with characteristic quadruplet $(\gamma, \Sigma, \nu, \pi)$ and with $\Lambda$-integrable kernel function $f: S \times \mathbb{R}^m \to M_{n \times d}(\mathbb{R})$.
(i) If $\int_{\|x\|>1} \|x\|\, \nu(dx) < \infty$ and $f \in L^1(S \times \mathbb{R}^m, \pi \times \lambda) \cap L^2(S \times \mathbb{R}^m, \pi \times \lambda)$, the first moment of $X$ is given by
$$E[X_t] = \int_S \int_{\mathbb{R}^m} f(A, -s)\, \mu_\Lambda\, ds\, \pi(dA),$$
where $\mu_\Lambda = \gamma + \int_{\|x\|>1} x\, \nu(dx)$.
(ii) If $\int_{\mathbb{R}^d} \|x\|^2\, \nu(dx) < \infty$ and $f \in L^2(S \times \mathbb{R}^m, \pi \times \lambda)$, then $X_t \in L^2$ and
$$\mathrm{Var}(X_t) = \int_S \int_{\mathbb{R}^m} f(A, -s)\, \Sigma_\Lambda\, f(A, -s)'\, ds\, \pi(dA) \quad \text{and}$$
$$\mathrm{Cov}(X_0, X_t) = \int_S \int_{\mathbb{R}^m} f(A, -s)\, \Sigma_\Lambda\, f(A, t-s)'\, ds\, \pi(dA),$$
where $\Sigma_\Lambda = \Sigma + \int_{\mathbb{R}^d} x x'\, \nu(dx)$.
(iii) Consider the finite variation case, i.e. $\Sigma = 0$ and $\int_{\|x\| \leq 1} \|x\|\, \nu(dx) < \infty$. If $\int_{\|x\|>1} \|x\|\, \nu(dx) < \infty$ and $f \in L^1(S \times \mathbb{R}^m, \pi \times \lambda)$, the first moment of $X$ is given by
$$E[X_t] = \int_S \int_{\mathbb{R}^m} f(A, -s) \left( \gamma_0 + \int_{\mathbb{R}^d} x\, \nu(dx) \right) ds\, \pi(dA),$$
with $\gamma_0$ as defined in (3.5).

Proof. Immediate from [54, Section 25] and Theorem 3.3.

(A, Λ)-influenced MMAF

Since there is no natural order on $\mathbb{R}^m$ for $m > 1$, we now introduce the framework of influenced MMAF, which allows us to establish θ-lex-weak dependence for MMAF falling within it. Examples will be presented in Section 3.7.

Definition 3.8.
Let $X = (X_t)_{t \in \mathbb{R}^m}$ be a random field, $A = (A_t)_{t \in \mathbb{R}^m}$ a family of Borel sets $A_t \subset \mathbb{R}^m$ and $M = \{M(B), B \in \mathcal{B}_b(S \times \mathbb{R}^m)\}$ an independently scattered random measure. Assume that $X_t$ is measurable with respect to $\sigma(M(B), B \in \mathcal{B}_b(S \times A_t))$. We then call $A$ the sphere of influence, $M$ the influencer, $(\sigma(M(B), B \in \mathcal{B}_b(S \times A_t)))_{t \in \mathbb{R}^m}$ the filtration of influence and $X$ an $(A, M)$-influenced random field. If $A$ is translation invariant, i.e. $A_t = t + A_0$, the sphere of influence is fully described by the set $A_0$ and we call $A_0$ the initial sphere of influence.

Note that for $m = 1$, the class of causal mixed moving average processes driven by a Lévy basis $\Lambda$ equals the class of $(A, \Lambda)$-influenced mixed moving average processes driven by $\Lambda$ with $A_t = V_t$.

Let $A = (A_t)_{t \in \mathbb{R}^m}$ be a full dimensional, translation invariant sphere of influence with initial sphere of influence $A_0$. In this section we consider the filtration $(\mathcal{A}_t)_{t \in \mathbb{R}^m}$ generated by $\Lambda$, i.e. the σ-algebras generated by the sets of random variables $\{\Lambda(B) : B \in \mathcal{B}(S \times A_t)\}$, $t \in \mathbb{R}^m$. Consider an MMAF $X$ that is adapted to $(\mathcal{A}_t)_{t \in \mathbb{R}^m}$. Then, $X$ is $(A, \Lambda)$-influenced and can be written as
$$X_t = \int_S \int_{\mathbb{R}^m} f(A, t-s)\, \Lambda(dA, ds) = \int_S \int_{A_t} f(A, t-s)\, \Lambda(dA, ds). \qquad (3.12)$$
Note that the translation invariance of $A$ is required to ensure stationarity of $X$.

In the following we discuss under which assumptions an $(A, \Lambda)$-influenced MMAF is θ-lex-weakly dependent. We start with a preliminary definition.

Definition 3.9 ([18, Definition 2.4.1]). $K \subset \mathbb{R}^m$ is called a closed convex proper cone if it satisfies the following properties:
(i) $K + K \subset K$ (ensures convexity),
(ii) $\alpha K \subset K$ for all $\alpha \geq 0$ (ensures that $K$ is a cone),
(iii) $K$ is closed,
(iv) $K$ is pointed (i.e. $x \in K$ and $-x \in K \Rightarrow x = 0$).

We then apply a truncation technique to show that $X$ is θ-lex-weakly dependent. Define $X_j$, $X_\Gamma$ as in Definition 2.2 such that $j \in \mathbb{R}^m$ and $\Gamma \subset V_j^h$ (see Figure 1).
We truncate $X_j$ such that the truncation $\tilde X_j$ and $X_\Gamma$ become independent. From our construction it will become clear that it is enough to find a truncation such that $\tilde X_j$ and $X_i$ are independent for the lexicographically greatest point $i \in V_j^h$. For a given point $j$, we determine the truncation of $X_j$ by removing from the integration set its intersection with $V_j^\psi$, for a suitable $\psi > 0$ chosen such that the truncated integration set $A_j \setminus V_j^\psi$ does not intersect $A_i$ (see Figures 2 and 3). In the following we will describe the choice of $\psi$. The figures illustrate the case $m = 2$.

Let $i \in V_j^h$ be the lexicographically greatest point in $V_j^h$, i.e. $k \leq_{lex} i$ for all $k \in V_j^h$. In the following, $\mathrm{dist}(A, B) = \inf_{a \in A, b \in B} \|a - b\|$ denotes the Euclidean distance of the sets $A$ and $B$. To ensure the existence of the above truncation, we assume that there exists an $\alpha \in \mathbb{R}^m \setminus \{0\}$ such that
$$\sup_{x \in A_0,\, x \neq 0} \frac{\alpha' x}{\|x\|} < 0. \qquad (3.13)$$
Intuitively, (3.13) ensures that the initial sphere of influence $A_0$ can be covered by a closed convex proper cone. Moreover, w.l.o.g., by applying a rotation to $A_0$, we can always assume to work with $A_0 \subset V_0$. The following remark discusses such a transformation.

Remark 3.10.
Let $A_0$ be a full dimensional subset of a half-space such that $A_0 \not\subset V_0$. Define the translation invariant sphere of influence $A = (A_t)_{t \in \mathbb{R}^m}$ by $A_t = A_0 + t$ and consider the $(A, \Lambda)$-influenced MMAF $X = (X_t)_{t \in \mathbb{R}^m}$ of the form $X_t = \int_S \int_{A_0 + t} f(A, t-s)\, \Lambda(dA, ds)$. Note that if $A_0$ were not full dimensional, $X$ would be degenerate, since the Lebesgue measure of $A_0$ is zero. Define the hyperplane $D = \{x \in \mathbb{R}^m : \alpha' x = 0\}$. Using the principal axis theorem, we find an orthogonal matrix $O$ such that the axis of the first coordinate is orthogonal to the rotated hyperplane $OD$. Since $O$ is orthogonal, it holds that $|\mathrm{Det}(D\varphi)(u)| = |\mathrm{Det}(O)| = 1$, where $D\varphi$ denotes the Jacobian matrix of the function $\varphi: u \mapsto Ou$. Additionally, for the rotated initial set $OA_0$ it holds that $OA_0 \setminus V_0 \subset \{0\} \times [0, \infty)^{m-1}$, such that $\lambda(\{0\} \times [0, \infty)^{m-1}) = 0$. By substitution for multiple variables, we get for $\tilde t = Ot$
$$X_t = \int_S \int_{\mathbb{R}^m} f(A, t-s)\, \mathbf{1}_{A_0 + t}(s)\, \Lambda(dA, ds) = \int_S \int_{\mathbb{R}^m} f(A, O^{-1}(Ot - Os))\, \mathbf{1}_{OA_0 + Ot}(Os)\, \Lambda(dA, ds)$$
$$= \int_S \int_{OA_0 + \tilde t} f(A, O^{-1}(\tilde t - \tilde s))\, \Lambda(dA, d\tilde s) = \int_S \int_{(OA_0 \cap V_0) + \tilde t} f(A, O^{-1}(\tilde t - \tilde s))\, \Lambda(dA, d\tilde s)$$
$$= \int_S \int_{V_{\tilde t}} \tilde f_O(A, \tilde t - \tilde s)\, \Lambda(dA, d\tilde s) = \tilde X_{\tilde t}, \qquad (3.14)$$
with $\tilde f_O(A, \tilde t - \tilde s) = f(A, O^{-1}(\tilde t - \tilde s))\, \mathbf{1}_{\{\tilde s \in OA_0 + \tilde t\}}$.

Figure 4 pictures the smallest closed convex proper cone covering $A_i$, which is called $\tilde K$. Note that all conditions can be formulated in terms of $A_0$, since the sphere of influence $A$ is translation invariant.

[Figure 1: Integration sets $A_j$ and $A_i$ of $X_j$ and $X_i$. Figure 2: $A_j$ and $A_i$ together with $V_j^\psi$. Figure 3: Integration sets $A_i$ and $A_j \setminus V_j^\psi$ of $X_i$ and $\tilde X_j$.]

In order to choose $\psi$, we first define
$$b = \sup_{x \in A_0,\, \|x\| = 1} \frac{\alpha' x}{\|\alpha\|} < 0, \qquad \tilde K = \left\{ x \in \mathbb{R}^m : \frac{\alpha' x}{\|x\|} \leq b \right\}, \qquad (3.15)$$
where the inequality $b < 0$ follows from (3.13) (see Figure 4). It holds that $-1 \leq b < 0$. For $x_1, x_2 \in \tilde K$ it holds that
$$\frac{\alpha'(x_1 + x_2)}{\|x_1 + x_2\|} \leq \frac{\alpha' x_1}{\|x_1 + x_2\|} + \frac{\alpha' x_2}{\|x_1 + x_2\|} \leq b\, \frac{\|x_1\| + \|x_2\|}{\|x_1 + x_2\|} \leq b,$$
such that $\tilde K$ is a closed convex proper cone. It can be interpreted as the smallest equiangular closed convex proper cone that contains $A_0$. Then, $\cos(\beta + \pi/2) = b$, such that $\beta = \arcsin(-b) \in (0, \pi/2]$ (see Figure 5), and $\mathrm{dist}(j, \tilde K) \geq \sin(\beta)\, h = -bh$ (see Figure 6). We choose $\psi$ as
$$\psi(h) = \frac{-bh}{\sqrt{m}}. \qquad (3.16)$$
In particular, we have $\psi(h) = O(h)$.

Let $l \in V_j^h$ now be an arbitrary point. From the given choice of $\psi$ and $i$ it holds that $\mathrm{dist}(l, j) \geq \mathrm{dist}(i, j)$, $A_i \cap (A_j \setminus V_j^\psi) = \emptyset$, $A_i = i + A_0 \subset i + \tilde K$ and $A_l = l + A_0 \subset l + \tilde K$. Since $\tilde K$ is an equiangular closed convex proper cone, we get $A_l \cap (A_j \setminus V_j^\psi) = \emptyset$.
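The construction of $b$, $\beta$ and $\psi$ can be checked numerically in a toy case (an illustrative sketch, not from the paper): take $m = 2$, $\alpha = (1, 0)'$ and $A_0 = \{x \in \mathbb{R}^2 : x_1 \leq -|x_2|\}$, for which (3.13) holds and, analytically, $b = -1/\sqrt{2}$, $\beta = \pi/4$ and $\psi(h) = h/2$:

```python
import numpy as np

# Example initial sphere of influence in m = 2: A_0 = {x : x_1 <= -|x_2|},
# which satisfies (3.13) with alpha = (1, 0)'.
m = 2
alpha = np.array([1.0, 0.0])

# b = sup over unit vectors x in A_0 of alpha' x / ||alpha||, approximated on a grid
thetas = np.linspace(0.0, 2.0 * np.pi, 200001)
dirs = np.column_stack([np.cos(thetas), np.sin(thetas)])
in_cone = dirs[:, 0] <= -np.abs(dirs[:, 1])
b = np.max(dirs[in_cone] @ alpha) / np.linalg.norm(alpha)

beta = np.arcsin(-b)                 # opening angle, from cos(beta + pi/2) = b
psi = lambda h: -b * h / np.sqrt(m)  # truncation level (3.16), psi(h) = O(h)

print(round(b, 3), round(psi(2.0), 3))  # -0.707 1.0
```

The grid approximation recovers $b = -1/\sqrt{2} \approx -0.707$ and $\psi(2) = 1$, matching the closed-form values for this cone.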
Choice of α and β -8 -7 -6 -5 -4 -3 -2 -1 0 1-3-2-101234567 Figure 5:
Choice of ˜ K -8 -7 -6 -5 -4 -3 -2 -1 0 1-3-2-101234567 Figure 6:
Construction of ψ Hence, the conditions below, which are expressed in terms of the kernel function f and the characteristic quadruplet of the driving Lévy basis, are sufficient to show that an( A, Λ)-influenced MMAF is θ -lex-weakly dependent. Proposition 3.11.
Let Λ be an R^d-valued Lévy basis with characteristic quadruplet (γ, Σ, ν, π) and f: S × R^m → M_{n×d}(R) a B(S × R^m)-measurable function. Consider the (A,Λ)-influenced MMAF

X_t = ∫_S ∫_{A_t} f(A, t−s) Λ(dA, ds), t ∈ R^m,

with translation invariant sphere of influence A such that (3.13) holds.

(i) If ∫_{‖x‖>1} ‖x‖² ν(dx) < ∞, γ + ∫_{‖x‖>1} x ν(dx) = 0 and f ∈ L²(S × R^m, π ⊗ λ), then X is θ-lex-weakly dependent with θ-lex-coefficients satisfying

θ_X(h) ≤ ( ∫_S ∫_{A_0 ∩ V_0^{ψ(h)}} tr(f(A, −s) Σ_Λ f(A, −s)^T) ds π(dA) )^{1/2} = θ̂_X^{(i)}(h).   (3.17)

(ii) If ∫_{‖x‖>1} ‖x‖² ν(dx) < ∞ and f ∈ L¹(S × R^m, π ⊗ λ) ∩ L²(S × R^m, π ⊗ λ), then X is θ-lex-weakly dependent with θ-lex-coefficients satisfying

θ_X(h) ≤ ( ∫_S ∫_{A_0 ∩ V_0^{ψ(h)}} tr(f(A, −s) Σ_Λ f(A, −s)^T) ds π(dA) + ‖ ∫_S ∫_{A_0 ∩ V_0^{ψ(h)}} f(A, −s) μ_Λ ds π(dA) ‖² )^{1/2} = θ̂_X^{(ii)}(h).   (3.18)

(iii) If ∫_{R^d} ‖x‖ ν(dx) < ∞, Σ = 0 and f ∈ L¹(S × R^m, π ⊗ λ), with γ_0 as in (3.5), then X is θ-lex-weakly dependent with θ-lex-coefficients satisfying

θ_X(h) ≤ ∫_S ∫_{A_0 ∩ V_0^{ψ(h)}} ‖f(A, −s) γ_0‖ ds π(dA) + ∫_S ∫_{A_0 ∩ V_0^{ψ(h)}} ∫_{R^d} ‖f(A, −s) x‖ ν(dx) ds π(dA) = θ̂_X^{(iii)}(h).   (3.19)

(iv) If ∫_{‖x‖>1} ‖x‖ ν(dx) < ∞ and f ∈ L¹(S × R^m, π ⊗ λ) ∩ L²(S × R^m, π ⊗ λ), then X is θ-lex-weakly dependent with θ-lex-coefficients satisfying

θ_X(h) ≤ ( ∫_S ∫_{A_0 ∩ V_0^{ψ(h)}} tr(f(A, −s) Σ_Λ f(A, −s)^T) ds π(dA) + ‖ ∫_S ∫_{A_0 ∩ V_0^{ψ(h)}} f(A, −s) γ ds π(dA) ‖² )^{1/2} + 2 ∫_S ∫_{A_0 ∩ V_0^{ψ(h)}} ∫_{‖x‖>1} ‖f(A, −s) x‖ ν(dx) ds π(dA) = θ̂_X^{(iv)}(h),

for all h > 0, with ψ as defined in (3.16). Furthermore, Σ_Λ = Σ + ∫_{R^d} x x^T ν(dx) and μ_Λ = γ + ∫_{‖x‖≥1} x ν(dx).

Proof.
See Section 5.2.

In the next proposition we consider a vector of shifted versions of a real-valued (A,Λ)-influenced MMAF and show that it is θ-lex-weakly dependent. This result is necessary to analyze, for example, the asymptotic behavior of the sample autocovariances. Define the set of possible shifts

S_k = {(a,b) ∈ {0,...,k} × {−k,...,k}^{m−1}}, k ∈ N,   (3.20)

and consider the enumeration {s_1,...,s_{|S_k|}} of S_k, where |S_k| = (k+1)(2k+1)^{m−1}. Besides the hereditary properties from Proposition 2.4, we show that weak dependence properties are inherited by the field

Z_t = (X_t, X_{t+s_1}, X_{t+s_2}, ..., X_{t+s_{|S_k|}}).   (3.21)

Proposition 3.12.
Let Λ be an R^d-valued Lévy basis with characteristic quadruplet (γ, Σ, ν, π) and f: S × R^m → M_{1×d}(R) be a Λ-integrable, B(S × R^m)-measurable function. Consider the (A,Λ)-influenced MMAF

X_t = ∫_S ∫_{A_t} f(A, t−s) Λ(dA, ds), t ∈ R^m,

with translation invariant sphere of influence A such that (3.13) holds. Then

Z_t := ∫_S ∫_{A_t} g(A, t−s) Λ(dA, ds), t ∈ R^m,

where g(A, s) = (f(A, s), f(A, s−s_1), ..., f(A, s−s_{|S_k|}))^T is a B(S × R^m)-measurable function with values in M_{(k+1)(2k+1)^{m−1}×d}(R) for k ∈ N, is an (A,Λ)-influenced MMAF. If X additionally satisfies the conditions of Proposition 3.11 (i), (ii), (iii) or (iv), then Z is θ-lex-weakly dependent with coefficients respectively given by

θ_Z^{(i)}(h) ≤ D θ̂_X^{(i)}(h − ψ^{−1}(k)), θ_Z^{(ii)}(h) ≤ D θ̂_X^{(ii)}(h − ψ^{−1}(k)),
θ_Z^{(iii)}(h) ≤ C θ̂_X^{(iii)}(h − ψ^{−1}(k)) and θ_Z^{(iv)}(h) ≤ C θ̂_X^{(iv)}(h − ψ^{−1}(k)),   (3.22)

where D = |S_k| m^{1/2} and C = |S_k| m, for ψ(h) > k, with the corresponding θ̂^{(·)}(h) from Proposition 3.11.

Proof. See Section 5.2.

(A,Λ)-influenced MMAF

Let us consider an R^n-valued (A,Λ)-influenced MMAF X = (X_u)_{u∈Z^m} with

X_u = ∫_S ∫_{A_u} f(A, u−s) Λ(dA, ds),   (3.23)

with full-dimensional translation invariant sphere of influence A and initial sphere of influence A_0 ⊂ V_0 such that (3.13) holds. We assume that we observe X on finite sampling sets D_n ⊂ Z^m such that

lim_{n→∞} |D_n| = ∞ and lim_{n→∞} |∂D_n|/|D_n| = 0.   (3.24)

We note that this includes in particular the equidistant sampling

E_n = (0, n]^m ∩ Z^m, such that |E_n| = n^m, n ∈ N.   (3.25)

The sample mean of the random field X is then defined as

(1/|D_n|) Σ_{u∈D_n} X_u.   (3.26)

If ∫_{‖x‖>1} ‖x‖² ν(dx) < ∞, we define the centered MMAF X̃_u = X_u − E[X_u] and the sample autocovariance on E_n at lag k ∈ N × Z^{m−1} as

(1/|E_{n−k̃}|) Σ_{u∈E_{n−k̃}} X̃_u X̃_{u+k}, k ∈ N × Z^{m−1},   (3.27)

where k̃ = ‖k‖_∞.
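The cardinality formula |S_k| = (k+1)(2k+1)^{m−1} from (3.20) is easy to verify by brute-force enumeration. The following sketch (plain Python; the values of m and k and the enumeration order are illustrative choices, not from the paper) builds the shift set explicitly:

```python
import itertools

def shift_set(k, m):
    """Enumerate S_k = {0,...,k} x {-k,...,k}^(m-1) from (3.20)."""
    first = range(0, k + 1)                 # non-negative first coordinate
    rest = [range(-k, k + 1)] * (m - 1)     # unrestricted remaining coordinates
    return list(itertools.product(first, *rest))

# Example: k = 2, m = 3 gives |S_k| = (2+1) * (2*2+1)^2 = 75 shifts.
S = shift_set(2, 3)
```

Any fixed ordering of the returned list serves as the enumeration {s_1, ..., s_{|S_k|}} used in (3.21).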
Let us start by analyzing the asymptotic properties of the sample mean (3.26) for a centered (A,Λ)-influenced MMAF.

Theorem 3.13.
Let X = (X_u)_{u∈Z^m} be an (A,Λ)-influenced MMAF as defined in (3.23) such that ∫_{‖x‖>1} ‖x‖^{2+δ} ν(dx) < ∞, γ + ∫_{‖x‖>1} x ν(dx) = 0 and f ∈ L²(S × R^m, π ⊗ λ) ∩ L^{2+δ}(S × R^m, π ⊗ λ) for some δ > 0. Assume that X has θ-lex-coefficients satisfying θ_X(h) = O(h^{−α}), where α > m(1 + 1/δ). Then

Σ = Σ_{k∈Z^m} E[X_0 X_k^T]

is finite, positive semidefinite and

|D_n|^{−1/2} Σ_{j∈D_n} X_j →d N(0, Σ) as n → ∞.   (3.28)

Proof.
By [49, Theorem 3.6], an MMAF is ergodic. The result then follows from Corollary 2.6.

In the theorem above, the initial sphere of influence A_0 must satisfy (3.13). Additionally, we observe a trade-off between moment conditions on X and the decay rate of the θ-lex coefficients. However, one can derive similar results for the sample mean of an MMAF by relaxing condition (3.13) and exploiting the second-order moment structure of an MMAF. On the other hand, the following technique does not carry over to higher moments.

Theorem 3.14.
Let X = (X_u)_{u∈Z^m} be an (A,Λ)-influenced MMAF defined by

X_u = ∫_S ∫_{A_u} f(A, u−s) Λ(dA, ds),

with full-dimensional translation invariant sphere of influence A and initial sphere of influence A_0 ⊂ V_0, such that γ + ∫_{‖x‖>1} x ν(dx) = 0 and E[‖X_0‖²] < ∞. Assume that X has θ-lex-coefficients satisfying θ_X(h) = O(h^{−α}), where α > m. Then

Σ = Σ_{k∈Z^m} E[X_0 X_k^T]

is finite, positive definite and

|D_N|^{−1/2} Σ_{j∈D_N} X_j →d N(0, Σ) as N → ∞.   (3.29)

Proof.
See Section 5.3.

To lighten notation, in the following we assume that X is real-valued and centered, i.e. E[X] = 0. In order to derive asymptotic properties for the distribution of (3.27) we need to show weak dependence properties of the random field Y = (Y_{j,k})_{j∈Z^m} defined as

Y_{j,k} = X_j X_{j+k} − R(k), k ∈ N × Z^{m−1},   (3.30)

where

R(k) = Cov(X_0, X_k) = E[X_0 X_k] = ∫_S ∫_{A_0 ∩ A_k} f(A, −s) Σ_Λ f(A, k−s)^T ds π(dA), k ∈ N × Z^{m−1},

with Σ_Λ = Σ + ∫_{R^d} x x^T ν(dx) for an (A,Λ)-influenced MMAF X with characteristic quadruplet (γ, Σ, ν, π). The last equality follows from Proposition 3.7.

Proposition 3.15.
Let X = (X_u)_{u∈Z^m} be a real-valued (A,Λ)-influenced MMAF as defined in (3.23) such that E[X] = 0 and E[‖X‖^{2+δ}] < ∞ for some δ > 0, with θ-lex-coefficients θ_X. Then (Y_{j,k})_{j∈Z^m}, k ∈ N × Z^{m−1}, as defined in (3.30) is θ-lex-weakly dependent with coefficients

θ_Y(h) ≤ C ( √2 θ̂_X^{(i)}( h − ψ^{−1}(‖k‖_∞) ) )^{δ/(1+δ)},

where C is a constant independent of h, θ̂_X^{(i)} is from Proposition 3.12, and ψ is as defined in (3.16). Furthermore, in the finite variation case it holds that

θ_Y(h) ≤ C ( 2 θ̂_X^{(iii)}( h − ψ^{−1}(‖k‖_∞) ) )^{δ/(1+δ)}.

Proof.
Consider the 2-dimensional process Z = (X_j, X_{j+k})_{j∈Z^m} with k ∈ N × Z^{m−1}. Proposition 3.12 implies that Z is θ-lex-weakly dependent, and from the proof we obtain the coefficients

θ_Z(h) ≤ √2 θ̂_X^{(i)}(h − ψ^{−1}(‖k‖_∞)) for ψ(h) > ‖k‖_∞.

Consider the function h: R² → R with h(x_1, x_2) = x_1 x_2. The function h satisfies the assumptions of Proposition 2.4 for p = 2+δ, c = 1 and a = 2. Considering h(Z) = X_j X_{j+k}, we obtain the θ-lex-coefficients of (Y_{j,k})_{j∈Z^m}:

θ_Y(h) ≤ C ( √2 θ̂_X^{(i)}(h − ψ^{−1}(‖k‖_∞)) )^{δ/(1+δ)} for ψ(h) > ‖k‖_∞.

The coefficients for the finite variation case can be obtained from Proposition 2.4 and (3.22).

The next corollary gives asymptotic properties of the sample autocovariances (3.27) for an (A,Λ)-influenced MMAF, i.e. we can give a distributional limit theorem for the process (Y_{j,k})_{j∈Z^m} by determining the asymptotic distribution of

(1/|E_{n−k̃}|) Σ_{j∈E_{n−k̃}} Y_{j,k}, k ∈ N × Z^{m−1},

where k̃ = ‖k‖_∞.

Corollary 3.16. Let X = (X_u)_{u∈Z^m} be a real-valued (A,Λ)-influenced MMAF as defined in (3.23) such that E[X] = 0 and E[|X|^{4+2δ}] < ∞ for some δ > 0. If θ̂_X^{(i)}(h) = O(h^{−α}), with θ̂_X^{(i)} from Proposition 3.12 and α > m ((1+δ)/δ) ((3+2δ)/(2+2δ)), then

Σ = Σ_{l∈Z^m} Cov( (Y_{0,s_1}, ..., Y_{0,s_{|S_k|}}), (Y_{l,s_1}, ..., Y_{l,s_{|S_k|}}) )
  = Σ_{l∈Z^m} Cov( (X_0 X_{s_1}, ..., X_0 X_{s_{|S_k|}}), (X_l X_{l+s_1}, ..., X_l X_{l+s_{|S_k|}}) )

is finite, positive semidefinite and

|E_{n−k̃}|^{−1/2} Σ_{j∈E_{n−k̃}} (Y_{j,s_1}, ..., Y_{j,s_{|S_k|}})^T →d N(0, Σ) as n → ∞,

where k̃ = ‖k‖_∞.

Proof. Analogous to Theorem 3.13, we obtain the stated convergence using Proposition 3.15.
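The sample autocovariance (3.27) is a plain spatial average and can be coded in a few lines. The sketch below (plain Python, m = 2; the i.i.d. Gaussian toy field is an illustrative stand-in for a simulated MMAF, for which the true autocovariance at nonzero lags is 0):

```python
import random

def sample_autocov(X, k, n):
    """Sample autocovariance (3.27) of a centered field X on Z^2:
    average of X_u * X_{u+k} over u in E_{n - k~}, k~ = ||k||_inf."""
    kt = max(abs(k[0]), abs(k[1]))          # k~ = ||k||_inf
    pts = [(i, j) for i in range(1, n - kt + 1)
                  for j in range(1, n - kt + 1)]
    return sum(X[u] * X[(u[0] + k[0], u[1] + k[1])] for u in pts) / len(pts)

# Toy centered field: i.i.d. standard normal noise on a grid large enough
# to contain all shifted indices u + k.
random.seed(1)
n = 60
X = {(i, j): random.gauss(0.0, 1.0) for i in range(-n, 2 * n + 1)
                                    for j in range(-n, 2 * n + 1)}
v0 = sample_autocov(X, (0, 0), n)    # estimates Var(X_0) = 1
v1 = sample_autocov(X, (1, -1), n)   # estimates R((1,-1)) = 0
```

Note that the first lag coordinate is restricted to be non-negative, matching k ∈ N × Z^{m−1} in (3.27).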
Corollary 3.17.
Let X = ( X u ) u ∈ Z m be a real-valued ( A, Λ) -influenced MMAF as definedin (3.23) and p ≥ such that E [ | X | p + δ ] < ∞ for some δ > . If ˆ θ ( i ) X ( h ) = O ( h − α ) , with ˆ θ ( i ) X from Proposition 3.12 and α > m (cid:16) δ (cid:17) ( p − δp + δ ) , then Σ = X k ∈ Z m Cov ( X p , X pk ) , is finite, positive semidefinite and | E n − ˜ k | X j ∈ E n − ˜ k ( X pj − E [ X p ]) d −−−→ N →∞ N (0 , Σ) , where ˜ k = k k k ∞ . Remark 3.18.
The theory developed in this section is an important first step towards showing asymptotic normality of parametric estimators based on moment functions, such as the generalized method of moments (for a comprehensive introduction see [37]). An example of the application of the weak dependence properties and related central limit theorems to the study of GMM estimators can be found in [26, Section 6.1], where the authors analyze parametric estimators of the supOU process.
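On the computational side, the geometric quantities entering the results of this section, the constant b from (3.15) and the truncation function ψ from (3.16), are straightforward to approximate numerically. The sketch below is a plain Python illustration; the two-dimensional cone, the vector α and the sampling grid are illustrative choices, not taken from the paper. For the cone A = {(s, ξ): s ≤ 0, |ξ| ≤ c|s|} with α = (1, 0), a direct computation gives b = −1/√(1+c²):

```python
import math

def b_constant(alpha, points):
    """Approximate b = sup{ <alpha, x>/||x|| : x in A, x != 0 } from (3.15)."""
    best = -float("inf")
    for x in points:
        norm = math.hypot(x[0], x[1])
        if norm > 0:
            best = max(best, (alpha[0] * x[0] + alpha[1] * x[1]) / norm)
    return best

def psi(h, b, m):
    """Truncation function psi(h) = -b h / sqrt(m) from (3.16)."""
    return -b * h / math.sqrt(m)

# Illustrative cone A = {(s, xi): s <= 0, |xi| <= c|s|} with c = 1; since the
# ratio <alpha,x>/||x|| is scale invariant, sampling the slice s = -1 suffices.
c = 1.0
pts = [(-1.0, c * i / 1000) for i in range(-1000, 1001)]
b = b_constant((1.0, 0.0), pts)     # should be close to -1/sqrt(1 + c^2)
```

Here the supremum is attained on the boundary rays |ξ| = c|s|, which the grid includes.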
In this subsection we consider a general MMAF X = (X_t)_{t∈R^m} as defined in (3.11), i.e.

X_t = ∫_S ∫_{R^m} f(A, t−s) Λ(dA, ds), t ∈ R^m,

and discuss under which assumptions a non-influenced MMAF is η-weakly dependent. Note that we do not demand any additional assumption on the structure of X as assumed in Sections 3.2 and 3.3.

Proposition 3.19. Let Λ be an R^d-valued Lévy basis with characteristic quadruplet (γ, Σ, ν, π) and f: S × R^m → M_{n×d}(R) a B(S × R^m)-measurable function. Consider the MMAF X = (X_t)_{t∈R^m} with

X_t = ∫_S ∫_{R^m} f(A, t−s) Λ(dA, ds), t ∈ R^m.

(i) If ∫_{‖x‖>1} ‖x‖² ν(dx) < ∞, γ + ∫_{‖x‖>1} x ν(dx) = 0 and f ∈ L²(S × R^m, π ⊗ λ), then X is η-weakly dependent with η-coefficients satisfying

η_X(h) ≤ ( ∫_S ∫_{((−h,h)^m)^c} tr(f(A, −s) Σ_Λ f(A, −s)^T) ds π(dA) )^{1/2} = η̂_X^{(i)}(h).

(ii) If ∫_{‖x‖>1} ‖x‖² ν(dx) < ∞ and f ∈ L¹(S × R^m, π ⊗ λ) ∩ L²(S × R^m, π ⊗ λ), then X is η-weakly dependent with η-coefficients satisfying

η_X(h) ≤ ( ∫_S ∫_{((−h,h)^m)^c} tr(f(A, −s) Σ_Λ f(A, −s)^T) ds π(dA) + ‖ ∫_S ∫_{((−h,h)^m)^c} f(A, −s) μ_Λ ds π(dA) ‖² )^{1/2} = η̂_X^{(ii)}(h).

(iii) If ∫_{R^d} ‖x‖ ν(dx) < ∞, Σ = 0 and f ∈ L¹(S × R^m, π ⊗ λ), with γ_0 as in (3.5), then X is η-weakly dependent with η-coefficients satisfying

η_X(h) ≤ ∫_S ∫_{((−h,h)^m)^c} ‖f(A, −s) γ_0‖ ds π(dA) + ∫_S ∫_{((−h,h)^m)^c} ∫_{R^d} ‖f(A, −s) x‖ ν(dx) ds π(dA) = η̂_X^{(iii)}(h).

(iv) If ∫_{‖x‖>1} ‖x‖ ν(dx) < ∞ and f ∈ L¹(S × R^m, π ⊗ λ) ∩ L²(S × R^m, π ⊗ λ), then X is η-weakly dependent with η-coefficients satisfying

η_X(h) ≤ ( ∫_S ∫_{((−h,h)^m)^c} tr(f(A, −s) Σ_Λ f(A, −s)^T) ds π(dA) + ‖ ∫_S ∫_{((−h,h)^m)^c} f(A, −s) μ_Λ ds π(dA) ‖² )^{1/2} + 2 ∫_S ∫_{((−h,h)^m)^c} ∫_{‖x‖>1} ‖f(A, −s) x‖ ν(dx) ds π(dA) = η̂_X^{(iv)}(h),

for all h > 0, where Σ_Λ = Σ + ∫_{R^d} x x^T ν(dx) and μ_Λ = γ + ∫_{‖x‖≥1} x ν(dx).

Proof. See Section 5.4.

Analogous to Proposition 3.12 we obtain the following result.
Proposition 3.20.
Let Λ be an R^d-valued Lévy basis with characteristic quadruplet (γ, Σ, ν, π) and f: S × R^m → M_{1×d}(R) be a Λ-integrable, B(S × R^m)-measurable function. Consider the real-valued MMAF

X_t = ∫_S ∫_{R^m} f(A, t−s) Λ(dA, ds), t ∈ R^m.

Then

Z_t := ∫_S ∫_{R^m} g(A, t−s) Λ(dA, ds), t ∈ R^m,

where g(A, s) = (f(A, s), f(A, s−s_1), ..., f(A, s−s_{|S_k|}))^T is a B(S × R^m)-measurable function with values in M_{(k+1)(2k+1)^{m−1}×d}(R) for k ∈ N, is an MMAF. If X additionally satisfies the conditions of Proposition 3.19 (i), (ii), (iii) or (iv), then Z is η-weakly dependent with coefficients respectively given by

η_Z^{(i)}(h) ≤ D η̂_X^{(i)}(h − k), η_Z^{(ii)}(h) ≤ D η̂_X^{(ii)}(h − k),
η_Z^{(iii)}(h) ≤ C η̂_X^{(iii)}(h − k) and η_Z^{(iv)}(h) ≤ C η̂_X^{(iv)}(h − k),   (3.31)

where D = |S_k| m^{1/2} and C = |S_k| m, for h > k, with the corresponding η̂^{(·)}(h) from Proposition 3.19.

Proof. Analogous to Proposition 3.12.
Let us consider an R n -valued MMAF X = ( X u ) u ∈ Z m with X u = Z S Z R m f ( A, u − s )Λ( dA, ds ) . (3.32)As in Section 3.2 we assume that we observe X on a sequence of finite sampling sets D n ⊂ Z m , such that (3.24) holds. Theorem 3.21.
Let ( X u ) u ∈ Z m be an MMAF as defined in (3.32) such that E [ X ] = 0 and E [ k X k δ ] < ∞ for some δ > . Assume that X has η -coefficients satisfying η X ( h ) = O ( h − β ) , where β > m max (cid:16) , (cid:16) δ (cid:17)(cid:17) . Then Σ = X u ∈ Z m Cov ( X , X u ) = X u ∈ Z m E [ X X u ] , (3.33) is finite, positive semidefinite and | D n | X u ∈ D n X u d −−−→ n →∞ N (0 , Σ) . (3.34)22 roof. Let us consider the notations and assumptions stated in Definition 2.1. Then, X is λ -weakly dependent, see definition in [33, Definition 1]. Finally [33, Theorem 2] impliesthe summability of σ and the result stated in (3.34). The multivariate extension followsanalogously to Corollary 2.6 by the Cramér-Wold device. Remark 3.22.
Theorem 3.21 can be formulated as a functional central limit theorem, following [33, Theorem 3]. Set S_n(t) = Σ_{j∈tE_n} X_j, t ∈ [0,1]^m, with E_n as defined in (3.25) and the additional convention that S_n(t) = 0 if one coordinate of t equals zero. Then, under the assumptions of Theorem 3.21, it holds that

n^{−m/2} S_n(t) → σ W(t) in D([0,1]^m) as n → ∞,   (3.35)

where W denotes a Brownian sheet and the convergence takes place in the Skorokhod space D([0,1]^m).

Analogous to Proposition 3.15 we show the following result.
Proposition 3.23.
Let ( X u ) u ∈ Z m be a real-valued MMAF as defined in (3.32) such that E [ X ] = 0 and E [ k X k δ ] < ∞ for some δ > . Then, ( Y j,k ) j ∈ Z m , k ∈ N × Z m − asdefined in (3.30) is η -weakly dependent with coefficients η Y ( h ) ≤ C ( √ η ( i ) X ( h − k k k ∞ )) δ δ , where C is a constant, independent of h and ˆ η ( i ) X from Proposition 3.20.Furthermore, in the finite variation case it holds η Y ( h ) ≤ C (2ˆ η ( iii ) X ( h − k k k ∞ )) δ δ . In the following we give asymptotic properties of the sample autocovariances (3.27).
Corollary 3.24.
Let ( X u ) u ∈ Z m be a real-valued MMAF as defined in (3.32) such that R k x k > k x k δ ν ( dx ) < ∞ , γ + R k x k > xν ( dx ) = 0 and f : S × R m → M × d ( R ) satisfies f ∈ L ( S × R m , π ⊗ λ ) ∩ L δ ( S × R m , π ⊗ λ ) for some δ > . If ˆ η ( i ) X ( h ) = O ( h − β ) , with ˆ η ( i ) X from Proposition 3.20 and β > m max (cid:16) , (cid:16) δ (cid:17)(cid:17) ( δ δ ) , then Σ = X l ∈ Z m Cov Y , ... Y ,k , Y l, ... Y l,k = X l ∈ Z m Cov X X ... X X k , X l X l ... X l X l + k , is finite, positive semidefinite and | E n − ˜ k | X j ∈ E n − ˜ k Y j, ... Y j,k d −−−→ N →∞ N (0 , Σ) , where ˜ k = k k k ∞ . roof. Analogous to Theorem 3.21 we obtain the stated convergence using Proposition3.23.
Remark 3.25.
Note that for m = 1 Theorem 3.21 improves the only existing centrallimit theorem for MMA processes based on η -weak dependence (see [26, Theorem 4.1]) byreducing the necessary decay of the η -coefficients from β > δ to β > max (cid:16) , (cid:16) δ (cid:17)(cid:17) . Remark 3.26.
Let X be an (A,Λ)-influenced MMAF satisfying the conditions of Proposition 3.11 (i). Then X is θ-lex- and η-weakly dependent with the same weak dependence coefficients, and both the asymptotic results in Sections 3.4 and 3.6 can be applied. Note that the asymptotic results in Section 3.4 hold under weaker decay demands on the weak dependence coefficients compared to the results in Section 3.6.

(A,Λ)-influenced MMAF: MSTOU processes

We apply the developed asymptotic theory to mixed spatio-temporal Ornstein-Uhlenbeck (MSTOU) processes. MSTOU processes were introduced in [47] and extend spatio-temporal Ornstein-Uhlenbeck (STOU) processes (see [11], [46]) by additionally mixing the mean reversion parameter. Moreover, this extension can cover short-range as well as long-range dependence structures in space-time. In the following we will treat the temporal and spatial domain separately. MSTOU processes are an example of (A,Λ)-influenced MMAF where the sphere of influence is a family of ambit sets, i.e. A_t(x) ⊂ R × R^m such that

A_t(x) = A_0(0) + (t, x),   (translation invariant)
A_s(x) ⊂ A_t(x) for s ≤ t,
A_t(x) ∩ ((t, ∞) × R^m) = ∅.   (non-anticipative)   (3.36)

Proposition 3.27.
Let Λ be a real-valued Lévy basis on (0,∞) × R × R^m with characteristic quadruplet (γ, Σ, ν, π) such that ∫_{|x|>1} x² ν(dx) < ∞, and let f(λ) be the density function of π (i.e. of the mean reversion parameter λ) with respect to the Lebesgue measure. Furthermore, let A = (A_t(x))_{(t,x)∈R×R^m} be a family of ambit sets. If

∫_0^∞ ∫_{A_t(x)} exp(−λ(t−s)) ds dξ f(λ) dλ < ∞,

then the (A,Λ)-influenced MMAF

Y_t(x) = ∫_0^∞ ∫_{A_t(x)} exp(−λ(t−s)) Λ(dλ, ds, dξ), (t,x) ∈ R × R^m,

is well defined, and we call Y_t(x) a mixed spatio-temporal Ornstein-Uhlenbeck (MSTOU) process.

Proof. Follows immediately from [47, Corollary 1].

In order to calculate explicit conditions for the asymptotic results of Section 3.3, it becomes necessary to specify a family of ambit sets. In the following we consider c-class MSTOU processes, a sub-class of the g-class MSTOU processes defined in [47, Definition 9].

Definition 3.28.
Let Y_t(x) be an MSTOU process as in Proposition 3.27. If, for a constant c > 0,

A_t(x) = {(s, ξ) : s ≤ t, ‖x − ξ‖ ≤ c|t − s|},

then Y_t(x) is called a c-class MSTOU process. A c-class MSTOU process is well defined if

∫_0^∞ λ^{−(m+1)} f(λ) dλ < ∞.   (3.37)

The next theorem expresses the θ-lex coefficients of c-class MSTOU processes in terms of the characteristic quadruplet of the driving Lévy basis. We note that A(0) is a full-dimensional closed convex proper cone satisfying (3.13). From (3.16) it follows that

ψ(h) = h / (√(c²+1) √(m+1)).

Theorem 3.29.
Let (Y_t(x))_{(t,x)∈R×R^m} be a c-class MSTOU process and (γ, Σ, ν, π) the characteristic quadruplet of its driving Lévy basis. Moreover, let f(λ) be the density of π with respect to the Lebesgue measure.

(i) If ∫_{|x|>1} x² ν(dx) < ∞ and γ + ∫_{|x|>1} x ν(dx) = 0, then Y_t(x) is θ-lex-weakly dependent. For c ∈ (0,1], θ_Y(h) satisfies

m = 1: θ_Y(h) ≤ ( c Σ_Λ ∫_0^∞ (2λψ(h)+1)/(2λ²) e^{−2λψ(h)} f(λ) dλ )^{1/2},
m ≥ 2: θ_Y(h) ≤ ( V_m(c) Σ_Λ ∫_0^∞ m! Σ_{k=0}^m (1/k!) (2λψ(h))^k (2λ)^{−(m+1)} e^{−2λψ(h)} f(λ) dλ )^{1/2},

and for c > 1,

m = 1: θ_Y(h) ≤ ( c Σ_Λ ∫_0^∞ (2λψ(h)/c + 1)/(2λ²) e^{−2λψ(h)/c} f(λ) dλ )^{1/2},
m ≥ 2: θ_Y(h) ≤ ( V_m(c) Σ_Λ ∫_0^∞ m! Σ_{k=0}^m (1/k!) (2λψ(h)/c)^k (2λ)^{−(m+1)} e^{−2λψ(h)/c} f(λ) dλ )^{1/2}.

(ii) If ∫_R |x| ν(dx) < ∞, Σ = 0 and γ_0 as defined in (3.5), then Y_t(x) is θ-lex-weakly dependent. For c ∈ (0,1],

m ∈ N: θ_Y(h) ≤ V_m(c) ( |γ_0| + ∫_R |x| ν(dx) ) ∫_0^∞ m! Σ_{k=0}^m (1/k!) (λψ(h))^k λ^{−(m+1)} e^{−λψ(h)} f(λ) dλ,

and for c > 1,

m ∈ N: θ_Y(h) ≤ V_m(c) ( |γ_0| + ∫_R |x| ν(dx) ) ∫_0^∞ m! Σ_{k=0}^m (1/k!) (λψ(h)/c)^k λ^{−(m+1)} e^{−λψ(h)/c} f(λ) dλ.

Here V_m(c) = (Γ(1/2) c)^m / Γ(m/2 + 1) denotes the volume of the m-dimensional ball with radius c, ψ(h) = h/(√(c²+1) √(m+1)) and Σ_Λ = Σ + ∫_R x² ν(dx).

Proof. (i) Let us consider the case m = 1. From Proposition 3.11 we deduce

θ_Y(h) ≤ ( Σ_Λ ∫_0^∞ ∫_{A(0) ∩ V_{(0,0)}^{ψ(h)}} exp(2sλ) ds dξ f(λ) dλ )^{1/2}.   (3.38)

As a first step, one has to evaluate the truncated integration set A(0) ∩ V_{(0,0)}^{ψ(h)}. Depending on the width of A(0), we distinguish the two cases illustrated in the following figures: Figures 7 and 8 consider the case c ∈ (0,1] and Figures 9 and 10 cover the case c > 1.

Figure 7: Integration set A(0) with (V_{(0,0)}^h)^c for c = 1/√3 and h = 4√2. Figure 8: Truncated set A(0) ∩ V_{(0,0)}^{ψ(h)} for c = 1/√3 and h = 4√2. Figure 9: Integration set A(0) with (V_{(0,0)}^h)^c for c = √3 and h = 4√2. Figure 10: Truncated set A(0) ∩ V_{(0,0)}^{ψ(h)} for c = √3 and h = 4√2.

Let c ∈ (0,1]. Then

( Σ_Λ ∫_0^∞ ∫_{−∞}^{−ψ(h)} ∫_{‖ξ‖≤c|s|} dξ e^{2sλ} ds f(λ) dλ )^{1/2}
= ( 2 Σ_Λ ∫_0^∞ ∫_{−∞}^{−ψ(h)} (−cs) e^{2sλ} ds f(λ) dλ )^{1/2}
= ( c Σ_Λ ∫_0^∞ (2λψ(h)+1)/(2λ²) e^{−2λψ(h)} f(λ) dλ )^{1/2}.

The integral ∫_{‖ξ‖≤c|s|} dξ is the volume of an m-dimensional ball of radius c|s|, which for m = 1 equals −2cs. For c > 1 we obtain analogously

( 2 Σ_Λ ∫_0^∞ ∫_{−∞}^{−ψ(h)/c} (−cs) e^{2sλ} ds f(λ) dλ )^{1/2} = ( c Σ_Λ ∫_0^∞ (2λψ(h)/c + 1)/(2λ²) e^{−2λψ(h)/c} f(λ) dλ )^{1/2}.

In a similar way, one can derive the θ-lex coefficients for m ≥ 2.

We now give the θ-lex-coefficients of a c-class MSTOU process in the case in which the mean reversion parameter λ is Gamma distributed. For a Gamma(α, β)-distributed mean reversion parameter λ, i.e. f(λ) = (β^α/Γ(α)) λ^{α−1} e^{−βλ} 1_{[0,∞)}(λ), the c-class MSTOU process is well defined if α > m+1 and β > 0.

Theorem 3.30.
Let (Y_t(x))_{(t,x)∈R×R^m} be a c-class MSTOU process and (γ, Σ, ν, π) the characteristic quadruplet of its driving Lévy basis. Moreover, let the mean reversion parameter λ be Gamma(α, β)-distributed with α > m+1 and β > 0.

(i) If ∫_{|x|>1} x² ν(dx) < ∞ and γ + ∫_{|x|>1} x ν(dx) = 0, then Y_t(x) is θ-lex-weakly dependent. For c ∈ (0,1],

m = 1: θ_Y(h) ≤ ( c Σ_Λ (β^α/(2Γ(α))) [ Γ(α−2)/(2ψ(h)+β)^{α−2} + 2ψ(h) Γ(α−1)/(2ψ(h)+β)^{α−1} ] )^{1/2},
m ≥ 2: θ_Y(h) ≤ ( V_m(c) m! Σ_Λ (β^α/(2^{m+1} Γ(α))) Σ_{k=0}^m (2ψ(h))^k Γ(α−m−1+k) / ( k! (2ψ(h)+β)^{α−m−1+k} ) )^{1/2},

and for c > 1,

m ∈ N: θ_Y(h) ≤ ( V_m(c) m! Σ_Λ (β^α/(2^{m+1} Γ(α))) Σ_{k=0}^m (2ψ(h)/c)^k Γ(α−m−1+k) / ( k! (2ψ(h)/c+β)^{α−m−1+k} ) )^{1/2},

such that θ_Y(h) = O(h^{(m+1)−α}).

(ii) If ∫_R |x| ν(dx) < ∞, Σ = 0 and γ_0 as defined in (3.5), then Y_t(x) is θ-lex-weakly dependent. For c ∈ (0,1],

m ∈ N: θ_Y(h) ≤ V_m(c) m! (β^α/Γ(α)) ( |γ_0| + ∫_R |x| ν(dx) ) Σ_{k=0}^m ψ(h)^k Γ(α−m−1+k) / ( k! (ψ(h)+β)^{α−m−1+k} ),

and for c > 1,

m ∈ N: θ_Y(h) ≤ V_m(c) m! (β^α/Γ(α)) ( |γ_0| + ∫_R |x| ν(dx) ) Σ_{k=0}^m (ψ(h)/c)^k Γ(α−m−1+k) / ( k! (ψ(h)/c+β)^{α−m−1+k} ),

such that θ_Y(h) = O(h^{(m+1)−α}).

This implies the following sufficient conditions for the asymptotic normality of the sample mean and the sample autocovariance function.

Corollary 3.31.
Let ( Y t ( x )) ( t,x ) ∈ R × R m be a c-class MSTOU process and ( γ, Σ , ν, π ) thecharacteristic quadruplet of its driving Lévy basis. Moreover, let the mean reversion pa-rameter λ be Gamma ( α, β ) distributed with α > m + 1 and β > .(i) If γ + R | x | > xν ( dx ) = 0 , R | x | > | x | δ ν ( dx ) < ∞ for some δ > and α > ( m +1) (cid:16) δ (cid:17) , then the sample mean of Y t ( x ) as defined in (3.26) is asymptoticallynormal.(ii) If γ + R | x | > xν ( dx ) = 0 , R | x | > | x | δ ν ( dx ) < ∞ for some δ > and α > ( m +1) (cid:16) δ δ (cid:17) (cid:16) δ (cid:17) , then the sample autocovariances as defined in (3.27) are asymp-totically normal. Corollary 3.32.
Let ( Y t ( x )) ( t,x ) ∈ R × R m be a c-class MSTOU process and ( γ, Σ , ν, π ) thecharacteristic quadruplet of its driving Lévy basis. Moreover, let the mean reversion pa-rameter λ be Gamma ( α, β ) distributed such that α > d + 1 and β > .(i) If R R | x | ν ( dx ) < ∞ , Σ = 0 , γ as defined in (3.5) and α > ( m + 1) (cid:16) δ (cid:17) , then thesample mean of Y t ( x ) as defined in (3.26) is asymptotically normal.(ii) If R R | x | ν ( dx ) < ∞ , Σ = 0 , γ as defined in (3.5), R | x | > | x | δ ν ( dx ) < ∞ for some δ > and α > ( m + 1) (cid:16) δ δ (cid:17) (cid:16) δ (cid:17) , then the sample autocovariances as defined in(3.27) are asymptotically normal. Remark 3.33.
Since c-class MSTOU processes satisfy the assumptions of Theorem 3.14, we can derive asymptotic normality of the sample mean for these fields under the weaker assumptions E[Y_t(x)²] < ∞ and α > 2(m+1).

We conclude with some remarks regarding the short- and long-range dependence of an MSTOU process.
Definition 3.34.
A stationary random field Y = (Y_t(x))_{(t,x)∈R×R^m} is said to have temporal short-range dependence if

∫_0^∞ Cov(Y_t(x), Y_{t+τ}(x)) dτ < ∞,

and temporal long-range dependence if the integral is infinite. If Cov(Y_t(x), Y_t(x+m_x)) = C(‖m_x‖) for all m_x ∈ R^m and a positive definite function C, the random field Y is called isotropic. An isotropic random field is then said to have spatial short-range dependence if

∫_0^∞ C(r) dr < ∞.

An MSTOU process is a stationary and isotropic random field, see [47, Theorem 5]. By assuming a Gamma(α, β)-distributed random parameter λ, we have the following results, as shown in [26, Section 6] and [47, Section 3.3]:

(i) For m = 0, Y is a supOU process, which is well-defined for α > 1, β > 0. Thus, we obtain a long-memory process for 1 < α ≤ 2, whereas for α > 2 the process has short memory.

(ii) For m = 1, Y is well-defined if α > 2, β > 0, and Y exhibits temporal as well as spatial long-range dependence for 2 < α ≤ 3. If α > 3, Y has temporal and spatial short-range dependence.

(iii) For m = 3, Y is well-defined if α > 4, β > 0, and Y exhibits temporal as well as spatial long-range dependence for 4 < α ≤ 5. If α > 5, Y has temporal and spatial short-range dependence.
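The Gamma computations behind these ranges reduce to Laplace transforms of the mixing density: for λ ~ Gamma(α, β) one has ∫_0^∞ e^{−λψ} f(λ) dλ = β^α/(ψ+β)^α = O(ψ^{−α}), which is the mechanism producing the polynomial decay rates in Theorem 3.30. A quick numerical cross-check (plain Python; the parameter values are illustrative):

```python
import math

def gamma_laplace_numeric(psi, a, beta, n_steps=50000, upper=50.0):
    """Numerically approximate int_0^inf e^(-lam*psi) f(lam) dlam for the
    Gamma(a, beta) density f via a simple Riemann sum."""
    dl = upper / n_steps
    total = 0.0
    for i in range(1, n_steps + 1):
        lam = i * dl
        density = beta**a / math.gamma(a) * lam**(a - 1) * math.exp(-beta * lam)
        total += math.exp(-lam * psi) * density * dl
    return total

def gamma_laplace_closed(psi, a, beta):
    """Closed form of the Gamma Laplace transform: (beta/(psi+beta))^a."""
    return (beta / (psi + beta))**a

val_num = gamma_laplace_numeric(2.0, 3.0, 1.0)
val_cf = gamma_laplace_closed(2.0, 3.0, 1.0)   # = (1/3)^3
```

The closed form makes the polynomial (rather than exponential) decay in ψ explicit, which is why a Gamma-mixed mean reversion parameter can generate long memory.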
Remark 3.35. (GMM estimator) For m = 0, a consistent GMM estimator for the supOU process is defined in [55]. In [26], the authors show asymptotic normality of this estimator: if the underlying Lévy process is of finite variation and all moments exist, then the GMM estimator is asymptotically normal for α sufficiently large. For m ≥ 1, a consistent GMM estimator for a c-class MSTOU process is introduced in [47]. The results in Corollaries 3.31 and 3.32 should pave the way for an analysis of the asymptotic normality of the GMM estimator defined in [47], using arguments similar to [26]. For example, when m = 1, in the finite variation case and when all moments exist, we can apply our results to short-range dependent MSTOU processes with α sufficiently large.

We conclude the section by applying our developed asymptotic theory to the class of Lévy-driven CARMA fields defined on R^m. CARMA (continuous autoregressive moving average) fields are an extension of the well-known CARMA processes (see e.g. [21] for a comprehensive introduction) and have been introduced in [16, 22, 42, 51]. In [22], the authors define CARMA fields as isotropic random fields

Y(t) = ∫_{R^m} g(t−s) dL(s), t ∈ R^m,   (3.39)

where g is a radially symmetric kernel and L a real-valued Lévy basis on R^m. When the Lévy basis L has a finite second-order structure, CARMA fields generate a rich family of isotropic covariance functions on R^m which are not necessarily non-negative or monotone. On the other hand, in [51], the author defines CARMA(p,q) fields based on a system of stochastic partial differential equations. For 0 ≤ q < p, the mild solution of the system is called a causal CARMA field and is given by

Y(t) = b^T ∫_{−∞}^{t_1} ··· ∫_{−∞}^{t_m} e^{A_1(t_1−s_1)} ··· e^{A_m(t_m−s_m)} c dL(s), t = (t_1, ..., t_m) ∈ R^m,   (3.40)

where A_1, ..., A_m are companion matrices, L is a real-valued Lévy basis on R^m, b = (b_0, ..., b_{p−1})^T ∈ R^p with b_q ≠ 0 and b_i = 0 for i > q, and c = (0, ..., 0, 1)^T ∈ R^p, see [51, Definition 3.3]. In [16], the author shows the existence of a mild solution of the CARMA stochastic partial differential equation, cf. [16, equation (1.7)], in [16, Theorem 5.3]. The causal CARMA fields presented in [51] can be seen as a special case of the CARMA random fields defined in [16]. A more subtle relationship exists between the definitions of CARMA fields in [16] and [22], namely just when m is odd, see [16, Section 7]. In general, our framework can be applied to the class of CARMA fields introduced in [16] and [22] whenever the conditions of the theorem below are satisfied.

Theorem 3.36.
Let L be an R^d-valued Lévy basis with characteristic quadruplet (γ, Σ, ν, π) such that ∫_{‖x‖>1} ‖x‖² ν(dx) < ∞ and γ + ∫_{‖x‖>1} x ν(dx) = 0. Let g: R^m → M_{n×d}(R) be exponentially bounded in norm, i.e. there exist M, K ∈ R^+ such that

‖g(t)‖ ≤ M e^{−K‖t‖} for all t ∈ R^m.   (3.41)

Then the moving average field X_t = ∫_{R^m} g(t−s) L(ds), t ∈ R^m, is an η-weakly dependent field with exponentially decaying η-coefficients. Due to the equivalence of norms, the result does not depend on a specific choice of norms.

Proof. See Section 5.
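Theorem 3.36 is plausible already from a back-of-the-envelope computation: for an exponentially bounded kernel, the truncated L²-mass ∫_{|t|>h} ‖g(t)‖² dt decays exponentially in h, and quantities of exactly this type control the η̂-coefficients in Proposition 3.19. A one-dimensional numerical illustration (plain Python; the kernel g(t) = e^{−|t|}, i.e. (3.41) with M = K = 1, is an arbitrary example, not one of the CARMA kernels):

```python
import math

def tail_l2(h, n_steps=50000, upper=40.0):
    """Numerically integrate int_{|t| > h} g(t)^2 dt for g(t) = exp(-|t|),
    using a midpoint rule on [h, upper] and symmetry in t -> -t."""
    dl = (upper - h) / n_steps
    total = 0.0
    for i in range(n_steps):
        t = h + (i + 0.5) * dl
        total += math.exp(-2.0 * t) * dl
    return 2.0 * total

t1, t2 = tail_l2(1.0), tail_l2(2.0)
# closed form: int_{|t|>h} e^(-2|t|) dt = e^(-2h)
```

The numeric values match the closed form e^{−2h}, confirming the exponential tail decay that drives the exponentially decaying η-coefficients.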
Remark 3.37.
Since the kernels in (3.39) and (3.40) satisfy equation (3.41), we can, for example, show that these fields are η-weakly dependent by applying Theorem 3.36.

In the following we briefly introduce stationary ambit fields. We discuss weak dependence properties of such fields and give sufficient conditions for the applicability of the results in Section 2.2.
Let A_t(x) ⊂ R × R^m for (t,x) ∈ R × R^m be an ambit set as defined in (3.36). By P_0 we denote the usual predictable σ-algebra on Ω × R, i.e. the σ-algebra generated by all left-continuous adapted processes. Then, a random field X: Ω × R × R^m → R is called predictable if it is measurable with respect to the σ-algebra P defined by P = P_0 ⊗ B(R^m).

Definition 4.1.
Let Λ be a real-valued Lévy basis on R × R^m with characteristic quadruplet (γ, Σ, ν, π), and let σ be a predictable stationary random field on R × R^m independent of Λ. Furthermore, let l: R^m × R → R be a measurable function and A_t(x) an ambit set. We assume that f(ξ, s) = 1_{A_t(x)}(ξ, s) l(ξ, s) σ_s(ξ) satisfies (3.6), (3.7) and (3.8) almost surely. Then, the random field

Y_t(x) = ∫_{A_t(x)} l(x−ξ, t−s) σ_s(ξ) Λ(dξ, ds), (t,x) ∈ R × R^m,   (4.1)

is called an ambit field, and it is stationary (see [6, p. 185]).
Ambit fields require us to define integrals with respect to Lévy bases where the integrand is stochastic. Although the integration theory of Rajput and Rosinski only enables us to define stochastic integrals with respect to deterministic integrands [52], one can extend this theory to stochastic integrands which are predictable and independent of the Lévy basis. In fact, we can condition on the σ-algebra generated by the field σ and again use the integration theory introduced in [52]. Such integrals are then well defined if the kernel function satisfies the sufficient conditions (3.6), (3.7) and (3.8) almost surely. Allowing for dependence between the volatility field and the Lévy basis demands the use of a different integration theory, as presented in [2, Section 1.2.1], [6, Proposition 39], [13, Theorem 3.2] and [25].

We conclude this section by giving explicit formulas for the first and second moment of an ambit field.
Proposition 4.3.
Let Y be an ambit field as defined in (4.1) driven by a real-valued Lévy basis with characteristic quadruplet (γ, Σ, ν, π) and Λ-integrable kernel function f(ξ, s) = 1_{A_t(x)}(ξ, s) l(x − ξ, t − s) σ_s(ξ), where σ is predictable, stationary and independent of Λ.

(i) If E[|Y_t(x)|] < ∞, the first moment of Y is given by

E[Y_t(x)] = μ_Λ E[σ_t(x)] ∫_{A_t(x)} l(x − ξ, t − s) dξ ds,

where μ_Λ = γ + ∫_{|x|≥1} x ν(dx).

(ii) If E[Y_t(x)²] < ∞, it holds that

Var(Y_t(x)) = Σ_Λ E[σ_t(x)²] ∫_{A_t(x)} l(x − ξ, t − s)² dξ ds + μ_Λ² ∫_{A_t(x)} ∫_{A_t(x)} l(x − ξ, t − s) l(x − ξ̃, t − s̃) ρ(s, s̃, ξ, ξ̃) dξ ds dξ̃ ds̃

and

Cov(Y_t(x), Y_t̃(x̃)) = Σ_Λ E[σ_t(x)²] ∫_{A_t(x) ∩ A_t̃(x̃)} l(x − ξ, t − s) l(x̃ − ξ, t̃ − s) dξ ds + μ_Λ² ∫_{A_t(x)} ∫_{A_t̃(x̃)} l(x − ξ, t − s) l(x̃ − ξ̃, t̃ − s̃) ρ(s, s̃, ξ, ξ̃) dξ ds dξ̃ ds̃,

where Σ_Λ = Σ + ∫_R x² ν(dx) and ρ(s, s̃, ξ, ξ̃) = E[σ_s(ξ) σ_s̃(ξ̃)] − E[σ_s(ξ)] E[σ_s̃(ξ̃)].

Proof. Immediate from [6, Proposition 41].
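The moment formulas of Proposition 4.3 can be checked by simulation in a strongly simplified setting: a Gaussian Lévy basis (ν = 0, so μ_Λ = γ and Σ_Λ = Σ), deterministic volatility σ ≡ 1, and a rectangle in place of the ambit set, discretized into independent basis increments. All numerical values in the following sketch are illustrative assumptions, not quantities from the text.

```python
import math, random

random.seed(1)

n = 10                    # grid resolution per axis (illustrative)
cell = 1.0 / n ** 2       # cell area
gamma, Sigma = 0.5, 0.3   # Gaussian basis: nu = 0, hence mu_Lambda = gamma, Sigma_Lambda = Sigma

pts = [((i + 0.5) / n, (j + 0.5) / n) for i in range(n) for j in range(n)]
lval = [math.exp(-xi - s) for xi, s in pts]    # kernel l evaluated on the grid

def sample_Y():
    # Y = sum over cells of l * Lambda(cell), with Lambda(cell) ~ N(gamma*cell, Sigma*cell)
    return sum(w * random.gauss(gamma * cell, math.sqrt(Sigma * cell)) for w in lval)

N = 4000
ys = [sample_Y() for _ in range(N)]
mc_mean = sum(ys) / N
mc_var = sum((y - mc_mean) ** 2 for y in ys) / (N - 1)

th_mean = gamma * sum(lval) * cell                 # mu_Lambda * E[sigma] * integral of l
th_var = Sigma * sum(w * w for w in lval) * cell   # Sigma_Lambda * E[sigma^2] * integral of l^2
```

With deterministic σ the correction term involving ρ vanishes, so the Monte Carlo mean and variance should match the discretized versions of the formulas in (i) and (ii).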
Let us consider a stationary ambit field Y = (Y_t(x))_{(t,x) ∈ R × R^m} as defined in (4.1). In order to analyze the covariance structure of Y, it becomes necessary to specify a model for σ. In [4] the authors proposed to model σ by kernel-smoothing of a homogeneous Lévy basis, i.e. as a moving average random field

σ_t(x) = ∫_{A^σ_t(x)} j(x − ξ, t − s) Λ^σ(dξ, ds), (4.2)

where Λ^σ is a real-valued Lévy basis independent of Λ with characteristic quadruplet (γ_σ, Σ_σ, ν_σ, π_σ), A^σ = (A^σ_t(x))_{(t,x) ∈ R × R^m} an ambit set as defined in (3.36) and j a real-valued Λ^σ-integrable function. In the following we extend this model and assume σ to be an (A^σ, Λ^σ)-influenced MMAF, i.e.

σ_t(x) = ∫_S ∫_{A^σ_t(x)} j(A, x − ξ, t − s) Λ^σ(dA, dξ, ds). (4.3)

Proposition 4.4.
Let Y = (Y_t(x))_{(t,x) ∈ R × R^m} be an ambit field as defined in (4.1) with σ = (σ_t(x))_{(t,x) ∈ R × R^m} being a predictable (A^σ, Λ^σ)-influenced MMAF as defined in (4.3) and such that A(0) and A^σ(0) satisfy (3.13), j ∈ L¹(S × R^m × R, π ⊗ λ) ∩ L²(S × R^m × R, π ⊗ λ), where λ denotes the Lebesgue measure on R^{m+1}, and ∫_{|x|>1} |x|² ν_σ(dx) < ∞.

(i) If l ∈ L²(R^m × R), ∫_{|x|>1} |x|² ν(dx) < ∞ and γ + ∫_{|x|>1} x ν(dx) = 0, then Y is θ-lex-weakly dependent with θ-lex-coefficients θ_Y(h) satisfying

θ_Y(h) ≤ (Σ_Λ E[σ_0(0)²] ∫_{A(0) ∩ V_{ψ(h)}(0,0)} l(−ξ, −s)² dξ ds)^{1/2} + 2 (Σ_{Λ_σ} ∫_S ∫_{A^σ(0) ∩ V_{ψ(h)}(0,0)} j(A, −ξ, −s)² dξ ds π(dA) + μ_{Λ_σ}² (∫_S ∫_{A^σ(0) ∩ V_{ψ(h)}(0,0)} j(A, −ξ, −s) dξ ds π(dA))²)^{1/2} × (Σ_Λ ∫_{A(0) \ V_{ψ(h)}(0,0)} l(−ξ, −s)² dξ ds)^{1/2}. (4.4)

(ii) If l ∈ L¹(R^m × R) ∩ L²(R^m × R) and ∫_{|x|>1} |x|² ν(dx) < ∞, then Y is θ-lex-weakly dependent with θ-lex-coefficients θ_Y(h) satisfying

θ_Y(h) ≤ (Σ_Λ E[σ_0(0)²] ∫_{A(0) ∩ V_{ψ(h)}(0,0)} l(−ξ, −s)² dξ ds + μ_Λ² E[σ_0(0)²] (∫_{A(0) ∩ V_{ψ(h)}(0,0)} |l(−ξ, −s)| dξ ds)²)^{1/2} (4.5)
+ 2 (Σ_{Λ_σ} ∫_S ∫_{A^σ(0) ∩ V_{ψ(h)}(0,0)} j(A, −ξ, −s)² dξ ds π(dA) + μ_{Λ_σ}² (∫_S ∫_{A^σ(0) ∩ V_{ψ(h)}(0,0)} j(A, −ξ, −s) dξ ds π(dA))²)^{1/2} × (Σ_Λ ∫_{A(0) \ V_{ψ(h)}(0,0)} l(−ξ, −s)² dξ ds + μ_Λ² (∫_{A(0) \ V_{ψ(h)}(0,0)} |l(−ξ, −s)| dξ ds)²)^{1/2}.
(iii) If l ∈ L¹(R^m × R), ∫_R |x| ν(dx) < ∞ and Σ = 0, then Y is θ-lex-weakly dependent with θ-lex-coefficients θ_Y(h) satisfying

θ_Y(h) ≤ E[σ_0(0)] (|γ_0| + ∫_R |x| ν(dx)) ∫_{A(0) ∩ V_{ψ(h)}(0,0)} |l(−ξ, −s)| dξ ds + 2 (Σ_{Λ_σ} ∫_S ∫_{A^σ(0) ∩ V_{ψ(h)}(0,0)} j(A, −ξ, −s)² dξ ds π(dA) + μ_{Λ_σ}² (∫_S ∫_{A^σ(0) ∩ V_{ψ(h)}(0,0)} j(A, −ξ, −s) dξ ds π(dA))²)^{1/2} × (|γ_0| + ∫_R |x| ν(dx)) ∫_{A(0) \ V_{ψ(h)}(0,0)} |l(−ξ, −s)| dξ ds, (4.6)

for all h > 0, with ψ(h) as defined in (3.16), μ_Λ = γ + ∫_{|x|≥1} x ν(dx), Σ_Λ = Σ + ∫_R x² ν(dx), μ_{Λ_σ} = γ_σ + ∫_{|x|≥1} x ν_σ(dx) and Σ_{Λ_σ} = Σ_σ + ∫_R x² ν_σ(dx).

Proof. See Section 5.6.

We now analyze the case in which σ is a p-dependent random field for p ∈ N. Proposition 4.5.
Let Y = (Y_t(x))_{(t,x) ∈ R × R^m} be an ambit field as defined in (4.1) with a predictable p-dependent stationary random field σ_t(x) for p ∈ N. Assume that A(0) satisfies (3.13). Additionally assume that l ∈ L²(R^m × R), ∫_{|x|>1} |x|² ν(dx) < ∞ and γ + ∫_{|x|>1} x ν(dx) = 0. Then, for sufficiently large h, Y is θ-lex-weakly dependent with θ-lex-coefficients θ_Y(h) satisfying

θ_Y(h) ≤ (Σ_Λ E[σ_0(0)²] ∫_{A(0) ∩ V_{ψ(h)}(0,0)} l(−ξ, −s)² dξ ds)^{1/2}, (4.7)

with ψ(h) as defined in (3.16) and Σ_Λ = Σ + ∫_R x² ν(dx).

Proof. See Section 5.6.

4.2.1 Volatility fields

If σ is an (A^σ, Λ^σ)-influenced MMAF as defined in (4.3), j is a non-negative kernel function and the following assumption holds:

(H): The Lévy basis Λ^σ has characteristic quadruplet (γ_σ, 0, ν_σ, π_σ) such that ∫_R |x| ν_σ(dx) < ∞, γ_σ − ∫_{|x|≤1} x ν_σ(dx) ≥ 0 and ν_σ(R₋) = 0,

then σ has values in R₊ and we call it a volatility or intermittency field. Note that Assumption (H) implies that Λ^σ is of finite variation. This model is used in several applications of ambit fields, see [6].

By additionally assuming that j ∈ L¹(S × R^m × R, π ⊗ λ) ∩ L²(S × R^m × R, π ⊗ λ) and ∫_{|x|>1} |x|² ν_σ(dx) < ∞, the results in Proposition 4.4 (i) and (ii) hold. On the other hand, the result in Proposition 4.4 (iii) can be improved. Corollary 4.6.
Let Y = (Y_t(x))_{(t,x) ∈ R × R^m} be an ambit field as defined in (4.1) with predictable volatility field σ_t(x) being an (A^σ, Λ^σ)-influenced MMAF such that A(0) and A^σ(0) satisfy (3.13), j ∈ L¹(S × R^m × R, π ⊗ λ), l ∈ L¹(R^m × R) and Assumption (H) holds. Let γ_0 with respect to Λ and γ_{0,σ} with respect to Λ^σ be defined as in (3.5). Then Y is θ-lex-weakly dependent with θ-lex-coefficients θ_Y(h) satisfying

θ_Y(h) ≤ (|γ_{0,σ}| + ∫_R |x| ν_σ(dx)) (|γ_0| + ∫_R |x| ν(dx)) (∫_S ∫_{A^σ(0)} |j(A, −ξ, −s)| dξ ds π(dA)) (∫_{A(0) ∩ V_{ψ(h)}(0,0)} |l(−ξ, −s)| dξ ds) + 2 (|γ_{0,σ}| + ∫_R |x| ν_σ(dx)) (|γ_0| + ∫_R |x| ν(dx)) (∫_{A(0) \ V_{ψ(h)}(0,0)} |l(−ξ, −s)| dξ ds) (∫_S ∫_{A^σ(0) ∩ V_{ψ(h)}(0,0)} |j(A, −ξ, −s)| dξ ds π(dA)), (4.8)

for all h > 0, with ψ(h) as defined in (3.16).

Proof. Analogous to Proposition 4.4.
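A minimal simulation illustrates why Assumption (H) produces a genuine volatility field: with no Gaussian part, only positive jumps and a non-negative kernel j, the smoothed field is non-negative by construction, and its mean is (mean jump intensity) × (integral of j). The compound Poisson basis, the kernel and all constants in the following sketch are illustrative assumptions.

```python
import math, random

random.seed(7)

def poisson(lam):
    # Poisson sample via counting Exp(1) interarrivals (stdlib only)
    k, t = 0, random.expovariate(1.0)
    while t < lam:
        k += 1
        t += random.expovariate(1.0)
    return k

n = 15
cell = 1.0 / n ** 2
rate = 50.0          # jump intensity of nu_sigma; jumps ~ Exp(2), hence positive (Assumption (H))
grid = [(-(i + 0.5) / n, -(j + 0.5) / n) for i in range(n) for j in range(n)]
jker = [math.exp(u + v) for u, v in grid]   # non-negative kernel j on (-1,0)^2

def sample_sigma():
    # sigma = sum over cells of j * Lambda_sigma(cell): no Gaussian part, zero drift, positive jumps
    s = 0.0
    for w in jker:
        s += w * sum(random.expovariate(2.0) for _ in range(poisson(rate * cell)))
    return s

R = 300
sig = [sample_sigma() for _ in range(R)]
sig_min = min(sig)
emp_mean = sum(sig) / R
th_mean = rate * 0.5 * sum(jker) * cell     # E[jump size] = 1/2
```

Every sampled value of the field is non-negative, matching the claim that (H) yields σ with values in R₊.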
In this section we study the asymptotic distribution of sample moments of Y . As in Section3.2 we assume that we observe Y on a sequence of finite sampling sets D n ⊂ Z × Z m , suchthat (3.24) holds. Theorem 4.7.
Let Y = (Y_t(x))_{(t,x) ∈ Z × Z^m} be an ambit field as defined in (4.1) such that E[Y_t(x)] = 0 and E[|Y_t(x)|^{2+δ}] < ∞ for some δ > 0. Additionally assume that Y is θ-lex-weakly dependent with θ-lex-coefficients satisfying θ_Y(h) = O(h^{−α}), where α > (m+1)(1 + 1/δ). Then

σ² = Σ_{(u_t, u_x) ∈ Z × Z^m} E[Y_0(0) Y_{u_t}(u_x) | I],

with I from Theorem 2.5, is finite, non-negative and

|D_n|^{−1/2} Σ_{(u_t, u_x) ∈ D_n} Y_{u_t}(u_x) →_d σε as n → ∞, (4.9)

where ε is a standard normally distributed random variable which is independent of σ.

Proof. The result follows from Theorem 2.5.
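The flavor of Theorem 4.7 can be reproduced in the simplest non-trivial case: a 1-dependent moving average field on Z × Z, whose θ-lex-coefficients vanish for h > 1, so the decay condition holds for every α. The sketch below (window size, replication count and kernel are illustrative choices) compares the Monte Carlo variance of the normalized partial sum |D_n|^{−1/2} Σ Y with its exact finite-sample value, which approaches σ² = Σ_u Cov(Y_0(0), Y_{u_t}(u_x)) = 9 for this kernel.

```python
import math, random

random.seed(3)

n = 20          # sampling window D_n = {0,...,n-1}^2 (illustrative size)
R = 300         # Monte Carlo replications (illustrative)

def normalised_sum():
    # 1-dependent field Y_{t,x} = e_{t,x} + e_{t-1,x} + e_{t,x-1} with iid N(0,1) noise e:
    # its theta-lex coefficients vanish for h > 1, so theta_Y(h) = O(h^{-alpha}) for every alpha
    e = {(i, j): random.gauss(0.0, 1.0)
         for i in range(-1, n) for j in range(-1, n)}
    tot = sum(e[t, x] + e[t - 1, x] + e[t, x - 1]
              for t in range(n) for x in range(n))
    return tot / math.sqrt(n * n)

samples = [normalised_sum() for _ in range(R)]
mean_S = sum(samples) / R
mc_var = sum((s - mean_S) ** 2 for s in samples) / (R - 1)

# exact finite-sample variance via the number of partial-sum terms each noise enters
inside = lambda i, j: 0 <= i < n and 0 <= j < n
exact_var = sum((inside(i, j) + inside(i + 1, j) + inside(i, j + 1)) ** 2
                for i in range(-1, n) for j in range(-1, n)) / (n * n)
# as n grows, exact_var approaches sum over shifts of Cov(Y_0, Y_u) = (1+1+1)^2 = 9
```

Here the limit is genuinely Gaussian because the "volatility" is deterministic; Theorem 4.7 covers the more general mixed Gaussian case.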
Corollary 4.8.
Let Y = (Y_t(x))_{(t,x) ∈ Z × Z^m} be an ambit field as defined in (4.1) such that E[|Y_t(x)|^{p+δ}] < ∞ for p ≥ 2 and some δ > 0. Additionally, let us assume that Y is θ-lex-weakly dependent with θ-lex-coefficients satisfying θ_Y(h) = O(h^{−α}), where α > (m+1)(1 + 1/δ)((p − 1 + δ)/(p + δ)). Then

Σ = Σ_{(u_t, u_x) ∈ Z × Z^m} Cov(Y_0(0)^p, Y_{u_t}(u_x)^p | I),

with I from Theorem 2.5, is finite, non-negative and

|D_n|^{−1/2} Σ_{(u_t, u_x) ∈ D_n} (Y_{u_t}(u_x)^p − E[Y_0(0)^p]) →_d Σ^{1/2} ε as n → ∞, (4.10)

where ε is a standard normally distributed random variable which is independent of Σ.

Proof. Analogous to Corollary 3.17.
Remark 4.9.
Theorem 4.7 and Corollary 4.8 are important first steps towards developing statistical inference for the class of ambit fields. However, we note that the limits in (4.9) and (4.10) are of mixed Gaussian type. Conditions ensuring the ergodicity of an ambit field with a deterministic kernel can be found in [49, Theorem 3.6], whereas the case of a non-deterministic kernel remains an open problem.
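The mixed Gaussian nature of the limits in (4.9) and (4.10) can be made concrete with a toy example: if ε is standard normal and independent of a random σ, the product σε has excess kurtosis (so it is not Gaussian), while studentizing by σ recovers a standard normal. The two-point distribution for σ below is an arbitrary illustrative choice.

```python
import random

random.seed(11)

N = 20000
Z, EPS = [], []
for _ in range(N):
    sigma = random.choice([0.5, 1.5])   # random conditional standard deviation, independent of eps
    eps = random.gauss(0.0, 1.0)
    Z.append(sigma * eps)               # mixed Gaussian draw, as in the limit (4.9)
    EPS.append(eps)                     # studentised value (sigma*eps)/sigma

def kurtosis(xs):
    mu = sum(xs) / len(xs)
    v = sum((x - mu) ** 2 for x in xs) / len(xs)
    m4 = sum((x - mu) ** 4 for x in xs) / len(xs)
    return m4 / v ** 2

kurt_Z = kurtosis(Z)      # theoretical value 3*E[sigma^4]/E[sigma^2]^2 = 4.92 > 3
kurt_eps = kurtosis(EPS)  # standard normal: kurtosis 3
var_Z = sum(z * z for z in Z) / N   # theoretical value E[sigma^2] = 1.25
```

This is why self-normalized statistics are the natural route to feasible inference under mixed Gaussian limits.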
We establish Theorem 2.5 by extending results from [29] to higher dimensions. This enables us to connect the conditions of the asymptotic result stated in [27] with our definition of the θ-lex-coefficients.

Define the space of bounded, Lipschitz continuous functions L = {g : R^n → R, g bounded and Lipschitz continuous with Lip(g) ≤ 1}. For a σ-algebra M and an R^n-valued integrable random field X = (X_t)_{t ∈ Z^m} we define the following two mixingale-type measures of dependence:

(i) γ(M, X) = ‖E[X | M] − E[X]‖₁ and

(ii) θ(M, X) = sup_{g ∈ L} ‖E[g(X) | M] − E[g(X)]‖₁.

Using the above measures of dependence we define the dependence coefficients

γ_h = sup_{j ∈ Z^m} γ(F_{V_j^h}, X_j) and θ_h = sup_{j ∈ Z^m} θ(F_{V_j^h}, X_j), (5.1)

for h ∈ N*. Obviously, it holds that γ(M, X) ≤ ‖X‖₁ and γ(M, X) ≤ θ(M, X), such that γ_h ≤ θ_h for all h ∈ N*. If X is stationary, we can write γ_h and θ_h from (5.1) as

γ_h = γ(F_{V_0^h}, X_0) and θ_h = θ(F_{V_0^h}, X_0), (5.2)

for h ∈ N*. First, we extend Proposition 2.3 from [30] and connect the θ-lex-coefficients θ(h) from Definition 2.2 with the mixingale-type coefficient θ_h defined above. Lemma 5.1.
Let X = ( X t ) t ∈ Z m be a real-valued random field. Then it holds that θ ( h ) = θ h , h ∈ N ∗ . Proof.
Fix u, h ∈ N*. We first show θ_u(h) ≤ θ_h. Let F ∈ F*, G ∈ F, j ∈ Z^m, k ≤ u and Γ = {i_1, ..., i_k} with i_1, ..., i_k ∈ V_j^h. Now

|Cov(F(X_Γ)/‖F‖_∞, G(X_j)/Lip(G))| = |E[(F(X_Γ)/‖F‖_∞)(G(X_j)/Lip(G))] − E[F(X_Γ)/‖F‖_∞] E[G(X_j)/Lip(G)]| = |E[(F(X_Γ)/‖F‖_∞)(E[G(X_j)/Lip(G) | F_{V_j^h}] − E[G(X_j)/Lip(G)])]| ≤ E[|F(X_Γ)/‖F‖_∞| |E[G(X_j)/Lip(G) | F_{V_j^h}] − E[G(X_j)/Lip(G)]|] ≤ ‖E[G(X_j)/Lip(G) | F_{V_j^h}] − E[G(X_j)/Lip(G)]‖₁ ≤ θ(F_{V_j^h}, X_j) ≤ θ_h.

Taking the supremum on the left-hand side we obtain θ_u(h) ≤ θ_h and finally θ(h) ≤ θ_h.

To prove the converse inequality, we first remark that by the martingale convergence theorem

θ(F_{V_j^h}, X_j) = lim_{k→∞} θ(F_{V_j^h \ V_j^k}, X_j). (5.3)

Now, let G ∈ L, i.e. G ∈ F with Lip(G) ≤ 1, and j ∈ Z^m. We first define X_j^h(k) = {X_i : i ∈ V_j^h \ V_j^k} and F(X_j^h(k)) = sign(E[G(X_j) | F_{V_j^h \ V_j^k}] − E[G(X_j)]) for k > h.
Then F ∈ F* with ‖F‖_∞ = 1 and it holds that

E[|E[G(X_j) | F_{V_j^h \ V_j^k}] − E[G(X_j)]|] = E[(E[G(X_j) | F_{V_j^h \ V_j^k}] − E[G(X_j)]) F(X_j^h(k))] = E[E[F(X_j^h(k)) G(X_j) | F_{V_j^h \ V_j^k}] − F(X_j^h(k)) E[G(X_j)]] = Cov(F(X_j^h(k)), G(X_j)) ≤ θ(h).

Using (5.3) we can deduce the stated equality.

We define Q_X as the generalized inverse of the tail function x ↦ P(|X| > x) and G_X as the inverse of x ↦ ∫_0^x Q_X(u) du. Lemma 5.2.
Let X = (X_t)_{t ∈ Z^m} be a stationary centered real-valued random field such that ‖X_0‖₁ < ∞ and assume that

∫_0^{‖X_0‖₁} θ̃(u) Q_{X_0} ∘ G_{X_0}(u) du < ∞, (5.4)

with Q_{X_0} and G_{X_0} as defined above and θ̃(u) = Σ_{k ∈ V_0^1} 1_{u < θ_{|k|}}. Then

Σ_{k ∈ V_0^1} |E[X_k E_{|k|}[X_0]]| < ∞, (5.5)

where E_{|k|}[X_0] = E[X_0 | F_{V_0^{|k|}}].

Proof. First, let us observe that X_k is F_{V_0^{|k|}}-measurable, since k ∈ V_0^{|k|}. Then define ε_k = sign(E_{|k|}[X_0]), such that

Σ_{k ∈ V_0^1} |E[X_k E_{|k|}[X_0]]| ≤ Σ_{k ∈ V_0^1} E[|X_k| |E_{|k|}[X_0]|] = Σ_{k ∈ V_0^1} E[|X_k| ε_k E_{|k|}[X_0]] = Σ_{k ∈ V_0^1} E[E_{|k|}[|X_k| ε_k X_0]] = Σ_{k ∈ V_0^1} Cov(|X_k| ε_k, X_0).

We use Equation (4.2) of [29, Proposition 1] to get

≤ Σ_{k ∈ V_0^1} 2 ∫_0^{γ(F_{V_0^{|k|}}, X_0)/2} Q_{ε_k |X_k|} ∘ G_{X_0}(u) du = 2 ∫_0^{‖X_0‖₁} Σ_{k ∈ V_0^1} 1_{u < γ(F_{V_0^{|k|}}, X_0)/2} Q_{X_k} ∘ G_{X_0}(u) du ≤ 2 ∫_0^{‖X_0‖₁} Σ_{k ∈ V_0^1} 1_{u < θ(F_{V_0^{|k|}}, X_0)/2} Q_{X_k} ∘ G_{X_0}(u) du ≤ 2 ∫_0^{‖X_0‖₁} Σ_{k ∈ V_0^1} 1_{u < θ_{|k|}/2} Q_{X_k} ∘ G_{X_0}(u) du.

Now recall θ̃(u) = Σ_{k ∈ V_0^1} 1_{u < θ_{|k|}} and note that Q_{X_k} = Q_{X_0}, such that the above is

≤ 2 ∫_0^{‖X_0‖₁} θ̃(u) Q_{X_0} ∘ G_{X_0}(u) du.

This shows that (5.5) holds if (5.4) is satisfied. We now derive sufficient criteria such that (5.4) holds, similarly to [29, Lemma 2].
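The functions Q_X and G_X entering condition (5.4) are easy to make concrete. For X ~ Exp(1) the tail function is e^{−x}, so Q_X(u) = −log u, its primitive is H(x) = x − x log x, and G_X is the inverse of H. The following sketch (quadrature resolution and bisection depth are arbitrary choices) verifies these closed forms and the identity ∫_0^1 Q_X(u) du = E|X| = 1.

```python
import math

# X ~ Exp(1): tail function P(|X| > x) = exp(-x), generalized inverse Q_X(u) = -log(u)
Q = lambda u: -math.log(u)

def H(x):
    # H(x) = integral of Q_X over (0, x) = x - x*log(x) (closed form for Exp(1))
    return x - x * math.log(x) if x > 0 else 0.0

def G(y):
    # G_X = inverse of H on (0, 1], computed by bisection (H is increasing there)
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if H(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# integral of Q_X over (0, 1) equals E|X| = 1 (midpoint quadrature)
nq = 100000
quad = sum(Q((i + 0.5) / nq) for i in range(nq)) / nq
```

In (5.4) the factor θ̃ is a non-increasing step function, so the condition is a joint requirement on the dependence decay and on the tail of X₀, made explicit by these two functions.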
Lemma 5.3.
Let X = (X_t)_{t ∈ Z^m} be a stationary real-valued random field and θ_h defined as above. Then (5.4) holds if ‖X_0‖_r < ∞ for some r > p > 1 and Σ_{h=0}^∞ (h+1)^{m(p−1)(r−1)/(r−p) − 1} θ_h < ∞. In particular, for p = 2 and r = 2 + δ with δ > 0, the above condition holds if θ_h ∈ O(h^{−α}) for α > m(1 + 1/δ).

Proof. As stated in [29, Proof of Lemma 2] we note that ∫_0^{‖X_0‖₁} Q_{X_0}^{r−1} ∘ G_{X_0}(u) du = ∫_0^1 Q_{X_0}^r(u) du = E[|X_0|^r]. Applying Hölder's inequality with q_1 = (r−1)/(r−p) and q_2 = (r−1)/(p−1) gives

∫_0^{‖X_0‖₁} θ̃(u)^{p−1} Q_{X_0}^{p−1} ∘ G_{X_0}(u) du ≤ (∫_0^{‖X_0‖₁} θ̃(u)^{(p−1)(r−1)/(r−p)} du)^{(r−p)/(r−1)} (∫_0^{‖X_0‖₁} Q_{X_0}^{r−1} ∘ G_{X_0}(u) du)^{(p−1)/(r−1)} = (∫_0^{‖X_0‖₁} θ̃(u)^{(p−1)(r−1)/(r−p)} du)^{(r−p)/(r−1)} ‖X_0‖_r^{(rp−r)/(r−1)}.

Let us note that θ_h as defined in (5.2) is non-increasing in h. Then, for any function f we have

f(θ̃(u)) = f(Σ_{k ∈ V_0^1} 1_{u < θ_{|k|}}) = Σ_{h=0}^∞ f(Σ_{k ∈ V_0^1} 1_{u < θ_{|k|}}) 1_{θ_{h+1} ≤ u < θ_h} = Σ_{h=0}^∞ f(#{k ∈ V_0^1 : |k| ≤ h}) 1_{θ_{h+1} ≤ u < θ_h}.

Note that #{k ∈ V_0^1 : |k| ≤ h} ≤ Σ_{i=0}^{m−1} h(2h+1)^i = ((2h+1)^m − 1)/2, such that the above is

= Σ_{h=0}^∞ f(((2h+1)^m − 1)/2) 1_{θ_{h+1} ≤ u < θ_h}.

Let us assume that f is monotonically increasing, sub-multiplicative and f(0) = 0, such that f(((2h+1)^m − 1)/2) ≤ f(2^{m−1}) f((h+1)^m) = Σ_{k=0}^h f(2^{m−1}) (f((k+1)^m) − f(k^m)). Finally we can deduce

f(θ̃(u)) ≤ Σ_{h=0}^∞ f(2^{m−1}) f((h+1)^m) 1_{θ_{h+1} ≤ u < θ_h} ≤ f(2^{m−1}) Σ_{h=0}^∞ (f((h+1)^m) − f(h^m)) 1_{u < θ_h}.

Applying the above result for f(x) = x^v with v = (p−1)(r−1)/(r−p), noting that (h+1)^{vm} − h^{vm} ≤ vm(h+1)^{vm−1} for vm ≥ 1 and (h+1)^{vm} − h^{vm} ≤ vm h^{vm−1} for vm < 1, h > 0, and setting C = 2^{v(m−1)} θ_0^v, we obtain, using ∫_0^{‖X_0‖₁} 1_{u < θ_h} du ≤ θ_h,

(∫_0^{‖X_0‖₁} θ̃(u)^{(p−1)(r−1)/(r−p)} du)^{(r−p)/(r−1)} ≤ (C + 2^{v(m−1)} vm Σ_{h=0}^∞ (h+1)^{vm−1} θ_h)^{(r−p)/(r−1)} ≤ max(1, 2^{(r−p)/(r−1)−1}) (C^{(r−p)/(r−1)} + (vm 2^{v(m−1)} Σ_{h=0}^∞ (h+1)^{vm−1} θ_h)^{(r−p)/(r−1)}),

which is finite under the stated assumption and concludes the proof. Proof of Theorem 2.5.
In order to use [27, Theorem 1] we need to show that

Σ_{k ∈ V_0^1} |E[X_k E_{|k|}[X_0]]| < ∞.

By Lemma 5.2 and Lemma 5.3 this holds if θ_h ∈ O(h^{−α}) with α > m(1 + 1/δ). Finally, since X is stationary, an application of Lemma 5.1 concludes the proof.

Proof of Proposition 3.11. (i) Let t ∈ R^m and ψ >
0. We restrict the MMAF X to a finite support and define thetruncated sequence X ( ψ ) t = Z S Z A t \ V ψt f ( A, t − s )Λ( dA, ds ) . (5.6)Note that the kernel function f is square integrable such that (3.6), (3.7) and (3.8)hold. Therefore, f is Λ-integrable. Since E [ X t X t ] < ∞ for all t ∈ R m by Proposition3.6 we can derive an upper bound of the expectation E h k X t − X ( ψ ) t k i = E "(cid:13)(cid:13)(cid:13)(cid:13) Z S Z A t ∩ V ψt f ( A, t − s )Λ( dA, ds ) (cid:13)(cid:13)(cid:13)(cid:13) ≤ E "(cid:13)(cid:13)(cid:13)(cid:13) Z S Z A t ∩ V ψt f ( A, t − s )Λ( dA, ds ) (cid:13)(cid:13)(cid:13)(cid:13) = n X κ =1 E (cid:18) Z S Z A t ∩ V ψt f ( A, t − s )Λ( dA, ds ) (cid:19) ( κ ) ! . Using Proposition 3.7 and the translation invariance of A t and V ψt this is equal to (cid:18) Z S Z A ∩ V ψ tr( f ( A, − s )Σ Λ f ( A, − s ) ) dsπ ( dA ) (cid:19) . G ∈ F and F ∈ F ∗ , i.e. F, G are bounded with k F k ∞ , k G k ∞ ≤ G is additionally Lipschitz-continuous, u ∈ N ∗ , h ∈ R + , Γ = { i , . . . , i u } ∈ ( R m ) u and j ∈ R m as in Definition 2.2 such that i , . . . , i u ∈ V hj . For a ∈ { , . . . , u } define X i a = Z S Z A ia f ( A, i a − s )Λ( dA, ds ) and X ( ψ ) j = Z S Z A j \ V ψj f ( A, j − s )Λ( dA, ds ) . W.l.o.g. we assume that i a ≤ lex i u for all a ∈ { , . . . , u } . If there exists a ψ such that A i u ∩ A j \ V ψj = ∅ , then A i a ∩ A j \ V ψj = ∅ .Now, A is translation invariant with initial sphere of influence A . Furthermore, A satisfies (3.13). Then, for ψ ( h ) as defined in (3.16) it holds A i u ∩ A j \ V ψj = ∅ .From now on we set ψ = ψ ( h ). We then get that I a = S × A i a and J = S × A j \ V ψj aredisjoint or have intersection on a set S × O , where O ⊂ R m and dim( O ) < m . Since( π × λ )( S × O ) = 0, by the definition of a Lévy basis X i a and X ( ψ ) j are independent forall a ∈ { , . . . , u } . Finally, we get that X Γ and X ( ψ ) j are independent and thereforealso F ( X Γ ) and G ( X ( ψ ) j ). 
Now

|Cov(F(X_Γ), G(X_j))| ≤ |Cov(F(X_Γ), G(X_j^{(ψ)}))| + |Cov(F(X_Γ), G(X_j) − G(X_j^{(ψ)}))| = |E[(G(X_j) − G(X_j^{(ψ)})) F(X_Γ)] − E[G(X_j) − G(X_j^{(ψ)})] E[F(X_Γ)]| ≤ ‖F‖_∞ E[|G(X_j) − G(X_j^{(ψ)})|] ≤ Lip(G) ‖F‖_∞ E[‖X_j − X_j^{(ψ)}‖],

where the first summand vanishes since F(X_Γ) and G(X_j^{(ψ)}) are independent. Using the above inequality for E[‖X_t − X_t^{(ψ)}‖] with ψ as described above we conclude

≤ Lip(G) ‖F‖_∞ (∫_S ∫_{A_0 ∩ V_0^{ψ(h)}} tr(f(A, −s) Σ_Λ f(A, −s)ᵀ) ds π(dA))^{1/2}.

Therefore X is θ-lex weakly dependent with θ-lex-coefficients

θ_X(h) ≤ (∫_S ∫_{A_0 ∩ V_0^{ψ(h)}} tr(f(A, −s) Σ_Λ f(A, −s)ᵀ) ds π(dA))^{1/2},

which converge to zero as h goes to infinity by applying the dominated convergence theorem.

(ii) Let t ∈ R^m and ψ >
0. As in Proposition 3.11 we define X ( ψ ) t . For the upper bound ofthe expectation we can derive with the help of Proposition 3.7 E h k X t − X ( ψ ) t k i ≤ Z S Z A ∩ V ψt tr( f ( A, t − s )Σ Λ f ( A, t − s ) ) dsπ ( dA )+ (cid:13)(cid:13)(cid:13)(cid:13) Z S Z A ∩ V ψt f ( A, t − s ) µ Λ dsπ ( dA ) (cid:13)(cid:13)(cid:13)(cid:13) ! . θ -lex-coefficients.(iii) Since the kernel function f is in L the Equations (3.9) and (3.10) hold and f isΛ-integrable and E [ X t ] < ∞ by Proposition 3.6. In the following we use the notationof Proposition 3.11. Let t ∈ R m and ψ >
0. Then, we can derive with the help ofProposition 3.7 E h k X t − X ( ψ ) t k i ≤ (cid:18) Z S Z A ∩ V ψ (cid:13)(cid:13)(cid:13) f ( A, − s ) γ (cid:13)(cid:13)(cid:13) dsπ ( dA ) + Z S Z A ∩ V ψ Z R d (cid:13)(cid:13)(cid:13) f ( A, − s ) y (cid:13)(cid:13)(cid:13) ν ( dy ) dsπ ( dA ) (cid:19) , where we used that E [ R E f ( t ) dµ ( t )] = R E f ( t ) dν ( t ) for a Poisson random measure µ with corresponding intensity measure ν and arbitrary set E .Now for F , G , X Γ , X j and ψ = ψ ( h ) and as described in the proof of Proposition3.11 we get | Cov ( F ( X Γ ) , G ( X j )) | ≤ Lip ( G ) k F k ∞ (cid:18) Z S Z A ∩ V ψ ( h )0 (cid:13)(cid:13)(cid:13) f ( A, − s ) γ (cid:13)(cid:13)(cid:13) dsπ ( dA )+ Z S Z A ∩ V ψ ( h )0 Z R d (cid:13)(cid:13)(cid:13) f ( A, − s ) y (cid:13)(cid:13)(cid:13) ν ( dy ) dsπ ( dA ) (cid:19) Therefore X is θ -lex weakly dependent with θ -lex-coefficients θ X ( h ) ≤ (cid:18) Z S Z A ∩ V ψ ( h )0 (cid:13)(cid:13)(cid:13) f ( A, − s ) γ (cid:13)(cid:13)(cid:13) dsπ ( dA )+ Z S Z A ∩ V ψ ( h )0 Z R d (cid:13)(cid:13)(cid:13) f ( A, − s ) y (cid:13)(cid:13)(cid:13) ν ( dy ) dsπ ( dA ) (cid:19) , which converge to zero as h goes to infinity by applying the dominated convergencetheorem.(iv) We use the notation of Proposition 3.11 and realize Λ in distribution as the sumof two R d -valued independent Lévy bases Λ and Λ with characteristic quadruplets( γ, Σ , ν (cid:12)(cid:12)(cid:12) k x k≤ , π ) and (0 , , ν (cid:12)(cid:12)(cid:12) k x k > , π ). Since f ∈ L ∩ L we know that both integrals X (Λ ) t = R S R R m f ( A, t − s )Λ ( dA, ds ) and X (Λ ) t = R S R R m f ( A, t − s )Λ ( dA, ds ) existand additionally it holds R S R R m f ( A, t − s )Λ( dA, ds ) = X (Λ ) t + X (Λ ) t . 
Let us note that

E[‖X_t − X_t^{(ψ)}‖] ≤ E[‖X_t^{(Λ_1)} − (X_t^{(Λ_1)})^{(ψ)}‖] + E[‖X_t^{(Λ_2)} − (X_t^{(Λ_2)})^{(ψ)}‖].

Following the proof of (ii) (for the first summand) and of (iii) (for the second summand) we obtain the stated bound for the θ-lex-coefficients.

Proof of Proposition 3.12. In order to show that the MMAF Z is well defined we need to show that g(A, s) is Λ-integrable as described in Theorem 3.3, i.e. g(A, s) satisfies the conditions (3.6), (3.7) and (3.8). Let us consider an induction over k. For the sake of brevity we will consider the norm ‖(x_1, ..., x_m)‖ = ‖x_1‖ + ··· + ‖x_m‖ for x_i ∈ R^d, i = 1, ..., m. Then, for k = 1 we consider g(A, s) = ( f(A, s), f(
A, s − s , . . . , f (cid:16) A, s − s | S k | (cid:17) , where | S k | = 2 · m − . Note that for x ∈ R d [0 , ( k g ( A, s ) x k ) ≤ [0 , ( k f ( A, s ) x k ) , [0 , ( k g ( A, s ) x k ) ≤ [0 , ( k f ( A, s − s ) x k ) ,. . . [0 , ( k g ( A, s ) x k ) ≤ [0 , ( k f ( A, s − s | S k | ) x k ) , such that Z S Z R m (cid:13)(cid:13)(cid:13) g ( A, s ) γ + Z R d g ( A, s ) x (cid:0) [0 , ( k g ( A, s ) x k ) − [0 , ( k x k ) (cid:1) ν ( dx ) (cid:13)(cid:13)(cid:13) dsπ ( dA ) ≤ Z S Z R m (cid:13)(cid:13)(cid:13) f ( A, s ) γ + Z R d f ( A, s ) x (cid:0) [0 , ( k f ( A, s ) x k ) − [0 , ( k x k ) (cid:1) ν ( dx ) (cid:13)(cid:13)(cid:13) dsπ ( dA )+ Z S Z R m (cid:13)(cid:13)(cid:13) f ( A, s − s ) γ + Z R d f ( A, s − s ) x (cid:0) [0 , ( k f ( A, s − s ) x k ) − [0 , ( k x k ) (cid:1) ν ( dx ) (cid:13)(cid:13)(cid:13) dsπ ( dA )+ · · · + Z S Z R m (cid:13)(cid:13)(cid:13) f (cid:0) A, s − s | S k | ) γ + Z R d f ( A, s − s | S k | ) x (cid:0) [0 , ( k f ( A, s − s | S k | ) k− [0 , ( k x k ) (cid:1) ν ( dx ) (cid:13)(cid:13)(cid:13) dsπ ( dA ) . Since f is Λ-integrable we can conclude that the above expression is finite and (3.6)holds. Now Z S Z R m k g ( A, s )Σ g ( A, s ) k dsπ ( dA )= Z S Z R m k f ( A, s )Σ f ( A, s ) k dsπ ( dA )+ Z S Z R m k f ( A, s − s )Σ f ( A, s − s ) k dsπ ( dA )+ . . . + Z S Z R m k f ( A, s − s | S k | )Σ f ( A, s − s | S k | ) k dsπ ( dA ) , is finite since f is Λ-integrable and (3.7) holds. Since ( P ni =1 a i ) ≤ n P ni =1 a i we have k g ( A, s ) k ≤ | S k | (cid:18) k f ( A, s ) k + k f ( A, s − s ) k + . . . + k f ( A, s − s | S k | ) k (cid:19) , and finally Z S Z R m Z R d (cid:18) ∧ k g ( A, s ) x k (cid:19) ν ( dx ) dsπ ( dA ) ≤ | S k | Z S Z R m Z R d (cid:18) ∧ k f ( A, s ) k (cid:19) ν ( dx ) dsπ ( dA ) + Z S Z R m Z R d (cid:18) ∧ k f ( A, s − s ) k (cid:19) ν ( dx ) dsπ ( dA )+ . . . + Z S Z R m Z R d (cid:18) ∧ k f ( A, s − s | S k | ) k (cid:19) ν ( dx ) dsπ ( dA ) ! , f satisfies (3.8). 
Thus, g is Λ-integrable and Z is an ( A, Λ)-influencedMMAF.Assume X satisfies the assumptions of Proposition (i) 3.11 and consider ψ ( h ) as definedin (3.16). Then θ ( i ) Z ( h ) ≤ Z S Z A ∩ V ψ ( h )0 tr (cid:18) g ( A, − s )Σ Λ g ( A, − s ) (cid:19) dsπ ( dA ) =2 Z S Z A ∩ V ψ ( h )0 tr (cid:18) f ( A, − s )Σ Λ f ( A, − s ) (cid:19) dsπ ( dA )+ Z S Z A ∩ V ψ ( h )0 tr (cid:18) f ( A, s − s )Σ Λ f ( A, s − s ) (cid:19) dsπ ( dA )+ . . . + Z S Z A ∩ V ψ ( h )0 tr (cid:18) f ( A, s | S k | − s )Σ Λ f ( A, s | S k | − s (cid:17) ) dsπ ( dA ) ≤ | S k | m Z S Z A ∩ V ψ ( h ) − k tr (cid:18) f (cid:16) A, − s (cid:17) Σ Λ f (cid:16) A, − s (cid:17) (cid:19) dsπ ( dA ) = | S k | m ˆ θ ( i ) X ( h − ψ − ( k )) . where ψ − denotes the inverse of ψ , for all ψ ( h ) > k . Thus, Z is a ( k + 1)(2 k + 1) m − -dimensional θ -lex-weakly dependent MMAF. Similar calculations lead to the other state-ments in (3.22). Proof of Theorem 3.14.
Let us first consider X to be univariate. In order to use [27,Theorem 1] we need to show X k ∈ V | X k E | k | [ X ] | ∈ L . (5.7)The Hölder inequality implies k X k E | k | ( X ) k ≤ k X k k k E [ X |F V | k | ] k , where k X k k < C for all k and a constant C . Furthermore, we note that k E [ X |F ] k = k E [ E [ X |G ] |F ] k ≤ k E [ X |G ] k holds for a σ -algebra G , a sub σ -algebra F and an L random variable X , as the conditional expectation is the orthogonal projection in L .Now, using (3.12) k E [ X |F V | k | ] k = (cid:13)(cid:13)(cid:13)(cid:13) E (cid:20)Z S Z V A ( s ) f ( A, − s )Λ( dA, ds ) (cid:12)(cid:12)(cid:12)(cid:12) σ ( X l : l ∈ V | k | ) (cid:21)(cid:13)(cid:13)(cid:13)(cid:13) ≤ (cid:13)(cid:13)(cid:13)(cid:13) E (cid:20)Z S Z V A ( s ) f ( A, − s )Λ( dA, ds ) (cid:12)(cid:12)(cid:12)(cid:12) σ (Λ( B ) : B ∈ B ( V | k | )) (cid:21)(cid:13)(cid:13)(cid:13)(cid:13) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) E Z S Z V ∩ V | k | A ( s ) f ( A, − s )Λ( dA, ds )+ Z S Z V \ V | k | A ( s ) f ( A, − s )Λ( dA, ds ) (cid:12)(cid:12)(cid:12)(cid:12) σ (Λ( B ) : B ∈ B ( V | k | )) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) = (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) E "Z S Z V | k | A ( s ) f ( A, − s )Λ( dA, ds ) (cid:12)(cid:12)(cid:12)(cid:12) σ (Λ( B ) : B ∈ B ( V | k | )) + E "Z S Z V \ V | k | A ( s ) f ( A, − s )Λ( dA, ds ) (cid:12)(cid:12)(cid:12)(cid:12) σ (Λ( B ) : B ∈ B ( V | k | )) . We note that R S R V | k | A ( s ) f ( A, − s )Λ( dA, ds ) is measurable with respect to σ (Λ( B ) : B ∈B ( V | k | )). Since Λ is a Lévy basis (in particular independent for disjoint sets) we get that R S R V \ V | k | A ( s ) f ( A, − s ) Λ( dA, ds ) is independent of σ (Λ( B ) : B ∈ B ( V | k | )), such thatthe above equation is equal to= (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z S Z V | k | A ( s ) f ( A, − s )Λ( dA, ds ) + E "Z S Z V \ V | k | A ( s ) f ( A, − s )Λ( dA, ds ) . 
Since γ + R k x k > xν ( dx ) = 0 the second summand is equal to zero and we arrive at= (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z S Z V | k | A ( s ) f ( A, − s ) f ( A, − s )Λ( dA, ds ) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) = (cid:18) Z S Z A ∩ V | k | tr( f ( A, − s )Σ Λ f ( A, − s ) ) dsπ ( dA ) (cid:19) = θ X ( | k | ) , using Proposition 3.7.The stated result then follows from [27, Theorem 1] using the dominated convergencetheorem. The Cramér-Wold device establishes the multivariate case straightforwardly. Proof of Proposition 3.19. (i) Let t ∈ R m and ψ >
0. We truncate the MMAF to a finite support, i.e. X ( ψ ) t = Z S Z R m f ( A, t − s ) ( − ψ,ψ ) m ( t − s )Λ( dA, ds ) = Z S Z ( t − ψ,t + ψ ) m f ( A, t − s )Λ( dA, ds ) . (5.8)Note that the kernel function f is square integrable such that (3.6), (3.7) and (3.8)hold. Therefore, f is Λ-integrable. Since E [ X t X t ] < ∞ for all t ∈ R m by Proposition3.6 we can derive an upper bound of the expectation E h k X t − X ( ψ ) t k i E "(cid:13)(cid:13)(cid:13)(cid:13) Z S Z(cid:16) ( t − ψ,t + ψ ) m (cid:17) c f ( A, t − s )Λ( dA, ds ) (cid:13)(cid:13)(cid:13)(cid:13) ≤ E "(cid:13)(cid:13)(cid:13)(cid:13) Z S Z(cid:16) ( t − ψ,t + ψ ) m (cid:17) c f ( A, t − s )Λ( dA, ds ) (cid:13)(cid:13)(cid:13)(cid:13) = n X κ =1 E (cid:18) Z S Z(cid:16) ( t − ψ,t + ψ ) m (cid:17) c f ( A, t − s )Λ( dA, ds ) (cid:19) ( κ ) ! , where x ( κ ) denotes the κ th coordinate of x ∈ R n . Using Proposition 3.7 and thestationarity of X this is equal to (cid:18) Z S Z(cid:16) ( ψ,ψ ) m (cid:17) c tr( f ( A, − s )Σ Λ f ( A, − s ) ) dsπ ( dA ) (cid:19) . Now let
F, G ∈ F, i.e. bounded and additionally Lipschitz continuous, (u, v) ∈ N* × N*, h ∈ R⁺, Γ_i = {i_1, ..., i_u} ∈ (R^m)^u and Γ_j = {j_1, ..., j_v} ∈ (R^m)^v as in Definition 2.1 such that dist(Γ_i, Γ_j) ≥ h. For a ∈ {1, ..., u} and b ∈ {1, ..., v} define

X_{i_a}^{(ψ)} = ∫_S ∫_{(i_a − ψ, i_a + ψ)^m} f(A, i_a − s) Λ(dA, ds) and X_{j_b}^{(ψ)} = ∫_S ∫_{(j_b − ψ, j_b + ψ)^m} f(A, j_b − s) Λ(dA, ds).

Now consider a ∈ {1, ..., u} and b ∈ {1, ..., v} such that inf_{1≤x≤u, 1≤y≤v} ‖i_x − j_y‖_∞ = ‖i_a − j_b‖_∞. Define the two sets I_a = S × (i_a − ψ, i_a + ψ)^m and J_b = S × (j_b − ψ, j_b + ψ)^m. Furthermore, consider ψ = h/2; since ‖i_a − j_b‖_∞ ≥ h it holds that I_a and J_b are disjoint, as well as I_ã and J_b̃ for all ã = 1, ..., u and b̃ = 1, ..., v. By the definition of a Lévy basis, X_{i_a}^{(ψ)} and X_{j_b}^{(ψ)} are independent for all a ∈ {1, ..., u} and b ∈ {1, ..., v}. Finally, we get that X_{Γ_i}^{(ψ)} and X_{Γ_j}^{(ψ)} are independent and therefore also F(X_{Γ_i}^{(ψ)}) and G(X_{Γ_j}^{(ψ)}).

Now

|Cov(F(X_{Γ_i}), G(X_{Γ_j}))| ≤ |Cov(F(X_{Γ_i}) − F(X_{Γ_i}^{(ψ)}), G(X_{Γ_j}))| + |Cov(F(X_{Γ_i}^{(ψ)}), G(X_{Γ_j}) − G(X_{Γ_j}^{(ψ)}))| = |E[(F(X_{Γ_i}) − F(X_{Γ_i}^{(ψ)})) G(X_{Γ_j})] − E[F(X_{Γ_i}) − F(X_{Γ_i}^{(ψ)})] E[G(X_{Γ_j})]| + |E[(G(X_{Γ_j}) − G(X_{Γ_j}^{(ψ)})) F(X_{Γ_i}^{(ψ)})] − E[G(X_{Γ_j}) − G(X_{Γ_j}^{(ψ)})] E[F(X_{Γ_i}^{(ψ)})]| ≤ ‖G‖_∞ E[|F(X_{Γ_i}) − F(X_{Γ_i}^{(ψ)})|] + ‖F‖_∞ E[|G(X_{Γ_j}) − G(X_{Γ_j}^{(ψ)})|] ≤ ‖G‖_∞ Lip(F) Σ_{l=1}^u E[‖X_{i_l} − X_{i_l}^{(ψ)}‖] + ‖F‖_∞ Lip(G) Σ_{k=1}^v E[‖X_{j_k} − X_{j_k}^{(ψ)}‖],

and using the above inequality for E[‖X_t − X_t^{(ψ)}‖] with ψ chosen as described above we conclude

≤ (u ‖G‖_∞ Lip(F) + v ‖F‖_∞ Lip(G)) (∫_S ∫_{((−h/2, h/2)^m)^c} tr(f(A, −s) Σ_Λ f(A, −s)ᵀ) ds π(dA))^{1/2}.
X is η -weakly dependent with η -coefficients η X ( h ) ≤ (cid:18) Z S Z(cid:16)(cid:16) − h , h (cid:17) m (cid:17) c tr( f ( A, − s )Σ Λ f ( A, − s ) ) dsπ ( dA ) (cid:19) , which converge to zero as h goes to infinity by applying the dominated convergencetheorem.(iii) Since the kernel function f is in L the Equations (3.9) and (3.10) hold and f is Λ-integrable. Let t ∈ R m , ψ > X ( ψ ) t as in Proposition 3.19. Moreover, Proposition3.6 implies that E [ X t ] < ∞ . Then, using Proposition 3.7 we arrive at E h k X t − X ( ψ ) t k i ≤ (cid:18) Z S Z(cid:16) ( t − ψ,t + ψ ) m (cid:17) c (cid:13)(cid:13)(cid:13) f ( A, − s ) γ (cid:13)(cid:13)(cid:13) dsπ ( dA ) + Z S Z(cid:16) ( t − ψ,t + ψ ) m (cid:17) c Z R d (cid:13)(cid:13)(cid:13) f ( A, − s ) x (cid:13)(cid:13)(cid:13) ν ( dx ) dsπ ( dA ) (cid:19) . Now for F , G , X Γ i and X Γ j and ψ as described in the proof of Proposition 3.11 weget | Cov ( F ( X Γ i ) , G ( X Γ j )) | ≤ u k G k ∞ Lip ( F ) + v k F k ∞ Lip ( G )) Z S Z(cid:16)(cid:16) − h , h (cid:17) m (cid:17) c (cid:13)(cid:13)(cid:13) f ( A, − s ) γ (cid:13)(cid:13)(cid:13) dsπ ( dA ) + Z S Z(cid:16)(cid:16) − h , h (cid:17) m (cid:17) c Z R d (cid:13)(cid:13)(cid:13) f ( A, − s ) x (cid:13)(cid:13)(cid:13) ν ( dx ) dsπ ( dA ) . Therefore, X is η weakly dependent with η -coefficients η X ( h ) ≤ Z S Z(cid:16)(cid:16) − h , h (cid:17) m (cid:17) c (cid:13)(cid:13)(cid:13) f ( A, − s ) γ (cid:13)(cid:13)(cid:13) dsπ ( dA )+ Z S Z(cid:16)(cid:16) − h , h (cid:17) m (cid:17) c Z R d (cid:13)(cid:13)(cid:13) f ( A, − s ) x (cid:13)(cid:13)(cid:13) ν ( dx ) dsπ ( dA ) , which converge to zero as h goes to infinity by applying the dominated convergencetheorem. Proof of Theorem 3.36.
Let $\|A\|_F = \sqrt{tr(A A')}$ for $A \in M_{n \times d}(\mathbb{R})$ denote the Frobenius norm and $\|x\| = \sum_{\nu=1}^{m} |x^{(\nu)}|$ for $x \in \mathbb{R}^m$. From Proposition 3.19 it follows that $X$ is $\eta$-weakly dependent with $\eta$-coefficients
$$\begin{aligned}
\eta_X(h) &\leq 2 \left( \int_{((-\frac{h}{2}, \frac{h}{2})^m)^c} tr\big( g(-s)\, \Sigma_L\, g(-s)' \big)\, ds \right)^{1/2} = 2 \left( \int_{((-\frac{h}{2}, \frac{h}{2})^m)^c} \big\| g(-s)\, \Sigma_L^{1/2} \big\|_F^2\, ds \right)^{1/2} \\
&\leq 2\, \big\| \Sigma_L^{1/2} \big\|_F \left( \int_{((-\frac{h}{2}, \frac{h}{2})^m)^c} \| g(-s) \|_F^2\, ds \right)^{1/2} \leq 2\, \big\| \Sigma_L^{1/2} \big\|_F\, M \left( \int_{((-\frac{h}{2}, \frac{h}{2})^m)^c} e^{-2K \|s\|}\, ds \right)^{1/2} \\
&= 2\, \big\| \Sigma_L^{1/2} \big\|_F\, M \left( \int_{\mathbb{R}^m} e^{-2K \|s\|}\, ds - \int_{(-\frac{h}{2}, \frac{h}{2})^m} e^{-2K \|s\|}\, ds \right)^{1/2} \\
&= 2\, \big\| \Sigma_L^{1/2} \big\|_F\, M \left( \Big( \frac{1}{K} \Big)^m - \Big( \frac{1 - e^{-Kh}}{K} \Big)^m \right)^{1/2} = 2\, \big\| \Sigma_L^{1/2} \big\|_F\, M \left( \frac{2^m}{(2K)^m} \big( 1 - (1 - e^{-Kh})^m \big) \right)^{1/2} \\
&\leq 2 \sqrt{m}\, \big\| \Sigma_L^{1/2} \big\|_F\, M\, \frac{2^{m/2}}{(2K)^{m/2}}\, e^{-Kh/2},
\end{aligned}$$
where the last inequality follows from Bernoulli's inequality.

Proof of Proposition 4.4. (i) Let $(t, x) \in \mathbb{R} \times \mathbb{R}^m$, $\psi >$
0. We define the two truncated fields
$$\tilde{Y}^{(\psi)}_t(x) = \int_{A_t(x) \setminus V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, \sigma_s(\xi)\, \Lambda(d\xi, ds) \quad \text{and} \quad Y^{(\psi)}_t(x) = \int_{A_t(x) \setminus V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, \sigma^{(\psi)}_s(\xi)\, \Lambda(d\xi, ds), \qquad (5.9)$$
where
$$\sigma^{(\psi)}_t(x) = \int_S \int_{A^{\sigma}_t(x) \setminus V^{\psi}_{(t,x)}} j(A, x - \xi, t - s)\, \Lambda_\sigma(dA, d\xi, ds).$$
Since the kernel function $j$ is square integrable, (3.6), (3.7) and (3.8) hold. Therefore, $j$ is $\Lambda_\sigma$-integrable and $\sigma$ is well-defined and stationary. Now, by Proposition 3.6, it holds that $\sigma_t(x) \in L^2(\Omega)$. Since additionally $l \in L^2(\mathbb{R}^m \times \mathbb{R})$ and $\sigma$ is stationary, it holds that $l\sigma \in L^2(\Omega \times \mathbb{R}^m \times \mathbb{R})$. This implies $l\sigma \in L^2(\mathbb{R}^m \times \mathbb{R})$ almost surely. Then $l\sigma$ satisfies (3.6), (3.7) and (3.8) almost surely and the ambit field $Y$ is well-defined. Analogous to Proposition 3.11, we derive an upper bound of the expectation using Proposition 4.3:
$$\begin{aligned}
E\big[ |Y_t(x) - Y^{(\psi)}_t(x)| \big] &\leq E\big[ |Y_t(x) - \tilde{Y}^{(\psi)}_t(x)| \big] + E\big[ |\tilde{Y}^{(\psi)}_t(x) - Y^{(\psi)}_t(x)| \big] \\
&= E\bigg[ \bigg| \int_{A_t(x) \cap V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, \sigma_s(\xi)\, \Lambda(d\xi, ds) \bigg| \bigg] + E\bigg[ \bigg| \int_{A_t(x) \setminus V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, \big( \sigma_s(\xi) - \sigma^{(\psi)}_s(\xi) \big)\, \Lambda(d\xi, ds) \bigg| \bigg] \\
&\leq E\bigg[ \bigg( \int_{A_t(x) \cap V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, \sigma_s(\xi)\, \Lambda(d\xi, ds) \bigg)^2 \bigg]^{1/2} + E\bigg[ \bigg( \int_{A_t(x) \setminus V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, \big( \sigma_s(\xi) - \sigma^{(\psi)}_s(\xi) \big)\, \Lambda(d\xi, ds) \bigg)^2 \bigg]^{1/2}.
\end{aligned}$$
Using Proposition 4.3 and the translation invariance of $A_t(x)$ and $V^{\psi}_{(t,x)}$, this is equal to
$$\bigg( \Sigma_\Lambda\, E[\sigma_0(0)^2] \int_{A_0(0) \cap V^{\psi}_{(0,0)}} l(-\xi, -s)^2\, d\xi\, ds \bigg)^{1/2} + E\bigg[ \bigg( \int_S \int_{A^{\sigma}_0(0) \cap V^{\psi}_{(0,0)}} j(A, -\xi, -s)\, \Lambda_\sigma(dA, d\xi, ds) \bigg)^2 \bigg]^{1/2} \bigg( \Sigma_\Lambda \int_{A_0(0) \setminus V^{\psi}_{(0,0)}} l(-\xi, -s)^2\, d\xi\, ds \bigg)^{1/2}$$
$$= \bigg( \Sigma_\Lambda\, E[\sigma_0(0)^2] \int_{A_0(0) \cap V^{\psi}_{(0,0)}} l(-\xi, -s)^2\, d\xi\, ds \bigg)^{1/2} + \bigg( \Sigma_{\Lambda_\sigma} \int_S \int_{A^{\sigma}_0(0) \cap V^{\psi}_{(0,0)}} j(A, -\xi, -s)^2\, d\xi\, ds\, \pi(dA) + \mu_{\Lambda_\sigma}^2 \bigg( \int_S \int_{A^{\sigma}_0(0) \cap V^{\psi}_{(0,0)}} j(A, -\xi, -s)\, d\xi\, ds\, \pi(dA) \bigg)^2 \bigg)^{1/2} \bigg( \Sigma_\Lambda \int_{A_0(0) \setminus V^{\psi}_{(0,0)}} l(-\xi, -s)^2\, d\xi\, ds \bigg)^{1/2}.$$
Now let $G \in \mathcal{F}$ and $F \in \mathcal{F}^*$, i.e.
$F, G$ are bounded with $\|F\|_\infty, \|G\|_\infty \leq 1$, $G$ additionally Lipschitz-continuous, $u \in \mathbb{N}^*$, $h \in \mathbb{R}^+$, $\Gamma = \{(t_{i_1}, x_{i_1}), \ldots, (t_{i_u}, x_{i_u})\} \subset \mathbb{R} \times \mathbb{R}^m$ and $(t_j, x_j) \in \mathbb{R} \times \mathbb{R}^m$ as in Definition 2.2 such that $(t_{i_1}, x_{i_1}), \ldots, (t_{i_u}, x_{i_u}) \in V^{h}_{(t_j, x_j)}$. For $a \in \{1, \ldots, u\}$ define
$$Y_{t_{i_a}}(x_{i_a}) = \int_{A_{t_{i_a}}(x_{i_a})} l(x_{i_a} - \xi, t_{i_a} - s)\, \sigma_s(\xi)\, \Lambda(d\xi, ds) \quad \text{and} \quad Y^{(\psi)}_{t_j}(x_j) = \int_{A_{t_j}(x_j) \setminus V^{\psi(h)}_{(t_j, x_j)}} l(x_j - \xi, t_j - s)\, \sigma^{(\psi)}_s(\xi)\, \Lambda(d\xi, ds).$$
W.l.o.g. we assume that $(t_{i_a}, x_{i_a}) \leq_{lex} (t_{i_u}, x_{i_u})$ for all $a \in \{1, \ldots, u\}$. Since $A(0) \cup A^{\sigma}(0)$ satisfies (3.13), we find, analogous to (3.16), a function $\psi(h) = \frac{-hb}{\sqrt{m+1}}$ such that $A^{\sigma}_{s_1}(\xi_1)$ and $A^{\sigma}_{s_2}(\xi_2) \setminus V^{\psi(h)}_{(s_2, \xi_2)}$ are disjoint, or have intersection with zero Lebesgue measure, for all $(s_1, \xi_1) \in A_{t_{i_u}}(x_{i_u})$ and $(s_2, \xi_2) \in A_{t_j}(x_j) \setminus V^{\psi(h)}_{(t_j, x_j)}$. Then, by the definition of a Lévy basis, we get that $\sigma_{s_1}(\xi_1)$ and $\sigma^{(\psi(h))}_{s_2}(\xi_2)$ are independent. Furthermore, it holds that $A_{t_{i_u}}(x_{i_u})$ and $A_{t_j}(x_j) \setminus V^{\psi(h)}_{(t_j, x_j)}$ are disjoint. We set $\psi = \psi(h)$. Finally, we get that $Y_{t_{i_a}}(x_{i_a})$ and $Y^{(\psi(h))}_{t_j}(x_j)$ are independent for all $a \in \{1, \ldots, u\}$ and therefore also $F(Y_\Gamma)$ and $G(Y^{(\psi(h))}_{t_j}(x_j))$.
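The independence mechanism invoked here — integrals of a Lévy basis over disjoint sets are independent, so their covariance vanishes — has a transparent discrete analogue: weighted sums of i.i.d. noise over disjoint index sets. The sketch below is purely illustrative (the weights and index sets are hypothetical); for zero-mean, unit-variance noise the exact covariance of two such sums is the sum of products of the weights over shared indices.

```python
def cov_weighted_sums(w1, w2):
    """Exact covariance of sum_i w1[i]*eps_i and sum_k w2[k]*eps_k for
    i.i.d. zero-mean, unit-variance noise (eps_i): only shared indices contribute."""
    shared = set(w1) & set(w2)
    return sum(w1[i] * w2[i] for i in shared)

# overlapping "integration regions" give a nonzero covariance ...
a = {0: 1.0, 1: 0.5}
b = {1: 0.25, 2: 2.0}
# ... while disjoint regions (the analogue of the disjoint ambit sets above)
# force the covariance, and hence the dependence, to vanish
c = {5: 3.0, 6: -1.0}
```

Here `cov_weighted_sums(a, b)` returns 0.125 (the single shared index 1 contributes 0.5 · 0.25), while `cov_weighted_sums(a, c)` returns 0.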
Now
$$\begin{aligned}
|Cov(F(Y_\Gamma), G(Y_{t_j}(x_j)))| &\leq |Cov(F(Y_\Gamma), G(Y^{(\psi(h))}_{t_j}(x_j)))| + |Cov(F(Y_\Gamma), G(Y_{t_j}(x_j)) - G(Y^{(\psi(h))}_{t_j}(x_j)))| \\
&= \big| E[(G(Y_{t_j}(x_j)) - G(Y^{(\psi(h))}_{t_j}(x_j)))\, F(Y_\Gamma)] - E[G(Y_{t_j}(x_j)) - G(Y^{(\psi(h))}_{t_j}(x_j))]\, E[F(Y_\Gamma)] \big| \\
&\leq 2\, \|F\|_\infty\, E\big[ |G(Y_{t_j}(x_j)) - G(Y^{(\psi(h))}_{t_j}(x_j))| \big] \leq 2\, Lip(G)\, \|F\|_\infty\, E\big[ |Y_{t_j}(x_j) - Y^{(\psi(h))}_{t_j}(x_j)| \big],
\end{aligned}$$
since the first covariance vanishes by the independence of $F(Y_\Gamma)$ and $G(Y^{(\psi(h))}_{t_j}(x_j))$. Using the bound for $E[|Y_t(x) - Y^{(\psi(h))}_t(x)|]$ derived above, we conclude that this is
$$\leq 2\, Lip(G)\, \|F\|_\infty \Bigg[ \bigg( \Sigma_\Lambda\, E[\sigma_0(0)^2] \int_{A_0(0) \cap V^{\psi(h)}_{(0,0)}} l(-\xi, -s)^2\, d\xi\, ds \bigg)^{1/2} + \bigg( \Sigma_{\Lambda_\sigma} \int_S \int_{A^{\sigma}_0(0) \cap V^{\psi(h)}_{(0,0)}} j(A, -\xi, -s)^2\, d\xi\, ds\, \pi(dA) + \mu_{\Lambda_\sigma}^2 \bigg( \int_S \int_{A^{\sigma}_0(0) \cap V^{\psi(h)}_{(0,0)}} j(A, -\xi, -s)\, d\xi\, ds\, \pi(dA) \bigg)^2 \bigg)^{1/2} \bigg( \Sigma_\Lambda \int_{A_0(0) \setminus V^{\psi(h)}_{(0,0)}} l(-\xi, -s)^2\, d\xi\, ds \bigg)^{1/2} \Bigg].$$
Therefore, $Y$ is $\theta$-lex weakly dependent with $\theta$-lex coefficients
$$\theta_Y(h) \leq 2 \Bigg[ \bigg( \Sigma_\Lambda\, E[\sigma_0(0)^2] \int_{A_0(0) \cap V^{\psi(h)}_{(0,0)}} l(-\xi, -s)^2\, d\xi\, ds \bigg)^{1/2} + \bigg( \Sigma_{\Lambda_\sigma} \int_S \int_{A^{\sigma}_0(0) \cap V^{\psi(h)}_{(0,0)}} j(A, -\xi, -s)^2\, d\xi\, ds\, \pi(dA) + \mu_{\Lambda_\sigma}^2 \bigg( \int_S \int_{A^{\sigma}_0(0) \cap V^{\psi(h)}_{(0,0)}} j(A, -\xi, -s)\, d\xi\, ds\, \pi(dA) \bigg)^2 \bigg)^{1/2} \bigg( \Sigma_\Lambda \int_{A_0(0) \setminus V^{\psi(h)}_{(0,0)}} l(-\xi, -s)^2\, d\xi\, ds \bigg)^{1/2} \Bigg],$$
which converges to zero as $h$ goes to infinity by applying the dominated convergence theorem.

(ii) Let $(t, x) \in \mathbb{R} \times \mathbb{R}^m$, $\psi >$
0. As in the proof of part (i) we define $Y^{(\psi)}_t(x)$ and $\tilde{Y}^{(\psi)}_t(x)$. For the upper bound of the expectation we can derive, with the help of Proposition 4.3,
$$\begin{aligned}
&E\big[ |Y_t(x) - \tilde{Y}^{(\psi)}_t(x)| \big] + E\big[ |\tilde{Y}^{(\psi)}_t(x) - Y^{(\psi)}_t(x)| \big] \\
&\quad \leq \bigg( \Sigma_\Lambda\, E[\sigma_0(0)^2] \int_{A_t(x) \cap V^{\psi}_{(t,x)}} l(x - \xi, t - s)^2\, d\xi\, ds + \mu_\Lambda^2\, E[\sigma_0(0)^2] \bigg( \int_{A_t(x) \cap V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, d\xi\, ds \bigg)^2 \bigg)^{1/2} \\
&\quad\quad + \bigg( \Sigma_{\Lambda_\sigma} \int_S \int_{A^{\sigma}_0(0) \cap V^{\psi}_{(0,0)}} j(A, -\xi, -s)^2\, d\xi\, ds\, \pi(dA) + \mu_{\Lambda_\sigma}^2 \bigg( \int_S \int_{A^{\sigma}_0(0) \cap V^{\psi}_{(0,0)}} j(A, -\xi, -s)\, d\xi\, ds\, \pi(dA) \bigg)^2 \bigg)^{1/2} \\
&\quad\quad\quad \times \bigg( \Sigma_\Lambda \int_{A_t(x) \setminus V^{\psi}_{(t,x)}} l(x - \xi, t - s)^2\, d\xi\, ds + \mu_\Lambda^2 \bigg( \int_{A_t(x) \setminus V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, d\xi\, ds \bigg)^2 \bigg)^{1/2},
\end{aligned}$$
and we can proceed as in part (i) to obtain a bound for the $\theta$-lex coefficients.

(iii) Note that the kernel function $j$ is square integrable, such that $\sigma$ is well-defined and stationary. Now, by Proposition 3.6, it holds that $\sigma_0(0) \in L^1(\Omega)$. Since additionally $l \in L^1(\mathbb{R}^m \times \mathbb{R})$ and $\sigma$ is stationary, it holds that $l\sigma \in L^1(\Omega \times \mathbb{R}^m \times \mathbb{R})$. This implies $l\sigma \in L^1(\mathbb{R}^m \times \mathbb{R})$ almost surely. Then $l\sigma$ satisfies (3.9) and (3.10) almost surely and the ambit field $Y$ is well-defined. In the following we use the notation of part (i). Let $(t, x) \in \mathbb{R} \times \mathbb{R}^m$ and $\psi >$
0. Then, we can derive with the help of Proposition 4.3
$$\begin{aligned}
E\big[ |Y_t(x) - \tilde{Y}^{(\psi)}_t(x)| \big] + E\big[ |\tilde{Y}^{(\psi)}_t(x) - Y^{(\psi)}_t(x)| \big] &\leq E[|\sigma_0(0)|] \bigg( |\gamma_0| + \int_{\mathbb{R}^d} |y|\, \nu(dy) \bigg) \int_{A_0(0) \cap V^{\psi}_{(0,0)}} |l(-\xi, -s)|\, d\xi\, ds \\
&\quad + E[|\sigma_0(0) - \sigma^{(\psi)}_0(0)|] \bigg( |\gamma_0| + \int_{\mathbb{R}^d} |y|\, \nu(dy) \bigg) \int_{A_0(0) \setminus V^{\psi}_{(0,0)}} |l(-\xi, -s)|\, d\xi\, ds.
\end{aligned}$$
Finally, we can proceed as in the proof of part (i) and obtain a bound for the $\theta$-lex coefficients.

Proof of Proposition 4.5.
Let $(t, x) \in \mathbb{R} \times \mathbb{R}^m$, $\psi >$
0. We define the truncated field
$$Y^{(\psi)}_t(x) = \int_{A_t(x) \setminus V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, \sigma_s(\xi)\, \Lambda(d\xi, ds).$$
Since $l \in L^2(\mathbb{R}^m \times \mathbb{R})$, $\sigma_0(0) \in L^2(\Omega)$ and $\sigma$ is stationary, it holds that $l\sigma \in L^2(\Omega \times \mathbb{R}^m \times \mathbb{R})$. This implies $l\sigma \in L^2(\mathbb{R}^m \times \mathbb{R})$ almost surely. Then $l\sigma$ satisfies (3.6), (3.7) and (3.8) almost surely and the ambit field $Y$ is well-defined. Finally, analogous to Proposition 3.11, we derive an upper bound of the expectation using Proposition 4.3:
$$E\big[ |Y_t(x) - Y^{(\psi)}_t(x)| \big] = E\bigg[ \bigg| \int_{A_t(x) \cap V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, \sigma_s(\xi)\, \Lambda(d\xi, ds) \bigg| \bigg] \leq E\bigg[ \bigg( \int_{A_t(x) \cap V^{\psi}_{(t,x)}} l(x - \xi, t - s)\, \sigma_s(\xi)\, \Lambda(d\xi, ds) \bigg)^2 \bigg]^{1/2}.$$
Using Proposition 4.3 and the translation invariance of $A_t(x)$ and $V^{\psi}_{(t,x)}$, this is equal to
$$\bigg( \Sigma_\Lambda\, E[\sigma_0(0)^2] \int_{A_0(0) \cap V^{\psi}_{(0,0)}} l(-\xi, -s)^2\, d\xi\, ds \bigg)^{1/2}.$$
Define $\Gamma \subset \mathbb{R} \times \mathbb{R}^m$, $(t_j, x_j) \in \mathbb{R} \times \mathbb{R}^m$ and $\psi(h)$ as in the proof of Proposition 4.4. Since $\sigma$ is $p$-dependent, we get that $Y_\Gamma$ and $Y^{(\psi(h))}_{t_j}(x_j)$ are independent for sufficiently big $h$. Then, for these sufficiently big $h$, $Y$ is $\theta$-lex weakly dependent with $\theta$-lex coefficients
$$\theta_Y(h) \leq 2 \bigg( \Sigma_\Lambda\, E[\sigma_0(0)^2] \int_{A_0(0) \cap V^{\psi(h)}_{(0,0)}} l(-\xi, -s)^2\, d\xi\, ds \bigg)^{1/2},$$
which converge to zero as $h$ goes to infinity by applying the dominated convergence theorem.

References

[1]
Andrews, D. W. K. (1984). Non-strong mixing autoregressive processes.
J. Appl. Probab. 21, 930–934.[2]
Barndorff-Nielsen, O. E., Benth, F. E., and Veraart, A. E. D. (2014). On stochastic integration for volatility modulated Lévy-driven Volterra processes.
Stoch. Proc. Appl. 124, 812–847.[3]
Barndorff-Nielsen, O. E., Benth, F. E., and Veraart, A. E. D. (2010). Modelling electricity forward markets by ambit fields.
Adv. Appl. Probab. 46, 719–745.[4]
Barndorff-Nielsen, O. E., Benth, F. E., and Veraart, A. E. D. (2012). Recent advances in ambit stochastics with a view towards tempo-spatial stochastic volatility/intermittency.
Banach Center Publications 104, 25–60.[5]
Barndorff-Nielsen, O. E., Benth, F. E., and Veraart, A. E. D. (2015). Cross-commodity modelling by multivariate ambit fields, in
Commodities, Energy and Environmental Finance, edited by R. Aïd, M. Ludkovski and R. Sircar.
Springer, New York, 109–148.[6]
Barndorff-Nielsen, O. E., Benth, F. E., and Veraart, A. E. D. (2018).
Ambit Stochastics.
Springer, Cham.[7]
Barndorff-Nielsen, O. E., Corcuera, J. M., and Podolskij, M. (2011). Multipower variation for Brownian semistationary processes.
Bernoulli 17(4), 1159–1194.[8]
Barndorff-Nielsen, O. E., Jensen, E. B. V., Jónsdóttir, K. Y., and Schmiegel, J. (2007). Spatio-temporal modelling - with a view to biological growth, in
B. Finkenstädt, L. Held and V. Isham: Statistical Methods for Spatio-Temporal Systems.
Chapman and Hall/CRC, London, 47–76.[9]
Barndorff-Nielsen, O. E., Pakkanen, M. S., and Schmiegel, J. (2014). Assessing relative volatility/intermittency/energy dissipation.
Electron. J. Stat. 8, 1996–2021.[10]
Barndorff-Nielsen, O. E., and Schmiegel, J. (2007). Ambit processes: withapplications to turbulence and cancer growth, in
Stochastic Analysis and Applications: The Abel Symposium 2005, Abel Symposia, vol. 2, edited by F. E. Benth, G. Di Nunno, T. Lindstrom, B. Øksendal and T. Zhang.
Springer, Berlin, 93–124.[11]
Barndorff-Nielsen, O. E., and Schmiegel, J. (2004). Lévy-based tempo-spatial modelling; with applications to turbulence.
Uspekhi Matematicheskikh Nauk 159, 159–01.[12]
Barndorff-Nielsen, O. E., and Stelzer, R. (2011). Multivariate supOU processes.
Ann. Appl. Probab. 21, 140–182.[13]
Basse-O’Connor, A., Graversen, S.-E., and Pedersen, J. (2013). Stochastic integration on the real line.
Theor. Probab. Appl. 58(2), 193–215.[14]
Basse-O’Connor, A., Heinrich, C., and Podolskij, M. (2018). On limit theory for Lévy semi-stationary processes.
Bernoulli 24 , 3117–3146.[15]
Berger, D. (2019). Central limit theorems for moving average random fields with non-random and random sampling on lattices. arXiv:1902.01255v1.[16]
Berger, D. (2019). Lévy driven CARMA generalized processes and stochastic partial differential equations. arXiv:1904.02928v1.[17]
Bolthausen, E. (1982). On the central limit theorem for stationary mixing random fields.
Ann. Probab. 10 (4) , 1047–1050.[18]
Boyd, S., and Vandenberghe, L. (2004).
Convex Optimization . CambridgeUniversity Press, Cambridge.[19]
Bradley, R. C. (1989). A caution on mixing conditions for random fields.
Stat. Probabil. Lett. 8, 489–491.[20]
Bradley, R. C. (2007).
Introduction to strong mixing conditions . Kendrick Press.[21]
Brockwell, P. J., and Davis, R. A. (1991).
Time series: theory and methods .Springer, New York.[22]
Brockwell, P. J., and Matsuda, Y. (2017). Continuous auto-regressive moving average random fields on R^n. J. Roy. Stat. Soc. B Met. 79, 833–857.[23]
Bulinskii, A. V., and Shashkin, A. (2007).
Limit theorems for associated random fields and related systems. World Scientific, Singapore.[24]
Chen, D. (1991). A uniform central limit theorem for nonuniform φ-mixing random fields. Ann. Probab. 19, 636–649.[25]
Chong, C. and Klüppelberg, C. (2015). Integrability conditions for space-time stochastic integrals: Theory and applications.
Bernoulli 21 , 2190–2216.[26]
Curato, I. V., and Stelzer, R. (2019). Weak dependence and GMM estimation of supOU and mixed moving average processes.
Electron. J. Stat. 13 , 310–360.[27]
Dedecker, J. (1998). A central limit theorem for stationary random fields.
Probab. Theory Rel. 110, 397–426.[28]
Dedecker, J. (1998). Exponential inequalities and functional central limit theorems for random fields.
ESAIM-Probab. Stat. 5 , 77–104.[29]
Dedecker, J., and Doukhan, P. (2003). A new covariance inequality and applications.
Stoch. Proc. Appl. 106, 63–80.[30]
Dedecker, J., Doukhan, P., Lang, G., Léon, J. R., Louhichi, S., and Prieur, C. (2008).
Weak dependence: with examples and applications. Springer, New York.[31]
Dedecker, J., and Rio, E. (2000). On the functional central limit theorem for stationary processes.
Annales de l’Institut Henri Poincaré, Probabilités et Statistiques 36, 1–34.[32]
Doukhan, P., and Louhichi, S. (1999). A new weak dependence condition and applications to moment inequalities.
Stoch. Proc. Appl. 84, 313–342.[33]
Doukhan, P., Mayo, N., and Truquet, L. (2008). Weak dependence, models and some applications.
Metrika 69, 199–225.[34]
Doukhan, P., and Truquet, L. (2007). A fixed point approach to model random fields.
Alea 3, 111–137.[35]
Doukhan, P., and Wintenberger, O. (2007). An invariance principle for weakly dependent stationary general models.
Probab. Math. Stat. 27 , 45–73.[36]
Gordin, M. I. (1969). The central limit theorem for stationary processes.
Doklady Akademii Nauk 188, 739–741.[37]
Hall, A. R. (2005).
Generalized method of moments.
Oxford University Press,Oxford.[38]
Higdon, D. (2002). Space and space-time modeling using process convolutions.In
Anderson C.W., Barnett V., Chatwin P.C., El-Shaarawi A.H. (eds) Quantitative Methods for Current Environmental Issues.
Springer, London, 37–56.[39]
Jacod, J., and Shiryaev, A. N. (2003).
Limit theorems for stochastic processes, 2nd ed.
Springer, Berlin.[40]
Jónsdóttir, K. Y., Rønn-Nielsen, A., Mouridsen, K. and Jensen, E. B. V. (2013). Lévy-based modelling in brain imaging.
Scand. J. Stat. 40 , 511–529.[41]
Krengel, U. (1985).
Ergodic theorems . Walter de Gruyter, Berlin.[42]
Klüppelberg, C., and Pham, V. S. (2019). Estimation of causal CARMA random fields. arXiv:1902.04962v1.[43]
Maltz, A. L. (1999). On the central limit theorem for nonuniform φ-mixing random fields. J. Theor. Probab. 12, 643–660.[44]
Nakhapetyan, B. S. (1988). An approach to proving limit theorems for dependent random variables.
Theor. Probab. Appl. 32, 535–539.[45]
Newman, C. M. (1980). Normal fluctuations and the FKG inequalities.
Commun.Math. Phys. 74 , 119–128.[46]
Nguyen, M., and Veraart, A. E. D. (2017). Spatio-temporal Ornstein-Uhlenbeck processes: theory, simulation and statistical inference.
Scand. J. Stat. 44, 46–80.[47]
Nguyen, M., and Veraart, A. E. D. (2018). Bridging between short-range and long-range dependence with mixed spatio-temporal Ornstein-Uhlenbeck processes.
Stochastics 90 , 1023–1052.[48]
Pakkanen, M. S. (2014). Limit theorems for power variations of ambit fields driven by white noise.
Stoch. Proc. Appl. 124, 1942–1973.[49]
Passeggeri, R., and Veraart, A. E. D. (2019). Mixing properties of multivariate infinitely divisible random fields.
J. Theor. Probab. 32 , 1845–1879.[50]
Pedersen, J. (2003). The Lévy-Itô decomposition of an independently scattered random measure.
MaPhySto Research Paper 2, available at .[51]
Pham, V. S. (2018). Lévy-driven causal CARMA random fields. arXiv:1805.08807v1 .[52]
Rajput, B. S., and Rosiński, J. (1989). Spectral representations of infinitely divisible processes.
Probab. Theory Rel. 82 , 451–487.[53]
Rosenblatt, M. (1985).
Stationary sequences and random fields.
Birkhäuser, Basel.[54]
Sato, K. I. (2013).
Lévy processes and infinitely divisible distributions, Cambridge studies in advanced mathematics 68.
Cambridge University Press, Cambridge.[55]
Stelzer, R., Tosstorff, T., and Wittilinger, M. (2015). Moment based estimation of supOU processes and a related stochastic volatility model.