Non-central limit theorems for functionals of random fields on hypersurfaces
Andriy Olenko and Volodymyr Vaskovych
Department of Mathematics and Statistics, La Trobe University, Melbourne, 3086, [email protected]
Acknowledgements

Andriy Olenko was partially supported under the Australian Research Council's Discovery Projects funding scheme (project number DP160101366) and the La Trobe University DRP Grant in Mathematical and Computing Sciences.

∗ Short title: Non-central asymptotics for random fields on hypersurfaces.

Abstract

This paper derives non-central asymptotic results for non-linear integral functionals of homogeneous isotropic Gaussian random fields defined on hypersurfaces in $\mathbb{R}^d$. We obtain the rate of convergence for these functionals. The results extend recent findings for solid figures. We apply the obtained results to the case of sojourn measures and demonstrate different limit situations.

Keywords:
Non-central limit theorems, Random field, Long-range dependence, Hermite-type distribution, Sojourn measures.
MSC:
Primary 60G60; Secondary 60F05, 60G12
Introduction

In this article we study real-valued homogeneous isotropic Gaussian random fields with long-range dependence. Long-range dependence is a well-established empirical phenomenon which appears in various fields, such as physics, hydrology, signal processing, network traffic analysis, telecommunications, finance, and econometrics, just to name a few. See [1], [2], [3] for more details.

Various functionals of random fields have been a topic of interest in recent years, see, for example, [4], [5], [6]. In this research, we focus on non-linear integral functionals of Gaussian random fields defined on hypersurface sets. These functionals play an important role in various fields, for example, in cosmology, meteorology and image analysis. It was shown, see [7], [8], and [9], that these functionals can produce non-Gaussian limits and require normalizing coefficients different from those in central limit theorems. For a more detailed overview of the problem, history of its development, various approaches and existing results one can refer to [10] and references therein.

In this research we use results from [10], [11], [12] and obtain analogous asymptotics for the case of hypersurfaces. Most of the research conducted in this area considered only random fields defined on solid figures. Limit distributions for the functionals on spheres, which are a particular case of hypersurfaces, were studied in [2]. However, there were no results about the rate of convergence for the case of hypersurfaces. In this article we consider a general case of hypersurface sets. We are interested in both limit distributions and rates of convergence to these limits. We prove that, analogously to the solid figure situation, the limit distribution is a Hermite-type distribution and it depends only on the Hermite rank of the integrands.
However, while for all integrands with the same Hermite rank the limit distribution remains the same, we demonstrate that the rate of convergence can be different. To prove the results we need some fine geometric properties of hypersurfaces. Specifically, we use the rates of the average decay of the Fourier transform of surface measures, see [13] and [14].

Geometric properties of random fields on hypersurfaces are of interest in many applied areas, such as medical imaging, meteorology, and astrophysics. Many of these properties can be studied by the use of sojourn measures. Extensive literature is available concerning this topic; for some examples see [12], [15], [16], [17]. Recently, non-Gaussian limits for the first Minkowski functional of random fields defined on 3-dimensional spheres were discussed in [18]. In this article we obtain limits for sojourn measures of random fields defined on arbitrary hypersurfaces. We provide examples where these limits are Gaussian and Hermite-type of the rank 2, 3, and 4.

Various authors, see [19], [20], [21] and the references therein, studied distances between two Wiener–Itô integrals of the same rank. These results can be used to estimate the rate of convergence when the integrands are Hermite polynomials of Gaussian random fields. We estimate the Kolmogorov distance between two Wiener–Itô integrals of the same rank and provide a small comparison with the existing results.

The article is organized as follows. In Section 2 we recall some basic definitions and assumptions that are required to present our main results. Section 3 studies the asymptotic behavior of the considered functionals. Section 4 demonstrates how results from Section 3 can be applied in the case of sojourn measures. Section 5 provides the results on the rate of convergence.
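The effect of long-range dependence discussed above can be illustrated numerically. The following sketch is ours, not from the paper: the covariance model $B(k) = (1+|k|)^{-\alpha}$ and all helper names are chosen purely for illustration. It shows the super-linear growth of partial-sum variances that distinguishes long-range from short-range dependence:

```python
import math

def lrd_cov(k, alpha):
    # Illustrative covariance with long-range dependence: B(k) = (1 + |k|)^(-alpha).
    # For alpha in (0, 1) the covariances are not summable.
    return (1.0 + abs(k)) ** (-alpha)

def partial_sum_variance(n, alpha):
    # Var(X_1 + ... + X_n) = n * B(0) + 2 * sum_{k=1}^{n-1} (n - k) * B(k)
    # for a stationary sequence with covariance function B.
    return n * lrd_cov(0, alpha) + 2.0 * sum(
        (n - k) * lrd_cov(k, alpha) for k in range(1, n)
    )

alpha = 0.5
v100 = partial_sum_variance(100, alpha)
v200 = partial_sum_variance(200, alpha)
# Under long-range dependence Var(S_n) grows like n^(2 - alpha), so doubling n
# multiplies the variance by roughly 2^(2 - alpha) > 2, in contrast with the
# factor 2 of the short-range dependent (summable covariance) case.
ratio = v200 / v100
```

For $\alpha = 0.5$ the variance roughly triples when $n$ doubles, reflecting the $n^{2-\alpha}$ growth, whereas under short-range dependence it would only double.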
Definitions and assumptions
In this section we provide the main definitions and assumptions that are used in this work.

In what follows, $|\cdot|$ and $\|\cdot\|$ denote the Lebesgue measure and the Euclidean distance in $\mathbb{R}^d$, $d \ge 2$, respectively. Let $B(y, s)$ be a $d$-dimensional ball with centre $y$ and radius $s$, and let $S_{d-1}(r)$ be a sphere in $\mathbb{R}^d$ with the radius $r$. We use the symbols $C$ and $\delta$ to denote constants which are not important for our exposition. Moreover, the same symbol may be used for different constants appearing in the same proof.

Let $\Delta$ be a bounded set in $\mathbb{R}^d$, $d \ge 2$, with a boundary $\partial\Delta$. Let $\Delta(r)$, $r > 0$, be the homothetic image of the set $\Delta$ with the centre of homothety at the origin and the coefficient $r > 0$, that is $|\Delta(r)| = r^d |\Delta|$. Let $\partial\Delta$ be an Ahlfors–David regular hypersurface in $\mathbb{R}^d$. One can find more information about Ahlfors–David regular sets in [22] and references therein.

Definition 1. [22] A closed hypersurface $\partial\Delta$ is called Ahlfors–David regular if there exists a constant $C$ such that for any $y \in \partial\Delta$ and $s > 0$
\[
C^{-1} s^{d-1} < \int_{\partial\Delta \cap B(y,s)} d\sigma(x) < C s^{d-1}, \tag{1}
\]
where $\sigma(\cdot)$ is the $(d-1)$-dimensional surface measure on $\partial\Delta$.

Let
\[
\mathcal{K}(x) := \int_{\partial\Delta} e^{i\langle x, u \rangle}\, d\sigma(u)
\]
denote the Fourier transform of the surface measure $\sigma(\cdot)$. In what follows we assume that $\partial\Delta$ is smooth enough for the average decay results of [13] and [14] to apply, so that
\[
\int_{S_{d-1}(1)} |\mathcal{K}(r\omega)|^2\, d\omega \le C r^{-(d-1)}, \quad r \to +\infty. \tag{2}
\]
By the representation theorem for slowly varying functions, see [24], there exists $C_0 > 0$ such that for all $r \ge C_0$ a slowly varying function $L(\cdot)$ can be written in the form
\[
L(r) = \exp\left( \zeta_1(r) + \int_{C_0}^{r} \frac{\zeta_2(u)}{u}\, du \right),
\]
where $\zeta_1(\cdot)$ and $\zeta_2(\cdot)$ are such measurable and bounded functions that $\zeta_2(r) \to 0$ and $\zeta_1(r) \to C$ ($|C| < \infty$), when $r \to \infty$.

If $L(\cdot)$ varies slowly, then for an arbitrary $a > 0$ it holds that $r^a L(r) \to \infty$ and $r^{-a} L(r) \to 0$, when $r \to \infty$, see Proposition 1.3.6 [24].

Definition 5. [24] A measurable function $g : (0, \infty) \to (0, \infty)$ is said to be regularly varying at infinity, denoted $g(\cdot) \in R_\tau$, if there exists $\tau$ such that, for all $t > 0$, it holds that
\[
\lim_{\lambda \to \infty} \frac{g(\lambda t)}{g(\lambda)} = t^{\tau}.
\]

Definition 6. [24] Let $g : (0, \infty) \to (0, \infty)$ be a measurable function and $g(x) \to 0$ as $x \to \infty$. A slowly varying function $L(\cdot)$ is said to be slowly varying with remainder of type 2, or to belong to the class SR2, if for all $\lambda > 1$
\[
\frac{L(\lambda x)}{L(x)} - 1 \sim k(\lambda)\, g(x), \quad x \to \infty,
\]
for some function $k(\cdot)$. If there exists $\lambda_0$ such that $k(\lambda_0) \ne 0$ and $k(\lambda_0 \mu) \ne k(\mu)$ for all $\mu$, then $g(\cdot) \in R_\tau$ for some $\tau \le 0$ and $k(\lambda) = c\, h_\tau(\lambda)$, where
\[
h_\tau(\lambda) = \begin{cases} \ln(\lambda), & \text{if } \tau = 0, \\ \dfrac{\lambda^{\tau} - 1}{\tau}, & \text{if } \tau \ne 0. \end{cases} \tag{5}
\]

Remark 3. An example of a function that satisfies Definition 6 for $\tau = 0$ is $L(x) = \ln(x)$. Indeed,
\[
\frac{L(\lambda x)}{L(x)} - 1 = \frac{\ln(\lambda) + \ln(x)}{\ln(x)} - 1 = \ln(\lambda) \cdot \frac{1}{\ln(x)}.
\]

Assumption 1. Let $\eta(x)$, $x \in \mathbb{R}^d$, be a homogeneous isotropic Gaussian random field with $\mathrm{E}\,\eta(x) = 0$ and a covariance function $B(x)$ such that $B(0) = 1$ and
\[
B(x) = \mathrm{E}\,\eta(0)\eta(x) = \|x\|^{-\alpha} L_0(\|x\|),
\]
where $L_0(\|\cdot\|)$ is a function slowly varying at infinity.

Assumption 2. The random field $\eta(x)$, $x \in \mathbb{R}^d$, has the spectral density
\[
f(\|\lambda\|) = c_2(d, \alpha)\, \|\lambda\|^{\alpha - d} L\left( \frac{1}{\|\lambda\|} \right), \tag{6}
\]
where
\[
c_2(d, \alpha) := \frac{\Gamma\left(\frac{d-\alpha}{2}\right)}{2^{\alpha} \pi^{d/2} \Gamma\left(\frac{\alpha}{2}\right)},
\]
and $L(\|\cdot\|)$ is a locally bounded function which is slowly varying at infinity and satisfies for sufficiently large $r$ the condition
\[
\left| 1 - \frac{L(tr)}{L(r)} \right| \le C\, g(r)\, h_\tau(t), \quad t \ge 1, \tag{7}
\]
where $g(\cdot) \in R_\tau$, $\tau \le 0$, is such that $g(x) \to 0$, $x \to \infty$, and $h_\tau(t)$ is defined by (5).

Remark 4. By Tauberian and Abelian theorems, see [25], for $L_0(\cdot)$ and $L(\cdot)$ given in Assumptions 1 and 2 it holds $L_0(r) \sim L(r)$, $r \to +\infty$.

Remark 5. [10] If $L$ satisfies (7), then for any $k \in \mathbb{N}$, $\delta > 0$, and sufficiently large $r$
\[
\left| 1 - \frac{L^{k/2}(tr)}{L^{k/2}(r)} \right| \le C\, g(r)\, h_\tau(t)\, t^{\delta}, \quad t \ge 1.
\]

Definition 7. Let $Y_1$ and $Y_2$ be arbitrary random variables. The uniform (Kolmogorov) metric for the distributions of $Y_1$ and $Y_2$ is defined by the formula
\[
\rho(Y_1, Y_2) = \sup_{z \in \mathbb{R}} |P(Y_1 \le z) - P(Y_2 \le z)|.
\]

The next result follows from Lemma 1.8 [26].
Lemma 1. If $X$, $Y$ and $Z$ are arbitrary random variables, then for any $\varepsilon > 0$
\[
\rho(X + Y, Z) \le \rho(X, Z) + \rho(Z + \varepsilon, Z) + P(|Y| \ge \varepsilon).
\]

In this section we are interested in the asymptotic distribution of the random variable $K_r = \int_{\partial\Delta(r)} G(\eta(x))\, d\sigma(x)$. First, we prove Theorem 1, which is an analogue of the so-called reduction theorem, see Theorem 4 in [11], in the case of hypersurface integrals. Using this result, in Theorem 2 we derive normalizing coefficients and limit distributions of the random variable $K_r$ that depend on the Hermite rank $\kappa$ of the function $G(\cdot)$.

Theorem 1.
Suppose that $\mathrm{Hrank}\, G = \kappa \in \mathbb{N}$ and $\eta(x)$, $x \in \mathbb{R}^d$, satisfies Assumption 1 for $\alpha \in (0, (d-1)/\kappa)$. If at least one of the following random variables
\[
\frac{K_r}{\sqrt{\mathrm{Var}\, K_r}}, \quad \frac{K_r}{\sqrt{\mathrm{Var}\, K_{r,\kappa}}} \quad \text{and} \quad \frac{K_{r,\kappa}}{\sqrt{\mathrm{Var}\, K_{r,\kappa}}}
\]
has a limit distribution, then the limit distributions of the other random variables also exist and they coincide when $r \to \infty$.

Proof.
Let
\[
V_r := \sum_{j \ge \kappa + 1} \frac{C_j}{j!} \int_{\partial\Delta(r)} H_j(\eta(x))\, d\sigma(x),
\]
then by Remark 2
\[
\mathrm{Var}\, K_r = \mathrm{Var}\, K_{r,\kappa} + \mathrm{Var}\, V_r.
\]
By (3) and (4)
\[
\mathrm{Var}\, K_{r,\kappa} = \frac{C_\kappa^2}{\kappa!} \int_{\partial\Delta(r)} \int_{\partial\Delta(r)} \|x - y\|^{-\alpha\kappa} L_0^{\kappa}(\|x - y\|)\, d\sigma(x)\, d\sigma(y)
= |\partial\Delta|^2\, r^{2(d-1) - \alpha\kappa}\, \frac{C_\kappa^2}{\kappa!} \int_0^{\mathrm{diam}\{\Delta\}} z^{-\alpha\kappa} L_0^{\kappa}(rz)\, \psi_\Delta(z)\, dz.
\]
If $\alpha \in (0, (d-1)/\kappa)$, then by asymptotic properties of integrals of slowly varying functions (see Theorem 2.7 [27]) we get
\[
\mathrm{Var}\, K_{r,\kappa} = c_1(\kappa, \alpha, \Delta)\, |\partial\Delta|^2\, \frac{C_\kappa^2}{\kappa!}\, r^{2(d-1) - \kappa\alpha} L_0^{\kappa}(r)\, (1 + o(1)), \quad r \to \infty,
\]
where
\[
c_1(\kappa, \alpha, \Delta) := \int_0^{\mathrm{diam}\{\Delta\}} z^{-\alpha\kappa}\, \psi_\Delta(z)\, dz.
\]
Similarly to $\mathrm{Var}\, K_{r,\kappa}$ we obtain
\[
\mathrm{Var}\, V_r = |\partial\Delta|^2\, r^{2(d-1)} \sum_{j \ge \kappa+1} \frac{C_j^2}{j!} \int_0^{\mathrm{diam}\{\Delta\}} (rz)^{-\alpha j} L_0^{j}(rz)\, \psi_\Delta(z)\, dz.
\]
It follows from $z^{-\alpha} L_0(z) \in [0, 1]$, $z \ge 0$, that
\[
\mathrm{Var}\, V_r \le |\partial\Delta|^2\, r^{2(d-1) - (\kappa+1)\alpha} \sum_{j \ge \kappa+1} \frac{C_j^2}{j!} \int_0^{\mathrm{diam}\{\Delta\}} z^{-\alpha(\kappa+1)} L_0^{\kappa+1}(rz)\, \psi_\Delta(z)\, dz
\]
\[
= |\partial\Delta|^2\, r^{2(d-1) - \kappa\alpha} L_0^{\kappa}(r) \sum_{j \ge \kappa+1} \frac{C_j^2}{j!} \int_0^{\mathrm{diam}\{\Delta\}} z^{-\alpha\kappa}\, \frac{L_0^{\kappa}(rz)}{L_0^{\kappa}(r)} \cdot \frac{L_0(rz)}{(rz)^{\alpha}}\, \psi_\Delta(z)\, dz.
\]
Let us split the above integral into two parts $I_1$ and $I_2$ with the ranges of integration $[0, r^{-\beta}]$ and $(r^{-\beta}, \mathrm{diam}\{\Delta\}]$ respectively, where $\beta \in (0, 1)$. As $z^{-\alpha} L_0(z) \in [0, 1]$, $z \ge 0$, we can estimate the first integral as follows
\[
I_1 \le \int_0^{r^{-\beta}} z^{-\alpha\kappa}\, \frac{L_0^{\kappa}(rz)}{L_0^{\kappa}(r)}\, \psi_\Delta(z)\, dz
\le \sup_{0 \le s \le r^{1-\beta}} \frac{s^{\delta} L_0^{\kappa}(s)}{r^{\delta} L_0^{\kappa}(r)} \int_0^{r^{-\beta}} z^{-(\delta + \alpha\kappa)}\, \psi_\Delta(z)\, dz
\le \left( \sup_{0 \le s \le r} \frac{s^{\delta/\kappa} L_0(s)}{r^{\delta/\kappa} L_0(r)} \right)^{\kappa} \int_0^{r^{-\beta}} z^{-(\delta + \alpha\kappa)}\, \psi_\Delta(z)\, dz. \tag{8}
\]
By Theorem 1.5.3 [24] and the definition of slowly varying functions
\[
\lim_{r \to \infty} \sup_{0 \le s \le r} \frac{s^{\delta/\kappa} L_0(s)}{r^{\delta/\kappa} L_0(r)} = 1.
\]
By (3) we can rewrite the integral in (8) as follows
\[
\int_0^{r^{-\beta}} z^{-(\delta + \alpha\kappa)}\, \psi_\Delta(z)\, dz
= |\partial\Delta|^{-2} \int_{\partial\Delta} \int_{\partial\Delta} \chi(\|x - y\| \le r^{-\beta})\, \|x - y\|^{-(\delta + \alpha\kappa)}\, d\sigma(x)\, d\sigma(y)
\]
\[
\le |\partial\Delta|^{-2} \int_{\partial\Delta} \max_{y} \int_{\partial\Delta} \chi(\|x - y\| \le r^{-\beta})\, \|x - y\|^{-(\delta + \alpha\kappa)}\, d\sigma(x)\, d\sigma(y)
= |\partial\Delta|^{-1} \max_{y} \int_{\partial\Delta \cap B(y, r^{-\beta})} \|x - y\|^{-(\delta + \alpha\kappa)}\, d\sigma(x).
\]
Since $\partial\Delta$ is Ahlfors–David regular, applying the upper bound from (1) we get
\[
\int_{\partial\Delta \cap B(y, r^{-\beta})} \|x - y\|^{-(\delta + \alpha\kappa)}\, d\sigma(x)
= \sum_{i=0}^{\infty} \int_{\partial\Delta \cap [B(y, r^{-\beta} 2^{-i}) \setminus B(y, r^{-\beta} 2^{-i-1})]} \|x - y\|^{-(\delta + \alpha\kappa)}\, d\sigma(x)
\]
\[
\le \sum_{i=0}^{\infty} \int_{\partial\Delta \cap [B(y, r^{-\beta} 2^{-i}) \setminus B(y, r^{-\beta} 2^{-i-1})]} r^{\beta(\delta + \alpha\kappa)}\, 2^{(i+1)(\delta + \alpha\kappa)}\, d\sigma(x)
\le \sum_{i=0}^{\infty} r^{\beta(\delta + \alpha\kappa)}\, 2^{(i+1)(\delta + \alpha\kappa)} \int_{\partial\Delta \cap B(y, r^{-\beta} 2^{-i})} d\sigma(x)
\]
\[
\le C r^{\beta(\delta + \alpha\kappa)} \sum_{i=0}^{\infty} 2^{(i+1)(\delta + \alpha\kappa)}\, r^{-\beta(d-1)}\, 2^{-i(d-1)}
= \frac{C\, 2^{\delta + \alpha\kappa}}{1 - 2^{-(d-1-(\delta + \alpha\kappa))}}\, r^{-\beta(d-1-(\delta + \alpha\kappa))}.
\]
Thus, we have
\[
\int_0^{r^{-\beta}} z^{-(\delta + \alpha\kappa)}\, \psi_\Delta(z)\, dz \le C r^{-\beta(d-1-(\delta + \alpha\kappa))}. \tag{9}
\]
For the second integral we obtain
\[
I_2 \le \sup_{r^{1-\beta} \le s \le r\, \mathrm{diam}\{\Delta\}} \frac{s^{\delta} L_0^{\kappa}(s)}{r^{\delta} L_0^{\kappa}(r)} \cdot \sup_{r^{1-\beta} \le s \le r\, \mathrm{diam}\{\Delta\}} \frac{L_0(s)}{s^{\alpha}} \cdot \int_0^{\mathrm{diam}\{\Delta\}} z^{-(\delta + \alpha\kappa)}\, \psi_\Delta(z)\, dz.
\]
Using Theorem 1.5.3 [24] we conclude that
\[
\lim_{r \to \infty} \sup_{r^{1-\beta} \le s \le r\, \mathrm{diam}\{\Delta\}} \frac{s^{\delta} L_0^{\kappa}(s)}{r^{\delta} L_0^{\kappa}(r)}
\le \lim_{r \to \infty} \sup_{0 \le s \le r\, \mathrm{diam}\{\Delta\}} \frac{s^{\delta} L_0^{\kappa}(s)}{(r\, \mathrm{diam}\{\Delta\})^{\delta} L_0^{\kappa}(r\, \mathrm{diam}\{\Delta\})} \times \lim_{r \to \infty} \frac{\mathrm{diam}^{\delta}\{\Delta\}\, L_0^{\kappa}(r\, \mathrm{diam}\{\Delta\})}{L_0^{\kappa}(r)} = \mathrm{diam}^{\delta}\{\Delta\}.
\]
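The Theorem 1.5.3 [24]-type supremum limits and the Proposition 1.3.6 [24] power bounds used in this proof can be checked numerically. The following sketch is an illustration of ours, not part of the paper: it takes $L(x) = \ln(e + x)$, a slowly varying function similar to the $\ln(x)$ example in Section 2, and all helper names are chosen for illustration:

```python
import math

def L(x):
    # A slowly varying function at infinity, close to the ln(x) example.
    return math.log(math.e + x)

# Proposition 1.3.6 [24]: for any a > 0, r^a * L(r) -> infinity while
# r^(-a) * L(r) -> 0 as r -> infinity.
a = 0.1
points = (1e6, 1e9, 1e12, 1e15)
grow = [r ** a * L(r) for r in points]
decay = [r ** (-a) * L(r) for r in points]

# Theorem 1.5.3 [24]-type limit, as used above: for delta > 0,
# sup_{0 < s <= r} s^delta * L(s) / (r^delta * L(r)) -> 1 as r -> infinity.
delta, r = 0.5, 1e8
grid = [r * i / 10000 for i in range(1, 10001)]
sup_ratio = max(s ** delta * L(s) for s in grid) / (r ** delta * L(r))
```

Since $s^{\delta} L(s)$ is eventually increasing, the supremum is attained at $s = r$ and the ratio equals 1, in agreement with the limit used in the proof.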
By Proposition 1.3.6 and Theorem 1.5.3 [24] it follows that
\[
\sup_{r^{1-\beta} \le s \le r\, \mathrm{diam}\{\Delta\}} \frac{L_0(s)}{s^{\alpha}}
\le \sup_{s \ge r^{1-\beta}} \frac{s^{-\alpha} L_0(s)}{r^{-\alpha(1-\beta)} L_0(r^{1-\beta})} \cdot \frac{L_0(r^{1-\beta})}{r^{\delta(1-\beta)}} \cdot r^{(\delta - \alpha)(1-\beta)} = o\big( r^{(\delta - \alpha)(1-\beta)} \big). \tag{10}
\]
We can choose $\beta = 1/2$ and $\delta$ arbitrarily close to 0. Then by (9), (10) we obtain
\[
\lim_{r \to \infty} \frac{\mathrm{Var}\, V_r}{\mathrm{Var}\, K_r} = 0 \quad \text{and} \quad \lim_{r \to \infty} \frac{\mathrm{Var}\, K_r}{\mathrm{Var}\, K_{r,\kappa}} = 1.
\]
Thus
\[
\lim_{r \to \infty} \mathrm{E} \left( \frac{K_r}{\sqrt{\mathrm{Var}\, K_r}} - \frac{K_{r,\kappa}}{\sqrt{\mathrm{Var}\, K_{r,\kappa}}} \right)^2
= \lim_{r \to \infty} \frac{\mathrm{E} \left( V_r + \left( 1 - \sqrt{\frac{\mathrm{Var}\, K_r}{\mathrm{Var}\, K_{r,\kappa}}} \right) K_{r,\kappa} \right)^2}{\mathrm{Var}\, K_r} = 0,
\]
and
\[
\lim_{r \to \infty} \mathrm{E} \left( \frac{K_r}{\sqrt{\mathrm{Var}\, K_{r,\kappa}}} - \frac{K_{r,\kappa}}{\sqrt{\mathrm{Var}\, K_{r,\kappa}}} \right)^2 = \lim_{r \to \infty} \frac{\mathrm{E}\, V_r^2}{\mathrm{Var}\, K_{r,\kappa}} = 0,
\]
which completes the proof.

Lemma 2. If $\tau_1, \ldots, \tau_\kappa$, $\kappa \ge 1$, are such positive constants that $\sum_{i=1}^{\kappa} \tau_i < \frac{d-1}{2}$, then
\[
\int_{\mathbb{R}^{d\kappa}} \frac{|\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2\, d\lambda_1 \ldots d\lambda_\kappa}{\|\lambda_1\|^{d - 2\tau_1} \cdots \|\lambda_\kappa\|^{d - 2\tau_\kappa}} < \infty. \tag{11}
\]

Proof.
For $\kappa = 1$ we get $d - 2\tau_1 > 1$. Using the integration formula for polar coordinates and the fact that $|\mathcal{K}(\lambda)| \le |\partial\Delta|$ for all $\lambda \in \mathbb{R}^d$, we get
\[
\int_{\mathbb{R}^d} \frac{|\mathcal{K}(\lambda)|^2\, d\lambda}{\|\lambda\|^{d - 2\tau_1}}
= \int_0^{\infty} r^{d-1} \int_{S_{d-1}(1)} \frac{|\mathcal{K}(\omega r)|^2}{r^{d - 2\tau_1}}\, d\omega\, dr
= \int_0^{r_0} r^{d-1} \int_{S_{d-1}(1)} \frac{|\mathcal{K}(\omega r)|^2}{r^{d - 2\tau_1}}\, d\omega\, dr + \int_{r_0}^{\infty} r^{d-1} \int_{S_{d-1}(1)} \frac{|\mathcal{K}(\omega r)|^2}{r^{d - 2\tau_1}}\, d\omega\, dr
\]
\[
\le C |\partial\Delta|^2 \int_0^{r_0} \frac{r^{d-1}\, dr}{r^{d - 2\tau_1}} + \int_{r_0}^{\infty} r^{d-1} \int_{S_{d-1}(1)} \frac{|\mathcal{K}(\omega r)|^2}{r^{d - 2\tau_1}}\, d\omega\, dr.
\]
By (2) we obtain
\[
\int_{\mathbb{R}^d} \frac{|\mathcal{K}(\lambda)|^2\, d\lambda}{\|\lambda\|^{d - 2\tau_1}}
\le C |\partial\Delta|^2 \int_0^{r_0} \frac{dr}{r^{1 - 2\tau_1}} + C \int_{r_0}^{\infty} r^{-(d-1)}\, r^{2\tau_1 - 1}\, dr
= C |\partial\Delta|^2 \int_0^{r_0} \frac{dr}{r^{1 - 2\tau_1}} + C \int_{r_0}^{\infty} \frac{dr}{r^{d - 2\tau_1}} < \infty,
\]
because $\tau_1 > 0$ and $d - 2\tau_1 > 1$.

For $\kappa > 1$ we use the change of variables $u = \lambda_{\kappa-1} + \lambda_\kappa$ and then $\tilde{\lambda}_{\kappa-1} = \lambda_{\kappa-1}/\|u\|$:
\[
\int_{\mathbb{R}^{d\kappa}} \frac{|\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2\, d\lambda_1 \ldots d\lambda_\kappa}{\|\lambda_1\|^{d - 2\tau_1} \cdots \|\lambda_\kappa\|^{d - 2\tau_\kappa}}
= \int_{\mathbb{R}^{d(\kappa-1)}} |\mathcal{K}(\lambda_1 + \cdots + \lambda_{\kappa-2} + u)|^2 \int_{\mathbb{R}^d} \frac{d\lambda_{\kappa-1}}{\|\lambda_{\kappa-1}\|^{d - 2\tau_{\kappa-1}} \|u - \lambda_{\kappa-1}\|^{d - 2\tau_\kappa}} \cdot \frac{d\lambda_1 \ldots d\lambda_{\kappa-2}\, du}{\|\lambda_1\|^{d - 2\tau_1} \cdots \|\lambda_{\kappa-2}\|^{d - 2\tau_{\kappa-2}}}
\]
\[
= \int_{\mathbb{R}^{d(\kappa-1)}} \frac{|\mathcal{K}(\lambda_1 + \cdots + \lambda_{\kappa-2} + u)|^2\, d\lambda_1 \ldots d\lambda_{\kappa-2}\, du}{\|\lambda_1\|^{d - 2\tau_1} \cdots \|\lambda_{\kappa-2}\|^{d - 2\tau_{\kappa-2}}\, \|u\|^{d - 2\tau_{\kappa-1} - 2\tau_\kappa}} \int_{\mathbb{R}^d} \frac{d\tilde{\lambda}_{\kappa-1}}{\big\|\tilde{\lambda}_{\kappa-1}\big\|^{d - 2\tau_{\kappa-1}} \left\| \frac{u}{\|u\|} - \tilde{\lambda}_{\kappa-1} \right\|^{d - 2\tau_\kappa}}
\]
\[
\le C \int_{\mathbb{R}^{d(\kappa-1)}} \frac{|\mathcal{K}(\lambda_1 + \cdots + \lambda_{\kappa-2} + u)|^2\, d\lambda_1 \ldots d\lambda_{\kappa-2}\, du}{\|\lambda_1\|^{d - 2\tau_1} \cdots \|\lambda_{\kappa-2}\|^{d - 2\tau_{\kappa-2}}\, \|u\|^{d - 2\tau_{\kappa-1} - 2\tau_\kappa}}
\le \cdots \le C \int_{\mathbb{R}^d} \frac{|\mathcal{K}(u)|^2\, du}{\|u\|^{d - 2\sum_{i=1}^{\kappa} \tau_i}} < \infty.
\]

Theorem 2.
Let $\eta(x)$, $x \in \mathbb{R}^d$, be a homogeneous isotropic Gaussian random field with $\mathrm{E}\,\eta(x) = 0$. If Assumptions 1 and 2 hold, $\alpha \in (0, (d-1)/\kappa)$, and $\mathrm{Hrank}\, G = \kappa \in \mathbb{N}$, then for $r \to \infty$ the random variables
\[
X_{\kappa,r}(\Delta) := r^{(\kappa\alpha)/2 - d + 1} L^{-\kappa/2}(r) \int_{\partial\Delta(r)} H_\kappa(\eta(x))\, d\sigma(x)
\]
converge weakly to
\[
X_\kappa(\Delta) := c_2^{\kappa/2}(d, \alpha) \int_{\mathbb{R}^{d\kappa}}' \frac{\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)\, W(d\lambda_1) \ldots W(d\lambda_\kappa)}{\|\lambda_1\|^{(d-\alpha)/2} \cdots \|\lambda_\kappa\|^{(d-\alpha)/2}}, \tag{12}
\]
where $\int_{\mathbb{R}^{d\kappa}}'$ denotes the multiple stochastic Wiener–Itô integral.

Remark 6. Note that from the following proof it is clear that it is sufficient to use only (6) instead of Assumption 2.
Proof.
Using the Itô formula (2.3.1) in [28] we obtain
\[
\int_{\partial\Delta(r)} H_\kappa(\eta(x))\, d\sigma(x)
= \int_{\partial\Delta(r)} \int_{\mathbb{R}^{d\kappa}}' e^{i\langle \lambda_1 + \cdots + \lambda_\kappa, x \rangle} \prod_{j=1}^{\kappa} \sqrt{f(\|\lambda_j\|)}\, W(d\lambda_1) \ldots W(d\lambda_\kappa)\, d\sigma(x).
\]
As $\prod_{j=1}^{\kappa} \sqrt{f(\|\lambda_j\|)} \in L_2(\mathbb{R}^{d\kappa})$, a stochastic Fubini theorem, see Theorem 5.13.1 in [23], can be used to interchange the integrals, which results in
\[
X_{\kappa,r}(\Delta) \overset{D}{=} c_2^{\kappa/2}(d, \alpha) \int_{\mathbb{R}^{d\kappa}}' \frac{\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)\, Q_r(\lambda_1, \ldots, \lambda_\kappa)\, W(d\lambda_1) \ldots W(d\lambda_\kappa)}{\|\lambda_1\|^{(d-\alpha)/2} \cdots \|\lambda_\kappa\|^{(d-\alpha)/2}}, \tag{13}
\]
where
\[
Q_r(\lambda_1, \ldots, \lambda_\kappa) := r^{\kappa(\alpha - d)/2} L^{-\kappa/2}(r)\, c_2^{-\kappa/2}(d, \alpha) \prod_{j=1}^{\kappa} \left( \|\lambda_j\|^{d - \alpha} f\left( \frac{\|\lambda_j\|}{r} \right) \right)^{1/2}. \tag{14}
\]
By the isometry property of multiple stochastic integrals, $\mathrm{E}\, |X_{\kappa,r}(\Delta) - X_\kappa(\Delta)|^2 = \kappa!\, c_2^{\kappa}(d, \alpha)\, R_r$, where
\[
R_r := \int_{\mathbb{R}^{d\kappa}} \frac{|\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2 (Q_r(\lambda_1, \ldots, \lambda_\kappa) - 1)^2\, d\lambda_1 \ldots d\lambda_\kappa}{\|\lambda_1\|^{d-\alpha} \cdots \|\lambda_\kappa\|^{d-\alpha}}.
\]
Using (6) and properties of slowly varying functions, we conclude that $Q_r(\lambda_1, \ldots, \lambda_\kappa)$ converges pointwise to 1 when $r \to \infty$. Hence, by Lebesgue's dominated convergence theorem, the integral $R_r$ converges to zero if there is some integrable function which dominates the integrands for all $r$. Let us split $\mathbb{R}^{d\kappa}$ into the regions
\[
B_\mu := \{ (\lambda_1, \ldots, \lambda_\kappa) \in \mathbb{R}^{d\kappa} : \|\lambda_j\| \le 1 \text{ if } \mu_j = -1, \text{ and } \|\lambda_j\| > 1 \text{ if } \mu_j = 1,\ j = 1, \ldots, \kappa \},
\]
where $\mu = (\mu_1, \ldots, \mu_\kappa) \in \{-1, 1\}^{\kappa}$ is a binary vector of length $\kappa$. Then we can represent the integral $R_r$ as
\[
R_r = \sum_{\mu \in \{-1, 1\}^{\kappa}} \int_{B_\mu} \frac{|\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2 (Q_r(\lambda_1, \ldots, \lambda_\kappa) - 1)^2\, d\lambda_1 \ldots d\lambda_\kappa}{\|\lambda_1\|^{d-\alpha} \cdots \|\lambda_\kappa\|^{d-\alpha}}.
\]
If $(\lambda_1, \ldots, \lambda_\kappa) \in B_\mu$, we estimate the integrand as follows
\[
\frac{|\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2 (Q_r(\lambda_1, \ldots, \lambda_\kappa) - 1)^2}{\|\lambda_1\|^{d-\alpha} \cdots \|\lambda_\kappa\|^{d-\alpha}}
\le \frac{2 |\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2}{\|\lambda_1\|^{d-\alpha} \cdots \|\lambda_\kappa\|^{d-\alpha}} \left( Q_r^2(\lambda_1, \ldots, \lambda_\kappa) + 1 \right)
\]
\[
= \frac{2 |\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2}{\|\lambda_1\|^{d-\alpha} \cdots \|\lambda_\kappa\|^{d-\alpha}} \left( \prod_{j=1}^{\kappa} \|\lambda_j\|^{\mu_j \delta} \cdot \prod_{j=1}^{\kappa} \frac{\left( \frac{r}{\|\lambda_j\|} \right)^{\mu_j \delta} L\left( \frac{r}{\|\lambda_j\|} \right)}{r^{\mu_j \delta} L(r)} + 1 \right)
\le \frac{2 |\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2}{\|\lambda_1\|^{d-\alpha} \cdots \|\lambda_\kappa\|^{d-\alpha}} \left( \prod_{j=1}^{\kappa} \|\lambda_j\|^{\mu_j \delta} \cdot \sup_{(\lambda_1, \ldots, \lambda_\kappa) \in B_\mu} \prod_{j=1}^{\kappa} \frac{\left( \frac{r}{\|\lambda_j\|} \right)^{\mu_j \delta} L\left( \frac{r}{\|\lambda_j\|} \right)}{r^{\mu_j \delta} L(r)} + 1 \right),
\]
where $\delta$ is an arbitrary positive number. By Theorem 1.5.3 [24]
\[
\lim_{r \to \infty} \sup_{\|\lambda_j\| \le 1} \frac{\left( \frac{r}{\|\lambda_j\|} \right)^{-\delta} L\left( \frac{r}{\|\lambda_j\|} \right)}{r^{-\delta} L(r)} = \lim_{r \to \infty} \sup_{z \ge r} \frac{z^{-\delta} L(z)}{r^{-\delta} L(r)} = 1, \qquad
\lim_{r \to \infty} \sup_{\|\lambda_j\| > 1} \frac{\left( \frac{r}{\|\lambda_j\|} \right)^{\delta} L\left( \frac{r}{\|\lambda_j\|} \right)}{r^{\delta} L(r)} = \lim_{r \to \infty} \sup_{z \in [0, r]} \frac{z^{\delta} L(z)}{r^{\delta} L(r)} = 1.
\]
Therefore, there exists $r_0 > 0$ such that for all $r \ge r_0$ and $(\lambda_1, \ldots, \lambda_\kappa) \in B_\mu$
\[
\frac{|\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2 (Q_r(\lambda_1, \ldots, \lambda_\kappa) - 1)^2}{\|\lambda_1\|^{d-\alpha} \cdots \|\lambda_\kappa\|^{d-\alpha}}
\le \frac{2 |\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2}{\|\lambda_1\|^{d-\alpha} \cdots \|\lambda_\kappa\|^{d-\alpha}} + \frac{2C |\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)|^2}{\|\lambda_1\|^{d-\alpha-\mu_1\delta} \cdots \|\lambda_\kappa\|^{d-\alpha-\mu_\kappa\delta}}. \tag{15}
\]
By Lemma 2, if we choose $\delta \in (0, \min(\alpha, (d-1)/\kappa - \alpha))$, the upper bound in (15) is an integrable function on each $B_\mu$ and hence on $\mathbb{R}^{d\kappa}$ too. By Lebesgue's dominated convergence theorem
\[
\lim_{r \to \infty} \mathrm{E}\, |X_{\kappa,r}(\Delta) - X_\kappa(\Delta)|^2 = 0,
\]
which completes the proof.

Application to sojourn measures
An important application of Theorem 2 is to sojourn measures of random fields defined on hypersurfaces, see [15], [18]. Namely, consider an application of Theorem 2 to the functionals
\[
\int_{\partial\Delta(r)} \chi(S(\eta(x)) > b)\, d\sigma(x),
\]
where $S : \mathbb{R} \to \mathbb{R}$ is a such function that the set $\{t : S(t) > b\}$ can be represented as a finite union of intervals $(t_1, t_2)$, $-\infty \le t_1 < t_2 \le +\infty$. Examples of the function $S(\cdot)$ are polynomials or other smooth functions having a finite number of zeros.

Remark 7. As particular cases, this construction includes $\int_{S_{d-1}(r)} \chi(\eta(x) > b)\, d\sigma(x)$ and $\int_{S_{d-1}(r)} \chi(|\eta(x)| > b)\, d\sigma(x)$ considered in [2].

As for some $N \ge 1$ it holds $\{t : S(t) > b\} = \bigcup_{i=1}^{N} (t_i, t_{i+1})$, where the intervals $(t_i, t_{i+1})$ are disjoint, we have to study
\[
\int_{\partial\Delta(r)} \chi\left( \eta(x) \in \bigcup_{i=1}^{N} (t_i, t_{i+1}) \right) d\sigma(x) = \sum_{i=1}^{N} \int_{\partial\Delta(r)} \chi(\eta(x) \in (t_i, t_{i+1}))\, d\sigma(x).
\]
Note that the indicator function $\chi(\omega > t)$ can be expanded in the Hermite series as
\[
\chi(\omega > t) = \sum_{j=0}^{\infty} \frac{C_j^{(t)} H_j(\omega)}{j!}, \quad \text{where} \quad C_j^{(t)} = \begin{cases} 1 - \Phi(t), & j = 0, \\ \phi(t) H_{j-1}(t), & j \ge 1, \end{cases}
\]
and $\Phi(\cdot)$ and $\phi(\cdot)$ are the cdf and pdf of $N(0, 1)$ respectively. Then,
\[
\chi(\omega \in (t_i, t_{i+1})) = \chi(\omega > t_i) - \chi(\omega > t_{i+1})
= \Phi(t_{i+1}) - \Phi(t_i) + \sum_{j=1}^{\infty} \frac{\phi(t_i) H_{j-1}(t_i) - \phi(t_{i+1}) H_{j-1}(t_{i+1})}{j!}\, H_j(\omega),
\]
where $\phi(\pm\infty) := 0$. Hence,
\[
\sum_{i=1}^{N} \chi(\omega \in (t_i, t_{i+1})) = \sum_{i=1}^{N} (\Phi(t_{i+1}) - \Phi(t_i)) + \sum_{j=1}^{\infty} \sum_{i=1}^{N} \frac{\phi(t_i) H_{j-1}(t_i) - \phi(t_{i+1}) H_{j-1}(t_{i+1})}{j!}\, H_j(\omega).
\]
Therefore, the Hermite rank of the function $\chi(S(x) > b)$ is the smallest $j^* \ge 1$ such that
\[
C_{j^*,b} = \sum_{i=1}^{N} \phi(t_i) H_{j^*-1}(t_i) - \phi(t_{i+1}) H_{j^*-1}(t_{i+1}) \ne 0.
\]

Theorem 3.
Let $j^* = \min\{ j \in \mathbb{N} : \sum_{i=1}^{N} \phi(t_i) H_{j-1}(t_i) - \phi(t_{i+1}) H_{j-1}(t_{i+1}) \ne 0 \}$. Then, under the assumptions of Theorem 2, the random variables
\[
r^{(\kappa\alpha)/2 - d + 1} L^{-\kappa/2}(r) \int_{\partial\Delta(r)} \left[ \chi(S(\eta(x)) > b) - P(S(\eta(0)) > b) \right] d\sigma(x)
\]
converge weakly to $\frac{C_{\kappa,b}}{\kappa!} X_\kappa(\Delta)$, where $X_\kappa(\Delta)$ is given by (12), and $\kappa = j^*$.

Example 1. Let us study $\int_{\partial\Delta(r)} \chi(\eta^l(x) > b)\, d\sigma(x)$. If $l$ is odd, then $\chi(\omega^l > b) = \chi(\omega > b^{1/l})$. In this case $C_{1,b} = \phi(b^{1/l}) \ne 0$ and the asymptotic is given by $\phi(b^{1/l}) X_1(\Delta)$, which has a Gaussian distribution.

If $l$ is even, then for $b > 0$
\[
\chi(\omega^l > b) = \chi(\omega > b^{1/l}) + \chi(\omega < -b^{1/l}) = 1 - \chi(-b^{1/l} < \omega < b^{1/l}).
\]
In this case, $C_{1,b} = \phi(-b^{1/l}) - \phi(b^{1/l}) = 0$. However, for $j = 2$ we obtain
\[
C_{2,b} = \phi(-b^{1/l})(-b^{1/l}) - \phi(b^{1/l})\, b^{1/l} = -2 b^{1/l} \phi(b^{1/l}) \ne 0.
\]
Therefore, the asymptotic is the Rosenblatt-type distribution of $-b^{1/l} \phi(b^{1/l}) X_2(\Delta)$.

Example 2. Now, let us study $\int_{\partial\Delta(r)} \chi(S(\eta(x)) > 0)\, d\sigma(x)$, where $S(x) = -x^3 + b^2 x$ and $b = (2\ln(2))^{1/2}$. Since
\[
\int_{\partial\Delta(r)} \chi(S(\eta(x)) > 0)\, d\sigma(x) = \int_{\partial\Delta(r)} \chi(\eta(x) \in (-\infty, -b) \cup (0, b))\, d\sigma(x),
\]
we can compute the coefficients $C_{j,0}$ as follows:
\[
C_{1,0} = -\phi(-b) H_0(-b) + \phi(0) H_0(0) - \phi(b) H_0(b) = \phi(0) - 2\phi(b) = 0,
\]
\[
C_{2,0} = -\phi(-b) H_1(-b) + \phi(0) H_1(0) - \phi(b) H_1(b) = b\phi(-b) - b\phi(b) = 0,
\]
\[
C_{3,0} = -\phi(-b) H_2(-b) + \phi(0) H_2(0) - \phi(b) H_2(b)
= -2\phi(b)(b^2 - 1) - \phi(0) = -\phi(0)(b^2 - 1) - \phi(0) = -b^2 \phi(0) \ne 0,
\]
because $b = (2\ln(2))^{1/2}$, so that $\phi(b) = \phi(0)/2$. Thus, in this case the limit distribution has $\mathrm{Hrank} = 3$.

Example 3. In this example we show how to obtain the Hermite limit distribution with $\mathrm{Hrank} = 4$.

Lemma 3. For each $p \in (0, 1)$ there exists $q > 1$ such that $p\phi(p) = q\phi(q)$.

Proof.
Note that $(x\phi(x))' = \phi(x) - x^2\phi(x) = \phi(x)(1 - x^2)$. Thus, $x\phi(x)$ is an increasing function on $(0, 1)$ and it is decreasing on $(1, \infty)$. As $x\phi(x) = 0$ for $x = 0$ and $x \to +\infty$, then $0 < p\phi(p) < \phi(1)$ for all $p \in (0, 1)$. Because $x\phi(x)$ is a continuous function, there is $q > 1$ such that $p\phi(p) = q\phi(q)$.

Note that $p\phi(p) = q\phi(q)$, $p, q > 0$, implies $p^2\phi^2(p) = q^2\phi^2(q)$, i.e. $q$ is a positive solution of the equation
\[
-p^2 e^{-p^2} = -q^2 e^{-q^2}.
\]
Thus,
\[
q = \sqrt{-\mathrm{LambertW}_{-1}\left( -p^2 e^{-p^2} \right)},
\]
where $\mathrm{LambertW}_{-1}(\cdot)$ is the branch of the Lambert W function satisfying $\mathrm{LambertW}(x) \le -1$ for $-1/e < x < 0$, see [29].

Let $S(x) = -(x^2 - p^2)(x^2 - q^2)$. Then
\[
\{x \in \mathbb{R} : S(x) > 0\} = (-q, -p) \cup (p, q).
\]
Let us compute the coefficients $C_{j,0}$:
\[
C_{1,0} = \phi(-q) - \phi(-p) + \phi(p) - \phi(q) = 0,
\]
\[
C_{2,0} = \phi(-q)(-q) - \phi(-p)(-p) + \phi(p)p - \phi(q)q = 2(\phi(p)p - \phi(q)q) = 0,
\]
\[
C_{3,0} = \phi(-q)(q^2 - 1) - \phi(-p)(p^2 - 1) + \phi(p)(p^2 - 1) - \phi(q)(q^2 - 1) = 0,
\]
\[
C_{4,0} = \phi(-q)(-q^3 + 3q) - \phi(-p)(-p^3 + 3p) + \phi(p)(p^3 - 3p) - \phi(q)(q^3 - 3q)
= \phi(-q)(-q^3) - \phi(-p)(-p^3) + \phi(p)p^3 - \phi(q)q^3
\]
\[
= 2(\phi(p)p^3 - \phi(q)q^3) < 2q^2(\phi(p)p - \phi(q)q) = 0,
\]
where the second equality uses $\phi(p)p = \phi(q)q$ to cancel the first-order terms. Therefore $C_{4,0} \ne 0$ and the asymptotic of $\int_{\partial\Delta(r)} \chi(S(\eta(x)) > 0)\, d\sigma(x)$ when $r \to \infty$ is given by the random variable $\frac{C_{4,0}}{4!} X_4(\Delta)$.

Rate of convergence

In this section we investigate rates of convergence of the random variables $K_r$ and $K_{r,\kappa}$ to their asymptotic distribution derived in Theorem 2. For readability we will denote Wiener–Itô integrals of rank $\kappa$ by $I_\kappa(f)$, where $f(\cdot)$ is an integrand. For more details about Wiener–Itô integrals and properties of the function $f(\cdot)$ one can refer to [30, 31]. To obtain rates of convergence we will use some fine properties of Hermite-type distributions. The following result was obtained in [10] for $X_\kappa(\Delta)$. Since the proof does not rely on the specific form of $X_\kappa(\Delta)$, this theorem can be easily generalized as follows.

Theorem 4. [10]
For any $\kappa \in \mathbb{N}$ and an arbitrary positive $\varepsilon$ it holds
\[
\rho(I_\kappa(f), I_\kappa(f) + \varepsilon) \le C \varepsilon^{a},
\]
where $a = 1$ if $\kappa < 3$ and $a = 1/\kappa$ if $\kappa \ge 3$.

By Theorem 2, the limit distribution of $K_r$ does not depend on the "tail" $V_r$ in the Hermite expansion of the function $G(\cdot)$. However, in this section we will show that although $V_r$ does not affect the limit distribution, it does affect the rate of convergence.

First, let us consider the case where $G(\cdot) = \frac{C_\kappa}{\kappa!} H_\kappa(\cdot)$. Then $V_r = 0$ and the Hermite rank of $G(\cdot)$ is $\kappa$. We are interested in
\[
\rho\left( \frac{\kappa!\, K_{r,\kappa}}{C_\kappa\, r^{d-1-\kappa\alpha/2} L^{\kappa/2}(r)},\ X_\kappa(\Delta) \right) = \rho( X_{\kappa,r}(\Delta), X_\kappa(\Delta) ).
\]
By (13)
\[
X_{\kappa,r}(\Delta) = c_2^{\kappa/2}(d, \alpha) \int_{\mathbb{R}^{d\kappa}}' \frac{\mathcal{K}(\lambda_1 + \cdots + \lambda_\kappa)\, Q_r(\lambda_1, \ldots, \lambda_\kappa)\, W(d\lambda_1) \ldots W(d\lambda_\kappa)}{\|\lambda_1\|^{(d-\alpha)/2} \cdots \|\lambda_\kappa\|^{(d-\alpha)/2}},
\]
where $Q_r(\cdot)$ is defined by (14). Therefore, $\rho(X_{\kappa,r}(\Delta), X_\kappa(\Delta))$ is the Kolmogorov distance between two multiple Wiener–Itô integrals of rank $\kappa$. To estimate this distance we prove the following result.

Lemma 4.
Let $I_\kappa(f_1)$ and $I_\kappa(f_2)$ be two Wiener–Itô integrals of rank $\kappa$, and let $f_1$, $f_2$ be symmetric functions in $L_2(\mathbb{R}^{d\kappa})$, $d \ge 1$. Then,
\[
\rho(I_\kappa(f_1), I_\kappa(f_2)) \le C \|f_1 - f_2\|^{\frac{2}{2\kappa+1}}, \quad \text{if } \kappa \ge 3,
\]
and
\[
\rho(I_\kappa(f_1), I_\kappa(f_2)) \le C \|f_1 - f_2\|^{2/3}, \quad \text{if } \kappa < 3.
\]

Proof. By applying Lemma 1 to $X = I_\kappa(f_2)$, $Y = I_\kappa(f_1) - I_\kappa(f_2)$, and $Z = I_\kappa(f_2)$ we obtain
\[
\rho(I_\kappa(f_1), I_\kappa(f_2)) \le \rho(I_\kappa(f_2) + \varepsilon, I_\kappa(f_2)) + P\{ |I_\kappa(f_1) - I_\kappa(f_2)| \ge \varepsilon \}.
\]
By Theorem 4 and Chebyshev's inequality
\[
\rho(I_\kappa(f_1), I_\kappa(f_2)) \le C\varepsilon^{a} + P\{ |I_\kappa(f_1) - I_\kappa(f_2)| \ge \varepsilon \}
\le C\varepsilon^{a} + \varepsilon^{-2}\, \mathrm{Var}(I_\kappa(f_1) - I_\kappa(f_2)) \le C\left( \varepsilon^{a} + \varepsilon^{-2} \|f_1 - f_2\|^2 \right),
\]
where $a$ is defined in Theorem 4. By choosing $\varepsilon = \|f_1 - f_2\|^{\beta}$ we get
\[
\rho(I_\kappa(f_1), I_\kappa(f_2)) \le C\left( \|f_1 - f_2\|^{\beta a} + \|f_1 - f_2\|^{2 - 2\beta} \right).
\]
Since $\sup_{\beta} \min(a\beta, 2 - 2\beta) = \frac{2a}{a+2}$, we have
\[
\rho(I_\kappa(f_1), I_\kappa(f_2)) \le C \|f_1 - f_2\|^{\frac{2a}{a+2}}.
\]
Note that $a = 1$ when $\kappa < 3$, thus
\[
\rho(I_\kappa(f_1), I_\kappa(f_2)) \le C \|f_1 - f_2\|^{2/3}, \quad \kappa = 1, 2.
\]
Furthermore, in the case of general $\kappa$, $a = 1/\kappa$ and therefore
\[
\rho(I_\kappa(f_1), I_\kappa(f_2)) \le C \|f_1 - f_2\|^{\frac{2/\kappa}{1/\kappa + 2}} = C \|f_1 - f_2\|^{\frac{2}{2\kappa + 1}}.
\]

Remark 8. For the total variation distance $\rho_{TV}(\cdot, \cdot)$ it was stated in [19] that $\rho_{TV}(I_\kappa(f_1), I_\kappa(f_2)) \le C \|f_1 - f_2\|^{1/\kappa}$. Since the Kolmogorov distance can be estimated by the total variation distance (for any random variables $\xi$ and $\eta$ it holds $\rho(\xi, \eta) \le \rho_{TV}(\xi, \eta)$), the result in [19] is an improvement of Lemma 4. However, in [19] only a sketch of a proof was provided, and [20] questioned the result. Therefore, [20] proved that $\rho_{TV}(I_\kappa(f_1), I_\kappa(f_2)) \le C \|f_1 - f_2\|^{1/(2\kappa)}$. Note that this result is worse than ours if we were to use it to estimate the Kolmogorov distance. Thus, Lemma 4 is presented as a fully proven, self-contained result. Unfortunately, Lemma 1, which was used to obtain the result, is not applicable to the total variation distance. Hence, our method cannot be used for the total variation distance. Therefore, while the result in [20] performs worse in our case, it is more general as a whole.

Recently, for the case of $\kappa = 2$, it was shown in [21] that $\rho_{TV}(I_2(f_1), I_2(f_2)) \le C \|f_1 - f_2\|$. This result is an obvious improvement of the existing results. Thus, in the case $\kappa = 2$ we can use it to further sharpen our upper bound. However, we do not see how the methods of [21] can be used to obtain similar results for an arbitrary $\kappa$, as they heavily rely on the chi-square expansion of second order Wiener–Itô integrals, which is not available for $\kappa > 2$.

Theorem 5.
Let $\mathrm{Hrank}\, G = \kappa \in \mathbb{N}$ and let Assumptions 1 and 2 hold for $\alpha \in \left(0, \frac{d-1}{2\kappa}\right)$. If $\tau \in \left( -\frac{d-1-\kappa\alpha}{2}, 0 \right)$, then for any
\[
\kappa_0 < \frac{2a}{a+2} \min\left( \frac{\alpha(d-1-\kappa\alpha)}{2(d-1-(\kappa-1)\alpha)},\ \kappa_1 \right)
\]
it holds
\[
\rho\left( \frac{\kappa!\, K_r}{C_\kappa\, r^{d-1-\kappa\alpha/2} L^{\kappa/2}(r)},\ X_\kappa(\Delta) \right) = o(r^{-\kappa_0}), \quad r \to \infty,
\]
where
\[
\kappa_1 := \min\left( -\tau,\ \frac{1}{2}\left( \frac{1}{d-2\alpha} + \cdots + \frac{1}{d-2\kappa\alpha} + \frac{1}{d-1-2\kappa\alpha} \right)^{-1} \right)
\]
and $a$ is the parameter from Theorem 4. If $\tau = 0$, then
\[
\rho\left( \frac{\kappa!\, K_r}{C_\kappa\, r^{d-1-\kappa\alpha/2} L^{\kappa/2}(r)},\ X_\kappa(\Delta) \right) = O\left( g^{\frac{2a}{a+2}}(r) \right), \quad r \to \infty.
\]

Remark 9. If $\kappa = 1$, then $\kappa_1 = \min\left( -\tau, \frac{d-1-2\alpha}{2} \right)$.

Remark 10. Note that for $\tau = 0$ the rate of convergence does not depend on $\alpha$ or $d$. This is due to the fact that the parameters $\alpha$ and $d$ affect the power of $r$ in the rate of convergence but, in the case $\tau = 0$, the function $g(r)$ converges to 0 more slowly than any power of $r$.

Proof.
Since $\mathrm{Hrank}\, G = \kappa$, it follows that $K_r$ can be represented in the space of square-integrable random variables $L_2(\Omega)$ as
\[
K_r = K_{r,\kappa} + V_r := \frac{C_\kappa}{\kappa!} \int_{\partial\Delta(r)} H_\kappa(\eta(x))\, d\sigma(x) + \sum_{j \ge \kappa+1} \frac{C_j}{j!} \int_{\partial\Delta(r)} H_j(\eta(x))\, d\sigma(x),
\]
where $C_j$ are the coefficients of the Hermite series of the function $G(\cdot)$. By the proof of Theorem 1 (specifically estimates (9) and (10)), for sufficiently large $r$
\[
\mathrm{Var}\, V_r \le C\, r^{2(d-1) - \kappa\alpha} L_0^{\kappa}(r) \left( r^{-\beta(d-1-\kappa\alpha - \delta)} + o\big( r^{-(\alpha - \delta)(1-\beta)} \big) \right).
\]
Since, by Remark 4, $L_0(\cdot) \sim L(\cdot)$, we can replace $L_0(\cdot)$ by $L(\cdot)$ in the above estimate. Thus, choosing $\beta = \frac{\alpha}{d-1-(\kappa-1)\alpha}$ to minimize the upper bound we get
\[
\mathrm{Var}\, V_r \le C\, r^{2(d-1) - \kappa\alpha} L^{\kappa}(r)\, r^{-\frac{\alpha(d-1-\kappa\alpha)}{d-1-(\kappa-1)\alpha} + \delta}.
\]
It follows from Theorem 4 that
\[
\rho(X_\kappa(\Delta) + \varepsilon, X_\kappa(\Delta)) \le C \varepsilon^{a}.
\]
Applying Chebyshev's inequality and Lemma 1 to $X = X_{\kappa,r}(\Delta)$, $Y = \frac{\kappa!\, V_r}{C_\kappa r^{d-1-\kappa\alpha/2} L^{\kappa/2}(r)}$, and $Z = X_\kappa(\Delta)$, we get
\[
\rho\left( \frac{\kappa!\, K_r}{C_\kappa r^{d-1-\kappa\alpha/2} L^{\kappa/2}(r)},\ X_\kappa(\Delta) \right)
= \rho\left( X_{\kappa,r}(\Delta) + \frac{\kappa!\, V_r}{C_\kappa r^{d-1-\kappa\alpha/2} L^{\kappa/2}(r)},\ X_\kappa(\Delta) \right)
\le \rho(X_{\kappa,r}(\Delta), X_\kappa(\Delta)) + C\left( \varepsilon^{a} + \varepsilon^{-2} r^{-\frac{\alpha(d-1-\kappa\alpha)}{d-1-(\kappa-1)\alpha} + \delta} \right)
\]
for sufficiently large $r$. Choosing $\varepsilon := r^{-\frac{\alpha(d-1-\kappa\alpha)}{(2+a)(d-1-(\kappa-1)\alpha)}}$ to minimize the second term we obtain
\[
\rho\left( \frac{\kappa!\, K_r}{C_\kappa r^{d-1-\kappa\alpha/2} L^{\kappa/2}(r)},\ X_\kappa(\Delta) \right) \le \rho(X_{\kappa,r}(\Delta), X_\kappa(\Delta)) + C\, r^{-\frac{a\alpha(d-1-\kappa\alpha)}{(2+a)(d-1-(\kappa-1)\alpha)} + \delta}. \tag{16}
\]

Remark 11. As we can see from (16), for sufficiently large $r$, the upper bound in (16) can be estimated by
\[
C \max\left( \rho(X_{\kappa,r}(\Delta), X_\kappa(\Delta)),\ r^{-\frac{a\alpha(d-1-\kappa\alpha)}{(2+a)(d-1-(\kappa-1)\alpha)} + \delta} \right).
\]
Here, the term $r^{-\frac{a\alpha(d-1-\kappa\alpha)}{(2+a)(d-1-(\kappa-1)\alpha)} + \delta}$ appears only when $V_r \ne 0$, i.e. $G(\cdot) \ne \frac{C_\kappa}{\kappa!} H_\kappa(\cdot)$. Depending on the values of the parameters $d$, $\kappa$ and $\alpha$ it can considerably affect the rate of convergence.
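The shift distance $\rho(Z + \varepsilon, Z)$, which drives the $\varepsilon^a$ term via Lemma 1 and Theorem 4, can be evaluated explicitly in the simplest case $\kappa = 1$ (so $a = 1$), where the Wiener–Itô integral is Gaussian. The following sketch is a numerical illustration of ours (function names are not from the paper); it verifies that $\rho(Z + \varepsilon, Z) \le \varepsilon\, \phi(0)$ for $Z \sim N(0, 1)$:

```python
import math

def Phi(z):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def shift_distance(eps, half_width=8.0, n=100001):
    # rho(Z + eps, Z) = sup_z |P(Z + eps <= z) - P(Z <= z)|
    #                 = sup_z (Phi(z) - Phi(z - eps)) for Z ~ N(0, 1),
    # evaluated here on a fine grid of z values.
    step = 2.0 * half_width / (n - 1)
    return max(
        Phi(-half_width + i * step) - Phi(-half_width + i * step - eps)
        for i in range(n)
    )

eps = 0.05
rho_shift = shift_distance(eps)
# The exact supremum is attained at z = eps / 2 and equals 2 * Phi(eps / 2) - 1,
# which is bounded by C * eps with C = phi(0) = 1 / sqrt(2 * pi).
bound = eps / math.sqrt(2.0 * math.pi)
```

Since the supremum is attained at $z = \varepsilon/2$ and equals $2\Phi(\varepsilon/2) - 1$, the linear bound $C\varepsilon$ of Theorem 4 with $a = 1$ is sharp up to the constant in this case.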
We willdiscuss it in more details at the end of this section.24sing Lemma 4 we get ρ ( X κ,r (∆) , X κ (∆)) ≤ C Z R κd |K ( λ + · · · + λ κ ) | ( Q r ( λ , . . . , λ κ ) − d λ . . . d λ κ k λ k d − α . . . k λ κ k d − α a a , (17)where Q r ( λ , . . . , λ κ ) := r κ ( α − d ) / L − κ/ ( r ) c − κ/ ( d, α ) κ Y j =1 k λ j k d − α f (cid:18) k λ j k r (cid:19) / . Let us rewrite the integral in (17) as the sum of two integrals I and I with the integra-tion regions A ( r ) := { ( λ , . . . , λ κ ) ∈ R κd : max i =1 ,κ ( || λ i || ) ≤ r γ } and R κd \ A ( r ) respectively,where γ ∈ (0 , . Our intention is to use the monotone equivalence property of regularlyvarying functions in the regions A ( r ) . First we consider the case of ( λ , . . . λ κ ) ∈ A ( r ) . By Assumption 2 and the inequality (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)vuut κ Y i =1 x i − (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ κ X i =1 (cid:12)(cid:12)(cid:12) x κ i − (cid:12)(cid:12)(cid:12) we obtain | Q r ( λ , . . . , λ ) − | = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)vuuut κ Y j =1 L (cid:16) r k λ j k (cid:17) L ( r ) − (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ κ X j =1 (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) L κ (cid:16) r k λ j k (cid:17) L κ ( r ) − (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) . 
By Remark 5, if || λ j || ∈ (1 , r γ ) , j = 1 , κ, then for arbitrary δ > r we get (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) − L κ (cid:16) r k λ j k (cid:17) L κ ( r ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) = L κ (cid:16) r k λ j k (cid:17) L κ ( r ) · (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) − L κ ( r ) L κ (cid:16) r k λ j k (cid:17) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ C L κ (cid:16) r k λ j k (cid:17) L κ ( r ) g (cid:18) r k λ j k (cid:19) × k λ j k δ h τ ( k λ j k ) = C k λ j k δ h τ ( k λ j k ) g ( r ) g (cid:16) r k λ j k (cid:17) g ( r ) L (cid:16) r k λ j k (cid:17) L ( r ) κ . β and β , applying Theorem 1.5.6 [24] to g ( · ) and L ( · ) and using thefact that h τ (cid:0) t (cid:1) = − t τ h ( t ) we obtain (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) − L κ (cid:16) r k λ j k (cid:17) L κ ( r ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ C k λ j k δ + κβ + β h τ ( k λ j k ) k λ j k τ g ( r ) = C k λ j k δ h τ (cid:18) k λ j k (cid:19) g ( r ) . (18)By Remark 5 for || λ j || ≤ , j = 1 , κ , and arbitrary δ > , we obtain (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) − L κ (cid:16) r k λ j k (cid:17) L κ ( r ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ C k λ j k − δ h τ (cid:18) k λ j k (cid:19) g ( r ) . (19)Hence, by (18) and (19) | Q r ( λ , . . . λ κ ) − | ≤ k κ X j =1 (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) L κ (cid:16) r k λ j k (cid:17) L κ ( r ) − (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ C κ X j =1 h τ (cid:18) k λ j k (cid:19) g ( r ) max (cid:16) k λ j k − δ , k λ j k δ (cid:17) , for ( λ , . . . λ κ ) ∈ A ( r ).Notice, that in the case τ = 0 for any δ > C > h ( x ) = ln( x ) 1, and h ( x ) = ln( x ) < Cx − δ , x < 1. Hence, by Lemma 2 for − τ ≤ d − κα we get Z A ( r ) ∩ [0 , κd h τ (cid:16) k λ j k (cid:17) max (cid:16) k λ j k − δ , k λ j k δ (cid:17) (cid:12)(cid:12)(cid:12)(cid:12) K (cid:18) κ P i =1 λ i (cid:19)(cid:12)(cid:12)(cid:12)(cid:12) d λ . . . 
Therefore, we obtain for sufficiently large $r$
\[
I_1 \le C g^{2}(r) \sum_{j=1}^{\kappa} \int_{A(r)} h_{\tau}^{2}\!\left(\frac{1}{\|\lambda_j\|}\right) \max\left(\|\lambda_j\|^{-2\delta},\, \|\lambda_j\|^{2\delta}\right) \frac{|K(\lambda_1+\dots+\lambda_\kappa)|^2\, d\lambda_1\dots d\lambda_\kappa}{\|\lambda_1\|^{d-\alpha}\dots\|\lambda_\kappa\|^{d-\alpha}}
\le C g^{2}(r) \int_{A(r)} h_{\tau}^{2}\!\left(\frac{1}{\|\lambda_1\|}\right) \max\left(\|\lambda_1\|^{-2\delta},\, \|\lambda_1\|^{2\delta}\right) \frac{|K(\lambda_1+\dots+\lambda_\kappa)|^2\, d\lambda_1\dots d\lambda_\kappa}{\|\lambda_1\|^{d-\alpha}\dots\|\lambda_\kappa\|^{d-\alpha}} \le C g^{2}(r). \qquad (20)
\]
It follows from Assumption 2 and the specification of the estimate (15) in the proof of Theorem 2 that for each positive $\delta$ there exists $r_0 > 0$ such that for all $r \ge r_0$ and all $(\lambda_1,\dots,\lambda_\kappa)$ in
\[
B_{(1,\mu_2,\dots,\mu_\kappa)} := \left\{ (\lambda_1,\dots,\lambda_\kappa) \in \mathbb{R}^{\kappa d} :\ \|\lambda_j\| \le 1 \text{ if } \mu_j = -1, \text{ and } \|\lambda_j\| > 1 \text{ if } \mu_j = 1,\ j = 1,\dots,\kappa \right\},
\]
where $\mu_1 = 1$ and $\mu_j \in \{-1, 1\}$, it holds
\[
\frac{|K(\lambda_1+\dots+\lambda_\kappa)|^2 \left(Q_r(\lambda_1,\dots,\lambda_\kappa)-1\right)^2}{\|\lambda_1\|^{d-\alpha}\dots\|\lambda_\kappa\|^{d-\alpha}} \le C\, \frac{|K(\lambda_1+\dots+\lambda_\kappa)|^2}{\|\lambda_1\|^{d-\alpha}\dots\|\lambda_\kappa\|^{d-\alpha}} + C\, \frac{|K(\lambda_1+\dots+\lambda_\kappa)|^2}{\|\lambda_1\|^{d-\alpha-\delta}\|\lambda_2\|^{d-\alpha-\mu_2\delta}\dots\|\lambda_\kappa\|^{d-\alpha-\mu_\kappa\delta}}.
\]
Since the integrands are non-negative, we can estimate $I_2$ as shown below:
\[
I_2 \le \kappa \int_{\mathbb{R}^{(\kappa-1)d}} \int_{\|\lambda_1\|>r^{\gamma}} \frac{|K(\lambda_1+\dots+\lambda_\kappa)|^2 \left(Q_r(\lambda_1,\dots,\lambda_\kappa)-1\right)^2\, d\lambda_1\dots d\lambda_\kappa}{\|\lambda_1\|^{d-\alpha}\dots\|\lambda_\kappa\|^{d-\alpha}}
\le C \int_{\mathbb{R}^{(\kappa-1)d}} \int_{\|\lambda_1\|>r^{\gamma}} \frac{|K(\lambda_1+\dots+\lambda_\kappa)|^2\, d\lambda_1\dots d\lambda_\kappa}{\|\lambda_1\|^{d-\alpha}\dots\|\lambda_\kappa\|^{d-\alpha}}
+ C \sum_{\substack{\mu_i \in \{1,-1\} \\ i=2,\dots,\kappa}} \int_{\mathbb{R}^{(\kappa-1)d}} \int_{\|\lambda_1\|>r^{\gamma}} \frac{|K(\lambda_1+\dots+\lambda_\kappa)|^2\, d\lambda_1\dots d\lambda_\kappa}{\|\lambda_1\|^{d-\alpha-\delta}\|\lambda_2\|^{d-\alpha-\mu_2\delta}\dots\|\lambda_\kappa\|^{d-\alpha-\mu_\kappa\delta}}
\le C \max_{\substack{\mu_i \in \{1,-1\} \\ i=2,\dots,\kappa}} \int_{\mathbb{R}^{(\kappa-1)d}} \int_{\|\lambda_1\|>r^{\gamma}} \frac{|K(\lambda_1+\dots+\lambda_\kappa)|^2\, d\lambda_1\dots d\lambda_\kappa}{\|\lambda_1\|^{d-\alpha-\delta}\|\lambda_2\|^{d-\alpha-\mu_2\delta}\dots\|\lambda_\kappa\|^{d-\alpha-\mu_\kappa\delta}}. \qquad (21)
\]
Replacing $\lambda_1 + \lambda_2$ by $u$ we obtain
\[
I_2 \le C \max_{\substack{\mu_i \in \{1,-1\} \\ i=2,\dots,\kappa}} \int_{\mathbb{R}^{(\kappa-1)d}} \int_{\|\lambda\|>r^{\gamma}} \frac{|K(u+\lambda_3+\dots+\lambda_\kappa)|^2}{\|\lambda\|^{d-\alpha-\delta}\, \|u-\lambda\|^{d-\alpha-\mu_2\delta}}\, \frac{d\lambda\, du\, d\lambda_3\dots d\lambda_\kappa}{\|\lambda_3\|^{d-\alpha-\mu_3\delta}\dots\|\lambda_\kappa\|^{d-\alpha-\mu_\kappa\delta}}.
\]
After the change of variables $\lambda \to \|u\|\lambda$ this yields
\[
I_2 \le C \max_{\substack{\mu_i \in \{1,-1\} \\ i=2,\dots,\kappa}} \int_{\mathbb{R}^{(\kappa-1)d}} \frac{1}{\|u\|^{d-2\alpha-(\mu_2+1)\delta}}\, \frac{|K(u+\lambda_3+\dots+\lambda_\kappa)|^2}{\|\lambda_3\|^{d-\alpha-\mu_3\delta}\dots\|\lambda_\kappa\|^{d-\alpha-\mu_\kappa\delta}} \int_{\|\lambda\| > r^{\gamma}/\|u\|} \frac{d\lambda\, du\, d\lambda_3\dots d\lambda_\kappa}{\|\lambda\|^{d-\alpha-\delta} \left\| \frac{u}{\|u\|} - \lambda \right\|^{d-\alpha-\mu_2\delta}}.
\]
Since for $\delta \in (0, \min(\alpha,\, d/\kappa - \alpha))$
\[
\sup_{u \in \mathbb{R}^d \setminus \{0\}} \int_{\mathbb{R}^d} \frac{d\lambda}{\|\lambda\|^{d-\alpha-\delta} \left\| \frac{u}{\|u\|} - \lambda \right\|^{d-\alpha-\mu_2\delta}} \le C < \infty,
\]
we obtain
\[
I_2 \le C \max_{\substack{\mu_i \in \{1,-1\} \\ i=3,\dots,\kappa}} \int_{\mathbb{R}^{(\kappa-2)d}} \Bigg( \max_{\mu_2 \in \{1,-1\}} \int_{\|u\| \le r^{\gamma_1}} \frac{|K(u+\lambda_3+\dots+\lambda_\kappa)|^2}{\|u\|^{d-2\alpha-(\mu_2+1)\delta}} \frac{d\lambda_3\dots d\lambda_\kappa}{\|\lambda_3\|^{d-\alpha-\mu_3\delta}\dots\|\lambda_\kappa\|^{d-\alpha-\mu_\kappa\delta}} \int_{\|\lambda\| > r^{\gamma-\gamma_1}} \frac{d\lambda\, du}{\|\lambda\|^{d-\alpha-\delta} \left\| \frac{u}{\|u\|} - \lambda \right\|^{d-\alpha-\mu_2\delta}}
+ \max_{\mu_2 \in \{1,-1\}} \int_{\|u\| > r^{\gamma_1}} \frac{|K(u+\lambda_3+\dots+\lambda_\kappa)|^2\, du\, d\lambda_3\dots d\lambda_\kappa}{\|u\|^{d-2\alpha-(\mu_2+1)\delta}\, \|\lambda_3\|^{d-\alpha-\mu_3\delta}\dots\|\lambda_\kappa\|^{d-\alpha-\mu_\kappa\delta}} \Bigg),
\]
where $\gamma_1 \in (0, \gamma)$. By Lemma 2, there exists $r_0 > 0$ such that for all $r \ge r_0$ the first summand is bounded by
\[
C \max_{\mu_2 \in \{1,-1\}} \int_{\|u\| \le r^{\gamma_1}} \frac{|K(u+\lambda_3+\dots+\lambda_\kappa)|^2\, du\, d\lambda_3\dots d\lambda_\kappa}{\|u\|^{d-2\alpha-(\mu_2+1)\delta}\, \|\lambda_3\|^{d-\alpha-\mu_3\delta}\dots\|\lambda_\kappa\|^{d-\alpha-\mu_\kappa\delta}} \times \int_{\|\lambda\| > r^{\gamma-\gamma_1}} \frac{d\lambda}{\|\lambda\|^{2d-2\alpha-(1+\mu_2)\delta}} \le C r^{-(\gamma-\gamma_1)(d-2\alpha-2\delta)}.
\]
Therefore, for sufficiently large $r$,
\[
I_2 \le C r^{-(\gamma-\gamma_1)(d-2\alpha-2\delta)} + C \max_{\substack{\mu_i \in \{1,-1\} \\ i=3,\dots,\kappa}} \int_{\mathbb{R}^{(\kappa-2)d}} \int_{\|u\| > r^{\gamma_1}} \frac{|K(u+\lambda_3+\dots+\lambda_\kappa)|^2\, du\, d\lambda_3\dots d\lambda_\kappa}{\|u\|^{d-2\alpha-2\delta}\, \|\lambda_3\|^{d-\alpha-\mu_3\delta}\dots\|\lambda_\kappa\|^{d-\alpha-\mu_\kappa\delta}}.
\]
Notice that the second summand here coincides with (21) if $\kappa$ is replaced by $\kappa - 1$. Thus, we can repeat the above procedure $\kappa - 1$ times and obtain
\[
I_2 \le C r^{-(\gamma-\gamma_1)(d-2\alpha-2\delta)} + \dots + C r^{-(\gamma_{\kappa-2}-\gamma_{\kappa-1})(d-\kappa\alpha-\kappa\delta)} + C \int_{\|u\| > r^{\gamma_{\kappa-1}}} \frac{|K(u)|^2\, du}{\|u\|^{d-\kappa\alpha-\kappa\delta}}, \qquad (22)
\]
where $\gamma > \gamma_1 > \gamma_2 > \dots > \gamma_{\kappa-1} > 0$.
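Each power of $r$ in (22) comes from a radial tail integral which, after the angular part is bounded, reduces to $\int_R^{\infty} t^{-p}\, dt = R^{1-p}/(p-1)$ for $p > 1$. This is easy to confirm numerically in log coordinates; the parameter values below are arbitrary illustrative choices:

```python
import math

def tail_integral(R, p, n=200_000, span=1e8):
    """Midpoint rule for the integral of t**(-p) over (R, R*span), using t = exp(u)."""
    a = math.log(R)
    h = math.log(span) / n
    return sum(math.exp((1.0 - p) * (a + (i + 0.5) * h)) for i in range(n)) * h

d, kappa, alpha, delta = 3, 2, 0.4, 0.05
p = d - kappa * (alpha + delta)      # exponent of the dt/t integral arising from the last summand of (22)
R = 10.0 ** 1.5                      # plays the role of r**gamma_{kappa-1}
exact = R ** (1.0 - p) / (p - 1.0)   # closed-form tail value R**(1-p)/(p-1)
print(abs(tail_integral(R, p) / exact - 1.0) < 1e-3)  # True
```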
Using the integration formula for polar coordinates and the estimate (2) we obtain
\[
\int_{\|u\| > r^{\gamma_{\kappa-1}}} \frac{|K(u)|^2\, du}{\|u\|^{d-\kappa\alpha-\kappa\delta}} \le \int_{r^{\gamma_{\kappa-1}}}^{\infty} t^{d-1} \int_{S_{d-1}(1)} \frac{|K(\omega t)|^2}{t^{d-\kappa\alpha-\kappa\delta}}\, d\omega\, dt \le C \int_{r^{\gamma_{\kappa-1}}}^{\infty} \frac{dt}{t^{d-\kappa(\alpha+\delta)}} \le C\, r^{-\gamma_{\kappa-1}(d-1-\kappa(\alpha+\delta))}. \qquad (23)
\]
Now let us consider the case $\tau < 0$. In this case, by Theorem 1.5.6 of [24], for any $\delta > 0$ and sufficiently large $r$ one can estimate $g(r)$ as follows:
\[
g(r) \le C r^{\tau+\delta}. \qquad (24)
\]
Combining the estimates (16), (17), (20), (22), (23) and (24) we obtain
\[
\rho\left( \frac{\kappa!\, K_r}{c_1^{\kappa/2}(d,\alpha)\, r^{d-1-\kappa\alpha/2}\, L^{\kappa/2}(r)},\, X_{\kappa}(\Delta) \right) \le C \left( r^{-\frac{a\alpha(d-1-\kappa\alpha)}{(2+a)(d-1-(\kappa-1)\alpha)} + \delta} + \left( r^{2\tau+2\delta} + r^{-(\gamma-\gamma_1)(d-2\alpha-2\delta)} + \dots + r^{-(\gamma_{\kappa-2}-\gamma_{\kappa-1})(d-\kappa\alpha-\kappa\delta)} + r^{-\gamma_{\kappa-1}(d-1-\kappa(\alpha+\delta))} \right)^{\frac{a}{2+a}} \right).
\]
Therefore, for any $\tilde{\kappa}_1 \in (0, \kappa_1)$ and any $\delta_2 > 0$ one can choose a sufficiently small $\delta > 0$ such that
\[
\rho\left( \frac{\kappa!\, K_r}{c_1^{\kappa/2}(d,\alpha)\, r^{d-1-\kappa\alpha/2}\, L^{\kappa/2}(r)},\, X_{\kappa}(\Delta) \right) \le C r^{\delta_2} \left( r^{-\frac{a\alpha(d-1-\kappa\alpha)}{(2+a)(d-1-(\kappa-1)\alpha)}} + r^{-\frac{a\tilde{\kappa}_1}{2+a}} \right), \qquad (25)
\]
where
\[
\kappa_1 := \sup_{1 > \gamma > \gamma_1 > \dots > \gamma_{\kappa-1} > 0} \min\left( -2\tau,\ (\gamma-\gamma_1)(d-2\alpha),\ \dots,\ (\gamma_{\kappa-2}-\gamma_{\kappa-1})(d-\kappa\alpha),\ \gamma_{\kappa-1}(d-1-\kappa\alpha) \right).
\]

Lemma 5. Let $x = (x_0, \dots, x_n) \in \mathbb{R}^{n+1}_{+}$ be some fixed vector and
\[
\Gamma = \left\{ \gamma = (\gamma_0, \dots, \gamma_{n+1})\ \middle|\ b = \gamma_0 > \gamma_1 > \dots > \gamma_{n+1} = 0 \right\}.
\]
The function $G(\gamma) = \min_i\, (\gamma_i - \gamma_{i+1}) x_i$ reaches its maximum at $\bar{\gamma} = (\bar{\gamma}_0, \dots, \bar{\gamma}_{n+1}) \in \Gamma$ such that for any $0 \le i \le n-1$ it holds
\[
(\bar{\gamma}_i - \bar{\gamma}_{i+1})\, x_i = (\bar{\gamma}_{i+1} - \bar{\gamma}_{i+2})\, x_{i+1}. \qquad (26)
\]

Proof. Let us show that any deviation of $\gamma$ from $\bar{\gamma}$ leads to a smaller value. Consider a vector $\hat{\gamma} \in \Gamma$ such that for some $i \in \{0, \dots, n\}$ and some $\varepsilon_1 > 0$
\[
\hat{\gamma}_i - \hat{\gamma}_{i+1} = \bar{\gamma}_i - \bar{\gamma}_{i+1} + \varepsilon_1.
\]
Since $\sum_{i=0}^{n} (\hat{\gamma}_i - \hat{\gamma}_{i+1}) = \hat{\gamma}_0 - \hat{\gamma}_{n+1} = b$, we can conclude that there exist some $j \ne i$, $j \in \{0, \dots, n\}$, and $\varepsilon_2 > 0$ such that
\[
\hat{\gamma}_j - \hat{\gamma}_{j+1} = \bar{\gamma}_j - \bar{\gamma}_{j+1} - \varepsilon_2.
\]
Obviously, in this case
\[
G(\hat{\gamma}) \le (\hat{\gamma}_j - \hat{\gamma}_{j+1})\, x_j = (\bar{\gamma}_j - \bar{\gamma}_{j+1} - \varepsilon_2)\, x_j = (\bar{\gamma}_j - \bar{\gamma}_{j+1})\, x_j - \varepsilon_2 x_j.
\]
Since $\varepsilon_2 > 0$ and $x_j > 0$,
\[
G(\hat{\gamma}) \le (\bar{\gamma}_j - \bar{\gamma}_{j+1})\, x_j - \varepsilon_2 x_j < (\bar{\gamma}_j - \bar{\gamma}_{j+1})\, x_j = G(\bar{\gamma}).
\]
Hence, any deviation from $\bar{\gamma}$ yields a smaller value of $G$.

Note that for fixed $\gamma \in (0,1)$, by Lemma 5,
\[
\sup_{\gamma > \gamma_1 > \dots > \gamma_{\kappa-1} > 0} \min\left( (\gamma-\gamma_1)(d-2\alpha),\ \dots,\ (\gamma_{\kappa-2}-\gamma_{\kappa-1})(d-\kappa\alpha),\ \gamma_{\kappa-1}(d-1-\kappa\alpha) \right) = \frac{\gamma}{\frac{1}{d-2\alpha} + \dots + \frac{1}{d-\kappa\alpha} + \frac{1}{d-1-\kappa\alpha}}
\]
and
\[
\sup_{\gamma \in (0,1)} \frac{\gamma}{\frac{1}{d-2\alpha} + \dots + \frac{1}{d-\kappa\alpha} + \frac{1}{d-1-\kappa\alpha}} = \frac{1}{\frac{1}{d-2\alpha} + \dots + \frac{1}{d-\kappa\alpha} + \frac{1}{d-1-\kappa\alpha}}.
\]
Thus,
\[
\kappa_1 = \min\left( -2\tau,\ \left( \frac{1}{d-2\alpha} + \dots + \frac{1}{d-\kappa\alpha} + \frac{1}{d-1-\kappa\alpha} \right)^{-1} \right),
\]
and from (25) the first statement of the theorem follows.

Now let us consider the case $\tau = 0$. In this case, by Theorem 1.5.6 of [24], for any $s > 0$ and sufficiently large $r$
\[
g(r) > r^{-s}. \qquad (27)
\]
By combining the estimates (16), (17), (20), (22), (23) and using (27) to replace all powers of $r$ by $g(r)$ we obtain
\[
\rho\left( \frac{\kappa!\, K_r}{c_1^{\kappa/2}(d,\alpha)\, r^{d-1-\kappa\alpha/2}\, L^{\kappa/2}(r)},\, X_{\kappa}(\Delta) \right) \le C \left( g(r) + g^{\frac{2a}{2+a}}(r) \right).
\]
Since $a \le 2$, it follows that
\[
\rho\left( \frac{\kappa!\, K_r}{c_1^{\kappa/2}(d,\alpha)\, r^{d-1-\kappa\alpha/2}\, L^{\kappa/2}(r)},\, X_{\kappa}(\Delta) \right) \le C g^{\frac{2a}{2+a}}(r).
\]
This proves the second statement of the theorem.

Remark. For example, for $g(x) = 1/\ln(x)$ in Remark 3 we obtain
\[
\rho\left( \frac{\kappa!\, K_r}{c_1^{\kappa/2}(d,\alpha)\, r^{d-1-\kappa\alpha/2}\, L^{\kappa/2}(r)},\, X_{\kappa}(\Delta) \right) \le C \ln^{-\frac{2a}{2+a}}(r).
\]

Let us study how the upper bounds in the rate of convergence behave depending on their parameters. When $\tau = 0$ it is quite straightforward to see that for $g(r)$ close to $0$ the upper bound decreases as $a$ increases.

For the case $\tau < 0$, let us investigate the upper bound in (25) as a function of $\alpha$. The rate exponent there is bounded by
\[
\frac{a}{2+a} \min\left( \frac{\alpha(d-1-\kappa\alpha)}{d-1-(\kappa-1)\alpha},\ \kappa_1 \right) = \frac{a}{2+a} \min\left( \left( \frac{1}{\alpha} + \frac{1}{d-1-\kappa\alpha} \right)^{-1},\ \kappa_1 \right).
\]
Since $\kappa_1 > 0$, it is obvious that if $\alpha \to 0$ or $\alpha \to (d-1)/\kappa$ the upper bound decreases to $0$.
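Lemma 5 and the resulting harmonic-type expression for the supremum can be sanity-checked numerically. In the sketch below the weights `xs` are arbitrary stand-ins for the factors $d-2\alpha, \dots, d-\kappa\alpha, d-1-\kappa\alpha$; the equalized chain of Lemma 5 is compared against randomly sampled admissible chains:

```python
import random

def G(gammas, xs):
    """G(gamma) = min_i (gamma_i - gamma_{i+1}) * x_i for a decreasing chain."""
    return min((gammas[i] - gammas[i + 1]) * xs[i] for i in range(len(xs)))

def equalized_chain(b, xs):
    """Chain with all products (gamma_i - gamma_{i+1}) * x_i equal (Lemma 5),
    gamma_0 = b, gamma_{n+1} = 0; returns the chain and the common value m."""
    m = b / sum(1.0 / x for x in xs)  # m = b / (1/x_0 + ... + 1/x_n)
    gammas = [b]
    for x in xs:
        gammas.append(gammas[-1] - m / x)
    return gammas, m

xs = [1.5, 4.0, 2.5]  # illustrative weights only
b = 1.0
gbar, m = equalized_chain(b, xs)
assert abs(G(gbar, xs) - m) < 1e-12  # the equalized chain attains b / (1/x_0 + ... + 1/x_n)

random.seed(1)
for _ in range(100_000):             # no other admissible chain does better
    cuts = sorted(random.uniform(0.0, b) for _ in range(len(xs) - 1))
    gammas = [b] + cuts[::-1] + [0.0]
    assert G(gammas, xs) <= m + 1e-12
print("Lemma 5 maximizer confirmed; maximal value:", m)
```

The assertion inside the loop is exactly the mechanism of the proof: since the increments sum to $b$, no chain can make every product exceed $b / \sum_i 1/x_i$.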
Thus, as expected, for these values of $\alpha$ our estimate does not provide a good rate of convergence.

Let us determine the $\alpha$ that corresponds to the best possible bound. We have to compare
\[
\left( \frac{1}{\alpha} + \frac{1}{d-1-\kappa\alpha} \right)^{-1} \quad \text{and} \quad \left( \frac{1}{d-2\alpha} + \dots + \frac{1}{d-\kappa\alpha} + \frac{1}{d-1-\kappa\alpha} \right)^{-1}.
\]
Notice that $1/\alpha$ is a decreasing function of $\alpha$, while $\frac{1}{d-2\alpha} + \dots + \frac{1}{d-\kappa\alpha}$ is an increasing function of $\alpha$ on $\left(0, \frac{d-1}{\kappa}\right)$. Also, for $\alpha \to \frac{d-1}{\kappa}$ we get $\frac{1}{\alpha} \to \frac{\kappa}{d-1}$ and
\[
\frac{1}{d-2\alpha} + \dots + \frac{1}{d-\kappa\alpha} \to \frac{1}{\frac{(d-1)(\kappa-2)}{\kappa}+1} + \dots + \frac{1}{\frac{d-1}{\kappa}+1} + 1.
\]
Hence, we have two cases depending on the values of the parameters $\kappa$ and $d$.

Case 1. If
\[
\frac{\kappa}{d-1} < \frac{1}{\frac{(d-1)(\kappa-2)}{\kappa}+1} + \dots + \frac{1}{\frac{d-1}{\kappa}+1} + 1, \qquad (28)
\]
then there exists
\[
\alpha^{\star} = \arg\left\{ \alpha :\ \frac{1}{\alpha} = \frac{1}{d-2\alpha} + \dots + \frac{1}{d-\kappa\alpha} \right\}
\]
that provides the best possible bound
\[
\frac{a}{2+a} \min\left( \left( \frac{1}{\alpha^{\star}} + \frac{1}{d-1-\kappa\alpha^{\star}} \right)^{-1},\ -2\tau \right).
\]

Case 2. If condition (28) does not hold then, regardless of $\alpha$, we have
\[
\left( \frac{1}{\alpha} + \frac{1}{d-1-\kappa\alpha} \right)^{-1} < \left( \frac{1}{d-2\alpha} + \dots + \frac{1}{d-\kappa\alpha} + \frac{1}{d-1-\kappa\alpha} \right)^{-1}
\]
and the upper bound is $\frac{a}{2+a} \min\left( \left( \frac{1}{\alpha} + \frac{1}{d-1-\kappa\alpha} \right)^{-1},\ -2\tau \right)$. Choosing $\alpha = \frac{d-1}{\kappa+1}$, which maximizes this expression, we get the best bound
\[
\frac{a}{2+a} \min\left( \frac{d-1}{2+2\kappa},\ -2\tau \right).
\]
Since in this case the bound does not depend on the right part of $\kappa_1$, the rate of convergence is determined only by the tail of the Hermite expansion of the function $G(\cdot)$ and by the parameter $\tau$ of the random field introduced in Assumption 2.

Remark. If $\kappa = 1$ then there is no $d$ for which condition (28) holds true, and only Case 2 is applicable.

Example. If $d = 2$ then for any $\kappa \in \mathbb{N}$ it holds
\[
\frac{\kappa}{2\kappa-2} + \dots + \frac{\kappa}{\kappa+1} + 1 < \kappa.
\]
Therefore, only Case 2 is possible, and for any $\kappa$ the best bound is
\[
\frac{a}{2+a} \min\left( \frac{1}{2+2\kappa},\ -2\tau \right).
\]

Example. If $\kappa = 2$ and $d = 4$ then condition (28) holds and $\alpha^{\star} = \arg\{\alpha :\ \alpha = 4 - 2\alpha\} = 4/3$. Since $a = 1$ for $\kappa = 2$, we get that the best bound is
\[
\min\left( \frac{4}{45},\ -\frac{2\tau}{3} \right).
\]

References

[1] Doukhan, P., Oppenheim, G., Taqqu, M.S. (2003). Long-Range Dependence: Theory and Applications. Boston: Birkhäuser.
[2] Ivanov, A.V., Leonenko, N.N. (1989). Statistical Analysis of Random Fields. Dordrecht: Kluwer Academic Publishers.
[3] Wackernagel, H. (1998).
Multivariate Geostatistics. Berlin: Springer-Verlag.
[4] Anh, V.V., Leonenko, N.N., Ruiz-Medina, M.D. (2013). Macroscaling limit theorems for filtered spatiotemporal random fields. Stochastic Anal. Appl.
[5] Stochastic Anal. Appl.
[6] Stochastic Anal. Appl.
[7] Z. Wahrsch. Verw. Gebiete.
[8] Z. Wahrsch. Verw. Gebiete.
[9] Z. Wahrsch. Verw. Gebiete.
[10] Bernoulli.
[11] Bernoulli.
[13] Decay of the Fourier Transform: Analytic and Geometric Aspects. Basel: Birkhäuser.
[14] Iosevich, A., Rudnev, M. (2009). Freiman theorem, Fourier transform and additive structure of measures. J. Aust. Math. Soc.
[15] Random Fields and Geometry. New York: Springer.
[16] Novikov, D., Schmalzing, J., Mukhanov, V.F. (2000). On non-Gaussianity in the cosmic microwave background. Astronom. Astrophys.
[17] Statist. Sci.
[18] Theory Probab. Math. Statist.
[19] Statistics and control of random process. Preila, Nauka, Moscow. 55–57 (in Russian).
[20] Nourdin, I., Poly, G. (2013). Convergence in total variation on Wiener chaos. Stoch. Process. Appl.
[21] Stat. Probab. Lett., 83(10):2160–2167. DOI:10.1016/j.spl.2013.05.030.
[22] Reyes, J.B., Blaya, R.A. (2005). Cauchy transform and rectifiability in Clifford analysis. Z. Anal. Anwendungen.
[23] Wiener Chaos: Moments, Cumulants and Diagrams. A Survey with Computer Implementation. Berlin: Springer.
[24] Bingham, N.H., Goldie, C.M., Teugels, J.L. (1987). Regular Variation. Cambridge: Cambridge University Press.
[25] Leonenko, N.N., Olenko, A. (2013). Tauberian and Abelian theorems for long-range dependent random fields. Methodol. Comput. Appl. Probab.
[26] Limit Theorems of Probability Theory. New York: Oxford Univ. Press.
[27] Seneta, E. (1976). Regularly Varying Functions. Berlin: Springer-Verlag.
[28] Leonenko, N. (1999). Limit Theorems for Random Fields with Singular Spectrum. Dordrecht: Kluwer Academic Publishers.
[29] Corless, R.M., Gonnet, G.H., Hare, D.E., Jeffrey, D.J., Knuth, D.E. (1996). On the Lambert W function. Adv. Comput. Math.
[30] J. Math. Soc. Japan.