Non-Improved Uniform Tail Estimates for Normed Sums of Independent Random Variables with Heavy Tails, with applications
arXiv: [math.PR]
E. Ostrovsky and L. Sirota

Department of Mathematics and Statistics, Bar-Ilan University, 59200 Ramat Gan, Israel.

E-mails: [email protected]; [email protected]
Abstract.
We obtain uniform tail estimates for naturally normed sums of independent random variables (r.v.) with regularly varying tails of distributions.

We also give many examples in order to show the exactness of the offered estimates, discuss some applications to the Monte-Carlo method and to statistics, and obtain sufficient conditions for the Central and Stable Limit Theorems in the Banach space of continuous functions.

We also consider a slight generalization to random variables with super-heavy tails and to the martingale difference scheme.
Key words and phrases:
Tail function; normed, centered and ordinary random variables (r.v.); continuous, regular and slowly varying functions; moments and moment spaces; characteristic functions; Central and Stable Limit Theorems (CLT, SLT); Monte-Carlo method; statistics; Gaussian and stable distributions; Orlicz and Grand Lebesgue spaces of random variables; accompanying infinitely divisible distributions; martingale and martingale differences; Banach space.

AMS 2000 subject classifications: Primary 60G50, 60B11; secondary 62G20.
1. Introduction. Definitions. Statement of problem.

Let ξ be a random variable (r.v.) defined on some sufficiently rich probability space (Ω, A, P) with regularly varying (as x → ∞) tail behavior:

T(x) = T_ξ(x) = x^{-r} log^γ(x) L(log x), r = const > 0, x > e, (1.1)

where the tail function T(x) = T_ξ(x) of the r.v. ξ is defined as follows:

T(x) = T_ξ(x) = P(|ξ| ≥ x), x > 0. (1.2)

Here γ = const ∈ R, and L = L(x) is a positive continuous function, slowly varying as x → ∞.

Further, we denote by φ(t) = φ_ξ(t) the characteristic function of the r.v. ξ:

φ(t) = φ_ξ(t) = E exp(itξ), (1.3)

and by ψ(t) = ψ_ξ(t) its complement:

ψ(t) = ψ_ξ(t) = 1 − Re[φ_ξ(t)] = E(1 − Re[exp(itξ)]). (1.4)

If ξ has a symmetric distribution, then

ψ(t) = ψ_ξ(t) = 1 − φ_ξ(t) = E(1 − exp(itξ)). (1.4a)

Denote also, in the symmetric case,

ψ*(t) = ψ*_ξ(t) = sup_{λ ∈ (0,1]} [ψ(λt)/ψ(λ)]; K(p) = 2 π^{-1} Γ(1 + p) sin(πp/2), p ∈ (0, 2). (1.5)

Note that the function d(t, s) = d_ξ(t, s) = √(ψ(t − s)) is a bounded translation-invariant continuous distance between two points on the real line R.

Let ξ(k), k = 1, 2, ..., n be independent copies of ξ; we define the natural norming sequence {b(n)} as follows: b(1) = 1, and for n = 2, 3, ... as the positive solution of the equation

ψ_ξ(1/b(n)) = 1/n. (1.6)

In other words, {b(n)} is the natural norming sequence for the sum of the independent variables {ξ(i)}. Note that for all sufficiently large values of n the value b(n) exists, is unique, and lim_{n→∞} b(n) = ∞. For instance, if
Var(ξ) ∈ (0, ∞), then b(n) ≍ √n (the classical norming sequence).

Another example: assume that the r.v. ξ has a symmetric stable distribution of order r:

Law(ξ) = St(r) ⇔ φ_ξ(t) = exp(−|t|^r), r ∈ (0, 2).

In this case b(n) ≍ n^{1/r}.

Let us denote

S(n) = b(n)^{−1} Σ_{k=1}^n ξ(k), (1.7)

U_ξ(x) = U(x) = sup_{n=1,2,...} T_{S(n)}(x). (1.8)

The function U(x) is the uniform tail function for the naturally normed sums of the independent random variables ξ(k), k = 1, 2, ..., n.

Our aim is the estimation of the uniform tail function U(x) through the source tail function T(x).

We will distinguish and investigate the cases r ∈ (0, 2) (heavy tails), r = 2 (intermediate tails), and r > 2 (moderate tails), as well as the cases E|ξ|^r = ∞ (infiniteness of the main moment) and E|ξ|^r < ∞ (finiteness of the main moment). In the sixth section we consider the case of super-heavy tails, i.e. when the tail function decreases logarithmically.

For the exponential tail function T(x) = exp(−C x^m), m = const ∈ [1, ∞), the non-improvable estimate of the function U(x) has the form

U(x) ≤ exp(−C₁(C, m) x^{min(m,2)}),

see [4], [25], [33]. Notice that in the exponential case the critical value of the parameter m is also the value m = 2.

The moment estimates for S(n) in the case when r ≤ 2 and E|ξ|^r < ∞ were investigated in [5]. Other estimates (tail and moment) may be found in [1], [9], [17], [21], [27], [30], [39], [47] etc.

For applications of these estimates, e.g. in the Monte-Carlo method to error estimates for integrals with infinite variance, see [11], [33]. In detail, let us consider the problem of numerical computation of an absolutely convergent integral (multiple, in the general case) of the form

I = ∫_D f(y) ν(dy),

where ν(·) is a probability measure on the set D: ν(D) = 1. Let τ(k), k = 1, 2, ..., n be independent r.v. with distribution ν: P(τ(k) ∈ A) = ν(A). The Monte-Carlo estimate I_n of the integral I is

I_n = n^{−1} Σ_{k=1}^n f(τ(k)).

Suppose for some r ∈ (1, 2) that E|f(τ(1))|^r < ∞, or more generally that the r.v. f(τ(k)) − I satisfy the condition (1.1). In order to construct a non-asymptotical confidence interval for I of reliability 1 − δ, δ = 0.05, 0.01 etc., we consider the probability

U_n(x) = P( b(n)^{−1} |Σ_{k=1}^n (f(τ(k)) − I)| > x ).

Evidently, U_n(x) ≤ U(x); therefore, denoting by X(δ) the solution of the equation U(X(δ)) = δ, we conclude that with probability at least 1 − δ

|I_n − I| ≤ X(δ) b(n)/n.

Obviously, under condition (1.1),

lim_{n→∞} b(n)/n = 0.

An analogous application appears in statistics. Indeed, let us consider the following classical scheme of data processing:

η(k) = θ + ξ(k), k = 1, 2, ..., n,

where θ is an unknown deterministic parameter and {ξ(k)} are i.i.d. centered r.v. satisfying the condition (1.1) with r > 1. The classical estimate of θ has the form

θ̂_n = n^{−1} Σ_{k=1}^n η(k).

We conclude as before that, with probability at least 1 − δ, under the conditions and notations formulated above,

|θ̂_n − θ| ≤ X(δ) b(n)/n.

Remark 1.1.
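As a numerical illustration of the Monte-Carlo scheme just described (an added sketch, not part of the original paper), take f(y) = y^{−1/2} on D = [0, 1] with ν the uniform measure, so that I = 2; here E|f(τ)|^r < ∞ for every r < 2 while the variance of f(τ) is infinite. The sample size and the seed are arbitrary choices:

```python
import random

random.seed(1)

def f(y):
    # integrand with infinite variance: E f^2 = int_0^1 dy/y = infinity,
    # but E|f|^r < infinity for every r < 2
    return y ** -0.5

def monte_carlo(n):
    # I_n = n^{-1} sum_k f(tau(k)),  tau(k) ~ Uniform(0, 1)
    return sum(f(random.random()) for _ in range(n)) / n

I_exact = 2.0          # int_0^1 y^{-1/2} dy = 2
n = 200_000
I_n = monte_carlo(n)
err = abs(I_n - I_exact)
print(I_n, err)
```

The point of the confidence interval |I_n − I| ≤ X(δ) b(n)/n is that for such heavy-tailed integrands the error decays at the non-classical rate b(n)/n rather than n^{−1/2}.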
Suppose T(x) = O(x^{−r}), x → ∞, r = const ∈ (0, 2); then the distribution of ξ(k) is dominated by a symmetric stable distribution C · St(r). Hence, by virtue of the main result of S. Kwapien [27], b(n) ∼ C n^{1/r} and

U(x) = O(x^{−r}), r = const ∈ (0, 2).

Remark 1.2.
Suppose E|ξ|^r < ∞, r = const ∈ (0, 2). This condition is equivalent to the following:

∫_e^∞ x^{−1} log^γ(x) L(log x) dx < ∞.

We conclude, using the famous result of B. von Bahr and C.-G. Esseen [1], that again b(n) ∼ C n^{1/r} and

sup_n E|S(n)|^r ≤ 2 E|ξ|^r, U(x) = o(x^{−r}), x → ∞, r = const ∈ (0, 2).

The paper is organized as follows. In the next section we consider the so-called case of heavy tails: r ∈ (0, 2). The third section is devoted mainly to the consideration of the intermediate case r = 2. The fourth section contains the investigation of the case of moderate tails: r ∈ (2, ∞).

In the fifth section we generalize the preceding results to the martingale case, i.e. when the summands {ξ(i)} are centered martingale differences relative to some filtration {F(i)}. The sixth section contains the tail evaluation for normed sums of r.v. with super-heavy, i.e. logarithmic, tails of distribution. In the 7th section we obtain sufficient conditions for the Stable and Central Limit Theorems for heavy-tailed random fields in the Banach space of continuous functions on a compact metric space. The last section is devoted to concluding remarks.

The letter C, with or without subscript, denotes a finite positive non-essential constant, not necessarily the same at each appearance.

2. Heavy tails: the case r ∈ (0, 2).

Theorem 2.1.
Let the r.v. ξ have a symmetric distribution. Then

U(x) ≤ 0.5 x ∫_{−2/x}^{2/x} ψ*(t) dt, x > 0. (2.1)

Proof.
We will use the well-known inequality, valid for an arbitrary r.v. η:

T_η(2/a) ≤ a^{−1} ∫_{−a}^{a} ψ_η(t) dt, a = const > 0, (2.2)

with a = 2/x. We denote

V(n) = Σ_{k=1}^n ξ(k) = b(n) S(n).

M. Braverman in [5] proved that

ψ_{V(n)}(t) ≤ Σ_{k=1}^n ψ_{ξ(k)}(t) = n ψ_ξ(t) = n ψ(t),

therefore ψ_{S(n)}(t) ≤ n ψ(t/b(n)). We get:

ψ_{S(n)}(t) ≤ sup_n [n ψ(t/b(n))] ≤ sup_n [n ψ(t · ψ^{−1}(1/n))] ≤ sup_{λ ∈ (0,1]} [ψ(λt)/ψ(λ)] = ψ*(t). (2.3)

Applying (2.2) to η = S(n) with a = 2/x and taking the supremum over n, we obtain (2.1).

Theorem 2.2.
Suppose again that the r.v. ξ has a symmetric distribution and let r ∈ (0, 2). Then

U(x) ≤ inf_{p ∈ (0,r)} [ K(p) x^{−p} ∫_0^∞ ψ*(t) t^{−p−1} dt ]. (2.4)

Proof.
We will use the famous formula

E|η|^p = K(p) ∫_0^∞ ψ_η(t) t^{−p−1} dt

with η = S(n), substituting the estimate of ψ_{S(n)}(t) in place of ψ_η(t). Estimating ψ_{S(n)}(t) by means of inequality (2.3) and applying the Markov inequality, we obtain the assertion of Theorem 2.2.

Corollary 2.1.
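The constant K(p) and the moment formula just used can be sanity-checked on a concrete law (an illustration added here, not from the paper). For the standard symmetric Cauchy distribution, ψ_η(t) = 1 − e^{−|t|} and ∫_0^∞ (1 − e^{−t}) t^{−p−1} dt = Γ(1 − p)/p, so the formula must reproduce the known moments E|η|^p = 1/cos(πp/2), 0 < p < 1:

```python
import math

def K(p):
    # K(p) = 2 pi^{-1} Gamma(1 + p) sin(pi p / 2),  p in (0, 2)
    return 2.0 / math.pi * math.gamma(1.0 + p) * math.sin(math.pi * p / 2.0)

def cauchy_moment(p):
    # E|eta|^p = K(p) * int_0^inf (1 - e^{-t}) t^{-p-1} dt
    #          = K(p) * Gamma(1 - p) / p   (closed form of the integral)
    return K(p) * math.gamma(1.0 - p) / p

for p in (0.1, 0.5, 0.9):
    exact = 1.0 / math.cos(math.pi * p / 2.0)
    print(p, cauchy_moment(p), exact)
```

The agreement follows from the reflection formula Γ(p)Γ(1 − p) = π/sin(πp), which collapses the product to 1/cos(πp/2).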
Choosing in (2.4) the value p = r − C/log x, x > e, we conclude:

U(x) ≤ e^C [ K(r − C/log x) x^{−r} ∫_0^∞ ψ*(t) t^{−r+C/log x−1} dt ]. (2.5)

For instance, if

ψ*(t) ≤ |t|^r |log t|^β, |t| < 1/e, β = const ≥ 0, (2.6)

then the optimal value of C in (2.5) is C = β + 1, and hence for the values x > exp(2(β+1)/r)

U(x) ≤ e^{β+1} (β+1)^{−β−1} K(r − (β+1)/log x) x^{−r} [log x]^{β+1} ≤ C(β, r) x^{−r} [log x]^{β+1}. (2.7)

We investigate further in this section only the case when in the condition (1.1) r ∈ (0, 2).

Definition 2.1.
We will say that the r.v. η has a regular tail, and write Law(η) ∈ RT, if the inverse inequality to (2.2) holds up to a multiplicative constant, i.e. when there exists a constant C₂ > 0 such that for all a > 0

T_η(2/a) ≥ C₂ a^{−1} ∫_{−a}^{a} ψ_η(t) dt. (2.8)

For example, if the r.v. ξ satisfies the condition (1.1) with r ∈ (0, 2), then Law(ξ) ∈ RT. On the other hand, if E ξ = 0 and if for some ∆ = const > 0 we have E|ξ|^{2+∆} < ∞, then Law(ξ) ∉ RT.
Namely, it follows from Tchebychev's inequality that T_ξ(2/a) ≤ C a^{2+∆}, a ∈ (0, 1), but

a^{−1} ∫_{−a}^{a} ψ_ξ(t) dt ≥ C a².

Definitions 2.2.
We will say that the r.v. ξ (more exactly, the distribution Law(ξ) of the r.v. ξ) belongs to the class MI (Monotonically Increasing), and write Law(ξ) ∈ MI, if the function λ → θ_t(λ), λ ∈ (0, 1), where

θ_t(λ) = ψ(λt)/ψ(λ), (2.9)

is monotonically increasing for each t in some neighborhood t ∈ [0, ∆), ∆ > 0.

Analogously, we will say that the r.v. ξ (more exactly, the distribution Law(ξ) of the r.v. ξ) belongs to the class MD (Monotonically Decreasing), and write Law(ξ) ∈ MD, if the function λ → θ_t(λ), λ ∈ (0, 1), in

θ_t(λ) = ψ(λt)/ψ(λ) (2.10)

is monotonically decreasing for each t in some neighborhood t ∈ [0, ∆), ∆ > 0.

Obviously, if Law(ξ) ∈ MD, then

ψ*(t) ≍ ψ(t), t ∈ [0, ∆], (2.11)

and if Law(ξ) ∈ MI and the distribution of the r.v. ξ satisfies the condition (1.1), then

ψ*(t) ≍ t^r, t ∈ [0, ∆]. (2.12)

For example, if the distribution of the r.v. ξ satisfies the condition (1.1) and γ > 0, then Law(ξ) ∈ MD and ψ*(t) ≍ ψ(t), t ∈ [0, ∆]; if the distribution of the r.v. ξ satisfies the condition (1.1) and γ < 0, then Law(ξ) ∈ MI and ψ*(t) ≍ t^r, t ∈ [0, ∆].

Consider now the case γ = 0. Suppose

T(x) = T_ξ(x) = x^{−r} [log log x]^κ L(log log x), κ = const, x > e^e, (2.13)

where L = L(z) is a positive function, slowly varying as z → ∞. If in (2.13) κ > 0, then Law(ξ) ∈ MD, and if in (2.13) κ < 0, then Law(ξ) ∈ MI.
Theorem 2.3.
If Law(ξ) ∈ RT ∩ MD, then we propose the following estimates, non-improvable up to a multiplicative constant:

T(x) ≤ U(x) ≤ C(r, L) T(x), x > x₀ = const > 0. (2.14)

Proof.
The left-hand inequality in (2.14) is trivial (take n = 1); it remains to prove the right-hand inequality.

We will use the following fact: if the equality (1.1) holds and r ∈ (0, 2), then as t → 0+

ψ_ξ(t) ∼ C(r) t^r |log t|^γ L(|log t|), (2.15)

where C(r) = Γ(1 − r) cos(πr/2), r ≠ 1; see [19], p. 86-87, and also [5].

As long as Law(ξ) ∈ MD,

ψ*_ξ(t) ≍ ψ_ξ(t), t ∈ (0, ∆). (2.16)

Since 0 < r < 2, we conclude by virtue of the relation (2.16) and the condition Law(ξ) ∈ RT that

U(x) ≤ C(r, L) x^{−r} [log x]^γ L(log x) = C(r, L) T(x), (2.17)

which completes the proof.

Theorem 2.4.
If Law(ξ) ∈ RT ∩ MI, then

U(x) ≤ C(r, L) x^{−r}, x > x₀ = const > 0, (2.19)

for the symmetrically distributed r.v. ξ. Notice that the factor L(·) is absent on the right-hand side of the inequality (2.19); this expression is included in the definition of the norming sequence {b(n)}.

Examples.

A.
Suppose the r.v. ξ is symmetrically distributed and satisfies the condition (1.1) with γ > 0; then

U(x) ≤ C(r, γ, L) x^{−r} [log x]^γ L(log x) = C(r, L) T(x), x > e.

B. Suppose now the r.v. ξ is symmetrically distributed and satisfies the condition (1.1) with γ < 0; then

U(x) ≤ C(r, L) x^{−r}, x > x₀ = const > 0.

C. Consider now, up to the end of this section, the case γ = 0. Suppose

T(x) = T_ξ(x) = x^{−r} [log log x]^κ L(log log x), κ = const, x > e^e,

where as before L = L(z) is a positive function, slowly varying as z → ∞. If κ > 0, then

U(x) ≤ C(r, κ, L) x^{−r} [log log x]^κ L(log log x), x > e^e.

D. If κ < 0, we conclude

U(x) ≤ C(r, κ, L) x^{−r}, x > 1.
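The behavior described in Examples A-D can be probed by simulation (an illustration added here, not part of the paper; the index r = 1.2, the level x, the grid of n, and the repetition count are arbitrary choices). For a symmetric Pareto-type r.v. with T(x) = x^{−r}, x ≥ 1, one has b(n) ≍ n^{1/r}, and the empirical tails of S(n) should stay within a constant multiple of T(x) uniformly in n:

```python
import random

random.seed(2)
r = 1.2          # heavy-tail index, r in (0, 2)
x = 10.0         # tail level at which we compare
reps = 10_000    # Monte-Carlo repetitions per n

def xi():
    # symmetric Pareto: P(|xi| > t) = t^{-r} for t >= 1
    sign = 1.0 if random.random() < 0.5 else -1.0
    return sign * random.random() ** (-1.0 / r)

def tail_Sn(n):
    # empirical P(|S(n)| > x) with S(n) = n^{-1/r} * sum_k xi(k)
    b = n ** (1.0 / r)
    hits = sum(abs(sum(xi() for _ in range(n)) / b) > x for _ in range(reps))
    return hits / reps

T_x = x ** (-r)
results = {}
for n in (1, 10, 100):
    results[n] = tail_Sn(n)
    print(n, results[n], T_x)
```

The empirical tails hover near T(x) for every n, in line with the estimate U(x) ≤ C T(x) of Example A (here γ = 0, κ = 0).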
Note that the application of Theorem 2.2 gives a slightly weaker result, namely

U(x) ≤ C x^{−r} [log x]^{γ+1} L(log x) = C T(x) log x, x > e.

3. Intermediate case: r = 2.

We consider in this section the case when in the condition (1.1) r = 2.

Theorem 3.1.
Suppose the r.v. ξ is symmetrically distributed and satisfies the condition (1.1) with r = 2 and some γ = const ∈ R; then:

A. If γ ≥ −1, then

U(x) ≤ C(L) x^{−2} [log x]^{γ+1} L(log x) = C(r, γ, L) T(x) log x, x > e; (3.1a)

B. If γ < −1, then

U(x) ≤ C(L) x^{−2}, x > 1. (3.1b)

Proof.
Let us introduce the following function:

H(x) = − ∫_0^x u² dT_ξ(u); (3.2)

then

ψ_ξ(t) ∼ 0.5 t² H(1/|t|), t → 0,

see [19], p. 86-88, and also [5]. In the considered case

ψ_ξ(t) ∼ C(γ, L) t² |log t|^{γ+1} L(|log t|), t → 0, (3.3)

and the assertion follows as in Section 2.

Remark 3.1.
Note, concerning the lower bound for U(x) in the considered case r = 2, that we can show only the trivial bound

U(x) ≥ x^{−2} [log x]^γ L(log x) = T(x), x > e; (3.4)

we do not know whether the logarithmic gap [log x]^∆, ∆ > 0, between the upper and lower bounds for U(x) can be closed.

4. Moderate tails: the case r > 2.

We concentrate our attention in this section on the case when in the equality (1.1) r > 2, and suppose E ξ = 0. Recall that in this case the norming sequence b(n) is the ordinary one: b(n) = √n.

In order to formulate and prove the main result in this case, we recall here, for the reader's convenience, some facts about the so-called Grand Lebesgue Spaces [13], [14], [20], [29] etc., or equivalently "moment" spaces of random variables defined on a fixed probability space (Ω, A, P); a more detailed description may be found in [25], [29], [33], [34].

Let us consider the following norm (the so-called "moment norm") on the set of r.v. defined on our probability space: the space G(ν) = G(ν; r) consists, by definition, of all r.v. with finite norm

||ξ|| G(ν) := sup_{p ∈ (2,r)} [ |ξ|_p / ν(p) ], |ξ|_p := [E|ξ|^p]^{1/p}. (4.1)

Here r = const > 2, and ν(·) is some continuous positive function on the semi-open interval [2, r) such that

inf_{p ∈ (2,r)} ν(p) > 0; ν(p) = ∞, p > r.

We will denote supp(ν) := [2, r) = {p : ν(p) < ∞}.

The case r = +∞ was investigated in [25], [33], [34]; therefore, we suppose further 2 < r < ∞.

Let ξ be a r.v. such that p > r ⇒ |ξ|_p = ∞. The natural function ν_ξ(p) may be defined as follows:

ν_ξ(p) := |ξ|_p = [E|ξ|^p]^{1/p}. (4.2)

Evidently, ||ξ|| G(ν_ξ) = 1.

The natural function for a family {ξ(k)}, k = 1, 2, ..., of r.v. may be defined as follows:

ν_{ξ}(p) := sup_k [E|ξ(k)|^p]^{1/p}, (4.2a)

if it exists and is finite. The complete description of the possible natural functions may be found in [34] and in [33], chapter 1, section 3.

Example 4.1.
Suppose the r.v. ξ satisfies the condition (1.1) with r > 2 and γ > −1. Then (see [29], [34]) for the values p ∈ [2, r), p → r − 0,

E|ξ|^p ∼ C(r, γ, L) (r − p)^{−γ−1} L(1/(r − p)).

Note that the inequality ν_ξ(r − 0) < ∞ is equivalent to the moment restriction |ξ|_r < ∞.

We recall now the relations between the moments of a r.v. ξ, ξ ∈ G(ν; r), and its tail behavior. Namely, for p < r

|ξ|_p = [ p ∫_0^∞ u^{p−1} T_{|ξ|}(u) du ]^{1/p},

therefore

||ξ|| G(ν; r) = sup_{p ∈ (2,r)} [ p ∫_0^∞ u^{p−1} T_{|ξ|}(u) du ]^{1/p} / ν(p).
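For a concrete tail the natural function ν_ξ(p) and its blow-up as p → r − 0 can be computed directly (an added illustration; the Pareto tail T(u) = min(1, u^{−r}) and the value r = 3 are assumptions made here for definiteness). The moment representation above gives E|ξ|^p = 1 + p/(r − p) = r/(r − p) in closed form, which a crude numerical quadrature confirms:

```python
import math

r = 3.0  # tail index; T(u) = min(1, u^{-r})

def moment_closed(p):
    # E|xi|^p = r/(r - p) for p < r
    return r / (r - p)

def moment_quadrature(p, lo=1e-8, hi=1e6, steps=500_000):
    # p * int u^{p-1} T(u) du, midpoint rule on a logarithmic grid,
    # using du = u d(log u)
    a, b = math.log(lo), math.log(hi)
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        u = math.exp(a + (i + 0.5) * h)
        total += p * u ** (p - 1.0) * min(1.0, u ** (-r)) * u * h
    return total

def nu(p):
    # natural function nu_xi(p) = |xi|_p; blows up as p -> r-
    return moment_closed(p) ** (1.0 / p)

print(moment_quadrature(2.5), moment_closed(2.5))
print([round(nu(p), 3) for p in (2.0, 2.5, 2.9, 2.99)])
```

The printed values of ν(p) grow without bound as p approaches r, exactly the (r − p)^{−γ−1} blow-up described in Example 4.1 (here with γ = 0 and constant L).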
Example 4.3.
Let ζ be a discrete r.v. with distribution

P(ζ = exp(e^k)) = C exp(βrk − re^k), k = 1, 2, ..., β = const > 0,

where obviously

1/C = Σ_{k=1}^∞ exp(βrk − re^k).

We conclude after some calculations that |ζ|_p ≍ (r − p)^{−β}, but for the sequence x(k) = exp(exp(k))

T_ζ(x(k)) ≥ c [log x(k)]^{βr} (x(k))^{−r}.

Let us introduce the so-called Rosenthal constants (more exactly, Rosenthal functions) [41] R(p) as follows:

R(p) = sup_n sup_{ζ(k): Eζ(k)=0} [ |Σ_{k=1}^n ζ(k)|_p / max( |Σ_{k=1}^n ζ(k)|_2, [Σ_{k=1}^n |ζ(k)|_p^p]^{1/p} ) ], (4.3)

where the interior sup is taken over all sequences of independent centered r.v. {ζ(k)} with |ζ(k)|_p < ∞.

These constants were intensively investigated in many publications, see, e.g., [41], [9], [17], [21], [22], [28], [35], [42], [46] etc. It is known, for instance, that there exists an absolute constant C_R, which is exactly calculated in [35], such that

R(p) ≤ C_R · p / (e · log p), p ∈ [2, ∞). (4.4)

For i.i.d. symmetrically distributed summands {ζ(k)} the corresponding Rosenthal constant was also computed in [35].

It follows from inequality (4.4) that if the r.v. {η(k)}, k = 1, 2, ..., n are centered, i.i.d., and |η(1)|_p < ∞, p ≥ 2, then

sup_n | n^{−1/2} Σ_{k=1}^n η(k) |_p ≤ R(p) |η(1)|_p. (4.5)

Suppose now that the centered r.v. ξ belongs to some space G(ν; r), r ∈ (2, ∞); this condition is satisfied if ξ satisfies the condition (1.1) for some r ∈ (2, ∞) and E ξ = 0. For instance, the function ν = ν(p) may be the natural function for the r.v. ξ: ν = ν_ξ(p).

Theorem 4.1.
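The uniform-in-n moment bound above (Rosenthal's inequality for normalized i.i.d. sums) can be checked by simulation (an added illustration; the Pareto law with r = 3.5, the moment order p = 3, the repetition count, and the loose constant in the check are all ad hoc choices, and the p-norms are themselves only Monte-Carlo estimates):

```python
import random

random.seed(3)
r, p = 3.5, 3.0   # tail index r and moment order p < r
reps = 2_000

def eta():
    # symmetric Pareto with P(|eta| > t) = t^{-r}, t >= 1; E eta = 0 by symmetry
    sign = 1.0 if random.random() < 0.5 else -1.0
    return sign * random.random() ** (-1.0 / r)

def p_norm_of_normalized_sum(n):
    # Monte-Carlo estimate of | n^{-1/2} sum_k eta(k) |_p
    acc = 0.0
    for _ in range(reps):
        s = sum(eta() for _ in range(n)) / n ** 0.5
        acc += abs(s) ** p
    return (acc / reps) ** (1.0 / p)

norm_eta = p_norm_of_normalized_sum(1)   # |eta(1)|_p itself
norms = {n: p_norm_of_normalized_sum(n) for n in (10, 100, 500)}
print(norm_eta, norms)
```

The estimated p-norms of the normalized sums stay bounded as n grows, rather than drifting upward, which is the content of the uniform bound.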
We have the following non-improvable inequality in terms of G(ν; r) norms:

sup_n || n^{−1/2} Σ_{k=1}^n ξ(k) || G(ν; r) ≤ C(ν; r) ||ξ|| G(ν; r); (4.6a)

in particular,

||ξ|| G(ν_ξ) ≤ sup_n || n^{−1/2} Σ_{k=1}^n ξ(k) || G(ν_ξ) ≤ C(ν_ξ) ||ξ|| G(ν_ξ). (4.6b)

Proof. Let ξ ∈ G(ν; r) and E ξ = 0. We can and will assume without loss of generality that ||ξ|| G(ν; r) = 1. It follows from this equality that |ξ|_p ≤ ν(p), p ∈ [2, r).

We conclude by means of inequality (4.5), by virtue of the inequality p < r:

sup_n | n^{−1/2} Σ_{k=1}^n ξ(k) |_p ≤ R(p) |ξ|_p ≤ R(p) ν(p) ≤ C(r) ν(p), (4.7)

or equivalently

sup_n sup_{p ∈ [2,r)} [ | n^{−1/2} Σ_{k=1}^n ξ(k) |_p / ν(p) ] ≤ C(r),

which is equivalent to the assertion (4.6a) of our theorem. The proposition (4.6b) follows from (4.6a) after choosing ν(p) = ν_ξ(p).

As a consequence:
Theorem 4.2. Let T^{(ν)}(x) := inf_{p ∈ (2,r)} [ν(p)/x]^p denote the tail function generated by the norm of the space G(ν; r); then

U(x) ≤ C(r, Law(ξ)) T^{(ν)}(x). (4.8)

Example 4.4.
Let E ξ = 0 and suppose the r.v. ξ satisfies the condition (1.1) with r > 2. Then

x^{−r} log^γ(x) L(log x) ≤ U(x) ≤ C(r, γ, L) x^{−r} log^{γ+1}(x) L(log x), x > e. (4.9)

Remark 4.1.
We cannot prove that the power γ + 1 of the logarithmic factor on the right-hand side of the last inequality is non-improvable, but we can offer the following example.
Example 4.5.
Let ζ be the r.v. from Example 4.3, and put ζ° = ζ − E ζ. Define a r.v. θ as the r.v. with the accompanying infinitely divisible distribution to the distribution of ζ°. This means that

θ = Σ_{m=1}^τ ζ°(m),

where ζ°(m) are independent copies of ζ°, the r.v. τ has a standard Poisson distribution with unit parameter: P(τ = j) = 1/(e · j!), j = 0, 1, 2, ..., and Σ_{m=1}^0 := 0.

Obviously, φ_θ(t) = exp[φ_{ζ°}(t) − 1]. It follows from the last formula that the asymptotic behavior of the function φ_θ(t) as t → 0 and of the tail T_θ(x) as x → ∞ is alike the asymptotic behavior of the function φ_{ζ°}(t), t → 0, and of the tail T_{ζ°}(x), x → ∞. For instance, Yu. V. Prohorov in [39] proved the following inequality:

P( Σ_{k=1}^n θ(k) > x ) ≤ C P( Σ_{k=1}^n ζ°(k) > x/C ). (4.10)

Introduce

Θ_n = n^{−1/2} Σ_{k=1}^n θ(k),

where {θ(k)} are independent copies of the r.v. θ. We conclude: |θ|_p ≍ (r − p)^{−β}, p ∈ [2, r), but there exists a constant q ∈ (0, 1) such that for the sequence x(k) = exp(exp(k)) and for all the values n = 1, 2, ...

T_{Θ_n}(x(k)) ≥ C q^n [log x(k)]^{βr} (x(k))^{−r}.
It is not necessary to suppose in this section that the r.v. ξ(i) are identically distributed; it is sufficient to assume that the r.v. {ξ(i)} are independent, centered, and such that

sup_i T_{ξ(i)}(x) ≤ x^{−r} log^γ(x) L(log x), r = const > 2, x > e. (4.11)

Remark 4.3.
The application of the interpolation technique, see for instance [23], gives a slightly weaker result than Theorems 4.1-4.2. For example, we conclude under the conditions of Example 4.4 that

U(x) ≤ C(r, γ, L) x^{−r} log^{γ+1}(x) log log x · L(log x), x > e^e. (4.12)

Indeed, let ξ(k), k = 1, 2, ..., n be i.i.d. and satisfy the equality (1.1). Let also δ = const > 0. Consider the following Orlicz function:

N(u) = N_{r,γ,L}(u) = δ |u|^r log^{−γ−1−δ}(|u|) / L(log |u|), |u| > e, (4.13)

and N(u) = C u², |u| ≤ e, so that the function N = N(u) is continuous.

We can realize the sequence ξ(k), k = 1, 2, ... on the direct (Cartesian) product of probability spaces (Ω_k, A_k, P_k):

(Ω, A, P) = ⊗_{k=1}^∞ (Ω_k, A_k, P_k),

so that ξ(k) is a symmetrically distributed measurable function ξ(k): Ω_k → R.

As long as r > 2, there are two numbers p₁, p₂ for which 2 < p₁ < r < p₂ < ∞, for example p₁ = 0.5(2 + r), p₂ = r + 1. Consider the operator

T(ξ(1), ξ(2), ..., ξ(n)) = n^{−1/2} Σ_{k=1}^n ξ(k). (4.14)

Since T is a bounded operator from each of the spaces L_{p_j}, j = 1, 2, into itself, T is a bounded, uniformly in n, operator from the Orlicz space M(N_{r,γ,L}, Ω, P) =: M(r, γ, L) into the same space:

sup_n ||S(n)|| M(r, γ, L) ≤ C(r, γ, L) sup_k ||ξ(k)|| M(r, γ, L). (4.15)

The right-hand side of (4.15) is bounded uniformly in k and δ ∈ (0, 1), and therefore

sup_{δ ∈ (0,1)} sup_n ||S(n)|| M(r, γ, L) ≤ C(r, γ, L) < ∞. (4.16)

Since the function N(u) = N_{r,γ,L}(u) satisfies the ∆₂ condition, we get (see [26], chapter 2, section 9):

Proposition 4.1.

sup_{δ ∈ (0,1)} sup_n E N_{r,γ,L}(S(n)) ≤ C(r, γ, L) sup_{δ ∈ (0,1)} sup_k E N_{r,γ,L}(ξ(k)) < ∞. (4.17)

It follows that for each δ ∈ (0, 1)

sup_n T_{S(n)}(x) ≤ C(r, γ, L) x^{−r} δ^{−1} log^{γ+1+δ}(x) L(log x).

We obtain, choosing δ = (log log x)^{−1}:

U(x) ≤ C(r, γ, L) x^{−r} log^{γ+1}(x) log log x · L(log x), x > e^e,

Q.E.D.
5. Martingale case.

In this fifth section we generalize the preceding results to the martingale case, i.e. when the summands {ξ(i)} are centered (E ξ(i) = 0) martingale differences relative to some filtration {F(i)}: F(0) = {∅, Ω}, F(i) ⊂ F(i+1) ⊂ A. This means, by definition, that

E[ξ(k)/F(k − 1)] = 0, k = 1, 2, ....

We will use in the sequel the following generalization of Rosenthal's inequality for centered martingales, see [36]:

| n^{−1/2} Σ_{k=1}^n ξ(k) |_p ≤ p √2 sup_i |ξ(i)|_p, p ≥ 2. (5.1)

Theorem 5.1.
We have, as in the fourth section, in the case 2 < r < ∞ the following non-improvable inequality in terms of G(ν; r) norms:

sup_n || n^{−1/2} Σ_{k=1}^n ξ(k) || G(ν; r) ≤ C(ν; r) sup_i ||ξ(i)|| G(ν; r); (5.2)

in particular,

sup_i ||ξ(i)|| G(ν_{ξ}) ≤ sup_n || n^{−1/2} Σ_{k=1}^n ξ(k) || G(ν_{ξ}) ≤ C(ν_{ξ}) sup_i ||ξ(i)|| G(ν_{ξ}), (5.3)

where ν_{ξ}(p) := sup_i |ξ(i)|_p.

Proof.
Let sup_i ||ξ(i)|| G(ν; r) < ∞. We can and will assume without loss of generality that sup_i ||ξ(i)|| G(ν; r) = 1. It follows from this equality that sup_i |ξ(i)|_p ≤ ν(p), p ∈ [2, r).

We conclude by means of inequality (5.1), by virtue of the inequality p < r:

sup_n | n^{−1/2} Σ_{k=1}^n ξ(k) |_p ≤ p √2 sup_i |ξ(i)|_p ≤ p √2 ν(p) ≤ C(r) ν(p), (5.4)

or equivalently

sup_n sup_{p ∈ [2,r)} [ | n^{−1/2} Σ_{k=1}^n ξ(k) |_p / ν(p) ] ≤ C(r),

which is equivalent to the first assertion of our theorem. The second proposition follows from (5.4) after choosing ν(p) = sup_i ν_{ξ(i)}(p).

As a consequence:
Theorem 5.2.

U(x) ≤ C(r, Law({ξ(i)})) T^{(ν)}(x). (5.5)

Example 5.1.
Let E ξ(i) = 0 and suppose the r.v. ξ(i) satisfy the condition (1.1) uniformly in i. Then for the values x > e

x^{−r} log^γ(x) L(log x) ≤ U(x) ≤ C(r, γ, L) x^{−r} log^{γ+1}(x) L(log x). (5.6)

6. Superheavy tails.
We will investigate in this section the case when the r.v. ξ has a symmetric distribution such that

T_ξ(x) = K log^{−κ}(x) L(log x), x > e, (6.1)

where K, κ = const > 0, L(z) is a positive continuous function, slowly varying as z → ∞, and ξ(k), k = 1, 2, ... are independent copies of ξ.

We do not know the exact norming sequence for the sums Σ_{k=1}^n ξ(k); we intend to present here a somewhat weaker result than in the second section. Let us define, for any fixed increasing non-random sequence w(n) tending to +∞, with w(1) = 1, the following norming sequence B_n:

B_n = exp( (Kn)^{1/κ} L^{1/κ}((Kn)^{1/κ}) w(n) ), (6.2)

and put

S(n) = Σ_{k=1}^n ξ(k) / B_n, U(x) = sup_n T_{S(n)}(x), x > e.

Theorem 6.1.
For some positive finite constant C = C(κ, L, {w(n)}),

T_ξ(x) ≤ U(x) ≤ T_ξ(x/C). (6.3a)

Further, the sequence S(n) tends in probability to zero as n → ∞:

Σ_{k=1}^n ξ(k) / B_n →^P 0, (6.3b)

a "doubly weak" Law of Large Numbers.

Proof.
We conclude, as before, that as t → 0+

ψ_ξ(t) = t ∫_0^∞ sin(tx) T_ξ(x) dx ∼ tK ∫_e^∞ sin(tx) log^{−κ}(x) L(log x) dx. (6.4)

The asymptotic of the last integral as t → 0+ (a Fourier transform) is calculated in the classical book of A. Zygmund [48], p. 186-188:

ψ_ξ(t) ∼ K |log t|^{−κ} L(|log t|). (6.5)

Passing to S(n), we conclude by means of the estimate (2.2) that

sup_n |ψ_{S(n)}(t)| ≤ C K |log t|^{−κ} L(|log t|), t ∈ (0, 1/e), (6.6)

and for each fixed t

lim_{n→∞} |ψ_{S(n)}(t)| = 0, (6.7)

which yields both assertions of the theorem.
7. Limit theorems in the space of continuous functions.

7.1. Continuity.
Let η(v), v ∈ V be a separable random field (r.f.) (process) defined on the probability space Ω over some set V. We suppose that for an arbitrary point v ∈ V the r.v. η(v) satisfies the condition (1.1) up to a continuous, bilaterally bounded multiplicative factor K(v):

T_{η(v)}(x) = K(v) x^{−r} log^γ(x) L(log x), r = const ∈ (1, ∞), x > e, (7.1)

where γ > −1 and C₁ ≤ K(v) ≤ C₂, C₁, C₂ = const, 0 < C₁ ≤ C₂ < ∞. Without loss of generality we can and do assume that for some fixed non-random value v₀ ∈ V, K(v₀) = 1.

Let us introduce the following function:

θ(p) = |η(v₀)|_p = ν_{η(v₀)}(p), 2 ≤ p < r;

then θ(p) ∼ C (r − p)^{−(γ+1)/p} L^{1/p}(1/(r − p)), p → r − 0.

From the equality (7.1) it follows that

sup_{v ∈ V} ||η(v)|| Gθ < ∞. (7.2)

The natural distance d(v₁, v₂) (more exactly, semi-distance: from the equality d(v₁, v₂) = 0 it does not follow that v₁ = v₂) may be defined by the formula

d(v₁, v₂) = ||η(v₁) − η(v₂)|| Gθ. (7.3)

The boundedness of d(v₁, v₂) follows immediately from (7.2).

Remark 7.1.
The continuity of the coefficient K = K(v) is understood relative to the distance d = d(v₁, v₂).

We denote as usual by H(V, d, ε) the metric entropy of the set V with respect to the distance d(·,·); recall that H(V, d, ε) is the natural logarithm of the minimal number of d-closed balls of radius ε, ε > 0, covering the set V. By definition, N(V, d, ε) = exp H(V, d, ε).

The classical theorem of Hausdorff tells us that N(V, d, ε) < ∞ for all ε > 0 iff the set V is precompact relative to the distance d.

Theorem 7.1.
If the following integral converges:

∫_0^1 N^{1/r}(V, d, ε) H^{γ/r}(V, d, ε) L^{1/r}(H(V, d, ε)) dε < ∞, (7.4)

then almost all trajectories of the r.f. η(v) are d(·,·)-continuous:

P(η(·) ∈ C(V, d)) = 1, (7.5)

and moreover

P(sup_{v ∈ V} |η(v)| ≥ x) ≤ C x^{−r} log^γ(x) L(log x), x ≥ e. (7.6)

Proof.
From the definition of the norm ||·|| Gθ follows the inequality

P( |η(v₁) − η(v₂)| / d(v₁, v₂) ≥ x ) ≤ C x^{−r} log^γ(x) L(log x), x > e. (7.7)

It remains to use the result of chapter 4, section 4.3 of the monograph [33].
Remark 7.2.
The case when

sup_{v ∈ V} |η(v)|_r < ∞, r ≥ 1, ρ(v₁, v₂) = |η(v₁) − η(v₂)|_r,

was considered by G. Pisier [38]. Indeed, if

∫_0^1 N^{1/r}(V, ρ, ε) dε < ∞,

then P(η(·) ∈ C(V, ρ)) = 1 and | sup_{v ∈ V} |η(v)| |_r < ∞.

Remark 7.3.
Another approach, via the so-called majorizing measures, may be found in [12], [44], [45].

7.2. Stable and Central Limit Theorems.
We assume in addition that the random field η(v) is symmetrically distributed. Let η_k(v) be independent copies of η(v). Define as before the norming sequence b(n) as a solution of the equation

n^{−1} = b^{−r}(n) |log b(n)|^γ L(log b(n))

in the case r ≤ 2, and b(n) = √n when r > 2. Put

β_n(v) = b(n)^{−1} Σ_{k=1}^n η_k(v). (7.8)

It is known that the finite-dimensional distributions of β_n(v) converge as n → ∞ to the finite-dimensional distributions of a random field, which we denote by β(v). The last r.f. has a stable distribution when r < 2, and a Gaussian distribution when r ≥ 2, in the latter case with the same covariance function as η(v): E β(v₁) β(v₂) = E η(v₁) η(v₂). More information about stable distributions in Banach spaces may be found in the monograph of N. N. Vakhania, V. I. Tarieladze and S. A. Chobanyan [47], chapter 5.

We will say, as is customary, that the random fields η_k(v), k = 1, 2, ... satisfy the Limit Theorem in the space C(V, d) if the r.f. β_n(·) and the r.f. β(·) are d-continuous with probability one and the distributions in the space C(V, d) of the r.f. β_n(·) converge weakly as n → ∞ to the distribution of β(·). In the case r < 2 we speak of the Stable Limit Theorem (SLT), and in the case r ≥ 2 of the Central Limit Theorem (CLT).

Theorem 7.2.
If the following integral is finite:

∫_0^1 N^{1/r}(V, d, ε) H^{(γ+1)/r}(V, d, ε) L^{1/r}(H(V, d, ε)) dε < ∞, (7.9)

then the random fields η_k(v), k = 1, 2, ... satisfy the Limit Theorem in the space C(V, d). Moreover,

sup_n P(sup_{v ∈ V} |β_n(v)| ≥ x) ≤ C x^{−r} log^{γ+1}(x) L(log x), x ≥ e. (7.10)

Proof.
As long as the integral condition of Theorem 7.2 is stronger than (7.4), we conclude that

P(β_n(·) ∈ C(V, d)) = P(β(·) ∈ C(V, d)) = 1.

It remains to prove the tightness of the measures μ_n generated by the sequence {β_n(·)}:

μ_n(A) = P(β_n(·) ∈ A),

where A is a Borel set in the space C(V, d). We obtain, using Theorem 2.1, that

sup_n P( |β_n(v₁) − β_n(v₂)| / d(v₁, v₂) ≥ x ) ≤ C x^{−r} log^{γ+1}(x) L(log x), x > e. (7.11)

It remains to use the result of chapter 4, section 4.3 of the monograph [33].
7.3. Applications.

A.
We return now to the problem of computation of a (multiple) parametric integral of the form

I(v) = ∫_D f(v, y) ν(dy), v ∈ V, (7.12)

where ν(·) is again a probability measure on the set D: ν(D) = 1. Let τ(k), k = 1, 2, ..., n be, as before, independent r.v. with distribution ν: P(τ(k) ∈ A) = ν(A). The Monte-Carlo consistent estimate I_n(v) of the integral I(v) is

I_n(v) = n^{−1} Σ_{k=1}^n f(v, τ(k)). (7.13)

Suppose for some r ∈ (1, 2) and for all values v ∈ V that E|f(v, τ(1))|^r < ∞, or more generally that the r.v. f(v, τ(k)) − I(v) satisfy the condition (1.1) uniformly in v. In order to construct a non-asymptotical confidence interval for I of reliability 1 − δ, δ = 0.05, 0.01 etc., uniform over v ∈ V, we consider the probability

U_n(x) = P( sup_{v ∈ V} b(n)^{−1} |Σ_{k=1}^n (f(v, τ(k)) − I(v))| > x ). (7.14)

If the centered r.f. {f(v, τ(k)) − I(v)} satisfy the Limit Theorem, then for all x > 0

lim_{n→∞} U_n(x) = U(x), (7.15)

where

U(x) = P( sup_{v ∈ V} |ζ(v)| > x ) (7.16)

and ζ(v) = ζ(ω, v) is a stable or Gaussian random field.

The asymptotic and non-asymptotic behavior of U(x) as x → ∞, in both the cases SLT and CLT, is known; see, e.g., [47], chapter 5; [37]; [33], chapter 3. Therefore, we conclude asymptotically, as n → ∞, denoting by X(δ) the solution of the equation U(X(δ)) = δ, that with probability at least 1 − δ, in the uniform norm,

sup_{v ∈ V} |I_n(v) − I(v)| ≤ X(δ) b(n)/n. (7.17)

Recall that lim_{n→∞} b(n)/n = 0.

Notice that a non-asymptotical approach may also be used, in which the probability U_n(x) allows the following evaluation:

U_n(x) ≤ sup_n U_n(x) ≤ C x^{−r} log^{γ+1}(x) L(log x). (7.18)

B. An analogous application appears in statistics. Indeed, let us consider the following classical scheme of data processing:

η_k(v) = θ(v) + ξ_k(v), k = 1, 2, ..., n, (7.19)

where θ(v), v ∈ V is an unknown deterministic function and {ξ_k(v)} are i.i.d. centered r.f. satisfying the condition (1.1) with r > 1. The classical estimate of θ(v) has the form

θ̂_n(v) = n^{−1} Σ_{k=1}^n η_k(v). (7.20)

We conclude as before that, with probability at least 1 − δ, under the conditions and notations formulated above,

sup_{v ∈ V} |θ̂_n(v) − θ(v)| ≤ X(δ) b(n)/n. (7.21)

8. Concluding remarks.

A. Non-symmetrical case.
A. Non-symmetrical case.

The results of the second and the third sections remain true without the restriction that the independent r.v. {ξ(i)} have symmetrical distributions. Indeed, it is sufficient to assume, in addition to the equality (1.1), in the case r ∈ (1, 2] that

E ξ(i) = 0,

and in the case r = 1 that

\sup_{a > 0} E | ξ(i) I(|ξ(i)| ≤ a) | < ∞;

see, e.g., [1], [5]. The proof may also be obtained from symmetrization arguments; see also [28].
B. Not identically distributed r.v.

It is also not necessary to suppose in the independent case that the r.v. ξ(i) are identically distributed; it is sufficient to assume in addition that the r.v. {ξ(i)} are independent, centered and such that for some positive finite constants C_1, C_2

C_1 x^{-r} \log^γ(x) L(\log x) ≤ T_{ξ(i)}(x) ≤ C_2 x^{-r} \log^γ(x) L(\log x), x > e.
C. About calculation of the norming sequence.

We investigate here the equation n ψ(1/b(n)) = 1 for the norming sequence {b(n)} in the case r < 2. Under the condition (1.1) we have

n^{-1} = b^{-r}(n) |\log b(n)|^γ L(\log b(n)). (8.1)

It follows that as n → ∞

b(n) ∼ n^{1/r} \log^{γ/r}(n) L^{1/r}(\log n). (8.2)

More precisely, this asymptotical expression for b(n) is true when the following condition holds:

L( X^{1/r} \log^{γ/r}(X) L^{1/r}(\log X) ) ≍ L(X), X → ∞. (8.3)

The last condition is satisfied, for instance, for L(X) = C (\log X)^Δ, Δ = const.
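The asymptotic for b(n) above is easy to check numerically. The following sketch (our own illustration; the values r = 3/2, γ = 1 and L ≡ 1 are hypothetical) solves b(n)^r = n \log^γ b(n) by fixed-point iteration and compares the root with n^{1/r} (\log n / r)^{γ/r}; the constant r^{-γ/r} is absorbed into the slowly varying factor, and the ratio approaches 1 only at a logarithmic rate.

```python
import math

def norming_b(n, r, gamma, tol=1e-12, max_iter=200):
    """Solve b**r = n * (log b)**gamma, i.e. n * psi(1/b) = 1 with L == 1,
    by the fixed-point iteration b <- (n * (log b)**gamma) ** (1/r)."""
    b = n ** (1.0 / r)                 # zeroth-order guess b = n^{1/r}
    for _ in range(max_iter):
        b_new = (n * math.log(b) ** gamma) ** (1.0 / r)
        if abs(b_new - b) <= tol * b:  # relative convergence criterion
            return b_new
        b = b_new
    return b

r, gamma = 1.5, 1.0
for n in (10**6, 10**9, 10**12):
    b = norming_b(n, r, gamma)
    asymptotic = n ** (1.0 / r) * (math.log(n) / r) ** (gamma / r)
    print(n, b, b / asymptotic)
```

The printed ratio drifts toward 1 extremely slowly (through a log log n correction), which is exactly the slowly varying discrepancy allowed by condition (8.3)-type equivalences.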
D. Tail comparison through moment inequalities.

If for two r.v. ξ and η

T_ξ(x) ≤ T_η(x), x > 0,

then evidently

|ξ|_p ≤ |η|_p, p ≥ 1. (8.4)

A partial converse holds under a suitable estimate of the tail T_ξ(x). In detail, assume that for all values p from the non-trivial segment p ∈ [1, r), r ∈ (1, ∞),

|ξ|_p ≤ |η|_p. (8.5)

The Markov-Tchebychev inequality gives T_ξ(x) ≤ x^{-p} |η|_p^p, x > 0, p ∈ supp ν_η; therefore

T_ξ(x) ≤ \inf_{p ∈ supp ν_η} [ x^{-p} |η|_p^p ], x > 0. (8.6)

If, in particular, supp ν_η = [1, r), or equally ν_η(r + 0) = ∞ for some r > 1, then

T_ξ(x) ≤ \inf_{p ∈ [1, r)} [ x^{-p} |η|_p^p ], x > 0. (8.7)

Choosing here p = r − C/\log x, C = const > 0, we obtain for all sufficiently large x

T_ξ(x) ≤ e^C x^{-r} E |η|^{r − C/\log x}. (8.8)
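The effect of the choice p = r − C/\log x above can be seen concretely. Suppose (our own illustration, not from the text) that η is Pareto with T_η(x) = x^{-r} for x ≥ 1, so that E η^p = r/(r − p) for p < r. Then the infimum of the Markov bound x^{-p} E η^p over p ∈ [1, r) is attained exactly at p = r − 1/\log x and equals e · r · \log(x) · x^{-r}: the moments recover the tail x^{-r} up to a logarithmic factor, in agreement with the bound above (with C = 1).

```python
import math

r = 1.5                        # tail exponent: T_eta(x) = x**(-r), x >= 1

def moment(p):
    # E eta**p = r / (r - p), valid for p < r (Pareto density r * x**(-r-1) on [1, inf))
    return r / (r - p)

for x in (1e3, 1e6):
    # numerical infimum of the Markov bound x**(-p) * E eta**p over a grid in [1, r)
    grid = (1.0 + i * (r - 1.0) / 20000 for i in range(20000))
    best = min(x ** (-p) * moment(p) for p in grid)
    closed = math.e * r * math.log(x) * x ** (-r)   # value at p = r - 1/log x
    print(x, best, closed, best / closed)
```

The grid minimum and the closed form agree to high accuracy, confirming that no choice of a single moment p < r can remove the residual \log x factor.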
E. Non-uniform norming sequence.

M. Braverman in the article [5] considered a more general, non-uniform norming vector a = {a(1), a(2), ..., a(n)}. In detail, let r ∈ (0, 2) and let {ξ(k)}, k = 1, 2, ..., n again be independent copies of the symmetrical r.v. ξ satisfying the condition (1.1). Put

U^{(a)}(x) = P( | \sum_{k=1}^n a(k) ξ(k) | > x ). (8.9)

We introduce the norm of the vector a = {a(1), a(2), ..., a(n)} by means of the Orlicz function ψ(t) = ψ_ξ(t):

||a||_ψ = \inf { t > 0 : \sum_{k=1}^n ψ(|a(k)|/t) ≤ 1 }, (8.10)

which plays here the role of the L_p, p < r, boundary. Denote

U(x) = \sup_{a: ||a||_ψ ≤ 1} U^{(a)}(x). (8.11)

Then

U(x) ≤ C x \int_{-1/x}^{1/x} ψ(t) dt, x > e, C = const > 0. (8.12)

References

[1] B. von Bahr and C.-G. Esseen. Inequalities for the rth absolute moment of a sum of random variables, 1 ≤ r ≤ 2. Ann. Math. Statist., (1965), 299-303.
[2] P. Billingsley.
Convergence of Probability Measures. Oxford, OSU, (1973).
[3] G.K. Binmore and H.H. Stratton. A note on characteristic functions. Ann. Math. Stat., (1969), 301-307.
[4] M.Sh. Braverman. Exponential Orlicz Spaces and independent random Variables. Probability and Mathematical Statistics, (1991), Vol. 12, Issue 2, 245-250.
[5] M.Sh. Braverman. On some Moment Conditions for Sums of independent random Variables. Probability and Mathematical Statistics, (1993), Vol. 14, Issue 1, 45-56.
[6] M.Sh. Braverman. Independent Random Variables in Lorentz Spaces. Bull. London Math. Soc., (1996).
[7] Probability. SIAM, (1993), Philadelphia.
[8] V.V. Buldygin, D.I. Mushtary, E.I. Ostrovsky, M.I. Pushalsky. New Trends in Probability Theory and Statistics. Mokslas, (1992), V. 1, p. 78-92; Amsterdam, Utrecht, New York, Tokyo.
[9] N.L. Carothers and S.J. Dilworth. Inequalities for sums of independent random variables. Proc. of the AMS, (1988), 221-226.
[10] R.M. Dudley. Uniform Central Limit Theorems. Cambridge University Press, (1999), 352-367.
[11] S.A. Egishjanz, E.I. Ostrovsky. Approximation of random fields by generalized linear splines. Matem. Zametki, (1998), B. 63, 690-696.
[12] X. Fernique. Régularité des trajectoires des fonctions aléatoires gaussiennes. École d'Été de Probabilités de Saint-Flour IV-1974, Lecture Notes in Mathematics, (1975), 1-96, Springer Verlag, Berlin.
[13] A. Fiorenza. Duality and reflexivity in grand Lebesgue spaces. Collectanea Mathematica (electronic version), 2, (2000), 131-148.
[14] A. Fiorenza and G.E. Karadzhov. Grand and small Lebesgue spaces and their analogs. Consiglio Nazionale delle Ricerche, Istituto per le Applicazioni del Calcolo Mauro Picone, Sezione di Napoli, Rapporto tecnico n. 272/03, (2005).
[15] A.S. Frolov, N.N. Tchentzov. On the calculation by the Monte-Carlo method of definite integrals depending on parameters. Journal of Computational Mathematics and Mathematical Physics, (1962), V. 2, Issue 4, p. 714-718 (in Russian).
[16] M.L. Grigorjeva, E.I. Ostrovsky. Calculation of Integrals on discontinuous Functions by means of depending trials method. Journal of Computational Mathematics and Mathematical Physics, (1996), V. 36, Issue 12, p. 28-39 (in Russian).
[17] P. Hitczenko and S. Montgomery-Smith. Measuring the magnitude of sums of independent random variables. Ann. Probab., V. 25, No. 3, (1997), 447-466.
[18] C.R. Heathcote and J.W. Pitman. An inequality for characteristic functions. Bull. Austral. Math. Soc., (1972), 1-9.
[19] I.A. Ibragimov and Yu.V. Linnik. Independent and stationary sequences of random variables. Wolters-Noordhoff, (1971), Groningen, Netherlands.
E. Hernandez, G. Weiss. A First Course on Wavelets. CRC Press, (1996), Boca Raton, New York.
[20] T. Iwaniec and C. Sbordone. On the integrability of the Jacobian under minimal hypotheses. Arch. Rat. Mech. Anal., 119, (1992), 129-143.
[21] W.B. Johnson and G. Schechtman. Sums of independent random variables in rearrangement invariant spaces. Ann. Probab., (1989), 789-808.
[22] W.B. Johnson, G. Schechtman, J. Zinn. Best Constants in Moment Inequalities for linear Combinations of independent and exchangeable random variables. Ann. Probab., (1985), 234-253.
[23] A.Yu. Karlovich, L. Maligranda. On the interpolation constant for subadditive operators in Orlicz spaces. Proc. of the AMS, ISSN 1088-6836, (2001), 2727-2739.
[24] T. Kawata. Fourier analysis in probability theory. Academic Press, (1972), New York.
[25] Yu.V. Kozatchenko and E.I. Ostrovsky. Banach spaces of random variables of subgaussian type. Theory Probab. Math. Stat., Kiev, (1985), 42-56 (in Russian).
[26] M.A. Krasnoselsky, Ya.B. Rutitsky. Convex functions and Orlicz Spaces. P. Noordhoff Ltd, (1961), Groningen, The Netherlands.
[27] S. Kwapien. Sums of independent Banach space valued random variables. Séminaire Maurey-Schwartz, (1972-1973), Exp. 6.
[28] R. Latala. Estimation of Moments of Sums of independent real random Variables. Ann. Probab., (1997), V. 25, B. 3, 1502-1513.
[29] E. Liflyand, E. Ostrovsky, L. Sirota. Structural Properties of Bilateral Grand Lebesgue Spaces. Turk. J. Math., (2010), 207-219.
[30] A. Litvak, C. Schütt and Y. Gordon. Orlicz norm of sequences of random variables. Ann. Probab., (2002), 1833-1853.
[31] E. Lukacs. Characteristic functions. Second edition, Griffin, (1970), London.
[32] D.I. Mushtary. Probability and Topology in Banach Spaces. Kazan, KSU, (1979) (in Russian).
[33] E.I. Ostrovsky. Exponential Estimations for Random Fields. Moscow-Obninsk, OINPE, (1999) (in Russian).
[34] E. Ostrovsky, L. Sirota. Moment Banach Spaces: Theory and Applications. HAIT Journal of Science and Engineering, Holon, Israel, V. 4, Issue 1-2, (2007), 233-262.
[35] E. Ostrovsky, L. Sirota. Schlömilch and Bell series for Bessel's functions, with probabilistic applications. arXiv:0804.0089 [math.CV], 1 Apr 2008.
[36] E. Ostrovsky. Bide-side exponential and moment inequalities for tails of distribution of polynomial martingales. arXiv:math.PR/0406532, v1, Jun 2004.
[37] V.I. Piterbarg. Asymptotic Methods in the Theory of Gaussian Processes and Fields. AMS, Providence, Rhode Island, V. 148, (1991) (translation from Russian).
[38] G. Pisier. Conditions d'entropie assurant la continuité de certains processus et applications à l'analyse harmonique. Séminaire d'analyse fonctionnelle, (1980), Exp. 13, p. 23-29.
[39] Yu.V. Prohorov. Strong stability of sums and infinitely divisible distributions. Theory Probab. Appl., (1958), 157-214.
[40] Yu.V. Prokhorov. Convergence of Random Processes and Limit Theorems of Probability Theory. Probab. Theory Appl., (1956), V. 1, 177-238.
[41] H.P. Rosenthal. On the subspaces of L^p (p > 2) spanned by sequences of independent random variables. Israel J. Math., (1970), V. 3, 273-278.
[42] R. Ibragimov, Sh. Sharakhmetov. The Exact Constant in the Rosenthal Inequality for Sums of independent real Random Variables with mean zero. Theory Probab. Appl., B. 1, (2001), 127-132.
[43] E. Seneta. Regularly Varying Functions. Springer Verlag; Russian edition: Moscow, Science, 1985.
[44] M. Talagrand. Majorizing measures: The generic chaining. Ann. Probab., (1996).
The extremal Problems in Probability Theory. Probab. Theory Appl., (1984), V. 28, B. 2, 421-422.
[47] N.N. Vakhania, V.I. Tarieladze and S.A. Chobanyan. Probability distributions on Banach spaces. Reidel, Dordrecht, (1987).
[48] A. Zygmund.