Unbiased Monte Carlo estimation for solving linear integral equations, with error estimate
E. Ostrovsky (corresponding author), L. Sirota

Department of Mathematics and Computer Science, Bar-Ilan University, 84105, Ramat Gan, Israel.

E-mails: [email protected], [email protected]
Abstract.
We offer a new Monte Carlo method for solving linear integral equations of Volterra and Fredholm type which gives an unbiased estimate of the solution, and we consider the problem of building confidence regions. We study especially the case of the so-called equations with a weak singularity of Abelian type in the kernel.
Key words and phrases:
Integral equations, Neumann series, Monte Carlo method, random variables and vectors (r.v.), Poisson, Mittag-Leffler and geometric integer distributions, constrained optimization, random number of elapsed r.v., Kronecker square of a linear operator, Central Limit Theorem (CLT), conditional probability and expectation, tail estimation.
We intend in this article to study a numerical Monte Carlo method for solving linear integral equations, for example of the form

$$ x(t) = f(t) + \lambda \int_0^t K(t,s)\, x(s)\, ds, \eqno(1.0) $$

where $x(t)$ is the unknown function, $K(t,s)$ is the kernel, the function $f(t)$ is the "right-hand side", and $\lambda$ is a positive number. Briefly:

$$ x = f + \lambda\, K[x], \eqno(1.0a) $$

where $K[x] = K[x](t)$ is the linear integral operator of Volterra type:

$$ K[x](t) = \int_0^t K(t,s)\, x(s)\, ds. $$

The method offered here gives, as usual, the optimal rate of convergence and an exponential tail estimate for the confidence probability; but in addition our (random!) estimates of the solution are unbiased.
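For later comparison with the stochastic estimates, equation (1.0) can also be solved by a deterministic successive-approximation scheme $x_n = f + \lambda K[x_{n-1}]$. The following is only an illustrative sketch under assumptions of our own (uniform grid, trapezoidal quadrature, toy kernel); it is not taken from the paper:

```python
import numpy as np

def picard_volterra(f, K, lam, T, m=400, iters=60):
    """Successive approximations x_n = f + lam*K[x_{n-1}] for the
    Volterra equation (1.0), discretized with the trapezoidal rule."""
    t = np.linspace(0.0, T, m + 1)
    fv = f(t)
    Kmat = K(t[:, None], t[None, :])          # Kmat[i, j] = K(t_i, t_j)
    h = t[1] - t[0]
    x = np.zeros(m + 1)
    for _ in range(iters):
        xn = np.empty(m + 1)
        xn[0] = fv[0]                         # the integral over [0, 0] vanishes
        for i in range(1, m + 1):
            w = np.full(i + 1, h)
            w[0] = w[-1] = h / 2.0            # trapezoidal weights on [0, t_i]
            xn[i] = fv[i] + lam * np.sum(w * Kmat[i, : i + 1] * x[: i + 1])
        x = xn
    return t, x

# toy check (our choice): K = 1, f = 1 gives the exact solution x(t) = exp(lam*t)
t, x = picard_volterra(lambda t: np.ones_like(t),
                       lambda a, b: np.ones_like(a * b), 0.5, 1.0)
err = np.max(np.abs(x - np.exp(0.5 * t)))
```

The quadratic cost in the grid size of this scheme is one motivation for the Monte Carlo approach developed below.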
We will consider equation (1.0), as well as all subsequent equations, in the space of continuous functions $C(T)$ defined on the set $t \in [0,T]$, where $T = \mathrm{const} \in (0,\infty)$; we suppose therefore $f(\cdot) \in C(T)$, $K(\cdot,\cdot) \in C(T \times T)$, and denote as usual

$$ \|f\| = \max_{t \in [0,T]} |f(t)|, \qquad \|K\| = \max_{s,t \in [0,T]} |K(t,s)|. \eqno(1.1) $$

Under these conditions the solution $x(t)$ exists, is unique, depends continuously on the entries $f(\cdot)$, $K(\cdot,\cdot)$, and may be computed by means of the standard recursion procedure

$$ x_n = f + \lambda\, K[x_{n-1}], \qquad x_0 = 0, \ x_1 = f; \quad n = 2, 3, \ldots \eqno(1.2) $$

The solution $x(\cdot)$ may be represented by means of the so-called Neumann series

$$ x = f + \sum_{n=1}^{\infty} \lambda^n K^n[f] = \sum_{n=0}^{\infty} \lambda^n K^n[f], \eqno(2.0) $$

where $K^n$ denotes the $n$-th iteration (power) of the operator $K$, so that $K^0[f] = f$; this series converges uniformly.

Denote also $y_n(t) = \lambda^n K^n[f](t)$; then $x = \sum_{n=0}^{\infty} y_n$, and we have for the values $n = 1, 2, \ldots$

$$ y_n(t) = \lambda^n \int_0^t ds_1 \int_0^{s_1} ds_2 \ldots \int_0^{s_{n-1}} ds_n \cdot K(t,s_1)\, K(s_1,s_2) \ldots K(s_{n-1},s_n)\, f(s_n) $$
$$ = \lambda^n t^n \int_0^1 ds_1 \int_0^{s_1} ds_2 \ldots \int_0^{s_{n-1}} ds_n \cdot K(t,ts_1)\, K(ts_1,ts_2) \ldots K(ts_{n-1},ts_n)\, f(ts_n). \eqno(2.1) $$

Introduce the $n$-dimensional simplex (polygon)

$$ S(n) := \{ s_1, s_2, \ldots, s_n : \ 0 < s_1 < 1, \ 0 < s_2 < s_1, \ 0 < s_3 < s_2, \ \ldots, \ 0 < s_n < s_{n-1} \}, \eqno(2.2) $$

and denote

$$ L_n(t, \vec{s}) = L_{n; K,f}(t, \vec{s}) = K(t,ts_1)\, K(ts_1,ts_2) \ldots K(ts_{n-1},ts_n)\, f(ts_n), \qquad \vec{s} = (s_1, s_2, \ldots, s_n); $$

then

$$ y_n(t) = \lambda^n t^n \int_{S(n)} L_n(t, \vec{s})\, ds. \eqno(2.3) $$

The volume of $S(n)$ is equal to $1/n!$, and if we introduce the probability measure $\mu_n(\cdot)$ on the Borel subsets of the simplex $S(n)$ with density $n!\, ds$:

$$ \mu_n(A) = n! \int_A ds, $$

then the expression for $x(t)$ may be rewritten as follows:

$$ x(t) = f(t) + \sum_{n=1}^{\infty} \frac{\lambda^n t^n}{n!} \int_{S(n)} L_n(t, \vec{s})\, \mu_n(ds). \eqno(2.4) $$

Denote $x_{\lambda} = x_{\lambda}(t) = e^{-\lambda t} \cdot x(t)$; then

$$ x_{\lambda}(t) = e^{-\lambda t} f(t) + \sum_{n=1}^{\infty} e^{-\lambda t}\, \frac{\lambda^n t^n}{n!} \int_{S(n)} L_n(t, \vec{s})\, \mu_n(ds), \eqno(2.5) $$
or equivalently, setting $L_0(t, \vec{s}) := f(t)$,

$$ x_{\lambda}(t) = \sum_{n=0}^{\infty} e^{-\lambda t}\, \frac{\lambda^n t^n}{n!} \int_{S(n)} L_n(t, \vec{s})\, \mu_n(ds). \eqno(2.5a) $$

Let us introduce a sufficiently rich probability space $(\Omega, B, \mathbf{P})$ with probability $\mathbf{P}$, expectation $\mathbf{E}$ and variance $\mathrm{Var}$, and a r.v. $\tau$ which has a Poisson distribution with parameter $\lambda \cdot t$ (closely related to the Poisson flow of intensity $\lambda$):

$$ \mathbf{P}(\tau = n) = e^{-\lambda t}\, \frac{\lambda^n t^n}{n!}; \eqno(2.6) $$

then

$$ x_{\lambda}(t) = \mathbf{E} \int_{S(\tau)} L_{\tau}(t, \vec{s})\, \mu_{\tau}(ds). \eqno(2.6a) $$

Further, the integral on the right-hand side of (2.6a) may be represented as follows. We introduce a random vector $\vec{\xi}_{\tau}$ of dimension $\tau$ which has the uniform (conditional) distribution in the simplex $S(\tau)$:

$$ \mathbf{P}\left( \vec{\xi}_{\tau} \in A \,|\, \tau \right) = \mu_{\tau}(A); \eqno(2.7) $$

then

$$ \int_{S(\tau)} L_{\tau}(t, \vec{s})\, \mu_{\tau}(ds) = \mathbf{E}\left[ L_{\tau}(t, \vec{\xi}_{\tau}) \,|\, \tau \right], \eqno(2.8) $$

and hence

$$ x_{\lambda}(t) = \mathbf{E}\, L_{\tau}(t, \vec{\xi}_{\tau}). \eqno(2.9) $$

For each fixed value of $\tau$ the integral in the expression (2.8) may be computed by means of the Monte Carlo method:

$$ \int_{S(\tau)} L_{\tau}(t, \vec{s})\, \mu_{\tau}(ds) \approx \frac{1}{k} \sum_{j=1}^{k} L_{\tau}(t, \vec{\xi}^{(j)}_{\tau}), \eqno(2.10) $$

where the $\vec{\xi}^{(j)}_{\tau}$ are independent copies of the r.v. $\vec{\xi}_{\tau}$.

Correspondingly, the Monte Carlo approximation for the whole value $x_{\lambda}(t)$ may be offered as follows:

$$ \hat{x}_{\lambda} = \hat{x}_{\lambda,N,Z}(t) \stackrel{def}{=} \frac{1}{Z} \sum_{i=1}^{Z} \frac{1}{N(\tau(i))} \sum_{j=1}^{N(\tau(i))} L_{\tau(i)}\left( t, \vec{\xi}^{(j)}_{\tau(i)} \right), \eqno(2.11) $$

where the $\tau(i)$ are independent copies of the r.v. $\tau$, i.e. independent Poisson distributed r.v. with parameter $\Lambda = \lambda t$, and $N = N(n)$ is some non-random positive numerical sequence, whose choice will be specified later. We state by definition

$$ \frac{1}{N(n)} \sum_{j=1}^{N(n)} L_j \stackrel{def}{=} 0 \eqno(2.11a) $$

if $N(n) = 0$.

Evidently, the approximation $\hat{x}_{\lambda} = \hat{x}_{\lambda,N,Z}(t)$ of the solution $x_{\lambda}(t)$ is unbiased:

$$ \mathbf{E}\, \hat{x}_{\lambda,N,Z}(t) = x_{\lambda}(t). $$

Another approach to the Monte Carlo solution of linear integral equations, which gives a biased estimate, may be found in the article [17].

Let us count the amount $R$ of elapsed r.v. needed for the computation of $\hat{x}_{\lambda} = \hat{x}_{\lambda,N,Z}(t)$:

$$ R = \sum_{i=1}^{Z} \tau(i)\, N(\tau(i)); \eqno(2.12) $$
here $R$ is a random variable with expectation

$$ \Theta \stackrel{def}{=} \mathbf{E}\, R \asymp Z \cdot \sum_{n=1}^{\infty} e^{-\Lambda}\, \frac{\Lambda^n}{n!} \cdot (n\, N(n)), \eqno(2.13) $$

where as before $\Lambda = \lambda t$.
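Before passing to the error analysis, the estimator (2.11) can be sketched in code. This is only an illustration with $N(n) \equiv 1$ and a toy problem of our own choosing: for $K \equiv 1$, $f(t) = t$ one has $x(t) = (e^{\lambda t} - 1)/\lambda$, hence $x_{\lambda}(t) = (1 - e^{-\lambda t})/\lambda$. A uniform point of the simplex $S(n)$ is obtained as $n$ uniforms sorted in decreasing order:

```python
import numpy as np

def x_lambda_hat(f, K, lam, t, Z, rng):
    """Unbiased estimate of x_lambda(t) = exp(-lam*t) * x(t), formula (2.11)
    with N(n) identically 1: draw tau ~ Poisson(lam*t), then a uniform point
    of the simplex S(tau) as sorted (decreasing) uniforms."""
    total = 0.0
    for _ in range(Z):
        n = rng.poisson(lam * t)
        if n == 0:
            total += f(t)                       # L_0 = f(t)
            continue
        s = np.sort(rng.uniform(size=n))[::-1]  # 1 > s_1 > ... > s_n > 0
        pts = t * s
        L, prev = f(pts[-1]), t
        for u in pts:                           # K(t, t s_1) K(t s_1, t s_2) ...
            L *= K(prev, u)
            prev = u
        total += L
    return total / Z

rng = np.random.default_rng(0)
lam, t = 1.0, 1.0
est = x_lambda_hat(lambda s: s, lambda a, b: 1.0, lam, t, 100_000, rng)
exact = (1.0 - np.exp(-lam * t)) / lam          # x_lambda for f(t) = t, K = 1
```

The averaging over $Z$ independent replicates reproduces the outer sum in (2.11); no bias correction is needed, by the unbiasedness property above.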
Error estimate.

Let us estimate the variance of the approximate solution $\hat{x}_{\lambda} = \hat{x}_{\lambda,N,Z}(t)$. Note first of all that

$$ \mathrm{Var}[\hat{x}_{\lambda}] = \frac{1}{Z} \cdot \mathrm{Var}\left[ \frac{1}{N(\tau)} \sum_{j=1}^{N(\tau)} L_{\tau}\left( t, \vec{\xi}^{(j)}_{\tau} \right) \right]. \eqno(3.1) $$

Using the conditional expectation representation $\mathrm{Var}(\zeta) = \mathbf{E}\left\{ \mathbf{E}\left[ (\zeta - \mathbf{E}\zeta)^2 \,|\, \tau \right] \right\}$, we obtain

$$ Z \cdot \mathrm{Var}[\hat{x}_{\lambda}] \le \|f\|^2 \cdot e^{-\Lambda} \sum_n \frac{\Lambda^n \|K\|^{2n}}{N(n)\, n!} = \|f\|^2 \cdot e^{-\Lambda} \sum_n \frac{Q^n}{N(n)\, n!}, \qquad Q := \Lambda\, \|K\|^2. \eqno(3.2) $$
Lemma 3.1.

Let us consider the following constrained optimization problem:

$$ D := \min_{\{N(n)\} \ge 0} \sum_n \frac{A(n)}{N(n)} \quad \Big/ \quad \sum_n B(n)\, N(n) \le M. \eqno(3.3) $$

Here $\{A(n)\}$, $\{B(n)\}$ are (possibly unbounded) positive sequences, $M = \mathrm{const} >> 1$, $N(n) > 0$. We derive, using the method of Lagrange factors, that

$$ D = \frac{\left[ \sum_n \sqrt{A(n)\, B(n)} \right]^2}{M}, \eqno(3.3a) $$

attained at

$$ N(n) := N^0(n) \stackrel{def}{=} \frac{M}{\sum_n \sqrt{A(n)\, B(n)}} \cdot \sqrt{\frac{A(n)}{B(n)}}; \eqno(3.3b) $$

one must take $N(n) := \mathrm{Ent}[N^0(n)] + 1$, where $\mathrm{Ent}(z)$ denotes the integer part of the real positive number $z$, if it is known that the numbers $N(n)$ must be integer. In the last case we have only

$$ D \asymp \frac{\left\{ \sum_n \sqrt{A(n)\, B(n)} \right\}^2}{M}. \eqno(3.3c) $$

Let now $\Theta = \mathbf{E}\, R$ be a fixed "great" number. It is natural to consider the following constrained optimization problem of variance minimization:

$$ \sum_n \frac{Q^n}{N(n)\, n!} \to \min \quad \Big/ \quad \sum_{n=1}^{\infty} \frac{\Lambda^n}{n!} \cdot (n\, N(n)) = e^{\Lambda}\, \Theta / Z. $$

Of course, it will be supposed that $\Theta\, e^{\Lambda} >> Z$. We observe, using Lemma 3.1, that

$$ \min_{\{N(n)\}} \frac{Z \cdot \mathrm{Var}[\hat{x}_{\lambda}]}{e^{-\Lambda}\, \|f\|^2} \approx \frac{Z}{e^{\Lambda}\, \Theta} \cdot \left[ \sum_n \frac{\Lambda^n \|K\|^n \sqrt{n}}{n!} \right]^2, \eqno(3.4) $$

and the (near) optimal values $N(n) = N^0(n)$ are the following:

$$ N(n) = \mathrm{Ent}\left[ \frac{e^{\Lambda}\, \Theta / Z}{\sum_m \Lambda^m \|K\|^m \sqrt{m}\, (m!)^{-1}} \cdot \frac{\|K\|^n}{\sqrt{n}} \right] + 1. \eqno(3.5) $$

If for some value $n$ the number $N^0(n)$ is sufficiently small, for instance $N^0(n) \le 1$, then we are obliged to take $N(n) := 0$; see (2.11a).

This implies that the rate of convergence of the offered method is equal to $1/\sqrt{Z}$, as in the classical Monte Carlo method. It remains to use the classical Central Limit Theorem (CLT) to construct the confidence interval for the solution $x_{\lambda}(t)$.

We consider in this section the Monte Carlo approach for the solution of Fredholm's [22] linear integral equation of the second kind

$$ x(u) = f(u) + \lambda \int_V K(u,v)\, x(v)\, \nu(dv), \eqno(4.1) $$

briefly:

$$ x(u) = f(u) + \lambda\, K[x](u), \eqno(4.1a) $$
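Lemma 3.1 is a standard Lagrange-multiplier computation; it can be checked numerically. The sketch below uses arbitrary positive sequences and a budget of our own choosing, and verifies that the allocation (3.3b) meets the budget, attains the value (3.3a), and that feasible perturbations only increase the objective:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.5, 2.0, size=8)             # arbitrary positive A(n)
B = rng.uniform(0.5, 2.0, size=8)             # arbitrary positive B(n)
M = 100.0                                     # budget, "great" constant

# optimal continuous allocation (3.3b) and optimal value (3.3a)
N0 = M / np.sum(np.sqrt(A * B)) * np.sqrt(A / B)
D = np.sum(np.sqrt(A * B)) ** 2 / M

assert np.isclose(np.sum(B * N0), M)          # the budget is met exactly
assert np.isclose(np.sum(A / N0), D)          # the optimum (3.3a) is attained

# any other feasible allocation gives a larger objective (Cauchy-Schwarz)
N1 = N0 * rng.uniform(0.5, 1.5, size=8)
N1 *= M / np.sum(B * N1)                      # rescale back onto the budget
assert np.sum(A / N1) >= D - 1e-12
```

The last assertion is exactly the Cauchy-Schwarz inequality $\sum A/N \cdot \sum BN \ge (\sum \sqrt{AB})^2$, which underlies the lemma.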
where $\lambda = \mathrm{const} \in (0,1)$ and $(V, \mathcal{A}, \nu)$, equipped with a distance $\rho = \rho(v_1, v_2)$, is a compact metric measurable probability space: $\nu(V) = 1$; $u, v \in V$, and both the functions $f(\cdot)$, $K(\cdot,\cdot)$ are continuous. We denote as in the first section

$$ \|f\| = \max_{u \in V} |f(u)|, \qquad \|K\| = \max_{u,v \in V} |K(u,v)|. $$

Another (deterministic) approach, via the computation of the so-called Fredholm determinants [22], may be found in the article [21].

The norm of the linear operator $K[x](u) = \int_V K(u,v)\, x(v)\, \nu(dv)$ in the space of continuous functions $C(V) = C(V, \rho)$ will be denoted by $|||K|||$; it is known that

$$ |||K||| = \max_{u \in V} \int_V |K(u,v)|\, \nu(dv), \eqno(4.2) $$

so that $|||K||| \le \|K\|$. Further, let us denote

$$ r_n = r_n(K) = |||K^n|||^{1/n}, \qquad r = r(K) = \lim_{n \to \infty} r_n(K); \eqno(4.3) $$

the number $r(K)$ is the spectral radius of the operator $K$; see [24], chapters 4, 5.

We suppose at first that $|||K||| \le 1$; then the continuous solution $x = x(u)$ of (4.1) exists, is unique, and may be represented by means of the uniformly convergent Neumann series

$$ x(u) = f(u) + \sum_{n=1}^{\infty} \lambda^n K^n[f](u) = f(u) + \sum_{n=1}^{\infty} \lambda^n y_n(u), \eqno(4.4) $$

where

$$ y_n(u) = \int_V \nu(ds_1) \int_V \nu(ds_2) \ldots \int_V \nu(ds_n)\, K(u,s_1)\, K(s_1,s_2) \ldots K(s_{n-1},s_n)\, f(s_n), \eqno(4.5) $$

recalling that $0 < \lambda < 1$.

Let $\gamma(i)$, $i = 1, 2, \ldots, n$, be independent random variables with distribution $\nu$: $\mathbf{P}(\gamma(i) \in A) = \nu(A)$. Then the function $y_n(\cdot)$ has the probabilistic representation

$$ y_n(u) = \mathbf{E}\, K(u, \gamma(1))\, K(\gamma(1), \gamma(2)) \ldots K(\gamma(n-1), \gamma(n))\, f(\gamma(n)). \eqno(4.6) $$

Denote $x_{\lambda}(u) = (1-\lambda)\, x(u)$,

$$ \vec{s}_n = \{ s(1), s(2), \ldots, s(n) \}, \qquad L_n(u, \vec{s}_n) = K(u, s(1))\, K(s(1), s(2)) \ldots K(s(n-1), s(n))\, f(s(n)), $$
$$ \vec{\theta}_n = \theta_n = \{ \gamma(1), \gamma(2), \ldots, \gamma(n) \}; $$

then $y_n(u) = \mathbf{E}\, L_n(u, \vec{\theta}_n)$ and

$$ x_{\lambda} = \sum_{n=0}^{\infty} (1-\lambda)\, \lambda^n\, \mathbf{E}\, L_n(u, \vec{\theta}_n). \eqno(4.7) $$

The integer r.v. $\Delta$ with $\mathbf{P}(\Delta = n) = (1-\lambda)\, \lambda^n$, $n = 0, 1, 2, \ldots$, is called geometrically distributed; write $\mathrm{Law}(\Delta) = G_{\lambda}$. We can rewrite the expression (4.7) using this notation as follows:

$$ x_{\lambda} = \mathbf{E}\, L_{\Delta}(u, \vec{\theta}_{\Delta}). \eqno(4.8) $$

The Monte Carlo approximation $\hat{x}_{\lambda} = \hat{x}_{\lambda}(u) = \hat{x}_{\lambda; Z, \{N(n)\}}(u)$ for $x_{\lambda}$ may be written as before:

$$ \hat{x}_{\lambda}(u) = \frac{1}{Z} \sum_{i=1}^{Z} \frac{1}{N(\Delta(i))} \sum_{j=1}^{N(\Delta(i))} L_{\Delta(i)}\left( u, \theta^{(j)}_{\Delta(i)} \right), \eqno(4.9) $$

where the $\Delta(i)$ are independent copies of the r.v. $\Delta$ and the $\theta^{(j)}_{\Delta(i)}$ are independent copies of the random vector $\theta_{\Delta(i)}$.

In order to calculate (estimate) the variance of $\hat{x}_{\lambda}(u)$, we need the following definition. Let $K[x](u)$ be any linear integral operator with kernel $K$:

$$ K[x](u) = \int_V K(u,v)\, x(v)\, \nu(dv). $$

The Kronecker square $K^{[2]}$ of the operator $K$ is the operator acting as follows:

$$ K^{[2]}[x](u) \stackrel{def}{=} \int_V K^2(u,v)\, x(v)\, \nu(dv). \eqno(4.10) $$

We impose another condition on the coefficient $\lambda$ and on the kernel $K$:

$$ \lambda \cdot |||K^{[2]}||| < 1. \eqno(4.11) $$

Then

$$ \frac{Z}{(1-\lambda)\, \|f\|^2}\, \mathrm{Var}\, \hat{x}_{\lambda} \le \sum_n \frac{\lambda^n\, |||K^{[2]}|||^n}{N(n)}. \eqno(4.12) $$

Let us count the amount $R_F$ of elapsed standard, i.e. uniformly [0,1] distributed, r.v. needed for the computation of $\hat{x}_{\lambda} = \hat{x}_{\lambda,N,Z}$. We suppose that the generation of one random vector $\theta^{(j)}_k$ requires exactly $k \cdot d$, $d = \mathrm{const} = 1, 2, \ldots$, standard r.v. Then

$$ R_F = d \cdot \sum_{i=1}^{Z} \Delta(i)\, N(\Delta(i)), \eqno(4.13) $$

so that $R_F$ is a random variable with expectation

$$ \Theta_F \stackrel{def}{=} \mathbf{E}\, R_F \asymp Z \cdot d \cdot \sum_{n=0}^{\infty} (1-\lambda)\, \lambda^n \cdot (n\, N(n)). \eqno(4.14) $$

Denoting

$$ M_F = \frac{\Theta_F}{(1-\lambda)\, Z\, d}, $$

we come to the following constrained optimization problem, assuming $M_F$ a fixed large number:

$$ \frac{(1-\lambda)\, \|f\|^2}{Z} \times \sum_n \frac{\lambda^n\, |||K^{[2]}|||^n}{N(n)} \to \min \quad \Big/ \quad \sum_n \lambda^n \cdot (n\, N(n)) = M_F. \eqno(4.15) $$

Introduce the function

$$ G_{\alpha}(z) = \sum_{n=1}^{\infty} n^{\alpha}\, z^n, \qquad \alpha = \mathrm{const} \ge 0, \ 0 \le z < 1. $$

We find, taking into account the proposition of Lemma 3.1:

$$ D_F := \min_{\{N(n)\}} \mathrm{Var}\, \hat{x}_{\lambda} \asymp \frac{(1-\lambda)^2\, d\, \|f\|^2}{\Theta_F} \cdot G^2_{1/2}\left( \lambda \sqrt{|||K^{[2]}|||} \right), \eqno(4.16) $$

attained at $N(n) := 1 + \mathrm{Ent}(N^0(n))$, where

$$ N^0(n) \stackrel{def}{=} \frac{M_F}{G_{1/2}\left( \lambda \sqrt{|||K^{[2]}|||} \right)} \times \frac{|||K^{[2]}|||^{n/2}}{\sqrt{n}}. \eqno(4.17) $$

As before, if $N^0(n) \le 1$, we must take $N(n) = 0$.

We conclude again that the rate of convergence of the offered estimate is equal to $1/\sqrt{\Theta_F}$, as in the one-dimensional case.
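The Fredholm estimator (4.9) is easy to sketch for $V = [0,1]$ with $\nu$ the Lebesgue measure. The toy data below are our choices, not from the paper: $K \equiv 1$, $f(u) = u$, for which $x(u) = u + \lambda/(2(1-\lambda))$ and hence $x_{\lambda}(u) = (1-\lambda)u + \lambda/2$; we again take $N(n) \equiv 1$:

```python
import numpy as np

def fredholm_hat(f, K, lam, u, Z, rng):
    """Estimate of x_lambda(u) = (1-lam)*x(u), formula (4.9) with N(n) = 1.
    Delta is geometric, P(Delta = n) = (1-lam)*lam**n for n = 0, 1, ...,
    and gamma(1), ..., gamma(n) are i.i.d. with law nu (uniform on [0,1] here)."""
    total = 0.0
    for _ in range(Z):
        n = rng.geometric(1.0 - lam) - 1      # shift support to 0, 1, 2, ...
        if n == 0:
            total += f(u)                     # L_0 = f(u)
            continue
        g = rng.uniform(size=n)
        L, prev = f(g[-1]), u
        for v in g:                           # K(u, g1) K(g1, g2) ... f(g_n)
            L *= K(prev, v)
            prev = v
        total += L
    return total / Z

rng = np.random.default_rng(3)
lam, u = 0.5, 0.3
est = fredholm_hat(lambda v: v, lambda a, b: 1.0, lam, u, 200_000, rng)
exact = (1.0 - lam) * u + lam / 2.0           # x_lambda(u) for this toy problem
```

Dividing the result by $1 - \lambda$ recovers an unbiased estimate of the solution $x(u)$ itself.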
Remark 4.1. Note that as $z \to 1-$,

$$ G_{1/2}(z) \sim 0.5\, \sqrt{\pi}\, |\ln z|^{-3/2}. $$

We consider in this section the Volterra-type integral equation with a weak (Abel) singularity of the form

$$ x(t) = f(t) + \lambda \int_0^t \frac{K(t,s)\, x(s)}{(t-s)^{\alpha}}\, ds, \eqno(5.1) $$

where $\lambda = \mathrm{const} > 0$, $t \in [0,T]$, $T = \mathrm{const} > 0$, $f(\cdot)$, $K(\cdot,\cdot)$ are continuous functions, and $\alpha := 1 - \beta = \mathrm{const} \in (0,1)$. The case $\alpha = 0$ was considered in the second section; the case $\alpha > 0$ may be investigated as follows. As before, $x(t) = f(t) + \sum_{n=1}^{\infty} y_n(t)$, where

$$ y_n(t) = \lambda^n \int_0^t ds_1 \int_0^{s_1} ds_2 \ldots \int_0^{s_{n-1}} ds_n\, \frac{K(t,s_1)\, K(s_1,s_2) \ldots K(s_{n-1},s_n)\, f(s_n)}{[(t-s_1)(s_1-s_2) \ldots (s_{n-1}-s_n)]^{\alpha}} $$
$$ = [\lambda t^{\beta}]^n \int_0^1 ds_1 \int_0^{s_1} ds_2 \ldots \int_0^{s_{n-1}} ds_n\, \frac{K(t,ts_1)\, K(ts_1,ts_2) \ldots K(ts_{n-1},ts_n)\, f(ts_n)}{[(1-s_1)(s_1-s_2) \ldots (s_{n-1}-s_n)]^{\alpha}}. \eqno(5.2) $$

Denote

$$ R_{\alpha,n}(\vec{s}) = (1-s_1)^{-\alpha}\, (s_1-s_2)^{-\alpha}\, (s_2-s_3)^{-\alpha} \ldots (s_{n-1}-s_n)^{-\alpha}, \qquad W_n(\beta) = \frac{\Gamma^n(\beta)}{\Gamma(1+n\beta)}, $$

where $\Gamma(\cdot)$ is the ordinary Gamma function. Evidently, $\lim_{n \to \infty} W_n(\beta) = 0$. Note that

$$ \int \int \ldots \int_{S(n)} \frac{ds_1\, ds_2 \ldots ds_n}{[(1-s_1)(s_1-s_2) \ldots (s_{n-1}-s_n)]^{\alpha}} = W_n(\beta). $$

The following function $h_{\alpha}(s) = h_{\alpha,n}(s)$, $s \in S(n)$, could therefore be chosen as a density of a distribution supported on the simplex $S(n)$:

$$ h_{\alpha}(s) = \frac{R_{\alpha,n}(s)}{W_n(\beta)}. $$

Definition 5.1. (See [17].) The random vector $\kappa = \kappa_{\alpha,n} = \vec{\kappa} = \vec{\kappa}_{\alpha,n}$ with values in the polygon $S(n)$ has by definition a polygonal Beta distribution, write $\mathrm{Law}(\kappa) = PB(\alpha, n)$, iff it has the density $h_{\alpha}(s)$, $s \in S(n)$. In other words,

$$ \mathbf{P}(\kappa \in G) = \int_G h_{\alpha}(s)\, ds \stackrel{def}{=} \mu_{\alpha,n}(G), \qquad G \subset S(n). $$

Evidently, $\mu_{\alpha,n}(\cdot)$ is a probability Borel measure on the set $S(n)$.

The expression for the function $y_n(\cdot)$ may be rewritten as follows:

$$ y_n(t) = \lambda^n\, t^{n\beta} \cdot W_n(\beta) \cdot \mathbf{E}\, L_n(t, \vec{\eta}_n), \eqno(5.3) $$

where the random vector $\vec{\eta}_n$ has the polygonal Beta distribution of dimension $n$ with parameters $(\alpha, n)$.

Recall that the Mittag-Leffler function $E_{\beta}(z)$, more exactly the family of functions depending on the positive real parameter $\beta > 0$, is defined for all (possibly complex) values $z$ by the formula

$$ E_{\beta}(z) = \sum_{n=0}^{\infty} \frac{z^n}{\Gamma(1+n\beta)}, \qquad \beta = \mathrm{const} > 0. $$

We define also a slight generalization of the Mittag-Leffler function:

$$ E_{\beta,\alpha,\delta}(z) = \sum_{n=1}^{\infty} \frac{z^n\, n^{\delta}}{\Gamma^{\alpha}(1+n\beta)}, \qquad \alpha, \beta = \mathrm{const} > 0, \ \delta = \mathrm{const}, $$

so that $E_{\beta,1,0}(z) = E_{\beta}(z) - 1$.

The definition of $E_{\beta}$, with the investigation of the properties of this function, belongs to G. Mittag-Leffler [15]. See also the recent article [11] (with references therein), where in particular some interesting applications of these functions are described.
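The density $h_{\alpha}$ of Definition 5.1 factorizes over the consecutive gaps $1-s_1,\ s_1-s_2,\ \ldots,\ s_{n-1}-s_n$, each raised to the power $\beta - 1$, so the gap vector together with the remainder $s_n$ has a Dirichlet$(\beta, \ldots, \beta, 1)$ law, whose normalizing constant is exactly $W_n(\beta)$. This gives a direct sampler; the reduction to the Dirichlet law is our own observation, offered as a sketch rather than the paper's method:

```python
import numpy as np

def polygonal_beta(alpha, n, size, rng):
    """Sample from PB(alpha, n) on S(n) = {1 > s_1 > ... > s_n > 0}.
    The gaps (1-s_1, s_1-s_2, ..., s_{n-1}-s_n) and the remainder s_n are
    jointly Dirichlet(beta, ..., beta, 1) with beta = 1 - alpha; the Dirichlet
    normalizing constant Gamma(beta)^n * Gamma(1) / Gamma(n*beta + 1)
    reproduces W_n(beta)."""
    beta = 1.0 - alpha
    d = rng.dirichlet(np.r_[np.full(n, beta), 1.0], size=size)
    gaps = d[:, :n]
    return 1.0 - np.cumsum(gaps, axis=1)      # columns: s_1 > s_2 > ... > s_n

rng = np.random.default_rng(4)
s = polygonal_beta(alpha=0.5, n=3, size=100_000, rng=rng)
# ordering check; also, s_n is the last Dirichlet coordinate, whose mean is
# 1/(n*beta + 1) = 0.4 for these parameters
assert np.all(np.diff(s, axis=1) < 0) and np.all(s > 0) and np.all(s < 1)
```

For $\alpha = 0$ (i.e. $\beta = 1$) this reduces to the uniform distribution on the simplex used in the second section.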
Definition 5.2.
The integer-valued non-negative random variable $\zeta$ has by definition the Mittag-Leffler distribution with parameters $(\beta, \mu)$, $\beta, \mu > 0$, write $\mathrm{Law}(\zeta) = R_{\beta}(\mu)$, iff

$$ \mathbf{P}(\zeta = n) = \frac{\mu^n}{\Gamma(1 + \beta n)\, E_{\beta}(\mu)}, \qquad n = 0, 1, 2, \ldots \eqno(5.4) $$

Remark 5.1. Our Definition 5.2 differs from the one in the article [11], where the so-called continuous Mittag-Leffler distribution was introduced.

Denote

$$ \Lambda_{\beta} = \Lambda_{\beta}(t) \stackrel{def}{=} \lambda\, \Gamma(\beta)\, t^{\beta}, \qquad x_{\lambda,\beta}(t) = \frac{x(t)}{E_{\beta}(\Lambda_{\beta}(t))}, $$

noting that $\lambda^n t^{n\beta}\, W_n(\beta) = \Lambda_{\beta}^n / \Gamma(1+n\beta)$. The function $x_{\lambda,\beta}(t)$, which is proportional to the true solution $x(t)$ of the equation (5.1), may be probabilistically represented as follows:

$$ x_{\lambda,\beta}(t) = \mathbf{E}\, L_{\zeta}(t, \vec{\eta}_{\zeta}), \qquad \mathrm{Law}(\zeta) = R_{\beta}(\Lambda_{\beta}(t)). \eqno(5.5) $$

For each fixed value of $\zeta$ the integral in the expression (5.5) may be computed by means of the Monte Carlo method:

$$ \int_{S(\zeta)} L_{\zeta}(t, \vec{s})\, \mu_{\alpha,\zeta}(ds) \approx \frac{1}{k} \sum_{j=1}^{k} L_{\zeta}(t, \vec{\eta}^{(j)}_{\zeta}), $$

where the $\vec{\eta}^{(j)}_{\zeta}$ are independent copies of the r.v. $\vec{\eta}_{\zeta}$.

Correspondingly, the Monte Carlo approximation for the whole value $x_{\lambda,\beta}(t)$ may be offered as in the second section:

$$ \hat{x}_{\lambda,\beta} = \hat{x}_{\lambda,\beta; N,Z}(t) \stackrel{def}{=} \frac{1}{Z} \sum_{i=1}^{Z} \frac{1}{N(\zeta(i))} \sum_{j=1}^{N(\zeta(i))} L_{\zeta(i)}\left( t, \vec{\eta}^{(j)}_{\zeta(i)} \right), \eqno(5.6) $$

where the $\zeta(i)$ are independent copies of the r.v. $\zeta$, i.e. independent Mittag-Leffler distributed r.v. with parameters $(\beta, \Lambda_{\beta}(t))$, and $N = N(n)$ is some non-random integer positive numerical sequence, whose choice will be specified later.

Further, let us estimate the variance of our estimate $\hat{x}_{\lambda,\beta}$:

$$ Z\, \mathrm{Var}\, \hat{x}_{\lambda,\beta} \le \sum_n \frac{\Lambda_{\beta}^n(t)}{\Gamma(1+n\beta)\, N(n)} \int_{S(n)} L_n^2(t,s)\, \mu_{\alpha,n}(ds) \le \|f\|^2 \sum_n \frac{\Lambda_{\beta}^n(t)\, \|K\|^{2n}}{\Gamma(1+n\beta)\, N(n)}. \eqno(5.7) $$

Let us count the amount $R_{\beta}$ of elapsed random variables. We find, analogously to the second section,

$$ R_{\beta} = \sum_{i=1}^{Z} \zeta(i)\, N(\zeta(i)), \eqno(5.8) $$

so that $R_{\beta}$ is a random variable with expectation

$$ \Theta_{\beta} \stackrel{def}{=} \mathbf{E}\, R_{\beta} \asymp Z \cdot \sum_{n=1}^{\infty} \frac{\Lambda_{\beta}^n}{\Gamma(1+\beta n)\, E_{\beta}(\Lambda_{\beta})} \cdot (n\, N(n)). \eqno(5.9) $$

Let $\Theta_{\beta} = \mathbf{E}\, R_{\beta}$ be a fixed "great" number. Denote

$$ M_{\beta} = \frac{E_{\beta}(\Lambda_{\beta}(t)) \cdot \Theta_{\beta}}{Z}; $$

it will be presumed that $M_{\beta}$ is also a great number. It is natural to consider the following constrained optimization problem of variance minimization:

$$ \|f\|^2 \sum_n \frac{\Lambda_{\beta}^n\, \|K\|^{2n}}{\Gamma(1+n\beta)\, N(n)} \to \min \quad \Big/ \quad \sum_n \frac{\Lambda_{\beta}^n}{\Gamma(1+\beta n)} \cdot (n\, N(n)) \le M_{\beta}. \eqno(5.10) $$

We find, taking into account the proposition of Lemma 3.1:

$$ \min_{\{N(n)\}} \mathrm{Var}\, \hat{x}_{\lambda,\beta} \approx \frac{\|f\|^2}{Z\, M_{\beta}} \cdot E^2_{\beta,1,1/2}\left( \Lambda_{\beta}\, \|K\| \right), \eqno(5.11) $$

and the (near) optimal values $N(n) = N^0(n)$ are the following:

$$ N(n) = \mathrm{Ent}\left[ \frac{M_{\beta}}{E_{\beta,1,1/2}(\Lambda_{\beta}\, \|K\|)} \cdot \frac{\|K\|^n}{\sqrt{n}} \right] + 1. \eqno(5.12) $$

As before, if for some value $n$ the value $N^0(n)$ is sufficiently small, for instance $N^0(n) \le 1$, then we must take $N(n) := 0$; see (2.11a).

This implies that the rate of convergence of the offered method is equal to $1/\sqrt{Z}$, as in the classical Monte Carlo method. It remains, as usual, to use the classical Central Limit Theorem (CLT) to construct the confidence interval for the solution $x_{\lambda,\beta}(t)$.
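Sampling the discrete Mittag-Leffler law of Definition 5.2 is straightforward by truncated inversion, since the probabilities decay super-geometrically. A sketch (the truncation length is our choice and must keep $1 + \beta\,n_{\max}$ inside the range of the Gamma function):

```python
import numpy as np
from math import gamma

def mittag_leffler_pmf(beta, mu, nmax=120):
    """Truncated probabilities P(zeta = n) = mu**n / (Gamma(1+beta*n) * E_beta(mu)),
    Definition 5.2; w.sum() approximates E_beta(mu), so the returned vector
    is a normalized pmf."""
    w = np.array([mu ** n / gamma(1.0 + beta * n) for n in range(nmax)])
    return w / w.sum()

rng = np.random.default_rng(5)
beta, mu = 0.5, 0.5
p = mittag_leffler_pmf(beta, mu)
zeta = rng.choice(len(p), size=100_000, p=p)  # i.i.d. copies of zeta
```

Combined with the polygonal Beta sampler of the previous section, this supplies all the randomness needed for the estimator (5.6).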
A. Confidence region for solution in the uniform norm.

All the offered (Monte Carlo approximated) solutions, see for example (4.9), $\hat{x} = \hat{x}(u)$, $u \in V$, have the form

$$ \hat{x}(u) = \hat{x}_Z(u) = \frac{1}{Z} \sum_{i=1}^{Z} \frac{1}{N(\Delta(i))} \sum_{j=1}^{N(\Delta(i))} L_{\Delta(i)}\left( u, \theta^{(j)}_{\Delta(i)} \right). \eqno(6.1) $$

Denote

$$ \xi_i(u) = \frac{1}{N(\Delta(i))} \sum_{j=1}^{N(\Delta(i))} L_{\Delta(i)}\left( u, \theta^{(j)}_{\Delta(i)} \right) - x(u); \eqno(6.2) $$

the random fields $\{\xi_i(u)\}$ are continuous, mean zero, identically distributed, and

$$ \sqrt{Z}\, (\hat{x}_Z(u) - x(u)) = \frac{1}{\sqrt{Z}} \sum_{i=1}^{Z} \xi_i(u). \eqno(6.3) $$

In order to build a confidence region in the uniform norm $\|x\| = \max_{u \in V} |x(u)|$ for the solution $x = x(u)$, we need to use the so-called Central Limit Theorem (CLT) in the space of continuous functions $C(V)$; see [1], [2], [13], [14], [17], [23], [25]. Namely, if the sequence of random fields $\xi_i = \xi_i(u)$, $u \in V$, satisfies this CLT, then

$$ \lim_{Z \to \infty} \mathbf{P}\left( \sqrt{Z}\, \sup_{u \in V} |\hat{x}_Z(u) - x(u)| > Q \right) = \mathbf{P}\left( \sup_{u \in V} |\zeta(u)| > Q \right), \eqno(6.4) $$

where $\zeta(u)$ is a continuous centered Gaussian random field with the same covariance function $R(u_1, u_2)$ as $\xi(u)$:

$$ R(u_1, u_2) = \mathbf{E}\, \zeta(u_1)\, \zeta(u_2) = \mathrm{Cov}(\xi(u_1), \xi(u_2)). $$

Many sufficient conditions for the CLT in the Banach space $C(V)$ may be found in [3], [5], [6], [7], [9], [10], [12], etc. The CLT in other separable Banach spaces is investigated, e.g., in [4], [8], [10], [6], [18], [27], [28], [19], [20]. The technique of application of the Banach space valued Central Limit Theorem in the parametric Monte Carlo method is described in [23], [25], [1], [26].

B. Analogously one may consider integral equations of the form

$$ x(t_1, t_2) = f(t_1, t_2) + \int_0^{t_1} ds_1 \int_0^{t_2} ds_2 \cdot K(t_1, t_2, s_1, s_2)\, x(s_1, s_2), $$

with or without weak singularities.
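A practical version of the limit relation (6.4) replaces the unknown covariance $R(u_1, u_2)$ by its empirical counterpart on a grid and simulates the Gaussian field $\zeta$. The sketch below runs on synthetic replicate curves of our own construction (purely for illustration; in practice the rows would be the per-replicate fields from (6.2)):

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical replicate curves: rows play the role of the Z per-replicate
# fields (1/N) * sum_j L(u, .) evaluated on a grid of points u (synthetic data).
Z, m = 2000, 25
u = np.linspace(0.0, 1.0, m)
reps = np.sin(np.pi * u) * rng.standard_normal((Z, 1)) \
       + 0.1 * rng.standard_normal((Z, m))

xhat = reps.mean(axis=0)                      # the Monte Carlo solution on the grid
xi = reps - xhat                              # centered fields xi_i(u), cf. (6.2)
Rhat = (xi.T @ xi) / (Z - 1)                  # empirical covariance R(u1, u2)

# simulate the limiting Gaussian field zeta(u) and the law of sup |zeta(u)|
L = np.linalg.cholesky(Rhat + 1e-10 * np.eye(m))
sup = np.abs(rng.standard_normal((20_000, m)) @ L.T).max(axis=1)
Q = np.quantile(sup, 0.95)                    # so that P(sup |zeta| > Q) ~ 0.05

# asymptotic uniform-norm confidence band for the solution
band_lo, band_hi = xhat - Q / np.sqrt(Z), xhat + Q / np.sqrt(Z)
```

The grid approximation of the supremum is only a heuristic for a genuinely continuous field; the cited CLT conditions are what justify the passage to the limit.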
[1] Ostrovsky E.I. (1999). Exponential Estimations for Random Fields and its Applications (in Russian). Moscow-Obninsk, OINPE.
[2] Ostrovsky E. and Rogover E. Exact exponential Bounds for the random Field Maximum Distribution via the Majorizing Measures (Generic Chaining). arXiv:0802.0349v1 [math.PR] 4 Feb 2008.
[3] Araujo A., Gine E. The central limit theorem for real and Banach valued random variables. Wiley, (1980), London, New York.
[4] Billingsley P. Convergence of probability measures. Wiley, (1968), London, New York.
[5] Dudley R.M. Uniform Central Limit Theorems. Cambridge University Press, (1999).
[6] Ledoux M., Talagrand M. (1991). Probability in Banach Spaces. Springer, Berlin, MR 1102015.
[7] Fortet R. and Mourier E. Les fonctions aléatoires comme éléments aléatoires dans les espaces de Banach. Studia Math., (1955), 62-79.
[8] Garling D.J.H. Functional Central Limit Theorems in Banach Spaces. The Annals of Probability, Vol. 4, No. 4 (Aug., 1976), pp. 600-611.
[9] Gine E. On the Central Limit theorem for sample continuous processes. Ann. Probab., (1974), 2, 629-641.
[10] Gine E., Zinn J. Central Limit Theorem and Weak Laws of Large Numbers in certain Banach Spaces. Z. Wahrscheinlichkeitstheorie verw. Gebiete, (1983), 323-354.
[11] Grothaus M., Jahnert F., Riemann F., da Silva J.L. Mittag-Leffler Analysis I: Construction and characterization. arXiv:1407.8308v1 [math.FA] 31 Jul 2014.
[12] Heinkel B. Mesures majorantes et le théorème de la limite centrale dans C(S). Z. Wahrscheinlichkeitstheorie verw. Gebiete, (1977), 38, 339-351.
[13] Jain N.C. and Marcus M.B. Central limit theorem for C(S) valued random variables. J. of Funct. Anal., (1975), 19, 216-231.
[14] Kozachenko Yu.V., Ostrovsky E.I. (1985). The Banach Spaces of random Variables of subgaussian type. Theory of Probab. and Math. Stat. (in Russian), Kiev, KSU, 43-57.
[15] Mittag-Leffler G. Sur la représentation analytique d'une branche uniforme d'une fonction monogène (cinquième note). Acta Math., 101-181, 1905.
[16] Ostrovsky E., Sirota L. CLT for continuous random processes under approximations terms. arXiv:1304.0250v1 [math.PR] 31 Mar 2013.
[17] Ostrovsky E., Sirota L. Monte Carlo computation of multiple weak singular integrals of spherical and Volterra's type. arXiv:1405.6344v1 [math.NA] 24 May 2014.
[18] Pisier G., Zinn J. On the limit theorems for random variables with values in the spaces L_p. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 41, 289-304, (1978).
[19] Rackauskas A., Suquet Ch. Central limit theorems in Hölder topologies for Banach space valued random fields. Teor. Veroyatnost. i Primenen., 2004, Volume 49, Issue 1, Pages 109-125 (in Russian).
[20] Zinn J. A Note on the Central Limit Theorem in Banach Spaces. Ann. Probab., Volume 5, Number 2 (1977), 283-286.
[21] Bornemann F. On the numerical evaluation of Fredholm determinants. arXiv:0804.2543v1 [math.NA] 16 Apr 2008.
[22] Fredholm I. Sur une classe d'équations fonctionnelles. Acta Math., (1903).
[23] Frolov A.S., Tchentzov N.N. On the calculation by the Monte-Carlo method of definite integrals depending on parameters. Journal of Computational Mathematics and Mathematical Physics, (1962), V. 2, Issue 4, p. 714-718 (in Russian).
[24] Dunford N., Schwartz J.T. Linear operators. Part 1, General Theory, (1958), Interscience Publishers, New York, London.
[25] Grigorjeva M.L., Ostrovsky E.I. Calculation of Integrals on discontinuous Functions by means of depending trials method. Journal of Computational Mathematics and Mathematical Physics, (1996), V. 36, Issue 12, p. 28-39 (in Russian).
[26] Ostrovsky E., Sirota L. Monte-Carlo method for multiple parametric integrals calculation and solving of linear integral Fredholm equations of a second kind, with confidence regions in uniform norm. arXiv:1101.5381v1 [math.FA] 27 Jan 2011.
[27] Ostrovsky E., Sirota L. Central Limit Theorem and exponential tail estimations in mixed (anisotropic) Lebesgue spaces. arXiv:1308.5606v1 [math.PR] 26 Aug 2013.
[28] Ostrovsky E., Sirota L.