On the Gaussian approximation of vector-valued multiple integrals
Salim Noreddine∗ and Ivan Nourdin†‡

Université Paris 6 and Université Nancy 1
Abstract:
By combining the findings of two recent, seminal papers by Nualart, Peccati and Tudor, we get that the convergence in law of any sequence of vector-valued multiple integrals $F_n$ towards a centered Gaussian random vector $N$, with given covariance matrix $C$, is reduced to just the convergence of: (i) the fourth cumulant of each component of $F_n$ to zero; (ii) the covariance matrix of $F_n$ to $C$. The aim of this paper is to understand more deeply this somewhat surprising phenomenon. To reach this goal, we offer two results of a different nature. The first one is an explicit bound for $d(F,N)$ in terms of the fourth cumulants of the components of $F$, when $F$ is an $\mathbb{R}^d$-valued random vector whose components are multiple integrals of possibly different orders, $N$ is the Gaussian counterpart of $F$ (that is, a centered Gaussian vector sharing the same covariance with $F$) and $d$ stands for the Wasserstein distance. The second one is a new expression for the cumulants of $F$ as above, from which it is easy to derive yet another proof of the previously quoted result by Nualart, Peccati and Tudor.

Keywords:
Central limit theorem; Cumulants; Malliavin calculus; Multiple integrals; Ornstein-Uhlenbeck Semigroup.
1 Introduction

Let $B=(B_t)_{t\in[0,T]}$ be a standard Brownian motion. The following result, proved in [7, 8], yields a very surprising condition under which a sequence of vector-valued multiple integrals converges in law to a Gaussian random vector. (If needed, we refer the reader to Section 2 for the exact meaning of $\int_{[0,T]^q} f(t_1,\ldots,t_q)\,dB_{t_1}\ldots dB_{t_q}$.)

Theorem 1.1 (Nualart-Peccati-Tudor)
Let $q_1,\ldots,q_d\geq 1$ be some fixed integers. Consider an $\mathbb{R}^d$-valued random sequence of the form
$$F_n=(F_{1,n},\ldots,F_{d,n})=\left(\int_{[0,T]^{q_1}} f_{1,n}(t_1,\ldots,t_{q_1})\,dB_{t_1}\ldots dB_{t_{q_1}},\;\ldots,\;\int_{[0,T]^{q_d}} f_{d,n}(t_1,\ldots,t_{q_d})\,dB_{t_1}\ldots dB_{t_{q_d}}\right),$$
where each $f_{i,n}\in L^2([0,T]^{q_i})$ is supposed to be symmetric. Let $N\sim\mathcal{N}_d(0,C)$ be a centered Gaussian random vector on $\mathbb{R}^d$ with covariance matrix $C$. Assume furthermore that
$$\lim_{n\to\infty} E[F_{i,n}F_{j,n}]=C_{ij}\quad\text{for all }i,j=1,\ldots,d. \quad(1.1)$$
Then, as $n\to\infty$, the following two assertions are equivalent:
(i) $F_n\overset{\mathrm{Law}}{\longrightarrow} N$;
(ii) $\forall i=1,\ldots,d$: $E[F_{i,n}^4]-3E[F_{i,n}^2]^2\to 0$.

This theorem represents a drastic simplification with respect to the method of moments. The original proofs performed in [7, 8] are both based on tools coming from Brownian stochastic analysis, such as the Dambis, Dubins and Schwarz theorem. In [6], Nualart and Ortiz-Latorre gave an alternative proof exclusively using the basic operators $\delta$, $D$ and $L$ of Malliavin calculus. Later on, combining Malliavin calculus with Stein's method in the spirit of [1], Nourdin, Peccati and Réveillac were able to associate an explicit bound to convergence (i) in Theorem 1.1:

∗ Laboratoire de Probabilités et Modèles Aléatoires, Université Paris 6, Boîte courrier 188, 4 Place Jussieu, 75252 Paris Cedex 5, France. Email: [email protected]
† Institut Élie Cartan, Université Henri Poincaré, BP 239, 54506 Vandoeuvre-lès-Nancy, France. Email: [email protected]
‡ Supported in part by the (French) ANR grant 'Exploration des Chemins Rugueux'.

Theorem 1.2 (see [4])
Consider an $\mathbb{R}^d$-valued random vector of the form
$$F=(F_1,\ldots,F_d)=\left(\int_{[0,T]^{q_1}} f_1(t_1,\ldots,t_{q_1})\,dB_{t_1}\ldots dB_{t_{q_1}},\;\ldots,\;\int_{[0,T]^{q_d}} f_d(t_1,\ldots,t_{q_d})\,dB_{t_1}\ldots dB_{t_{q_d}}\right),$$
where $q_1,\ldots,q_d\geq 1$ are some given integers, and each $f_i\in L^2([0,T]^{q_i})$ is symmetric. Let $C=(C_{ij})_{1\leq i,j\leq d}\in\mathcal{M}_d(\mathbb{R})$ be the covariance matrix of $F$, i.e. $C_{ij}=E[F_iF_j]$. Consider a centered Gaussian random vector $N\sim\mathcal{N}_d(0,C)$ with the same covariance matrix $C$. Then:
$$d(F,N):=\sup_{h\in\mathrm{Lip}(1)}\big|E[h(F)]-E[h(N)]\big|\leq\|C^{-1}\|_{op}\,\|C\|_{op}^{1/2}\,\Delta_C(F), \quad(1.2)$$
with the convention $\|C^{-1}\|_{op}=+\infty$ whenever $C$ is not invertible. Here:
- $\mathrm{Lip}(1)$ is the set of Lipschitz functions with constant 1 (that is, the set of functions $h:\mathbb{R}^d\to\mathbb{R}$ such that $|h(x)-h(y)|\leq\|x-y\|_{\mathbb{R}^d}$ for all $x,y\in\mathbb{R}^d$),
- $\|C\|_{op}=\sup_{x\in\mathbb{R}^d\setminus\{0\}}\|Cx\|_{\mathbb{R}^d}/\|x\|_{\mathbb{R}^d}$ denotes the operator norm on $\mathcal{M}_d(\mathbb{R})$,
- the quantity $\Delta_C(F)$ is defined as
$$\Delta_C(F):=\sqrt{\sum_{i,j=1}^d E\left[\Big(C_{ij}-\frac{1}{q_j}\langle DF_i,DF_j\rangle_{L^2([0,T])}\Big)^2\right]}, \quad(1.3)$$
where $D$ indicates the Malliavin derivative operator (see Section 2) and $\langle\cdot,\cdot\rangle_{L^2([0,T])}$ is the usual inner product on $L^2([0,T])$.

When the covariance matrix $C$ of $F$ is not invertible (or when one is not able to check whether it is or not), one is forced to work with functions $h$ that are smoother than the ones involved in the definition (1.2) of $d(F,N)$. To this end, we adopt the following simplified notation for functions $h:\mathbb{R}^d\to\mathbb{R}$ belonging to $C^2$:
$$\|h''\|_\infty=\max_{i,j=1,\ldots,d}\ \sup_{x\in\mathbb{R}^d}\left|\frac{\partial^2 h}{\partial x_i\,\partial x_j}(x)\right|. \quad(1.4)$$

Theorem 1.3 (see [2])
Let the notation and assumptions of Theorem 1.2 prevail. Then:
$$d(F,N):=\sup_{\|h''\|_\infty\leq 1}\big|E[h(F)]-E[h(N)]\big|\leq\frac12\,\Delta_C(F), \quad(1.5)$$
with $\Delta_C(F)$ still given by (1.3).

Are the upper bounds (1.2)-(1.5) in Theorems 1.2 and 1.3 relevant? Yes, very! Indeed, we have the following proposition.
Proposition 1.4 (see [6])
Let the notation and assumptions of Theorem 1.1 prevail, and recall the definition (1.3). Then, as $n\to\infty$, $\Delta_C(F_n)\to 0$ if and only if $E[F_{i,n}^4]-3E[F_{i,n}^2]^2\to 0$ for all $i$.

In the present paper, as a first result we offer the following quantitative version of Proposition 1.4.
Theorem 1.5
Let the notation and assumptions of Theorem 1.2 prevail, and recall the definition (1.3) of $\Delta_C(F)$. Then:
$$\Delta_C(F)\leq\psi\big(E[F_1^4]-3E[F_1^2]^2,\,E[F_1^2],\,\ldots,\,E[F_d^4]-3E[F_d^2]^2,\,E[F_d^2]\big), \quad(1.6)$$
with $\psi:(\mathbb{R}\times\mathbb{R}_+)^d\to\mathbb{R}$ the function defined as
$$\psi(x_1,y_1,\ldots,x_d,y_d)=\sum_{i,j=1}^d\mathbf{1}_{\{q_i=q_j\}}\,\frac{\sqrt2}{q_i}\sqrt{\sum_{r=1}^{q_i-1}r^2\binom{2q_i-2r}{q_i-r}}\;|x_i|^{1/2}+\sum_{i,j=1}^d\mathbf{1}_{\{q_i\neq q_j\}}\left\{2\sqrt{y_j}\,|x_i|^{1/4}+\sqrt2\sum_{r=1}^{q_i\wedge q_j-1}\sqrt{(q_i+q_j-2r)!}\,\binom{q_j}{r}\,|x_i|^{1/2}\right\}. \quad(1.7)$$
Since, for each compact $B\subset(0,\infty)^d$, it is readily checked that there exists a constant $c_{B,q_1,\ldots,q_d}>0$ such that
$$\sup_{(y_1,\ldots,y_d)\in B}\psi(x_1,y_1,\ldots,x_d,y_d)\leq c_{B,q_1,\ldots,q_d}\sum_{i=1}^d\big(|x_i|^{1/4}+|x_i|^{1/2}\big),$$
we immediately see that the upper bound (1.6), together with Theorem 1.3, now shows in a clear manner why (ii) implies (i) in Theorem 1.1.

In a second part of this paper, we are interested in 'calculating', by means of the basic operators $D$ and $L$ of Malliavin calculus, the cumulants of any vector-valued functional $F$ of the Brownian motion $B$. (Actually, we will even do so for functionals of any given isonormal Gaussian process $X$.) In fact, this part is nothing but the multivariate extension of the results obtained by Nourdin and Peccati in [3]. Then, in the particular case where the components of $F$ have the form of a multiple Wiener-Itô integral (as in Theorem 1.2), our formula leads to a new compact representation for the cumulants of $F$ (see Theorem 1.6 just below), implying in turn yet another proof of Theorem 1.1 (see Section 4.3).

Theorem 1.6
Let $m\in\mathbb{N}^d\setminus\{0\}$ with $|m|\geq 2$. Write $m=l_1+\ldots+l_{|m|}$, where $l_i\in\{e_1,\ldots,e_d\}$ for each $i$. (Up to possible permutations of the factors, we have existence and uniqueness of this decomposition of $m$.) Consider an $\mathbb{R}^d$-valued random vector of the form
$$F=(F_1,\ldots,F_d)=\left(\int_{[0,T]^{q_1}} f_1(t_1,\ldots,t_{q_1})\,dB_{t_1}\ldots dB_{t_{q_1}},\;\ldots,\;\int_{[0,T]^{q_d}} f_d(t_1,\ldots,t_{q_d})\,dB_{t_1}\ldots dB_{t_{q_d}}\right),$$
where $q_1,\ldots,q_d\geq 1$ are some given integers, and each $f_i\in L^2([0,T]^{q_i})$ is symmetric. When $l_k=e_j$, we set $\lambda_k=j$, so that $F_{l_k}=F_{\lambda_k}$ for all $k=1,\ldots,|m|$. Then:
$$\kappa_m(F)=q_{\lambda_{|m|}}!\,(|m|-1)!\sum c_{q,l}(r_1,\ldots,r_{|m|-2})\,\big\langle f_{\lambda_1}\widetilde{\otimes}_{r_1}f_{\lambda_2}\ldots\widetilde{\otimes}_{r_{|m|-2}}f_{\lambda_{|m|-1}},\,f_{\lambda_{|m|}}\big\rangle_{L^2([0,T]^{q_{\lambda_{|m|}}})},$$
where the sum $\sum$ runs over all collections of integers $r_1,\ldots,r_{|m|-2}$ such that:
(i) $1\leq r_1\leq q_{\lambda_1}\wedge q_{\lambda_2}$ and $1\leq r_i\leq q_{\lambda_{i+1}}$ for all $i=2,\ldots,|m|-2$;
(ii) $r_1+\ldots+r_{|m|-2}=\frac{q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}-q_{\lambda_{|m|}}}{2}$;
(iii) $r_1<\frac{q_{\lambda_1}+q_{\lambda_2}}{2},\;\ldots,\;r_1+\ldots+r_{|m|-3}<\frac{q_{\lambda_1}+\ldots+q_{\lambda_{|m|-2}}}{2}$;
(iv) $r_2\leq q_{\lambda_1}+q_{\lambda_2}-2r_1,\;\ldots,\;r_{|m|-2}\leq q_{\lambda_1}+\ldots+q_{\lambda_{|m|-2}}-2r_1-\ldots-2r_{|m|-3}$;
and where the combinatorial constants $c_{q,l}(r_1,\ldots,r_s)$ are recursively defined by the relations
$$c_{q,l}(r_1)=q_{\lambda_2}\,(r_1-1)!\binom{q_{\lambda_1}-1}{r_1-1}\binom{q_{\lambda_2}-1}{r_1-1},$$
and, for $s\geq 2$,
$$c_{q,l}(r_1,\ldots,r_s)=q_{\lambda_{s+1}}\,(r_s-1)!\binom{q_{\lambda_1}+\ldots+q_{\lambda_s}-2r_1-\ldots-2r_{s-1}-1}{r_s-1}\binom{q_{\lambda_{s+1}}-1}{r_s-1}\,c_{q,l}(r_1,\ldots,r_{s-1}).$$

2 Preliminaries

In this section, we present the basic elements of Gaussian analysis and Malliavin calculus that are used throughout this paper. The reader is referred to [5] for any unexplained definition or result. Let $H$ be a real separable Hilbert space. For any $q\geq 1$, let $H^{\otimes q}$ be the $q$th tensor power of $H$, and denote by $H^{\odot q}$ the associated $q$th symmetric tensor power.
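For readers who like to experiment, the symmetric tensor power can be made concrete by taking $H=\mathbb{R}^n$: an element of $H^{\otimes q}$ is then a $q$-dimensional array, and its canonical projection onto $H^{\odot q}$ averages over all permutations of the axes. The following sketch is illustrative only (the helper name `symmetrize` is ours, not the paper's); it checks two elementary properties used implicitly throughout: symmetrization is idempotent and norm non-increasing.

```python
import itertools
import numpy as np

def symmetrize(f):
    """Project a q-tensor onto the symmetric tensor power by averaging
    over all permutations of its axes."""
    q = f.ndim
    perms = list(itertools.permutations(range(q)))
    return sum(np.transpose(f, p) for p in perms) / len(perms)

rng = np.random.default_rng(0)
f = rng.standard_normal((4, 4, 4))   # a generic 3-tensor in H^{(x)3}, H = R^4
fs = symmetrize(f)

# fs is invariant under any permutation of its axes,
assert np.allclose(fs, np.transpose(fs, (1, 0, 2)))
# symmetrization is a projection (idempotent),
assert np.allclose(symmetrize(fs), fs)
# and it is norm non-increasing: ||f~|| <= ||f||.
assert np.linalg.norm(fs) <= np.linalg.norm(f) + 1e-12
```

The same `symmetrize` helper also realizes the symmetrized contraction $f\widetilde{\otimes}_r g$ once the plain contraction is computed (e.g. with `np.tensordot`).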
We write $X=\{X(h):h\in H\}$ to indicate an isonormal Gaussian process over $H$ (fixed once and for all), defined on some probability space $(\Omega,\mathcal{F},P)$. This means that $X$ is a centered Gaussian family whose covariance is given by the relation $E[X(h)X(g)]=\langle h,g\rangle_H$. We also assume that $\mathcal{F}=\sigma(X)$, that is, $\mathcal{F}$ is generated by $X$.

For every $q\geq 1$, let $\mathcal{H}_q$ be the $q$th Wiener chaos of $X$, defined as the closed linear subspace of $L^2(\Omega,\mathcal{F},P)$ generated by the family $\{H_q(X(h)):h\in H,\ \|h\|_H=1\}$, where $H_q$ is the $q$th Hermite polynomial, given by
$$H_q(x)=(-1)^q e^{x^2/2}\frac{d^q}{dx^q}\big(e^{-x^2/2}\big).$$
We write by convention $\mathcal{H}_0=\mathbb{R}$. For any $q\geq 1$, the mapping $I_q(h^{\otimes q})=H_q(X(h))$ can be extended to a linear isometry between the symmetric tensor product $H^{\odot q}$ (equipped with the modified norm $\sqrt{q!}\,\|\cdot\|_{H^{\otimes q}}$) and the $q$th Wiener chaos $\mathcal{H}_q$. For $q=0$, we write $I_0(c)=c$, $c\in\mathbb{R}$. For $q=1$, we have $I_1(h)=X(h)$, $h\in H$. Moreover, a random variable of the type $I_q(h)$, $h\in H^{\odot q}$, has finite moments of all orders.

In the particular case where $H=L^2([0,T])$, one has that $(B_t)_{t\in[0,T]}=\big(X(\mathbf{1}_{[0,t]})\big)_{t\in[0,T]}$ is a standard Brownian motion. Moreover, $H^{\odot q}=L^2_s([0,T]^q)$ is the space of symmetric and square integrable functions on $[0,T]^q$, and
$$I_q(f)=:\int_{[0,T]^q} f(t_1,\ldots,t_q)\,dB_{t_1}\ldots dB_{t_q},\quad f\in H^{\odot q},$$
coincides with the multiple Wiener-Itô integral of order $q$ of $f$ with respect to $B$; see [5] for further details about this point.

It is well known that $L^2(\Omega):=L^2(\Omega,\mathcal{F},P)$ can be decomposed into the infinite orthogonal sum of the spaces $\mathcal{H}_q$. It follows that any square integrable random variable $F\in L^2(\Omega)$ admits the following so-called chaotic expansion:
$$F=\sum_{q=0}^\infty I_q(f_q), \quad(2.8)$$
where $f_0=E[F]$, and the $f_q\in H^{\odot q}$, $q\geq 1$, are uniquely determined by $F$. For every $q\geq 0$, we denote by $J_q$ the orthogonal projection operator on the $q$th Wiener chaos.
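The orthogonality underlying the chaos decomposition can be checked numerically in the scalar case: for $Z\sim\mathcal N(0,1)$ one has $E[H_p(Z)H_q(Z)]=\delta_{pq}\,p!$, which mirrors the isometry $E[I_p(f)I_q(g)]=\delta_{pq}\,p!\,\langle f,g\rangle_{H^{\otimes p}}$ for normalized kernels. This is an illustration only, not part of the paper's argument; it uses NumPy's probabilists' Hermite module, whose `He` polynomials match the $H_q$ defined above.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

x, w = hermegauss(30)            # Gauss nodes/weights for the weight exp(-x^2/2)
w = w / math.sqrt(2 * math.pi)   # normalize so the weights integrate the N(0,1) density

def He(q, x):
    """Probabilists' Hermite polynomial He_q evaluated at x."""
    c = np.zeros(q + 1)
    c[q] = 1.0
    return hermeval(x, c)

# E[He_p(Z) He_q(Z)] = delta_{pq} * p!  for Z ~ N(0,1)
for p in range(6):
    for q in range(6):
        inner = np.sum(w * He(p, x) * He(q, x))
        expected = math.factorial(p) if p == q else 0.0
        assert abs(inner - expected) < 1e-7, (p, q)
```

The quadrature is exact here (30 Gauss nodes integrate polynomials up to degree 59), so the assertions hold up to floating-point error.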
In particular, if $F\in L^2(\Omega)$ is as in (2.8), then $J_qF=I_q(f_q)$ for every $q\geq 0$.

Let $\{e_k\}_{k\geq 1}$ be a complete orthonormal system in $H$. Given $f\in H^{\odot p}$ and $g\in H^{\odot q}$, for every $r=0,\ldots,p\wedge q$, the contraction of $f$ and $g$ of order $r$ is the element of $H^{\otimes(p+q-2r)}$ defined by
$$f\otimes_r g=\sum_{i_1,\ldots,i_r=1}^\infty\langle f,e_{i_1}\otimes\ldots\otimes e_{i_r}\rangle_{H^{\otimes r}}\otimes\langle g,e_{i_1}\otimes\ldots\otimes e_{i_r}\rangle_{H^{\otimes r}}. \quad(2.9)$$
Note that the definition of $f\otimes_r g$ does not depend on the particular choice of $\{e_k\}_{k\geq 1}$, and that $f\otimes_r g$ is not necessarily symmetric; we denote its symmetrization by $f\widetilde{\otimes}_r g\in H^{\odot(p+q-2r)}$. Moreover, $f\otimes_0 g=f\otimes g$ equals the tensor product of $f$ and $g$, whereas $f\otimes_q g=\langle f,g\rangle_{H^{\otimes q}}$ whenever $p=q$. It can be shown that the following product formula holds: if $f\in H^{\odot p}$ and $g\in H^{\odot q}$, then
$$I_p(f)I_q(g)=\sum_{r=0}^{p\wedge q}r!\binom{p}{r}\binom{q}{r}I_{p+q-2r}(f\widetilde{\otimes}_r g). \quad(2.10)$$

We now introduce some basic elements of the Malliavin calculus with respect to the isonormal Gaussian process $X$. Let $\mathcal{S}$ be the set of all cylindrical random variables of the form
$$F=g(X(\phi_1),\ldots,X(\phi_n)), \quad(2.11)$$
where $n\geq 1$, $g:\mathbb{R}^n\to\mathbb{R}$ is an infinitely differentiable function such that its partial derivatives have polynomial growth, and each $\phi_i$ belongs to $H$. The Malliavin derivative of $F$ with respect to $X$ is the element of $L^2(\Omega,H)$ defined as
$$DF=\sum_{i=1}^n\frac{\partial g}{\partial x_i}(X(\phi_1),\ldots,X(\phi_n))\,\phi_i.$$
In particular, $DX(h)=h$ for every $h\in H$. By iteration, one can define the $m$th derivative $D^mF$, which is an element of $L^2(\Omega,H^{\odot m})$, for every $m\geq 2$. For $m\geq 1$ and $p\geq 1$, $\mathbb{D}^{m,p}$ denotes the closure of $\mathcal{S}$ with respect to the norm $\|\cdot\|_{m,p}$, defined by the relation
$$\|F\|_{m,p}^p=E[|F|^p]+\sum_{i=1}^m E\big[\|D^iF\|_{H^{\otimes i}}^p\big].$$
One also writes $\mathbb{D}^\infty=\bigcap_{m\geq 1}\bigcap_{p\geq 1}\mathbb{D}^{m,p}$. The Malliavin derivative $D$ obeys the following chain rule. If $\varphi:\mathbb{R}^n\to\mathbb{R}$ is continuously differentiable with bounded partial derivatives and if $F=(F_1,\ldots,F_n)$ is a vector of elements of $\mathbb{D}^{1,2}$, then $\varphi(F)\in\mathbb{D}^{1,2}$ and
$$D\varphi(F)=\sum_{i=1}^n\frac{\partial\varphi}{\partial x_i}(F)\,DF_i. \quad(2.12)$$
The conditions imposed on $\varphi$ for (2.12) to hold (that is, the partial derivatives of $\varphi$ must be bounded) are by no means optimal. For instance, the chain rule combined with a classical approximation argument leads to $D(X(h)^m)=mX(h)^{m-1}h$ for $m\geq 1$ and $h\in H$.

We denote by $\delta$ the adjoint of the operator $D$, also called the divergence operator. A random element $u\in L^2(\Omega,H)$ belongs to the domain of $\delta$, denoted $\mathrm{Dom}\,\delta$, if and only if it verifies $|E\langle DF,u\rangle_H|\leq c_u\|F\|_{L^2(\Omega)}$ for any $F\in\mathbb{D}^{1,2}$, where $c_u$ is a constant depending only on $u$. If $u\in\mathrm{Dom}\,\delta$, then the random variable $\delta(u)$ is defined by the duality relationship
$$E[F\delta(u)]=E\langle DF,u\rangle_H, \quad(2.13)$$
which holds for every $F\in\mathbb{D}^{1,2}$.

The operator $L$ is defined as $L=\sum_{q=0}^\infty -qJ_q$. The domain of $L$ is
$$\mathrm{Dom}\,L=\Big\{F\in L^2(\Omega):\ \sum_{q=1}^\infty q^2\,E[(J_qF)^2]<\infty\Big\}=\mathbb{D}^{2,2}.$$
There is an important relation between the operators $D$, $\delta$ and $L$. A random variable $F$ belongs to $\mathbb{D}^{2,2}$ if and only if $F\in\mathrm{Dom}(\delta D)$ (i.e. $F\in\mathbb{D}^{1,2}$ and $DF\in\mathrm{Dom}\,\delta$) and, in this case,
$$\delta DF=-LF. \quad(2.14)$$
For any $F\in L^2(\Omega)$, we define $L^{-1}F=\sum_{q=1}^\infty-\frac{1}{q}J_q(F)$. The operator $L^{-1}$ is called the pseudo-inverse of $L$. Indeed, for any $F\in L^2(\Omega)$, we have that $L^{-1}F\in\mathrm{Dom}\,L=\mathbb{D}^{2,2}$, and
$$LL^{-1}F=F-E[F]. \quad(2.15)$$
We end these preliminaries on Malliavin calculus by stating a useful lemma, which is going to be used intensively in the forthcoming Section 4.

Lemma 2.1
Suppose that $F\in\mathbb{D}^{1,2}$ and $G\in L^2(\Omega)$. Then $L^{-1}G\in\mathbb{D}^{2,2}$ and we have:
$$E[FG]=E[F]E[G]+E[\langle DF,-DL^{-1}G\rangle_H]. \quad(2.16)$$
Proof. By (2.14) and (2.15),
$$E[FG]-E[F]E[G]=E[F(G-E[G])]=E[F\times LL^{-1}G]=E[F\delta(-DL^{-1}G)],$$
and the result is obtained by using the integration by parts formula (2.13). ✷

3 Proof of Theorem 1.5

The aim of this section is to prove Theorem 1.5. We restate it here for convenience, by reformulating it in the more general context of an isonormal Gaussian process rather than Brownian motion.

Theorem 1.5
Let $X=\{X(h):h\in H\}$ be an isonormal Gaussian process, and let $q_1,\ldots,q_d\geq 1$ be some fixed integers. Consider an $\mathbb{R}^d$-valued random vector of the form $F=(F_1,\ldots,F_d)=\big(I_{q_1}(f_1),\ldots,I_{q_d}(f_d)\big)$, where each $f_i$ belongs to $H^{\odot q_i}$. Let $C=(C_{ij})_{1\leq i,j\leq d}\in\mathcal{M}_d(\mathbb{R})$ be the covariance matrix of $F$, i.e. $C_{ij}=E[F_iF_j]$, and consider a centered Gaussian random vector $N\sim\mathcal{N}_d(0,C)$ with the same covariance matrix $C$. Then
$$\Delta_C(F)\leq\psi\big(E[F_1^4]-3E[F_1^2]^2,\,E[F_1^2],\,\ldots,\,E[F_d^4]-3E[F_d^2]^2,\,E[F_d^2]\big), \quad(3.17)$$
with $\Delta_C(F)$ given by (1.3), and where $\psi:(\mathbb{R}\times\mathbb{R}_+)^d\to\mathbb{R}$ is the function given by (1.7).

In order to prove Theorem 1.5, we first need to gather several results from the existing literature. We collect them in the following lemma. We freely use the definitions and notation introduced in Sections 1 and 2.
Lemma 3.1
Let $F=I_p(f)$ and $G=I_q(g)$, with $f\in H^{\odot p}$ and $g\in H^{\odot q}$ ($p,q\geq 1$).

1. If $p=q$, one has the estimate:
$$E\left[\Big(E[FG]-\frac1p\langle DF,DG\rangle_H\Big)^2\right]\leq\frac{p^2}{2}\sum_{r=1}^{p-1}((r-1)!)^2\binom{p-1}{r-1}^4(2p-2r)!\,\big(\|f\otimes_{p-r}f\|^2_{H^{\otimes 2r}}+\|g\otimes_{p-r}g\|^2_{H^{\otimes 2r}}\big), \quad(3.18)$$
whereas, if $p<q$, one has that
$$E\left[\Big(\frac1q\langle DF,DG\rangle_H\Big)^2\right]\leq(p!)^2\binom{q-1}{p-1}^2(q-p)!\,\|f\|^2_{H^{\otimes p}}\,\|g\otimes_{q-p}g\|_{H^{\otimes 2p}} \quad(3.19)$$
$$\qquad\qquad+\frac{p^2}{2}\sum_{r=1}^{p-1}((r-1)!)^2\binom{p-1}{r-1}^2\binom{q-1}{r-1}^2(p+q-2r)!\,\big(\|f\otimes_{p-r}f\|^2_{H^{\otimes 2r}}+\|g\otimes_{q-r}g\|^2_{H^{\otimes 2r}}\big).$$
2. One has the identity:
$$E[F^4]-3E[F^2]^2=\sum_{r=1}^{p-1}(p!)^2\binom{p}{r}^2\left\{\|f\otimes_r f\|^2_{H^{\otimes(2p-2r)}}+\binom{2p-2r}{p-r}\|f\widetilde{\otimes}_r f\|^2_{H^{\otimes(2p-2r)}}\right\}. \quad(3.20)$$

Proof. Inequalities (3.18)-(3.19) correspond to [4, Lemma 3.7] (see also [6, Proof of Lemma 6]), whereas identity (3.20) is shown in [7, page 182]. However, for the convenience of the reader (and also because the notation used in [7] is not exactly the same as ours), we provide here a detailed proof of (3.18), (3.19) and (3.20).

1. Thanks to the multiplication formula (2.10), we can write
$$\langle DF,DG\rangle_H=pq\,\langle I_{p-1}(f),I_{q-1}(g)\rangle_H=pq\sum_{r=0}^{p\wedge q-1}r!\binom{p-1}{r}\binom{q-1}{r}I_{p+q-2-2r}(f\widetilde{\otimes}_{r+1}g)=pq\sum_{r=1}^{p\wedge q}(r-1)!\binom{p-1}{r-1}\binom{q-1}{r-1}I_{p+q-2r}(f\widetilde{\otimes}_r g).$$
It follows that
$$E\left[\Big(\alpha-\frac1q\langle DF,DG\rangle_H\Big)^2\right]=\begin{cases}\alpha^2+p^2\displaystyle\sum_{r=1}^{p}((r-1)!)^2\binom{p-1}{r-1}^2\binom{q-1}{r-1}^2(p+q-2r)!\,\|f\widetilde{\otimes}_r g\|^2_{H^{\otimes(p+q-2r)}}&\text{if }p<q,\\[3mm]\big(\alpha-E[FG]\big)^2+p^2\displaystyle\sum_{r=1}^{p-1}((r-1)!)^2\binom{p-1}{r-1}^4(2p-2r)!\,\|f\widetilde{\otimes}_r g\|^2_{H^{\otimes(2p-2r)}}&\text{if }p=q.\end{cases} \quad(3.21)$$
If $r<p\leq q$, then
$$\|f\widetilde{\otimes}_r g\|^2_{H^{\otimes(p+q-2r)}}\leq\|f\otimes_r g\|^2_{H^{\otimes(p+q-2r)}}=\langle f\otimes_{p-r}f,\,g\otimes_{q-r}g\rangle_{H^{\otimes 2r}}\leq\|f\otimes_{p-r}f\|_{H^{\otimes 2r}}\,\|g\otimes_{q-r}g\|_{H^{\otimes 2r}}\leq\frac12\big(\|f\otimes_{p-r}f\|^2_{H^{\otimes 2r}}+\|g\otimes_{q-r}g\|^2_{H^{\otimes 2r}}\big). \quad(3.22)$$
If $r=p<q$, then
$$\|f\widetilde{\otimes}_p g\|^2_{H^{\otimes(q-p)}}\leq\|f\otimes_p g\|^2_{H^{\otimes(q-p)}}\leq\|f\|^2_{H^{\otimes p}}\,\|g\otimes_{q-p}g\|_{H^{\otimes 2p}}. \quad(3.23)$$
By plugging these two inequalities into (3.21), we deduce both (3.18) and (3.19).

2. Without loss of generality, in the proof of (3.20) we can assume that $H$ is an $L^2$-space of the form $H=L^2(A,\mathcal{A},\mu)$. Let $\sigma$ be a permutation of $\{1,\ldots,2p\}$ (that is, $\sigma\in S_{2p}$), and let $f\in H^{\odot p}$. If $r\in\{0,\ldots,p\}$ denotes the cardinality of $\{\sigma(1),\ldots,\sigma(p)\}\cap\{1,\ldots,p\}$, then it is readily checked that $r$ is also the cardinality of $\{\sigma(p+1),\ldots,\sigma(2p)\}\cap\{p+1,\ldots,2p\}$ and that
$$\int_{A^{2p}}f(t_1,\ldots,t_p)f(t_{\sigma(1)},\ldots,t_{\sigma(p)})f(t_{p+1},\ldots,t_{2p})f(t_{\sigma(p+1)},\ldots,t_{\sigma(2p)})\,d\mu(t_1)\ldots d\mu(t_{2p})$$
$$=\int_{A^{2p-2r}}\big(f\otimes_r f\big)^2(x_1,\ldots,x_{2p-2r})\,d\mu(x_1)\ldots d\mu(x_{2p-2r})=\|f\otimes_r f\|^2_{H^{\otimes(2p-2r)}}. \quad(3.24)$$
Moreover, for any fixed $r\in\{0,\ldots,p\}$, there are $\binom{p}{r}^2(p!)^2$ permutations $\sigma\in S_{2p}$ such that $\sharp\{\sigma(1),\ldots,\sigma(p)\}\cap\{1,\ldots,p\}=r$. (Indeed, such a permutation is completely determined by the choice of: (a) $r$ distinct elements $x_1,\ldots,x_r$ of $\{1,\ldots,p\}$; (b) $p-r$ distinct elements $x_{r+1},\ldots,x_p$ of $\{p+1,\ldots,2p\}$; (c) a bijection between $\{1,\ldots,p\}$ and $\{x_1,\ldots,x_p\}$; (d) a bijection between $\{p+1,\ldots,2p\}$ and $\{1,\ldots,2p\}\setminus\{x_1,\ldots,x_p\}$.) Now, observe that the symmetrization of $f\otimes f$ is given by
$$f\widetilde{\otimes}f(t_1,\ldots,t_{2p})=\frac{1}{(2p)!}\sum_{\sigma\in S_{2p}}f(t_{\sigma(1)},\ldots,t_{\sigma(p)})f(t_{\sigma(p+1)},\ldots,t_{\sigma(2p)}).$$
Therefore,
$$\|f\widetilde{\otimes}f\|^2_{H^{\otimes 2p}}=\frac{1}{((2p)!)^2}\sum_{\sigma,\sigma'\in S_{2p}}\int_{A^{2p}}f(t_{\sigma(1)},\ldots,t_{\sigma(p)})f(t_{\sigma(p+1)},\ldots,t_{\sigma(2p)})\,f(t_{\sigma'(1)},\ldots,t_{\sigma'(p)})f(t_{\sigma'(p+1)},\ldots,t_{\sigma'(2p)})\,d\mu(t_1)\ldots d\mu(t_{2p})$$
$$=\frac{1}{(2p)!}\sum_{\sigma\in S_{2p}}\int_{A^{2p}}f(t_1,\ldots,t_p)f(t_{p+1},\ldots,t_{2p})\,f(t_{\sigma(1)},\ldots,t_{\sigma(p)})f(t_{\sigma(p+1)},\ldots,t_{\sigma(2p)})\,d\mu(t_1)\ldots d\mu(t_{2p})$$
$$=\frac{1}{(2p)!}\sum_{r=0}^{p}\ \sum_{\substack{\sigma\in S_{2p}\\ \sharp\{\sigma(1),\ldots,\sigma(p)\}\cap\{1,\ldots,p\}=r}}\int_{A^{2p}}f(t_1,\ldots,t_p)f(t_{p+1},\ldots,t_{2p})\,f(t_{\sigma(1)},\ldots,t_{\sigma(p)})f(t_{\sigma(p+1)},\ldots,t_{\sigma(2p)})\,d\mu(t_1)\ldots d\mu(t_{2p}).$$
Using (3.24), we deduce that
$$(2p)!\,\|f\widetilde{\otimes}f\|^2_{H^{\otimes 2p}}=2(p!)^2\|f\|^4_{H^{\otimes p}}+(p!)^2\sum_{r=1}^{p-1}\binom{p}{r}^2\|f\otimes_r f\|^2_{H^{\otimes(2p-2r)}}. \quad(3.25)$$
On the other hand, we infer from the product formula (2.10) that
$$F^2=I_p(f)^2=\sum_{r=0}^{p}r!\binom{p}{r}^2 I_{2p-2r}(f\widetilde{\otimes}_r f).$$
Using the orthogonality and isometry properties of the integrals $I_q$, this yields
$$E[F^4]=\sum_{r=0}^{p}(r!)^2\binom{p}{r}^4(2p-2r)!\,\|f\widetilde{\otimes}_r f\|^2_{H^{\otimes(2p-2r)}}=(2p)!\,\|f\widetilde{\otimes}f\|^2_{H^{\otimes 2p}}+(p!)^2\|f\|^4_{H^{\otimes p}}+\sum_{r=1}^{p-1}(r!)^2\binom{p}{r}^4(2p-2r)!\,\|f\widetilde{\otimes}_r f\|^2_{H^{\otimes(2p-2r)}}.$$
By inserting (3.25) in the previous identity (and because $(p!)^2\|f\|^4_{H^{\otimes p}}=E[F^2]^2$), we get (3.20). ✷
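Identity (3.20) can be checked numerically in the simplest non-trivial case $p=2$. Take $H=\mathbb{R}^3$ and a diagonal kernel $f=\mathrm{diag}(\lambda)$ (a simplifying assumption of ours): then $I_2(f)=\sum_i\lambda_i(Z_i^2-1)$ for i.i.d. standard Gaussians $Z_i$, the contraction $f\otimes_1 f$ is the matrix square with $\|f\otimes_1 f\|^2=\sum_i\lambda_i^4$, and the right-hand side of (3.20) collapses to $48\sum_i\lambda_i^4$. The sketch below evaluates the moments by deterministic tensorized Gauss quadrature, so no Monte Carlo error is involved.

```python
import itertools
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

lam = np.array([0.5, -1.0, 2.0])   # eigenvalues of the symmetric kernel f
x, w = hermegauss(12)              # exact for polynomials of degree <= 23
w = w / math.sqrt(2 * math.pi)

# F = I_2(f) = sum_i lam_i (Z_i^2 - 1) for f = diag(lam); compute E[F^2], E[F^4]
# exactly by tensorized Gauss quadrature over (Z_1, Z_2, Z_3).
m2 = m4 = 0.0
for idx in itertools.product(range(len(x)), repeat=len(lam)):
    z = x[list(idx)]
    weight = np.prod(w[list(idx)])
    F = np.sum(lam * (z**2 - 1))
    m2 += weight * F**2
    m4 += weight * F**4

# Identity (3.20) with p = 2: the right-hand side reduces to 48 * sum(lam^4),
# since f (x)_1 f = A^2 has squared norm sum(lam^4) and binom(2,1) = 2.
kappa4 = m4 - 3 * m2**2
assert abs(m2 - 2 * np.sum(lam**2)) < 1e-8
assert abs(kappa4 - 48 * np.sum(lam**4)) < 1e-6
```

Any symmetric (non-diagonal) kernel reduces to this diagonal case by orthogonal diagonalization, since $I_2$ is invariant under rotations of the underlying Gaussian vector.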
We are now ready to prove Theorem 1.5. If $Z\in L^4(\Omega)$, as usual we write $\chi_4(Z)=E[Z^4]-3E[Z^2]^2$ for the fourth cumulant of $Z$. We deduce from (3.20) that, for all $p\geq 1$, $f\in H^{\odot p}$ and $r\in\{1,\ldots,p-1\}$, one has $\chi_4(I_p(f))\geq 0$ and
$$\|f\otimes_r f\|_{H^{\otimes(2p-2r)}}\leq\frac{r!\,(p-r)!}{(p!)^2}\sqrt{\chi_4(I_p(f))}.$$
Therefore, if $f,g\in H^{\odot p}$, inequality (3.18) leads to
$$E\left[\Big(E[I_p(f)I_p(g)]-\frac1p\langle DI_p(f),DI_p(g)\rangle_H\Big)^2\right]\leq\big[\chi_4(I_p(f))+\chi_4(I_p(g))\big]\sum_{r=1}^{p-1}\frac{r^2(2p-2r)!}{2p^2((p-r)!)^2}=\frac{\chi_4(I_p(f))+\chi_4(I_p(g))}{2p^2}\sum_{r=1}^{p-1}r^2\binom{2p-2r}{p-r}. \quad(3.26)$$
On the other hand, if $p<q$, $f\in H^{\odot p}$ and $g\in H^{\odot q}$, inequality (3.19) leads to
$$E\left[\Big(\frac1p\langle DI_p(f),DI_q(g)\rangle_H\Big)^2\right]=\frac{q^2}{p^2}\,E\left[\Big(\frac1q\langle DI_p(f),DI_q(g)\rangle_H\Big)^2\right]$$
$$\leq E[I_p(f)^2]\sqrt{\chi_4(I_q(g))}+\frac{1}{2p^2}\sum_{r=1}^{p-1}r^2(p+q-2r)!\left[\Big(\frac{q!}{(q-r)!\,p!}\Big)^2\chi_4(I_p(f))+\Big(\frac{p!}{(p-r)!\,q!}\Big)^2\chi_4(I_q(g))\right]$$
$$\leq E[I_p(f)^2]\sqrt{\chi_4(I_q(g))}+\frac12\sum_{r=1}^{p-1}(p+q-2r)!\left[\binom{q}{r}^2\chi_4(I_p(f))+\binom{p}{r}^2\chi_4(I_q(g))\right],$$
so that, if $p\neq q$, $f\in H^{\odot p}$ and $g\in H^{\odot q}$, one has that both $E\big[\big(\frac1p\langle DI_p(f),DI_q(g)\rangle_H\big)^2\big]$ and $E\big[\big(\frac1q\langle DI_p(f),DI_q(g)\rangle_H\big)^2\big]$ are less than or equal to
$$E[I_p(f)^2]\sqrt{\chi_4(I_q(g))}+E[I_q(g)^2]\sqrt{\chi_4(I_p(f))}+\frac12\sum_{r=1}^{p\wedge q-1}(p+q-2r)!\left[\binom{q}{r}^2\chi_4(I_p(f))+\binom{p}{r}^2\chi_4(I_q(g))\right]. \quad(3.27)$$
Since two multiple integrals of different orders are orthogonal, one has that $C_{ij}=E[F_iF_j]=E[I_{q_i}(f_i)I_{q_j}(f_j)]=0$ whenever $q_i\neq q_j$. Thus, by using (3.26)-(3.27) together with $\sqrt{x_1+\ldots+x_n}\leq\sqrt{x_1}+\ldots+\sqrt{x_n}$, we eventually get the desired conclusion (3.17). ✷

4 Cumulants for random vectors on the Wiener space
In all this part of the paper, we let the notation of Section 2 prevail. In particular, $X=\{X(h):h\in H\}$ denotes a given isonormal Gaussian process. In this section, by means of the basic operators $D$ and $L$, we calculate the cumulants of any vector-valued functional $F$ of $X$.

First, let us recall the standard multi-index notation. A multi-index is a vector $m=(m_1,\ldots,m_d)$ of $\mathbb{N}^d$. We write
$$|m|=\sum_{i=1}^d m_i,\qquad\partial_i=\frac{\partial}{\partial t_i},\qquad\partial^m=\partial_1^{m_1}\ldots\partial_d^{m_d},\qquad x^m=\prod_{i=1}^d x_i^{m_i}.$$
By convention, we have $x^0=1$. Also, note that $|x^m|=y^m$, where $y_i=|x_i|$ for all $i$. If $s\in\mathbb{N}^d$, we say that $s\leq m$ if and only if $s_i\leq m_i$ for all $i$. For any $i=1,\ldots,d$, we let $e_i\in\mathbb{N}^d$ be the multi-index defined by $(e_i)_j=\delta_{ij}$, with $\delta_{ij}$ the Kronecker symbol.

Definition 4.1
Let $F=(F_1,\ldots,F_d)$ be an $\mathbb{R}^d$-valued random vector such that $E|F^m|<\infty$ for some $m\in\mathbb{N}^d\setminus\{0\}$, and let $\phi_F(t)=E[e^{i\langle t,F\rangle_{\mathbb{R}^d}}]$, $t\in\mathbb{R}^d$, stand for the characteristic function of $F$. The cumulant of order $m$ of $F$ is (well) defined by
$$\kappa_m(F)=(-i)^{|m|}\,\partial^m\log\phi_F(t)\big|_{t=0}.$$
For instance, if $F_i,F_j\in L^2(\Omega)$, then $\kappa_{e_i}(F)=E[F_i]$ and $\kappa_{e_i+e_j}(F)=\mathrm{Cov}[F_i,F_j]$. Now, we need to (recursively) introduce some further notation:

Definition 4.2
Let $F=(F_1,\ldots,F_d)$ be an $\mathbb{R}^d$-valued random vector with $F_i\in\mathbb{D}^{1,2}$ for each $i$. Let $l_1,l_2,\ldots$ be a sequence taking values in $\{e_1,\ldots,e_d\}$. We set $\Gamma_{l_1}(F)=F_{l_1}$. If the random variable $\Gamma_{l_1,\ldots,l_k}(F)$ is a well-defined element of $L^2(\Omega)$ for some $k\geq 1$, we set
$$\Gamma_{l_1,\ldots,l_{k+1}}(F)=\langle DF_{l_{k+1}},-DL^{-1}\Gamma_{l_1,\ldots,l_k}(F)\rangle_H.$$
Since the square-integrability of $\Gamma_{l_1,\ldots,l_k}(F)$ implies that $L^{-1}\Gamma_{l_1,\ldots,l_k}(F)\in\mathrm{Dom}\,L\subset\mathbb{D}^{1,2}$, the definition of $\Gamma_{l_1,\ldots,l_{k+1}}(F)$ makes sense.

The next lemma, whose proof is left to the reader because it is an immediate extension of Lemma 4.2 in [3] to the multivariate case, gives sufficient conditions on $F$ ensuring that the random variable $\Gamma_{l_1,\ldots,l_k}(F)$ is a well-defined element of $L^2(\Omega)$.

Lemma 4.3
1. Fix an integer $j\geq 1$, and assume that $F=(F_1,\ldots,F_d)$ is such that $F_i\in\mathbb{D}^{j,2^j}$ for all $i$. Let $l_1,l_2,\ldots,l_j$ be a sequence taking values in $\{e_1,\ldots,e_d\}$. Then, for all $k=1,\ldots,j$, we have that $\Gamma_{l_1,\ldots,l_k}(F)$ is a well-defined element of $\mathbb{D}^{j-k+1,2^{j-k+1}}$; in particular, one has that $\Gamma_{l_1,\ldots,l_j}(F)\in\mathbb{D}^{1,2}\subset L^2(\Omega)$ and that the quantity $E[\Gamma_{l_1,\ldots,l_j}(F)]$ is well-defined and finite.

2. Assume that $F=(F_1,\ldots,F_d)$ is such that $F_i\in\mathbb{D}^\infty$ for all $i$. Let $l_1,l_2,\ldots$ be a sequence taking values in $\{e_1,\ldots,e_d\}$. Then, for all $k\geq 1$, the random variable $\Gamma_{l_1,\ldots,l_k}(F)$ is a well-defined element of $\mathbb{D}^\infty$.
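As a concrete illustration of the cumulant machinery of this section (this check is ours, not the paper's): for a single component $F_1=I_2(f)$ with diagonal kernel $f=\mathrm{diag}(\lambda)$, the multiple-integral cumulant formula of Theorem 4.6 below specializes to $\kappa_{3e_1}(F)=8\,\langle f\otimes_1 f,f\rangle=8\sum_i\lambda_i^3$. Since $F_1$ is centered, $\kappa_{3e_1}(F)=E[F_1^3]$, which we can evaluate exactly by tensorized Gauss quadrature.

```python
import itertools
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

lam = np.array([1.0, -0.5, 0.25])  # eigenvalues of a diagonal kernel f
x, w = hermegauss(10)              # exact for polynomials of degree <= 19
w = w / math.sqrt(2 * math.pi)

# F_1 = I_2(f) = sum_i lam_i (Z_i^2 - 1) is centered, so kappa_{3e_1}(F) = E[F_1^3].
m3 = 0.0
for idx in itertools.product(range(len(x)), repeat=len(lam)):
    z = x[list(idx)]
    F = np.sum(lam * (z**2 - 1))
    m3 += np.prod(w[list(idx)]) * F**3

# Third cumulant of a second-chaos element: 8 <f (x)_1 f, f> = 8 * trace(A^3).
assert abs(m3 - 8 * np.sum(lam**3)) < 1e-8
```

The factor $8=q_{\lambda_3}!\,(|m|-1)!\,c_{q,l}(1)=2!\cdot 2!\cdot 2$ is exactly the combinatorial constant produced by the recursion below for $m=3e_1$, $q_1=2$.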
We are now ready to state and prove the main result of this section, which is nothing but the multivariate extension of Theorem 4.3 in [3].
Theorem 4.4
Let $m\in\mathbb{N}^d\setminus\{0\}$. Write $m=l_1+\ldots+l_{|m|}$, where $l_i\in\{e_1,\ldots,e_d\}$ for each $i$. (Up to possible permutations of the factors, we have existence and uniqueness of this decomposition of $m$.) Suppose that the random vector $F=(F_1,\ldots,F_d)$ is such that $F_i\in\mathbb{D}^{|m|,2^{|m|}}$ for all $i$. Then we have
$$\kappa_m(F)=(|m|-1)!\,E\big[\Gamma_{l_1,\ldots,l_{|m|}}(F)\big]. \quad(4.28)$$

Remark 4.5
A careful inspection of the forthcoming proof of Theorem 4.4 shows that the quantity $E[\Gamma_{l_1,\ldots,l_{|m|}}(F)]$ in (4.28) is actually symmetric with respect to $l_1,\ldots,l_{|m|}$, that is,
$$\forall\sigma\in S_{|m|}:\quad E\big[\Gamma_{l_1,\ldots,l_{|m|}}(F)\big]=E\big[\Gamma_{l_{\sigma(1)},\ldots,l_{\sigma(|m|)}}(F)\big].$$

Proof of Theorem 4.4. The proof is by induction on $|m|$. The case $|m|=1$ is clear, because $\kappa_{e_j}(F)=E[F_j]=E[\Gamma_{e_j}(F)]$ for all $j$. Now, assume that (4.28) holds for all multi-indices $m\in\mathbb{N}^d$ such that $|m|\leq N$, for some $N\geq 1$ fixed, and let us prove that it continues to hold for all the multi-indices $m$ verifying $|m|=N+1$. Let $m\in\mathbb{N}^d$ be such that $|m|=N$, and fix $j=1,\ldots,d$. By applying repeatedly (2.16) and then the chain rule (2.12), we can write
$$E[F^{m+e_j}]=E[F^m\times\Gamma_{e_j}(F)]=E[F^m]E[\Gamma_{e_j}(F)]+E[\langle DF^m,-DL^{-1}\Gamma_{e_j}(F)\rangle_H]$$
$$=E[F^m]E[\Gamma_{e_j}(F)]+\sum_{i_1\leq|m|}E[F^{m-l_{i_1}}\langle DF_{l_{i_1}},-DL^{-1}\Gamma_{e_j}(F)\rangle_H]=E[F^m]E[\Gamma_{e_j}(F)]+\sum_{i_1\leq|m|}E[F^{m-l_{i_1}}\Gamma_{e_j,l_{i_1}}(F)]$$
$$=E[F^m]E[\Gamma_{e_j}(F)]+\sum_{i_1\leq|m|}E[F^{m-l_{i_1}}]E[\Gamma_{e_j,l_{i_1}}(F)]+\sum_{\substack{i_1,i_2\leq|m|\\ i_1,i_2\ \text{different}}}E[F^{m-l_{i_1}-l_{i_2}}\Gamma_{e_j,l_{i_1},l_{i_2}}(F)]=\ldots$$
$$=E[F^m]E[\Gamma_{e_j}(F)]+\sum_{i_1\leq|m|}E[F^{m-l_{i_1}}]E[\Gamma_{e_j,l_{i_1}}(F)]+\sum_{\substack{i_1,i_2\leq|m|\\ i_1,i_2\ \text{different}}}E[F^{m-l_{i_1}-l_{i_2}}]E[\Gamma_{e_j,l_{i_1},l_{i_2}}(F)]$$
$$\qquad+\ldots+\sum_{\substack{i_1,\ldots,i_{|m|-1}\leq|m|\\ \text{pairwise different}}}E[F^{m-l_{i_1}-\ldots-l_{i_{|m|-1}}}]E[\Gamma_{e_j,l_{i_1},\ldots,l_{i_{|m|-1}}}(F)]+|m|!\,E[\Gamma_{e_j,l_1,\ldots,l_{|m|}}(F)],$$
so that, using the induction property,
$$E[F^{m+e_j}]=E[F^m]\frac{1}{0!}\kappa_{e_j}(F)+\sum_{i_1\leq|m|}E[F^{m-l_{i_1}}]\frac{1}{1!}\kappa_{e_j+l_{i_1}}(F)+\sum_{\substack{i_1,i_2\leq|m|\\ i_1,i_2\ \text{different}}}E[F^{m-l_{i_1}-l_{i_2}}]\frac{1}{2!}\kappa_{e_j+l_{i_1}+l_{i_2}}(F)$$
$$\qquad+\ldots+\sum_{\substack{i_1,\ldots,i_{|m|-1}\leq|m|\\ \text{pairwise different}}}E[F^{m-l_{i_1}-\ldots-l_{i_{|m|-1}}}]\frac{1}{(|m|-1)!}\kappa_{e_j+l_{i_1}+\ldots+l_{i_{|m|-1}}}(F)+|m|!\,E[\Gamma_{e_j,l_1,\ldots,l_{|m|}}(F)]$$
$$=\sum_{\substack{s\leq m\\ |s|\leq|m|-1}}E[F^{m-s}]\,\frac{1}{|s|!}\,\kappa_{e_j+s}(F)\,\sharp B_s+|m|!\,E[\Gamma_{e_j,l_1,\ldots,l_{|m|}}(F)].$$
Here, $B_s$ stands for the set of pairwise different indices $i_1,\ldots,i_{|s|}\in\{1,\ldots,|m|\}$ such that $l_{i_1}+\ldots+l_{i_{|s|}}=s$, whereas $\sharp B_s$ denotes the cardinality of $B_s$. Also, let $D_j=\{i=1,\ldots,|m|:\ l_i=e_j\}$ and observe that $m=(m_1,\ldots,m_d)$ with $m_j=\sharp D_j$. For any $s\leq m$, it is readily checked that $\sharp B_s=\binom{m_1}{s_1}\ldots\binom{m_d}{s_d}|s|!$. (Indeed, to build an element of $B_s$ for a multi-index $s\leq m$, one must choose $s_1$ indices among the $m_1$ indices of $D_1$, up to $s_d$ indices among the $m_d$ indices of $D_d$, and then the order of the factors in the sum $l_{i_1}+\ldots+l_{i_{|s|}}$.) Therefore,
$$E[F^{m+e_j}]=\sum_{\substack{s\leq m\\ |s|\leq|m|-1}}\binom{m_1}{s_1}\ldots\binom{m_d}{s_d}E[F^{m-s}]\,\kappa_{e_j+s}(F)+|m|!\,E[\Gamma_{e_j,l_1,\ldots,l_{|m|}}(F)]$$
$$=\sum_{s\leq m}\binom{m_1}{s_1}\ldots\binom{m_d}{s_d}E[F^{m-s}]\,\kappa_{e_j+s}(F)+|m|!\,E[\Gamma_{e_j,l_1,\ldots,l_{|m|}}(F)]-\kappa_{e_j+m}(F)$$
$$=\sum_{s\leq m}\binom{m_1}{s_1}\ldots\binom{m_d}{s_d}(-i)^{|m|-|s|}\,\partial^{m-s}\phi_F(0)\times(-i)^{|s|+1}\,\partial^{e_j+s}\log\phi_F(0)+|m|!\,E[\Gamma_{e_j,l_1,\ldots,l_{|m|}}(F)]-\kappa_{e_j+m}(F)$$
$$=(-i)^{|m|+1}\,\partial^m\Big(\phi_F\,\frac{d}{dt_j}\log\phi_F\Big)(0)+|m|!\,E[\Gamma_{e_j,l_1,\ldots,l_{|m|}}(F)]-\kappa_{e_j+m}(F)$$
$$=(-i)^{|m|+1}\,\partial^{m+e_j}\phi_F(0)+|m|!\,E[\Gamma_{e_j,l_1,\ldots,l_{|m|}}(F)]-\kappa_{e_j+m}(F)$$
$$=E[F^{m+e_j}]+|m|!\,E[\Gamma_{e_j,l_1,\ldots,l_{|m|}}(F)]-\kappa_{e_j+m}(F),$$
leading to $|m|!\,E[\Gamma_{e_j,l_1,\ldots,l_{|m|}}(F)]=\kappa_{e_j+m}(F)$, implying in turn that (4.28) holds with $m$ replaced by $m+e_j$. The proof by induction is concluded. ✷

4.2 The case of vector-valued multiple integrals

We now focus on the calculation of the cumulants associated with random vectors whose components are elements of a given chaos. In (4.29) (and in its proof as well), we use the following convention. For simplicity, we drop the brackets in the writing of $f_{\lambda_1}\widetilde{\otimes}_{r_1}\ldots\widetilde{\otimes}_{r_{|m|-2}}f_{\lambda_{|m|-1}}$, by implicitly assuming that this quantity is defined iteratively from the left to the right. For instance, $f\widetilde{\otimes}_\alpha g\widetilde{\otimes}_\beta h\widetilde{\otimes}_\gamma k$ actually means $((f\widetilde{\otimes}_\alpha g)\widetilde{\otimes}_\beta h)\widetilde{\otimes}_\gamma k$. For convenience, we restate Theorem 1.6 (in the more general context of an isonormal Gaussian process).

Theorem 4.6
Let $m\in\mathbb{N}^d\setminus\{0\}$ with $|m|\geqslant 2$. Write $m=l_1+\ldots+l_{|m|}$, where $l_i\in\{e_1,\ldots,e_d\}$ for each $i$. (Up to permutation of the factors, this decomposition of $m$ exists and is unique.) Consider an $\mathbb{R}^d$-valued random vector of the form
$$F=(F_1,\ldots,F_d)=\big(I_{q_1}(f_1),\ldots,I_{q_d}(f_d)\big),$$
where each $f_i$ belongs to $H^{\odot q_i}$. When $l_k=e_j$, we set $\lambda_k=j$, so that $F_{l_k}=F_{\lambda_k}$ for all $k=1,\ldots,|m|$. Then
$$\kappa_m(F)=(q_{\lambda_{|m|}})!\,(|m|-1)!\sum c_{q,l}(r_2,\ldots,r_{|m|-1})\,\big\langle f_{\lambda_1}\,\widetilde{\otimes}_{r_2}\,f_{\lambda_2}\,\ldots\,\widetilde{\otimes}_{r_{|m|-1}}\,f_{\lambda_{|m|-1}},\,f_{\lambda_{|m|}}\big\rangle_{H^{\otimes q_{\lambda_{|m|}}}},\qquad(4.29)$$
where the sum $\sum$ runs over all collections of integers $r_2,\ldots,r_{|m|-1}$ such that:
(i) $1\leqslant r_i\leqslant q_{\lambda_i}$ for all $i=2,\ldots,|m|-1$;
(ii) $r_2+\ldots+r_{|m|-1}=\frac{q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}-q_{\lambda_{|m|}}}{2}$;
(iii) $r_2<\frac{q_{\lambda_1}+q_{\lambda_2}}{2},\;\ldots,\;r_2+\ldots+r_{|m|-1}<\frac{q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}}{2}$;
(iv) $r_3\leqslant q_{\lambda_1}+q_{\lambda_2}-2r_2,\;\ldots,\;r_{|m|-1}\leqslant q_{\lambda_1}+\ldots+q_{\lambda_{|m|-2}}-2r_2-\ldots-2r_{|m|-2}$;
and where the combinatorial constants $c_{q,l}(r_2,\ldots,r_s)$ are recursively defined by the relations
$$c_{q,l}(r_2)=q_{\lambda_2}\,(r_2-1)!\binom{q_{\lambda_1}-1}{r_2-1}\binom{q_{\lambda_2}-1}{r_2-1},$$
and, for $s\geqslant 3$,
$$c_{q,l}(r_2,\ldots,r_s)=q_{\lambda_s}\,(r_s-1)!\binom{q_{\lambda_1}+\ldots+q_{\lambda_{s-1}}-2r_2-\ldots-2r_{s-1}-1}{r_s-1}\binom{q_{\lambda_s}-1}{r_s-1}\,c_{q,l}(r_2,\ldots,r_{s-1}).$$

Proof. If $f\in H^{\odot p}$ and $g\in H^{\odot q}$ ($p,q\geqslant 1$), the multiplication formula yields
$$\langle DI_p(f),-DL^{-1}I_q(g)\rangle_H = p\,\langle I_{p-1}(f),I_{q-1}(g)\rangle_H = p\sum_{r=0}^{p\wedge q-1}r!\binom{p-1}{r}\binom{q-1}{r}I_{p+q-2-2r}(f\,\widetilde{\otimes}_{r+1}\,g) = p\sum_{r=1}^{p\wedge q}(r-1)!\binom{p-1}{r-1}\binom{q-1}{r-1}I_{p+q-2r}(f\,\widetilde{\otimes}_r\,g).\qquad(4.30)$$
Thanks to (4.30), it is straightforward to prove by induction on $|m|$ that
$$\Gamma_{l_1,\ldots,l_{|m|}}(F) = \sum_{r_2=1}^{q_{\lambda_1}\wedge q_{\lambda_2}}\ldots\sum_{r_{|m|}=1}^{[q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}-2r_2-\ldots-2r_{|m|-1}]\wedge q_{\lambda_{|m|}}} c_{q,l}(r_2,\ldots,r_{|m|})\,\mathbf{1}_{\{r_2<\frac{q_{\lambda_1}+q_{\lambda_2}}{2}\}}\ldots\mathbf{1}_{\{r_2+\ldots+r_{|m|-1}<\frac{q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}}{2}\}}\times I_{q_{\lambda_1}+\ldots+q_{\lambda_{|m|}}-2r_2-\ldots-2r_{|m|}}\big(f_{\lambda_1}\,\widetilde{\otimes}_{r_2}\,f_{\lambda_2}\,\ldots\,\widetilde{\otimes}_{r_{|m|}}\,f_{\lambda_{|m|}}\big).\qquad(4.31)\text{--}(4.32)$$
Now, let us take the expectation on both sides of (4.32). We get
$$\kappa_m(F)=(|m|-1)!\,E\big[\Gamma_{l_1,\ldots,l_{|m|}}(F)\big] = (|m|-1)!\sum_{r_2=1}^{q_{\lambda_1}\wedge q_{\lambda_2}}\ldots\sum_{r_{|m|}=1}^{[q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}-2r_2-\ldots-2r_{|m|-1}]\wedge q_{\lambda_{|m|}}} c_{q,l}(r_2,\ldots,r_{|m|})\,\mathbf{1}_{\{r_2<\frac{q_{\lambda_1}+q_{\lambda_2}}{2}\}}\ldots\mathbf{1}_{\{r_2+\ldots+r_{|m|-1}<\frac{q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}}{2}\}}\,\mathbf{1}_{\{2r_2+\ldots+2r_{|m|}=q_{\lambda_1}+\ldots+q_{\lambda_{|m|}}\}}\times f_{\lambda_1}\,\widetilde{\otimes}_{r_2}\,f_{\lambda_2}\,\ldots\,\widetilde{\otimes}_{r_{|m|}}\,f_{\lambda_{|m|}}.$$
Observe that, if $2r_2+\ldots+2r_{|m|}=q_{\lambda_1}+\ldots+q_{\lambda_{|m|}}$ and $r_{|m|}\leqslant q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}-2r_2-\ldots-2r_{|m|-1}$, then
$$2r_{|m|}=q_{\lambda_{|m|}}+\big(q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}-2r_2-\ldots-2r_{|m|-1}\big)\geqslant q_{\lambda_{|m|}}+r_{|m|},$$
that is, $r_{|m|}\geqslant q_{\lambda_{|m|}}$, so that $r_{|m|}=q_{\lambda_{|m|}}$. Therefore,
$$\kappa_m(F)=(|m|-1)!\sum_{r_2=1}^{q_{\lambda_1}\wedge q_{\lambda_2}}\ldots\sum_{r_{|m|}=1}^{[q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}-2r_2-\ldots-2r_{|m|-1}]\wedge q_{\lambda_{|m|}}} c_{q,l}(r_2,\ldots,r_{|m|})\,\mathbf{1}_{\{r_2<\frac{q_{\lambda_1}+q_{\lambda_2}}{2}\}}\ldots\mathbf{1}_{\{r_2+\ldots+r_{|m|-1}<\frac{q_{\lambda_1}+\ldots+q_{\lambda_{|m|-1}}}{2}\}}\,\mathbf{1}_{\{2r_2+\ldots+2r_{|m|}=q_{\lambda_1}+\ldots+q_{\lambda_{|m|}}\}}\times\big\langle f_{\lambda_1}\,\widetilde{\otimes}_{r_2}\,f_{\lambda_2}\,\ldots\,\widetilde{\otimes}_{r_{|m|-1}}\,f_{\lambda_{|m|-1}},\,f_{\lambda_{|m|}}\big\rangle_{H^{\otimes q_{\lambda_{|m|}}}},$$
which is the announced result, since
$$c_{q,l}(r_2,\ldots,r_{|m|-1},q_{\lambda_{|m|}})=(q_{\lambda_{|m|}})!\,c_{q,l}(r_2,\ldots,r_{|m|-1}).\;\;✷$$

4.3 Yet another proof of Theorem 1.1

As a corollary of Theorem 4.6, we can now give yet another proof of the implication $(ii)\Rightarrow(i)$ (the only difficult one) in Theorem 1.1. So, let the notation and assumptions of this theorem prevail, suppose that $(ii)$ holds, and let us prove that $(i)$ holds as well. By the method of moments/cumulants, we are left to prove that the cumulants of $F_n$ satisfy, for all $m\in\mathbb{N}^d$,
$$\kappa_m(F_n)\to\kappa_m(N)=\begin{cases}0 & \text{if } |m|\neq 2\\ C_{ij} & \text{if } m=e_i+e_j\end{cases}\qquad\text{as } n\to\infty.$$
Let $m\in\mathbb{N}^d\setminus\{0\}$. If $m=e_j$ for some $j$ (that is, if and only if $|m|=1$), we have $\kappa_m(F_n)=E[F_{j,n}]=0$.
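Before treating $|m|=2$ via assumption (1.1), it may be worth checking formula (4.29) against this simplest case. The following verification is ours (it is not in the original text) and assumes the natural convention that the empty constant $c_{q,l}(\varnothing)$ equals $1$:

```latex
% Consistency check of (4.29) when |m| = 2, say m = e_i + e_j.
% The collection r_2, ..., r_{|m|-1} is then empty, and condition (ii)
% reads 0 = (q_i - q_j)/2: the sum contains one (empty) term iff q_i = q_j.
% Hence (4.29) reduces to
\kappa_{e_i+e_j}(F)
  \;=\; (q_j)!\,(2-1)!\,
        \langle f_i, f_j\rangle_{H^{\otimes q_j}}\,
        \mathbf{1}_{\{q_i = q_j\}}
  \;=\; E[F_i F_j],
% in agreement with the isometry property of multiple integrals.
```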
If $m=e_i+e_j$ for some $i,j$ (that is, if and only if $|m|=2$), we have $\kappa_m(F_n)=E[F_{i,n}F_{j,n}]\to C_{ij}$ by assumption (1.1). If $|m|\geqslant 3$, we consider the expression (4.29). Thanks to (3.20), from $(ii)$ we deduce that $\|f_{i,n}\otimes_r f_{i,n}\|_{L^2([0,T]^{2q_i-2r})}\to 0$ as $n\to\infty$ for all $i$ and all $r=1,\ldots,q_i-1$, whereas, thanks to (1.1), we deduce that $q_i!\,\|f_{i,n}\|^2_{L^2([0,T]^{q_i})}=E[F_{i,n}^2]\to C_{ii}$ for all $i$, so that $\sup_{n\geqslant 1}\|f_{i,n}\|_{L^2([0,T]^{q_i})}<\infty$ for all $i$. Let $r_2,\ldots,r_{|m|-1}$ be some integers such that $(i)$–$(iv)$ in Theorem 4.6 are satisfied. In particular, $r_2<\frac{q_{\lambda_1}+q_{\lambda_2}}{2}$. From (3.22)–(3.23), it comes that $\|f_{\lambda_1,n}\,\widetilde{\otimes}_{r_2}\,f_{\lambda_2,n}\|_{L^2([0,T]^{q_{\lambda_1}+q_{\lambda_2}-2r_2})}\to 0$ as $n\to\infty$. Hence, using the Cauchy-Schwarz inequality successively through
$$\|g\,\widetilde{\otimes}_r\,h\|_{L^2([0,T]^{p+q-2r})}\leqslant\|g\otimes_r h\|_{L^2([0,T]^{p+q-2r})}\leqslant\|g\|_{L^2([0,T]^p)}\,\|h\|_{L^2([0,T]^q)}$$
whenever $g\in L^2_s([0,T]^p)$, $h\in L^2_s([0,T]^q)$ and $r=1,\ldots,p\wedge q$, we get that
$$\big\langle f_{\lambda_1,n}\,\widetilde{\otimes}_{r_2}\,f_{\lambda_2,n}\,\ldots\,\widetilde{\otimes}_{r_{|m|-1}}\,f_{\lambda_{|m|-1},n},\,f_{\lambda_{|m|},n}\big\rangle_{L^2([0,T]^{q_{\lambda_{|m|}}})}\to 0$$
as $n\to\infty$. Therefore, $\kappa_m(F_n)\to 0$ as $n\to\infty$ by (4.29). ✷

References

[1] I. Nourdin and G. Peccati (2009). Stein's method on Wiener chaos.
Probab. Theory Relat. Fields, no. 1, 75-118.

[2] I. Nourdin and G. Peccati (2010). Stein's method meets Malliavin calculus: a short survey with new estimates. In: Recent Development in Stochastic Dynamics and Stochastic Analysis (J. Duan, S. Luo and C. Wang, eds), Interdisciplinary Mathematical Sciences, World Scientific, 2010.

[3] I. Nourdin and G. Peccati (2010). Cumulants on the Wiener space. J. Funct. Anal., 3775-3791.

[4] I. Nourdin, G. Peccati and A. Réveillac (2008). Multivariate normal approximation using Stein's method and Malliavin calculus. Ann. Inst. H. Poincaré Probab. Statist., to appear.

[5] D. Nualart (2006). The Malliavin calculus and related topics. Springer-Verlag, Berlin, 2nd edition.

[6] D. Nualart and S. Ortiz-Latorre (2008). Central limit theorems for multiple stochastic integrals and Malliavin calculus. Stoch. Proc. Appl., no. 4, 614-628.

[7] D. Nualart and G. Peccati (2005). Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab., no. 1, 177-193.

[8] G. Peccati and C.A. Tudor (2005). Gaussian limits for vector-valued multiple stochastic integrals. In: Séminaire de Probabilités XXXVIII, 247-262. Lecture Notes in Math. 1857.
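The fourth-cumulant criterion of Theorem 1.1 is easy to observe numerically in the simplest one-dimensional setting, the second Wiener chaos, where every element can be written as $F=\sum_i \lambda_i(\xi_i^2-1)$ with $\xi_i$ i.i.d. standard Gaussian. The sketch below is ours, not part of the paper; it relies only on the standard identities $\mathrm{Var}(F)=2\sum_i\lambda_i^2$ and $\kappa_4(F)=48\sum_i\lambda_i^4$, and the function names are illustrative. For $\lambda_i=1/\sqrt{2n}$, $i=1,\ldots,n$, the variance stays equal to $1$ while $\kappa_4=12/n\to 0$, in accordance with the Gaussian limit.

```python
import numpy as np

# A second-chaos random variable admits the representation
#   F = sum_i lam_i * (xi_i**2 - 1),   xi_i i.i.d. N(0,1),
# for which Var(F) = 2*sum(lam_i^2) and kappa_4(F) = 48*sum(lam_i^4).

def var_second_chaos(lams):
    lams = np.asarray(lams, dtype=float)
    return 2.0 * np.sum(lams ** 2)

def kappa4_second_chaos(lams):
    lams = np.asarray(lams, dtype=float)
    return 48.0 * np.sum(lams ** 4)

def sample_second_chaos(lams, size, rng):
    """Monte Carlo samples of F, used to cross-check the exact cumulants."""
    lams = np.asarray(lams, dtype=float)
    xi = rng.standard_normal((size, len(lams)))
    return (xi ** 2 - 1.0) @ lams

rng = np.random.default_rng(0)
for n in (5, 50, 500):
    lams = np.full(n, 1.0 / np.sqrt(2.0 * n))  # unit-variance normalization
    samples = sample_second_chaos(lams, 200_000, rng)
    # empirical fourth cumulant: E[F^4] - 3*(E[F^2])^2 for centered F
    k4_mc = np.mean(samples ** 4) - 3.0 * np.var(samples) ** 2
    print(f"n={n:4d}  Var={var_second_chaos(lams):.3f}  "
          f"kappa4={kappa4_second_chaos(lams):.4f} (=12/n)  MC~{k4_mc:.4f}")
```

As $n$ grows, the exact fourth cumulant $12/n$ and its Monte Carlo estimate both shrink while the variance is held fixed, which is exactly the convergence mechanism exploited in the proof above.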