On the structure of Gaussian random variables

Ciprian A. Tudor
SAMOS/MATISSE, Centre d'Economie de La Sorbonne, Université de Panthéon-Sorbonne Paris 1,
90, rue de Tolbiac, 75634 Paris Cedex 13
[email protected]

August 19, 2009
Abstract
We study when a given Gaussian random variable on a given probability space $(\Omega,\mathcal{F},P)$ is equal almost surely to $\beta_1$, where $\beta$ is a Brownian motion defined on the same (or possibly extended) probability space. As a consequence of this result, we prove that the distribution of a random variable in a finite sum of Wiener chaoses (satisfying in addition a certain property) cannot be normal. This result also allows one to better understand a characterization of Gaussian random variables obtained via Malliavin calculus.

Key words:
Gaussian random variable, representation of martingales, multiple stochastic integrals, Malliavin calculus.
We study when a Gaussian random variable defined on some probability space can be expressed almost surely as a Wiener integral with respect to a Brownian motion defined on the same space. The starting point of this work is a series of recent results on the distance between the law of an arbitrary random variable $X$ and the Gaussian law. This distance can be defined in various ways (the Kolmogorov distance, the total variation distance, or others) and it can be expressed in terms of the Malliavin derivative $DX$ of the random variable $X$ when this derivative exists. These results lead to a characterization of Gaussian random variables through Malliavin calculus. Let us briefly recall the context. Suppose that $(\Omega,\mathcal{F},P)$ is a probability space and let $(W_t)_{t\in[0,1]}$ be an $\mathcal{F}_t$-Brownian motion on this space, where $(\mathcal{F}_t)_{t\in[0,1]}$ is its natural filtration. Equivalent conditions for the standard normality of a centered random variable $X$ with variance 1 are the following:
\[
\mathbf{E}\big(1-\langle DX, D(-L)^{-1}X\rangle \,\big|\, X\big)=0
\quad\text{or}\quad
\mathbf{E}\Big(f_z'(X)\big(1-\langle DX, D(-L)^{-1}X\rangle\big)\Big)=0 \ \text{ for every } z,
\]
where $D$ denotes the Malliavin derivative, $L$ is the Ornstein--Uhlenbeck operator, $\langle\cdot,\cdot\rangle$ denotes the scalar product in $L^2([0,1])$ and $f_z$ is the solution of Stein's equation (see e.g. [4]). This characterization is of course interesting and it can be useful in some cases. It is also easy to understand for random variables that are Wiener integrals with respect to $W$. Indeed, assume that $X=W(h)$, where $h$ is a deterministic function in $L^2([0,1])$ with $\|h\|_{L^2([0,1])}=1$. In this case $DX=h=D(-L)^{-1}X$, so $\langle DX, D(-L)^{-1}X\rangle=1$ and the above equivalent conditions for the normality of $X$ can be easily verified. In other cases it is difficult, even impossible, to compute the quantity $\mathbf{E}\big(\langle DX, D(-L)^{-1}X\rangle \mid X\big)$ or $\mathbf{E}\big(f_z'(X)(1-\langle DX, D(-L)^{-1}X\rangle)\big)$. Let us consider for example the random variable $Y=\int_0^1 \mathrm{sign}(W_s)\,dW_s$.
This is not a Wiener integral with respect to $W$. But it is well known that $Y$ is standard Gaussian, because the process $\beta_t=\int_0^t \mathrm{sign}(W_s)\,dW_s$ is a Brownian motion, as follows from Lévy's characterization theorem. The chaos expansion of this random variable is known and it is recalled in Section 2. In fact $Y$ can be expressed as an infinite sum of multiple Wiener--Itô stochastic integrals, and it is impossible to check whether the equivalent conditions for its normality are satisfied ($Y$ is not even differentiable in the Malliavin calculus sense). The phenomenon at work here is that $Y$ can be expressed as the value at time 1 of the Brownian motion $\beta$, which is actually the Dambis--Dubins--Schwarz (DDS, in short) Brownian motion associated to the martingale $M^Y=(M^Y_t)_{t\in[0,1]}$, $M^Y_t=\mathbf{E}(Y|\mathcal{F}_t)$ (recall that $\mathcal{F}_t$ is the natural filtration of $W$; $\beta$ is defined on the same space $\Omega$, or possibly on an extension of $\Omega$, and is a Brownian motion with respect to the filtration $\mathcal{G}_s=\mathcal{F}_{T(s)}$, where $T(s)=\inf\{t\in[0,1]:\langle M^Y\rangle_t\ge s\}$). This leads to the following question: is any standard normal random variable $X$ representable as the value at time 1 of the Brownian motion associated, via the Dambis--Dubins--Schwarz theorem, to the martingale $M^X$, where for every $t$
\[
M^X_t=\mathbf{E}(X|\mathcal{F}_t)? \tag{1}
\]
By combining the techniques of Malliavin calculus with classical tools of probability theory, we found the following answer: if the bracket of the $\mathcal{F}_t$-martingale $M^X$ is bounded a.s. by 1, then this property is true; that is, $X$ can be represented as its DDS Brownian motion at time 1. The property also holds when the bracket $\langle M^X\rangle_1$ is bounded by an arbitrary constant and $\langle M^X\rangle_1$ and $\beta_{\langle M^X\rangle_1}$ are independent. If the bracket of $M^X$ is not bounded by 1, then this property is not true in general.
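The Lévy-characterization argument for $Y=\int_0^1 \mathrm{sign}(W_s)\,dW_s$ is easy to check numerically. The following Monte Carlo sketch (an illustration, not from the paper; grid size and sample count are arbitrary choices) uses an adapted left-point Euler discretization and compares the first two moments of $Y$ with those of $N(0,1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 500
dt = 1.0 / n_steps

# Brownian increments and paths on [0, 1]
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)

# adapted integrand: evaluate sign(W_s) at the left endpoint of each step
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])
Y = np.sum(np.sign(W_left) * dW, axis=1)  # Euler sum for int_0^1 sign(W_s) dW_s

print(round(Y.mean(), 3), round(Y.var(), 3))  # close to 0 and 1
```

The empirical variance is $1-\mathcal{O}(1/n_{\text{steps}})$ because the first increment is multiplied by $\mathrm{sign}(0)=0$; this is consistent with $\langle M^Y\rangle_1=\int_0^1(\mathrm{sign}\,W_s)^2\,ds=1$.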
An example where it fails is obtained by considering the standard normal random variable $W(h_1)\,\mathrm{sign}\,W(h_2)$, where $h_1,h_2$ are two orthonormal elements of $L^2([0,1])$. On the other hand, we prove that for any standard normal random variable $X$ one can construct an extended probability space that includes $\Omega$, and a Brownian motion on it, such that $X$ is equal almost surely to this Brownian motion at time 1. The construction is done by means of the Karhunen--Loève theorem. Some consequences of this result are discussed here; we believe that these consequences could be various. We prove that a standard normal random variable such that the bracket of its associated DDS martingale is bounded by 1 cannot live in a finite sum of Wiener chaoses: it lies either in the first chaos or in an infinite sum of chaoses. We also make a connection with some results obtained recently via Stein's method and Malliavin calculus.

We structured our paper as follows. Section 2 starts with a short description of the elements of Malliavin calculus, and it also contains our main result on the structure of Gaussian random variables. In Section 3 we discuss some consequences of our characterization. In particular, we prove that a random variable whose associated DDS martingale has bracket bounded by 1 cannot belong to a finite sum of Wiener chaoses, and we relate our work to recent results on standard normal random variables obtained via Malliavin calculus.

Let us consider a probability space $(\Omega,\mathcal{F},P)$ and assume that $(W_t)_{t\in[0,1]}$ is a Brownian motion on this space with respect to its natural filtration $(\mathcal{F}_t)_{t\in[0,1]}$. Let $I_n$ denote the multiple Wiener--Itô integral of order $n$ with respect to $W$. The elements of the stochastic calculus for multiple integrals and of Malliavin calculus can be found in [3] or [6]. We will just introduce some notation very briefly.
We recall that any square integrable random variable which is measurable with respect to the $\sigma$-algebra generated by $W$ can be expanded into an orthogonal sum of multiple stochastic integrals
\[
F=\sum_{n\ge0} I_n(f_n) \tag{2}
\]
where $f_n\in L^2([0,1]^n)$ are (uniquely determined) symmetric functions and $I_0(f_0)=\mathbf{E}[F]$. The isometry of multiple integrals can be written as: for $m,n$ positive integers, $f\in L^2([0,1]^n)$ and $g\in L^2([0,1]^m)$,
\[
\mathbf{E}\big(I_n(f)I_m(g)\big)=n!\,\langle f,g\rangle_{L^2([0,1]^n)} \ \text{ if } m=n,
\qquad
\mathbf{E}\big(I_n(f)I_m(g)\big)=0 \ \text{ if } m\ne n. \tag{3}
\]
It also holds that $I_n(f)=I_n(\tilde f)$, where $\tilde f$ denotes the symmetrization of $f$ defined by $\tilde f(x_1,\dots,x_n)=\frac{1}{n!}\sum_{\sigma\in S_n} f(x_{\sigma(1)},\dots,x_{\sigma(n)})$.

We will need the general formula for calculating products of Wiener chaos integrals of any orders $m,n$ for any symmetric integrands $f\in L^2([0,1]^{m})$ and $g\in L^2([0,1]^{n})$; it is
\[
I_m(f)\,I_n(g)=\sum_{l=0}^{m\wedge n} l!\,C_m^l C_n^l\, I_{m+n-2l}(f\otimes_l g) \tag{4}
\]
where the contraction $f\otimes_l g$ ($0\le l\le m\wedge n$) is defined by
\[
(f\otimes_l g)(s_1,\dots,s_{m-l},t_1,\dots,t_{n-l})
=\int_{[0,1]^{l}} f(s_1,\dots,s_{m-l},u_1,\dots,u_l)\,g(t_1,\dots,t_{n-l},u_1,\dots,u_l)\,du_1\dots du_l. \tag{5}
\]
Note that the contraction $(f\otimes_l g)$ is an element of $L^2([0,1]^{m+n-2l})$ but it is not necessarily symmetric. We will denote by $(f\,\tilde\otimes_l\, g)$ its symmetrization.

We denote by $\mathbb{D}^{1,2}$ the domain of the Malliavin derivative $D$ with respect to $W$, which takes values in $L^2([0,1]\times\Omega)$. We just recall that $D$ acts on functionals of the form $f(X)$, with $X\in\mathbb{D}^{1,2}$ and $f$ differentiable, in the following way: $D_\alpha f(X)=f'(X)D_\alpha X$ for every $\alpha\in[0,1]$, and on multiple integrals $I_n(f)$ with $f\in L^2([0,1]^n)$ as $D_\alpha I_n(f)=nI_{n-1}(f(\cdot,\alpha))$. The Malliavin derivative $D$ admits a dual operator, the divergence integral $\delta$, with $\delta(u)\in L^2(\Omega)$ for $u\in\mathrm{Dom}(\delta)$, and we have the duality relationship
\[
\mathbf{E}(F\delta(u))=\mathbf{E}\langle DF,u\rangle, \qquad F\in\mathbb{D}^{1,2},\ u\in\mathrm{Dom}(\delta). \tag{6}
\]
For adapted integrands, the divergence integral coincides with the classical Itô integral.

Let us fix the probability space $(\Omega,\mathcal{F},P)$ and assume that the Wiener process $(W_t)_{t\in[0,1]}$ lives on this space. Let $X$ be a centered square integrable random variable on $\Omega$, measurable with respect to the $\sigma$-algebra $\mathcal{F}_1$. After Proposition 1, the random variable $X$ will be assumed to have the standard normal law.

The following result is an immediate consequence of the Dambis--Dubins--Schwarz theorem (DDS theorem for short; see [2], Section 3.4, or [8], Chapter V).

Proposition 1
Let $X$ be a centered random variable in $L^2(\Omega)$. Then there exists a Brownian motion $(\beta_s)_{s\ge0}$ (possibly defined on an extension of the probability space) with respect to a filtration $(\mathcal{G}_s)_{s\ge0}$ such that $X=\beta_{\langle M^X\rangle_1}$, where $M^X=(M^X_t)_{t\in[0,1]}$ is the martingale given by (1). Moreover, the random time $T=\langle M^X\rangle_1$ is a stopping time for the filtration $(\mathcal{G}_s)_{s\ge0}$, and it satisfies $T>0$ a.s. and $\mathbf{E}T=\mathbf{E}X^2$.

Proof:
Let $T(s)=\inf\{t\ge0:\ \langle M^X\rangle_t\ge s\}$. By the Dambis--Dubins--Schwarz theorem, $\beta_s:=M^X_{T(s)}$ is a standard Brownian motion with respect to the filtration $\mathcal{G}_s:=\mathcal{F}_{T(s)}$, and for every $t\in[0,1]$ we have $M^X_t=\beta_{\langle M^X\rangle_t}$ a.s. Taking $t=1$ we get $X=\beta_{\langle M^X\rangle_1}$ a.s. The fact that $T$ is a $(\mathcal{G}_s)_{s\ge0}$-stopping time is well known; it holds because $(\langle M^X\rangle_1\le s)=(T(s)\ge1)\in\mathcal{F}_{T(s)}=\mathcal{G}_s$. Also, clearly $T>0$ and $\mathbf{E}T=\mathbf{E}\langle M^X\rangle_1=\mathbf{E}X^2$.

In the sequel we will call the Brownian motion $\beta$ obtained via the DDS theorem the DDS Brownian motion associated to $X$.

Recall the Clark--Ocone formula: if $X\in\mathbb{D}^{1,2}$, then
\[
X=\mathbf{E}X+\int_0^1 \mathbf{E}(D_\alpha X|\mathcal{F}_\alpha)\,dW_\alpha. \tag{7}
\]

Remark 1. If the random variable $X$ has zero mean and belongs to the space $\mathbb{D}^{1,2}$, then by the Clark--Ocone formula (7) we have $M^X_t=\int_0^t \mathbf{E}(D_\alpha X|\mathcal{F}_\alpha)\,dW_\alpha$ and consequently $X=\beta_{\int_0^1(\mathbf{E}(D_\alpha X|\mathcal{F}_\alpha))^2\,d\alpha}$, where $\beta$ is the DDS Brownian motion associated to $X$.

Assume from now on that $X\sim N(0,1)$. By Proposition 1, $X$ can be written as the value at a random time of a Brownian motion $\beta$ (which is in fact the Dambis--Dubins--Schwarz Brownian motion associated to the martingale $M^X$). Note that $\beta$ is indexed by $\mathbb{R}_+$ even though $W$ is indexed by $[0,1]$. Since $\beta_T$ has a standard normal law, what can we say about the random time $T$? Is it equal to 1 almost surely? This is for example the case for the variable $X=\int_0^1\mathrm{sign}(W_s)\,dW_s$, because here, for every $t\in[0,1]$, $M^X_t=\int_0^t\mathrm{sign}(W_s)\,dW_s$ and $\langle M^X\rangle_t=\int_0^t(\mathrm{sign}(W_s))^2\,ds=t$. Another situation where this is true is related to Bessel processes. Let $(B^{(1)},\dots,B^{(d)})$ be a $d$-dimensional Brownian motion and consider the random variable
\[
X=\int_0^1 \frac{B^{(1)}_s}{\sqrt{(B^{(1)}_s)^2+\cdots+(B^{(d)}_s)^2}}\,dB^{(1)}_s+\cdots+\int_0^1 \frac{B^{(d)}_s}{\sqrt{(B^{(1)}_s)^2+\cdots+(B^{(d)}_s)^2}}\,dB^{(d)}_s. \tag{8}
\]
It also satisfies $\langle M^X\rangle_t=t$ for every $t\in[0,1]$, and in particular $T=\langle M^X\rangle_1=1$ a.s.

We will see below that the property that an $N(0,1)$ random variable is equal a.s. to $\beta_1$ (its associated DDS Brownian motion evaluated at time 1) holds only for random variables for which the bracket of the associated DDS martingale is almost surely bounded and $T$ and $\beta_T$ are independent, or for which $T$ is bounded almost surely by 1.

We will assume the following condition on the stopping time $T$: there exists a constant $M>0$ such that
\[
T\le M \quad\text{a.s.} \tag{9}
\]
The problem we address in this section is then the following: let $(\beta_t)_{t\ge0}$ be a $\mathcal{G}_t$-Brownian motion and let $T$ be an almost surely positive stopping time for its filtration such that $\mathbf{E}(T)=1$ and $T$ satisfies (9). We will show when $T=1$ a.s. Let us start with the following result.

Theorem 1
Assume (9) and assume that $T$ is independent of $\beta_T$. Then it holds that $\mathbf{E}T^2=1$.

Proof:
Let us apply Itô's formula to the $\mathcal{G}_t$-martingale $\beta_{T\wedge t}$, first with $f(x)=x^4$:
\[
\beta_{T\wedge t}^4 = 4\int_0^{T\wedge t}\beta_s^3\,d\beta_s + 6\int_0^{T\wedge t}\beta_s^2\,ds .
\]
Taking expectations and letting $t\to\infty$ (recall that $T$ is a.s. bounded), we get
\[
\mathbf{E}\beta_T^4 = 6\,\mathbf{E}\int_0^T\beta_s^2\,ds .
\]
Since $\beta_T$ has the $N(0,1)$ law, we have $\mathbf{E}\beta_T^4=3$, and consequently
\[
\mathbf{E}\int_0^T\beta_s^2\,ds = \frac12 .
\]
Now, by the independence of $T$ and $\beta_T$, we get $\mathbf{E}(T\beta_T^2)=\mathbf{E}T\cdot\mathbf{E}\beta_T^2=1$. Applying Itô's formula again to $\beta_{T\wedge t}$, this time with $f(t,x)=tx^2$,
\[
(T\wedge t)\,\beta_{T\wedge t}^2 = \int_0^{T\wedge t}\beta_s^2\,ds + 2\int_0^{T\wedge t}s\,\beta_s\,d\beta_s + \int_0^{T\wedge t}s\,ds,
\]
we get $\mathbf{E}(T\beta_T^2)=\mathbf{E}\int_0^T\beta_s^2\,ds+\mathbf{E}\int_0^T s\,ds$. Therefore $\mathbf{E}\int_0^T s\,ds=\frac12$, that is, $\frac12\,\mathbf{E}T^2=\frac12$, and then $\mathbf{E}T^2=1$.

Theorem 2
Let $(\beta_t)_{t\ge0}$ be a $\mathcal{G}_t$-Wiener process and let $T$ be a bounded $\mathcal{G}_t$-stopping time with $\mathbf{E}T=1$. Assume that $T$ and $\beta_T$ are independent, and suppose that $\beta_T$ has the $N(0,1)$ law. Then $T=1$ a.s.

Proof:
It is a consequence of Theorem 1, since $\mathbf{E}(T-1)^2=\mathbf{E}T^2-2\,\mathbf{E}(T)+1=0$.

Proposition 2
Assume that (9) is satisfied with $M\le1$. Then $T=1$ almost surely.

Proof:
By Itô's formula, and using $T\le M\le1$,
\[
\mathbf{E}\beta_T^4 = 6\,\mathbf{E}\int_0^T\beta_s^2\,ds = 6\,\mathbf{E}\int_0^1\beta_s^2\,ds - 6\,\mathbf{E}\int_0^1\beta_s^2\,\mathbf{1}_{[T,1]}(s)\,ds .
\]
Since $6\,\mathbf{E}\int_0^1\beta_s^2\,ds=3$ and $\mathbf{E}\beta_T^4=3$, it follows that $\mathbf{E}\int_0^1\beta_s^2\,\mathbf{1}_{[T,1]}(s)\,ds=0$, and this implies that $\beta_s(\omega)^2\,\mathbf{1}_{[T(\omega),1]}(s)=0$ for almost all $s$ and $\omega$. Clearly, then, $T=1$ almost surely.

Next, we will try to understand whether this property remains true without the assumption that the bracket of the martingale $M^X$ is bounded almost surely. To this end, we will consider the following example. Let $(W_t)_{t\in[0,1]}$ be a standard Wiener process with respect to its natural filtration $(\mathcal{F}_t)$. Consider two functions $h_1,h_2\in L^2([0,1])$ such that $\langle h_1,h_2\rangle_{L^2([0,1])}=0$ and $\|h_1\|_{L^2([0,1])}=\|h_2\|_{L^2([0,1])}=1$. For example we can choose $h_1(x)=\sqrt2\,\mathbf{1}_{[0,\frac12]}(x)$ and $h_2(x)=\sqrt2\,\mathbf{1}_{(\frac12,1]}(x)$ (so, in addition, $h_1$ and $h_2$ have disjoint supports). Define the random variable
\[
X = W(h_1)\,\mathrm{sign}\,W(h_2). \tag{10}
\]
It is well known that $X$ is standard normal. Note in particular that $X^2=W(h_1)^2$. We will see that $X$ cannot be written as the value at time 1 of its associated DDS Brownian motion. To this end we will use the chaos expansion of $X$ into multiple Wiener--Itô integrals.

Recall that if $h\in L^2([0,1])$ with $\|h\|_{L^2([0,1])}=1$, then (see e.g. [1])
\[
\mathrm{sign}(W(h)) = \sum_{k\ge0} b_{2k+1}\,I_{2k+1}\big(h^{\otimes(2k+1)}\big)
\quad\text{with}\quad
b_{2k+1} = \frac{2(-1)^k}{\sqrt{2\pi}\,(2k+1)\,k!\,2^k},\quad k\ge0.
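The coefficients $b_{2k+1}$ can be sanity-checked against the isometry (3): since $\mathbf{E}(\mathrm{sign}\,W(h))^2=1$, one must have $\sum_{k\ge0} b_{2k+1}^2\,(2k+1)!=1$. A short numerical check (an illustration, not part of the paper; log-gamma is used to keep the factorials from overflowing):

```python
from math import lgamma, exp, log, pi

def term(k):
    # b_{2k+1}^2 * (2k+1)!  with  b_{2k+1} = 2(-1)^k / (sqrt(2*pi)*(2k+1)*k!*2^k)
    log_t = (log(4.0) - log(2.0 * pi)
             + lgamma(2 * k + 2)       # log (2k+1)!
             - 2.0 * log(2 * k + 1)
             - 2.0 * lgamma(k + 1)     # log (k!)^2
             - 2.0 * k * log(2.0))     # log 4^k
    return exp(log_t)

total = sum(term(k) for k in range(200_000))
print(total)  # close to 1 (slightly below)
```

The partial sums increase to 1 at rate $K^{-1/2}$; in closed form, $\sum_{k\ge0} b_{2k+1}^2(2k+1)! = \frac{2}{\pi}\arcsin(1)=1$.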
Proposition 3
The standard normal random variable $X$ given by (10) is not equal a.s. to $\beta_1$, where $\beta$ is its associated DDS Brownian motion.

Proof:
By the product formula (4) we can express $X$ as (note that $h_1$ and $h_2$ are orthogonal, so there are no contractions of order $l\ge1$)
\[
X=\sum_{k\ge0} b_{2k+1}\,I_{2k+2}\big(h_1\,\tilde\otimes\,h_2^{\otimes(2k+1)}\big)
\]
and
\[
\mathbf{E}(X|\mathcal{F}_t)=\sum_{k\ge0} b_{2k+1}\,I_{2k+2}\Big(\big(h_1\,\tilde\otimes\,h_2^{\otimes(2k+1)}\big)\mathbf{1}_{[0,t]}^{\otimes(2k+2)}(\cdot)\Big)
\]
for every $t\in[0,1]$. We have
\[
\big(h_1\,\tilde\otimes\,h_2^{\otimes(2k+1)}\big)(t_1,\dots,t_{2k+2})
=\frac{1}{2k+2}\sum_{i=1}^{2k+2} h_1(t_i)\,h_2^{\otimes(2k+1)}(t_1,\dots,\hat t_i,\dots,t_{2k+2}) \tag{11}
\]
where $\hat t_i$ means that the variable $t_i$ is missing. Now, $M^X_t=\mathbf{E}(X|\mathcal{F}_t)=\int_0^t u_s\,dW_s$ where, by (11),
\begin{align*}
u_s &= \sum_{k\ge0} b_{2k+1}(2k+2)\,I_{2k+1}\Big(\big(h_1\,\tilde\otimes\,h_2^{\otimes(2k+1)}\big)(\cdot,s)\,\mathbf{1}_{[0,s]}^{\otimes(2k+1)}(\cdot)\Big)\\
&= \sum_{k\ge0} b_{2k+1}\Big[\,h_1(s)\,I_{2k+1}\big(h_2^{\otimes(2k+1)}\mathbf{1}_{[0,s]}^{\otimes(2k+1)}(\cdot)\big)
+(2k+1)\,h_2(s)\,I_1\big(h_1\mathbf{1}_{[0,s]}(\cdot)\big)\,I_{2k}\big(h_2^{\otimes 2k}\mathbf{1}_{[0,s]}^{\otimes 2k}(\cdot)\big)\Big]
\end{align*}
for every $s\in[0,1]$. Due to the choice of $h_1$ and $h_2$ (their supports are disjoint), we have $h_1(s)\,h_2(u)\,\mathbf{1}_{[0,s]}(u)=0$ for every $s,u\in[0,1]$. Thus the first summand of $u_s$ vanishes and
\[
u_s=\sum_{k\ge0} b_{2k+1}(2k+1)\,h_2(s)\,I_1\big(h_1\mathbf{1}_{[0,s]}(\cdot)\big)\,I_{2k}\big(h_2^{\otimes 2k}\mathbf{1}_{[0,s]}^{\otimes 2k}(\cdot)\big).
\]
Note also that $h_1(x)\mathbf{1}_{[0,s]}(x)=h_1(x)$ for every $s$ in the interval $[\frac12,1]$, while $h_2(s)=0$ for $s\in[0,\frac12]$; hence $I_1(h_1\mathbf{1}_{[0,s]}(\cdot))=W(h_1)$ whenever $h_2(s)\ne0$, and
\[
u_s=W(h_1)\sum_{k\ge0} b_{2k+1}(2k+1)\,h_2(s)\,I_{2k}\big(h_2^{\otimes 2k}\mathbf{1}_{[0,s]}^{\otimes 2k}(\cdot)\big).
\]
The bracket of $M^X$ is $\langle M^X\rangle_1=\int_0^1 u_s^2\,ds$. Taking into account the fact that $h_1$ and $h_2$ have disjoint supports, we can write
\[
\int_0^1 u_s^2\,ds=\sum_{k,l\ge0} b_{2k+1}b_{2l+1}(2k+1)(2l+1)\,W(h_1)^2\int_0^1 ds\,h_2(s)^2\,
I_{2k}\big(h_2^{\otimes 2k}\mathbf{1}_{[0,s]}^{\otimes 2k}(\cdot)\big)\,I_{2l}\big(h_2^{\otimes 2l}\mathbf{1}_{[0,s]}^{\otimes 2l}(\cdot)\big).
\]
Since
\[
W(h_1)^2=I_2\big(h_1^{\otimes2}\big)+\int_0^1 h_1(u)^2\,du=I_2\big(h_1^{\otimes2}\big)+1
\]
and
\[
\mathbf{E}\big(\mathrm{sign}\,W(h_2)\big)^2=\int_0^1 ds\,h_2(s)^2\,
\mathbf{E}\Big(\sum_{k\ge0} b_{2k+1}(2k+1)\,I_{2k}\big(h_2^{\otimes 2k}\mathbf{1}_{[0,s]}^{\otimes 2k}(\cdot)\big)\Big)^2=1,
\]
we get
\begin{align*}
\int_0^1 u_s^2\,ds &=\big(1+I_2(h_1^{\otimes2})\big)\Big(1+\sum_{k,l\ge0} b_{2k+1}b_{2l+1}(2k+1)(2l+1)\\
&\qquad\times\int_0^1 ds\,h_2(s)^2\Big[ I_{2k}\big(h_2^{\otimes 2k}\mathbf{1}_{[0,s]}^{\otimes 2k}(\cdot)\big)I_{2l}\big(h_2^{\otimes 2l}\mathbf{1}_{[0,s]}^{\otimes 2l}(\cdot)\big)-\mathbf{E}\,I_{2k}\big(h_2^{\otimes 2k}\mathbf{1}_{[0,s]}^{\otimes 2k}(\cdot)\big)I_{2l}\big(h_2^{\otimes 2l}\mathbf{1}_{[0,s]}^{\otimes 2l}(\cdot)\big)\Big]\Big)\\
&=:\big(1+I_2(h_1^{\otimes2})\big)(1+A).
\end{align*}
Therefore $\int_0^1 u_s^2\,ds=1$ almost surely if and only if $\big(1+I_2(h_1^{\otimes2})\big)(1+A)=1$ almost surely, which means $I_2(h_1^{\otimes2})(1+A)+A=0$ a.s.; this is impossible, because $I_2(h_1^{\otimes2})$ and $A$ are independent and $I_2(h_1^{\otimes2})$ is not constant.

We obtain an interesting consequence of the above result.

Corollary 1
Let $X$ be given by (10). Then the bracket of the martingale $M^X$, $M^X_t=\mathbf{E}(X|\mathcal{F}_t)$, is not bounded by 1.

Proof:
It is a consequence of Proposition 3 and of Theorem 2.
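With the disjointly supported $h_1,h_2$ above, $W(h_1)$ and $W(h_2)$ are independent standard normals, so $X$ in (10) has the same law as $Z_1\,\mathrm{sign}(Z_2)$ for independent $Z_1,Z_2\sim N(0,1)$. A quick Monte Carlo check of its standard normality (an illustration only; the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
z1 = rng.normal(size=200_000)   # plays the role of W(h1)
z2 = rng.normal(size=200_000)   # plays the role of W(h2), independent of z1
x = z1 * np.sign(z2)

# first four moments of N(0,1): 0, 1, 0, 3
moments = [np.mean(x**p) for p in (1, 2, 3, 4)]
print([round(m, 2) for m in moments])
```

So the example is a genuinely standard normal variable whose DDS stopping time nevertheless differs from 1 with positive probability.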
Remark 2
Proposition 3 provides an interesting example of a Brownian motion $\beta$ and of a stopping time $T$ for its filtration such that $\beta_T$ is standard normal while $T$ is not almost surely equal to 1.

Let us summarize the results of this section: if $X$ is a standard normal random variable and the bracket of $M^X$ is bounded a.s. by 1, then $X$ can be expressed almost surely as a Wiener integral with respect to a Brownian motion on the same (or possibly extended) probability space; this Brownian motion is obtained via the DDS theorem. The property is still true when the bracket is bounded and $T$ and $\beta_T$ are independent random variables. If the bracket of $M^X$ is not bounded, then $X$ is not necessarily equal to $\beta_1$, $\beta$ being its associated DDS Brownian motion; this is the case for the variable (10). Nevertheless, we will see that after a suitable extension of the probability space, any standard normal random variable can be written as the value at time 1 of a Brownian motion constructed on this extended probability space.

Proposition 4
Let $X$ be a standard normal random variable on $(\Omega,\mathcal{F},P)$ and for every $i\ge2$ let $(\Omega_i,\mathcal{F}_i,P_i,X_i)$ be independent copies of $(\Omega,\mathcal{F},P,X)$. Let $(\bar\Omega,\bar{\mathcal{F}},\bar P)$ be the product probability space, and set $X_1:=X$. On $\bar\Omega$ define, for every $t\in[0,1]$,
\[
\bar W_t=\sum_{k\ge1} X_k\int_0^t f_k(s)\,ds,
\]
where $(f_k)_{k\ge1}$ is an orthonormal basis of $L^2([0,1])$. Then $\bar W$ is a Brownian motion on $\bar\Omega$ and $X=\int_0^1 f_1(s)\,d\bar W_s$ a.s.

Proof:
The fact that $\bar W$ is a Brownian motion is a Karhunen--Loève-type argument: $\bar W$ is a centered Gaussian process and, by Parseval's identity, its covariance is
\[
\mathbf{E}(\bar W_t\bar W_s)=\sum_{k\ge1}\langle \mathbf{1}_{[0,t]},f_k\rangle\,\langle \mathbf{1}_{[0,s]},f_k\rangle=\langle \mathbf{1}_{[0,t]},\mathbf{1}_{[0,s]}\rangle_{L^2([0,1])}=t\wedge s.
\]
Also, note that
\[
\int_0^1 f_1(s)\,d\bar W_s=\sum_{k\ge1}X_k\,\langle f_1,f_k\rangle_{L^2([0,1])}=X_1=X.
\]

Remark 3
Let us denote by $(\bar{\mathcal{F}}_t)$ the natural filtration of $\bar W$. It also holds that $\mathbf{E}(X|\bar{\mathcal{F}}_t)=\int_0^t f_1(u)\,d\bar W_u$, a martingale whose bracket $\int_0^t f_1(u)^2\,du$ is deterministic and equals 1 at $t=1$. Hence, via the DDS theorem, the martingale $\mathbf{E}(X|\bar{\mathcal{F}}_t)$ is a Brownian motion up to a deterministic time change, and $X$ can be expressed as a Brownian motion at time 1.

We think that the consequences of this result are multiple. We will first prove that a random variable $X$ which lives in a finite sum of Wiener chaoses cannot be Gaussian if the bracket of $M^X$ is bounded by 1. Again we fix a Wiener process $(W_t)_{t\in[0,1]}$ on $\Omega$. Let us start with the following lemma.

Lemma 1. Fix $N\ge1$. Let $g\in L^2([0,1]^{N+1})$ be symmetric in its first $N$ variables and such that $\int_0^1 ds\, g(\cdot,s)\,\tilde\otimes_0\, g(\cdot,s)=0$ almost everywhere on $[0,1]^{2N}$. Then for every $k=1,\dots,N-1$ it holds that
\[
\int_0^1 ds\, g(\cdot,s)\,\tilde\otimes_k\, g(\cdot,s)=0 \quad\text{a.e. on } [0,1]^{2(N-k)}.
\]

Proof:
Without loss of generality we can assume that $g$ vanishes on the diagonals $(t_i=t_j)$ of $[0,1]^{N+1}$; this is possible by the construction of multiple stochastic integrals. By hypothesis, the function
\[
(t_1,\dots,t_{2N})\ \mapsto\ \frac{1}{(2N)!}\sum_{\sigma\in S_{2N}}\int_0^1 ds\, g(t_{\sigma(1)},\dots,t_{\sigma(N)},s)\,g(t_{\sigma(N+1)},\dots,t_{\sigma(2N)},s)
\]
vanishes almost everywhere on $[0,1]^{2N}$. Put $t_{2N-1}=t_{2N}=x\in[0,1]$. Since $g$ vanishes on the diagonals, for every $x$ the function
\[
(t_1,\dots,t_{2N-2})\ \mapsto\ \sum_{\sigma\in S_{2N-2}}\int_0^1 ds\, g(t_{\sigma(1)},\dots,t_{\sigma(N-1)},x,s)\,g(t_{\sigma(N)},\dots,t_{\sigma(2N-2)},x,s)
\]
is zero a.e. on $[0,1]^{2N-2}$, and integrating with respect to $x$ we obtain that $\int_0^1 ds\,g(\cdot,s)\,\tilde\otimes_1\,g(\cdot,s)=0$ a.e. on $[0,1]^{2(N-1)}$. By repeating the procedure we obtain the conclusion.

Let us also recall the following result from [7].

Proposition 5
Suppose that $F=I_N(f_N)$ with $f_N\in L^2([0,1]^N)$ symmetric and $N\ge2$ fixed. Then the distribution of $F$ cannot be normal.

We are going to prove the same property for variables that can be expanded into a finite sum of multiple integrals.
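Proposition 5 can already be seen numerically in the simplest case $N=2$: for $\|h\|=1$, $F=I_2(h^{\otimes2})=W(h)^2-1$ has variance $2$ and $\mathbf{E}F^4=60$, so its kurtosis is $60/2^2=15\ne3$. A small numerical illustration (not from the paper; sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=1_000_000)
f = z**2 - 1                   # F = I_2(h ⊗ h) = H_2(W(h)) for ||h|| = 1

var = np.mean(f**2)            # exact value: 2
kurt = np.mean(f**4) / var**2  # exact value: 60 / 4 = 15, far from 3
print(round(var, 2), round(kurt, 1))
```

Any normal variable has kurtosis 3, so the second-chaos variable $F$ cannot be Gaussian.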
Theorem 3
Fix $N\ge1$ and let $X$ be a centered random variable such that $X=\sum_{n=1}^{N+1} I_n(f_n)$, where the $f_n\in L^2([0,1]^n)$ are symmetric functions and $f_{N+1}\ne0$. Suppose that the bracket of the martingale $M^X$ given by (1) is bounded almost surely by 1. Then the law of $X$ cannot be normal.

Proof:
Suppose that $X$ is standard normal (in particular $\mathbf{E}X^2=1$). We can write $X=\int_0^1 u_s\,dW_s$, where
\[
u_s=\sum_{n=0}^{N} I_n\big(g_n(\cdot,s)\big), \qquad g_n(\cdot,s)=(n+1)\,f_{n+1}(\cdot,s)\,\mathbf{1}_{[0,s]}^{\otimes n}(\cdot).
\]
Since $\langle M^X\rangle_1=\int_0^1 u_s^2\,ds$ is bounded by 1, Proposition 2 gives
\[
\int_0^1 u_s^2\,ds = 1 \quad\text{a.s.}
\]
But from the product formula (4),
\[
\int_0^1 u_s^2\,ds=\int_0^1 ds\Big(\sum_{n=0}^{N} I_n(g_n(\cdot,s))\Big)^2
=\int_0^1 ds \sum_{m,n=0}^{N}\sum_{k=0}^{m\wedge n} k!\,C_m^k C_n^k\, I_{m+n-2k}\big(g_n(\cdot,s)\otimes_k g_m(\cdot,s)\big).
\]
Let us look at the chaos of highest order, $2N$, in the above decomposition. As we said, it appears only when we multiply $I_N$ by $I_N$ (with $k=0$), and it is the random variable $I_{2N}\big(\int_0^1 g_N(\cdot,s)\otimes_0 g_N(\cdot,s)\,ds\big)$. Since $\int_0^1 u_s^2\,ds$ is deterministic, the isometry of multiple integrals (3) implies that
\[
\int_0^1 g_N(\cdot,s)\,\tilde\otimes_0\, g_N(\cdot,s)\,ds=0 \quad\text{a.e. on } [0,1]^{2N},
\]
and by Lemma 1, for every $k=1,\dots,N-1$,
\[
\int_0^1 g_N(\cdot,s)\,\tilde\otimes_k\, g_N(\cdot,s)\,ds=0 \quad\text{a.e. on } [0,1]^{2(N-k)}. \tag{12}
\]
Consider now the random variable $Y:=I_{N+1}(f_{N+1})$. It can be written as $Y=\int_0^1 I_N(g_N(\cdot,s))\,dW_s$ and, by the DDS theorem, $Y=\beta^Y_{\int_0^1 (I_N(g_N(\cdot,s)))^2\,ds}$. The multiplication formula together with (12) shows that $\int_0^1 (I_N(g_N(\cdot,s)))^2\,ds$ is deterministic, and as a consequence $Y$ is Gaussian. This is in contradiction with Proposition 5.

The conclusion of the above theorem still holds if $M^X$ satisfies (9) and $\langle M^X\rangle_1$ is independent of $\beta_{\langle M^X\rangle_1}$.

Finally, let us make a connection with several recent results obtained via Stein's method and Malliavin calculus. Recall that the Ornstein--Uhlenbeck operator is defined as $LF=-\sum_{n\ge0} n\,I_n(f_n)$ if $F$ is given by (2). There is a connection between $\delta$, $D$ and $L$: a random variable $F$ belongs to the domain of $L$ if and only if $F\in\mathbb{D}^{1,2}$ and $DF\in \mathrm{Dom}(\delta)$, and then $\delta DF=-LF$.

Let us denote by $D$ the Malliavin derivative with respect to $W$ and let, for any $X\in\mathbb{D}^{1,2}$,
\[
G_X=\langle DX, D(-L)^{-1}X\rangle .
\]
The following theorem is a collection of results from several recent papers.
Theorem 4
Let $X$ be a random variable in the space $\mathbb{D}^{1,2}$. Then the following assertions are equivalent:

1. $X$ is a standard normal random variable.
2. For every $t\in\mathbb{R}$, one has $\mathbf{E}\big(e^{itX}(1-G_X)\big)=0$.
3. $\mathbf{E}\big((1-G_X)\,|\,X\big)=0$.
4. For every $z\in\mathbb{R}$, $\mathbf{E}\big(f_z'(X)(1-G_X)\big)=0$, where $f_z$ is the solution of Stein's equation (see [4]).

Proof: We will show that 1. $\Rightarrow$ 2. $\Rightarrow$ 3. $\Rightarrow$ 4. $\Rightarrow$
1. First suppose that $X\sim N(0,1)$. Then, using the duality relationship (6),
\[
\mathbf{E}\big(e^{itX}(1-G_X)\big)
=\mathbf{E}(e^{itX})-\frac{1}{it}\,\mathbf{E}\langle De^{itX}, D(-L)^{-1}X\rangle
=\mathbf{E}(e^{itX})-\frac{1}{it}\,\mathbf{E}\big(Xe^{itX}\big)
=\varphi_X(t)+\frac{1}{t}\,\varphi_X'(t)=0,
\]
since $\varphi_X(t)=e^{-t^2/2}$ satisfies $\varphi_X'(t)=-t\,\varphi_X(t)$.

Let us now prove the implication 2. $\Rightarrow$ 3. (it has also been proven in [5], Corollary 3.4). Set $F=1-G_X$. The random variable $\mathbf{E}(F|X)$ is the Radon--Nikodym derivative with respect to $P$ of the measure $Q(A)=\mathbf{E}(F\,\mathbf{1}_A)$, $A\in\sigma(X)$. Relation 2. means that $\mathbf{E}\big(e^{itX}\,\mathbf{E}(F|X)\big)=\mathbf{E}_Q(e^{itX})=0$, and consequently $Q(A)=\mathbf{E}(F\,\mathbf{1}_A)=0$ for any $A\in\sigma(X)$. In other words, $\mathbf{E}(F|X)=0$. The implications 3. $\Rightarrow$ 4. and 4. $\Rightarrow$ 1. are consequences of results in [4].

As we said, this characterization can be easily understood and checked if $X$ is in the first Wiener chaos with respect to $W$. Indeed, if $X=W(f)$ with $\|f\|_{L^2([0,1])}=1$, then $DX=D(-L)^{-1}X=f$ and clearly $G_X=1$; there is no need to compute the conditional expectation given $X$, which is in practice very difficult to compute. Let us now consider the case of the random variable $Y=\int_0^1\mathrm{sign}(W_s)\,dW_s$. The chaos expansion of this variable is known, but $Y$ is not even differentiable in the Malliavin sense, so it is not possible to check the conditions of Theorem 4. Another example is related to the Bessel process (see the random variable (8)). Here again the chaos expansion of $X$ can be obtained (see e.g. [1]), but it is impossible to compute the conditional expectation given $X$.

On the other hand, for both variables treated above there is another explanation of their normality, coming from Lévy's characterization theorem. Yet another explanation can be obtained from the results in Section 2. Note that these two examples are random variables such that the bracket of $M^X$ is bounded a.s.

Corollary 2
Let $X$ be an integrable random variable on $(\Omega,\mathcal{F},P)$. Then $X$ is a standard normal random variable if and only if there exists a Brownian motion $(\beta_t)_{t\ge0}$ on an extension of $\Omega$ such that
\[
\langle D^\beta X, D^\beta(-L^\beta)^{-1}X\rangle = 1. \tag{13}
\]

Proof:
Assume that $X\sim N(0,1)$. Then, by Proposition 4 (and Remark 3), $X=\bar\beta_1$, where $\bar\beta$ is a Brownian motion on an extension of the probability space; since $X$ then belongs to the first Wiener chaos of $\bar\beta$, relation (13) clearly holds. Conversely, suppose that there exists a Brownian motion $\beta$ on an extension of $(\Omega,\mathcal{F},P)$ such that (13) holds. Then for any continuous and piecewise differentiable function $f$ with $\mathbf{E}|f'(Z)|<\infty$ ($Z$ denoting a standard normal random variable) we have
\[
\mathbf{E}\big(f'(X)-f(X)X\big)
=\mathbf{E}\Big(f'(X)-f'(X)\,\langle D^\beta X, D^\beta(-L^\beta)^{-1}X\rangle\Big)
=\mathbf{E}\Big(f'(X)\big(1-\langle D^\beta X, D^\beta(-L^\beta)^{-1}X\rangle\big)\Big)=0,
\]
and this implies that $X\sim N(0,1)$ (see [4], Lemma 1.2).

Acknowledgement: Proposition 4 was introduced in the paper after a discussion with Professor Giovanni Peccati; we would like to thank him for this. We are also grateful to Professor P. J. Fitzsimmons for detecting a mistake in the first version of this work.
References

[1] Y.Z. Hu and D. Nualart (2005): Some processes associated with fractional Bessel processes. Journal of Theoretical Probability.

[2] I. Karatzas and S. Shreve (1991): Brownian Motion and Stochastic Calculus. Second Edition. Springer.

[3] P. Malliavin (2002): Stochastic Analysis. Springer-Verlag.

[4] I. Nourdin and G. Peccati (2007): Stein's method on Wiener chaos. To appear in Probability Theory and Related Fields.

[5] I. Nourdin and F. Viens (2008): Density formula and concentration inequalities with Malliavin calculus. Preprint.

[6] D. Nualart (2006): Malliavin Calculus and Related Topics. Second Edition. Springer.

[7] D. Nualart and S. Ortiz-Latorre (2008): Central limit theorems for multiple stochastic integrals and Malliavin calculus. Stochastic Processes and their Applications.

[8] D. Revuz and M. Yor (1999): Continuous Martingales and Brownian Motion. Springer.