Complex Obtuse Random Walks and their Continuous-Time Limits
S. Attal, J. Deschamps and C. Pellegrini
Abstract
We study a particular class of complex-valued random variables and their associated random walks: the complex obtuse random variables. They are the generalization to the complex case of the real-valued obtuse random variables which were introduced in [4] in order to understand the structure of normal martingales in R^n. The extension to the complex case is mainly motivated by considerations from Quantum Statistical Mechanics, in particular the search for a characterization of those quantum baths which act as classical noises. The extension of obtuse random variables to the complex case is far from obvious and hides very interesting algebraic structures. We show that complex obtuse random variables are characterized by a 3-tensor which admits certain symmetries; these symmetries turn out to be the exact 3-tensor analogue of the normality condition for 2-tensors (i.e. matrices), that is, a necessary and sufficient condition for being diagonalizable in some orthonormal basis. We discuss the passage to the continuous-time limit for these random walks and show that they converge in distribution to normal martingales in C^N. We show that the 3-tensor associated to these normal martingales encodes their behavior; in particular, the diagonalization directions of the 3-tensor indicate the directions of the space in which the martingale behaves like a diffusion and those in which it behaves like a Poisson process. We finally prove the convergence, in the continuous-time limit, of the corresponding multiplication operators on the canonical Fock space, with an explicit expression in terms of the associated 3-tensor again.

∗ Work supported by ANR project “HAM-MARK” N◦ ANR-09-BLAN-0098-01

Introduction and Motivations
Real obtuse random variables are particular random variables which were defined in [4] in order to understand the discrete-time analogue of normal martingales in R^n. They were shown to be deeply connected to the Predictable Representation Property and the Chaotic Representation Property for discrete-time martingales in R^n. They are a kind of minimal, centered and normalized random variable in R^n, and they exhibit a very interesting underlying algebraic structure. This algebraic structure is carried by a certain natural 3-tensor associated to the random variable. This 3-tensor has exactly the necessary and sufficient symmetries for being diagonalizable in some orthonormal basis (that is, it satisfies the exact extension to 3-tensors of the condition of being real symmetric for 2-tensors). The corresponding orthonormal basis carries the behavior of the associated random walk and, in particular, of its continuous-time limit. It is shown in [4] that, for the continuous-time limit, in the directions associated to the null eigenvalues the limit process behaves like a diffusion process, while it behaves like a pure jump process in the directions associated to the non-null eigenvalues. In [6] it is concretely shown how the 3-tensors of the discrete-time obtuse random walks converge to those of normal martingales in R^n.

Since the initial work of [4] was motivated only by considerations from Probability Theory and Stochastic Processes, there was no real need for an extension of this notion to the complex case. The need for such an extension has appeared naturally through considerations in Quantum Statistical Mechanics.
More precisely, the underlying motivation is to characterize the onset of classical noises emerging from quantum baths, in the so-called model of Repeated Quantum Interactions.

Repeated quantum interaction models are physical models, introduced and developed in [5], which consist in describing the Hamiltonian dynamics of a quantum system undergoing a sequence of interactions with an environment made of a chain of identical systems. These models were developed because they furnish toy models for quantum dissipative systems: they are at the same time Hamiltonian and Markovian, and they spontaneously give rise to quantum stochastic differential equations in the continuous-time limit. It has been proved in [7] and [8] that they constitute a good toy model for a quantum heat bath in some situations, and that they can also give an account of the diffusive behavior of an electron in an electric field, when coupled to a heat bath. When adding to each step of the dynamics a measurement of the piece of the environment which has just interacted, we recover all the discrete-time quantum trajectories for quantum systems ([15], [16], [17]). Physically, this model corresponds exactly to physical experiments such as the ones performed by S. Haroche et al. on the quantum trajectories of a photon in a cavity ([10], [11], [9]).

The discrete-time dynamics of these repeated interaction systems, as well as their continuous-time limits, give rise to time evolutions driven by quantum noises coming from the environment. These quantum noises describe all the possible actions inside the environment (excitation, return to the ground state, jumps between two energy levels, ...). It is a remarkable fact that these quantum noises can also be combined together in order to give rise to classical noises.
In discrete time they give rise to any random walk; in continuous time they give rise to many well-known stochastic processes, among which are all the Lévy processes.

The point is that complex obtuse random variables and their continuous-time limits are the key for understanding what kind of classical noise eventually emerges from the quantum bath. The 3-tensor makes it possible to read directly from the Hamiltonian which kind of classical noise will be driving the evolution equation. This was our initial motivation for developing the complex theory of obtuse random variables and normal martingales in C^N.

Surprisingly, the extension of obtuse random variables, obtuse random walks and their continuous-time limits to the complex case is far from obvious. The algebraic properties of the associated 3-tensors give rise to the same kinds of behavior as in the real case, but, as we shall see in this article, many aspects (such as the diagonalization theorem) now become really non-trivial.

Let us discuss in more detail the physical motivations underlying our study. They do not appear anymore in the rest of the article, which is devoted entirely to the probabilistic properties of complex obtuse random walks and their continuous-time limits, but we have felt that it could be of interest for the reader to have a clearer picture of the physical motivations which have brought us to consider the complex case extension of obtuse random walks. This part can be skipped by the reader; it has no influence whatsoever on the rest of the article. These physical applications are developed in detail in [2].

Let us illustrate this situation with concrete examples. Consider a quantum system, with state space H_S, in contact with a quantum bath of the form H_E = ⊗_N C², that is, a spin chain. Let us denote by a_{ij}, i, j = 0, 1, the usual basis of elementary matrices on C², and by a_{ij}(n) the corresponding matrix acting only on the n-th copy of C². The Hamiltonian for the interaction between H_S and one copy of C² is of the typical form

    H_tot = H_S ⊗ I + I ⊗ H_E + L ⊗ a∗ + L∗ ⊗ a ,

where a and a∗ denote the corresponding lowering and raising elementary matrices of C². Assume that the interaction between these two parts lasts for a small amount of time h; the associated unitary evolution operator is then U = e^{−ih H_tot}, which can be decomposed as

    U = Σ_{i,j=0}^{1} U_{ij} ⊗ a_{ij}

for some operators U_{ij} on H_S.

The action of the environment (the spin chain), interacting repeatedly with the system H_S, spin by spin, each time for a duration h, gives rise to a time evolution driven by a sequence of unitary operators (V_n) which satisfies (cf. [5] for details)

    V_{n+1} = Σ_{i,j=0}^{1} (U_{ij} ⊗ I) V_n (I ⊗ a_{ij}(n+1)) .

This describes a rather general discrete-time evolution for a quantum system, and the operators a_{ij}(n) here play the role of discrete-time quantum noises: they describe all the possible innovations brought by the environment.

In [5] it is shown that if the total Hamiltonian H_tot is renormalized under the form

    H_tot = H_S ⊗ I + I ⊗ H_E + (1/√h) ( L ⊗ a∗ + L∗ ⊗ a )

(this can be understood as follows: if the duration h of the interactions tends to 0, then the interaction needs to be strengthened adequately if one wishes to obtain a non-trivial limit), then the time evolution (V_{nh}) converges, when h tends to 0, to a continuous-time unitary evolution (V_t) satisfying an equation of the form

    dV_t = K V_t dt + L V_t da∗(t) − L∗ V_t da(t) ,

which is a quantum stochastic differential equation driven by quantum noises da(t) and da∗(t) on some appropriate Fock space.
In other words, we obtain a perturbation of a Schrödinger equation by some additional quantum noise terms.

The point now is that, in the special case where L = L∗, the discrete-time evolution and its continuous-time limit are actually driven by classical noises: some of the terms in the evolution equation factorize nicely and make classical noises appear instead of quantum noises (the noises get grouped so as to furnish a family of commuting self-adjoint operators, that is, a classical stochastic process). Indeed, one can show (cf. [6] and [3]) that the discrete-time evolution can be written in the form of a classical random walk on the unitary group U(H_S):

    V_{n+1} = A V_n + B V_n X_{n+1} ,

where (X_n) is a sequence of i.i.d. symmetric Bernoulli random variables. The continuous-time limit, with the same renormalization as above, gives rise to a unitary evolution driven by a classical Brownian motion (W_t):

    dV_t = ( iH − (1/2) L² ) V_t dt + i L V_t dW_t .

This equation is the typical perturbation of a Schrödinger equation by means of an additional Brownian term, if one wants the evolution to remain unitary at all times: the Itô correction −(1/2) L² dt exactly compensates the second-order contribution of the noise term.

This example is a very simple one and belongs to those which were already well known (cf. [6]); it involves real obtuse random variables and real normal martingales.

The point now is that one can consider plenty of much more complicated choices of the Hamiltonian H_tot which give rise to classical noises instead of quantum noises. Our motivation was to understand and characterize when such a situation appears, and to read off from the Hamiltonian which kind of noise is going to drive the dynamics. Let us illustrate this with a more complicated example. Assume now that the environment is made of a chain of 3-level quantum systems, that is, H_E = ⊗_N C³.
For the elementary interaction between the quantum system H_S and one copy of C³ we consider a Hamiltonian of the form

    H_tot = H ⊗ I + A ⊗ ( −i −i −i i ) + B ⊗ ( −i i −i −i −i ) ,

which is self-adjoint under the condition B = −(1/ A + (1 + 2i) A∗).

In this case the quantum dynamics in discrete time happens to be driven by a classical noise too, but this is not at all apparent here! We will understand, with the tools developed in this article, that the resulting discrete-time dynamics is of the form

    V_{n+1} = A₀ V_n + A₁ V_n X¹_{n+1} + A₂ V_n X²_{n+1} ,

where the random variables (X¹_n, X²_n) are i.i.d. in C², taking the values

    v₁ = ( i ) ,   v₂ = ( −i ) ,   v₃ = −( i  i )

with probabilities p₁ = 1/3, p₂ = 1/4, p₃ = 5/12 respectively.

Putting a 1/√h normalization factor in front of A and B and letting h go to 0, we will show in this article that this gives rise to a continuous-time dynamics of the form

    dV_t = L₀ V_t dt + L₁ V_t dZ¹_t + L₂ V_t dZ²_t ,

where (Z¹, Z²) is a normal martingale in C² given by

    Z¹_t = i√ W¹_t + i√ W²_t ,
    Z²_t = −i√ W¹_t + √ W²_t ,

where (W¹, W²) is a 2-dimensional real Brownian motion.

We will also see in this article how to produce any kind of example in C^n which mixes Brownian parts and Poisson parts.

The way these random walks and their characteristics are identified, and the way the continuous-time limits and their characteristics are identified, are non-trivial and make use of all the tools we develop along this article: the associated doubly-symmetric 3-tensor, the diagonalization of the 3-tensor, the probabilistic characteristics of the associated random walk, the passage to the limit on the tensor, the passage to the limit on the discrete-time martingale, the identification of the law of the limit martingale, etc.

This article is structured as follows. In Section 2 we introduce the notions of obtuse systems, obtuse random variables and their associated 3-tensors. We show a kind of uniqueness result and we show that they generate all finitely supported random variables in C^N.

In Section 3 we establish the important symmetries shared by the 3-tensors of obtuse random variables and we show one of our main results: these symmetries are the necessary and sufficient conditions for the 3-tensor to be diagonalizable in some orthonormal basis. We show how to recover the real case, which remarkably does not correspond to the real character of the 3-tensor but to a certain supplementary symmetry.

Section 4 is a kind of preparation for the continuous-time limit of complex obtuse random walks. In this section we show an important connection between complex obtuse random variables and real ones. This connection will be the key for understanding the continuous-time limits.
In Section 4 we gather all the results concerning this connection with the real obtuse random variables and its consequences. We recall basic results on real normal martingales and deduce the corresponding ones for the complex normal martingales. In particular we establish what the complex extension of a structure equation is. We connect the behavior of the complex normal martingale to the diagonalization of its associated 3-tensor.

In Section 5 we finally prove our continuous-time convergence theorems. First of all, via the convergence of the tensors and exploiting the results of [18], we prove a convergence in law for the processes. Secondly, in the framework of Fock space approximation by spin chains developed in [1], we prove the convergence of the associated multiplication operators, with explicit formulas in terms of quantum noises.

We finally illustrate our results in Section 6 through two examples, showing the different types of behavior.

Let N ∈ N∗ be fixed. In C^N, an obtuse system is a family of N+1 vectors v₁, ..., v_{N+1} such that

    ⟨v_i , v_j⟩ = −1   for all i ≠ j .

In that case we put

    v̂_i = (1, v_i) ∈ C^{N+1} ,

so that ⟨v̂_i , v̂_j⟩ = 0 for all i ≠ j. The v̂_i's then form an orthogonal basis of C^{N+1}. We put

    p_i = 1 / ‖v̂_i‖² = 1 / (1 + ‖v_i‖²) ,

for i = 1, ..., N+1.

Lemma 2.1 We then have

    Σ_{i=1}^{N+1} p_i = 1    (1)

and

    Σ_{i=1}^{N+1} p_i v_i = 0 .    (2)

Proof:
We have, for all j,

    ⟨ Σ_{i=1}^{N+1} p_i v̂_i , v̂_j ⟩ = p_j ‖v̂_j‖² = 1 = ⟨ (1, 0) , v̂_j ⟩ .

As the v̂_j's form a basis, this means that

    Σ_{i=1}^{N+1} p_i v̂_i = (1, 0) .

This implies the two announced equalities. □
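As a quick numerical sanity check of Lemma 2.1 (this snippet and its example system are ours, not from the paper), one can verify identities (1) and (2) on a concrete obtuse system of C²: three vectors of norm √2 pointing at 120° from each other, so that ⟨v_i, v_j⟩ = 2 cos(120°) = −1 for i ≠ j.

```python
import numpy as np

# An obtuse system in C^2 (our illustrative choice, not from the paper):
# N + 1 = 3 vectors of norm sqrt(2), at 120-degree angles in the real plane,
# so that <v_i, v_j> = 2*cos(120 deg) = -1 whenever i != j.
vs = [np.sqrt(2) * np.array([np.cos(2 * np.pi * k / 3),
                             np.sin(2 * np.pi * k / 3)], dtype=complex)
      for k in range(3)]

for i in range(3):
    for j in range(3):
        if i != j:
            assert np.isclose(np.vdot(vs[i], vs[j]), -1)

# The associated weights p_i = 1 / (1 + ||v_i||^2)  (= 1/3 each here).
ps = [1 / (1 + np.vdot(v, v).real) for v in vs]

assert np.isclose(sum(ps), 1)                                # identity (1)
assert np.allclose(sum(p * v for p, v in zip(ps, vs)), 0)    # identity (2)
print("Lemma 2.1 verified")
```

Any obtuse system would do here; the equilateral one is just the most symmetric choice.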
Lemma 2.2
We also have

    Σ_{i=1}^{N+1} p_i |v_i⟩⟨v_i| = I_{C^N} .    (3)

Proof:
As the vectors (√p_i v̂_i)_{i∈{1,...,N+1}} form an orthonormal basis of C^{N+1}, we have

    I_{C^{N+1}} = Σ_{i=1}^{N+1} p_i |v̂_i⟩⟨v̂_i| .

Now put u = (1, 0) and ṽ_i = (0, v_i), for all i = 1, ..., N+1, so that v̂_i = u + ṽ_i. We get

    I_{C^{N+1}} = Σ_{i=1}^{N+1} p_i |u + ṽ_i⟩⟨u + ṽ_i|
                = Σ_{i=1}^{N+1} p_i |u⟩⟨u| + Σ_{i=1}^{N+1} p_i |u⟩⟨ṽ_i| + Σ_{i=1}^{N+1} p_i |ṽ_i⟩⟨u| + Σ_{i=1}^{N+1} p_i |ṽ_i⟩⟨ṽ_i| .

Using (1) and (2), we get

    I_{C^{N+1}} = |u⟩⟨u| + Σ_{i=1}^{N+1} p_i |ṽ_i⟩⟨ṽ_i| .
In particular we have

    Σ_{i=1}^{N+1} p_i |v_i⟩⟨v_i| = I_{C^N} ,

that is, the announced equality. □

Let us consider an example that we shall follow along the article. On C², the 3 vectors

    v₁ = ( i ) ,   v₂ = ( −i ) ,   v₃ = −( i  i )

form an obtuse system of C². The associated p_i's are then respectively

    p₁ = 1/3 ,   p₂ = 1/4 ,   p₃ = 5/12 .

Consider a random variable X, with values in C^N, which can take only N+1 different non-null values v₁, ..., v_{N+1}, with strictly positive probabilities p₁, ..., p_{N+1} respectively. We shall denote by X¹, ..., X^N the coordinates of X in C^N. We say that X is centered if its expectation is 0, that is, if E[X^i] = 0 for all i. We say that X is normalized if its covariance matrix is I, that is, if

    cov(X^i, X^j) = E[X^i X̄^j] − E[X^i] E[X̄^j] = δ_{i,j}

for all i, j = 1, ..., N.

We consider the canonical version of X: the probability space (Ω, F, P) where Ω = {1, ..., N+1}, F is the full σ-algebra of Ω, the probability measure P is given by P({i}) = p_i, and the random variable X is given by X(i) = v_i, for all i ∈ Ω. The coordinates of v_i are denoted by v_i^k, for k = 1, ..., N, so that X^k(i) = v_i^k.

In the same way as above we put

    v̂_i = (1, v_i) ∈ C^{N+1}

for all i = 1, ..., N+1.

We shall also consider the deterministic random variable X⁰ on (Ω, F, P) which is always equal to 1. For i = 0, ..., N let X̃^i be the random variable defined by

    X̃^i(j) = √p_j X^i(j)

for all i = 0, ..., N and all j = 1, ..., N+1.

Proposition 2.3 The following assertions are equivalent.

1) X is centered and normalized.

2) The (N+1) × (N+1) matrix ( X̃^i(j) )_{i,j} is a unitary matrix.

3) The (N+1) × (N+1) matrix ( √p_i v̂_i^j )_{i,j} is a unitary matrix.

4) The family {v₁, ..., v_{N+1}} is an obtuse system with

    p_i = 1 / (1 + ‖v_i‖²)

for all i = 1, ..., N+1.
Proof: 1) ⇒ 2): If X is centered and normalized, each component X^i has mean zero and the scalar products between components are given by the identity matrix. Hence, for all i in {1, ..., N}, we get

    E[X^i] = 0  ⟺  Σ_{k=1}^{N+1} p_k v_k^i = 0 ,    (4)

and for all i, j = 1, ..., N,

    E[X^i X̄^j] = δ_{i,j}  ⟺  Σ_{k=1}^{N+1} p_k v_k^i v̄_k^j = δ_{i,j} .    (5)

Now, using Eqs. (4) and (5), we get, for all i, j = 1, ..., N,

    ⟨X̃⁰, X̃⁰⟩ = Σ_{k=1}^{N+1} p_k = 1 ,
    ⟨X̃⁰, X̃^i⟩ = Σ_{k=1}^{N+1} √p_k √p_k v_k^i = 0 ,
    ⟨X̃^i, X̃^j⟩ = Σ_{k=1}^{N+1} √p_k v̄_k^i √p_k v_k^j = δ_{i,j} .

The unitarity follows immediately.

2) ⇒ 1): If the matrix ( X̃^i(j) )_{i,j} is unitary, the scalar products of its column vectors give the mean 0 and the covariance I for the random variable X.

2) ⇔ 3): The matrix ( √p_i v̂_i^j )_{i,j} is the transpose of the matrix ( X̃^i(j) )_{i,j}. Therefore, if one of these two matrices is unitary, so is the other one.

3) ⇔ 4): The matrix ( √p_i v̂_i^j )_{i,j} is unitary if and only if

    ⟨√p_i v̂_i , √p_j v̂_j⟩ = δ_{i,j}

for all i, j = 1, ..., N+1. The condition ⟨√p_i v̂_i, √p_i v̂_i⟩ = 1 is equivalent to p_i (1 + ‖v_i‖²) = 1, whereas, for i ≠ j, the condition ⟨√p_i v̂_i, √p_j v̂_j⟩ = 0 is equivalent to √p_i √p_j (1 + ⟨v_i, v_j⟩) = 0, that is, ⟨v_i, v_j⟩ = −1. This gives the result. □
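Proposition 2.3 can also be checked numerically. The sketch below (our illustration; the simplex-type obtuse system in C² is an assumed example, not one from the paper) builds the (N+1)×(N+1) matrix of assertion 3), with rows √p_i v̂_i, and verifies that it is unitary.

```python
import numpy as np

# An obtuse system in C^2 (our illustrative choice, not from the paper):
# three vectors of norm sqrt(2) at 120-degree angles, so <v_i, v_j> = -1
# for i != j, and p_i = 1/(1 + ||v_i||^2) = 1/3 for each.
vs = [np.sqrt(2) * np.array([np.cos(2 * np.pi * k / 3),
                             np.sin(2 * np.pi * k / 3)], dtype=complex)
      for k in range(3)]
ps = [1 / (1 + np.vdot(v, v).real) for v in vs]

# Assertion 3) of Proposition 2.3: the matrix whose rows are sqrt(p_i) * v-hat_i,
# where v-hat_i = (1, v_i) in C^{N+1}, is unitary.
U = np.array([np.sqrt(p) * np.concatenate(([1.0 + 0j], v))
              for p, v in zip(ps, vs)])

assert np.allclose(U @ U.conj().T, np.eye(3))
assert np.allclose(U.conj().T @ U, np.eye(3))
print("Proposition 2.3, assertion 3) verified")
```

The row orthonormality is exactly the computation in step 3) ⇔ 4): ⟨√p_i v̂_i, √p_j v̂_j⟩ = √(p_i p_j)(1 + ⟨v_i, v_j⟩).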
Definition 1
Random variables in C^N which take only N+1 different values with strictly positive probability and which are centered and normalized are called obtuse random variables in C^N.

We shall now present several results which show the particular character of obtuse random variables. The idea is that, in a way, they generate all the finitely supported probability distributions on C^N.

First of all, we show that obtuse random variables on C^N with a prescribed probability distribution {p₁, ..., p_{N+1}} are essentially unique.

Theorem 2.4
Let X be an obtuse random variable on C^N having {p₁, ..., p_{N+1}} as associated probabilities. Then the following assertions are equivalent.

i) The random variable Y is an obtuse random variable on C^N with the same probabilities {p₁, ..., p_{N+1}}.

ii) There exists a unitary operator U on C^N such that Y = U X.

Proof:
One direction is obvious: if Y = U X, then

    E[Y] = U E[X] = 0 ,
    E[Y Y∗] = E[U X X∗ U∗] = U E[X X∗] U∗ = U I U∗ = I .
Hence Y is a centered and normalized random variable on C^N taking N+1 different values; hence it is an obtuse random variable. The probabilities associated to Y are clearly the same as for X.

In the converse direction, let v₁, ..., v_{N+1} be the possible values of X, associated to the probabilities p₁, ..., p_{N+1} respectively. Let w₁, ..., w_{N+1} be the ones associated to Y. In particular, the vectors

    v̂_i = √p_i (1, v_i) ,   i = 1, ..., N+1 ,

form an orthonormal basis of C^{N+1}. The same holds for the ŵ_i's. Hence there exists a unitary operator V on C^{N+1} such that ŵ_i = V v̂_i, for all i = 1, ..., N+1.

We shall index the coordinates of C^{N+1} from 0 to N, in order to be compatible with the embedding of C^N that we have chosen. In particular we have

    V_{00} + Σ_{j=1}^{N} V_{0j} v_i^j = 1

for all i = 1, ..., N+1. This gives in particular

    Σ_{j=1}^{N} V_{0j} ( v_1^j − v_i^j ) = 0    (6)

for all i ∈ {2, ..., N+1}.

As the v̂_i's are linearly independent, so are the vectors (1/√p₁) v̂₁ − (1/√p_i) v̂_i, for i = 2, ..., N+1. Furthermore, we have

    (1/√p₁) v̂₁ − (1/√p_i) v̂_i = (0, v₁ − v_i) ;

this means that the v₁ − v_i's, for i = 2, ..., N+1, are linearly independent.

As a consequence, the unique solution of the system (6) is V_{0j} = 0 for all j = 1, ..., N. This obviously implies V_{00} = 1.

The same kind of reasoning applied to the relation v̂_i = V∗ ŵ_i shows that the column coefficients V_{j0}, j = 1, ..., N, are also all vanishing. Finally the operator V is of the form

    V = ( 1  0 ; 0  U )

for some unitary operator U on C^N. This gives the result. □

Having proved this uniqueness result, we shall now prove that obtuse random variables generate all the other random variables (at least those with finite support). First of all, a rather simple remark which shows that the choice of taking N+1 different values is the minimal one for centered and normalized random variables in C^N.

Proposition 2.5
Let X be a centered and normalized random variable in C^d, taking n different values v₁, ..., v_n with probabilities p₁, ..., p_n respectively. Then we must have n ≥ d + 1.

Proof:
Let X be centered and normalized in C^d, taking the values v₁, ..., v_n with probabilities p₁, ..., p_n, and assume that n ≤ d, that is, n < d + 1. Put

    X̃⁰ = ( √p₁ , ... , √p_n ) ,   X̃¹ = ( √p₁ v₁¹ , ... , √p_n v_n¹ ) ,   ... ,   X̃^d = ( √p₁ v₁^d , ... , √p_n v_n^d ) .

They are d+1 vectors of C^n. We have, for all i, j = 1, ..., d,

    ⟨X̃⁰, X̃⁰⟩ = Σ_{i=1}^{n} p_i = 1 ,
    ⟨X̃⁰, X̃^i⟩ = Σ_{k=1}^{n} p_k v_k^i = E[X^i] = 0 ,
    ⟨X̃^i, X̃^j⟩ = Σ_{k=1}^{n} p_k v̄_k^i v_k^j = E[X̄^i X^j] = δ_{i,j} .

Hence the family of d+1 vectors X̃⁰, ..., X̃^d is orthonormal in C^n. This is impossible if n < d+1. □

We can now state the theorem which shows how general finitely supported random variables on C^d are generated by the obtuse ones. We concentrate on centered and normalized random variables only, for they obviously generate all the others, up to an affine transform of C^d.

Theorem 2.6
Let n ≥ d+1 and let X be a centered and normalized random variable in C^d, taking n different values v₁, ..., v_n, with probabilities p₁, ..., p_n respectively. If Y is any obtuse random variable on C^{n−1} associated to the probabilities p₁, ..., p_n, then there exists a partial isometry A from C^{n−1} to C^d, with Ran A = C^d, such that X = A Y.
Proof:
Assume that the obtuse random variable Y takes the values w₁, ..., w_n in C^{n−1}. We wish to find a d × (n−1) matrix A such that

    A_{i1} w_j¹ + A_{i2} w_j² + ... + A_{i,n−1} w_j^{n−1} = v_j^i    (7)

for all i = 1, ..., d and all j = 1, ..., n. In particular, for each fixed i = 1, ..., d, we have the following subsystem of n−1 equations in the n−1 unknowns A_{i1}, ..., A_{i,n−1}:

    Σ_{k=1}^{n−1} w_1^k A_{ik} = v_1^i
    Σ_{k=1}^{n−1} w_2^k A_{ik} = v_2^i
    ...
    Σ_{k=1}^{n−1} w_{n−1}^k A_{ik} = v_{n−1}^i .    (8)

The vectors w₁ = (w_1¹, ..., w_1^{n−1}), ..., w_{n−1} = (w_{n−1}¹, ..., w_{n−1}^{n−1}) are linearly independent. Hence the system (8) can be solved and furnishes the coefficients A_{ik}, k = 1, ..., n−1. We have to check that these coefficients are compatible with all the equations of (7). Actually, the only equation from (7) that we have forgotten in (8) is

    Σ_k A_{ik} w_n^k = v_n^i .

Multiplying the j-th equation of (8) by p_j and summing over j = 1, ..., n−1, we get

    Σ_{j=1}^{n−1} Σ_k A_{ik} p_j w_j^k = Σ_{j=1}^{n−1} p_j v_j^i .

This gives, using E[X^i] = E[Y^i] = 0,

    Σ_k A_{ik} ( −p_n w_n^k ) = −p_n v_n^i ,

which is the required relation.

We have proved the relation X = A Y, with A a linear map from C^{n−1} to C^d. The fact that X is normalized can be written as E[X X∗] = I_d. But

    E[X X∗] = E[A Y Y∗ A∗] = A E[Y Y∗] A∗ = A I_{n−1} A∗ = A A∗ .

Hence A must satisfy A A∗ = I_d, which is exactly saying that A is a partial isometry with range C^d. □

Obtuse random variables are naturally associated to some 3-tensors with particular symmetries. This is what we shall prove here. In this article, a 3-tensor on C^N is an element of (C^N)∗ ⊗ C^N ⊗ C^N, that is, a linear map from C^N to C^N ⊗ C^N. Coordinate-wise, it is represented by a collection of coefficients (S_{ijk})_{i,j,k=1,...,N}. It acts on C^N as

    ( S(x) )_{ij} = Σ_{k=1}^{N} S_{ijk} x_k .

We shall see below that obtuse random variables on C^N have a naturally associated 3-tensor on C^{N+1}. Note that, because of our notation choice X⁰, X¹, ..., X^N, the 3-tensor is indexed by {0, 1, ..., N} instead of {1, ..., N+1}.

Proposition 2.7
Let X be an obtuse random variable in C^N. Then there exists a unique 3-tensor S on C^{N+1} such that

    X^i X^j = Σ_{k=0}^{N} S_{ijk} X^k ,    (9)

for all i, j = 0, ..., N. This 3-tensor S is given by

    S_{ijk} = E[ X^i X^j X̄^k ] ,    (10)

for all i, j, k = 0, ..., N. We also have the relation, for all i, j = 0, ..., N,

    X^i X̄^j = Σ_{k=0}^{N} S_{ikj} X̄^k .    (11)

Proof: As X is an obtuse random variable, that is, a centered and normalized random variable in C^N taking exactly N+1 different values, the random variables {X⁰, X¹, ..., X^N} are orthonormal in L²(Ω, F, P); hence they form an orthonormal basis of L²(Ω, F, P), for the latter space is (N+1)-dimensional. These random variables being bounded, the products X^i X^j are still elements of L²(Ω, F, P), hence they can be written, in a unique way, as linear combinations of the X^k's. As a consequence, there exists a unique 3-tensor S on C^{N+1} such that

    X^i X^j = Σ_{k=0}^{N} S_{ijk} X^k

for all i, j = 0, ..., N. In particular we have

    E[ X^i X^j X̄^k ] = Σ_{l=0}^{N} S_{ijl} E[ X^l X̄^k ] = S_{ijk} .

This shows the identity (10).

Finally, we have, by the orthonormality of the X^k's,

    X^i X̄^j = Σ_{k=0}^{N} E[ X^k X^i X̄^j ] X̄^k ,

that is,

    X^i X̄^j = Σ_{k=0}^{N} S_{ikj} X̄^k ,

by (10). This gives the last identity. □

This 3-tensor S has quite some symmetries; let us detail them.

Proposition 2.8
Let S be the 3-tensor associated to an obtuse random variable X on C^N. Then the 3-tensor S satisfies the following relations, for all i, j, k, l = 0, ..., N:

    S_{i0k} = δ_{ik} ,    (12)

    S_{ijk} is symmetric in (i, j) ,    (13)

    Σ_{m=0}^{N} S_{imj} S_{klm} is symmetric in (i, k) ,    (14)

    Σ_{m=0}^{N} S_{imj} S̄_{lmk} is symmetric in (i, k) .    (15)

Proof: The relation (12) is immediate, for

    S_{i0k} = E[ X^i X⁰ X̄^k ] = E[ X^i X̄^k ] = δ_{ik} .

Equation (13) comes directly from Formula (10), which shows a clear symmetry in (i, j).

By (11) we have

    X^i X̄^j = Σ_{m=0}^{N} S_{imj} X̄^m ,

whereas

    X^k X^l = Σ_{n=0}^{N} S_{kln} X^n .

Altogether this gives

    E[ X^i X̄^j X^k X^l ] = Σ_{m=0}^{N} S_{imj} S_{klm} .

But the left-hand side is clearly symmetric in (i, k), and (14) follows.

In order to prove (15), we write, using (11),

    X^i X̄^j = Σ_{m=0}^{N} S_{imj} X̄^m   and   X^l X̄^k = Σ_{n=0}^{N} S_{lnk} X̄^n .

Altogether we get

    E[ X^i X̄^j X̄^l X^k ] = Σ_{m=0}^{N} S_{imj} S̄_{lmk} .

But the left-hand side is clearly symmetric in (i, k), and (15) is proved. □
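The relations (12)–(15) are easy to test numerically. The following sketch (our illustration; the two-valued obtuse random variable in C¹ is an assumption chosen for its simplicity, not an example from the paper) computes S via formula (10) and checks the defining relation (9) together with the four symmetries.

```python
import numpy as np

# A two-valued obtuse random variable in C^1 (our illustrative choice):
# values i*sqrt(2) and -i/sqrt(2), so <v_1, v_2> = -1, with probabilities
# p_i = 1/(1 + |v_i|^2) = 1/3 and 2/3; it is centered and normalized.
vals = np.array([1j * np.sqrt(2), -1j / np.sqrt(2)])
probs = np.array([1 / 3, 2 / 3])

# Coordinate table: row 0 is X^0 = 1 (deterministic), row 1 is X^1 = X.
X = np.vstack([np.ones(2, dtype=complex), vals])

def E(Z):                       # expectation on Omega = {1, 2}
    return np.sum(probs * Z)

assert abs(E(X[1])) < 1e-12                         # centered
assert np.isclose(E(X[1] * X[1].conj()).real, 1)    # normalized

# 3-tensor of formula (10): S_ijk = E[X^i X^j conj(X^k)].
S = np.array([[[E(X[i] * X[j] * X[k].conj()) for k in range(2)]
               for j in range(2)] for i in range(2)])

# Defining relation (9): X^i X^j = sum_k S_ijk X^k, pointwise on Omega.
assert np.allclose(np.einsum('iw,jw->ijw', X, X),
                   np.einsum('ijk,kw->ijw', S, X))

assert np.allclose(S[:, 0, :], np.eye(2))           # (12)
assert np.allclose(S, S.transpose(1, 0, 2))         # (13)
T14 = np.einsum('imj,klm->ijkl', S, S)
T15 = np.einsum('imj,lmk->ijkl', S, S.conj())
assert np.allclose(T14, T14.transpose(2, 1, 0, 3))  # (14)
assert np.allclose(T15, T15.transpose(2, 1, 0, 3))  # (15)
print("relations (9) and (12)-(15) verified")
```

The transposes (2, 1, 0, 3) swap the i and k axes, which is exactly the symmetry asserted in (14) and (15).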
Let X be an obtuse random variable in C^N, with associated 3-tensor S, and let (Ω, F, P_S) be the canonical space of X. Note that we have added the dependence on S in the probability measure P_S. The reason is that, when changing the obtuse random variable X on C^N, the canonical space Ω and the canonical σ-field F do not change; only the canonical measure P changes.

We have seen that the space L²(Ω, F, P_S) is an (N+1)-dimensional Hilbert space and that the family {X⁰, X¹, ..., X^N} is an orthonormal basis of that space. Hence, for every obtuse random variable X with associated 3-tensor S, we have a natural unitary operator

    U_S : L²(Ω, F, P_S) → C^{N+1}
          X^i ↦ e_i ,

where {e₀, ..., e_N} is the canonical orthonormal basis of C^{N+1}. The operator U_S is called the canonical isomorphism associated to X.

The interesting point with these isomorphisms U_S is that they canonically transport all the obtuse random variables of C^N onto a common canonical space. But the probabilistic information concerning the random variable X is not correctly transferred via this isomorphism: all the information about the law, the independencies, etc. is lost when identifying X^i with e_i. The only way to recover the probabilistic information about the X^i's on C^{N+1} is to consider the multiplication operator by X^i, defined as follows. On the space L²(Ω, F, P_S), for each i = 0, ..., N, we consider the multiplication operator

    M_{X^i} : L²(Ω, F, P_S) → L²(Ω, F, P_S)
              Y ↦ X^i Y .
These multiplication operators carry all the probabilistic information on X, even through a unitary transform such as U_S, for we have, by the usual functional calculus for normal operators,

    E[ f(X⁰, ..., X^N) ] = ⟨X⁰, f(M_{X⁰}, ..., M_{X^N}) X⁰⟩_{L²(Ω,F,P_S)}
                         = ⟨e₀, U_S f(M_{X⁰}, ..., M_{X^N}) U_S∗ e₀⟩_{C^{N+1}}
                         = ⟨e₀, f(U_S M_{X⁰} U_S∗, ..., U_S M_{X^N} U_S∗) e₀⟩_{C^{N+1}} .

On the space C^{N+1}, with canonical basis {e₀, ..., e_N}, we consider the basic matrices a_{ij}, for i, j = 0, ..., N, defined by

    a_{ij} e_k = δ_{i,k} e_j .

We shall now see that, when carried onto the same canonical space by U_S, the obtuse random variables of C^N admit a simple and compact matrix representation in terms of their 3-tensor.

Theorem 2.9
Let X be an obtuse random variable on C^N, with associated 3-tensor S and canonical isomorphism U_S. Then we have, for all i = 0, ..., N,

    U_S M_{X^i} U_S∗ = Σ_{j,k=0}^{N} S_{ijk} a_{jk} .    (16)

The operator of multiplication by X̄^i is given by

    U_S M_{X̄^i} U_S∗ = Σ_{j,k=0}^{N} S̄_{ikj} a_{jk} .    (17)

Proof: We have, for any fixed i ∈ {0, ..., N} and for all j = 0, ..., N,

    U_S M_{X^i} U_S∗ e_j = U_S M_{X^i} X^j = U_S X^i X^j = U_S Σ_{k=0}^{N} S_{ijk} X^k = Σ_{k=0}^{N} S_{ijk} e_k .

Hence the operator U_S M_{X^i} U_S∗ has the same action on the orthonormal basis {e₀, ..., e_N} as the operator

    Σ_{j,k=0}^{N} S_{ijk} a_{jk} .

This proves the representation (16).

The last identity is just an immediate translation of the relation (11). □
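Theorem 2.9 can be verified numerically as well. In the sketch below (our illustration, with an assumed two-valued obtuse random variable in C¹ chosen for simplicity), L²(Ω, F, P_S) is realized as C² through the weighting f ↦ (√p_j f(j))_j, so that multiplication by X^i is a diagonal matrix and U_S becomes an explicit unitary matrix; the conjugated operator is then compared entrywise with Σ_{j,k} S_{ijk} a_{jk}.

```python
import numpy as np

# A two-valued obtuse random variable in C^1 (our illustrative choice):
vals = np.array([1j * np.sqrt(2), -1j / np.sqrt(2)])
probs = np.array([1 / 3, 2 / 3])
X = np.vstack([np.ones(2, dtype=complex), vals])    # rows: X^0, X^1

def E(Z):
    return np.sum(probs * Z)

S = np.array([[[E(X[i] * X[j] * X[k].conj()) for k in range(2)]
               for j in range(2)] for i in range(2)])

# Realize L^2(Omega, P_S) as C^2 via f -> (sqrt(p_j) f(j))_j. In these
# coordinates X^i becomes a row of the unitary matrix of Proposition 2.3,
# and multiplication by X^i stays the diagonal matrix diag(X^i(j)).
W = np.sqrt(probs) * X          # rows: weighted coordinates of X^0, X^1
U_S = W.conj()                  # unitary sending X^i to the basis vector e_i

assert np.allclose(U_S @ U_S.conj().T, np.eye(2))
for i in range(2):
    M = np.diag(X[i])           # multiplication operator by X^i
    # Formula (16): entry (k, j) of U_S M_{X^i} U_S* equals S_ijk.
    assert np.allclose(U_S @ M @ U_S.conj().T, S[i].T)
print("Theorem 2.9, formula (16) verified")
```

For i = 0 this recovers M_{X⁰} = I, in accordance with relation (12).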
Let us illustrate the previous subsections with our example. To the obtuse system

    v₁ = ( i ) ,   v₂ = ( −i ) ,   v₃ = −( i  i )

of C² is associated the random variable X on C² which takes the values v₁, v₂, v₃ with probabilities p₁ = 1/3, p₂ = 1/4, p₃ = 5/12 respectively. The 3-tensor S associated to X is then directly computable. We present S as a collection of matrices S_j = ( S_{ijk} )_{i,k}, which are then the matrices of multiplication by X^j:

    S₀ = I ,

    S₁ = ( −(1−i)  0  −(2+i)  −(1−i)  0  (2+i) ) ,

    S₂ = ( −(1−i)  0  (2+i)  (1−i)  −i  −(1−i) ) .

These matrices are not symmetric (we shall see in Subsection 3.3 what the symmetry of the matrices S_j corresponds to). We recognize the particular form of S₀, for it corresponds to M_{X⁰} = I.

Complex Doubly-Symmetric 3-Tensors
We now leave obtuse random variables aside for a moment and concentrate on the symmetries we have obtained above. The relation (12) is really specific to obtuse random variables, so we shall leave it aside for now. We concentrate on the relations (13), (14) and (15), which have important consequences for the 3-tensor.
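One can check by a direct computation that any 3-tensor built diagonally from an orthogonal family (the diagonalized form studied below, S_{ijk} = Σ_m v_{m,i} v_{m,j} v̄_{m,k} / ‖v_m‖²) satisfies (13), (14) and (15). Here is a small numerical sketch of that computation, with a randomly generated orthogonal family as an assumed example (ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical orthogonal family in C^{N+1}: orthonormal columns from a QR
# decomposition, rescaled by arbitrary nonzero factors so that the family is
# orthogonal but not normalized.
N, K = 3, 2
Q, _ = np.linalg.qr(rng.normal(size=(N + 1, K)) + 1j * rng.normal(size=(N + 1, K)))
vs = [(1.5 + 0.5j * m) * Q[:, m] for m in range(K)]

# Diagonal 3-tensor: S_ijk = sum_m v_{m,i} v_{m,j} conj(v_{m,k}) / ||v_m||^2.
S = sum(np.einsum('i,j,k->ijk', v, v, v.conj()) / np.vdot(v, v).real
        for v in vs)

# The doubly-symmetric relations (13), (14), (15):
assert np.allclose(S, S.transpose(1, 0, 2))                      # (13)
T14 = np.einsum('imj,klm->ijkl', S, S)
T15 = np.einsum('imj,lmk->ijkl', S, S.conj())
assert np.allclose(T14, T14.transpose(2, 1, 0, 3))               # (14)
assert np.allclose(T15, T15.transpose(2, 1, 0, 3))               # (15)
print("doubly-symmetric relations hold")
```

The converse direction — that these symmetries force such a diagonal form — is the non-trivial content of Theorem 3.1 below.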
Definition 2
A 3-tensor $S$ on $\mathbb{C}^{N+1}$ which satisfies (13), (14) and (15) is called a complex doubly-symmetric 3-tensor on $\mathbb{C}^{N+1}$.

The main result concerning complex doubly-symmetric 3-tensors on $\mathbb{C}^{N+1}$ is that they are the exact generalization for 3-tensors of normal matrices for 2-tensors: they are exactly those 3-tensors which can be diagonalized in some orthonormal basis of $\mathbb{C}^{N+1}$.

Definition 3
A 3-tensor $S$ on $\mathbb{C}^{N+1}$ is said to be diagonalizable in some orthonormal basis $(a_m)_{m=0}^N$ of $\mathbb{C}^{N+1}$ if there exist complex numbers $(\lambda_m)_{m=0}^N$ such that
\[
S = \sum_{m=0}^N \lambda_m\, a_m^* \otimes a_m \otimes a_m\,. \tag{18}
\]
In other words,
\[
S(x) = \sum_{m=0}^N \lambda_m\, \langle a_m\,, x\rangle\, a_m \otimes a_m \tag{19}
\]
for all $x\in\mathbb{C}^{N+1}$.

Note that, as opposed to the case of 2-tensors (that is, matrices), the "eigenvalues" $\lambda_m$ are not completely determined by the representation (19). Indeed, if we put $\widetilde{a}_m = e^{i\theta_m}\, a_m$ for all $m$, then the $\widetilde{a}_m$'s still form an orthonormal basis of $\mathbb{C}^{N+1}$ and we have
\[
S(x) = \sum_{m=0}^N \lambda_m\, e^{i\theta_m}\, \langle \widetilde{a}_m\,, x\rangle\, \widetilde{a}_m \otimes \widetilde{a}_m\,.
\]
Hence the $\lambda_m$'s are only determined up to a phase; only their modulus is determined by the representation (19).

Actually, there are more natural objects that can be associated to diagonalizable 3-tensors: the orthogonal families in $\mathbb{C}^{N+1}$. Indeed, if $S$ is diagonalizable as above, for all $m$ such that $\lambda_m \neq 0$ put $v_m = \lambda_m\, a_m$. The family $\{v_m\,;\ m = 1,\ldots,K\}$ is then an orthogonal family in $\mathbb{C}^{N+1}$ and we have $S(v_m) = v_m \otimes v_m$. In terms of the $v_m$'s, the decomposition (19) of $S$ becomes
\[
S(x) = \sum_{m=1}^K \frac{1}{\|v_m\|^2}\, \langle v_m\,, x\rangle\, v_m \otimes v_m\,. \tag{20}
\]
This is the form of diagonalization we shall retain for 3-tensors. Be aware that in the above representation the vectors are orthogonal, but not normalized anymore. Also note that they represent the eigenvectors of $S$ associated only to the non-vanishing eigenvalues of $S$.

We can now state the main theorem.

Theorem 3.1
A 3-tensor $S$ on $\mathbb{C}^{N+1}$ is diagonalizable in some orthonormal basis if and only if it is doubly-symmetric.

More precisely, the formulas
\[
\mathcal{V} = \left\{ v\in\mathbb{C}^{N+1}\setminus\{0\}\,;\ S(v) = v\otimes v\right\}
\]
and
\[
S(x) = \sum_{v\in\mathcal{V}} \frac{1}{\|v\|^2}\, \langle v\,, x\rangle\, v\otimes v
\]
establish a bijection between the set of complex doubly-symmetric 3-tensors $S$ and the set of orthogonal systems $\mathcal{V}$ in $\mathbb{C}^{N+1}$.

Proof:
First step: let $\mathcal{V} = \{v_1,\ldots,v_K\}$ be an orthogonal family in $\mathbb{C}^{N+1}\setminus\{0\}$. Put
\[
S_{ijk} = \sum_{m=1}^K \frac{1}{\|v_m\|^2}\, v_m^i\, v_m^j\, \overline{v_m^k}\,,
\]
for all $i,j,k = 0,\ldots,N$. We shall check that $S$ is a complex doubly-symmetric 3-tensor on $\mathbb{C}^{N+1}$. The symmetry of $S_{ijk}$ in $(i,j)$ is obvious from the definition. This gives (13).

We have
\[
\sum_{m=0}^N S_{imj}\, S_{klm} = \sum_{m=0}^N \sum_{n,p=1}^K \frac{v_n^i\, v_n^m\, \overline{v_n^j}\, v_p^k\, v_p^l\, \overline{v_p^m}}{\|v_n\|^2\, \|v_p\|^2} = \sum_{n,p=1}^K \frac{v_n^i\, \overline{v_n^j}\, \langle v_p\,, v_n\rangle\, v_p^k\, v_p^l}{\|v_n\|^2\, \|v_p\|^2} = \sum_{n=1}^K \frac{1}{\|v_n\|^2}\, v_n^i\, \overline{v_n^j}\, v_n^k\, v_n^l
\]
and the symmetry in $(i,k)$ is obvious. This gives (14).

We have
\[
\sum_{m=0}^N S_{imj}\, \overline{S_{lmk}} = \sum_{m=0}^N \sum_{n,p=1}^K \frac{v_n^i\, v_n^m\, \overline{v_n^j}\, \overline{v_p^l}\, \overline{v_p^m}\, v_p^k}{\|v_n\|^2\, \|v_p\|^2} = \sum_{n,p=1}^K \frac{v_n^i\, \overline{v_n^j}\, \langle v_p\,, v_n\rangle\, \overline{v_p^l}\, v_p^k}{\|v_n\|^2\, \|v_p\|^2} = \sum_{n=1}^K \frac{1}{\|v_n\|^2}\, v_n^i\, \overline{v_n^j}\, v_n^k\, \overline{v_n^l}
\]
and the symmetry in $(i,k)$ is obvious. This gives (15).

We have proved that the formula
\[
S(x) = \sum_{v\in\mathcal{V}} \frac{1}{\|v\|^2}\, \langle v\,, x\rangle\, v\otimes v \tag{21}
\]
defines a complex doubly-symmetric 3-tensor whenever $\mathcal{V}$ is any family of (non-vanishing) orthogonal vectors.

Second step: given a complex doubly-symmetric 3-tensor $S$ of the form (21), we shall prove that the set $\mathcal{V}$ coincides with the set
\[
\widehat{\mathcal{V}} = \left\{ v\in\mathbb{C}^{N+1}\setminus\{0\}\,;\ S(v) = v\otimes v\right\}\,.
\]
Clearly, if $y\in\mathcal{V}$, we have by (21) $S(y) = y\otimes y$. This proves that $\mathcal{V}\subset\widehat{\mathcal{V}}$.

Now let $v\in\widehat{\mathcal{V}}$. On one side we have $S(v) = v\otimes v$; on the other side we have
\[
S(v) = \sum_{y\in\mathcal{V}} \frac{1}{\|y\|^2}\, \langle y\,, v\rangle\, y\otimes y\,.
\]
In particular, applying $\langle y\,|$, for a fixed $y\in\mathcal{V}$, to the first leg of both sides, we get
\[
\langle y\,, v\rangle\, v = \langle y\,, v\rangle\, y\,,
\]
and thus either $v$ is orthogonal to $y$ or $v = y$. This proves that $v$ is one of the elements $y$ of $\mathcal{V}$, for if $v$ were orthogonal to all the $y\in\mathcal{V}$ we would get $v\otimes v = S(v) = 0$ and $v$ would be the null vector.

We have proved that $\mathcal{V}$ coincides with the set $\{v\in\mathbb{C}^{N+1}\setminus\{0\}\,;\ S(v) = v\otimes v\}$.

Third step: it remains to prove that complex doubly-symmetric 3-tensors $S$ on $\mathbb{C}^{N+1}$ are diagonalizable in some orthonormal basis. The property (13) indicates that the matrices
\[
S_k = (S_{ijk})_{i,j=0,\ldots,N}
\]
are symmetric. But, as they are complex-valued matrices, this does not imply any diagonalization property. Rather, we have the following theorem ([12]).

Theorem 3.2 (Takagi Factorization)
Let $M$ be a complex symmetric matrix. There exist a unitary matrix $U$ and a diagonal matrix $D$ such that
\[
M = U\, D\, U^t = U\, D\, \left(\overline{U}\right)^{-1}\,. \tag{22}
\]
Secondly, we shall need to simultaneously "factorize" the $S_k$'s as above. We shall make use of the following criterion (same reference).

Theorem 3.3 (Simultaneous Takagi factorization)
Let $\mathcal{F} = \{A_i\,;\ i\in J\}$ be a family of complex symmetric matrices in $M_n(\mathbb{C})$. Let $\mathcal{G} = \{A_i\, \overline{A_j}\,;\ i,j\in J\}$. Then there exists a unitary matrix $U$ such that, for all $i$ in $J$, the matrix $U^*\, A_i\, \overline{U}$ is diagonal if and only if the family $\mathcal{G}$ is commuting.

This is the first part of Step three: proving that in our case the matrices $S_i\, \overline{S_j}$ commute. Using the three symmetry properties of $S$ we get
\[
\left(S_i\, \overline{S_j}\, S_k\, \overline{S_l}\right)_{m,n} = \sum_{x,y,z=0}^N S_{mxi}\, \overline{S_{xyj}}\, S_{yzk}\, \overline{S_{znl}} = \sum_{x,y,z=0}^N S_{mxy}\, \overline{S_{xij}}\, S_{yzk}\, \overline{S_{znl}} = \sum_{x,y,z=0}^N S_{zxy}\, \overline{S_{xij}}\, S_{ymk}\, \overline{S_{znl}}
\]
\[
= \sum_{x,y,z=0}^N S_{zxn}\, \overline{S_{xij}}\, S_{ymk}\, \overline{S_{zyl}} = \sum_{x,y,z=0}^N S_{zxi}\, \overline{S_{xnj}}\, S_{ymk}\, \overline{S_{zyl}} = \sum_{x,y,z=0}^N S_{myk}\, \overline{S_{yzl}}\, S_{zxi}\, \overline{S_{xnj}} = \left(S_k\, \overline{S_l}\, S_i\, \overline{S_j}\right)_{m,n}\,.
\]
Hence
\[
S_i\, \overline{S_j}\, S_k\, \overline{S_l} = S_k\, \overline{S_l}\, S_i\, \overline{S_j}\,;
\]
the family $\{S_i\, \overline{S_j}\,;\ i,j = 0,\ldots,N\}$ is commuting. Thus, by Theorem 3.3, the matrices $S_k$ can be simultaneously Takagi-factorized: there exists a unitary matrix $U = (u_{ij})_{i,j=0,\ldots,N}$ such that, for all $k$ in $\{0,\ldots,N\}$,
\[
S_k = U\, D_k\, U^t\,, \tag{23}
\]
where the matrix $D_k$ is diagonal, $D_k = \mathrm{diag}(\lambda_{0k},\ldots,\lambda_{Nk})$. Thus the coefficient $S_{ijk}$ can be written as
\[
S_{ijk} = \sum_{m=0}^N \lambda_{mk}\, u_{im}\, u_{jm}\,.
\]
Let us denote by $a_m$ the $m$-th column vector of $U$, that is, $a_m = (u_{lm})_{l=0,\ldots,N}$. Moreover, we denote by $\lambda_m$ the vector of the $\lambda_{mk}$, $k = 0,\ldots,N$. Since the matrix $U$ is unitary, the vectors $a_m$ form an orthonormal basis of $\mathbb{C}^{N+1}$, and we have
\[
S_{ijk} = \sum_{m=0}^N a_m^i\, a_m^j\, \lambda_{mk}\,.
\]
Our aim now is to prove that $\lambda_m$ is proportional to $\overline{a_m}$. To this end, we shall use the symmetry properties of $S$. From the simultaneous reduction (23), we get
\[
S_j\, \overline{S_q} = U\, D_j\, \overline{D_q}\, U^*\,.
\]
Thus we have
\[
\left(S_j\, \overline{S_q}\right)_{i,r} = \sum_{m=0}^N S_{imj}\, \overline{S_{mrq}} = \sum_{m=0}^N a_m^i\, \lambda_{mj}\, \overline{\lambda_{mq}}\, \overline{a_m^r}\,.
\]
In particular we have, for all $p\in\{0,\ldots,N\}$,
\[
\sum_{i,j,q,r=0}^N \left(S_j\, \overline{S_q}\right)_{i,r}\, \overline{a_p^i}\, \overline{\lambda_{pj}}\, \lambda_{pq}\, a_p^r = \sum_{m=0}^N \langle a_p\,, a_m\rangle\, \langle \lambda_p\,, \lambda_m\rangle\, \langle \lambda_m\,, \lambda_p\rangle\, \langle a_m\,, a_p\rangle = \|\lambda_p\|^4\,.
\]
But applying the symmetry (15) to the coefficients $(S_j\,\overline{S_q})_{i,r}$, the same quantity is also equal to
\[
\sum_{m=0}^N \left\langle \overline{a_p}\,, \lambda_m\right\rangle\, \langle \lambda_m\,, \lambda_p\rangle\, \left\langle \lambda_p\,, \overline{a_m}\right\rangle\, \langle a_p\,, a_m\rangle = \left|\left\langle \overline{a_p}\,, \lambda_p\right\rangle\right|^2\, \|\lambda_p\|^2\,.
\]
This gives
\[
\left|\left\langle \overline{a_p}\,, \lambda_p\right\rangle\right|^2\, \|\lambda_p\|^2 = \|\lambda_p\|^4\,,\qquad\text{that is,}\qquad \left|\left\langle \overline{a_p}\,, \lambda_p\right\rangle\right| = \|\lambda_p\| = \left\|\overline{a_p}\right\|\, \|\lambda_p\|\,.
\]
By the equality case in the Cauchy–Schwarz inequality, there exists $\mu_p\in\mathbb{C}$ such that $\lambda_p = \mu_p\, \overline{a_p}$, for all $p = 0,\ldots,N$. This way, the 3-tensor $S$ can be written as
\[
S_{ijk} = \sum_{m=0}^N \mu_m\, a_m^i\, a_m^j\, \overline{a_m^k}\,. \tag{24}
\]
In other words,
\[
S(x) = \sum_{m=0}^N \mu_m\, \langle a_m\,, x\rangle\, a_m \otimes a_m\,.
\]
We have obtained the orthonormal diagonalization of $S$. The proof is complete. $\Box$

The theorem above is a general diagonalization theorem for 3-tensors. For the moment it does not take into account the relation (12). When we bring this relation into the game, the obtuse systems appear.
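The diagonalization just established is easy to test numerically: starting from an orthogonal family, build the 3-tensor by formula (21) and check the fixed-point property $S(v) = v\otimes v$ of Theorem 3.1. A minimal sketch, with the inner product taken conjugate-linear in its first argument and the conjugation on the third slot of $S$, as in the proof above:

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthogonal (not normalized) family in C^3, obtained by scaling
# two columns of a random unitary matrix.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
V = [2.0 * Q[:, 0], 0.5 * Q[:, 1]]       # K = 2 vectors is allowed here

# S_{ijk} = sum_v v^i v^j conj(v^k) / ||v||^2, so that
# S(x)_{ij} = sum_k S_{ijk} x^k = sum_v <v, x> (v ⊗ v)_{ij} / ||v||^2
S = sum(np.einsum('i,j,k->ijk', v, v, v.conj()) / np.vdot(v, v).real
        for v in V)

def S_of(x):
    return np.einsum('ijk,k->ij', S, x)

# Each member of the family is a fixed point: S(v) = v ⊗ v
for v in V:
    assert np.allclose(S_of(v), np.outer(v, v))

# A vector orthogonal to the whole family is sent to 0
assert np.allclose(S_of(Q[:, 2]), 0)
```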
Theorem 3.4
Let $S$ be a doubly-symmetric 3-tensor on $\mathbb{C}^{N+1}$ satisfying also the relation
\[
S_{i0k} = \delta_{ik}
\]
for all $i,k = 0,\ldots,N$. Then the orthogonal system $\mathcal{V}$ such that
\[
S(x) = \sum_{v\in\mathcal{V}} \frac{1}{\|v\|^2}\, \langle v\,, x\rangle\, v\otimes v \tag{25}
\]
is made of exactly $N+1$ vectors $v_1,\ldots,v_{N+1}$, all of them satisfying $v_i^0 = 1$. In particular the family of $N+1$ vectors of $\mathbb{C}^N$, obtained by restricting the $v_i$'s to their $N$ last coordinates, forms an obtuse system in $\mathbb{C}^N$.

Proof:
First assume that $\mathcal{V} = \{v_1,\ldots,v_K\}$. By hypothesis, we have
\[
S_{ijk} = \sum_{m=1}^K \frac{1}{\|v_m\|^2}\, v_m^i\, v_m^j\, \overline{v_m^k}\,,
\]
for all $i,j,k = 0,\ldots,N$. With the hypothesis (12) we have in particular
\[
S_{i0k} = \sum_{m=1}^K \frac{1}{\|v_m\|^2}\, v_m^i\, v_m^0\, \overline{v_m^k} = \delta_{ik}
\]
for all $i,k = 0,\ldots,N$.

Consider the orthonormal family of $\mathbb{C}^{N+1}$ made of the vectors $e_m = v_m/\|v_m\|$. We have obtained above the relation
\[
\sum_{m=1}^K v_m^0\, |e_m\rangle\langle e_m| = I
\]
as matrices acting on $\mathbb{C}^{N+1}$. The above is thus a spectral decomposition of the identity matrix; this implies that the $e_m$'s are exactly $N+1$ vectors and that all the $v_m^0$ are equal to $1$.

This proves the first part of the theorem. The last part, concerning obtuse systems, is now obvious and was already noticed when we introduced obtuse systems. $\Box$

In particular we have proved the following theorem.
Theorem 3.5
The set of doubly-symmetric 3-tensors $S$ on $\mathbb{C}^{N+1}$ which satisfy also the relation
\[
S_{i0k} = \delta_{ik}
\]
for all $i,k = 0,\ldots,N$, is in bijection with the set of obtuse random variables $X$ on $\mathbb{C}^N$. The bijection is described as follows, with the convention $X^0 = \mathbb{1}$:

– the random variable $X$ is the only random variable satisfying
\[
X^i\, X^j = \sum_{k=0}^N S_{ijk}\, X^k\,,
\]
for all $i,j = 1,\ldots,N$;

– the 3-tensor $S$ is obtained by
\[
S_{ijk} = \mathbb{E}\left[X^i\, X^j\, \overline{X^k}\right]\,,
\]
for all $i,j,k = 0,\ldots,N$.

In particular the different possible values taken by $X$ in $\mathbb{C}^N$ coincide with the vectors $w_n\in\mathbb{C}^N$ made of the last $N$ coordinates of the eigenvectors $v_n$ associated to $S$ in the representation (25). The associated probabilities are then $p_n = 1/(1+\|w_n\|^2) = 1/\|v_n\|^2$.

In [4] were introduced the notions of real obtuse random variables and their associated real doubly-symmetric 3-tensors. In the same way, certain symmetries were obtained on the tensor, which corresponded exactly to the condition for being diagonalizable in some real orthonormal basis. Note that in [4] the situation for the diagonalization theorem was much easier, for the symmetries associated to the 3-tensor came down to the simultaneous diagonalization of commuting symmetric real matrices.

The question we want to answer here is: how do we recover the real case from the complex case? By this we mean: under what condition does a complex doubly-symmetric 3-tensor correspond to a real one, that is, to real-valued random variables? Surprisingly enough, the answer is not "when the coefficients $S_{ijk}$ are all real". Let us see that with a counter-example.

Let us consider the one-dimensional random variable $X$ which takes the values $i$ and $-i$ with probability $1/2$ each. As usual, denote by $X^0$ the constant random variable equal to $1$ and by $X^1$ the random variable $X$. We have the relations
\[
X^0\, X^0 = X^0\,,\qquad X^0\, X^1 = X^1\,,\qquad X^1\, X^0 = X^1\,,\qquad X^1\, X^1 = -X^0\,,
\]
which give us the following matrices for the associated 3-tensor $S$:
\[
S^0 = (S_{i0k})_{i,k} = M_{X^0} = \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}\,,\qquad S^1 = (S_{i1k})_{i,k} = M_{X^1} = \begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix}\,.
\]
They are real-valued matrices, but they are associated to a complex (non-real) random variable.

In fact, the major difference between a complex (non-real) doubly-symmetric 3-tensor and a real doubly-symmetric 3-tensor is the commutation property of the indices $i$ and $k$ in the coefficients $S_{ijk}$. Let us make this more precise.

Definition 4
A doubly-symmetric 3-tensor $S$ on $\mathbb{C}^{N+1}$ which also satisfies (12) is said to be real if the associated obtuse random variable $X$ on $\mathbb{C}^N$ is real-valued.

Proposition 3.6
Let $S$ be a complex doubly-symmetric 3-tensor on $\mathbb{C}^{N+1}$ which satisfies (12). The following assertions are equivalent.

1) For all $i,j,k$ we have $S_{ijk} = \overline{S_{kji}}$.
2) The 3-tensor S is real. Proof:
The commutation relation implies that
\[
\mathbb{E}\left[X^i\, X^j\, \overline{X^k}\right] = \overline{\mathbb{E}\left[X^k\, X^j\, \overline{X^i}\right]} = \mathbb{E}\left[\overline{X^k}\, \overline{X^j}\, X^i\right]\,,
\]
for all $i,j,k = 0,1,\ldots,N$. Since $\{X^j\,;\ j = 0,1,\ldots,N\}$ is an orthonormal basis of the canonical space $L^2(\Omega,\mathcal{F},\mathbb{P}_S)$, we get
\[
X^i\, \overline{X^k} = \overline{X^i}\, X^k\,,\quad\text{a.s.,}
\]
for all $i,k$. Hence $X^i\, \overline{X^k}$ is almost surely real for all $i,k$. Considering the case $k = 0$ implies that $X^i$ is almost surely real, and the result follows. $\Box$

In the counter-example above, one can check that $S_{011} = 1$ and $S_{110} = -1$; the commutation condition is not satisfied.

3.4 From Complex to Real Obtuse Random Variables
The aim of this subsection is to prove that every complex obtuse random variable is obtained by applying a unitary transform of $\mathbb{C}^N$ to some real obtuse random variable. This will be obtained in several steps; here is the first one.

Proposition 3.7
1) Let $X$ be an obtuse random variable in $\mathbb{C}^N$ and let $X^1,\ldots,X^N$ be its coordinate random variables. If $Y^1,\ldots,Y^N$ are real random variables of the form $Y = U\, X$, with $U$ a unitary operator of $\mathbb{C}^N$, then $Y$ is a real obtuse random variable of $\mathbb{R}^N$.

2) Conversely, if $Y$ is a real obtuse random variable on $\mathbb{R}^N$ and if $X = U\, Y$ with $U$ a unitary operator on $\mathbb{C}^N$, then $X$ is an obtuse random variable in $\mathbb{C}^N$.

Proof:
This is essentially the same argument as in Theorem 2.4, at least for the second property. For the first property one has to note that, if $Y$ is real-valued and $Y = U\, X$, then $Y^t = Y^*$, so that
\[
\mathbb{E}\left[Y\, Y^t\right] = \mathbb{E}\left[Y\, Y^*\right] = U\, \mathbb{E}\left[X\, X^*\right]\, U^* = I\,,
\]
in the same way as in the proof of Theorem 2.4. Hence $Y$ is a centered and normalized real random variable in $\mathbb{R}^N$, taking $N+1$ different values; hence it is a real obtuse random variable of $\mathbb{R}^N$, as is proved in [4] in a theorem similar to Proposition 2.3. $\Box$

Now recall the following classical result.
Proposition 3.8
Let $v_1,\ldots,v_N$ be any linearly free family of $N$ vectors of $\mathbb{C}^N$. Then there exists a unitary operator $U$ on $\mathbb{C}^N$ such that the vectors $w_i = U\, v_i$ are of the form
\[
w_1 = \begin{pmatrix} z_1^1\\ 0\\ \vdots\\ 0\end{pmatrix}\,,\qquad w_2 = \begin{pmatrix} z_2^1\\ z_2^2\\ 0\\ \vdots\\ 0\end{pmatrix}\,,\qquad \ldots\,,\qquad w_{N-1} = \begin{pmatrix} z_{N-1}^1\\ \vdots\\ z_{N-1}^{N-1}\\ 0\end{pmatrix}\,,\qquad w_N = \begin{pmatrix} z_N^1\\ \vdots\\ z_N^{N-1}\\ z_N^N\end{pmatrix}\,.
\]

Proof:
It is clear that, with a well-chosen unitary operator $U_1$, one can map $v_1$ onto $\mathbb{C}\, e_1$. The family $w_1,\ldots,w_N$ of the images of $v_1,\ldots,v_N$ under $U_1$ is free. Furthermore, if we put
\[
w_i = \begin{pmatrix} a_i\\ y_i\end{pmatrix}
\]
with $a_i\in\mathbb{C}$ and $y_i\in\mathbb{C}^{N-1}$, then we claim that the family $\{y_2,\ldots,y_N\}$ is free in $\mathbb{C}^{N-1}$. Indeed, we must have $y_1 = 0$, and if
\[
\sum_{i=2}^N \lambda_i\, y_i = 0
\]
then
\[
\sum_{i=2}^N \lambda_i\, w_i - \frac{\sum_{i=2}^N \lambda_i\, a_i}{a_1}\, w_1 = 0
\]
and all the $\lambda_i$ vanish.

Once this has been noticed, we consider the unitary operator $V$ on $\mathbb{C}^{N-1}$ which maps $y_2$ onto $(1,0,\ldots,0)$ and the unitary operator $U'$ on $\mathbb{C}^N$ given by
\[
U' = \begin{pmatrix} 1 & 0\\ 0 & V\end{pmatrix}\,.
\]
We repeat the procedure until all the coordinates are exhausted. $\Box$
Now, here is an independence property shared by the obtuse systems.
Proposition 3.9
Every strict sub-family of an obtuse family is linearly free.
Proof:
Let $\{v_1,\ldots,v_{N+1}\}$ be an obtuse family of $\mathbb{C}^N$. Let us show that $\{v_1,\ldots,v_N\}$ is free, which is enough for our claim. If we had
\[
v_N = \sum_{i=1}^{N-1} \lambda_i\, v_i
\]
then, taking the scalar product with $v_N$, we would get
\[
\|v_N\|^2 = \sum_{i=1}^{N-1} -\lambda_i\,,
\]
whereas taking the scalar product with $v_{N+1}$ would give
\[
-1 = \sum_{i=1}^{N-1} -\lambda_i\,.
\]
This would imply $\|v_N\|^2 = -1$, which is impossible. $\Box$
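Obtuse systems are easy to produce concretely, and Proposition 3.9 can then be checked by hand. The sketch below uses an illustrative real obtuse system of $\mathbb{R}^2$ (the three vectors $\sqrt{2}\,(\cos(2\pi m/3), \sin(2\pi m/3))$, with the probabilities $p_m = 1/(1+\|v_m\|^2) = 1/3$ of Theorem 3.5), verifies the defining relations, and checks that every strict sub-family is free:

```python
import numpy as np
import itertools

# An obtuse family of R^2: three vectors at 120 degrees, norm sqrt(2).
angles = 2 * np.pi * np.arange(3) / 3
v = np.sqrt(2) * np.stack([np.cos(angles), np.sin(angles)], axis=1)
p = np.full(3, 1 / 3)            # p_m = 1/(1 + ||v_m||^2) = 1/3

# Obtuse relations: <v_i, v_j> = -1 for i != j, and the associated
# random variable is centered and normalized.
G = v @ v.T
assert np.allclose(G[~np.eye(3, dtype=bool)], -1)
assert np.allclose((p[:, None] * v).sum(axis=0), 0)               # E[X] = 0
assert np.allclose(np.einsum('m,mi,mj->ij', p, v, v), np.eye(2))  # E[X X^t] = I

# Proposition 3.9: every strict sub-family is linearly free
for pair in itertools.combinations(range(3), 2):
    assert abs(np.linalg.det(v[list(pair)])) > 1e-9
```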
Finally, using Proposition 3.8 we make an important step towards the main result.
Proposition 3.10
Let $\{w_1,\ldots,w_{N+1}\}$ be an obtuse system of $\mathbb{C}^N$ such that
\[
w_1 = \begin{pmatrix} z_1^1\\ 0\\ \vdots\\ 0\end{pmatrix}\,,\qquad w_2 = \begin{pmatrix} z_2^1\\ z_2^2\\ 0\\ \vdots\\ 0\end{pmatrix}\,,\qquad \ldots\,,\qquad w_{N-1} = \begin{pmatrix} z_{N-1}^1\\ \vdots\\ z_{N-1}^{N-1}\\ 0\end{pmatrix}\,.
\]
Then there exist modulus-one complex numbers $\varphi_1,\ldots,\varphi_N$ such that, for every $i = 1,\ldots,N+1$, we have
\[
w_i = \begin{pmatrix} \varphi_1\, a_i^1\\ \vdots\\ \varphi_N\, a_i^N\end{pmatrix}
\]
with the $a_i^j$'s being real.

Proof:
First note that the $z_i^i$'s, $i = 1,\ldots,N$, cannot vanish, for otherwise the family $\{w_1,\ldots,w_N\}$ would not be linearly free, contradicting Proposition 3.9.

Secondly, the scalar product conditions $\langle w_1\,, w_j\rangle = -1$, for $j = 2,\ldots,N+1$, imply
\[
z_2^1 = \ldots = z_{N+1}^1 = -\left(\overline{z_1^1}\right)^{-1}\,.
\]
In particular, all the $z_i^1$'s, $i = 2,\ldots,N+1$, have the same argument.

With the conditions $\langle w_2\,, w_j\rangle = -1$, for $j = 3,\ldots,N+1$, we get
\[
\overline{z_2^2}\, z_3^2 = \ldots = \overline{z_2^2}\, z_{N+1}^2 = -1 - \left|z_2^1\right|^2\,.
\]
Hence all the $z_i^2$'s are equal for $i = 3,\ldots,N+1$, and all the $z_i^2$'s have the same argument ($i = 2,\ldots,N+1$).

One easily obtains the result in the same way, line by line. $\Box$
Theorem 3.11
For every obtuse family $v_1,\ldots,v_{N+1}$ of $\mathbb{C}^N$ there exists a unitary operator $U$ of $\mathbb{C}^N$ such that the vectors $w_i = U\, v_i$ all have real coordinates. This family of vectors of $\mathbb{R}^N$ forms a real obtuse system of $\mathbb{R}^N$, with the same probabilities as the initial family of the $v_i$'s.

In other words, every complex obtuse random variable $X$ in $\mathbb{C}^N$ is of the form $X = U\, Y$, for some unitary operator $U$ on $\mathbb{C}^N$ and some real obtuse random variable $Y$ on $\mathbb{R}^N$.

As every complex obtuse random variable $X$ can be obtained as $U\, Y$ for some unitary operator $U$ and some real obtuse random variable $Y$, we shall concentrate for a while on the unitary transformations of obtuse random variables and their consequences on the associated 3-tensors, on the multiplication operators, etc.

As a first step, let us see how the associated 3-tensor is transformed under a unitary map of the random variable.

Lemma 3.12
Let $X$ and $Y$ be two obtuse random variables on $\mathbb{C}^N$ such that there exists a unitary operator $U$ on $\mathbb{C}^N$ satisfying $X = U\, Y$. We extend $\mathbb{C}^N$ to $\mathbb{C}^{N+1}$ by adding a vector $e_0$ to the orthonormal basis; we extend $U$ to a unitary operator on $\mathbb{C}^{N+1}$ by imposing $U\, e_0 = e_0$. Let $(u_{ij})_{i,j=0,\ldots,N}$ be the coefficients of $U$ on $\mathbb{C}^{N+1}$.

If $S$ and $T$ are the 3-tensors of $X$ and $Y$ respectively, we then have
\[
S_{ijk} = \sum_{m,n,p=0}^N u_{im}\, u_{jn}\, \overline{u_{kp}}\, T_{mnp}\,, \tag{26}
\]
for all $i,j,k = 0,\ldots,N$. Conversely, the tensor $T$ can be deduced from the tensor $S$ by
\[
T_{ijk} = \sum_{m,n,p=0}^N \overline{u_{mi}}\, \overline{u_{nj}}\, u_{pk}\, S_{mnp}\,.
\]

Proof:
With the extension of $U$ to $\mathbb{C}^{N+1}$ and the coordinates $X^0 = Y^0 = \mathbb{1}$ associated to $X$ and $Y$ as previously, we have
\[
X^i = \sum_{m=0}^N u_{im}\, Y^m
\]
for all $i = 0,\ldots,N$. The 3-tensor $S$ is given by
\[
S_{ijk} = \mathbb{E}\left[X^i\, X^j\, \overline{X^k}\right] = \sum_{m,n,p=0}^N u_{im}\, u_{jn}\, \overline{u_{kp}}\, \mathbb{E}\left[Y^m\, Y^n\, \overline{Y^p}\right] = \sum_{m,n,p=0}^N u_{im}\, u_{jn}\, \overline{u_{kp}}\, T_{mnp}\,.
\]
The converse formula is obvious, replacing $U$ by $U^*$. $\Box$

Definition 5
In the following, if two 3-tensors $S$ and $T$ are connected by a formula of the form (26), we shall denote this by $S = U\circ T$.
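The operation $S = U\circ T$ of Definition 5 is a three-fold contraction that is conveniently written with einsum. The sketch below assumes the conjugation pattern of formula (26), and checks the converse formula of Lemma 3.12 on random data:

```python
import numpy as np

def circ(U, T):
    """S = U ∘ T, formula (26):
    S_{ijk} = sum_{m,n,p} u_{im} u_{jn} conj(u_{kp}) T_{mnp}
    (conjugation on the third index, matching S_{ijk} = E[X^i X^j conj(X^k)])."""
    return np.einsum('im,jn,kp,mnp->ijk', U, U, U.conj(), T)

rng = np.random.default_rng(3)
T = rng.normal(size=(3, 3, 3)) + 1j * rng.normal(size=(3, 3, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

S = circ(Q, T)
# Converse formula of Lemma 3.12: T is recovered as U* ∘ S
assert np.allclose(circ(Q.conj().T, S), T)
```

The recovery works because $U \mapsto U\circ{}$ is a group action: $U^*\circ(U\circ T) = (U^*U)\circ T = I\circ T = T$.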
Let us now see the consequences for the representation of the multiplication operators given by Theorem 2.9. First of all, notice that if $X = U\, Y$ then the underlying probability measures $\mathbb{P}_S$ and $\mathbb{P}_T$ of their canonical spaces are the same; the unitary transform does not change the probabilities, only the values. Hence $X$ and $Y$ are defined on the same probability space $(\Omega,\mathcal{F},\mathbb{P})$. The canonical isomorphisms $U_T$ and $U_S$ are different, though; they differ by a change of basis.

Proposition 3.13  Under the conditions and notations above, we have
\[
U_T\, M_{Y^i}\, U_T^* = \sum_{j,k=0}^N T_{ijk}\, a_{jk}
\]
and
\[
U_S\, M_{X^i}\, U_S^* = \sum_{j,k=0}^N \sum_{m,n,p=0}^N u_{im}\, u_{jn}\, \overline{u_{kp}}\, T_{mnp}\, a_{jk} \tag{27}
\]
\[
\phantom{U_S\, M_{X^i}\, U_S^*} = \sum_{j,k=0}^N S_{ijk}\, a_{jk}\,. \tag{28}
\]

Proof:
We have
\[
U_S\, M_{X^i}\, U_S^* = \sum_{m=0}^N u_{im}\, U_S\, M_{Y^m}\, U_S^* = \sum_{m=0}^N u_{im}\, U_S\, U_T^*\, \left(U_T\, M_{Y^m}\, U_T^*\right)\, U_T\, U_S^* = \sum_{m=0}^N \sum_{n,p=0}^N u_{im}\, T_{mnp}\, U_S\, U_T^*\, a_{np}\, U_T\, U_S^*\,.
\]
An explicit formula for the operator $U_S\, U_T^*\, a_{np}\, U_T\, U_S^*$ is easily obtained by acting on the basis:
\[
U_S\, U_T^*\, a_{np}\, U_T\, U_S^*\, e_j = U_S\, U_T^*\, a_{np}\, U_T\, X^j = \sum_{k=0}^N u_{jk}\, U_S\, U_T^*\, a_{np}\, U_T\, Y^k = \sum_{k=0}^N u_{jk}\, U_S\, U_T^*\, a_{np}\, e_k = u_{jn}\, U_S\, U_T^*\, e_p = u_{jn}\, U_S\, Y^p = \sum_{k=0}^N u_{jn}\, \overline{u_{kp}}\, U_S\, X^k = \sum_{k=0}^N u_{jn}\, \overline{u_{kp}}\, e_k\,.
\]
This proves that
\[
U_S\, U_T^*\, a_{np}\, U_T\, U_S^* = \sum_{j,k=0}^N u_{jn}\, \overline{u_{kp}}\, a_{jk}\,.
\]
Hence
\[
U_S\, M_{X^i}\, U_S^* = \sum_{m=0}^N \sum_{n,p=0}^N \sum_{j,k=0}^N u_{im}\, u_{jn}\, \overline{u_{kp}}\, T_{mnp}\, a_{jk}\,.
\]
That is, we get (27) and (28) immediately. $\Box$
The point is that this unitary operator has not yet been obtained very constructively. The following theorem gives it a little more explicitly, in terms of the associated 3-tensor.
Theorem 3.14
Let $X$ be an obtuse random variable in $\mathbb{C}^N$ with associated 3-tensor $S$. Then the matrix $(S_{ij0})_{i,j=0,\ldots,N}$ is symmetric and unitary; it can be decomposed as $V\, V^t$ for some unitary matrix $V$ of $\mathbb{C}^{N+1}$. For any such unitary operator $V$, the random variable $R = V^*\, X$ is a real obtuse random variable.

Proof:
By definition we have $S_{ij0} = \mathbb{E}[X^i\, X^j]$, which is symmetric in $(i,j)$. Now let us check that this matrix is unitary. We have
\[
\sum_{m=0}^N S_{im0}\, \overline{S_{jm0}} = \sum_{m=0}^N \mathbb{E}\left[X^i\, X^m\right]\, \overline{\mathbb{E}\left[X^j\, X^m\right]} = \sum_{m=0}^N \left\langle \overline{X^i}\,, X^m\right\rangle\, \left\langle X^m\,, \overline{X^j}\right\rangle
\]
for the scalar product of $L^2(\Omega,\mathcal{F},\mathbb{P})$,
\[
\phantom{\sum_{m=0}^N S_{im0}\, \overline{S_{jm0}}} = \left\langle \overline{X^i}\,, \overline{X^j}\right\rangle\,,
\]
for the $X^m$'s form an orthonormal basis of $L^2(\Omega,\mathcal{F},\mathbb{P})$,
\[
\phantom{\sum_{m=0}^N S_{im0}\, \overline{S_{jm0}}} = \mathbb{E}\left[X^i\, \overline{X^j}\right] = \delta_{ij}\,.
\]
We have proved the unitarity.

By Takagi's Theorem 3.2, this matrix can be decomposed as $U\, D\, U^t$ for some unitary $U$ and some diagonal matrix $D$. But as it is unitary, we have
\[
I = \left(U\, D\, U^t\right)^*\, U\, D\, U^t = \overline{U}\, \overline{D}\, U^*\, U\, D\, U^t = \overline{U}\, \overline{D}\, D\, U^t\,,
\]
hence
\[
|D|^2 = \overline{D}\, D = U^t\, \overline{U} = I
\]
and the matrix $D$ is unitary too. In particular its entries are complex numbers of modulus $1$. Let $L$ be the diagonal matrix whose entries are square roots of the entries of $D$; they are also of modulus $1$, so that $\overline{L}\, L = I$. Put $V = U\, L$; then
\[
V\, V^t = U\, L\, L\, U^t = U\, D\, U^t\,,\qquad\text{but also}\qquad V\, V^* = U\, L\, \overline{L}\, U^* = U\, U^* = I\,.
\]
We have proved the announced decomposition. We now check the last assertion. Let $v_{ij}$ be the coefficients of $V$. Define the 3-tensor $R = V^*\circ S$, that is,
\[
R_{ijk} = \sum_{m,n,p=0}^N \overline{v_{mi}}\, \overline{v_{nj}}\, v_{pk}\, S_{mnp}\,.
\]
Computing $S_{ijk} = \mathbb{E}[X^i\, X^j\, \overline{X^k}]$ in another way, we get
\[
S_{ijk} = \mathbb{E}\left[X^i\, X^j\, \overline{X^k}\right] = \sum_{m=0}^N \overline{S_{kmj}}\, \mathbb{E}\left[X^i\, X^m\right] = \sum_{m=0}^N S_{im0}\, \overline{S_{kmj}}\,.
\]
Injecting this relation, together with $S_{m\alpha 0} = \sum_{\beta} v_{m\beta}\, v_{\alpha\beta}$, into the expression of $R_{ijk}$ above, we get
\[
R_{ijk} = \sum_{m,n,p,\alpha,\beta=0}^N \overline{v_{mi}}\, \overline{v_{nj}}\, v_{pk}\, v_{m\beta}\, v_{\alpha\beta}\, \overline{S_{p\alpha n}} = \sum_{n,p,\alpha,\beta=0}^N \delta_{i\beta}\, \overline{v_{nj}}\, v_{pk}\, v_{\alpha\beta}\, \overline{S_{p\alpha n}} = \sum_{n,p,\alpha=0}^N \overline{v_{nj}}\, v_{pk}\, v_{\alpha i}\, \overline{S_{p\alpha n}}\,.
\]
But the above expression is clearly symmetric in $(i,k)$, for $S_{p\alpha n}$ is symmetric in $(p,\alpha)$. By Proposition 3.6 this means that the 3-tensor $R$ is real. The theorem is proved. $\Box$

The aim of the next section is to give explicit results concerning the continuous-time limit of random walks made of sums of obtuse random variables. The continuous-time limits give rise to particular martingales on $\mathbb{C}^N$. In the real case the limiting martingales are well understood: they are the so-called normal martingales of $\mathbb{R}^N$ satisfying a structure equation (cf. [4]). In the real case the stochastic behavior of these martingales is intimately related to a certain doubly-symmetric 3-tensor and to its diagonalization. In short, the directions corresponding to the null eigenvalues of the limiting 3-tensor are those where the limit process behaves like a Brownian motion; the other directions (non-vanishing eigenvalues) correspond to a Poisson process behavior.
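The factorization of a symmetric unitary matrix as $V\,V^t$, used in the proof of Theorem 3.14 above, can also be computed directly: writing the matrix as $A + iB$, unitarity forces $A$ and $B$ to be commuting real symmetric matrices, which are diagonalized in a common real orthogonal basis, after which the unimodular eigenvalues are halved in phase. A sketch (the common eigenbasis is found by diagonalizing a generic real combination — a standard trick that assumes genericity):

```python
import numpy as np

def sym_unitary_sqrt(Sig, t=0.37):
    """Factor a symmetric unitary matrix Sig as V @ V.T with V unitary.
    A and B below are commuting real symmetric matrices; a generic
    combination A + t*B is diagonalized to get their common eigenbasis."""
    A, B = Sig.real, Sig.imag
    _, O = np.linalg.eigh(A + t * B)         # real orthogonal eigenbasis
    phases = np.diag(O.T @ Sig @ O)          # unimodular eigenvalues e^{i θ}
    V = O * np.sqrt(phases.astype(complex))  # V = O diag(e^{i θ/2})
    return V

# Test on a random symmetric unitary matrix Q Q^t (Q unitary implies
# Q Q^t is both symmetric and unitary).
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
Sig = Q @ Q.T
V = sym_unitary_sqrt(Sig)
assert np.allclose(V @ V.T, Sig)                 # Sig = V V^t
assert np.allclose(V.conj().T @ V, np.eye(3))    # V unitary
```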
This was developed in detail in [4] and we shall recall the main results below.

In the next section of this article we wish to obtain two types of continuous-time results:

– a limit in distribution for the processes, for which we rely on the results of [18], where it is proved that the convergence of the 3-tensors associated to the discrete-time obtuse random walks implies the convergence in law of the processes;

– a limit theorem for the multiplication operators, for which we rely on the approximation procedure developed in [1], where an approximation of the Fock space by means of spin chains is constructed, and where the convergence of the basic operators $a_{ij}(n)$ to the increments of quantum noises is proved.

When considering the complex case we had two choices: either develop a complex theory of normal martingales and structure equations, extend all the results of [4], [18] and [1] to the complex case, and prove the limit theorems we wished to obtain; or find a way to connect the complex obtuse random walks to the real ones and rely on the results of the real case in order to derive the corresponding ones for the complex case. We have chosen the second scenario, for we have in continuous time the same connection between the complex obtuse random variables and the real ones as obtained in the discrete-time case. In this section we present complex normal martingales and their structure equations, and the connection between the complex and the real case, together with their consequences. Only in the next section shall we apply these results in order to derive the continuous-time limit theorems.

4.1 Normal Martingales in $\mathbb{R}^N$

We now recall the main results of [4] concerning the behavior of normal martingales in $\mathbb{R}^N$ and their associated 3-tensor.

Definition 6
A martingale $X = (X^1,\ldots,X^N)$ with values in $\mathbb{R}^N$ is a normal martingale if $X_0 = 0$ a.s. and if its angle brackets satisfy
\[
\langle X^i\,, X^j\rangle_t = \delta_{ij}\, t\,, \tag{29}
\]
for all $t\in\mathbb{R}^+$ and all $i,j = 1,\ldots,N$.

This is equivalent to saying that the process $(X^i_t\, X^j_t - \delta_{ij}\, t)_{t\in\mathbb{R}^+}$ is a martingale, or else that the process $([X^i\,, X^j]_t - \delta_{ij}\, t)_{t\in\mathbb{R}^+}$ is a martingale (where $[\cdot\,,\cdot]$ here denotes the square bracket).

Definition 7
A normal martingale $X$ in $\mathbb{R}^N$ is said to satisfy a structure equation if there exists a family $\{\Phi_{ijk}\,;\ i,j,k = 1,\ldots,N\}$ of predictable processes such that
\[
[X^i\,, X^j]_t = \delta_{ij}\, t + \sum_{k=1}^N \int_0^t \Phi_{ijk}(s)\, dX^k_s\,. \tag{30}
\]
Note that if $X$ has the predictable representation property (i.e. every square integrable martingale is a stochastic integral with respect to $X$) and if $X$ is $L^4$, then $X$ satisfies a structure equation, for (30) is just the integral representation of the square integrable martingale $([X^i\,, X^j]_t - \delta_{ij}\, t)_{t\in\mathbb{R}^+}$.

The following theorem is proved in [4]. It establishes the fundamental link between the 3-tensors $\Phi(s)$ associated to the martingale $X$ and the behavior of $X$.

Theorem 4.1
Let $X$ be a normal martingale in $\mathbb{R}^N$ satisfying the structure equation
\[
[X^i\,, X^j]_t = \delta_{ij}\, t + \sum_{k=1}^N \int_0^t \Phi_{ijk}(s)\, dX^k_s\,,
\]
for all $i,j$. Then for almost all $(s,\omega)$ the quantities $\Phi(s,\omega)$ are all valued in the doubly-symmetric 3-tensors of $\mathbb{R}^N$.

If one denotes by $\mathcal{V}_s(\omega)$ the orthogonal family associated to the non-vanishing eigenvalues of $\Phi(s,\omega)$, and by $\Pi_s(\omega)$ the orthogonal projector onto $\mathcal{V}_s(\omega)^\perp$, that is, onto the null-eigenvalue subspace of $\Phi(s,\omega)$, then the continuous part of $X$ is
\[
X^c_t = \int_0^t \Pi_s\,(dX_s)\,,
\]
the jumps of $X$ only happen at totally inaccessible times, and they satisfy $\Delta X_t(\omega)\in\mathcal{V}_t(\omega)$.

The case we are concerned with is a simple one, where the process $\Phi$ is actually constant. In that case things can be made much more explicit, as is proved in [4] again.
Theorem 4.2
Let $\Phi$ be a doubly-symmetric 3-tensor in $\mathbb{R}^N$, with associated orthogonal family $\mathcal{V}$. Let $W$ be a Brownian motion with values in the space $\mathcal{V}^\perp$. For every $v\in\mathcal{V}$, let $N^v$ be a Poisson process with intensity $\|v\|^{-2}$. We suppose $W$ and all the $N^v$ to be independent processes. Then the martingale
\[
X_t = W_t + \sum_{v\in\mathcal{V}} \left(N^v_t - \frac{t}{\|v\|^2}\right)\, v \tag{31}
\]
satisfies the structure equation
\[
[X^i\,, X^j]_t - \delta_{ij}\, t = \sum_{k=1}^N \Phi_{ijk}\, X^k_t\,. \tag{32}
\]
Conversely, any solution of (32) has the same law as $X$. The martingale $X$ solution of (32) possesses the chaotic representation property.

4.2 Normal Martingales in $\mathbb{C}^N$

We consider a martingale $X$, defined on its canonical space $(\Omega,\mathcal{F},\mathbb{P})$, with values in $\mathbb{C}^N$, satisfying the following properties (similar to the corresponding definition in $\mathbb{R}^N$):

– the angle bracket $\langle X^i\,, X^j\rangle_t$ is equal to $\delta_{ij}\, t$;

– the martingale $X$ has the predictable representation property.

To these conditions we add the following simplifying condition:

– the functions $t\mapsto \mathbb{E}\left[[X^i\,, X^j]_t\right]$ are absolutely continuous with respect to the Lebesgue measure.

From all these conditions it follows that, for all $i,j,k = 1,\ldots,N$, there exist predictable processes $(\Lambda^{ij}_t)_{t\in\mathbb{R}^+}$, $(S_{ijk}(t))_{t\in\mathbb{R}^+}$ and $(T_{ijk}(t))_{t\in\mathbb{R}^+}$, such that
\[
[X^i\,, X^j]_t = \int_0^t \Lambda^{ij}_s\, ds + \sum_{k=1}^N \int_0^t S_{ijk}(s)\, dX^k_s\,, \tag{33}
\]
\[
[\overline{X^i}\,, X^j]_t = \delta_{ij}\, t + \sum_{k=1}^N \int_0^t T_{ijk}(s)\, dX^k_s\,. \tag{34}
\]
Before proving the main properties and symmetries of the coefficients $(\Lambda^{ij}_t)_{t\in\mathbb{R}^+}$, $(S_{ijk}(t))_{t\in\mathbb{R}^+}$ and $(T_{ijk}(t))_{t\in\mathbb{R}^+}$, we shall prove a uniqueness result.

Lemma 4.3  If $A$ and $B^k$, $k = 1,\ldots,N$, are predictable processes on $(\Omega,\mathcal{F},\mathbb{P})$ such that
\[
\int_0^t A_s\, ds + \sum_{k=1}^N \int_0^t B^k_s\, dX^k_s = 0 \tag{35}
\]
for all $t\in\mathbb{R}^+$, then $A_t = B^k_t = 0$ almost surely, for almost all $t\in\mathbb{R}^+$, for all $k = 1,\ldots,N$.

Proof:
Let us write
\[
Y_t = \int_0^t A_s\, ds + \sum_{k=1}^N \int_0^t B^k_s\, dX^k_s\,.
\]
Then
\[
|Y_t|^2 = Y_t\, \overline{Y_t} = \int_0^t \overline{Y_s}\, dY_s + \int_0^t Y_s\, d\overline{Y_s} + [Y\,, \overline{Y}]_t = \int_0^t \overline{Y_s}\, dY_s + \int_0^t Y_s\, d\overline{Y_s} + \sum_{k,l=1}^N \int_0^t B^k_s\, \overline{B^l_s}\, d[X^k\,, \overline{X^l}]_s\,.
\]
In particular,
\[
\mathbb{E}\left[|Y_t|^2\right] = \int_0^t \mathbb{E}\left[\overline{Y_s}\, A_s + Y_s\, \overline{A_s}\right]\, ds + \sum_{k=1}^N \int_0^t \mathbb{E}\left[\left|B^k_s\right|^2\right]\, ds\,.
\]
If $Y$ is the null process, then $B^k_s = 0$ almost surely, for a.a. $s$ and for all $k$. This means that
\[
\int_0^t A_s\, ds = 0
\]
for all $t$, and thus $A$ vanishes too. $\Box$

We now detail the symmetry properties of $S$, $T$ and $\Lambda$, together with some intertwining relations between $S$ and $\Lambda$.

Theorem 4.4
1) The processes $(S(t))_{t\in\mathbb{R}^+}$ and $(T(t))_{t\in\mathbb{R}^+}$ are connected by the relation
\[
T_{ijk}(s) = S_{ikj}(s)\,, \tag{36}
\]
almost surely, for a.a. $s\in\mathbb{R}^+$ and for all $i,j,k = 1,\ldots,N$.

2) The process $(S(t))_{t\in\mathbb{R}^+}$ takes its values in the set of doubly-symmetric 3-tensors of $\mathbb{C}^N$.

3) The process $(\Lambda_t)_{t\in\mathbb{R}^+}$ takes its values in the set of complex symmetric matrices.

4) We have the relation
\[
\sum_{m=1}^N S_{ijm}(s)\, \Lambda^{mk}_s = \sum_{m=1}^N S_{kjm}(s)\, \Lambda^{mi}_s\,, \tag{37}
\]
almost surely, for a.a. $s\in\mathbb{R}^+$ and for all $i,j,k = 1,\ldots,N$.

5) We have the relation
\[
\sum_{m=1}^N S_{kmj}(s)\, \Lambda^{im}_s = S_{ijk}(s)\,, \tag{38}
\]
almost surely, for a.a. $s\in\mathbb{R}^+$ and for all $i,j,k = 1,\ldots,N$.

6) The process $(\overline{X^i_t})_{t\in\mathbb{R}^+}$ has the predictable representation
\[
\overline{X^i_t} = \sum_{m=1}^N \int_0^t \overline{\Lambda^{im}_s}\, dX^m_s\,.
\]
7) The matrix Λ is unitary. Proof:
The proof is a rather simple adaptation of the arguments used in Propositions 2.7 and 2.8. First of all, the symmetry $[X^i\,, X^j]_t = [X^j\,, X^i]_t$ gives
\[
\int_0^t \Lambda^{ij}_s\, ds + \sum_{k=1}^N \int_0^t S_{ijk}(s)\, dX^k_s = \int_0^t \Lambda^{ji}_s\, ds + \sum_{k=1}^N \int_0^t S_{jik}(s)\, dX^k_s
\]
for all $t$. By the uniqueness Lemma 4.3, this gives the symmetry of the matrices $\Lambda_s$ and the first symmetry relation (13) for the 3-tensors $S(s)$.

Computing $[\overline{X^k}\,, [X^i\,, X^j]]_t$ we get
\[
[\overline{X^k}\,, [X^i\,, X^j]]_t = \sum_{m=1}^N \int_0^t S_{ijm}(s)\, d[\overline{X^k}\,, X^m]_s = \int_0^t S_{ijk}(s)\, ds + \sum_{m,n=1}^N \int_0^t S_{ijm}(s)\, T_{kmn}(s)\, dX^n_s\,.
\]
But this triple bracket is also computed as
\[
[\overline{X^i}\,, [\overline{X^j}\,, X^k]]_t = \sum_{m=1}^N \int_0^t T_{jkm}(s)\, d[\overline{X^i}\,, X^m]_s = \int_0^t T_{jki}(s)\, ds + \sum_{m,n=1}^N \int_0^t T_{jkm}(s)\, T_{min}(s)\, dX^n_s\,.
\]
Again, by the uniqueness lemma and the symmetry (13), we get the relation (36).

Now, in the same way as in the proof of Proposition 2.8, we compute $[[\overline{X^i}\,, X^j]\,, [X^k\,, X^l]]_t$ in two ways, using the symmetry in $(i,k)$ of that quadruple bracket:
\[
[[\overline{X^i}\,, X^j]\,, [X^k\,, X^l]]_t = \sum_{m,n=1}^N \int_0^t T_{ijm}(s)\, S_{kln}(s)\, d[X^m\,, \overline{X^n}]_s\,,
\]
\[
[[\overline{X^k}\,, X^j]\,, [X^i\,, X^l]]_t = \sum_{m,n=1}^N \int_0^t T_{kjm}(s)\, S_{iln}(s)\, d[X^m\,, \overline{X^n}]_s\,.
\]
By uniqueness again, the time-integral part gives the relation
\[
\sum_{m=1}^N T_{ijm}(s)\, S_{klm}(s) = \sum_{m=1}^N T_{kjm}(s)\, S_{ilm}(s)\,,
\]
that is,
\[
\sum_{m=1}^N S_{imj}(s)\, S_{klm}(s) = \sum_{m=1}^N S_{kmj}(s)\, S_{ilm}(s)\,,
\]
which is the relation (14). The relation (15) is obtained in exactly the same way, from the symmetry of $[[\overline{X^i}\,, X^j]\,, [X^l\,, X^k]]_t$ in $(i,k)$. We have proved that the 3-tensors $S(s)$ are doubly-symmetric.

Computing $[X^i\,, [X^j\,, X^k]]_t$ in two different ways, we get
\[
[X^i\,, [X^j\,, X^k]]_t = \sum_{m=1}^N \int_0^t S_{jkm}(s)\, d[X^i\,, X^m]_s = \sum_{m=1}^N \int_0^t S_{jkm}(s)\, \Lambda^{im}_s\, ds + \sum_{m,n=1}^N \int_0^t S_{jkm}(s)\, S_{imn}(s)\, dX^n_s
\]
on the one hand, and
\[
[[X^i\,, X^j]\,, X^k]_t = \sum_{m=1}^N \int_0^t S_{ijm}(s)\, d[X^m\,, X^k]_s = \sum_{m=1}^N \int_0^t S_{ijm}(s)\, \Lambda^{mk}_s\, ds + \sum_{m,n=1}^N \int_0^t S_{ijm}(s)\, S_{mkn}(s)\, dX^n_s\,,
\]
on the other hand. Identifying the time integrals, we get the relation (37).

Finally, computing $[X^i\,, [\overline{X^j}\,, X^k]]_t$ in two different ways, we get
\[
[X^i\,, [\overline{X^j}\,, X^k]]_t = \sum_{m=1}^N \int_0^t S_{jmk}(s)\, d[X^i\,, X^m]_s = \sum_{m=1}^N \int_0^t S_{jmk}(s)\, \Lambda^{im}_s\, ds + \sum_{m,n=1}^N \int_0^t S_{jmk}(s)\, S_{imn}(s)\, dX^n_s\,,
\]
on the one hand, and
\[
[[X^i\,, X^j]\,, \overline{X^k}]_t = \sum_{m=1}^N \int_0^t S_{ijm}(s)\, d[X^m\,, \overline{X^k}]_s = \sum_{m=1}^N \int_0^t S_{ijm}(s)\, \delta_{mk}\, ds + \sum_{m,n=1}^N \int_0^t S_{ijm}(s)\, S_{mnk}(s)\, dX^n_s\,,
\]
on the other hand. Identifying the time integrals, we get the relation (38).

Let us prove the result 6). The process $(\overline{X^i_t})_{t\in\mathbb{R}^+}$ is obviously a square integrable martingale and its expectation is $0$. Hence it admits a predictable representation of the form
\[
\overline{X^i_t} = \sum_{k=1}^N \int_0^t H^{ik}_s\, dX^k_s
\]
for some predictable processes $H^{ik}$. We write
\[
[\overline{X^i}\,, \overline{X^j}]_t = \overline{[X^i\,, X^j]_t} = \int_0^t \overline{\Lambda^{ij}_s}\, ds + \sum_{k=1}^N \int_0^t \overline{S_{ijk}(s)}\, d\overline{X^k_s} = \int_0^t \overline{\Lambda^{ij}_s}\, ds + \sum_{k,l=1}^N \int_0^t \overline{S_{ijk}(s)}\, H^{kl}_s\, dX^l_s\,,
\]
but also
\[
[\overline{X^i}\,, \overline{X^j}]_t = \sum_{k=1}^N \int_0^t H^{ik}_s\, d[X^k\,, \overline{X^j}]_s = \sum_{k=1}^N \int_0^t H^{ik}_s\, \delta_{kj}\, ds + \sum_{k,l=1}^N \int_0^t H^{ik}_s\, \overline{S_{klj}(s)}\, d\overline{X^l_s}\,.
\]
The uniqueness lemma gives the relation $H^{ij}_s = \overline{\Lambda^{ij}_s}$, almost surely, for a.a. $s$.

We now prove 7). We have
\[
[\overline{X^i}\,, X^j]_t = \sum_{k=1}^N \int_0^t H^{ik}_s\, d[X^k\,, X^j]_s = \sum_{k=1}^N \int_0^t H^{ik}_s\, \Lambda^{kj}_s\, ds + \sum_{k,l=1}^N \int_0^t H^{ik}_s\, S_{kjl}(s)\, dX^l_s\,,
\]
but also
\[
[\overline{X^i}\,, X^j]_t = \delta_{ij}\, t + \sum_{k=1}^N \int_0^t S_{ikj}(s)\, dX^k_s\,.
\]
Again, by the uniqueness lemma, we get
\[
\delta_{ij} = \sum_{k=1}^N H^{ik}_s\, \Lambda^{kj}_s = \sum_{k=1}^N \overline{\Lambda^{ik}_s}\, \Lambda^{kj}_s\,.
\]
As $\Lambda_s$ is symmetric, this proves the announced unitarity. $\Box$
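As an illustration of the real-case behavior recalled in Theorem 4.2, the sketch below simulates the scalar instance of formula (31) with a single Poisson direction and checks, by Monte Carlo, the normal-martingale normalization $\mathbb{E}[X_t^2] = t$ (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

# Scalar instance of (31): V = {v} with v real, so
#   X_t = (N_t - t / v**2) * v,  with N a Poisson process of intensity 1/v**2.
# A normal martingale must satisfy E[X_t] = 0 and E[X_t^2] = t.
v, t, n_samples = 2.0, 3.0, 200_000
N_t = rng.poisson(lam=t / v**2, size=n_samples)   # N_t ~ Poisson(t / v^2)
X_t = (N_t - t / v**2) * v

assert abs(X_t.mean()) < 0.05            # centered
assert abs((X_t**2).mean() - t) < 0.1    # E[X_t^2] ≈ t
```

The compensated Poisson part has variance $\|v\|^2 \cdot t/\|v\|^2 = t$, exactly as the Brownian part would in the null-eigenvalue directions.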
We can now state one of our main results concerning the structure of complex normal martingales in $\mathbb{C}^N$.

Theorem 4.5
Let $X$ be a normal martingale in $\mathbb{C}^N$ satisfying the structure equations
$$[X^i,X^j]_t = \int_0^t \Lambda^{ij}_s\,\mathrm{d}s + \sum_{k=1}^N \int_0^t S_{ijk}(s)\,\mathrm{d}X^k_s\,,$$
$$[X^i,\overline{X^j}]_t = \delta_{ij}\,t + \sum_{k=1}^N \int_0^t S_{ikj}(s)\,\mathrm{d}X^k_s\,.$$
Then the matrix $\Lambda$ admits a decomposition of the form
$$\Lambda = V\,V^t$$
for some unitary $V$ of $\mathbb{C}^N$. For any such unitary $V$, put $R = V^*\circ S$. Then $R$ is a real doubly-symmetric 3-tensor.

Proof:
This is exactly the same proof as for Theorem 3.14: the decomposition of $\Lambda$ comes from Takagi's Theorem; the expression of $R_{ijk}$ in terms of the coefficients of $V$ and of the $S_{ijk}$'s is transformed with the help of the relation XX. One then sees that $R$ satisfies the symmetry property which makes it real. $\square$

4.3 Complex Unitary Transforms of Real Normal Martingales

From the result above concerning real normal martingales, we shall easily deduce the corresponding behavior of complex normal martingales, as they are obtained by unitary transforms of real normal martingales.

In the following we shall be interested in the following objects. Let $Y$ be a normal martingale on $\mathbb{R}^N$, satisfying a structure equation with constant 3-tensor $T$. Let $U$ be a unitary operator on $\mathbb{C}^N$. Injecting canonically $\mathbb{R}^N$ into $\mathbb{C}^N$, we consider the complex martingale $X_t = U\,Y_t$, $t\in\mathbb{R}^+$. We consider the 3-tensor $S = U\circ T$.
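The transform $S = U\circ T$ can be sketched numerically. In the sketch below the index convention $(U\circ T)_{ijk} = \sum_{m,n,p} u_{im}\,u_{jn}\,\overline{u_{kp}}\,T_{mnp}$ is an assumption on my part (it is the convention that reappears in the proof of Theorem 4.6 below); the tensor $T$ and the unitary $U$ are random, not the ones of the paper:

```python
import numpy as np

# Sketch of the transform S = U o T (index convention assumed:
# S_{ijk} = sum_{m,n,p} u_{im} u_{jn} conj(u_{kp}) T_{mnp}),
# on a random real 3-tensor T, symmetric in all indices, and a random unitary U.
rng = np.random.default_rng(0)

A = rng.normal(size=(2, 2, 2))
T = np.zeros((2, 2, 2))
for perm in [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]:
    T += A.transpose(perm)           # symmetrize over all index permutations
T /= 6

# a unitary matrix, from the QR decomposition of a random complex matrix
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

S = np.einsum('im,jn,kp,mnp->ijk', Q, Q, np.conj(Q), T)

# the symmetry of T in its first two indices survives the transform
assert np.allclose(S, S.transpose(1, 0, 2))
print("S = U o T is symmetric in its first two indices")
```

This only checks the first of the doubly-symmetry relations; the other two involve the quadratic identities (14) and (15) and hold for genuinely obtuse tensors, not for an arbitrary symmetric $T$.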
We also put $\Lambda = U\,U^t$.

We choose the following notations. Let $\mathcal{W} = \{w_1,\ldots,w_k\}\subset\mathbb{R}^N$ be the orthogonal system associated to $T$, that is, the directions of non-vanishing eigenvalues of $T$. Let $\mathcal{W}^\perp$ be its orthogonal space in $\mathbb{R}^N$, that is, the null space of $T$, and let us choose an orthonormal basis $\{\widehat{w}_1,\ldots,\widehat{w}_{N-k}\}$ of $\mathcal{W}^\perp$. Let $\mathcal{V} = U\mathcal{W} = \{Uw_1,\ldots,Uw_k\}\subset\mathbb{C}^N$; this set coincides with the orthogonal system associated to $S$, that is, the directions of non-vanishing eigenvalues of $S$. We consider the set $\widehat{\mathcal{V}} = \{\widehat{v}_1,\ldots,\widehat{v}_{N-k}\}$, where $\widehat{v}_i = U\widehat{w}_i$, for all $i = 1,\ldots,N-k$. We denote by $\mathcal{V}_\mathbb{R}$ and $\widehat{\mathcal{V}}_\mathbb{R}$ the following real subspaces of $\mathbb{C}^N$ seen as a $2N$-dimensional real vector space:
$$\mathcal{V}_\mathbb{R} = \mathbb{R}v_1\oplus\ldots\oplus\mathbb{R}v_k\,,\qquad \widehat{\mathcal{V}}_\mathbb{R} = \mathbb{R}\widehat{v}_1\oplus\ldots\oplus\mathbb{R}\widehat{v}_{N-k}\,.$$
In particular note that $\mathcal{V}_\mathbb{R}\oplus\widehat{\mathcal{V}}_\mathbb{R} = U\,\mathbb{R}^N$ is an $N$-dimensional real subspace of $\mathbb{C}^N$. Finally we denote by $\Pi_S$ the orthogonal projector from $\mathbb{C}^N$ onto $\widehat{\mathcal{V}}_\mathbb{R}$, where both spaces are seen as real vector spaces.

Theorem 4.6
With the notations above, the complex martingale $X$ satisfies the following two "structure equations"
$$[X^i,X^j]_t = \Lambda^{ij}\,t + \sum_{k=1}^N S_{ijk}\,X^k_t\,,\qquad(39)$$
$$[X^i,\overline{X^j}]_t = \delta_{ij}\,t + \sum_{k=1}^N S_{ikj}\,X^k_t\,.\qquad(40)$$
The solutions to both Equation (39) and Equation (40) are unique in distribution. This distribution is described as follows. The process $X$ takes values in $U\,\mathbb{R}^N = \mathcal{V}_\mathbb{R}\oplus\widehat{\mathcal{V}}_\mathbb{R}$. The continuous part of $X$ is
$$X^c_t = \int_0^t \Pi_S(\mathrm{d}X_s)\,,$$
which lives in $\widehat{\mathcal{V}}_\mathbb{R}$. The jumps of $X$ only happen at totally inaccessible times and they satisfy, almost surely, $\Delta X_t(\omega)\in\mathcal{V}_\mathbb{R}$.

In other words, let $W$ be an $(N-k)$-dimensional Brownian motion with values in the real space $\widehat{\mathcal{V}}_\mathbb{R}$. For every $v\in\mathcal{V}$, let $N^v$ be a Poisson process with intensity $\|v\|^{-2}$. We suppose $W$ and all the $N^v$ to be independent processes. Then the martingale
$$X_t = W_t + \sum_{v\in\mathcal{V}}\left(N^v_t - \|v\|^{-2}\,t\right)v\qquad(41)$$
satisfies the structure equation (39). The process $X$ possesses the chaotic representation property.

Proof:
The martingale $Y$ satisfies the structure equation
$$[Y^i,Y^j]_t = \delta_{ij}\,t + \sum_{k=1}^N T_{ijk}\,Y^k_t\,.$$
The process $X_t = U\,Y_t$, $t\in\mathbb{R}^+$, is a martingale with values in $\mathbb{C}^N$. Clearly we have
$$[X^i,X^j]_t = \sum_{m,n=1}^N u_{im}\,u_{jn}\,[Y^m,Y^n]_t = \sum_{m,n=1}^N u_{im}\,u_{jn}\,\delta_{mn}\,t + \sum_{m,n,p=1}^N u_{im}\,u_{jn}\,T_{mnp}\,Y^p_t$$
$$= \sum_{m=1}^N u_{im}\,u_{jm}\,t + \sum_{m,n,p,k=1}^N u_{im}\,u_{jn}\,\overline{u_{kp}}\,T_{mnp}\,X^k_t = \Lambda^{ij}\,t + \sum_{k=1}^N S_{ijk}\,X^k_t\,.$$
This gives (39).

With a similar computation, we get
$$[X^i,\overline{X^j}]_t = \sum_{m,n=1}^N u_{im}\,\overline{u_{jn}}\,[Y^m,Y^n]_t = \sum_{m,n=1}^N u_{im}\,\overline{u_{jn}}\,\delta_{mn}\,t + \sum_{m,n,p=1}^N u_{im}\,\overline{u_{jn}}\,T_{mnp}\,Y^p_t$$
$$= \delta_{ij}\,t + \sum_{m,n,p,k=1}^N u_{im}\,\overline{u_{kp}}\,\overline{u_{jn}}\,T_{mpn}\,X^k_t = \delta_{ij}\,t + \sum_{k=1}^N S_{ikj}\,X^k_t\,.$$
This gives (40).

Let us now prove uniqueness in law for the solutions of (39) and (40). Let $Z$ be another solution of the two equations. Consider the unitary operator $U$ associated to the 3-tensor $S$, that is, the one for which the 3-tensor $R = U^*\circ S$ is real and doubly-symmetric. Put $A_t = U^*\,Z_t$ for all $t\in\mathbb{R}^+$. Then we get
$$[A^i,A^j]_t = \sum_{m,n=1}^N \overline{u_{mi}}\,\overline{u_{nj}}\,[Z^m,Z^n]_t = \sum_{m,n=1}^N \overline{u_{mi}}\,\overline{u_{nj}}\,\Lambda^{mn}\,t + \sum_{m,n,p=1}^N \overline{u_{mi}}\,\overline{u_{nj}}\,S_{mnp}\,Z^p_t$$
$$= \sum_{m,n,p=1}^N \overline{u_{mi}}\,\overline{u_{nj}}\,u_{mp}\,u_{np}\,t + \sum_{m,n,p,k=1}^N \overline{u_{mi}}\,\overline{u_{nj}}\,u_{pk}\,S_{mnp}\,A^k_t = \delta_{ij}\,t + \sum_{k=1}^N R_{ijk}\,A^k_t$$
and
$$[A^i,\overline{A^j}]_t = \sum_{m,n=1}^N \overline{u_{mi}}\,u_{nj}\,[Z^m,\overline{Z^n}]_t = \sum_{m,n=1}^N \overline{u_{mi}}\,u_{nj}\,\delta_{mn}\,t + \sum_{m,n,p=1}^N \overline{u_{mi}}\,u_{nj}\,S_{mpn}\,Z^p_t = \delta_{ij}\,t + \sum_{k=1}^N R_{ikj}\,A^k_t\,.$$
But as $R$ is a real-valued 3-tensor, symmetric in $i,j,k$, the last expression gives
$$[A^i,\overline{A^j}]_t = \delta_{ij}\,t + \sum_{k=1}^N R_{ijk}\,A^k_t\,.$$
Decomposing each $A^j_t$ as $B^j_t + i\,C^j_t$ (real and imaginary parts), the last two relations lead to
$$[B^i,B^j]_t = \delta_{ij}\,t + \sum_{k=1}^N R_{ijk}\,B^k_t\,,\qquad [C^i,C^j]_t = 0\,,\qquad [B^i,C^j]_t = \sum_{k=1}^N R_{ijk}\,C^k_t\,.$$
This clearly implies that the $C^j$'s vanish, so that the processes $A^j$ are real-valued. They satisfy the same structure equation as the real process $Y$ underlying the definition of $X$. By Theorem 4.2 the processes $A$ and $Y$ have the same law. Thus so do the processes $Z = UA$ and $X = UY$. This proves the uniqueness in law for the processes in $\mathbb{C}^N$ satisfying the two equations (39) and (40).

The projector $\Pi_S$ onto $\widehat{\mathcal{V}}_\mathbb{R}$ is given by $\Pi_S = U\,\Pi_T\,U^*$, as can be checked easily, even though we are here considering the vector spaces as real ones. The continuous part of $X$ and the jumps of $X$ are clearly the ones of $Y$, but mapped by $U$. Hence, altogether we get
$$X^c_t = U\,Y^c_t = \int_0^t U\,\Pi_T(\mathrm{d}Y_s) = \int_0^t U\,U^*\,\Pi_S\,U\,(U^*\,\mathrm{d}X_s) = \int_0^t \Pi_S(\mathrm{d}X_s)\,.$$
The part (41) of the theorem is obvious, again by application of the map $U$.

Finally, let us prove the chaotic representation property. The chaotic representation property for $Y$ says that every random variable $F\in L^2(\Omega,\mathcal{F},\mathbb{P})$, the canonical space of $Y$, can be decomposed as
$$F = \mathbb{E}[F] + \sum_{n=1}^{\infty}\ \sum_{i_1,\ldots,i_n=1}^{N}\ \int_{0\leq t_1<\ldots<t_n} f_{i_1\ldots i_n}(t_1,\ldots,t_n)\,\mathrm{d}Y^{i_1}_{t_1}\ldots\mathrm{d}Y^{i_n}_{t_n}$$
for suitable deterministic functions $f_{i_1\ldots i_n}$. As $X = U\,Y$, the iterated integrals with respect to $X$ are linear combinations of those with respect to $Y$, and conversely, so that $X$ possesses the chaotic representation property too. $\square$

We are now given a time parameter $h > 0$; $h$ may also appear in the internal parameters of the walk, that is, in the probabilities $p_i$ and the values $v_i$ of $X$. Hence, we are given an obtuse random variable $X(h)$ in $\mathbb{C}^N$, with coordinates $X^i(h)$, $i = 1,\ldots,N$, together with the random variable $X^0 = 1\!\!1$. The associated 3-tensor of $X(h)$ is given by
$$X^i(h)\,X^j(h) = \sum_{k=0}^N S_{ijk}(h)\,X^k(h)\,.$$
Considering the random walk associated to $X(h)$, that is, considering a sequence $(X_n(h))_{n\in\mathbb{N}^*}$ of i.i.d. random variables in $\mathbb{C}^N$ all having the same distribution as $X(h)$, the random walk is the stochastic process with time step $h$:
$$Z^h_{nh} = \sum_{i=1}^n \sqrt{h}\,X_i(h)\,.$$
This calls for defining
$$\widehat{X}^0 = h\,X^0\qquad\text{and}\qquad \widehat{X}^j(h) = \sqrt{h}\,X^j(h)$$
for all $j = 1,\ldots,N$. Putting $\varepsilon_0 = 1$ and $\varepsilon_i = 1/2$ for $i = 1,\ldots,N$, we then have, for all $i,j = 0,\ldots,N$,
$$\widehat{X}^i(h)\,\widehat{X}^j(h) = \sum_{k=0}^N \widehat{S}_{ijk}(h)\,\widehat{X}^k(h)\,,\qquad\text{where}\quad \widehat{S}_{ijk}(h) = h^{\varepsilon_i+\varepsilon_j-\varepsilon_k}\,S_{ijk}\,.$$
Finally, we put
$$M_{ijk} = \lim_{h\to 0}\widehat{S}_{ijk}(h)\,,$$
if it exists.

Lemma 5.1 We then get, for all $i,j,k = 1,\ldots,N$:
$$M_{000} = 0\,,\qquad M_{00k} = 0\,,\qquad M_{0k0} = 0\,,\qquad M_{k00} = 0\,,$$
$$M_{ij0} = \lim_{h\to 0} S_{ij0}(h)\ \text{(if it exists)}\,,\qquad M_{i0k} = 0\,,\qquad M_{0jk} = 0\,,$$
$$M_{ijk} = \lim_{h\to 0} h^{1/2}\,S_{ijk}(h)\ \text{(if it exists)}\,.$$

Proof: These are direct applications of the definitions and the symmetries satisfied by the $S_{ijk}(h)$'s. For example:
$$\widehat{S}_{0jk}(h) = h\,S_{0jk}(h) = h\,\delta_{jk}\,.$$
This gives immediately that $M_{0jk} = 0$. And so on for all the other cases. $\square$

Proposition 5.2 Under the hypothesis that the limits above all exist, the 3-tensor $M$, restricted to its coordinates $i,j,k = 1,\ldots,N$, is a doubly-symmetric 3-tensor of $\mathbb{C}^N$.

Proof: Let us check that $(M_{ijk})_{i,j,k=1,\ldots,N}$ satisfies the three conditions for being a doubly-symmetric 3-tensor. Recall that for these indices, we have
$$M_{ijk} = \lim_{h\to 0} h^{1/2}\,S_{ijk}(h)\,.$$
The first condition $M_{ijk} = M_{kji}$ is obvious from the same property of $S_{ijk}(h)$ and passing to the limit.

We wish now to prove that $\sum_{m=1}^N M_{imj}\,M_{klm}$ is symmetric in $(i,k)$. The corresponding property for $S(h)$ gives
$$S_{i0j}(h)\,S_{kl0}(h) + \sum_{m=1}^N S_{imj}(h)\,S_{klm}(h) = S_{k0j}(h)\,S_{il0}(h) + \sum_{m=1}^N S_{kmj}(h)\,S_{ilm}(h)\,.$$
In particular, multiplying by $h$, we get
$$h\,\delta_{ij}\,S_{kl0}(h) + \sum_{m=1}^N \widehat{S}_{imj}(h)\,\widehat{S}_{klm}(h) = h\,\delta_{kj}\,S_{il0}(h) + \sum_{m=1}^N \widehat{S}_{kmj}(h)\,\widehat{S}_{ilm}(h)\,.$$
By hypothesis $\lim_{h\to 0} S_{kl0}(h)$ and $\lim_{h\to 0} S_{il0}(h)$ exist, hence, passing to the limit, we get
$$\sum_{m=1}^N M_{imj}\,M_{klm} = \sum_{m=1}^N M_{kmj}\,M_{ilm}\,,$$
which is the second symmetry asked of $M$ for being doubly-symmetric.

The third symmetry is obtained in a similar way. Indeed, we have
$$S_{i0j}(h)\,S_{l0k}(h) + \sum_{m=1}^N S_{imj}(h)\,S_{lmk}(h) = S_{k0j}(h)\,S_{l0i}(h) + \sum_{m=1}^N S_{kmj}(h)\,S_{lmi}(h)\,.$$
This gives, multiplying by $h$ again,
$$h\,\delta_{ij}\,\delta_{lk} + \sum_{m=1}^N \widehat{S}_{imj}(h)\,\widehat{S}_{lmk}(h) = h\,\delta_{kj}\,\delta_{li} + \sum_{m=1}^N \widehat{S}_{kmj}(h)\,\widehat{S}_{lmi}(h)\,.$$
Now, passing to the limit as $h$ tends to 0, we get
$$\sum_{m=1}^N M_{imj}\,M_{lmk} = \sum_{m=1}^N M_{kmj}\,M_{lmi}\,.$$
This gives the last required symmetry. $\square$

We can now give our convergence in distribution theorem.

Theorem 5.3 Let $X(h)$ be an obtuse random variable on $\mathbb{C}^N$, depending on a parameter $h > 0$, and let $S(h)$ be its associated doubly-symmetric 3-tensor. Let $(X_n(h))_{n\in\mathbb{N}^*}$ be a sequence of i.i.d. random variables with the same law as $X(h)$. Consider the discrete-time random walk
$$Z^h_{nh} = \sum_{i=1}^n \sqrt{h}\,X_i(h)\,.$$
If the limits
$$M_{ij0} = \lim_{h\to 0} S_{ij0}(h)\qquad\text{and}\qquad M_{ijk} = \lim_{h\to 0}\sqrt{h}\,S_{ijk}(h)$$
exist for all $i,j,k = 1,\ldots,N$, then the process $Z^h$ converges in distribution to the normal martingale $Z$ in $\mathbb{C}^N$ solution of the structure equations
$$[Z^i,Z^j]_t = M_{ij0}\,t + \sum_{k=1}^N M_{ijk}\,Z^k_t\,,\qquad(42)$$
$$[Z^i,\overline{Z^j}]_t = \delta_{ij}\,t + \sum_{k=1}^N M_{ikj}\,Z^k_t\,.\qquad(43)$$

Proof: For each $h > 0$, the random variable $X(h)$ can be written $U(h)\,Y(h)$ for a unitary operator $U(h)$ on $\mathbb{C}^N$ and a real obtuse random variable $Y(h)$ in $\mathbb{R}^N$. In terms of the associated 3-tensors, recall that this means
$$S(h) = U(h)\circ T(h)\,,\qquad\text{or else}\qquad T(h) = U(h)^*\circ S(h)\,.$$
Consider any sequence $(h_n)_{n\in\mathbb{N}}$ which tends to 0. By hypothesis $S(h_n)$ converges to $M$. Furthermore, as the sequence $(U(h_n))_{n\in\mathbb{N}}$ lives in the compact group $\mathcal{U}(\mathbb{C}^N)$, it admits a subsequence $(h_{n_k})_{k\in\mathbb{N}}$ converging to some unitary $V$. As a consequence the sequence $(T(h_{n_k}))_{k\in\mathbb{N}}$ converges to a real 3-tensor $N = V^*\circ M$.

The convergence of the 3-tensors $(T(h_{n_k}))_{k\in\mathbb{N}}$ to $N$ implies the convergence in distribution of the associated real martingales $Y^{h_{n_{k_i}}}$, along a subsequence $(h_{n_{k_i}})_{i\in\mathbb{N}}$, by G. Taviot's Thesis (cf. [18], Proposition 4.2.3, Proposition 4.3.2, Proposition 4.3.3).
The limit is a real normal martingale $Y$ whose associated 3-tensor is $N$. Applying the unitary operators $U^{h_{n_{k_i}}}$, which converge to $V$, we obtain the convergence in law of the processes $Z^{h_{n_{k_i}}}$ to the process $Z = V\,Y$. By Theorem 4.6 the process $Z$ is a solution of the complex structure equations associated to the tensor $M$.

For the moment we have proved that for every sequence $(h_n)_{n\in\mathbb{N}}$ there exists a subsequence $(h_{n_j})_{j\in\mathbb{N}}$ such that $Z^{h_{n_j}}$ converges in law to $Z$, solution of the complex structure equations associated to $M$. As we have proved that the solutions to these equations are unique in law, the limit in law is unique. Hence the convergence holds not only along subsequences, but as $h$ tends to 0. The convergence in law is proved. $\square$

5.3 Convergence of the Multiplication Operators

Let us first recall very shortly the main elements of the construction and approximation developed in [1], which will now serve us in order to prove the convergence of the multiplication operators. This convergence of multiplication operators is not so usual in a probabilistic framework, but it is the interesting one in the framework of applications to Quantum Statistical Mechanics, for it shows the convergence of the quantum dynamics of repeated interactions towards a classical Langevin equation, when the interaction is unitary (cf. [2]).

In Subsection 2.5 we have seen the canonical isomorphism of the canonical space $L^2(\Omega,\mathcal{F},\mathbb{P}_S)$ of any obtuse random variable $X$ with the space $\mathbb{C}^{N+1}$. Recall the basic operators $a^i_j$ that were defined there. When dealing with i.i.d. sequences $(X_n)_{n\in\mathbb{N}}$ of copies of $X$, the canonical space is then isomorphic to the countable tensor product
$$T\Phi = \bigotimes_{n\in\mathbb{N}} \mathbb{C}^{N+1}\,.$$
When dealing with the associated random walk with time step $h$,
$$Z^h_{nh} = \sum_{i=1}^n \sqrt{h}\,X_i\,,$$
the canonical space is naturally isomorphic to
$$T\Phi(h) = \bigotimes_{n\in h\mathbb{N}} \mathbb{C}^{N+1}\,.$$
There, natural ampliations of the basic operators $a^i_j$ are defined: the operator $a^i_j(nh)$ acts as $a^i_j$ on the copy $nh$ of $\mathbb{C}^{N+1}$ and as the identity on the other copies.

On the other hand, when given a normal martingale $A$ in $\mathbb{R}^N$ with the chaotic representation property, its canonical space $L^2(\Omega',\mathcal{F}',\mathbb{P}')$ is well-known to be naturally isomorphic to the symmetric Fock space
$$\Phi = \Gamma_s\big(L^2(\mathbb{R}^+;\mathbb{C}^N)\big)\,,$$
via a unitary isomorphism denoted by $U_A$. This space is the natural space for the quantum noises $a^i_j(t)$: the time operator $a^0_0(t) = tI$, the creation noises $a^0_i(t)$, the annihilation noises $a^i_0(t)$ and the exchange processes $a^i_j(t)$, with $i,j = 1,\ldots,N$ (cf. [14]).

The main constructions and results developed in [1] are the following:
– each of the spaces $T\Phi(h)$ can be naturally seen as a concrete subspace of $\Phi$;
– when $h$ tends to 0 the subspace $T\Phi(h)$ fills in the whole space $\Phi$; that is, concretely, the orthogonal projector $P_h$ onto $T\Phi(h)$ converges strongly to the identity $I$;
– the basic operators $a^i_j(nh)$, now concretely acting on $\Phi$, converge to the quantum noises; that is, more concretely, the operator
$$\sum_{n\,;\,nh\leq t} h^{\varepsilon_{ij}}\,a^i_j(nh)$$
converges strongly to $a^i_j(t)$ on a certain domain $\mathcal{D}$ (which we shall not make explicit here, please cf. [1]), where
$$\varepsilon_{ij} = \begin{cases} 1 & \text{if } i = j = 0\,,\\ 1/2 & \text{if } i = 0, j \neq 0\ \text{or}\ i \neq 0, j = 0\,,\\ 0 & \text{if } i, j \neq 0\,.\end{cases}$$

Finally recall the representation of the multiplication operators for real-valued normal martingales in $\mathbb{R}^N$ (cf. [6]).

Theorem 5.4 If $A$ is a normal martingale in $\mathbb{R}^N$ with the chaotic representation property and satisfying the structure equation
$$[A^i,A^j]_t = \delta_{ij}\,t + \sum_{k=1}^N N_{ijk}\,A^k_t\,,$$
then its multiplication operator acting on $\Phi = \Gamma_s(L^2(\mathbb{R}^+,\mathbb{C}^N))$ is equal to
$$U_A\,\mathcal{M}_{A^i_t}\,U_A^* = a^0_i(t) + a^i_0(t) + \sum_{j,k=1}^N N_{ijk}\,a^j_k(t)\,,$$
or else
$$U_A\,\mathcal{M}_{A^i_t}\,U_A^* = \sum_{j,k=0}^N N_{ijk}\,a^j_k(t)$$
if one extends the coefficients $N_{ijk}$ to the 0 index, by putting $N_{ij0} = N_{i0j} = \delta_{ij}$.
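A toy illustration of the formula of Theorem 5.4, on a single copy of $\mathbb{C}^2$ (one time step, $N = 1$): a real obtuse random variable $X$ satisfies $X^2 = 1 + T\,X$ with $T = v_1 + v_2$, and in the orthonormal basis $\{1, X\}$ of its canonical space, multiplication by $X$ is exactly $a^0_1 + a^1_0 + T\,a^1_1$. The probabilities below are an assumed example, not taken from the paper:

```python
import numpy as np

# One-step analogue of Theorem 5.4 (N = 1): for a real obtuse random
# variable X with P(X = v1) = p1, P(X = v2) = p2, one has X^2 = 1 + T X
# with T = v1 + v2, and multiplication by X on the basis {1, X} is
#     a^0_1 + a^1_0 + T a^1_1.
p1, p2 = 1/3, 2/3                            # assumed example probabilities
v1, v2 = np.sqrt(p2/p1), -np.sqrt(p1/p2)     # makes E[X] = 0, E[X^2] = 1
T = v1 + v2

def a(i, j):
    """Basic operator a^i_j = |e_i><e_j| on C^2."""
    m = np.zeros((2, 2))
    m[i, j] = 1.0
    return m

M = a(0, 1) + a(1, 0) + T * a(1, 1)          # candidate multiplication operator

# check against multiplication by X on the basis {1, X}:
# X . 1 = X   and   X . X = 1 + T X
assert np.allclose(M @ np.array([1.0, 0.0]), np.array([0.0, 1.0]))
assert np.allclose(M @ np.array([0.0, 1.0]), np.array([1.0, T]))
# sanity: X is obtuse (centered and normalized)
assert abs(p1*v1 + p2*v2) < 1e-12 and abs(p1*v1**2 + p2*v2**2 - 1) < 1e-12
print("multiplication by X = a01 + a10 + T a11 on the basis {1, X}")
```

The continuous-time formula of Theorem 5.4 is the limit of such one-step blocks, with the noises $a^0_1, a^1_0, a^1_1$ replaced by the creation, annihilation and exchange processes.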
Once this is recalled, the rest is now rather easy. We can prove the convergence theorem for the multiplication operators.

Theorem 5.5 The operators of multiplication $\mathcal{M}_{Z^{h,i}_t}$, acting on $\Phi$, converge strongly on $\mathcal{D}$ to the operators
$$Z^i_t = \sum_{j,k=0}^N M_{ijk}\,a^j_k(t)\,.\qquad(44)$$
These operators are the operators of multiplication by the coordinates of $Z$, the complex martingale satisfying
$$[Z^i,Z^j]_t = M_{ij0}\,t + \sum_{k=1}^N M_{ijk}\,Z^k_t\qquad(45)$$
and
$$[Z^i,\overline{Z^j}]_t = \delta_{ij}\,t + \sum_{k=1}^N M_{ikj}\,Z^k_t\,.\qquad(46)$$

Proof: The convergence toward the operator $Z^i_t$ given by (44) is a simple application of the convergence theorems of [1]; let us detail the different cases.

If $j,k \neq 0$, we know that $\sqrt{h}\,S_{ijk}$ converges to $M_{ijk}$ and by [1] we have that $\sum_{m=1}^{[t/h]} a^j_k(mh)$ converges to $a^j_k(t)$.

If $j \neq 0$ and $k = 0$, we know that $S_{ij0}$ converges to $M_{ij0}$ and that $\sum_{m=1}^{[t/h]} \sqrt{h}\,a^j_0(mh)$ converges to $a^j_0(t)$.

If $j = 0$ and $k \neq 0$, we know that $S_{i0k}$ converges to $M_{i0k}$ (actually they are all equal to $\delta_{ik}$) and that $\sum_{m=1}^{[t/h]} \sqrt{h}\,a^0_k(mh)$ converges to $a^0_k(t)$.

The fact that $Z^i_t$ is indeed the multiplication operator by the announced normal martingale comes as follows. The martingale $Z$ is the image $UA$, under a unitary operator $U$, of some real normal martingale $A$. The 3-tensor $M$ is the image $U\circ N$, under the unitary operator $U$, of some real 3-tensor $N$. The real normal martingale $A$ associated to the real 3-tensor $N$ has its multiplication operator equal to
$$U_A\,\mathcal{M}_{A^i_t}\,U_A^* = \sum_{j,k=0}^N N_{ijk}\,a^j_k(t)$$
by Theorem 5.4. As $Z_t$ is equal to $U A_t$, its canonical space is the same as the one of $A$; only the canonical isomorphism is modified by a change of basis. The rest of the proof is then exactly similar to the one of Proposition 3.13. $\square$

We shall detail two examples in dimension 2, showing typical different behaviors. The first one is the one we have followed along this article; let us recall it.
We are given an obtuse random variable $X$ in $\mathbb{C}^2$ taking three values $v_1$, $v_2$, $v_3$, with probabilities $p_1$, $p_2$, $p_3$ respectively, and with associated 3-tensor $S$. Considering the random walk
$$Z^h_{nh} = \sum_{i=1}^n \sqrt{h}\,X_i\,,$$
where $(X_n)_{n\in\mathbb{N}}$ is a sequence of i.i.d. random variables with the same law as $X$, the continuous-time limit of $Z^h$ is the normal martingale in $\mathbb{C}^2$ with associated tensor given by the limits of Lemma 5.1. Here we obtain, for all $i,j,k = 1,2$,
$$M_{ijk} = 0\,,$$
while the matrix $M = (M_{ij0})_{i,j=1,2}$ does not vanish. The limit process $(Z_t)_{t\in\mathbb{R}^+}$ is a normal martingale in $\mathbb{C}^2$, solution of the structure equations
$$[Z^i,Z^j]_t = M_{ij0}\,t\,,\qquad [Z^i,\overline{Z^j}]_t = \delta_{ij}\,t\,.$$
It is then rather easy to find a unitary matrix $V$ such that $V\,V^t = M$. Following our results on complex normal martingales, this means that the process $Z$ has the following distribution: given a 2-dimensional real Brownian motion $W = (W^1, W^2)$, the process $Z$ is obtained as $Z_t = V\,W_t$; each coordinate $Z^i_t$ is an explicit complex linear combination of $W^1_t$ and $W^2_t$.

For the second example, we consider a fixed parameter $h > 0$ and an obtuse random variable $X(h)$ in $\mathbb{C}^2$ whose first value is
$$v_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix},$$
whose two other values $v_2$ and $v_3$ have entries of order $\sqrt{h}$ and $1/\sqrt{h}$, and whose probabilities are
$$p_1 = \frac{1}{2}\,,\qquad p_2 = \frac{h}{1+2h}\,,\qquad p_3 = \frac{1}{2+4h}$$
respectively. In the 3-tensor $S(h)$ associated to $X(h)$, the coefficients $S_{ijk}(h)$, $i,j,k = 1,2$, are of order $1/\sqrt{h}$, so that the renormalized coefficients $\sqrt{h}\,S_{ijk}(h)$ converge, as $h$ tends to 0, to a 3-tensor $M$ with non-vanishing entries $M_{ijk}$, $i,j,k = 1,2$, together with a matrix $M = (M_{ij0})_{i,j}$. In order to diagonalize the 3-tensor, we solve
$$\big(M_{ij1}\,x + M_{ij2}\,y\big)_{i,j=1,2} = \begin{pmatrix} x \\ y\end{pmatrix}\otimes\begin{pmatrix} x \\ y\end{pmatrix} = \begin{pmatrix} x^2 & xy \\ xy & y^2\end{pmatrix}.$$
There is a unique solution
$$v = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i\end{pmatrix}.$$
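The equation solved above says that $v$ is a "diagonalization direction" of the 3-tensor: $\sum_k M_{ijk}\,v_k = v_i\,v_j$ for all $i,j$. A numerical sketch, on an assumed rank-one tensor of my own construction (not the tensor of the example):

```python
import numpy as np

# Illustrative sketch (assumed tensor): a diagonalization direction of a
# 3-tensor M is a vector v with
#     sum_k M[i, j, k] * v[k] = v[i] * v[j]   for all i, j,
# i.e. the equation (M_{ij1} x + M_{ij2} y)_{ij} = (x, y) (x) (x, y).
v = np.array([1.0, 1j]) / np.sqrt(2)        # candidate direction, ||v|| = 1

# a rank-one tensor admitting v as diagonalization direction
M = np.einsum('i,j,k->ijk', v, v, np.conj(v))

lhs = np.einsum('ijk,k->ij', M, v)          # sum_k M[i,j,k] v[k]
rhs = np.outer(v, v)                         # v (x) v
assert np.allclose(lhs, rhs)
print("v solves  M.v = v (x) v")
```

Here the identity holds because $\sum_k \overline{v_k}\,v_k = \|v\|^2 = 1$; for a general doubly-symmetric tensor the diagonalization directions form an orthogonal system, as recalled earlier in the paper.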
This means that the continuous-time limit process $Z$ is a compensated Poisson process in the direction $v$.

This is all for the information which is given by the 3-tensor. If we want to know the direction where the process is Brownian, we need to look at the decomposition of the matrix $M = (M_{ij0})_{i,j}$ as $V\,V^t$ for a unitary $V$, which is easily computed. This unitary operator is the one by which a real Brownian motion has been rotated in order to land in the orthogonal space of $\binom{1}{i}$. That is, we seek a direction $w$ in $\mathbb{R}^2$ such that $V\,w$ is proportional to $\binom{i}{1}$. Such a $w$ is easily found, and the process $Z$ contains a Brownian motion in the direction $\frac{1}{\sqrt{2}}\binom{i}{1}$.

The process $Z$ is finally described as follows: let $N$ and $W$ be a standard Poisson process and a Brownian motion, respectively, independent of each other. Then
$$Z^1_t = \frac{1}{\sqrt{2}}\,(N_t - t) + \frac{i}{\sqrt{2}}\,W_t\,,\qquad Z^2_t = \frac{i}{\sqrt{2}}\,(N_t - t) + \frac{1}{\sqrt{2}}\,W_t\,.$$

By choosing an example with two directions whose probabilities are of order $h$ and one direction whose probability is of order $1-h$, we would end up with a 3-tensor $M$ that can be completely diagonalized and a process which is made of two compensated Poisson processes on two orthogonal directions of $\mathbb{C}^2$ (cf. the example at the end of [6] for an example in $\mathbb{R}^2$).

References

[1] S. Attal: "Approximating the Fock space with the toy Fock space", Séminaire de Probabilités XXXVI, Springer L.N.M. 1801 (2003), p. 477–497.
[2] S. Attal, J. Deschamps, C. Pellegrini: "Classical Noises Emerging From Quantum Environments", preprint.
[3] S. Attal, A. Dhahri: "Repeated Quantum Interactions and Unitary Random Walks", Journal of Theoretical Probability, 23 (2010), p. 345–361.
[4] S. Attal, M. Émery: "Équations de structure pour des martingales vectorielles", Séminaire de Probabilités XXVIII, p. 256–278, Lecture Notes in Math. 1583, Springer, Berlin, 1994.
[5] S. Attal, Y. Pautrat: "From repeated to continuous quantum interactions", Annales Henri Poincaré, A Journal of Theoretical and Mathematical Physics, 7 (2006), p. 59–104.
[6] S. Attal, Y. Pautrat: "From (n+1)-level atom chains to n-dimensional noises", Ann. Inst. H. Poincaré Probab. Statist. 41 (2005), no. 3, p. 391–407.
[7] L. Bruneau, C.-A. Pillet: "Thermal relaxation of a QED cavity", J. Stat. Phys.
[8] J. Math. Phys. 52 (2011), no. 2, 022109, 19 pp.
[9] B. Bauer, T. Benoist, D. Bernard: "Iterated Stochastic Measurements", J. Phys. A: Math. Theor. 45 (2012), 494020.
[10] S. Haroche, S. Gleyzes, S. Kuhr, C. Guerlin, J. Bernu, S. Deléglise, U. Busk-Hoff, M. Brune and J.-M. Raimond: "Quantum jumps of light recording the birth and death of a photon in a cavity", Nature.
[11] Nature, 477, 73 (2011).
[12] Y. P. Hong, R. A. Horn: "On Simultaneous Reduction of Families of Matrices to Triangular or Diagonal Form by Unitary Congruence", Linear and Multilinear Algebra, 17 (1985), p. 271–288.
[13] R. L. Hudson, K. R. Parthasarathy: "Quantum Ito's formula and stochastic evolutions", Comm. Math. Phys. 93 (1984), p. 301–323.
[14] P.-A. Meyer: "Quantum Probability for Probabilists", Lecture Notes in Mathematics 1538, Springer-Verlag, Berlin, 1993.
[15] C. Pellegrini: "Existence, uniqueness and approximation of a stochastic Schrödinger equation: the diffusive case", Ann. Probab. 36 (2008), no. 6, p. 2332–2353.
[16] C. Pellegrini: "Existence, Uniqueness and Approximation of the jump-type Stochastic Schrödinger Equation for two-level systems", Stochastic Processes and their Applications, 120 (2010), no. 9, p. 1722–1747.
[17] C. Pellegrini: "Markov Chain Approximations of Jump-Diffusion Stochastic Master Equations", Annales de l'Institut Henri Poincaré: Probabilités et Statistiques, 46 (2010), p. 924–948.
[18] G. Taviot: "Martingales et équations de structure : étude géométrique", Thèse de Doctorat de l'Université Louis Pasteur, 29 March 1999.