A. I. STAN* AND F. CATRINA
Abstract. A definition of $d$-dimensional $n$-Meixner random vectors is given first. This definition involves the commutators of their semi-quantum operators. After that we focus on the 1-Meixner random vectors, and derive a system of $d$ partial differential equations satisfied by their Laplace transform. We provide a set of necessary conditions for this system to be integrable. We use these conditions to give a complete characterization of all non-degenerate three-dimensional 1-Meixner random vectors. It must be mentioned that the three-dimensional case produces the first example in which the components of a 1-Meixner random vector cannot be reduced, via an injective linear transformation, to three independent classic Meixner random variables.

Keywords: semi-quantum operators and commutators; Gamma distributions; 1-Meixner random vectors; Laplace transform. MSC: 42C05, 46L53.

1. Introduction
Since its discovery in [6], the class of Meixner random variables has been intensively studied by many authors. It seems that, among the six types of random variables belonging to this class (after a shifting and re-scaling): Gaussian, Poisson, negative binomial, Gamma, two-parameter hyperbolic secant, and binomial, the two simplest should be the Gaussian and Poisson ones. This is apparent from the fact that the principal Szegő–Jacobi parameters, $\{\omega_n\}_{n \geq 1}$, for these two types of random variables, can be expressed as a linear function (with no constant term) of $n$,
$$\omega_n = tn,$$
where $t$ is a nonnegative number, while for the other four types, these parameters are quadratic functions (with no constant term) of $n$,
$$\omega_n = \beta n^2 + (t - \beta)n,$$
where either both $\beta$ and $t$ are non-negative, or $\beta$ is negative and $t$ is a negative integer multiple of $\beta$ (hence $t$ is positive).

However, as was pointed out in [8], from the point of view of the commutator between the semi-quantum operators generated by each classic Meixner random variable, the two simplest are the Gaussian and Gamma distributed ones. There are some possible reasons why the Gamma distributed random variables could be considered more "basic" than the Poisson random variables. First of all, looking only at the simplicity of the principal Szegő–Jacobi parameters, $\{\omega_n\}_{n \geq 1}$, means ignoring the importance of the secondary Szegő–Jacobi parameters, $\{\alpha_n\}_{n \geq 0}$, which for all classic Meixner random variables are linear functions of $n$:
$$\alpha_n = \alpha n + \alpha_0,$$
for all $n \geq 0$, where $\alpha$ and $\alpha_0$ are both fixed real numbers. For the Gamma and Gaussian distributed random variables, the fixed real numbers $\alpha$ and $\beta$ are not independent of each other, but are linked by the relation:
$$\alpha^2 = 4\beta.$$

The Szegő–Jacobi parameters make sense in the one-dimensional case. It has been proposed in [1] and [2] that a good replacement of the Szegő–Jacobi parameters, in the multi-dimensional case, is given by the quantum operators: creation, preservation, and annihilation operators. For polynomially symmetric random vectors, the preservation operators vanish, see [1]. For this reason, it is much easier to study the polynomially symmetric random vectors than the non-symmetric ones. In the symmetric case, the multiplication operator generated by each random variable (assumed to have finite moments of all orders) is the sum of only two operators: a creation and an annihilation operator. For non-symmetric random vectors, the preservation operators play a significant role. In order to capture the effect of these operators, and still treat the multiplication operator generated by each random variable as a sum of only two operators, in [7] each preservation operator was split into two halves. One half was added to the corresponding creation operator, while the other half was added to the annihilation operator. In this way the semi-quantum operators, the semi-creation and semi-annihilation operators, were defined, and the multiplication operator generated by each random variable can also be written as a sum of only two semi-quantum operators. Moreover, by splitting the preservation operators into two equal parts, the fact that each semi-creation operator is the polynomial dual of its corresponding semi-annihilation operator was preserved.
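The two parameter families above can be checked directly from moments. The following sympy sketch (not from the paper; it assumes, as an illustrative example, a Gamma variable with shape $k = 3$ and scale 1) builds the monic orthogonal polynomials through the three-term recurrence $x f_n = f_{n+1} + \alpha_n f_n + \omega_n f_{n-1}$ and recovers $\alpha_n = 2n + k$ and $\omega_n = n^2 + (k-1)n$, i.e. $\alpha = 2$, $\alpha_0 = k$, $\beta = 1$, $t = k$, so that indeed $\alpha^2 = 4\beta$:

```python
import sympy as sp

x = sp.symbols('x')
k = sp.Integer(3)  # assumed shape parameter of the Gamma distribution

def moment(m):
    # E[Y^m] for Y ~ Gamma(k, 1) is the rising factorial k(k+1)...(k+m-1)
    return sp.rf(k, m)

def expect(poly):
    # E[p(Y)] by linearity in the moments
    p = sp.Poly(sp.expand(poly), x)
    return sp.expand(sum(c * moment(m) for (m,), c in p.terms()))

# monic orthogonal polynomials via x f_n = f_{n+1} + alpha_n f_n + omega_n f_{n-1}
f = [sp.Integer(1)]
a0 = expect(x)                      # alpha_0 = E[Y] = k
alphas, omegas = [a0], []
f.append(sp.expand(x - a0))
for n in range(1, 5):
    norm_n = expect(f[n]**2)
    a_n = sp.simplify(expect(x * f[n]**2) / norm_n)
    w_n = sp.simplify(norm_n / expect(f[n - 1]**2))
    alphas.append(a_n)
    omegas.append(w_n)
    f.append(sp.expand((x - a_n) * f[n] - w_n * f[n - 1]))

print(alphas)   # alpha_n = 2n + k: [3, 5, 7, 9, 11]
print(omegas)   # omega_n = n^2 + (k-1)n: [3, 8, 15, 24]
```

For $k = 3$ the computed parameters are linear ($\alpha_n = 2n + 3$) and quadratic with no constant term ($\omega_n = n^2 + 2n$), exactly the Gamma pattern described above.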
In this way, the non-symmetric random vectors can be treated similarly to the symmetric ones. It must also be mentioned that using the semi-quantum operators, rather than the quantum operators, it was possible not only to give an elegant way to describe the Gamma and Gaussian random variables, using only one condition, but also to come up with the definition of a countable family of Meixner classes. Each class is determined by the number of nested commutators involving the semi-quantum operators, see [8]. Thus, for example, the Gamma and Gaussian use only one commutator, while the class of classic Meixner random variables is described by two nested commutators.

In [8] the non-degenerate two-dimensional 1-Meixner random vectors were characterized. The method used in that paper was particular to the two-dimensional case, and did not give any indication about how to proceed in the multi-dimensional case. In the two-dimensional case, the components of a 1-Meixner random vector can be reduced, via an injective affine transformation, to two independent classic Gamma or Gaussian random variables.

In this paper, we first find a system of $d$ linear partial differential equations satisfied by the Laplace transform of a non-degenerate $d$-dimensional 1-Meixner random vector. Then we find a set of necessary conditions for the integrability of this system of linear partial differential equations. Finally, we give a complete characterization of all non-degenerate three-dimensional 1-Meixner random vectors. The important thing that will appear in our characterization is the fact that there are non-degenerate three-dimensional Meixner random vectors whose components cannot be reduced, via any injective affine transformation, to three independent random variables.
This fact is of great importance, because it shows the power of the quantum and semi-quantum operators as natural extensions of the classic Szegő–Jacobi parameters to the multi-dimensional case.

The paper is structured as follows. In section 2, we give a minimal background of quantum and semi-quantum operators. In section 3, we review the Meixner random variables. In section 4, we present a set of simplifying assumptions and consistency conditions. In section 5, we present a system of partial differential equations which has to be satisfied by the Laplace transform of any $d$-dimensional 1-Meixner random vector, for all natural numbers $d$. In section 6, we find a necessary condition for the existence of solutions for this system. In section 7, we describe all solutions of the system in the particular case $d = 3$. Finally, in the Appendix, we present calculations of the Laplace transforms of measures that match the solutions found in the three-dimensional case.

2. Background
Throughout this paper we consider $d$ random variables, $X_1, X_2, \ldots, X_d$, having finite moments of all orders, and defined on the same probability space $(\Omega, \mathcal{F}, P)$, where $d$ is a fixed natural number. We define the space:
$$\mathcal{F} := \{ f(X_1, X_2, \ldots, X_d) \mid f \text{ is a polynomial} \},$$
and call it the space of all polynomial random variables in $X_1, X_2, \ldots, X_d$. The polynomials $f$ in this definition are polynomials of $d$ variables with complex coefficients. For each non-negative integer $n$, we define the space:
$$\mathcal{F}_n := \{ f(X_1, X_2, \ldots, X_d) \mid f \text{ is a polynomial of degree at most } n \}.$$
Since $X_1, X_2, \ldots, X_d$ have finite moments of all orders, we have:
$$\mathbb{C} \equiv \mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots \subseteq \mathcal{F} \subseteq L^2(\Omega, \mathcal{F}, P).$$
Moreover, since for each $n \geq 0$, $\mathcal{F}_n$ is a finite dimensional vector space, we conclude that $\mathcal{F}_n$ is a closed subspace of $L^2(\Omega, \mathcal{F}, P)$. Since the spaces $\{\mathcal{F}_n\}_{n \geq 0}$ are closed and contained one into another, we can orthogonalize them with respect to the inner product, $\langle \cdot, \cdot \rangle$, of the space $L^2(\Omega, \mathcal{F}, P)$. Thus, we define $G_0 := \mathcal{F}_0$, and for all $n \geq 1$,
$$G_n := \mathcal{F}_n \ominus \mathcal{F}_{n-1},$$
that means $G_n$ is the orthogonal complement of $\mathcal{F}_{n-1}$ in $\mathcal{F}_n$. For each $n \geq 0$, we call $G_n$ the $n$-th homogenous chaos space generated by $X_1, X_2, \ldots, X_d$. We also call every random variable $f(X_1, X_2, \ldots, X_d)$ in $G_n$ a homogenous polynomial random variable of degree $n$ (here, the word "homogenous" does not have the classic meaning that all terms have the same degree). Define the spaces $\mathcal{F}_{-1} = G_{-1} = \{0\}$, where $\{0\}$ denotes the null space.

We change now the way that we view the random variables $X_1, X_2, \ldots, X_d$, by regarding them as the multiplication operators that they generate. That means, for each $i \in \{1, 2, \ldots, d\}$, we consider the linear operator going from $\mathcal{F}$ to $\mathcal{F}$, defined by:
$$f(X_1, X_2, \ldots, X_d) \mapsto X_i f(X_1, X_2, \ldots, X_d).$$
For all $i \in \{1, 2, \ldots, d\}$, we denote this operator by $X_i$. In what follows, instead of $f(X_1, X_2, \ldots, X_d)$, we write briefly $f$. Regarding the multiplication operators $X_1, X_2, \ldots, X_d$, we have the following lemma, which can be found in [3], Theorem 1, page 6 (see also [1], Lemma 2.1., page 487).

Lemma 1.
For all $i \in \{1, 2, \ldots, d\}$ and all non-negative integers $n$, we have:
$$X_i G_n \perp G_k, \quad \text{for all } k \notin \{n-1, n, n+1\},$$
where "$\perp$" means "orthogonal to".

From this lemma, we conclude that, for all $i \in \{1, 2, \ldots, d\}$ and all $n \geq 0$, we have:
$$X_i G_n \subseteq G_{n-1} \oplus G_n \oplus G_{n+1}.$$
That means, if $f \in G_n$, there exist three unique homogenous polynomial random variables, $f_{n-1,i} \in G_{n-1}$, $f_{n,i} \in G_n$, and $f_{n+1,i} \in G_{n+1}$, such that:
$$X_i f = f_{n-1,i} + f_{n,i} + f_{n+1,i}.$$
We define the following linear operators:
$$D_n^-(i) : G_n \to G_{n-1}, \quad D_n^-(i) f := f_{n-1,i},$$
and call $D_n^-(i)$ an annihilation operator, since it decreases the degree of a homogenous polynomial by one unit,
$$D_n^0(i) : G_n \to G_n, \quad D_n^0(i) f := f_{n,i},$$
and call $D_n^0(i)$ a preservation operator, because it preserves the degree of a homogenous polynomial, and
$$D_n^+(i) : G_n \to G_{n+1}, \quad D_n^+(i) f := f_{n+1,i},$$
and call $D_n^+(i)$ a creation operator, due to the fact that it increases the degree of a homogenous polynomial by one unit. Lemma 1 can now be written as:
For all $1 \leq i \leq d$ and all $n \geq 0$, we have:
$$X_i|_{G_n} = D_n^-(i) + D_n^0(i) + D_n^+(i),$$
where $X_i|_{G_n}$ is the restriction of the multiplication operator $X_i$ to the space $G_n$.

We extend now, by linearity, the definition of the annihilation, preservation, and creation operators to the space $\mathcal{F}$ of all polynomial random variables, in the following way. If $f \in \mathcal{F}$, then there exist unique homogenous polynomial random variables $f_0 \in G_0$, $f_1 \in G_1$, $f_2 \in G_2$, $\ldots$, with only finitely many of them different from zero, such that:
$$f = f_0 + f_1 + f_2 + \cdots.$$
We define the $i$-th annihilation operator by:
$$a^-(i) f = D_0^-(i) f_0 + D_1^-(i) f_1 + D_2^-(i) f_2 + \cdots,$$
the $i$-th preservation operator by:
$$a^0(i) f = D_0^0(i) f_0 + D_1^0(i) f_1 + D_2^0(i) f_2 + \cdots,$$
and the $i$-th creation operator by:
$$a^+(i) f = D_0^+(i) f_0 + D_1^+(i) f_1 + D_2^+(i) f_2 + \cdots.$$
Lemma 1 becomes now:
Lemma 3.
For all $i \in \{1, 2, \ldots, d\}$, we have:
$$X_i = a^-(i) + a^0(i) + a^+(i),$$
where the domain of $X_i$, $a^-(i)$, $a^0(i)$, and $a^+(i)$ is considered to be the space $\mathcal{F}$ of all polynomial random variables.

We call the operators $\{a^-(i)\}_{1 \le i \le d}$, $\{a^0(i)\}_{1 \le i \le d}$, and $\{a^+(i)\}_{1 \le i \le d}$ the joint quantum operators of $X_1, X_2, \ldots, X_d$. It is not hard to see that, for all $i \in \{1, 2, \ldots, d\}$, we have:
$$\left(a^+(i)\right)^* = a^-(i) \quad \text{and} \quad \left(a^0(i)\right)^* = a^0(i),$$
where the above duality is a polynomial duality; that means, for all $f$ and $g$ in $\mathcal{F}$, we have:
$$\langle a^+(i)f, g\rangle = \langle f, a^-(i)g\rangle \quad \text{and} \quad \langle a^0(i)f, g\rangle = \langle f, a^0(i)g\rangle.$$
It is clear that for all $i$ and $j$ in $\{1, 2, \ldots, d\}$ the multiplication operators by $X_i$ and $X_j$ commute, that means $X_iX_j = X_jX_i$. It was shown in [1] and [2] that the commutativity of the multiplication operators by $X_i$ and $X_j$ is equivalent, in terms of the commutators of the joint quantum operators, to the following set of rules:
$$[a^-(i), a^-(j)] = 0, \quad (1)$$
$$[a^-(i), a^0(j)] = [a^-(j), a^0(i)], \quad (2)$$
$$[a^0(i), a^0(j)] = [a^-(j), a^+(i)] - [a^-(i), a^+(j)], \quad (3)$$
$$[a^0(i), a^+(j)] = [a^0(j), a^+(i)], \quad (4)$$
and
$$[a^+(i), a^+(j)] = 0. \quad (5)$$
We refer to the commutation rules (1), (2), (3), (4), and (5) as the axioms of Commutative Probability.

For all $i \in \{1, 2, \ldots, d\}$, we define the linear operators $U_i$, $V_i : \mathcal{F} \to \mathcal{F}$,
$$U_i = a^-(i) + \frac{1}{2}a^0(i) \quad \text{and} \quad V_i = a^+(i) + \frac{1}{2}a^0(i).$$
For all $i \in \{1, 2, \ldots, d\}$, we call $U_i$ a semi-annihilation operator, and $V_i$ a semi-creation operator. We also call $\{U_i\}_{1 \le i \le d}$ and $\{V_i\}_{1 \le i \le d}$ the joint semi-quantum operators generated by $X_1, X_2, \ldots, X_d$. It is now clear that, for all $i \in \{1, 2, \ldots, d\}$, we have:
$$X_i = U_i + V_i \quad \text{and} \quad V_i^* = U_i,$$
where the above duality is, as before, only a polynomial duality. As it was shown in [7], the axioms of Commutative Probability can now be written, in terms of the commutators involving the joint semi-quantum operators, as any one of the following three equivalent statements:
(1) For all $(i, j) \in \{1, 2, \ldots, d\}^2$, we have $[U_i, X_j] = [U_j, X_i]$.
(2) For all $(i, j) \in \{1, 2, \ldots, d\}^2$, we have $[X_i, V_j] = [X_j, V_i]$.
(3) For all $(i, j) \in \{1, 2, \ldots, d\}^2$, the operators $[U_i, X_j]$ and $[X_i, V_j]$ are polynomially self-adjoint.

Let us see what this general theory becomes in the one-dimensional case, $d = 1$. For $d = 1$, there is no need to use subscripts, since we are dealing with only one random variable, $X$, one creation operator, $a^+$, one preservation operator, $a^0$, one annihilation operator, $a^-$, one semi-creation operator, $V$, and one semi-annihilation operator, $U$. For all $n \geq 0$, since the co-dimension of the space $\mathcal{F}_{n-1}$ (spanned by $1, X, \ldots, X^{n-1}$) in $\mathcal{F}_n$ (spanned by $1, X, \ldots, X^{n-1}, X^n$) is at most 1, the homogenous chaos space $G_n = \mathcal{F}_n \ominus \mathcal{F}_{n-1}$ has dimension at most 1. If the random variable $X$ takes on only a finite number, $k$, of different values, with positive probability, then we have:
$$\dim(G_n) = \begin{cases} 1 & \text{if } n \leq k-1 \\ 0 & \text{if } n \geq k, \end{cases}$$
where "dim" denotes the dimension. If the probability distribution of the random variable $X$ has an infinite support, then, for all $n \geq 0$, we have $\dim(G_n) = 1$. If $\dim(G_n) = 1$, then there exists a unique polynomial $f_n \in G_n$ having leading coefficient equal to 1. Because $XG_n \subseteq G_{n+1} + G_n + G_{n-1}$, there exist real numbers $\alpha_n$ and $\omega_n$ such that:
$$Xf_n(X) = f_{n+1}(X) + \alpha_n f_n(X) + \omega_n f_{n-1}(X).$$
For $n = 0$, because $f_{-1} = 0$, we can choose $\omega_0$ as we please. The real numbers $\{\alpha_n\}_{n \geq 0}$ and $\{\omega_n\}_{n \geq 1}$ are called the Szegő–Jacobi parameters of $X$. In the case $d = 1$, for all $n \geq 0$, we have:
$$a^- f_n = \omega_n f_{n-1}, \quad a^0 f_n = \alpha_n f_n, \quad a^+ f_n = f_{n+1},$$
$$U f_n = \frac{\alpha_n}{2} f_n + \omega_n f_{n-1}, \quad \text{and} \quad V f_n = f_{n+1} + \frac{\alpha_n}{2} f_n.$$

3. $d$-dimensional $n$-Meixner random vectors: definition and general properties

We review now the classic Meixner random variables.
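In the one-dimensional case the operators above have simple tridiagonal matrices. As a sanity check (a sketch, not from the paper), the following numpy snippet builds them on the orthonormal basis $e_n = f_n/\|f_n\|$ for an assumed example, a Gamma-type variable with Szegő–Jacobi parameters $\alpha_n = 2n + 3$ and $\omega_n = n(n+2)$, and verifies Lemma 3, the duality $V^* = U$, and $X = U + V$:

```python
import numpy as np

# Szego-Jacobi parameters of a Gamma(shape 3, scale 1) variable (assumed example)
N = 8
alpha = np.array([2*n + 3 for n in range(N)], dtype=float)        # alpha_n = 2n + 3
omega = np.array([n*(n + 2) for n in range(1, N)], dtype=float)   # omega_n = n(n + 2)

# On the orthonormal basis e_n = f_n/||f_n||:
#   a^+ e_n = sqrt(omega_{n+1}) e_{n+1},  a^- e_n = sqrt(omega_n) e_{n-1},  a^0 e_n = alpha_n e_n
a_plus = np.diag(np.sqrt(omega), -1)   # creation
a_minus = a_plus.T                     # annihilation is the adjoint of creation
a_zero = np.diag(alpha)                # preservation

X = a_minus + a_zero + a_plus          # Lemma 3: X = a^- + a^0 + a^+
U = a_minus + a_zero / 2               # semi-annihilation U = a^- + (1/2) a^0
V = a_plus + a_zero / 2                # semi-creation     V = a^+ + (1/2) a^0

assert np.allclose(X, X.T)             # multiplication by X is self-adjoint
assert np.allclose(V.T, U)             # V* = U (polynomial duality)
assert np.allclose(U + V, X)           # X = U + V
```

In the orthonormal basis the polynomial duality becomes an ordinary matrix transpose, which is why the three identities can be tested with plain linear algebra.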
Definition 1.
A real valued random variable $X$, having finite moments of all orders, is called a classic Meixner random variable if its Szegő–Jacobi parameters are of the form:
$$\alpha_n = \alpha n + \alpha_0 \quad \text{and} \quad \omega_n = \beta n^2 + (t - \beta)n,$$
for all $n \geq 1$, where $\alpha$, $\alpha_0$, $\beta$, and $t$ are fixed real numbers such that one of the two possible scenarios happens:
(1) $\beta \geq 0$ and $t \geq 0$;
(2) $\beta < 0$ and $t \in -\mathbb{N}\beta$.
Of course, if $t = 0$, then $\omega_1 = 0$, and so $X$ is a constant random variable, since the $L^2$-norm of the 1-degree monic orthogonal polynomial is $\|f_1\|^2 = \omega_1 = 0$. This case is not interesting, and we are going to assume that $t > 0$. We may also assume that $\alpha \geq 0$, replacing if necessary $X$ by $-X$. There are six types of Meixner random variables:

• If $\alpha = \beta = 0$, then $X$ is a Gaussian random variable, i.e., a continuous random variable given by the density function
$$f(x) = \frac{1}{\sqrt{2\pi t}}\, e^{-(x - \alpha_0)^2/(2t)}.$$

• If $\beta = 0$ and $\alpha \neq 0$, then $X$ is a shifted and re-scaled Poisson random variable, i.e.,
$$\mu_X = \sum_{k=0}^{\infty} \frac{\lambda^k}{k!}\, e^{-\lambda}\, \delta_{\alpha(k - \lambda) + \alpha_0},$$
where $\lambda := t/\alpha^2$.

• If $\beta > 0$ and $\alpha^2 > 4\beta$, then $X$ is a shifted Pascal (negative binomial) random variable, i.e.,
$$\mu_X = \sum_{k=0}^{\infty} \frac{\Gamma(r+k)}{k!\,\Gamma(r)}\, p^r (1-p)^k\, \delta_{dk - [2t/(\alpha+d)] + \alpha_0},$$
where $d := \sqrt{\alpha^2 - 4\beta}$, $p := 2d/(\alpha + d)$, and $r := t/\beta$.

• If $\beta > 0$ and $\alpha^2 = 4\beta$, then $X$ is a shifted and re-scaled Gamma distributed random variable with shape parameter $2t/\alpha^2$ and scaling parameter $\alpha/2$, i.e.,
$$f(x) = \frac{2^{2t/\alpha^2}}{\alpha^{2t/\alpha^2}\,\Gamma(2t/\alpha^2)}\, x^{(2t/\alpha^2) - 1}\, e^{-2x/\alpha}\, \mathbf{1}_{(0,\infty)}(x).$$

• If $\beta > 0$ and $\alpha^2 < 4\beta$, then, up to a translation, $X$ is a two-parameter hyperbolic secant random variable:
$$f(x) = c\, e^{\theta x/\gamma} \left|\Gamma\!\left(k + i\,\frac{x}{\gamma}\right)\right|^2,$$
where $\gamma := \sqrt{4\beta - \alpha^2}$ and $\gamma + i\alpha = re^{i\theta}$, with $-\pi/2 < \theta < \pi/2$ and $k := 2t/(r\gamma)$.

• If $\beta < 0$, then $t \in -\mathbb{N}\beta$, and in this case, up to a shifting and re-scaling, $X$ is a binomial random variable:
$$\mu_X = \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k} \delta_k,$$
where $n := -t/\beta$, $p := (1/2) \pm (1/2)\sqrt{c/(4+c)}$, and $c := -\alpha^2/\beta \geq 0$.

The Gamma distributed random variables (Meixner with $\beta > 0$ and $\alpha^2 = 4\beta > 0$) and the Gaussian random variables (Meixner with $\alpha = \beta = 0$) are exactly those random variables $X$, having finite moments of all orders, for which the commutator between the semi-annihilation operator $U$ and $X$ is of the form:
$$[U, X] = bX + cI, \quad (6)$$
where $b$ and $c$ are real numbers. Using the polynomial duality between $U$ and $V$, where $V$ denotes the semi-creation operator of $X$, this condition is equivalent to:
$$[X, V] = bX + cI. \quad (7)$$
We must mention that the equality (6) makes sense since one of the axioms of Commutative Probability says that $[U, X]$ is a polynomially self-adjoint operator. The shifted multiplication operator $bX + cI$ from the right-hand side of (6) is clearly self-adjoint. If $[U, X]$ were not self-adjoint, then equality (6) would have been impossible. It was also shown in [9] that the classic Meixner random variables (all six of them) are exactly those random variables $X$, having finite moments of all orders, for which the double commutator $[[U, X], X]$ is of the form:
$$[[U, X], X] = bV - bU \quad (8)$$
$$= b(X - 2U),$$
where $b$ is a real number. One may wonder why the right-hand side of (8) looks much different from the right-hand side of (6). The reason is that while the commutator $[U, X]$, from the left-hand side of (6), is polynomially self-adjoint, the double commutator $[[U, X], X]$, from the left-hand side of (8), is polynomially anti-self-adjoint. Thus, the coefficients of $U$ and $V$ in the right-hand side of (8) must be opposite one to another. Observe that when we have an odd number of nested commutators, we obtain a polynomially self-adjoint operator, while an even number of nested commutators creates a polynomially anti-self-adjoint operator. The following definition of different types of Meixner random vectors was proposed in [8].

Definition 2.
Let $X_1, X_2, \ldots, X_d$ be $d$ random variables having finite moments of all orders. Let $n$ be a natural number. We say that $(X_1, X_2, \ldots, X_d)$ is a $d$-dimensional $n$-Meixner random vector if:

• If $n$ is odd, then for all $i$, $i_1$, $i_2$, $\ldots$, $i_n$ in $\{1, 2, \ldots, d\}$, we have:
$$[\cdots[[U_i, X_{i_1}], X_{i_2}], \cdots, X_{i_n}] = \sum_{j=1}^{d} b_{i, i_1 i_2 \ldots i_n, j} X_j + c_{i, i_1 i_2 \ldots i_n} I,$$
for some real numbers $b_{i, i_1 i_2 \ldots i_n, j}$ and $c_{i, i_1 i_2 \ldots i_n}$, $1 \leq j \leq d$.

• If $n$ is even, then for all $i$, $i_1$, $i_2$, $\ldots$, $i_n$ in $\{1, 2, \ldots, d\}$, we have:
$$[\cdots[[U_i, X_{i_1}], X_{i_2}], \cdots, X_{i_n}] = \sum_{j=1}^{d} b_{i, i_1 i_2 \ldots i_n, j}(V_j - U_j) = \sum_{j=1}^{d} b_{i, i_1 i_2 \ldots i_n, j}(X_j - 2U_j),$$
for some real numbers $b_{i, i_1 i_2 \ldots i_n, j}$, $1 \leq j \leq d$.

Moreover, we say that the random vector $(X_1, X_2, \ldots, X_d)$ is non-degenerate if the operators $I$, $X_1$, $X_2$, $\ldots$, $X_d$ are linearly independent. If $\mu$ denotes the joint probability distribution of $X_1, X_2, \ldots, X_d$, and the space of all polynomial functions of $d$ variables, $F(x_1, x_2, \ldots, x_d)$, is dense in $L^2(\mathbb{R}^d, \mu)$, then the non-degeneracy condition is equivalent to the fact that $1$, $X_1$, $X_2$, $\ldots$, $X_d$ are linearly independent as random variables, where $1$ denotes the constant random variable equal to 1.

According to this definition, the shifted and re-scaled Gamma and Gaussian distributed random variables form the one-dimensional 1-Meixner random vectors (variables). The classic Meixner random variables (all six types of them) are precisely the one-dimensional 2-Meixner random vectors. We would like to stress that this definition, which uses the semi-quantum operators, permits us to include all six types of classic Meixner random variables. It also allows us to keep the number of commutator conditions to a minimum. Moreover, it allows us to study not only 3-Meixner, 4-Meixner, $\ldots$ random variables, but also random vectors.

In [7] it was shown how this definition, employing the commutator between $U$ and $X$, can be used effectively to recover first the moments and then the probability distributions of the 1-Meixner random variables. In [9], it was shown how the definition using the double commutators can be applied to recover the probability distributions of all the classic Meixner random variables.
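The single-commutator characterization (6) can also be checked numerically on truncated matrices, using the orthonormal-basis conventions of Section 2. The sketch below (shape and variance values are assumed examples; the last matrix column is excluded because truncation corrupts it) confirms $[U, X] = X$ for a Gamma-type variable with $\alpha_n = 2n+3$, $\omega_n = n(n+2)$, and $[U, X] = tI$ for a Gaussian variable with $\omega_n = tn$:

```python
import numpy as np

def semi_ops(alpha, omega):
    """U and X on the orthonormal polynomial basis, from Szego-Jacobi parameters."""
    a_plus = np.diag(np.sqrt(omega), -1)
    U = a_plus.T + np.diag(alpha) / 2          # U = a^- + (1/2) a^0
    X = a_plus.T + np.diag(alpha) + a_plus     # X = a^- + a^0 + a^+
    return U, X

N = 10
n = np.arange(N, dtype=float)
I = np.eye(N)

# Gamma: alpha_n = 2n + 3, omega_n = n(n + 2)  ->  [U, X] = b X + c I with b = 1, c = 0
U, X = semi_ops(2*n + 3, (n**2 + 2*n)[1:])
C = U @ X - X @ U
assert np.allclose(C[:, :N-1], (1.0*X + 0.0*I)[:, :N-1])

# Gaussian, variance t = 2: alpha_n = 0, omega_n = 2n  ->  [U, X] = b X + c I with b = 0, c = t
U, X = semi_ops(0*n, (2*n)[1:])
C = U @ X - X @ U
assert np.allclose(C[:, :N-1], (0.0*X + 2.0*I)[:, :N-1])
```

The specific constants $b$ and $c$ depend on the normalization of the variable; the point of the check is only that the commutator is an affine function of $X$, as (6) asserts.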
In [8], the first step in moving from one dimension to two dimensions was achieved, and all non-degenerate two-dimensional 1-Meixner random vectors were described. In this paper, we will find a system of differential equations satisfied by the Laplace transform of each non-degenerate $d$-dimensional 1-Meixner random vector, for every finite dimension $d$, and characterize all the non-degenerate three-dimensional 1-Meixner random vectors.

4. $d$-dimensional 1-Meixner random vectors: simplifying assumptions and consistency conditions

In this section we make some simplifying assumptions that will ease our work in the next section. We also establish some important consistency conditions. The following is a very simple fact to check.
Proposition 1.
Let $X_1, X_2, \ldots, X_d$ be $d$ random variables defined on the same probability space $(\Omega, \mathcal{F}, P)$ and having finite moments of all orders. Let $\{U_i\}_{1 \le i \le d}$ and $\{V_i\}_{1 \le i \le d}$ be the joint semi-annihilation and semi-creation operators, respectively, generated by $X_1, X_2, \ldots, X_d$. If $A = (a_{i,j})_{1 \le i,j \le d}$ is a $d \times d$ invertible matrix with real entries, and $b = (b_i)_{1 \le i \le d}$ is a vector in $\mathbb{R}^d$, then if we define the random variables:
$$X_i' := a_{i,1}X_1 + a_{i,2}X_2 + \cdots + a_{i,d}X_d + b_i, \quad 1 \le i \le d,$$
then $X_1', X_2', \ldots, X_d'$ have finite moments of all orders, and their joint semi-quantum operators satisfy:
$$U_i' = a_{i,1}U_1 + a_{i,2}U_2 + \cdots + a_{i,d}U_d + \frac{b_i}{2}I, \quad 1 \le i \le d,$$
and similar formulas hold for their semi-creation operators. Moreover, if $(X_1, X_2, \ldots, X_d)$ is non-degenerate, then $(X_1', X_2', \ldots, X_d')$ is also non-degenerate.

Let us assume now that $(X_1, X_2, \ldots, X_d)$ is a non-degenerate $d$-dimensional 1-Meixner random vector. That means, there exist two finite sequences of real numbers, $\{\alpha_{i,j,k}\}_{1 \le i,j,k \le d}$ and $\{\beta_{i,j}\}_{1 \le i,j \le d}$, such that, for all $(i, j) \in \{1, 2, \ldots, d\}^2$, we have:
$$[U_i, X_j] = \sum_{k=1}^{d} \alpha_{i,j,k} X_k + \beta_{i,j} I,$$
where $I$ denotes the identity operator on the space $\mathcal{F}$ of all polynomial random variables in $X_1, X_2, \ldots, X_d$. It follows now from Proposition 1 that if we apply an invertible affine transformation $T : \mathbb{R}^d \to \mathbb{R}^d$, $Ty = Ay + b$, where $A$ is a $d \times d$ invertible matrix and $b$ a vector in $\mathbb{R}^d$, to the random vector $X := (X_1, X_2, \ldots, X_d)$, then the obtained random vector $X' := AX + b$ is also a non-degenerate $d$-dimensional 1-Meixner random vector.

Let us first center the random variables $X_1, X_2, \ldots, X_d$, by subtracting from each of them its expectation. That means, for all $i \in \{1, 2, \ldots, d\}$, we define $X_i' := X_i - E[X_i]$. Thus, $(X_1', X_2', \ldots, X_d')$ is a non-degenerate $d$-dimensional 1-Meixner random vector in which each component is a random variable with expectation equal to 0. Now let us apply the Gram–Schmidt orthogonalization procedure to $X_1', X_2', \ldots, X_d'$, with respect to the inner product $\langle \cdot, \cdot \rangle$ of $L^2(\Omega, \mathcal{F}, P)$. We obtain an orthonormal set of random variables $X_1'', X_2'', \ldots, X_d''$, which are also the components of a centered non-degenerate $d$-dimensional 1-Meixner random vector, since the Gram–Schmidt orthogonalization procedure is obtained via an invertible linear map. Thus, via an invertible affine map, we may assume that $(X_1, X_2, \ldots, X_d)$ is a non-degenerate $d$-dimensional 1-Meixner random vector such that for all $i$, $j$, and $k$ in $\{1, 2, \ldots, d\}$, we have:
$$E[X_iX_j] = \delta_{i,j} \quad \text{and} \quad E[X_k] = 0,$$
where $\delta_{i,j}$ denotes Kronecker's symbol. Since, for all $1 \le i \le d$, we have $E[X_i] = 0$, if we define $\phi := 1$, i.e., $\phi$ is the constant polynomial equal to 1, then we have:
$$U_i\phi = a^-(i)\phi + \frac{1}{2}a^0(i)\phi = 0 + \frac{1}{2}E[X_i]\phi = 0,$$
since $\phi \in G_0$ and $a^-(i) : G_0 \to G_{-1} = \{0\}$, while:
$$a^0(i)\phi = P_0(X_i \cdot \phi) = \langle X_i\phi, \phi\rangle\, \phi = E[X_i]\phi,$$
where $P_n$ denotes the orthogonal projection of $L^2(\Omega, \mathcal{F}, P)$ onto $G_n$, for all $n \geq 0$. For all $1 \le i, j \le d$, using the polynomial duality between $U_i$ and $V_i$, we have:
$$\delta_{i,j} = E[X_iX_j] = \langle X_iX_j\phi, \phi\rangle = \langle (U_i + V_i)X_j\phi, \phi\rangle = \langle U_iX_j\phi, \phi\rangle + \langle X_j\phi, U_i\phi\rangle$$
$$= \langle X_jU_i\phi, \phi\rangle + \langle [U_i, X_j]\phi, \phi\rangle + 0 = 0 + \left\langle \left(\sum_{k=1}^{d} \alpha_{i,j,k}X_k + \beta_{i,j}I\right)\phi, \phi\right\rangle = \sum_{k=1}^{d} \alpha_{i,j,k}E[X_k] + \beta_{i,j} = \beta_{i,j}.$$
Thus, for all $(i, j) \in \{1, 2, \ldots, d\}^2$, we have $\beta_{i,j} = \delta_{i,j}$. We establish now two sets of consistency conditions:
Proposition 2. Linear Consistency Conditions
For all $(i, j, k) \in \{1, 2, \ldots, d\}^3$, we have:
(1) $\alpha_{i,j,k} = \alpha_{j,i,k}$;
(2) $\alpha_{i,j,k} = \alpha_{i,k,j}$;
(3) for every permutation $\pi$ of the indices $(i, j, k)$: $\alpha_{\pi(i),\pi(j),\pi(k)} = \alpha_{i,j,k}$.

Proof.
1. For all $1 \le i, j \le d$, due to the axiom of Commutative Probability $[U_i, X_j] = [U_j, X_i]$, we have:
$$\sum_{k=1}^{d} \alpha_{i,j,k}X_k + \beta_{i,j}I = \sum_{k=1}^{d} \alpha_{j,i,k}X_k + \beta_{j,i}I.$$
Since $X_1, X_2, \ldots, X_d$, and $I$ are linearly independent, we conclude that for all $1 \le k \le d$, we have $\alpha_{i,j,k} = \alpha_{j,i,k}$.
2. For all $1 \le i, j, k \le d$, we can compute the joint moment $E[X_iX_jX_k]$ in two different ways. Indeed, we have:
$$E[X_iX_jX_k] = \langle (U_i + V_i)X_jX_k\phi, \phi\rangle = \langle U_iX_jX_k\phi, \phi\rangle + \langle X_jX_k\phi, U_i\phi\rangle = \langle X_jX_kU_i\phi, \phi\rangle + \langle [U_i, X_jX_k]\phi, \phi\rangle + 0.$$
Using now the Leibniz commutator rule, $[U_i, X_jX_k] = [U_i, X_j]X_k + X_j[U_i, X_k]$, we obtain:
$$E[X_iX_jX_k] = \left\langle \left(\sum_{l=1}^{d} \alpha_{i,j,l}X_l + \delta_{i,j}I\right)X_k\phi, \phi\right\rangle + \left\langle X_j\left(\sum_{l=1}^{d} \alpha_{i,k,l}X_l + \delta_{i,k}I\right)\phi, \phi\right\rangle$$
$$= \sum_{l=1}^{d} \alpha_{i,j,l}E[X_lX_k] + \sum_{l=1}^{d} \alpha_{i,k,l}E[X_jX_l] = \sum_{l=1}^{d} \alpha_{i,j,l}\delta_{l,k} + \sum_{l=1}^{d} \alpha_{i,k,l}\delta_{j,l} = \alpha_{i,j,k} + \alpha_{i,k,j}, \quad (9)$$
where the $\delta_{i,j}$ and $\delta_{i,k}$ terms vanish since $E[X_j] = E[X_k] = 0$. Permuting now the factors $X_i$, $X_j$, and $X_k$ inside the expectation, a similar computation shows that:
$$E[X_iX_jX_k] = E[X_kX_iX_j] = \alpha_{k,i,j} + \alpha_{k,j,i}. \quad (10)$$
Thus, from (9) and (10), it follows that:
$$\alpha_{i,j,k} + \alpha_{i,k,j} = \alpha_{k,i,j} + \alpha_{k,j,i},$$
which, combined with the fact, from part 1, that $\alpha_{i,k,j} = \alpha_{k,i,j}$, implies $\alpha_{i,j,k} = \alpha_{k,j,i}$.
3. Let $i$, $j$, and $k$ be fixed in $\{1, 2, \ldots, d\}$. Parts 1 and 2 imply that for all transpositions $\tau$ of $(i, j, k)$, we have $\alpha_{\tau(i),\tau(j),\tau(k)} = \alpha_{i,j,k}$. Since every permutation can be written as a product of transpositions, we conclude that for all permutations $\pi$ of $(i, j, k)$, we have $\alpha_{\pi(i),\pi(j),\pi(k)} = \alpha_{i,j,k}$. □

As a consequence of formula (9) we obtain:
Corollary 1.
For all $i$, $j$, and $k$ in $\{1, 2, \ldots, d\}$, we have:
$$E[X_iX_jX_k] = 2\alpha_{i,j,k}. \quad (11)$$

5. Moments estimates and Laplace transform
Let $(X_1, X_2, \ldots, X_d)$ be a $d$-dimensional random vector. Let $i = (i_1, i_2, \ldots, i_d) \in [\mathbb{N} \cup \{0\}]^d$. We introduce the following notations:
• $|i| := i_1 + i_2 + \cdots + i_d$, and call $|i|$ the length of $i$;
• $X^i := X_1^{i_1}X_2^{i_2}\cdots X_d^{i_d}$.
We have the following lemma.

Lemma 4.
Let $(X_1, X_2, \ldots, X_d)$ be a centered $d$-dimensional 1-Meixner random vector. Let $\{\alpha_{i,j,k}\}_{1 \le i,j,k \le d}$ and $\{\beta_{i,j}\}_{1 \le i,j \le d}$ be the coefficients that are used to express the commutators of the joint semi-annihilation operators and $X_1, X_2, \ldots, X_d$ as linear combinations of $X_1, X_2, \ldots, X_d$ and the identity operator $I$. Then for all $i = (i_1, i_2, \ldots, i_d) \in [\mathbb{N} \cup \{0\}]^d$, we have:
$$\left|E\left[X^i\right]\right| \leq K^{|i|} \cdot |i|!, \quad (12)$$
where $K := \max\{dA + B, 1\}$, for $A := \max\{|\alpha_{i,j,k}| \mid 1 \le i, j, k \le d\}$ and $B := \max\{|\beta_{i,j}| \mid 1 \le i, j \le d\}$, and
$$E\left[\left|X^i\right|\right] \leq (2K)^{|i|} \cdot |i|!. \quad (13)$$

Proof.
We will prove first (12) by induction on $l := |i|$. For $l = 0$, the inequality is obvious, since $|E[X^0]| = 1 \leq K^0 \cdot 0!$. Let us assume that inequality (12) is true for all multi-indexes $i \in [\mathbb{N} \cup \{0\}]^d$ of length $|i| \leq l$, and prove that it remains true for all multi-indexes of length $l + 1$. Let $i = (i_1, i_2, \ldots, i_d) \in [\mathbb{N} \cup \{0\}]^d$ be a multi-index of length $i_1 + i_2 + \cdots + i_d = l + 1$. Since $l + 1 \geq 1$, there exists $w \in \{1, 2, \ldots, d\}$ such that $i_w \geq 1$. Thus, the factor $X_w$ appears for sure in the product $X^i = X_1^{i_1}X_2^{i_2}\cdots X_d^{i_d}$. We have:
$$E\left[X^i\right] = \langle X^i\phi, \phi\rangle = \langle X_wX_1^{i_1}\cdots X_w^{i_w-1}\cdots X_d^{i_d}\phi, \phi\rangle = \langle (U_w + V_w)X_1^{i_1}\cdots X_w^{i_w-1}\cdots X_d^{i_d}\phi, \phi\rangle$$
$$= \langle U_wX_1^{i_1}\cdots X_w^{i_w-1}\cdots X_d^{i_d}\phi, \phi\rangle + \langle X_1^{i_1}\cdots X_w^{i_w-1}\cdots X_d^{i_d}\phi, U_w\phi\rangle = \langle U_wX_1^{i_1}\cdots X_w^{i_w-1}\cdots X_d^{i_d}\phi, \phi\rangle,$$
since $U_w\phi = \frac{1}{2}E[X_w]\phi = 0$ ($X_w$ is assumed to be centered). Let us define now the multi-index $j$, of length $|j| = |i| - 1$, by $j := (j_1, j_2, \ldots, j_d)$, where:
$$j_r := \begin{cases} i_r & \text{if } r \neq w \\ i_w - 1 & \text{if } r = w. \end{cases}$$
We commute $U_w$ with $X^j$ using the Leibniz commutator rule, and obtain:
$$E\left[X^i\right] = \langle U_wX_1^{j_1}\cdots X_w^{j_w}\cdots X_d^{j_d}\phi, \phi\rangle = \langle X_1^{j_1}\cdots X_w^{j_w}\cdots X_d^{j_d}U_w\phi, \phi\rangle + \langle [U_w, X_1^{j_1}\cdots X_w^{j_w}\cdots X_d^{j_d}]\phi, \phi\rangle$$
$$= \sum_{p=1}^{d}\sum_{q=1}^{j_p} \left\langle X_1^{j_1}\cdots X_{p-1}^{j_{p-1}}X_p^{q-1}[U_w, X_p]X_p^{j_p-q}X_{p+1}^{j_{p+1}}\cdots X_d^{j_d}\phi, \phi\right\rangle,$$
since $U_w\phi = 0$. Using the fact that:
$$[U_w, X_p] = \sum_{r=1}^{d} \alpha_{w,p,r}X_r + \beta_{w,p}I,$$
we obtain:
$$E\left[X^i\right] = \sum_{p=1}^{d}\sum_{q=1}^{j_p}\sum_{r=1}^{d} \alpha_{w,p,r}E\left[X_1^{j_1}\cdots X_p^{q-1}X_rX_p^{j_p-q}\cdots X_d^{j_d}\right] + \sum_{p=1}^{d}\sum_{q=1}^{j_p} \beta_{w,p}E\left[X_1^{j_1}\cdots X_p^{j_p-1}\cdots X_d^{j_d}\right]$$
$$= \sum_{p=1}^{d}\sum_{r=1}^{d} j_p\alpha_{w,p,r}E\left[X_rX_1^{j_1}\cdots X_p^{j_p-1}\cdots X_d^{j_d}\right] + \sum_{p=1}^{d} j_p\beta_{w,p}E\left[X_1^{j_1}\cdots X_p^{j_p-1}\cdots X_d^{j_d}\right]. \quad (14)$$
Since $X_rX_1^{j_1}\cdots X_p^{j_p-1}\cdots X_d^{j_d} = X^u$, for some multi-index $u$ with $|u| = l$, and $X_1^{j_1}\cdots X_p^{j_p-1}\cdots X_d^{j_d} = X^v$, for some multi-index $v$ of length $|v| = l - 1$, using the triangle inequality, the induction hypothesis, and the inequalities $(l-1)! \leq l!$ and $K^{l-1} \leq K^l$, we conclude from (14) that:
$$\left|E\left[X^i\right]\right| \leq \sum_{p=1}^{d}\sum_{r=1}^{d} j_p \cdot A \cdot K^l \cdot l! + \sum_{p=1}^{d} j_p \cdot B \cdot K^{l-1} \cdot (l-1)! \leq dAK^l \cdot l!\left(\sum_{p=1}^{d} j_p\right) + BK^l \cdot l!\left(\sum_{p=1}^{d} j_p\right)$$
$$\leq dAK^l \cdot l! \cdot (l+1) + BK^l \cdot l! \cdot (l+1) = K^l(dA + B) \cdot (l+1)! \leq K^{l+1} \cdot (l+1)!.$$
The proof of (12) is now complete. To prove (13), we use Jensen's inequality for the convex function $\varphi(t) = t^2$, and inequality (12). Thus, for all $i \in [\mathbb{N} \cup \{0\}]^d$, we have:
$$\left(E\left[\left|X^i\right|\right]\right)^2 \leq E\left[\left|X^i\right|^2\right] = E\left[X^{2i}\right] \leq K^{2|i|} \cdot (2|i|)! \leq K^{2|i|} \cdot 4^{|i|}(|i|!)^2, \quad (15)$$
due to the fact that for all $l \in \mathbb{N} \cup \{0\}$, we have:
$$\frac{(2l)!}{(l!)^2} = \binom{2l}{l} = \sum_{j=0}^{l}\binom{l}{j}^2 \leq \left(\sum_{j=0}^{l}\binom{l}{j}\right)^2 = \left(2^l\right)^2 = 4^l.$$
Taking the square root in both sides of (15), we obtain inequality (13). □
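Estimate (12) can be exercised symbolically in dimension $d = 1$. The sketch below assumes, as an example, a standardized Gamma variable $X = (Y - k)/\sqrt{k}$ with $Y \sim \mathrm{Gamma}(k, 1)$ and $k = 2$; for this normalization one can check that $\alpha_{1,1,1} = 1/\sqrt{k}$ and $\beta_{1,1} = 1$, so $K = 1 + 1/\sqrt{k}$:

```python
import sympy as sp

k = sp.Integer(2)           # assumed Gamma shape parameter
K = 1 + 1/sp.sqrt(k)        # K = max{dA + B, 1} with d = 1, A = 1/sqrt(k), B = 1

def EX(n):
    """E[X^n] for X = (Y - k)/sqrt(k), Y ~ Gamma(k, 1), using E[Y^m] = rf(k, m)."""
    s = sum(sp.binomial(n, m) * sp.rf(k, m) * (-k)**(n - m) for m in range(n + 1))
    return s / k**sp.Rational(n, 2)

# the bound |E[X^n]| <= K^n * n! of (12), checked for n = 0, 1, ..., 12
for n in range(13):
    assert abs(float(EX(n))) <= float(K**n * sp.factorial(n))
```

The exact moments grow far more slowly than the bound, which is expected: (12) only needs to be strong enough to make the exponential series of the Laplace transform converge on a neighborhood of the origin.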
Because of the above estimates, we have the following:
Lemma 5. If $X = (X_1, X_2, \dots, X_d)$ is a $d$-dimensional $1$-Meixner random vector, then the Laplace transform of $X$,
\[ \varphi(t) = E[\exp(t \cdot X)], \]
is well defined and twice differentiable, with continuous second order partial derivatives, on a neighborhood $V$ of $0 = (0, 0, \dots, 0) \in \mathbb{R}^d$.

Proof. Let $V := \{ t \in \mathbb{R}^d \mid \|t\|_\infty < R \}$, where for all $t = (t_1, t_2, \dots, t_d) \in \mathbb{R}^d$ we define $\|t\|_\infty := \max\{|t_1|, |t_2|, \dots, |t_d|\}$, and $R := 1/(2Kd)$, where $K$ is the constant from the previous lemma. For all $n \in \mathbb{N}$, we define $\mathcal{P}_n := \{ \sigma : \{1, 2, \dots, n\} \to \{1, 2, \dots, d\} \}$. Then, for all $t \in V$, using inequality (13), we have:
\begin{align*}
\sum_{n=0}^\infty \frac{E[|t \cdot X|^n]}{n!} &\le 1 + \sum_{n=1}^\infty \sum_{\sigma \in \mathcal{P}_n} \frac{E\left[ |t_{\sigma(1)}||X_{\sigma(1)}| \, |t_{\sigma(2)}||X_{\sigma(2)}| \cdots |t_{\sigma(n)}||X_{\sigma(n)}| \right]}{n!} \\
&\le 1 + \sum_{n=1}^\infty \sum_{\sigma \in \mathcal{P}_n} \|t\|_\infty^n \, \frac{E\left[ |X_{\sigma(1)}| |X_{\sigma(2)}| \cdots |X_{\sigma(n)}| \right]}{n!} \\
&\le 1 + \sum_{n=1}^\infty \sum_{\sigma \in \mathcal{P}_n} \|t\|_\infty^n \, \frac{2^n K^n n!}{n!} \\
&= \sum_{n=0}^\infty 2^n K^n d^n \|t\|_\infty^n = \sum_{n=0}^\infty \left( \frac{\|t\|_\infty}{R} \right)^n = \frac{R}{R - \|t\|_\infty} < \infty.
\end{align*}
The Monotone Convergence Theorem now implies that:
\[ E[\exp(|t \cdot X|)] = E\left[ \sum_{n=0}^\infty \frac{|t \cdot X|^n}{n!} \right] = \sum_{n=0}^\infty \frac{E[|t \cdot X|^n]}{n!} < \infty. \]
Thus, we have:
\[ \varphi(t) = E[\exp(t \cdot X)] \le E[\exp(|t \cdot X|)] < \infty. \]
Therefore, the Laplace transform of $X$, $\varphi$, is well defined on $V$. In the same way, we can see that $\varphi$ is infinitely differentiable on $V$, and each derivative can be computed term by term using the exponential series. $\Box$

We now find a system of partial differential equations satisfied by the Laplace transform $\varphi$ of $X$. We have the following lemma.

Lemma 6.
Let $X = (X_1, X_2, \dots, X_d)$ be a non-degenerate $d$-dimensional $1$-Meixner random vector. Let $\{\alpha_{i,j,k}\}_{1 \le i,j,k \le d}$ and $\{\beta_{i,j}\}_{1 \le i,j \le d}$ be the coefficients used to express the commutators of the semi-annihilation operators and the components of $X$ as linear combinations of $X_1, X_2, \dots, X_d$. We also assume that $X_1, X_2, \dots, X_d$ form an orthonormal set of centered random variables in $L^2$ (which, as we saw before, can be achieved via a translation and an invertible linear transformation). Thus, for all $1 \le i, j \le d$, $\beta_{i,j} = \delta_{i,j}$ (the Kronecker symbol). Then the Laplace transform of $X$, defined as
\[ \varphi(t_1, t_2, \dots, t_d) := E[\exp(t_1 X_1 + t_2 X_2 + \cdots + t_d X_d)], \]
for all $t = (t_1, t_2, \dots, t_d)$ in a neighborhood $V$ of $0 = (0, 0, \dots, 0)$, satisfies the following system of differential equations:
\[ \frac{\partial \varphi}{\partial t_i} = \sum_{1 \le j,k \le d} \alpha_{i,j,k} \, t_j \, \frac{\partial \varphi}{\partial t_k} + t_i \varphi, \qquad i = 1, 2, \dots, d. \]

Proof.
As we saw in the previous lemma, there exists a neighborhood $V$ of $0$ on which the Laplace transform $\varphi$ is defined and infinitely many times differentiable. Moreover, on that neighborhood the differentiation can be carried out term by term in the Taylor series of the exponential function, and the differentiation can be interchanged with the expectation. Let $i \in \{1, 2, \dots, d\}$ be fixed. For all $t := (t_1, t_2, \dots, t_d) \in V$, we have:
\begin{align*}
\frac{\partial \varphi}{\partial t_i}(t) &= E[X_i \exp(t \cdot X)] = \sum_{n=0}^\infty \frac{1}{n!} E[X_i (t \cdot X)^n] = \sum_{n=0}^\infty \frac{1}{n!} \left\langle (U_i + V_i)(t \cdot X)^n, 1 \right\rangle \\
&= \sum_{n=0}^\infty \frac{1}{n!} \left\langle U_i (t \cdot X)^n, 1 \right\rangle + \sum_{n=0}^\infty \frac{1}{n!} \left\langle (t \cdot X)^n, U_i 1 \right\rangle = \sum_{n=0}^\infty \frac{1}{n!} \left\langle U_i (t \cdot X)^n, 1 \right\rangle,
\end{align*}
since $U_i 1 = E[X_i] 1 = 0$. We commute now $U_i$ and $(t \cdot X)^n$, using the Leibniz commutation rule; the term $\langle (t \cdot X)^n U_i, 1 \rangle$ vanishes for the same reason, and we obtain:
\begin{align*}
\frac{\partial \varphi}{\partial t_i}(t) &= \sum_{n=0}^\infty \frac{1}{n!} \left\langle [U_i, (t \cdot X)^n] \, 1, 1 \right\rangle \\
&= \sum_{n=0}^\infty \frac{1}{n!} \sum_{p=1}^n \left\langle (t \cdot X)^{p-1} \, [U_i, t \cdot X] \, (t \cdot X)^{n-p}, 1 \right\rangle \\
&= \sum_{n=0}^\infty \frac{1}{n!} \sum_{p=1}^n \left\langle (t \cdot X)^{p-1} \sum_{j=1}^d t_j \left( \sum_{k=1}^d \alpha_{i,j,k} X_k + \delta_{i,j} I \right) (t \cdot X)^{n-p}, 1 \right\rangle.
\end{align*}
Since the operators involved are multiplication operators, which commute, each of the $n$ inner terms is the same, and we get:
\begin{align*}
\frac{\partial \varphi}{\partial t_i}(t) &= \sum_{j=1}^d \sum_{k=1}^d \alpha_{i,j,k} \, t_j \sum_{n=1}^\infty \frac{1}{(n-1)!} E\left[ X_k (t \cdot X)^{n-1} \right] + t_i \sum_{n=1}^\infty \frac{1}{(n-1)!} E\left[ (t \cdot X)^{n-1} \right] \\
&= \sum_{j=1}^d \sum_{k=1}^d \alpha_{i,j,k} \, t_j \, E[X_k \exp(t \cdot X)] + t_i \, E[\exp(t \cdot X)] \\
&= \sum_{j=1}^d \sum_{k=1}^d \alpha_{i,j,k} \, t_j \, \frac{\partial \varphi}{\partial t_k}(t) + t_i \varphi(t).
\end{align*}
The proof of this lemma is now complete. $\Box$
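As a sanity check on Lemma 6, the $d = 1$ instance of the system can be verified symbolically. For a single standardized variable with $[U, X] = \alpha X + I$, the system reduces to $\varphi'(t) = \alpha t \, \varphi'(t) + t \varphi(t)$, whose solution with $\varphi(0) = 1$ is $\varphi(t) = e^{-t/\alpha}(1 - \alpha t)^{-1/\alpha^2}$, the Laplace transform of a standardized Gamma-type variable. The sketch below, with the sample value $\alpha = 1/2$, is an illustration and not part of the proof:

```python
import sympy as sp

t = sp.symbols('t')
alpha = sp.Rational(1, 2)  # sample nonzero value of the coefficient alpha

# Candidate Laplace transform of a standardized 1-Meixner variable
# with commutator [U, X] = alpha*X + I:
phi = sp.exp(-t / alpha) * (1 - alpha * t) ** (-1 / alpha**2)

# The d = 1 version of the system: phi' = alpha * t * phi' + t * phi.
residual = sp.diff(phi, t) - (alpha * t * sp.diff(phi, t) + t * phi)
assert sp.simplify(residual) == 0
```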
At this point, it would be desirable to integrate the system in Lemma 6. If this is achieved, then inverting the Laplace transform will produce the probability distribution of the 1-Meixner random vector $(X_1, X_2, \dots, X_d)$. While this seems to be a challenging task in general, in the following two sections we present results in this direction. In the next section we derive an important necessary condition for the integrability of the system, valid in any dimension $d \ge 2$; the section after that treats the case $d = 3$.

6. Necessary conditions for integrability
In this section we find a set of necessary conditions for the integrability of the system from Lemma 6. Provided the system has a smooth solution $\varphi$, then $\varphi$ is positive on a neighborhood $V$ of $0$, since:
\[ \varphi(0) = E[\exp(0 \cdot X)] = 1. \]
We introduce the following notations: we denote by $\langle \cdot, \cdot \rangle$ the inner product with respect to the original probability $P$, and by $\cdot$ the standard inner product in $\mathbb{R}^d$:
\[ (x_1', x_2', \dots, x_d') \cdot (x_1'', x_2'', \dots, x_d'') := x_1' x_1'' + x_2' x_2'' + \cdots + x_d' x_d''. \]
We have the following result:
Lemma 7.
Let $\{\alpha_{i,j,k}\}_{1 \le i,j,k \le d}$ be real numbers such that, for every $(i,j,k) \in \{1, 2, \dots, d\}^3$ and every permutation $\pi$ of $\{i, j, k\}$, we have $\alpha_{\pi(i),\pi(j),\pi(k)} = \alpha_{i,j,k}$. If the system of partial differential equations:
\[ \frac{\partial \varphi}{\partial t_i} = \sum_{j,k} \alpha_{i,j,k} \, t_j \, \frac{\partial \varphi}{\partial t_k} + t_i \varphi, \qquad i = 1, 2, \dots, d, \tag{16} \]
has a solution $\varphi$ of class $C^2$, defined on a neighborhood $V$ of $0 = (0, 0, \dots, 0)$, such that $\varphi(0) \neq 0$, then for all $(i,j) \in \{1, 2, \dots, d\}^2$ and $t = (t_1, t_2, \dots, t_d)$ in a neighborhood of $0$, we have:
\[ (C_{i,j} \, t) \cdot \left( (I - t_1 A_1 - t_2 A_2 - \cdots - t_d A_d)^{-1} t \right) = 0, \tag{17} \]
where, for all $k \in \{1, 2, \dots, d\}$, we define the $d \times d$ matrix
\[ A_k := (\alpha_{k,r,s})_{1 \le r,s \le d}, \tag{18} \]
\[ C_{i,j} := [A_i, A_j] \tag{19} \]
is the commutator of $A_i$ and $A_j$, and $I$ is the $d \times d$ identity matrix.

Proof. For $i = j$, $C_{i,j} = [A_i, A_i] = 0$, and formula (17) is obvious. So, we may assume that $i \neq j$. Let us differentiate both sides of the $i$-th equation of the system (16) with respect to the variable $t_j$. We have:
\[ \frac{\partial}{\partial t_j} \frac{\partial \varphi}{\partial t_i} = \frac{\partial}{\partial t_j} \left( \sum_{p,q} \alpha_{i,p,q} \, t_p \, \frac{\partial \varphi}{\partial t_q} + t_i \varphi \right). \tag{20} \]
This is equivalent to:
\[ \frac{\partial^2 \varphi}{\partial t_j \partial t_i} = \sum_q \alpha_{i,j,q} \frac{\partial \varphi}{\partial t_q} + \sum_{p,q} \alpha_{i,p,q} \, t_p \, \frac{\partial^2 \varphi}{\partial t_j \partial t_q} + t_i \frac{\partial \varphi}{\partial t_j}. \tag{21} \]
Applying now Young's theorem on mixed derivatives,
\[ \frac{\partial^2 \varphi}{\partial t_j \partial t_q} = \frac{\partial^2 \varphi}{\partial t_q \partial t_j}, \tag{22} \]
for all $q \in \{1, 2, \dots, d\}$, and using the $j$-th equation of the system (16), we obtain:
\begin{align}
\frac{\partial^2 \varphi}{\partial t_j \partial t_i} &= \sum_q \alpha_{i,j,q} \frac{\partial \varphi}{\partial t_q} + \sum_{p,q} \alpha_{i,p,q} \, t_p \, \frac{\partial}{\partial t_q} \left[ \sum_{r,s} \alpha_{j,r,s} \, t_r \, \frac{\partial \varphi}{\partial t_s} + t_j \varphi \right] + t_i \frac{\partial \varphi}{\partial t_j} \nonumber \\
&= \sum_q \alpha_{i,j,q} \frac{\partial \varphi}{\partial t_q} + \sum_{p,q,s} \alpha_{i,p,q} \alpha_{j,q,s} \, t_p \, \frac{\partial \varphi}{\partial t_s} + \sum_{p,q,r,s} \alpha_{i,p,q} \alpha_{j,r,s} \, t_p t_r \, \frac{\partial^2 \varphi}{\partial t_q \partial t_s} \nonumber \\
&\quad + \sum_p \alpha_{i,p,j} \, t_p \varphi + \sum_{p,q} \alpha_{i,p,q} \, t_p t_j \, \frac{\partial \varphi}{\partial t_q} + t_i \frac{\partial \varphi}{\partial t_j}. \tag{23}
\end{align}
Switching the roles of $i$ and $j$, we obtain similarly the analogous expansion (24) of $\partial^2 \varphi / (\partial t_i \partial t_j)$. Since $\partial^2 \varphi / (\partial t_i \partial t_j) = \partial^2 \varphi / (\partial t_j \partial t_i)$, and since $\sum_q \alpha_{i,j,q} \partial\varphi/\partial t_q = \sum_q \alpha_{j,i,q} \partial\varphi/\partial t_q$, the two quadruple sums coincide after relabeling, and $\sum_p \alpha_{i,p,j} t_p \varphi = \sum_p \alpha_{j,p,i} t_p \varphi$, we conclude that:
\[ \sum_{p,q,s} \alpha_{i,p,q} \alpha_{j,q,s} \, t_p \, \frac{\partial \varphi}{\partial t_s} + \sum_{p,q} \alpha_{i,p,q} \, t_p t_j \, \frac{\partial \varphi}{\partial t_q} + t_i \frac{\partial \varphi}{\partial t_j} = \sum_{p,q,s} \alpha_{j,p,q} \alpha_{i,q,s} \, t_p \, \frac{\partial \varphi}{\partial t_s} + \sum_{p,q} \alpha_{j,p,q} \, t_p t_i \, \frac{\partial \varphi}{\partial t_q} + t_j \frac{\partial \varphi}{\partial t_i}. \tag{25} \]
Note that from the system (16) we have
\[ \sum_{p,q} \alpha_{i,p,q} \, t_p t_j \, \frac{\partial \varphi}{\partial t_q} = t_j \left( \frac{\partial \varphi}{\partial t_i} - t_i \varphi \right), \]
and similarly with $i$ and $j$ interchanged. Substituting these into (25), the remaining first-order and zero-order terms cancel, and we are left with:
\[ \sum_{p,q,s} \alpha_{i,p,q} \alpha_{j,q,s} \, t_p \, \frac{\partial \varphi}{\partial t_s} = \sum_{p,q,s} \alpha_{j,p,q} \alpha_{i,q,s} \, t_p \, \frac{\partial \varphi}{\partial t_s}, \]
that is:
\[ \sum_{p,s} t_p \, (A_i A_j)_{p,s} \, \frac{\partial \varphi}{\partial t_s} = \sum_{p,s} t_p \, (A_j A_i)_{p,s} \, \frac{\partial \varphi}{\partial t_s}. \]
This means:
\[ t \cdot A_i A_j \nabla \varphi(t) = t \cdot A_j A_i \nabla \varphi(t), \]
which, due to the fact that both $A_i$ and $A_j$ are self-adjoint (symmetric) matrices, is equivalent to:
\[ A_j A_i \, t \cdot \nabla \varphi(t) = A_i A_j \, t \cdot \nabla \varphi(t). \]
This last equation is equivalent to:
\[ [A_i, A_j] \, t \cdot \nabla \varphi(t) = 0, \tag{26} \]
for all $t$ in a neighborhood $V$ of $0$. Since our system of partial differential equations (16) is equivalent to:
\[ \nabla \varphi(t) = (I - t_1 A_1 - t_2 A_2 - \cdots - t_d A_d)^{-1} \varphi(t) \, t, \]
equation (26) is equivalent to:
\[ [A_i, A_j] \, t \cdot (I - t_1 A_1 - t_2 A_2 - \cdots - t_d A_d)^{-1} \varphi(t) \, t = 0, \]
and since $\varphi(0) \neq 0$, we can divide the equation by $\varphi(t)$ on a neighborhood $V$ of $0$, and conclude that:
\[ [A_i, A_j] \, t \cdot (I - t_1 A_1 - t_2 A_2 - \cdots - t_d A_d)^{-1} t = 0. \]
Thus, the Lemma is proved. $\Box$
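Condition (17) can also be probed numerically. The sketch below uses the canonical $d = 3$ matrices that appear later in the paper (formula (41) with $b = a$); the numeric value of $a$ is an arbitrary sample, and the check is illustrative only:

```python
import numpy as np

a = 0.7  # arbitrary nonzero sample coefficient; b = a (canonical case)
A1 = np.array([[0, 0, a], [0, 0, 0], [a, 0, 0]])
A2 = np.array([[0, 0, 0], [0, 0, a], [0, a, 0]])
A3 = np.array([[a, 0, 0], [0, a, 0], [0, 0, a]])
C12 = A1 @ A2 - A2 @ A1            # a nonzero commutator
assert np.linalg.norm(C12) > 0

rng = np.random.default_rng(0)
for _ in range(100):
    t = rng.uniform(-0.2, 0.2, size=3)      # t near 0, so I - B(t) is invertible
    B = t[0] * A1 + t[1] * A2 + t[2] * A3
    v = np.linalg.solve(np.eye(3) - B, t)   # (I - B)^{-1} t
    assert abs(C12 @ t @ v) < 1e-12         # condition (17)
```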
Proposition 3.
The necessary condition (17) is equivalent to:
\[ (C_{i,j} \, t) \cdot \left( (t_1 A_1 + t_2 A_2 + \cdots + t_d A_d)^n \, t \right) = 0, \tag{27} \]
for all $n \in \mathbb{N} \cup \{0\}$ and all $t = (t_1, t_2, \dots, t_d) \in \mathbb{R}^d$, which in turn is equivalent to:
\[ (C_{i,j} \, t) \cdot \left( (t_1 A_1 + t_2 A_2 + \cdots + t_d A_d)^n \, t \right) = 0, \tag{28} \]
for all $1 \le n \le d - 1$ and all $t = (t_1, t_2, \dots, t_d) \in \mathbb{R}^d$.

Proof. Since, for all $t = (t_1, t_2, \dots, t_d)$ in a neighborhood $V$ of $0 = (0, 0, \dots, 0)$, we have:
\[ (I - t_1 A_1 - t_2 A_2 - \cdots - t_d A_d)^{-1} = \sum_{n=0}^\infty (t_1 A_1 + t_2 A_2 + \cdots + t_d A_d)^n, \tag{29} \]
equation (17) becomes:
\[ \sum_{n=0}^\infty (C_{i,j} \, t) \cdot \left( (t_1 A_1 + t_2 A_2 + \cdots + t_d A_d)^n \, t \right) = 0. \tag{30} \]
Due to the fact that, for all $n \ge 0$, $(C_{i,j} \, t) \cdot (t_1 A_1 + \cdots + t_d A_d)^n \, t$ is a homogeneous polynomial of degree $n + 2$ in the variables $t_1, t_2, \dots, t_d$, we conclude that, for all $n \ge 0$:
\[ (C_{i,j} \, t) \cdot \left( (t_1 A_1 + t_2 A_2 + \cdots + t_d A_d)^n \, t \right) = 0, \]
for $t$ not only in a neighborhood $V$ of $0$, but in the whole space $\mathbb{R}^d$. For a fixed $t = (t_1, t_2, \dots, t_d) \in \mathbb{R}^d$, by the Cayley–Hamilton theorem, the matrix
\[ A_t := t_1 A_1 + t_2 A_2 + \cdots + t_d A_d \]
satisfies its own characteristic equation:
\[ \det(xI - A_t) = 0, \tag{31} \]
which is a polynomial equation of degree $d$:
\[ x^d + c_{d-1} x^{d-1} + \cdots + c_1 x + c_0 = 0, \tag{32} \]
for some real numbers $c_0, c_1, \dots, c_{d-1}$. It follows from here that each of the matrices $A_t^d, A_t^{d+1}, A_t^{d+2}, \dots$ is a linear combination of $I, A_t, A_t^2, \dots, A_t^{d-1}$. Thus, the condition that $(C_{i,j} t) \cdot (A_t^n t) = 0$ for all $n \ge 0$ is equivalent to the same condition holding for all $1 \le n \le d - 1$. Note that for $n = 0$, due to the fact that $C_{i,j} = [A_i, A_j]$ is skew-symmetric, we automatically have $(C_{i,j} t) \cdot t = 0$. $\Box$

For $n = 1$, equation (27) becomes:
\[ (C_{i,j} \, t) \cdot \left( (t_1 A_1 + t_2 A_2 + \cdots + t_d A_d) \, t \right) = 0. \tag{33} \]

Proposition 4.
For any $\xi \in \mathbb{R}^d$, the homogeneous cubic polynomial
\[ F(t) := \sum_{i,j,k} \alpha_{i,j,k} \, t_i t_j t_k \tag{34} \]
is constant along the points of the curve $t(s, \xi) := \exp(sC_{i,j}) \, \xi$, which satisfies the initial value problem:
\[ \frac{d}{ds} t(s, \xi) = C_{i,j} \, t(s, \xi), \qquad t(0, \xi) = \xi. \tag{35} \]

Proof.
Indeed, for any $i \in \{1, 2, \dots, d\}$, we have:
\begin{align}
\frac{\partial F}{\partial t_i}(t) &= \sum_{p,q,r} \alpha_{p,q,r} \frac{\partial}{\partial t_i}(t_p t_q t_r) = \sum_{p,q,r} \alpha_{p,q,r} \left( \delta_{i,p} t_q t_r + t_p \delta_{i,q} t_r + t_p t_q \delta_{i,r} \right) \nonumber \\
&= \sum_{q,r} \alpha_{i,q,r} t_q t_r + \sum_{p,r} \alpha_{p,i,r} t_p t_r + \sum_{p,q} \alpha_{p,q,i} t_p t_q = 3 \sum_{j,k} \alpha_{i,j,k} t_j t_k, \tag{36}
\end{align}
since $\alpha_{u,v,w} = \alpha_{\pi(u),\pi(v),\pi(w)}$, for all $(u,v,w) \in \{1, 2, \dots, d\}^3$ and any permutation $\pi$ of the triplet $(u, v, w)$. Using this symmetry once more, $3 \sum_{j,k} \alpha_{i,j,k} t_j t_k = 3 \sum_j t_j (A_j t)_i$, so that $\nabla F(t) = 3 (t_1 A_1 + t_2 A_2 + \cdots + t_d A_d) t$. It follows from here that:
\begin{align}
\frac{d}{ds} F(t(s, \xi)) &= \nabla F(t(s, \xi)) \cdot \frac{d}{ds} t(s, \xi) = \nabla F(t(s, \xi)) \cdot C_{i,j} \, t(s, \xi) \nonumber \\
&= 3 \left( (t_1 A_1 + t_2 A_2 + \cdots + t_d A_d) \, t \right) \cdot C_{i,j} \, t \, \Big|_{t = t(s,\xi)} = 0, \tag{37}
\end{align}
by equation (33). $\Box$

Proposition 5.
For all $t = (t_1, t_2, \dots, t_d) \in \mathbb{R}^d$, we have:
\[ F(t) = \frac{1}{2} E\left[ (t \cdot X)^3 \right], \]
where $X := (X_1, X_2, \dots, X_d)$.

Proof. Indeed, using formula (11), we have:
\begin{align*}
F(t) = \sum_{i,j,k} \alpha_{i,j,k} \, t_i t_j t_k &= \frac{1}{2} \sum_{i,j,k} E[X_i X_j X_k] \, t_i t_j t_k = \frac{1}{2} E\left[ \left( \sum_i t_i X_i \right) \left( \sum_j t_j X_j \right) \left( \sum_k t_k X_k \right) \right] \\
&= \frac{1}{2} E\left[ (t \cdot X)^3 \right]. \qquad \Box
\end{align*}

The case d = 3

In this section we restrict our attention to the case $d = 3$. We give a complete description of the non-degenerate 3-dimensional 1-Meixner random vectors, which is contained in Theorem 1, stated at the end of the section. It turns out that, unlike the case $d = 2$, in $\mathbb{R}^3$ there exist 1-Meixner random vectors whose components cannot be reduced to independent one-dimensional random variables. We distinguish between two cases:

Case I.
At least one of the commutators $[A_1, A_2]$, $[A_1, A_3]$, or $[A_2, A_3]$ is non-zero.

Lemma 8.
If any one of the commutators $[A_1, A_2]$, $[A_1, A_3]$, or $[A_2, A_3]$ is non-zero, then there exists an orthogonal matrix $U$ such that the cubic form
\[ F_X(t) := \sum_{i,j,k} \alpha_{i,j,k} \, t_i t_j t_k \]
is written in the canonical form
\[ F_X(U^{-1} s) = F_{UX}(s) = 3a s_3 (s_1^2 + s_2^2) + a s_3^3, \]
for some $a \neq 0$.

Proof. Without loss of generality we may assume that $C = C_{1,2} := [A_1, A_2] \neq 0$. Since $C$ is a skew-symmetric real matrix, it must have a non-zero purely imaginary eigenvalue $i\lambda$. Then there exists an orthonormal basis $\{f_1, f_2, f_3\}$ of $\mathbb{R}^3$ such that:
\[ Cf_1 = \lambda f_2, \qquad Cf_2 = -\lambda f_1, \qquad Cf_3 = 0. \]
Let $\xi := x f_1 + y f_2 + z f_3 \in \mathbb{R}^3$, where $x$, $y$, and $z$ are fixed real numbers. In the basis $\{f_1, f_2, f_3\}$, the curve (level curve for $F_X$) described in Proposition 4 is:
\begin{align*}
t(s, \xi) = \exp(sC)\xi &= \sum_{n=0}^\infty \frac{s^n}{n!} C^n (x f_1 + y f_2 + z f_3) = z f_3 + \sum_{n=0}^\infty \frac{s^n}{n!} C^n (x f_1 + y f_2) \\
&= z f_3 + x \sum_{n=0}^\infty \frac{(-1)^n \lambda^{2n} s^{2n}}{(2n)!} f_1 + y \sum_{n=0}^\infty \frac{(-1)^n \lambda^{2n} s^{2n}}{(2n)!} f_2 \\
&\quad + x \sum_{n=0}^\infty \frac{(-1)^n \lambda^{2n+1} s^{2n+1}}{(2n+1)!} f_2 - y \sum_{n=0}^\infty \frac{(-1)^n \lambda^{2n+1} s^{2n+1}}{(2n+1)!} f_1 \\
&= z f_3 + [x \cos(\lambda s) - y \sin(\lambda s)] f_1 + [x \sin(\lambda s) + y \cos(\lambda s)] f_2.
\end{align*}
The right-hand side in the last formula represents the parametric equation of a circle centered at $z f_3$, sitting in a plane perpendicular to $f_3$, of radius $r = \sqrt{x^2 + y^2}$. Therefore, by Proposition 4, $F_X$ is constant on all circles that are centered at a point of the axis $\mathbb{R} f_3$ and sit in a plane perpendicular to $f_3$; in other words, $F_X$ is rotationally invariant about $f_3$. Now, let us consider the orthogonal transformation $U$ that maps the basis $f_1$, $f_2$, $f_3$ into the standard basis $e_1 = (1, 0, 0)$, $e_2 = (0, 1, 0)$, $e_3 = (0, 0, 1)$:
\[ Uf_1 = e_1, \qquad Uf_2 = e_2, \qquad Uf_3 = e_3. \]
Since $U$ is an orthogonal transformation, it preserves the standard inner product in $\mathbb{R}^3$. By Proposition 5 we have:
\[ F_X(t) = \frac{1}{2} E\left[ (t \cdot X)^3 \right] = \frac{1}{2} E\left[ (Ut \cdot UX)^3 \right] = F_{UX}(Ut) = \sum_{i,j,k} \beta_{i,j,k} \, s_i s_j s_k, \]
where $s = (s_1, s_2, s_3) := Ut$ and the $\beta_{i,j,k}$ are the coefficients $\alpha_{i,j,k}$ that correspond to the new non-degenerate 1-Meixner random vector $X' := UX$.
Let us consider a circle of the form:
\[ \mathcal{C}_{e_3}(c, r) := \{ s_1 e_1 + s_2 e_2 + s_3 e_3 \mid s_1^2 + s_2^2 = r^2, \ s_3 = c \}, \tag{38} \]
for some fixed numbers $c$ and $r$. For any $s \in \mathcal{C}_{e_3}(c, r)$, we have:
\[ F_{X'}(s) = F_{UX}(U(U^{-1}s)) = F_X(U^{-1}s) = F_X(s_1 f_1 + s_2 f_2 + s_3 f_3) = \text{constant}, \tag{39} \]
since $F_X$ is constant along circles of the form:
\[ \mathcal{C}_{f_3}(c, r) := \{ s_1 f_1 + s_2 f_2 + s_3 f_3 \mid s_1^2 + s_2^2 = r^2, \ s_3 = c \}. \tag{40} \]
Thus $F_{X'} = F_{UX}$ is constant along the circles $\mathcal{C}_{e_3}(c, r)$, on which $s_3$ is constant and $s_1^2 + s_2^2$ is constant. Because $F_{X'}$ is a third-degree homogeneous polynomial in the variables $s_1$, $s_2$, $s_3$, we must have:
\[ F_{X'}(s) = 3a(s_1^2 + s_2^2)s_3 + b s_3^3, \]
for some real numbers $a$ and $b$. We argue that we cannot have $a = 0$. Indeed, if this were the case, then $F_{X'}(s) = b s_3^3$, and so $F_X(t) = b (u \cdot t)^3$, where $u = (u_1, u_2, u_3)$ is the third row of $U$ (this also means that $u = f_3$). Therefore, $\alpha_{i,j,k} = b u_i u_j u_k$, for all $(i,j,k) \in \{1, 2, 3\}^3$. However, this implies that either one (or both) of the matrices $A_1$ and $A_2$ is zero, or else they have proportional entries. In either case, the commutator $[A_1, A_2] = 0$, which contradicts our hypothesis. Since we have $F_{X'}(s) = \sum_{i,j,k} \beta_{i,j,k} s_i s_j s_k$, identifying the coefficients of $s_i s_j s_k$, for all possible values of $i$, $j$, and $k$ in $\{1, 2, 3\}$, we obtain:
\[ \beta_{i,j,k} := \begin{cases} a & \text{if } (i,j,k) \text{ is a permutation of } (1,1,3) \text{ or } (2,2,3) \\ b & \text{if } i = j = k = 3 \\ 0 & \text{otherwise}. \end{cases} \]
This implies that the matrices corresponding to the new three-dimensional 1-Meixner random vector $X' = (X_1', X_2', X_3')$ are:
\[ A_1' = \begin{pmatrix} 0 & 0 & a \\ 0 & 0 & 0 \\ a & 0 & 0 \end{pmatrix}, \quad A_2' = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & a \\ 0 & a & 0 \end{pmatrix}, \quad A_3' = \begin{pmatrix} a & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & b \end{pmatrix}. \tag{41} \]
Therefore
\[ C'_{1,2} = [A_1', A_2'] = \begin{pmatrix} 0 & a^2 & 0 \\ -a^2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \neq 0, \]
as $a \neq 0$. On the other hand, consider the commutator
\[ C'_{2,3} = [A_2', A_3'] = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & a(b-a) \\ 0 & -a(b-a) & 0 \end{pmatrix}. \]
Applying equation (33) to $C'_{2,3}$ and $t := e_2 = (0, 1, 0)$, we obtain:
\[ (C'_{2,3} \, e_2) \cdot \left( (0 \cdot A_1' + 1 \cdot A_2' + 0 \cdot A_3') \, e_2 \right) = 0. \tag{42} \]
Since $C'_{2,3} e_2 = (0, 0, -a(b-a)) = -a(b-a) e_3$ and $A_2' e_2 = (0, 0, a) = a e_3$, equation (42) becomes:
\[ -a^2 (b - a) = 0. \tag{43} \]
Because $a \neq 0$, we conclude from (43) that $b = a$, which also implies $C'_{2,3} = C'_{1,3} = 0$. Therefore, the proof of this lemma is concluded. $\Box$

In the next lemma we integrate the system (16) in the particular case of a canonical random vector (with associated cubic form as in Lemma 8). Consequently, we produce an explicit formula for the Laplace transform of its probability distribution.
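The two conclusions of Lemma 8 are easy to confirm numerically: with $b = a$ only the commutator $[A_1', A_2']$ survives, and the cubic form is constant along the rotation orbits $s \mapsto \exp(sC'_{1,2})\xi$. The sketch below uses an arbitrary sample value of $a$ and is illustrative only:

```python
import numpy as np
from scipy.linalg import expm

a, b = 1.3, 1.3   # Lemma 8 forces b = a; any nonzero sample value of a works
A1 = np.array([[0, 0, a], [0, 0, 0], [a, 0, 0]])
A2 = np.array([[0, 0, 0], [0, 0, a], [0, a, 0]])
A3 = np.array([[a, 0, 0], [0, a, 0], [0, 0, b]])

C12 = A1 @ A2 - A2 @ A1
C13 = A1 @ A3 - A3 @ A1
C23 = A2 @ A3 - A3 @ A2
assert np.linalg.norm(C12) > 0                     # rotation generator, scale a^2
assert np.allclose(C13, 0) and np.allclose(C23, 0)

# F is constant along the orbits s -> exp(s * C12) xi (circles about the e3 axis).
def F(t):
    return 3 * a * t[2] * (t[0] ** 2 + t[1] ** 2) + b * t[2] ** 3

xi = np.array([0.3, -1.1, 0.8])
values = [F(expm(s * C12) @ xi) for s in np.linspace(0.0, 5.0, 11)]
assert np.allclose(values, F(xi))
```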
Lemma 9.
Let $X'$ be a random vector with associated cubic form $F_{X'}(s) = 3a s_3 (s_1^2 + s_2^2) + a s_3^3$, for some $a \neq 0$. Then the Laplace transform $\varphi(s) = E[\exp(s \cdot X')]$ is given by the formula
\[ \varphi(s) = e^{-(1/a)s_3} \left( -a^2 s_1^2 - a^2 s_2^2 + (1 - a s_3)^2 \right)^{-1/(2a^2)}, \tag{44} \]
which is analytic in the interior of the cone
\[ D := \left\{ s \in \mathbb{R}^3 \ \middle| \ |a| \sqrt{s_1^2 + s_2^2} < 1 - a s_3 \right\}. \]

Proof. Since $\varphi(0) = 1 \neq 0$ and $\varphi$ is continuous at $0$, we can divide both sides of each equation of the system (16) by $\varphi(s)$, for $s$ in a neighborhood $V$ of $0$, and conclude that the logarithm of the joint Laplace transform of $X_1', X_2', X_3'$,
\[ \psi(s) := \ln \varphi(s), \]
satisfies the system of partial differential equations, written in matrix form as:
\[ \nabla \psi(s) = M(s) \, s, \]
for all $s \in V$, where $M(s) := (I - B(s))^{-1}$ and $B(s) := s_1 A_1' + s_2 A_2' + s_3 A_3'$, with the matrices $A_1'$, $A_2'$, $A_3'$ given by (41) (with $b = a$). We have:
\[ B(s) = a \begin{pmatrix} s_3 & 0 & s_1 \\ 0 & s_3 & s_2 \\ s_1 & s_2 & s_3 \end{pmatrix}, \]
and, writing $Q(s) := -a^2 s_1^2 - a^2 s_2^2 + (1 - a s_3)^2$, a direct computation of the inverse gives:
\[ M(s) \, s = (I - B(s))^{-1} s = \frac{1}{Q(s)} \begin{pmatrix} s_1 \\ s_2 \\ a s_1^2 + a s_2^2 - a s_3^2 + s_3 \end{pmatrix}. \]
From $\nabla \psi(s) = M(s) s$ we obtain:
\[ \psi(s) = -\frac{1}{2a^2} \ln\left( -a^2 s_1^2 - a^2 s_2^2 + (1 - a s_3)^2 \right) - \frac{s_3}{a}. \]
Since $\psi(s) = \ln \varphi(s)$, we conclude that the Laplace transform of $X'$ is given by formula (44), and therefore the Lemma is proved. $\Box$

Our next step is to invert the Laplace transform (44), to obtain the joint probability distribution of $X_1'$, $X_2'$, and $X_3'$. We introduce the following notations:
• We denote the Laplace transform by $\mathcal{L}$, and accordingly its inverse by $\mathcal{L}^{-1}$.
• For all $c \in \mathbb{R}^3$, we denote by $E_c$ the exponential function $E_c(x) := e^{c \cdot x}$, for all $x \in \mathbb{R}^3$.
• For all $c \in \mathbb{R}^3$, we denote by $T_c$ the translation operator that maps a function $f$ into the function $T_c f$, defined by $(T_c f)(x) := f(x + c)$, for all $x \in \mathbb{R}^3$.
• For all $c \in \mathbb{R} \setminus \{0\}$, we denote by $D_c$ the dilation operator that maps a function $f$ into the function $D_c f$, defined by $(D_c f)(x) := f(cx)$, for all $x \in \mathbb{R}^3$.

We have the following properties:
• For every function $f$ and $c \in \mathbb{R}^3$, $\mathcal{L}(T_c f) = E_{-c} \cdot (\mathcal{L} f)$.
• For every function $f$ and $c \in \mathbb{R}^3$, $\mathcal{L}(E_c f) = T_c (\mathcal{L} f)$.
• For every function $f$ and $c \in \mathbb{R} \setminus \{0\}$, $\mathcal{L}(D_c f) = \frac{1}{|c|^3} D_{1/c} (\mathcal{L} f)$.

Using these properties, it is not hard to see that:
\begin{align}
\mathcal{L}^{-1}&\left[ e^{-(1/a)s_3} \left( -a^2 s_1^2 - a^2 s_2^2 + (1 - a s_3)^2 \right)^{-1/(2a^2)} \right] \nonumber \\
&= \mathcal{L}^{-1}\left\{ D_a \left[ e^{-(1/a^2)s_3} \left( -s_1^2 - s_2^2 + (1 - s_3)^2 \right)^{-1/(2a^2)} \right] \right\} \nonumber \\
&= \frac{1}{|a|^3} D_{1/a} \left\{ \mathcal{L}^{-1}\left[ E_{-(1/a^2)e_3} \cdot \left( -s_1^2 - s_2^2 + (1 - s_3)^2 \right)^{-1/(2a^2)} \right] \right\} \nonumber \\
&= \frac{1}{|a|^3} D_{1/a} \left\{ T_{(1/a^2)e_3} \left[ \mathcal{L}^{-1}\left( T_{-e_3} \left\{ \left( -s_1^2 - s_2^2 + s_3^2 \right)^{-1/(2a^2)} \right\} \right) \right] \right\} \nonumber \\
&= \frac{1}{|a|^3} D_{1/a} \left\{ T_{(1/a^2)e_3} \left\{ E_{-e_3} \, \mathcal{L}^{-1}\left[ \left( s_3^2 - s_1^2 - s_2^2 \right)^{-1/(2a^2)} \right] \right\} \right\}. \tag{45}
\end{align}
It was shown in [4] (see also [5], pages 6482–6483) that $\mathcal{L}^{-1}[(s_3^2 - s_1^2 - s_2^2)^{-1/(2a^2)}]$ is a (positive) measure $\nu$ if and only if:
\[ \frac{1}{2a^2} - \frac{3}{2} \ge -1, \]
here the number $3$ in the last inequality being the dimension $d$ of $\mathbb{R}^3$. The above inequality is equivalent to:
\[ |a| \le 1. \]
We will refer to a probability measure having a Laplace transform of the form $L(s) = (s_3^2 - s_1^2 - s_2^2)^{-1/(2a^2)}$ as a three-dimensional Gamma distribution. Moreover, for $|a| <$
1, the measure $\nu$ is absolutely continuous with respect to the Lebesgue measure $dx_1 \, dx_2 \, dx_3$ on $\mathbb{R}^3$. Its Radon–Nikodým derivative is:
\[ g(x_1, x_2, x_3) := \frac{1}{2^{1/a^2 - 3/2} \, \Gamma_\Omega(1/(2a^2))} \left( x_3^2 - x_1^2 - x_2^2 \right)^{1/(2a^2) - 3/2}, \]
for all $x = (x_1, x_2, x_3) \in \Omega$, where, for all $p > 1/2$,
\[ \Gamma_\Omega(p) := (2\pi)^{1/2} \, \Gamma(p) \, \Gamma\left( p - \frac{1}{2} \right), \]
and $\Omega$ is the open cone:
\[ \Omega := \left\{ x \in \mathbb{R}^3 \ \middle| \ x_3 > \sqrt{x_1^2 + x_2^2} \right\}. \]
We have also included the computation of the Laplace transform of these functions in the Appendix. Let us define $p := 1/(2a^2) > 1/$
2. Then the joint probability distribution $\mu$ of $X_1', X_2', X_3'$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^3$, and, according to formula (45), its density function is:
\begin{align}
f(x_1, x_2, x_3) &:= \frac{1}{|a|^3} D_{1/a} \left\{ T_{2p e_3} \left\{ e^{-x_3} g(x_1, x_2, x_3) \right\} \right\} = \frac{1}{|a|^3} D_{1/a} \left\{ e^{-(x_3 + 2p)} g(x_1, x_2, x_3 + 2p) \right\} \nonumber \\
&= \frac{e^{-2p}}{|a|^3} \, e^{-x_3/a} \, g\!\left( \frac{x_1}{a}, \frac{x_2}{a}, \frac{x_3}{a} + 2p \right) \nonumber \\
&= C_a \, e^{-(x_3/a + 1/a^2)} \left[ \left( \frac{x_3}{a} + \frac{1}{a^2} \right)^2 - \frac{x_1^2}{a^2} - \frac{x_2^2}{a^2} \right]^{1/(2a^2) - 3/2}, \tag{46}
\end{align}
for all $x = (x_1, x_2, x_3)$ inside a cone $\Omega_a$, where $C_a$ is the positive constant:
\[ C_a := \frac{1}{2^{1/a^2 - 3/2} \, |a|^3 \, \Gamma_\Omega(1/(2a^2))}. \]
The presence of the factor $e^{-x_3/a}$ in formula (46) ensures that $\mu$ has finite moments of all orders; its support is the (shifted) cone
\[ \Omega_a := \left\{ (x_1, x_2, x_3) \in \mathbb{R}^3 \ \middle| \ \left( \frac{x_3}{a} + \frac{1}{a^2} \right)^2 > \frac{x_1^2}{a^2} + \frac{x_2^2}{a^2}, \ \frac{x_3}{a} + \frac{1}{a^2} > 0 \right\}. \]
For $|a| = 1$, formula (44) becomes:
\[ \varphi_{\pm 1}(s) = \frac{e^{\mp s_3}}{\sqrt{(1 \mp s_3)^2 - s_1^2 - s_2^2}}. \]
The case $a = -1$ reduces to the case $a = 1$ by applying a dilation of factor $c = -1$, since:
\[ \varphi_{-1}(s) = \frac{e^{s_3}}{\sqrt{(1 + s_3)^2 - s_1^2 - s_2^2}} = \frac{e^{-(-s_3)}}{\sqrt{[1 - (-s_3)]^2 - (-s_1)^2 - (-s_2)^2}} = \varphi_{+1}(-s). \]
Thus, we have:
\[ \mathcal{L}^{-1}(\varphi_{-1}) = \mathcal{L}^{-1}(D_{-1} \varphi_{+1}) = D_{-1} \mathcal{L}^{-1}(\varphi_{+1}). \]
Therefore, if we find the measure $\mu_{+1}$, corresponding to $a = +1$, with support $D_{+1}$, then the measure $\mu_{-1}$, corresponding to $a = -1$, will be supported by $D_{-1} := -D_{+1}$, and for all Borel subsets $B$ of $D_{-1}$ we have $\mu_{-1}(B) = \mu_{+1}(-B)$. The probability measure $\mu_{+1}$, corresponding to $a = +1$, is supported on a cone (a two-dimensional manifold), and its construction is described in the Appendix.

Case 2.
If $[A_1, A_2] = [A_1, A_3] = [A_2, A_3] = 0$: since the matrices $A_1$, $A_2$, and $A_3$ are all symmetric and commute, they can be diagonalized in the same orthonormal basis $\{f_1, f_2, f_3\}$ of $\mathbb{R}^3$. Let $U : \mathbb{R}^3 \to \mathbb{R}^3$ be the orthogonal linear transformation that maps the standard basis $\{e_1, e_2, e_3\}$ of $\mathbb{R}^3$ into $\{f_1, f_2, f_3\}$. We identify $U$ with its matrix $\{u_{i,j}\}_{1 \le i,j \le 3}$. We have $U^{-1} = U^T$ (the transpose of $U$), and for all $r \in \{1, 2, 3\}$:
\[ U A_r U^T = D_r, \]
where $D_r$ is a $3 \times 3$ diagonal matrix. Let us define:
\[ X_i' := \sum_{j=1}^3 u_{i,j} X_j, \qquad i = 1, 2, 3. \]
Then, for all $i \in \{1, 2, 3\}$, we have:
\[ X_i = \sum_{j=1}^3 u^T_{i,j} X_j' = \sum_{j=1}^3 u_{j,i} X_j'. \]
Since $(X_1, X_2, X_3)$ is a non-degenerate 1-Meixner random vector, and $U$ is an invertible linear transformation, $(X_1', X_2', X_3')$ is also a non-degenerate 1-Meixner random vector. Moreover, the joint semi-annihilation operators of $X_1'$, $X_2'$, and $X_3'$ are, for all $i \in \{1, 2, 3\}$:
\[ U_i' := \sum_{j=1}^3 u_{i,j} U_j. \]
For all $i$ and $j$ in $\{1, 2, 3\}$, such that $i \neq j$, we have:
\begin{align*}
[U_i', X_j'] &= \left[ \sum_{k=1}^3 u_{i,k} U_k, \sum_{l=1}^3 u_{j,l} X_l \right] = \sum_{k,l} u_{i,k} u_{j,l} [U_k, X_l] = \sum_{k,l} u_{i,k} u_{j,l} \left( \sum_r \alpha_{k,l,r} X_r + \delta_{k,l} I \right) \\
&= \sum_{r,s} u_{s,r} \left( \sum_{k,l} u_{i,k} \, \alpha_{r,k,l} \, u^T_{l,j} \right) X_s' + \left( \sum_k u_{i,k} u_{j,k} \right) I \\
&= \sum_{r,s} u_{s,r} \left( U A_r U^T \right)_{i,j} X_s' = \sum_{r,s} u_{s,r} (D_r)_{i,j} X_s' = 0,
\end{align*}
since $\sum_k u_{i,k} u_{j,k} = \delta_{i,j} = 0$, $D_r$ is a diagonal matrix, and $i \neq j$. Since for all $i \neq j$ we have $[U_i', X_j'] = 0$, it follows from Theorem 4.6 of [10] that the joint probability distribution of $(X_1', X_2', X_3')$ is polynomially factorisable. That means, for all $i$, $j$, and $k \in \mathbb{N} \cup \{0\}$, we have:
\[ E\left[ X_1'^{\,i} X_2'^{\,j} X_3'^{\,k} \right] = E\left[ X_1'^{\,i} \right] E\left[ X_2'^{\,j} \right] E\left[ X_3'^{\,k} \right]. \]
That is, from the point of view of moments, $X_1'$, $X_2'$, and $X_3'$ behave like three independent random variables. Since the new coefficients satisfy $\alpha'_{i,j,k} = 0$ whenever $i \neq j$, permuting the indices we obtain that, for all $k \in \{1, 2, 3\}$, we have $\alpha'_{k,i,j} = 0$ for all $i \neq j$. Thus, choosing $i := k$ and $j \neq k$, we have $\alpha'_{k,k,j} = 0$. That means, for all $k \in \{1, 2, 3\}$:
\[ [U_k', X_k'] = \sum_{j=1}^3 \alpha'_{k,k,j} X_j' + c_k' I = \alpha'_{k,k,k} X_k' + c_k' I. \]
The last equation shows that, individually, each random variable $X_1'$, $X_2'$, and $X_3'$ is a 1-Meixner random variable. It was shown in [7] that the 1-Meixner random variables are, up to a re-scaling and translation, Gamma or Gaussian random variables. Since the joint moments of $X_1'$, $X_2'$, and $X_3'$ can be written as products of the corresponding individual moments, and because the moment problem for 1-Meixner random vectors is uniquely solvable (due to the estimates that we obtained in Lemma 4), we conclude that the joint probability distribution $\mu$ of $X_1'$, $X_2'$, and $X_3'$ is the product of the individual probability distributions $\mu_1$, $\mu_2$, and $\mu_3$. Thus $X_1'$, $X_2'$, and $X_3'$ are three independent re-scaled and shifted Gamma or Gaussian random variables. We conclude our discussion with the following theorem:

Theorem 1.
A three-dimensional random vector $(X_1, X_2, X_3)$, having finite joint moments of all orders, is a non-degenerate 1-Meixner random vector if and only if there exists an invertible affine transformation from $\mathbb{R}^3$ to $\mathbb{R}^3$, denoted $(X_1, X_2, X_3) \to (X_1', X_2', X_3')$, such that either:
• $X_1'$, $X_2'$, and $X_3'$ are independent Gamma or Gaussian random variables, or
• the joint probability distribution of $(X_1', X_2', X_3')$ is a three-dimensional Gamma distribution.

Appendix
Proposition 6.
Let $\mu$ be the measure on $\mathbb{R}^3$ that is absolutely continuous with respect to the Lebesgue measure $dx_1 \, dx_2 \, dx_3$ on $\mathbb{R}^3$, and whose Radon–Nikodým derivative is:
\[ \frac{d\mu(x)}{dx_1 \, dx_2 \, dx_3} = \left( x_3^2 - x_1^2 - x_2^2 \right)^p 1_\Omega(x_1, x_2, x_3), \]
where
\[ \Omega := \{ (x_1, x_2, x_3) \in \mathbb{R}^3 \mid x_1^2 + x_2^2 < x_3^2, \ x_3 > 0 \} \]
is the open upper cone with vertex at the origin, axis of symmetry the positive $x_3$-axis, whose generator and axis of symmetry make an angle $\varphi$ of measure $m(\varphi) = 45^\circ$, and $1_\Omega$ denotes the characteristic function of $\Omega$. Then, for $p > -1$, the Laplace transform of $\mu$ (or equivalently, of its Radon–Nikodým derivative with respect to the Lebesgue measure) is:
\[ E_\mu[\exp(t \cdot X)] = 2\pi \, \Gamma(2p + 2) \left( t_3^2 - t_1^2 - t_2^2 \right)^{-p - (3/2)}, \qquad t \in -\Omega. \]

Proof.
Let $t = (t_1, t_2, t_3) \in -\Omega$. We have:
\[ \varphi_\mu(t) = \int_\Omega e^{t_1 x_1} e^{t_2 x_2} e^{t_3 x_3} \left( x_3^2 - x_1^2 - x_2^2 \right)^p dx_1 \, dx_2 \, dx_3. \]
Let us pass to spherical coordinates:
\[ x_1 = r \sin(\varphi)\cos(\theta), \qquad x_2 = r \sin(\varphi)\sin(\theta), \qquad x_3 = r \cos(\varphi), \]
where $r \in (0, \infty)$, $\theta \in [0, 2\pi)$, $\varphi \in [0, \pi/4)$, with Jacobian $|J(r, \theta, \varphi)| = r^2 \sin(\varphi)$. Thus, since $x_3^2 - x_1^2 - x_2^2 = r^2 \cos(2\varphi)$, we have:
\[ \varphi_\mu(t) = \int_0^{\pi/4} \sin(\varphi) \left[ \int_0^\infty e^{t_3 r \cos(\varphi)} \left( r^2 \cos(2\varphi) \right)^p r^2 \left[ \int_0^{2\pi} e^{r \sin(\varphi)(t_1 \cos(\theta) + t_2 \sin(\theta))} \, d\theta \right] dr \right] d\varphi. \]
The order of integration that we choose is: first with respect to $\theta$, second with respect to $r$, and third with respect to $\varphi$. Let us compute first the innermost integral, with respect to $\theta$, for $r$ and $\varphi$ fixed. In the exponent of that integrand we write the superposition of waves as a single wave, namely:
\[ t_1 \cos(\theta) + t_2 \sin(\theta) = \sqrt{t_1^2 + t_2^2} \left[ \cos(\theta) \frac{t_1}{\sqrt{t_1^2 + t_2^2}} + \sin(\theta) \frac{t_2}{\sqrt{t_1^2 + t_2^2}} \right] = \sqrt{t_1^2 + t_2^2} \cos(\theta - \tau), \]
where $\tau \in [0, 2\pi)$ is the unique angle such that $\cos(\tau) = t_1/\sqrt{t_1^2 + t_2^2}$ and $\sin(\tau) = t_2/\sqrt{t_1^2 + t_2^2}$. Therefore, the innermost integral becomes:
\[ I_1 := \int_0^{2\pi} e^{r \sin(\varphi)\sqrt{t_1^2 + t_2^2}\cos(\theta - \tau)} \, d\theta. \]
Since the integrand is a periodic function of period $2\pi$, we can integrate over any interval of length $2\pi$; integrating over $[\tau, \tau + 2\pi)$ and changing the variable $\theta \mapsto \theta + \tau$, we obtain:
\[ I_1 = \int_0^{2\pi} e^{r \sin(\varphi)\sqrt{t_1^2 + t_2^2}\cos(\theta)} \, d\theta. \]
We now use the Taylor series of the exponential function; since this is an entire series (radius of convergence $R = \infty$), it converges uniformly on compact sets, so we can interchange the series and the integral, obtaining:
\[ I_1 = \sum_{n=0}^\infty \frac{\left( r \sin(\varphi)\sqrt{t_1^2 + t_2^2} \right)^n}{n!} \int_0^{2\pi} \cos^n(\theta) \, d\theta. \]
For all $n \ge 0$, let us define:
\[ J_n := \int_0^{2\pi} \cos^n(\theta) \, d\theta. \]
Then $J_0 = 2\pi$, $J_1 = 0$, and for all $n \ge 2$, integrating by parts, we obtain:
\begin{align*}
J_n &= \int_0^{2\pi} \cos^{n-1}(\theta) \, d(\sin(\theta)) = \cos^{n-1}(\theta)\sin(\theta) \Big|_0^{2\pi} + (n-1) \int_0^{2\pi} \cos^{n-2}(\theta)\sin^2(\theta) \, d\theta \\
&= 0 + (n-1) \int_0^{2\pi} \cos^{n-2}(\theta)\left[ 1 - \cos^2(\theta) \right] d\theta = (n-1) J_{n-2} - (n-1) J_n.
\end{align*}
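The Wallis-type integrals $J_n$ and their recursion can be confirmed numerically; the following sketch (a uniform Riemann sum, which is highly accurate for smooth periodic integrands) is illustrative only:

```python
import numpy as np
from math import comb, pi

# J_n = integral of cos^n(theta) over [0, 2*pi], via a periodic Riemann sum.
theta = np.linspace(0.0, 2.0 * pi, 4096, endpoint=False)

def J(n):
    return np.mean(np.cos(theta) ** n) * 2.0 * pi

for n in range(2, 12):
    assert abs(J(n) - (n - 1) / n * J(n - 2)) < 1e-9   # the recursion
for n in range(0, 6):
    assert abs(J(2 * n + 1)) < 1e-9                    # odd powers vanish
    assert abs(J(2 * n) - 2 * pi * comb(2 * n, n) / 4 ** n) < 1e-9
```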
2, integrating by parts, we obtain: J n = Z π cos n − ( θ ) cos( θ ) dθ = Z π cos n − ( θ ) d (sin( θ ))= cos n − ( θ ) sin( θ ) | π + ( n − Z π cos n − ( θ ) sin ( θ ) dθ = 0 + ( n − Z π cos n − ( θ ) (cid:2) − cos ( θ ) (cid:3) dθ = ( n − J n − − ( n − J n . –MEIXNER RANDOM VECTORS 39 We obtain from here, that: J n = n − n J n − , for all n ≥
1. Iterating this recursive relation, we get: J n +1 = 0and J n = (2 n − n )!! 2 π = (2 n − n n ! 2 π. Thus, we obtain: Z π e r √ t + t cos( θ ) dθ = ∞ X n =0 r n ( t + t ) n sin n ( ϕ )(2 n )! · (2 n − n n ! 2 π = ∞ X n =0 r n ( t + t ) n sin n ( ϕ )2 n ( n !) π. We compute now the second integral, with respect to r , for a fixed ϕ . I := ∞ X n =0 ( t + t ) n sin n ( ϕ )2 n ( n !) π Z ∞ r n +2 p +2 e t r cos( ϕ ) dr. Here, for convergence, we must assume that t <
0, and make the change of variable s = r | t | cos( ϕ ), dr = | t | cos( ϕ ) ds . We obtain: I = ∞ X n =0 ( t + t ) n sin n ( ϕ )2 n ( n !) π Z ∞ s n +2 p +2 e − s | t | n +2 p +3 cos n +2 p +3 ( ϕ ) ds = ∞ X n =0 ( t + t ) n sin n ( ϕ )2 n ( n !) | t | n +2 p +3 cos n +2 p +3 ( ϕ ) 2 π Γ(2 n + 2 p + 3) . Finally, we compute the last integral, with respect to ϕ , which is the Laplace transform of µ . We have: ϕ ( t ) = ∞ X n =0 ( t + t ) n n ( n !) | t | n +2 p +3 π Γ(2 n + 2 p + 3) Z π/ sin n +1 ( ϕ )cos n +2 p +3 ( ϕ ) cos p (2 ϕ ) dϕ = ∞ X n =0 ( t + t ) n n ( n !) | t | n +2 p +3 π Γ(2 n + 2 p + 3) Z π/ sin n ( ϕ )cos n +2 p +3 ( ϕ ) cos p (2 ϕ ) sin( ϕ ) dϕ. We make now the change of variable u = cos( ϕ ), du = − sin( ϕ ) dϕ . We obtain: ϕ ( t ) = ∞ X n =0 ( t + t ) n n ( n !) | t | n +2 p +3 π Γ(2 n + 2 p + 3) Z √ / (1 − u ) n u n +2 p +3 (cid:0) u − (cid:1) p du. We make now the change of variable v = u , dv = 2 udu . We obtain: ϕ ( t ) = 12 ∞ X n =0 ( t + t ) n n ( n !) | t | n +2 p +3 π Γ(2 n + 2 p + 3) Z / (1 − v ) n (2 v − p v n + p +2 dv = 12 ∞ X n =0 ( t + t ) n n ( n !) | t | n +2 p +3 π Γ(2 n + 2 p + 3) Z / (cid:18) − vv (cid:19) n (cid:18) v − v (cid:19) p v dv. Finally, we make the change of variable w = (1 − v ) /v , dw = − (1 /v ) dv . We obtain: ϕ ( t ) = 12 ∞ X n =0 ( t + t ) n n ( n !) | t | n +2 p +3 π Γ(2 n + 2 p + 3) Z w n (1 − w ) p dw = 12 ∞ X n =0 ( t + t ) n n ( n !) | t | n +2 p +3 π Γ(2 n + 2 p + 3) B ( n + 1 , p + 1) , where B is the Euler beta function. Applying the formula: B ( n + 1 , p + 1) = Γ( n + 1)Γ( p + 1)Γ( n + p + 2) , we have: ϕ ( t ) = 12 ∞ X n =0 ( t + t ) n n ( n !) | t | n +2 p +3 π Γ(2 n + 2 p + 3) Γ( n + 1)Γ( p + 1)Γ( n + p + 2)= 12 ∞ X n =0 ( t + t ) n n ( n !) | t | n +2 p +3 π Γ (cid:18) (cid:18) n + p + 32 (cid:19)(cid:19) n !Γ( p + 1)Γ( n + p + 2) . 
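The duplication formula used in the final summation is easy to confirm numerically (sample values only):

```python
from math import gamma, sqrt, pi

# Legendre duplication formula: Gamma(2z) = 2^(2z-1) Gamma(z) Gamma(z + 1/2) / sqrt(pi)
for z in [0.3, 0.75, 1.0, 2.5, 4.25]:
    lhs = gamma(2 * z)
    rhs = 2 ** (2 * z - 1) * gamma(z) * gamma(z + 0.5) / sqrt(pi)
    assert abs(lhs - rhs) <= 1e-12 * abs(lhs)
```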
Using now the Legendre duplication formula for the Gamma function,
\[
\Gamma(2z) = \frac{2^{2z-1}\,\Gamma(z)\,\Gamma\!\left(z + \tfrac{1}{2}\right)}{\sqrt{\pi}},
\]
we obtain:
\begin{align*}
L_\mu(t) &= \pi \sum_{n=0}^{\infty} \frac{(t_1^2 + t_2^2)^n}{2^{2n}\, n!\, |t_3|^{2n+2p+3}} \cdot \frac{2^{2n+2p+2}\,\Gamma\!\left(n + p + \tfrac{3}{2}\right)\Gamma(n + p + 2)}{\sqrt{\pi}} \cdot \frac{\Gamma(p + 1)}{\Gamma(n + p + 2)}\\
&= \frac{\pi\, 2^{2p+2}}{|t_3|^{2p+3}} \sum_{n=0}^{\infty} \frac{(t_1^2 + t_2^2)^n}{n!\, |t_3|^{2n}} \cdot \frac{\Gamma\!\left(n + p + \tfrac{3}{2}\right)\Gamma(p + 1)}{\sqrt{\pi}}\\
&= \frac{\pi\, 2^{2p+2}}{|t_3|^{2p+3}} \sum_{n=0}^{\infty} \frac{(t_1^2 + t_2^2)^n}{n!\, |t_3|^{2n}} \cdot \frac{\left(n + p + \tfrac{1}{2}\right)\left(n - 1 + p + \tfrac{1}{2}\right)\cdots\left(1 + p + \tfrac{1}{2}\right)\Gamma\!\left(p + 1 + \tfrac{1}{2}\right)\Gamma(p + 1)}{\sqrt{\pi}}\\
&= \frac{\pi\, 2^{2p+2}}{|t_3|^{2p+3}} \sum_{n=0}^{\infty} \frac{(-1)^n\left(-p - \tfrac{3}{2}\right)\left(-p - \tfrac{3}{2} - 1\right)\cdots\left(-p - \tfrac{3}{2} - (n - 1)\right)}{n!} \cdot \frac{(t_1^2 + t_2^2)^n}{|t_3|^{2n}} \cdot \frac{\Gamma(p + 1)\,\Gamma\!\left(p + 1 + \tfrac{1}{2}\right)}{\sqrt{\pi}}.
\end{align*}
Using now the binomial formula
\[
(1 + x)^r = \sum_{n=0}^{\infty} \frac{r(r - 1)\cdots(r - n + 1)}{n!}\, x^n,
\]
for $|x| < 1$, and the duplication formula again, we obtain:
\[
L_\mu(t) = \frac{\pi\, 2^{2p+2}}{|t_3|^{2p+3}} \left[1 - \frac{t_1^2 + t_2^2}{t_3^2}\right]^{-p - (3/2)} \frac{\Gamma(2p + 2)}{2^{2p+1}}
= 2\pi\, \Gamma(2p + 2) \left(t_3^2 - t_1^2 - t_2^2\right)^{-p - (3/2)}. \qquad\square
\]
The case $a = 1$ corresponds to $p = -1/2$.

Proposition 7.
Let $\nu(x)$ be the two-dimensional measure on $\partial\Omega$ (the surface of the cone with equation $\sqrt{x_1^2 + x_2^2} = x_3$) obtained as the push–forward of the surface measure on the cylinder given by the parametrization $(r, \theta) \to (\cos(\theta), \sin(\theta), r)$, $r > 0$, $\theta \in [0, 2\pi)$, via the map $(\cos(\theta), \sin(\theta), r) \to (r\cos(\theta), r\sin(\theta), r)$. Then, for $t \in -\Omega$, the Laplace transform
\[
L_\nu(t) = \int_{\partial\Omega} e^{t \cdot x}\, d\nu(x)
\]
is given by
\[
L_\nu(t) = \frac{2\pi}{\sqrt{t_3^2 - (t_1^2 + t_2^2)}}.
\]
Proof.
Using $x = (r\cos(\theta), r\sin(\theta), r)$, we have
\[
L_\nu(t) = \int_0^{\infty} \int_0^{2\pi} e^{t_3 r}\, e^{t_1 r\cos(\theta) + t_2 r\sin(\theta)}\, d\theta\, dr.
\]
Therefore
\[
L_\nu(t) = \int_0^{\infty} e^{t_3 r} \int_0^{2\pi} e^{r\sqrt{t_1^2 + t_2^2}\,\cos(\theta - \tau)}\, d\theta\, dr,
\]
where $\tau \in [0, 2\pi)$ is the only angle such that
\[
\cos(\tau) = \frac{t_1}{\sqrt{t_1^2 + t_2^2}}, \qquad \sin(\tau) = \frac{t_2}{\sqrt{t_1^2 + t_2^2}}.
\]
As in Proposition 6, we have
\[
I_1 := \int_0^{2\pi} e^{r\sqrt{t_1^2 + t_2^2}\,\cos(\theta)}\, d\theta
= \sum_{n=0}^{\infty} \frac{r^{2n}(t_1^2 + t_2^2)^n}{(2n)!} \cdot \frac{(2n)!}{2^{2n}(n!)^2}\, 2\pi
= \sum_{n=0}^{\infty} \frac{r^{2n}(t_1^2 + t_2^2)^n}{2^{2n}(n!)^2}\, 2\pi.
\]
We compute now the second integral, with respect to $r$:
\[
I_2 := \sum_{n=0}^{\infty} \frac{(t_1^2 + t_2^2)^n}{2^{2n}(n!)^2}\, 2\pi \int_0^{\infty} r^{2n}\, e^{t_3 r}\, dr.
\]
Here, for convergence at infinity, we must assume that $t_3 < 0$. Making the change of variable $s = r|t_3|$, $dr = ds/|t_3|$, we obtain:
\[
I_2 = \sum_{n=0}^{\infty} \frac{(t_1^2 + t_2^2)^n}{2^{2n}(n!)^2}\, 2\pi \int_0^{\infty} \frac{s^{2n}\, e^{-s}}{|t_3|^{2n+1}}\, ds
= \sum_{n=0}^{\infty} \frac{(t_1^2 + t_2^2)^n}{2^{2n}(n!)^2\, |t_3|^{2n+1}}\, 2\pi\, (2n)!.
\]
Therefore, for $(t_1^2 + t_2^2)/t_3^2 < 1$, we have
\[
I_2 = \frac{2\pi}{|t_3|\sqrt{1 - \dfrac{t_1^2 + t_2^2}{t_3^2}}} = \frac{2\pi}{\sqrt{t_3^2 - (t_1^2 + t_2^2)}}. \qquad\square
\]

References

[1] Accardi, L., Kuo, H.-H., Stan, A. I.: Characterization of probability measures through the canonically associated interacting Fock spaces, Infin. Dimens. Anal. Quantum Probab. Relat. Top., No. 4 (2004) 485–505.
[2] Accardi, L., Kuo, H.-H., Stan, A. I.: Moments and commutators of probability measures, Infin. Dimens. Anal. Quantum Probab. Relat. Top., No. 4 (2007) 591–612.
[3] Accardi, L., Nahni, M.: Interacting Fock Spaces and Orthogonal Polynomials in several variables, in: Non-Commutativity, Infinite-Dimensionality and Probability at the Crossroads (2002) 192–205.
[4] Gindikin, S.: Invariant generalized functions in homogeneous domains, J. Functional Anal. Appl. (1975) 50–52.
[5] Letac, G., Wesolowski, J.: Laplace transforms which are negative powers of quadratic polynomials, Trans. Amer. Math. Soc., No. 12 (2008) 6475–6496.
[6] Meixner, J.: Orthogonale Polynomsysteme mit einer besonderen Gestalt der erzeugenden Funktion, J. London Math. Soc. (1934) 6–13.
[7] Popa, G., Stan, A. I.: Gamma distributed random variables and their semi-quantum operators, J. Phys.: Conf. Ser. 563 (2014) 012029. doi:10.1088/1742-6596/563/1/012029.
[8] Popa, G., Stan, A. I.: Two-dimensional 1-Meixner random vectors and their semi-quantum operators, Communications on Stochastic Analysis (COSA), No. 4 (2015) 425–455.
[9] Popa, G., Stan, A. I.: 2-Meixner random variables and semi-quantum operators, J. Phys.: Conf. Ser. (2017) 012002.
[10] Popa, G., Stan, A. I.: A characterization of probability measures in terms of semi-quantum operators, Infin. Dimens. Anal. Quantum Probab. Relat. Top., No. 02 (2019). https://doi.org/10.1142/S0219025719500097
Department of Mathematics, The Ohio State University, 1465 Mount Vernon Avenue, Marion, OH 43302, U.S.A.
E-mail address : [email protected]* Department of Mathematics and Computer Science, St. John’s University, Queens, NY11439, U.S.A.
E-mail address: