A characterization of the Riesz distribution
(Running title: A characterization of the Riesz distribution)

A. Hassairi∗, S. Lajmi and R. Zine
Faculté des Sciences, Université de Sfax, B.P. 802, Sfax, Tunisie.
Abstract
Bobecka and Wesolowski (2002) have shown that, in the Olkin and Rubin characterization of the Wishart distribution (see Casalis and Letac (1996)), when we use the division algorithm defined by the quadratic representation and replace the property of invariance by the existence of twice differentiable densities, we still have a characterization of the Wishart distribution. In the present work, we show that, when we use the division algorithm defined by the Cholesky decomposition, we get a characterization of the Riesz distribution.
Keywords:
Symmetric cone, division algorithm, Wishart distribution, Riesz distribution, beta-Riesz distribution, functional equation.
A remarkable characterization of the gamma distribution, due to Lukacs (1955), says that if $U$ and $V$ are two independent non-Dirac and non-negative random variables such that $U+V$ is a.s. positive, then $U/(U+V)$ and $U+V$ are independent if and only if $U$ and $V$ have the gamma distribution with the same scale parameter. The classical multivariate version of this characterization concerns the Wishart distribution on the cone of symmetric positive definite matrices. In this case, there is not a single way to define the quotient of two matrices. For instance, if $Y$ is a positive definite matrix, one can use the quadratic representation, that is, write $Y=Y^{1/2}Y^{1/2}$ and define the ratio of $X$ by $Y$ as $Y^{-1/2}XY^{-1/2}$, or use the Cholesky decomposition $Y=TT^{*}$, where $T$ is a lower triangular matrix, and define the ratio as $T^{-1}X(T^{-1})^{*}$. A general definition of a division algorithm will be given in Section 2; however, these two examples are the most usual and the most important. In 1962, Olkin and Rubin showed that if $U$ and $V$ are two independent random variables valued in the cone of symmetric non-negative matrices such that $U+V$ is a.s. positive definite, then, independently of the choice of the division algorithm, the quotient of $U$ by $U+V$ is independent of $U+V$ and its distribution is invariant under the orthogonal group if and only if $U$ and $V$ have the Wishart distribution. This result has been extended to the Wishart distribution on any symmetric cone by Casalis and Letac (1996). Recently, Bobecka and Wesolowski (2002) gave another characterization of the Wishart distribution without any invariance assumption on the quotient. More precisely, they showed that if we use the division algorithm defined by the quadratic representation and replace, in the Olkin and Rubin theorem, the condition of invariance of the distribution of the quotient by the existence of twice differentiable densities, then we still have a characterization of the Wishart distribution. The present paper gives a parallel result which starts from the observation that, when the condition of invariance of the distribution of the ratio under the orthogonal group is dropped, the characterization in the Bobecka and Wesolowski way is not independent of the choice of the division algorithm. We show that, when we use the division algorithm defined by the Cholesky decomposition, we get a characterization of the Riesz distribution introduced by Hassairi and Lajmi (2001). Our method of proof is based on some functional equations depending on the triangular group. These equations are more involved than the ones used in the characterization of the Wishart distribution; their solutions are expressed in terms of the generalized power. Our results will be presented in the framework of the Riesz distribution on the symmetric cone of a simple Euclidean Jordan algebra.

∗ Corresponding author. E-mail address: [email protected]
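In the matrix case, the two quotients just described can be computed directly. The following `numpy` sketch, added here as an illustration (the function names are ours, not the paper's), checks that each choice is a division algorithm, i.e. it sends the pair $(y,y)$ to the identity, while the two quotients differ on a generic pair $(x,y)$:

```python
import numpy as np

def psd_sqrt(y):
    """Symmetric square root y^{1/2} of a positive definite matrix."""
    w, v = np.linalg.eigh(y)
    return (v * np.sqrt(w)) @ v.T

def quadratic_quotient(x, y):
    """Ratio of x by y via the quadratic representation: y^{-1/2} x y^{-1/2}."""
    y_inv_sqrt = np.linalg.inv(psd_sqrt(y))
    return y_inv_sqrt @ x @ y_inv_sqrt

def cholesky_quotient(x, y):
    """Ratio of x by y via the Cholesky decomposition y = T T^*: T^{-1} x (T^{-1})^*."""
    t_inv = np.linalg.inv(np.linalg.cholesky(y))  # T lower triangular, y = T T^*
    return t_inv @ x @ t_inv.T

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 6)); x = a @ a.T  # random positive definite matrices
b = rng.standard_normal((3, 6)); y = b @ b.T

# both are division algorithms: the quotient of y by y is the identity
assert np.allclose(quadratic_quotient(y, y), np.eye(3))
assert np.allclose(cholesky_quotient(y, y), np.eye(3))
# but they give different quotients on a generic pair (x, y)
assert not np.allclose(quadratic_quotient(x, y), cholesky_quotient(x, y))
```

The last assertion is the point of the present paper: once the invariance assumption is dropped, the choice between these two algorithms matters.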
This will enable us to use some technical results established in the book of Faraut and Korányi (1994) and in Hassairi et al. (2001, 2005). However, to make the paper accessible to a reader who is not familiar with the theory of Jordan algebras, we will give a particular emphasis to the cone of positive definite symmetric matrices.

We first review some facts concerning Jordan algebras and their symmetric cones. Our notations are the ones used in the book of Faraut and Korányi (1994). Let us recall that a Euclidean Jordan algebra is a Euclidean space $E$ with scalar product $\langle x,y\rangle$ and a bilinear map $E\times E\longrightarrow E$, $(x,y)\mapsto xy$, called the Jordan product, such that, for all $x,y,z$ in $E$,
i) $xy=yx$,
ii) $\langle x,yz\rangle=\langle xy,z\rangle$,
iii) there exists $e$ in $E$ such that $ex=x$,
iv) $x(x^{2}y)=x^{2}(xy)$, where we use the abbreviation $x^{2}=xx$.
A Euclidean Jordan algebra is said to be simple if it does not contain a nontrivial ideal. To each simple Euclidean Jordan algebra, one attaches the set of Jordan squares $\overline{\Omega}=\{x^{2};\ x\in E\}$. Its interior $\Omega$ is a symmetric cone, i.e., a cone which is
i) self dual, i.e., $\Omega=\{x\in E;\ \langle x,y\rangle>0\ \ \forall y\in\overline{\Omega}\setminus\{0\}\}$;
ii) homogeneous, i.e., the subgroup $G(\Omega)$ of the linear group $GL(E)$ of linear automorphisms which preserve $\Omega$ acts transitively on $\Omega$;
iii) salient, i.e., $\Omega$ does not contain a line.
Furthermore, it is irreducible in the sense that it is not the product of two cones.
Let now $x$ be in $E$. If $L(x)$ is the endomorphism of $E$, $y\mapsto xy$, and $P(x)=2L(x)^{2}-L(x^{2})$, then $L(x)$ and $P(x)$ are symmetric for the Euclidean structure of $E$, and the map $x\mapsto P(x)$ is called the quadratic representation of $E$.
An element $c$ of $E$ is said to be idempotent if $c^{2}=c$; it is a primitive idempotent if furthermore $c\neq 0$ and $c$ is not the sum $t+u$ of two non-null idempotents $t$ and $u$ such that $tu=0$. A Jordan frame is a set $\{c_{1},\dots,c_{r}\}$ of primitive idempotents such that
$$\sum_{i=1}^{r}c_{i}=e\quad\text{and}\quad c_{i}c_{j}=\delta_{ij}c_{i},\ \ 1\leq i,j\leq r.$$
It is an important result that the size $r$ of such a frame is a constant called the rank of $E$. If $c$ is a primitive idempotent of $E$, the only possible eigenvalues of $L(c)$ are $0$, $\frac12$ and $1$. The corresponding eigenspaces are respectively denoted by $E(c,0)$, $E(c,\frac12)$ and $E(c,1)$, and the decomposition
$$E=E(c,0)\oplus E(c,\tfrac12)\oplus E(c,1)$$
is called the Peirce decomposition of $E$ with respect to $c$.
Suppose now that $(c_{i})_{1\leq i\leq r}$ is a Jordan frame in $E$ and let, for $1\leq i\leq j\leq r$,
$$E_{ij}=\begin{cases}E(c_{i},1)=\mathbb{R}c_{i} & \text{if } i=j,\\ E(c_{i},\tfrac12)\cap E(c_{j},\tfrac12) & \text{if } i\neq j.\end{cases}$$
Then (see Faraut and Korányi (1994), Theorem IV.2.1) we have $E=\oplus_{i\leq j}E_{ij}$, and the dimension of $E_{ij}$ is, for $i\neq j$, a constant $d$ called the Jordan constant. It is related to the dimension $n$ and the rank $r$ of $E$ by the relation $n=r+\frac{r(r-1)}{2}d$.
For $1\leq k\leq r$, let $P_{k}$ denote the orthogonal projection on the Jordan subalgebra $E^{(k)}=E(c_{1}+\dots+c_{k},1)$, let $\det^{(k)}$ denote the determinant in the subalgebra $E^{(k)}$ and, for $x$ in $E$, set $\Delta_{k}(x)=\det^{(k)}(P_{k}(x))$. Then $\Delta_{k}$ is called the principal minor of order $k$ with respect to the Jordan frame $(c_{i})_{1\leq i\leq r}$. For $s=(s_{1},\dots,s_{r})\in\mathbb{R}^{r}$ and $x$ in $\Omega$, we write
$$\Delta_{s}(x)=\Delta_{1}(x)^{s_{1}-s_{2}}\Delta_{2}(x)^{s_{2}-s_{3}}\cdots\Delta_{r}(x)^{s_{r}}.$$
This is the generalized power function. Note that, if $x=\sum_{i=1}^{r}\lambda_{i}c_{i}$, then $\Delta_{s}(x)=\lambda_{1}^{s_{1}}\lambda_{2}^{s_{2}}\cdots\lambda_{r}^{s_{r}}$, and that $\Delta_{s}(x)=(\det x)^{p}$ if $s=(p,\dots,p)$ with $p\in\mathbb{R}$. It is also easy to see that $\Delta_{s+s'}(x)=\Delta_{s}(x)\Delta_{s'}(x)$. In particular, if $m\in\mathbb{R}$ and $s+m=(s_{1}+m,\dots,s_{r}+m)$, we have $\Delta_{s+m}(x)=\Delta_{s}(x)(\det x)^{m}$.
As we have mentioned above, one may suppose that $E$ is the algebra of real symmetric matrices of rank $r$. In this case, the Jordan product of two symmetric matrices $x$ and $y$ is defined by $xy=\frac12(x.y+y.x)$, where $x.y$ is the ordinary product of the matrices $x$ and $y$; the cone $\Omega$ is the cone of positive definite matrices, $\overline{\Omega}$ is the cone of symmetric non-negative matrices, and $d=1$. If $x=(x_{ij})_{1\leq i,j\leq r}$ is an $(r,r)$ symmetric positive definite matrix and if, for $1\leq k\leq r$, we set $P_{k}(x)=(x_{ij})_{1\leq i,j\leq k}$ and $\Delta_{k}(x)=\det\big((x_{ij})_{1\leq i,j\leq k}\big)$, the generalized power is the function on $\Omega$ defined by $\Delta_{s}(x)=\Delta_{1}(x)^{s_{1}-s_{2}}\Delta_{2}(x)^{s_{2}-s_{3}}\cdots\Delta_{r}(x)^{s_{r}}$. The definition of the Riesz distribution on a symmetric cone $\Omega$ relies on the notion of generalized power.
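In the symmetric-matrix case the principal minors are the ordinary leading principal minors, so the generalized power is straightforward to compute. The short `numpy` check below (an illustration added here, not taken from the paper) verifies the identities $\Delta_{s+s'}(x)=\Delta_{s}(x)\Delta_{s'}(x)$ and $\Delta_{s}(x)=(\det x)^{p}$ for $s=(p,\dots,p)$:

```python
import numpy as np

def principal_minor(x, k):
    """Delta_k(x): determinant of the leading k x k block of x."""
    return np.linalg.det(x[:k, :k])

def generalized_power(x, s):
    """Delta_s(x) = Delta_1(x)^{s1-s2} Delta_2(x)^{s2-s3} ... Delta_r(x)^{sr}."""
    r = len(s)
    val = 1.0
    for k in range(1, r):
        val *= principal_minor(x, k) ** (s[k - 1] - s[k])
    return val * principal_minor(x, r) ** s[r - 1]

rng = np.random.default_rng(1)
a = rng.standard_normal((3, 6))
x = a @ a.T                     # a point of the cone of positive definite matrices

s  = np.array([1.5, 0.7, 0.2])
sp = np.array([0.4, 1.1, 0.9])

# Delta_{s+s'}(x) = Delta_s(x) Delta_{s'}(x)
assert np.isclose(generalized_power(x, s + sp),
                  generalized_power(x, s) * generalized_power(x, sp))
# Delta_s(x) = (det x)^p when s = (p, ..., p)
p = 0.8
assert np.isclose(generalized_power(x, np.array([p, p, p])),
                  np.linalg.det(x) ** p)
```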
In fact, for $\sigma$ in $\Omega$ and $s=(s_{1},\dots,s_{r})$ such that $s_{i}>(i-1)\frac{d}{2}$ for all $i$, the measure on $\Omega$
$$R(s,\sigma)(dx)=\frac{1}{\Gamma_{\Omega}(s)\Delta_{s}(\sigma^{-1})}\,e^{-\langle\sigma,x\rangle}\Delta_{s-\frac{n}{r}}(x)1_{\Omega}(x)\,dx,$$
where
$$\Gamma_{\Omega}(s)=(2\pi)^{\frac{n-r}{2}}\prod_{j=1}^{r}\Gamma\Big(s_{j}-(j-1)\frac{d}{2}\Big),$$
is a probability distribution called the Riesz distribution with parameters $s$ and $\sigma$.
We come now to the general definition of a division algorithm in a symmetric cone $\Omega$. Let $G$ be the connected component of the identity in $G(\Omega)$. A division algorithm is defined as a measurable map $g$ from $\Omega$ into $G$ such that, for all $y$ in $\Omega$, $g(y)(y)=e$.
As in the case of symmetric matrices, we will introduce two important division algorithms; the first is based on the quadratic representation, $x\mapsto P(x^{-1/2})$, and the second takes its values in the triangular group $T$. For the definition of $T$, we need to introduce some other facts concerning a Jordan algebra. For $x$ and $y$ in $E$, let $x\Box y$ denote the endomorphism of $E$ defined by
$$x\Box y=L(xy)+[L(x),L(y)]=L(xy)+L(x)L(y)-L(y)L(x). \tag{1}$$
If $c$ is an idempotent and if $z$ is an element of $E(c,\frac12)$, then $\tau_{c}(z)=\exp(2z\Box c)$ is called a Frobenius transformation; it is an element of the group $G$. Given a Jordan frame $(c_{i})_{1\leq i\leq r}$, the subgroup of $G$
$$T=\Big\{\tau_{c_{1}}(z^{(1)})\cdots\tau_{c_{r-1}}(z^{(r-1)})P\Big(\sum_{i=1}^{r}a_{i}c_{i}\Big);\ a_{i}>0,\ z^{(j)}\in\bigoplus_{k=j+1}^{r}E_{jk}\Big\}$$
is called the triangular group corresponding to the Jordan frame $(c_{i})_{1\leq i\leq r}$. It is an important result (Faraut and Korányi, p.113, Prop. VI.3.8) that the symmetric cone $\Omega$ of the algebra $E$ is parameterized by the set $E^{+}=\big\{u=\sum_{i=1}^{r}u_{i}c_{i}+\sum_{i<j}u_{ij};\ u_{i}>0,\ u_{ij}\in E_{ij}\big\}$: each $y$ in $\Omega$ may be written in a unique way as
$$y=t_{u}(e),\quad\text{with } u\in E^{+}\text{ and } t_{u}=\tau_{c_{1}}(z^{(1)})\cdots\tau_{c_{r-1}}(z^{(r-1)})P\Big(\sum_{i=1}^{r}u_{i}c_{i}\Big)\in T. \tag{3}$$
The corresponding division algorithm is then given by
$$g(y)=t_{u}^{-1},\quad\text{where } y=t_{u}(e). \tag{4}$$
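For symmetric matrices, the division algorithm with values in the triangular group is exactly the Cholesky quotient described in the introduction. The following `numpy` sketch (added here as an illustration) checks the defining property $g(y)(y)=e$ and the fact that the quotient of a summand by the sum lies in $\Omega\cap(e-\Omega)$:

```python
import numpy as np

def triangular_division(y):
    """Division algorithm from the Cholesky decomposition y = T T^*:
    g(y) is the map x -> T^{-1} x (T^{-1})^*, so that g(y)(y) = e."""
    t_inv = np.linalg.inv(np.linalg.cholesky(y))  # T^{-1}, lower triangular
    return lambda x: t_inv @ x @ t_inv.T

rng = np.random.default_rng(2)
a = rng.standard_normal((4, 7)); y = a @ a.T      # y in the cone Omega
b = rng.standard_normal((4, 7)); x = b @ b.T

g = triangular_division(y)
assert np.allclose(g(y), np.eye(4))               # defining property g(y)(y) = e

# the quotient of a summand by the sum lies in Omega ∩ (e - Omega):
# all eigenvalues of u = g(x + y)(x) are strictly between 0 and 1
u = triangular_division(x + y)(x)
eig = np.linalg.eigvalsh(u)
assert np.all(eig > 0) and np.all(eig < 1)
```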
Theorem 3.1.
Let $X$ and $Y$ be two independent Riesz random variables, $X\sim R(s,\sigma)$ and $Y\sim R(s',\sigma)$. If we set $V=X+Y$ and $U=g(X+Y)(X)$, then
i) $V$ is a Riesz random variable, $V\sim R(s+s',\sigma)$, and is independent of $U$;
ii) the density of $U$ with respect to the Lebesgue measure is
$$\frac{1}{B_{\Omega}(s,s')}\,\Delta_{s-\frac{n}{r}}(x)\Delta_{s'-\frac{n}{r}}(e-x)1_{\Omega\cap(e-\Omega)}(x),$$
where $B_{\Omega}(s,s')$ is the beta function defined on the symmetric cone $\Omega$ (see Faraut and Korányi, 1994, p.130) by
$$B_{\Omega}(s,s')=\frac{\Gamma_{\Omega}(s)\Gamma_{\Omega}(s')}{\Gamma_{\Omega}(s+s')}.$$
Proof. Consider the transformation $\Omega\times\Omega\longrightarrow(\Omega\cap(e-\Omega))\times\Omega$, $(x,y)\mapsto(u,v)$, where $v=x+y=t(e)$, $t\in T$, and $u=t^{-1}(x)$. Its Jacobian is $\det t^{-1}=(\det t(e))^{-\frac{n}{r}}=(\det v)^{-\frac{n}{r}}$. The density of probability of $(U,V)$ with respect to the Lebesgue measure is then given by
$$\frac{1}{\Gamma_{\Omega}(s)\Gamma_{\Omega}(s')\Delta_{s+s'}(\sigma^{-1})}\,\Delta_{s-\frac{n}{r}}(t(u))\,\Delta_{s'-\frac{n}{r}}(t(e-u))\,(\det v)^{\frac{n}{r}}\,e^{-\langle\sigma,v\rangle}\,1_{K}(u,v),$$
where $K=\{(u,v);\ u\in\Omega\cap(e-\Omega)\ \text{and}\ v\in\Omega\}$. Using equality (3.5) in Hassairi and Lajmi (2001), this density may be written as
$$\frac{\Gamma_{\Omega}(s+s')}{\Gamma_{\Omega}(s)\Gamma_{\Omega}(s')}\,\Delta_{s-\frac{n}{r}}(u)\,\Delta_{s'-\frac{n}{r}}(e-u)\ \frac{1}{\Gamma_{\Omega}(s+s')\Delta_{s+s'}(\sigma^{-1})}\,e^{-\langle\sigma,v\rangle}\,\Delta_{s+s'-\frac{n}{r}}(v)\,1_{K}(u,v).$$
From this we deduce that $U$ and $V$ are independent and that $V\sim R(s+s',\sigma)$. Furthermore, the distribution of $U$ is concentrated on $\Omega\cap(e-\Omega)$ with a density equal to
$$\frac{\Gamma_{\Omega}(s+s')}{\Gamma_{\Omega}(s)\Gamma_{\Omega}(s')}\,\Delta_{s-\frac{n}{r}}(u)\,\Delta_{s'-\frac{n}{r}}(e-u)\,1_{\Omega\cap(e-\Omega)}(u).\qquad\Box$$
Note that the distribution of the random variable $U=g(X+Y)(X)$ is called the beta-Riesz distribution with parameters $s$ and $s'$ (see Hassairi et al. (2005)). In the case of symmetric matrices, the random variable $U$ is nothing but $U=T^{-1}X(T^{-1})^{*}$, where $T$ is a lower triangular matrix with positive diagonal such that $X+Y=TT^{*}$.

Theorem 3.2.
Let $b\mapsto g(b)$ be the division algorithm defined by (4). Let $X$ and $Y$ be independent random variables valued in $\Omega$ with strictly positive twice differentiable densities. Set $V=X+Y$ and $U=g(V)(X)$. If $U$ and $V$ are independent, then there exist $s,s'\in\mathbb{R}^{r}$, with $s_{i}>(i-1)\frac{d}{2}$ and $s'_{i}>(i-1)\frac{d}{2}$ for all $i$, and $\sigma\in\Omega$ such that $X\sim R(s,\sigma)$ and $Y\sim R(s',\sigma)$.
The proof of this theorem relies on the resolution of two functional equations given in the following theorems, which are interesting in their own right. The proofs of these theorems are given in Section 4.

Theorem 3.3.
Let $a:\Omega\cap(e-\Omega)\longrightarrow\mathbb{R}$ and $g:\Omega\longrightarrow\mathbb{R}$ be functions such that, for any $x\in\Omega\cap(e-\Omega)$ and $t\in T$,
$$a(x)=g(tx)-g(t(e-x)). \tag{5}$$
Assume that $g$ is differentiable. Then there exist $p\in\mathbb{R}^{r}$ and $c\in\mathbb{R}$ such that, for any $x\in\Omega\cap(e-\Omega)$ and $y\in\Omega$,
$$a(x)=\log\Delta_{p}(x)-\log\Delta_{p}(e-x),\qquad g(y)=\log\Delta_{p}(y)+c.$$

Theorem 3.4.
Let $a_{1}:\Omega\cap(e-\Omega)\longrightarrow\mathbb{R}$ and $a_{2},g:\Omega\longrightarrow\mathbb{R}$ be functions satisfying
$$a_{1}(x)+a_{2}(te)=g(tx)+g(t(e-x)), \tag{6}$$
for any $x\in\Omega\cap(e-\Omega)$ and $t\in T$. Assume that $g$ is twice differentiable. Then there exist $p'\in\mathbb{R}^{r}$, $\delta\in E$ and $c_{1},c_{2},c_{3}\in\mathbb{R}$ such that, for any $x\in\Omega\cap(e-\Omega)$ and $y\in\Omega$,
$$g(y)=\log\Delta_{p'}(y)+\langle\delta,y\rangle+c_{1},$$
$$a_{1}(x)=\log\Delta_{p'}(x)+\log\Delta_{p'}(e-x)+c_{2},$$
$$a_{2}(y)=2\log\Delta_{p'}(y)+\langle\delta,y\rangle+c_{3},$$
where $c_{3}=2c_{1}-c_{2}$.

Proof of Theorem 3.2.
We again use the transformation $\Omega\times\Omega\longrightarrow(\Omega\cap(e-\Omega))\times\Omega$, $(x,y)\mapsto(u,v)$, where $v=x+y=t(e)$, $t\in T$, and $u=t^{-1}(x)$. Let $f_{X}$, $f_{Y}$, $f_{U}$ and $f_{V}$ be the densities of $X$, $Y$, $U$ and $V$, respectively. Then, since $(X,Y)$ and $(U,V)$ have independent components, we have, for all $u\in\Omega\cap(e-\Omega)$ and $v\in\Omega$,
$$f_{U}(u)f_{V}(v)=(\det v)^{\frac{n}{r}}f_{X}(tu)f_{Y}(t(e-u)). \tag{7}$$
Taking logarithms in (7), we get
$$g_{1}(u)+g_{2}(te)=g_{3}(tu)+g_{4}(t(e-u)), \tag{8}$$
where
$$g_{1}(u)=\log f_{U}(u), \tag{9}$$
$$g_{2}(v)=\log f_{V}(v)-\frac{n}{r}\log\det v, \tag{10}$$
$$g_{3}(x)=\log f_{X}(x), \tag{11}$$
and
$$g_{4}(y)=\log f_{Y}(y). \tag{12}$$
Inserting $e-u$ for $u$ in (8) gives
$$g_{1}(e-u)+g_{2}(te)=g_{3}(t(e-u))+g_{4}(tu). \tag{13}$$
Subtracting (13) from (8), we obtain
$$g_{1}(u)-g_{1}(e-u)=[g_{3}(tu)-g_{4}(tu)]-[g_{3}(t(e-u))-g_{4}(t(e-u))]. \tag{14}$$
Define $a(u)=g_{1}(u)-g_{1}(e-u)$ and $g=g_{3}-g_{4}$; then $a(u)=g(t(u))-g(t(e-u))$. Now, according to Theorem 3.3, we obtain
$$a(u)=\log\Delta_{p}(u)-\log\Delta_{p}(e-u),\qquad g(v)=\log\Delta_{p}(v)+c,$$
for some $p=(p_{1},\dots,p_{r})\in\mathbb{R}^{r}$ and $c\in\mathbb{R}$. Hence
$$g_{3}(v)=g_{4}(v)+g(v)=g_{4}(v)+\log\Delta_{p}(v)+c. \tag{15}$$
Inserting (15) back into (8) gives
$$g_{1}(u)+g_{2}(te)=\log\Delta_{p}(u)+\log\Delta_{p}(te)+c+g_{4}(t(u))+g_{4}(t(e-u)),$$
which can be rewritten in the form
$$a_{1}(u)+a_{2}(te)=g_{4}(t(u))+g_{4}(t(e-u)), \tag{16}$$
where
$$a_{1}(u)=g_{1}(u)-\log\Delta_{p}(u), \tag{17}$$
$$a_{2}(te)=g_{2}(te)-\log\Delta_{p}(te)-c. \tag{18}$$
Hence, by Theorem 3.4, it follows that
$$g_{4}(v)=\log\Delta_{p'}(v)+\langle\delta,v\rangle+c_{1},$$
$$a_{1}(u)=\log\Delta_{p'}(u)+\log\Delta_{p'}(e-u)+c_{2},$$
$$a_{2}(v)=2\log\Delta_{p'}(v)+\langle\delta,v\rangle+c_{3},$$
for some $\delta\in E$, $p'\in\mathbb{R}^{r}$ and $c_{1},c_{2},c_{3}\in\mathbb{R}$ such that $c_{3}=2c_{1}-c_{2}$. This and (12) imply
$$\log f_{Y}(y)=\log\Delta_{p'}(y)+\langle\delta,y\rangle+c_{1},$$
that is,
$$f_{Y}(y)=e^{c_{1}}e^{\langle\delta,y\rangle}\Delta_{p'}(y).$$
Since $f_{Y}$ is a probability density, it follows that $\sigma=-\delta\in\Omega$ and that $s'=p'+\frac{n}{r}$ satisfies $s'_{i}>(i-1)\frac{d}{2}$ for all $1\leq i\leq r$, so that $Y\sim R(s',\sigma)$. Now, by (15), we get
$$g_{3}(v)=\log\Delta_{p}(v)+c+\log\Delta_{p'}(v)+\langle\delta,v\rangle+c_{1}=\log\Delta_{p+p'}(v)+\langle\delta,v\rangle+c+c_{1}.$$
From (11) it follows that
$$f_{X}(x)=\Delta_{p+p'}(x)e^{\langle\delta,x\rangle}e^{c+c_{1}}=\Delta_{p+s'-\frac{n}{r}}(x)e^{-\langle\sigma,x\rangle}e^{c+c_{1}},$$
which implies that $p_{i}+s'_{i}>(i-1)\frac{d}{2}$ for all $1\leq i\leq r$, and consequently $X\sim R(s,\sigma)$ with $s=p+s'$. $\Box$

For the proof of Theorem 3.3, we need to establish some preliminary results.
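Before turning to these preliminary results, note that in rank $r=1$ (where $\Omega=(0,\infty)$, $n=1$, so $(\det v)^{n/r}=v$) the Riesz distribution $R(s,\sigma)$ is the gamma distribution with shape $s$ and rate $\sigma$, $U$ is beta distributed, and the factorization (7) can be checked numerically. A sanity check added here, using `scipy.stats`:

```python
import numpy as np
from scipy.stats import gamma, beta

# Rank r = 1: X ~ Gamma(s, sigma), Y ~ Gamma(s', sigma), V = X + Y is
# Gamma(s + s', sigma), U = X/(X+Y) is Beta(s, s').
s, sp, sigma = 2.3, 1.4, 0.8

def f_X(x): return gamma.pdf(x, a=s, scale=1 / sigma)
def f_Y(y): return gamma.pdf(y, a=sp, scale=1 / sigma)
def f_V(v): return gamma.pdf(v, a=s + sp, scale=1 / sigma)
def f_U(u): return beta.pdf(u, s, sp)

# equation (7) in rank one: f_U(u) f_V(v) = v * f_X(u v) * f_Y((1 - u) v)
for u in (0.2, 0.5, 0.9):
    for v in (0.7, 3.1):
        assert np.isclose(f_U(u) * f_V(v), v * f_X(u * v) * f_Y((1 - u) * v))
```

This is exactly the Lukacs situation recalled in the introduction: the joint density of $(U,V)$ factorizes, so $U$ and $V$ are independent.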
Proposition 4.1.
i) The map $\varphi:\Omega\longrightarrow\mathbb{R}$, $x\mapsto\log\Delta_{k}(x)$, is differentiable on $\Omega$ and $\nabla\log\Delta_{k}(x)=(P_{k}(x))^{-1}$.
ii) The differential of the map $x\mapsto(P_{k}(x))^{-1}$ is $-P((P_{k}(x))^{-1})$.

Proof.
We first observe that if $c$ is an idempotent of $E$ and $x$ is in $E(c,1)$, then
$$P(x)P(c)=P(x).$$
In fact, let $h=h_{1}+h_{1/2}+h_{0}$ be the Peirce decomposition of an element $h$ of $E$ with respect to $c$. As $x\in E(c,1)$, $P(x)(h_{1/2})=P(x)(h_{0})=0$, and it follows that
$$P(x)(h)=P(x)(h_{1})=P(x)P(c)(h).$$
i) Let $\Omega_{k}=P_{k}(\Omega)$ and consider the map $\psi:\Omega_{k}\longrightarrow\mathbb{R}$, $x\mapsto\log\det^{(k)}(x)$. Then $\varphi(x)=\psi\circ P_{k}(x)$. Since $\nabla\log\det x=x^{-1}$, we have, for all $h\in E$,
$$(\log\Delta_{k}(x))'(h)=\psi'(P_{k}(x))(P_{k}(x))'(h)=\langle(P_{k}(x))^{-1},P_{k}(h)\rangle=\langle(P_{k}(x))^{-1},h\rangle.$$
ii) The differential at $x$ of the map $\beta:x\mapsto x^{-1}$ is $-P(x^{-1})$. As $(P_{k}(x))^{-1}=\beta\circ P_{k}(x)$ for all $x\in\Omega$, the differential at $x\in\Omega$ of the map $x\mapsto(P_{k}(x))^{-1}$ is equal to $-P((P_{k}(x))^{-1})\circ P_{k}$, which, by the observation above, is also equal to $-P((P_{k}(x))^{-1})$. $\Box$

Proposition 4.2.
Let $u$ be in $E^{+}$ and let $x=t_{u}(e)$. Then, for $1\leq k\leq r$,
$$(P_{k}(x))^{-1}=t_{u}^{*-1}\Big(\sum_{i=1}^{k}c_{i}\Big).$$

Proof.
We know, from Faraut and Korányi (Proposition VI.3.10), that for $x$ in $E$ and $t_{u}$ in $T$,
$$P_{k}(t_{u}(x))=t_{P_{k}(u)}(P_{k}(x)).$$
Using (3), we can write
$$P_{k}(x)=\tau(\tilde{z}^{(1)})\cdots\tau(\tilde{z}^{(k-1)})P\Big(\sum_{i=1}^{k}u_{i}c_{i}\Big)(c_{1}+\dots+c_{k})=\tau(\tilde{z}^{(1)})\cdots\tau(\tilde{z}^{(k-1)})\Big(\sum_{i=1}^{k}u_{i}^{2}c_{i}\Big),$$
where
$$\tilde{z}^{(j)}=\sum_{l=j+1}^{k}z_{jl}\quad\text{and}\quad z_{jl}=\frac{u_{jl}}{u_{j}},\ \ j<l.$$
Hence
$$(P_{k}(x))^{-1}=\tau(-\tilde{z}^{(1)})^{*}\cdots\tau(-\tilde{z}^{(k-1)})^{*}\Big(\sum_{i=1}^{k}u_{i}^{-2}c_{i}\Big). \tag{19}$$
On the other hand,
$$t_{u}^{*-1}\Big(\sum_{i=1}^{k}c_{i}\Big)=\tau(-z^{(1)})^{*}\cdots\tau(-z^{(r-1)})^{*}\Big(\sum_{i=1}^{k}u_{i}^{-2}c_{i}\Big).$$
Let us recall that if $c$ is an idempotent, $z$ is in $E(c,\frac12)$ and $x_{1}$, $x_{1/2}$, $x_{0}$ are the Peirce components of $x$ with respect to $c$, then the Peirce components of $y=\tau_{c}(z)^{*}(x)$ are
$$y_{1}=2c[z(zx_{0})+zx_{1/2}]+x_{1},\qquad y_{1/2}=2zx_{0}+x_{1/2},\qquad y_{0}=x_{0}. \tag{20}$$
This, after some elementary calculations, implies that, for $k+1\leq j\leq r-1$, since $z^{(j)}\in E(c_{j},\frac12)$ and $\sum_{i=1}^{k}u_{i}^{-2}c_{i}\in E(c_{j},0)$,
$$\tau(-z^{(j)})^{*}\Big(\sum_{i=1}^{k}u_{i}^{-2}c_{i}\Big)=\sum_{i=1}^{k}u_{i}^{-2}c_{i},\quad\forall\,k+1\leq j\leq r-1,$$
and it follows that
$$t_{u}^{*-1}\Big(\sum_{i=1}^{k}c_{i}\Big)=\tau(-z^{(1)})^{*}\cdots\tau(-z^{(k)})^{*}\Big(\sum_{i=1}^{k}u_{i}^{-2}c_{i}\Big).$$
Using again (20), we have that
$$t_{u}^{*-1}\Big(\sum_{i=1}^{k}c_{i}\Big)=\tau(-z^{(1)})^{*}\cdots\tau(-z^{(k-1)})^{*}\Big(\sum_{i=1}^{k}u_{i}^{-2}c_{i}\Big).$$
From this, we deduce that, for $1\leq j\leq k-1$ and $a\in E(c_{1}+\dots+c_{k},1)$,
$$\tau(-z^{(j)})^{*}(a)=\tau(-\tilde{z}^{(j)})^{*}(a).$$
Finally, we conclude that
$$t_{u}^{*-1}\Big(\sum_{i=1}^{k}c_{i}\Big)=\tau(-z^{(1)})^{*}\cdots\tau(-z^{(k-1)})^{*}\Big(\sum_{i=1}^{k}u_{i}^{-2}c_{i}\Big)=\tau(-\tilde{z}^{(1)})^{*}\cdots\tau(-\tilde{z}^{(k-1)})^{*}\Big(\sum_{i=1}^{k}u_{i}^{-2}c_{i}\Big)=(P_{k}(x))^{-1}.\qquad\Box$$
For the proof of Theorem 3.3, we also need the following result, for which a proof is given in Hassairi and Lajmi (2001); it expresses, for $h=\sum_{i=1}^{r}h_{i}c_{i}+\sum_{i<l}h_{il}$ in $E$, the differential $H'(u)(h)$ of the map $H:E^{+}\longrightarrow\mathcal{L}(E)$, $u\mapsto t_{u}$, in terms of operators $K_{hj}(u)$. Note that for $u=e$ we get $K_{hj}(e)=2h^{(j)}\Box c_{j}$ and
$$H'(e)(h)=\sum_{j=1}^{r-1}K_{hj}(e)+2L(h)=2\Big[\sum_{j=1}^{r-1}h^{(j)}\Box c_{j}+L(h)\Big]. \tag{21}$$
We are now in position to prove Theorem 3.3.

Proof of Theorem 3.3.
Differentiating (5) with respect to $x$ gives
$$a'(x)=t^{*}g'(tx)+t^{*}g'(t(e-x)). \tag{22}$$
Setting $x=\frac{e}{2}$ and replacing $t$ by $2t$ in (22) gives
$$g'(te)=t^{*-1}b,\quad\text{where } b=\tfrac14\,a'\big(\tfrac{e}{2}\big)=g'(e). \tag{23}$$
Taking $t=\mathrm{Id}$ in (22) gives $a'(x)=g'(x)+g'(e-x)$. Inserting this identity back into (22) gives
$$t^{*-1}g'(x)-g'(tx)=-\big[t^{*-1}g'(e-x)-g'(t(e-x))\big].$$
For any $s\in(0,1)$ and $x=te$, we have
$$g'(sx)=g'(ste)=\frac1s\,t^{*-1}b=\frac1s\,g'(x).$$
Hence, for any $s\in(0,1)$,
$$t^{*-1}g'(x)-g'(tx)=-s\,\big[t^{*-1}g'(e-sx)-g'(t(e-sx))\big].$$
Consequently, on letting $s\longrightarrow 0$, we obtain, for any $x\in\Omega\cap(e-\Omega)$ and $t\in T$,
$$g'(tx)=t^{*-1}g'(x). \tag{24}$$
Now consider $u\in E^{+}$. Then (5) can be rewritten as
$$a(x)=g(t_{u}(x))-g(t_{u}(e-x)). \tag{25}$$
To differentiate (25) with respect to $u$, let us consider the functions $H:E^{+}\longrightarrow\mathcal{L}(E)$, $u\mapsto t_{u}$, and $\alpha:\mathcal{L}(E)\longrightarrow E$, $f\mapsto f(x)$, so that, for $u\in E^{+}$, we can write $g(t_{u}(x))=(g\circ\alpha\circ H)(u)$. From Lemma 4.3, we have, for all $h\in E$,
$$(g\circ\alpha\circ H)'(u)(h)=g'(t_{u}(x))\,\alpha'(H(u))H'(u)(h)=g'(t_{u}(x))\,\alpha(H'(u)(h))=\langle g'(t_{u}(x)),H'(u)(h)(x)\rangle.$$
Then, for all $h\in E$,
$$\langle g'(t_{u}(x)),H'(u)(h)(x)\rangle=\langle g'(t_{u}(e-x)),H'(u)(h)(e-x)\rangle.$$
Hence, by (24), we get
$$\langle t_{u}^{*-1}g'(x),H'(u)(h)(x)\rangle=\langle t_{u}^{*-1}g'(e-x),H'(u)(h)(e-x)\rangle,$$
and, for any $s\in(0,1)$,
$$\langle t_{u}^{*-1}g'(sx),H'(u)(h)(sx)\rangle=\langle t_{u}^{*-1}g'(x),H'(u)(h)(x)\rangle=\langle t_{u}^{*-1}g'(e-sx),H'(u)(h)(e-sx)\rangle.$$
Letting $s\longrightarrow 0$, we get
$$\langle t_{u}^{*-1}g'(x),H'(u)(h)(x)\rangle=\langle t_{u}^{*-1}g'(e),H'(u)(h)(e)\rangle.$$
Note that if $u=e$, we have, for all $x\in\Omega\cap(e-\Omega)$,
$$\langle g'(x),H'(e)(h)(x)\rangle=\langle b,H'(e)(h)(e)\rangle,\quad\forall h\in E. \tag{26}$$
Let $b=\sum_{i=1}^{r}b_{i}c_{i}+\sum_{i<l}b_{il}$.

Proposition 4.4.
Let $x=\sum_{i=1}^{r}x_{i}c_{i}+\sum_{i<j}x_{ij}$ be in $E$ and let $q_{1},\dots,q_{r}$ be real numbers. Then the element $\sum_{i=1}^{r}q_{i}x_{i}c_{i}+\sum_{i<j}q_{j}x_{ij}$ is the image of $x$ under a linear combination of the operators $P(c_{1}+\dots+c_{k})$, $1\leq k\leq r$.

We use induction on the rank $r$ of the algebra. The result is obvious for $r=2$; in fact, we have
$$q_{1}x_{1}c_{1}+q_{2}x_{2}c_{2}+q_{2}x_{12}=q_{2}(x_{1}c_{1}+x_{2}c_{2}+x_{12})+(q_{1}-q_{2})x_{1}c_{1}=\big[q_{2}P(c_{1}+c_{2})+(q_{1}-q_{2})P(c_{1})\big](x).$$
Suppose the result true for a Jordan algebra of rank $r-1$. We easily verify that $\sum_{i=1}^{r}q_{i}x_{i}c_{i}+\sum_{i<j}q_{j}x_{ij}$ reduces to this case, and the proposition follows. $\Box$

Proof of Theorem 3.4. Arguing as in the proof of Theorem 3.3 and letting $s\longrightarrow 0$, we obtain, for any $x\in\Omega\cap(e-\Omega)$ and $t\in T$,
$$g''(tx)=t^{*-1}g''(x)\,t^{-1}. \tag{32}$$
Let us now consider $u\in E^{+}$; then (28) can be rewritten in the form
$$a_{1}'(x)=t_{u}^{*}g'(t_{u}(x))-t_{u}^{*}g'(t_{u}(e-x)). \tag{33}$$
In order to differentiate (33) with respect to $u$, we introduce the functions $H:E^{+}\longrightarrow\mathcal{L}(E)$, $u\mapsto t_{u}$; $\pi:\mathcal{L}(E)\longrightarrow\mathcal{L}(E)$, $f\mapsto f^{*}$; and $\phi:\mathcal{L}(E)\longrightarrow E$, $f\mapsto f(x)$. We have
$$(\pi\circ H)'(u)(h)=H'(u)(h)^{*},\quad\forall h\in E.$$
As, for any $u\in E^{+}$, we have $g'(t_{u}(x))=(g'\circ\phi\circ H)(u)$, then
$$(g'\circ\phi\circ H)'(u)(h)=g''(t_{u}(x))H'(u)(h)(x).$$
We also consider the functions
$$\Psi:E^{+}\longrightarrow\mathcal{L}(E)\times E,\quad u\mapsto\big(t_{u}^{*},g'(t_{u}(x))\big),\ \text{where $x$ is fixed},$$
and
$$\zeta:\mathcal{L}(E)\times E\longrightarrow E,\quad (f,z)\mapsto f(z).$$
Then one can easily see that
$$\Psi'(u)(h)=\big(H'(u)(h)^{*},\,g''(t_{u}(x))H'(u)(h)(x)\big),$$
and it follows that
$$(\zeta\circ\Psi)'(u)(h)=\zeta'(\Psi(u))\Psi'(u)(h)=\zeta\big(t_{u}^{*},g''(t_{u}(x))H'(u)(h)(x)\big)+\zeta\big(H'(u)(h)^{*},g'(t_{u}(x))\big)=t_{u}^{*}g''(t_{u}(x))H'(u)(h)(x)+H'(u)(h)^{*}\big(g'(t_{u}(x))\big).$$
Now, differentiating (33) with respect to $u$ gives
$$t_{u}^{*}g''(t_{u}(x))H'(u)(h)(x)+H'(u)(h)^{*}g'(t_{u}(x))=t_{u}^{*}g''(t_{u}(e-x))H'(u)(h)(e-x)+H'(u)(h)^{*}g'(t_{u}(e-x)).$$
If we take $u=e$, we get
$$g''(x)H'(e)(h)(x)+H'(e)(h)^{*}g'(x)=g''(e-x)H'(e)(h)(e-x)+H'(e)(h)^{*}g'(e-x). \tag{34}$$
That is,
$$\big(g''(x)+g''(e-x)\big)H'(e)(h)(x)+H'(e)(h)^{*}\big(g'(x)-g'(e-x)\big)=g''(e-x)H'(e)(h)(e).$$
Using (28) and (31), this becomes
$$a_{1}''(x)H'(e)(h)(x)+H'(e)(h)^{*}a_{1}'(x)=g''(e-x)H'(e)(h)(e).$$
On the other hand, we know that
$$a_{1}''(e-x)=a_{1}''(x),\qquad a_{1}'(e-x)=-a_{1}'(x).$$
Hence
$$a_{1}''(x)H'(e)(h)(e-x)-H'(e)(h)^{*}a_{1}'(x)=g''(x)H'(e)(h)(e).$$
Given (31), we obtain
$$H'(e)(h)^{*}a_{1}'(x)=g''(e-x)H'(e)(h)(e-x)-g''(x)H'(e)(h)(x),\quad\forall h\in E.$$
Setting $h=e$ in this equality, we obtain, by (21),
$$a_{1}'(x)=g''(e-x)(e-x)-g''(x)(x). \tag{35}$$
Therefore, using (35) and the fact that, for all $x$ in $\Omega\cap(e-\Omega)$, $a_{1}'(x)=g'(x)-g'(e-x)$, the equality (34) becomes
$$g''(x)H'(e)(h)(x)-H'(e)(h)^{*}g''(x)(x)=g''(e-x)H'(e)(h)(e-x)-H'(e)(h)^{*}g''(e-x)(e-x). \tag{36}$$
For $s\in(0,1)$, change $x$ to $sx$ to obtain
$$g''(x)H'(e)(h)(x)-H'(e)(h)^{*}g''(x)(x)=s\,\big[g''(e-sx)H'(e)(h)(e-sx)-H'(e)(h)^{*}g''(e-sx)(e-sx)\big].$$
Letting $s\longrightarrow 0$ implies that, for any $x\in\Omega\cap(e-\Omega)$ and $h\in E$,
$$H'(e)(h)^{*}g''(x)(x)=g''(x)H'(e)(h)(x). \tag{37}$$
If $x=e$, then, using the fact that $g''(e)=4B$, we can write
$$H'(e)(h)^{*}B(e)=BH'(e)(h)(e),\quad\forall h\in E. \tag{38}$$
Now let $B(e)=\sum_{i=1}^{r}\lambda_{i}c_{i}+\sum_{i<l}\lambda_{il}$.

Acknowledgment. We are very grateful to Jacek Wesolowski for discussions on the subject of this paper.

References

1. Bobecka, K., and Wesolowski, J. (2002). The Lukacs-Olkin-Rubin theorem without invariance of the "quotient", Studia Math. 152, 147-160.
2. Casalis, M., and Letac, G. (1996). The Lukacs-Olkin-Rubin characterization of Wishart distributions on symmetric cones, Ann. Statist. 24, 763-786.
3. Faraut, J., and Korányi, A. (1994). Analysis on Symmetric Cones, Oxford Univ. Press.
4. Hassairi, A., and Lajmi, S. (2001). Riesz exponential families on symmetric cones, J. Theoret. Probab. 14, 927-948.
5. Hassairi, A., Lajmi, S., and Zine, R. (2005). Beta-Riesz distributions on symmetric cones, J. Stat. Plann. Inf. 133, 387-404.
6. Lajmi, S. (1997). Une caractérisation des familles exponentielles de Riesz, C. R. Acad. Sci. Paris, t. 325, 915-920.
7. Lajmi, S. (1998). Les familles exponentielles de Riesz sur les cônes symétriques, Thèse, Université de Tunis.
8. Lukacs, E. (1955). A characterization of the gamma distribution, Ann. Math. Statist. 26, 319-324.
9. Massam, H., and Neher, E. (1997). On transformations and determinants of Wishart variables on symmetric cones, J. Theoret. Probab. 10, 867-902.
10. Olkin, I., and Rubin, H. (1962). A characterization of the Wishart distribution, Ann. Math. Statist. 33, 1272-1280.