Explicit expressions for joint moments of n-dimensional elliptical distributions
Baishuai Zuo^a, Chuancun Yin^{a,*}, Narayanaswamy Balakrishnan^{b,*}

^a School of Statistics, Qufu Normal University, Qufu, Shandong 273165, China
^b Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, Canada
Abstract
Inspired by Stein's lemma, we derive two expressions for the joint moments of elliptical distributions. We use two different methods to derive $E[X_1^2 f(\mathbf{X})]$ for any measurable function $f$ satisfying some regularity conditions. Then, by applying this result, we obtain new formulae for expectations of products of normally distributed random variables, and also present simplified expressions of $E[X_1^2 f(\mathbf{X})]$ for multivariate Student-$t$, logistic and Laplace distributions.

Keywords: Multivariate elliptical distributions; Multivariate normal distribution; Joint moments; Stein's lemma
1. Introduction and motivation
Stein's (1981) lemma discusses the determination of Cov(X, h(Y)) for a bivariate normal random vector (X, Y), where h is any differentiable function with finite expectation E|h'(Y)|. Inspired by the original work of Stein, several extensions and generalizations have appeared in the literature. Liu (1994) generalized the lemma to the multivariate normal case, while Landsman (2006) showed that a Stein-type lemma also holds when (X, Y) is distributed as bivariate elliptical. This result was extended to multivariate elliptical vectors

* Corresponding author.
Email addresses: [email protected] (Chuancun Yin), [email protected] (Narayanaswamy Balakrishnan)
Preprint submitted to Elsevier, August 4, 2020

by Landsman and Nešlehová (2008); see also Landsman et al. (2013) for a simple proof. Recently, Shushi (2018) derived the multivariate Stein's lemma for truncated elliptical random vectors. In this work, we derive expressions for the joint moments $E[X_1^2 f(\mathbf{X})]$ for any measurable function $f$ satisfying some regularity conditions. In particular, we obtain new formulae for the expectation of products of normally distributed random variables, and also present expressions of $E[X_1^2 f(\mathbf{X})]$ for multivariate Student-$t$, logistic and Laplace distributions.

The rest of the paper is organized as follows. Section 2 reviews some definitions and properties of the family of elliptical distributions. Section 3 presents explicit expressions for joint moments of elliptical distributions by a direct method. Section 4 derives these joint moments by the use of Stein's lemma. Section 5 shows the equivalence of the two expressions under the condition that the scale matrix is positive definite. Section 6 presents expressions for expectations of products of correlated normal variables. Section 7 presents simplified results for the special cases of multivariate Student-$t$, logistic and Laplace distributions as illustrative examples of the general results established here. Finally, Section 8 gives some concluding remarks.
2. Family of elliptical distributions
Elliptical distributions are generalizations of the multivariate normal distribution and possess many tractable properties, in addition to allowing fat tails with suitably chosen kernels. This class of distributions was first introduced by Kelker (1970), and has been discussed in detail by Fang et al. (1990) and Kotz et al. (2000). An $n \times 1$ random vector $\mathbf{X} = (X_1, \dots, X_n)^T$ is said to have an elliptically symmetric distribution if its characteristic function has the form
\[
E[\exp(i\mathbf{t}^T\mathbf{X})] = e^{i\mathbf{t}^T\boldsymbol{\mu}}\,\phi\!\left(\tfrac{1}{2}\mathbf{t}^T\Sigma\mathbf{t}\right) \quad \text{for all } \mathbf{t}\in\mathbb{R}^n,
\]
denoted $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,\phi)$, where $\phi$ is called the characteristic generator with $\phi(0)=1$, $\boldsymbol{\mu}$ (an $n$-dimensional vector) is the location parameter, and $\Sigma$ (an $n\times n$ matrix with $\Sigma \geq 0$) is the dispersion matrix (or scale matrix). The mean vector $E(\mathbf{X})$ (if it exists) coincides with the location vector, and the covariance matrix $\mathrm{Cov}(\mathbf{X})$ (if it exists) is $-\phi'(0)\Sigma$. The generator of the multivariate normal distribution, for example, is given by $\phi(u) = \exp(-u)$. In general, the elliptical vector $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,\phi)$ may not have a density function. However, if the density $f_{\mathbf{X}}(\mathbf{x})$ exists, then it is of the form
\[
f_{\mathbf{X}}(\mathbf{x}) = \frac{c_n}{\sqrt{|\Sigma|}}\, g_n\!\left(\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right), \quad \mathbf{x}\in\mathbb{R}^n, \tag{1}
\]
where $\boldsymbol{\mu}$ is an $n\times 1$ location vector, $\Sigma$ is an $n\times n$ positive definite scale matrix, and $g_n(u)$, $u\geq 0$, is the density generator of $\mathbf{X}$. This density generator satisfies the condition
\[
\int_0^\infty t^{n/2-1} g_n(t)\,\mathrm{d}t < \infty,
\]
and the normalizing constant $c_n$ is given by
\[
c_n = \frac{\Gamma(n/2)}{(2\pi)^{n/2}}\left[\int_0^\infty t^{n/2-1} g_n(t)\,\mathrm{d}t\right]^{-1}. \tag{2}
\]
Two important special cases are the multivariate normal family with $g_n(u) = e^{-u}$, and the multivariate generalized Student-$t$ family with $g_n(u) = (1 + u/k_{n,p})^{-p}$, where the parameter $p > n/2$ and $k_{n,p}$ is some constant that may depend on $n$ and $p$.

To derive the mixed moments of elliptical distributions, we use the cumulative generators $G_n(u)$ and $\overline{G}_n(u)$, which are given by
\[
G_n(u) = \int_u^\infty g_n(v)\,\mathrm{d}v \tag{3}
\]
and
\[
\overline{G}_n(u) = \int_u^\infty G_n(v)\,\mathrm{d}v, \tag{4}
\]
respectively (see Landsman et al. (2018)), and the corresponding normalizing constants are
\[
c_n^{*} = \frac{\Gamma(n/2)}{(2\pi)^{n/2}}\left[\int_0^\infty t^{n/2-1} G_n(t)\,\mathrm{d}t\right]^{-1} \tag{5}
\]
and
\[
c_n^{**} = \frac{\Gamma(n/2)}{(2\pi)^{n/2}}\left[\int_0^\infty t^{n/2-1} \overline{G}_n(t)\,\mathrm{d}t\right]^{-1}. \tag{6}
\]
Throughout this paper, $\mathbf{x}\in\mathbb{R}^n$ will denote an $n$-dimensional column vector and $\mathbf{x}^T = (x_1,\dots,x_n)$ its transpose. For an $n\times n$ matrix $\Sigma\in\mathbb{R}^{n\times n}$, $|\Sigma|$ is the determinant of $\Sigma$. If $\Sigma$ is positive definite, then its Cholesky decomposition is known to be unique.
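As a quick numerical sanity check of the normalizing constant in (2), the following sketch (standard library only; the function name `c_n` and the truncation and step-count parameters are our own choices, not from the paper) evaluates the one-dimensional integral by the midpoint rule. For the normal generator $g_n(u) = e^{-u}$ the integral equals $\Gamma(n/2)$, so $c_n$ should reduce to $(2\pi)^{-n/2}$.

```python
import math

def c_n(g, n, upper=60.0, steps=200_000):
    """Normalizing constant (2): c_n = Gamma(n/2)/(2*pi)^(n/2) / Int_0^inf t^(n/2-1) g(t) dt,
    with the integral truncated at `upper` and approximated by the midpoint rule."""
    h = upper / steps
    integral = sum(((i + 0.5) * h) ** (n / 2 - 1) * g((i + 0.5) * h)
                   for i in range(steps)) * h
    return math.gamma(n / 2) / (2 * math.pi) ** (n / 2) / integral

# Normal generator g_n(u) = exp(-u): c_n should equal (2*pi)^(-n/2).
for n in (2, 3):
    print(n, c_n(lambda t: math.exp(-t), n), (2 * math.pi) ** (-n / 2))
```

The midpoint rule avoids the integrable singularity of $t^{n/2-1}$ at the origin for small $n$; any quadrature routine would do equally well here.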
3. Direct method of derivation
Consider a random vector $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,g_n)$ with mean vector $\boldsymbol{\mu}=(\mu_1,\dots,\mu_n)^T$ and positive definite matrix $\Sigma = (\sigma_{ij})_{i,j=1}^n$. Partition $\mathbf{y}\in\mathbb{R}^n$ into two parts as $\mathbf{y} = (y_1, \mathbf{y}_{(2)})^T$, with $1$ and $n-1$ components, respectively. Let $A = (a_{ij})_{i,j=1}^n$ be lower triangular such that $AA^T = \Sigma$. In terms of components, we have
\[
a_{11} = \sqrt{\sigma_{11}}, \quad a_{i1} = \frac{\sigma_{i1}}{a_{11}},\ i = 2,\dots,n, \quad a_{ik} = 0,\ i < k, \tag{7}
\]
\[
a_{kk} = \sqrt{\sigma_{kk} - \sum_{i=1}^{k-1} a_{ki}^2}, \qquad a_{ik} = \frac{\sigma_{ik} - \sum_{j=1}^{k-1} a_{ij}a_{kj}}{a_{kk}},\ i = k+1,\dots,n. \tag{8}
\]
Let $f:\mathbb{R}^n\to\mathbb{R}$ be a twice continuously differentiable function; we shall use $(\nabla_{i,j} f(\mathbf{x}))_{i,j=1}^n$ to denote the Hessian matrix of $f$. In addition, we denote
\[
\nabla_{i,j} f(\mathbf{x}) = \frac{\partial^2 f(\mathbf{x})}{\partial x_i\,\partial x_j}, \qquad \nabla_i f(\mathbf{x}) = \frac{\partial f(\mathbf{x})}{\partial x_i}, \qquad i,j = 1,2,\dots,n,
\]
and
\[
\nabla f(\mathbf{x}) = \left(\frac{\partial f(\mathbf{x})}{\partial x_1}, \frac{\partial f(\mathbf{x})}{\partial x_2}, \dots, \frac{\partial f(\mathbf{x})}{\partial x_n}\right)^T.
\]
Let $\mathbf{X}^*\sim E_n(\boldsymbol{\mu},\Sigma,G_n)$ and $\mathbf{X}^{**}\sim E_n(\boldsymbol{\mu},\Sigma,\overline{G}_n)$ be two elliptical random vectors with generators $G_n(u)$ and $\overline{G}_n(u)$, respectively. The following theorem gives an expression for joint moments of elliptical distributions.

Theorem 1.
Let $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,g_n)$ be an $n$-dimensional elliptical random vector with density generator $g_n$, positive definite matrix $\Sigma = (\sigma_{ij})_{i,j=1}^n$, and finite expectation $\boldsymbol{\mu}$. Further, let $f:\mathbb{R}^n\to\mathbb{R}$ be a twice continuously differentiable function satisfying $E[\nabla_{i,j} f(\mathbf{X}^{**})] < \infty$ and $E[\nabla_i f(\mathbf{X}^*)] < \infty$. Let, in addition,
\[
\lim_{|\mathbf{x}|\to\infty} x_1 f(A\mathbf{x}+\boldsymbol{\mu})\, G_n\!\left(\frac{\mathbf{x}^T\mathbf{x}}{2}\right) = 0 \tag{9}
\]
and
\[
\lim_{|\mathbf{x}|\to\infty} [\nabla f(A\mathbf{x}+\boldsymbol{\mu})]\, \overline{G}_n\!\left(\frac{\mathbf{x}^T\mathbf{x}}{2}\right) = \mathbf{0}. \tag{10}
\]
Then,
\[
E[X_1^2 f(\mathbf{X})] = \sigma_{11} b_n^{*} E[f(\mathbf{X}^*)] + b_n^{**}\sum_{i=1}^n\sum_{j=1}^n \sigma_{1i}\sigma_{1j} E[\nabla_{i,j} f(\mathbf{X}^{**})] + 2\mu_1 b_n^{*}\sum_{i=1}^n \sigma_{1i} E[\nabla_i f(\mathbf{X}^*)] + \mu_1^2 E[f(\mathbf{X})], \tag{11}
\]
where $b_n^{*} = c_n/c_n^{*}$ and $b_n^{**} = c_n/c_n^{**}$.

Proof. We have
\[
E[X_1^2 f(\mathbf{X})] = \frac{c_n}{\sqrt{|\Sigma|}}\int_{\mathbb{R}^n} x_1^2 f(\mathbf{x})\, g_n\!\left\{\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right\}\mathrm{d}\mathbf{x}
= \frac{c_n}{\sqrt{|\Sigma|}}\int_{\mathbb{R}^n} x_1^2 f(\mathbf{x})\, g_n\!\left\{\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T(AA^T)^{-1}(\mathbf{x}-\boldsymbol{\mu})\right\}\mathrm{d}\mathbf{x}.
\]
Now, setting $\mathbf{y} = A^{-1}(\mathbf{x}-\boldsymbol{\mu})$, so that $x_1 = a_{11}y_1 + \mu_1$ and $\mathrm{d}\mathbf{x} = |A|\,\mathrm{d}\mathbf{y}$, we obtain
\begin{align*}
E[X_1^2 f(\mathbf{X})] &= \frac{c_n|A|}{\sqrt{|\Sigma|}}\int_{\mathbb{R}^n} (a_{11}y_1+\mu_1)^2 f(A\mathbf{y}+\boldsymbol{\mu})\, g_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}\mathbf{y} \\
&= a_{11}^2 c_n\int_{\mathbb{R}^n} y_1^2 f(A\mathbf{y}+\boldsymbol{\mu})\, g_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}\mathbf{y}
+ 2a_{11}\mu_1 c_n\int_{\mathbb{R}^n} y_1 f(A\mathbf{y}+\boldsymbol{\mu})\, g_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}\mathbf{y}
+ \mu_1^2 c_n\int_{\mathbb{R}^n} f(A\mathbf{y}+\boldsymbol{\mu})\, g_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}\mathbf{y} \\
&= a_{11}^2 c_n\int_{\mathbb{R}^{n-1}} I_1\,\mathrm{d}\mathbf{y}_{(2)} + 2a_{11}\mu_1 c_n\int_{\mathbb{R}^{n-1}} I_2\,\mathrm{d}\mathbf{y}_{(2)} + \mu_1^2 E[f(\mathbf{X})],
\end{align*}
where, using (9), (10) and integration by parts,
\begin{align*}
I_1 &= \int_{\mathbb{R}} y_1^2 f(A\mathbf{y}+\boldsymbol{\mu})\, g_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}y_1
= -\int_{\mathbb{R}} y_1 f(A\mathbf{y}+\boldsymbol{\mu})\,\frac{\partial}{\partial y_1} G_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}y_1 \\
&= \int_{\mathbb{R}} \left[f(A\mathbf{y}+\boldsymbol{\mu}) + y_1\frac{\partial}{\partial y_1} f(A\mathbf{y}+\boldsymbol{\mu})\right] G_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}y_1 \\
&= \int_{\mathbb{R}} f(A\mathbf{y}+\boldsymbol{\mu})\, G_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}y_1
- \int_{\mathbb{R}} \frac{\partial}{\partial y_1} f(A\mathbf{y}+\boldsymbol{\mu})\,\frac{\partial}{\partial y_1}\overline{G}_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}y_1 \\
&= \int_{\mathbb{R}} f(A\mathbf{y}+\boldsymbol{\mu})\, G_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}y_1
+ \int_{\mathbb{R}} \frac{\partial^2}{\partial y_1^2} f(A\mathbf{y}+\boldsymbol{\mu})\,\overline{G}_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}y_1,
\end{align*}
\[
I_2 = \int_{\mathbb{R}} y_1 f(A\mathbf{y}+\boldsymbol{\mu})\, g_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}y_1
= -\int_{\mathbb{R}} f(A\mathbf{y}+\boldsymbol{\mu})\,\frac{\partial}{\partial y_1} G_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}y_1
= \int_{\mathbb{R}} \frac{\partial}{\partial y_1} f(A\mathbf{y}+\boldsymbol{\mu})\, G_n\!\left\{\frac{\mathbf{y}^T\mathbf{y}}{2}\right\}\mathrm{d}y_1.
\]
By the chain rule, $\frac{\partial}{\partial y_1} f(A\mathbf{y}+\boldsymbol{\mu}) = A_1^T\nabla f(A\mathbf{y}+\boldsymbol{\mu})$ and $\frac{\partial^2}{\partial y_1^2} f(A\mathbf{y}+\boldsymbol{\mu}) = A_1^T(\nabla_{i,j} f(A\mathbf{y}+\boldsymbol{\mu}))_{i,j=1}^n A_1$, where $A_1$ is the first column of $A = (A_1,\dots,A_n)$. We then obtain
\[
E[X_1^2 f(\mathbf{X})] = a_{11}^2\left\{b_n^{*} E[f(\mathbf{X}^*)] + b_n^{**} A_1^T E\big(\nabla_{i,j} f(\mathbf{X}^{**})\big)_{i,j=1}^n A_1\right\} + 2a_{11}\mu_1 b_n^{*} A_1^T E[\nabla f(\mathbf{X}^*)] + \mu_1^2 E[f(\mathbf{X})].
\]
Now, upon using (7) and (8), so that $a_{11}a_{i1} = \sigma_{1i}$, we obtain (11), completing the proof of the theorem.

Kan (2008) and Song and Lee (2015) have presented explicit formulae for product moments of multivariate Gaussian random variables. The following corollary presents an explicit expression for product moments of multivariate elliptical random variables, in general.

Corollary 1. Suppose $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,g_n)$, $\mathbf{X}^*\sim E_n(\boldsymbol{\mu},\Sigma,G_n)$ and $\mathbf{X}^{**}\sim E_n(\boldsymbol{\mu},\Sigma,\overline{G}_n)$. Let $p_1,\dots,p_n$ be nonnegative integers with $p_1\geq 2$. Then, we have
\begin{align*}
E\left[\prod_{i=1}^n X_i^{p_i}\right] &= b_n^{*}\sigma_{11}\, E\left[(X_1^{*})^{p_1-2}\prod_{k=2}^n (X_k^{*})^{p_k}\right] \\
&\quad + b_n^{**}\Bigg\{\sigma_{11}^2 (p_1-2)(p_1-3)\, E\left[(X_1^{**})^{p_1-4}\prod_{k=2}^n (X_k^{**})^{p_k}\right] \\
&\qquad + 2\sum_{j=2}^n \sigma_{11}\sigma_{1j}(p_1-2)p_j\, E\left[(X_1^{**})^{p_1-3}(X_j^{**})^{p_j-1}\prod_{k=2,\,k\neq j}^n (X_k^{**})^{p_k}\right] \\
&\qquad + \sum_{j=2}^n \sigma_{1j}^2\, p_j(p_j-1)\, E\left[(X_1^{**})^{p_1-2}(X_j^{**})^{p_j-2}\prod_{k=2,\,k\neq j}^n (X_k^{**})^{p_k}\right] \\
&\qquad + \sum_{j=2}^n\sum_{i=2,\,i\neq j}^n \sigma_{1j}\sigma_{1i}\, p_j p_i\, E\left[(X_i^{**})^{p_i-1}(X_j^{**})^{p_j-1}(X_1^{**})^{p_1-2}\prod_{k=2,\,k\neq i,j}^n (X_k^{**})^{p_k}\right]\Bigg\} \\
&\quad + 2 b_n^{*}\mu_1\Bigg\{\sigma_{11}(p_1-2)\, E\left[(X_1^{*})^{p_1-3}\prod_{k=2}^n (X_k^{*})^{p_k}\right] + \sum_{j=2}^n \sigma_{1j} p_j\, E\left[(X_j^{*})^{p_j-1}(X_1^{*})^{p_1-2}\prod_{k=2,\,k\neq j}^n (X_k^{*})^{p_k}\right]\Bigg\} \\
&\quad + \mu_1^2\, E\left[X_1^{p_1-2}\prod_{k=2}^n X_k^{p_k}\right]. \tag{12}
\end{align*}
From Corollary 1, we readily deduce the following relation for the special case of $p_1 = 3$, for example:
\begin{align*}
E\left[X_1^3\prod_{i=2}^n X_i^{p_i}\right] &= b_n^{*}\sigma_{11}\, E\left[X_1^{*}\prod_{k=2}^n (X_k^{*})^{p_k}\right] + \mu_1^2\, E\left[X_1\prod_{k=2}^n X_k^{p_k}\right] \\
&\quad + b_n^{**}\Bigg\{2\sum_{j=2}^n \sigma_{11}\sigma_{1j} p_j\, E\left[(X_j^{**})^{p_j-1}\prod_{k=2,\,k\neq j}^n (X_k^{**})^{p_k}\right]
+ \sum_{j=2}^n \sigma_{1j}^2\, p_j(p_j-1)\, E\left[X_1^{**}(X_j^{**})^{p_j-2}\prod_{k=2,\,k\neq j}^n (X_k^{**})^{p_k}\right] \\
&\qquad + \sum_{j=2}^n\sum_{i=2,\,i\neq j}^n \sigma_{1j}\sigma_{1i}\, p_j p_i\, E\left[(X_i^{**})^{p_i-1}(X_j^{**})^{p_j-1}X_1^{**}\prod_{k=2,\,k\neq i,j}^n (X_k^{**})^{p_k}\right]\Bigg\} \\
&\quad + 2 b_n^{*}\mu_1\Bigg\{\sigma_{11}\, E\left[\prod_{k=2}^n (X_k^{*})^{p_k}\right] + \sum_{j=2}^n \sigma_{1j} p_j\, E\left[(X_j^{*})^{p_j-1}X_1^{*}\prod_{k=2,\,k\neq j}^n (X_k^{*})^{p_k}\right]\Bigg\}.
\end{align*}
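As an illustration of (11), the sketch below (standard library only; the particular µ, Σ, test function f(x) = x₂² and sample size are hypothetical choices of ours) runs a Monte Carlo check in the bivariate normal case, where b*ₙ = b**ₙ = 1 and X* = X** = X (see Section 6). The triangular factor A is built with the recursions (7) and (8).

```python
import math
import random

def cholesky_lower(sigma):
    """Lower-triangular A with A A^T = Sigma, via the recursions (7)-(8)."""
    n = len(sigma)
    a = [[0.0] * n for _ in range(n)]
    for k in range(n):
        a[k][k] = math.sqrt(sigma[k][k] - sum(a[k][j] ** 2 for j in range(k)))
        for i in range(k + 1, n):
            a[i][k] = (sigma[i][k] - sum(a[i][j] * a[k][j] for j in range(k))) / a[k][k]
    return a

random.seed(0)
mu = [0.5, -1.0]
sigma = [[2.0, 0.5], [0.5, 1.0]]
A = cholesky_lower(sigma)

f = lambda x: x[1] ** 2   # test function: grad f = (0, 2*x_2), Hessian entry (2,2) = 2
N = 400_000
lhs = Ef = Eg2 = 0.0
for _ in range(N):
    z = [random.gauss(0, 1), random.gauss(0, 1)]
    x = [mu[i] + sum(A[i][k] * z[k] for k in range(2)) for i in range(2)]
    lhs += x[0] ** 2 * f(x)          # E[X_1^2 f(X)]
    Ef += f(x)                       # E[f(X)]
    Eg2 += 2 * x[1]                  # E[grad_2 f(X)]
lhs /= N; Ef /= N; Eg2 /= N
# Right-hand side of (11) with b* = b** = 1 and X* = X** = X:
rhs = (sigma[0][0] * Ef + sigma[0][1] ** 2 * 2.0
       + 2 * mu[0] * sigma[0][1] * Eg2 + mu[0] ** 2 * Ef)
print(lhs, rhs)  # the two sides should agree up to Monte Carlo error
```

The same driver can be pointed at any smooth f by replacing the gradient and Hessian bookkeeping accordingly.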
4. Derivation by the use of Stein’s lemma
We now derive an expression for $E[X_1^2 f(\mathbf{X})]$ by using Stein's lemma. It should be noted that the positive definiteness of $\Sigma$ is not necessary in this case.

Theorem 2. Suppose $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,\phi)$, and all the conditions of Theorem 1 hold. Then,
\[
E[X_1^2 f(\mathbf{X})] = \sum_{i=1}^n\sum_{j=1}^n \mathrm{Cov}(X_1,X_i)\,\mathrm{Cov}(X_1^*,X_j^*)\, E[\nabla_{i,j} f(\mathbf{X}^{**})]
+ 2\mu_1\sum_{i=1}^n \mathrm{Cov}(X_1,X_i)\, E[\nabla_i f(\mathbf{X}^*)]
+ \mathrm{Cov}(X_1,X_1)\, E[f(\mathbf{X}^*)] + \mu_1^2\, E[f(\mathbf{X})]. \tag{13}
\]
Proof. By Lemma 2 of Landsman et al. (2013), we have
\[
\mathrm{Cov}(X_1, f(\mathbf{X})) = \sum_{i=1}^n \mathrm{Cov}(X_1,X_i)\, E[\nabla_i f(\mathbf{X}^*)],
\]
and so
\[
E[X_1 f(\mathbf{X})] = \mathrm{Cov}(X_1,f(\mathbf{X})) + E[X_1]E[f(\mathbf{X})] = \sum_{i=1}^n \mathrm{Cov}(X_1,X_i)\, E[\nabla_i f(\mathbf{X}^*)] + E[X_1]E[f(\mathbf{X})]. \tag{14}
\]
Upon replacing $f(\mathbf{X})$ by $X_1 f(\mathbf{X})$ in (14), we obtain
\begin{align*}
E[X_1^2 f(\mathbf{X})] &= \sum_{i=1}^n \mathrm{Cov}(X_1,X_i)\, E[\nabla_i (X_1^* f(\mathbf{X}^*))] + E[X_1]E[X_1 f(\mathbf{X})] \\
&= \sum_{i=1}^n \mathrm{Cov}(X_1,X_i)\, E[X_1^*\nabla_i f(\mathbf{X}^*)] + \mathrm{Cov}(X_1,X_1)\, E[f(\mathbf{X}^*)] + E[X_1]E[X_1 f(\mathbf{X})] \\
&= \sum_{i=1}^n \mathrm{Cov}(X_1,X_i)\left\{\sum_{j=1}^n \mathrm{Cov}(X_1^*,X_j^*)\, E[\nabla_{i,j} f(\mathbf{X}^{**})] + E[X_1^*]\, E[\nabla_i f(\mathbf{X}^*)]\right\} + \mathrm{Cov}(X_1,X_1)\, E[f(\mathbf{X}^*)] \\
&\quad + E[X_1]\left\{\sum_{i=1}^n \mathrm{Cov}(X_1,X_i)\, E[\nabla_i f(\mathbf{X}^*)] + E[X_1]E[f(\mathbf{X})]\right\} \\
&= \sum_{i=1}^n\sum_{j=1}^n \mathrm{Cov}(X_1,X_i)\,\mathrm{Cov}(X_1^*,X_j^*)\, E[\nabla_{i,j} f(\mathbf{X}^{**})] + (E[X_1])^2 E[f(\mathbf{X})] \\
&\quad + 2E[X_1]\sum_{i=1}^n \mathrm{Cov}(X_1,X_i)\, E[\nabla_i f(\mathbf{X}^*)] + \mathrm{Cov}(X_1,X_1)\, E[f(\mathbf{X}^*)],
\end{align*}
as required, where we applied (14) (with $\mathbf{X}^*$ in place of $\mathbf{X}$) to $E[X_1^*\nabla_i f(\mathbf{X}^*)]$ and noted that $E[X_1^*] = E[X_1] = \mu_1$.

5. Equivalence of the two expressions

We shall now establish the equivalence of the two expressions in (11) and (13) under the condition that $\Sigma$ is positive definite. For this purpose, we will use the following lemma; see, for example, Fang et al. (1990).

Lemma 1.
For any non-negative measurable function $f:\mathbb{R}\to\mathbb{R}_+$, we have
\[
\int_{\mathbb{R}^n} f\!\left(\frac{\mathbf{y}^T\mathbf{y}}{2}\right)\mathrm{d}\mathbf{y} = \frac{(2\pi)^{n/2}}{\Gamma(n/2)}\int_0^\infty u^{n/2-1} f(u)\,\mathrm{d}u.
\]

Proposition 1.
Under the conditions in Theorem 1, we have
\[
\frac{c_n}{c_n^{*}} = -\phi'(0), \qquad \frac{c_n^{*}}{c_n^{**}} = -\phi^{*\prime}(0),
\]
where $\phi$ and $\phi^*$ are the characteristic generators corresponding to the density generators $g_n$ and $G_n$, respectively.

Proof. Let $\mathbf{Y} = \Sigma^{-1/2}(\mathbf{X} - E(\mathbf{X}))$. Then, $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,g_n)$ implies $\mathbf{Y}\sim E_n(\mathbf{0}, I_n, g_n)$ and $\mathrm{Cov}(\mathbf{Y}) = -\phi'(0) I_n$. It then follows that
\[
E(\mathbf{Y}^T\mathbf{Y}) = -n\phi'(0) = c_n\int_{\mathbb{R}^n}\mathbf{y}^T\mathbf{y}\; g_n\!\left(\frac{\mathbf{y}^T\mathbf{y}}{2}\right)\mathrm{d}\mathbf{y} = \frac{2c_n(2\pi)^{n/2}}{\Gamma(n/2)}\int_0^\infty t^{n/2} g_n(t)\,\mathrm{d}t,
\]
by Lemma 1. Thus,
\[
c_n = \frac{-n\phi'(0)\,\Gamma(n/2)}{2(2\pi)^{n/2}\int_0^\infty t^{n/2} g_n(t)\,\mathrm{d}t}. \tag{15}
\]
Now, by using Eq. (9) of Landsman et al. (2013), we have
\[
1 = c_n^{*}\int_{\mathbb{R}^n} G_n\!\left(\frac{\mathbf{y}^T\mathbf{y}}{2}\right)\mathrm{d}\mathbf{y} = \frac{2c_n^{*}(2\pi)^{n/2}}{n\,\Gamma(n/2)}\int_0^\infty t^{n/2} g_n(t)\,\mathrm{d}t,
\]
so that
\[
c_n^{*} = \frac{n\,\Gamma(n/2)}{2(2\pi)^{n/2}\int_0^\infty t^{n/2} g_n(t)\,\mathrm{d}t}. \tag{16}
\]
The result that $c_n/c_n^{*} = -\phi'(0)$ readily follows from (15) and (16).

For $\mathbf{X}^*\sim E_n(\boldsymbol{\mu},\Sigma,G_n)$, let $\mathbf{Z} = \Sigma^{-1/2}(\mathbf{X}^* - E(\mathbf{X}^*))$. Then, $\mathbf{Z}\sim E_n(\mathbf{0}, I_n, G_n)$ and $\mathrm{Cov}(\mathbf{Z}) = -\phi^{*\prime}(0) I_n$. By using the same arguments as above, we have
\[
E(\mathbf{Z}^T\mathbf{Z}) = -n\phi^{*\prime}(0) = c_n^{*}\int_{\mathbb{R}^n}\mathbf{z}^T\mathbf{z}\; G_n\!\left(\frac{\mathbf{z}^T\mathbf{z}}{2}\right)\mathrm{d}\mathbf{z} = \frac{2c_n^{*}(2\pi)^{n/2}}{\Gamma(n/2)}\int_0^\infty t^{n/2} G_n(t)\,\mathrm{d}t = \frac{nc_n^{*}(2\pi)^{n/2}}{\Gamma(n/2)}\int_0^\infty t^{n/2-1}\overline{G}_n(t)\,\mathrm{d}t = \frac{nc_n^{*}}{c_n^{**}},
\]
and hence $c_n^{*}/c_n^{**} = -\phi^{*\prime}(0)$, as required.

Remark 1.
From Proposition 1, we find that, when $\Sigma$ is positive definite, the expressions in (11) and (13) are indeed equivalent. Moreover, for a positive semidefinite matrix $\Sigma$, we can rewrite (13) in terms of characteristic generators as follows:
\[
E[X_1^2 f(\mathbf{X})] = \phi'(0)\phi^{*\prime}(0)\sum_{i=1}^n\sum_{j=1}^n \sigma_{1i}\sigma_{1j}\, E[\nabla_{i,j} f(\mathbf{X}^{**})]
- 2\mu_1\phi'(0)\sum_{i=1}^n \sigma_{1i}\, E[\nabla_i f(\mathbf{X}^*)]
- \phi'(0)\sigma_{11}\, E[f(\mathbf{X}^*)] + \mu_1^2\, E[f(\mathbf{X})], \tag{17}
\]
where $\phi$ and $\phi^*$ are the characteristic generators corresponding to the density generators $g_n$ and $G_n$, respectively.
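Lemma 1 is easy to probe numerically. The sketch below (standard library only; the grid half-width and resolution are arbitrary choices of ours) compares the two sides for n = 2 and f(u) = e^{-u}, where both sides equal 2π.

```python
import math

# LHS of Lemma 1 for n = 2, f(u) = exp(-u): a truncated midpoint-rule sum over [-L, L]^2.
L, m = 8.0, 800
h = 2 * L / m
lhs = h * h * sum(
    math.exp(-((-L + (i + 0.5) * h) ** 2 + (-L + (j + 0.5) * h) ** 2) / 2)
    for i in range(m) for j in range(m))
# RHS: (2*pi)^(n/2) / Gamma(n/2) * Int_0^inf u^(n/2-1) e^(-u) du = 2*pi * Gamma(1) = 2*pi.
rhs = 2 * math.pi
print(lhs, rhs)  # both ≈ 6.283
```

The truncation error is of order e^{-L²/2} and therefore negligible at this grid size.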
6. Product moments of correlated normal random variables
We now derive expressions for $E[X_1^2 f(\mathbf{X})]$ for the multivariate normal distribution, and moments of products of correlated normal random variables.

Corollary 2. Suppose $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,\phi)$ with density generator $g_n(u) = \exp\{-u\}$ and characteristic generator $\phi(t) = \exp\{-t\}$. Note in this case that $g_n(u) = G_n(u) = \overline{G}_n(u) = \exp\{-u\}$, $\phi(t) = \phi^*(t)$ and $b_n^{*} = b_n^{**} = 1$. Then, we have
\[
E[X_1^2 f(\mathbf{X})] = \sum_{i=1}^n\sum_{j=1}^n \sigma_{1i}\sigma_{1j}\, E[\nabla_{i,j} f(\mathbf{X})] + 2\mu_1\sum_{i=1}^n \sigma_{1i}\, E[\nabla_i f(\mathbf{X})] + \sigma_{11}\, E[f(\mathbf{X})] + \mu_1^2\, E[f(\mathbf{X})].
\]
In general, for any $p\geq 2$, we have the following recursive relation:
\begin{align*}
E[X_1^p f(\mathbf{X})] &= \sum_{i=1}^n\sum_{j=1}^n \sigma_{1i}\sigma_{1j}\, E[\nabla_{i,j}(X_1^{p-2} f(\mathbf{X}))] + 2\mu_1\sum_{i=1}^n \sigma_{1i}\, E[\nabla_i(X_1^{p-2} f(\mathbf{X}))] + \sigma_{11}\, E[X_1^{p-2} f(\mathbf{X})] + \mu_1^2\, E[X_1^{p-2} f(\mathbf{X})] \\
&= \sum_{i=1}^n\sum_{j=1}^n \sigma_{1i}\sigma_{1j}\, E[X_1^{p-2}\nabla_{i,j} f(\mathbf{X})] + \sigma_{11}^2(p-2)(p-3)\, E[X_1^{p-4} f(\mathbf{X})] + 2(p-2)\sigma_{11}\sum_{i=1}^n \sigma_{1i}\, E[X_1^{p-3}\nabla_i f(\mathbf{X})] \\
&\quad + 2\mu_1\sum_{i=1}^n \sigma_{1i}\, E[X_1^{p-2}\nabla_i f(\mathbf{X})] + 2\mu_1\sigma_{11}(p-2)\, E[X_1^{p-3} f(\mathbf{X})] + \sigma_{11}\, E[X_1^{p-2} f(\mathbf{X})] + \mu_1^2\, E[X_1^{p-2} f(\mathbf{X})].
\end{align*}
This formula can be seen as a supplement to the following multivariate version of Stein's identity (see Stein (1981) and Liu (1994)):
\[
\mathrm{Cov}(X_1, f(\mathbf{X})) = \sum_{i=1}^n \mathrm{Cov}(X_1, X_i)\, E(\nabla_i f(\mathbf{X})),
\]
or equivalently,
\[
E(X_1 f(\mathbf{X})) = \sum_{i=1}^n \sigma_{1i}\, E(\nabla_i f(\mathbf{X})) + E(X_1)\, E(f(\mathbf{X})).
\]
Stein's identity for multivariate elliptical distributions admits a similar result, which can be found in Landsman et al. (2013). In particular, when $\phi(t) = \exp\{-t\}$, then $\mathbf{X}\sim N_n(\boldsymbol{\mu},\Sigma)$, $\mathbf{X}^*\sim N_n(\boldsymbol{\mu},\Sigma)$ and $\mathbf{X}^{**}\sim N_n(\boldsymbol{\mu},\Sigma)$. In this case, we obtain the following recursion formula:
\begin{align*}
E\left[\prod_{i=1}^n X_i^{p_i}\right] &= \sigma_{11}\, E\left[X_1^{p_1-2}\prod_{k=2}^n X_k^{p_k}\right] + \mu_1^2\, E\left[X_1^{p_1-2}\prod_{k=2}^n X_k^{p_k}\right] + \sigma_{11}^2(p_1-2)(p_1-3)\, E\left[X_1^{p_1-4}\prod_{k=2}^n X_k^{p_k}\right] \\
&\quad + 2\sum_{j=2}^n \sigma_{11}\sigma_{1j}(p_1-2)p_j\, E\left[X_1^{p_1-3} X_j^{p_j-1}\prod_{k=2,\,k\neq j}^n X_k^{p_k}\right] + \sum_{j=2}^n \sigma_{1j}^2\, p_j(p_j-1)\, E\left[X_1^{p_1-2} X_j^{p_j-2}\prod_{k=2,\,k\neq j}^n X_k^{p_k}\right] \\
&\quad + \sum_{j=2}^n\sum_{i=2,\,i\neq j}^n \sigma_{1j}\sigma_{1i}\, p_j p_i\, E\left[X_i^{p_i-1} X_j^{p_j-1} X_1^{p_1-2}\prod_{k=2,\,k\neq i,j}^n X_k^{p_k}\right] \\
&\quad + 2\mu_1\left\{\sigma_{11}(p_1-2)\, E\left[X_1^{p_1-3}\prod_{k=2}^n X_k^{p_k}\right] + \sum_{j=2}^n \sigma_{1j} p_j\, E\left[X_j^{p_j-1} X_1^{p_1-2}\prod_{k=2,\,k\neq j}^n X_k^{p_k}\right]\right\}.
\end{align*}
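The Stein identity displayed above yields a simple recursion for normal product moments: taking $f(\mathbf{x}) = x_a^{q_a-1}\prod_{k\neq a} x_k^{q_k}$ peels one factor off the product and lowers one exponent at a time. A minimal memoized sketch (standard library only; the function and variable names are our own, not the paper's):

```python
from functools import lru_cache

def normal_product_moment(p, mu, sigma):
    """E[prod_i X_i^{p_i}] for X ~ N_n(mu, Sigma), by repeatedly applying the
    Stein identity E[X_a f(X)] = sum_j sigma_{aj} E[grad_j f(X)] + mu_a E[f(X)]
    with f(x) = x_a^{p_a - 1} * prod_{k != a} x_k^{p_k}."""
    @lru_cache(maxsize=None)
    def m(q):
        if all(qi == 0 for qi in q):
            return 1.0
        a = next(i for i, qi in enumerate(q) if qi > 0)   # peel one factor X_a off
        q = list(q)
        q[a] -= 1
        total = mu[a] * m(tuple(q))
        for j, qj in enumerate(q):
            if qj > 0:
                r = list(q); r[j] -= 1
                total += sigma[a][j] * qj * m(tuple(r))
        return total
    return m(tuple(p))

sigma = [[2.0, 0.5], [0.5, 1.0]]
print(normal_product_moment([2, 2], [0.0, 0.0], sigma))  # sigma11*sigma22 + 2*sigma12^2 = 2.5
print(normal_product_moment([4, 0], [0.0, 0.0], sigma))  # 3*sigma11^2 = 12.0
```

Kan (2008) gives closed-form alternatives; this recursion is exponential in the total degree but is adequate for low-order moments.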
7. Results for some special cases
In this section, we present results for the special cases of multivariate Student-$t$, logistic and Laplace distributions.

Example 7.1. Multivariate Student-$t$ distribution. An $n$-dimensional Student-$t$ random vector $\mathbf{X}$, with location parameter $\boldsymbol{\mu}$, scale matrix $\Sigma$ and degrees of freedom $p > 0$, has density function
\[
f_{\mathbf{X}}(\mathbf{x}) = \frac{c_n}{\sqrt{|\Sigma|}}\left[1 + \frac{(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})}{p}\right]^{-\frac{p+n}{2}}, \quad \mathbf{x}\in\mathbb{R}^n,
\]
where $c_n = \frac{\Gamma(\frac{p+n}{2})}{\Gamma(p/2)(p\pi)^{n/2}}$. We denote it by $\mathbf{X}\sim St_n(\boldsymbol{\mu},\Sigma,p)$. In this case, the density generator is
\[
g_n(t) = \left(1 + \frac{2t}{p}\right)^{-(p+n)/2},
\]
and so $G_n(t)$ and $\overline{G}_n(t)$ can be expressed, respectively, as
\[
G_n(t) = \frac{p}{p+n-2}\left(1 + \frac{2t}{p}\right)^{-(p+n-2)/2}
\quad \text{and} \quad
\overline{G}_n(t) = \frac{p}{p+n-2}\cdot\frac{p}{p+n-4}\left(1 + \frac{2t}{p}\right)^{-(p+n-4)/2}.
\]
In addition,
\[
c_n^{*} = \frac{(p+n-2)\,\Gamma(n/2)}{(2\pi)^{n/2}\,p}\left[\int_0^\infty t^{n/2-1}\left(1+\frac{2t}{p}\right)^{-(p+n-2)/2}\mathrm{d}t\right]^{-1} = \frac{(p+n-2)\,\Gamma(n/2)}{(p\pi)^{n/2}\,p\,B(\frac{n}{2},\frac{p-2}{2})}, \quad \text{if } p > 2,
\]
\[
c_n^{**} = \frac{(p+n-2)(p+n-4)\,\Gamma(n/2)}{(2\pi)^{n/2}\,p^2}\left[\int_0^\infty t^{n/2-1}\left(1+\frac{2t}{p}\right)^{-(p+n-4)/2}\mathrm{d}t\right]^{-1} = \frac{(p+n-2)(p+n-4)\,\Gamma(n/2)}{(p\pi)^{n/2}\,p^2\,B(\frac{n}{2},\frac{p-4}{2})}, \quad \text{if } p > 4,
\]
where $\Gamma(\cdot)$ and $B(\cdot,\cdot)$ are the gamma and beta functions, respectively. Then, we have
\[
E[X_1^2 f(\mathbf{X})] = \sigma_{11} b_n^{*} E[f(\mathbf{X}^*)] + b_n^{**}\sum_{i=1}^n\sum_{j=1}^n \sigma_{1i}\sigma_{1j}\, E[\nabla_{i,j} f(\mathbf{X}^{**})] + 2\mu_1 b_n^{*}\sum_{i=1}^n \sigma_{1i}\, E[\nabla_i f(\mathbf{X}^*)] + \mu_1^2\, E[f(\mathbf{X})], \tag{18}
\]
where $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,g_n)$, $\mathbf{X}^*\sim E_n(\boldsymbol{\mu},\Sigma,G_n)$, $\mathbf{X}^{**}\sim E_n(\boldsymbol{\mu},\Sigma,\overline{G}_n)$,
\[
b_n^{*} = \frac{p\,\Gamma(\frac{p+n}{2})\,B(\frac{n}{2},\frac{p-2}{2})}{(p+n-2)\,\Gamma(\frac{p}{2})\,\Gamma(\frac{n}{2})} = \frac{p}{p-2}, \quad \text{if } p > 2,
\]
\[
b_n^{**} = \frac{p^2\,\Gamma(\frac{p+n}{2})\,B(\frac{n}{2},\frac{p-4}{2})}{(p+n-2)(p+n-4)\,\Gamma(\frac{p}{2})\,\Gamma(\frac{n}{2})} = \frac{p^2}{(p-2)(p-4)}, \quad \text{if } p > 4.
\]

Example 7.2. Multivariate logistic distribution. The density function of an $n$-dimensional logistic random vector $\mathbf{X}$, with location parameter $\boldsymbol{\mu}$ and scale matrix $\Sigma$, is given by
\[
f_{\mathbf{X}}(\mathbf{x}) = \frac{c_n}{\sqrt{|\Sigma|}}\,\frac{\exp\{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\}}{[1+\exp\{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\}]^2}, \quad \mathbf{x}\in\mathbb{R}^n,
\]
where
\[
c_n = \frac{\Gamma(n/2)}{(2\pi)^{n/2}}\left[\int_0^\infty t^{n/2-1}\frac{e^{-t}}{(1+e^{-t})^2}\,\mathrm{d}t\right]^{-1} = \frac{1}{(2\pi)^{n/2}\,\Psi_2^*(-1,\frac{n}{2},1)}.
\]
Here $\Psi_\mu^*(z,s,a)$ is the generalized Hurwitz-Lerch zeta function defined by (cf. Lin et al. (2006))
\[
\Psi_\mu^*(z,s,a) = \frac{1}{\Gamma(\mu)}\sum_{n=0}^\infty \frac{\Gamma(\mu+n)}{n!}\,\frac{z^n}{(n+a)^s},
\]
which has the integral representation
\[
\Psi_\mu^*(z,s,a) = \frac{1}{\Gamma(s)}\int_0^\infty \frac{t^{s-1}e^{-at}}{(1-ze^{-t})^\mu}\,\mathrm{d}t,
\]
where $\Re(a) > 0$, and $\Re(s) > 0$ when $|z|\leq 1$ ($z\neq 1$), while $\Re(s) > 1$ when $z = 1$. We denote the logistic vector by $\mathbf{X}\sim Lo_n(\boldsymbol{\mu},\Sigma)$. The density generator in this case is
\[
g_n(t) = \frac{e^{-t}}{(1+e^{-t})^2},
\]
and $G_n(t)$ and $\overline{G}_n(t)$ are given by
\[
G_n(t) = \frac{e^{-t}}{1+e^{-t}}, \qquad \overline{G}_n(t) = \ln[1+e^{-t}].
\]
In addition,
\[
c_n^{*} = \frac{\Gamma(n/2)}{(2\pi)^{n/2}}\left[\int_0^\infty t^{n/2-1}\frac{e^{-t}}{1+e^{-t}}\,\mathrm{d}t\right]^{-1} = \frac{1}{(2\pi)^{n/2}\,\Psi_1^*(-1,\frac{n}{2},1)},
\]
\[
c_n^{**} = \frac{\Gamma(n/2)}{(2\pi)^{n/2}}\left[\int_0^\infty t^{n/2-1}\ln[1+e^{-t}]\,\mathrm{d}t\right]^{-1} = \frac{\Gamma(n/2)}{(2\pi)^{n/2}}\left[\frac{2}{n}\int_0^\infty t^{n/2}\frac{e^{-t}}{1+e^{-t}}\,\mathrm{d}t\right]^{-1} = \frac{1}{(2\pi)^{n/2}\,\Psi_1^*(-1,\frac{n}{2}+1,1)}.
\]
Then, we have
\[
E[X_1^2 f(\mathbf{X})] = \sigma_{11} b_n^{*} E[f(\mathbf{X}^*)] + b_n^{**}\sum_{i=1}^n\sum_{j=1}^n \sigma_{1i}\sigma_{1j}\, E[\nabla_{i,j} f(\mathbf{X}^{**})] + 2\mu_1 b_n^{*}\sum_{i=1}^n \sigma_{1i}\, E[\nabla_i f(\mathbf{X}^*)] + \mu_1^2\, E[f(\mathbf{X})], \tag{19}
\]
where $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,g_n)$, $\mathbf{X}^*\sim E_n(\boldsymbol{\mu},\Sigma,G_n)$, $\mathbf{X}^{**}\sim E_n(\boldsymbol{\mu},\Sigma,\overline{G}_n)$,
\[
b_n^{*} = \frac{\Psi_1^*(-1,\frac{n}{2},1)}{\Psi_2^*(-1,\frac{n}{2},1)}, \qquad b_n^{**} = \frac{\Psi_1^*(-1,\frac{n}{2}+1,1)}{\Psi_2^*(-1,\frac{n}{2},1)}.
\]

Remark 2. A simplification of $c_n$ can be found in Yin et al. (2018):
\[
c_n = \begin{cases}
\left[\sqrt{2\pi}\,\Psi_2^*(-1,\tfrac{1}{2},1)\right]^{-1}, & \text{if } n = 1,\\[2pt]
1/\pi, & \text{if } n = 2,\\[2pt]
1/(4\pi^2\ln 2), & \text{if } n = 4,\\[2pt]
\left[(2\pi)^{n/2}\left(1-2^{2-n/2}\right)\zeta(\tfrac{n}{2}-1)\right]^{-1}, & \text{if } n\geq 3,\ n\neq 4,
\end{cases}
\]
where $\zeta$ is the Riemann zeta function, defined as
\[
\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \frac{1}{1-2^{-s}}\sum_{n=1}^\infty \frac{1}{(2n-1)^s}, \quad \text{if } \Re(s) > 1,
\qquad
\zeta(s) = \frac{1}{1-2^{1-s}}\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^s}, \quad \text{if } \Re(s) > 0,\ s\neq 1,
\]
which can, except for a simple pole at $s = 1$ with residue $1$, be continued meromorphically to the whole complex $s$-plane (see Srivastava (2003) and Choi et al. (2004) for details).

Example 7.3. Multivariate Laplace distribution.
The density function of an $n$-dimensional Laplace random vector $\mathbf{X}$, with location parameter $\boldsymbol{\mu}$ and scale matrix $\Sigma$, is given by
\[
f_{\mathbf{X}}(\mathbf{x}) = \frac{c_n}{\sqrt{|\Sigma|}}\exp\left\{-\left[(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right]^{1/2}\right\}, \quad \mathbf{x}\in\mathbb{R}^n,
\]
where $c_n = \frac{\Gamma(n/2)}{2\pi^{n/2}\,\Gamma(n)}$. We denote it by $\mathbf{X}\sim La_n(\boldsymbol{\mu},\Sigma)$. The density generator in this case is $g_n(t) = \exp(-\sqrt{2t})$, and so
\[
G_n(t) = (1+\sqrt{2t})\exp(-\sqrt{2t}), \qquad \overline{G}_n(t) = (3 + 2t + 3\sqrt{2t})\exp(-\sqrt{2t}).
\]
In addition,
\[
c_n^{*} = \frac{n\,\Gamma(n/2)}{2\pi^{n/2}\,\Gamma(n+2)}, \qquad c_n^{**} = \frac{n(n+2)\,\Gamma(n/2)}{2\pi^{n/2}\,\Gamma(n+4)}.
\]
Then, we have
\[
E[X_1^2 f(\mathbf{X})] = \sigma_{11} b_n^{*} E[f(\mathbf{X}^*)] + b_n^{**}\sum_{i=1}^n\sum_{j=1}^n \sigma_{1i}\sigma_{1j}\, E[\nabla_{i,j} f(\mathbf{X}^{**})] + 2\mu_1 b_n^{*}\sum_{i=1}^n \sigma_{1i}\, E[\nabla_i f(\mathbf{X}^*)] + \mu_1^2\, E[f(\mathbf{X})], \tag{20}
\]
where $\mathbf{X}\sim E_n(\boldsymbol{\mu},\Sigma,g_n)$, $\mathbf{X}^*\sim E_n(\boldsymbol{\mu},\Sigma,G_n)$, $\mathbf{X}^{**}\sim E_n(\boldsymbol{\mu},\Sigma,\overline{G}_n)$, $b_n^{*} = n+1$ and $b_n^{**} = (n+3)(n+1)$.
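Since the $\Gamma(n/2)/(2\pi)^{n/2}$ prefactors in (2) and (5) cancel, $b_n^* = c_n/c_n^*$ reduces to a ratio of one-dimensional integrals, $\int_0^\infty t^{n/2-1}G_n(t)\,\mathrm{d}t \big/ \int_0^\infty t^{n/2-1}g_n(t)\,\mathrm{d}t$. The sketch below (standard library only; the truncation point and step count are arbitrary choices of ours) checks $b_n^* = p/(p-2)$ for the Student-$t$ generator (with $n = 2$, $p = 6$) and $b_n^* = n+1$ for the Laplace generator (with $n = 2$).

```python
import math

def b_star(g, G, n, upper=400.0, steps=400_000):
    """b*_n = c_n / c*_n = Int_0^inf t^(n/2-1) G(t) dt / Int_0^inf t^(n/2-1) g(t) dt
    (midpoint rule on a truncated range)."""
    h = upper / steps
    num = den = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        w = t ** (n / 2 - 1)
        num += w * G(t)
        den += w * g(t)
    return num / den

n, p = 2, 6
# Student-t: g_n(t) = (1 + 2t/p)^(-(p+n)/2), G_n(t) = p/(p+n-2) * (1 + 2t/p)^(-(p+n-2)/2).
bt = b_star(lambda t: (1 + 2 * t / p) ** (-(p + n) / 2),
            lambda t: p / (p + n - 2) * (1 + 2 * t / p) ** (-(p + n - 2) / 2), n)
# Laplace: g_n(t) = exp(-sqrt(2t)), G_n(t) = (1 + sqrt(2t)) * exp(-sqrt(2t)).
bl = b_star(lambda t: math.exp(-math.sqrt(2 * t)),
            lambda t: (1 + math.sqrt(2 * t)) * math.exp(-math.sqrt(2 * t)), n)
print(bt, p / (p - 2))   # ≈ 1.5
print(bl, n + 1)         # ≈ 3.0
```

The same ratio trick applies to $b_n^{**} = c_n/c_n^{**}$ with $\overline{G}_n$ in the numerator.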
8. Concluding remarks
In this work, we have derived expressions for the joint moments of elliptical distributions by two methods and have shown their equivalence when the dispersion matrix Σ is positive definite. We have used the result to derive expectations of products of correlated normal random variables, and have also presented simplified expressions for the joint moments in the special cases of multivariate Student-t, logistic and Laplace distributions. It will be of interest to extend the results established here to truncated elliptical distributions along the lines of Shushi (2018), and to the family of skew-elliptical distributions proposed by Branco and Dey (2001). Work on these problems is currently in progress, and we hope to report the findings in a future paper.

Acknowledgments
This research was supported by the National Natural Science Foundation of China (Nos. 11571198, 11701319).
References

[1] Branco, M.D., Dey, D.K., 2001. A general class of multivariate skew-elliptical distributions. Journal of Multivariate Analysis 79 (1), 99-113.
[2] Choi, J., Cho, Y.J., Srivastava, H.M., 2004. Series involving the zeta function and multiple Gamma functions. Applied Mathematics and Computation 159, 509-537.
[3] Fang, K.T., Kotz, S., Ng, K.W., 1990. Symmetric Multivariate and Related Distributions. Chapman and Hall, New York.
[4] Golub, G.H., Van Loan, C.F., 2012. Matrix Computations (4th Edition). Johns Hopkins University Press, Baltimore, Maryland.
[5] Kan, R., 2008. From moments of sum to moments of product. Journal of Multivariate Analysis 99 (3), 542-554.
[6] Kelker, D., 1970. Distribution theory of spherical distributions and a location-scale parameter generalization. Sankhya 32, 419-430.
[7] Kotz, S., Balakrishnan, N., Johnson, N.L., 2000. Continuous Multivariate Distributions, Vol. 1, Second edition. John Wiley and Sons, New York.
[8] Landsman, Z., 2006. On the generalization of Stein's lemma for elliptical class of distributions. Statistics and Probability Letters 76, 1012-1016.
[9] Landsman, Z., Makov, U., Shushi, T., 2018. A multivariate tail covariance measure for elliptical distributions. Insurance: Mathematics and Economics 81, 27-35.
[10] Landsman, Z., Nešlehová, J., 2008. Stein's Lemma for elliptical random vectors. Journal of Multivariate Analysis 99, 912-927.