Two Sample Covariances from a Trivariate Normal Distribution
Steven Finch
December 17, 2015
Abstract.
The joint distribution of two off-diagonal Wishart matrix elements was useful in recent work on geometric probability [12]. Not finding such formulas in the literature, we report them here.
Let $(A, B, C)$ be trivariate normally distributed with known mean $(0, 0, 0)$ and covariance matrix
$$
\Sigma = \operatorname{Cov}\begin{pmatrix} A \\ B \\ C \end{pmatrix}
= \begin{pmatrix} 1 & \sigma & \rho \\ \sigma & 1 & \rho \\ \rho & \rho & 1 \end{pmatrix},
\qquad 1 - \sigma^2 > 0, \qquad 1 - 2\rho^2 + \sigma > 0.
$$
We wish to evaluate the probability that $AC < x$ and $BC < y$. On the one hand, the joint density of $(AC, BC)$ is [1]
$$
f(x,y) = \frac{1}{2\pi}\,
\frac{\exp\left(\frac{1}{\xi}\left(\rho x + \rho y - \sqrt{\eta}\sqrt{(1-\rho^2)x^2 - 2(\sigma-\rho^2)xy + (1-\rho^2)y^2}\,\right)\right)}
{\sqrt{(1-\rho^2)x^2 - 2(\sigma-\rho^2)xy + (1-\rho^2)y^2}}
$$
where $\xi = 1 - 2\rho^2 + \sigma$ and $\eta = (1+\sigma)/(1-\sigma)$. On the other hand, $(A^2, AB, AC, B^2, BC, C^2)$ is pseudo-Wishart distributed with 1 degree of freedom, and thus $(AC, BC)$ has characteristic function [2]
$$
F(u,v) = \det\left(I - 2i
\begin{pmatrix} 1 & \sigma & \rho \\ \sigma & 1 & \rho \\ \rho & \rho & 1 \end{pmatrix}
\begin{pmatrix} 0 & 0 & u/2 \\ 0 & 0 & v/2 \\ u/2 & v/2 & 0 \end{pmatrix}
\right)^{-1/2}
= \frac{1}{\sqrt{1 + (1-\rho^2)u^2 - 2i\rho u + (1-\rho^2)v^2 - 2i\rho v + 2(\sigma-\rho^2)uv}}.
$$
Our first task is to confirm that $f(x,y)$ and $F(u,v)$ are indeed a Fourier transform pair.

Now let $(A_1, B_1, C_1), (A_2, B_2, C_2), \ldots, (A_n, B_n, C_n)$ be a random sample from $N(0, \Sigma)$ and define sample covariances
$$
\hat{\gamma}_{A,C} = \sum_{j=1}^{n} A_j C_j, \qquad \hat{\gamma}_{B,C} = \sum_{j=1}^{n} B_j C_j.
$$
The density of $\hat{\gamma}_{A,C}$ is well-known [3, 4, 5, 6, 7, 8, 9, 10]:
$$
\frac{|x/2|^{(n-1)/2}}{\sqrt{\pi}\sqrt{1-\rho^2}\,\Gamma(n/2)}
\exp\left(\frac{\rho x}{1-\rho^2}\right)
K_{(n-1)/2}\left(\frac{|x|}{1-\rho^2}\right)
$$
and likewise for $\hat{\gamma}_{B,C}$, where $K_{(n-1)/2}(\theta)$ is the modified Bessel function of the second kind. The joint density $f_n(x,y)$, however, is not fully understood [11], even though
$$
F_n(u,v) = \frac{1}{\left[1 + (1-\rho^2)u^2 - 2i\rho u + (1-\rho^2)v^2 - 2i\rho v + 2(\sigma-\rho^2)uv\right]^{n/2}}
$$
is comparatively simple. We shall determine $f_n(x,y)$ for $n = 2, 3, 4$ and for arbitrary $n$, closing an evidently open issue. A special case involving $f(x,y)$, for which $\rho = \sigma = 1/2$, was examined in [12] to answer a geometric probability question. Our discussion generalizes this earlier work.
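The determinant and expanded forms of $F(u,v)$ above can be cross-checked numerically. The following sketch is not part of the original paper; it assumes numpy is available, and the parameter values are arbitrary admissible choices:

```python
import numpy as np

# Cross-check: determinant form vs. expanded form of the characteristic
# function of (AC, BC). Parameters are arbitrary admissible choices
# (1 - sigma^2 > 0 and 1 - 2 rho^2 + sigma > 0 both hold).
rho, sigma = 0.3, 0.4
Sigma = np.array([[1.0, sigma, rho],
                  [sigma, 1.0, rho],
                  [rho, rho, 1.0]])

def F_det(u, v):
    # det(I - 2i * Sigma * Theta)^(-1/2), with Theta the symmetric
    # coefficient matrix of the quadratic form u*AC + v*BC
    Theta = np.array([[0, 0, u/2], [0, 0, v/2], [u/2, v/2, 0]])
    return np.linalg.det(np.eye(3) - 2j*(Sigma @ Theta))**(-0.5)

def F_expanded(u, v):
    return (1 + (1 - rho**2)*u**2 - 2j*rho*u
              + (1 - rho**2)*v**2 - 2j*rho*v
              + 2*(sigma - rho**2)*u*v)**(-0.5)

max_err = max(abs(F_det(u, v) - F_expanded(u, v))
              for u in (-1.1, 0.7, 2.0) for v in (-0.4, 0.5, 1.3))
```

Both forms agree to machine precision, consistent with the displayed identity.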
1. Fourier Transform Pair
Our objective is to evaluate the integral
$$
F(u,v) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\frac{\exp\left(\frac{1}{\xi}\left((\rho + i\xi u)x + (\rho + i\xi v)y - \sqrt{\eta}\sqrt{a x^2 - 2b xy + a y^2}\,\right)\right)}
{\sqrt{a x^2 - 2b xy + a y^2}}\,dx\,dy
$$
where $a = 1-\rho^2$ and $b = \sigma-\rho^2$. Let
$$
x = \frac{r}{\sqrt{2}}\left(\lambda\cos\theta - \kappa\sin\theta\right), \qquad
y = \frac{r}{\sqrt{2}}\left(\lambda\cos\theta + \kappa\sin\theta\right);
$$
then $a x^2 - 2b xy + a y^2 = r^2$ under the requirement that
$$
\lambda = \frac{1}{\sqrt{1-\sigma}}, \qquad \kappa = \frac{1}{\sqrt{1 - 2\rho^2 + \sigma}}.
$$
The Jacobian determinant is
$$
\left(\frac{1}{\sqrt{2}}\right)^2
\begin{vmatrix}
\lambda\cos\theta - \kappa\sin\theta & r(-\lambda\sin\theta - \kappa\cos\theta) \\
\lambda\cos\theta + \kappa\sin\theta & r(-\lambda\sin\theta + \kappa\cos\theta)
\end{vmatrix}
= \lambda\kappa\, r,
$$
hence $dx\,dy = \lambda\kappa\, r\,dr\,d\theta$. We obtain
$$
F(u,v) = \frac{\lambda\kappa}{2\pi}\int_0^{2\pi}\int_0^{\infty}
\exp\left(\frac{r}{\sqrt{2}\,\xi}\left[(\rho+i\xi u)(\lambda\cos\theta - \kappa\sin\theta) + (\rho+i\xi v)(\lambda\cos\theta + \kappa\sin\theta) - \sqrt{2\eta}\right]\right)dr\,d\theta
$$
$$
= \frac{\lambda\kappa}{2\pi}\int_0^{2\pi}\int_0^{\infty}
\exp\left(\frac{r}{\sqrt{2}\,\xi}\left[2\lambda\rho\cos\theta + i\lambda\xi(u+v)\cos\theta - i\kappa\xi(u-v)\sin\theta - \sqrt{2\eta}\right]\right)dr\,d\theta
$$
$$
= -\frac{\sqrt{2}\,\lambda\kappa\,\xi}{2\pi}\int_0^{2\pi}
\frac{d\theta}{2\lambda\rho\cos\theta + i\lambda\xi(u+v)\cos\theta - i\kappa\xi(u-v)\sin\theta - \sqrt{2\eta}}
$$
since
$$
\operatorname{Re}\left[2\lambda\rho\cos\theta + i\lambda\xi(u+v)\cos\theta - i\kappa\xi(u-v)\sin\theta - \sqrt{2\eta}\right]
= 2\lambda\rho\cos\theta - \sqrt{2\eta} \le 2\lambda\rho - \sqrt{2\eta}
$$
and
$$
4\lambda^2\rho^2 - 2\eta = \frac{4\rho^2}{1-\sigma} - \frac{2(1+\sigma)}{1-\sigma}
= -\frac{2}{1-\sigma}\left(1 - 2\rho^2 + \sigma\right) < 0.
$$
Let $z = \exp(i\theta)$; then $d\theta = -i\,dz/z$ and
$$
F(u,v) = \frac{i\sqrt{2}\,\lambda\kappa\,\xi}{2\pi}\oint_C
\frac{1}{\lambda\rho(z + z^{-1}) + \frac{i\lambda\xi(u+v)}{2}(z + z^{-1}) - \frac{\kappa\xi(u-v)}{2}(z - z^{-1}) - \sqrt{2\eta}}\,\frac{dz}{z}
$$
$$
= \frac{i\sqrt{2}\,\lambda\kappa\,\xi}{\pi}\oint_C
\frac{dz}{\left(2\lambda\rho + i\lambda\xi(u+v)\right)(z^2+1) - \kappa\xi(u-v)(z^2-1) - 2\sqrt{2\eta}\,z},
$$
where $C$ is the unit circle. The denominator of the integrand can be rewritten as
$$
\left(2\lambda\rho - \kappa\xi u + \kappa\xi v + i\lambda\xi u + i\lambda\xi v\right)z^2 - 2\sqrt{2\eta}\,z + \left(2\lambda\rho + \kappa\xi u - \kappa\xi v + i\lambda\xi u + i\lambda\xi v\right).
$$
Let $\delta = 2\rho/\xi$. The two poles $z_{\mathrm{pos}}, z_{\mathrm{neg}}$ of the integrand are
$$
\frac{2\sqrt{2\eta} \pm \sqrt{8\eta - 4\left(2\lambda\rho - \kappa\xi u + \kappa\xi v + i\lambda\xi u + i\lambda\xi v\right)\left(2\lambda\rho + \kappa\xi u - \kappa\xi v + i\lambda\xi u + i\lambda\xi v\right)}}{2\left(2\lambda\rho - \kappa\xi u + \kappa\xi v + i\lambda\xi u + i\lambda\xi v\right)}
$$
$$
= \frac{\sqrt{2\eta} \pm \sqrt{\lambda^2\xi^2(u+v-i\delta)^2 + \kappa^2\xi^2(-u+v)^2 + 2\eta}}{2\lambda\rho - \kappa\xi u + \kappa\xi v + i\lambda\xi u + i\lambda\xi v};
$$
$z_{\mathrm{neg}}$ is always inside $C$, $z_{\mathrm{pos}}$ is always outside. Clearly $z_{\mathrm{neg}}$ is a pole of order 1, and the associated residue is
$$
\lim_{z \to z_{\mathrm{neg}}}\frac{1}{\left(2\lambda\rho - \kappa\xi u + \kappa\xi v + i\lambda\xi u + i\lambda\xi v\right)\left(z - z_{\mathrm{pos}}\right)}
= -\frac{1}{2}\,\frac{1}{\sqrt{\lambda^2\xi^2(u+v-i\delta)^2 + \kappa^2\xi^2(-u+v)^2 + 2\eta}};
$$
multiplying by $(2\pi i)\left(i\sqrt{2}\,\lambda\kappa\,\xi/\pi\right)$ gives
$$
F(u,v) = \frac{\sqrt{2}\,\lambda\kappa\,\xi}{\sqrt{\lambda^2\xi^2(u+v-i\delta)^2 + \kappa^2\xi^2(-u+v)^2 + 2\eta}}
= \frac{\sqrt{2}\,\lambda\kappa}{\sqrt{\lambda^2(u+v-i\delta)^2 + \kappa^2(-u+v)^2 + \alpha^2}}
$$
where $\alpha = \sqrt{2\eta}/\xi$. This is the most useful expression for our purposes. The original formula for $F(u,v)$ can now be confirmed.

For example, if $\rho = \sigma = 1/2$, we have $\xi = 1$, $\eta = 3$, $\lambda = \sqrt{2}$, $\kappa = 1$, $\delta = 1$, $\alpha = \sqrt{6}$. In this special case, our expression for $F(u,v)$ becomes
$$
\frac{2}{\sqrt{2(u+v-i)^2 + (-u+v)^2 + 6}}
= \frac{2}{\sqrt{(u-2i)(3u+2i) + (v-2i)(3v+2i) + 2uv - 4}}.
$$
Next,
$$
f(x,y) = \frac{\sqrt{2}\,\lambda\kappa}{(2\pi)^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\frac{\exp(-iux - ivy)}{\sqrt{\lambda^2(u+v-i\delta)^2 + \kappa^2(-u+v)^2 + \alpha^2}}\,dv\,du.
$$
Let $s = u+v$ and $t = -u+v$; then $u = (s-t)/2$, $v = (s+t)/2$ and
$$
-iux - ivy = -i\,\frac{s-t}{2}\,x - i\,\frac{s+t}{2}\,y = -i\,\frac{x+y}{2}\,s - i\,\frac{-x+y}{2}\,t;
$$
hence
$$
f(x,y) = \frac{\lambda\kappa}{\sqrt{2}\,(2\pi)^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\frac{\exp\left(-i\frac{x+y}{2}s - i\frac{-x+y}{2}t\right)}{\sqrt{\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2}}\,dt\,ds.
$$
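Before evaluating the inner and outer integrals, the residue-derived expression for $F(u,v)$ can be sanity-checked against the original characteristic function. A minimal sketch (mine, not the paper's), using only the standard library, at $\rho = \sigma = 1/2$:

```python
import cmath

# rho = sigma = 1/2: xi = 1, eta = 3, lambda = sqrt(2), kappa = 1,
# delta = 1, alpha = sqrt(6)
lam, kap, delta, alpha = 2**0.5, 1.0, 1.0, 6**0.5

def F_residue(u, v):
    # expression obtained via residue calculus
    return (2**0.5*lam*kap) / cmath.sqrt(lam**2*(u + v - 1j*delta)**2
                                         + kap**2*(-u + v)**2 + alpha**2)

def F_direct(u, v):
    # original characteristic function specialized to rho = sigma = 1/2
    return 1/cmath.sqrt(1 + 0.75*u**2 - 1j*u + 0.75*v**2 - 1j*v + 0.5*u*v)

err = max(abs(F_residue(u, v) - F_direct(u, v))
          for u in (-2.0, 0.3, 1.7) for v in (-0.8, 0.0, 2.5))
```

The two expressions coincide to rounding error, since the radicand in the first is exactly four times that in the second.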
The inner integral is [13]
$$
\int_{-\infty}^{\infty}\frac{\exp\left(-i\frac{-x+y}{2}t\right)}{\sqrt{\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2}}\,dt
= \frac{2}{\kappa}\,K_0\left(\frac{1}{\kappa}\left|\frac{-x+y}{2}\right|\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right).
$$
Define
$$
\gamma = \frac{1}{\kappa}\left|\frac{-x+y}{2}\right|;
$$
then the outer integral is [13]
$$
\int_{-\infty}^{\infty} K_0\left(\gamma\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right)\exp\left(-i\frac{x+y}{2}s\right)ds
= \frac{\pi\exp\left(\frac{x+y}{2}\delta\right)\exp\left(-\frac{\alpha}{\lambda}\sqrt{\gamma^2\lambda^2 + \left(\frac{x+y}{2}\right)^2}\right)}{\sqrt{\gamma^2\lambda^2 + \left(\frac{x+y}{2}\right)^2}}
$$
$$
= \frac{2\pi\kappa\,\exp\left(\frac{x+y}{2}\delta\right)\exp\left(-\frac{\alpha}{2\lambda\kappa}\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\right)}{\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}}.
$$
Multiplying by $\frac{\lambda\kappa}{\sqrt{2}\,(2\pi)^2}\cdot\frac{2}{\kappa}$, we obtain
$$
f(x,y) = \frac{\lambda\kappa}{\sqrt{2}\,\pi}\,\frac{\exp\left(\frac{x+y}{2}\delta\right)\exp\left(-\frac{\alpha}{2\lambda\kappa}\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\right)}{\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}}
$$
and the original formula for $f(x,y)$ can now be confirmed. For example, if $\rho = \sigma = 1/2$, the probability that both $AC > 0$ and $BC > 0$ is $1/2$.
2. Sample Size n = 2

We wish to evaluate
$$
f_2(x,y) = \frac{\lambda^2\kappa^2}{(2\pi)^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\frac{\exp\left(-i\frac{x+y}{2}s - i\frac{-x+y}{2}t\right)}{\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2}\,dt\,ds.
$$
The inner integral is [13]
$$
\int_{-\infty}^{\infty}\frac{\exp\left(-i\frac{-x+y}{2}t\right)}{\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2}\,dt
= \frac{\pi}{\kappa}\,\frac{\exp\left(-\gamma\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right)}{\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}}
$$
and the outer integral is [13]
$$
\int_{-\infty}^{\infty}\frac{\exp\left(-\gamma\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right)}{\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}}\exp\left(-i\frac{x+y}{2}s\right)ds
= \frac{2}{\lambda}\exp\left(\frac{x+y}{2}\delta\right)K_0\left(\frac{\alpha}{\lambda}\sqrt{\gamma^2\lambda^2 + \left(\frac{x+y}{2}\right)^2}\right)
$$
$$
= \frac{2}{\lambda}\exp\left(\frac{x+y}{2}\delta\right)K_0\left(\frac{\alpha}{2\lambda\kappa}\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\right).
$$
Multiplying by $\frac{\lambda^2\kappa^2}{(2\pi)^2}\cdot\frac{\pi}{\kappa}$, we obtain
$$
f_2(x,y) = \frac{\lambda\kappa}{2\pi}\exp\left(\frac{x+y}{2}\delta\right)K_0\left(\frac{\alpha}{2\lambda\kappa}\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\right).
$$
For example, if $\rho = \sigma = 1/2$, the probability that both $\hat{\gamma}_{A,C} > 0$ and $\hat{\gamma}_{B,C} > 0$ is $0.\ldots$
3. Sample Size n = 3

We wish to evaluate
$$
f_3(x,y) = \frac{\sqrt{2}\,\lambda^3\kappa^3}{(2\pi)^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\frac{\exp\left(-i\frac{x+y}{2}s - i\frac{-x+y}{2}t\right)}{\left(\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2\right)^{3/2}}\,dt\,ds.
$$
The inner integral is
$$
\int_{-\infty}^{\infty}\frac{\exp\left(-i\frac{-x+y}{2}t\right)}{\left(\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2\right)^{3/2}}\,dt
= -\frac{1}{\lambda^2(s-i\delta)}\frac{d}{ds}\int_{-\infty}^{\infty}\frac{\exp\left(-i\frac{-x+y}{2}t\right)}{\sqrt{\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2}}\,dt
$$
$$
= -\frac{1}{\lambda^2(s-i\delta)}\frac{d}{ds}\left[\frac{2}{\kappa}K_0\left(\frac{1}{\kappa}\left|\frac{-x+y}{2}\right|\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right)\right]
= \frac{2}{\kappa^2}\left|\frac{-x+y}{2}\right|\frac{K_1\left(\frac{1}{\kappa}\left|\frac{-x+y}{2}\right|\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right)}{\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}}
$$
and the outer integral is
$$
\int_{-\infty}^{\infty}\frac{K_1\left(\gamma\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right)}{\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}}\exp\left(-i\frac{x+y}{2}s\right)ds
= \frac{\pi}{\alpha\gamma\lambda}\exp\left(\frac{x+y}{2}\delta\right)\exp\left(-\frac{\alpha}{\lambda}\sqrt{\gamma^2\lambda^2 + \left(\frac{x+y}{2}\right)^2}\right)
$$
$$
= \frac{\pi}{\alpha\gamma\lambda}\exp\left(\frac{x+y}{2}\delta\right)\exp\left(-\frac{\alpha}{2\lambda\kappa}\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\right).
$$
Multiplying by $\frac{\sqrt{2}\,\lambda^3\kappa^3}{(2\pi)^2}\cdot\frac{2\gamma}{\kappa}$, we obtain
$$
f_3(x,y) = \frac{\lambda^2\kappa^2}{\sqrt{2}\,\pi\alpha}\exp\left(\frac{x+y}{2}\delta\right)\exp\left(-\frac{\alpha}{2\lambda\kappa}\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\right)
$$
$$
= \frac{1}{2\pi\sqrt{1-\sigma^2}}\exp\left(\frac{1}{\xi}\left(\rho x + \rho y - \sqrt{\eta}\sqrt{(1-\rho^2)x^2 - 2(\sigma-\rho^2)xy + (1-\rho^2)y^2}\,\right)\right),
$$
which is remarkably simple. For example, if $\rho = \sigma = 1/2$, the probability that both $\hat{\gamma}_{A,C} > 0$ and $\hat{\gamma}_{B,C} > 0$ is $0.\ldots$
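The strikingly simple $n = 3$ density can be checked by direct numerical normalization; a sketch assuming scipy, with arbitrary admissible parameters (the truncation at $\pm 40$ is mine and discards only a negligible exponential tail):

```python
import numpy as np
from scipy.integrate import dblquad

rho, sigma = 0.3, 0.4
xi = 1 - 2*rho**2 + sigma
eta = (1 + sigma)/(1 - sigma)

def f3(x, y):
    # closed-form n = 3 joint density
    Q = (1 - rho**2)*(x**2 + y**2) - 2*(sigma - rho**2)*x*y
    return np.exp((rho*x + rho*y - np.sqrt(eta*Q))/xi)/(2*np.pi*np.sqrt(1 - sigma**2))

total, _ = dblquad(lambda y, x: f3(x, y), -40, 40, lambda x: -40, lambda x: 40)
```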
4. Sample Size n = 4

We wish to evaluate
$$
f_4(x,y) = \frac{2\lambda^4\kappa^4}{(2\pi)^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\frac{\exp\left(-i\frac{x+y}{2}s - i\frac{-x+y}{2}t\right)}{\left(\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2\right)^2}\,dt\,ds.
$$
The inner integral is [13]
$$
\int_{-\infty}^{\infty}\frac{\exp\left(-i\frac{-x+y}{2}t\right)}{\left(\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2\right)^2}\,dt
= \frac{\pi}{2\kappa}\exp\left(-\gamma\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right)\left(\frac{\gamma}{\lambda^2(s-i\delta)^2 + \alpha^2} + \frac{1}{\left(\lambda^2(s-i\delta)^2 + \alpha^2\right)^{3/2}}\right)
$$
and the outer integral is
$$
\int_{-\infty}^{\infty}\left(\frac{\gamma}{\lambda^2(s-i\delta)^2 + \alpha^2} + \frac{1}{\left(\lambda^2(s-i\delta)^2 + \alpha^2\right)^{3/2}}\right)\exp\left(-\gamma\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right)\exp\left(-i\frac{x+y}{2}s\right)ds
$$
$$
= \frac{2}{\alpha\lambda^2}\exp\left(\frac{x+y}{2}\delta\right)\sqrt{\gamma^2\lambda^2 + \left(\frac{x+y}{2}\right)^2}\;K_1\left(\frac{\alpha}{\lambda}\sqrt{\gamma^2\lambda^2 + \left(\frac{x+y}{2}\right)^2}\right)
$$
$$
= \frac{1}{\alpha\lambda^2\kappa}\exp\left(\frac{x+y}{2}\delta\right)\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\;K_1\left(\frac{\alpha}{2\lambda\kappa}\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\right).
$$
Multiplying by $\frac{2\lambda^4\kappa^4}{(2\pi)^2}\cdot\frac{\pi}{2\kappa}$, we obtain
$$
f_4(x,y) = \frac{\lambda^2\kappa^2}{4\pi\alpha}\exp\left(\frac{x+y}{2}\delta\right)\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\;K_1\left(\frac{\alpha}{2\lambda\kappa}\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\right).
$$
For example, if $\rho = \sigma = 1/2$, the probability that both $\hat{\gamma}_{A,C} > 0$ and $\hat{\gamma}_{B,C} > 0$ is $0.\ldots$
5. Formula for Arbitrary n

We wish to evaluate
$$
f_n(x,y) = \frac{2^{(n-2)/2}\lambda^n\kappa^n}{(2\pi)^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\frac{\exp\left(-i\frac{x+y}{2}s - i\frac{-x+y}{2}t\right)}{\left(\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2\right)^{n/2}}\,dt\,ds.
$$
The inner integral is [14, 15, 16]
$$
\int_{-\infty}^{\infty}\frac{\exp\left(-i\frac{-x+y}{2}t\right)}{\left(\lambda^2(s-i\delta)^2 + \kappa^2 t^2 + \alpha^2\right)^{n/2}}\,dt
= \frac{2\sqrt{\pi}}{\Gamma(n/2)\,2^{(n-1)/2}\,\kappa^{(n+1)/2}}\left|\frac{-x+y}{2}\right|^{(n-1)/2}
\frac{K_{(n-1)/2}\left(\frac{1}{\kappa}\left|\frac{-x+y}{2}\right|\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right)}{\left(\lambda^2(s-i\delta)^2 + \alpha^2\right)^{(n-1)/4}}
$$
and the outer integral is
$$
\int_{-\infty}^{\infty}\frac{K_{(n-1)/2}\left(\gamma\sqrt{\lambda^2(s-i\delta)^2 + \alpha^2}\right)}{\left(\lambda^2(s-i\delta)^2 + \alpha^2\right)^{(n-1)/4}}\exp\left(-i\frac{x+y}{2}s\right)ds
$$
$$
= \frac{\sqrt{2\pi}}{\alpha^{(n-2)/2}\,\gamma^{(n-1)/2}\,\lambda^{n/2}}\exp\left(\frac{x+y}{2}\delta\right)\left(\gamma^2\lambda^2 + \left(\frac{x+y}{2}\right)^2\right)^{(n-2)/4}K_{(n-2)/2}\left(\frac{\alpha}{\lambda}\sqrt{\gamma^2\lambda^2 + \left(\frac{x+y}{2}\right)^2}\right)
$$
$$
= \frac{\sqrt{2\pi}}{2^{(n-2)/2}\,\alpha^{(n-2)/2}\,\gamma^{(n-1)/2}\,\lambda^{n/2}\,\kappa^{(n-2)/2}}\exp\left(\frac{x+y}{2}\delta\right)\left(\kappa^2(x+y)^2 + \lambda^2(x-y)^2\right)^{(n-2)/4}K_{(n-2)/2}\left(\frac{\alpha}{2\lambda\kappa}\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\right).
$$
Multiplying by $\frac{2^{(n-2)/2}\lambda^n\kappa^n}{(2\pi)^2}\cdot\frac{2\sqrt{\pi}\,(\gamma\kappa)^{(n-1)/2}}{\Gamma(n/2)\,2^{(n-1)/2}\,\kappa^{(n+1)/2}}$, we obtain that $f_n(x,y)$ is
$$
\frac{1}{\Gamma(n/2)}\,\frac{\lambda^{n/2}\kappa^{n/2}}{2^{n/2}\,\pi\,\alpha^{(n-2)/2}}\exp\left(\frac{x+y}{2}\delta\right)\left(\kappa^2(x+y)^2 + \lambda^2(x-y)^2\right)^{(n-2)/4}K_{(n-2)/2}\left(\frac{\alpha}{2\lambda\kappa}\sqrt{\kappa^2(x+y)^2 + \lambda^2(x-y)^2}\right),
$$
which indeed generalizes the cases $n = 1, 2, 3, 4$. Note finally that
$$
\begin{pmatrix} \left(\hat{\gamma}_{A,C} - n\rho\right)/\sqrt{n} \\ \left(\hat{\gamma}_{B,C} - n\rho\right)/\sqrt{n} \end{pmatrix}
\sim N\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix},\;\begin{pmatrix} \rho^2+1 & \rho^2+\sigma \\ \rho^2+\sigma & \rho^2+1 \end{pmatrix}\right)
$$
as $n \to \infty$.
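The asymptotic covariance matrix follows from Wick's theorem ($\operatorname{Var}(AC) = 1 + \rho^2$, $\operatorname{Cov}(AC, BC) = \sigma + \rho^2$) and can be confirmed by simulation. A sketch assuming numpy, with sample size and replication counts chosen by me:

```python
import numpy as np

rho, sigma = 0.3, 0.4
S = np.array([[1, sigma, rho], [sigma, 1, rho], [rho, rho, 1]])
n, reps = 50, 40_000

rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(3), S, size=(reps, n))
g_ac = (X[:, :, 0]*X[:, :, 2]).sum(axis=1)  # sample covariance sums
g_bc = (X[:, :, 1]*X[:, :, 2]).sum(axis=1)
Z = np.column_stack(((g_ac - n*rho)/np.sqrt(n), (g_bc - n*rho)/np.sqrt(n)))

emp = np.cov(Z.T)                    # empirical covariance across replications
target = np.array([[rho**2 + 1, rho**2 + sigma],
                   [rho**2 + sigma, rho**2 + 1]])
```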
6. Closing Words
Testing the hypothesis $H_0: \gamma_{A,C} = \gamma_{B,C}$ can be done by examining the difference
$$
\hat{\gamma}_{A,C} - \hat{\gamma}_{B,C} = \hat{\gamma}_{A-B,C} = \sum_{j=1}^{n}\left(A_j - B_j\right)C_j.
$$
If $H_0$ is true, then
$$
\begin{pmatrix} A-B \\ C \end{pmatrix} \sim N\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix},\;\begin{pmatrix} 2-2\sigma & 0 \\ 0 & 1 \end{pmatrix}\right)
$$
and is independent of $\rho$; further, the density of $\hat{\gamma}_{A-B,C}$ is
$$
\frac{|x/2|^{(n-1)/2}}{\sqrt{\pi}\,(2-2\sigma)^{(n+1)/4}\,\Gamma(n/2)}\,K_{(n-1)/2}\left(\frac{|x|}{\sqrt{2-2\sigma}}\right)
$$
via known results [7] on Gaussian inner products. A considerable literature exists on the harder problem of testing $\tilde{H}_0: \rho_{A,C} = \rho_{B,C}$, where variances are unknown and underlying distributions might not be normal [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50].

We have not attempted to evaluate the inverse Fourier transform of
$$
G(u,v,w) = \det\left(I - 2i
\begin{pmatrix} 1 & \sigma & \rho \\ \sigma & 1 & \rho \\ \rho & \rho & 1 \end{pmatrix}
\begin{pmatrix} 0 & u/2 & v/2 \\ u/2 & 0 & w/2 \\ v/2 & w/2 & 0 \end{pmatrix}
\right)^{-1/2}.
$$
There is no analog of Miller's result [1], as far as we know, giving a joint density $g(x,y,z)$ for $(AB, AC, BC)$. Hence no distributional insight on $(\hat{\gamma}_{A,B}, \hat{\gamma}_{A,C}, \hat{\gamma}_{B,C})$ seems to be available here. Interestingly, a formula for a joint density for $(\hat{\rho}_{A,B}, \hat{\rho}_{A,C}, \hat{\rho}_{B,C})$ is outlined in [51, 52] – evidently a minimum sample size is required.
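The null density of $\hat{\gamma}_{A-B,C}$ can be checked by numerical normalization for several $n$ at once. A sketch assuming scipy is available (the split at zero keeps the adaptive quadrature away from the logarithmic singularity of $K_0$ in the $n = 1$ case):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma

sigma = 0.4
s2 = 2 - 2*sigma   # variance of A - B under the null hypothesis

def density(x, n):
    # density of the sum of n products (A_j - B_j) C_j under H_0
    return (np.abs(x/2)**((n - 1)/2)
            /(np.sqrt(np.pi)*s2**((n + 1)/4)*gamma(n/2))
            *kv((n - 1)/2, np.abs(x)/np.sqrt(s2)))

totals = [quad(lambda x: density(x, n), -np.inf, 0)[0]
          + quad(lambda x: density(x, n), 0, np.inf)[0]
          for n in (1, 2, 3)]
```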
7. Acknowledgement
I am grateful to Robert Israel for a helpful discussion about residue calculus. Much more relevant material can be found at [53, 54], including experimental computer runs that aided the theoretical discussion here.
References

[1] K. S. Miller, Some multivariate density functions of products of Gaussian variates, Biometrika 52 (1965) 645–646; MR0207014.
[2] T. W. Anderson, An Introduction to Multivariate Statistical Analysis, 3rd ed., Wiley, 2003, pp. 258–259; MR1990662 (2004c:62001).
[3] K. Pearson, G. B. Jeffery and E. M. Elderton, On the distribution of the first product moment-coefficient, in samples drawn from an indefinitely large normal population, Biometrika 21 (1929) 164–193.
[4] J. Wishart and M. S. Bartlett, The distribution of second order moment statistics in a normal system, Proc. Cambridge Philos. Soc. 28 (1932) 455–459.
[5] H. O. Hirschfeld, The distribution of the ratio of covariance estimates in two samples drawn from normal bivariate populations, Biometrika 29 (1937) 65–79.
[6] P. C. Mahalanobis, R. C. Bose and S. N. Roy, Normalization of statistical variates and the use of rectangular coordinates in the theory of sampling distributions, Sankhya 3 (1937) 1–40.
[7] Annals Math. Statist. 34 (1963) 903–910; MR0150860.
[8] Annals Inst. Statist. Math. 19 (1967) 355–361; MR0219160.
[9] M. D. Springer, The Algebra of Random Variables, Wiley, 1979, pp. 339–343; MR0519342 (80h:60029).
[10] A. H. Joarder and M. H. Omar, Some characteristics of sample covariance, technical report (2008), http://aisys.kfupm.edu.sa/MATH_ONLY/TechReports_DATA/401.pdf.
[11] S. J. Press, Applied Multivariate Analysis, 2nd ed., Krieger, 1982, pp. 107–113; MR0420970.
[12]
[13] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 7th ed., Elsevier/Academic Press, 2007, pp. 424, 427, 430, 435, 491, 722 & 738; MR2360010 (2008g:00005).
[14] T. M. MacRobert, The modified Bessel function K_n(z), Proc. Edinburgh Math. Soc. 38 (1919) 10–19.
[15] G. N. Watson, A Treatise on the Theory of Bessel Functions, 2nd ed., Cambridge University Press, 1952, pp. 172–173, 181–183, 185–188; MR1349110 (96i:33010).
[16] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover Publications, 1992, p. 376; MR1225604 (94b:00012).
[17] K. Pearson and L. N. G. Filon, Mathematical contributions to the theory of evolution. IV. On the probable errors of frequency constants and on the influence of random selection on variation and correlation, Philos. Trans. Royal Soc. London Ser. A 191 (1898) 229–311.
[18] H. Hotelling, The selection of variates for use in prediction with some comments on the general problem of nuisance parameters, Annals Math. Statist. 11 (1940) 271–283; MR0002756 (2,111a).
[19] E. J. Williams, The comparison of regression variables, J. Royal Statist. Soc. Ser. B 21 (1959) 396–399; MR0113255.
[20] Biometrics 15 (1959) 135–136.
[21] E. J. Williams, Regression Analysis, Wiley, 1959, pp. 72–89; MR0112212.
[22] Annals Math. Statist. 34 (1963) 149–151; MR0145617.
[23] Biometrika 55 (1968) 513–517.
[24] M. A. Aitkin, W. C. Nelson and K. H. Reinfurt, Tests for correlation matrices, Biometrika 55 (1968) 327–334; MR0232493.
[25] Biometrics 24 (1968) 987–995.
[26] O. J. Dunn and V. Clark, Correlation coefficients measured on the same individuals, J. Amer. Stat. Assoc. 64 (1969) 366–377.
[27] L. C. A. Corsten, On a test for the difference between two correlation coefficients, Mededelingen Landbouwhogeschool Wageningen, v. 70 (1970) n. 13, 1–21.
[28] I. Olkin, Correlations revisited, Improving Experimental Design and Statistical Analysis, Proc. 7th Phi Delta Kappa Symposium on Educational Research, Madison, ed. J. C. Stanley, Rand McNally, 1967, pp. 102–128.
[29] I. Olkin and M. Siotani, Asymptotic distribution of functions of a correlation matrix, Essays in Probability and Statistics, Shinko Tsusho, 1976, pp. 235–251; MR0603847 (82b:62061).
[30] r vs. r compared with Hotelling's method, Amer. Educational Research J.
[31] "r vs. r compared with Hotelling's method", Amer. Educational Research J.
[32] O. J. Dunn and V. Clark, Comparison of tests of the equality of dependent correlation coefficients, J. Amer. Stat. Assoc. 66 (1971) 904–908.
[33] G. T. Duncan and M. W. J. Layard, A Monte-Carlo study of asymptotically robust tests for correlation coefficients, Biometrika 60 (1973) 551–558; MR0339407.
[34] Biometrics 31 (1975) 531–543; MR0373162.
[35] Biometrika 63 (1976) 214–215.
[36] D. A. Wolfe, A distribution-free test for related correlation coefficients, Technometrics 19 (1977) 507–509.
[37] S. C. Choi, Tests of equality of dependent correlation coefficients, Biometrika.
[38] Statistische Hefte.
[39]
[40] J. H. Steiger, Tests for comparing elements of a correlation matrix, Psychological Bulletin 87 (1980) 245–251.
[41] M. C. Yu and O. J. Dunn, Robust tests for the equality of two correlation coefficients: A Monte Carlo study, Educational and Psychological Measurement 42 (1982) 987–1004.
[42] J. E. Boyer, A. D. Palachek and W. R. Schucany, An empirical study of related correlation coefficients, J. Educational Statistics (1983) 75–86.
[43] X.-L. Meng, R. Rosenthal and D. B. Rubin, Comparing correlated correlation coefficients, Psychological Bulletin 111 (1992) 172–175.
[44] T. E. Raghunathan, R. Rosenthal and D. B. Rubin, Comparing correlated but nonoverlapping correlation coefficients, Psychological Methods.
[45] J. Quality & Quantity 37 (2003) 99–110.
[46] K. May and J. B. Hittner, A note on statistics for comparing dependent correlations, Psychological Reports 80 (1997) 475–480.
[47] K. May and J. B. Hittner, Tests for comparing dependent correlations revisited: A Monte Carlo study, J. Experimental Education 65 (1997) 257–269.
[48] N. C. Silver, K. May and J. B. Hittner, A Monte Carlo evaluation of tests for comparing dependent correlations, J. General Psychology 130 (2003) 149–168.
[49] N. C. Silver, J. B. Hittner and K. May, Testing dependent correlations with nonoverlapping variables: A Monte Carlo simulation, J. Experimental Education 73 (2004) 53–69.
[50] R. Wilcox and T. Tian, Comparing dependent correlations, J. General Psychology.
[51] Sankhya Ser. A 24 (1962) 1–8; MR0145605.
[52] Sankhya Ser. B