Copula-based measures of asymmetry between the lower and upper tail probabilities
Shogo Kato*,a, Toshinao Yoshiba b,c and Shinto Eguchi a
a Institute of Statistical Mathematics, b Tokyo Metropolitan University, c Bank of Japan
August 4, 2020
Abstract
We propose a copula-based measure of asymmetry between the lower and upper tail probabilities of bivariate distributions. The proposed measure has a simple form and possesses some desirable properties as a measure of asymmetry. The limit of the proposed measure as the index goes to the boundary of its domain can be expressed in a simple form under certain conditions on copulas. A sample analogue of the proposed measure for a sample from a copula is presented and its weak convergence to a Gaussian process is shown. Another sample analogue of the presented measure, which is based on a sample from a distribution on R^2, is given. Simple methods for interval estimation and nonparametric testing based on the two sample analogues are presented. As an example, the presented measure is applied to daily returns of S&P500 and Nikkei225.

Keywords: Asymptotic theory; Bootstrap; Extreme value theory; Gaussian process; Stock daily return.
1 Introduction

In statistical analysis of multivariate data, it is often the case that the data have a complex dependence structure among variables. As a statistical tool for analyzing such data, copulas have gained popularity in various academic fields, especially finance, actuarial science and survival analysis (see, e.g., Joe, 1997, 2014; Nelsen, 2006; McNeil et al., 2015). A copula is a multivariate cumulative distribution function with uniform [0, 1] margins. The bivariate case of Sklar's theorem states that, for a bivariate cumulative distribution function F with margins F_1 and F_2, there exists a copula C such that F(x_1, x_2) = C(F_1(x_1), F_2(x_2)). Hence a copula can be used as a model for dependence structure and is applicable for flexible modeling. Another important advantage of using copulas is that the margins and the dependence structure can be modeled separately.

*Address for correspondence: Shogo Kato, Institute of Statistical Mathematics, 10-3 Midori-cho, Tachikawa, Tokyo 190-8562, Japan. E-mail: [email protected]

Throughout the paper, a copula means a bivariate cumulative distribution function with uniform [0, 1] margins. Let 𝒞 be the set of all bivariate copulas. For C ∈ 𝒞, let C̄ denote the associated survival function, defined by C̄(u_1, u_2) = 1 − u_1 − u_2 + C(u_1, u_2), and let Ĉ denote the survival copula of C, given by Ĉ(u_1, u_2) = C̄(1 − u_1, 1 − u_2). Define ū by ū = 1 − u.

2 Definition and basic properties
In this section we propose a measure for comparing the probabilities of the lower and upper tails of bivariate distributions. The proposed measure is defined as follows.
Definition 1.
Let (X_1, X_2) be an R^2-valued random vector. Assume X_1 and X_2 have continuous margins F_1 and F_2, respectively. Then a measure of comparison between the lower-left and upper-right tail probabilities of (X_1, X_2) is defined by

α(u) = log( P(F_1(X_1) > 1 − u, F_2(X_2) > 1 − u) / P(F_1(X_1) ≤ u, F_2(X_2) ≤ u) ),  0 < u ≤ 0.5.

Here the definition of the logarithm is extended so that log(x/y) = −∞ if x = 0 and y > 0, log(x/y) = ∞ if x > 0 and y = 0, and log(x/y) = 0 if x = y = 0.

Similarly, it is possible to define a measure to compare the lower-right and upper-left tail probabilities of bivariate distributions. Properties of this measure follow immediately from those of α(u), which will be given hereafter, by replacing (X_1, X_2) by (X_1, −X_2).

The calculation of α(u) can be simplified if the distribution of (X_1, X_2) is represented in terms of a copula. The proof of the following proposition is straightforward and therefore omitted.

Proposition 1.
Let C denote the copula of (X_1, X_2), given by C(u_1, u_2) = P(F_1(X_1) ≤ u_1, F_2(X_2) ≤ u_2). Then α(u) defined in Definition 1 can be expressed as

α(u) = log( {2u − 1 + C(1 − u, 1 − u)} / C(u, u) ).  (1)

Note that, using the survival function C̄ associated with C and ū = 1 − u, the proposed measure (1) has the simpler expression

α(u) = log( C̄(ū, ū) / C(u, u) ).

Throughout this paper, the lower [0, u]^2 tail and the upper [1 − u, 1]^2 tail of the copula C are said to be symmetric if C(u, u) = C̄(ū, ū).

Unlike many existing measures, the proposed measure (1) is not a global measure but a local one, in the sense that it focuses on the probability of a subdomain of the copula regulated by the index u. By setting a particular value of u, or by looking at the behavior of α(u) for multiple choices of u, the proposed measure (1) provides a different insight from global measures. For more details on the comparison between the proposed measure and existing ones, see Section 6.

It is straightforward to see that the following basic properties hold for α(u).
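Expression (1) depends on the copula only through its diagonal section, so α(u) is easy to evaluate numerically. A minimal sketch (Python; the function and copula names below are ours, not the paper's) using the independence copula and a Clayton copula, the latter discussed later as (5):

```python
import math

def alpha(C, u):
    """Proposed measure (1): log of the upper [1-u,1]^2 tail probability
    over the lower [0,u]^2 tail probability, both written via the copula C."""
    p_L = C(u, u)                                  # P(U1 <= u, U2 <= u)
    p_U = 2.0 * u - 1.0 + C(1.0 - u, 1.0 - u)      # P(U1 > 1-u, U2 > 1-u)
    if p_L == 0.0 and p_U == 0.0:
        return 0.0                                 # convention log(0/0) = 0
    if p_L == 0.0:
        return math.inf
    if p_U == 0.0:
        return -math.inf
    return math.log(p_U / p_L)

indep = lambda u1, u2: u1 * u2                             # independence copula
clayton2 = lambda u1, u2: (u1**-2 + u2**-2 - 1.0) ** -0.5  # Clayton, theta = 2

print(alpha(indep, 0.1))     # 0 up to rounding: the two tails are symmetric
print(alpha(clayton2, 0.1))  # negative: the lower tail is heavier
```

At u = 0.5 the numerator and denominator of (1) coincide, so the function returns exactly zero for any copula, matching the limit (2) discussed below.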
Proposition 2. Let 𝒞 be the set of all bivariate copulas. Denote the measure α(u) for the copula C ∈ 𝒞 by α_C(u). Write p_L = C(u, u) and p_U = C̄(ū, ū), and let C^P(u, v) = C(v, u) be the permuted copula of C. Then, for 0 < u ≤ 0.5, we have that:

(i) −∞ ≤ α_C(u) ≤ ∞ for every C ∈ 𝒞; the equality holds only when either p_U = 0 or p_L = 0;
(ii) α_C(u) = 0 if and only if p_L = p_U;
(iii) for fixed p_U, α_C(u) is monotonically non-increasing with respect to p_L; similarly, for fixed p_L, α_C(u) is monotonically non-decreasing with respect to p_U;
(iv) α_Ĉ(u) = −α_C(u) for every C ∈ 𝒞, where Ĉ is the survival copula of C;
(v) α_{C^P}(u) = α_C(u) for every C ∈ 𝒞;
(vi) if C ∈ 𝒞 and {C_n} is a sequence of copulas such that C_n → C uniformly, then α_{C_n}(u) → α_C(u).

Property (i) implies that the proposed measure is potentially unbounded, although it is bounded except in the unusual case p_U = 0 or p_L = 0. Compared with a similar measure based on the difference between p_U and p_L, our measure has the advantage of sensitivity in detecting asymmetry of the tail probabilities for small u; see Section 6.2 for details. Property (ii) implies that α_C(u) = 0 for any 0 < u ≤ 0.5 if C is radially symmetric, namely, C ≡ Ĉ. Property (ii) is the same as an axiom of Dehgani et al. (2013) and is an extended property of Rosco and Joe (2013). Properties (iv)–(vi) are the same as the axioms of tail asymmetry presented in Section 2 of Rosco and Joe (2013). It is possible to use any function of p_U/p_L other than the logarithm as a measure of tail asymmetry. However, one nice property of the proposed measure is property (iv), which other functions of p_U/p_L do not have in general.

3 Limits of the proposed measure

We consider limits of the proposed measure (1) as the index goes to the boundary of its domain. It follows from the expression (1) that α(0.5) = 0 for any copula C ∈ 𝒞. Therefore we have

lim_{u ↑ 0.5} α(u) = 0.  (2)

The limiting behavior of α(u) as u ↓ 0 is summarized by

α(0) = lim_{u ↓ 0} α(u),  (3)

given that the limit exists. Here we present three expressions for the limit (3).

The first expression is based on the tail dependence coefficients, which are often used as local dependence measures of bivariate distributions. The lower-left and upper-right tail dependence coefficients of the random variables X_1 and X_2 are defined by

λ_L = lim_{u ↓ 0} P(F_1(X_1) ≤ u, F_2(X_2) ≤ u)/u  and  λ_U = lim_{u ↑ 1} P(F_1(X_1) > u, F_2(X_2) > u)/(1 − u),

respectively, given the limits exist. If (X_1, X_2) has the copula C, the expressions for λ_L and λ_U simplify to

λ_L = lim_{u ↓ 0} C(u, u)/u  and  λ_U = lim_{u ↑ 1} C̄(u, u)/(1 − u),  (4)

respectively (see, e.g., Joe, 2014, Section 2.13).

Theorem 1. Let (X_1, X_2) be an R^2-valued random vector with the copula C. Assume that the lower-left and upper-right tail dependence coefficients of X_1 and X_2 exist and are given by λ_L and λ_U, respectively. Suppose that either λ_L or λ_U is not equal to zero. Then α(0) = log(λ_U/λ_L).

See Supplementary Material for the proof. Theorem 1 can be generalized by utilizing the concepts of tail orders and tail order parameters. If there exist a constant κ_L > 0 and a slowly varying function ℓ_L(u) such that C(u, u) ∼ u^{κ_L} ℓ_L(u) (u ↓ 0), then κ_L is called the lower tail order of C and Υ_L = lim_{u ↓ 0} ℓ_L(u) is called the lower tail order parameter of C, where f(u) ∼ g(u) (u ↓ 0) is defined by lim_{u ↓ 0} f(u)/g(u) = 1. Similarly, the upper tail order κ_U and the upper tail order parameter Υ_U of C are defined as the lower tail order and the lower tail order parameter of the survival copula Ĉ, respectively. See Joe (2014, Section 2.16) for more details on tail orders and tail order parameters. Using these concepts, we have the following result. The proof is given in Supplementary Material.

Theorem 2. Let κ_L and κ_U be the lower and upper tail orders of the copula C, respectively. Then α(0) = −∞ if κ_L < κ_U and α(0) = ∞ if κ_L > κ_U. If κ_L = κ_U and either the lower tail order parameter Υ_L or the upper tail order parameter Υ_U of C is not equal to zero, then α(0) = log(Υ_U/Υ_L).

Note that Theorem 2 with κ_L = κ_U = 1 reduces to Theorem 1. Theorems 1 and 2 are useful for evaluating α(0) if we already know the tail dependence coefficients or the tail orders and tail order parameters of a copula. If those values are not known, the following third expression for α(0) could be useful.
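The copula-based expression (4) can be checked numerically: C(u, u)/u should approach λ_L as u ↓ 0. A small sketch under the assumption of a Clayton copula with θ = 2, whose lower tail dependence coefficient 2^{−1/θ} is stated in the discussion of the Clayton copula below (names are illustrative, not from the paper):

```python
theta = 2.0
C = lambda u1, u2: (u1**-theta + u2**-theta - 1.0) ** (-1.0 / theta)  # Clayton

# lambda_L = lim_{u -> 0} C(u, u)/u; the closed form for Clayton is 2^{-1/theta}
for u in (1e-2, 1e-4, 1e-6):
    print(u, C(u, u) / u)
print("closed form:", 2.0 ** (-1.0 / theta))
```

The printed ratios converge quickly to 2^{−1/2} ≈ 0.7071, in line with Theorem 1, which then gives α(0) = log(λ_U/λ_L) whenever one of the coefficients is nonzero.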
Theorem 3. Let (X_1, X_2) be an R^2-valued random vector with the copula C. Suppose that there exists ε > 0 such that c(u) = dC(t, t)/dt |_{t=u} exists in (0, ε) ∪ (1 − ε, 1). Assume that lim_{u ↓ 0} C(u, u) = lim_{u ↓ 0} C̄(ū, ū) = 0. Then

α(0) = log( lim_{u ↓ 0} {2 − c(1 − u)} / c(u) ),

given the limit exists.

See Supplementary Material for the proof. As will be seen in the next section, Theorem 3 can be utilized to calculate α(0) for the Clayton copula and the Ali-Mikhail-Haq copula.

4 The proposed measure for selected copulas

In this section we discuss the values of the proposed measure α(u) for some existing copulas. It is useful to plot α(u) with respect to u for comparing the probabilities of the lower [0, u]^2 tail and the upper [1 − u, 1]^2 one over the whole range u ∈ (0, 0.5].

Figure 1: Plots of α(u) for: (a) Clayton copula (5) with respect to u for θ = 1 (solid), θ = 5 (dashed), θ = 10 (dotted), and θ = 20 (dotdashed); (b) Ali-Mikhail-Haq copula (6) with respect to u for four values of θ (solid, dashed, dotted, dotdashed); and (c) Ali-Mikhail-Haq copula (6) with respect to θ for u = 0.01 (solid), u = 0.05 (dashed), and two larger values of u (dotted, dotdashed).

Proposition 2 implies that α(u) = 0 for any u ∈ [0, 0.5] if C(u, u) = C̄(ū, ū) for any u ∈ [0, 0.5]. Radially symmetric copulas satisfy this condition; examples include the Gaussian copula, the t-copula, the Plackett copula and the FGM copula. Among well-known Archimedean copulas, the Frank copula has a radially symmetric shape and therefore α(u) = 0 for any u.

There exist various copulas for which α(u) is not equal to zero in general. Many Archimedean copulas have asymmetric tails, including the Clayton copula, the Gumbel copula, the Ali-Mikhail-Haq copula and the two-parameter BB copulas. In addition, some asymmetric extensions of the Gaussian copula and the t-copula have been proposed recently. Such extensions include the skew-normal copulas and skew-t copulas discussed in Joe (2006) and Yoshiba (2018), for which α(u) is not equal to zero in general. As examples of copulas with asymmetric tails, here we discuss the values of α(u) for three well-known copulas, namely, the Clayton copula, the Ali-Mikhail-Haq copula and the BB7 copula.

Clayton copula:
The Clayton copula is defined by

C_cl(u_1, u_2; θ) = max(u_1^{−θ} + u_2^{−θ} − 1, 0)^{−1/θ},  (5)

where θ ∈ [−1, ∞) \ {0}. Figure 1(a) plots the values of α(u) as a function of u for four positive values of θ. (For an intuitive understanding of the distributions of the Clayton copula, see Figure S1(a) and (b) of Supplementary Material, which plot random variates from the Clayton copula for two of the parameter values used in Figure 1.) As is clear from equation (2), α(0.5) = 0 for any θ. The smaller the value of u, the smaller the value of α(u). The figure also suggests that, for a fixed value of u, the value of α(u) approaches zero as θ increases. The upper tail dependence coefficient of the Clayton copula is 0, and the lower tail dependence coefficient is 2^{−1/θ} for θ > 0 and 0 for θ ≤ 0. Therefore, for θ > 0, Theorem 1 implies that α(0) = −∞, meaning that the lower tail dependence is considerably stronger than the upper one. If θ ∈ [−1, 0), both tail dependence coefficients vanish and Theorem 1 is not applicable; in this case α(0) = ∞.
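The behavior just described can be reproduced directly from (5) and (1). The sketch below (helper names are ours) evaluates α(u) for θ = 5, and checks the θ ∈ [−1, 0) case, where the lower tail probability vanishes near the corner so that α(u) = ∞ under the extended logarithm convention:

```python
import math

def clayton(u1, u2, theta):
    # Clayton copula (5); the max(., 0) matters only for theta < 0
    s = u1 ** -theta + u2 ** -theta - 1.0
    return max(s, 0.0) ** (-1.0 / theta)

def alpha(u, theta):
    p_L = clayton(u, u, theta)                              # lower tail prob
    p_U = 2.0 * u - 1.0 + clayton(1.0 - u, 1.0 - u, theta)  # upper tail prob
    if p_L == 0.0:
        return math.inf if p_U > 0.0 else 0.0
    if p_U == 0.0:
        return -math.inf
    return math.log(p_U / p_L)

for u in (0.01, 0.1, 0.25):
    print(u, alpha(u, 5.0))   # increasingly negative as u decreases
print(alpha(0.1, -0.5))       # inf: for theta in [-1, 0) the lower tail is empty
```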
Ali-Mikhail-Haq copula: The Ali-Mikhail-Haq copula is of the form

C(u_1, u_2) = u_1 u_2 / {1 − θ (1 − u_1)(1 − u_2)},  (6)

where θ ∈ [−1, 1). The values of α(u) as a function of u and of θ are exhibited in Figure 1(b) and (c), respectively. (See Figure S1(d) and (e) of Supplementary Material for plots of random variates generated from the Ali-Mikhail-Haq copula for two of the parameter values used in Figure 1(b).) Figure 1(b) suggests that α(u) decreases as u decreases. Also, it appears that, for a fixed value of u, the greater the value of θ, the smaller the value of α(u). This observation can be seen more clearly in Figure 1(c), which plots the values of α(u) as a function of θ. Since both the lower and upper tail dependence coefficients of this copula are equal to zero, one cannot apply Theorem 1 for the calculation of α(0). However, Theorems 2 and 3 are applicable in this case, and we have the simple form α(0) = log(1 − θ^2).
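The limit α(0) = log(1 − θ^2) can be verified numerically from (6) and (1); a sketch with θ = 0.5 (illustrative names, not from the paper):

```python
import math

theta = 0.5
amh = lambda u1, u2: u1 * u2 / (1.0 - theta * (1.0 - u1) * (1.0 - u2))  # (6)

def alpha(u):
    p_L = amh(u, u)                                # lower [0,u]^2 tail
    p_U = 2.0 * u - 1.0 + amh(1.0 - u, 1.0 - u)    # upper [1-u,1]^2 tail
    return math.log(p_U / p_L)

for u in (0.1, 0.01, 1e-4):
    print(u, alpha(u))                             # decreasing toward the limit
print("log(1 - theta**2) =", math.log(1.0 - theta ** 2))
```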
BB7 copula: Finally, consider the BB7 copula of Joe and Hu (1996), defined by

C(u_1, u_2) = 1 − [ 1 − { (1 − ū_1^θ)^{−δ} + (1 − ū_2^θ)^{−δ} − 1 }^{−1/δ} ]^{1/θ},  (7)

where δ > 0 and θ ≥ 1. Unlike the last two copulas, this model has two parameters. The parameter δ controls the lower tail dependence coefficient, while θ regulates the upper one. Indeed, the lower and upper tail dependence coefficients are known to be 2^{−1/δ} and 2 − 2^{1/θ}, respectively. It follows from Theorem 1 that α(0) = log(2 − 2^{1/θ}) + δ^{−1} log 2. Figure 2 displays a plot of α(u) with respect to u for four selected values of (δ, θ) and a contour plot of α(0.01) with respect to (δ, θ). Note that the values of (δ, θ) in Figure 2(a) are chosen so that the lower and upper tail dependence coefficients are around 0.5 and 0.9. When the lower and upper tail dependence coefficients are close to each other, the values of α(u) are close to zero for any u. When the difference between the lower and upper tail dependence coefficients is large, α(u) appears to be monotonic with respect to u. It can be seen from Figure 2(b) that α(0.01) monotonically decreases as δ increases. Also, α(0.01) monotonically increases with θ. The two contours α(0.01) = 0 and α(0) = 0 show somewhat similar shapes, implying that α(0.01) = 0 is a reasonable approximation to α(0) = 0.

Figure 2: (a) Plot of α(u) for the BB7 copula (7) with respect to u for four selected values of (δ, θ) (solid, dashed, dotted, dotdashed). (b) Contour plot of α(0.01) (solid) and the contour α(0) = 0 (dashed) with respect to (δ, θ).

5 Estimation of α(u)

In practice, it is often the case that the form of the copula C(u_1, u_2) underlying the data is not known. In such a case, we need to estimate α(u) from the data. In Sections 5.1–5.3 we propose a sample analogue of α(u) based on a sample from the copula. Section 5.4 presents a sample analogue of α(u) based on a sample from a distribution on R^2. A comparison between the two proposed sample analogues of α(u) is discussed via a simulation study in Section 5.5.

5.1 Sample analogue of α(u) based on a sample from a copula

A sample analogue of α(u) based on a sample from a copula is defined as follows.

Definition 2.
Let (U_11, U_21), ..., (U_1n, U_2n) be a random sample from a copula. Then we define a sample analogue of α(u) by

α̂(u) = log( T_U(u) / T_L(u) ),

where

T_L(u) = (1/n) Σ_{i=1}^n 1(U_1i ≤ u, U_2i ≤ u),  T_U(u) = (1/n) Σ_{i=1}^n 1(U_1i ≥ 1 − u, U_2i ≥ 1 − u),

and 1(·) is the indicator function, i.e., 1(A) = 1 if A is true and 1(A) = 0 otherwise.

In Sections 5.1–5.3, we assume that (U_11, U_21), ..., (U_1n, U_2n) is an iid sample from the copula C(u_1, u_2). For iid R^2-valued random vectors (X_11, X_21), ..., (X_1n, X_2n), if the margins of X_1 and X_2 are known to be F_1 and F_2, respectively, then α̂(u) can be obtained by replacing (U_1j, U_2j) by (F_1(X_1j), F_2(X_2j)) (j = 1, ..., n).

The goal of this subsection is to investigate some properties of α̂(u). To achieve this, we first show the following lemma. See Supplementary Material for the proof.

Lemma 1. For 0 < u, v ≤ 0.5, we have the following:

E[T_L(u)] = C_u,  E[T_U(u)] = C̄_ū,
var[T_L(u)] = (1/n) C_u (1 − C_u),  var[T_U(u)] = (1/n) C̄_ū (1 − C̄_ū),
cov[T_L(u), T_L(v)] = (1/n) C_{u∧v} (1 − C_{u∨v}),
cov[T_L(u), T_U(v)] = −(1/n) C_u C̄_v̄,
cov[T_U(u), T_U(v)] = (1/n) C̄_{ū∨v̄} (1 − C̄_{ū∧v̄}),

where u ∧ v = min(u, v), u ∨ v = max(u, v), v̄ = 1 − v, C_w = C(w, w) and C̄_w = C̄(w, w).

This lemma implies that α̂(u) is a consistent estimator of α(u). Applying this lemma, we obtain the following asymptotic result; the proof is given in Supplementary Material.
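Definition 2, together with the variance estimator derived in Section 5.2, translates directly into code. The following sketch (Python; function names are ours, not the paper's) computes α̂(u), σ̂(u) and the asymptotic interval (9):

```python
import math

def tails(sample, u):
    """Empirical tail probabilities T_L(u) and T_U(u) of Definition 2."""
    n = len(sample)
    t_L = sum(1 for u1, u2 in sample if u1 <= u and u2 <= u) / n
    t_U = sum(1 for u1, u2 in sample if u1 >= 1.0 - u and u2 >= 1.0 - u) / n
    return t_L, t_U

def alpha_hat(sample, u):
    t_L, t_U = tails(sample, u)
    if t_L == 0.0 or t_U == 0.0:   # extended logarithm conventions
        return 0.0 if t_L == t_U else (math.inf if t_L == 0.0 else -math.inf)
    return math.log(t_U / t_L)

def asymptotic_ci(sample, u, z=1.645):
    """100(1-p)% interval (9); z = z_{p/2}, e.g. 1.645 for p = 0.10."""
    n = len(sample)
    t_L, t_U = tails(sample, u)
    sigma = math.sqrt((t_L + t_U) / (t_L * t_U))   # sigma-hat of Section 5.2
    a = math.log(t_U / t_L)
    return a - z * sigma / math.sqrt(n), a + z * sigma / math.sqrt(n)

# toy pseudo-sample: 3 of 8 points in [0, 0.3]^2 and 1 of 8 in [0.7, 1]^2
pts = [(0.05, 0.10), (0.20, 0.25), (0.28, 0.15), (0.80, 0.90),
       (0.40, 0.60), (0.55, 0.45), (0.35, 0.75), (0.65, 0.30)]
print(alpha_hat(pts, 0.3))        # log((1/8)/(3/8)) = -log 3
print(asymptotic_ci(pts, 0.3))
```

With only eight points the interval is very wide; the asymptotic theory below is intended for large n.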
Theorem 4. Define A_n(u) = √n {α̂(u) − α(u)}, 0 < u ≤ 0.5. Then, as n → ∞, {A_n(u) | 0 < u ≤ 0.5} converges weakly to a centered Gaussian process with covariance function

σ(u, v) ≡ E[A_n(u) A_n(v)] = { C(u∨v, u∨v) + C̄(ū∧v̄, ū∧v̄) } / { C(u∨v, u∨v) · C̄(ū∧v̄, ū∧v̄) }.  (8)

We note that the covariance function (8) is monotonically decreasing with respect to u ∨ v. If u ∨ v = 0.5, the covariance function (8) reaches its minimum value 2/C(0.5, 0.5).

5.2 Interval estimation based on α̂(u)

An asymptotic interval estimator of α(u) can be obtained by applying the asymptotic results of the previous subsection. Theorem 4 implies that, for fixed u ∈ (0, 0.5],

√n {α̂(u) − α(u)} →d N(0, σ²(u))  (n → ∞),

where σ²(u) = σ(u, u) and σ(u, v) is defined as in equation (8). Since σ²(u) involves the copula C, which is usually unknown in practice, we use the estimator of σ(u) defined by

σ̂(u) = sqrt[ {T_L(u) + T_U(u)} / {T_L(u) · T_U(u)} ].

It follows from Lemma 1 that σ̂(u) → σ(u) almost surely as n → ∞. Then we have √n {α̂(u) − α(u)}/σ̂(u) →d N(0, 1) as n → ∞. Hence a 100(1 − p)% nonparametric asymptotic confidence interval for α(u) is

α̂(u) − z_{p/2} σ̂(u)/√n ≤ α(u) ≤ α̂(u) + z_{p/2} σ̂(u)/√n,  (9)

where z_{p/2} satisfies P(Z ≥ z_{p/2}) = p/2 with Z ∼ N(0, 1) and 0 < p < 1.

If the interest of statistical analysis is to construct an asymptotic confidence band for α(u) over a range of u, one can adopt a Bonferroni correction. This can be done by replacing p by p/n in equation (9) for u ∈ U, where U = {min(max(u_1i, u_2i), max(1 − u_1i, 1 − u_2i))}_{i=1}^n. The resulting asymptotic confidence band for α(u) with Bonferroni correction is

{ [ α̂(u) − z_{p/(2n)} σ̂(u)/√n, α̂(u) + z_{p/(2n)} σ̂(u)/√n ] | u_min ≤ u ≤ u_max },  (10)

where u_min = min U and u_max = max U. Since α̂(u) and σ̂(u) are step functions, it suffices to evaluate the bounds of the confidence intervals only for u ∈ U.

5.3 Hypothesis testing based on α̂(u)

Some hypothesis tests can be constructed from α̂(u). For a given value u = u_0, one can test H_0: α(u_0) = α_0 against H_1: α(u_0) ≠ α_0 by using the asymptotic confidence interval (9). Similarly, a one-sided test for the alternative hypothesis H_1: α(u_0) > α_0 or H_1: α(u_0) < α_0 can be derived by modifying (9).

If the interest of the analysis is to evaluate α(u) for multiple values of u, one can consider the test of H_0: α(u) = α_0(u) against H_1: α(u) ≠ α_0(u) for {u; u = u_1, ..., u_m}. One example of such a test is based on the asymptotic confidence band (10) with Bonferroni correction. However, since the confidence band based on the Bonferroni correction is known to be conservative, especially for dependent hypotheses, the test based on (10) is not powerful in general. Alternatively, the following result can be used to construct a test for multiple values of u. See Supplementary Material for the proof.

Theorem 5. Let a = √n {α̂(u_1) − α(u_1), ..., α̂(u_m) − α(u_m)}^T with u_1 < ··· < u_m. Suppose

Σ̂ = [ σ̂²(u_1)      σ̂(u_1, u_2)  ···  σ̂(u_1, u_m)
      σ̂(u_1, u_2)  σ̂²(u_2)      ···  σ̂(u_2, u_m)
      ···           ···           ···  ···
      σ̂(u_1, u_m)  σ̂(u_2, u_m)  ···  σ̂²(u_m) ],

where σ̂²(u_i) = σ̂(u_i, u_i) and σ̂(u_i, u_j) = {T_L(u_j) + T_U(u_j)}/{T_L(u_j) T_U(u_j)} (i ≤ j). Assume that Σ̂ is invertible. Then a^T Σ̂^{−1} a →d χ²(m) as n → ∞, where χ²(m) denotes the chi-squared distribution with m degrees of freedom.

Substituting α_0(u) for α(u) in the statistic a^T Σ̂^{−1} a, the null hypothesis α(u) = α_0(u) is rejected for large values of a^T Σ̂^{−1} a. In order that Σ̂ be invertible, the values of (u_1, ..., u_m) need to be selected such that T_U(u_i) < T_U(u_{i+1}) and/or T_L(u_i) < T_L(u_{i+1}) for every i.

5.4 Sample analogue of α(u) based on a sample from a distribution on R^2

If the margins F_1 and F_2 are unknown, they can be replaced by the empirical distribution functions

F̂_j(x) = (1/n) Σ_{i=1}^n 1(X_ji ≤ x),  j = 1, 2.  (11)

Definition 3. Let (X_11, X_21), ..., (X_1n, X_2n) be iid random vectors with the copula C and continuous margins. A sample analogue α̂*(u) of α(u) is defined as α̂(u) in Definition 2 with (U_1i, U_2i) replaced by the pseudo-observations (F̂_1(X_1i), F̂_2(X_2i)); let T*_L(u) and T*_U(u) denote the corresponding analogues of T_L(u) and T_U(u).

The asymptotic behavior of T*_L(u) and T*_U(u) is described by the following result.

Theorem 6. Let (X_11, X_21), ..., (X_1n, X_2n) be iid random vectors with the copula C(u, v) and continuous margins. Assume that C(u, v) is differentiable with continuous i-th partial derivatives (i = 1, 2). Then, as n → ∞,

√n {T*_L(u) − C(u, u)} →d D_C(u),  √n {T*_U(u) − C̄(ū, ū)} →d D_C(ū),

where

D_C(u) = U(u, u) − [∂C(u_1, u_2)/∂u_1]|_{u_1 = u_2 = u} U(u, 1) − [∂C(u_1, u_2)/∂u_2]|_{u_1 = u_2 = u} U(1, u),

U is a centered Gaussian process with covariance function

E[U(u_1, u_2) U(v_1, v_2)] = C(u_1 ∧ v_1, u_2 ∧ v_2) − C(u_1, u_2) C(v_1, v_2),

and u ∧ v is defined as in Lemma 1.

Confidence intervals for α̂*(u) can be constructed numerically using the bootstrap method, and hypothesis tests can be established based on the bootstrap confidence intervals. It should be noted that, in order to calculate α̂*(u) for bootstrap samples, F̂_1 and F̂_2 in (11) should be recalculated from each bootstrap sample.
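The estimator α̂*(u) and a basic bootstrap interval can be sketched as follows, re-ranking each resample as required above. This is an illustration under stated assumptions (rank/n pseudo-observations, a synthetic Clayton data-generator, and helper names of our own; none of these choices come from the paper):

```python
import math
import random

def pseudo_obs(xs):
    """Empirical-margin transform of one coordinate (rank_i / n)."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    r = [0.0] * n
    for rank, i in enumerate(order, start=1):
        r[i] = rank / n
    return r

def alpha_star(x1, x2, u):
    """Sample analogue of alpha(u) with empirical margins (Definition 3)."""
    n = len(x1)
    v1, v2 = pseudo_obs(x1), pseudo_obs(x2)
    t_L = sum(1 for a, b in zip(v1, v2) if a <= u and b <= u) / n
    t_U = sum(1 for a, b in zip(v1, v2) if a >= 1.0 - u and b >= 1.0 - u) / n
    return math.log(t_U / t_L)   # assumes both tails contain observations

def basic_boot_ci(x1, x2, u, b=199, p=0.10, seed=0):
    """Basic bootstrap interval; margins are re-estimated on every resample."""
    rng = random.Random(seed)
    n = len(x1)
    est = alpha_star(x1, x2, u)
    stats = []
    for _ in range(b):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(alpha_star([x1[i] for i in idx], [x2[i] for i in idx], u))
    stats.sort()
    q_lo, q_hi = stats[int(b * p / 2)], stats[int(b * (1 - p / 2))]
    return 2.0 * est - q_hi, 2.0 * est - q_lo

# demo on synthetic data from a Clayton copula (theta = 2), sampled by
# conditional inversion; the margins are then treated as unknown
rng = random.Random(1)
theta = 2.0
x1, x2 = [], []
for _ in range(2000):
    v, w = 1.0 - rng.random(), 1.0 - rng.random()
    x1.append(v)
    x2.append((1.0 + v ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0))
              ** (-1.0 / theta))
est = alpha_star(x1, x2, 0.2)
lo, hi = basic_boot_ci(x1, x2, 0.2)
print(est)         # negative: heavier joint lower tail
print((lo, hi))
```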
If F̂_1 and F̂_2 are instead calculated from the original data only, the bootstrap confidence intervals become similar to the asymptotic confidence intervals (9) for large n.

5.5 Simulation comparison of the two sample analogues

In order to compare the performance of the two proposed sample analogues of α(u) for a large sample size, we consider the cumulative distribution function

F(x_1, x_2) = C_cl(F_1(x_1), F_2(x_2); 20),  −∞ < x_1, x_2 < ∞,  (12)

where F_j(x) is the cumulative distribution function of the standard Cauchy distribution, i.e., F_j(x) = 0.5 + π^{−1} arctan x, and C_cl(u_1, u_2; θ) denotes the Clayton copula (5).

Figure 3: Plot of α̂(u) − α(u) (solid, red), α̂*(u) − α(u) (dashed, purple), the lower and upper bounds of the 90% asymptotic confidence intervals for α̂(u) − α(u) (dotdashed, black), and the lower and upper bounds of the 90% bootstrap confidence intervals for α̂*(u) − α(u) (dotted, blue), obtained from a sample of size 10000 from the distribution (12).

Figure 3 plots the values of α̂(u) − α(u), α̂*(u) − α(u) and the bounds of their 90% confidence intervals for a sample of size n = 10000 from the distribution (12). See also Figure 1(a) for the plot of α(u). For the calculation of α̂(u) and its confidence intervals (9), the sample {(x_1i, x_2i)} is transformed into the copula sample {(u_1i, u_2i)} via u_ji = F_j(x_ji), where F_j is the true margin (j = 1, 2). The bootstrap confidence intervals of α̂*(u) are calculated using the basic bootstrap method based on 999 resamples of size 10000; see the last paragraph of Section 5.4 for details. The minimum value of u in the plot is defined as u_min = min{u ∈ (0, 0.5] : T_L(u), T_U(u), T*_L(u), T*_U(u) all exceed a prescribed threshold} ≃ 0.01, in order that the asymptotic theory is applicable.

The figure suggests that, when u is around 0.2 or greater, the performance of both α̂(u) and α̂*(u) is satisfactory. For u ≤ 0.2, the difference between the sample analogues and the true value generally increases as u decreases. It appears that the 90% confidence intervals of both α̂(u) and α̂*(u) are generally narrow if u is around 0.2 or more. For u ≤ 0.2, the smaller the value of u, the wider the confidence intervals of both α̂(u) and α̂*(u). Interestingly, the confidence intervals of α̂*(u) are narrower than those of α̂(u) in most of the plotted range of u. This tendency is particularly obvious for small u, where the bootstrap confidence intervals of α̂*(u) are much narrower than the asymptotic ones of α̂(u).

6 Comparison with other measures

In this section we compare our measure with other copula-based measures of tail asymmetry. Rosco and Joe (2013) proposed three measures of tail asymmetry. One of their measures, based on the distance between a copula C and its survival copula, is defined by

ς = sup_{(u_1, u_2) ∈ [0,1]^2} | C(u_1, u_2) − C̄(ū_1, ū_2) |.  (13)

This measure has also been proposed by Dehgani et al. (2013) as a limiting case of a measure of radial asymmetry for bivariate random variables.

Our measure (1) has some similarities to and differences from the measure (13). Similarities include that both are functions of a copula C and its survival function. Also, both measures satisfy Properties (ii), (v) and (vi) of Proposition 2.

However, there are considerable differences between the two measures (1) and (13). First, the domains of the copula that the two measures evaluate are different. The measure (13) is a global measure in the sense that the whole domain of the copula is taken into account, while our measure (1) is a local measure which focuses on squared subdomains of the copula. By choosing the value of the index u, our measure (1) enables analysts to choose the subdomain of the copula in which they are interested.
However, the prescription for selecting the value of u is not always straightforward, and the choice of the index u could influence the results of the analysis. The index-free measure (13) does not have such a problem. On the other hand, the supremum in (13) is not necessarily attained in the tails of the distribution, so the value of the measure might not reflect the tail probabilities. Also, because of its locality, computations associated with our measure (1) are very fast.

There are also differences between the two measures (1) and (13) in terms of their properties. Our measure (1) satisfies all of Properties (i)–(vi) of Proposition 2, which include four (out of five) axioms of Rosco and Joe (2013). However, it does not satisfy one of their axioms, namely axiom (i), and therefore the value of the measure could be unbounded in special cases. The measure (13) also satisfies four axioms of Rosco and Joe (2013), including axiom (i). On the other hand, the measure (13) does not satisfy their axiom (iii), which is equivalent to Property (iv) of Proposition 2, implying that the measure (13) does not distinguish which tail probability is greater than the other.

The other two measures of Rosco and Joe (2013) are derived through different approaches. For a bivariate random vector (U_1, U_2) from a copula, the two measures are based on the moments or the quantile function of the univariate random variable U_1 + U_2 − 1.

Another copula-based measure of tail asymmetry has been proposed by Krupskii (2017). It is defined by

ϱ_K(a, u) = ϱ_L(a, u) − ϱ_U(a, u),  (14)

where 0 < u ≤ 0.5, a is a weighting function, and

ϱ_L(a, u) = cor[ a(1 − U_1/u), a(1 − U_2/u) | U_1 < u, U_2 < u ],
ϱ_U(a, u) = cor[ a(1 − (1 − U_1)/u), a(1 − (1 − U_2)/u) | U_1 > 1 − u, U_2 > 1 − u ].
If a(x) = x, the measure (14) reduces to the measure discussed by Nikoloulopoulos et al. (2012) and Dobrić et al. (2013). Properties of each term of the measure (14) have been investigated by Krupskii and Joe (2015).

The measure (14) is related to ours in the sense that its value is calculated from a subdomain of the copula indexed by the truncation parameter. However, the measure (14) is based on Spearman's rhos or correlation coefficients of a truncated copula, and therefore the interpretation of its values is essentially different from ours. A nice property of the measure (14) is that the weights of the tails can be controlled through the weighting function a. Therefore this measure can be a useful measure of tail asymmetry if the weighting function is appropriately defined.

6.2 A measure based on the difference between the tail probabilities

The proposed measure α(u) is a function of the lower and upper tail probabilities. Here we briefly consider another measure of comparison between the two tail probabilities.

Definition 4. Let (X_1, X_2), F_1 and F_2 be defined as in Definition 1. Then we define a measure of comparison between the lower-left and upper-right tail probabilities of (X_1, X_2) by

β(u) = u^{−κ} { P(F_1(X_1) > 1 − u, F_2(X_2) > 1 − u) − P(F_1(X_1) ≤ u, F_2(X_2) ≤ u) },  0 < u ≤ 0.5,

where the index κ satisfies κ ≥ 1.

If (X_1, X_2) has the copula C, the expression for β(u) simplifies to

β(u) = { C̄(ū, ū) − C(u, u) } / u^κ.

Hence this measure is based on the difference between the lower and upper tail probabilities of the copula, together with the value of κ, which could be chosen based on the lower and upper tail orders. The measure β(u) is another simple measure for comparing the lower-left and upper-right tail probabilities. Properties (ii)–(vi) of Proposition 2 hold for β(u).
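The measure β(u) is equally simple to evaluate numerically. A short sketch (illustrative names, ours) for a Clayton copula with θ = 2 and κ = 1, which also checks the range bound for κ = 1 discussed next:

```python
def beta(C, u, kappa=1.0):
    """beta(u) = (upper tail prob - lower tail prob) / u^kappa."""
    p_L = C(u, u)
    p_U = 2.0 * u - 1.0 + C(1.0 - u, 1.0 - u)   # survival function at (1-u, 1-u)
    return (p_U - p_L) / u ** kappa

clayton2 = lambda u1, u2: (u1 ** -2 + u2 ** -2 - 1.0) ** -0.5  # Clayton, theta = 2

for u in (0.01, 0.1, 0.25):
    print(u, beta(clayton2, u))   # negative, and within [-1, 1] for kappa = 1
```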
As for the range of the measure β(u), related to Property (i), it can be seen that −1 ≤ β(u) ≤ 1 if κ = 1, while −∞ ≤ β(u) ≤ ∞ otherwise.

One big difference between α(u) and β(u) is that, when considering the values of the measures for u ≃ 0, α(u) and β(u) can possibly lead to different conclusions. As an example of this, consider the Clayton copula (5) with θ < 0, which is said to have asymmetric tails (see Figure S1 of Supplementary Material for a plot of random variates from the Clayton copula with negative θ). The values of α(u) for u ≃ 0 reflect this asymmetry, since α(0) = ∞. However, the values of β(u) with κ = 1 for u ≃ 0 are close to zero, and β(0) = 0. This fact about β(u) is reasonable in one sense, but one might argue that β(u) does not capture the asymmetry of the tail probabilities appropriately. Although this problem can be solved by selecting a different value of κ, the selection of κ, which influences the conclusion of the analysis, appears difficult in practical situations where C is unknown.

7 Example

As an application of the proposed measure, we consider a dataset of daily returns of two stock indices. The dataset is taken from the historical data in Yahoo Finance, available at https://finance.yahoo.com/quote/%5EGSPC/history/ and https://finance.yahoo.com/quote/%5EN225/history/. We consider the stock daily returns of S&P500 and Nikkei225 observed from the 1st of April, 2008 until the 31st of March, 2019, inclusive. We fit the autoregressive-generalized autoregressive conditional heteroscedastic model AR(1)-GARCH(1,1) to each of the stock daily returns using ugarchfit in the 'rugarch' package in R (R Core Team, 2020; Ghalanos, 2020). The Student t-distribution is used as the conditional density for the innovations. We consider the residuals {(x_1i, x_2i)}_{i=1}^n (n = 2605) of the fitted AR(1)-GARCH(1,1) models, where x_1i and x_2i are the residuals of S&P500 and Nikkei225, respectively.
The residuals capture unexpected changes in the daily returns which are not explained by the fitted models; if the joint plunging probability is higher than the joint soaring probability, then the proposed measure α(u) is supposed to be negative.

We discuss α̂(u) defined in Definition 2 and α̂*(u) defined in Definition 3. In order to obtain the copula sample {(u_1i, u_2i)} for α̂(u), we transform the residuals {(x_1i, x_2i)} via (u_1i, u_2i) = (F_1(x_1i), F_2(x_2i)), where F_1 and F_2 are the cumulative distribution functions of the Student t-distribution estimated by the maximum likelihood method. We assume, though not mathematically precise, that F_1 and F_2 are known. Figure 4(a) plots the sample {(u_1i, u_2i)} obtained by transforming the residuals via the cumulative distribution functions of the Student t-distribution. The values of α̂(u) calculated from this sample and their 90% asymptotic confidence intervals (9) are displayed in Figure 4(d). In Figure 4(d), the minimum value of u is defined as the smallest u ∈ (0, 0.5] for which T_L(u) and T_U(u) exceed a prescribed threshold.

For α̂*(u), we use the empirical distribution functions (11) to transform the residuals {(x_1i, x_2i)} into the copula sample {(u_1i, u_2i)}. The transformed sample is displayed in Figure 4(b). The values of α̂*(u) calculated from this sample are plotted in Figure 4(e). The same frame also plots the 90% confidence intervals based on 999 resamples of size 2605 using the basic bootstrap method. The minimum value of u in this plot, u*_min, is defined analogously via T*_L(u) and T*_U(u).

The data plots in Figure 4(a) and (b) suggest that more observations fall in the lower-left [0, u]^2 tail than in the upper-right [1 − u, 1]^2 one for u ≃ 0.1. However, it does not seem immediately clear from these data plots whether there is a significant difference between the two tail probabilities for smaller u. To address this question, Figure 4(d) and (e), showing the values of α̂(u) and α̂*(u), respectively, are helpful.
Indeed Figure 4(d) and (e) suggest that α̂(u) and α̂*(u) are negative in most areas of the domain of u, suggesting that the lower tail probability is greater than the upper one for most values of u ∈ (u_min, 0.5]. In particular, α̂(u) and α̂*(u) decrease with u for small u. The asymptotic and bootstrap 90% confidence intervals do not include 0 for u ≤ 0.19 in Figure 4(d) and for u ≤ 0.14 in Figure 4(e). Hence, when considering the tests H_0: α(u) = 0 against H_1: α(u) ≠ 0 for a fixed value of u ∈ [u_min, 0.14] based on the two 90% confidence intervals, both tests reject the null hypothesis at a significance level of 0.1. This implies that the lower [0, u]² tail probability is significantly greater than the upper (1 − u, 1]² one for u ∈ [u_min, 0.14]. When u is greater than 0.21, both 90% confidence intervals include 0 and therefore each of the tests for a nominal size of 0.1 accepts the null hypothesis H_0.

Figure 4: Plots of the sample {(u_{1i}, u_{2i})}_{i=1}^n which the residuals are transformed into via the cumulative distribution functions of: (a) Student t-distribution and (b) empirical distribution. (c) Plot of −ρ_K(a, u), a modified version of the measure (14) of Krupskii (2017), with: a(x) = x (solid, black) and two other choices of a (dashed, blue; dotted, red). Plots of the proposed measure (black) and its asymptotic or bootstrap 90% confidence intervals (gray) for: (d) α̂(u) and (e) α̂*(u).

There is disagreement in conclusions between the tests based on the two 90% confidence intervals for some values of u between 0.14 and 0.21. Apart from this, α̂(u) and α̂*(u) show similar tendencies in general. Actually, the two data plots given in Figure 4(a) and (b) look similar at first glance.
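A pointwise asymptotic interval of the kind plotted in Figure 4(d) can be sketched by plugging T_L(u) and T_U(u) into the delta-method variance σ²(u, u) = (C(u, u) + C(ū, ū))/(C(u, u) C(ū, ū)) derived in the Supplementary Material. Whether this plug-in form matches the paper's interval (9) exactly is an assumption on our part.

```python
import math
import numpy as np

def tail_freqs(u1, u2, u):
    """Empirical joint lower/upper tail frequencies T_L(u), T_U(u)."""
    t_l = np.mean((u1 <= u) & (u2 <= u))
    t_u = np.mean((u1 > 1 - u) & (u2 > 1 - u))
    return t_l, t_u

def alpha_ci(u1, u2, u, z=1.6449):  # z: 0.95 normal quantile -> 90% interval
    n = len(u1)
    t_l, t_u = tail_freqs(u1, u2, u)
    est = math.log(t_u / t_l)
    # plug-in of sigma^2(u,u) = (C_u + C_ubar) / (C_u * C_ubar), divided by n
    se = math.sqrt((t_l + t_u) / (t_l * t_u) / n)
    return est, (est - z * se, est + z * se)

rng = np.random.default_rng(1)
u1, u2 = rng.random(5000), rng.random(5000)  # independence copula: alpha(u) = 0
est, (lo, hi) = alpha_ci(u1, u2, 0.3)
print(est, lo, hi)
```

The test "reject H_0: α(u) = 0 when the interval excludes 0" is then exactly the confidence-interval test used in Figure 4(d).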
However Figure 4(d) and (e) reveal that there are some differences between α̂(u) and α̂*(u). For example, the values of α̂(u) are generally smaller than those of α̂*(u) for small u. Also, the bootstrap confidence intervals of α̂*(u) are narrower than the asymptotic confidence intervals of α̂(u) for large u.

Apart from the tests based on pointwise confidence intervals given in Figure 4(d) and (e), we carry out a different test for a nominal size of 0.1 based on the test statistic in Theorem 5. We test H_0: α(u) = 0 against H_1: α(u) ≠ 0 for {u_j ; u_j = u_min + (0.5 − u_min) j/10, j = 0, ..., 10}. The test statistic is T = aᵀ Σ̂⁻¹ a ≃ ….64 with P(T > ….64) ≃ … ≪ 0.1. Therefore we reject the null hypothesis that the lower [0, u]² tail and upper [1 − u, 1]² tail are symmetric for the 11 equally spaced points of u in [u_min, 0.5].

For comparison, we also consider −ρ_K(a, u), a modified version of the measure (14) of Krupskii (2017). This modification is made to interpret the sign of the measure in the same manner as in that of ours. Figure 4(c) displays the estimates of −ρ_K(a, u) with respect to u for the three specific functions of a. The three curves of the modified measure −ρ_K(a, u) agree that there is stronger correlation in the lower [0, u]² tail than the upper [1 − u, 1]² one for u ≥ 0.12. This is somewhat similar to the result based on our measure as well. For a(x) = x, the correlation coefficient in the lower [0, u]² tail is greater than that in the upper [1 − u, 1]² tail for any u ∈ (0, 0.5].

To summarize, the lower [0, u]² tail probability is greater than the upper [1 − u, 1]² one for most values of u ∈ (u_min, 0.5], where u_min ≃ …, and the difference is significant for u < 0.14. From the economic perspective, this result implies that the joint plunging probability is higher than the joint soaring probability with the threshold u < 0.14. According to the modified measure of Krupskii (2017), for u > 0.12, there is stronger correlation in the lower [0, u]² tail than in the upper [1 − u, 1]² one.

In this paper we have proposed a copula-based measure of asymmetry between the lower and upper tail probabilities.
It has been seen that the proposed measure has some properties which are desirable as a measure of tail asymmetry. Sample analogues of the proposed measure have been presented, and statistical inference based on them, including point estimation, interval estimation and hypothesis testing, has been shown to be very simple. The practical importance of the proposed measure has been demonstrated through statistical analysis of stock return data.

This paper discusses a measure for bivariate data. However it is straightforward to extend the proposed bivariate measure to a multivariate one in a similar manner as in Embrechts et al. (2016) and Hofert and Koike (2019). Let (X_1, ..., X_d) be an R^d-valued random vector with continuous univariate margins. Then an extended measure of tail asymmetry for (X_1, ..., X_d) is defined by

A(u) = ( α_11(u)  α_12(u)  ···  α_1d(u)
         α_21(u)  α_22(u)  ···  α_2d(u)
           ⋮        ⋮       ⋱     ⋮
         α_d1(u)  α_d2(u)  ···  α_dd(u) ),

where α_ij is the proposed measure (1) of the random vector (X_i, X_j) (i, j = 1, ..., d). The properties of each element of the measure A(u) are straightforward from the results of this paper. It would be a possible topic for future work to investigate properties of this extended measure as a matrix and evaluate the values of the measure for multivariate copulas such as some examples of the vine copulas (Aas et al., 2009; Czado, 2010).

Supplementary material

Supplementary material contains the proofs of Lemma 1 and Theorems 1–5 and plots of random variates from the copulas discussed in Sections 4.2 and 6.2.

Acknowledgements

The authors are grateful to Hideatsu Tsukahara for his valuable comments on the work. Kato's research was supported by JSPS KAKENHI Grant Numbers JP17K05379 and JP20K03759.

References

Aas K, Czado C, Frigessi A, Bakken H (2009) Pair-copula constructions of multiple dependence. Insur Math Econ 44(2):182–198

Czado C (2010) Pair-copula constructions of multivariate copulas.
In Copula Theory and its Applications, Springer, Dordrecht, pp 93–109

Dehgani A, Dolati A, Úbeda-Flores M (2013) Measures of radial asymmetry for bivariate random vectors. Stat Pap 54:271–286

Dobrić J, Frahm G, Schmid F (2013) Dependence of stock returns in bull and bear markets. Depend Model 1:94–110

Donnelly C, Embrechts P (2010) The devil is in the tails: actuarial mathematics and the subprime mortgage crisis. ASTIN Bulletin 40(1):1–33

Embrechts P, Hofert M, Wang R (2016) Bernoulli and tail-dependence compatibility. Ann Appl Probab 26(3):1636–1658

Fermanian JD, Radulovic D, Wegkamp M (2004) Weak convergence of empirical copula processes. Bernoulli 10(5):847–860

Genest C, Nešlehová JG (2014) On tests of radial symmetry for bivariate copulas. Stat Pap 55:1107–1119

Ghalanos A (2020) rugarch: univariate GARCH models, R package version 1.4-2

Hofert M, Koike T (2019) Compatibility and attainability of matrices of correlation-based measures of concordance. ASTIN Bulletin 49:885–918

Joe H (1997) Multivariate models and dependence concepts. Chapman & Hall, London

Joe H (2006) Discussion of "copulas: tales and facts" by Thomas Mikosch. Extremes 9(1):37–41

Joe H (2014) Dependence modeling with copulas. Chapman & Hall/CRC, Boca Raton, FL

Joe H, Hu T (1996) Multivariate distributions from mixtures of max-infinitely divisible distributions. J Multivar Anal 57:240–265

Krupskii P (2017) Copula-based measures of reflection and permutation asymmetry and statistical tests. Stat Pap 58:1165–1187

Krupskii P, Joe H (2015) Tail-weighted measures of dependence. J Appl Stat 42:614–629

Lee D, Joe H, Krupskii P (2018) Tail-weighted dependence measures with limit being the tail dependence coefficient. J Nonparametr Stat 30:262–290

McNeil AJ, Frey R, Embrechts P (2015) Quantitative risk management: concepts, techniques, and tools (revised ed.). Princeton University Press, Princeton, NJ

Nelsen RB (2006) An introduction to copulas (2nd ed.)
Springer, New York

Nikoloulopoulos AK, Joe H, Li H (2012) Vine copulas with asymmetric tail dependence and applications to financial return data. Comput Stat Data Anal 56:3659–3673

R Core Team (2020) R: a language and environment for statistical computing

Yoshiba T (2018) Maximum likelihood estimation of skew-t copulas with its applications to stock returns. J Stat Comput Simul 88(13):2489–2506

Supplementary Material for "Copula-based measures of asymmetry between the lower and upper tail probabilities"

Shogo Kato, Toshinao Yoshiba and Shinto Eguchi
Institute of Statistical Mathematics; Tokyo Metropolitan University; Bank of Japan

August 4, 2020

The Supplementary Material is organized as follows. Section S1 presents the proofs of Lemma 1 and Theorems 1–5 of the article. Section S2 displays plots of random variates from Clayton copula, Ali-Mikhail-Haq copula and BB7 copula discussed in Sections 4.2 and 6.2 of the article.

S1 Proofs

S1.1 Proof of Theorem 1

Proof. It follows from the expression (4) that

λ_U = lim_{u↑1} (1 − 2u + C(u, u))/(1 − u) = lim_{u↓0} C(ū, ū)/u.

This result and Proposition 1 imply that

α(0) = lim_{u↓0} log( C(ū, ū)/C(u, u) ) = log( lim_{u↓0} [C(ū, ū)/u] / [C(u, u)/u] ) = log( λ_U/λ_L ).

The last equality holds because λ_U and λ_L exist and neither λ_U nor λ_L is equal to zero.

S1.2 Proof of Theorem 2

Proof. It follows from the assumption that there exists a slowly varying function ℓ_L(u) such that C(u, u) ∼ u^{κ_L} ℓ_L(u) as u → 0. Similarly, there exists a slowly varying function ℓ_U(u) such that C(ū, ū) ∼ u^{κ_U} ℓ_U(u) as u → 0. Therefore

α(0) = lim_{u↓0} log( C(ū, ū)/C(u, u) ) = lim_{u↓0} log( u^{κ_U − κ_L} ℓ_U(u)/ℓ_L(u) ) = { −∞,  κ_U > κ_L,
                                                                                         ∞,  κ_U < κ_L.
The last equality holds because ℓ_U(u)/ℓ_L(u) is slowly varying. If κ_U = κ_L and either Υ_U ≠ 0 or Υ_L ≠ 0, then

α(0) = lim_{u↓0} log( ℓ_U(u)/ℓ_L(u) ) = log( lim_{u↓0} ℓ_U(u)/ℓ_L(u) ) = log( Υ_U/Υ_L ).

S1.3 Proof of Theorem 3

Proof. Proposition 1 implies that α(0) can be expressed as

α(0) = lim_{u↓0} log( C(ū, ū)/C(u, u) ) = log( lim_{u↓0} C(ū, ū)/C(u, u) ).

Since lim_{u↓0} dC(u, u)/du = lim_{u↓0} dC(ū, ū)/du = 0, l'Hôpital's rule is applicable to the last expression of the equation above. Hence we have

α(0) = log( lim_{u↓0} [d²C(ū, ū)/du²] / [d²C(u, u)/du²] ) = log( lim_{u↓0} c(1 − u)/c(u) )

as required.

S1.4 Proof of Lemma 1

Proof. It is straightforward to see that E[T_L(u)] and var[T_L(u)] can be calculated as

E[T_L(u)] = E[ (1/n) Σ_{i=1}^n 1(U_{1i} ≤ u, U_{2i} ≤ u) ] = (1/n) Σ_{i=1}^n E[1(U_{1i} ≤ u, U_{2i} ≤ u)] = (1/n) · n C_u = C_u,

var[T_L(u)] = var[ (1/n) Σ_{i=1}^n 1(U_{1i} ≤ u, U_{2i} ≤ u) ] = (n/n²) var[1(U_{11} ≤ u, U_{21} ≤ u)]
            = (1/n)( E[1(U_{11} ≤ u, U_{21} ≤ u)²] − {E[1(U_{11} ≤ u, U_{21} ≤ u)]}² )
            = (1/n)( C_u − C_u² ) = (1/n) C_u (1 − C_u).

Noting that E[1(U_{11} > 1 − u, U_{21} > 1 − u)] = C_ū, the other expectation and variance, namely, E[T_U(u)] and var[T_U(u)], can be calculated in a similar manner.

Consider

cov[T_L(u), T_L(v)] = E[T_L(u) T_L(v)] − E[T_L(u)] E[T_L(v)].
The first term on the right-hand side of the equation above is

E[T_L(u) T_L(v)] = (1/n²) E[ Σ_{i=1}^n 1(U_{1i} ≤ u, U_{2i} ≤ u) Σ_{j=1}^n 1(U_{1j} ≤ v, U_{2j} ≤ v) ]
= (1/n²) Σ_{i,j=1}^n E[1(U_{1i} ≤ u, U_{2i} ≤ u) 1(U_{1j} ≤ v, U_{2j} ≤ v)]
= (1/n²) Σ_{i=1}^n E[1(U_{1i} ≤ u ∧ v, U_{2i} ≤ u ∧ v)] + (1/n²) Σ_{i ≠ j} E[1(U_{1i} ≤ u, U_{2i} ≤ u)] E[1(U_{1j} ≤ v, U_{2j} ≤ v)]
= (1/n²) { n C_{u∧v} + n(n − 1) C_u C_v }
= (1/n) C_{u∧v} { 1 + (n − 1) C_{u∨v} }.

Therefore we have

cov[T_L(u), T_L(v)] = (1/n) C_{u∧v} { 1 + (n − 1) C_{u∨v} } − C_u C_v = (1/n) C_{u∧v} (1 − C_{u∨v}).

Similarly, cov[T_U(u), T_U(v)] can be calculated. The other covariance cov[T_L(u), T_U(v)] can also be obtained via a similar approach, but notice that

E[T_L(u) T_U(v)] = (1/n²) Σ_{i=1}^n E[1(U_{1i} ≤ u, U_{2i} ≤ u) 1(U_{1i} > 1 − v, U_{2i} > 1 − v)]
                 + (1/n²) Σ_{i ≠ j} E[1(U_{1i} ≤ u, U_{2i} ≤ u)] E[1(U_{1j} > 1 − v, U_{2j} > 1 − v)]
                 = 0 + (n(n − 1)/n²) C_u C_v̄ = ((n − 1)/n) C_u C_v̄.

The second equality holds because 0 < u, v ≤ 0.5. Thus

cov[T_L(u), T_U(v)] = E[T_L(u) T_U(v)] − E[T_L(u)] E[T_U(v)] = −(1/n) C_u C_v̄.

S1.5 Proof of Theorem 4

Proof. Without loss of generality, assume 0 < u ≤ v ≤ 0.5. Let

β̂ = (β̂_1, β̂_2, β̂_3, β̂_4)ᵀ = (T_L(u), T_U(u), T_L(v), T_U(v))ᵀ,
β = (β_1, β_2, β_3, β_4)ᵀ = (C(u, u), C(ū, ū), C(v, v), C(v̄, v̄))ᵀ,
Σ_β = (σ_{βij})_{i,j},  σ_{βij} = n · cov(β̂_i, β̂_j).

Then it follows from Lemma 1 and the central limit theorem that √n(β̂ − β) →d N(0, Σ_β) as n → ∞. Define

h(β) = ( log(β_2/β_1), log(β_4/β_3) )ᵀ = ( α(u), α(v) )ᵀ.

Applying the delta method, we have √n{h(β) − h(β̂)} →d N(0, ∇h(β)ᵀ Σ_β ∇h(β)) as n → ∞, where

∇h(β) = ( ∂log(β_2/β_1)/∂β_1  ∂log(β_4/β_3)/∂β_1
          ∂log(β_2/β_1)/∂β_2  ∂log(β_4/β_3)/∂β_2
          ∂log(β_2/β_1)/∂β_3  ∂log(β_4/β_3)/∂β_3
          ∂log(β_2/β_1)/∂β_4  ∂log(β_4/β_3)/∂β_4 ) = ( −1/β_1    0
                                                         1/β_2    0
                                                         0      −1/β_3
                                                         0       1/β_4 ).
The asymptotic variance can be calculated as

∇h(β)ᵀ Σ_β ∇h(β)
= ( σ_{β11}/β_1² − 2σ_{β12}/(β_1 β_2) + σ_{β22}/β_2²      σ_{β13}/(β_1 β_3) − σ_{β14}/(β_1 β_4) − σ_{β23}/(β_2 β_3) + σ_{β24}/(β_2 β_4)
    σ_{β13}/(β_1 β_3) − σ_{β14}/(β_1 β_4) − σ_{β23}/(β_2 β_3) + σ_{β24}/(β_2 β_4)      σ_{β33}/β_3² − 2σ_{β34}/(β_3 β_4) + σ_{β44}/β_4² )
= ( [C(u, u) + C(ū, ū)] / [C(u, u) C(ū, ū)]      [C(v, v) + C(v̄, v̄)] / [C(v, v) C(v̄, v̄)]
    [C(v, v) + C(v̄, v̄)] / [C(v, v) C(v̄, v̄)]      [C(v, v) + C(v̄, v̄)] / [C(v, v) C(v̄, v̄)] )
= ( σ(u, u)  σ(u, v)
    σ(u, v)  σ(v, v) ).   (S1)

The case 0 < v < u ≤ 0.5 can be shown in a similar manner. Thus, for any 0 < u, v ≤ 0.5, it follows that, as n → ∞, (A_n(u), A_n(v)) (= √n{h(β) − h(β̂)}) converges weakly to the two-dimensional Gaussian distribution with mean 0 and the covariance matrix (S1). Weak convergence of (A_n(u_1), ..., A_n(u_m)) to an m-dimensional centered Gaussian distribution for u_1, ..., u_m ∈ (0, 0.5] (u_i ≠ u_j, i ≠ j) can be shown in a similar manner. Therefore {A_n(u) | 0 < u ≤ 0.5} converges weakly to a centered Gaussian process with covariance function σ(u, v) as n → ∞.

S1.6 Proof of Theorem 5

Proof. Theorem 4 implies that a converges weakly to an m-dimensional normal distribution N(0, Σ) as n tends to infinity, where

Σ = ( σ(u_1)       σ(u_1, u_2)  ...  σ(u_1, u_m)
      σ(u_1, u_2)  σ(u_2)       ...  σ(u_2, u_m)
        ⋮            ⋮           ⋱     ⋮
      σ(u_1, u_m)  σ(u_2, u_m)  ...  σ(u_m) ),

σ(u_i) = σ(u_i, u_i), and σ(u_i, u_j) is defined as in Theorem 4. Then we have aᵀ Σ⁻¹ a →d χ²(m) as n → ∞. Since T_L(u_j) and T_U(u_j) are consistent estimators of C(u_j, u_j) and C(ū_j, ū_j), respectively, it holds that, for any (i, j), σ̂(u_i, u_j) converges in probability to σ(u_i, u_j) as n → ∞. It then follows from Slutsky's theorem that aᵀ Σ̂⁻¹ a →d χ²(m) as n → ∞.

S2 Plots of random variates from some existing copulas

Figure S1 plots random variates from Clayton copula (5), Ali-Mikhail-Haq copula (6) and BB7 copula (7) with some selected values of the parameter(s).
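The Theorem 5 statistic can be sketched numerically. This is our hedged reading of the construction: a_j = √n α̂(u_j), and σ(u_i, u_j) is estimated by plugging T_L and T_U into the covariance function of Theorem 4, which per our reconstruction of (S1) depends only on w = max(u_i, u_j). The chi-square tail probability uses the exact Poisson-series form, valid when the number of grid points m is even.

```python
import math
import numpy as np

def chi2_sf_even_df(x, df):
    """Chi-square survival function for even df (exact Poisson series)."""
    lam = x / 2.0
    term, total = 1.0, 1.0
    for j in range(1, df // 2):
        term *= lam / j
        total += term
    return math.exp(-lam) * total

def joint_symmetry_test(u1, u2, grid):
    """T = a' Sigma_hat^{-1} a; asymptotically chi-square(m) under H0."""
    n, m = len(u1), len(grid)

    def tails(w):
        return (np.mean((u1 <= w) & (u2 <= w)),
                np.mean((u1 > 1 - w) & (u2 > 1 - w)))

    a = np.array([math.sqrt(n) * math.log(tails(w)[1] / tails(w)[0])
                  for w in grid])
    S = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            t_l, t_u = tails(max(grid[i], grid[j]))  # sigma depends on max(u_i, u_j)
            S[i, j] = (t_l + t_u) / (t_l * t_u)
    T = float(a @ np.linalg.solve(S, a))
    return T, chi2_sf_even_df(T, df=m)

rng = np.random.default_rng(7)
u1, u2 = rng.random(4000), rng.random(4000)  # tail-symmetric: independence copula
T, p = joint_symmetry_test(u1, u2, [0.15, 0.25, 0.35, 0.45])
```

The plug-in matrix S has entries g(max(u_i, u_j)) with g decreasing, which makes it positive definite, so T is nonnegative and the test is well defined.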
This figure is given to help an intuitive understanding of the distributions of those copulas discussed in the paper.

Figure S1: Plots of 5000 random variates from: Clayton copula (5) with (a) θ = 1, (b) θ = 20 and (c) θ = −0.3; Ali-Mikhail-Haq copula (6) with (d) θ = 0.… and (e) θ = 1; and BB7 copula (7) with (f) (δ, θ) = (1, 0.…), (g) (δ, θ) = (1.…, 0.71) and (h) (δ, θ) = (1, …).
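Variates like those in Figure S1 can be generated, for the Clayton case with θ > 0, by the standard conditional-inversion algorithm. This is generic textbook sampling code, not the authors' implementation.

```python
import numpy as np

def clayton_sample(n, theta, seed=None):
    """Sample from the Clayton copula (theta > 0) by conditional inversion:
    draw u1, w ~ U(0,1) and invert the conditional CDF C_{2|1}(. | u1) at w."""
    rng = np.random.default_rng(seed)
    u1, w = rng.random(n), rng.random(n)
    u2 = (u1 ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u1, u2

u1, u2 = clayton_sample(20000, theta=1.0, seed=0)
# Sanity check against the copula itself: for theta = 1,
# C(0.5, 0.5) = (2 * 0.5**-1 - 1)**-1 = 1/3.
emp = np.mean((u1 <= 0.5) & (u2 <= 0.5))
print(emp)
```

Scatter-plotting (u1, u2) for the parameter values listed in the caption reproduces the qualitative tail shapes shown in Figure S1 (strong lower-left clustering for large positive θ).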