Analytic Moments for GARCH Processes
Carol Alexander∗, Emese Lazar†, Silvia Stanescu‡

September 7, 2018
Abstract
For a GJR-GARCH specification with a generic innovation distribution we derive analytic expressions for the first four conditional moments of the forward and aggregated returns and variances. Moments for the most commonly used GARCH models are stated as special cases. We also derive the limits of these moments as the time horizon increases, establishing regularity conditions for the moments of aggregated returns to converge to normal moments. Our empirical study yields excellent approximate predictive distributions from these analytic moments, thus precluding the need for time-consuming simulations.
Keywords: Approximate predictive distributions, conditional and unconditional moments, GARCH, kurtosis, skewness, simulation
JEL Code: C53

∗ School of Business, Management and Economics, University of Sussex, United Kingdom. [email protected]
† ICMA Centre, Henley Business School, University of Reading. [email protected]
‡ Kent Business School, University of Kent. [email protected]
Introduction
Forward-looking physical return distributions attract a vast academic research interest because they have a great variety of financial applications to market risk assessment and portfolio optimization. Since Mandelbrot (1963) and Fama (1965) it has been recognized that time series of asset returns are not well described by normal, independent processes. Typically, their conditional distributions are non-normal and they exhibit volatility clustering, so they are not independent. Hence, we require forecasts of the entire distribution, not only of the first two moments of returns.

The family of generalized autoregressive conditional heteroskedasticity (GARCH) models is highly successful in capturing (at least partially) the salient empirical features of both conditional and unconditional returns distributions. Following the pioneering work of Engle (1982), Bollerslev (1986) and Taylor (1986), numerous alternative specifications for GARCH processes have been proposed. In many financial markets, especially equities and commodities, the GARCH conditional variance equation captures the asymmetric response of volatility to innovations with different signs. Well-known asymmetric GARCH models include the EGARCH model of Nelson (1991), the AGARCH model of Engle (1990) and Engle and Ng (1993), the NGARCH model also proposed by Engle and Ng (1993), and the model of Glosten, Jagannathan and Runkle (1993), henceforth denoted GJR. Additionally, GARCH models with non-normal innovation distributions have been developed by Bollerslev (1987), Nelson (1991), Haas, Mittnik and Paolella (2004) and many others.

The performance of various GARCH models has been empirically assessed by numerous authors, following Andersen and Bollerslev (1998), Marcucci (2005) and many others. Virtually all this literature refers to the accuracy of forward or aggregated returns distributions when a point GARCH variance forecast is used.
However, only the one-step-ahead GARCH variance forecast is deterministic: due to the uncertainty about future returns, the forward variances are themselves random. (Andersen, Bollerslev and Diebold (2009) give a broad overview of volatility modelling procedures, focusing on the GARCH methodology, and Bauwens, Laurent and Rombouts (2006) review some important contributions to the multivariate ARCH literature.) Moreover, since returns are not identically distributed, the conditional moments and their dynamics are most important for many financial applications. Knowledge of the dynamics of the conditional mean and variance is sufficient only when conditional distributions are normal: more generally, the dynamics of higher order conditional moments are needed. Hence, most recent research focuses on the first four conditional moments of forward and aggregated returns for some specific GARCH processes.

Duan et al. (1999) derive expressions for the first four conditional moments of the aggregated returns generated by the normal NGARCH model under the risk-neutral probability measure, and Duan et al. (2006) extend these results to the risk-neutral moments of aggregated returns under normal GJR and normal EGARCH processes. Wong and So (2003) derive an expression for the variance of aggregated QGARCH returns and, under the additional assumption that the innovation is symmetric, expressions for the third and fourth order conditional moments of aggregated returns. (Their QGARCH model is the same as an AGARCH(p, q).) Breuer and Jandacka (2010) derive the limit of the variance and kurtosis of forward and aggregated returns for a generic symmetric GARCH process. (For related results on unconditional GARCH moments see Engle (1982), Nemec (1985), Milhoj (1985), Bollerslev (1986), He and Terasvirta (1999a, 1999b), Karanasos (1999, 2001), He, Terasvirta and Malmsten (2002), Demos (2002), Ling and McAleer (2002a, 2002b), Karanasos and Kim (2003), Bai, Russell and Tiao (2003) and Karanasos, Psaradakis and Sola (2004).)
Of some relation to this research is the paper by Christoffersen et al. (2008), who propose a new two-component GARCH volatility model for European option valuation, for which the authors also derive the conditional moment generating function of the (log) price distribution.

Here we present analytic expressions for the first four conditional moments of both forward one-period and future aggregated (also called cumulative) returns and variances for the GJR model with a generic innovation distribution having zero mean, unit variance and finite higher moments. We assume that the one-period log return r_t = log(P_{t+1}/P_t) on a financial asset with market price P_t follows a stationary process with no significant autocorrelation (if returns exhibit autocorrelation, we can de-autocorrelate the data before estimating the GARCH parameters). The mathematical specification of the generic GJR model is:

    r_t = μ + ε_t,   ε_t = z_t h_t^{1/2},   z_t ∼ i.i.d. D(0, 1),
    h_t = ω + α ε_{t−1}² + λ ε_{t−1}² I⁻_{t−1} + β h_{t−1},                (1)

where I⁻_t is an indicator function which equals 1 if ε_t < 0 and zero otherwise. We also assume that z_t and h_t are independent, and D(0, 1) is a generic conditional distribution with zero mean, unit variance, constant skewness τ_z and kurtosis κ_z, and with constant higher moments μ_z^(i) = E(z_t^i) for i ∈ ℕ. (To be more precise, μ_z^(i) = E(z_t^i) = E_t(z_t^i) = E_t(z_t − E_t(z_t))^i = E_t(z_t − E_t(z_t))^i [E_t(z_t²)]^{−i/2}, since un-centred, centred and standardized moments are all equal for a zero mean, unit variance distribution. Also, since the z process is i.i.d., conditional and unconditional moments of z are identical. For the fourth conditional moment of returns to exist we need the first four moments of z to be finite, while we require up to the eighth moment of z to be finite in order to have a finite fourth conditional moment of future variances.)

The aggregated return over n consecutive time periods is denoted R_{tn} = Σ_{s=1}^n r_{t+s}, and for the conditional un-centred and centred moments of the forward and aggregated returns and variances the following notation is used:

    μ̃^(i)_{x,s} = E_t(x_{t+s}^i),   μ^(i)_{x,s} = E_t[(x_{t+s} − μ̃^(1)_{x,s})^i],
    M̃^(i)_{x,n} = E_t[(Σ_{s=1}^n x_{t+s})^i],   M^(i)_{x,n} = E_t[(Σ_{s=1}^n (x_{t+s} − μ̃^(1)_{x,s}))^i],

for x = r and h in turn, s = 1, 2, ..., n and i = 1, 2, 3, 4. Thus, the skewness and kurtosis of the forward (return or variance) distributions are:

    τ_{x,s} = μ^(3)_{x,s} (μ^(2)_{x,s})^{−3/2}   and   κ_{x,s} = μ^(4)_{x,s} (μ^(2)_{x,s})^{−2},

and the skewness and kurtosis of the aggregated return or variance distributions are:

    T_{x,n} = M^(3)_{x,n} (M^(2)_{x,n})^{−3/2}   and   K_{x,n} = M^(4)_{x,n} (M^(2)_{x,n})^{−2}.

We start with the un-centred moments, namely E_t(x_{t+s}^i) and E_t[(Σ_{s=1}^n x_{t+s})^i], again for x = r and h in turn, s = 1, 2, ..., n and i = 1, 2, 3, 4. Subsequently, we obtain the centred and standardized moments of the GJR process with a generic innovation distribution. The derivations rely on the observation that, although E_t(h_{t+1}) = h_{t+1} (i.e. V_t(h_{t+1}) = 0), both {h_{t+s} | Ω_t : s ∈ ℕ∖{0, 1}} and {Σ_{s=1}^n h_{t+s} | Ω_t : n ∈ ℕ∖{0, 1}} are random. Moreover, both {r_{t+s} | Ω_t : s ∈ ℕ∖{0}} and {R_{tn} | Ω_t : n ∈ ℕ∖{0}} are random and have distributions that can also be approximated using the moments that we derive.

The following results and notation will be used:

    φ = α + λF + β,

with F being the distribution function of D(0, 1) evaluated at zero;

    h̄ = ω(1 − φ)^{−1};

if φ ∈ (0, 1), h̄ is the steady-state variance towards which the conditional variance mean-reverts;

    μ̃^(2)_{h,s} = c₁ + (h_{t+1}² − c₃) γ^{s−1} + c₂ φ^{s−1},                (2)

where γ = φ² + (κ_z − 1)(α + λF)² + κ_z λ² F(1 − F), c₁ = (ω² + 2ωφh̄)(1 − γ)^{−1}, c₂ = 2ωφ(h_{t+1} − h̄)(φ − γ)^{−1} and c₃ = c₁ + c₂;

    E_t(ε_{t+s} ε_{t+s+u}²) = φ^{u−1} (ατ_z + λ ∫_{−∞}^0 x³ f(x) dx) E_t(h_{t+s}^{3/2}),                (3)

where f is the density function of D(0, 1) and E_t(h_{t+s}^{3/2}) ≃ (μ̃^(1)_{h,s})^{3/2} + (3/8) μ^(2)_{h,s} (μ̃^(1)_{h,s})^{−1/2}, with μ̃^(2)_{h,s} given in (2) and μ̃^(1)_{h,s} given in Theorem 1 below;

    E_t(ε_{t+s} ε_{t+s+u}³) = τ_z θ^{(3/2)}_{su},

where θ^{(3/2)}_{su} = E_t(ε_{t+s} h_{t+s+u}^{3/2}) is approximated as

    θ^{(3/2)}_{su} ≃ (3/4)(μ̃^(1)_{h,s+u})^{1/2} c₄ φ^{u−1} E_t(h_{t+s}^{3/2}) + (3/8)(μ̃^(1)_{h,s+u})^{−1/2} [c₅ γ^{u−1} E_t(h_{t+s}^{5/2}) + 2ωc₄ (φ(φ − γ)^{−1}(φ^{u−1} − γ^{u−1}) + γ^{u−1}) E_t(h_{t+s}^{3/2})],

with c₄ = ατ_z + λ ∫_{−∞}^0 x³ f(x) dx, c₅ = α(αμ_z^(5) + 2βτ_z) + λ(2α + λ) ∫_{−∞}^0 x⁵ f(x) dx + 2βλ ∫_{−∞}^0 x³ f(x) dx, and E_t(h_{t+s}^{5/2}) given approximately by a second order Taylor expansion of h_{t+s}^{5/2} around E_t(h_{t+s}): E_t(h_{t+s}^{5/2}) ≃ (μ̃^(1)_{h,s})^{5/2} + (15/8) μ^(2)_{h,s} (μ̃^(1)_{h,s})^{1/2};

    E_t(ε_{t+s}² ε_{t+s+u}²) = h̄(1 − φ^u) μ̃^(1)_{h,s} + φ^{u−1} κ_z (α + λF + κ_z^{−1} β) μ̃^(2)_{h,s};

    E_t(ε_{t+s} ε_{t+s+u} ε_{t+s+u+v}²) = c₄ φ^{v−1} θ^{(3/2)}_{su}.

Theorem 1: Moments of Forward and Aggregated Returns
The conditional moments of forward one-period returns of model (1) are:

    μ̃^(1)_{r,s} = μ,
    μ^(2)_{r,s} = μ̃^(1)_{h,s} = h̄ + φ^{s−1}(h_{t+1} − h̄),
    τ_{r,s} = τ_z E_t(h_{t+s}^{3/2}) (μ̃^(1)_{h,s})^{−3/2} ≃ τ_z (1 + (3/8) μ^(2)_{h,s} (μ̃^(1)_{h,s})^{−2}),
    κ_{r,s} = κ_z μ̃^(2)_{h,s} (μ̃^(1)_{h,s})^{−2}.

The conditional moments of the aggregated returns of model (1) are:

    M̃^(1)_{r,n} = nμ,
    M^(2)_{r,n} = n h̄ + (1 − φ)^{−1}(1 − φ^n)(h_{t+1} − h̄),
    T_{r,n} ≃ [τ_z Σ_{s=1}^n ((μ̃^(1)_{h,s})^{3/2} + (3/8) μ^(2)_{h,s} (μ̃^(1)_{h,s})^{−1/2}) + 3 Σ_{s=1}^n Σ_{u=1}^{n−s} E_t(ε_{t+s} ε_{t+s+u}²)] (M^(2)_{r,n})^{−3/2},
    K_{r,n} = [κ_z Σ_{s=1}^n μ̃^(2)_{h,s} + Σ_{s=1}^n Σ_{u=1}^{n−s} (4 E_t(ε_{t+s} ε_{t+s+u}³) + 6 E_t(ε_{t+s}² ε_{t+s+u}²)) + 12 Σ_{s=1}^n Σ_{u=1}^{n−s} Σ_{v=1}^{n−s−u} E_t(ε_{t+s} ε_{t+s+u} ε_{t+s+u+v}²)] (M^(2)_{r,n})^{−2}.

The expressions for μ̃^(1)_{r,s} and M̃^(1)_{r,n} simply state that, with a constant conditional mean equation, the time-t conditional expectation of the s-step-ahead one-period return is equal to the constant conditional mean, whereas the expected return aggregated over n periods scales with time. The second moment of the forward return, μ^(2)_{r,s}, shows that the conditional expectation of the s-step-ahead variance is equal to the steady-state variance h̄ plus an exponentially decreasing correction term to account for the distance between the one-step-ahead variance h_{t+1} and the steady-state variance h̄.
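As a numerical consistency check on Theorem 1 and on the closed form (2), the closed-form expressions can be compared with the exact one-step moment recursions implied by model (1). The sketch below uses hypothetical parameter values with standard normal innovations (so F = 1/2 and κ_z = 3); it is illustrative only, not part of the paper's empirical study:

```python
import math

# Hypothetical GJR(1,1) parameters; standard normal innovations: F = P(z<0) = 1/2, kappa_z = 3.
omega, alpha, lam, beta = 1e-6, 0.05, 0.10, 0.85
F, kappa_z = 0.5, 3.0
h1 = 3e-5                                    # one-step-ahead variance h_{t+1}

phi = alpha + lam * F + beta                 # phi = alpha + lambda F + beta
gamma = phi**2 + (kappa_z - 1) * (alpha + lam * F)**2 + kappa_z * lam**2 * F * (1 - F)
h_bar = omega / (1 - phi)                    # steady-state variance

c1 = (omega**2 + 2 * omega * phi * h_bar) / (1 - gamma)
c2 = 2 * omega * phi * (h1 - h_bar) / (phi - gamma)
c3 = c1 + c2

def mom1(s):
    """mu~(1)_{h,s} = h_bar + phi^(s-1) (h_{t+1} - h_bar)  (Theorem 1)."""
    return h_bar + phi**(s - 1) * (h1 - h_bar)

def mom2(s):
    """mu~(2)_{h,s} = c1 + (h_{t+1}^2 - c3) gamma^(s-1) + c2 phi^(s-1)  (equation (2))."""
    return c1 + (h1**2 - c3) * gamma**(s - 1) + c2 * phi**(s - 1)

def agg_var(n):
    """M(2)_{r,n} = n h_bar + (1 - phi)^(-1) (1 - phi^n)(h_{t+1} - h_bar)  (Theorem 1)."""
    return n * h_bar + (1 - phi**n) / (1 - phi) * (h1 - h_bar)

# The closed forms reproduce the exact one-step recursions
#   E_t(h_{t+s+1})   = omega + phi E_t(h_{t+s})
#   E_t(h_{t+s+1}^2) = omega^2 + 2 omega phi E_t(h_{t+s}) + gamma E_t(h_{t+s}^2),
# and the aggregated return variance is the sum of the forward variances.
m1, m2 = h1, h1**2
for s in range(1, 11):
    assert math.isclose(m1, mom1(s), rel_tol=1e-10)
    assert math.isclose(m2, mom2(s), rel_tol=1e-10)
    m1, m2 = omega + phi * m1, omega**2 + 2 * omega * phi * m1 + gamma * m2

for n in (1, 5, 20, 250):
    assert math.isclose(agg_var(n), sum(mom1(s) for s in range(1, n + 1)), rel_tol=1e-12)
```

Note that the tuple assignment updates both recursions simultaneously from the same-period values, mirroring the conditioning in the derivation.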
Because we assume that the returns are not autocorrelated, the variance of aggregated returns over n time periods, M^(2)_{r,n}, is simply equal to the sum of the s-step-ahead variances for s = 1, 2, ..., n.

The expressions for the forward and aggregated skewness are obtained using a second order Taylor series expansion for E_t(h_{t+s}^{3/2}), as detailed in the Technical Appendix T.A.1. It is easily observed that if the innovation is symmetric (τ_z = 0) then the forward returns distribution is also symmetric. By contrast, considering the expression for E_t(ε_{t+s} ε_{t+s+u}²) in (3), cumulative returns have an independent source of skewness in addition to that of the innovations τ_z, due to the asymmetric response parameter λ in the conditional variance equation. As a result, aggregated returns can exhibit skewness even if the innovation is symmetric.

For s = 1, the forward kurtosis equals the kurtosis of the innovation process. But for s > 1 the conditional variance of the conditional variance enters the expression and, as it must be positive, the forward kurtosis will be greater than the kurtosis of the innovation whenever s > 1, and will itself be time-varying. The net effect of uncertainty in variance is a greater weight in the tails of forward one-period returns. Also, the time-varying conditional variance of the conditional variance introduces dynamics in the higher moments of the forward returns.

The moments of variances require the following results (proved in the Technical Appendix T.A.2):

    μ̃^(3)_{h,s} = Σ_{i=0}^{s−2} c₆^i (ω³ + 3ω²φ μ̃^(1)_{h,s−i−1} + 3ωγ μ̃^(2)_{h,s−i−1}) + c₆^{s−1} h_{t+1}³,

with c₆ = μ_z^(6)(α³ + 3αλ(α + λ)F + λ³F) + 3βγ − β²(2β + 3(α + λF));                (4)

    μ̃^(4)_{h,s} = Σ_{j=0}^{s−2} c₉^j (ω⁴ + 4ω³φ μ̃^(1)_{h,s−j−1} + c₇ μ̃^(2)_{h,s−j−1} + c₈ μ̃^(3)_{h,s−j−1}) + c₉^{s−1} h_{t+1}⁴,

with c₇ = 6ω²γ, c₈ = 4ωc₆, and

    c₉ = μ_z^(8)(α⁴ + F(λ⁴ + 4(α³λ + αλ³) + 6α²λ²)) + β⁴ + 4[μ_z^(6) β(α³ + F(λ³ + 3(α²λ + αλ²))) + β³(α + λF)] + 6κ_z β²(α² + λ²F + 2αλF).

Expressions for the cross moments μ̃^(i,j,k)_{h,suv}, with i, j, k ∈ {1, 2, 3}, are also required but, since most are rather lengthy, they are only stated in the Technical Appendix T.A.2, with the proof of the following result:

Theorem 2: Moments of Forward and Aggregated Variances
The conditional moments of forward one-period variances of model (1) are:

    μ̃^(1)_{h,s} = h̄ + φ^{s−1}(h_{t+1} − h̄),
    μ^(2)_{h,s} = μ̃^(2)_{h,s} − (μ̃^(1)_{h,s})²,
    τ_{h,s} = [μ̃^(3)_{h,s} − 3μ̃^(2)_{h,s}μ̃^(1)_{h,s} + 2(μ̃^(1)_{h,s})³] (μ̃^(2)_{h,s} − (μ̃^(1)_{h,s})²)^{−3/2},
    κ_{h,s} = [μ̃^(4)_{h,s} − 4μ̃^(1)_{h,s}μ̃^(3)_{h,s} + 6(μ̃^(1)_{h,s})²μ̃^(2)_{h,s} − 3(μ̃^(1)_{h,s})⁴] (μ̃^(2)_{h,s} − (μ̃^(1)_{h,s})²)^{−2}.

The conditional moments of the aggregated future variances of model (1) are:

    M̃^(1)_{h,n} = n h̄ + (h_{t+1} − h̄)(1 − φ)^{−1}(1 − φ^n),
    M^(2)_{h,n} = Σ_{s=1}^n (μ̃^(2)_{h,s} − (μ̃^(1)_{h,s})²) + 2 Σ_{s=1}^n Σ_{u=1}^{n−s} (μ̃^(1,1)_{h,su} − μ̃^(1)_{h,s}μ̃^(1)_{h,s+u}),
    T_{h,n} = M^(3)_{h,n} (M^(2)_{h,n})^{−3/2},
    M^(3)_{h,n} = Σ_{s=1}^n [μ̃^(3)_{h,s} − 3μ̃^(2)_{h,s}μ̃^(1)_{h,s} + 2(μ̃^(1)_{h,s})³] + 3 Σ_{s=1}^n Σ_{u=1}^{n−s} A_{h,s,u} + 6 Σ_{s=1}^n Σ_{u=1}^{n−s} Σ_{v=1}^{n−s−u} B_{h,s,u,v},

    A_{h,s,u} = μ̃^(2,1)_{h,su} + μ̃^(1,2)_{h,su} + 2(μ̃^(1)_{h,s} + μ̃^(1)_{h,s+u})(μ̃^(1)_{h,s}μ̃^(1)_{h,s+u} − μ̃^(1,1)_{h,su}) − μ̃^(1)_{h,s}μ̃^(2)_{h,s+u} − μ̃^(1)_{h,s+u}μ̃^(2)_{h,s},
    B_{h,s,u,v} = μ̃^(1,1,1)_{h,suv} − μ̃^(1)_{h,s}μ̃^(1,1)_{h,(s+u)v} − μ̃^(1)_{h,s+u}μ̃^(1,1)_{h,s(u+v)} − μ̃^(1)_{h,s+u+v}μ̃^(1,1)_{h,su} + 2μ̃^(1)_{h,s}μ̃^(1)_{h,s+u}μ̃^(1)_{h,s+u+v},

    K_{h,n} = M^(4)_{h,n} (M^(2)_{h,n})^{−2},
    M^(4)_{h,n} = Σ_{s=1}^n μ^(4)_{h,s} + Σ_{s=1}^n Σ_{u=1}^{n−s} C_{h,s,u} + 12 Σ_{s=1}^n Σ_{u=1}^{n−s} Σ_{v=1}^{n−s−u} D_{h,s,u,v} + 24 Σ_{s=1}^n Σ_{u=1}^{n−s} Σ_{v=1}^{n−s−u} Σ_{w=1}^{n−s−u−v} E_{h,s,u,v,w},

where, writing Δ_j = h_{t+j} − μ̃^(1)_{h,j} for the centred forward variance,

    C_{h,s,u} = 4E_t(Δ_s³ Δ_{s+u}) + 4E_t(Δ_s Δ_{s+u}³) + 6E_t(Δ_s² Δ_{s+u}²),
    D_{h,s,u,v} = E_t(Δ_s² Δ_{s+u} Δ_{s+u+v}) + E_t(Δ_s Δ_{s+u}² Δ_{s+u+v}) + E_t(Δ_s Δ_{s+u} Δ_{s+u+v}²),
    E_{h,s,u,v,w} = E_t(Δ_s Δ_{s+u} Δ_{s+u+v} Δ_{s+u+v+w});

the expansions of these central cross moments in terms of the un-centred cross moments μ̃^(i,j)_{h,su} and μ̃^(i,j,k)_{h,suv} are lengthy and are stated in the Technical Appendix T.A.2.

It has been argued, for instance in Ishida and Engle (2002), that the conditional variance of the conditional variance grows faster than linearly with the current variance. Our formula for μ^(2)_{h,s} shows that, in the context of model (1), it grows quadratically: it easily follows that the conditional variance of the forward conditional variance is a quadratic function of the current variance h_{t+1}. Hence, the uncertainty around the point variance forecast increases much more than linearly when variance levels are high, much reducing the reliability of the point forecast. This highlights the importance of our analytic formulae for the higher moments of GARCH variances.

The convergence behaviour of the conditional moments as the time horizon increases is of interest both theoretically and empirically. The following result summarizes the limits of the moments of returns (the limits of the forward and aggregated mean are trivial and immediate and are thus excluded).
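Before turning to those limits, Theorem 2 can be illustrated numerically: iterating the exact recursions for μ̃^(1)_{h,s}, μ̃^(2)_{h,s} and μ̃^(3)_{h,s} (the latter using the constant c₆ from (4)) delivers the forward variance skewness τ_{h,s} directly. The sketch below uses hypothetical parameter values with standard normal innovations (τ_z = 0, κ_z = 3, μ_z^(6) = 15, F = 1/2), for which γ < 1 and c₆ < 1 so the skewness settles down to a finite value:

```python
import math

omega, alpha, lam, beta = 1e-6, 0.05, 0.10, 0.85   # hypothetical GJR parameters
F, kappa_z, mu6_z = 0.5, 3.0, 15.0                 # standard normal innovation moments
h1 = 3e-5                                          # one-step-ahead variance h_{t+1}

phi = alpha + lam * F + beta
gamma = phi**2 + (kappa_z - 1) * (alpha + lam * F)**2 + kappa_z * lam**2 * F * (1 - F)
c6 = (mu6_z * (alpha**3 + 3 * alpha * lam * (alpha + lam) * F + lam**3 * F)
      + 3 * beta * gamma - beta**2 * (2 * beta + 3 * (alpha + lam * F)))

def variance_skew(s):
    """Forward variance skewness tau_{h,s} from the exact moment recursions
    for E_t(h_{t+s}), E_t(h_{t+s}^2) and E_t(h_{t+s}^3).  Requires s >= 2,
    since h_{t+1} is known at time t (zero variance)."""
    m1, m2, m3 = h1, h1**2, h1**3
    for _ in range(s - 1):
        m1, m2, m3 = (omega + phi * m1,
                      omega**2 + 2 * omega * phi * m1 + gamma * m2,
                      omega**3 + 3 * omega**2 * phi * m1 + 3 * omega * gamma * m2 + c6 * m3)
    mu2 = m2 - m1**2                       # centred second moment of h_{t+s}
    mu3 = m3 - 3 * m2 * m1 + 2 * m1**3    # centred third moment of h_{t+s}
    return mu3 / mu2**1.5

# The forward variance is positively skewed and the skewness stabilizes with s.
assert variance_skew(2) > 0
assert math.isclose(variance_skew(400), variance_skew(399), rel_tol=1e-4)
```

The positive skewness at every horizon is consistent with the intuition, noted later in the paper, that jumps in variance are usually positive.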
Theorem 3: Limiting Behaviour of Moments of Forward and Aggregated Returns
Suppose φ ∈ (0, 1) and φ ≠ γ. The limiting behaviour of the conditional moments of the forward one-period and aggregated returns of model (1) as the time horizon increases is:

    lim_{s→∞} μ^(2)_{r,s} = h̄,                (5)

    lim_{s→∞} τ_{r,s} = τ_z [1 + (3/8)(c₁ − h̄²) h̄^{−2}]  if γ ∈ (0, 1),  and sgn(τ_z) ∞ if γ ∈ [1, ∞),                (6)

    lim_{s→∞} κ_{r,s} = κ_z (ω² + 2ωφh̄)(1 − γ)^{−1} h̄^{−2}  if γ ∈ (0, 1),  and ∞ if γ ∈ [1, ∞),                (7)

    lim_{n→∞} M^(2)_{r,n}/n = h̄,                (8)

    lim_{n→∞} T_{r,n} = 0  if γ ∈ (0, 1),  and sgn(ατ_z + λ ∫_{−∞}^0 x³ f(x) dx) ∞  if γ ∈ [1, ∞),                (9)

    lim_{n→∞} K_{r,n} = 3  if γ ∈ (0, 1);  if γ = 1 the limit is finite (and different from 3) when λ = 0 and τ_z = 0, and is +∞ otherwise;  and the limit is ∞ if γ ∈ (1, ∞).                (10)

Here c₁ = (ω² + 2ωφh̄)(1 − γ)^{−1} as before, and sgn(x) = −1 if x < 0, 0 if x = 0 and 1 if x > 0, with the convention 0 · ∞ = 0. (Intuitively, we expect distributions of forward variances to be positively skewed, since jumps in variance are usually positive rather than negative. In an empirical implementation of the moment formulae derived in this section we find that the skewness of forward variance is indeed positive, for all horizons and all three samples considered; we also find that the excess kurtosis of variance is always positive. These empirical results are excluded from this paper for reasons of space, but can be obtained from the authors on request.)

Hence, under suitable parameter conditions, the conditional moments of forward one-period returns converge to finite limits that are the unconditional counterparts of the respective conditional moments, and these parameter conditions are the necessary and sufficient conditions for the existence of the corresponding unconditional moments. Indeed, φ ∈ (0, 1) is a necessary and sufficient condition for the existence of the unconditional variance, as can be shown using Theorem 2.2 (and Example 2.1) in Ling and McAleer (2002b), and for φ ∈ (0, 1) the unconditional variance E(h_t) exists and is the same for all t ∈ ℕ. It is easy to show that, when it exists, E(h_t) = ω(1 − φ)^{−1} = h̄. (Applying the expectation operator to both sides of the GJR conditional variance equation, and using that the indicator I⁻_t and the even powers of the contemporaneous innovations are independent, we get E(h_t) = ω + αE(ε²_{t−1}) + λE(ε²_{t−1})F + βE(h_{t−1}); using E_{t−2}(ε²_{t−1}) = h_{t−1} and the tower law of expectations, E(h_t) = ω + (α + λF + β)E(h_t), which yields E(h_t) = ω(1 − φ)^{−1} = h̄.) Thus we obtain that, for φ ∈ (0, 1), lim_{s→∞} μ^(2)_{r,s} = lim_{s→∞} E_t(h_{t+s}) = h̄ = E(h_{t+s}).

Again using Theorem 2.2 from Ling and McAleer (2002b), a necessary and sufficient condition for the existence of the fourth unconditional moment is γ ∈ (0, 1), in which case E(ε_t⁴) = κ_z E(h_t²) = κ_z (ω² + 2ωφh̄)(1 − γ)^{−1} = κ_z c₁ and, as a result, the unconditional kurtosis is given by the same expression as in (7) above for γ ∈ (0, 1). The unconditional skewness is E(ε_t³)[E(ε_t²)]^{−3/2} = E(z_t³ h_t^{3/2})[E(h_t)]^{−3/2} = τ_z E(h_t^{3/2})[E(h_t)]^{−3/2}. Since E(h_t^{3/2}) cannot be computed analytically in this framework, we use a second order Taylor series expansion to approximate it, E(h_t^{3/2}) ≃ [E(h_t)]^{3/2} + (3/8) V(h_t)[E(h_t)]^{−1/2}, and the resulting approximation of the unconditional skewness is the same as the expression given in (6) for γ ∈ (0, 1). When γ ∈ [1, ∞), the conditional skewness of aggregated returns diverges to ±∞ and the conditional kurtosis of aggregated returns diverges to +∞.
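The convergence of the forward kurtosis in (7) to its unconditional counterpart can be illustrated numerically by iterating the exact moment recursions far ahead. A sketch, again under hypothetical parameter values with normal innovations and γ < 1:

```python
import math

omega, alpha, lam, beta = 1e-6, 0.05, 0.10, 0.85   # hypothetical GJR parameters
F, kappa_z = 0.5, 3.0                               # standard normal innovations
h1 = 3e-5                                           # one-step-ahead variance h_{t+1}

phi = alpha + lam * F + beta
gamma = phi**2 + (kappa_z - 1) * (alpha + lam * F)**2 + kappa_z * lam**2 * F * (1 - F)
h_bar = omega / (1 - phi)
c1 = (omega**2 + 2 * omega * phi * h_bar) / (1 - gamma)

# Forward return kurtosis kappa_{r,s} = kappa_z mu~(2)_{h,s} / (mu~(1)_{h,s})^2,
# computed at a long horizon via the exact recursions.
m1, m2 = h1, h1**2
for _ in range(999):                               # iterate out to s = 1000
    m1, m2 = omega + phi * m1, omega**2 + 2 * omega * phi * m1 + gamma * m2
kappa_forward = kappa_z * m2 / m1**2

limit = kappa_z * c1 / h_bar**2                    # unconditional kurtosis, eq. (7)
assert math.isclose(kappa_forward, limit, rel_tol=1e-8)
assert limit > kappa_z                             # fatter tails than the innovation
```

The last assertion reflects the earlier observation that variance uncertainty adds tail weight beyond the innovation kurtosis.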
The normal convergence for γ ∈ (0, 1) is similar to a result of Diebold (1988), who shows that the unconditional distribution of the aggregated returns for a conditionally normal AR-ARCH(m, p) process also converges to a normal distribution, under suitable parameter conditions.

Interestingly, identical convergence conditions apply for the moments of both forward and aggregated returns. Whenever the moments of forward returns converge to the unconditional moments, the aggregated moments converge to the corresponding moments of a normal distribution. Moreover, for a special case of the generic framework, namely the normal GARCH(1,1) model with γ = 1, the limit of the kurtosis of forward returns is infinite whilst the kurtosis of aggregated returns converges to a constant value different from 3. In fact, this additional convergence case for γ = 1 is not specific to the normal GARCH(1,1): it applies to any GARCH(1,1) model with symmetric innovations. This result, for the symmetric special case, is in agreement with Breuer and Jandacka (2010), even though our proof is different from theirs.

Theorem 4: Limiting Behaviour of Moments for Forward and Aggregated Variances
Suppose φ ∈ (0, 1) and γ ≠ φ (as above); additionally c₆ ≠ γ and c₆ ≠ φ. Then we have:

a) The limit of the conditional variance of the forward conditional variance of model (1) is:

    lim_{s→∞} μ^(2)_{h,s} = (ω² + 2ωφh̄)(1 − γ)^{−1} − h̄²  if γ ∈ (0, 1),  and ∞ if γ ∈ [1, ∞).                (11)

b) The limit of the conditional variance of the aggregated conditional variance (per unit of time) of model (1) is:

    lim_{n→∞} M^(2)_{h,n}/n = [(ω² + 2ωφh̄)(1 − γ)^{−1} − h̄²](1 + φ)(1 − φ)^{−1}  if γ ∈ (0, 1),  and ∞ if γ ∈ [1, ∞).                (12)

c) The limit of the conditional skewness of the forward conditional variance of model (1) is:

    lim_{s→∞} τ_{h,s} = M₁ if γ ∈ (0, 1) and c₆ ∈ (0, 1);  0 if γ ∈ [1, ∞) and c₆ ∈ (0, γ^{3/2});  M₂ if γ ∈ (1, ∞) and c₆ = γ^{3/2};  and ∞ if {γ ∈ (0, 1), c₆ ∈ [1, ∞)} or {γ = 1, c₆ ∈ (1, ∞)} or {γ ∈ (1, ∞), c₆ ∈ (γ^{3/2}, ∞)},                (13)

where

    M₁ = [(ω³ + 3ω²φh̄ + 3ωγc₁)(1 − c₆)^{−1} − 3h̄c₁ + 2h̄³](c₁ − h̄²)^{−3/2}

is the unconditional skewness of the variance, and M₂ is a lengthy constant, depending on the model parameters and on h_{t+1}, that is stated in the Technical Appendix.

d) For γ ∈ (0, 1) the conditional skewness of the aggregated conditional variance of model (1) has limit:

    lim_{n→∞} T_{h,n} = 0 if c₆ ∈ (0, 1),  ∞ if c₆ ∈ (1, ∞),  and sgn(N)∞ if c₆ = 1,                (14)

where N is another lengthy parameter-dependent constant stated in the Technical Appendix. (Since proofs become increasingly lengthy, we only state the limit of the conditional skewness of the aggregated conditional variance in the case γ ∈ (0, 1); the case γ ≥ 1 is omitted.)

The returns process has no autocorrelation, so the variance of aggregated returns is just the sum of the forward one-period variances. However, the variance process is autocorrelated. As a result, the variances of the two processes have different limiting behaviour. The limit of the variance of aggregated returns per unit of time is equal to the limit of forward variance (i.e. lim_{n→∞} M^(2)_{r,n}/n = lim_{s→∞} μ^(2)_{r,s}), but the same does not hold for the variance of variance: indeed, lim_{n→∞} M^(2)_{h,n}/n > lim_{s→∞} μ^(2)_{h,s}.

Of course, the returns and variance processes are not independent, and certain aspects of this dependence are reflected in a similar behaviour of their limiting distributions. In particular, recall that the forward and aggregated returns had identical regularity conditions, and that whenever a moment of forward returns converges to a finite limit, the corresponding moment of aggregated returns converges to the normal value. Theorem 4 yields a similar result for forward and aggregated variances: the skewness of the aggregated variance converges to zero when the skewness of forward variance converges to a finite value. But for the variance the skewness regularity condition is slightly stronger: it is required not only that γ ∈ (0, 1) but also that c₆ ∈ (0, 1), with c₆ ≠ γ.

Density forecasting is a prime application of our moment formulae. Whilst one must know all the moments of a random variable to determine its distribution, it can be well approximated using only the first four moments. (To be precise, a distribution is uniquely determined by its moments only if the Carleman condition holds, i.e. only if Σ_{n=1}^∞ α_{2n}^{−1/(2n)} = ∞, where {α_k} is the moment sequence; see Serfling, 1980, p. 46. In the following we assume that this condition is met.)

The Edgeworth expansion can approximate a density of interest around a base density, usually the standard normal density. It belongs to the class of Gram-Charlier expansions, being a rearrangement of a Gram-Charlier A series. However, Gram-Charlier A series and Edgeworth series are only equivalent asymptotically, when an infinite number of terms enter the expansions. In empirical applications using finite order approximations they can differ significantly, and the Edgeworth version is preferred since it is an asymptotic expansion. Nevertheless, the Edgeworth expansion may have monotonicity and convergence problems, i.e. the distribution function is not guaranteed to be monotonic and the error of approximation does not necessarily improve when we increase the order of the expansion.

The first four terms of the Edgeworth expansion are:

    f_x(x) ≃ f_{Ex}(x) = φ(x) − (τ_x/6) φ^(3)(x) + ((κ_x − 3)/24) φ^(4)(x) + (τ_x²/72) φ^(6)(x),

where f_{Ex}(x) is the second-order Edgeworth approximation of f_x, so moments (cumulants) of order higher than four (kurtosis) are ignored, φ(x) is the standard normal density, φ^(k) is its k-th derivative, and τ_x and κ_x denote the skewness and kurtosis of f_x. For our purposes f_x will be the density of the normalised forward returns.

A random variable x is said to follow a Johnson SU distribution (Johnson, 1949) if

    z = γ + δ sinh^{−1}((x − ξ)/λ),

where z is a standard normal variable. (For the general theory of the Edgeworth expansion see Edgeworth (1905), Wallace (1958) and Bhattacharya and Ghosh (1978).)
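As an aside, the four-term Edgeworth density above is easy to evaluate via Hermite polynomials, since φ^(k)(x) = (−1)^k He_k(x) φ(x). A minimal sketch, purely illustrative and not tied to the estimated models:

```python
import math

def edgeworth_pdf(x, skew, kurt):
    """Second-order Edgeworth approximation around the standard normal:
    f(x) = phi(x) [1 + (skew/6) He3(x) + ((kurt-3)/24) He4(x) + (skew^2/72) He6(x)],
    which follows from phi^(k)(x) = (-1)^k He_k(x) phi(x)."""
    he3 = x**3 - 3 * x
    he4 = x**4 - 6 * x**2 + 3
    he6 = x**6 - 15 * x**4 + 45 * x**2 - 15
    phi = math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)
    return phi * (1 + skew / 6 * he3 + (kurt - 3) / 24 * he4 + skew**2 / 72 * he6)

# With zero skewness and kurtosis 3 the expansion collapses to the normal density,
# and for moderate moments it still integrates to (approximately) one.
assert math.isclose(edgeworth_pdf(0.7, 0.0, 3.0),
                    math.exp(-0.49 / 2) / math.sqrt(2 * math.pi), rel_tol=1e-12)
grid = [i * 0.01 for i in range(-1000, 1001)]
mass = sum(edgeworth_pdf(x, -0.3, 4.0) for x in grid) * 0.01
assert abs(mass - 1.0) < 1e-3
```

Note that, as discussed above, nothing prevents this approximate density from turning negative in the tails for extreme skewness or kurtosis values; that is the known monotonicity problem of the expansion.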
(On Gram-Charlier series see Chebyshev (1860, 1890), Gram (1883) and Charlier (1905, 1906). An asymptotic expansion is defined in Wallace (1958) as one where the error of approximation approaches zero as one of its parameters, e.g. the sample size T for approximations of the sampling distribution of a random sample of size T, approaches infinity; furthermore, Wallace (1958) calls the Edgeworth expansion a 'formal' asymptotic expansion. On the monotonicity and convergence problems see Jaschke (2002) and Wallace (1958).)

The four parameters γ, δ, ξ and λ of the Johnson SU distribution (not to be confused with the GARCH parameters denoted by the same letters) may be estimated using the moment-matching algorithm described in Tuenter (2001). Although flexible, the main disadvantage of this approach is that a Johnson SU distribution is not guaranteed to exist for every set of mean, variance, skewness and (positive) excess kurtosis.

To assess how well these approximate distributions serve their purpose, we should investigate whether they provide an adequate representation of the conditional distributions of forward returns. But these distributions are not observable, even ex-post, so we shall use simulated distributions as proxies. The null hypothesis is H₀: F_m = F_s, where F_m is the cumulative distribution function of the approximate distribution constructed using the first four moments and a specific approximation method, and F_s is the distribution function of the simulated forward returns based on the GARCH process. The simulated distribution F_s is given by the step function of the sample: F_s(x_i) = T^{−1} i, where i is the number of returns less than or equal to x_i, {x_i : i ∈ {1, ..., T}} is an increasingly ordered sample, and T is the number of simulations. Standard hypothesis tests where the null is the equality of two distributions are the Kolmogorov-Smirnov (KS) and the Anderson-Darling (AD) tests.
The KS test, proposed by Kolmogorov (1933) and Smirnov (1939), is, in effect, a simple hypothesis test based on the maximum difference between an empirical and a hypothetical cumulative distribution. The test statistic is KS = √T D, where D is the maximum distance between the two distributions, i.e. D = max_x |F_m(x) − F_s(x)|. For practical implementations, we use the following version of the statistic for the increasingly ordered sample:

    KS = √T max_{1≤i≤T} {max[F_m(x_i) − (i − 1)/T, i/T − F_m(x_i)]}.

When comparing alternative models, the one with the lowest KS value is deemed the most accurate for predicting the distribution in question. Weighted generalizations of the KS and Cramer-von Mises (CVM) statistics are given by (see Anderson and Darling, 1952, and Pearson and Hartley, 1972):

    T^{1/2} max_x [|F_m(x) − F_s(x)| ψ(F_m(x))^{1/2}],                (15)

    T ∫ (F_m(x) − F_s(x))² ψ(F_m(x)) dF_m(x),                (16)

where ψ is a weighting function. Following convention, we refer to the Anderson-Darling (AD) test as (16) with the weighting function ψ(x) = (x(1 − x))^{−1}.

Conducting these tests in our setting requires the simulation of critical values. The statistics only have standard distributions if the distribution under the null hypothesis is entirely pre-specified, but in our case the F_m distribution relies on estimated parameter values, so the theoretical critical values are no longer applicable.

The performance of our proposed distribution forecasts is tested using daily observations on an equity index (S&P 500), a foreign exchange rate (Euro/dollar) and an interest rate (3-month Treasury bill). These series represent three major market risk types and, within each class, they represent the most important risk factors in terms of volumes of exposures. The three data sets used in this application were obtained from Datastream and each comprises 20 years of daily data, from 1st January 1990 to 31st December 2009.
Figure 1 plots the daily log returns for the equity and exchange rate data and the daily changes in the interest rate. The KS and Cramer-von Mises (CVM) tests are obtained when $\psi(t) = 1$ in (15) and (16) respectively. For practical implementations and for an increasingly ordered sample, we use the following versions of the CVM and AD test statistics (see Anderson and Darling, 1952, and Pearson and Hartley, 1972):

$$\mathrm{CVM} = \sum_{i=1}^{T} \left[ F_m(x_i) - \frac{2i-1}{2T} \right]^2 + \frac{1}{12T}, \qquad \mathrm{AD} = -\frac{1}{T} \sum_{i=1}^{T} (2i-1)\left[ \ln F_m(x_i) + \ln\left(1 - F_m(x_{T+1-i})\right) \right] - T.$$

The Euro was only introduced in 1999, so the ECU/dollar exchange rate is used before this date.

[Figure 1 panels: (a) S&P 500 returns; (b) Euro/Dollar returns; (c) 3-month Treasury Bill returns; sample period January 1990 to December 2009.]
Figure 1:
Returns
The equity and exchange rate daily (log) returns are computed as the first differences of the logarithm of the S&P 500 index values and Euro/dollar exchange rates, respectively. The interest rate returns are computed as first differences in interest rate values.
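The practical KS, CVM and AD formulas above can be sketched as follows (the helper name is ours; `model_cdf` plays the role of $F_m$ and the ordered simulated sample is the step-function proxy for $F_s$):

```python
import math

def distribution_test_statistics(sample, model_cdf):
    """KS, Cramer-von Mises and Anderson-Darling statistics for an increasingly
    ordered simulated sample against a model CDF F_m."""
    x = sorted(sample)
    T = len(x)
    u = [model_cdf(v) for v in x]   # F_m evaluated at the order statistics
    # KS = sqrt(T) * max_i max[F_m(x_i) - (i-1)/T, i/T - F_m(x_i)]  (i is 1-based)
    ks = math.sqrt(T) * max(max(u[i] - i / T, (i + 1) / T - u[i]) for i in range(T))
    # CVM = sum_i [F_m(x_i) - (2i-1)/(2T)]^2 + 1/(12T)
    cvm = sum((u[i] - (2 * i + 1) / (2.0 * T)) ** 2 for i in range(T)) + 1.0 / (12.0 * T)
    # AD = -(1/T) sum_i (2i-1)[ln F_m(x_i) + ln(1 - F_m(x_{T+1-i}))] - T
    ad = -sum((2 * i + 1) * (math.log(u[i]) + math.log(1.0 - u[T - 1 - i]))
              for i in range(T)) / T - T
    return ks, cvm, ad
```

For a well-specified $F_m$ all three statistics are small; a misspecified $F_m$ inflates them, which is the basis of the model comparisons reported below.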
Table 1 presents the sample statistics of the empirical unconditional daily returns distribution over the entire sample. In accordance with the stylized facts, the mean of every series is not statistically different from zero, and the unconditional volatility is highest for the equity and lowest for the interest rate. Skewness is negative and small in absolute value but significant for all three series, so extreme negative returns are more likely than extreme positive returns of the same magnitude, while excess kurtosis is always positive and highly significant, so the unconditional distributions of the series have greater probability mass in the tails than the normal distribution with the same variance.

Four different GARCH models, namely the baseline GARCH(1,1) and the asymmetric GJR, each with normal and Student t error distributions, are estimated for each of the three time series. (First differences in fixed-maturity interest rates are the equivalent of log returns on the corresponding bonds.) Their parameters are repeatedly estimated, approximately 2500 times, on a sample consisting of ten years of daily data which is rolled daily for an additional ten years. The resulting time series of model parameters are subsequently used to compute corresponding time series for the higher moments of forward and aggregated returns, based on the analytic formulae derived in Section 2. In this way we may construct time series of conditional moments for the forward and aggregated returns and variances, for any time horizon h, from 3rd January 2000 to 31st December 2009.

For the symmetric models – the normal and Student t GARCH(1,1) – the skewness of the returns, both forward and aggregated, is zero by construction. However, the asymmetric specifications – the normal and Student t GJR – lead to non-zero skewness forecasts for the aggregated returns. The skewness of the (forward and aggregated) variances is non-zero even for the symmetric models. All four models yield non-zero, positive excess kurtosis for all time series.

Now we apply the distribution approximation methods reviewed in Section 4.1 to derive distributions for the h-day forward and aggregated returns.

Table 1: Summary statistics. The summary statistics are of the equity and exchange rate daily log returns, and of the daily changes in interest rates, from 1990 to 2009. Asterisks denote significance at 5% (*), 1% (**) and 0.1% (***). The standard error of the sample mean is equal to the sample standard deviation divided by the square root of the sample size, while the standard errors are approximately (6/T)^{1/2} and (24/T)^{1/2} for the sample skewness and excess kurtosis, respectively, where T is the sample size. We used 252 risk days per year to annualize the standard deviation into volatility.
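The summary statistics and approximate standard errors described in the caption of Table 1 can be sketched as follows (function name ours; the 252-day annualization convention follows the caption):

```python
import math

def summary_stats(returns, periods_per_year=252):
    """Sample moments of a return series, with the approximate standard errors
    quoted in the caption of Table 1."""
    T = len(returns)
    m = sum(returns) / T
    sd = math.sqrt(sum((r - m) ** 2 for r in returns) / T)
    skew = sum((r - m) ** 3 for r in returns) / (T * sd ** 3)
    xkurt = sum((r - m) ** 4 for r in returns) / (T * sd ** 4) - 3.0
    return {
        "mean": m,
        "se_mean": sd / math.sqrt(T),           # sd / sqrt(sample size)
        "ann_vol": sd * math.sqrt(periods_per_year),
        "skew": skew,
        "se_skew": math.sqrt(6.0 / T),          # approximate SE of sample skewness
        "xkurt": xkurt,
        "se_xkurt": math.sqrt(24.0 / T),        # approximate SE of excess kurtosis
    }
```

Significance of skewness and excess kurtosis is then assessed by comparing each estimate with a multiple of its standard error, as in Table 1.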
We combine four GARCH specifications (the normal and Student t GARCH(1,1) and GJR models) with two approximation methods (the Johnson SU distribution and the Edgeworth expansion), and thus have eight alternative approximate distributions to evaluate and compare with the simulated distributions, each based on 10,000 simulations. We test the accuracy of the approximations using the KS distance and the CVM and AD test statistics described in Section 4.2. To capture any differences between market regimes, the tests are performed for 150 days from a low volatility period (January to August 2006), 150 days from a high volatility period (August 2008 to March 2009), and the last 150 observations from 2009. In our results these periods are labelled 'low vol', 'high vol' and 'current' respectively, and the label 'total' refers to the results obtained for all three out-of-sample periods considered together. Finally, the time horizon we consider here is h = 5 days.

Tables 2 and 3 summarize the results of the distribution tests for each of the eight approximate distributions considered. The AD test gave results very similar to the CVM test, hence we report only the results for the KS and CVM tests. We report the mean values and the associated standard deviations of the test statistics, and also the percentage of times the computed test statistic was higher than the asymptotic 5% critical value. Since we perform the tests at the 5% significance level, we expect a 5% rejection rate. The 5% critical values are 0.0136 for the KS distance and 0.461 for the CVM statistic. Although the asymptotic critical values do not apply exactly in our case, the model that produces the lowest values of the test statistic is still the best among the alternatives.
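Simulated critical values of the kind discussed here can be obtained by Monte Carlo. The sketch below (names ours) does this for a fully specified uniform null; replicating the paper's setting, where $F_m$ depends on estimated parameters, would additionally require re-estimating the model inside each replication.

```python
import math
import random

def simulate_ks_critical_value(T, n_reps, alpha=0.05, seed=1):
    """Monte Carlo (1 - alpha) quantile of the KS distance D under a fully
    specified (uniform) null, for samples of size T."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_reps):
        u = sorted(rng.random() for _ in range(T))
        # D = max_i max[u_i - (i-1)/T, i/T - u_i], with i 1-based
        d = max(max(u[i] - i / T, (i + 1) / T - u[i]) for i in range(T))
        stats.append(d)
    stats.sort()
    return stats[int(math.ceil((1.0 - alpha) * n_reps)) - 1]
```

With T = 10,000 the simulated 5% point of $\sqrt{T} D$ is close to the asymptotic 1.36, i.e. $D \approx 0.0136$, which is the critical value quoted above.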
We now discuss the results in greater detail for the equity, exchange rate and interest rate distributions. For the interest rate sample, fitting the Johnson SU distribution using the moments of the aggregated returns estimated for the Student t GJR was problematic; hence, for this sample, we do not report results for the Johnson SU Student t GJR in Table 3. The critical values quoted above are asymptotic results for a test where the distribution being tested is continuous, fully known and generic (no particular family of distributions assumed). Stephens (1970) derives modified statistics for the finite-sample case; however, with a sample size of 10,000 these modifications are not actually needed, and the asymptotic results would apply if the hypothetical distribution were fully specified. In our case this distribution is based on estimated parameters, and we would need to simulate the correct critical values to carry out the tests properly. Still, we report the percentage of times the test statistics exceed the asymptotic critical values, so that we can infer, approximately, whether the test results are at least in the vicinity of these critical values. The results have to be interpreted with care, since the appropriate (simulated) critical values for this testing exercise are likely lower than the asymptotic critical values reported above (see Massey, 1951). By 'best among alternatives' we mean 'closest to the (respective) simulated distribution'; again, some care is needed, since the simulated distribution is obviously not the same for all alternative approximate distributions.

(a) S&P 500 Index
Using the Edgeworth expansion, the average test statistics are substantially greater than their Johnson SU counterparts for the Student t GARCH(1,1) and GJR models. For the normal GARCH(1,1) and GJR models the average values of both the KS and CVM test statistics are still greater, but only marginally, for the Edgeworth approximations, compared with their Johnson SU counterparts.
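For reference, a four-moment Edgeworth approximation to a CDF of the kind compared here can be sketched as follows. This is one standard form of the expansion (the paper's exact variant may differ) and all names are ours:

```python
import math

def edgeworth_cdf(x, mean, var, skew, xkurt):
    """Four-moment Edgeworth approximation to a CDF: the normal CDF corrected by
    Hermite-polynomial terms in the skewness and excess kurtosis."""
    w = (x - mean) / math.sqrt(var)                       # standardized argument
    phi = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)
    big_phi = 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))
    he2 = w * w - 1.0                                     # Hermite polynomials
    he3 = w ** 3 - 3.0 * w
    he5 = w ** 5 - 10.0 * w ** 3 + 15.0 * w
    return big_phi - phi * (skew / 6.0 * he2
                            + xkurt / 24.0 * he3
                            + skew ** 2 / 72.0 * he5)
```

With zero skewness and excess kurtosis the approximation collapses to the normal CDF; for large input moments the expansion can leave [0, 1], one reason it can compare unfavourably with the Johnson SU fit in the tables below.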
Indeed, if the Edgeworth expansion is used, then the models with normal innovations fit the simulated distributions significantly better than their Student t counterparts. This is not the case for the Johnson SU distribution, where all four GARCH specifications provide very close fits to the simulated distributions (although again the normal models, and especially the normal GARCH(1,1), give slightly better fits than the Student t models). Overall, the Johnson SU normal GARCH(1,1) model produces the lowest average values of both the KS distance (0.0088) and the CVM test statistic (0.1719). For the aggregated returns, the results improve even further, with the Johnson SU methodology still proving superior overall to the Edgeworth expansion (when combined with the Student t models), but to a lesser extent than in the case of the forward returns.

(b) Euro/Dollar Exchange Rate
From the results in Table 2 there is not much to choose between the two alternative approximation methods when the innovations are normal. Indeed, the results for both distribution tests are virtually identical (and good) for the normal GARCH(1,1) and GJR, using either the Johnson SU or the Edgeworth expansion. For the Johnson SU approach, the fit is closest when the innovations are normal, but the fit is almost as good based on the Student t models. However, when the Edgeworth expansion is employed, the results obtained with the Student t models are significantly worse than when the innovations are normal. For the distributions of aggregated Euro/dollar returns, all models produce similar and good results.

(c) 3-month Treasury Bill Rate
As in (a), the Johnson SU should be preferred to the Edgeworth expansion. The Johnson SU approximation yields the lowest average KS distance as well as the lowest value of the CVM test statistic for the normal GARCH(1,1) model, with the normal GJR model second best.
The fits deteriorate when the innovations have a Student t distribution.

To summarize, the predictive distributions of forward and aggregated returns on major risk factors may be well approximated using the analytic expressions for the first four conditional moments that we have derived in this paper. The best distribution approximation method overall is the Johnson SU, and all four GARCH models that we have tested have predictive forward and aggregated returns distributions that can be well approximated, the easiest being that generated by the normal GARCH(1,1).

[Table 2 data. Model groups: Normal GARCH(1,1) | Student t GARCH(1,1) | Normal GJR | Student t GJR; within each group the columns are total, low vol, high vol, current.]

S&P 500 — Johnson SU
KS-average         0.0088 0.0088 0.0086 0.0089 | 0.0092 0.0091 0.0091 0.0093 | 0.0088 0.0088 0.0087 0.0089 | 0.009 0.009 0.0089 0.0091
KS-stdev           0.0027 0.0027 0.0026 0.0027 | 0.0027 0.0028 0.0026 0.0029 | 0.0027 0.0027 0.0025 0.0027 | 0.0027 0.0028 0.0025 0.0028
KS-rejections@5%   6.00% 6.00% 4.67% 7.33% | 7.33% 8.67% 6.00% 7.33% | 6.00% 6.67% 5.33% 6.00% | 6.89% 8.00% 4.67% 8.00%
CVM-average        0.1719 0.1765 0.1577 0.1814 | 0.1897 0.1935 0.1809 0.1948 | 0.1737 0.1803 0.1593 0.1814 | 0.1824 0.1896 0.1688 0.1889
CVM-stdev          0.1496 0.1628 0.1298 0.1543 | 0.1544 0.1664 0.1346 0.1611 | 0.1504 0.1635 0.1284 0.157 | 0.1528 0.1652 0.1296 0.1611
CVM-rejections@5%  6.44% 8.00% 4.67% 6.67% | 6.00% 8.00% 4.00% 6.00% | 6.89% 8.67% 4.67% 7.33% | 6.00% 8.00% 4.00% 6.00%
S&P 500 — Edgeworth
KS-average         0.0088 0.0088 0.0086 0.009 | 0.0171 0.01629 0.01676 0.0182 | 0.0089 0.0089 0.0088 0.009 | 0.0142 0.0142 0.0139 0.0144
KS-stdev           0.0027 0.0027 0.0026 0.0027 | 0.0031 0.0029 0.0031 0.0031 | 0.0027 0.0027 0.0025 0.0028 | 0.003 0.0028 0.003 0.003
KS-rejections@5%   5.78% 6.00% 4.00% 7.33% | 86.89% 80.67% 85.33% 94.67% | 6.67% 8.00% 5.33% 6.67% | 54.00% 54.67% 50.67% 56.67%
CVM-average        0.1721 0.1768 0.1579 0.1815 | 0.8535 0.7698 0.8121 0.9785 | 0.1764 0.1846 0.1614 0.1832 | 0.5387 0.5516 0.5059 0.5587
CVM-stdev          0.1497 0.1629 0.1296 0.1546 | 0.3256 0.2887 0.2912 0.3561 | 0.1513 0.1647 0.1284 0.1582 | 0.2471 0.244 0.2278 0.2663
CVM-rejections@5%  6.22% 8.00% 4.67% 6.00% | 91.33% 86.67% 91.33% 96.00% | 6.67% 8.67% 4.67% 6.67% | 56.89% 59.33% 50.00% 61.33%
Euro/dollar — Johnson SU
KS-average         0.0088 0.0088 0.0086 0.009 | 0.009 0.009 0.0088 0.0091 | 0.0088 0.0088 0.0086 0.009 | 0.009 0.009 0.0088 0.0091
KS-stdev           0.0027 0.0027 0.0026 0.0028 | 0.0027 0.0028 0.0025 0.0028 | 0.0027 0.0027 0.0026 0.0027 | 0.0027 0.0027 0.0025 0.0028
KS-rejections@5%   5.11% 4.67% 4.67% 6.00% | 6.44% 8.00% 4.00% 7.33% | 5.11% 4.67% 4.67% 6.00% | 6.22% 8.00% 4.00% 6.67%
CVM-average        0.1713 0.1754 0.1571 0.1814 | 0.1802 0.1883 0.1636 0.1888 | 0.1713 0.1754 0.1571 0.1814 | 0.1803 0.1886 0.1646 0.1878
CVM-stdev          0.1495 0.1618 0.1302 0.1546 | 0.1519 0.1647 0.129 0.1591 | 0.1496 0.1618 0.1303 0.1547 | 0.1519 0.1648 0.1297 0.1587
CVM-rejections@5%  6.44% 7.33% 4.67% 7.33% | 6.89% 9.33% 4.67% 6.67% | 6.44% 7.33% 4.67% 7.33% | 5.55% 7.33% 3.33% 6.00%
Euro/dollar — Edgeworth
KS-average         0.0088 0.0088 0.0086 0.009 | 0.0136 0.0151 0.0122 0.0134 | 0.0088 0.0088 0.0086 0.009 | 0.0135 0.015 0.0122 0.0133
KS-stdev           0.0027 0.0027 0.0026 0.0028 | 0.0032 0.0029 0.0029 0.003 | 0.0027 0.0027 0.0026 0.0027 | 0.0032 0.003 0.0029 0.0029
KS-rejections@5%   5.11% 4.67% 4.67% 6.00% | 45.78% 67.33% 29.33% 40.67% | 5.11% 4.67% 4.67% 6.00% | 45.33% 66.67% 29.33% 40.00%
CVM-average        0.1713 0.1754 0.1571 0.1814 | 0.4941 0.6435 0.3761 0.4628 | 0.1713 0.1754 0.1571 0.1814 | 0.4886 0.644 0.3735 0.4484
CVM-stdev          0.1495 0.1618 0.1302 0.1547 | 0.2628 0.2735 0.1876 0.2462 | 0.1496 0.1618 0.1303 0.1547 | 0.2621 0.2754 0.1878 0.2379
CVM-rejections@5%  6.44% 7.33% 4.67% 7.33% | 52.89% 78.67% 32.00% 48.00% | 6.44% 7.33% 4.67% 7.33% | 44.89% 74.67% 24.00% 36.00%
3M Bill — Johnson SU
KS-average         0.0106 0.0103 0.0104 0.0111 | 0.164 0.1315 0.2498 0.0989 | 0.0124 0.0113 0.0127 0.0132 | 0.0234 0.0212 0.0263 0.0227
KS-stdev           0.0031 0.0031 0.0029 0.0032 | 0.0898 0.0238 0.0996 0.0214 | 0.0033 0.0032 0.003 0.0034 | 0.0047 0.0041 0.0046 0.0034
KS-rejections@5%   17.02% 16.67% 12.00% 23.58% | 100% 100% 100% 100% | 34.04% 26.00% 34.67% 43.09% | 99.53% 98.67% 100% 100%
CVM-average        0.2546 0.2478 0.236 0.2856 | 155.3 85.868 312.3 48.57 | 0.3651 0.3038 0.3786 0.4234 | 1.782 1.4011 2.3533 1.5497
CVM-stdev          0.1822 0.1865 0.1548 0.204 | 173.7 33.399 211.8 21.649 | 0.2154 0.2006 0.1921 0.2409 | 0.8393 0.6283 0.9624 0.435
CVM-rejections@5%  12.77% 13.33% 8.00% 17.89% | 100% 100% 100% 100% | 26.24% 19.33% 27.33% 33.33% | 99.29% 98.00% 100% 100%
3M Bill — Edgeworth
KS-average         0.0267 0.0187 0.0325 0.0294 | 0.2662 0.2097 0.3491 0.234 | 0.0321 0.0217 0.0395 0.0356 | 0.1997 0.1496 0.254 0.1944
KS-stdev           0.0074 0.0033 0.0054 0.004 | 0.0812 0.0247 0.0824 0.0181 | 0.0094 0.0035 0.0068 0.0044 | 0.0537 0.0179 0.0471 0.0122
KS-rejections@5%   98.35% 95.33% 100% 100% | 100% 100% 100% 100% | 99.76% 99.33% 100% 100% | 100% 100% 100% 100%
CVM-average        2.6567 1.0831 3.9245 3.0298 | 286.1 181.8 438.1 227.9 | 4.0869 1.5392 6.1278 4.7048 | 188.6 99.35 295.4 167.1
CVM-stdev          1.5653 0.4047 1.3952 0.8581 | 152.7 40.51 163.1 31 | 2.5031 0.5157 2.2873 1.2126 | 106.3 25.08 105.4 21.35
CVM-rejections@5%  99.29% 98.00% 100% 100% | 100% 100% 100% 100% | 100% 100% 100% 100% | 100% 100% 100% 100%
Table 2:
Distribution tests for the approximate distributions of 5-day forward returns
We report the average KS distance (KS-average) and CVM test statistic (CVM-average), with associated standard deviations (KS-stdev and CVM-stdev, respectively), and the percentage of cases where the test statistics are greater than the asymptotic 5% critical values (KS-rejections@5% and CVM-rejections@5%, respectively), for the 5-day forward returns on the S&P 500, Euro/dollar and 3-month Treasury Bills, respectively. Labels 'low vol', 'high vol' and 'current' refer to the sub-periods January to August 2006, August 2008 to March 2009 and the last 150 observations from 2009, respectively. The label 'total' refers to all three sub-periods considered together.

[Table 3 data. Model groups: Normal GARCH(1,1) | Student t GARCH(1,1) | Normal GJR | Student t GJR; within each group the columns are total, low vol, high vol, current.]

S&P 500 — Johnson SU
KS-average         0.0113 0.0085 0.0086 0.0169 | 0.0088 0.0086 0.009 0.0088 | 0.009 0.0088 0.0091 0.0092 | 0.0097 0.0094 0.0098 0.0099
KS-stdev           0.0144 0.0025 0.0024 0.0238 | 0.0026 0.0025 0.0026 0.0027 | 0.0026 0.0024 0.0027 0.0027 | 0.0028 0.0027 0.0028 0.0029
KS-rejections@5%   8.67% 4.67% 4.67% 16.67% | 5.78% 6.00% 5.33% 6.00% | 5.78% 3.33% 6.00% 8.00% | 9.78% 8.00% 9.33% 12.00%
CVM-average        1.1472 0.1554 0.1608 3.1253 | 0.1686 0.1592 0.174 0.1726 | 0.1796 0.1704 0.1818 0.1867 | 0.2127 0.1957 0.2206 0.2218
CVM-stdev          5.8194 0.1351 0.1396 9.8033 | 0.1408 0.1397 0.1382 0.145 | 0.1498 0.1398 0.1587 0.1509 | 0.1716 0.1546 0.1811 0.1778
CVM-rejections@5%  9.33% 4.00% 5.33% 18.67% | 4.67% 4.67% 4.00% 5.33% | 5.78% 7.33% 4.67% 5.33% | 7.56% 6.67% 6.67% 9.33%
S&P 500 — Edgeworth
KS-average         0.0114 0.0086 0.0087 0.017 | 0.0107 0.0102 0.0111 0.0108 | 0.0095 0.0095 0.0095 0.0096 | 0.0113 0.0112 0.0113 0.0113
KS-stdev           0.0142 0.0025 0.0025 0.0235 | 0.0028 0.0028 0.0028 0.0029 | 0.0027 0.0026 0.0028 0.0028 | 0.003 0.0028 0.0031 0.0031
KS-rejections@5%   9.56% 6.00% 4.67% 18.00% | 15.33% 11.33% 17.33% 17.33% | 8.44% 8.00% 6.67% 10.67% | 18.67% 14.00% 19.33% 22.67%
CVM-average        1.1322 0.1595 0.1643 3.0728 | 0.27 0.2354 0.2904 0.2844 | 0.2071 0.2041 0.2075 0.2096 | 0.3208 0.3049 0.3289 0.3287
CVM-stdev          5.7089 0.1372 0.1389 9.6171 | 0.175 0.1659 0.1692 0.1853 | 0.1703 0.1602 0.1822 0.1689 | 0.2293 0.2061 0.243 0.2378
CVM-rejections@5%  8.89% 4.67% 4.00% 18.00% | 13.33% 9.33% 16.00% 14.67% | 7.11% 6.67% 6.00% 8.67% | 18.89% 13.33% 20.67% 22.67%
Euro/dollar — Johnson SU
KS-average         0.0086 0.0084 0.0086 0.0087 | 0.0087 0.0085 0.0087 0.0087 | 0.0086 0.0084 0.0086 0.0087 | 0.0087 0.0085 0.0088 0.0087
KS-stdev           0.0025 0.0025 0.0024 0.0027 | 0.0026 0.0025 0.0025 0.0028 | 0.0025 0.0025 0.0024 0.0027 | 0.0026 0.0025 0.0025 0.0028
KS-rejections@5%   4.67% 4.67% 4.67% 4.67% | 4.44% 3.33% 4.00% 6.00% | 4.44% 4.00% 4.67% 4.67% | 4.89% 4.00% 4.00% 6.67%
CVM-average        0.1625 0.156 0.1617 0.1697 | 0.166 0.1591 0.1645 0.1743 | 0.1625 0.1561 0.1613 0.1703 | 0.1663 0.159 0.1654 0.1745
CVM-stdev          0.1401 0.1381 0.1411 0.1416 | 0.1423 0.1391 0.1404 0.1478 | 0.1401 0.1382 0.1402 0.1424 | 0.142 0.1398 0.1392 0.1475
CVM-rejections@5%  5.11% 4.67% 4.67% 6.00% | 4.67% 4.00% 4.67% 5.33% | 5.11% 4.67% 4.67% 6.00% | 4.22% 3.33% 4.00% 5.33%
Euro/dollar — Edgeworth
KS-average         0.0086 0.0084 0.0086 0.0087 | 0.009 0.0089 0.009 0.009 | 0.0086 0.0084 0.0086 0.0087 | 0.009 0.0089 0.0091 0.0091
KS-stdev           0.0025 0.0025 0.0024 0.0027 | 0.0027 0.0025 0.0025 0.0029 | 0.0025 0.0025 0.0024 0.0027 | 0.0026 0.0026 0.0025 0.0028
KS-rejections@5%   5.11% 4.67% 4.67% 6.00% | 6.00% 6.67% 4.00% 7.33% | 4.67% 4.67% 4.67% 4.67% | 6.44% 6.67% 4.67% 8.00%
CVM-average        0.1626 0.1559 0.1617 0.1703 | 0.1784 0.1722 0.173 0.1899 | 0.1627 0.156 0.1612 0.171 | 0.1793 0.172 0.1756 0.1904
CVM-stdev          0.1402 0.1382 0.1408 0.1422 | 0.1474 0.1448 0.1407 0.1567 | 0.1403 0.1384 0.1395 0.1434 | 0.1468 0.1458 0.1396 0.1549
CVM-rejections@5%  5.11% 4.67% 4.67% 6.00% | 4.89% 4.00% 4.67% 6.00% | 4.89% 4.67% 4.67% 5.33% | 4.44% 4.00% 4.00% 5.33%
3M Bill — Johnson SU (Student t GJR: N/A)
KS-average         0.0087 0.0084 0.0087 0.0089 | 0.0155 0.0144 0.0158 0.0165 | 0.0106 0.0094 0.0112 0.0113 | N/A
KS-stdev           0.0025 0.0024 0.0024 0.0025 | 0.0036 0.0042 0.0031 0.0031 | 0.003 0.0026 0.003 0.0032 | N/A
KS-rejections@5%   4.73% 4.00% 4.67% 5.69% | 72.81% 59.33% 78.00% 82.93% | 15.37% 5.33% 18.00% 24.39% | N/A
CVM-average        0.1614 0.152 0.1645 0.169 | 0.6838 0.6 0.686 0.7834 | 0.2575 0.1948 0.2889 0.2956 | N/A
CVM-stdev          0.1288 0.1233 0.1349 0.1282 | 0.3202 0.3659 0.265 0.2943 | 0.1931 0.1431 0.2116 0.2044 | N/A
CVM-rejections@5%  4.26% 4.00% 4.67% 4.07% | 76.83% 60.67% 81.33% 91.06% | 15.60% 6.00% 18.67% 23.58% | N/A
3M Bill — Edgeworth
KS-average         0.0235 0.0179 0.0269 0.026 | 0.0636 0.0511 0.0747 0.0655 | 0.0289 0.0214 0.0335 0.0326 | 0.0543 0.0468 0.0627 0.0533
KS-stdev           0.0054 0.003 0.0041 0.0033 | 0.0123 0.0069 0.0094 0.0032 | 0.0071 0.0033 0.0054 0.004 | 0.0096 0.0065 0.0089 0.0037
KS-rejections@5%   92.44% 95.33% 100% 100% | 100% 100% 100% 100% | 100% 100% 100% 100% | 100% 100% 100% 100%
CVM-average        1.8418 0.9387 2.3913 2.273 | 17.198 10.55 23.437 17.696 | 2.9942 1.4219 3.9511 3.7446 | 11.682 8.5839 15.458 10.857
CVM-stdev          0.8681 0.3213 0.7176 0.5385 | 6.6817 2.872 5.6867 1.5538 | 1.4736 0.4556 1.2011 0.8734 | 4.2451 2.5548 4.32 1.248
CVM-rejections@5%  97.64% 93.33% 100% 100% | 100% 100% 100% 100% | 100% 100% 100% 100% | 100% 100% 100% 100%
Table 3:
Distribution tests for the approximate distributions of 5-day aggregated returns
We report the average KS distance (KS-average) and CVM test statistic (CVM-average), with associated standard deviations (KS-stdev and CVM-stdev, respectively), and the percentage of cases where the test statistics are greater than the asymptotic 5% critical values (KS-rejections@5% and CVM-rejections@5%, respectively), for the 5-day aggregated returns on the S&P 500, Euro/dollar and 3-month Treasury Bills, respectively. Labels 'low vol', 'high vol' and 'current' refer to the sub-periods January to August 2006, August 2008 to March 2009 and the last 150 observations from 2009, respectively. The label 'total' refers to all three sub-periods considered together.

Conclusions
We have derived analytical expressions for the moments of forward and aggregated returns and variances for an established asymmetric GARCH specification, namely the GJR model, with a generic innovation distribution. Special cases include the normal and Student t GARCH(1,1) and GJR models. We found that the distribution of forward returns is skewed only if the distribution of the innovations is skewed, but the distribution of aggregated returns is skewed even if the innovation distribution is symmetric. The other source of skewness in this case is the asymmetric response of variance to positive and negative shocks (i.e. λ ≠ 0). There are two sources of kurtosis in forward returns: the degree of leptokurtosis of the innovation distribution and the uncertainty in the forward variance. Since the one-step-ahead variance is deterministic in a GARCH setting, the kurtosis coefficient of the one-step-ahead returns distribution is equal to that of the innovation distribution. However, whenever we forecast s steps ahead (with s >
1) using a GARCH(1,1) or GJR model, the s-step-ahead returns distribution has kurtosis in excess of that of the innovation distribution, because the forward variance is uncertain.

References
Andersen, T.G. & T. Bollerslev (1998) Answering the Critics: Yes, ARCH Models Do Provide Good Volatility Forecasts. International Economic Review 39, 885-905.
Andersen, T.G., T. Bollerslev & F.X. Diebold (2009) Parametric and Nonparametric Volatility Measurement. In L.P. Hansen & Y. Ait-Sahalia (eds.), Handbook of Financial Econometrics. North-Holland.
Anderson, T.W. & D.A. Darling (1952) Asymptotic Theory of Certain "Goodness of Fit" Criteria Based on Stochastic Processes. The Annals of Mathematical Statistics 23, 193-212.
Journal of Econometrics.
Journal of Applied Econometrics 21, 79-109.
Bhattacharya, R.N. & J.K. Ghosh (1978) On the Validity of the Formal Edgeworth Expansion. The Annals of Statistics 6, 435-451.
Bollerslev, T. (1986) Generalized Autoregressive Conditional Heteroskedasticity. Journal of Econometrics 31, 307-327.
Bollerslev, T. (1987) A Conditionally Heteroskedastic Time Series Model for Speculative Prices and Rates of Return. The Review of Economics and Statistics 69, 542-547.
Breuer, T. & M. Jandacka (2010) Temporal Aggregation of GARCH Models: Conditional Kurtosis and Optimal Frequency. Working Paper, available from http://ssrn.com.
Charlier, C.V.L. (1905) Über das Fehlergesetz. Arkiv för Matematik, Astronomi och Fysik 8, 1-9.
Charlier, C.V.L. (1906) Über die Darstellung willkürlicher Funktionen. Arkiv för Matematik, Astronomi och Fysik 20, 1-35.
Chebyshev, P.L. (1860) Sur le Développement des Fonctions à une Seule Variable. Bulletin de la Classe Physique-Mathématique de l'Académie Impériale des Sciences de St.-Pétersbourg 3, 193-202.
Chebyshev, P.L. (1890) Sur Deux Théorèmes Relatifs aux Probabilités. Acta Mathematica 14, 305-315.
Christoffersen, P.F., C. Dorion, K. Jacobs & Y. Wang (2010) Volatility Components: Affine Restrictions and Non-normal Innovations. Journal of Business and Economic Statistics 28, 483-502.
Christoffersen, P.F., K. Jacobs, C. Ornthanalai & Y. Wang (2008) Option Valuation with Long-run and Short-run Volatility Components. Journal of Financial Economics.
The Econometrics Journal 5, 345-357.
Diebold, F.X. (1988) Empirical Modelling of Exchange Rates. Springer.
Duan, J.C., G. Gauthier, C. Sasseville & J.G. Simonato (2006) Approximating the GJR-GARCH and EGARCH Option Pricing Models Analytically. Journal of Computational Finance 9, 41-69.
Duan, J.C., G. Gauthier & J.G. Simonato (1999) An Analytical Approximation for the GARCH Option Pricing Model. Journal of Computational Finance 2, 76-116.
Edgeworth, F.Y. (1905) The Law of Error. Transactions of the Cambridge Philosophical Society 20, 33-66 and 113-141.
Engle, R.F. (1982) Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation. Econometrica 50, 987-1007.
Engle, R.F. (1990) Discussion: Stock Market Volatility and the Crash of '87. Review of Financial Studies 3, 103-106.
Engle, R.F. & V.K. Ng (1993) Measuring and Testing the Impact of News on Volatility. Journal of Finance 48, 1749-1778.
Fama, E. (1965) The Behaviour of Stock Market Prices. Journal of Business 38, 34-105.
Glosten, L.R., R. Jagannathan & D.E. Runkle (1993) On the Relation Between the Expected Value and the Volatility of the Nominal Excess Return on Stocks. Journal of Finance 48, 1779-1801.
Gram, J.P. (1883) Über die Entwickelung reeller Funktionen in Reihen mittelst der Methode der kleinsten Quadrate. Journal für die reine und angewandte Mathematik (Crelle) 94, 41-73.
Haas, M., S. Mittnik & M.S. Paolella (2004) Mixed Normal Conditional Heteroskedasticity. Journal of Financial Econometrics 2, 493-530.
He, C. & T. Terasvirta (1999a) Properties of Moments of a Family of GARCH Processes. Journal of Econometrics 92, 173-192.
He, C. & T. Terasvirta (1999b) Fourth Moment Structure of the GARCH(p,q) Process. Econometric Theory 15, 824-846.
He, C., T. Terasvirta & H. Malmsten (2002) Fourth Moment Structure of a Family of First-Order Exponential GARCH Models. Econometric Theory 18, 868-885.
Heston, S.L. & S. Nandi (2000) A Closed-Form GARCH Option Pricing Model. Review of Financial Studies 13, 281-300.
Ishida, I. & R.F. Engle (2002) Modeling Variance of Variance: The Square-Root, the Affine, and the CEV GARCH Models. Working Paper.
Jaschke, S. (2002) The Cornish-Fisher Expansion in the Context of Delta-Gamma-Normal Approximations. Journal of Risk 4, 33-52.
Johnson, N.L. (1949) Systems of Frequency Curves Generated by Methods of Translation. Biometrika 36, 149-176.
Karanasos, M. (1999) The Second Moment and the Autocovariance Function of the Squared Errors of the GARCH Model. Journal of Econometrics 90, 63-76.
Karanasos, M. (2001) Prediction in ARMA Models with GARCH-in-Mean Effects. Journal of Time Series Analysis 22, 555-578.
Karanasos, M. & J. Kim (2003) Moments of the ARMA-EGARCH Model. The Econometrics Journal 6, 146-166.
Karanasos, M., Z. Psaradakis & M. Sola (2004) On the Autocorrelation Properties of Long-Memory GARCH Processes. Journal of Time Series Analysis 25, 265-281.
Kolmogorov, A. (1933) Sulla Determinazione Empirica di una Legge di Distribuzione. Giornale dell'Istituto Italiano degli Attuari 4, 1-11.
Ling, S. & M. McAleer (2002a) Necessary and Sufficient Moment Conditions for the GARCH(r,s) and the Asymmetric Power GARCH(r,s). Econometric Theory 18, 722-729.
Ling, S. & M. McAleer (2002b) Stationarity and the Existence of Moments of a Family of GARCH Processes. Journal of Econometrics.
Mandelbrot, B. (1963) The Variation of Certain Speculative Prices. Journal of Business 36, 394-419.
Marcucci, J. (2005) Forecasting Stock Market Volatility with Regime-Switching GARCH Models. Studies in Nonlinear Dynamics & Econometrics 9, 1-53.
Massey, F.J. Jr. (1951) The Kolmogorov-Smirnov Test for Goodness of Fit. Journal of the American Statistical Association 46, 68-78.
Milhoj, A. (1985) The Moment Structure of ARCH Processes. Scandinavian Journal of Statistics 12, 281-292.
Nelson, D.B. (1991) Conditional Heteroskedasticity in Asset Returns: A New Approach. Econometrica 59, 347-370.
Nemec, A.F.L. (1985) Conditionally Heteroskedastic Autoregressions. Technical Report no. 43, Department of Statistics, University of Washington.
Pearson, E.S. & H.O. Hartley (1972) Biometrika Tables for Statisticians, vol. 2. Cambridge University Press.
Serfling, R.J. (1980) Approximation Theorems of Mathematical Statistics. Wiley.
Smirnov, N. (1939) Sur les Écarts de la Courbe de Distribution Empirique. Matematicheskii Sbornik 48, 3-26.
Stephens, M.A. (1970) Use of the Kolmogorov-Smirnov, Cramer-von Mises and Related Statistics without Extensive Tables. Journal of the Royal Statistical Society B.
Taylor, S.J. (1986) Modeling Financial Time Series. Wiley.
Tuenter, H.J.H. (2001) An Algorithm to Determine the Parameters of SU-Curves in the Johnson System of Probability Distributions by Moment Matching. Journal of Statistical Computation and Simulation 70, 325-347.
Wallace, D.L. (1958) Asymptotic Approximations to Distributions. The Annals of Mathematical Statistics 29, 635-654.
Wong, C.M. & M.K.P. So (2003) On Conditional Moments of GARCH Models with Applications to Multiple Period Value at Risk Estimation. Statistica Sinica 13, 1015-1044.

Appendices: Conditional Moments for the Generic GJR Model
The model's s-step-ahead conditional variances are given by:

$$h_{t+s} = \omega + \left(\alpha + \lambda I^{-}_{t+s-1}\right)\varepsilon^2_{t+s-1} + \beta h_{t+s-1} \qquad (17)$$

The aim of this appendix is to calculate the conditional moments of the forward return $r_{t+s}$ and of its conditional variance $h_{t+s}$, as well as of the aggregated return and of its conditional variance, according to the model. Specifically, for $i = 1, 2, 3$ and $4$, and for $x = r$ and $h$, we compute:

$$\tilde{\mu}^{(i)}_{x,s} = E_t\!\left(x^i_{t+s}\right), \qquad \mu^{(i)}_{x,s} = E_t\!\left[\left(x_{t+s} - \tilde{\mu}^{(1)}_{x,s}\right)^i\right],$$

$$\tilde{M}^{(i)}_{x,n} = E_t\!\left[\left(\sum_{s=1}^{n} x_{t+s}\right)^{i}\right], \qquad M^{(i)}_{x,n} = E_t\!\left[\left(\sum_{s=1}^{n}\left(x_{t+s} - \tilde{\mu}^{(1)}_{x,s}\right)\right)^{i}\right],$$

and the corresponding standardized moments:

$$\tau_{x,s} = \mu^{(3)}_{x,s}\left(\mu^{(2)}_{x,s}\right)^{-3/2}, \quad T_{x,n} = M^{(3)}_{x,n}\left(M^{(2)}_{x,n}\right)^{-3/2}, \quad \kappa_{x,s} = \mu^{(4)}_{x,s}\left(\mu^{(2)}_{x,s}\right)^{-2}, \quad K_{x,n} = M^{(4)}_{x,n}\left(M^{(2)}_{x,n}\right)^{-2}.$$

We focus on calculating the un-centred conditional moments; the centred moments then follow using simple formulae. We denote by $F$ the value at zero of the distribution function of $D(0,1)$, and define

$$\varphi = \alpha + \lambda F + \beta \qquad (18)$$

and $\bar{h} = \omega(1-\varphi)^{-1}$. For both the normal and the standardized Student t, $F = 1/2$ since the two distributions are symmetric; thus for these two special cases $\varphi$ becomes:

$$\varphi = \alpha + \tfrac{1}{2}\lambda + \beta \qquad (19)$$

The following notation will also be useful, where $s, u, v, w > 0$:

$$\tilde{\mu}^{(i,j)}_{h,su} = E_t\!\left(h^i_{t+s} h^j_{t+s+u}\right), \qquad \tilde{\mu}^{(i,j,k)}_{h,suv} = E_t\!\left(h^i_{t+s} h^j_{t+s+u} h^k_{t+s+u+v}\right),$$

$$\tilde{\mu}_{h,suvw} = E_t\!\left(h_{t+s}\, h_{t+s+u}\, h_{t+s+u+v}\, h_{t+s+u+v+w}\right), \qquad \theta^{(j)}_{su} = E_t\!\left(\varepsilon_{t+s}\, h^j_{t+s+u}\right).$$

We allow D(0,
1) to take two specific functional forms, largely used in practice: the standard normal and the (standardized) Student t. When $D(0,1)$ is the standard normal distribution, the moments of odd order are all zero and the even moments are given by $\mu^{(2r)}_z = \prod_{i=1}^{r}(2i-1)$. When $D(0,1)$ is the standardized Student t distribution, the odd-order moments are again all zero (as long as the number of degrees of freedom $\nu$ exceeds the order of the moment) and the even moments are given by:

$$\mu^{(2r)}_z = (\nu-2)^{r}\,\frac{\Gamma\!\left(r+\tfrac{1}{2}\right)\Gamma\!\left(\tfrac{\nu}{2}-r\right)}{\Gamma\!\left(\tfrac{1}{2}\right)\Gamma\!\left(\tfrac{\nu}{2}\right)}, \qquad \text{if } \nu > 2r.$$

T.A.1: Conditional Moments of Forward and Aggregated Returns

(a)
First Conditional Moments of Forward and Aggregated Returns
For $s \geq 1$, writing $R_{t,n} = \sum_{s=1}^{n} r_{t+s}$ for the aggregated return and also using the tower law of expectations, we get:

$$E_t(r_{t+s}) = E_t(\mu + \varepsilon_{t+s}) = \mu + E_t\big[\underbrace{E_{t+s-1}(\varepsilon_{t+s})}_{=0}\big] = \mu$$

$$E_t(R_{t,n}) = E_t\!\left(\sum_{s=1}^{n} r_{t+s}\right) = \sum_{s=1}^{n} E_t(r_{t+s}) = n\mu$$

(b) Second Conditional Moments of Forward and Aggregated Returns
The second moment of the forward return is:

$$E_t\!\left(r^2_{t+s}\right) = E_t\!\left[(\mu + \varepsilon_{t+s})^2\right] = \mu^2 + \bar{h} + \varphi^{s-1}\!\left(h_{t+1} - \bar{h}\right),$$

where we used the expression for the first moment of the variance, $\tilde{\mu}^{(1)}_{h,s} = \mu^{(2)}_{r,s} = \bar{h} + \varphi^{s-1}(h_{t+1} - \bar{h})$, derived in the Technical Appendix T.A.2 below. The second moment of aggregated returns is:

$$E_t\!\left(R^2_{t,n}\right) = E_t\!\left[\left(\sum_{s=1}^{n} r_{t+s}\right)^2\right] = E_t\!\left(\sum_{s=1}^{n} r^2_{t+s} + 2\sum_{s=1}^{n}\sum_{u=1}^{n-s} r_{t+s}\, r_{t+s+u}\right).$$

$$E_t\!\left(\sum_{s=1}^{n} r^2_{t+s}\right) = \sum_{s=1}^{n}\left(\mu^2 + \bar{h} + \varphi^{s-1}(h_{t+1} - \bar{h})\right) = n\left(\mu^2 + \bar{h}\right) + \left(h_{t+1} - \bar{h}\right)(1-\varphi)^{-1}(1-\varphi^{n})$$

$$E_t\!\left(\sum_{s=1}^{n}\sum_{u=1}^{n-s} r_{t+s}\, r_{t+s+u}\right) = \sum_{s=1}^{n}\sum_{u=1}^{n-s} E_t\!\left[(\mu + \varepsilon_{t+s})(\mu + \varepsilon_{t+s+u})\right] = \tfrac{1}{2}\, n(n-1)\mu^2.$$

Hence, the expression for the second moment of aggregated returns becomes:

$$E_t\!\left(R^2_{t,n}\right) = n^2\mu^2 + n\bar{h} + \left(h_{t+1} - \bar{h}\right)(1-\varphi)^{-1}(1-\varphi^{n})$$

We derive separate expressions for the two special cases only when a particular expression derived for the generic case differs for one (or both) of them; when no formulae for the special cases are mentioned, the generic formulae apply.

(c) Third Conditional Moments of Forward and Aggregated Returns
For the third moment of forward returns we write:
\[
E_t\big(r_{t+s}^3\big)=E_t\big[(\mu+\varepsilon_{t+s})^3\big]
=E_t\big(\mu^3+3\mu^2\varepsilon_{t+s}+3\mu\varepsilon_{t+s}^2+\varepsilon_{t+s}^3\big)
=\mu^3+3\mu\tilde\mu^{(1)}_{h,s}+E_t\big(z_{t+s}^3h_{t+s}^{3/2}\big)
=\mu^3+3\mu\tilde\mu^{(1)}_{h,s}+\tau_z\,E_t\big(h_{t+s}^{3/2}\big).
\]
For both the normal and Student $t$ GJR (or rather, for any GJR model with a symmetric innovation distribution), the expression for the third moment of returns is given by:
\[
E_t\big(r_{t+s}^3\big)=\mu^3+3\mu\tilde\mu^{(1)}_{h,s}=\mu^3+3\mu\big(\bar h+\varphi^{s-1}(h_{t+1}-\bar h)\big).
\]
However, for the generic, skewed model we need to approximate $E_t\big(h_{t+s}^{3/2}\big)$ using a second order Taylor series expansion. In general, for a smooth function $g(X)$:
\[
g(X)\approx g(E_t(X))+g'(E_t(X))\,(X-E_t(X))+\tfrac12 g''(E_t(X))\,(X-E_t(X))^2.
\]
Taking expectations we get:
\[
E_t(g(X))\approx g(E_t(X))+\tfrac12 g''(E_t(X))\,V_t(X).
\]
Setting $g(X)=X^{3/2}$, so that $g'(X)=\tfrac32X^{1/2}$ and $g''(X)=\tfrac34X^{-1/2}$, and setting $X=h_{t+s}$ yields:
\[
E_t\big(h_{t+s}^{3/2}\big)\simeq\big(\tilde\mu^{(1)}_{h,s}\big)^{3/2}+\tfrac38\Big(\tilde\mu^{(2)}_{h,s}-\big(\tilde\mu^{(1)}_{h,s}\big)^2\Big)\big(\tilde\mu^{(1)}_{h,s}\big)^{-1/2},
\]
where the expressions for $\tilde\mu^{(1)}_{h,s}$ and $\tilde\mu^{(2)}_{h,s}$ are given in the Technical Appendix T.A.2 below.
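The accuracy of this second-order Taylor approximation is easy to illustrate: when a random variable has a small spread around its mean, $E[X^{3/2}]\approx m^{3/2}+\tfrac38 V m^{-1/2}$ (with $m=E[X]$, $V=V[X]$) is very close to the exact value. The two-point distribution below is purely illustrative:

```python
# X takes two values near a typical daily-variance level (illustrative numbers).
vals, probs = [0.9e-4, 1.1e-4], [0.5, 0.5]
m = sum(p * x for p, x in zip(probs, vals))            # E[X]
V = sum(p * (x - m)**2 for p, x in zip(probs, vals))   # Var[X]

exact = sum(p * x**1.5 for p, x in zip(probs, vals))   # E[X^{3/2}]
approx = m**1.5 + 0.375 * V * m**-0.5                  # second-order Taylor

print(abs(approx - exact) / exact < 1e-4)              # relative error is tiny
```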
We now compute the third moment of the aggregated returns:
\[
E_t\big(R_{tn}^3\big)=\sum_{s=1}^{n}E_t\big(r_{t+s}^3\big)
+3\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big[E_t\big(r_{t+s}^2r_{t+s+u}\big)+E_t\big(r_{t+s}r_{t+s+u}^2\big)\Big]
+6\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}E_t\big(r_{t+s}r_{t+s+u}r_{t+s+u+v}\big),
\]
\[
\sum_{s=1}^{n}E_t\big(r_{t+s}^3\big)=n\mu\big(\mu^2+3\bar h\big)+3\mu(1-\varphi)^{-1}(1-\varphi^{n})\big(h_{t+1}-\bar h\big)+\tau_z\sum_{s=1}^{n}E_t\big(h_{t+s}^{3/2}\big).
\]
For the first cross term,
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(r_{t+s}^2r_{t+s+u}\big)
=\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big[\big(\mu^2+2\mu\varepsilon_{t+s}+\varepsilon_{t+s}^2\big)\big(\mu+\varepsilon_{t+s+u}\big)\big]
=\sum_{s=1}^{n}\sum_{u=1}^{n-s}\big(\mu^3+\mu\tilde\mu^{(1)}_{h,s}\big)
\]
\[
=\mu\Big[\tfrac12 n(n-1)\big(\mu^2+\bar h\big)+(1-\varphi)^{-1}\big[n-(1-\varphi)^{-1}(1-\varphi^{n})\big]\big(h_{t+1}-\bar h\big)\Big].
\]
For the second cross term,
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(r_{t+s}r_{t+s+u}^2\big)
=\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big[\big(\mu+\varepsilon_{t+s}\big)\big(\mu^2+2\mu\varepsilon_{t+s+u}+\varepsilon_{t+s+u}^2\big)\big]
=\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big(\mu^3+\mu\tilde\mu^{(1)}_{h,s+u}+E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^2\big)\Big),
\]
with
\[
E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^2\big)=E_t\big(\varepsilon_{t+s}E_{t+s+u-1}\big(\varepsilon_{t+s+u}^2\big)\big)=E_t\big(\varepsilon_{t+s}h_{t+s+u}\big)=\theta^{(1)}_{su},
\]
\[
\theta^{(1)}_{su}=E_t\Big(\varepsilon_{t+s}\big(\omega+(\alpha+\lambda I^-_{t+s+u-1})\varepsilon_{t+s+u-1}^2+\beta h_{t+s+u-1}\big)\Big)
=\varphi\,\theta^{(1)}_{s(u-1)}=\varphi^{u-1}E_t\big(\varepsilon_{t+s}h_{t+s+1}\big)
=\varphi^{u-1}\underbrace{\Big(\alpha\tau_z+\lambda\int_{-\infty}^{0}z^3f(z)\,dz\Big)}_{=d_1}E_t\big(h_{t+s}^{3/2}\big),
\]
where $f$ is the density of the innovation distribution. The final expression becomes:
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(r_{t+s}r_{t+s+u}^2\big)
=\tfrac12 n(n-1)\mu\big(\mu^2+\bar h\big)
+(1-\varphi)^{-1}\mu\big[\varphi(1-\varphi)^{-1}(1-\varphi^{n})-n\varphi^{n}\big]\big(h_{t+1}-\bar h\big)
+d_1(1-\varphi)^{-1}\sum_{s=1}^{n}\big(1-\varphi^{n-s}\big)E_t\big(h_{t+s}^{3/2}\big).
\]
For the normal GJR we have $\tau_z=0$ and it can easily be shown that:
\[
\int_{-\infty}^{0}z^3f(z)\,dz=\int_{-\infty}^{0}\tfrac{1}{\sqrt{2\pi}}\,z^3\exp\big(-\tfrac{z^2}{2}\big)\,dz=-\sqrt{\tfrac{2}{\pi}}.
\]
Similarly, for the Student $t$ GJR we have $\tau_z=0$ and, for $\nu>3$:
\[
\int_{-\infty}^{0}z^3f(z)\,dz
=\int_{-\infty}^{0}z^3\,\frac{\Gamma\big(\tfrac{\nu+1}{2}\big)}{\Gamma\big(\tfrac{\nu}{2}\big)\sqrt{\pi(\nu-2)}}\Big(1+\frac{z^2}{\nu-2}\Big)^{-\frac{\nu+1}{2}}dz
=-\frac{2(\nu-2)^{3/2}}{\sqrt{\pi}\,(\nu-1)(\nu-3)}\,\frac{\Gamma\big(\tfrac{\nu+1}{2}\big)}{\Gamma\big(\tfrac{\nu}{2}\big)}.
\]
Repeatedly applying the tower law, we get that:
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}E_t\big(r_{t+s}r_{t+s+u}r_{t+s+u+v}\big)
=\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\mu^3=\tfrac16\,n(n-1)(n-2)\,\mu^3.
\]
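The coefficient $\tfrac16 n(n-1)(n-2)$ is simply the number of ordered index triples within the horizon, i.e. $\binom{n}{3}$; a quick enumeration confirms this (the horizon $n=12$ is illustrative):

```python
n = 12  # illustrative horizon
# Count the index triples (s, u, v) appearing in the triple sum.
count = sum(1
            for s in range(1, n + 1)
            for u in range(1, n - s + 1)
            for v in range(1, n - s - u + 1))
print(count == n * (n - 1) * (n - 2) // 6)  # binom(n, 3)
```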
(d) Fourth Conditional Moments of Forward and Aggregated Returns

The fourth moment of the forward returns is:
\[
E_t\big(r_{t+s}^4\big)=E_t\big(\mu^4+4\mu^3\varepsilon_{t+s}+6\mu^2\varepsilon_{t+s}^2+4\mu\varepsilon_{t+s}^3+\varepsilon_{t+s}^4\big)
=\mu^4+6\mu^2\tilde\mu^{(1)}_{h,s}+4\mu\tau_z\,E_t\big(h_{t+s}^{3/2}\big)+\kappa_z\tilde\mu^{(2)}_{h,s},
\]
where $\tilde\mu^{(1)}_{h,s}$ and $\tilde\mu^{(2)}_{h,s}$ are derived in the Technical Appendix T.A.2 below and $E_t\big(h_{t+s}^{3/2}\big)$ is given above as a function of these first two conditional moments of the forward variance. In the special case that the innovation distribution is the standard normal, $E_t\big(r_{t+s}^4\big)=\mu^4+6\mu^2\tilde\mu^{(1)}_{h,s}+3\tilde\mu^{(2)}_{h,s}$, while for the Student $t$ GJR, $E_t\big(r_{t+s}^4\big)=\mu^4+6\mu^2\tilde\mu^{(1)}_{h,s}+3\tfrac{\nu-2}{\nu-4}\tilde\mu^{(2)}_{h,s}$.

For the fourth moment of aggregated returns, we write:
\[
E_t\big(R_{tn}^4\big)=\sum_{s=1}^{n}E_t\big(r_{t+s}^4\big)
+\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big[4\Big(E_t\big(r_{t+s}^3r_{t+s+u}\big)+E_t\big(r_{t+s}r_{t+s+u}^3\big)\Big)+6\,E_t\big(r_{t+s}^2r_{t+s+u}^2\big)\Big]
\]
\[
+12\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\Big(E_t\big(r_{t+s}^2r_{t+s+u}r_{t+s+u+v}\big)+E_t\big(r_{t+s}r_{t+s+u}^2r_{t+s+u+v}\big)+E_t\big(r_{t+s}r_{t+s+u}r_{t+s+u+v}^2\big)\Big)
\]
\[
+24\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\sum_{w=1}^{n-s-u-v}E_t\big(r_{t+s}r_{t+s+u}r_{t+s+u+v}r_{t+s+u+v+w}\big),
\]
\[
\sum_{s=1}^{n}E_t\big(r_{t+s}^4\big)=n\mu^2\big(\mu^2+6\bar h\big)+6\mu^2(1-\varphi)^{-1}(1-\varphi^{n})\big(h_{t+1}-\bar h\big)+\sum_{s=1}^{n}\Big(4\mu\tau_z E_t\big(h_{t+s}^{3/2}\big)+\kappa_z\tilde\mu^{(2)}_{h,s}\Big).
\]
For the normal GJR,
\[
\sum_{s=1}^{n}E_t\big(r_{t+s}^4\big)=n\mu^2\big(\mu^2+6\bar h\big)+6\mu^2(1-\varphi)^{-1}(1-\varphi^{n})\big(h_{t+1}-\bar h\big)+3\sum_{s=1}^{n}\tilde\mu^{(2)}_{h,s},
\]
while for the Student $t$ GJR the sum above is the same expression with the factor $3$ replaced by $3\tfrac{\nu-2}{\nu-4}$.
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(r_{t+s}^3r_{t+s+u}\big)
=\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\Big[\big(\mu^3+3\big(\mu^2\varepsilon_{t+s}+\mu\varepsilon_{t+s}^2\big)+\varepsilon_{t+s}^3\big)\big(\mu+\varepsilon_{t+s+u}\big)\Big]
\]
\[
=\tfrac12 n(n-1)\mu^2\big(\mu^2+3\bar h\big)+3\mu^2(1-\varphi)^{-1}\big[n-(1-\varphi)^{-1}(1-\varphi^{n})\big]\big(h_{t+1}-\bar h\big)+\mu\tau_z\sum_{s=1}^{n}(n-s)\,E_t\big(h_{t+s}^{3/2}\big),
\]
which for the normal and Student $t$ GJR models becomes:
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(r_{t+s}^3r_{t+s+u}\big)=\tfrac12 n(n-1)\mu^2\big(\mu^2+3\bar h\big)+3\mu^2(1-\varphi)^{-1}\big[n-(1-\varphi)^{-1}(1-\varphi^{n})\big]\big(h_{t+1}-\bar h\big).
\]
Similarly,
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(r_{t+s}r_{t+s+u}^3\big)
=\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big(\mu^4+3\mu^2\tilde\mu^{(1)}_{h,s+u}+\mu\tau_z E_t\big(h_{t+s+u}^{3/2}\big)+3\mu E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^2\big)+E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^3\big)\Big).
\]
Now,
\[
E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^3\big)=E_t\Big(\varepsilon_{t+s}E_{t+s+u-1}\big(z_{t+s+u}^3h_{t+s+u}^{3/2}\big)\Big)=\tau_z\,E_t\big(\varepsilon_{t+s}h_{t+s+u}^{3/2}\big)=\tau_z\,\theta^{(3/2)}_{su},
\]
which for the normal and Student $t$ GJR models reduces to $E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^3\big)=0$, since $\tau_z=0$.
To solve for $\theta^{(3/2)}_{su}=E_t\big(\varepsilon_{t+s}h_{t+s+u}^{3/2}\big)$, we use a second order Taylor expansion of $h_{t+s+u}^{3/2}$ around $\tilde\mu^{(1)}_{h,s+u}$. (Even though $\tau_z=0$ for the normal and Student $t$ GJR, $\theta^{(3/2)}_{su}$ is generally non-zero for these models and enters the expressions of higher moments computed below; this is why we still consider the normal and Student $t$ special cases in the derivation of $\theta^{(3/2)}_{su}$.) We have:
\[
h_{t+s+u}^{3/2}\simeq\big(\tilde\mu^{(1)}_{h,s+u}\big)^{3/2}+\tfrac32\big(\tilde\mu^{(1)}_{h,s+u}\big)^{1/2}\big(h_{t+s+u}-\tilde\mu^{(1)}_{h,s+u}\big)+\tfrac38\big(\tilde\mu^{(1)}_{h,s+u}\big)^{-1/2}\big(h_{t+s+u}-\tilde\mu^{(1)}_{h,s+u}\big)^2,
\]
which yields:
\[
E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^3\big)=\tau_z\theta^{(3/2)}_{su}
\simeq\tau_z\,\tfrac38\big(\tilde\mu^{(1)}_{h,s+u}\big)^{1/2}\Big(2\,E_t\big(\varepsilon_{t+s}h_{t+s+u}\big)+\big(\tilde\mu^{(1)}_{h,s+u}\big)^{-1}E_t\big(\varepsilon_{t+s}h_{t+s+u}^2\big)\Big).
\]
Writing $\theta^{(2)}_{su}=E_t\big(\varepsilon_{t+s}h_{t+s+u}^2\big)$, we have the recursion:
\[
\theta^{(2)}_{su}=E_t\Big(\varepsilon_{t+s}\big(\omega+(\alpha+\lambda I^-_{t+s+u-1})\varepsilon_{t+s+u-1}^2+\beta h_{t+s+u-1}\big)^2\Big)=\gamma\,\theta^{(2)}_{s(u-1)}+2\omega\varphi\,\theta^{(1)}_{s(u-1)},
\]
where we have used that, conditional on the information available at time $t$, the indicator function $I^-_t$ is independent of the (contemporaneous) powers $\varepsilon^k_t$ for any natural number $k$. We also set:
\[
\gamma=\big(\alpha^2+(2\alpha\lambda+\lambda^2)F\big)\kappa_z+\beta^2+2\beta(\alpha+\lambda F)=\varphi^2+(\kappa_z-1)(\alpha+\lambda F)^2+\kappa_z\lambda^2F(1-F).
\]
If $D(0,1)$ is the standard normal, $\gamma$ becomes:
\[
\gamma=\varphi^2+2\big(\alpha+\tfrac{\lambda}{2}\big)^2+\tfrac34\lambda^2.
\]
When $D(0,1)$ is the standardized Student $t$ distribution, $\gamma$ is given by:
\[
\gamma=\varphi^2+\Big(3\tfrac{\nu-2}{\nu-4}-1\Big)\big(\alpha+\tfrac{\lambda}{2}\big)^2+\tfrac34\,\tfrac{\nu-2}{\nu-4}\,\lambda^2.
\]
Solving the recursion for $\theta^{(2)}_{su}$, we get:
\[
\theta^{(2)}_{su}=\gamma^{u-1}\theta^{(2)}_{s1}+2\omega\varphi\sum_{j=1}^{u-1}\gamma^{j-1}\theta^{(1)}_{s(u-j)},
\qquad
\sum_{j=1}^{u-1}\gamma^{j-1}\theta^{(1)}_{s(u-j)}=d_1\sum_{j=1}^{u-1}\gamma^{j-1}\varphi^{u-j-1}E_t\big(h_{t+s}^{3/2}\big)=d_1(\varphi-\gamma)^{-1}\big(\varphi^{u-1}-\gamma^{u-1}\big)E_t\big(h_{t+s}^{3/2}\big),
\]
where $d_1=\alpha\tau_z+\lambda\int_{-\infty}^{0}z^3f(z)\,dz$, as before. For the initial value we have:
\[
\theta^{(2)}_{s1}=E_t\big(\varepsilon_{t+s}h_{t+s+1}^2\big)=E_t\Big(\varepsilon_{t+s}\big(\omega+(\alpha+\lambda I^-_{t+s})\varepsilon_{t+s}^2+\beta h_{t+s}\big)^2\Big)
\]
\[
=\Big(\alpha\big(\alpha\mu^{(5)}_z+2\beta\tau_z\big)+\lambda(2\alpha+\lambda)\int_{-\infty}^{0}z^5f(z)\,dz+2\lambda\beta\int_{-\infty}^{0}z^3f(z)\,dz\Big)E_t\big(h_{t+s}^{5/2}\big)
+2\Big(\omega\alpha\tau_z+\lambda\omega\int_{-\infty}^{0}z^3f(z)\,dz\Big)E_t\big(h_{t+s}^{3/2}\big).
\]
For the normal GJR, $\tau_z=\mu^{(5)}_z=0$, $f(z)=\phi(z)$, and it can easily be shown that $\int_{-\infty}^{0}z^3\phi(z)\,dz=-\sqrt{2/\pi}$ and $\int_{-\infty}^{0}z^5\phi(z)\,dz=-4\sqrt{2/\pi}$. Similarly, for the Student $t$ GJR we again have $\tau_z=\mu^{(5)}_z=0$ and $f(z)=f_\nu(z)$; after some algebraic manipulation we get, for $\nu>5$:
\[
\int_{-\infty}^{0}z^5f_\nu(z)\,dz=-\frac{8(\nu-2)^{5/2}}{\sqrt{\pi}\,(\nu-1)(\nu-3)(\nu-5)}\,\frac{\Gamma\big(\tfrac{\nu+1}{2}\big)}{\Gamma\big(\tfrac{\nu}{2}\big)}.
\]
Thus, writing $d_2=\alpha\big(\alpha\mu^{(5)}_z+2\beta\tau_z\big)+\lambda(2\alpha+\lambda)\int_{-\infty}^{0}z^5f(z)\,dz+2\beta\lambda\int_{-\infty}^{0}z^3f(z)\,dz$, the final expression for $\theta^{(3/2)}_{su}$ becomes:
\[
\theta^{(3/2)}_{su}\simeq\tfrac34\big(\tilde\mu^{(1)}_{h,s+u}\big)^{1/2}d_1\varphi^{u-1}E_t\big(h_{t+s}^{3/2}\big)
+\tfrac38\big(\tilde\mu^{(1)}_{h,s+u}\big)^{-1/2}\Big(d_2\gamma^{u-1}E_t\big(h_{t+s}^{5/2}\big)+2\omega d_1\big(\varphi(\varphi-\gamma)^{-1}(\varphi^{u-1}-\gamma^{u-1})+\gamma^{u-1}\big)E_t\big(h_{t+s}^{3/2}\big)\Big).
\]
(It can be shown, using the Cauchy--Buniakowsky--Schwarz inequality, that the kurtosis of any distribution is always greater than or equal to 1, hence $\kappa_z\geq1$ and $\gamma>0$.) Here $E_t\big(h_{t+s}^{5/2}\big)$ is given approximately by a second order Taylor expansion of $h_{t+s}^{5/2}$ around $E_t(h_{t+s})$:
\[
E_t\big(h_{t+s}^{5/2}\big)\simeq\big(\tilde\mu^{(1)}_{h,s}\big)^{5/2}+\tfrac{15}{8}\big(\tilde\mu^{(1)}_{h,s}\big)^{1/2}\Big(\tilde\mu^{(2)}_{h,s}-\big(\tilde\mu^{(1)}_{h,s}\big)^{2}\Big).
\]
Next,
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(r_{t+s}^2r_{t+s+u}^2\big)=\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big[(\mu+\varepsilon_{t+s})^2(\mu+\varepsilon_{t+s+u})^2\big]
=\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big[\mu^4+\mu^2\big(\tilde\mu^{(1)}_{h,s}+\tilde\mu^{(1)}_{h,s+u}\big)+2\mu\, E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^2\big)+E_t\big(\varepsilon_{t+s}^2\varepsilon_{t+s+u}^2\big)\Big],
\]
with
\[
E_t\big(\varepsilon_{t+s}^2\varepsilon_{t+s+u}^2\big)=E_t\big(\varepsilon_{t+s}^2h_{t+s+u}\big)=\omega\tilde\mu^{(1)}_{h,s}+\varphi\,E_t\big(\varepsilon_{t+s}^2h_{t+s+u-1}\big)
=\omega\tilde\mu^{(1)}_{h,s}\big((1-\varphi)^{-1}(1-\varphi^{u})-\varphi^{u-1}\big)+\varphi^{u-1}E_t\big(\varepsilon_{t+s}^2h_{t+s+1}\big),
\]
and
\[
E_t\big(\varepsilon_{t+s}^2h_{t+s+1}\big)=E_t\Big(\varepsilon_{t+s}^2\big(\omega+(\alpha+\lambda I^-_{t+s})\varepsilon_{t+s}^2+\beta h_{t+s}\big)\Big)=\omega\tilde\mu^{(1)}_{h,s}+\kappa_z\big(\alpha+\lambda F+\kappa_z^{-1}\beta\big)\tilde\mu^{(2)}_{h,s}.
\]
The final expression for $E_t\big(\varepsilon_{t+s}^2\varepsilon_{t+s+u}^2\big)$ becomes:
\[
E_t\big(\varepsilon_{t+s}^2\varepsilon_{t+s+u}^2\big)=\bar h\big(1-\varphi^{u}\big)\tilde\mu^{(1)}_{h,s}+\varphi^{u-1}\kappa_z\big(\alpha+\lambda F+\kappa_z^{-1}\beta\big)\tilde\mu^{(2)}_{h,s}.
\]
The expressions for the normal and Student $t$ GJR are obtained by setting $\kappa_z=3$ and $\kappa_z=3\tfrac{\nu-2}{\nu-4}$,
respectively, and $F=\tfrac12$ in the expression above. For the triple products, the tower law gives:
\[
E_t\big(r_{t+s}^2r_{t+s+u}r_{t+s+u+v}\big)=E_t\big(r_{t+s}^2r_{t+s+u}E_{t+s+u+v-1}(r_{t+s+u+v})\big)=\mu\,E_t\big(r_{t+s}^2r_{t+s+u}\big)=\mu^2E_t\big(r_{t+s}^2\big)=\mu^2\big(\mu^2+\tilde\mu^{(1)}_{h,s}\big),
\]
\[
E_t\big(r_{t+s}r_{t+s+u}^2r_{t+s+u+v}\big)=\mu\,E_t\big((\mu+\varepsilon_{t+s})\big(\mu^2+h_{t+s+u}\big)\big)=\mu^4+\mu^2\tilde\mu^{(1)}_{h,s+u}+\mu\,\theta^{(1)}_{su},
\]
\[
E_t\big(r_{t+s}r_{t+s+u}r_{t+s+u+v}^2\big)=E_t\Big(r_{t+s}r_{t+s+u}E_{t+s+u+v-1}\big(\mu^2+2\mu\varepsilon_{t+s+u+v}+\varepsilon_{t+s+u+v}^2\big)\Big)=E_t\big(r_{t+s}r_{t+s+u}\big(\mu^2+h_{t+s+u+v}\big)\big)
\]
\[
=\mu^4+\mu^2\tilde\mu^{(1)}_{h,s+u+v}+\mu\,\theta^{(1)}_{s(u+v)}+\mu\,E_t\big(\varepsilon_{t+s+u}h_{t+s+u+v}\big)+E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}h_{t+s+u+v}\big).
\]
Setting $t'=t+s$, the tower law gives $E_t\big(\varepsilon_{t+s+u}h_{t+s+u+v}\big)=E_t\big(E_{t'}\big(\varepsilon_{t'+u}h_{t'+u+v}\big)\big)$. We showed that $E_t(\varepsilon_{t+s}h_{t+s+u})=\theta^{(1)}_{su}=d_1\varphi^{u-1}E_t\big(h_{t+s}^{3/2}\big)$; hence $E_{t'}\big(\varepsilon_{t'+u}h_{t'+u+v}\big)=d_1\varphi^{v-1}E_{t'}\big(h_{t'+u}^{3/2}\big)$ and thus:
\[
E_t\big(\varepsilon_{t+s+u}h_{t+s+u+v}\big)=d_1\varphi^{v-1}E_t\big(h_{t+s+u}^{3/2}\big),
\qquad
E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}h_{t+s+u+v}\big)=d_1\varphi^{v-1}E_t\big(\varepsilon_{t+s}h_{t+s+u}^{3/2}\big)=d_1\varphi^{v-1}\theta^{(3/2)}_{su}.
\]
Finally, repeatedly applying the tower law, we get that $E_t\big(r_{t+s}r_{t+s+u}r_{t+s+u+v}r_{t+s+u+v+w}\big)=\mu^4$.

(e) Centred Conditional Moments of Forward and Aggregated Returns
The second conditional centred moment of the forward returns (i.e. the conditional variance of the forward return) is:
\[
\mu^{(2)}_{r,s}=E_t\big(\varepsilon_{t+s}^2\big)=E_t(h_{t+s})=\tilde\mu^{(1)}_{h,s}=\bar h+\varphi^{s-1}\big(h_{t+1}-\bar h\big),
\]
and for the aggregated returns it is:
\[
M^{(2)}_{r,n}=E_t\Big[\Big(\sum_{s=1}^{n}\varepsilon_{t+s}\Big)^2\Big]
=\sum_{s=1}^{n}\tilde\mu^{(1)}_{h,s}+2\sum_{s=1}^{n}\sum_{u=1}^{n-s}\underbrace{E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}\big)}_{=0}
=\sum_{s=1}^{n}\tilde\mu^{(1)}_{h,s}.
\]
Hence, $M^{(2)}_{r,n}=n\bar h+(1-\varphi)^{-1}(1-\varphi^{n})\big(h_{t+1}-\bar h\big)$. The third conditional centred moment of the forward returns is:
\[
\mu^{(3)}_{r,s}=E_t\big(\varepsilon_{t+s}^3\big)=\tau_z\,E_t\big(h_{t+s}^{3/2}\big)\simeq\tau_z\Big(\big(\tilde\mu^{(1)}_{h,s}\big)^{3/2}+\tfrac38\,\mu^{(2)}_{h,s}\big(\tilde\mu^{(1)}_{h,s}\big)^{-1/2}\Big),
\]
where $\mu^{(2)}_{h,s}=\tilde\mu^{(2)}_{h,s}-\big(\tilde\mu^{(1)}_{h,s}\big)^2$ is the centred second moment of the forward variance, so in the special cases when the innovation distribution is either the standard normal or the standardized Student $t$, $\mu^{(3)}_{r,s}=0$. The third conditional centred moment of the aggregated returns is:
\[
M^{(3)}_{r,n}=E_t\Big[\Big(\sum_{s=1}^{n}\varepsilon_{t+s}\Big)^3\Big]
=\sum_{s=1}^{n}E_t\big(\varepsilon_{t+s}^3\big)
+3\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big[E_t\big(\varepsilon_{t+s}^2\varepsilon_{t+s+u}\big)+E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^2\big)\Big]
+6\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}\varepsilon_{t+s+u+v}\big)
\]
\[
=\tau_z\sum_{s=1}^{n}E_t\big(h_{t+s}^{3/2}\big)+3\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^2\big)
\simeq\tau_z\sum_{s=1}^{n}\Big(\big(\tilde\mu^{(1)}_{h,s}\big)^{3/2}+\tfrac38\,\mu^{(2)}_{h,s}\big(\tilde\mu^{(1)}_{h,s}\big)^{-1/2}\Big)+3\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^2\big);
\]
for the normal and Student $t$ GJR models, this expression simplifies to:
\[
M^{(3)}_{r,n}=3\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^2\big).
\]
The fourth conditional centred moment of the forward returns is $\mu^{(4)}_{r,s}=E_t\big(\varepsilon_{t+s}^4\big)=\kappa_z\tilde\mu^{(2)}_{h,s}$, so in the special case that the innovation distribution is the standard normal, $\mu^{(4)}_{r,s}=3\tilde\mu^{(2)}_{h,s}$, while for the Student $t$ GJR we get $\mu^{(4)}_{r,s}=3\tfrac{\nu-2}{\nu-4}\tilde\mu^{(2)}_{h,s}$. The fourth conditional centred moment of the aggregated returns is:
\[
M^{(4)}_{r,n}=E_t\Big[\Big(\sum_{s=1}^{n}\varepsilon_{t+s}\Big)^4\Big]
=\sum_{s=1}^{n}E_t\big(\varepsilon_{t+s}^4\big)
+\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big[4\Big(E_t\big(\varepsilon_{t+s}^3\varepsilon_{t+s+u}\big)+E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^3\big)\Big)+6\,E_t\big(\varepsilon_{t+s}^2\varepsilon_{t+s+u}^2\big)\Big]
\]
\[
+12\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\Big[E_t\big(\varepsilon_{t+s}^2\varepsilon_{t+s+u}\varepsilon_{t+s+u+v}\big)+E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^2\varepsilon_{t+s+u+v}\big)+E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}\varepsilon_{t+s+u+v}^2\big)\Big]
\]
\[
+24\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\sum_{w=1}^{n-s-u-v}E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}\varepsilon_{t+s+u+v}\varepsilon_{t+s+u+v+w}\big)
\]
\[
=\kappa_z\sum_{s=1}^{n}\tilde\mu^{(2)}_{h,s}+\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big(4\,E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}^3\big)+6\,E_t\big(\varepsilon_{t+s}^2\varepsilon_{t+s+u}^2\big)\Big)+12\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}\varepsilon_{t+s+u+v}^2\big).
\]
In the special case that the conditional distribution is the standard normal,
\[
M^{(4)}_{r,n}=3\sum_{s=1}^{n}\tilde\mu^{(2)}_{h,s}+6\sum_{s=1}^{n}\sum_{u=1}^{n-s}E_t\big(\varepsilon_{t+s}^2\varepsilon_{t+s+u}^2\big)+12\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}E_t\big(\varepsilon_{t+s}\varepsilon_{t+s+u}\varepsilon_{t+s+u+v}^2\big),
\]
while for a Student $t$ GJR we obtain the same expression with the first term multiplied by $\tfrac{\nu-2}{\nu-4}$.

(f) Standardized Conditional Moments of Forward and Aggregated Returns
The skewness of the forward returns is:
\[
\tau_{r,s}=\mu^{(3)}_{r,s}\big(\mu^{(2)}_{r,s}\big)^{-3/2}=\tau_z\,E_t\big(h_{t+s}^{3/2}\big)\big(\tilde\mu^{(1)}_{h,s}\big)^{-3/2}
\simeq\tau_z\Big(\big(\tilde\mu^{(1)}_{h,s}\big)^{3/2}+\tfrac38\,\mu^{(2)}_{h,s}\big(\tilde\mu^{(1)}_{h,s}\big)^{-1/2}\Big)\big(\tilde\mu^{(1)}_{h,s}\big)^{-3/2}
=\tau_z\Big(1+\tfrac38\,\mu^{(2)}_{h,s}\big(\tilde\mu^{(1)}_{h,s}\big)^{-2}\Big).
\]
It can easily be observed that if we used only a first order Taylor series expansion we would obtain $\tau_{r,s}\approx\tau_z$, and that $\tau_{r,s}=0$ for both the normal and Student $t$ GJR. The kurtosis of the forward returns is:
\[
\kappa_{r,s}=\mu^{(4)}_{r,s}\big(\mu^{(2)}_{r,s}\big)^{-2}=\kappa_z\tilde\mu^{(2)}_{h,s}\big(\tilde\mu^{(1)}_{h,s}\big)^{-2},
\]
so in the special case that the conditional distribution is the standard normal, $\kappa_{r,s}=3\tilde\mu^{(2)}_{h,s}\big(\tilde\mu^{(1)}_{h,s}\big)^{-2}$, while for the Student $t$ GJR we obtain $\kappa_{r,s}=3\tfrac{\nu-2}{\nu-4}\tilde\mu^{(2)}_{h,s}\big(\tilde\mu^{(1)}_{h,s}\big)^{-2}$. Finally, the skewness and kurtosis of the aggregated returns are:
\[
T_{r,n}=M^{(3)}_{r,n}\big(M^{(2)}_{r,n}\big)^{-3/2}
\qquad\text{and}\qquad
K_{r,n}=M^{(4)}_{r,n}\big(M^{(2)}_{r,n}\big)^{-2}.
\]
T.A.2: Conditional Moments of Forward and Aggregated Variances

(a) First Conditional Moments of Forward and Aggregated Variances

Applying the conditional expectation operator to (17), the first un-centred conditional moment of the forward variance may be written:
\[
\tilde\mu^{(1)}_{h,s}=\mu^{(2)}_{r,s}=\bar h+\varphi^{s-1}\big(h_{t+1}-\bar h\big).
\]
Similarly, the first un-centred conditional moment of the aggregated variance becomes:
\[
\tilde M^{(1)}_{h,n}=\sum_{s=1}^{n}\tilde\mu^{(1)}_{h,s}=n\bar h+(1-\varphi)^{-1}(1-\varphi^{n})\big(h_{t+1}-\bar h\big),
\]
or equivalently, in recursive form: $\tilde M^{(1)}_{h,n}=\tilde M^{(1)}_{h,n-1}+\bar h+\varphi^{n-1}\big(h_{t+1}-\bar h\big)$.
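The closed form $\tilde\mu^{(1)}_{h,s}=\bar h+\varphi^{s-1}(h_{t+1}-\bar h)$ is the solution of the one-step recursion $\tilde\mu^{(1)}_{h,s}=\omega+\varphi\,\tilde\mu^{(1)}_{h,s-1}$ started at $h_{t+1}$, as the sketch below verifies (parameter values are illustrative assumptions):

```python
omega, alpha, lam, beta, F = 1e-6, 0.05, 0.08, 0.85, 0.5
phi = alpha + lam*F + beta          # persistence
hbar = omega / (1 - phi)            # unconditional variance
h1 = 2e-4                           # h_{t+1}

recursive = [h1]
for s in range(2, 11):
    recursive.append(omega + phi * recursive[-1])
closed = [hbar + phi**(s-1) * (h1 - hbar) for s in range(1, 11)]

print(max(abs(a - b) for a, b in zip(recursive, closed)) < 1e-12)
```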
(b) Second Conditional Moments of Forward and Aggregated Variances

The second moment of the forward variance is:
\[
\tilde\mu^{(2)}_{h,s}=E_t\Big[\big(\omega+(\alpha+\lambda I^-_{t+s-1})\varepsilon_{t+s-1}^2+\beta h_{t+s-1}\big)^2\Big]
=\omega^2+2\omega\varphi\,\tilde\mu^{(1)}_{h,s-1}+\underbrace{\big(\varphi^2+(\kappa_z-1)(\alpha+\lambda F)^2+\kappa_z\lambda^2F(1-F)\big)}_{=\gamma}\,\tilde\mu^{(2)}_{h,s-1}
\]
\[
=\sum_{i=1}^{s-1}\gamma^{i-1}\Big(\omega^2+2\omega\varphi\big(\bar h+\varphi^{s-i-1}(h_{t+1}-\bar h)\big)\Big)+\gamma^{s-1}h_{t+1}^2.
\]
When $\gamma=1$, the expression for the second moment of the forward variance becomes:
\[
\tilde\mu^{(2)}_{h,s}=(s-1)\big(\omega^2+2\omega\varphi\bar h\big)+2\varphi\bar h\big(1-\varphi^{s-1}\big)\big(h_{t+1}-\bar h\big)+h_{t+1}^2.
\]
For $\gamma\neq1$ (and $\gamma\neq\varphi$), we introduce the following additional notation:
\[
c_1=\big(\omega^2+2\omega\varphi\bar h\big)(1-\gamma)^{-1},\qquad
c_2=2\omega\varphi\big(h_{t+1}-\bar h\big)(\varphi-\gamma)^{-1},\qquad
c_3=c_1+c_2.
\]
Now the expression for the second moment of variance may be written:
\[
\tilde\mu^{(2)}_{h,s}=c_1+\big(h_{t+1}^2-c_3\big)\gamma^{s-1}+c_2\varphi^{s-1}.
\]
The second moment of the aggregated variance is given by:
\[
\tilde M^{(2)}_{h,n}=E_t\Big[\Big(\sum_{s=1}^{n}h_{t+s}\Big)^2\Big]=\sum_{s=1}^{n}\tilde\mu^{(2)}_{h,s}+2\sum_{s=1}^{n}\sum_{u=1}^{n-s}\tilde\mu^{(1,1)}_{h,su},
\]
where $\tilde\mu^{(1,1)}_{h,su}=E_t(h_{t+s}h_{t+s+u})$ satisfies:
\[
\tilde\mu^{(1,1)}_{h,su}=E_t\Big(h_{t+s}\big(\omega+(\alpha+\lambda I^-_{t+s+u-1})\varepsilon_{t+s+u-1}^2+\beta h_{t+s+u-1}\big)\Big)
=\omega\tilde\mu^{(1)}_{h,s}+\varphi\,\tilde\mu^{(1,1)}_{h,s(u-1)}
=\bar h\,\tilde\mu^{(1)}_{h,s}+\varphi^{u}\Big(\tilde\mu^{(2)}_{h,s}-\bar h\,\tilde\mu^{(1)}_{h,s}\Big),
\]
hence
\[
\tilde M^{(2)}_{h,n}=\sum_{s=1}^{n}\Big(\tilde\mu^{(2)}_{h,s}+2\bar h(n-s)\,\tilde\mu^{(1)}_{h,s}\Big)+2\sum_{s=1}^{n}\sum_{u=1}^{n-s}\varphi^{u}\Big(\tilde\mu^{(2)}_{h,s}-\bar h\,\tilde\mu^{(1)}_{h,s}\Big). \quad (20)
\]
Consider the first sum in (20). For $\gamma\neq1$ and $\gamma\neq\varphi$, we have:
\[
\sum_{s=1}^{n}\tilde\mu^{(2)}_{h,s}=nc_1+\big(h_{t+1}^2-c_3\big)(1-\gamma)^{-1}(1-\gamma^{n})+c_2(1-\varphi)^{-1}(1-\varphi^{n}), \quad (21)
\]
and
\[
\sum_{s=1}^{n}(n-s)\,\tilde\mu^{(1)}_{h,s}=\tfrac12 n(n-1)\bar h+(1-\varphi)^{-1}\big[n-(1-\varphi)^{-1}(1-\varphi^{n})\big]\big(h_{t+1}-\bar h\big). \quad (22)
\]
Next we evaluate the double sum term in (20).
We have:
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}\varphi^{u}\tilde\mu^{(2)}_{h,s}=\varphi(1-\varphi)^{-1}\sum_{s=1}^{n}\tilde\mu^{(2)}_{h,s}\big(1-\varphi^{n-s}\big)
\]
\[
=\varphi(1-\varphi)^{-1}\Big\{n\big(c_1-c_2\varphi^{n-1}\big)+\big(c_2-c_1\big)(1-\varphi)^{-1}(1-\varphi^{n})+\big(h_{t+1}^2-c_3\big)\big[(1-\gamma)^{-1}(1-\gamma^{n})-(\varphi-\gamma)^{-1}(\varphi^{n}-\gamma^{n})\big]\Big\}
\]
and
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}\varphi^{u}\tilde\mu^{(1)}_{h,s}=\varphi(1-\varphi)^{-1}\sum_{s=1}^{n}\big(1-\varphi^{n-s}\big)\big(\bar h+\varphi^{s-1}(h_{t+1}-\bar h)\big)
=\varphi(1-\varphi)^{-1}\Big(n\big(\bar h-\varphi^{n-1}(h_{t+1}-\bar h)\big)+(1-\varphi)^{-1}\big(h_{t+1}-2\bar h\big)(1-\varphi^{n})\Big). \quad (23)
\]
Thus the final expression for $\tilde M^{(2)}_{h,n}$ is:
\[
\tilde M^{(2)}_{h,n}=nc_1+\big(h_{t+1}^2-c_3\big)(1-\gamma)^{-1}(1-\gamma^{n})+c_2(1-\varphi)^{-1}(1-\varphi^{n})
+2\bar h\Big(\tfrac12 n(n-1)\bar h+(1-\varphi)^{-1}\big[n-(1-\varphi)^{-1}(1-\varphi^{n})\big]\big(h_{t+1}-\bar h\big)\Big)
\]
\[
+2\varphi(1-\varphi)^{-1}\Big\{n\big(c_1-c_2\varphi^{n-1}\big)+\big(c_2-c_1\big)(1-\varphi)^{-1}(1-\varphi^{n})+\big(h_{t+1}^2-c_3\big)\big[(1-\gamma)^{-1}(1-\gamma^{n})-(\varphi-\gamma)^{-1}(\varphi^{n}-\gamma^{n})\big]\Big\}
\]
\[
-2\bar h\varphi(1-\varphi)^{-1}\Big(n\big(\bar h-\varphi^{n-1}(h_{t+1}-\bar h)\big)+(1-\varphi)^{-1}\big(h_{t+1}-2\bar h\big)(1-\varphi^{n})\Big). \quad (24)
\]
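The closed form $\tilde\mu^{(2)}_{h,s}=c_1+(h_{t+1}^2-c_3)\gamma^{s-1}+c_2\varphi^{s-1}$ can be verified against the recursion $\tilde\mu^{(2)}_{h,s}=\omega^2+2\omega\varphi\,\tilde\mu^{(1)}_{h,s-1}+\gamma\,\tilde\mu^{(2)}_{h,s-1}$; the sketch below uses assumed normal-GJR parameter values for which $\gamma<1$:

```python
omega, alpha, lam, beta, F, kz = 1e-6, 0.05, 0.08, 0.85, 0.5, 3.0
phi = alpha + lam*F + beta
gamma_ = phi**2 + (kz - 1)*(alpha + lam*F)**2 + kz*lam**2*F*(1 - F)
hbar = omega / (1 - phi)
h1 = 2e-4

c1 = (omega**2 + 2*omega*phi*hbar) / (1 - gamma_)
c2 = 2*omega*phi*(h1 - hbar) / (phi - gamma_)
c3 = c1 + c2

m1, m2 = h1, h1**2                    # s = 1 values
for s in range(2, 11):
    m2 = omega**2 + 2*omega*phi*m1 + gamma_*m2
    m1 = omega + phi*m1
closed = c1 + (h1**2 - c3)*gamma_**9 + c2*phi**9   # closed form at s = 10

print(abs(m2 - closed) < 1e-12)
```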
(c) Third Conditional Moments of Forward and Aggregated Variances

We now consider the third moment of the forward variance:
\[
\tilde\mu^{(3)}_{h,s}=E_t\Big[\big(\omega+(\alpha+\lambda I^-_{t+s-1})\varepsilon_{t+s-1}^2+\beta h_{t+s-1}\big)^3\Big]
=\omega^3+3\omega^2\varphi\,\tilde\mu^{(1)}_{h,s-1}+3\omega\underbrace{\big[\kappa_z\big(\alpha^2+\lambda(2\alpha+\lambda)F\big)+\beta^2+2\beta(\alpha+\lambda F)\big]}_{=\gamma}\,\tilde\mu^{(2)}_{h,s-1}
\]
\[
+\big[\mu^{(6)}_z\big(\alpha^3+3\alpha\lambda(\alpha+\lambda)F+\lambda^3F\big)+3\beta\kappa_z\big(\alpha^2+\lambda(2\alpha+\lambda)F\big)+3\beta^2(\alpha+\lambda F)+\beta^3\big]\,\tilde\mu^{(3)}_{h,s-1}.
\]
Hence
\[
\tilde\mu^{(3)}_{h,s}=\sum_{i=0}^{s-2}\delta^{i}\Big(\omega^3+3\omega^2\varphi\,\tilde\mu^{(1)}_{h,s-i-1}+3\omega\gamma\,\tilde\mu^{(2)}_{h,s-i-1}\Big)+\delta^{s-1}h_{t+1}^3,
\]
where $\delta=\mu^{(6)}_z\big(\alpha^3+3\alpha\lambda(\alpha+\lambda)F+\lambda^3F\big)+3\beta\gamma-\beta^2\big(2\beta+3(\alpha+\lambda F)\big)$. For the special case when innovations are normally distributed, $F=\tfrac12$ and $\mu^{(6)}_z=15$. Similarly, when innovations are Student $t$ distributed, $F=\tfrac12$ still and $\mu^{(6)}_z=15\tfrac{(\nu-2)^2}{(\nu-4)(\nu-6)}$. For $\delta\neq1$ (and $\delta\neq\gamma$, $\delta\neq\varphi$, $\gamma\neq1$), we get that:
\[
\tilde\mu^{(3)}_{h,s}=\omega\big(\omega^2+3\omega\varphi\bar h+3\gamma c_1\big)(1-\delta)^{-1}
+\big(3\omega^2\varphi(h_{t+1}-\bar h)+3\omega\gamma c_2\big)(\varphi-\delta)^{-1}\varphi^{s-1}
+3\omega\gamma\big(h_{t+1}^2-c_3\big)(\gamma-\delta)^{-1}\gamma^{s-1}+c_4\delta^{s-1},
\]
where
\[
c_4=h_{t+1}^3-\omega\big(\omega^2+3\omega\varphi\bar h+3\gamma c_1\big)(1-\delta)^{-1}-\big(3\omega^2\varphi(h_{t+1}-\bar h)+3\omega\gamma c_2\big)(\varphi-\delta)^{-1}-3\omega\gamma\big(h_{t+1}^2-c_3\big)(\gamma-\delta)^{-1}.
\]
For $\delta=1$ and $\gamma\neq1$, the expression for $\tilde\mu^{(3)}_{h,s}$ becomes:
\[
\tilde\mu^{(3)}_{h,s}=h_{t+1}^3+(s-1)\,\omega\big(\omega^2+3\bar h\varphi\omega+3\gamma c_1\big)
+3\omega\big[\omega\varphi\big(h_{t+1}-\bar h\big)+\gamma c_2\big](1-\varphi)^{-1}\big(1-\varphi^{s-1}\big)
+3\omega\gamma\big(h_{t+1}^2-c_3\big)(1-\gamma)^{-1}\big(1-\gamma^{s-1}\big).
\]
For $\delta\neq1$ and $\gamma=1$, we get that:
\[
\tilde\mu^{(3)}_{h,s}=\omega^3(1-\delta)^{-1}\big(1-\delta^{s-1}\big)+3\omega^2\varphi\sum_{i=0}^{s-2}\delta^{i}\big(\bar h+\varphi^{s-i-2}(h_{t+1}-\bar h)\big)
\]
\[
+3\omega\gamma\sum_{i=0}^{s-2}\delta^{i}\Big((s-i-2)\big(\omega^2+2\omega\varphi\bar h\big)+2\varphi\bar h\big(1-\varphi^{s-i-2}\big)\big(h_{t+1}-\bar h\big)+h_{t+1}^2\Big)+\delta^{s-1}h_{t+1}^3
\]
\[
=\Big(\omega^3+3\omega^2\varphi\bar h+6\omega\gamma\varphi\bar h\big(h_{t+1}-\bar h\big)+3\omega\gamma h_{t+1}^2\Big)(1-\delta)^{-1}\big(1-\delta^{s-1}\big)
+\Big(3\omega^2\varphi\big(h_{t+1}-\bar h\big)-6\omega\gamma\varphi\bar h\big(h_{t+1}-\bar h\big)\Big)(\varphi-\delta)^{-1}\big(\varphi^{s-1}-\delta^{s-1}\big)
\]
\[
+3\omega\gamma\big(\omega^2+2\omega\varphi\bar h\big)(1-\delta)^{-1}\Big((s-1)-(1-\delta)^{-1}\big(1-\delta^{s-1}\big)\Big)+\delta^{s-1}h_{t+1}^3.
\]
For the third moment of aggregated variance we write:
\[
\tilde M^{(3)}_{h,n}=E_t\Big[\Big(\sum_{s=1}^{n}h_{t+s}\Big)^3\Big]
=\sum_{s=1}^{n}\tilde\mu^{(3)}_{h,s}+3\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big(\tilde\mu^{(2,1)}_{h,su}+\tilde\mu^{(1,2)}_{h,su}\Big)+6\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\tilde\mu^{(1,1,1)}_{h,suv},
\]
where $\tilde\mu^{(2,1)}_{h,su}=E_t\big(h_{t+s}^2h_{t+s+u}\big)$, $\tilde\mu^{(1,2)}_{h,su}=E_t\big(h_{t+s}h_{t+s+u}^2\big)$ and $\tilde\mu^{(1,1,1)}_{h,suv}=E_t\big(h_{t+s}h_{t+s+u}h_{t+s+u+v}\big)$. First,
\[
\tilde\mu^{(2,1)}_{h,su}=E_t\Big(h_{t+s}^2\big(\omega+(\alpha+\lambda I^-_{t+s+u-1})\varepsilon_{t+s+u-1}^2+\beta h_{t+s+u-1}\big)\Big)=\omega\tilde\mu^{(2)}_{h,s}+\varphi\,\tilde\mu^{(2,1)}_{h,s(u-1)}
=\bar h\,\tilde\mu^{(2)}_{h,s}+\varphi^{u}\Big(\tilde\mu^{(3)}_{h,s}-\bar h\,\tilde\mu^{(2)}_{h,s}\Big). \quad (25)
\]
Second,
\[
\tilde\mu^{(1,2)}_{h,su}=E_t\Big(h_{t+s}\big(\omega+(\alpha+\lambda I^-_{t+s+u-1})\varepsilon_{t+s+u-1}^2+\beta h_{t+s+u-1}\big)^2\Big)
=\omega^2\tilde\mu^{(1)}_{h,s}+2\omega\varphi\,\tilde\mu^{(1,1)}_{h,s(u-1)}+\gamma\,\tilde\mu^{(1,2)}_{h,s(u-1)}
\]
\[
=\sum_{j=0}^{u-1}\gamma^{j}\Big(\omega^2\tilde\mu^{(1)}_{h,s}+2\omega\varphi\,\tilde\mu^{(1,1)}_{h,s(u-j-1)}\Big)+\gamma^{u}\tilde\mu^{(3)}_{h,s}. \quad (26)
\]
For the triple product, setting $t'=t+s$, the tower law gives $\tilde\mu^{(1,1,1)}_{h,suv}=E_t\big(h_{t+s}E_{t+s}(h_{t+s+u}h_{t+s+u+v})\big)=E_t\big(h_{t+s}E_{t'}(h_{t'+u}h_{t'+u+v})\big)$. We have already shown that $\tilde\mu^{(1,1)}_{h,su}=E_t(h_{t+s}h_{t+s+u})=\bar h\tilde\mu^{(1)}_{h,s}+\varphi^{u}\big(\tilde\mu^{(2)}_{h,s}-\bar h\tilde\mu^{(1)}_{h,s}\big)$, thus:
\[
E_{t'}\big(h_{t'+u}h_{t'+u+v}\big)=\bar h\,\tilde{\tilde\mu}^{(1)}_{h,u}+\varphi^{v}\Big(\tilde{\tilde\mu}^{(2)}_{h,u}-\bar h\,\tilde{\tilde\mu}^{(1)}_{h,u}\Big),
\]
where $\tilde{\tilde\mu}^{(1)}_{h,u}=E_{t'}(h_{t'+u})$ and $\tilde{\tilde\mu}^{(2)}_{h,u}=E_{t'}\big(h_{t'+u}^2\big)$. Since, as a function of $h_{t+1}$,
\[
\bar h\tilde\mu^{(1)}_{h,s}+\varphi^{u}\big(\tilde\mu^{(2)}_{h,s}-\bar h\tilde\mu^{(1)}_{h,s}\big)
=\varphi^{u}\gamma^{s-1}h_{t+1}^2+\Big(\bar h\varphi^{s-1}\big(1-\varphi^{u}\big)+2\omega\varphi^{u+1}(\varphi-\gamma)^{-1}\big(\varphi^{s-1}-\gamma^{s-1}\big)\Big)h_{t+1}
\]
\[
+\bar h^2\big(1-\varphi^{s-1}\big)+\varphi^{u}\Big(c_1+\big(2\omega\varphi\bar h(\varphi-\gamma)^{-1}-c_1\big)\gamma^{s-1}-2\omega\varphi^{s}\bar h(\varphi-\gamma)^{-1}-\bar h^2\big(1-\varphi^{s-1}\big)\Big),
\]
we get:
\[
E_{t'}\big(h_{t'+u}h_{t'+u+v}\big)=\varphi^{v}\gamma^{u-1}h_{t+s+1}^2+\Big(\bar h\varphi^{u-1}\big(1-\varphi^{v}\big)+2\omega\varphi^{v+1}(\varphi-\gamma)^{-1}\big(\varphi^{u-1}-\gamma^{u-1}\big)\Big)h_{t+s+1}
\]
\[
+\bar h^2\big(1-\varphi^{u-1}\big)+\varphi^{v}\Big(c_1+\big(2\omega\varphi\bar h(\varphi-\gamma)^{-1}-c_1\big)\gamma^{u-1}-2\omega\varphi^{u}\bar h(\varphi-\gamma)^{-1}-\bar h^2\big(1-\varphi^{u-1}\big)\Big),
\]
and the expression for $\tilde\mu^{(1,1,1)}_{h,suv}$ now becomes:
\[
\tilde\mu^{(1,1,1)}_{h,suv}=\varphi^{v}\gamma^{u-1}E_t\big(h_{t+s}h_{t+s+1}^2\big)
+\Big(\bar h\varphi^{u-1}\big(1-\varphi^{v}\big)+2\omega\varphi^{v+1}(\varphi-\gamma)^{-1}\big(\varphi^{u-1}-\gamma^{u-1}\big)\Big)E_t\big(h_{t+s}h_{t+s+1}\big)
\]
\[
+\Big[\bar h^2\big(1-\varphi^{u-1}\big)+\varphi^{v}\Big(c_1+\big(2\omega\varphi\bar h(\varphi-\gamma)^{-1}-c_1\big)\gamma^{u-1}-2\omega\varphi^{u}\bar h(\varphi-\gamma)^{-1}-\bar h^2\big(1-\varphi^{u-1}\big)\Big)\Big]\tilde\mu^{(1)}_{h,s}.
\]
But
\[
E_t\big(h_{t+s}h_{t+s+1}^2\big)=E_t\Big(h_{t+s}\big(\omega+(\alpha+\lambda I^-_{t+s})\varepsilon_{t+s}^2+\beta h_{t+s}\big)^2\Big)=\omega^2\tilde\mu^{(1)}_{h,s}+2\omega\varphi\,\tilde\mu^{(2)}_{h,s}+\gamma\,\tilde\mu^{(3)}_{h,s}
\]
and
\[
E_t\big(h_{t+s}h_{t+s+1}\big)=E_t\Big(h_{t+s}\big(\omega+(\alpha+\lambda I^-_{t+s})\varepsilon_{t+s}^2+\beta h_{t+s}\big)\Big)=\omega\tilde\mu^{(1)}_{h,s}+\varphi\,\tilde\mu^{(2)}_{h,s}.
\]
Hence the final expression for $\tilde\mu^{(1,1,1)}_{h,suv}$ is:
\[
\tilde\mu^{(1,1,1)}_{h,suv}=\varphi^{v}\gamma^{u-1}\Big(\omega^2\tilde\mu^{(1)}_{h,s}+2\omega\varphi\,\tilde\mu^{(2)}_{h,s}+\gamma\,\tilde\mu^{(3)}_{h,s}\Big)
+\Big(\bar h\varphi^{u-1}\big(1-\varphi^{v}\big)+2\omega\varphi^{v+1}(\varphi-\gamma)^{-1}\big(\varphi^{u-1}-\gamma^{u-1}\big)\Big)\Big(\omega\tilde\mu^{(1)}_{h,s}+\varphi\,\tilde\mu^{(2)}_{h,s}\Big)
\]
\[
+\Big[\bar h^2\big(1-\varphi^{u-1}\big)+\varphi^{v}\Big(c_1+\big(2\omega\varphi\bar h(\varphi-\gamma)^{-1}-c_1\big)\gamma^{u-1}-2\omega\varphi^{u}\bar h(\varphi-\gamma)^{-1}-\bar h^2\big(1-\varphi^{u-1}\big)\Big)\Big]\tilde\mu^{(1)}_{h,s}.
\]

(d) Fourth Conditional Moments of Forward and Aggregated Variances
For the fourth moment of the forward variance we write:
\[
\tilde\mu^{(4)}_{h,s}=E_t\Big[\big(\omega+(\alpha+\lambda I^-_{t+s-1})\varepsilon_{t+s-1}^2+\beta h_{t+s-1}\big)^4\Big]
=\omega^4+4\omega^3\varphi\,\tilde\mu^{(1)}_{h,s-1}+6\omega^2\gamma\,\tilde\mu^{(2)}_{h,s-1}+4\omega\delta\,\tilde\mu^{(3)}_{h,s-1}+\xi\,\tilde\mu^{(4)}_{h,s-1}
\]
\[
=\sum_{j=0}^{s-2}\xi^{j}\Big(\omega^4+4\omega^3\varphi\,\tilde\mu^{(1)}_{h,s-j-1}+6\omega^2\gamma\,\tilde\mu^{(2)}_{h,s-j-1}+4\omega\delta\,\tilde\mu^{(3)}_{h,s-j-1}\Big)+\xi^{s-1}h_{t+1}^4,
\]
where $\gamma$ and $\delta$ are as above and
\[
\xi=\mu^{(8)}_z\Big(\alpha^4+F\big(\lambda^4+4(\alpha^3\lambda+\alpha\lambda^3)+6\alpha^2\lambda^2\big)\Big)+\beta^4
+4\Big[\mu^{(6)}_z\beta\Big(\alpha^3+F\big(\lambda^3+3(\alpha^2\lambda+\alpha\lambda^2)\big)\Big)+\beta^3(\alpha+\lambda F)\Big]
+6\kappa_z\beta^2\big(\alpha^2+\lambda^2F+2\alpha\lambda F\big).
\]
When the innovations are normally distributed, $\mu^{(8)}_z=105$, while when they are Student $t$ distributed, $\mu^{(8)}_z=105\tfrac{(\nu-2)^3}{(\nu-4)(\nu-6)(\nu-8)}$. Finally, for the fourth moment of aggregated variance we write:
\[
\tilde M^{(4)}_{h,n}=\sum_{s=1}^{n}\tilde\mu^{(4)}_{h,s}+\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big(4\big(\tilde\mu^{(3,1)}_{h,su}+\tilde\mu^{(1,3)}_{h,su}\big)+6\,\tilde\mu^{(2,2)}_{h,su}\Big)
+12\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\Big(\tilde\mu^{(2,1,1)}_{h,suv}+\tilde\mu^{(1,2,1)}_{h,suv}+\tilde\mu^{(1,1,2)}_{h,suv}\Big)
+24\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\sum_{w=1}^{n-s-u-v}\tilde\mu^{(1,1,1,1)}_{h,suvw},
\]
with
\[
\tilde\mu^{(3,1)}_{h,su}=E_t\Big(h_{t+s}^3\big(\omega+(\alpha+\lambda I^-_{t+s+u-1})\varepsilon_{t+s+u-1}^2+\beta h_{t+s+u-1}\big)\Big)
=\bar h\,\tilde\mu^{(3)}_{h,s}+\varphi^{u}\Big(\tilde\mu^{(4)}_{h,s}-\bar h\,\tilde\mu^{(3)}_{h,s}\Big),
\]
\[
\tilde\mu^{(1,3)}_{h,su}=E_t\Big(h_{t+s}\big(\omega+(\alpha+\lambda I^-_{t+s+u-1})\varepsilon_{t+s+u-1}^2+\beta h_{t+s+u-1}\big)^3\Big)
=\omega^3\tilde\mu^{(1)}_{h,s}+3\omega\Big(\omega\varphi\,\tilde\mu^{(1,1)}_{h,s(u-1)}+\gamma\,\tilde\mu^{(1,2)}_{h,s(u-1)}\Big)+\delta\,\tilde\mu^{(1,3)}_{h,s(u-1)}
\]
\[
=\sum_{j=0}^{u-1}\delta^{j}\Big(\omega^3\tilde\mu^{(1)}_{h,s}+3\omega\big(\omega\varphi\,\tilde\mu^{(1,1)}_{h,s(u-j-1)}+\gamma\,\tilde\mu^{(1,2)}_{h,s(u-j-1)}\big)\Big)+\delta^{u}\tilde\mu^{(4)}_{h,s},
\]
\[
\tilde\mu^{(2,2)}_{h,su}=E_t\Big(h_{t+s}^2\big(\omega+(\alpha+\lambda I^-_{t+s+u-1})\varepsilon_{t+s+u-1}^2+\beta h_{t+s+u-1}\big)^2\Big)
=\omega^2\tilde\mu^{(2)}_{h,s}+2\omega\varphi\,\tilde\mu^{(2,1)}_{h,s(u-1)}+\gamma\,\tilde\mu^{(2,2)}_{h,s(u-1)}
\]
\[
=\sum_{j=0}^{u-1}\gamma^{j}\Big(\omega^2\tilde\mu^{(2)}_{h,s}+2\omega\varphi\,\tilde\mu^{(2,1)}_{h,s(u-j-1)}\Big)+\gamma^{u}\tilde\mu^{(4)}_{h,s}.
\]
For the triple products, with $t'=t+s$, $\tilde\mu^{(2,1,1)}_{h,suv}=E_t\big(h_{t+s}^2E_{t+s}(h_{t+s+u}h_{t+s+u+v})\big)=E_t\big(h_{t+s}^2E_{t'}(h_{t'+u}h_{t'+u+v})\big)$. By analogy with $\tilde\mu^{(1,1,1)}_{h,suv}$, we obtain the following expression for $\tilde\mu^{(2,1,1)}_{h,suv}$:
\[
\tilde\mu^{(2,1,1)}_{h,suv}=\varphi^{v}\gamma^{u-1}\Big(\omega^2\tilde\mu^{(2)}_{h,s}+2\omega\varphi\,\tilde\mu^{(3)}_{h,s}+\gamma\,\tilde\mu^{(4)}_{h,s}\Big)
+\Big(\bar h\varphi^{u-1}\big(1-\varphi^{v}\big)+2\omega\varphi^{v+1}(\varphi-\gamma)^{-1}\big(\varphi^{u-1}-\gamma^{u-1}\big)\Big)\Big(\omega\tilde\mu^{(2)}_{h,s}+\varphi\,\tilde\mu^{(3)}_{h,s}\Big)
\]
\[
+\Big[\bar h^2\big(1-\varphi^{u-1}\big)+\varphi^{v}\Big(c_1+\big(2\omega\varphi\bar h(\varphi-\gamma)^{-1}-c_1\big)\gamma^{u-1}-2\omega\varphi^{u}\bar h(\varphi-\gamma)^{-1}-\bar h^2\big(1-\varphi^{u-1}\big)\Big)\Big]\tilde\mu^{(2)}_{h,s}.
\]
Using the same idea (the tower law), we can also solve for $\tilde\mu^{(1,2,1)}_{h,suv}$, $\tilde\mu^{(1,1,2)}_{h,suv}$ and $\tilde\mu^{(1,1,1,1)}_{h,suvw}$.

(e) Centred Conditional Moments of Forward and Aggregated Variances
The second centred moment of the forward variance, i.e. the conditional variance of the forward conditional variance, is:
\[
\mu^{(2)}_{h,s}=E_t\Big[\big(h_{t+s}-\tilde\mu^{(1)}_{h,s}\big)^2\Big]=\tilde\mu^{(2)}_{h,s}-\big(\tilde\mu^{(1)}_{h,s}\big)^2.
\]
The second centred moment of the aggregated variance, i.e. the conditional variance of the aggregated conditional variance, is:
\[
M^{(2)}_{h,n}=E_t\Big[\Big(\sum_{s=1}^{n}\big(h_{t+s}-\tilde\mu^{(1)}_{h,s}\big)\Big)^2\Big]
=\sum_{s=1}^{n}\Big(\tilde\mu^{(2)}_{h,s}-\big(\tilde\mu^{(1)}_{h,s}\big)^2\Big)+2\sum_{s=1}^{n}\sum_{u=1}^{n-s}\Big(\tilde\mu^{(1,1)}_{h,su}-\tilde\mu^{(1)}_{h,s}\tilde\mu^{(1)}_{h,s+u}\Big)
\]
\[
=\sum_{s=1}^{n}\tilde\mu^{(2)}_{h,s}+2\sum_{s=1}^{n}\sum_{u=1}^{n-s}\varphi^{u}\tilde\mu^{(2)}_{h,s}-\sum_{s=1}^{n}\big(\tilde\mu^{(1)}_{h,s}\big)^2+2\sum_{s=1}^{n}\sum_{u=1}^{n-s}\big(1-\varphi^{u}\big)\bar h\,\tilde\mu^{(1)}_{h,s}-2\sum_{s=1}^{n}\sum_{u=1}^{n-s}\tilde\mu^{(1)}_{h,s}\tilde\mu^{(1)}_{h,s+u} \quad (27)
\]
or
\[
M^{(2)}_{h,n}=\tilde M^{(2)}_{h,n}-\sum_{s=1}^{n}\big(\tilde\mu^{(1)}_{h,s}\big)^2-2\sum_{s=1}^{n}\sum_{u=1}^{n-s}\tilde\mu^{(1)}_{h,s}\tilde\mu^{(1)}_{h,s+u}. \quad (28)
\]
For $\gamma\neq1$, the expression for $\tilde M^{(2)}_{h,n}$ is given by (24). Moreover,
\[
\sum_{s=1}^{n}\big(\tilde\mu^{(1)}_{h,s}\big)^2=\sum_{s=1}^{n}\big(\bar h+\varphi^{s-1}(h_{t+1}-\bar h)\big)^2
=n\bar h^2+\big(h_{t+1}-\bar h\big)^2\big(1-\varphi^2\big)^{-1}\big(1-\varphi^{2n}\big)+2\bar h\big(h_{t+1}-\bar h\big)(1-\varphi)^{-1}(1-\varphi^{n}) \quad (29)
\]
and
\[
\sum_{s=1}^{n}\sum_{u=1}^{n-s}\tilde\mu^{(1)}_{h,s}\tilde\mu^{(1)}_{h,s+u}=\sum_{s=1}^{n}\sum_{u=1}^{n-s}\big(\bar h+\varphi^{s-1}(h_{t+1}-\bar h)\big)\big(\bar h+\varphi^{s+u-1}(h_{t+1}-\bar h)\big)
\]
1) ¯ h + ¯ h (cid:0) h t +1 − ¯ h (cid:1) (1 − ϕ ) − (cid:2) n − (1 − ϕ ) − (1 − ϕ n ) (cid:3) +¯ h (cid:0) h t +1 − ¯ h (cid:1) (1 − ϕ ) − (cid:2) ϕ (1 − ϕ ) − (1 − ϕ n ) − nϕ n (cid:3) + (cid:0) h t +1 − ¯ h (cid:1) (1 − ϕ ) − h ϕ (1 − ϕ ) − (1 − ϕ n ) − (1 − ϕ ) − ϕ n (1 − ϕ n ) i (30)For γ = 1, consider the formula in (27). The expressions for the last three sums do notdepend on γ and hence remain the same as in the γ = 1 case (see (22), (23), (29) and (30)),while n P s =1 ˜ µ (2) h,s and n P s =1 n − s P u =1 ϕ u ˜ µ (2) h,s become: n P s =1 ˜ µ (2) h,s = n P s =1 (cid:2) ( s − (cid:0) ω + 2 ωϕ ¯ h (cid:1) + 2 ϕ ¯ h (1 − ϕ s − ) (cid:0) h t +1 − ¯ h (cid:1) + h t +1 (cid:3) = n ( n − (cid:0) ω + 2 ωϕ ¯ h (cid:1) + 2 ϕ ¯ h (cid:0) h t +1 − ¯ h (cid:1) (cid:0) n − (1 − ϕ ) − (1 − ϕ n ) (cid:1) + nh t +1 (31) n X s =1 n − s X u =1 ϕ u ˜ µ (2) h,s = n X s =1 ˜ µ (2) h,s n − s X u =1 ϕ u = ϕ (1 − ϕ ) − " n X s =1 ˜ µ (2) h,s − n X s =1 ϕ n − s ˜ µ (2) h,s (32) n P s =1 ϕ n − s ˜ µ (2) h,s = n P s =1 ϕ n − s (cid:2) ( s − (cid:0) ω + 2 ωϕ ¯ h (cid:1) + 2 ϕ ¯ h (1 − ϕ s − ) (cid:0) h t +1 − ¯ h (cid:1) + h t +1 (cid:3) = (cid:0) ω + 2 ωϕ ¯ h (cid:1) (1 − ϕ ) − (cid:2) n − (1 − ϕ ) − (1 − ϕ n ) (cid:3) − n ¯ h (cid:0) h t +1 − ¯ h (cid:1) ϕ n + (cid:2) ϕ ¯ h (cid:0) h t +1 − ¯ h (cid:1) + h t +1 (cid:3) (1 − ϕ ) − (1 − ϕ n ) . (33)The third centred moment of the forward variance is: µ (3) h,s = E t (cid:18)(cid:16) h t + s − ˜ µ (1) h,s (cid:17) (cid:19) = ˜ µ (3) h,s − µ (2) h,s ˜ µ (1) h,s + 2 (cid:16) ˜ µ (1) h,s (cid:17) (34)and the third centred moment of the aggregated variance is:45 (3) h,n = E t (cid:18) n P s =1 (cid:16) h t + s − ˜ µ (1) h,s (cid:17)(cid:19) ! 
= Σ_{s=1}^n ( μ̃^(3)_{h,s} − 3 μ̃^(2)_{h,s} μ̃^(1)_{h,s} + 2 ( μ̃^(1)_{h,s} )³ )
+ 3 Σ_{s=1}^n Σ_{u=1}^{n−s} [ μ̃^(2,1)_{h,su} + μ̃^(1,2)_{h,su} + 2 ( μ̃^(1)_{h,s} + μ̃^(1)_{h,s+u} )( μ̃^(1)_{h,s} μ̃^(1)_{h,s+u} − μ̃^(1,1)_{h,su} ) − μ̃^(1)_{h,s} μ̃^(2)_{h,s+u} − μ̃^(1)_{h,s+u} μ̃^(2)_{h,s} ]
+ 6 Σ_{s=1}^n Σ_{u=1}^{n−s} Σ_{v=1}^{n−s−u} [ μ̃^(1,1,1)_{h,suv} − μ̃^(1)_{h,s} μ̃^(1,1)_{h,(s+u)v} − μ̃^(1)_{h,(s+u)} μ̃^(1,1)_{h,s(u+v)} − μ̃^(1)_{h,(s+u+v)} μ̃^(1,1)_{h,su} + 2 μ̃^(1)_{h,s} μ̃^(1)_{h,(s+u)} μ̃^(1)_{h,(s+u+v)} ].

The fourth centred moment of the forward variance is:

μ^(4)_{h,s} = μ̃^(4)_{h,s} − 4 μ̃^(1)_{h,s} μ̃^(3)_{h,s} + 6 ( μ̃^(1)_{h,s} )² μ̃^(2)_{h,s} − 3 ( μ̃^(1)_{h,s} )⁴.

Finally, the fourth centred moment of the aggregated variance is:

M^(4)_{h,n} = E_t[ ( Σ_{s=1}^n ( h_{t+s} − μ̃^(1)_{h,s} ) )⁴ ]
= Σ_{s=1}^n E_t[ ( h_{t+s} − μ̃^(1)_{h,s} )⁴ ]
+ Σ_{s=1}^n Σ_{u=1}^{n−s} [ 4 E_t( ( h_{t+s} − μ̃^(1)_{h,s} )³ ( h_{t+s+u} − μ̃^(1)_{h,s+u} ) ) + 4 E_t( ( h_{t+s} − μ̃^(1)_{h,s} )( h_{t+s+u} − μ̃^(1)_{h,s+u} )³ ) + 6 E_t( ( h_{t+s} − μ̃^(1)_{h,s} )²( h_{t+s+u} − μ̃^(1)_{h,s+u} )² ) ]
+ 12 Σ_{s=1}^n Σ_{u=1}^{n−s} Σ_{v=1}^{n−s−u} [ E_t( ( h_{t+s} − μ̃^(1)_{h,s} )²( h_{t+s+u} − μ̃^(1)_{h,s+u} )( h_{t+s+u+v} − μ̃^(1)_{h,s+u+v} ) ) + E_t( ( h_{t+s} − μ̃^(1)_{h,s} )( h_{t+s+u} − μ̃^(1)_{h,s+u} )²( h_{t+s+u+v} − μ̃^(1)_{h,s+u+v} ) ) + E_t( ( h_{t+s} − μ̃^(1)_{h,s} )( h_{t+s+u} − μ̃^(1)_{h,s+u} )( h_{t+s+u+v} − μ̃^(1)_{h,s+u+v} )² ) ]
+ 24 Σ_{s=1}^n Σ_{u=1}^{n−s} Σ_{v=1}^{n−s−u} Σ_{w=1}^{n−s−u−v} E_t( ( h_{t+s} − μ̃^(1)_{h,s} )( h_{t+s+u} − μ̃^(1)_{h,s+u} )( h_{t+s+u+v} − μ̃^(1)_{h,s+u+v} )( h_{t+s+u+v+w} − μ̃^(1)_{h,s+u+v+w} ) ).

Performing the necessary calculations yields the formula for M^(4)_{h,n} given in Theorem 2. The standardized moments (i.e. skewness and kurtosis) of the forward and aggregated variance distributions are now easily obtained from the central moments, as defined in Section 2.1.

T.A.3: Limits of the Moments of Returns
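The derivations in this appendix repeatedly pass between raw moments, central moments, and the standardized skewness and kurtosis. A minimal helper collecting these standard conversions (an illustrative sketch, not the authors' code):

```python
# Standard conversions: raw moments m1..m4 -> central moments
# mu2..mu4 -> standardized skewness and kurtosis.
def raw_to_central(m1, m2, m3, m4):
    mu2 = m2 - m1**2
    mu3 = m3 - 3*m2*m1 + 2*m1**3
    mu4 = m4 - 4*m3*m1 + 6*m2*m1**2 - 3*m1**4
    return mu2, mu3, mu4

def skew_kurt(m1, m2, m3, m4):
    mu2, mu3, mu4 = raw_to_central(m1, m2, m3, m4)
    return mu3 / mu2**1.5, mu4 / mu2**2

# Sanity check against a direct computation on a small sample.
xs = [1.0, 2.0, 4.0, 8.0]
n = len(xs)
m1, m2, m3, m4 = (sum(x**k for x in xs) / n for k in (1, 2, 3, 4))
mu2, mu3, mu4 = raw_to_central(m1, m2, m3, m4)
assert abs(mu2 - sum((x - m1)**2 for x in xs) / n) < 1e-9
assert abs(mu3 - sum((x - m1)**3 for x in xs) / n) < 1e-9
assert abs(mu4 - sum((x - m1)**4 for x in xs) / n) < 1e-9
```

These are exactly the identities used in (34) and in the fourth centred moment above, with the raw moments μ̃^(k) playing the role of m1..m4.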
This appendix derives the limits of the conditional moments of the forward and aggregated returns of the generic GJR model as the time horizon increases. We also outline the results for two important special cases of the generic framework, namely the normal GJR (i.e. D(0,1) is the standard normal) and the normal GARCH(1,1) (i.e. D(0,1) is the standard normal and λ = 0). We only specify the results for these special cases when they differ from the results obtained for the generic model.

In what follows we use the notation defined at the beginning of this Appendix. For the two special cases, where D(0,1) is the standard normal, τ_z = 0, F = 1/2 and κ_z = 3, and ϕ and γ become (for the normal GJR):

ϕ = α + λ/2 + β and γ = ϕ² + 2( α + λ/2 )² + (3/4)λ².   (35)

Moreover, for the normal GARCH(1,1), λ = 0 and the two constants above simplify further:

ϕ = α + β and γ = ϕ² + 2α² = ( α + β )² + 2α².   (36)

We assume ϕ ∈ (0,1) and ϕ ≠ γ.

(a) Limits of the Forward and Aggregated Conditional Variance
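For concreteness, the constants in (36) and the forward variance term structure μ̃^(1)_{h,s} = h̄ + ϕ^{s−1}( h_{t+1} − h̄ ) can be computed as follows. A minimal sketch; the parameter values are illustrative only, not taken from the paper:

```python
# Normal GARCH(1,1): phi and gamma as in (36), long-term variance
# h_bar = omega/(1 - phi), and the forward variance term structure.
omega, alpha, beta = 2e-6, 0.05, 0.90   # illustrative parameters
phi = alpha + beta                      # first-moment persistence
gamma = phi**2 + 2*alpha**2             # second-moment persistence
assert 0 < phi < 1 and gamma < 1 and phi != gamma

h_bar = omega / (1 - phi)               # long-term (unconditional) variance
h_next = 1.5 * h_bar                    # current one-step-ahead variance

def forward_variance(s):
    """mu_tilde^(1)_{h,s} = h_bar + phi**(s-1) * (h_next - h_bar)."""
    return h_bar + phi**(s - 1) * (h_next - h_bar)

# The term structure decays geometrically towards h_bar.
assert abs(forward_variance(1) - h_next) < 1e-18
assert abs(forward_variance(500) - h_bar) < 1e-9 * h_bar
```

The geometric decay at rate ϕ is exactly what drives the variance limits derived next.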
Both the forward variance and the aggregated variance limit, expressed in daily units, are equal to the long-term variance, which we have denoted by h̄. That is,

lim_{s→∞} μ^(2)_{r,s} = lim_{s→∞} μ̃^(1)_{h,s} = lim_{s→∞} ( h̄ + ϕ^{s−1}( h_{t+1} − h̄ ) ) = h̄,

lim_{n→∞} M^(2)_{r,n}/n = lim_{n→∞} [ ( n h̄ + (1 − ϕ)^{−1}(1 − ϕ^n)( h_{t+1} − h̄ ) ) / n ] = h̄.

(b) Limits of the Forward and Aggregated Conditional Skewness
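The skewness and kurtosis limits below are driven by the constant c₂ = ( ω² + 2ωϕh̄ )(1 − γ)^{−1}, the limit of the second raw moment μ̃^(2)_{h,s} under the recursion μ̃^(2)_{h,s+1} = ω² + 2ωϕ μ̃^(1)_{h,s} + γ μ̃^(2)_{h,s}. A numerical sketch of this convergence when γ < 1 (illustrative parameters):

```python
# Iterate mu_tilde^(2)_{h,s} forward and verify convergence to
# c2 = (omega**2 + 2*omega*phi*h_bar)/(1 - gamma) when gamma < 1.
omega, alpha, beta = 2e-6, 0.05, 0.90     # illustrative normal GARCH(1,1)
phi = alpha + beta
gamma = phi**2 + 2*alpha**2
h_bar = omega / (1 - phi)
h_next = 1.5 * h_bar

c2 = (omega**2 + 2*omega*phi*h_bar) / (1 - gamma)

m1 = h_next          # mu_tilde^(1)_{h,1}
m2 = h_next**2       # mu_tilde^(2)_{h,1} (h_{t+1} is known at time t)
for _ in range(2000):
    m2 = omega**2 + 2*omega*phi*m1 + gamma*m2   # recursion in s
    m1 = h_bar + phi*(m1 - h_bar)               # forward variance update
assert abs(m2 - c2) < 1e-6 * c2
```

For γ ≥ 1 the same recursion diverges, which is the source of the infinite limits in the case splits below.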
The forward skewness limit is:

lim_{s→∞} τ_{r,s} = lim_{s→∞} τ_z [ 1 + (3/8) μ^(2)_{h,s} ( μ̃^(1)_{h,s} )^{−2} ] = τ_z [ 1 + (3/8)( ( lim_{s→∞} μ̃^(2)_{h,s} )( lim_{s→∞} μ̃^(1)_{h,s} )^{−2} − 1 ) ],

where

lim_{s→∞} μ̃^(2)_{h,s} = c₂ = ( ω² + 2ωϕh̄ )(1 − γ)^{−1} if γ ∈ (0,1), and = ∞ if γ ∈ [1,∞).

Hence:

lim_{s→∞} τ_{r,s} = (τ_z/8)[ 5 + 3( ω² + 2ωϕh̄ )(1 − γ)^{−1} h̄^{−2} ] if γ ∈ (0,1), and = sgn(τ_z)·∞ if γ ∈ [1,∞).

For the normal GJR and the normal GARCH(1,1), τ_{r,s} = lim_{s→∞} τ_{r,s} = 0.

For the limit of the aggregated skewness set:

c₃ = τ_z + 3( ατ_z + λ ∫_{−∞}^0 x³ f(x) dx )(1 − ϕ)^{−1},
c₄ = 3( ατ_z + λ ∫_{−∞}^0 x³ f(x) dx )(1 − ϕ)^{−1} = c₃ − τ_z.

We have:

lim_{n→∞} T_{r,n} = lim_{n→∞} M^(3)_{r,n} n^{−3/2} ( M^(2)_{r,n} n^{−1} )^{−3/2} = [ lim_{n→∞} M^(3)_{r,n} n^{−3/2} ][ lim_{n→∞} ( M^(2)_{r,n}/n ) ]^{−3/2}
= h̄^{−3/2} lim_{n→∞} [ (1/8) n^{−3/2} Σ_{s=1}^n ( c₃ − c₄ ϕ^{n−s} )( 5 ( μ̃^(1)_{h,s} )^{3/2} + 3 μ̃^(2)_{h,s} ( μ̃^(1)_{h,s} )^{−1/2} ) ].

Write S_n = (1/8) n^{−3/2} Σ_{s=1}^n ( c₃ − c₄ ϕ^{n−s} )( 5 ( μ̃^(1)_{h,s} )^{3/2} + 3 μ̃^(2)_{h,s} ( μ̃^(1)_{h,s} )^{−1/2} ) and L = lim_{n→∞} S_n. Also set

S_{1,n} = (1/8) n^{−3/2} Σ_{s=1}^n ( c₃ − c₄ ϕ^{n−s} )( 5 ( max_{1≤s≤n} μ̃^(1)_{h,s} )^{3/2} + 3 μ̃^(2)_{h,s} ( min_{1≤s≤n} μ̃^(1)_{h,s} )^{−1/2} ), L₁ = lim_{n→∞} S_{1,n},

and S_{2,n}, defined analogously with max and min interchanged, with L₂ = lim_{n→∞} S_{2,n}.

Case 1: c₄ ≤ 0.
Then c₃ − c₄ ϕ^{n−s} ≥ c₃ for all s. Also, if τ_z < 0, then S_{2,n} ≤ S_n ≤ S_{1,n} for any n. (If τ_z > 0, then S_{1,n} ≤ S_n ≤ S_{2,n}; however, the limit does not change: the proof below still applies, only that S_{1,n} and S_{2,n} swap places.) Hence, if L₁ = L₂ then L = L₁ = L₂, by the squeeze theorem. We now prove that L₁ = L₂. Setting L_max = lim_{n→∞} ( max_{1≤s≤n} μ̃^(1)_{h,s} ) = max( h̄, h_{t+1} ) and L_min = lim_{n→∞} ( min_{1≤s≤n} μ̃^(1)_{h,s} ) = min( h̄, h_{t+1} ), we may write:

L₁ = (1/8)[ 5 L_max^{3/2} lim_{n→∞} n^{−3/2}( n c₃ − c₄ (1 − ϕ)^{−1}(1 − ϕ^n) ) + 3 L_min^{−1/2} lim_{n→∞} n^{−3/2} Σ_{s=1}^n ( c₃ − c₄ ϕ^{n−s} ) μ̃^(2)_{h,s} ]

and the first term above is zero. For γ ≠ 1, write μ̃^(2)_{h,s} = c₂ + ( h²_{t+1} − c₅ ) γ^{s−1} + c₆ ϕ^{s−1}, with c₆ = 2ωϕ(ϕ − γ)^{−1}( h_{t+1} − h̄ ) and c₅ = c₂ + c₆; then

Σ_{s=1}^n ( c₃ − c₄ ϕ^{n−s} ) μ̃^(2)_{h,s} = n c₂ c₃ + c₃ ( h²_{t+1} − c₅ )(1 − γ)^{−1}(1 − γ^n) + ( c₃ c₆ − c₂ c₄ )(1 − ϕ)^{−1}(1 − ϕ^n) − c₄ ( h²_{t+1} − c₅ )(ϕ − γ)^{−1}( ϕ^n − γ^n ) − n c₄ c₆ ϕ^{n−1}.

For γ = 1,

Σ_{s=1}^n ( c₃ − c₄ ϕ^{n−s} ) μ̃^(2)_{h,s} = c₃ Σ_{s=1}^n μ̃^(2)_{h,s} − c₄ Σ_{s=1}^n ϕ^{n−s} μ̃^(2)_{h,s},

with the two sums on the right given by (31) and (33). Hence

L₁ = 0 if γ ∈ (0,1); sgn(c₃) sgn( ω² + 2ωϕh̄ )·∞ if γ = 1; sgn(c₃) sgn[ ( (ϕ − γ)^{−1} c₆ − (1 − γ)^{−1} )( h²_{t+1} − c₅ ) ]·∞ if γ ∈ (1,∞).

Now, for γ ∈ [1,∞), the remaining sign factors all equal 1, so

L₁ = 0 if γ ∈ (0,1); sgn(c₃)·∞ if γ ∈ [1,∞).

Analogously it can be shown that L₂ is the same limit, hence L₁ = L₂ and finally:

lim_{n→∞} T_{r,n} = 0 if γ ∈ (0,1); sgn(c₃)·∞ if γ ∈ [1,∞).

Case 2: c₄ > 0.
In this case there exists an integer s̃ ≥ 0 such that c₃ − c₄ ϕ^{n−s} > 0 for all s > s̃ and c₃ − c₄ ϕ^{n−s} ≤ 0 for all 1 ≤ s ≤ s̃. It can easily be seen that if c₃ > c₄ ϕ, then s̃ = 0. Write:

S_n = S^(1)_n + S^(2)_n, S̃_{1,n} = S^(1)_{1,n} + S^(2)_{2,n}, S̃_{2,n} = S^(1)_{2,n} + S^(2)_{1,n},

where S^(1)_n and S^(2)_n restrict the sum defining S_n to s = 1, ..., s̃ and to s = s̃ + 1, ..., n respectively; S^(1)_{1,n} and S^(2)_{1,n} are obtained from S^(1)_n and S^(2)_n by replacing ( μ̃^(1)_{h,s} )^{3/2} with ( max_{1≤s≤n} μ̃^(1)_{h,s} )^{3/2} and ( μ̃^(1)_{h,s} )^{−1/2} with ( min_{1≤s≤n} μ̃^(1)_{h,s} )^{−1/2}, while S^(1)_{2,n} and S^(2)_{2,n} interchange the roles of max and min.

If we solve further for S^(1)_{1,n}, we get:

S^(1)_{1,n} = (1/8)[ 5 ( max_{1≤s≤n} μ̃^(1)_{h,s} )^{3/2} n^{−3/2}( s̃ c₃ − c₄ ϕ^{n−s̃} Σ_{s=1}^{s̃} ϕ^{s̃−s} ) + 3 ( min_{1≤s≤n} μ̃^(1)_{h,s} )^{−1/2} n^{−3/2}( c₃ Σ_{s=1}^{s̃} μ̃^(2)_{h,s} − c₄ ϕ^{n−s̃} Σ_{s=1}^{s̃} ϕ^{s̃−s} μ̃^(2)_{h,s} ) ].

Define f₁(a,b) = Σ_{s=a}^b ϕ^{b−s}, f₂(a,b) = Σ_{s=a}^b μ̃^(2)_{h,s} and f₃(a,b) = Σ_{s=a}^b ϕ^{b−s} μ̃^(2)_{h,s}. Then S^(1)_{1,n} becomes:

S^(1)_{1,n} = (1/8)[ 5 ( max_{1≤s≤n} μ̃^(1)_{h,s} )^{3/2} n^{−3/2}( s̃ c₃ − c₄ ϕ^{n−s̃} f₁(1,s̃) ) + 3 ( min_{1≤s≤n} μ̃^(1)_{h,s} )^{−1/2} n^{−3/2}( c₃ f₂(1,s̃) − c₄ ϕ^{n−s̃} f₃(1,s̃) ) ],

where the f_j(1,s̃), j = 1, 2, 3, do not depend on n. Thus lim_{n→∞} S^(1)_{1,n} = 0, and a similar computation applies to S^(2)_{2,n}.
If c₃ > 0, then S̃_{1,n} ≤ S_n ≤ S̃_{2,n}, whereas if c₃ < 0, the inequality is reversed. However, this does not change the proof above. It can easily be noticed that sgn( −c₄(ϕ − γ) + c₃(1 − γ) ) = sgn(c₃) for c₃ < 0. Hence the limits of aggregated skewness are the same, regardless of c₃ being greater than or less than 0:

lim_{n→∞} T_{r,n} = 0 if γ ∈ (0,1); sgn( τ_z ( α + γ − ϕ ) + λ ∫_{−∞}^0 x³ f(x) dx )·∞ if γ ∈ [1,∞).

For the normal GJR, τ_z = 0 and ∫_{−∞}^0 x³ f(x) dx = −√(2/π), hence:

lim_{n→∞} T_{r,n} = 0 if γ ∈ (0,1); −sgn(λ)·∞ if γ ∈ [1,∞).

For the normal GARCH(1,1), τ_z = λ = 0 and thus T_{r,n} = lim_{n→∞} T_{r,n} = 0.

(c) Limits of Forward and Aggregated Conditional Kurtosis
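The forward kurtosis limit derived in this subsection is κ_z times the limit of μ̃^(2)_{h,s}( μ̃^(1)_{h,s} )^{−2}, i.e. κ_z c₂ h̄^{−2} when γ ∈ (0,1). For the normal GARCH(1,1) this simplifies algebraically, using h̄ = ω(1 − ϕ)^{−1}, to 3(1 − ϕ²)/(1 − γ), which always exceeds 3 because γ = ϕ² + 2α² > ϕ². A quick numerical check of this simplification (illustrative parameters; the closed-form shortcut is an observation consistent with, not quoted from, the text):

```python
# Limiting kurtosis of forward returns for the normal GARCH(1,1):
# lim kappa_{r,s} = 3 * c2 / h_bar**2, which equals 3*(1-phi**2)/(1-gamma).
omega, alpha, beta = 2e-6, 0.05, 0.90   # illustrative parameters
phi = alpha + beta
gamma = phi**2 + 2*alpha**2             # gamma > phi**2 whenever alpha > 0
h_bar = omega / (1 - phi)

c2 = (omega**2 + 2*omega*phi*h_bar) / (1 - gamma)
kappa_limit = 3 * c2 / h_bar**2

# Check the algebraic simplification and the excess over the normal value.
assert abs(kappa_limit - 3*(1 - phi**2)/(1 - gamma)) < 1e-9
assert kappa_limit > 3
```

So even in the limit, forward returns from a persistent GARCH process remain leptokurtic relative to the normal.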
The forward kurtosis limit is:

lim_{s→∞} κ_{r,s} = κ_z ( lim_{s→∞} μ̃^(1)_{h,s} )^{−2} lim_{s→∞} μ̃^(2)_{h,s} = κ_z h̄^{−2}( ω² + 2ωϕh̄ )(1 − γ)^{−1} if γ ∈ (0,1), and = ∞ if γ ∈ [1,∞).

For the normal GJR and normal GARCH(1,1) the limit of the kurtosis of forward returns becomes:

lim_{s→∞} κ_{r,s} = 3 h̄^{−2}( ω² + 2ωϕh̄ )(1 − γ)^{−1} if γ ∈ (0,1), and = ∞ if γ ∈ [1,∞),

where ϕ and γ are now given by (35) and (36) for the normal GJR and normal GARCH(1,1) respectively.

For the limit of the aggregated kurtosis, we write:

lim_{n→∞} K_{r,n} = lim_{n→∞} ( M^(2)_{r,n} )^{−2} M^(4)_{r,n} = lim_{n→∞} ( n^{−1} M^(2)_{r,n} )^{−2}( n^{−2} M^(4)_{r,n} ) = h̄^{−2} lim_{n→∞} ( n^{−2} M^(4)_{r,n} ),

lim_{n→∞} ( n^{−2} M^(4)_{r,n} ) = κ_z A₁ + 6 A₂ + 4 A₃,   (37)

where A₁ = lim_{n→∞} n^{−2} Σ_{s=1}^n μ̃^(2)_{h,s}, A₂ = lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} E_t( ε²_{t+s} ε²_{t+s+u} ), and A₃ = lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} E_t( ε³_{t+s} ε_{t+s+u} ) + 3 lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} Σ_{v=1}^{n−s−u} E_t( ε²_{t+s} ε_{t+s+u} ε_{t+s+u+v} ).

Now, if γ ≠ 1,

A₁ = lim_{n→∞} n^{−2}( n c₂ + ( μ̃^(2)_{h,1} − c₅ )(1 − γ)^{−1}(1 − γ^n) + c₆ (1 − ϕ)^{−1}(1 − ϕ^n) )
= c₂ lim_{n→∞} n^{−1} + ( μ̃^(2)_{h,1} − c₅ )(1 − γ)^{−1} lim_{n→∞} n^{−2}(1 − γ^n) + c₆ (1 − ϕ)^{−1} lim_{n→∞} n^{−2}(1 − ϕ^n).

For γ = 1:

Σ_{s=1}^n μ̃^(2)_{h,s} = Σ_{s=1}^n ( (s−1)( ω² + 2ωϕh̄ ) + 2ϕh̄ (1 − ϕ^{s−1})( h_{t+1} − h̄ ) + h²_{t+1} )
= ½( ω² + 2ωϕh̄ ) n² + ( 2ϕh̄ ( h_{t+1} − h̄ ) − ½( ω² + 2ωϕh̄ ) + h²_{t+1} ) n − 2ϕh̄ ( h_{t+1} − h̄ )(1 − ϕ)^{−1}(1 − ϕ^n).
So for γ = 1, we obtain that:

A₁ = lim_{n→∞} ½( ω² + 2ωϕh̄ ) + lim_{n→∞} n^{−1}( 2ϕh̄ ( h_{t+1} − h̄ ) − ½( ω² + 2ωϕh̄ ) + h²_{t+1} ) − lim_{n→∞} n^{−2}·2ϕh̄ ( h_{t+1} − h̄ )(1 − ϕ)^{−1}(1 − ϕ^n).

Hence

A₁ = 0 if γ ∈ (0,1); ½( ω² + 2ωϕh̄ ) if γ = 1; ∞ if γ ∈ (1,∞).   (38)

For A₂, using the derivations from the Technical Appendix T.A.1, we can write:

A₂ = lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} E_t( ε²_{t+s} ε²_{t+s+u} )
= lim_{n→∞} n^{−2}[ Σ_{s=1}^n Σ_{u=1}^{n−s} h̄ (1 − ϕ^u) μ̃^(1)_{h,s} + κ_z ( α + λF + κ_z^{−1} β ) Σ_{s=1}^n Σ_{u=1}^{n−s} ϕ^{u−1} μ̃^(2)_{h,s} ].

For γ ≠ 1, the expressions for the two double sums above were derived in the Technical Appendix T.A.1. Using those results, we have:
A₂ = lim_{n→∞} n^{−2} ( h̄[ ½ n(n−1) h̄ + ( h_{t+1} − h̄ )(1 − ϕ)^{−1}( n − (1 − ϕ)^{−1}(1 − ϕ^n) ) − ϕ(1 − ϕ)^{−1}( n h̄ + ( h_{t+1} − h̄ )(1 − ϕ)^{−1}(1 − ϕ^n) − h̄ (1 − ϕ)^{−1}(1 − ϕ^n) − n ( h_{t+1} − h̄ ) ϕ^{n−1} ) ]
+ κ_z ( α + λF + κ_z^{−1} β )(1 − ϕ)^{−1}[ n c₂ + ( h²_{t+1} − c₅ )(1 − γ)^{−1}(1 − γ^n) + c₆ (1 − ϕ)^{−1}(1 − ϕ^n) − c₂ (1 − ϕ)^{−1}(1 − ϕ^n) − ( h²_{t+1} − c₅ )(ϕ − γ)^{−1}( ϕ^n − γ^n ) − n c₆ ϕ^{n−1} ] ).

For γ = 1, the first double sum is unchanged, while the second bracket is replaced by (1 − ϕ)^{−1} times the difference of the sums (31) and (33); its leading term is ½ n(n−1)( ω² + 2ωϕh̄ ). Thus:

A₂ = ½ h̄² if γ ∈ (0,1); ½[ h̄² + κ_z ( α + λF + κ_z^{−1} β )(1 − ϕ)^{−1}( ω² + 2ωϕh̄ ) ] if γ = 1; ∞ if γ ∈ (1,∞).   (39)

Next,

A₃ = c₃ lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} θ^(3/2)_{su} − c₄ lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} ϕ^{n−s−u} θ^(3/2)_{su},

where

θ^(3/2)_{su} = c[ ( μ̃^(1)_{h,s+u} )^{1/2} + ωϕ(ϕ − γ)^{−1}( μ̃^(1)_{h,s+u} )^{−1/2} ] ϕ^{u−1} E_t( h^{3/2}_{t+s} ) + ( μ̃^(1)_{h,s+u} )^{−1/2} γ^{u−1}( c E_t( h^{5/2}_{t+s} ) + 2ωγ(γ − ϕ)^{−1} c E_t( h^{3/2}_{t+s} ) ),

ϕ^{n−s−u} θ^(3/2)_{su} = c[ ( μ̃^(1)_{h,s+u} )^{1/2} + ωϕ(ϕ − γ)^{−1}( μ̃^(1)_{h,s+u} )^{−1/2} ] ϕ^{n−s−1} E_t( h^{3/2}_{t+s} ) + ( μ̃^(1)_{h,s+u} )^{−1/2} ϕ^{n−s−1}( γ/ϕ )^{u−1}( c E_t( h^{5/2}_{t+s} ) + 2ωγ(γ − ϕ)^{−1} c E_t( h^{3/2}_{t+s} ) ),

with the constants c as defined in T.A.1.
We can now write: b_{l,s,n} ≤ Σ_{u=1}^{n−s} θ^(3/2)_{su} ≤ b_{u,s,n}, where the bounds replace each μ̃^(1)_{h,s+u} by its extremum via the operator

q_{z,a,b}( x_z, y ) = max_{a≤z≤b} x_z if y ≥ 0, and min_{a≤z≤b} x_z if y < 0, where z = u or z = s.

For γ ≠ 1:

b_{l,s,n} = ( c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, −c ) ]^{1/2} + ωϕ(ϕ − γ)^{−1} c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, (ϕ − γ)c ) ]^{−1/2} ) E_t( h^{3/2}_{t+s} )(1 − ϕ)^{−1}(1 − ϕ^{n−s})
+ c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, c ) ]^{−1/2} E_t( h^{5/2}_{t+s} ) + 2ωγ(γ − ϕ)^{−1} c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, (γ − ϕ)c ) ]^{−1/2} E_t( h^{3/2}_{t+s} )(1 − γ)^{−1}(1 − γ^{n−s}),

and b_{u,s,n} is the same expression with the signs of the second arguments of q reversed. For γ = 1:

b_{l,s,n} = ( c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, −c ) ]^{1/2} + ωϕ(ϕ − 1)^{−1} c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, −c ) ]^{−1/2} ) E_t( h^{3/2}_{t+s} )(1 − ϕ)^{−1}(1 − ϕ^{n−s})
+ (n − s) c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, c ) ]^{−1/2} E_t( h^{5/2}_{t+s} ) + 2ω(1 − ϕ)^{−1} c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, c ) ]^{−1/2} E_t( h^{3/2}_{t+s} ),

with b_{u,s,n} again obtained by reversing the signs of the second arguments of q.

Also, b_{ul,n} ≤ Σ_{s=1}^n b_{u,s,n} ≤ b_{uu,n}, where for γ ≠ 1:

b_{ul,n} = l_{1,ul,n} (1 − ϕ)^{−1}( n − (1 − ϕ)^{−1}(1 − ϕ^n) ) + l_{2,ul,n} (1 − γ)^{−1}( n − (1 − γ)^{−1}(1 − γ^n) ),
b_{uu,n} = l_{1,uu,n} (1 − ϕ)^{−1}( n − (1 − ϕ)^{−1}(1 − ϕ^n) ) + l_{2,uu,n} (1 − γ)^{−1}( n − (1 − γ)^{−1}(1 − γ^n) ),

where l_{1,ul,n}, l_{2,ul,n}, l_{1,uu,n} and l_{2,uu,n} collect the nested q-extrema over s and u of μ̃^(1)_{h,s+u}, E_t( h^{3/2}_{t+s} ) and E_t( h^{5/2}_{t+s} ) that appear in the displayed bounds above. For γ = 1:

b_{ul,n} = l̃_{1,ul,n} (1 − ϕ)^{−1}( n − (1 − ϕ)^{−1}(1 − ϕ^n) ) + l̃_{2,ul,n} · ½ n(n−1),
b_{uu,n} = l̃_{1,uu,n} (1 − ϕ)^{−1}( n − (1 − ϕ)^{−1}(1 − ϕ^n) ) + l̃_{2,uu,n} · ½ n(n−1),

with the l̃-constants defined analogously.

We have previously shown that L_max = lim_{n→∞} ( max_{1≤s≤n} μ̃^(1)_{h,s} ) = max( h̄, h_{t+1} ) and L_min = lim_{n→∞} ( min_{1≤s≤n} μ̃^(1)_{h,s} ) = min( h̄, h_{t+1} ).
Also, we have:

lim_{s→∞} E_t( h^{3/2}_{t+s} ) = lim_{s→∞} (1/8)( 5 ( μ̃^(1)_{h,s} )^{3/2} + 3 μ̃^(2)_{h,s} ( μ̃^(1)_{h,s} )^{−1/2} )
= (1/8)( 5 h̄^{3/2} + 3 h̄^{−1/2}( c₂ + ( h²_{t+1} − c₅ ) lim_{s→∞} γ^{s−1} + c₆ lim_{s→∞} ϕ^{s−1} ) ) if γ ≠ 1,
and = (1/8)( 5 h̄^{3/2} + 3 h̄^{−1/2} lim_{s→∞}[ (s−1)( ω² + 2ωϕh̄ ) + 2ϕh̄ (1 − ϕ^{s−1})( h_{t+1} − h̄ ) + h²_{t+1} ] ) if γ = 1;

that is,

lim_{s→∞} E_t( h^{3/2}_{t+s} ) = (1/8)( 5 h̄^{3/2} + 3 h̄^{−1/2} c₂ ) if γ ∈ (0,1), and = ∞ if γ ∈ [1,∞).

lim_{s→∞} E_t( h^{5/2}_{t+s} ) = lim_{s→∞} (1/8)( 15 ( μ̃^(1)_{h,s} )^{1/2} μ̃^(2)_{h,s} − 7 ( μ̃^(1)_{h,s} )^{5/2} ) = (1/8)( 15 h̄^{1/2} c₂ − 7 h̄^{5/2} ) if γ ∈ (0,1), and = ∞ if γ ∈ [1,∞).

Hence, when γ ∈ (0,1), lim_{s→∞} E_t( h^{3/2}_{t+s} ) and lim_{s→∞} E_t( h^{5/2}_{t+s} ) exist and are finite. Furthermore, max_{1≤s≤n} E_t( h^{i/2}_{t+s} ) and min_{1≤s≤n} E_t( h^{i/2}_{t+s} ), i = 3 and 5, are bounded. Thus lim_{n→∞}( l_{j,uu,n} ) and lim_{n→∞}( l_{j,ul,n} ), j = 1 and 2, exist and are finite. We get:

lim_{n→∞} n^{−2} b_{ul,n} = lim_{n→∞} n^{−2}[ l_{1,ul,n} (1 − ϕ)^{−1}( n − (1 − ϕ)^{−1}(1 − ϕ^n) ) + l_{2,ul,n} (1 − γ)^{−1}( n − (1 − γ)^{−1}(1 − γ^n) ) ] = 0,
lim_{n→∞} n^{−2} b_{uu,n} = lim_{n→∞} n^{−2}[ l_{1,uu,n} (1 − ϕ)^{−1}( n − (1 − ϕ)^{−1}(1 − ϕ^n) ) + l_{2,uu,n} (1 − γ)^{−1}( n − (1 − γ)^{−1}(1 − γ^n) ) ] = 0.

Thus lim_{n→∞} n^{−2} Σ_{s=1}^n b_{u,s,n} = 0. This translates into:

lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} θ^(3/2)_{su} ≤ lim_{n→∞} n^{−2} Σ_{s=1}^n b_{u,s,n} = 0.

Similarly, it can be shown that 0 = lim_{n→∞} n^{−2} Σ_{s=1}^n b_{l,s,n} ≤ lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} θ^(3/2)_{su}.
Finally, we obtain that lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} θ^(3/2)_{su} = 0 for γ ∈ (0,1). For lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} ϕ^{n−s−u} θ^(3/2)_{su}, write B_{l,s,n} ≤ Σ_{u=1}^{n−s} ϕ^{n−s−u} θ^(3/2)_{su} ≤ B_{u,s,n}, where, for any γ:

B_{l,s,n} = ( c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, −c ) ]^{1/2} + ωϕ(ϕ − γ)^{−1} c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, (ϕ − γ)c ) ]^{−1/2} ) E_t( h^{3/2}_{t+s} )(n − s) ϕ^{n−s−1}
+ c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, c ) ]^{−1/2} E_t( h^{5/2}_{t+s} ) + 2ωγ(γ − ϕ)^{−1} c[ q_{u,1,n−s}( μ̃^(1)_{h,s+u}, (γ − ϕ)c ) ]^{−1/2} E_t( h^{3/2}_{t+s} )(ϕ − γ)^{−1}( ϕ^{n−s} − γ^{n−s} ),

and B_{u,s,n} is the same expression with the signs of the second arguments of q reversed. Also, B_{ul,n} ≤ Σ_{s=1}^n B_{u,s,n} ≤ B_{uu,n}, where for γ ≠ 1:

B_{ul,n} = l_{1,ul,n}[ (1 − ϕ)^{−1}(1 − ϕ^n) − (1 − ϕ)^{−1} n ϕ^{n−1} ] + l_{2,ul,n} (ϕ − γ)^{−1}[ (1 − ϕ)^{−1}(1 − ϕ^n) − (1 − γ)^{−1}(1 − γ^n) ],
B_{uu,n} = l_{1,uu,n}[ (1 − ϕ)^{−1}(1 − ϕ^n) − (1 − ϕ)^{−1} n ϕ^{n−1} ] + l_{2,uu,n} (ϕ − γ)^{−1}[ (1 − ϕ)^{−1}(1 − ϕ^n) − (1 − γ)^{−1}(1 − γ^n) ].

For γ = 1:

B_{ul,n} = l̃_{1,ul,n}[ (1 − ϕ)^{−1}(1 − ϕ^n) − (1 − ϕ)^{−1} n ϕ^{n−1} ] + l̃_{2,ul,n} (1 − ϕ)^{−1}( n − (1 − ϕ)^{−1}(1 − ϕ^n) ),

and B_{uu,n} is defined analogously with the uu-constants.

For γ ∈ (0,1) we obtained that lim_{n→∞}( l_{j,uu,n} ) and lim_{n→∞}( l_{j,ul,n} ), j = 1, 2, exist and are finite. Thus lim_{n→∞} n^{−2} B_{ul,n} = 0 and lim_{n→∞} n^{−2} B_{uu,n} = 0, so lim_{n→∞} n^{−2} Σ_{s=1}^n B_{u,s,n} = 0, yielding

lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} ϕ^{n−s−u} θ^(3/2)_{su} ≤ lim_{n→∞} n^{−2} Σ_{s=1}^n B_{u,s,n} = 0.

Similarly, it can be shown that 0 = lim_{n→∞} n^{−2} Σ_{s=1}^n B_{l,s,n} ≤ lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} ϕ^{n−s−u} θ^(3/2)_{su}. Finally, we get lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} ϕ^{n−s−u} θ^(3/2)_{su} = 0; we also showed above that lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} θ^(3/2)_{su} = 0; therefore, A₃ = 0 for γ ∈ (0,1).

For γ ∈ (1,∞), we showed that A₁ = A₂ = ∞. Thus if we can show that A₃ is bounded below, then the kurtosis of aggregated returns will diverge to plus infinity in this case. Recalling that c₃ = τ_z + 3(1 − ϕ)^{−1} c₁, with c₁ = ατ_z + λ ∫_{−∞}^0 x³ f(x) dx, we can write:

A₃ = lim_{n→∞} n^{−2} Σ_{s=1}^n Σ_{u=1}^{n−s} { ( μ̃^(1)_{h,s+u} )^{−1/2} c₄( c₃ + ( τ_z − c₃ ) ϕ^{n−s−u} )[ μ̃^(1)_{h,s+u} ϕ^{u−1} + ω(ϕ − γ)^{−1}( ϕ^u − γ^u ) ] E_t( h^{3/2}_{t+s} ) + ( μ̃^(1)_{h,s+u} )^{−1/2} γ^{u−1} c₄( c₃ + ( τ_z − c₃ ) ϕ^{n−s−u} ) E_t( h^{5/2}_{t+s} ) }.
It is reasonable to assume that sgn(τ_z) = −sgn(λ) when τ_z ≠ 0 and λ ≠ 0, since a positive λ means that volatility is more responsive to negative shocks than to positive shocks of the same magnitude, and this translates into a negative skew of the aggregated returns. (This is a sufficient but not a necessary condition.)

If sgn(τ_z) = −sgn(λ) ≠ 0, then sgn(τ_z) = sgn(c₃) = sgn(c₄), and consequently sgn( c₄( c₃ + ( τ_z − c₃ ) ϕ^{n−s−u} ) ) = sgn( c₄( τ_z + 3(1 − ϕ)^{−1} c₁ (1 − ϕ^{n−s−u}) ) ) = 1. For sgn(τ_z) = sgn( μ^(5)_z ), sgn(c₃) = sgn(c₄) and sgn( c₄[ c₃ + ϕ^{n−s−u}( τ_z − c₃ ) ] ) = sgn( c₄( τ_z + 3(1 − ϕ)^{−1} c₁ (1 − ϕ^{n−s−u}) ) ) = 1. Furthermore, for either { τ_z = 0 and λ ≠ 0 } or { τ_z ≠ 0 and λ = 0 }, we still get sgn( c₄( c₃ + ( τ_z − c₃ ) ϕ^{n−s−u} ) ) = 1 if sgn(τ_z) = sgn( μ^(5)_z ); hence all terms in A₃ are positive if |λ| + |τ_z| ≠ 0. Finally, for τ_z = λ = 0, A₃ = 0. We have thus shown that A₃ is bounded below (by 0), and hence lim_{n→∞} K_{r,n} = ∞ for γ ∈ (1,∞).

For γ = 1, we can write:

θ^(3/2)_{su} = c[ ( μ̃^(1)_{h,s+u} )^{1/2} + ωϕ(ϕ − 1)^{−1}( μ̃^(1)_{h,s+u} )^{−1/2} ] ϕ^{u−1} E_t( h^{3/2}_{t+s} ) + ( μ̃^(1)_{h,s+u} )^{−1/2}( c E_t( h^{5/2}_{t+s} ) + 2ω(1 − ϕ)^{−1} c E_t( h^{3/2}_{t+s} ) ),

ϕ^{n−s−u} θ^(3/2)_{su} = c[ ( μ̃^(1)_{h,s+u} )^{1/2} + ωϕ(ϕ − 1)^{−1}( μ̃^(1)_{h,s+u} )^{−1/2} ] ϕ^{n−s−1} E_t( h^{3/2}_{t+s} ) + ( μ̃^(1)_{h,s+u} )^{−1/2} ϕ^{n−s−u}( c E_t( h^{5/2}_{t+s} ) + 2ω(1 − ϕ)^{−1} c E_t( h^{3/2}_{t+s} ) ).
Thus, for $\gamma = 1$, the expression for $A_2$ is:
$$A_2 = \lim_{n\to\infty} n^{-2}\sum_{s=1}^{n}\sum_{u=1}^{n-s} c_5\left(c_6 + (\tau_z - c_6)\varphi^{n-s-u}\right)\left(\tilde\mu^{(1)}_{h,s+u}\right)^{-1/2}\left[\tilde\mu^{(1)}_{h,s+u}\varphi^{u-1} + \omega(1-\varphi)^{-1}\left(1-\varphi^{u}\right)\right] E_t\left(h^{3/2}_{t+s}\right) + c_5\left(c_6 + (\tau_z - c_6)\varphi^{n-s-u}\right)\left(\tilde\mu^{(1)}_{h,s+u}\right)^{-1/2} E_t\left(h^{3/2}_{t+s}\right).$$
Since $c_5$ and $c_6$ do not depend on $\gamma$, we still have that $sgn(c_5) = sgn(c_6)$ and that $sgn\left(c_5\left(c_6 + (\tau_z - c_6)\varphi^{n-s-u}\right)\right) = 1$. Now we write:
$$A_2 \ge \lim_{n\to\infty} n^{-2}\sum_{s=1}^{n}\sum_{u=1}^{n-s} \tfrac{3}{8}\, c_5\left(c_6 + (\tau_z - c_6)\varphi^{n-s-u}\right)\left(\tilde\mu^{(1)}_{h,s+u}\right)^{-1/2} E_t\left(h^{3/2}_{t+s}\right)$$
$$\ge \lim_{n\to\infty}\left[\max_{1\le s\le n}\left(\tilde\mu^{(1)}_{h,s+u}\right)\right]^{-1/2}\lim_{n\to\infty} n^{-2}\sum_{s=1}^{n}\sum_{u=1}^{n-s} c_5\left(c_6 + (\tau_z - c_6)\varphi^{n-s-u}\right) E_t\left(h^{3/2}_{t+s}\right)$$
$$\ge \lim_{n\to\infty}\left[\max_{1\le s\le n}\left(\tilde\mu^{(1)}_{h,s+u}\right)\right]^{-1/2}\lim_{n\to\infty}\left[\min_{1\le s\le n}\min_{1\le u\le n-s}\left[c_5\left(c_6 + (\tau_z - c_6)\varphi^{n-s-u}\right)\right]\right]\lim_{n\to\infty} n^{-2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\left(\left(\tilde\mu^{(1)}_{h,s}\right)^{-1/2}\tilde\mu^{(2)}_{h,s} - \left(\tilde\mu^{(1)}_{h,s}\right)^{3/2}\right).$$
Since $\varphi \in (0,1)$, $\lim_{n\to\infty}\left[\min_{1\le s\le n}\min_{1\le u\le n-s}\left[c_5\left(c_6 + (\tau_z - c_6)\varphi^{n-s-u}\right)\right]\right]$ and $\lim_{n\to\infty}\left[\max_{1\le s\le n}\left(\tilde\mu^{(1)}_{h,s+u}\right)\right]$ are finite, and we have shown above that they are positive if $\tau_z \neq 0$ or $\lambda \neq 0$.
In this case,
$$\lim_{n\to\infty} n^{-2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\left(\left(\tilde\mu^{(1)}_{h,s}\right)^{-1/2}\tilde\mu^{(2)}_{h,s} - \left(\tilde\mu^{(1)}_{h,s}\right)^{3/2}\right) = \lim_{n\to\infty} n^{-2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\left(\tilde\mu^{(1)}_{h,s}\right)^{-1/2}\left((s-1)\left(\omega^2 + 2\omega\varphi\bar h\right) + 2\varphi\bar h\left(1-\varphi^{s-1}\right)\left(h_{t+1}-\bar h\right) + h^2_{t+1} - \left(\tilde\mu^{(1)}_{h,s}\right)^2\right)$$
$$= 15\left(\omega^2 + 2\omega\varphi\bar h\right)\lim_{n\to\infty} n^{-2}\sum_{s=1}^{n}\left(\tilde\mu^{(1)}_{h,s}\right)^{-1/2}(n-s)(s-1) \ge 15\left(\omega^2 + 2\omega\varphi\bar h\right)\lim_{n\to\infty}\left[\max_{1\le s\le n}\left(\tilde\mu^{(1)}_{h,s}\right)\right]^{-1/2}\lim_{n\to\infty} n^{-2}\,\frac{n(n-1)(n-2)}{6} = \infty,$$
where $15\left(\omega^2 + 2\omega\varphi\bar h\right)\lim_{n\to\infty}\left[\max_{1\le s\le n}\left(\tilde\mu^{(1)}_{h,s}\right)\right]^{-1/2}$ is positive and finite. Thus $A_2 = \infty$ for $\gamma = 1$ and $\tau_z \neq 0$ and/or $\lambda \neq 0$, while for $\gamma = 1$ and $\tau_z = \lambda = 0$ we have $c_5 = c_6 = 0$ and $A_2 = 0$. This completes our proof of the final expression for the limit of the aggregated kurtosis given in Theorem 3, equation (10).

For the normal GJR, $\tau_z = 0$ (but $\lambda \neq 0$), thus the expression for the limit of the aggregated kurtosis in this case simplifies to:
$$\lim_{n\to\infty} K_{r,n} = \begin{cases} 3 & \text{if } \gamma \in (0,1), \\ \infty & \text{if } \gamma \in [1,\infty). \end{cases}$$
For the normal GARCH(1,1), we have $\tau_z = \lambda = 0$ and the limit of the aggregated kurtosis is now finite for $\gamma = 1$, although different from the normal value of 3. Thus, for the normal GARCH(1,1), we get:
$$\lim_{n\to\infty} K_{r,n} = \begin{cases} 3 & \text{if } \gamma \in (0,1), \\ \left[(1+\alpha+\beta)(1+5\alpha+\beta)\right] & \text{if } \gamma = 1, \\ \infty & \text{if } \gamma \in (1,\infty). \end{cases}$$

T.A.4: Limits of the Moments of Variances
This appendix derives the limits of the conditional moments of forward and aggregated variances of the generic GJR model as the time horizon increases. In what follows we use the notation and assumptions defined at the start of the Appendix; additionally, we assume $c_2 = \varphi$, $c_3 = \gamma$.

(a) Limit of the Variance of Forward Variance

$$\lim_{s\to\infty}\mu^{(2)}_{h,s} = \lim_{s\to\infty}\left(\tilde\mu^{(2)}_{h,s} - \left(\tilde\mu^{(1)}_{h,s}\right)^2\right) = \begin{cases}\lim_{s\to\infty}\left(c_4 - \bar h^2\right) + \left(-c_4 + h^2_{t+1}\right)\gamma^{s-1} + \left[c_4 - 2\bar h\left(h_{t+1}-\bar h\right)\right]\varphi^{s-1} - \varphi^{2(s-1)}\left(h_{t+1}-\bar h\right)^2 & \text{if } \gamma \neq 1, \\[4pt] \lim_{s\to\infty}\left(2\omega\varphi\bar h\left(h_{t+1}-\bar h\right) + h^2_{t+1} - \bar h^2\right) + (s-1)\left(\omega^2 + 2\omega\varphi\bar h\right) - 2\bar h\left(\omega\varphi - \left(h_{t+1}-\bar h\right)\right)\varphi^{s-1} - \varphi^{2(s-1)}\left(h_{t+1}-\bar h\right)^2 & \text{if } \gamma = 1, \end{cases}$$
$$= \begin{cases} c_4 - \bar h^2 & \text{if } \gamma \in (0,1), \\ \infty & \text{if } \gamma \in [1,\infty), \end{cases}$$
where $c_4 - \bar h^2 > h^2_{t+1} - c_4 > 0$.

(b) Limit of the Variance of Aggregated Variance
For $\gamma \neq 1$, using (24) and (28)-(30), the expression for the variance of the aggregated variance can be written as:
$$M^{(2)}_{h,n} = An + B\gamma^{n} + C_n,$$
where
$$A = \left(c_4 - \bar h^2\right)\left(2\varphi(1-\varphi)^{-1} + 1\right), \quad B = -\left(\tilde\mu^{(2)}_{h,1} - c_4\right)(1-\gamma)^{-1} - 2\varphi(1-\varphi)^{-1}\left(\tilde\mu^{(2)}_{h,1} - c_4\right)\left[(1-\gamma)^{-1} - (\varphi-\gamma)^{-1}\right], \quad C_n = f\left(n\varphi^{n}, \varphi^{n}\right).$$
Since $\varphi \in (0,1)$, $\lim_{n\to\infty}\varphi^{n} = \lim_{n\to\infty} n\varphi^{n} = 0$ and there exists a finite $C$ such that $\lim_{n\to\infty} C_n = C$. (Showing that $\lim_{n\to\infty} n\varphi^{n} = 0$ for $\varphi \in (0,1)$ is rather immediate: if we define $y = 1/\varphi$, then $y > 1$, and $\lim_{n\to\infty} n\varphi^{n} = \lim_{n\to\infty} n/y^{n} = 0$.) Thus, the limit of the conditional variance of the aggregated conditional variance becomes:
$$\lim_{n\to\infty} M^{(2)}_{h,n} = \begin{cases} sgn(A)\cdot\infty & \text{if } \gamma \in (0,1), \\ sgn(B)\cdot\infty & \text{if } \gamma \in (1,\infty), \end{cases} \qquad (40)$$
where it can be easily seen that $sgn(A) = 1$ and $sgn(B) = 1$.

For $\gamma = 1$, using (22)-(23), (27), and (29)-(33), the expression for the variance of the aggregated variance becomes:
$$M^{(2)}_{h,n} = A'n^2 + B'n + C',$$
where $A' = \left[2\varphi(1-\varphi)^{-1} + 1\right]\left(\omega^2 + 2\omega\varphi\bar h\right) > 0$. (As in the $\gamma \neq 1$ case above, $C' = f\left(n\varphi^{n}, \varphi^{n}\right)$ and $\lim_{n\to\infty} C'$ is a finite constant; $B'$ is also a constant, but its value and sign are irrelevant for the limit.) Hence $\lim_{n\to\infty} M^{(2)}_{h,n} = \infty$ for any $\gamma$.

As with the variance of aggregated returns, the conditional variance of the aggregated conditional variance diverges to infinity as the time horizon increases. It is meaningful to compute the limit of the daily variance, i.e. the variance divided by time. However, unlike the daily (one-period) variance of aggregated returns, which converges to the level of the (daily) unconditional variance, the daily conditional variance of the aggregated conditional variance diverges to infinity under certain parameter conditions. (Unlike the conditional variance of the aggregated returns, which depends only on powers of the $\varphi$ parameter, which takes values only between 0 and 1, the conditional variance of the aggregated variance also depends on powers of the $\gamma$ parameter, which can take any positive value.) The $n$ terms cancel out:
$$\lim_{n\to\infty}\frac{M^{(2)}_{h,n}}{n} = \begin{cases}\lim_{n\to\infty}\left(A + B\frac{\gamma^{n}}{n} + \frac{C_n}{n}\right) & \text{if } \gamma \neq 1, \\[4pt] \lim_{n\to\infty}\left(A'n + B' + \frac{C'}{n}\right) & \text{if } \gamma = 1, \end{cases} \;=\; \begin{cases} A & \text{if } \gamma \in (0,1), \\ \infty & \text{if } \gamma \in [1,\infty). \end{cases}$$

(c) Limit of the Skewness of Forward Variance

$$\lim_{s\to\infty}\tau_{h,s} = \lim_{s\to\infty}\left(\mu^{(3)}_{h,s}\left(\mu^{(2)}_{h,s}\right)^{-3/2}\right).$$
For $\gamma \in (0,1)$, $\lim_{s\to\infty}\tau_{h,s} = \lim_{s\to\infty}\left(\mu^{(3)}_{h,s}\right)\Big/\left[\lim_{s\to\infty}\left(\mu^{(2)}_{h,s}\right)\right]^{3/2}$, where $\lim_{s\to\infty}\mu^{(2)}_{h,s} = c_4 - \bar h^2$. Using (34), we get:
$$\lim_{s\to\infty}\tau_{h,s} = \begin{cases} M_1 & \text{if } \gamma \in (0,1) \text{ and } c_5 \in (0,1), \\ \infty & \text{if } \gamma \in (0,1) \text{ and } c_5 \in [1,\infty), \end{cases} \qquad (41)$$
where $M_1 = \dfrac{\omega\left(\omega^2 + 3\omega\varphi\bar h + 3\gamma c_4\right)(1-c_5)^{-1} - 3\bar h c_4 + 2\bar h^3}{\left(c_4 - \bar h^2\right)^{3/2}}$.

For $\gamma \in (1,\infty)$, we can write $\lim_{s\to\infty}\tau_{h,s} = \lim_{s\to\infty}\left(\gamma^{-(3/2)s}\mu^{(3)}_{h,s}\right)\Big/\left[\lim_{s\to\infty}\left(\gamma^{-s}\mu^{(2)}_{h,s}\right)\right]^{3/2}$, where $\lim_{s\to\infty}\left(\gamma^{-s}\mu^{(2)}_{h,s}\right) = \gamma^{-1}\left(-c_4 + h^2_{t+1}\right)$. Using (34), we get:
$$\lim_{s\to\infty}\tau_{h,s} = \begin{cases} \infty & \text{if } \gamma \in (1,\infty) \text{ and } c_5 > \gamma^{3/2}, \\ 0 & \text{if } \gamma \in (1,\infty) \text{ and } c_5 \in \left(0, \gamma^{3/2}\right), \\ M_2 & \text{if } \gamma \in (1,\infty) \text{ and } c_5 = \gamma^{3/2}, \end{cases} \qquad (42)$$
where
$$M_2 = \frac{h^3_{t+1} - \omega\left(\omega^2 + 3\omega\varphi\bar h + 3\gamma c_4\right)(1-c_5)^{-1} - \left(3\omega^2\varphi\left(h_{t+1}-\bar h\right) + 3\omega\gamma\left(-c_4 + h^2_{t+1}\right)\right)(\varphi - c_5)^{-1}}{\left(\gamma^{-1}\left(-c_4 + h^2_{t+1}\right)\right)^{3/2}}.$$

For $\gamma = 1$, we can write $\lim_{s\to\infty}\tau_{h,s} = \lim_{s\to\infty}\left(s^{-3/2}\mu^{(3)}_{h,s}\right)\Big/\left[\lim_{s\to\infty}\left(s^{-1}\mu^{(2)}_{h,s}\right)\right]^{3/2}$, where $\lim_{s\to\infty}\left(s^{-1}\mu^{(2)}_{h,s}\right) = \omega^2 + 2\omega\varphi\bar h$. In this case, the limit becomes:
$$\lim_{s\to\infty}\tau_{h,s} = \lim_{s\to\infty}\left(s^{-3/2}\tilde\mu^{(3)}_{h,s}\right) = \begin{cases} 0 & \text{if } c_5 \in (0,1), \\ \infty & \text{if } c_5 \in (1,\infty). \end{cases} \qquad (43)$$
Now, (41)-(43) give (13) in Theorem 4.

(d) Limit of the Skewness of Aggregated Variance

$$\lim_{n\to\infty} T_{h,n} = \lim_{n\to\infty}\left[M^{(3)}_{h,n}\left(M^{(2)}_{h,n}\right)^{-3/2}\right].$$
For $\gamma \in (0,1)$, $\lim_{n\to\infty} T_{h,n} = \lim_{n\to\infty}\left(n^{-3/2}M^{(3)}_{h,n}\right)\Big/\left[\lim_{n\to\infty}\left(n^{-1}M^{(2)}_{h,n}\right)\right]^{3/2}$, where $\lim_{n\to\infty}\left(n^{-1}M^{(2)}_{h,n}\right) = \left(c_4 - \bar h^2\right)\left(2\varphi(1-\varphi)^{-1} + 1\right)$.
Also, we write:
$$n^{-3/2}M^{(3)}_{h,n} = L_1 - 3L_2 + 2L_3 + 3\left(L_4 + L_5 + 2\left(L_6 + L_7 - L_8 - L_9\right) - L_{10} - L_{11}\right) + 6\left(L_{12} - L_{13} - L_{14} - L_{15} + 2L_{16}\right),$$
where
$$L_1 = n^{-3/2}\sum_{s=1}^{n}\tilde\mu^{(3)}_{h,s}, \quad L_2 = n^{-3/2}\sum_{s=1}^{n}\tilde\mu^{(2)}_{h,s}\tilde\mu^{(1)}_{h,s}, \quad L_3 = n^{-3/2}\sum_{s=1}^{n}\left(\tilde\mu^{(1)}_{h,s}\right)^3,$$
$$L_4 = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\tilde\mu^{(2,1)}_{h,su}, \quad L_5 = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\tilde\mu^{(1,2)}_{h,su}, \quad L_6 = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\left(\tilde\mu^{(1)}_{h,s}\right)^2\tilde\mu^{(1)}_{h,s+u}, \quad L_7 = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\left(\tilde\mu^{(1)}_{h,s+u}\right)^2\tilde\mu^{(1)}_{h,s},$$
$$L_8 = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\tilde\mu^{(1)}_{h,s}\tilde\mu^{(1,1)}_{h,su}, \quad L_9 = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\tilde\mu^{(1)}_{h,s+u}\tilde\mu^{(1,1)}_{h,su}, \quad L_{10} = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\tilde\mu^{(1)}_{h,s}\tilde\mu^{(2)}_{h,s+u}, \quad L_{11} = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\tilde\mu^{(1)}_{h,s+u}\tilde\mu^{(2)}_{h,s},$$
$$L_{12} = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\tilde\mu^{(1,1,1)}_{h,suv}, \quad L_{13} = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\tilde\mu^{(1)}_{h,s}\tilde\mu^{(1,1)}_{h,(s+u)v}, \quad L_{14} = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\tilde\mu^{(1)}_{h,s+u}\tilde\mu^{(1,1)}_{h,s(u+v)},$$
$$L_{15} = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\tilde\mu^{(1)}_{h,s+u+v}\tilde\mu^{(1,1)}_{h,su}, \quad L_{16} = n^{-3/2}\sum_{s=1}^{n}\sum_{u=1}^{n-s}\sum_{v=1}^{n-s-u}\tilde\mu^{(1)}_{h,s}\tilde\mu^{(1)}_{h,s+u}\tilde\mu^{(1)}_{h,s+u+v}.$$
Performing the necessary (tedious but straightforward) calculations and using the notation $R_i$, $\tilde R_i$, $i \in \{1, 2, \ldots, 16$
$\}$ with $\lim_{n\to\infty} R_i = \lim_{n\to\infty}\tilde R_i = 0$, we get:
$$L_1 = \begin{cases}\omega\left(\omega^2 + 3\omega\varphi\bar h + 3\gamma c_4\right)(1-c_5)^{-1}\,\dfrac{n}{n^{3/2}} + R_1 & \text{if } c_5 \neq 1, \\[4pt] \omega\left(\omega^2 + 3\omega\varphi\bar h + 3\gamma c_4\right)\dfrac{n}{n^{3/2}} + \tilde R_1 & \text{if } c_5 = 1, \end{cases} \qquad L_2 = R_2, \qquad L_3 = R_3,$$
$$L_4 = \begin{cases}\varphi c_4(1-c_5)^{-1}(\varphi-c_5)^{-1}c_5^{n}\,\dfrac{n}{n^{3/2}} + \bar h c_4\,\dfrac{n}{n^{3/2}} + R_4 & \text{if } c_5 \neq 1, \\[4pt] \bar h\left(c_4 + \varphi\left(\omega^2 + 3\bar h\varphi\omega + 3\gamma c_4\right)\right)\dfrac{n}{n^{3/2}} + \tilde R_4 & \text{if } c_5 = 1, \end{cases}$$
$$L_5 = \begin{cases}\gamma c_4(1-c_5)^{-1}(\gamma-c_5)^{-1}c_5^{n}\,\dfrac{n}{n^{3/2}} + c_4\bar h\,\dfrac{n}{n^{3/2}} + R_5 & \text{if } c_5 \neq 1, \\[4pt] \left[\gamma(1-\gamma)^{-1}\omega\left(\omega^2 + 3\bar h\varphi\omega + 3\gamma c_4\right) + \omega^2(1-\gamma)^{-1}\bar h + \omega\varphi\bar h^2\right]\dfrac{n}{n^{3/2}} + \tilde R_5 & \text{if } c_5 = 1, \end{cases}$$
$$L_6 = \bar h^3\,\frac{n(n-1)}{2n^{3/2}} + R_6, \quad L_7 = \bar h^3\,\frac{n(n-1)}{2n^{3/2}} + R_7, \quad L_8 = \bar h^3\,\frac{n(n-1)}{2n^{3/2}} + R_8, \quad L_9 = \bar h^3\,\frac{n(n-1)}{2n^{3/2}} + R_9,$$
$$L_{10} = c_4\bar h\,\frac{n(n-1)}{2n^{3/2}} + R_{10}, \quad L_{11} = c_4\bar h\,\frac{n(n-1)}{2n^{3/2}} + R_{11},$$
$$L_{12} = \begin{cases}\varphi\gamma c_4(c_5-1)^{-1}(c_5-\varphi)^{-1}(c_5-\gamma)^{-1}c_5^{n}\,\dfrac{n}{n^{3/2}} + \bar h^3\,\dfrac{n(n-1)(n-2)}{6n^{3/2}} + \bar h(1-\varphi)^{-1}\left(\varphi\left(c_4 - \bar h^2\right) + \bar h\left(\left(h_{t+1}-\bar h\right) - \omega\right)\right)\dfrac{n}{n^{3/2}} + R_{12} & \text{if } c_5 \neq 1, \\[4pt] \bar h^3\,\dfrac{n(n-1)(n-2)}{6n^{3/2}} + \left[\bar h(1-\varphi)^{-1}\left(c_4(1+\varphi) - \varphi\bar h^2 + \bar h\left(\left(h_{t+1}-\bar h\right) - \omega\right)\right) + \varphi\gamma(1-\gamma)^{-1}\left(\omega^2 + 3\omega\varphi\bar h + 3\gamma c_4\right)\right]\dfrac{n}{n^{3/2}} + \tilde R_{12} & \text{if } c_5 = 1, \end{cases}$$
$$L_{13} = \bar h^3\,\frac{n(n-1)(n-2)}{6n^{3/2}} + \bar h\left(-\bar h^2 + (1-\varphi)^{-1}\left(\bar h\left(h_{t+1}-\bar h\right) + \varphi\left(c_4 - \bar h^2\right)\right)\right)\frac{n}{n^{3/2}} + R_{13},$$
$$L_{14} = \bar h^3\,\frac{n(n-1)(n-2)}{6n^{3/2}} + \bar h^2\left(-\bar h + \left(h_{t+1}-\bar h\right)(1-\varphi)^{-1}\right)\frac{n}{n^{3/2}} + R_{14},$$
$$L_{15} = \bar h^3\,\frac{n(n-1)(n-2)}{6n^{3/2}} + \bar h(1-\varphi)^{-1}\left(-\bar h^2 + \bar h h_{t+1} + c_4\varphi\right)\frac{n}{n^{3/2}} + R_{15},$$
$$L_{16} = \bar h^3\,\frac{n(n-1)(n-2)}{6n^{3/2}} + \bar h^2\left(-\bar h + \left(h_{t+1}-\bar h\right)(1-\varphi)^{-1}\right)\frac{n}{n^{3/2}} + R_{16}.$$
Performing the necessary calculations, we obtain the expression in Theorem 4, equation (14), where:
$$N_1 = \omega\left(\omega^2 + 3\omega\varphi\bar h + 3\gamma c_4\right) + 3\bar h\left(c_4 + \varphi\left(\omega^2 + 3\omega\varphi\bar h + 3\gamma c_4\right) + \omega^2(1-\gamma)^{-1} + 2\omega\varphi\bar h^2\right) + 3\gamma(1-\gamma)^{-1}\left(\omega^2 + 3\omega\varphi\bar h + 3\gamma c_4\right)\left(\omega + 2\varphi\bar h\right) + 3(1-\varphi)^{-1}\bar h\left[c_4(1+\varphi) - \varphi\bar h^2 + \bar h\left(\left(h_{t+1}-\bar h\right) - \omega\right) - \varphi\left(c_4 - \bar h^2\right)\right].$$
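The constant $N_1$ is unwieldy, but the qualitative content of the $\gamma \in (0,1)$ case of equation (14), namely that the aggregated conditional variance remains positively skewed, is easy to check by simulation. The sketch below uses a GJR process with normal innovations; the parameter values are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

# Sketch: sample skewness of sum_{s=1}^n h_{t+s} for a GJR(1,1) process with
# normal innovations stays clearly positive. Parameters are assumed values.

def gjr_aggregated_variance(omega, alpha, lam, beta, n_paths, burn, horizon, seed=1):
    rng = np.random.default_rng(seed)
    # unconditional variance with symmetric innovations: omega / (1 - alpha - lam/2 - beta)
    h = np.full(n_paths, omega / (1.0 - alpha - lam / 2.0 - beta))
    for _ in range(burn):  # burn-in so h is an approximate stationary draw
        e = np.sqrt(h) * rng.standard_normal(n_paths)
        h = omega + (alpha + lam * (e < 0)) * e**2 + beta * h
    agg_h = np.zeros(n_paths)
    for _ in range(horizon):
        e = np.sqrt(h) * rng.standard_normal(n_paths)
        agg_h += h  # accumulate the conditional variance path
        h = omega + (alpha + lam * (e < 0)) * e**2 + beta * h
    return agg_h

def skew(x):
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

agg = gjr_aggregated_variance(omega=0.02, alpha=0.05, lam=0.08, beta=0.88,
                              n_paths=100_000, burn=400, horizon=50)
print(skew(agg))  # positive: the aggregated variance is right-skewed
```

The positive sample skewness reflects the right-skewed, positively autocorrelated nature of the conditional variance process; the chosen parameters satisfy the stationarity and fourth-moment conditions.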
For $\gamma \in (1,\infty)$, we can write $\lim_{n\to\infty} T_{h,n} = \lim_{n\to\infty}\left(\gamma^{-(3/2)n}M^{(3)}_{h,n}\right)\Big/\left[\lim_{n\to\infty}\left(\gamma^{-n}M^{(2)}_{h,n}\right)\right]^{3/2}$, where
$$\lim_{n\to\infty}\left(\gamma^{-n}M^{(2)}_{h,n}\right) = -\left(\tilde\mu^{(2)}_{h,1} - c_4\right)(1-\gamma)^{-1} - 2\varphi(1-\varphi)^{-1}\left(\tilde\mu^{(2)}_{h,1} - c_4\right)\left[(1-\gamma)^{-1} - (\varphi-\gamma)^{-1}\right].$$
For $\gamma = 1$, we can write $\lim_{n\to\infty} T_{h,n} = \lim_{n\to\infty}\left(n^{-3}M^{(3)}_{h,n}\right)\Big/\left[\lim_{n\to\infty}\left(n^{-2}M^{(2)}_{h,n}\right)\right]^{3/2}$, where
$$\lim_{n\to\infty}\left(n^{-2}M^{(2)}_{h,n}\right) = \left[2\varphi(1-\varphi)^{-1} + 1\right]\left(\omega^2 + 2\omega\varphi\bar h\right).$$
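The growth rates used in these normalizations can also be seen numerically. For $\gamma \in (0,1)$, part (b) gives $M^{(2)}_{h,n} = An + B\gamma^{n} + C_n$, so the conditional variance of the aggregated variance grows roughly linearly in the horizon once transients die out. The sketch below (normal GARCH(1,1), illustrative assumed parameters) tracks $\mathrm{Var}_t\left(\sum_{s=1}^{n} h_{t+s}\right)$ across two horizons:

```python
import numpy as np

# Sketch: Monte Carlo estimate of the conditional variance of the aggregated
# conditional variance, starting from a known h_{t+1}. Parameters are assumed
# values with gamma < 1, so the variance grows roughly linearly in n.

def var_of_aggregated_variance(omega, alpha, beta, n_paths, horizons, seed=2):
    rng = np.random.default_rng(seed)
    h = np.full(n_paths, omega / (1.0 - alpha - beta))  # known h_{t+1}
    running = np.zeros(n_paths)
    out = {}
    for t in range(1, max(horizons) + 1):
        running += h  # add h_{t+s} before it is updated
        if t in horizons:
            out[t] = running.var()
        r = np.sqrt(h) * rng.standard_normal(n_paths)
        h = omega + alpha * r**2 + beta * h
    return out

v = var_of_aggregated_variance(omega=0.05, alpha=0.08, beta=0.85,
                               n_paths=100_000, horizons={50, 100})
print(v[50], v[100], v[100] / v[50])  # variance keeps growing with the horizon
```

Doubling the horizon from 50 to 100 roughly doubles the variance (up to the transient terms), consistent with the linear leading term $An$ in the $\gamma \in (0,1)$ case.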