Testing for unit roots based on sample autocovariances
A Power-One Test for Unit Roots Based on Sample Autocovariances

Jinyuan Chang, Guanghui Cheng, Qiwei Yao
Southwestern University of Finance and Economics; Guangzhou University; London School of Economics and Political Science
Abstract
We propose a new unit-root test for a stationary null hypothesis H_0 against a unit-root alternative H_1. Our approach is nonparametric as the null hypothesis only assumes that the process concerned is I(0), without specifying any parametric form. The new test is based on the fact that the sample autocovariance function (ACF) converges to the finite population ACF for an I(0) process, while it diverges to infinity with probability approaching one for a process with unit roots. Therefore the new test rejects the null hypothesis for large values of the sample ACF. To address the technical challenge 'how large is large', we split the sample and establish an appropriate normal approximation for the null distribution of the test statistic. The substantial discriminative power of the new test statistic is rooted in the fact that it takes a finite value under H_0 and diverges to infinity almost surely under H_1. This allows us to truncate the critical values of the test so that it achieves asymptotic power one. It also alleviates the loss of power due to the sample-splitting. The finite sample properties of the test are illustrated by simulation, which shows its stable and more powerful performance in comparison with the KPSS test (Kwiatkowski et al., 1992). The test is implemented in a user-friendly R-function.

Keywords: Autocovariance; Integrated processes; Normal approximation; Power-one test; Sample-splitting.
1 Introduction

Unit-root is one of the most frequently used settings for modeling nonstationary time series. Its importance stems from the fact that many economic, financial, business and social-domain time series exhibit segmented trend-like or random wandering phenomena. While the random-walk-like behavior of stock prices was noted and recorded much earlier by, for example, Jules Regnault, a French broker, in 1863, and then by Louis Bachelier in his 1900 PhD thesis, the development of statistical inference for unit roots only started in the late 1970s. Nevertheless the literature on unit-root tests by now is immense and diverse. We only state a selection of some important developments below, which naturally leads to the new test presented in this paper.

The Dickey-Fuller tests (Dickey and Fuller, 1979, 1981) dealt with Gaussian random walks with independent error terms. Efforts to relax the condition of independent Gaussian errors led to, among others, the augmented Dickey-Fuller (ADF) tests (Said and Dickey, 1984; Elliott, Rothenberg and Stock, 1996; Xiao and Phillips, 1997), which replace the error term by an autoregressive process, and the Phillips-Perron test (Phillips, 1987; Phillips and Perron, 1988), which estimates the long-run variance of the error process nonparametrically using the methods of Newey and West (1987) and Andrews (1991). The ADF tests have been further extended to deal with structural breaks in trend (Zivot and Andrews, 1992), long memory processes (Robinson, 1994), seasonal unit roots (Hylleberg et al., 1990; Chan and Wei, 1988), bootstrap unit-root tests (Paparoditis and Politis, 2005), nonstationary volatility (Cavaliere and Taylor, 2007), panel data (Pesaran, 2007), and locally stationary processes (Rho and Shao, 2019).
We refer to the survey papers of Stock (1994) and Phillips and Xiao (1998), and the monographs of Hatanaka (1996) and Maddala and Kim (1998), for further information on the topic.

The Dickey-Fuller tests and their variants are based on the regression of a time series on its first lag, in which the existence of a unit root is postulated as a null hypothesis in the form of the regression coefficient being equal to one. This null hypothesis is tested against the stationary alternative that the regression coefficient concerned is smaller than one. This setting leads to innately indecisive inference for ascertaining the existence of unit roots, as a statistical test is incapable of accepting a null hypothesis. To put the assertion of unit roots on firmer ground, Kwiatkowski et al. (1992) adopted a different approach: the proposed KPSS test considers a stationary null hypothesis against a unit-root alternative. It is based on a plausible representation for possibly nonstationary time series in which a unit root is represented as an additive random-walk component. Under the null hypothesis the variance of the random-walk component is zero. The KPSS test is the one-sided Lagrange multiplier test for testing that this variance is zero against the alternative that it is greater than zero.

In spite of the many exciting developments stated above, testing for the existence of unit roots remains a challenge in time series analysis, as most available methods suffer from a lack of accurate size control and from poor power. In this paper we propose a new test, based on a radically different idea from the existing approaches. Our setting is similar in spirit to the KPSS test as we test a stationary null hypothesis H_0 against a unit-root alternative H_1. However, our approach is nonparametric as the null hypothesis only assumes that the process concerned is I(0), without specifying any parametric form.
The new test is based on the simple fact that under H_0 the sample autocovariance function (ACF) converges to the finite population ACF, while under H_1 it diverges to infinity with probability approaching one. Therefore we can reject the null hypothesis for large (absolute) values of the sample ACF. To address the technical challenge 'how large is large', we split the sample and establish an appropriate normal approximation for the null distribution of the test statistic. Note that our sample-ACF-based test statistic offers substantial discriminative power, as it takes a finite value under H_0 and diverges to infinity almost surely under H_1. This allows us to truncate the critical values determined by the normal approximation under H_0 so that the test has asymptotic power one. Furthermore, this also alleviates the loss of power due to the sample-splitting: the new test outperforms the KPSS test in the power comparison in our simulation. Another advantage of the new method is that it has remarkable discriminative power to tell the difference between, for example, a random walk and an AR(1) process with autoregressive coefficient close to (but still smaller than) one, for which most of the available unit-root tests, including the KPSS method, suffer from weak discriminative power.

Admittedly the newly proposed test is technically sophisticated. To make it user-friendly, we have developed an R-function ur.test in the package HDTSA which implements the test in an automatic manner; see Section 2.4 below for more details. Note that the strong discriminative power of the test statistic also makes the choice of the two tuning parameters involved less sensitive, and the function ur.test incorporates some well-tested default values for the tuning parameters. Indeed the test performed competently and robustly on, for example, the 14 Nelson and Plosser time series (Nelson and Plosser, 1982) which have often been used for testing unit roots.

The rest of the paper is organised as follows.
The main results, including the newly proposed test, its asymptotic properties and its implementation, are presented in Section 2. The finite sample properties of the test are investigated by simulation in Section 3, which also includes a numerical comparison with the KPSS method. Technical proofs are collected in Section 4. The supplementary material contains the proof of Lemma 1 and some additional numerical results.
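Before turning to the formal development, the phenomenon the test exploits, namely that the sample ACF settles at finite values for an I(0) series while it diverges for a unit-root series, can be previewed numerically. The following Python sketch is ours and purely illustrative: it implements only the textbook sample autocovariance and two simulated series, not the proposed test.

```python
import random

def sample_acv(y, k):
    # sample autocovariance at lag k: (1/n) * sum_{t=1}^{n-k} (y_{t+k} - ybar) * (y_t - ybar)
    n = len(y)
    ybar = sum(y) / n
    return sum((y[t + k] - ybar) * (y[t] - ybar) for t in range(n - k)) / n

random.seed(1)
eps = [random.gauss(0.0, 1.0) for _ in range(4000)]

# I(0): AR(1) with coefficient 0.5 -- sample_acv(., 0) settles near its finite limit 4/3
ar, prev = [], 0.0
for e in eps:
    prev = 0.5 * prev + e
    ar.append(prev)

# I(1): random walk built from the same innovations -- sample_acv(., 0) grows roughly linearly in n
rw, s = [], 0.0
for e in eps:
    s += e
    rw.append(s)

for n in (500, 2000, 4000):
    print(n, round(sample_acv(ar[:n], 0), 2), round(sample_acv(rw[:n], 0), 2))
```

The third printed column (the random walk) keeps growing with n while the second stabilises, which is exactly why rejecting for large sample autocovariances is sensible once 'how large is large' has been calibrated.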
2 The new test

2.1 The test statistic and its asymptotic properties

A time series {Y_t} is said to be I(0), denoted by Y_t ~ I(0), if E(Y_t) = µ, E(Y_t^2) < ∞, γ(k) = Cov(Y_{t+k}, Y_t) for all t, and

Σ_{k=0}^∞ |γ(k)| < ∞.  (1)

Let ∇Y_t = Y_t − Y_{t−1}, and ∇^d Y_t = ∇(∇^{d−1} Y_t) for any integer d ≥ 2. A time series {Y_t} is said to be I(d), denoted by Y_t ~ I(d), if {∇^d Y_t} is I(0) and {∇^{d−1} Y_t} is not I(0). An I(d) process is also called a unit-root process with integration order d. With the observations Y_1, ..., Y_n, we are interested in testing the unit-root hypotheses

H_0 : Y_t ~ I(0)  versus  H_1 : Y_t ~ I(d) for some integer d ≥ 1.  (2)

We propose a new test for (2) based on the simple fact that the sample autocovariances of a unit-root process diverge to infinity with probability approaching one, while those of an I(0) process converge to the true and finite autocovariances; see (1). More precisely, we denote the sample ACF at lag k by

γ̂(k) = (1/n) Σ_{t=1}^{n−k} (Y_{t+k} − Ȳ)(Y_t − Ȳ),  (3)

which is a consistent estimator of γ(k) = Cov(Y_{t+k}, Y_t) under the null hypothesis H_0, where Ȳ = n^{−1} Σ_{t=1}^n Y_t. However, Proposition 1 below indicates that for I(d) processes, γ̂(k) is at least as large as O_p(n^{2d−1}). See also Peña and Poncela (2006). Therefore we can reject H_0 for large values of |γ̂(k)|.

To state Proposition 1, we assume Y_t ~ I(d) and

∇^d Y_t = µ_d + Σ_{j=0}^∞ ψ_j ε_{t−j},  (4)

where µ_d = E(∇^d Y_t) is a constant, ψ_0 = 1, and {ε_t} are white noise innovations. Representation (4) is the Wold decomposition for any purely non-deterministic I(0) process. When {ε_t} are i.i.d., {∇^d Y_t} is a linear process.

Proposition 1.
Let Y_t be defined by (4) in which ε_t i.i.d. ~ (0, σ_ε^2) and Σ_{j=1}^∞ j|ψ_j| < ∞. Let k ≥ 0 be an integer.
(i) When µ_d = 0, it holds that n^{−(2d−1)} γ̂(k) → a^2 σ_ε^2 ∫_0^1 V_{d−1}(t)^2 dt in distribution, where a = Σ_{j=0}^∞ ψ_j, V_{d−1}(t) = F_{d−1}(t) − ∫_0^1 F_{d−1}(s) ds, and F_{d−1}(t) is the scalar multi-fold integrated Brownian motion defined recursively as F_j(t) = ∫_0^t F_{j−1}(x) dx for any j ≥ 1, with F_0(t) = W(t) the standard Brownian motion.
(ii) When µ_d ≠ 0, it holds that n^{−2d} γ̂(k) → φ_{d,k} µ_d^2 in probability, where φ_{d,k} is a positive bounded constant depending only on d and k.

Based on Proposition 1, we may reject H_0 for large values of, for example, the test statistic T_naive = Σ_{k=0}^{K_0} |γ̂(k)|^2, as under H_0 the test statistic T_naive converges to Σ_{k=0}^{K_0} |γ(k)|^2, which is finite, where K_0 ≥ 0 is a prescribed integer. However, two obstacles stand in the way. First, one needs the null distribution of a_n {T_naive − Σ_{k=0}^{K_0} |γ(k)|^2} under H_0, where a_n is an appropriate normalizing constant. Secondly, one needs a consistent estimator for Σ_{k=0}^{K_0} |γ(k)|^2 under H_0, which is not readily available as in practice we do not know whether H_0 holds or not.

To overcome these obstacles, we apply the idea of 'data splitting'. Let N = ⌊n/2⌋, and

γ̂_1(k) = (1/N) Σ_{t=1}^{N−k} (Y_{t+k} − Ȳ)(Y_t − Ȳ)  and  γ̂_2(k) = (1/N) Σ_{t=N+1}^{2N−k} (Y_{t+k} − Ȳ)(Y_t − Ȳ),  (5)

i.e. γ̂_1(k) and γ̂_2(k) are, respectively, the sample autocovariances at lag k computed from {Y_t}_{t=1}^N and {Y_t}_{t=N+1}^{2N}. The test statistic is defined as

T_n = Σ_{k=0}^{K_0} |γ̂_2(k)|^2,  (6)

where K_0 ≥ 0 is a fixed integer. Though our asymptotic theory allows K_0 to diverge with the sample size n, the simulation results reported in Section 3 below indicate that the finite sample performance of the test is robust with respect to different values of K_0. In fact the test works well even with small values of K_0. We suggest in practice setting K_0 ∈ {0, 1, 2, 3, 4}.

Formally we reject the null hypothesis H_0 at the significance level φ ∈ (0,
1) if

T_n > cv_φ,  (7)

where cv_φ is the critical value satisfying P_{H_0}(T_n > cv_φ) → φ as n → ∞. As we will see in (10) below, {γ̂_1(k)}_{k=0}^{K_0} are used to determine the critical value cv_φ. One obvious concern about splitting the sample into two halves is the loss of testing power. However, the fact that T_n takes a finite value under H_0 and diverges to ∞ (with probability approaching one) under H_1 implies that T_n has strong discriminant power to tell apart H_1 from H_0, which is enough to sustain adequate power in comparison to that of, for example, the KPSS test. Our simulation results indicate that the sample-splitting works well even for sample size n = 80 (i.e. N = 40).

For positive integers t and k, let

y_{t,k} = 2 {(Y_t − µ)(Y_{t+k} − µ) − γ(k)} sgn(k + t − N − 1/2).  (8)

Define ξ_{t,k} = 2 y_{t,k} γ(k), Q_t = Σ_{k=0}^{K_0} ξ_{t,k}, and B_ℓ = E{(Σ_{t=1}^ℓ Q_t)^2}. Some regularity conditions are now in order.

Condition 1.
Under the null hypothesis H_0, there exist uniform constants s ∈ (2, 3] and c_1 > 0 such that max_{1≤t≤n} E(|Y_t|^{2s}) ≤ c_1.

Condition 2. Under the null hypothesis H_0, {Y_t} is α-mixing in the sense that

α(τ) = sup_t sup_{A ∈ F_{−∞}^t, B ∈ F_{t+τ}^∞} |P(AB) − P(A)P(B)| → 0  as τ → ∞,

where F_{−∞}^t and F_{t+τ}^∞ denote the σ-fields generated by {Y_u}_{u≤t} and {Y_u}_{u≥t+τ}, respectively. There exist a uniform constant c_2 > 0 and an exponent β, sufficiently large in relation to the moment index s specified in Condition 1, such that α(τ) ≤ c_2 τ^{−β} for any τ ≥ 1.

Condition 3. Under the null hypothesis H_0, there exists a uniform constant c_3 > 0 such that B_ℓ ≥ c_3 ℓ for any ℓ ≥ 1.

Condition 1 requires that Y_t has more than four moments. The α-mixing assumption in Condition 2 is mild. Causal ARMA processes with continuous innovation distributions are α-mixing with exponentially decaying α-mixing coefficients, and so are stationary Markov chains satisfying certain conditions; see Section 2.6.1 of Fan and Yao (2003) and the references therein. In fact stationary GARCH models with finite second moments and continuous innovation distributions are also α-mixing with exponentially decaying α-mixing coefficients; see Proposition 12 of Carrasco and Chen (2002). Condition 3 requires the long-run variance of the newly defined process {Q_t} to be non-degenerate.

Theorem 1.
Let the null hypothesis H_0 hold with Conditions 1-3 being satisfied, and let K_0 = o{n^{ξ(β,s)}} for an exponent ξ(β,s) ∈ (0, (s−2)/s] determined by the constants s and β specified, respectively, in Conditions 1 and 2. Then as n → ∞, it holds that

sup_{u>0} | P(√n T_n > u) − [1 − Φ( 2N ũ / (√n B_{N−K_0}^{1/2}) )] | → 0,

where ũ = u − √n Σ_{k=0}^{K_0} |γ̂_1(k)|^2, and Φ(·) is the cumulative distribution function of the standard normal distribution.

By Theorem 1, one may select the critical value cv_φ of the test stated in (7) of the form z_{1−φ} B̂_{N−K_0}^{1/2}/(2N) + Σ_{k=0}^{K_0} |γ̂_1(k)|^2, where z_{1−φ} is the (1−φ)-quantile of the standard normal distribution N(0, 1), and B̂_{N−K_0} is a consistent estimator of B_{N−K_0} in the sense that B̂_{N−K_0}/B_{N−K_0} → 1 in probability, as then

P_{H_0}{ T_n > z_{1−φ} B̂_{N−K_0}^{1/2}/(2N) + Σ_{k=0}^{K_0} |γ̂_1(k)|^2 } → φ.

However, a critical value specified in this manner is only valid under H_0, as Σ_{k=0}^{K_0} |γ̂_1(k)|^2 diverges to infinity with probability approaching one under H_1. A non-trivial consequence of using this critical value is that the test suffers from substantial power loss, since under H_1 the probability of the event {T_n > z_{1−φ} B̂_{N−K_0}^{1/2}/(2N) + Σ_{k=0}^{K_0} |γ̂_1(k)|^2} is small. This is also confirmed in our simulation in Section 3 below. To rectify this defect, we apply a truncation idea as in Section 2.3 of Chang et al. (2017). More precisely we set the critical value as

cv_φ = z_{1−φ} B̂_{N−K_0}^{1/2}/(2N) + Σ_{k=0}^{K_0} |γ̂_1(k)|^2,  if event T occurs;
cv_φ = κ_n,  if event T^c occurs;  (10)

where κ_n > 0 is a truncation parameter with κ_n = o(n^{4d−2}/log n), and the event T satisfies the conditions P_{H_0}(T) → 1 and P_{H_1}(T^c) → 1 as n → ∞. We state in Section 2.2 below how to specify T. The intuition behind this truncation is as follows. Under H_0, cv_φ = z_{1−φ} B̂_{N−K_0}^{1/2}/(2N) + Σ_{k=0}^{K_0} |γ̂_1(k)|^2 with probability approaching one, ensuring the correct size of the test asymptotically; see Theorem 2 below. Under H_1, cv_φ = κ_n with probability approaching one. Proposition 1 implies that |γ̂_2(0)| ≥ n^{2d−1}/log^{1/2} n with probability approaching one under H_1. Consequently T_n ≥ |γ̂_2(0)|^2 ≥ n^{4d−2}/log n > κ_n = cv_φ with probability approaching one under H_1, as κ_n = o(n^{4d−2}/log n). This entails that the test has power one asymptotically; see Theorem 3 below.

Theorem 2. Assume the conditions of Theorem 1 hold. Let B̂_{N−K_0}/B_{N−K_0} → 1 in probability under H_0. Then for cv_φ defined in (10), it holds that P_{H_0}(T_n > cv_φ) → φ as n → ∞.

An estimator B̂_{N−K_0} satisfying the condition B̂_{N−K_0}/B_{N−K_0} → 1 in probability under H_0 will be constructed in Section 2.3 below.

Theorem 3.
Consider the alternative hypothesis H_1 under which Y_t is defined by (4) with ε_t i.i.d. ~ (0, σ_ε^2) and Σ_{j=1}^∞ j|ψ_j| < ∞. Then for cv_φ defined in (10), it holds that P_{H_1}(T_n > cv_φ) → 1 as n → ∞.

2.2 Specification of T and κ_n

The critical value cv_φ defined in (10) depends critically on the event T and the truncation parameter κ_n. As κ_n = o(n^{4d−2}/log n) (see Section 2.1), we set κ_n to be a small multiple of log N, with N = ⌊n/2⌋. Let X_t = ∇Y_t for t = 2, ..., n, and

γ̂_x(k) = (1/(n−1)) Σ_{t=2}^{n−k} (X_{t+k} − X̄)(X_t − X̄)  (11)

for k ≥ 0, where X̄ = (n−1)^{−1} Σ_{t=2}^n X_t. Recall γ̂(k) is defined as (3). Under H_0, it holds that

{γ̂(0) + γ̂(1)} / {γ̂_x(0) + γ̂_x(1)} = O_p(1),

which implies that for any fixed constant C* > 0,

P_{H_0}[ {γ̂(0) + γ̂(1)} / {γ̂_x(0) + γ̂_x(1)} < C* N^{1/2} ] → 1  as n → ∞.  (12)

It follows from Proposition 1 that

P_{H_1}[ {γ̂(0) + γ̂(1)} / {γ̂_x(0) + γ̂_x(1)} ≥ C* N^{1/2} ] → 1  as n → ∞.  (13)

Consequently we define T in (10) as follows:

T = { {γ̂(0) + γ̂(1)} / {γ̂_x(0) + γ̂_x(1)} < C* N^{1/2} }.  (14)

While the asymptotic properties (12) and (13) hold for any positive constant C* >
0, to use T with finite samples C* must be specified according to the underlying process. To specify such a constant C*, we first assume Y_t ~ I(1) defined by (4) with ε_t i.i.d. ~ (0, σ_ε^2). Then the proposition below holds.

Proposition 2. Let Y_t ~ I(1) be defined by (4) in which ε_t i.i.d. ~ (0, σ_ε^2) and Σ_{j=1}^∞ j|ψ_j| < ∞.
(i) When µ_1 = 0, it holds that

n^{−1} {γ̂(0) + γ̂(1)} / {γ̂_x(0) + γ̂_x(1)} → 2 a^2 ∫_0^1 V(t)^2 dt / (Σ_{j=0}^∞ ψ_j^2 + Σ_{j=0}^∞ ψ_j ψ_{j+1})

in distribution, where a = Σ_{j=0}^∞ ψ_j and V(t) = W(t) − ∫_0^1 W(s) ds with W(t) being the standard Brownian motion.
(ii) When µ_1 ≠ 0, it holds that

n^{−2} {γ̂(0) + γ̂(1)} / {γ̂_x(0) + γ̂_x(1)} → µ_1^2 / {6 σ_ε^2 (Σ_{j=0}^∞ ψ_j^2 + Σ_{j=0}^∞ ψ_j ψ_{j+1})}

in probability.

Proposition 2 shows that the ratio {γ̂(0) + γ̂(1)}/{γ̂_x(0) + γ̂_x(1)} with µ_1 ≠ 0 diverges to infinity faster than that with µ_1 = 0. Thus for any given C* > 0, P_{H_1}(T^c) → 1 faster with µ_1 ≠ 0 than with µ_1 = 0. Therefore below we focus on the cases with µ_1 = 0 only.

Recall X_t = ∇Y_t = µ_1 + Σ_{j=0}^∞ ψ_j ε_{t−j}. It holds that

2 a^2 ∫_0^1 V(t)^2 dt / (Σ_{j=0}^∞ ψ_j^2 + Σ_{j=0}^∞ ψ_j ψ_{j+1}) = 2 ∫_0^1 V(t)^2 dt / {λ (1 + ρ_1)},

where

ρ_1 = E{(X_{t+1} − µ_1)(X_t − µ_1)} / E{(X_t − µ_1)^2} = Σ_{j=0}^∞ ψ_j ψ_{j+1} / Σ_{j=0}^∞ ψ_j^2  (15)

is the first-order autocorrelation coefficient, λ = σ_S^2/σ_L^2, and σ_S^2 and σ_L^2 are, respectively, the short-run variance and the long-run variance:

σ_S^2 = σ_ε^2 Σ_{j=0}^∞ ψ_j^2  and  σ_L^2 = σ_ε^2 (Σ_{j=0}^∞ ψ_j)^2.

We estimate σ_S^2 = Var(X_t) by σ̂_S^2 = γ̂_x(0), and σ_L^2 = lim_{n→∞} Var(n^{−1/2} Σ_{t=2}^n X_t) by the kernel-type method suggested in Section 2.3 based on the sequence {X_t − X̄}_{t=2}^n. We denote by σ̂_L^2 the resulting estimator of σ_L^2. Then we estimate λ by

λ̂ = σ̂_S^2 / σ̂_L^2.  (16)

Finally we estimate ρ_1 by

ρ̂_1 = γ̂_x(1) / γ̂_x(0).  (17)

As E{∫_0^1 V(t)^2 dt} = 1/
6, we now specify the model-dependent constant C* in (14) as C* = 2 c_κ / {λ̂ (1 + ρ̂_1)} for some uniform constant c_κ greater than 1/6, where λ̂ and ρ̂_1 are given in (16) and (17), respectively. Consequently, the critical value cv_φ admits the following representation:

cv_φ = z_{1−φ} B̂_{N−K_0}^{1/2}/(2N) + Σ_{k=0}^{K_0} |γ̂_1(k)|^2,  if {γ̂(0) + γ̂(1)}/{γ̂_x(0) + γ̂_x(1)} < 2 c_κ N^{1/2} / {λ̂ (1 + ρ̂_1)};
cv_φ = κ_n,  if {γ̂(0) + γ̂(1)}/{γ̂_x(0) + γ̂_x(1)} ≥ 2 c_κ N^{1/2} / {λ̂ (1 + ρ̂_1)}.  (18)

Our extensive simulation results indicate that this specification of cv_φ with c_κ between 0.45 and 0.
65 works well across a variety of models. See Table 1 in Section 2.4, Tables 2 and 3 in Section 3, and also the supplementary material for details.

Though the above specification was derived for Y_t ~ I(1), our simulation results indicate that it also works fine for I(2) processes; see Table 3 in Section 3 and Tables S5-S8 and S14-S18 in the supplementary material. Note that testing I(0) against I(d) with d > 1 is easier than with d = 1, as the autocovariances are of order at least n^{2d−1} for I(d) processes. Therefore the difference between the values of the test statistic T_n under H_1 and those under H_0 increases as d increases.

2.3 Estimation of B_{N−K_0}

Let m = 2N − K_0. Recall that

B_{N−K_0} = m Var( m^{−1/2} Σ_{t=1}^m Q_t ) =: m V_m,

where V_m is the long-run variance of {Q_t}_{t=1}^m. We only need to estimate the long-run variance V_m. There exist various estimation methods for long-run variances, including kernel-type estimators (Andrews, 1991) and estimators utilizing moving block bootstraps (Lahiri, 2003). See also Den Haan and Levin (1997) and Kiefer, Vogelsang and Bunzel (2000).

Recall Q_t = Σ_{k=0}^{K_0} ξ_{t,k} with ξ_{t,k} = 2 y_{t,k} γ(k) and y_{t,k} defined in (8). Let

ỹ_{t,k} = 2 {(Y_t − Ȳ)(Y_{t+k} − Ȳ) − γ̂(k)} sgn(k + t − N − 1/2),  ξ̃_{t,k} = 2 ỹ_{t,k} γ̂(k),  (19)

and Q̃_t = Σ_{k=0}^{K_0} ξ̃_{t,k}, where Ȳ = n^{−1} Σ_{t=1}^n Y_t and γ̂(k) is defined in (3). We adopt the following kernel-type estimator for V_m based on {Q̃_t}_{t=1}^m:

Ṽ_m = Σ_{j=−m+1}^{m−1} K(j/b_m) G̃_j,  (20)

where K(·) is a symmetric kernel function that is continuous at 0 with K(0) = 1, b_m is the bandwidth, G̃_j = m^{−1} Σ_{t=j+1}^m Q̃_t Q̃_{t−j} if j ≥ 0, and G̃_j = m^{−1} Σ_{t=−j+1}^m Q̃_{t+j} Q̃_t otherwise. Let

B̂_{N−K_0} = m Ṽ_m.  (21)

Andrews (1991) systematically investigated the theoretical properties of this estimation method, showing that the Quadratic Spectral kernel

K_QS(u) = (25 / (12 π^2 u^2)) { sin(6πu/5)/(6πu/5) − cos(6πu/5) }

is optimal in the sense of minimizing the asymptotic truncated mean squared error of the estimator. In our numerical work reported in both Section 3 and the supplementary material, we always use this kernel function by calling the function lrvar from the R package sandwich with the default bandwidth specified in the function.

To state the required asymptotic property for B̂_{N−K_0}, we first introduce some regularity conditions.

Condition 4.
The kernel function K(·) : R → [−1, 1] is continuously differentiable on R and satisfies (i) K(0) = 1, (ii) K(x) = K(−x) for any x ∈ R, and (iii) ∫_{−∞}^∞ |K(x)| dx < ∞. Let K* = 1 + K_0 satisfy K* log K* = o(n^{1−2/s}). The bandwidth b_m → ∞ satisfies b_m = o{n^{1/2−1/s} (K* log K*)^{−1/2}} and K* = o(b_m).

Condition 5. Under the null hypothesis H_0, there exist uniform constants s > 2, c_1 > 0 and c_2 > 0, and an exponent β exceeding a threshold determined by s, such that max_{1≤t≤n} E(|Y_t|^{2s}) ≤ c_1 and the α-mixing coefficients {α(τ)}_{τ≥1} satisfy α(τ) ≤ c_2 τ^{−β}, where α(τ) is defined in Condition 2.

Theorem 4. Let Conditions 4 and 5 hold. Then as n → ∞, it holds under the null hypothesis H_0 that B̂_{N−K_0}/B_{N−K_0} → 1 in probability.

2.4 Implementation

Based on Sections 2.2 and 2.3 above, Algorithm 1 outlines the steps to be taken in order to perform the proposed test. The algorithm is implemented in the R-function ur.test contained in the package
HDTSA, which is available via GitHub:

devtools::install_github('ghghgh2020/HDTSA/HDTSA')
To perform the test using the function ur.test, one merely needs to input the time series {Y_t}_{t=1}^n and the significance level φ. The package sets the default value c_κ = 0.55 and returns the five testing results for K_0 = 0, 1, ..., 4. Users may also specify c_κ and K_0 subjectively. We recommend using c_κ ∈ [0.45, 0.
65] and K_0 ∈ {0, 1, 2, 3, 4}.

Algorithm 1: Sample ACF-based unit-root test

1. Given {Y_t}_{t=1}^n and K_0, compute γ̂(k) defined as in (3) for k = 0, 1, and also γ̂_1(k) and γ̂_2(k) defined as in (5) for k = 0, 1, ..., K_0.
2. Let X_t = ∇Y_t. Compute γ̂_x(k) defined as in (11) for k = 0, 1, and put ρ̂_1 = γ̂_x(1)/γ̂_x(0).
3. Call the function lrvar from the R package sandwich (with the default bandwidth specified in the function) to compute the long-run variances of {Q̃_t} and {X_t}, denoted by Ṽ_m and σ̂_L^2, respectively, where Q̃_t is defined immediately below (19).
4. Given the significance level φ ∈ (0, 1), compute the test statistic T_n = Σ_{k=0}^{K_0} |γ̂_2(k)|^2 and the critical value

cv_φ = z_{1−φ} B̂_{N−K_0}^{1/2}/(2N) + Σ_{k=0}^{K_0} |γ̂_1(k)|^2,  if {γ̂(0) + γ̂(1)}/{γ̂_x(0) + γ̂_x(1)} < 2 c_κ N^{1/2} / {λ̂ (1 + ρ̂_1)};
cv_φ = κ_n,  if {γ̂(0) + γ̂(1)}/{γ̂_x(0) + γ̂_x(1)} ≥ 2 c_κ N^{1/2} / {λ̂ (1 + ρ̂_1)};

where c_κ ∈ (0.45, 0.65) is a prescribed constant, z_{1−φ} is the (1−φ)-quantile of N(0, 1), B̂_{N−K_0} = (2N − K_0) Ṽ_m, λ̂ = γ̂_x(0)/σ̂_L^2, and κ_n is the truncation parameter specified in Section 2.2.
5. Reject the null hypothesis H_0 if T_n > cv_φ.

To illustrate the robustness of the test with respect to the values of c_κ and K_0, we applied the test to the 14 US annual economic time series initially analyzed by Nelson and Plosser (1982), and subsequently by many others including Perron (1988), DeJong et al. (1989) and Kwiatkowski et al. (1992). The length n of these 14 series varies between 62 and 111. The conventional wisdom is that a unit root is present in most of the Nelson-Plosser series. The analysis in the aforementioned papers indicates that the unit-root hypothesis cannot be validated for only one or two series, such as the unemployment rate. The results from the proposed new test corroborate those findings; see Table 1. The new test rejects the null hypothesis Y_t ~ I(0) in favour of H_1 : Y_t ~ I(d) at the 5% significance level for 12 out of the 14 series. The two series for which H_0 cannot be rejected are the unemployment rate series and the velocity series. Note that the new test is robust in the sense that taking c_κ = 0.45, 0.
55 or 0.65, and K_0 = 0, 1, 2, 3 leads to the same conclusions.

3 Simulation

We illustrate the finite sample properties of the proposed test T_n by simulation. Various versions of the proposed test with K_0 ∈ {0, 1, 2, 3, 4} and c_κ ∈ {0.45, 0.55, 0.65} are considered; see (18). We also consider the proposed test with the untruncated critical value, i.e. c_κ = ∞ in (18). For comparison purposes, we also include the KPSS test (Kwiatkowski et al., 1992) in our experiments. We set n = 80, 140, 200 (i.e. N = 40, 70, 100) and ε_t i.i.d. ~ N(0, σ_ε^2) with σ_ε = 1 or 2. To examine the sizes of the tests, we consider the following four models.

Table 1: Testing results for the 14 Nelson-Plosser time series by the proposed new test at the 5% significance level with c_κ = 0.45, 0.55 and 0.65, respectively: "+" indicates that the stationary null hypothesis is rejected in favour of a unit-root alternative, while "-" indicates that the null hypothesis cannot be rejected. The results are unchanged with K_0 = 0, 1, 2, 3.

- Model 1: Y_t = ρ Y_{t−1} + ε_t.
- Model 2: Y_t = ε_t + φ_1 ε_{t−1} + φ_2 ε_{t−2}.
- Model 3: Y_t − ρ_1 Y_{t−1} − ρ_2 Y_{t−2} = ε_t + θ_1 ε_{t−1} + θ_2 ε_{t−2}, with fixed moving-average coefficients θ_1 and θ_2.
- Model 4: Y_t = ε_t + φ Σ_{i=1}^q ε_{t−i}, for a fixed lag order q.

We set the nominal size of the tests at φ = 5%. The KPSS test is implemented by calling the function kpss.test in the R package tseries. The results with K_0 = 0 are listed in Table 2, and the results with K_0 ∈ {1, 2, 3, 4} are reported in Tables S1-S4 in the supplementary material. Note that the results with different c_κ and K_0 are similar, indicating once again that the test T_n is robust with respect to the choice of the tuning parameters c_κ and K_0. We also consider the cases where ε_t i.i.d. ~ t(2) and ε_t i.i.d. ~ t(5), and report the results in Tables S9-S13 in the supplementary material.

Overall the proposed test provides reasonable approximations to the nominal size, especially with large n (e.g. N = 100), and the truncation (10) has little impact on the achieved significance levels, as indicated in Theorem 2. The performance of the new test is stable across different models with different parameters, different K_0 and different innovation distributions, while that of the KPSS test varies and is adequate only in some settings.

Table 2 indicates that for Model 1 the new test controls the size well for both positive and negative ρ, while the KPSS test performs poorly when ρ <
0, and even worse when ρ > 0. In fact the KPSS test completely fails when ρ = 0.9, as the empirical sizes are at least 46.7%. This is due to the fact that when ρ is close to 1, the KPSS test has difficulty distinguishing it from 1, which corresponds to a unit root; see also Table 3 of Kwiatkowski et al. (1992). Our new method does not suffer from this closeness to one, as it is the order of magnitude of the sample ACF that matters. Table 2 also shows that for Model 2 both the new test and the KPSS test provide comparable approximations to the nominal size. For Models 3 and 4, the new test provides adequate approximations to the nominal size. Unfortunately the KPSS test does not work for Models 3 and 4, as the empirical sizes range from 15.4% to 26.2% for Model 3, and from 13.8% to 20.4% for Model 4.

The assessment of empirical power is based on the following models.

- Model 5: ∇Y_t = Z_t, Z_t = ρ Z_{t−1} + ε_t.
- Model 6: ∇Y_t = Z_t, Z_t = ε_t + φ_1 ε_{t−1} + φ_2 ε_{t−2}.
- Model 7: ∇Y_t = Z_t, Z_t − ρ_1 Z_{t−1} − ρ_2 Z_{t−2} = ε_t + θ_1 ε_{t−1} + θ_2 ε_{t−2}, with fixed moving-average coefficients θ_1 and θ_2.
- Model 8: ∇^2 Y_t = Z_t, Z_t = ε_t + φ_1 ε_{t−1} + φ_2 ε_{t−2}.

The corresponding results are reported in Table 3 for K_0 = 0 with normally distributed innovations, and in Tables S5-S8 in the supplementary material for K_0 ∈ {1, 2, 3, 4}. The KPSS test shows impressive power under the models above, though it has a tendency to overestimate test levels, leading to inflated power. Nevertheless the new test displays greater power in most cases. Noticeably, the power-one property of the new test, presented in Theorem 3, is observable in the simulation, as the empirical power tends to 1 when N increases. Comparing the results of Models 6 and 8, we find that the proposed test exhibits the asymptotic power-one property more distinctly under Model 8, as the test statistic T_n has more discriminative power between I(2) and I(0) than between I(1) and I(0). We also simulated the power of the tests with t(2) and t(5) innovations in Models 5-8.
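The data-generating processes above are straightforward to simulate. The sketch below is ours: it shows Models 1 and 5 (the other models follow the same pattern) together with a generic Monte Carlo wrapper of the kind used to produce empirical sizes and powers once a test function is supplied; parameter and function names are our own.

```python
import random

def model1(n, rho, sigma=1.0, rng=random):
    # Model 1: AR(1), Y_t = rho * Y_{t-1} + eps_t; I(0) whenever |rho| < 1
    y, prev = [], 0.0
    for _ in range(n):
        prev = rho * prev + rng.gauss(0.0, sigma)
        y.append(prev)
    return y

def model5(n, rho, sigma=1.0, rng=random):
    # Model 5: nabla Y_t = Z_t with Z_t from Model 1, so Y_t is I(1)
    z = model1(n, rho, sigma, rng)
    y, s = [], 0.0
    for v in z:
        s += v
        y.append(s)
    return y

def rejection_rate(test, dgp, reps=2000):
    # Monte Carlo estimate of P(test rejects): empirical size under an I(0) DGP,
    # empirical power under a unit-root DGP
    return sum(1 for _ in range(reps) if test(dgp())) / reps

# example with a placeholder "test" that flags series whose sample range exceeds 10
random.seed(0)
toy_test = lambda y: max(y) - min(y) > 10
print(rejection_rate(toy_test, lambda: model1(80, 0.5), reps=200))
print(rejection_rate(toy_test, lambda: model5(80, 0.5), reps=200))
```

Plugging the actual unit-root test into `rejection_rate` in place of `toy_test`, with reps = 2000 and n = 80, 140, 200, reproduces the design of Tables 2 and 3.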
The results are presented in Tables S14-S18 in the supplementary material, showing similar profiles to those in Tables 3 and S5-S8.

4 Proofs

Proof of Proposition 1. For any 0 ≤ j ≤ d, we write Y_t^{(d−j)} = ∇^j Y_t. Assume Y_{−(d−1)} = ··· = Y_0 = 0. Let {F_g(·)} be the multi-fold integrated Brownian motion considered in Chan and Wei (1988), which is defined recursively as F_g(t) = ∫_0^t F_{g−1}(x) dx for any g ≥ 1, with F_0(t) = W(t) the scalar Brownian motion. We first consider the case with µ_d = 0 and k ≥ 1. Due to Y_t ~ I(d), we can reformulate Y_t

Table 2: Empirical sizes (×100) of the proposed test T_n defined as (6) for K_0 = 0 with the untruncated critical value (c_κ = ∞) and the truncated critical values defined as (18) with c_κ = 0.45, 0.55, 0.
65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t i.i.d. ~ N(0, σ_ε^2). The nominal size of the tests is 5%.

                                    σ_ε = 1                           σ_ε = 2
Setting              N     ∞     0.45   0.55   0.65   KPSS    ∞     0.45   0.55   0.65   KPSS
Model 1 (0.5)        40    6.0   6.0    6.0    6.0    10.4    6.5   6.5    6.5    6.5    10.8
                     70    6.9   6.9    6.9    6.9    10.1    5.8   5.8    5.8    5.8    10.7
                     100   6.1   6.1    6.1    6.1    10.2    5.6   5.6    5.6    5.6    8.5
        (0.9)        40    7.2   41.9   30.0   20.3   51.2    8.8   40.8   27.4   19.3   50.5
                     70    7.8   23.7   14.6   10.4   46.7    8.0   23.4   13.9   10.5   49.5
                     100   8.5   12.7   9.4    8.6    49.2    8.6   14.3   10.1   8.9    49.3
Model 2 (−0._, 0.9)  40    7.2   7.2    7.2    7.2    9.0     7.1   7.1    7.1    7.1    8.3
                     70    7.1   7.1    7.1    7.1    7.3     6.9   6.9    6.9    6.9    7.3
                     100   5.5   5.5    5.5    5.5    8.1     5.9   5.9    5.9    5.9    8.2
Model 3 (0.4, 0.2)   40    7.2   8.2    7.4    7.3    22.5    6.9   8.0    7.0    6.9    20.8
                     70    7.7   7.7    7.7    7.7    17.3    7.3   7.3    7.3    7.3    17.5
                     100   7.2   7.2    7.2    7.2    18.0    6.0   6.0    6.0    6.0    17.4
        (0.5, 0.1)   40    8.5   8.9    8.5    8.5    19.6    7.1   7.5    7.1    7.1    19.6
                     70    8.0   8.0    8.0    8.0    16.6    6.2   6.2    6.2    6.2    17.4
                     100   6.3   6.3    6.3    6.3    17.4    6.3   6.3    6.3    6.3    15.4
        (0.6, 0.1)   40    8.5   12.7   9.6    8.7    26.2    8.8   11.7   9.5    8.9    24.2
                     70    7.3   7.3    7.3    7.3    22.4    9.1   9.2    9.1    9.1    22.5
                     100   7.6   7.6    7.6    7.6    20.3    7.0   7.0    7.0    7.0    23.6
Model 4 (0.4)        40    9.7   10.3   9.8    9.7    20.4    8.9   9.6    8.9    8.9    17.7
                     70    7.5   7.5    7.5    7.5    15.7    8.8   8.8    8.8    8.8    15.2
                     100   7.6   7.6    7.6    7.6    15.0    8.8   8.8    8.8    8.8    15.3
        (0.5)        40    8.2   9.2    8.3    8.2    20.2    8.6   9.9    8.9    8.8    19.1
                     70    8.3   8.3    8.3    8.3    15.4    8.5   8.5    8.5    8.5    16.9
                     100   7.8   7.8    7.8    7.8    15.8    6.8   6.8    6.8    6.8    15.8
        (0.6)        40    8.5   10.2   8.8    8.5    19.6    7.3   8.9    7.8    7.3    20.0
                     70    8.9   8.9    8.9    8.9    13.8    9.2   9.2    9.2    9.2    16.2
                     100   7.2   7.2    7.2    7.2    15.9    8.1   8.1    8.1    8.1    15.8

Table 3: Empirical powers (×100) of the proposed test T_n defined as (6) for K_0 = 0 with the untruncated critical value (c_κ = ∞) and the truncated critical values defined as (18) with c_κ = 0.45, 0.55, 0.
65, and the KPSS test in a simulation with 2000 replications. Constant c κ determines the level of truncation for the critical values of T n . The innovations ǫ t i . i . d . ∼ N (0 , σ ǫ ).The nominal size of the tests is 5%. σ ǫ = 1 σ ǫ = 2Setting N ∞ .
45 0 .
55 0 .
65 KPSS ∞ .
45 0 .
55 0 .
65 KPSSModel 5 0.5 40 11.7 94.2 88.4 84.0 84.2 12.4 93.8 89.5 84.5 83.270 11.7 96.5 92.9 88.4 90.9 12.6 97.1 93.8 90.3 91.0100 11.3 98.0 95.5 92.2 95.5 11.8 98.1 95.5 92.5 95.50.9 40 13.1 99.2 97.3 94.6 91.1 14.1 99.4 98.2 96.2 92.870 14.8 99.8 99.1 97.9 95.3 14.5 99.9 99.7 99.1 95.0100 16.4 99.9 99.5 99.1 97.2 15.4 100.0 99.9 99.6 97.4 − . , .
9) 40 13.1 95.0 90.0 83.9 83.0 12.0 95.4 90.7 85.8 84.570 11.6 97.3 93.8 89.7 90.2 12.8 97.5 94.4 89.8 90.1100 13.7 99.0 96.4 92.3 95.2 11.6 98.2 95.7 92.5 94.8Model 7 (0.4, 0.2) 40 14.8 98.0 95.2 90.6 85.9 14.8 98.3 95.4 91.1 85.870 15.4 99.1 97.0 93.8 92.0 14.1 99.3 97.2 94.3 91.6100 16.6 99.6 98.8 96.5 96.5 15.7 99.6 98.5 96.3 96.0(0.5, 0.1) 40 14.2 99.1 95.9 91.3 84.7 15.8 98.7 95.9 91.2 85.370 14.8 99.4 97.2 94.0 91.2 14.3 99.4 97.7 94.8 91.3100 15.0 99.6 98.5 96.2 95.5 15.3 99.3 98.2 96.5 95.0(0.6, 0.1) 40 14.5 99.2 97.1 93.3 87.2 15.3 98.9 96.7 93.2 86.870 15.7 99.7 98.5 96.2 93.5 17.6 99.7 98.7 97.0 92.0100 16.4 99.8 99.1 97.7 95.7 15.9 100.0 99.5 98.6 96.0Model 8 (0.8, 0.3) 40 6.7 100.0 100.0 99.9 98.5 6.2 100.0 100.0 100.0 98.270 6.3 100.0 100.0 100.0 99.7 6.0 100.0 100.0 100.0 99.2100 7.0 100.0 100.0 100.0 99.8 6.1 100.0 100.0 100.0 99.9(0.9, 0.5) 40 7.0 100.0 100.0 100.0 98.4 7.0 100.0 100.0 100.0 98.570 5.5 100.0 100.0 100.0 99.5 6.0 100.0 100.0 100.0 99.3100 5.9 100.0 100.0 100.0 99.9 6.2 100.0 100.0 100.0 99.8(0.95, 0.9) 40 8.0 100.0 100.0 100.0 98.5 6.8 100.0 100.0 100.0 98.670 7.3 100.0 100.0 100.0 99.2 6.3 100.0 100.0 100.0 99.7100 6.1 100.0 100.0 100.0 99.9 5.3 100.0 100.0 100.0 100.015s Y t = Y ( d ) t = Y ( d ) t − + Y ( d − t = · · · = Y ( d ) t − k + P k − j =0 Y ( d − t − j for any k ≥
1, which implies that
\[
\frac{\hat\gamma(k)}{n^{2d-1}}
=\frac{1}{n^{2d}}\sum_{t=k+1}^{n}\{Y^{(d)}_{t-k}-\bar Y\}\{Y^{(d)}_{t}-\bar Y\}
=\frac{1}{n^{2d}}\sum_{t=k+1}^{n}\{Y^{(d)}_{t}-\bar Y\}^{2}
-\frac{1}{n^{2d}}\sum_{t=k+1}^{n}\{Y^{(d)}_{t}-\bar Y\}\{Y^{(d-1)}_{t-k+1}+\cdots+Y^{(d-1)}_{t}\}. \quad (22)
\]
Meanwhile, for each $0\le i\le k-1$, it holds that
\[
\sum_{t=k+1}^{n}Y^{(d-1)}_{t-i}Y^{(d)}_{t}
=\sum_{t=k+1}^{n}Y^{(d-1)}_{t-i}\{Y^{(d)}_{t-i-1}+Y^{(d-1)}_{t-i}+\cdots+Y^{(d-1)}_{t}\} \quad (23)
\]
and
\[
\sum_{t=k+1}^{n}Y^{(d-1)}_{t-i}Y^{(d)}_{t-i-1}
=\sum_{j=k+1-i}^{n-i}Y^{(d-1)}_{j}Y^{(d)}_{j-1}
=\sum_{j=k+1-i}^{n_i}\{Y^{(d)}_{j}-Y^{(d)}_{j-1}\}Y^{(d)}_{j-1}
=\frac{1}{2}\bigg[\{Y^{(d)}_{n_i}\}^{2}-\{Y^{(d)}_{k-i}\}^{2}-\sum_{j=k+1-i}^{n_i}\{Y^{(d)}_{j}-Y^{(d)}_{j-1}\}^{2}\bigg],
\]
where $n_i=n-i$. Note that $Y^{(d)}_{j}-Y^{(d)}_{j-1}=Y^{(d-1)}_{j}$. If $d\ge 2$, by (2.87), (2.142) and Theorem 2.17 in Tanaka (2017), we have $n_i^{-(2d-2)}\sum_{j=1}^{n_i}\{Y^{(d)}_{j}-Y^{(d)}_{j-1}\}^{2}\overset{D}{\longrightarrow}a^{2}\sigma_\varepsilon^{2}\int_{0}^{1}F_{d-2}(s)^{2}\,{\rm d}s$ and $n_i^{-(2d-1)}\{Y^{(d)}_{n_i}\}^{2}\overset{D}{\longrightarrow}a^{2}\sigma_\varepsilon^{2}F_{d-1}(1)^{2}$ as $n_i\to\infty$. Due to $Y^{(d)}_{k-i}=O_{p}(1)$, we then have
\[
\frac{1}{n^{2d-1}}\sum_{t=k+1}^{n}Y^{(d-1)}_{t-i}Y^{(d)}_{t-i-1}\overset{D}{\longrightarrow}\frac{a^{2}\sigma_\varepsilon^{2}}{2}F_{d-1}(1)^{2}. \quad (24)
\]
For any $0\le j\le i$, by the Cauchy--Schwarz inequality, it holds that
\[
\bigg|\sum_{t=k+1}^{n}Y^{(d-1)}_{t-i}Y^{(d-1)}_{t-j}\bigg|
\le\bigg[\sum_{t=k+1}^{n}\{Y^{(d-1)}_{t-i}\}^{2}\bigg]^{1/2}\bigg[\sum_{t=k+1}^{n}\{Y^{(d-1)}_{t-j}\}^{2}\bigg]^{1/2}
=\bigg[\sum_{t=k+1}^{n}\{Y^{(d)}_{t-i}-Y^{(d)}_{t-i-1}\}^{2}\bigg]^{1/2}\bigg[\sum_{t=k+1}^{n}\{Y^{(d)}_{t-j}-Y^{(d)}_{t-j-1}\}^{2}\bigg]^{1/2}
=O_{p}(n^{2d-2}).
\]
When $d\ge 2$, together with (24), (23) leads to
\[
\sum_{t=k+1}^{n}Y^{(d-1)}_{t-i}Y^{(d)}_{t}=O_{p}(n^{2d-1}) \quad (25)
\]
for any $0\le i\le k-1$. Since $n^{-1}\sum_{t=k+1}^{n}Y^{(0)}_{t-i}Y^{(1)}_{t-i-1}=n^{-1}\sum_{j=k+1-i}^{n_i}\{Y^{(1)}_{j}-Y^{(1)}_{j-1}\}Y^{(1)}_{j-1}$, by (2.89) in Tanaka (2017), $n^{-1}\sum_{t=k+1}^{n}Y^{(0)}_{t-i}Y^{(1)}_{t-i-1}\overset{D}{\longrightarrow}a^{2}\sigma_\varepsilon^{2}\{\int_{0}^{1}W(t)\,{\rm d}W(t)+(1-\lambda)/2\}$ with $\lambda=a^{-2}\sum_{j=0}^{\infty}\psi_{j}^{2}$, which implies that $\sum_{t=k+1}^{n}Y^{(0)}_{t-i}Y^{(1)}_{t-i-1}=O_{p}(n)$. Hence, (25) holds for any $d\ge 1$ and $0\le i\le k-1$. Since $n^{-d-1/2}\sum_{t=1}^{n}Y^{(d)}_{t}\overset{D}{\longrightarrow}a\sigma_\varepsilon\int_{0}^{1}F_{d-1}(s)\,{\rm d}s$ for any $d\ge 1$, then $\sum_{t=k+1}^{n}Y^{(d-1)}_{t-i}\bar Y=O_{p}(n^{2d-1})$ for any $d\ge 1$ and $0\le i\le k-1$. Hence, (22) leads to
\[
\frac{\hat\gamma(k)}{n^{2d-1}}
=\frac{1}{n^{2d}}\sum_{t=k+1}^{n}\{Y^{(d)}_{t}-\bar Y\}^{2}+O_{p}(n^{-1})
=\frac{1}{n^{2d}}\sum_{t=k+1}^{n}\{Y^{(d)}_{t}\}^{2}-\frac{2\bar Y}{n^{2d}}\sum_{t=k+1}^{n}Y^{(d)}_{t}+\frac{n-k}{n^{2d}}\bar Y^{2}+O_{p}(n^{-1}).
\]
Also notice that $n^{-2d}\sum_{t=1}^{n}\{Y^{(d)}_{t}\}^{2}\overset{D}{\longrightarrow}a^{2}\sigma_\varepsilon^{2}\int_{0}^{1}F_{d-1}(s)^{2}\,{\rm d}s$. For any $k\ge$
1, it follows from the continuous mapping theorem that $n^{-2d+1}\hat\gamma(k)\overset{D}{\longrightarrow}a^{2}\sigma_\varepsilon^{2}\int_{0}^{1}V_{d-1}(s)^{2}\,{\rm d}s$, where $V_{d-1}(s)=F_{d-1}(s)-\int_{0}^{1}F_{d-1}(t)\,{\rm d}t$. For $k=0$, since $n^{-2d+1}\hat\gamma(0)=n^{-2d}\sum_{t=1}^{n}\{Y^{(d)}_{t}-\bar Y\}^{2}$, we then also have $n^{-2d+1}\hat\gamma(0)\overset{D}{\longrightarrow}a^{2}\sigma_\varepsilon^{2}\int_{0}^{1}V_{d-1}(s)^{2}\,{\rm d}s$. We complete the proof of part (i) of Proposition 1.

We now consider the case with $\mu_d\neq 0$. Let $U^{(0)}_{t}=\sum_{j=0}^{\infty}\psi_{j}\varepsilon_{t-j}$ with $\{\psi_j\}$ and $\{\varepsilon_t\}$ specified in (4). Recall $Y_{t}=Y^{(d)}_{t-k}+\sum_{j=0}^{k-1}Y^{(d-1)}_{t-j}$ for any $k\ge 1$. Then $Y_{t}=Y^{(d)}_{0}+\sum_{j=0}^{t-1}Y^{(d-1)}_{t-j}$. Recursively, we have
\[
Y_{t}=Y^{(d)}_{t}=Y^{(d)}_{0}+\sum_{j_1=0}^{t-1}Y^{(d-1)}_{t-j_1}
=Y^{(d)}_{0}+\sum_{j_1=0}^{t-1}\bigg\{Y^{(d-1)}_{0}+\sum_{j_2=0}^{t-j_1-1}Y^{(d-2)}_{t-j_1-j_2}\bigg\}
=Y^{(d)}_{0}+\sum_{h=1}^{d-1}Y^{(d-h)}_{0}\bigg(\sum_{j_1=0}^{t-1}\cdots\sum_{j_h=0}^{t-j_1-\cdots-j_{h-1}-1}1\bigg)
+\sum_{j_1=0}^{t-1}\cdots\sum_{j_d=0}^{t-j_1-\cdots-j_{d-1}-1}\mu_{d}
+\sum_{j_1=0}^{t-1}\cdots\sum_{j_d=0}^{t-j_1-\cdots-j_{d-1}-1}U^{(0)}_{t-j_1-\cdots-j_d}.
\]
Define $\omega_{t}=\sum_{j_1=0}^{t-1}\cdots\sum_{j_d=0}^{t-j_1-\cdots-j_{d-1}-1}1$ and $r_{t}=\sum_{j_1=0}^{t-1}\cdots\sum_{j_d=0}^{t-j_1-\cdots-j_{d-1}-1}U^{(0)}_{t-j_1-\cdots-j_d}$. Notice that $Y_{-(d-1)}=\cdots=Y_{0}=0$. Then $Y_{t}=Y^{(d)}_{t}=\mu_{d}\,\omega_{t}+r_{t}$. For any $k\ge 1$, we have
\[
\frac{\hat\gamma(k)}{n^{2d}}
=\frac{1}{n^{2d+1}}\sum_{t=k+1}^{n}(Y_{t-k}-\bar Y)(Y_{t}-\bar Y)
=\frac{\mu_{d}^{2}}{n^{2d+1}}\sum_{t=k+1}^{n}(\omega_{t-k}-\bar\omega)(\omega_{t}-\bar\omega)
+\frac{1}{n^{2d+1}}\sum_{t=k+1}^{n}(r_{t-k}-\bar r)(r_{t}-\bar r)
+\frac{\mu_{d}}{n^{2d+1}}\sum_{t=k+1}^{n}(\omega_{t-k}-\bar\omega)(r_{t}-\bar r)
+\frac{\mu_{d}}{n^{2d+1}}\sum_{t=k+1}^{n}(r_{t-k}-\bar r)(\omega_{t}-\bar\omega),
\]
where $\bar\omega=n^{-1}\sum_{t=1}^{n}\omega_{t}$ and $\bar r=n^{-1}\sum_{t=1}^{n}r_{t}$. Notice that
\[
\nabla r_{t}=\sum_{j_1=0}^{t-1}\cdots\sum_{j_{d-1}=0}^{t-j_1-\cdots-j_{d-2}-1}U^{(0)}_{t-j_1-\cdots-j_{d-1}},
\]
so that $r_{t}\sim I(d)$ without the drift term $\mu_{d}$. Applying the result of part (i), we know that $n^{-(2d+1)}\sum_{t=k+1}^{n}(r_{t-k}-\bar r)(r_{t}-\bar r)=o_{p}(1)$, $n^{-(2d+1)}\sum_{t=k+1}^{n}(\omega_{t-k}-\bar\omega)(r_{t}-\bar r)=o_{p}(1)$ and $n^{-(2d+1)}\sum_{t=k+1}^{n}(r_{t-k}-\bar r)(\omega_{t}-\bar\omega)=o_{p}(1)$. Notice that $\omega_{t}=O(t^{d})$. Thus $n^{-2d}\hat\gamma(k)\overset{P}{\longrightarrow}\phi_{d,k}\,\mu_{d}^{2}$, where $\phi_{d,k}=\lim_{n\to\infty}n^{-(2d+1)}\sum_{t=k+1}^{n}(\omega_{t-k}-\bar\omega)(\omega_{t}-\bar\omega)$. Similarly, for $k=0$, we also have $Y_{t}=\mu_{d}\,\omega_{t}+r_{t}$, so that $n^{-2d}\hat\gamma(0)=\mu_{d}^{2}n^{-2d-1}\sum_{t=1}^{n}(\omega_{t}-\bar\omega)^{2}+2\mu_{d}n^{-2d-1}\sum_{t=1}^{n}(\omega_{t}-\bar\omega)(r_{t}-\bar r)+n^{-2d-1}\sum_{t=1}^{n}(r_{t}-\bar r)^{2}$; we therefore conclude $n^{-2d}\hat\gamma(0)\overset{P}{\longrightarrow}\phi_{d,0}\,\mu_{d}^{2}$. We complete the proof of part (ii). $\Box$

Proof of Proposition 2. Since $X_{t}=\mu+\sum_{j=0}^{\infty}\psi_{j}\varepsilon_{t-j}$ and $\varepsilon_{t}\overset{\rm i.i.d.}{\sim}(0,\sigma_\varepsilon^{2})$, we have $\hat\gamma_{x}(0)\overset{P}{\longrightarrow}\gamma_{x}(0)=\sigma_\varepsilon^{2}\sum_{j=0}^{\infty}\psi_{j}^{2}$ and $\hat\gamma_{x}(1)\overset{P}{\longrightarrow}\gamma_{x}(1)=\sigma_\varepsilon^{2}\sum_{j=0}^{\infty}\psi_{j}\psi_{j+1}$.

We first consider the case $\mu=0$. Recall that we have shown in the proof of Proposition 1 that
\[
\frac{1}{n}\{\hat\gamma(0)+\hat\gamma(1)\}=\frac{2}{n^{2}}\sum_{t=1}^{n}(Y_{t}-\bar Y)^{2}+O_{p}(n^{-1})\overset{D}{\longrightarrow}2a^{2}\sigma_\varepsilon^{2}\int_{0}^{1}V_{0}(s)^{2}\,{\rm d}s.
\]
It follows from Slutsky's theorem that
\[
\frac{1}{n}\,\frac{\hat\gamma(0)+\hat\gamma(1)}{\hat\gamma_{x}(0)+\hat\gamma_{x}(1)}
\overset{D}{\longrightarrow}\frac{2a^{2}\int_{0}^{1}V_{0}(t)^{2}\,{\rm d}t}{\sum_{j=0}^{\infty}\psi_{j}^{2}+\sum_{j=0}^{\infty}\psi_{j}\psi_{j+1}}.
\]
We complete the proof of part (i).

We now consider the case with $\mu\neq 0$. As we have shown in the proof of Proposition 1 that $n^{-2}\hat\gamma(1)\overset{P}{\longrightarrow}\phi_{1,1}\mu^{2}$ and $n^{-2}\hat\gamma(0)\overset{P}{\longrightarrow}\phi_{1,0}\mu^{2}$, it then holds that $n^{-2}\{\hat\gamma(0)+\hat\gamma(1)\}\overset{P}{\longrightarrow}(\phi_{1,0}+\phi_{1,1})\mu^{2}$, which implies that
\[
\frac{1}{n^{2}}\,\frac{\hat\gamma(0)+\hat\gamma(1)}{\hat\gamma_{x}(0)+\hat\gamma_{x}(1)}
\overset{P}{\longrightarrow}\frac{(\phi_{1,0}+\phi_{1,1})\mu^{2}}{\sigma_\varepsilon^{2}(\sum_{j=0}^{\infty}\psi_{j}^{2}+\sum_{j=0}^{\infty}\psi_{j}\psi_{j+1})}.
\]
Notice that $\phi_{1,0}=\lim_{n\to\infty}n^{-3}\sum_{t=1}^{n}(\omega_{t}-\bar\omega)^{2}$ and $\phi_{1,1}=\lim_{n\to\infty}n^{-3}\sum_{t=2}^{n}(\omega_{t-1}-\bar\omega)(\omega_{t}-\bar\omega)$ with $\omega_{t}=t$ and $\bar\omega=n^{-1}\sum_{t=1}^{n}\omega_{t}=(n+1)/2$. Then $\phi_{1,0}=\phi_{1,1}=4^{-1}\int_{0}^{1}(2s-1)^{2}\,{\rm d}s=1/12$. We complete the proof of part (ii). $\Box$
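The constant $1/12$ can be checked numerically. The following sketch (plain Python, illustrative only; the variable names are ours, not the paper's) evaluates the finite-$n$ versions of $\phi_{1,0}$ and $\phi_{1,1}$ with $\omega_t=t$ and $\bar\omega=(n+1)/2$:

```python
# Finite-n evaluation of phi_{1,0} and phi_{1,1} from part (ii):
#   phi_{1,0} = lim n^{-3} sum_{t=1}^{n} (omega_t - omega_bar)^2
#   phi_{1,1} = lim n^{-3} sum_{t=2}^{n} (omega_{t-1} - omega_bar)(omega_t - omega_bar)
# with omega_t = t and omega_bar = (n+1)/2; both limits equal 1/12.
n = 10_000
om_bar = (n + 1) / 2
phi10 = sum((t - om_bar) ** 2 for t in range(1, n + 1)) / n**3
phi11 = sum((t - 1 - om_bar) * (t - om_bar) for t in range(2, n + 1)) / n**3
print(round(phi10, 4), round(phi11, 4))  # both approach 1/12 ~ 0.0833
```

Since $\sum_{t=1}^{n}(t-\bar\omega)^2=n(n^2-1)/12$, the lag-0 quantity equals $(1-n^{-2})/12$ exactly, and the lag-1 quantity differs from it by $O(n^{-1})$.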
Define
\[
T^{*}_{n}=\sum_{k=0}^{K}2\{\hat\gamma_{1}(k)-\hat\gamma_{2}(k)\}\gamma(k). \quad (26)
\]
Notice that
\[
\hat\gamma_{1}(k)-\hat\gamma_{2}(k)
=\frac{1}{N}\sum_{t=N+1}^{2N-k}\{(Y_{t}-\mu)(Y_{t+k}-\mu)-\gamma(k)\}
-\frac{1}{N}\sum_{t=1}^{N-k}\{(Y_{t}-\mu)(Y_{t+k}-\mu)-\gamma(k)\}
+\frac{\bar Y-\mu}{N}\bigg(\sum_{t=N+1}^{N+k}+\sum_{t=2N-k+1}^{2N}-\sum_{t=1}^{k}-\sum_{t=N-k+1}^{N}\bigg)(Y_{t}-\mu)
+\frac{\bar Y-\mu}{N}\bigg(\sum_{t=1}^{N}-\sum_{t=N+1}^{2N}\bigg)(Y_{t}-\mu),
\]
where $N=\lfloor n/2\rfloor$. Recall that $y_{t,k}=2\{(Y_{t}-\mu)(Y_{t+k}-\mu)-\gamma(k)\}\,\mathrm{sgn}(k+t-N-1/2)$ for each $t$ and $k$. Then
\[
\hat\gamma_{1}(k)-\hat\gamma_{2}(k)
=\frac{1}{2N}\sum_{t=1}^{2N-k}y_{t,k}-\frac{1}{2N}\sum_{t=N-k+1}^{N}y_{t,k}
+\underbrace{\frac{\bar Y-\mu}{N}\bigg(\sum_{t=N+1}^{N+k}+\sum_{t=2N-k+1}^{2N}-\sum_{t=1}^{k}-\sum_{t=N-k+1}^{N}\bigg)(Y_{t}-\mu)}_{(\bar Y-\mu)R_{1,k}}
+\underbrace{\frac{\bar Y-\mu}{N}\bigg(\sum_{t=1}^{N}-\sum_{t=N+1}^{2N}\bigg)(Y_{t}-\mu)}_{(\bar Y-\mu)R_{2,k}}
=\frac{1}{2N}\sum_{t=1}^{2N-k}y_{t,k}-\frac{1}{2N}\sum_{t=N-k+1}^{N}y_{t,k}+(\bar Y-\mu)R_{k},\qquad R_{k}=R_{1,k}+R_{2,k}.
\]
Notice that $Q_{t}=\sum_{k=0}^{K}\xi_{t,k}$ with $\xi_{t,k}=2y_{t,k}\gamma(k)$ for each $t=1,\ldots,2N-K$. It holds that
\[
T^{*}_{n}=\underbrace{\frac{1}{2N}\sum_{k=0}^{K}\sum_{t=1}^{2N-k}\xi_{t,k}-\frac{1}{2N}\sum_{k=0}^{K}\sum_{t=N-k+1}^{N}\xi_{t,k}}_{T^{**}_{n}}
+2(\bar Y-\mu)\sum_{k=0}^{K}R_{k}\gamma(k), \quad (27)
\]
where
\[
T^{**}_{n}=\frac{1}{2N}\sum_{t=1}^{2N-K}\sum_{k=0}^{K}\xi_{t,k}+\frac{1}{2N}\sum_{k=0}^{K}\bigg(\sum_{t=2N-K+1}^{2N-k}\xi_{t,k}-\sum_{t=N-k+1}^{N}\xi_{t,k}\bigg) \quad (28)
=\underbrace{\frac{1}{2N}\sum_{t=1}^{2N-K}Q_{t}}_{\rm I}
+\underbrace{\frac{1}{2N}\sum_{k=0}^{K}\bigg(\sum_{t=2N-K+1}^{2N-k}\xi_{t,k}-\sum_{t=N-k+1}^{N}\xi_{t,k}\bigg)}_{\rm II}.
\]
In the sequel, we use $C$ to denote a generic positive finite constant that may be different in different uses. For two sequences of positive numbers $\{a_q\}$ and $\{b_q\}$, we write $a_q\lesssim b_q$ or $b_q\gtrsim a_q$ if there exists a positive uniform constant $c$ such that $a_q/b_q\le c$ for any $q$. We write $a_q\asymp b_q$ if and only if $a_q\lesssim b_q$ and $b_q\lesssim a_q$ hold simultaneously. We first present the following result.

Proposition 3.
Let $K_{*}=1+K$. Under the null hypothesis $H_{0}$ with Conditions 1--3 being satisfied, if $K_{*}=o\{n^{\xi(\beta,s)}\}$ with $\xi(\beta,s)$ defined as (9), then it holds that
\[
d_{n}:=\sup_{u\in\mathbb{R}}\bigg|\,\mathbb{P}(\sqrt{n}\,T^{**}_{n}\le u)-\Phi\bigg(\frac{2Nu}{B_{2N-K}\sqrt{n}}\bigg)\bigg|\to 0
\]
as $n\to\infty$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution.

To construct Proposition 3, we need to analyze the two terms I and II in (28), respectively. Recall $\mathbb{E}(Q_{t})=0$ and $\mathbb{E}(\xi_{t,k})=0$ under $H_{0}$. Lemma 1 gives the Berry--Esseen bound for
\[
\Delta_{m}=\sup_{u\in\mathbb{R}}\bigg|\,\mathbb{P}\bigg(\frac{1}{B_{m}}\sum_{t=1}^{m}Q_{t}\le u\bigg)-\Phi(u)\bigg| \quad (29)
\]
with $B_{m}^{2}=\mathbb{E}\{(\sum_{t=1}^{m}Q_{t})^{2}\}$, and Lemma 2 gives an upper bound for the tail probability of the term II in (28). Notice that ${\rm II}=0$ when $K=0$. Our Lemma 2 focuses on the non-trivial case with $K\ge 1$.

Lemma 1. Let $K_{*}=1+K$. Under the null hypothesis $H_{0}$ with Conditions 1--3 being satisfied, it holds that
\[
\Delta_{m}\lesssim K_{*}^{s-1}m^{-(s-2)/2}+K_{*}^{s}m^{-\beta(s-2)/(2\beta+2)}
\]
for any $m\ge 1$, provided that $K_{*}^{s}m^{-\beta(s-2)/(2\beta+2)}=o(1)$, where $\beta_{1}=\beta(s-2)/\{s(s-1)\}$.

For each $t$ and $\tau\ge 0$, denote by $\tilde{\mathcal{F}}_{-\infty}^{t}$ and $\tilde{\mathcal{F}}_{t+\tau}^{\infty}$ the $\sigma$-fields generated by $\{Q_{u}\}_{u\le t}$ and $\{Q_{u}\}_{u\ge t+\tau}$, respectively. Recall $\mathcal{F}_{-\infty}^{t}$ and $\mathcal{F}_{t+\tau}^{\infty}$ are the $\sigma$-fields generated by $\{Y_{u}\}_{u\le t}$ and $\{Y_{u}\}_{u\ge t+\tau}$, respectively. It follows from the definition of the new process $\{Q_{t}\}$ that $\tilde{\mathcal{F}}_{-\infty}^{t}\subset\mathcal{F}_{-\infty}^{t+K}$ and $\tilde{\mathcal{F}}_{t+\tau}^{\infty}\subset\mathcal{F}_{t+\tau}^{\infty}$. For each $\tau\ge 0$, it holds that
\[
\alpha_{Q}(\tau):=\sup_{t}\sup_{A\in\tilde{\mathcal{F}}_{-\infty}^{t},\,B\in\tilde{\mathcal{F}}_{t+\tau}^{\infty}}|\mathbb{P}(AB)-\mathbb{P}(A)\mathbb{P}(B)|
\le\sup_{t}\sup_{A\in\mathcal{F}_{-\infty}^{t+K},\,B\in\mathcal{F}_{t+\tau}^{\infty}}|\mathbb{P}(AB)-\mathbb{P}(A)\mathbb{P}(B)|
=\alpha(|\tau-K|_{+}), \quad (30)
\]
where $|\cdot|_{+}$ denotes the positive part of $\cdot$. Since $\alpha(\tau)\to 0$ as $\tau\to\infty$, then $\alpha_{Q}(\tau)\to 0$ as $\tau\to\infty$, which indicates that the new process $\{Q_{t}\}$ is also $\alpha$-mixing with $\alpha$-mixing coefficients $\alpha_{Q}(\cdot)$. By Condition 1 and the Cauchy--Schwarz inequality, we have $\mathbb{E}(|(Y_{t}-\mu)(Y_{t+k}-\mu)|^{s})\le c$. When $K$ is finite, Theorem 2 of Sunklodas (1984) shows that $\Delta_{m}\lesssim m^{-\beta(s-2)/(2\beta+2)}$. Since the tuning parameter $K$ may diverge with $n$, Theorem 2 of Sunklodas (1984) cannot be applied directly. Lemma 1 here extends Theorem 2 of Sunklodas (1984) to the more general triangular array case. The proof of Lemma 1 is given in the supplementary material.

Lemma 2.
Let $K_{*}=1+K$. Under the null hypothesis $H_{0}$ with Conditions 1 and 2 being satisfied, then
\[
\mathbb{P}\bigg\{\bigg|\frac{1}{2N}\sum_{k=0}^{K}\bigg(\sum_{t=2N-K+1}^{2N-k}\xi_{t,k}-\sum_{t=N-k+1}^{N}\xi_{t,k}\bigg)\bigg|>\varepsilon\bigg\}
\lesssim K_{*}\exp\{-CK_{*}^{-2}(\varepsilon n)^{2}\}+K_{*}^{s}n(\varepsilon n)^{-s}+K_{*}^{(\beta+1)s/(\beta+s)+1}n(\varepsilon n)^{-(\beta+1)s/(\beta+s)}
\]
for any $\varepsilon>0$ such that $\varepsilon n/K_{*}\to\infty$.

Proof. By the triangle inequality, it holds that
\[
\mathbb{P}\bigg\{\bigg|\frac{1}{2N}\sum_{k=0}^{K}\bigg(\sum_{t=2N-K+1}^{2N-k}\xi_{t,k}-\sum_{t=N-k+1}^{N}\xi_{t,k}\bigg)\bigg|>\varepsilon\bigg\}
\le\sum_{k=0}^{K}\mathbb{P}\bigg(\bigg|\sum_{t=2N-K+1}^{2N-k}\xi_{t,k}\bigg|>\frac{\varepsilon N}{K_{*}}\bigg)
+\sum_{k=0}^{K}\mathbb{P}\bigg(\bigg|\sum_{t=N-k+1}^{N}\xi_{t,k}\bigg|>\frac{\varepsilon N}{K_{*}}\bigg) \quad (31)
\]
for any $\varepsilon>0$. We will use the Fuk--Nagaev inequality to bound the terms on the right-hand side of (31). For each fixed $k=0,\ldots,K$, similar to (30), we know $\{\xi_{t,k}\}$ is also an $\alpha$-mixing process with $\alpha$-mixing coefficients $\alpha_{k}(\tau)\le\alpha(|\tau-k|_{+})$ for all $\tau\ge 0$. It follows from Condition 1 and Lemma 2 of Chang, Tang and Wu (2013) that $\mathbb{P}(|\xi_{t,k}|>x)\le Cx^{-s}$ for any $x>0$. By Theorem 6.2 of Rio (2017), we have
\[
\mathbb{P}\bigg(\bigg|\sum_{t=2N-K+1}^{2N-k}\xi_{t,k}\bigg|>\frac{\varepsilon N}{K_{*}}\bigg)
\lesssim\bigg\{\frac{(\varepsilon n)^{2}}{CrK_{*}^{2}}\bigg\}^{-r/2}+K_{*}^{s-1}r^{s-1}n(\varepsilon n)^{-s}+K_{*}^{(\beta+1)s/(\beta+s)}r^{\beta(s-1)/(\beta+s)}n(\varepsilon n)^{-(\beta+1)s/(\beta+s)} \quad (32)
\]
and
\[
\mathbb{P}\bigg(\bigg|\sum_{t=N-k+1}^{N}\xi_{t,k}\bigg|>\frac{\varepsilon N}{K_{*}}\bigg)
\lesssim\bigg\{\frac{(\varepsilon n)^{2}}{CrK_{*}^{2}}\bigg\}^{-r/2}+K_{*}^{s-1}r^{s-1}n(\varepsilon n)^{-s}+K_{*}^{(\beta+1)s/(\beta+s)}r^{\beta(s-1)/(\beta+s)}n(\varepsilon n)^{-(\beta+1)s/(\beta+s)}
\]
for any $r\ge 2$ and $\varepsilon>0$ such that $\varepsilon n/(rK_{*})\ge c_{*}$, where $c_{*}$ is a uniform positive constant. Therefore, (31) leads to
\[
\mathbb{P}\bigg\{\bigg|\frac{1}{2N}\sum_{k=0}^{K}\bigg(\sum_{t=2N-K+1}^{2N-k}\xi_{t,k}-\sum_{t=N-k+1}^{N}\xi_{t,k}\bigg)\bigg|>\varepsilon\bigg\}
\lesssim K_{*}\bigg\{\frac{(\varepsilon n)^{2}}{CrK_{*}^{2}}\bigg\}^{-r/2}+K_{*}^{s}r^{s-1}n(\varepsilon n)^{-s}+K_{*}^{(\beta+1)s/(\beta+s)+1}r^{\beta(s-1)/(\beta+s)}n(\varepsilon n)^{-(\beta+1)s/(\beta+s)}.
\]
Notice that $(1+x^{-1})^{-x}\to e^{-1}$ as $x\to\infty$. With a sufficiently large but fixed $r$, we have
\[
\mathbb{P}\bigg\{\bigg|\frac{1}{2N}\sum_{k=0}^{K}\bigg(\sum_{t=2N-K+1}^{2N-k}\xi_{t,k}-\sum_{t=N-k+1}^{N}\xi_{t,k}\bigg)\bigg|>\varepsilon\bigg\}
\lesssim K_{*}\exp\{-CK_{*}^{-2}(\varepsilon n)^{2}\}+K_{*}^{s}n(\varepsilon n)^{-s}+K_{*}^{(\beta+1)s/(\beta+s)+1}n(\varepsilon n)^{-(\beta+1)s/(\beta+s)}
\]
for any $\varepsilon>0$ such that $\varepsilon n/K_{*}\to\infty$. We complete the proof of Lemma 2. $\Box$

Proof of Proposition 3. For any $u\in\mathbb{R}$ and $\varepsilon_{1}>0$ such that $\varepsilon_{1}\sqrt{n}/K_{*}\to\infty$, we have
\[
\mathbb{P}(\sqrt{n}\,T^{**}_{n}>u)\le\mathbb{P}\bigg(\frac{\sqrt{n}}{2N}\sum_{t=1}^{2N-K}Q_{t}>u-\varepsilon_{1}\bigg)
+\mathbb{P}\bigg\{\frac{\sqrt{n}}{2N}\bigg|\sum_{k=0}^{K}\bigg(\sum_{t=2N-K+1}^{2N-k}\xi_{t,k}-\sum_{t=N-k+1}^{N}\xi_{t,k}\bigg)\bigg|>\varepsilon_{1}\bigg\}
\le 1-\Phi\bigg\{\frac{2N(u-\varepsilon_{1})}{B_{2N-K}\sqrt{n}}\bigg\}+\Delta_{2N-K} \quad (33)
+CK_{*}\exp(-CK_{*}^{-2}\varepsilon_{1}^{2}n)+CK_{*}^{s}n(\varepsilon_{1}^{2}n)^{-s/2}+CK_{*}^{(\beta+1)s/(\beta+s)+1}n(\varepsilon_{1}^{2}n)^{-(\beta+1)s/(2\beta+2s)}
\le 1-\Phi\bigg(\frac{2Nu}{B_{2N-K}\sqrt{n}}\bigg)+C\varepsilon_{1}+\Delta_{2N-K}+CK_{*}\exp(-CK_{*}^{-2}\varepsilon_{1}^{2}n)+CK_{*}^{s}n(\varepsilon_{1}^{2}n)^{-s/2}+CK_{*}^{(\beta+1)s/(\beta+s)+1}n(\varepsilon_{1}^{2}n)^{-(\beta+1)s/(2\beta+2s)}
\]
for any $\varepsilon_{1}>0$ such that $\varepsilon_{1}\sqrt{n}/K_{*}\to\infty$. On the other hand, analogous to (33), we have
\[
\mathbb{P}(\sqrt{n}\,T^{**}_{n}>u)\ge 1-\Phi\bigg(\frac{2Nu}{B_{2N-K}\sqrt{n}}\bigg)-C\varepsilon_{1}-\Delta_{2N-K}-CK_{*}\exp(-CK_{*}^{-2}\varepsilon_{1}^{2}n)-CK_{*}^{s}n(\varepsilon_{1}^{2}n)^{-s/2}-CK_{*}^{(\beta+1)s/(\beta+s)+1}n(\varepsilon_{1}^{2}n)^{-(\beta+1)s/(2\beta+2s)}
\]
for any $\varepsilon_{1}>0$ such that $\varepsilon_{1}\sqrt{n}/K_{*}\to\infty$. Therefore,
\[
d_{n}=\sup_{u\in\mathbb{R}}\bigg|\mathbb{P}(\sqrt{n}\,T^{**}_{n}\le u)-\Phi\bigg(\frac{2Nu}{B_{2N-K}\sqrt{n}}\bigg)\bigg|
\lesssim\varepsilon_{1}+\Delta_{2N-K}+K_{*}\exp(-CK_{*}^{-2}\varepsilon_{1}^{2}n)+K_{*}^{s}n(\varepsilon_{1}^{2}n)^{-s/2}+K_{*}^{(\beta+1)s/(\beta+s)+1}n(\varepsilon_{1}^{2}n)^{-(\beta+1)s/(2\beta+2s)}
\]
for any $\varepsilon_{1}>0$ such that $\varepsilon_{1}\sqrt{n}/K_{*}\to\infty$. Since $K_{*}=o\{n^{\xi(\beta,s)}\}$ with $\xi(\beta,s)$ defined as (9), there exists a suitable selection of $\varepsilon_{1}=o(1)$ such that $d_{n}=o(1)$. We complete the proof of Proposition 3. $\Box$

To prove Theorem 1, we need the following lemma.
Lemma 3. Let $K_{*}=1+K$. Under the null hypothesis $H_{0}$ with Conditions 1 and 2 being satisfied, then
\[
\max_{0\le k\le K}\mathbb{P}\{|\hat\gamma_{1}(k)-\gamma(k)|>\varepsilon\}\lesssim\exp(-CK_{*}^{-2}n\varepsilon^{2})+K_{*}^{s-1}n(\varepsilon n)^{-s}+n(\varepsilon n)^{-(\beta+1)s/(\beta+s)}
\]
and
\[
\max_{0\le k\le K}\mathbb{P}\{|\hat\gamma_{2}(k)-\gamma(k)|>\varepsilon\}\lesssim\exp(-CK_{*}^{-2}n\varepsilon^{2})+K_{*}^{s-1}n(\varepsilon n)^{-s}+n(\varepsilon n)^{-(\beta+1)s/(\beta+s)}
\]
for any $\varepsilon=o(K_{*})$ such that $\varepsilon n\to\infty$ and $\varepsilon=o(n^{\beta/s})$.

Proof. Recall that
\[
\hat\gamma_{2}(k)-\gamma(k)=\frac{1}{N}\sum_{t=1}^{N-k}\{(Y_{t+k}-\mu)(Y_{t}-\mu)-\gamma(k)\}
-(\bar Y-\mu)\bigg\{\frac{1}{N}\sum_{t=1}^{N-k}(Y_{t}-\mu)+\frac{1}{N}\sum_{t=1}^{N-k}(Y_{t+k}-\mu)\bigg\}+\frac{N-k}{N}(\bar Y-\mu)^{2}.
\]
Same as (32), it holds that
\[
\max_{0\le k\le K}\mathbb{P}\bigg\{\bigg|\frac{1}{N}\sum_{t=1}^{N-k}\{(Y_{t+k}-\mu)(Y_{t}-\mu)-\gamma(k)\}\bigg|>\varepsilon\bigg\}
\lesssim\bigg\{\frac{(\varepsilon n)^{2}}{CrK_{*}^{2}n}\bigg\}^{-r/2}+K_{*}^{s-1}r^{s-1}n(\varepsilon n)^{-s}+nr^{\beta(s-1)/(\beta+s)}(\varepsilon n)^{-(\beta+1)s/(\beta+s)}
\]
for any $r\ge 2$ and $\varepsilon>0$ with $\varepsilon n/r\ge c_{*}$, where $c_{*}$ is a uniform positive constant. With sufficiently large $r$, we have
\[
\max_{0\le k\le K}\mathbb{P}\bigg\{\bigg|\frac{1}{N}\sum_{t=1}^{N-k}\{(Y_{t+k}-\mu)(Y_{t}-\mu)-\gamma(k)\}\bigg|>\varepsilon\bigg\}
\lesssim\exp(-CK_{*}^{-2}n\varepsilon^{2})+K_{*}^{s-1}n(\varepsilon n)^{-s}+n(\varepsilon n)^{-(\beta+1)s/(\beta+s)}.
\]
Applying Theorem 6.2 of Rio (2017) again, we have
\[
\mathbb{P}(|\bar Y-\mu|\ge\varepsilon)\lesssim\bigg(\frac{n\varepsilon^{2}}{Cr}\bigg)^{-r/2}+nr^{\beta(2s-1)/(\beta+2s)}(\varepsilon n)^{-s(\beta+1)/(\beta+2s)}
\]
for any $r\ge 2$ and $\varepsilon>0$. With sufficiently large $r$, we have
\[
\mathbb{P}(|\bar Y-\mu|\ge\varepsilon)\lesssim\exp(-Cn\varepsilon^{2})+n(\varepsilon n)^{-s(\beta+1)/(\beta+2s)} \quad (34)
\]
for any $\varepsilon>0$. Analogously, we have
\[
\mathbb{P}\bigg\{\bigg|\frac{1}{N}\sum_{t=1}^{N-k}(Y_{t}-\mu)\bigg|\ge\varepsilon\bigg\}\lesssim\exp(-Cn\varepsilon^{2})+n(\varepsilon n)^{-s(\beta+1)/(\beta+2s)}
\]
and
\[
\mathbb{P}\bigg\{\bigg|\frac{1}{N}\sum_{t=1}^{N-k}(Y_{t+k}-\mu)\bigg|\ge\varepsilon\bigg\}\lesssim\exp(-Cn\varepsilon^{2})+n(\varepsilon n)^{-s(\beta+1)/(\beta+2s)}
\]
for any $\varepsilon>0$. Therefore, it holds that
\[
\max_{0\le k\le K}\mathbb{P}\{|\hat\gamma_{2}(k)-\gamma(k)|>\varepsilon\}\lesssim\exp(-CK_{*}^{-2}n\varepsilon^{2})+K_{*}^{s-1}n(\varepsilon n)^{-s}+n(\varepsilon n)^{-(\beta+1)s/(\beta+s)}
\]
for any $\varepsilon=o(K_{*})$ such that $\varepsilon n\to\infty$ and $\varepsilon=o(n^{\beta/s})$. Similarly, we can prove the other result. We complete the proof of Lemma 3. $\Box$

Proof of Theorem 1. Recall $T_{n}=\sum_{k=0}^{K}\hat\gamma_{1}(k)^{2}$. For each $u\in\mathbb{R}$, we have
\[
\mathbb{P}(\sqrt{n}\,T_{n}>u)=\mathbb{P}\bigg[\sqrt{n}\sum_{k=0}^{K}\{\hat\gamma_{1}(k)^{2}-\hat\gamma_{2}(k)^{2}\}>\tilde u\bigg]
\]
with $\tilde u=u-\sqrt{n}\sum_{k=0}^{K}\hat\gamma_{2}(k)^{2}$. Notice that
\[
\sum_{k=0}^{K}\{\hat\gamma_{1}(k)^{2}-\hat\gamma_{2}(k)^{2}\}=T^{*}_{n}+\sum_{k=0}^{K}\{\hat\gamma_{1}(k)-\gamma(k)\}^{2}-\sum_{k=0}^{K}\{\hat\gamma_{2}(k)-\gamma(k)\}^{2},
\]
where $T^{*}_{n}$ is defined as (26). It follows from (27) that
\[
\mathbb{P}(\sqrt{n}\,T_{n}>u)\le\mathbb{P}(\sqrt{n}\,T^{*}_{n}>\tilde u-\delta)+\mathbb{P}\bigg[\sqrt{n}\sum_{k=0}^{K}\{\hat\gamma_{1}(k)-\gamma(k)\}^{2}>\delta\bigg]
\le\mathbb{P}(\sqrt{n}\,T^{**}_{n}>\tilde u-2\delta)+\mathbb{P}\bigg\{2\sqrt{n}\bigg|(\bar Y-\mu)\sum_{k=0}^{K}R_{k}\gamma(k)\bigg|>\delta\bigg\} \quad (35)
+\mathbb{P}\bigg[\sqrt{n}\sum_{k=0}^{K}\{\hat\gamma_{1}(k)-\gamma(k)\}^{2}>\delta\bigg]
\]
for any $\delta>0$. Write $\theta=\max_{0\le k\le K}|\gamma(k)|$. Same as (34), we have
\[
\mathbb{P}\bigg\{\bigg|\sum_{k=0}^{K}R_{k}\gamma(k)\bigg|>K_{*}\theta\epsilon\bigg\}\le\sum_{k=0}^{K}\mathbb{P}(|R_{k}|>\epsilon)\le K_{*}\exp(-Cn\epsilon^{2})+K_{*}n(\epsilon n)^{-s(\beta+1)/(\beta+2s)}
\]
for any $\epsilon>0$. Let $\delta=K_{*}\theta\epsilon$ with
\[
\max\big[n^{-1/2}\log^{1/2}K_{*},\,K_{*}^{(\beta+2s)/\{s(\beta+1)\}}n^{-\beta(2s-1)/\{s(\beta+1)\}}\big]=o(\epsilon).
\]
Then we have
\[
\mathbb{P}\bigg\{2\sqrt{n}\bigg|(\bar Y-\mu)\sum_{k=0}^{K}R_{k}\gamma(k)\bigg|>\delta\bigg\}=o(1).
\]
Since $K_{*}=o\{n^{\xi(\beta,s)}\}$ with $\xi(\beta,s)$ defined as (9), there exists $\epsilon=o(1)$ satisfying $\delta=o(1)$ and $\max[n^{-1/2}\log^{1/2}K_{*},\,K_{*}^{(\beta+2s)/\{s(\beta+1)\}}n^{-\beta(2s-1)/\{s(\beta+1)\}}]=o(\epsilon)$. Hence,
\[
\mathbb{P}(\sqrt{n}\,T_{n}>u)\le\mathbb{P}(\sqrt{n}\,T^{**}_{n}>\tilde u-2\delta)+\mathbb{P}\bigg[\sqrt{n}\sum_{k=0}^{K}\{\hat\gamma_{1}(k)-\gamma(k)\}^{2}>\delta\bigg]+o(1) \quad (36)
\]
for some $\delta=o(1)$. On the other hand, by Lemma 3, we also have
\[
\mathbb{P}\bigg[\sqrt{n}\sum_{k=0}^{K}\{\hat\gamma_{1}(k)-\gamma(k)\}^{2}>\delta\bigg]=o(1)
\]
with such suitable selection of $\epsilon$. From (36) and Proposition 3, we have
\[
\mathbb{P}(\sqrt{n}\,T_{n}>u)\le\mathbb{P}(\sqrt{n}\,T^{**}_{n}>\tilde u-2\delta)+o(1)
\le 1-\Phi\bigg\{\frac{2N(\tilde u-2\delta)}{B_{2N-K}\sqrt{n}}\bigg\}+d_{n}+o(1)
\le 1-\Phi\bigg(\frac{2N\tilde u}{B_{2N-K}\sqrt{n}}\bigg)+C\delta+d_{n}+o(1)
\]
for $d_{n}$ defined in Proposition 3. Letting $\delta\to 0$, we have
\[
\mathbb{P}(\sqrt{n}\,T_{n}>u)\le 1-\Phi\bigg(\frac{2N\tilde u}{B_{2N-K}\sqrt{n}}\bigg)+d_{n}+o(1).
\]
Analogously, we can show
\[
\mathbb{P}(\sqrt{n}\,T_{n}>u)\ge 1-\Phi\bigg(\frac{2N\tilde u}{B_{2N-K}\sqrt{n}}\bigg)-d_{n}-o(1).
\]
We complete the proof of Theorem 1. $\Box$

Proof of Theorem 4

Lemma 4. Under the null hypothesis $H_{0}$ with Conditions 1 and 2 being satisfied, then
\[
\bigg|\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg[\frac{1}{m}\sum_{t=j+1}^{m}\{Q_{t}Q_{t-j}-\mathbb{E}(Q_{t}Q_{t-j})\}\bigg]\bigg|=O_{p}\bigg(\frac{b_{m}K_{*}^{5/2}}{m^{1/2}}\bigg).
\]
Proof.
Let $\zeta_{t,j}=Q_{t+j}Q_{t}-\mathbb{E}(Q_{t+j}Q_{t})$. It follows from the Markov inequality that
\[
\sup_{0\le j\le m-1}\sup_{1\le t\le m-j}\mathbb{P}(|\zeta_{t,j}|>x)\le\frac{\mathbb{E}(|\zeta_{t,j}|^{s/2})}{x^{s/2}}
\]
for any $x>0$. By Jensen's inequality and the Cauchy--Schwarz inequality, Condition 1 leads to $\mathbb{E}(|\zeta_{t,j}|^{s/2})\lesssim\mathbb{E}(|Q_{t+j}Q_{t}|^{s/2})\lesssim\{\mathbb{E}(|Q_{t+j}|^{s})\}^{1/2}\{\mathbb{E}(|Q_{t}|^{s})\}^{1/2}\lesssim K_{*}^{s}$. By the triangle inequality and Davydov's inequality, we have
\[
\mathrm{Var}\bigg(\frac{1}{m}\sum_{t=1}^{m-j}\zeta_{t,j}\bigg)\le\frac{1}{m^{2}}\sum_{t_{1}=1}^{m-j}\sum_{t_{2}=1}^{m-j}|\mathrm{Cov}(\zeta_{t_{1},j},\zeta_{t_{2},j})|
\lesssim\frac{1}{m^{2}}\sum_{t=1}^{m-j}\mathbb{E}(|\zeta_{t,j}|^{2})
+\underbrace{\frac{1}{m^{2}}\sum_{1\le t_{2}-t_{1}\le j}|\mathrm{Cov}(\zeta_{t_{1},j},\zeta_{t_{2},j})|}_{{\rm I}(j)}
+\underbrace{\frac{1}{m^{2}}\sum_{t_{2}-t_{1}>j}|\mathrm{Cov}(\zeta_{t_{1},j},\zeta_{t_{2},j})|}_{{\rm II}(j)}. \quad (37)
\]
If $1\le t_{2}-t_{1}\le j$, by the triangle inequality, it holds that
\[
|\mathrm{Cov}(\zeta_{t_{1},j},\zeta_{t_{2},j})|
\le|\mathbb{E}(Q_{t_{1}}Q_{t_{1}+j}Q_{t_{2}}Q_{t_{2}+j})-\mathbb{E}(Q_{t_{1}}Q_{t_{2}})\mathbb{E}(Q_{t_{1}+j}Q_{t_{2}+j})|
+|\mathbb{E}(Q_{t_{1}}Q_{t_{2}})\mathbb{E}(Q_{t_{1}+j}Q_{t_{2}+j})-\mathbb{E}(Q_{t_{1}}Q_{t_{1}+j})\mathbb{E}(Q_{t_{2}}Q_{t_{2}+j})|
\le|\mathbb{E}(Q_{t_{1}}Q_{t_{1}+j}Q_{t_{2}}Q_{t_{2}+j})-\mathbb{E}(Q_{t_{1}}Q_{t_{2}})\mathbb{E}(Q_{t_{1}+j}Q_{t_{2}+j})|
+|\mathbb{E}(Q_{t_{1}}Q_{t_{2}})\mathbb{E}(Q_{t_{1}+j}Q_{t_{2}+j})|+|\mathbb{E}(Q_{t_{1}}Q_{t_{1}+j})\mathbb{E}(Q_{t_{2}}Q_{t_{2}+j})|.
\]
It follows from Davydov's inequality that $|\mathbb{E}(Q_{k_{1}}Q_{k_{2}})|\lesssim K_{*}^{2}\{\alpha_{Q}(|k_{1}-k_{2}|)\}^{1-2/s}$ and $|\mathbb{E}(Q_{t_{1}}Q_{t_{1}+j}Q_{t_{2}}Q_{t_{2}+j})-\mathbb{E}(Q_{t_{1}}Q_{t_{2}})\mathbb{E}(Q_{t_{1}+j}Q_{t_{2}+j})|\lesssim K_{*}^{4}\{\alpha_{Q}(t_{1}+j-t_{2})\}^{1-4/s}$. Thus,
\[
{\rm I}(j)\lesssim\frac{K_{*}^{4}}{m^{2}}\sum_{\tau=1}^{j}|m-j-\tau|_{+}\{\alpha_{Q}(j-\tau)\}^{1-4/s}
+\frac{K_{*}^{4}}{m^{2}}\sum_{\tau=1}^{j}|m-j-\tau|_{+}\{\alpha_{Q}(\tau)\}^{2(s-2)/s}
+\frac{K_{*}^{4}}{m^{2}}(m-j)\min(j,m-j)\{\alpha_{Q}(j)\}^{2(s-2)/s}
\lesssim\frac{K_{*}^{4}}{m}\sum_{\tau=0}^{j}\{\alpha_{Q}(\tau)\}^{1-4/s},
\]
where the last inequality is based on the facts $\alpha_{Q}(j)\le\alpha_{Q}(\tau)$ for any $\tau\le j$ and $(m-j)\min(j,m-j)\le\sum_{\tau=1}^{j}m$.

If $j+1\le t_{2}-t_{1}\le m-j-1$, by Davydov's inequality, we have $|\mathrm{Cov}(\zeta_{t_{1},j},\zeta_{t_{2},j})|\lesssim K_{*}^{4}\{\alpha_{Q}(t_{2}-t_{1}-j)\}^{1-4/s}$, which implies that
\[
{\rm II}(j)\lesssim\frac{K_{*}^{4}}{m^{2}}\sum_{\tau=1}^{m-j-1}|m-j-\tau|_{+}\{\alpha_{Q}(\tau)\}^{1-4/s}\lesssim\frac{K_{*}^{4}}{m}\sum_{\tau=1}^{m-j-1}\{\alpha_{Q}(\tau)\}^{1-4/s}.
\]
Recall that $\alpha_{Q}(\tau)\le\alpha(|\tau-K|_{+})$ and $\sum_{\tau=1}^{\infty}\{\alpha(\tau)\}^{1-4/s}<\infty$. Then ${\rm I}(j)+{\rm II}(j)\lesssim K_{*}^{5}m^{-1}$. Together with (37), we have $\mathrm{Var}(m^{-1}\sum_{t=1}^{m-j}\zeta_{t,j})\lesssim K_{*}^{5}m^{-1}$. Notice that
\[
S(m):=\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg[\frac{1}{m}\sum_{t=j+1}^{m}\{Q_{t}Q_{t-j}-\mathbb{E}(Q_{t}Q_{t-j})\}\bigg]
=\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg(\frac{1}{m}\sum_{t=1}^{m-j}\zeta_{t,j}\bigg)
=:\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\eta_{j}.
\]
It follows from Jensen's inequality that
\[
\mathbb{E}\{|S(m)|^{2}\}\le\bigg\{\sum_{j=0}^{m-1}\bigg|\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg|\bigg\}\bigg\{\sum_{j=0}^{m-1}\bigg|\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg|\,\mathrm{Var}(\eta_{j})\bigg\}\lesssim\frac{b_{m}^{2}K_{*}^{5}}{m},
\]
where the last step is based on the fact $\sum_{j=0}^{m-1}|\mathcal{K}(j/b_{m})|\asymp b_{m}$. By the Markov inequality, we have
\[
\bigg|\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg[\frac{1}{m}\sum_{t=j+1}^{m}\{Q_{t}Q_{t-j}-\mathbb{E}(Q_{t}Q_{t-j})\}\bigg]\bigg|=O_{p}\bigg(\frac{b_{m}K_{*}^{5/2}}{m^{1/2}}\bigg).
\]
We complete the proof of Lemma 4. $\Box$

Proof of Theorem 4. Let $m=2N-K$. Define
\[
\hat V_{m}=\sum_{j=-m+1}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)G_{j}
\]
with $G_{j}=m^{-1}\sum_{t=j+1}^{m}\mathbb{E}(Q_{t}Q_{t-j})$ if $j\ge 0$ and $G_{j}=m^{-1}\sum_{t=-j+1}^{m}\mathbb{E}(Q_{t+j}Q_{t})$ otherwise. Our proof includes two steps: (i) to show $\tilde V_{m}-\hat V_{m}=o_{p}(1)$, and (ii) to show $\hat V_{m}-V_{m}=o(1)$. It follows from these two results that $\tilde V_{m}-V_{m}=o_{p}(1)$. Due to $B_{\ell}^{2}=\ell V_{\ell}$ and $B_{\ell}^{2}\ge c\ell$ for all large $\ell$, we know $V_{m}$ is uniformly bounded away from zero.
Thus $\tilde V_{m}-V_{m}=o_{p}(1)$ implies $\tilde V_{m}/V_{m}\overset{P}{\longrightarrow}1$. Notice that
\[
\tilde V_{m}-\hat V_{m}=\underbrace{\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)(\tilde G_{j}-G_{j})}_{\rm I}
+\underbrace{\sum_{j=-m+1}^{-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)(\tilde G_{j}-G_{j})}_{\rm II}.
\]
To show $\tilde V_{m}-\hat V_{m}=o_{p}(1)$, it suffices to show ${\rm I}=o_{p}(1)$ and ${\rm II}=o_{p}(1)$, respectively. Recall that $\tilde G_{j}=m^{-1}\sum_{t=j+1}^{m}\tilde Q_{t}\tilde Q_{t-j}$ if $j\ge 0$ and $\tilde G_{j}=m^{-1}\sum_{t=-j+1}^{m}\tilde Q_{t+j}\tilde Q_{t}$ otherwise. For any $j\ge 0$, it holds that
\[
\tilde G_{j}=\frac{1}{m}\sum_{t=j+1}^{m}Q_{t}Q_{t-j}+\frac{1}{m}\sum_{t=j+1}^{m}(\tilde Q_{t}-Q_{t})Q_{t-j}+\frac{1}{m}\sum_{t=j+1}^{m}Q_{t}(\tilde Q_{t-j}-Q_{t-j})+\frac{1}{m}\sum_{t=j+1}^{m}(\tilde Q_{t}-Q_{t})(\tilde Q_{t-j}-Q_{t-j}),
\]
which implies that
\[
{\rm I}=\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg[\frac{1}{m}\sum_{t=j+1}^{m}\{Q_{t}Q_{t-j}-\mathbb{E}(Q_{t}Q_{t-j})\}\bigg]
+\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg\{\frac{1}{m}\sum_{t=j+1}^{m}(\tilde Q_{t}-Q_{t})Q_{t-j}\bigg\} \quad (38)
+\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg\{\frac{1}{m}\sum_{t=j+1}^{m}Q_{t}(\tilde Q_{t-j}-Q_{t-j})\bigg\}
+\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg\{\frac{1}{m}\sum_{t=j+1}^{m}(\tilde Q_{t}-Q_{t})(\tilde Q_{t-j}-Q_{t-j})\bigg\}.
\]
Notice that
\[
|\tilde Q_{t}-Q_{t}|\lesssim\sum_{k=0}^{K}|(Y_{t}-\mu)(Y_{t+k}-\mu)||\hat\gamma(k)-\gamma(k)|
+|\bar Y-\mu|\sum_{k=0}^{K}|Y_{t+k}-\mu||\hat\gamma(k)|+|\bar Y-\mu||Y_{t}-\mu|\sum_{k=0}^{K}|\hat\gamma(k)| \quad (39)
+|\bar Y-\mu|^{2}\sum_{k=0}^{K}|\hat\gamma(k)|+\sum_{k=0}^{K}|\hat\gamma(k)-\gamma(k)||\hat\gamma(k)+\gamma(k)|.
\]
Define $\mathcal{E}(\varepsilon)=\{\max_{0\le k\le K}|\hat\gamma(k)-\gamma(k)|\le\varepsilon\}$. Restricted on $\mathcal{E}(\varepsilon)$, it holds that
\[
\max_{1\le t\le m}|\tilde Q_{t}-Q_{t}|\lesssim K_{*}\varepsilon\max_{1\le t\le n}|Y_{t}-\mu|^{2}+|\bar Y-\mu|\max_{1\le t\le n}|Y_{t}-\mu|+K_{*}\varepsilon|\bar Y-\mu|\max_{1\le t\le n}|Y_{t}-\mu|+|\bar Y-\mu|^{2}+K_{*}\varepsilon|\bar Y-\mu|^{2}+K_{*}\varepsilon^{2}+\varepsilon. \quad (40)
\]
Same as Lemma 3, we have
\[
\mathbb{P}\{\mathcal{E}(\varepsilon)^{c}\}\le\sum_{k=0}^{K}\mathbb{P}\{|\hat\gamma(k)-\gamma(k)|>\varepsilon\}\lesssim K_{*}\exp(-CK_{*}^{-2}n\varepsilon^{2})+K_{*}^{s}n(\varepsilon n)^{-s}+K_{*}n(\varepsilon n)^{-(\beta+1)s/(\beta+s)}.
\]
If we select $\varepsilon=C_{*}(n^{-1}K_{*}^{2}\log K_{*})^{1/2}$ for some sufficiently large $C_{*}>0$, we then have $K_{*}\exp(-CK_{*}^{-2}n\varepsilon^{2})\le K_{*}^{-CC_{*}^{2}}$. If $K_{*}=o(n^{1/2-1/s})$, it holds that $K_{*}^{s}n(\varepsilon n)^{-s}+K_{*}n(\varepsilon n)^{-(\beta+1)s/(\beta+s)}\to 0$, so $\mathbb{P}\{\mathcal{E}(\varepsilon)^{c}\}=o(1)$. Same as (34), under Condition 5, it holds that $|\bar Y-\mu|=O_{p}[n^{-\beta(2s-1)/\{2s(\beta+1)\}}]$. By the Markov inequality, Condition 5 implies that $\max_{1\le t\le n}|Y_{t}-\mu|=O_{p}\{n^{1/(2s)}\}$. It follows from (40) that
\[
\max_{1\le t\le m}|\tilde Q_{t}-Q_{t}|=O_{p}\bigg(\frac{K_{*}^{2}\log^{1/2}K_{*}}{n^{1/2-1/s}}\bigg),
\]
which implies that
\[
\bigg|\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg\{\frac{1}{m}\sum_{t=j+1}^{m}(\tilde Q_{t}-Q_{t})Q_{t-j}\bigg\}\bigg|
\le\bigg\{\sum_{j=0}^{m-1}\bigg|\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg|\bigg\}\bigg(\frac{1}{m}\sum_{t=1}^{m}|Q_{t}|\bigg)\max_{1\le t\le m}|\tilde Q_{t}-Q_{t}|
=O_{p}\bigg(\frac{b_{m}K_{*}^{3}\log^{1/2}K_{*}}{n^{1/2-1/s}}\bigg).
\]
Similarly, we have
\[
\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg\{\frac{1}{m}\sum_{t=j+1}^{m}Q_{t}(\tilde Q_{t-j}-Q_{t-j})\bigg\}=O_{p}\bigg(\frac{b_{m}K_{*}^{3}\log^{1/2}K_{*}}{n^{1/2-1/s}}\bigg),
\]
\[
\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg\{\frac{1}{m}\sum_{t=j+1}^{m}(\tilde Q_{t}-Q_{t})(\tilde Q_{t-j}-Q_{t-j})\bigg\}=O_{p}\bigg(\frac{b_{m}K_{*}^{4}\log K_{*}}{n^{1-2/s}}\bigg).
\]
Due to $b_{m}K_{*}^{3}\log^{1/2}K_{*}=o(n^{1/2-1/s})$, by (38), we have
\[
{\rm I}=\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg[\frac{1}{m}\sum_{t=j+1}^{m}\{Q_{t}Q_{t-j}-\mathbb{E}(Q_{t}Q_{t-j})\}\bigg]+o_{p}(1).
\]
As shown in Lemma 4, if $b_{m}^{2}K_{*}^{5}=o(m)$, it holds that
\[
\sum_{j=0}^{m-1}\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)\bigg[\frac{1}{m}\sum_{t=j+1}^{m}\{Q_{t}Q_{t-j}-\mathbb{E}(Q_{t}Q_{t-j})\}\bigg]=O_{p}\bigg(\frac{b_{m}K_{*}^{5/2}}{m^{1/2}}\bigg)=o_{p}(1).
\]
Thus ${\rm I}=o_{p}(1)$. Similarly, we have ${\rm II}=o_{p}(1)$, which implies $\tilde V_{m}-\hat V_{m}=o_{p}(1)$. We construct the result (i).

We begin to construct result (ii). Notice that
\[
V_{m}=\frac{1}{m}\sum_{t=1}^{m}\mathbb{E}(Q_{t}^{2})+\frac{2}{m}\sum_{t_{1}=1}^{m-1}\sum_{t_{2}=t_{1}+1}^{m}\mathbb{E}(Q_{t_{1}}Q_{t_{2}})
=\frac{1}{m}\sum_{t=1}^{m}\mathbb{E}(Q_{t}^{2})+\frac{2}{m}\sum_{t=1}^{m-1}\sum_{j=1}^{m-t}\mathbb{E}(Q_{t}Q_{t+j})
=G_{0}+2\sum_{j=1}^{m-1}G_{j}.
\]
Recall $\mathcal{K}(\cdot)$ is symmetric with $\mathcal{K}(0)=1$, and $G_{-j}=G_{j}$ for any $j>0$. It follows from Davydov's inequality that $|G_{j}|\lesssim m^{-1}(m-j)K_{*}^{2}\{\alpha_{Q}(j)\}^{1-2/s}$ for any $j\ge 1$. Thus, by the triangle inequality,
\[
|\hat V_{m}-V_{m}|\le 2\sum_{j=1}^{m-1}\bigg|\mathcal{K}\bigg(\frac{j}{b_{m}}\bigg)-1\bigg||G_{j}|
\lesssim K_{*}^{2}\sum_{j=1}^{m-1}\frac{j}{b_{m}}\,\frac{m-j}{m}\{\alpha_{Q}(j)\}^{1-2/s}
\lesssim\frac{K_{*}^{2}}{b_{m}}\bigg[\sum_{j=1}^{K}j+\sum_{j=K+1}^{m-1}j\{\alpha(|j-K|_{+})\}^{1-2/s}\bigg]
\lesssim\frac{K_{*}^{4}}{b_{m}}+\frac{K_{*}^{3}}{b_{m}}\sum_{j=1}^{m}j^{1-\beta(s-2)/s}=o(1),
\]
provided that $K_{*}^{4}/b_{m}\to 0$ and $\beta>2s/(s-2)$. Hence $\tilde V_{m}/V_{m}\overset{P}{\longrightarrow}1$, which completes the proof of Theorem 4. $\Box$
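The kernel-weighted estimator $\tilde V_m=\sum_{|j|<m}\mathcal{K}(j/b_m)\tilde G_j$ analyzed above can be sketched in a few lines. The snippet below is an illustration only, not the paper's implementation: it uses the Bartlett kernel $\mathcal{K}(x)=(1-|x|)_+$ as one concrete choice satisfying $\mathcal{K}(0)=1$ and symmetry, and checks it on i.i.d. data, for which the long-run variance coincides with the ordinary variance.

```python
import random

def lrv_kernel(q, b):
    """Kernel long-run variance estimate sum_{|j|<m} K(j/b) * G_j, using a
    Bartlett kernel K(x) = max(0, 1 - |x|).  G_j is the lag-j sample
    autocovariance of q; symmetry (G_{-j} = G_j) gives the factor 2."""
    m = len(q)
    qbar = sum(q) / m

    def gamma(j):  # G_j = m^{-1} sum_t (q_t - qbar)(q_{t-j} - qbar)
        return sum((q[t] - qbar) * (q[t - j] - qbar) for t in range(j, m)) / m

    v = gamma(0)
    for j in range(1, m):
        w = max(0.0, 1.0 - j / b)
        if w == 0.0:
            break  # Bartlett weights vanish for j >= b
        v += 2.0 * w * gamma(j)
    return v

random.seed(1)
q = [random.gauss(0.0, 1.0) for _ in range(4000)]
print(round(lrv_kernel(q, b=10), 2))  # close to 1 for i.i.d. N(0,1) data
```

For serially dependent $q_t$ the bandwidth $b$ plays the role of $b_m$ above, trading bias (small $b$) against variance (large $b$).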
More technical proofs and simulation results are provided in the supplementary material.
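For a quick numerical feel for the dichotomy underlying the test — the sample autocovariance $\hat\gamma(0)$ stays bounded for an $I(0)$ series but grows at rate $n$ for an $I(1)$ series (Proposition 1 with $d=1$) — the following self-contained sketch contrasts a stationary AR(1) with a random walk built from the same innovations (all names here are illustrative, not from the paper's R implementation):

```python
import random

def acov0(y):
    """Sample autocovariance at lag 0: n^{-1} sum_t (y_t - ybar)^2."""
    n = len(y)
    ybar = sum(y) / n
    return sum((v - ybar) ** 2 for v in y) / n

random.seed(7)
for n in (400, 1600, 6400):
    e = [random.gauss(0.0, 1.0) for _ in range(n)]
    ar, rw, s = [e[0]], [e[0]], e[0]
    for t in range(1, n):
        ar.append(0.5 * ar[-1] + e[t])  # I(0): AR(1), gamma(0) finite
        s += e[t]
        rw.append(s)                    # I(1): random walk, gamma_hat(0) = O_p(n)
    print(n, round(acov0(ar), 2), round(acov0(rw) / n, 3))
```

The AR(1) column stabilizes near the population value $1/(1-0.5^2)\approx 1.33$, while for the random walk only the rescaled quantity $\hat\gamma(0)/n$ remains of order one, matching the $n^{-2d+1}\hat\gamma(k)$ normalization in Proposition 1.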
References
Andrews, D. W. K. (1991). Heteroskedasticity and autocorrelation consistent covariance matrixestimation.
Econometrica , , 817–858.Brockwell, P. J. and Davis, R. A. (1991). Time Series: Theory and Methods (2nd Edi.). Springer,New York.Carrasco, M. and Chen, X. (2002). Mixing and moment properties of various GARCH andstochastic volatility models.
Econometric Theory , , 17–39.Cavaliere, G. and Taylor, A. M. R. (2007). Testing for unit roots in time series models withnon-stationary volatility. Journal of Econometrics , , 919–947.Chan, N. H. and Wei, C. Z. (1988). Limiting distributions of least squares estimates of unstableautoregressive processes. The Annals of Statistics , , 367–401.Chang, J., Tang, C. Y. and Wu, Y. (2013). Marginal empirical likelihood and sure independencefeature screening. The Annals of Statistics , , 2123–2148.Chang, J., Zheng, C., Zhou, W.-X. and Zhou, W. (2017). Simulation-based hypothesis testingof high dimensional means under covariance heterogeneity. Biometrics , , 1300–1310.DeJong, D.N., Nankervis, J.C., Savin, N.E. and Whiteman, C.H. (1989). Integration versustrend stationarity in macroeconomic time series. Working Paper No.89-99, Department ofEconomics, University of Iowa.Den Haan, W. J. and Levin, A. (1997). A practitioner guide to robust covariance matrix es-timation. Handbook of Statistics , vol. 15. G. S. Maddala and C. R. Rao, eds. Amsterdam:Elsevier., Chapter 12, pp. 291–341.Dickey, D. A. and Fuller, W.A. (1979). Distribution of the estimators for autoregressive timeseries with a unit root.
Journal of the American Statistical Association , , 427–431.Dickey, D. A. and Fuller, W. A. (1981). Likelihood ratio statistics for autoregressive time serieswith a unit root. Econometrica , , 1057–1072.32lliott, G., Rothenberg, T. J., and Stock, J. H. (1996). Efficient tests for an autoregressive unitroot. Econometrica , , 813–836.Fan, J. and Yao, Q. (2003). Nonlinear Time Series: Nonparametric and Parametric Methods .NewYork: Springer.Hatanaka, M. (1996).
Time-Series-Based Econometrics: Unit Roots and Cointegration . OxfordUniversity Press.Hylleberg, S., Engle, R. F., Granger, C. F. J. and Yoo, S. (1990). Seasonal integration andcointegration.
Journal of Econometrics , , 215–238.Kiefer, N. M., Vogelsang, T. J. and Bunzel, H. (2000). Simple robust testing of regressionhypotheses. Econometrica , , 695–714.Kwiatkowski, D., Phillips, P. C. B., Schmidt, P. and Shin, Y. (1992). Testing the null hypothesisof stationarity against the alternative of a unit root. Journal of Econometrics , , 159–178.Lahiri, S. N. (2003). Resampling methods for dependent data . NewYork: Springer.Maddala, G. S. and Kim, I.-M. (1998).
Unit roots, Cointegration and Structural Change . Cam-bridge University Press.Nelson, C.R. and Plosser, C.I. (1982). Trends versus random walks in macroeconomic timeseries: some evidence and implications.
Journal of Monetary Economics, 139–162.
Newey, W. K. and West, K. D. (1987). A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica, 703–708.
Paparoditis, E. and Politis, D. N. (2005). Bootstrapping unit root tests for autoregressive time series. Journal of the American Statistical Association, 545–553.
Peña, D. and Poncela, P. (2006). Nonstationary dynamic factor analysis. Journal of Statistical Planning and Inference, 1237–1257.
Perron, P. (1988). Trends and random walks in macroeconomic time series: further evidence from a new approach. Journal of Economic Dynamics and Control, 297–332.
Pesaran, M. H. (2007). A simple panel unit root test in the presence of cross-section dependence. Journal of Applied Econometrics, 265–312.
Phillips, P. C. B. (1987). Time series regression with a unit root. Econometrica, 277–301.
Phillips, P. C. B. and Perron, P. (1988). Testing for a unit root in time series regression. Biometrika, 335–346.
Phillips, P. C. B. and Xiao, Z. (1998). A primer on unit root testing. Journal of Economic Surveys, 423–469.
Rio, E. (2017). Asymptotic Theory of Weakly Dependent Random Processes. Springer.
Rho, Y. and Shao, X. (2019). Bootstrap-assisted unit root testing with piecewise locally stationary errors.
Econometric Theory, 143–166.
Robinson, P. M. (1994). Efficient tests of nonstationary hypotheses. Journal of the American Statistical Association, 1420–1437.
Said, S. E. and Dickey, D. A. (1984). Testing for unit roots in autoregressive-moving average models of unknown order. Biometrika, 599–608.
Stock, J. H. (1994). Unit roots, structural breaks and trends. In R. F. Engle and D. L. McFadden (eds), Handbook of Econometrics, Vol. 4. Elsevier Science.
Sunklodas, J. (1984). Rate of convergence in the central limit theorem for random variables with strong mixing.
Lithuanian Mathematical Journal, 182–190.
Tanaka, K. (2017). Time Series Analysis: Nonstationary and Noninvertible Distribution Theory. New York: Wiley.
Xiao, Z. and Phillips, P. C. B. (1997). An ADF coefficient test for ARMA models with unknown orders.
Cowles Foundation Discussion Paper No. 1161.
Zivot, E. and Andrews, D. W. K. (1992). Further evidence on the great crash, the oil-price shock, and the unit root hypothesis.
Journal of Business & Economic Statistics, 251–270.

SUPPLEMENTARY MATERIAL
Supplementary Material for "A Power One Test for Unit Roots Based on Sample Autocovariances" by Chang, Cheng and Yao.

A Proof of Lemma 1
Without loss of generality, we assume $\mu = 0$. Let $\mathcal{T} = \{1, \ldots, m-1\}$. Select $h \in \mathcal{T}$ and $q \in \mathcal{T} \setminus \{1\}$ satisfying $2qh \leq m+1$. We will specify $h$ and $q$ later. For any $t = 1, \ldots, m$, let $A_t = Q_t/B_m$. Write $Z_m = \sum_{t=1}^m A_t$. For any $j = 1, \ldots, q$, let
$$ W_{t,j} = \sum_{p=t-jh+1}^{t+jh-1} A_p \quad \text{and} \quad Z_{t,j} = Z_m - W_{t,j}. $$
Here we adopt the convention $A_p = 0$ if $p \leq 0$ or $p \geq m+1$. We also write $Z_{t,0} = Z_m$. Denote by $i = \sqrt{-1}$ the imaginary unit. For $r = 2, \ldots, q$, let
$$ \varphi_{t,r-1} = E\bigg( A_t \prod_{l=1}^{r-1} \psi_{t,l} \bigg) \quad \text{and} \quad \eta_{t,r} = e^{-iuW_{t,r}} - 1, \quad \text{where } \psi_{t,l} = e^{iu(Z_{t,l-1} - Z_{t,l})} - 1. $$
Define $f_m(u) = E(e^{iuZ_m})$. Notice that $f_m'(u) = i \sum_{t=1}^m E(A_t e^{iuZ_{t,0}})$. Then it holds that
$$ \begin{aligned} f_m'(u) &= i \sum_{t=1}^m E\big( A_t e^{iuZ_{t,0}} - A_t e^{iuZ_{t,1}} + A_t e^{iuZ_{t,1}} \big) \\ &= i \sum_{t=1}^m E\big[ A_t \{ e^{iu(Z_{t,0} - Z_{t,1})} - 1 \} e^{iuZ_{t,1}} \big] + i \sum_{t=1}^m E\big( A_t e^{iuZ_{t,1}} \big) \\ &= i \sum_{t=1}^m E\big( A_t e^{iuZ_{t,1}} \big) + i \sum_{t=1}^m \sum_{r=2}^q E\bigg( A_t \bigg[ \prod_{l=1}^{r-1} \{ e^{iu(Z_{t,l-1} - Z_{t,l})} - 1 \} \bigg] e^{iuZ_{t,r}} \bigg) \qquad (\mathrm{E.1}) \\ &\quad + i \sum_{t=1}^m E\bigg( A_t \bigg[ \prod_{l=1}^{q} \{ e^{iu(Z_{t,l-1} - Z_{t,l})} - 1 \} \bigg] e^{iuZ_{t,q}} \bigg) \\ &= i \sum_{t=1}^m E\big( A_t e^{iuZ_{t,1}} \big) + i \sum_{t=1}^m \sum_{r=2}^q E\bigg( A_t e^{iuZ_{t,r}} \prod_{l=1}^{r-1} \psi_{t,l} \bigg) + i \sum_{t=1}^m E\bigg( A_t e^{iuZ_{t,q}} \prod_{l=1}^{q} \psi_{t,l} \bigg). \end{aligned} $$
On the other hand, we know $E(e^{iuZ_{t,r}}) = f_m(u)\, E(\eta_{t,r} + 1) + E[\{\eta_{t,r} - E(\eta_{t,r})\} e^{iuZ_m}]$ for any $r = 2, \ldots, q$. Therefore, by (E.1), we can reformulate $f_m'(u)$ as follows:
$$ \begin{aligned} f_m'(u) &= i \bigg( \sum_{t=1}^m \varphi_{t,1} \bigg) f_m(u) + i \bigg\{ \sum_{t=1}^m \varphi_{t,1} E(\eta_{t,2}) + \sum_{t=1}^m \sum_{r=3}^q \varphi_{t,r-1} E(\eta_{t,r} + 1) \bigg\} f_m(u) \\ &\quad + i \sum_{t=1}^m \sum_{r=2}^q \varphi_{t,r-1} E\big[ \{\eta_{t,r} - E(\eta_{t,r})\} e^{iuZ_m} \big] + i \sum_{t=1}^m E\big( A_t e^{iuZ_{t,1}} \big) \qquad (\mathrm{E.2}) \\ &\quad + i \sum_{t=1}^m \sum_{r=2}^q \bigg\{ E\bigg( A_t e^{iuZ_{t,r}} \prod_{l=1}^{r-1} \psi_{t,l} \bigg) - E\bigg( A_t \prod_{l=1}^{r-1} \psi_{t,l} \bigg) E\big( e^{iuZ_{t,r}} \big) \bigg\} \\ &\quad + i \sum_{t=1}^m E\bigg( A_t e^{iuZ_{t,q}} \prod_{l=1}^{q} \psi_{t,l} \bigg). \end{aligned} $$
Recall $K_* = 1 + K$. Let
$$ \tilde{\alpha} = \sum_{\tau=1}^m \{\alpha_Q(\tau)\}^{(s-2)/s} \quad \text{and} \quad d = 4^s c\, K_*^s, \qquad (\mathrm{E.3}) $$
where $c$ is specified in Condition 1. Since $\alpha_Q(\tau) \leq \alpha\{(\tau - K)_+\}$, it follows from Condition 2 that $\tilde{\alpha} \lesssim K_*$. To construct Lemma 1, we need Lemmas L1–L5 as follows.

Lemma L1.
Under the null hypothesis $H_0$ with Conditions 1 and 2 being satisfied, it holds that
$$ i \sum_{t=1}^m \varphi_{t,1} = -u + \theta_1(u) $$
for any $u \in \mathbb{R}$, where
$$ |\theta_1(u)| \lesssim |u|\, B_m^{-1} m^{1/2} d^{1/s} K_*^{1/2} \{\alpha_Q(h)\}^{(s-2)/(2s)} + |u|^{s-1} B_m^{-s} m\, d\, h^{s-1} $$
with $d$ specified in (E.3).

Proof. Notice that $e^{iv} - 1 - iv = i \int_0^v (e^{it} - 1)\,\mathrm{d}t$ for any $v \in \mathbb{R}$. By the mean value theorem, we have $e^{iv} - 1 - iv = i\{e^{i\zeta_1(v)} - 1\}v$ for some $\zeta_1(v) \in (0, v)$ if $v > 0$ and $\zeta_1(v) \in (v, 0)$ if $v < 0$. On the other hand, it follows from the Taylor expansion that $e^{iv} - 1 - iv = -2^{-1} e^{i\zeta_2(v)} v^2$ for some $\zeta_2(v) \in (0, v)$ if $v > 0$ and $\zeta_2(v) \in (v, 0)$ if $v < 0$. Thus,
$$ e^{iv} - 1 - iv = \big[ i\{e^{i\zeta_1(v)} - 1\} v \big]^{3-s} \bigg\{ -\frac{e^{i\zeta_2(v)} v^2}{2} \bigg\}^{s-2} = i^{3-s} \{e^{i\zeta_1(v)} - 1\}^{3-s} e^{i(s-2)\zeta_2(v)} \frac{(-1)^{s-2}}{2^{s-2}} \cdot v^{s-1} =: \vartheta(v) \cdot v^{s-1}, $$
where $|\vartheta(v)| \leq 2^{5-2s}$ for any $v \in \mathbb{R}$, which implies that $e^{iv} = 1 + iv + \vartheta(v) \cdot v^{s-1}$ for any $v \in \mathbb{R}$. Recall $\varphi_{t,1} = E(A_t \psi_{t,1})$ with $\psi_{t,1} = e^{iu(Z_{t,0} - Z_{t,1})} - 1$, and $W_{t,1} = Z_{t,0} - Z_{t,1}$. Then
$$ i \sum_{t=1}^m \varphi_{t,1} = i \sum_{t=1}^m E\big[ A_t \{ e^{iu(Z_{t,0} - Z_{t,1})} - 1 \} \big] = -u \sum_{t=1}^m E(A_t W_{t,1}) + i \sum_{t=1}^m E\big\{ A_t W_{t,1}^{s-1} \vartheta(u W_{t,1}) \big\} u^{s-1}. \qquad (\mathrm{E.4}) $$
Notice that W t, = P t + h − p = t − h +1 A p . It follows from the Jensen’s inequality that E ( | W t, | s ) ≤ (2 h − s − t + h − X p = t − h +1 E ( | A p | s ) ≤ (2 h − s max ≤ t ≤ m E ( | A t | s ) . Applying the H¨older’s inequality, we have m X t =1 E {| A t W s − t, ϑ ( uW t, ) |} ≤ − s m X t =1 { E ( | A t | s ) } /s { E ( | W t, | s ) } ( s − /s ≤ − s (2 h − s − m max ≤ t ≤ m E ( | A t | s ) . Write Z t, = ˆ Z t, + ˜ Z t, with ˆ Z t, = P p ≤ t − h A p and ˜ Z t, = P p ≥ t + h A p . It follows from theDavydov’s inequality that | E ( A t ˆ Z t, ) | ≤ { α Q ( h ) } ( s − / s { E ( | A t | s ) } /s { E ( ˆ Z t, ) } / . Using the Davydov’s inequality again, we have E ( ˆ Z t, ) ≤ X p ,p ≤ t − h { E ( | A p | s ) } /s { E ( | A p | s ) } /s { α Q ( | p − p | ) } ( s − /s ≤ | t − h | + (1 + 2 ˜ α ) (cid:26) max ≤ t ≤ m E ( | A t | s ) (cid:27) /s . Notice that A t = Q t /B m with Q t = P K k =0 ξ t,k where ξ t,k = y t,k sgn { γ ( k ) } and y t,k = 2 { Y t Y t + k − γ ( k ) } sgn( k + t − N − / E ( | A t | s ) ≤ s K s − ∗ B s m K X k =0 E {| Y t Y t + k − γ ( k ) | s }≤ s K s − ∗ B s m K X k =0 E ( | Y t Y t + k | s ) ≤ s K s ∗ c B s m = dB s m , (E.5)which implies m X t =1 E {| A t W s − t, ϑ ( uW t, ) |} ≤ − s h s − mdB s m and | E ( A t ˆ Z t, ) | ≤ √ d /s B m | t − h | / (1 + 2 ˜ α ) / { α Q ( h ) } ( s − / (2 s ) UPPLEMENTARY MATERIAL S4 . d /s B m m / K / ∗ { α Q ( h ) } ( s − / (2 s ) for any t = 1 , . . . , m . Analogously, we have | E ( A t ˜ Z t, ) | . d /s B m m / K / ∗ { α Q ( h ) } ( s − / (2 s ) for any t = 1 , . . . , m . Thus, (cid:12)(cid:12)(cid:12)(cid:12) m X t =1 { E ( A t ˆ Z t, ) + E ( A t ˜ Z t, ) } (cid:12)(cid:12)(cid:12)(cid:12) . d /s B m m / K / ∗ { α Q ( h ) } ( s − / (2 s ) . Since P mt =1 E ( A t Z m ) = 1, then P mt =1 E ( A t W t, ) = 1 − P mt =1 { E ( A t ˆ Z t, ) + E ( A t ˜ Z t, ) } . 
Together with (E.4), it holds that
$$ i \sum_{t=1}^m \varphi_{t,1} = -u + u \sum_{t=1}^m \{ E(A_t \hat{Z}_{t,1}) + E(A_t \tilde{Z}_{t,1}) \} + i \sum_{t=1}^m E\{ A_t W_{t,1}^{s-1} \vartheta(u W_{t,1}) \} u^{s-1} =: -u + \theta_1(u), $$
where $|\theta_1(u)| \lesssim |u|\, B_m^{-1} m^{1/2} d^{1/s} K_*^{1/2} \{\alpha_Q(h)\}^{(s-2)/(2s)} + |u|^{s-1} B_m^{-s} m\, d\, h^{s-1}$ for any $u \in \mathbb{R}$. We complete the proof of Lemma L1. □

Lemma L2.
Let U ∗ = B m / (32 hd /s ) . Under the null hypothesis H with Conditions and being satisfied, if | u | ≤ U ∗ and q { α Q ( h ) } /s ≤ , (E.6) then it holds that | ϕ t,r − | . d /s B m (cid:20) d /s h u r B m + r r { α Q ( h ) } ( s − /s (cid:21) ( r = 3 , . . . , q ) and | ϕ t, | ≤ | u | hd /s B m for any t = 1 , . . . , m , where d is specified in (E.3) .Proof. Let ˜ ψ t,l = exp { iu P t + lh − p = t +( l − h A p } − ψ t,l = exp { iu P t − ( l − hp = t − lh +1 A p } −
1. Recall ψ t,l = e iu ( Z t,l − − Z t,l ) −
1. Notice that Z t,l − − Z t,l = P t + lh − p = t +( l − h A p + P t − ( l − hp = t − lh +1 A p . By theinequality | e a + b − | ≤ | e a || e b − | + | e a − | for any a, b ∈ C , we have | ψ t,l | ≤ | ˜ ψ t,l | + | ˆ ψ t,l | , which UPPLEMENTARY MATERIAL S5 implies that | ϕ t,r − | ≤ E (cid:18) | A t | r − Y l =1 | ψ t,l | (cid:19) ≤ X τ ,...,τ r − ∈{ , } E (cid:18) | A t | r − Y l =1 | ˜ ψ t,l | τ l | ˆ ψ t,l | − τ l (cid:19) . (E.7)For given ( τ , . . . , τ r − ), define L := L ( τ , . . . , τ r − ) = { ≤ l ≤ r − τ l = 1 } . Then E (cid:18) | A t | r − Y l =1 | ˜ ψ t,l | τ l | ˆ ψ t,l | − τ l (cid:19) = E (cid:18) | A t | Y l ∈L | ˜ ψ t,l | Y l ∈L c | ˆ ψ t,l | (cid:19) . Pick δ = s / ( s − E (cid:18) | A t | Y l ∈L | ˜ ψ t,l | Y l ∈L c | ˆ ψ t,l | (cid:19) ≤ (cid:26) E (cid:18) | A t | δ Y l ∈L , l even | ˜ ψ t,l | δ Y l ∈L c , l even | ˆ ψ t,l | δ (cid:19)(cid:27) /δ × (cid:20) E (cid:26) Y l ∈L , l odd | ˜ ψ t,l | δ/ ( δ − Y l ∈L c , l odd | ˆ ψ t,l | δ/ ( δ − (cid:27)(cid:21) ( δ − /δ . Due to | ˜ ψ t,l | ≤ | ˆ ψ t,l | ≤ l = 1 , . . . , r −
1, by the Davydov’s inequality and theH¨older’s inequality, we have E (cid:18) | A t | δ Y l ∈L , l even | ˜ ψ t,l | δ Y l ∈L c , l even | ˆ ψ t,l | δ (cid:19) ≤ E (cid:0) | A t | δ (cid:1) E (cid:18) Y l ∈L , l even | ˜ ψ t,l | δ Y l ∈L c , l even | ˆ ψ t,l | δ (cid:19) + 6 × δ ( r − / { E ( | A t | s ) } δ/s { α Q ( h ) } ( s − δ ) /s ≤ { E ( | A t | s ) } δ/s Y l ∈L , l even E (cid:0) | ˜ ψ t,l | δ (cid:1) Y l ∈L c , l even E (cid:0) | ˆ ψ t,l | δ (cid:1) + 3 r × δ ( r − / { E ( | A t | s ) } δ/s { α Q ( h ) } ( s − δ ) /s and E (cid:26) Y l ∈L , l odd | ˜ ψ t,l | δ/ ( δ − Y l ∈L c , l odd | ˆ ψ t,l | δ/ ( δ − (cid:27) ≤ Y l ∈L , l odd E (cid:8) | ˜ ψ t,l | δ/ ( δ − (cid:9) Y l ∈L c , l odd E (cid:8) | ˆ ψ t,l | δ/ ( δ − (cid:9) + 3 r × δr/ { δ − } α Q ( h ) . UPPLEMENTARY MATERIAL S6
By (E.5) and the inequality ( x + y ) γ ≤ x γ + y γ for any x, y > γ ∈ (0 , E (cid:18) | A t | Y l ∈L | ˜ ψ t,l | Y l ∈L c | ˆ ψ t,l | (cid:19) ≤ d /s B m (cid:20) Y l ∈L , l even (cid:8) E (cid:0) | ˜ ψ t,l | δ (cid:1)(cid:9) /δ Y l ∈L c , l even (cid:8) E (cid:0) | ˆ ψ t,l | δ (cid:1)(cid:9) /δ + (3 r ) /δ r/ { α Q ( h ) } ( s − δ ) / ( s δ ) (cid:21) × (cid:18) Y l ∈L , l odd (cid:2) E (cid:8) | ˜ ψ t,l | δ/ ( δ − (cid:9)(cid:3) ( δ − /δ Y l ∈L c , l odd (cid:2) E (cid:8) | ˆ ψ t,l | δ/ ( δ − (cid:9)(cid:3) ( δ − /δ + (3 r ) ( δ − /δ r/ { α Q ( h ) } ( δ − /δ (cid:19) . (E.8)On the other hand, by the Taylor expansion, it holds that˜ ψ t,l = i exp (cid:26) icu t + lh − X p = t +( l − h A p (cid:27)(cid:26) u t + lh − X p = t +( l − h A p (cid:27) for some c ∈ (0 , δ = s / ( s − δ/ ( δ −
1) = s . It follows from the H¨older in-equality that { E ( | ˜ ψ t,l | δ ) } /δ ≤ { E ( | ˜ ψ t,l | s ) } /s and { E ( | ˆ ψ t,l | δ ) } /δ ≤ { E ( | ˆ ψ t,l | s ) } /s . Togetherwith the Jensen’s inequality and (E.5), we have E ( | ˜ ψ t,l | s ) ≤ | u | s E (cid:18)(cid:12)(cid:12)(cid:12)(cid:12) t + lh − X p = t +( l − h A p (cid:12)(cid:12)(cid:12)(cid:12) s (cid:19) ≤ | u | s h s − t + lh − X p = t +( l − h E ( | A p | s ) ≤ | u | s h s dB s m , (E.9)which implies { E ( | ˜ ψ t,l | s ) } /s ≤ | u | hd /s B − m . Analogously, we have { E ( | ˆ ψ t,l | s ) } /s ≤ | u | hd /s B − m .Notice that s ∈ (2 ,
3] and 0 ≤ α Q ( h ) ≤ / h ≥
0, then ( s − /s ≤ /s < ( s − /s and { α Q ( h ) } /s ≤ { α Q ( h ) } ( s − /s . Hence, if | u | ≤ U ∗ , (E.8) implies that E (cid:18) | A t | Y l ∈L | ˜ ψ t,l | Y l ∈L c | ˆ ψ t,l | (cid:19) ≤ d /s B m (cid:20)(cid:18) | u | hd /s B m (cid:19) ( r − / + (3 r ) ( s − /s r/ { α Q ( h ) } ( s − /s (cid:21) × (cid:20)(cid:18) | u | hd /s B m (cid:19) ( r − / + (3 r ) /s r/ { α Q ( h ) } /s (cid:21) ≤ d /s B m (cid:20)(cid:18) | u | hd /s B m (cid:19) r − + (3 r )2 r { α Q ( h ) } ( s − /s + (3 r ) ( s − /s ( r +2) / (cid:18) | u | hd /s B m (cid:19) ( r − / { α Q ( h ) } ( s − /s (cid:21) . UPPLEMENTARY MATERIAL S7
Since there are 2 r − different selections of ( τ , . . . , τ r − ), then (E.7) implies that | ϕ t,r − | . d /s B m (cid:20)(cid:18) | u | hd /s B m (cid:19) r − + r r { α Q ( h ) } ( s − /s + (cid:18) | u | hd /s B m (cid:19) ( r − / r ( s − /s r { α Q ( h ) } ( s − /s (cid:21) for any | u | ≤ U ∗ . Thus, if | u | ≤ U ∗ , we have | ϕ t,r − | . d /s B m (cid:20) d /s h u r B m + r ( s − /s r { α Q ( h ) } ( s − /s + r r { α Q ( h ) } ( s − /s (cid:21) for any r = 3 , . . . , q . Moreover, it follows from (E.6) that r r { α Q ( h ) } ( s − /s ≤ r − r { α Q ( h ) } ( s − /s for any r = 3 , . . . , q . Thus, it holds that | ϕ t,r − | . d /s B m (cid:20) d /s h u r B m + r r { α Q ( h ) } ( s − /s (cid:21) for any r = 3 , . . . , q . We have the first result of Lemma L2.Recall ϕ t, = E ( A t ψ t, ) and | ψ t, | ≤ | ˜ ψ t, | + | ˆ ψ t, | . By the H¨older’s inequality, (E.5) and(E.9), we have | ϕ t, | ≤ E ( | A t ˜ ψ t, | ) + E ( | A t ˆ ψ t, | ) ≤ (cid:8) E (cid:0) | A t ˜ ψ t, | s / (cid:1)(cid:9) /s + (cid:8) E (cid:0) | A t ˆ ψ t, | s / (cid:1)(cid:9) /s ≤ { E ( | A t | s ) } /s (cid:2)(cid:8) E (cid:0) | ˜ ψ t, | s (cid:1)(cid:9) /s + (cid:8) E (cid:0) | ˆ ψ t, | s (cid:1)(cid:9) /s (cid:3) ≤ | u | hd /s B m . We have the second result of Lemma L2. (cid:3)
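The covariance bound invoked repeatedly in the proofs above is Davydov's inequality for strongly mixing sequences. For the reader's convenience, we record the version used here (see, e.g., Rio, 2017; the constant 8 below is one standard normalization and is stated only for concreteness, since the arguments above use the bound only up to a universal constant): for random variables $X$ and $Y$ with $E(|X|^p) < \infty$ and $E(|Y|^q) < \infty$,
$$ |\mathrm{Cov}(X, Y)| \leq 8\, \alpha^{\,1 - 1/p - 1/q}\, \{E(|X|^p)\}^{1/p} \{E(|Y|^q)\}^{1/q}, \qquad \frac{1}{p} + \frac{1}{q} < 1, $$
where $\alpha$ denotes the strong mixing coefficient between the $\sigma$-fields generated by $X$ and $Y$. Taking $p = s$ and $q = 2$ yields the exponent $(s-2)/(2s)$ appearing in the bounds for $|E(A_t \hat{Z}_{t,1})|$, while taking $p = q = s$ yields the exponent $(s-2)/s$.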
Lemma L3.
Let U ∗ = B m / (32 hd /s ) . Under the null hypothesis H with Conditions and being satisfied, if | u | ≤ U ∗ , q ≥ m / and (E.6) is satisfied, then it holds that (cid:12)(cid:12)(cid:12)(cid:12) m X t =1 ϕ t, E ( η t, ) + m X t =1 q X r =3 ϕ t,r − E ( η t,r + 1) (cid:12)(cid:12)(cid:12)(cid:12) . md /s B m (cid:20) d /s h u B m + { α Q ( h ) } ( s − /s (cid:21) , q X r =2 (cid:12)(cid:12)(cid:12)(cid:12) m X t =1 ϕ t,r − E (cid:2) { η t,r − E ( η t,r ) } e iuZ m (cid:3)(cid:12)(cid:12)(cid:12)(cid:12) . d /s m / B m ( h / + ˜ α / ) (cid:20) d /s h u B m + { α Q ( h ) } ( s − /s (cid:21) , and m X t =1 (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,q q Y l =1 ψ t,l (cid:19)(cid:12)(cid:12)(cid:12)(cid:12) . m / d /s B m (cid:20) d /s h u B m + m / { α Q ( h ) } ( s − /s (cid:21) . UPPLEMENTARY MATERIAL S8 where ˜ α and d are specified in (E.3) .Proof. Recall that η t,r = e − iuW t,r − W t,r = P t + rh − p = t − rh +1 A p . By the triangle inequality, wehave (cid:12)(cid:12)(cid:12)(cid:12) m X t =1 ϕ t, E ( η t, ) + m X t =1 q X r =3 ϕ t,r − E ( η t,r + 1) (cid:12)(cid:12)(cid:12)(cid:12) ≤ m X t =1 | ϕ t, | E ( | η t, | ) + m X t =1 q X r =3 | ϕ t,r − | . By the Taylor expansion, it holds that η t, = − i exp( − icu P t +2 h − p = t − h +1 A p )( u P t +2 h − p = t − h +1 A p ) forsome c ∈ (0 , E ( | η t, | ) ≤ | u | t +2 h − X p = t − h +1 E ( | A p | ) ≤ h | u | d /s B m . It follows from Lemma L2 that (cid:12)(cid:12)(cid:12)(cid:12) m X t =1 ϕ t, E ( η t, ) + m X t =1 q X r =3 ϕ t,r − E ( η t,r + 1) (cid:12)(cid:12)(cid:12)(cid:12) . mh d /s u B m + md /s B m (cid:20) d /s h u B m + { α Q ( h ) } ( s − /s (cid:21) . md /s B m (cid:20) d /s h u B m + { α Q ( h ) } ( s − /s (cid:21) . We have the first result of Lemma L3.For any complex number a ∈ C , we denote by ¯ a the complex conjugate of a . Notice that | η t,r | ≤ t = 1 , . . . , m and r = 2 , . . . , q . For any r = 3 , . . . 
, q , by the Cauchy-Schwarzinequality, we have (cid:12)(cid:12)(cid:12)(cid:12) m X t =1 ϕ t,r − E (cid:2) { η t,r − E ( η t,r ) } e iuZ m (cid:3)(cid:12)(cid:12)(cid:12)(cid:12) ≤ E (cid:20)(cid:12)(cid:12)(cid:12)(cid:12) m X t =1 ϕ t,r − { η t,r − E ( η t,r ) } e iuZ m (cid:12)(cid:12)(cid:12)(cid:12) (cid:21) = E (cid:20)(cid:12)(cid:12)(cid:12)(cid:12) m X t =1 ϕ t,r − { η t,r − E ( η t,r ) } (cid:12)(cid:12)(cid:12)(cid:12) (cid:21) = m X t ,t =1 ϕ t ,r − ¯ ϕ t ,r − E [ { η t ,r − E ( η t ,r ) }{ ¯ η t ,r − E (¯ η t ,r ) } ] ≤ m X t ,t =1 | ϕ t ,r − || ϕ t ,r − | (cid:12)(cid:12) E [ { η t ,r − E ( η t ,r ) }{ ¯ η t ,r − E (¯ η t ,r ) } ] (cid:12)(cid:12) . m X t =1 | ϕ t,r − | + X t It follows from the Davydov’s inequality that (cid:12)(cid:12) E [ { η t ,r − E ( η t ,r ) }{ ¯ η t ,r − E (¯ η t ,r ) } ] (cid:12)(cid:12) . { E ( | η t ,r | s ) } /s { E ( | η t ,r | s ) } /s { α Q ( | t − t − rh + 2 | + ) } ( s − /s . { α Q ( | t − t − rh + 2 | + ) } ( s − /s for any t < t , which implies that X t Hence, together with (E.10), it holds that q X r =2 (cid:12)(cid:12)(cid:12)(cid:12) m X t =1 ϕ t,r − E (cid:2) { η t,r − E ( η t,r ) } e iuZ m (cid:3)(cid:12)(cid:12)(cid:12)(cid:12) . d /s m / B m ( h / + ˜ α / ) (cid:20) d /s h u B m + { α Q ( h ) } ( s − /s (cid:21) . We have the second result of Lemma L3.Notice that (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,q q Y l =1 ψ t,l (cid:19)(cid:12)(cid:12)(cid:12)(cid:12) ≤ E (cid:18) | A t | q Y l =1 | ψ t,l | (cid:19) . Applying the technique to bound E ( | A t | Q r − l =1 | ψ t,l | ), the upper bound of | ϕ t,r − | , stated in (E.7),with the restrictions (E.6) and 16 q ≥ m / , it holds that (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,q q Y l =1 ψ t,l (cid:19)(cid:12)(cid:12)(cid:12)(cid:12) . d /s B m (cid:20) d /s h u q B m + q q { α Q ( h ) } ( s − /s (cid:21) . 
d /s B m (cid:20) d /s h u m / B m + { α Q ( h ) } ( s − /s (cid:21) for any | u | ≤ U ∗ , which implies that m X t =1 (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,q q Y l =1 ψ t,l (cid:19)(cid:12)(cid:12)(cid:12)(cid:12) . m / d /s B m (cid:20) d /s h u B m + m / { α Q ( h ) } ( s − /s (cid:21) . Then we have the third result of Lemma L3. (cid:3) Lemma L4. Let U ∗ = B m / (32 hd /s ) . Under the null hypothesis H with Conditions and being satisfied, if | u | ≤ U ∗ and (E.6) is satisfied, then it holds that m X t =1 (cid:12)(cid:12) E (cid:0) A t e iuZ t, (cid:1)(cid:12)(cid:12) . md /s B m { α Q ( h ) } ( s − /s and q X r =2 m X t =1 (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t r − Y l =1 ψ t,l (cid:19) E (cid:0) e iuZ t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) . md /s B m { α Q ( h ) } ( s − /s . Proof. Recall that Z t, = ˆ Z t, + ˜ Z t, with ˆ Z t, = P t − hp =1 A p and ˜ Z t, = P mp = t + h A p . It followsfrom the Davydov’s inequality that (cid:12)(cid:12) E (cid:0) A t e iuZ t, (cid:1)(cid:12)(cid:12) . (cid:8) E (cid:0)(cid:12)(cid:12) A t e iu ˆ Z t, (cid:12)(cid:12) s (cid:1)(cid:9) /s { α Q ( h ) } ( s − /s = { E ( | A t | s ) } /s { α Q ( h ) } ( s − /s . UPPLEMENTARY MATERIAL S11 By (E.5), we have the first result of Lemma L4.Let ˆ Z t,r = P p ≤ t − rh A p and ˜ Z t,r = P p ≥ t + rh A p . Then Z t,r = ˆ Z t,r + ˜ Z t,r . Without loss ofgenerality we assume that ˆ Z t,r = 0 and ˜ Z t,r = 0. 
It follows from the triangle inequality that (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t r − Y l =1 ψ t,l (cid:19) E (cid:0) e iuZ t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) ≤ (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t e iu ˆ Z t,r r − Y l =1 ψ t,l (cid:19) E (cid:0) e iu ˜ Z t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iu ˆ Z t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t r − Y l =1 ψ t,l (cid:19) E (cid:0) e iu ˆ Z t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) E (cid:0) e iu ˜ Z t,r (cid:1)(cid:12)(cid:12) + (cid:12)(cid:12) E (cid:0) e iuZ t,r (cid:1) − E (cid:0) e iu ˆ Z t,r (cid:1) E (cid:0) e iu ˜ Z t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t r − Y l =1 ψ t,l (cid:19)(cid:12)(cid:12)(cid:12)(cid:12) ≤ (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t e iu ˆ Z t,r r − Y l =1 ψ t,l (cid:19) E (cid:0) e iu ˜ Z t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iu ˆ Z t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t r − Y l =1 ψ t,l (cid:19) E (cid:0) e iu ˆ Z t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12) E (cid:0) e iuZ t,r (cid:1) − E (cid:0) e iu ˆ Z t,r (cid:1) E (cid:0) e iu ˜ Z t,r (cid:1)(cid:12)(cid:12) | ϕ t,r − | . By the Davydov’s inequality, it holds that | E ( e iuZ t,r ) − E ( e iu ˆ Z t,r ) E ( e iu ˜ Z t,r ) | . α Q (2 rh ), (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t e iu ˆ Z t,r r − Y l =1 ψ t,l (cid:19) E (cid:0) e iu ˜ Z t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) . 
{ α Q ( h ) } ( s − /s (cid:26) E (cid:18) | A t | s r − Y l =1 | ψ t,l | s (cid:19)(cid:27) /s and (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iu ˆ Z t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t r − Y l =1 ψ t,l (cid:19) E (cid:0) e iu ˆ Z t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) . { α Q ( h ) } ( s − /s (cid:26) E (cid:18) | A t | s r − Y l =1 | ψ t,l | s (cid:19)(cid:27) /s . Recall ψ t,l = e iu ( Z t,l − − Z t,l ) − 1. Then (E.5) yields that E (cid:18) | A t | s r − Y l =1 | ψ t,l | s (cid:19) ≤ s ( r − E ( | A t | s ) ≤ s ( r − dB s m , UPPLEMENTARY MATERIAL S12 which implies that (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t e iu ˆ Z t,r r − Y l =1 ψ t,l (cid:19) E (cid:0) e iu ˜ Z t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) . r − d /s B m { α Q ( h ) } ( s − /s and (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iu ˆ Z t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t r − Y l =1 ψ t,l (cid:19) E (cid:0) e iu ˆ Z t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) . r − d /s B m { α Q ( h ) } ( s − /s . On the other hand, it follows from Lemma L2 that | ϕ t,r − | . d /s B m (cid:20) d /s h u r B m + r r { α Q ( h ) } ( s − /s (cid:21) ( r = 3 , . . . , q )and | ϕ t, | ≤ | u | hd /s B m for any t = 1 , . . . , m . Thus, q X r =2 m X t =1 (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t r − Y l =1 ψ t,l (cid:19) E (cid:0) e iuZ t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) . q md /s B m { α Q ( h ) } ( s − /s . By (E.6), we have 2 q { α Q ( h ) } /s ≤ − q < 1, which implies q X r =2 m X t =1 (cid:12)(cid:12)(cid:12)(cid:12) E (cid:18) A t e iuZ t,r r − Y l =1 ψ t,l (cid:19) − E (cid:18) A t r − Y l =1 ψ t,l (cid:19) E (cid:0) e iuZ t,r (cid:1)(cid:12)(cid:12)(cid:12)(cid:12) . md /s B m { α Q ( h ) } ( s − /s . We complete the proof of Lemma L4. (cid:3) Lemma L5. 
Let U ∗∗ = min (cid:26)(cid:18) CB s m mdh s − (cid:19) − / ( s − , a , a (cid:27) for some sufficiently large C > , where a , a , a , a , b and b will be defined in the proof below.Under the null hypothesis H with Conditions and being satisfied, if q ≥ m / , K ∗ = O ( h ) ,and (E.6) is satisfied, then it holds that ∆ m . a + a + a + a + b + b U ∗∗ + U − ∗∗ provided that B − m m / d /s K / ∗ { α Q ( h ) } ( s − / (2 s ) is sufficiently small, where d is specified in (E.3) . UPPLEMENTARY MATERIAL S13 Proof. Let θ ( u ) = i (cid:26) m X t =1 ϕ t, E ( η t, ) + m X t =1 q X r =3 ϕ t,r − E ( η t,r + 1) (cid:27) . It follows from Lemma L3 that | θ ( u ) | . d /s mB m (cid:20) d /s h u B m + { α Q ( h ) } ( s − /s (cid:21) . Based on (E.2), it follows from Lemmas L3 and L4 that f ′ m ( u ) = (cid:26) i m X t =1 ϕ t, + θ ( u ) (cid:27) f m ( u ) + R ( u )for any | u | ≤ U ∗ , where | R ( u ) | . d /s m / h u B m ( h / + ˜ α / ) + d /s mB m { α Q ( h ) } ( s − /s for any | u | ≤ U ∗ . Together with Lemma L1, we have f ′ m ( u ) = {− u + θ ( u ) + θ ( u ) } f m ( u ) + R ( u ) (E.11)with | θ ( u ) | . | u | B m m / d /s K / ∗ { α Q ( h ) } ( s − / (2 s ) + | u | s − B s m mdh s − . Let θ ( u ) = θ ( u ) + θ ( u ). 
By solving the linear differential equation (E.11), we have that f m ( u ) = exp (cid:26) − u Z u θ ( w ) d w (cid:27) · (cid:20) Z u R ( w ) exp (cid:26) w − Z w θ ( v ) d v (cid:27) d w (cid:21) for any | u | ≤ U ∗ , which implies (cid:12)(cid:12) f m ( u ) − e − u / (cid:12)(cid:12) ≤ (cid:12)(cid:12)(cid:12)(cid:12) exp (cid:26) − u Z u θ ( w ) d w (cid:27) − e − u / (cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12) e − u / exp (cid:26) Z u θ ( w ) d w (cid:27) Z u R ( w ) exp (cid:26) w − Z w θ ( v ) d v (cid:27) d w (cid:12)(cid:12)(cid:12)(cid:12) ≤ exp (cid:26) − u (cid:12)(cid:12)(cid:12)(cid:12) Z u θ ( w ) d w (cid:12)(cid:12)(cid:12)(cid:12)(cid:27) × (cid:12)(cid:12)(cid:12)(cid:12) Z u θ ( w ) d w (cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12) Z u R ( w ) exp (cid:26) − u w Z uw θ ( v ) d v (cid:27) d w (cid:12)(cid:12)(cid:12)(cid:12) for any | u | ≤ U ∗ . We will bound the two terms on the right-hand side of above inequality, UPPLEMENTARY MATERIAL S14 respectively.Clearly, if | u | ≤ U ∗ , it holds that (cid:12)(cid:12)(cid:12)(cid:12) Z u θ ( w ) d w (cid:12)(cid:12)(cid:12)(cid:12) ≤ Z | u |−| u | | θ ( w ) | d w ≤ C ∗ u B m m / d /s K / ∗ { α Q ( h ) } ( s − / (2 s ) + C ∗ | u | s B s m mdh s − + C ∗ md /s B m (cid:20) d /s h | u | B m + { α Q ( h ) } ( s − /s | u | (cid:21) = : a | u | + a u + a | u | + a | u | s for some sufficiently large C ∗ > 0. Since B − m m / d /s K / ∗ { α Q ( h ) } ( s − / (2 s ) is sufficientlysmall, we can have a = C ∗ B m m / d /s K / ∗ { α Q ( h ) } ( s − / (2 s ) ≤ . (E.12)Let | u | ≤ U ∗∗ = min (cid:26)(cid:18) CB s m mdh s − (cid:19) − / ( s − , a , a (cid:27) (E.13)for some sufficiently large C > 0. By the Davydov’s inequality, B m = m Var (cid:18) √ m m X t =1 Q t (cid:19) = m X t =1 E ( | Q t | ) + X t = t E ( Q t Q t ) . m X t =1 E ( | Q t | ) + X t = t { E ( | Q t | s ) } /s { E ( | Q t | s ) } /s { α Q ( | t − t | ) } − /s . Recall Q t = B m A t and d = 4 s c K s ∗ . 
It follows from (E.5) that B m . md /s + d /s X t = t { α Q ( | t − t | ) } − /s = md /s + 2 d /s m − X τ =1 ( m − τ ) { α Q ( τ ) } − /s ≤ md /s + 2 md /s ˜ α . mK ∗ . Recall U ∗ = B m / (32 hd /s ) and K ∗ = O ( h ). Then U ∗∗ ≤ U ∗ for sufficiently large C > | R u θ ( w ) d w | ≤ u / | u | ≤ U ∗∗ .More generally, we have | R uw θ ( w ) d w | ≤ ( u − w ) / | u | ≤ U ∗∗ . Therefore, (cid:12)(cid:12) f m ( u ) − e − u / (cid:12)(cid:12) . ( a | u | + a u + a | u | + a | u | s ) e − u / UPPLEMENTARY MATERIAL S15 + (cid:12)(cid:12)(cid:12)(cid:12) Z u | R ( w ) | exp (cid:18) − u w (cid:19) d w (cid:12)(cid:12)(cid:12)(cid:12) (E.14)for any | u | ≤ U ∗∗ . It follows from the facts ˜ α . K ∗ and K ∗ = O ( h ) that | R ( u ) | ≤ C ∗∗ d /s m / h / u B m + C ∗∗ d /s mB m { α Q ( h ) } ( s − /s = : b + b u for any | u | ≤ U ∗∗ , where C ∗∗ > Z | u | w e w / d w ≤ | u | e u / , Z | u | e w / d w ≤ min(2 | u | − , | u | ) · e u / and Z −| u | w e w / d w ≤ | u | e u / , Z −| u | e w / d w ≤ min(2 | u | − , | u | ) · e u / , which implies that (cid:12)(cid:12)(cid:12)(cid:12) Z u | R ( w ) | exp (cid:18) − u w (cid:19) d w (cid:12)(cid:12)(cid:12)(cid:12) ≤ b min(2 | u | − , | u | ) + 2 b | u | for any | u | ≤ U ∗∗ . Therefore, (E.14) leads to (cid:12)(cid:12) f m ( u ) − e − u / (cid:12)(cid:12) ≤ ( a | u | + a u + a | u | + a | u | s ) e − u / + b min(2 | u | − , | u | ) + 2 b | u | (E.15)for any | u | ≤ U ∗∗ .Denote by F m ( x ) the distribution function of B − m P mt =1 Q t . By the Essen’s inequality [The-orem 1.5.2 of Ibragimov and Linik (1971)], we have∆ m = sup −∞ 1) + 2 b } d u + CU ∗∗ . a + a + a + a + b + b U ∗∗ + U − ∗∗ . We complete the proof. (cid:3) UPPLEMENTARY MATERIAL S16 Now, we begin to simplify the upper bound for ∆ m specified in Lemma L5. Recall that a ≍ B − m md /s { α Q ( h ) } ( s − /s , a ≍ B − m m / d /s K / ∗ { α Q ( h ) } ( s − / (2 s ) , a ≍ B − m md /s h , a ≍ B − s m mdh s − , b ≍ a and b ≍ B − m d /s m / h / with d specified in (E.3). 
Noticethat B m ≥ c m , d = 4 s c K s ∗ and α Q ( τ ) ≤ α ( | τ − K | + ) with α ( τ ) ≤ c τ − β , then a . m / K ∗ | h − K | − β ( s − /s + , a . m / K / ∗ | h − K | − β ( s − / (2 s )+ , a . m − / K ∗ h , a . m − ( s − / K s ∗ h s − and b . m / K ∗ | h − K | − β ( s − /s + . Recall 2 qh ≤ m + 1 and q ≥ 2. Thus,∆ m . m / K / ∗ | h − K | − β ( s − / (2 s )+ + m − / K ∗ h + m − ( s − / K s ∗ h s − + b U ∗∗ + U − ∗∗ . To make ∆ m → 0, it suffices to require h − K → ∞ and m − ( s − / K s ∗ h s − = o (1). Due to K ∗ h = o ( m / ) and s ∈ (2 , m − / K ∗ h ) / { m − ( s − / K s ∗ h s − } . 1. Then,∆ m . m / K / ∗ | h − K | − β ( s − / (2 s )+ + m − ( s − / K s ∗ h s − + b U ∗∗ + U − ∗∗ . (E.16)Notice that U ∗∗ = min (cid:26)(cid:18) CB s m mdh s − (cid:19) − / ( s − , a , a (cid:27) for some sufficiently large C > a ≍ B − m md /s { α Q ( h ) } ( s − /s and a ≍ B − m md /s h .Recall U ∗ = B m / (32 hd /s ). As we have shown in the proof of Lemma L5 that U ∗∗ ≤ U ∗ , wehave b U ∗∗ + U − ∗∗ . m / h / d /s B m · B m hd /s + (cid:18) B s m mdh s − (cid:19) − / ( s − + a + a = m / h / d /s B m + (cid:18) B s m mdh s − (cid:19) − / ( s − + a + a . m − / h / K ∗ + (cid:18) B s m mK s ∗ h s − (cid:19) − / ( s − + a + a The last step is based on the facts B m ≥ c m and d ≍ K s ∗ . Since B m ≥ c m , it holds that B − s m mK s ∗ h s − = O { m − ( s − / K s ∗ h s − } . Due to s ∈ (2 , 3] and m − ( s − / K s ∗ h s − = o (1),then B − s / ( s − m m / ( s − K s / ( s − ∗ h ( s − / ( s − . B − s m mK s ∗ h s − . m − ( s − / K s ∗ h s − .Since h − K → ∞ and ( m − / K ∗ h ) / { m − ( s − / K s ∗ h s − } . 1, it holds that m − / h / K ∗ = o { m − ( s − / K s ∗ h s − } . Together with (E.16), we have∆ m . m / K / ∗ | h − K | − β ( s − / (2 s )+ + m − ( s − / K s ∗ h s − . (E.17)We set h = K + ⌊ m ζ ⌋ for some ζ > 0. Then (E.17) implies that∆ m . m / K / ∗ m − β ζ ( s − / (2 s ) + m − ( s − / K s − ∗ + m − ( s − / K s ∗ m ( s − ζ . 
UPPLEMENTARY MATERIAL S17 Write β = 2 β ( s − s / ( s − for some β > 1. Then∆ m . m / K / ∗ m − βζ ( s − / ( s − + m − ( s − / K s − ∗ + m − ( s − / K s ∗ m ( s − ζ . Choosing ζ = ( s − / { ( β + 1)( s − } , we have∆ m . K / ∗ m − ( β − / (2 β +2) + K s − ∗ m − ( s − / + K s ∗ m − ( β − s − / (2 β +2) . To make ∆ m → 0, we need to require K s ∗ m − ( β − s − / (2 β +2) = o (1). Together with the fact s ∈ (2 , K / ∗ m − ( β − / (2 β +2) . K s ∗ m − ( β − s − / (2 β +2) . Therefore,∆ m . K s − ∗ m − ( s − / + K s ∗ m − ( β − s − / (2 β +2) . (E.18)Recall Lemma L5 requires 16 q ≥ m / , 2 q { α Q ( h ) } /s ≤ 1, and B − m m / d /s K / ∗ { α Q ( h ) } ( s − / (2 s ) is sufficiently small. Notice that B − m m / d /s K / ∗ { α Q ( h ) } ( s − / (2 s ) . K / ∗ m − ( β − / (2 β +2) .Thus B − m m / d /s K / ∗ { α Q ( h ) } ( s − / (2 s ) is sufficiently small provided that K s ∗ m − ( β − s − / (2 β +2) = o (1). Select q = (log m ) / 8. Then 16 q ≥ m / holds automatically, and 2 q { α Q ( h ) } /s . m / − ζβ /s . Due to 3 / − ζβ /s = 3 / − β/ { ( β + 1)( s − } < 0, we know 2 q { α Q ( h ) } /s ≤ q and h . Hence, (E.18) holds provided that K s ∗ m − ( β − s − / (2 β +2) = o (1). We complete the proof of Lemma 1. (cid:3) References Ibragimov, I. A. and Linnik, Yu. V. (1971) Independent and stationary sequences of randomvariables. Groningen : Wolters-Noordhoff. UPPLEMENTARY MATERIAL S18 Table S1: Empirical sizes ( × ) of the proposed test T n defined as (6) for K = 1 with theuntruncated critical value ( c κ = ∞ ) and the truncated critical values defined as (18) with c κ = 0 . , . , . 65, and the KPSS test in a simulation with 2000 replications. Constant c κ determines the level of truncation for the critical values of T n . The innovations ǫ t i . i . d . ∼ N (0 , σ ǫ ).The nominal size of the tests is 5%. σ ǫ = 1 σ ǫ = 2Setting N ∞ . 45 0 . 55 0 . 65 KPSS ∞ . 45 0 . 55 0 . 
Model 1  0.5          40    6.9   6.9   6.9   6.9  10.2     6.0   6.0   6.0   6.0  11.5
                      70    6.1   6.1   6.1   6.1  10.0     6.5   6.5   6.5   6.5   9.8
                     100    6.7   6.7   6.7   6.7   9.5     4.5   4.5   4.5   4.5   9.0
         0.9          40    9.3  42.4  29.6  21.3  51.7     8.9  39.9  27.3  19.1  50.0
                      70    9.2  24.4  15.2  11.6  48.6     9.8  22.6  14.3  11.4  47.0
                     100    8.4  13.1   9.6   8.8  51.5     8.9  14.2  10.1   9.1  52.8
Model 2  (0.95, 0.9)  40    7.8   7.8   7.8   7.8   9.8     6.6   6.6   6.6   6.6   7.8
                      70    6.5   6.5   6.5   6.5   7.0     6.8   6.8   6.8   6.8   7.1
                     100    6.8   6.8   6.8   6.8   7.1     6.6   6.6   6.6   6.6   7.4
Model 3  (0.4, 0.2)   40    8.3   9.2   8.6   8.4  20.8     8.8   9.9   8.9   8.8  20.8
                      70    7.0   7.0   7.0   7.0  17.1     7.3   7.3   7.3   7.3  18.3
                     100    7.3   7.3   7.3   7.3  15.8     6.9   6.9   6.9   6.9  18.5
         (0.5, 0.1)   40    7.9   8.2   8.0   7.9  18.5     9.2   9.5   9.2   9.2  19.9
                      70    7.5   7.5   7.5   7.5  17.8     7.6   7.6   7.6   7.6  15.8
                     100    6.3   6.3   6.3   6.3  17.1     8.3   8.3   8.3   8.3  14.6
         (0.6, 0.1)   40    9.8  13.0  10.2  10.0  27.9    11.5  15.4  12.6  11.8  24.3
                      70    8.9   8.9   8.9   8.9  23.5     8.8   8.9   8.8   8.8  20.1
                     100    7.8   7.8   7.8   7.8  21.1     7.1   7.1   7.1   7.1  20.2
Model 4  0.4          40    7.8   8.2   7.9   7.8  19.9     8.5   8.8   8.5   8.5  18.3
                      70    7.8   7.8   7.8   7.8  15.7     7.4   7.4   7.4   7.4  15.2
                     100    8.0   8.0   8.0   8.0  14.0     6.5   6.5   6.5   6.5  15.8
         0.5          40    8.3   9.6   8.5   8.3  19.9     7.5   8.5   7.6   7.5  17.5
                      70    6.7   6.7   6.7   6.7  14.6     6.4   6.4   6.4   6.4  15.7
                     100    7.5   7.5   7.5   7.5  15.2     7.0   7.0   7.0   7.0  16.1
         0.6          40    8.2   9.6   8.6   8.2  19.2     8.0   9.2   8.1   8.0  20.4
                      70    7.1   7.1   7.1   7.1  14.1     7.7   7.7   7.7   7.7  16.0
                     100    7.7   7.7   7.7   7.7  14.4     6.0   6.0   6.0   6.0  16.2

Table S2: Empirical sizes (×100) of the proposed test T_n defined in (6) for K = 2, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. N(0, σ_ε²). The nominal size of the tests is 5%.

                                      σ_ε = 1                            σ_ε = 2
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 1  0.5          40    6.9   6.9   6.9   6.9  10.3     5.9   5.9   5.9   5.9  11.1
                      70    6.2   6.2   6.2   6.2  10.3     5.9   5.9   5.9   5.9   9.0
                     100    6.4   6.4   6.4   6.4   9.6     5.7   5.7   5.7   5.7   8.5
         0.9          40    9.5  39.6  28.2  20.3  47.6    11.1  42.0  29.8  22.1  51.0
                      70    9.7  24.6  16.1  11.7  48.5     9.4  26.1  15.8  11.8  49.6
                     100    8.9  14.5  10.2   9.3  50.2     8.8  14.7  10.2   9.1  50.3
Model 2  (0.95, 0.9)  40    8.3   8.3   8.3   8.3   8.3     6.6   6.6   6.6   6.6   9.8
                      70    6.8   6.8   6.8   6.8   8.8     7.8   7.8   7.8   7.8   8.0
                     100    6.6   6.6   6.6   6.6   7.7     5.7   5.7   5.7   5.7   6.0
Model 3  (0.4, 0.2)   40   10.2  10.8  10.3  10.2  20.9     9.1  10.2   9.3   9.2  20.1
                      70    6.7   6.7   6.7   6.7  19.7     7.0   7.0   7.0   7.0  18.6
                     100    6.7   6.7   6.7   6.7  18.6     7.1   7.1   7.1   7.1  17.4
         (0.5, 0.1)   40    8.7   9.3   8.8   8.7  18.6     9.0   9.3   9.0   9.0  20.8
                      70    8.3   8.3   8.3   8.3  17.1     7.4   7.4   7.4   7.4  16.2
                     100    7.5   7.5   7.5   7.5  17.0     6.8   6.8   6.8   6.8  17.0
         (0.6, 0.1)   40   11.1  14.7  11.8  11.1  26.6    10.3  13.7  11.3  10.5  25.9
                      70    9.2   9.3   9.2   9.2  21.1     9.0   9.1   9.0   9.0  21.0
                     100    7.4   7.4   7.4   7.4  20.8     9.5   9.5   9.5   9.5  23.2
Model 4  0.4          40    8.0   8.5   8.0   8.0  19.4     7.9   8.6   8.1   8.0  18.2
                      70    6.9   6.9   6.9   6.9  14.9     7.6   7.6   7.6   7.6  15.3
                     100    6.7   6.7   6.7   6.7  14.4     7.5   7.5   7.5   7.5  15.2
         0.5          40    7.6   8.5   7.8   7.6  20.1     7.2   8.6   7.4   7.2  19.1
                      70    7.0   7.0   7.0   7.0  15.3     7.2   7.2   7.2   7.2  14.8
                     100    6.6   6.6   6.6   6.6  16.2     6.8   6.8   6.8   6.8  16.9
         0.6          40    8.2   9.8   8.5   8.2  20.2     8.2   9.5   8.5   8.3  18.3
                      70    8.0   8.0   8.0   8.0  15.6     7.2   7.2   7.2   7.2  15.0
                     100    7.0   7.0   7.0   7.0  14.1     6.9   6.9   6.9   6.9  15.6

Table S3: Empirical sizes (×100) of the proposed test T_n defined in (6) for K = 3, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. N(0, σ_ε²). The nominal size of the tests is 5%.

                                      σ_ε = 1                            σ_ε = 2
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 1  0.5          40    6.5   6.5   6.5   6.5  11.7     6.2   6.2   6.2   6.2   9.8
                      70    5.4   5.4   5.4   5.4   9.5     5.3   5.3   5.3   5.3   9.0
                     100    5.3   5.3   5.3   5.3  10.0     5.8   5.8   5.8   5.8  10.1
         0.9          40    9.6  41.7  29.8  21.3  52.8    11.2  42.4  30.5  21.9  50.5
                      70   10.2  27.0  16.9  12.1  47.5    10.6  25.7  16.1  12.7  48.8
                     100    9.8  14.5  10.8  10.0  49.6    10.0  15.4  11.5  10.2  50.7
Model 2  (0.95, 0.9)  40    7.8   7.8   7.8   7.8   9.2     7.5   7.5   7.5   7.5   8.5
                      70    6.6   6.6   6.6   6.6   7.8     6.2   6.2   6.2   6.2   8.7
                     100    6.0   6.0   6.0   6.0   7.8     6.2   6.2   6.2   6.2   7.5
Model 3  (0.4, 0.2)   40    9.2   9.8   9.2   9.2  22.1     8.8   9.9   9.2   8.9  21.4
                      70    8.0   8.1   8.0   8.0  16.9     7.5   7.5   7.5   7.5  18.1
                     100    7.1   7.1   7.1   7.1  18.7     7.5   7.5   7.5   7.5  17.5
         (0.5, 0.1)   40   10.1  10.4  10.2  10.1  20.9     9.2   9.8   9.2   9.2  18.9
                      70    8.6   8.6   8.6   8.6  15.3     7.2   7.2   7.2   7.2  16.1
                     100    7.7   7.7   7.7   7.7  15.6     8.2   8.2   8.2   8.2  17.4
         (0.6, 0.1)   40   11.5  15.3  12.4  11.8  25.6     9.6  12.4  10.2   9.8  24.0
                      70    8.7   8.7   8.7   8.7  21.9     9.4   9.5   9.4   9.4  20.6
                     100    8.5   8.5   8.5   8.5  21.9     8.3   8.3   8.3   8.3  22.7
Model 4  0.4          40    8.6   9.1   8.6   8.6  17.8     7.6   8.1   7.7   7.6  18.1
                      70    7.6   7.6   7.6   7.6  14.5     8.0   8.1   8.0   8.0  16.1
                     100    6.9   6.9   6.9   6.9  14.3     5.2   5.2   5.2   5.2  16.0
         0.5          40    8.1   9.6   8.3   8.2  20.3     7.5   8.3   7.8   7.6  19.1
                      70    8.1   8.1   8.1   8.1  14.7     7.0   7.0   7.0   7.0  15.3
                     100    6.2   6.2   6.2   6.2  14.2     6.0   6.0   6.0   6.0  15.6
         0.6          40    8.0   9.5   8.1   8.0  18.9     8.0   9.9   8.6   8.1  19.2
                      70    6.2   6.2   6.2   6.2  14.6     7.8   7.8   7.8   7.8  13.8
                     100    7.6   7.6   7.6   7.6  13.2     6.7   6.7   6.7   6.7  16.0

Table S4: Empirical sizes (×100) of the proposed test T_n defined in (6) for K = 4, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. N(0, σ_ε²). The nominal size of the tests is 5%.

                                      σ_ε = 1                            σ_ε = 2
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 1  0.5          40    6.4   6.4   6.4   6.4   9.7     7.1   7.1   7.1   7.1  12.0
                      70    6.8   6.8   6.8   6.8   8.1     7.0   7.0   7.0   7.0  10.0
                     100    5.5   5.5   5.5   5.5   8.1     5.6   5.6   5.6   5.6   9.7
         0.9          40    9.8  44.2  30.2  21.6  52.7    11.3  41.9  28.7  22.0  47.9
                      70   10.4  25.1  16.1  12.8  48.4    10.3  26.0  16.0  12.5  47.8
                     100    8.6  14.4  10.1   9.0  49.4     9.1  14.7  10.4   9.4  50.8
         -0.5         40    8.8   8.8   8.8   8.8   1.8     7.4   7.4   7.4   7.4   1.3
                      70    6.8   6.8   6.8   6.8   1.8     7.4   7.4   7.4   7.4   2.5
                     100    6.2   6.2   6.2   6.2   2.1     7.0   7.0   7.0   7.0   2.5
Model 2  (0.8, 0.3)   40    7.6   7.6   7.6   7.6   8.2     7.8   7.8   7.8   7.8   7.6
                      70    5.8   5.8   5.8   5.8   6.7     5.7   5.7   5.7   5.7   7.6
                     100    5.9   5.9   5.9   5.9   7.1     5.7   5.7   5.7   5.7   6.4
         (0.9, 0.5)   40    7.4   7.4   7.4   7.4   8.2     7.3   7.3   7.3   7.3   7.4
                      70    5.9   5.9   5.9   5.9   7.1     6.1   6.1   6.1   6.1   7.5
                     100    5.5   5.5   5.5   5.5   7.0     5.5   5.5   5.5   5.5   7.3
         (0.95, 0.9)  40    6.9   6.9   6.9   6.9   8.0     7.0   7.0   7.0   7.0   8.8
                      70    7.2   7.2   7.2   7.2   7.8     6.8   6.8   6.8   6.8   7.5
                     100    6.4   6.4   6.4   6.4   8.6     5.4   5.4   5.4   5.4   7.8
Model 3  (0.4, 0.2)   40    8.8   9.6   8.9   8.8  22.9     9.4  10.4   9.6   9.4  21.8
                      70    7.0   7.0   7.0   7.0  18.4     7.5   7.5   7.5   7.5  18.0
                     100    8.2   8.2   8.2   8.2  20.2     6.6   6.6   6.6   6.6  18.1
         (0.5, 0.1)   40    9.7  10.2   9.8   9.7  17.5     8.5   9.2   8.5   8.5  20.3
                      70    8.9   8.9   8.9   8.9  15.8     8.2   8.2   8.2   8.2  16.5
                     100    7.6   7.6   7.6   7.6  17.8     6.8   6.8   6.8   6.8  17.2
         (0.6, 0.1)   40   11.5  15.3  12.7  11.8  26.1    10.9  14.9  11.5  11.1  26.2
                      70    8.9   9.1   8.9   8.9  21.7     8.0   8.1   8.0   8.0  21.1
                     100    7.0   7.0   7.0   7.0  23.0     7.1   7.1   7.1   7.1  22.7
Model 4  0.4          40    8.2   8.9   8.2   8.2  19.2     7.3   7.8   7.4   7.3  20.8
                      70    6.6   6.6   6.6   6.6  15.8     7.2   7.2   7.2   7.2  14.0
                     100    6.7   6.7   6.7   6.7  16.0     6.4   6.4   6.4   6.4  14.2
         0.5          40    7.9   8.6   8.1   7.9  19.4     7.8   8.8   8.0   7.9  19.8
                      70    7.2   7.2   7.2   7.2  14.6     6.2   6.3   6.2   6.2  15.6
                     100    6.8   6.8   6.8   6.8  14.3     6.5   6.5   6.5   6.5  15.2
         0.6          40    9.2  10.7   9.6   9.3  19.7     8.8  10.3   8.9   8.8  20.0
                      70    7.2   7.2   7.2   7.2  14.1     6.7   6.7   6.7   6.7  17.4
                     100    7.6   7.6   7.6   7.6  14.9     6.5   6.5   6.5   6.5  16.2

Table S5: Empirical powers (×100) of the proposed test T_n defined in (6) for K = 1, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55,
0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. N(0, σ_ε²). The nominal size of the tests is 5%.

                                      σ_ε = 1                            σ_ε = 2
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 5  0.5          40   14.0  94.0  89.0  84.0  83.7    12.8  93.8  89.0  83.4  84.4
                      70   14.0  96.9  93.8  89.4  89.5    14.1  96.5  93.5  89.2  90.0
                     100   15.3  98.2  95.9  93.0  95.2    14.0  97.9  95.0  92.0  95.1
         0.9          40   14.1  99.1  97.9  95.5  92.7    14.5  99.6  98.7  96.7  92.6
                      70   16.2  99.7  99.1  98.5  95.5    17.0  99.8  99.6  99.3  94.3
                     100   17.5 100.0  99.7  99.2  97.7    19.0 100.0  99.9  99.8  96.9
Model 6  (0.95, 0.9)  40   14.2  96.0  91.5  85.8  83.1    13.8  95.3  89.8  83.2  82.9
                      70   13.6  97.5  93.8  89.5  90.7    11.9  97.3  93.6  88.8  90.1
                     100   12.7  98.8  96.5  94.0  95.3    12.8  98.0  95.3  92.6  95.0
Model 7  (0.4, 0.2)   40   14.6  98.2  94.9  91.2  86.2    15.0  98.3  95.4  91.1  86.7
                      70   15.7  99.0  97.0  93.5  91.1    17.0  99.4  97.9  95.7  92.2
                     100   16.0  99.6  98.2  95.3  95.0    17.5  99.6  98.9  97.8  95.6
         (0.5, 0.1)   40   15.9  98.6  96.3  92.5  84.9    14.1  98.8  95.8  92.2  85.8
                      70   17.6  99.2  97.5  94.8  90.8    15.7  99.4  97.9  96.2  91.8
                     100   18.1  99.6  98.4  96.1  96.3    17.4  99.7  99.2  97.9  95.1
         (0.6, 0.1)   40   15.6  99.1  97.0  93.7  87.7    15.9  99.5  97.8  94.7  88.0
                      70   16.8  99.6  98.5  96.7  92.0    17.2  99.8  99.6  98.3  92.0
                     100   17.5  99.6  99.2  98.0  96.2    17.7 100.0  99.6  99.1  95.3
Model 8  (0.8, 0.3)   40    5.5 100.0 100.0 100.0  99.2     6.9 100.0 100.0 100.0  98.3
                      70    6.1 100.0 100.0 100.0  99.6     6.3 100.0 100.0 100.0  99.1
                     100    6.0 100.0 100.0 100.0  99.9     5.9 100.0 100.0 100.0 100.0
         (0.9, 0.5)   40    5.9 100.0 100.0 100.0  99.9     6.2 100.0 100.0 100.0  99.8
                      70    5.5 100.0 100.0 100.0  98.4     6.8 100.0 100.0 100.0  98.8
                     100    6.9 100.0 100.0 100.0  99.2     5.9 100.0 100.0 100.0  99.2
         (0.95, 0.9)  40    7.5 100.0 100.0 100.0  98.0     7.0 100.0 100.0 100.0  98.2
                      70    5.8 100.0 100.0 100.0  99.3     5.9 100.0 100.0 100.0  99.4
                     100    6.9 100.0 100.0 100.0  99.9     6.1 100.0 100.0 100.0  99.7

Table S6: Empirical powers (×100) of the proposed test T_n defined in (6) for K = 2, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. N(0, σ_ε²). The nominal size of the tests is 5%.

                                      σ_ε = 1                            σ_ε = 2
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 5  0.5          40   13.7  94.4  89.5  85.7  84.4    14.1  94.6  90.4  84.7  84.9
                      70   14.4  97.1  94.1  90.5  90.8    14.2  97.5  94.8  90.9  90.5
                     100   15.0  98.6  96.7  93.8  95.2    14.3  98.3  95.4  92.1  95.2
         0.9          40   14.4  99.1  97.6  95.3  92.5    16.2  99.6  98.2  97.0  93.4
                      70   18.2  99.9  99.4  98.4  95.5    18.9 100.0  99.8  99.2  95.2
                     100   17.8 100.0  99.7  99.6  97.9    17.6 100.0 100.0  99.9  97.8
Model 6  (0.95, 0.9)  40   16.3  96.2  91.3  86.7  83.5    14.1  96.0  91.3  85.8  83.9
                      70   13.9  97.7  94.3  90.8  90.5    14.5  97.8  93.8  89.8  90.2
                     100   13.5  98.5  96.3  93.2  95.3    13.5  98.2  95.5  92.3  95.4
Model 7  (0.4, 0.2)   40   17.9  98.8  95.0  90.9  85.3    15.5  98.0  95.5  91.3  86.1
                      70   17.2  99.1  97.1  93.8  90.5    16.2  99.5  98.2  96.0  92.0
                     100   18.6  99.6  98.5  96.9  95.9    17.1  99.7  98.9  97.6  94.8
         (0.5, 0.1)   40   16.5  98.6  96.2  91.2  86.7    16.6  99.0  97.0  93.3  86.6
                      70   17.0  99.5  97.5  94.2  91.0    16.6  99.9  98.5  96.4  90.7
                     100   16.7  99.8  98.2  96.4  96.2    17.8  99.8  99.1  98.5  95.2
         (0.6, 0.1)   40   17.2  99.2  97.3  93.9  87.5    18.4  99.2  97.2  94.8  87.2
                      70   17.1  99.8  98.6  96.6  91.5    16.4  99.5  99.0  97.5  92.2
                     100   19.2  99.7  99.0  97.2  95.5    18.9 100.0  99.7  99.5  95.9
Model 8  (0.8, 0.3)   40    6.8 100.0 100.0 100.0  97.7     6.7 100.0 100.0 100.0  98.9
                      70    6.3 100.0 100.0 100.0  99.4     5.3 100.0 100.0 100.0  99.2
                     100    6.2 100.0 100.0 100.0  99.8     5.8 100.0 100.0 100.0  99.8
         (0.9, 0.5)   40    6.3 100.0 100.0 100.0  98.2     6.4 100.0 100.0 100.0  98.3
                      70    6.7 100.0 100.0 100.0  99.2     6.9 100.0 100.0 100.0  99.4
                     100    5.9 100.0 100.0 100.0  99.9     6.2 100.0 100.0 100.0  99.9
         (0.95, 0.9)  40    8.2 100.0 100.0 100.0  98.0     6.4 100.0 100.0 100.0  98.6
                      70    6.0 100.0 100.0 100.0  99.4     6.8 100.0 100.0 100.0  99.5
                     100    5.7 100.0 100.0 100.0  99.7     6.9 100.0 100.0 100.0  99.8

Table S7: Empirical powers (×100) of the proposed test T_n defined in (6) for K = 3, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. N(0, σ_ε²). The nominal size of the tests is 5%.

                                      σ_ε = 1                            σ_ε = 2
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 5  0.5          40   15.8  94.6  89.5  85.0  83.2    14.7  94.4  90.0  84.9  83.7
                      70   13.7  97.3  94.1  90.9  90.8    15.9  97.2  94.1  90.2  89.8
                     100   15.8  98.3  95.5  93.1  95.0    14.9  98.7  96.0  93.6  95.3
         0.9          40   13.8  99.1  97.5  95.5  92.2    14.3  99.6  98.5  96.9  92.2
                      70   17.9 100.0  99.6  98.9  94.8    16.4  99.9  99.7  99.4  95.0
                     100   18.6  99.9  99.7  99.2  97.4    19.8 100.0 100.0  99.9  97.4
Model 6  (0.95, 0.9)  40   12.9  96.5  91.2  84.8  82.0    16.0  95.5  91.3  85.8  83.7
                      70   16.0  97.2  94.0  90.1  89.2    17.8  97.9  93.9  90.8  90.2
                     100   15.3  98.6  96.0  92.8  94.8    16.8  98.3  95.8  93.2  95.0
Model 7  (0.4, 0.2)   40   17.2  98.8  95.8  91.2  85.2    17.9  98.6  96.3  92.8  85.4
                      70   18.8  99.2  97.1  93.7  90.5    17.6  99.1  97.7  96.0  90.4
                     100   16.9  99.6  98.2  96.2  95.0    18.4  99.7  99.2  98.2  95.3
         (0.5, 0.1)   40   15.8  98.9  95.7  91.5  86.2    17.6  98.8  96.0  92.3  86.2
                      70   17.6  99.5  97.8  94.3  91.0    17.9  99.4  98.7  97.4  91.1
                     100   18.5  99.8  99.0  97.3  96.2    19.4  99.7  99.0  98.5  95.1
         (0.6, 0.1)   40   16.2  99.1  97.2  93.0  86.4    15.9  99.2  97.2  94.5  86.8
                      70   19.1  99.8  98.8  97.2  91.8    18.2  99.9  99.5  98.7  92.6
                     100   19.9  99.9  99.2  97.9  96.8    18.4 100.0  99.8  99.5  95.2
Model 8  (0.8, 0.3)   40    6.2 100.0 100.0 100.0  98.2     5.5 100.0 100.0 100.0  98.6
                      70    6.9 100.0 100.0 100.0  99.5     6.2 100.0 100.0 100.0  99.4
                     100    6.2 100.0 100.0 100.0  99.7     6.1 100.0 100.0 100.0  99.9
         (0.9, 0.5)   40    6.2 100.0 100.0 100.0  98.2     6.8 100.0 100.0 100.0  98.6
                      70    6.7 100.0 100.0 100.0  99.2     6.4 100.0 100.0 100.0  98.9
                     100    5.8 100.0 100.0 100.0 100.0     6.3 100.0 100.0 100.0  99.9
         (0.95, 0.9)  40    7.3 100.0 100.0 100.0  98.6     6.9 100.0 100.0 100.0  98.3
                      70    6.1 100.0 100.0 100.0  99.6     6.6 100.0 100.0 100.0  99.5
                     100    5.8 100.0 100.0 100.0  99.9     6.6 100.0 100.0 100.0  99.7

Table S8: Empirical powers (×100) of the proposed test T_n
defined in (6) for K = 4, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. N(0, σ_ε²). The nominal size of the tests is 5%.

                                      σ_ε = 1                            σ_ε = 2
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 5  0.5          40   15.8  93.5  90.1  85.0  85.7    14.6  94.8  90.7  85.9  84.9
                      70   15.1  97.0  94.0  89.8  90.2    16.6  96.5  93.8  90.0  90.0
                     100   15.3  98.2  96.0  93.4  95.4    16.8  98.4  96.0  92.8  94.5
         0.9          40   15.8  99.5  98.0  96.0  92.8    14.0  99.5  98.1  96.7  91.5
                      70   16.1  99.8  99.2  98.4  94.7    16.4  99.9  99.7  99.5  95.0
                     100   18.9 100.0  99.7  99.5  97.2    17.8  99.9  99.9  99.8  97.5
Model 6  (0.95, 0.9)  40   13.9  94.8  89.8  83.7  83.2    17.1  95.2  90.1  85.0  83.0
                      70   16.0  97.8  93.5  88.8  89.9    14.9  96.5  93.0  89.6  90.4
                     100   16.4  98.4  95.2  91.5  94.3    16.2  98.5  96.0  93.0  95.3
Model 7  (0.4, 0.2)   40   17.9  98.5  95.0  90.5  85.9    17.1  98.7  96.0  92.0  86.4
                      70   17.2  99.2  97.5  95.1  91.3    18.1  99.5  97.9  95.9  91.4
                     100   17.5  99.8  98.4  96.5  96.3    17.5  99.7  99.0  98.3  95.5
         (0.5, 0.1)   40   17.9  98.7  96.2  92.3  85.0    16.4  98.8  96.3  93.2  86.0
                      70   18.2  99.0  96.8  94.5  91.3    20.8  99.4  98.5  96.9  91.1
                     100   17.2  99.6  98.7  96.3  94.8    18.5  99.8  99.4  98.8  95.4
         (0.6, 0.1)   40   15.0  99.2  97.3  94.0  87.7    18.3  99.5  98.0  95.8  86.8
                      70   19.2  99.8  98.8  97.5  92.6    19.3  99.8  99.4  98.8  92.5
                     100   18.8  99.9  99.5  98.5  95.5    18.9 100.0  99.7  99.5  95.7
Model 8  (0.8, 0.3)   40    7.8 100.0 100.0 100.0  98.8     6.8 100.0 100.0 100.0  98.0
                      70    5.7 100.0 100.0 100.0  99.2     7.3 100.0 100.0 100.0  99.6
                     100    6.5 100.0 100.0 100.0  99.9     5.5 100.0 100.0 100.0  99.9
         (0.9, 0.5)   40    6.7 100.0 100.0 100.0  98.0     7.1 100.0 100.0 100.0  97.9
                      70    6.3 100.0 100.0 100.0  99.1     5.9 100.0 100.0 100.0  99.6
                     100    6.7 100.0 100.0 100.0  99.9     6.5 100.0 100.0 100.0  99.9
         (0.95, 0.9)  40    7.3 100.0 100.0 100.0  98.2     6.8 100.0 100.0 100.0  98.1
                      70    5.9 100.0 100.0 100.0  99.5     6.9 100.0 100.0 100.0  99.2
                     100    5.2 100.0 100.0 100.0  99.9     5.9 100.0 100.0 100.0  99.9

Table S9: Empirical sizes (×100) of the proposed test T_n defined in (6) for K = 0, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. t(df). The nominal size of the tests is 5%.

                                      df = 2                             df = 5
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 1  0.5          40    5.3   5.6   5.6   5.6   9.7     6.2   6.2   6.2   6.2  10.9
                      70    4.9   5.1   5.1   5.1   8.6     6.2   6.2   6.2   6.2   9.7
                     100    3.8   4.1   4.1   4.1   9.7     5.5   5.5   5.5   5.5   9.2
         0.9          40    9.2  44.0  30.5  23.2  55.1     8.2  41.1  28.2  19.1  51.0
                      70    6.6  22.6  14.0  10.2  50.9     8.1  23.2  14.1  10.6  47.2
                     100    6.2  12.0   8.9   8.1  52.0     8.0  13.7   9.6   8.3  49.9

Table S10: Empirical sizes (×100) of the proposed test T_n defined in (6) for K = 1, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. t(df). The nominal size of the tests is 5%.

                                      df = 2                             df = 5
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 1  0.5          40    5.7   6.2   6.2   6.2  11.0     6.9   6.9   6.9   6.9  10.5
                      70    4.0   4.3   4.3   4.3   9.4     5.7   5.7   5.7   5.7  11.0
                     100    4.2   4.5   4.5   4.5   9.9     5.8   5.8   5.8   5.8  10.5
         0.9          40    8.6  42.2  29.6  21.5  52.1    10.2  41.4  28.3  21.3  51.7
                      70    7.5  25.1  15.4  12.0  48.4     9.3  25.6  15.8  12.0  50.0
                     100    7.2  14.4  10.8  10.0  53.5     8.2  13.4   9.3   8.6  49.3

Table S11: Empirical sizes (×100) of the proposed test T_n defined in (6) for K = 2, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. t(df). The nominal size of the tests is 5%.

                                      df = 2                             df = 5
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 1  0.5          40    5.5   6.0   6.0   6.0  10.4     5.5   5.5   5.5   5.5  10.5
                      70    5.5   5.9   5.9   5.9   9.2     5.8   5.8   5.8   5.8   9.2
                     100    4.6   5.1   5.1   5.1  10.3     5.0   5.0   5.0   5.0  10.3
         0.9          40   10.0  44.4  31.1  23.8  53.4    10.9  42.4  29.6  21.1  49.5
                      70    7.0  24.4  14.6  11.9  48.5     9.6  23.9  15.2  11.9  48.4
                     100    6.3  12.6   9.7   9.0  51.3     8.4  12.6   9.4   8.6  50.3

Table S12: Empirical sizes (×100) of the proposed test T_n defined in (6) for K = 3, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. t(df). The nominal size of the tests is 5%.

                                      df = 2                             df = 5
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 1  0.5          40    6.0   6.6   6.6   6.6  10.2     6.7   6.7   6.7   6.7   9.8
                      70    4.8   5.4   5.4   5.4   8.8     6.8   6.8   6.8   6.8   9.7
                     100    5.0   5.3   5.3   5.3  10.1     5.7   5.7   5.7   5.7   9.4
         0.9          40    9.8  43.9  32.1  23.6  54.3    11.0  42.4  29.2  20.9  49.4
                      70    7.4  22.6  14.4  11.4  49.8     9.0  23.9  14.4  10.9  49.6
                     100    8.5  15.2  12.0  11.1  54.9     8.6  14.1   9.5   8.9  51.3

Table S13: Empirical sizes (×100) of the proposed test T_n defined in (6) for K = 4, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. t(df). The nominal size of the tests is 5%.

                                      df = 2                             df = 5
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 1  0.5          40    7.3   7.5   7.5   7.5  10.6     6.5   6.5   6.5   6.5  10.4
                      70    4.5   5.0   5.0   5.0   9.3     5.9   5.9   5.9   5.9   8.5
                     100    5.9   6.2   6.2   6.2  10.2     5.3   5.3   5.3   5.3  10.1
         0.9          40    9.6  44.9  31.9  23.8  51.9    10.8  44.0  31.4  23.2  51.7
                      70    8.8  25.3  17.2  13.8  49.9     9.8  24.9  15.2  11.6  47.9
                     100    7.5  14.4  11.2  10.2  51.0    10.5  15.2  12.0  11.1  50.2

Table S14: Empirical powers (×100) of the proposed test T_n defined in (6) for K = 0, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. t(df). The nominal size of the tests is 5%.

                                      df = 2                             df = 5
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 5  0.5          40   16.8  95.0  91.5  87.8  85.4    13.7  93.5  89.4  84.4  83.9
                      70   15.4  96.9  94.8  91.0  91.0    11.7  97.0  93.7  89.6  90.3
                     100   15.4  98.1  96.4  93.8  95.3    13.4  97.9  96.4  92.8  95.0
         0.9          40   14.5  99.4  98.7  97.8  93.0    12.8  99.2  97.4  95.2  92.7
                      70   18.4  99.9  99.7  99.5  95.4    15.3  99.9  99.5  98.4  95.2
                     100   17.6 100.0 100.0  99.9  98.0    17.5  99.9  99.7  99.5  97.7

Table S15: Empirical powers (×100) of the proposed test T_n defined in (6) for K = 1, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. t(df). The nominal size of the tests is 5%.

                                      df = 2                             df = 5
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 5  0.5          40   16.7  94.3  89.8  85.5  84.2    13.6  94.0  90.0  83.7  83.0
                      70   17.3  97.5  94.8  91.4  91.1    13.6  96.5  93.3  89.1  89.8
                     100   15.6  97.9  96.1  93.5  95.9    12.7  97.9  95.0  92.3  94.9
         0.9          40   15.8  99.7  98.9  98.2  93.3    14.1  99.1  97.8  96.4  92.3
                      70   18.6 100.0 100.0 100.0  96.1    18.4  99.9  99.4  99.2  94.9
                     100   19.5 100.0 100.0 100.0  98.3    20.0 100.0 100.0  99.8  97.9

Table S16: Empirical powers (×100) of the proposed test T_n defined in (6) for K = 2, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. t(df). The nominal size of the tests is 5%.

                                      df = 2                             df = 5
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 5  0.5          40   17.1  95.2  90.8  85.8  85.0    16.0  94.6  89.7  84.2  83.0
                      70   16.8  97.9  95.7  92.7  90.8    16.4  97.0  94.4  91.1  91.1
                     100   17.0  98.2  96.7  94.7  94.7    14.9  98.3  96.2  92.7  94.3
         0.9          40   16.0  99.7  99.2  98.5  93.3    14.9  99.0  97.8  96.1  92.2
                      70   17.5 100.0  99.8  99.8  95.0    16.1  99.9  99.5  99.2  95.5
                     100   18.4 100.0 100.0 100.0  97.5    16.7 100.0  99.9  99.9  97.5

Table S17: Empirical powers (×100) of the proposed test T_n defined in (6) for K = 3, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. t(df). The nominal size of the tests is 5%.

                                      df = 2                             df = 5
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 5  0.5          40   18.4  93.8  89.3  85.5  84.7    16.0  94.0  89.5  85.2  83.8
                      70   18.0  97.4  95.0  92.5  91.6    17.2  97.1  94.1  90.0  90.8
                     100   17.8  98.8  97.5  95.5  95.2    16.2  98.2  96.0  93.0  95.2
         0.9          40   16.0  99.8  99.4  98.7  92.5    12.7  99.4  97.9  96.0  92.2
                      70   18.4 100.0 100.0 100.0  95.8    17.8 100.0  99.8  99.3  95.0
                     100   20.8 100.0 100.0 100.0  97.8    19.1 100.0 100.0 100.0  97.4

Table S18: Empirical powers (×100) of the proposed test T_n defined in (6) for K = 4, with the untruncated critical value (c_κ = ∞) and the truncated critical values defined in (18) with c_κ = 0.45, 0.55, 0.65, and the KPSS test, in a simulation with 2000 replications. The constant c_κ determines the level of truncation for the critical values of T_n. The innovations ε_t are i.i.d. t(df). The nominal size of the tests is 5%.

                                      df = 2                             df = 5
Setting                N      ∞   0.45  0.55  0.65  KPSS       ∞   0.45  0.55  0.65  KPSS
Model 5  0.5          40   17.9  95.2  91.2  87.1  84.2    16.8  94.1  90.3  86.0  83.8
                      70   19.1  97.7  95.3  92.8  91.6    14.2  97.2  93.7  90.0  90.0
                     100   17.8  98.6  97.0  95.5  94.8    15.6  98.7  96.0  92.8  94.7
         0.9          40   14.5  99.7  99.4  98.9  93.8    14.7  99.5  98.4  97.0  93.2
                      70   20.0 100.0 100.0  99.9  95.9    18.6  99.9  99.6  99.4  94.9
                     100   19.6 100.0 100.0 100.0  97.3    17.9 100.0 100.0 100.0  97.5
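The simulations above rest on the fact that drives the proposed test: the sample autocovariance function stays near its finite population value for an I(0) process but diverges with the sample size under a unit root. The following is a minimal Monte Carlo sketch of that fact; it is only an illustration (the function names `sample_acov` and `simulate` and the choice of AR(1) coefficient 0.5 are ours for the example), not the paper's test statistic T_n, which additionally involves sample-splitting and a normal approximation for its null distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_acov(x, k=1):
    # Biased sample autocovariance at lag k (divides by n, as is standard).
    n = len(x)
    xc = x - x.mean()
    return float(np.dot(xc[:n - k], xc[k:]) / n)

def simulate(n, unit_root, reps=500):
    # Average lag-1 sample autocovariance over Monte Carlo replications.
    vals = []
    for _ in range(reps):
        eps = rng.standard_normal(n)
        if unit_root:
            y = np.cumsum(eps)            # random walk: I(1)
        else:
            y = np.empty(n)               # stationary AR(1), phi = 0.5: I(0)
            y[0] = eps[0]
            for t in range(1, n):
                y[t] = 0.5 * y[t - 1] + eps[t]
        vals.append(sample_acov(y, 1))
    return float(np.mean(vals))

# Under I(0), gamma_hat(1) settles near the population value
# phi / (1 - phi^2) = 2/3; under a unit root it grows with n.
print(simulate(200, unit_root=False), simulate(200, unit_root=True))
```

Doubling n leaves the I(0) average essentially unchanged while roughly doubling the unit-root average, which is exactly the "how large is large" gap the truncated critical values in (18) exploit.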