A Frequency Domain Bootstrap for General Multivariate Stationary Processes
Marco Meyer and Efstathios Paparoditis∗

Technische Universität Braunschweig
Inst. f. Math. Stochastik
Universitätsplatz 2, D–38106 Braunschweig, Germany
e-mail: [email protected]
University of Cyprus
Department of Mathematics and Statistics
P.O. Box 20537, CY-1678 Nicosia, Cyprus
e-mail: [email protected]
Abstract:
For many relevant statistics of multivariate time series, no valid frequency domain bootstrap procedures exist. This is mainly due to the fact that the distribution of such statistics depends on the fourth-order moment structure of the underlying process in nearly every scenario, except for some special cases like Gaussian time series. In contrast to the univariate case, even additional structural assumptions such as linearity of the multivariate process or a standardization of the statistic of interest do not solve the problem. This paper focuses on integrated periodogram statistics as well as functions thereof and presents a new frequency domain bootstrap procedure for multivariate time series, the multivariate frequency domain hybrid bootstrap (MFHB), to fill this gap. Asymptotic validity of the MFHB procedure is established for general classes of periodogram-based statistics and for stationary multivariate processes satisfying rather weak dependence conditions. A simulation study is carried out which compares the finite sample performance of the MFHB with that of the moving block bootstrap.
MSC2020 subject classifications:
Primary 62M10, 62M15; secondary 62G09.
Keywords and phrases:
Bootstrap, periodogram, spectral means, stationary processes.
1. Introduction
Developing valid bootstrap methods for time series has been a particularly challenging problem since the mid-1980s. Soon after Efron's seminal paper (cf. Efron (1979)) on the bootstrap for i.i.d. observations, the first attempts towards an extension to dependent data were made. While for univariate time series a variety of proposals exists that are asymptotically valid under certain assumptions on the dependence structure of the underlying process and for certain types of statistics, very little progress has been made for multivariate time series. This is due to the fact that in the multivariate context the distribution of most relevant statistics depends on the fourth-order moment structure of the underlying process, which many bootstrap methods developed for univariate time series are not able to imitate. Even under the assumption of a linear time series, that is, an R^m-valued, strictly stationary stochastic process (X(t))_{t∈Z} which fulfils

    X(t) = Σ_{j=-∞}^{∞} B_j e(t-j),   t ∈ Z,   (1.1)

for certain m × m real matrices B_j and an i.i.d. white noise process (e(t))_{t∈Z}, the distribution of most statistics of interest depends on the fourth-order moment structure and many bootstrap methods fail. In contrast to this, for univariate linear time series – that is, for processes (1.1) with dimension m = 1 – there are a number of scenarios in which the distribution of some statistic of interest depends only on first and second-order moments and established univariate bootstrap methods are successful. It should be emphasized at this point that whenever we use the term linear time series (or linear process) in this work, we refer to a process as given by (1.1) including the i.i.d. assumption on the innovations e(t), as it is done in many standard references for time series, cf. Brockwell and Davis (1991), among others.

∗ Supported in part by a University of Cyprus Research Grant.
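For concreteness, a linear process of the form (1.1) with finitely many non-zero coefficient matrices can be simulated directly. The following minimal Python sketch generates a bivariate example with i.i.d. standard normal innovations; the coefficient matrices and sample size are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_linear_process(n, B, rng):
    """Simulate X(t) = sum_j B_j e(t-j) for a finite dictionary B = {j: B_j}
    of m x m coefficient matrices and i.i.d. N(0, I_m) innovations e(t)."""
    m = next(iter(B.values())).shape[0]
    lags = sorted(B)                            # lags j with B_j != 0
    pad = max(abs(min(lags)), abs(max(lags)))   # innovations needed outside 1..n
    e = rng.standard_normal((n + 2 * pad, m))   # innovation sequence e(t)
    X = np.zeros((n, m))
    for t in range(n):
        for j in lags:
            X[t] += B[j] @ e[t + pad - j]
    return X

# illustrative bivariate MA(1): B_0 = I, B_1 chosen arbitrarily
B = {0: np.eye(2), 1: np.array([[0.5, -0.3], [0.2, 0.4]])}
X = simulate_linear_process(500, B, rng)
print(X.shape)   # (500, 2)
```

Two-sided filters (negative lags j) are handled by the padding of the innovation sequence, so the same sketch covers non-causal examples of (1.1) as well.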
A subclass of (1.1) is that of the causal and invertible linear processes. In this case a sequence {A_j, j ∈ N} of m × m real matrices exists with Σ_{j=1}^{∞} ‖A_j‖_F < ∞, such that (1.1) can also be expressed as X(t) = Σ_{j=1}^{∞} A_j X(t-j) + e(t). {X(t), t ∈ Z} is then called a linear VAR(∞) process. Here ‖·‖_F refers to the Frobenius norm of a matrix. Notice that any process following expression (1.1) but with non-i.i.d. white noise innovations, i.e. where the e(t) are uncorrelated with zero mean but not independent, is nonlinear.

Bootstrap methods for time series are usually formulated either in the time domain or in the frequency domain, with a few hybrid methods that combine both approaches. In the following paragraphs we will give a short overview of existing methods for the univariate and the multivariate setup, and discuss their respective limitations. Prominent examples in the time domain are the block bootstrap and variants thereof, the AR sieve bootstrap and the linear process bootstrap, among others. Block bootstrap methods are rough tools which are valid for a wide class of processes, but typically their performance heavily depends on the block size. For univariate time series, the AR sieve bootstrap is known to be valid exclusively in those situations where the distribution of interest depends on the first and second-order characteristics of the process only, cf. Kreiss, Paparoditis and Politis (2011). There are some relevant statistics for which this is the case, like the sample mean for general stationary processes or sample autocorrelations for linear processes, while sample autocovariances (even for linear processes) are already outside the range of its validity. The same remarks also apply to the linear process bootstrap. To overcome some of the aforementioned limitations for univariate time series, Frangeskou and Paparoditis (2019) proposed a procedure that involves a wild bootstrap scheme to generate pseudo innovations which asymptotically correctly imitate the first, the second and the fourth-order moment structure of the true innovations. This extends the validity of the AR sieve bootstrap beyond the class of linear, causal and invertible processes.

However, if one switches to multivariate processes, the fourth-order moment structure shows up in the asymptotic distribution of almost any relevant statistic. This makes the bootstrap estimation problem much more involved. In particular, the AR sieve and the linear process bootstrap fail for multivariate non-invertible linear processes (1.1) and for such basic statistics as sample cross-correlations; see Jentsch and Kreiss (2010) and Meyer and Kreiss (2015) for more details. Thus, beyond causal and invertible linear processes, and apart from special cases like Gaussian time series, it seems that validity of the vector AR sieve bootstrap is essentially restricted to very elementary statistics like the sample mean. Even the aforementioned extension of the AR sieve via wild bootstrap-generated pseudo innovations is not available in the multivariate context. Consequently, for a wide class of stationary multivariate processes, including linear processes (1.1), and for many interesting statistics, the only available bootstrap method is essentially the time domain block bootstrap and variants thereof.

Concerning frequency domain bootstrap methods for univariate time series, the situation is very similar. Interest is here focused on so-called integrated periodogram statistics. These are statistics which are obtained by integrating over all frequencies the periodogram multiplied with some function of interest. Many time domain statistics like autocorrelations or autocovariances have a frequency domain analogue, that is, they can also be expressed as (functions of) integrated periodograms.
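This frequency domain representation can be checked numerically in the univariate case: with ϕ(λ) = e^{ihλ}, the integrated periodogram ∫ ϕ(λ) I(λ) dλ reproduces the sample autocovariance exactly. The sketch below (sample size, lag and grid size are arbitrary illustrative choices) approximates the integral by a Riemann sum on a grid finer than 2n points, which is exact for the trigonometric polynomial I(λ) up to floating point error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal(n)       # white noise suffices to check the identity

def gamma_hat(x, h):
    """Sample autocovariance with the 1/n convention, 0 <= h < n."""
    n = len(x)
    return np.dot(x[h:], x[:n - h]) / n

def periodogram(x, lam):
    """I(lam) = |d(lam)|^2 with d(lam) = (2*pi*n)^(-1/2) * sum_t x_t e^{-i t lam}."""
    n = len(x)
    t = np.arange(1, n + 1)
    d = np.sum(x * np.exp(-1j * t * lam)) / np.sqrt(2 * np.pi * n)
    return np.abs(d) ** 2

# Riemann sum of e^{i h lam} I(lam) over a grid of N >= 2n points is exact,
# since I is a trigonometric polynomial of degree n-1.
N = 4 * n
grid = 2 * np.pi * np.arange(N) / N
I_vals = np.array([periodogram(x, lam) for lam in grid])
h = 3
integral = np.real(2 * np.pi / N * np.sum(np.exp(1j * h * grid) * I_vals))
print(abs(integral - gamma_hat(x, h)))   # numerically zero
```

The same identity underlies Example 2.2 below in the multivariate case, with I(λ) replaced by the entries of the periodogram matrix.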
Hurvich and Zeger (1987), Franke and Härdle (1992) and Dahlhaus and Janas (1996) developed for univariate time series a multiplicative bootstrap procedure for the periodogram. By construction, this scheme is capable of imitating the variance of periodogram ordinates at different frequencies but not their dependence structure across frequencies. Since for periodogram-based statistics the fourth-order structure of the process is transmitted to their distribution through the weak dependence of the periodogram ordinates across different frequencies, the multiplicative periodogram bootstrap suffers from the same limitations as some of the time domain procedures discussed so far. More specifically, it is valid exclusively in those situations where the distribution of interest only depends on the spectral density, that is, on the second-order structure of the process. Dahlhaus and Janas (1996) showed that a subclass of standardized integrated periodograms, the so-called ratio statistics, falls into this category, but only under the assumption that the underlying time series is linear. Kreiss and Paparoditis (2012) proposed a modification of the multiplicative periodogram bootstrap which extends its validity for integrated periodograms to the entire class of univariate linear time series. Recently, Meyer et al. (2020) proposed a hybrid method which is valid for a very general class of weakly dependent univariate processes and which goes far beyond the linear process class.
However, for multivariate time series the related inference problems are much more involved and no valid frequency domain bootstrap method exists so far. In principle, an analogue of the univariate multiplicative approach (Franke and Härdle (1992) and Dahlhaus and Janas (1996)) can be formulated in the multivariate setup by using the fact that – for a wide class of stationary processes – the periodogram matrices at any fixed set of frequencies are asymptotically independent and have a Wishart distribution; see for instance Brillinger (1981). However, applying such a bootstrap approach alone will fail in imitating the dependence across periodogram ordinates since it can only capture the second-order properties of the underlying multivariate process. Therefore, there seem to be no relevant statistics for multivariate time series for which such a procedure will be valid. This is true even for the linear process class (1.1), for which the distribution of statistics like auto- or cross-correlations depends on the fourth-order moment structure of the process. In other words, for multivariate time series, any frequency domain bootstrap scheme that ignores the weak dependence structure of the periodogram matrix across frequencies would systematically fail to properly capture the distribution of a large class of relevant statistics.

To summarize, a frequency domain bootstrap method that is capable of handling large classes of periodogram-based statistics for general multivariate processes is not available so far. It is the purpose of the present paper to close this gap. Towards this end we introduce the concept of the multivariate frequency domain hybrid bootstrap (MFHB). This procedure is composed of two ingredients. The first is the multivariate analogue of the previously mentioned multiplicative bootstrap approach proposed for univariate processes.
This ingredient is used to imitate features of the distribution of interest that depend on the second-order structure of the process. The second ingredient of our MFHB uses an adaptation of the convolved subsampling idea (cf. Tewes et al. (2019)) to the multivariate frequency domain context. It is used to capture those features of the same distribution that the first ingredient is systematically missing out on, and which are crucially determined by the fourth-order structure of the process. Notice that the MFHB procedure we propose is structurally related to the hybrid procedure that was proposed in Meyer et al. (2020) for univariate time series. However, the MFHB for multivariate time series has to take a number of obstacles into account that are not present in the univariate situation and that have to be solved in a novel way. In particular, the limiting distribution of multivariate integrated periodogram statistics – as well as of functions thereof – is a multivariate complex normal distribution where both covariance and relation matrix depend on the fourth-order moment structure of the underlying process. A valid frequency domain bootstrap procedure therefore has to imitate correctly not only the covariance but also the relation matrix of the corresponding distribution. Moreover, additional difficulties arise in the multivariate context from applying a frequency domain bootstrap to the class of smooth functions of integrated periodograms. Such an extension of the MFHB turns out to be important because it allows for applications to interesting classes of statistics, like, for instance, to cross-correlations. In order to make such an extension possible, the second and the fourth-order characteristics of the covariance and of the relation matrix of the related smooth functions have to be separated and appropriately imitated by the two ingredients of the MFHB procedure. As we will see, in contrast to integrated periodograms, this separation cannot be explicitly calculated and therefore the bootstrap scheme has to be appropriately modified. We show that the proposed MFHB procedure simultaneously establishes correct covariance and relation matrix estimation for both aforementioned classes of statistics. Moreover, the MFHB will be proven to be valid for integrated periodogram statistics as well as for smooth functions thereof, and for a wide class of stochastic processes which includes many known linear and nonlinear multivariate time series models.

The remainder of this paper is organized as follows. Section 2 states the assumptions we impose on the multivariate process and recalls some definitions and limiting results for multivariate integrated periodogram statistics. Section 3 presents the MFHB procedure for integrated periodograms and for functions thereof, discusses some of its features and establishes its asymptotic validity. Section 4 presents simulations that investigate the finite sample performance of the MFHB and compares it with that of the moving block bootstrap. The proofs of our main results are presented in the Appendix of this paper while the proofs of some technical results are deferred to the Supplementary Material.
2. Preliminaries and basic results
Consider an R^m-valued, weakly stationary stochastic process (X(t))_{t∈Z} with mean zero and component processes (X_r(t))_{t∈Z}, r = 1, ..., m. We denote the autocovariance matrix of the process by Γ(h) = E(X(t+h) X(t)^⊤) ∈ R^{m×m}, for h ∈ Z, with entries γ_rs(h) = Cov(X_r(t+h), X_s(t)), r, s ∈ {1, ..., m}, which fulfill γ_rs(h) = γ_sr(-h). Furthermore, the k-th order cumulant of (X(t))_{t∈Z} is denoted, for any r_1, ..., r_k ∈ {1, ..., m} and t, h_1, ..., h_{k-1} ∈ Z, by

    cum( X_{r_1}(t+h_1), X_{r_2}(t+h_2), ..., X_{r_{k-1}}(t+h_{k-1}), X_{r_k}(t) ).   (2.1)

The following stationarity and weak dependence assumptions are imposed on the process under consideration.

Assumption 1. (X(t))_{t∈Z} is eighth-order stationary, i.e. for all r_j ∈ {1, 2, ..., m} all joint cumulants of the process up to order eight do not depend on the time point t. We therefore write c_{r_1 r_2 ... r_k}(h_1, h_2, ..., h_{k-1}) for (2.1). Furthermore, it holds for all r_1, ..., r_8 ∈ {1, ..., m}

(i) Σ_{h∈Z} (1 + |h|) |γ_{r_1 r_2}(h)| < ∞,
(ii) Σ_{h_1,h_2,h_3∈Z} (1 + |h_1| + |h_2| + |h_3|) |c_{r_1 r_2 r_3 r_4}(h_1, h_2, h_3)| < ∞,
(iii) Σ_{h_1,...,h_7∈Z} |c_{r_1 ... r_8}(h_1, h_2, ..., h_7)| < ∞.

The above assumption is satisfied for a large class of stochastic processes which includes, for instance, the multivariate linear process class (1.1) under certain moment assumptions on the innovations e(t) and summability conditions on the matrices B_j; see Brillinger (1981).

Notice that since Σ_{h∈Z} |γ_{r_1 r_2}(h)| < ∞ for all r_1, r_2 ∈ {1, 2, ..., m}, the process (X(t))_{t∈Z} possesses a spectral density matrix f: [-π, π] → C^{m×m} with entries

    f_rs(λ) = (2π)^{-1} Σ_{h∈Z} γ_rs(h) e^{-ihλ},   λ ∈ [-π, π],

r, s ∈ {1, 2, ..., m}, which is Hermitian, i.e., f_rs(λ) = f_sr(-λ) = conj(f_sr(λ)) and f(λ) = f(λ)^*.
Here, and throughout this work, A^* denotes the conjugate transpose of a complex-valued matrix A. Furthermore, f is bounded from above and is a continuous function of the frequency λ. In the following we additionally assume that the eigenvalues of the spectral density matrix f are all bounded away from zero over all frequencies.

Assumption 2.
A constant δ > 0 exists such that the eigenvalues σ(f(λ)) of the spectral density matrix satisfy min σ(f(λ)) > δ for all frequencies λ ∈ (-π, π].

Given an m-dimensional vector time series X(1), X(2), ..., X(n) stemming from (X(t))_{t∈Z}, a common moment estimator of the cross covariances γ_rs(h), for -n < h < n, is given by

    γ̂_rs(h) = (1/n) Σ_{t=1}^{n-h} X_r(t+h) X_s(t),        for 0 ≤ h ≤ n-1,
    γ̂_rs(h) = (1/n) Σ_{t=1}^{n-|h|} X_r(t) X_s(t+|h|),    for -(n-1) ≤ h ≤ -1.

In the following, the periodogram matrix I: [-π, π] → C^{m×m} with I(λ) = d(λ) d(λ)^* is used as a basic statistic, where d(λ) = (d_1(λ), ..., d_m(λ))^⊤ and the entries of the m-dimensional vector of finite Fourier transforms are given by

    d_r(λ) = (2πn)^{-1/2} Σ_{t=1}^{n} X_r(t) e^{-itλ},   r = 1, 2, ..., m.

Notice that I_rs(λ) = d_r(λ) d_s(-λ), while I_rr(λ) I_ss(λ) = I_rs(λ) I_sr(λ). Furthermore, it holds

    I_rs(λ) = (2π)^{-1} Σ_{h=-(n-1)}^{n-1} γ̂_rs(h) e^{-ihλ}.

Let λ_{j,n} = 2πj/n, j ∈ G(n), be the Fourier frequencies based on a sample size n, where

    G(n) := { j ∈ Z : 1 ≤ |j| ≤ [n/2] }.   (2.2)

Denote further, for p, q, r, s ∈ {1, 2, ..., m}, by

    f_pqrs(λ, µ, η) = (2π)^{-3} Σ_{h_1,h_2,h_3∈Z} c_pqrs(h_1, h_2, h_3) e^{-i(h_1 λ + h_2 µ + h_3 η)}

the fourth-order cumulant spectral density of (X(t))_{t∈Z}. The following lemma, which describes the covariance structure of the elements of the periodogram matrix at the Fourier frequencies, is very useful for our subsequent analysis.

Lemma 2.1.
Let (X(t))_{t∈Z} have finite fourth moments and be fourth-order stationary, satisfying the summability conditions (i) and (ii) of Assumption 1. Then it holds for all Fourier frequencies λ_{j,n}, λ_{k,n} ∈ [0, π] and for all r, s, v, w ∈ {1, ..., m}

    Cov( I_rs(λ_{j,n}), I_vw(λ_{k,n}) ) = S_1 + S_2 + S_3,

where

    S_1 = (2π/n) f_rsvw(λ_{j,n}, -λ_{j,n}, -λ_{k,n}) + O(1/n²),
    S_2 = f_rv(λ_{j,n}) f_sw(-λ_{j,n}) + O(1/n)  if λ_{j,n} = λ_{k,n},  and  S_2 = O(1/n²) otherwise,
    S_3 = f_rw(λ_{j,n}) f_sv(λ_{j,n}) + O(1/n)  if λ_{j,n} = λ_{k,n} ∈ {0, π},  and  S_3 = O(1/n²) otherwise,

and where all O(·) bounds are uniform over all Fourier frequencies.

As is seen from the above result, the covariance between the elements of the periodogram matrix at different frequencies vanishes at the rate 1/n and depends on the fourth-order cumulant spectral density f_rsvw. We will see that this rate is not fast enough, so that the covariance between periodogram ordinates – and consequently the fourth-order structure of the process – shows up in the limiting distribution of integrated periodogram statistics. This class of statistics, which we will consider in the following, is defined as functions of the periodogram matrix. In particular, we require:

Assumption 3.
For some J ∈ N, ϕ_j: [-π, π] → C, j = 1, 2, ..., J, are square-integrable functions which are bounded in absolute value.

Now, the vector of integrated periodogram statistics we consider is defined as

    M_n = ( M(ϕ_j, I_{r_j s_j}) = ∫_{-π}^{π} ϕ_j(λ) I_{r_j s_j}(λ) dλ,  j = 1, 2, ..., J )^⊤.   (2.3)

Note that M_n is an estimator of the following vector of spectral means:

    M = ( M(ϕ_j, f_{r_j s_j}) = ∫_{-π}^{π} ϕ_j(λ) f_{r_j s_j}(λ) dλ,  j = 1, 2, ..., J )^⊤.   (2.4)

Before proceeding with some limiting results regarding the behaviour of the estimators M_n, let us look at some examples.

Example 2.2.
The sample cross-covariance γ̂_rs(h) at lag 0 ≤ h < n is an integrated periodogram statistic. This is due to the fact that, choosing ϕ(λ) = e^{ihλ}, it follows from straightforward calculations that γ̂_rs(h) = M(ϕ, I_rs) as well as γ_rs(h) = M(ϕ, f_rs). Notice that for -n < h < 0, γ̂_rs(h) = γ̂_sr(-h) is an estimator of γ_rs(h).

Example 2.3.
The sample cross-correlation ρ̂_rs(h) at lag 0 ≤ h < n is a function of integrated periodograms. To elaborate, let ϕ_1(λ) = e^{ihλ}, ϕ_2(λ) = ϕ_3(λ) = 1, and consider the corresponding three-dimensional vector of spectral means. Then,

    ρ̂_rs(h) = M(ϕ_1, I_rs) / ( M(ϕ_2, I_rr) M(ϕ_3, I_ss) )^{1/2}

is an estimator of

    ρ_rs(h) = M(ϕ_1, f_rs) / ( M(ϕ_2, f_rr) M(ϕ_3, f_ss) )^{1/2},

the lag h cross-correlation. Notice that ρ_rs(h) and ρ̂_rs(h) are functions of the elements of the vectors

    M = ( M(ϕ_1, f_rs), M(ϕ_2, f_rr), M(ϕ_3, f_ss) )^⊤  and  M_n = ( M(ϕ_1, I_rs), M(ϕ_2, I_rr), M(ϕ_3, I_ss) )^⊤,

respectively. The limiting distribution of ρ̂_rs(h) will be derived in Example 2.4.

For practical calculations the integral in the expression for M(ϕ, I_rs) is commonly replaced by a Riemann sum using the Fourier frequencies. The corresponding approximation of M(ϕ, I_rs) is given by

    M_{G(n)}(ϕ, I_rs) = (2π/n) Σ_{l∈G(n)} ϕ(λ_{l,n}) I_rs(λ_{l,n}).

Before discussing the asymptotic properties of M_n, we comment on some properties of the complex normal distribution which are important for our subsequent discussion. An m-dimensional complex random vector X = (X_1, X_2, ..., X_m)^⊤ is called complex normal (or complex Gaussian) if and only if the 2m-dimensional vector of real and imaginary parts

    ( Re(X)^⊤, Im(X)^⊤ )^⊤ := ( Re(X_1), ..., Re(X_m), Im(X_1), ..., Im(X_m) )^⊤

has a 2m-dimensional (real) normal distribution. The complex normal distribution is determined by three parameters: expectation µ_X = E(X), covariance matrix Σ_X, and relation matrix Γ_X, where

    Σ_X = E( [X - µ_X] [X - µ_X]^* ),   Γ_X = E( [X - µ_X] [X - µ_X]^⊤ ),

recalling that A^* denotes the conjugate transpose of any matrix or vector A. We therefore write X ∼ N^c_m(µ_X, Σ_X, Γ_X). We are particularly interested in the case of centered vectors, i.e.
µ_X = 0. It is then easy to see that the covariance matrix of the real normal vector (Re(X)^⊤, Im(X)^⊤)^⊤ can be directly deduced from the parameters of the complex normal vector X via

    X ∼ N^c_m(0, Σ_X, Γ_X)  ⟺  ( Re(X)^⊤, Im(X)^⊤ )^⊤ ∼ N_{2m}(0, G_X),   (2.5)

where

    G_X = E[ (Re(X)^⊤, Im(X)^⊤)^⊤ (Re(X)^⊤, Im(X)^⊤) ]
        = (1/2) [ Re(Σ_X) + Re(Γ_X)    -Im(Σ_X) + Im(Γ_X) ;
                  Im(Σ_X) + Im(Γ_X)     Re(Σ_X) - Re(Γ_X) ].

The above expression shows that G_X is fully determined by Σ_X and Γ_X, and vice versa. In particular, denote by G_{X;ij}, i, j ∈ {1, 2}, the four matrices appearing in the i-th row and j-th column of the above block matrix G_X. Then Σ_X = (G_{X;11} + G_{X;22}) + i(G_{X;21} - G_{X;12}), while Γ_X = (G_{X;11} - G_{X;22}) + i(G_{X;21} + G_{X;12}). The equivalence from (2.5) also carries over to weak convergence: for a sequence of complex random vectors (X_n)_{n∈N} we have X_n →^d N^c_m(0, Σ_X, Γ_X) if and only if (Re(X_n)^⊤, Im(X_n)^⊤)^⊤ →^d N_{2m}(0, G_X).

For µ_X = 0 there are two particularly important special cases where the distribution is completely determined by the matrix Σ_X: the real normal case and the circularly symmetric case. The centered random vector X is almost surely real-valued if and only if Σ_X = Γ_X, as a simple calculation shows. In this case one may switch to the usual notation for real-valued random vectors:

    X ∼ N^c_m(0, Σ_X, Σ_X)  ⟺  X ∼ N_m(0, Σ_X).

Another important special case is the circularly symmetric case: X is called circularly symmetric if for all ϕ ∈ (-π, π] the distribution of e^{iϕ} X equals the distribution of X. This is the case if and only if µ_X = 0 and Γ_X = 0.
In this case the joint distribution of real and imaginary parts takes a very specific form, as can be seen from (2.5):

    X ∼ N^c_m(0, Σ_X, 0)  ⟺  ( Re(X)^⊤, Im(X)^⊤ )^⊤ ∼ N_{2m}( 0, (1/2) [ Re(Σ_X)  -Im(Σ_X) ; Im(Σ_X)  Re(Σ_X) ] ).

In particular, Re(X) and Im(X) are then identically distributed.

Under Assumption 1 and some additional weak dependence conditions on the process (X(t))_{t∈Z} it is known that, for any r, s ∈ {1, ..., m}, M(ϕ, I_rs) is a consistent estimator for M(ϕ, f_rs) and that the following central limit theorem holds true for V_n := √n (M_n - M):

    V_n = √n ( M(ϕ_j, I_{r_j s_j}) - M(ϕ_j, f_{r_j s_j}),  j = 1, ..., J )^⊤  →^d  V,   (2.6)

where V is a J-dimensional complex normal random vector with mean zero, covariance matrix Σ, and relation matrix Γ. That is,

    V = ( V_{r_j s_j}(ϕ_j),  j = 1, ..., J )^⊤ ∼ N^c_J(0, Σ, Γ).

V fulfils

    conj( V_{r_j s_j}(ϕ_j(·)) ) = V_{r_j s_j}( conj(ϕ_j(-·)) ),   (2.7)

and covariance and relation parameters are given as follows. The covariance matrix decomposes into Σ = Σ^{(1)} + Σ^{(2)}, where the (j,k)-th element of the matrix Σ is given by

    Σ_{jk} = Cov( V_{r_j s_j}(ϕ_j), V_{r_k s_k}(ϕ_k) ) = Σ^{(1)}_{jk} + Σ^{(2)}_{jk}   (2.8)

with

    Σ^{(1)}_{jk} = 2π ∫_{-π}^{π} ϕ_j(λ) conj(ϕ_k(λ)) f_{r_j r_k}(λ) f_{s_j s_k}(-λ) dλ
                 + 2π ∫_{-π}^{π} ϕ_j(λ) conj(ϕ_k(-λ)) f_{r_j s_k}(λ) f_{s_j r_k}(-λ) dλ,   (2.9)

and

    Σ^{(2)}_{jk} = 2π ∫_{-π}^{π} ∫_{-π}^{π} ϕ_j(λ_1) conj(ϕ_k(-λ_2)) f_{r_j s_j r_k s_k}(λ_1, -λ_1, λ_2) dλ_1 dλ_2.   (2.10)

The relation matrix decomposes into Γ = Γ^{(1)} + Γ^{(2)}, where the (j,k)-th element is given by

    Γ_{jk} = Cov( V_{r_j s_j}(ϕ_j), conj(V_{r_k s_k}(ϕ_k)) ) = Γ^{(1)}_{jk} + Γ^{(2)}_{jk}   (2.11)

with

    Γ^{(1)}_{jk} = 2π ∫_{-π}^{π} ϕ_j(λ) ϕ_k(-λ) f_{r_j r_k}(λ) f_{s_j s_k}(-λ) dλ
                 + 2π ∫_{-π}^{π} ϕ_j(λ) ϕ_k(λ) f_{r_j s_k}(λ) f_{s_j r_k}(-λ) dλ,   (2.12)

and

    Γ^{(2)}_{jk} = 2π ∫_{-π}^{π} ∫_{-π}^{π} ϕ_j(λ_1) ϕ_k(λ_2) f_{r_j s_j r_k s_k}(λ_1, -λ_1, λ_2) dλ_1 dλ_2.   (2.13)

Observe that due to (2.7) it holds Γ_{jk} = Cov( V_{r_j s_j}(ϕ_j(·)), V_{r_k s_k}(conj(ϕ_k(-·))) ). Hence, Γ can be obtained from Σ by replacing ϕ_k(·) with conj(ϕ_k(-·)), and we stated the explicit form merely for convenience reasons. We refer to Rosenblatt (1963), Brillinger (1981), Dahlhaus (1985) and Taniguchi and Kakizawa (2000). As is seen from the above expressions, the terms Σ^{(2)} and Γ^{(2)} appearing in the covariance and relation matrices of the limiting complex Gaussian distribution depend on the entire fourth-order moment structure of the process (X(t))_{t∈Z}, and this dependence is due to the covariance of periodogram ordinates across frequencies; see Lemma 2.1.

As argued in (2.5), the weak convergence V_n →^d V stated in (2.6) is equivalent to the statement that the 2J-dimensional real random vector (Re(V_n)^⊤, Im(V_n)^⊤)^⊤ converges weakly to a 2J-dimensional real normal distribution with mean zero and covariance matrix

    G = E[ (Re(V)^⊤, Im(V)^⊤)^⊤ (Re(V)^⊤, Im(V)^⊤) ]
      = (1/2) [ Re(Σ) + Re(Γ)    -Im(Σ) + Im(Γ) ;
                Im(Σ) + Im(Γ)     Re(Σ) - Re(Γ) ].   (2.14)

Notice that the limiting distribution V is not necessarily real-valued even if the functions ϕ_j, j = 1, 2, ..., J, are real-valued. However, if the functions ϕ_j satisfy conj(ϕ_j(-λ)) = ϕ_j(λ) for j = 1, 2, ..., J, then Σ = Γ and V is real-valued, that is, V ∼ N_J(0, Σ).

Example 2.4 (Ex. 2.3 continued). With the central limit theorem given for V_n above, we can now state the limiting distribution of the sample cross-correlation. For the particular vectors M_n and M from Example 2.3 it can easily be seen that V_n is real-valued and converges to a real Gaussian random vector V.
Applying the delta method to this CLT, it follows after straightforward but tedious computations that

    √n ( ρ̂_rs(h) - ρ_rs(h) )  →^d  N(0, τ²),

where

    τ² = Σ_{j∈Z} { ρ_rr(j) ρ_ss(j) + ρ_rs(j+h) ρ_sr(j-h) + c_rsrs(j, j-h, h) / (γ_rr(0) γ_ss(0))
         + (1/2) ρ_rs(h)² [ ρ_rr(j)² + ρ_ss(j)² + 2 ρ_rs(j)² ]
         - ρ_rs(h) [ ρ_rr(j) ρ_sr(j-h) + ρ_rs(j) ρ_ss(j-h) ]
         + (1/4) ρ_rs(h)² [ c_rrrr(j, j, 0)/γ_rr(0)² + c_ssss(j, j, 0)/γ_ss(0)² + 2 c_rrss(j, j, 0)/(γ_rr(0) γ_ss(0)) ]
         - ( ρ_rs(h) / (γ_rr(0) γ_ss(0))^{1/2} ) [ c_rsrr(j, j-h, 0)/γ_rr(0) + c_rsss(j, j-h, 0)/γ_ss(0) ] }.

The above dependence of the variance of the limiting Gaussian distribution on the fourth-order structure of the process does not simplify even in the case of linear processes, as is the case for univariate linear processes. The following simple example illustrates this fact.

Example 2.5 (Example 2.4 continued). Consider the following simple bivariate linear process, which is a vector moving average process of order 1:

    X(t) = ( X_1(t), X_2(t) )^⊤ = ( ε_1(t), ε_2(t) )^⊤ + [ 1  -1 ; 1  1 ] ( ε_1(t-1), ε_2(t-1) )^⊤,   (2.15)

where (ε_1(t))_{t∈Z} and (ε_2(t))_{t∈Z} are independent univariate i.i.d. white noise processes with E(ε_j(t)) = 0, E(ε_j(t)²) = 1, and kurtosis E(ε_j(t)⁴) =: η_j, for j = 1, 2. We take a look at the cross-correlation at lag h = 0, and derive the limiting variance of √n ( ρ̂_12(0) - ρ_12(0) ). For the process (2.15) it holds ρ_12(0) = 0, and the expression τ² from Example 2.4 simplifies considerably to

    τ² = ρ_11(0) ρ_22(0) + 2 ρ_11(1) ρ_22(1) + 2 ρ_12(1) ρ_21(1) + Σ_{j∈Z} c_1212(j, j, 0)/(γ_11(0) γ_22(0))
       = 1 + (η_1 - 3)/9 + (η_2 - 3)/9.

As is seen – and in contrast to what happens for univariate linear processes – in the multivariate context the limiting variance depends on the fourth-order structure of the underlying white noise, which can in general not be expressed in terms of second-order quantities of the process (X(t)).

The considerations in this section made it clear that for a frequency domain bootstrap to be successful in the multivariate context, it has to appropriately imitate the second and the fourth-order structure of the underlying stochastic process. This is what the MFHB procedure achieves.
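Example 2.5 can be checked by a small Monte Carlo experiment. The sketch below simulates a bivariate MA(1) process in the spirit of (2.15) with Gaussian innovations, for which η_1 = η_2 = 3, so that the limiting variance of √n ρ̂_12(0) reduces to 1. The coefficient matrix entries (unit entries with a single sign flip) reflect our reading of the garbled display of (2.15); sample size and replication number are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
Theta = np.array([[1.0, -1.0], [1.0, 1.0]])   # assumed MA(1) coefficient matrix

def sample_rho12(n, rng):
    """Sample cross-correlation at lag 0 of X(t) = e(t) + Theta e(t-1)."""
    e = rng.standard_normal((n + 1, 2))        # i.i.d. N(0, I_2) innovations
    X = e[1:] + e[:-1] @ Theta.T               # rows are X(1), ..., X(n)
    g12 = np.mean(X[:, 0] * X[:, 1])
    g11 = np.mean(X[:, 0] ** 2)
    g22 = np.mean(X[:, 1] ** 2)
    return g12 / np.sqrt(g11 * g22)

n, reps = 1000, 500
vals = np.array([sample_rho12(n, rng) for _ in range(reps)])
print(n * np.var(vals))   # close to 1 in this Gaussian setting
```

Replacing the Gaussian innovations by a heavy-tailed distribution (η_j > 3) inflates the empirical variance accordingly, illustrating the dependence on the fourth-order structure discussed above.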
3. The multivariate frequency domain hybrid bootstrap (MFHB)
We discuss the frequency domain procedure proposed in this paper in two parts. First we motivate and describe the MFHB procedure for integrated periodograms. Then we present a modification of the MFHB so that it can be successfully applied to functions of integrated periodograms.
The MFHB procedure for integrated periodograms generates two sets of bootstrap pseudo random variables which will be denoted by the superscripts ∗ and +, respectively. The procedure is divided into three main steps which are denoted by Step I, Step II and Step III. In Step I, independent pseudo periodogram matrices are generated which are denoted by I∗. This is done using the asymptotic complex Gaussian distribution of the vector of finite Fourier transforms and their asymptotic independence across frequencies. In Step II, the idea of convolved bootstrap of subsamples, cf. Tewes et al. (2019), is adopted to develop an algorithm which generates a second set of pseudo periodogram matrices, denoted by I+, which are independent of I∗. The pseudo periodogram matrices I+ correctly imitate the weak dependence of the ordinary periodogram matrices I across frequencies within subsamples. Step III merges the integrated periodogram statistics based on the two bootstrapped periodograms I∗ and I+ in an appropriate way. The merging ensures that replicates of the integrated periodograms based on I∗ imitate all features of the corresponding distribution up to those depending on the fourth-order structure of the process. The fourth-order features of this distribution are contributed by the corresponding statistics based on the pseudo periodograms I+. Notice that the MFHB bootstrap approximations are designed in such a way that the covariance and the relation matrix of the limiting complex Gaussian distribution of the integrated periodograms are consistently estimated, see Remark 3.2. Furthermore, the bootstrap random matrices I∗ and I+ are defined on the same probability space, with probability measure P∗, and are independent from each other.
Consequently, all bootstrap expectations, variances and covariances are denoted in the following by $E^*$, $\mathrm{Var}^*$, and $\mathrm{Cov}^*$, respectively. The following algorithm implements the previously discussed ideas.

Step I.1
Let $\widehat{f}$ be an estimator of the spectral density matrix $f$ and denote by $\widehat{f}_{rs}(\lambda)$ the $(r,s)$-th element of $\widehat{f}(\lambda)$.

Step I.2
Generate, independently for $j = 1, 2, \ldots, N := [n/2]$, pseudo finite Fourier transforms
$$ d^*(\lambda_{j,n}) = \big(d_1^*(\lambda_{j,n}), d_2^*(\lambda_{j,n}), \ldots, d_m^*(\lambda_{j,n})\big)^\top \sim \mathcal{N}^c_m\big(0, \widehat{f}(\lambda_{j,n}), 0\big) $$
and calculate the pseudo periodogram matrices
$$ I^*(\lambda_{j,n}) = d^*(\lambda_{j,n})\,\overline{d^*(\lambda_{j,n})}^{\,\top} \ \text{ for } j = 1, 2, \ldots, N, \qquad I^*(\lambda_{j,n}) = I^*(-\lambda_{j,n})^\top \ \text{ for } j = -1, -2, \ldots, -N, $$
with entries denoted by $I^*_{rs}(\lambda_{j,n})$, $r, s \in \{1, 2, \ldots, m\}$.

Step I.3
Let
$$ V^*_n = \sqrt{n}\,\Big( M_{G(n)}\big(\varphi_j, I^*_{r_j,s_j}\big) - M_{G(n)}\big(\varphi_j, \widehat{f}_{r_j,s_j}\big),\ j = 1, 2, \ldots, J \Big)^\top $$
be the analogue of $V_n$ based on $I^*$.

Step II.1
Select a positive integer $b < n$ and consider the set of all periodogram matrices based on subsamples of length $b$. Denote by $I_t(\lambda_{j,b})$ the periodogram matrix of the subsample $X(t), X(t+1), \ldots, X(t+b-1)$ at frequency $\lambda_{j,b} = 2\pi j / b$, $\lambda_{j,b} \in G(b)$. Let $\widetilde{f}(\lambda_{j,b}) = (n-b+1)^{-1} \sum_{t=1}^{n-b+1} I_t(\lambda_{j,b})$ be the average of the periodogram matrices of the subsamples at frequency $\lambda_{j,b}$.

Step II.2
Define the set of frequency domain residual matrices
$$ U_t(\lambda_{j,b}) = \widetilde{f}^{-1/2}(\lambda_{j,b})\, I_t(\lambda_{j,b})\, \widetilde{f}^{-1/2}(\lambda_{j,b}), $$
where $j = 1, 2, \ldots, [b/2]$, $t = 1, 2, \ldots, n-b+1$, and $A^{-1/2}$ denotes the square root of the inverse matrix $A^{-1}$, i.e., $A^{-1/2} A^{-1/2} = A^{-1}$.

Step II.3
Let $k = [n/b]$ and generate i.i.d. bootstrap random variables $i_1, i_2, \ldots, i_k$ with a discrete uniform distribution on the set $\{1, 2, \ldots, n-b+1\}$. Let
$$ I^+(\lambda_{j,b}) = \frac{1}{k} \sum_{\ell=1}^{k} I^+_{i_\ell}(\lambda_{j,b}), $$
where $I^+_{i_\ell}(\lambda_{j,b}) = \widehat{f}^{1/2}(\lambda_{j,b})\, U_{i_\ell}(\lambda_{j,b})\, \widehat{f}^{1/2}(\lambda_{j,b})$. We write $I^+_\ell(\lambda_{j,b})$ for $I^+_{i_\ell}(\lambda_{j,b})$. The entries of the pseudo periodogram matrices $I^+(\lambda_{j,b})$ are denoted $I^+_{rs}(\lambda_{j,b})$.

Step II.4
Let
$$ V^+_n = \sqrt{kb}\,\Big( M_{G(b)}\big(\varphi_j, I^+_{r_j,s_j}\big) - M_{G(b)}\big(\varphi_j, \widehat{f}_{r_j,s_j}\big),\ j = 1, 2, \ldots, J \Big)^\top $$
be the analogue of $V_n$ based on $I^+$.

Step III.1
Let
$$ G^*_n = E^*\left[ \binom{\mathrm{Re}(V^*_n)}{\mathrm{Im}(V^*_n)} \big(\mathrm{Re}(V^*_n)^\top, \mathrm{Im}(V^*_n)^\top\big) \right], \qquad G^+_n = E^*\left[ \binom{\mathrm{Re}(V^+_n)}{\mathrm{Im}(V^+_n)} \big(\mathrm{Re}(V^+_n)^\top, \mathrm{Im}(V^+_n)^\top\big) \right]. $$
Furthermore, denote by $\Sigma^+_{1,n}$ the $J \times J$ matrix with $(j,k)$-th element given by
$$ \sigma^+_{jk} = \frac{4\pi^2}{b} \sum_{l=-[b/2]}^{[b/2]} \varphi_j(\lambda_{l,b})\, \varphi_k(\lambda_{l,b})\, S_{r_j s_j s_k r_k}(\lambda_{l,b}) + \frac{4\pi^2}{b} \sum_{l=-[b/2]}^{[b/2]} \varphi_j(\lambda_{l,b})\, \varphi_k(-\lambda_{l,b})\, S_{r_j s_j r_k s_k}(\lambda_{l,b}), $$
and by $\Gamma^+_{1,n}$ the $J \times J$ matrix with $(j,k)$-th element given by
$$ c^+_{jk} = \frac{4\pi^2}{b} \sum_{l=-[b/2]}^{[b/2]} \varphi_j(\lambda_{l,b})\, \varphi_k(-\lambda_{l,b})\, S_{r_j s_j s_k r_k}(\lambda_{l,b}) + \frac{4\pi^2}{b} \sum_{l=-[b/2]}^{[b/2]} \varphi_j(\lambda_{l,b})\, \varphi_k(\lambda_{l,b})\, S_{r_j s_j r_k s_k}(\lambda_{l,b}), $$
where
$$ S_{rsuw}(\lambda) = \frac{1}{n-b+1} \sum_{t=1}^{n-b+1} \big(\widetilde{I}_{t,rs}(\lambda) - \widehat{f}_{rs}(\lambda)\big)\big(\widetilde{I}_{t,uw}(\lambda) - \widehat{f}_{uw}(\lambda)\big), $$
and $\widetilde{I}_{t,rs}(\lambda)$ is the $(r,s)$-th element of the matrix $\widetilde{I}_t(\lambda) = \widehat{f}^{1/2}(\lambda)\, U_t(\lambda)\, \widehat{f}^{1/2}(\lambda)$. Finally, define the matrix
$$ C^+_n = \frac{1}{2} \begin{pmatrix} \mathrm{Re}(\Sigma^+_{1,n}) + \mathrm{Re}(\Gamma^+_{1,n}) & -\mathrm{Im}(\Sigma^+_{1,n}) + \mathrm{Im}(\Gamma^+_{1,n}) \\ \mathrm{Im}(\Sigma^+_{1,n}) + \mathrm{Im}(\Gamma^+_{1,n}) & \mathrm{Re}(\Sigma^+_{1,n}) - \mathrm{Re}(\Gamma^+_{1,n}) \end{pmatrix}. $$

Step III.2
Calculate $G^\circ_n = G^*_n + \big(G^+_n - C^+_n\big)$ and
$$ \binom{\mathrm{Re}(V^\circ_n)}{\mathrm{Im}(V^\circ_n)} = \big(G^\circ_n\big)^{1/2} \big(G^*_n\big)^{-1/2} \binom{\mathrm{Re}(V^*_n)}{\mathrm{Im}(V^*_n)}. \tag{3.1} $$

Step III.3
Approximate the distribution of $V_n$ by that of $V^\circ_n = \mathrm{Re}(V^\circ_n) + i\, \mathrm{Im}(V^\circ_n)$.

The following series of remarks clarifies several aspects of the above bootstrap procedure.
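To fix ideas, the three steps above can be sketched numerically as follows. This is a minimal illustration assuming `numpy`; the helper names (`hermitian_power`, `draw_I_star`, `draw_I_plus`, `rescale`) and the passing of $k$ as an argument are ours and not part of the paper's notation, and the statistic-specific bookkeeping (the functions $\varphi_j$ and the spectral means) is omitted.

```python
import numpy as np

def hermitian_power(A, p):
    # Matrix power A**p of a Hermitian PSD matrix via its eigendecomposition;
    # used for the roots f^{1/2}, f^{-1/2} and G^{1/2} appearing in Steps I-III.
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 1e-12, None) ** p) @ V.conj().T

def draw_I_star(f_hat, rng):
    # Step I.2: draw d* ~ N_m^c(0, f_hat(lambda_j), 0) independently across
    # frequencies and set I*(lambda_j) = d* d*^H; f_hat has shape (N, m, m).
    N, m, _ = f_hat.shape
    I_star = np.empty((N, m, m), dtype=complex)
    for j in range(N):
        root = hermitian_power(f_hat[j], 0.5)
        z = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
        d = root @ z                      # circularly symmetric, covariance f_hat[j]
        I_star[j] = np.outer(d, d.conj())
    return I_star

def draw_I_plus(sub_periodograms, f_hat_b, k, rng):
    # Steps II.1-II.3: average the subsample periodograms to f_tilde, form the
    # residuals U_t = f_tilde^{-1/2} I_t f_tilde^{-1/2}, draw k of them i.i.d.
    # uniformly, recolor with f_hat^{1/2} and average; k plays the role of [n/b].
    n_sub, J_b, m, _ = sub_periodograms.shape
    f_tilde = sub_periodograms.mean(axis=0)
    idx = rng.integers(0, n_sub, size=k)
    I_plus = np.empty((J_b, m, m), dtype=complex)
    for j in range(J_b):
        r_inv = hermitian_power(f_tilde[j], -0.5)
        r_hat = hermitian_power(f_hat_b[j], 0.5)
        I_plus[j] = np.mean(
            [r_hat @ (r_inv @ sub_periodograms[i, j] @ r_inv) @ r_hat for i in idx],
            axis=0)
    return I_plus

def rescale(V_star_stacked, G_star, G_plus, C_plus):
    # Step III.2, equation (3.1): G° = G* + (G+ - C+) and
    # (Re V°; Im V°) = (G°)^{1/2} (G*)^{-1/2} (Re V*; Im V*).
    G_circ = G_star + (G_plus - C_plus)
    return hermitian_power(G_circ, 0.5) @ hermitian_power(G_star, -0.5) @ V_star_stacked
```

By construction $E^*[I^*(\lambda_j)] = \widehat{f}(\lambda_j)$, and the rescaling maps a vector with covariance $G^*_n$ to one with covariance $G^\circ_n$, since $(G^*_n)^{-1/2} G^*_n (G^*_n)^{-1/2} = \mathrm{Id}$.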
Remark 3.1.
In Step I.2, the problem of generating a complex normal random vector $d^*(\lambda_{j,n})$ can in practice be reduced to that of generating a real normal random vector (which is usually a pre-implemented routine). Since $\mathcal{N}^c_m(0, \widehat{f}(\lambda_{j,n}), 0)$ is a circularly symmetric complex normal distribution, one may generate
$$ \binom{\mathrm{Re}(d^*(\lambda_{j,n}))}{\mathrm{Im}(d^*(\lambda_{j,n}))} \sim \mathcal{N}_{2m}\left( \binom{0}{0},\ \frac{1}{2} \begin{pmatrix} \mathrm{Re}(\widehat{f}(\lambda_{j,n})) & -\mathrm{Im}(\widehat{f}(\lambda_{j,n})) \\ \mathrm{Im}(\widehat{f}(\lambda_{j,n})) & \mathrm{Re}(\widehat{f}(\lambda_{j,n})) \end{pmatrix} \right), $$
and then set $d^*(\lambda_{j,n}) = \mathrm{Re}(d^*(\lambda_{j,n})) + i \cdot \mathrm{Im}(d^*(\lambda_{j,n}))$.

Remark 3.2.

(i) As already mentioned, the pseudo periodogram matrices $I^*(\lambda_{j,n})$ in Step I.2 are generated using the fact that the $m$-dimensional vector of finite Fourier transforms converges towards a circularly symmetric complex normal distribution. Notice that the $I^*(\lambda_{j,n})$ generated in this step are independent across frequencies. Therefore, the integrated periodogram statistic $V^*_n$ obtained in Step I.3 and based on these pseudo periodogram matrices is only able to imitate the parts $\Sigma_1$ and $\Gamma_1$ of the limiting covariance and relation matrices $\Sigma$ and $\Gamma$, respectively. This will be proven in Lemma 3.9 (i). Recall that $\Sigma_1$ and $\Gamma_1$ only depend on the spectral density matrix $f$ of the underlying process.

(ii) In Step II.1 the periodogram matrices $I_t(\lambda_{j,b})$ of the subsamples of length $b$ are used. In Step II.2 frequency domain residual matrices $U_t(\lambda_{j,b})$ are defined which are i.i.d. resampled in Step II.3 to obtain the pseudo periodogram matrix $I^+(\lambda_{j,b})$. Notice that the latter matrix is an average of $k$ independent matrices $I^+_\ell(\lambda_{j,b})$ calculated using $k$ randomly selected residual matrices $U_{i_\ell}(\lambda_{j,b})$ and after pre- and post-multiplying them with the square root of the estimated spectral density matrix $\widehat{f}(\lambda_{j,b})$.
Observe that all quantities in Step II are calculated at the Fourier frequencies $\lambda_{j,b} = 2\pi j / b$, $j \in G(b)$, corresponding to the length $b$ of the subsamples.

(iii) The integrated periodogram statistic $V^+_n$ based on $I^+(\lambda_{j,b})$ is used to imitate the missing terms $\Sigma_2$ and $\Gamma_2$ which depend on the fourth-order moment structure of the process. To do this appropriately, notice first that the pseudo statistic $V^+_n$ asymptotically correctly imitates the entire covariance and relation matrices $\Sigma$ and $\Gamma$, as will be proven in Lemma 3.9 (ii). Since we use $V^*_n$ generated in Step I to imitate the distribution of $V_n$ and the matrices $\Sigma_1$ and $\Gamma_1$, we have to subtract from the covariance and relation matrix of $V^+_n$ the corresponding parts $\Sigma^+_{1,n}$ and $\Gamma^+_{1,n}$, so that only the desired estimators of the components $\Sigma_2$ and $\Gamma_2$ are left. This is done in Step III.1 and Step III.2. In particular, in Step III.1 the elements $\sigma^+_{jk}$ and $c^+_{jk}$ of the matrices $\Sigma^+_{1,n}$ and $\Gamma^+_{1,n}$ are explicitly calculated and the corresponding matrix $C^+_n$ is obtained. The latter matrix is then subtracted from the matrix $G^+_n$, so that the obtained matrix $G^+_n - C^+_n$ contains the elements of the covariance and relation matrix of $V^+_n$ which only depend on the fourth-order structure of the process, that is, the parts $\Sigma^+_{2,n}$ and $\Gamma^+_{2,n}$. These are used to estimate $\Sigma_2$ and $\Gamma_2$. Now, adding to this difference the matrix $G^*_n$ leads to a new matrix $G^\circ_n$ which correctly imitates both parts of $\Sigma$ and $\Gamma$, compare also Lemma 3.9 (ii). We stress here the fact that in $G^\circ_n$, the parts corresponding to $\Sigma_1$ and $\Gamma_1$ are contributed by the bootstrap procedure based on the asymptotic Gaussianity of the finite Fourier transforms, while the parts $\Sigma_2$ and $\Gamma_2$ are contributed by the convolved bootstrap procedure, that is by $\Sigma^+_{2,n}$ and $\Gamma^+_{2,n}$.

(iv) In Step III.2 the matrix $G^\circ_n$ is used in equation (3.1) to appropriately rescale the bootstrap vector $V^*_n$.
The resulting bootstrap complex vector $V^\circ_n$ is finally used in Step III.3 to approximate the distribution of the complex vector $V_n$.

Remark 3.3.
In the definition of the frequency domain residual matrices $U_t(\lambda_{j,b})$ in Step II.2, the pre- and post-multiplication with the matrix $\widetilde{f}^{-1/2}(\lambda_{j,b})$ ensures that the resampled residual matrices $U_{i_\ell}(\lambda_{j,b})$ satisfy
$$ E^*\big(U_{i_\ell}(\lambda_{j,b})\big) = \widetilde{f}^{-1/2}(\lambda_{j,b}) \left( \frac{1}{n-b+1} \sum_{t=1}^{n-b+1} I_t(\lambda_{j,b}) \right) \widetilde{f}^{-1/2}(\lambda_{j,b}) = \mathrm{Id}_m, $$
where $\mathrm{Id}_m$ denotes the $m \times m$ unit matrix. Therefore,
$$ E^*\big(I^+(\lambda_{j,b})\big) = E^*\big(I^+_{i_\ell}(\lambda_{j,b})\big) = \widehat{f}^{1/2}(\lambda_{j,b}) \cdot \mathrm{Id}_m \cdot \widehat{f}^{1/2}(\lambda_{j,b}) = \widehat{f}(\lambda_{j,b}). $$
Furthermore, by the consistency of $\widetilde{f}(\lambda)$ as an estimator of $f(\lambda)$, see the proof of Lemma 3.9, it holds true that for any fixed frequency $\lambda \in (0, \pi)$, $U_t(\lambda) = \widetilde{f}^{-1/2}(\lambda)\, I_t(\lambda)\, \widetilde{f}^{-1/2}(\lambda) \stackrel{D}{\to} U(\lambda)$, as $b \to \infty$, where $U(\lambda)$ has the complex Wishart distribution of dimension $m$ and one degree of freedom, i.e., $U(\lambda) \sim W^c_m(1, \mathrm{Id}_m)$; see Brillinger (1981), Section 4.2. Recall that if $U(\lambda) \sim W^c_m(1, \mathrm{Id}_m)$ then $E(U(\lambda)) = \mathrm{Id}_m$ and $\mathrm{Cov}(U_{jk}(\lambda), U_{lm}(\lambda)) = \delta_{j,l}\,\delta_{k,m}$ for any two elements $U_{jk}(\lambda)$ and $U_{lm}(\lambda)$ of the matrix $U(\lambda)$.

Remark 3.4.
The terms $S_{rsuw}(\lambda)$ which appear in the expressions of $\sigma^+_{jk}$ and $c^+_{jk}$ in Step III.1 are obtained by evaluating the covariance expressions $\mathrm{Cov}^*\big(I_{i_1, r_j s_j}(\lambda_{\ell,b}), I_{i_1, r_k s_k}(\lambda_{\ell,b})\big)$, where $I_{i_1, rs}(\lambda_{\ell,b})$ denotes the $(r,s)$-th element of $I_{i_1}(\lambda_{\ell,b})$. To elaborate on the parameters they are estimating, consider the simplified form
$$ \widetilde{S}_{rsuw}(\lambda) = \frac{1}{n-b+1} \sum_{t=1}^{n-b+1} \big(I_{t,rs}(\lambda) - f_{rs}(\lambda)\big)\big(I_{t,uw}(\lambda) - f_{uw}(\lambda)\big), $$
obtained after replacing estimated by true quantities and where $I_{t,rs}(\lambda)$ denotes the $(r,s)$-th element of the matrix $I_t(\lambda)$ defined in Step II.1. See also the proof of Lemma 3.9 (ii). It can then be shown that
$$ \widetilde{S}_{rsuw}(\lambda_{j,b}) = f_{rw}(\lambda_{j,b})\, f_{su}(-\lambda_{j,b}) + o_P(1) = f_{rw}(\lambda_{j,b})\, f_{us}(\lambda_{j,b}) + o_P(1). \tag{3.2} $$
To see this observe that
$$ E\big(\widetilde{S}_{rsuw}(\lambda_{j,b})\big) = \frac{1}{n-b+1} \sum_{t=1}^{n-b+1} E\Big( \big(I_{t,rs}(\lambda_{j,b}) - f_{rs}(\lambda_{j,b})\big)\overline{\big(I_{t,wu}(\lambda_{j,b}) - f_{wu}(\lambda_{j,b})\big)} \Big) = \frac{1}{n-b+1} \sum_{t=1}^{n-b+1} \mathrm{Cov}\big(I_{t,rs}(\lambda_{j,b}), I_{t,wu}(\lambda_{j,b})\big) + O(b^{-1}) = f_{rw}(\lambda_{j,b})\, f_{su}(-\lambda_{j,b}) + o(1), $$
where the second equality follows because $E(I_{t,rs}(\lambda_{j,b})) = f_{rs}(\lambda_{j,b}) + O(b^{-1})$ with the $O(b^{-1})$ term independent of $t$, and the last equality follows using Lemma 2.1. By Lemma 3.8 below, it can further be shown that $\mathrm{Var}\big(\widetilde{S}_{rsuw}(\lambda_{j,b})\big) \to 0$.

Remark 3.5.
Notice that if the limiting distribution of $V_n$ is real-valued, i.e. if $V \sim \mathcal{N}(0, \Sigma, \Sigma)$, then Step III.1 and Step III.2 simplify. In particular, in this case one can set the matrices $G^*_n$ and $G^+_n$ as real-valued and $J \times J$ dimensional according to $G^*_n = E^*(V^*_n (V^*_n)^\top)$ and $G^+_n = E^*(V^+_n (V^+_n)^\top)$. Consequently, the matrix $C^+_n$ can then also be set as real-valued and $J \times J$ dimensional according to $C^+_n = \Sigma^+_{1,n}$.

The MFHB bootstrap procedure proposed – appropriately modified – can also be applied to estimate the distribution of statistics which are functions of integrated periodograms such as, for instance, sample cross-correlations. To elaborate, suppose that
$$ g = (g_1, g_2, \ldots, g_L) : \mathbb{C}^J \to \mathbb{C}^L \tag{3.3} $$
is some (smooth) function and that the statistic of interest is given by $\sqrt{n}(R_n - R)$, where $R_n = g(M_n)$ and $R = g(M)$ for the vector of integrated periodograms $M_n$ and spectral means $M$ given in (2.3) and (2.4). Sample cross-correlations can be expressed in this way, as can be seen from Example 2.3. Our bootstrap procedure from the previous section can be adapted to approximate the distribution of $\sqrt{n}(R_n - R)$. We first impose some smoothness assumption on the function $g$. To do so, we can of course interpret $g$ as a function defined on $\mathbb{R}^{2J}$ via the identification
$$ g(z) = g\binom{\mathrm{Re}(z)}{\mathrm{Im}(z)} \quad \forall\, z \in \mathbb{C}^J. $$
Splitting up real and imaginary parts also for the values of $g$ leads to the accompanying function $\widetilde{g} : \mathbb{R}^{2J} \to \mathbb{R}^{2L}$ given by
$$ \widetilde{g}(x) := \binom{\mathrm{Re}(g(x))}{\mathrm{Im}(g(x))} \quad \forall\, x \in \mathbb{R}^{2J}. $$

Assumption 4.
The function $\widetilde{g} : \mathbb{R}^{2J} \to \mathbb{R}^{2L}$ is continuously differentiable (in the real sense) in some neighbourhood around $(\mathrm{Re}(M)^\top, \mathrm{Im}(M)^\top)^\top$ with Jacobian matrix $J_{\widetilde{g}}\big((\mathrm{Re}(M)^\top, \mathrm{Im}(M)^\top)^\top\big)$.

Note that – although considering complex-valued random variables and complex functions $g$ – we require only real differentiability of the accompanying function $\widetilde{g}$. This is a much less restrictive condition than assuming complex differentiability of $g$. We define the random vectors
$$ \widetilde{R}_n := \binom{\mathrm{Re}(R_n)}{\mathrm{Im}(R_n)}, \qquad \widetilde{R} := \binom{\mathrm{Re}(R)}{\mathrm{Im}(R)}. $$
Now, applying the delta method to (2.6), respectively (2.14), we get the limiting result
$$ \sqrt{n}\big(\widetilde{R}_n - \widetilde{R}\big) \stackrel{d}{\longrightarrow} J_{\widetilde{g}}\big((\mathrm{Re}(M)^\top, \mathrm{Im}(M)^\top)^\top\big)\, \mathcal{N}_{2J}(0, G). \tag{3.4} $$
From the continuous mapping theorem, applied for $h(x_1, x_2) := x_1 + i \cdot x_2$, $x_1, x_2 \in \mathbb{R}^L$, it follows that
$$ \sqrt{n}\big(R_n - R\big) \stackrel{d}{\longrightarrow} W \sim \mathcal{N}^c_L\big(0, \Sigma_R, \Gamma_R\big), \tag{3.5} $$
for suitable matrices $\Sigma_R$ and $\Gamma_R$ which can be obtained from (3.4) (the precise form of $\Sigma_R$ and $\Gamma_R$ is not needed for the ensuing bootstrap algorithm and its validity results). We propose the following MFHB approximation of the distribution of $\sqrt{n}(R_n - R)$.

Step $\widetilde{\mathrm{I}}$. Apply Steps I, II, III.1, and III.2 of the algorithm from Section 3.1 to obtain the matrix $G^\circ_n$ and the pseudo periodogram matrices $I^*(\lambda_{j,n})$, $j \in G(n)$.

Step $\widetilde{\mathrm{II}}$. Let
$$ M^*_n := \Big( M_{G(n)}\big(\varphi_j, I^*_{r_j,s_j}\big),\ j = 1, 2, \ldots, J \Big)^\top, \qquad \widehat{M}_n := \Big( M_{G(n)}\big(\varphi_j, \widehat{f}_{r_j,s_j}\big),\ j = 1, 2, \ldots, J \Big)^\top, $$
and
$$ \binom{\mathrm{Re}(W^*_n)}{\mathrm{Im}(W^*_n)} := \sqrt{n} \left[ \widetilde{g}\binom{\mathrm{Re}(M^*_n)}{\mathrm{Im}(M^*_n)} - \widetilde{g}\binom{\mathrm{Re}(\widehat{M}_n)}{\mathrm{Im}(\widehat{M}_n)} \right]. $$

Step $\widetilde{\mathrm{III}}$.
Let
$$ \widetilde{G}^*_n := E^*\left[ \binom{\mathrm{Re}(W^*_n)}{\mathrm{Im}(W^*_n)} \binom{\mathrm{Re}(W^*_n)}{\mathrm{Im}(W^*_n)}^\top \right] - E^*\left[ \binom{\mathrm{Re}(W^*_n)}{\mathrm{Im}(W^*_n)} \right] \cdot E^*\left[ \binom{\mathrm{Re}(W^*_n)}{\mathrm{Im}(W^*_n)} \right]^\top $$
and
$$ \widetilde{G}^\circ_n := J_{\widetilde{g}}\binom{\mathrm{Re}(\widehat{M}_n)}{\mathrm{Im}(\widehat{M}_n)} \times G^\circ_n \times J_{\widetilde{g}}\binom{\mathrm{Re}(\widehat{M}_n)}{\mathrm{Im}(\widehat{M}_n)}^\top. $$

Step $\widetilde{\mathrm{IV}}$. Calculate
$$ \binom{\mathrm{Re}(W^\circ_n)}{\mathrm{Im}(W^\circ_n)} := \big(\widetilde{G}^\circ_n\big)^{1/2} \big(\widetilde{G}^*_n\big)^{-1/2} \binom{\mathrm{Re}(W^*_n)}{\mathrm{Im}(W^*_n)}. $$

Step $\widetilde{\mathrm{V}}$. Approximate the distribution of $\sqrt{n}(R_n - R)$ by that of $W^\circ_n := \mathrm{Re}(W^\circ_n) + i\, \mathrm{Im}(W^\circ_n)$.

Remark 3.6.

(i) The algorithm above works on real and imaginary parts separately. This has the advantage that we do not have to impose complex differentiability assumptions on the function $g$; it suffices to assume real differentiability of $\widetilde{g}$.

(ii) In Step $\widetilde{\mathrm{II}}$ the bootstrap vector $(\mathrm{Re}(W^*_n)^\top, \mathrm{Im}(W^*_n)^\top)^\top$ imitates the structure of
$$ \sqrt{n}\big(\widetilde{R}_n - \widetilde{R}\big) = \sqrt{n} \left[ \widetilde{g}\binom{\mathrm{Re}(M_n)}{\mathrm{Im}(M_n)} - \widetilde{g}\binom{\mathrm{Re}(M)}{\mathrm{Im}(M)} \right]. $$
But as a comparison of (5.12) in the proof of Theorem 3.11 with (3.4) shows, the limiting distribution of $\sqrt{n}(\widetilde{R}_n - \widetilde{R})$ is only partially captured by $(\mathrm{Re}(W^*_n)^\top, \mathrm{Im}(W^*_n)^\top)^\top$: While the factor determined by the Jacobian matrix is established properly, the matrix $G$ entering the covariance structure of the normal distribution is not captured. This is corrected in Steps $\widetilde{\mathrm{III}}$ and $\widetilde{\mathrm{IV}}$, where the bootstrap random vectors are first standardized via multiplication with $\big(\widetilde{G}^*_n\big)^{-1/2}$. Then the proper variance is established by $\big(\widetilde{G}^\circ_n\big)^{1/2}$, which simultaneously approximates both $G$ and the Jacobian matrix $J_{\widetilde{g}}\big((\mathrm{Re}(M)^\top, \mathrm{Im}(M)^\top)^\top\big)$.

Example 3.7.
One very important statistic of interest that can be written as a smooth function of integrated periodograms is the sample cross-correlation. Examples 2.3 and 2.4 show how the function $g$ can be defined such that $\sqrt{n}(\widehat{\rho}_{rs}(h) - \rho_{rs}(h))$ takes the form $\sqrt{n}(R_n - R)$. The limiting variance of this expression in general takes the form indicated in (3.5), and for this particular example it is given by the expression $\tau$ from Example 2.4. Since the function $g$ fulfils Assumption 4, our bootstrap algorithm $W^\circ_n$ is asymptotically valid for this statistic (as is proven in Theorem 3.11), and successfully imitates the rather complicated expression $\tau$.

We need the following consistency assumption for the spectral density estimator $\widehat{f}$ used in Step I of the bootstrap procedure in Section 3.1.

Assumption 5.
The spectral density estimator $\widehat{f}$ is Hermitian, positive definite and satisfies
$$ \sup_{\lambda \in [-\pi, \pi]} \big\| \widehat{f}(\lambda) - f(\lambda) \big\|_F \stackrel{P}{\to} 0. $$
We refer to Wu and Zaffaroni (2018) for estimators of the spectral density matrix and for general classes of multivariate processes for which the above assumption is satisfied. We next establish the following two lemmas, the proofs of which are given in the Supplementary Material.
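For concreteness, a simple estimator in the spirit of Assumption 5 can be sketched as a Daniell-type smoother of the periodogram matrix: averaging Hermitian PSD rank-one matrices keeps the estimate Hermitian and PSD, and a small ridge (our choice, not from the paper) keeps it strictly positive definite. This is an illustrative sketch, not the estimator used in the paper.

```python
import numpy as np

def smoothed_periodogram(x, h):
    # x: (n, m) multivariate sample; h: bandwidth in (0, 1).
    # Returns f_hat of shape (n, m, m); f_hat[j] estimates f(2*pi*j/n).
    n, m = x.shape
    J = np.fft.fft(x - x.mean(axis=0), axis=0)          # finite Fourier transforms
    per = np.einsum('ja,jb->jab', J, J.conj()) / (2.0 * np.pi * n)  # periodogram matrices
    half = max(1, int(h * n / 2))                       # smoothing span from bandwidth h
    f_hat = np.empty_like(per)
    for j in range(n):
        idx = [(j + s) % n for s in range(-half, half + 1)]
        f_hat[j] = per[idx].mean(axis=0) + 1e-8 * np.eye(m)  # ridge keeps it PD
    return f_hat
```

For white noise with unit covariance the true spectral density is $f(\lambda) = \mathrm{Id}_m/(2\pi)$, which the estimator recovers approximately at interior frequencies.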
Lemma 3.8.
Let Assumption 1 be fulfilled. Denote by $I_{t;rs}(\lambda)$ the $(r,s)$-th element of the periodogram matrix $I_t(\lambda)$ based on the subsample $X(t), X(t+1), \ldots, X(t+b-1)$. Then,

(i) $\displaystyle \sum_{\ell \in G(b)} \big| \widetilde{f}_{rs}(\lambda_{\ell,b}) - E(I_{rs}(\lambda_{\ell,b})) \big| = O_P\big(\sqrt{b/(n-b+1)}\big)$,

(ii)
$$ \sum_{\ell_1, \ell_2 \in G(b)} \Big| \frac{1}{n-b+1} \sum_{t=1}^{n-b+1} I_{t; r_j s_j}(\lambda_{\ell_1,b})\, I_{t; r_k s_k}(\lambda_{\ell_2,b}) - E\big( I_{r_j s_j}(\lambda_{\ell_1,b})\, I_{r_k s_k}(\lambda_{\ell_2,b}) \big) \Big| = O_P\big(\sqrt{b/(n-b+1)}\big), $$
and
$$ \sum_{\ell \in G(b)} \Big| \frac{1}{n-b+1} \sum_{t=1}^{n-b+1} I_{t; r_j s_j}(\lambda_{\ell,b})\, \overline{I_{t; r_k s_k}(\lambda_{\ell,b})} - E\big( I_{r_j s_j}(\lambda_{\ell,b})\, \overline{I_{r_k s_k}(\lambda_{\ell,b})} \big) \Big| = O_P\big(\sqrt{b/(n-b+1)}\big), $$
where the $O_P$ terms hold uniformly in $r, s$, respectively $r_j, s_j, r_k, s_k$.

In order to formulate our theoretical results dealing with the asymptotic properties of the bootstrap approximations proposed, we state the following assumption which summarizes our requirements regarding the behavior of the subsampling parameter $b$.

Assumption 6. $b = b(n) \to \infty$ as $n \to \infty$ such that $b/(n-b+1) \to 0$.

The following lemma investigates the asymptotic properties of the covariance and relation matrices of the random vectors $V^*_n$ and $V^+_n$ generated in the MFHB procedure and derives the limiting distribution of $V^*_n$.

Lemma 3.9.
If Assumptions 1, 2, 3, 5 and 6 are satisfied, then,

(i) $\mathrm{Cov}^*(V^*_n) \stackrel{P}{\to} \Sigma_1$ and $E^*\big(V^*_n (V^*_n)^\top\big) \stackrel{P}{\to} \Gamma_1$.

(ii)
$$ G^+_n - C^+_n \stackrel{P}{\to} \frac{1}{2} \begin{pmatrix} \mathrm{Re}(\Sigma_2) + \mathrm{Re}(\Gamma_2) & -\mathrm{Im}(\Sigma_2) + \mathrm{Im}(\Gamma_2) \\ \mathrm{Im}(\Sigma_2) + \mathrm{Im}(\Gamma_2) & \mathrm{Re}(\Sigma_2) - \mathrm{Re}(\Gamma_2) \end{pmatrix}. $$

(iii) $V^*_n \stackrel{d}{\to} \mathcal{N}^c_J(0, \Sigma_1, \Gamma_1)$ in $P$-probability.

Lemma 3.9 leads to the following result which establishes consistency of the MFHB procedure in estimating the distribution and the second-order moments of the random vector $V_n$.

Theorem 3.10.
If Assumptions 1, 2, 3, 5 and 6 are satisfied, then,

(i) $\mathrm{Cov}^*(V^\circ_n) \stackrel{P}{\to} \Sigma$ and $E^*\big(V^\circ_n (V^\circ_n)^\top\big) \stackrel{P}{\to} \Gamma$.

(ii) $V^\circ_n \stackrel{d}{\to} V$, in $P$-probability.

The next result establishes validity of the MFHB procedure for smooth functions of integrated periodograms.
Theorem 3.11.
Let Assumptions 1 to 6 be satisfied. Then, as $n \to \infty$, $W^\circ_n \stackrel{d}{\longrightarrow} W$ in $P$-probability, where $W$ is the limit in distribution of $\sqrt{n}(R_n - R)$ given in (3.5).
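The standardize-then-recolor idea behind Steps $\widetilde{\mathrm{III}}$ and $\widetilde{\mathrm{IV}}$ can be sketched as follows, assuming `numpy` and Monte Carlo draws of $W^*_n$. The function name and the use of an empirical covariance for $\widetilde{G}^*_n$ are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rescale_function_statistic(W_star_draws, Jg, G_circ):
    # W_star_draws: (B, 2L) stacked real/imaginary bootstrap draws of W*_n.
    # Jg: (2L, 2J) Jacobian of g~ at the estimated spectral means.
    # G_circ: (2J, 2J) matrix G°_n from Step III of the first algorithm.
    def sym_power(A, p):
        # symmetric PSD matrix power via eigendecomposition
        w, Q = np.linalg.eigh((A + A.T) / 2.0)
        return (Q * np.clip(w, 1e-12, None) ** p) @ Q.T
    centered = W_star_draws - W_star_draws.mean(axis=0)
    G_star = centered.T @ centered / W_star_draws.shape[0]   # empirical G~*_n
    G_target = Jg @ G_circ @ Jg.T                            # G~°_n
    A = sym_power(G_target, 0.5) @ sym_power(G_star, -0.5)   # Step IV map
    return W_star_draws @ A.T
```

After the transformation the empirical covariance of the draws equals $\widetilde{G}^\circ_n$ exactly, since $(\widetilde{G}^*_n)^{-1/2}\, \widetilde{G}^*_n\, (\widetilde{G}^*_n)^{-1/2} = \mathrm{Id}$.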
4. Simulations
The practical implementation of the MFHB procedure requires the choice of two parameters. The first is the spectral density estimator $\widehat{f}$ and the second the subsampling parameter $b$. Assumption 5 and Assumption 6 state our general requirements on these parameters, focusing on the asymptotic properties they have to satisfy in order for the proposed bootstrap procedure to be consistent. Certainly, the choice of these parameters for a given sample size $n$ is an important avenue of future research. In the following we discuss some rather practical rules on how to choose these parameters.

Regarding the spectral density estimator $\widehat{f}$, a variety of estimators exists which can be used in our procedure; see Brillinger (1981). As a simple approach, we use in the following kernel estimators obtained by locally averaging the periodogram matrix over frequencies close to the frequency of interest, i.e., $\widehat{f}(\lambda) = (nh)^{-1} \sum_j K\big((\lambda - \lambda_{j,n})/h\big)\, I_n(\lambda_{j,n})$. Here $K$ denotes the kernel function which determines the weights assigned to the ordinates of the periodogram matrix, while $h$ is the bandwidth that controls the number of periodogram ordinates effectively taken into account in order to obtain the kernel estimator $\widehat{f}$. To select the parameter $h$ in practice, cross-validation type approaches have been proposed and investigated in the literature which can also be applied in our setting; see Robinson (1991) for details.

For the subsampling parameter $b$, Assumption 6 solely states the required conditions on the rate at which this parameter has to increase to infinity with respect to the sample size $n$ in order to ensure consistency of the MFHB procedure. Our simulation experience with the choice of this parameter shows that the results obtained are not very sensitive with respect to the choice of $b$, provided that this parameter is not chosen too small. This motivates the suggestion of the following practical rule for selecting $b$.
Select this parameter as the smallestinteger which is larger or equal to 3 · n . . This rule satisfies the requirements ofAssumption 6 and at the same time delivers a value of b which is large enoughfor the MFHB procedure to perform well in practice.Notice that the numerical results presented in the next section are reportedfor different combinations of bandwidth and block size parameters, h and b .On the one hand this avoids a further increase of the computational burdencaused by a cross-validation type choice of the bandwidth h . On the other handit allows us to investigate the sensitivity of the bootstrap estimates with respectto different choices of the parameters involved. In this section we investigate the finite sample performance of the MFHB pro-cedure and compare it with that of the time domain moving block bootstrap(MBB). Note that, in view of our discussion in the Introduction, popular time . Meyer and E. Paparoditis/Multivariate Frequency Domain Bootstrap domain bootstrap methods other than the MBB (and its variations) can not beconsidered as competitors for our MFHB since these procedures are asymptoti-cally non-valid for a large class of statistics in the context of multivariate linearand non-linear time series. We consider time series of length n = 100 stemmingfrom the following two bivariate processes considered in Tsay (2014). The firstis a VAR(1) process driven by i.i.d. innovations, i.e.,Model I: X ( t ) = ΦX ( t −
1) + e ( t ) , Φ = (cid:18) . . − . . (cid:19) , where e t ∼ N ( , S e ) and S e = ( σ i,j ) i,i =1 , , with σ , = 2 . σ , = 0 . σ , = 1. The second model is a bivariate VARMA(2,1) process the innovationsof which follow a bivariate GARCH-type process, that is,Model II: X ( t ) = Φ X ( t −
1) + Φ X ( t −
2) + Θu ( t −
1) + u ( t ) , with parameter matrices Φ = (cid:18) . − . − .
116 1 . (cid:19) , Φ = (cid:18) − .
643 0 . . − . (cid:19) and Θ = (cid:18) − . − .
801 0 (cid:19) . Furthermore, the innovations u ( t ) are generated as u ( t ) = S / t e ( t ), where the e ( t )’s are i.i.d. N ( , Id ) distributed, and the volatility matrix S t evolves ac-cording to a BEKK(1,1) model, i.e. S t = A A ⊤ + A u ( t − u ( t − ⊤ A ⊤ + B S t − B ⊤ , where A = (cid:18) .
01 00 0 . (cid:19) , A = (cid:18) .
15 0 . .
06 0 . (cid:19) and B = (cid:18) . . (cid:19) . Observe that for the VAR(1) model with Gaussian innovations the fourth-order cumulant spectral densities f rsvw equal zero, so the expressions of thecovariance and relation matrices Σ and Γ simplify. This is not the case forthe VARMA(2,1) process which is nonlinear due to the BEKK(1,1) generatedinnovations. The parameter matrices of this BEKK(1,1) model are very close tothose of the same model fitted to the IBM stock and S&P composite index inTsay (2014), p. 418, Table 7.3. See also Francq and Zako¨ıan (2016), Section 6,for a similar parametrization.We consider the problem of estimating the standard deviation of the cross-correlation estimates b ρ ( h ) for the values h = − , , +1. Recall that the esti-mators considered also can be written as functions of integrated periodograms;see Example 2.3. We have generated 10,000 replications of both models in or-der to estimate the exact standard deviation of the sample cross-correlationsconsidered. As already mentioned, the spectral density estimator b f used in theMFHB procedure is a kernel estimator obtained via smoothing the periodogrammatrix with bandwidth h and using the Bartlett-Priestley kernel; see Priestley(1981). Furthermore, and in order to see the sensitivity of the bootstrap methods . Meyer and E. Paparoditis/Multivariate Frequency Domain Bootstrap compared with respect to the choice of the bootstrap parameters, the MFHBprocedure has been applied using different choices of the bandwidth h and ofthe subsampling parameter b . The same values of b have also been used as blocksizes in the MBB procedure. We also report results for b = 12 which is the valueof the subsampling parameter selected according to the rule proposed in Section4.1.Table 1 presents the results for both models considered. The results reportedin this table are based on R = 500 repetitions where B = 300 bootstrap repli-cations have been applied for each repetition. 
As this table shows, the MFHB procedure performs quite well for both models considered and outperforms the MBB. In particular, when both bootstrap procedures are compared for the same subsampling parameter, respectively block size $b$, the mean square errors of the MFHB are, in almost all cases considered and independently of the choice of $h$, lower than those of the MBB procedure. Furthermore, the MFHB estimates seem to be less sensitive with respect to the choice of the subsampling parameter $b$ than the MBB procedure is with respect to the choice of the block size $b$.
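For reference, the MBB competitor used above is straightforward to sketch: resample overlapping blocks of the bivariate series, recompute the sample cross-correlation on each pseudo series, and take the standard deviation of the replicates. The helper names are ours; the block length, number of replicates and toy data below are illustrative and not the simulation design of this section.

```python
import numpy as np

def cross_corr(x, h):
    # Sample cross-correlation rho_hat_{12}(h) of a bivariate series x of shape (n, 2).
    n = x.shape[0]
    xc = x - x.mean(axis=0)
    if h >= 0:
        gamma = (xc[:n - h, 0] * xc[h:, 1]).sum() / n
    else:
        gamma = (xc[-h:, 0] * xc[:n + h, 1]).sum() / n
    return gamma / np.sqrt(xc[:, 0].var() * xc[:, 1].var())

def mbb_std_cross_corr(x, block_len, lag, n_boot, rng):
    # Moving block bootstrap: glue together randomly chosen overlapping blocks
    # of rows of x until the original length is reached, then recompute the
    # statistic; the spread of the replicates estimates its standard deviation.
    n = x.shape[0]
    n_blocks = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for i in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        xb = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        stats[i] = cross_corr(xb, lag)
    return stats.std()
```

For bivariate white noise of length $n$ the standard deviation of $\widehat{\rho}_{12}(0)$ is roughly $1/\sqrt{n}$, which the block bootstrap recovers up to sampling error.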
5. Appendix: Proofs
Proof of Lemma 3.9 (i): The $(j,k)$-th entry of the covariance matrix $\mathrm{Cov}^*(V^*_n)$ is given by
$$ \mathrm{Cov}^*\big(V^*_{n,j}, V^*_{n,k}\big) = \mathrm{Cov}^*\left( \frac{2\pi}{\sqrt{n}} \sum_{l_1 \in G(n)} \varphi_j(\lambda_{l_1,n}) \big(I^*_{r_j s_j}(\lambda_{l_1,n}) - \widehat{f}_{r_j s_j}(\lambda_{l_1,n})\big),\ \frac{2\pi}{\sqrt{n}} \sum_{l_2 \in G(n)} \varphi_k(\lambda_{l_2,n}) \big(I^*_{r_k s_k}(\lambda_{l_2,n}) - \widehat{f}_{r_k s_k}(\lambda_{l_2,n})\big) \right), $$
which equals
$$ \frac{4\pi^2}{n} \sum_{l_1, l_2 \in G(n)} \varphi_j(\lambda_{l_1,n})\, \varphi_k(\lambda_{l_2,n})\, \mathrm{Cov}^*\big(I^*_{r_j s_j}(\lambda_{l_1,n}), I^*_{r_k s_k}(\lambda_{l_2,n})\big). \tag{5.1} $$
The last expression can be decomposed, using $\lambda_{-l,n} = -\lambda_{l,n}$ and $I^*_{rs}(-\lambda_{l,n}) = I^*_{sr}(\lambda_{l,n})$, into
$$ \frac{4\pi^2}{n} \sum_{l_1, l_2 = 1}^{N} \Big\{ \varphi_j(\lambda_{l_1,n})\, \varphi_k(\lambda_{l_2,n})\, \mathrm{Cov}^*\big(I^*_{r_j s_j}(\lambda_{l_1,n}), I^*_{r_k s_k}(\lambda_{l_2,n})\big) + \varphi_j(-\lambda_{l_1,n})\, \varphi_k(-\lambda_{l_2,n})\, \mathrm{Cov}^*\big(I^*_{s_j r_j}(\lambda_{l_1,n}), I^*_{s_k r_k}(\lambda_{l_2,n})\big) $$
$$ + \varphi_j(-\lambda_{l_1,n})\, \varphi_k(\lambda_{l_2,n})\, \mathrm{Cov}^*\big(I^*_{s_j r_j}(\lambda_{l_1,n}), I^*_{r_k s_k}(\lambda_{l_2,n})\big) + \varphi_j(\lambda_{l_1,n})\, \varphi_k(-\lambda_{l_2,n})\, \mathrm{Cov}^*\big(I^*_{r_j s_j}(\lambda_{l_1,n}), I^*_{s_k r_k}(\lambda_{l_2,n})\big) \Big\}. \tag{5.2} $$
For the first of the four similar summands in this expression we can calculate the following, where we use that $I^*(\lambda_{l_1,n})$ and $I^*(\lambda_{l_2,n})$ are independent for $l_1 \neq l_2$ and $1 \leq l_1, l_2 \leq N$, and that $I^*(\lambda_{l,n})$ has a complex Wishart $W^c_m(1, \widehat{f}(\lambda_{l,n}))$ distribution (the covariance structure of which can be obtained from, e.g., Brillinger (1981), Section 4.2):
$$ \frac{4\pi^2}{n} \sum_{l_1, l_2 = 1}^{N} \varphi_j(\lambda_{l_1,n})\, \varphi_k(\lambda_{l_2,n})\, \mathrm{Cov}^*\big(I^*_{r_j s_j}(\lambda_{l_1,n}), I^*_{r_k s_k}(\lambda_{l_2,n})\big) = \frac{4\pi^2}{n} \sum_{l=1}^{N} \varphi_j(\lambda_{l,n})\, \varphi_k(\lambda_{l,n})\, \mathrm{Cov}^*\big(I^*_{r_j s_j}(\lambda_{l,n}), I^*_{r_k s_k}(\lambda_{l,n})\big) $$
$$ = \frac{4\pi^2}{n} \sum_{l=1}^{N} \varphi_j(\lambda_{l,n})\, \varphi_k(\lambda_{l,n})\, \widehat{f}_{r_j r_k}(\lambda_{l,n})\, \widehat{f}_{s_j s_k}(-\lambda_{l,n}) = \frac{4\pi^2}{n} \sum_{l=1}^{N} \varphi_j(\lambda_{l,n})\, \varphi_k(\lambda_{l,n})\, f_{r_j r_k}(\lambda_{l,n})\, f_{s_j s_k}(-\lambda_{l,n}) $$
$$ + \frac{4\pi^2}{n} \sum_{l=1}^{N} \varphi_j(\lambda_{l,n})\, \varphi_k(\lambda_{l,n}) \big( \widehat{f}_{r_j r_k}(\lambda_{l,n})\, \widehat{f}_{s_j s_k}(-\lambda_{l,n}) - f_{r_j r_k}(\lambda_{l,n})\, f_{s_j s_k}(-\lambda_{l,n}) \big). $$
The second summand on the last right-hand side vanishes asymptotically due to Assumption 5 and $N = O(n)$. Since the first summand is a Riemann sum, the last right-hand side converges in probability to
$$ 2\pi \int_0^{\pi} \varphi_j(\lambda)\, \varphi_k(\lambda)\, f_{r_j r_k}(\lambda)\, f_{s_j s_k}(-\lambda)\, d\lambda. $$
With analogous calculations the other three summands in (5.2) yield
$$ 2\pi \int_0^{\pi} \varphi_j(-\lambda)\, \varphi_k(-\lambda)\, f_{s_j s_k}(\lambda)\, f_{r_j r_k}(-\lambda)\, d\lambda + 2\pi \int_0^{\pi} \varphi_j(-\lambda)\, \varphi_k(\lambda)\, f_{s_j r_k}(\lambda)\, f_{r_j s_k}(-\lambda)\, d\lambda + 2\pi \int_0^{\pi} \varphi_j(\lambda)\, \varphi_k(-\lambda)\, f_{r_j s_k}(\lambda)\, f_{s_j r_k}(-\lambda)\, d\lambda. $$
Substituting $\lambda$ with $-\lambda$ in the first and second of these three terms shows that the limit in probability of (5.2) is given by $\Sigma_{1,jk}$.

As for the relation matrix, observe that the $(j,k)$-th entry of $E^*(V^*_n V^{*\top}_n)$ is given by
$$ \mathrm{Cov}^*\big(V^*_{n,j}, \overline{V^*_{n,k}}\big) = \frac{4\pi^2}{n} \sum_{l_1, l_2 \in G(n)} \varphi_j(\lambda_{l_1,n})\, \varphi_k(-\lambda_{l_2,n})\, \mathrm{Cov}^*\big(I^*_{r_j s_j}(\lambda_{l_1,n}), I^*_{r_k s_k}(\lambda_{l_2,n})\big), $$
which is (5.1) if one replaces $\varphi_k(\cdot)$ with $\varphi_k(-\cdot)$.
Therefore, one can follow along the lines of the calculation for $\Sigma_{1,jk}$ above to see that the $(j,k)$-th entry of $E^*(V^*_n V^{*\top}_n)$ converges in probability to $\Gamma_{1,jk}$. $\Box$

Proof of Lemma 3.9 (ii): We first establish the uniform (over the Fourier frequencies) consistency of $\widetilde{f}^{-1/2}$ and $\widehat{f}^{1/2}$ as estimators of $f^{-1/2}$ and $f^{1/2}$, respectively. We first show that
$$ \max_{\ell \in G(b)} \big\| \widetilde{f}^{-1}(\lambda_{\ell,b}) - f^{-1}(\lambda_{\ell,b}) \big\|_F \stackrel{P}{\to} 0. \tag{5.3} $$
For this notice first that
$$ \max_{\ell \in G(b)} \big\| \widetilde{f}(\lambda_{\ell,b}) - f(\lambda_{\ell,b}) \big\|_F \stackrel{P}{\to} 0. \tag{5.4} $$
This follows since $E(I_{rs}(\lambda_{\ell,b})) = f_{rs}(\lambda_{\ell,b}) + O(b^{-1})$ uniformly in $r, s$, and
$$ \max_{\ell \in G(b)} \big\| \widetilde{f}(\lambda_{\ell,b}) - f(\lambda_{\ell,b}) \big\|_F \leq \max_{\ell \in G(b)} \big\| \widetilde{f}(\lambda_{\ell,b}) - E\widetilde{f}(\lambda_{\ell,b}) \big\|_F + \max_{\ell \in G(b)} \big\| E\widetilde{f}(\lambda_{\ell,b}) - f(\lambda_{\ell,b}) \big\|_F $$
$$ = \max_{\ell \in G(b)} \Big\{ \sum_{r=1}^{m} \sum_{s=1}^{m} \big| \widetilde{f}_{rs}(\lambda_{\ell,b}) - E\widetilde{f}_{rs}(\lambda_{\ell,b}) \big|^2 \Big\}^{1/2} + \max_{\ell \in G(b)} \Big\{ \sum_{r=1}^{m} \sum_{s=1}^{m} \big| E(I_{rs}(\lambda_{\ell,b})) - f_{rs}(\lambda_{\ell,b}) \big|^2 \Big\}^{1/2} $$
$$ \leq \sum_{r=1}^{m} \sum_{s=1}^{m} \sum_{\ell \in G(b)} \big| \widetilde{f}_{rs}(\lambda_{\ell,b}) - E\widetilde{f}_{rs}(\lambda_{\ell,b}) \big| + O(b^{-1}) = O_P\big(\sqrt{b/(n-b+1)}\big) + O(b^{-1}) \to 0, $$
where the last convergence holds true because of Lemma 3.8 (i) and Assumption 6.
Now to see (5.3) notice that $\max_{\ell \in G(b)} \| f^{-1}(\lambda_{\ell,b}) \|_F = O(1)$ and that
$$ \max_{\ell \in G(b)} \big\| \widetilde{f}^{-1}(\lambda_{\ell,b}) - f^{-1}(\lambda_{\ell,b}) \big\|_F = \max_{\ell \in G(b)} \big\| \widetilde{f}^{-1}(\lambda_{\ell,b}) \big( f(\lambda_{\ell,b}) - \widetilde{f}(\lambda_{\ell,b}) \big) f^{-1}(\lambda_{\ell,b}) \big\|_F $$
$$ \leq \max_{\ell \in G(b)} \big\| \widetilde{f}^{-1}(\lambda_{\ell,b}) \big\|_F\, \max_{\ell \in G(b)} \big\| \widetilde{f}(\lambda_{\ell,b}) - f(\lambda_{\ell,b}) \big\|_F\, \max_{\ell \in G(b)} \big\| f^{-1}(\lambda_{\ell,b}) \big\|_F $$
$$ \leq \Big( \max_{\ell \in G(b)} \big\| f^{-1}(\lambda_{\ell,b}) \big\|_F + \max_{\ell \in G(b)} \big\| \widetilde{f}^{-1}(\lambda_{\ell,b}) - f^{-1}(\lambda_{\ell,b}) \big\|_F \Big) \max_{\ell \in G(b)} \big\| \widetilde{f}(\lambda_{\ell,b}) - f(\lambda_{\ell,b}) \big\|_F\, \max_{\ell \in G(b)} \big\| f^{-1}(\lambda_{\ell,b}) \big\|_F. \tag{5.5} $$
By (5.4) and for $n$ large enough such that
$$ \max_{\ell \in G(b)} \big\| \widetilde{f}(\lambda_{\ell,b}) - f(\lambda_{\ell,b}) \big\|_F\, \max_{\ell \in G(b)} \big\| f^{-1}(\lambda_{\ell,b}) \big\|_F < 1, $$
expression (5.5) leads to
$$ \max_{\ell \in G(b)} \big\| \widetilde{f}^{-1}(\lambda_{\ell,b}) - f^{-1}(\lambda_{\ell,b}) \big\|_F \leq \frac{ \max_{\ell \in G(b)} \| f^{-1}(\lambda_{\ell,b}) \|_F^2\, \max_{\ell \in G(b)} \| \widetilde{f}(\lambda_{\ell,b}) - f(\lambda_{\ell,b}) \|_F }{ 1 - \max_{\ell \in G(b)} \| f^{-1}(\lambda_{\ell,b}) \|_F\, \max_{\ell \in G(b)} \| \widetilde{f}(\lambda_{\ell,b}) - f(\lambda_{\ell,b}) \|_F } \stackrel{P}{\to} 0. $$
Assumption 5 together with (5.3) implies that
$$ \max_{\ell \in G(b)} \big\| \widehat{f}^{1/2}(\lambda_{\ell,b}) - f^{1/2}(\lambda_{\ell,b}) \big\|_F \stackrel{P}{\to} 0 \tag{5.6} $$
and
$$ \max_{\ell \in G(b)} \big\| \widetilde{f}^{-1/2}(\lambda_{\ell,b}) - f^{-1/2}(\lambda_{\ell,b}) \big\|_F \stackrel{P}{\to} 0. \tag{5.7} $$
To see (5.6) notice that since $\widehat{f}$ is positive definite, we get by Assumption 2 and equation (1.3) in Schmitt (1992) that
$$ \max_{\ell \in G(b)} \big\| \widehat{f}^{1/2}(\lambda_{\ell,b}) - f^{1/2}(\lambda_{\ell,b}) \big\|_F \leq \frac{1}{\sqrt{\delta}} \max_{\ell \in G(b)} \big\| \widehat{f}(\lambda_{\ell,b}) - f(\lambda_{\ell,b}) \big\|_F. $$
For (5.7) we get by the same arguments as above, and since $A^{-1/2} = (A^{-1})^{1/2}$, the bound
$$ \max_{\ell \in G(b)} \big\| \widetilde{f}^{-1/2}(\lambda_{\ell,b}) - f^{-1/2}(\lambda_{\ell,b}) \big\|_F \leq \sqrt{\sigma_{\max}}\, \max_{\ell \in G(b)} \big\| \widetilde{f}^{-1}(\lambda_{\ell,b}) - f^{-1}(\lambda_{\ell,b}) \big\|_F, $$
where $\sigma_{\max} = \max_{\lambda \in [0,\pi]} \max \sigma(f(\lambda))$. Equation (5.7) then follows by (5.3). Assertions (5.6) and (5.7) imply that
$$ \max_{\ell \in G(b)} \big\| \widehat{f}^{1/2}(\lambda_{\ell,b})\, \widetilde{f}^{-1/2}(\lambda_{\ell,b}) - \mathrm{Id}_m \big\|_F \stackrel{P}{\to} 0 \tag{5.8} $$
and
$$ \max_{\ell \in G(b)} \big\| \widetilde{f}^{-1/2}(\lambda_{\ell,b})\, \widehat{f}^{1/2}(\lambda_{\ell,b}) - \mathrm{Id}_m \big\|_F \stackrel{P}{\to} 0. $$
(5.9)Using (5.8) and (5.9) it follows by straightforward calculations thatCov ∗ (cid:0) V + n,j , V + n,k (cid:1) = Cov ∗ (cid:0) e V n,j , e V n,k (cid:1) + o P (1) , where e V n,j = √ kb πb X ℓ ∈G ( b ) ϕ j ( λ ℓ,b ) (cid:0)e I r j s j ( λ ℓ,b ) − e f r j s j ( λ ℓ,b ) (cid:1) and e I rs ( λ ℓ,b ) denotes the (r,s)-th element of the matrix k − P km =1 I i m ( λ ℓ,b ). Let I i ,rs ( λ ℓ,b ) denote the ( r, s )-th element of I i ( λ ℓ,b ). We then haveCov ∗ ( e V n,j , e V n,k ) = 4 π b X ℓ ,ℓ ∈G ( b ) ϕ j ( λ ℓ ,b ) ϕ k ( λ ℓ ,b )Cov ∗ ( I i ,r j s j ( λ ℓ ,b ) , I i ,r k s k ( λ ℓ ,b ))= 4 π b X ℓ ∈G ( b ) ϕ j ( λ ℓ,b ) ϕ k ( λ ℓ,b )Cov ∗ ( I i ,r j s j ( λ ℓ,b ) , I i ,r k s k ( λ ℓ,b )) . Meyer and E. Paparoditis/Multivariate Frequency Domain Bootstrap + 4 π b X ℓ ∈G ( b ) ϕ j ( λ ℓ,b ) ϕ k ( − λ ℓ,b )Cov ∗ ( I i ,r j s j ( λ ℓ,b ) , I i ,r k s k ( − λ ℓ,b ))+ 4 π b X | ℓ |6 = | ℓ |∈G ( b ) ϕ j ( λ ℓ ,b ) ϕ k ( λ ℓ ,b )Cov ∗ ( I i ,r j s j ( λ ℓ ,b ) , I i ,r k s k ( λ ℓ ,b )) P → Σ jk + Σ jk , usingCov ∗ ( I i ,r j s j ( λ ℓ ,b ) , I i ,r k s k ( λ ℓ ,b ))= 1 n − b + 1 n − b +1 X t =1 (cid:0) I t,r j s j ( λ ℓ ,b ) − e f r j s j ( λ ℓ ,b ) (cid:1)(cid:0) I t,r k s k ( λ ℓ ,b ) − e f r k s k ( λ ℓ ,b ) (cid:1) , and Lemma 3.8. By the same arguments it follows that E ∗ (cid:0) V + n,j V + n,k (cid:1) = E ∗ ( e V n,j e V n,k (cid:1) + o P (1) , and that E ∗ ( e V n,j e V n,k (cid:1) P → Γ jk + Γ jk . The above results show that G + n P → G . To conclude the proof we have to showthat Σ +1 ,n P → Σ and that Γ +1 ,n P → Γ . Using (5.8) and (5.9) again, it followsthat σ + jk = 4 π b X ℓ ∈G ( b ) ϕ j ( λ ℓ,b ) ϕ k ( λ ℓ,b )˚ S r j s j s k r k ( λ ℓ,b )+ 4 π b X ℓ ∈G ( b ) ϕ j ( λ ℓ,b ) ϕ k ( − λ ℓ,b )˚ S r j s j r k s k ( λ ℓ,b ) + o P (1) . 
and
$$c^+_{jk} = \frac{4\pi^2}{b}\sum_{\ell\in\mathcal G(b)}\varphi_j(\lambda_{\ell,b})\varphi_k(-\lambda_{\ell,b})\,\mathring S_{r_js_js_kr_k}(\lambda_{\ell,b}) + \frac{4\pi^2}{b}\sum_{\ell\in\mathcal G(b)}\varphi_j(\lambda_{\ell,b})\varphi_k(\lambda_{\ell,b})\,\mathring S_{r_js_jr_ks_k}(\lambda_{\ell,b}) + o_P(1),$$
where
$$\mathring S_{rsuw}(\lambda) = \frac{1}{n-b+1}\sum_{t=1}^{n-b+1}\big(I_{t,rs}(\lambda) - \widetilde f_{rs}(\lambda)\big)\big(I_{t,uw}(\lambda) - \widetilde f_{uw}(\lambda)\big).$$
From Lemma 3.8 we then conclude that $\sigma^+_{jk} \overset{P}{\to} \Sigma_{jk}$ and that $c^+_{jk} \overset{P}{\to} \Gamma_{jk}$. $\Box$

Proof of Lemma 3.9 (iii): We use the notation $D^*_{rs}(\lambda) = I^*_{rs}(\lambda) - \widehat f_{rs}(\lambda)$. To show that the $J$-dimensional complex vector
$$V^*_n = \Big(\frac{2\pi}{\sqrt{n}}\sum_{l\in\mathcal G(n)}\varphi_j(\lambda_{l,n})\,D^*_{r_js_j}(\lambda_{l,n}),\ j = 1,2,\ldots,J\Big)^\top$$
converges weakly to the complex normal variable $\mathcal N^c_J(0,\Sigma,\Gamma)$, it suffices to show that the $2J$-dimensional real vector
$$V^*_{n,T} = \frac{2\pi}{\sqrt{n}}\sum_{l\in\mathcal G(n)}\Big(\mathrm{Re}\{\varphi_1(\lambda_{l,n})D^*_{r_1s_1}(\lambda_{l,n})\},\ \ldots,\ \mathrm{Re}\{\varphi_J(\lambda_{l,n})D^*_{r_Js_J}(\lambda_{l,n})\},\ \mathrm{Im}\{\varphi_1(\lambda_{l,n})D^*_{r_1s_1}(\lambda_{l,n})\},\ \ldots,\ \mathrm{Im}\{\varphi_J(\lambda_{l,n})D^*_{r_Js_J}(\lambda_{l,n})\}\Big)^\top$$
converges to the $2J$-dimensional real normal distribution $\mathcal N_{2J}(0,\Sigma_V)$, where $\Sigma_V$ denotes the limit of $E^*(V^*_{n,T}V^{*\top}_{n,T})$, which in view of assertion (i) is well defined. Notice that the covariance matrix $\Sigma$ and the relation matrix $\Gamma$ together specify $\Sigma_V$ and vice versa. To simplify the presentation we give only the proof for the case where the functions $\varphi_j$ are real-valued. The case of complex-valued $\varphi_j$ can be proved along the same lines but with a much more complicated notation. Write $V^*_{n,T} = \sum_{l=1}^N Z^*_l$, where the $j$-th component of $Z^*_l$ is given, for $j = 1,2,\ldots,J$, by
$$\frac{2\pi}{\sqrt{n}}\Big(\mathrm{Re}\{\varphi_j(\lambda_{l,n})D^*_{r_js_j}(\lambda_{l,n})\} + \mathrm{Re}\{\varphi_j(-\lambda_{l,n})D^*_{r_js_j}(-\lambda_{l,n})\}\Big)$$
and, for $j = J+1,\ldots,2J$, by
$$\frac{2\pi}{\sqrt{n}}\Big(\mathrm{Im}\{\varphi_{j-J}(\lambda_{l,n})D^*_{r_{j-J}s_{j-J}}(\lambda_{l,n})\} + \mathrm{Im}\{\varphi_{j-J}(-\lambda_{l,n})D^*_{r_{j-J}s_{j-J}}(-\lambda_{l,n})\}\Big).$$
Observe that the random vectors $Z^*_l$ are independent.
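The remark above, that the covariance matrix $\Sigma$ and the relation matrix $\Gamma$ together specify $\Sigma_V$ and vice versa, can be made concrete: for a mean-zero complex vector $V$ with $\Sigma = E(VV^H)$ and $\Gamma = E(VV^\top)$, the second moments of the stacked real vector are $E(\mathrm{Re}\,V\,\mathrm{Re}\,V^\top) = \tfrac12\mathrm{Re}(\Sigma+\Gamma)$, $E(\mathrm{Im}\,V\,\mathrm{Im}\,V^\top) = \tfrac12\mathrm{Re}(\Sigma-\Gamma)$ and $E(\mathrm{Re}\,V\,\mathrm{Im}\,V^\top) = \tfrac12\mathrm{Im}(\Gamma-\Sigma)$. A small numerical sketch with a toy matrix $A$ (hypothetical, unrelated to the quantities of the paper): taking $V = AX$ with $X$ real standard normal gives $\Sigma = AA^H$, $\Gamma = AA^\top$, $\mathrm{Re}\,V = \mathrm{Re}(A)X$ and $\mathrm{Im}\,V = \mathrm{Im}(A)X$, so the three identities can be checked exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy construction: V = A X with X real standard normal and A complex.
# Then Sigma = E[V V^H] = A A^H and Gamma = E[V V^T] = A A^T.
J = 3
A = rng.standard_normal((J, J)) + 1j * rng.standard_normal((J, J))
Sigma = A @ A.conj().T
Gamma = A @ A.T

# Second moments of (Re V, Im V) expressed through Sigma and Gamma:
cov_ReRe = 0.5 * (Sigma + Gamma).real   # should equal Re(A) Re(A)^T
cov_ImIm = 0.5 * (Sigma - Gamma).real   # should equal Im(A) Im(A)^T
cov_ReIm = 0.5 * (Gamma - Sigma).imag   # should equal Re(A) Im(A)^T

assert np.allclose(cov_ReRe, A.real @ A.real.T)
assert np.allclose(cov_ImIm, A.imag @ A.imag.T)
assert np.allclose(cov_ReIm, A.real @ A.imag.T)
```

Conversely, the three real blocks recover $\Sigma$ and $\Gamma$, which is the sense in which $\Sigma_V$ and the pair $(\Sigma,\Gamma)$ carry the same information.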
Thus, in view of assertion (i), to establish the desired weak convergence it suffices to show that Lyapunov's condition is satisfied, that is, that $\sum_{l=1}^N E^*\|Z^*_l\|^{2+\delta} \to 0$ for some $\delta > 0$. Choose $\delta = 2$. Using $(a+b)^2 \le 2a^2 + 2b^2$ and
$$\varphi^2(\lambda)\,\mathrm{Re}^2\{D^*_{rs}(\lambda)\} + \varphi^2(\lambda)\,\mathrm{Im}^2\{D^*_{rs}(\lambda)\} = \varphi^2(\lambda)\,|D^*_{rs}(\lambda)|^2,$$
we have
\begin{align*}
\sum_{l=1}^N E^*\|Z^*_l\|^4 &\le \frac{(2\pi)^4}{n^2}\sum_{l=1}^N E^*\Big(2\sum_{j=1}^J\varphi_j^2(\lambda_{l,n})\,|D^*_{r_js_j}(\lambda_{l,n})|^2 + 2\sum_{j=1}^J\varphi_j^2(-\lambda_{l,n})\,|D^*_{r_js_j}(-\lambda_{l,n})|^2\Big)^2\\
&\le \frac{4(2\pi)^4}{n^2}\max_{1\le j\le J}\sup_{\lambda\in[-\pi,\pi]}\varphi_j^4(\lambda)\,\sum_{l=1}^N E^*\Big(\sum_{j=1}^J|D^*_{r_js_j}(\lambda_{l,n})|^2 + \sum_{j=1}^J|D^*_{r_js_j}(-\lambda_{l,n})|^2\Big)^2\\
&\le \frac{C}{n^2}\max_{1\le j\le J}\sup_{\lambda\in[-\pi,\pi]}\varphi_j^4(\lambda)\,\Big(\sum_{l=1}^N\sum_{j=1}^J E^*|D^*_{r_js_j}(\lambda_{l,n})|^4 + \sum_{l=1}^N\sum_{j=1}^J E^*|D^*_{r_js_j}(-\lambda_{l,n})|^4\Big),
\end{align*}
for some constant $C$ depending only on $J$. Now, recall the definition of $D^*_{r_js_j}(\lambda)$ and verify by straightforward calculations that
\begin{align*}
E^*\big|I^*_{rs}(\lambda) - \widehat f_{rs}(\lambda)\big|^4 &= E^*\big[(I^*_{rs}(\lambda))^2(I^*_{sr}(\lambda))^2\big] - 2\widehat f_{rs}(\lambda)\,E^*\big[I^*_{rs}(\lambda)(I^*_{sr}(\lambda))^2\big] - 2\widehat f_{sr}(\lambda)\,E^*\big[(I^*_{rs}(\lambda))^2 I^*_{sr}(\lambda)\big]\\
&\quad + \widehat f_{rs}^2(\lambda)\,E^*\big[(I^*_{sr}(\lambda))^2\big] + \widehat f_{sr}^2(\lambda)\,E^*\big[(I^*_{rs}(\lambda))^2\big] + 4\widehat f_{rs}(\lambda)\widehat f_{sr}(\lambda)\,E^*\big[I^*_{rs}(\lambda)I^*_{sr}(\lambda)\big]\\
&\quad - 2\widehat f_{rs}(\lambda)\widehat f_{sr}^2(\lambda)\,E^*\big[I^*_{rs}(\lambda)\big] - 2\widehat f_{rs}^2(\lambda)\widehat f_{sr}(\lambda)\,E^*\big[I^*_{sr}(\lambda)\big] + \widehat f_{rs}^2(\lambda)\widehat f_{sr}^2(\lambda).
\end{align*}
Now, since $I^*(\lambda_{l,n})$ has, for every $l = 1,2,\ldots,N$, a complex Wishart distribution with parameters $1$ and $\widehat f(\lambda_{l,n})$, for short $I^*(\lambda_{l,n}) \sim \mathrm{Wishart}\big(1, \widehat f(\lambda_{l,n})\big)$, we get, using Assumption 5 and expressions for the moments of the complex Wishart distribution, see Withers and Nadarajah (2012), that $E^*|I^*_{rs}(\lambda) - \widehat f_{rs}(\lambda)|^4 \overset{P}{\to} E|S_{rs}(\lambda) - f_{rs}(\lambda)|^4 = O(1)$, for all $r,s \in \{1,2,\ldots,m\}$, where the random variable $S(\lambda) = (S_{rs}(\lambda))_{r,s=1,2,\ldots,m}$ has the $\mathrm{Wishart}(1, f(\lambda))$ distribution. Therefore,
$$\sum_{l=1}^N E^*\|Z^*_l\|^4 \le \frac{C}{n^2}\max_{1\le j\le J}\sup_{\lambda\in[-\pi,\pi]}\varphi_j^4(\lambda)\cdot N\cdot O_P(1) = O_P(n^{-1}) \to 0. \qquad \Box$$

Proof of Theorem 3.10:
For assertion (i), note that $E^*(V^\circ_n) = E^*(V^*_n) = 0$, which implies $\mathrm{Cov}^*\big((\mathrm{Re}(V^*_n)^\top, \mathrm{Im}(V^*_n)^\top)^\top\big) = G^*_n$. From the definition of $C^+_n$ one can see that $C^+_n$, and thus also $G^\circ_n$, are symmetric. It follows from (3.1) that
$$\mathrm{Cov}^*\begin{pmatrix}\mathrm{Re}(V^\circ_n)\\ \mathrm{Im}(V^\circ_n)\end{pmatrix} = \big(G^\circ_n\big)^{1/2}\big(G^*_n\big)^{-1/2}\,G^*_n\,\big(G^*_n\big)^{-1/2}\big(G^\circ_n\big)^{1/2} = G^\circ_n \overset{P}{\longrightarrow} G,$$
due to Lemma 3.9, (2.5) and (2.14). Again using (2.5) yields assertion (i). Since $(\mathrm{Re}(V^*_n)^\top, \mathrm{Im}(V^*_n)^\top)^\top$ is asymptotically normal due to Lemma 3.9 (iii), $(\mathrm{Re}(V^\circ_n)^\top, \mathrm{Im}(V^\circ_n)^\top)^\top$ is asymptotically normal with covariance matrix $G$, which gives assertion (ii). $\Box$

Proof of Theorem 3.11:
First note that $\sqrt{n}(M^*_n - \widehat M_n)$ equals $V^*_n$ from Step I.3. Hence, Lemma 3.9 (iii) together with (2.5) yields
$$\sqrt{n}\left[\begin{pmatrix}\mathrm{Re}(M^*_n)\\ \mathrm{Im}(M^*_n)\end{pmatrix} - \begin{pmatrix}\mathrm{Re}(\widehat M_n)\\ \mathrm{Im}(\widehat M_n)\end{pmatrix}\right] \overset{d}{\longrightarrow} \mathcal N_{2J}(0, G_1) \qquad (5.10)$$
in $P$-probability, where $G_1$ is defined analogously to $G$ in (2.14) but with $\Sigma$ replaced by $\Sigma_1$ and $\Gamma$ replaced by $\Gamma_1$. We next apply the delta method to (5.10). Invoking the mean value theorem for all $2L$ component functions of $\widetilde g$ leads to
$$\begin{pmatrix}\mathrm{Re}(W^*_n)\\ \mathrm{Im}(W^*_n)\end{pmatrix} = \big(\nabla\widetilde g_i(\xi_i)\big)_{i=1,\ldots,2L}\times\sqrt{n}\left[\begin{pmatrix}\mathrm{Re}(M^*_n)\\ \mathrm{Im}(M^*_n)\end{pmatrix} - \begin{pmatrix}\mathrm{Re}(\widehat M_n)\\ \mathrm{Im}(\widehat M_n)\end{pmatrix}\right], \qquad (5.11)$$
where $\nabla\widetilde g_i(\cdot)$ denotes the gradient (as a row vector) of the $i$-th component function of $\widetilde g$, and $\xi_i \in \mathbb R^{2J}$ denotes a vector on the line segment between $(\mathrm{Re}(M^*_n)^\top, \mathrm{Im}(M^*_n)^\top)^\top$ and $(\mathrm{Re}(\widehat M_n)^\top, \mathrm{Im}(\widehat M_n)^\top)^\top$. We now have from (5.10) that $\|\xi_i - (\mathrm{Re}(\widehat M_n)^\top, \mathrm{Im}(\widehat M_n)^\top)^\top\| \to 0$ in $P^*$-probability for all $i \in \{1,\ldots,2L\}$, and Assumption 5 together with convergence of Riemann sums yields $\|\widehat M_n - M\| \overset{P}{\to} 0$. Therefore (5.11) and (5.10) imply
$$\begin{pmatrix}\mathrm{Re}(W^*_n)\\ \mathrm{Im}(W^*_n)\end{pmatrix} \overset{d}{\longrightarrow} J_{\widetilde g}\big((\mathrm{Re}(M)^\top, \mathrm{Im}(M)^\top)^\top\big)\,\mathcal N_{2J}(0, G_1) \qquad (5.12)$$
in $P$-probability. We only need the asymptotic normality from this statement. Note that $\widetilde G^*_n$ is the covariance matrix of the left-hand side of (5.12), and therefore
$$\big(\widetilde G^*_n\big)^{-1/2}\begin{pmatrix}\mathrm{Re}(W^*_n)\\ \mathrm{Im}(W^*_n)\end{pmatrix} \overset{d}{\longrightarrow} \mathcal N_{2L}(0, \mathrm{Id}_{2L}) \qquad (5.13)$$
in probability. From the proof of Theorem 3.10 we have $G^\circ_n \overset{P}{\to} G$. Also, the above considerations together with continuity of $J_{\widetilde g}$ yield $J_{\widetilde g}\big((\mathrm{Re}(\widehat M_n)^\top, \mathrm{Im}(\widehat M_n)^\top)^\top\big) \overset{P}{\to} J_{\widetilde g}\big((\mathrm{Re}(M)^\top, \mathrm{Im}(M)^\top)^\top\big)$. Thus,
$$\widetilde G^\circ_n \overset{P}{\longrightarrow} J_{\widetilde g}\begin{pmatrix}\mathrm{Re}(M)\\ \mathrm{Im}(M)\end{pmatrix}\times G_1\times J_{\widetilde g}\begin{pmatrix}\mathrm{Re}(M)\\ \mathrm{Im}(M)\end{pmatrix}^\top.$$
Therefore we have
$$\begin{pmatrix}\mathrm{Re}(W^\circ_n)\\ \mathrm{Im}(W^\circ_n)\end{pmatrix} \overset{d}{\longrightarrow} J_{\widetilde g}\big((\mathrm{Re}(M)^\top, \mathrm{Im}(M)^\top)^\top\big)\,\mathcal N_{2J}(0, G_1) \qquad (5.14)$$
in probability. Applying the continuous mapping theorem with $h(x_1, x_2) := x_1 + i\cdot x_2$, for $x_1, x_2 \in \mathbb R^L$, it follows that
$$W^\circ_n \overset{d}{\longrightarrow} W \sim \mathcal N^c_L(0, \Sigma_R, \Gamma_R) \qquad (5.15)$$
in probability, for the matrices $\Sigma_R, \Gamma_R$ stated in (3.5) (the exact form of which does not need to be specified for the desired assertion to hold). $\Box$

References

[1] Brillinger, D.R. (1981).
Time Series: Data Analysis and Theory. Holden-Day, San Francisco.
[2] Brockwell, P.J. and Davis, R.A. (1991). Time Series: Theory and Methods (2nd Ed.). Springer, New York.
[3] Dahlhaus, R. (1985). On a spectral density estimate obtained by averaging periodograms. Journal of Applied Probability, 598–610.
[4] Dahlhaus, R. and Janas, D. (1996). A frequency domain bootstrap for ratio statistics in time series analysis. The Annals of Statistics 24, No. 5, 1934–1963.
[5] Efron, B. (1979). Bootstrap methods: another look at the jackknife. The Annals of Statistics 7, 1–26.
[6] Fragkeskou, M. and Paparoditis, E. (2019). Extending the range of validity of the autoregressive (sieve) bootstrap. Journal of Time Series Analysis 40, 356–379.
[7] Francq, C. and Zakoïan, J.-M. (2016). Estimating multivariate volatility models equation by equation. Journal of the Royal Statistical Society, Series B, 613–635.
[8] Franke, J. and Härdle, W. (1992). On bootstrapping kernel spectral estimates. The Annals of Statistics 20, 121–145.
[9] Hurvich, C.M. and Zeger, S.L. (1987). Frequency domain bootstrap methods for time series. Preprint, Department of Statistics and Operations Research, New York University.
[10] Jentsch, C. and Kreiss, J.-P. (2010). The multiple hybrid bootstrap – resampling multivariate linear processes. Journal of Multivariate Analysis, 2320–2345.
[11] Kreiss, J.-P. and Paparoditis, E. (2012). The hybrid wild bootstrap for time series. Journal of the American Statistical Association 107, 1073–1084.
[12] Kreiss, J.-P., Paparoditis, E. and Politis, D. (2011). On the range of validity of the autoregressive sieve bootstrap. The Annals of Statistics 39, 2103–2130.
[13] Krogstad, H.E. (1982). On the covariance of the periodogram. Journal of Time Series Analysis 3, No. 3, 195–207.
[14] Meyer, M. and Kreiss, J.-P. (2015). On the vector autoregressive sieve bootstrap. Journal of Time Series Analysis 36, 377–397.
[15] Meyer, M., Paparoditis, E. and Kreiss, J.-P. (2020). Extending the validity of frequency domain bootstrap methods to general stationary processes. The Annals of Statistics 48, 2402–2427.
[16] Priestley, M.B. (1981). Spectral Analysis and Time Series. Academic Press, London.
[17] Robinson, P.M. (1991). Automatic frequency domain inference on semiparametric and nonparametric models. Econometrica 59, 1329–1363.
[18] Rosenblatt, M. (1985). Stationary Sequences and Random Fields. Birkhäuser, Boston.
[19] Schmitt, B.A. (1992). Perturbation bounds for matrix square roots and Pythagorean sums. Linear Algebra and its Applications, 215–227.
[20] Taniguchi, M. and Kakizawa, Y. (2000). Asymptotic Theory of Statistical Inference for Time Series. Springer.
[21] Tewes, J., Nordman, D.J. and Politis, D.N. (2019). Convolved subsampling estimation with applications to block bootstrap. The Annals of Statistics 47, No. 1, 468–496.
[22] Withers, C.S. and Nadarajah, S. (2012). Moments and cumulants for the complex Wishart. Journal of Multivariate Analysis, 242–247.
[23] Wu, W.B. and Zaffaroni, P. (2018). Asymptotic theory for spectral density estimates of general multivariate time series. Econometric Theory 34, 1–22.

                         ρ̂(−1)                  ρ̂(0)                   ρ̂(+1)
                   Mean   Std   MSE×10    Mean   Std   MSE×10    Mean   Std   MSE×10
MODEL I   Exact:   0.766                  0.992                  1.131
MFHB h=0.10
  b=6              0.794  0.123  0.158    1.014  0.129  0.176    1.142  0.167  0.283
  b=8              0.788  0.130  0.172    1.024  0.140  0.206    1.153  0.188  0.358
  b=10             0.787  0.129  0.168    1.011  0.146  0.214    1.143  0.192  0.368
  b=12             0.788  0.136  0.188    1.000  0.160  0.256    1.130  0.192  0.367
  b=16             0.771  0.142  0.203    0.958  0.171  0.300    1.104  0.212  0.456
MFHB h=0.12
  b=6              0.824  0.124  0.187    1.027  0.140  0.207    1.159  0.172  0.308
  b=8              0.828  0.122  0.178    1.045  0.126  0.181    1.171  0.170  0.300
  b=10             0.821  0.117  0.162    1.030  0.129  0.181    1.152  0.171  0.297
  b=12             0.815  0.127  0.179    1.023  0.147  0.226    1.156  0.170  0.295
  b=16             0.799  0.145  0.243    1.005  0.167  0.283    1.140  0.191  0.365
MBB
  b=6              0.906  0.112  0.323    1.016  0.162  0.273    1.092  0.152  0.248
  b=8              0.849  0.122  0.201    0.977  0.181  0.332    1.055  0.185  0.406
  b=10             0.832  0.143  0.237    0.955  0.198  0.414    1.048  0.199  0.475
  b=12             0.795  0.145  0.217    0.933  0.208  0.477    1.020  0.205  0.548
  b=16             0.750  0.164  0.271    0.875  0.208  0.559    0.966  0.217  0.743
MODEL II  Exact:   1.154                  1.135                  1.217
MFHB h=0.10
  b=6              1.164  0.232  0.570    1.160  0.166  0.300    1.195  0.196  0.424
  b=8              1.153  0.263  0.620    1.168  0.168  0.301    1.196  0.195  0.409
  b=10             1.140  0.235  0.633    1.148  0.181  0.331    1.193  0.183  0.353
  b=12             1.113  0.240  0.785    1.115  0.205  0.428    1.193  0.216  0.491
  b=16             1.113  0.248  0.742    1.084  0.229  0.536    1.144  0.245  0.601
MFHB h=0.12
  b=6              1.167  0.224  0.544    1.195  0.173  0.337    1.222  0.195  0.431
  b=8              1.158  0.222  0.547    1.193  0.160  0.311    1.216  0.185  0.399
  b=10             1.184  0.230  0.538    1.200  0.176  0.357    1.227  0.186  0.387
  b=12             1.149  0.227  0.578    1.159  0.194  0.394    1.221  0.203  0.498
  b=16             1.134  0.235  0.661    1.111  0.208  0.437    1.168  0.232  0.538
MBB
  b=6              1.176  0.208  0.454    1.229  0.255  0.784    1.191  0.221  0.524
  b=8              1.157  0.245  0.655    1.189  0.273  0.782    1.187  0.246  0.623
  b=10             1.096  0.235  0.730    1.131  0.268  0.718    1.162  0.263  0.695
  b=12             1.048  0.238  0.999    1.090  0.271  0.766    1.116  0.269  0.735
  b=16             1.024  0.245  1.002    1.057  0.289  0.870    1.080  0.290  0.888

TABLE 1: Bootstrap estimates of the standard deviation of the sample cross-correlations $\widehat\rho(h)$ for lags $h \in \{-1, 0, +1\}$ for time series of length $n = 100$ stemming from Model I and Model II. MFHB refers to the estimates of the multivariate frequency domain hybrid bootstrap and MBB to those of the moving block bootstrap.

A FREQUENCY DOMAIN BOOTSTRAP FOR GENERAL MULTIVARIATE STATIONARY PROCESSES
– SUPPLEMENTARY MATERIAL –
Proof of Lemma 2.1:
The proof generalizes calculations from Rosenblatt (1985) and Krogstad (1982) regarding the covariance structure of univariate periodogram ordinates. The assertion of Lemma 2.1 remains true if one switches from Fourier frequencies $\lambda_{j,n}, \lambda_{k,n}$ to two fixed frequencies $\lambda_1, \lambda_2 \in [0,\pi]$; we will state at the end of this proof which arguments have to be adapted in this situation.

A direct calculation yields the decomposition of the covariance into three major components:
\begin{align*}
\mathrm{Cov}(I_{rs}(\lambda_{j,n}), I_{vw}(\lambda_{k,n})) &= \frac{1}{4\pi^2 n^2}\sum_{t_1,t_2,t_3,t_4=1}^n \mathrm{cum}(X_r(t_1), X_s(t_2), X_v(t_3), X_w(t_4))\,e^{-i(t_1-t_2)\lambda_{j,n}+i(t_3-t_4)\lambda_{k,n}}\\
&\quad + \frac{1}{4\pi^2 n^2}\sum_{t_1,t_2,t_3,t_4=1}^n \gamma_{rv}(t_1-t_3)\,\gamma_{sw}(t_2-t_4)\,e^{-i(t_1-t_2)\lambda_{j,n}+i(t_3-t_4)\lambda_{k,n}}\\
&\quad + \frac{1}{4\pi^2 n^2}\sum_{t_1,t_2,t_3,t_4=1}^n \gamma_{rw}(t_1-t_4)\,\gamma_{sv}(t_2-t_3)\,e^{-i(t_1-t_2)\lambda_{j,n}+i(t_3-t_4)\lambda_{k,n}}\\
&=: S_1 + S_2 + S_3.
\end{align*}
For $S_1$ we obtain, with an index shift for three of the summation variables,
$$\frac{n}{2\pi}S_1 = \frac{1}{(2\pi)^3 n}\sum_{t_4=1}^n\ \sum_{h_1,h_2,h_3=1-t_4}^{n-t_4} c_{rsvw}(h_1,h_2,h_3)\,e^{-i(h_1-h_2)\lambda_{j,n}+ih_3\lambda_{k,n}}.$$
By merging like summands the last expression can be seen to be equal to
$$\frac{1}{(2\pi)^3 n}\sum_{h_1,h_2,h_3=-(n-1)}^{n-1} q_n(h_1,h_2,h_3)\,c_{rsvw}(h_1,h_2,h_3)\,e^{-i(h_1-h_2)\lambda_{j,n}+ih_3\lambda_{k,n}}, \qquad (5.16)$$
where $q_n(h_1,h_2,h_3)$ counts the number of times the respective summand appears in $(n/2\pi)S_1$. It holds that
$$q_n(h_1,h_2,h_3) = \big(n - \max\{|h_1|, |h_2|, |h_3|, |h_1-h_2|, |h_1-h_3|, |h_2-h_3|\}\big)^+.$$
We can replace the $q_n(\cdot)$ term in (5.16) by $n$ since the resulting remainder term vanishes with rate $O(n^{-1})$, as the following bound shows:
\begin{align*}
\Big|\frac{1}{(2\pi)^3 n}\sum_{h_1,h_2,h_3=-(n-1)}^{n-1}\big(q_n(h_1,h_2,h_3) - n\big)\,c_{rsvw}(h_1,h_2,h_3)\,e^{-i(h_1-h_2)\lambda_{j,n}+ih_3\lambda_{k,n}}\Big|
&\le \frac{1}{(2\pi)^3 n}\sum_{h_1,h_2,h_3=-\infty}^{\infty} 2\max\{|h_1|,|h_2|,|h_3|\}\,|c_{rsvw}(h_1,h_2,h_3)|\\
&\le \frac{2}{(2\pi)^3 n}\sum_{h_1,h_2,h_3=-\infty}^{\infty}\big(|h_1|+|h_2|+|h_3|\big)\,|c_{rsvw}(h_1,h_2,h_3)|,
\end{align*}
which is of order $O(n^{-1})$. Now, after replacing $q_n(\cdot)$ with $n$ in (5.16), the difference of the remaining term and $f_{rsvw}(\lambda_{j,n}, -\lambda_{j,n}, -\lambda_{k,n})$ can be bounded by
$$\frac{1}{(2\pi)^3}\sum_{h_1,h_2,h_3=-\infty}^{\infty}\mathbb 1\{\max\{|h_1|,|h_2|,|h_3|\}\ge n\}\,|c_{rsvw}(h_1,h_2,h_3)|,$$
which also vanishes with rate $O(n^{-1})$ under the imposed summability assumption on $|c_{rsvw}(h_1,h_2,h_3)|$. This yields the desired assertion for $S_1$.

To handle $S_2$ and $S_3$ we have to introduce some notation and preliminaries. We define the functions $g_n(\lambda) := \sum_{k=1}^n e^{ik\lambda}$. For all $\lambda \notin 2\pi\mathbb Z$, we can use the geometric sum formula to express $g_n$ as
$$g_n(\lambda) = e^{i\lambda}\,\frac{e^{in\lambda}-1}{e^{i\lambda}-1} = e^{i(n+1)\lambda/2}\,\frac{\sin(n\lambda/2)}{\sin(\lambda/2)}.$$
Note that for all $\lambda \in \mathbb R$ we have $g_n(\lambda)\,g_n(-\lambda) = 2\pi n\,F_n(\lambda)$, where $F_n$ is the (continuously extended) Fejér kernel:
$$F_n(\lambda) := \begin{cases}\dfrac{1}{2\pi n}\,\dfrac{\sin^2(n\lambda/2)}{\sin^2(\lambda/2)}, & \lambda \notin 2\pi\mathbb Z,\\[1ex] \dfrac{n}{2\pi}, & \lambda \in 2\pi\mathbb Z.\end{cases}$$
Integrals over these functions can be handled in an elegant way by determining their Fourier coefficients. For arbitrary $\lambda_1, \lambda_2$ define
$$z_{n,\lambda_1,\lambda_2}(\alpha) := g_n(\alpha-\lambda_1)\,g_n(-(\alpha-\lambda_2)).$$
The $\ell$-th Fourier coefficient of this function is given by
\begin{align*}
\widehat z_{n,\lambda_1,\lambda_2}[\ell] &= \frac{1}{2\pi}\int_{-\pi}^{\pi} g_n(\alpha-\lambda_1)\,g_n(-\alpha+\lambda_2)\,e^{-i\ell\alpha}\,d\alpha = \frac{1}{2\pi}\sum_{p,q=1}^n e^{-ip\lambda_1}e^{iq\lambda_2}\int_{-\pi}^{\pi} e^{i(p-q-\ell)\alpha}\,d\alpha\\
&= \sum_{p,q=1}^n e^{-ip\lambda_1}e^{iq\lambda_2}\,\mathbb 1\{p-q=\ell\}. \qquad (5.17)
\end{align*}
In particular it holds that $\widehat z_{n,\lambda_1,\lambda_2}[\ell] = 0$ for all $|\ell| \ge n$. With this notation, and using the fact that $\gamma_{rs}(h) = \int_{-\pi}^{\pi} e^{ih\alpha} f_{rs}(\alpha)\,d\alpha$, we can write
$$S_2 = \frac{1}{4\pi^2 n^2}\int_{-\pi}^{\pi} z_{n,\lambda_{j,n},\lambda_{k,n}}(\alpha)\,f_{rv}(\alpha)\,d\alpha\,\cdot\,\int_{-\pi}^{\pi} z_{n,-\lambda_{j,n},-\lambda_{k,n}}(\alpha)\,f_{sw}(\alpha)\,d\alpha. \qquad (5.18)$$
Consider first the case with $j = k$, that is, with $\lambda_{j,n} = \lambda_{k,n}$.
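Before specializing to this case, the two facts just stated, namely the kernel identity $g_n(\lambda)g_n(-\lambda) = 2\pi n F_n(\lambda)$ and the vanishing of the Fourier coefficients $\widehat z_{n,\lambda_1,\lambda_2}[\ell]$ for $|\ell| \ge n$, can be verified numerically. This is a plain check of the formulas above for small $n$, with $\widehat z$ evaluated directly from (5.17):

```python
import numpy as np

# g_n(lam) = sum_{k=1}^n e^{i k lam}; F_n is the Fejer kernel as defined above.
def g(n, lam):
    return np.sum(np.exp(1j * np.arange(1, n + 1) * lam))

def fejer(n, lam):
    if np.isclose(np.sin(lam / 2), 0.0):   # lam in 2*pi*Z (continuous extension)
        return n / (2 * np.pi)
    return np.sin(n * lam / 2) ** 2 / np.sin(lam / 2) ** 2 / (2 * np.pi * n)

def z_hat(n, lam1, lam2, ell):
    # Fourier coefficient (5.17): sum over pairs 1 <= p, q <= n with p - q = ell.
    p, q = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing='ij')
    mask = (p - q) == ell
    return np.sum(np.exp(-1j * p[mask] * lam1 + 1j * q[mask] * lam2))

n = 12
for lam in [0.4, 1.3, 2 * np.pi * 5 / n]:      # generic and Fourier frequencies
    prod = g(n, lam) * g(n, -lam)              # = |g_n(lam)|^2, hence real
    assert np.isclose(prod.imag, 0.0)
    assert np.isclose(prod.real, 2 * np.pi * n * fejer(n, lam))

# z_hat[l] vanishes for |l| >= n, because |p - q| <= n - 1:
assert z_hat(n, 0.4, 1.3, n) == 0
assert z_hat(n, 0.4, 1.3, -n - 2) == 0
```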
Then the last expression simplifies to
$$S_2 = \int_{-\pi}^{\pi} F_n(\alpha-\lambda_{j,n})\,f_{rv}(\alpha)\,d\alpha\,\cdot\,\int_{-\pi}^{\pi} F_n(\alpha+\lambda_{j,n})\,f_{sw}(\alpha)\,d\alpha.$$
Since $F_n(\cdot-\lambda_{j,n}) = z_{n,\lambda_{j,n},\lambda_{j,n}}(\cdot)/(2\pi n)$, its $\ell$-th Fourier coefficient can be obtained from (5.17); it is
$$\widehat{F_n(\cdot-\lambda_{j,n})}[\ell] = \frac{(n-|\ell|)\,e^{-i\ell\lambda_{j,n}}}{2\pi n}\,\mathbb 1\{|\ell| < n\}. \qquad (5.19)$$
Expanding $f_{rv}$ and $f_{sw}$ into their Fourier series and evaluating the integrals coefficient-wise via (5.19) then shows that each factor equals the corresponding spectral density at $\pm\lambda_{j,n}$ up to an error of order $O(n^{-1})$, uniformly in the frequency, which gives the assertion for $S_2$ in this case. For $S_3$, as well as for $S_2$ with $j \neq k$, the relevant Fourier coefficients are those of $z_{n,\lambda_{j,n},-\lambda_{k,n}}$ and $z_{n,-\lambda_{j,n},\lambda_{k,n}}$. Since $\lambda_{j,n}, \lambda_{k,n} \notin \{0,\pi\}$, we have, whether or not $\lambda_{j,n} = \lambda_{k,n}$, that $\pm(\lambda_{j,n}+\lambda_{k,n}) \notin 2\pi\mathbb Z$, and we get
$$\big|\widehat z_{n,\lambda_{j,n},-\lambda_{k,n}}[\ell]\big| \le |\ell| \qquad (5.20)$$
and
$$\big|\widehat z_{n,-\lambda_{j,n},\lambda_{k,n}}[\ell]\big| \le |\ell|, \qquad (5.21)$$
which implies $|S_3| = O(n^{-2})$, analogous to the calculations for $S_2$ before. Moreover, an inspection of this proof shows that in all cases the $O(\cdot)$ bounds are uniform over all frequencies. Also, if one is interested in $\mathrm{Cov}(I_{rs}(\lambda_1), I_{vw}(\lambda_2))$ for two fixed frequencies $\lambda_1, \lambda_2 \in [0,\pi]$ instead of Fourier frequencies $\lambda_{j,n}, \lambda_{k,n}$, one can in large parts use the same arguments presented here. The only argument that changes is (5.19). For fixed frequencies $\lambda_1 \neq \lambda_2$ one gets instead of (5.19)
$$\big|\widehat z_{n,\lambda_1,\lambda_2}[\ell]\big| = \Big|\sum_{q=1}^{n-|\ell|} e^{iq(\lambda_2-\lambda_1)}\Big| = \Bigg|\,e^{i(\lambda_2-\lambda_1)\frac{n-|\ell|+1}{2}}\,\frac{\sin\Big(\frac{(n-|\ell|)(\lambda_2-\lambda_1)}{2}\Big)}{\sin\big(\frac{\lambda_2-\lambda_1}{2}\big)}\Bigg| \le C_{\lambda_1,\lambda_2}, \qquad (5.22)$$
where the finite constant $C_{\lambda_1,\lambda_2}$ depends only on $\lambda_1, \lambda_2$ but not on $n$. With this bound one can proceed as in the case of Fourier frequencies. $\Box$

Proof of Lemma 3.8 (i): For the variance of $\widetilde f_{rs}(\lambda_{\ell,b})$ we have
\begin{align*}
\mathrm{Var}(\widetilde f_{rs}(\lambda_{\ell,b})) &= \frac{1}{(n-b+1)^2}\sum_{t_1,t_2=1}^{n-b+1}\mathrm{Cov}\big(I_{t_1;rs}(\lambda_{\ell,b}), I_{t_2;rs}(\lambda_{\ell,b})\big)\\
&\le \frac{1}{(n-b+1)^2\,4\pi^2 b^2}\sum_{t_1,t_2=1}^{n-b+1}\ \sum_{g_1,g_2,g_3,g_4=1}^{b}\Big\{\big|\gamma_{rr}((t_1-t_2)+(g_1-g_3))\big|\,\big|\gamma_{ss}((t_1-t_2)+(g_2-g_4))\big|\\
&\qquad + \big|\gamma_{rs}((t_1-t_2)+(g_1-g_4))\big|\,\big|\gamma_{sr}((t_1-t_2)+(g_2-g_3))\big|\\
&\qquad + \big|c_{rsrs}(t_1-t_2+g_1-g_4,\ t_1-t_2+g_2-g_4,\ g_3-g_4)\big|\Big\}.
\end{align*}
For the first term on the right-hand side of the last expression, we get by first summing over $g_3$ and then over $t_2$ that this term is $O(b/(n-b+1))$. The $O(b/(n-b+1))$ bound is obtained for the second term by the same arguments, while for the last term we get, by summing first over $g_3$, then over $g_2$ and then over $t_2$, that this term is $O(1/(n-b+1))$. Now, since $E(\widetilde f_{rs}(\lambda_{\ell,b})) = E(I_{rs}(\lambda_{\ell,b}))$, we get that
$$E\Big(\sum_{\ell\in\mathcal G(b)}\big|\widetilde f_{rs}(\lambda_{\ell,b}) - E(I_{rs}(\lambda_{\ell,b}))\big|\Big) \le \sum_{\ell\in\mathcal G(b)}\sqrt{\mathrm{Var}(\widetilde f_{rs}(\lambda_{\ell,b}))} = O\big(\sqrt{b^3/(n-b+1)}\big),$$
which completes the proof. $\Box$

Proof of Lemma 3.8 (ii): Notice that
\begin{align*}
&\frac{1}{n-b+1}\sum_{t=1}^{n-b+1}\Big(I_{t;r_js_j}(\lambda_{\ell_1,b})\,I_{t;r_ks_k}(\lambda_{\ell_2,b}) - E\big(I_{t;r_js_j}(\lambda_{\ell_1,b})\,I_{t;r_ks_k}(\lambda_{\ell_2,b})\big)\Big)\\
&= E\big(I_{r_ks_k}(\lambda_{\ell_2,b})\big)\,\frac{1}{n-b+1}\sum_{t=1}^{n-b+1}\Big(I_{t;r_js_j}(\lambda_{\ell_1,b}) - E\big(I_{t;r_js_j}(\lambda_{\ell_1,b})\big)\Big)\\
&\quad + E\big(I_{r_js_j}(\lambda_{\ell_1,b})\big)\,\frac{1}{n-b+1}\sum_{t=1}^{n-b+1}\Big(I_{t;r_ks_k}(\lambda_{\ell_2,b}) - E\big(I_{t;r_ks_k}(\lambda_{\ell_2,b})\big)\Big)\\
&\quad + \frac{1}{n-b+1}\sum_{t=1}^{n-b+1}\Big\{\big(I_{t;r_js_j}(\lambda_{\ell_1,b}) - E(I_{t;r_js_j}(\lambda_{\ell_1,b}))\big)\big(I_{t;r_ks_k}(\lambda_{\ell_2,b}) - E(I_{t;r_ks_k}(\lambda_{\ell_2,b}))\big)\\
&\qquad\qquad - E\Big(\big(I_{t;r_js_j}(\lambda_{\ell_1,b}) - E(I_{t;r_js_j}(\lambda_{\ell_1,b}))\big)\big(I_{t;r_ks_k}(\lambda_{\ell_2,b}) - E(I_{t;r_ks_k}(\lambda_{\ell_2,b}))\big)\Big)\Big\}\\
&=: T_{1,n}(\ell_1,\ell_2) + T_{2,n}(\ell_1,\ell_2) + T_{3,n}(\ell_1,\ell_2),
\end{align*}
with an obvious notation for $T_{j,n}(\ell_1,\ell_2)$, $j = 1,2,3$.
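The decomposition into $T_{1,n}, T_{2,n}, T_{3,n}$ rests on the elementary centering identity $ab - \mu_{ab} = \mu_b(a-\mu_a) + \mu_a(b-\mu_b) + \{(a-\mu_a)(b-\mu_b) - (\mu_{ab}-\mu_a\mu_b)\}$, which in fact holds for arbitrary constants $\mu_a, \mu_b, \mu_{ab}$, and hence in particular with the expectations plugged in. A quick numerical confirmation on arrays of stand-in values (not the periodogram ordinates themselves):

```python
import numpy as np

rng = np.random.default_rng(2)

# a, b stand in for the two periodogram ordinates; mu_a, mu_b, mu_ab
# stand in for their (fixed) expectations -- arbitrary values suffice,
# since the identity is purely algebraic.
a = rng.standard_normal(1000)
b = rng.standard_normal(1000)
mu_a, mu_b, mu_ab = 0.3, -1.2, 0.7

lhs = a * b - mu_ab
rhs = (mu_b * (a - mu_a)
       + mu_a * (b - mu_b)
       + ((a - mu_a) * (b - mu_b) - (mu_ab - mu_a * mu_b)))

# The identity holds pointwise for every pair (a, b):
assert np.allclose(lhs, rhs)
```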
Using $E\big(I_{rs}(\lambda_{\ell,b})\big) = f_{rs}(\lambda_{\ell,b}) + O(1/b)$ and assertion (i) of the lemma, we get
\begin{align*}
\sum_{\ell_1,\ell_2\in\mathcal G(b)}\big|T_{1,n}(\ell_1,\ell_2)\big| &\le \sum_{\ell_2\in\mathcal G(b)}\big|E\big(I_{r_ks_k}(\lambda_{\ell_2,b})\big)\big|\times\sum_{\ell_1\in\mathcal G(b)}\Big|\frac{1}{n-b+1}\sum_{t=1}^{n-b+1}\Big(I_{t;r_js_j}(\lambda_{\ell_1,b}) - E\big(I_{t;r_js_j}(\lambda_{\ell_1,b})\big)\Big)\Big|\\
&= O_P\big(\sqrt{b^5/(n-b+1)}\big).
\end{align*}
By analogous arguments, the same bound is obtained for the term $T_{2,n}(\ell_1,\ell_2)$. For the term $T_{3,n}(\ell_1,\ell_2)$ observe that $E(T_{3,n}(\ell_1,\ell_2)) = 0$ and that $\sum_{\ell_1,\ell_2\in\mathcal G(b)} E|T_{3,n}(\ell_1,\ell_2)| \le \sum_{\ell_1,\ell_2\in\mathcal G(b)}\sqrt{\mathrm{Var}(T_{3,n}(\ell_1,\ell_2))}$. Using the notation $I^c_{t;rs}(\lambda) = I_{t;rs}(\lambda) - E(I_{t;rs}(\lambda))$ we get for the variance
\begin{align*}
\mathrm{Var}(T_{3,n}(\ell_1,\ell_2)) &= \frac{1}{(n-b+1)^2}\sum_{t_1,t_2=1}^{n-b+1}\Big\{E\big(I^c_{t_1;r_js_j}(\lambda_{\ell_1,b})\,I^c_{t_2;r_js_j}(\lambda_{\ell_1,b})\big)\,E\big(I^c_{t_1;r_ks_k}(\lambda_{\ell_2,b})\,I^c_{t_2;r_ks_k}(\lambda_{\ell_2,b})\big)\\
&\qquad + E\big(I^c_{t_1;r_js_j}(\lambda_{\ell_1,b})\,I^c_{t_2;r_ks_k}(\lambda_{\ell_2,b})\big)\,E\big(I^c_{t_2;r_js_j}(\lambda_{\ell_1,b})\,I^c_{t_1;r_ks_k}(\lambda_{\ell_2,b})\big)\\
&\qquad + \mathrm{cum}\big(I^c_{t_1;r_js_j}(\lambda_{\ell_1,b}),\ I^c_{t_1;r_ks_k}(\lambda_{\ell_2,b}),\ I^c_{t_2;r_js_j}(\lambda_{\ell_1,b}),\ I^c_{t_2;r_ks_k}(\lambda_{\ell_2,b})\big)\Big\}\\
&=: V_{1,n}(\ell_1,\ell_2) + V_{2,n}(\ell_1,\ell_2) + V_{3,n}(\ell_1,\ell_2),
\end{align*}
with an obvious notation for $V_{j,n}(\ell_1,\ell_2)$, $j = 1,2,3$. We show that each one of the terms $V_{j,n}(\ell_1,\ell_2)$ is of the order $O(b/(n-b+1))$. This implies that
$$\sum_{\ell_1,\ell_2\in\mathcal G(b)}\sqrt{V_{1,n}(\ell_1,\ell_2) + V_{2,n}(\ell_1,\ell_2) + V_{3,n}(\ell_1,\ell_2)} = O\big(\sqrt{b^5/(n-b+1)}\big).$$
The terms $V_{1,n}(\ell_1,\ell_2)$ and $V_{2,n}(\ell_1,\ell_2)$ can be treated similarly, so we only consider $V_{1,n}(\ell_1,\ell_2)$.
For this term we have, using
\begin{align*}
\Big|E\big(I^c_{t_1;r_js_j}(\lambda_{\ell_1,b})\,I^c_{t_2;r_js_j}(\lambda_{\ell_1,b})\big)\Big| &\le \frac{1}{4\pi^2 b^2}\sum_{g_1,g_2,g_3,g_4=1}^b\Big\{\big|\gamma_{r_jr_j}((t_1-t_2)+(g_1-g_3))\,\gamma_{s_js_j}((t_1-t_2)+(g_2-g_4))\big|\\
&\qquad + \big|\gamma_{r_js_j}((t_1-t_2)+(g_1-g_4))\,\gamma_{s_jr_j}((t_1-t_2)+(g_2-g_3))\big|\\
&\qquad + \big|c_{r_js_jr_js_j}(t_1-t_2+g_1-g_4,\ t_1-t_2+g_2-g_4,\ g_3-g_4)\big|\Big\},
\end{align*}
that
$$\big|V_{1,n}(\ell_1,\ell_2)\big| \le \frac{1}{16\pi^4 b^4 (n-b+1)^2}\sum_{t_1,t_2=1}^{n-b+1} C_j(t_1,t_2)\,C_k(t_1,t_2), \qquad (5.23)$$
where
\begin{align*}
C_j(t_1,t_2) := \sum_{g_1,g_2,g_3,g_4=1}^b\Big\{&\big|\gamma_{r_jr_j}((t_1-t_2)+(g_1-g_3))\,\gamma_{s_js_j}((t_1-t_2)+(g_2-g_4))\big|\\
&+ \big|\gamma_{r_js_j}((t_1-t_2)+(g_1-g_4))\,\gamma_{s_jr_j}((t_1-t_2)+(g_2-g_3))\big|\\
&+ \big|c_{r_js_jr_js_j}(t_1-t_2+g_1-g_4,\ t_1-t_2+g_2-g_4,\ g_3-g_4)\big|\Big\}.
\end{align*}
Evaluating the $C_j(t_1,t_2)\,C_k(t_1,t_2)$ term in (5.23) leads to the consideration of terms which are similar to the following three:
\begin{align*}
I_{1,n} &= \frac{1}{16\pi^4 b^4 (n-b+1)^2}\sum_{t_1,t_2=1}^{n-b+1}\ \sum_{g_1,\ldots,g_4=1}^b\ \sum_{v_1,\ldots,v_4=1}^b\big|\gamma_{r_jr_j}((t_1-t_2)+(g_1-g_3))\big|\,\big|\gamma_{s_js_j}((t_1-t_2)+(g_2-g_4))\big|\\
&\qquad\qquad\times\big|\gamma_{r_ks_k}((t_1-t_2)+(v_1-v_4))\big|\,\big|\gamma_{s_kr_k}((t_1-t_2)+(v_2-v_3))\big|,\\
I_{2,n} &= \frac{1}{16\pi^4 b^4 (n-b+1)^2}\sum_{t_1,t_2=1}^{n-b+1}\ \sum_{g_1,\ldots,g_4=1}^b\ \sum_{v_1,\ldots,v_4=1}^b\big|\gamma_{r_jr_j}((t_1-t_2)+(g_1-g_3))\big|\,\big|\gamma_{s_js_j}((t_1-t_2)+(g_2-g_4))\big|\\
&\qquad\qquad\times\big|c_{r_ks_kr_ks_k}(t_1-t_2+v_1-v_4,\ t_1-t_2+v_2-v_4,\ v_3-v_4)\big|
\end{align*}
and
\begin{align*}
I_{3,n} &= \frac{1}{16\pi^4 b^4 (n-b+1)^2}\sum_{t_1,t_2=1}^{n-b+1}\ \sum_{g_1,\ldots,g_4=1}^b\ \sum_{v_1,\ldots,v_4=1}^b\big|c_{r_js_jr_js_j}(t_1-t_2+g_1-g_4,\ t_1-t_2+g_2-g_4,\ g_3-g_4)\big|\\
&\qquad\qquad\times\big|c_{r_ks_kr_ks_k}(t_1-t_2+v_1-v_4,\ t_1-t_2+v_2-v_4,\ v_3-v_4)\big|.
\end{align*}
For $I_{1,n}$ we get, summing first over $v_1$, then over $v_2$, then over $g_3$ and finally over $t_2$, that this term is $O(b/(n-b+1))$. For the term $I_{2,n}$ we get, by summing first over $v_3$, then over $v_2$, then over $v_1$, then over $g_3$ and finally over $t_2$, that this term is $O(1/(n-b+1))$. For $I_{3,n}$ we get, summing first over $v_3$, then over $v_2$, then over $v_1$, then over $g_3$, then over $g_2$ and finally over $t_2$, that $I_{3,n} = O(1/(b(n-b+1)))$. From this we conclude that $V_{1,n}(\ell_1,\ell_2) = O(b/(n-b+1))$.

It remains to show that $V_{3,n}(\ell_1,\ell_2) = O(b/(n-b+1))$. For this we get
\begin{align*}
\big|V_{3,n}(\ell_1,\ell_2)\big| &\le \frac{1}{16\pi^4 b^4 (n-b+1)^2}\sum_{t_1,t_2=1}^{n-b+1}\ \sum_{g_1,g_2,\ldots,g_8=1}^b\Big|\mathrm{cum}\Big(X_{r_j}(t_1+g_1-1)\cdot X_{s_j}(t_1+g_2-1),\ X_{r_k}(t_1+g_3-1)\cdot X_{s_k}(t_1+g_4-1),\\
&\qquad\qquad X_{r_j}(t_2+g_5-1)\cdot X_{s_j}(t_2+g_6-1),\ X_{r_k}(t_2+g_7-1)\cdot X_{s_k}(t_2+g_8-1)\Big)\Big|.
\end{align*}
The above cumulant term can be expressed as the sum of products of cumulants over all indecomposable partitions of the two-dimensional table
$$\begin{array}{cc} t_1+g_1-1 & t_1+g_2-1\\ t_1+g_3-1 & t_1+g_4-1\\ t_2+g_5-1 & t_2+g_6-1\\ t_2+g_7-1 & t_2+g_8-1 \end{array}$$
see for instance Brillinger (1981), Theorem 2.3.2.
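The role of pair partitions in such cumulant expansions can be illustrated in the Gaussian case, where all cumulants of order greater than two vanish, so the expansion over partitions collapses to Isserlis' formula $E(X_1X_2X_3X_4) = \gamma_{12}\gamma_{34} + \gamma_{13}\gamma_{24} + \gamma_{14}\gamma_{23}$. A Monte Carlo check on a toy covariance matrix, unrelated to the processes of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy covariance: unit variances, all correlations 0.5 (positive definite,
# eigenvalues 2.5 and 0.5). A hypothetical example for illustration only.
C = np.full((4, 4), 0.5) + 0.5 * np.eye(4)
X = rng.multivariate_normal(np.zeros(4), C, size=500_000)

emp = np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3])
pairs = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]   # = 0.75

# For Gaussian data the fourth moment is the sum over the three pair
# partitions; the fourth-order cumulant contributes nothing.
assert abs(emp - pairs) < 0.05
```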
Investigation of this sum shows that it is dominated by those indecomposable partitions which contain only pairs, a typical term arising from such a partition being given by
\begin{align*}
&\frac{1}{16\pi^4 b^4 (n-b+1)^2}\sum_{t_1,t_2=1}^{n-b+1}\ \sum_{g_1,g_2,\ldots,g_8=1}^b\Big|\mathrm{cum}\big(X_{r_j}(t_1+g_1-1),\ X_{s_j}(t_2+g_6-1)\big)\times\mathrm{cum}\big(X_{r_k}(t_1+g_3-1),\ X_{s_k}(t_2+g_8-1)\big)\\
&\qquad\times\mathrm{cum}\big(X_{s_j}(t_1+g_2-1),\ X_{r_j}(t_2+g_5-1)\big)\times\mathrm{cum}\big(X_{s_k}(t_1+g_4-1),\ X_{r_k}(t_2+g_7-1)\big)\Big|\\
&= \frac{1}{16\pi^4 b^4 (n-b+1)^2}\sum_{t_1,t_2=1}^{n-b+1}\ \sum_{g_1,g_2,\ldots,g_8=1}^b\big|\gamma_{r_js_j}\big((t_1-t_2)+(g_1-g_6)\big)\big|\,\big|\gamma_{r_ks_k}\big((t_1-t_2)+(g_3-g_8)\big)\big|\\
&\qquad\times\big|\gamma_{s_jr_j}\big((t_1-t_2)+(g_2-g_5)\big)\big|\,\big|\gamma_{s_kr_k}\big((t_1-t_2)+(g_4-g_7)\big)\big|.
\end{align*}
Summing first over $g_5$, then over $g_7$, then over $g_8$ and then over $t_2$, we get that this term is $O(b/(n-b+1))$.