Bootstrap independence test for functional linear models
Wenceslao González-Manteiga, Gil González-Rodríguez, Adela Martínez-Calvo, Eduardo García-Portugués
Abstract
Functional data have been the subject of many research works over the last years. Functional regression is one of the most discussed issues. Specifically, significant advances have been made for functional linear regression models with scalar response. Let $(H,\langle\cdot,\cdot\rangle)$ be a separable Hilbert space. We focus on the model $Y=\langle\Theta,X\rangle+b+\varepsilon$, where $Y$ and $\varepsilon$ are real random variables, $X$ is an $H$-valued random element, and the model parameters $b$ and $\Theta$ are in $\mathbb{R}$ and $H$, respectively. Furthermore, the error satisfies $E(\varepsilon|X)=0$ and $E(\varepsilon^2|X)=\sigma^2<\infty$. A consistent bootstrap method to calibrate the distribution of statistics for testing $H_0:\Theta=0$ versus $H_1:\Theta\neq 0$ is developed. The asymptotic theory, as well as a simulation study and a real data application illustrating the usefulness of our proposed bootstrap in practice, is presented.

Keywords:
Bootstrap; bootstrap consistency; functional linear regression; functional principal components analysis; hypothesis test.
(Affiliations: Department of Statistics and Operations Research, University of Santiago de Compostela, Spain; Department of Statistics and Operations Research and Mathematics Didactics, University of Oviedo, Spain. Corresponding author e-mail: [email protected].)

1 Introduction

Nowadays, Functional Data Analysis (FDA) has turned into one of the most interesting statistical fields. In particular, functional regression models have been studied from a parametric point of view (see Ramsay and Silverman (2002, 2005)) and from a non-parametric one (see Ferraty and Vieu (2006)), the most recent advances being compiled in Ferraty and Romain (2011). This work focuses on the parametric approach, specifically on the functional linear regression model with scalar response described below.

Let $(H,\langle\cdot,\cdot\rangle)$ be a separable Hilbert space, and let $\|\cdot\|$ be the norm associated with its inner product. Moreover, let $(\Omega,\sigma,P)$ be a probability space and let us consider a measurable mapping $(X,Y)$ from $\Omega$ to $H\times\mathbb{R}$; that is, $X$ is an $H$-valued random element whereas $Y$ is a real random variable. In this situation, let us assume that $(X,Y)$ verifies the following linear model with scalar response,
$$Y=\langle\Theta,X\rangle+b+\varepsilon, \qquad (1)$$
where $\Theta\in H$ is a fixed functional model parameter, $b\in\mathbb{R}$ is the intercept term, and $\varepsilon$ is a real random variable such that $E(\varepsilon|X)=0$ and $E(\varepsilon^2|X)=\sigma^2<\infty$. Many authors have dealt with model (1), the methods based on Functional Principal Components Analysis (FPCA) being amongst the most popular ones to estimate the model parameters (see Cardot, Ferraty, and Sarda (1999, 2003), Cai and Hall (2006), Hall and Hosseini-Nasab (2006), and Hall and Horowitz (2007)).

The main aim of this work is to develop a consistent general bootstrap resampling approach to calibrate the distribution of statistics for testing the significance of the relationship between $X$ and $Y$, that is, for testing $H_0:\Theta=0$ versus $H_1:\Theta\neq 0$, on the basis of a simple random sample $\{(X_i,Y_i)\}_{i=1}^{n}$ drawn from $(X,Y)$. Bootstrap techniques become a useful alternative tool when the asymptotic distribution of a test statistic is unknown, or when it is inaccurate due to a small sample size.

Since its introduction by Efron (1979), it is well known that the bootstrap method provides a distribution approximation which can be applied in a large number of situations, such as the calibration of pivotal quantities in the finite-dimensional context (see Bickel and Freedman (1981), and Singh (1981)). As far as multivariate regression models are concerned, the validity of the bootstrap for linear and non-parametric models has also been established in the literature (see Freedman (1981), and Cao-Abad (1991)).
Currently, the application of the bootstrap to the functional field has been successfully initiated. For instance, Cuevas, Febrero, and Fraiman (2006) proposed bootstrap confidence bands for several functional estimators, such as the sample and the trimmed functional means. In the regression context, Ferraty, Van Keilegom, and Vieu (2010), and González-Manteiga and Martínez-Calvo (2011) showed the validity of the bootstrap in the estimation of the non-parametric functional regression and the functional linear model, respectively, when the response is scalar. They also proposed pointwise confidence intervals for the regression operator involved in each case. In addition, the asymptotic validity of a componentwise bootstrap procedure was proved by Ferraty, Van Keilegom, and Vieu (2012) when a non-parametric regression is considered and both response and regressor are functional.

Bootstrap techniques can also be very helpful for testing purposes, since they can be used to approximate the distribution of the statistic under the null hypothesis $H_0$. For example, Cuevas, Febrero, and Fraiman (2004) developed a sort of parametric bootstrap to obtain quantiles for an ANOVA test, and González-Rodríguez, Colubi, and Gil (2012) proved the validity of a residual bootstrap in that context. Hall and Vial (2006) and, more recently, Bathia, Yao, and Ziegelmann (2010) studied the finite dimensionality of functional data using a bootstrap approximation for independent and dependent data, respectively.

As indicated previously, testing the lack of dependence between $X$ and $Y$ is our goal. This issue has stirred up great interest during the last years due to its practical applications in the functional context.
For instance, Kokoszka, Maslova, Sojka, and Zhu (2008) proposed a test for lack of dependence in the functional linear model with functional response, which was applied to magnetometer curves consisting of minute-by-minute records of the horizontal intensity of the magnetic field measured at observatories located at different latitudes. The aim was to analyse whether the high-latitude records had a linear effect on the mid- or low-latitude records. On the other hand, Cardot, Prchal, and Sarda (2007) presented a statistical procedure to check if a real-valued covariate has an effect on a functional response in a non-parametric regression context, using this methodology for a study of atmospheric radiation. In this case, the dataset consisted of radiation profile curves measured at a random time, and the authors tested whether the radiation profiles changed along time.

Regarding the regression model (1), testing the significance of the relationship between a functional covariate and a scalar response has been the subject of recent contributions, and asymptotic approaches for this problem can be found in Cardot, Ferraty, Mas, and Sarda (2003) or Kokoszka, Maslova, Sojka, and Zhu (2008). The methods presented in these two works are mainly based on the calibration of the statistics' distribution by using asymptotic distribution approximations. In contrast, we propose a consistent bootstrap calibration in order to approximate the statistics' distribution. For that purpose, we first introduce in Section 2 some notation and basic concepts about the regression model (1), the asymptotic theory for the testing procedure, and the consistency of the bootstrap techniques that we propose. In Section 3, the bootstrap calibration is presented as an alternative to the asymptotic theory previously exposed. Then, Section 4 is devoted to the empirical results: a simulation study and a real data application allow us to show the performance of our bootstrap methodology in comparison with the asymptotic approach.
Finally, some conclusions are summarized in Section 5.

2 Asymptotic theory and bootstrap consistency

Let us consider the model (1) given in the previous Section 1. In this framework, the regression function, denoted by $m$, is given by $m(x)=E(Y|X=x)=\langle\Theta,x\rangle+b$ for all $x\in H$. The aim is to develop correct and consistent bootstrap techniques for testing
$$H_0:\Theta=0 \quad\text{versus}\quad H_1:\Theta\neq 0 \qquad (2)$$
on the basis of a random sample $\{(X_i,Y_i)\}_{i=1}^{n}$ of independent and identically distributed random elements with the same distribution as $(X,Y)$. That is, our objective is to check whether $X$ and $Y$ are linearly independent ($H_0$) or not ($H_1$). Next, we briefly present some technical background required for the theoretical results developed throughout the section.

The Riesz Representation Theorem ensures that the functional linear model with scalar response can be handled theoretically within the considered framework. Specifically, let $H$ be the separable Hilbert space of square Lebesgue integrable functions on a given compact set $C\subset\mathbb{R}$, denoted by $L^2(C,\lambda)$, with the usual inner product and the associated norm $\|\cdot\|$. The functional linear model with scalar response between a random function $X$ and a real random variable $Y$ is defined as
$$Y=\Phi(X)+\epsilon, \qquad (3)$$
where $\Phi$ is a continuous linear operator (that is, $\Phi\in H'$, $H'$ being the dual space of $H$ with norm $\|\cdot\|'$), and $\epsilon$ is a real random variable with finite variance and independent of $X$. By virtue of the Riesz Representation Theorem, $H$ and $H'$ are isometrically identified, in such a way that for any $\Phi\in H'$ there exists a unique $\Theta\in H$ such that $\|\Theta\|=\|\Phi\|'$ and $\Phi(h)=\langle\Theta,h\rangle$ for all $h\in H$.
Consequently, the model presented in equation (3) is just a particular case of the one considered in (1).

Previous works on functional linear models assume $b=0$ (see Cardot, Ferraty, Mas, and Sarda (2003), and Kokoszka, Maslova, Sojka, and Zhu (2008)). Of course, the intercept term can be embedded in the variable counterpart of the model, as in the multivariate case, as follows. Let $H_e$ be the product space $H\times\mathbb{R}$ with the corresponding inner product $\langle\cdot,\cdot\rangle_e$, and define $X'=(X,1)$ and $\Theta'=(\Theta,b)\in H_e$. Then the model considered in (1) can be rewritten as $Y=\langle\Theta',X'\rangle_e+\varepsilon$ (and consequently $X'$ cannot be assumed to be centered). Nevertheless, in the context of the linear independence test the aim is to check whether $\Theta=0$ or not, and this is not equivalent to checking whether $\Theta'=0$ or not. In addition, in practice the intercept term $b$ cannot be assumed to equal 0. Thus, in order to avoid any kind of confusion, in this paper the intercept term $b$ is written explicitly.

In the same way, in the above-mentioned papers the random element $X$ is assumed to be centered. Although in many cases the asymptotic distribution of the proposed statistics does not change if $\{X_i\}_{i=1}^{n}$ is replaced by the dependent sample $\{X_i-\bar{X}\}_{i=1}^{n}$, the situation for the bootstrap version of the statistics could be quite different. In fact, as will be shown afterwards, different bootstrap statistics can be considered when this replacement is done.
Hence, for the developments in this section, it will not be assumed that the variable $X$ is centered.

2.2 Linear independence test

Given a generic $H$-valued random element $H$ such that $E(\|H\|^2)<\infty$, its associated covariance operator $\Gamma_H$ is defined as the operator $\Gamma_H:H\to H$,
$$\Gamma_H(h)=E(\langle H-\mu_H,h\rangle(H-\mu_H))=E(\langle H,h\rangle H)-\langle\mu_H,h\rangle\mu_H,$$
for all $h\in H$, where $\mu_H\in H$ denotes the expected value of $H$. From now on, it will be assumed that $E(\|X\|^4)<\infty$ and thus, as a consequence of Hölder's inequality, $E(Y^2)<\infty$. Whenever there is no possible confusion, $\Gamma_X$ will be abbreviated as $\Gamma$. It is well known that $\Gamma$ is a nuclear and self-adjoint operator. In particular, it is a compact operator of trace class and thus, by virtue of the Spectral Decomposition Theorem, there is an orthonormal basis of $H$, $\{v_j\}_{j\in\mathbb{N}}$, consisting of eigenvectors of $\Gamma$ with corresponding eigenvalues $\{\lambda_j\}_{j\in\mathbb{N}}$, that is, $\Gamma(v_j)=\lambda_j v_j$ for all $j\in\mathbb{N}$. As usual, the eigenvalues are assumed to be arranged in decreasing order ($\lambda_1\ge\lambda_2\ge\ldots$). Since the operator $\Gamma$ is symmetric and non-negative definite, the eigenvalues are non-negative.

In a similar way, let us consider the cross-covariance operator $\Delta:H\to\mathbb{R}$ between $X$ and $Y$ given by
$$\Delta(h)=E(\langle X-\mu_X,h\rangle(Y-\mu_Y))=E(\langle X,h\rangle Y)-\langle\mu_X,h\rangle\mu_Y,$$
for all $h\in H$, where $\mu_Y\in\mathbb{R}$ denotes the expected value of $Y$. Of course, $\Delta\in H'$, and the following relation between the considered operators and the regression parameter $\Theta$ is satisfied:
$$\Delta(\cdot)=\langle\Gamma(\cdot),\Theta\rangle. \qquad (4)$$
The Hilbert space $H$ can be expressed as the direct sum of the two orthogonal subspaces induced by the self-adjoint operator $\Gamma$: the kernel or null space of $\Gamma$, $N(\Gamma)$, and the closure of the image or range of $\Gamma$, $\overline{R(\Gamma)}$.
Thus, $\Theta$ is determined uniquely by $\Theta=\Theta_1+\Theta_2$ with $\Theta_1\in N(\Gamma)$ and $\Theta_2\in\overline{R(\Gamma)}$. As $\Theta_1\in N(\Gamma)$, it is easy to check that $\mathrm{Var}(\langle X,\Theta_1\rangle)=0$ and, consequently, the model introduced in (1) can be expressed as
$$Y=\langle\Theta_2,X\rangle+\langle\Theta_1,\mu_X\rangle+b+\varepsilon.$$
Therefore, it is not possible to distinguish between the term $\langle\Theta_1,\mu_X\rangle$ and the intercept term $b$, and consequently it is not possible to check whether $\Theta_1=0$ or not. Taking this into account, the hypothesis test will be restricted to checking
$$H_0:\Theta_2=0 \quad\text{versus}\quad H_1:\Theta_2\neq 0 \qquad (5)$$
on the basis of the available sample information. Note that in this case, according to the relation between the operators and the regression parameter shown in (4), $\Theta_2=0$ if, and only if, $\Delta(h)=0$ for all $h\in H$. Consequently, the hypothesis test in (5) is equivalent to
$$H_0:\|\Delta\|'=0 \quad\text{versus}\quad H_1:\|\Delta\|'\neq 0. \qquad (6)$$

Remark 1.
It should be recalled that, in previous works, $\mu_X$ is assumed to be equal to 0. Thus, the preceding reasoning leads to the fact that $\Theta_1$ cannot be estimated based on the information provided by $X$ (see, for instance, Cardot, Ferraty, Mas, and Sarda (2003)). Consequently, the hypothesis testing is also restricted to the one in the preceding equations. In addition, in Cardot, Ferraty, Mas, and Sarda (2003) it is also assumed, for technical reasons, that $\overline{R(\Gamma)}$ is an infinite-dimensional space. On the contrary, this restriction is not imposed in the study developed here.

Remark 2. Note that another usual assumption is that the intercept term vanishes. Although this is not common in most situations, it should be noted that if $b=0$ and $X$ is not assumed to be centered (as in this work), then an interesting possibility appears: to check whether $\Theta_1=0$ or not by checking the nullity of the intercept term of the model, and thus to check the original hypothesis test in (2). This open problem cannot be solved with the methodology employed in the current paper (or in the previous ones), because the idea is based on checking (6), which is equivalent to the restricted test (5) but not to the unrestricted one in (2).

According to the relation between $\|\cdot\|'$ and $\|\cdot\|$, the dual norm of $\Delta\in H'$ can be expressed equivalently in terms of the $H$-valued random element $(X-\mu_X)(Y-\mu_Y)$ as follows:
$$\|\Delta\|'=\|\langle E((X-\mu_X)(Y-\mu_Y)),\cdot\rangle\|'=\|E((X-\mu_X)(Y-\mu_Y))\|.$$
Thus, based on an i.i.d.
sample $\{(X_i,Y_i)\}_{i=1}^{n}$ drawn from $(X,Y)$,
$$D=\|E((X-\mu_X)(Y-\mu_Y))\|=\|T\|$$
can be estimated in a natural way by means of its empirical counterpart $D_n=\|T_n\|$, where $T_n$ is the $H$-valued random element given by
$$T_n=\frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y}),$$
$\bar{X}$ and $\bar{Y}$ denoting, as usual, the corresponding sample means. The next theorem establishes some basic properties of $T_n$.

Theorem 1.
Assume that (1) holds with $E(\varepsilon)=0$, $E(\varepsilon^2)=\sigma^2<\infty$ and $E(\|X\|^4)<\infty$. Then:

1. $E(T_n)=\frac{n-1}{n}\,E((X-\mu_X)(Y-\mu_Y))$.
2. $T_n$ converges a.s.-$P$ to $E((X-\mu_X)(Y-\mu_Y))$ as $n\to\infty$.
3. $\sqrt{n}\,(T_n-E((X-\mu_X)(Y-\mu_Y)))$ converges in law, as $n\to\infty$, to a centered Gaussian element $Z$ in $H$ with covariance operator
$$\Gamma_Z(\cdot)=\sigma^2\,\Gamma(\cdot)+\Gamma_{(X-\mu_X)\langle X-\mu_X,\Theta\rangle}(\cdot).$$

Proof.
Since $T_n$ can be equivalently expressed as
$$T_n=\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu_X)(Y_i-\mu_Y)-(\bar{X}-\mu_X)(\bar{Y}-\mu_Y),$$
it is straightforward to check item 1. The a.s.-$P$ convergence is a direct application of the SLLN for separable Hilbert-valued random elements. On the other hand, given that $E(\|(X-\mu_X)(Y-\mu_Y)\|^2)<\infty$, the convergence in law can be deduced by applying the CLT for separable Hilbert-valued random elements (see, for instance, Laha and Rohatgi (1979)) together with Slutsky's Theorem. The concrete expression of the operator $\Gamma_Z$, that is, $\Gamma_Z=\Gamma_{(X-\mu_X)(Y-\mu_Y)}=\Gamma_{(X-\mu_X)\varepsilon}+\Gamma_{(X-\mu_X)\langle X-\mu_X,\Theta\rangle}$, can be obtained by simple computations.

In order to simplify the notation, from now on, given any $H$-valued random element $H$ with $E(\|H\|^2)<\infty$, $Z_H$ will denote a centered Gaussian element in $H$ with covariance operator $\Gamma_H$.

Corollary 1. Under the conditions of Theorem 1, if the null hypothesis $H_0:\|\Delta\|'=0$ is satisfied, then $\sqrt{n}\,T_n$ converges in law to $Z_{(X-\mu_X)\varepsilon}$ (with covariance operator $\sigma^2\,\Gamma$), and consequently $\|\sqrt{n}\,T_n\|$ converges in law to $\|Z_{(X-\mu_X)\varepsilon}\|$.

In contrast to Theorem 1 in Cardot, Ferraty, Mas, and Sarda (2003), the result in Corollary 1 is established directly on the Hilbert space $H$ instead of on its dual space. In addition, no assumption of centered $X$ random elements or null intercept term is necessary; nevertheless, these two assumptions could easily be removed in that paper in order to establish a dual result of Corollary 1. Furthermore, in view of Corollary 1, the asymptotic null distribution of $\|\sqrt{n}\,T_n\|$ is not explicitly known.
This is the reason why no further research on how to use this statistic (or its dual one) in practice for checking whether $\Theta$ equals 0 is carried out in Cardot, Ferraty, Mas, and Sarda (2003); instead, an alternative statistic is considered there, which is used in the simulation section for comparative purposes. Nevertheless, it is still possible to use $\|\sqrt{n}\,T_n\|$ as a core statistic in order to solve this test in practice by means of bootstrap techniques.

One natural way of using the asymptotic result of Corollary 1 for solving the test under study is as follows. Consider a consistent (at least under $H_0$) estimator $\sigma_n^2$ of $\sigma^2$ (for instance, the sample variance of $Y$ could be used, or perhaps the one introduced by Cardot, Ferraty, Mas, and Sarda (2003), provided that its theoretical behavior is analyzed). Then, according to Slutsky's Theorem, $\|\sqrt{n}\,T_n\|/\sigma_n$ converges in law under $H_0$ to the norm of $Z_X$. As its covariance operator $\Gamma$ is unknown, it can be approximated by the empirical one, $\Gamma_n$; thus, $\|Z_X\|$ can be approximated by $\|Z_n\|$, $Z_n$ being a centered Gaussian element in $H$ with covariance operator $\Gamma_n$. Of course, the distribution of $\|Z_n\|$ is still difficult to compute directly; nevertheless, one can make use of the CLT and approximate its distribution by Monte Carlo through the distribution of
$$\bigg\|\frac{1}{\sqrt{m}}\sum_{i=1}^{m}(X_i^*-\bar{X})\bigg\|$$
for a large value of $m$, $\{X_i^*\}_{i=1}^{m}$ being i.i.d. random elements chosen at random from the fixed population $(X_1,\ldots,X_n)$. Obviously, this method is a precursor of the bootstrap procedures.

In order to complete the asymptotic study of the statistic $\|\sqrt{n}\,T_n\|$, its behavior under local alternatives is going to be analyzed. To this purpose, let us consider $\Theta\in H$ such that $\|\Theta\|>0$ and, given $\delta_n>0$, define
$$Y_i^n=\Big\langle X_i,\frac{\delta_n}{\sqrt{n}}\Theta\Big\rangle+b+\varepsilon_i,$$
for all $i\in\{1,\ldots,n\}$. Then the null hypothesis is not verified. However, if $\delta_n/\sqrt{n}\to 0$, then $\|(\delta_n/\sqrt{n})\Theta\|\to 0$; that is, $H_0$ is approached with "speed" $\delta_n/\sqrt{n}$. In these conditions,
$$E\big((X_i-\mu_X)(Y_i^n-\mu_{Y_i^n})\big)=\frac{\delta_n}{\sqrt{n}}\,\Gamma(\Theta),$$
and thus the following theorem, which establishes the behavior of the statistic under the considered local alternatives, can be easily deduced.

Theorem 2.
Under the conditions of Theorem 1 and with the above notation, if $\delta_n\to\infty$ and $\delta_n/\sqrt{n}\to 0$ as $n\to\infty$, then
$$P\Bigg(\bigg\|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(X_i-\bar{X})(Y_i^n-\bar{Y}^n)\bigg\|\le t\Bigg)\to 0$$
as $n\to\infty$, for all $t\in\mathbb{R}$.

2.4 Bootstrap procedures

The difficulty of using the previously proposed statistic to solve the hypothesis test by means of asymptotic procedures suggests the development of appropriate bootstrap techniques. The asymptotic consistency of a bootstrap approach is guaranteed if the associated bootstrap statistic converges in law to a non-degenerate distribution, irrespective of whether $H_0$ is satisfied or not. In addition, in order to ensure its asymptotic correctness, this limit distribution must coincide with the asymptotic one of the testing statistic provided that $H_0$ holds.

Consequently, the asymptotic limit established in Corollary 1 plays a fundamental role in defining appropriate bootstrap statistics. In this way, recall that
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\big((X_i-\bar{X})(Y_i-\bar{Y})-E((X-\mu_X)(Y-\mu_Y))\big)$$
converges in law to $Z_{(X-\mu_X)(Y-\mu_Y)}$, irrespective of whether $H_0$ is satisfied or not, and, in addition, if $H_0$ is satisfied then $\Gamma_{(X-\mu_X)(Y-\mu_Y)}=\sigma^2\,\Gamma$. Thus, this is a natural statistic to be mimicked by a bootstrap one. Note that
$$\Bigg(\frac{1}{n}\sum_{i=1}^{n}(Y_i-\bar{Y})^2\Bigg)^{1/2}\Bigg(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(X_i-\mu_X)\Bigg) \qquad (7)$$
converges in law to $(\sigma^2+E(\langle X-\mu_X,\Theta\rangle^2))^{1/2}\,Z_X$, whose covariance operator is $(\sigma^2+E(\langle X-\mu_X,\Theta\rangle^2))\,\Gamma$. In particular, when $H_0$ is satisfied, this operator reduces again to $\sigma^2\,\Gamma$. Consequently, another possibility consists in mimicking this second statistic by means of a bootstrap one, improving the approximation suggested in the previous subsection.
Note that the left term in the product in equation (7) could be substituted by any other estimator of $\sigma^2$ that is consistent under $H_0$ and converges to a finite constant if $H_0$ does not hold. Anyway, this second approximation could lead to worse results under the null hypothesis, because the possible dependence between $X$ and $\varepsilon$ is lost (as the resampling would focus only on the $X$ information).

Two possibilities for mimicking the above-mentioned statistics are going to be explored, namely a "naive" paired bootstrap and a "wild" bootstrap approach. In order to achieve this goal, let $\{(X_i^*,Y_i^*)\}_{i=1}^{n}$ be a collection of i.i.d. random elements drawn at random from $(X_1,Y_1),\ldots,(X_n,Y_n)$, and let us consider the following "naive" paired bootstrap statistic:
$$T_n^{N*}=\frac{1}{n}\sum_{i=1}^{n}\big((X_i^*-\bar{X}^*)(Y_i^*-\bar{Y}^*)-(X_i-\bar{X})(Y_i-\bar{Y})\big).$$
In addition, let us consider $\sigma_n^2=(1/n)\sum_{i=1}^{n}(Y_i-\bar{Y})^2$ and $\sigma_n^{2*}=(1/n)\sum_{i=1}^{n}(Y_i^*-\bar{Y}^*)^2$, the empirical estimator of $\sigma^2$ under $H_0$ and its corresponding bootstrap version.

The asymptotic behavior of the "naive" bootstrap statistic will be analyzed through some results on bootstrapping general empirical measures obtained by Giné and Zinn (1990). It should be noted that the bootstrap results in that paper refer to empirical processes indexed by a class of functions $\mathcal{F}$, which in particular extend to the bootstrap of the mean in separable Banach (and thus Hilbert) spaces. In order to establish this connection, it is enough to choose $\mathcal{F}=\{f\in H':\|f\|'\le 1\}$ (see Giné (1997) and Kosorok (2008) for a general overview of indexed empirical processes). $\mathcal{F}$ is image admissible Suslin (considering the weak topology).
In addition, the envelope function satisfies $F(h)=\sup_{f\in\mathcal{F}}|f(h)|=\|h\|$ for all $h\in H$, and thus $E(F(X)^2)=E(\|X\|^2)<\infty$.

Consider the bounded and linear (thus continuous) operator $\delta$ from $H$ to $l^\infty(\mathcal{F})$ given by $\delta(h)(f)=\delta_h(f)=f(h)$ for all $h\in H$ and all $f\in\mathcal{F}$, and denote by $R(\delta)\subset l^\infty(\mathcal{F})$ its range. As $\|\delta(h)\|_\infty=\|h\|$ for all $h\in H$, there exists $\delta^{-1}:R(\delta)\to H$, and $\delta^{-1}$ is continuous. In addition, as $R(\delta)$ is closed, the Dugundji Extension Theorem allows us to consider a continuous extension $\delta^{-1}:l^\infty(\mathcal{F})\to H$ (see, for instance, Kosorok (2008), Lemma 6.16 and Theorem 10.9). Thus, following the usual empirical process notation, the empirical process $(1/\sqrt{n})\sum_{i=1}^{n}(\delta_{X_i}-P)$ indexed by $\mathcal{F}$ is directly connected with $(1/\sqrt{n})\sum_{i=1}^{n}(X_i-E(X))$ by means of the continuous mapping $\delta^{-1}$, and vice versa.

Some consequences of this formulation applied to the work developed by Giné and Zinn (1990) lead to the results collected in the following lemma.

Lemma 1.
Let $\xi$ be a measurable mapping from a probability space $(\Omega,\sigma,P)$ to a separable Hilbert space $(H,\langle\cdot,\cdot\rangle)$ with corresponding norm $\|\cdot\|$ such that $E(\|\xi\|^2)<\infty$. Let $\{\xi_i\}_{i=1}^{n}$ be a sequence of i.i.d. random elements with the same distribution as $\xi$, and let $\{\xi_i^*\}_{i=1}^{n}$ be i.i.d. drawn from $\{\xi_i\}_{i=1}^{n}$. Then:

1. $\sqrt{n}\,(\bar{\xi}^*-\bar{\xi})$ converges in law to $Z_\xi$ a.s.-$P$.
2. $\bar{\xi}^*$ converges in probability to $E(\xi)$ a.s.-$P$.
3. $(1/n)\sum_{i=1}^{n}\|\xi_i^*\|$ converges in probability to $E(\|\xi\|)$ a.s.-$P$.

Proof.
To prove item 1, note that the CLT for separable Hilbert-valued random elements (see, for instance, Laha and Rohatgi (1979)) together with the Continuous Mapping Theorem applied to $\delta$ guarantees that $\mathcal{F}\in\mathrm{CLT}(P)$. Thus, Theorem 2.4 of Giné and Zinn (1990) ensures that $n^{1/2}(\hat{P}_n(w)-P_n(w))$ converges in law to a Gaussian process on $\mathcal{F}$, $G=\delta(Z_\xi)$, a.s.-$P$. Consequently, by applying again the Continuous Mapping Theorem, $\sqrt{n}\,(\bar{\xi}^*-\bar{\xi})=\delta^{-1}(n^{1/2}(\hat{P}_n(w)-P_n(w)))$ converges in law to $Z_\xi=\delta^{-1}(G)$.

Items 2 and 3 can be checked in a similar way by applying Theorem 2.6 of Giné and Zinn (1990). Note that item 1 is also a direct consequence of Remark 2.5 of Giné and Zinn (1990); nevertheless, it was proven based on Theorem 2.4 to illustrate the technique.

The following theorem establishes the asymptotic consistency and correctness of the "naive" bootstrap approach.

Theorem 3.
Under the conditions of Theorem 1, we have that $\sqrt{n}\,T_n^{N*}$ converges in law to $Z_{(X-\mu_X)(Y-\mu_Y)}$ a.s.-$P$. In addition, $\sigma_n^{2*}$ converges in probability to $\sigma_Y^2=\sigma^2+E(\langle X-\mu_X,\Theta\rangle^2)$ a.s.-$P$.

Proof. First of all, consider the bootstrap statistic
$$S_n^*=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\big((X_i^*-\mu_X)(Y_i^*-\mu_Y)-(X_i-\mu_X)(Y_i-\mu_Y)\big),$$
and note that $\{(X_i^*-\mu_X)(Y_i^*-\mu_Y)\}_{i=1}^{n}$ are i.i.d. $H$-valued random elements chosen at random from the "bootstrap population" $\{(X_i-\mu_X)(Y_i-\mu_Y)\}_{i=1}^{n}$. Then, item 1 in Lemma 1 guarantees that $S_n^*$ converges in law to $Z_{(X-\mu_X)(Y-\mu_Y)}$ a.s.-$P$. On the other hand, $S_n^*$ equals $\sqrt{n}\,T_n^{N*}$ plus the following terms:
$$-\frac{1}{\sqrt{n}}\,\big[\sqrt{n}(\bar{X}^*-\bar{X})\big]\big[\sqrt{n}(\bar{Y}^*-\bar{Y})\big]+\big[\sqrt{n}(\bar{X}^*-\bar{X})\big](\bar{Y}^*-\mu_Y)+(\bar{X}^*-\mu_X)\big[\sqrt{n}(\bar{Y}^*-\bar{Y})\big].$$
Lemma 1 guarantees that each of these three terms converges in probability to 0 a.s.-$P$, and consequently the convergence in law stated in the theorem is proven. Finally, the convergence of $\sigma_n^{2*}$ holds by virtue of items 2 and 3 in Lemma 1.

The "naive" bootstrap approach is described in the following algorithm.

Algorithm 1 (Naive Bootstrap).

Step 1. Compute the value of the statistic $\|T_n\|$ (or the value $\|T_n\|/\sigma_n$).

Step 2. Draw $\{(X_i^*,Y_i^*)\}_{i=1}^{n}$, a sequence of i.i.d. random elements chosen at random from the initial sample $(X_1,Y_1),\ldots,(X_n,Y_n)$, and compute $a_n=\|T_n^{N*}\|$ (or $b_n=\|T_n^{N*}\|/\sigma_n^*$).

Step 3. Repeat Step 2 a large number of times $B\in\mathbb{N}$ in order to obtain a sequence of values $\{a_n^l\}_{l=1}^{B}$ (or $\{b_n^l\}_{l=1}^{B}$).

Step 4. Approximate the p-value of the test by the proportion of values in $\{a_n^l\}_{l=1}^{B}$ greater than or equal to $\|T_n\|$ (or by the proportion of values in $\{b_n^l\}_{l=1}^{B}$ greater than or equal to $\|T_n\|/\sigma_n$).

Analogously, let $\{\varepsilon_i^*\}_{i=1}^{n}$ be i.i.d.
centered real random variables such that $E((\varepsilon_i^*)^2)=1$ and $\int_0^\infty (P(|\varepsilon^*|>t))^{1/2}\,dt<\infty$ (to guarantee this last assumption, it is enough that $E(|\varepsilon_i^*|^d)<\infty$ for a certain $d>2$), and let us consider the "wild" bootstrap statistic
$$T_n^{W*}=\frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})\,\varepsilon_i^*.$$
In order to analyze the asymptotic behavior of the "wild" bootstrap statistic, the following lemma will be fundamental. It is a particularization of a result due to Ledoux, Talagrand and Zinn (cf. Giné and Zinn (1990), and Ledoux and Talagrand (1988)). See also the Multiplier Central Limit Theorem in Kosorok (2008) for the counterpart for empirical processes indexed by a class of measurable functions.
Lemma 2.
Let $\xi$ be a measurable mapping from a probability space $(\Omega,\sigma,P)$ to a separable Hilbert space $(H,\langle\cdot,\cdot\rangle)$ with corresponding norm $\|\cdot\|$ such that $E(\|\xi\|)<\infty$. Let $\{\xi_i\}_{i=1}^{n}$ be a sequence of i.i.d. random elements with the same distribution as $\xi$, and let $\{W_i\}_{i=1}^{n}$ be a sequence of i.i.d. random variables (in the same probability space and independent of $\{\xi_i\}_{i=1}^{n}$) with $E(W_i)=0$ and $\int_0^\infty (P(|W_1|>t))^{1/2}\,dt<\infty$. Then the following statements are equivalent:

1. $E(\|\xi\|^2)<\infty$ (and consequently $\sqrt{n}\,(\bar{\xi}-E(\xi))$ converges in law to $Z_\xi$).
2. For almost all $\omega\in\Omega$, $(1/\sqrt{n})\sum_{i=1}^{n}W_i\,\xi_i(\omega)$ converges in law to $Z_\xi$.

As a consequence, the asymptotic consistency and correctness of the "wild" bootstrap approach is guaranteed by the following theorem.
Theorem 4.
Under the conditions of Theorem 1, we get that $\sqrt{n}\,T_n^{W*}$ converges in law to $Z_{(X-\mu_X)(Y-\mu_Y)}$ a.s.-$P$.

Proof. According to Lemma 2, for almost all $\omega\in\Omega$,
$$S_n^*=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(X_i(\omega)-\mu_X)(Y_i(\omega)-\mu_Y)\,\varepsilon_i^*$$
converges in law to $Z_{(X-\mu_X)(Y-\mu_Y)}$. Moreover, $(\bar{Y}(\omega)-\mu_Y)$ and $(\bar{X}(\omega)-\mu_X)$ converge to 0 (by the SLLN). Finally, note that, for almost all $\omega\in\Omega$,
$$S_n^*=\sqrt{n}\,T_n^{W*}+(\bar{Y}(\omega)-\mu_Y)\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(X_i(\omega)-\mu_X)\varepsilon_i^*+(\bar{X}(\omega)-\mu_X)\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(Y_i(\omega)-\mu_Y)\varepsilon_i^*-(\bar{X}(\omega)-\mu_X)(\bar{Y}(\omega)-\mu_Y)\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\varepsilon_i^*.$$
Lemma 2, together with the above-mentioned SLLN, guarantees the convergence in probability to 0 of the last three summands, and thus the result is reached by virtue of Slutsky's Theorem.

The proposed "wild" bootstrap approach can be applied by means of the following algorithm.
Algorithm 2 (Wild Bootstrap).

Step 1. Compute the value of the statistic $\|T_n\|$ (or the value $\|T_n\|/\sigma_n$).

Step 2. Draw $\{\varepsilon_i^*\}_{i=1}^{n}$, a sequence of i.i.d. random variables distributed as $\varepsilon^*$, and compute $a_n=\|T_n^{W*}\|$ (or $b_n=\|T_n^{W*}\|/\sigma_n^*$; in this case, $\sigma_n^*$ is computed as in Step 2 of the Naive Bootstrap algorithm).

Step 3. Repeat Step 2 a large number of times $B\in\mathbb{N}$ in order to obtain a sequence of values $\{a_n^l\}_{l=1}^{B}$ (or $\{b_n^l\}_{l=1}^{B}$).

Step 4. Approximate the p-value of the test by the proportion of values in $\{a_n^l\}_{l=1}^{B}$ greater than or equal to $\|T_n\|$ (or by the proportion of values in $\{b_n^l\}_{l=1}^{B}$ greater than or equal to $\|T_n\|/\sigma_n$).

3 Bootstrap calibration

For simplicity, suppose from now on that $b=0$ and that $X$ has zero mean in (1), that is, suppose that the regression model is given by $Y=\langle\Theta,X\rangle+\varepsilon$. Furthermore, $\Delta(h)=E(\langle X,h\rangle Y)$ and, analogously, $\Gamma(h)=E(\langle X,h\rangle X)$. In such a case, if we assume that $\sum_{j=1}^{\infty}(\Delta(v_j)/\lambda_j)^2<+\infty$ and $\mathrm{Ker}(\Gamma)=\{0\}$, then
$$\Theta=\sum_{j=1}^{\infty}\frac{\Delta(v_j)}{\lambda_j}\,v_j,$$
$\{(\lambda_j,v_j)\}_{j\in\mathbb{N}}$ being the eigenvalues and eigenfunctions of $\Gamma$ (see Cardot, Ferraty, and Sarda (2003)). A natural estimator for $\Theta$ is the FPCA estimator based on $k_n$ functional principal components given by
$$\hat{\Theta}_{k_n}=\sum_{j=1}^{k_n}\frac{\Delta_n(\hat{v}_j)}{\hat{\lambda}_j}\,\hat{v}_j,$$
where $\Delta_n$ is the empirical estimator of $\Delta$, that is, $\Delta_n(h)=(1/n)\sum_{i=1}^{n}\langle X_i,h\rangle Y_i$, and $\{(\hat{\lambda}_j,\hat{v}_j)\}_{j\in\mathbb{N}}$ are the eigenvalues and eigenfunctions of $\Gamma_n$, the empirical estimator of $\Gamma$: $\Gamma_n(h)=(1/n)\sum_{i=1}^{n}\langle X_i,h\rangle X_i$.

Different statistics can be used for testing the lack of dependence between $X$ and $Y$. Bearing in mind expression (5), one can think of using an estimator of $\|\Theta\|^2=\sum_{j=1}^{\infty}(\Delta(v_j)/\lambda_j)^2$ in order to test these hypotheses.
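As an illustration, the FPCA estimator $\hat{\Theta}_{k_n}$ can be sketched numerically when the curves are observed on a common uniform grid of $[0,1]$ and the $L^2$ inner product is approximated by a Riemann sum. The following is a minimal sketch under those assumptions; the function name and the discretization choices are ours, not the paper's:

```python
import numpy as np

def fpca_estimator(X, Y, k):
    """FPCA estimator of Theta with k components, for centered curves X (n x p)
    sampled on a uniform grid of [0, 1] and scalar responses Y (n,), assuming b = 0."""
    n, p = X.shape
    dx = 1.0 / (p - 1)                             # grid spacing for the Riemann sum
    evals, evecs = np.linalg.eigh(X.T @ X / n)     # discretized Gamma_n (eigh: ascending)
    lam = evals[::-1][:k] * dx                     # leading operator eigenvalues, decreasing
    V = evecs[:, ::-1][:, :k] / np.sqrt(dx)        # eigenfunctions with unit L2([0,1]) norm
    theta_hat = np.zeros(p)
    for j in range(k):
        delta_j = np.mean((X @ V[:, j]) * dx * Y)  # Delta_n(v_hat_j)
        theta_hat += (delta_j / lam[j]) * V[:, j]
    return theta_hat, lam
```

The inner product $\langle X_i,\hat{v}_j\rangle$ is approximated by `(X @ V[:, j]) * dx`; rescaling the discrete eigenvectors by $1/\sqrt{dx}$ makes them unit-norm in $L^2([0,1])$ rather than in $\mathbb{R}^p$.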
Alternatively, expression (6) can motivate a different class of statistics based on the estimation of $\|\Delta\|'^2$. An asymptotically distribution-free test based on the latter approach was given by Cardot, Ferraty, Mas, and Sarda (2003). They proposed as test statistic
$$T_{1,n} = k_n^{-1/2}\left(\hat{\sigma}^{-2}\,\|\sqrt{n}\,\Delta_n \hat{A}_n\|'^2 - k_n\right), \qquad (8)$$
where $\hat{A}_n(\cdot) = \sum_{j=1}^{k_n} \hat{\lambda}_j^{-1/2}\langle \cdot, \hat{v}_j\rangle \hat{v}_j$ and $\hat{\sigma}^2$ is a consistent estimator of $\sigma^2$. Cardot, Ferraty, Mas, and Sarda (2003) showed that, under $H_0$, $T_{1,n}$ converges in distribution to a centered Gaussian variable with variance equal to 2. Hence, $H_0$ is rejected if $|T_{1,n}| > \sqrt{2}\,z_{1-\alpha/2}$ ($z_\alpha$ being the $\alpha$–quantile of a $N(0,1)$). On the other hand, since $\|\Theta\|^2 = \sum_{j=1}^{\infty}(\Delta(v_j)/\lambda_j)^2$, one can use the statistic
$$T_{2,n} = \sum_{j=1}^{k_n}\left(\frac{\Delta_n(\hat{v}_j)}{\hat{\lambda}_j}\right)^2, \qquad (9)$$
whose limit distribution is not known. Finally, a natural competing statistic is the one proposed throughout Section 2.3,
$$T_{3,n} = \left\|\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})\right\|^2, \qquad (10)$$
which we will denote by "F–test" from now on, since it is the natural generalization of the well-known F–test in the finite-dimensional context. Another possibility is to consider the studentized version of (10),
$$T_{s,n} = \frac{1}{\hat{\sigma}^2}\left\|\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})\right\|^2, \qquad (11)$$
where $\hat{\sigma}^2$ is the empirical estimator of $\sigma^2$. In general, for statistics such as (8), (9), (10) and (11), the calibration of the distribution can be obtained by using the bootstrap.
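A minimal sketch of how the F–test statistic (10), its studentized version (11), and a wild-bootstrap calibration can be computed on gridded data. Gaussian multipliers $\varepsilon^*_i$ and the choice of $\hat{\sigma}^2$ as the variance of the centred responses under $H_0$ are our assumptions for illustration, not prescriptions from the text.

```python
import numpy as np

def f_test(X, Y, B=500, rng=None):
    """Sketch: T_{3,n} = ||(1/n) sum (X_i - Xbar)(Y_i - Ybar)||^2, its
    studentized version T_{s,n}, and wild-bootstrap p-values for both.
    X : (n, p) curves on an equispaced grid of [0, 1]; Y : (n,) responses."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    dt = 1.0 / p                          # quadrature weight for the L2 norm
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean()

    def stat(y):
        Tn = Xc.T @ y / n                 # (1/n) sum_i (X_i - Xbar) y_i on the grid
        return np.sum(Tn ** 2) * dt       # squared L2 norm

    T3 = stat(Yc)
    sigma2 = np.mean(Yc ** 2)             # residual variance under H0 (assumption)
    Ts = T3 / sigma2
    # Wild bootstrap: multiply centred responses by i.i.d. multipliers
    T3_star = np.empty(B)
    Ts_star = np.empty(B)
    for b in range(B):
        eps = rng.normal(size=n)          # Gaussian multipliers (assumption)
        ystar = Yc * eps
        T3_star[b] = stat(ystar)
        Ts_star[b] = T3_star[b] / np.mean(ystar ** 2)
    return {"T3": T3, "Ts": Ts,
            "pval_T3": float(np.mean(T3_star >= T3)),
            "pval_Ts": float(np.mean(Ts_star >= Ts))}
```

The bootstrap statistic inside the loop matches the wild construction: each replicate is the norm of $(1/n)\sum_i (X_i-\bar{X})(Y_i-\bar{Y})\varepsilon^*_i$, and the p-value is the proportion of replicates at least as large as the observed statistic.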
Furthermore, in the previous section, the "naive" and "wild" bootstrap were shown to be consistent for the F–test; that is, the distributions of $T_{3,n}$ and $T_{s,n}$ can be approximated by their corresponding bootstrap distributions, and $H_0$ can be rejected when the statistic value does not belong to the interval defined by the bootstrap acceptance region of confidence $1-\alpha$. The same bootstrap calibration can be applied to the tests based on $T_{1,n}$ and $T_{2,n}$, although the consistency of the bootstrap procedure in these cases has not been proved in this work.

In this section, a simulation study and an application to a real dataset illustrate the performance of the asymptotic approach and the bootstrap calibration from a practical point of view.
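The simulation design described next (Brownian-motion covariates on $[0,1]$, $\Theta(t)=\sin(2\pi t)$ under the alternative, and noise level $\sigma$ chosen to match a prescribed signal-to-noise ratio $r$) can be mimicked with a short data-generating script. In this sketch, the expectation $E(\langle X,\Theta\rangle^2)$ is replaced by its empirical counterpart when scaling $\sigma$, and all names are our assumptions.

```python
import numpy as np

def simulate_flm(n, p=100, r=0.5, null=False, rng=None):
    """Sketch of the simulation design: X a Brownian motion on [0, 1],
    Theta(t) = sin(2*pi*t) under H1 (Theta = 0 under H0), and sigma chosen
    so that r = sigma / sqrt(E<X, Theta>^2) (estimated empirically here)."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(1.0 / p, 1.0, p)
    # Brownian motion: cumulative sum of independent Gaussian increments
    X = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / p), size=(n, p)), axis=1)
    if null:
        signal = np.zeros(n)
        sigma = 1.0                          # sigma = 1 under H0, as in the text
    else:
        theta = np.sin(2 * np.pi * t)
        signal = X @ theta / p               # <X_i, Theta> by a Riemann sum
        sigma = r * np.sqrt(np.mean(signal ** 2))   # empirical scaling (assumption)
    Y = signal + rng.normal(0.0, sigma, size=n)
    return X, Y
```

A generator like this makes it straightforward to replay the size and power experiments reported below for any sample size, grid resolution, or signal-to-noise ratio.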
We have simulated $ns = 500$ samples, each composed of $n \in \{50, 100\}$ observations from the functional linear model $Y = \langle\Theta, X\rangle + \varepsilon$, where $X$ is a Brownian motion and $\varepsilon \sim N(0, \sigma^2)$ with signal-to-noise ratio $r = \sigma/\sqrt{E(\langle X, \Theta\rangle^2)} \in \{0.5, 1, 2\}$. Under $H_0$, we have considered the model parameter $\Theta_0(t) = 0$, $t \in [0,1]$, whereas under $H_1$ the selected model parameter was $\Theta(t) = \sin(2\pi t)$, $t \in [0,1]$. Under $H_0$ we have chosen $\sigma = 1$, while under the alternative $H_1$ we assigned the three different values of $r$ indicated before. Let us remark that both $X$ and $\Theta$ were discretized to 100 equidistant design points.

We have selected the test statistics introduced in the previous section: (8), (9), (10) and (11). For (8), three approximations of the distribution were considered: the asymptotic approach ($N(0,2)$) and two bootstrap approximations,
$$T^{*(a)}_{1,n} = \frac{1}{\sqrt{k_n}}\left(\frac{n}{(\hat{\sigma}^*)^2}\sum_{j=1}^{k_n}\frac{(\Delta^*_n(\hat{v}_j))^2}{\hat{\lambda}_j} - k_n\right), \qquad T^{*(b)}_{1,n} = \frac{1}{\sqrt{k_n}}\left(\frac{n}{\hat{\sigma}^2}\sum_{j=1}^{k_n}\frac{(\Delta^*_n(\hat{v}_j))^2}{\hat{\lambda}_j} - k_n\right).$$
The difference between the two proposed bootstrap approximations is that in the former the estimation of $\sigma^2$ is also bootstrapped in each iteration. On the other hand, for (9), (10) and (11), only the bootstrap approaches were computed:
$$T^*_{2,n} = \sum_{j=1}^{k_n}\left(\frac{\Delta^*_n(\hat{v}_j)}{\hat{\lambda}_j}\right)^2, \qquad T^*_{3,n} = \left\|\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})\varepsilon^*_i\right\|^2, \qquad T^*_{s,n} = \frac{1}{(\hat{\sigma}^*)^2}\left\|\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})\varepsilon^*_i\right\|^2.$$
For this simulation study, we have used the "wild" bootstrap algorithm introduced in Section 2.4 for the F–test and its studentized version, and the following adaptation of this consistent "wild" bootstrap for $T_{1,n}$ and $T_{2,n}$.

Algorithm 3 (Wild Bootstrap).
Step 1. Compute the value of the statistic $T_{1,n}$ (or the value $T_{2,n}$).
Step 2. Draw $\{\varepsilon^*_i\}_{i=1}^{n}$, a sequence of i.i.d.
random elements distributed as $\varepsilon^*$, and define $Y^*_i = Y_i\,\varepsilon^*_i$ for all $i = 1,\ldots,n$.
Step 3. Build $\Delta^*_n(\cdot) = n^{-1}\sum_{i=1}^{n}\langle X_i, \cdot\rangle Y^*_i$ and compute $a_n = |T^*_{1,n}|$ (or $b_n = |T^*_{2,n}|$).
Step 4. Repeat Steps 2 and 3 a large number of times $B \in \mathbb{N}$ in order to obtain a sequence of values $\{a^l_n\}_{l=1}^{B}$ (or $\{b^l_n\}_{l=1}^{B}$).
Step 5. Approximate the p–value of the test by the proportion of values in $\{a^l_n\}_{l=1}^{B}$ greater than or equal to $|T_{1,n}|$ (or by the proportion of values in $\{b^l_n\}_{l=1}^{B}$ greater than or equal to $|T_{2,n}|$).

Let us indicate that 1,000 bootstrap iterations were done in each simulation. Since $k_n$ and $\alpha$ must be fixed to run the procedure, the study was repeated with different numbers of principal components ($k_n \in \{1, \ldots, 20\}$) and significance levels $\alpha$. Nevertheless, in order to simplify the reading, the information collected in the following tables corresponds to only three of the analyzed values of $k_n$: $k_n = 5$, $k_n = 10$ and $k_n = 20$.

Table 1. Comparison of the estimated levels for $T_{1,n}$ (using the asymptotic distribution $N(0,2)$ and the bootstrap distributions of $T^{*(a)}_{1,n}$ and $T^{*(b)}_{1,n}$), $T_{2,n}$ (using the bootstrap distribution of $T^*_{2,n}$), $T_{3,n}$ (using the bootstrap distribution of $T^*_{3,n}$), and its studentized version, $T_{s,n}$ (using the bootstrap distribution of $T^*_{s,n}$).

Table 2. For $r = 0.5$, comparison of the empirical power for $T_{1,n}$ (using the asymptotic distribution $N(0,2)$ and the bootstrap distributions of $T^{*(a)}_{1,n}$ and $T^{*(b)}_{1,n}$), $T_{2,n}$ (using the bootstrap distribution of $T^*_{2,n}$), $T_{3,n}$ (using the bootstrap distribution of $T^*_{3,n}$), and its studentized version, $T_{s,n}$ (using the bootstrap distribution of $T^*_{s,n}$).

Table 1 displays the sizes of the test statistics obtained in the simulation study. For $T_{1,n}$, it can be highlighted that the bootstrap approaches have sizes closer to the theoretical $\alpha$ than the asymptotic approximation, mainly when $k_n$ is small. If we compare the performance of the two proposed bootstrap procedures, it seems that the results are better when $\sigma^2$ is bootstrapped ($T^{*(a)}_{1,n}$) than when the same estimation of the variance is kept in all the bootstrap replications ($T^{*(b)}_{1,n}$), above all when $k_n$ is large. As far as $T_{2,n}$ is concerned, the estimated levels are quite close to the nominal ones, $k_n = 20$ being the case in which they are farthest from the theoretical $\alpha$. Finally, it must be remarked that the F–test and its studentized version also obtain good results in terms of test levels, which are slightly closer to $\alpha$ when one uses the bootstrap distribution of $T^*_{s,n}$ to approximate the distribution of the statistic.

On the other hand, Tables 2, 3 and 4 show the empirical power obtained with the different procedures for each considered signal-to-noise ratio $r$. In terms of power, when $r = 0.5$, the results are good except for $T_{2,n}$, for which the empirical power decreases drastically, above all when $k_n$ increases (this effect is also observed for $r = 1$ and $r = 2$). This fact seems to be due to the construction of $T_{2,n}$, since this test statistic is the only one which does not involve the estimation of $\sigma^2$. In addition, the power of $T_{1,n}$ also falls abruptly when $T^{*(b)}_{1,n}$ is considered, $n$ is small and $k_n$ is very large. A similar situation can be observed when $r = 1$ and $r = 2$.
In the latter case it can be seen that the empirical power is in general smaller for all the methods, with an important loss of power when the sample is small ($n = 50$) and $k_n$ increases and/or $\alpha$ decreases (see Table 4). Furthermore, in this case, it can be seen that the empirical power relies heavily on the selected value of $k_n$. Hence, the advantage of using $T_{3,n}$ or $T_{s,n}$ is that they do not require the selection of any parameter and they are competitive in terms of power. Nevertheless, it also seems that an adequate selection of $k_n$ can make $T_{1,n}$ obtain larger empirical power than $T_{3,n}$ or $T_{s,n}$ in some cases.

Table 3. For $r = 1$, comparison of the empirical power for $T_{1,n}$ (using the asymptotic distribution $N(0,2)$ and the bootstrap distributions of $T^{*(a)}_{1,n}$ and $T^{*(b)}_{1,n}$), $T_{2,n}$ (using the bootstrap distribution of $T^*_{2,n}$), $T_{3,n}$ (using the bootstrap distribution of $T^*_{3,n}$), and its studentized version, $T_{s,n}$ (using the bootstrap distribution of $T^*_{s,n}$).

Table 4. For $r = 2$, comparison of the empirical power for $T_{1,n}$ (using the asymptotic distribution $N(0,2)$ and the bootstrap distributions of $T^{*(a)}_{1,n}$ and $T^{*(b)}_{1,n}$), $T_{2,n}$ (using the bootstrap distribution of $T^*_{2,n}$), $T_{3,n}$ (using the bootstrap distribution of $T^*_{3,n}$), and its studentized version, $T_{s,n}$ (using the bootstrap distribution of $T^*_{s,n}$).

For the real data application, we have obtained concentrations of hourly averaged NO$_x$ in the neighborhood of a power station belonging to ENDESA, located in As Pontes in the northwest of Spain. During unfavorable meteorological conditions, NO$_x$ levels can quickly rise and cause an air-quality episode.
The aim is to forecast NO$_x$ levels with a half-hour horizon in order to allow the power plant staff to prevent NO$_x$ concentrations from reaching the limit values fixed by the current environmental legislation. This implies that it is necessary to properly estimate the regression model which defines the relationship between the NO$_x$ concentration observed in the last minutes ($X$) and the NO$_x$ concentration half an hour ahead ($Y$). For that, a first step is to determine whether there exists a linear dependence between $X$ and $Y$.

Therefore, we have built a sample where each curve $X$ corresponds to 240 consecutive one-minute values of hourly averaged NO$_x$ concentration, and the response $Y$ corresponds to the NO$_x$ value half an hour ahead (from Jan 2007 to Dec 2009). Applying the dependence tests to the dataset, the null hypothesis is rejected in all cases (thus, there is a linear relationship between the variables), except for $T_{2,n}$ when $k_n$ is large (see Table 5). Nevertheless, as commented in the simulation study, this test statistic does not take the variance term into account and its power is clearly lower than that of the other tests.

Table 5. Real data application. P–values for $T_{1,n}$ (using the asymptotic distribution $N(0,2)$ and the bootstrap distributions of $T^{*(a)}_{1,n}$ and $T^{*(b)}_{1,n}$), $T_{2,n}$ (using the bootstrap distribution of $T^*_{2,n}$), $T_{3,n}$ (using the bootstrap distribution of $T^*_{3,n}$), and its studentized version, $T_{s,n}$ (using the bootstrap distribution of $T^*_{s,n}$).

The proposed bootstrap methods seem to give test sizes closer to the nominal ones than the tests based on the asymptotic distributions. In terms of power, the test statistics which include a consistent estimation of the error variance $\sigma^2$ are better than the tests which do not take it into account. Furthermore, in all cases, a suitable choice of $k_n$ seems to be quite important, and it currently remains an open question.

Besides the optimal selection of $k_n$, other issues related to these dependence tests require further research, such as their extension to functional linear models with functional response. On the other hand, and in addition to the natural usefulness of this test, it would be interesting to combine it with the functional ANOVA test (see Cuevas, Febrero, and Fraiman (2004), and González-Rodríguez, Colubi, and Gil (2012)) in order to develop an ANCOVA test in this context.

Acknowledgements
The work of the first and third authors was supported by Ministerio de Ciencia e Innovación (project MTM2008–03010), Consellería de Innovación e Industria (project PGIDIT07PXIB207031PR), and Consellería de Economía e Industria (project 10MDS207015PR), Xunta de Galicia. The work of the second author was supported by Ministerio de Ciencia e Innovación (project MTM2009–09440–C0202) and by the COST Action IC0702. The work of the fourth author was supported by Ministerio de Educación (FPU grant AP2010–0957).

References
Bathia, N., Yao, Q. and Ziegelmann, F. (2010). Identifying the finite dimensionality of curve time series. Ann. Statist., 3352–3386.
Bickel, P. J. and Freedman, D. A. (1981). Some asymptotic theory for the bootstrap. Ann. Statist., 1196–1217.
Cai, T. T. and Hall, P. (2006). Prediction in functional linear regression. Ann. Statist., 2159–2179.
Cao-Abad, R. (1991). Rate of convergence for the wild bootstrap in nonparametric regression. Ann. Statist., 2226–2231.
Cardot, H., Ferraty, F., Mas, A. and Sarda, P. (2003). Testing hypotheses in the functional linear model. Scand. J. Stat., 241–255.
Cardot, H., Ferraty, F. and Sarda, P. (1999). Functional linear model. Statist. Probab. Lett., 11–22.
Cardot, H., Ferraty, F. and Sarda, P. (2003). Spline estimators for the functional linear model. Statist. Sinica, 571–591.
Cardot, H., Prchal, L. and Sarda, P. (2007). No effect and lack-of-fit permutation tests for functional regression. Comput. Statist., 371–390.
Cuevas, A., Febrero, M. and Fraiman, R. (2004). An ANOVA test for functional data. Comput. Statist. Data Anal., 111–122.
Cuevas, A., Febrero, M. and Fraiman, R. (2006). On the use of the bootstrap for estimating functions with functional data. Comput. Statist. Data Anal., 1063–1074.
Efron, B. (1979). Bootstrap methods: another look at the jackknife. Ann. Statist., 1–26.
Ferraty, F. and Romain, Y. (eds.) (2011). The Oxford Handbook of Functional Data Analysis. Oxford University Press, Oxford.
Ferraty, F., Van Keilegom, I. and Vieu, P. (2010). On the validity of the bootstrap in non-parametric functional regression. Scand. J. Stat., 286–306.
Ferraty, F., Van Keilegom, I. and Vieu, P. (2012). Regression when both response and predictor are functions. J. Multivariate Anal., 10–28.
Ferraty, F. and Vieu, P. (2006). Nonparametric Functional Data Analysis: Theory and Practice. Springer, New York.
Freedman, D. A. (1981). Bootstrapping regression models. Ann. Statist., 1218–1228.
Giné, E. (1997). Lectures on some aspects of the bootstrap. In Lectures on Probability Theory and Statistics (Saint-Flour, 1996) (Edited by P. Bernard), 37–151. Springer, Berlin.
Giné, E. and Zinn, J. (1990). Bootstrapping general empirical measures. Ann. Probab., 851–869.
González-Manteiga, W. and Martínez-Calvo, A. (2011). Bootstrap in functional linear regression. J. Statist. Plann. Inference, 453–461.
González-Rodríguez, G., Colubi, A. and Gil, M. Á. (2012). Fuzzy data treated as functional data: a one-way ANOVA test approach. Comput. Statist. Data Anal., 943–955.
Hall, P. and Horowitz, J. L. (2007). Methodology and convergence rates for functional linear regression. Ann. Statist., 70–91.
Hall, P. and Hosseini-Nasab, M. (2006). On properties of functional principal components analysis. J. R. Stat. Soc. Ser. B Stat. Methodol., 109–126.
Hall, P. and Vial, C. (2006). Assessing the finite dimensionality of functional data. J. R. Stat. Soc. Ser. B Stat. Methodol., 689–705.
Kokoszka, P., Maslova, I., Sojka, J. and Zhu, L. (2008). Testing for lack of dependence in the functional linear model. Canad. J. Statist., 207–222.
Kosorok, M. R. (2008). Introduction to Empirical Processes and Semiparametric Inference. Springer, New York.
Laha, R. G. and Rohatgi, V. K. (1979). Probability Theory. Wiley, New York.
Ledoux, M. and Talagrand, M. (1988). Un critère sur les petites boules dans le théorème limite central. Probab. Theory Related Fields, 29–47.
Ramsay, J. O. and Silverman, B. W. (2002). Applied Functional Data Analysis. Methods and Case Studies. Springer, New York.
Ramsay, J. O. and Silverman, B. W. (2005). Functional Data Analysis. 2nd edition. Springer, New York.
Singh, K. (1981). On the asymptotic accuracy of Efron's bootstrap. Ann. Statist.