High-dimensional mixed-frequency IV regression∗
Andrii Babii†
UNC Chapel Hill
March 31, 2020
Abstract
This paper introduces a high-dimensional linear IV regression for data sampled at mixed frequencies. We show that the high-dimensional slope parameter of a high-frequency covariate can be identified and accurately estimated leveraging on a low-frequency instrumental variable. The distinguishing feature of the model is that it allows handling high-dimensional datasets without imposing the approximate sparsity restrictions. We propose a Tikhonov-regularized estimator and derive the convergence rate of its mean-integrated squared error for time series data. The estimator has a closed-form expression that is easy to compute and demonstrates excellent performance in our Monte Carlo experiments. We estimate the real-time price elasticity of supply on the Australian electricity spot market. Our estimates suggest that the supply is relatively inelastic and that its elasticity is heterogeneous throughout the day.

∗First draft: October 2014. This paper is a substantially revised Chapter 2 of my Ph.D. thesis. I'm deeply indebted to my advisor Jean-Pierre Florens and other members of my Ph.D. committee: Eric Gautier, Ingrid van Keilegom, and Timothy Christensen for helpful discussions and suggestions. This paper was presented at the Conference on Inverse Problems in Econometrics at Northwestern University, the "ENTER exchange seminar" at Tilburg University, the "48èmes Journées de Statistique de la SFdS" in Montpellier, the "3rd ISNPS Conference" in Avignon, and the "Recent Advances in Econometrics" conference at TSE. I'm grateful to all participants for interesting discussions, comments, and suggestions, especially to Christoph Breunig, Federico Bugni, Samuele Centorrino, Christophe Gaillac, Eric Gautier, Joel Horowitz, Pascal Lavergne, Robert Lieli, Valentin Patilea, Jeff Racine, Mario Rothfelder, Anna Simoni, and Daniel Wilhelm. I would also like to thank Bruno Biais, Sophie Moinas, and Aleksandra Babii for helpful conversations.
†University of North Carolina at Chapel Hill - Gardner Hall, CB 3305, Chapel Hill, NC 27599-3305. Email: [email protected].

Keywords: high-dimensional IV regression, mixed-frequency data, identification, Tikhonov regularization, continuum of moment conditions, real-time price elasticities. JEL Classifications: C14, C22, C26, C58
The technological progress over the past decades has made it possible to generate, to collect, and to store new intraday high-frequency time series datasets that are widely available along with the "old" low-frequency data. Indeed, the economic activity occurs in real time and the economic and financial transactions are frequently recorded instantaneously, while the traditional time series data are available at a quarterly, monthly, or sometimes daily frequency. Ignoring the high-frequency nature of the data leads to the loss of information through temporal aggregation and makes it impossible to quantify the economic activity in real time. At the same time, combining the low- and the high-frequency datasets allows obtaining more refined measures of the economic activity that can be used subsequently to inform market participants and to guide policies.

In this paper, we introduce a novel high-dimensional mixed-frequency instrumental variable (IV) regression suitable for datasets recorded at different frequencies. The model connects a low-frequency dependent variable to endogenous covariates sampled from a continuous-time stochastic process. Alternatively, the regressor might be sampled from a continuous-space stochastic process encountered in spatial data analysis or any other stochastic process indexed by the continuum. This leads to a high-dimensional IV regression with a large number of endogenous regressors.

The high-dimensional mixed-frequency IV regression features several remarkable properties. First, we show that it is possible to identify and to estimate accurately the high-dimensional slope parameter leveraging on a low-frequency instrumental variable.
In contrast, the point identification in the (high-dimensional) linear IV regression typically relies on the order condition postulating that the number of instrumental variables should be at least as large as the number of endogenous regressors. Second, the mixed-frequency IV regression can handle an arbitrarily large number of endogenous covariates relative to the sample size without relying on the approximate sparsity condition and restrictive tail conditions. Such a remarkable property is possible due to the continuous-time structure of the regressor and the slope parameter. The continuous-time structure is one of the "blessings of dimensionality," according to Donoho (2000). These properties distinguish our model from the ridge IV regression, cf. Carrasco (2012), or the high-dimensional IV regression of Belloni, Chen, Chernozhukov, and Hansen (2012).

The high-dimensional mixed-frequency IV regression is an example of an ill-posed inverse problem in the sense that the map from the distribution of the data to the slope parameter is not continuous. As a result, we need to introduce some amount of regularization to smooth out the discontinuities and to obtain a consistent estimator. In this paper, we focus on the Tikhonov regularization and establish its statistical properties with weakly dependent data. The estimation accuracy of the continuous-time slope parameter depends both on its own regularity and on the regularity of a certain integral operator.

Our empirical application extends the classical IV estimation of the supply and demand equations, cf. Wright (1928), to real-time spot markets. We collect a new dataset using publicly available data and estimate the real-time price elasticity of supply in the Australian electricity spot market. To that end, we leverage on the daily temperature as an instrumental variable that shifts the demand curve and is exogenous for supply shocks.
The temperature is a valid instrumental variable since the electricity demand increases in hot and cold times due to cooling and heating needs. Our empirical results reveal that while the supply of electricity is relatively inelastic, its elasticity is heterogeneous across the day, peaking around 6 pm and dropping subsequently to its lowest value around 4 am.

Contribution and related literature.
Our paper connects several strands of the literature. First, following Ghysels, Santa-Clara, and Valkanov (2004), Ghysels, Sinko, and Valkanov (2007), and Andreou, Ghysels, and Kourtellos (2010), there is an increasing interest in using datasets sampled at different frequencies in the empirical practice. Most of this literature, with a notable exception for Ghysels and Wright (2010) and Khalaf, Kichian, Saunders, and Voia (2017), is largely focused on the forecasting problem with mixed-frequency data and does not consider the structural econometric modeling with the instrumental variable approach. The mixed-frequency data typically lead to high-dimensional problems and the dimensionality is controlled using tightly parametrized weight functions; see also Foroni, Marcellino, and Schumacher (2015) for the unrestricted mixed-frequency data models. In contrast, our approach relies on a single instrument measured at a low frequency only. The structure of our model is also qualitatively different and leads to a conditional expectation operator that was not previously encountered in the ill-posed inverse problems literature.

Lastly, following the influential work of Belloni, Chernozhukov, and Hansen (2011), Belloni et al. (2012), and Belloni, Chernozhukov, and Hansen (2014), there is an increasing interest in the estimation and inference with high-dimensional datasets in econometrics. In particular, Belloni et al. (2012) propose to use the LASSO to address the problem of many instruments and the nonparametric series estimation of the optimal instrument.

The continuum modeling, the concentration of measure phenomenon, and the dimension asymptotics are the three "blessings of dimensionality," according to Donoho (2000). The concept of regularization originates from the mathematical literature on ill-posed inverse problems, cf. Tikhonov (1963a) and Tikhonov (1963b); see Carrasco, Florens, and Renault (2007b) for a review and further references in econometrics.
Our mixed-frequency IV regression is qualitatively different from the above models and does not impose the approximate sparsity on the high-dimensional slope coefficients; see Babii, Ghysels, and Striaukas (2019) for a comprehensive treatment of approximately sparse mixed-frequency time series regressions. The problem of the optimal instrument is more challenging in our nonparametric setting and is left for future research; see Florens and Sokullu (2018) for some steps in this direction.

The paper is organized as follows. In section 2, we present the mixed-frequency IV regression, illustrate several economic examples, and discuss the main identification issues. In section 3, we present the Tikhonov-regularized estimator and derive its statistical properties for weakly dependent time series data. All technical details appear in the appendix. We report on a Monte Carlo study in section 4, which provides further insights about the validity of the asymptotic analysis in finite samples typically encountered in empirical applications. Section 5 presents an empirical application to the estimation of real-time supply elasticities. Lastly, section 6 concludes.

The literature on approximately sparse econometric models is vast; see Belloni, Chernozhukov, Chetverikov, Hansen, and Kato (2018) for an excellent introduction and further references.

Mixed-frequency IV regression
The purpose of this section is to introduce the mixed-frequency IV regression and to discuss our identification and estimation strategies.
The econometrician observes $\{(Y_t, Z_t(s_j), W_t) : t = 1,\dots,T,\ j = 0,1,\dots,m\}$, where $Y_t \in \mathbb{R}$ is a low-frequency dependent variable, $Z_t(s_j)$ is a realization of a real-valued continuous-time stochastic process $Z_t = \{Z_t(s) : s \in S \subset \mathbb{R}^d\}$, and $W_t \in \mathbb{R}^q$ is a (vector of) low-frequency instrumental variables. The number of high-frequency observations $m$ is left unrestricted and can (potentially) be much larger than the sample size $T$. The mixed-frequency IV regression is described as
$$Y_t = \int_S \beta(s) Z_t(s)\,\mathrm{d}s + U_t, \qquad \mathbb{E}[U_t \mid W_t] = 0, \qquad t = 1,\dots,T.$$
Since in practice the regressor is sampled from the high-frequency covariate $Z_t$ at the points $s_j$, discretizing the continuous-time equation we obtain
$$Y_t = \sum_{j=1}^m \beta(s_j) Z_t(s_j)(s_j - s_{j-1}) + U_t, \qquad \mathbb{E}[U_t \mid W_t] = 0, \qquad t = 1,\dots,T.$$
It is worth stressing that the discretization of the continuous-time model leads to a consistent definition of regression slopes across different frequencies, cf. Sims (1971) and Geweke (1978). In contrast, the naive discrete-time regression equation does not impose any normalization and the magnitude of the slope parameter is different across frequencies.

The following three examples provide several empirical settings where our mixed-frequency IV regression model could be useful.
Example 2.1 (Real-time price elasticities). Spot markets operate in real time with commodities traded for immediate delivery. The mixed-frequency IV regression can be used to estimate the real-time elasticities of supply/demand, which is a continuous-time extension of the classical linear IV regression, cf. Wright (1928). In our empirical application, $Y_t$ is the quantity sold at the spot market on a day $t$ and $Z_t(s)$ is the equilibrium market price on a day $t$ at time $s$. The market equilibrium leads to the endogeneity problem. Using daily temperatures as a demand shifter, we can identify the real-time price elasticity $\beta$ of the electricity supply.

Example 2.2 (Intraday liquidity). In the equilibrium of the seminal Kyle (1985) model, $Y_t$ is a daily price change of an asset $t$, $Z_t(s)$ is the order flow imbalance on a day $t$ at time $s$, and $1/\beta$ is a liquidity parameter. The liquidity parameter quantifies the sensitivity of the market price to the imbalance between the supply and the demand. Endogeneity comes from the strategic behavior of informed traders who are likely to distribute orders over time to minimize the impact on prices and the market equilibrium.

Example 2.3 (Measurement errors). Classical measurement errors in the high-frequency regressor sampled from a continuous-time stochastic process also lead to the endogeneity problem. Such measurement errors are especially pronounced in the high-frequency intraday financial data contaminated by the market microstructure noise, see Zhang, Mykland, and Aït-Sahalia (2005) and Hansen and Lunde (2006).

If $d = 1$, then $S \subset \mathbb{R}$ can be interpreted as a time index. More generally, if $d = 2$, then $S \subset \mathbb{R}^2$ can be a geographical location (spatial process), and if $d = 3$, then $S \subset \mathbb{R}^3$ can denote both the space and the time dimensions (spatio-temporal process). Regardless of the dimension $d$, we always refer to $Z_t$ as a continuous-time stochastic process.
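As a concrete illustration of the discretized equation above, the following sketch simulates a toy version of the setting in Example 2.1: the covariate is observed on an intraday grid, and the low-frequency outcome aggregates it through the Riemann sum. All numbers (grid size, slope function, noise level) are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
T, m = 200, 288                         # days, intraday grid points (e.g., 5-minute marks)
grid = np.linspace(0.0, 1.0, m + 1)     # s_0 = 0 < s_1 < ... < s_m = 1
s, delta = grid[1:], np.diff(grid)      # grid points and spacings s_j - s_{j-1}

beta = 10 * s                           # an illustrative slope function beta(s)
# rough Brownian-type intraday trajectories Z_t(s)
Z = rng.standard_normal((T, m)).cumsum(axis=1) / np.sqrt(m)

# Y_t = sum_j beta(s_j) Z_t(s_j) (s_j - s_{j-1}) + U_t  -- the Riemann-sum discretization
U = 0.1 * rng.standard_normal(T)
Y = Z @ (beta * delta) + U
```

Note that the weights $s_j - s_{j-1}$ are precisely what keeps the scale of $\beta$ comparable across sampling frequencies: refining the grid changes $m$ but not the magnitude of the slope.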
To simplify the notation, in this section we suppress the dependence of $(Y_t, Z_t, W_t)$ on $t$ and write $(Y, Z, W)$, which is well-justified under stationarity. The mixed-frequency IV regression becomes
$$Y = \int_S \beta(s) Z(s)\,\mathrm{d}s + U, \qquad \mathbb{E}[U \mid W] = 0.$$
The identification in the linear IV regression relies on the uncorrelatedness between the instrumental variable and the unobservables, i.e., $\mathbb{E}[UW] = 0$, and the rank condition. The rank condition requires in turn that the number of instrumental variables matches the dimension of the endogenous covariate. In our setting, the endogenous covariate is a high-dimensional realization of a continuous-time stochastic process, which requires in turn a high-dimensional instrumental variable. Given that the instrumental variable has to be exogenous to the system, this imposes a strong requirement on the instrumental variable.

Alternatively, if we start from the linear model $Y = \Phi(Z) + U$, where $\Phi : L^2(S) \to \mathbb{R}$ is a continuous linear functional, then by the Riesz representation theorem, we can always write $\Phi(Z) = \langle\beta, Z\rangle$ for a unique slope parameter $\beta \in L^2(S)$. Here and later, $L^2(S)$ denotes the set of real functions on $S$, square-integrable with respect to the Lebesgue measure and equipped with the natural inner product $\langle\cdot,\cdot\rangle$; see the Appendix for more details on the notation.
In contrast, our identification strategy relies on the mean-independence exogeneity condition, $\mathbb{E}[U \mid W] = 0$. Assuming that the order of integration can be interchanged, the exogeneity leads to
$$h(w) \triangleq \mathbb{E}[Y \mid W = w] = \int_S \beta(s)\,\mathbb{E}[Z(s) \mid W = w]\,\mathrm{d}s \triangleq (L\beta)(w), \qquad (1)$$
where $L : L^2(S) \to L^2(W)$ is an integral operator mapping the unknown slope parameter $\beta$ to the conditional mean function $h$. Eq. 1 is an example of the Fredholm integral equation of the first kind, solving which is typically ill-posed in the sense that the inverse map from $h$ to $\beta$ is discontinuous; see Carrasco et al. (2007b).

Our identification strategy relies on the linear completeness property of the distribution of $(Z, W)$. We say that the stochastic process $Z \in L^2(S)$ is linearly complete for $W \in \mathbb{R}^q$ if for all $b \in L^2(S)$ with $\mathbb{E}|\langle Z, b\rangle| < \infty$, we have
$$\mathbb{E}[\langle Z, b\rangle \mid W] = 0 \implies b = 0.$$

Assumption 2.1.
The stochastic process $Z$ is linearly complete for $W$.

The linear completeness is a generalization of the rank condition imposed in the finite-dimensional linear IV regression and requires that the operator $L$ is injective. Consider another injective operator $M : L^2(W) \to L^2(S)$ such that $(M\varphi)(u) = \mathbb{E}[\varphi(W)\Psi(u, W)]$ for some square-integrable function of the instrumental variable $\Psi$. Applying $M$ to both sides of Eq. 1 leads to
$$r(u) = \mathbb{E}[Y\Psi(u, W)] = \int_S \beta(s)\,\mathbb{E}[Z(s)\Psi(u, W)]\,\mathrm{d}s = (K\beta)(u), \qquad (2)$$
where $r = Mh$ and $K = ML$ is a new operator. It is more convenient to estimate the slope parameter $\beta$ using the continuum of moment restrictions in Eq. 2, since it does not involve conditional expectations, the nonparametric estimation of which involves additional tuning parameters. At the same time, Eq. 2 has the same identifying power as Eq. 1 provided that the operator $M$ is injective. A large class of instrument functions $\Psi$ that ensure the injectivity of $M$ is characterized in Stinchcombe and White (1998). Our default recommendation is the logistic CDF, $\Psi(u, W) = (1 + e^{-u^\top W})^{-1}$, which is real-valued and bounded.

For a random variable $W$, we denote $L^2(W) = \{f : \mathbb{E}|f(W)|^2 < \infty\}$ with some abuse of notation. The linear completeness condition is significantly weaker than the nonlinear completeness condition typically used in the nonparametric IV literature, cf. Babii and Florens (2018). The problem of estimating a finite-dimensional parameter using a continuum of moment conditions is addressed, e.g., in Carrasco and Florens (2000) and Carrasco, Chernov, Florens, and Ghysels (2007a).

Tikhonov-regularized estimator

In this section, we introduce the Tikhonov-regularized estimator of the slope parameter $\beta$ and study its statistical properties with time series data. Our objective is to estimate the slope parameter $\beta$ using the continuum of moment conditions in Eq. 2, which requires inverting the operator $K$.
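The continuum of moment conditions in Eq. 2 is straightforward to evaluate on a grid of values of $u$. A minimal sketch in a toy design (the scalar instrument and all functional forms are illustrative assumptions, not the paper's empirical design); the logistic instrument function follows the recommendation above:

```python
import numpy as np

rng = np.random.default_rng(1)
T, m = 500, 60
s = np.arange(1, m + 1) / m                     # grid for s and for the index u

# Toy design: Z depends on the scalar instrument W; U is mean-independent of W
W = rng.standard_normal(T)
Z = np.sqrt(s[None, :] + W[:, None] ** 2) + rng.standard_normal((T, m))
beta = 10 * s
U = rng.standard_normal(T)                      # independent of W
Y = Z @ beta / m + U

Psi = 1.0 / (1.0 + np.exp(-np.outer(s, W)))     # logistic instrument Psi(u_i, W_t)

r_hat = Psi @ Y / T                             # sample analogue of E[Y Psi(u, W)]
k_hat = Psi @ Z / T                             # sample analogue of E[Z(s) Psi(u, W)]

# Sample analogue of the moment condition (2) evaluated at the true slope:
moment = k_hat @ beta / m - r_hat               # ~ 0 up to sampling noise
```

At the true $\beta$ the vector `moment` equals minus the sample mean of $U_t\Psi(u, W_t)$, which vanishes as $T$ grows by the exogeneity condition.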
Note that the integral operator $K$ has the kernel function $k(s, u) \triangleq \mathbb{E}[Z(s)\Psi(u, W)]$, which is typically square-integrable. Consequently, the operator $K$ is compact and its generalized inverse is not continuous; see Carrasco et al. (2007b). The operator inversion problem is amplified by the fact that $r$ and $K$ are unobserved and have to be estimated from the data. In this paper, we focus on the Tikhonov-regularized estimator of $\beta$.

Let $(Y_t, Z_t, W_t)_{t=1}^T$ be a stationary sample. The operator $K$ and the function $r$ are estimated using sample means
$$\hat r(u) = \frac{1}{T}\sum_{t=1}^T Y_t\Psi(u, W_t), \qquad (\hat K\beta)(u) = \int_S \beta(s)\hat k(s, u)\,\mathrm{d}s, \qquad \hat k(s, u) = \frac{1}{T}\sum_{t=1}^T Z_t(s)\Psi(u, W_t).$$
The Tikhonov-regularized estimator solves the following penalized least-squares problem
$$\hat\beta = \arg\min_b \left\|\hat Kb - \hat r\right\|^2 + \alpha\|b\|^2,$$
where $\alpha > 0$ is the regularization parameter and $\|\cdot\|$ is the natural norm on the relevant $L^2$ space. The estimator has a well-known closed-form expression
$$\hat\beta = (\alpha I + \hat K^*\hat K)^{-1}\hat K^*\hat r, \qquad (3)$$
where $\hat K^*$ is the adjoint operator to $\hat K$. To compute the adjoint operator, note that for every $\psi \in L^2$, by Fubini's theorem
$$\langle\hat K\beta, \psi\rangle = \int\left(\int\beta(s)\hat k(s, u)\,\mathrm{d}s\right)\psi(u)\,\mathrm{d}u = \int\beta(s)\left(\int\psi(u)\hat k(s, u)\,\mathrm{d}u\right)\mathrm{d}s = \langle\beta, \hat K^*\psi\rangle.$$
Therefore, the adjoint operator is
$$(\hat K^*\psi)(s) = \int\psi(u)\hat k(s, u)\,\mathrm{d}u.$$
To investigate the statistical properties of $\hat\beta$, we introduce several weak-dependence conditions on the underlying stochastic processes.

See also the earlier work of Bierens (1982), who develops consistent specification tests, and the work of Dominguez and Lobato (2004) and Lavergne and Patilea (2013), who develop estimators of finite-dimensional parameters based on the Bierens-type trick.
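The closed form in Eq. 3 reduces to an $m \times m$ linear system once the $s$- and $u$-integrals are discretized with $1/m$ Riemann weights. A minimal sketch under an assumed toy design (grid sizes, functional forms, and the scalar instrument are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
T, m, alpha = 500, 100, 1e-3
s = np.arange(1, m + 1) / m                     # uniform grids for s and u

W = rng.standard_normal(T)
Z = np.sqrt(s[None, :] + W[:, None] ** 2) \
    + 0.5 * rng.standard_normal((T, m)).cumsum(axis=1) / np.sqrt(m)
beta_true = -10 * np.exp(s)
Y = Z @ beta_true / m + 0.1 * rng.standard_normal(T)

Psi = 1.0 / (1.0 + np.exp(-np.outer(s, W)))     # Psi(u_i, W_t)
r_hat = Psi @ Y / T
k_hat = Psi @ Z / T                             # rows: u_i, columns: s_j

# With 1/m Riemann weights for both integrals,
# (alpha I + K*K) beta = K* r  becomes an m x m linear system
KstarK = k_hat.T @ k_hat / m**2
Kstar_r = k_hat.T @ r_hat / m
beta_hat = np.linalg.solve(alpha * np.eye(m) + KstarK, Kstar_r)
```

The matrix `alpha * np.eye(m) + KstarK` is always invertible for $\alpha > 0$ because $\hat K^*\hat K$ is positive semi-definite, which is exactly the stabilization that Tikhonov regularization provides.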
The following definition generalizes the notion of covariance stationarity to function-valued stochastic processes; see Bosq (2012) for a comprehensive introduction to the statistical theory of stochastic processes in Hilbert and Banach spaces.

Definition 3.1.
The $L^2(S)$-valued stochastic process $(X_t)_{t\in\mathbb{Z}}$ is covariance stationary if
(i) the second moment exists: $\sup_{t\in\mathbb{Z}}\mathbb{E}\|X_t\|^2 < \infty$;
(ii) the mean function is constant over time: $\mathbb{E}[X_t(s)] = \mu(s)$, $\forall s \in S$ and $\forall t \in \mathbb{Z}$;
(iii) the autocovariance function depends only on the distance between observations: $\forall s, u \in S$ and $\forall h, k \in \mathbb{Z}$,
$$\gamma_{h,k}(s, u) = \mathbb{E}[(X_h(s) - \mu(s))(X_k(u) - \mu(u))] = \mathbb{E}[(X_{|h-k|}(s) - \mu(s))(X_0(u) - \mu(u))] \triangleq \gamma_{h-k}(s, u).$$

It is well known that a compact self-adjoint operator has a countable sequence of eigenvalues decreasing to zero. The Tikhonov regularization stabilizes the spectrum of the generalized inverse of the operator $\hat K^*\hat K$, replacing its eigenvalues $\hat\lambda_j$ by $\alpha + \hat\lambda_j$; see Carrasco et al. (2007b) for more details.
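The objects in Definition 3.1 are easy to estimate on a grid. The sketch below simulates a stationary functional AR(1)-type process (the autoregressive coefficient and the shock process are assumptions for illustration) and computes the sample mean function and the lag-$h$ autocovariance function:

```python
import numpy as np

rng = np.random.default_rng(3)
T, m = 1000, 50
s = np.arange(1, m + 1) / m

# A stationary functional AR(1): X_t = 0.5 X_{t-1} + shock_t, Brownian-type shocks
X = np.zeros((T, m))
for t in range(1, T):
    shock = rng.standard_normal(m).cumsum() / np.sqrt(m)
    X[t] = 0.5 * X[t - 1] + shock

mu_hat = X.mean(axis=0)                 # sample mean function, approximates mu(s)
Xc = X - mu_hat

def gamma_hat(h):
    """Sample autocovariance function gamma_h(s, u) at lag h >= 0, an (m, m) array."""
    return Xc[: T - h].T @ Xc[h:] / (T - h)

# For this AR(1)-type process, gamma_h = 0.5**h * gamma_0, so lag-h
# autocovariances shrink geometrically -- consistent with absolute summability
g0, g3 = gamma_hat(0), gamma_hat(3)
```

The geometric decay of $\hat\gamma_h$ in $h$ is what the absolute-summability condition below formalizes for general function-valued processes.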
We also need a notion of the absolute summability of the autocovariance function for $L^2(S)$-valued stochastic processes.

Definition 3.2.
The $L^2(S)$-valued covariance stationary process $(X_t)_{t\in\mathbb{Z}}$ has an absolutely summable autocovariance function $\gamma_h$ if
$$\sum_{h\in\mathbb{Z}}\|\gamma_h\| < \infty,$$
where $\|\gamma_h\| = \int_S|\gamma_h(s, s)|\,\mathrm{d}s$ denotes the $L^1$ norm on the diagonal of $S \times S$.

The following assumption restricts the dependence structure of the process.
Assumption 3.1. $\{u \mapsto Y_t\Psi(u, W_t) : t \in \mathbb{Z}\}$ and $\{(s, u) \mapsto Z_t(s)\Psi(u, W_t) : t \in \mathbb{Z}\}$ are covariance stationary $L^2$-valued stochastic processes with absolutely summable autocovariance functions.

The covariance stationarity is a relatively mild condition and is satisfied, in particular, when $(Y_t, Z_t, W_t)_{t\in\mathbb{Z}}$ is strictly stationary. The absolute summability of autocovariances is also a relatively mild condition that is typically assumed in time series analysis. It is worth stressing that the stationarity is imposed on entire trajectories of the processes over $t \in \mathbb{Z}$. At the same time, on a fixed day $t \in \mathbb{Z}$, the intraday observations $Z_t(s)$ for $s \in S$ can be non-stationary.

Since the mixed-frequency IV regression model is ill-posed, we also need to quantify the degree of ill-posedness of the operator $K$ and the regularity of the slope parameter $\beta$. The following conditions serve this purpose.

Assumption 3.2.
The slope parameter $\beta$ belongs to the class
$$\mathcal{F}(\gamma, R) = \left\{b \in L^2(S) : b = (K^*K)^{\gamma}\psi,\ \|\psi\| \le R\right\}$$
for some $\gamma > 0$ and $R > 0$.

To appreciate this condition, note that if $\beta = (K^*K)^{\gamma}\psi$, then $\psi = (K^*K)^{-\gamma}\beta$. Let $(\sigma_j, \varphi_j, \psi_j)_{j=1}^{\infty}$ be the singular value decomposition of the compact linear operator $K$; see Carrasco et al. (2007b). Then $\beta = \sum_{j=1}^{\infty}\langle\beta, \varphi_j\rangle\varphi_j$ and, by Parseval's identity,
$$\|\psi\|^2 = \sum_{j=1}^{\infty}\frac{|\langle\beta, \varphi_j\rangle|^2}{\sigma_j^{4\gamma}}.$$
Therefore, $\beta = (K^*K)^{\gamma}\psi$ and $\|\psi\| \le R$ in Assumption 3.2 restrict the regularity of the slope parameter $\beta$, as measured by how fast the Fourier coefficients $(\langle\beta, \varphi_j\rangle)_{j=1}^{\infty}$ decrease to zero, relative to the degree of ill-posedness of $K$, as measured by how fast the singular values $(\sigma_j)_{j=1}^{\infty}$ decrease to zero; larger values of the regularity parameter $\gamma > 0$ correspond to smoother slope parameters.

Theorem 3.1.
Suppose that Assumptions 2.1, 3.1, and 3.2 are satisfied. Then
$$\mathbb{E}\left\|\hat\beta - \beta\right\|^2 \le C\left(\frac{1}{\alpha T} + \alpha^{2\gamma} + \alpha^{2\gamma\wedge 2}\left(\frac{1}{\alpha T} + \alpha^{2\gamma}\right)\right),$$
where the constant $C$ can be found in Eq. A.2.

Consequently, if the regularization parameter $\alpha$ tends to zero, we obtain
$$\mathbb{E}\|\hat\beta - \beta\|^2 = O\left(\frac{1}{\alpha T} + \alpha^{2\gamma}\right).$$
The two terms are balanced for $\alpha \sim T^{-1/(2\gamma+1)}$, in which case the convergence rate of the integrated MSE is $O\left(T^{-2\gamma/(2\gamma+1)}\right)$. The uniform inference for the Tikhonov-regularized estimator is also possible, cf. Babii (2020a). Lastly, one could also consider regularization with a Sobolev norm penalty and/or more general spectral regularization schemes; see Carrasco et al. (2007b), Carrasco et al. (2014), Babii and Florens (2018), and Babii (2020b).

So far we have assumed that the trajectory of the stochastic process $\{Z_t(s) : t = 1,\dots,T,\ s \in S\}$ is completely observed. In this section, we relax this requirement and investigate the case when we only observe $\{Z_t(s_j) : t = 1,\dots,T,\ j = 1,\dots,m\}$, i.e., realizations of the process at discrete time points $s_j \in S$, $j = 1,\dots,m$. For simplicity of presentation, suppose that $S = [0,1]$ and that $0 = s_0 \le s_1 < s_2 < \dots < s_m = 1$. Then the operator
$$(\hat K\varphi)(u) = \int_0^1\varphi(s)\hat k(s, u)\,\mathrm{d}s, \qquad \hat k(s, u) = \frac{1}{T}\sum_{t=1}^T Z_t(s)\Psi(u, W_t),$$
is not accessible in practice since the continuous-time stochastic process $Z_t$ is only partially observed. Instead, we use its discrete approximation: for every $\varphi \in C[0,1]$,
$$(\hat K_m\varphi)(u) = \sum_{j=1}^m\varphi(s_j)\hat k(s_j, u)\delta_j$$
with $\delta_j = s_j - s_{j-1}$. Let $\Delta_m \triangleq \max_{1\le j\le m}\delta_j$ and let $\hat\beta_m$ be the solution to
$$(\alpha I + \hat K_m^*\hat K_m)\hat\beta_m = \hat K_m^*\hat r.$$
For the in-fill asymptotics, we need additionally the following assumption.
Assumption 3.3. (i) The process $Z$ has trajectories in the Hölder class $C^{\kappa}_L[0,1]$ for some $\kappa \in (0,1]$ and $L \in (0,\infty)$; (ii) $\sup_w\|\Psi(\cdot, w)\| \le \bar\Psi < \infty$; (iii) $\Delta_m = O\left((\alpha/T)^{1/(2\kappa)}\right)$ as $\alpha \to 0$ and $T \to \infty$.

Assumption 3.3 (i) is satisfied, e.g., for the Brownian motion on $[0,1]$ with $\kappa < 1/2$. (ii) is satisfied, e.g., for uniformly bounded instrument functions on compact intervals. (iii) imposes restrictions on the in-fill asymptotics. In the special case of uniform spacing, it reduces to the condition $m^{-2\kappa} = O(\alpha/T)$. In other words, the number of regressors should increase sufficiently fast. It is worth stressing that the number of regressors $m$ can be much larger than the sample size $T$ and can increase even faster than exponentially. The following result shows that the integrated MSE can converge at the same rate as if we observed the whole process, cf. Theorem 3.1.

Theorem 3.2.
Suppose that Assumptions 2.1, 3.1, 3.2, and 3.3 are satisfied. Then
$$\mathbb{E}\left\|\hat\beta_m - \beta\right\|^2 \le C\left(\frac{1}{\alpha T} + \alpha^{2\gamma}\right)$$
for some constant $C < \infty$.

Monte Carlo experiments

In this section, we discuss the numerical implementation of our high-dimensional mixed-frequency IV estimator and study its behavior in finite samples with Monte Carlo experiments.

We use the logistic CDF, $\Psi(u, W) = (1 + \exp(-u^\top W))^{-1}$, as an instrument function. We rewrite Eq. 3 as $\alpha\hat\beta + \hat K^*\hat K\hat\beta = \hat K^*\hat r$ and discretize it with the Riemann sum on a grid of uniformly spaced points $j/m$, $j = 1,\dots,m$. The discretized equation is
$$\alpha\hat\beta + Z^\top\Psi^\top\Psi Z\hat\beta/(T^2m^2) = Z^\top\Psi^\top\Psi y/(T^2m),$$
where $\hat\beta = (\hat\beta(j/m))_{1\le j\le m}$, $Z = (Z_t(j/m))_{1\le t\le T,\,1\le j\le m}$, $\Psi = (\Psi(j/m, W_t))_{1\le j\le m,\,1\le t\le T}$, and $y = (Y_t)_{1\le t\le T}$. Then we compute the estimator as
$$\hat\beta = \left(\alpha I_m + Z^\top\Psi^\top\Psi Z/(T^2m^2)\right)^{-1}Z^\top\Psi^\top\Psi y/(T^2m),$$
where $I_m$ is an $m \times m$ identity matrix.

The logistic CDF fits our assumptions since it is uniformly bounded and real-valued, unlike some other choices, cf. Bierens (1982) and Stinchcombe and White (1998). At the same time, we find in Monte Carlo experiments that it works significantly better than, e.g., $\Psi(s, w) = 1\{w \le s\}$.

There are 5,000 replications in each Monte Carlo experiment. We generate samples of $(Y_t, Z_t, W_t)_{t=1}^T$ of size $T \in \{100, 500, 1000\}$ as follows:
$$Y_t = \int_0^1\beta(s)Z_t(s)\,\mathrm{d}s + U_t, \qquad Z_t(s) = k(s, W_t) + \sigma B_t(s), \qquad W_t = 0.5\,W_{t-1} + \varepsilon_t,$$
$$k(s, w) = \sqrt{s + w^2}, \qquad \varepsilon_t \sim \text{i.i.d. } N(0,1), \qquad U_t = 0.5\int_0^1 B_t(s)\,\mathrm{d}s + 0.5\,V_t, \qquad V_t \sim \text{i.i.d. } N(0,1),$$
where $\{B_t(s) : s \in [0,1],\ t = 1,\dots,T\}$ are independent Brownian motions, generated independently of all other variables and initiated at i.i.d. random draws from $U(-1/2, 1/2)$, and $\sigma \in \{0.5, 1\}$ represents the noise level. We consider two slope parameters, $\beta(s) = -10\exp(s)$ and $\beta(s) = 10s$ with $s \in [0,1]$.

Tables 1 and 2 report the integrated squared bias, the integrated variance, and the integrated MSE. As $\alpha$ tends to zero, the bias decreases while the variance increases. The optimal choice of the regularization parameter should balance the two. The estimator performs better when the sample size increases and the noise level decreases. We can also see that the linear slope parameter is estimated more accurately. Figure 1 and Figure 2 summarize graphically the outcome of the Monte Carlo experiments for a fixed value of $\alpha$. The shaded gray area represents the pointwise 95% confidence interval across 5,000 replications. Overall, the mixed-frequency IV estimator demonstrates excellent performance across different specifications.

It is worth stressing that since the stochastic process $Z_t$ is observed at $m = 200$ time points, the number of endogenous regressors exceeds the sample size when $T = 100$. In this case, the conventional IV estimator does not exist. At the same time, the naive generalizations of the ridge regression and the LASSO are also not appropriate in our setting. The ridge regression would typically require $m/T \to 0$, cf. Carrasco et al. (2007b). The LASSO would require the approximate sparsity, somewhat stronger weak dependence conditions, and a growth condition on $m$ relative to $T$ governed by a parameter $\kappa$ that measures the tails and the weak dependence, cf. Babii et al. (2019).

Table 1: Experiments for $\beta(s) = -10\exp(s)$. Note: results for different sample sizes $T$, noise levels $\sigma$, and regularization parameters $\alpha$.

Table 2: Experiments for $\beta(s) = 10s$. Note: results for different sample sizes $T$, noise levels $\sigma$, and regularization parameters $\alpha$.

Figure 1: Summary of Monte Carlo experiments for $\beta(s) = -10\exp(s)$. Panels: (a) $\sigma = 0.5$, $T = 100$; (b) $\sigma = 0.5$, $T = 500$; (c) $\sigma = 0.5$, $T = 1000$; (d) $\sigma = 1$, $T = 100$; (e) $\sigma = 1$, $T = 500$; (f) $\sigma = 1$, $T = 1000$.

Figure 2: Summary of Monte Carlo experiments for $\beta(s) = 10s$. Same panel layout as Figure 1.

Real-time elasticity of electricity supply
At the beginning of the 1990s, electricity markets around the world were vertically integrated industries with prices set by regulators. Over the last 30 years, major countries experienced deregulation. Today, electricity is often sold at competitive spot markets where prices are determined according to the laws of supply and demand. Elasticities of supply and demand summarize the behavior of energy producers and consumers, inform market participants, and play an important role in policy design, forecasting, and energy planning. The real-time elasticity of supply contains an important piece of information on sellers' responses to intraday price fluctuations.

Most of the electricity in Australia is generated, sold, and bought at the National Electricity Market (NEM), which is one of the largest interconnected electricity systems in the world. The NEM started operating as a wholesale spot market in December 1998. It supplies about 200 terawatt-hours of electricity to around 9 million customers each year, reaching $16.6 billion of trades in 2016-2017. The supply and the demand come from over 100 competitive generators and retailers participating in the market and are matched instantaneously in real time through a centrally coordinated dispatch process. Generators offer to supply a fixed amount of electricity at a specific time in the future and can subsequently resubmit the offered amount and price if needed. The Australian Energy Market Operator (AEMO) decides which generators will produce electricity to meet the demand in the most cost-efficient way.

We construct a new dataset using publicly available data from the AEMO and the Australian Bureau of Meteorology for New South Wales in 1999-2018. The central pieces of the dataset are the daily aggregate quantities of electricity sold at the spot market, intraday high-frequency prices measured each half an hour, and the average daily temperatures.
The high-dimensional mixed-frequency IV regression model is
$$\log Q_t = \int_0^1\beta(s)\log P_t(s)\,\mathrm{d}s + U_t, \qquad \mathbb{E}[U_t \mid W_t] = 0,$$
where $Q_t$ is the quantity sold on a day $t$, $P_t(s)$ is the price at time $s$ on a day $t$, and $W_t$ is an instrumental variable. To estimate the supply elasticity, we use the discretized regression with half-hourly prices,
$$\log Q_t = 0.5\sum_{j=1}^{48}\beta(s_j)\log P_t(s_j) + U_t,$$
where $s_j = 0.5j$ with $j = 1, 2, \dots, 48$. We select the regularization parameter by minimizing the scaled squared $L^2$ norm of the residual of the inverse problem,
$$\mathrm{RSS}(\alpha) = \alpha^{-1}\left\|\hat K\hat\beta_\alpha - \hat r\right\|^2, \qquad \hat\beta_\alpha = (\alpha I + \hat K^*\hat K)^{-1}\hat K^*\hat r.$$
In our case, the minimum is reached at the value of $\alpha^*$ that can be seen from Figure 5, panel (a).

Figure 5, panel (b) displays the estimated intraday elasticity of supply using our high-dimensional mixed-frequency IV regression. We find that, depending on the hour, the price elasticity of supply ranges between 0.135 and 0.165. The elasticity is the highest in the evening, around 6 pm, and the lowest during the night. The supply appears to have a real-time price elasticity of a similar order of magnitude as the demand. The relatively inelastic supply may be attributed to the fact that the market participants are allowed to hedge financial risks and to the difficulty of adjusting the electricity production in real time.

While there is an extensive literature on forecasting with intraday electricity data, see, e.g., Aneiros Pérez, Vilar-Fernández, Cao Abad, and Muñoz San Roque (2013) and references therein, the structural econometric analysis of the real-time electricity data has received less attention; see Benatia et al. (2017) and Benatia (2018) for notable exceptions. The latter paper studies the multi-unit electricity auction in New York and estimates firm-level market power. It is also worth mentioning that the real-time price elasticities of demand have been previously estimated in Patrick and Wolak (2001) and Lijesen (2007), relying on a different econometric methodology.
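The selection rule for $\alpha$ amounts to a simple grid search. The sketch below implements it on simulated data standing in for the actual dataset (48 half-hourly grid points as in the application; the instrument, functional forms, and grid of candidate values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
T, m = 300, 48                                  # days, half-hourly grid points
s = np.arange(1, m + 1) / m

W = rng.standard_normal(T)                      # stand-in for the standardized temperature
logP = np.sqrt(s[None, :] + W[:, None] ** 2) + 0.5 * rng.standard_normal((T, m))
logQ = logP @ (0.15 * np.ones(m)) / m + 0.05 * rng.standard_normal(T)

Psi = 1.0 / (1.0 + np.exp(-np.outer(s, W)))
k_hat = Psi @ logP / T
r_hat = Psi @ logQ / T
KstarK = k_hat.T @ k_hat / m**2
Kstar_r = k_hat.T @ r_hat / m

def rss(alpha):
    """RSS(alpha) = alpha^{-1} * squared residual norm of the regularized solution."""
    beta_a = np.linalg.solve(alpha * np.eye(m) + KstarK, Kstar_r)
    resid = k_hat @ beta_a / m - r_hat
    return (resid @ resid) / m / alpha

grid = 10.0 ** np.linspace(-8, -1, 15)
alpha_star = min(grid, key=rss)
```

The residual norm shrinks as $\alpha \to 0$ while the factor $\alpha^{-1}$ grows, so the criterion typically attains an interior minimum on a logarithmic grid.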
Footnote: Patrick and Wolak (2001) find real-time demand elasticities between 0 and -0.27 for 5 industrial sectors in the UK.
[Figure 3: Quantities and Prices, in natural logarithms. Panel (a): distribution of quantities (density of log Q); panel (b): distribution of prices by hour of the day (00:00-23:00).]
[Figure 4: Temperature. Panel (a): density of temperature (10-45 °C); panel (b): temperature (°C) against log Q.]

[Figure 5: Estimator and optimal regularization parameter. Panel (a): elasticity of electricity supply b(t) by hour; panel (b): residual curve.]
This paper introduces a novel high-dimensional mixed-frequency IV regression and contributes to the growing literature on high-dimensional and mixed-frequency data. We show that the slope parameter of the high-dimensional endogenous regressor can be identified and accurately estimated leveraging an instrumental variable observed at a low frequency only. We characterize the identifying condition in the model and study the statistical properties of the Tikhonov-regularized estimator with time series data. The mixed-frequency IV estimator has a closed-form expression and is easy and fast to compute numerically. Our statistical analysis does not restrict the number of high-frequency observations of the process and can handle a number of covariates increasing with the sample size even faster than exponentially.

In our empirical application, we estimate the real-time price elasticity of supply on the Australian electricity spot market. We find that the supply is relatively inelastic and that its elasticity is heterogeneous throughout the day. To conclude, we note that our identification strategy with a low-frequency IV can also be applied to the instrumental variable model of Benatia et al. (2017) with a high-frequency dependent variable.

References
Elena Andreou, Eric Ghysels, and Andros Kourtellos. Regression models with mixed sampling frequencies. Journal of Econometrics, 158(2):246-261, 2010.
Germán Aneiros, Juan M. Vilar, Ricardo Cao, and Antonio Muñoz San Roque. Functional prediction for the residual demand in electricity spot markets. IEEE Transactions on Power Systems, 28(4):4201-4208, 2013.
Andrii Babii. Honest confidence sets in nonparametric IV regression and other ill-posed models. Econometric Theory (forthcoming), 2020.
Andrii Babii. Are unobservables separable? UNC Working Paper, 2020.
Andrii Babii and Jean-Pierre Florens. Is completeness necessary? Estimation and inference in non-identified models. UNC Working Paper, 2018.
Andrii Babii, Eric Ghysels, and Jonas Striaukas. Estimation and HAC-based inference for machine learning time series regressions. UNC Working Paper, 2019.
Alexandre Belloni, Victor Chernozhukov, and Christian Hansen. Lasso methods for Gaussian instrumental variables models. MIT Department of Economics Working Paper, 2011.
Alexandre Belloni, Daniel Chen, Victor Chernozhukov, and Christian Hansen. Sparse models and methods for optimal instruments with an application to eminent domain. Econometrica, 80(6):2369-2429, 2012.
Alexandre Belloni, Victor Chernozhukov, and Christian Hansen. Inference on treatment effects after selection among high-dimensional controls. The Review of Economic Studies, 81(2):608-650, 2014.
Alexandre Belloni, Victor Chernozhukov, Denis Chetverikov, Christian Hansen, and Kengo Kato. High-dimensional econometrics and regularized GMM. arXiv preprint arXiv:1806.01888, 2018.
David Benatia. Functional econometrics of multi-unit auctions: an application to the New York electricity market. CREST Working Paper, 2018.
David Benatia, Marine Carrasco, and Jean-Pierre Florens. Functional linear regression with functional response. Journal of Econometrics, 201(2):269-291, 2017.
Herman J. Bierens. Consistent model specification tests. Journal of Econometrics, 20(1):105-134, 1982.
Denis Bosq. Linear Processes in Function Spaces: Theory and Applications, volume 149. Springer Science & Business Media, 2000.
Marine Carrasco and Jean-Pierre Florens. Generalization of GMM to a continuum of moment conditions. Econometric Theory, 16(6):797-834, 2000.
Marine Carrasco. A regularization approach to the many instruments problem. Journal of Econometrics, 170(2):383-398, 2012.
Marine Carrasco, Mikhail Chernov, Jean-Pierre Florens, and Eric Ghysels. Efficient estimation of general dynamic models with a continuum of moment conditions. Journal of Econometrics, 140(2):529-573, 2007.
Marine Carrasco, Jean-Pierre Florens, and Eric Renault. Linear inverse problems in structural econometrics: estimation based on spectral decomposition and regularization. Handbook of Econometrics, Vol. 6B, 5633-5751, 2007.
Marine Carrasco, Jean-Pierre Florens, and Eric Renault. Asymptotic normal inference in linear inverse problems. The Oxford Handbook of Applied Nonparametric and Semiparametric Econometrics and Statistics, 2014.
Manuel A. Dominguez and Ignacio N. Lobato. Consistent estimation of models defined by conditional moment restrictions. Econometrica, 72(5):1601-1615, 2004.
David Donoho. High-dimensional data analysis: the curses and blessings of dimensionality. AMS Math Challenges Lecture, 2000.
Frédérique Fève and Jean-Pierre Florens. The practice of non-parametric estimation by solving inverse problems: the example of transformation models. The Econometrics Journal, 13(3):S1-S27, 2010.
Jean-Pierre Florens and Senay Sokullu. Is there an optimal weighting for linear inverse problems? University of Bristol Working Papers, 2018.
Jean-Pierre Florens and Sébastien Van Bellegem. Instrumental variable estimation in functional linear models. Journal of Econometrics, 186(2):465-476, 2015.
Claudia Foroni, Massimiliano Marcellino, and Christian Schumacher. Unrestricted mixed data sampling (MIDAS): MIDAS regressions with unrestricted lag polynomials. Journal of the Royal Statistical Society: Series A (Statistics in Society), 178(1):57-82, 2015.
Patrick Gagliardini and Olivier Scaillet. Tikhonov regularization for nonparametric instrumental variable estimators. Journal of Econometrics, 167(1):61-75, 2012.
John Geweke. Temporal aggregation in the multiple regression model. Econometrica, 46(3):643-661, 1978.
Eric Ghysels and Jonathan H. Wright. MIDAS instruments. UNC Working Paper, 2010.
Eric Ghysels, Pedro Santa-Clara, and Rossen Valkanov. The MIDAS touch: mixed data sampling regression models. UNC Working Paper, 2004.
Eric Ghysels, Arthur Sinko, and Rossen Valkanov. MIDAS regressions: further results and new directions. Econometric Reviews, 26(1):53-90, 2007.
Peter R. Hansen and Asger Lunde. Realized variance and market microstructure noise. Journal of Business & Economic Statistics, 24(2):127-161, 2006.
Lynda Khalaf, Maral Kichian, Charles Saunders, and Marcel Voia. Dynamic panels with MIDAS covariates: nonlinearity, estimation and fit. Technical report, 2017.
Albert S. Kyle. Continuous auctions and insider trading. Econometrica, 53(6):1315-1335, 1985.
Pascal Lavergne and Valentin Patilea. Smooth minimum distance estimation and testing with conditional estimating equations: uniform in bandwidth theory. Journal of Econometrics, 177(1):47-59, 2013.
Mark G. Lijesen. The real-time price elasticity of electricity. Energy Economics, 29(2):249-258, 2007.
Robert H. Patrick and Frank A. Wolak. Estimating the customer-level demand for electricity under real-time market prices. Technical report, National Bureau of Economic Research, 2001.
Christopher A. Sims. Discrete approximations to continuous time distributed lags in econometrics. Econometrica, 39(3):545-563, 1971.
Maxwell B. Stinchcombe and Halbert White. Consistent specification testing with nuisance parameters present only under the alternative. Econometric Theory, 14(3):295-325, 1998.
Andrei N. Tikhonov. On the regularization of ill-posed problems (in Russian). Doklady Akademii Nauk SSSR, 153(1):49-52, 1963.
Andrey N. Tikhonov. On the solution of ill-posed problems and the method of regularization. Doklady Akademii Nauk SSSR, 151(3):501-504, 1963.
Philip G. Wright. Tariff on Animal and Vegetable Oils. Macmillan Company, New York, 1928.
Lan Zhang, Per A. Mykland, and Yacine Aït-Sahalia. A tale of two time scales: determining integrated volatility with noisy high-frequency data. Journal of the American Statistical Association, 100(472), 2005.
APPENDIX
A.1 Proofs
Notation:
We use $L^2(S)$ to denote the space of functions on $S \subset \mathbb{R}^d$ that are square-integrable with respect to the Lebesgue measure. We endow the space $L^2(S)$ with the natural inner product $\langle \beta,\gamma\rangle = \int_S \beta(s)\gamma(s)\,\mathrm{d}s$ and the norm $\|\beta\| = \sqrt{\langle\beta,\beta\rangle}$ for all $\beta,\gamma\in L^2(S)$. Any vector $a\in\mathbb{R}^m$ should be considered a column vector and can be written as $a=(a_j)_{1\le j\le m}$. For a bounded linear operator $K:E\to F$ between two Hilbert spaces $E$ and $F$, let $\|K\|_\infty = \sup_{\|\varphi\|\le 1}\|K\varphi\|$ denote its operator norm, and let $\sigma(K^*K)$ denote the spectrum of the corresponding self-adjoint operator $K^*K$. The $m\times T$ matrix $A$ is written by enumerating all its elements, $A=(A_{j,t})_{1\le j\le m,\,1\le t\le T}$; if $m=T$, we simply write $A=(A_{ij})_{1\le i,j\le m}$. We use
$$C_L^\kappa[0,1] = \left\{ f:[0,1]\to\mathbb{R}\ :\ \max_{0\le k\le\lfloor\kappa\rfloor}\|f^{(k)}\|_\infty\le L,\ \sup_{s\ne s'}\frac{|f^{(\lfloor\kappa\rfloor)}(s)-f^{(\lfloor\kappa\rfloor)}(s')|}{|s-s'|^{\kappa-\lfloor\kappa\rfloor}}\le L\right\}$$
to denote the space of Hölder continuous functions with common parameters $\kappa,L>0$. Lastly, for $a,b\in\mathbb{R}$, put $a\vee b=\max\{a,b\}$ and $a\wedge b=\min\{a,b\}$.

To prove Theorem 3.1, we need two auxiliary lemmas. The first lemma bounds the expected squared norm of the sample mean of a covariance-stationary zero-mean $L^2(S)$-valued stochastic process $(X_t)_{t\in\mathbb{Z}}$ by the norm of its autocovariance function $\gamma_h$.

Lemma A.1.1. Suppose that $(X_t)_{t\in\mathbb{Z}}$ is a zero-mean covariance-stationary process in $L^2(S)$ with absolutely summable autocovariance function,
$$\sum_{h\in\mathbb{Z}}\|\gamma_h\| < \infty, \qquad \text{where } \|\gamma_h\| = \int_S |\gamma_h(s,s)|\,\mathrm{d}s.$$
Then
$$\mathbb{E}\left\|\frac{1}{T}\sum_{t=1}^T X_t\right\|^2 \le \frac{1}{T}\sum_{h\in\mathbb{Z}}\|\gamma_h\|.$$
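Before turning to the proof, the inequality can be sanity-checked numerically on a simple functional MA(1) process, for which the autocovariance sum is available in closed form. The design below (Gaussian innovations on an equispaced grid of $[0,1]$) is an illustrative assumption, not part of the paper.

```python
import numpy as np

# Numerical check of Lemma A.1.1 for a functional MA(1) process
# X_t(s) = e_t(s) + theta * e_{t-1}(s), with e_t iid standard normal on a grid.
# Here gamma_0(s,s) = 1 + theta^2, gamma_{+-1}(s,s) = theta, and gamma_h = 0 else,
# so (1/T) * sum_h ||gamma_h|| = (1 + theta^2 + 2|theta|) / T.
rng = np.random.default_rng(42)
T, m, theta, R = 200, 50, -0.5, 500

bound = (1 + theta**2 + 2 * abs(theta)) / T   # right-hand side of the lemma

mse = 0.0
for _ in range(R):
    e = rng.normal(size=(T + 1, m))
    X = e[1:] + theta * e[:-1]                # (T, m) sample paths on the grid
    Xbar = X.mean(axis=0)                     # sample mean function
    mse += np.mean(Xbar**2)                   # ||Xbar||^2 via a Riemann sum on [0,1]
mse /= R                                      # Monte Carlo estimate of E||Xbar||^2

print(mse, bound)
```

With a negative $\theta$ the cancellation in the actual variance makes the bound conservative, so the Monte Carlo estimate sits well below it.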
Proof. We have
$$\mathbb{E}\left\|\frac{1}{T}\sum_{t=1}^T X_t\right\|^2 = \frac{1}{T^2}\,\mathbb{E}\left\langle\sum_{t=1}^T X_t,\sum_{k=1}^T X_k\right\rangle = \frac{1}{T^2}\sum_{t,k=1}^T\int_S \mathbb{E}[X_t(s)X_k(s)]\,\mathrm{d}s = \frac{1}{T^2}\sum_{t,k=1}^T\int_S\gamma_{t-k}(s,s)\,\mathrm{d}s$$
$$= \frac{1}{T}\sum_{|h|<T}\left(1-\frac{|h|}{T}\right)\int_S\gamma_h(s,s)\,\mathrm{d}s \le \frac{1}{T}\sum_{h\in\mathbb{Z}}\|\gamma_h\|.$$

The second lemma relates the estimation errors of the operator $\hat K$ and of $\hat r$ to the estimation errors of the kernels.

Lemma A.1.2. Suppose that $\hat k,k,\hat r,r,\beta$ are square-integrable. Then
$$\mathbb{E}\|\hat K-K\|_\infty^2 \le \mathbb{E}\|\hat k-k\|^2 \quad\text{and}\quad \mathbb{E}\|\hat r-\hat K\beta\|^2 \le 2\,\mathbb{E}\|\hat r-r\|^2 + 2\|\beta\|^2\,\mathbb{E}\|\hat k-k\|^2.$$

Proof. By the definition of the operator norm and the Cauchy-Schwarz inequality,
$$\mathbb{E}\|\hat K-K\|_\infty^2 = \mathbb{E}\left[\sup_{\|\varphi\|\le 1}\|\hat K\varphi - K\varphi\|^2\right] = \mathbb{E}\left[\sup_{\|\varphi\|\le 1}\int\left|\int \varphi(s)\left(\hat k(s,u)-k(s,u)\right)\mathrm{d}s\right|^2\mathrm{d}u\right] \le \mathbb{E}\|\hat k-k\|^2. \tag{A.1}$$
For the second part, use $r=K\beta$, $\|a+b\|^2\le 2\|a\|^2+2\|b\|^2$, $\|\hat K\beta-K\beta\|\le\|\hat K-K\|_\infty\|\beta\|$, and the estimate in Eq. A.1:
$$\mathbb{E}\|\hat r-\hat K\beta\|^2 \le 2\,\mathbb{E}\|\hat r-r\|^2 + 2\,\mathbb{E}\|\hat K\beta-K\beta\|^2 \le 2\,\mathbb{E}\|\hat r-r\|^2 + 2\|\beta\|^2\,\mathbb{E}\|\hat k-k\|^2.$$

Proof of Theorem 3.1. The proof is based on the following decomposition:
$$\hat\beta-\beta = R_1+R_2+R_3+R_4$$
with
$$\begin{aligned}
R_1 &= (\alpha I+\hat K^*\hat K)^{-1}\hat K^*(\hat r-\hat K\beta),\\
R_2 &= \alpha(\alpha I+\hat K^*\hat K)^{-1}\hat K^*(\hat K-K)(\alpha I+K^*K)^{-1}\beta,\\
R_3 &= \alpha(\alpha I+\hat K^*\hat K)^{-1}(\hat K^*-K^*)K(\alpha I+K^*K)^{-1}\beta,\\
R_4 &= (\alpha I+K^*K)^{-1}K^*K\beta-\beta.
\end{aligned}$$
To see that this decomposition holds, note that
$$\begin{aligned}
R_2+R_3 &= \alpha(\alpha I+\hat K^*\hat K)^{-1}\left[\hat K^*\hat K-K^*K\right](\alpha I+K^*K)^{-1}\beta\\
&= \alpha(\alpha I+\hat K^*\hat K)^{-1}\left[(\alpha I+\hat K^*\hat K)-(\alpha I+K^*K)\right](\alpha I+K^*K)^{-1}\beta\\
&= \alpha(\alpha I+K^*K)^{-1}\beta - \alpha(\alpha I+\hat K^*\hat K)^{-1}\beta\\
&= \left[I-\alpha(\alpha I+\hat K^*\hat K)^{-1}\right]\beta + \left[\alpha(\alpha I+K^*K)^{-1}-I\right]\beta\\
&= (\alpha I+\hat K^*\hat K)^{-1}\hat K^*\hat K\beta - (\alpha I+K^*K)^{-1}K^*K\beta.
\end{aligned}$$
Therefore,
$$\mathbb{E}\|\hat\beta-\beta\|^2 \le 4\,\mathbb{E}\|R_1\|^2 + 4\,\mathbb{E}\|R_2\|^2 + 4\,\mathbb{E}\|R_3\|^2 + 4\|R_4\|^2.$$
The fourth term is the regularization bias, and its order follows directly from Assumption 3.2 and the isometry of the functional calculus:
$$\|R_4\| = \left\|\left[(\alpha I+K^*K)^{-1}K^*K-I\right]\beta\right\| \le \left\|\left[I-(\alpha I+K^*K)^{-1}K^*K\right](K^*K)^\gamma\right\|_\infty R = \sup_{\lambda\in\sigma(K^*K)}\left|\left(1-\frac{\lambda}{\alpha+\lambda}\right)\lambda^\gamma\right| R = \sup_{\lambda\in\sigma(K^*K)}\left|\frac{\lambda^\gamma\alpha}{\alpha+\lambda}\right| R.$$
We can have two cases depending on the value of $\gamma>0$. For $\gamma\in(0,1)$, the function $\lambda\mapsto\frac{\lambda^\gamma}{\alpha+\lambda}$ attains its maximum at $\lambda = \frac{\gamma}{1-\gamma}\alpha$. For $\gamma\ge 1$, the function $\lambda\mapsto\frac{\lambda^\gamma}{\alpha+\lambda}$ is strictly increasing on $[0,\infty)$, attaining its maximum at the end of the spectrum, $\lambda = \|K^*K\|_\infty$. Therefore, since $\gamma^\gamma(1-\gamma)^{1-\gamma}\le 1$ for $\gamma\in(0,1)$,
$$\sup_{\lambda\in\sigma(K^*K)}\frac{\lambda^\gamma}{\alpha+\lambda} \le \begin{cases}\|K^*K\|_\infty^{\gamma-1}, & \gamma\ge 1,\\ \alpha^{\gamma-1}, & \gamma\in(0,1).\end{cases}$$
This gives $\|R_4\|^2 \le \alpha^{2\gamma}R^2$ since $\gamma\in(0,1]$. Similar computations give
$$\mathbb{E}\|R_1\|^2 \le \mathbb{E}\left[\sup_{\lambda\in\sigma(\hat K^*\hat K)}\left|\frac{\lambda^{1/2}}{\alpha+\lambda}\right|^2\left\|\hat r-\hat K\beta\right\|^2\right] \le \frac{1}{2\alpha}\left(\mathbb{E}\|\hat r-r\|^2 + \|\beta\|^2\,\mathbb{E}\|\hat k-k\|^2\right),$$
where the last inequality follows from $\sup_{\lambda\ge 0}\frac{\lambda^{1/2}}{\alpha+\lambda}\le\frac{1}{2\sqrt\alpha}$ and Lemma A.1.2. (Note that $\hat K$ is a finite-rank operator, hence compact.) Next,
$$\mathbb{E}\|R_2\|^2 \le \mathbb{E}\left[\sup_{\lambda\in\sigma(\hat K^*\hat K)}\left|\frac{\lambda^{1/2}}{\alpha+\lambda}\right|^2\left\|\hat K-K\right\|_\infty^2\right]\sup_{\lambda\in\sigma(K^*K)}\left|\frac{\lambda^\gamma\alpha}{\alpha+\lambda}\right|^2 R^2 \le \frac{\alpha^{2\gamma}}{4\alpha}\,\mathbb{E}\|\hat k-k\|^2\,R^2,$$
where the last inequality follows by Lemma A.1.2. Likewise, for the third term,
$$\mathbb{E}\|R_3\|^2 \le \mathbb{E}\left[\left\|(\alpha I+\hat K^*\hat K)^{-1}\right\|_\infty^2\left\|\hat K^*-K^*\right\|_\infty^2\right]\sup_{\lambda\in\sigma(K^*K)}\left|\frac{\lambda^{\gamma+1/2}\alpha}{\alpha+\lambda}\right|^2 R^2 \le \frac{\alpha^{2\gamma\wedge 1}}{\alpha}\,\|K^*K\|_\infty^{(2\gamma-1)\vee 0}\,\mathbb{E}\|\hat k-k\|^2\,R^2.$$
Lastly, let $\gamma_h^{(1)}$ be the autocovariance function of the process $(Y_t\Psi(\cdot,W_t))_{t\in\mathbb{Z}}$ and let $\gamma_h^{(2)}$ be the autocovariance function of the process $(Z_t(\cdot)\Psi(\cdot,W_t))_{t\in\mathbb{Z}}$. Put
$$\eta_1 = \sum_{h\in\mathbb{Z}}\|\gamma_h^{(1)}\| \quad\text{and}\quad \eta_2 = \sum_{h\in\mathbb{Z}}\|\gamma_h^{(2)}\|.$$
Then, under Assumption 3.1, by Lemma A.1.1,
$$\mathbb{E}\|\hat r-r\|^2 = \mathbb{E}\left\|\frac{1}{T}\sum_{t=1}^T\{Y_t\Psi(\cdot,W_t)-\mathbb{E}[Y_t\Psi(\cdot,W_t)]\}\right\|^2 \le \frac{\eta_1}{T}$$
and
$$\mathbb{E}\|\hat k-k\|^2 = \mathbb{E}\left\|\frac{1}{T}\sum_{t=1}^T\{Z_t(\cdot)\Psi(\cdot,W_t)-\mathbb{E}[Z(\cdot)\Psi(\cdot,W)]\}\right\|^2 \le \frac{\eta_2}{T}.$$
Combining all the estimates, we obtain
$$\mathbb{E}\|\hat\beta-\beta\|^2 \le \frac{2(\eta_1+\eta_2\|\beta\|^2)}{\alpha T} + \frac{R^2\eta_2}{\alpha T}\left(\alpha^{2\gamma} + 4\,\alpha^{2\gamma\wedge 1}\|K^*K\|_\infty^{(2\gamma-1)\vee 0}\right) + 4\,\alpha^{2\gamma}R^2. \tag{A.2}$$

Proof of Theorem 3.2. Decompose
$$\hat\beta_m-\beta = \hat\beta_m-\hat\beta + \hat\beta-\beta.$$
By Theorem 3.1, we know that $\mathbb{E}\|\hat\beta-\beta\|^2 = O\left(\frac{1}{\alpha T}+\alpha^{2\gamma}\right)$. Consequently, it remains to control $\mathbb{E}\|\hat\beta_m-\hat\beta\|^2$. To that end, note that if $\hat\psi_m$ solves
$$(\alpha I+\hat K_m\hat K^*)\hat\psi_m = \hat r,$$
then $\hat\beta_m = \hat K^*\hat\psi_m$. Therefore,
$$\hat\beta_m = \hat K^*(\alpha I+\hat K_m\hat K^*)^{-1}\hat r.$$
Next, decompose
$$\begin{aligned}
\hat\beta_m-\hat\beta &= \hat K^*(\alpha I+\hat K_m\hat K^*)^{-1}\hat r - \hat K^*(\alpha I+\hat K\hat K^*)^{-1}\hat r\\
&= \hat K^*\left[(\alpha I+\hat K_m\hat K^*)^{-1} - (\alpha I+\hat K\hat K^*)^{-1}\right]\hat r\\
&= \hat K^*(\alpha I+\hat K\hat K^*)^{-1}\left[\hat K\hat K^* - \hat K_m\hat K^*\right](\alpha I+\hat K_m\hat K^*)^{-1}\hat r\\
&= \hat K^*(\alpha I+\hat K\hat K^*)^{-1}(\hat K-\hat K_m)\hat K^*(\alpha I+\hat K_m\hat K^*)^{-1}\hat r.
\end{aligned}$$
Then
$$\|\hat\beta_m-\hat\beta\| \le \left\|\hat K^*(\alpha I+\hat K\hat K^*)^{-1}\right\|_\infty\left\|(\hat K-\hat K_m)\hat K^*\right\|_\infty\left\|(\alpha I+\hat K_m\hat K^*)^{-1}\right\|_\infty\|\hat r\| \le \frac{\|\hat r\|}{2\alpha^{3/2}}\left\|(\hat K-\hat K_m)\hat K^*\right\|_\infty.$$
Next, the expression inside the operator norm is an integral operator on $L^2$:
$$(\hat K-\hat K_m)\hat K^*\psi = \int\psi(u)\left(\int\hat k(s,v)\hat k(s,u)\,\mathrm{d}s - \sum_{j=1}^m\hat k(s_j,v)\hat k(s_j,u)\,\delta_j\right)\mathrm{d}u.$$
Therefore, by the same computations as in Eq. A.1 and the triangle inequality,
$$\begin{aligned}
\left\|(\hat K-\hat K_m)\hat K^*\right\|_\infty &\le \left\|\int\hat k(s,\cdot)\hat k(s,\cdot)\,\mathrm{d}s - \sum_{j=1}^m\hat k(s_j,\cdot)\hat k(s_j,\cdot)\,\delta_j\right\|\\
&= \left\|\frac{1}{T^2}\sum_{t=1}^T\sum_{k=1}^T\Psi(\cdot,W_t)\Psi(\cdot,W_k)\left(\int Z_t(s)Z_k(s)\,\mathrm{d}s - \sum_{j=1}^m Z_t(s_j)Z_k(s_j)\,\delta_j\right)\right\|\\
&\le \frac{1}{T^2}\sum_{t=1}^T\sum_{k=1}^T\|\Psi(\cdot,W_t)\|\|\Psi(\cdot,W_k)\|\left|\int Z_t(s)Z_k(s)\,\mathrm{d}s - \sum_{j=1}^m Z_t(s_j)Z_k(s_j)\,\delta_j\right|\\
&\le \max_{1\le t\le T}\|\Psi(\cdot,W_t)\|^2\max_{1\le t,k\le T}\left|\sum_{j=1}^m Z_t(s_j)Z_k(s_j)\,\delta_j - \int Z_t(s)Z_k(s)\,\mathrm{d}s\right|.
\end{aligned}$$
Under Assumption 3.3 (i),
$$|Z_t(s_j)Z_k(s_j) - Z_t(s)Z_k(s)| \le |Z_t(s_j)-Z_t(s)||Z_k(s_j)| + |Z_t(s)||Z_k(s_j)-Z_k(s)| \le 2L^2|s_j-s|^\kappa,$$
and whence
$$\left|\sum_{j=1}^m Z_t(s_j)Z_k(s_j)\,\delta_j - \int Z_t(s)Z_k(s)\,\mathrm{d}s\right| = \left|\sum_{j=1}^m\int_{s_{j-1}}^{s_j}\{Z_t(s_j)Z_k(s_j)-Z_t(s)Z_k(s)\}\,\mathrm{d}s\right| \le 2L^2\sum_{j=1}^m\int_{s_{j-1}}^{s_j}|s_j-s|^\kappa\,\mathrm{d}s \le 2L^2\max_{1\le j\le m}\delta_j^\kappa.$$
Therefore,
$$\mathbb{E}\|\hat\beta_m-\hat\beta\|^2 \le \frac{\mathbb{E}\|\hat r\|^2}{\alpha^3}\,L^4\bar\Psi^4\max_{1\le j\le m}\delta_j^{2\kappa} = O\left(\frac{\Delta_m^{2\kappa}}{\alpha^3}\right) = O\left(\frac{1}{\alpha T}\right),$$
where $\bar\Psi$ bounds $\max_{1\le t\le T}\|\Psi(\cdot,W_t)\|$.
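The quadrature-error bound in the last step is easy to check numerically. The sketch below uses a single $\kappa$-Hölder function with $\kappa=1/2$ and an equispaced right-endpoint grid; both the function and the grid sizes are illustrative assumptions, not part of the paper.

```python
import numpy as np

# Illustration of the quadrature-error bound used in the proof of Theorem 3.2:
# for a kappa-Holder function (constant L = 1 here), a right-endpoint Riemann sum
# on an equispaced grid deviates from the integral by at most O(max_j delta_j^kappa).
kappa = 0.5
f = lambda s: np.abs(s - 0.5) ** kappa        # Holder continuous with kappa = 1/2
exact = (4.0 / 3.0) * 0.5 ** 1.5              # int_0^1 |s - 1/2|^(1/2) ds

def riemann_error(m):
    s = np.arange(1, m + 1) / m               # right endpoints s_j = j/m, delta_j = 1/m
    return abs(np.sum(f(s)) / m - exact)

errors = {m: riemann_error(m) for m in (48, 192, 768)}
for m, err in errors.items():
    print(m, err, (1.0 / m) ** kappa)
```

The observed errors sit far below the theoretical envelope $\max_j \delta_j^\kappa$, since the bound does not exploit cancellation across subintervals.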