Publication


Featured research published by Qiwei Yao.


Journal of the American Statistical Association | 1999

Methods for Estimating a Conditional Distribution Function

Peter Hall; Rodney C. Wolff; Qiwei Yao

Motivated by the problem of setting prediction intervals in time series analysis, we suggest two new methods for conditional distribution estimation. The first method is based on locally fitting a logistic model and is in the spirit of recent work on locally parametric techniques in density estimation. It produces distribution estimators that may be of arbitrarily high order but nevertheless always lie between 0 and 1. The second method involves an adjusted form of the Nadaraya–Watson estimator. It preserves the bias and variance properties of a class of second-order estimators introduced by Yu and Jones but has the added advantage of always being a distribution itself. Our methods also have application outside the time series setting; for example, to quantile estimation for independent data. This problem motivated the work of Yu and Jones.
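As a rough illustration of the starting point above (the plain Nadaraya–Watson form, not the paper's adjusted or locally logistic estimators), a minimal NumPy sketch with arbitrary simulated data and bandwidth:

```python
import numpy as np

def nw_conditional_cdf(x0, y0, X, Y, h):
    """Plain Nadaraya-Watson estimate of P(Y <= y0 | X = x0):
    a kernel-weighted average of the indicators 1{Y_i <= y0}."""
    w = np.exp(-0.5 * ((x0 - X) / h) ** 2)      # Gaussian kernel weights
    return float(np.sum(w * (Y <= y0)) / np.sum(w))

# Toy model: Y | X = x  ~  N(x, 1), so P(Y <= 0 | X = 0) = 0.5.
rng = np.random.default_rng(0)
X = rng.normal(size=500)
Y = X + rng.normal(size=500)
est = nw_conditional_cdf(0.0, 0.0, X, Y, h=0.3)
```

Because the estimate is a weighted average of indicators, it automatically lies in [0, 1] and is monotone in y0, which is the property the paper's adjusted version is designed to preserve while improving the bias.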


Econometrica | 2003

Inference in ARCH and GARCH Models with Heavy-Tailed Errors

Peter Hall; Qiwei Yao

ARCH and GARCH models directly address the dependency of conditional second moments, and have proved particularly valuable in modelling processes where a relatively large degree of fluctuation is present. These include financial time series, which can be particularly heavy tailed. However, little is known about properties of ARCH or GARCH models in the heavy-tailed setting, and no methods are available for approximating the distributions of parameter estimators there. In this paper we show that, for heavy-tailed errors, the asymptotic distributions of quasi-maximum likelihood parameter estimators in ARCH and GARCH models are nonnormal, and are particularly difficult to estimate directly using standard parametric methods. Standard bootstrap methods also fail to produce consistent estimators. To overcome these problems we develop percentile-t, subsample bootstrap approximations to estimator distributions. Studentizing is employed to approximate scale, and the subsample bootstrap is used to estimate shape. The good performance of this approach is demonstrated both theoretically and numerically.
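The percentile-t subsampling recipe is generic. A hedged sketch of the idea on a simpler problem (a heavy-tailed mean rather than GARCH parameters, with arbitrary subsample size and replication count): studentize the statistic on subsamples of size m much smaller than n, and use the subsample quantiles to calibrate an interval.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_t(df=3, size=2000)   # heavy-tailed sample, true mean 0
n, m, B = len(x), 200, 500            # subsample size m << n

def sub_t(sample, full_mean):
    """Studentized subsample statistic: scale is handled by studentizing,
    shape by the subsample distribution."""
    return np.sqrt(len(sample)) * (sample.mean() - full_mean) / sample.std(ddof=1)

ts = np.array([sub_t(rng.choice(x, size=m, replace=False), x.mean())
               for _ in range(B)])
lo, hi = np.quantile(ts, [0.025, 0.975])
se = x.std(ddof=1) / np.sqrt(n)
ci = (x.mean() - hi * se, x.mean() - lo * se)  # percentile-t interval
```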


Annals of Statistics | 2012

Factor modeling for high-dimensional time series: inference for the number of factors

Clifford Lam; Qiwei Yao

This paper deals with factor modeling for high-dimensional time series from a dimension-reduction viewpoint. Under stationary settings, the inference is simple in the sense that both the number of factors and the factor loadings are estimated in terms of an eigenanalysis of a nonnegative definite matrix, and is therefore applicable when the dimension of the time series is on the order of a few thousand. Asymptotic properties of the proposed method are investigated under two settings: (i) the sample size goes to infinity while the dimension of the time series is fixed; and (ii) both the sample size and the dimension of the time series go to infinity together. In particular, our estimators for zero eigenvalues enjoy faster convergence (or slower divergence) rates, hence making the estimation of the number of factors easier. When the sample size and the dimension of the time series go to infinity together, the estimators for the eigenvalues are no longer consistent. However, our estimator for the number of factors, which is based on the ratios of the estimated eigenvalues, still works fine. Furthermore, this estimation exhibits the so-called “blessing of dimensionality” property in the sense that the performance of the estimation may improve when the dimension of the time series increases. A two-step procedure is investigated when the factors are of different degrees of strength. Numerical illustration with both simulated and real data is also reported.
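The eigenanalysis-plus-ratio idea can be sketched in a few lines. This is a simplified stand-in, not the paper's exact construction: it accumulates lagged sample autocovariances into a nonnegative definite matrix and picks the factor count where the ratio of consecutive eigenvalues is smallest; the lag cutoff and simulation design are arbitrary.

```python
import numpy as np

def estimate_num_factors(Y, k0=2, R=None):
    """Y: (n, p) observations. Returns the estimated number of factors via
    the minimum-eigenvalue-ratio rule applied to M = sum_k S_k S_k'."""
    n, p = Y.shape
    Yc = Y - Y.mean(axis=0)
    M = np.zeros((p, p))
    for k in range(1, k0 + 1):
        S = Yc[k:].T @ Yc[:-k] / n   # lag-k sample autocovariance
        M += S @ S.T                  # nonnegative definite accumulation
    lam = np.sort(np.linalg.eigvalsh(M))[::-1]   # eigenvalues, descending
    R = R or p // 2
    ratios = lam[1:R + 1] / lam[:R]
    return int(np.argmin(ratios)) + 1

# Simulate p = 20 series driven by r = 3 AR(1) factors plus white noise.
rng = np.random.default_rng(1)
n, p, r = 400, 20, 3
A = rng.normal(size=(p, r))
f = np.zeros((n, r))
for t in range(1, n):
    f[t] = 0.7 * f[t - 1] + rng.normal(size=r)
Y = f @ A.T + 0.5 * rng.normal(size=(n, p))
r_hat = estimate_num_factors(Y)
```

White noise contributes essentially nothing to lagged autocovariances, so only the factor-driven directions survive in M; that is what makes the ratio estimator work.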


Journal of Statistical Planning and Inference | 1998

Linearity testing using local polynomial approximation

Vidar Hjellvik; Qiwei Yao; Dag Tjøstheim

We use local polynomial approximation to estimate the conditional mean and conditional variance, and test linearity by using a functional measuring the deviation between the nonparametric estimates and the parametric estimates based on a linear model. We also employ first- and second-order derivatives for this purpose, and we point out some advantages of using local polynomial approximation as opposed to kernel estimation in the context of linearity testing. The asymptotic theory of the test functionals is developed in some detail for a special case. It is used to draw qualitative conclusions concerning the bandwidth, but in order to apply the asymptotic distribution to specific testing problems very large sample sizes are needed. For moderate sample sizes we have examined a bootstrap alternative in a large variety of situations. We have tried bandwidths suggested by asymptotic results as well as bandwidths obtained by cross-validation.
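A minimal sketch of the deviation-functional idea, with a kernel smoother standing in for the paper's local polynomial fit and all data and bandwidths chosen arbitrarily: compare a nonparametric conditional-mean estimate against the fit of a linear model; a large average squared deviation is evidence against linearity.

```python
import numpy as np

def smooth(xg, x, y, h):
    """Kernel (Nadaraya-Watson) estimate of E[Y | X] on grid xg."""
    w = np.exp(-0.5 * ((xg[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def deviation(x, y, h=0.3):
    """Average squared gap between the nonparametric and linear fits."""
    b, a = np.polyfit(x, y, 1)               # slope, intercept of linear model
    grid = np.linspace(-1.5, 1.5, 50)
    return float(np.mean((smooth(grid, x, y, h) - (a + b * grid)) ** 2))

rng = np.random.default_rng(7)
x = rng.uniform(-2, 2, size=800)
y_lin = 1.0 + 0.5 * x + 0.2 * rng.normal(size=800)   # truly linear data
y_nl = np.sin(2 * x) + 0.2 * rng.normal(size=800)    # nonlinear data
T_lin, T_nl = deviation(x, y_lin), deviation(x, y_nl)
```

Under linearity the two fits nearly coincide, so the functional stays near zero; under the nonlinear alternative it is an order of magnitude larger. Calibrating the cutoff is what the asymptotic theory and bootstrap in the paper are for.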


Journal of the American Statistical Association | 2007

To How Many Simultaneous Hypothesis Tests Can Normal, Student's t or Bootstrap Calibration Be Applied?

Jianqing Fan; Peter Hall; Qiwei Yao

In the analysis of microarray data, and in some other contemporary statistical problems, it is not uncommon to apply hypothesis tests in a highly simultaneous way. The number, N say, of tests used can be much larger than the sample sizes, n, to which the tests are applied, yet we wish to calibrate the tests so that the overall level of the simultaneous test is accurate. Often the sampling distribution is quite different for each test, so there may not be an opportunity to combine data across samples. In this setting, how large can N be, as a function of n, before level accuracy becomes poor? Here we answer this question in cases where the statistic under test is of Student's t type. We show that if either the normal or Student's t distribution is used for calibration, then the level of the simultaneous test is accurate provided that log N increases at a strictly slower rate than n^(1/3) as n diverges. On the other hand, if bootstrap methods are used for calibration, then we may choose log N almost as large as n^(1/2) and still achieve asymptotic level accuracy. The implications of these results are explored both theoretically and numerically.
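To see why log N is the natural scale, note that with N simultaneous two-sided tests at overall level alpha, a Bonferroni-style normal calibration uses the per-test quantile z = Phi^(-1)(1 - alpha/(2N)), which grows only like sqrt(2 log N). A small self-contained illustration:

```python
from math import log, sqrt
from statistics import NormalDist

def normal_critical_value(N, alpha=0.05):
    """Two-sided per-test threshold for N simultaneous tests
    under a Bonferroni-style normal calibration."""
    return NormalDist().inv_cdf(1 - alpha / (2 * N))

z1 = normal_critical_value(1)          # the usual single-test cutoff, ~1.96
z1e6 = normal_critical_value(10**6)    # a million tests: cutoff grows slowly
bound = sqrt(2 * log(2 * 10**6 / 0.05))  # sqrt(2 log(2N/alpha)) upper bound
```

The slow sqrt(2 log N) growth is why level accuracy hinges on how log N compares with powers of n, rather than on N itself.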


Journal of the American Statistical Association | 2011

Large Volatility Matrix Inference via Combining Low-Frequency and High-Frequency Approaches

Minjing Tao; Yazhen Wang; Qiwei Yao; Jian Zou

It is increasingly important in financial economics to estimate volatilities of asset returns. However, most of the available methods are not directly applicable when the number of assets involved is large, due to the lack of accuracy in estimating high-dimensional matrices. Therefore it is pertinent to reduce the effective size of volatility matrices in order to produce adequate estimates and forecasts. Furthermore, since high-frequency financial data for different assets are typically not recorded at the same time points, conventional dimension-reduction techniques are not directly applicable. To overcome those difficulties we explore a novel approach that combines high-frequency volatility matrix estimation together with low-frequency dynamic models. The proposed methodology consists of three steps: (i) estimate daily realized covolatility matrices directly based on high-frequency data, (ii) fit a matrix factor model to the estimated daily covolatility matrices, and (iii) fit a vector autoregressive model to the estimated volatility factors. We establish the asymptotic theory for the proposed methodology in a framework that allows the sample size, the number of assets, and the number of days to go to infinity together. Our theory shows that the relevant eigenvalues and eigenvectors can be consistently estimated. We illustrate the methodology with the high-frequency price data on several hundred stocks traded on the Shenzhen and Shanghai Stock Exchanges over a period of 177 days in 2003. Our approach pools together the strengths of modeling and estimation at both intra-daily (high-frequency) and inter-daily (low-frequency) levels.
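The three-step pipeline can be sketched on synthetic data. This is a hedged simplification: a plain eigen-decomposition stands in for the paper's matrix factor model, the VAR(1) is fit by ordinary least squares, and all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
days, intraday, p, r = 60, 78, 10, 2   # 60 days, 78 intraday ticks, 10 assets

# (i) daily realized covolatility matrices from intraday returns
returns = rng.normal(scale=0.01, size=(days, intraday, p))
rcov = np.einsum('dti,dtj->dij', returns, returns)       # (days, p, p)

# (ii) factor step: eigenvectors of the average realized covariance
avg = rcov.mean(axis=0)
w, V = np.linalg.eigh(avg)
L = V[:, -r:]                                            # top-r loading directions
factors = np.einsum('ij,djk,kl->dil', L.T, rcov, L)      # (days, r, r) factor matrices
Z = factors.reshape(days, -1)                            # vectorized factor series

# (iii) VAR(1) on the factor series, fit by least squares: Z[t+1] = Z[t] A
X, Ynext = Z[:-1], Z[1:]
A, *_ = np.linalg.lstsq(X, Ynext, rcond=None)
forecast = Z[-1] @ A                                     # one-day-ahead factor forecast
```

The point of the factor step is that the VAR is fit in r^2 dimensions rather than p^2, which is what makes the low-frequency dynamics tractable when p is in the hundreds.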


Philosophical Transactions of the Royal Society A | 1994

On prediction and chaos in stochastic systems

Qiwei Yao; Howell Tong

We propose a new measure of sensitivity to initial conditions within a stochastic environment and explore its connection with nonlinear prediction and statistical estimation. We use modern statistical developments to construct and illustrate pointwise predictors and predictive intervals/distributions.


Journal of the American Statistical Association | 2013

Modeling and Forecasting Daily Electricity Load Curves: A Hybrid Approach

Haeran Cho; Yannig Goude; Xavier Brossat; Qiwei Yao

We propose a hybrid approach for the modeling and the short-term forecasting of electricity loads. Two building blocks of our approach are (1) modeling the overall trend and seasonality by fitting a generalized additive model to the weekly averages of the load and (2) modeling the dependence structure across consecutive daily loads via curve linear regression. For the latter, a new methodology is proposed for linear regression with both curve response and curve regressors. The key idea behind the proposed methodology is dimension reduction based on a singular value decomposition in a Hilbert space, which reduces the curve regression problem to several ordinary (i.e., scalar) linear regression problems. We illustrate the hybrid method using French electricity loads between 1996 and 2009, on which we also compare our method with other available models including the Électricité de France operational model. Supplementary materials for this article are available online.


Journal of the American Statistical Association | 2000

Conditional Minimum Volume Predictive Regions for Stochastic Processes

Wolfgang Polonik; Qiwei Yao

Motivated by interval/region prediction in nonlinear time series, we propose a minimum volume (MV) predictor for a strictly stationary process. The MV predictor varies with respect to the current position in the state space and has the minimum Lebesgue measure among all regions with the nominal coverage probability. We have established consistency, convergence rates, and asymptotic normality for both coverage probability and Lebesgue measure of the estimated MV predictor under the assumption that the observations are from a strong mixing process. Applications with both real and simulated datasets illustrate the proposed methods.
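A toy one-dimensional analogue of the minimum-volume idea (unconditional rather than conditional, with arbitrary sample, grid, and bandwidth): estimate a density on a grid, then take the smallest collection of grid cells whose probability reaches the nominal coverage, i.e. the highest-density region.

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(size=5000)                     # observations
grid = np.linspace(-4, 4, 401)
h = 0.25                                      # kernel bandwidth

# Kernel density values on the grid, normalized to discrete cell probabilities.
dens = np.exp(-0.5 * ((grid[:, None] - y[None, :]) / h) ** 2).sum(axis=1)
dens /= dens.sum()

order = np.argsort(dens)[::-1]                # highest-density cells first
cum = np.cumsum(dens[order])
k = int(np.searchsorted(cum, 0.9)) + 1        # smallest set reaching 90%
region = np.sort(grid[order[:k]])             # approx. 90% minimum-volume region
length = region[-1] - region[0]
```

For a standard normal the 90% highest-density region is roughly (-1.645, 1.645); the estimated region comes out slightly wider because of the kernel smoothing.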


Journal of Time Series Analysis | 2006

Gaussian Maximum Likelihood Estimation for ARMA Models I: Time Series

Qiwei Yao; Peter J. Brockwell

We provide a direct proof for consistency and asymptotic normality of Gaussian maximum likelihood estimators for causal and invertible autoregressive moving-average (ARMA) time series models, which were initially established by Hannan [Journal of Applied Probability (1973) vol. 10, pp. 130-145] via the asymptotic properties of a Whittle estimator. This also paves the way to establish similar results for spatial processes presented in the follow-up article by Yao and Brockwell [Bernoulli (2006) in press].
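For the simplest causal case, AR(1), the Gaussian likelihood estimator has an essentially closed form. A minimal sketch on simulated data, using the conditional least-squares estimator, which maximizes the Gaussian likelihood conditional on the first observation and is asymptotically equivalent to the full Gaussian MLE:

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi = 2000, 0.6
x = np.zeros(n)
for t in range(1, n):                 # simulate x[t] = phi * x[t-1] + e[t]
    x[t] = phi * x[t - 1] + rng.normal()

# Conditional Gaussian MLE of phi: regress x[t] on x[t-1].
phi_hat = float(x[1:] @ x[:-1] / (x[:-1] @ x[:-1]))
```

For general ARMA(p, q) the likelihood has no closed form and must be maximized numerically (e.g. via the innovations algorithm), which is the setting the paper's asymptotics cover.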

Collaboration

Top Co-Authors

Howell Tong, London School of Economics and Political Science
Jinyuan Chang, Southwestern University of Finance and Economics
Liang Peng, Georgia Institute of Technology
Peter Hall, University of Melbourne
Zudi Lu, University of Southampton
Jiazhu Pan, University of Strathclyde