Craig F. Ansley
University of Chicago
Publications
Featured research published by Craig F. Ansley.
Journal of the American Statistical Association | 1983
William E. Wecker; Craig F. Ansley
Abstract This article shows how to fit a smooth curve (polynomial spline) to pairs of data values (y_i, x_i). Prior specification of a parametric functional form for the curve is not required. The resulting curve can be used to describe the pattern of the data, and to predict unknown values of y given x. Both point and interval estimates are produced. The method is easy to use, and the computational requirements are modest, even for large sample sizes. Our method is based on maximum likelihood estimation of a signal-in-noise model of the data. We use the Kalman filter to evaluate the likelihood function and achieve significant computational advantages over previous approaches to this problem.
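The computational device behind this approach, the prediction-error decomposition, can be sketched for the simplest signal-in-noise case: a local-level model (random-walk signal observed in white noise). This is an illustrative reduction, not the authors' spline state space; the function name and the diffuse-like initialization `p0=1e7` are assumptions.

```python
import numpy as np

def local_level_loglik(y, s_eps, s_eta, a0=0.0, p0=1e7):
    """Gaussian log-likelihood of a local-level (signal-in-noise) model
    y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t,
    evaluated with the Kalman filter via the prediction-error decomposition."""
    a, p = a0, p0                # predicted state mean and variance
    ll = 0.0
    for yt in y:
        f = p + s_eps            # one-step prediction variance of y_t
        v = yt - a               # prediction error (innovation)
        ll -= 0.5 * (np.log(2.0 * np.pi * f) + v * v / f)
        k = p / f                # Kalman gain
        a = a + k * v            # update step
        p = p * (1.0 - k) + s_eta  # update, then random-walk prediction
    return ll
```

With `p0 = 0` and `s_eta = 0` the model collapses to i.i.d. Gaussian observations, which gives a quick correctness check on the recursion.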
Journal of the American Statistical Association | 1986
Robert Kohn; Craig F. Ansley
Abstract We show how to define and then compute efficiently the marginal likelihood of an ARIMA model with missing observations. The computation is carried out by using the univariate version of the modified Kalman filter introduced by Ansley and Kohn (1985a), which allows a partially diffuse initial state vector. We also show how to predict and interpolate missing observations and obtain the mean squared error of the estimate.
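How a Kalman filter accommodates missing observations can be sketched in the same local-level setting: at a missing time point the update step is skipped and only the prediction step runs, so the likelihood sums over the observed points. This is a univariate illustration of the idea only, not the paper's partially diffuse modified filter for ARIMA marginal likelihood; the function name and initialization are assumptions.

```python
import numpy as np

def filter_with_missing(y, s_eps, s_eta, a0=0.0, p0=1e7):
    """Kalman filter for a local-level model that skips the measurement
    update at missing observations (NaN).  Returns the filtered state
    means and the log-likelihood summed over observed points only."""
    a, p = a0, p0
    ll = 0.0
    filtered = []
    for yt in y:
        if not np.isnan(yt):
            f = p + s_eps
            v = yt - a
            ll -= 0.5 * (np.log(2.0 * np.pi * f) + v * v / f)
            k = p / f
            a += k * v
            p *= (1.0 - k)
        # at a NaN there is no update: the prediction carries forward
        filtered.append(a)
        p += s_eta               # random-walk prediction step
    return np.array(filtered), ll
```

Because the state is a random walk, the filtered estimate at a missing point simply carries the last estimate forward; a smoothing pass (as in the paper) would interpolate using both past and future observations.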
Journal of Econometrics | 1980
Craig F. Ansley; Paul Newbold
Abstract We analyze by simulation the properties of three estimators frequently used in the analysis of autoregressive moving average time series models for both nonseasonal and seasonal data. The estimators considered are exact maximum likelihood, exact least squares and conditional least squares. For samples of the size commonly found in economic applications, the estimators are compared in terms of bias, mean squared error, and predictive ability. The reliability of the usually calculated confidence intervals is assessed for the maximum likelihood estimator.
SIAM Journal on Scientific and Statistical Computing | 1987
Robert Kohn; Craig F. Ansley
We derive a new efficient algorithm for optimal spline smoothing as the conditional expectation of a stochastic process observed with noise, using the stochastic model of Wahba (J. Royal Statist. Soc. Ser. B, 40 (1978), pp. 364–372). The conditional expectation is computed by expressing the process in state space form and using the filtering and smoothing results in Ansley and Kohn (Annals Statist., 11 (1985), pp. 1286–1316). We show how to use our algorithms to estimate the smoothness parameter and how to obtain Bayesian confidence intervals for the unknown function and its derivatives. Algorithms based on other stochastic models are compared to ours, and a stochastic derivation is given for Reinsch’s (Numer. Math. 10 (1967), pp. 177–183) algorithm for polynomial splines.
Journal of the American Statistical Association | 1991
Robert Kohn; Craig F. Ansley; David Tharm
Abstract An important aspect of nonparametric regression by spline smoothing is the estimation of the smoothing parameter. In this article we report on an extensive simulation study that investigates the finite-sample performance of generalized cross-validation, cross-validation, and marginal likelihood estimators of the smoothing parameter in splines of orders 2 and 3. The performance criterion for both the estimate of the function and its first derivative is measured by the square root of integrated squared error. Marginal likelihood using splines of degree 5 emerges as an attractive alternative to the other estimators in that it usually outperforms them and is also faster to compute.
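The generalized cross-validation criterion in this comparison can be illustrated with a discrete penalized-least-squares smoother (a Whittaker smoother with a second-difference penalty). This is a crude dense-matrix stand-in for spline smoothing, not the authors' algorithm, and the function names are assumptions; the GCV form shown is the standard n*RSS/(n - tr H)^2 criterion.

```python
import numpy as np

def whittaker_smooth(y, lam):
    """Discrete smoothing-spline analogue: minimize
    ||y - mu||^2 + lam * ||D2 mu||^2, where D2 takes second differences.
    Solution: mu = (I + lam * D2'D2)^{-1} y."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)    # (n-2) x n second-difference matrix
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)

def gcv_score(y, lam):
    """Generalized cross-validation score for the smoother above."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)
    H = np.linalg.inv(np.eye(n) + lam * D.T @ D)   # hat (influence) matrix
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2
```

Minimizing `gcv_score` over a grid of `lam` values mimics the GCV estimator studied in the article; at `lam = 0` the smoother reproduces the data exactly, and as `lam` grows the fit approaches a straight line (the null space of the second-difference penalty).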
Journal of the American Statistical Association | 1990
Thomas S. Shively; Craig F. Ansley; Robert Kohn
Abstract A method is given for evaluating p values in O(n) operations for a general class of invariant test statistics that can be expressed as the ratio of quadratic forms in time series regression residuals. The best known of these is the Durbin-Watson statistic, although several others have been discussed in the literature. The method is numerically exact in the sense that the user specifies the error tolerance at the outset. As with existing exact methods, the problem is reexpressed in terms of the distribution function of a single quadratic form in independent normals, which is evaluated by numerically inverting its characteristic function. Existing methods, however, calculate the characteristic function by reducing the matrix defining the quadratic form to either eigenvalue or tridiagonal form, each of which requires O(n^3) operations for sample size n, whereas the new method uses a modification of the Kalman filter to do it in O(n) operations. Moreover, the new method has minimal storage requirements.
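The numerical-inversion step can be sketched directly when the eigenvalues lambda_i of the quadratic form are already known, i.e. after the O(n^3) preprocessing the paper avoids. Imhof's formula gives P(Q >= x) = 1/2 + (1/pi) * integral_0^inf sin(theta(u)) / (u * rho(u)) du, with theta(u) = (1/2) * sum arctan(lambda_i u) - (1/2) x u and rho(u) = prod (1 + lambda_i^2 u^2)^{1/4}. The grid limits below are illustrative assumptions.

```python
import numpy as np

def quadform_sf(x, lams, upper=500.0, npts=500_000):
    """P(Q >= x) for Q = sum_i lams[i] * Z_i^2 with Z_i iid N(0,1),
    by numerically inverting the characteristic function (Imhof's formula).
    Brute-force evaluation from known eigenvalues, not the paper's
    O(n) Kalman-filter computation of the characteristic function."""
    lams = np.asarray(lams, dtype=float)
    u = np.linspace(1e-8, upper, npts)
    lu = np.multiply.outer(u, lams)              # npts x n grid of lambda_i * u
    theta = 0.5 * np.arctan(lu).sum(axis=1) - 0.5 * x * u
    rho = np.prod((1.0 + lu ** 2) ** 0.25, axis=1)
    g = np.sin(theta) / (u * rho)
    du = u[1] - u[0]
    integral = (g.sum() - 0.5 * (g[0] + g[-1])) * du   # trapezoid rule
    return 0.5 + integral / np.pi
```

A sanity check: with two unit eigenvalues, Q is chi-squared with 2 degrees of freedom, so P(Q >= x) should equal exp(-x/2).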
Journal of Statistical Computation and Simulation | 1982
Robert Kohn; Craig F. Ansley
We point out that to obtain the theoretical autocovariances of any vector ARMA process we need to solve a smaller system of equations than in Ansley (1980). This implies a substantial reduction in computation time. As an application, we consider obtaining the likelihood of a stationary ARMA process. To speed up the computation, we suggest a stable fixed point iteration.
Journal of Statistical Computation and Simulation | 1980
Craig F. Ansley
The theoretical autocovariance function of a vector ARMA process arises in maximum likelihood estimation and forecasting of vector processes, and in the determination of the distributions of parameter estimators and residual autocorrelations in both vector and scalar processes. An algorithm for computing the theoretical autocovariances is presented, together with suggestions for its efficient implementation.
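For the scalar case the structure of such an algorithm can be sketched: compute the psi-weights, set up the linear equations linking the low-order autocovariances to them, solve, then extend with the Yule-Walker-type recursion. This scalar sketch (function name and interface assumed) only illustrates the idea; the paper treats the vector case with its own efficiency considerations.

```python
import numpy as np

def arma_acovf(phi, theta, sigma2=1.0, nlags=10):
    """Theoretical autocovariances gamma_0..gamma_nlags of a scalar
    ARMA(p, q) model  x_t = sum_i phi_i x_{t-i} + e_t + sum_j theta_j e_{t-j}."""
    p, q = len(phi), len(theta)
    th = np.r_[1.0, theta]                        # theta_0 = 1
    # psi-weights of the MA(inf) representation:
    # psi_j = theta_j + sum_{i=1}^{min(j,p)} phi_i * psi_{j-i}
    psi = np.zeros(q + 1)
    for j in range(q + 1):
        psi[j] = th[j] + sum(phi[i] * psi[j - 1 - i]
                             for i in range(min(j, p)))
    # linear system in gamma_0..gamma_m, m = max(p, q):
    # gamma_k - sum_i phi_i gamma_|k-i| = sigma2 * sum_{j>=k} theta_j psi_{j-k}
    m = max(p, q)
    A = np.eye(m + 1)
    b = np.zeros(m + 1)
    for k in range(m + 1):
        for i in range(p):
            A[k, abs(k - 1 - i)] -= phi[i]
        b[k] = sigma2 * sum(th[j] * psi[j - k] for j in range(k, q + 1))
    gam = list(np.linalg.solve(A, b))
    while len(gam) <= nlags:                      # AR recursion for lags > max(p, q)
        k = len(gam)
        gam.append(sum(phi[i] * gam[k - 1 - i] for i in range(p)))
    return np.array(gam[: nlags + 1])
```

For ARMA(1,1) with phi = 0.5, theta = 0.3 and unit innovation variance, the closed forms gamma_0 = (1 + 2*phi*theta + theta^2)/(1 - phi^2) and gamma_1 = phi*gamma_0 + theta provide a direct check.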
Journal of Econometrics | 1992
Craig F. Ansley; Robert Kohn; Thomas S. Shively
Abstract Shively, Ansley, and Kohn (1990) give an O(n) algorithm for computing the p-values of the Durbin-Watson and other invariant test statistics in time series regression. They do so by evaluating the characteristic function of a quadratic form in standard normal random variables and then numerically inverting it. In this paper we obtain a new expression for the characteristic function which simplifies the handling of the independent regressors and so is easier to evaluate. We also obtain general, easily computable bounds on the integration and truncation errors which arise in the numerical inversion of the characteristic function. Empirical results are presented on the speed and accuracy of our algorithm.
Journal of Statistical Computation and Simulation | 1986
Craig F. Ansley; Robert Kohn
We use the correspondence between the partial autocorrelation matrices and the parameter matrices of a vector autoregression to obtain a new parameterization of a vector ARMA model that enforces the stationarity condition. We show how to go efficiently from the new parameterization to the usual one. Thus the likelihood of observations from an ARMA model can easily be obtained using the new parameterization. In addition, for vector autoregressive models and scalar ARMA models, the new parameterization permits fast computation of the autocovariances of the model.
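The scalar AR version of such a stationarity-enforcing parameterization can be sketched with the classical partial-autocorrelation recursion (Barndorff-Nielsen and Schou; Monahan): squash each unconstrained parameter into (-1, 1) to get a partial autocorrelation, then run the Durbin-Levinson-type recursion back to AR coefficients, which are stationary by construction. The function name and the tanh squashing are assumptions, not the paper's exact (matrix) construction.

```python
import numpy as np

def unconstrained_to_ar(params):
    """Map unconstrained reals to the coefficients of a stationary AR(p):
    tanh squashes each parameter into (-1, 1), interpreted as a partial
    autocorrelation, then the Durbin-Levinson-type recursion
    phi_{k,j} = phi_{k-1,j} - r_k * phi_{k-1,k-j},  phi_{k,k} = r_k
    converts the sequence to AR coefficients."""
    pacf = np.tanh(np.asarray(params, dtype=float))
    phi = np.array([])
    for r in pacf:
        phi = np.r_[phi - r * phi[::-1], r]
    return phi
```

Any real parameter vector maps to a stationary model, so an optimizer can search freely over the unconstrained space when maximizing the likelihood.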