Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Paul Kabaila is active.

Publication


Featured research published by Paul Kabaila.


Econometric Theory | 1995

The Effect of Model Selection on Confidence Regions and Prediction Regions

Paul Kabaila

Pötscher (1991, Econometric Theory 7, 163–181) has recently considered the question of how the use of a model selection procedure affects the asymptotic distribution of parameter estimators and related statistics. An important potential application of such results is to the generation of confidence regions for the parameters of interest. It is demonstrated that a great deal of care must be exercised in any attempt at such an application. We also consider the effect of model selection on prediction regions. It is demonstrated that the use of asymptotic results for the construction of prediction regions requires the same sort of care as the use of such results for the construction of confidence regions.


Journal of the American Statistical Association | 2006

On the Large-Sample Minimal Coverage Probability of Confidence Intervals After Model Selection

Paul Kabaila; Hannes Leeb

We give a large-sample analysis of the minimal coverage probability of the usual confidence intervals for regression parameters when the underlying model is chosen by a “conservative” (or “overconsistent”) model selection procedure. We derive an upper bound for the large-sample limit minimal coverage probability of such intervals that applies to a large class of model selection procedures including the Akaike information criterion as well as various pretesting procedures. This upper bound can be used as a safeguard to identify situations where the actual coverage probability can be far below the nominal level. We illustrate that the (asymptotic) upper bound can be statistically meaningful even in rather small samples.
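
A minimal Monte Carlo sketch of the phenomenon this bound warns about (this is not code from the paper: the two-regressor design without intercept, the sample size, the regressor correlation, the local value of the second coefficient, and the helper ols are all illustrative choices): a 5%-level pretest of "β2 = 0", followed by the usual 95% interval for β1 computed from whichever model the pretest selects, can have coverage far below the nominal level.

```python
# Monte Carlo sketch: coverage of the naive 95% interval for beta1 reported
# after a 5%-level pretest of "beta2 = 0".  All design choices are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_rep, alpha = 50, 10_000, 0.05
rho = 0.9                                   # correlation between the two regressors
beta1 = 1.0
beta2 = 1.5 / np.sqrt(n * (1 - rho**2))     # roughly 1.5 standard errors of its estimator

def ols(X, y):
    """OLS estimates, standard errors and residual degrees of freedom."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    df = len(y) - X.shape[1]
    se = np.sqrt(resid @ resid / df * np.diag(XtX_inv))
    return b, se, df

cover = 0
for _ in range(n_rep):
    x1 = rng.standard_normal(n)
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    y = beta1 * x1 + beta2 * x2 + rng.standard_normal(n)

    # Pretest "beta2 = 0" in the full model (a conservative selection procedure).
    b_f, se_f, df_f = ols(np.column_stack([x1, x2]), y)
    if abs(b_f[1] / se_f[1]) > stats.t.ppf(1 - alpha / 2, df_f):
        b, se, df = b_f[0], se_f[0], df_f            # keep the full model
    else:
        b_r, se_r, df_r = ols(x1[:, None], y)        # drop x2 and refit
        b, se, df = b_r[0], se_r[0], df_r

    # Naive 95% interval for beta1 from the selected model.
    half = stats.t.ppf(1 - alpha / 2, df) * se
    cover += (b - half <= beta1 <= b + half)

print("empirical coverage of the naive post-selection interval:", cover / n_rep)
```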


IEEE Transactions on Automatic Control | 1983

On output-error methods for system identification

Paul Kabaila

In this paper, consistency and asymptotic normality results are derived for the output-error method of system identification. The output-error estimator has the advantage over the prediction-error estimator of being more easily computable. However, it is shown that the output-error estimator can never be more efficient than the prediction-error estimator. The main result of the paper provides necessary and sufficient conditions for the output-error estimator and the prediction-error estimator to have the same efficiency, irrespective of the spectral density of the noise process.
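
As a rough illustration of why the output-error criterion is easy to compute (this is not the system or estimator analysed in the paper: the first-order model, the AR(1) measurement noise and the helper output_error_sse are illustrative choices), the sketch below simulates the model output from the input alone and minimises the resulting sum of squared output errors; a prediction-error fit would, in addition, require a model for the noise and the corresponding one-step predictor.

```python
# Output-error fit of a hypothetical first-order model x_t = a x_{t-1} + b u_{t-1},
# observed as y_t = x_t + coloured noise.  Illustrative sketch only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 1000
a_true, b_true = 0.7, 1.0                  # illustrative system parameters

# Input, noise-free system output, and AR(1) measurement noise.
u = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true * x[t - 1] + b_true * u[t - 1]
v = np.zeros(n)
for t in range(1, n):
    v[t] = 0.5 * v[t - 1] + 0.3 * rng.standard_normal()
y = x + v

def output_error_sse(theta):
    """Sum of squared output errors for candidate parameters (a, b)."""
    a, b = theta
    a = np.clip(a, -0.99, 0.99)            # keep the simulated model stable during the search
    x_sim = np.zeros(n)
    for t in range(1, n):
        x_sim[t] = a * x_sim[t - 1] + b * u[t - 1]
    return np.sum((y - x_sim) ** 2)

fit = minimize(output_error_sse, x0=np.array([0.1, 0.1]), method="Nelder-Mead")
print("output-error estimate of (a, b):", fit.x)
```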


Econometric Theory | 1998

Valid Confidence Intervals in Regression After Variable Selection

Paul Kabaila

We consider a linear regression model with regression parameters (θ1,...,θp) and error variance parameter σ². Our aim is to find a confidence interval with minimum coverage probability 1 − α for a parameter of interest θ1 in the presence of nuisance parameters (θ2,...,θp,σ²). We consider two confidence intervals, the first of which is the standard confidence interval for θ1 with coverage probability 1 − α. The second confidence interval for θ1 is obtained after a variable selection procedure has been applied to θp. This interval is chosen to be as short as possible subject to the constraint that it has minimum coverage probability 1 − α. The confidence intervals are compared using a risk function that is defined as a scaled version of the expected length of the confidence interval. We show that, subject to certain conditions including that [(dimension of response vector) − p] is small, the second confidence interval is preferable to the first when we anticipate (without being certain) that |θp|/σ is small. This comparison of confidence intervals is shown to be mathematically equivalent to a corresponding comparison of prediction intervals.


Technometrics | 2002

The Importance of the Designated Statistic on Buehler Upper Limits on a System Failure Probability

Paul Kabaila; Chris Lloyd

The upper limits introduced by Buehler are constructed to be honest confidence limits that are as small as possible subject to the constraint that they are nondecreasing functions of a designated statistic S. This methodology has largely been developed for applications in reliability that involve finding an upper limit on the probability of failure θ(p1,…,pk) of a system of k components with probabilities of failure pi. For mainly computational reasons, it is common to choose the designated statistic S = θ(p̂1,…,p̂k), where p̂i is an estimator of pi. Buehler, on the other hand, suggested the choice S = θ(pu1,…,puk), where pui is a so-called exact (1 − α)^(1/k) upper limit for pi. In this article we draw attention to existing general theory suggesting that S should be chosen to be an approximate upper limit for the interest parameter θ, not an estimator. We also give numerical results suggesting that extreme loss in efficiency typically follows if S is chosen poorly. Our current recommendation is that S should be an approximate upper limit based on the likelihood ratio statistic.
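
A minimal sketch of the construction discussed above, for a hypothetical two-component series system with θ(p1, p2) = 1 − (1 − p1)(1 − p2) and the plug-in estimate θ(p̂1, p̂2) as the designated statistic; the sample sizes, observed failure counts and grid resolution are illustrative choices, and the grid search only approximates the supremum that defines the Buehler limit.

```python
# Grid-approximate Buehler 1 - alpha upper confidence limit for the failure
# probability of a two-component series system from binomial component data.
import numpy as np
from scipy.stats import binom

alpha = 0.05
n1, n2 = 20, 20          # component sample sizes (hypothetical)
x1_obs, x2_obs = 1, 0    # observed numbers of component failures (hypothetical)

def theta(p1, p2):
    """Series-system failure probability."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

# Designated statistic S (plug-in estimate) at every possible outcome (x1, x2).
x1_all = np.arange(n1 + 1)[:, None]
x2_all = np.arange(n2 + 1)[None, :]
S = theta(x1_all / n1, x2_all / n2)            # shape (n1 + 1, n2 + 1)
s_obs = theta(x1_obs / n1, x2_obs / n2)

# Outcomes ranked no higher than the observed one (ties included).
A = (S <= s_obs).astype(float)

# Buehler upper limit: sup of theta(p1, p2) over all (p1, p2) with
# P_{p1, p2}(S <= s_obs) > alpha, approximated on a grid.
grid = np.linspace(0.0, 1.0, 401)
pmf1 = binom.pmf(np.arange(n1 + 1)[None, :], n1, grid[:, None])  # (G, n1 + 1)
pmf2 = binom.pmf(np.arange(n2 + 1)[None, :], n2, grid[:, None])  # (G, n2 + 1)
prob_A = pmf1 @ A @ pmf2.T                                       # (G, G)
theta_grid = theta(grid[:, None], grid[None, :])                 # (G, G)

upper_limit = theta_grid[prob_A > alpha].max()
print("Buehler 95% upper limit for theta:", round(upper_limit, 4))
```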


Communications in Statistics-theory and Methods | 2005

Comparison of Poisson Confidence Intervals

John Byrne; Paul Kabaila

The standard method of obtaining a two-sided confidence interval for the Poisson mean produces an interval which is exact but can be shortened without violating the minimum coverage requirement. We classify the intervals proposed as alternatives to the standard method interval. We carry out the classification using two desirable properties of two-sided confidence intervals.
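
For concreteness, the sketch below computes the standard exact two-sided interval for a Poisson mean (often attributed to Garwood) from chi-squared quantiles; the observed count and confidence level in the example call are illustrative, and the helper name is ours.

```python
# Standard exact two-sided confidence interval for a Poisson mean.
from scipy.stats import chi2

def poisson_exact_ci(x, conf=0.95):
    """Exact equal-tailed CI for the Poisson mean given an observed count x."""
    alpha = 1.0 - conf
    lower = 0.0 if x == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * x)
    upper = 0.5 * chi2.ppf(1.0 - alpha / 2, 2 * (x + 1))
    return lower, upper

print(poisson_exact_ci(7))   # e.g. an observed count of 7
```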


Journal of Applied Probability | 1983

Parameter Values of ARMA Models Minimising the One-Step-Ahead Prediction Error When the True System Is Not in the Model Set

Paul Kabaila

In this paper we answer the following question. Is there any a priori reason for supposing that there is no more than one set of ARMA model parameters minimising the one-step-ahead prediction error when the true system is not in the model set?


Stochastics An International Journal of Probability and Stochastic Processes | 1981

Estimation based on one step ahead prediction versus estimation based on multi-step ahead prediction

Paul Kabaila

In this paper we consider a strictly stationary time series generated by a nonlinear autoregression. We are concerned with the estimation of the parameter θ0 which specifies the autoregression. Two estimators are considered: one obtained by minimising the sum of squares of the sample prediction errors of a one-step-ahead predictor, and the other obtained by minimising the sum of squares of the sample prediction errors of a multi-step-ahead predictor. It is shown that the estimator based on one-step-ahead prediction is the better estimator of θ0.


Statistics & Probability Letters | 2001

Better Buehler confidence limits

Paul Kabaila

Consider the reliability problem of finding a 1 − α upper (lower) confidence limit for θ, the probability of system failure (non-failure), based on binomial data on the probability of failure of each component of the system. The Buehler 1 − α confidence limit is usually based on an estimator of θ. This confidence limit has the desired coverage properties. We prove that in large samples the Buehler 1 − α upper confidence limit based on an approximate 1 − α upper limit for θ is less conservative, whilst also possessing the desired coverage properties.


Journal of Time Series Analysis | 1999

The Relevance Property For Prediction Intervals

Paul Kabaila

Suppose that we have time series data which we want to use to find a prediction interval for some future value of the series. It is widely recognized by time series practitioners that, to be practically useful, a prediction interval should possess the property that it relates to what actually happened during the period that the data were collected as opposed to what might have happened during that period but did not actually happen. We call this the ‘relevance property’. Despite its obvious importance, this property has hitherto not been formulated in a mathematically rigorous way. We provide a mathematically rigorous formulation of this property for a broad class of conditionally heteroscedastic processes in the practical context that the parameters of the time series model must be estimated from the data. The importance in applications of this formulation is that it provides us with the most appropriate way of measuring the finite-sample coverage performance of a time series prediction interval.

Collaboration


Dive into Paul Kabaila's collaborations.

Top Co-Authors


Chris Lloyd

Melbourne Business School


Alan Welsh

Australian National University


Khreshna Syuhada

Bandung Institute of Technology
