David A. Belsley
Boston College
Publications
Featured research published by David A. Belsley.
Computational Economics | 1991
David A. Belsley
The description of the collinearity diagnostics as presented in Belsley, Kuh, and Welsch's Regression Diagnostics: Identifying Influential Data and Sources of Collinearity is principally formal, leaving it to the user to implement the diagnostics and learn to digest and interpret the diagnostic results. This paper is designed to overcome this shortcoming by describing the different graphical displays that can be used to present the diagnostic information and, more importantly, by providing the detailed guidance needed to turn the beginning user into an experienced diagnostician and to aid those who wish to incorporate or automate the collinearity diagnostics in a guided-computer environment.
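For readers who want a concrete starting point, here is a minimal sketch of the Belsley-Kuh-Welsch diagnostics in Python (names are illustrative, not taken from the paper): column-scale the data matrix, take its singular value decomposition, and report condition indexes together with variance-decomposition proportions.

```python
import numpy as np

def collinearity_diagnostics(X):
    """Condition indexes and variance-decomposition proportions of X."""
    # Scale each column to unit length; BKW warn against mean-centering first.
    Xs = X / np.linalg.norm(X, axis=0)
    _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    cond_idx = s.max() / s                        # one condition index per singular value
    phi = (Vt.T ** 2) / s ** 2                    # phi[k, j] = v_kj^2 / mu_j^2
    props = phi / phi.sum(axis=1, keepdims=True)  # rows: coefficients, cols: indexes
    return cond_idx, props
```

A common rule of thumb from the book: condition indexes above roughly 30, combined with two or more coefficients drawing over half their variance from the same index, signal a degrading near-dependency.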
The American Statistician | 1984
David A. Belsley
Abstract It is often thought that regression data should be mean-centered before being diagnosed for collinearity (ill conditioning). This view is shown not generally to be correct. Such centering can mask elements of ill conditioning and produce meaningless and misleading collinearity diagnostics. In order to assess conditioning meaningfully, the data must be in a form that possesses structural interpretability.
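A small numerical illustration (with hypothetical data) of the masking effect described above: a regressor that varies little is nearly collinear with the intercept column, yet centering removes the very column involved in that near-dependency and makes the data appear well conditioned.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = 1000.0 + 0.1 * rng.standard_normal(50)  # varies little: nearly the intercept
x2 = rng.standard_normal(50)

def scaled_cond(A):
    # Condition number after scaling columns to unit length.
    return np.linalg.cond(A / np.linalg.norm(A, axis=0))

X = np.column_stack([np.ones(50), x1, x2])
Xc = np.column_stack([x1 - x1.mean(), x2 - x2.mean()])

print(scaled_cond(X))   # enormous: x1 is nearly collinear with the constant
print(scaled_cond(Xc))  # near 1: centering has masked the ill conditioning
```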
Journal of Econometrics | 1982
David A. Belsley
Abstract Data weaknesses (such as collinearity) reduce the quality of least-squares estimates by inflating parameter variances. Standard regression diagnostics and statistical tests of hypotheses are unable to indicate such variance inflation and hence cannot detect data weaknesses. This paper therefore considers a different means for determining the presence of weak data, based on a signal-to-noise test in which the size of the parameter variance (noise) is assessed relative to the magnitude of the parameter (signal). This test is combined with other collinearity diagnostics to provide a test for the presence of harmful collinearity and/or short data. The entire procedure is illustrated with an equation from the Michigan Quarterly Econometric Model. Tables of critical values for the test are provided in an appendix.
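As a rough sketch of the underlying idea (not the paper's formal test, whose critical values come from its appendix), one can compute the observed signal-to-noise ratio for each OLS coefficient, which is simply the familiar t statistic:

```python
import numpy as np

def signal_to_noise(X, y):
    """Estimated signal-to-noise ratio for each OLS coefficient."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - k)                     # error-variance estimate
    var_beta = s2 * np.diag(np.linalg.inv(X.T @ X))  # parameter variances (noise)
    return beta / np.sqrt(var_beta)                  # signal relative to noise
```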
Journal of Econometrics | 1980
David A. Belsley
Abstract This paper examines important elements in calculating the nonlinear full-information maximum-likelihood (NLFIML) estimator that produce substantial reductions (80 percent or more) in computational cost. It examines (i) the choice of optimization algorithm, (ii) the method of Hessian approximation, (iii) the choice of stopping criterion, and (iv) the exploitation of sparsity. We find that the Newton-Raphson algorithm employing an analytically computed Hessian is computationally much more efficient (up to 75 percent) in this context than its oft-employed competitors, such as DFP. Additional gains (up to 30 percent) result from using a weighted-gradient stopping criterion. Exploitation of matrix sparsity adds further gains.
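A generic sketch of the recommended setup, assuming user-supplied analytic gradient and Hessian routines and written for minimizing the negative log-likelihood: Newton-Raphson steps with a weighted-gradient stopping rule of the kind the paper advocates, halting when g'H⁻¹g falls below a tolerance rather than when the gradient or step alone is small.

```python
import numpy as np

def newton_raphson(theta, grad, hess, tol=1e-8, max_iter=100):
    """Minimize -log L with analytic grad/hess; hess assumed positive definite."""
    for _ in range(max_iter):
        g, H = grad(theta), hess(theta)
        step = np.linalg.solve(H, g)   # Newton direction H^{-1} g
        if g @ step < tol:             # weighted-gradient criterion g' H^{-1} g
            break
        theta = theta - step
    return theta
```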
Computational Economics | 1988
David A. Belsley
Two elements enter the choice between 2SLS and 3SLS for full-system estimation: statistical efficiency and computational cost. 2SLS always has the computational edge, but 3SLS can be more efficient, a relative advantage that increases with the strength of the interrelations among the error terms. A measure of these interrelations is thus helpful in making the choice, and, when there are only two equations, this has suggested using a high pairwise error correlation as an indicator of when to use 3SLS. In larger systems of equations, however, these pairwise correlations can remain small even though more general interrelations give 3SLS the relative advantage. More general indicators are therefore needed; this paper suggests three such indicators and demonstrates their efficacy.
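One simple way to operationalize this (illustrative only; the paper develops its own three indicators) is to estimate the system by 2SLS first and then summarize the interrelations in the residual correlation matrix, for instance through its determinant or eigenvalue spread:

```python
import numpy as np

def error_interrelation(resids):
    """resids: (T, G) array of first-round 2SLS residuals, one column per equation."""
    R = np.corrcoef(resids, rowvar=False)   # G x G error-correlation matrix
    eig = np.linalg.eigvalsh(R)
    return {
        "det(R)": np.linalg.det(R),         # 1 if uncorrelated, near 0 if strongly interrelated
        "eig_spread": eig.max() / eig.min()
    }
```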
Computational Statistics & Data Analysis | 2003
Paolo Foschi; David A. Belsley; Erricos John Kontoghiorghes
The computational efficiency of various algorithms for solving seemingly unrelated regressions (SUR) models is investigated. Some of the algorithms adapt known methods; others are new. The first transforms the SUR model to an ordinary linear model and uses the QR decomposition to solve it. Three others employ the generalized QR decomposition to solve the SUR model formulated as a generalized linear least-squares problem. Strategies to exploit the structure of the matrices involved are developed. The algorithms are reconsidered for solving the SUR model after it has been transformed to one of smaller dimensions.
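A minimal sketch of the first approach, assuming the error covariance Sigma is known: stack the SUR equations into one block-diagonal linear model, whiten with a Cholesky factor of Sigma, and solve by QR. The structure-exploiting algorithms in the paper avoid explicitly forming these Kronecker products.

```python
import numpy as np
from scipy.linalg import block_diag, cholesky, solve_triangular

def sur_gls_qr(Xs, ys, Sigma):
    """Xs: list of (T, k_i) designs; ys: list of (T,) responses; Sigma: (G, G)."""
    T = ys[0].shape[0]
    X = block_diag(*Xs)                       # stacked block-diagonal design
    y = np.concatenate(ys)
    L = cholesky(Sigma, lower=True)           # Sigma = L L'
    W = np.kron(np.linalg.inv(L), np.eye(T))  # whitening transform (L^{-1} kron I)
    Q, R = np.linalg.qr(W @ X)                # thin QR of the whitened design
    return solve_triangular(R, Q.T @ (W @ y)) # GLS coefficients, equation by equation
```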
American Journal of Agricultural Economics | 1992
Charles W. Bausell; Scott L. Smith; David A. Belsley
Excessive government purchases of dairy surpluses during the early 1980s prompted three corrective measures: the Milk Diversion Program, Dairy Termination Program, and lower support prices. An eight-equation econometric model of the dairy sector simulates these programs and shows them to be incomplete or impermanent in their effects. However, further analysis shows that a more aggressive policy of lower support prices would be effective in reducing costs to government and consumers, and lowering transfers to producers.
International Journal of Forecasting | 1988
David A. Belsley
Abstract Four (counter)examples are used to establish the proposition that good forecasting requires a meaningful and proper model, particularly when forecasting into situations that differ greatly from those that characterize the data upon which the model estimates are based. It is also argued that, contrary to much current opinion, it is this latter activity that is the real art of forecasting. The central notion of a ‘meaningful and proper’ model is defined, and the process leading to its construction is examined.
Computational Economics | 1992
David A. Belsley
The standard computational formula for the three-stage least-squares estimator is a daunting affair even for modest-sized systems of equations. Through the use of the QR decomposition, however, these computations can be substantially reduced in size, removing the order of T (the number of observations) from the relevant dimensions. This produces a set of calculations and memory requirements far more accommodating to all users of 3SLS, but particularly to those who may wish to include this estimator in their home-made arsenal without having to engage in special programming techniques.
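A hedged sketch of the order-T reduction in Python (variable names are illustrative): one thin QR decomposition of the instrument matrix Z replaces every T-dimensional product in the 3SLS formula with an L-dimensional one, where L is the number of instruments.

```python
import numpy as np

def three_sls(Xs, ys, Z, Sigma_inv):
    """Xs: per-equation (T, k_i) regressors; Z: (T, L) instruments;
    Sigma_inv: inverse of the error covariance from a first-round 2SLS."""
    Q, _ = np.linalg.qr(Z)                   # thin QR: Q is T x L, computed once
    Gs = [Q.T @ X for X in Xs]               # reduced regressors, each L x k_i
    hs = [Q.T @ y for y in ys]               # reduced responses, each length L
    G = len(ys)
    A = np.block([[Sigma_inv[i, j] * Gs[i].T @ Gs[j] for j in range(G)]
                  for i in range(G)])
    b = np.concatenate([sum(Sigma_inv[i, j] * Gs[i].T @ hs[j] for j in range(G))
                        for i in range(G)])
    return np.linalg.solve(A, b)             # stacked 3SLS coefficient vector
```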
Computational Statistics & Data Analysis | 1986
David A. Belsley; R.W. Oldford
Abstract The notion of a conditioning analysis of a general, nonlinear set of relations is defined along with an associated definition of ill conditioning. From these, one may identify at least three different kinds of conditioning analyses of interest in statistics and econometrics: data, estimator, and criterion conditioning. While these three coincide in the OLS/linear case, they can and do diverge otherwise. The absence of a general mathematical solution for a conditioning analysis points to computer-intensive alternatives, one of which is suggested and illustrated.
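One computer-intensive alternative of the general kind suggested (the perturbation scheme here is purely illustrative) is to perturb the inputs slightly, recompute the output of interest, and compare relative output change with relative input change; a large ratio flags ill conditioning:

```python
import numpy as np

def empirical_condition(f, X, n_trials=200, rel=1e-4, seed=0):
    """Worst observed ratio of relative output change to relative input change."""
    rng = np.random.default_rng(seed)
    base = f(X)
    worst = 0.0
    for _ in range(n_trials):
        dX = rel * rng.standard_normal(X.shape) * np.abs(X)  # small relative perturbation
        out = f(X + dX)
        ratio = (np.linalg.norm(out - base) / np.linalg.norm(base)) / \
                (np.linalg.norm(dX) / np.linalg.norm(X))
        worst = max(worst, ratio)
    return worst
```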