
Publication


Featured research published by B. D. McCullough.


Statistical Modelling | 2003

Regression analysis of variates observed on (0, 1): percentages, proportions and fractions

Robert L. Kieschnick; B. D. McCullough

Many types of studies examine the influence of selected variables on the conditional expectation of a proportion or vector of proportions, for example, market shares, rock composition, and so on. We identify four distributional categories into which such data can be put, and focus on regression models for the first category, for proportions observed on the open interval (0, 1). For these data, we identify different specifications used in prior research and compare these specifications using two common samples and specifications of the regressors. Based upon our analysis, we recommend that researchers use either a parametric regression model based upon the beta distribution or a quasi-likelihood regression model developed by Papke and Wooldridge (1997) for these data. Concerning the choice between these two regression models, we recommend that researchers use the parametric regression model unless their sample size is large enough to justify the asymptotic arguments underlying the quasi-likelihood approach.
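
For readers unfamiliar with the second recommended approach, here is a minimal sketch of a Bernoulli quasi-likelihood ("fractional logit") estimator in the spirit of the Papke and Wooldridge method cited above, assuming a response y strictly inside (0, 1) and a design matrix X with an intercept column. The function names, simulated data, and optimizer choice are illustrative and not taken from the paper.

    import numpy as np
    from scipy.optimize import minimize

    def fractional_logit(y, X):
        # Quasi-likelihood estimator for a fractional response y in (0, 1):
        # assumes E[y | X] = 1 / (1 + exp(-X @ beta)) and maximizes the
        # Bernoulli quasi-log-likelihood, which remains valid for non-binary y.
        def neg_quasi_loglik(beta):
            mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
            return -np.sum(y * np.log(mu) + (1.0 - y) * np.log(1.0 - mu))
        beta0 = np.zeros(X.shape[1])
        fit = minimize(neg_quasi_loglik, beta0, method="BFGS")
        return fit.x

    # Illustrative usage with simulated proportions (not the samples used in the paper).
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(500), rng.normal(size=500)])
    mu = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * X[:, 1])))
    y = np.clip(rng.beta(5 * mu, 5 * (1 - mu)), 1e-6, 1 - 1e-6)
    print(fractional_logit(y, X))   # roughly recovers (0.5, 1.0)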


The American Statistician | 1998

Assessing the Reliability of Statistical Software: Part I

B. D. McCullough

Entry-level tests of the accuracy of statistical software, such as Wilkinson's Statistics Quiz, have long been available, but more advanced collections of tests have not. This article proposes a set of intermediate-level tests focusing on three areas: estimation, both linear and nonlinear; random number generation; and statistical distributions (e.g., for calculating p-values). The complete methodology is described in detail. Convenient methods for summarizing the results are presented, so that an assessment of numerical accuracy can easily be incorporated into a software review.
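
A conventional way to summarize such accuracy tests is the log relative error, roughly the number of correct significant digits in a package's estimate relative to a benchmark's certified value. The sketch below assumes that convention; the function name and example values are illustrative.

    import numpy as np

    def log_relative_error(estimate, certified):
        # Approximate number of correct significant digits in `estimate`
        # relative to a certified benchmark value; falls back to absolute
        # error when the certified value is zero.
        estimate = np.asarray(estimate, dtype=float)
        certified = np.asarray(certified, dtype=float)
        with np.errstate(divide="ignore", invalid="ignore"):
            err = np.where(certified != 0.0,
                           np.abs(estimate - certified) / np.abs(certified),
                           np.abs(estimate - certified))
            return -np.log10(err)

    # Example: an estimate agreeing with the certified value to about 6 digits.
    print(log_relative_error(1.0000004, 1.0))   # approximately 6.4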


The American Economic Review | 2004

Verifying the Solution from a Nonlinear Solver: A Case Study

B. D. McCullough; Hrishikesh D. Vinod

We are pleased to confirm that any doubt our article (McCullough and Vinod, 2003; hereafter “MV03”) may have cast on Ron Shachar and Barry Nalebuff (1999; hereafter “SN99”) must be removed. We are especially pleased because we thought it quite unfair that other researchers were able to exempt themselves from such detailed scrutiny. It appears that such researchers will no longer have the luxury of reneging on their agreement to honor the replication policy, as this journal now requires authors of accepted empirical papers to provide all programs and data files for posting on the AER Web site as a precondition of publication.

The primary aim of our article (MV03) was to provide a four-part methodology for verifying the solution from a nonlinear solver: check the gradient, examine the trace, analyze the Hessian, and profile the likelihood. We adduced copious evidence (MV03, p. 873) that solvers used by economists can produce inaccurate answers, gave examples of different packages giving different answers to the same nonlinear problems (MV03, p. 874), and showed (MV03, pp. 873–74) that researchers, at least in this journal, make no effort to verify the solutions from the solvers that they use. We believe this uncritical acceptance of solutions from nonlinear solvers to be a systemic problem in economic research; that is why we wrote the article: certainly, econometrics texts do not show how to verify the solution from a nonlinear solver. In passing, we also showed how a problem can be too large for conventional PC methods, and indicated the failure of replication policies in this journal and other journals.

We used the data and likelihood function from SN99 to illustrate the methodology. In the course of this illustration, we noted that the Hessian was ill-conditioned, suggested that there might exist multiple optima and that inference based on the Wald statistic was not appropriate, and concluded that the solution we found was, at best, a tentative solution. However, as shown in Shachar and Nalebuff (2004; hereafter “SN04”), when the problem is rescaled the Hessian is not ill-conditioned; they have correctly identified the difference between the condition number of the badly scaled version of the problem that we analyzed in MV03 and the well-scaled problem that they have analyzed. When the problem is correctly scaled, the Hessian is well-conditioned, the model is locally identifiable, the problem can be solved on a PC, a solution to the problem exists, and Shachar and Nalebuff present it in their Table 1. Though we were aware that rescaling could ameliorate ill-conditioning (MV03, p. 882), we were unaware of the distinction between artificial ill-conditioning and inherent ill-conditioning, so the method we suggested for analyzing the Hessian contained an error of omission. This error caused us to reach an incorrect conclusion concerning the existence of a solution to the problem. We apologize to Professors Shachar and Nalebuff, and we thank them for their gracious understanding in this regard. Accordingly, we have amended our prescription for analyzing the Hessian; see our nearby exchange with David M. Drukker and Vince Wiggins (2004; hereafter “DW”) for complete details.
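
Two of the four checks, the gradient and the Hessian's conditioning, are easy to sketch numerically. The following is a minimal illustration, assuming a generic negative log-likelihood function and a candidate solution; the rescaling step reflects the artificial-versus-inherent ill-conditioning distinction discussed above, and none of the names or tolerances come from MV03.

    import numpy as np

    def check_solution(negloglik, theta_hat, h=1e-5):
        # Central-difference gradient and Hessian of `negloglik` at the
        # reported optimum theta_hat.
        k = len(theta_hat)
        grad = np.empty(k)
        hess = np.empty((k, k))
        for i in range(k):
            ei = np.zeros(k); ei[i] = h
            grad[i] = (negloglik(theta_hat + ei) - negloglik(theta_hat - ei)) / (2 * h)
            for j in range(k):
                ej = np.zeros(k); ej[j] = h
                hess[i, j] = (negloglik(theta_hat + ei + ej)
                              - negloglik(theta_hat + ei - ej)
                              - negloglik(theta_hat - ei + ej)
                              + negloglik(theta_hat - ei - ej)) / (4 * h * h)

        # Rescale so the Hessian has a unit diagonal; a large drop in the
        # condition number after rescaling points to artificial (scaling-induced)
        # rather than inherent ill-conditioning.
        d = 1.0 / np.sqrt(np.abs(np.diag(hess)))
        scaled = d[:, None] * hess * d[None, :]
        return {
            "max_abs_gradient": float(np.max(np.abs(grad))),   # should be near zero
            "cond_hessian": float(np.linalg.cond(hess)),
            "cond_hessian_rescaled": float(np.linalg.cond(scaled)),
        }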


Journal of Money, Credit and Banking | 2006

Lessons from the JMCB Archive

B. D. McCullough; Kerry Anne McGeary; Teresa D. Harrison

We examine the online archive of the Journal of Money, Credit, and Banking, in which an author is required to deposit the data and code that replicate the results of his paper. We find that most authors do not fulfill this requirement. Of more than 150 empirical articles, fewer than 15 could be replicated. Despite all this, there is no doubt that a data/code archive is more conducive to replicable research than the alternatives. We make recommendations to improve the functioning of the archive.


Journal of Economic Methodology | 2008

The role of data/code archives in the future of economic research

Richard G. Anderson; William H. Greene; B. D. McCullough; Hrishikesh D. Vinod

This essay examines the role of data and program-code archives in making economic research ‘replicable.’ Replication of published results is recognized as an essential part of the scientific method. Yet, historically, both the ‘demand for’ and ‘supply of’ replicable results in economics have been minimal. ‘Respect for the scientific method’ is not sufficient to motivate either economists or editors of professional journals to ensure the replicability of published results. We enumerate the costs and benefits of mandatory data and code archives, and argue that the benefits far exceed the costs. Progress has been made since the gloomy assessment of Dewald, Thursby and Anderson some 20 years ago in the American Economic Review, but much remains to be done before empirical economics ceases to be a ‘dismal science’ when judged by the replicability of its published results.


Economic Analysis and Policy | 2009

Open Access Economics Journals and the Market for Reproducible Economic Research

B. D. McCullough

Most economics journals take no substantive measures to ensure that the results they publish are replicable. To make the data and code available so that published results can be checked requires an archive. Top economics journals have been adopting mandatory data+code archives in the past few years. The movement toward mandatory data+code archives has yet to reach the open access journals. This is paradoxical; given their emphasis on making articles readily available, one would think that open access journals also would want to make data and code readily available. Open access economics journals should adopt mandatory data+code archives en masse. Doing so will give them a competitive advantage with respect to traditional economics journals.


Computational Statistics & Data Analysis | 2008

Microsoft Excel's 'Not The Wichmann-Hill' random number generators

B. D. McCullough

Microsoft attempted to implement the Wichmann-Hill RNG in Excel 2003 and failed; it did not just produce numbers between zero and unity, it also produced negative numbers. Microsoft issued a patch that allegedly fixed the problem, so that the patched Excel 2003 and Excel 2007 now implement the Wichmann-Hill RNG, at least according to Microsoft. We show that whatever RNG Microsoft has implemented in these versions of Excel, it is not the Wichmann-Hill RNG. Microsoft has now failed twice to implement the dozen lines of code that define the Wichmann-Hill RNG.
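
For reference, the generator in question is short enough to reproduce. The sketch below follows the original Wichmann-Hill (AS 183) algorithm as usually published: three small linear congruential generators whose combined fractional part is the uniform draw, so the output is always in [0, 1) and never negative. The seed values and class name are illustrative.

    class WichmannHill:
        # Original Wichmann-Hill (AS 183) generator: three small linear
        # congruential generators combined by their fractional sum.
        def __init__(self, s1=100, s2=200, s3=300):
            self.s1, self.s2, self.s3 = s1, s2, s3   # seeds in [1, 30000]

        def random(self):
            self.s1 = (171 * self.s1) % 30269
            self.s2 = (172 * self.s2) % 30307
            self.s3 = (170 * self.s3) % 30323
            # The combined value is always in [0, 1), never negative.
            return (self.s1 / 30269 + self.s2 / 30307 + self.s3 / 30323) % 1.0

    rng = WichmannHill()
    print([round(rng.random(), 6) for _ in range(3)])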


Computing in Economics and Finance | 1998

Implementing the Double Bootstrap

B. D. McCullough; Hrishikesh D. Vinod

The single bootstrap is already popular in economics, though the double bootstrap has better convergence properties. We discuss the theory and implementation of the double bootstrap, both with and without the pivotal transformation, and give detailed examples of each. One example, a nonlinear double bootstrap of a Cobb-Douglas production function, explains the use of Gauss-Newton regressions as a device to decrease computational time. Another example double bootstraps elasticities from a translog production function.
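
A minimal sketch of the pivotal variant for a generic statistic is given below, assuming the pivot is the usual studentized quantity and that the inner resamples supply each outer replicate's standard error. The function name, resample counts, and data are illustrative, and the paper's Gauss-Newton shortcut for nonlinear models is not reproduced here.

    import numpy as np

    def bootstrap_t_ci(x, stat=np.mean, B1=999, B2=100, alpha=0.05, seed=None):
        # Percentile-t (pivotal) interval in which the inner bootstrap supplies
        # the standard error of each outer replicate: a nested, "double" bootstrap.
        rng = np.random.default_rng(seed)
        n = len(x)
        theta_hat = stat(x)

        outer = np.empty(B1)
        t_star = np.empty(B1)
        for b in range(B1):
            xb = rng.choice(x, size=n, replace=True)
            outer[b] = stat(xb)
            inner = np.array([stat(rng.choice(xb, size=n, replace=True)) for _ in range(B2)])
            t_star[b] = (outer[b] - theta_hat) / inner.std(ddof=1)

        se_hat = outer.std(ddof=1)   # bootstrap SE of the original statistic
        lo = theta_hat - np.quantile(t_star, 1 - alpha / 2) * se_hat
        hi = theta_hat - np.quantile(t_star, alpha / 2) * se_hat
        return lo, hi

    # Illustrative usage with simulated skewed data.
    x = np.random.default_rng(1).lognormal(size=50)
    print(bootstrap_t_ci(x, B1=199, B2=50))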


Quality Technology and Quantitative Management | 2006

The Effect of Estimated Parameters on Poisson EWMA Control Charts

Murat Caner Testik; B. D. McCullough; Connie M. Borror

Performance of control charts is generally evaluated under the assumption that the process parameters are known. In many control chart applications, however, the process parameters are rarely known and their estimates from an in-control reference sample are used instead. In such cases, the moments of the run length distribution depend on the values of the estimated parameters. The Poisson exponentially weighted moving average (EWMA) is an effective control chart in situations where the number of nonconformities per unit from a repetitive production process is monitored. The objective of this paper is to study the effect of estimating the mean on the performance of the Poisson EWMA control chart. We make use of the Markov chain approach. Sample-size recommendations and some concluding comments are provided.
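
To fix ideas, here is a small simulation sketch of a Poisson EWMA chart whose limits are built from an estimated in-control mean, assuming the usual asymptotic-variance control limits. The smoothing constant, limit width, and sample sizes are illustrative, and the paper's Markov-chain run-length calculations are not reproduced here.

    import numpy as np

    def poisson_ewma_run_length(mu0, m, lam=0.2, L=2.7, seed=None):
        # In-control run length of a Poisson EWMA chart when the in-control
        # mean is estimated from a reference sample of m counts, not known.
        rng = np.random.default_rng(seed)
        mu_hat = rng.poisson(mu0, size=m).mean()          # estimated in-control mean

        # Asymptotic-variance control limits built from the *estimated* mean.
        half_width = L * np.sqrt(lam * mu_hat / (2.0 - lam))
        ucl, lcl = mu_hat + half_width, max(mu_hat - half_width, 0.0)

        z, t = mu_hat, 0
        while True:
            t += 1
            z = (1.0 - lam) * z + lam * rng.poisson(mu0)  # EWMA of new counts
            if z > ucl or z < lcl:
                return t                                   # first (false-alarm) signal

    # Average in-control run length over a few simulated charts; with a small
    # reference sample it can differ noticeably from the known-parameter case.
    print(np.mean([poisson_ewma_run_length(mu0=4.0, m=30, seed=s) for s in range(200)]))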


American Journal of Agricultural Economics | 1998

Better Confidence Intervals: The Double Bootstrap with No Pivot

David Letson; B. D. McCullough

The double bootstrap is an important advance in confidence interval generation because it converges faster than the already popular single bootstrap. Yet the usual double bootstrap requires a stable pivot that is not always available, e.g., when estimating flexibilities or substitution elasticities. A recently developed double bootstrap does not require a pivot. A Monte Carlo analysis with the Waugh data finds the double bootstrap achieves nominal coverage whereas the single bootstrap does not. A useful artifice dramatically decreases the computational time of the double bootstrap.
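
A minimal sketch of a pivotless double bootstrap for a generic statistic follows: the inner level calibrates the nominal level of the outer percentile interval rather than studentizing it. The names, resample counts, and the particular calibration rule (a standard double-bootstrap percentile calibration) are assumptions for illustration and are not taken from the article.

    import numpy as np

    def double_bootstrap_ci_no_pivot(x, stat=np.mean, B1=999, B2=249,
                                     alpha=0.05, seed=None):
        # Calibrated percentile ("no pivot") double bootstrap: the inner
        # resamples estimate where the original statistic falls in each outer
        # resample's bootstrap distribution, and those positions replace the
        # naive percentile levels.
        rng = np.random.default_rng(seed)
        n = len(x)
        theta_hat = stat(x)

        outer = np.empty(B1)
        position = np.empty(B1)
        for b in range(B1):
            xb = rng.choice(x, size=n, replace=True)
            outer[b] = stat(xb)
            inner = np.array([stat(rng.choice(xb, size=n, replace=True)) for _ in range(B2)])
            position[b] = np.mean(inner <= theta_hat)

        # Calibrated levels replace the naive alpha/2 and 1 - alpha/2.
        lo_level = np.quantile(position, alpha / 2)
        hi_level = np.quantile(position, 1 - alpha / 2)
        return np.quantile(outer, lo_level), np.quantile(outer, hi_level)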

Collaboration


Dive into B. D. McCullough's collaborations.

Top Co-Authors

Robert L. Kieschnick
University of Texas at Dallas

Cristian Gatu
Alexandru Ioan Cuza University