William J. Kennedy
Iowa State University
Publications
Featured research published by William J. Kennedy.
Computational Statistics & Data Analysis | 1992
Morgan C. Wang; William J. Kennedy
A Taylor series expansion of the multivariate normal integral is used to calculate the value of the integral over rectangular regions. Interval analysis and automatic differentiation provide self-validation for calculated probabilities. In examples, the Taylor series approximation gives more accurate results than the algorithm of Schervish (1984).
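Interval analysis is the common thread in much of the work listed here. As a minimal sketch of the idea (the Interval class and the example function below are illustrative, not the authors' implementation), each arithmetic operation returns an interval guaranteed to contain the exact result:

```python
# A minimal sketch of interval arithmetic (endpoint rounding is
# ignored for brevity; rigorous implementations round outward).

class Interval:
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Enclose f(x, y) = x*y - x^2 over the rectangle [0,1] x [2,3].
x, y = Interval(0.0, 1.0), Interval(2.0, 3.0)
print(x * y - x * x)   # [-1.0, 3.0]: contains the true range [0, 2]
```

The enclosure can be wider than the true range (the dependency effect), which is one reason a tighter expansion such as a Taylor series form can yield tighter validated bounds.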
Communications in Statistics - Simulation and Computation | 1977
William J. Kennedy; James E. Gentle; V. A. Sposito
A numerical method for obtaining data (X|y), relative to the linear model y = Xβ + ε, is given. The user is allowed to specify the column means of the X matrix, the general order of the condition number, the unique L1 solution vector, and the deviations of the y's about the fitted hyperplane. Implementation of the method requires little more than the use of subroutines found in most modern subroutine libraries. Computer-generated data of this kind are useful in numerical studies of the operating characteristics of different algorithms.
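The abstract does not give the construction, but a standard way to build regression test problems with a prescribed condition number is an SVD construction. The sketch below is illustrative only (the function name and recipe are assumptions, and it does not impose the paper's column-mean or L1-uniqueness requirements):

```python
import numpy as np

def make_test_data(n, p, cond, beta, resid, seed=None):
    """Build X (n x p) with condition number `cond` via an SVD
    construction, then y = X @ beta + resid, so the target fit and
    its residuals are known exactly. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal columns
    V, _ = np.linalg.qr(rng.standard_normal((p, p)))
    s = np.geomspace(1.0, 1.0 / cond, p)               # singular values
    X = U @ np.diag(s) @ V.T
    return X, X @ beta + resid

X, y = make_test_data(20, 3, cond=1e4,
                      beta=np.array([1.0, -2.0, 0.5]),
                      resid=np.zeros(20), seed=1)
print(np.linalg.cond(X))   # ~1e4 by construction
```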
Journal of Statistical Computation and Simulation | 1990
Morgan C. Wang; William J. Kennedy
Comparison of algorithms for computing probabilities and percentiles is often carried out in an effort to identify the best algorithm for various applications. One requirement when conducting comparative studies is a usable source of "satisfactory approximations to correct answers" to serve as a basis for accuracy comparisons. This paper reports success in applying elements of interval analysis to obtain a self-validating computational method for bivariate normal probabilities. Results from applying this method can provide a basis for accuracy studies of algorithms for bivariate normal probabilities. A study comparing several methods for computing probabilities over rectangles for this distribution, using the self-validated base values, was carried out, and the paper reports the best method found.
Journal of Computational and Graphical Statistics | 2000
Kevin Wright; William J. Kennedy
The EM algorithm is widely used in incomplete-data problems (and some complete-data problems) for parameter estimation. One limitation of the EM algorithm is that, upon termination, it is not always near a global optimum. As reported by Wu (1982), when several stationary points exist, convergence to a particular stationary point depends on the choice of starting point. Furthermore, convergence to a saddle point or local minimum is also possible. In the EM algorithm, although the log-likelihood is unknown, an interval containing the gradient of the EM Q function can be computed at individual points using interval analysis methods. By using interval analysis to enclose the gradient of the EM Q function (and, consequently, of the log-likelihood), an algorithm is developed that is able to locate all stationary points of the log-likelihood within any designated region of the parameter space. The algorithm is applied to several examples. In one example involving the t distribution, the algorithm successfully locates all seven stationary points of the log-likelihood.
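As a sketch of how a gradient enclosure can certify where stationary points can and cannot lie (a toy univariate stand-in, not the paper's EM setting), consider f(x) = x^4 - 3x^2 + x with derivative g(x) = 4x^3 - 6x + 1. A simple branch-and-bound discards every subinterval whose derivative enclosure excludes zero:

```python
def g_enclosure(a, b):
    """Natural interval extension of g(x) = 4x^3 - 6x + 1 on [a, b]:
    x^3 is increasing and -6x is decreasing, so these endpoints
    bound g over the whole subinterval."""
    return 4 * a**3 - 6 * b + 1, 4 * b**3 - 6 * a + 1

def stationary_boxes(a, b, tol=1e-3):
    """Return subintervals of width <= tol that together contain every
    zero of g on [a, b]; discarded boxes provably contain no zero,
    so no stationary point can be missed."""
    work, found = [(a, b)], []
    while work:
        a, b = work.pop()
        lo, hi = g_enclosure(a, b)
        if lo > 0 or hi < 0:             # 0 not enclosed: no root here
            continue
        if b - a <= tol:
            found.append((a, b))         # adjacent boxes can be merged
        else:
            m = 0.5 * (a + b)
            work += [(a, m), (m, b)]
    return found

print(stationary_boxes(-2.0, 2.0))       # clusters near the three roots
```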
Statistics and Computing | 1995
Morgan C. Wang; William J. Kennedy
A self-validating numerical method based on interval analysis for the computation of central and non-central F probabilities and percentiles is reported. The major advantage of this approach is that there are guaranteed error bounds associated with the computed values (or intervals), i.e. the computed values satisfy the user-specified accuracy requirements. The methodology reported in this paper can be adapted to approximate the probabilities and percentiles for other commonly used distribution functions.
Journal of the American Statistical Association | 1994
Morgan C. Wang; William J. Kennedy
Self-validating computation based on interval arithmetic can produce computed values with a guaranteed error bound. Such methods are especially useful whenever the computed results must satisfy given accuracy requirements. This article reports methods for obtaining self-validating results when computing probabilities and percentiles of univariate continuous distributions. Probability functions dealt with explicitly in the article are normal, incomplete gamma, incomplete beta, and noncentral chi-squared.
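For the normal case, one way to see what "self-validating" means in practice: the alternating Maclaurin series for erf has partial sums that bracket the true value, giving a guaranteed enclosure of Φ(x). The sketch below is illustrative; it ignores floating-point rounding of the endpoints, which a production routine would round outward:

```python
import math

def phi_enclosure(x, n_terms=6):
    """Enclose Phi(x) for 0 < x <= 1 via the alternating Maclaurin
    series for erf: when term magnitudes decrease, consecutive
    partial sums bracket the limit. Endpoint rounding is ignored."""
    z = x / math.sqrt(2.0)              # Phi(x) = 0.5 + 0.5*erf(z)
    partial, term = 0.0, z              # term_n = z^(2n+1) / (n! (2n+1))
    for n in range(n_terms):
        partial += term if n % 2 == 0 else -term
        if n % 2 == 0:
            upper = partial             # overshoots after a + term
        else:
            lower = partial             # undershoots after a - term
        term *= z * z * (2*n + 1) / ((n + 1) * (2*n + 3))
    c = 2.0 / math.sqrt(math.pi)
    return 0.5 + 0.5 * c * lower, 0.5 + 0.5 * c * upper

print(phi_enclosure(1.0))   # brackets Phi(1) = 0.84134...
```

Meeting a user-specified accuracy requirement is then simply a matter of adding terms until the enclosure width falls below the tolerance.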
Journal of Statistical Computation and Simulation | 2002
Kevin Wright; William J. Kennedy
Self-validated computations using interval analysis produce results with a guaranteed error bound. This article presents methods for self-validated computation of probabilities and percentile points of the bivariate chi-square distribution and a bivariate F distribution. For the computation of critical points (c1, c2) in the equation P(Y1 ≤ c1, Y2 ≤ c2) = 1 − α, the case c1 = c2 is considered. A combination of interval secant and bisection algorithms is developed for finding enclosures of the percentile points of the distribution. Results are compared to previously published tables.
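A sketch of the bisection half of such an algorithm, using the univariate chi-square(2) CDF (which has the closed form F(c) = 1 − exp(−c/2)) as a stand-in for the bivariate enclosure; the function names and the faked enclosure width are illustrative assumptions:

```python
import math

def cdf_enclosure(c, eps=1e-12):
    """Stand-in enclosure of the chi-square(2) CDF, F(c) = 1 - exp(-c/2).
    A real self-validating routine computes [lo, hi] by interval
    arithmetic; here we fake a guaranteed bound of width 2*eps."""
    f = 1.0 - math.exp(-0.5 * c)
    return f - eps, f + eps

def percentile_enclosure(p, a=0.0, b=50.0, tol=1e-9):
    """Interval bisection: shrink [a, b] until it encloses the point c
    with F(c) = p, using only the CDF enclosure."""
    while b - a > tol:
        m = 0.5 * (a + b)
        lo, hi = cdf_enclosure(m)
        if hi < p:        # F(m) certainly below p: c lies right of m
            a = m
        elif lo > p:      # F(m) certainly above p: c lies left of m
            b = m
        else:             # enclosure too wide to decide; stop here
            break
    return a, b

print(percentile_enclosure(0.95))   # encloses -2*ln(0.05) = 5.99146...
```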
Statistics and Computing | 1997
Ouhong Wang; William J. Kennedy
Conventional computations use real numbers as input and produce real numbers as results, without any indication of their accuracy. Interval analysis, by contrast, uses interval elements throughout the computation and produces intervals as output, with the guarantee that the true results are contained in them. One major use of interval analysis in statistics is computing high-dimensional multivariate probabilities. By decreasing the lengths of the intervals that contain the theoretically true answers, results can be obtained to arbitrary accuracy, as demonstrated here for multivariate normal and multivariate t integrals. This is an advantage over the approximation methods currently in use. Since interval analysis is more computationally intensive than traditional computing, a MasPar parallel computer is used in this research to improve performance.
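The MasPar code itself is not shown in the abstract; as a rough illustration of the parallel pattern (splitting a region into subboxes whose enclosures are computed independently), the sketch below brackets the integral of exp(−x²/2) over [0, 4] with interval Riemann sums across worker processes. All names here are illustrative:

```python
import math
from concurrent.futures import ProcessPoolExecutor

def chunk_enclosure(args):
    """Enclose the integral of exp(-x^2/2) over [a, b], 0 <= a < b.
    The integrand is decreasing on [0, inf), so on each subinterval
    [f(right), f(left)] * width is a guaranteed bracket."""
    a, b, n = args
    w = (b - a) / n
    lo = sum(math.exp(-0.5 * (a + (i + 1) * w) ** 2) * w for i in range(n))
    hi = sum(math.exp(-0.5 * (a + i * w) ** 2) * w for i in range(n))
    return lo, hi

if __name__ == "__main__":
    # Split [0, 4] into chunks and refine each chunk in parallel.
    chunks = [(i * 0.5, (i + 1) * 0.5, 50_000) for i in range(8)]
    with ProcessPoolExecutor() as pool:
        parts = list(pool.map(chunk_enclosure, chunks))
    lo, hi = sum(p[0] for p in parts), sum(p[1] for p in parts)
    print(lo, hi)   # brackets the true value, approx. 1.2532
```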
Communications in Statistics-theory and Methods | 1980
V. A. Sposito; William J. Kennedy; James E. Gentle
Recent results by G. Appa and C. Smith, as well as I. Barrodale and F. D. K. Roberts, establish several properties of fitting a linear model to a set of observation points under the criterion of least sum of absolute deviations (commonly denoted the L1 criterion). This paper generalizes these properties to the non-full-rank case and relaxes, in a natural way, some assumptions made by Appa and Smith.
Communications in Statistics - Simulation and Computation | 1977
William J. Kennedy; James E. Gentle
Two techniques for detecting inaccuracies in least absolute values (LAV) regression computations are presented and discussed. Examples of the use of the methods are given. The techniques are shown to apply to the more general case of M-estimation.
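The abstract leaves the techniques unspecified; one simple diagnostic in this spirit checks the coordinate-wise optimality conditions of a computed L1 fit, since any violation proves the computed solution is inaccurate. The function below is an illustrative sketch, not the paper's method:

```python
import numpy as np

def lav_optimality_check(X, y, beta, zero_tol=1e-9):
    """Necessary condition for beta to minimize sum(|y - X @ beta|):
    along each coordinate direction +/- e_j the one-sided directional
    derivative is nonnegative, which reduces to
      |sum over r_i != 0 of sign(r_i)*X[i,j]|
          <= sum over r_i == 0 of |X[i,j]|   for every column j.
    Any violation proves the computed LAV fit is not optimal."""
    r = y - X @ beta
    zero = np.abs(r) < zero_tol
    lhs = np.abs(np.sign(r[~zero]) @ X[~zero])   # shape (p,)
    rhs = np.abs(X[zero]).sum(axis=0)            # shape (p,)
    return bool(np.all(lhs <= rhs + 1e-12))
```

An analogous check applies to M-estimation with a smooth loss by replacing sign(r) with the ψ function of the estimator.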