
Publication


Featured research published by Andrew A. Neath.


Communications in Statistics - Theory and Methods | 1997

Regression and time series model selection using variants of the Schwarz information criterion

Andrew A. Neath; Joseph E. Cavanaugh

The Schwarz (1978) information criterion, SIC, is a widely-used tool in model selection, largely due to its computational simplicity and effective performance in many modeling frameworks. The derivation of SIC (Schwarz, 1978) establishes the criterion as an asymptotic approximation to a transformation of the Bayesian posterior probability of a candidate model. In this paper, we investigate the derivation for the identification of terms which are discarded as being asymptotically negligible, but which may be significant in small to moderate sample-size applications. We suggest several SIC variants based on the inclusion of these terms. The results of a simulation study show that the variants improve upon the performance of SIC in two important areas of application: multiple linear regression and time series analysis.
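For reference, a minimal statement of the criterion under discussion, in standard notation rather than the paper's own: for a candidate model with maximized likelihood value, k free parameters, and sample size n,

\[ \mathrm{SIC} = -2\,\log L(\hat{\theta}) + k \log n , \]

and -SIC/2 serves as a large-sample approximation to the log of a quantity proportional to the posterior probability of the model. The variants studied in the paper retain lower-order terms that this approximation discards; their exact forms are given in the article and are not reproduced here.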


Communications in Statistics - Theory and Methods | 1999

Generalizing the derivation of the Schwarz information criterion

Joseph E. Cavanaugh; Andrew A. Neath

The Schwarz information criterion (SIC, BIC, SBC) is one of the most widely known and used tools in statistical model selection. The criterion was derived by Schwarz (1978) to serve as an asymptotic approximation to a transformation of the Bayesian posterior probability of a candidate model. Although the original derivation assumes that the observed data are independent, identically distributed, and arising from a probability distribution in the regular exponential family, SIC has traditionally been used in a much larger scope of model selection problems. To better justify the widespread applicability of SIC, we derive the criterion in a very general framework: one which does not assume any specific form for the likelihood function, but only requires that it satisfies certain non-restrictive regularity conditions.
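A hedged sketch of the style of argument involved, using a standard Laplace approximation rather than the paper's own derivation: for a model with k parameters, likelihood L(θ), and prior π(θ), a second-order expansion of the log-likelihood about its maximizer gives

\[ \log \int L(\theta)\,\pi(\theta)\,d\theta \;\approx\; \log L(\hat{\theta}) \;-\; \frac{k}{2}\,\log n \;+\; O(1) , \]

so that -2 times the left-hand side is approximated by SIC up to bounded terms. The contribution of the paper is to justify an approximation of this kind under non-restrictive regularity conditions on the likelihood, rather than only for i.i.d. data from a regular exponential family.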


The American Statistician | 1997

On the Efficacy of Bayesian Inference for Nonidentifiable Models

Andrew A. Neath; Francisco J. Samaniego

Although classical statistical methods are inapplicable in point estimation problems involving nonidentifiable parameters, a Bayesian analysis using proper priors can produce a closed form, interpretable point estimate in such problems. The question of whether, and when, the Bayesian approach produces worthwhile answers is investigated. In contrast to the preposterior analysis of this question offered by Kadane, we examine the question conditionally, given the information provided by the experiment. An important initial insight on the matter is that posterior estimates of a nonidentifiable parameter can actually be inferior to the prior (no-data) estimate of that parameter, even as the sample size grows to infinity. In general, our goal is to characterize, within the space of prior distributions, classes of priors that lead to posterior estimates that are superior, in some reasonable sense, to one's prior estimate. This goal is shown to be feasible through a detailed examination of a particular t...
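A minimal illustration of the phenomenon, not taken from the paper: suppose X ~ Binomial(n, p1 + p2), so the data inform only the sum ψ = p1 + p2 while p1 itself is nonidentifiable. Under a proper prior,

\[ \pi(p_1 \mid x) \;=\; \int \pi(p_1 \mid \psi)\,\pi(\psi \mid x)\,d\psi , \]

so the data enter only through ψ and the conditional prior π(p1 | ψ) is never updated. Whether the resulting point estimate of p1 improves on the prior estimate therefore depends on how well the prior links p1 to the identifiable ψ, which is the kind of question the paper makes precise.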


Journal of Data Science | 2006

A Bayesian Approach to the Multiple Comparisons Problem

Andrew A. Neath; Joseph E. Cavanaugh

Consider the problem of selecting independent samples from several populations for the purpose of between-group comparisons. An important aspect of the solution is the determination of clusters where mean levels are equal, often accomplished using multiple comparisons testing. We formulate the hypothesis testing problem of determining equal-mean clusters as a model selection problem. Information from all competing models is combined through Bayesian methods in an effort to provide a more realistic accounting of uncertainty. An example illustrates how the Bayesian approach leads to a logically sound presentation of multiple comparison results.

Consider the problem of selecting independent samples from several populations for the purpose of between-group comparisons, either through hypothesis testing or estimation of mean differences. A companion problem is the estimation of within-group mean levels. Together, these problems form the foundation for the very common analysis of variance framework, but also describe essential aspects of stratified sampling, cluster analysis, empirical Bayes, and other settings. Procedures for making between-group comparisons are known as multiple comparisons methods. The goal of determining which groups have equal means requires testing a collection of related hypotheses. We examine this hypothesis testing problem from a Bayesian viewpoint. In Section 2, we detail how the determination of equal-mean clusters can be formulated as a Bayesian model selection problem. Posterior model probabilities are computed via the Bayesian information criterion. Bayesian model averaging is introduced as a tool for combining information from all competing models in an effort to provide a more realistic accounting of uncertainty. An example in Section 3 illustrates how the Bayesian approach leads to a logically sound presentation of multiple comparison results.
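A rough computational sketch of the kind of calculation described above, using the usual BIC-to-posterior-probability approximation; the data and the small set of candidate clusterings below are hypothetical, not taken from the paper.

# Sketch: approximate posterior probabilities for "equal-mean cluster" models
# via the Bayesian information criterion (BIC).  Hypothetical example data.
import numpy as np

groups = [np.array([4.1, 3.9, 4.3]),   # group 1 observations (invented)
          np.array([4.0, 4.2, 4.1]),   # group 2
          np.array([6.8, 7.1, 6.9])]   # group 3

def bic_for_partition(labels):
    """Gaussian BIC when groups sharing a label share a common mean."""
    n = sum(len(g) for g in groups)
    resid = []
    for lab in set(labels):
        pooled = np.concatenate([g for g, l in zip(groups, labels) if l == lab])
        resid.append(pooled - pooled.mean())
    resid = np.concatenate(resid)
    sigma2 = np.mean(resid ** 2)                 # MLE of the common variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = len(set(labels)) + 1                     # cluster means + variance
    return -2 * loglik + k * np.log(n)

# A few candidate clusterings of the three groups.
models = {"{1}{2}{3}": (0, 1, 2), "{1,2}{3}": (0, 0, 1), "{1,2,3}": (0, 0, 0)}
bics = {name: bic_for_partition(lab) for name, lab in models.items()}
best = min(bics.values())
weights = {name: np.exp(-0.5 * (b - best)) for name, b in bics.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(name, "approximate posterior probability:", w / total)

Model-averaged statements (for example, the probability that two particular groups share a mean) can then be obtained by summing these weights over the models in which the statement holds.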


Communications in Statistics - Simulation and Computation | 2008

Performance of Variable Selection Methods in Regression Using Variations of the Bayesian Information Criterion

Tom Burr; Herb Fry; Brian D. McVey; Eric Sander; Joseph E. Cavanaugh; Andrew A. Neath

The Bayesian information criterion (BIC) is widely used for variable selection. We focus on the regression setting for which variations of the BIC have been proposed. A version that includes the Fisher Information matrix of the predictor variables performed best in one published study. In this article, we extend the evaluation, introduce a performance measure involving how closely posterior probabilities are approximated, and conclude that the version that includes the Fisher Information often favors regression models having more predictors, depending on the scale and correlation structure of the predictor matrix. In the image analysis application that we describe, we therefore prefer the standard BIC approximation because of its relative simplicity and competitive performance at approximating the true posterior probabilities.
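For concreteness, a minimal sketch of the standard-BIC baseline that the study compares against, applied to all-subsets selection in linear regression; the data here are invented and the Fisher-information variant evaluated in the article is not reproduced.

# Sketch: rank every predictor subset by the standard BIC in a linear regression.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.normal(size=(n, p))                          # hypothetical predictors
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=n)

def bic_linear(X_sub, y):
    """Gaussian BIC for an ordinary least squares fit with an intercept."""
    n = len(y)
    design = np.column_stack([np.ones(n), X_sub]) if X_sub.size else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = design.shape[1] + 1                          # coefficients + error variance
    return -2 * loglik + k * np.log(n)

subsets = [c for r in range(p + 1) for c in combinations(range(p), r)]
scores = {c: bic_linear(X[:, c], y) for c in subsets}
print("BIC-selected predictors:", min(scores, key=scores.get))

Normalizing exp(-BIC/2) across the subsets gives the approximate posterior model probabilities whose accuracy the article uses as a performance measure.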


Journal of Applied Mathematics and Decision Sciences | 2003

Polya tree distributions for statistical modeling of censored data

Andrew A. Neath

Polya tree distributions extend the idea of the Dirichlet process as a prior for Bayesian nonparametric problems. Finite dimensional distributions are defined through conditional probabilities in P. This allows for a specification of prior information which carries greater weight where it is deemed appropriate according to the choice of a partition of the sample space. Muliere and Walker [7] construct a partition so that the posterior from right censored data is also a Polya tree. A point of contention is that the specification of the prior is partially dependent on the data. In general, the posterior from censored data will be a mixture of Polya trees. This paper will present a straightforward method for determining the mixing distribution.
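A minimal computational sketch of what a finite-level Polya tree prior on [0, 1] looks like; the dyadic partition, the Beta(c·m², c·m²) parameters, and the number of levels are illustrative choices, not those of the paper.

# Sketch: draw one random probability distribution from a finite Polya tree
# prior on [0, 1] with dyadic partitions and Beta(c*m**2, c*m**2) splits.
import numpy as np

rng = np.random.default_rng(1)
levels, c = 6, 1.0

# Start with all mass on [0, 1]; at each level m, split every interval in two
# and divide its mass according to an independent Beta draw.
mass = np.array([1.0])
for m in range(1, levels + 1):
    splits = rng.beta(c * m ** 2, c * m ** 2, size=mass.size)
    mass = np.column_stack([mass * splits, mass * (1 - splits)]).ravel()

# 'mass' now holds the probabilities of the 2**levels dyadic subintervals.
edges = np.linspace(0.0, 1.0, mass.size + 1)
print("total mass:", mass.sum())                     # 1 up to rounding error
print("P([0, 0.25]):", mass[edges[1:] <= 0.25].sum())

Conditioning such a prior on right-censored data, and expressing the resulting posterior as a mixture of Polya trees, is the problem the paper addresses; the mixing distribution is not computed in this sketch.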


Statistics & Probability Letters | 1996

On Bayesian estimation of the multiple decrement function in the competing risks problem

Andrew A. Neath; Francisco J. Samaniego

Classical methods are inapplicable in estimation problems involving non-identifiable parameters. Bayesian methods, on the other hand, are often both feasible and intuitively reasonable in such problems. This paper establishes the foundations for studying the efficacy of Bayesian updating in estimating nonidentifiable parameters in the competing risks framework. We obtain a useful representation of the posterior distribution of the multiple decrement function, assuming a Dirichlet process prior, and derive the limiting posterior distribution. It is noted that posterior estimates of a nonidentifiable parameter may be inferior to estimates based on the prior distribution alone, even when the size of the available sample grows to infinity. This leads, among other things, to the search for distinguished parameter values, or models, in which Bayesian updating necessarily improves upon one's prior estimate. In a companion paper, it is shown that the multivariate exponential distribution can play such a role in the competing risks framework.
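For context, the identifiability gap in the two-risk case, stated in standard notation rather than the paper's: with latent failure times X and Y, one observes only Z = min(X, Y) and the indicator δ = 1{X ≤ Y}. The observable data determine the sub-distribution functions

\[ F_1(t) = P(Z \le t,\ \delta = 1), \qquad F_2(t) = P(Z \le t,\ \delta = 0), \]

but, without further assumptions, they do not determine the joint survival function S(s, t) = P(X > s, Y > t) of the latent lifetimes; a joint quantity of this kind is the nonidentifiable object whose posterior the paper studies.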


Journal of the American Statistical Association | 1996

How to be a Better Bayesian

Francisco J. Samaniego; Andrew A. Neath

Consider an experiment yielding an observable random quantity X whose distribution is indexed by a real parameter θ. Suppose that a statistician is prepared to execute a Bayes procedure in estimating θ and has quantified available prior information about θ in his chosen prior distribution G. Suppose that before this estimation process is completed, the statistician becomes aware of the outcome Y of a “similar” experiment. In this article we investigate the questions of whether, and when, this additional information can be exploited so as to provide a better estimate of θ in the “current” experiment. We show that in the traditional empirical Bayes framework and in situations involving exponential families, conjugate priors, and squared error loss, the answer is “essentially always.” An explicit Bayes empirical Bayes (BEB) estimator of θ is given that is superior to the original Bayes estimator, showing that the statistician has, in these problems, the opportunity to be a “better Bayesian” by combi...
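As a point of reference for the conjugate setting described above (a standard example, not the paper's construction): if X | θ ~ N(θ, σ²) with σ² known and prior θ ~ N(μ, τ²), the Bayes estimator under squared error loss is the posterior mean

\[ \hat{\theta}_B(X) \;=\; \frac{\tau^2}{\tau^2 + \sigma^2}\,X \;+\; \frac{\sigma^2}{\tau^2 + \sigma^2}\,\mu . \]

The Bayes empirical Bayes idea is to let the outcome Y of the similar experiment inform the prior stage before an estimator of this form is computed; the explicit BEB estimator and the sense in which it improves on the original Bayes rule are developed in the article itself.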


Computational Statistics & Data Analysis | 2000

A regression model selection criterion based on bootstrap bumping for use with resistant fitting

Andrew A. Neath; Joseph E. Cavanaugh

We propose a model selection criterion for regression applications where resistant fitting is appropriate. Our criterion gauges the adequacy of a fitted model based on the median squared error of prediction. The criterion is easily computed using the bootstrap “bumping” algorithm of Tibshirani and Knight (1999, Journal of Computational and Graphical Statistics, pp. 671–686), which provides a convenient method for obtaining least median of squares model parameter estimates. We present an example to illustrate the merit of the criterion in instances where the underlying data set contains influential values. Additionally, we present and discuss the results of a simulation study which illustrates the effectiveness of the criterion under a wide range of error distributions.
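A rough sketch of the bumping idea as it applies here, with invented data: fits are computed on bootstrap resamples, and the candidate minimizing the median squared residual on the original data is retained. The fitting details below are only illustrative.

# Sketch: approximate least-median-of-squares regression via bootstrap "bumping":
# fit ordinary least squares on bootstrap resamples and keep the candidate whose
# coefficients minimize the median squared residual on the original data.
import numpy as np

rng = np.random.default_rng(2)
n = 40
x = rng.uniform(0, 10, size=n)
y = 1.5 * x + 3.0 + rng.normal(size=n)
y[:4] += 25.0                                    # a few gross outliers

X = np.column_stack([np.ones(n), x])

def median_sq_resid(beta):
    return np.median((y - X @ beta) ** 2)

best_beta, best_crit = None, np.inf
for _ in range(500):                             # bootstrap candidates
    idx = rng.integers(0, n, size=n)
    beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    crit = median_sq_resid(beta)                 # evaluated on the ORIGINAL data
    if crit < best_crit:
        best_beta, best_crit = beta, crit

print("bumping fit (intercept, slope):", best_beta)
print("median squared residual:", best_crit)

The selection criterion proposed in the paper then gauges a fitted model by its median squared error of prediction, so candidate regression models can be compared on that scale rather than by sums of squares.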


Statistics & Probability Letters | 1992

On the total time on test transform of an IFRA distribution

Andrew A. Neath; Francisco J. Samaniego

A new proof is provided for a monotonicity property of the total time on test transform of an IFRA distribution.
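For reference, the transform in question, in its standard form (the specific monotonicity property proved in the note is not restated here): for a life distribution F with finite mean, the total time on test transform is

\[ H_F^{-1}(t) \;=\; \int_0^{F^{-1}(t)} \bigl(1 - F(u)\bigr)\,du, \qquad 0 \le t \le 1, \]

and F is IFRA (increasing failure rate average) when −log(1 − F(t))/t is nondecreasing in t > 0.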

Collaboration


Dive into Andrew A. Neath's collaborations.

Top Co-Authors

Adam G. Weyhaupt (Southern Illinois University Edwardsville)
Brian D. McVey (Los Alamos National Laboratory)
Eric Sander (National Nuclear Security Administration)
Herb Fry (Los Alamos National Laboratory)
Natalie Langenfeld (Southern Illinois University Edwardsville)
Tom Burr (Los Alamos National Laboratory)