Publication


Featured research published by Roger L. Berger.


Journal of the American Statistical Association | 1987

Reconciling Bayesian and Frequentist Evidence in the One-Sided Testing Problem

George Casella; Roger L. Berger

Abstract For the one-sided hypothesis testing problem it is shown that it is possible to reconcile Bayesian evidence against H0, expressed in terms of the posterior probability that H0 is true, with frequentist evidence against H0, expressed in terms of the p value. In fact, for many classes of prior distributions it is shown that the infimum of the Bayesian posterior probability of H0 is equal to the p value; in other cases the infimum is less than the p value. The results are in contrast to recent work of Berger and Sellke (1987) in the two-sided (point null) case, where it was found that the p value is much smaller than the Bayesian infimum. Some comments on the point null problem are also given.
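A minimal numeric sketch of the reconciliation, assuming the simplest normal-location instance of the paper's setting: X ~ N(θ, 1), testing H0: θ ≤ 0 against H1: θ > 0. Under a flat (improper) prior, the posterior probability of H0 coincides with the p value; the paper shows this value is the infimum over broad classes of priors.

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

x = 1.645  # observed value of X ~ N(theta, 1)

# Frequentist evidence: p value for H0: theta <= 0 vs H1: theta > 0.
p_value = 1.0 - Phi(x)

# Bayesian evidence under the flat prior: theta | x ~ N(x, 1),
# so the posterior probability of H0 is P(theta <= 0 | x) = Phi(-x).
posterior_H0 = Phi(-x)

print(p_value, posterior_H0)  # the two coincide
```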


Technometrics | 1982

Multiparameter Hypothesis Testing and Acceptance Sampling.

Roger L. Berger

The quality of a product might be determined by several parameters, each of which must meet certain standards before the product is acceptable. In this article, a method of determining whether all the parameters meet their respective standards is proposed. The method consists of testing each parameter individually and deciding that the product is acceptable only if each parameter passes its test. This simple method has some optimal properties, including attaining exactly a prespecified consumer's risk and uniformly minimizing the producer's risk. These results are obtained from more general hypothesis-testing results concerning null hypotheses consisting of unions of sets.
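A sketch of the test-each-parameter procedure, under the simplifying assumptions of normal measurements with known standard deviations and one-sided z tests (the paper's results are more general); the sample values below are hypothetical:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def z_quantile(p, lo=-10.0, hi=10.0):
    """Standard normal quantile, via bisection on Phi."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def parameter_passes(xbar, standard, n, sigma, alpha):
    """One-sided z test: conclude this parameter exceeds its standard
    only when the evidence is significant at level alpha."""
    z = (xbar - standard) * sqrt(n) / sigma
    return z > z_quantile(1.0 - alpha)

# Accept the product only if EVERY parameter passes its own test;
# the overall consumer's risk is then controlled at alpha.
measurements = [
    # (sample mean, required standard, n, sigma) -- illustrative values
    (10.6, 10.0, 30, 1.0),
    (5.4, 5.0, 30, 1.0),
]
alpha = 0.05
accept = all(parameter_passes(*m, alpha) for m in measurements)
print(accept)
```

Note that each component test is run at the full level alpha, with no multiplicity correction; the union-of-sets structure of the null hypothesis is what makes this valid.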


Journal of the American Statistical Association | 1994

P values maximized over a confidence set for the nuisance parameter

Roger L. Berger; Dennis D. Boos

Abstract For testing problems of the form H0: v = v0 with unknown nuisance parameter θ, various methods are used to deal with θ. The simplest approach is exemplified by the t test, where the unknown variance is replaced by the sample variance and the t distribution accounts for estimation of the variance. In other problems, such as the 2 × 2 contingency table, one conditions on a sufficient statistic for θ and proceeds as in Fisher's exact test. Because neither of these standard methods is appropriate for all situations, this article suggests a new method for handling the unknown θ. This new method is a simple modification of the formal definition of a p value that involves taking a maximum, over the nuisance parameter space, of a p value obtained for the case when θ is known. The suggested modification is to restrict the maximization to a confidence set for the nuisance parameter. After giving a brief justification, we give various examples to show how this new method gives improved results for 2 × 2 tables…
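A rough sketch of the confidence-set p value for comparing two binomial proportions (one-sided, H1: p1 > p2). Several choices here are stand-ins rather than the paper's: a Hoeffding-style interval for the pooled proportion replaces the interval used in the paper, the test statistic is the raw difference of sample proportions, and the supremum is approximated by a grid search.

```python
from math import comb, sqrt, log

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def exact_p_at(theta, x1, n1, x2, n2):
    """P(T >= t_obs) when p1 = p2 = theta, with T = phat1 - phat2."""
    t_obs = x1 / n1 - x2 / n2
    total = 0.0
    for y1 in range(n1 + 1):
        for y2 in range(n2 + 1):
            if y1 / n1 - y2 / n2 >= t_obs - 1e-12:
                total += binom_pmf(y1, n1, theta) * binom_pmf(y2, n2, theta)
    return total

def ci_p_value(x1, n1, x2, n2, beta=0.001, grid=400):
    # 1 - beta confidence interval for the pooled success probability
    # (Hoeffding bound; a conservative stand-in).
    n = n1 + n2
    phat = (x1 + x2) / n
    half = sqrt(log(2.0 / beta) / (2.0 * n))
    lo, hi = max(0.0, phat - half), min(1.0, phat + half)
    # Maximize the known-theta p value over the confidence set only,
    # then add beta to retain validity.
    sup = max(exact_p_at(lo + (hi - lo) * i / grid, x1, n1, x2, n2)
              for i in range(grid + 1))
    return min(1.0, sup + beta)

print(ci_p_value(7, 10, 2, 10))
```

Restricting the maximization to the confidence set (plus the beta penalty) avoids the distant, irrelevant values of the nuisance parameter that can inflate the standard sup-based p value.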


Journal of the American Statistical Association | 1981

A Necessary and Sufficient Condition for Reaching a Consensus Using DeGroot's Method

Roger L. Berger

Abstract DeGroot (1974) proposed a model in which a group of k individuals might reach a consensus on a common subjective probability distribution for an unknown parameter. This paper presents a necessary and sufficient condition under which a consensus will be reached by using DeGroot's method. This work corrects an incorrect statement in the original paper about the conditions needed for a consensus to be reached. The condition for a consensus to be reached is straightforward to check and yields the value of the consensus, if one is reached.
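A small sketch of DeGroot's pooling iteration itself (not the paper's condition): each individual repeatedly replaces his opinion with a weighted average of everyone's opinions, using his row of a stochastic weight matrix P. When the iterates converge to a common value, that value is the consensus. The 2-by-2 matrix and initial opinions below are illustrative.

```python
def step(P, x):
    """One round of DeGroot pooling: individual i replaces his opinion
    with the average of all opinions, weighted by row i of P."""
    return [sum(P[i][j] * x[j] for j in range(len(x)))
            for i in range(len(x))]

def degroot(P, x, iters=500, tol=1e-12):
    """Iterate the pooling step; report whether the opinions merged."""
    for _ in range(iters):
        new = step(P, x)
        if max(abs(a - b) for a, b in zip(new, x)) < tol:
            x = new
            break
        x = new
    reached = (max(x) - min(x)) < 1e-9
    return reached, x[0]

P = [[0.5, 0.5],
     [0.25, 0.75]]           # row-stochastic weight matrix
reached, value = degroot(P, [1.0, 0.0])
print(reached, value)        # consensus at 1/3
```

For this P the consensus value equals the stationary distribution (1/3, 2/3) applied to the initial opinions, which is the sense in which the convergence condition "yields the value of the consensus."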


Journal of the American Statistical Association | 1999

Stepwise Confidence Intervals Without Multiplicity Adjustment for Dose-Response and Toxicity Studies

Jason C. Hsu; Roger L. Berger

Abstract Not all simultaneous inferences need multiplicity adjustment. If the sequence of individual inferences is predefined, and failure to achieve the desired inference at any step renders subsequent inferences unnecessary, then multiplicity adjustment is not needed. This can be justified using the closed testing principle to test appropriate hypotheses that are nested in sequence, starting with the most restrictive one. But which hypotheses are appropriate may not be obvious in some problems. We give a fundamentally different, confidence set–based justification by partitioning the parameter space naturally and using the principle that exactly one member of the partition contains the true parameter. In dose–response studies designed to show superiority of treatments over a placebo (negative control) or a drug known to be efficacious (active control), the confidence set approach generates methods with meaningful guarantees against incorrect decisions, whereas previous applications of the closed testing approach…
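The predefined-sequence idea can be sketched as a fixed-sequence procedure: hypotheses (e.g., doses, ordered highest first) are tested in a prespecified order, each at the full level alpha, and testing stops at the first non-rejection. The p values below are hypothetical.

```python
def fixed_sequence(p_values, alpha=0.05):
    """Test hypotheses in a prespecified order, each at full level alpha,
    with no multiplicity adjustment. Stop at the first non-rejection;
    later hypotheses are never tested."""
    rejected = []
    for i, p in enumerate(p_values):
        if p <= alpha:
            rejected.append(i)
        else:
            break  # failure here makes subsequent inferences unnecessary
    return rejected

# Doses ordered by the prespecified sequence; note 0.01 at the end is
# NOT tested because the sequence already stopped at 0.20.
print(fixed_sequence([0.001, 0.02, 0.20, 0.01]))  # [0, 1]
```

The point of the paper is the justification: the partitioning argument shows why no adjustment is needed, and extends to stepwise confidence intervals rather than bare accept/reject decisions.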


The American Statistician | 1996

More Powerful Tests from Confidence Interval p Values

Roger L. Berger

Abstract In this article the problem of comparing two independent binomial populations is considered. It is shown that the test based on the confidence interval p value of Berger and Boos often is uniformly more powerful than the standard unconditional test. This test also requires less computational time.


Journal of the American Statistical Association | 1989

Uniformly More Powerful Tests for Hypotheses concerning Linear Inequalities and Normal Means

Roger L. Berger

Abstract This article considers some hypothesis-testing problems regarding normal means. In these problems, the hypotheses are defined by linear inequalities on the means. We show that in certain problems the likelihood ratio test (LRT) is not very powerful. We describe a test that has the same size, α, as the LRT and is uniformly more powerful. The test is easily implemented, since its critical values are standard normal percentiles. The increase in power with the new test can be substantial. For example, the new test's power is 1/(2α) times the LRT's power (10 times for α = .05) at some parameter points in a simple example. Specifically, let X = (X1, …, Xp)′ (p ≥ 2) be a multivariate normal random vector with unknown mean μ = (μ1, …, μp)′ and known, nonsingular covariance matrix Σ. We consider testing the null hypothesis H0: b′iμ ≤ 0 for some i = 1, …, k versus the alternative hypothesis H1: b′iμ > 0 for all i = 1, …, k. Here b1, …, bk (k ≥ 2) are specified p-dimensional vectors…


Biometrics | 1988

Tests and Confidence Sets for Comparing Two Mean Residual Life Functions

Roger L. Berger; Dennis D. Boos; Frank M. Guess

The mean residual life function of a population gives an intuitive and interesting perspective on the aging process. Here we present new nonparametric methods for comparing mean residual life functions based on two independent samples. These methods have the flexibility to handle crossings of the functions and result in a new type of confidence set. We also discuss similar methods for comparison of median residual life functions.


Advances in statistical decision theory and applications | 1997

Likelihood ratio tests and intersection-union tests

Roger L. Berger

The likelihood ratio test (LRT) method is a commonly used method of hypothesis test construction. The intersection-union test (IUT) method is a less commonly used method. We will explore some relationships between these two methods. We show that, under some conditions, both methods yield the same test. But we also describe conditions under which the size-α IUT is uniformly more powerful than the size-α LRT. We illustrate these relationships by considering the problem of testing H0: min{|μ1|, |μ2|} = 0 versus Ha: min{|μ1|, |μ2|} > 0, where μ1 and μ2 are means of two normal populations.
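A sketch of the IUT construction for this example, under the simplifying assumption of known variances (so z statistics can be used): H1 is the intersection "μ1 ≠ 0 and μ2 ≠ 0", so the IUT rejects H0 only when every component test rejects. The critical value 1.96 and the sample values are illustrative.

```python
from math import sqrt

def iut_min_test(xbar1, xbar2, n1, n2, sigma1, sigma2, crit):
    """Intersection-union test of H0: min(|mu1|, |mu2|) = 0 versus
    H1: both means nonzero. Reject only if BOTH two-sided component
    z tests reject; the IUT's overall size is then at most the
    common component level."""
    z1 = abs(xbar1) * sqrt(n1) / sigma1
    z2 = abs(xbar2) * sqrt(n2) / sigma2
    return (z1 > crit) and (z2 > crit)

# Both component statistics are large (z1 = 4.0, z2 = 4.5), so reject.
print(iut_min_test(0.8, 0.9, 25, 25, 1.0, 1.0, 1.96))  # True
```

Because H0 is a union of the two component nulls, no multiplicity adjustment of the component level is required for the IUT to have size at most α.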


Statistical Methods in Medical Research | 2003

Exact unconditional tests for a 2 × 2 matched-pairs design

Roger L. Berger; K. Sidik

The problem of comparing two proportions in a 2 × 2 matched-pairs design with binary responses is considered. We consider one-sided null and alternative hypotheses. The problem has two nuisance parameters. Using the monotonicity of the multinomial distribution, four exact unconditional tests based on p-values are proposed by reducing the dimension of the nuisance parameter space from two to one in computation. The size and power of the four exact tests and two other tests, the exact conditional binomial test and the asymptotic McNemar’s test, are considered. It is shown that the tests based on the confidence interval p-value are more powerful than the tests based on the standard p-value. In addition, it is found that the exact conditional binomial test is conservative and not powerful for testing the hypothesis. Moreover, the asymptotic McNemar’s test is shown to have incorrect size; that is, its size is larger than the nominal level of the test. Overall, the test based on McNemar’s statistic and the confidence interval p-value is found to be the most powerful test with the correct size among the tests in this comparison.
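Two of the comparison tests mentioned in the abstract can be sketched directly; the exact unconditional tests themselves involve maximizing over the nuisance parameters and are omitted here. The exact conditional binomial test conditions on the number of discordant pairs n = b + c, under which b ~ Binomial(n, 1/2) under H0; the asymptotic McNemar statistic is shown without continuity correction. The counts b = 8, c = 2 are illustrative.

```python
from math import comb

def mcnemar_exact_conditional(b, c):
    """Exact conditional binomial test for a 2 x 2 matched-pairs design:
    given n = b + c discordant pairs, b ~ Binomial(n, 1/2) under H0.
    Returns the one-sided p value for the alternative that b-type
    discordance is the more likely kind."""
    n = b + c
    return sum(comb(n, k) for k in range(b, n + 1)) / 2**n

def mcnemar_asymptotic_stat(b, c):
    """Classic McNemar chi-square statistic (no continuity correction),
    referred chi-square with 1 degree of freedom asymptotically."""
    return (b - c) ** 2 / (b + c)

print(mcnemar_exact_conditional(8, 2))  # 0.0546875
print(mcnemar_asymptotic_stat(8, 2))    # 3.6
```

The paper's point is that the conditional test is conservative and the asymptotic test can exceed its nominal size, which is why the exact unconditional tests with confidence interval p values are recommended.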

Collaboration


Dive into Roger L. Berger's collaboration.

Top Co-Authors

Dennis D. Boos
North Carolina State University

Frank Proschan
Florida State University

Dennis F. Sinclair
Commonwealth Scientific and Industrial Research Organisation

Basil W. Coutant
Battelle Memorial Institute

Clifford C. Clogg
Pennsylvania State University

David Draper
University of California