Publication


Featured research published by Carl N. Morris.


Journal of the American Statistical Association | 1983

Parametric Empirical Bayes Inference: Theory and Applications

Carl N. Morris

Abstract This article reviews the state of multiparameter shrinkage estimators with emphasis on the empirical Bayes viewpoint, particularly in the case of parametric prior distributions. Some successful applications of major importance are considered. Recent results concerning estimates of error and confidence intervals are described and illustrated with data.
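The normal-means shrinkage reviewed here can be illustrated with a textbook parametric empirical Bayes rule: estimate the prior mean and the shrinkage factor from the data, then pull each observation toward the grand mean. This is a simplified sketch, not Morris's exact procedure; all names and data below are hypothetical.

```python
import numpy as np

def peb_normal_means(y, V):
    """Parametric empirical Bayes estimates for y_i ~ N(theta_i, V),
    theta_i ~ N(mu, A): estimate (mu, shrinkage) from the data, then
    shrink each y_i toward the grand mean (textbook sketch)."""
    k = len(y)
    mu_hat = y.mean()
    S = np.sum((y - mu_hat) ** 2)
    B_hat = min(1.0, (k - 3) * V / S)      # estimated shrinkage factor V / (V + A)
    return mu_hat + (1.0 - B_hat) * (y - mu_hat)

rng = np.random.default_rng(3)
theta = rng.normal(0.0, 0.5, size=12)      # true means drawn from the prior
y = rng.normal(theta, 1.0)                 # one observation per mean, V = 1
est = peb_normal_means(y, V=1.0)           # each estimate lies between y_i and the grand mean
```

Because the estimated shrinkage factor lies in [0, 1], every component is moved toward, never past, the grand mean.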


Journal of Business & Economic Statistics | 1983

A Comparison of Alternative Models for the Demand for Medical Care

Naihua Duan; Willard G. Manning; Carl N. Morris; Joseph P. Newhouse

We have tested alternative models of the demand for medical care using experimental data. The estimated response of demand to insurance plan is sensitive to the model used. We therefore use a split-sample analysis and find that a model that more closely approximates distributional assumptions and uses a nonparametric retransformation factor performs better in terms of mean squared forecast error. Simpler models are inferior either because they are not robust to outliers (e.g., ANOVA, ANOCOVA), or because they are inconsistent when strong distributional assumptions are violated (e.g., a two-parameter Box-Cox transformation).
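The nonparametric retransformation factor referred to in this abstract is widely known as Duan's smearing estimator: fit OLS on the log scale, then multiply the back-transformed prediction by the mean of the exponentiated residuals. A minimal Python sketch on synthetic data (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def smearing_prediction(x, y_log, x_new):
    """Predict y on the original scale from a log-scale OLS fit, using
    Duan's nonparametric smearing factor (mean of exp(residuals))."""
    X = np.column_stack([np.ones(len(x)), x])            # add intercept
    beta, *_ = np.linalg.lstsq(X, y_log, rcond=None)     # OLS on log(y)
    resid = y_log - X @ beta
    smear = np.exp(resid).mean()                         # smearing factor
    X_new = np.column_stack([np.ones(len(x_new)), x_new])
    return np.exp(X_new @ beta) * smear                  # retransformed prediction

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y_log = 1.0 + 0.5 * x + rng.normal(scale=0.8, size=200)  # synthetic log-scale outcome
pred = smearing_prediction(x, y_log, x[:5])              # positive original-scale predictions
```

Unlike the naive back-transform exp(Xb), the smearing factor corrects for the fact that E[exp(e)] > exp(E[e]) without assuming normal errors.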


Journal of the American Statistical Association | 1973

Stein's Estimation Rule and Its Competitors—An Empirical Bayes Approach

Bradley Efron; Carl N. Morris

Abstract Stein's estimator for k normal means is known to dominate the MLE if k ≥ 3. In this article we ask if Stein's estimator is any good in its own right. Our answer is yes: the positive part version of Stein's estimator is one member of a class of “good” rules that have Bayesian properties and also dominate the MLE. Other members of this class are also useful in various situations. Our approach is by means of empirical Bayes ideas. In the later sections we discuss rules for more complicated estimation problems, and conclude with results from empirical linear Bayes rules in non-normal cases.
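The positive-part version of Stein's estimator mentioned in the abstract can be written down directly. A small Python sketch under the standard setup of k normal means with known common variance (data and names hypothetical):

```python
import numpy as np

def james_stein_positive_part(z, sigma2=1.0):
    """Positive-part James-Stein estimator for k >= 3 normal means with
    known common variance sigma2, shrinking toward the origin."""
    k = len(z)
    shrink = max(0.0, 1.0 - (k - 2) * sigma2 / np.sum(z ** 2))  # clipped at zero
    return shrink * z

rng = np.random.default_rng(1)
theta = np.zeros(10)                     # true means (all zero in this example)
z = rng.normal(theta, 1.0)               # one observation per mean
est = james_stein_positive_part(z)

# total squared error of the shrunken estimate vs. the raw MLE
mle_loss = np.sum((z - theta) ** 2)
js_loss = np.sum((est - theta) ** 2)
```

With all true means at zero, shrinkage toward the origin can only reduce the total squared error relative to the MLE; the risk savings grow with k.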


Journal of the American Statistical Association | 1975

Data Analysis Using Stein's Estimator and Its Generalizations

Bradley Efron; Carl N. Morris

Abstract In 1961, James and Stein exhibited an estimator of the mean of a multivariate normal distribution having uniformly lower mean squared error than the sample mean. This estimator is reviewed briefly in an empirical Bayes context. Stein's rule and its generalizations are then applied to predict baseball averages, to estimate toxoplasmosis prevalence rates, and to estimate the exact size of Pearson's chi-square test with results from a computer simulation. In each of these examples, the mean square error of these rules is less than half that of the sample mean.


Journal of Business & Economic Statistics | 1984

Choosing Between the Sample-Selection Model and the Multi-Part Model

Naihua Duan; Willard G. Manning; Carl N. Morris; Joseph P. Newhouse

Hay and Olsen (1984) incorrectly argue that a multi-part model, the two-part model used in Duan et al. (1982, 1983), is nested within the sample-selection model. Their proof relies on an unstated restrictive assumption that cannot be satisfied. We provide a counterexample to show that the propensity to use medical care and the level of expense can be positively associated in the two-part model, contrary to their assertion. The conditional specification in the multi-part model is preferable to the unconditional specification in the selection model for modeling actual (vs. potential) outcomes. The selection model also has poor statistical and numerical properties and relies on untestable assumptions. Empirically, the multi-part estimators perform as well as or better than the sample-selection estimator for the data set analyzed in Duan et al. (1982, 1983).


Journal of the American Statistical Association | 1972

Limiting the Risk of Bayes and Empirical Bayes Estimators—Part II: The Empirical Bayes Case

Bradley Efron; Carl N. Morris

Abstract We discuss compromises between Stein's estimator and the MLE which limit the risk to individual components of the estimation problem while sacrificing only a small fraction of the savings in total squared error loss given by Stein's rule. The compromise estimators “limit translation” away from the MLE. The calculations are pursued in an empirical Bayesian manner by considering their performance against an entire family of prior distributions on the unknown parameters.
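The "limit translation" idea can be sketched as capping how far shrinkage may move any single component away from its MLE. This is an illustrative reading of the compromise, not the paper's exact rule; the tuning constant d and the data are hypothetical.

```python
import numpy as np

def limited_translation(z, d=1.0, sigma2=1.0):
    """Apply Stein-type shrinkage, but move no component more than
    d standard deviations away from its MLE z_i (sketch of the
    'limited translation' compromise)."""
    k = len(z)
    shrink = max(0.0, 1.0 - (k - 2) * sigma2 / np.sum(z ** 2))
    stein = shrink * z
    cap = d * np.sqrt(sigma2)
    # clip each component's displacement from the MLE to +/- cap
    return z + np.clip(stein - z, -cap, cap)

rng = np.random.default_rng(2)
z = 3.0 * rng.normal(size=8)          # spread-out observations
est = limited_translation(z, d=1.0)   # no component moves more than 1 sd
```

The cap protects individual components (an outlying hospital, player, etc.) from being over-shrunk, at the cost of a small fraction of the total risk savings.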


Annals of Internal Medicine | 1997

Improving the Statistical Approach to Health Care Provider Profiling

Cindy L. Christiansen; Carl N. Morris

For reports on the performance of health care providers to be effective, profiling must be done using the best statistical methods. Commonly used profiling methods often contain some of the following deficiencies. They ignore important relevant information. They use statistical standards where medical standards would serve better. They use the probability that the observed outcome is extreme, assuming that a hospital's true performance is acceptable, to identify providers with extreme rates; this is not the probability that the medical unit's true mortality rate exceeds a given standard. (The true mortality rate is the rate that would have occurred if the hospital had served a very large number of patients.) The 1993 report on coronary artery bypass graft surgery from the New York State Department of Health [1] listed mortality data (deaths before leaving the hospital) and profile statistics for 31 hospitals. These profiles aimed to identify hospitals that had excessively high or low mortality rates associated with coronary artery bypass graft surgery. In this article, we review 3 of the 31 hospitals to see how profile results improve with the use of more information. Two of the hospitals were chosen for their high observed mortality rates (1 had a mortality rate that was substantially higher than the 1992 statewide rate of 2.78%), and 1 was chosen for its low mortality rate. (It is important to keep in mind that the performance of the hospitals may have changed since 1992.) Hierarchical models use information from the available data obtained from all health care providers being examined. These models are so named because they apply to situations with two or more levels of random variation. In the mortality rate example to follow, the first level of the hierarchy specifies a distribution for the random number of observed deaths at a given hospital and the second level specifies possible distributions for the true mortality rates in several hospitals.
To use terminology from analysis of variance, level 1 in the hierarchical model concerns the variation of rates within providers and level 2 concerns the variation between true rates of the hospitals. We urge the use of medical standards that specify the largest or smallest medically acceptable true mortality rate in the setting being profiled. How standards are set depends on their purpose; standards meant to encourage quality improvement, for example, may differ from standards meant to distribute pay incentives. Major improvements to profiles will result from the use of medically appropriate performance standards. Case-mix adjustments are made in almost all profile analyses to account for the differences in provider performances attributable solely to differences in the populations served. Hierarchical models accommodate these crucial adjustments. In addition, standard profiling procedures typically ignore units with small caseloads, such as those with fewer than 50 patients. This practice gives little information on the performance of low-volume providers and can provide unfair gaming opportunities (for example, a hospital with a very high mortality rate for 49 patients might refuse to admit further patients in the given category). Hierarchical models require no minimum sample size for a particular health care provider, provided that the ensemble of all providers analyzed has adequate data. Ensemble data are used to correct for regression-to-the-mean bias. In this paper, we review standard statistical profiling methods, showing how successively better results are obtained as more information is included. We recommend the use of hierarchical models to extract ensemble information and advocate the use of more directly relevant standards. 
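The two levels of variation described above can be mimicked in a few lines of simulation: between-hospital variation in the true rates (level 2) and within-hospital Poisson variation in the observed deaths (level 1). All numbers below are hypothetical except the 2.78% statewide rate taken from the text.

```python
import numpy as np

rng = np.random.default_rng(4)
n_hospitals = 31

# Level 2: each hospital's true mortality rate is drawn from a
# between-hospital gamma distribution with mean 0.0278 (2.78%).
true_rates = rng.gamma(shape=4.0, scale=0.0278 / 4.0, size=n_hospitals)

# Level 1: observed deaths vary around the true rate given the caseload.
caseloads = rng.integers(50, 500, size=n_hospitals)
deaths = rng.poisson(true_rates * caseloads)
observed_rates = deaths / caseloads    # raw rates, noisier for small caseloads
```

In such a simulation, the raw rates for low-volume hospitals scatter far more widely around their true rates than those of high-volume hospitals, which is exactly the regression-to-the-mean problem hierarchical models address.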
The advantages of a hierarchical approach are as follows: 1) The probabilities of performance standards are calculated, 2) comparisons of units are based on medically relevant standards, 3) regression-to-the-mean bias is removed, and 4) providers with small sample sizes remain in the analysis. The benefits of hierarchical modeling apply not only to the mortality data considered here but also to profiles of patient satisfaction, referral rates, and other outcomes.

Profiling Data

We estimated the true mortality rates associated with coronary artery bypass graft surgery for 31 hospitals in New York State [1] by using data on the number of patients, the number of deaths, and the case-mix difficulty of the patients treated at each hospital. For this analysis, we focused on data for two hospitals (H1 and H2) that had high observed mortality rates. The simplest profile analysis would compare the number of deaths in each hospital. Because 22 deaths occurred at H2, it seems to be a much worse hospital than H1, at which 3 deaths occurred. However, the number of patients served should also be considered: that is, mortality rates, not raw counts, should be analyzed. The disparity in the number of deaths can be completely explained by the caseload of 484 coronary artery bypass graft procedures at H2 compared with 67 at H1. The mortality rates of 4.48% (3 of 67 patients) at H1 and 4.55% (22 of 484 patients) at H2 are almost indistinguishable. This adjustment makes a fairer comparison by accounting for caseload while retaining the basic concept of comparing the number of deaths. The improvements stem from obtaining and using appropriate additional information. An even better approach accounts for case-mix differences. Data from the New York State study [1] show that the patients who had coronary artery bypass graft surgery at H2 were less healthy than those at H1.
On average, patients at H1 had 51.1% of the risk for death of all patients who had coronary artery bypass graft surgery (expected mortality rate, 1.42% for the case mix of patients at H1 compared with 2.78% statewide; 1.42/2.78 = 0.511). When we adjusted for this, the 3 deaths at H1 equivalently resulted from 34.2 (0.511 × 67) procedures done on patients with an average case mix. The risk-adjusted mortality rate is therefore 8.77% (3/34.2) at H1. The risk-adjusted mortality rate at H2 was 5.77%. (With many deaths and an exceptionally healthy patient case mix, it is possible for the relative-risk case-mix adjustment method to produce adjusted rates exceeding 1.00.) Risk adjustments contribute vitally to reducing unfair profile evaluations. The need for risk adjustment has led to vigorous research on ways to account for case-mix differences [2-8]. For our purposes, we used the case-mix data and the risk adjustment methods as they were used in the report from New York State [1].

Models and Tests of Statistical Significance

Developing good profiling procedures requires specifying probability distributions for the observed outcomes and choosing the standards of acceptable care that define the hypotheses to be tested. Standards that are based on input from medical professionals and from users of the profiles will be the most useful and the most meaningful. When statistical convention alone determines these choices, they are likely to produce inaccurate conclusions and lead to poor decisions.

Probability Distributions

Once the hospital rates have been adjusted for case mix, a probability distribution is needed to perform a statistical test. The observed count is governed by the hospital's true mortality rate; here, it is the rate that could be observed only if the hospital had treated a very large number of patients with the average case mix. The true mortality rates are unknown and must be estimated from the data.
The Poisson distribution is appropriate for these data because the probability of death after coronary artery bypass graft surgery was small (2.78% statewide). An alternative, the binomial distribution, is well approximated by the Poisson distribution in this case. If the expected number of deaths is very small or if some individual probabilities are large, then normal, Poisson, and binomial assumptions may not be valid and an alternate calculation may be necessary [9].

Commonly Used Decision Criteria

Profile analyses often require tests of the null hypothesis that a provider's true mortality rate equals the average rate for all providers. The hypothesis is tested at a specified significance level. Following this convention, the New York State report [1] used the statewide mortality rate of 2.78% as the standard and set the significance level at 0.025. This hypothesis is not very useful: taken literally, it means that if the true hospital mortality rates differ even by tiny amounts (which one would expect), many of the hospitals would have true rates that exceed the population mean.

P Values

When a distribution and a standard are specified, the P value can be calculated [10, 11]. A normal approximation to the Poisson distribution will work poorly for the profiles of New York State hospitals in which coronary artery bypass graft surgery was performed, because fewer than 10 deaths were expected at many of the hospitals. The P value for H1 is the probability of observing 3 or more deaths (because 3 deaths occurred at H1), assuming that H1 performed with a true mortality rate of 2.78% for patients with an average case mix. Had H1 performed at this average rate, 0.95 deaths would have been expected. A normal approximation produces a P value of 0.018. Small P values, such as this, identify high true mortality rates; H1 would therefore be identified by this approximate calculation.
However, the exact P value based on the Poisson distribution is 0.072 and is too large to identify H1 as a poor performer. Profile procedures that use normal approximations result in incorrect profile estimates if the approximation is inaccurate. Errors such as this are unnecessary; many statistical computer packages make it easy to calculate exact P values for the Poisson and other common distributions.

Hierarchical Bayesian Models for Profile Analyses

A P value computed to test the hypothesis that a hospital
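The figures quoted in this passage can be checked with the Python standard library alone: the case-mix arithmetic for H1, the exact Poisson tail probability (which comes out near the reported 0.072), and the normal approximation (near 0.018, without continuity correction).

```python
import math

# Case-mix (risk) adjustment for H1, figures taken from the text.
statewide, expected_h1 = 2.78, 1.42        # mortality rates, percent
deaths_h1, cases_h1 = 3, 67

relative_risk = expected_h1 / statewide             # about 0.511
effective_cases = relative_risk * cases_h1          # about 34.2 average-case-mix procedures
adjusted_rate = 100 * deaths_h1 / effective_cases   # about 8.77 percent

# Exact Poisson tail: P(X >= 3) when 0.95 deaths are expected.
lam = 0.95
p_exact = 1.0 - sum(math.exp(-lam) * lam ** k / math.factorial(k)
                    for k in range(deaths_h1))

# Normal approximation to the same tail, no continuity correction.
z = (deaths_h1 - lam) / math.sqrt(lam)
p_normal = 0.5 * math.erfc(z / math.sqrt(2))
```

The fourfold gap between the exact and approximate P values is exactly the hazard the passage warns about when the expected count is far below 10.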


Journal of the American Statistical Association | 1997

Hierarchical Poisson Regression Modeling

Cindy L. Christiansen; Carl N. Morris

Abstract The Poisson model and analyses here feature nonexchangeable gamma distributions (although exchangeable following a scale transformation) for individual parameters, with standard deviations proportional to means. A relatively uninformative prior distribution for the shrinkage values eliminates the ill behavior of maximum likelihood estimators of the variance components. When tested in simulation studies, the resulting procedure provides better coverage probabilities and smaller risk than several other published rules, and thus works well from Bayesian and frequentist perspectives alike. The computations provide fast, accurate density approximations to individual parameters and to structural regression coefficients. The computer program is publicly available through Statlib.
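The gamma-prior shrinkage underlying hierarchical Poisson models of this kind rests on Poisson-gamma conjugacy. A minimal sketch (not the paper's regression machinery; all numbers hypothetical): if y ~ Poisson(lambda * e) with exposure e and lambda ~ Gamma(a, b), the posterior is Gamma(a + y, b + e), so the posterior mean shrinks the raw rate y/e toward the prior mean a/b.

```python
def posterior_mean(y, e, a, b):
    """Posterior mean of a Poisson rate lambda under a Gamma(a, b) prior,
    given y events in exposure e: Gamma(a + y, b + e) has mean below."""
    return (a + y) / (b + e)

# A low-exposure unit is shrunk strongly toward the prior mean;
# a high-exposure unit with the same raw rate is barely moved.
prior_mean = 2.0 / 1.0                                   # a / b
small = posterior_mean(y=6, e=1.0, a=2.0, b=1.0)         # raw rate 6.0, heavy shrinkage
large = posterior_mean(y=600, e=100.0, a=2.0, b=1.0)     # raw rate 6.0, light shrinkage
```

Both posterior means fall between the prior mean and the raw rate, with the amount of shrinkage governed by the exposure, which is how such models keep small-caseload units in the analysis without overreacting to their noisy raw rates.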


Journal of the American Statistical Association | 1971

Limiting the Risk of Bayes and Empirical Bayes Estimators—Part I: The Bayes Case

Bradley Efron; Carl N. Morris

Abstract The first part of this article considers the Bayesian problem of estimating the mean, θ, of a normal distribution when the mean itself has a normal prior. The usual Bayes estimator for this situation has high risk if θ is far from the mean of the prior distribution. We suggest rules which do not have this bad property and still perform well against the normal prior. These rules are compromises between the Bayes rule and the MLE. Similar rules are suggested for the empirical Bayes situation where the mean and variance of the prior are unknown but can be estimated from the data provided by several simultaneous estimation problems. In this case the suggested rules compromise between the James-Stein estimator of a mean vector and the MLE.


Scientific Inference, Data Analysis, and Robustness: Proceedings of a Conference Conducted by the Mathematics Research Center, the University of Wisconsin–Madison, November 4–6, 1981 | 1983

Parametric Empirical Bayes Confidence Intervals

Carl N. Morris

Publisher Summary This chapter outlines parametric empirical Bayes confidence intervals. Empirical Bayes modeling assumes that a distribution π for the parameters θ = (θ_1, …, θ_k) exists, with π taken from a known class Π of possible parameter distributions. Here Π is the class of independent N(μ, A) distributions on R^k. The problem is called a parametric empirical Bayes problem because π ∈ Π is determined by the parameters (μ, A) and so forms a parametric family of distributions. A simulation presented in the chapter was used to determine that the intervals ±s_i and ±1.96 s_i contain the true values θ_i in at least 68 percent and 95 percent of the cases. Empirical Bayes estimators, or Stein's estimator, can lead to misestimation of components that the statistician or his clients care about when exchangeability in the prior distribution is implausible. The term empirical Bayes, usually applied to nonparametric empirical Bayes problems, actually fits the parametric empirical Bayes case too. Empirical Bayes methods in general, and parametric empirical Bayes methods in particular, provide a way to utilize this additional information by obtaining more precise estimates and estimating their precision.
