Richard W. Andrews
University of Michigan
Publication
Featured research published by Richard W. Andrews.
Communications of the ACM | 1981
Thomas J. Schriber; Richard W. Andrews
An important part of a simulation study is the analysis of simulation output for the purpose of making inferences about properties of the process being simulated. Fundamental to much of this analysis is the building of a confidence interval on the mean value of a key output variable of interest. This procedure is complicated by the possible presence of serial correlation in the output, which makes it difficult to estimate the variability in the estimator of the process mean. Although some research has been devoted to estimating this variability, additional research must be directed toward developing reliable confidence interval procedures and making the practitioner's selection of an appropriate procedure relatively straightforward. This paper provides a conceptual framework for research in the analysis of simulation output, focusing in particular on confidence interval methodology. The characteristics of confidence interval procedures, theoretical output processes, and processes of practical interest to simulation practitioners are discussed, and their relationships with one another spelled out. Standard measures of effectiveness applicable to proposed confidence interval procedures are introduced, and the application of these measures to two previously suggested confidence interval procedures is illustrated.
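The abstract does not name a specific procedure, but the batch-means method is a standard way to build a confidence interval on the mean of serially correlated simulation output. The sketch below is an illustration of that general idea, not a reconstruction of the procedures evaluated in the paper; the AR(1) test series and all parameter choices are assumptions.

```python
import math
import random
import statistics

def batch_means_ci(output, n_batches=20, z=1.96):
    """Approximate confidence interval for the mean of a (possibly
    autocorrelated) simulation output series via the batch-means method.

    The series is split into n_batches contiguous batches; the batch
    means are treated as approximately independent, so a normal-theory
    interval is applied to them.
    """
    batch_size = len(output) // n_batches
    means = [statistics.fmean(output[i * batch_size:(i + 1) * batch_size])
             for i in range(n_batches)]
    center = statistics.fmean(means)
    half_width = z * statistics.stdev(means) / math.sqrt(n_batches)
    return center - half_width, center + half_width

# Example: an AR(1) process with strong positive serial correlation,
# the kind of output for which naive i.i.d. intervals are too narrow.
random.seed(0)
x, series = 0.0, []
for _ in range(10_000):
    x = 0.8 * x + random.gauss(0, 1)
    series.append(x)

lo, hi = batch_means_ci(series)
```

Because adjacent observations are positively correlated, the batch means vary far more than an i.i.d. formula would predict, which is exactly the variability this method is designed to capture.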
Quality Engineering | 2002
Karl D. Majeske; Richard W. Andrews
Manufacturers and suppliers use quality measures calculated from dimensional data to make informed decisions regarding measurement systems and product quality. Many manufacturers and suppliers use the precision-to-tolerance ratio—scaling the standard deviation of gage error (σg) by the design tolerance—to approve a measurement system. Manufacturers and suppliers also use one or more measures of process capability, such as C p, which scales the tolerance by the standard deviation of the product (σp), to approve a manufacturing process. A measure used to assess the ability of two parties to communicate via dimensional data is the correlation in repeat measurements, which we derive as a function of σg and σp. By plotting the precision-to-tolerance ratio and the correlation in repeat measurements on the σg and σp axes, acceptable and nonacceptable regions for measurement systems are defined. When we add C p (a measure of process capability) to the mix, a relationship between the three measures suggests a method for determining an acceptable level for the correlation criterion and defines additional regions. This approach of plotting the quality measurement criteria on the σg and σp axes precisely defines the quality situation and lends itself to improvement suggestions.
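Under the usual k-sigma conventions, the three measures in the abstract can be written directly in terms of σg and σp. The sketch below uses the common 6σ multiplier and a standard variance-components form for the repeat-measurement correlation; the paper's exact constants and derivation may differ, so treat this as an illustration of the relationship rather than the authors' formulas.

```python
def quality_measures(sigma_g, sigma_p, tolerance, k=6.0):
    """Common gauge/process quality measures.

    sigma_g   -- standard deviation of gage (measurement) error
    sigma_p   -- standard deviation of the product characteristic
    tolerance -- design tolerance width (USL - LSL)
    k         -- sigma multiplier (6.0 is a common convention)
    """
    pt_ratio = k * sigma_g / tolerance        # precision-to-tolerance ratio
    cp = tolerance / (k * sigma_p)            # process capability C_p
    # Correlation between two repeat measurements of the same part:
    # shared variance sigma_p^2 over total variance sigma_p^2 + sigma_g^2.
    rho = sigma_p**2 / (sigma_p**2 + sigma_g**2)
    return pt_ratio, cp, rho

pt, cp, rho = quality_measures(sigma_g=0.01, sigma_p=0.05, tolerance=0.30)
```

Note that under the same k, pt * cp = sigma_g / sigma_p, so rho = 1 / (1 + (pt * cp)**2); this kind of identity is one way a relationship between the three measures can pin down an acceptable level for the correlation criterion.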
Journal of Business & Economic Statistics | 1999
Marjorie A. Rosenberg; Richard W. Andrews; Peter Lenk
A nonacceptable claim (NAC) is an insurance claim for an unnecessary hospital stay. This study establishes a statistical model that predicts the NAC rate. The model supplements current insurer programs that rely on detailed audits of patient medical records. Hospital discharge claim records are used as inputs in the statistical model to predict retrospectively the probability that a hospital admission is nonacceptable. A full Bayesian hierarchical logistic regression model is used with regression coefficients that are random across the primary diagnosis codes. The model provides better fits and predictions than standard methods that pool across primary diagnosis codes.
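The paper's advantage over pooled standard methods comes from letting regression coefficients vary by primary diagnosis code while still borrowing strength across codes. The sketch below is NOT the paper's full Bayesian hierarchical logistic regression; it is a simple beta-binomial shrinkage example, on hypothetical counts, that illustrates the partial-pooling idea behind diagnosis-specific random coefficients.

```python
# Hypothetical data: primary diagnosis code -> (NAC claims, total claims).
counts = {
    "code_A": (2, 10),
    "code_B": (30, 300),
    "code_C": (0, 5),
}

# Overall NAC rate, used to centre a Beta(a, b) prior; prior_strength
# (an assumed tuning value) controls how hard small codes are shrunk.
total_nac = sum(n for n, _ in counts.values())
total = sum(m for _, m in counts.values())
overall = total_nac / total
prior_strength = 20.0
a, b = overall * prior_strength, (1 - overall) * prior_strength

# Posterior mean NAC rate per code: shrunk toward the overall rate,
# more strongly for codes with few claims.
shrunk = {code: (n + a) / (m + a + b) for code, (n, m) in counts.items()}
```

A code with only 5 claims and a raw rate of 0 gets pulled toward the overall rate, while a code with 300 claims barely moves; this is the stabilizing effect that makes hierarchical fits and predictions beat complete pooling across diagnosis codes.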
Archive | 1993
Richard W. Andrews; James O. Berger; Murray H. Smith
There is currently considerable Congressional activity seeking to mandate drastic increases in the fuel efficiency of automobiles. A key question is: how much of an increase in fuel economy is possible through implementation of existing technology?
Decision Support Systems | 2006
Joni L. Jones; Richard W. Andrews
Considerable research discusses the advantages and disadvantages of combinatorial auctions. This study addresses a disadvantage: the loss of price discovery for the individual items sold as bundles. Prior studies confirm that there may not be a unique unit-level equilibrium price. We claim that a distribution of prices satisfies a given allocation and describe a technique to determine these distributions. Gibbs sampling allows us to discover characteristics of combinatorial auctions based on the allocated bids. We extract distributions of the market-influenced unit-level price, bidder profit, and reservation discount, and find patterns that depict synergies between products. The posterior distribution provides insights useful to managerial decision making.
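The abstract's technique rests on Gibbs sampling: drawing each unknown in turn from its full conditional distribution given the others. The auction-specific conditionals are not given in the abstract, so the sketch below shows only the mechanics on a textbook case, a standard bivariate normal, where each full conditional is itself normal; everything here is an illustration, not the paper's model.

```python
import random
import statistics

# Gibbs sampler for a standard bivariate normal with correlation rho.
# Each full conditional is normal: x | y ~ N(rho*y, 1 - rho^2),
# and symmetrically for y | x.
random.seed(1)
rho = 0.7
x = y = 0.0
draws_x = []
for i in range(6000):
    x = random.gauss(rho * y, (1 - rho**2) ** 0.5)
    y = random.gauss(rho * x, (1 - rho**2) ** 0.5)
    if i >= 1000:            # discard burn-in draws
        draws_x.append(x)

# Summaries of the marginal posterior recovered from the chain.
posterior_mean = statistics.fmean(draws_x)
posterior_sd = statistics.stdev(draws_x)
```

In the auction setting, the same loop structure would cycle through unit-level prices, bidder profits, and reservation discounts, yielding the posterior distributions the paper examines rather than a single point price.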
Winter Simulation Conference | 1986
Richard W. Andrews; William C. Birdsall; Frederick J. Gentner; W. Allen Spivey
Microeconomics refers to the economics of decision-making units, such as an individual consumer, a household, or a firm. In this paper we are concerned with microeconomic simulation models. Such models have been used for the purpose of analyzing the impact of various policies, such as tax and welfare reform, upon the distribution of income of households. For a comprehensive exposition of microeconomic simulation models see Orcutt, Caldwell and Wertheimer (1976). A microeconomic simulation model often employs Monte Carlo methods to alter the time-varying characteristics of a population; see Orcutt and Smith (1979). The U.S. government uses such a model to simulate a data base of individuals, their earnings histories and demographic characteristics in order to plan and execute policy decisions dealing with social security taxes and benefits (United States Department of Health and Human Services 1985 and Congressional Budget Office 1986). This extended abstract reports on our ongoing research into the development of statistical procedures for validating such a microeconomic simulation model of the household sector. The output variable which we will analyze is earnings. Two approaches are being considered. One is based on sampling theory methods; the other is Bayesian. The procedures will be demonstrated by analyzing 1980 simulated earnings from version II of the MicroAnalytic Simulation System (MASS II); see Orcutt, Glazer, Jaramillo, and Nelson (1976). These data will be compared with survey sampling data from the Panel Study of Income Dynamics (Institute of Social Research 1984). The results of the sampling theory and Bayesian approaches will be compared. Recommendations will be made as to the preferred approaches under various circumstances. A microeconomic simulation model can be summarized as follows: A representative sample of the decision units of the population of interest is used as input.
The internal algorithm of the model generates periodic stochastic events that change the characteristics of those units. The relationships upon which the generation of these events is based are called the operating characteristics. Constraints and adjustments are used to ensure that the current simulated totals agree with known and projected aggregate national statistics. The output at the end of any period consists of the sample units with their revised characteristics. Each of these components will be briefly discussed with specific examples from the MASS II model. MASS II is a modular simulation model written in PL/I. For input, MASS II has often used the equal-probability sample of the household portion of the 1960 Census of Population (Orcutt and Smith, 1979). The sampling unit is a household, and all individuals in that household are part of the input data base. For each individual, family relationships are included along with approximately 100 variables, e.g., age, gender, earnings, education, and job status. The operating characteristics are grouped into modules which generate such events as births, deaths, marriages, inheritance, and labor force participation. The status of any variable for an individual or a family is updated annually. The operating characteristics of the model are adjusted in each simulation year to ensure that various totals, particularly national accounts totals, equal historical or projected national statistics. For example, the earnings module generates the earnings of each individual as a percent of labor's share of GNP. Dollar earnings are then generated by multiplying the individual's relative earnings by the wage and salary statistic from the national accounts, which is externally supplied. The output after any annual update consists of the individuals and the values of their corresponding variables.
For this research we are considering the following variables, defined for each individual:

X1t = annual earnings in year t
X2 = gender (0 = male, 1 = female)
X3t = education level at the end of year t
X4t = marital status at the end of year t
X5 = race (0 = white, 1 = nonwhite).

In order to execute an operational validation study of MASS II (or any other microeconomic simulation model), data are needed from the population of interest. The MASS II output will be compared to the Panel Study of Income Dynamics (PSID) data. The PSID is a longitudinal survey which was initiated in 1968 as a combination of two probability samples of families. Each year the source families, and the families formed by births, marriages, and divorces, are surveyed for demographic and economic information. It is not an equal-probability sample, and sampling theory analysis therefore requires the incorporation of weights and of the sample design into the variance estimates. The following are PSID variables, the first five of which conceptually correspond to the same output variables from MASS II:

Y1t = annual earnings in year t
Y2 = gender (0 = male, 1 = female)
Y3t = education level at the end of year t
Y4t = marital status at the end of year t
Y5 = race (0 = white, 1 = nonwhite)
Y6t = weight attached to this individual in year t
Y7 = design indicator giving stratum and cluster.

As noted earlier, the variable of interest for this research is earnings. For any year we wish to compare the distribution of X1 to the distribution of Y1. The comparison of these two distributions will be considered for the entire population and for subclasses of the population as partitioned by variables (X2, X3, X4, X5) and (Y2, Y3, Y4, Y5). As an example, consider the subclass of white males 35-50 years old in the year 1980. From the output of MASS II, out of 6878 individuals for year 1980 we have 541 who are in this subclass. For the PSID, it is 777 out of 1974.
With the sampling theory approach we want to test whether the distribution of earnings from which the 541 MASS II individuals were chosen is the same as the distribution of earnings from which the 777 PSID individuals were chosen. The sampling theory approach uses a test of homogeneity, based on the Wald statistic, as described in Shuster and Downing (1976). Three cases are considered. First, we use neither the weights of the PSID data nor the sample design in the calculation of the Wald statistic. The second case uses the weights but not the design, and the third case uses both weights and the design. The variance estimators which take account of the PSID design are given in Landis, Lepkowski, Eklund, and Stehouwer (1982). Policy implications relating to the use of the output from MASS II suggest categories for the earnings variable. These categories are used in executing tests of homogeneity on this variable. Using the same categories of the earnings variable and the same subclasses provided by (X2, X3, X4, X5) and (Y2, Y3, Y4, Y5), a Bayesian approach to model validation is being developed. A multinomial distribution is used for the likelihood. A Dirichlet prior distribution is assumed on the probabilities of each category. Using conjugate prior methods, the posterior distribution of the category probabilities is a Dirichlet distribution. We are investigating the comparison of both the posterior distribution of the parameters and the predictive distribution of the counts in each category. The predictive distributions will be found using the same sample size for both the PSID and the MASS II. Whether we work with the posterior distribution of the parameters or the predictive distributions of the counts, a method will be derived to compare the PSID distribution with the MASS II distribution. Bayesian conclusions can be made as to how well the MASS II and PSID compare.
These Bayesian conclusions can be directly compared with the sampling theory conclusions that were based on the tests of homogeneity. We will investigate why the conclusions from the two approaches agree or differ for various combinations of subclasses. This investigation will give us insight into which validation method to recommend. The complete report will be available at the Winter Simulation Conference, 1986. The authors acknowledge support by the Department of Health and Human Services, Social Security Administration under grant number 10-P-98285-5-01.
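The two validation approaches above can be sketched on categorized earnings counts. The counts below are hypothetical, the sampling-theory side is shown only in its unweighted form (a Pearson chi-square test of homogeneity; the paper's Wald statistic with PSID weights and design is more involved), and the Bayesian side uses a flat Dirichlet prior, so every specific here is an assumption.

```python
import random

# Hypothetical counts of individuals per earnings category.
mass_counts = [120, 260, 110, 51]   # MASS II subclass
psid_counts = [170, 380, 160, 67]   # PSID subclass

# Sampling-theory side: Pearson chi-square test of homogeneity
# across the two samples (unweighted case).
n1, n2 = sum(mass_counts), sum(psid_counts)
chi_sq = 0.0
for o1, o2 in zip(mass_counts, psid_counts):
    e1 = n1 * (o1 + o2) / (n1 + n2)   # expected counts if the two
    e2 = n2 * (o1 + o2) / (n1 + n2)   # samples share one distribution
    chi_sq += (o1 - e1) ** 2 / e1 + (o2 - e2) ** 2 / e2

# Bayesian side: with a Dirichlet(1, ..., 1) prior on the category
# probabilities and a multinomial likelihood, the posterior is
# Dirichlet(counts + 1); a draw is obtained via normalized Gammas.
def dirichlet_draw(counts, rng):
    g = [rng.gammavariate(c + 1.0, 1.0) for c in counts]
    s = sum(g)
    return [v / s for v in g]

rng = random.Random(0)
mass_probs = dirichlet_draw(mass_counts, rng)
psid_probs = dirichlet_draw(psid_counts, rng)
```

Repeating the Dirichlet draws gives posterior distributions of the category probabilities for each source, which can then be compared against each other and against the single chi-square verdict.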
Statistics & Probability Letters | 1986
Richard W. Andrews
Consider a finite population which has many auxiliary variables. A statistic, which is a function of the moments of the auxiliary variables, is proposed to measure the balance of a sample. The mean and variance of this statistic are derived.
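The abstract does not give the form of the proposed statistic, only that it is a function of the moments of the auxiliary variables. The sketch below therefore uses a simple stand-in of that general type, the mean squared standardized deviation of sample means from known population means, purely to illustrate what "measuring the balance of a sample" can look like; it is not the paper's statistic, and all data are hypothetical.

```python
import statistics

def balance(sample, pop_means, pop_sds):
    """Illustrative balance score for a sample from a finite population.

    sample    -- list of rows, one auxiliary-variable value per column
    pop_means -- known population mean of each auxiliary variable
    pop_sds   -- known population standard deviation of each variable

    Returns the average squared z-score of the sample means: 0 for a
    perfectly balanced sample, with values around 1 per variable typical
    under simple random sampling.
    """
    k, n = len(pop_means), len(sample)
    score = 0.0
    for j in range(k):
        col_mean = statistics.fmean(row[j] for row in sample)
        z = (col_mean - pop_means[j]) / (pop_sds[j] / n ** 0.5)
        score += z * z
    return score / k

# A sample whose means match the population means exactly scores 0.
sample = [(1.0, 10.0), (2.0, 12.0), (3.0, 14.0), (4.0, 8.0)]
b = balance(sample, pop_means=[2.5, 11.0], pop_sds=[1.0, 2.0])
```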
Winter Simulation Conference | 1979
Thomas J. Schriber; Richard W. Andrews
Archive | 1984
Thomas J. Schriber; Richard W. Andrews
Journal of Applied Econometrics | 1993
Richard W. Andrews; James O. Berger; Murray H. Smith